If AI were to replace software developers
I’d argue that the way a lot of people approach “AI” in the present trend (not to be confused with many other kinds of “AI”), using a lot of tools while doing a lot less research, means we need more and better Computer Science to understand these things. If somebody decides to put their trust in such systems, including unreliable ones, those systems will most certainly need oversight in some capacity, and maintenance [from outside the scope of the program].
Personally, I believe you would not want an “AI” [especially an unreliable agent] to replace such a role; you still need oversight. You are still going to need those creators. For example, algorithms researchers like me enjoy when programs are “dumb” [they lack an “intelligence”, in the sense proposed by the particular thread of “AI” that has people asking questions like these, these days]: the programs do not behave in unexpected ways, and with exact precision I can tell you what to expect to happen.
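To illustrate what I mean by a “dumb” program, here’s a minimal sketch of my own (the function and data are just an example, not anything from a specific system): a classic deterministic algorithm whose behavior can be stated exactly, with a proven bound, before it ever runs.

```python
# A "dumb" program in the sense above: fully deterministic, so its
# behavior is exactly predictable on every run, with a provable
# O(log n) bound on the number of comparisons.
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # always prints 3
print(binary_search([1, 3, 5, 7, 9], 4))  # always prints -1
```

Nothing about this depends on luck or on a model’s mood that day: same input, same output, every time. That guarantee is exactly what the theory lets me promise, and what current “AI” agents cannot.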
If I didn’t have the theory from Computer Science, I’d have a much harder time doing that. Until these technologies can fill the role of a “dumb” program [one you can trust nearly 100%, or ideally 100%, of the time], you’re still going to need humans. Even then, such systems still have humans monitoring and watching them when deployed, so I don’t think “AI” of this kind would be much different [only how that monitoring is done might be]. Finally, software itself is a very important technology in such a system, and in line with my previous point, its need to be reliable is not something to downplay.