With the release of OpenAI's GPT-3, the topic of AI writing code is hot again. GPT-3 is a natural language model that, given some examples, tries to predict appropriate text output for a given text input. Since code is just text, there's the question of whether such a model can be used to produce code based only on a description of what the code should do. And, amazing world that we live in, there are already some early tech demos of exactly that.
A common off-the-cuff reaction to such demos is that this sort of technology will inevitably make Software Engineers obsolete, because if an AI can produce more or less what they produce, then seemingly we won't need to pay them to do it anymore.
The reasoning for these kinds of claims is understandable. If your premise is that a Software Engineer's job is to take a description of what some software should do, and write code to implement that, then it makes sense to view technology like this as a drop-in replacement.
But that's not right. What a Software Engineer really does is build software solutions that solve problems appropriately. That seems like a subtle distinction, but it's a really important one. Code is not the solution; it's the implementation of a solution. The solution certainly requires that code be written, hence the need for the ability to write code. But going the other direction, the ability to write code does not imply the ability to create appropriate solutions to problems.
I think that because writing code is an intrinsically difficult skill, one that is opaque and unfamiliar to outsiders, it appears that the hard part of building software is in fact writing code. But it really isn't, and that's the point. The hard part is solving the problem appropriately. Implementing that solution in code once you have it is comparatively easy.
This is counter-intuitive, and I don't think it's something that can be fully appreciated without having gone through the process of building software yourself. Without this insight, it does indeed seem like an AI writing code opens up the possibility of generalists building software, just by describing their ideas. But that's no more true than saying that hiring a skilled ghost writer opens up the possibility for anybody with a story idea to write a great novel.
So, I understand the basis of these claims from an outsider's perspective. They are reasonable given the premises they're based on. What does surprise me a bit, though, are the rebuttals that commonly come up from insiders - Software Engineers themselves.
There are two main forms. The first is "this will never work": pointing out all the technical difficulties of such a thing in practice. The other is "it's all just code anyway": the notion that the natural language description of any software past a certain point of complexity would have to be structured and formalised to the extent that it basically becomes code.
I find these rebuttals surprising not because they're necessarily wrong. They're actually very practical - after all, the thing has to basically work before the rest of the conversation is relevant, and we're still a very long way from that. I find them surprising because they meet the claims at their own level, implicitly accepting their premise. They say "this is wrong because AI won't be able to meaningfully write code" rather than "this is wrong because when AI can meaningfully write code, far from making Software Engineers obsolete, it will help them".
The reason I'd expect more Software Engineers to hold the latter view is because there's really nothing new here. I guess using AI to produce code looks like some giant leap at first glance, but it's really more of a linear progression of something that's been happening in the software industry since day one - using software to automate writing software. Tools and frameworks that improve efficiency in building software by automating more and more of the heavy lifting are something that every Software Engineer is familiar with, and it's understood that they're unequivocally a good thing. Why should an AI writing code be any different?
Furthermore, it's well established at this point that such tools do not lead to less need for Software Engineers, because there is no fixed ceiling on the demand for software. Rather, they allow more and better software to be built faster, opening up opportunities to apply software solutions where they weren't previously economical. That grows the software industry and actually increases the value of the skillset needed to build software, which, again, is not the same thing as the ability to write code.
Software Engineers should welcome every advancement that brings forward AI tools that assist in writing larger and more meaningful parts of their code. This isn't something to resist, but something to push forward, because far from making Software Engineers obsolete, these tools will further leverage the skills they have that are most valuable.
And come on, who likes typing anyway?