Introduction
Software companies are pushing their developers more and more to use LLMs in order to increase productivity. The argument is that LLMs let developers write more code in less time. Whether it's automating processes like requirement gathering or pull request reviews, creating code like prototypes, unit tests, and new features, or refactoring old code into newer, more efficient code, there's a point to be made: yes, AI can help with that. There's long been a joke that Google searches are a developer's partner when it comes to coding. LLMs take it one step further and actually write the code you need, based on the code they've been trained on from the internet. But while LLMs can help developers produce more code, faster, what are the downsides? In this article I'm not talking about the technical downsides, or about using AI agents to automate processes, but about the psychological downsides to the workforce, and how that affects the code being generated.
Motivation
I've heard from some of my colleagues that they really like using LLMs because they "hate coding." I find that quite interesting, since being a developer means you're going to be coding a lot. Why go into a career whose main purpose is something you "hate"? If you asked a career counselor in high school which careers pay a lot of money, one of the top answers would be "software engineer." Many of those kids went that route for that answer alone. They had no real interest in computers or coding; it was just the default job to get into if you wanted to make a lot of money. These engineers will hail AI as their savior, and if you truly don't like coding, that makes sense.
Then there are the other people who got into software development: nerds. There's a group of people who were into computers as a hobby, and the natural transition was to carry that obsession into a career. These are the people who actually enjoy coding. Making someone who enjoys coding use an LLM to code for them is like asking a mechanic who enjoys working on cars to control a robot that does it for them. What motivation does a software engineer who enjoys coding have now that you've taken that away? Why continue to learn new languages? Why stay in the industry at all?
Ownership/Pride
If you’re a software engineer that enjoys coding, you have a sense of pride in what you create. You took on a challenge, designed a solution, and executed that solution. That’s something to be proud of. It’s also something you take responsibility for. You designed it. You wrote the code. It’s yours. If the code is good, you take credit for that. If the code ends up having problems, you have to take credit for that too and take it upon yourself to fix it. In a nutshell, because you have pride in something, you claim ownership of it, and if you own it, you want it to be good.
In the case of using AI, you tell the LLM what you want, how you want it implemented, and ask it to generate the code. What if the code created by the LLM works but is poorly optimized? What if there are security risks? What if the data model it created is bad? Would a human have the same amount of pride in something they didn't actually create? I would argue no; it's just human nature. A developer in this situation SHOULD take as much care as one who wrote the code from beginning to end, but I don't think that's realistic. The feeling of ownership isn't there. If the level of pride is less, then the level of care is also less. In the end, less pride yields lesser code.
Skill
What happens to a software engineer's coding skills the more they use AI? Unlike riding a bike, coding is a "use it or lose it" skill. Much like learning a new language, if you don't speak it often, you get out of practice and start forgetting what you've learned. Using AI to generate code lets that "brain muscle" atrophy over time. In fact, Anthropic, one of the leaders in AI development, recently published a study showing that a group of software engineers who used AI over a period of time performed worse on a skills test than the engineers who didn't. The article about the study can be found HERE. Use it, or lose it. What does this predict for the future of software engineering? Will there be a mass of AI-generated code that, after years of changes, no one knows how to troubleshoot? Will companies be locked into paying for AI forever because there's no one around who can write decent code from scratch anymore? These are probably exaggerated consequences, but they are something corporate leadership will have to deal with. This also leads into the next topic: the workforce.
Workforce
Early in the push to use AI for coding, there was concern that AI would replace junior engineers. If the executives of software firms were to think logically and see the big picture, they'd realize that if they're not training and mentoring their junior engineers, they're not investing in their future. Junior engineers become senior engineers, and you'll need that expertise to maintain your code down the road. However, that assumes executives are thinking logically and looking past their upcoming quarterly market goals. While using AI as the scapegoat for layoffs is currently rampant in the industry, there's still reason to be concerned that relying on AI now means relying on it even more in the future. If using AI does negatively affect the skills of software engineers, and that trend is allowed to reach its inevitable conclusion, you'll end up with code beholden to AI, maintained by engineers beholden to AI. Is that what these corporations want? Their intellectual property tied to the services of an AI company?
Mental Paradigm
I've come to realize that, for me, using an LLM to write code is more mentally tiring than writing the code myself. When you're working with an LLM to write code, you're essentially "chatting" with "someone" all day. We interact with LLMs using somewhat normal speech. You're convincing an entity to write the code you want. The thought process is totally different. When you're writing the code yourself, there's no conversation; it's just logic. That's a different part of your brain. After a day of coding with AI, I feel like I've been in a day-long meeting, because at its most basic level, I was.
Conclusion
Using LLMs to assist with software development is here to stay. It IS useful. It does help developers create more code at greater speed. It also allows developers to create things they might not have been able to create without it. But like most things in life, AI is a double-edged sword. The good needs to be balanced with the bad, and I hope the challenges I discussed here are things that we as an industry can avoid or find solutions to.