June 19, 2023 (updated 26 January 2024, 3:12pm)

Your AI coding assistant is a hot mess

Devs love GitHub Copilot, and Big Tech rivals are rushing to release their own coding assistants. But the mistakes keep piling up.

By Stephanie Stacey

Joe Reeve, an engineering manager at digital analytics platform Amplitude, used AI coding assistant GitHub Copilot to produce a segment of one of his recent coding projects. It was a fairly simple, if time-consuming, function — one that Reeve had written by himself plenty of times before. “It saved me 25 minutes of writing,” he recalls. “About two hours later, I hit a bug in the code. It took me another two to three hours to figure out what the issue was.”

The culprit? The AI tool had made a tiny but significant mistake in the code by switching the direction of a single ‘greater-than’ sign. “Since then, I’ve been much more sceptical of the code that it generates,” says Reeve – but he doesn’t want to quit using it just yet. “It’s a very, very powerful tool.” 
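Reeve hasn’t shared the offending code, so the minimal Python sketch below is purely hypothetical, with invented function and field names. It illustrates how easily a reversed comparison slips past review: both versions run without error, yet one quietly returns the wrong records.

```python
# Hypothetical example: a reversed '>' sign compiles and runs fine,
# but silently returns the wrong data.

def overdue_invoices(invoices, today):
    """Return invoices whose due date has already passed."""
    # Correct: an invoice is overdue when its due date is BEFORE today.
    return [inv for inv in invoices if inv["due_date"] < today]

def overdue_invoices_buggy(invoices, today):
    # The kind of subtle slip an autocomplete suggestion might introduce:
    # the comparison points the wrong way, so the function returns
    # invoices that are NOT yet due. No error is raised, just wrong results.
    return [inv for inv in invoices if inv["due_date"] > today]
```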

GitHub Copilot, Reeve’s AI tool of choice, was launched for general use in June 2022 at a monthly cost of $10 per individual user. It quickly became one of the most widely used coding assistants, offering autocomplete-style suggestions to curious or overworked developers. Competitors quickly emerged, including Baidu’s Comate and Amazon’s CodeWhisperer, which was made free to individual users in April — undercutting the $10 price tag of GitHub Copilot. And, of course, there’s the biggest name in the AI business: OpenAI’s free-to-use ChatGPT, which sparked immense interest when it was launched last November. Although ChatGPT was designed primarily for natural language processing tasks, it’s proven pretty effective at writing and debugging code, even if it sometimes displays an alarming proclivity for hallucinations.

Some developers say these tools are so helpful they’ll soon become mandatory. Others aren’t so sure. Indeed, some of the world’s biggest companies remain so nervous about implementing largely untested AI that they currently prohibit access to the tools. Samsung, Amazon and Verizon have completely barred the use of ChatGPT, citing security concerns. Apple, too, has restricted the use of both ChatGPT and GitHub Copilot over fears of data leaks — especially as it works to develop its own, rival coding assistants.

These fears, however, don’t seem to have stymied the tools’ rapid growth. In an earnings call in January, Satya Nadella, CEO of GitHub’s parent company Microsoft, said that GitHub Copilot had already surpassed one million users. In a recent study by consulting firm Bain & Company, 57% of surveyed software CTOs and engineering leaders said they were actively rolling out their own AI coding assistant. They cited increased speed, quality improvements, and lower costs as the tools’ primary benefits.

Thomas Dohmke, CEO of GitHub, the company behind Copilot, speaks at Web Summit Rio 2023 in Rio de Janeiro, Brazil. (Photo By Vaughn Ridley/Sportsfile/Getty Images)

Who’s using an AI coding assistant?

Almost all of the software developers who spoke to Tech Monitor are primarily using GitHub Copilot, above other AI assistants, to support their work — a fact that reflects the tool’s clear market dominance and branding advantage.

Most people at Amplitude have access to GitHub Copilot, explains Reeve. The company recently started providing paid subscriptions to the software, but a lot of its developers were already using their own accounts beforehand. “Engineers just started using [it] because it made their lives significantly easier,” even though they “have to treat it with a level of distrust,” says Reeve. His team have found AI assistants to be particularly useful for reviewing code, which can often be a frustrating and time-consuming task. “This is where tools like ChatGPT can solve existing challenges — by helping engineers quickly understand old systems and code,” says Reeve.


Mohanjith Sudirikku Hannadige, CTO at Finnish aqua-fitness startup Hydrohex, is another fan of Copilot. “It frees up developers from mundane tasks” and “makes work more enjoyable,” he says. Although human oversight remains essential for correcting the tool’s occasional mishaps, Hannadige estimates that Hydrohex’s engineers now complete their coding tasks twice as fast as they did before adopting the assistant in March. 

Christian Desrosiers, co-founder of AI start-up Visceral, says his team has also begun using AI tools like ChatGPT and GitHub Copilot, as well as building a specialist in-house AI coding assistant. “We found the biggest immediate productivity gains when writing boilerplate code for stand-alone app components – for example, those that do things like interact with APIs,” he says.
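Visceral’s code isn’t public, but a hedged sketch of the kind of boilerplate Desrosiers describes, a small stand-alone component that talks to an HTTP API, shows why assistants handle it well: the structure is predictable and repeated across projects. The endpoint, class and parameter names below are assumptions, not Visceral’s.

```python
# Illustrative boilerplate only: a thin wrapper around a hypothetical
# JSON API, the sort of repetitive component an assistant can draft quickly.
import requests  # assumes the third-party 'requests' package is installed


class StatusClient:
    def __init__(self, base_url: str, token: str, timeout: float = 10.0):
        self.base_url = base_url.rstrip("/")
        self.timeout = timeout
        self.headers = {"Authorization": f"Bearer {token}"}

    def get_status(self, job_id: str) -> dict:
        """Fetch one job's status and raise on HTTP errors."""
        resp = requests.get(
            f"{self.base_url}/jobs/{job_id}",
            headers=self.headers,
            timeout=self.timeout,
        )
        resp.raise_for_status()
        return resp.json()
```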

Meanwhile, Perforce Software’s CTO Rod Cope says he always uses GitHub Copilot when producing his own code. “I’m starting to think of it like a remote pair-programmer,” he says. “They can kind of look over your shoulder and go: ‘Oh, what about that?’” The suggestions might not always be wholly accurate, he says, but they’re increasingly useful as jumping-off points — helping to eliminate the dread-inducing sight of a blank screen.

There are, however, some notable limitations. While there’s plenty of training data available for the most popular coding languages, like R and Python, AI tools can struggle with more niche languages. They also might not be particularly useful for more ambitious projects. “These models are trained on code that already exists,” says Reeve, “meaning the more novel or specific your use case, the less useful they’ll be.”

Risky business

As Reeve’s wasted hours of bug-hunting attest, the average AI coding assistant certainly isn’t foolproof. They’re often trained on open-source code, which frequently contains bugs – mistakes that the assistant is prone to replicating. They’re also notoriously prone to wild delusions, a fact, says Desrosiers, that cybercriminals can use to their advantage. Indeed, an AI coding assistant is still liable to occasionally make up the existence of entire coding libraries. “Malicious actors can detect these hallucinations and launch malicious libraries with these names,” he says, “putting at risk people who let these hallucinated libraries execute in their production environment.”

Careful oversight, says Desrosiers, is the only solution. That, too, can be facilitated by an AI coding assistant. “To de-risk this and other potential issues [at Visceral], we build single-purpose autonomous coding assistants to monitor for such threats,” says Desrosiers.
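Visceral’s in-house monitors aren’t public, but one simple guardrail in that spirit is to check that every dependency an assistant suggests actually exists on the public package registry before anyone installs it. The sketch below uses PyPI’s public JSON API; the package names are purely illustrative.

```python
# Sketch of a dependency sanity check: confirm a suggested package really
# exists on PyPI before anyone runs 'pip install' on it. A hallucinated
# name that returns 404 today could be registered by an attacker tomorrow.
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # PyPI returns 404 for packages that don't exist


for name in ["requests", "totally-hallucinated-sdk"]:  # illustrative names
    print(name, "exists" if exists_on_pypi(name) else "NOT FOUND - investigate")
```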

David Mertz says it’s always important not to be too trusting. “From a security perspective, you just can’t trust code,” says the author and long-time Python programmer. But while Mertz agrees that constant oversight is essential when using AI, he argues that this, in practice, is little different from hiring a junior programmer.

“There’s a […] difference in the kind of mistakes that inexperienced programmers make versus those that machines make, but they both make mistakes,” he says. Some organisations will always put themselves — and their clients — at risk by performing insufficient or inadequate safety checks, “but that’s not a new danger introduced by machines”.

Perhaps the biggest risk, then, is simply misplaced faith in AI. Indeed, a Stanford University study published in December 2022 found that AI tools can leave developers “deluded” about the quality of their work. Researchers found that participants with access to an AI coding assistant often produced more security vulnerabilities than those without access, yet were simultaneously more likely to believe that they’d written secure code.

James Hodson, CTO of TechAid, echoes this concern. Relying on an AI coding assistant, he argues, “encourages less oversight of the engineering process, and a lower level of skilled human engagement, which ultimately leads to more security vulnerabilities, harder-to-maintain codebases, and a dilution of the human-capital skills base.” These flaws, he says, are inherent to the nature of LLMs like ChatGPT and GitHub Copilot. “Software engineering, to ensure high quality, maintainability, and long-term fit for purpose, is an engineering process — not a linguistic generation process.”

An AI coding assistant like ChatGPT or GitHub Copilot can save a developer time, but only if it is used carefully. Otherwise, it can create a host of challenges. (Photo by DC Studio/Shutterstock)

Is coding dead?

So, software developers probably aren’t out of a job — at least not yet. “It’s not a panacea and it’s not something that’s going to replace effective programmers,” says Mertz. “It just may be something that makes us more productive.”

Indeed, future developers will still need to have a firm grasp of coding in order to make the most of these tools — even if they improve dramatically. “If you don’t know how to code, the code that the AI assistants generate for you will always look right,” says Cope. This, he adds, means you probably won’t immediately notice nasty bugs that’ll be much tougher to tackle further down the line.  

Even so, AI coding assistants like Copilot and ChatGPT might ultimately make a developer’s job more satisfying. “Some of them will be very resistant because it’ll feel like it’s taking away what’s special about what they’ve learned,” says Cope. “But I think, for the vast majority of developers, the minutiae is just tedious overhead.”

Reeve is equally optimistic about the future of software engineering. “I think what’s considered coding is just going to change,” he says. “It used to be that coding was punching holes in bits of cardboard and feeding them through a machine […] Now, really, a lot of the software engineering that we do is thinking about names and structuring code and moving code around.” 

The rise of the AI code assistant, Reeve believes, could further elevate the craft. “Hopefully it means that, as humans, we’ll focus on more of the cutting-edge things,” he says, “because all the other things are going to become much easier.”

Read more: This is how GPT-4 will be regulated
