The Potential and Pitfalls of AI-Assisted Coding

Brad Ediger

July 08, 2024

Updated May 1, 2025

AI-based coding tools, such as GitHub Copilot, are rapidly gaining popularity, and for good reason. These powerful models — trained on vast datasets of code — have an impressive ability to understand, write, and reason about software. In fact, by 2027, experts estimate that 70% of professional developers will use AI coding tools.

AI-assisted coding can accelerate software development, enhance quality, and boost productivity. Developers using Copilot, for example, completed an average of 26% more tasks than they normally would.

Let’s dive into the benefits of AI-assisted coding and explore the potential risks organizations must carefully navigate.

Benefits of AI-Assisted Coding Tools

Implementing AI-assisted coding tools can offer a variety of benefits to the development process, including:

  • Rapid Prototyping: One of the most attractive possibilities of AI code generation is its ability to accelerate prototyping. By quickly producing working models of complex ideas, teams can iterate on their concepts much faster.
  • Summarization and Understanding: Code-aware language models are good at writing and understanding code. AI tooling provides a significant benefit in helping an experienced developer approach an unfamiliar codebase and find relevant context.
  • Naming and API Design: Because LLMs are trained on a large corpus of existing code, they tend to produce the "most popular" response rather than a strictly "correct" one. When designing APIs, this bias toward convention can be an asset: names that match what most developers expect improve usability, reduce cognitive load, and promote a positive developer experience.
  • Writing Documentation: AI coding models can help write and edit technical documents to support good documentation practices. They can generate inline documentation (e.g., Javadoc) that assists others in understanding and working with the code, create documentation drafts from an undocumented codebase, or provide an independent voice to critique designs and documentation already on paper.
  • Automated Testing Support: GitHub promotes Copilot as an "AI pair programmer," meaning that it helps human developers by providing a second viewpoint. For example, it can generate test cases for human-written code to verify that the code functions as intended.
  • Smart Templating: LLMs can perform the mass-customization tasks of traditional templating tools with more awareness of semantics, allowing developers to quickly customize repetitive or similar code. This can help create example data for documentation, testing, demonstrations, or experiments. It can also translate algorithms from one language to another or help write client code.
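To make the testing benefit above concrete, here is a sketch of the kind of unit tests an AI pair programmer might propose for a small human-written function. The `slugify` function and its tests are illustrative inventions, not output from any particular tool.

```python
# A small human-written utility function.
def slugify(title: str) -> str:
    """Convert a title to a URL-friendly slug."""
    cleaned = "".join(ch if ch.isalnum() else " " for ch in title.lower())
    return "-".join(cleaned.split())

# Tests of the kind an AI assistant might generate,
# covering the happy path and a couple of edge cases.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  AI   Assisted  Coding ") == "ai-assisted-coding"

def test_slugify_empty_string():
    assert slugify("") == ""
```

Even when a human ultimately edits or discards some of these cases, the generated suite provides a useful second viewpoint on the code's intended behavior.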

Challenges of AI-Assisted Coding

Although these benefits are tempting, AI-assisted programming also carries risks and potential challenges. While the idea of leveraging technology to reduce manual work and speed up development sounds promising, teams must weigh both technological and legal factors before adopting AI-assisted coding.

Technological Considerations

The industry is still learning the best ways to use AI-assisted coding technology, and many of its strengths and weaknesses are still being discovered. Technological factors to consider include:

  • Training Biases: Bias is a common concern in language models, regardless of application. LLMs have demonstrated every sort of human bias present in society, and detecting and mitigating these biases is an ongoing and vital area of research.
  • Exposure to Vulnerabilities: AI coding tools can expose code to vulnerabilities, such as outdated libraries, insecure defaults, or poor patterns sneaking into the codebase. Attackers can exploit these vulnerabilities to gain access to sensitive data and systems.
  • Discursive Gaps: Although large models have some ability to reason, their understanding is far from comprehensive. They are likely to perpetuate or reinforce gaps in the user's knowledge of what is being built rather than to challenge them directly. One mitigation strategy is to iteratively refine prompts that challenge a response, driving the model toward a better solution.
  • Model Collapse: Model collapse refers to the risk that, once AI-derived output is pervasively deployed and published, subsequent models are trained on data that includes previous models' output. This can degrade those models' performance, as they may perpetuate the biases and misperceptions of their predecessors, and it can also contaminate evaluation and benchmarking. To help detect and filter this contamination, benchmark authors embed canaries: unique strings not found in ordinary internet text, included in all benchmark datasets and outputs, so that training pipelines can exclude any document containing them.
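The canary mechanism described above can be sketched in a few lines. The canary string and document contents here are hypothetical; real benchmarks publish their own unique GUID-like markers for exactly this purpose.

```python
# Hypothetical canary string; real benchmarks publish their own.
CANARY = "BENCHMARK-CANARY-26b5c67b-hypothetical"

def filter_canaried(documents: list[str]) -> list[str]:
    """Drop any document containing a known canary string,
    so benchmark data does not leak into a training corpus."""
    return [doc for doc in documents if CANARY not in doc]

corpus = [
    "ordinary web text about sorting algorithms",
    f"benchmark item 17 {CANARY} do not train on this",
]
clean = filter_canaried(corpus)  # keeps only the first document
```

Because the canary appears nowhere else on the internet, its presence in a scraped document is strong evidence that the document originated from a benchmark and should be excluded.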

Developers must be aware of these technological considerations before deciding to implement AI coding tools. However, these are not the only factors developers must keep in mind.

Product and Legal Considerations

In addition to technological considerations, AI-assisted coding has several legal concerns that may influence important decisions about a product's roadmap. These issues include:

  • Confidentiality: Businesses often protect their custom software with intellectual property mechanisms such as copyrights, patents, and trade secrets, the last of which depends on keeping the code confidential. Violating this confidentiality can lead to serious legal consequences. To mitigate some of this risk, developers can use purely local models, keeping sensitive data in-house.
  • Intellectual Property Ownership: The two areas of intellectual property law most affected by AI coding are copyrights and trade secrets. The software industry grew up in a business environment profoundly shaped by copyright law, and today copyright automatically protects most works created by humans worldwide, including in the US, the UK, and the other signatories of the Berne Convention (181 countries in total). In these jurisdictions, human-authored software is "born copyrighted." AI-generated code, by contrast, generally qualifies for copyright protection only to the extent a human author shapes it — for example, by substantially prompting and refining it or by combining it with human-written code.
  • Third-Party Infringement: LLMs are trained on public datasets and produce probabilistic, compositional outputs shaped by prompt structure and context. They rarely reproduce code that mirrors their training data verbatim, but the risk is more pronounced in inline completion tools such as GitHub Copilot, and it stems from interface design and usage patterns as much as from the underlying model. Developers must understand these differences and apply appropriate review practices. GitHub Copilot, for example, offers duplication-detection scanning and a Copyright Commitment that provide technical and legal protections against this risk.
  • Provenance and Control: The need for software provenance predates generative AI. Most software organizations above a certain size must adopt structured techniques to track the sources of code, assets, libraries, and other artifacts that feed into their products. Provenance strategies include repository-level attributes, which can gate AI tooling, and code fencing, which manually delimits AI-generated code by marking it with special comments at the function or block level, or for an entire file at a time.
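The code-fencing idea above can be illustrated with a small scanner. The begin/end marker comments and the function name are a hypothetical convention, not an established standard; real tooling would choose its own markers.

```python
# Hypothetical fencing convention: AI-generated regions are delimited
# by begin/end marker comments so provenance tooling can locate them.
BEGIN = "# AI-GENERATED: BEGIN"
END = "# AI-GENERATED: END"

def ai_generated_spans(source: str) -> list[tuple[int, int]]:
    """Return (start_line, end_line) pairs, 1-indexed, for each
    fenced AI-generated region in a source file."""
    spans, start = [], None
    for lineno, line in enumerate(source.splitlines(), start=1):
        if BEGIN in line:
            start = lineno
        elif END in line and start is not None:
            spans.append((start, lineno))
            start = None
    return spans

sample = "\n".join([
    "def human_written(): ...",
    "# AI-GENERATED: BEGIN",
    "def drafted_by_model(): ...",
    "# AI-GENERATED: END",
])
```

A scanner like this lets review tooling route fenced regions to stricter review, license scanning, or audit logging without relying on developers' memory of which code came from a model.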

As laws around AI continue to evolve, developers must be mindful of these legal concerns, and ensure that their software is compliant with these laws.

Code Smarter, Not Harder

As someone at the forefront of emerging technologies for over two decades, I'm incredibly excited about AI's possibilities for our industry. By pairing these tools with experienced technologists who understand both their benefits and their limitations, we're just starting to scratch the surface of what they make possible.

Worried your team may be falling behind? Download our AI-assisted coding pulse report to stay ahead of the curve!

Brad Ediger

Executive Consultant

A staple in the Chicago tech scene since 2005, Brad Ediger is an Executive Consultant who supports clients with his technical expertise. He joined 8th Light in 2019 when he merged his independent consultancy with the company.