How To Secure AI Generated Code In 6 Steps

You can secure AI generated code by performing static application security testing, dynamic application security testing, and software composition analysis, and by adopting secure coding practices, implementing security controls, and training developers on security best practices.

As artificial intelligence (AI) continues to play an increasingly significant role in software development, it is crucial to prioritize the security of AI generated code.

Learn More: AI Security Trends For 2023

Like code written by humans, AI generated code can introduce security risks that must be identified and mitigated.

To tackle this, developers can utilize a range of Application Security (AppSec) testing methods.

These methods facilitate a comprehensive evaluation of AI generated code, uncovering errors, vulnerabilities, and hidden issues.

By implementing these methods, developers can bolster the robustness and security of AI generated code.

A recent study by Stanford University researchers revealed that software engineers using AI systems for code generation are more prone to introducing security vulnerabilities in the apps they develop.

The study, which involved 47 developers from diverse backgrounds using Codex, an AI code-generating system by OpenAI, highlighted the potential risks of code-generating systems.

Disturbingly, the study found that participants using Codex were not only more likely to write incorrect and insecure solutions to programming problems compared to a control group, but they were also more likely to misjudge their insecure solutions as secure.

These findings underscore the critical need for rigorous review and testing of AI generated code to identify and address potential vulnerabilities.

Can AI Write Secure Code?

While AI’s evolving sophistication allows it to generate code, the security of this code hinges on several factors.

The AI’s training data is pivotal—if trained on secure coding practices, it can potentially generate secure code, but if exposed to insecure code examples, it may inadvertently introduce vulnerabilities.

The complexity of AI-generated code also affects its security: simpler, cleaner code is easier to review and verify as secure.

The AI’s capacity to learn and adapt is crucial, with systems that can improve over time potentially becoming better at writing secure code.

However, regardless of AI’s capabilities, human oversight remains indispensable, with developers needing to review AI-generated code, conduct security testing, and apply their expertise, especially in the face of rapidly evolving cybersecurity threats.

What Are The Security Implications Of AI Coding?

The integration of AI into coding also introduces several security implications that developers and organizations need to be aware of.

Learn More: How AI Will Impact The Future Of Cybersecurity

One of the primary concerns is the introduction of new vulnerabilities. AI systems, like humans, can make mistakes, and if trained on insecure coding practices, they may inadvertently introduce security vulnerabilities.

According to a study from Cornell University, AI assistants are helping developers produce more buggy code.

The complexity of AI-generated code, which can sometimes be difficult to understand and review, further compounds this issue.

Another significant implication is the potential lack of human oversight. 

Developers may become overly reliant on AI for code generation, leading to less thorough code reviews and potential oversight of security vulnerabilities.

This is particularly concerning given the rapid rate at which AI can generate code.

Here are some additional security implications of AI coding:

  • Inadequate Security Testing: Traditional security testing methods may not be sufficient for AI-generated code, as they may not account for the unique vulnerabilities that AI can introduce. AI-generated code may require specialized security testing tools and methodologies, which may not be readily available or widely adopted.
  • Increased Attack Surface: As AI becomes more widely used in software development, it increases the overall attack surface for potential cyber threats. AI-generated code, due to its increasing prevalence and potential vulnerabilities, can become an attractive target for attackers.
  • Dependency on Third-Party Libraries and Components: AI systems may use third-party libraries and components in the code they generate, which can introduce security vulnerabilities if these libraries are insecure. Tracking the dependencies in AI-generated code can be challenging, making it harder to identify and address potential security risks.

Can AI Code Assistants Be Trained With Security In Mind?

Yes, AI code assistants can be trained with a focus on security.
This involves incorporating secure coding practices into the AI’s training data, ensuring the AI learns to generate code that adheres to these practices.

For instance, the AI can be trained to avoid common security pitfalls and to recognize and flag potential security vulnerabilities in the code it generates or reviews.

This could include:

  1. Identifying patterns of code commonly associated with security vulnerabilities.
  2. Recognizing the use of outdated libraries known to contain security risks.
  3. Avoiding the use of insecure functions or improper handling of user input (a short example follows this list).
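For instance, one pattern a security-aware code assistant (or a human reviewer) should learn to flag is user input concatenated directly into a SQL query. The sketch below is purely illustrative; the table, columns, and function names are assumptions made for the example.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Anti-pattern: user input is concatenated directly into the SQL string,
    # allowing injection such as username = "x' OR '1'='1".
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Preferred pattern: a parameterized query lets the driver handle escaping.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

A well-trained assistant should avoid generating the first form and, ideally, flag it when it appears in code under review.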

However, training AI code assistants with security in mind is only part of the solution.

It’s crucial to establish a robust review process where human developers verify the security of the AI-generated code, catching any potential vulnerabilities that the AI might miss before the code is deployed.

In addition, organizations should consider implementing a continuous learning model for their AI code assistants, allowing the AI to improve its ability to generate secure code as it is exposed to more examples and receives feedback over time.

Learn More: AI Vs ML In Cybersecurity

Securing AI Generated Code In 6 Steps

1. Static application security testing (SAST)

SAST tools prove invaluable in scrutinizing code for potential security vulnerabilities without executing it. 

This form of testing is particularly effective when vetting AI-generated code since it uncovers issues that may not be apparent during conventional testing methods. 

By analyzing the code’s structure, logic, and syntax, SAST tools help identify vulnerabilities and weaknesses in AI-generated code.
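As a rough illustration of what SAST does, the sketch below uses Python's standard ast module to flag calls to eval() and exec() without ever running the scanned file. The rule set is an assumption made for the example; production SAST tools apply far broader and deeper analysis.

```python
import ast
import sys

RISKY_CALLS = {"eval", "exec"}  # illustrative rule set, not a complete policy

def scan_source(path: str) -> list[str]:
    """Statically flag direct calls to risky built-ins in a Python file."""
    findings = []
    source = open(path, encoding="utf-8").read()
    tree = ast.parse(source, filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"{path}:{node.lineno}: call to {node.func.id}()")
    return findings

if __name__ == "__main__":
    for finding in scan_source(sys.argv[1]):
        print(finding)
```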

2. Dynamic application security testing (DAST)

DAST tools simulate real-world attacks on an application to evaluate its vulnerability. 

When it comes to AI-generated code, DAST testing complements SAST by discovering vulnerabilities that may remain hidden during static analysis. 

By emulating actual attack scenarios, DAST tools provide an additional layer of scrutiny, enhancing the security assessment of AI-generated code.
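As a simplified sketch of the DAST idea, the example below sends a benign probe to a running application and checks whether the input is reflected back unescaped, a common indicator of reflected cross-site scripting. The URL, parameter name, and payload are assumptions for illustration, and probes like this should only be run against systems you are authorized to test.

```python
import requests  # third-party: pip install requests

# Hypothetical target and parameter; replace with an endpoint you are authorized to test.
TARGET = "https://staging.example.com/search"
PAYLOAD = "<script>alert('dast-probe')</script>"

def probe_reflected_xss() -> bool:
    """Return True if the test payload comes back unescaped in the response body."""
    response = requests.get(TARGET, params={"q": PAYLOAD}, timeout=10)
    return PAYLOAD in response.text

if __name__ == "__main__":
    if probe_reflected_xss():
        print("Possible reflected XSS: payload echoed without encoding")
    else:
        print("Payload not reflected unescaped (inconclusive, not proof of safety)")
```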

3. Software composition analysis (SCA)

SCA tools are instrumental in scanning an application to detect third-party libraries and components. 

They play a vital role in assessing AI-generated code, particularly in identifying vulnerabilities stemming from the use of insecure third-party libraries.

By comprehensively examining the dependencies and associated risks, SCA tools enable developers to mitigate security issues arising from AI-generated code.
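Dedicated SCA tools normally handle this, but the sketch below shows the core idea for a Python project: enumerate the declared dependencies and compare them against an advisory list. The advisory entry here is a made-up placeholder, not real vulnerability data.

```python
# Illustrative-only advisory data; real SCA tools pull from curated vulnerability databases.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-2023-0001: remote code execution (hypothetical)",
}

def parse_requirements(path: str = "requirements.txt") -> list[tuple[str, str]]:
    """Read pinned 'name==version' entries; unpinned entries are flagged separately."""
    pins = []
    for line in open(path, encoding="utf-8"):
        line = line.split("#")[0].strip()
        if not line:
            continue
        if "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.strip().lower(), version.strip()))
        else:
            print(f"warning: unpinned dependency '{line}' cannot be audited reliably")
    return pins

def audit(pins: list[tuple[str, str]]) -> None:
    for name, version in pins:
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            print(f"{name}=={version}: {advisory}")

if __name__ == "__main__":
    audit(parse_requirements())
```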


In conjunction with these AppSec testing methods, implementing other security practices during the development of AI-generated code is essential. Here are some key practices to consider:

4. Using secure coding practices

Developers must adopt secure coding practices when crafting AI-generated code. 

This encompasses various measures such as:

  • Employing strong passwords
  • Avoiding hardcoded passwords
  • Implementing robust input data sanitization techniques (illustrated in the sketch after this list)
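As a brief, hedged illustration of the last two points, the sketch below loads a credential from the environment instead of hardcoding it and validates user input against an allowlist. The environment variable name and validation rule are assumptions chosen for the example.

```python
import os
import re

def get_database_password() -> str:
    # Pull the secret from the environment (or a secrets manager) instead of
    # hardcoding it in source, where it would end up in version control.
    password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if not password:
        raise RuntimeError("DB_PASSWORD is not set")
    return password

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # example allowlist rule

def sanitize_username(raw: str) -> str:
    # Reject anything outside the allowlist rather than trying to strip bad characters.
    if not USERNAME_PATTERN.fullmatch(raw):
        raise ValueError("invalid username")
    return raw
```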

It is essential to review ALL code, human and AI generated alike, for adherence to secure coding practices.

Make sure your team includes an expert on application security.

5. Implementing security controls

Integrating security controls into the application development process is crucial. 

These controls may include rigorous code reviews, comprehensive unit testing, and thorough integration testing.

Such measures ensure that vulnerabilities within the AI-generated code are identified and addressed at different stages of development, bolstering the overall security posture.
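Security controls can also be encoded as automated checks that run on every build. The pytest sketch below assumes a sanitize_username helper like the one shown in step 4 (the module path is hypothetical) and asserts that obviously malicious input is rejected.

```python
import pytest  # third-party: pip install pytest

from myapp.validation import sanitize_username  # hypothetical module path

@pytest.mark.parametrize("malicious", [
    "admin'--",                      # SQL injection attempt
    "<script>alert(1)</script>",     # HTML/JS payload
    "../../etc/passwd",              # path traversal attempt
])
def test_malicious_usernames_are_rejected(malicious):
    with pytest.raises(ValueError):
        sanitize_username(malicious)

def test_normal_username_is_accepted():
    assert sanitize_username("alice_01") == "alice_01"
```

Wiring tests like these into the CI pipeline means a regression in input handling fails the build before insecure code reaches production.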

6. Training developers on security

Today, companies must train developers in security best practices, equipping them with the knowledge and skills to identify and report security vulnerabilities effectively. 

By fostering a security-conscious mindset among developers, organizations can instill a proactive approach to safeguarding AI-generated code. 

These efforts pay off in a smaller attack surface, less administrative overhead spent securing the product, and lower risk of attack through code for both your organization and your customers.

In addition to these AppSec testing methods and security practices, there are other considerations that must be taken into account when evaluating the security of AI-generated code.

  • The complexity of the code: The complexity of AI-generated code can directly impact its security. Complex codebases are more likely to contain security vulnerabilities, necessitating thorough testing and assessment to uncover potential risks. Complex code also takes more time and expertise to review.
  • The type of application: Different types of applications may exhibit varying levels of vulnerability to security attacks. For instance, web applications, due to their public accessibility, often attract more attention from attackers. Consequently, scrutinizing AI-generated code for security flaws becomes even more crucial in the case of web applications.
  • The environment in which the code will be used: The intended usage environment for the code can significantly influence its security requirements. AI-generated code destined for a production environment demands more rigorous testing and scrutiny compared to code that will solely be employed for development or testing purposes.

What if I don’t control the code?

When faced with software that lacks robust AppSec development, it is essential to implement additional security measures to protect your business and mitigate potential risks.

Consider the following steps to secure your business:

  • Conduct a Security Assessment: Perform a comprehensive security assessment of your software applications, including vulnerability scanning, penetration testing, and code review (a small example check follows this list).
  • Patch and Update: Stay up-to-date with the latest security patches and updates for your software to address known vulnerabilities.
  • Implement Web Application Firewalls (WAFs): Deploy a Web Application Firewall to provide an added layer of protection against common web-based vulnerabilities.
  • Harden Your Infrastructure: Configure your infrastructure securely by following best practices for server hardening, network segmentation, access controls, and encryption.
  • Continuous Monitoring and Incident Response: Implement continuous monitoring solutions, intrusion detection/prevention systems, and proactive incident response plans.
  • Employee Training and Awareness: Educate your employees on security best practices and promote a security-aware culture within your organization.
  • Engage Security Experts: Seek assistance from external security experts, such as penetration testers and cybersecurity consultants, for periodic security audits and recommendations.
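As one small example of the kind of check a security assessment might include, the sketch below inspects a site's responses for a few common security headers. The header list is an illustrative subset and the URL is a placeholder; a real assessment would go much further.

```python
import requests  # third-party: pip install requests

# Illustrative subset of headers a hardening review typically looks for.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_security_headers(url: str = "https://www.example.com") -> None:
    response = requests.get(url, timeout=10)
    for header in EXPECTED_HEADERS:
        status = "present" if header in response.headers else "MISSING"
        print(f"{header}: {status}")

if __name__ == "__main__":
    check_security_headers()
```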

How SecureTrust Secures Your Code

SecureTrust Helios™ can help by providing these post-development AppSec measures, enabling businesses to proactively strengthen their security posture and protect sensitive data.

While retrofitting security usually requires additional effort and resources, it is a crucial investment to safeguard your business, maintain customer trust, and mitigate potential threats. 

Fortunately, SecureTrust makes it easier than ever to secure your organization through the world’s first fully Managed Security Subscription service.

Bottom Line: Is AI Coding A Security Problem?

By considering all these factors and applying a comprehensive approach to AppSec testing and security practices, developers and customers can bolster the security of AI-generated code.

Further, the emergence of AI in software development has revolutionized coding practices, enabling developers to accomplish tasks at unprecedented speeds.

With some predicting that AI could handle up to 80% of coding within the next few years, it is crucial to address the security implications associated with AI-generated code.

By leveraging AppSec testing methods, security best practices, and continuous innovation in code analysis techniques, developers can strive towards securing AI-generated code and minimizing the risks introduced by this transformative technology.

As the field progresses, it is imperative for developers and security professionals to remain vigilant, adapt to evolving threats, and implement robust security measures to ensure the integrity and safety of AI-generated code.

Frequently Asked Questions

What are the security risks of AI systems?

AI systems can introduce new security risks, including the introduction of new vulnerabilities, potential lack of human oversight, inadequate security testing, increased attack surface, and dependency on third-party libraries and components.

How good is AI code generation?

AI code generation has evolved significantly, with AI systems now capable of generating complex code. However, the quality and security of this code depend on several factors, including the quality of the AI’s training data and the complexity of the generated code.

How secure is code generated by ChatGPT?

The security of code generated by ChatGPT depends on several factors, including the quality of its training data and its ability to learn and adapt. However, regardless of ChatGPT’s capabilities, human oversight remains indispensable for ensuring the security of the generated code.

Can AI be used for malicious purposes?

Yes, like any technology, AI can be used for malicious purposes. For instance, AI systems can be used to automate cyber attacks or to generate malicious code.

Can artificial intelligence be hacked?

Yes, artificial intelligence systems can be hacked. This can occur if the AI system has vulnerabilities that can be exploited by attackers, or if the AI system is trained on insecure coding practices.

What is the biggest risk of AI?

One of the biggest risks of AI is the potential introduction of new vulnerabilities, particularly if the AI system is trained on insecure coding practices or generates complex code that is difficult to review and understand.

Can you ever write code that is completely secure?

While it is possible to write code that is highly secure, it is generally accepted that no code can be completely secure. This is because new vulnerabilities can be discovered over time, and the security landscape is constantly evolving. Therefore, it is important to conduct regular security reviews and updates to ensure the ongoing security of the code.
