In your view, what are the greatest potential risks of using AI coding assistants? What’s the best way to protect the organization against those possible liabilities?

Senior Director of Technology in Software, 10 months ago

AI coding assistants, as the name says, should do just that: assist you.

Heavy reliance on coding assistants can lead to innovation fatigue. To ask the right question, you need to understand what the end result should look like, and making the most of the response is again your responsibility.

I have seen devs paste entire files into an assistant and ask it to find issues, when that code contains usernames, passwords, API keys, and so on. That's a huge security risk.
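One low-cost mitigation is to scan a snippet for obvious credentials before it ever leaves the developer's machine. The sketch below is a minimal, hypothetical example (the pattern names and function are illustrative, not a real tool's API); dedicated secret scanners such as gitleaks or truffleHog cover far more cases.

```python
import re

# Hypothetical patterns for a quick pre-prompt secret scan.
SECRET_PATTERNS = {
    "password assignment": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]+['\"]"),
    "generic API key":     re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]+['\"]"),
    "AWS access key ID":   re.compile(r"AKIA[0-9A-Z]{16}"),
}

def find_secrets(snippet: str) -> list[str]:
    """Return labels of suspected secrets found in a code snippet."""
    return [label for label, pattern in SECRET_PATTERNS.items()
            if pattern.search(snippet)]

if __name__ == "__main__":
    code = 'db_password = "hunter2"\napi_key = "sk-test-123"'
    hits = find_secrets(code)
    if hits:
        print("Do not paste this into an assistant; found:", hits)
    else:
        print("No obvious secrets found (still review manually).")
```

A check like this can run as a pre-commit hook or inside an internal "ask the assistant" wrapper, so the redaction happens before the prompt is sent rather than relying on developer memory.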

VP of Engineering, 10 months ago

Over-reliance on AI is a significant risk: developers can lose their problem-solving abilities and critical-thinking skills if they depend on it too heavily. It's essential to maintain the same discipline in coding, testing, and releasing to production whether the code is AI-generated or human-written, and to make sure developers stay actively engaged in solving problems themselves.

Customer Success Manager in Hardware, 10 months ago

Security vulnerabilities are a major concern. AI-generated code can have hidden security flaws that are not immediately visible. It's also crucial to ensure that the code quality is maintained and that the code does not infringe on intellectual property rights. To mitigate these risks, organizations should implement strict code reviews and testing protocols. This includes having multiple layers of review to catch any potential issues.
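One way to make "multiple layers of review" concrete is a pre-merge gate that every change, AI-generated or not, must pass before a human reviewer sees it. The sketch below is an assumption-laden illustration: the tool choices (pytest, bandit) and the src path are placeholders, not a prescribed stack.

```python
import subprocess
import sys

# Hypothetical pre-merge gate: run the test suite and a static security scan,
# and block the merge if either fails. Tools and paths are illustrative.
CHECKS = [
    ("unit tests",    ["pytest", "-q"]),
    ("security lint", ["bandit", "-r", "src"]),
]

def run_gate() -> int:
    for name, cmd in CHECKS:
        print(f"Running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{name} failed; block the merge and request human review.")
            return result.returncode
    print("All automated checks passed; proceed to human code review.")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```

Wiring something like this into CI keeps AI-generated code subject to the same testing and security bar as everything else, with human review layered on top rather than replaced.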

10 months ago

I agree. AI coding assistants might suggest outdated or incorrect code based on historical data, which may not be applicable to current scenarios. This can mislead developers. To prevent this, it's important to have multiple review cycles and ensure that the code is thoroughly vetted. Additionally, using AI to evaluate AI-generated code can help flag any issues that require manual review.
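A minimal sketch of that second-pass idea, assuming the OpenAI Python SDK is installed and an API key is configured; the model name and prompt are illustrative, and anything the model flags still goes to a human reviewer rather than being treated as sign-off.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

# Illustrative "AI reviewing AI" pass: ask a second model to flag issues in a
# generated diff so a human knows where to look first.
client = OpenAI()

REVIEW_PROMPT = (
    "You are a code reviewer. List outdated APIs, security issues, and "
    "anything that needs manual verification. Answer as a bullet list."
)

def ai_review(diff: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your org has approved
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ai_review("def login(user, pw):\n    return pw == 'admin'  # TODO"))
```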

Data Manager in Banking, 10 months ago

One of the greatest risks is blindly trusting AI-generated code. For example, the AI might produce code that is biased or that inadvertently leaks sensitive information like passwords. Developers must be cautious and thoroughly review AI-generated code to avoid these issues. Additionally, feeding proprietary information into tools like ChatGPT can end up training the model on it, leading to data leaks.
