ChatGPT Generates Mostly Insecure Code, According to Some Researchers!

And the AI Language Model Only Admits Its Mistakes When Its Flaws Are Pointed Out!

As of April 23, ChatGPT, an AI chatbot capable of generating different types of text, including code, has been under scrutiny by four researchers from the University of Quebec in Canada. Their findings suggest that ChatGPT often produces code with significant security flaws, and fails to actively alert users to these issues, only admitting its errors upon request.

The researchers presented their findings in a paper in which they asked ChatGPT to create 21 programs and scripts in various programming languages, including C, C++, Python, and Java. Each program was designed to exhibit a distinct security weakness, covering memory corruption, denial of service, flawed deserialization, and weak encryption implementations.

After analysis, only five of the twenty-one programs ChatGPT produced on its first attempt were secure. With further guidance and correction, the model managed to produce seven more secure programs. However, these were secure only with respect to the specific vulnerability being evaluated; the final code could still contain other exploitable flaws.

The researchers note that ChatGPT’s flaw lies in its inability to account for adversarial code execution models. While the AI suggests that security issues can be avoided by “not entering invalid data,” this is impractical in real-world scenarios. Nevertheless, ChatGPT appears to recognize and acknowledge critical vulnerabilities in the code it recommends.
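The impracticality of "not entering invalid data" is easy to make concrete. The following sketch is not from the paper; it uses Python and SQL injection as an illustrative stand-in for the general pattern: code that merely trusts its input leaks data the moment an attacker supplies it, while a safe API neutralizes the same input.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Insecure: the query trusts callers to "not enter invalid data".
# The injected OR clause makes the WHERE condition always true.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()

# Safer: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(leaked)  # every secret in the table
print(safe)    # []
```

The point is not the specific bug but the design stance: security comes from APIs that are safe against adversarial input, not from hoping users behave.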

Raphaël Khoury, a computer science and engineering professor at the University of Quebec and one of the paper’s co-authors, said,

“Obviously, it’s just an algorithm. It doesn’t know anything, but it can identify unsafe behaviour.”

The researchers found that ChatGPT's initial response to security concerns was simply to recommend accepting only valid inputs, advice that is unworkable when attackers control the input. The model provided useful remediation guidance only after being asked to refine the question.

The researchers argue that this behaviour is far from ideal, since users must already know about specific vulnerabilities and coding techniques to ask the right questions. They also point out an ethical inconsistency in ChatGPT's approach: it declines to produce exploit code, yet readily produces code containing the security flaws that such exploits target.

One cited instance involved the chatbot generating insecure code for a Java deserialization vulnerability and recommending measures to improve its security.
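The paper's Java example is not reproduced in the article. As a hedged sketch of the same class of flaw, Python's pickle module shows how deserializing untrusted bytes can execute attacker-chosen code, and how a restricted unpickler refuses it:

```python
import io
import os
import pickle

# Unpickling invokes __reduce__, so untrusted bytes can call any callable.
class Malicious:
    def __reduce__(self):
        # Harmless stand-in; a real attacker would use os.system or similar.
        return (os.getcwd, ())

payload = pickle.dumps(Malicious())

# Insecure: blindly deserializing untrusted data runs the embedded callable.
result = pickle.loads(payload)  # os.getcwd() executes during load

# Safer: a restricted Unpickler that rejects every global lookup,
# so no callable can be smuggled in through the byte stream.
class SafeUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

try:
    SafeUnpickler(io.BytesIO(payload)).load()
except pickle.UnpicklingError as exc:
    blocked = str(exc)
```

Java's equivalent mitigation is an `ObjectInputFilter` on `ObjectInputStream`; the underlying lesson is the same: never hand a deserializer untrusted bytes without restricting what it may instantiate.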

However, it was unable to generate a more secure version of the code. While Khoury acknowledges that ChatGPT poses a risk in its current form, he suggests that there are ways to make good use of this imperfect AI assistant.

“We’ve seen students use this tool, and programmers use this tool in real life,”

he said,

“so having a tool that generates unsafe code is very dangerous. We need to make students aware that if the code is generated with this type of tool, then it is likely to be insecure.”

He also expressed surprise that when they asked ChatGPT to generate code for the same task in different languages, sometimes it produced secure code for one language while generating vulnerable code for another.

“This language model is kind of a black box, and I don’t really have a good explanation or theory for that,”

he said.


Ankit is a geek from New Delhi who loves smartphones, games and everything tech. When he's not busy writing here you can find him playing PUBG on his phone!
