
CYBER SECURITY – ChatGPT as a Security Phenomenon

After the popular ChatGPT recently suffered a data breach, Europol and security companies have scrutinized it both as a potential ally and as a suspicious phenomenon in the virtual perimeter of cyber security

By: Mirza Bahić; E-mail: editorial@asmideast.com

The AI research company OpenAI recently confirmed that in March 2023 the widely popular ChatGPT AI bot was indirectly responsible for a data breach, caused by a bug that allowed users’ private information to leak. The exposed information included subscribers’ names, email and physical addresses, and the last four digits and expiration dates of their credit cards.

Leakage of Personal and Financial Data

The data breach also forced ChatGPT to shut down temporarily, and even users whose personal data were not compromised could see other users’ chat logs with the AI bot within their own apps. OpenAI attributed the data breach to a bug in the open-source Redis library. As an additional damage-control measure for the popular bot’s reputation, OpenAI announced that it would personally contact all users whose financial data were exposed in the incident. So, is the case closed now and everyone’s happy? Not quite.

Italy Shuts Down ChatGPT, EU Works on Umbrella Law

The year 2023 started badly for the AI service, launched in November 2022, as evidenced by the Italian government’s decision to shut down ChatGPT over the platform’s potential to compromise its users’ privacy. In early April, the Italian data protection authority ordered OpenAI to temporarily stop processing Italian users’ data pending an investigation into violations of European privacy regulations. The stated reason was that users were able to see parts of other users’ conversations with the chatbot. Italian regulators stated that there is currently no legal basis for this kind of mass collection and processing of personal data. For this invasion of users’ privacy, OpenAI risks a fine of up to 20 million euros. Italy is not the only country whose regulators have struggled to keep up with the rapid pace of AI advancement and its implications for society.

The European Union is currently working on an umbrella law on artificial intelligence that will also have a security component, with a focus on protecting the privacy of users. This is seen as a middle ground compared to the ban on ChatGPT that is currently in force in countries like China, Iran, North Korea, and Russia. The downside of this approach is that the European AI law may not come into force for several years, by which time this technology will have evolved beyond recognition.

Warnings From Europol

Lawmakers may react more quickly if they take seriously the warnings about ChatGPT that Europol issued in March of this year. According to the agency, criminals are already well aware of how such artificial intelligence platforms can serve their ends, including phishing and the dissemination of disinformation and malware. As a first step, criminals could use ChatGPT to significantly speed up research in fields they are unfamiliar with. The bot can easily help them craft convincing texts to lure victims into fraudulent schemes and provide information related to terrorism, cybercrime, and other illegal activities. “Although ChatGPT uses protective measures such as content moderation policy and refuses to answer questions described as harmful or biased, attackers can easily bypass them with smart instructions given to the AI,” Europol stated. In this light, the chatbot’s ability to mimic writing styles emerges as a powerful tool for future phishing attacks. Because such content appears authentic, users will be more likely to click on fake links that prompt them to hand over personal data.

Moreover, with ChatGPT, even attackers who are not native speakers of English or another target language can create flawless, seemingly legitimate emails. At the same time, producing such messages with ChatGPT is fast and scalable, with content personalized by imitating the tone, style, and details gleaned from the victim’s harvested personal data.

Misinformation and Coding

ChatGPT’s ability to quickly produce seemingly authentic text makes it an ideal platform for spreading propaganda and misinformation with relatively little effort. According to Europol, ChatGPT can also be used to write computer code, especially by less skilled criminals who cannot code themselves but want to infiltrate a target website. The countermeasure Europol’s experts advise is raising awareness of ChatGPT’s potential for misuse, to ensure legal and technological loopholes are closed in time, before they widen into avenues for cybercrime.

A More Positive Role

However, there are two sides to every arms race, so many in the cyber community also see ChatGPT as an ally in the fight against crime. Ethical hackers are already using existing AI tools to assist with vulnerability reporting, code sample creation, and identifying key trends in large datasets. Where AI, including ChatGPT, can really help security teams is speed, a key ingredient in vulnerability management. Just as ChatGPT can help attackers create authentic-looking phishing messages, the same capability can be turned to identifying such content. Over time, AI advocates believe, ChatGPT will become a useful tool for determining whether suspected phishing content was generated with malicious intent or is authentic. Even at this stage, ChatGPT can be a valuable tool in “war games” that simulate hacking attacks and the defense against them.
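To give a flavor of the kind of signal extraction such AI-assisted phishing filters build on, here is a minimal, purely illustrative sketch in Python. The phrase list, the regular expressions, and the scoring rule are all hypothetical simplifications; real systems feed far richer features into trained models rather than relying on fixed keywords.

```python
import re

# Hypothetical indicators of phishing pressure tactics (illustrative only).
URGENCY_PHRASES = [
    "verify your account",
    "act now",
    "password expired",
    "unusual activity",
    "click here immediately",
]

def phishing_signals(text: str) -> dict:
    """Extract a few crude phishing indicators from a message."""
    lowered = text.lower()
    return {
        # How many pressure phrases appear in the message.
        "urgency_hits": sum(p in lowered for p in URGENCY_PHRASES),
        # Does the message contain a link?
        "has_link": bool(re.search(r"https?://", lowered)),
        # Does it ask for sensitive data?
        "asks_for_credentials": bool(
            re.search(r"password|credit card|ssn", lowered)
        ),
    }

def looks_suspicious(text: str) -> bool:
    """Toy decision rule: urgency plus a link or a credential request."""
    s = phishing_signals(text)
    return s["urgency_hits"] > 0 and (s["has_link"] or s["asks_for_credentials"])
```

For example, a message reading “Unusual activity detected: verify your account at https://example.com” trips both the urgency and link checks, while a routine meeting update does not. An LLM-based classifier would replace this brittle rule with learned judgment over tone and context, which is precisely the capability the article describes.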

ChatGPT as a Defensive Player

In simulations conducted by several cyber security companies, the bot provided detailed responses to queries about vulnerabilities in software often used by security testers, such as Nmap. ChatGPT even offered advanced suggestions for defensive update scripts, illustrated with written lines of code. As cyber attacks grow more frequent and complex, AI tools like ChatGPT can become valuable defensive players on overloaded IT teams, primarily because they can process huge amounts of data and predict trends from it. With advances in complementary machine learning and natural language processing technologies, this process will only get faster, helping cement the reputation of ChatGPT and related systems as a supporting pillar of cyber defense rather than a dystopian ally of high-tech crime.
