ChatGPT's unprecedented growth and user base have brought security concerns along with them.
Hackers have been attacking accounts and leaking credential details since soon after the chatbot's launch, as highlighted in a recent post by Alon Gal, Co-Founder & CTO of Hudson Rock. This raises the risk of mass data exploitation stemming from ChatGPT security incidents.
According to data collected by the cybersecurity firm, the graph below shows a drastic increase in compromised computers over time.
ChatGPT security incidents: Popularity and usage of the AI tool
The images posted by Gal showcase the extent of damage done to user data in ChatGPT security incidents. Details including email addresses, passwords, and IP addresses were leaked on the dark web.
According to a report by UBS analysts, ChatGPT emerged as the fastest-growing app in internet history, reaching 100 million active users just two months after launch. The research further revealed that ChatGPT's average number of unique visitors nearly doubled from December to January, reaching 13 million.
This underscores the gravity of credential misuse, especially for minors and students who have turned to the tool for project or education-related help.
Besides North Korea, where ChatGPT is unavailable because of strict existing restrictions on internet usage, Italy temporarily banned the chatbot, becoming the first Western country to do so. The ban has since been lifted, according to a PC Guide report.
“At the end of March, the Italian Data Protection Watchdog blocked the use of ChatGPT over data privacy concerns,” the report added. Other countries, including Iran, China, Cuba, and Syria, do not have access to ChatGPT due to their laws and internet policies.
Owing to the rise in ChatGPT security incidents, Germany has launched a probe into the chatbot over data protection concerns.
The Cyber Express reached out to OpenAI regarding the security incidents linked to ChatGPT. We will update this report upon receiving their response.
Risks posed by ChatGPT security incidents and regulatory bodies’ concerns
A report by the National Cyber Security Centre explained how ChatGPT can distort results and create confusion among users. It read, “They’re not magic, they’re not artificial general intelligence, and contain some serious flaws, including:
- They can get things wrong and ‘hallucinate’ incorrect facts.
- They can be biased, are often gullible (in responding to leading questions, for example).”
The report further explained that such tools can be coaxed into producing toxic results and are also prone to prompt injection attacks. User queries may also be accessible to OpenAI. Hence, the NCSC added this word of caution for ChatGPT users: “Do not include sensitive information in queries to public LLMs (Large Language Models).”
ChatGPT vs Bard by Google
Bard, an AI chatbot, was initially built on the Language Model for Dialogue Applications (LaMDA), a large language model. It now uses Google’s PaLM 2, which is considered Google’s next-generation LLM.
While ChatGPT responds based on the data already in its system, Bard has the added ability to gather information from the internet, powered by the Google search engine. This widens the scope for more diverse and far-reaching responses.
Bard can also expect to attract users who want to integrate it with their Google Docs and Gmail accounts.
Whether with specialized AI tools or any online account, security is always a concern, and each user is entrusted with exercising adequate caution in sharing data and protecting accounts. Every tool has its advantages and disadvantages.
What matters is how we use it, as summed up by a Europol report expressing concerns over ChatGPT security incidents.
The report read, “If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps.”
“While all of the information ChatGPT provides is freely available on the internet, the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime,” concluded Europol’s first Tech Watch Flash report, entitled ‘ChatGPT – the impact of Large Language Models on Law Enforcement.’