The currently free-to-use chatbot ChatGPT, created by OpenAI, the research lab co-founded by Elon Musk and Sam Altman, is drawing mixed reactions from cybersecurity researchers. Bundled with a host of pros and cons, the chatbot can help prevent cybercrime by detecting certain malicious activities, but it can also encourage cybercrime by reducing the time needed to create a proof of concept.
It is argued that ChatGPT still requires human intervention to be fully helpful and to function at its best.
Researchers at Cyble, a global threat intelligence SaaS provider, published a blog outlining the limitations of, and predictions for, using ChatGPT, which was launched by the research organization OpenAI last week.
The service has grown to over a million users within a week of its launch. ChatGPT, where GPT stands for ‘Generative Pre-trained Transformer’, is a bot that can answer a wide variety of questions based on its training data.
Positive predictions of using ChatGPT
With its natural language, sourced from popular chatrooms, Twitter, and Reddit, ChatGPT can sound almost human, producing well-structured answers. It is predicted that ChatGPT will help create more automated tools for use by cybersecurity and infosec companies.
Moreover, the open-source community can look forward to custom scripts being created in the public domain. ChatGPT can also help identify new threats through the correlation and open-source intelligence (OSINT) investigations that researchers rely on.
ChatGPT can even help create a virtual machine.

ChatGPT can also be asked to write a Python script that scans a host for open ports and vulnerabilities.
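The script from the Cyble blog appears only as a screenshot and is not reproduced here. As an illustrative sketch of what such a ChatGPT-generated scanner might look like (a standard-library TCP connect scan; the target host and port range below are placeholder assumptions, not taken from the blog):

```python
import socket
from concurrent.futures import ThreadPoolExecutor


def scan_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds (port is open)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0


def scan_ports(host: str, ports) -> list:
    """Probe the given ports concurrently and return the open ones, sorted."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = pool.map(lambda p: (p, scan_port(host, p)), ports)
    return sorted(p for p, is_open in results if is_open)
```

A call such as `scan_ports("127.0.0.1", range(1, 1025))` would list the open well-known ports on localhost. A real assessment would then pair the open-port list with banner grabbing or a vulnerability-database lookup; the sketch only covers the scanning step.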

Digital forensics investigators can likewise use ChatGPT to generate custom scripts for inspecting network traffic, such as DNS queries and their response addresses.
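The investigator's script from the blog is likewise only a screenshot. As a standard-library sketch of the kind of helper such a script might contain (the function names are illustrative, not from the blog), the following builds and decodes DNS query packets in RFC 1035 wire format:

```python
import struct


def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query packet (RFC 1035 wire format)."""
    # Header: ID, flags (recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question


def parse_dns_question(packet: bytes) -> str:
    """Extract the queried hostname from a DNS packet's question section."""
    labels, offset = [], 12  # the question starts after the 12-byte header
    while packet[offset] != 0:
        length = packet[offset]
        labels.append(packet[offset + 1: offset + 1 + length].decode())
        offset += 1 + length
    return ".".join(labels)
```

Captured UDP payloads on port 53 could be fed to `parse_dns_question` the same way; pulling the response addresses out of answer records follows similar offset arithmetic but must also handle DNS name compression.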

A boon to researchers
Besides this, malware researchers can use ChatGPT to identify complex algorithms and encryption logic, and to understand IDA Pro pseudocode. It can also help bug bounty researchers grasp the full severity of the vulnerabilities they find and write scripts for tasks such as reconnaissance and crawling domains to locate admin login pages.
Moreover, Android researchers can create Frida scripts for operations such as unpacking and SSL-pinning bypass.
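As a sketch of the reconnaissance step described above, a ChatGPT-generated crawler might start by expanding a target domain into candidate admin-login URLs to probe (the path wordlist here is an illustrative assumption, not from the blog):

```python
from urllib.parse import urljoin

# Illustrative wordlist; a real crawl would use a much larger one.
COMMON_ADMIN_PATHS = ["admin/", "admin/login", "administrator/", "wp-admin/", "login"]


def candidate_admin_urls(base_url: str, paths=COMMON_ADMIN_PATHS) -> list:
    """Combine a base URL with common admin paths to get probe targets."""
    return [urljoin(base_url, p) for p in paths]
```

Each candidate would then be requested and filtered by HTTP status code; only the URL generation is shown here, since probing requires a live, authorized target.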
The flip side of ChatGPT
There is a possibility that ChatGPT will be misused by threat actors, for instance through the mass exploitation of new vulnerabilities within hours of their discovery, or by streamlining reconnaissance to increase the chances of a successful cyberattack.
Furthermore, hacktivist groups may exploit ChatGPT to exfiltrate data from compromised systems. Previously detected malware strains can also be reworked to enhance their impact in line with changing trends and requirements.
Wide-scale social engineering attacks on companies, both public and private, are also expected to increase and evolve with the help of ChatGPT, and the dark web may see more data leaks driven by automated scripts.