The rapid integration of artificial intelligence into mainstream software development has introduced a new category of security risk, one that many organizations are still unprepared to manage. According to research conducted by Cyble Research and Intelligence Labs (CRIL), thousands of exposed ChatGPT API keys are currently accessible across public infrastructure, dramatically lowering the barrier for abuse.
CRIL identified more than 5,000 publicly accessible GitHub repositories containing hardcoded OpenAI credentials. In parallel, approximately 3,000 live production websites were found to expose active API keys directly in client-side JavaScript and other front-end assets.
Together, these findings reveal a widespread pattern of credential mismanagement affecting both development and production environments.
GitHub as a Discovery Engine for Exposed ChatGPT API Keys
Public GitHub repositories have become one of the most reliable sources for exposed AI credentials. During development cycles, especially in fast-moving environments, developers often embed ChatGPT API keys directly into source code, configuration files, or .env files. While the intent may be to rotate or remove them later, these keys frequently persist in commit histories, forks, archived projects, and cloned repositories.
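A safer baseline is to load the key from the runtime environment so it never enters version control. The following sketch assumes a Node.js backend and the conventional OPENAI_API_KEY variable name:

// Load the OpenAI key from the environment at runtime instead of
// hardcoding it. OPENAI_API_KEY is a common convention, not a requirement.
const apiKey = process.env.OPENAI_API_KEY;

if (!apiKey) {
  // Fail fast so a missing secret is caught at startup, not in production.
  throw new Error("OPENAI_API_KEY is not set");
}

// The key now lives only in the deployment environment or a secrets
// manager, never in source code or commit history.

Pairing this with a .gitignore entry for .env files keeps local secrets out of commit history entirely.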
CRIL’s analysis shows that these exposures span JavaScript applications, Python scripts, CI/CD pipelines, and infrastructure configuration files. Many repositories were actively maintained or recently updated, increasing the likelihood that the exposed ChatGPT API keys remained valid at the time of discovery.
Once committed, secrets are quickly indexed by automated scanners that monitor GitHub repositories in near real time. This drastically reduces the window between exposure and exploitation, often to mere hours or minutes.
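The mechanics are straightforward to reproduce. The sketch below shows, in simplified form, the kind of pattern matching such scanners perform; the regular expression and directory walk are illustrative assumptions rather than any specific scanner's rules:

const fs = require("fs");
const path = require("path");

// Rough pattern for OpenAI-style keys (sk-, sk-proj-, sk-svcacct- prefixes).
// Real scanners use broader rule sets, entropy checks, and git history scans.
const KEY_PATTERN = /sk-(?:proj-|svcacct-)?[A-Za-z0-9_-]{20,}/g;

function scanDir(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      if (entry.name === ".git" || entry.name === "node_modules") continue;
      scanDir(full);
    } else {
      const text = fs.readFileSync(full, "utf8");
      for (const match of text.match(KEY_PATTERN) ?? []) {
        console.warn(`Possible exposed key in ${full}: ${match.slice(0, 12)}...`);
      }
    }
  }
}

scanDir(process.argv[2] ?? ".");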
Exposure in Live Production Websites
Beyond repositories, CRIL uncovered roughly 3,000 public-facing websites leaking ChatGPT API keys directly in production. In these cases, credentials were embedded within JavaScript bundles, static files, or front-end framework assets, making them visible to anyone inspecting network traffic or application source code.
A commonly observed implementation resembled:
const OPENAI_API_KEY = "sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX";
const OPENAI_API_KEY = "sk-svcacct-XXXXXXXXXXXXXXXXXXXXXXXX";
The sk-proj- prefix typically denotes a project-scoped key tied to a specific environment and billing configuration. The sk-svcacct- prefix generally represents a service-account key intended for backend automation or system-level integration. Despite their differing scopes, both function as privileged authentication tokens granting direct access to AI inference services and billing resources.
Embedding these keys in client-side JavaScript fully exposes them. Attackers do not need to breach infrastructure or exploit software vulnerabilities; they simply harvest what is publicly available.
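The standard remediation is to proxy AI calls through a backend so the key never reaches the browser. A minimal sketch using Express follows; the /api/chat route and model name are illustrative choices, not part of CRIL's findings:

const express = require("express");

const app = express();
app.use(express.json());

// The key stays on the server; the browser only ever sees this endpoint.
const apiKey = process.env.OPENAI_API_KEY;

app.post("/api/chat", async (req, res) => {
  // Forward the user's prompt to OpenAI from the backend, where rate
  // limiting and authentication checks can also be enforced.
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: req.body.prompt }],
    }),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);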
“The AI Era Has Arrived — Security Discipline Has Not”
Richard Sands, CISO at Cyble, summarized the issue bluntly: “The AI Era Has Arrived — Security Discipline Has Not.” AI systems are no longer experimental tools; they are production-grade infrastructure powering chatbots, copilots, recommendation engines, and automated workflows. Yet the security rigor applied to cloud credentials and identity systems has not consistently extended to ChatGPT API keys.
A contributing factor is the rise of what some developers call “vibe coding”—a culture that prioritizes speed, experimentation, and rapid feature delivery. While this accelerates innovation, it often sidelines foundational security practices. API keys are frequently treated as configuration values rather than production secrets.
Sands further emphasized, “Tokens are the new passwords — they are being mishandled.” From a security standpoint, ChatGPT API keys are equivalent to privileged credentials. They control inference access, usage quotas, billing accounts, and sometimes sensitive prompts or application logic.
Monetization and Criminal Exploitation
Once discovered, exposed keys are validated through automated scripts and operationalized almost immediately. Threat actors monitor GitHub repositories, forks, gists, and exposed JavaScript assets to harvest credentials at scale.
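Defenders can run the same liveness check against their own keys after an exposure, for example to confirm that rotation actually took effect. A minimal sketch, probing OpenAI's authenticated /v1/models endpoint:

// Defensive check: does a (possibly leaked) key still authenticate?
// Run against your own keys after an exposure to verify rotation worked.
async function isKeyActive(apiKey) {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  // 200 means the key authenticates; 401 means it is revoked or invalid.
  return res.status === 200;
}

isKeyActive(process.env.OPENAI_API_KEY).then((active) =>
  console.log(active ? "key is still live" : "key is revoked or invalid")
);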
CRIL observed that compromised keys are typically used to:
- Execute high-volume inference workloads
- Generate phishing emails and scam scripts
- Assist in malware development
- Circumvent service restrictions and usage quotas
- Drain victim billing accounts and exhaust API credits
Using Cyble Vision, CRIL also identified instances in which exposed keys were subsequently leaked and discussed on underground forums, indicating that threat actors actively track and share discovered credentials.

Unlike traditional cloud infrastructure, AI API activity is often not integrated into centralized logging systems, SIEM platforms, or anomaly detection pipelines. As a result, abuse can persist undetected until billing spikes, quota exhaustion, or degraded service performance reveal the compromise.
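One way to close this gap is to meter token consumption from the API responses themselves, since Chat Completions responses include a usage object, and feed the figures into existing telemetry. A minimal sketch; the hourly budget and console logging stand in for a real SIEM integration:

// Wrap OpenAI calls so every response's token usage is recorded and
// spikes trigger an alert. Chat completion responses include a `usage`
// object with total_tokens.
let tokensThisHour = 0;
const HOURLY_TOKEN_BUDGET = 500_000; // illustrative threshold

setInterval(() => { tokensThisHour = 0; }, 60 * 60 * 1000);

async function chatWithMetering(messages) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages }),
  });
  const data = await res.json();

  tokensThisHour += data.usage?.total_tokens ?? 0;
  // Ship this record to a SIEM or logging pipeline instead of the console.
  console.log(JSON.stringify({ event: "openai_call", usage: data.usage }));
  if (tokensThisHour > HOURLY_TOKEN_BUDGET) {
    console.error("ALERT: hourly token budget exceeded; possible key abuse");
  }
  return data;
}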
Kaustubh Medhe, CPO at Cyble, warned: “Hard-coding LLM API keys risks turning innovation into liability, as attackers can drain AI budgets, poison workflows, and access sensitive prompts and outputs. Enterprises must manage secrets and monitor exposure across code and pipelines to prevent misconfigurations from becoming financial, privacy, or compliance issues.”