At the India AI Impact Summit 2026, the spotlight turned to a critical question: how do we scale artificial intelligence without scaling risk? During a high-level panel discussion titled “Responsible AI at Scale: Governance, Integrity, and Cyber Readiness for a Changing World,” leaders from government, cybersecurity, public policy, and academia gathered to examine what it truly takes to deploy AI safely and responsibly.
The session brought together Sanjay Seth, Minister of State for Defence; Lt Gen Rajesh Pant, Former National Cyber Security Coordinator of India; Beenu Arora, Co-Founder & CEO of Cyble; Jay Bavisi, Founder and Chairman of EC-Council; Carly Ramsey, Director & Head of Public Policy (APJC) at Cloudflare; Dr. Subi Chaturvedi, Global SVP & Chief Corporate Affairs and Public Policy Officer at InMobi; and Anna Sytnik, Associate Professor at St. Petersburg State University. The discussion was moderated by Vineet, Founder & Global President of CyberPeace.
Opening the session, Rekha Sharma, Member of Rajya Sabha, set the tone by emphasizing the importance of balancing AI-driven innovation with governance, integrity, and long-term societal trust.
As India positions itself as a key voice in shaping global AI policy, the message from the panel was clear — responsible AI at scale requires not just ambition, but strong governance frameworks and serious cyber readiness.
Responsible AI at Scale Requires Governance and Real Security Testing
While governance frameworks were widely discussed, one of the most practical interventions came from Beenu. Drawing from his early career in penetration testing, he reminded the audience that AI systems must be challenged before they are trusted.
“I think my final take is based upon how I started my career, which was trying to hack them on a penetration test,” he said.
That early experience shaped his recommendation for enterprises, academia, and governments building AI systems today.
“For enterprises or any academia, I think red teaming — which is basically trying to hack your AI infrastructure, AI models, or AI assumptions, or stress testing them from a security standpoint — is going to be most critical,” he explained.
In simple terms, if organizations are serious about Responsible AI at Scale, they must actively try to break their own systems before adversaries do. Red teaming AI models, infrastructure, and assumptions is not an aggressive move — it is a responsible one.
Beenu stressed that this urgency stems from where the ecosystem currently stands.
“Especially at these stages where we are still building up the entire security infrastructure around here,” he noted, pointing to the fragility of evolving AI security systems.
His conclusion was direct and policy-relevant:
“That would be my biggest recommendation for enterprises and governments also.”
The Deepfake Reality: AI Threats Are Already Industrialized
To highlight the urgency, Beenu shared a personal example of how AI-powered threats are no longer theoretical.
“Three years ago, my chief of staff got a WhatsApp call mimicking my own voice, asking to process a transaction. She got suspicious and eventually figured out this was a deepfake call.”
What was once a novelty is now operating at scale.
“On average, we are seeing around 70 to 100 thousand new deepfake audio calls in our systems — and many of them are very, very sophisticated. In fact, many are bypassing our own detection.”
The implication is stark: AI-driven deception is becoming industrialized. Deepfake audio and video are no longer fringe experiments — they are operational tools used in real-world attack chains.
Beenu further highlighted the financial consequences:
“Today, we have had companies who lost millions of dollars because of a deepfake video on a Zoom or Teams call asking someone to do something.”
These incidents illustrate a structural shift. AI is no longer just a productivity enabler — it is an active component in modern cyberattacks.
AI Governance Must Match the Speed of Innovation
The broader discussion reinforced that Responsible AI at Scale cannot rely on policy statements alone. It requires adaptive AI governance that reflects national priorities, socio-economic diversity, and security realities.
International AI standards must be contextualized. Transparency must be embedded into system design. Accountability must be clearly assigned. And cyber readiness cannot be postponed until after deployment.
The panel agreed that innovation and oversight must move together. If governance lags too far behind technological advancement, trust erodes.
Building AI Security Infrastructure Before Scaling Further
A key takeaway from the summit was that innovation and security cannot operate on separate tracks. As AI adoption expands across defense, finance, healthcare, and public services, AI security infrastructure must evolve just as quickly.
Responsible AI at Scale means:
- Stress-testing AI systems continuously
- Strengthening cyber resilience frameworks
- Embedding transparency into AI models
- Preparing institutions for large-scale AI risks
India’s ambition to shape global AI norms depends not only on technological capability, but also on credibility and trust.
The discussion made one thing clear that scaling AI responsibly is not about slowing progress. It is about strengthening it.
And as Beenu stressed out, rigorously testing AI systems today may be the most responsible step toward protecting societies tomorrow.







































