Secure your chatbots — or your wife may learn your secrets: Nikesh Arora
New Delhi: What if your spouse discovered your deepest secrets by scrolling through your chatbot conversations? Nikesh Arora, CEO of cybersecurity giant Palo Alto Networks, says companies like his are racing to address emerging risks in an increasingly AI-reliant world.
“My fear is that in about six months, if I’m talking to my AI model, it might know more things about me than I’ve told my wife,” he told the audience at the India AI Impact Summit in New Delhi on Thursday. “I don’t want my wife to get her hands on my Gemini prompts because I’m surprised what it might tell her.”
The remark drew laughs but underscored a serious concern: AI systems are fast becoming therapists, nutritionists, financial advisers and confidants. Users are sharing intimate details with machines that promise convenience and insight. As Arora noted, if that data “falls into the wrong hands, it’s not a good idea.”
The broader danger, he argued, is structural. “AI is accelerating faster than our institutions, our governance frameworks, and even our intuition,” he said. At present, “the balance is tilted… not in the favour of trust, inclusion, security; it’s actually tilted in the favour of speed.” Every week brings new models and capabilities, often released before guardrails are fully formed.
As the world moves toward an “agentic” future — where AI systems can act autonomously — the risks multiply. “As soon as you give control to an agent, you have to worry about who’s responsible for the actions of those agents,” he said. If an AI mismanages your investments or transfers money without consent, accountability becomes blurred. The same applies to physical systems: how do you ensure that a robot designed to assist at home cannot be hijacked or manipulated?
Arora was blunt about the limits of prohibition. “AI is not going to go away if you govern it out of existence. It cannot be governed out of existence,” he said. The answer, instead, lies in embedding governance and accountability into the technology itself.
For cybersecurity firms, that means building protection from the outset. AI must be “secure, governed and controlled” — not patched after damage is done. That includes safeguarding vast datasets, monitoring AI-generated code that could be malicious or flawed, and preparing for adversarial AI systems designed to exploit vulnerabilities.
Yet Arora said he remains optimistic — not only that we will navigate this new terrain, but that it will create new opportunities. “I have a conviction that we’re going to need five times the number of technology people in the future than we have today,” he said, arguing that security, governance and oversight will generate new roles rather than eliminate them.