Nayan Goel Highlights Emerging Security Risks as AI Adoption Accelerates
As the technology industry rapidly deploys artificial intelligence across banking, healthcare, and critical infrastructure, one Silicon Valley engineer is documenting what could go wrong—and building tools to test it.
There is a familiar pattern in cybersecurity: a new technology arrives, and security becomes an afterthought. It happened with the web in the 1990s and cloud computing in the 2000s. According to Nayan Goel, a Principal Application Security Engineer, it is happening again with AI—but at a much faster pace.
“The systems being deployed today are different from anything we’ve had to secure before,” Goel has noted. “They don’t behave predictably. They interpret language, infer intent, and take actions in ways their designers didn’t anticipate.”
Goel is among a small group of professionals working both to secure real AI systems and to research their risks. He works on AI systems in production at a major fintech company while also publishing research on how these systems can fail.
This dual role shapes his approach. While many researchers study AI systems in controlled settings, Goel works with systems that must function reliably in real-world environments, handling financial data and user activity. His research reflects this practical exposure.
His 2025 paper on federated learning highlights challenges in systems where AI models learn from distributed data without centralising it. He outlines risks such as model poisoning, where attackers inject harmful data; privacy leakage, where sensitive information may be exposed; and Sybil attacks, where fake identities manipulate the system. Instead of offering simple fixes, the research highlights trade-offs between security, accuracy, and system performance.
Goel has also contributed to OWASP, which develops widely used security standards. He co-authored a report on AI agents capable of taking actions without constant human input and contributed to the OWASP LLM Top 10, a list of key vulnerabilities in large language model applications.
Alongside his research, he has built tools to test AI systems. These include a GraphQL Security Tester that generates adversarial queries and a Prompt Injection Tester designed to simulate attacks on AI workflows. The aim is to move beyond theory and understand whether such threats can be replicated in real systems.
Taken together, his work points to a larger issue: AI is increasingly becoming part of critical systems, but the frameworks to secure it are still evolving. Current solutions often involve trade-offs rather than clear answers.
What is emerging is not a complete solution, but a clearer understanding of the risks. Securing AI systems requires new ways of thinking, especially as these systems learn and adapt in unpredictable ways.
For now, that work remains ongoing.
“The systems being deployed today are different from anything we’ve had to secure before,” Goel has noted. “They don’t behave predictably. They interpret language, infer intent, and take actions in ways their designers didn’t anticipate.”
Goel is among a small group of professionals working both to secure real AI systems and to research their risks. He works on AI systems in production at a major fintech company while also publishing research on how these systems can fail.
This dual role shapes his approach. While many researchers study AI systems in controlled settings, Goel works with systems that must function reliably in real-world environments, handling financial data and user activity. His research reflects this practical exposure.
His 2025 paper on federated learning highlights challenges in systems where AI models learn from distributed data without centralising it. He outlines risks such as model poisoning, where attackers inject harmful data; privacy leakage, where sensitive information may be exposed; and Sybil attacks, where fake identities manipulate the system. Instead of offering simple fixes, the research highlights trade-offs between security, accuracy, and system performance.
Goel has also contributed to OWASP, which develops widely used security standards. He co-authored a report on AI agents capable of taking actions without constant human input and contributed to the OWASP LLM Top 10, a list of key vulnerabilities in large language model applications.
Taken together, his work points to a larger issue: AI is increasingly becoming part of critical systems, but the frameworks to secure it are still evolving. Current solutions often involve trade-offs rather than clear answers.
What is emerging is not a complete solution, but a clearer understanding of the risks. Securing AI systems requires new ways of thinking, especially as these systems learn and adapt in unpredictable ways.
For now, that work remains ongoing.
Popular from Business
- How Iran is using China’s playbook to counter Donald Trump’s threats
- Iran offers navigation support to India in Hormuz amid US blockade; rejects charging toll
- Trump’s blockade of Strait of Hormuz begins: How will India be impacted?
- Iranian currency Rial hits 1.58 million per US dollar: Economy in freefall after war, civilian sector collapses
- Gold price today (April 13, 2026): How much 18K, 22K and 24K gold costs in your city? Check prices in Delhi, Mumbai, Chennai & more
end of article
Trending Stories
- Trump’s blockade of Strait of Hormuz begins: How will India be impacted?
- Twitch streamer and YouTuber Deshae Frost responds to viral boat incident video, denies slapping woman and calls clip misleading
- Delhi-Dehradun Expressway: Travel From Delhi To Dehradun In Just 2.5 Hours! Check Top Facts & Photos
- Will Sensex hit 95,000 by December 2026? Top reasons why Morgan Stanley sees a bull run in Indian stock market
- Gold, Silver Rate Today Live Updates: Gold, silver prices expected to be volatile this week; what should investors do?
- Strait crisis: Global traders race to secure oil barrels amid Hormuz supply crunch
- RBI proposes asset-based criteria for PSU inclusion in upper layer NBFC
Photostories
- Fashion flashback: When Asha Bhosle absolutely owned Manish Malhotra’s fashion show at 79 and overshadowed Priyanka Chopra
- You walk every day. You quit sugar. So why is your blood sugar still rising? Doctor explains what people do wrong
- TCS Nashik crisis: From POSH failure to conversion claims, the story keeps getting darker
- From a spacious living room to a terrace balcony with 400 plants: Shakti Mohan’s beautiful 3-storey lavish Mumbai house
- The karmic lesson you must learn based on your birth date
- Step inside Virat Kohli’s ₹80 crore Gurgaon mansion where a unique hanging swimming pool steals the show
- Noida workers defy wage hike, protest again as police step up security - ground report in photos
- 5 parenting takeaways modern moms can steal from Sadhguru and Alia Bhatt’s conversation
- 10-minute walk after meals can help control blood sugar spikes: Why timing matters more than step count and how to make it a daily habit
- How often you should replace kitchen sponges and brushes and why leaving them wet is dangerous
Up Next
Start a Conversation
Post comment