In an era where conversations around artificial intelligence are dominated by automation and scale, a quieter but increasingly urgent question is beginning to surface: can technology be designed to safeguard human well-being, not just enhance productivity? For engineer and researcher Nitish Shrivastava, this question has emerged through his work across high-stakes domains, from national security systems to enterprise-scale platforms, where precision, reliability and impact are critical.
In recent years, Shrivastava's work has increasingly focused on supporting individuals within organisations, particularly in addressing workplace stress and burnout. Drawing on his research into the relationship between stress, productivity, and decision-making, he positions well-being not as a peripheral concern but as a foundational element of performance. In this conversation with The Times of India, he reflects on his early work, the evolution of his thinking, and why he believes the next phase of technological innovation must place human resilience at its core.
Q1. You began your career working on natural language systems in a highly constrained government environment. What did that early experience teach you about innovation?

It was the early 2000s. No internet worth mentioning. No open-source libraries. No Stack Overflow. The consumer AI tools the world now takes for granted would not exist for two decades. The assignment was to build a system that could understand natural language well enough to serve national security. No one in the country had done it. We built it anyway. That experience taught me something I have never unlearned: constraints do not limit innovation. They define its character.
Q2. This work earned recognition at the highest levels. How did that shape your direction?

The President of India recognised my contributions to national security. NASSCOM named my pandemic work Innovation of the Year. Marquis Who's Who included me among leading technology executives globally. The inaugural NASSCOM Makers Honor as Innovator at the Technology and Leadership Forum 2026 was especially meaningful. Along the way, there have also been more than a hundred patent filings. These recognitions are deeply encouraging. They affirm the work, but they do not fully answer the deeper question of what remains worth building. At some point, success stopped feeling like a destination. It became a prompt to ask what truly matters, and where the work needs to go next.
Q3. You've described a turning point in your career that shifted your focus toward workplace well-being. What triggered that change?

A colleague broke. Not dramatically. Not suddenly. In the slow, corrosive way workplace stress works. Sleepless nights bleeding into anxious mornings. Confidence eroding into doubt. I was at the peak of the most productive stretch of my career. But watching someone talented and driven quietly fall apart forced a question I could not set aside: what is the point of building extraordinary technology if the people building it are being destroyed? Two decades of solving problems under extreme constraints had trained me to recognise a systems failure. This was exactly that, hiding in plain sight at every level of every organisation.
Q4. How did you move from research to invention?

My path from research to invention was shaped by a simple but difficult question: how can intelligence be used to help human beings before visible damage is done? That thinking led me to build AI systems that could detect stress patterns in real time, understand behavioural habits, and generate personalised wellness recommendations. I also developed productivity models that combined emotional resilience, mental energy, and measurable output, because I came to see well-being not as a soft idea around performance, but as its foundation. The same philosophy informed the "Life-Score," which I introduced as a way to think about fulfilment through integrity, purpose, and relationships rather than title or net worth. These were never separate initiatives for me. They were expressions of one belief: intelligence should help people live and work with greater balance, clarity, and meaning.
Q5. You have written openly about struggles and visibility in ways few executives do. What drives that?

I believe struggles are real and deserve to be spoken about. Too many professionals give their all and still feel invisible. They are not failing. They are carrying something the world has not yet learned to acknowledge. We treat vulnerability as a liability when it is actually the beginning of understanding the situation around us. Silence about what we feel does more damage than the feelings themselves. If my work in engineering taught me anything, it is that you cannot fix what you refuse to measure. And you cannot measure what you refuse to name.
Q6. You argue technology created many of the pressures professionals face today. Can the same technology reverse them, and how do you see its role evolving in shaping healthier workplaces?

We live in a time of relentless social comparison, always-on communication, and a pace of change that makes yesterday's skills obsolete. Yet most workplace responses still belong to an older world: annual surveys, generic wellness programmes, and mental health days people hesitate to use because the culture around them has not really changed. Much of this pressure was intensified by technology, so I believe technology also has a responsibility to help correct it. But that only happens when it is designed with human well-being in mind, not just efficiency. The systems I have worked on are built around that principle. They do not wait for visible burnout or breakdown. They look for patterns early, learn continuously, and enable timely support before the damage becomes harder to reverse. During the pandemic, we saw the practical value of that thinking at scale, when our team built a contact-tracing solution that helped employees return to the workplace safely and responsibly, work that was later recognised nationally.
Q7. What is the blueprint you are proposing, and what comes next?

The blueprint is to weave wellness, engagement, and productivity into one operating system. Where AI protects human health rather than replacing human judgment. Where data empowers people to understand their own patterns before those patterns break them. I built this conviction from inside the machine. Three decades of innovation gave me proof that the same engineering discipline behind national security systems can protect human health. I have spent a decade proving it is possible. The question is whether we are ready to build it.
Disclaimer - The above content is non-editorial, and TIL hereby disclaims any and all warranties, expressed or implied, relating to it, and does not guarantee, vouch for or necessarily endorse any of the content.