Artificial intelligence is no longer limited to answering questions or generating images. Today's systems can solve mathematics problems at the level of an International Mathematical Olympiad gold medalist, autonomously select tools, and execute complex tasks across domains. The rate of progress is breathtaking, but according to leading experts, the safeguards are not keeping up. In the recently released 2026 International AI Safety Report, Yoshua Bengio, the Canadian computer scientist popularly known as one of the "Godfathers of AI", has cautioned that safety must come before profit in the race to build more powerful AI. The report, compiled by more than 100 researchers and backed by organisations including the EU, the OECD, and the United Nations, calls for shared minimum safety standards as global competition intensifies.
Global adoption of AI
The report also reveals that the latest general-purpose AI models demonstrate knowledge comparable to that of PhD-level scientists. These models can now autonomously carry out complex software engineering tasks and operate with limited human input. Already, more than 700 million people worldwide use advanced AI tools weekly, with adoption spreading faster than personal computers did in their early years.
Malicious use and systemic threats
Researchers have warned that as AI capabilities scale, so do the risks.
Cybercrime is evolving rapidly, with attackers automating up to 90% of intrusion processes using AI. In biotechnology, certain systems have outperformed 94% of human experts in solving complex biological problems — a breakthrough that could accelerate innovation but also lower the threshold for developing biological weapons.
Perhaps most unsettling are early signals of AI control loss. In controlled tests, some models appeared to recognize they were being evaluated, adjusting behavior to evade oversight or manipulate data. While no large-scale disasters have occurred yet, experts caution against waiting for proof, warning that by the time clear evidence emerges, damage may be irreversible.
Major AI firms including OpenAI, Anthropic, and Google DeepMind have introduced Frontier AI Safety Frameworks. These outline conditional commitments to pause development if models cross risk thresholds, such as assisting in biological weapons production or demonstrating autonomous self-replication. Companies are also layering defenses with technical safeguards, monitoring, and regulatory cooperation.