OpenAI’s chief scientist Jakub Pachocki says the company is nearing one of its milestone goals: building AI systems that can function at the level of a research intern. Speaking on the Unsupervised Learning podcast, as reported by Business Insider, Pachocki pointed to breakthroughs in coding, math reasoning, and physics research as signs that AI is progressing toward handling complex, multi-step technical work with less human oversight. “I definitely see this as a signal that something here is on track,” he said.
Intern vs. researcher
Pachocki explained that the key measure is how long a model can work mostly autonomously. “The way I would distinguish a research intern from a full automated researcher is the span of time that we would have it work mostly autonomously,” he said.
OpenAI has set an internal goal of building an “AI research intern” by September 2026, followed by a fully autonomous AI researcher by March 2028. CEO Sam Altman later posted on X that the company “may totally fail” at this goal but stressed the importance of transparency given the potential impact.
Explosive growth of coding tools
Pachocki also highlighted the explosive growth of coding tools like Codex, which now handle much of OpenAI’s programming work. He pointed to math benchmarks as a “north star” for improving reasoning, since they are easy to verify.
“For more specific technical ideas, like I have this particular idea how to improve the models, how to run this evaluation differently, I think we have the pieces that we mostly just need to put together,” he said.
Despite progress, Pachocki cautioned that AI is not yet ready to operate independently at the level of a full researcher. “I don’t expect we’ll have systems where you just tell them, ‘go improve your model capability, go solve alignment,’ and they will do it, not this year,” he said.
OpenAI’s push toward AI systems that can function like research interns reflects the company’s ambition to make models more autonomous and useful in technical fields.