When you think of a research assistant, do you picture a machine that reads hundreds of papers and suggests paths you wouldn’t have considered? That’s exactly what Professor Ernest Ryu at UCLA experienced when he worked with GPT-5 to tackle an optimization problem that had been unsolved for decades.
A mystery of speed and stability
The problem revolves around a method called NAG (Nesterov Accelerated Gradient), introduced in 1983. In simple terms: NAG gives an algorithm a bit of "momentum" so it reaches the minimum of a function sooner. It’s fast, but the big open question was why that extra push didn’t break the method’s stability.
How can something accelerate so much without becoming unstable? For years, the community hadn’t found a complete mathematical explanation.
Think of it like pushing a shopping cart down a slope: a little push helps, but too much and you lose control. The math showed the speed, but the control mechanism remained unclear.
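To make the "momentum" idea concrete, here is a minimal sketch of one common formulation of NAG, applied to a toy one-dimensional quadratic. The function, step size, and momentum schedule are illustrative choices for exposition, not the specific setting analyzed in Ryu's paper.

```python
def nag(grad, x0, lr=0.1, steps=300):
    """Nesterov Accelerated Gradient (one common formulation).

    grad: gradient of the function to minimize
    x0:   starting point
    lr:   step size (should be at most 1/L for an L-smooth function)
    """
    x = x_prev = x0
    for k in range(1, steps + 1):
        # Momentum: extrapolate past the current iterate along
        # the direction of the last step ("the extra push").
        y = x + (k - 1) / (k + 2) * (x - x_prev)
        x_prev = x
        # Take the gradient step from the look-ahead point y, not from x.
        x = y - lr * grad(y)
    return x

# Toy example: minimize f(x) = x^2, whose gradient is 2x (minimum at 0).
x_star = nag(lambda x: 2 * x, x0=5.0)
```

The look-ahead step is exactly the "push on the cart": the iterates overshoot and oscillate around the minimum, yet the sequence still converges, and explaining rigorously why this extrapolation does not destabilize the method is the question Ryu was after.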
Ryu, with 15 years of experience in applied mathematics and optimization theory, first tried earlier models such as GPT-3.5 and saw their limits. When GPT-5 arrived, he tried again. He didn’t expect the machine to solve the problem on its own, but he did expect it to help him explore ideas faster.
Collaboration: machine proposes, human verifies
The dynamic was simple and powerful: Ryu asked GPT-5, received ideas (some correct, many incorrect or incomplete), and he evaluated, discarded, or developed the promising ones. The model didn’t invent new mathematical tools; it was exceptional at finding connections and techniques from adjacent fields that Ryu hadn’t known or wouldn’t have checked immediately.
In an intense stretch (about twelve hours spread over three days), after nearly a dozen approaches, a key suggestion emerged: restructure certain equations that govern NAG. The proposal wasn’t perfect, but it contained a structural feature Ryu could turn into the skeleton of a proof. He wrote the final proof, polishing and verifying every step.
"GPT-5 proposed steps that looked plausible but failed when examined. My job was to separate the valuable from the noise," Ryu explained.
Speed and limits: why the tool mattered
The big advantage was exploration speed. Ideas that might have taken days or weeks to surface were tested in hours. Beyond saving time, that steady stream of proposals changed the psychology of mathematical work: the sense of progress kept motivation high while probing a hard problem.
But there’s a clear caveat: GPT-5 can produce arguments that appear correct without actually being so. Ryu adopted safeguards: starting fresh chats to verify results, checking every calculation by hand, and using his expert judgment to choose which lines to pursue. In the end, GPT-5 was a tool, not an author: the preprint acknowledges the model and explains its contribution, while Ryu is listed as the human author.
What this means for mathematical research and beyond
Is this a revolution for mathematics? Not in the sense of replacing researchers, but yes in how problems are explored. GPT-5 proved to be a lab partner that speeds up intellectual trial-and-error, especially when you need to combine ideas from different subareas.
Ryu sums up the lesson with practical advice: be patient and adopt a collaborative mindset. If you try to make the model fail, it fails; if you work with it to extract value, it can open useful paths. For him, the balance is clear: AI amplifies human creativity and demands human rigor.
Looking ahead
This experience isn’t isolated: it shows how large language models can help explore complex problems when combined with human expertise and careful verification. The community will review the preprint in the coming months; in the meantime, the takeaway is practical and hopeful: AI is already helping solve real questions, but human oversight remains essential.
