
Neel Somani points out that while artificial intelligence may look like it runs on data and algorithms, its real engine is optimization. According to Somani, every breakthrough in the field — from training large language models to deploying real-time decision systems — comes down to solving optimization problems at scale. Optimization isn’t just a mathematical tool; it is the language modern AI speaks.
Neel Somani is a researcher, technologist, and entrepreneur who brings a unique perspective to this discussion. A UC Berkeley graduate with a triple major in mathematics, computer science, and business, he held roles at Airbnb and Citadel before founding Eclipse in 2022: Ethereum's fastest Layer 2, powered by the Solana Virtual Machine, which raised $65M. Beyond blockchain, Neel has become an active angel investor and philanthropist, and is now turning his focus toward new projects at the forefront of artificial intelligence.
Optimization is not new. Long before neural networks dominated the headlines, scientists and engineers relied on optimization to solve practical problems. From minimizing fuel consumption in airplanes to maximizing throughput in factories, optimization provided the mathematical scaffolding for better decision-making.
In AI, optimization took on new meaning. Training a model requires adjusting potentially billions of parameters to minimize errors and maximize performance. The famous “gradient descent” algorithm — where parameters are gradually adjusted in the direction that reduces error — epitomizes the optimization mindset. Every step in training a neural network is an act of searching for an optimal configuration in a vast, multidimensional landscape.
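The gradient descent loop described above can be sketched in a few lines of Python. The quadratic loss function here is a stand-in chosen for illustration, not any particular model's objective; real networks apply the same idea to billions of parameters at once.

```python
# Minimal gradient descent sketch: minimize the toy loss f(w) = (w - 3)^2.
# The minimum sits at w = 3; each step nudges w in the direction that
# reduces the error, scaled by a learning rate.

def gradient(w):
    # Derivative of (w - 3)^2 with respect to w.
    return 2 * (w - 3)

w = 0.0             # initial parameter guess
learning_rate = 0.1

for step in range(100):
    w -= learning_rate * gradient(w)

print(round(w, 4))  # converges toward 3.0
```

Each iteration is exactly the "act of searching" the article describes: follow the slope downhill until the error stops shrinking.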
This is why many practitioners describe training AI not as teaching or programming in the traditional sense, but as tuning: pushing parameters toward better states, guided by mathematical signals of improvement. “Optimization is less about programming rules and more about guiding parameters toward better solutions,” says Neel Somani.
At its core, learning in AI is synonymous with optimization: when a system learns, it is optimizing a function. Whatever the task, optimization is the invisible hand guiding the model toward improved performance.
The optimization perspective also explains why modern AI is so computationally intensive. These systems are not merely running programs, but rather solving extraordinarily complex optimization problems at scale, often requiring specialized hardware, such as GPUs or TPUs, to navigate the search space efficiently.
“Learning in AI isn’t magic — it’s optimization,” says Neel Somani. “Whether a model is matching predictions to labels, clustering data, or chasing long-term rewards, it’s always searching for a better solution within a defined space. Every improvement comes from that process.”
Large language models (LLMs) like GPT-5 or similar systems bring optimization into sharp relief. Training such a system requires tuning hundreds of billions of parameters across massive datasets. The optimization goal is deceptively simple: minimize the difference between the model’s predicted next token (word, punctuation, symbol) and the actual token in the training data.
But this “simple” optimization is carried out across trillions of predictions, involving colossal computations and intricate scheduling of resources. Every improvement in model accuracy, fluency, or reasoning ability emerges from better optimization strategies — be it improved gradient descent algorithms, clever learning rate schedules, or regularization techniques that prevent overfitting.
Even after training, optimization continues. Fine-tuning for specific tasks, reinforcement learning with human feedback, and prompt engineering all represent layers of optimization aimed at making the system more useful, safer, and aligned with human values.
Optimization does not end once a model is trained. In deployment, optimization governs efficiency, scalability, and responsiveness.
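One representative deployment-time optimization (chosen here as an illustration, not a technique named by Somani) is weight quantization: compressing 32-bit float weights into 8-bit integers, trading a little precision for memory and speed.

```python
# Sketch of symmetric int8 quantization on a handful of toy weights.
# Real systems quantize entire tensors per layer; the idea is the same.

weights = [0.73, -0.41, 0.02, -0.98, 0.55]

# Map the weight range onto the int8 range [-127, 127].
scale = max(abs(w) for w in weights) / 127

quantized = [round(w / scale) for w in weights]  # stored as small integers
restored = [q * scale for q in quantized]        # dequantized at inference

print(quantized)  # integers in [-127, 127]
print(restored)   # close to, but not exactly, the original weights
```

The optimization trade-off is explicit: each weight now occupies a quarter of the memory, at the cost of a small, bounded rounding error.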
In short, optimization is not a one-time process but an ongoing language AI uses to interact with its environment and with us.
“Training gets the headlines, but deployment is where optimization proves its value,” notes Somani. “Every fraction of a second saved or watt of energy reduced can make the difference between a breakthrough system and one that never scales.”
While optimization enables AI’s breakthroughs, it also exposes limitations. Optimization can be too effective in the wrong direction, leading to unintended consequences.
These pitfalls remind us that optimization is only as good as the goals we define. Choosing the right objective function is as important as the optimization process itself.
Understanding optimization as the language of AI reframes how we think about human-AI interaction. We don’t just “ask” AI systems to perform tasks — we define objectives, constraints, and feedback signals, and then let optimization carry the work forward.
This is why concepts like “alignment” and “safety” matter so much. If AI optimizes for goals not fully aligned with human values, the results can be misaligned or harmful. Researchers increasingly focus on designing objective functions that capture not just efficiency or accuracy, but also ethics, interpretability, and trust.
In practice, optimization creates a feedback loop between humans and AI. We set goals, the system optimizes, we observe results, and then we refine. This dynamic process resembles conversation — a negotiation conducted not in words but in optimization criteria.
“Interacting with AI isn’t about giving commands — it’s about setting the right objectives,” says Somani. “If those targets are misaligned, optimization can produce results that are technically correct but practically harmful.”
As AI advances, optimization itself will continue to evolve. The story of AI’s future will be the story of new optimization methods: smarter, safer, and more nuanced than those of today.
Optimization is more than a mathematical tool; it is the very language modern AI systems speak. From training deep neural networks to serving billions of users worldwide, every step in the lifecycle of AI depends on optimization. It enables learning, drives performance, and defines how systems interact with their environment.
But like any language, optimization can be misunderstood or misapplied. Its power lies in clarity of goals and careful design of objectives. To build AI systems that serve humanity well, we must become fluent in this language — not only as engineers and researchers, but as a society shaping the future of intelligence.
The next time you hear about a breakthrough in AI, remember: behind the scenes, optimization is doing the talking.
Read more on International Business Times

