As we approach the AI Impact Summit 2026, global AI ecosystems are undergoing a brutal yet necessary recalibration, driven by the realisation that current AI systems, especially large language models (LLMs), are powerful pattern recognisers but still brittle tools when treated as general problem solvers rather than as narrowly scoped components within wider systems.

The strategic implications of generative AI systems have generated competing visions across scientific, policy, and commercial communities since their widespread adoption began in the early 2020s. Technical breakthroughs in large language and vision models during early 2023, accompanied by advances in data processing and computational architectures, created narratives of transformative potential across economic sectors. Yet these models have not resolved long-standing issues around reasoning and causal understanding that classical AI researchers have highlighted for years.

Meanwhile, the semiconductor rivalry and Taiwan-focused supply constraints have exposed how an LLM-centric view of AI has become strategically narrow for countries like India. If India's AI policy anchors itself only to scaling ever-bigger neural networks hosted abroad, it risks deep technological dependence without acquiring robust capabilities in data engineering, evaluation, and alternative architectures.

This article therefore proposes that it is high time India pursued a distinct form of technoeconomic strategic hedging, as its key economic diplomacy deliberations for the AI Impact Summit 2026 get under way next week. Such hedging should explicitly diversify across AI paradigms, including classical machine learning, optimisation, hybrid neuro-symbolic systems, and domain-specific models, rather than implicitly narrowing national AI priorities to the single dimension of ever-larger LLMs.

History has witnessed cycles of hype across sectors, domains, and other facets of human life. Some hype cycles represent genuine technological progress and justified enthusiasm, while others reflect market dynamics and technically unsound practices designed primarily to extract capital rather than deliver durable value. The current AI landscape leans towards the latter pattern in important respects: large language models are frequently marketed as general reasoning engines, even though they remain pattern recognisers with persistent limitations in robustness and verifiability.

When market concentration allows a handful of actors to control both the technology narrative and the infrastructure supporting it, nations face strategic vulnerability to hype-driven investment cycles that may not align with their development priorities. This vulnerability becomes evident when new model announcements trigger sharp stock-market reactions, as seen in market moves linked to DeepSeek releases in January 2025 and subsequent launches of advanced coding models in early February 2026: in each case, expectations about sudden disruption outpaced any clear evidence of stable, production-grade value creation. For countries like India, these episodes are instructive not as warnings against innovation, but as reminders that market perception around “frontier AI” can move much faster than the underlying capability to solve concrete problems reliably. This mismatch is unsurprising: if organisations do not clearly define the problem, the data, and the metric for success, no frontier model will consistently deliver value beyond demonstrations.
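To make that last point concrete, here is a minimal, hypothetical sketch in Python of what “defining the problem, the data, and the metric” can look like before any model is considered. The names (`TaskSpec`, `evaluate_candidate`) and the toy task are invented for illustration, not drawn from any real deployment:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class TaskSpec:
    """Hypothetical task definition: problem, data, and success metric."""
    problem: str                  # what the system must do, in plain language
    eval_inputs: Sequence[str]    # held-out evaluation inputs for the task
    eval_targets: Sequence[str]   # expected outputs agreed with domain experts
    min_accuracy: float           # threshold below which deployment is refused

def evaluate_candidate(spec: TaskSpec, model: Callable[[str], str]) -> bool:
    """Score any candidate model against the spec; approve only if it clears the bar."""
    correct = sum(model(x) == y for x, y in zip(spec.eval_inputs, spec.eval_targets))
    accuracy = correct / len(spec.eval_inputs)
    print(f"{spec.problem}: accuracy={accuracy:.2f} (required {spec.min_accuracy:.2f})")
    return accuracy >= spec.min_accuracy

# Toy example: a frontier LLM and a small rules-based system would be
# judged by exactly the same yardstick.
spec = TaskSpec(
    problem="Classify a grievance as 'urgent' or 'routine'",
    eval_inputs=["water supply failed", "request for duplicate certificate"],
    eval_targets=["urgent", "routine"],
    min_accuracy=0.9,
)
toy_model = lambda text: "urgent" if "failed" in text else "routine"
assert evaluate_candidate(spec, toy_model)
```

The point is not the toy model but the discipline: the spec, not the model announcement, decides whether a system ships.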
At the same time, long-standing critiques from classical AI and cognitive science highlight that simply scaling current architectures is unlikely to produce strong compositional reasoning or robust understanding. Taken together, these perspectives suggest that technoeconomic strategic hedging should not be about rejecting large models, but about placing them in a broader portfolio of approaches suited to specific domains and constraints, so that systems can be engineered, evaluated, and governed in line with sector-specific needs.

In such a setting, India's strategic question is not whether to participate in the global LLM ecosystem, but how to avoid narrowing its AI ambitions to a single class of models whose market narratives can overshadow their technical constraints. Preparing for structural drifts in global AI markets means investing in data quality, evaluation culture, and diverse architectures, so that India can absorb the benefits of frontier systems where they genuinely help, while maintaining the autonomy to build, deploy, and scrutinise its own AI systems across critical domains.

Strategic autonomy without matching domestic capability in software, chips, engines, and energy leads to continued dependency. In the AI context, this highlights a basic distinction between merely consuming externally built models and infrastructure, and developing capacity across data, architectures, and deployment practices within the country.

India is perhaps the most open, and most exploited, testbed of data for the rest of the world, and hence a deeply democratised data and consumer market. A large share of global AI systems are trained, fine-tuned, or evaluated on behaviours, languages, and environments that include Indian data, yet the modelling, evaluation, and deployment capabilities often reside elsewhere. Yet India is also a significant beneficiary of the digital nomad economy, which even China cannot replicate given its state-led data-outflow and ownership laws, despite its achievements with DeepSeek (whose founder ran the quantitative hedge fund High-Flyer, where he developed and used AI for stock-market trading strategies). This combination means that India can support data localisation where it chooses, through local storage, processing, and governance, without cutting itself off from global AI collaboration and markets. Converting this data position into durable technical advantage requires investing not just in compute, but in data pipelines, labelling infrastructure, sectoral benchmarks, and engineering practices that allow models to be tested, monitored, and updated in production.

A true hedge requires betting on what comes after the current wave. India must therefore diversify beyond LLMs into alternative paradigms such as neurocompositional methods, symbolic AI, and domain-specific systems. These architectures allow explicit structure (rules, knowledge graphs, constraint solvers) to be combined with learned components, which is valuable where reasoning transparency, auditability, and stable generalisation matter more than fluent text generation; a minimal sketch of this hybrid pattern follows below. Such approaches offer protection against LLM saturation and open the door to democratised research. From a practical standpoint, this diversity also reduces the likelihood that a single class of models, with its known failure modes, becomes the bottleneck for critical applications.
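As an illustration of that hybrid pattern, here is a minimal, hypothetical sketch in Python in which a learned classifier proposes an answer and an explicit symbolic rule layer can veto or override it, leaving an auditable trail. The rules, names, and thresholds are invented for the example:

```python
from typing import Callable, Optional

# Learned component: any statistical model mapping an application record to a
# score. A toy stand-in here; in practice this would be a trained classifier.
def learned_score(application: dict) -> float:
    """Toy stand-in for a learned model's approval score in [0, 1]."""
    return 0.8 if application.get("income", 0) > 20000 else 0.3

# Symbolic component: explicit, human-readable rules that take precedence.
# Each rule returns a decision string, or None if it does not apply.
Rule = Callable[[dict], Optional[str]]

RULES: list[Rule] = [
    lambda a: "reject: applicant under 18" if a.get("age", 0) < 18 else None,
    lambda a: "approve: statutory guarantee scheme" if a.get("scheme") == "guaranteed" else None,
]

def decide(application: dict) -> tuple[str, str]:
    """Hybrid decision: rules first (auditable), learned score as fallback."""
    for rule in RULES:
        outcome = rule(application)
        if outcome is not None:
            return outcome, "symbolic rule"   # transparent, verifiable path
    score = learned_score(application)
    decision = "approve" if score >= 0.5 else "refer to human reviewer"
    return f"{decision} (score={score:.2f})", "learned model"

print(decide({"age": 17, "income": 50000}))  # rule fires; model never consulted
print(decide({"age": 30, "income": 50000}))  # model decides, with logged score
print(decide({"age": 30, "income": 5000}))   # low score escalates to a human
```

The design choice worth noting is that the symbolic layer, not the learned model, owns the non-negotiable constraints, which is precisely what makes such systems easier to audit and govern.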
Perhaps the most radical proposal for the Summit is the “Framework Inversion” principle: treating data governance as the primary framework, with AI governance as merely a subset. Under this inversion, questions of consent, provenance, labelling, access, and retention are addressed first; model choices then follow from what is permissible and technically appropriate for specific datasets and tasks. This structure also forces clarity about intended use: if the data cannot be collected, documented, and governed for a particular purpose, the model should not be deployed for that purpose, regardless of its general capabilities. If models are volatile, data is the controllable asset. India must prioritise data infrastructure, privacy frameworks, and cross-border flow management over chasing the latest model architecture. Technically, this means investing in storage, metadata systems, version control for datasets, and monitoring pipelines that track data drift and label quality over time (a closing sketch below illustrates such a drift check and its escalation thresholds).

An effective approach to AI safety and governance in India can focus on empirical evidence and clear measurement. This includes building a stronger epistemic basis by systematically collecting and sharing data on model failures, near-misses, and misuse cases, using transparent methodologies that allow independent inspection and replication. It also involves a multidisciplinary approach with testable hypotheses for each sector, combining computer science, statistics, domain knowledge, and legal analysis around specific questions rather than broad, abstract principles. At a basic level, risk needs to be quantified by defining what counts as a material failure, what thresholds are acceptable, and how those thresholds map onto limits on deployment, monitoring, and escalation. Where AI systems interact with or substitute for human activities, comparisons with human performance should be expressed as concrete limits on when a system may assist, when it requires supervision, and when full automation is not appropriate.

Educational narratives on AI safety can emphasise epistemic humility, highlighting that current systems are tools with known limitations that must be understood and managed. Stakeholder engagement can be organised in two phases: a research and documentation phase before public communication, followed by a phase focused on perception, business impact, and educational needs, given the stochastic nature of AI risk narratives. A simple classification of AI systems by function, autonomy, and domain helps avoid over-generalisation from LLM-centric debates, while a clear articulation of intended purpose and intended usage, supported by discernible product, service, tool, and infrastructure categories, enables verifiable claims and accountability. Examples such as the Sahyog Portal and Sanchar Saathi indicate that clearly defined objectives, clean data, and modest models, integrated into operational workflows, can produce measurable outcomes without relying on frontier-scale systems.
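As a closing illustration of both the drift-monitoring and the threshold-to-escalation points above, here is a minimal, hypothetical Python sketch that quantifies data drift with the population stability index (PSI) and maps the measured value onto operational actions. The feature, data, and cut-offs are invented for the example:

```python
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """Population stability index between a baseline and a current sample.
    Larger values indicate stronger drift in the feature's distribution."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) or 1.0  # guard against a constant baseline

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / width * bins)
            counts[max(0, min(idx, bins - 1))] += 1   # clamp out-of-range values
        return [max(c / len(values), 1e-6) for c in counts]  # floor avoids log(0)

    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

def escalation(score: float) -> str:
    """Map a measured drift score onto concrete operational actions
    (illustrative cut-offs; real limits would be set sector by sector)."""
    if score < 0.10:
        return "no action: distribution stable"
    if score < 0.25:
        return "monitor: flag for review as a retraining candidate"
    return "escalate: pause automated decisions, require human sign-off"

baseline = [float(x % 50) for x in range(500)]         # training-time feature values
drifted = [float(x % 50) + 15.0 for x in range(500)]   # shifted production values
score = psi(baseline, drifted)
print(f"PSI = {score:.3f} -> {escalation(score)}")
```

The same pattern extends naturally to label quality: periodic re-annotation of a sample, compared against the recorded labels, yields a score that can be gated by the same kind of thresholds.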
By Abhivardhan, President, Indian Society of Artificial Intelligence and Law, and Founder, Indic Pacific Legal Research; and Deepanshu Singh, Distinguished Expert of the Advisory Council, Indian Society of Artificial Intelligence and Law, and Senior Programme Manager, GATI Foundation.

