17 months ago, I made a prediction: the cost of training AI models would plummet at an exponential rate, following a trajectory even steeper than Moore’s Law and mirroring the cost reduction in DNA sequencing.
OpenAI's model training cost would soon become a fraction of current expenditures, with estimates suggesting a 10x reduction within 18 months and a 100x reduction over time.
That prediction applied concepts from a 2016 article titled “The Simple Economics of Artificial Intelligence”.
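To make those numbers concrete, here is a rough back-of-the-envelope sketch in Python. The arithmetic is my own illustration, not from the 2016 article, and it assumes a smooth exponential decline plus a Moore’s Law baseline of costs halving roughly every two years:

```python
# Back-of-the-envelope check on the claimed cost-decline rates.
# Assumptions (illustrative, not from the original article): a smooth
# exponential decline, and a Moore's Law baseline of halving every ~24 months.
import math

def annualized_decline(factor: float, months: float) -> float:
    """Implied annual cost-reduction factor for a total reduction
    of `factor` achieved over `months` months."""
    return factor ** (12.0 / months)

ten_x_in_18_months = annualized_decline(10, 18)   # ~4.6x cheaper per year
moores_law_baseline = annualized_decline(2, 24)   # ~1.4x cheaper per year

# How long until a 100x total reduction at the 10x-per-18-months pace?
months_to_100x = 18 * math.log(100) / math.log(10)  # 36 months

print(f"Implied annual reduction: {ten_x_in_18_months:.1f}x "
      f"vs Moore's Law baseline {moores_law_baseline:.1f}x")
print(f"Months to reach 100x at that pace: {months_to_100x:.0f}")
```

At that pace, a 100x total reduction arrives in roughly three years, which is why a steeper-than-Moore’s-Law curve matters so much for training budgets.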
Now, with DeepSeek bearing out this thesis, the AI landscape is evolving even faster than anticipated.
The fundamental takeaway? The race to AGI and then ASI is not just about raw processing power but about maximizing efficiencies across model training, a diaspora of training data, inference … and, in my opinion, quantum.
The Competitive Landscape: Why Processing Power Still Matters
Companies like X.com (formerly Twitter), OpenAI, Meta, and DeepSeek are all pushing toward AGI, but the path forward is increasingly defined by efficiency rather than brute force. DeepSeek's ability to achieve significant cost reductions within constrained resources signals a new phase of competition—one where breakthroughs in algorithmic efficiency, specialized chipsets, and data architectures will shape the playing field.
This doesn’t diminish the need for advanced chips, cloud infrastructure, or hyperscale compute; if anything, it fuels that need, per the Jevons Paradox: when a resource becomes cheaper to use, total consumption of it tends to rise. The demand for more efficient clusters capable of running higher-order AI systems, processing larger volumes of data, and executing more complex reasoning tasks will only grow. Whether at the corporate or national level, the real north star remains clear: AGI.
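To see why efficiency gains can increase rather than decrease total compute spend, here is a toy numerical sketch. The figures are hypothetical, chosen only to illustrate the Jevons effect:

```python
# Toy illustration of the Jevons Paradox applied to AI compute spend.
# All numbers below are hypothetical.
cost_per_unit_before = 1.0   # normalized cost per "unit" of model capability
cost_per_unit_after = 0.1    # assumed 10x efficiency gain

units_demanded_before = 100    # baseline demand for capability
units_demanded_after = 2_000   # assumed 20x demand growth once capability is cheap

spend_before = cost_per_unit_before * units_demanded_before  # 100
spend_after = cost_per_unit_after * units_demanded_after     # 200

print(f"Total spend before: {spend_before:.0f}, after: {spend_after:.0f}")
# Efficiency improved 10x, yet total spend doubled: the Jevons effect.
```

The specific numbers don’t matter; the point is that when demand grows faster than cost falls, aggregate spending on compute rises.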
What Comes Next? A New Approach to AI Market Validation
DeepSeek’s success proves that some of these market assumptions were correct. But how many more intuitive predictions—like this one—will hold up to scrutiny?
Starting now, I’ll take bold AI theses like this one—those that have proven partially correct over time—and test them further. I’ll speak with experts, founders, researchers, and investors in AI to either invalidate or validate these theories.
Each interview will be an opportunity to pressure-test assumptions, uncover new insights, and refine the vision for AI’s future. I’ll document this process—the conversations, takeaways, and implications—and share them here.
Stay tuned. This is just the beginning.