Published on April 30, 2026
Big AI companies have concentrated their resources on pre-trained transformer models, betting that scaling these will lead to human-level general intelligence. This approach relies heavily on backpropagation, the conventional method for training deep neural networks. The tech landscape has increasingly converged on this single methodology.
However, skepticism is growing among experts. Ben Goertzel, who popularized the term “AGI,” believes the industry’s fixation on transformer models amounts to a misallocation of resources. He argues that nearly all major AI labs are pursuing variations of the same architecture instead of exploring alternatives that could yield better results.
The costs of this strategy are becoming evident. While ever-larger and more complex models may yield some intelligence gains, the financial and resource demands are escalating. As these models consume billions of dollars in compute, those constraints may crowd out investment in fundamentally novel architectures that could prove more effective at achieving human-level generalization.
The risks are palpable. Goertzel emphasizes that existing transformer models cannot learn from new experiences in real time the way humans can. This limitation could hinder progress toward genuine AGI. Although some researchers are investigating alternative neural architectures, the prevailing emphasis on scaling existing methods remains a significant barrier to innovation in the field.