Published on April 30, 2026
Big AI companies have concentrated their resources on pre-trained transformer models, betting that scaling them will lead to human-level general intelligence. This approach relies heavily on backpropagation, the conventional method for training deep neural networks, and the tech landscape has increasingly consolidated around this single methodology.
However, skepticism is growing among experts. Ben Goertzel, who coined the term “AGI,” argues that the industry’s resources are misallocated: nearly every major AI lab is pursuing a variation of the same architecture instead of exploring innovative alternatives that could yield better results.
The costs of this strategy are becoming evident. While ever-larger and more complex models may yield incremental intelligence gains, the financial and resource costs are escalating. As these models consume billions of dollars in compute, budget constraints may crowd out investment in fundamentally novel architectures that could prove more effective at achieving human-level generalization.
The risks are palpable. Goertzel emphasizes that today’s transformer models cannot learn from new experiences in real time the way humans do; this limitation could stall progress toward genuine AGI. Although some researchers are investigating alternative neural architectures, the prevailing emphasis on scaling existing methods remains a significant barrier to innovation in the field.