Meta Bets Big on Mango and Avocado to Reclaim Ground in the AI Arms Race

Meta plans to launch Mango and Avocado AI models in 2026, aiming to challenge OpenAI and Google with advanced image, video, and reasoning capabilities.

Meta’s AI Comeback Plan
Meta’s Mango and Avocado models signal a renewed push into AI, backed by elite talent, heavy spending, and ambitions beyond today’s language models. (Image: Meta headquarters)

Tech Desk — December 22, 2025:

Meta’s reported plan to launch two next-generation artificial intelligence models—Mango and Avocado—in the first half of 2026 marks its most assertive move yet to re-enter the front lines of the AI race. After months in which OpenAI and Google have dominated headlines and user adoption, Meta is signaling that it intends not merely to catch up, but to reshape the competitive landscape.

The projects, first reported by The Wall Street Journal, are being developed under Meta Superintelligence Labs (MSL), a newly formed division created to accelerate the company’s most ambitious AI work. Mango will focus on image and video generation, while Avocado is designed as a next-generation large language model with a strong emphasis on coding, reasoning, and technical problem-solving.

Central to this strategy is Alexandr Wang, Meta’s newly appointed Chief AI Officer and the founder of Scale AI. His arrival, which followed Meta’s investment of more than $14 billion for a near-majority stake in his startup, underscores the scale of the company’s commitment. Wang has been tasked with unifying Meta’s fragmented AI efforts and leading what insiders describe as an elite “super team” of more than 50 researchers, including veterans recruited from OpenAI.

Mango’s ambitions place it squarely against OpenAI’s Sora and Google’s rapidly evolving Gemini video and image tools. High-quality generative media has become one of the most visible and commercially attractive fronts in AI, particularly for creators, advertisers, and social platforms—areas where Meta already has deep reach. Success here could allow Meta to tightly integrate advanced generation tools into Instagram, Facebook, and future mixed-reality products.

Avocado, however, may carry even greater strategic weight. Meta’s existing Llama models have gained respect in open-source circles but have struggled to lead in advanced reasoning and coding benchmarks. Avocado is intended to change that. Wang has pointed to early work on so-called “world models,” which aim to help AI systems understand real-world dynamics rather than rely solely on next-word prediction. If realized, this approach could move Meta closer to perception-based AI, widely seen as a critical next step in the field.

The timing reflects mounting pressure across the industry. Google’s Gemini ecosystem has expanded rapidly, adding hundreds of millions of monthly users in a matter of months. OpenAI, meanwhile, continues to iterate quickly, expanding ChatGPT’s capabilities and pushing Sora deeper into creative workflows. These advances have left little room for hesitation, and Meta’s internal reorganization earlier this year suggests leadership recognizes the cost of moving too slowly.

Still, a 2026 debut carries risk. In an industry defined by rapid iteration, even a year can reshape the competitive order. For Mango and Avocado to matter, they will need to deliver a clear leap forward rather than incremental parity. Meta’s heavy spending, aggressive hiring, and centralized leadership suggest it is aiming for exactly that.

Ultimately, Mango and Avocado represent more than new AI models. They are a test of whether Meta can translate its vast resources and platforms into leadership in foundational AI technology. If successful, the company could reassert itself as a top-tier innovator alongside OpenAI and Google. If not, the gap in the AI arms race may become even harder to close.
