Google CEO Sundar Pichai warns users not to blindly trust AI as Google readies Gemini 3.0 amid lawsuits, safety criticism, and fears of an AI investment bubble.
*Pichai’s warning about AI errors highlights mounting pressure on Google as it prepares Gemini 3.0 amid lawsuits and fears of an AI bubble. Image: CH*
LONDON — November 19
Google and Alphabet CEO Sundar Pichai’s warning that artificial intelligence systems remain error-prone lands at a moment of heightened scrutiny for Google DeepMind and the broader AI industry. In a BBC interview aired Tuesday, Pichai emphasized that users should treat AI as only one tool among many—especially when seeking factual information.
AI assistants can be useful “if you want to creatively write something,” he said, but cautioned that people “have to learn to use these tools for what they’re good at, and not blindly trust everything they say. The current state-of-the-art AI technology is prone to some errors.”
Pichai’s comments come as Google prepares its next major AI release, Gemini 3.0, expected by year’s end. Earlier versions suffered reputational blows after restrictive safety filters and diversity-focused image rules produced historically inaccurate outputs, triggering widespread criticism and turning Google into a punchline across social platforms.
Gemini 3.0 is meant to be a reset—faster, more capable, and less error-prone—but Pichai’s remarks suggest Google wants to manage expectations before launch. A model marketed as both powerful and fallible reflects a broader industry shift: acknowledging limitations is now a strategic necessity.
Beyond model performance, Google faces a lawsuit in California federal court alleging Gemini harvested user data across Gmail, chat, and video services without permission. The claims—which Google denies—revive long-standing concerns about how large AI models are trained and monitored.
Even if the allegations prove unfounded, the case taps into growing skepticism about Big Tech’s handling of sensitive information. With global regulators tightening rules around AI transparency and data rights, Google cannot afford another reputational crisis as it moves into its next development cycle.
Pichai also addressed fears that the rapid growth of AI has inflated a tech bubble. With major firms collectively spending an estimated $400 billion annually on AI models, cloud infrastructure, and talent, industry analysts warn that investment could outpace realistic returns.
Asked whether Google would be shielded in an AI downturn, Pichai was blunt: “I think no company is going to be immune, including us.”
His answer signals internal recognition that even the largest players cannot escape macroeconomic risk. AI is currently a bet with extraordinary upside—but also extraordinary burn rate. If revenue fails to match the investment frenzy, the shock could be felt across the entire sector.
Taken together, Pichai’s comments amount to a strategic recalibration. Google DeepMind is positioning itself as both ambitious and self-aware: pushing the frontier while acknowledging that AI is still unreliable, legally vulnerable, and financially volatile.
As Gemini 3.0 approaches, Google must convince both the public and investors that its next wave of AI is not only more capable but more trustworthy. The question now is whether the company can deliver breakthroughs quickly enough to justify the immense costs and counter the skepticism rising around the industry.
What is clear from Pichai’s remarks is that Google is preparing for a future where excitement, risk, and doubt coexist—and where even its most powerful AI systems must be introduced with a warning label.
