By Hervé Legenvre and Erkko Autio
This article is part three of an ongoing series – The AI Power Plays – that explores the fiercely competitive AI landscape, where tech giants and startups battle for dominance while navigating the delicate balance of competition and collaboration to stay at the cutting edge of AI innovation.
For several years, Google was widely regarded as the gravity well of the AI ecosystem, owing to its ability to steer developments through the judicious release of AI tools and technologies as open source. However, the meteoric rise of ChatGPT, the AI-driven chatbot developed by OpenAI, has exposed the limits of Google's capacity to translate that ecosystem momentum into proprietary competitive advantage. In this article, we examine these dynamics.
Open Technologies and Ecosystem Momentum: The TensorFlow Story
In 2015, Google open-sourced TensorFlow, a machine learning framework that became a cornerstone of the AI and ML technology stack. By virtue of being openly available to researchers and developers, TensorFlow helped consolidate the hitherto fragmented AI and ML technology landscape, and it rapidly became the dominant platform for designing and deploying AI models. This move had a dramatic effect on AI ecosystem momentum, as it boosted the development and sharing of AI models, complementary technologies, and related tools. Large cohorts of developers were trained through online courses and books published by high-profile AI researchers. The growing TensorFlow community quickly started to build upon and contribute to the platform. This momentum attracted a new generation of computer scientists and developers to AI. Importantly, it accelerated the discovery and validation of new AI use cases, as AI start-ups attracted large amounts of capital and clients across different sectors, thereby driving market creation for AI applications.
By 2020, TensorFlow had reached 153,000 GitHub stars and 115 million downloads—a testament to its widespread adoption. By open-sourcing TensorFlow, Google had not simply forfeited intellectual property; rather, it was establishing and consolidating its AI ecosystem leadership.
Capturing Profit from AI: A Nearly Perfect Three-Horizon Strategy
While Google boosted open innovation through its selective open-source strategy, it also implemented a ‘Three-Horizon Strategy’ to capture value from AI. Horizon 1 aimed to harvest AI advances within Google’s core products. Horizon 2 aimed to catch up with Amazon Web Services and Microsoft Azure in the cloud services business by integrating AI advances into Google’s cloud offerings. Horizon 3 aimed to harvest AI advances through ‘moonshot’ projects.
Horizon 1: Integrating AI into Core Products
The first horizon in Google’s AI strategy focused on embedding AI capabilities across its existing product portfolio, including Google Search, Google Ads, Google Docs, and YouTube. For instance, following its introduction of the transformer architecture in 2017, Google integrated its BERT model into Google Search in 2019 to provide more nuanced and contextually appropriate results. Many such integrations, while enhancing the functionalities of Google’s core products, were often almost imperceptible to Google’s search users.
Horizon 2: Cloud AI Tools and Solutions
The second horizon of Google’s AI strategy sought to differentiate its cloud services with enhanced AI capabilities. In the mid-2010s, Google was trailing behind industry leaders Amazon Web Services (AWS) and Microsoft Azure. To catch up, Google launched several AI tools, including AutoML in 2018, Google AI Platform in 2019, and Vertex AI in 2021. These tools provided advanced interfaces for building and deploying AI and machine learning applications within Google Cloud. However, AWS and Microsoft quickly imitated many of Google’s innovations, as many models and tools essential for AI were accessible under permissive open-source software (OSS) licenses.
Horizon 3: Moonshot Projects Leveraging AI
The third horizon of Google’s strategy involved moonshot projects that pushed the boundaries of AI to create powerful new applications. Among these, Waymo stood out as a particularly ambitious initiative: leveraging Google’s AI expertise to bring self-driving technology to market and establish leadership in an autonomous vehicle sector far removed from Google’s core markets. However, Horizon 3 projects have yet to generate significant revenue, despite helping expand Google’s strategic reach.
The Unexpected Shock: Generative AI and ChatGPT
Google’s three-horizon strategy was disrupted by the rapid rise of generative AI, particularly the launch of OpenAI’s ChatGPT in the fall of 2022. This conversational AI model instantly captured consumer imagination and enterprise interest worldwide, and Microsoft made strategic moves to incorporate OpenAI’s technology into Bing and its Office products.
The launch exposed a weakness in Google’s selectively open AI strategy: the company struggled to quickly bring to market a conversational AI that matched ChatGPT’s capabilities.
While Google’s three-horizon approach looked effective in both capturing current value and laying the foundation for future growth, the rapid adoption of ChatGPT demonstrated that in a dynamic and rapidly evolving technology ecosystem, momentum can be seized through bold and well-timed moves.
Despite Google’s extensive investments in nurturing the AI and machine learning ecosystem, its response to ChatGPT appeared reactive rather than proactive. This delay has prompted observers to ask why a company as well positioned as Google was caught off guard by a competitor.
Below, we examine three key hypotheses for why Google found it challenging to respond rapidly to OpenAI’s threat and examine how Google’s response has reshaped its AI strategy.
Hypothesis 1: Myopia—An Innovation Blind Spot
Our first hypothesis centres on the concept of myopia: a strategic blind spot that prevented Google from fully appreciating the transformative impact of conversational AI. While Google viewed AI as a powerful complementary technology that could boost its core businesses and moonshot projects, the company underestimated the disruptive potential of a freely accessible AI chatbot. In contrast, OpenAI quickly framed ChatGPT not merely as a technology demonstrator, but as a user-centric product in its own right, one that offered a virtually limitless range of potential use cases.
Hypothesis 2: Fear of Hallucinations and Ethical Concerns
The second hypothesis highlights Google’s concerns regarding the inherent risks of releasing generative AI models, particularly ‘hallucinations’: the tendency of such models to generate content that is factually inaccurate, misleading, or potentially offensive. This concern was underscored by an incident in 2022, when a Google engineer publicly claimed that the company’s AI chatbot, LaMDA, had achieved sentience. The episode served as a stark reminder of the challenges associated with deploying powerful language models. While the prospect of reputational damage made Google hesitant to release its AI-powered chatbot, OpenAI adopted a bolder posture and released ChatGPT early.
Hypothesis 3: The Wall Between Advanced Research and Business Activities
Our third hypothesis highlights a disconnect between Google’s advanced research and its business operations, which slowed the transformation of technological achievements into commercially viable applications. Agile innovation requires close alignment between research and business units. However, the structural separation between these activities at Google meant that breakthroughs in AI were not translated into new products as efficiently as they should have been.
This disconnect was likely exacerbated further by Google’s open-sourcing strategy for AI technologies. By making many of its models and tools open source, Google was able to generate significant ecosystem momentum. However, this strategy also allowed competitors, including OpenAI, Microsoft, and AWS, to capitalize on AI advances developed within the open-source community.
The combined impact of internal structural separation and the open-sourcing strategy meant that Google may have inadvertently undermined its ability to fully capitalize on its research leadership. This issue highlights the tension between an open-source strategy and proprietary business development—an ongoing challenge for organizations that seek both technological innovation and market leadership.
Google’s Response: New Products and Reorganization
Faced with the competitive pressure from OpenAI and Microsoft’s integration of ChatGPT into Bing, Google initiated a multi-faceted response that sought to both catch up technologically and adapt organizationally. The response consisted of two major initiatives.
Launching Competing Products. Google responded to the rise of ChatGPT by launching Bard, its own conversational AI model, in February 2023. However, the public launch was marred by inaccuracies, which damaged public perception of Bard. In December 2023, Google announced an upgrade to Bard, powering it with its new LLM, Gemini. Google rebranded Bard as Gemini in February 2024 and began to compete directly with OpenAI’s offerings. Some critics noted that Google’s response to OpenAI was slower than expected, given the company’s extensive resources and expertise in AI research.
Organizational Reorganization: Bridging Research and Business. The second, and arguably more strategic, response was a reorganization aimed at reducing the gap between Google’s research and business divisions. Reports of internal reorganization efforts suggest that Google has sought to bring its AI research closer to product teams, enabling a more seamless transition from innovation to market. The reorganization also involved a shift away from the previous emphasis on open-sourcing technologies, an approach that, while not abandoned, has been de-prioritized.
Conclusion
Google’s delayed response to OpenAI’s ChatGPT can be attributed to a combination of strategic myopia, risk aversion, and internal organizational disconnects. These challenges also illustrate that open-technology strategies, while powerful for building momentum around general-purpose technologies, are no panacea when it comes to value appropriation. We should keep in mind, however, that the competitive situation in AI remains highly dynamic, and AI and ML technologies continue to evolve at a breakneck pace. We expect to witness numerous shifts in competitive positions as new capabilities are introduced and use cases demonstrated.
Previously in the AI Power Plays series
- Why Meta is Positioning Itself as the Champion of Open-Source AI
- Why is OpenAI Moving Towards a Closed Source Strategy?
Coming next in the AI Power Plays series:
- Why Do Most Tech Companies Want to Collaborate with Hugging Face?
- The rise of Nvidia and its recent open-source stride
About the Authors
Hervé Legenvre is Professor and Research Director at EIPM. He manages education programmes for global clients and conducts research and teaches on digitalisation, innovation, and supply chain. Lately, Hervé has conducted extensive research on how open-source software and open hardware are transforming industry foundations (www.eipm.org).