
By Hervé Legenvre and Erkko Autio

This article is part four of an ongoing series – The AI Power Plays – that explores the fiercely competitive AI landscape, where tech giants and startups battle for dominance while navigating the delicate balance of competition and collaboration to stay at the cutting edge of AI innovation.

NVIDIA is now the world’s most valuable semiconductor company. Its high-end GPUs (graphics processing units) power approximately 80% of the generative AI market. This dominance stems from its early investment in programmable GPUs and from CUDA, its software development platform, which enabled developers to harness the parallel processing power of GPUs for diverse use cases.

We explore how NVIDIA built a complete digital technology stack and combined proprietary and open innovation strategies to promote customer lock-in. We start with a brief account of the NVIDIA story.

History of NVIDIA

NVIDIA was founded in 1993 with a mission to design graphics processing units, bringing 3D graphics to the computer gaming and multimedia markets. As its GPUs rapidly gained traction in the marketplace, NVIDIA launched CUDA, its Compute Unified Device Architecture platform, in 2006 to expand the range of GPU use cases. This move was inspired by the adoption of NVIDIA’s GeForce GPUs by French researchers for scientific computing, which revealed a powerful new use case for NVIDIA hardware. Although proprietary to NVIDIA, CUDA was openly accessible, allowing external developers to build CUDA-based applications and eventually helping an ecosystem grow around the platform.

NVIDIA dedicated considerable resources to CUDA’s development and promotion. The company built a dedicated compiler team, developed SDKs and libraries, and actively engaged with the developer community, promoting the advantages of GPU computing. NVIDIA also supported developers in marketing their CUDA-based applications, thereby facilitating the growth of an ecosystem that would ultimately become a cornerstone of NVIDIA’s success.
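To make the CUDA programming model concrete: a developer writes an ordinary-looking function, and CUDA executes it across thousands of GPU threads in parallel. The article contains no code, so the following is a minimal sketch of ours; it uses Numba’s open Python bindings for CUDA (an assumption of convenience, since CUDA’s native interface is C/C++) to add two vectors element-wise on the GPU.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    # Each GPU thread computes exactly one element of the result.
    i = cuda.grid(1)  # this thread's global index across the launch grid
    if i < a.shape[0]:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

# Launch configuration: enough blocks of 256 threads to cover all n elements.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # Numba copies arrays to and from the GPU

assert np.allclose(out, a + b)
```

The same pattern of mapping one thread to one data element scales from toy examples like this one to the matrix operations at the heart of deep learning.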

Another pivotal event occurred in 2012, one that fundamentally altered the company’s trajectory. Researchers at the University of Toronto used NVIDIA GPUs to train a deep learning model known as “AlexNet”, which achieved unprecedented performance in image recognition. This breakthrough demonstrated the potential of GPUs for AI applications and helped spark widespread interest in deep learning.

NVIDIA’s years-long cultivation of the CUDA ecosystem meant that the company was ideally positioned to seize this new opportunity. This “AlexNet moment” marked the beginning of NVIDIA’s rapid expansion into the AI and data center markets, and it helped establish its GPUs as essential foundations for deep learning and AI research.

In 2019, NVIDIA further strengthened its position by acquiring Mellanox Technologies, a leader in high-performance networking solutions. Mellanox’s technology complemented NVIDIA’s GPUs by alleviating a key bottleneck in data centers: the need for super-fast, high-volume data transfer between GPUs within and across data centers. The acquisition helped further solidify NVIDIA’s standing in the data center market and allowed it to offer a comprehensive hardware and software platform for AI and high-performance computing.

The 2020s have witnessed a further consolidation of NVIDIA’s role as a keystone provider of AI computing solutions, with NVIDIA’s data center revenue soaring in response to the growing adoption of deep learning across industries. NVIDIA continues to innovate and release successive generations of powerful GPUs, and it continues to expand its software platform to support an ever-widening array of AI applications, thereby helping cement its influence across technology layers.

While AI has been a significant growth driver for NVIDIA, the company has broadened its reach by developing solutions tailored to specific use cases and markets.

In the automotive sector, NVIDIA introduced the DRIVE platform—a comprehensive hardware and software stack designed to integrate AI and autonomous capabilities into self-driving vehicles. Within the DRIVE platform, CUDA enables the execution of complex algorithms essential for autonomous driving, including real-time image processing, sensor fusion, and deep learning inference. NVIDIA’s automotive strategy focuses on partnering with established automakers and equipping them with the necessary tools and technology to develop and deploy self-driving vehicles.

NVIDIA has also made significant strides with Omniverse, a platform for creating and managing digital twins, which are virtual representations of real-world objects and environments. Leveraging NVIDIA’s expertise in graphics processing, AI, simulation, and robotics, Omniverse allows companies to simulate and test modifications to their real-world assets in a virtual environment before implementation. By 2022, over 700 companies had adopted Omniverse, which demonstrates its value in enabling realistic and interactive simulations that help drive operational efficiencies and innovation across industries.

NVIDIA Technology Stack

This account of NVIDIA’s history illustrates how various layers and interconnections have been progressively added to its digital stack over the years.

Figure 1: NVIDIA’s technology stack

Figure 1 illustrates NVIDIA’s technology stack in a layered format and showcases how its components build on each other to power advanced applications. At the bottom is the GPU Layer, which provides the hardware for processing data and supporting applications. Above that is the CUDA platform, which includes the software tools developers use to make full use of NVIDIA’s hardware.

The next level is domain-specific platforms such as NVIDIA Drive (for autonomous driving), Clara (for healthcare), and Omniverse (for digital simulations). These platforms are tailored to meet the needs of specific industries and tasks.

On either side, we see complementary resources: AI Framework Integration on the left, which connects NVIDIA’s stack with popular AI tools and frameworks, and AI Models, Data Centers, and Cloud Solutions on the right, which represent the large-scale infrastructure needed to handle and store the vast amounts of data processed by these systems.

At the very top are end-user applications that consumers and businesses use, all built on top of this powerful technology stack.

Three Open Innovation Models

When building and consolidating its dominant position in the AI and machine learning technology stack, NVIDIA has skilfully combined proprietary and open-source strategies. We identified three models of open innovation used by NVIDIA, along with the digital affordances each one harnessed. The first is collaborative innovation, through which NVIDIA has facilitated the joint development of specific use cases with specific partners, allowing it to expand and enhance platform functionalities. The affordance here is controlled co-creation, which enables innovation and customization while preserving proprietary control over the core CUDA platform. For example, NVIDIA engaged in collaborative innovation with Google and Meta to enhance the performance and usability of their widely used deep learning frameworks – TensorFlow and PyTorch – on NVIDIA hardware.
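What this framework-level co-optimization means in practice can be sketched in a few lines of PyTorch. This is an illustrative example of ours, not taken from the article: the developer-facing code barely changes, while the CUDA backend (and NVIDIA libraries such as cuDNN) does the heavy lifting whenever an NVIDIA GPU is present.

```python
import torch

# Select an NVIDIA GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Moving the model and data to the device is the only NVIDIA-specific step;
# the framework routes the computation through CUDA kernels underneath.
model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(64, 1024, device=device)

torch.backends.cudnn.benchmark = True  # let cuDNN auto-tune kernels for the workload
y = model(x)  # executes on the GPU when one is present
print(y.shape)
```

Because the framework hides the hardware behind a one-line device choice, the optimization work NVIDIA contributes upstream benefits every TensorFlow and PyTorch user by default, which is precisely what steers adoption toward its GPUs.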

The second open innovation model used by NVIDIA consisted of providing openly accessible resources to developers. These included APIs, software libraries, and tools that support the CUDA platform’s technical ecosystem. The digital affordance here was usability without modification: users could leverage these resources for application development but could not alter or redistribute them. This approach broadened the developer community’s access to essential resources and enabled wider developer participation while allowing NVIDIA to keep the underlying resources proprietary. A wide range of tools, libraries, and resources remain accessible to developers on the CUDA platform.
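The “usability without modification” affordance is easy to illustrate. The sketch below is our example, not the article’s; it uses CuPy, an open-source NumPy-compatible library built on top of NVIDIA’s CUDA toolkit. Developers freely call into NVIDIA’s proprietary cuBLAS and cuFFT libraries, but the internals of those libraries remain closed.

```python
import cupy as cp

# CuPy mirrors the NumPy API on top of NVIDIA's closed-source CUDA libraries.
x = cp.random.rand(2048, 2048, dtype=cp.float32)

y = x @ x.T               # matrix product dispatched to cuBLAS on the GPU
spectrum = cp.fft.fft(x)  # Fourier transform computed by cuFFT

# Results live in GPU memory until explicitly brought back to the host.
print(float(y.sum()))
```

Developers get the full performance of NVIDIA’s libraries through a familiar interface, yet they can neither inspect nor redistribute the underlying implementations: exactly the balance this second model describes.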

The third open innovation strategy harnessed by NVIDIA, open-source resources, offered a critical affordance: customizability. By providing full access to the source code, open-source resources enable users to customize, extend, and redistribute the resource itself. Open-source software supports collaborative development and community-driven innovation. For instance, all of the world’s 500 most powerful supercomputers, which support specialized scientific applications, operate on some variant of Linux to meet their unique demands. NVIDIA’s decision to open-source its Linux GPU kernel modules empowers developers to customize and optimize GPU drivers, addressing the specific performance and scalability requirements of high-performance computing (HPC) environments.

Each of the three open innovation strategies offered NVIDIA distinct affordances, enabling it to support diverse types of user and developer engagement and to foster innovation consistent with the platform’s strategic goals and collaboration needs.

How NVIDIA’s Open Innovation Strategies Fostered User Lock-In

The three models of open innovation allowed NVIDIA to expand its footprint in the AI technology stack and foster user lock-in. Collaborative innovation with the developers of TensorFlow and PyTorch helped optimize these frameworks for NVIDIA GPUs and the CUDA platform, encouraging widespread adoption and user lock-in without compromising NVIDIA’s proprietary architecture.

Similarly, when NVIDIA engaged in collaborative innovation with vehicle manufacturers to advance autonomous driving and customize the DRIVE platform for individual clients, those clients’ systems became optimized for NVIDIA hardware. This optimization increased switching costs, thereby promoting user lock-in.

NVIDIA’s strategy of providing broad access to CUDA resources, such as the free CUDA Toolkit, documentation, and educational materials, effectively reduced entry barriers for new users. This accessibility drove widespread adoption and, with it, user lock-in, as developers became invested in the CUDA ecosystem and increasingly dependent on NVIDIA’s tools, hardware, and expertise for application development. Once developers had invested time and effort in mastering CUDA, they became less inclined to learn alternative platforms that would require new skillsets and offer less community support.

Finally, NVIDIA’s open-source strategy, under which only specific components (such as Linux kernel modules and certain libraries) were made open source, helped build trust among developers and expand CUDA’s appeal without relinquishing control of the core platform. This approach allowed users to customize and integrate CUDA into their workflows without fully enabling independence from NVIDIA’s ecosystem.

By selectively open-sourcing its Linux GPU kernel modules, NVIDIA strategically positioned itself within the supercomputing market, where developers favor open-source solutions for customizing their infrastructure. This approach incentivized institutions to invest in CUDA-optimized applications and NVIDIA hardware, thereby fostering ecosystem lock-in.

In 2024, NVIDIA went one step further, deciding to release leading-edge large language models under open-source licences. NVIDIA’s NVLM 1.0, a multimodal LLM, performs well on both vision-language and text-only tasks, and its performance rivals that of proprietary LLMs from OpenAI and Google. NVIDIA also released an open-source AI model named Nemotron that builds on Meta’s Llama 3 framework and outperforms OpenAI’s latest models on various benchmark tests.

The release of NVLM 1.0 and Nemotron sent a clear message to NVIDIA clients that they do not need proprietary models such as ChatGPT or Gemini for applications built on top of NVIDIA tools, as the NVIDIA technology stack is now able to support image- and video-heavy ML models.
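As a rough illustration of what open weights change for a client, the sketch below loads an openly released NVIDIA model through the Hugging Face transformers library instead of calling a proprietary API. This is our example, not the article’s; the model identifier is illustrative (based on NVIDIA’s published Nemotron releases), and a model of this size requires multiple data-center GPUs to run.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative identifier for an open-weight NVIDIA model on Hugging Face.
model_id = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (via the accelerate package) spreads the weights across
# available NVIDIA GPUs rather than sending data to a third-party API.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarise NVIDIA's open innovation strategy in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running open weights on one’s own NVIDIA hardware keeps both the data and the spending inside the NVIDIA ecosystem, which is the strategic point of these releases.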

Conclusion

NVIDIA’s journey from a graphics card manufacturer to a platform company at the heart of the AI revolution showcases a powerful ability to leverage a mix of open innovation models to drive user lock-in without losing control of proprietary foundations.

Table 1: Open Innovation Strategies Used by NVIDIA

NVIDIA’s strategy demonstrates that a platform can be both generous in sharing resources and strategic in creating a robust, loyal user base. Through a blend of collaborative innovation, openly accessible resources, and selective open-source offerings, NVIDIA has crafted an ecosystem that attracts users while creating lasting dependence on its platform. As illustrated in Table 1, this nuanced approach to open innovation exemplifies how selective openness drives platform dominance and user loyalty. As a consequence, NVIDIA is now the dominant force in the GPU market, holding an impressive 84% market share and counting over 4 million CUDA developers.


Coming next in the AI Power Plays series

  • Why do most tech companies want to collaborate with Hugging Face?
  • AI open-source dynamics: a complementarity perspective

About the Authors

Hervé Legenvre is Professor and Research Director at EIPM. He manages education programmes for global clients. He conducts research and teaches on digitalisation, innovation, and supply chain. Lately, Hervé has conducted extensive research on how open-source software and open hardware are transforming industry foundations (www.eipm.org).

Erkko Autio

Erkko Autio FBA FFASL is Professor in Technology Venturing at Imperial College Business School, London. His research focuses on digitalisation, open technology ecosystems, entrepreneurial ecosystems, innovation ecosystems, and business model innovation. He co-founded the Global Entrepreneurship Monitor (www.gemconsortium.org), the Global Entrepreneurship Index (thegedi.org), and Wicked Acceleration Labs (www.wickedacceleration.org). 
