Nvidia is preparing to spotlight its central processing unit strategy at the upcoming Nvidia GTC conference, signalling a shift in how the company supports the next generation of artificial intelligence systems.
For years, Nvidia’s growth has come largely from its powerful graphics processors used to train and run AI models. But as companies deploy more complex “agentic” AI applications, the role of the CPU has become more important in managing data and coordinating multiple AI tasks.
Nvidia executives say CPUs increasingly act as a key control layer in large AI systems. While GPUs handle large-scale calculations, CPUs manage data movement and oversee interactions between multiple AI agents working together.
The company introduced its first data-centre CPU, Grace, in 2021. A newer generation, Vera, is now entering production. These processors typically work alongside Nvidia's GPUs in large server systems built for AI workloads.
Demand for computing infrastructure has surged as more organisations experiment with autonomous AI tools capable of completing tasks rather than simply responding to prompts. These systems require both high-performance graphics processors and reliable general-purpose computing power.
The shift also places Nvidia in closer competition with established CPU leaders such as Intel and AMD. Both companies continue to dominate the data-centre CPU market, but Nvidia hopes its chips can better support AI workloads by working closely with its own GPU platforms.
As artificial intelligence adoption accelerates, chipmakers are increasingly expanding their hardware strategies to support a broader range of computing demands.