Nvidia Corp. is stepping up its artificial-intelligence competition with Intel Corp. and Advanced Micro Devices Inc. by introducing a new central processing unit to crunch reams of data, with technology based on its acquisition target Arm Holdings PLC.
Chief Executive Jensen Huang introduced the new technology, dubbed Grace, on Monday in the keynote address of Nvidia’s annual GTC developers conference. Nvidia’s new bundle for servers includes a freshly designed CPU, a type of data-center chip long dominated by Intel that Nvidia had considered building for years before taking the plunge.
“Nvidia is now a three-chip company,” Huang said in a news release, referencing the company’s core graphics-processing units, or GPUs, and data-processing units, or DPUs.
Nvidia stock rose following the announcement. After opening below Friday’s closing price, shares climbed between 12 p.m. and 12:30 p.m. Eastern time Monday to a daily gain of more than 2%.
CPUs, GPUs and accelerators such as FPGAs and DPUs combine, along with interconnects and other components, to help computer scientists and companies crunch huge data sets for AI. Nvidia helped develop and define the entire category when it pushed its GPUs into the field and showed how they could accelerate the training of AI systems, but those systems have largely relied on CPUs from Intel, AMD or International Business Machines Corp.
Nvidia’s CPU will use Arm technology, which has not yet found wide acceptance in the data-center market. Nvidia has agreed to acquire Arm at a $40 billion valuation, but will build the Grace CPU under a license from the chip-architecture company while waiting for regulatory approvals for the deal, which could stretch into 2022, if they arrive at all.
“Fundamentally, there’s no reason that Arm cannot be as competitive on the higher end as x86 is with Intel and AMD and Power is at IBM,” Kevin Krewell, principal analyst at Tirias Research, told MarketWatch.
“Arm’s already on this path and this is an indication that Nvidia supports that and wants to push it forward even faster.”
Nvidia’s Arm acquisition follows a big-money merger that did close: the $6.9 billion deal for Mellanox, which underpins much of Nvidia’s DPU development. Nvidia also announced its third-generation BlueField DPU on Monday, and it too relies on Arm cores.
While Nvidia builds up its capabilities, its competitors are not standing still. AMD is in the process of acquiring Xilinx Inc. to round out its offerings, and Intel has purchased Habana Labs and other assets while rolling out its own GPUs to better rival Nvidia.
Intel “is building out a portfolio of hardware and software components to mix and match to serve particular end users and workloads,” said Shane Rau, IDC’s research VP for computers and semiconductors. “AMD’s pending acquisition of Xilinx [has a] similar motivation, a hardware and software acquisition so they will have CPU, GPU and, in their case, FPGA.”
Intel and AMD make server CPUs based on the x86 standard, which leads the sector by a wide margin. Rau said “there has been no competition” with x86 for years.
“Right now, x86 is still about 95% of the server CPU market, and that’s going across Intel and AMD. In the balance is a little bit of Arm, a little bit of IBM Power and Z-series systems,” Rau said. “But those non-x86 CPU architectures are going to grow somewhat, if only because end users will conclude that Arm or Power or these other architectures serve their particular end-user use case well.”
The big cloud providers, along with other large tech companies in the U.S. and China, offer remote access to high-powered computing, but their eventual needs will not all be the same.
“We may think of ‘Tier 1s’ (Google or Facebook or Amazon or Baidu) as monolithic, like they’re all the cloud guys, but they have different end-user bases,” Rau said. “Cloud gaming is different than social media, and while they can use the same technologies to build out their end solution … they have the capabilities to take those enabling technologies and they want to quickly optimize them, program them to do what they need to do.”
To win that business, the chip makers must stay flexible to what the cloud providers need. While announcing its own CPU on Monday, for example, Nvidia also revealed a new deal with Amazon Web Services to power Android-based remote gaming on Amazon’s servers, using Amazon’s own Graviton processor.
While the large cloud players are the most desired customers long-term for Nvidia, the first target is supercomputers that are crunching some of the largest data sets in the world. Grace — named for famed American computer scientist Grace Hopper — will first be installed in supercomputers designed by Hewlett Packard Enterprise for the Swiss National Supercomputing Centre and the U.S. Department of Energy’s Los Alamos National Laboratory, expected in 2023.
With a nearly two-year wait until Grace is installed and operating, analysts did not believe Nvidia would fall too far behind its rivals, which are also still collecting assets and assembling them into cohesive products. Rau said it will likely be five years before the chip makers are actually competing on a level playing field, which will give plenty of time for the Arm acquisition to complete its journey to Nvidia, or not.
“The overarching story Nvidia is trying to tell is that they are committed to Arm and they will use Arm in every aspect of their platforms and Arm is critical to their success in the future,” Krewell said. “Fundamentally, even if the deal doesn’t close for one reason or another, they’re still heavily committed to the Arm ecosystem. … They can’t sit around and wait for the deal to close; they have to keep moving.”