Nvidia on the Defensive

Nvidia has evolved from a maker of video game graphics accelerators into a major provider of chips for the industrial metaverse and self-driving cars. Its stock market value has topped a whopping $2.5 trillion.

But that’s a lot of value to lose if competitors succeed in cracking Nvidia’s moat. That moat is CUDA, a software platform that links Nvidia’s GPUs to virtually any AI application a developer wants to run.
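To see why that lock-in matters, here is a minimal sketch (not from the article) of how a mainstream framework such as PyTorch targets CUDA: a developer writes ordinary model code, and the framework dispatches it to an Nvidia GPU through CUDA with a one-line device change. It assumes a standard PyTorch install on a machine with an Nvidia GPU.

```python
# Minimal sketch (assumes PyTorch is installed): ordinary model code is
# dispatched to an Nvidia GPU through CUDA by changing the target device.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # use a CUDA GPU if present

model = nn.Linear(1024, 10).to(device)     # weights move to GPU memory via CUDA
x = torch.randn(32, 1024, device=device)   # input tensor allocated on the GPU
y = model(x)                               # the matrix multiply runs as CUDA kernels
print(y.shape, y.device)
```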

Google’s Gemini

The technology giant’s latest move was a bold one: new TPU v5p accelerators that Google bills as its most powerful AI processors yet. The company says the chips, offered through its cloud, will be faster and cheaper than the NVIDIA GPUs that power most AI software, and it expects to deploy millions of them over the next few years.
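For a sense of what targeting these accelerators looks like in practice, here is a minimal, hypothetical sketch (not drawn from the article) using JAX on a Cloud TPU VM; it assumes the TPU-enabled JAX build is installed.

```python
# Minimal sketch (assumes a Cloud TPU VM with the TPU-enabled jax build installed):
# a compiled matrix multiply runs on whatever accelerator JAX finds, TPUs included.
import jax
import jax.numpy as jnp

print(jax.devices())            # on a TPU VM this lists the attached TPU devices

@jax.jit                        # compile for the available accelerator
def matmul(a, b):
    return a @ b

a = jnp.ones((4096, 4096), dtype=jnp.bfloat16)
b = jnp.ones((4096, 4096), dtype=jnp.bfloat16)
c = matmul(a, b)                # executes on the TPU; the result stays on-device
print(c.shape)
```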

The news sparked excitement among investors, sending Nvidia shares up about 10% this week. But the stock’s long-term prospects remain uncertain.

Nvidia’s market capitalization has reached an astounding $2.5 trillion, putting it in the same league as Apple and Microsoft. But the trillion-dollar company’s hold on the booming AI sector could soon be challenged by deep-pocketed rivals.

For starters, Google’s generative AI model, called Gemini, is now a contender. Previously known as Bard, the tool launched in February 2023 with much fanfare, but the initial enthusiasm quickly turned to skepticism after a demo video showed it giving a factually inaccurate answer to a question about the James Webb Space Telescope.

Google’s Gemini is a large language model (LLM) that can interpret and generate text, images and video. The model isn’t perfect, though: adversarial prompts have caused it to generate election misinformation, produce offensive images and even leak its system prompts. For now, Google has rolled out guardrails that limit the tool’s abilities and added safety protocols.
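As a point of reference (not taken from the article), the following is a minimal sketch of calling Gemini through Google’s google-generativeai Python package; the API key is a placeholder and the model identifier is an assumption, since names change frequently.

```python
# Minimal sketch (assumes the google-generativeai package and an API key from
# Google AI Studio; the model name is an assumption and may change).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model identifier

response = model.generate_content("Summarize what the James Webb Space Telescope does.")
print(response.text)
```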

Amazon’s AWS Deep Learning

Nvidia has become a superstar of the AI ecosystem, with its chips powering AI workloads around the world. Its stock has tripled in value since the start of this year and its market cap is more than $2 trillion. But competitors have been busy chipping away at Nvidia’s advantage: tech giants like Google and Amazon are designing their own AI accelerators, and startups are embracing rival technology and moving their models to the cloud.

A serious alternative to Nvidia’s offering comes from Amazon with its machine learning platform, SageMaker. SageMaker is a comprehensive set of services that makes it easy to experiment with state-of-the-art deep learning. SageMaker notebook instances, for example, let you train and test deep learning models in the cloud without investing in high-end hardware, while SageMaker Studio lets you deploy and run your own trained models in the cloud.
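As an illustration (not from the article), here is a minimal sketch of launching a managed training job with the SageMaker Python SDK; the role ARN, S3 path, script name and framework versions are placeholders and assumptions.

```python
# Minimal sketch (assumes the sagemaker Python SDK, an AWS account with a
# SageMaker execution role, and a training script named train.py).
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()

estimator = PyTorch(
    entry_point="train.py",                                # placeholder training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role ARN
    instance_type="ml.g5.xlarge",                          # GPU instance; larger types exist
    instance_count=1,
    framework_version="2.1",                               # assumed PyTorch version
    py_version="py310",
)

# Kick off the managed training job; the data location is a placeholder S3 URI.
estimator.fit({"training": "s3://my-bucket/training-data/"})
```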

In addition to these tools, SageMaker offers other services to help you explore and accelerate your AI applications. For instance, P5 instances are built around NVIDIA’s H100 Tensor Core GPUs and, in large clusters, can provide up to 20 exaflops of aggregate compute for building and training models. They also support NVIDIA GPUDirect RDMA for low-latency, high-bandwidth communication between instances.
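For context (again not from the article), the snippet below is a hypothetical sketch of requesting a single P5 instance with boto3; the AMI ID, key pair and region are placeholders, and real launches typically require capacity reservations and service-quota increases.

```python
# Minimal sketch (assumes boto3 and configured AWS credentials; the AMI ID and
# key name are placeholders, and p5 capacity usually needs a reservation).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder Deep Learning AMI
    InstanceType="p5.48xlarge",        # 8x NVIDIA H100 GPUs per instance
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
)

print(response["Instances"][0]["InstanceId"])
```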

The COVID-19 pandemic put HPC and AI tools front and center in the battle against the novel coronavirus. A team of researchers at the University of Texas, for example, used NVIDIA’s GPU-accelerated tools to create a 3D atomic-scale map of the coronavirus spike protein, which helped guide the development of vaccines and treatments.

Microsoft’s Cognitive Services

The generative AI boom has pushed chip designer Nvidia’s revenue and profits through the roof. Its market value has shot past Amazon and Alphabet, giving it the third-highest valuation of any public company in the world.

But the company faces an array of challengers as it looks to defend its crown. Rivals like Google, Amazon and Intel have been working on hardware and software that could eat into Nvidia’s moat and blunt the advantage of its GPUs.

Nvidia’s CUDA platform links its chips to almost any type of artificial intelligence application developers can dream up. That has been Nvidia’s secret sauce, giving it a virtual monopoly in the burgeoning industry and making the company the darling of investors and entrepreneurs alike.

Microsoft’s Cognitive Services suite is featured in many analyst ratings and evaluation reports, including The Forrester Wave: Cognitive Search and Gartner’s Magic Quadrant for Insight Engines. It is a widely used offering that big companies rely on to add AI capabilities to their own applications and to build generative AI features on top of them.
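To make the offering concrete (an illustration not drawn from the article), here is a minimal, hypothetical sketch of calling one Cognitive Services capability, sentiment analysis, through the azure-ai-textanalytics Python package; the endpoint and key are placeholders for a provisioned resource.

```python
# Minimal sketch (assumes the azure-ai-textanalytics package and a provisioned
# Language/Cognitive Services resource; endpoint and key are placeholders).
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("YOUR_KEY"),
)

docs = ["The new GPUs are astonishingly fast, but the price is hard to justify."]
result = client.analyze_sentiment(documents=docs)

for doc in result:
    print(doc.sentiment, doc.confidence_scores)  # e.g. "mixed" with per-class scores
```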

The company has also benefited from a massive retail following. Nvidia’s stock turns up in more retail investors’ portfolios than any other name, according to recent analysis by Vanda Research, making it a popular way for individuals to get exposure to the AI boom without funding AI ventures of their own.

Intel’s Nervana

Intel isn’t giving up on its artificial intelligence efforts, though. The chipmaker, whose processors sit in more than 90 percent of the world’s data centers, will add software, a cloud service and future hardware to better tune its products for the unique workloads of AI calculations.

The move bolsters Intel’s machine learning lineup, which runs the gamut from an open-source software platform to an upcoming custom chip. Those products are used for everything from analyzing seismic data to find promising places to drill for oil, to examining plant genomes to develop new hybrids. It’s a new frontier that has investors salivating and competitors positioning themselves to attack.

Intel CEO Brian Krzanich outlined the strategy in a Monday morning presentation, citing uses of AI in self-driving cars, improved weather forecasts and more accurate medical diagnoses.

He also pointed out that AI has led to a surge in demand for high-performance computing (HPC) systems.

HPC is a big business for Nvidia, whose GPUs power many of these systems. But a growing number of companies are building their own chips to compete. Nvidia, for its part, is countering with its Blackwell generation, which it says delivers a substantial jump over the previous generation and will power the next wave of its DGX family of servers.
