Nvidia banking on TensorRT to expand generative AI dominance

Nvidia looks to build a bigger presence outside GPU sales as it puts its AI-specific software development kit into more applications.

Nvidia announced that it’s bringing its TensorRT-LLM SDK to Windows and to models like Stable Diffusion. The company said in a blog post that the aim is to make large language models (LLMs) and related tools run faster.

TensorRT speeds up inference, the stage where a pretrained model takes a new input and calculates probabilities to produce a result — like a newly generated Stable Diffusion image. With this software, Nvidia wants to play a bigger part in the inference side of generative AI.
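For a concrete sense of what inference means in practice, here is a minimal sketch using Hugging Face's diffusers library (not Nvidia's tooling) to generate an image from a pretrained Stable Diffusion checkpoint; the model ID and settings are illustrative, and this is exactly the kind of forward pass TensorRT aims to accelerate.

```python
# Minimal illustration of inference: a pretrained model takes new input
# and computes a result. Uses Hugging Face's diffusers library, not
# Nvidia's TensorRT tooling; model ID and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Load pretrained weights once; the expensive training already happened.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Inference: run the model forward on a new prompt to produce an output.
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```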

Its TensorRT-LLM optimizes LLMs so they run faster on Nvidia’s H100 GPUs. It works with LLMs like Meta’s Llama 2 and other AI models like Stability AI’s Stable Diffusion. The company said that running LLMs through TensorRT-LLM yields acceleration that “significantly improves the experience for more sophisticated LLM use — like writing and coding assistants.”
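As a rough sketch of what this looks like for developers, TensorRT-LLM ships a Python package; the snippet below assumes its high-level LLM interface and an illustrative Llama 2 checkpoint. Exact class names and options vary by release, so treat this as a sketch rather than a definitive usage guide.

```python
# Rough sketch of TensorRT-LLM's high-level Python interface; the API
# surface shown here is an assumption based on the project's documented
# LLM API and varies by release. The model checkpoint is illustrative.
from tensorrt_llm import LLM, SamplingParams

# Build or load a TensorRT engine for the model, then run accelerated
# inference on the GPU.
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")

outputs = llm.generate(
    ["Write a haiku about GPUs."],
    SamplingParams(max_tokens=64, temperature=0.8),
)
print(outputs[0].outputs[0].text)
```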

In other words, Nvidia hopes that it will not only provide the GPUs that train and run LLMs but also provide the software that allows models to run and work faster so users don’t seek other ways to make generative AI cost-efficient.

The company said TensorRT-LLM will be “available publicly to anyone who wants to use or integrate it,” and developers can access the SDK on its site.

Nvidia already has a near monopoly on the powerful chips that train LLMs like GPT-4 — and to train and run one, you typically need a lot of GPUs. Demand has skyrocketed for its H100 GPUs; estimated prices have reached $40,000 per chip. The company announced a newer version of its GPU, the GH200, coming next year. No wonder Nvidia’s revenues increased to $13.5 billion in the second quarter.

But the world of generative AI moves fast, and new methods to run LLMs without needing a lot of expensive GPUs have emerged. Companies like Microsoft and AMD have announced plans to make their own chips to lessen their reliance on Nvidia.

And companies have set their sights on the inference side of AI development. AMD plans to buy software company Nod.ai to help LLMs run specifically on AMD chips, while companies like SambaNova already offer services that make it easier to run models.

Nvidia, for now, remains the hardware leader in generative AI, but it already looks like it’s angling for a future where people don’t have to depend on buying huge numbers of its GPUs. 

