At CES 2025, NVIDIA CEO Jensen Huang opened the show by introducing the next-generation GeForce RTX 50 Series gaming GPUs, innovations in Grace Blackwell AI chip technology, and NVIDIA's plans for autonomous cars and humanoid robots.
Transcript
00:00Everyone, today we're announcing our next generation, the RTX Blackwell family.
00:08Let's take a look.
00:14Our brand new GeForce RTX 50 series Blackwell architecture.
00:23The GPU is just a beast.
00:254090 performance at $549.
00:32Starting from 5070 all the way up to 5090.
00:355090, twice the performance of a 4090.
00:40Well, it is incredible, but we managed to put these gigantic performance GPUs into a laptop.
00:49And what's really amazing is the family of GPUs we're going to put in here.
00:55And so the 5090 will fit into a laptop, a thin laptop.
01:01That last laptop was 14.9 millimeters.
01:04You got a 5080, 5070 Ti, and 5070.
01:08Let's take a look at Blackwell.
01:10This is my show and tell.
01:12This is a show and tell.
01:14So, this NVLink system, this right here, this NVLink system, this is GB200, NVLink 72.
01:24Now, the goal of all of this is so that we can create one giant chip.
01:29The amount of computation we need is really quite incredible.
01:32And this is basically one giant chip.
01:35The memory bandwidth is 1.2 petabytes per second.
01:39That's basically the entire Internet traffic that's happening right now.
01:46The next layer is what we call NVIDIA NEMO.
01:49NEMO is essentially a digital employee onboarding and training evaluation system.
02:01In the future, these AI agents are essentially a digital workforce working alongside your employees, doing things for you on your behalf.
02:13AI agents are the new digital workforce, working for and with us.
02:20AI agents are a system of models that reason about a mission, break it down into tasks, and retrieve data or use tools to generate a quality response.
02:33NVIDIA's agentic AI building blocks, NEM pre-trained models, and NEMO framework let organizations easily develop AI agents and deploy them anywhere.
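The agent pattern described above, reason about a mission, break it into tasks, then use tools, can be sketched in a few lines. This is a toy illustration only; the function and tool names are invented for this example and are not the NVIDIA NeMo or NIM API.

```python
# Toy sketch of the agent loop: plan a mission into tasks, then dispatch
# each task to a matching "tool". All names here are illustrative.

def plan(mission):
    """Toy planner: split a mission string into task strings."""
    return [t.strip() for t in mission.split(" and ")]

def run_agent(mission, tools):
    results = []
    for task in plan(mission):
        # Pick the first tool whose keyword appears in the task text.
        tool = next((fn for kw, fn in tools.items() if kw in task), None)
        results.append(tool(task) if tool else f"no tool for: {task}")
    return results

tools = {
    "search": lambda task: f"retrieved data for '{task}'",
    "summarize": lambda task: f"summary of '{task}'",
}

print(run_agent("search quarterly sales and summarize findings", tools))
```

A real agentic system replaces the toy planner with a reasoning model and the lambdas with retrieval and tool-calling, but the control flow is the same loop.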
02:43We would love to be able to take that AI everywhere.
02:46I've mentioned already that you could take NVIDIA AI to any cloud, but you could also put it inside your company.
02:52But the thing that we want to do more than anything is put it on our PC as well.
02:55If we could figure out a way to make Windows PC a world-class AI PC, it would be completely awesome.
03:05And it turns out the answer is Windows.
03:07It's Windows WSL 2.
03:10Windows WSL 2.
03:12Windows WSL 2 basically is two operating systems within one.
03:17The blueprints that we develop are going to be up on ai.nvidia.com, and they will run so long as the computer can fit that model.
03:28And we're going to have many models that fit, whether it's vision models or language models or speech models or these animation human, digital human models.
03:36All kinds of different types of models are going to be perfect for your PC.
03:41And you download it, and it should just run.
03:45And so our focus is to turn Windows WSL 2, the Windows PC, into a first-class target platform that we will support and maintain for as long as we shall live.
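Models deployed locally this way are typically served behind an OpenAI-compatible HTTP endpoint. The sketch below builds such a request using only the standard library; the localhost URL and model name are assumptions for illustration, not values taken from the keynote.

```python
# Hedged sketch: build a chat request for a locally served model on a
# WSL 2 AI PC. The endpoint URL and model name are illustrative assumptions.
import json
from urllib import request

def build_chat_request(prompt, model="local-model",
                       url="http://localhost:8000/v1/chat/completions"):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

req = build_chat_request("Hello from a Windows WSL 2 AI PC")
print(req.full_url)  # where the local model would be served
```

Sending the request with `urllib.request.urlopen(req)` would only succeed with a model actually serving on that port, which is why this sketch stops at constructing it.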
03:58So, physical AI.
04:01We're announcing NVIDIA Cosmos, a world foundation model created to understand the physical world.
04:11And the only way for you to really understand this is to see it.
04:14Let's play it.
04:19The next frontier of AI is physical AI.
04:22Model performance is directly related to data availability.
04:27But physical world data is costly to capture, curate, and label.
04:33NVIDIA Cosmos is a world foundation model development platform to advance physical AI.
04:39Cosmos models ingest text, image, or video prompts and generate virtual world states as videos.
04:46Cosmos generations prioritize the unique requirements of AV and robotics use cases, like real-world environments, lighting, and object permanence.
04:57Developers use NVIDIA Omniverse to build physics-based, geospatially accurate scenarios.
05:04Then output Omniverse renders into Cosmos, which generates photoreal, physically-based, synthetic data.
05:11Developers use Cosmos to generate worlds for reinforcement learning AI feedback, to improve policy models, or to test and validate model performance.
05:24Even across multi-sensor views, bringing the power of foresight and multiverse simulation to AI models, generating every possible future to help the model select the right path.
05:38Working with the world's developer ecosystem, NVIDIA is helping advance the next wave of physical AI.
05:46Every robotics company will ultimately have to build three computers, fundamental computers.
05:51One computer, of course, to train the AI.
05:54We call it the DGX computer to train the AI.
05:57Another, of course, when you're done, to deploy the AI.
06:01We call that AGX.
06:02That's inside the car, in the robot, or in an AMR, or in a stadium, or whatever it is.
06:09These computers are at the edge, and they're autonomous.
06:13But to connect the two, you need a digital twin.
06:16And this is all the simulations that you were seeing.
06:19The digital twin is where the AI that has been trained goes to practice, to be refined, to do its synthetic data generation.
06:28These three computers are going to be working interactively.
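The train-practice-deploy cycle across the three computers can be sketched as a simple loop: train a policy, refine it in simulation while generating synthetic data, retrain, then deploy. Every function below is a toy placeholder, not an NVIDIA API; the "policy" is just a number so the sketch stays self-contained.

```python
# Toy sketch of the three-computer loop: train (DGX stand-in), practice in
# a digital twin (simulation stand-in), deploy to the edge (AGX stand-in).

def train(data):
    """Stand-in for training: the 'policy' is just the mean of the data."""
    return sum(data) / len(data)

def simulate(policy, scenarios):
    """Stand-in for digital-twin practice: score the policy and emit synthetic data."""
    synthetic = [s + 0.1 for s in scenarios]   # synthetic data generation
    score = sum(1 for s in scenarios if s < policy) / len(scenarios)
    return score, synthetic

def deploy(policy):
    """Stand-in for edge deployment."""
    return f"policy {policy:.2f} deployed"

data = [0.2, 0.4, 0.6]
policy = train(data)
score, extra = simulate(policy, data)
policy = train(data + extra)                   # retrain on real + synthetic data
print(deploy(policy))
```

The point of the sketch is the interaction: the simulator both evaluates the trained policy and feeds new data back into training before anything reaches the edge.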
06:31In the future, every factory will have a digital twin.
06:36And that digital twin operates exactly like the real factory.
06:40The next example, autonomous vehicles.
06:42The AV revolution has arrived.
06:45Well, today, we're announcing that our next-generation processor for the car, our next-generation computer for the car is called Thor.
06:53I have one right here.
06:54Hang on a second.
06:55This is a robotics computer.
06:56It takes in just a massive amount of sensor information and processes it.
07:03You know, umpteen cameras, high-resolution radars, lidars, they're all coming into this chip.
07:11And this chip has to process all that sensor data, turn it into tokens, put it into a transformer, and predict the next path.
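That pipeline, continuous sensor readings quantized into tokens, a sequence model over those tokens, a prediction of the next step, can be sketched with the standard library. A real system uses a transformer; here a bigram count model stands in for it so the example runs anywhere, and all names are invented for illustration.

```python
# Toy sketch of sensors -> tokens -> sequence model -> next-path prediction.
# A bigram counter replaces the transformer purely to keep this self-contained.
from collections import Counter, defaultdict

def tokenize(readings, bins=4, lo=0.0, hi=1.0):
    """Quantize continuous sensor values into discrete token ids."""
    width = (hi - lo) / bins
    return [min(int((r - lo) / width), bins - 1) for r in readings]

def fit_bigrams(tokens):
    """Count token-to-token transitions (the stand-in 'model')."""
    model = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1
    return model

def predict_next(model, token):
    """'Predict the next path' step: most likely next token."""
    return model[token].most_common(1)[0][0]

readings = [0.1, 0.3, 0.6, 0.3, 0.6]   # toy sensor stream
tokens = tokenize(readings)
model = fit_bigrams(tokens)
print(tokens, "->", predict_next(model, tokens[-1]))
```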
07:19Okay, so now I told you I was going to show you what would we use Omniverse and Cosmos to do in the context of self-driving cars.
07:31With Cosmos Nemotron VideoSearch, the massively scaled synthetic dataset combined with recorded drives can be curated to train models.
07:44NVIDIA's AI Data Factory scales hundreds of drives into billions of effective miles, setting the standard for safe and advanced autonomous driving.
07:56We are going to have mountains of training data for autonomous vehicles.
08:00I think the next part is robotics.
08:03The chat GPT moment for general robotics is just around the corner.
08:08This will be the largest technology industry the world's ever seen.
08:12Robot foundation models, data pipelines, simulation frameworks, and a Thor robotics computer.
08:23The NVIDIA Isaac Groot blueprint for synthetic motion generation is a simulation workflow for imitation learning, enabling developers to generate exponentially large datasets from a small number of human demonstrations.
08:38We're going to have mountains of data to train robots with.
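The core idea behind synthetic motion generation, expanding a handful of human demonstrations into a much larger training set, can be illustrated with a simple perturbation loop. This toy jitter is only an illustration of the concept; the actual Isaac Groot blueprint generates variations in simulation, not like this.

```python
# Toy sketch: expand a few demonstration trajectories into many variants
# by adding bounded random jitter. Illustrative only.
import random

def augment(demo, n_variants, noise=0.05, seed=0):
    """Produce n_variants perturbed copies of one demonstration trajectory."""
    rng = random.Random(seed)
    return [[x + rng.uniform(-noise, noise) for x in demo]
            for _ in range(n_variants)]

demos = [[0.0, 0.5, 1.0], [1.0, 0.5, 0.0]]   # a small number of demonstrations
dataset = [traj for d in demos for traj in augment(d, n_variants=100)]
print(len(dataset))  # 200 trajectories from 2 demonstrations
```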
08:42NVIDIA Isaac Groot.
08:46This is our platform to provide technology elements to the robotics industry to accelerate the development of general robotics.
08:55I have one more thing that I want to show you.
08:57Where's DGX-1?
08:59DGX-1 revolutionized artificial intelligence.
09:03And so I just wish that DGX-1 was smaller.
09:10And, you know, so imagine, ladies and gentlemen.
09:30This is NVIDIA's latest AI supercomputer.
09:35And it's fondly called Project Digits right now.
09:42All of NVIDIA software runs on this.
09:44DGX Cloud runs on this.
09:47This sits, well, somewhere, and it's wireless or, you know, connected to your computer.
09:54It's even a workstation if you like it to be.
09:56And you could access it.
09:58You could reach it like a cloud supercomputer.
10:03And NVIDIA's AI works on it.
10:05And it's based on a super secret chip that we've been working on called GB110, the smallest Grace Blackwell that we make.
10:15And this little thing here is in full production.
10:20We're expecting this computer to be available around the May timeframe.
