Nvidia sees itself as the hardware lord of the "metaverse," and has teased a parallel 3D universe in which our cartoon selves can work, play, and interact.
The chipmaker has given Omniverse, the underlying hardware and software engine that acts as a planet core, fusing together virtual communities in an alternate 3D universe, new plumbing. Omniverse is also being used to create avatars for cars, hospitals, and robots, enhancing real-world experiences.
"We're not telling people to replace what they do; we're enhancing what they do," said Richard Kerris, vice president of the Omni verse platform, during a press conference.
The Omniverse announcements were made this week at the company's GPU Technology Conference, and Nvidia CEO Jensen Huang will discuss many of them in his keynote on Tuesday.
One such announcement is Omniverse Avatar, which can create interactive, intelligent AI avatars that help diners order food, help drivers self-park, and make the roads easier to navigate.
Nvidia showed a conversational avatar that could be used in restaurants to replace servers. When a customer orders food, an AI system, represented by an on-screen avatar, converses in real time using speech recognition and natural language processing, and uses computer vision to read the person's mood and recommend dishes from its knowledge base.
To do so, the avatar has to run several AI models at once, covering speech, image recognition, and context, which is difficult. The company's Unified Compute Framework models each AI capability as a microservice, allowing applications to run on a single system or a hybrid of systems. Nvidia already has some of these AI models in place, such as the Megatron-Turing Natural Language Generation model developed in collaboration with Microsoft, which will now be available on the company's DGX AI hardware.
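Nvidia didn't show the framework in code, but the composition idea is easy to sketch. Below is a minimal, hypothetical Python sketch of chaining three independently served models (speech, vision, recommendation) into one avatar pipeline; every function name, payload, and menu entry is an illustrative stand-in, not the actual Unified Compute Framework API.

```python
"""Hypothetical sketch: composing AI models as microservice-style stages.

Each model (speech recognition, vision, recommendation) would run as its
own service; the avatar app chains their outputs. All names and logic
here are invented for illustration.
"""
from dataclasses import dataclass

@dataclass
class OrderContext:
    utterance: str = ""
    mood: str = "neutral"
    recommendation: str = ""

# Stand-ins for independently deployed model services.
def speech_to_text(audio: bytes) -> str:
    return "I'd like something light for lunch"  # canned ASR output

def estimate_mood(frame: bytes) -> str:
    return "cheerful"  # canned computer-vision output

def recommend_dish(utterance: str, mood: str) -> str:
    menu = {"light": "garden salad", "hearty": "ribeye"}
    key = "light" if "light" in utterance else "hearty"
    return f"{menu[key]} (picked for a {mood} guest)"

def avatar_pipeline(audio: bytes, frame: bytes) -> OrderContext:
    ctx = OrderContext()
    ctx.utterance = speech_to_text(audio)   # model 1: speech
    ctx.mood = estimate_mood(frame)         # model 2: vision
    ctx.recommendation = recommend_dish(ctx.utterance, ctx.mood)  # model 3: recommendation
    return ctx

if __name__ == "__main__":
    print(avatar_pipeline(b"<audio>", b"<frame>"))
```

The appeal of the microservice framing is that each stage can be scaled, swapped, or placed on different hardware without rewriting the pipeline.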
Omniverse Avatar is also the underlying technology in Drive Concierge, an in-car AI assistant that is a "personal concierge in the car that will be on call for you," according to Deepu Talla, vice president and general manager of Embedded and Edge Computing.
AI systems in cars, represented by interactive characters, can understand the driver and the car's occupants from their habits, voices, and interactions. The AI system can then place phone calls and make restaurant recommendations.
Using cameras and other sensors, the system can also detect whether a driver is falling asleep, or alert a rider who has left something in the car. The AI system's messages are displayed on screens as interactive characters or interfaces.
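Nvidia didn't detail how the detection works, but a sleep check of this kind typically reduces to watching eye openness over consecutive camera frames. The sketch below is a hypothetical Python heuristic; the threshold and frame window are invented, and the openness scores would normally come from a face-landmark model rather than a hard-coded list.

```python
# Hypothetical sketch of a drowsiness heuristic an in-car monitor
# might run on camera frames. Constants are illustrative only.
from collections import deque

EYE_CLOSED_THRESHOLD = 0.2   # openness score below this counts as closed
CLOSED_FRAMES_ALARM = 45     # ~1.5 s of closed eyes at 30 fps

def is_drowsy(openness_stream, window=CLOSED_FRAMES_ALARM):
    recent = deque(maxlen=window)
    for score in openness_stream:
        recent.append(score < EYE_CLOSED_THRESHOLD)
        # Alarm once the last `window` frames were all eyes-closed.
        if len(recent) == window and all(recent):
            return True
    return False

# 60 alert frames followed by 50 closed-eye frames triggers the alarm.
frames = [0.8] * 60 + [0.1] * 50
print(is_drowsy(frames))  # True
```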
A metaverse isn't a new concept; it dates back to Linden Lab's Second Life and games like The Sims. Nvidia wants to break down proprietary barriers and create a unified metaverse where people can theoretically jump between universes created by different companies.
During the briefing, Nvidia made no mention of helping Facebook realize its vision of a future centered on the metaverse, which is at the heart of that company's rebranding to Meta.
However, through its software connectors, Nvidia is enticing other companies to bring their 3D work to the Omniverse platform. These include Esri's ArcGIS CityEngine, which aids in creating 3D urban environments, and Replica Studios' AI voice engine, which can simulate authentic voices for animated characters. "What makes it all possible is USD, or Universal Scene Description. USD is the HTML of 3D, and it's a crucial component because it allows these software products to benefit from the virtual worlds we're talking about," Kerris explained. Pixar created USD to facilitate the collaborative sharing of 3D assets.
Nvidia also unveiled Omniverse Enterprise, a subscription service that includes a software stack to help businesses develop 3D workflows that connect to the Omniverse platform. The offering, which will be available through resellers such as Dell, Lenovo, PNY, and Supermicro, will cost $9,000 per year and is aimed at industry verticals such as engineering and entertainment.
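To see why USD invites the HTML comparison, here is a minimal sketch using Pixar's open-source USD Python bindings (the pxr module, installable as the usd-core package). The file name and scene contents are illustrative, and Omniverse itself is not required to run it.

```python
# Minimal sketch of Pixar's USD Python API: a plain, layerable scene
# description that different tools can read and write, much as
# browsers share HTML. Scene contents here are invented.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("diner.usda")       # creates a new .usda file
world = UsdGeom.Xform.Define(stage, "/World")   # a transform prim, like a root element
table = UsdGeom.Cube.Define(stage, "/World/Table")
table.GetSizeAttr().Set(1.5)                    # an authored attribute
stage.GetRootLayer().Save()

# The saved .usda file is human-readable text that any USD-aware
# application (Omniverse, Maya, Houdini, Blender...) can open.
print(stage.GetRootLayer().ExportToString())
```

Because the format is an open, text-based interchange layer rather than a proprietary binary, different vendors' tools can all author pieces of the same scene, which is exactly the role Nvidia needs for a shared metaverse.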
The Omniverse platform is also being used to create synthetic data for training "digital twins," virtual representations of real-world objects. To train robots, Isaac Sim can generate synthetic data grounded in a mix of real-world and virtual data. By allowing new objects, camera views, and lighting to be introduced, the simulator enables the creation of custom data sets for robot training.
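As a rough illustration of that randomization step, the hypothetical Python sketch below draws per-frame scene parameters (object, camera pose, lighting) the way a synthetic-data generator might; the field names and value ranges are invented for the example and are not Isaac Sim's API.

```python
# Hypothetical sketch of domain randomization for synthetic training
# data: vary objects, camera views, and lighting per frame so a vision
# model sees wide variation. All fields and ranges are illustrative.
import random

OBJECTS = ["mug", "wrench", "box", "bottle"]

def random_scene(rng: random.Random) -> dict:
    return {
        "object": rng.choice(OBJECTS),
        "camera_azimuth_deg": rng.uniform(0, 360),
        "camera_elevation_deg": rng.uniform(10, 80),
        "light_intensity_lux": rng.uniform(200, 2000),
        "light_color_temp_k": rng.uniform(2700, 6500),
    }

def generate_dataset(n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    # Each entry would drive one rendered, automatically labeled image.
    return [random_scene(rng) for _ in range(n)]

for sample in generate_dataset(3):
    print(sample)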
Drive Sim is the automotive equivalent, creating realistic scenes for autonomous driving using simulated cameras. The simulator draws on real-world data to train AI models for autonomous driving, and its camera lens models simulate real-world phenomena such as motion blur, rolling shutter, and LED flicker.
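LED flicker is a good example of why such sensor models matter: LED lights (traffic signals, brake lights) are pulsed, and a short camera exposure can land entirely in an "off" interval, so the light blinks across frames. The hypothetical Python sketch below reproduces the effect with invented constants; it illustrates the phenomenon and is not Drive Sim code.

```python
# Hypothetical sketch: why pulsed LEDs flicker on camera. An LED driven
# at 110 Hz PWM is sometimes off for the whole 1 ms exposure of a
# 30 fps camera, so it appears dark in some frames. Constants invented.
LED_PWM_HZ = 110.0
LED_DUTY = 0.3            # LED is on for 30% of each PWM cycle
FRAME_RATE_HZ = 30.0
EXPOSURE_S = 0.001        # 1 ms exposure, short enough to miss pulses

def led_is_on(t: float) -> bool:
    phase = (t * LED_PWM_HZ) % 1.0
    return phase < LED_DUTY

def frame_sees_led(frame_start: float, steps: int = 100) -> bool:
    # The LED registers if it is on at any point during the exposure.
    dt = EXPOSURE_S / steps
    return any(led_is_on(frame_start + i * dt) for i in range(steps))

for frame in range(10):
    t = frame / FRAME_RATE_HZ
    print(f"frame {frame}: LED {'visible' if frame_sees_led(t) else 'dark'}")
```

Running this shows the LED visible in some frames and dark in others, the same artifact a real automotive camera records and a simulator must reproduce for training data to transfer.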
Nvidia collaborates closely with sensor manufacturers so that Drive Sim replicates their hardware accurately. According to Danny Shapiro, vice president of automotive at Nvidia, the camera, radar, lidar, and ultrasonic sensor models are all path-traced using RTX graphics technology. The company wove some hardware announcements into the overall Omniverse story as well.