Nvidia Introduces Neuralangelo: AI Model Transforming Video Content into High-Precision 3D Models

Nvidia unveiled Neuralangelo

NVIDIA Research has recently unveiled an extraordinary AI model called Neuralangelo, showcasing the power of neural networks in converting ordinary 2D videos into captivating 3D scenes. This groundbreaking technology represents a significant advancement in the field of 3D reconstruction, enabling the creation of realistic virtual replicas of real-world objects, buildings, and sculptures. With its potential to revolutionize creative workflows across various industries, Neuralangelo stands as a testament to the remarkable capabilities of AI.

Realistic 3D Object Generation with Neuralangelo

Inspired by the renowned artist Michelangelo, Neuralangelo employs neural networks to sculpt intricate and mesmerizing 3D structures from digital blocks of information. This advanced AI model excels at generating fine details and textures, empowering creative professionals to import lifelike 3D objects into design applications. Whether it’s for art, video game development, robotics, or industrial digital twins, Neuralangelo breathes life into visions with unprecedented realism.

Import 3D Objects into Design Apps for Art, Gaming, and Robotics

One of Neuralangelo’s standout features is its ability to faithfully translate complex material textures from 2D videos to 3D assets. From capturing the roughness of roof shingles to the transparency of glass or the smoothness of marble, Neuralangelo surpasses previous methods in accurately representing real-world textures. This breakthrough simplifies the rapid creation of virtual objects for projects using footage captured on smartphones.

Ming-Yu Liu, senior director of research and co-author of Neuralangelo’s paper, highlights the immense benefits this AI model offers to creators by enabling them to recreate the real world in digital environments. Neuralangelo’s potential extends from small statues to massive buildings, allowing developers to import highly detailed objects into virtual environments for video games or industrial digital twins.

Related: Nvidia Plans to Build Israeli Supercomputer to Meet Soaring AI Demand

Capturing Real-World Textures with Neuralangelo

In an impressive demonstration, NVIDIA researchers showcased Neuralangelo’s capabilities by reconstructing iconic objects like Michelangelo’s David, as well as everyday items such as flatbed trucks. Neuralangelo can even recreate the interior and exterior of buildings, as demonstrated by a detailed 3D model of the park at NVIDIA’s Bay Area campus.

To overcome the limitations of previous AI models in accurately capturing repetitive texture patterns, homogenous colors, and strong color variations, Neuralangelo incorporates instant neural graphics primitives from NVIDIA Instant NeRF. By analyzing 2D videos captured from various angles, Neuralangelo selects frames that provide different viewpoints, allowing the model to comprehend the depth, size, and shape of the scene.
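To make the multi-view idea concrete, here is a minimal sketch, assuming the OpenCV library (cv2) is available, of how smartphone footage might be thinned to evenly spaced frames so a reconstruction system sees the scene from varied viewpoints. The file name and frame count are hypothetical, and this is not NVIDIA’s own pipeline code.

import cv2  # OpenCV, used here only to decode the video

def sample_frames(video_path, num_frames=60):
    """Pull roughly evenly spaced frames from a video so a multi-view
    reconstruction pipeline gets coverage from different viewpoints."""
    capture = cv2.VideoCapture(video_path)
    total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // num_frames, 1)
    frames = []
    for index in range(0, total, step):
        capture.set(cv2.CAP_PROP_POS_FRAMES, index)
        ok, frame = capture.read()
        if ok:
            frames.append(frame)
    capture.release()
    return frames

# Hypothetical usage: frames from a phone walk-around of a statue.
views = sample_frames("statue_walkaround.mp4", num_frames=60)
print(f"Selected {len(views)} candidate viewpoints")

Real systems also estimate camera poses for these frames; the sketch only covers the frame-selection step described above.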

The AI then generates a rough 3D representation, akin to a sculptor shaping their creation. Subsequently, the model refines the details with precision, similar to a sculptor meticulously chiseling stone to mimic intricate textures. The result is stunning 3D objects or large-scale scenes with applications in virtual reality, digital twins, and robotics development, pushing the boundaries of immersive experiences.
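The coarse-to-fine refinement can be pictured with a small scheduling sketch. This is only an illustration under assumed level and step counts, not NVIDIA’s implementation: it shows how finer resolution levels of a multi-resolution representation could be unmasked gradually so the broad shape settles before surface detail is added.

import numpy as np

def level_mask(step, num_levels=16, warmup_steps=2000, steps_per_level=1000):
    """Coarse-to-fine schedule: start with only the coarsest feature
    level active, then enable one finer level at a time as training
    progresses."""
    active = 1 + max(0, (step - warmup_steps) // steps_per_level)
    active = min(active, num_levels)
    mask = np.zeros(num_levels)
    mask[:active] = 1.0
    return mask

# Early steps use only the coarse end of the representation...
print(level_mask(step=0))
# ...while later steps progressively switch on the finer levels.
print(level_mask(step=10000))

The actual schedule and optimization details are specified in the Neuralangelo paper; the sketch only conveys the ordering from rough shape to fine texture.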

Related: Chip Stocks, Led by Nvidia, Experience Strong Week Thanks to AI

Neuralangelo Presentation at CVPR 2023

NVIDIA Research will present Neuralangelo, along with nearly 30 other projects, at the Conference on Computer Vision and Pattern Recognition (CVPR) in Vancouver from June 18th to 22nd. This event will feature a wide range of topics, including pose estimation, 3D reconstruction, and video generation. Among these projects, DiffCollage stands out with its diffusion method for creating large-scale content such as landscape-orientation images, 360-degree panoramas, and looped-motion images. By treating smaller images as sections of a larger visual collage, DiffCollage allows diffusion models to generate cohesive-looking content without requiring training on images of the same scale.


Source: ithome.com

