NVIDIA AI Creates Editable 3D Models From 2D Images
GPU powerhouse NVIDIA has unveiled PartPacker, an innovative AI system that generates editable 3D models from a single 2D image. Developed in collaboration with Peking University and Stanford University, this technology marks a significant departure from traditional methods that create unified meshes. PartPacker produces part-based models, giving users the freedom to edit or animate individual components separately.
This system is poised to enhance workflows in diverse fields such as 3D printing, animation, gaming, and academic research by simplifying asset creation and allowing for deep customization.
How PartPacker Transforms 2D Images into 3D Models
PartPacker generates highly detailed 3D objects from a single 2D RGB image using a dual volume packing technique built on a Diffusion Transformer architecture. Rather than treating the object as one solid shape, this approach packs its parts into two complementary volumes, keeping touching parts apart so each can later be manipulated independently. The system's network pairs a VAE with a rectified flow model to refine latent codes conditioned on the input image. Uniquely, PartPacker generates two latent codes simultaneously, one per volume, which improves the final model's detail and controllability.
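The core packing idea can be illustrated with a toy sketch: assign parts to a small number of volumes so that no two touching parts share a volume, which reduces to greedy coloring of the part-contact graph. This is a conceptual illustration only, not NVIDIA's implementation; the function name and data format are hypothetical.

```python
# Toy sketch of the volume-packing idea behind PartPacker: put object parts
# into as few volumes as possible so that no two touching parts land in the
# same volume (greedy graph coloring on the part-contact graph).
# Illustrative only; PartPacker's real algorithm differs.

def pack_parts(num_parts, contacts):
    """contacts: set of (i, j) pairs of part indices that touch each other.

    Returns a dict mapping each part index to a volume index.
    """
    neighbors = {i: set() for i in range(num_parts)}
    for i, j in contacts:
        neighbors[i].add(j)
        neighbors[j].add(i)

    volume_of = {}
    for part in range(num_parts):
        # Volumes already taken by this part's touching neighbors.
        used = {volume_of[n] for n in neighbors[part] if n in volume_of}
        v = 0
        while v in used:
            v += 1
        volume_of[part] = v
    return volume_of

# Example: a 4-part object whose contact graph is a cycle (0-1-2-3-0).
assignment = pack_parts(4, {(0, 1), (1, 2), (2, 3), (3, 0)})
print(assignment)  # → {0: 0, 1: 1, 2: 0, 3: 1}; two volumes suffice
```

For many real objects two volumes are enough, which matches the "dual" in dual volume packing: each volume then holds several non-touching parts that can be encoded together without their surfaces merging.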
The output consists of 3D triangle meshes in the GLB format with resolutions up to 512³, optimized for NVIDIA GPU acceleration. This facilitates the production of high-quality assets for games, films, and interactive media. It also supports export to popular 3D printing formats like STL and 3MF, enabling complex multi-material printing projects.
A New Level of Flexibility for 3D Creators
Standard 3D generation techniques often result in monolithic meshes that are difficult to modify. PartPacker solves this problem by creating modular, editable components. This offers a flexible alternative that is highly beneficial for industries that depend on customizable 3D assets. The technology empowers creators and researchers to build editable, part-based 3D models from a single image, unlocking workflows that were previously impractical.
How to Access PartPacker
Researchers and developers can find the PartPacker source code and data processing scripts on its GitHub repository. For a hands-on experience, a live demo is available on Hugging Face, allowing users to upload images and instantly generate 3D models. Pre-trained VAE and Flow models are also available for download to assist with mesh reconstruction and 3D generation. For more details, you can visit the official NVIDIA project page.
The Growing Trend of AI in 3D Generation
PartPacker is part of a larger movement toward AI-powered 3D modeling tools. In 2024, Bambu Lab released PrintMon Maker, an AI generator that creates 3D printable characters from text or image prompts. In 2022, NVIDIA also introduced its Magic3D text-to-3D tool as a competitor to offerings like Google's DreamFusion and Physna Inc.'s generative AI prototype. Magic3D uses a two-stage process to create and refine models, and also supports prompt-based editing.