

New AI Model from UTC Revolutionizes 3D Image Modeling

2025-10-23 · Chuck Wasserstrom · 4 minute read
AI
3D Modeling
Medical Imaging

Figure: Guide to the Langevin Variational Autoencoder computational framework. Image courtesy of Dr. Zihao Wang.

A significant breakthrough in 3D image modeling is emerging from the University of Tennessee at Chattanooga, thanks to the work of Assistant Professor Zihao Wang. Leading a research collaboration, Wang has developed a new approach that promises to make AI in this field more efficient and understandable.

A Breakthrough in 3D Image Modeling

Dr. Wang, who joined the UTC Department of Computer Science and Engineering in 2024, partnered with the French Institute for Research in Computer Science and Automation (Inria). Together, they created a lightweight artificial intelligence model designed to learn the difference between an object's shape and its appearance across images. Their work is detailed in the paper, “Multi-energy Quasi-Symplectic Langevin Inference for Latent Disentangled Learning,” which has been accepted by the journal IEEE Transactions on Image Processing.

Dr. Zihao Wang

Tackling a Long-Standing Challenge

For years, a key challenge in 3D image modeling has been balancing three critical goals: creating models that are lightweight, interpretable, and high-performing. According to Wang, traditional deep learning methods often force a compromise, achieving only two of these three objectives at once. This limitation can result in AI systems that are either too large and slow or too much of a "black box" to be fully understood.

Introducing the Langevin-VAE Framework

The research team's solution is a new computational framework called the Langevin Variational Autoencoder (Langevin-VAE). This innovative model helps computers better distinguish between an object’s fundamental shape and its surface details, a crucial task in fields like medical imaging.
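To make the idea of disentangling shape from appearance concrete, here is a minimal toy decoder sketch. It is not the paper's model: in this illustrative example, one latent controls an object's geometry (its extent in a 1D "image") and a second latent controls its intensity, so changing the appearance latent leaves the shape untouched.

```python
import numpy as np

def decode(shape_z, appearance_z, n=64):
    """Toy disentangled decoder (illustrative, not the paper's architecture):
    shape_z sets the object's spatial extent, appearance_z sets its intensity."""
    xs = np.linspace(-1.0, 1.0, n)
    mask = (np.abs(xs) < shape_z).astype(float)  # geometry from shape latent
    return appearance_z * mask                   # intensity from appearance latent

img_a = decode(0.5, 1.0)
img_b = decode(0.5, 3.0)  # same shape latent, different appearance latent
# The object's support (where it is) is identical; only its intensity differs.
same_shape = np.array_equal(img_a > 0, img_b > 0)
```

A disentangled model learns such a factorization from data rather than having it built in, which is exactly what makes its latent space interpretable.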

By employing a quasi-symplectic integrator, the model simplifies complex calculations. This allows it to bypass the intensive matrix calculations that typically hinder performance when dealing with high-dimensional data.
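The general flavor of Langevin-based inference can be sketched with the simplest variant, unadjusted Langevin dynamics: samples follow the gradient of the log-density plus injected noise. This sketch targets a standard normal distribution; the paper's quasi-symplectic, multi-energy scheme is considerably more sophisticated, so treat this only as background on the underlying mechanism.

```python
import numpy as np

def grad_log_p(x):
    # Score of a standard normal target: d/dx log p(x) = -x.
    return -x

def langevin_chain(n_steps=20000, step=0.1, seed=0):
    """Unadjusted Langevin dynamics:
    x_{t+1} = x_t + (step/2) * grad_log_p(x_t) + sqrt(step) * noise."""
    rng = np.random.default_rng(seed)
    x = 0.0
    xs = np.empty(n_steps)
    for t in range(n_steps):
        x = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * rng.standard_normal()
        xs[t] = x
    return xs[n_steps // 4:]  # discard burn-in

samples = langevin_chain()
```

Note that each update needs only the gradient of the log-density, with no matrix inversions or determinants, which is the kind of cost structure that keeps such methods tractable in high dimensions.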

"Our goal was to make deep generative models both interpretable and efficient," Wang stated. "By integrating energy-based inference, we enable the model to learn how shape and appearance evolve independently without any supervision."

Figure: Shape vs. Appearance explainer. Image courtesy of Dr. Zihao Wang.

Impressive Performance from a Compact Model

The research demonstrated that the Langevin-VAE model could accurately analyze and reconstruct 3D images of the inner ear and heart. Remarkably, it achieved this using a neural network with just 1.7 million parameters, making it significantly smaller than most comparable models.
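For a sense of scale, parameter counts like 1.7 million arise directly from layer arithmetic. The sketch below counts parameters for a hypothetical stack of 3D convolution layers; the channel widths are illustrative assumptions, not the paper's actual architecture.

```python
def conv3d_params(c_in, c_out, k=3):
    """Parameters in one 3D conv layer: one k^3 kernel per (input, output)
    channel pair, plus one bias per output channel."""
    return c_out * (c_in * k ** 3 + 1)

# Hypothetical encoder widths (illustrative only, not the paper's design):
widths = [1, 32, 64, 128, 256]
total = sum(conv3d_params(a, b) for a, b in zip(widths, widths[1:]))
```

A stack like this lands around one million parameters, which shows how a carefully sized 3D network can stay far below the tens or hundreds of millions of parameters typical of large generative models.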

Despite its compact size, the Langevin-VAE surpassed larger, state-of-the-art methods in both the quality of its generated images and its ability to disentangle latent features, showing that high performance need not come at the cost of efficiency and interpretability.

Beyond Medical Imaging

While the immediate applications in medical imaging are clear, Wang notes that the framework has far-reaching potential. It opens new doors for developing interpretable AI systems in other complex fields, including 3D modeling, robotics, and scientific visualization.

University Recognition and Support

This innovative work has been supported by several grants, including the Ruth S. Holmberg Grant for Faculty Excellence, the UTC Department of Computer Science and Engineering, and the French National Research Agency.

Dr. Kumar Yelamarthi, Dean of the College of Engineering and Computer Science, praised the research. “Dr. Wang’s research reflects the core values we champion at UTC CECS: curiosity that drives discovery, critical thinking that solves complex problems, and communication that bridges global collaboration,” he said. “This is the kind of innovation that empowers our students and faculty to lead with purpose.”

Wang's ongoing efforts are further supported by his recent selection as a Ruth S. Holmberg Grant for Faculty Excellence recipient. This funding will support his new project, “Develop a Cross-Modal AI Agent for Medical Image Computing,” which builds directly on this foundational research.

Learn More

Read Original Post
