
Representation Autoencoders

Date

October 2025

Organization

New York University

Paper URL

https://arxiv.org/abs/2510.11690

Representation Autoencoders (RAEs) were proposed in October 2025 by a team led by Assistant Professor Saining Xie at New York University; the research was published in the paper "Diffusion Transformers with Representation Autoencoders".

Representation Autoencoders (RAEs) replace the traditional variational autoencoder (VAE) by pairing a pre-trained representation encoder (such as DINO, SigLIP, or MAE) with a trained decoder. The result is high-quality reconstructions and a semantically rich latent space that scales to large transformer architectures. Compared with VAE-based models, RAEs converge faster and produce higher-quality samples during latent diffusion training.
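The following PyTorch sketch illustrates this structure under stated assumptions: the encoder is a frozen ViT-style model that returns a `(B, N, D)` grid of patch tokens, and the `RAEDecoder` module, its dimensions, and the plain pixel loss are illustrative choices, not the authors' implementation (a full setup would typically add perceptual and adversarial reconstruction terms).

```python
import torch
import torch.nn as nn

class RAEDecoder(nn.Module):
    """Trainable ViT-style decoder: latent tokens -> pixel patches."""

    def __init__(self, latent_dim: int = 768, patch: int = 16, img_size: int = 256):
        super().__init__()
        self.patch, self.grid = patch, img_size // patch
        block = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=12, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(block, num_layers=4)
        self.to_pixels = nn.Linear(latent_dim, patch * patch * 3)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        b = tokens.shape[0]
        x = self.to_pixels(self.blocks(tokens))  # (B, N, patch*patch*3)
        x = x.view(b, self.grid, self.grid, self.patch, self.patch, 3)
        # Reassemble the patch grid into a full image: (B, 3, H, W).
        return x.permute(0, 5, 1, 3, 2, 4).reshape(
            b, 3, self.grid * self.patch, self.grid * self.patch
        )

class RAE(nn.Module):
    """Frozen pre-trained representation encoder + trainable decoder."""

    def __init__(self, encoder: nn.Module, latent_dim: int = 768):
        super().__init__()
        self.encoder = encoder.eval()  # pre-trained (e.g. DINO, SigLIP, MAE)
        for p in self.encoder.parameters():
            p.requires_grad = False   # the encoder is never updated
        self.decoder = RAEDecoder(latent_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            z = self.encoder(images)   # (B, N, D) semantic latent tokens
        return self.decoder(z)         # pixel reconstruction

def train_step(model: RAE, images: torch.Tensor,
               opt: torch.optim.Optimizer) -> float:
    """One decoder-only optimization step on a plain pixel loss."""
    recon = model(images)
    loss = nn.functional.mse_loss(recon, images)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Once the decoder is trained, the frozen encoder's token grid serves directly as the latent space in which a diffusion transformer is trained, taking the place of a VAE's compressed latents.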
