Despite the recent success of discriminative approaches to monocular depth estimation, their quality remains limited by the available training datasets. Generative approaches mitigate this issue by leveraging strong priors derived from training on internet-scale datasets. Recent studies have demonstrated that large text-to-image diffusion models achieve state-of-the-art results in depth estimation when fine-tuned on small depth datasets. Concurrently, autoregressive generative approaches, such as Visual AutoRegressive modeling (VAR), have shown promising results in conditional image synthesis. Following this paradigm, we introduce the first autoregressive depth estimation model built on the visual autoregressive transformer. Our primary contribution is DepthART -- a novel training method formulated as a Depth Autoregressive Refinement Task. Unlike the original VAR training procedure, which employs static targets, our method uses a dynamic target formulation that enables model self-refinement and incorporates multi-modal guidance during training. Specifically, we use model predictions as inputs instead of ground-truth token maps during training, framing the objective as residual minimization. Our experiments demonstrate that the proposed training approach significantly outperforms visual autoregressive modeling via next-scale prediction on the depth estimation task. The visual autoregressive transformer trained with our approach on Hypersim achieves superior results on a set of unseen benchmarks compared to other generative and discriminative baselines.
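To make the next-scale-prediction setup concrete, below is a minimal, illustrative sketch of how a VAR-style transformer could predict depth token maps scale by scale at inference time. All names, shapes, and modules (ToyVARTransformer, the stand-in codebook, greedy decoding) are assumptions for illustration and not the DepthART implementation; the real model conditions on image features through attention over all previously predicted scales.

```python
# Illustrative next-scale-prediction loop for depth tokens (not the paper's code).
import torch
import torch.nn.functional as F

SCALES = [1, 2, 4, 8, 16]            # token-map resolutions, coarse to fine (assumed)
CODEBOOK, DIM = 512, 32              # illustrative codebook size / latent dimension

codebook = torch.randn(CODEBOOK, DIM)  # stand-in for the VQ-VAE codebook

class ToyVARTransformer(torch.nn.Module):
    """Stand-in for the autoregressive transformer: maps the image condition and
    the tokens predicted so far to logits for the next-scale token map."""
    def __init__(self):
        super().__init__()
        self.head = torch.nn.Linear(DIM, CODEBOOK)
    def forward(self, cond_feat, prev_latent, next_hw):
        # The real model attends over all previous scales; here we simply
        # upsample the running latent plus condition and project to logits.
        x = F.interpolate(prev_latent + cond_feat, size=(next_hw, next_hw),
                          mode="bilinear", align_corners=False)
        return self.head(x.permute(0, 2, 3, 1))       # (B, h, w, CODEBOOK)

@torch.no_grad()
def predict_depth_tokens(model, cond_feat):
    """Autoregressively predict token maps scale by scale (next-scale prediction)."""
    b = cond_feat.shape[0]
    latent = torch.zeros(b, DIM, 1, 1)                 # running latent at the coarsest scale
    token_maps = []
    for hw in SCALES:
        logits = model(cond_feat, latent, hw)          # (B, hw, hw, CODEBOOK)
        ids = logits.argmax(dim=-1)                    # greedy decoding for brevity
        token_maps.append(ids)
        # Embed the predicted tokens and fold them into the running latent.
        emb = codebook[ids].permute(0, 3, 1, 2)        # (B, DIM, hw, hw)
        latent = F.interpolate(latent, size=(hw, hw), mode="bilinear",
                               align_corners=False) + emb
    return token_maps                                  # decoded to depth by the VQ-VAE decoder

model = ToyVARTransformer()
cond = torch.randn(1, DIM, 1, 1)                       # stand-in image condition feature
maps = predict_depth_tokens(model, cond)
print([m.shape for m in maps])
```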
Figure 2: We highlight the key differences between the original VAR approach (left) and our proposed training approach DepthART (right). In the VAR approach, quantized token maps provided by the VQ-VAE serve as both inputs and targets during training. Our DepthART method introduces a refinement process (highlighted in the red box), where the model self-refines by using its predicted token maps as inputs instead of predefined VQ-VAE scales. The targets are defined as the quantized residuals between the encoded depth features f_D and the cumulative model predictions up to the current scale. Depth features f_D are extracted from the VQ-VAE encoder without undergoing quantization.
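The dynamic-target construction from Figure 2 can be sketched as follows, under simplifying assumptions: at each scale the model's own predicted token map (not the VQ-VAE scale) is embedded and accumulated, and the cross-entropy target is the nearest-codebook quantization of the residual between the unquantized depth features f_D and that accumulated prediction. Function names, shapes, and the bilinear up/down-sampling are illustrative, not the paper's code.

```python
# Hedged sketch of DepthART-style dynamic targets (illustrative, not official code).
import torch
import torch.nn.functional as F

def quantize_to_tokens(residual, codebook):
    """Nearest-codebook-entry quantization of a (B, C, h, w) residual map."""
    b, c, h, w = residual.shape
    flat = residual.permute(0, 2, 3, 1).reshape(-1, c)           # (B*h*w, C)
    dists = torch.cdist(flat, codebook)                          # (B*h*w, K)
    return dists.argmin(dim=1).view(b, h, w)

def depthart_targets(f_d, predicted_maps, codebook, scales):
    """Build per-scale targets as quantized residuals f_D - cumulative prediction.

    f_d:            (B, C, H, W) depth features from the VQ-VAE encoder (no quantization)
    predicted_maps: list of (B, h_s, w_s) token maps already predicted by the model
    """
    b, c, H, W = f_d.shape
    cumulative = torch.zeros_like(f_d)        # model prediction accumulated at full resolution
    targets = []
    for s, hw in enumerate(scales):
        # Residual still to be explained, downsampled to the current scale.
        residual = F.interpolate(f_d - cumulative, size=(hw, hw),
                                 mode="bilinear", align_corners=False)
        targets.append(quantize_to_tokens(residual, codebook))   # target for scale s
        # The *input* to the next scale is the model's own prediction, not the target:
        pred_emb = codebook[predicted_maps[s]].permute(0, 3, 1, 2)    # (B, C, hw, hw)
        cumulative = cumulative + F.interpolate(pred_emb, size=(H, W),
                                                mode="bilinear", align_corners=False)
    return targets

# Toy usage with random tensors standing in for real encoder features and predictions.
scales = [1, 2, 4, 8]
K, C, H = 512, 32, 8
codebook = torch.randn(K, C)
f_d = torch.randn(2, C, H, H)
predicted = [torch.randint(0, K, (2, s, s)) for s in scales]
tgts = depthart_targets(f_d, predicted, codebook, scales)
print([t.shape for t in tgts])
```

In this sketch the targets depend on what the model has already predicted, so they change as training progresses; the static VAR procedure would instead take both inputs and targets directly from the fixed multi-scale VQ-VAE token maps.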
Figure 3: Depth maps generated by DepthART.
Figure 4: Point clouds generated by DepthART.
Quantitative evaluation across benchmarks not seen during training. Overall performance is summarized using a rank metric. Our depth autoregressive transformer, trained with DepthART, outperforms the original VAR training procedure and achieves the highest overall performance among a diverse set of depth estimation baselines.
@article{gabdullin2024depthart,
title={DepthART: Monocular Depth Estimation as Autoregressive Refinement Task},
author={Gabdullin, Bulat and Konovalova, Nina and Patakin, Nikolay and Senushkin, Dmitry and Konushin, Anton},
journal={arXiv preprint arXiv:2409.15010},
year={2024}
}