DepthART: Monocular Depth Estimation as Autoregressive Refinement Task

Bulat Gabdullin1,2, Nina Konovalova1, Nikolay Patakin1, Dmitry Senushkin†,1, Anton Konushin1

1AIRI, Moscow, Russia
2HSE University
†Project leader


Figure 1: We present the Depth Autoregressive Transformer for monocular depth estimation, trained using our novel procedure formulated as the Depth Autoregressive Refinement Task (DepthART). Our model iteratively enhances the depth map by predicting next-scale residuals, resulting in a highly detailed final estimate.
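
To make the iterative refinement concrete, the following is a minimal, self-contained sketch of a VAR-style coarse-to-fine loop: at each scale the model predicts a token map, the corresponding codebook embeddings are upsampled and added as a residual to the running feature estimate. All names and sizes here (SCALES, FEAT_RES, FEAT_DIM, VOCAB, the stand-in predict_scale) are illustrative assumptions, not the released model or API.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    SCALES = [1, 2, 4, 8, 16]      # token-map resolutions, coarse to fine (assumed)
    FEAT_RES, FEAT_DIM, VOCAB = 16, 32, 4096  # latent feature size and codebook size (assumed)

    codebook = nn.Embedding(VOCAB, FEAT_DIM)                        # stand-in for the VQ-VAE codebook
    predict_scale = lambda ctx, s: torch.randint(0, VOCAB, (1, s, s))  # stand-in for the transformer

    f_hat = torch.zeros(1, FEAT_DIM, FEAT_RES, FEAT_RES)  # running depth-feature reconstruction
    for s in SCALES:
        tokens = predict_scale(f_hat, s)                   # predict the next-scale token map
        emb = codebook(tokens).permute(0, 3, 1, 2)         # (1, C, s, s) codebook embeddings
        residual = F.interpolate(emb, size=FEAT_RES, mode="bilinear", align_corners=False)
        f_hat = f_hat + residual                           # each scale refines the previous estimate

    # f_hat would then be passed through the VQ-VAE decoder to obtain the depth map.
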

Abstract

Despite the recent success of discriminative approaches to monocular depth estimation, their quality remains limited by the training datasets. Generative approaches mitigate this issue by leveraging strong priors derived from training on internet-scale datasets. Recent studies have demonstrated that large text-to-image diffusion models achieve state-of-the-art results in depth estimation when fine-tuned on small depth datasets. Concurrently, autoregressive generative approaches, such as Visual AutoRegressive modeling (VAR), have shown promising results in conditional image synthesis. Following the visual autoregressive modeling paradigm, we introduce the first autoregressive depth estimation model based on the visual autoregressive transformer. Our primary contribution is DepthART, a novel training method formulated as a Depth Autoregressive Refinement Task. Unlike the original VAR training procedure, which employs static targets, our method uses a dynamic target formulation that enables model self-refinement and incorporates multi-modal guidance during training. Specifically, we use model predictions as inputs instead of ground-truth token maps during training, framing the objective as residual minimization. Our experiments demonstrate that the proposed training approach significantly outperforms visual autoregressive modeling via next-scale prediction on the depth estimation task. The visual autoregressive transformer trained with our approach on Hypersim achieves superior results on a set of unseen benchmarks compared to other generative and discriminative baselines.

Contributions

  1. We introduce a novel application of autoregressive image modeling for depth estimation by developing the depth autoregressive transformer.
  2. We propose a new training paradigm for depth estimation, termed the Depth Autoregressive Refinement Task (DepthART), which facilitates self-refinement and incorporates multi-modal guidance during training.
  3. We demonstrate, through extensive experiments, that the depth autoregressive transformer trained with DepthART achieves competitive or superior performance compared to existing baselines across several benchmarks not seen during training.

Method

Training procedure


Figure 2: We highlight the key differences between the original VAR approach (left) and our proposed training approach DepthART (right). In the VAR approach, quantized token maps provided by the VQ-VAE serve as both inputs and targets during training. Our DepthART method introduces a refinement process (highlighted in the red box), where the model self-refines by using its predicted token maps as inputs instead of predefined VQ-VAE scales. The targets are defined as the quantized residuals between the encoded depth features f_D and the cumulative model predictions up to the current scale. Depth features f_D are extracted from the VQ-VAE encoder without undergoing quantization.
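
The dynamic-target construction in the red box can be summarized with a short sketch. Below is a hedged, self-contained toy example of how targets could be formed as quantized residuals between the encoded depth features f_D and the accumulated model predictions, with the model's own predictions fed back as inputs. The shapes, vocabulary size, nearest-neighbor quantizer, and stand-in transformer logits are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    SCALES = [1, 2, 4, 8, 16]                  # token-map resolutions, coarse to fine (assumed)
    FEAT_RES, FEAT_DIM, VOCAB = 16, 32, 4096   # latent feature size and codebook size (assumed)

    codebook = nn.Embedding(VOCAB, FEAT_DIM)                 # stand-in for the VQ-VAE codebook
    f_D = torch.randn(1, FEAT_DIM, FEAT_RES, FEAT_RES)       # depth features from the VQ-VAE encoder (no quantization)

    def quantize(x, s):
        # Downsample the residual to scale s and pick the nearest codebook entry per position.
        x_s = F.interpolate(x, size=s, mode="area")                    # (1, C, s, s)
        flat = x_s.permute(0, 2, 3, 1).reshape(-1, FEAT_DIM)           # (s*s, C)
        dists = torch.cdist(flat, codebook.weight)                     # distances to all codes
        return dists.argmin(dim=1).reshape(1, s, s)                    # token indices

    f_hat = torch.zeros_like(f_D)   # cumulative reconstruction built from model predictions
    loss = 0.0
    for s in SCALES:
        # Dynamic target: quantized residual between f_D and what the model has produced so far.
        target_tokens = quantize(f_D - f_hat, s)

        # Stand-in for the transformer's next-scale logits, conditioned on the image and f_hat.
        logits = torch.randn(1, s, s, VOCAB, requires_grad=True)
        loss = loss + F.cross_entropy(logits.reshape(-1, VOCAB), target_tokens.reshape(-1))

        # Self-refinement: feed the model's own prediction back in, not the VQ-VAE ground truth.
        pred_tokens = logits.argmax(dim=-1)
        emb = codebook(pred_tokens).permute(0, 3, 1, 2)
        f_hat = (f_hat + F.interpolate(emb, size=FEAT_RES, mode="bilinear", align_corners=False)).detach()

    loss.backward()
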

Results

Depth Maps

Figure 3: Depth maps generated by DepthART.

Point Clouds

Figure 4: Point clouds generated by DepthART.

Comparison with other methods


Quantitative evaluation across benchmarks not seen during training. Overall performance is summarized using a rank metric. Our depth autoregressive transformer, trained with DepthART, outperforms the original VAR training procedure and achieves the highest overall performance among a diverse set of depth estimation baselines.

Citation


@article{gabdullin2024depthart,
    title={DepthART: Monocular Depth Estimation as Autoregressive Refinement Task},
    author={Gabdullin, Bulat and Konovalova, Nina and Patakin, Nikolay and Senushkin, Dmitry and Konushin, Anton},
    journal={arXiv preprint arXiv:2409.15010},
    year={2024}
}