Feature-aligned N-BEATS with Sinkhorn divergence

Joonhun Lee     Myeongho Jeon     Myungjoo Kang
Seoul National University
Kyunghyun Park
Nanyang Technological University

Abstract

We propose Feature-aligned N-BEATS as a domain-generalized time series forecasting model. It is a nontrivial extension of N-BEATS with the doubly residual stacking principle into a representation learning framework. In particular, it revolves around the marginal feature probability measures induced by the intricate composition of the residual and feature-extracting operators of N-BEATS in each stack, and aligns them stack-wise via an approximation of an optimal transport distance known as the Sinkhorn divergence. The training loss consists of an empirical risk minimization term over multiple source domains, i.e., the forecasting loss, and an alignment loss computed with the Sinkhorn divergence. This allows the model to learn invariant features stack-wise across multiple source data sequences while retaining N-BEATS's interpretable design and forecasting power. Comprehensive experimental evaluations with ablation studies are provided, and the results demonstrate the proposed model's forecasting and generalization capabilities.

Main Idea

We devise a stack-wise alignment that minimizes divergences between marginal feature measures on a per-stack basis. This alignment enables the model to learn feature invariance at a well-chosen frequency of loss propagation: instead of aligning every block within each stack, a single alignment per stack mitigates gradient vanishing/exploding by propagating the alignment loss sparsely, while preserving the interpretability of N-BEATS and ample semantic coverage.
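A minimal NumPy sketch of this training objective, combining the forecasting (ERM) loss with one alignment term per stack. The names `stackwise_alignment_loss` and `training_loss` are illustrative, and `divergence` is an abstract callable standing in for the Sinkhorn divergence; the nesting (stacks outermost, source-domain pairs inside) reflects the stack-wise alignment described above, not the authors' exact implementation.

```python
import numpy as np

def stackwise_alignment_loss(stack_features, divergence):
    """Sum divergences between every pair of source domains, once per stack.

    stack_features: list over stacks; each entry is a list over source
    domains of feature arrays with shape (batch, feature_dim).
    divergence: callable (features_a, features_b) -> float, e.g. a
    Sinkhorn divergence between the two empirical feature measures.
    """
    total = 0.0
    for per_domain in stack_features:          # one alignment term per stack
        for i in range(len(per_domain)):
            for j in range(i + 1, len(per_domain)):
                total += divergence(per_domain[i], per_domain[j])
    return total

def training_loss(forecast, target, stack_features, divergence, lam=0.1):
    """Forecasting (ERM) loss plus weighted stack-wise alignment loss."""
    mse = np.mean((forecast - target) ** 2)
    return mse + lam * stackwise_alignment_loss(stack_features, divergence)
```

Because the alignment term is added once per stack rather than once per block, its gradient reaches each stack's feature extractor through a single, sparse path.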
[Figure: Illustration of Feature-aligned N-BEATS]
We adopt the Sinkhorn divergence, an efficient approximation of classic optimal transport distances. This choice is motivated by the substantial theoretical evidence supporting optimal transport distances. In adversarial frameworks, for instance, optimal transport distances have been essential for both the analysis and the computation of divergences between a target measure and the push-forward measure induced by a generator.
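A minimal NumPy sketch of the Sinkhorn divergence between two empirical measures, assuming uniform weights and a squared-Euclidean cost; function names and default parameters (`eps`, `n_iters`) are illustrative. It computes the entropy-regularized transport cost via Sinkhorn fixed-point iterations, then debiases it by subtracting the two self-transport terms, which is what makes the divergence vanish when the two samples coincide.

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.1, n_iters=200):
    """Entropy-regularized OT cost between empirical measures on x and y."""
    # Squared-Euclidean cost matrix between the two samples.
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    n, m = C.shape
    a = np.full(n, 1.0 / n)          # uniform source weights
    b = np.full(m, 1.0 / m)          # uniform target weights
    K = np.exp(-C / eps)             # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):         # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return float(np.sum(P * C))

def sinkhorn_divergence(x, y, eps=0.1):
    """Debiased Sinkhorn divergence: OT(x,y) - (OT(x,x) + OT(y,y)) / 2."""
    return (sinkhorn_cost(x, y, eps)
            - 0.5 * sinkhorn_cost(x, x, eps)
            - 0.5 * sinkhorn_cost(y, y, eps))
```

In practice, libraries such as POT or GeomLoss provide log-domain, GPU-ready implementations that are preferable for training.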

BibTeX


          @inproceedings{lee2024fanbeats,
            title={Feature-aligned N-BEATS with Sinkhorn divergence},
            author={Lee, Joonhun and Jeon, Myeongho and Kang, Myungjoo and Park, Kyunghyun},
            booktitle={The Twelfth International Conference on Learning Representations},
            year={2024}
          }