The objective of transfer learning is to enhance estimation and inference on a
target dataset by leveraging knowledge gained from additional sources. Recent
studies have explored transfer learning for independent observations in
complex, high-dimensional models assuming sparsity, yet research on time series
models remains limited. Our focus is on transfer learning for sequences of
observations with temporal dependencies and a more intricate model parameter
structure. Specifically, we investigate the vector autoregressive model (VAR),
a widely recognized model for time series data, where the transition matrix can
be decomposed into the sum of a sparse matrix and a low-rank one. We
propose a new transfer learning algorithm tailored for estimating
high-dimensional VAR models characterized by low-rank and sparse structures.
Additionally, we present a novel approach for selecting informative
observations from auxiliary datasets. Theoretical guarantees are established,
encompassing model parameter consistency, informative set selection, and the
asymptotic distribution of estimators under mild conditions. The latter
facilitates the construction of entry-wise confidence intervals for model
parameters. Finally, we demonstrate the empirical efficacy of our methodologies
through both simulated and real-world datasets.
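To make the model structure concrete, the following is a minimal sketch (not the paper's algorithm) of simulating a VAR(1) process whose transition matrix is the sum of a sparse matrix and a low-rank one; all dimensions, sparsity levels, and scaling constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, r = 20, 500, 2  # dimension, series length, rank (illustrative values)

# Low-rank component: a rank-r factor product, scaled down for stability.
U = rng.normal(size=(p, r))
V = rng.normal(size=(p, r))
L = U @ V.T
L *= 0.3 / np.linalg.norm(L, 2)

# Sparse component: a handful of nonzero entries placed at random.
S = np.zeros((p, p))
idx = rng.choice(p * p, size=10, replace=False)
S.flat[idx] = rng.normal(scale=0.2, size=10)

# Transition matrix with sparse-plus-low-rank structure.
A = S + L
# Rescale so the spectral radius stays below 1 (stationarity).
rho = max(abs(np.linalg.eigvals(A)))
if rho >= 0.95:
    A *= 0.95 / rho

# Simulate X_t = A X_{t-1} + eps_t with Gaussian innovations.
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.normal(size=p)
```

A transfer learning method in this setting would estimate A from the target series while borrowing strength from auxiliary series with similar transition structure.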
arXiv:2504.15691v1