The Topics over Time (ToT) model captures thematic changes in timestamped
datasets by explicitly modeling publication dates jointly with word
co-occurrence patterns. However, ToT was not formulated in a fully Bayesian
fashion, a flaw that makes it susceptible to stability problems. To address
this issue, we propose a fully Bayesian Topics over Time (BToT) model via the
introduction of a conjugate prior for the Beta distribution. This prior acts as
a regularizer that prevents the online version of the algorithm from making
unstable updates when a topic is poorly represented in a mini-batch. The
characteristics of this prior for the Beta distribution are studied here for the
first time. Still, this model suffers from a difference in scale between the
single-time observations and the multiplicity of words per document. A
variation of BToT, Weighted Bayesian Topics over Time (WBToT), is proposed as a
solution. In WBToT, publication dates are repeated a certain number of times
per document, which balances the relative influence of words and timestamps
throughout the inference process. We have tested our models on two datasets: a
collection of over 200 years of US State of the Union (SOTU) addresses and a
large-scale COVID-19 Twitter corpus of 10 million tweets. The results show that
WBToT captures events better than Latent Dirichlet Allocation and other SOTA
topic models like BERTopic: the median absolute deviation of the topic presence
over time is reduced by $51\%$ and $34\%$, respectively. Our experiments also
demonstrate the superior coherence of WBToT over BToT, which highlights the
importance of balancing the time and word modalities. Finally, we illustrate
the stability of the online optimization algorithm in WBToT, which allows the
application of WBToT to problems that are intractable for standard ToT.
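The weighting idea behind WBToT can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's implementation: the document representation, the `replicate_timestamps` helper, and the `time_weight` parameter are all hypothetical names chosen for exposition.

```python
# Minimal sketch of the WBToT weighting idea (assumed interface, not the
# paper's code): each document carries many word tokens but only one
# timestamp, so the single timestamp is replicated a fixed number of times
# to balance the influence of the two modalities during inference.

def replicate_timestamps(docs, time_weight=10):
    """Given (words, timestamp) pairs, return (words, timestamps) pairs
    where each timestamp is repeated `time_weight` times per document."""
    weighted = []
    for words, timestamp in docs:
        weighted.append((words, [timestamp] * time_weight))
    return weighted

docs = [(["economy", "war", "union"], 1865),
        (["virus", "vaccine", "lockdown"], 2020)]
print(replicate_timestamps(docs, time_weight=3))
```

In the actual model the replicated timestamps would enter the per-document likelihood alongside the words; here the replication step alone is shown, since that is the mechanism the abstract describes.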
arXiv:2504.15220v1