When model predictions inform downstream decision making, a natural question
is under what conditions decision-makers can simply respond to the
predictions as if they were the true outcomes. Calibration suffices to
guarantee that simple best-response to predictions is optimal. However,
calibration for high-dimensional prediction outcome spaces requires exponential
computational and statistical complexity. The recent relaxation known as
decision calibration ensures the optimality of the simple best-response rule
while requiring only polynomial sample complexity in the dimension of outcomes.
However, known results on calibration and decision calibration crucially rely
on linear loss functions for establishing best-response optimality. A natural
approach to handle nonlinear losses is to map outcomes $y$ into a feature space
$\phi(y)$ of dimension $m$, then approximate losses with linear functions of
$\phi(y)$. Unfortunately, even simple classes of nonlinear functions can demand
exponentially large or infinite feature dimensions $m$. A key open problem is
whether it is possible to achieve decision calibration with sample complexity
independent of~$m$. We begin with a negative result: even verifying decision
calibration under standard deterministic best response inherently requires
sample complexity polynomial in~$m$. Motivated by this lower bound, we
investigate a smooth version of decision calibration in which decision-makers
follow a smooth best-response. This smooth relaxation enables dimension-free
decision calibration algorithms. We introduce algorithms that, given
$\mathrm{poly}(|A|,1/\epsilon)$ samples and any initial predictor~$p$, can
efficiently post-process it to satisfy decision calibration without worsening
accuracy. Our algorithms apply broadly to function classes that can be
well-approximated by bounded-norm functions in (possibly infinite-dimensional)
separable RKHS.
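As an illustrative sketch (the notation here, e.g. $w_{\ell,a}$ and $\mathrm{BR}_\ell$, is ours for exposition and not the paper's formal statement): when a loss $\ell(a,y)$ is well-approximated by a linear functional of the features, $\ell(a,y) \approx \langle w_{\ell,a}, \phi(y)\rangle$, the expected loss under a prediction $p$ depends on $p$ only through the mean features,
\[
\mathbb{E}_{y \sim p}[\ell(a,y)] \;\approx\; \big\langle w_{\ell,a},\, \mathbb{E}_{y \sim p}[\phi(y)] \big\rangle,
\qquad
\mathrm{BR}_\ell(p) \in \arg\min_{a \in A} \mathbb{E}_{y \sim p}[\ell(a,y)],
\]
and decision calibration roughly requires that, for every loss $\ell$ in the class, the loss incurred by best-responding to $p$ matches the loss the predictor itself anticipates:
\[
\mathbb{E}_{(x,y)}\big[\ell(\mathrm{BR}_\ell(p(x)), y)\big] \;\approx\; \mathbb{E}_{x}\,\mathbb{E}_{\tilde y \sim p(x)}\big[\ell(\mathrm{BR}_\ell(p(x)), \tilde y)\big].
\]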