A foundational problem in reinforcement learning and interactive decision
making is to understand what modeling assumptions lead to sample-efficient
learning guarantees, and what algorithm design principles achieve optimal
sample complexity. Recently, Foster et al. (2021) introduced the
Decision-Estimation Coefficient (DEC), a measure of statistical complexity
that yields upper and lower bounds on the optimal sample complexity for a
general class of problems encompassing bandits and reinforcement learning with
function approximation. In this paper, we introduce a new variant of the DEC,
the Constrained Decision-Estimation Coefficient (both quantities are recalled
informally below), and use it to derive new lower bounds that improve upon
prior work on three fronts:
- They hold in expectation, with no restrictions on the class of algorithms
under consideration.
- They hold globally, and do not rely on the notion of localization used by
Foster et al. (2021).
- Most interestingly, they allow the reference model with respect to which
the DEC is defined to be improper, establishing that improper reference models
play a fundamental role in determining the optimal sample complexity.
We provide upper bounds on regret that scale with the same quantity, thereby
closing all but one of the gaps between upper and lower bounds in Foster et al.
(2021). Our results apply to both the regret and PAC frameworks, and make use
of several new analysis and algorithm design techniques that we anticipate
will find broader use.
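As an informal summary of the resulting characterization (a sketch only:
constants and logarithmic factors are suppressed, $\mathcal{M}$ is taken to be
finite, $T$ denotes the number of rounds, $\mathbf{Reg}$ the cumulative
regret, and the reference models at which the DEC is evaluated are chosen
appropriately and may be improper; the exact constraint radii are as given in
the theorem statements), the upper and lower bounds match up to the choice of
radius:
$$
\mathrm{dec}^{\mathrm{c}}_{\underline{\varepsilon}(T)}(\mathcal{M}) \cdot T
\;\lesssim\;
\min_{\mathrm{alg}} \max_{M \in \mathcal{M}} \mathbb{E}^{M}\big[\mathbf{Reg}\big]
\;\lesssim\;
\mathrm{dec}^{\mathrm{c}}_{\bar{\varepsilon}(T)}(\mathcal{M}) \cdot T,
\qquad
\underline{\varepsilon}(T) \propto \sqrt{1/T},
\quad
\bar{\varepsilon}(T) \propto \sqrt{\log\lvert\mathcal{M}\rvert / T},
$$
and the one remaining gap is the $\log\lvert\mathcal{M}\rvert$ factor inside
the radius $\bar{\varepsilon}(T)$.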