Abstract
Conventional uncertainty-aware temporal difference (TD) learning
methods often rely on simplistic assumptions, most commonly a
zero-mean Gaussian distribution for TD errors. Such oversimplification
can lead to inaccurate error representations and compromised
uncertainty estimation. In this paper, we introduce a novel framework
for generalized Gaussian error modeling in deep reinforcement
learning, applicable to both discrete and continuous control settings.
Our framework enhances the flexibility of error distribution modeling
by incorporating an additional higher-order moment, kurtosis, thereby
improving the estimation and mitigation of
data-dependent noise, i.e., aleatoric uncertainty. We examine the
influence of the shape parameter of the generalized Gaussian
distribution (GGD) on aleatoric uncertainty and provide a closed-form
expression that demonstrates an inverse relationship between
uncertainty and the shape parameter. Additionally, we propose a
theoretically grounded weighting scheme to fully leverage the GGD. To
address epistemic uncertainty, we enhance batch inverse-variance
weighting by incorporating bias reduction and kurtosis considerations,
resulting in improved robustness. Extensive experimental evaluations
using policy gradient algorithms demonstrate the consistent efficacy
of our method, yielding significant performance improvements.