Minimax Learning for Remote Prediction
Abstract
The classical problem of supervised learning is to infer an accurate predictor of a target variable $Y$ from a measured variable $X$ by using a finite number of labeled training samples. Motivated by the increasingly distributed nature of data and decision making, in this paper we consider a variation of this classical problem in which the prediction is performed remotely based on a rate-constrained description $M$ of $X$. Upon receiving $M$, the remote node computes an estimate $\hat Y$ of $Y$. We follow the recent minimax approach to study this learning problem and show that it corresponds to a one-shot minimax noisy source coding problem. We then establish information-theoretic bounds on the risk-rate Lagrangian cost and a general method to design a near-optimal descriptor-estimator pair, which can be viewed as a rate-constrained analog to the maximum conditional entropy principle used in the classical minimax learning problem. Our results show that a naive estimate-compress scheme for rate-constrained prediction is not in general optimal.
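The setting in the abstract can be illustrated with a minimal toy sketch: a descriptor quantizes $X$ into a message $M$ of a few bits, and a remote estimator maps each message to a prediction $\hat Y$. Everything below is a hypothetical illustration, not the paper's construction: the joint source ($Y = X + $ noise), the quantile-based quantizer, the per-message conditional-mean estimator, and the Lagrange multiplier `lam` are all assumptions chosen for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint source (the paper treats general distributions):
# the remote node must predict Y from a rate-limited description of X.
n = 10_000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)

def descriptor(x, k):
    """Quantize X into k (roughly equiprobable) bins -> message M of rate log2(k) bits."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(x, edges)  # message index in {0, ..., k-1}

def fit_estimator(m, y, k):
    """Estimator at the remote node: per-message empirical conditional mean of Y."""
    return np.array([y[m == j].mean() for j in range(k)])

k = 4                                   # 2-bit description
m = descriptor(x, k)
table = fit_estimator(m, y, k)
y_hat = table[m]

rate = np.log2(k)                       # bits per sample
risk = np.mean((y - y_hat) ** 2)        # squared-error risk
lam = 0.1                               # hypothetical rate-risk trade-off weight
cost = risk + lam * rate                # Lagrangian cost of this descriptor-estimator pair
print(f"rate = {rate:.2f} bits, risk = {risk:.4f}, Lagrangian cost = {cost:.4f}")
```

Sweeping `k` (and hence the rate) traces out an empirical risk-rate trade-off for this particular scheme; the paper's bounds concern the best achievable such trade-off over all descriptor-estimator pairs.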
Publication:
arXiv e-prints
Pub Date:
May 2018
arXiv:
arXiv:1806.00071
Bibcode:
2018arXiv180600071L
Keywords:
Computer Science - Information Theory
E-Print:
10 pages, 4 figures, presented in part at ISIT 2018