On how complexity affects the stability of a predictor

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Given a finite random sample from a Markov chain environment, we select a predictor that minimizes a criterion function and refer to it as being calibrated to its environment. If its prediction error is not bounded by its criterion value, we say that the criterion fails. We define the predictor’s complexity to be the amount of uncertainty in detecting that the criterion fails given that it fails. We define a predictor’s stability to be the discrepancy between the average number of prediction errors that it makes on two random samples. We show that complexity is inversely proportional to the level of adaptivity of the calibrated predictor to its random environment. The calibrated predictor becomes less stable as its complexity increases or as its level of adaptivity decreases.
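As a rough illustration of the stability notion described above, the following minimal Python sketch calibrates a predictor on one trajectory of a Markov chain and compares its average error on two fresh samples. The two-state chain, the empirical-error criterion, and all names are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch, assuming a toy two-state Markov chain and an
# empirical-error criterion; neither is taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative transition matrix (an assumption for this example).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def sample_chain(n, rng):
    """Draw a length-n trajectory from the chain."""
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)
    for t in range(1, n):
        # Move to state 1 with probability P[current state, 1].
        x[t] = rng.random() < P[x[t - 1], 1]
    return x

def calibrate(sample):
    """Select the predictor minimizing the empirical error criterion:
    for each state, predict the most frequent next state in the sample."""
    counts = np.zeros((2, 2))
    for a, b in zip(sample[:-1], sample[1:]):
        counts[a, b] += 1
    return counts.argmax(axis=1)  # maps state -> predicted next state

def avg_errors(predictor, sample):
    """Average number of prediction errors the predictor makes on a sample."""
    preds = predictor[sample[:-1]]
    return np.mean(preds != sample[1:])

f = calibrate(sample_chain(1000, rng))  # predictor calibrated to its environment
s1, s2 = sample_chain(500, rng), sample_chain(500, rng)
# Stability in the abstract's sense: discrepancy between the average
# errors on two random samples (smaller gap = more stable).
stability_gap = abs(avg_errors(f, s1) - avg_errors(f, s2))
print(f"error-rate discrepancy between the two samples: {stability_gap:.4f}")
```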

Original language: English
Title of host publication: International Conference on Artificial Intelligence and Statistics, AISTATS 2018
Pages: 161-167
Number of pages: 7
State: Published - 2018
Event: 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018 - Playa Blanca, Lanzarote, Canary Islands, Spain
Duration: 9 Apr 2018 - 11 Apr 2018

Conference

Conference: 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018
Country/Territory: Spain
City: Playa Blanca, Lanzarote, Canary Islands
Period: 9/04/18 - 11/04/18
