On complexity and randomness of Markov-chain prediction

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, peer-reviewed

Abstract

Let {Xt : t ∈ Z} be a sequence of binary random variables generated by a stationary Markov source of order k∗, and let β be the probability of the event "Xt = 1". Consider a learner based on a Markov model of order k, where k may differ from k∗, trained on a sample sequence X(m) drawn randomly from the source. The learner's performance is tested by having it predict the bits of a test sequence X(n), also generated by the source. An error occurs at time t if the prediction Yt differs from the true bit value Xt, 1 ≤ t ≤ n. Denote by Ξ(n) the sequence of errors, where the error bit Ξt equals 1 if an error occurred at time t and 0 otherwise. Consider the subsequence Ξ(ν) of Ξ(n) corresponding to the errors made when predicting a 0, that is, Ξ(ν) consists of those bits Ξt at times t where Yt = 0. In this paper we compute an upper bound on the absolute deviation between the frequency of 1 in Ξ(ν) and β. The bound depends explicitly on k, k∗, m, ν, and n, and shows that the larger k is, or the larger the difference k − k∗, the less random Ξ(ν) can be.
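The setup described above can be sketched empirically. The snippet below is a minimal illustration, not the paper's method: it assumes a hypothetical order-1 binary source (k∗ = 1, transition probabilities chosen arbitrarily), trains an order-k empirical predictor on a sample sequence, predicts a test sequence, collects the error bits at times where the prediction is 0, and measures the deviation between the frequency of 1 among those bits and the empirical β.

```python
import random

def gen_markov(trans, k_star, n, rng):
    # Generate n bits from an order-k_star binary Markov source;
    # trans maps each length-k_star context tuple to P(next bit = 1).
    seq = [rng.randint(0, 1) for _ in range(k_star)]  # arbitrary initial context
    for _ in range(n):
        ctx = tuple(seq[-k_star:])
        seq.append(1 if rng.random() < trans[ctx] else 0)
    return seq[k_star:]

def train(sample, k):
    # Empirical counts (ones, total) of the next bit for each order-k context.
    counts = {}
    for t in range(k, len(sample)):
        ctx = tuple(sample[t - k:t])
        ones, tot = counts.get(ctx, (0, 0))
        counts[ctx] = (ones + sample[t], tot + 1)
    return counts

def predict(counts, ctx):
    # Predict the majority bit seen after this context (0 for unseen contexts).
    ones, tot = counts.get(ctx, (0, 0))
    return 1 if tot > 0 and ones / tot >= 0.5 else 0

rng = random.Random(0)
k_star = 1
trans = {(0,): 0.3, (1,): 0.8}          # hypothetical order-1 source
sample = gen_markov(trans, k_star, 20000, rng)   # training sequence X(m)
test = gen_markov(trans, k_star, 20000, rng)     # test sequence X(n)

k = 3                                    # learner's order, here k != k_star
counts = train(sample, k)

# Error bits at the times where the learner predicts 0 (the subsequence Xi(nu)).
errors_on_zero = []
for t in range(k, len(test)):
    y = predict(counts, tuple(test[t - k:t]))
    if y == 0:
        errors_on_zero.append(1 if test[t] != y else 0)

beta = sum(test) / len(test)             # empirical P(Xt = 1)
freq = sum(errors_on_zero) / len(errors_on_zero)
print(abs(freq - beta))                  # absolute deviation bounded in the paper
```

With a well-trained predictor the error frequency on 0-predictions falls below β, so the deviation is visible even in this toy run; the paper's contribution is an explicit bound on it in terms of k, k∗, m, ν, and n.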

Original language: English
Title of host publication: 2015 IEEE Information Theory Workshop, ITW 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781479955268
DOIs
State: Published - 24 Jun 2015
Event: 2015 IEEE Information Theory Workshop, ITW 2015 - Jerusalem, Israel
Duration: 26 Apr 2015 – 1 May 2015

Publication series

Name: 2015 IEEE Information Theory Workshop, ITW 2015

Conference

Conference: 2015 IEEE Information Theory Workshop, ITW 2015
Country/Territory: Israel
City: Jerusalem
Period: 26/04/15 – 1/05/15

