TY - GEN
T1 - On the randomness in learning
AU - Ratsaby, Joel
PY - 2009
Y1 - 2009
N2 - Consider a random binary sequence X(n) of random variables Xt, t = 1, 2, . . . , n, for instance one generated by a Markov source (teacher) of order k* (each state represented by k* bits). Assume that the probability of the event Xt = 1 is constant and denote it by β. Consider a learner based on a parametric model, for instance a Markov model of order k, which trains on a sequence x(m) randomly drawn by the teacher. The learner's performance is tested by giving it a sequence x(n) (generated by the teacher) and checking its prediction on every bit of x(n). An error occurs at time t if the learner's prediction Yt differs from the true bit value Xt. Denote by ξ(n) the sequence of errors, where the error bit ξt at time t equals 1 or 0 according to whether an error occurs or not, respectively. Consider the subsequence ξ(v) of ξ(n) which corresponds to the errors of predicting a 0, i.e., ξ(v) consists of the bits of ξ(n) only at times t such that Yt = 0. In this paper we compute an estimate of the deviation of the frequency of 1s in ξ(v) from β. The result shows that the level of randomness of ξ(v) decreases as the complexity of the learner increases.
UR - http://www.scopus.com/inward/record.url?scp=77949622819&partnerID=8YFLogxK
U2 - 10.1109/ICCCYB.2009.5393947
DO - 10.1109/ICCCYB.2009.5393947
M3 - Conference contribution
AN - SCOPUS:77949622819
SN - 9781424453115
T3 - ICCC 2009 - IEEE 7th International Conference on Computational Cybernetics
SP - 141
EP - 145
BT - ICCC 2009 - IEEE 7th International Conference on Computational Cybernetics
T2 - IEEE 7th International Conference on Computational Cybernetics, ICCC 2009
Y2 - 26 November 2009 through 29 November 2009
ER -