TY - JOUR
T1 - Complexity of hyperconcepts
AU - Ratsaby, Joel
PY - 2006/10/25
Y1 - 2006/10/25
N2 - In machine learning, maximizing the sample margin can reduce the learning generalization error. Samples on which the target function has a large margin (γ) convey more information, since they yield more accurate hypotheses. Let X be a finite domain and let 𝒮 denote the set of all samples S ⊆ X of fixed cardinality m. Let H be a class of hypotheses h on X. A hyperconcept h′ is defined as an indicator function for the set A ⊆ 𝒮 of all samples on which the corresponding hypothesis h has a margin of at least γ. An estimate of the complexity of the class H′ of hyperconcepts h′ is obtained, with explicit dependence on γ, the pseudo-dimension of H, and m.
KW - Large-margin samples
KW - Learning complexity
KW - Pseudo-dimension
KW - Sample-dependent error-bounds
UR - http://www.scopus.com/inward/record.url?scp=33749241730&partnerID=8YFLogxK
U2 - 10.1016/j.tcs.2006.06.013
DO - 10.1016/j.tcs.2006.06.013
M3 - Article
AN - SCOPUS:33749241730
SN - 0304-3975
VL - 363
SP - 2
EP - 10
JO - Theoretical Computer Science
JF - Theoretical Computer Science
IS - 1
ER -