Abstract
In machine learning, maximizing the sample margin can reduce the generalization error of the learned hypothesis. Samples on which the target function has a large margin γ convey more information, since they yield more accurate hypotheses. Let X be a finite domain and let 𝒮 denote the set of all samples S ⊆ X of fixed cardinality m. Let H be a class of hypotheses h on X. A hyperconcept h′ is defined as the indicator function of the set A ⊆ 𝒮 of all samples on which the corresponding hypothesis h has a margin of at least γ. An estimate of the complexity of the class H′ of hyperconcepts h′ is obtained, with explicit dependence on γ, the pseudo-dimension of H, and m.
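The following is a minimal sketch of the hyperconcept construction described above, not the paper's formalism: it assumes real-valued hypotheses h on labeled points and uses the functional margin y·h(x), with the sample margin taken as the minimum over the sample. All names (`sample_margin`, `hyperconcept`, the threshold hypothesis `h`) are illustrative.

```python
from typing import Callable, Iterable, Tuple

Point = Tuple[float, int]  # (x, y) with label y in {-1, +1} -- an assumption

def sample_margin(h: Callable[[float], float],
                  sample: Iterable[Point]) -> float:
    """Smallest functional margin y * h(x) that h achieves on the sample."""
    return min(y * h(x) for x, y in sample)

def hyperconcept(h: Callable[[float], float],
                 gamma: float) -> Callable[[Iterable[Point]], int]:
    """h': indicator over samples, 1 iff h has margin >= gamma on every point."""
    return lambda sample: int(sample_margin(h, sample) >= gamma)

# Example: a threshold hypothesis on the real line.
h = lambda x: x - 0.5            # positive side: x > 0.5
S = [(0.9, +1), (0.1, -1)]       # both points correct with margin 0.4
print(hyperconcept(h, 0.2)(S))   # 1: margin 0.4 >= 0.2, so S is in A
print(hyperconcept(h, 0.5)(S))   # 0: margin 0.4 < 0.5
```

Under these assumptions, the class H′ in the abstract is the set of such indicator functions, one per hypothesis h ∈ H, and the paper's result bounds its complexity in terms of γ, the pseudo-dimension of H, and m.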
| Original language | English |
|---|---|
| Pages (from-to) | 2-10 |
| Number of pages | 9 |
| Journal | Theoretical Computer Science |
| Volume | 363 |
| Issue number | 1 |
| DOIs | |
| State | Published - 25 Oct 2006 |
| Externally published | Yes |
Keywords
- Large-margin samples
- Learning complexity
- Pseudo-dimension
- Sample-dependent error bounds