Large-width machine learning algorithm

Martin Anthony, Joel Ratsaby

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

We introduce an algorithm, called Large Width (LW), that produces a multi-category classifier (defined on a distance space) with the property that the classifier has a large ‘sample width.’ (Width is a notion similar to classification margin.) LW is an incremental instance-based (also known as ‘lazy’) learning algorithm. Given a sample of labeled and unlabeled examples, it iteratively picks the next unlabeled example and classifies it while maintaining a large distance between each labeled example and its nearest-unlike prototype. (A prototype is either a labeled example or an unlabeled example that has already been classified.) Thus, LW gives higher priority to unlabeled points whose classification decision ‘interferes’ less with the labeled sample. On a collection of UCI benchmark datasets, the LW algorithm ranks at the top when compared to 11 instance-based learning algorithms (or configurations). When compared to the best instance-based candidate, MLP, SVM, a decision tree learner (C4.5), and Naive Bayes, LW ranks second, behind only MLP, which takes first place by a single extra win over LW. The LW algorithm can be implemented as a parallel distributed process to yield a high speedup factor, and it is suitable for any distance space; the distance function need not satisfy the conditions of a metric.
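
The greedy selection rule described in the abstract can be sketched in a few lines of Python. The sketch below is only an illustration under stated assumptions: the function name `lw_classify_unlabeled`, the exhaustive search over (point, label) candidates, and the width bookkeeping are hypothetical and may differ from the published algorithm's details.

```python
# Illustrative sketch of the Large Width (LW) idea from the abstract.
# Names and tie-breaking are hypothetical, not taken from the paper.
from typing import Callable, Hashable, List, Tuple

Point = Tuple[float, ...]


def lw_classify_unlabeled(
    labeled: List[Tuple[Point, Hashable]],
    unlabeled: List[Point],
    dist: Callable[[Point, Point], float],  # need not be a metric
) -> List[Tuple[Point, Hashable]]:
    """Greedily label every unlabeled point, preferring assignments that
    keep each labeled example far from its nearest unlike prototype.
    Assumes the labeled sample contains at least two distinct classes."""
    prototypes = list(labeled)  # labeled examples plus points classified so far
    labels = {y for _, y in labeled}
    remaining = list(unlabeled)

    def width(protos: List[Tuple[Point, Hashable]]) -> float:
        # Smallest distance from any labeled example to an unlike prototype.
        return min(
            dist(x, p)
            for x, y in labeled
            for p, c in protos
            if c != y
        )

    while remaining:
        # Pick the (point, label) pair whose addition interferes least
        # with the labeled sample, i.e. keeps the sample width largest.
        best = max(
            ((u, c) for u in remaining for c in labels),
            key=lambda uc: width(prototypes + [uc]),
        )
        prototypes.append(best)
        remaining.remove(best[0])
    return prototypes
```

Once every point is classified, a new query could be labeled by its nearest prototype, in keeping with instance-based (‘lazy’) learning; because `dist` is passed in as an argument, the sketch applies to any distance space, metric or not, consistent with the abstract's claim.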

Original language: English
Pages (from-to): 275-285
Number of pages: 11
Journal: Progress in Artificial Intelligence
Volume: 9
Issue number: 3
DOIs
State: Published - 1 Sep 2020

Keywords

  • Large-margin learning
  • Lazy learning
  • Nonparametric classification
  • k-Nearest neighbor
