Large-width machine learning algorithm

Martin Anthony, Joel Ratsaby

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

We introduce an algorithm, called Large Width (LW), that produces a multi-category classifier (defined on a distance space) with the property that the classifier has a large 'sample width.' (Width is a notion similar to classification margin.) LW is an incremental instance-based (also known as 'lazy') learning algorithm. Given a sample of labeled and unlabeled examples, it iteratively picks the next unlabeled example and classifies it while maintaining a large distance between each labeled example and its nearest unlike prototype. (A prototype is either a labeled example or an unlabeled example which has already been classified.) Thus, LW gives a higher priority to unlabeled points whose classification decision 'interferes' less with the labeled sample. On a collection of UCI benchmark datasets, the LW algorithm ranks at the top when compared to 11 instance-based learning algorithms (or configurations). When compared against the best instance-based candidate together with MLP, SVM, a decision tree learner (C4.5), and Naive Bayes, LW ranks second, behind only MLP, which takes first place by a single extra win over LW. The LW algorithm can be implemented with parallel distributed processing to yield a high speedup factor, and it is suitable for any distance space; the distance function need not satisfy the conditions of a metric.
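The greedy, width-maximizing selection described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (lw_sketch, width) and the brute-force search over all remaining unlabeled points and candidate labels are assumptions made purely for illustration; the width quantity here is the minimum distance from a labeled example to its nearest unlike prototype.

```python
import numpy as np

def lw_sketch(X_labeled, y_labeled, X_unlabeled, dist=None):
    """Hypothetical sketch of the Large Width (LW) idea: greedily classify
    unlabeled points so that each labeled example stays far from its
    nearest unlike prototype. Not the authors' exact procedure."""
    if dist is None:
        # Any distance-like function may be used; it need not be a metric.
        dist = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))

    # Prototypes start as the labeled sample; classified unlabeled points join them.
    protos = [(np.asarray(x), y) for x, y in zip(X_labeled, y_labeled)]
    labels = sorted(set(y_labeled))
    pending = [np.asarray(u) for u in X_unlabeled]
    assigned = []

    def width(candidate_protos):
        # Minimum distance from any labeled example to its nearest unlike prototype.
        w = np.inf
        for x, y in zip(X_labeled, y_labeled):
            unlike = [dist(x, p) for p, q in candidate_protos if q != y]
            if unlike:
                w = min(w, min(unlike))
        return w

    while pending:
        best = None  # (resulting width, index in pending, chosen label)
        for i, u in enumerate(pending):
            for lab in labels:
                w = width(protos + [(u, lab)])
                if best is None or w > best[0]:
                    best = (w, i, lab)
        _, i, lab = best
        u = pending.pop(i)
        protos.append((u, lab))   # the newly classified point becomes a prototype
        assigned.append((u, lab))
    return assigned
```

This brute-force version is quadratic in the number of unlabeled points per iteration; it only illustrates the priority given to unlabeled points whose classification interferes least with the labeled sample.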

Original language: English
Pages (from-to): 275-285
Number of pages: 11
Journal: Progress in Artificial Intelligence
Volume: 9
Issue number: 3
DOIs:
Publication status: Published - 1 Sept 2020
