TY - GEN
T1 - Stability and topology in reservoir computing
AU - Manevitz, Larry
AU - Hazan, Hananel
PY - 2010
Y1 - 2010
N2 - Recently, Jaeger and others have put forth the paradigm of "reservoir computing" as a way of computing with highly recurrent neural networks. The reservoir is a collection of neurons randomly connected to each other with fixed weights. Amongst other things, it has been shown to be effective in temporal pattern recognition, and it has been held up as a model appropriate for explaining how certain aspects of the brain work (particularly in its guise as the "liquid state machine", due to Maass et al.). In this work we show that although this model is known to have generalization properties and is thus robust to errors in its input, it is NOT resistant to errors in the model itself: small malfunctions or distortions render previous training ineffective. Hence the model as currently presented cannot be considered an appropriate biological model, and this also suggests limitations on its applicability to pattern recognition. However, we show that, with the enforcement of topological constraints on the reservoir, in particular a small-world topology, the model is indeed fault tolerant. This implies that "natural" computational systems must have specific topologies, and that uniform random connectivity is not appropriate.
KW - Machine Learning
KW - Reservoir Computing
KW - Small-world topology
KW - Robustness
UR - http://www.scopus.com/inward/record.url?scp=78650030227&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-16773-7_21
DO - 10.1007/978-3-642-16773-7_21
M3 - Conference contribution
AN - SCOPUS:78650030227
SN - 3642167721
SN - 9783642167720
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 245
EP - 256
BT - Advances in Soft Computing - 9th Mexican International Conference on Artificial Intelligence, MICAI 2010, Proceedings
T2 - 9th Mexican International Conference on Artificial Intelligence, MICAI 2010
Y2 - 8 November 2010 through 13 November 2010
ER -