Equivalences between learning of data and probability distributions, and their applications



Bibliographic Details
Main authors: Barmpalias, George (author), Fang, Nan (author), Stephan, Frank (author)
Document type: Article (Journal)
Language: English
Published: 1 August 2018
In: Information and Computation
Year: 2018, Volume: 262, Pages: 123-140
ISSN:1090-2651
DOI:10.1016/j.ic.2018.08.001
Online access: Publisher, full text: https://doi.org/10.1016/j.ic.2018.08.001
Publisher: http://www.sciencedirect.com/science/article/pii/S0890540118301172
Author statement: George Barmpalias, Nan Fang, Frank Stephan
Description
Abstract: Algorithmic learning theory traditionally studies the learnability of effective infinite binary sequences (reals), while recent work by Vitányi and Chater has adapted this framework to the study of learnability of effective probability distributions from random data. We prove that for certain families of probability measures that are parametrized by reals, learnability of a subclass of probability measures is equivalent to learnability of the class of the corresponding real parameters. This equivalence allows us to transfer results from classical algorithmic learning theory to the learning theory of probability measures. We present a number of such applications, providing many new results regarding EX and BC learnability of classes of measures, thus drawing parallels between the two learning theories.
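The EX ("explanatory") learning mentioned in the abstract is learning in the limit: a learner reads ever-longer prefixes of a sequence and outputs hypotheses, succeeding if those hypotheses eventually converge to a correct index for the target. The sketch below is not taken from the paper; it is a minimal illustration on an assumed toy class of sequences of the form 0^n 1^ω, indexed by n.

```python
# Illustrative sketch of EX-learning (learning in the limit) on a toy class.
# The class consists of the sequences 0^n 1^omega, indexed by the parameter n.
# EX-success: the learner's hypotheses converge to the correct index n.

def target(n, length):
    """Prefix of the given length of the sequence 0^n 1^omega."""
    return [0] * min(n, length) + [1] * max(0, length - n)

def learner(prefix):
    """Hypothesis: position of the first 1, or len(prefix) if no 1 seen yet."""
    for i, bit in enumerate(prefix):
        if bit == 1:
            return i
    return len(prefix)

def ex_learn(n, max_len=20):
    """Run the learner on increasing prefixes; return its hypothesis sequence."""
    return [learner(target(n, length)) for length in range(1, max_len + 1)]

# On the sequence 000111..., the guesses are 1, 2, 3, 3, 3, ...:
# once the first 1 appears, the learner has converged to the index 3.
hypotheses = ex_learn(3)
```

The learner is allowed to change its mind finitely often; only eventual convergence matters, which is exactly the EX criterion. (BC learning relaxes this further: the hypotheses need only be eventually correct, not eventually identical.)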
Description: Accessed 04.03.2020
Description: Online resource