Equivalences between learning of data and probability distributions, and their applications
| Main Authors: | George Barmpalias, Nan Fang, Frank Stephan |
|---|---|
| Format: | Article (Journal) |
| Language: | English |
| Published: | 1 August 2018 |
| In: | Information and computation, Year: 2018, Volume: 262, Pages: 123-140 |
| ISSN: | 1090-2651 |
| DOI: | 10.1016/j.ic.2018.08.001 |
| Online Access: | Publisher, full text: https://doi.org/10.1016/j.ic.2018.08.001 Publisher: http://www.sciencedirect.com/science/article/pii/S0890540118301172 |
| Author Notes: | George Barmpalias, Nan Fang, Frank Stephan |
| Summary: | Algorithmic learning theory traditionally studies the learnability of effective infinite binary sequences (reals), while recent work by Vitányi and Chater has adapted this framework to the study of the learnability of effective probability distributions from random data. We prove that for certain families of probability measures that are parametrized by reals, learnability of a subclass of probability measures is equivalent to learnability of the class of the corresponding real parameters. This equivalence allows results from classical algorithmic learning theory to be transferred to the learning theory of probability measures. We present a number of such applications, providing many new results regarding EX and BC learnability of classes of measures, thus drawing parallels between the two learning theories. (An illustrative formalization of this equivalence is sketched below the record.) |
| Item Description: | Viewed on 04.03.2020 |
| Physical Description: | Online Resource |
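The summary above states an equivalence between learning a class of real parameters and learning the probability measures they parametrize. The following is only a minimal illustrative formalization of that statement, not text from the paper: the class $\mathcal{C}$, the parametrization $x \mapsto \mu_x$, and the notation for the learning criteria are assumed placeholders, and the paper's actual hypotheses on the admissible families of measures are more specific than shown here.

$$\{\mu_x : x \in \mathcal{C}\}\ \text{is EX-learnable} \iff \mathcal{C}\ \text{is EX-learnable}$$

Here $\mathcal{C} \subseteq 2^{\omega}$ is a class of reals and $x \mapsto \mu_x$ an effective parametrization of probability measures by reals; an analogous equivalence holds for BC-learnability, in each case only for the families of measures covered by the paper's hypotheses.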