Optimal convergence rates of deep neural networks in a classification setting

We establish optimal convergence rates up to a log factor for a class of deep neural networks in a classification setting under a restraint sometimes referred to as the Tsybakov noise condition. We construct classifiers based on empirical risk minimization in a general setting where the boundary of the Bayes rule can be approximated well by neural networks.
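
The two technical objects named in the abstract, the Tsybakov noise condition and the empirical risk minimizer, admit a standard textbook formulation. The sketch below uses common notation (\( \eta \), \( q \), \( \mathcal{F}_n \)) chosen here for illustration; it may differ from the symbols used in the paper itself.

Let \( \eta(x) = \mathbb{P}(Y = 1 \mid X = x) \) and let \( f^*(x) = \mathbf{1}\{\eta(x) \ge 1/2\} \) denote the Bayes rule. The Tsybakov noise condition with exponent \( q \ge 0 \) requires that, for some constant \( C > 0 \),
\[
  \mathbb{P}_X\bigl( 0 < \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr) \le C\, t^{q}
  \qquad \text{for all } t > 0,
\]
i.e. the regression function rarely hovers near the decision threshold \( 1/2 \). Given an i.i.d. sample \( (X_1, Y_1), \dots, (X_n, Y_n) \) and a class \( \mathcal{F}_n \) of neural-network classifiers, the empirical risk minimizer is
\[
  \hat{f}_n \in \operatorname*{arg\,min}_{f \in \mathcal{F}_n}
  \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{ f(X_i) \neq Y_i \},
\]
and the convergence rates studied in the article concern the excess misclassification risk
\( \mathbb{P}(\hat{f}_n(X) \neq Y) - \mathbb{P}(f^*(X) \neq Y) \).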


Bibliographic Details
Main author: Meyer, Joseph Theo (author)
Document type: Article (Journal)
Language: English
Published: 7 December 2023
In: Electronic journal of statistics
Year: 2023, Volume: 17, Issue: 2, Pages: 3613-3659
ISSN: 1935-7524
DOI: 10.1214/23-EJS2187
Online access: Publisher, free of charge, full text: https://doi.org/10.1214/23-EJS2187
Publisher, free of charge, full text: https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-17/issue-2/Optimal-convergence-rates-of-deep-neural-networks-in-a-classification/10.1214/23-EJS2187.full
Author statement: Joseph T. Meyer

MARC

LEADER 00000caa a2200000 c 4500
001 1885863268
003 DE-627
005 20240703164722.0
007 cr uuu---uuuuu
008 240415s2023 xx |||||o 00| ||eng c
024 7 |a 10.1214/23-EJS2187  |2 doi 
035 |a (DE-627)1885863268 
035 |a (DE-599)KXP1885863268 
035 |a (OCoLC)1443668941 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 27  |2 sdnb 
100 1 |a Meyer, Joseph Theo  |d 1993-  |e VerfasserIn  |0 (DE-588)1305413032  |0 (DE-627)1860989233  |4 aut 
245 1 0 |a Optimal convergence rates of deep neural networks in a classification setting  |c Joseph T. Meyer 
264 1 |c 7 December 2023 
300 |a 47 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
500 |a Gesehen am 15.04.2024 
520 |a We establish optimal convergence rates up to a log factor for a class of deep neural networks in a classification setting under a restraint sometimes referred to as the Tsybakov noise condition. We construct classifiers based on empirical risk minimization in a general setting where the boundary of the Bayes rule can be approximated well by neural networks. Corresponding rates of convergence are proven with respect to the misclassification error using an additional condition that acts as a requirement for the “correct noise exponent”. It is then shown that these rates are optimal in the minimax sense. For other estimation procedures, similar convergence rates have been established. Our first main contribution is to prove that the rates are optimal under the additional condition. Secondly, our main theorem establishes almost optimal rates in a generalized setting. We use this to show optimal rates which circumvent the curse of dimensionality. 
650 4 |a 62C20 
650 4 |a 62G05 
650 4 |a classification 
650 4 |a Deep neural networks 
650 4 |a Tsybakov noise condition 
773 0 8 |i Enthalten in  |t Electronic journal of statistics  |d Ithaca, NY : Cornell University Library, 2007  |g 17(2023), 2, Seite 3613-3659  |h Online-Ressource  |w (DE-627)538998830  |w (DE-600)2381001-4  |w (DE-576)28134714X  |x 1935-7524  |7 nnas  |a Optimal convergence rates of deep neural networks in a classification setting 
773 1 8 |g volume:17  |g year:2023  |g number:2  |g pages:3613-3659  |g extent:47  |a Optimal convergence rates of deep neural networks in a classification setting 
856 4 0 |u https://doi.org/10.1214/23-EJS2187  |x Verlag  |x Resolving-System  |z kostenfrei  |3 Volltext 
856 4 0 |u https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-17/issue-2/Optimal-convergence-rates-of-deep-neural-networks-in-a-classification/10.1214/23-EJS2187.full  |x Verlag  |z kostenfrei  |3 Volltext 
951 |a AR 
992 |a 20240415 
993 |a Article 
994 |a 2023 
998 |g 1305413032  |a Meyer, Joseph Theo  |m 1305413032:Meyer, Joseph Theo  |d 110000  |d 110400  |e 110000PM1305413032  |e 110400PM1305413032  |k 0/110000/  |k 1/110000/110400/  |p 1  |x j  |y j 
999 |a KXP-PPN1885863268  |e 4512232532 
BIB |a Y 
SER |a journal 
JSO |a {"recId":"1885863268","language":["eng"],"type":{"media":"Online-Ressource","bibl":"article-journal"},"note":["Gesehen am 15.04.2024"],"person":[{"family":"Meyer","given":"Joseph Theo","display":"Meyer, Joseph Theo","roleDisplay":"VerfasserIn","role":"aut"}],"title":[{"title_sort":"Optimal convergence rates of deep neural networks in a classification setting","title":"Optimal convergence rates of deep neural networks in a classification setting"}],"relHost":[{"origin":[{"publisherPlace":"Ithaca, NY","dateIssuedDisp":"2007-","publisher":"Cornell University Library","dateIssuedKey":"2007"}],"id":{"issn":["1935-7524"],"eki":["538998830"],"zdb":["2381001-4"]},"physDesc":[{"extent":"Online-Ressource"}],"title":[{"subtitle":"EJS","title":"Electronic journal of statistics","title_sort":"Electronic journal of statistics"}],"pubHistory":["1.2007 -"],"part":{"volume":"17","text":"17(2023), 2, Seite 3613-3659","extent":"47","year":"2023","issue":"2","pages":"3613-3659"},"titleAlt":[{"title":"EJS"}],"type":{"media":"Online-Ressource","bibl":"periodical"},"disp":"Optimal convergence rates of deep neural networks in a classification settingElectronic journal of statistics","recId":"538998830","language":["eng"]}],"physDesc":[{"extent":"47 S."}],"name":{"displayForm":["Joseph T. Meyer"]},"id":{"doi":["10.1214/23-EJS2187"],"eki":["1885863268"]},"origin":[{"dateIssuedKey":"2023","dateIssuedDisp":"7 December 2023"}]} 
SRT |a MEYERJOSEPOPTIMALCON7202