www.sciencemag.org/cgi/content/full/313/5786/504/DC1

Supporting Online Material for
"Reducing the Dimensionality of Data with Neural Networks"

G. E. Hinton* and R. R. Salakhutdinov

*To whom correspondence should be addressed. E-mail: hinton@cs.toronto.edu

Published 28 July 2006, Science 313, 504 (2006)
DOI: 10.1126/science.1127647

This PDF file includes:
Materials and Methods
Figs. S1 to S5
Matlab Code

Supporting Online Material

Details of the pretraining: To speed up the pretraining of each RBM, we subdivided all datasets into mini-batches, each containing 100 data vectors, and updated the weights after each mini-batch. For datasets whose size is not divisible by the mini-batch size, the remaining data vectors were included in the last mini-batch. For all datasets, each hidden layer was pretrained for 50 passes through the entire training set. The weights were updated after each mini-batch using the averages in Eq. 1 of the paper with a learning rate of  . In addition,   times the previous update was added to each weight (momentum) and   times the value of each weight was subtracted (weight decay) to penalize large weights. Weights were initialized with small random values sampled from a normal distribution with zero mean and a standard deviation of  . The Matlab code we used is available at http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html

Details of the fine-tuning: For the fine-tuning, we used the method of conjugate gradients on larger mini-batches containing 1000 data vectors. We used Carl Rasmussen's "minimize" code (1). Three line searches were performed for each mini-batch in each epoch. To determine an adequate number of epochs and to check for overfitting, we fine-tuned each autoencoder on a fraction of the training data and tested its performance on the remainder. We then repeated the fine-tuning on the entire training set. For the synthetic curves and handwritten digits, we used 200 epochs of fine-tuning; for the faces we used 20 epochs, and for the documents we used 50 epochs. Slight overfitting was observed for the faces, but there was no overfitting for the other datasets. Overfitting means that towards the end of training, the reconstructions were still improving on the training set but were getting worse on the validation set. We experimented with various values of the learning rate, momentum, and weight-decay
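The pretraining update described above (the averages in Eq. 1 of the paper, applied per mini-batch with momentum and weight decay) can be sketched as a single CD-1 step. The authors' actual implementation is the Matlab code linked above; the sketch below is a minimal NumPy stand-in, and the hyperparameter values (learning rate 0.1, momentum 0.9, weight decay 0.0002) are illustrative assumptions, not values taken from this text, where they did not survive extraction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_vis, b_hid, vel,
               lr=0.1, momentum=0.9, weight_decay=0.0002, rng=None):
    """One contrastive-divergence (CD-1) weight update on a mini-batch.

    v0: (batch, visible) data vectors; W: (visible, hidden) weights;
    vel: previous update (for momentum). Hyperparameter values are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    n = v0.shape[0]
    # Positive phase: hidden probabilities given the data, then sampled states.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one reconstruction step.
    v1 = sigmoid(h0 @ W.T + b_vis)          # reconstructed visible probabilities
    h1_prob = sigmoid(v1 @ W + b_hid)
    # Difference of pairwise statistics, averaged over the mini-batch.
    grad = (v0.T @ h0_prob - v1.T @ h1_prob) / n
    # Momentum plus weight-decay penalty, as described in the text.
    vel = momentum * vel + lr * (grad - weight_decay * W)
    return W + vel, vel

# Toy usage: one 100-vector mini-batch, 6 visible units, 4 hidden units.
rng = np.random.default_rng(42)
W = rng.normal(0.0, 0.1, size=(6, 4))       # small random initial weights
vel = np.zeros_like(W)
batch = (rng.random((100, 6)) < 0.3).astype(float)
W, vel = cd1_update(batch, W, np.zeros(6), np.zeros(4), vel, rng=rng)
```

In a full run, this update would be applied to every mini-batch for 50 passes through the training set, once per layer, before moving to the next layer.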
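The epoch-selection protocol for the fine-tuning (fit on a fraction of the training data, monitor reconstruction error on the remainder, then refit on everything) can be illustrated with a toy linear autoencoder. The paper's optimizer is Rasmussen's conjugate-gradient "minimize" code; the sketch below substitutes plain gradient descent purely to keep the example self-contained, and every size, epoch count, and learning rate in it is an illustrative assumption.

```python
import numpy as np

def recon_error(X, W):
    """Mean squared reconstruction error of a tied-weight linear autoencoder."""
    return float(np.mean((X - X @ W @ W.T) ** 2))

def fine_tune(X, W, epochs, lr=0.01):
    """Plain gradient descent on reconstruction error (stand-in for conjugate gradients)."""
    for _ in range(epochs):
        R = X @ W @ W.T - X                  # reconstruction residual
        grad = X.T @ R @ W + R.T @ X @ W     # d(error)/dW, up to a constant factor
        W = W - lr * grad / len(X)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X_train, X_valid = X[:150], X[150:]          # fine-tune on a fraction, test on the rest
W0 = rng.normal(0.0, 0.1, size=(8, 3))

# Track held-out error for several epoch budgets to pick an adequate number of epochs;
# overfitting would show up as validation error worsening while training error improves.
budgets = (0, 5, 10, 20)
errors = [recon_error(X_valid, fine_tune(X_train, W0, e)) for e in budgets]
best_epochs = budgets[int(np.argmin(errors))]

# Then repeat the fine-tuning on the entire training set for that many epochs.
W_final = fine_tune(X, W0, best_epochs)
```

The design point is that the held-out split is used only to choose the number of epochs; the final model is always refit on the full training set, exactly as the text describes.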