《中科院机器学习题库-new》 (Machine Learning Question Bank, Chinese Academy of Sciences)
Machine Learning Question Bank

一、Maximum likelihood

1、ML estimation of an exponential model (10 points)

A Gaussian distribution is often used to model data on the real line, but it is sometimes inappropriate when the data are often close to zero yet constrained to be nonnegative. In such cases one can fit an exponential distribution, whose probability density function is given by

    p(x | b) = (1/b) exp(-x/b),  x ≥ 0.

Given N observations x_i drawn from such a distribution:

(a) Write down the likelihood as a function of the scale parameter b.
(b) Write down the derivative of the log-likelihood.
(c) Give a simple expression for the ML estimate of b.

2、Repeat the exercise for a Poisson distribution, p(x | λ) = λ^x e^{-λ} / x!, x = 0, 1, 2, ...
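For 1(c), setting the derivative of the log-likelihood ℓ(b) = -N log b - (Σ_i x_i)/b to zero gives the sample mean, b̂ = (1/N) Σ_i x_i; the same sample-mean answer holds for the Poisson rate λ in problem 2. A minimal numerical sanity check of that closed form (the variable names and simulated data below are my own, not part of the question bank):

```python
import math
import random

def exp_loglik(b, xs):
    # log-likelihood of scale b for p(x | b) = (1/b) * exp(-x/b)
    return -len(xs) * math.log(b) - sum(xs) / b

random.seed(0)
# draw 10,000 samples from an exponential with true scale b = 2.5
xs = [random.expovariate(1 / 2.5) for _ in range(10_000)]

b_mle = sum(xs) / len(xs)  # closed-form ML estimate: the sample mean

# the closed form should beat nearby candidate values of b
assert exp_loglik(b_mle, xs) >= exp_loglik(b_mle * 1.01, xs)
assert exp_loglik(b_mle, xs) >= exp_loglik(b_mle * 0.99, xs)
print(round(b_mle, 2))
```

The same kind of check applies to problem 2: the Poisson log-likelihood ℓ(λ) = (Σ_i x_i) log λ - Nλ - Σ_i log(x_i!) is also maximized at the sample mean.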
二、Bayes

Suppose that on a multiple-choice exam a student knows the correct answer with probability p and guesses with probability 1 - p. Assume that a student who knows the answer answers correctly with probability 1, while a guessing student picks the correct answer with probability 1/m, where m is the number of choices. Given that a student answered a question correctly, find the probability that the student knew the answer.

1、Conjugate priors

The readings for this week include a discussion of conjugate priors. Given a likelihood p(x | θ) for a class of models with parameters θ, a conjugate prior is a distribution p(θ | γ) with hyperparameters γ such that the posterior distribution p(θ | x, γ) belongs to the same family of distributions as the prior.
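For the Bayes question, writing K for "knows the answer" and C for "answers correctly", Bayes' rule gives P(K | C) = p / (p + (1 - p)/m). A small check in exact rational arithmetic (the function name is my own):

```python
from fractions import Fraction

def p_knows_given_correct(p, m):
    # P(C | K) = 1, P(C | not K) = 1/m, so by Bayes' rule:
    # P(K | C) = p / (p + (1 - p) / m)
    p = Fraction(p)
    return p / (p + (1 - p) / Fraction(m))

print(p_knows_given_correct(Fraction(1, 2), 4))  # -> 4/5
```

For example, with p = 1/2 and m = 4 choices, a correct answer raises the probability that the student actually knew it from 1/2 to 4/5.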
(a) Suppose that the likelihood is given by the exponential distribution with rate parameter λ, p(x | λ) = λ e^{-λx}. Show that the gamma distribution, Gamma(λ | α, β) ∝ λ^{α-1} e^{-βλ}, is a conjugate prior for the exponential. Derive the parameter update given the observations, and the prediction distribution.
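A numerical way to sanity-check the conjugacy claim in (a): multiplying the Gamma(α, β) prior by the exponential likelihood gives, up to a normalizing constant, a Gamma(α + N, β + Σ_i x_i) density. The sketch below (hyperparameters, variable names, and simulated data are my own) compares log unnormalized densities:

```python
import math
import random

def log_gamma_unnorm(lam, a, b):
    # log of the unnormalized Gamma(a, b) density: lam**(a - 1) * exp(-b * lam)
    return (a - 1) * math.log(lam) - b * lam

random.seed(1)
alpha, beta = 2.0, 3.0  # assumed prior hyperparameters for this check
xs = [random.expovariate(1.5) for _ in range(50)]
N, S = len(xs), sum(xs)

def log_posterior_unnorm(lam):
    # Gamma(alpha, beta) prior times the exponential likelihood prod_i lam * exp(-lam * x_i)
    return log_gamma_unnorm(lam, alpha, beta) + N * math.log(lam) - lam * S

# Conjugacy: the posterior equals Gamma(alpha + N, beta + S) up to a constant,
# so the two log densities should agree (here: to rounding error).
for lam in (0.5, 1.0, 2.0):
    assert abs(log_posterior_unnorm(lam) - log_gamma_unnorm(lam, alpha + N, beta + S)) < 1e-9
print("conjugate update: alpha ->", alpha + N, ", beta ->", round(beta + S, 2))
```

The update rule read off from the exponents is α → α + N and β → β + Σ_i x_i.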
(b) Show that the beta distribution is a conjugate prior for the geometric distribution, p(x | θ) = (1 - θ)^{x-1} θ, which describes the number of times a coin is tossed until the first heads appears when the probability of heads on each toss is θ. Derive the parameter update rule and the prediction distribution.

(c) Suppose that each of the priors p_m(θ) is conjugate for the likelihood p(x | θ); show that the mixture prior p(θ) = Σ_m w_m p_m(θ) is also conjugate for the same likelihood, assuming the mixture weights w_m sum to 1.

(d) Repeat part (c) for the case where the prior is a single distribution and the likelihood is a mixture, and the prior is conjugate for each mixture component of the likelihood.
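A sketch of the key identity behind (c), in notation of my own choosing: the posterior under a mixture prior is again a mixture of the per-component posteriors.

```latex
% Mixture prior: p(\theta) = \sum_m w_m\, p_m(\theta), each p_m conjugate for p(x \mid \theta).
\begin{aligned}
p(\theta \mid x) &\propto p(x \mid \theta) \sum_m w_m\, p_m(\theta)
                  = \sum_m w_m\, Z_m\, p_m(\theta \mid x), \\
Z_m &= \int p(x \mid \theta)\, p_m(\theta)\, d\theta .
\end{aligned}
```

Since each p_m(θ | x) stays in its own family by conjugacy, the posterior is a mixture from the same family with updated weights w_m' ∝ w_m Z_m; part (d) follows by expanding the mixture likelihood against the single prior in the same way.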
Some priors can be conjugate for several different likelihoods; for example, the beta is conjugate for both the Bernoulli and the geometric distributions, and the gamma is conjugate for the exponential and for the gamma with fixed α.

(e) (Extra credit, 20 points) Explore the case where the likelihood is a mixture with fixed components and unknown weights, i.e., the weights are the parameters to be learned.

三、True or false

(1) Given n data points, if half are used for training and the other half for testing, the difference between the training error and the test error decreases as n increases.

(2) Maximum likelihood estimation is unbiased and has the smallest variance among all unbiased estimators, so the maximum likelihood estimate carries the smallest risk.

(3) For regression functions A and B, if A is simpler than B, then
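Statement (2) can be probed with a standard counterexample: the ML estimate of a Gaussian variance divides by n rather than n - 1, so E[σ̂²_ML] = ((n - 1)/n) σ², which is biased. A simulation sketch (sample size, trial count, and seed are my own choices):

```python
import random
import statistics

random.seed(0)
n, trials, sigma2 = 5, 20_000, 1.0
mle_vars = []
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    m = sum(xs) / n
    mle_vars.append(sum((x - m) ** 2 for x in xs) / n)  # ML estimate divides by n

avg = statistics.fmean(mle_vars)
# theory predicts E = (n - 1)/n * sigma^2 = 0.8 here, below the true value 1.0
print(round(avg, 3))
```

The average sits near 0.8 rather than the true variance 1.0, which contradicts the blanket claim in (2) that maximum likelihood estimates are unbiased.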