《中科院机器学习题库-new》 (CAS Machine Learning Question Bank), 44 pages, uploaded 2018-09-18, shared by a member on 天天文库 (TTWenKu).
Machine Learning Question Bank

Part I. Maximum Likelihood

1. ML estimation of an exponential model (10 points). A Gaussian distribution is often used to model data on the real line, but is sometimes inappropriate when the data are often close to zero yet constrained to be nonnegative. In such cases one can fit an exponential distribution, whose probability density function is given by p(x | b) = (1/b) exp(-x/b) for x >= 0, with scale parameter b > 0. Given N observations x_i drawn from such a distribution:
(a) Write down the likelihood as a function of the scale parameter b.
(b) Write down the derivative of the log likelihood.
(c) Give a simple expression for the ML estimate of b.

2. Repeat with a Poisson distribution.

3. …

Part II. Bayes

In a multiple-choice exam, suppose a student knows the correct answer with probability p and guesses with probability 1 - p. Assume that a student who knows the answer answers correctly with probability 1, while a student who guesses answers correctly with probability 1/m, where m is the number of choices. Given that the student answered correctly, find the probability that he actually knew the answer.

1. Conjugate priors. The readings for this week included a discussion of conjugate priors. Given a likelihood p(x | θ) for a class of models with parameters θ, a conjugate prior is a distribution p(θ | γ) with hyperparameters γ such that the posterior distribution belongs to the same family of distributions as the prior.
(a) Suppose that the likelihood is given by the exponential distribution with rate parameter λ: p(x | λ) = λ exp(-λx) for x >= 0. Show that the gamma distribution Gamma(λ | α, β) ∝ λ^(α-1) exp(-βλ) is a conjugate prior for the exponential. Derive the parameter update given observations and the prediction distribution.
(b) Show that the beta distribution is a conjugate prior for the geometric distribution, which describes the number of times a coin is tossed until the first heads appears, when the probability of heads on each toss is θ. Derive the parameter update rule and the prediction distribution.
(c) Suppose p(θ | γ) is a conjugate prior for the likelihood; show that the mixture prior p(θ) = Σ_m w_m p(θ | γ_m) is also conjugate for the same likelihood, assuming the mixture weights w_m sum to 1.
(d) Repeat part (c) for the case where the prior is a single distribution and the likelihood is a mixture, and the prior is conjugate for each mixture component of the likelihood. Some priors can be conjugate for several different likelihoods; for example, the beta is conjugate for both the Bernoulli and the geometric distributions, and the gamma is conjugate for both the exponential and the gamma with fixed α.
(e) (Extra credit, 20 points) Explore the case where the likelihood is a mixture with fixed components and unknown weights; i.e., the weights are the parameters to be learned.

Part III. True or False

(1) Given n data points, if half are used for training and the other half for testing, the difference between the training error and the test error decreases as n increases.
(2) The maximum-likelihood estimate is unbiased and has the smallest variance among all unbiased estimates, so the maximum-likelihood estimate has the smallest risk.
(3) For regression functions A and B, if A is simpler than B, then …
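As a numerical sanity check for the exponential-model ML problem above: the closed-form answer to part (c) is the sample mean, b_hat = (1/N) Σ x_i, and the simulation below confirms the estimator recovers the true scale. This is a minimal sketch; the helper name `fit_exponential_scale` and the simulation parameters are illustrative assumptions, not part of the problem.

```python
import random

def fit_exponential_scale(xs):
    # ML estimate of the scale b for p(x | b) = (1/b) exp(-x/b):
    # setting the derivative of the log likelihood to zero gives
    # b_hat = (1/N) * sum(x_i), i.e. the sample mean.
    return sum(xs) / len(xs)

random.seed(0)
true_b = 2.5
# Note: random.expovariate takes the *rate* lambda = 1/b, not the scale b.
data = [random.expovariate(1.0 / true_b) for _ in range(100_000)]
b_hat = fit_exponential_scale(data)
```

With 100,000 samples the estimate lands within a few hundredths of the true scale; the same pattern (ML estimate = sample mean) also answers the Poisson variant in problem 2, where the sample mean estimates the rate.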
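For the multiple-choice exam problem in the Bayes part, the answer follows directly from Bayes' rule, and a small sketch makes the edge cases easy to check. The function name below is an illustrative assumption, not from the problem set.

```python
def prob_knows_given_correct(p, m):
    # P(knows) = p, with P(correct | knows) = 1.
    # P(guesses) = 1 - p, with P(correct | guesses) = 1/m for m choices.
    # Bayes' rule: P(knows | correct) = p / (p + (1 - p) / m).
    return p / (p + (1 - p) / m)
```

A useful edge case: with m = 1 every guess is also correct, so a correct answer carries no information and the posterior equals the prior p; as m grows, a correct answer becomes stronger evidence that the student knew the answer.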
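Part (a) of the conjugate-priors problem has a well-known closed form: with a Gamma(α, β) prior on the rate λ (shape-rate parameterization) and N exponential observations, the posterior is Gamma(α + N, β + Σ x_i). A minimal sketch of that update, with an illustrative function name and example numbers of my own choosing:

```python
def gamma_exponential_update(alpha, beta, observations):
    # Exponential likelihood: prod_i lam * exp(-lam * x_i)
    #   = lam**N * exp(-lam * sum(x_i)),
    # which, multiplied into a Gamma(alpha, beta) prior on lam,
    # yields a Gamma(alpha + N, beta + sum(x_i)) posterior.
    return alpha + len(observations), beta + sum(observations)

alpha_post, beta_post = gamma_exponential_update(2.0, 1.0, [0.5, 1.5, 1.0])
posterior_mean_rate = alpha_post / beta_post  # E[lam | data] = alpha' / beta'
```

The update makes the conjugacy concrete: each observation adds 1 to the shape and its value to the rate, so the posterior stays in the gamma family as the problem asks you to show.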