Resource description:
"Hadoop 学习心得" (Hadoop learning notes).
1. FileInputFormat splits only large files. Here "large" means larger than an HDFS block. The split size is normally the size of an HDFS block, which is appropriate for most applications; however, it is possible to control this value by setting various Hadoop properties.
2. So the split size is blockSize.
3. Making the minimum split size greater than the block size increases the split size, but at the cost of locality.
4. One reason for this is that FileInputFormat generates splits in such a way that each split is all or part of a single file. If the file is very small ("small" means significantly smaller than an HDFS block) and there are a lot of them, then each map task will process very little input, and there will be a lot of them (one per file), each of which imposes extra bookkeeping overhead.

Hadoop does not handle large numbers of small files well. Hadoop processes data block by block, with a default block size of 64 MB; if there are many small files (say 2-3 MB each), each file, although far smaller than a block, is still treated as a block of its own.
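The split-size rule in points 1-3 can be sketched as a small calculation. The formula max(minimumSize, min(maximumSize, blockSize)) mirrors the one FileInputFormat applies, but the class below is a standalone illustration under that assumption, not the actual Hadoop API:

```java
// Standalone sketch of how a FileInputFormat-style split size is chosen.
// Formula (assumed): max(minimumSplitSize, min(maximumSplitSize, blockSize)).
public class SplitSizeDemo {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;  // 64 MB HDFS block

        // Default case: the split size equals the block size (point 2 above).
        System.out.println(computeSplitSize(blockSize, 1L, Long.MAX_VALUE));
        // prints 67108864 (64 MB)

        // Raising the minimum split size above the block size increases the
        // split size, at the cost of data locality (point 3 above).
        System.out.println(
            computeSplitSize(blockSize, 128L * 1024 * 1024, Long.MAX_VALUE));
        // prints 134217728 (128 MB)
    }
}
```

Because one map task is created per split, this formula also explains the small-file problem below: a file smaller than a block still yields a whole split, and therefore a whole map task.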
This has two consequences:

1. Storing a large number of small files occupies storage space, lowers storage efficiency, and makes retrieval slower than for large files.
2. During a MapReduce job, such small files waste computing capacity, because map tasks are assigned per block by default (this is probably the main drawback of small files).

How can this be solved?

1. Use the HAR (Hadoop Archive) files that Hadoop provides; the Hadoop command manual describes how to archive small files.
2. Preprocess the data yourself, merging the small files into large files of more than 64 MB.

FileInputFormat is the base class for all implementations of InputFormat that use files as their data source (see Figure 7-2). It provides two things: a place to define which files are included as the input to a job, and an implementation for generating splits for the input files. The job of dividing splits into records is performed by subclasses. An InputSplit has a length in bytes,
and a set of storage locations, which are just hostname strings. Notice that a split doesn't contain the input data; it is just a reference to the data. As a MapReduce application writer, you don't need to deal with InputSplits directly, as they are created by an InputFormat. An InputFormat is responsible for creating the input splits, and dividing them into records. Before we see some concrete examples of InputFormat, let's briefly examine how it is used in MapReduce. Here's the interface:

public interface InputFormat<K, V> {
  InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
  RecordReader<K, V> getRecordReader(InputSplit split, JobConf job,
                                     Reporter reporter) throws IOException;
}

The JobClient calls the getSplits() method. On a tasktracker, the map task passes the split to the getRecordReader() method.
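To make the getSplits()/getRecordReader() flow concrete, here is a self-contained miniature of that contract. Every type in it (FileSplit, RecordReader, LineInputFormat, the "hostN" location strings) is a simplified stand-in invented for this sketch, not the real org.apache.hadoop.mapred API; it only illustrates the pattern: the client asks for splits, then each map task pulls records from a reader for its split.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Miniature of the InputFormat contract: simplified stand-ins, not Hadoop classes.
public class InputFormatDemo {

    // Stand-in for InputSplit: a byte range of one file plus its host locations.
    // Note: a split references the data; it does not contain the data itself.
    record FileSplit(String file, long start, long length, List<String> hosts) {}

    // Stand-in for RecordReader<K, V>: iterates the records of one split.
    interface RecordReader<V> extends Iterator<V> {}

    // Stand-in for InputFormat<K, V>.
    interface InputFormat<V> {
        FileSplit[] getSplits(long blockSize);
        RecordReader<V> getRecordReader(FileSplit split);
    }

    // Toy implementation: one split per "block" of a single in-memory file.
    static class LineInputFormat implements InputFormat<String> {
        private final String file;
        private final List<String> lines;
        private final long fileLength;

        LineInputFormat(String file, List<String> lines, long fileLength) {
            this.file = file;
            this.lines = lines;
            this.fileLength = fileLength;
        }

        @Override
        public FileSplit[] getSplits(long blockSize) {
            int n = (int) ((fileLength + blockSize - 1) / blockSize); // ceiling
            FileSplit[] splits = new FileSplit[n];
            for (int i = 0; i < n; i++) {
                long start = (long) i * blockSize;
                long len = Math.min(blockSize, fileLength - start);
                splits[i] = new FileSplit(file, start, len, List.of("host" + i));
            }
            return splits;
        }

        @Override
        public RecordReader<String> getRecordReader(FileSplit split) {
            // Toy reader: hands out all lines. A real reader would honour the
            // split's byte range and handle records crossing split boundaries.
            Iterator<String> it = lines.iterator();
            return new RecordReader<String>() {
                public boolean hasNext() { return it.hasNext(); }
                public String next() { return it.next(); }
            };
        }
    }

    public static void main(String[] args) {
        InputFormat<String> fmt =
            new LineInputFormat("logs.txt", Arrays.asList("a", "b"), 150);
        FileSplit[] splits = fmt.getSplits(64);  // 150 bytes, 64-byte "blocks"
        System.out.println(splits.length);       // prints 3
        RecordReader<String> reader = fmt.getRecordReader(splits[0]);
        while (reader.hasNext()) System.out.println(reader.next());
    }
}
```

The division of labour matches the text above: the job client calls getSplits() once up front, while each map task later calls getRecordReader() for the single split assigned to it.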