Setting Up a Hadoop Development Environment with Maven
Maven itself is well documented elsewhere and has changed little over the years, so this article skips the basics and focuses only on setting up a Hadoop development environment.
1. Create the project
mvn archetype:generate -DgroupId=my.hadoopstudy -DartifactId=hadoopstudy -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
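If the command succeeds, the quickstart archetype generates roughly the following standard layout (the package directories follow the groupId given above; the placeholder App/AppTest classes come from the archetype and can be deleted later):

hadoopstudy/
├── pom.xml
└── src
    ├── main/java/my/hadoopstudy/App.java
    └── test/java/my/hadoopstudy/AppTest.java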
2. Add the Hadoop dependencies hadoop-common, hadoop-client, and hadoop-hdfs to pom.xml. The resulting pom.xml looks like this:
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>my.hadoopstudy</groupId>
    <artifactId>hadoopstudy</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>hadoopstudy</name>
    <url>http://maven.apache.org</url>
    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.5.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.5.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.5.1</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
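As a side note, in Hadoop 2.x the hadoop-client artifact already pulls in hadoop-common and hadoop-hdfs transitively, so a leaner pom could declare only the fragment below; keeping the explicit trio above is harmless and makes the intent obvious.

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.5.1</version>
</dependency>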
3. Testing
3.1 First, test HDFS access. This assumes the Hadoop cluster set up in the previous article. The class is as follows:
package my.hadoopstudy.dfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

import java.io.InputStream;
import java.net.URI;

public class Test {
    public static void main(String[] args) throws Exception {
        String uri = "hdfs://9.111.254.189:9000/";
        Configuration config = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), config);

        // List all files and directories under /user/fkong/ on HDFS
        FileStatus[] statuses = fs.listStatus(new Path("/user/fkong"));
        for (FileStatus status : statuses) {
            System.out.println(status);
        }

        // Create a file under /user/fkong on HDFS and write one line of text
        FSDataOutputStream os = fs.create(new Path("/user/fkong/test.log"));
        os.write("Hello World!".getBytes());
        os.flush();
        os.close();

        // Print the content of the file just created
        InputStream is = fs.open(new Path("/user/fkong/test.log"));
        IOUtils.copyBytes(is, System.out, 1024, true);
    }
}
3.2 Test a MapReduce job
The test code is fairly simple:
package my.hadoopstudy.mapreduce;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import java.io.IOException;

public class EventCount {

    public static class MyMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text event = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            // The event name is the token before the first space of each log line
            int idx = value.toString().indexOf(" ");
            if (idx > 0) {
                String e = value.toString().substring(0, idx);
                event.set(e);
                context.write(event, one);
            }
        }
    }

    public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: EventCount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "event count");
        job.setJarByClass(EventCount.class);
        job.setMapperClass(MyMapper.class);
        job.setCombinerClass(MyReducer.class);
        job.setReducerClass(MyReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
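Before submitting to a cluster, the core map/reduce logic can be sanity-checked with plain Java, no Hadoop runtime needed. The sketch below (the sample log lines are hypothetical) reproduces the mapper's key extraction (token before the first space) and the reducer's summation, using a TreeMap to mimic the sorted keys of MapReduce output:

```java
import java.util.Map;
import java.util.TreeMap;

public class EventCountLocal {
    public static void main(String[] args) {
        // Hypothetical log lines; the event name is the token before the first space
        String[] lines = {
            "JOB_NEW id=1", "JOB_NEW id=2", "JOB_FINISH id=1",
            "JOB_NEW id=3", "JOB_FINISH id=2"
        };
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            int idx = line.indexOf(" ");              // same split as MyMapper
            if (idx > 0) {
                String event = line.substring(0, idx);
                counts.merge(event, 1, Integer::sum); // same aggregation as MyReducer
            }
        }
        // Keys come out sorted, like part-r-00000
        counts.forEach((k, v) -> System.out.println(k + "\t" + v));
    }
}
```

Running this prints `JOB_FINISH 2` and `JOB_NEW 3`, which is what the job should emit for the same five lines.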
Run "mvn package" to produce the jar hadoopstudy-1.0-SNAPSHOT.jar, and copy it to the Hadoop installation directory.
Suppose we need to analyze the Event entries in several log files and count how many times each kind of event occurs, so create the following directories and files:
/tmp/input/event.log.1
/tmp/input/event.log.2
/tmp/input/event.log.3
Since this is just an example, the files can all have the same content, for instance:
JOB_NEW ...
JOB_NEW ...
JOB_FINISH ...
JOB_NEW ...
JOB_FINISH ...
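The three files can be created in one go with a small shell loop (the content matches the sample lines above; each line keeps a space after the event name, since the mapper splits on the first space):

```shell
mkdir -p /tmp/input
for i in 1 2 3; do
  cat > /tmp/input/event.log.$i <<'EOF'
JOB_NEW ...
JOB_NEW ...
JOB_FINISH ...
JOB_NEW ...
JOB_FINISH ...
EOF
done
ls /tmp/input
```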
Then copy these files to HDFS:
$ bin/hdfs dfs -put /tmp/input /user/fkong/input
Run the MapReduce job:
$ bin/hadoop jar hadoopstudy-1.0-SNAPSHOT.jar my.hadoopstudy.mapreduce.EventCount /user/fkong/input /user/fkong/output
Check the result:
$ bin/hdfs dfs -cat /user/fkong/output/part-r-00000