Here is a more complete WordCount example that uses many of the MapReduce framework features covered earlier.
This example requires HDFS to be up and running, in particular for the DistributedCache-related features, so it can only be run on a pseudo-distributed or fully distributed Hadoop installation.
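Before running the job, the input files (and, later on, the pattern file) need to be uploaded to HDFS. A minimal preparation sketch, assuming local files named file01, file02 and patterns.txt and the /user/joe/wordcount paths used throughout this walkthrough:

$ bin/hadoop fs -mkdir -p /user/joe/wordcount/input
$ bin/hadoop fs -put file01 file02 /user/joe/wordcount/input/
$ bin/hadoop fs -put patterns.txt /user/joe/wordcount/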
Here is the source code:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.StringUtils;

public class WordCount2 {

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    static enum CountersEnum { INPUT_WORDS }

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    private boolean caseSensitive;
    private Set<String> patternsToSkip = new HashSet<String>();

    private Configuration conf;
    private BufferedReader fis;

    @Override
    public void setup(Context context) throws IOException, InterruptedException {
      conf = context.getConfiguration();
      // Read the job configuration: case sensitivity and whether to skip patterns.
      caseSensitive = conf.getBoolean("wordcount.case.sensitive", true);
      if (conf.getBoolean("wordcount.skip.patterns", false)) {
        // The pattern file has been distributed to the task via the DistributedCache.
        URI[] patternsURIs = Job.getInstance(conf).getCacheFiles();
        for (URI patternsURI : patternsURIs) {
          Path patternsPath = new Path(patternsURI.getPath());
          String patternsFileName = patternsPath.getName().toString();
          parseSkipFile(patternsFileName);
        }
      }
    }

    private void parseSkipFile(String fileName) {
      try {
        fis = new BufferedReader(new FileReader(fileName));
        String pattern = null;
        while ((pattern = fis.readLine()) != null) {
          patternsToSkip.add(pattern);
        }
      } catch (IOException ioe) {
        System.err.println("Caught exception while parsing the cached file '"
            + StringUtils.stringifyException(ioe));
      }
    }

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      String line = (caseSensitive) ?
          value.toString() : value.toString().toLowerCase();
      // Strip every substring matching one of the skip patterns.
      for (String pattern : patternsToSkip) {
        line = line.replaceAll(pattern, "");
      }
      StringTokenizer itr = new StringTokenizer(line);
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
        Counter counter = context.getCounter(CountersEnum.class.getName(),
            CountersEnum.INPUT_WORDS.toString());
        counter.increment(1);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // GenericOptionsParser handles the generic Hadoop options such as -D key=value.
    GenericOptionsParser optionParser = new GenericOptionsParser(conf, args);
    String[] remainingArgs = optionParser.getRemainingArgs();
    if ((remainingArgs.length != 2) && (remainingArgs.length != 4)) {
      System.err.println("Usage: wordcount <in> <out> [-skip skipPatternFile]");
      System.exit(2);
    }
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount2.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    List<String> otherArgs = new ArrayList<String>();
    for (int i = 0; i < remainingArgs.length; ++i) {
      if ("-skip".equals(remainingArgs[i])) {
        // Register the pattern file with the DistributedCache and enable skipping.
        job.addCacheFile(new Path(remainingArgs[++i]).toUri());
        job.getConfiguration().setBoolean("wordcount.skip.patterns", true);
      } else {
        otherArgs.add(remainingArgs[i]);
      }
    }
    FileInputFormat.addInputPath(job, new Path(otherArgs.get(0)));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs.get(1)));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
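The runs below assume the class has been compiled and packaged as wc.jar. One way to do this, assuming JAVA_HOME is set and the Hadoop classpath includes the JDK compiler:

$ export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar
$ bin/hadoop com.sun.tools.javac.Main WordCount2.java
$ jar cf wc.jar WordCount2*.class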
Take a look at the input files:
$ bin/hadoop fs -ls /user/joe/wordcount/input/
/user/joe/wordcount/input/file01
/user/joe/wordcount/input/file02
View the contents of the files:
$ bin/hadoop fs -cat /user/joe/wordcount/input/file01
Hello World, Bye World!

$ bin/hadoop fs -cat /user/joe/wordcount/input/file02
Hello Hadoop, Goodbye to hadoop.
Run the application:
$ bin/hadoop jar wc.jar WordCount2 /user/joe/wordcount/input /user/joe/wordcount/output
The output of this run:
$ bin/hadoop fs -cat /user/joe/wordcount/output/part-r-00000
Bye 1
Goodbye 1
Hadoop, 1
Hello 2
World! 1
World, 1
hadoop. 1
to 1
The input here differs slightly from the first example; note how the output is affected.
Now we plug in a pattern file via the DistributedCache; it lists the word patterns to be ignored while counting.
$ bin/hadoop fs -cat /user/joe/wordcount/patterns.txt
\.
\,
\!
to
Run it again, this time passing a few more options on the command line:
$ bin/hadoop jar wc.jar WordCount2 -Dwordcount.case.sensitive=true /user/joe/wordcount/input /user/joe/wordcount/output -skip /user/joe/wordcount/patterns.txt |
The output is just what we expect:
$ bin/hadoop fs -cat /user/joe/wordcount/output/part-r-00000
Bye 1
Goodbye 1
Hadoop 1
Hello 2
World 2
hadoop 1
Run it one more time, this time with case sensitivity switched off:
$ bin/hadoop jar wc.jar WordCount2 -Dwordcount.case.sensitive=false /user/joe/wordcount/input /user/joe/wordcount/output -skip /user/joe/wordcount/patterns.txt |
Sure enough, here is the output:
$ bin/hadoop fs -cat /user/joe/wordcount/output/part-r-00000
bye 1
goodbye 1
hadoop 2
hello 2
world 2
Key points of the program
The second version of WordCount improves on the original by using the following features provided by the MapReduce framework:
- Demonstrates how an application can access configuration parameters in the setup method of the Mapper (and Reducer);
- Demonstrates how the DistributedCache can be used to distribute read-only data needed by the job; here it lets the user specify word patterns to skip while counting;
- Demonstrates the utility of the GenericOptionsParser for handling generic Hadoop command-line options;
- Demonstrates how applications can use Counters and how they can set application-specific status information via the Context passed to the map (and reduce) method; a client-side sketch for reading a counter after the job finishes follows this list.
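Counters can also be read on the client side once the job has completed. The following is a minimal, hypothetical sketch of how the end of WordCount2.main() could be changed to print the INPUT_WORDS counter after waitForCompletion returns; the counter group and name match those registered in TokenizerMapper:

// Hypothetical variant of the end of WordCount2.main():
// wait for the job, then read the INPUT_WORDS counter on the client side.
boolean ok = job.waitForCompletion(true);
long inputWords = job.getCounters()
    .findCounter(TokenizerMapper.CountersEnum.class.getName(),
                 TokenizerMapper.CountersEnum.INPUT_WORDS.toString())
    .getValue();
System.out.println("INPUT_WORDS = " + inputWords);
System.exit(ok ? 0 : 1);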