Using MapReduce, how do you modify the following word count code so that it only outputs words at or above a certain count threshold? (i.e. I want to add some kind of filtering of the key-value pairs.)
Input:
ant bee cat
bee cat dog
cat dog
Output (let's say the count threshold is 2 or more):
bee 2
cat 3
dog 2
The following code is from: http://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html#Source+Code
public static class Map1 extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
        }
    }
}
public static class Reduce1 extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}
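For context, the usual place for such a filter is the reducer: compute the sum exactly as above, but guard the final collect with something like `if (sum >= threshold) output.collect(key, new IntWritable(sum));` (the threshold could be passed through the `JobConf` and read in `configure()`). Here is a Hadoop-free sketch of that logic in plain Java so it can run standalone; the class name, method name, and use of a `TreeMap` for sorted output are my own choices, not from the original code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class ThresholdWordCount {
    // Simulates word count with a reduce-side filter: sum the count
    // of each word, then emit only words whose total meets the threshold.
    static Map<String, Integer> countWithThreshold(String[] lines, int threshold) {
        // "map" phase: tokenize each line and accumulate (word, 1) pairs
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String word : line.split("\\s+")) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        // "reduce" phase with the filter: keep only sums >= threshold,
        // mirroring an `if (sum >= threshold) output.collect(...)` guard;
        // TreeMap gives the key-sorted order Hadoop's output would have
        Map<String, Integer> filtered = new TreeMap<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() >= threshold) {
                filtered.put(e.getKey(), e.getValue());
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        String[] input = { "ant bee cat", "bee cat dog", "cat dog" };
        // ant appears once and is dropped; bee, cat, dog survive
        System.out.println(countWithThreshold(input, 2));
        // prints {bee=2, cat=3, dog=2}
    }
}
```

Note that filtering in the reducer (rather than the mapper) is what makes this work: only the reducer sees the final per-word sum, so only there can the threshold be checked.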
EDIT (re: inputs/test case):
The input file ("example.dat") and a simple test case ("testcase") can be found here: https://github.com/csiu/tokens/tree/master/other/SO-26695749
EDIT:
The problem wasn't the code. It was caused by some strange behavior of the org.apache.hadoop.mapred package. (See: Is it better to use the mapred or the mapreduce package to create a Hadoop Job?)