I'm a beginner trying to run my first Hadoop program, and I'm running into a problem executing the wordcount job in Hadoop.
package hdp;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        System.out.println("application starting ....");
        int exitCode = ToolRunner.run(new WordCount(), args);
        System.out.println(exitCode);
    }

    @Override
    public int run(String[] args) throws Exception {
        if (args.length < 2) {
            System.out.println("Please enter the input and output directories properly...");
            return -1;
        }
        JobConf conf = new JobConf(WordCount.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        conf.setMapperClass(WordMapper.class);
        conf.setReducerClass(WordReducer.class);
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputKeyClass(IntWritable.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        JobClient.runJob(conf);
        return 0;
    }

    @Override
    public Configuration getConf() {
        return null;
    }

    @Override
    public void setConf(Configuration arg0) {
    }
}
package hdp;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> collect, Reporter reporter) throws IOException {
        String str = value.toString();
        for (String s : str.split(" ")) {
            if (s.length() > 0) {
                collect.collect(new Text(s), new IntWritable(1));
            }
        }
    }
}
package hdp;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        int count = 0;
        while (values.hasNext()) {
            IntWritable intWritable = values.next();
            count += intWritable.get();
        }
        output.collect(key, new IntWritable(count));
    }
}
When I run my program, I get the following error message.
16/12/23 00:22:41 INFO mapreduce.Job: Task Id : attempt_1482432671993_0001_m_000001_1, Status : FAILED
Error: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.IntWritable, received org.apache.hadoop.io.Text
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1072)
    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:610)
16/12/23 00:22:47 INFO mapreduce.Job: Task Id : attempt_1482432671993_0001_m_000000_2, Status : FAILED
Error: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.IntWritable, received org.apache.hadoop.io.Text
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1072)
    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:610)
Please tell me where I went wrong and what changes I need to make, whether in WordCount.java, WordMapper.java, or WordReducer.java.
You accidentally set the map output key class twice:

conf.setMapOutputKeyClass(IntWritable.class);

should be

conf.setMapOutputValueClass(IntWritable.class);
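To see why those two setter calls must differ, the mapper/reducer contract can be sketched in plain Java with no Hadoop dependency (a minimal sketch: `WordCountSketch` and its method names are illustrative, not part of the original code). The mapper emits (word, 1) pairs and the reducer sums per key, which is why the map output key class must be Text and the map output value class IntWritable.

```java
import java.util.*;

public class WordCountSketch {
    // Simulates the mapper: split each line on spaces, emit a (word, 1) pair
    // for every non-empty token. Keys are strings, values are ints, mirroring
    // the Text key / IntWritable value contract of WordMapper.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String s : line.split(" ")) {
            if (s.length() > 0) {
                out.add(new AbstractMap.SimpleEntry<>(s, 1));
            }
        }
        return out;
    }

    // Simulates the reducer: sum all values that share the same key.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> e : pairs) {
            counts.merge(e.getKey(), e.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : new String[]{"hello world", "hello hadoop"}) {
            pairs.addAll(map(line));
        }
        System.out.println(reduce(pairs)); // {hadoop=1, hello=2, world=1}
    }
}
```

If the framework is told the map output key class is IntWritable while the mapper actually emits string keys, it fails exactly as in your log, which is why the duplicated setter call is the bug.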