public static void main(String[] args) throws Exception
{
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));
Job job = new Job();
//job.setJarByClass(WordCount.class);
job.setJobName("WordCounter");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path("/home/pramukh/eclipse/eclipseproj/hadoop/input/input.txt"));
FileOutputFormat.setOutputPath(job, new Path("/home/pramukh/eclipse/eclipseproj/hadoop/output.txt"));
//System.exit(0);
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
This is my Java Hadoop WordCount example, and it gives the following error:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/avro/io/DatumReader
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1074)
    at org.apache.hadoop.io.serializer.SerializationFactory.add(SerializationFactory.java:69)
    ...
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
    at org.apache.hadoop.mapreduce.Job.submit(...

Thanks in advance.
Assuming the rest of your code is correct, I suggest you check the output path: the output folder must not already exist when the job runs.
I tried your code with the file paths changed, and it ran fine:
FileInputFormat.addInputPath(job, new Path("hdfs://localhost:9000/wordcount/test.txt"));
FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:9000/wordcount/out"));
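Since a leftover output directory from a previous run makes the job fail, you can also delete it from the driver before submitting. This is a minimal sketch using Hadoop's FileSystem API, assuming the Hadoop client jars are on the classpath; the helper name `clearOutputDir` is just illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: remove the job's output directory if it already exists,
// because FileOutputFormat refuses to write into an existing path.
static void clearOutputDir(Configuration conf, Path outputPath) throws IOException {
    FileSystem fs = outputPath.getFileSystem(conf);
    if (fs.exists(outputPath)) {
        fs.delete(outputPath, true); // true = delete recursively
    }
}
```

Call it just before `FileOutputFormat.setOutputPath(job, ...)` with the same `Path` object, e.g. `clearOutputDir(conf, new Path("hdfs://localhost:9000/wordcount/out"));`.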