public static class Reduce extends Reducer<WritableComparable, Writable, WritableComparable, Writable> {
    public void reduce(WritableComparable key,
                       Iterator<Writable> values,
                       OutputCollector<WritableComparable, NullWritable> output,
                       Reporter reporter) throws IOException {
        output.collect(key, NullWritable.get());
    }
}

public static void main(String[] args) throws Exception {
    JobConf jobConf = new JobConf(MapDemo.class);
    jobConf.setNumMapTasks(10);
    jobConf.setNumReduceTasks(1);
    jobConf.setJobName("MapDemo");
    jobConf.setOutputKeyClass(Text.class);
    jobConf.setOutputValueClass(NullWritable.class);
    jobConf.setMapperClass(Map.class);
    jobConf.setReducerClass(Reduce.class);
    jobConf.setInputFormat(TextInputFormat.class);
    jobConf.setOutputFormat(TextOutputFormat.class);
    FileInputFormat.setInputPaths(jobConf, new Path(args[0]));
    FileOutputFormat.setOutputPath(jobConf, new Path(args[1]));
    JobClient.runJob(jobConf);
}
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] com/example/mapreduce/MapDemo.java:[71,16] method setReducerClass in class org.apache.hadoop.mapred.JobConf cannot be applied to given types;
required: java.lang.Class<? extends org.apache.hadoop.mapred.Reducer>
found: java.lang.Class<com.example.mapreduce.MapDemo.Reduce>
reason: actual argument java.lang.Class<com.example.mapreduce.MapDemo.Reduce> cannot be converted to java.lang.Class<? extends org.apache.hadoop.mapred.Reducer> by method invocation conversion
[INFO] 1 error
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.679s
[INFO] Finished at: Mon Sep 16 09:23:08 PDT 2013
[INFO] Final Memory: 17M/202M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on project inventory: Compilation failure
[ERROR] com/example/mapreduce/MapDemo.java:[71,16] method setReducerClass in class org.apache.hadoop.mapred.JobConf cannot be applied to given types;
[ERROR] required: java.lang.Class<? extends org.apache.hadoop.mapred.Reducer>
[ERROR] found: java.lang.Class<com.example.mapreduce.MapDemo.Reduce>
Which tutorial are you following? I have never seen that one. But whatever you are following, it is clearly outdated, because it uses the old API, and I doubt you have followed it correctly.
This should work:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    /**
     * The map class of WordCount.
     */
    public static class TokenCounterMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    /**
     * The reducer class of WordCount.
     */
    public static class TokenCounterReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    /**
     * The main entry point.
     */
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/Users/miqbal1/hadoop-eco/hadoop-1.1.2/conf/core-site.xml"));
        conf.addResource(new Path("/Users/miqbal1/hadoop-eco/hadoop-1.1.2/conf/hdfs-site.xml"));
        conf.set("fs.default.name", "hdfs://localhost:9000");
        conf.set("mapred.job.tracker", "localhost:9001");

        Job job = new Job(conf, "WordCount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenCounterMapper.class);
        job.setReducerClass(TokenCounterReducer.class);
        job.setNumReduceTasks(2);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/inputs/demo.txt"));
        FileOutputFormat.setOutputPath(job, new Path("/outputs/1111223"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
A few remarks:
- The compilation error in your original code comes from mixing the two APIs: "extends Reducer" resolves to the new org.apache.hadoop.mapreduce.Reducer class, while JobConf.setReducerClass expects the old org.apache.hadoop.mapred.Reducer interface, hence the "cannot be converted" message (see the old-API sketch after these remarks).
- The WordCount above uses the new API consistently (Job, Mapper, Reducer with a Context), which is why it compiles.
- Adjust the hard-coded configuration file paths, fs.default.name, mapred.job.tracker, and the input/output paths to your own setup.
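For comparison only, here is a minimal sketch of a reducer that satisfies the old org.apache.hadoop.mapred API, which is the type JobConf.setReducerClass expects. It mirrors your Reduce class and is only meant to illustrate the difference; the new-API WordCount above is still the recommended route.

// Old-API imports assumed (sketch only):
// import java.io.IOException;
// import java.util.Iterator;
// import org.apache.hadoop.io.NullWritable;
// import org.apache.hadoop.io.Writable;
// import org.apache.hadoop.io.WritableComparable;
// import org.apache.hadoop.mapred.MapReduceBase;
// import org.apache.hadoop.mapred.OutputCollector;
// import org.apache.hadoop.mapred.Reducer;
// import org.apache.hadoop.mapred.Reporter;
public static class Reduce extends MapReduceBase
        implements Reducer<WritableComparable, Writable, WritableComparable, NullWritable> {
    @Override
    public void reduce(WritableComparable key, Iterator<Writable> values,
                       OutputCollector<WritableComparable, NullWritable> output,
                       Reporter reporter) throws IOException {
        // Emit each key once with a null value, as your original reduce() intended.
        output.collect(key, NullWritable.get());
    }
}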
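On a side note, instead of hard-coding the *-site.xml locations and addresses in the driver, you can let ToolRunner and GenericOptionsParser pick them up from the command line (-conf and -D options). A minimal sketch, using a hypothetical WordCountDriver class that reuses the TokenCounterMapper and TokenCounterReducer from above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains whatever -conf files and -D properties
        // ToolRunner parsed from the command line.
        Job job = new Job(getConf(), "WordCount");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCount.TokenCounterMapper.class);
        job.setReducerClass(WordCount.TokenCounterReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // Generic options are stripped off before run() receives the remaining args.
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}

You would then launch it with something like: hadoop jar wordcount.jar WordCountDriver -conf /Users/miqbal1/hadoop-eco/hadoop-1.1.2/conf/core-site.xml -conf /Users/miqbal1/hadoop-eco/hadoop-1.1.2/conf/hdfs-site.xml /inputs/demo.txt /outputs/1111223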