Question:

StanfordNLP: cannot find the models from kbp (Eclipse)

柳威
2023-03-14

I'm fairly new to Java and Eclipse; for NLP tasks I usually use Python and NLTK. I'm trying to work through the tutorial provided here:

package edu.stanford.nlp.examples;

import edu.stanford.nlp.coref.data.CorefChain;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.ie.util.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.semgraph.*;
import edu.stanford.nlp.trees.*;
import java.util.*;

public class BasicPipelineExample {

  public static String text = "Joe Smith was born in California. " +
      "In 2017, he went to Paris, France in the summer. " +
      "His flight left at 3:00pm on July 10th, 2017. " +
      "After eating some escargot for the first time, Joe said, \"That was delicious!\" " +
      "He sent a postcard to his sister Jane Smith. " +
      "After hearing about Joe's trip, Jane decided she might go to France one day.";

  public static void main(String[] args) {
    // set up pipeline properties
    Properties props = new Properties();
    // set the list of annotators to run
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,parse,depparse,coref,kbp,quote");
    // set a property for an annotator, in this case the coref annotator is being set to use the neural algorithm
    props.setProperty("coref.algorithm", "neural");
    // build pipeline
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    // create a document object
    CoreDocument document = new CoreDocument(text);
    // annotate the document
    pipeline.annotate(document);
    // examples

    // 10th token of the document
    CoreLabel token = document.tokens().get(10);
    System.out.println("Example: token");
    System.out.println(token);
    System.out.println();

    // text of the first sentence
    String sentenceText = document.sentences().get(0).text();
    System.out.println("Example: sentence");
    System.out.println(sentenceText);
    System.out.println();

    // second sentence
    CoreSentence sentence = document.sentences().get(1);

    // list of the part-of-speech tags for the second sentence
    List<String> posTags = sentence.posTags();
    System.out.println("Example: pos tags");
    System.out.println(posTags);
    System.out.println();

    // list of the ner tags for the second sentence
    List<String> nerTags = sentence.nerTags();
    System.out.println("Example: ner tags");
    System.out.println(nerTags);
    System.out.println();

    // constituency parse for the second sentence
    Tree constituencyParse = sentence.constituencyParse();
    System.out.println("Example: constituency parse");
    System.out.println(constituencyParse);
    System.out.println();

    // dependency parse for the second sentence
    SemanticGraph dependencyParse = sentence.dependencyParse();
    System.out.println("Example: dependency parse");
    System.out.println(dependencyParse);
    System.out.println();

    // kbp relations found in fifth sentence
    List<RelationTriple> relations =
        document.sentences().get(4).relations();
    System.out.println("Example: relation");
    System.out.println(relations.get(0));
    System.out.println();

    // entity mentions in the second sentence
    List<CoreEntityMention> entityMentions = sentence.entityMentions();
    System.out.println("Example: entity mentions");
    System.out.println(entityMentions);
    System.out.println();

    // coreference between entity mentions
    CoreEntityMention originalEntityMention = document.sentences().get(3).entityMentions().get(1);
    System.out.println("Example: original entity mention");
    System.out.println(originalEntityMention);
    System.out.println("Example: canonical entity mention");
    System.out.println(originalEntityMention.canonicalEntityMention().get());
    System.out.println();

    // get document wide coref info
    Map<Integer, CorefChain> corefChains = document.corefChains();
    System.out.println("Example: coref chains for document");
    System.out.println(corefChains);
    System.out.println();

    // get quotes in document
    List<CoreQuote> quotes = document.quotes();
    CoreQuote quote = quotes.get(0);
    System.out.println("Example: quote");
    System.out.println(quote);
    System.out.println();

    // original speaker of quote
    // note that quote.speaker() returns an Optional
    System.out.println("Example: original speaker of quote");
    System.out.println(quote.speaker().get());
    System.out.println();

    // canonical speaker of quote
    System.out.println("Example: canonical speaker of quote");
    System.out.println(quote.canonicalSpeaker().get());
    System.out.println();

  }

}

But I always get the following output, which ends in an error. The same thing happens for every kbp-related module, even though I did add the jar files the tutorial asks for:

Adding annotator tokenize
No tokenizer type provided. Defaulting to PTBTokenizer.
Adding annotator ssplit
Adding annotator pos
Loading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.9 sec].
Adding annotator lemma
Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.4 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [1.8 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.6 sec].
Exception in thread "main" edu.stanford.nlp.io.RuntimeIOException: Couldn't read TokensRegexNER from edu/stanford/nlp/models/kbp/regexner_caseless.tab
    at edu.stanford.nlp.pipeline.TokensRegexNERAnnotator.readEntries(TokensRegexNERAnnotator.java:593)
    at edu.stanford.nlp.pipeline.TokensRegexNERAnnotator.<init>(TokensRegexNERAnnotator.java:293)
    at edu.stanford.nlp.pipeline.NERCombinerAnnotator.setUpFineGrainedNER(NERCombinerAnnotator.java:209)
    at edu.stanford.nlp.pipeline.NERCombinerAnnotator.<init>(NERCombinerAnnotator.java:152)
    at edu.stanford.nlp.pipeline.AnnotatorImplementations.ner(AnnotatorImplementations.java:68)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$getNamedAnnotators$45(StanfordCoreNLP.java:546)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$null$70(StanfordCoreNLP.java:625)
    at edu.stanford.nlp.util.Lazy$3.compute(Lazy.java:126)
    at edu.stanford.nlp.util.Lazy.get(Lazy.java:31)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:149)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:495)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:201)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:194)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:181)
    at NLP.Start.main(Start.java:13)
Caused by: java.io.IOException: Unable to open "edu/stanford/nlp/models/kbp/regexner_caseless.tab" as class path, filename or URL
    at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:481)
    at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:618)
    at edu.stanford.nlp.pipeline.TokensRegexNERAnnotator.readEntries(TokensRegexNERAnnotator.java:590)
    ... 14 more

Do you know of a way to fix this? Thanks in advance!

2 answers

黎曾笑
2023-03-14

Well, according to the models page there is a separate download for the kbp model material. Perhaps you have stanford-english-corenlp-2018-02-27-models on your classpath but not stanford-english-kbp-corenlp-2018-02-27-models? I'm guessing this because the other models do appear to be found, judging by the output you included in your question.
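One quick way to confirm this diagnosis is to ask the JVM whether the resource named in the stack trace is visible on the classpath at all. A minimal diagnostic sketch (the class name is hypothetical; the resource path is copied verbatim from the error above):

```java
public class CheckKbpResource {
    public static void main(String[] args) {
        // Resource path taken from the RuntimeIOException in the question
        String path = "edu/stanford/nlp/models/kbp/regexner_caseless.tab";
        java.net.URL url = CheckKbpResource.class.getClassLoader().getResource(path);
        if (url == null) {
            // The kbp models jar is not on the classpath
            System.out.println("NOT FOUND on classpath: " + path);
        } else {
            System.out.println("Found at: " + url);
        }
    }
}
```

If this prints "NOT FOUND" while the NER classifiers above load fine, the kbp models jar specifically is the one missing from the Eclipse build path.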

何飞翰
2023-03-14

Perhaps you forgot to add stanford-corenlp-3.9.1-models.jar to your classpath.
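If the project is built with Maven rather than by adding jars in Eclipse by hand, the models can be pulled in as classified artifacts. A sketch, assuming version 3.9.1 and the classifier names published for that release line (verify them against the Stanford CoreNLP download page):

```xml
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.9.1</version>
</dependency>
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.9.1</version>
  <classifier>models</classifier>
</dependency>
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.9.1</version>
  <classifier>models-english-kbp</classifier>
</dependency>
```

The third dependency is the one that carries the kbp resources such as regexner_caseless.tab; the plain models jar alone does not include them.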
