Question:

Stanford CoreNLP TokensRegex error when parsing a .rules file in Python

杨柏
2023-03-14

I tried to solve this problem as described in this link, but it does not work with regexner from the Stanford NLP library.

(Note: I am using the stanfordnlp library version 0.2.0, Stanford CoreNLP version 3.9.2, and Python 3.7.3.)

My TokensRegex rules file (tokenrgxrules.rules):

ner = { type: "CLASS", value: "edu.stanford.nlp.ling.CoreAnnotations$NamedEntityTagAnnotation" }

$ORGANIZATION_TITLES = "/inc\.|corp\./"

$COMPANY_INDICATOR_WORDS = "/company|corporation/"

ENV.defaults["stage"] = 1

{ pattern: (/works/ /for/ ([{pos: NNP}]+ $ORGANIZATION_TITLES)), action: (Annotate($1, ner, "RULE_FOUND_ORG") ) }

ENV.defaults["stage"] = 2

{ pattern: (([{pos: NNP}]+) /works/ /for/ [{ner: "RULE_FOUND_ORG"}]), action: (Annotate($1, ner, "RULE_FOUND_PERS") ) }

And the Python code I run:

import stanfordnlp


from stanfordnlp.server import CoreNLPClient
# example text
print('---')
print('input text')
print('')
text = "The analysis of shotgun sequencing data from metagenomic mixtures raises complex computational challenges. Part of the difficulty stems from the read length limitation of existing deep DNA sequencing technologies, an issue compounded by the extensive level of homology across viral and bacterial species. Another complication is the divergence of the microbial DNA sequences from the publicly available references. As a consequence, the assignment of a sequencing read to a database organism is often unclear. Lastly, the number of reads originating from a disease causing pathogen can be low (Barzon et al., 2013). The pathogen contribution to the mixture depends on the biological context, the timing of sample extraction and the type of pathogen considered. Therefore, highly sensitive computational approaches are required."
text = "In practice, its scope is broad and includes the analysis of a diverse set of samples such as gut microbiome (Qin et al., 2010), (Minot et al., 2011), environmental (Mizuno et al., 2013) or clinical (Willner et al., 2009), (Negredo et al., 2011), (McMullan et al., 2012) samples."
print(text)
# set up the client
print('---')
print('starting up Java Stanford CoreNLP Server...')
# I am not sure if I can add the tokensregex rules here
prop={'regexner.mapping': 'rgxrules.txt', "tokensregex.rules": "tokenrgxrules.rules", 'annotators': 'tokenize,ssplit,pos,lemma,ner,regexner,tokensregex'}


# set up the client


with CoreNLPClient(properties=prop, timeout=100000, memory='16G', be_quiet=False) as client:
    # submit the request to the server
    ann = client.annotate(text)
    # get the first sentence
    sentence = ann.sentence[0]

The console output is:

Starting server with command: java -Xmx16G -cp /Users/stanford-corenlp-full-2018-10-05//* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 100000 -threads 5 -maxCharLength 100000 -quiet False -serverProperties corenlp_server-f8a9bab3cb0b44da.props -preload tokenize,ssplit,pos,lemma,ner,tokensregex
[main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called ---
[main] INFO CoreNLP - setting default constituency parser
[main] INFO CoreNLP - warning: cannot find edu/stanford/nlp/models/srparser/englishSR.ser.gz
[main] INFO CoreNLP - using: edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz instead
[main] INFO CoreNLP - to use shift reduce parser download English models jar from:
[main] INFO CoreNLP - http://stanfordnlp.github.io/CoreNLP/download.html
[main] INFO CoreNLP -     Threads: 5
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[main] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.6 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.8 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [1.1 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.6 sec].
[main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
[main] INFO edu.stanford.nlp.time.TimeExpressionExtractorImpl - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 580704 unique entries out of 581863 from edu/stanford/nlp/models/kbp/english/gazetteers/regexner_caseless.tab, 0 TokensRegex patterns.
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 4869 unique entries out of 4869 from edu/stanford/nlp/models/kbp/english/gazetteers/regexner_cased.tab, 0 TokensRegex patterns.
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 585573 unique entries from 2 files
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokensregex
[main] ERROR CoreNLP - Could not pre-load annotators in server; encountered exception:
java.lang.RuntimeException: Error parsing file: Users/Documents/utils/tokenrgxrules.rules
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:293)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:275)
    at edu.stanford.nlp.pipeline.TokensRegexAnnotator.<init>(TokensRegexAnnotator.java:77)
    at edu.stanford.nlp.pipeline.AnnotatorImplementations.tokensregex(AnnotatorImplementations.java:78)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$getNamedAnnotators$6(StanfordCoreNLP.java:524)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$null$30(StanfordCoreNLP.java:602)
    at edu.stanford.nlp.util.Lazy$3.compute(Lazy.java:126)
    at edu.stanford.nlp.util.Lazy.get(Lazy.java:31)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:149)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:251)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:192)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:188)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.main(StanfordCoreNLPServer.java:1505)
Caused by: java.io.IOException: Unable to open "Users/Documents/utils/tokenrgxrules.rules" as class path, filename or URL
    at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:480)
    at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:617)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:287)
    ... 12 more
[main] INFO CoreNLP - Starting server...
[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000
[pool-1-thread-3] INFO CoreNLP - [/0:0:0:0:0:0:0:1:49907] API call w/annotators tokenize,ssplit,pos,lemma,ner,tokensregex
In practice, its scope is broad and includes the analysis of a diverse set of samples such as gut microbiome (Qin et al., 2010), (Minot et al., 2011), environmental (Mizuno et al., 2013) or clinical (Willner et al., 2009), (Negredo et al., 2011), (McMullan et al., 2012) samples.
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokensregex
java.lang.RuntimeException: Error parsing file: Users/Documents/utils/tokenrgxrules.rules
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:293)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:275)
    at edu.stanford.nlp.pipeline.TokensRegexAnnotator.<init>(TokensRegexAnnotator.java:77)
    at edu.stanford.nlp.pipeline.AnnotatorImplementations.tokensregex(AnnotatorImplementations.java:78)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$getNamedAnnotators$6(StanfordCoreNLP.java:524)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$null$30(StanfordCoreNLP.java:602)
    at edu.stanford.nlp.util.Lazy$3.compute(Lazy.java:126)
    at edu.stanford.nlp.util.Lazy.get(Lazy.java:31)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:149)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:251)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:192)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:188)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.mkStanfordCoreNLP(StanfordCoreNLPServer.java:368)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.access$800(StanfordCoreNLPServer.java:50)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer$CoreNLPHandler.handle(StanfordCoreNLPServer.java:855)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
    at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
    at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
    at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Unable to open "Users/Documents/utils/tokenrgxrules.rules" as class path, filename or URL
    at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:480)
    at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:617)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:287)
    ... 23 more
Traceback (most recent call last):
  File "/Users/anaconda3/lib/python3.7/site-packages/stanfordnlp/server/client.py", line 330, in _request
    r.raise_for_status()
  File "/Users/anaconda3/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://localhost:9000/?properties=%7B%27outputFormat%27%3A+%27serialized%27%7D
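
The Caused by lines show CoreNLP trying to open Users/Documents/utils/tokenrgxrules.rules, with no leading /, "as class path, filename or URL", so the rule-file path may simply not be resolvable from where the server runs. Below is a minimal sketch of one possible workaround, passing absolute paths in the properties dict; the os.path.abspath calls and the example sentence are illustrative assumptions, not a confirmed fix.

```python
import os
from stanfordnlp.server import CoreNLPClient

# Sketch: resolve both rule files to absolute paths so the CoreNLP server can open
# them regardless of its working directory. This is a hypothesis about the failure
# above, not a confirmed fix; the file names are taken from the properties in the post.
prop = {
    'regexner.mapping': os.path.abspath('rgxrules.txt'),
    'tokensregex.rules': os.path.abspath('tokenrgxrules.rules'),
    'annotators': 'tokenize,ssplit,pos,lemma,ner,regexner,tokensregex',
}

with CoreNLPClient(properties=prop, timeout=100000, memory='16G', be_quiet=False) as client:
    ann = client.annotate("John works for Acme Inc.")  # illustrative sentence only
    print(ann.sentence[0])
```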

1 Answer

卢光誉
2023-03-14

This issue seems to occur with stanza 1.1.0 and Stanford CoreNLP 3.9.2, and the team is working on it. I believe it means something else is trying to use the same port and the server is failing silently. You should first verify that you can get the server running outside of the Python client. Given the error you are seeing, I suspect the server is not even starting.
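
A minimal sketch of that check, assuming the CoreNLP jars from the local stanford-corenlp-full-2018-10-05 distribution are available and that this client version accepts the start_server and endpoint arguments: start the server by hand in a terminal, then attach the Python client to it instead of letting it launch its own server.

```python
# First, start the server manually in a separate terminal (the path is an example
# and should point at the local CoreNLP distribution):
#   java -Xmx4G -cp "/path/to/stanford-corenlp-full-2018-10-05/*" \
#        edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 100000
from stanfordnlp.server import CoreNLPClient

# start_server=False attaches the client to the already-running server instead of
# launching a new one; endpoint must match the port chosen above. Both arguments
# are assumptions about this client version's API rather than values from the post.
props = {'annotators': 'tokenize,ssplit,pos,lemma,ner'}
with CoreNLPClient(properties=props, start_server=False,
                   endpoint='http://localhost:9000', timeout=100000) as client:
    ann = client.annotate('Stanford University is located in California.')
    # If this returns an annotated document, the server itself is healthy and the
    # failure is limited to loading the tokensregex rules file.
    print(ann.sentence[0])
```

If the manually started server also fails once the tokensregex rules are added, the problem lies in the .rules file path or syntax rather than in the Python client.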
