Why does this problem occur?
I was looking for the answer and checked the jar. It contains the class AbstractCollinsHeadFinder.java, and that is where the error is thrown:

at edu.stanford.nlp.trees.AbstractCollinsHeadFinder.determineHead(AbstractCollinsHeadFinder.java:158)
at edu.stanford.nlp.trees.AbstractCollinsHeadFinder.determineHead(AbstractCollinsHeadFinder.java:138)
There are two root causes for this error, both checked at the top of determineHead:
- The tree passed in is null.
- The tree passed in is a leaf.
@Override
public Tree determineHead(Tree t, Tree parent) {
  if (nonTerminalInfo == null) {
    throw new IllegalStateException("Classes derived from AbstractCollinsHeadFinder must create and fill HashMap nonTerminalInfo.");
  }
  // The error is mainly thrown by the following check
  if (t == null || t.isLeaf()) {
    throw new IllegalArgumentException("Can't return head of null or leaf Tree.");
  }
  if (DEBUG) {
    log.info("determineHead for " + t.value());
  }
  Tree[] kids = t.children();
  // ... (rest of the method omitted) ...
  return theHead;
}
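If you build or retrieve trees yourself and then call a head finder on them, a simple guard avoids both root causes. Below is a minimal sketch (the class name SafeHeadFinderExample and the sample sentence are only illustrative) that checks for a null or leaf tree before calling determineHead on a CollinsHeadFinder:

import edu.stanford.nlp.trees.*;

public class SafeHeadFinderExample {
  public static void main(String[] args) {
    // A small Penn Treebank style tree, only for demonstration
    Tree tree = Tree.valueOf("(S (NP (DT The) (NN dog)) (VP (VBZ barks)))");
    HeadFinder headFinder = new CollinsHeadFinder();
    // Guard against the two root causes before asking for the head
    if (tree != null && !tree.isLeaf()) {
      Tree head = headFinder.determineHead(tree);
      System.out.println("Head constituent: " + head);
    } else {
      System.out.println("Cannot determine the head of a null or leaf tree.");
    }
  }
}

If the guard fails, skip that tree or log it instead of letting the IllegalArgumentException propagate.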
Resource Link:
Check for parameters:
I have also checked your code. In your setProperty(...) call there are several parameters, and some of them may be missing. You can create the object with the following code:
// creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution
Properties props = new Properties();
props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
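One concrete way a null tree ends up at the head finder is leaving the parse annotator out of that list: sentence.get(TreeCoreAnnotations.TreeAnnotation.class) then returns null, and passing that null on to determineHead produces exactly this error. Here is a small sketch (the class name, sample text, and messages are my own illustration) that verifies the tree before using it:

import java.util.*;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.util.*;

public class CheckParseTreeExample {
  public static void main(String[] args) {
    Properties props = new Properties();
    // "parse" must be present, otherwise TreeAnnotation will be null
    props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    Annotation annotation = new Annotation("The quick brown fox jumps over the lazy dog.");
    pipeline.annotate(annotation);

    HeadFinder headFinder = new CollinsHeadFinder();
    for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
      Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
      if (tree == null) {
        // This is the situation that leads to the reported exception later on
        System.err.println("No parse tree for: " + sentence);
        continue;
      }
      // The parse is wrapped in a ROOT node; ask for the head of the sentence node below it
      Tree head = headFinder.determineHead(tree.firstChild());
      System.out.println("Head constituent: " + head);
    }
  }
}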
Resource Link:
A simple, complete example program:
import java.io.*;
import java.util.*;
import edu.stanford.nlp.io.*;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.trees.TreeCoreAnnotations.*;
import edu.stanford.nlp.util.*;
public class StanfordCoreNlpExample {
  public static void main(String[] args) throws IOException {
    PrintWriter xmlOut = new PrintWriter("xmlOutput.xml");
    Properties props = new Properties();
    props.setProperty("annotators",
        "tokenize, ssplit, pos, lemma, ner, parse");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    Annotation annotation = new Annotation(
        "This is a short sentence. And this is another.");
    pipeline.annotate(annotation);
    pipeline.xmlPrint(annotation, xmlOut);
    xmlOut.close();
    // An Annotation is a Map and you can get and use the
    // various analyses individually. For instance, this
    // gets the parse tree of the 1st sentence in the text.
    List<CoreMap> sentences = annotation
        .get(CoreAnnotations.SentencesAnnotation.class);
    if (sentences != null && sentences.size() > 0) {
      CoreMap sentence = sentences.get(0);
      Tree tree = sentence.get(TreeAnnotation.class);
      PrintWriter out = new PrintWriter(System.out);
      out.println("The first sentence parsed is:");
      tree.pennPrint(out);
      out.flush();
    }
  }
}