Sunday, November 19, 2017

'java.lang.IllegalArgumentException: Can't return head of null or leaf Tree' with CoreNLP on Android. How to solve this issue?

Why does this problem occur?

I was looking for the answer and checked the jar. There is a class AbstractCollinsHeadFinder.java, and the error comes from there:

edu.stanford.nlp.trees.AbstractCollinsHeadFinder.determineHead(AbstractCollinsHeadFinder.java:158)
  at edu.stanford.nlp.trees.AbstractCollinsHeadFinder.determineHead(AbstractCollinsHeadFinder.java:138)

There are two root causes for this error:
  1. The tree passed to determineHead is null.
  2. The tree passed to determineHead is a leaf.
The relevant check inside the library looks like this:
    @Override
    public Tree determineHead(Tree t, Tree parent) {
      if (nonTerminalInfo == null) {
        throw new IllegalStateException("Classes derived from AbstractCollinsHeadFinder must create and fill HashMap nonTerminalInfo.");
      }
      // The error is thrown under exactly this condition
      if (t == null || t.isLeaf()) {
        throw new IllegalArgumentException("Can't return head of null or leaf Tree."); 
      }
      if (DEBUG) {
        log.info("determineHead for " + t.value());
      }
    
      Tree[] kids = t.children();
      // ... rest of the method omitted ...
      return theHead;
    }

Resource Link:

  1. https://github.com/stanfordnlp/CoreNLP/blob/master/src/edu/stanford/nlp/trees/AbstractCollinsHeadFinder.java#L163
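
In short, the exception just means that the Tree handed to the head finder was either null or a single leaf node. As a quick illustration (my own minimal sketch, not code from the question), you can guard the call to determineHead with the same checks the library performs internally:

import edu.stanford.nlp.trees.CollinsHeadFinder;
import edu.stanford.nlp.trees.HeadFinder;
import edu.stanford.nlp.trees.Tree;

public class HeadFinderGuardExample {
    public static void main(String[] args) {
        // Build a small parse tree from a Penn Treebank bracketing, just for illustration.
        Tree tree = Tree.valueOf("(ROOT (S (NP (DT This)) (VP (VBZ works))))");

        HeadFinder headFinder = new CollinsHeadFinder();

        // Guard against the two conditions that trigger the exception:
        // a null tree and a leaf tree.
        if (tree != null && !tree.isLeaf()) {
            Tree head = headFinder.determineHead(tree);
            System.out.println("Head child of the root: " + head);
        } else {
            System.out.println("Tree is null or a leaf; skipping head finding.");
        }
    }
}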

Check your parameters:

I have also checked your code. Your setProperty(...) call takes a list of annotators, and maybe one of them is missing. You can create the StanfordCoreNLP object with the following code.
// creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution 
Properties props = new Properties();
props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
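
One detail worth checking: if the parse annotator is left out of the annotators list, each sentence simply has no TreeAnnotation, so sentence.get(TreeAnnotation.class) returns null, and passing that null tree to a head finder reproduces exactly the exception above. Here is a minimal sketch of that failure path (my own example, assuming the CoreNLP models jar is on the classpath):

import java.util.*;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.util.CoreMap;

public class MissingParseAnnotatorExample {
    public static void main(String[] args) {
        // "parse" is deliberately missing here, so no parse trees are produced.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation annotation = new Annotation("This is a short sentence.");
        pipeline.annotate(annotation);

        CoreMap sentence = annotation
                .get(CoreAnnotations.SentencesAnnotation.class).get(0);
        Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);

        if (tree == null) {
            // Calling new CollinsHeadFinder().determineHead(tree) at this point
            // would throw "Can't return head of null or leaf Tree."
            System.out.println("No parse tree: add \"parse\" to the annotators list.");
        }
    }
}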

A simple, complete example program:

import java.io.*;
import java.util.*;
import edu.stanford.nlp.io.*;
import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.trees.TreeCoreAnnotations.*;
import edu.stanford.nlp.util.*;

public class StanfordCoreNlpExample {
    public static void main(String[] args) throws IOException {
        PrintWriter xmlOut = new PrintWriter("xmlOutput.xml");
        Properties props = new Properties();
        props.setProperty("annotators",
                "tokenize, ssplit, pos, lemma, ner, parse");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        Annotation annotation = new Annotation(
                "This is a short sentence. And this is another.");
        pipeline.annotate(annotation);
        pipeline.xmlPrint(annotation, xmlOut);
        // An Annotation is a Map and you can get and use the
        // various analyses individually. For instance, this
        // gets the parse tree of the 1st sentence in the text.
        List<CoreMap> sentences = annotation
                .get(CoreAnnotations.SentencesAnnotation.class);
        if (sentences != null && sentences.size() > 0) {
            CoreMap sentence = sentences.get(0);
            Tree tree = sentence.get(TreeAnnotation.class);
            PrintWriter out = new PrintWriter(System.out);
            out.println("The first sentence parsed is:");
            tree.pennPrint(out);
            // Flush so the parse tree actually appears on System.out.
            out.flush();
        }
        // Close the XML output so the file contents are fully written.
        xmlOut.close();
    }
}

Resource Links:

  1. The Stanford CoreNLP Natural Language Processing Toolkit
  2. https://stackoverflow.com/a/40725751/2293534
