Hot questions on using neural networks in Stanford NLP

Question:

I know about the Python wrappers for the Stanford CoreNLP package, but that package does not seem to contain the neural net based dependency parser model. Rather, it is present in the Stanford-parser-full-****-- package, for which I can't find any Python wrapper. My question: is there a Python wrapper that would parse using the Stanford neural net based dependency parser? Any suggestions or directions would be helpful. Thanks!


Answer:

I don't know of any such wrapper at the moment, and there are no plans at Stanford to build one. (Maybe the NLTK developers would be up for the challenge?)
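In the absence of a wrapper, one common workaround is to shell out from Python to the Java `DependencyParser` class shipped in the CoreNLP distribution. The sketch below assembles such a command line; the jar paths, the model path, and the flag names are assumptions based on the standard CoreNLP layout, so verify them against your local installation.

```python
import subprocess

def build_nndep_command(corenlp_jar, models_jar, text_file, out_file,
                        model="edu/stanford/nlp/models/parser/nndep/english_UD.gz"):
    """Assemble a java command line that runs the neural net dependency parser
    on a plain-text file and writes CoNLL-style output. Jar locations and the
    model path are assumptions -- adjust them to your setup."""
    classpath = "{}:{}".format(corenlp_jar, models_jar)
    return [
        "java", "-cp", classpath,
        "edu.stanford.nlp.parser.nndep.DependencyParser",
        "-model", model,
        "-textFile", str(text_file),
        "-outFile", str(out_file),
    ]

def parse_file(cmd):
    # Actually running this requires Java and the CoreNLP jars on disk.
    subprocess.run(cmd, check=True)
```

This avoids any wrapper at the cost of one JVM start-up per call; for many small requests a long-running CoreNLP server would be a better fit.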

Question:

If I use the Stanford CoreNLP neural network dependency parser with the english_SD model, which performs pretty well according to the website (link, bottom of the page), it produces completely different results from this demo, which I assume is based on the LexicalizedParser (or at least some other parser).

If I put the sentence "I don't like the car" into the demo page, this is the result:

If I put the same sentence into the neural network parser, it results in this:

In the neural network parser's result, everything just depends on "like". I think it could be due to different POS tags, but I used the CoreNLP Maxent tagger with the english-bidirectional-distsim.tagger model, which should be fairly standard. Any ideas on this?


Answer:

By default, we use the english-left3words-distsim.tagger model for the tagger, which is faster than the bidirectional model but occasionally produces worse results. As both the constituency parser used on the demo page and the neural network dependency parser you used rely heavily on POS tags, it is not really surprising that the different POS sequences lead to different parses, especially when the main verb gets a function word tag (IN = preposition) instead of a content word tag (VB = verb, base form).
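To rule out the tagger as the source of the difference, you can force the pipeline to use the bidirectional model explicitly. A minimal sketch of the properties you would pass to a CoreNLP pipeline or server follows; the exact model path inside the models jar is an assumption, so check it against your jar's contents.

```python
# Assumed path of the bidirectional tagger inside the CoreNLP models jar --
# verify against your distribution before relying on it.
BIDIRECTIONAL_TAGGER = (
    "edu/stanford/nlp/models/pos-tagger/english-bidirectional-distsim.tagger"
)

def pipeline_properties(tagger_model=BIDIRECTIONAL_TAGGER):
    """Build a properties dict that runs POS tagging with the given model
    before the neural net dependency parser (depparse annotator)."""
    return {
        "annotators": "tokenize,ssplit,pos,depparse",
        # Overrides the default english-left3words-distsim.tagger model.
        "pos.model": tagger_model,
    }
```

With both the demo-style pipeline and your own pipeline pinned to the same tagger model, any remaining differences come from the parsers themselves rather than from diverging POS sequences.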

But also note that the demo outputs dependency parses in the new Universal Dependencies representation, while the english_SD model parses sentences to the old Stanford Dependencies representation. For your example sentence the correct parses happen to be the same, but you will see differences for other sentences, especially ones with prepositional phrases, which are treated differently in the new representation.
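The two representations correspond to two different model files for the neural net dependency parser, so you can select one or the other through the depparse model property. The sketch below assumes the standard model names in the CoreNLP models jar (english_SD.gz and english_UD.gz); treat those paths as assumptions to verify locally.

```python
# Assumed model paths for the two dependency representations -- check the
# contents of your stanford-corenlp models jar before using them.
NNDEP_MODELS = {
    "SD": "edu/stanford/nlp/models/parser/nndep/english_SD.gz",  # old Stanford Dependencies
    "UD": "edu/stanford/nlp/models/parser/nndep/english_UD.gz",  # new Universal Dependencies
}

def depparse_properties(representation="UD"):
    """Build pipeline properties that pin the dependency representation."""
    if representation not in NNDEP_MODELS:
        raise ValueError("representation must be 'SD' or 'UD'")
    return {
        "annotators": "tokenize,ssplit,pos,depparse",
        "depparse.model": NNDEP_MODELS[representation],
    }
```

Comparing the english_SD output against a UD-based demo will therefore always show some systematic differences (prepositional phrases being the most visible), independent of any tagging issues.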