
Noun chunks not present in .noun_chunks when nouns are properly tagged. #1785

Closed
theSage21 opened this issue Jan 1, 2018 · 2 comments
Labels
lang / en English language data and models perf / accuracy Performance: accuracy

Comments

@theSage21

The table below summarises the problem for the sentence "What happened for F-22 in Syria?".

| spaCy version | pos_ | tag_ | dep_ | noun_chunks |
|---|---|---|---|---|
| 1.8.2 | NOUN VERB ADP PROPN ADP PROPN PUNCT | WP VBD IN NNP IN NNP . | nsubj ROOT prep pobj prep pobj punct | [What, F-22, Syria] |
| 2.0.2 | NOUN VERB ADP PROPN ADP PROPN PUNCT | WP VBD IN NNP IN NNP . | nsubj ROOT prep punct punct pobj punct | [What, Syria] |
| 2.0.5 | NOUN VERB ADP PROPN ADP PROPN PUNCT | WP VBD IN NNP IN NNP . | nsubj ROOT prep punct punct pobj punct | [What, Syria] |

In the old spaCy, the adversarial sentence "What happened for F-22 in Syria?" yields [What, F-22, Syria] from .noun_chunks, while the new version misses F-22. The pos_ and tag_ values are identical across versions; however, the dep_ labels differ. Would this difference reliably explain the difference in noun_chunks?
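The connection between dep_ and noun_chunks can be sketched in pure Python. This is a deliberately simplified rendition of spaCy's English noun-chunk heuristic: a token heads a chunk when both its coarse POS and its dependency label qualify. The label sets below are assumptions for illustration, and the real iterator also expands heads into full spans and handles conjuncts. Applied to the dep_ values from the table, the changed labels on "F-22" and "in" are enough to drop F-22:

```python
# Simplified sketch of the English noun-chunk heuristic (assumed label
# sets; the actual spaCy implementation also builds full spans and
# handles conjunct chains).

CHUNK_DEPS = {"nsubj", "dobj", "nsubjpass", "pcomp", "pobj",
              "dative", "appos", "attr", "ROOT"}
NOUN_POS = {"NOUN", "PROPN", "PRON"}

def chunk_heads(tokens, pos_tags, deps):
    """Return the tokens that would head a noun chunk."""
    return [tok for tok, pos, dep in zip(tokens, pos_tags, deps)
            if pos in NOUN_POS and dep in CHUNK_DEPS]

tokens = ["What", "happened", "for", "F-22", "in", "Syria", "?"]
pos    = ["NOUN", "VERB", "ADP", "PROPN", "ADP", "PROPN", "PUNCT"]

# dep_ labels as reported for each version
dep_v18 = ["nsubj", "ROOT", "prep", "pobj", "prep", "pobj", "punct"]
dep_v20 = ["nsubj", "ROOT", "prep", "punct", "punct", "pobj", "punct"]

print(chunk_heads(tokens, pos, dep_v18))  # ['What', 'F-22', 'Syria']
print(chunk_heads(tokens, pos, dep_v20))  # ['What', 'Syria']
```

Under this heuristic, "F-22" disappears in 2.0.x purely because the parser relabels it `punct`, even though its pos_ (PROPN) is unchanged.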

Old environment

  • Python version: 2.7.12
  • Platform: Linux-4.4.0-1041-aws-x86_64-with-Ubuntu-16.04-xenial
  • spaCy version: 1.8.2
  • Installed models: en

New environment

  • Python version: 3.5.3
  • Platform: Linux-4.10.0-42-generic-x86_64-with-Ubuntu-17.04-zesty
  • spaCy version: 2.0.2
  • Installed models: en_core_web_md, en
@ines ines added lang / en English language data and models performance labels Jan 3, 2018
@ines ines added perf / accuracy Performance: accuracy and removed performance labels Aug 15, 2018
@ines (Member) commented Dec 14, 2018

Yes, the noun chunks depend on the part-of-speech tags and the dependency parse, so this issue likely comes down to the difference in predictions made by the parser.

I'm merging this with #3052. We've now added a master thread for incorrect predictions and related reports – see the issue for more details.

@ines ines closed this as completed Dec 14, 2018
lock bot commented Jan 13, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked as resolved and limited conversation to collaborators Jan 13, 2019