OPENNLP-1479: Write better tests for pattern verification (tokenizers) #559

Merged: 6 commits into apache:main on Dec 9, 2023

Conversation

@l-ma (Contributor) commented Dec 5, 2023

This change adds two example tests for German. Let me know if they are on the right track, and whether I should add more for German and later for other languages.

Thank you for contributing to Apache OpenNLP.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

For all changes:

  • Is there a JIRA ticket associated with this PR? Is it referenced
    in the commit message?

  • Does your PR title start with OPENNLP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.

  • Has your PR been rebased against the latest commit within the target branch (typically main)?

  • Is your initial contribution a single, squashed commit?

For code changes:

  • Have you ensured that the full suite of tests is executed via mvn clean install at the root opennlp folder?
  • Have you written or updated unit tests to verify your changes? I am adding tests
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0? NOT APPLICABLE
  • If applicable, have you updated the LICENSE file, including the main LICENSE file in opennlp folder? NOT APPLICABLE
  • If applicable, have you updated the NOTICE file, including the main NOTICE file found in opennlp folder? NOT APPLICABLE

For documentation related changes:

  • Have you ensured that format looks appropriate for the output in which it is rendered? NOT APPLICABLE

Note:

Please ensure that once the PR is submitted, you check GitHub Actions for build issues and submit an update to your PR as soon as possible.

@rzo1 requested a review from mawiesne, December 5, 2023 08:11
@jzonthemtn (Contributor):

Thanks @l-ma!

@mawiesne (Contributor) commented Dec 6, 2023

Thx @l-ma, that's a first contribution of high value. I'm happy you found the Sigmund Freud text sample I added just recently. I will provide feedback on the German part as soon as I find some spare minutes.

Meanwhile, feel free to add further test cases for other languages you are familiar with (that is, other than English). French could be interesting.

@kinow might potentially provide feedback or examples for PT, ES and other languages from that family / group. He was involved in the topic some months back and opened the related Jira.

Just stack further commits on top of the existing test case, or squash locally and force push into this branch here.

@kinow (Member) commented Dec 6, 2023

Thank you @l-ma! Ping/tag me if you have any PT/ES/CA/etc. tests and I will have a look and try to help 😀

@l-ma (Contributor, Author) commented Dec 8, 2023

Thank you for your welcoming words! I've added some more tests (thanks to @juhyep for helping).

The tests now have a lot of duplicated code. Do you want me to extract the common code into a helper method? I saw that some other tests such as SentenceDetectorMEGermanTest have duplicated code, so I'm not sure what you prefer.

@l-ma (Contributor, Author) commented Dec 8, 2023

I was also curious how the Tokenizer would handle contractions and quotes. It splits on a single/double quote, so don't becomes [don, 't] and "wasn't" becomes ["wasn, 't, "]. Is there a spec for this behavior?

I understand if these tests are out-of-scope, so let me know if they should be kept or removed.
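
For readers who want to reproduce the observation above, here is a minimal sketch using TokenizerME. It assumes a pretrained English tokenizer model is available locally; the file name en-token.bin is a placeholder, not something defined by this PR.

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;

import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;

public class ContractionTokenizationDemo {

  public static void main(String[] args) throws IOException {
    // Placeholder path: any pretrained English tokenizer model on disk.
    TokenizerModel model = new TokenizerModel(new File("en-token.bin"));
    TokenizerME tokenizer = new TokenizerME(model);

    // As observed in the comment above, "don't" may come back as [don, 't],
    // and the double quotes around "wasn't" become separate tokens.
    String[] tokens = tokenizer.tokenize("I don't know what \"wasn't\" means.");
    System.out.println(Arrays.toString(tokens));
  }
}
```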

@mawiesne (Contributor) commented Dec 8, 2023

Do you want me to extract the common code into a helper method?

@l-ma Yes, that would be preferable. Reusing the very same code is a good thing. If you can refactor the existing test code you mentioned, and it fits the scope of the issue, feel free to apply that refactoring and extract helper methods as well.

But: don't over-engineer things. It's always okay to open a new Jira to allow for focus on separate topics / parts of the code.

Thx again.
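
As a follow-up to the helper-method discussion above, here is a sketch of what such an extraction could look like. The class and method names are illustrative only and are not part of this PR.

```java
import opennlp.tools.tokenize.Tokenizer;
import org.junit.jupiter.api.Assertions;

// Illustrative helper: tokenizes a sentence and compares the result
// against the expected token sequence.
final class TokenizerTestUtil {

  private TokenizerTestUtil() {
  }

  static void assertTokens(Tokenizer tokenizer, String sentence, String... expected) {
    String[] actual = tokenizer.tokenize(sentence);
    Assertions.assertArrayEquals(expected, actual);
  }
}
```

A language-specific test could then reduce to a single call, e.g. assertTokens(tokenizer, "Das ist ein Test.", "Das", "ist", "ein", "Test", ".").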

@mawiesne (Contributor) commented Dec 8, 2023

Is there a spec for this behavior?

That's an excellent question. Curiosity for the win! Maybe @jzonthemtn can comment on that. Both of you are English native speakers, so I think you can better judge whether to split apart those fragments or if they should be kept tied as a result of tokenization.

For this finding, and after comments by Jeff, one should open a separate Jira if this is a "topic" to work on. The tests could be kept and provide a starting point to continue the work, that is, by leaving a comment with the new issue's identifier (OPENNLP-xyz).

@mawiesne (Contributor) commented Dec 8, 2023

Ping @kinow - ES/PT/CA and IT test cases are now provided by @l-ma. Could you review / comment on that part, pls?

@rzo1 (Contributor) commented Dec 8, 2023

Is there a spec for this behavior?

The Penn Treebank guidelines suggest tokenizing as ca + n't and do + n't. The Python folks behind NLTK adhere to this convention (if the Penn Treebank tokenizer is used).

Another example is the English phrase a 12-ft boat. How shall we handle the hyphenated length expression? Is this one, two, or even three tokens?

From a very quick literature review, it seems that this ambiguity is an implementation detail and not really defined (as it depends on the actual use case).

Looking at the Stanford Tokenizer, they have a bunch of configuration options for a lot of normalization that happens during tokenization.
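
To make the convention above concrete, here is a small, hypothetical illustration of how Penn Treebank-style contraction splits could be written down as expected test data. Whether OpenNLP's pretrained models follow this convention is exactly the open question discussed here, so this is not a claim about their behavior.

```java
import java.util.Map;

// Hypothetical test data: Penn Treebank-style contraction splits as described above.
final class TreebankContractionExamples {

  static final Map<String, String[]> EXPECTED_SPLITS = Map.of(
      "can't", new String[] {"ca", "n't"},
      "don't", new String[] {"do", "n't"}
  );

  private TreebankContractionExamples() {
  }
}
```

A separate Jira, as suggested earlier in this thread, would be the place to decide which convention OpenNLP should document and test.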

Diff context in TokenizerMEIT.java:

@@ -36,4 +36,16 @@ void testTokenizerDownloadedModel() throws IOException {
Assertions.assertEquals(",", tokens[1]);
}

@Test
void testTokenizerDownloadedModelDe() throws IOException {
Review comment (Contributor):

@l-ma As far as I can see, this (integration) test isn't required, as it merely replicates what is already covered by your enhanced unit test. Therefore, just revert TokenizerMEIT.java back to its original form.

@mawiesne requested a review from kinow, December 8, 2023 12:17
@kinow (Member) left a review comment:

Portuguese and Spanish tests look good. Catalan needs to use apostrophes for articles before vowels, as in French. Thanks!

@mawiesne mawiesne merged commit 5deae24 into apache:main Dec 9, 2023