Running Tests #18
Comments
This is a bug on our part. Thanks for bringing it up. The integration tests seem to be using legacy code and we'll update them to use the refactored version soon. Keeping this issue open.
Okay, any idea of when it will be fixed? Alternatively, I can try fixing it and sending a pull request if you can give me some idea of how to update the code. As it turns out, when I run shell_chat I get an error because corenlp is timing out (although the docker image is running), and there is no way for me to test different components other than running the tests. Or is there one?
If it is a timeout issue, you can disable timeouts by setting the following flag to False: chirpycardinal/chirpy/core/flags.py, line 3 (commit 3e0656f).
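A rough sketch of what that change could look like, assuming the flag is a plain module-level boolean (the actual name on line 3 of flags.py may differ):

```python
# chirpy/core/flags.py (sketch; check the real flag name in the file)
use_timeouts = False  # disable timeouts on remote-module calls while debugging locally
```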
Let us know if this solves the issue for you. If shell_chat is failing, pretty much all the integration tests will also fail and won't really lead you to the root cause. But if you are able to run shell_chat after disabling timeouts, there is hope that the failed integration tests will tell you something meaningful. You can also query the remote module directly (localhost:<port>) with Postman or curl to check whether it is working as expected. You would need to pass JSON data like so: chirpycardinal/chirpy/annotators/corenlp.py, line 278 (commit 3e0656f).
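For example, a minimal direct query with Python's requests (the port and payload fields below are assumptions based on this thread; adjust them to match your local container mapping):

```python
# Sketch: POST a JSON payload straight to the corenlp annotator container.
# Assumes it is mapped to localhost:3300 and accepts "text" and "annotators" fields.
import requests

resp = requests.post(
    "http://localhost:3300/",
    json={"text": "hello", "annotators": "pos,ner,parse,sentiment"},
    timeout=10,
)
print(resp.status_code)
print(resp.text)
```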
I tried restarting the docker image, but I still get the same output. What can I do next?
If the container is running and connected, then it must be throwing an error internally. You can look at the logs in a docker container using docker logs <container_name>.
When I do:

curl --header "Content-Type: application/json" --request POST --data '{'text': 'hello', 'annotators': 'pos,ner,parse,sentiment'} ' http://localhost:3300/

I get 502 Bad Gateway. When I do docker logs corenlp_container, I see the following two exceptions:

First:

Second:

From this it seems that the stanfordnlp image/service is not running, so to test that image I make this request:

curl --header "Content-Type: application/json" --request POST --data '{'text': 'hello'} ' http://localhost:3400/

When I do docker logs stanfordnlp_container, I see this in the logs (no exception):

/usr/lib/python2.7/dist-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.

So it is not clear to me what is going wrong with the stanfordnlp service. Can you please advise?
So port 3300 is associated with the corenlp annotator, which is different from the stanfordnlp annotator on port 3400. They aren't calling each other.

The stanfordnlp annotator uses the pure-Python version (now called stanza), and looking at the logs, it seems to be working.

Underlying the corenlp annotator is the Java-based CoreNLP service. It can be confusing because we use the stanfordnlp (Python) wrapper to start and access the Java CoreNLP server. Looking at the stanfordnlp code (https://github.com/stanfordnlp/stanfordnlp/blob/f584a636a169097c8ac4d69fbeaee2c553b28c9c/stanfordnlp/server/client.py#L91), there is a 120-second timeout. I think what is happening here is that the Java-based CoreNLP server is taking too long to start. This can happen for many reasons: the machine could be underpowered to run all the docker containers simultaneously, or the docker container may not have enough CPU or RAM dedicated to it (you can modify allocated resources in Docker Desktop preferences).

As a last resort (I would not recommend this, but if you can't increase compute you might have to), you can also clone the stanfordnlp repo, modify the timeout to be much longer, say 1200, and change the dockerfile to use your modified copy.
I increased the resources, and now when I do docker logs corenlp_container | tail -f

However, when I run

Am I not doing the curl request correctly?
I think you should be using double quotes for the strings in the JSON payload (https://stackoverflow.com/questions/7172784/how-do-i-post-json-data-with-curl).
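If it helps, a quick way to get a correctly quoted payload is to let Python's json module build the string for you (a sketch; the fields mirror the curl command above):

```python
# json.dumps produces valid JSON with double-quoted keys and values
# (single quotes are valid Python syntax but not valid JSON).
import json

payload = json.dumps({"text": "hello", "annotators": "pos,ner,parse,sentiment"})
print(payload)  # {"text": "hello", "annotators": "pos,ner,parse,sentiment"}
```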
Thanks for suggesting the fix. It worked like a charm. Now I want to come back to the original topic of this issue: fixing tests.integration_tests.integration_base, where line 17, from bin.run_utils import setup_lambda, ASKInvocation, setup_logtofile, is failing. Can you suggest a quick fix/hack so I can run the tests? This is important/urgent for me because when I write new ResponseGenerators I need to debug them, and running the tests seems like the easiest way to do that (or is there a better way?).
Unfortunately there isn't a quick fix/hack for this. If you were to try and fix it, you would have to use agents.local_agent directly and replace the calls to ASKInvocation elsewhere in integration_base. I don't think I'll be able to fix it in the next few days.

Meanwhile, it is actually not necessary to run the integration tests to debug the response generators. At first, you could just run shell_chat and see if there are any particular errors. You could remove all the unnecessary response generators: chirpycardinal/agents/local_agent.py, line 200 (commit 6359578).
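Presumably that part of local_agent.py lists the response generators run each turn; a sketch of the idea with placeholder class names (not the repo's actual identifiers):

```python
# Sketch with placeholder classes: keep only the RGs you need while debugging your new one.
class LaunchResponseGenerator: ...   # stands in for the RG that handles the opening turn
class MyNewResponseGenerator: ...    # stands in for the RG you are developing

response_generator_classes = [
    LaunchResponseGenerator,
    MyNewResponseGenerator,
    # MoviesResponseGenerator,       # comment out whatever you don't need right now
    # MusicResponseGenerator,
]
```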
Eventually if there's a repeating sequence of user responses that you need to feed while you are debugging, you could provide them programmatically:
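A rough, self-contained sketch of that idea (the respond callable below is a stand-in for however shell_chat actually invokes the local agent, not the repo's real API):

```python
from typing import Callable, Iterable

def run_scripted_turns(respond: Callable[[str], str], turns: Iterable[str]) -> None:
    """Feed a fixed sequence of user utterances to the agent and print each reply."""
    for user_utterance in turns:
        bot_response = respond(user_utterance)
        print(f"USER: {user_utterance}")
        print(f"BOT:  {bot_response}")

# Example usage (hypothetical): pass whatever callable shell_chat uses to get a reply, e.g.
# run_scripted_turns(local_agent_respond, ["hello", "let's talk about movies", "i liked inception"])
```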
Hello
I am trying to run tests and running into this issue:
integration_base.py: Line 17
from bin.run_utils import setup_lambda, ASKInvocation, setup_logtofile
I cannot find the run_utils.py file in the repository.
Am I missing something here?