Provide trustworthy answers to questions about COVID-19 via NLP
Staging: https://covid-staging.deepset.ai/
Prod: https://covid.deepset.ai/
- People have many questions about COVID-19
- Answers are scattered on different websites
- Finding the right answers takes a lot of time
- Trustworthiness of answers is hard to judge
- Many answers become outdated quickly
- Aggregate FAQs and texts from trustworthy data sources (WHO, CDC ...)
- Provide a UI where people can ask questions
- Use NLP to match incoming user questions with meaningful answers
- Users can provide feedback about answers to improve the NLP model and flag outdated or wrong answers
- Display most common queries without good answers to guide data collection and model improvements
- Scrapers to collect data
- Elasticsearch to store texts, FAQs, embeddings
- NLP models implemented via Haystack to find answers by a) detecting similar questions in FAQs and b) detecting answers in free text (extractive QA) — see the pipeline sketch after this list
- NodeJS / koa / eggjs middleware
- React Frontend
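
A minimal sketch of the FAQ-matching path, assuming a Haystack 1.x-style API on top of Elasticsearch. The module paths, index name, and embedding model are illustrative and may differ from what this repo actually pins:

```python
# Sketch: FAQ matching with Haystack + Elasticsearch (Haystack 1.x-style API;
# index name and embedding model are placeholders, not this repo's config).
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import EmbeddingRetriever
from haystack.pipelines import FAQPipeline

# Elasticsearch stores the FAQ texts plus one embedding per FAQ question.
document_store = ElasticsearchDocumentStore(
    host="localhost",
    index="faq_covid",
    embedding_field="question_emb",
    embedding_dim=384,
    similarity="cosine",
)

# The retriever embeds incoming user questions and compares them to the
# stored FAQ-question embeddings.
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",  # example model
)

# FAQ documents: the question is the searchable content, the answer lives in meta.
document_store.write_documents([
    {
        "content": "How does COVID-19 spread?",
        "meta": {"answer": "Mainly via respiratory droplets ...", "source": "WHO"},
    },
])
document_store.update_embeddings(retriever)

# Match an incoming user question against the stored FAQ questions.
pipeline = FAQPipeline(retriever=retriever)
result = pipeline.run(
    query="How is the coronavirus transmitted?",
    params={"Retriever": {"top_k": 3}},
)
for answer in result["answers"]:
    print(answer.answer, answer.score)
```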
- Check out the demo app to get a basic idea
- Data: At the moment we are using a CSV with collected FAQs that gets ingested into Elasticsearch here (see the ingestion sketch below)
- Model: The NLP model to find answers is built via Haystack. It's configured and exposed via this API.
- Frontend/middleware: TODO
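
The ingestion step boils down to reading that CSV and bulk-indexing one document per row. A minimal sketch, where the file path, index name, and host are placeholders and the actual script in this repo may differ:

```python
# Sketch: ingest the collected FAQ CSV into Elasticsearch (paths/index are placeholders).
import pandas as pd
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

def ingest_faqs(csv_path: str, index: str = "faq_covid") -> None:
    """Read the collected FAQ CSV and bulk-index one document per row."""
    df = pd.read_csv(csv_path).fillna("")
    actions = (
        {"_index": index, "_source": row.to_dict()}
        for _, row in df.iterrows()
    )
    bulk(es, actions)

ingest_faqs("data/faqs.csv")
```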
This project is built by the community for the community. We really appreciate every kind of support! There's plenty of work on UX, Design, ML, Backend, Frontend, Middleware, Data collection ...
We are also happy if you just report bugs, add documentation or flag useful/inappropriate answers returned by the model.
Some next TODOs we see:
- Integrate more data sources via scrapers that return a CSV with the fields: question, answer, answer_html, link, name, source, category, country, region, city, lang, last_update (see the scraper sketch after this list)
- Handling of special non-FAQ questions via other APIs (e.g. “How many infections in Berlin?”)
- Improve API to foster external integrations (e.g. Chatsystems)
- Logging & storage to foster analysis of common queries with bad results
- Support other languages (data collection)
- English evaluation dataset & pipeline to benchmark models
- Benchmark baseline models
- Improve NLP models for FAQ matching (better embeddings, e.g. Sentence-BERT trained on the Quora duplicate questions dataset; see the embedding sketch after this list)
- Add extractive QA Models
- Support other languages (models)
- Tune Elasticsearch + Embedding models
- Integrate user feedback mechanism for answers (flag as "correct", "not matching my question", "outdated", "fake news")
- Tab to explore common queries and those with bad answers
- Logos / icons
- Intuitive displaying of search results
- UX for adding/reviewing data sources by the crowd
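
For the scraper TODO above, a new data source only needs to emit a CSV with the agreed fields. A hypothetical sketch of such a scraper; the URL, CSS selectors, metadata values, and output path are all placeholders:

```python
# Hypothetical scraper skeleton: fetch an FAQ page and emit a CSV with the
# fields expected by the ingestion step (URL and selectors are placeholders).
import csv
from datetime import date

import requests
from bs4 import BeautifulSoup

FIELDS = ["question", "answer", "answer_html", "link", "name", "source",
          "category", "country", "region", "city", "lang", "last_update"]

def scrape_example_faq(url: str = "https://www.example.org/covid-faq") -> list:
    """Parse question/answer pairs from a (hypothetical) FAQ page."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    rows = []
    for item in soup.select(".faq-item"):  # placeholder selector
        answer = item.select_one(".faq-answer")
        rows.append({
            "question": item.select_one(".faq-question").get_text(strip=True),
            "answer": answer.get_text(strip=True),
            "answer_html": str(answer),
            "link": url,
            "name": "Example FAQ",
            "source": "example.org",
            "category": "general",
            "country": "DE",
            "region": "",
            "city": "",
            "lang": "en",
            "last_update": date.today().isoformat(),
        })
    return rows

def write_csv(rows: list, path: str = "data/scraper_example.csv") -> None:
    """Write the scraped rows in the shared CSV schema."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

write_csv(scrape_example_faq())
```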
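For the embedding TODO, swapping in a Sentence-BERT model trained on duplicate questions mostly means encoding user queries and FAQ questions with the same model and ranking by cosine similarity. A minimal sketch; the checkpoint name is only an example, and whichever candidate benchmarks best should be used:

```python
# Sketch: rank FAQ questions against a user query with a Sentence-BERT model
# (the checkpoint is an example trained on Quora duplicate questions).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/quora-distilbert-multilingual")

faq_questions = [
    "How does COVID-19 spread?",
    "Should I wear a mask outdoors?",
]
query = "How is the coronavirus transmitted?"

faq_emb = model.encode(faq_questions, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every FAQ question.
scores = util.cos_sim(query_emb, faq_emb)[0]
best = scores.argmax().item()
print(faq_questions[best], float(scores[best]))
```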