
Enable automatic benchmarking #3

Open · geraltofrivia opened this issue Jun 19, 2018 · 3 comments

@geraltofrivia (Member)
Is your feature request related to a problem? Please describe.
We need a script that lets a QA system be plugged in seamlessly and benchmarks its results over LC-QuAD.

Describe the solution you'd like

  • Require participating systems to expose an API that returns answers or SPARQL queries given a question (a sketch of a benchmarking script against such an API follows this list).
  • Use an overarching platform like Project Hobbit, but this requires dockerized systems and other added constraints.
  • What else?
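
A minimal sketch of what such a benchmarking script could look like, assuming the system under test exposes an HTTP endpoint that takes a question and returns a SPARQL query. The endpoint URL, the request/response shape, and the exact-match metric are placeholders, not an agreed interface; the LC-QuAD field names follow the published v1.0 JSON format.

```python
import json
import requests

# Hypothetical QA endpoint and dataset path; both would depend on the system being benchmarked.
QA_ENDPOINT = "http://localhost:8080/answer"
LCQUAD_FILE = "resources/lcquad_test.json"


def evaluate(endpoint: str, dataset_path: str) -> float:
    """Query a QA system for every LC-QuAD question and report exact-match accuracy
    on the gold SPARQL query (one possible metric; F1 over answer sets is another)."""
    with open(dataset_path) as f:
        dataset = json.load(f)

    correct = 0
    for item in dataset:
        question = item["corrected_question"]
        gold_sparql = item["sparql_query"]

        # Assumed request/response shape: {"question": ...} in, {"sparql": ...} out.
        response = requests.post(endpoint, json={"question": question}, timeout=30)
        response.raise_for_status()
        predicted_sparql = response.json().get("sparql", "")

        if predicted_sparql.strip() == gold_sparql.strip():
            correct += 1

    return correct / len(dataset)


if __name__ == "__main__":
    print(f"Exact-match accuracy: {evaluate(QA_ENDPOINT, LCQUAD_FILE):.3f}")
```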
@geraltofrivia (Member, Author) commented Jun 19, 2018

Requesting suggestions from @mohnish-rygbee @debayan @saist1993

@RicardoUsbeck

Hi, have you made progress on this point, or are you still looking for suggestions?

@RicardoUsbeck

I will just leave a link here in case I forget this topic: https://github.com/dice-group/TextComparisonEvaluationTool/blob/43c263d905f8b05071c6034d484f9fe199ba2e8a/src/main/java/Engines/Run/Pipeline.java#L210. The source code at this link can be reused to call GERBIL/GERBILQA automatically from code. That is, the entire evaluation of a system (provided it exposes a web service) is done via the GERBIL platform; the results are archived there, and you can also retrieve them as JSON-LD together with a citable URI.
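
A rough sketch of how that call pattern might look from Python rather than Java. The instance URL, the `execute` and `experiment` endpoints, the `experimentData` parameter, and the payload fields are assumptions modelled on how the linked Pipeline.java drives GERBIL, not a documented public API, so they should be verified against the actual GERBIL deployment before use.

```python
import json
import requests

# Assumed GERBIL QA deployment; endpoint and parameter names below are placeholders
# mirroring the linked Java pipeline's usage, not a documented public API.
GERBIL_BASE = "http://gerbil-qa.aksw.org/gerbil"


def submit_experiment(system_name: str, system_url: str) -> str:
    """Register a QA web service with GERBIL and return the experiment id it assigns."""
    # Placeholder experiment configuration; field names and values must be checked
    # against what the target GERBIL instance's web form actually submits.
    experiment_data = {
        "type": "QA",
        "annotator": [f"NIFWS_{system_name}({system_url})"],
        "dataset": ["LC-QuAD"],
    }
    response = requests.get(
        f"{GERBIL_BASE}/execute",
        params={"experimentData": json.dumps(experiment_data)},
        timeout=60,
    )
    response.raise_for_status()
    return response.text.strip()


def fetch_results_jsonld(experiment_id: str) -> dict:
    """Retrieve the archived results of an experiment as JSON-LD via its citable URI."""
    response = requests.get(
        f"{GERBIL_BASE}/experiment",
        params={"id": experiment_id},
        headers={"Accept": "application/ld+json"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()
```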
