Overarching design for new functional testing #38
Comments
Thanks @fcooper8472 -- that all sounds good to me. The one thing I'm less sure about is having a separate table for each test, since I see a lot of overlap between their outputs. That said, if having separate tables makes it easier to add tests that return quite different measures, then perhaps this is easiest.
Yep, there's a lot of overlap, e.g. the git commit hash, but at the moment everything's just shoved into a single table as extra lines, so you're not storing any extra data by having different tables. I would imagine having some kind of payload object, with all the overlapping fields filled in by the base class, to enforce some kind of extensible uniformity. Haven't thought through all the details yet, though.
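The payload idea above could be sketched as a small dataclass hierarchy: a base class supplies the fields every test shares, and each test subclasses it with its own columns. This is only an illustration; all class, field, and test names here (`TestResult`, `SamplingResult`, `ess`, etc.) are hypothetical, not part of the existing code.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class TestResult:
    """Fields shared by every functional test (hypothetical names)."""
    commit_hash: str
    seed: int
    run_date: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def as_row(self) -> dict:
        """Flatten to a dict suitable for writing as one CSV row."""
        return asdict(self)


@dataclass
class SamplingResult(TestResult):
    """Extra columns specific to a hypothetical MCMC sampling test."""
    ess: float = 0.0
    rhat: float = 0.0


# Shared and test-specific columns end up in one flat dict:
row = SamplingResult(commit_hash="abc1234", seed=42, ess=950.0, rhat=1.01).as_row()
print(sorted(row))
```

Because `as_row()` lives on the base class, every test's CSV automatically gets the common metadata columns without each test re-implementing them.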
Thanks @fcooper8472! It does sound very good! The only bit I'm unsure about is the first point:
Or are you thinking these will all be "settings" in the PINTS installation of the FT project?
(Incidentally, I wouldn't do away with storing the commit hash of FT. Still keep it as meta-data. But yeah, having it as a dual key system was probably overkill.)
It would greatly add to the value of FT, I agree, if it were something you could readily add to any project :D
Having thought about this some more, here is my next iteration of thoughts:
The main problem I'm having now is navigating the existing functional testing code. I just cannot understand how it's supposed to work. I think you'll have to give me a tutorial, @MichaelClerx.
Sounds really good, Fergz! I like the idea of it running via workflows.
…On Sun, Jan 31, 2021 at 4:39 PM Fergus Cooper ***@***.***> wrote:

Having thought about this some more, here is my next iteration of thoughts:

- Let's keep the tests separate from PINTS.
- Functional testing needs to be run from the PINTS repo, or there is no sane way to run it on just a push to master on PINTS. This would mean writing a functional-testing GitHub workflow on PINTS that just checks out functional-testing, with a token if necessary to push changes.
- I want to try and let GitHub run the functional tests (rather than running them on Skip). Each run can go for up to 6 hours on GH, and I think if we're not well inside that we're probably doing something wrong!
- Let's stop over-thinking how we store results. Instead of having a database that sits somewhere that we have to interact with, let's just keep a data directory in functional-testing and the results in plain-text csv files that get versioned with functional-testing. Each test will have a csv file with whatever columns make sense for that test. Common information between tests (commit hash, seed, etc.) can go in a main csv, and referencing between them can be via commit hash.
- Each run of functional testing, run from PINTS, will check out functional-testing, run the tests, add (currently 4) rows to the csv files, rebuild the hugo website, and push all the changes.

The main problem I'm having now is navigating the existing functional testing code. I just cannot understand how it's supposed to work. I think you'll have to give me a tutorial @MichaelClerx <https://github.com/MichaelClerx>.
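The PINTS-side workflow described in the bullets above might look something like the following. This is only a sketch: the repository path (`pints-team/functional-testing`), the secret name (`FT_PUSH_TOKEN`), the test-runner script, and the `data/` layout are all assumptions, and the hugo rebuild step is elided.

```yaml
# Hypothetical workflow in the PINTS repo; names and paths are placeholders.
name: functional-tests
on:
  push:
    branches: [master]
jobs:
  run-functional-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 360   # GitHub's per-job ceiling is 6 hours
    steps:
      - uses: actions/checkout@v4            # PINTS itself
      - uses: actions/checkout@v4
        with:
          repository: pints-team/functional-testing
          token: ${{ secrets.FT_PUSH_TOKEN }}  # needed to push results back
          path: functional-testing
      - run: pip install . ./functional-testing
      - run: python functional-testing/run_tests.py   # appends rows to the csv files
      - run: |
          cd functional-testing
          git config user.name "ft-bot"
          git config user.email "ft-bot@users.noreply.github.com"
          git add data/
          git commit -m "Add functional-testing results for ${GITHUB_SHA}"
          git push
```

The checkout token is the one moving part: the default `GITHUB_TOKEN` only grants access to the repo the workflow runs in, so pushing to a second repo needs a personal access token stored as a secret.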
Thanks Fergus! Re: the tour, happy to! But I find the current code needs some heavy reworking anyway, any time I touch it :D
I'm free after 11 today, or else we can do it tomorrow morning?
In defence of my original level of thinking: the point of the database was to be able to correlate test results longitudinally, so that statistical measures of tests that relied on some random input could be obtained. You could achieve that with a collection of CSVs keyed by git hash, though it'd be more work to use the git tree and CSV files to reconstruct the history.
The main problem I'm suggesting we try to solve is that we currently have a database that needs to exist somewhere. If we want to run a test on GitHub Actions, that database has to be somewhere we can read from and write to. And every physical machine that might want to run tests will need access to wherever the database is kept. We could stick the database in GitHub, but it will change every run and need to be stored in its entirety every time. So it seems like some plaintext format is the way to go: we would only be versioning the next set of results. I'm not very familiar with databases at all: what kind of operations are you thinking of that are better suited to a database than, say, csv + pandas?
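The csv + pandas approach floated here can be sketched directly: per-test CSVs carry their own columns, and shared metadata lives in a main CSV, joined via the commit hash. The file layout and column names below are invented for illustration (in-memory strings stand in for files in the data directory).

```python
import io

import pandas as pd

# Hypothetical stand-ins for a main metadata csv and one per-test csv.
main_csv = io.StringIO(
    "commit_hash,seed,run_date\n"
    "abc1234,42,2021-01-31\n"
    "def5678,43,2021-02-01\n"
)
test_csv = io.StringIO(
    "commit_hash,ess\n"
    "abc1234,950.0\n"
    "def5678,980.0\n"
)

main = pd.read_csv(main_csv)
test = pd.read_csv(test_csv)

# Join the shared metadata onto the test-specific results via the commit
# hash, exactly as a foreign key would work in a relational database...
merged = test.merge(main, on="commit_hash", how="left")

# ...after which longitudinal statistics are ordinary aggregation:
print(merged.sort_values("run_date")["ess"].mean())  # 965.0
```

The join-and-aggregate pattern covers the longitudinal correlation use case mentioned above, so for this workload the database mostly buys concurrent writes rather than extra query power.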
Like I say, you'll be able to do it either way, but if you want to correlate test results across runs then you'll have to have writable storage available between runs, whatever you're storing.
It doesn't even have to be a fancy database; it could be a google/MS spreadsheet(s)?
The problem isn't whether it's SQLite vs Excel vs Google Sheets; it's whether we can easily read from and write to whatever that source is. I'm suggesting that versioning the data with the functional-testing repo is the obvious (and simplest) solution. But I'm very much open to suggestions if there's a simple (& free) way of doing it another way.
Some thoughts, very much open to discussion. Most relevant to @MichaelClerx @martinjrobins @ben18785.
Anyone have any major thoughts/comments on this as a basic structure?