Add verdi profile setup
#6023
Conversation
(force-pushed from 0f4fe2b to 3f8392d)
aiida/storage/sqlite_dos/backend.py (outdated)

    self.connection.commit()  # pylint: disable=no-member
    ...
    class SqliteDosStorage(PsqlDosBackend):
I'm sceptical that here and in `SqliteDosMigrator` you can simply inherit from the PostgreSQL implementation and it will all just work, particularly for migration, where there are differences in data types (such as JSON rather than JSONB). You can also see methods like `delete_nodes_and_connections`, which specifically use the `psql_dos` ORM models. Obviously testing would be required.
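The JSON/JSONB concern can be illustrated with a minimal SQLAlchemy sketch. The table and column names below are made up for the example and are not the actual aiida-core models:

```python
from sqlalchemy import JSON, Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import JSONB

# The psql_dos models can use PostgreSQL's binary JSONB type, while SQLite
# only has the generic JSON type, so models inherited unchanged from the
# PostgreSQL backend may declare column types the SQLite dialect cannot honour.
metadata = MetaData()

psql_node = Table(
    'db_dbnode_psql', metadata,
    Column('id', Integer, primary_key=True),
    Column('attributes', JSONB),  # binary JSON, PostgreSQL-specific
)

sqlite_node = Table(
    'db_dbnode_sqlite', metadata,
    Column('id', Integer, primary_key=True),
    Column('attributes', JSON),  # generic JSON, works on SQLite
)
```

Because `JSONB` is PostgreSQL-specific, a migration or model inherited verbatim from the `psql_dos` backend would need its column types swapped before it could run against SQLite.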
I would like to say that this PR is a milestone for improving usability. Looking forward to it.
It's important to note, and document, though, that this should probably […]
I agree with @chrisjsewell. As he rightly pointed out, there are a lot of shortcuts in the current implementation that have to be checked, and it is not certain that it supports all use cases of AiiDA yet. I think this will never be a real solution for production. Nevertheless, it may be fine for lots of use cases that are low-throughput and could do with a single daemon worker. For those cases it may work and be sufficient. But this needs to be tested and developed a lot more, for which I just haven't had the time yet.
(force-pushed from 7f34e08 to 10b977a)
`SqliteDosStorage` backend and `verdi profile setup`
I see you are adding pydantic here now. I'm not necessarily against that, it's a nice package, but I think we would need to put in some thought as to whether we are happy to introduce a new dependency, given (a) the extra maintenance that can require, plus (b) the potential for incompatibilities with plugins and user code. This is a good example of dependency implications, since I see you are pinning to pydantic v1, yet I know v2 has just been released (rewritten in Rust, my new favourite language). So perhaps you need to update the version here, but then that will make it currently incompatible with aiida-restapi. Also, if we are introducing pydantic, then there are obviously a lot more places it could potentially be used, e.g. in place of jsonschema validation, and possibly places where we have started to use dataclasses (which in turn are replacing attribute dicts).
(force-pushed from 10b977a to 1cba486)
Sure, I definitely think we should discuss it. The main reason I chose it is because I felt the custom solution of the […]

I am aware of v2 and originally actually wrote it against that version, but I noticed that downstream dependencies break. Not just the […]

Agree. One big example that I would want to change is the config options, which also use a custom solution and would be way better off using something like pydantic. But the change has to start somewhere of course, so if we agree that […]
(force-pushed from d12ccef to d4afaa2)
Quick note re the […]: I was playing around with the new […]
Indeed, I would suggest it might be ideal to wait a little, to see if pydantic v2 can be used straight away. Otherwise it just ends up on the backlog of outdated dependencies, like sqlalchemy.
Well, this currently uses jsonschema, so it won't be too much of a change, since they are of course both solutions to the same problem of data validation. I suggest you make a separate PR introducing pydantic by replacing all uses of jsonschema, thus making this a "one-in, one-out" dependency change.
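As a sketch of what such a one-in, one-out swap could look like: the same required-field constraint expressed as a jsonschema document and as a pydantic model. The field names are illustrative, not the actual aiida-core schema, and the jsonschema dict is shown only as data for comparison:

```python
from pydantic import BaseModel, ValidationError

# The jsonschema-style document this kind of validation currently uses.
JSONSCHEMA_STYLE = {
    'type': 'object',
    'properties': {'filepath': {'type': 'string'}},
    'required': ['filepath'],
}

class StorageOptions(BaseModel):
    """Pydantic equivalent of the jsonschema document above."""
    filepath: str

# Validation errors are raised at construction time instead of via a
# separate validate() call against the schema document.
try:
    StorageOptions()  # missing required field
except ValidationError as exc:
    print('missing:', exc.errors()[0]['loc'])
```

The pydantic version also yields typed attribute access (`options.filepath`) for free, where jsonschema only validates a plain dict.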
I think […]
Good point, this is still missing, but they can easily be added, just like the user options already are. I just hadn't added the broker options yet. Will add them soon. I am just struggling with the tests mysteriously failing only on Python 3.9 and only when I import […]
Fair enough. Will have a stab at this later.
Indeed we will! It's on our to-do list (@janosh, @munrojm, @tschaume). If it's not done "soon", ping me and I can take a stab at it, although I'm not the primary maintainer.
(force-pushed from 129b405 to 99da950)
(force-pushed from 863cdf7 to e34613f)
(force-pushed from 8eeee50 to aa3904f)
(force-pushed from 9657ee4 to 40be8e4)
With #6117 now merged, this PR is unblocked and ready to go. Any takers for a review? @unkcpz @mbercx @edan-bainglass @superstar54
This method takes a name and storage backend class, along with a dictionary of configuration parameters, and creates a profile for it, initialising the storage backend. If successful, the profile is added to the config and the config is saved to disk. It is the `Config` class that defines the "structure" of a profile configuration, so it should be this class that takes care of generating it. The storage configuration is the exception, since there are multiple options for it, where the `StorageBackend` plugin defines the structure of the required configuration dictionary. This method will make it possible to remove all the places in the code where a new profile and its configuration dictionary are built up manually.
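A heavily simplified sketch of the flow just described. Everything here besides the method name `create_profile` is a hypothetical stand-in, not the real aiida-core API:

```python
class StorageBackend:
    """Stand-in for a storage plugin class."""

    @classmethod
    def initialise(cls, config: dict) -> dict:
        # A real backend would create its database tables or files here;
        # this stub just echoes the validated configuration.
        return dict(config)

class Config:
    """Stand-in for the class owning the profile configuration structure."""

    def __init__(self):
        self.profiles = {}

    def create_profile(self, name: str, storage_cls, storage_config: dict) -> dict:
        """Build the profile, initialise its storage, then persist it."""
        storage = storage_cls.initialise(storage_config)
        profile = {'name': name, 'storage': storage}
        self.profiles[name] = profile  # stand-in for saving the config to disk
        return profile

config = Config()
profile = config.create_profile('demo', StorageBackend, {'filepath': '/tmp/demo'})
```

The point of centralising this in `Config` is that callers pass only the storage-specific dictionary, while the profile structure itself is built in exactly one place.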
This command uses the `DynamicEntryPointCommandGroup` to allow creating a new profile with any of the plugins registered in the `aiida.storage` group. Each storage plugin will typically require a different set of configuration parameters to initialise and connect to the storage. These are generated dynamically from the specification returned by the method `get_cli_options` defined on the `StorageBackend` base class. Each plugin implements the abstract `_get_cli_options` method, which is called by the former and defines the configuration parameters of the plugin. The values passed to the plugin-specific options are used to instantiate an instance of the storage class registered under the chosen entry point, which is then initialised. If successful, the new profile is stored in the `Config` and a default user is created and stored. After that, the profile is ready for use.
The `DynamicEntryPointCommandGroup` depends on the entry point classes implementing the `get_cli_options` method to return a dictionary with a specification of the options to create. The schema of this dictionary was a custom, ad-hoc solution for this purpose. Here we switch to using pydantic's `BaseModel` to define the `Config` class attribute, which defines the schema for the configuration necessary to construct an instance of the entry point class.
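A hypothetical sketch of that change: the entry-point class exposes its configuration schema as a nested pydantic model instead of an ad-hoc dictionary. Class and attribute names are illustrative, not aiida-core's actual API:

```python
from pydantic import BaseModel

class SqliteDosStorage:
    """Stand-in entry-point class carrying its config schema as a model."""

    class Model(BaseModel):
        """Schema of the configuration needed to construct the storage."""
        filepath: str
        compression: int = 6

    def __init__(self, **config):
        # Validation and default-filling now come for free from pydantic,
        # replacing the hand-rolled schema checks.
        self.config = self.Model(**config)

storage = SqliteDosStorage(filepath='/tmp/demo.sqlite')
```

The command group can then introspect the model's fields to generate one CLI option per configuration parameter, instead of parsing a bespoke dictionary format.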
(force-pushed from 40be8e4 to d68edcb)
The `verdi profile setup` command is based on the `DynamicEntryPointCommandGroup`, which is already used for `verdi code create`, to dynamically create a subcommand for each registered storage entry point. It automatically generates CLI options to allow users to specify the configuration parameters necessary to initialise the storage.

Edit: Removing the commit that adds the `SqliteDosStorage` for now, as it is not required for this PR and needs more work.

The `SqliteDosStorage` is a variant of the default `PsqlDosBackend`, swapping the PostgreSQL database for an SQLite one. This gets rid of the server requirement, as SQLite can run off a file on disk, simplifying the setup significantly. This makes it a perfect storage for use in demos, tutorials and quick temporary work.

There was already the `SqliteTempStorage`, but that is a fully in-memory storage which is not persisted across Python interpreters, so it has limited usability. The `SqliteZipStorage` is read-only and so also won't be useful for work that requires storing new data.

The `verdi profile setup` command is added to make it easy to create a new profile. With a single command, a fully functioning profile can be set up ready for use:
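The command example that followed was lost in extraction; a hypothetical invocation might look like the sketch below, where the entry point name and option flags are assumptions that vary per storage plugin and AiiDA version:

```shell
# Hypothetical sketch: subcommand and options depend on the chosen storage plugin.
verdi profile setup core.sqlite_dos \
    --profile-name demo \
    --email demo@example.com
```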