Summary
Several of the ChromaDB-related tests are failing.
Environment
Running the Tests
On the `develop` branch I followed the setup instructions in DEVELOPMENT.md. To run the test suite I tried two approaches:
Using `poetry`:

```shell
poetry shell  # explicitly enter the poetry shell
python -m pytest tests --cov
```

Both resulted in failures for the functions in test_chroma_store.py.
Test Failure Details

test_add_texts

```
>       assert data["documents"] == input_texts
E       AssertionError: assert ['foo', 'dummy', 'a', 'b'] == ['a', 'b']
E         At index 0 diff: 'foo' != 'a'
E         Left contains 2 more items, first extra item: 'a'
E         Full diff:
E         - ['a', 'b']
E         + ['foo', 'dummy', 'a', 'b']
```

test_list_collection_names

```
>       assert {"test", "test1"} == set(store.list_collection_names())
E       AssertionError: assert {'test', 'test1'} == {'test', 'test1', 'test_new'}
E         Extra items in the right set:
E         'test_new'
E         Full diff:
E         - {'test', 'test_new', 'test1'}
E         + {'test', 'test1'}
```
As far as I can tell, these issues arise from DB state persisting across tests, for example due to `store._client = local_persist_api` here. Because the Chroma store persists, operations carried out in one test affect the store's state in other tests, producing the errors above.
Potential Solution / Discussion
Whilst persisting state is an important feature to test, I would suggest that for initial unit tests you probably do not want this behaviour, as it couples your tests, resulting in different outcomes depending on the order of execution.
I would propose splitting this module into two sets of tests:

1. Isolated, singular unit tests that are independent, can be run in isolation, and have a "clean" DB to interact with at the start of each test.
2. Higher-level, integration-style tests that check how multiple, consecutive operations behave, i.e. where the order of operations is considered.
(1) Isolated, independent unit-tests
For the unit-test suite (1) you could implement a "tear-down" function (i.e. a `pytest` fixture) which would run after any functions that interact with or modify the database. This function could reset the state of the DB to ensure consistent behaviour.

Alternatively, you could have a "spin-up" function (again, a `pytest` fixture) which would spin up a new DB for each function. This has a resource overhead attached, but it would ensure consistent behaviour and keep the tests independent.

(2) Integration tests
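A sketch of what such a fixture could look like, using a hypothetical in-memory stand-in for the Chroma store (the `FakeStore` class and its methods are illustrative, not the real API):

```python
import pytest

class FakeStore:
    """In-memory stand-in for the Chroma store; illustrative only."""
    def __init__(self):
        self.collections = {}

    def create_collection(self, name):
        self.collections[name] = []

    def list_collection_names(self):
        return list(self.collections)

    def reset(self):
        self.collections.clear()

@pytest.fixture
def store():
    # Spin-up: each test receives a brand-new, empty store,
    # so no state can leak from one test to the next.
    s = FakeStore()
    yield s
    # Tear-down: reset after the test (redundant for an in-memory
    # store, but this is where a persistent DB would be wiped).
    s.reset()

def test_starts_clean(store):
    assert store.list_collection_names() == []
```

With either the spin-up or tear-down hook in place, each test sees a clean DB regardless of what ran before it.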
"Integration" might not be quite the right term here, but the key point is that these tests operate at a higher level, exercising several unit operations together.
They would be more in line with actual system usage: e.g. create a DB, update a record, delete a record, and finally fetch a record.
They wouldn't be exhaustive, but they could cover some of the expected usage and would let you test common sequences of operations.
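An integration-style test along those lines might look like the following (sketched against a hypothetical minimal record store, not the real Chroma interface):

```python
class RecordDB:
    """Minimal in-memory record store; illustrative only."""
    def __init__(self):
        self.records = {}

    def create(self, key, value):
        self.records[key] = value

    def update(self, key, value):
        if key not in self.records:
            raise KeyError(key)
        self.records[key] = value

    def delete(self, key):
        del self.records[key]

    def fetch(self, key):
        return self.records.get(key)

def test_record_lifecycle():
    # Exercise a realistic sequence: create, update, fetch, delete, fetch.
    db = RecordDB()
    db.create("doc1", "first draft")
    db.update("doc1", "second draft")
    assert db.fetch("doc1") == "second draft"
    db.delete("doc1")
    assert db.fetch("doc1") is None

test_record_lifecycle()
```

Because the whole sequence runs inside one test against its own DB instance, order of operations is tested deliberately rather than leaking in by accident.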