fix: add pagination to 'submit all' to process all datasets #252
Conversation
for page in range(0, num_pages):
    paged_response = tk.get_action('package_search')({'ignore_auth': True}, arguments)
    package_list.extend([pkg['id'] for pkg in paged_response['results']])
Suggestion: Since the whole purpose of pagination is to handle arbitrarily large numbers of datasets, perhaps it would be better to process each batch of 1000 before retrieving the next, rather than assembling a package ID list of arbitrarily large size?
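A minimal sketch of that suggestion, assuming the command class and the _submit_package helper already shown in the diff (the submit_all_paged name and PAGE_SIZE constant are illustrative, not part of the PR):

from ckan.plugins import toolkit as tk

PAGE_SIZE = 1000

def submit_all_paged(self, user, sync=False, queue=None):
    # Submit each page of results as soon as it arrives, instead of
    # accumulating every package ID into one arbitrarily large list.
    start = 0
    while True:
        response = tk.get_action('package_search')(
            {'ignore_auth': True}, {'rows': PAGE_SIZE, 'start': start})
        results = response['results']
        if not results:
            break
        for pkg in results:
            self._submit_package(pkg['id'], user, indent=2,
                                 sync=sync, queue=queue)
        start += PAGE_SIZE

This keeps memory usage bounded by the page size, since each batch of IDs can be garbage-collected before the next page is fetched.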
Building on @ThrawnCA's input: you could first get the total number of packages in the system (without fetching all the data), then ask the user whether to proceed.
If yes, then do the pagination and submit the jobs page by page, so memory can be discarded/released as you go. This becomes essential when you have 100,000+ datasets and are running in small containers.
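A rough sketch of that count-first idea, assuming the same toolkit import as the diff (the prompt wording is illustrative): passing rows=0 to package_search returns only the match count, so no dataset payloads are loaded just to size the job.

from ckan.plugins import toolkit as tk

# rows=0 returns the total match count with an empty results list,
# so we learn how many datasets exist without fetching any of them.
count = tk.get_action('package_search')(
    {'ignore_auth': True}, {'rows': 0})['count']

check_start = input(
    'About to submit {} datasets; this could take a while.\n'
    'Did you want to start the process? y/N\n'.format(count))
if check_start.strip().lower() == 'y':
    pass  # paginate and submit page by page, as in the sketch above
else:
    print('Submit all process stopped')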
It's a good contribution, but it does require spelling corrections and a slight change in logic.
Overall, a good PR :)
{'ignore_auth': True}, {})
for p_id in package_list:
    self._submit_package(p_id, user, indent=2, sync=sync, queue=queue)
check_start = input('This action could take a few minuts depending on the number of DataSets:\nDid you want to start the process? y/N\n')
minuts/minutes
    for p_id in package_list:
        self._submit_package(p_id, user, indent=2, sync=sync, queue=queue)
else:
    print('Submit all process stoped')
stoped/stopped
This PR adds a loop to paginate through all datasets in batches of 1000 using package_search, so that all datasets are submitted to the xloader, not just the first 1000.