
Conversation

dsanchezmatilla

This PR adds a loop to paginate through all datasets in batches of 1000 using package_search, so that all datasets are submitted to the xloader, not just the first 1000.


for page in range(0, num_pages):
    paged_response = tk.get_action('package_search')({'ignore_auth': True}, arguments)
    package_list.extend([pkg['id'] for pkg in paged_response['results']])
Collaborator


Suggestion: Since the whole purpose of pagination is to handle arbitrarily large numbers of datasets, perhaps it would be better to process each batch of 1000 before retrieving the next, rather than assembling a package ID list of arbitrarily large size?
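
For illustration only, a minimal sketch of what that batch-by-batch approach could look like, assuming CKAN's documented rows/start parameters for package_search and the _submit_package helper already used in this PR; the method name _submit_all and the loop structure are assumptions, not the actual patch:

import ckan.plugins.toolkit as tk

BATCH_SIZE = 1000

def _submit_all(self, user, sync, queue):
    start = 0
    while True:
        # Fetch one page of results at a time.
        response = tk.get_action('package_search')(
            {'ignore_auth': True}, {'rows': BATCH_SIZE, 'start': start})
        results = response['results']
        if not results:
            break
        # Submit this batch before retrieving the next, so no ID list of
        # arbitrary size is ever held in memory.
        for pkg in results:
            self._submit_package(pkg['id'], user, indent=2, sync=sync, queue=queue)
        start += BATCH_SIZE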

Collaborator


Building on @ThrawnCA's input: you could get the total number of packages in the system (without fetching all the data), then ask the question.

If yes, then do the pagination and submit the jobs batch by batch so you can discard/release memory. This becomes essential when you have 100,000+ datasets and are working on small containers.
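
For illustration, a minimal sketch of that count-first approach, assuming tk is ckan.plugins.toolkit as in the snippets above; passing rows=0 to package_search returns the total match count without returning any dataset records:

# Ask for the total only; no dataset records are fetched.
total = tk.get_action('package_search')({'ignore_auth': True}, {'rows': 0})['count']

check_start = input(
    'This action could take a few minutes for %d datasets.\n'
    'Do you want to start the process? y/N\n' % total)
if check_start.strip().lower() == 'y':
    # Paginate with 'start'/'rows' and submit each batch before fetching
    # the next, so each page can be released from memory.
    ...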

Collaborator

@duttonw left a comment


It's a good contribution, but it does require spelling corrections and a slight change in logic.

Overall, a good PR :)

{'ignore_auth': True}, {})
for p_id in package_list:
    self._submit_package(p_id, user, indent=2, sync=sync, queue=queue)
check_start = input('This action could take a few minuts depending on the number of DataSets:\nDid you want to start the process? y/N\n')
Collaborator


minuts/minutes

for p_id in package_list:
    self._submit_package(p_id, user, indent=2, sync=sync, queue=queue)
else:
    print('Submit all process stoped')
Collaborator


stoped/stopped



@dsanchezmatilla
Author

@duttonw and @ThrawnCA , thanks for the advice about possible memory management issues with my code when working with large numbers of datasets. I'm currently working on a solid fix for the problem.
