
Feature request: scrape sites, add to db, but don't download #17

Open
grantbarrett opened this issue Sep 20, 2020 · 0 comments

Since some book catalogs are very large, it would be a useful feature to be able to run the scraper to build the database for review, mark the books I want, and then run the script again to fetch only those. Perhaps that is beyond the purpose of this script, which seems aimed more at broad archival use. But if I see a public domain book in an edition I do not have, I want only that edition, not the others, which I already have. It would save a lot of unnecessary data transfer.
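
A rough sketch of what such a two-pass workflow might look like (hypothetical; the script's actual storage layer and download routine will differ): a first pass records only metadata in a small SQLite table, and a second pass fetches only the rows that were marked for download during review.

```python
import sqlite3

def init_db(path="catalog.db"):
    # Hypothetical schema: one row per catalog entry, with a user-set "marked" flag.
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS books (
        id INTEGER PRIMARY KEY,
        title TEXT,
        isbn TEXT,
        url TEXT,
        marked INTEGER DEFAULT 0,      -- set to 1 by the user during review
        downloaded INTEGER DEFAULT 0
    )""")
    return con

def record_metadata(con, records):
    # Pass 1: store scraped metadata only; nothing is downloaded here.
    con.executemany(
        "INSERT INTO books (title, isbn, url) VALUES (?, ?, ?)",
        [(r["title"], r["isbn"], r["url"]) for r in records],
    )
    con.commit()

def download_marked(con, fetch):
    # Pass 2: fetch only the entries the user marked; 'fetch' stands in for
    # whatever download routine the script already uses.
    for book_id, url in con.execute(
        "SELECT id, url FROM books WHERE marked = 1 AND downloaded = 0"
    ):
        fetch(url)
        con.execute("UPDATE books SET downloaded = 1 WHERE id = ?", (book_id,))
    con.commit()
```

Marking entries between the two runs could be as simple as flipping the flag in any SQLite browser, or through a small marking command added to the script.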

Alternatively, being able to specify a list of ISBNs, titles, or keywords before scraping would also reduce the total data transfer.
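
A minimal sketch of that kind of pre-scrape filter (again hypothetical, assuming catalog entries are available as dicts with "isbn" and "title" keys before anything is downloaded):

```python
def wanted(record, isbns=(), keywords=()):
    # Keep an entry if its ISBN is on the user's list, or if its title
    # contains any of the requested keywords (case-insensitive).
    if record.get("isbn") in isbns:
        return True
    title = (record.get("title") or "").lower()
    return any(kw.lower() in title for kw in keywords)

# Example usage with placeholder values: only matching entries would be
# passed on to the downloader.
# to_fetch = [r for r in scraped_records
#             if wanted(r, isbns={"9780000000000"}, keywords=["whitman"])]
```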

Thank you!
