A Python package to effortlessly download all images from a given URL. Automate image scraping and save them locally in just a few lines of code.
Install the package from PyPI using pip:
pip install LpImagesDownloader
Here’s how you can use the package to download images:
from LpImagesDownloader import download_images
# Download images from a webpage, scrolling 3 times to load dynamic content
download_images("https://en.wikipedia.org/wiki/India", 3)
Example output:
Setting Up Environment...
Running Operations in the background. You will get the results shortly...
Scrolling Page 1...
Scrolling Page 2...
Scrolling Page 3...
Total detected images on page: 176
Downloading 1.jpg...
Downloading 2.jpg...
...
Total Images Downloaded: 176
You can view the saved images at: Saved Images/India Wikipedia
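Once the run finishes, the downloaded files can be inspected like any other local files. A quick check from Python might look like this (the folder path is just the example from the output above, not a fixed location):

```python
# List the files saved by the example run above; the path is illustrative.
from pathlib import Path

save_dir = Path("Saved Images/India Wikipedia")
for image_file in sorted(save_dir.iterdir()):
    print(image_file.name)
```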
- Dynamic Content Handling: Automatically scrolls through the page to load images that only appear after scrolling.
- URL Validation: Checks that each image URL is valid before downloading.
- Customizable Save Locations: Downloaded images are automatically organized into folders named after the page title (see the illustrative sketch after this list).
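For readers curious how such a workflow fits together, here is a minimal, illustrative sketch using requests and BeautifulSoup. It is not the package's actual implementation: it skips the headless-browser scrolling that LpImagesDownloader performs, and the function name, folder layout, and validation rules below are assumptions made for illustration only.

```python
# Illustrative only: a rough approximation of the workflow described above.
# The real package also scrolls the page in a browser to load dynamic content.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def download_static_images(url: str, base_dir: str = "Saved Images") -> None:
    page = requests.get(url, timeout=10)
    page.raise_for_status()
    soup = BeautifulSoup(page.text, "html.parser")

    # Organize downloads into a folder named after the page title.
    title = (soup.title.string or "page").strip() if soup.title else "page"
    safe_title = "".join(c for c in title if c.isalnum() or c in " -_") or "page"
    save_dir = os.path.join(base_dir, safe_title)
    os.makedirs(save_dir, exist_ok=True)

    count = 0
    for img in soup.find_all("img", src=True):
        img_url = urljoin(url, img["src"])
        try:
            resp = requests.get(img_url, timeout=10)
        except requests.RequestException:
            continue
        # Basic validation: skip failed requests and non-image responses.
        if resp.status_code != 200 or not resp.headers.get("Content-Type", "").startswith("image/"):
            continue
        count += 1
        ext = os.path.splitext(img_url.split("?")[0])[1] or ".jpg"
        with open(os.path.join(save_dir, f"{count}{ext}"), "wb") as f:
            f.write(resp.content)

    print(f"Total Images Downloaded: {count}")
```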
Created and maintained by @LpCodes.
This project is licensed under the MIT License. Feel free to use and modify it as needed.
Contributions are welcome! Here's how you can contribute:
- Fork the repository.
- Create a new branch:
git checkout -b feature-name
- Make your changes and commit them:
git commit -m 'Add feature-name'
- Push to your branch:
git push origin feature-name
- Open a pull request and describe your changes.
Have ideas to improve the package or documentation? Open an issue on the GitHub repository.
- Bug Tracker: Report Issues
- Source Code: GitHub Repository
Thank you for using LP Images Downloader! Your feedback helps make this project better. 😊