




Web-scraping-Using-Python


1. Web Scraping

Web scraping is an automatic method of obtaining large amounts of data from websites. Most of this data is unstructured HTML, which is then converted into structured data in a spreadsheet or a database so that it can be used in various applications. There are many ways to perform web scraping, including using online services, dedicated APIs, or writing your own scraping code from scratch. Many large websites, such as Google, Twitter, Facebook, and StackOverflow, provide APIs that let you access their data in a structured format. That is the best option when it is available, but many sites either do not expose large amounts of data in a structured form or do not offer an API at all. In those situations, web scraping is the practical way to collect the data.

2. How Do Web Scrapers Work?

Web scrapers can extract either all the data on a particular site or only the specific data a user wants. Ideally, you should specify exactly which data you need so that the scraper extracts only that data, and does so quickly. For example, you might want to scrape an Amazon page for the types of juicers available, but only want the data about the different juicer models and not the customer reviews.

When a web scraper needs to scrape a site, it is first given the URLs to visit. It then loads all the HTML code for those pages; a more advanced scraper might extract the CSS and JavaScript elements as well. The scraper then pulls the required data out of this HTML and outputs it in the format specified by the user. Most often this is an Excel spreadsheet or a CSV file, but the data can also be saved in other formats, such as JSON.
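As an illustration of this workflow, here is a minimal sketch using the requests and beautifulsoup4 libraries listed in the pip section below. The URL, the CSS selectors, and the output file name are hypothetical placeholders, not taken from any real site.

import csv

import requests
from bs4 import BeautifulSoup

# Hypothetical target page and selectors - replace them with the site you actually scrape.
URL = "https://example.com/products"

# Step 1: provide the URL and download the raw HTML.
response = requests.get(URL, timeout=10)
response.raise_for_status()

# Step 2: parse the HTML so individual elements can be located.
soup = BeautifulSoup(response.text, "lxml")

# Step 3: extract only the data we need (here, assumed product names and prices).
rows = []
for item in soup.select("div.product"):
    name = item.select_one("h2")
    price = item.select_one("span.price")
    if name and price:
        rows.append([name.get_text(strip=True), price.get_text(strip=True)])

# Step 4: output the structured data, in this case as a CSV file.
with open("products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "price"])
    writer.writerows(rows)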

3. Different Types of Web Scrapers

Web scrapers can be classified by several different criteria: self-built vs. pre-built, browser extension vs. standalone software, and cloud-based vs. local.

Common applications of web scraping include the following (a price-monitoring sketch follows the list):

  1. Price Monitoring
  2. Market Research
  3. News Monitoring
  4. Sentiment Analysis
  5. Email Marketing
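For example, price monitoring can be as simple as fetching a product page on a schedule and comparing the extracted price with a threshold. The sketch below assumes a hypothetical product URL, CSS selector, and target price:

import requests
from bs4 import BeautifulSoup

# Hypothetical product URL, CSS selector, and alert threshold - adjust for a real site.
PRODUCT_URL = "https://example.com/product/123"
PRICE_SELECTOR = "span.price"
TARGET_PRICE = 49.99

response = requests.get(PRODUCT_URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "lxml")
price_tag = soup.select_one(PRICE_SELECTOR)

if price_tag is not None:
    # Strip currency symbols and thousands separators before converting the text to a number.
    price = float(price_tag.get_text(strip=True).lstrip("$").replace(",", ""))
    if price <= TARGET_PRICE:
        print(f"Price dropped to {price} - time to buy!")
    else:
        print(f"Current price is {price}, still above the target.")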

Reference Documents

PyPI beautifulsoup4 - https://pypi.org/project/beautifulsoup4/

Beautiful Soup Documentation - https://www.crummy.com/software/BeautifulSoup/bs4/doc/

Web scraping (Wikipedia) - https://en.wikipedia.org/wiki/Web_scraping

Web scraping (GeeksforGeeks) - https://www.geeksforgeeks.org/what-is-web-scraping-and-how-to-use-it/

Example web scraping YouTube video - https://www.youtube.com/watch?v=ng2o98k983k

pip

pip install beautifulsoup4

pip install requests

pip install lxml
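After installing the three packages, a quick sanity check (assuming nothing beyond the installs above) is to import them and parse a small HTML snippet with BeautifulSoup, using lxml as the parser:

import requests  # imported only to confirm the install worked
from bs4 import BeautifulSoup
from lxml import etree  # lxml is used as BeautifulSoup's parser backend

soup = BeautifulSoup("<p>Hello, web scraping!</p>", "lxml")
print(soup.p.get_text())  # prints: Hello, web scraping!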
