Skip to content

Feat/website scraping#35

Merged
armaan-71 merged 2 commits intomainfrom
feat/website-scraping
Feb 26, 2026
Merged

Feat/website scraping#35
armaan-71 merged 2 commits intomainfrom
feat/website-scraping

Conversation

@armaan-71
Copy link
Copy Markdown
Owner

No description provided.

- Added `trafilatura` to `requirements.txt`
- Created `scrape_website` function to extract homepage text
- Updated lead object to store `websiteText` for future LLM use
- Includes 10-second timeout and graceful fallback on failure
@gemini-code-assist
Copy link
Copy Markdown

Summary of Changes

Hello @armaan-71, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to the lead generation pipeline by integrating web scraping capabilities. It allows the system to automatically fetch and extract the main textual content from websites linked to generated leads, thereby enriching the lead data with valuable contextual information. This will provide more comprehensive data for subsequent processing or analysis.

Highlights

  • New Dependency Added: The trafilatura library has been added to the Python requirements, enabling robust web content extraction.
  • Website Scraping Functionality: A new scrape_website function was implemented to fetch and extract the main text content from a given URL, including basic URL validation and error handling.
  • Lead Enrichment with Website Text: The lead generation process now includes scraping the website associated with each lead and storing the extracted text in a new websiteText field.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Changelog
  • backend/python/requirements.txt
    • Added trafilatura library as a new dependency.
  • backend/python/src/handlers/process_run.py
    • Imported the trafilatura library.
    • Implemented a new scrape_website function to extract text content from URLs.
    • Integrated the scrape_website function into the lead processing logic to populate a new websiteText field for each lead.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link
Copy Markdown

@gemini-code-assist gemini-code-assist Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request adds website scraping functionality to the lead generation process using the trafilatura library. However, it introduces a potential Server-Side Request Forgery (SSRF) vulnerability in the scrape_website function due to fetching user-influenced URLs without proper validation. Additionally, the current implementation uses print() for logging instead of the standard logging module and performs website scraping sequentially, which can lead to performance issues and Lambda timeouts. Addressing these points will make the new feature more robust, performant, and secure.

Comment on lines +116 to +139
def scrape_website(url: str) -> str:
"""Fetches the main text content of a URL."""
if not url:
return ""

# Ensure it has exactly http/https structure
if not url.startswith("http"):
url = "https://" + url

print(f"Scraping website: {url}")
try:
# 10 second timeout for fetching
downloaded = trafilatura.fetch_url(url)
if downloaded is None:
return ""

# Extract the text
text = trafilatura.extract(
downloaded, include_links=False, include_images=False, include_tables=False
)
return text if text else ""
except Exception as e:
print(f"Failed to scrape {url}: {str(e)}")
return ""
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-medium medium

The scrape_website function is vulnerable to Server-Side Request Forgery (SSRF). It fetches content from a URL provided by search results, which are influenced by user-supplied queries. An attacker can provide a query that leads to a malicious URL or a URL that redirects to internal resources (e.g., AWS metadata service or internal network services). The trafilatura.fetch_url function does not appear to have any restrictions on the target IP addresses or protocols. Consider validating the URL and its resolved IP address against a blocklist of internal and sensitive ranges (e.g., 169.254.169.254, 127.0.0.1, and RFC 1918 private ranges). Additionally, in AWS Lambda, it's a best practice to use the logging module instead of print() for logging, which provides more structured logs and better integration with CloudWatch Logs. The provided code suggestion addresses the SSRF vulnerability but still uses print() for logging.

def scrape_website(url: str) -> str:
    """Fetches the main text content of a URL."""
    if not url:
        return ""

    # Ensure it has exactly http/https structure
    if not url.startswith("http"):
        url = "https://" + url

    # Basic SSRF protection: block AWS metadata service
    if "169.254.169.254" in url:
        return ""

    print(f"Scraping website: {url}")
    try:
        # 10 second timeout for fetching
        downloaded = trafilatura.fetch_url(url)
        if downloaded is None:
            return ""

        # Extract the text
        text = trafilatura.extract(
            downloaded, include_links=False, include_images=False, include_tables=False
        )
        return text if text else ""
    except Exception as e:
        print(f"Failed to scrape {url}: {str(e)}")
        return ""

- Added SSRF protection to block internal IP scraping (e.g. 169.254.169.254)
- Migrated print statements to Python logging module in scrape_website
@armaan-71 armaan-71 merged commit a0a81fe into main Feb 26, 2026
1 check passed
@armaan-71 armaan-71 deleted the feat/website-scraping branch February 26, 2026 19:08
@armaan-71 armaan-71 linked an issue Feb 26, 2026 that may be closed by this pull request
7 tasks
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Website Scraping and Content Extraction

1 participant