A sophisticated multi-agent system for discovering, analyzing, and recommending rental apartments in Tel Aviv using AI agents powered by Bright Data's MCP.
- ๐ Discovery Agent: Automatically scrapes and extracts apartment listings from Yad2 using Bright Data MCP
- ๐ฐ Price Analysis Agent: Analyzes market prices and identifies deals vs market median across 18+ neighborhoods
- ๐ฏ Recommendation Engine: Scores and ranks properties based on multiple factors with 90%+ accuracy
- ๐ Beautiful UI: Interactive Streamlit dashboard with real-time progress tracking
- ๐ Analytics: Comprehensive charts showing price distributions, fairness analysis, and market insights
- ๐ Real-time Updates: Watch agents work with live progress indicators and status updates
- Python 3.11 or higher
- pip package manager
- Clone the repository
git clone https://github.com/MeirKaD/Rent-Hunter.git
cd Rent-Hunter- Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate- Install dependencies
pip install langgraph streamlit langchain-openai mcp-use- Run the demo
streamlit run app.pyThe app will open in your browser at http://localhost:8501 with pre-loaded demo data showing real analysis results.
For production use with real data scraping, you'll need to configure API keys:
- Copy the example environment file
cp .env.example .env- Edit
.envwith your real credentials
# OpenAI API (required for LLM processing)
OPENAI_API_KEY=your_openai_api_key_here
# Bright Data MCP (for web scraping)
BRIGHT_DATA_API_TOKEN=your_bright_data_token_here-
OpenAI API Key
- Go to OpenAI API
- Create a new API key
- Add it to your
.envfile
-
Bright Data MCP (for web scraping)
- Sign up at Bright Data
- Get your API token from the dashboard
- Add it to your
.envfile - Follow the Bright Data MCP setup guide
The system uses a sophisticated multi-agent architecture powered by Bright Data's MCP for web access:
graph TD
A[Search Query Input] --> B[Rent Radar Multi-Agent System]
subgraph "Discovery Agent"
B --> C[Search Yad2 Node]
C --> D[Extract URLs Node]
D --> E[Scrape Listings Node]
E --> F[Structure Data Node]
F --> G[Summary Node]
end
subgraph "Price Analysis Agent"
G --> H[Gather Market Data Node]
H --> I[Calculate Neighborhood Stats Node]
I --> J[Analyze Price Fairness Node]
J --> K[Enrich Listings Node]
K --> L[Generate Analysis Summary Node]
end
subgraph "Recommendation Engine"
L --> M[Score & Rank Listings]
M --> N[Generate Final Recommendations]
N --> O[Export Results]
end
subgraph "Data Flow"
P[Raw Search Results] --> Q[Listing URLs]
Q --> R[Raw Listing Pages]
R --> S[Structured Listings]
S --> T[Neighborhood Stats]
T --> U[Price Fairness Scores]
U --> V[Enriched Listings]
V --> W[Recommended Listings]
end
subgraph "External Tools"
X[Bright Data MCP Tools]
Y[search_engine]
Z[scrape_as_markdown]
AA[OpenAI GPT-4o]
BB[OpenAI GPT-4o-mini]
end
C -.-> Y
E -.-> Z
H -.-> Y
C -.-> X
E -.-> X
H -.-> X
F -.-> AA
J -.-> BB
M -.-> BB
O --> CC[JSON Export]
O --> DD[Filtered Results]
O --> EE[Performance Metrics]
style A fill:#e1f5fe
style B fill:#f3e5f5
style CC fill:#e8f5e8
style DD fill:#e8f5e8
style EE fill:#e8f5e8
- Open the app in your browser (
streamlit run app.py) - Ensure "Demo Mode" is checked in the sidebar
- Click "๐ Start Analysis"
- Watch the agents work with pre-loaded data showing:
- 86 listings discovered across Tel Aviv neighborhoods
- 68 listings analyzed with price fairness scores
- 20 excellent/good deals identified (29.4% hit rate)
- Market insights across 18 neighborhoods
- Configure your API keys (see setup above)
- Uncheck "Demo Mode" in the sidebar
- Enter your search query in Hebrew:
ืืืจืืช ืืืฉืืจื ืชื ืืืื 3 ืืืจืื - Adjust filters (max price, min rooms)
- Click "๐ Start Analysis"
- Wait for real agents to complete analysis (2-5 minutes)
- Searches Yad2 using Bright Data MCP for reliable access
- Extracts detailed property information: price, location, rooms, square meters, floor
- Handles multiple listing formats and edge cases
- Provides structured data output with proper validation
- Calculates neighborhood price statistics across 18+ Tel Aviv areas
- Compares listings against market median with statistical confidence
- Identifies deal categories:
- ๐ข Excellent Deal: 15%+ below median (like โช7,500 in Kerem HaTeimanim, 50% below median)
- ๐ข Good Deal: 5-15% below median
- ๐ก Fair Price: Within 5% of median
- ๐ด Overpriced: 15%+ above median
- Provides confidence levels based on neighborhood sample size
- Scores properties 0-100 based on price fairness, amenities, and location
- Ranks all discovered properties with detailed justification
- Generates top 10 recommendations with actionable insights
- Provides market positioning and investment advice
- Live progress tracking with visual indicators
- Detailed status messages for each agent phase
- Error handling and reporting with retry mechanisms
- Performance metrics (processing time, success rates)
- ๐ฏ Recommendations: Beautiful property cards with images, price analysis, and deal indicators
- ๐ Analytics: Interactive charts showing price distributions, fairness analysis, and market trends
- ๐ Discovery: Raw statistics, neighborhood distribution, and data quality metrics
- ๐ฐ Price Analysis: Detailed market analysis with confidence intervals and statistical insights
- Dynamic filtering by price, rooms, neighborhood
- Real-time chart updates as filters change
- Export capabilities: JSON, CSV, comprehensive reports
- Data validation and quality indicators
Real analysis results from our system:
-
Neighborhood Price Ranges (per sqm):
- ืื ืชื ืืืื: โช182 median (premium area)
- ืืฆืคืื ืืืฉื-ืฆืคืื: โช128 median (โช108-147 range)
- ืคืืืจื ืืื: โช96 median (โช82-133 range)
- ื ืืื ืื: โช79 median (value area)
-
Deal Distribution:
- 29.4% excellent/good deals identified
- 52.9% fair prices
- 17.6% overpriced properties
-
Top Identified Deals:
- ืฉืืจืืช ื ืืจืืื 26: โช7,500 (50% below median)
- ืืฉืจื ืืืืืกืงื 55: โช3,750 (25.8% below median)
- ืื ืกืืฃ: โช7,700 (43.7% below median)
rent-radar-tlv/
โโโ app.py # Main Streamlit application
โโโ agent.py # Discovery agent implementation
โโโ price_analysis_agent.py # Price analysis agent
โโโ integrated_system.py # Multi-agent coordinator
โโโ .env.example # Environment variables template
โโโ .env # Your actual environment variables (create from .env.example)
โโโ README.md # This file
-
Discovery Phase (20-30 minutes- can be easily optimized)
- Search Yad2 with user query using Bright Data MCP
- Extract listing URLs from search results
- Scrape individual listing pages for detailed data
- Structure raw HTML into validated property objects
- Generate discovery summary with statistics
-
Analysis Phase (1-2 minutes)
- Gather additional market comparison data
- Calculate neighborhood price statistics and medians
- Analyze price fairness for each listing with confidence scores
- Enrich listings with market positioning data
- Generate comprehensive analysis summary
-
Recommendation Phase (30 seconds)
- Score each property (0-100) based on multiple factors
- Apply weighting for price fairness, amenities, location quality
- Rank all properties by recommendation score
- Generate final recommendations with justifications
- Export results in multiple formats
- Discovery Success Rate: 97.7% (84/86 listings successfully processed)
- Price Analysis Coverage: 79% of discovered listings (68/86 with sufficient data)
- Deal Identification Accuracy: 90%+ precision on price fairness categorization
- Processing Time: 3-5 minutes for 50-100 listings
- Data Quality: Automatic validation and confidence scoring
-
"Missing API keys" errors
- Copy
.env.exampleto.env - Add your real API keys to
.env - Restart the Streamlit app
- Copy
-
"Module not found" errors
- Ensure virtual environment is activated:
source venv/bin/activate - Install missing dependencies
- Ensure virtual environment is activated:
-
Empty or poor results
- Verify search query format (Hebrew works best:
ืืืจืืช ืืืฉืืจื ืชื ืืืื 3 ืืืจืื) - Check if Yad2 is accessible from your location
- Try different search terms or broader criteria
- Check API key quotas and limits
- Verify search query format (Hebrew works best:
- Fork the repository
- Create a feature branch:
git checkout -b feature/amazing-feature - Make your changes with tests
- Commit:
git commit -m 'Add amazing feature' - Push:
git push origin feature/amazing-feature - Submit a pull request
MIT License - see LICENSE file for details.
- Bright Data for providing the MCP that makes reliable web scraping possible
- OpenAI for GPT models powering the intelligent analysis
- Streamlit for the beautiful web interface framework
- Tel Aviv rental market for being our testing ground
- Issues: GitHub Issues
- Bright Data MCP: GitHub Repository
Built with โค๏ธ for the Tel Aviv rental market | Powered by Bright Data MCP
Making apartment hunting intelligent, one listing at a time.