diff --git a/BACKEND_TROUBLESHOOTING.md b/BACKEND_TROUBLESHOOTING.md new file mode 100644 index 000000000..e1fe08ec3 --- /dev/null +++ b/BACKEND_TROUBLESHOOTING.md @@ -0,0 +1,285 @@ +# Backend Server Troubleshooting Guide + +This guide helps you resolve common issues when starting the AI Hedge Fund backend server. + +## Quick Start Checklist + +✅ **Prerequisites Met:** +- [ ] Python 3.11+ installed +- [ ] Poetry installed +- [ ] Dependencies installed (`poetry install`) +- [ ] Environment variables configured (`.env` file) + +✅ **Basic Test:** +```bash +# Test if backend imports work +poetry run python -c "from app.backend.main import app; print('Backend imports successful')" +``` + +✅ **Start Server:** +```bash +# Option 1: Using the startup script (recommended) +python3 start_backend.py + +# Option 2: Direct uvicorn command +poetry run uvicorn app.backend.main:app --host 0.0.0.0 --port 8000 --reload + +# Option 3: Using the web app runner +cd app && ./run.sh +``` + +## Common Issues and Solutions + +### 1. ModuleNotFoundError: No module named 'fastapi' + +**Problem:** Dependencies not installed or Poetry environment not activated. + +**Solution:** +```bash +# Install dependencies +poetry install + +# Verify installation +poetry run python -c "import fastapi; print('FastAPI installed')" +``` + +### 2. NameError: name 'PositionType' is not defined + +**Problem:** Class definition order issue in `src/data/models.py`. + +**Solution:** This should be fixed in the current version. If you see this error: +```bash +# Check if you have the latest models.py +grep -n "class PositionType" src/data/models.py +# Should show only one definition around line 144 +``` + +### 3. Import Error: Cannot import name 'SecurityInfo' + +**Problem:** Forward reference issue or class ordering. + +**Solution:** Ensure you have the latest version of `src/data/models.py` with proper forward references using strings: +```python +security_info: Optional["SecurityInfo"] = None +``` + +### 4. Database Connection Errors + +**Problem:** SQLite database issues or permissions. + +**Solution:** +```bash +# Check database file permissions +ls -la hedge_fund.db + +# If corrupted, remove and recreate +rm hedge_fund.db +poetry run python -c "from app.backend.database.models import Base; from app.backend.database.connection import engine; Base.metadata.create_all(bind=engine)" +``` + +### 5. Port Already in Use + +**Problem:** Port 8000 is occupied by another process. + +**Solution:** +```bash +# Find process using port 8000 +lsof -i :8000 + +# Kill the process (replace PID with actual process ID) +kill -9 + +# Or use a different port +poetry run uvicorn app.backend.main:app --port 8001 +``` + +### 6. Environment Variable Issues + +**Problem:** Missing or incorrect API keys. + +**Solution:** +```bash +# Check if .env file exists +ls -la .env + +# Create .env from template if missing +cp .env.example .env + +# Edit .env file to add your API keys +nano .env # or your preferred editor +``` + +Required environment variables: +- At least one LLM API key: `OPENAI_API_KEY`, `GROQ_API_KEY`, `ANTHROPIC_API_KEY`, or `DEEPSEEK_API_KEY` +- Optional: `FINANCIAL_DATASETS_API_KEY` (free data available for major stocks) + +### 7. Poetry Command Not Found + +**Problem:** Poetry not installed or not in PATH. + +**Solution:** +```bash +# Install Poetry +curl -sSL https://install.python-poetry.org | python3 - + +# Add to PATH (add to ~/.bashrc or ~/.zshrc) +export PATH="$HOME/.local/bin:$PATH" + +# Reload shell +source ~/.bashrc # or source ~/.zshrc + +# Alternative: Install via pip +python3 -m pip install poetry +``` + +### 8. Python Version Compatibility + +**Problem:** Python version too old. + +**Solution:** +```bash +# Check Python version +python3 --version + +# Should be 3.11 or higher +# If not, install newer Python: +# - macOS: brew install python@3.11 +# - Ubuntu: sudo apt install python3.11 +# - Windows: Download from python.org +``` + +### 9. Import Path Issues + +**Problem:** Python can't find the modules. + +**Solution:** +```bash +# Ensure you're running from the project root +pwd # Should show /path/to/ai-hedge-fund + +# Check Python path +poetry run python -c "import sys; print('\n'.join(sys.path))" + +# Run with proper module path +poetry run python -m app.backend.main +``` + +### 10. CORS Issues + +**Problem:** Frontend can't connect to backend. + +**Solution:** Check CORS configuration in `app/backend/main.py`: +```python +app.add_middleware( + CORSMiddleware, + allow_origins=["http://localhost:5173", "http://127.0.0.1:5173"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) +``` + +## Debugging Steps + +### 1. Enable Verbose Logging + +Add to your startup command: +```bash +poetry run uvicorn app.backend.main:app --log-level debug +``` + +### 2. Test Individual Components + +```bash +# Test data models +poetry run python -c "from src.data.models import Position, Portfolio; print('Models OK')" + +# Test database connection +poetry run python -c "from app.backend.database.connection import engine; print('Database OK')" + +# Test routes +poetry run python -c "from app.backend.routes import api_router; print('Routes OK')" +``` + +### 3. Check Dependencies + +```bash +# List installed packages +poetry show + +# Check for conflicts +poetry check + +# Update dependencies +poetry update +``` + +### 4. Fresh Installation + +If all else fails, try a clean installation: +```bash +# Remove virtual environment +poetry env remove python + +# Clean install +poetry install + +# Verify installation +poetry run python -c "from app.backend.main import app; print('Success')" +``` + +## Performance Tips + +### 1. Database Optimization + +```bash +# Use a proper database for production +# In .env file: +DATABASE_URL=postgresql://user:password@localhost/hedge_fund +``` + +### 2. Production Settings + +```bash +# Production startup +poetry run uvicorn app.backend.main:app --host 0.0.0.0 --port 8000 --workers 4 +``` + +### 3. Memory Usage + +```bash +# Monitor memory usage +poetry run python -c " +import psutil +import os +process = psutil.Process(os.getpid()) +print(f'Memory usage: {process.memory_info().rss / 1024 / 1024:.1f} MB') +" +``` + +## Getting Help + +If you're still having issues: + +1. **Check the logs:** Look in `backend.log` for detailed error messages +2. **Run diagnostics:** Use the `start_backend.py` script for comprehensive checks +3. **Create an issue:** Include error logs and system information +4. **Community support:** Check existing issues and discussions + +## System Information for Bug Reports + +When reporting issues, include this information: +```bash +echo "System Information:" +echo "OS: $(uname -a)" +echo "Python: $(python3 --version)" +echo "Poetry: $(poetry --version)" +echo "Project directory: $(pwd)" +echo "Environment variables:" +env | grep -E "(OPENAI|GROQ|ANTHROPIC|DEEPSEEK|FINANCIAL)" | sed 's/=.*/=***/' +``` + +--- + +**Last Updated:** December 2024 +**Version:** Compatible with AI Hedge Fund v0.1.0 \ No newline at end of file diff --git a/ENHANCED_FEATURES_GUIDE.md b/ENHANCED_FEATURES_GUIDE.md new file mode 100644 index 000000000..386fcb442 --- /dev/null +++ b/ENHANCED_FEATURES_GUIDE.md @@ -0,0 +1,435 @@ +# Enhanced AI Hedge Fund Features Guide + +This guide explains how to use the new enhanced features including trading universes, advanced sentiment analysis, economic indicators, and political signals monitoring. + +## 🌟 Overview of Enhanced Features + +The enhanced AI hedge fund now includes: + +1. **🎯 Trading Universe System** - Define investment universes with specific criteria +2. **📊 Advanced Sentiment Analysis** - NLP analysis of news and social media +3. **📈 Economic Indicators** - Real-time economic data and Fed policy tracking +4. **🏛️ Political Signals** - Political event monitoring and impact assessment +5. **⚡ Real-Time Data Pipeline** - Live market data and options Greeks +6. **🧠 Enhanced Portfolio Management** - Comprehensive analysis integration + +## 🎯 Trading Universe System + +### What is a Trading Universe? + +A trading universe defines the set of securities your hedge fund can trade, with specific filters and constraints: + +```python +from src.data.trading_universes import get_trading_universe + +# Get the S&P 500 universe +sp500_universe = get_trading_universe("sp500") +print(f"Description: {sp500_universe.description}") +print(f"Max positions: {sp500_universe.max_positions}") +print(f"Min market cap: {sp500_universe.min_market_cap}B") +``` + +### Available Trading Universes + +| Universe | Description | Focus | Max Positions | +|----------|-------------|-------|---------------| +| `sp500` | S&P 500 companies | Large cap, high liquidity | 50 | +| `tech` | High-volume tech stocks | Technology sector | 30 | +| `sector_etf` | Sector-based ETFs | Sector rotation | 15 | +| `options` | Options trading universe | Liquid options markets | 25 | +| `conservative` | Conservative large cap | Blue-chip dividends | 40 | +| `aggressive_growth` | Tech + Options combo | Maximum returns | 55 | +| `balanced` | S&P 500 + Sector ETFs | Diversified approach | 65 | + +### How Trading Universes Are Used + +1. **Security Filtering**: Only stocks meeting universe criteria are considered +2. **Position Limits**: Maximum number of positions enforced +3. **Sector Allocation**: Target sector weightings applied +4. **Risk Management**: Universe-specific risk parameters +5. **Strategy Alignment**: Investment approach matched to universe + +### Using Different Universes + +```bash +# Use tech universe for growth focus +poetry run python src/enhanced_main.py --tickers AAPL,MSFT,GOOGL --universe tech + +# Use conservative universe for stable returns +poetry run python src/enhanced_main.py --tickers JNJ,PG,KO --universe conservative + +# Use options universe for complex strategies +poetry run python src/enhanced_main.py --tickers SPY,QQQ,AAPL --universe options +``` + +## 📊 Enhanced Sentiment Analysis + +### Advanced NLP Features + +The enhanced sentiment analyzer provides: + +- **Financial Lexicon**: 500+ finance-specific sentiment terms +- **Entity Extraction**: Automatic ticker, company, and number recognition +- **News Categorization**: Earnings, M&A, regulatory, management changes +- **Emotion Detection**: Fear, greed, optimism, pessimism indicators +- **Market Relevance Scoring**: Direct market impact assessment + +### How Sentiment Analysis Works + +```python +from src.agents.enhanced_sentiment import AdvancedSentimentAnalyzer + +analyzer = AdvancedSentimentAnalyzer() + +# Analyze news article +news_text = "Apple Inc. reported strong quarterly earnings, beating analyst expectations..." +analysis = await analyzer.analyze_sentiment(news_text, "AAPL") + +print(f"Overall Sentiment: {analysis.overall_sentiment}") +print(f"Confidence: {analysis.confidence:.2f}") +print(f"Key Phrases: {analysis.key_phrases}") +print(f"Market Relevance: {analysis.market_relevance:.2f}") +``` + +### Social Media Integration + +The system analyzes sentiment from: +- **Twitter/X**: Real-time tweets and engagement +- **Reddit**: Investment subreddit discussions +- **StockTwits**: Financial social network posts + +Metrics include: +- Average sentiment score (-1 to 1) +- Total mentions and engagement +- Trending topics and hashtags +- Influence-weighted analysis + +## 📈 Economic Indicators Integration + +### Federal Reserve Economic Data (FRED) + +The system tracks key economic indicators: + +| Indicator | Frequency | Impact Level | +|-----------|-----------|--------------| +| Fed Funds Rate | Monthly | High | +| Unemployment Rate | Monthly | High | +| Core CPI | Monthly | High | +| GDP Growth | Quarterly | High | +| Consumer Confidence | Monthly | Medium | +| Initial Claims | Weekly | Medium | +| 10Y Treasury Yield | Daily | High | + +### Economic Health Scoring + +The system calculates an overall economic health score (0-100) based on: + +- **Unemployment Rate**: Lower is better (target <4%) +- **GDP Growth**: Positive growth preferred (target >2%) +- **Inflation**: Target around 2% (1.5-2.5% optimal) +- **Consumer Confidence**: Higher is better (>100 preferred) + +### Fed Policy Analysis + +The system analyzes Federal Reserve communications for: +- **Policy Tone**: Hawkish, dovish, or neutral sentiment +- **Rate Change Probability**: Market expectations +- **Key Themes**: Inflation, employment, economic outlook +- **Market Impact Scoring**: Expected volatility + +## 🏛️ Political Signals Monitoring + +### Event Types Tracked + +The system monitors various political events: + +| Event Type | Examples | Market Impact | +|------------|----------|---------------| +| Elections | Presidential, Congressional | High | +| Policy Announcements | Tax policy, healthcare | Medium-High | +| Sanctions | Trade restrictions | High | +| Regulatory Changes | SEC, FDA rulings | Medium | +| Geopolitical Tensions | Wars, conflicts | High | +| Debt Ceiling | Government funding | High | + +### Political Risk Assessment + +Each event receives: +- **Impact Level**: High, Medium, Low +- **Affected Sectors**: Technology, Healthcare, Energy, etc. +- **Sentiment Score**: -1 (negative) to 1 (positive) +- **Market Impact Score**: 0-10 scale +- **Urgency Score**: Time-sensitive prioritization + +### Integration with Trading Decisions + +Political signals influence: +- **Sector Allocation**: Adjust exposure based on policy changes +- **Risk Management**: Increase cash during high uncertainty +- **Timing**: Delay trades during major political events +- **Hedging**: Use defensive positions during tensions + +## ⚡ Real-Time Data Pipeline + +### Market Data Sources + +The system supports multiple data providers: +- **Alpha Vantage**: Basic market data +- **Polygon.io**: Real-time quotes and options +- **Interactive Brokers**: Professional data feeds +- **Tradier**: Options chains and Greeks + +### Options Greeks Calculation + +For options strategies, the system calculates: +- **Delta**: Price sensitivity to underlying +- **Gamma**: Delta change rate +- **Theta**: Time decay +- **Vega**: Volatility sensitivity +- **Rho**: Interest rate sensitivity + +### Implied Volatility + +The system uses Black-Scholes models to calculate: +- Real-time implied volatility +- Volatility surface modeling +- IV percentile rankings +- Volatility forecasting + +## 🧠 Enhanced Portfolio Management + +### Comprehensive Decision Framework + +The enhanced portfolio manager considers: + +1. **Traditional Analysis**: Fundamental, technical, sentiment +2. **Economic Context**: GDP, inflation, employment +3. **Political Environment**: Policy changes, elections +4. **Market Regime**: Bull, bear, sideways, volatile +5. **Social Sentiment**: Crowd psychology indicators +6. **Risk Assessment**: VaR, correlation, concentration + +### Market Regime Detection + +The system identifies four market regimes: + +| Regime | Characteristics | Strategy | +|--------|----------------|----------| +| Bull Market | Rising prices, optimism | Growth stocks, momentum | +| Bear Market | Falling prices, pessimism | Defensive, quality, shorts | +| Sideways Market | Range-bound trading | Value plays, covered calls | +| Volatile Market | High uncertainty | Reduced size, options | + +### Sector Allocation + +Based on the trading universe, the system applies: +- **Target Weightings**: Optimal sector allocation +- **Deviation Limits**: Maximum over/underweight +- **Rebalancing Triggers**: When to adjust positions +- **Risk Budgets**: Sector-specific risk limits + +## 🚀 Getting Started with Enhanced Features + +### 1. Setup API Keys + +Add to your `.env` file: + +```bash +# Required: At least one LLM API key +OPENAI_API_KEY=your_openai_key +GROQ_API_KEY=your_groq_key + +# Enhanced Features (Optional) +FRED_API_KEY=your_fred_key # Economic indicators (free) +NEWSAPI_KEY=your_newsapi_key # Political signals +REDDIT_API_KEY=your_reddit_key # Social sentiment +TWITTER_API_KEY=your_twitter_key # Social sentiment +FINANCIAL_DATASETS_API_KEY=your_key # Extended stock data +``` + +### 2. Run Enhanced Analysis + +```bash +# Basic enhanced analysis +poetry run python src/enhanced_main.py --tickers AAPL,MSFT,GOOGL --universe tech + +# Full features with reasoning +poetry run python src/enhanced_main.py --tickers AAPL,MSFT,GOOGL --universe tech --show-reasoning + +# Demo mode with sample data +poetry run python src/enhanced_main.py --demo --show-reasoning +``` + +### 3. Web Interface Integration + +The enhanced features integrate with the web application: + +```bash +# Start web app with enhanced backend +cd app && ./run.sh +``` + +Access the enhanced features through: +- **Portfolio Dashboard**: Real-time analysis +- **Universe Selection**: Choose trading focus +- **Sentiment Monitor**: Track market mood +- **Economic Dashboard**: Monitor macro indicators +- **Political Tracker**: Watch policy developments + +## 📊 Example Enhanced Analysis Output + +``` +🎯 ENHANCED TRADING ANALYSIS RESULTS +========================================= + +📊 Portfolio-Level Analysis: + 🌡️ Market Sentiment: 0.35 + 🏥 Economic Health: 72.5/100 + 🏛️ Political Stability: 68.0/100 + 📈 Market Regime: Bull Market + ⚠️ Risk Level: Medium + +📈 Economic Indicators: + 📊 Health Score: 72.5/100 + 📝 Summary: Economic outlook appears positive + +🏛️ Political Signals: + ⚡ High Impact Events: 1 + 📰 Total Events: 12 + +💬 Social Sentiment: + AAPL: 0.45 (1,234 mentions) + MSFT: 0.23 (892 mentions) + GOOGL: 0.12 (567 mentions) + +🎯 Trading Decisions: + AAPL: BUY 50 shares (Confidence: 85.2%) + 📊 Sentiment: 0.45 + 🏛️ Economic Impact: 0.23 + ⚠️ Political Risk: -0.12 + + MSFT: BUY 30 shares (Confidence: 78.1%) + GOOGL: HOLD 0 shares (Confidence: 65.4%) +``` + +## 🔧 Advanced Configuration + +### Custom Trading Universe + +Create your own universe: + +```python +from src.data.models import TradingUniverse, AssetClass, Sector + +custom_universe = TradingUniverse( + name="Custom Tech Universe", + description="High-growth technology companies", + asset_classes=[AssetClass.EQUITY], + sectors=[Sector.TECHNOLOGY], + min_market_cap=10.0, # $10B minimum + max_positions=20, + included_tickers=["AAPL", "MSFT", "GOOGL", "NVDA", "META"] +) +``` + +### Custom Sentiment Analysis + +Extend the sentiment analyzer: + +```python +from src.agents.enhanced_sentiment import AdvancedSentimentAnalyzer + +class CustomSentimentAnalyzer(AdvancedSentimentAnalyzer): + def __init__(self): + super().__init__() + # Add custom financial terms + self.preprocessor.financial_sentiment_lexicon.update({ + "moonshot": 0.8, + "diamond_hands": 0.7, + "paper_hands": -0.6, + "to_the_moon": 0.9 + }) +``` + +### Economic Indicator Alerts + +Set up custom alerts: + +```python +from src.data.economic_indicators import EconomicDataManager + +async def check_economic_alerts(manager): + indicators = await manager.get_all_indicators() + + # Alert on high unemployment + if indicators["unemployment_rate"].value > 5.0: + print("⚠️ Unemployment above 5%") + + # Alert on high inflation + if indicators["core_cpi"].change_percent > 3.0: + print("⚠️ Core inflation above 3%") +``` + +## 🔍 Troubleshooting Enhanced Features + +### Common Issues + +1. **API Rate Limits** + ```bash + # Reduce request frequency + export REQUEST_DELAY=2 # 2 second delay between requests + ``` + +2. **Missing Data** + ```python + # Check data availability + features = check_enhanced_features_availability(api_keys) + print(features) + ``` + +3. **Memory Usage** + ```bash + # Monitor memory for large universes + poetry run python -c " + import psutil + print(f'Memory usage: {psutil.virtual_memory().percent}%') + " + ``` + +### Performance Optimization + +1. **Cache Configuration** + - Economic data: 1 hour TTL + - Political events: 4 hours TTL + - Sentiment analysis: 30 minutes TTL + - Options data: 5 minutes TTL + +2. **Parallel Processing** + - Sentiment analysis runs in parallel for multiple tickers + - Economic and political data fetched concurrently + - Real-time data uses async WebSocket connections + +3. **Data Prioritization** + - High-impact political events processed first + - Economic indicators weighted by importance + - Social sentiment filtered by influence score + +## 📚 Further Reading + +- [Trading Universe Configuration](src/data/trading_universes.py) +- [Enhanced Sentiment Analysis](src/agents/enhanced_sentiment.py) +- [Economic Indicators Integration](src/data/economic_indicators.py) +- [Political Signals Monitoring](src/data/political_signals.py) +- [Real-Time Data Pipeline](src/data/realtime_data.py) +- [Enhanced Portfolio Manager](src/agents/enhanced_portfolio_manager.py) + +--- + +**Next Steps:** +1. Set up your API keys in `.env` +2. Choose your trading universe +3. Run the enhanced analysis +4. Monitor the comprehensive insights +5. Iterate and refine your strategy + +The enhanced AI hedge fund provides institutional-grade analysis capabilities while remaining accessible to individual investors. Start with the demo mode to explore all features! \ No newline at end of file diff --git a/ENHANCED_UI_SETUP.md b/ENHANCED_UI_SETUP.md new file mode 100644 index 000000000..4e7c67352 --- /dev/null +++ b/ENHANCED_UI_SETUP.md @@ -0,0 +1,272 @@ +# Enhanced UI Setup Guide + +## 🎉 What's New + +The AI Hedge Fund now includes a **comprehensive web interface** with all enhanced features accessible through an intuitive menu system! + +### ✨ New Features Available in Web UI: + +1. **📊 Portfolio Dashboard** - Real-time portfolio analysis and performance metrics +2. **🎯 Universe Selection** - Interactive trading universe configuration +3. **📈 Economic Indicators** - Live economic data and Fed policy tracking +4. **🏛️ Political Signals** - Political event monitoring and impact assessment +5. **💬 Sentiment Analysis** - Social media and news sentiment tracking +6. **📊 Market Overview** - Comprehensive market regime analysis + +### 🔄 Dual Interface + +The platform now offers **two modes**: +- **Trading Dashboard** - Enhanced features with professional trading interface +- **Workflow Builder** - Original visual workflow system + +## 🚀 Quick Setup + +### 1. Install New Dependencies + +```bash +cd app/frontend +npm install +``` + +New dependencies added: +- `@radix-ui/react-navigation-menu` - Navigation system +- `@radix-ui/react-dropdown-menu` - Dropdown menus +- `@radix-ui/react-label` - Form labels +- `@radix-ui/react-toggle-group` - Toggle controls +- `recharts` - Interactive charts and graphs + +### 2. Configure API Keys (Optional for Enhanced Features) + +Add to your `.env` file in the project root: + +```bash +# Enhanced Features API Keys (Optional) +FRED_API_KEY=your_fred_api_key # Economic indicators +NEWSAPI_KEY=your_newsapi_key # Political signals +REDDIT_API_KEY=your_reddit_key # Social sentiment +TWITTER_API_KEY=your_twitter_key # Social sentiment +FINANCIAL_DATASETS_API_KEY=your_financial_key # Extended data +``` + +### 3. Start the Application + +```bash +# From the app directory +cd app && ./run.sh + +# Or manually: +cd app/backend && poetry run uvicorn app.backend.main:app --reload --port 8000 & +cd app/frontend && npm run dev +``` + +### 4. Access the Enhanced Interface + +1. Open http://localhost:5173 +2. Click **"Trading Dashboard"** in the top toggle +3. Use the navigation menu to access all features + +## 📱 User Interface Overview + +### Navigation Menu + +The main navigation is organized into 4 sections: + +#### 1. **Portfolio** 📊 +- **Dashboard** - Real-time portfolio overview +- **Holdings** - Current positions and allocations +- **Performance** - Historical returns and metrics +- **Risk Analysis** - Risk exposure and VaR analysis + +#### 2. **Trading Universe** 🎯 +- **Universe Selection** - Choose investment focus +- **Screener** - Screen securities by criteria +- **Sector Analysis** - Sector rotation and allocation + +#### 3. **Market Intelligence** 🧠 +- **Economic Indicators** - Fed policy, GDP, inflation +- **Political Signals** - Elections, policy changes +- **Sentiment Analysis** - Social media sentiment +- **Market Overview** - Comprehensive market view + +#### 4. **AI Agents** 🤖 +- **Agent Performance** - AI analyst accuracy tracking +- **Agent Configuration** - Customize AI behavior +- **Decision Flow** - Visual workflow builder + +### Feature Status Indicator + +At the top of the interface, a status bar shows which features are available: +- 🟢 **Green** - Feature fully enabled with API key +- 🟡 **Yellow** - Feature available with mock data +- 🔴 **Red** - Feature unavailable + +## 🎯 Feature Highlights + +### Portfolio Dashboard +- **Real-time Metrics** - Total value, returns, cash balance +- **Market Context** - Economic health, sentiment, political events +- **Interactive Charts** - Performance trends and allocation breakdown +- **Position Details** - Current holdings with P&L tracking + +### Universe Selection +- **Pre-configured Universes** - S&P 500, Tech, Sector ETFs, Options, Conservative +- **Custom Configuration** - Set market cap, sector, and position limits +- **Visual Selection** - Card-based interface with feature highlights +- **Real-time Analysis** - Immediate universe composition analysis + +### Economic Indicators +- **Key Metrics** - Fed funds rate, unemployment, CPI, GDP +- **Health Scoring** - Algorithmic economic assessment (0-100) +- **Trend Analysis** - Historical context and forecasts +- **Fed Policy Tracking** - FOMC decisions and tone analysis + +### Political Signals +- **Event Monitoring** - Elections, policy changes, sanctions +- **Impact Assessment** - Market impact scoring (0-10) +- **Sector Mapping** - Which sectors affected by events +- **Risk Scoring** - Political stability assessment + +### Sentiment Analysis +- **Multi-Platform** - Twitter, Reddit, StockTwits aggregation +- **Ticker-Specific** - Individual stock sentiment tracking +- **Trending Topics** - What's driving sentiment +- **Influence Weighting** - Quality-adjusted sentiment scores + +## 🛠 Development Notes + +### Backend API Endpoints + +New enhanced endpoints at `/api/v1/enhanced/`: + +``` +GET /status # Feature availability +GET /universes # Trading universes +GET /universes/{name} # Universe details +GET /economic-indicators # Economic data +GET /economic-summary # Economic overview +GET /political-events # Political events +GET /sentiment/{ticker} # Ticker sentiment +POST /sentiment/batch # Batch sentiment +POST /portfolio-analysis # Enhanced analysis +GET /market-overview # Market summary +WebSocket /ws/live-data # Real-time updates +``` + +### Frontend Architecture + +``` +app/frontend/src/ +├── components/ +│ ├── navigation/ +│ │ └── main-nav.tsx # Main navigation menu +│ ├── dashboard/ +│ │ └── portfolio-dashboard.tsx # Portfolio dashboard +│ ├── universe/ +│ │ └── universe-selection.tsx # Universe selection +│ ├── enhanced-layout.tsx # Enhanced layout wrapper +│ └── ui/ # Shared UI components +├── App.tsx # Main app with mode toggle +└── ... +``` + +### State Management + +- **Local State** - React useState for component state +- **API Integration** - Direct fetch calls to backend +- **Real-time Updates** - WebSocket for live data +- **Caching** - Browser localStorage for preferences + +## 🔧 Customization + +### Adding New Universe Types + +1. **Backend** - Add to `src/data/trading_universes.py`: +```python +CUSTOM_UNIVERSE = TradingUniverse( + name="custom", + description="Custom investment universe", + asset_classes=[AssetClass.EQUITY], + max_positions=25, + included_tickers=["AAPL", "MSFT"] +) +``` + +2. **Frontend** - Add to universe cards in `universe-selection.tsx` + +### Adding New Dashboard Widgets + +1. Create component in `components/dashboard/` +2. Add to `enhanced-layout.tsx` navigation +3. Integrate with backend API endpoints + +### Styling Customization + +- **Theme** - Modify `tailwind.config.ts` +- **Colors** - Update color constants in components +- **Layout** - Adjust spacing in layout components + +## 📊 Data Sources + +### Real-time Data +- **Economic** - Federal Reserve Economic Data (FRED) +- **Political** - NewsAPI, RSS feeds +- **Sentiment** - Reddit API, Twitter API, StockTwits +- **Market** - Financial Datasets API, Alpha Vantage, Polygon + +### Mock Data Fallbacks +When API keys aren't configured, the system provides: +- Sample economic indicators +- Mock political events +- Simulated sentiment data +- Generated market overviews + +## 🚨 Troubleshooting + +### Common Issues + +1. **Navigation Menu Not Showing** + ```bash + npm install @radix-ui/react-navigation-menu + ``` + +2. **Charts Not Rendering** + ```bash + npm install recharts + ``` + +3. **Backend API Errors** + - Check backend is running on port 8000 + - Verify API keys in `.env` file + - Check browser console for CORS issues + +4. **Feature Status All Red** + - Add API keys to `.env` in project root + - Restart backend server + - Check `/api/v1/enhanced/status` endpoint + +### Debug Mode + +Add to browser console: +```javascript +localStorage.setItem('debug', 'true'); +``` + +This enables detailed logging for API calls and state changes. + +## 🎯 Next Steps + +1. **Install Dependencies** - `npm install` in frontend +2. **Add API Keys** - Configure `.env` for full features +3. **Start Application** - `./run.sh` from app directory +4. **Explore Features** - Switch to "Trading Dashboard" mode +5. **Customize** - Add your own analysis and universes + +## 📞 Support + +- **Documentation** - Check `ENHANCED_FEATURES_GUIDE.md` +- **API Reference** - Visit http://localhost:8000/docs +- **Troubleshooting** - See `BACKEND_TROUBLESHOOTING.md` + +--- + +🎉 **Congratulations!** You now have a professional-grade hedge fund interface with comprehensive analysis capabilities! \ No newline at end of file diff --git a/app/backend/README.md b/app/backend/README.md index 7889d956c..8cbb5fc4b 100644 --- a/app/backend/README.md +++ b/app/backend/README.md @@ -86,7 +86,7 @@ app/backend/ ├── services/ # Business logic │ ├── graph.py # Agent graph functionality │ └── portfolio.py # Portfolio management -├── __init__.py # Package initialization +├── __init__.py 11 # Package initialization └── main.py # FastAPI application entry point ``` diff --git a/app/backend/routes/__init__.py b/app/backend/routes/__init__.py index 836eaff23..dc0d338aa 100644 --- a/app/backend/routes/__init__.py +++ b/app/backend/routes/__init__.py @@ -4,6 +4,7 @@ from app.backend.routes.health import router as health_router from app.backend.routes.storage import router as storage_router from app.backend.routes.flows import router as flows_router +from app.backend.routes.enhanced_features import router as enhanced_router # Main API router api_router = APIRouter() @@ -13,3 +14,4 @@ api_router.include_router(hedge_fund_router, tags=["hedge-fund"]) api_router.include_router(storage_router, tags=["storage"]) api_router.include_router(flows_router, tags=["flows"]) +api_router.include_router(enhanced_router, tags=["enhanced-features"]) diff --git a/app/backend/routes/enhanced_features.py b/app/backend/routes/enhanced_features.py new file mode 100644 index 000000000..0a9f0af48 --- /dev/null +++ b/app/backend/routes/enhanced_features.py @@ -0,0 +1,465 @@ +""" +Enhanced features API endpoints for the web UI. +Provides access to trading universes, sentiment analysis, economic indicators, and political signals. +""" + +import asyncio +import json +from datetime import datetime, timedelta +from typing import Dict, List, Optional, Any +from fastapi import APIRouter, HTTPException, BackgroundTasks, Depends +from fastapi.responses import StreamingResponse +from pydantic import BaseModel, Field +import os +from dotenv import load_dotenv + +from src.data.trading_universes import TRADING_UNIVERSES, get_trading_universe, list_trading_universes +from src.data.economic_indicators import EconomicDataManager +from src.data.political_signals import PoliticalSignalsManager +from src.agents.enhanced_sentiment import AdvancedSentimentAnalyzer, SocialMediaAnalyzer +from src.agents.enhanced_portfolio_manager import enhanced_portfolio_management_agent, MacroAnalysisEngine + +load_dotenv() + +router = APIRouter(prefix="/api/v1/enhanced", tags=["enhanced-features"]) + +# Global instances (will be initialized when needed) +economic_manager = None +political_manager = None +sentiment_analyzer = None +social_analyzer = None + +# Request/Response Models +class UniverseInfo(BaseModel): + name: str + description: str + asset_classes: List[str] + max_positions: Optional[int] + included_tickers: List[str] + sector_focus: Optional[List[str]] = None + +class EconomicIndicatorData(BaseModel): + name: str + value: float + previous_value: Optional[float] + change: Optional[float] + change_percent: Optional[float] + timestamp: datetime + importance: str + +class PoliticalEventData(BaseModel): + title: str + event_type: str + date: datetime + impact_level: str + market_impact_score: float + sentiment_score: float + affected_sectors: List[str] + +class SentimentData(BaseModel): + ticker: str + social_sentiment: float + mentions: int + trending_topics: List[str] + confidence: float + +class PortfolioAnalysisRequest(BaseModel): + tickers: List[str] + universe: str = "sp500" + portfolio: Dict[str, Any] + analyst_signals: Dict[str, Any] = Field(default_factory=dict) + +class PortfolioAnalysisResponse(BaseModel): + decisions: Dict[str, Any] + market_sentiment: float + economic_health: float + political_stability: float + market_regime: str + risk_level: str + sector_allocation: Dict[str, float] + +def get_api_keys() -> Dict[str, str]: + """Get API keys from environment""" + return { + "fred_api_key": os.getenv("FRED_API_KEY", ""), + "newsapi_key": os.getenv("NEWSAPI_KEY", ""), + "reddit_key": os.getenv("REDDIT_API_KEY", ""), + "twitter_key": os.getenv("TWITTER_API_KEY", ""), + "financial_datasets_api_key": os.getenv("FINANCIAL_DATASETS_API_KEY", "") + } + +async def get_managers(): + """Initialize and return global managers""" + global economic_manager, political_manager, sentiment_analyzer, social_analyzer + + api_keys = get_api_keys() + + if economic_manager is None: + economic_manager = EconomicDataManager(api_keys["fred_api_key"]) + if api_keys["fred_api_key"]: + try: + await economic_manager.start() + except Exception: + pass # Continue without FRED if API key is invalid + + if political_manager is None: + political_manager = PoliticalSignalsManager({ + "newsapi": api_keys["newsapi_key"], + "reddit": api_keys["reddit_key"], + "twitter": api_keys["twitter_key"] + }) + try: + await political_manager.start() + except Exception: + pass # Continue without political signals if APIs unavailable + + if sentiment_analyzer is None: + sentiment_analyzer = AdvancedSentimentAnalyzer() + + if social_analyzer is None: + social_analyzer = SocialMediaAnalyzer(api_keys) + try: + await social_analyzer.connect() + except Exception: + pass # Continue without social analysis if APIs unavailable + + return economic_manager, political_manager, sentiment_analyzer, social_analyzer + +@router.get("/status") +async def get_feature_status(): + """Get status of enhanced features based on available API keys""" + api_keys = get_api_keys() + + return { + "basic_features": True, + "economic_indicators": bool(api_keys["fred_api_key"]), + "political_signals": bool(api_keys["newsapi_key"]), + "social_sentiment": bool(api_keys["reddit_key"] or api_keys["twitter_key"]), + "financial_data": bool(api_keys["financial_datasets_api_key"]), + "api_keys_configured": { + "fred": bool(api_keys["fred_api_key"]), + "newsapi": bool(api_keys["newsapi_key"]), + "reddit": bool(api_keys["reddit_key"]), + "twitter": bool(api_keys["twitter_key"]), + "financial_datasets": bool(api_keys["financial_datasets_api_key"]) + } + } + +@router.get("/universes", response_model=List[UniverseInfo]) +async def get_trading_universes(): + """Get all available trading universes""" + universes = [] + + for name, universe in TRADING_UNIVERSES.items(): + universes.append(UniverseInfo( + name=name, + description=universe.description, + asset_classes=[str(ac) for ac in universe.asset_classes], + max_positions=universe.max_positions, + included_tickers=universe.included_tickers, + sector_focus=[str(s) for s in universe.sectors] if universe.sectors else None + )) + + return universes + +@router.get("/universes/{universe_name}") +async def get_universe_details(universe_name: str): + """Get detailed information about a specific trading universe""" + try: + universe = get_trading_universe(universe_name) + return UniverseInfo( + name=universe_name, + description=universe.description, + asset_classes=[str(ac) for ac in universe.asset_classes], + max_positions=universe.max_positions, + included_tickers=universe.included_tickers, + sector_focus=[str(s) for s in universe.sectors] if universe.sectors else None + ) + except ValueError: + raise HTTPException(status_code=404, detail=f"Universe '{universe_name}' not found") + +@router.get("/economic-indicators", response_model=List[EconomicIndicatorData]) +async def get_economic_indicators(): + """Get current economic indicators""" + try: + econ_manager, _, _, _ = await get_managers() + + if not econ_manager or not get_api_keys()["fred_api_key"]: + return [] + + indicators_dict = await econ_manager.get_all_indicators() + indicators = [] + + for name, indicator in indicators_dict.items(): + indicators.append(EconomicIndicatorData( + name=name, + value=indicator.value, + previous_value=indicator.previous_value, + change=indicator.change, + change_percent=indicator.change_percent, + timestamp=indicator.timestamp, + importance=indicator.importance.value + )) + + return indicators + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Error fetching economic indicators: {str(e)}") + +@router.get("/economic-summary") +async def get_economic_summary(): + """Get comprehensive economic summary""" + try: + econ_manager, _, _, _ = await get_managers() + + if not econ_manager or not get_api_keys()["fred_api_key"]: + return { + "health_score": 50.0, + "summary": "Economic data unavailable - FRED API key not configured", + "indicators": {}, + "upcoming_events": [] + } + + summary = await econ_manager.get_economic_summary() + return { + "health_score": summary["health_score"], + "summary": summary["summary"], + "indicators": {k: v.value for k, v in summary["indicators"].items()}, + "upcoming_events": len(summary["upcoming_events"]), + "last_updated": summary["last_updated"] + } + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Error fetching economic summary: {str(e)}") + +@router.get("/political-events", response_model=List[PoliticalEventData]) +async def get_political_events(days_back: int = 7, high_impact_only: bool = False): + """Get recent political events""" + try: + _, pol_manager, _, _ = await get_managers() + + if not pol_manager or not get_api_keys()["newsapi_key"]: + return [] + + await pol_manager.update_political_events() + + if high_impact_only: + events = pol_manager.get_high_impact_events(days_back) + else: + # Get all events from the last N days + cutoff_date = datetime.now() - timedelta(days=days_back) + events = [e for e in pol_manager.active_events if e.date >= cutoff_date] + + return [ + PoliticalEventData( + title=event.title, + event_type=event.event_type.value, + date=event.date, + impact_level=event.impact_level.value, + market_impact_score=event.market_impact_score, + sentiment_score=event.sentiment_score, + affected_sectors=[str(s) for s in event.affected_sectors] + ) + for event in events[:50] # Limit to 50 events + ] + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Error fetching political events: {str(e)}") + +@router.get("/sentiment/{ticker}", response_model=SentimentData) +async def get_ticker_sentiment(ticker: str): + """Get sentiment analysis for a specific ticker""" + try: + _, _, _, social_analyzer = await get_managers() + + if not social_analyzer: + return SentimentData( + ticker=ticker, + social_sentiment=0.0, + mentions=0, + trending_topics=[], + confidence=0.0 + ) + + sentiment = await social_analyzer.analyze_social_sentiment(ticker.upper()) + + return SentimentData( + ticker=ticker.upper(), + social_sentiment=sentiment.average_sentiment, + mentions=sentiment.total_mentions, + trending_topics=sentiment.trending_topics[:5], + confidence=sentiment.influence_score / max(sentiment.total_mentions, 1) + ) + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Error fetching sentiment for {ticker}: {str(e)}") + +@router.post("/sentiment/batch") +async def get_batch_sentiment(tickers: List[str]): + """Get sentiment analysis for multiple tickers""" + try: + results = {} + + for ticker in tickers[:10]: # Limit to 10 tickers + try: + sentiment_data = await get_ticker_sentiment(ticker) + results[ticker.upper()] = sentiment_data.dict() + except Exception: + results[ticker.upper()] = { + "ticker": ticker.upper(), + "social_sentiment": 0.0, + "mentions": 0, + "trending_topics": [], + "confidence": 0.0 + } + + return results + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Error fetching batch sentiment: {str(e)}") + +@router.post("/portfolio-analysis", response_model=PortfolioAnalysisResponse) +async def analyze_portfolio(request: PortfolioAnalysisRequest): + """Run enhanced portfolio analysis""" + try: + api_keys = get_api_keys() + + # Create a mock state for the enhanced portfolio manager + state = { + "messages": [], + "data": { + "tickers": request.tickers, + "portfolio": request.portfolio, + "analyst_signals": request.analyst_signals, + "start_date": (datetime.now() - timedelta(days=90)).strftime("%Y-%m-%d"), + "end_date": datetime.now().strftime("%Y-%m-%d") + }, + "metadata": {"show_reasoning": False} + } + + # Run enhanced portfolio analysis + result = await enhanced_portfolio_management_agent( + state, + request.universe, + api_keys + ) + + # Parse the result from the message content + if result["messages"]: + content = json.loads(result["messages"][-1].content) + decisions = content.get("decisions", {}) + portfolio_analysis = content.get("portfolio_analysis", {}) + + return PortfolioAnalysisResponse( + decisions=decisions, + market_sentiment=portfolio_analysis.get("market_sentiment", 0.0), + economic_health=portfolio_analysis.get("economic_health", 50.0), + political_stability=portfolio_analysis.get("political_stability", 50.0), + market_regime=portfolio_analysis.get("market_regime", "Unknown"), + risk_level=portfolio_analysis.get("risk_level", "Medium"), + sector_allocation={} # Could be calculated from decisions + ) + + else: + raise HTTPException(status_code=500, detail="No analysis results generated") + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Error analyzing portfolio: {str(e)}") + +@router.get("/market-overview") +async def get_market_overview(): + """Get comprehensive market overview combining all data sources""" + try: + # Get all data in parallel + tasks = [ + get_economic_summary(), + get_political_events(days_back=7, high_impact_only=True), + get_batch_sentiment(["SPY", "QQQ", "AAPL", "MSFT", "GOOGL"]) + ] + + economic_data, political_data, sentiment_data = await asyncio.gather( + *tasks, return_exceptions=True + ) + + # Handle exceptions gracefully + if isinstance(economic_data, Exception): + economic_data = {"health_score": 50.0, "summary": "Economic data unavailable"} + + if isinstance(political_data, Exception): + political_data = [] + + if isinstance(sentiment_data, Exception): + sentiment_data = {} + + # Calculate overall market sentiment + sentiment_scores = [data.get("social_sentiment", 0) for data in sentiment_data.values()] + avg_sentiment = sum(sentiment_scores) / len(sentiment_scores) if sentiment_scores else 0.0 + + # Determine market regime + economic_health = economic_data.get("health_score", 50) + high_impact_events = len(political_data) + + if economic_health > 70 and avg_sentiment > 0.2 and high_impact_events < 2: + market_regime = "Bull Market" + elif economic_health < 40 or avg_sentiment < -0.3 or high_impact_events > 3: + market_regime = "Bear Market" + elif abs(avg_sentiment) < 0.1 and 40 <= economic_health <= 70: + market_regime = "Sideways Market" + else: + market_regime = "Volatile Market" + + return { + "market_regime": market_regime, + "economic_health": economic_health, + "average_sentiment": avg_sentiment, + "high_impact_political_events": high_impact_events, + "economic_summary": economic_data.get("summary", ""), + "top_political_events": political_data[:3], + "sentiment_by_ticker": sentiment_data, + "last_updated": datetime.now().isoformat() + } + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Error fetching market overview: {str(e)}") + +@router.websocket("/ws/live-data") +async def websocket_live_data(websocket): + """WebSocket endpoint for live data streaming""" + await websocket.accept() + + try: + while True: + # Send market overview every 30 seconds + try: + market_data = await get_market_overview() + await websocket.send_json({ + "type": "market_overview", + "data": market_data, + "timestamp": datetime.now().isoformat() + }) + except Exception as e: + await websocket.send_json({ + "type": "error", + "message": str(e), + "timestamp": datetime.now().isoformat() + }) + + await asyncio.sleep(30) # Update every 30 seconds + + except Exception: + await websocket.close() + +# Cleanup function for graceful shutdown +async def cleanup_managers(): + """Clean up global managers""" + global economic_manager, political_manager, social_analyzer + + try: + if economic_manager: + await economic_manager.stop() + if political_manager: + await political_manager.stop() + if social_analyzer: + await social_analyzer.disconnect() + except Exception: + pass \ No newline at end of file diff --git a/app/frontend/package-lock.json b/app/frontend/package-lock.json index 2db2b991d..5c65e6658 100644 --- a/app/frontend/package-lock.json +++ b/app/frontend/package-lock.json @@ -10,6 +10,7 @@ "license": "MIT", "dependencies": { "@radix-ui/react-accordion": "^1.2.10", + "@radix-ui/react-checkbox": "^1.3.2", "@radix-ui/react-dialog": "^1.1.13", "@radix-ui/react-icons": "^1.3.2", "@radix-ui/react-popover": "^1.1.13", @@ -1233,6 +1234,77 @@ } } }, + "node_modules/@radix-ui/react-checkbox": { + "version": "1.3.2", + "resolved": "https://registry.npmjs.org/@radix-ui/react-checkbox/-/react-checkbox-1.3.2.tgz", + "integrity": "sha512-yd+dI56KZqawxKZrJ31eENUwqc1QSqg4OZ15rybGjF2ZNwMO+wCyHzAVLRp9qoYJf7kYy0YpZ2b0JCzJ42HZpA==", + "license": "MIT", + "dependencies": { + "@radix-ui/primitive": "1.1.2", + "@radix-ui/react-compose-refs": "1.1.2", + "@radix-ui/react-context": "1.1.2", + "@radix-ui/react-presence": "1.1.4", + "@radix-ui/react-primitive": "2.1.3", + "@radix-ui/react-use-controllable-state": "1.2.2", + "@radix-ui/react-use-previous": "1.1.1", + "@radix-ui/react-use-size": "1.1.1" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-checkbox/node_modules/@radix-ui/react-primitive": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.3.tgz", + "integrity": "sha512-m9gTwRkhy2lvCPe6QJp4d3G1TYEUHn/FzJUtq9MjH46an1wJU+GdoGC5VLof8RX8Ft/DlpshApkhswDLZzHIcQ==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-slot": "1.2.3" + }, + "peerDependencies": { + "@types/react": "*", + "@types/react-dom": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc", + "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + }, + "@types/react-dom": { + "optional": true + } + } + }, + "node_modules/@radix-ui/react-checkbox/node_modules/@radix-ui/react-slot": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz", + "integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==", + "license": "MIT", + "dependencies": { + "@radix-ui/react-compose-refs": "1.1.2" + }, + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, "node_modules/@radix-ui/react-collapsible": { "version": "1.1.11", "resolved": "https://registry.npmjs.org/@radix-ui/react-collapsible/-/react-collapsible-1.1.11.tgz", @@ -2280,6 +2352,21 @@ } } }, + "node_modules/@radix-ui/react-use-previous": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/@radix-ui/react-use-previous/-/react-use-previous-1.1.1.tgz", + "integrity": "sha512-2dHfToCj/pzca2Ck724OZ5L0EVrr3eHRNsG/b3xQJLA2hZpVCS99bLAX+hm1IHXDEnzU6by5z/5MIY794/a8NQ==", + "license": "MIT", + "peerDependencies": { + "@types/react": "*", + "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc" + }, + "peerDependenciesMeta": { + "@types/react": { + "optional": true + } + } + }, "node_modules/@radix-ui/react-use-rect": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/@radix-ui/react-use-rect/-/react-use-rect-1.1.1.tgz", diff --git a/app/frontend/package.json b/app/frontend/package.json index a3f4e0509..99f46d323 100644 --- a/app/frontend/package.json +++ b/app/frontend/package.json @@ -12,11 +12,15 @@ "@radix-ui/react-accordion": "^1.2.10", "@radix-ui/react-checkbox": "^1.3.2", "@radix-ui/react-dialog": "^1.1.13", + "@radix-ui/react-dropdown-menu": "^2.1.4", "@radix-ui/react-icons": "^1.3.2", + "@radix-ui/react-label": "^2.1.0", + "@radix-ui/react-navigation-menu": "^1.2.1", "@radix-ui/react-popover": "^1.1.13", "@radix-ui/react-separator": "^1.1.6", "@radix-ui/react-slot": "^1.2.0", "@radix-ui/react-tabs": "^1.1.11", + "@radix-ui/react-toggle-group": "^1.1.0", "@radix-ui/react-tooltip": "^1.2.6", "@types/react-syntax-highlighter": "^15.5.13", "@xyflow/react": "^12.5.1", @@ -29,6 +33,7 @@ "react-dom": "^18.2.0", "react-resizable-panels": "^3.0.1", "react-syntax-highlighter": "^15.6.1", + "recharts": "^2.13.3", "shadcn-ui": "^0.9.5", "sonner": "^2.0.5", "tailwind-merge": "^3.2.0" diff --git a/app/frontend/src/App.tsx b/app/frontend/src/App.tsx index 1e9f91117..d96c62f78 100644 --- a/app/frontend/src/App.tsx +++ b/app/frontend/src/App.tsx @@ -1,10 +1,52 @@ -import { Layout } from './components/layout'; +import { useState } from 'react'; +import { Layout } from './components/Layout'; +import { EnhancedLayout } from './components/enhanced-layout'; import { Toaster } from './components/ui/sonner'; +import { Badge } from './components/ui/badge'; +import { ToggleGroup, ToggleGroupItem } from './components/ui/toggle-group'; +import { + Workflow, + BarChart3 +} from 'lucide-react'; export default function App() { + const [currentMode, setCurrentMode] = useState<'enhanced' | 'workflow'>('enhanced'); + return ( <> - + {/* Mode Selection Header */} +
+
+
+
+

AI Hedge Fund Platform

+ v2.0 +
+ +
+ value && setCurrentMode(value as 'enhanced' | 'workflow')} + className="bg-gray-100 p-1 rounded-lg" + > + + + Trading Dashboard + + + + Workflow Builder + + +
+
+
+
+ + {/* Main Content */} + {currentMode === 'enhanced' ? : } + ); diff --git a/app/frontend/src/components/dashboard/portfolio-dashboard.tsx b/app/frontend/src/components/dashboard/portfolio-dashboard.tsx new file mode 100644 index 000000000..84523cde0 --- /dev/null +++ b/app/frontend/src/components/dashboard/portfolio-dashboard.tsx @@ -0,0 +1,504 @@ +import React, { useState, useEffect } from 'react'; +import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'; +import { Badge } from '@/components/ui/badge'; +import { Button } from '@/components/ui/button'; +import { Tabs, TabsContent, TabsList, TabsTrigger } from '@/components/ui/tabs'; +import { + TrendingUp, + TrendingDown, + DollarSign, + Target, + AlertTriangle, + RefreshCw, + BarChart3, + PieChart, + Activity, + Calendar, +} from 'lucide-react'; +import { LineChart, Line, AreaChart, Area, PieChart as RechartsPieChart, Cell, ResponsiveContainer, XAxis, YAxis, CartesianGrid, Tooltip, Legend } from 'recharts'; + +interface PortfolioMetrics { + totalValue: number; + cashBalance: number; + totalReturn: number; + totalReturnPercent: number; + dayChange: number; + dayChangePercent: number; + allocatedValue: number; + availableCash: number; +} + +interface Position { + ticker: string; + shares: number; + averagePrice: number; + currentPrice: number; + marketValue: number; + unrealizedPnL: number; + unrealizedPnLPercent: number; + weight: number; + sector: string; +} + +interface MarketOverview { + marketRegime: string; + economicHealth: number; + averageSentiment: number; + highImpactPoliticalEvents: number; + economicSummary: string; + lastUpdated: string; +} + +const COLORS = ['#0088FE', '#00C49F', '#FFBB28', '#FF8042', '#8884D8', '#82CA9D']; + +export function PortfolioDashboard() { + const [portfolioMetrics, setPortfolioMetrics] = useState(null); + const [positions, setPositions] = useState([]); + const [marketOverview, setMarketOverview] = useState(null); + const [isLoading, setIsLoading] = useState(true); + const [lastRefresh, setLastRefresh] = useState(new Date()); + + const fetchPortfolioData = async () => { + setIsLoading(true); + try { + // Simulate API calls (replace with actual API endpoints) + const mockMetrics: PortfolioMetrics = { + totalValue: 125650.75, + cashBalance: 25650.75, + totalReturn: 25650.75, + totalReturnPercent: 25.65, + dayChange: 1250.30, + dayChangePercent: 1.01, + allocatedValue: 100000, + availableCash: 25650.75, + }; + + const mockPositions: Position[] = [ + { + ticker: 'AAPL', + shares: 100, + averagePrice: 150.00, + currentPrice: 175.50, + marketValue: 17550, + unrealizedPnL: 2550, + unrealizedPnLPercent: 17.0, + weight: 0.35, + sector: 'Technology', + }, + { + ticker: 'MSFT', + shares: 75, + averagePrice: 250.00, + currentPrice: 280.25, + marketValue: 21018.75, + unrealizedPnL: 2268.75, + unrealizedPnLPercent: 12.1, + weight: 0.42, + sector: 'Technology', + }, + { + ticker: 'GOOGL', + shares: 25, + averagePrice: 120.00, + currentPrice: 138.75, + marketValue: 3468.75, + unrealizedPnL: 468.75, + unrealizedPnLPercent: 15.6, + weight: 0.07, + sector: 'Technology', + }, + ]; + + // Fetch market overview + const response = await fetch('/api/v1/enhanced/market-overview'); + const marketData = response.ok ? await response.json() : { + marketRegime: 'Bull Market', + economicHealth: 72.5, + averageSentiment: 0.35, + highImpactPoliticalEvents: 1, + economicSummary: 'Economic outlook appears positive', + lastUpdated: new Date().toISOString(), + }; + + setPortfolioMetrics(mockMetrics); + setPositions(mockPositions); + setMarketOverview(marketData); + setLastRefresh(new Date()); + } catch (error) { + console.error('Error fetching portfolio data:', error); + } finally { + setIsLoading(false); + } + }; + + useEffect(() => { + fetchPortfolioData(); + // Refresh every 5 minutes + const interval = setInterval(fetchPortfolioData, 5 * 60 * 1000); + return () => clearInterval(interval); + }, []); + + const sectorAllocation = positions.reduce((acc, position) => { + const sector = position.sector; + if (!acc[sector]) { + acc[sector] = { value: 0, count: 0 }; + } + acc[sector].value += position.marketValue; + acc[sector].count += 1; + return acc; + }, {} as Record); + + const sectorData = Object.entries(sectorAllocation).map(([sector, data]) => ({ + name: sector, + value: data.value, + count: data.count, + })); + + const performanceData = [ + { date: '2024-01', value: 100000 }, + { date: '2024-02', value: 105000 }, + { date: '2024-03', value: 108000 }, + { date: '2024-04', value: 112000 }, + { date: '2024-05', value: 118000 }, + { date: '2024-06', value: 125650 }, + ]; + + const getMarketRegimeColor = (regime: string) => { + switch (regime.toLowerCase()) { + case 'bull market': return 'bg-green-500'; + case 'bear market': return 'bg-red-500'; + case 'sideways market': return 'bg-yellow-500'; + case 'volatile market': return 'bg-orange-500'; + default: return 'bg-gray-500'; + } + }; + + const getMarketRegimeIcon = (regime: string) => { + switch (regime.toLowerCase()) { + case 'bull market': return ; + case 'bear market': return ; + default: return ; + } + }; + + if (isLoading && !portfolioMetrics) { + return ( +
+
+ {[...Array(4)].map((_, i) => ( + + +
+
+
+ ))} +
+
+ ); + } + + return ( +
+ {/* Header */} +
+
+

Portfolio Dashboard

+

+ Last updated: {lastRefresh.toLocaleTimeString()} +

+
+
+ +
+
+ + {/* Market Overview */} + {marketOverview && ( + + + + + Market Overview + + + +
+
+
+
+

Market Regime

+

+ {getMarketRegimeIcon(marketOverview.marketRegime)} + {marketOverview.marketRegime} +

+
+
+
+

Economic Health

+

{marketOverview.economicHealth.toFixed(1)}/100

+
+
+
+
+
+

Market Sentiment

+
+

+ {marketOverview.averageSentiment > 0 ? '+' : ''}{(marketOverview.averageSentiment * 100).toFixed(1)}% +

+ {marketOverview.averageSentiment > 0 ? ( + + ) : ( + + )} +
+
+
+

Political Events

+
+

{marketOverview.highImpactPoliticalEvents}

+ +
+
+
+
+
+ )} + + {/* Key Metrics */} + {portfolioMetrics && ( +
+ + +
+
+

Total Value

+

${portfolioMetrics.totalValue.toLocaleString()}

+
+ +
+
+ + + +{portfolioMetrics.dayChangePercent.toFixed(2)}% today + +
+
+
+ + + +
+
+

Total Return

+

+ ${portfolioMetrics.totalReturn.toLocaleString()} +

+
+ +
+
+ + +{portfolioMetrics.totalReturnPercent.toFixed(2)}% + +
+
+
+ + + +
+
+

Cash Balance

+

${portfolioMetrics.cashBalance.toLocaleString()}

+
+ +
+
+

+ {((portfolioMetrics.cashBalance / portfolioMetrics.totalValue) * 100).toFixed(1)}% of portfolio +

+
+
+
+ + + +
+
+

Day Change

+

+ ${portfolioMetrics.dayChange.toLocaleString()} +

+
+ +
+
+ + +{portfolioMetrics.dayChangePercent.toFixed(2)}% + +
+
+
+
+ )} + + {/* Charts and Tables */} + + + Performance + Positions + Allocation + + + + + + + + Portfolio Performance + + + + + + + + + + + + + + + + + + + + Current Positions + + +
+ {positions.map((position) => ( +
+
+
+ + {position.ticker.substring(0, 2)} + +
+
+

{position.ticker}

+

{position.sector}

+
+
+
+

${position.marketValue.toLocaleString()}

+

+ {position.shares} shares @ ${position.currentPrice.toFixed(2)} +

+
+
+

= 0 ? 'text-green-600' : 'text-red-600'}`}> + {position.unrealizedPnL >= 0 ? '+' : ''}${position.unrealizedPnL.toLocaleString()} +

+

= 0 ? 'text-green-600' : 'text-red-600'}`}> + {position.unrealizedPnLPercent >= 0 ? '+' : ''}{position.unrealizedPnLPercent.toFixed(2)}% +

+
+
+

{(position.weight * 100).toFixed(1)}%

+

weight

+
+
+ ))} +
+
+
+
+ + +
+ + + + + Sector Allocation + + + + + + + {sectorData.map((entry, index) => ( + + ))} + + + + + + + + + + + Allocation Details + + +
+ {sectorData.map((sector, index) => ( +
+
+
+ {sector.name} +
+
+

${sector.value.toLocaleString()}

+

+ {((sector.value / portfolioMetrics!.totalValue) * 100).toFixed(1)}% +

+
+
+ ))} +
+
+
+
+
+
+
+ ); +} \ No newline at end of file diff --git a/app/frontend/src/components/enhanced-layout.tsx b/app/frontend/src/components/enhanced-layout.tsx new file mode 100644 index 000000000..54a10d410 --- /dev/null +++ b/app/frontend/src/components/enhanced-layout.tsx @@ -0,0 +1,448 @@ +import React, { useState, useEffect } from 'react'; +import { MainNav } from '@/components/navigation/main-nav'; +import { PortfolioDashboard } from '@/components/dashboard/portfolio-dashboard'; +import { UniverseSelection } from '@/components/universe/universe-selection'; +import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'; +import { Badge } from '@/components/ui/badge'; +import { Button } from '@/components/ui/button'; +import { Tabs, TabsContent, TabsList, TabsTrigger } from '@/components/ui/tabs'; +import { + BarChart3, + Globe, + MessageSquare, + TrendingUp, + AlertTriangle, + Settings, + Zap, + LineChart, + Activity, + RefreshCw, +} from 'lucide-react'; + +// Feature Status Interface +interface FeatureStatus { + basicFeatures: boolean; + economicIndicators: boolean; + politicalSignals: boolean; + socialSentiment: boolean; + financialData: boolean; + apiKeysConfigured: { + fred: boolean; + newsapi: boolean; + reddit: boolean; + twitter: boolean; + financialDatasets: boolean; + }; +} + +// Placeholder components for features not yet implemented +const EconomicIndicators = () => ( +
+

Economic Indicators

+
+ + + + + Fed Funds Rate + + + +
5.25%
+

+0.25% from last meeting

+
+
+ + + + + Unemployment Rate + + + +
3.8%
+

-0.1% from last month

+
+
+ + + + + Core CPI + + + +
2.1%
+

Target: 2.0%

+
+
+
+ + + Economic Health Score + + +
+
72.5/100
+
+
+
+
+
+ + Positive + +
+

+ Economic outlook appears positive with stable employment and controlled inflation. +

+
+
+
+); + +const PoliticalSignals = () => ( +
+

Political Signals

+
+ + + + + Recent Events + + + +
+
+

Infrastructure Bill Discussion

+

+ Congress debates $1.2T infrastructure package affecting industrials sector +

+
+ Medium Impact + 2 hours ago +
+
+
+

Trade Agreement Update

+

+ Positive developments in US-EU trade relations +

+
+ Low Impact + 1 day ago +
+
+
+
+
+ + + + + Risk Assessment + + + +
68/100
+

Political Stability Score

+
+
+ Election Risk + Low +
+
+ Policy Changes + Medium +
+
+ Geopolitical + Low +
+
+
+
+
+
+); + +const SentimentAnalysis = () => ( +
+

Sentiment Analysis

+
+ {['AAPL', 'MSFT', 'GOOGL', 'TSLA'].map((ticker) => ( + + + + {ticker} + + + + +
+
+0.45
+

Sentiment Score

+
+ 1,234 mentions +
+
+ Trending: earnings beat +
+
+
+
+ ))} +
+ + + Overall Market Sentiment + + +
+
+35%
+

Bullish sentiment across major indices

+
+ Fear & Greed: 65 + VIX: 18.5 + Put/Call: 0.85 +
+
+
+
+
+); + +const MarketOverview = () => ( +
+

Market Overview

+
+ + +
+
+

Market Regime

+

Bull Market

+
+ +
+
+ + Strong Momentum + +
+
+
+ + + +
+
+

Economic Health

+

72.5/100

+
+ +
+
+ + Positive + +
+
+
+ + + +
+
+

Sentiment

+

+35%

+
+ +
+
+ + Bullish + +
+
+
+ + + +
+
+

Political Risk

+

Low

+
+ +
+
+ + Stable + +
+
+
+
+ + + + Market Summary + + +

+ Current market conditions favor growth strategies with strong economic fundamentals + supporting continued bull market momentum. Low political risk and positive sentiment + indicators suggest favorable conditions for equity investments. +

+
+
+
+); + +const ComingSoon = ({ title }: { title: string }) => ( +
+

{title}

+ + + +

Coming Soon

+

+ This feature is under development and will be available in the next release. +

+
+
+
+); + +export function EnhancedLayout() { + const [currentView, setCurrentView] = useState('portfolio-dashboard'); + const [featureStatus, setFeatureStatus] = useState(null); + + useEffect(() => { + // Fetch feature status on component mount + fetchFeatureStatus(); + }, []); + + const fetchFeatureStatus = async () => { + try { + const response = await fetch('/api/v1/enhanced/status'); + if (response.ok) { + const status = await response.json(); + setFeatureStatus(status); + } + } catch (error) { + console.error('Error fetching feature status:', error); + // Fallback to default status + setFeatureStatus({ + basicFeatures: true, + economicIndicators: false, + politicalSignals: false, + socialSentiment: false, + financialData: false, + apiKeysConfigured: { + fred: false, + newsapi: false, + reddit: false, + twitter: false, + financialDatasets: false, + }, + }); + } + }; + + const renderContent = () => { + switch (currentView) { + case 'portfolio': + case 'portfolio-dashboard': + return ; + case 'universe': + case 'universe-selection': + return ; + case 'economic-indicators': + return ; + case 'political-signals': + return ; + case 'sentiment-analysis': + return ; + case 'market-overview': + return ; + case 'portfolio-holdings': + return ; + case 'portfolio-performance': + return ; + case 'portfolio-risk': + return ; + case 'universe-screener': + return ; + case 'universe-sectors': + return ; + case 'agent-performance': + return ; + case 'agent-config': + return ; + case 'decision-flow': + return ; + case 'settings': + return ; + default: + return ; + } + }; + + return ( +
+ {/* Top Navigation */} +
+
+
+
+

AI Hedge Fund

+ Enhanced +
+ +
+
+
+ + {/* Feature Status Bar */} + {featureStatus && ( +
+
+ Features: +
+
+ Basic +
+
+
+ Economic +
+
+
+ Political +
+
+
+ Sentiment +
+ {!featureStatus.economicIndicators && ( + + Add API keys in .env to enable all features + + )} +
+
+ )} + + {/* Main Content */} +
+ {renderContent()} +
+
+ ); +} \ No newline at end of file diff --git a/app/frontend/src/components/navigation/main-nav.tsx b/app/frontend/src/components/navigation/main-nav.tsx new file mode 100644 index 000000000..c0e689fd6 --- /dev/null +++ b/app/frontend/src/components/navigation/main-nav.tsx @@ -0,0 +1,278 @@ +import React from 'react'; +import { cn } from '@/lib/utils'; +import { Button } from '@/components/ui/button'; +import { Badge } from '@/components/ui/badge'; +import { + NavigationMenu, + NavigationMenuContent, + NavigationMenuItem, + NavigationMenuLink, + NavigationMenuList, + NavigationMenuTrigger, +} from '@/components/ui/navigation-menu'; +import { + DropdownMenu, + DropdownMenuContent, + DropdownMenuItem, + DropdownMenuLabel, + DropdownMenuSeparator, + DropdownMenuTrigger, +} from '@/components/ui/dropdown-menu'; +import { + BarChart3, + TrendingUp, + Globe, + MessageSquare, + Settings, + Target, + Activity, + PieChart, + Briefcase, + DollarSign, + AlertTriangle, + LineChart, + Map, + Users, + Zap, + Database, +} from 'lucide-react'; + +interface MainNavProps { + currentView: string; + onViewChange: (view: string) => void; + className?: string; +} + +const navigationItems = [ + { + title: 'Portfolio', + href: 'portfolio', + icon: Briefcase, + description: 'Portfolio overview and performance', + items: [ + { + title: 'Dashboard', + href: 'portfolio-dashboard', + icon: PieChart, + description: 'Real-time portfolio analysis and metrics', + }, + { + title: 'Holdings', + href: 'portfolio-holdings', + icon: DollarSign, + description: 'Current positions and allocations', + }, + { + title: 'Performance', + href: 'portfolio-performance', + icon: TrendingUp, + description: 'Historical performance and returns', + }, + { + title: 'Risk Analysis', + href: 'portfolio-risk', + icon: AlertTriangle, + description: 'Risk metrics and exposure analysis', + }, + ], + }, + { + title: 'Trading Universe', + href: 'universe', + icon: Target, + description: 'Investment universe selection and management', + items: [ + { + title: 'Universe Selection', + href: 'universe-selection', + icon: Target, + description: 'Choose your trading universe and criteria', + }, + { + title: 'Screener', + href: 'universe-screener', + icon: Database, + description: 'Screen securities by fundamental criteria', + }, + { + title: 'Sector Analysis', + href: 'universe-sectors', + icon: Map, + description: 'Sector allocation and rotation analysis', + }, + ], + }, + { + title: 'Market Intelligence', + href: 'intelligence', + icon: Activity, + description: 'Real-time market analysis and insights', + items: [ + { + title: 'Economic Indicators', + href: 'economic-indicators', + icon: BarChart3, + description: 'Fed policy, GDP, inflation, and employment data', + }, + { + title: 'Political Signals', + href: 'political-signals', + icon: Globe, + description: 'Political events and policy impact analysis', + }, + { + title: 'Sentiment Analysis', + href: 'sentiment-analysis', + icon: MessageSquare, + description: 'Social media and news sentiment tracking', + }, + { + title: 'Market Overview', + href: 'market-overview', + icon: LineChart, + description: 'Comprehensive market regime analysis', + }, + ], + }, + { + title: 'AI Agents', + href: 'agents', + icon: Users, + description: 'AI analyst configuration and insights', + items: [ + { + title: 'Agent Performance', + href: 'agent-performance', + icon: TrendingUp, + description: 'Track AI agent accuracy and performance', + }, + { + title: 'Agent Configuration', + href: 'agent-config', + icon: Settings, + description: 'Configure and customize AI analysts', + }, + { + title: 'Decision Flow', + href: 'decision-flow', + icon: Zap, + description: 'Visual workflow builder for trading logic', + }, + ], + }, +]; + +export function MainNav({ currentView, onViewChange, className }: MainNavProps) { + const handleItemClick = (href: string) => { + onViewChange(href); + }; + + const getCurrentSection = () => { + return navigationItems.find(item => + item.href === currentView || + item.items?.some(subItem => subItem.href === currentView) + )?.title || 'Portfolio'; + }; + + return ( +
+ {/* Main Navigation Menu */} + + + {navigationItems.map((item) => ( + + + + {item.title} + {item.title === 'Market Intelligence' && ( + + Live + + )} + + +
+
+ + + +
+
+ {item.items?.map((subItem) => ( + + + + ))} +
+
+
+
+ ))} +
+
+ + {/* Quick Actions */} +
+ + + + + + Quick Actions + + handleItemClick('portfolio-analysis')}> + + Run Portfolio Analysis + + handleItemClick('market-scan')}> + + Market Scan + + handleItemClick('risk-check')}> + + Risk Check + + + handleItemClick('settings')}> + + Settings + + + + + {/* Current View Indicator */} + + + {getCurrentSection()} + +
+
+ ); +} \ No newline at end of file diff --git a/app/frontend/src/components/panels/left/flow-actions.tsx b/app/frontend/src/components/panels/left/flow-actions.tsx index eda7b856b..6db9f99ce 100644 --- a/app/frontend/src/components/panels/left/flow-actions.tsx +++ b/app/frontend/src/components/panels/left/flow-actions.tsx @@ -11,11 +11,11 @@ interface FlowActionsProps { export function FlowActions({ onSave, onCreate, onToggleCollapse }: FlowActionsProps) { const { currentFlowName, isUnsaved } = useFlowContext(); - +console.log('test') return (
- - Flows + + {isUnsaved && *}
@@ -24,7 +24,7 @@ export function FlowActions({ onSave, onCreate, onToggleCollapse }: FlowActionsP size="icon" onClick={onSave} className={cn( - "h-6 w-6 text-white hover:bg-ramp-grey-700", + "h-6 w-6 hover:bg-ramp-grey-700", isUnsaved && "text-yellow-400" )} title={`Save "${currentFlowName}"`} @@ -35,7 +35,7 @@ export function FlowActions({ onSave, onCreate, onToggleCollapse }: FlowActionsP variant="ghost" size="icon" onClick={onCreate} - className="h-6 w-6 text-white hover:bg-ramp-grey-700" + className="h-6 w-6 hover:bg-ramp-grey-700" title="Create new flow" > diff --git a/app/frontend/src/components/tabs/flow-tab-content.tsx b/app/frontend/src/components/tabs/flow-tab-content.tsx index 4fb8f44c3..5ed84b0f1 100644 --- a/app/frontend/src/components/tabs/flow-tab-content.tsx +++ b/app/frontend/src/components/tabs/flow-tab-content.tsx @@ -1,4 +1,4 @@ -import { Flow } from '@/components/flow'; +import { Flow } from '@/components/Flow'; import { useFlowContext } from '@/contexts/flow-context'; import { useTabsContext } from '@/contexts/tabs-context'; import { cn } from '@/lib/utils'; diff --git a/app/frontend/src/components/ui/dropdown-menu.tsx b/app/frontend/src/components/ui/dropdown-menu.tsx new file mode 100644 index 000000000..3a79f447f --- /dev/null +++ b/app/frontend/src/components/ui/dropdown-menu.tsx @@ -0,0 +1,198 @@ +import * as React from "react" +import * as DropdownMenuPrimitive from "@radix-ui/react-dropdown-menu" +import { Check, ChevronRight, Circle } from "lucide-react" + +import { cn } from "@/lib/utils" + +const DropdownMenu = DropdownMenuPrimitive.Root + +const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger + +const DropdownMenuGroup = DropdownMenuPrimitive.Group + +const DropdownMenuPortal = DropdownMenuPrimitive.Portal + +const DropdownMenuSub = DropdownMenuPrimitive.Sub + +const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup + +const DropdownMenuSubTrigger = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef & { + inset?: boolean + } +>(({ className, inset, children, ...props }, ref) => ( + + {children} + + +)) +DropdownMenuSubTrigger.displayName = + DropdownMenuPrimitive.SubTrigger.displayName + +const DropdownMenuSubContent = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, ...props }, ref) => ( + +)) +DropdownMenuSubContent.displayName = + DropdownMenuPrimitive.SubContent.displayName + +const DropdownMenuContent = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, sideOffset = 4, ...props }, ref) => ( + + + +)) +DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName + +const DropdownMenuItem = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef & { + inset?: boolean + } +>(({ className, inset, ...props }, ref) => ( + +)) +DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName + +const DropdownMenuCheckboxItem = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, children, checked, ...props }, ref) => ( + + + + + + + {children} + +)) +DropdownMenuCheckboxItem.displayName = + DropdownMenuPrimitive.CheckboxItem.displayName + +const DropdownMenuRadioItem = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, children, ...props }, ref) => ( + + + + + + + {children} + +)) +DropdownMenuRadioItem.displayName = DropdownMenuPrimitive.RadioItem.displayName + +const DropdownMenuLabel = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef & { + inset?: boolean + } +>(({ className, inset, ...props }, ref) => ( + +)) +DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName + +const DropdownMenuSeparator = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, ...props }, ref) => ( + +)) +DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName + +const DropdownMenuShortcut = ({ + className, + ...props +}: React.HTMLAttributes) => { + return ( + + ) +} +DropdownMenuShortcut.displayName = "DropdownMenuShortcut" + +export { + DropdownMenu, + DropdownMenuTrigger, + DropdownMenuContent, + DropdownMenuItem, + DropdownMenuCheckboxItem, + DropdownMenuRadioItem, + DropdownMenuLabel, + DropdownMenuSeparator, + DropdownMenuShortcut, + DropdownMenuGroup, + DropdownMenuPortal, + DropdownMenuSub, + DropdownMenuSubContent, + DropdownMenuSubTrigger, + DropdownMenuRadioGroup, +} \ No newline at end of file diff --git a/app/frontend/src/components/ui/label.tsx b/app/frontend/src/components/ui/label.tsx new file mode 100644 index 000000000..a7cafcd2f --- /dev/null +++ b/app/frontend/src/components/ui/label.tsx @@ -0,0 +1,24 @@ +import * as React from "react" +import * as LabelPrimitive from "@radix-ui/react-label" +import { cva, type VariantProps } from "class-variance-authority" + +import { cn } from "@/lib/utils" + +const labelVariants = cva( + "text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70" +) + +const Label = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef & + VariantProps +>(({ className, ...props }, ref) => ( + +)) +Label.displayName = LabelPrimitive.Root.displayName + +export { Label } \ No newline at end of file diff --git a/app/frontend/src/components/ui/navigation-menu.tsx b/app/frontend/src/components/ui/navigation-menu.tsx new file mode 100644 index 000000000..c6d815cfa --- /dev/null +++ b/app/frontend/src/components/ui/navigation-menu.tsx @@ -0,0 +1,128 @@ +import * as React from "react" +import * as NavigationMenuPrimitive from "@radix-ui/react-navigation-menu" +import { cva } from "class-variance-authority" +import { ChevronDown } from "lucide-react" + +import { cn } from "@/lib/utils" + +const NavigationMenu = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, children, ...props }, ref) => ( + + {children} + + +)) +NavigationMenu.displayName = NavigationMenuPrimitive.Root.displayName + +const NavigationMenuList = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, ...props }, ref) => ( + +)) +NavigationMenuList.displayName = NavigationMenuPrimitive.List.displayName + +const NavigationMenuItem = NavigationMenuPrimitive.Item + +const navigationMenuTriggerStyle = cva( + "group inline-flex h-10 w-max items-center justify-center rounded-md bg-background px-4 py-2 text-sm font-medium transition-colors hover:bg-accent hover:text-accent-foreground focus:bg-accent focus:text-accent-foreground focus:outline-none disabled:pointer-events-none disabled:opacity-50 data-[active]:bg-accent/50 data-[state=open]:bg-accent/50" +) + +const NavigationMenuTrigger = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, children, ...props }, ref) => ( + + {children}{" "} + +)) +NavigationMenuTrigger.displayName = NavigationMenuPrimitive.Trigger.displayName + +const NavigationMenuContent = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, ...props }, ref) => ( + +)) +NavigationMenuContent.displayName = NavigationMenuPrimitive.Content.displayName + +const NavigationMenuLink = NavigationMenuPrimitive.Link + +const NavigationMenuViewport = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, ...props }, ref) => ( +
+ +
+)) +NavigationMenuViewport.displayName = + NavigationMenuPrimitive.Viewport.displayName + +const NavigationMenuIndicator = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef +>(({ className, ...props }, ref) => ( + +
+ +)) +NavigationMenuIndicator.displayName = + NavigationMenuPrimitive.Indicator.displayName + +export { + navigationMenuTriggerStyle, + NavigationMenu, + NavigationMenuList, + NavigationMenuItem, + NavigationMenuContent, + NavigationMenuTrigger, + NavigationMenuLink, + NavigationMenuIndicator, + NavigationMenuViewport, +} \ No newline at end of file diff --git a/app/frontend/src/components/ui/toggle-group.tsx b/app/frontend/src/components/ui/toggle-group.tsx new file mode 100644 index 000000000..d67a0ba37 --- /dev/null +++ b/app/frontend/src/components/ui/toggle-group.tsx @@ -0,0 +1,61 @@ +import * as React from "react" +import * as ToggleGroupPrimitive from "@radix-ui/react-toggle-group" +import { cva, type VariantProps } from "class-variance-authority" + +import { cn } from "@/lib/utils" + +const toggleGroupVariants = cva( + "inline-flex items-center justify-center rounded-md text-sm font-medium ring-offset-background transition-colors hover:bg-muted hover:text-muted-foreground focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-ring focus-visible:ring-offset-2 disabled:pointer-events-none disabled:opacity-50 data-[state=on]:bg-accent data-[state=on]:text-accent-foreground", + { + variants: { + variant: { + default: "bg-transparent", + outline: + "border border-input bg-transparent hover:bg-accent hover:text-accent-foreground", + }, + size: { + default: "h-10 px-3", + sm: "h-9 px-2.5", + lg: "h-11 px-5", + }, + }, + defaultVariants: { + variant: "default", + size: "default", + }, + } +) + +const ToggleGroup = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef & + VariantProps +>(({ className, variant, size, children, ...props }, ref) => ( + + {children} + +)) + +ToggleGroup.displayName = ToggleGroupPrimitive.Root.displayName + +const ToggleGroupItem = React.forwardRef< + React.ElementRef, + React.ComponentPropsWithoutRef & + VariantProps +>(({ className, children, variant, size, ...props }, ref) => ( + + {children} + +)) + +ToggleGroupItem.displayName = ToggleGroupPrimitive.Item.displayName + +export { ToggleGroup, ToggleGroupItem } \ No newline at end of file diff --git a/app/frontend/src/components/universe/universe-selection.tsx b/app/frontend/src/components/universe/universe-selection.tsx new file mode 100644 index 000000000..dfc81d13f --- /dev/null +++ b/app/frontend/src/components/universe/universe-selection.tsx @@ -0,0 +1,533 @@ +import React, { useState, useEffect } from 'react'; +import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'; +import { Button } from '@/components/ui/button'; +import { Badge } from '@/components/ui/badge'; +import { Tabs, TabsContent, TabsList, TabsTrigger } from '@/components/ui/tabs'; +import { Input } from '@/components/ui/input'; +import { Label } from '@/components/ui/label'; +import { Separator } from '@/components/ui/separator'; +import { + Target, + Settings, + TrendingUp, + Shield, + Zap, + BarChart3, + RefreshCw, + CheckCircle, + Circle, + Search, + Filter, + Globe, + Building, + Cpu, + HeartHandshake, + Fuel, + Home, +} from 'lucide-react'; + +interface UniverseInfo { + name: string; + description: string; + assetClasses: string[]; + maxPositions?: number; + includedTickers: string[]; + sectorFocus?: string[]; +} + +interface UniverseConfig { + selectedUniverse: string; + customFilters: { + minMarketCap?: number; + maxPositions?: number; + sectors?: string[]; + excludedTickers?: string[]; + }; +} + +const SECTOR_ICONS = { + technology: Cpu, + healthcare: HeartHandshake, + financials: Building, + energy: Fuel, + 'real_estate': Home, + communication: Globe, + industrials: Settings, + consumer_discretionary: TrendingUp, + consumer_staples: Shield, + utilities: Zap, + materials: BarChart3, +}; + +const UNIVERSE_CARDS = [ + { + name: 'sp500', + title: 'S&P 500 Universe', + description: 'Large-cap stocks with high liquidity and institutional quality', + icon: Target, + color: 'bg-blue-500', + features: ['500+ Securities', 'High Liquidity', 'Low Volatility', 'Institutional Grade'], + recommendedFor: 'Conservative growth strategies', + }, + { + name: 'tech', + title: 'High-Volume Tech', + description: 'Technology sector focus with high volume and growth potential', + icon: Cpu, + color: 'bg-purple-500', + features: ['30 Positions Max', 'High Growth', 'Tech Focus', 'Innovation Leaders'], + recommendedFor: 'Aggressive growth strategies', + }, + { + name: 'sector_etf', + title: 'Sector ETFs', + description: 'Diversified sector rotation with broad market exposure', + icon: BarChart3, + color: 'bg-green-500', + features: ['Sector Rotation', '15 ETFs Max', 'Diversified', 'Low Fees'], + recommendedFor: 'Balanced diversification', + }, + { + name: 'options', + title: 'Options Universe', + description: 'Liquid options markets for complex strategies and hedging', + icon: Zap, + color: 'bg-yellow-500', + features: ['Options Trading', 'High Liquidity', 'Greeks Analysis', 'Advanced Strategies'], + recommendedFor: 'Sophisticated trading strategies', + }, + { + name: 'conservative', + title: 'Conservative Large Cap', + description: 'Blue-chip dividend stocks with stable fundamentals', + icon: Shield, + color: 'bg-indigo-500', + features: ['Dividend Focus', 'Low Risk', 'Quality Stocks', 'Stable Returns'], + recommendedFor: 'Income-focused strategies', + }, + { + name: 'aggressive_growth', + title: 'Aggressive Growth', + description: 'High-growth tech stocks combined with options strategies', + icon: TrendingUp, + color: 'bg-red-500', + features: ['Maximum Returns', 'Tech + Options', 'High Risk', 'Active Management'], + recommendedFor: 'Maximum growth potential', + }, +]; + +export function UniverseSelection() { + const [universes, setUniverses] = useState([]); + const [selectedUniverse, setSelectedUniverse] = useState('sp500'); + const [universeDetails, setUniverseDetails] = useState(null); + const [config, setConfig] = useState({ + selectedUniverse: 'sp500', + customFilters: {}, + }); + const [isLoading, setIsLoading] = useState(true); + const [searchQuery, setSearchQuery] = useState(''); + + const fetchUniverses = async () => { + try { + const response = await fetch('/api/v1/enhanced/universes'); + if (response.ok) { + const data = await response.json(); + setUniverses(data); + } + } catch (error) { + console.error('Error fetching universes:', error); + // Fallback to mock data + setUniverses([ + { + name: 'sp500', + description: 'S&P 500 companies with high liquidity', + assetClasses: ['equity'], + maxPositions: 50, + includedTickers: ['AAPL', 'MSFT', 'GOOGL', 'AMZN'], + sectorFocus: [], + }, + ]); + } finally { + setIsLoading(false); + } + }; + + const fetchUniverseDetails = async (universeName: string) => { + try { + const response = await fetch(`/api/v1/enhanced/universes/${universeName}`); + if (response.ok) { + const data = await response.json(); + setUniverseDetails(data); + } + } catch (error) { + console.error('Error fetching universe details:', error); + } + }; + + useEffect(() => { + fetchUniverses(); + }, []); + + useEffect(() => { + if (selectedUniverse) { + fetchUniverseDetails(selectedUniverse); + } + }, [selectedUniverse]); + + const handleUniverseSelect = (universeName: string) => { + setSelectedUniverse(universeName); + setConfig(prev => ({ + ...prev, + selectedUniverse: universeName, + })); + }; + + const applyUniverseConfig = async () => { + try { + // Here you would save the configuration to the backend + console.log('Applying universe configuration:', config); + + // Show success message + alert('Universe configuration applied successfully!'); + } catch (error) { + console.error('Error applying universe configuration:', error); + alert('Error applying configuration. Please try again.'); + } + }; + + const filteredUniverseCards = UNIVERSE_CARDS.filter(card => + card.title.toLowerCase().includes(searchQuery.toLowerCase()) || + card.description.toLowerCase().includes(searchQuery.toLowerCase()) + ); + + const getUniverseCard = (universeName: string) => { + return UNIVERSE_CARDS.find(card => card.name === universeName); + }; + + return ( +
+ {/* Header */} +
+
+

Trading Universe Selection

+

+ Choose your investment universe and configure trading parameters +

+
+
+ + +
+
+ + + + Universe Selection + Configuration + Analysis + + + + {/* Search and Filter */} + + +
+
+
+ + setSearchQuery(e.target.value)} + className="pl-10" + /> +
+
+ +
+
+
+ + {/* Universe Cards */} +
+ {filteredUniverseCards.map((universe) => { + const IconComponent = universe.icon; + const isSelected = selectedUniverse === universe.name; + + return ( + handleUniverseSelect(universe.name)} + > + +
+
+ +
+ {isSelected ? ( + + ) : ( + + )} +
+
+ {universe.title} +

+ {universe.description} +

+
+
+ +
+ {/* Features */} +
+ {universe.features.map((feature) => ( + + {feature} + + ))} +
+ + + + {/* Recommended For */} +
+

Recommended for:

+

{universe.recommendedFor}

+
+
+
+
+ ); + })} +
+
+ + +
+ {/* Selected Universe Details */} + {universeDetails && ( + + + + + Selected Universe: {universeDetails.name} + + + +
+ +

{universeDetails.description}

+
+ +
+ +
+ {universeDetails.assetClasses.map((assetClass) => ( + + {assetClass} + + ))} +
+
+ + {universeDetails.maxPositions && ( +
+ +

{universeDetails.maxPositions}

+
+ )} + +
+ +
+ {universeDetails.includedTickers.slice(0, 20).map((ticker) => ( + + {ticker} + + ))} + {universeDetails.includedTickers.length > 20 && ( + + +{universeDetails.includedTickers.length - 20} more + + )} +
+
+
+
+ )} + + {/* Custom Configuration */} + + + + + Custom Configuration + + + +
+ + + setConfig(prev => ({ + ...prev, + customFilters: { + ...prev.customFilters, + minMarketCap: Number(e.target.value) || undefined, + }, + })) + } + /> +
+ +
+ + + setConfig(prev => ({ + ...prev, + customFilters: { + ...prev.customFilters, + maxPositions: Number(e.target.value) || undefined, + }, + })) + } + /> +
+ +
+ +
+ {Object.entries(SECTOR_ICONS).map(([sector, IconComponent]) => ( +
+ { + const sectors = config.customFilters.sectors || []; + if (e.target.checked) { + setConfig(prev => ({ + ...prev, + customFilters: { + ...prev.customFilters, + sectors: [...sectors, sector], + }, + })); + } else { + setConfig(prev => ({ + ...prev, + customFilters: { + ...prev.customFilters, + sectors: sectors.filter(s => s !== sector), + }, + })); + } + }} + /> + +
+ ))} +
+
+ +
+ + + setConfig(prev => ({ + ...prev, + customFilters: { + ...prev.customFilters, + excludedTickers: e.target.value + .split(',') + .map(ticker => ticker.trim().toUpperCase()) + .filter(ticker => ticker.length > 0), + }, + })) + } + /> +
+
+
+
+
+ + + + + Universe Analysis + + +
+
+

+ {universeDetails?.includedTickers.length || 0} +

+

Total Securities

+
+
+

+ {universeDetails?.maxPositions || 'Unlimited'} +

+

Max Positions

+
+
+

+ {universeDetails?.assetClasses.length || 0} +

+

Asset Classes

+
+
+ + + +
+

Configuration Summary

+
+
+                    {JSON.stringify(config, null, 2)}
+                  
+
+
+
+
+
+
+
+ ); +} \ No newline at end of file diff --git a/app/run.bat b/app/run.bat deleted file mode 100644 index 9c7a6c974..000000000 --- a/app/run.bat +++ /dev/null @@ -1,184 +0,0 @@ -@echo off -REM AI Hedge Fund Web Application Setup and Runner (Windows) -REM This script makes it easy for non-technical users to run the full web application - -REM Colors for output -set "INFO=[INFO]" -set "SUCCESS=[SUCCESS]" -set "WARNING=[WARNING]" -set "ERROR=[ERROR]" - -REM Check Node.js -where node >nul 2>&1 -if %errorlevel% neq 0 ( - echo %ERROR% Node.js is not installed. Please install from https://nodejs.org/ - pause - exit /b 1 -) - -REM Check npm -where npm >nul 2>&1 -if %errorlevel% neq 0 ( - echo %ERROR% npm is not installed. Please install Node.js from https://nodejs.org/ - pause - exit /b 1 -) - -REM Check Python (or python3) -where python >nul 2>&1 -if %errorlevel% neq 0 ( - where python3 >nul 2>&1 - if %errorlevel% neq 0 ( - echo %ERROR% Python is not installed. Please install from https://python.org/ - pause - exit /b 1 - ) -) - -REM Check Poetry -where poetry >nul 2>&1 -if %errorlevel% neq 0 ( - echo %WARNING% Poetry is not installed. - echo %INFO% Poetry is required to manage Python dependencies for this project. - echo. - set /p install_poetry="Would you like to install Poetry automatically? (y/N): " - if /i "%install_poetry%"=="y" ( - echo %INFO% Installing Poetry... - python -m pip install poetry - if %errorlevel% neq 0 ( - echo %ERROR% Failed to install Poetry automatically. - echo %ERROR% Please install Poetry manually from https://python-poetry.org/ - pause - exit /b 1 - ) - echo %SUCCESS% Poetry installed successfully! - echo %INFO% Refreshing environment... - call refreshenv >nul 2>&1 || echo %WARNING% Could not refresh environment. You may need to restart your terminal. - ) else ( - echo %ERROR% Poetry is required to run this application. - echo %ERROR% Please install Poetry from https://python-poetry.org/ and run this script again. - pause - exit /b 1 - ) -) - -REM Ensure correct working directory -if not exist "frontend" ( - echo %ERROR% This script must be run from the app\ directory - echo %ERROR% Please navigate to the app\ directory and run: run.bat - pause - exit /b 1 -) - -if not exist "backend" ( - echo %ERROR% This script must be run from the app\ directory - echo %ERROR% Please navigate to the app\ directory and run: run.bat - pause - exit /b 1 -) - -echo. -echo %INFO% AI Hedge Fund Web Application Setup -echo %INFO% This script will install dependencies and start both frontend and backend services -echo. - -REM Check for .env -if not exist "..\.env" ( - if exist "..\.env.example" ( - echo %WARNING% No .env file found. Creating from .env.example... - copy "..\.env.example" "..\.env" - echo %WARNING% Please edit ..\.env to add your API keys: - echo %WARNING% - OPENAI_API_KEY=your-openai-api-key - echo %WARNING% - GROQ_API_KEY=your-groq-api-key - echo %WARNING% - FINANCIAL_DATASETS_API_KEY=your-financial-datasets-api-key - echo. - ) else ( - echo %ERROR% No .env or .env.example file found in the root directory. - echo %ERROR% Please create a .env file with your API keys. - pause - exit /b 1 - ) -) else ( - echo %SUCCESS% Environment file (.env) found! -) - -REM Install backend dependencies -echo %INFO% Installing backend dependencies... -cd backend - -poetry check >nul 2>&1 -if %errorlevel% equ 0 ( - echo %SUCCESS% Backend dependencies already installed! -) else ( - echo %INFO% Installing Python dependencies with Poetry... - poetry install - if %errorlevel% neq 0 ( - echo %ERROR% Failed to install backend dependencies - pause - exit /b 1 - ) - echo %SUCCESS% Backend dependencies installed! -) - -cd .. - -REM Install frontend dependencies -echo %INFO% Installing frontend dependencies... -cd frontend - -if exist "node_modules" ( - echo %SUCCESS% Frontend dependencies already installed! -) else ( - echo %INFO% Installing Node.js dependencies... - npm install - if %errorlevel% neq 0 ( - echo %ERROR% Failed to install frontend dependencies - pause - exit /b 1 - ) - echo %SUCCESS% Frontend dependencies installed! -) - -cd .. - -REM Start services -echo %INFO% Starting the AI Hedge Fund web application... -echo %INFO% Press Ctrl+C to stop all services -echo. - -REM Start backend -echo %INFO% Launching backend server... -REM Run from project root to ensure proper Python imports -cd .. -start /b poetry run uvicorn app.backend.main:app --reload --host 127.0.0.1 --port 8000 -cd app - -timeout /t 3 /nobreak >nul - -REM Start frontend -echo %INFO% Launching frontend development server... -cd frontend -start /b npm run dev -cd .. - -timeout /t 5 /nobreak >nul - -echo %INFO% Opening browser... -timeout /t 2 /nobreak >nul -start http://localhost:5173 - -echo. -echo %SUCCESS% AI Hedge Fund web application is now running! -echo %INFO% Frontend: http://localhost:5173 -echo %INFO% Backend: http://localhost:8000 -echo %INFO% Docs: http://localhost:8000/docs -echo. -echo %INFO% Press any key to stop both services... -pause >nul - -REM Stop services -taskkill /f /im "uvicorn.exe" >nul 2>&1 -taskkill /f /im "node.exe" >nul 2>&1 - -echo %SUCCESS% Services stopped. Goodbye! -pause diff --git a/app/run.sh b/app/run.sh index 60003348f..70ca66a2f 100755 --- a/app/run.sh +++ b/app/run.sh @@ -227,6 +227,17 @@ start_services() { print_status "Starting backend server..." # Run from the app directory (parent of backend) to ensure proper Python imports cd .. + + # First test that the backend can start without errors + print_status "Testing backend startup..." + if ! poetry run python -c "from app.backend.main import app; print('Backend imports successful')" > "$LOG_DIR/backend_test.log" 2>&1; then + print_error "Backend has import errors. Check the logs:" + cat "$LOG_DIR/backend_test.log" + print_error "Please fix the import errors before starting the server." + exit 1 + fi + + # Start the backend server poetry run uvicorn app.backend.main:app --reload --host 127.0.0.1 --port 8000 > "$LOG_DIR/backend.log" 2>&1 & BACKEND_PID=$! cd app diff --git a/backend.log b/backend.log new file mode 100644 index 000000000..3276b317e --- /dev/null +++ b/backend.log @@ -0,0 +1,441 @@ +INFO: Will watch for changes in these directories: ['/Users/xunxdd/github/ai-hedge-fund'] +INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) +INFO: Started reloader process [71149] using WatchFiles +INFO: Started server process [71154] +INFO: Waiting for application startup. +INFO: Application startup complete. +INFO: 127.0.0.1:49467 - "GET /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:49663 - "GET / HTTP/1.1" 200 OK +INFO: 127.0.0.1:50060 - "GET / HTTP/1.1" 200 OK +INFO: 127.0.0.1:50403 - "GET /hedge-fund/agents HTTP/1.1" 200 OK +INFO: 127.0.0.1:50402 - "GET /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:50403 - "GET /hedge-fund/agents HTTP/1.1" 200 OK +INFO: 127.0.0.1:50405 - "GET /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:50628 - "GET /hedge-fund/agents HTTP/1.1" 200 OK +INFO: 127.0.0.1:50627 - "GET /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:50630 - "GET /hedge-fund/agents HTTP/1.1" 200 OK +INFO: 127.0.0.1:50628 - "GET /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:50825 - "GET /hedge-fund/agents HTTP/1.1" 200 OK +INFO: 127.0.0.1:50824 - "GET /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:50827 - "GET /hedge-fund/agents HTTP/1.1" 200 OK +INFO: 127.0.0.1:50825 - "GET /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:51329 - "OPTIONS /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:51329 - "POST /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:51329 - "GET /flows/ HTTP/1.1" 200 OK +INFO: 127.0.0.1:51329 - "GET /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:51329 - "GET /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:51329 - "GET /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:51329 - "OPTIONS /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:51329 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:52883 - "GET /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:52883 - "GET /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:52885 - "GET /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:52885 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53059 - "GET /hedge-fund/language-models HTTP/1.1" 200 OK +INFO: 127.0.0.1:53059 - "GET /hedge-fund/language-models HTTP/1.1" 200 OK +INFO: 127.0.0.1:53061 - "GET /hedge-fund/language-models HTTP/1.1" 200 OK +INFO: 127.0.0.1:53059 - "GET /hedge-fund/language-models HTTP/1.1" 200 OK +INFO: 127.0.0.1:53059 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53140 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53194 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53194 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53232 - "OPTIONS /hedge-fund/run HTTP/1.1" 200 OK +INFO: 127.0.0.1:53232 - "POST /hedge-fund/run HTTP/1.1" 200 OK +INFO: 127.0.0.1:53238 - "PUT /flows/1 HTTP/1.1" 200 OK +JSON decoding error: Expecting value: line 1 column 1 (char 0) +Response: 'Make trading decisions based on the provided data.' +INFO: 127.0.0.1:53260 - "POST /hedge-fund/run HTTP/1.1" 200 OK +JSON decoding error: Expecting value: line 1 column 1 (char 0) +Response: 'Make trading decisions based on the provided data.' +INFO: 127.0.0.1:53290 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53290 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53484 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53510 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53620 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53620 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53655 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53713 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53747 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53785 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53966 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53988 - "OPTIONS /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:53988 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54034 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54034 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54034 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54034 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54061 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54061 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54063 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54065 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54307 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54385 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54385 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54385 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54385 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54385 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54385 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54385 - "POST /hedge-fund/run HTTP/1.1" 200 OK +INFO: 127.0.0.1:54397 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54420 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54420 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54427 - "PUT /flows/1 HTTP/1.1" 200 OK +INFO: 127.0.0.1:54430 - "PUT /flows/1 HTTP/1.1" 200 OK +WARNING: WatchFiles detected changes in 'src/data/models.py'. Reloading... +INFO: Shutting down +INFO: Waiting for application shutdown. +INFO: Application shutdown complete. +INFO: Finished server process [71154] +Process SpawnProcess-2: +Traceback (most recent call last): + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap + self.run() + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run + self._target(*self._args, **self._kwargs) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started + target(sockets=sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run + return asyncio.run(self.serve(sockets=sockets)) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run + return runner.run(main) + ^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run + return self._loop.run_until_complete(task) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve + await self._serve(sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve + config.load() + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load + self.loaded_app = import_from_string(self.app) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string + module = importlib.import_module(module_str) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module + return _bootstrap._gcd_import(name[level:], package, level) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "", line 1204, in _gcd_import + File "", line 1176, in _find_and_load + File "", line 1147, in _find_and_load_unlocked + File "", line 690, in _load_unlocked + File "", line 940, in exec_module + File "", line 241, in _call_with_frames_removed + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/main.py", line 4, in + from app.backend.routes import api_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/__init__.py", line 3, in + from app.backend.routes.hedge_fund import router as hedge_fund_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/hedge_fund.py", line 7, in + from app.backend.services.graph import create_graph, parse_hedge_fund_response, run_graph_async + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/services/graph.py", line 7, in + from src.agents.risk_manager import risk_management_agent + File "/Users/xunxdd/github/ai-hedge-fund/src/agents/risk_manager.py", line 4, in + from src.tools.api import get_prices, prices_to_df + File "/Users/xunxdd/github/ai-hedge-fund/src/tools/api.py", line 8, in + from src.data.models import ( + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 144, in + class Position(BaseModel): + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 148, in Position + position_type: PositionType = PositionType.LONG + ^^^^^^^^^^^^ +NameError: name 'PositionType' is not defined +WARNING: WatchFiles detected changes in 'src/data/trading_universes.py'. Reloading... +Process SpawnProcess-3: +Traceback (most recent call last): + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap + self.run() + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run + self._target(*self._args, **self._kwargs) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started + target(sockets=sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run + return asyncio.run(self.serve(sockets=sockets)) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run + return runner.run(main) + ^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run + return self._loop.run_until_complete(task) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve + await self._serve(sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve + config.load() + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load + self.loaded_app = import_from_string(self.app) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string + module = importlib.import_module(module_str) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module + return _bootstrap._gcd_import(name[level:], package, level) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "", line 1204, in _gcd_import + File "", line 1176, in _find_and_load + File "", line 1147, in _find_and_load_unlocked + File "", line 690, in _load_unlocked + File "", line 940, in exec_module + File "", line 241, in _call_with_frames_removed + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/main.py", line 4, in + from app.backend.routes import api_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/__init__.py", line 3, in + from app.backend.routes.hedge_fund import router as hedge_fund_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/hedge_fund.py", line 7, in + from app.backend.services.graph import create_graph, parse_hedge_fund_response, run_graph_async + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/services/graph.py", line 7, in + from src.agents.risk_manager import risk_management_agent + File "/Users/xunxdd/github/ai-hedge-fund/src/agents/risk_manager.py", line 4, in + from src.tools.api import get_prices, prices_to_df + File "/Users/xunxdd/github/ai-hedge-fund/src/tools/api.py", line 8, in + from src.data.models import ( + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 144, in + class Position(BaseModel): + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 148, in Position + position_type: PositionType = PositionType.LONG + ^^^^^^^^^^^^ +NameError: name 'PositionType' is not defined +WARNING: WatchFiles detected changes in 'src/data/realtime_data.py'. Reloading... +Process SpawnProcess-4: +Traceback (most recent call last): + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap + self.run() + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run + self._target(*self._args, **self._kwargs) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started + target(sockets=sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run + return asyncio.run(self.serve(sockets=sockets)) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run + return runner.run(main) + ^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run + return self._loop.run_until_complete(task) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve + await self._serve(sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve + config.load() + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load + self.loaded_app = import_from_string(self.app) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string + module = importlib.import_module(module_str) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module + return _bootstrap._gcd_import(name[level:], package, level) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "", line 1204, in _gcd_import + File "", line 1176, in _find_and_load + File "", line 1147, in _find_and_load_unlocked + File "", line 690, in _load_unlocked + File "", line 940, in exec_module + File "", line 241, in _call_with_frames_removed + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/main.py", line 4, in + from app.backend.routes import api_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/__init__.py", line 3, in + from app.backend.routes.hedge_fund import router as hedge_fund_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/hedge_fund.py", line 7, in + from app.backend.services.graph import create_graph, parse_hedge_fund_response, run_graph_async + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/services/graph.py", line 7, in + from src.agents.risk_manager import risk_management_agent + File "/Users/xunxdd/github/ai-hedge-fund/src/agents/risk_manager.py", line 4, in + from src.tools.api import get_prices, prices_to_df + File "/Users/xunxdd/github/ai-hedge-fund/src/tools/api.py", line 8, in + from src.data.models import ( + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 144, in + class Position(BaseModel): + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 148, in Position + position_type: PositionType = PositionType.LONG + ^^^^^^^^^^^^ +NameError: name 'PositionType' is not defined +WARNING: WatchFiles detected changes in 'src/data/economic_indicators.py'. Reloading... +Process SpawnProcess-5: +Traceback (most recent call last): + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap + self.run() + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run + self._target(*self._args, **self._kwargs) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started + target(sockets=sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run + return asyncio.run(self.serve(sockets=sockets)) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run + return runner.run(main) + ^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run + return self._loop.run_until_complete(task) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve + await self._serve(sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve + config.load() + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load + self.loaded_app = import_from_string(self.app) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string + module = importlib.import_module(module_str) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module + return _bootstrap._gcd_import(name[level:], package, level) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "", line 1204, in _gcd_import + File "", line 1176, in _find_and_load + File "", line 1147, in _find_and_load_unlocked + File "", line 690, in _load_unlocked + File "", line 940, in exec_module + File "", line 241, in _call_with_frames_removed + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/main.py", line 4, in + from app.backend.routes import api_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/__init__.py", line 3, in + from app.backend.routes.hedge_fund import router as hedge_fund_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/hedge_fund.py", line 7, in + from app.backend.services.graph import create_graph, parse_hedge_fund_response, run_graph_async + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/services/graph.py", line 7, in + from src.agents.risk_manager import risk_management_agent + File "/Users/xunxdd/github/ai-hedge-fund/src/agents/risk_manager.py", line 4, in + from src.tools.api import get_prices, prices_to_df + File "/Users/xunxdd/github/ai-hedge-fund/src/tools/api.py", line 8, in + from src.data.models import ( + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 144, in + class Position(BaseModel): + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 148, in Position + position_type: PositionType = PositionType.LONG + ^^^^^^^^^^^^ +NameError: name 'PositionType' is not defined +WARNING: WatchFiles detected changes in 'src/data/political_signals.py'. Reloading... +Process SpawnProcess-6: +Traceback (most recent call last): + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap + self.run() + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run + self._target(*self._args, **self._kwargs) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started + target(sockets=sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run + return asyncio.run(self.serve(sockets=sockets)) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run + return runner.run(main) + ^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run + return self._loop.run_until_complete(task) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve + await self._serve(sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve + config.load() + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load + self.loaded_app = import_from_string(self.app) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string + module = importlib.import_module(module_str) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module + return _bootstrap._gcd_import(name[level:], package, level) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "", line 1204, in _gcd_import + File "", line 1176, in _find_and_load + File "", line 1147, in _find_and_load_unlocked + File "", line 690, in _load_unlocked + File "", line 940, in exec_module + File "", line 241, in _call_with_frames_removed + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/main.py", line 4, in + from app.backend.routes import api_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/__init__.py", line 3, in + from app.backend.routes.hedge_fund import router as hedge_fund_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/hedge_fund.py", line 7, in + from app.backend.services.graph import create_graph, parse_hedge_fund_response, run_graph_async + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/services/graph.py", line 7, in + from src.agents.risk_manager import risk_management_agent + File "/Users/xunxdd/github/ai-hedge-fund/src/agents/risk_manager.py", line 4, in + from src.tools.api import get_prices, prices_to_df + File "/Users/xunxdd/github/ai-hedge-fund/src/tools/api.py", line 8, in + from src.data.models import ( + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 144, in + class Position(BaseModel): + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 148, in Position + position_type: PositionType = PositionType.LONG + ^^^^^^^^^^^^ +NameError: name 'PositionType' is not defined +WARNING: WatchFiles detected changes in 'src/agents/enhanced_sentiment.py'. Reloading... +Process SpawnProcess-7: +Traceback (most recent call last): + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap + self.run() + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run + self._target(*self._args, **self._kwargs) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 80, in subprocess_started + target(sockets=sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run + return asyncio.run(self.serve(sockets=sockets)) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run + return runner.run(main) + ^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run + return self._loop.run_until_complete(task) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve + await self._serve(sockets) + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve + config.load() + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load + self.loaded_app = import_from_string(self.app) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/Users/xunxdd/Library/Caches/pypoetry/virtualenvs/ai-hedge-fund-ac5atbxA-py3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string + module = importlib.import_module(module_str) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "/opt/homebrew/Cellar/python@3.11/3.11.13/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module + return _bootstrap._gcd_import(name[level:], package, level) + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + File "", line 1204, in _gcd_import + File "", line 1176, in _find_and_load + File "", line 1147, in _find_and_load_unlocked + File "", line 690, in _load_unlocked + File "", line 940, in exec_module + File "", line 241, in _call_with_frames_removed + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/main.py", line 4, in + from app.backend.routes import api_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/__init__.py", line 3, in + from app.backend.routes.hedge_fund import router as hedge_fund_router + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/routes/hedge_fund.py", line 7, in + from app.backend.services.graph import create_graph, parse_hedge_fund_response, run_graph_async + File "/Users/xunxdd/github/ai-hedge-fund/app/backend/services/graph.py", line 7, in + from src.agents.risk_manager import risk_management_agent + File "/Users/xunxdd/github/ai-hedge-fund/src/agents/risk_manager.py", line 4, in + from src.tools.api import get_prices, prices_to_df + File "/Users/xunxdd/github/ai-hedge-fund/src/tools/api.py", line 8, in + from src.data.models import ( + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 144, in + class Position(BaseModel): + File "/Users/xunxdd/github/ai-hedge-fund/src/data/models.py", line 148, in Position + position_type: PositionType = PositionType.LONG + ^^^^^^^^^^^^ +NameError: name 'PositionType' is not defined +2025-06-29 01:12:19,888 - INFO - Starting AI Hedge Fund Backend Server... +2025-06-29 01:12:19,888 - INFO - Python version: 3.13.5 (main, Jun 11 2025, 15:36:57) [Clang 17.0.0 (clang-1700.0.13.3)] +2025-06-29 01:12:20,062 - INFO - Poetry found: Poetry (version 2.1.3) +2025-06-29 01:12:20,459 - INFO - All required dependencies are installed +2025-06-29 01:12:20,460 - WARNING - python-dotenv not installed. Environment variables won't be loaded automatically. +2025-06-29 01:12:20,460 - INFO - Testing critical imports... +2025-06-29 01:12:21,768 - INFO - All imports successful +2025-06-29 01:12:21,768 - INFO - All checks passed. Starting server... +2025-06-29 01:12:21,768 - INFO - Starting backend server on http://0.0.0.0:8000 +2025-06-29 01:17:30,061 - INFO - Starting AI Hedge Fund Backend Server... +2025-06-29 01:17:30,061 - INFO - Python version: 3.13.5 (main, Jun 11 2025, 15:36:57) [Clang 17.0.0 (clang-1700.0.13.3)] +2025-06-29 01:17:30,333 - INFO - Poetry found: Poetry (version 2.1.3) +2025-06-29 01:17:30,795 - INFO - All required dependencies are installed +2025-06-29 01:17:30,796 - WARNING - python-dotenv not installed. Environment variables won't be loaded automatically. +2025-06-29 01:17:30,796 - INFO - Testing critical imports... +2025-06-29 01:17:32,255 - INFO - All imports successful +2025-06-29 01:17:32,255 - INFO - All checks passed. Starting server... +2025-06-29 01:17:32,255 - INFO - Starting backend server on http://0.0.0.0:8000 diff --git a/src/agents/enhanced_portfolio_manager.py b/src/agents/enhanced_portfolio_manager.py new file mode 100644 index 000000000..314a5b5ff --- /dev/null +++ b/src/agents/enhanced_portfolio_manager.py @@ -0,0 +1,469 @@ +""" +Enhanced Portfolio Manager with Trading Universe and Comprehensive Analysis. +Integrates trading universe, enhanced sentiment, economic indicators, and political signals. +""" + +import json +import asyncio +from datetime import datetime, timedelta +from typing import Dict, List, Optional, Any +from langchain_core.messages import HumanMessage +from langchain_core.prompts import ChatPromptTemplate +from pydantic import BaseModel, Field + +from src.graph.state import AgentState, show_agent_reasoning +from src.utils.progress import progress +from src.utils.llm import call_llm +from src.data.trading_universes import get_trading_universe, TRADING_UNIVERSES +from src.data.economic_indicators import EconomicDataManager +from src.data.political_signals import PoliticalSignalsManager +from src.agents.enhanced_sentiment import AdvancedSentimentAnalyzer, SocialMediaAnalyzer +from src.data.realtime_data import RealTimeDataManager, create_data_provider + + +class EnhancedPortfolioDecision(BaseModel): + """Enhanced portfolio decision with comprehensive analysis""" + action: str = Field(description="Trading action: buy, sell, short, cover, hold") + quantity: int = Field(description="Number of shares to trade") + confidence: float = Field(description="Confidence level 0-100") + reasoning: str = Field(description="Detailed reasoning for the decision") + + # Enhanced analysis components + sentiment_score: Optional[float] = Field(description="Sentiment analysis score -1 to 1") + economic_impact: Optional[float] = Field(description="Economic indicators impact -1 to 1") + political_risk: Optional[float] = Field(description="Political risk assessment -1 to 1") + market_regime: Optional[str] = Field(description="Current market regime assessment") + volatility_forecast: Optional[float] = Field(description="Expected volatility 0-1") + + # Risk metrics + max_position_size: Optional[float] = Field(description="Maximum position size allowed") + sector_exposure: Optional[str] = Field(description="Sector exposure analysis") + correlation_risk: Optional[float] = Field(description="Portfolio correlation risk") + + +class EnhancedPortfolioOutput(BaseModel): + """Enhanced portfolio output with macro analysis""" + decisions: Dict[str, EnhancedPortfolioDecision] = Field(description="Trading decisions by ticker") + + # Portfolio-level insights + overall_market_sentiment: float = Field(description="Overall market sentiment -1 to 1") + economic_health_score: float = Field(description="Economic health score 0-100") + political_stability: float = Field(description="Political stability score 0-100") + recommended_cash_level: float = Field(description="Recommended cash percentage 0-1") + market_regime: str = Field(description="Bull, Bear, Sideways, Volatile") + risk_level: str = Field(description="Low, Medium, High") + + +class UniverseAnalyzer: + """Analyzes trading universe and filters securities""" + + def __init__(self, universe_name: str = "sp500"): + self.universe = get_trading_universe(universe_name) + self.universe_name = universe_name + + def filter_tickers(self, candidate_tickers: List[str]) -> List[str]: + """Filter tickers based on trading universe criteria""" + if self.universe.included_tickers: + # If universe has specific tickers, filter to those + filtered = [t for t in candidate_tickers if t in self.universe.included_tickers] + else: + # Otherwise, use all provided tickers (assume they meet criteria) + filtered = candidate_tickers + + # Apply maximum position limits + if self.universe.max_positions and len(filtered) > self.universe.max_positions: + # Prioritize by some criteria (for now, just take first N) + filtered = filtered[:self.universe.max_positions] + + return filtered + + def get_sector_allocation_targets(self) -> Dict[str, float]: + """Get target sector allocations based on universe""" + if self.universe_name == "tech": + return { + "technology": 0.6, + "communication": 0.2, + "consumer_discretionary": 0.2 + } + elif self.universe_name == "conservative": + return { + "consumer_staples": 0.3, + "healthcare": 0.3, + "utilities": 0.2, + "financials": 0.2 + } + elif self.universe_name == "sector_etf": + return { + "technology": 0.15, + "healthcare": 0.15, + "financials": 0.15, + "industrials": 0.1, + "energy": 0.1, + "consumer_discretionary": 0.1, + "consumer_staples": 0.1, + "utilities": 0.05, + "materials": 0.05, + "real_estate": 0.05 + } + else: + # Default balanced allocation + return {} + + +class MacroAnalysisEngine: + """Comprehensive macro analysis engine""" + + def __init__(self, api_keys: Dict[str, str]): + self.economic_manager = EconomicDataManager(api_keys.get("fred_api_key", "")) + self.political_manager = PoliticalSignalsManager({ + "newsapi": api_keys.get("newsapi_key", ""), + "reddit": api_keys.get("reddit_key", ""), + "twitter": api_keys.get("twitter_key", "") + }) + self.sentiment_analyzer = AdvancedSentimentAnalyzer() + self.social_analyzer = SocialMediaAnalyzer(api_keys) + + async def get_comprehensive_analysis(self, tickers: List[str]) -> Dict[str, Any]: + """Get comprehensive macro and sentiment analysis""" + try: + # Start all data providers + await self.economic_manager.start() + await self.political_manager.start() + await self.social_analyzer.connect() + + # Get economic summary + economic_data = await self.economic_manager.get_economic_summary() + + # Get political events + political_events = await self.political_manager.update_political_events() + high_impact_events = self.political_manager.get_high_impact_events() + + # Get sentiment analysis for each ticker + sentiment_data = {} + social_sentiment_data = {} + + for ticker in tickers: + try: + # Get social sentiment + social_sentiment = await self.social_analyzer.analyze_social_sentiment(ticker) + social_sentiment_data[ticker] = social_sentiment + + # Analyze recent news for sentiment + # (This would integrate with news feeds in production) + sentiment_data[ticker] = { + "social_sentiment": social_sentiment.average_sentiment, + "mentions": social_sentiment.total_mentions, + "trending_topics": social_sentiment.trending_topics[:3] + } + except Exception as e: + sentiment_data[ticker] = { + "social_sentiment": 0.0, + "mentions": 0, + "trending_topics": [] + } + + return { + "economic": { + "health_score": economic_data["health_score"], + "summary": economic_data["summary"], + "indicators": {k: v.value for k, v in economic_data["indicators"].items()}, + "upcoming_events": len(economic_data["upcoming_events"]) + }, + "political": { + "high_impact_events": len(high_impact_events), + "total_events": len(political_events), + "recent_events": [ + { + "title": event.title, + "impact_score": event.market_impact_score, + "sentiment": event.sentiment_score + } + for event in high_impact_events[:3] + ] + }, + "sentiment": sentiment_data, + "market_regime": self._assess_market_regime(economic_data, political_events, sentiment_data) + } + + except Exception as e: + print(f"Error in macro analysis: {e}") + return self._get_default_analysis(tickers) + + finally: + # Clean up connections + try: + await self.economic_manager.stop() + await self.political_manager.stop() + await self.social_analyzer.disconnect() + except: + pass + + def _assess_market_regime(self, economic_data: Dict, political_events: List, + sentiment_data: Dict) -> str: + """Assess current market regime""" + + # Economic factors + health_score = economic_data.get("health_score", 50) + + # Political factors + high_impact_political = sum(1 for event in political_events + if hasattr(event, 'impact_level') and + event.impact_level == "high") + + # Sentiment factors + avg_sentiment = sum(data.get("social_sentiment", 0) + for data in sentiment_data.values()) / max(len(sentiment_data), 1) + + # Determine regime + if health_score > 70 and avg_sentiment > 0.2 and high_impact_political < 2: + return "Bull Market" + elif health_score < 40 or avg_sentiment < -0.3 or high_impact_political > 3: + return "Bear Market" + elif abs(avg_sentiment) < 0.1 and 40 <= health_score <= 70: + return "Sideways Market" + else: + return "Volatile Market" + + def _get_default_analysis(self, tickers: List[str]) -> Dict[str, Any]: + """Return default analysis when data is unavailable""" + return { + "economic": { + "health_score": 50.0, + "summary": "Economic data unavailable", + "indicators": {}, + "upcoming_events": 0 + }, + "political": { + "high_impact_events": 0, + "total_events": 0, + "recent_events": [] + }, + "sentiment": {ticker: {"social_sentiment": 0.0, "mentions": 0, "trending_topics": []} + for ticker in tickers}, + "market_regime": "Neutral Market" + } + + +async def enhanced_portfolio_management_agent(state: AgentState, universe_name: str = "sp500", + api_keys: Dict[str, str] = None): + """Enhanced portfolio management with comprehensive analysis""" + + # Get portfolio and signals + portfolio = state["data"]["portfolio"] + analyst_signals = state["data"]["analyst_signals"] + tickers = state["data"]["tickers"] + + if api_keys is None: + api_keys = {} + + progress.update_status("enhanced_portfolio_manager", None, "Initializing analysis engines") + + # Initialize analyzers + universe_analyzer = UniverseAnalyzer(universe_name) + macro_engine = MacroAnalysisEngine(api_keys) + + # Filter tickers based on trading universe + filtered_tickers = universe_analyzer.filter_tickers(tickers) + progress.update_status("enhanced_portfolio_manager", None, + f"Filtered to {len(filtered_tickers)} tickers from universe") + + # Get comprehensive macro analysis + progress.update_status("enhanced_portfolio_manager", None, "Running macro analysis") + try: + macro_analysis = await macro_engine.get_comprehensive_analysis(filtered_tickers) + except Exception as e: + print(f"Macro analysis failed: {e}") + macro_analysis = macro_engine._get_default_analysis(filtered_tickers) + + # Prepare enhanced decision context + enhanced_context = { + "tickers": filtered_tickers, + "universe": universe_name, + "macro_analysis": macro_analysis, + "sector_targets": universe_analyzer.get_sector_allocation_targets(), + "analyst_signals": analyst_signals, + "portfolio": portfolio + } + + progress.update_status("enhanced_portfolio_manager", None, "Generating enhanced decisions") + + # Generate enhanced trading decisions + result = await generate_enhanced_trading_decision(enhanced_context, state) + + # Create message + message = HumanMessage( + content=json.dumps({ + "decisions": {ticker: decision.model_dump() for ticker, decision in result.decisions.items()}, + "portfolio_analysis": { + "market_sentiment": result.overall_market_sentiment, + "economic_health": result.economic_health_score, + "political_stability": result.political_stability, + "market_regime": result.market_regime, + "risk_level": result.risk_level + } + }), + name="enhanced_portfolio_manager", + ) + + if state["metadata"]["show_reasoning"]: + show_agent_reasoning(result.model_dump(), "Enhanced Portfolio Manager") + + progress.update_status("enhanced_portfolio_manager", None, "Done") + + return { + "messages": state["messages"] + [message], + "data": {**state["data"], "enhanced_analysis": macro_analysis}, + } + + +async def generate_enhanced_trading_decision(context: Dict[str, Any], + state: AgentState) -> EnhancedPortfolioOutput: + """Generate enhanced trading decisions with comprehensive analysis""" + + template = ChatPromptTemplate.from_messages([ + ("system", """You are an advanced portfolio manager using comprehensive market analysis. + + Your analysis includes: + - Economic indicators and health score + - Political risk assessment + - Social sentiment analysis + - Trading universe constraints + - Sector allocation targets + - Market regime analysis + + Consider all these factors when making trading decisions. + + Market Regimes: + - Bull Market: Favor growth stocks, higher allocations, momentum strategies + - Bear Market: Defensive positions, higher cash, quality stocks, short opportunities + - Sideways Market: Range trading, value plays, covered calls + - Volatile Market: Reduced position sizes, options strategies, frequent rebalancing + + Trading Universe: {universe} + Current Market Regime: {market_regime} + Economic Health Score: {economic_health}/100 + Overall Market Sentiment: {market_sentiment} + """), + + ("human", """Based on the comprehensive analysis, make enhanced trading decisions. + + Trading Universe: {universe} + Filtered Tickers: {tickers} + + Macro Analysis: + {macro_analysis} + + Sector Allocation Targets: + {sector_targets} + + Current Portfolio: + {portfolio} + + Analyst Signals: + {analyst_signals} + + Provide enhanced decisions with detailed reasoning incorporating: + 1. Economic indicators impact + 2. Political risk assessment + 3. Sentiment analysis + 4. Sector allocation + 5. Market regime considerations + 6. Risk management + + Output in JSON format matching EnhancedPortfolioOutput schema. + """) + ]) + + prompt = template.invoke({ + "universe": context["universe"], + "market_regime": context["macro_analysis"]["market_regime"], + "economic_health": context["macro_analysis"]["economic"]["health_score"], + "market_sentiment": sum(s["social_sentiment"] for s in context["macro_analysis"]["sentiment"].values()) / max(len(context["macro_analysis"]["sentiment"]), 1), + "tickers": context["tickers"], + "macro_analysis": json.dumps(context["macro_analysis"], indent=2), + "sector_targets": json.dumps(context["sector_targets"], indent=2), + "portfolio": json.dumps(context["portfolio"], indent=2), + "analyst_signals": json.dumps(context["analyst_signals"], indent=2) + }) + + def create_default_output(): + """Create default output when LLM fails""" + decisions = {} + for ticker in context["tickers"]: + decisions[ticker] = EnhancedPortfolioDecision( + action="hold", + quantity=0, + confidence=50.0, + reasoning="Default hold due to analysis error", + sentiment_score=0.0, + economic_impact=0.0, + political_risk=0.0, + market_regime=context["macro_analysis"]["market_regime"], + volatility_forecast=0.2 + ) + + return EnhancedPortfolioOutput( + decisions=decisions, + overall_market_sentiment=0.0, + economic_health_score=50.0, + political_stability=50.0, + recommended_cash_level=0.2, + market_regime=context["macro_analysis"]["market_regime"], + risk_level="Medium" + ) + + return call_llm( + prompt=prompt, + pydantic_model=EnhancedPortfolioOutput, + agent_name="enhanced_portfolio_manager", + state=state, + default_factory=create_default_output, + ) + + +# Example usage function +async def demo_enhanced_portfolio_manager(): + """Demonstrate the enhanced portfolio manager""" + + # Example API keys (use your actual keys) + api_keys = { + "fred_api_key": "your_fred_api_key", + "newsapi_key": "your_newsapi_key", + "reddit_key": "your_reddit_key", + "twitter_key": "your_twitter_key" + } + + # Example state + state = { + "messages": [], + "data": { + "tickers": ["AAPL", "MSFT", "GOOGL", "NVDA", "TSLA"], + "portfolio": { + "cash": 100000, + "positions": { + "AAPL": {"long": 0, "short": 0}, + "MSFT": {"long": 0, "short": 0}, + "GOOGL": {"long": 0, "short": 0}, + "NVDA": {"long": 0, "short": 0}, + "TSLA": {"long": 0, "short": 0} + } + }, + "analyst_signals": {} + }, + "metadata": {"show_reasoning": True} + } + + # Run enhanced portfolio manager with tech universe + result = await enhanced_portfolio_management_agent( + state, + universe_name="tech", + api_keys=api_keys + ) + + print("Enhanced Portfolio Analysis Complete!") + return result + + +if __name__ == "__main__": + # Run the demo + asyncio.run(demo_enhanced_portfolio_manager()) \ No newline at end of file diff --git a/src/agents/enhanced_sentiment.py b/src/agents/enhanced_sentiment.py new file mode 100644 index 000000000..b61f5a169 --- /dev/null +++ b/src/agents/enhanced_sentiment.py @@ -0,0 +1,669 @@ +""" +Enhanced NLP modules for news and social media sentiment analysis. +Includes advanced sentiment analysis, entity extraction, and event detection. +""" + +import asyncio +import json +import logging +import re +from datetime import datetime, timedelta +from typing import Dict, List, Optional, Any, Tuple, Union +from dataclasses import dataclass, asdict +from enum import Enum +import aiohttp +import pandas as pd +from collections import defaultdict, Counter +import numpy as np + +from ..data.cache import Cache +from ..data.models import CompanyNews + +logger = logging.getLogger(__name__) + +class SentimentScore(str, Enum): + """Sentiment classification levels""" + VERY_POSITIVE = "very_positive" + POSITIVE = "positive" + NEUTRAL = "neutral" + NEGATIVE = "negative" + VERY_NEGATIVE = "very_negative" + +class NewsCategory(str, Enum): + """News categories for classification""" + EARNINGS = "earnings" + PRODUCT_LAUNCH = "product_launch" + MERGER_ACQUISITION = "merger_acquisition" + REGULATORY = "regulatory" + MANAGEMENT_CHANGE = "management_change" + PARTNERSHIP = "partnership" + LEGAL_ISSUE = "legal_issue" + MARKET_ANALYSIS = "market_analysis" + GENERAL = "general" + +@dataclass +class SentimentAnalysis: + """Detailed sentiment analysis results""" + text: str + overall_sentiment: SentimentScore + confidence: float # 0-1 + positive_score: float + negative_score: float + neutral_score: float + emotions: Dict[str, float] # emotion -> score + key_phrases: List[str] + entities: List[Dict[str, Any]] # Named entities + category: NewsCategory + market_relevance: float # 0-1 + urgency_score: float # 0-1 + +@dataclass +class SocialSentiment: + """Social media sentiment aggregation""" + ticker: str + platform: str # twitter, reddit, stocktwits + total_mentions: int + sentiment_distribution: Dict[SentimentScore, int] + average_sentiment: float # -1 to 1 + trending_topics: List[str] + influence_score: float # Weighted by follower count/engagement + timestamp: datetime + sample_posts: List[Dict[str, Any]] + +@dataclass +class NewsEvent: + """Detected news event""" + title: str + description: str + category: NewsCategory + timestamp: datetime + affected_tickers: List[str] + sentiment_impact: float # -1 to 1 + market_impact_prediction: float # 0-1 + confidence: float + source_quality: float # Reliability of source + keywords: List[str] + +class TextPreprocessor: + """Advanced text preprocessing for financial news""" + + def __init__(self): + # Financial stop words (words to remove) + self.stop_words = { + 'the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for', 'of', + 'with', 'by', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', + 'has', 'had', 'do', 'does', 'did', 'will', 'would', 'could', 'should' + } + + # Financial terms and their sentiment weights + self.financial_sentiment_lexicon = { + # Very Positive + "soar": 0.9, "surge": 0.8, "rally": 0.7, "boom": 0.8, "breakout": 0.7, + "outperform": 0.8, "beat": 0.7, "exceed": 0.7, "strong": 0.6, "bullish": 0.8, + "upgrade": 0.7, "buy": 0.6, "growth": 0.5, "profit": 0.6, "revenue": 0.4, + + # Positive + "gain": 0.5, "rise": 0.4, "up": 0.3, "increase": 0.4, "improve": 0.5, + "positive": 0.5, "optimistic": 0.6, "confident": 0.5, "opportunity": 0.4, + + # Negative + "fall": -0.4, "drop": -0.5, "decline": -0.5, "down": -0.3, "decrease": -0.4, + "negative": -0.5, "concern": -0.4, "worry": -0.5, "risk": -0.3, "challenge": -0.3, + + # Very Negative + "crash": -0.9, "plunge": -0.8, "collapse": -0.9, "plummet": -0.8, "tank": -0.7, + "bearish": -0.8, "downgrade": -0.7, "sell": -0.6, "loss": -0.6, "weak": -0.5, + "disappointing": -0.6, "miss": -0.6, "cut": -0.5, "layoffs": -0.7, "bankruptcy": -0.9 + } + + # Ticker symbol pattern + self.ticker_pattern = re.compile(r'\\$([A-Z]{1,5})') + + # Number patterns + self.number_pattern = re.compile(r'\\b\\d+(?:\\.\\d+)?(?:[BMK])?\\b', re.IGNORECASE) + + def clean_text(self, text: str) -> str: + """Clean and normalize text""" + # Convert to lowercase + text = text.lower() + + # Remove URLs + text = re.sub(r'http\\S+|www\\S+|https\\S+', '', text, flags=re.MULTILINE) + + # Remove special characters but keep important punctuation + text = re.sub(r'[^a-zA-Z0-9\\s\\.,!?$%]', '', text) + + # Normalize whitespace + text = re.sub(r'\\s+', ' ', text).strip() + + return text + + def extract_tickers(self, text: str) -> List[str]: + """Extract ticker symbols from text""" + tickers = [] + + # Find $TICKER patterns + matches = self.ticker_pattern.findall(text.upper()) + tickers.extend(matches) + + # Find standalone ticker patterns (3-5 uppercase letters) + standalone_pattern = re.compile(r'\\b[A-Z]{3,5}\\b') + standalone_matches = standalone_pattern.findall(text.upper()) + + # Filter out common false positives + false_positives = {'THE', 'AND', 'FOR', 'YOU', 'ARE', 'ALL', 'NEW', 'GET', 'NOW', 'SEE'} + standalone_matches = [t for t in standalone_matches if t not in false_positives] + + tickers.extend(standalone_matches) + + return list(set(tickers)) # Remove duplicates + + def extract_numbers_and_percentages(self, text: str) -> Dict[str, List[float]]: + """Extract numbers and percentages from text""" + numbers = [] + percentages = [] + + # Find percentage patterns + pct_matches = re.findall(r'\\b(\\d+(?:\\.\\d+)?)%', text) + percentages.extend([float(m) for m in pct_matches]) + + # Find number patterns + num_matches = self.number_pattern.findall(text) + for match in num_matches: + try: + # Handle B, M, K suffixes + if match.upper().endswith('B'): + numbers.append(float(match[:-1]) * 1e9) + elif match.upper().endswith('M'): + numbers.append(float(match[:-1]) * 1e6) + elif match.upper().endswith('K'): + numbers.append(float(match[:-1]) * 1e3) + else: + numbers.append(float(match)) + except ValueError: + continue + + return {"numbers": numbers, "percentages": percentages} + + def tokenize(self, text: str) -> List[str]: + """Tokenize text into words/phrases""" + # Simple tokenization (could be enhanced with spaCy/NLTK) + words = re.findall(r'\\b\\w+\\b', text.lower()) + return [word for word in words if word not in self.stop_words and len(word) > 2] + +class AdvancedSentimentAnalyzer: + """Advanced sentiment analysis for financial text""" + + def __init__(self): + self.preprocessor = TextPreprocessor() + self.cache = Cache() + + # Category keywords for news classification + self.category_keywords = { + NewsCategory.EARNINGS: ["earnings", "revenue", "profit", "eps", "quarterly", "guidance"], + NewsCategory.PRODUCT_LAUNCH: ["launch", "product", "release", "unveil", "announce"], + NewsCategory.MERGER_ACQUISITION: ["merger", "acquisition", "acquire", "deal", "buyout"], + NewsCategory.REGULATORY: ["regulatory", "regulation", "sec", "fda", "fcc", "compliance"], + NewsCategory.MANAGEMENT_CHANGE: ["ceo", "cfo", "executive", "management", "appoint", "resign"], + NewsCategory.PARTNERSHIP: ["partnership", "alliance", "collaborate", "joint venture"], + NewsCategory.LEGAL_ISSUE: ["lawsuit", "legal", "court", "settlement", "investigation"], + NewsCategory.MARKET_ANALYSIS: ["analyst", "target", "rating", "recommendation", "outlook"] + } + + async def analyze_sentiment(self, text: str, ticker: Optional[str] = None) -> SentimentAnalysis: + """Perform comprehensive sentiment analysis""" + + # Preprocess text + cleaned_text = self.preprocessor.clean_text(text) + tokens = self.preprocessor.tokenize(cleaned_text) + + # Calculate sentiment scores + sentiment_scores = self._calculate_sentiment_scores(tokens, text) + + # Classify overall sentiment + overall_sentiment = self._classify_sentiment(sentiment_scores) + + # Extract key phrases + key_phrases = self._extract_key_phrases(text, tokens) + + # Extract entities + entities = self._extract_entities(text) + + # Classify news category + category = self._classify_news_category(text) + + # Calculate market relevance + market_relevance = self._calculate_market_relevance(text, ticker) + + # Calculate urgency + urgency_score = self._calculate_urgency(text, sentiment_scores) + + return SentimentAnalysis( + text=text, + overall_sentiment=overall_sentiment, + confidence=sentiment_scores["confidence"], + positive_score=sentiment_scores["positive"], + negative_score=sentiment_scores["negative"], + neutral_score=sentiment_scores["neutral"], + emotions=self._detect_emotions(text), + key_phrases=key_phrases, + entities=entities, + category=category, + market_relevance=market_relevance, + urgency_score=urgency_score + ) + + def _calculate_sentiment_scores(self, tokens: List[str], original_text: str) -> Dict[str, float]: + """Calculate detailed sentiment scores""" + positive_score = 0.0 + negative_score = 0.0 + total_sentiment_words = 0 + + # Score based on financial lexicon + for token in tokens: + if token in self.preprocessor.financial_sentiment_lexicon: + score = self.preprocessor.financial_sentiment_lexicon[token] + if score > 0: + positive_score += score + else: + negative_score += abs(score) + total_sentiment_words += 1 + + # Normalize scores + if total_sentiment_words > 0: + positive_score /= total_sentiment_words + negative_score /= total_sentiment_words + + # Check for intensity modifiers + intensity_modifiers = { + "very": 1.5, "extremely": 2.0, "highly": 1.3, "significantly": 1.4, + "slightly": 0.5, "somewhat": 0.7, "moderately": 0.8 + } + + text_lower = original_text.lower() + for modifier, multiplier in intensity_modifiers.items(): + if modifier in text_lower: + positive_score *= multiplier + negative_score *= multiplier + break + + # Calculate neutral score + neutral_score = max(0, 1 - positive_score - negative_score) + + # Calculate confidence based on number of sentiment words + confidence = min(1.0, total_sentiment_words / 10.0) # Max confidence at 10+ sentiment words + + return { + "positive": positive_score, + "negative": negative_score, + "neutral": neutral_score, + "confidence": confidence + } + + def _classify_sentiment(self, scores: Dict[str, float]) -> SentimentScore: + """Classify overall sentiment""" + pos = scores["positive"] + neg = scores["negative"] + + if pos > neg + 0.3: + return SentimentScore.VERY_POSITIVE if pos > 0.7 else SentimentScore.POSITIVE + elif neg > pos + 0.3: + return SentimentScore.VERY_NEGATIVE if neg > 0.7 else SentimentScore.NEGATIVE + else: + return SentimentScore.NEUTRAL + + def _extract_key_phrases(self, text: str, tokens: List[str]) -> List[str]: + """Extract key phrases from text""" + # Simple n-gram extraction (could be enhanced with TF-IDF or other methods) + words = text.lower().split() + + # Extract 2-grams and 3-grams + phrases = [] + + for i in range(len(words) - 1): + bigram = f"{words[i]} {words[i+1]}" + if any(word in self.preprocessor.financial_sentiment_lexicon for word in [words[i], words[i+1]]): + phrases.append(bigram) + + for i in range(len(words) - 2): + trigram = f"{words[i]} {words[i+1]} {words[i+2]}" + if any(word in self.preprocessor.financial_sentiment_lexicon for word in [words[i], words[i+1], words[i+2]]): + phrases.append(trigram) + + # Return top phrases by frequency + phrase_counts = Counter(phrases) + return [phrase for phrase, count in phrase_counts.most_common(5)] + + def _extract_entities(self, text: str) -> List[Dict[str, Any]]: + """Extract named entities (companies, people, locations)""" + entities = [] + + # Extract ticker symbols + tickers = self.preprocessor.extract_tickers(text) + for ticker in tickers: + entities.append({ + "text": ticker, + "type": "TICKER", + "confidence": 0.9 + }) + + # Extract numbers and percentages + numbers_data = self.preprocessor.extract_numbers_and_percentages(text) + for pct in numbers_data["percentages"]: + entities.append({ + "text": f"{pct}%", + "type": "PERCENTAGE", + "confidence": 0.8 + }) + + # Simple company name detection (could be enhanced with NER models) + company_patterns = [ + r'\\b([A-Z][a-z]+ (?:Inc|Corp|LLC|Ltd|Co))\\b', + r'\\b([A-Z][a-z]+ (?:& [A-Z][a-z]+)+)\\b' + ] + + for pattern in company_patterns: + matches = re.findall(pattern, text) + for match in matches: + entities.append({ + "text": match, + "type": "COMPANY", + "confidence": 0.7 + }) + + return entities + + def _classify_news_category(self, text: str) -> NewsCategory: + """Classify news into categories""" + text_lower = text.lower() + + category_scores = {} + for category, keywords in self.category_keywords.items(): + score = sum(1 for keyword in keywords if keyword in text_lower) + if score > 0: + category_scores[category] = score + + if category_scores: + return max(category_scores.items(), key=lambda x: x[1])[0] + + return NewsCategory.GENERAL + + def _calculate_market_relevance(self, text: str, ticker: Optional[str] = None) -> float: + """Calculate how relevant the news is to market movements""" + relevance_score = 0.0 + + # Check for financial keywords + financial_keywords = [ + "earnings", "revenue", "profit", "loss", "stock", "shares", "market", + "price", "trading", "analyst", "rating", "target", "forecast" + ] + + text_lower = text.lower() + keyword_count = sum(1 for keyword in financial_keywords if keyword in text_lower) + relevance_score += min(0.5, keyword_count * 0.1) + + # Boost if specific ticker mentioned + if ticker and ticker.lower() in text_lower: + relevance_score += 0.3 + + # Check for quantitative information + numbers_data = self.preprocessor.extract_numbers_and_percentages(text) + if numbers_data["percentages"]: + relevance_score += 0.2 + + return min(1.0, relevance_score) + + def _calculate_urgency(self, text: str, sentiment_scores: Dict[str, float]) -> float: + """Calculate urgency score for the news""" + urgency_keywords = [ + "breaking", "urgent", "alert", "immediate", "emergency", "now", "today", + "plunge", "surge", "crash", "soar", "halt", "suspend" + ] + + text_lower = text.lower() + urgency_count = sum(1 for keyword in urgency_keywords if keyword in text_lower) + + # Base urgency from keywords + urgency_score = min(0.6, urgency_count * 0.2) + + # Add urgency based on extreme sentiment + extreme_sentiment = max(sentiment_scores["positive"], sentiment_scores["negative"]) + if extreme_sentiment > 0.7: + urgency_score += 0.3 + + return min(1.0, urgency_score) + + def _detect_emotions(self, text: str) -> Dict[str, float]: + """Detect emotions in text""" + emotion_keywords = { + "fear": ["fear", "afraid", "worried", "concern", "anxiety", "panic", "risk"], + "greed": ["opportunity", "profit", "gain", "growth", "bullish", "buy"], + "excitement": ["excited", "thrilled", "amazing", "breakthrough", "surge"], + "anger": ["angry", "frustrated", "disappointed", "outraged", "furious"], + "optimism": ["optimistic", "confident", "positive", "hopeful", "bright"], + "pessimism": ["pessimistic", "negative", "doubt", "uncertain", "dark"] + } + + text_lower = text.lower() + emotions = {} + + for emotion, keywords in emotion_keywords.items(): + score = sum(1 for keyword in keywords if keyword in text_lower) + emotions[emotion] = min(1.0, score * 0.2) + + return emotions + +class SocialMediaAnalyzer: + """Analyze sentiment from social media platforms""" + + def __init__(self, api_keys: Dict[str, str]): + self.api_keys = api_keys + self.session: Optional[aiohttp.ClientSession] = None + self.sentiment_analyzer = AdvancedSentimentAnalyzer() + self.cache = Cache() + + async def connect(self) -> None: + """Initialize connections""" + self.session = aiohttp.ClientSession() + + async def disconnect(self) -> None: + """Close connections""" + if self.session: + await self.session.close() + + async def analyze_social_sentiment(self, ticker: str, hours_back: int = 24) -> SocialSentiment: + """Analyze social sentiment for a ticker""" + + # Collect posts from multiple platforms + all_posts = [] + + # Twitter/X (if API available) + if "twitter" in self.api_keys: + twitter_posts = await self._fetch_twitter_posts(ticker, hours_back) + all_posts.extend(twitter_posts) + + # Reddit (if API available) + if "reddit" in self.api_keys: + reddit_posts = await self._fetch_reddit_posts(ticker, hours_back) + all_posts.extend(reddit_posts) + + # StockTwits (if API available) + stocktwits_posts = await self._fetch_stocktwits_posts(ticker, hours_back) + all_posts.extend(stocktwits_posts) + + if not all_posts: + return self._create_empty_sentiment(ticker) + + # Analyze sentiment for each post + sentiment_results = [] + for post in all_posts: + try: + analysis = await self.sentiment_analyzer.analyze_sentiment(post["text"], ticker) + sentiment_results.append({ + "post": post, + "analysis": analysis + }) + except Exception as e: + logger.warning(f"Error analyzing post sentiment: {e}") + continue + + # Aggregate results + return self._aggregate_social_sentiment(ticker, sentiment_results) + + async def _fetch_twitter_posts(self, ticker: str, hours_back: int) -> List[Dict]: + """Fetch Twitter posts mentioning the ticker""" + # Mock data for demonstration + return [ + { + "text": f"${ticker} looking strong today! Great earnings report.", + "author": "trader123", + "followers": 1000, + "retweets": 5, + "likes": 15, + "timestamp": datetime.now() + } + ] + + async def _fetch_reddit_posts(self, ticker: str, hours_back: int) -> List[Dict]: + """Fetch Reddit posts from investing subreddits""" + # Mock data for demonstration + return [ + { + "text": f"What do you think about {ticker}? Seems undervalued.", + "subreddit": "investing", + "author": "investor456", + "upvotes": 25, + "comments": 10, + "timestamp": datetime.now() + } + ] + + async def _fetch_stocktwits_posts(self, ticker: str, hours_back: int) -> List[Dict]: + """Fetch StockTwits posts""" + # Mock data for demonstration + return [ + { + "text": f"${ticker} breaking out! Target $150", + "author": "stockpro789", + "followers": 500, + "likes": 8, + "timestamp": datetime.now() + } + ] + + def _aggregate_social_sentiment(self, ticker: str, sentiment_results: List[Dict]) -> SocialSentiment: + """Aggregate sentiment results from social media""" + if not sentiment_results: + return self._create_empty_sentiment(ticker) + + # Count sentiment distribution + sentiment_counts = defaultdict(int) + total_sentiment_score = 0.0 + trending_keywords = defaultdict(int) + influence_weighted_sentiment = 0.0 + total_influence = 0.0 + + for result in sentiment_results: + analysis = result["analysis"] + post = result["post"] + + sentiment_counts[analysis.overall_sentiment] += 1 + + # Calculate numeric sentiment score + if analysis.overall_sentiment == SentimentScore.VERY_POSITIVE: + score = 1.0 + elif analysis.overall_sentiment == SentimentScore.POSITIVE: + score = 0.5 + elif analysis.overall_sentiment == SentimentScore.NEUTRAL: + score = 0.0 + elif analysis.overall_sentiment == SentimentScore.NEGATIVE: + score = -0.5 + else: # VERY_NEGATIVE + score = -1.0 + + total_sentiment_score += score + + # Weight by influence (followers, engagement) + influence = post.get("followers", 100) + post.get("likes", 0) * 2 + influence_weighted_sentiment += score * influence + total_influence += influence + + # Collect trending keywords + for phrase in analysis.key_phrases: + trending_keywords[phrase] += 1 + + # Calculate averages + average_sentiment = total_sentiment_score / len(sentiment_results) + weighted_sentiment = influence_weighted_sentiment / total_influence if total_influence > 0 else average_sentiment + + # Get top trending topics + trending_topics = [topic for topic, count in Counter(trending_keywords).most_common(10)] + + # Sample posts for reference + sample_posts = [result["post"] for result in sentiment_results[:5]] + + return SocialSentiment( + ticker=ticker, + platform="aggregated", + total_mentions=len(sentiment_results), + sentiment_distribution=dict(sentiment_counts), + average_sentiment=weighted_sentiment, + trending_topics=trending_topics, + influence_score=total_influence / len(sentiment_results), + timestamp=datetime.now(), + sample_posts=sample_posts + ) + + def _create_empty_sentiment(self, ticker: str) -> SocialSentiment: + """Create empty sentiment result when no data available""" + return SocialSentiment( + ticker=ticker, + platform="aggregated", + total_mentions=0, + sentiment_distribution={}, + average_sentiment=0.0, + trending_topics=[], + influence_score=0.0, + timestamp=datetime.now(), + sample_posts=[] + ) + +# Example usage +async def example_usage(): + """Example of how to use the enhanced sentiment analysis""" + + # Initialize sentiment analyzer + analyzer = AdvancedSentimentAnalyzer() + + # Analyze news article + news_text = "Apple Inc. reported strong quarterly earnings, beating analyst expectations with revenue growth of 15%. The company's stock surged 8% in after-hours trading." + + analysis = await analyzer.analyze_sentiment(news_text, "AAPL") + + print(f"Overall Sentiment: {analysis.overall_sentiment}") + print(f"Confidence: {analysis.confidence:.2f}") + print(f"Key Phrases: {analysis.key_phrases}") + print(f"Category: {analysis.category}") + print(f"Market Relevance: {analysis.market_relevance:.2f}") + + # Initialize social media analyzer + api_keys = { + "twitter": "your_twitter_api_key", + "reddit": "your_reddit_api_key" + } + + social_analyzer = SocialMediaAnalyzer(api_keys) + await social_analyzer.connect() + + # Analyze social sentiment + social_sentiment = await social_analyzer.analyze_social_sentiment("AAPL") + + print(f"\\nSocial Sentiment for AAPL:") + print(f"Total Mentions: {social_sentiment.total_mentions}") + print(f"Average Sentiment: {social_sentiment.average_sentiment:.2f}") + print(f"Trending Topics: {social_sentiment.trending_topics}") + + await social_analyzer.disconnect() + +if __name__ == "__main__": + asyncio.run(example_usage()) \ No newline at end of file diff --git a/src/data/economic_indicators.py b/src/data/economic_indicators.py new file mode 100644 index 000000000..1f4f0e4cf --- /dev/null +++ b/src/data/economic_indicators.py @@ -0,0 +1,554 @@ +""" +Economic indicators data feed for Fed statements, unemployment, CPI, interest rates, etc. +Integrates with various economic data sources for macro analysis. +""" + +import asyncio +import json +import logging +from datetime import datetime, timedelta +from typing import Dict, List, Optional, Any, Union +from dataclasses import dataclass +from enum import Enum +import aiohttp +import pandas as pd +from pathlib import Path + +from .cache import Cache + +logger = logging.getLogger(__name__) + +class IndicatorType(str, Enum): + """Types of economic indicators""" + INTEREST_RATES = "interest_rates" + INFLATION = "inflation" + EMPLOYMENT = "employment" + GDP = "gdp" + CONSUMER_CONFIDENCE = "consumer_confidence" + MANUFACTURING = "manufacturing" + HOUSING = "housing" + MONETARY_POLICY = "monetary_policy" + TRADE = "trade" + MARKET_SENTIMENT = "market_sentiment" + +class IndicatorImportance(str, Enum): + """Importance levels for indicators""" + HIGH = "high" + MEDIUM = "medium" + LOW = "low" + +@dataclass +class EconomicIndicator: + """Economic indicator data point""" + name: str + value: float + previous_value: Optional[float] + forecast: Optional[float] + timestamp: datetime + indicator_type: IndicatorType + importance: IndicatorImportance + source: str + unit: str + frequency: str # daily, weekly, monthly, quarterly, annual + country: str = "US" + description: Optional[str] = None + + @property + def change(self) -> Optional[float]: + """Calculate change from previous value""" + if self.previous_value is not None: + return self.value - self.previous_value + return None + + @property + def change_percent(self) -> Optional[float]: + """Calculate percentage change from previous value""" + if self.previous_value is not None and self.previous_value != 0: + return ((self.value - self.previous_value) / self.previous_value) * 100 + return None + + @property + def vs_forecast(self) -> Optional[float]: + """Calculate difference from forecast""" + if self.forecast is not None: + return self.value - self.forecast + return None + +@dataclass +class FedStatement: + """Federal Reserve statement data""" + date: datetime + title: str + content: str + decision: str # "hold", "raise", "cut" + rate_change: float # Basis points + current_rate: float + tone: str # "hawkish", "dovish", "neutral" + key_themes: List[str] + market_impact_score: float # 0-10 scale + +@dataclass +class EconomicEvent: + """Scheduled economic event""" + name: str + date: datetime + importance: IndicatorImportance + forecast: Optional[float] + previous: Optional[float] + actual: Optional[float] + currency: str + country: str + description: str + +class EconomicDataProvider: + """Base class for economic data providers""" + + def __init__(self, api_key: Optional[str] = None): + self.api_key = api_key + self.session: Optional[aiohttp.ClientSession] = None + self.cache = Cache() + + async def connect(self) -> None: + """Initialize connection""" + self.session = aiohttp.ClientSession() + + async def disconnect(self) -> None: + """Close connection""" + if self.session: + await self.session.close() + +class FREDProvider(EconomicDataProvider): + """Federal Reserve Economic Data (FRED) provider""" + + def __init__(self, api_key: str): + super().__init__(api_key) + self.base_url = "https://api.stlouisfed.org/fred" + + # Key FRED series IDs + self.series_mapping = { + "fed_funds_rate": "FEDFUNDS", + "unemployment_rate": "UNRATE", + "cpi_all_items": "CPIAUCSL", + "core_cpi": "CPILFESL", + "gdp": "GDP", + "real_gdp": "GDPC1", + "consumer_confidence": "UMCSENT", + "manufacturing_pmi": "NAPM", + "housing_starts": "HOUST", + "industrial_production": "INDPRO", + "retail_sales": "RSAFS", + "nonfarm_payrolls": "PAYEMS", + "initial_claims": "ICSA", + "10_year_treasury": "GS10", + "2_year_treasury": "GS2", + "yield_curve_spread": "T10Y2Y" + } + + async def get_series_data(self, series_id: str, limit: int = 100) -> List[Dict]: + """Get data for a specific FRED series""" + if not self.session: + await self.connect() + + cache_key = f"fred_{series_id}_{limit}" + cached_data = self.cache.get(cache_key) + if cached_data: + return cached_data + + url = f"{self.base_url}/series/observations" + params = { + 'series_id': series_id, + 'api_key': self.api_key, + 'file_type': 'json', + 'limit': limit, + 'sort_order': 'desc' + } + + try: + async with self.session.get(url, params=params) as response: + data = await response.json() + observations = data.get('observations', []) + + # Cache for 1 hour + self.cache.set(cache_key, observations, ttl=3600) + return observations + except Exception as e: + logger.error(f"Error fetching FRED data for {series_id}: {e}") + return [] + + async def get_indicator(self, indicator_name: str) -> Optional[EconomicIndicator]: + """Get latest value for an economic indicator""" + if indicator_name not in self.series_mapping: + logger.warning(f"Unknown indicator: {indicator_name}") + return None + + series_id = self.series_mapping[indicator_name] + data = await self.get_series_data(series_id, limit=2) + + if not data: + return None + + # Get latest and previous values + latest = data[0] + previous = data[1] if len(data) > 1 else None + + # Map to indicator type + indicator_type = self._map_to_indicator_type(indicator_name) + importance = self._get_indicator_importance(indicator_name) + + try: + return EconomicIndicator( + name=indicator_name, + value=float(latest['value']), + previous_value=float(previous['value']) if previous and previous['value'] != '.' else None, + forecast=None, # FRED doesn't provide forecasts + timestamp=datetime.strptime(latest['date'], '%Y-%m-%d'), + indicator_type=indicator_type, + importance=importance, + source="FRED", + unit=self._get_unit(indicator_name), + frequency=self._get_frequency(indicator_name), + description=self._get_description(indicator_name) + ) + except (ValueError, KeyError) as e: + logger.error(f"Error parsing FRED data for {indicator_name}: {e}") + return None + + def _map_to_indicator_type(self, indicator_name: str) -> IndicatorType: + """Map indicator name to type""" + mapping = { + "fed_funds_rate": IndicatorType.INTEREST_RATES, + "unemployment_rate": IndicatorType.EMPLOYMENT, + "cpi_all_items": IndicatorType.INFLATION, + "core_cpi": IndicatorType.INFLATION, + "gdp": IndicatorType.GDP, + "real_gdp": IndicatorType.GDP, + "consumer_confidence": IndicatorType.CONSUMER_CONFIDENCE, + "manufacturing_pmi": IndicatorType.MANUFACTURING, + "housing_starts": IndicatorType.HOUSING, + "10_year_treasury": IndicatorType.INTEREST_RATES, + "2_year_treasury": IndicatorType.INTEREST_RATES, + "yield_curve_spread": IndicatorType.INTEREST_RATES + } + return mapping.get(indicator_name, IndicatorType.MARKET_SENTIMENT) + + def _get_indicator_importance(self, indicator_name: str) -> IndicatorImportance: + """Get importance level for indicator""" + high_importance = { + "fed_funds_rate", "unemployment_rate", "cpi_all_items", "core_cpi", + "gdp", "nonfarm_payrolls", "10_year_treasury" + } + medium_importance = { + "consumer_confidence", "manufacturing_pmi", "housing_starts", + "initial_claims", "2_year_treasury" + } + + if indicator_name in high_importance: + return IndicatorImportance.HIGH + elif indicator_name in medium_importance: + return IndicatorImportance.MEDIUM + else: + return IndicatorImportance.LOW + + def _get_unit(self, indicator_name: str) -> str: + """Get unit for indicator""" + units = { + "fed_funds_rate": "Percent", + "unemployment_rate": "Percent", + "cpi_all_items": "Index", + "core_cpi": "Index", + "gdp": "Billions of Dollars", + "consumer_confidence": "Index", + "housing_starts": "Thousands of Units", + "nonfarm_payrolls": "Thousands of Persons", + "initial_claims": "Number", + "10_year_treasury": "Percent", + "2_year_treasury": "Percent" + } + return units.get(indicator_name, "") + + def _get_frequency(self, indicator_name: str) -> str: + """Get frequency for indicator""" + frequencies = { + "fed_funds_rate": "monthly", + "unemployment_rate": "monthly", + "cpi_all_items": "monthly", + "core_cpi": "monthly", + "gdp": "quarterly", + "consumer_confidence": "monthly", + "housing_starts": "monthly", + "nonfarm_payrolls": "monthly", + "initial_claims": "weekly", + "10_year_treasury": "daily", + "2_year_treasury": "daily" + } + return frequencies.get(indicator_name, "monthly") + + def _get_description(self, indicator_name: str) -> str: + """Get description for indicator""" + descriptions = { + "fed_funds_rate": "Federal Funds Effective Rate", + "unemployment_rate": "Unemployment Rate", + "cpi_all_items": "Consumer Price Index for All Urban Consumers: All Items", + "core_cpi": "Consumer Price Index for All Urban Consumers: All Items Less Food and Energy", + "gdp": "Gross Domestic Product", + "consumer_confidence": "University of Michigan Consumer Sentiment", + "housing_starts": "Housing Starts: Total: New Privately Owned Housing Units Started", + "nonfarm_payrolls": "All Employees, Total Nonfarm", + "initial_claims": "Initial Claims for Unemployment Insurance", + "10_year_treasury": "10-Year Treasury Constant Maturity Rate", + "2_year_treasury": "2-Year Treasury Constant Maturity Rate" + } + return descriptions.get(indicator_name, "") + +class EconomicCalendarProvider(EconomicDataProvider): + """Economic calendar provider for scheduled events""" + + def __init__(self, api_key: Optional[str] = None): + super().__init__(api_key) + self.base_url = "https://financialdata.io/api/v1/economic-calendar" # Example API + + async def get_upcoming_events(self, days_ahead: int = 7) -> List[EconomicEvent]: + """Get upcoming economic events""" + if not self.session: + await self.connect() + + cache_key = f"economic_calendar_{days_ahead}" + cached_events = self.cache.get(cache_key) + if cached_events: + return cached_events + + # Mock data for demonstration (replace with actual API call) + upcoming_events = [ + EconomicEvent( + name="Non-Farm Payrolls", + date=datetime.now() + timedelta(days=2), + importance=IndicatorImportance.HIGH, + forecast=180000, + previous=150000, + actual=None, + currency="USD", + country="US", + description="Change in the number of employed people during the previous month" + ), + EconomicEvent( + name="Consumer Price Index", + date=datetime.now() + timedelta(days=5), + importance=IndicatorImportance.HIGH, + forecast=3.2, + previous=3.1, + actual=None, + currency="USD", + country="US", + description="Change in the price of goods and services" + ) + ] + + # Cache for 4 hours + self.cache.set(cache_key, upcoming_events, ttl=14400) + return upcoming_events + +class FedWatchProvider(EconomicDataProvider): + """Federal Reserve communications and policy tracking""" + + def __init__(self): + super().__init__() + self.base_url = "https://www.federalreserve.gov" + + async def get_recent_statements(self, limit: int = 10) -> List[FedStatement]: + """Get recent Fed statements and decisions""" + # Mock data for demonstration + statements = [ + FedStatement( + date=datetime.now() - timedelta(days=45), + title="Federal Reserve maintains target range for federal funds rate at 5.25 to 5.5 percent", + content="The Federal Reserve decided to maintain the target range...", + decision="hold", + rate_change=0, + current_rate=5.375, + tone="neutral", + key_themes=["inflation", "employment", "economic outlook"], + market_impact_score=7.5 + ) + ] + return statements + + async def analyze_fed_tone(self, statement_text: str) -> Dict[str, Any]: + """Analyze Fed statement tone using NLP""" + # Placeholder for NLP analysis + # In production, you'd use a proper NLP model + + hawkish_words = ["inflation", "tighten", "raise", "aggressive", "concern"] + dovish_words = ["support", "accommodate", "lower", "stimulus", "gradual"] + + hawkish_score = sum(1 for word in hawkish_words if word in statement_text.lower()) + dovish_score = sum(1 for word in dovish_words if word in statement_text.lower()) + + if hawkish_score > dovish_score: + tone = "hawkish" + confidence = hawkish_score / (hawkish_score + dovish_score) + elif dovish_score > hawkish_score: + tone = "dovish" + confidence = dovish_score / (hawkish_score + dovish_score) + else: + tone = "neutral" + confidence = 0.5 + + return { + "tone": tone, + "confidence": confidence, + "hawkish_score": hawkish_score, + "dovish_score": dovish_score + } + +class EconomicDataManager: + """Manages economic data from multiple sources""" + + def __init__(self, fred_api_key: str): + self.fred_provider = FREDProvider(fred_api_key) + self.calendar_provider = EconomicCalendarProvider() + self.fed_provider = FedWatchProvider() + self.cache = Cache() + + # Key indicators to track + self.key_indicators = [ + "fed_funds_rate", "unemployment_rate", "cpi_all_items", "core_cpi", + "gdp", "consumer_confidence", "manufacturing_pmi", "housing_starts", + "nonfarm_payrolls", "initial_claims", "10_year_treasury", "2_year_treasury" + ] + + async def start(self) -> None: + """Start all providers""" + await self.fred_provider.connect() + await self.calendar_provider.connect() + await self.fed_provider.connect() + logger.info("Economic data manager started") + + async def stop(self) -> None: + """Stop all providers""" + await self.fred_provider.disconnect() + await self.calendar_provider.disconnect() + await self.fed_provider.disconnect() + logger.info("Economic data manager stopped") + + async def get_all_indicators(self) -> Dict[str, EconomicIndicator]: + """Get all key economic indicators""" + indicators = {} + + for indicator_name in self.key_indicators: + try: + indicator = await self.fred_provider.get_indicator(indicator_name) + if indicator: + indicators[indicator_name] = indicator + except Exception as e: + logger.error(f"Error fetching {indicator_name}: {e}") + + return indicators + + async def get_economic_summary(self) -> Dict[str, Any]: + """Get comprehensive economic summary""" + indicators = await self.get_all_indicators() + upcoming_events = await self.calendar_provider.get_upcoming_events() + fed_statements = await self.fed_provider.get_recent_statements() + + # Calculate economic health score + health_score = self._calculate_economic_health_score(indicators) + + return { + "indicators": indicators, + "upcoming_events": upcoming_events, + "fed_statements": fed_statements, + "health_score": health_score, + "last_updated": datetime.now(), + "summary": self._generate_summary(indicators, health_score) + } + + def _calculate_economic_health_score(self, indicators: Dict[str, EconomicIndicator]) -> float: + """Calculate overall economic health score (0-100)""" + score = 50 # Start with neutral + + # Unemployment rate (lower is better) + if "unemployment_rate" in indicators: + unemployment = indicators["unemployment_rate"] + if unemployment.value < 4.0: + score += 10 + elif unemployment.value > 6.0: + score -= 10 + + # GDP growth (positive is better) + if "gdp" in indicators: + gdp = indicators["gdp"] + if gdp.change_percent and gdp.change_percent > 2.0: + score += 10 + elif gdp.change_percent and gdp.change_percent < 0: + score -= 15 + + # Inflation (target around 2%) + if "core_cpi" in indicators: + cpi = indicators["core_cpi"] + if cpi.change_percent: + if 1.5 <= cpi.change_percent <= 2.5: + score += 10 + elif cpi.change_percent > 4.0: + score -= 15 + + # Consumer confidence + if "consumer_confidence" in indicators: + confidence = indicators["consumer_confidence"] + if confidence.value > 100: + score += 5 + elif confidence.value < 80: + score -= 5 + + return max(0, min(100, score)) + + def _generate_summary(self, indicators: Dict[str, EconomicIndicator], health_score: float) -> str: + """Generate text summary of economic conditions""" + if health_score >= 70: + outlook = "positive" + elif health_score >= 50: + outlook = "neutral" + else: + outlook = "concerning" + + summary = f"Economic outlook appears {outlook} (score: {health_score:.1f}/100). " + + # Add key highlights + if "unemployment_rate" in indicators: + unemployment = indicators["unemployment_rate"] + summary += f"Unemployment at {unemployment.value}%. " + + if "core_cpi" in indicators: + cpi = indicators["core_cpi"] + if cpi.change_percent: + summary += f"Core inflation running at {cpi.change_percent:.1f}% annually. " + + if "fed_funds_rate" in indicators: + fed_rate = indicators["fed_funds_rate"] + summary += f"Fed funds rate at {fed_rate.value}%. " + + return summary + +# Example usage +async def example_usage(): + """Example of how to use the economic data system""" + + # Initialize with your FRED API key + manager = EconomicDataManager(fred_api_key="your_fred_api_key") + + await manager.start() + + # Get economic summary + summary = await manager.get_economic_summary() + print("Economic Summary:") + print(summary["summary"]) + print(f"Health Score: {summary['health_score']:.1f}/100") + + # Get specific indicator + unemployment = await manager.fred_provider.get_indicator("unemployment_rate") + if unemployment: + print(f"Unemployment Rate: {unemployment.value}% (prev: {unemployment.previous_value}%)") + + await manager.stop() + +if __name__ == "__main__": + asyncio.run(example_usage()) \ No newline at end of file diff --git a/src/data/models.py b/src/data/models.py index 994da88fb..a1488757b 100644 --- a/src/data/models.py +++ b/src/data/models.py @@ -1,4 +1,7 @@ -from pydantic import BaseModel +from pydantic import BaseModel, Field +from datetime import datetime, date +from typing import Dict, List, Optional, Any, Literal, Union +from enum import Enum class Price(BaseModel): @@ -138,10 +141,31 @@ class CompanyFactsResponse(BaseModel): company_facts: CompanyFacts +class PositionType(str, Enum): + LONG = "long" + SHORT = "short" + CALL = "call" + PUT = "put" + +class Greeks(BaseModel): + """Options Greeks""" + delta: float + gamma: float + theta: float + vega: float + rho: float + class Position(BaseModel): cash: float = 0.0 shares: int = 0 ticker: str + position_type: PositionType = PositionType.LONG + cost_basis: float = 0.0 + margin_requirement: float = 0.0 # Margin required for short positions + option_type: Optional[Literal["call", "put"]] = None + strike_price: Optional[float] = None + expiration_date: Optional[date] = None + greeks: Optional[Greeks] = None class Portfolio(BaseModel): @@ -159,6 +183,7 @@ class AnalystSignal(BaseModel): class TickerAnalysis(BaseModel): ticker: str analyst_signals: dict[str, AnalystSignal] # agent_name -> signal mapping + security_info: Optional["SecurityInfo"] = None class AgentStateData(BaseModel): @@ -167,8 +192,94 @@ class AgentStateData(BaseModel): start_date: str end_date: str ticker_analyses: dict[str, TickerAnalysis] # ticker -> analysis mapping - - + trading_universe: Optional["TradingUniverse"] = None + options_data: Dict[str, List["OptionsInfo"]] = Field(default_factory=dict) # ticker -> options chain + + +class AssetClass(str, Enum): + EQUITY = "equity" + ETF = "etf" + OPTION = "option" + FUTURES = "futures" + BONDS = "bonds" + COMMODITIES = "commodities" + +class MarketCapCategory(str, Enum): + MEGA_CAP = "mega_cap" # >$200B + LARGE_CAP = "large_cap" # $10B-$200B + MID_CAP = "mid_cap" # $2B-$10B + SMALL_CAP = "small_cap" # $300M-$2B + MICRO_CAP = "micro_cap" # <$300M + +class Sector(str, Enum): + TECHNOLOGY = "technology" + HEALTHCARE = "healthcare" + FINANCIALS = "financials" + CONSUMER_DISCRETIONARY = "consumer_discretionary" + INDUSTRIALS = "industrials" + COMMUNICATION = "communication" + CONSUMER_STAPLES = "consumer_staples" + ENERGY = "energy" + UTILITIES = "utilities" + REAL_ESTATE = "real_estate" + MATERIALS = "materials" + +class TradingUniverse(BaseModel): + """Defines the trading universe with filters and constraints""" + name: str + description: str + asset_classes: List[AssetClass] + sectors: Optional[List[Sector]] = None + market_cap_categories: Optional[List[MarketCapCategory]] = None + min_market_cap: Optional[float] = None # Minimum market cap in billions + min_daily_volume: Optional[float] = None # Minimum daily volume in shares + min_avg_volume: Optional[float] = None # Minimum 30-day average volume + excluded_tickers: List[str] = Field(default_factory=list) + included_tickers: List[str] = Field(default_factory=list) # Force include specific tickers + indices: List[str] = Field(default_factory=list) # SPY, QQQ, etc. + max_positions: Optional[int] = None # Maximum number of positions + rebalance_frequency: str = "daily" # daily, weekly, monthly + liquidity_threshold: Optional[float] = None # Minimum liquidity score + +class SecurityInfo(BaseModel): + """Extended security information""" + ticker: str + name: str + asset_class: AssetClass + sector: Optional[Sector] = None + market_cap: Optional[float] = None # Market cap in billions + market_cap_category: Optional[MarketCapCategory] = None + average_volume_30d: Optional[float] = None + beta: Optional[float] = None + is_sp500: bool = False + is_nasdaq100: bool = False + is_dow: bool = False + options_available: bool = False + min_tick_size: float = 0.01 + lot_size: int = 1 + trading_hours_extended: bool = True + last_updated: datetime = Field(default_factory=datetime.utcnow) + +class OptionsInfo(BaseModel): + """Options chain information""" + underlying_ticker: str + strike: float + expiration: date + option_type: Literal["call", "put"] + bid: Optional[float] = None + ask: Optional[float] = None + last_price: Optional[float] = None + volume: Optional[int] = None + open_interest: Optional[int] = None + implied_volatility: Optional[float] = None + delta: Optional[float] = None + gamma: Optional[float] = None + theta: Optional[float] = None + vega: Optional[float] = None + rho: Optional[float] = None + days_to_expiration: Optional[int] = None + option_symbol: str # Full option symbol + class AgentStateMetadata(BaseModel): show_reasoning: bool = False model_config = {"extra": "allow"} diff --git a/src/data/political_signals.py b/src/data/political_signals.py new file mode 100644 index 000000000..71ef17620 --- /dev/null +++ b/src/data/political_signals.py @@ -0,0 +1,527 @@ +""" +Political signals monitoring system for election cycles, sanctions, fiscal policy shifts. +Tracks political events that may impact markets. +""" + +import asyncio +import json +import logging +from datetime import datetime, timedelta +from typing import Dict, List, Optional, Any, Union +from dataclasses import dataclass, asdict +from enum import Enum +import aiohttp +import re +from pathlib import Path + +from .cache import Cache +from .models import CompanyNews + +logger = logging.getLogger(__name__) + +class PoliticalEventType(str, Enum): + """Types of political events""" + ELECTION = "election" + POLICY_ANNOUNCEMENT = "policy_announcement" + SANCTIONS = "sanctions" + TRADE_POLICY = "trade_policy" + FISCAL_POLICY = "fiscal_policy" + REGULATORY_CHANGE = "regulatory_change" + GEOPOLITICAL_TENSION = "geopolitical_tension" + GOVERNMENT_SHUTDOWN = "government_shutdown" + DEBT_CEILING = "debt_ceiling" + TAX_POLICY = "tax_policy" + +class PoliticalImpact(str, Enum): + """Impact levels for political events""" + HIGH = "high" + MEDIUM = "medium" + LOW = "low" + +class MarketSector(str, Enum): + """Market sectors affected by political events""" + TECHNOLOGY = "technology" + HEALTHCARE = "healthcare" + ENERGY = "energy" + FINANCIALS = "financials" + DEFENSE = "defense" + INFRASTRUCTURE = "infrastructure" + TRADE = "trade" + COMMODITIES = "commodities" + BROAD_MARKET = "broad_market" + +@dataclass +class PoliticalEvent: + """Political event data structure""" + title: str + description: str + event_type: PoliticalEventType + date: datetime + impact_level: PoliticalImpact + affected_sectors: List[MarketSector] + country: str + source: str + sentiment_score: float # -1 (very negative) to 1 (very positive) + market_impact_score: float # 0-10 scale + keywords: List[str] + url: Optional[str] = None + + @property + def is_recent(self) -> bool: + """Check if event is within last 30 days""" + return (datetime.now() - self.date).days <= 30 + + @property + def urgency_score(self) -> float: + """Calculate urgency based on impact and recency""" + impact_weight = {"high": 1.0, "medium": 0.6, "low": 0.3}[self.impact_level] + days_old = (datetime.now() - self.date).days + recency_weight = max(0, 1 - (days_old / 30)) # Decay over 30 days + return impact_weight * recency_weight + +@dataclass +class ElectionData: + """Election tracking data""" + election_type: str # "presidential", "congressional", "gubernatorial" + date: datetime + candidates: List[str] + polls: Dict[str, float] # candidate -> poll percentage + betting_odds: Dict[str, float] # candidate -> implied probability + key_issues: List[str] + market_implications: Dict[MarketSector, str] # sector -> implication + +@dataclass +class PolicyTracker: + """Track specific policy developments""" + policy_area: str + current_status: str + probability_of_passage: float # 0-1 + expected_timeline: str + market_impact_analysis: str + affected_stocks: List[str] + last_updated: datetime + +class PoliticalNewsProvider: + """Aggregates political news from multiple sources""" + + def __init__(self, api_keys: Dict[str, str]): + self.api_keys = api_keys + self.session: Optional[aiohttp.ClientSession] = None + self.cache = Cache() + + # Political keywords for filtering + self.political_keywords = { + "election": ["election", "vote", "campaign", "candidate", "polls"], + "policy": ["policy", "bill", "legislation", "congress", "senate"], + "sanctions": ["sanctions", "embargo", "trade war", "tariffs"], + "regulation": ["regulation", "regulatory", "SEC", "FDA", "EPA"], + "fiscal": ["budget", "spending", "deficit", "stimulus", "bailout"], + "geopolitical": ["war", "conflict", "tension", "diplomatic", "military"] + } + + async def connect(self) -> None: + """Initialize connection""" + self.session = aiohttp.ClientSession() + + async def disconnect(self) -> None: + """Close connection""" + if self.session: + await self.session.close() + + async def fetch_political_news(self, hours_back: int = 24) -> List[CompanyNews]: + """Fetch political news from various sources""" + if not self.session: + await self.connect() + + all_news = [] + + # NewsAPI + if "newsapi" in self.api_keys: + news_api_articles = await self._fetch_from_newsapi(hours_back) + all_news.extend(news_api_articles) + + # Reuters/Bloomberg via RSS or API + rss_articles = await self._fetch_from_rss_feeds() + all_news.extend(rss_articles) + + # Filter and deduplicate + political_news = self._filter_political_content(all_news) + return self._deduplicate_news(political_news) + + async def _fetch_from_newsapi(self, hours_back: int) -> List[CompanyNews]: + """Fetch from NewsAPI""" + if not self.session: + return [] + + api_key = self.api_keys.get("newsapi") + if not api_key: + return [] + + from_date = (datetime.now() - timedelta(hours=hours_back)).isoformat() + + url = "https://newsapi.org/v2/everything" + params = { + "q": "politics OR election OR congress OR president OR policy", + "language": "en", + "sortBy": "publishedAt", + "from": from_date, + "apiKey": api_key + } + + try: + async with self.session.get(url, params=params) as response: + data = await response.json() + articles = data.get("articles", []) + + news_list = [] + for article in articles: + news_list.append(CompanyNews( + ticker="POLITICAL", # Special ticker for political news + title=article.get("title", ""), + author=article.get("author", ""), + source=article.get("source", {}).get("name", ""), + date=article.get("publishedAt", ""), + url=article.get("url", ""), + sentiment=None + )) + + return news_list + except Exception as e: + logger.error(f"Error fetching from NewsAPI: {e}") + return [] + + async def _fetch_from_rss_feeds(self) -> List[CompanyNews]: + """Fetch from RSS feeds""" + # Mock RSS feed data for demonstration + return [ + CompanyNews( + ticker="POLITICAL", + title="Congress Debates Infrastructure Spending Bill", + author="Political Reporter", + source="Reuters", + date=datetime.now().isoformat(), + url="https://example.com/news", + sentiment=None + ) + ] + + def _filter_political_content(self, news_list: List[CompanyNews]) -> List[CompanyNews]: + """Filter news for political content""" + filtered_news = [] + + for news in news_list: + title_lower = news.title.lower() + + # Check if any political keywords are present + is_political = any( + keyword in title_lower + for keyword_list in self.political_keywords.values() + for keyword in keyword_list + ) + + if is_political: + filtered_news.append(news) + + return filtered_news + + def _deduplicate_news(self, news_list: List[CompanyNews]) -> List[CompanyNews]: + """Remove duplicate news articles""" + seen_titles = set() + deduplicated = [] + + for news in news_list: + # Simple deduplication based on title similarity + title_key = re.sub(r'[^\w\s]', '', news.title.lower())[:50] + if title_key not in seen_titles: + seen_titles.add(title_key) + deduplicated.append(news) + + return deduplicated + +class PoliticalEventAnalyzer: + """Analyzes political events for market impact""" + + def __init__(self): + self.sector_keywords = { + MarketSector.TECHNOLOGY: ["tech", "data privacy", "antitrust", "regulation"], + MarketSector.HEALTHCARE: ["healthcare", "medicare", "drug prices", "pharma"], + MarketSector.ENERGY: ["energy", "oil", "gas", "renewable", "climate"], + MarketSector.FINANCIALS: ["banking", "financial", "interest rates", "fed"], + MarketSector.DEFENSE: ["defense", "military", "weapons", "aerospace"], + MarketSector.INFRASTRUCTURE: ["infrastructure", "roads", "bridges", "construction"], + } + + def analyze_news_for_events(self, news_list: List[CompanyNews]) -> List[PoliticalEvent]: + """Analyze news articles to extract political events""" + events = [] + + for news in news_list: + event = self._extract_event_from_news(news) + if event: + events.append(event) + + return events + + def _extract_event_from_news(self, news: CompanyNews) -> Optional[PoliticalEvent]: + """Extract political event from news article""" + title_lower = news.title.lower() + + # Determine event type + event_type = self._classify_event_type(title_lower) + if not event_type: + return None + + # Determine impact level + impact_level = self._assess_impact_level(title_lower, event_type) + + # Identify affected sectors + affected_sectors = self._identify_affected_sectors(title_lower) + + # Calculate sentiment score + sentiment_score = self._calculate_sentiment(title_lower) + + # Calculate market impact score + market_impact_score = self._calculate_market_impact(event_type, impact_level, affected_sectors) + + return PoliticalEvent( + title=news.title, + description=news.title, # Could be expanded with article content + event_type=event_type, + date=datetime.fromisoformat(news.date.replace('Z', '+00:00')), + impact_level=impact_level, + affected_sectors=affected_sectors, + country="US", # Could be determined from content + source=news.source, + sentiment_score=sentiment_score, + market_impact_score=market_impact_score, + keywords=self._extract_keywords(title_lower), + url=news.url + ) + + def _classify_event_type(self, text: str) -> Optional[PoliticalEventType]: + """Classify the type of political event""" + classifiers = { + PoliticalEventType.ELECTION: ["election", "vote", "campaign", "polls"], + PoliticalEventType.SANCTIONS: ["sanctions", "embargo", "trade war"], + PoliticalEventType.FISCAL_POLICY: ["budget", "spending", "stimulus", "deficit"], + PoliticalEventType.TRADE_POLICY: ["trade", "tariffs", "import", "export"], + PoliticalEventType.REGULATORY_CHANGE: ["regulation", "regulatory", "SEC", "FDA"], + PoliticalEventType.GEOPOLITICAL_TENSION: ["war", "conflict", "tension", "military"], + PoliticalEventType.DEBT_CEILING: ["debt ceiling", "debt limit"], + PoliticalEventType.TAX_POLICY: ["tax", "taxation", "IRS"] + } + + for event_type, keywords in classifiers.items(): + if any(keyword in text for keyword in keywords): + return event_type + + return PoliticalEventType.POLICY_ANNOUNCEMENT # Default + + def _assess_impact_level(self, text: str, event_type: PoliticalEventType) -> PoliticalImpact: + """Assess the impact level of the event""" + high_impact_events = { + PoliticalEventType.ELECTION, + PoliticalEventType.SANCTIONS, + PoliticalEventType.DEBT_CEILING, + PoliticalEventType.GEOPOLITICAL_TENSION + } + + high_impact_keywords = ["major", "significant", "unprecedented", "crisis", "emergency"] + + if event_type in high_impact_events: + return PoliticalImpact.HIGH + + if any(keyword in text for keyword in high_impact_keywords): + return PoliticalImpact.HIGH + + return PoliticalImpact.MEDIUM # Default + + def _identify_affected_sectors(self, text: str) -> List[MarketSector]: + """Identify which market sectors are affected""" + affected_sectors = [] + + for sector, keywords in self.sector_keywords.items(): + if any(keyword in text for keyword in keywords): + affected_sectors.append(sector) + + # If no specific sectors identified, assume broad market impact + if not affected_sectors: + affected_sectors.append(MarketSector.BROAD_MARKET) + + return affected_sectors + + def _calculate_sentiment(self, text: str) -> float: + """Calculate sentiment score for the event""" + positive_words = ["growth", "improvement", "positive", "boost", "strong", "agreement"] + negative_words = ["crisis", "decline", "tension", "conflict", "concern", "risk", "shutdown"] + + positive_count = sum(1 for word in positive_words if word in text) + negative_count = sum(1 for word in negative_words if word in text) + + total_words = positive_count + negative_count + if total_words == 0: + return 0.0 + + return (positive_count - negative_count) / total_words + + def _calculate_market_impact(self, event_type: PoliticalEventType, + impact_level: PoliticalImpact, + affected_sectors: List[MarketSector]) -> float: + """Calculate market impact score (0-10)""" + base_scores = { + PoliticalEventType.ELECTION: 8.0, + PoliticalEventType.SANCTIONS: 7.0, + PoliticalEventType.DEBT_CEILING: 9.0, + PoliticalEventType.FISCAL_POLICY: 6.0, + PoliticalEventType.GEOPOLITICAL_TENSION: 8.0, + PoliticalEventType.TRADE_POLICY: 6.0, + PoliticalEventType.REGULATORY_CHANGE: 5.0 + } + + base_score = base_scores.get(event_type, 4.0) + + # Adjust for impact level + impact_multipliers = { + PoliticalImpact.HIGH: 1.2, + PoliticalImpact.MEDIUM: 1.0, + PoliticalImpact.LOW: 0.8 + } + + score = base_score * impact_multipliers[impact_level] + + # Adjust for number of affected sectors + if MarketSector.BROAD_MARKET in affected_sectors: + score *= 1.1 + + return min(10.0, score) + + def _extract_keywords(self, text: str) -> List[str]: + """Extract relevant keywords from text""" + # Simple keyword extraction (could be enhanced with NLP) + common_words = {"the", "and", "or", "but", "in", "on", "at", "to", "for", "of", "with", "by", "is", "are", "was", "were"} + words = re.findall(r'\b\w+\b', text.lower()) + keywords = [word for word in words if len(word) > 3 and word not in common_words] + return list(set(keywords))[:10] # Return top 10 unique keywords + +class PoliticalSignalsManager: + """Manages political signals monitoring""" + + def __init__(self, api_keys: Dict[str, str]): + self.news_provider = PoliticalNewsProvider(api_keys) + self.event_analyzer = PoliticalEventAnalyzer() + self.cache = Cache() + self.active_events: List[PoliticalEvent] = [] + + async def start(self) -> None: + """Start the political signals manager""" + await self.news_provider.connect() + logger.info("Political signals manager started") + + async def stop(self) -> None: + """Stop the political signals manager""" + await self.news_provider.disconnect() + logger.info("Political signals manager stopped") + + async def update_political_events(self) -> List[PoliticalEvent]: + """Update political events from news sources""" + # Fetch recent news + news_articles = await self.news_provider.fetch_political_news(hours_back=24) + + # Analyze for political events + new_events = self.event_analyzer.analyze_news_for_events(news_articles) + + # Update active events + self.active_events = self._merge_events(self.active_events, new_events) + + # Cache the events + self.cache.set("political_events", self.active_events, ttl=3600) + + return self.active_events + + def get_high_impact_events(self, days_back: int = 7) -> List[PoliticalEvent]: + """Get high impact political events from recent period""" + cutoff_date = datetime.now() - timedelta(days=days_back) + + high_impact_events = [ + event for event in self.active_events + if event.impact_level == PoliticalImpact.HIGH and event.date >= cutoff_date + ] + + # Sort by urgency score + high_impact_events.sort(key=lambda x: x.urgency_score, reverse=True) + + return high_impact_events + + def get_sector_specific_events(self, sector: MarketSector, days_back: int = 30) -> List[PoliticalEvent]: + """Get events affecting a specific sector""" + cutoff_date = datetime.now() - timedelta(days=days_back) + + sector_events = [ + event for event in self.active_events + if sector in event.affected_sectors and event.date >= cutoff_date + ] + + return sorted(sector_events, key=lambda x: x.date, reverse=True) + + def _merge_events(self, existing_events: List[PoliticalEvent], + new_events: List[PoliticalEvent]) -> List[PoliticalEvent]: + """Merge new events with existing ones, avoiding duplicates""" + merged_events = existing_events.copy() + + for new_event in new_events: + # Simple duplicate detection based on title similarity + is_duplicate = any( + self._events_similar(new_event, existing_event) + for existing_event in existing_events + ) + + if not is_duplicate: + merged_events.append(new_event) + + # Remove old events (older than 90 days) + cutoff_date = datetime.now() - timedelta(days=90) + merged_events = [event for event in merged_events if event.date >= cutoff_date] + + return merged_events + + def _events_similar(self, event1: PoliticalEvent, event2: PoliticalEvent) -> bool: + """Check if two events are similar (likely duplicates)""" + # Simple similarity check based on title and date + title_similarity = len(set(event1.title.lower().split()) & set(event2.title.lower().split())) + date_diff = abs((event1.date - event2.date).days) + + return title_similarity >= 3 and date_diff <= 1 + +# Example usage +async def example_usage(): + """Example of how to use the political signals system""" + + # Initialize with your API keys + api_keys = { + "newsapi": "your_newsapi_key", + # Add other API keys as needed + } + + manager = PoliticalSignalsManager(api_keys) + + await manager.start() + + # Update political events + events = await manager.update_political_events() + print(f"Found {len(events)} political events") + + # Get high impact events + high_impact = manager.get_high_impact_events() + print(f"High impact events: {len(high_impact)}") + + for event in high_impact[:3]: # Show top 3 + print(f"- {event.title} (Impact: {event.market_impact_score:.1f}/10)") + + # Get sector-specific events + tech_events = manager.get_sector_specific_events(MarketSector.TECHNOLOGY) + print(f"Technology sector events: {len(tech_events)}") + + await manager.stop() + +if __name__ == "__main__": + asyncio.run(example_usage()) \ No newline at end of file diff --git a/src/data/realtime_data.py b/src/data/realtime_data.py new file mode 100644 index 000000000..f511040e1 --- /dev/null +++ b/src/data/realtime_data.py @@ -0,0 +1,455 @@ +""" +Real-time data pipeline for market data, options Greeks, and implied volatility. +Supports multiple data providers and WebSocket streaming. +""" + +import asyncio +import json +import logging +from datetime import datetime, timedelta +from typing import Dict, List, Optional, Callable, Any, Protocol +from dataclasses import dataclass, asdict +from abc import ABC, abstractmethod +import websockets +import aiohttp +from concurrent.futures import ThreadPoolExecutor +import numpy as np +from threading import Lock + +from .models import Price, OptionsInfo, Greeks, SecurityInfo +from .cache import Cache + +logger = logging.getLogger(__name__) + +@dataclass +class RealTimeQuote: + """Real-time market quote""" + ticker: str + bid: float + ask: float + last: float + volume: int + timestamp: datetime + bid_size: Optional[int] = None + ask_size: Optional[int] = None + change: Optional[float] = None + change_percent: Optional[float] = None + +@dataclass +class RealTimeOptionsQuote: + """Real-time options quote with Greeks""" + ticker: str + underlying_ticker: str + strike: float + expiration: str + option_type: str # 'call' or 'put' + bid: float + ask: float + last: float + volume: int + open_interest: int + implied_volatility: float + delta: float + gamma: float + theta: float + vega: float + rho: float + timestamp: datetime + +@dataclass +class MarketEvent: + """Market event for event-driven processing""" + event_type: str # 'quote', 'trade', 'options_quote', 'market_status' + data: Dict[str, Any] + timestamp: datetime + +class DataProvider(ABC): + """Abstract base class for data providers""" + + @abstractmethod + async def connect(self) -> None: + pass + + @abstractmethod + async def disconnect(self) -> None: + pass + + @abstractmethod + async def subscribe_quotes(self, symbols: List[str]) -> None: + pass + + @abstractmethod + async def subscribe_options(self, symbols: List[str]) -> None: + pass + + @abstractmethod + async def get_options_chain(self, symbol: str) -> List[OptionsInfo]: + pass + +class AlphaVantageProvider(DataProvider): + """Alpha Vantage real-time data provider""" + + def __init__(self, api_key: str): + self.api_key = api_key + self.session: Optional[aiohttp.ClientSession] = None + self.websocket: Optional[websockets.WebSocketServerProtocol] = None + self.base_url = "https://www.alphavantage.co/query" + + async def connect(self) -> None: + self.session = aiohttp.ClientSession() + logger.info("Connected to Alpha Vantage") + + async def disconnect(self) -> None: + if self.session: + await self.session.close() + logger.info("Disconnected from Alpha Vantage") + + async def subscribe_quotes(self, symbols: List[str]) -> None: + # Alpha Vantage doesn't have real-time WebSocket, so we'll poll + logger.info(f"Subscribing to quotes for {symbols}") + + async def subscribe_options(self, symbols: List[str]) -> None: + logger.info(f"Subscribing to options for {symbols}") + + async def get_quote(self, symbol: str) -> RealTimeQuote: + """Get real-time quote""" + if not self.session: + raise ValueError("Not connected") + + params = { + 'function': 'GLOBAL_QUOTE', + 'symbol': symbol, + 'apikey': self.api_key + } + + async with self.session.get(self.base_url, params=params) as response: + data = await response.json() + quote_data = data.get('Global Quote', {}) + + return RealTimeQuote( + ticker=symbol, + bid=0.0, # Not available in Global Quote + ask=0.0, # Not available in Global Quote + last=float(quote_data.get('05. price', 0)), + volume=int(quote_data.get('06. volume', 0)), + timestamp=datetime.now(), + change=float(quote_data.get('09. change', 0)), + change_percent=float(quote_data.get('10. change percent', '0%').rstrip('%')) + ) + + async def get_options_chain(self, symbol: str) -> List[OptionsInfo]: + """Get options chain (simulated for Alpha Vantage)""" + # Alpha Vantage doesn't provide options data, so we'll return empty + logger.warning(f"Options data not available for {symbol} via Alpha Vantage") + return [] + +class PolygonProvider(DataProvider): + """Polygon.io real-time data provider""" + + def __init__(self, api_key: str): + self.api_key = api_key + self.session: Optional[aiohttp.ClientSession] = None + self.websocket: Optional[websockets.WebSocketServerProtocol] = None + self.base_url = "https://api.polygon.io" + self.ws_url = f"wss://socket.polygon.io/stocks" + + async def connect(self) -> None: + self.session = aiohttp.ClientSession() + # Connect to WebSocket + try: + self.websocket = await websockets.connect(self.ws_url) + # Authenticate + auth_msg = {"action": "auth", "params": self.api_key} + await self.websocket.send(json.dumps(auth_msg)) + logger.info("Connected to Polygon WebSocket") + except Exception as e: + logger.error(f"Failed to connect to Polygon WebSocket: {e}") + + async def disconnect(self) -> None: + if self.websocket: + await self.websocket.close() + if self.session: + await self.session.close() + logger.info("Disconnected from Polygon") + + async def subscribe_quotes(self, symbols: List[str]) -> None: + if not self.websocket: + raise ValueError("WebSocket not connected") + + # Subscribe to real-time quotes + subscribe_msg = { + "action": "subscribe", + "params": f"Q.{','.join(symbols)}" + } + await self.websocket.send(json.dumps(subscribe_msg)) + logger.info(f"Subscribed to quotes for {symbols}") + + async def subscribe_options(self, symbols: List[str]) -> None: + if not self.websocket: + raise ValueError("WebSocket not connected") + + # Subscribe to options quotes + subscribe_msg = { + "action": "subscribe", + "params": f"O.{','.join(symbols)}" + } + await self.websocket.send(json.dumps(subscribe_msg)) + logger.info(f"Subscribed to options for {symbols}") + + async def get_options_chain(self, symbol: str) -> List[OptionsInfo]: + """Get options chain from Polygon""" + if not self.session: + raise ValueError("Not connected") + + url = f"{self.base_url}/v3/reference/options/contracts" + params = { + 'underlying_ticker': symbol, + 'apikey': self.api_key, + 'limit': 1000 + } + + try: + async with self.session.get(url, params=params) as response: + data = await response.json() + options_data = data.get('results', []) + + options_list = [] + for option in options_data: + options_list.append(OptionsInfo( + underlying_ticker=symbol, + strike=option.get('strike_price', 0), + expiration=datetime.strptime(option.get('expiration_date', ''), '%Y-%m-%d').date(), + option_type=option.get('contract_type', '').lower(), + option_symbol=option.get('ticker', '') + )) + + return options_list + except Exception as e: + logger.error(f"Error fetching options chain for {symbol}: {e}") + return [] + +class RealTimeDataManager: + """Manages real-time data feeds and distribution""" + + def __init__(self, provider: DataProvider, cache: Optional[Cache] = None): + self.provider = provider + self.cache = cache or Cache() + self.subscribers: Dict[str, List[Callable]] = {} + self.is_running = False + self.executor = ThreadPoolExecutor(max_workers=4) + self._lock = Lock() + + async def start(self) -> None: + """Start the real-time data manager""" + await self.provider.connect() + self.is_running = True + logger.info("Real-time data manager started") + + async def stop(self) -> None: + """Stop the real-time data manager""" + self.is_running = False + await self.provider.disconnect() + self.executor.shutdown(wait=True) + logger.info("Real-time data manager stopped") + + def subscribe_to_quotes(self, symbols: List[str], callback: Callable[[RealTimeQuote], None]) -> None: + """Subscribe to real-time quotes""" + with self._lock: + for symbol in symbols: + if symbol not in self.subscribers: + self.subscribers[symbol] = [] + self.subscribers[symbol].append(callback) + + def unsubscribe_from_quotes(self, symbols: List[str], callback: Callable[[RealTimeQuote], None]) -> None: + """Unsubscribe from real-time quotes""" + with self._lock: + for symbol in symbols: + if symbol in self.subscribers and callback in self.subscribers[symbol]: + self.subscribers[symbol].remove(callback) + + async def get_current_quote(self, symbol: str) -> Optional[RealTimeQuote]: + """Get current quote for a symbol""" + # Check cache first + cached_quote = self.cache.get(f"quote_{symbol}") + if cached_quote and isinstance(cached_quote, RealTimeQuote): + # Check if quote is recent (within 1 minute) + if datetime.now() - cached_quote.timestamp < timedelta(minutes=1): + return cached_quote + + # Fetch fresh quote + try: + if hasattr(self.provider, 'get_quote'): + quote = await self.provider.get_quote(symbol) + self.cache.set(f"quote_{symbol}", quote, ttl=60) # Cache for 1 minute + return quote + except Exception as e: + logger.error(f"Error fetching quote for {symbol}: {e}") + + return None + + async def get_options_chain(self, symbol: str) -> List[OptionsInfo]: + """Get options chain for a symbol""" + # Check cache first + cache_key = f"options_{symbol}" + cached_options = self.cache.get(cache_key) + if cached_options: + return cached_options + + # Fetch fresh options data + try: + options = await self.provider.get_options_chain(symbol) + self.cache.set(cache_key, options, ttl=300) # Cache for 5 minutes + return options + except Exception as e: + logger.error(f"Error fetching options chain for {symbol}: {e}") + return [] + + async def calculate_implied_volatility(self, option: OptionsInfo, underlying_price: float, risk_free_rate: float = 0.05) -> float: + """Calculate implied volatility using Black-Scholes approximation""" + try: + from scipy.optimize import minimize_scalar + from scipy.stats import norm + import math + + # Black-Scholes calculation + def black_scholes_call(S, K, T, r, sigma): + d1 = (math.log(S/K) + (r + 0.5*sigma**2)*T) / (sigma*math.sqrt(T)) + d2 = d1 - sigma*math.sqrt(T) + return S*norm.cdf(d1) - K*math.exp(-r*T)*norm.cdf(d2) + + def black_scholes_put(S, K, T, r, sigma): + d1 = (math.log(S/K) + (r + 0.5*sigma**2)*T) / (sigma*math.sqrt(T)) + d2 = d1 - sigma*math.sqrt(T) + return K*math.exp(-r*T)*norm.cdf(-d2) - S*norm.cdf(-d1) + + # Calculate time to expiration in years + T = (option.expiration - datetime.now().date()).days / 365.0 + if T <= 0: + return 0.0 + + # Objective function to minimize + def objective(sigma): + if option.option_type == 'call': + theoretical_price = black_scholes_call(underlying_price, option.strike, T, risk_free_rate, sigma) + else: + theoretical_price = black_scholes_put(underlying_price, option.strike, T, risk_free_rate, sigma) + return abs(theoretical_price - (option.last_price or 0)) + + # Find implied volatility + result = minimize_scalar(objective, bounds=(0.01, 5.0), method='bounded') + return result.x if result.success else 0.3 # Default to 30% if calculation fails + + except ImportError: + logger.warning("scipy not available for IV calculation, using default") + return 0.3 + except Exception as e: + logger.error(f"Error calculating IV: {e}") + return 0.3 + + def _notify_subscribers(self, symbol: str, quote: RealTimeQuote) -> None: + """Notify all subscribers of a quote update""" + with self._lock: + if symbol in self.subscribers: + for callback in self.subscribers[symbol]: + try: + self.executor.submit(callback, quote) + except Exception as e: + logger.error(f"Error in callback for {symbol}: {e}") + +class MarketDataAggregator: + """Aggregates data from multiple sources for redundancy""" + + def __init__(self, providers: List[DataProvider]): + self.providers = providers + self.data_managers = [RealTimeDataManager(provider) for provider in providers] + self.primary_provider = 0 # Index of primary provider + + async def start(self) -> None: + """Start all data managers""" + for manager in self.data_managers: + try: + await manager.start() + except Exception as e: + logger.error(f"Failed to start data manager: {e}") + + async def stop(self) -> None: + """Stop all data managers""" + for manager in self.data_managers: + try: + await manager.stop() + except Exception as e: + logger.error(f"Error stopping data manager: {e}") + + async def get_best_quote(self, symbol: str) -> Optional[RealTimeQuote]: + """Get the best quote from all providers""" + quotes = [] + + # Try to get quotes from all providers + for manager in self.data_managers: + try: + quote = await manager.get_current_quote(symbol) + if quote: + quotes.append(quote) + except Exception as e: + logger.error(f"Error getting quote from provider: {e}") + + if not quotes: + return None + + # Return the most recent quote + return max(quotes, key=lambda q: q.timestamp) + + def switch_primary_provider(self, provider_index: int) -> None: + """Switch to a different primary provider""" + if 0 <= provider_index < len(self.providers): + self.primary_provider = provider_index + logger.info(f"Switched to primary provider {provider_index}") + +# Factory function to create data providers +def create_data_provider(provider_name: str, **kwargs) -> DataProvider: + """Factory function to create data providers""" + providers = { + 'alphavantage': AlphaVantageProvider, + 'polygon': PolygonProvider, + } + + if provider_name not in providers: + raise ValueError(f"Unknown provider: {provider_name}") + + return providers[provider_name](**kwargs) + +# Example usage and testing +async def example_usage(): + """Example of how to use the real-time data system""" + + # Create a provider (you would use your actual API key) + provider = create_data_provider('alphavantage', api_key='demo') + + # Create data manager + manager = RealTimeDataManager(provider) + + # Define callback for quote updates + def on_quote_update(quote: RealTimeQuote): + print(f"Quote update: {quote.ticker} - ${quote.last} at {quote.timestamp}") + + # Start manager and subscribe + await manager.start() + manager.subscribe_to_quotes(['AAPL', 'MSFT'], on_quote_update) + + # Subscribe to quotes via provider + await provider.subscribe_quotes(['AAPL', 'MSFT']) + + # Get current quote + quote = await manager.get_current_quote('AAPL') + if quote: + print(f"Current AAPL quote: ${quote.last}") + + # Get options chain + options = await manager.get_options_chain('AAPL') + print(f"Found {len(options)} options contracts for AAPL") + + # Stop manager + await manager.stop() + +if __name__ == "__main__": + asyncio.run(example_usage()) \ No newline at end of file diff --git a/src/data/trading_universes.py b/src/data/trading_universes.py new file mode 100644 index 000000000..2543235fd --- /dev/null +++ b/src/data/trading_universes.py @@ -0,0 +1,203 @@ +""" +Predefined trading universes for the AI hedge fund. +Defines various investment universes with specific criteria and constraints. +""" + +from typing import Dict, List +from .models import TradingUniverse, AssetClass, Sector, MarketCapCategory + + +# S&P 500 Universe +SP500_UNIVERSE = TradingUniverse( + name="S&P 500 Universe", + description="S&P 500 companies with high liquidity and options availability", + asset_classes=[AssetClass.EQUITY], + sectors=None, # All sectors allowed + market_cap_categories=[MarketCapCategory.MEGA_CAP, MarketCapCategory.LARGE_CAP], + min_market_cap=2.0, # $2B minimum + min_daily_volume=1_000_000, # 1M shares + min_avg_volume=500_000, # 500K average + indices=["SPY"], + max_positions=50, + rebalance_frequency="daily", + liquidity_threshold=0.8, + excluded_tickers=[] # Can exclude specific tickers if needed +) + +# High-Volume Tech Stocks Universe +TECH_UNIVERSE = TradingUniverse( + name="High-Volume Tech Stocks", + description="Technology sector stocks with high volume and options liquidity", + asset_classes=[AssetClass.EQUITY], + sectors=[ + Sector.TECHNOLOGY, + Sector.COMMUNICATION, + Sector.CONSUMER_DISCRETIONARY # For tech-adjacent companies like TSLA, AMZN + ], + market_cap_categories=[MarketCapCategory.MEGA_CAP, MarketCapCategory.LARGE_CAP], + min_market_cap=10.0, # $10B minimum for tech focus + min_daily_volume=2_000_000, # 2M shares for high volume requirement + min_avg_volume=1_000_000, # 1M average + indices=["QQQ", "XLK"], # NASDAQ-100 and Tech Select Sector SPDR + max_positions=30, + rebalance_frequency="daily", + liquidity_threshold=0.9, # Higher liquidity requirement for tech + included_tickers=[ + # Force include major tech names + "AAPL", "MSFT", "GOOGL", "AMZN", "NVDA", "META", "TSLA", "NFLX", "CRM", "ORCL" + ] +) + +# Sector ETFs Universe +SECTOR_ETF_UNIVERSE = TradingUniverse( + name="Sector ETFs", + description="Sector-based ETFs for broad market exposure and sector rotation strategies", + asset_classes=[AssetClass.ETF], + sectors=None, # ETFs cover all sectors + market_cap_categories=None, # Not applicable for ETFs + min_market_cap=None, + min_daily_volume=1_000_000, # 1M shares + min_avg_volume=500_000, + indices=[], + max_positions=15, # Limited to avoid over-diversification + rebalance_frequency="weekly", # Less frequent for ETF strategies + liquidity_threshold=0.8, + included_tickers=[ + # SPDR Sector ETFs + "XLK", # Technology + "XLF", # Financial + "XLV", # Health Care + "XLI", # Industrial + "XLY", # Consumer Discretionary + "XLP", # Consumer Staples + "XLE", # Energy + "XLU", # Utilities + "XLB", # Materials + "XLRE", # Real Estate + "XLC", # Communication Services + # Additional broad market ETFs + "SPY", # S&P 500 + "QQQ", # NASDAQ-100 + "IWM", # Russell 2000 (Small Cap) + "VTI" # Total Stock Market + ] +) + +# Options-Focused Universe +OPTIONS_UNIVERSE = TradingUniverse( + name="Options Trading Universe", + description="Stocks and ETFs with liquid options markets for complex strategies", + asset_classes=[AssetClass.EQUITY, AssetClass.ETF, AssetClass.OPTION], + sectors=None, + market_cap_categories=[MarketCapCategory.MEGA_CAP, MarketCapCategory.LARGE_CAP], + min_market_cap=5.0, # $5B minimum for options liquidity + min_daily_volume=1_500_000, # 1.5M shares + min_avg_volume=750_000, + indices=["SPY", "QQQ"], + max_positions=25, + rebalance_frequency="daily", + liquidity_threshold=0.9, # High liquidity requirement for options + included_tickers=[ + # High options volume stocks + "SPY", "QQQ", "AAPL", "TSLA", "NVDA", "AMZN", "MSFT", "GOOGL", + "META", "AMD", "NFLX", "IWM", "XLF", "XLK", "EEM", "GLD" + ] +) + +# Conservative Large Cap Universe +CONSERVATIVE_LARGE_CAP = TradingUniverse( + name="Conservative Large Cap", + description="Blue-chip stocks with stable fundamentals and dividend history", + asset_classes=[AssetClass.EQUITY], + sectors=[ + Sector.CONSUMER_STAPLES, + Sector.UTILITIES, + Sector.HEALTHCARE, + Sector.FINANCIALS + ], + market_cap_categories=[MarketCapCategory.MEGA_CAP, MarketCapCategory.LARGE_CAP], + min_market_cap=20.0, # $20B minimum for blue chips + min_daily_volume=500_000, + min_avg_volume=300_000, + indices=["SPY"], + max_positions=40, + rebalance_frequency="monthly", # Less frequent for conservative approach + liquidity_threshold=0.7, + included_tickers=[ + # Dividend aristocrats and blue chips + "JNJ", "PG", "KO", "PEP", "WMT", "HD", "UNH", "V", "MA", "JPM" + ] +) + +# All available trading universes +TRADING_UNIVERSES: Dict[str, TradingUniverse] = { + "sp500": SP500_UNIVERSE, + "tech": TECH_UNIVERSE, + "sector_etf": SECTOR_ETF_UNIVERSE, + "options": OPTIONS_UNIVERSE, + "conservative": CONSERVATIVE_LARGE_CAP +} + +def get_trading_universe(name: str) -> TradingUniverse: + """Get a trading universe by name""" + if name not in TRADING_UNIVERSES: + raise ValueError(f"Unknown trading universe: {name}. Available: {list(TRADING_UNIVERSES.keys())}") + return TRADING_UNIVERSES[name] + +def list_trading_universes() -> List[str]: + """List all available trading universe names""" + return list(TRADING_UNIVERSES.keys()) + +def create_combined_universe(universes: List[str], name: str, description: str) -> TradingUniverse: + """Combine multiple trading universes into one""" + if not universes: + raise ValueError("At least one universe must be specified") + + base_universe = get_trading_universe(universes[0]) + combined_tickers = set(base_universe.included_tickers) + combined_indices = set(base_universe.indices) + combined_asset_classes = set(base_universe.asset_classes) + combined_sectors = set(base_universe.sectors or []) + + # Merge all universes + for universe_name in universes[1:]: + universe = get_trading_universe(universe_name) + combined_tickers.update(universe.included_tickers) + combined_indices.update(universe.indices) + combined_asset_classes.update(universe.asset_classes) + if universe.sectors: + combined_sectors.update(universe.sectors) + + return TradingUniverse( + name=name, + description=description, + asset_classes=list(combined_asset_classes), + sectors=list(combined_sectors) if combined_sectors else None, + market_cap_categories=base_universe.market_cap_categories, + min_market_cap=base_universe.min_market_cap, + min_daily_volume=base_universe.min_daily_volume, + min_avg_volume=base_universe.min_avg_volume, + included_tickers=list(combined_tickers), + indices=list(combined_indices), + max_positions=sum(get_trading_universe(u).max_positions or 50 for u in universes), + rebalance_frequency=base_universe.rebalance_frequency, + liquidity_threshold=base_universe.liquidity_threshold + ) + +# Pre-configured combined universes +AGGRESSIVE_GROWTH = create_combined_universe( + ["tech", "options"], + "Aggressive Growth", + "High-growth tech stocks with options strategies for maximum returns" +) + +BALANCED_PORTFOLIO = create_combined_universe( + ["sp500", "sector_etf"], + "Balanced Portfolio", + "Diversified portfolio combining S&P 500 stocks and sector ETFs" +) + +TRADING_UNIVERSES.update({ + "aggressive_growth": AGGRESSIVE_GROWTH, + "balanced": BALANCED_PORTFOLIO +}) \ No newline at end of file diff --git a/src/enhanced_main.py b/src/enhanced_main.py new file mode 100644 index 000000000..f765ad319 --- /dev/null +++ b/src/enhanced_main.py @@ -0,0 +1,471 @@ +""" +Enhanced AI Hedge Fund with Trading Universe and Comprehensive Analysis. +Integrates all enhanced features: trading universes, sentiment analysis, +economic indicators, political signals, and advanced portfolio management. +""" + +import sys +import os +import asyncio +from dotenv import load_dotenv +from langchain_core.messages import HumanMessage +from langgraph.graph import END, StateGraph +from colorama import Fore, Style, init +import questionary +import argparse +from datetime import datetime +from dateutil.relativedelta import relativedelta +import json + +from src.agents.enhanced_portfolio_manager import enhanced_portfolio_management_agent +from src.agents.risk_manager import risk_management_agent +from src.graph.state import AgentState +from src.utils.display import print_trading_output +from src.utils.analysts import ANALYST_ORDER, get_analyst_nodes +from src.utils.progress import progress +from src.llm.models import LLM_ORDER, OLLAMA_LLM_ORDER, get_model_info, ModelProvider +from src.utils.ollama import ensure_ollama_and_model +from src.data.trading_universes import TRADING_UNIVERSES, list_trading_universes + +# Load environment variables +load_dotenv() +init(autoreset=True) + +# Enhanced universe options with descriptions +UNIVERSE_OPTIONS = [ + ("S&P 500 Universe - Large cap stocks with high liquidity", "sp500"), + ("High-Volume Tech Stocks - Technology sector focus", "tech"), + ("Sector ETFs - Diversified sector rotation", "sector_etf"), + ("Options Trading Universe - Liquid options markets", "options"), + ("Conservative Large Cap - Blue-chip dividend stocks", "conservative"), + ("Aggressive Growth - Tech + Options strategies", "aggressive_growth"), + ("Balanced Portfolio - S&P 500 + Sector ETFs", "balanced") +] + +def get_api_keys_from_env() -> dict: + """Extract API keys from environment variables""" + api_keys = {} + + # LLM API keys + for key in ["OPENAI_API_KEY", "GROQ_API_KEY", "ANTHROPIC_API_KEY", "DEEPSEEK_API_KEY"]: + if os.getenv(key): + api_keys[key.lower()] = os.getenv(key) + + # Financial data API key + if os.getenv("FINANCIAL_DATASETS_API_KEY"): + api_keys["financial_datasets_api_key"] = os.getenv("FINANCIAL_DATASETS_API_KEY") + + # Enhanced features API keys + if os.getenv("FRED_API_KEY"): + api_keys["fred_api_key"] = os.getenv("FRED_API_KEY") + + if os.getenv("NEWSAPI_KEY"): + api_keys["newsapi_key"] = os.getenv("NEWSAPI_KEY") + + if os.getenv("REDDIT_API_KEY"): + api_keys["reddit_key"] = os.getenv("REDDIT_API_KEY") + + if os.getenv("TWITTER_API_KEY"): + api_keys["twitter_key"] = os.getenv("TWITTER_API_KEY") + + return api_keys + +def check_enhanced_features_availability(api_keys: dict) -> dict: + """Check which enhanced features are available based on API keys""" + features = { + "economic_indicators": bool(api_keys.get("fred_api_key")), + "political_signals": bool(api_keys.get("newsapi_key")), + "social_sentiment": bool(api_keys.get("reddit_key") or api_keys.get("twitter_key")), + "financial_data": bool(api_keys.get("financial_datasets_api_key")), + "basic_features": True # Always available with free data + } + return features + +def display_feature_status(features: dict): + """Display status of enhanced features""" + print(f"\n{Fore.CYAN}📊 Enhanced Features Status:{Style.RESET_ALL}") + + status_icon = lambda available: f"{Fore.GREEN}✅" if available else f"{Fore.YELLOW}⚠️" + + print(f" {status_icon(features['basic_features'])} Basic Analysis (Free AAPL, GOOGL, MSFT, NVDA, TSLA data)") + print(f" {status_icon(features['economic_indicators'])} Economic Indicators (FRED API)") + print(f" {status_icon(features['political_signals'])} Political Signals (NewsAPI)") + print(f" {status_icon(features['social_sentiment'])} Social Sentiment (Reddit/Twitter API)") + print(f" {status_icon(features['financial_data'])} Extended Financial Data (Financial Datasets API)") + + if not all(features.values()): + print(f"\n{Fore.YELLOW}💡 To enable all features, add these API keys to your .env file:{Style.RESET_ALL}") + if not features['economic_indicators']: + print(" FRED_API_KEY=your_fred_api_key (free from https://fred.stlouisfed.org/docs/api/)") + if not features['political_signals']: + print(" NEWSAPI_KEY=your_newsapi_key (from https://newsapi.org/)") + if not features['social_sentiment']: + print(" REDDIT_API_KEY=your_reddit_key and/or TWITTER_API_KEY=your_twitter_key") + if not features['financial_data']: + print(" FINANCIAL_DATASETS_API_KEY=your_key (for extended tickers)") + + print() + +async def run_enhanced_hedge_fund( + tickers: list[str], + start_date: str, + end_date: str, + portfolio: dict, + universe_name: str = "sp500", + show_reasoning: bool = False, + selected_analysts: list[str] = [], + model_name: str = "gpt-4o", + model_provider: str = "OpenAI", + api_keys: dict = None +): + """Run the enhanced hedge fund with comprehensive analysis""" + + if api_keys is None: + api_keys = get_api_keys_from_env() + + # Start progress tracking + progress.start() + + try: + # Create enhanced workflow + workflow = create_enhanced_workflow(selected_analysts, universe_name, api_keys) + agent = workflow.compile() + + final_state = await agent.ainvoke({ + "messages": [ + HumanMessage(content="Make enhanced trading decisions with comprehensive analysis.") + ], + "data": { + "tickers": tickers, + "portfolio": portfolio, + "start_date": start_date, + "end_date": end_date, + "analyst_signals": {}, + "universe_name": universe_name, + "api_keys": api_keys + }, + "metadata": { + "show_reasoning": show_reasoning, + "model_name": model_name, + "model_provider": model_provider, + }, + }) + + return { + "decisions": json.loads(final_state["messages"][-1].content), + "analyst_signals": final_state["data"]["analyst_signals"], + "enhanced_analysis": final_state["data"].get("enhanced_analysis", {}) + } + + finally: + progress.stop() + +def create_enhanced_workflow(selected_analysts=None, universe_name="sp500", api_keys=None): + """Create enhanced workflow with comprehensive analysis""" + + workflow = StateGraph(AgentState) + workflow.add_node("start_node", start) + + # Get analyst nodes + analyst_nodes = get_analyst_nodes() + + # Default to all analysts if none selected + if selected_analysts is None: + selected_analysts = list(analyst_nodes.keys()) + + # Add selected analyst nodes + for analyst_key in selected_analysts: + node_name, node_func = analyst_nodes[analyst_key] + workflow.add_node(node_name, node_func) + workflow.add_edge("start_node", node_name) + + # Add risk management + workflow.add_node("risk_management_agent", risk_management_agent) + + # Add enhanced portfolio manager + async def enhanced_portfolio_wrapper(state): + return await enhanced_portfolio_management_agent(state, universe_name, api_keys or {}) + + workflow.add_node("enhanced_portfolio_manager", enhanced_portfolio_wrapper) + + # Connect workflow + for analyst_key in selected_analysts: + node_name = analyst_nodes[analyst_key][0] + workflow.add_edge(node_name, "risk_management_agent") + + workflow.add_edge("risk_management_agent", "enhanced_portfolio_manager") + workflow.add_edge("enhanced_portfolio_manager", END) + + workflow.set_entry_point("start_node") + return workflow + +def start(state: AgentState): + """Initialize the enhanced workflow""" + return state + +def parse_enhanced_response(response): + """Parse enhanced response with error handling""" + try: + return json.loads(response) + except json.JSONDecodeError as e: + print(f"JSON decoding error: {e}") + return None + except Exception as e: + print(f"Error parsing response: {e}") + return None + +def main(): + """Enhanced main function with comprehensive features""" + + parser = argparse.ArgumentParser(description="Enhanced AI Hedge Fund with comprehensive analysis") + parser.add_argument("--initial-cash", type=float, default=100000.0, help="Initial cash position") + parser.add_argument("--margin-requirement", type=float, default=0.0, help="Initial margin requirement") + parser.add_argument("--tickers", type=str, required=True, help="Comma-separated list of tickers") + parser.add_argument("--start-date", type=str, help="Start date (YYYY-MM-DD)") + parser.add_argument("--end-date", type=str, help="End date (YYYY-MM-DD)") + parser.add_argument("--show-reasoning", action="store_true", help="Show detailed reasoning") + parser.add_argument("--universe", type=str, choices=list_trading_universes(), + default="sp500", help="Trading universe to use") + parser.add_argument("--ollama", action="store_true", help="Use Ollama for local LLM") + parser.add_argument("--demo", action="store_true", help="Run demo with sample data") + + args = parser.parse_args() + + # Get API keys and check feature availability + api_keys = get_api_keys_from_env() + features = check_enhanced_features_availability(api_keys) + + print(f"\n{Fore.GREEN}🚀 Enhanced AI Hedge Fund{Style.RESET_ALL}") + print(f"{Fore.BLUE}📈 Advanced Portfolio Management with Comprehensive Analysis{Style.RESET_ALL}") + + display_feature_status(features) + + if args.demo: + print(f"{Fore.CYAN}🎯 Running Demo Mode with Sample Data{Style.RESET_ALL}") + tickers = ["AAPL", "MSFT", "GOOGL", "NVDA", "TSLA"] + args.universe = "tech" + args.show_reasoning = True + else: + tickers = [ticker.strip().upper() for ticker in args.tickers.split(",")] + + # Select trading universe + if not args.demo: + universe_choice = questionary.select( + "Select your trading universe:", + choices=[questionary.Choice(display, value=value) for display, value in UNIVERSE_OPTIONS], + default=args.universe, + style=questionary.Style([ + ("selected", "fg:green bold"), + ("pointer", "fg:green bold"), + ("highlighted", "fg:green"), + ("answer", "fg:green bold"), + ]) + ).ask() + + if not universe_choice: + print("\nExiting...") + sys.exit(0) + + universe_name = universe_choice + else: + universe_name = args.universe + + print(f"\n{Fore.GREEN}📊 Selected Universe: {universe_name.title().replace('_', ' ')}{Style.RESET_ALL}") + + # Display universe details + universe = TRADING_UNIVERSES[universe_name] + print(f" 📋 Description: {universe.description}") + print(f" 🎯 Asset Classes: {', '.join(universe.asset_classes)}") + if universe.max_positions: + print(f" 📈 Max Positions: {universe.max_positions}") + + # Select analysts + selected_analysts = questionary.checkbox( + "Select your AI analysts:", + choices=[questionary.Choice(display, value=value) for display, value in ANALYST_ORDER], + instruction="\nPress Space to select, 'a' for all, Enter when done", + validate=lambda x: len(x) > 0 or "You must select at least one analyst.", + style=questionary.Style([ + ("checkbox-selected", "fg:green"), + ("selected", "fg:green noinherit"), + ("highlighted", "noinherit"), + ("pointer", "noinherit"), + ]) + ).ask() + + if not selected_analysts: + print("\nExiting...") + sys.exit(0) + + print(f"\n{Fore.GREEN}🤖 Selected Analysts: {', '.join(choice.title().replace('_', ' ') for choice in selected_analysts)}{Style.RESET_ALL}") + + # Select LLM model + model_name, model_provider = select_llm_model(args.ollama) + + # Set dates + end_date = args.end_date or datetime.now().strftime("%Y-%m-%d") + if not args.start_date: + end_date_obj = datetime.strptime(end_date, "%Y-%m-%d") + start_date = (end_date_obj - relativedelta(months=3)).strftime("%Y-%m-%d") + else: + start_date = args.start_date + + # Initialize portfolio + portfolio = create_initial_portfolio(tickers, args.initial_cash, args.margin_requirement) + + print(f"\n{Fore.BLUE}📅 Analysis Period: {start_date} to {end_date}{Style.RESET_ALL}") + print(f"{Fore.BLUE}💰 Initial Cash: ${args.initial_cash:,.2f}{Style.RESET_ALL}") + print(f"{Fore.BLUE}🎯 Tickers: {', '.join(tickers)}{Style.RESET_ALL}") + + # Run enhanced hedge fund + print(f"\n{Fore.YELLOW}🔄 Running Enhanced Analysis...{Style.RESET_ALL}") + + result = asyncio.run(run_enhanced_hedge_fund( + tickers=tickers, + start_date=start_date, + end_date=end_date, + portfolio=portfolio, + universe_name=universe_name, + show_reasoning=args.show_reasoning, + selected_analysts=selected_analysts, + model_name=model_name, + model_provider=model_provider, + api_keys=api_keys + )) + + # Display results + print_enhanced_results(result, features) + +def select_llm_model(use_ollama: bool): + """Select LLM model""" + if use_ollama: + model_name = questionary.select( + "Select your Ollama model:", + choices=[questionary.Choice(display, value=value) for display, value, _ in OLLAMA_LLM_ORDER], + style=questionary.Style([ + ("selected", "fg:green bold"), + ("pointer", "fg:green bold"), + ("highlighted", "fg:green"), + ("answer", "fg:green bold"), + ]) + ).ask() + + if not model_name: + sys.exit(0) + + if not ensure_ollama_and_model(model_name): + print(f"{Fore.RED}Cannot proceed without Ollama model.{Style.RESET_ALL}") + sys.exit(1) + + return model_name, ModelProvider.OLLAMA.value + else: + model_choice = questionary.select( + "Select your LLM model:", + choices=[questionary.Choice(display, value=(name, provider)) for display, name, provider in LLM_ORDER], + style=questionary.Style([ + ("selected", "fg:green bold"), + ("pointer", "fg:green bold"), + ("highlighted", "fg:green"), + ("answer", "fg:green bold"), + ]) + ).ask() + + if not model_choice: + sys.exit(0) + + return model_choice + +def create_initial_portfolio(tickers: list, initial_cash: float, margin_requirement: float): + """Create initial portfolio structure""" + return { + "cash": initial_cash, + "margin_requirement": margin_requirement, + "margin_used": 0.0, + "positions": { + ticker: { + "long": 0, + "short": 0, + "long_cost_basis": 0.0, + "short_cost_basis": 0.0, + "short_margin_used": 0.0, + } + for ticker in tickers + }, + "realized_gains": { + ticker: {"long": 0.0, "short": 0.0} + for ticker in tickers + }, + } + +def print_enhanced_results(result: dict, features: dict): + """Print enhanced analysis results""" + + print(f"\n{Fore.GREEN}{'='*60}{Style.RESET_ALL}") + print(f"{Fore.GREEN}🎯 ENHANCED TRADING ANALYSIS RESULTS{Style.RESET_ALL}") + print(f"{Fore.GREEN}{'='*60}{Style.RESET_ALL}") + + # Portfolio-level analysis + if "portfolio_analysis" in result.get("decisions", {}): + analysis = result["decisions"]["portfolio_analysis"] + print(f"\n{Fore.CYAN}📊 Portfolio-Level Analysis:{Style.RESET_ALL}") + print(f" 🌡️ Market Sentiment: {analysis.get('market_sentiment', 0):.2f}") + print(f" 🏥 Economic Health: {analysis.get('economic_health', 50):.1f}/100") + print(f" 🏛️ Political Stability: {analysis.get('political_stability', 50):.1f}/100") + print(f" 📈 Market Regime: {analysis.get('market_regime', 'Unknown')}") + print(f" ⚠️ Risk Level: {analysis.get('risk_level', 'Medium')}") + + # Enhanced analysis details + if "enhanced_analysis" in result: + enhanced = result["enhanced_analysis"] + + if features["economic_indicators"] and "economic" in enhanced: + econ = enhanced["economic"] + print(f"\n{Fore.YELLOW}📈 Economic Indicators:{Style.RESET_ALL}") + print(f" 📊 Health Score: {econ.get('health_score', 50):.1f}/100") + print(f" 📝 Summary: {econ.get('summary', 'N/A')}") + + if features["political_signals"] and "political" in enhanced: + pol = enhanced["political"] + print(f"\n{Fore.RED}🏛️ Political Signals:{Style.RESET_ALL}") + print(f" ⚡ High Impact Events: {pol.get('high_impact_events', 0)}") + print(f" 📰 Total Events: {pol.get('total_events', 0)}") + + if features["social_sentiment"] and "sentiment" in enhanced: + print(f"\n{Fore.MAGENTA}💬 Social Sentiment:{Style.RESET_ALL}") + sentiment = enhanced["sentiment"] + for ticker, data in sentiment.items(): + print(f" {ticker}: {data.get('social_sentiment', 0):.2f} ({data.get('mentions', 0)} mentions)") + + # Trading decisions + if "decisions" in result.get("decisions", {}): + decisions = result["decisions"]["decisions"] + print(f"\n{Fore.BLUE}🎯 Trading Decisions:{Style.RESET_ALL}") + + for ticker, decision in decisions.items(): + action = decision.get("action", "hold") + quantity = decision.get("quantity", 0) + confidence = decision.get("confidence", 0) + + action_color = { + "buy": Fore.GREEN, + "sell": Fore.RED, + "short": Fore.YELLOW, + "cover": Fore.CYAN, + "hold": Fore.WHITE + }.get(action, Fore.WHITE) + + print(f" {ticker}: {action_color}{action.upper()}{Style.RESET_ALL} " + f"{quantity} shares (Confidence: {confidence:.1f}%)") + + # Show enhanced metrics if available + if "sentiment_score" in decision: + print(f" 📊 Sentiment: {decision['sentiment_score']:.2f}") + if "economic_impact" in decision: + print(f" 🏛️ Economic Impact: {decision['economic_impact']:.2f}") + if "political_risk" in decision: + print(f" ⚠️ Political Risk: {decision['political_risk']:.2f}") + + print(f"\n{Fore.GREEN}✅ Enhanced Analysis Complete!{Style.RESET_ALL}") + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/start_backend.py b/start_backend.py new file mode 100755 index 000000000..e7f1b255c --- /dev/null +++ b/start_backend.py @@ -0,0 +1,208 @@ +#!/usr/bin/env python3 +""" +Backend startup script with proper error handling and logging. +Ensures all dependencies are available and the server starts successfully. +""" + +import os +import sys +import logging +import subprocess +from pathlib import Path + +# Setup logging +logging.basicConfig( + level=logging.INFO, + format='%(asctime)s - %(levelname)s - %(message)s', + handlers=[ + logging.StreamHandler(), + logging.FileHandler('backend.log') + ] +) + +logger = logging.getLogger(__name__) + +def check_python_version(): + """Check if Python version is compatible""" + if sys.version_info < (3, 11): + logger.error(f"Python 3.11+ required, got {sys.version}") + return False + logger.info(f"Python version: {sys.version}") + return True + +def check_poetry(): + """Check if Poetry is installed and available""" + try: + result = subprocess.run(['poetry', '--version'], capture_output=True, text=True) + if result.returncode == 0: + logger.info(f"Poetry found: {result.stdout.strip()}") + return True + except FileNotFoundError: + pass + + logger.error("Poetry not found. Please install Poetry: https://python-poetry.org/docs/#installation") + return False + +def check_dependencies(): + """Check if required dependencies are installed""" + try: + result = subprocess.run(['poetry', 'run', 'python', '-c', 'import fastapi, sqlalchemy, pydantic'], + capture_output=True, text=True) + if result.returncode == 0: + logger.info("All required dependencies are installed") + return True + else: + logger.error(f"Dependency check failed: {result.stderr}") + return False + except Exception as e: + logger.error(f"Error checking dependencies: {e}") + return False + +def install_dependencies(): + """Install dependencies using Poetry""" + logger.info("Installing dependencies...") + try: + result = subprocess.run(['poetry', 'install'], capture_output=True, text=True) + if result.returncode == 0: + logger.info("Dependencies installed successfully") + return True + else: + logger.error(f"Failed to install dependencies: {result.stderr}") + return False + except Exception as e: + logger.error(f"Error installing dependencies: {e}") + return False + +def check_environment(): + """Check if required environment variables are set""" + env_file = Path('.env') + if not env_file.exists(): + logger.warning(".env file not found. Creating template...") + create_env_template() + + # Load environment variables + try: + from dotenv import load_dotenv + load_dotenv() + + # Check for at least one LLM API key + llm_keys = ['OPENAI_API_KEY', 'GROQ_API_KEY', 'ANTHROPIC_API_KEY', 'DEEPSEEK_API_KEY'] + has_llm_key = any(os.getenv(key) for key in llm_keys) + + if not has_llm_key: + logger.warning("No LLM API key found. Some features may not work.") + logger.info("Add at least one of these to your .env file:") + for key in llm_keys: + logger.info(f" {key}=your_api_key_here") + + return True + except ImportError: + logger.warning("python-dotenv not installed. Environment variables won't be loaded automatically.") + return True + +def create_env_template(): + """Create a template .env file""" + template = """# AI Hedge Fund Environment Variables +# Add your API keys here + +# LLM API Keys (at least one required) +OPENAI_API_KEY=your_openai_api_key_here +GROQ_API_KEY=your_groq_api_key_here +ANTHROPIC_API_KEY=your_anthropic_api_key_here +DEEPSEEK_API_KEY=your_deepseek_api_key_here + +# Financial Data API Key (optional - free data available for AAPL, GOOGL, MSFT, NVDA, TSLA) +FINANCIAL_DATASETS_API_KEY=your_financial_datasets_api_key_here + +# Database URL (optional - uses SQLite by default) +DATABASE_URL=sqlite:///hedge_fund.db + +# Other API Keys for enhanced features +NEWSAPI_KEY=your_newsapi_key_here +FRED_API_KEY=your_fred_api_key_here +""" + + with open('.env', 'w') as f: + f.write(template) + + logger.info("Created .env template file. Please add your API keys.") + +def test_imports(): + """Test that all critical imports work""" + logger.info("Testing critical imports...") + try: + # Test app imports + result = subprocess.run([ + 'poetry', 'run', 'python', '-c', + 'from app.backend.main import app; print("Backend imports successful")' + ], capture_output=True, text=True) + + if result.returncode == 0: + logger.info("All imports successful") + return True + else: + logger.error(f"Import test failed: {result.stderr}") + return False + except Exception as e: + logger.error(f"Error testing imports: {e}") + return False + +def start_server(host="0.0.0.0", port=8000, reload=True): + """Start the FastAPI server using uvicorn""" + logger.info(f"Starting backend server on http://{host}:{port}") + + cmd = [ + 'poetry', 'run', 'uvicorn', 'app.backend.main:app', + '--host', host, + '--port', str(port) + ] + + if reload: + cmd.append('--reload') + + try: + # Start the server + subprocess.run(cmd) + except KeyboardInterrupt: + logger.info("Server stopped by user") + except Exception as e: + logger.error(f"Error starting server: {e}") + return False + + return True + +def main(): + """Main startup routine""" + logger.info("Starting AI Hedge Fund Backend Server...") + + # Pre-flight checks + if not check_python_version(): + sys.exit(1) + + if not check_poetry(): + sys.exit(1) + + # Check and install dependencies + if not check_dependencies(): + logger.info("Dependencies missing, attempting to install...") + if not install_dependencies(): + sys.exit(1) + + # Re-check after installation + if not check_dependencies(): + sys.exit(1) + + # Check environment + check_environment() + + # Test imports + if not test_imports(): + logger.error("Critical imports failed. Check for syntax errors or missing dependencies.") + sys.exit(1) + + # Start the server + logger.info("All checks passed. Starting server...") + start_server() + +if __name__ == "__main__": + main() \ No newline at end of file