fix: valdiate watsonX key #11264
Conversation
Important

Review skipped: auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI. You can disable this status message in the CodeRabbit configuration.

Walkthrough

Implemented IBM WatsonX validation in src/lfx/src/lfx/base/models/unified_models.py.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Important

Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 2 warnings)
✅ Passed checks (4 passed)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @src/lfx/src/lfx/base/models/unified_models.py:
- Line 333: Remove the trailing whitespace on the blank line causing Ruff W293; delete the spaces on that empty line so it is truly empty, then re-run ruff/formatting (or pre-commit) to confirm the warning is resolved.
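For illustration, a throwaway script along these lines would clear the offending whitespace; the file path is taken from the report, and rewriting the file in place (rather than relying on ruff's autofix) is an assumption about your workflow.

from pathlib import Path

# Strip trailing whitespace from every line (including blank lines),
# which is exactly what Ruff W293/W291 complain about.
path = Path("src/lfx/src/lfx/base/models/unified_models.py")
lines = path.read_text(encoding="utf-8").splitlines()
path.write_text("\n".join(line.rstrip() for line in lines) + "\n", encoding="utf-8")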
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/lfx/src/lfx/base/models/unified_models.py
🧰 Additional context used
🪛 GitHub Actions: Ruff Style Check
src/lfx/src/lfx/base/models/unified_models.py
[error] 333-333: Ruff: W293 Blank line contains whitespace. Command: uv run --only-dev ruff check --output-format=github .
🪛 GitHub Check: Ruff Style Check (3.13)
src/lfx/src/lfx/base/models/unified_models.py
[failure] 333-333: Ruff (W293)
src/lfx/src/lfx/base/models/unified_models.py:333:1: W293 Blank line contains whitespace
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Update Component Index
- GitHub Check: Update Starter Projects
 elif provider == "IBM WatsonX":
-    # WatsonX validation would require additional parameters
-    # Skip for now as it needs project_id, url, etc.
-    return
+    from langchain_ibm import ChatWatsonx
+
+    default_url = "https://us-south.ml.cloud.ibm.com"
+    llm = ChatWatsonx(
+        apikey=api_key,
+        url=default_url,
+        model_id=first_model,
+        project_id="dummy_project_for_validation",  # Dummy project_id for validation
+        params={"max_new_tokens": 1},
+    )
+    llm.invoke("test")
🧩 Analysis chain
🏁 Script executed:

cd src/lfx && find . -name "*.txt" -o -name "*.in" -o -name "*.toml" -o -name "*.cfg" | grep -E "(requirements|setup|pyproject|constraints)" | head -20

Repository: langflow-ai/langflow
Length of output: 80

🏁 Script executed:

cd src/lfx && cat -n src/lfx/base/models/unified_models.py | sed -n '315,340p'

Repository: langflow-ai/langflow
Length of output: 1303

🏁 Script executed:

cd src/lfx && wc -l src/lfx/base/models/unified_models.py

Repository: langflow-ai/langflow
Length of output: 106

🏁 Script executed:

cd src/lfx && cat -n pyproject.toml | head -100

Repository: langflow-ai/langflow
Length of output: 3381

🏁 Script executed:

cd src/lfx && rg "langchain.ibm|langchain-ibm" pyproject.toml -A 2

Repository: langflow-ai/langflow
Length of output: 46

🏁 Script executed:

cd src/lfx && git log --oneline -1 2>/dev/null || echo "Not a git repo"

Repository: langflow-ai/langflow
Length of output: 107

🏁 Script executed:

cd src/lfx && rg "W293" . --type py -l 2>/dev/null | head -5

Repository: langflow-ai/langflow
Length of output: 46

🏁 Script executed:

cd src/lfx && cat pyproject.toml | grep -A 50 "\[dependency-groups\]"

Repository: langflow-ai/langflow
Length of output: 974

🏁 Script executed:

cd src/lfx && rg "langchain.ibm|langchain-ibm|ChatWatsonx" pyproject.toml

Repository: langflow-ai/langflow
Length of output: 46

🏁 Script executed:

cd src/lfx && python3 -c "
import re
with open('src/lfx/base/models/unified_models.py', 'r') as f:
    lines = f.readlines()
for i in range(320, 334):
    line = lines[i]
    has_trailing_space = line.rstrip('\n') != line.rstrip('\n').rstrip()
    print(f'Line {i+1}: has_trailing_space={has_trailing_space} repr={repr(line)}')" 2>/dev/null || echo "Python check failed"

Repository: langflow-ai/langflow
Length of output: 150
🌐 Web query:
ChatWatsonx langchain-ibm 0.3 0.2 API parameters apikey url project_id initialization
💡 Result:
Short answer — how to initialize ChatWatsonx (langchain-ibm v0.2/0.3):

- Required identification/auth:
  - API key: pass api_key to the constructor or set env var WATSONX_API_KEY (the older apikey/WATSONX_APIKEY names are deprecated). [1][2]
  - url: pass the service URL (e.g. https://us-south.ml.cloud.ibm.com) via url/serviceUrl. [1][2]
  - project_id or space_id (or deployment_id): you must supply project_id or space_id (or a deployment_id) for the request context unless using a lightweight engine. Get IDs from the project/space Manage → General page. [1][3]

- Typical Python instantiation (example):

  from langchain_ibm import ChatWatsonx
  from ibm_watsonx_ai.foundation_models.schema import TextChatParameters

  parameters = TextChatParameters(temperature=0.5, top_p=1, max_completion_tokens=None)
  chat = ChatWatsonx(
      model_id="ibm/granite-34b-code-instruct",  # or deployment_id
      url="https://us-south.ml.cloud.ibm.com",
      project_id="YOUR_PROJECT_ID",
      api_key="YOUR_API_KEY",  # or rely on WATSONX_API_KEY env var
      params=parameters,
  )  # [1][2]

- Notes:
  - apikey / WATSONX_APIKEY names are deprecated — prefer api_key / WATSONX_API_KEY. [2]
  - You can also pass an ibm_watsonx_ai.APIClient instance (watsonx_client) instead of a raw key/URL. [1]
  - For JS/langchain.js the same concepts apply (serviceUrl, projectId/spaceId, model/idOrName). [3]

Sources:
[1] LangChain docs — IBM watsonx.ai (Python) (ChatWatsonx examples & notes).
[2] LangChain reference — ChatWatsonx API (fields: api_key, url, project_id, params; deprecation note).
[3] LangChain (JS) / community Watsonx examples (serviceUrl, projectId/spaceId/deployment).
🏁 Script executed:

cd src/lfx && sed -n '321,333p' src/lfx/base/models/unified_models.py | cat -A

Repository: langflow-ai/langflow
Length of output: 549
Fix deprecated apikey parameter and unreliable WatsonX validation logic.
The ChatWatsonx constructor uses the deprecated parameter apikey= (line 326); langchain-ibm v0.2/0.3+ deprecates it in favor of api_key=. Additionally, hardcoding url="https://us-south.ml.cloud.ibm.com" and using the dummy project_id="dummy_project_for_validation" means the validation will fail even for valid keys (wrong region, invalid project), making this check unreliable. The except Exception handler will then treat those failures as "network issues" and allow saving anyway, defeating the purpose.
Consider validating the IBM Cloud API key directly (IAM token exchange) without requiring a real project context, or accept actual project credentials from the user configuration.
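If the second route is taken (passing real project credentials through from user configuration), a minimal sketch might look like the helper below. The helper name and the idea of threading url/project_id in from configuration are assumptions for illustration, not part of this PR; the parameter names follow the langchain-ibm documentation cited above.

from langchain_ibm import ChatWatsonx


def validate_watsonx_credentials(api_key: str, url: str, project_id: str, model_id: str) -> None:
    """Sketch: issue a one-token request to confirm the supplied credentials work."""
    llm = ChatWatsonx(
        api_key=api_key,  # non-deprecated name; apikey= is deprecated in langchain-ibm 0.2/0.3
        url=url,  # region-specific endpoint, e.g. "https://us-south.ml.cloud.ibm.com"
        model_id=model_id,
        project_id=project_id,  # a real project or space id, not a dummy value
        params={"max_new_tokens": 1},
    )
    llm.invoke("test")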
Proposed fix
 elif provider == "IBM WatsonX":
-    from langchain_ibm import ChatWatsonx
-
-    default_url = "https://us-south.ml.cloud.ibm.com"
-    llm = ChatWatsonx(
-        apikey=api_key,
-        url=default_url,
-        model_id=first_model,
-        project_id="dummy_project_for_validation",  # Dummy project_id for validation
-        params={"max_new_tokens": 1},
-    )
-    llm.invoke("test")
+    # Validate IBM Cloud API key without requiring watsonx.ai project context
+    import requests
+
+    resp = requests.post(
+        "https://iam.cloud.ibm.com/identity/token",
+        headers={
+            "Accept": "application/json",
+            "Content-Type": "application/x-www-form-urlencoded",
+        },
+        data={
+            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
+            "apikey": api_key,
+        },
+        timeout=5,
+    )
+    if resp.status_code != requests.codes.ok:
+        msg = "Invalid API key for IBM WatsonX"
+        raise ValueError(msg)

Also fix line 333: Ruff W293 (blank line contains whitespace).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
elif provider == "IBM WatsonX":
    # Validate IBM Cloud API key without requiring watsonx.ai project context
    import requests

    resp = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        headers={
            "Accept": "application/json",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": api_key,
        },
        timeout=5,
    )
    if resp.status_code != requests.codes.ok:
        msg = "Invalid API key for IBM WatsonX"
        raise ValueError(msg)
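As a usage note, the same IAM check could also live in a small standalone helper so it can be unit-tested separately from the provider dispatch. The sketch below is illustrative only: the helper name is made up, and it assumes the public IAM endpoint and a short timeout are acceptable.

import requests

IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"


def ibm_api_key_is_valid(api_key: str, timeout: float = 5.0) -> bool:
    """Exchange the key for an IAM token; a 200 response means the key is valid."""
    resp = requests.post(
        IAM_TOKEN_URL,
        headers={
            "Accept": "application/json",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": api_key,
        },
        timeout=timeout,
    )
    return resp.status_code == requests.codes.ok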
🧰 Tools
🪛 GitHub Actions: Ruff Style Check
[error] 333-333: Ruff: W293 Blank line contains whitespace. Command: uv run --only-dev ruff check --output-format=github .
🪛 GitHub Check: Ruff Style Check (3.13)
[failure] 333-333: Ruff (W293)
src/lfx/src/lfx/base/models/unified_models.py:333:1: W293 Blank line contains whitespace
        params={"max_new_tokens": 1},
    )
    llm.invoke("test")
Fix Ruff W293 (blank line contains whitespace).
This is currently breaking CI.
Proposed fix
-
+

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧰 Tools
🪛 GitHub Actions: Ruff Style Check
[error] 333-333: Ruff: W293 Blank line contains whitespace. Command: uv run --only-dev ruff check --output-format=github .
🪛 GitHub Check: Ruff Style Check (3.13)
[failure] 333-333: Ruff (W293)
src/lfx/src/lfx/base/models/unified_models.py:333:1: W293 Blank line contains whitespace
🤖 Prompt for AI Agents
In @src/lfx/src/lfx/base/models/unified_models.py at line 333, Remove the
trailing whitespace on the blank line causing Ruff W293 in
src/lfx/src/lfx/base/models/unified_models.py (around the reported location near
line 333); delete the spaces on that empty line so it's truly empty, then re-run
ruff/formatting (or pre-commit) to confirm the W293 warning is resolved.
Summary by CodeRabbit
✏️ Tip: You can customize this high-level summary in your review settings.