15 changes: 12 additions & 3 deletions src/lfx/src/lfx/base/models/unified_models.py
@@ -319,9 +319,18 @@
            llm = ChatGoogleGenerativeAI(google_api_key=api_key, model=first_model, max_tokens=1)
            llm.invoke("test")
        elif provider == "IBM WatsonX":
-            # WatsonX validation would require additional parameters
-            # Skip for now as it needs project_id, url, etc.
-            return
+            from langchain_ibm import ChatWatsonx
+
+            default_url = "https://us-south.ml.cloud.ibm.com"
+            llm = ChatWatsonx(
+                apikey=api_key,
+                url=default_url,
+                model_id=first_model,
+                project_id="dummy_project_for_validation",  # Dummy project_id for validation
+                params={"max_new_tokens": 1},
+            )
+            llm.invoke("test")
+
Check failure on line 333 in src/lfx/src/lfx/base/models/unified_models.py

GitHub Actions / Ruff Style Check (3.13) — Ruff (W293)

src/lfx/src/lfx/base/models/unified_models.py:333:1: W293 Blank line contains whitespace
Comment on lines 321 to 333
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cd src/lfx && find . -name "*.txt" -o -name "*.in" -o -name "*.toml" -o -name "*.cfg" | grep -E "(requirements|setup|pyproject|constraints)" | head -20

Repository: langflow-ai/langflow

Length of output: 80


🏁 Script executed:

cd src/lfx && cat -n src/lfx/base/models/unified_models.py | sed -n '315,340p'


Length of output: 1303


🏁 Script executed:

cd src/lfx && wc -l src/lfx/base/models/unified_models.py


Length of output: 106


🏁 Script executed:

cd src/lfx && cat -n pyproject.toml | head -100


Length of output: 3381


🏁 Script executed:

cd src/lfx && rg "langchain.ibm|langchain-ibm" pyproject.toml -A 2


Length of output: 46


🏁 Script executed:

cd src/lfx && git log --oneline -1 2>/dev/null || echo "Not a git repo"


Length of output: 107


🏁 Script executed:

cd src/lfx && rg "W293" . --type py -l 2>/dev/null | head -5


Length of output: 46


🏁 Script executed:

cd src/lfx && cat pyproject.toml | grep -A 50 "\[dependency-groups\]"


Length of output: 974


🏁 Script executed:

cd src/lfx && rg "langchain.ibm|langchain-ibm|ChatWatsonx" pyproject.toml


Length of output: 46


🏁 Script executed:

cd src/lfx && python3 -c "
import re
with open('src/lfx/base/models/unified_models.py', 'r') as f:
    lines = f.readlines()
    for i in range(320, 334):
        line = lines[i]
        has_trailing_space = line.rstrip('\n') != line.rstrip('\n').rstrip()
        print(f'Line {i+1}: has_trailing_space={has_trailing_space} repr={repr(line)}')" 2>/dev/null || echo "Python check failed"


Length of output: 150


🌐 Web query:

ChatWatsonx langchain-ibm 0.3 0.2 API parameters apikey url project_id initialization

💡 Result:

Short answer — how to initialize ChatWatsonx (langchain-ibm v0.2/0.3):

  • Required identification/auth:

    • api key: pass api_key to the constructor or set env var WATSONX_API_KEY (older apikey/WATSONX_APIKEY are deprecated). [1][2]
    • url: pass the service URL (e.g. https://us-south.ml.cloud.ibm.com) via url/serviceUrl. [1][2]
    • project_id or space_id (or deployment_id): you must supply project_id or space_id (or a deployment_id) for the request context unless using a lightweight engine. Get IDs from the project/space Manage → General. [1][3]
  • Typical Python instantiation (example):
    from langchain_ibm import ChatWatsonx
    from ibm_watsonx_ai.foundation_models.schema import TextChatParameters

    parameters = TextChatParameters(temperature=0.5, top_p=1, max_completion_tokens=None)

    chat = ChatWatsonx(
        model_id="ibm/granite-34b-code-instruct",  # or deployment_id
        url="https://us-south.ml.cloud.ibm.com",
        project_id="YOUR_PROJECT_ID",
        api_key="YOUR_API_KEY",  # or rely on WATSONX_API_KEY env var
        params=parameters,
    )  # [1][2]

  • Notes:

    • apikey / WATSONX_APIKEY names are deprecated — prefer api_key / WATSONX_API_KEY. [2]
    • You can also pass an ibm_watsonx_ai.APIClient instance (watsonx_client) instead of raw key/URL. [1]
    • For JS/langchain.js the same concepts apply (serviceUrl, projectId/spaceId, model/idOrName). [3]

Sources:
[1] LangChain docs — IBM watsonx.ai (Python) (ChatWatsonx examples & notes).
[2] LangChain reference — ChatWatsonx API (fields: api_key, url, project_id, params; deprecation note).
[3] LangChain (JS) / community Watsonx examples (serviceUrl, projectId/spaceId/deployment).


🏁 Script executed:

cd src/lfx && sed -n '321,333p' src/lfx/base/models/unified_models.py | cat -A


Length of output: 549


Fix deprecated apikey parameter and unreliable WatsonX validation logic.

The ChatWatsonx constructor uses the deprecated parameter apikey= (line 326); langchain-ibm v0.2/0.3+ deprecates it in favor of api_key=. Additionally, hardcoding url="https://us-south.ml.cloud.ibm.com" and passing the dummy project_id="dummy_project_for_validation" means validation will fail even for valid keys (wrong region, invalid project), making this check unreliable. Your except Exception handler will then treat those failures as "network issues" and allow saving anyway, defeating the purpose of the check.

Consider validating the IBM Cloud API key directly (IAM token exchange) without requiring a real project context, or accept actual project credentials from the user configuration.

Proposed fix
 elif provider == "IBM WatsonX":
-            from langchain_ibm import ChatWatsonx
-
-            default_url = "https://us-south.ml.cloud.ibm.com"
-            llm = ChatWatsonx(
-                    apikey=api_key,
-                    url=default_url,
-                    model_id=first_model,
-                    project_id="dummy_project_for_validation",  # Dummy project_id for validation
-                    params={"max_new_tokens": 1},
-                )
-            llm.invoke("test")
+            # Validate IBM Cloud API key without requiring watsonx.ai project context
+            import requests
+
+            resp = requests.post(
+                "https://iam.cloud.ibm.com/identity/token",
+                headers={
+                    "Accept": "application/json",
+                    "Content-Type": "application/x-www-form-urlencoded",
+                },
+                data={
+                    "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
+                    "apikey": api_key,
+                },
+                timeout=5,
+            )
+            if resp.status_code != requests.codes.ok:
+                msg = "Invalid API key for IBM WatsonX"
+                raise ValueError(msg)

Also fix line 333: Ruff W293 (blank line contains whitespace).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        elif provider == "IBM WatsonX":
            # WatsonX validation would require additional parameters
            # Skip for now as it needs project_id, url, etc.
            return
            from langchain_ibm import ChatWatsonx

            default_url = "https://us-south.ml.cloud.ibm.com"
            llm = ChatWatsonx(
                apikey=api_key,
                url=default_url,
                model_id=first_model,
                project_id="dummy_project_for_validation",  # Dummy project_id for validation
                params={"max_new_tokens": 1},
            )
            llm.invoke("test")
        elif provider == "IBM WatsonX":
            # Validate IBM Cloud API key without requiring watsonx.ai project context
            import requests

            resp = requests.post(
                "https://iam.cloud.ibm.com/identity/token",
                headers={
                    "Accept": "application/json",
                    "Content-Type": "application/x-www-form-urlencoded",
                },
                data={
                    "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
                    "apikey": api_key,
                },
                timeout=5,
            )
            if resp.status_code != requests.codes.ok:
                msg = "Invalid API key for IBM WatsonX"
                raise ValueError(msg)
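The IAM token-exchange approach in the suggestion can be factored so the request payload is built separately from the network call, which makes it unit-testable offline. This is a minimal sketch, not the committed code: the function names are illustrative, and the third-party `requests` package is assumed available, as in the suggestion above.

```python
def build_iam_token_request(api_key: str) -> dict:
    """Assemble the pieces of the IBM Cloud IAM token-exchange POST."""
    return {
        "url": "https://iam.cloud.ibm.com/identity/token",
        "headers": {
            "Accept": "application/json",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        "data": {
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": api_key,
        },
    }


def validate_ibm_api_key(api_key: str, timeout: float = 5.0) -> None:
    """POST the token exchange; raise ValueError on a non-200 response."""
    import requests  # third-party dependency, assumed installed

    req = build_iam_token_request(api_key)
    resp = requests.post(
        req["url"], headers=req["headers"], data=req["data"], timeout=timeout
    )
    if resp.status_code != requests.codes.ok:
        msg = "Invalid API key for IBM WatsonX"
        raise ValueError(msg)
```

Splitting the builder from the sender keeps the grant type and headers covered by a plain unit test, while the network-dependent part stays a thin wrapper.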
🧰 Tools
🪛 GitHub Actions: Ruff Style Check

[error] 333-333: Ruff: W293 Blank line contains whitespace. Command: uv run --only-dev ruff check --output-format=github .

🪛 GitHub Check: Ruff Style Check (3.13)

[failure] 333-333: Ruff (W293)
src/lfx/src/lfx/base/models/unified_models.py:333:1: W293 Blank line contains whitespace


⚠️ Potential issue | 🟡 Minor

Fix Ruff W293 (blank line contains whitespace).

This is currently breaking CI.

Proposed fix
-            
+

🤖 Prompt for AI Agents
In @src/lfx/src/lfx/base/models/unified_models.py at line 333, Remove the
trailing whitespace on the blank line causing Ruff W293 in
src/lfx/src/lfx/base/models/unified_models.py (around the reported location near
line 333); delete the spaces on that empty line so it's truly empty, then re-run
ruff/formatting (or pre-commit) to confirm the W293 warning is resolved.
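The agent prompt above can be carried out with standard tools. A sketch using GNU `sed` (the in-place `-i` flag is assumed); `ruff check --fix` can usually auto-fix W293 as well, since it is in Ruff's fixable rule set:

```shell
# Reproduce a W293-style violation in a throwaway file: line 2 is
# "blank" but contains four spaces.
printf 'def f():\n    \n    return 1\n' > /tmp/w293_demo.py

# Locate blank lines that contain whitespace (what Ruff W293 flags):
grep -nE '^[[:space:]]+$' /tmp/w293_demo.py

# Strip the whitespace so the lines become truly empty (GNU sed):
sed -i 's/^[[:space:]]\{1,\}$//' /tmp/w293_demo.py

# Confirm no matches remain; grep -c prints 0 and exits nonzero, so
# guard it for set -e shells:
grep -cE '^[[:space:]]+$' /tmp/w293_demo.py || true
```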

        elif provider == "Ollama":
            # Ollama is local, just verify the URL is accessible
            import requests