diff --git a/README.md b/README.md index 1544c7bc38..7e91f1cc8d 100644 --- a/README.md +++ b/README.md @@ -34,7 +34,6 @@ Conductor is an open-source orchestration engine built at Netflix to help develo * [Key Features](#key-features) * [Use Cases](#use-cases) * [Getting Started with Conductor](#getting-started-with-conductor) - * [All Options](#all-options) * [Conductor SDKs](#conductor-sdks) * [Documentation](#documentation) * [Community / I Need Help](#community--i-need-help) @@ -71,36 +70,20 @@ Conductor OSS is the continuation of [Netflix Conductor Repository](https://gith * **Microservices orchestration** Orchestrate very complex microservices flows both _synchronously_ and _asynchronously_. * **Durable code execution** Tasks in the workflow are durable, with at-least-once delivery semantics offered by the queues. * **Agentic workflows** Conductor workflows can be fully dynamic; LLMs can plan and design workflows that are executed by the Conductor server at runtime. No compile-and-deploy cycle required. 
-* **Agentic RAG** Easy to build RAG pipelines with LLM and Vector DB integrations - +* **Agentic RAG** Easy to build RAG pipelines with LLM and Vector DB integrations - - - + # Getting Started with Conductor -**One-liner for macOS / Linux** -```bash -curl -sSL https://raw.githubusercontent.com/conductor-oss/conductor/main/conductor_server.sh | sh -``` +**Install Conductor CLI and start server** +```shell +# Installs conductor cli +npm install -g @conductor-oss/conductor-cli -**One-liner for Windows PowerShell:** -```powershell -irm https://raw.githubusercontent.com/conductor-oss/conductor/main/conductor_server.ps1 | iex +conductor server start +# see conductor server --help for all the available commands ``` -### All Options - -| Operating System | Interactive | Custom Port & Version | -|------------------|-------------|-----------------------| -| macOS / Linux | `./conductor_server.sh` | `./conductor_server.sh 9090 3.22.0` | -| Windows (CMD) | `conductor_server.bat` | `conductor_server.bat 9090 3.22.0` | -| Windows (PowerShell) | `.\conductor_server.ps1` | `.\conductor_server.ps1 -Port 9090 -Version 3.22.0` | - -Each script will: -1. Download the Conductor server JAR (if not already present) -2. Prompt for parameters if running interactively and not provided -3. Start the server with `java -jar` - -Set `CONDUCTOR_HOME` to specify where the JAR is stored (defaults to current directory). - **Or run with Docker:** ```shell diff --git a/ai/README.md b/ai/README.md index aff5734ddd..61c34d74ee 100644 --- a/ai/README.md +++ b/ai/README.md @@ -1,6 +1,6 @@ # Conductor AI Module -The Conductor AI module provides built-in integration with 12 popular LLM providers and vector databases, enabling AI-powered workflows through simple task definitions -- including chat, embeddings, image generation, audio synthesis, video generation, and tool calling. 
+The Conductor AI module provides built-in integration with 12 popular LLM providers and vector databases, enabling AI-powered workflows through simple task definitions -- including chat, embeddings, image generation, audio synthesis, video generation, document generation, and tool calling. ## Table of Contents - [Supported Providers](#supported-providers) @@ -20,7 +20,7 @@ The Conductor AI module provides built-in integration with 12 popular LLM provid |----------|:----:|:----------:|:---------:|:---------:|:---------:|--------| | **OpenAI** | ✅ | ✅ | ✅ | ✅ | ✅ | GPT-4o, GPT-4o-mini, DALL-E-3, Sora-2, text-embedding-3-small/large | | **Anthropic** | ✅ | ❌ | ❌ | ❌ | ❌ | Claude 3.5 Sonnet, Claude 3 Opus/Sonnet/Haiku, Claude 4 Sonnet | -| **Google Vertex AI** | ✅ | ✅ | ✅ | ❌ | ✅ | Gemini 1.5/2.0, Veo 2/3, Imagen, text-embedding-004 | +| **Google Gemini** | ✅ | ✅ | ✅ | ❌ | ✅ | Gemini 1.5/2.0, Veo 2/3, Imagen, text-embedding-004 | | **Azure OpenAI** | ✅ | ✅ | ✅ | ❌ | ❌ | GPT-4o, GPT-4, GPT-3.5-turbo, text-embedding-ada-002, DALL-E-3 | | **AWS Bedrock** | ✅ | ✅ | ❌ | ❌ | ❌ | Claude 3.x, Titan, Llama 3.x, amazon.titan-embed-text-v2:0 | | **Mistral AI** | ✅ | ✅ | ❌ | ❌ | ❌ | Mistral Small/Medium/Large, Mixtral 8x7B, mistral-embed | @@ -59,6 +59,7 @@ The Conductor AI module provides built-in integration with 12 popular LLM provid | **Search Embeddings** | `LLM_SEARCH_EMBEDDINGS` | Search using embedding vectors | | **Get Embeddings** | `LLM_GET_EMBEDDINGS` | Retrieve stored embeddings | | **List MCP Tools** | `LIST_MCP_TOOLS` | List tools from MCP server | +| **Generate PDF** | `GENERATE_PDF` | Convert markdown to PDF document | | **Call MCP Tool** | `CALL_MCP_TOOL` | Call a tool on MCP server | --- @@ -192,7 +193,7 @@ Generate videos from text or image prompts. 
This is an **async task** -- it subm | Parameter | Type | Required | Description | |-----------|------|:--------:|-------------| -| `llmProvider` | String | Yes | Provider name (`openai` or `vertex_ai`) | +| `llmProvider` | String | Yes | Provider name (`openai`, `vertex_ai`, or `google_gemini`) | | `model` | String | Yes | Video model (e.g., `sora-2`, `veo-3`) | | `prompt` | String | Yes | Text description of the video to generate | | `duration` | Integer | No | Duration in seconds (OpenAI: 4, 8, or 12; default: 5) | @@ -223,7 +224,7 @@ Generate videos from text or image prompts. This is an **async task** -- it subm **Provider-Specific Notes:** - **OpenAI Sora**: Supports `sora-2` and `sora-2-pro` models. Valid durations are 4, 8, or 12 seconds. Valid sizes: `1280x720`, `720x1280`, `1792x1024`, `1024x1792`. Returns video + webp thumbnail. -- **Google Vertex AI Veo**: Supports `veo-2.0-generate-001`, `veo-3.0`, `veo-3.1`. Requires Vertex AI credentials (Application Default Credentials). Veo 3+ supports audio generation. +- **Google Gemini Veo**: Supports `veo-2.0-generate-001`, `veo-3.0`, `veo-3.1`. Set `llmProvider` to `google_gemini` or `vertex_ai`. When an API key is used, no GCP credentials are needed. Veo 3+ supports audio generation. --- @@ -318,6 +319,56 @@ Retrieve stored embeddings by document ID. --- +### GENERATE_PDF + +Convert markdown text to a PDF document. Supports full GitHub Flavored Markdown including headings, tables, code blocks, lists, task lists, blockquotes, images, links, and inline formatting. No external API keys required -- uses built-in Apache PDFBox rendering. 
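A minimal task definition needs only the required `markdown` input; every other parameter falls back to its default. A sketch (the task and reference names here are illustrative):

```json
{
  "name": "generate_pdf",
  "taskReferenceName": "pdf",
  "type": "GENERATE_PDF",
  "inputParameters": {
    "markdown": "# Hello\n\nGenerated by Conductor."
  }
}
```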
+ +**Inputs:** + +| Parameter | Type | Required | Default | Description | +|-----------|------|:--------:|---------|-------------| +| `markdown` | String | ✅ | - | Markdown text to convert to PDF | +| `pageSize` | String | ❌ | `A4` | Page size: `A4`, `LETTER`, `LEGAL`, `A3`, `A5` | +| `marginTop` | Number | ❌ | `72` | Top margin in points (72pt = 1 inch) | +| `marginRight` | Number | ❌ | `72` | Right margin in points | +| `marginBottom` | Number | ❌ | `72` | Bottom margin in points | +| `marginLeft` | Number | ❌ | `72` | Left margin in points | +| `theme` | String | ❌ | `default` | Style preset: `default` or `compact` | +| `baseFontSize` | Number | ❌ | `11` | Base font size in points | +| `outputLocation` | String | ❌ | auto | Output URI (e.g., `file:///tmp/report.pdf`). Defaults to payload store. | +| `pdfMetadata` | Object | ❌ | - | PDF metadata: `title`, `author`, `subject`, `keywords` | +| `imageBaseUrl` | String | ❌ | - | Base URL for resolving relative image paths | + +**Outputs:** + +| Field | Type | Description | +|-------|------|-------------| +| `result.location` | String | URI of the generated PDF file | +| `result.sizeBytes` | Integer | Size of the generated PDF in bytes | +| `media` | Array | Media items with `location` and `mimeType` (`application/pdf`) | +| `finishReason` | String | `COMPLETED` on success | + +**Supported Markdown Features:** + +| Feature | Syntax | +|---------|--------| +| Headings | `# H1` through `###### H6` | +| Bold / Italic | `**bold**`, `*italic*`, `***both***` | +| Tables | GFM pipe tables with header row | +| Code blocks | Fenced (` ``` `) and indented code blocks | +| Bullet lists | `- item` or `* item` (nested supported) | +| Ordered lists | `1. 
item` (nested supported) | +| Task lists | `- [x] done`, `- [ ] todo` | +| Blockquotes | `> quoted text` | +| Links | `[text](url)` (rendered as clickable PDF links) | +| Images | `![alt](url)` (HTTP/HTTPS, file://, data: URIs, relative paths) | +| Horizontal rules | `---` | +| Strikethrough | `~~strikethrough~~` | +| Inline code | `` `code` `` | +| Footnotes | `[^1]` references | + +--- + ### LIST_MCP_TOOLS List available tools from an MCP (Model Context Protocol) server. @@ -411,9 +462,15 @@ conductor.ai.anthropic.beta-version=prompt-caching-2024-07-31 | `beta-version` | ❌ | - | Beta features (e.g., prompt caching) | | `completions-path` | ❌ | - | Custom completions endpoint path | -#### Google Vertex AI (Gemini) +#### Google Gemini / Vertex AI + +Use `llmProvider` as either `google_gemini` or `vertex_ai` (both resolve to the same provider). ```properties +# Option 1: API key (simplest — works for image/video/audio gen) +conductor.ai.gemini.api-key=${GEMINI_API_KEY} + +# Option 2: Vertex AI credentials (required for chat completions and embeddings) conductor.ai.gemini.project-id=${GOOGLE_CLOUD_PROJECT} conductor.ai.gemini.location=us-central1 conductor.ai.gemini.publisher=google @@ -421,12 +478,13 @@ conductor.ai.gemini.publisher=google | Property | Required | Default | Description | |----------|:--------:|---------|-------------| -| `project-id` | ✅ | - | GCP project ID | -| `location` | ✅ | - | GCP region (e.g., us-central1) | +| `api-key` | ❌ | - | Gemini API key from [Google AI Studio](https://aistudio.google.com/) | +| `project-id` | ❌ | - | GCP project ID (required for chat/embeddings via Vertex AI) | +| `location` | ❌ | - | GCP region (e.g., us-central1) | | `base-url` | ❌ | `{location}-aiplatform.googleapis.com:443` | API endpoint | | `publisher` | ❌ | - | Model publisher | -> **Note**: Vertex AI uses Application Default Credentials (ADC) or service account credentials from the environment. 
+> **Note**: When `api-key` is set, image/video/audio generation uses the Google AI API directly. Chat completions and embeddings require Vertex AI credentials (`project-id` + Application Default Credentials or service account). Both can be configured simultaneously. #### Azure OpenAI @@ -572,9 +630,10 @@ The AI module reads from standard environment variables automatically. Set the e | AWS Bedrock | `AWS_ACCESS_KEY_ID` | AWS access key | | AWS Bedrock | `AWS_SECRET_ACCESS_KEY` | AWS secret key | | AWS Bedrock | `AWS_REGION` | AWS region (default: `us-east-1`) | -| Google Vertex AI | `GOOGLE_CLOUD_PROJECT` | GCP project ID | -| Google Vertex AI | `GOOGLE_CLOUD_LOCATION` | GCP region (default: `us-central1`) | -| Google Vertex AI | `GOOGLE_APPLICATION_CREDENTIALS` | Path to service account JSON file | +| Google Gemini | `GEMINI_API_KEY` | Gemini API key from [Google AI Studio](https://aistudio.google.com/) | +| Google Gemini | `GOOGLE_CLOUD_PROJECT` | GCP project ID (for Vertex AI chat/embeddings) | +| Google Gemini | `GOOGLE_CLOUD_LOCATION` | GCP region (default: `us-central1`) | +| Google Gemini | `GOOGLE_APPLICATION_CREDENTIALS` | Path to service account JSON file | | Ollama | `OLLAMA_HOST` | Ollama server URL (default: `http://localhost:11434`) | ### Usage @@ -643,9 +702,18 @@ Run with: docker-compose up -d ``` -### Google Vertex AI with Docker +### Google Gemini with Docker -Google Vertex AI requires a service account credentials file: +**Using API key (simplest):** + +```bash +docker run -d \ + -p 8080:8080 \ + -e GEMINI_API_KEY=your-api-key \ + conductor:server +``` + +**Using Vertex AI credentials (for chat/embeddings):** ```bash docker run -d \ @@ -1339,7 +1407,123 @@ A workflow that generates an image and a video in sequence: } ``` -### 11. LLM Tool Calling with MCP Tools +### 11. 
PDF Generation (Markdown to PDF) + +Generate a PDF document from markdown content with layout options and metadata: + +```json +{ + "name": "pdf_generation_workflow", + "version": 1, + "schemaVersion": 2, + "tasks": [ + { + "name": "generate_pdf", + "taskReferenceName": "pdf", + "type": "GENERATE_PDF", + "inputParameters": { + "markdown": "# Sales Report\n\n## Summary\n\nTotal revenue: **$5.4M**\n\n| Region | Revenue | Growth |\n|--------|---------|--------|\n| North America | $2.4M | +12% |\n| Europe | $1.8M | +8% |\n\n## Recommendations\n\n1. Expand APAC sales team\n2. Launch enterprise tier in EU\n\n> *Our best quarter yet.*", + "pageSize": "LETTER", + "theme": "default", + "pdfMetadata": { + "title": "Sales Report - Q4 2025", + "author": "Conductor Workflow" + } + } + } + ] +} +``` + +**Output:** +```json +{ + "result": { + "location": "file:///tmp/conductor/wf-123/task-456/abc.pdf", + "sizeBytes": 12345 + }, + "media": [ + { + "location": "file:///tmp/conductor/wf-123/task-456/abc.pdf", + "mimeType": "application/pdf" + } + ], + "finishReason": "COMPLETED" +} +``` + +### 12. LLM-to-PDF Pipeline (Report Generation) + +A multi-step workflow that uses an LLM to generate a markdown report and then converts it to PDF: + +```json +{ + "name": "llm_to_pdf_pipeline", + "version": 1, + "schemaVersion": 2, + "inputParameters": ["topic", "audience"], + "tasks": [ + { + "name": "generate_report_markdown", + "taskReferenceName": "llm_report", + "type": "LLM_CHAT_COMPLETE", + "inputParameters": { + "llmProvider": "openai", + "model": "gpt-4o-mini", + "messages": [ + { + "role": "system", + "message": "You are a professional report writer. Generate well-structured markdown reports." 
+ }, + { + "role": "user", + "message": "Write a report about: ${workflow.input.topic}\nAudience: ${workflow.input.audience}" + } + ], + "temperature": 0.7, + "maxTokens": 2000 + } + }, + { + "name": "convert_to_pdf", + "taskReferenceName": "pdf_output", + "type": "GENERATE_PDF", + "inputParameters": { + "markdown": "${llm_report.output.result}", + "pageSize": "A4", + "pdfMetadata": { + "title": "${workflow.input.topic}", + "author": "Conductor AI Pipeline" + } + } + } + ], + "outputParameters": { + "reportMarkdown": "${llm_report.output.result}", + "pdfLocation": "${pdf_output.output.result.location}", + "pdfSizeBytes": "${pdf_output.output.result.sizeBytes}" + } +} +``` + +**Workflow Input:** +```json +{ + "topic": "Cloud Migration Best Practices", + "audience": "CTO and engineering leadership" +} +``` + +**Workflow Output:** +```json +{ + "reportMarkdown": "# Cloud Migration Best Practices\n\n## Executive Summary\n...", + "pdfLocation": "file:///tmp/conductor/wf-789/task-012/report.pdf", + "pdfSizeBytes": 28456 +} +``` + +### 13. LLM Tool Calling with MCP Tools Use `LLM_CHAT_COMPLETE` with the `tools` parameter to let the LLM autonomously decide when to call MCP tools. When the LLM needs to use a tool, it returns `finishReason: "TOOL_CALLS"` with the tool invocations. @@ -1457,4 +1641,4 @@ env -u OPENAI_API_KEY -u ANTHROPIC_API_KEY ./gradlew :conductor-ai:test ## License -Copyright 2025 Conductor Authors. Licensed under the Apache License 2.0. +Copyright 2026 Conductor Authors. Licensed under the Apache License 2.0. 
diff --git a/ai/build.gradle b/ai/build.gradle index f75d034f8a..b599f8c515 100644 --- a/ai/build.gradle +++ b/ai/build.gradle @@ -33,6 +33,9 @@ dependencies { api("io.modelcontextprotocol.sdk:mcp:${revMCP}") api "com.squareup.okhttp3:okhttp:4.12.0" + // Markdown parsing for PDF generation + implementation "com.vladsch.flexmark:flexmark-all:${revFlexmark}" + //Document reader and parsers // Source: https://mvnrepository.com/artifact/org.springframework.ai/spring-ai-pdf-document-reader api "org.springframework.ai:spring-ai-pdf-document-reader:${revSpringAI}" diff --git a/ai/examples/15-pdf-generation.json b/ai/examples/15-pdf-generation.json new file mode 100644 index 0000000000..ab2d6ca8ed --- /dev/null +++ b/ai/examples/15-pdf-generation.json @@ -0,0 +1,24 @@ +{ + "name": "pdf_generation_workflow", + "description": "Generate a PDF document from markdown content with custom layout options", + "version": 1, + "schemaVersion": 2, + "tasks": [ + { + "name": "generate_pdf", + "taskReferenceName": "pdf", + "type": "GENERATE_PDF", + "inputParameters": { + "markdown": "# Monthly Sales Report\n\n## Executive Summary\n\nThis report covers sales performance for **Q4 2025**.\n\n| Region | Revenue | Growth |\n|--------|---------|--------|\n| North America | $2.4M | +12% |\n| Europe | $1.8M | +8% |\n| Asia Pacific | $1.2M | +15% |\n\n## Key Highlights\n\n- Total revenue reached **$5.4M**, exceeding target by 10%\n- Customer acquisition increased by *23%* across all regions\n- Product satisfaction score: **4.7/5.0**\n\n## Action Items\n\n1. Expand APAC sales team by Q1 2026\n2. Launch enterprise tier in European market\n3. 
Increase marketing budget for North America\n\n> *\"Our best quarter yet -- the team delivered exceptional results across every metric.\"* -- VP of Sales\n\n---\n\nGenerated by Conductor Workflow Engine", + "pageSize": "LETTER", + "theme": "default", + "baseFontSize": 11, + "pdfMetadata": { + "title": "Monthly Sales Report - Q4 2025", + "author": "Conductor Workflow", + "subject": "Quarterly Sales Performance" + } + } + } + ] +} diff --git a/ai/examples/16-llm-to-pdf-pipeline.json b/ai/examples/16-llm-to-pdf-pipeline.json new file mode 100644 index 0000000000..bd29136656 --- /dev/null +++ b/ai/examples/16-llm-to-pdf-pipeline.json @@ -0,0 +1,51 @@ +{ + "name": "llm_to_pdf_pipeline", + "description": "End-to-end pipeline: LLM generates a markdown report from user input, then converts it to a PDF document", + "version": 1, + "schemaVersion": 2, + "inputParameters": ["topic", "audience"], + "tasks": [ + { + "name": "generate_report_markdown", + "taskReferenceName": "llm_report", + "type": "LLM_CHAT_COMPLETE", + "inputParameters": { + "llmProvider": "openai", + "model": "gpt-4o-mini", + "messages": [ + { + "role": "system", + "message": "You are a professional report writer. Generate well-structured markdown reports with headings, tables, bullet points, and bold/italic emphasis. Always include an executive summary, key findings, and recommendations sections." 
+ }, + { + "role": "user", + "message": "Write a detailed report about: ${workflow.input.topic}\n\nTarget audience: ${workflow.input.audience}\n\nUse markdown formatting with:\n- A clear title (# heading)\n- Executive summary section\n- Key findings with a data table\n- Bullet-point recommendations\n- A blockquote conclusion" + } + ], + "temperature": 0.7, + "maxTokens": 2000 + } + }, + { + "name": "convert_to_pdf", + "taskReferenceName": "pdf_output", + "type": "GENERATE_PDF", + "inputParameters": { + "markdown": "${llm_report.output.result}", + "pageSize": "A4", + "theme": "default", + "baseFontSize": 11, + "pdfMetadata": { + "title": "${workflow.input.topic}", + "author": "Conductor AI Pipeline", + "subject": "Auto-generated report" + } + } + } + ], + "outputParameters": { + "reportMarkdown": "${llm_report.output.result}", + "pdfLocation": "${pdf_output.output.result.location}", + "pdfSizeBytes": "${pdf_output.output.result.sizeBytes}" + } +} diff --git a/ai/examples/README.md b/ai/examples/README.md index 1ca1d4f964..e844177b91 100644 --- a/ai/examples/README.md +++ b/ai/examples/README.md @@ -74,6 +74,8 @@ The server will be available at `http://localhost:3001/mcp`. | `12-video-gemini-veo.json` | Generate video with Google Veo-3 (async) | Google Vertex AI | | `13-image-to-video-pipeline.json` | Image + video generation pipeline | OpenAI | | `14-stabilityai-image.json` | Image generation with Stability AI (SD3.5) | Stability AI | +| `15-pdf-generation.json` | Generate PDF from markdown content | None (built-in) | +| `16-llm-to-pdf-pipeline.json` | LLM generates report → convert to PDF | OpenAI | --- @@ -314,6 +316,36 @@ curl -X POST 'http://localhost:8080/api/workflow/image_gen_stabilityai' \ -d '{}' ``` +### 15. 
PDF Generation (Markdown to PDF) + +```bash +# No external API keys required -- uses built-in PDFBox renderer + +# Register +curl -X POST 'http://localhost:8080/api/metadata/workflow' \ + -H 'Content-Type: application/json' \ + -d @15-pdf-generation.json + +# Execute +curl -X POST 'http://localhost:8080/api/workflow/pdf_generation_workflow' \ + -H 'Content-Type: application/json' \ + -d '{}' +``` + +### 16. LLM-to-PDF Pipeline (Report Generation) + +```bash +# Register +curl -X POST 'http://localhost:8080/api/metadata/workflow' \ + -H 'Content-Type: application/json' \ + -d @16-llm-to-pdf-pipeline.json + +# Execute with a topic and audience +curl -X POST 'http://localhost:8080/api/workflow/llm_to_pdf_pipeline' \ + -H 'Content-Type: application/json' \ + -d '{"topic": "Cloud Migration Best Practices", "audience": "CTO and engineering leadership"}' +``` + --- ## Register All Workflows at Once @@ -377,4 +409,4 @@ CREATE EXTENSION IF NOT EXISTS vector; ## License -Copyright 2025 Conductor Authors. Licensed under the Apache License 2.0. +Copyright 2026 Conductor Authors. Licensed under the Apache License 2.0. 
diff --git a/ai/src/main/java/org/conductoross/conductor/ai/AIModel.java b/ai/src/main/java/org/conductoross/conductor/ai/AIModel.java index 1d14d9a011..9e685fb90e 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/AIModel.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/AIModel.java @@ -60,6 +60,13 @@ enum ConductorTask { */ String getModelProvider(); + /** + * @return alternative provider names that resolve to this same provider + */ + default List<String> getProviderAliases() { + return List.of(); + } + /** * Embedding generation * diff --git a/ai/src/main/java/org/conductoross/conductor/ai/AIModelProvider.java b/ai/src/main/java/org/conductoross/conductor/ai/AIModelProvider.java index aa28136098..02f9f35091 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/AIModelProvider.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/AIModelProvider.java @@ -50,6 +50,9 @@ public AIModelProvider( payloadStoreLocation, result); providerToLLM.put(llm.getModelProvider(), llm); + for (String alias : llm.getProviderAliases()) { + providerToLLM.put(alias, llm); + } } catch (Throwable t) { log.error("cannot init {} model, reason: {}", modelConfiguration, t.getMessage()); } diff --git a/ai/src/main/java/org/conductoross/conductor/ai/LLMHelper.java b/ai/src/main/java/org/conductoross/conductor/ai/LLMHelper.java index 70a329b976..637fbb035a 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/LLMHelper.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/LLMHelper.java @@ -22,6 +22,7 @@ import java.util.Objects; import java.util.Optional; import java.util.Set; +import java.util.UUID; import java.util.function.Consumer; import java.util.stream.Collectors; @@ -38,7 +39,6 @@ import org.conductoross.conductor.ai.models.VideoGenRequest; import org.conductoross.conductor.common.JsonSchemaValidator; import org.conductoross.conductor.common.utils.StringTemplate; -import org.conductoross.conductor.config.AIIntegrationEnabledCondition; import 
org.springframework.ai.chat.client.ChatClient; import org.springframework.ai.chat.messages.AssistantMessage; import org.springframework.ai.chat.messages.Message; @@ -57,8 +57,6 @@ import org.springframework.ai.image.ImageOptions; import org.springframework.ai.image.ImagePrompt; import org.springframework.ai.image.ImageResponse; -import org.springframework.context.annotation.Conditional; -import org.springframework.stereotype.Component; import org.springframework.util.MimeType; import com.netflix.conductor.common.config.ObjectMapperProvider; @@ -84,10 +82,8 @@ import static org.conductoross.conductor.ai.MimeExtensionResolver.getExtension; import static org.conductoross.conductor.ai.MimeExtensionResolver.getMimeTypeFromUrl; -@Component @Slf4j @RequiredArgsConstructor -@Conditional(AIIntegrationEnabledCondition.class) public class LLMHelper { private static final TypeReference<Map<String, Object>> MAP_OF_STRING_TO_OBJ = new TypeReference<>() {}; @@ -377,6 +373,9 @@ private LLMResponse chatComplete( for (AssistantMessage.ToolCall toolCall : toolCalls) { String name = toolCall.name(); String id = toolCall.id(); + if (id == null || id.isBlank()) { + id = UUID.randomUUID().toString(); + } String argsAsString = toolCall.arguments(); Map<String, Object> args = Map.of(); try { @@ -395,12 +394,6 @@ .filter(toolSpec -> toolSpec.getName().equals(name)) .findFirst(); - String integrationName = - (String) - matched.map(ToolSpec::getConfigParams) - .orElse(Collections.emptyMap()) - .get("integrationName"); - args.put("integrationName", integrationName); String type = matched.map(ToolSpec::getType).orElse(TASK_TYPE_SIMPLE); tools.add( ToolCall.builder() @@ -672,32 +665,32 @@ private Media getMedia(String mimeType, String content) { private void storeMedia( String location, List<Media> media) { - Optional<DocumentLoader> docLoader = + + DocumentLoader documentLoader = documentLoaders.stream() - .filter(documentLoader -> documentLoader.supports(location)) - .findFirst(); - docLoader.ifPresent( - loader -> { 
- media.stream() - .filter(m1 -> m1.getData() != null) - .forEach( - m -> { - // Each media item gets a unique path with file extension - // to prevent overwriting when multiple items exist - // (e.g., video + thumbnail) - String ext = getExtension(m.getMimeType()); - String uniqueLocation = - location + "_" + java.util.UUID.randomUUID() + ext; - String uploadLocation = - loader.upload( - Map.of(), - m.getMimeType(), - m.getData(), - uniqueLocation); - m.setLocation(uploadLocation); - m.setData(null); - }); - }); + .filter(loader -> loader.supports(location)) + .findFirst() + .orElse(null); + if (documentLoader == null) { + log.debug("no document loaders found, media will not be stored"); + return; + } + media.stream() + .filter(m1 -> m1.getData() != null) + .forEach( + m -> { + // Each media item gets a unique path with file extension + // to prevent overwriting when multiple items exist + // (e.g., video + thumbnail) + String ext = getExtension(m.getMimeType()); + String uniqueLocation = + location + "_" + java.util.UUID.randomUUID() + ext; + String uploadLocation = + documentLoader.upload( + Map.of(), m.getMimeType(), m.getData(), uniqueLocation); + m.setLocation(uploadLocation); + m.setData(null); + }); } /** diff --git a/ai/src/main/java/org/conductoross/conductor/ai/document/DocumentAccessDeniedException.java b/ai/src/main/java/org/conductoross/conductor/ai/document/DocumentAccessDeniedException.java new file mode 100644 index 0000000000..e245300b0a --- /dev/null +++ b/ai/src/main/java/org/conductoross/conductor/ai/document/DocumentAccessDeniedException.java @@ -0,0 +1,24 @@ +/* + * Copyright 2026 Conductor Authors. + *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.document; + +/** + * Thrown when a document access request is blocked by {@link DocumentAccessPolicy} due to the + * location matching a blocked path, file name, or host. + */ +public class DocumentAccessDeniedException extends SecurityException { + + public DocumentAccessDeniedException(String message) { + super(message); + } +} diff --git a/ai/src/main/java/org/conductoross/conductor/ai/document/DocumentAccessPolicy.java b/ai/src/main/java/org/conductoross/conductor/ai/document/DocumentAccessPolicy.java new file mode 100644 index 0000000000..986c3d773a --- /dev/null +++ b/ai/src/main/java/org/conductoross/conductor/ai/document/DocumentAccessPolicy.java @@ -0,0 +1,468 @@ +/* + * Copyright 2026 Conductor Authors. + *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.document; + +import java.net.InetAddress; +import java.net.URI; +import java.nio.file.Path; +import java.util.ArrayList; +import java.util.List; + +import org.springframework.boot.context.properties.ConfigurationProperties; +import org.springframework.core.env.Environment; +import org.springframework.stereotype.Component; + +import jakarta.annotation.PostConstruct; +import lombok.Getter; +import lombok.Setter; +import lombok.extern.slf4j.Slf4j; + +/** + * Enforces access restrictions on document locations to prevent reading or writing sensitive files. + * Blocks well-known sensitive paths on local filesystems and cloud metadata endpoints by default. + * Configurable via {@code conductor.document-access-policy.*} properties. + * + *

By default, the allowed directory list is derived from {@code + * conductor.file-storage.parentDir} so that document workers and the access policy share a single + * configuration. Additional directories can be added via {@code + * conductor.document-access-policy.allowed-directories}. + */ +@Slf4j +@Component +@ConfigurationProperties(prefix = "conductor.document-access-policy") +@Getter +@Setter +public class DocumentAccessPolicy { + + private final Environment env; + + public DocumentAccessPolicy(Environment env) { + this.env = env; + } + + /** + * Additional allowed directory prefixes for local filesystem access, beyond the directory + * configured by {@code conductor.file-storage.parentDir} (which is always included + * automatically). When the effective allowed list is non-empty, only paths that fall + * under one of those directories are permitted — all others are denied regardless of the + * blocklist. + * + *

Example: {@code /tmp/imports/,/data/shared/} adds those trees in addition to the + * file-storage parentDir. Supports {@code ~/} expansion and environment variable defaults via + * Spring. + */ + private List<String> allowedDirectories = List.of(); + + /** Additional path prefixes to block (merged with built-in defaults). */ + private List<String> blockedPathPrefixes = List.of(); + + /** Additional file name patterns to block (exact match, case-insensitive). */ + private List<String> blockedFileNames = List.of(); + + /** Additional hostname/IP patterns to block for HTTP-based loaders. */ + private List<String> blockedHosts = List.of(); + + /** Set to true to disable all access checks (not recommended for production). */ + private boolean disabled = false; + + /** + * Effective allowed directories resolved at startup: the file-storage parentDir (if configured) + * plus any additional entries from {@link #allowedDirectories}. + */ + private List<String> effectiveAllowedDirectories; + + @PostConstruct + void resolveEffectiveAllowedDirectories() { + List<String> dirs = new ArrayList<>(); + + // Always include the file-storage parentDir — this is where workers store output + String parentDir = env.getProperty("conductor.file-storage.parentDir"); + if (parentDir != null && !parentDir.isBlank()) { + dirs.add(parentDir); + } else { + // Match the default used by AIModelProvider when the property is not set + dirs.add(System.getProperty("user.home") + "/worker-payload/"); + } + + if (allowedDirectories != null) { + dirs.addAll(allowedDirectories); + } + + effectiveAllowedDirectories = List.copyOf(dirs); + + log.info( + "Document access policy effective allowed directories: {}", + effectiveAllowedDirectories); + } + + // --- Built-in blocked path prefixes (local filesystem) --- + private static final List<String> DEFAULT_BLOCKED_PATH_PREFIXES = + List.of( + // ---- Linux system credentials & auth ---- + "/etc/passwd", + "/etc/shadow", + "/etc/gshadow", + "/etc/master.passwd", + "/etc/sudoers", + "/etc/sudoers.d/", + 
"/etc/pam.d/", + "/etc/login.defs", + "/etc/krb5.keytab", + "/etc/security/", + // ---- SSH & TLS ---- + "/etc/ssh/", + "/etc/ssl/", + "/etc/pki/", + // ---- Kernel / process / device ---- + "/proc/", + "/sys/", + "/dev/", + // ---- Root home ---- + "/root/", + // ---- Logs (may leak tokens, IPs, credentials) ---- + "/var/log/", + // ---- Scheduled tasks ---- + "/var/spool/cron/", + // ---- Container & orchestration secrets ---- + "/var/run/secrets/", // Kubernetes service-account tokens + "/run/secrets/", // Docker secrets mount + "/var/run/docker.sock", // Docker socket — full host control + "/etc/kubernetes/", // Node-level K8s configs & PKI + // ---- macOS system paths ---- + "/private/etc/", // macOS symlink to /etc + "/private/var/db/dslocal/", // macOS local directory service + "~/Library/Keychains/", // macOS user keychains + "/Library/Keychains/", // macOS system keychain + // ---- Windows sensitive paths (forward-slash notation) ---- + "C:/Windows/System32/config/", // SAM, SYSTEM, SECURITY hives + "C:/Windows/repair/", // Backup registry hives + "C:/Windows/Panther/", // Unattend.xml with plaintext passwords + "C:/Windows/System32/sysprep/", // Sysprep unattend files + "C:/inetpub/", // IIS web root + // ---- User-home dotfiles — credentials & secrets ---- + "~/.ssh/", + "~/.gnupg/", + "~/.aws/", + "~/.azure/", + "~/.config/gcloud/", + "~/.oci/", // Oracle Cloud CLI + "~/.kube/", + "~/.docker/", + "~/.config/gh/", // GitHub CLI tokens + "~/.m2/", // Maven settings (server credentials) + "~/.gradle/", // Gradle properties (signing keys, repo creds) + "~/.cargo/", // Cargo registry credentials + "~/.gem/", // RubyGems credentials + "~/.terraform.d/", // Terraform credentials + "~/.vault-token", // HashiCorp Vault token + "~/.npmrc", + "~/.yarnrc", + "~/.pypirc", // PyPI upload credentials + "~/.netrc", + "~/.gitconfig", + "~/.git-credentials", + "~/.bash_history", + "~/.zsh_history", + "~/.boto", // Legacy AWS/GCS credentials + "~/.s3cfg" // s3cmd 
credentials + ); + + // --- Built-in blocked file names (case-insensitive, matched against last path component) --- + private static final List DEFAULT_BLOCKED_FILE_NAMES = + List.of( + // ---- Environment / dotenv files ---- + ".env", + ".env.local", + ".env.development", + ".env.development.local", + ".env.staging", + ".env.test", + ".env.production", + ".env.production.local", + ".env.backup", + ".env.bak", + ".flaskenv", + // ---- Web server auth ---- + ".htpasswd", + // ---- Database credentials ---- + ".pgpass", + ".my.cnf", + ".mylogin.cnf", + // ---- Git credentials ---- + ".git-credentials", + ".gitconfig", + // ---- SSH keys & auth ---- + "id_rsa", + "id_ed25519", + "id_ecdsa", + "id_dsa", + "authorized_keys", + "known_hosts", + // ---- TLS / signing keys & keystores ---- + "private.pem", + "private.key", + "server.key", + "keystore.jks", + "truststore.jks", + "cacerts", + // ---- Cloud credentials ---- + "credentials", + "credentials.json", + "credentials.db", + "service-account.json", + "application_default_credentials.json", // GCP ADC + "accessTokens.json", // Azure CLI tokens + "msal_token_cache.json", // Azure MSAL + // ---- Infrastructure-as-code secrets ---- + "terraform.tfstate", + "terraform.tfstate.backup", + "terraform.tfvars", + ".vault-token", + // ---- Build tool credentials ---- + "settings.xml", // Maven + "settings-security.xml", // Maven + "gradle.properties", + ".npmrc", + ".pypirc", + // ---- Framework configs with secrets ---- + "master.key", // Rails master key + "web.config", // IIS / ASP.NET + // ---- Windows system files ---- + "SAM", + "SYSTEM", + "SECURITY", + "Unattend.xml", + "autounattend.xml", + "ConsoleHost_history.txt", // PowerShell history + // ---- macOS ---- + "login.keychain-db", + // ---- Docker ---- + ".dockercfg" // Legacy Docker registry auth + ); + + // --- Built-in blocked hosts (cloud metadata services) --- + private static final List DEFAULT_BLOCKED_HOSTS = + List.of( + "169.254.169.254", // AWS / GCP / Azure 
/ OCI / DO / Hetzner / OpenStack + "169.254.170.2", // AWS ECS container credentials + "metadata.google.internal", // GCP metadata + "metadata.internal", // GCP alias + "100.100.100.200", // Alibaba Cloud metadata + "instance-data.ec2.internal", // AWS metadata DNS alias + "fd00:ec2::254", // AWS IPv6 metadata + "kubernetes.default", // K8s in-cluster API + "kubernetes.default.svc", + "kubernetes.default.svc.cluster.local"); + + /** + * Validates that the given location is safe to access. Throws {@link + * DocumentAccessDeniedException} if blocked. + */ + public void validateAccess(String location) { + if (disabled) { + return; + } + + String normalized = normalizeLocation(location); + + checkBlockedPaths(normalized); + checkBlockedFileNames(normalized); + checkBlockedHosts(location); + checkPathTraversal(normalized); + checkAllowedDirectories(location, normalized); + } + + private void checkBlockedPaths(String normalizedPath) { + for (String prefix : DEFAULT_BLOCKED_PATH_PREFIXES) { + String expandedPrefix = expandHome(prefix); + if (normalizedPath.startsWith(expandedPrefix)) { + throw new DocumentAccessDeniedException( + "Access denied: path matches blocked prefix '" + prefix + "'"); + } + } + for (String prefix : blockedPathPrefixes) { + String expandedPrefix = expandHome(prefix); + if (normalizedPath.startsWith(expandedPrefix)) { + throw new DocumentAccessDeniedException( + "Access denied: path matches blocked prefix '" + prefix + "'"); + } + } + } + + private void checkBlockedFileNames(String normalizedPath) { + String fileName = extractFileName(normalizedPath); + if (fileName == null || fileName.isEmpty()) { + return; + } + String lowerFileName = fileName.toLowerCase(); + + for (String blocked : DEFAULT_BLOCKED_FILE_NAMES) { + if (lowerFileName.equals(blocked.toLowerCase())) { + throw new DocumentAccessDeniedException( + "Access denied: file name '" + fileName + "' is blocked"); + } + } + for (String blocked : blockedFileNames) { + if 
(lowerFileName.equals(blocked.toLowerCase())) { + throw new DocumentAccessDeniedException( + "Access denied: file name '" + fileName + "' is blocked"); + } + } + } + + private void checkBlockedHosts(String location) { + String host = extractHost(location); + if (host == null || host.isEmpty()) { + return; + } + String lowerHost = host.toLowerCase(); + + // Check against explicit blocklist + for (String blocked : DEFAULT_BLOCKED_HOSTS) { + if (lowerHost.equals(blocked.toLowerCase())) { + throw new DocumentAccessDeniedException( + "Access denied: host '" + host + "' is blocked"); + } + } + for (String blocked : blockedHosts) { + if (lowerHost.equals(blocked.toLowerCase())) { + throw new DocumentAccessDeniedException( + "Access denied: host '" + host + "' is blocked"); + } + } + + // Resolve hostname to IP and check for link-local / metadata ranges. + // This catches obfuscated IPs (hex, octal, decimal encoding) and DNS + // rebinding because InetAddress.getByName normalizes all representations. + checkResolvedAddress(host); + } + + /** + * Resolves the host to an IP address and blocks link-local (169.254.0.0/16) and other dangerous + * ranges that are commonly used for SSRF against cloud metadata services. 
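The resolved-address rule can be sketched in isolation. `MetadataGuardSketch` and `isBlocked` are hypothetical names for illustration; the real policy additionally consults the built-in and configured host blocklists before resolving:

```java
import java.net.InetAddress;

public class MetadataGuardSketch {
    // Sketch of the resolved-address rule: block hosts that resolve into the
    // link-local range (169.254.0.0/16, where cloud metadata services live)
    // or the loopback range. InetAddress.getByName normalizes alternate IP
    // spellings, so obfuscated encodings of the same address are caught too.
    static boolean isBlocked(String host) {
        try {
            InetAddress addr = InetAddress.getByName(host);
            return addr.isLinkLocalAddress() || addr.isLoopbackAddress();
        } catch (Exception e) {
            return false; // unresolvable: let the request fail naturally downstream
        }
    }

    public static void main(String[] args) {
        System.out.println(isBlocked("169.254.169.254")); // true (cloud metadata)
        System.out.println(isBlocked("127.0.0.1"));       // true (loopback)
        System.out.println(isBlocked("10.1.2.3"));        // false (private, not metadata)
    }
}
```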
+ */ + private void checkResolvedAddress(String host) { + try { + InetAddress addr = InetAddress.getByName(host); + if (addr.isLinkLocalAddress()) { + throw new DocumentAccessDeniedException( + "Access denied: link-local address range is blocked (host resolves to " + + addr.getHostAddress() + + ")"); + } + if (addr.isLoopbackAddress()) { + throw new DocumentAccessDeniedException( + "Access denied: loopback address is blocked (host resolves to " + + addr.getHostAddress() + + ")"); + } + } catch (DocumentAccessDeniedException e) { + throw e; + } catch (Exception e) { + // DNS resolution failure — allow the request to proceed and fail naturally + log.debug( + "Could not resolve host '{}' for access policy check: {}", + host, + e.getMessage()); + } + } + + private void checkPathTraversal(String normalizedPath) { + if (normalizedPath.contains("/../") + || normalizedPath.endsWith("/..") + || normalizedPath.startsWith("../")) { + throw new DocumentAccessDeniedException( + "Access denied: path traversal sequences are not allowed"); + } + } + + /** + * Only local filesystem paths under the effective allowed directories (file-storage parentDir + + * any additional configured directories) are permitted. HTTP/HTTPS URLs are not subject to this + * check. + */ + private void checkAllowedDirectories(String originalLocation, String normalizedPath) { + List dirs = effectiveAllowedDirectories; + if (dirs == null || dirs.isEmpty()) { + return; + } + // Only apply to local filesystem paths, not HTTP URLs + if (originalLocation.startsWith("http://") || originalLocation.startsWith("https://")) { + return; + } + + for (String dir : dirs) { + String expandedDir = expandHome(dir.endsWith("/") ? dir : dir + "/"); + if (normalizedPath.startsWith(expandedDir) || normalizedPath.equals(expandedDir)) { + return; // Path is within an allowed directory + } + } + + throw new DocumentAccessDeniedException( + "Access denied: path is not under any allowed directory. 
" + + "Allowed directories: " + + dirs); + } + + private String normalizeLocation(String location) { + // Strip file:// scheme + String path = location; + if (path.startsWith("file://")) { + path = path.substring(7); + } + + // For HTTP URLs, extract the path component + if (path.startsWith("http://") || path.startsWith("https://")) { + try { + URI uri = URI.create(path); + return uri.getPath() != null ? uri.getPath() : ""; + } catch (Exception e) { + return path; + } + } + + // Resolve to absolute path to catch traversal attacks + try { + return Path.of(path).normalize().toString(); + } catch (Exception e) { + return path; + } + } + + private String expandHome(String path) { + if (path.startsWith("~/")) { + return System.getProperty("user.home") + path.substring(1); + } + return path; + } + + private String extractFileName(String path) { + int lastSlash = path.lastIndexOf('/'); + if (lastSlash >= 0 && lastSlash < path.length() - 1) { + return path.substring(lastSlash + 1); + } + return path; + } + + private String extractHost(String location) { + try { + if (location.startsWith("http://") || location.startsWith("https://")) { + URI uri = URI.create(location); + return uri.getHost(); + } + } catch (Exception e) { + // ignore + } + return null; + } +} diff --git a/ai/src/main/java/org/conductoross/conductor/ai/document/FileSystemDocumentLoader.java b/ai/src/main/java/org/conductoross/conductor/ai/document/FileSystemDocumentLoader.java index e7db29d5ca..d1a1d5090f 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/document/FileSystemDocumentLoader.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/document/FileSystemDocumentLoader.java @@ -29,14 +29,21 @@ @Component @ConditionalOnProperty( - value = "conductor.worker.document-loader.file-based.enabled", - havingValue = "true", + value = "conductor.worker.document-loader.type", + havingValue = "file", matchIfMissing = true) @Slf4j public class FileSystemDocumentLoader implements DocumentLoader { + 
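Before any prefix comparison, the policy strips the `file://` scheme, expands `~/`, and collapses traversal segments with `Path.normalize()`. A minimal standalone sketch of those steps (the class and method names here are hypothetical):

```java
import java.nio.file.Path;

public class PathNormalizeSketch {
    // Simplified stand-in for the policy's expandHome + normalizeLocation steps.
    static String normalize(String location) {
        String path = location.startsWith("file://") ? location.substring(7) : location;
        if (path.startsWith("~/")) {
            path = System.getProperty("user.home") + path.substring(1);
        }
        // Collapse "." and ".." segments so prefix checks see the real target
        return Path.of(path).normalize().toString();
    }

    public static void main(String[] args) {
        // "/tmp/safe/../../etc/passwd" collapses to "/etc/passwd",
        // which the blocked-prefix check then rejects.
        System.out.println(normalize("file:///tmp/safe/../../etc/passwd")); // /etc/passwd
    }
}
```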
+    private final DocumentAccessPolicy accessPolicy;
+
+    public FileSystemDocumentLoader(DocumentAccessPolicy accessPolicy) {
+        this.accessPolicy = accessPolicy;
+    }
+
     @Override
     public byte[] download(String location) {
+        accessPolicy.validateAccess(location);
         try {
             return Files.readAllBytes(Path.of(location.replace("file://", "")));
@@ -53,8 +60,10 @@ public String upload(
             if (data == null) {
                 return null;
             }
+            accessPolicy.validateAccess(fileURI);
             Path path = Path.of(fileURI.replace("file://", ""));
             var result = path.toFile().getParentFile().mkdirs();
+            log.info("writing to {}", path);
             Files.write(path, data);
             return "file://" + path.toAbsolutePath().toString();
         } catch (IOException e) {
@@ -73,6 +82,7 @@ public String upload(
             if (data == null) {
                 return null;
             }
+            accessPolicy.validateAccess(fileURI);
             Path path = Path.of(fileURI.replace("file://", ""));
             path.toFile().getParentFile().mkdirs();
             Files.copy(data, path, StandardCopyOption.REPLACE_EXISTING);
@@ -84,6 +94,7 @@

     @Override
     public List<String> listFiles(String location) {
+        accessPolicy.validateAccess(location);
         try (Stream<Path> paths = Files.list(Path.of(new URI(location)))) {
             return paths.map(path -> path.toUri().toString()).toList();
         } catch (Exception e) {
@@ -93,6 +104,7 @@ public List<String> listFiles(String location) {

     @Override
     public boolean supports(String location) {
-        return location.startsWith("file://");
+        // either a file:// URI or a plain path with no URI scheme
+        return location.startsWith("file://") || !location.contains("://");
     }
 }
diff --git a/ai/src/main/java/org/conductoross/conductor/ai/document/HttpDocumentLoader.java b/ai/src/main/java/org/conductoross/conductor/ai/document/HttpDocumentLoader.java
index e290448575..0b46e6b1c2 100644
--- a/ai/src/main/java/org/conductoross/conductor/ai/document/HttpDocumentLoader.java
+++ b/ai/src/main/java/org/conductoross/conductor/ai/document/HttpDocumentLoader.java
@@ -40,8 +40,10 @@ public class HttpDocumentLoader implements DocumentLoader {
     private static final int MAX_DEPTH = 1; // Specify the depth limit

     private final OkHttpClient httpClient;
+    private final DocumentAccessPolicy accessPolicy;

-    public HttpDocumentLoader() {
+    public HttpDocumentLoader(DocumentAccessPolicy accessPolicy) {
+        this.accessPolicy = accessPolicy;
         this.httpClient =
                 new OkHttpClient.Builder()
                         .connectTimeout(30, TimeUnit.SECONDS)
@@ -53,6 +55,7 @@ public HttpDocumentLoader() {
     @SuppressWarnings("unchecked")
     @Override
     public byte[] download(String location) {
+        accessPolicy.validateAccess(location);
         try {
             Map<String, String> headers =
                     (Map<String, String>) TaskContext.get().getTask().getInputData().get("headers");
@@ -79,6 +82,7 @@ public String upload(
         if (fileURI == null) {
             return null;
         }
+        accessPolicy.validateAccess(fileURI);
         Input input = new Input();
         input.getHeaders().putAll(headers);
         input.setMethod("POST");
diff --git a/ai/src/main/java/org/conductoross/conductor/ai/models/MarkdownToPdfRequest.java b/ai/src/main/java/org/conductoross/conductor/ai/models/MarkdownToPdfRequest.java
new file mode 100644
index 0000000000..6562dde62d
--- /dev/null
+++ b/ai/src/main/java/org/conductoross/conductor/ai/models/MarkdownToPdfRequest.java
@@ -0,0 +1,66 @@
+/*
+ * Copyright 2026 Conductor Authors.
+ *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.models; + +import java.util.Map; + +import lombok.AllArgsConstructor; +import lombok.Builder; +import lombok.Data; +import lombok.EqualsAndHashCode; +import lombok.NoArgsConstructor; + +/** Request model for the GENERATE_PDF system task that converts markdown text to PDF. */ +@Data +@Builder +@NoArgsConstructor +@AllArgsConstructor +@EqualsAndHashCode(callSuper = false) +public class MarkdownToPdfRequest extends LLMWorkerInput { + + /** The markdown text to convert to PDF. Required. */ + private String markdown; + + /** Page size: A4, LETTER, LEGAL. Default: A4. */ + @Builder.Default private String pageSize = "A4"; + + /** Top margin in points (72pt = 1 inch). Default: 72. */ + @Builder.Default private float marginTop = 72f; + + /** Right margin in points. Default: 72. */ + @Builder.Default private float marginRight = 72f; + + /** Bottom margin in points. Default: 72. */ + @Builder.Default private float marginBottom = 72f; + + /** Left margin in points. Default: 72. */ + @Builder.Default private float marginLeft = 72f; + + /** Built-in style preset: "default", "compact". Default: "default". */ + @Builder.Default private String theme = "default"; + + /** Base font size in points. Default: 11. */ + @Builder.Default private float baseFontSize = 11f; + + /** + * Output location URI for the generated PDF. e.g., "file:///tmp/output.pdf". If null, uses the + * default payload store location. + */ + private String outputLocation; + + /** Optional metadata to embed in the PDF (title, author, subject, keywords). */ + private Map pdfMetadata; + + /** Base URL for resolving relative image paths in the markdown. 
*/ + private String imageBaseUrl; +} diff --git a/ai/src/main/java/org/conductoross/conductor/ai/pdf/MarkdownToPdfConverter.java b/ai/src/main/java/org/conductoross/conductor/ai/pdf/MarkdownToPdfConverter.java new file mode 100644 index 0000000000..e1e0b6464d --- /dev/null +++ b/ai/src/main/java/org/conductoross/conductor/ai/pdf/MarkdownToPdfConverter.java @@ -0,0 +1,153 @@ +/* + * Copyright 2026 Conductor Authors. + *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.pdf; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.util.List; +import java.util.Map; + +import org.apache.pdfbox.pdmodel.PDDocument; +import org.apache.pdfbox.pdmodel.PDDocumentInformation; +import org.apache.pdfbox.pdmodel.common.PDRectangle; +import org.conductoross.conductor.ai.models.MarkdownToPdfRequest; +import org.conductoross.conductor.config.AIIntegrationEnabledCondition; +import org.springframework.context.annotation.Conditional; +import org.springframework.stereotype.Component; + +import com.vladsch.flexmark.ext.autolink.AutolinkExtension; +import com.vladsch.flexmark.ext.definition.DefinitionExtension; +import com.vladsch.flexmark.ext.footnotes.FootnoteExtension; +import com.vladsch.flexmark.ext.gfm.strikethrough.StrikethroughExtension; +import com.vladsch.flexmark.ext.gfm.tasklist.TaskListExtension; +import com.vladsch.flexmark.ext.tables.TablesExtension; +import com.vladsch.flexmark.parser.Parser; +import com.vladsch.flexmark.util.ast.Document; +import com.vladsch.flexmark.util.data.MutableDataSet; +import lombok.extern.slf4j.Slf4j; + +/** + * Orchestrates the conversion of Markdown text to PDF. Parses markdown using flexmark-java, then + * renders the AST to PDF using Apache PDFBox via {@link PdfDocumentRenderer}. + */ +@Component +@Conditional(AIIntegrationEnabledCondition.class) +@Slf4j +public class MarkdownToPdfConverter { + + private final PdfImageResolver imageResolver; + + public MarkdownToPdfConverter(PdfImageResolver imageResolver) { + this.imageResolver = imageResolver; + } + + /** + * Converts markdown text to a PDF byte array. 
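The margin and font-size fields of `MarkdownToPdfRequest` are expressed in PostScript points, where 72 pt equals one inch. A quick arithmetic check (hypothetical helper, not part of the converter) shows what the defaults leave for text on LETTER paper:

```java
public class PointMathSketch {
    static final float POINTS_PER_INCH = 72f;

    // Width in points left for text after subtracting both horizontal margins.
    static float contentWidth(float pageWidthInches, float marginLeftPt, float marginRightPt) {
        return pageWidthInches * POINTS_PER_INCH - marginLeftPt - marginRightPt;
    }

    public static void main(String[] args) {
        // LETTER is 8.5in (612pt) wide; the default 72pt margins leave 468pt.
        System.out.println(contentWidth(8.5f, 72f, 72f)); // 468.0
    }
}
```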
+ * + * @param request the conversion request containing markdown and layout options + * @return the generated PDF as a byte array + */ + public byte[] convert(MarkdownToPdfRequest request) { + if (request.getMarkdown() == null) { + throw new IllegalArgumentException("markdown content must not be null"); + } + + // Step 1: Parse markdown to AST + Document markdownAst = parseMarkdown(request.getMarkdown()); + + // Step 2: Create PDF document + try (PDDocument document = new PDDocument()) { + // Step 3: Configure page size + PDRectangle pageSize = resolvePageSize(request.getPageSize()); + + // Step 4: Set PDF metadata + setMetadata(document, request.getPdfMetadata()); + + // Step 5: Create render context + boolean compact = "compact".equalsIgnoreCase(request.getTheme()); + PdfRenderContext ctx = + new PdfRenderContext( + document, + pageSize, + request.getMarginTop(), + request.getMarginRight(), + request.getMarginBottom(), + request.getMarginLeft(), + request.getBaseFontSize(), + compact); + + // Step 6: Render AST to PDF + PdfDocumentRenderer renderer = + new PdfDocumentRenderer(ctx, imageResolver, request.getImageBaseUrl()); + renderer.render(markdownAst); + + // Step 7: Close the last content stream + if (ctx.getContentStream() != null) { + ctx.getContentStream().close(); + } + + // Step 8: Save to bytes + ByteArrayOutputStream out = new ByteArrayOutputStream(); + document.save(out); + return out.toByteArray(); + + } catch (IOException e) { + throw new RuntimeException("Failed to generate PDF from markdown", e); + } + } + + private Document parseMarkdown(String markdown) { + MutableDataSet options = new MutableDataSet(); + options.set( + Parser.EXTENSIONS, + List.of( + TablesExtension.create(), + StrikethroughExtension.create(), + TaskListExtension.create(), + FootnoteExtension.create(), + AutolinkExtension.create(), + DefinitionExtension.create())); + + Parser parser = Parser.builder(options).build(); + return parser.parse(markdown); + } + + private PDRectangle 
resolvePageSize(String pageSize) { + if (pageSize == null) return PDRectangle.A4; + return switch (pageSize.toUpperCase()) { + case "LETTER" -> PDRectangle.LETTER; + case "LEGAL" -> PDRectangle.LEGAL; + case "A3" -> new PDRectangle(841.89f, 1190.55f); + case "A5" -> new PDRectangle(419.53f, 595.28f); + default -> PDRectangle.A4; + }; + } + + private void setMetadata(PDDocument document, Map pdfMetadata) { + if (pdfMetadata == null || pdfMetadata.isEmpty()) return; + + PDDocumentInformation info = document.getDocumentInformation(); + if (pdfMetadata.containsKey("title")) { + info.setTitle(pdfMetadata.get("title")); + } + if (pdfMetadata.containsKey("author")) { + info.setAuthor(pdfMetadata.get("author")); + } + if (pdfMetadata.containsKey("subject")) { + info.setSubject(pdfMetadata.get("subject")); + } + if (pdfMetadata.containsKey("keywords")) { + info.setKeywords(pdfMetadata.get("keywords")); + } + } +} diff --git a/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfDocumentRenderer.java b/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfDocumentRenderer.java new file mode 100644 index 0000000000..f996b646d2 --- /dev/null +++ b/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfDocumentRenderer.java @@ -0,0 +1,792 @@ +/* + * Copyright 2026 Conductor Authors. + *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.pdf; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; + +import org.apache.pdfbox.pdmodel.PDPageContentStream; +import org.apache.pdfbox.pdmodel.common.PDRectangle; +import org.apache.pdfbox.pdmodel.font.PDType1Font; +import org.apache.pdfbox.pdmodel.graphics.image.PDImageXObject; +import org.apache.pdfbox.pdmodel.interactive.action.PDActionURI; +import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationLink; + +import com.vladsch.flexmark.ast.*; +import com.vladsch.flexmark.ext.footnotes.Footnote; +import com.vladsch.flexmark.ext.gfm.strikethrough.Strikethrough; +import com.vladsch.flexmark.ext.gfm.tasklist.TaskListItem; +import com.vladsch.flexmark.ext.tables.*; +import com.vladsch.flexmark.util.ast.Document; +import com.vladsch.flexmark.util.ast.Node; +import lombok.extern.slf4j.Slf4j; + +/** + * Walks the flexmark markdown AST and renders each node type to PDF using Apache PDFBox. Handles + * word wrapping, page breaks, inline formatting, tables, images, lists, code blocks, blockquotes, + * and links. + */ +@Slf4j +public class PdfDocumentRenderer { + + private final PdfRenderContext ctx; + private final PdfImageResolver imageResolver; + private final String imageBaseUrl; + + public PdfDocumentRenderer( + PdfRenderContext ctx, PdfImageResolver imageResolver, String imageBaseUrl) { + this.ctx = ctx; + this.imageResolver = imageResolver; + this.imageBaseUrl = imageBaseUrl; + } + + /** Renders the entire markdown document AST to PDF. 
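The renderer's central job is fitting inline text runs onto lines and pages. The real implementation measures string widths with PDFBox font metrics; the sketch below substitutes a fixed character budget per line (class name and budget are hypothetical stand-ins) to show the greedy wrapping shape:

```java
import java.util.ArrayList;
import java.util.List;

public class WordWrapSketch {
    // Greedy word wrap: emit the current line as soon as the next word
    // (plus a separating space) would exceed the per-line budget.
    static List<String> wrap(String text, int maxChars) {
        List<String> lines = new ArrayList<>();
        StringBuilder line = new StringBuilder();
        for (String word : text.split("\\s+")) {
            if (line.length() > 0 && line.length() + 1 + word.length() > maxChars) {
                lines.add(line.toString());
                line.setLength(0);
            }
            if (line.length() > 0) line.append(' ');
            line.append(word);
        }
        if (line.length() > 0) lines.add(line.toString());
        return lines;
    }

    public static void main(String[] args) {
        System.out.println(wrap("walks the AST and wraps words onto lines", 16));
    }
}
```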
*/ + public void render(Document document) throws IOException { + ctx.newPage(); + renderChildren(document); + } + + private void renderChildren(Node parent) throws IOException { + Node child = parent.getFirstChild(); + while (child != null) { + renderNode(child); + child = child.getNext(); + } + } + + private void renderNode(Node node) throws IOException { + if (node instanceof Heading) { + renderHeading((Heading) node); + } else if (node instanceof Paragraph) { + renderParagraph((Paragraph) node); + } else if (node instanceof BulletList) { + renderBulletList((BulletList) node); + } else if (node instanceof OrderedList) { + renderOrderedList((OrderedList) node); + } else if (node instanceof FencedCodeBlock) { + renderFencedCodeBlock((FencedCodeBlock) node); + } else if (node instanceof IndentedCodeBlock) { + renderIndentedCodeBlock((IndentedCodeBlock) node); + } else if (node instanceof BlockQuote) { + renderBlockQuote((BlockQuote) node); + } else if (node instanceof ThematicBreak) { + renderThematicBreak(); + } else if (node instanceof TableBlock) { + renderTable((TableBlock) node); + } else if (node instanceof HtmlBlock) { + renderHtmlBlock((HtmlBlock) node); + } else { + // Unknown block-level node: try rendering children + renderChildren(node); + } + } + + // ======================================================================== + // Block-level rendering + // ======================================================================== + + private void renderHeading(Heading heading) throws IOException { + float scale = + switch (heading.getLevel()) { + case 1 -> 2.0f; + case 2 -> 1.6f; + case 3 -> 1.3f; + case 4 -> 1.15f; + case 5 -> 1.0f; + default -> 0.9f; + }; + + float fontSize = ctx.getBaseFontSize() * scale; + float spacing = ctx.isCompact() ? 
fontSize * 0.5f : fontSize * 0.8f; + + ctx.ensureSpace(fontSize + spacing * 2); + ctx.advanceCursor(spacing); + + List runs = collectTextRuns(heading); + // Force bold for headings + for (TextRun run : runs) { + run.bold = true; + } + renderTextRuns(runs, fontSize); + + // Draw underline for h1 and h2 + if (heading.getLevel() <= 2) { + ctx.advanceCursor(3f); + PDPageContentStream cs = ctx.getContentStream(); + cs.setStrokingColor(0.85f, 0.85f, 0.85f); + cs.setLineWidth(0.5f); + cs.moveTo(ctx.getLeftX(), ctx.getCursorY()); + cs.lineTo(ctx.getRightX(), ctx.getCursorY()); + cs.stroke(); + ctx.advanceCursor(3f); + } + + ctx.advanceCursor(spacing * 0.5f); + } + + private void renderParagraph(Paragraph paragraph) throws IOException { + // Check if paragraph contains only an image + if (paragraph.getFirstChild() instanceof Image + && paragraph.getFirstChild() == paragraph.getLastChild()) { + renderImage((Image) paragraph.getFirstChild()); + return; + } + + float fontSize = ctx.getBaseFontSize(); + float spacing = ctx.isCompact() ? fontSize * 0.3f : fontSize * 0.5f; + + ctx.ensureSpace(ctx.getLineHeight(fontSize) + spacing); + ctx.advanceCursor(spacing); + + List runs = collectTextRuns(paragraph); + renderTextRuns(runs, fontSize); + + ctx.advanceCursor(spacing); + } + + private void renderBulletList(BulletList list) throws IOException { + float spacing = ctx.isCompact() ? 2f : 4f; + ctx.advanceCursor(spacing); + ctx.setListIndentLevel(ctx.getListIndentLevel() + 1); + + Node item = list.getFirstChild(); + while (item != null) { + if (item instanceof TaskListItem taskItem) { + String marker = taskItem.isItemDoneMarker() ? 
"[x] " : "[ ] "; + renderListItem(item, marker); + } else if (item instanceof BulletListItem) { + renderListItem(item, "- "); + } + item = item.getNext(); + } + + ctx.setListIndentLevel(ctx.getListIndentLevel() - 1); + ctx.advanceCursor(spacing); + } + + private void renderOrderedList(OrderedList list) throws IOException { + float spacing = ctx.isCompact() ? 2f : 4f; + ctx.advanceCursor(spacing); + ctx.setListIndentLevel(ctx.getListIndentLevel() + 1); + + int number = list.getStartNumber(); + Node item = list.getFirstChild(); + while (item != null) { + if (item instanceof OrderedListItem) { + renderListItem(item, number + ". "); + number++; + } + item = item.getNext(); + } + + ctx.setListIndentLevel(ctx.getListIndentLevel() - 1); + ctx.advanceCursor(spacing); + } + + private void renderListItem(Node item, String marker) throws IOException { + float fontSize = ctx.getBaseFontSize(); + ctx.ensureSpace(ctx.getLineHeight(fontSize)); + + // Render marker + PDPageContentStream cs = ctx.getContentStream(); + cs.beginText(); + cs.setFont(ctx.getRegularFont(), fontSize); + cs.newLineAtOffset(ctx.getLeftX(), ctx.getCursorY() - fontSize); + cs.showText(sanitizeText(marker, ctx.getRegularFont())); + cs.endText(); + + // Render item content inline (offset by marker width) + float markerWidth; + try { + markerWidth = ctx.getTextWidth(marker, ctx.getRegularFont(), fontSize); + } catch (IOException e) { + markerWidth = fontSize * marker.length() * 0.5f; + } + + // Render paragraphs and nested lists within the item + Node child = item.getFirstChild(); + boolean firstChild = true; + while (child != null) { + if (child instanceof Paragraph) { + if (firstChild) { + // First paragraph renders on the same line as the marker + List runs = collectTextRuns(child); + renderTextRunsWithOffset(runs, fontSize, markerWidth); + firstChild = false; + } else { + renderParagraph((Paragraph) child); + } + } else if (child instanceof BulletList) { + renderBulletList((BulletList) child); + } else if 
(child instanceof OrderedList) { + renderOrderedList((OrderedList) child); + } else { + renderNode(child); + } + child = child.getNext(); + } + + if (firstChild) { + // No paragraph children - item has inline content only + ctx.advanceCursor(ctx.getLineHeight(fontSize)); + } + } + + private void renderFencedCodeBlock(FencedCodeBlock codeBlock) throws IOException { + renderCodeContent(codeBlock.getContentChars().toString()); + } + + private void renderIndentedCodeBlock(IndentedCodeBlock codeBlock) throws IOException { + renderCodeContent(codeBlock.getContentChars().toString()); + } + + private void renderCodeContent(String code) throws IOException { + float fontSize = ctx.getBaseFontSize() * 0.85f; + float lineHeight = ctx.getLineHeight(fontSize); + float padding = 8f; + + String[] lines = code.split("\n", -1); + // Remove trailing empty line if present + if (lines.length > 0 && lines[lines.length - 1].isBlank()) { + String[] trimmed = new String[lines.length - 1]; + System.arraycopy(lines, 0, trimmed, 0, trimmed.length); + lines = trimmed; + } + + float blockHeight = lines.length * lineHeight + padding * 2; + ctx.ensureSpace(Math.min(blockHeight, ctx.getLineHeight(fontSize) * 3)); + ctx.advanceCursor(ctx.isCompact() ? 
4f : 8f); + + // Draw background rectangle + float bgTop = ctx.getCursorY() + fontSize * 0.3f; + float bgWidth = ctx.getContentWidth(); + PDPageContentStream cs = ctx.getContentStream(); + cs.setNonStrokingColor(0.96f, 0.96f, 0.96f); + cs.addRect(ctx.getLeftX(), bgTop - blockHeight, bgWidth, blockHeight); + cs.fill(); + cs.setNonStrokingColor(0f, 0f, 0f); + + // Render each line + float codeX = ctx.getLeftX() + padding; + for (String line : lines) { + ctx.ensureSpace(lineHeight); + cs = ctx.getContentStream(); + cs.beginText(); + cs.setFont(ctx.getMonoFont(), fontSize); + cs.newLineAtOffset(codeX, ctx.getCursorY() - fontSize); + // Truncate lines that are too wide + String displayLine = + truncateToFit(line, ctx.getMonoFont(), fontSize, bgWidth - padding * 2); + cs.showText(sanitizeText(displayLine, ctx.getMonoFont())); + cs.endText(); + ctx.advanceCursor(lineHeight); + } + + ctx.advanceCursor(ctx.isCompact() ? 4f : 8f); + } + + private void renderBlockQuote(BlockQuote blockQuote) throws IOException { + float spacing = ctx.isCompact() ? 3f : 6f; + ctx.advanceCursor(spacing); + + boolean wasInBlockquote = ctx.isInBlockquote(); + ctx.setInBlockquote(true); + + // Record Y position before rendering children for the vertical bar + float startY = ctx.getCursorY(); + + renderChildren(blockQuote); + + float endY = ctx.getCursorY(); + + // Draw vertical gray bar on the left + float barX = ctx.getLeftX() - 10f; + PDPageContentStream cs = ctx.getContentStream(); + cs.setStrokingColor(0.8f, 0.8f, 0.8f); + cs.setLineWidth(3f); + cs.moveTo(barX, startY); + cs.lineTo(barX, endY); + cs.stroke(); + cs.setStrokingColor(0f, 0f, 0f); + + ctx.setInBlockquote(wasInBlockquote); + ctx.advanceCursor(spacing); + } + + private void renderThematicBreak() throws IOException { + float spacing = ctx.isCompact() ? 
8f : 16f; + ctx.ensureSpace(spacing * 2); + ctx.advanceCursor(spacing); + + PDPageContentStream cs = ctx.getContentStream(); + cs.setStrokingColor(0.8f, 0.8f, 0.8f); + cs.setLineWidth(0.5f); + cs.moveTo(ctx.getLeftX(), ctx.getCursorY()); + cs.lineTo(ctx.getRightX(), ctx.getCursorY()); + cs.stroke(); + cs.setStrokingColor(0f, 0f, 0f); + + ctx.advanceCursor(spacing); + } + + private void renderImage(Image image) throws IOException { + String src = image.getUrl().toString(); + byte[] imageBytes = imageResolver.resolve(src, imageBaseUrl); + if (imageBytes == null) { + // Render alt text as placeholder + log.warn("Could not load image: {}", src); + renderPlainText("[Image: " + image.getText() + "]", ctx.getBaseFontSize()); + return; + } + + try { + PDImageXObject pdImage = + PDImageXObject.createFromByteArray(ctx.getDocument(), imageBytes, src); + + float maxWidth = ctx.getContentWidth(); + float maxHeight = + ctx.getPageHeight() - ctx.getMarginTop() - ctx.getMarginBottom() - 40f; + + // Scale to fit within content width and available height + float imgWidth = pdImage.getWidth(); + float imgHeight = pdImage.getHeight(); + + if (imgWidth > maxWidth) { + float ratio = maxWidth / imgWidth; + imgWidth = maxWidth; + imgHeight *= ratio; + } + if (imgHeight > maxHeight) { + float ratio = maxHeight / imgHeight; + imgHeight = maxHeight; + imgWidth *= ratio; + } + + ctx.ensureSpace(imgHeight + 10f); + ctx.advanceCursor(5f); + + float x = ctx.getLeftX(); + float y = ctx.getCursorY() - imgHeight; + + PDPageContentStream cs = ctx.getContentStream(); + cs.drawImage(pdImage, x, y, imgWidth, imgHeight); + + ctx.advanceCursor(imgHeight + 5f); + } catch (Exception e) { + log.warn("Failed to embed image '{}': {}", src, e.getMessage()); + renderPlainText("[Image: " + image.getText() + "]", ctx.getBaseFontSize()); + } + } + + private void renderTable(TableBlock tableBlock) throws IOException { + float fontSize = ctx.getBaseFontSize() * 0.9f; + float cellPadding = 4f; + float lineHeight = 
ctx.getLineHeight(fontSize); + + // Collect table data + List<List<String>> headerRows = new ArrayList<>(); + List<List<String>> bodyRows = new ArrayList<>(); + + Node child = tableBlock.getFirstChild(); + while (child != null) { + if (child instanceof TableHead) { + collectTableRows(child, headerRows); + } else if (child instanceof TableBody) { + collectTableRows(child, bodyRows); + } + child = child.getNext(); + } + + List<List<String>> allRows = new ArrayList<>(); + allRows.addAll(headerRows); + allRows.addAll(bodyRows); + + if (allRows.isEmpty()) return; + + // Calculate column count and widths + int colCount = allRows.stream().mapToInt(List::size).max().orElse(0); + if (colCount == 0) return; + + float tableWidth = ctx.getContentWidth(); + float colWidth = tableWidth / colCount; + + ctx.advanceCursor(ctx.isCompact() ? 4f : 8f); + + // Render rows + for (int rowIdx = 0; rowIdx < allRows.size(); rowIdx++) { + List<String> row = allRows.get(rowIdx); + boolean isHeader = rowIdx < headerRows.size(); + float rowHeight = lineHeight + cellPadding * 2; + + ctx.ensureSpace(rowHeight); + + float rowY = ctx.getCursorY(); + float cellX = ctx.getLeftX(); + + PDPageContentStream cs = ctx.getContentStream(); + + // Draw header background + if (isHeader) { + cs.setNonStrokingColor(0.96f, 0.96f, 0.96f); + cs.addRect(cellX, rowY - rowHeight, tableWidth, rowHeight); + cs.fill(); + cs.setNonStrokingColor(0f, 0f, 0f); + } + + // Draw cell text + for (int colIdx = 0; colIdx < colCount; colIdx++) { + String cellText = colIdx < row.size() ? row.get(colIdx) : ""; + PDType1Font font = isHeader ? 
ctx.getBoldFont() : ctx.getRegularFont(); + + // Truncate text to fit in cell + String displayText = + truncateToFit(cellText, font, fontSize, colWidth - cellPadding * 2); + + cs.beginText(); + cs.setFont(font, fontSize); + cs.newLineAtOffset(cellX + cellPadding, rowY - cellPadding - fontSize); + cs.showText(sanitizeText(displayText, font)); + cs.endText(); + + cellX += colWidth; + } + + // Draw cell borders + cs.setStrokingColor(0.85f, 0.85f, 0.85f); + cs.setLineWidth(0.5f); + // Top border + cs.moveTo(ctx.getLeftX(), rowY); + cs.lineTo(ctx.getLeftX() + tableWidth, rowY); + cs.stroke(); + // Bottom border + cs.moveTo(ctx.getLeftX(), rowY - rowHeight); + cs.lineTo(ctx.getLeftX() + tableWidth, rowY - rowHeight); + cs.stroke(); + // Vertical borders + float borderX = ctx.getLeftX(); + for (int i = 0; i <= colCount; i++) { + cs.moveTo(borderX, rowY); + cs.lineTo(borderX, rowY - rowHeight); + cs.stroke(); + borderX += colWidth; + } + cs.setStrokingColor(0f, 0f, 0f); + + ctx.advanceCursor(rowHeight); + } + + ctx.advanceCursor(ctx.isCompact() ? 4f : 8f); + } + + private void collectTableRows(Node section, List<List<String>> rows) { + Node row = section.getFirstChild(); + while (row != null) { + if (row instanceof TableRow) { + List<String> cells = new ArrayList<>(); + Node cell = row.getFirstChild(); + while (cell != null) { + if (cell instanceof TableCell) { + cells.add(cell.getChars().toString().trim()); + } + cell = cell.getNext(); + } + rows.add(cells); + } + row = row.getNext(); + } + } + + private void renderHtmlBlock(HtmlBlock htmlBlock) throws IOException { + // Render raw HTML as plain text + String text = htmlBlock.getChars().toString().trim(); + if (!text.isEmpty()) { + renderPlainText(text, ctx.getBaseFontSize()); + } + } + + // ======================================================================== + // Inline text rendering + // ======================================================================== + + /** A styled text run within a paragraph. 
*/ + static class TextRun { + String text; + boolean bold; + boolean italic; + boolean code; + boolean strikethrough; + String linkUrl; + + TextRun( + String text, + boolean bold, + boolean italic, + boolean code, + boolean strikethrough, + String linkUrl) { + this.text = text; + this.bold = bold; + this.italic = italic; + this.code = code; + this.strikethrough = strikethrough; + this.linkUrl = linkUrl; + } + } + + /** Collects all inline text runs from a block node, preserving formatting. */ + private List<TextRun> collectTextRuns(Node block) { + List<TextRun> runs = new ArrayList<>(); + collectInlineRuns(block, runs, false, false, false, false, null); + return runs; + } + + private void collectInlineRuns( + Node node, + List<TextRun> runs, + boolean bold, + boolean italic, + boolean code, + boolean strikethrough, + String linkUrl) { + Node child = node.getFirstChild(); + while (child != null) { + if (child instanceof Text) { + String text = child.getChars().toString(); + if (!text.isEmpty()) { + runs.add(new TextRun(text, bold, italic, code, strikethrough, linkUrl)); + } + } else if (child instanceof SoftLineBreak || child instanceof HardLineBreak) { + runs.add(new TextRun("\n", bold, italic, code, strikethrough, linkUrl)); + } else if (child instanceof Code) { + String text = ((Code) child).getText().toString(); + runs.add(new TextRun(text, bold, italic, true, strikethrough, linkUrl)); + } else if (child instanceof StrongEmphasis) { + collectInlineRuns(child, runs, true, italic, code, strikethrough, linkUrl); + } else if (child instanceof Emphasis) { + collectInlineRuns(child, runs, bold, true, code, strikethrough, linkUrl); + } else if (child instanceof Strikethrough) { + collectInlineRuns(child, runs, bold, italic, code, true, linkUrl); + } else if (child instanceof Link link) { + collectInlineRuns( + child, runs, bold, italic, code, strikethrough, link.getUrl().toString()); + } else if (child instanceof Image image) { + // Inline image - add placeholder text + runs.add( + new TextRun( 
+ "[" + image.getText() + "]", + bold, + italic, + code, + strikethrough, + linkUrl)); + } else if (child instanceof Footnote) { + runs.add(new TextRun("[*]", bold, italic, code, strikethrough, linkUrl)); + } else if (child instanceof HtmlInline) { + // Skip inline HTML tags + } else { + // Recurse into unknown inline nodes + collectInlineRuns(child, runs, bold, italic, code, strikethrough, linkUrl); + } + child = child.getNext(); + } + } + + /** Renders text runs with word wrapping across the content area. */ + private void renderTextRuns(List<TextRun> runs, float fontSize) throws IOException { + renderTextRunsWithOffset(runs, fontSize, 0f); + } + + /** + * Renders text runs with word wrapping, with an initial X offset (used for list items where the + * marker occupies the start of the first line). + */ + private void renderTextRunsWithOffset(List<TextRun> runs, float fontSize, float initialOffset) + throws IOException { + float x = ctx.getLeftX() + initialOffset; + float maxX = ctx.getRightX(); + float lineHeight = ctx.getLineHeight(fontSize); + boolean firstLine = true; + + for (TextRun run : runs) { + PDType1Font font = resolveFont(run); + float runFontSize = run.code ? 
fontSize * 0.85f : fontSize; + + if (run.text.equals("\n")) { + // Explicit line break + ctx.advanceCursor(lineHeight); + ctx.ensureSpace(lineHeight); + x = ctx.getLeftX(); + firstLine = false; + continue; + } + + // Split into words for wrapping + String[] words = run.text.split("(?<=\\s)|(?=\\s)"); + for (String word : words) { + if (word.isEmpty()) continue; + + float wordWidth = ctx.getTextWidth(word, font, runFontSize); + + // Wrap to next line if needed + if (x + wordWidth > maxX && x > ctx.getLeftX() + 1f) { + ctx.advanceCursor(lineHeight); + ctx.ensureSpace(lineHeight); + x = ctx.getLeftX(); + firstLine = false; + // Skip leading whitespace on new line + if (word.isBlank()) continue; + } + + if (firstLine && x == ctx.getLeftX() + initialOffset) { + // First word on first line - position cursor + } else if (x == ctx.getLeftX()) { + // First word on a new line - position cursor + } + + PDPageContentStream cs = ctx.getContentStream(); + + // Draw code background + if (run.code) { + cs.setNonStrokingColor(0.94f, 0.94f, 0.94f); + cs.addRect( + x - 1f, + ctx.getCursorY() - runFontSize - 1f, + wordWidth + 2f, + runFontSize + 3f); + cs.fill(); + cs.setNonStrokingColor(0f, 0f, 0f); + } + + // Draw text + if (run.linkUrl != null) { + cs.setNonStrokingColor(0.02f, 0.4f, 0.84f); + } + + cs.beginText(); + cs.setFont(font, runFontSize); + cs.newLineAtOffset(x, ctx.getCursorY() - fontSize); + cs.showText(sanitizeText(word, font)); + cs.endText(); + + // Draw strikethrough line + if (run.strikethrough) { + float strikeY = ctx.getCursorY() - fontSize * 0.35f; + cs.setLineWidth(0.5f); + cs.moveTo(x, strikeY); + cs.lineTo(x + wordWidth, strikeY); + cs.stroke(); + } + + // Add link annotation + if (run.linkUrl != null) { + cs.setNonStrokingColor(0f, 0f, 0f); + addLinkAnnotation( + x, + ctx.getCursorY() - fontSize - 2f, + wordWidth, + fontSize + 4f, + run.linkUrl); + } + + x += wordWidth; + } + } + + // Advance past the last rendered line + ctx.advanceCursor(lineHeight); + } + 
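The wrapping loop in `renderTextRunsWithOffset` above depends on the zero-width split regex `(?<=\s)|(?=\s)`, which breaks the run's text before and after every whitespace character: no characters are consumed, whitespace survives as standalone tokens, and each token can be measured and wrapped on its own. A minimal standalone sketch of that tokenization (the `WrapTokenDemo` class name is illustrative, not part of this PR):

```java
import java.util.Arrays;

public class WrapTokenDemo {
    // Zero-width split: the lookbehind/lookahead match *between* characters,
    // so nothing is consumed and whitespace is kept as its own tokens.
    static String[] tokenize(String text) {
        return text.split("(?<=\\s)|(?=\\s)");
    }

    public static void main(String[] args) {
        String[] tokens = tokenize("wrap me now");
        // Tokens: "wrap", " ", "me", " ", "now" - joining restores the input.
        System.out.println(Arrays.toString(tokens));
        System.out.println(String.join("", tokens).equals("wrap me now"));
    }
}
```

Because the split loses nothing, a whitespace token that lands at a line break can simply be dropped, which is what the `word.isBlank()` check in the loop does after wrapping.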
+ + // ======================================================================== + // Utility methods + // ======================================================================== + + private PDType1Font resolveFont(TextRun run) { + if (run.code) return ctx.getMonoFont(); + if (run.bold && run.italic) return ctx.getBoldItalicFont(); + if (run.bold) return ctx.getBoldFont(); + if (run.italic) return ctx.getItalicFont(); + return ctx.getRegularFont(); + } + + private void renderPlainText(String text, float fontSize) throws IOException { + float lineHeight = ctx.getLineHeight(fontSize); + ctx.ensureSpace(lineHeight); + + PDPageContentStream cs = ctx.getContentStream(); + cs.beginText(); + cs.setFont(ctx.getRegularFont(), fontSize); + cs.newLineAtOffset(ctx.getLeftX(), ctx.getCursorY() - fontSize); + String displayText = + truncateToFit(text, ctx.getRegularFont(), fontSize, ctx.getContentWidth()); + cs.showText(sanitizeText(displayText, ctx.getRegularFont())); + cs.endText(); + + ctx.advanceCursor(lineHeight); + } + + private String truncateToFit(String text, PDType1Font font, float fontSize, float maxWidth) { + try { + float width = ctx.getTextWidth(text, font, fontSize); + if (width <= maxWidth) return text; + + // Shrink the candidate length geometrically (by ~25% per step) until the + // text plus ellipsis fits + int end = text.length(); + while (end > 0 + && ctx.getTextWidth(text.substring(0, end) + "...", font, fontSize) + > maxWidth) { + end = end - Math.max(1, end / 4); + } + return end > 0 ? text.substring(0, end) + "..." : "..."; + } catch (IOException e) { + return text.length() > 50 ? text.substring(0, 50) + "..." : text; + } + } + + /** + * Sanitizes text for PDFBox rendering by replacing characters that are not encodable in the + * Standard 14 fonts (WinAnsiEncoding). Non-encodable characters are replaced with '?'. 
+ */ + private String sanitizeText(String text, PDType1Font font) { + if (text == null || text.isEmpty()) return ""; + StringBuilder sb = new StringBuilder(text.length()); + for (int i = 0; i < text.length(); i++) { + char c = text.charAt(i); + try { + font.encode(String.valueOf(c)); + sb.append(c); + } catch (Exception e) { + sb.append('?'); + } + } + return sb.toString(); + } + + private void addLinkAnnotation(float x, float y, float width, float height, String url) { + try { + PDAnnotationLink link = new PDAnnotationLink(); + PDActionURI action = new PDActionURI(); + action.setURI(url); + link.setAction(action); + link.setRectangle(new PDRectangle(x, y, width, height)); + link.setBorderStyle(null); + + ctx.getDocument() + .getPage(ctx.getDocument().getNumberOfPages() - 1) + .getAnnotations() + .add(link); + } catch (Exception e) { + log.debug("Failed to add link annotation for URL: {}", url); + } + } +} diff --git a/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfImageResolver.java b/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfImageResolver.java new file mode 100644 index 0000000000..ea8832eb18 --- /dev/null +++ b/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfImageResolver.java @@ -0,0 +1,105 @@ +/* + * Copyright 2026 Conductor Authors. + *
<p>
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *
<p>
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
<p>
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.pdf; + +import java.util.Base64; +import java.util.List; + +import org.conductoross.conductor.ai.document.DocumentLoader; +import org.conductoross.conductor.config.AIIntegrationEnabledCondition; +import org.springframework.context.annotation.Conditional; +import org.springframework.stereotype.Component; + +import lombok.extern.slf4j.Slf4j; + +/** + * Resolves image references (URLs, file paths, data URIs) to raw byte arrays for embedding in PDF + * documents. Uses the existing {@link DocumentLoader} infrastructure for downloading remote and + * local images. + */ +@Component +@Conditional(AIIntegrationEnabledCondition.class) +@Slf4j +public class PdfImageResolver { + + private final List<DocumentLoader> documentLoaders; + + public PdfImageResolver(List<DocumentLoader> documentLoaders) { + this.documentLoaders = documentLoaders; + } + + /** + * Resolves an image source to its raw bytes. + * + * @param src the image source (data URI, http/https URL, file:// path, or relative path) + * @param imageBaseUrl optional base URL for resolving relative paths + * @return the image bytes, or null if resolution fails + */ + public byte[] resolve(String src, String imageBaseUrl) { + if (src == null || src.isBlank()) { + return null; + } + + try { + // data: URIs - decode inline base64 + if (src.startsWith("data:")) { + return decodeDataUri(src); + } + + // Resolve relative paths against base URL + String resolvedSrc = src; + if (!isAbsoluteUri(src) && imageBaseUrl != null && !imageBaseUrl.isBlank()) { + resolvedSrc = + imageBaseUrl.endsWith("/") ? 
imageBaseUrl + src : imageBaseUrl + "/" + src; + } + + // Download via DocumentLoader + return downloadImage(resolvedSrc); + + } catch (Exception e) { + log.warn("Failed to resolve image '{}': {}", src, e.getMessage()); + return null; + } + } + + private byte[] decodeDataUri(String dataUri) { + // Format: data:[<mediatype>][;base64],<data> + int commaIndex = dataUri.indexOf(','); + if (commaIndex < 0) { + log.warn("Invalid data URI format: missing comma separator"); + return null; + } + String encoded = dataUri.substring(commaIndex + 1); + return Base64.getDecoder().decode(encoded); + } + + private boolean isAbsoluteUri(String src) { + return src.startsWith("http://") || src.startsWith("https://") || src.startsWith("file://"); + } + + private byte[] downloadImage(String location) { + return documentLoaders.stream() + .filter(loader -> loader.supports(location)) + .findFirst() + .map( + loader -> { + log.debug("Downloading image from: {}", location); + return loader.download(location); + }) + .orElseGet( + () -> { + log.warn("No DocumentLoader supports image location: {}", location); + return null; + }); + } +} diff --git a/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfRenderContext.java b/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfRenderContext.java new file mode 100644 index 0000000000..e807b9a03a --- /dev/null +++ b/ai/src/main/java/org/conductoross/conductor/ai/pdf/PdfRenderContext.java @@ -0,0 +1,160 @@ +/* + * Copyright 2026 Conductor Authors. + *
<p>
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *
<p>
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
<p>
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.pdf; + +import java.io.IOException; + +import org.apache.pdfbox.pdmodel.PDDocument; +import org.apache.pdfbox.pdmodel.PDPage; +import org.apache.pdfbox.pdmodel.PDPageContentStream; +import org.apache.pdfbox.pdmodel.common.PDRectangle; +import org.apache.pdfbox.pdmodel.font.PDType1Font; +import org.apache.pdfbox.pdmodel.font.Standard14Fonts; + +import lombok.Getter; +import lombok.Setter; + +/** + * Mutable state object that tracks the current rendering position, page, fonts, and layout + * configuration while walking the markdown AST and rendering to PDF via PDFBox. + */ +@Getter +public class PdfRenderContext { + + private final PDDocument document; + private final float pageWidth; + private final float pageHeight; + private final float marginTop; + private final float marginRight; + private final float marginBottom; + private final float marginLeft; + + // Fonts + private final PDType1Font regularFont; + private final PDType1Font boldFont; + private final PDType1Font italicFont; + private final PDType1Font boldItalicFont; + private final PDType1Font monoFont; + private final PDType1Font monoBoldFont; + + private final float baseFontSize; + private final float lineSpacing; + + // Mutable rendering state + @Setter private PDPageContentStream contentStream; + @Setter private float cursorY; + @Setter private int listIndentLevel; + @Setter private boolean inBlockquote; + @Setter private boolean compact; + + public PdfRenderContext( + PDDocument document, + PDRectangle pageSize, + float marginTop, + float marginRight, + float marginBottom, + float marginLeft, + float baseFontSize, + boolean compact) { + 
this.document = document; + this.pageWidth = pageSize.getWidth(); + this.pageHeight = pageSize.getHeight(); + this.marginTop = marginTop; + this.marginRight = marginRight; + this.marginBottom = marginBottom; + this.marginLeft = marginLeft; + this.baseFontSize = baseFontSize; + this.lineSpacing = compact ? 1.3f : 1.5f; + this.compact = compact; + + // Standard 14 fonts - always available, no embedding needed + this.regularFont = new PDType1Font(Standard14Fonts.FontName.HELVETICA); + this.boldFont = new PDType1Font(Standard14Fonts.FontName.HELVETICA_BOLD); + this.italicFont = new PDType1Font(Standard14Fonts.FontName.HELVETICA_OBLIQUE); + this.boldItalicFont = new PDType1Font(Standard14Fonts.FontName.HELVETICA_BOLD_OBLIQUE); + this.monoFont = new PDType1Font(Standard14Fonts.FontName.COURIER); + this.monoBoldFont = new PDType1Font(Standard14Fonts.FontName.COURIER_BOLD); + + this.listIndentLevel = 0; + this.inBlockquote = false; + } + + /** Returns the usable content width (page width minus left and right margins and indents). */ + public float getContentWidth() { + return pageWidth - marginLeft - marginRight - getIndentOffset(); + } + + /** Returns the left X coordinate accounting for margins, indentation, and blockquote. */ + public float getLeftX() { + return marginLeft + getIndentOffset(); + } + + /** Returns the right X boundary. */ + public float getRightX() { + return pageWidth - marginRight; + } + + /** Calculates total indent offset from list nesting and blockquote state. */ + private float getIndentOffset() { + float indent = listIndentLevel * 20f; + if (inBlockquote) { + indent += 15f; + } + return indent; + } + + /** Returns the line height for the given font size. */ + public float getLineHeight(float fontSize) { + return fontSize * lineSpacing; + } + + /** + * Ensures there is enough vertical space on the current page for the given height. If not, + * creates a new page. 
+ * + * @param height the required vertical space in points + */ + public void ensureSpace(float height) throws IOException { + if (cursorY - height < marginBottom) { + newPage(); + } + } + + /** Closes the current content stream, adds a new page, and resets the cursor. */ + public void newPage() throws IOException { + if (contentStream != null) { + contentStream.close(); + } + PDPage page = new PDPage(new PDRectangle(pageWidth, pageHeight)); + document.addPage(page); + contentStream = new PDPageContentStream(document, page); + cursorY = pageHeight - marginTop; + } + + /** Advances the cursor down by the specified amount. */ + public void advanceCursor(float amount) { + cursorY -= amount; + } + + /** + * Calculates the width of a text string in the given font and size. + * + * @param text the text to measure + * @param font the font to use + * @param fontSize the font size in points + * @return the width in points + */ + public float getTextWidth(String text, PDType1Font font, float fontSize) throws IOException { + return font.getStringWidth(text) / 1000f * fontSize; + } +} diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/anthropic/Anthropic.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/anthropic/Anthropic.java index 972027e2db..f2a6f4747d 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/anthropic/Anthropic.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/anthropic/Anthropic.java @@ -27,6 +27,8 @@ import org.springframework.ai.chat.prompt.ChatOptions; import org.springframework.ai.image.ImageModel; import org.springframework.ai.tool.ToolCallback; +import org.springframework.http.client.SimpleClientHttpRequestFactory; +import org.springframework.web.client.RestClient; public class Anthropic implements AIModel { @@ -49,8 +51,14 @@ public List generateEmbeddings(EmbeddingGenRequest embeddingGenRequest) { @Override public ChatModel getChatModel() { + SimpleClientHttpRequestFactory factory = 
new SimpleClientHttpRequestFactory(); + factory.setReadTimeout(config.getTimeout()); + var builder = - AnthropicApi.builder().baseUrl(config.getBaseURL()).apiKey(config.getApiKey()); + AnthropicApi.builder() + .baseUrl(config.getBaseURL()) + .apiKey(config.getApiKey()) + .restClientBuilder(RestClient.builder().requestFactory(factory)); if (StringUtils.isNotBlank(config.getVersion())) { builder.anthropicVersion(config.getVersion()); diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/anthropic/AnthropicConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/anthropic/AnthropicConfiguration.java index 6ce07fcc94..feb125947a 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/anthropic/AnthropicConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/anthropic/AnthropicConfiguration.java @@ -12,18 +12,18 @@ */ package org.conductoross.conductor.ai.providers.anthropic; +import java.time.Duration; + import org.conductoross.conductor.ai.ModelConfiguration; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; -import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; @Data @Component @ConfigurationProperties(prefix = "conductor.ai.anthropic") -@AllArgsConstructor @NoArgsConstructor public class AnthropicConfiguration implements ModelConfiguration { @@ -37,6 +37,21 @@ public class AnthropicConfiguration implements ModelConfiguration { private String completionsPath; + private Duration timeout = Duration.ofSeconds(600); + + public AnthropicConfiguration( + String apiKey, + String baseURL, + String version, + String betaVersion, + String completionsPath) { + this.apiKey = apiKey; + this.baseURL = baseURL; + this.version = version; + this.betaVersion = betaVersion; + this.completionsPath = completionsPath; + } + public String getBaseURL() { return baseURL == null ? 
"https://api.anthropic.com" : baseURL; } diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/azureopenai/AzureOpenAI.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/azureopenai/AzureOpenAI.java index 86942c7808..9690db38ee 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/azureopenai/AzureOpenAI.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/azureopenai/AzureOpenAI.java @@ -32,6 +32,7 @@ import com.azure.ai.openai.OpenAIClient; import com.azure.ai.openai.OpenAIClientBuilder; import com.azure.core.credential.AzureKeyCredential; +import com.azure.core.util.HttpClientOptions; import com.google.common.primitives.Floats; public class AzureOpenAI implements AIModel { @@ -101,9 +102,15 @@ public ChatModel getChatModel() { } private OpenAIClientBuilder getOpenAIClientBuilder() { + HttpClientOptions clientOptions = + new HttpClientOptions() + .setReadTimeout(config.getTimeout()) + .setResponseTimeout(config.getTimeout()); + return new OpenAIClientBuilder() .endpoint(config.getBaseURL()) - .credential(new AzureKeyCredential(config.getApiKey())); + .credential(new AzureKeyCredential(config.getApiKey())) + .clientOptions(clientOptions); } private boolean isReasoningModel(String modelName) { diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/azureopenai/AzureOpenAIConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/azureopenai/AzureOpenAIConfiguration.java index 5822969794..3a30a81077 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/azureopenai/AzureOpenAIConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/azureopenai/AzureOpenAIConfiguration.java @@ -12,16 +12,16 @@ */ package org.conductoross.conductor.ai.providers.azureopenai; +import java.time.Duration; + import org.conductoross.conductor.ai.ModelConfiguration; import org.springframework.boot.context.properties.ConfigurationProperties; import 
org.springframework.stereotype.Component; -import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; @Data -@AllArgsConstructor @NoArgsConstructor @ConfigurationProperties(prefix = "conductor.ai.azureopenai") @Component(value = AzureOpenAI.NAME) @@ -31,6 +31,15 @@ public class AzureOpenAIConfiguration implements ModelConfiguration private String baseURL; private String user; private String deploymentName; + private Duration timeout = Duration.ofSeconds(600); + + public AzureOpenAIConfiguration( + String apiKey, String baseURL, String user, String deploymentName) { + this.apiKey = apiKey; + this.baseURL = baseURL; + this.user = user; + this.deploymentName = deploymentName; + } @Override public AzureOpenAI get() { diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/bedrock/Bedrock.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/bedrock/Bedrock.java index d5b28ef583..841fb343a6 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/bedrock/Bedrock.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/bedrock/Bedrock.java @@ -30,6 +30,7 @@ import lombok.SneakyThrows; import lombok.extern.slf4j.Slf4j; import software.amazon.awssdk.core.SdkBytes; +import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration; import software.amazon.awssdk.core.interceptor.Context; import software.amazon.awssdk.core.interceptor.ExecutionAttributes; import software.amazon.awssdk.core.interceptor.ExecutionInterceptor; @@ -89,14 +90,17 @@ public ChatModel getChatModel() { .credentialsProvider(config.getAwsCredentialsProvider()) .region(Region.of(config.getRegion())); + ClientOverrideConfiguration.Builder overrideBuilder = + ClientOverrideConfiguration.builder().apiCallTimeout(config.getTimeout()); + // Add bearer token interceptor if configured if (config.isBearerTokenConfigured()) { - clientBuilder.overrideConfiguration( - c -> - c.addExecutionInterceptor( - new 
BearerTokenInterceptor(config.getBearerToken()))); + overrideBuilder.addExecutionInterceptor( + new BearerTokenInterceptor(config.getBearerToken())); } + clientBuilder.overrideConfiguration(overrideBuilder.build()); + var client = clientBuilder.build(); return BedrockProxyChatModel.builder() .credentialsProvider(config.getAwsCredentialsProvider()) diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/bedrock/BedrockConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/bedrock/BedrockConfiguration.java index 65d0a441d2..51f04583b8 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/bedrock/BedrockConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/bedrock/BedrockConfiguration.java @@ -12,12 +12,13 @@ */ package org.conductoross.conductor.ai.providers.bedrock; +import java.time.Duration; + import org.apache.commons.lang3.StringUtils; import org.conductoross.conductor.ai.ModelConfiguration; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; -import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; import software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider; @@ -28,7 +29,6 @@ @Data @Component @NoArgsConstructor -@AllArgsConstructor @ConfigurationProperties(prefix = "conductor.ai.bedrock") public class BedrockConfiguration implements ModelConfiguration { @@ -37,6 +37,20 @@ public class BedrockConfiguration implements ModelConfiguration { private String secretKey; private String bearerToken; private String region = "us-east-1"; + private Duration timeout = Duration.ofSeconds(600); + + public BedrockConfiguration( + AwsCredentialsProvider awsCredentialsProvider, + String accessKey, + String secretKey, + String bearerToken, + String region) { + this.awsCredentialsProvider = awsCredentialsProvider; + this.accessKey = accessKey; + this.secretKey = secretKey; + 
this.bearerToken = bearerToken; + this.region = region; + } @Override public Bedrock get() { diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/cohere/CohereAI.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/cohere/CohereAI.java index 522f41bd64..624a34b5f3 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/cohere/CohereAI.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/cohere/CohereAI.java @@ -21,6 +21,8 @@ import org.springframework.ai.chat.model.ChatModel; import org.springframework.ai.chat.prompt.ChatOptions; import org.springframework.ai.image.ImageModel; +import org.springframework.http.client.SimpleClientHttpRequestFactory; +import org.springframework.web.client.RestClient; import lombok.extern.slf4j.Slf4j; @@ -101,7 +103,13 @@ public ImageModel getImageModel() { // Initialization helpers private CohereApi createCohereApi() { - CohereApi.Builder builder = CohereApi.builder().apiKey(config.getApiKey()); + SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory(); + factory.setReadTimeout(config.getTimeout()); + + CohereApi.Builder builder = + CohereApi.builder() + .apiKey(config.getApiKey()) + .restClientBuilder(RestClient.builder().requestFactory(factory)); if (config.getBaseURL() != null && !config.getBaseURL().isEmpty()) { builder.baseUrl(config.getBaseURL()); diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/cohere/CohereAIConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/cohere/CohereAIConfiguration.java index a9b82b0a53..b4fa5daea1 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/cohere/CohereAIConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/cohere/CohereAIConfiguration.java @@ -12,11 +12,12 @@ */ package org.conductoross.conductor.ai.providers.cohere; +import java.time.Duration; + import org.conductoross.conductor.ai.ModelConfiguration; import 
org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; -import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; import lombok.extern.slf4j.Slf4j; @@ -24,13 +25,18 @@ @Data @Component @ConfigurationProperties(prefix = "conductor.ai.cohere") -@AllArgsConstructor @NoArgsConstructor @Slf4j public class CohereAIConfiguration implements ModelConfiguration { private String apiKey; private String baseURL = "https://api.cohere.ai"; + private Duration timeout = Duration.ofSeconds(600); + + public CohereAIConfiguration(String apiKey, String baseURL) { + this.apiKey = apiKey; + this.baseURL = baseURL; + } @Override public CohereAI get() { diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/gemini/GeminiVertex.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/gemini/GeminiVertex.java index c91d334beb..9cd2c497b7 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/gemini/GeminiVertex.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/gemini/GeminiVertex.java @@ -60,6 +60,8 @@ public class GeminiVertex implements AIModel { public static final String NAME = "vertex_ai"; + public static final String ALIAS = "google_gemini"; + private final GeminiVertexConfiguration config; public GeminiVertex(GeminiVertexConfiguration config) { @@ -71,6 +73,11 @@ public String getModelProvider() { return NAME; } + @Override + public List<String> getProviderAliases() { + return List.of(ALIAS); + } + @Override public List generateEmbeddings(EmbeddingGenRequest embeddingGenRequest) { VertexAiTextEmbeddingOptions options = @@ -222,8 +229,11 @@ public LLMResponse checkVideoStatus(VideoGenRequest request) { return builder.build(); } - /** Creates a shared Google GenAI Client instance configured for Vertex AI. */ + /** Creates a Google GenAI Client, using API key when available, otherwise Vertex AI. 
*/ private Client createGenAIClient() { + if (config.getApiKey() != null && !config.getApiKey().isBlank()) { + return Client.builder().apiKey(config.getApiKey()).build(); + } return Client.builder() .vertexAI(true) .credentials(config.getGoogleCredentials()) diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/gemini/GeminiVertexConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/gemini/GeminiVertexConfiguration.java index 45a8708532..a41bbb545e 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/gemini/GeminiVertexConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/gemini/GeminiVertexConfiguration.java @@ -33,6 +33,7 @@ public class GeminiVertexConfiguration implements ModelConfiguration { private String apiKey; private String baseURL; + private Duration timeout = Duration.ofSeconds(600); + + public GrokAIConfiguration(String apiKey, String baseURL) { + this.apiKey = apiKey; + this.baseURL = baseURL; + } public String getBaseURL() { return Objects.isNull(baseURL) ? 
"https://api.x.ai" : baseURL; diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/mistral/MistralAI.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/mistral/MistralAI.java index d86173cc2d..6c5ab8094c 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/mistral/MistralAI.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/mistral/MistralAI.java @@ -31,6 +31,7 @@ import org.springframework.ai.mistralai.MistralAiEmbeddingModel; import org.springframework.ai.mistralai.api.MistralAiApi; import org.springframework.ai.tool.ToolCallback; +import org.springframework.http.client.SimpleClientHttpRequestFactory; import org.springframework.web.client.RestClient; import lombok.extern.slf4j.Slf4j; @@ -113,13 +114,19 @@ public ImageModel getImageModel() { private MistralAiApi createMistralAiApi() { String apiKey = config.getApiKey(); String baseURL = config.getBaseURL(); + + SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory(); + factory.setReadTimeout(config.getTimeout()); + // Needs accept-encoding headers // https://github.com/spring-projects/spring-ai/issues/372 return MistralAiApi.builder() .baseUrl(baseURL) .apiKey(apiKey) .restClientBuilder( - RestClient.builder().defaultHeader("Accept-Encoding", "gzip, deflate")) + RestClient.builder() + .requestFactory(factory) + .defaultHeader("Accept-Encoding", "gzip, deflate")) .build(); } diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/mistral/MistralAIConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/mistral/MistralAIConfiguration.java index 43fd031ee0..3a994f6350 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/mistral/MistralAIConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/mistral/MistralAIConfiguration.java @@ -12,18 +12,18 @@ */ package org.conductoross.conductor.ai.providers.mistral; +import java.time.Duration; + import 
org.conductoross.conductor.ai.ModelConfiguration; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; -import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; @Data @Component @ConfigurationProperties(prefix = "conductor.ai.mistral") -@AllArgsConstructor @NoArgsConstructor public class MistralAIConfiguration implements ModelConfiguration { @@ -31,6 +31,13 @@ public class MistralAIConfiguration implements ModelConfiguration { private String baseURL; + private Duration timeout = Duration.ofSeconds(600); + + public MistralAIConfiguration(String apiKey, String baseURL) { + this.apiKey = apiKey; + this.baseURL = baseURL; + } + public String getBaseURL() { return baseURL == null ? "https://api.mistral.ai" : baseURL; } diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/ollama/Ollama.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/ollama/Ollama.java index 15c4adf79d..b373fa33e6 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/ollama/Ollama.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/ollama/Ollama.java @@ -29,6 +29,7 @@ import org.springframework.ai.ollama.api.OllamaChatOptions; import org.springframework.ai.ollama.api.OllamaEmbeddingOptions; import org.springframework.ai.tool.ToolCallback; +import org.springframework.http.client.SimpleClientHttpRequestFactory; import org.springframework.web.client.RestClient; import com.google.common.primitives.Floats; @@ -95,15 +96,19 @@ public ChatOptions getChatOptions(ChatCompletion input) { @Override public ChatModel getChatModel() { - OllamaApi.Builder builder = OllamaApi.builder(); - builder.baseUrl(config.getBaseURL()); + SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory(); + factory.setReadTimeout(config.getTimeout()); + + RestClient.Builder restClientBuilder = RestClient.builder().requestFactory(factory); if 
(StringUtils.isNotBlank(config.getAuthHeaderName())) { - RestClient.Builder restClientBuilder = - RestClient.builder() - .defaultHeader(config.getAuthHeaderName(), config.getAuthHeader()); - builder.restClientBuilder(restClientBuilder); + restClientBuilder.defaultHeader(config.getAuthHeaderName(), config.getAuthHeader()); } - OllamaApi api = builder.build(); + + OllamaApi api = + OllamaApi.builder() + .baseUrl(config.getBaseURL()) + .restClientBuilder(restClientBuilder) + .build(); return OllamaChatModel.builder().ollamaApi(api).build(); } diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/ollama/OllamaConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/ollama/OllamaConfiguration.java index 3b8b37e85d..a11c0e905a 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/ollama/OllamaConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/ollama/OllamaConfiguration.java @@ -12,18 +12,18 @@ */ package org.conductoross.conductor.ai.providers.ollama; +import java.time.Duration; + import org.conductoross.conductor.ai.ModelConfiguration; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; -import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; @Data @Component @ConfigurationProperties(prefix = "conductor.ai.ollama") -@AllArgsConstructor @NoArgsConstructor public class OllamaConfiguration implements ModelConfiguration { @@ -33,6 +33,14 @@ public class OllamaConfiguration implements ModelConfiguration { private String authHeader; + private Duration timeout = Duration.ofSeconds(600); + + public OllamaConfiguration(String baseURL, String authHeaderName, String authHeader) { + this.baseURL = baseURL; + this.authHeaderName = authHeaderName; + this.authHeader = authHeader; + } + public String getBaseURL() { return baseURL == null ? 
"http://localhost:11434" : baseURL; } diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/openai/OpenAI.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/openai/OpenAI.java index e1ddc46990..06ec5800ab 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/openai/OpenAI.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/openai/OpenAI.java @@ -279,6 +279,7 @@ private OpenAiSdkChatModel createChatModel() { .organizationId(config.getOrganizationId()) .apiKey(config.getApiKey()) .baseUrl(config.getBaseURL()) + .timeout(config.getTimeout()) .customHeaders(Map.of()) .build(); return new OpenAiSdkChatModel(opts); diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/openai/OpenAIConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/openai/OpenAIConfiguration.java index b220069af6..29c0eabbfb 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/openai/OpenAIConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/openai/OpenAIConfiguration.java @@ -12,11 +12,12 @@ */ package org.conductoross.conductor.ai.providers.openai; +import java.time.Duration; + import org.conductoross.conductor.ai.ModelConfiguration; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; -import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; @@ -24,7 +25,6 @@ @Component @ConfigurationProperties(prefix = "conductor.ai.openai") @NoArgsConstructor -@AllArgsConstructor public class OpenAIConfiguration implements ModelConfiguration { private String apiKey; @@ -33,6 +33,14 @@ public class OpenAIConfiguration implements ModelConfiguration { private String organizationId; + private Duration timeout = Duration.ofSeconds(600); + + public OpenAIConfiguration(String apiKey, String baseURL, String organizationId) { + this.apiKey = apiKey; + this.baseURL = baseURL; + 
this.organizationId = organizationId; + } + public String getBaseURL() { return baseURL == null || baseURL.isBlank() ? "https://api.openai.com/v1" : baseURL; } diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/perplexity/PerplexityAI.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/perplexity/PerplexityAI.java index f62be0762b..40e1a9d064 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/perplexity/PerplexityAI.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/perplexity/PerplexityAI.java @@ -23,6 +23,8 @@ import org.springframework.ai.model.tool.ToolCallingChatOptions; import org.springframework.ai.openai.OpenAiChatModel; import org.springframework.ai.openai.api.OpenAiApi; +import org.springframework.http.client.SimpleClientHttpRequestFactory; +import org.springframework.web.client.RestClient; public class PerplexityAI implements AIModel { @@ -62,11 +64,15 @@ public ChatOptions getChatOptions(ChatCompletion input) { @Override public ChatModel getChatModel() { + SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory(); + factory.setReadTimeout(config.getTimeout()); + OpenAiApi perplexityAI = OpenAiApi.builder() .baseUrl(config.getBaseURL()) .apiKey(config.getApiKey()) .completionsPath(chatPath) + .restClientBuilder(RestClient.builder().requestFactory(factory)) .build(); return OpenAiChatModel.builder().openAiApi(perplexityAI).build(); diff --git a/ai/src/main/java/org/conductoross/conductor/ai/providers/perplexity/PerplexityAIConfiguration.java b/ai/src/main/java/org/conductoross/conductor/ai/providers/perplexity/PerplexityAIConfiguration.java index cf3b0cca41..cccf489b05 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/providers/perplexity/PerplexityAIConfiguration.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/providers/perplexity/PerplexityAIConfiguration.java @@ -12,24 +12,29 @@ */ package org.conductoross.conductor.ai.providers.perplexity; 
+import java.time.Duration; import java.util.Objects; import org.conductoross.conductor.ai.ModelConfiguration; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; -import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; @Data @NoArgsConstructor -@AllArgsConstructor @Component @ConfigurationProperties(prefix = "conductor.ai.perplexity") public class PerplexityAIConfiguration implements ModelConfiguration { private String apiKey; private String baseURL; + private Duration timeout = Duration.ofSeconds(600); + + public PerplexityAIConfiguration(String apiKey, String baseURL) { + this.apiKey = apiKey; + this.baseURL = baseURL; + } public String getBaseURL() { return Objects.isNull(baseURL) ? "https://api.perplexity.ai/" : baseURL; diff --git a/ai/src/main/java/org/conductoross/conductor/ai/tasks/mapper/ChatCompleteTaskMapper.java b/ai/src/main/java/org/conductoross/conductor/ai/tasks/mapper/ChatCompleteTaskMapper.java index f1e6183516..4f2041825e 100644 --- a/ai/src/main/java/org/conductoross/conductor/ai/tasks/mapper/ChatCompleteTaskMapper.java +++ b/ai/src/main/java/org/conductoross/conductor/ai/tasks/mapper/ChatCompleteTaskMapper.java @@ -48,7 +48,7 @@ public class ChatCompleteTaskMapper extends AIModelTaskMapper<ChatCompletion> { private static final Set<String> toolTaskTypes = - Set.of(TASK_TYPE_HTTP, TASK_TYPE_SIMPLE, "MCP"); + Set.of(TASK_TYPE_HTTP, TASK_TYPE_SIMPLE, "MCP", "CALL_MCP_TOOL"); public ChatCompleteTaskMapper() { super(ChatCompletion.NAME); @@ -59,9 +59,10 @@ protected TaskModel getMappedTask(TaskMapperContext taskMapperContext) throws TerminateWorkflowException { TaskModel taskModel = super.getMappedTask(taskMapperContext); WorkflowModel workflowModel = taskMapperContext.getWorkflowModel(); - ChatCompletion chatCompletion = - objectMapper.convertValue(taskModel.getInputData(), ChatCompletion.class); + try { + ChatCompletion chatCompletion = + 
objectMapper.convertValue(taskModel.getInputData(), ChatCompletion.class); List<ChatMessage> history = chatCompletion.getMessages(); if (chatCompletion.getUserInput() != null && chatCompletion.getMessages().isEmpty()) { history.add(new ChatMessage(ChatMessage.Role.user, chatCompletion.getUserInput())); @@ -73,6 +74,7 @@ protected TaskModel getMappedTask(TaskMapperContext taskMapperContext) if (e instanceof TerminateWorkflowException) { throw (TerminateWorkflowException) e; } else { + log.error("input: {}", taskModel.getInputData()); log.error(e.getMessage(), e); throw new TerminateWorkflowException( String.format( @@ -84,7 +86,9 @@ protected TaskModel getMappedTask(TaskMapperContext taskMapperContext) protected void updateTaskModel(ChatCompletion chatCompletion, TaskModel simpleTask) { Map paramReplacement = chatCompletion.getPromptVariables(); - + if (paramReplacement == null) { + paramReplacement = new HashMap<>(); + } List<ChatMessage> messages = chatCompletion.getMessages(); if (messages == null) { messages = new ArrayList<>(); @@ -101,8 +105,7 @@ protected void updateTaskModel(ChatCompletion chatCompletion, TaskModel simpleTa } private void getHistory( - WorkflowModel workflow, TaskModel chatCompleteTask, ChatCompletion chatCompletion) - throws Exception { + WorkflowModel workflow, TaskModel chatCompleteTask, ChatCompletion chatCompletion) { Map> refNameToTask = new HashMap<>(); for (TaskModel task : workflow.getTasks()) { refNameToTask @@ -243,7 +246,6 @@ private void getHistory( } } var msg = new ChatMessage(role, String.valueOf(resultObj)); - log.info("msg: {}", msg); if (response.getMedia() != null) { msg.setMedia(response.getMedia().stream().map(Media::getLocation).toList()); } diff --git a/ai/src/main/java/org/conductoross/conductor/ai/tasks/mapper/PdfGenerationTaskMapper.java b/ai/src/main/java/org/conductoross/conductor/ai/tasks/mapper/PdfGenerationTaskMapper.java new file mode 100644 index 0000000000..3e3b7c568b --- /dev/null +++ 
b/ai/src/main/java/org/conductoross/conductor/ai/tasks/mapper/PdfGenerationTaskMapper.java @@ -0,0 +1,32 @@ +/* + * Copyright 2026 Conductor Authors. + *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.tasks.mapper; + +import org.conductoross.conductor.ai.models.MarkdownToPdfRequest; +import org.conductoross.conductor.config.AIIntegrationEnabledCondition; +import org.springframework.context.annotation.Conditional; +import org.springframework.stereotype.Component; + +import lombok.extern.slf4j.Slf4j; + +@Component +@Conditional(AIIntegrationEnabledCondition.class) +@Slf4j +public class PdfGenerationTaskMapper extends AIModelTaskMapper<MarkdownToPdfRequest> { + + public static final String NAME = "GENERATE_PDF"; + + public PdfGenerationTaskMapper() { + super(NAME); + } +} diff --git a/ai/src/main/java/org/conductoross/conductor/ai/tasks/worker/DocumentGenWorkers.java b/ai/src/main/java/org/conductoross/conductor/ai/tasks/worker/DocumentGenWorkers.java new file mode 100644 index 0000000000..f1af6588ee --- /dev/null +++ b/ai/src/main/java/org/conductoross/conductor/ai/tasks/worker/DocumentGenWorkers.java @@ -0,0 +1,111 @@ +/* + * Copyright 2026 Conductor Authors. + *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.tasks.worker; + +import java.util.List; +import java.util.Map; +import java.util.UUID; + +import org.conductoross.conductor.ai.document.DocumentLoader; +import org.conductoross.conductor.ai.models.LLMResponse; +import org.conductoross.conductor.ai.models.MarkdownToPdfRequest; +import org.conductoross.conductor.ai.models.Media; +import org.conductoross.conductor.ai.pdf.MarkdownToPdfConverter; +import org.conductoross.conductor.config.AIIntegrationEnabledCondition; +import org.conductoross.conductor.core.execution.tasks.AnnotatedSystemTaskWorker; +import org.springframework.context.annotation.Conditional; +import org.springframework.core.env.Environment; +import org.springframework.stereotype.Component; + +import com.netflix.conductor.common.metadata.tasks.Task; +import com.netflix.conductor.sdk.workflow.executor.task.NonRetryableException; +import com.netflix.conductor.sdk.workflow.executor.task.TaskContext; +import com.netflix.conductor.sdk.workflow.task.WorkerTask; + +import lombok.extern.slf4j.Slf4j; + +import static org.apache.commons.lang3.StringUtils.isBlank; + +/** Worker for document generation tasks such as PDF generation from markdown. 
*/ +@Slf4j +@Component +@Conditional(AIIntegrationEnabledCondition.class) +public class DocumentGenWorkers implements AnnotatedSystemTaskWorker { + + private final MarkdownToPdfConverter pdfConverter; + private final List<DocumentLoader> documentLoaders; + private final String payloadStoreLocation; + + public DocumentGenWorkers( + MarkdownToPdfConverter pdfConverter, + List<DocumentLoader> documentLoaders, + Environment env) { + this.pdfConverter = pdfConverter; + this.documentLoaders = documentLoaders; + this.payloadStoreLocation = + env.getProperty( + "conductor.file-storage.parentDir", + System.getProperty("user.home") + "/worker-payload/"); + log.info("Document Workers initialized"); + } + + @WorkerTask("GENERATE_PDF") + public LLMResponse generatePdf(MarkdownToPdfRequest input) { + if (isBlank(input.getMarkdown())) { + throw new NonRetryableException("markdown input is required for GENERATE_PDF task"); + } + + Task task = TaskContext.get().getTask(); + + // Convert markdown to PDF bytes + byte[] pdfBytes = pdfConverter.convert(input); + + // Determine output location + String outputLocation = input.getOutputLocation(); + if (isBlank(outputLocation)) { + outputLocation = + payloadStoreLocation + + task.getWorkflowInstanceId() + + "/" + + task.getTaskId() + + "/" + + UUID.randomUUID() + + ".pdf"; + } + + // Store via DocumentLoader + String storedLocation = storeDocument(outputLocation, pdfBytes); + + return LLMResponse.builder() + .result(Map.of("location", storedLocation, "sizeBytes", pdfBytes.length)) + .media( + List.of( + Media.builder() + .location(storedLocation) + .mimeType("application/pdf") + .build())) + .finishReason("COMPLETED") + .build(); + } + + private String storeDocument(String location, byte[] data) { + return documentLoaders.stream() + .filter(loader -> loader.supports(location)) + .findFirst() + .map(loader -> loader.upload(Map.of(), "application/pdf", data, location)) + .orElseThrow( + () -> + new NonRetryableException( + "No DocumentLoader supports output location: " + 
location)); + } +} diff --git a/ai/src/test/java/org/conductoross/conductor/ai/document/DocumentAccessPolicyTest.java b/ai/src/test/java/org/conductoross/conductor/ai/document/DocumentAccessPolicyTest.java new file mode 100644 index 0000000000..f7a28ad57e --- /dev/null +++ b/ai/src/test/java/org/conductoross/conductor/ai/document/DocumentAccessPolicyTest.java @@ -0,0 +1,474 @@ +/* + * Copyright 2026 Conductor Authors. + *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.document; + +import java.util.List; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Nested; +import org.junit.jupiter.api.Test; +import org.springframework.core.env.Environment; + +import static org.junit.jupiter.api.Assertions.*; +import static org.mockito.Mockito.*; + +class DocumentAccessPolicyTest { + + private DocumentAccessPolicy policy; + private Environment env; + + @BeforeEach + void setUp() { + env = mock(Environment.class); + // Default: no file-storage.parentDir set — uses ~/worker-payload/ fallback + when(env.getProperty("conductor.file-storage.parentDir")).thenReturn(null); + policy = new DocumentAccessPolicy(env); + // Simulate @PostConstruct + policy.resolveEffectiveAllowedDirectories(); + } + + // ======================================================================== + // Blocklist — local filesystem sensitive paths + // ======================================================================== + + @Test + void shouldBlockEtcPasswd() { + assertThrows( + DocumentAccessDeniedException.class, () -> policy.validateAccess("/etc/passwd")); + } + + @Test + void shouldBlockEtcShadow() { + assertThrows( + DocumentAccessDeniedException.class, () -> policy.validateAccess("/etc/shadow")); + } + + @Test + void shouldBlockEtcSshDirectory() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/etc/ssh/sshd_config")); + } + + @Test + void shouldBlockProcSelfEnviron() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/proc/self/environ")); + } + + @Test + void shouldBlockFileUriScheme() { + assertThrows( + 
DocumentAccessDeniedException.class, + () -> policy.validateAccess("file:///etc/passwd")); + } + + @Test + void shouldBlockKubernetesSecrets() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/var/run/secrets/kubernetes.io/serviceaccount/token")); + } + + @Test + void shouldBlockDockerSecrets() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/run/secrets/db_password")); + } + + // ======================================================================== + // Blocklist — sensitive file names + // ======================================================================== + + @Test + void shouldBlockDotEnvFile() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/config/.env")); + } + + @Test + void shouldBlockPrivateKey() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/home/user/.ssh/id_rsa")); + } + + @Test + void shouldBlockCredentialsJson() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/credentials.json")); + } + + @Test + void shouldBlockKeystoreJks() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/opt/app/keystore.jks")); + } + + // ======================================================================== + // Blocklist — cloud metadata endpoints + // ======================================================================== + + @Test + void shouldBlockAwsMetadata() { + assertThrows( + DocumentAccessDeniedException.class, + () -> + policy.validateAccess( + "http://169.254.169.254/latest/meta-data/iam/security-credentials/")); + } + + @Test + void shouldBlockAwsEcsMetadata() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("http://169.254.170.2/v2/credentials/guid")); + } + + @Test + void shouldBlockGcpMetadata() { + assertThrows( + DocumentAccessDeniedException.class, + () -> 
policy.validateAccess("http://metadata.google.internal/computeMetadata/v1/")); + } + + @Test + void shouldBlockAlibabaMetadata() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("http://100.100.100.200/latest/meta-data/")); + } + + @Test + void shouldBlockAwsDnsAlias() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("http://instance-data.ec2.internal/latest/meta-data/")); + } + + @Test + void shouldBlockKubernetesApiServer() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("https://kubernetes.default.svc/api/v1/secrets")); + } + + // ======================================================================== + // Link-local range detection (SSRF bypass prevention) + // ======================================================================== + + @Test + void shouldBlockLinkLocalViaResolvedAddress() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("http://169.254.169.253/something")); + } + + @Test + void shouldBlockLoopbackAddress() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("http://127.0.0.1/admin")); + } + + @Test + void shouldBlockLocalhostViaResolution() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("http://localhost/admin")); + } + + // ======================================================================== + // Platform-specific paths + // ======================================================================== + + @Test + void shouldBlockDockerSocket() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/var/run/docker.sock")); + } + + @Test + void shouldBlockKubernetesNodeConfig() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/etc/kubernetes/admin.conf")); + } + + @Test + void shouldBlockWindowsRegistryHive() { + assertThrows( + 
DocumentAccessDeniedException.class, + () -> policy.validateAccess("C:/Windows/System32/config/SAM")); + } + + @Test + void shouldBlockWindowsUnattendXml() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("C:/Windows/Panther/Unattend.xml")); + } + + @Test + void shouldBlockTerraformState() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/infra/terraform.tfstate")); + } + + @Test + void shouldBlockVaultToken() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/deploy/.vault-token")); + } + + @Test + void shouldBlockGcpApplicationDefaultCredentials() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/config/application_default_credentials.json")); + } + + @Test + void shouldBlockMavenSettings() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/home/user/.m2/settings.xml")); + } + + @Test + void shouldBlockEnvStaging() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/.env.staging")); + } + + @Test + void shouldBlockWebConfig() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("C:/inetpub/wwwroot/web.config")); + } + + @Test + void shouldBlockMacOsKeychain() { + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/Library/Keychains/System.keychain")); + } + + @Test + void shouldBlockPowerShellHistory() { + assertThrows( + DocumentAccessDeniedException.class, + () -> + policy.validateAccess( + "C:/Users/admin/AppData/Roaming/PSReadLine/ConsoleHost_history.txt")); + } + + // ======================================================================== + // Path traversal + // ======================================================================== + + @Test + void shouldBlockPathTraversal() { + assertThrows( + DocumentAccessDeniedException.class, + () -> 
policy.validateAccess("/app/data/../../etc/passwd")); + } + + // ======================================================================== + // Safe paths — should be allowed (paths under ~/worker-payload/ default) + // ======================================================================== + + @Test + void shouldAllowPathUnderDefaultPayloadDir() { + String payloadDir = System.getProperty("user.home") + "/worker-payload/"; + assertDoesNotThrow(() -> policy.validateAccess(payloadDir + "wf123/task456/report.pdf")); + } + + @Test + void shouldAllowNormalHttpUrl() { + assertDoesNotThrow(() -> policy.validateAccess("https://cdn.example.com/image.png")); + } + + @Test + void shouldAllowFileUriUnderPayloadDir() { + String payloadDir = System.getProperty("user.home") + "/worker-payload/"; + assertDoesNotThrow(() -> policy.validateAccess("file://" + payloadDir + "output.pdf")); + } + + // ======================================================================== + // Disabled policy + // ======================================================================== + + @Test + void shouldAllowEverythingWhenDisabled() { + policy.setDisabled(true); + assertDoesNotThrow(() -> policy.validateAccess("/etc/passwd")); + assertDoesNotThrow(() -> policy.validateAccess("http://169.254.169.254/latest/meta-data/")); + } + + // ======================================================================== + // Custom blocklist extensions + // ======================================================================== + + @Test + void shouldBlockCustomPathPrefix() { + policy.setBlockedPathPrefixes(List.of("/custom/sensitive/")); + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/custom/sensitive/data.txt")); + } + + @Test + void shouldBlockCustomFileName() { + policy.setBlockedFileNames(List.of("secret.yaml")); + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/config/secret.yaml")); + } + + @Test + void shouldBlockCustomHost() { 
+ policy.setBlockedHosts(List.of("internal.corp.net")); + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("http://internal.corp.net/api/secret")); + } + + // ======================================================================== + // Allowed directories — derived from file-storage.parentDir + // ======================================================================== + + @Nested + class AllowedDirectoriesTests { + + @Test + void shouldAutoIncludeParentDirFromConfig() { + when(env.getProperty("conductor.file-storage.parentDir")) + .thenReturn("/data/conductor/"); + policy = new DocumentAccessPolicy(env); + policy.resolveEffectiveAllowedDirectories(); + + assertDoesNotThrow(() -> policy.validateAccess("/data/conductor/wf/task/report.pdf")); + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/other/path/report.pdf")); + } + + @Test + void shouldFallbackToDefaultPayloadDirWhenParentDirNotSet() { + when(env.getProperty("conductor.file-storage.parentDir")).thenReturn(null); + policy = new DocumentAccessPolicy(env); + policy.resolveEffectiveAllowedDirectories(); + + String defaultDir = System.getProperty("user.home") + "/worker-payload/"; + assertDoesNotThrow(() -> policy.validateAccess(defaultDir + "wf/task/report.pdf")); + } + + @Test + void shouldAllowAdditionalDirectoriesBeyondParentDir() { + when(env.getProperty("conductor.file-storage.parentDir")) + .thenReturn("/data/conductor/"); + policy = new DocumentAccessPolicy(env); + policy.setAllowedDirectories(List.of("/tmp/imports/", "/data/shared/")); + policy.resolveEffectiveAllowedDirectories(); + + // parentDir is allowed + assertDoesNotThrow(() -> policy.validateAccess("/data/conductor/output.pdf")); + // additional dirs are allowed + assertDoesNotThrow(() -> policy.validateAccess("/tmp/imports/input.csv")); + assertDoesNotThrow(() -> policy.validateAccess("/data/shared/image.png")); + // anything else is denied + assertThrows( + 
DocumentAccessDeniedException.class, + () -> policy.validateAccess("/home/user/report.pdf")); + } + + @Test + void shouldDenyPathOutsideAllowedDirectories() { + when(env.getProperty("conductor.file-storage.parentDir")) + .thenReturn("/data/conductor/"); + policy = new DocumentAccessPolicy(env); + policy.resolveEffectiveAllowedDirectories(); + + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/documents/report.pdf")); + } + + @Test + void shouldDenyFileUriOutsideAllowedDirectories() { + when(env.getProperty("conductor.file-storage.parentDir")) + .thenReturn("/data/conductor/"); + policy = new DocumentAccessPolicy(env); + policy.resolveEffectiveAllowedDirectories(); + + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("file:///home/user/secret.txt")); + } + + @Test + void shouldNotApplyAllowedDirectoriesToHttpUrls() { + when(env.getProperty("conductor.file-storage.parentDir")) + .thenReturn("/data/conductor/"); + policy = new DocumentAccessPolicy(env); + policy.resolveEffectiveAllowedDirectories(); + + assertDoesNotThrow(() -> policy.validateAccess("https://cdn.example.com/image.png")); + } + + @Test + void shouldStillBlockSensitiveFilesEvenInsideAllowedDir() { + when(env.getProperty("conductor.file-storage.parentDir")).thenReturn("/app/"); + policy = new DocumentAccessPolicy(env); + policy.resolveEffectiveAllowedDirectories(); + + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/app/config/.env")); + } + + @Test + void shouldHandleParentDirWithoutTrailingSlash() { + when(env.getProperty("conductor.file-storage.parentDir")).thenReturn("/data/conductor"); + policy = new DocumentAccessPolicy(env); + policy.resolveEffectiveAllowedDirectories(); + + assertDoesNotThrow(() -> policy.validateAccess("/data/conductor/docs/report.pdf")); + } + + @Test + void shouldIncludeEffectiveDirectoriesInDenialMessage() { + when(env.getProperty("conductor.file-storage.parentDir")) 
+ .thenReturn("/data/conductor/"); + policy = new DocumentAccessPolicy(env); + policy.resolveEffectiveAllowedDirectories(); + + DocumentAccessDeniedException ex = + assertThrows( + DocumentAccessDeniedException.class, + () -> policy.validateAccess("/unauthorized/path/file.txt")); + assertTrue(ex.getMessage().contains("/data/conductor/")); + } + } +} diff --git a/ai/src/test/java/org/conductoross/conductor/ai/integration/AIModelIntegrationTest.java b/ai/src/test/java/org/conductoross/conductor/ai/integration/AIModelIntegrationTest.java index d01c2915a5..6afacf2a9f 100644 --- a/ai/src/test/java/org/conductoross/conductor/ai/integration/AIModelIntegrationTest.java +++ b/ai/src/test/java/org/conductoross/conductor/ai/integration/AIModelIntegrationTest.java @@ -282,7 +282,7 @@ void testChatCompletionHaiku() { assertNotNull(chatModel); ChatCompletion input = new ChatCompletion(); - input.setModel("claude-3-5-haiku-latest"); + input.setModel("claude-haiku-4-5"); input.setMaxTokens(50); input.setTemperature(0.0); @@ -304,7 +304,7 @@ void testDeterministicTemperature() { ChatModel chatModel = anthropic.getChatModel(); ChatCompletion input = new ChatCompletion(); - input.setModel("claude-3-5-haiku-latest"); + input.setModel("claude-haiku-4-5"); input.setMaxTokens(20); input.setTemperature(0.0); // Deterministic diff --git a/ai/src/test/java/org/conductoross/conductor/ai/mapper/PdfGenerationTaskMapperTest.java b/ai/src/test/java/org/conductoross/conductor/ai/mapper/PdfGenerationTaskMapperTest.java new file mode 100644 index 0000000000..b90671a2f6 --- /dev/null +++ b/ai/src/test/java/org/conductoross/conductor/ai/mapper/PdfGenerationTaskMapperTest.java @@ -0,0 +1,134 @@ +/* + * Copyright 2026 Conductor Authors. + *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+package org.conductoross.conductor.ai.mapper;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.conductoross.conductor.ai.tasks.mapper.PdfGenerationTaskMapper;
+import org.junit.jupiter.api.Test;
+
+import com.netflix.conductor.common.metadata.tasks.TaskDef;
+import com.netflix.conductor.common.metadata.workflow.WorkflowDef;
+import com.netflix.conductor.common.metadata.workflow.WorkflowTask;
+import com.netflix.conductor.core.execution.mapper.TaskMapperContext;
+import com.netflix.conductor.core.utils.IDGenerator;
+import com.netflix.conductor.model.TaskModel;
+import com.netflix.conductor.model.WorkflowModel;
+
+import static org.junit.jupiter.api.Assertions.*;
+
+public class PdfGenerationTaskMapperTest {
+
+    @Test
+    public void testTaskMapperReturnsCorrectTaskType() {
+        PdfGenerationTaskMapper mapper = new PdfGenerationTaskMapper();
+        assertEquals("GENERATE_PDF", mapper.getTaskType());
+    }
+
+    @Test
+    public void testTaskMapperCreatesScheduledTask() {
+        String taskType = "GENERATE_PDF";
+        WorkflowTask workflowTask = new WorkflowTask();
+        workflowTask.setName("pdf_generation_task");
+        workflowTask.setType(taskType);
+        String taskId = new IDGenerator().generate();
+
+        WorkflowModel workflow = new WorkflowModel();
+        WorkflowDef workflowDef = new WorkflowDef();
+        workflow.setWorkflowDefinition(workflowDef);
+
+        Map<String, Object> taskInputs = new HashMap<>();
+        taskInputs.put("markdown", "# Test Document\n\nHello World");
+        taskInputs.put("pageSize", "A4");
+
+        TaskMapperContext taskMapperContext =
+                getTaskMapperContext(workflow, workflowTask, taskId, taskInputs);
+
+        PdfGenerationTaskMapper mapper = new PdfGenerationTaskMapper();
+        List<TaskModel> mappedTasks = mapper.getMappedTasks(taskMapperContext);
+
+        assertEquals(1, mappedTasks.size());
+        assertEquals(taskType, mappedTasks.get(0).getTaskType());
+        assertEquals(TaskModel.Status.SCHEDULED, mappedTasks.get(0).getStatus());
+    }
+
+    @Test
+    public void testTaskMapperPreservesInputParameters() {
+        String taskType = "GENERATE_PDF";
+        WorkflowTask workflowTask = new WorkflowTask();
+        workflowTask.setName("pdf_task");
+        workflowTask.setType(taskType);
+        String taskId = new IDGenerator().generate();
+
+        WorkflowModel workflow = new WorkflowModel();
+        WorkflowDef workflowDef = new WorkflowDef();
+        workflow.setWorkflowDefinition(workflowDef);
+
+        Map<String, Object> taskInputs = new HashMap<>();
+        taskInputs.put("markdown", "# Report");
+        taskInputs.put("pageSize", "LETTER");
+        taskInputs.put("theme", "compact");
+        taskInputs.put("baseFontSize", 12f);
+
+        TaskMapperContext taskMapperContext =
+                getTaskMapperContext(workflow, workflowTask, taskId, taskInputs);
+
+        PdfGenerationTaskMapper mapper = new PdfGenerationTaskMapper();
+        List<TaskModel> mappedTasks = mapper.getMappedTasks(taskMapperContext);
+
+        assertEquals(1, mappedTasks.size());
+        Map<String, Object> inputData = mappedTasks.get(0).getInputData();
+        assertEquals("# Report", inputData.get("markdown"));
+        assertEquals("LETTER", inputData.get("pageSize"));
+        assertEquals("compact", inputData.get("theme"));
+    }
+
+    @Test
+    public void testTaskMapperWithEmptyInputs() {
+        String taskType = "GENERATE_PDF";
+        WorkflowTask workflowTask = new WorkflowTask();
+        workflowTask.setName("pdf_task");
+        workflowTask.setType(taskType);
+        String taskId = new IDGenerator().generate();
+
+        WorkflowModel workflow = new WorkflowModel();
+        WorkflowDef workflowDef = new WorkflowDef();
+        workflow.setWorkflowDefinition(workflowDef);
+
+        TaskMapperContext taskMapperContext =
+                getTaskMapperContext(workflow, workflowTask, taskId, null);
+
+        PdfGenerationTaskMapper mapper = new PdfGenerationTaskMapper();
+        List<TaskModel> mappedTasks = mapper.getMappedTasks(taskMapperContext);
+
+        assertEquals(1, mappedTasks.size());
+        assertEquals(TaskModel.Status.SCHEDULED, mappedTasks.get(0).getStatus());
+    }
+
+    protected TaskMapperContext getTaskMapperContext(
+            WorkflowModel workflowModel,
+            WorkflowTask workflowTask,
+            String taskId,
+            Map<String, Object> inputs) {
+        return TaskMapperContext.newBuilder()
+                .withWorkflowModel(workflowModel)
+                .withTaskDefinition(new TaskDef())
+                .withWorkflowTask(workflowTask)
+                .withTaskInput(inputs != null ? inputs : new HashMap<>())
+                .withRetryCount(0)
+                .withTaskId(taskId)
+                .build();
+    }
+}
diff --git a/ai/src/test/java/org/conductoross/conductor/ai/pdf/DocumentGenWorkersTest.java b/ai/src/test/java/org/conductoross/conductor/ai/pdf/DocumentGenWorkersTest.java
new file mode 100644
index 0000000000..1b6c2c83b8
--- /dev/null
+++ b/ai/src/test/java/org/conductoross/conductor/ai/pdf/DocumentGenWorkersTest.java
@@ -0,0 +1,219 @@
+/*
+ * Copyright 2026 Conductor Authors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.pdf; + +import java.util.List; +import java.util.Map; + +import org.conductoross.conductor.ai.document.DocumentLoader; +import org.conductoross.conductor.ai.models.LLMResponse; +import org.conductoross.conductor.ai.models.MarkdownToPdfRequest; +import org.conductoross.conductor.ai.tasks.worker.DocumentGenWorkers; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.mockito.MockedStatic; +import org.springframework.core.env.Environment; + +import com.netflix.conductor.common.metadata.tasks.Task; +import com.netflix.conductor.sdk.workflow.executor.task.NonRetryableException; +import com.netflix.conductor.sdk.workflow.executor.task.TaskContext; + +import static org.junit.jupiter.api.Assertions.*; +import static org.mockito.ArgumentMatchers.*; +import static org.mockito.Mockito.*; + +class DocumentGenWorkersTest { + + private DocumentGenWorkers workers; + private MarkdownToPdfConverter mockConverter; + private DocumentLoader mockLoader; + + @BeforeEach + void setUp() { + mockConverter = mock(MarkdownToPdfConverter.class); + mockLoader = mock(DocumentLoader.class); + when(mockLoader.supports(argThat(s -> s != null && !s.startsWith("s3://")))) + .thenReturn(true); + when(mockLoader.upload(anyMap(), eq("application/pdf"), any(byte[].class), anyString())) + .thenAnswer( + invocation -> { + String uri = invocation.getArgument(3); + return uri; + }); + + Environment env = mock(Environment.class); + when(env.getProperty(eq("conductor.file-storage.parentDir"), anyString())) + .thenReturn("/tmp/test-payload/"); + + workers = new DocumentGenWorkers(mockConverter, List.of(mockLoader), 
env);
+    }
+
+    private void withMockedTaskContext(Runnable action) {
+        Task task = mock(Task.class);
+        when(task.getWorkflowInstanceId()).thenReturn("wf-123");
+        when(task.getTaskId()).thenReturn("task-456");
+
+        TaskContext taskContext = mock(TaskContext.class);
+        when(taskContext.getTask()).thenReturn(task);
+
+        try (MockedStatic<TaskContext> mockedStatic = mockStatic(TaskContext.class)) {
+            mockedStatic.when(TaskContext::get).thenReturn(taskContext);
+            action.run();
+        }
+    }
+
+    @Test
+    void testGeneratePdfWithValidInput() {
+        byte[] pdfBytes = new byte[] {0x25, 0x50, 0x44, 0x46}; // %PDF
+        when(mockConverter.convert(any(MarkdownToPdfRequest.class))).thenReturn(pdfBytes);
+
+        withMockedTaskContext(
+                () -> {
+                    MarkdownToPdfRequest request =
+                            MarkdownToPdfRequest.builder().markdown("# Test\n\nHello").build();
+
+                    LLMResponse response = workers.generatePdf(request);
+
+                    assertNotNull(response);
+                    assertEquals("COMPLETED", response.getFinishReason());
+                    assertNotNull(response.getMedia());
+                    assertEquals(1, response.getMedia().size());
+                    assertEquals("application/pdf", response.getMedia().get(0).getMimeType());
+                    assertNotNull(response.getMedia().get(0).getLocation());
+
+                    // Verify result map
+                    @SuppressWarnings("unchecked")
+                    Map<String, Object> result = (Map<String, Object>) response.getResult();
+                    assertNotNull(result.get("location"));
+                    assertEquals(4, result.get("sizeBytes"));
+                });
+    }
+
+    @Test
+    void testGeneratePdfWithBlankMarkdownThrowsException() {
+        assertThrows(
+                NonRetryableException.class,
+                () -> {
+                    withMockedTaskContext(
+                            () -> {
+                                MarkdownToPdfRequest request =
+                                        MarkdownToPdfRequest.builder().markdown("").build();
+                                workers.generatePdf(request);
+                            });
+                });
+    }
+
+    @Test
+    void testGeneratePdfWithNullMarkdownThrowsException() {
+        assertThrows(
+                NonRetryableException.class,
+                () -> {
+                    withMockedTaskContext(
+                            () -> {
+                                MarkdownToPdfRequest request =
+                                        MarkdownToPdfRequest.builder().markdown(null).build();
+                                workers.generatePdf(request);
+                            });
+                });
+    }
+
+    @Test
+    void testGeneratePdfWithCustomOutputLocation() {
+        byte[] pdfBytes = new byte[] {1, 2, 3};
+        when(mockConverter.convert(any(MarkdownToPdfRequest.class))).thenReturn(pdfBytes);
+
+        withMockedTaskContext(
+                () -> {
+                    MarkdownToPdfRequest request =
+                            MarkdownToPdfRequest.builder()
+                                    .markdown("# Custom Location")
+                                    .outputLocation("file:///custom/path/output.pdf")
+                                    .build();
+
+                    LLMResponse response = workers.generatePdf(request);
+
+                    @SuppressWarnings("unchecked")
+                    Map<String, Object> result = (Map<String, Object>) response.getResult();
+                    assertEquals("file:///custom/path/output.pdf", result.get("location"));
+
+                    verify(mockLoader)
+                            .upload(
+                                    anyMap(),
+                                    eq("application/pdf"),
+                                    eq(pdfBytes),
+                                    eq("file:///custom/path/output.pdf"));
+                });
+    }
+
+    @Test
+    void testGeneratePdfWithDefaultOutputLocation() {
+        byte[] pdfBytes = new byte[] {1, 2, 3};
+        when(mockConverter.convert(any(MarkdownToPdfRequest.class))).thenReturn(pdfBytes);
+
+        withMockedTaskContext(
+                () -> {
+                    MarkdownToPdfRequest request =
+                            MarkdownToPdfRequest.builder().markdown("# Default Location").build();
+
+                    LLMResponse response = workers.generatePdf(request);
+
+                    @SuppressWarnings("unchecked")
+                    Map<String, Object> result = (Map<String, Object>) response.getResult();
+                    String location = (String) result.get("location");
+                    assertTrue(location.startsWith("/tmp/test-payload/"));
+                    assertTrue(location.contains("wf-123"));
+                    assertTrue(location.contains("task-456"));
+                    assertTrue(location.endsWith(".pdf"));
+                });
+    }
+
+    @Test
+    void testGeneratePdfWithUnsupportedOutputLocation() {
+        byte[] pdfBytes = new byte[] {1, 2, 3};
+        when(mockConverter.convert(any(MarkdownToPdfRequest.class))).thenReturn(pdfBytes);
+
+        assertThrows(
+                NonRetryableException.class,
+                () -> {
+                    withMockedTaskContext(
+                            () -> {
+                                MarkdownToPdfRequest request =
+                                        MarkdownToPdfRequest.builder()
+                                                .markdown("# Unsupported")
+                                                .outputLocation("s3://bucket/key.pdf")
+                                                .build();
+                                workers.generatePdf(request);
+                            });
+                });
+    }
+
+    @Test
+    void testConverterIsCalledWithRequest() {
+        byte[] pdfBytes = new
byte[] {1}; + when(mockConverter.convert(any(MarkdownToPdfRequest.class))).thenReturn(pdfBytes); + + withMockedTaskContext( + () -> { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Verify Call") + .pageSize("LETTER") + .theme("compact") + .build(); + + workers.generatePdf(request); + + verify(mockConverter).convert(request); + }); + } +} diff --git a/ai/src/test/java/org/conductoross/conductor/ai/pdf/MarkdownToPdfConverterTest.java b/ai/src/test/java/org/conductoross/conductor/ai/pdf/MarkdownToPdfConverterTest.java new file mode 100644 index 0000000000..e0e01e47f6 --- /dev/null +++ b/ai/src/test/java/org/conductoross/conductor/ai/pdf/MarkdownToPdfConverterTest.java @@ -0,0 +1,1193 @@ +/* + * Copyright 2026 Conductor Authors. + *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.pdf; + +import java.io.IOException; +import java.util.List; +import java.util.Map; + +import org.apache.pdfbox.Loader; +import org.apache.pdfbox.pdmodel.PDDocument; +import org.apache.pdfbox.text.PDFTextStripper; +import org.conductoross.conductor.ai.document.DocumentLoader; +import org.conductoross.conductor.ai.models.MarkdownToPdfRequest; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.DisplayName; +import org.junit.jupiter.api.Nested; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.*; +import static org.mockito.Mockito.*; + +/** + * Comprehensive tests for the MarkdownToPdfConverter. Tests 10+ different markdown document + * structures and validates PDF output correctness, layout options, and edge cases. + */ +class MarkdownToPdfConverterTest { + + private MarkdownToPdfConverter converter; + private PdfImageResolver imageResolver; + + @BeforeEach + void setUp() { + DocumentLoader httpLoader = mock(DocumentLoader.class); + when(httpLoader.supports(argThat(s -> s != null && s.startsWith("http")))).thenReturn(true); + + imageResolver = new PdfImageResolver(List.of(httpLoader)); + converter = new MarkdownToPdfConverter(imageResolver); + } + + /** Helper to convert markdown and return a valid PDDocument for assertions. 
*/ + private PDDocument convertAndLoad(String markdown) throws IOException { + return convertAndLoad(markdown, null); + } + + private PDDocument convertAndLoad(String markdown, String pageSize) throws IOException { + MarkdownToPdfRequest.MarkdownToPdfRequestBuilder builder = + MarkdownToPdfRequest.builder().markdown(markdown); + if (pageSize != null) { + builder.pageSize(pageSize); + } + byte[] pdf = converter.convert(builder.build()); + assertNotNull(pdf); + assertTrue(pdf.length > 0); + return Loader.loadPDF(pdf); + } + + private String extractText(PDDocument doc) throws IOException { + PDFTextStripper stripper = new PDFTextStripper(); + return stripper.getText(doc); + } + + // ======================================================================== + // Document 1: Basic Headings & Paragraphs + // ======================================================================== + + @Nested + @DisplayName("Document 1: Headings and Paragraphs") + class HeadingsAndParagraphs { + + static final String MARKDOWN = + """ + # Main Title + + This is the first paragraph with some introductory text. + + ## Section One + + Content under section one. This has multiple sentences. The paragraph should wrap properly across lines in the PDF. + + ### Subsection 1.1 + + More detailed content here. + + #### Level 4 Heading + + Deep nested content. + + ##### Level 5 Heading + + Even deeper. + + ###### Level 6 Heading + + The deepest heading level. 
+ """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertNotNull(doc); + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsAllHeadingText() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("Main Title")); + assertTrue(text.contains("Section One")); + assertTrue(text.contains("Subsection 1.1")); + assertTrue(text.contains("Level 4 Heading")); + assertTrue(text.contains("Level 5 Heading")); + assertTrue(text.contains("Level 6 Heading")); + } + } + + @Test + void testContainsParagraphText() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("first paragraph")); + assertTrue(text.contains("Content under section one")); + assertTrue(text.contains("More detailed content")); + } + } + } + + // ======================================================================== + // Document 2: Text Emphasis & Formatting + // ======================================================================== + + @Nested + @DisplayName("Document 2: Text Emphasis and Inline Formatting") + class EmphasisAndFormatting { + + static final String MARKDOWN = + """ + # Formatting Test + + This paragraph has **bold text** and *italic text* and ***bold italic text***. + + Here is some ~~strikethrough text~~ mixed with regular text. + + Inline `code` appears within a sentence. Multiple `code spans` can appear. + + A paragraph with **bold at the start** and another with *italic at the end*. + + Mix of all: **bold**, *italic*, `code`, and ~~strikethrough~~ in one paragraph. 
+ """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsFormattedText() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("bold text")); + assertTrue(text.contains("italic text")); + assertTrue(text.contains("strikethrough text")); + assertTrue(text.contains("code")); + } + } + } + + // ======================================================================== + // Document 3: Bullet Lists (Simple, Nested) + // ======================================================================== + + @Nested + @DisplayName("Document 3: Bullet Lists") + class BulletLists { + + static final String MARKDOWN = + """ + # Bullet Lists + + Simple list: + + - First item + - Second item + - Third item + + Nested list: + + - Level 1 item A + - Level 2 item A1 + - Level 2 item A2 + - Level 3 item A2a + - Level 1 item B + - Level 2 item B1 + + Mixed content list: + + - Item with **bold** text + - Item with `inline code` + - Item with a [link](http://example.com) + """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsListItems() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("First item")); + assertTrue(text.contains("Second item")); + assertTrue(text.contains("Third item")); + assertTrue(text.contains("Level 1 item A")); + assertTrue(text.contains("Level 2 item A1")); + assertTrue(text.contains("Level 3 item A2a")); + } + } + } + + // ======================================================================== + // Document 4: Ordered Lists & Task Lists + // ======================================================================== + + @Nested + 
@DisplayName("Document 4: Ordered Lists and Task Lists") + class OrderedAndTaskLists { + + static final String MARKDOWN = + """ + # Ordered Lists + + 1. First step + 2. Second step + 3. Third step + + Nested ordered: + + 1. Main step one + 1. Sub-step 1a + 2. Sub-step 1b + 2. Main step two + + ## Task List + + - [x] Completed task + - [ ] Pending task + - [x] Another done task + - [ ] Still to do + """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsOrderedItems() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("First step")); + assertTrue(text.contains("Second step")); + assertTrue(text.contains("Main step one")); + assertTrue(text.contains("Sub-step 1a")); + } + } + + @Test + void testContainsTaskListItems() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("Completed task")); + assertTrue(text.contains("Pending task")); + } + } + } + + // ======================================================================== + // Document 5: Code Blocks + // ======================================================================== + + @Nested + @DisplayName("Document 5: Code Blocks") + class CodeBlocks { + + static final String MARKDOWN = + """ + # Code Examples + + A fenced code block with language: + + ```java + public class Hello { + public static void main(String[] args) { + System.out.println("Hello, World!"); + } + } + ``` + + A fenced code block without language: + + ``` + plain code block + with multiple lines + ``` + + An indented code block: + + indented line 1 + indented line 2 + indented line 3 + + Code with special characters: + + ``` + if (x < 10 && y > 5) { + result = x + y; + } + ``` + """; + + @Test + void testProducesValidPdf() throws IOException { + 
try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsCodeContent() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("Hello")); + assertTrue(text.contains("main")); + assertTrue(text.contains("plain code block")); + } + } + } + + // ======================================================================== + // Document 6: Tables + // ======================================================================== + + @Nested + @DisplayName("Document 6: Tables") + class Tables { + + static final String MARKDOWN = + """ + # Table Examples + + Simple table: + + | Name | Age | City | + |------|-----|------| + | Alice | 30 | New York | + | Bob | 25 | San Francisco | + | Charlie | 35 | London | + + Table with alignment: + + | Left | Center | Right | + |:-----|:------:|------:| + | L1 | C1 | R1 | + | L2 | C2 | R2 | + + Table with formatting: + + | Feature | Status | Notes | + |---------|--------|-------| + | **Bold** | `Active` | Works well | + | *Italic* | `Pending` | In progress | + """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsTableData() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("Alice")); + assertTrue(text.contains("30")); + assertTrue(text.contains("New York")); + assertTrue(text.contains("Bob")); + assertTrue(text.contains("San Francisco")); + } + } + } + + // ======================================================================== + // Document 7: Blockquotes + // ======================================================================== + + @Nested + @DisplayName("Document 7: Blockquotes") + class Blockquotes { + + static final String MARKDOWN = + """ + # Blockquotes + + A 
simple blockquote: + + > This is a quoted paragraph. It should be indented with a vertical bar on the left. + + Nested blockquotes: + + > Outer quote + > > Inner quote + > > > Deeply nested quote + + Blockquote with formatting: + + > **Important:** This blockquote contains *formatted* text and `inline code`. + > + > It also spans multiple paragraphs within the same quote block. + + Regular text after the blockquote. + """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsBlockquoteText() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("quoted paragraph")); + assertTrue(text.contains("Outer quote")); + assertTrue(text.contains("Inner quote")); + assertTrue(text.contains("Important")); + } + } + } + + // ======================================================================== + // Document 8: Links & Horizontal Rules + // ======================================================================== + + @Nested + @DisplayName("Document 8: Links and Horizontal Rules") + class LinksAndRules { + + static final String MARKDOWN = + """ + # Links and Rules + + A paragraph with an [inline link](https://example.com) in the middle. + + Another [link with title](https://example.com "Example Site") here. + + An auto-linked URL: https://www.conductor.community + + --- + + Content after the first horizontal rule. + + *** + + Content after the second horizontal rule. + + ___ + + Final content. 
+ """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsLinkText() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("inline link")); + assertTrue(text.contains("link with title")); + assertTrue(text.contains("Content after the first horizontal rule")); + assertTrue(text.contains("Final content")); + } + } + } + + // ======================================================================== + // Document 9: Images (with mocked resolver) + // ======================================================================== + + @Nested + @DisplayName("Document 9: Images") + class Images { + + static final String MARKDOWN = + """ + # Document with Images + + Here is an image: + + ![Sample Image](http://example.com/sample.png) + + Text continues after the image. + + Another image below: + + ![Logo](http://example.com/logo.jpg) + + Final paragraph. 
+ """; + + @Test + void testProducesValidPdfWithMissingImages() throws IOException { + // Images will fail to load (mocked loader returns null for download) + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + String text = extractText(doc); + // Should show placeholder text for missing images + assertTrue(text.contains("Sample Image") || text.contains("Image")); + assertTrue(text.contains("Text continues after the image")); + } + } + + @Test + void testImageWithDataUri() throws IOException { + // Create a tiny 1x1 PNG + // This is a minimal valid 1x1 red PNG (67 bytes) + byte[] pngBytes = { + (byte) 0x89, + 0x50, + 0x4E, + 0x47, + 0x0D, + 0x0A, + 0x1A, + 0x0A, + 0x00, + 0x00, + 0x00, + 0x0D, + 0x49, + 0x48, + 0x44, + 0x52, + 0x00, + 0x00, + 0x00, + 0x01, + 0x00, + 0x00, + 0x00, + 0x01, + 0x08, + 0x02, + 0x00, + 0x00, + 0x00, + (byte) 0x90, + 0x77, + 0x53, + (byte) 0xDE, + 0x00, + 0x00, + 0x00, + 0x0C, + 0x49, + 0x44, + 0x41, + 0x54, + 0x08, + (byte) 0xD7, + 0x63, + (byte) 0xF8, + (byte) 0xCF, + (byte) 0xC0, + 0x00, + 0x00, + 0x00, + 0x02, + 0x00, + 0x01, + (byte) 0xE2, + 0x21, + (byte) 0xBC, + 0x33, + 0x00, + 0x00, + 0x00, + 0x00, + 0x49, + 0x45, + 0x4E, + 0x44, + (byte) 0xAE, + 0x42, + 0x60, + (byte) 0x82 + }; + String base64 = java.util.Base64.getEncoder().encodeToString(pngBytes); + String mdWithDataUri = + "# Image Test\n\n![Tiny](data:image/png;base64," + base64 + ")\n\nDone."; + + try (PDDocument doc = convertAndLoad(mdWithDataUri)) { + assertTrue(doc.getNumberOfPages() >= 1); + String text = extractText(doc); + assertTrue(text.contains("Image Test")); + assertTrue(text.contains("Done")); + } + } + } + + // ======================================================================== + // Document 10: Complex Mixed Document + // ======================================================================== + + @Nested + @DisplayName("Document 10: Complex Mixed Document") + class ComplexMixed { + + static final String MARKDOWN 
= + """ + # Project Status Report + + **Date:** January 15, 2026 + **Author:** Conductor Team + + ## Executive Summary + + This report covers the progress of the *Conductor* project over Q4 2025. Key highlights include: + + - Completed AI integration module + - Launched vector database support + - Released MCP tool calling + + --- + + ## Technical Progress + + ### Backend Services + + The backend team delivered several critical features: + + 1. **Chat Completion API** - Full support for 11+ LLM providers + 2. **Media Generation** - Image, audio, and video generation + 3. **Vector Search** - MongoDB, PostgreSQL, and Pinecone + + > **Note:** All features are gated behind the `conductor.integrations.ai.enabled` flag. + + ### Code Sample + + Here is an example workflow definition: + + ```json + { + "name": "ai-workflow", + "tasks": [ + { + "type": "LLM_CHAT_COMPLETE", + "inputParameters": { + "llmProvider": "openai", + "model": "gpt-4" + } + } + ] + } + ``` + + ### Performance Metrics + + | Metric | Q3 2025 | Q4 2025 | Change | + |--------|---------|---------|--------| + | Latency (p99) | 450ms | 320ms | -29% | + | Throughput | 1200 rps | 1800 rps | +50% | + | Error Rate | 0.5% | 0.2% | -60% | + + ## Action Items + + - [x] Deploy v4.0 to production + - [x] Complete documentation update + - [ ] Performance testing for v4.1 + - [ ] Security audit review + + ## Conclusion + + The team has made *excellent* progress. For more details, visit the [project wiki](https://wiki.example.com). 
+ + --- + + *Report generated by Conductor Workflow Engine* + """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsAllSections() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("Project Status Report")); + assertTrue(text.contains("Executive Summary")); + assertTrue(text.contains("Technical Progress")); + assertTrue(text.contains("Backend Services")); + assertTrue(text.contains("Performance Metrics")); + assertTrue(text.contains("Action Items")); + assertTrue(text.contains("Conclusion")); + } + } + + @Test + void testContainsTableData() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("Latency")); + assertTrue(text.contains("320ms")); + assertTrue(text.contains("Throughput")); + } + } + + @Test + void testContainsCodeBlock() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("ai-workflow")); + assertTrue(text.contains("LLM_CHAT_COMPLETE")); + } + } + } + + // ======================================================================== + // Document 11: Long Document (Page Break Testing) + // ======================================================================== + + @Nested + @DisplayName("Document 11: Long Document with Page Breaks") + class LongDocument { + + @Test + void testLongDocumentCreatesMultiplePages() throws IOException { + StringBuilder md = new StringBuilder("# Long Document\n\n"); + for (int i = 1; i <= 100; i++) { + md.append("## Section ").append(i).append("\n\n"); + md.append("This is paragraph ") + .append(i) + .append(". It contains enough text to take up some space on the page. 
") + .append("We need to verify that page breaks happen correctly ") + .append("and content flows from one page to the next.\n\n"); + } + + try (PDDocument doc = convertAndLoad(md.toString())) { + assertTrue( + doc.getNumberOfPages() > 1, + "Long document should span multiple pages, got " + doc.getNumberOfPages()); + } + } + + @Test + void testLongDocumentPreservesAllContent() throws IOException { + StringBuilder md = new StringBuilder("# Content Preservation Test\n\n"); + for (int i = 1; i <= 50; i++) { + md.append("- Item number ").append(i).append("\n"); + } + + try (PDDocument doc = convertAndLoad(md.toString())) { + String text = extractText(doc); + assertTrue(text.contains("Item number 1")); + assertTrue(text.contains("Item number 25")); + assertTrue(text.contains("Item number 50")); + } + } + } + + // ======================================================================== + // Document 12: Edge Cases + // ======================================================================== + + @Nested + @DisplayName("Document 12: Edge Cases") + class EdgeCases { + + @Test + void testMinimalMarkdown() throws IOException { + try (PDDocument doc = convertAndLoad("Hello")) { + assertTrue(doc.getNumberOfPages() >= 1); + String text = extractText(doc); + assertTrue(text.contains("Hello")); + } + } + + @Test + void testOnlyHeading() throws IOException { + try (PDDocument doc = convertAndLoad("# Just a Title")) { + assertTrue(doc.getNumberOfPages() >= 1); + String text = extractText(doc); + assertTrue(text.contains("Just a Title")); + } + } + + @Test + void testEmptyParagraphs() throws IOException { + String md = "First\n\n\n\n\nSecond"; + try (PDDocument doc = convertAndLoad(md)) { + assertTrue(doc.getNumberOfPages() >= 1); + String text = extractText(doc); + assertTrue(text.contains("First")); + assertTrue(text.contains("Second")); + } + } + + @Test + void testSpecialCharacters() throws IOException { + String md = "# Special Characters\n\nAmpersan: & | Angle brackets: < > | 
Quotes: \" '"; + try (PDDocument doc = convertAndLoad(md)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testVeryLongSingleLine() throws IOException { + String longLine = "Word ".repeat(500); + String md = "# Long Line Test\n\n" + longLine; + try (PDDocument doc = convertAndLoad(md)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testOnlyCodeBlock() throws IOException { + String md = "```\nfunction hello() {\n return 'world';\n}\n```"; + try (PDDocument doc = convertAndLoad(md)) { + assertTrue(doc.getNumberOfPages() >= 1); + String text = extractText(doc); + assertTrue(text.contains("hello")); + } + } + + @Test + void testOnlyTable() throws IOException { + String md = "| A | B |\n|---|---|\n| 1 | 2 |\n| 3 | 4 |"; + try (PDDocument doc = convertAndLoad(md)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testOnlyList() throws IOException { + String md = "- one\n- two\n- three"; + try (PDDocument doc = convertAndLoad(md)) { + assertTrue(doc.getNumberOfPages() >= 1); + String text = extractText(doc); + assertTrue(text.contains("one")); + assertTrue(text.contains("two")); + assertTrue(text.contains("three")); + } + } + + @Test + void testWhitespaceOnlyMarkdown() throws IOException { + try (PDDocument doc = convertAndLoad(" \n\n \n")) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + } + + // ======================================================================== + // Layout Options Tests + // ======================================================================== + + @Nested + @DisplayName("Layout Options") + class LayoutOptions { + + @Test + void testA4PageSize() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# A4 Test\n\nContent") + .pageSize("A4") + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + // A4 = 595.28 x 841.89 points + float width = doc.getPage(0).getMediaBox().getWidth(); + 
float height = doc.getPage(0).getMediaBox().getHeight(); + assertEquals(595.28f, width, 1f); + assertEquals(841.89f, height, 1f); + } + } + + @Test + void testLetterPageSize() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Letter Test\n\nContent") + .pageSize("LETTER") + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + // LETTER = 612 x 792 points + float width = doc.getPage(0).getMediaBox().getWidth(); + float height = doc.getPage(0).getMediaBox().getHeight(); + assertEquals(612f, width, 1f); + assertEquals(792f, height, 1f); + } + } + + @Test + void testLegalPageSize() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Legal Test\n\nContent") + .pageSize("LEGAL") + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + // LEGAL = 612 x 1008 points + float width = doc.getPage(0).getMediaBox().getWidth(); + float height = doc.getPage(0).getMediaBox().getHeight(); + assertEquals(612f, width, 1f); + assertEquals(1008f, height, 1f); + } + } + + @Test + void testUnknownPageSizeDefaultsToA4() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Unknown Size\n\nContent") + .pageSize("CUSTOM_UNKNOWN") + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + float width = doc.getPage(0).getMediaBox().getWidth(); + assertEquals(595.28f, width, 1f); + } + } + + @Test + void testCaseInsensitivePageSize() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Case Test\n\nContent") + .pageSize("letter") + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + float width = doc.getPage(0).getMediaBox().getWidth(); + assertEquals(612f, width, 1f); + } + } + + @Test + void 
testCustomMargins() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Margin Test\n\nContent") + .marginTop(36f) + .marginRight(36f) + .marginBottom(36f) + .marginLeft(36f) + .build(); + byte[] pdf = converter.convert(request); + assertNotNull(pdf); + assertTrue(pdf.length > 0); + } + + @Test + void testLargeMargins() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Big Margins\n\nTight content area") + .marginTop(144f) + .marginRight(144f) + .marginBottom(144f) + .marginLeft(144f) + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + assertNotNull(doc); + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testCompactTheme() throws IOException { + StringBuilder md = new StringBuilder("# Compact Theme Test\n\n"); + for (int i = 1; i <= 30; i++) { + md.append("Paragraph ").append(i).append(". Some content here.\n\n"); + } + MarkdownToPdfRequest defaultReq = + MarkdownToPdfRequest.builder().markdown(md.toString()).theme("default").build(); + MarkdownToPdfRequest compactReq = + MarkdownToPdfRequest.builder().markdown(md.toString()).theme("compact").build(); + + byte[] defaultPdf = converter.convert(defaultReq); + byte[] compactPdf = converter.convert(compactReq); + + // Compact should be smaller or equal (less spacing) + try (PDDocument defaultDoc = Loader.loadPDF(defaultPdf); + PDDocument compactDoc = Loader.loadPDF(compactPdf)) { + assertTrue( + compactDoc.getNumberOfPages() <= defaultDoc.getNumberOfPages(), + "Compact theme should use equal or fewer pages than default"); + } + } + + @Test + void testCustomFontSize() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Font Size 14\n\nLarger text.") + .baseFontSize(14f) + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + assertNotNull(doc); + } + } + 
+ @Test + void testSmallFontSize() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Font Size 8\n\nSmall text.") + .baseFontSize(8f) + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + assertNotNull(doc); + } + } + } + + // ======================================================================== + // PDF Metadata Tests + // ======================================================================== + + @Nested + @DisplayName("PDF Metadata") + class PdfMetadata { + + @Test + void testMetadataIsSet() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Metadata Test") + .pdfMetadata( + Map.of( + "title", "Test Document", + "author", "Unit Test", + "subject", "Testing", + "keywords", "test, pdf, markdown")) + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + assertEquals("Test Document", doc.getDocumentInformation().getTitle()); + assertEquals("Unit Test", doc.getDocumentInformation().getAuthor()); + assertEquals("Testing", doc.getDocumentInformation().getSubject()); + assertEquals("test, pdf, markdown", doc.getDocumentInformation().getKeywords()); + } + } + + @Test + void testPartialMetadata() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder() + .markdown("# Partial Metadata") + .pdfMetadata(Map.of("title", "Only Title")) + .build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + assertEquals("Only Title", doc.getDocumentInformation().getTitle()); + assertNull(doc.getDocumentInformation().getAuthor()); + } + } + + @Test + void testNoMetadata() throws IOException { + MarkdownToPdfRequest request = + MarkdownToPdfRequest.builder().markdown("# No Metadata").build(); + byte[] pdf = converter.convert(request); + try (PDDocument doc = Loader.loadPDF(pdf)) { + assertNotNull(doc); + // Title 
should be null since no metadata was provided + assertNull(doc.getDocumentInformation().getTitle()); + } + } + } + + // ======================================================================== + // Document 13: Deeply Nested Structures + // ======================================================================== + + @Nested + @DisplayName("Document 13: Deeply Nested Structures") + class DeeplyNested { + + static final String MARKDOWN = + """ + # Nested Structures + + > Quote with a list: + > + > - Item in quote A + > - Item in quote B + > - Nested in nested + + > Quote with code: + > + > ``` + > code inside quote + > ``` + + List with multiple paragraphs per item: + + - First item + + Continuation paragraph for first item. + + - Second item + + Continuation paragraph for second item. + + 1. Ordered with nested bullets + - Bullet inside ordered + - Another bullet + 2. Second ordered item + 1. Sub-ordered + 2. Sub-ordered 2 + """; + + @Test + void testProducesValidPdf() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + assertTrue(doc.getNumberOfPages() >= 1); + } + } + + @Test + void testContainsNestedContent() throws IOException { + try (PDDocument doc = convertAndLoad(MARKDOWN)) { + String text = extractText(doc); + assertTrue(text.contains("Item in quote A")); + assertTrue(text.contains("code inside quote")); + assertTrue(text.contains("Continuation paragraph")); + assertTrue(text.contains("Bullet inside ordered")); + } + } + } + + // ======================================================================== + // Conversion Error Handling + // ======================================================================== + + @Nested + @DisplayName("Error Handling") + class ErrorHandling { + + @Test + void testNullMarkdownThrowsException() { + MarkdownToPdfRequest request = MarkdownToPdfRequest.builder().markdown(null).build(); + assertThrows(Exception.class, () -> converter.convert(request)); + } + } +} diff --git 
a/ai/src/test/java/org/conductoross/conductor/ai/pdf/PdfImageResolverTest.java b/ai/src/test/java/org/conductoross/conductor/ai/pdf/PdfImageResolverTest.java new file mode 100644 index 0000000000..b32473d296 --- /dev/null +++ b/ai/src/test/java/org/conductoross/conductor/ai/pdf/PdfImageResolverTest.java @@ -0,0 +1,200 @@ +/* + * Copyright 2026 Conductor Authors. + *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.pdf; + +import java.util.Base64; +import java.util.List; + +import org.conductoross.conductor.ai.document.DocumentLoader; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.*; +import static org.mockito.Mockito.*; + +class PdfImageResolverTest { + + private DocumentLoader httpLoader; + private DocumentLoader fileLoader; + private PdfImageResolver resolver; + + @BeforeEach + void setUp() { + httpLoader = mock(DocumentLoader.class); + when(httpLoader.supports("http://example.com/image.png")).thenReturn(true); + when(httpLoader.supports("https://example.com/image.jpg")).thenReturn(true); + when(httpLoader.supports( + argThat( + s -> + s != null + && (s.startsWith("http://") + || s.startsWith("https://"))))) + .thenReturn(true); + + fileLoader = mock(DocumentLoader.class); + when(fileLoader.supports(argThat(s -> s != null && s.startsWith("file://")))) + .thenReturn(true); + + resolver = new PdfImageResolver(List.of(httpLoader, fileLoader)); + } + + // ========== Data URI Tests ========== + + @Test + void testResolveBase64DataUri() { + byte[] original = {1, 2, 3, 4, 5}; + String encoded = Base64.getEncoder().encodeToString(original); + String dataUri = "data:image/png;base64," + encoded; + + byte[] result = resolver.resolve(dataUri, null); + + assertNotNull(result); + assertArrayEquals(original, result); + } + + @Test + void testResolveDataUriWithDifferentMimeType() { + byte[] original = "SVG content".getBytes(); + String encoded = Base64.getEncoder().encodeToString(original); + String dataUri = "data:image/svg+xml;base64," + encoded; + + 
byte[] result = resolver.resolve(dataUri, null); + + assertNotNull(result); + assertArrayEquals(original, result); + } + + @Test + void testResolveInvalidDataUriReturnsNull() { + // data URI without comma separator + byte[] result = resolver.resolve("data:image/png;base64", null); + assertNull(result); + } + + // ========== HTTP/HTTPS URL Tests ========== + + @Test + void testResolveHttpUrl() { + byte[] imageBytes = {10, 20, 30}; + when(httpLoader.download("http://example.com/image.png")).thenReturn(imageBytes); + + byte[] result = resolver.resolve("http://example.com/image.png", null); + + assertNotNull(result); + assertArrayEquals(imageBytes, result); + verify(httpLoader).download("http://example.com/image.png"); + } + + @Test + void testResolveHttpsUrl() { + byte[] imageBytes = {40, 50, 60}; + when(httpLoader.download("https://example.com/image.jpg")).thenReturn(imageBytes); + + byte[] result = resolver.resolve("https://example.com/image.jpg", null); + + assertNotNull(result); + assertArrayEquals(imageBytes, result); + } + + @Test + void testResolveHttpUrlFailureReturnsNull() { + when(httpLoader.download("http://example.com/missing.png")) + .thenThrow(new RuntimeException("404")); + + byte[] result = resolver.resolve("http://example.com/missing.png", null); + + assertNull(result); + } + + // ========== File URL Tests ========== + + @Test + void testResolveFileUrl() { + byte[] imageBytes = {70, 80, 90}; + when(fileLoader.download("file:///path/to/image.png")).thenReturn(imageBytes); + + byte[] result = resolver.resolve("file:///path/to/image.png", null); + + assertNotNull(result); + assertArrayEquals(imageBytes, result); + } + + // ========== Relative Path Tests ========== + + @Test + void testResolveRelativePathWithBaseUrl() { + byte[] imageBytes = {11, 22, 33}; + when(httpLoader.download("https://cdn.example.com/assets/logo.png")).thenReturn(imageBytes); + + byte[] result = resolver.resolve("logo.png", "https://cdn.example.com/assets/"); + + 
assertNotNull(result); + assertArrayEquals(imageBytes, result); + verify(httpLoader).download("https://cdn.example.com/assets/logo.png"); + } + + @Test + void testResolveRelativePathWithBaseUrlNoTrailingSlash() { + byte[] imageBytes = {44, 55, 66}; + when(httpLoader.download("https://cdn.example.com/assets/logo.png")).thenReturn(imageBytes); + + byte[] result = resolver.resolve("logo.png", "https://cdn.example.com/assets"); + + assertNotNull(result); + verify(httpLoader).download("https://cdn.example.com/assets/logo.png"); + } + + @Test + void testResolveRelativePathWithoutBaseUrlFails() { + // No base URL, no loader supports bare relative path + byte[] result = resolver.resolve("images/logo.png", null); + + assertNull(result); + } + + @Test + void testResolveAbsoluteUrlIgnoresBaseUrl() { + byte[] imageBytes = {77, 88, 99}; + when(httpLoader.download("http://other.com/pic.png")).thenReturn(imageBytes); + + // Absolute URL should ignore the base URL + byte[] result = resolver.resolve("http://other.com/pic.png", "https://cdn.example.com/"); + + assertNotNull(result); + verify(httpLoader).download("http://other.com/pic.png"); + } + + // ========== Null and Edge Case Tests ========== + + @Test + void testResolveNullSrcReturnsNull() { + assertNull(resolver.resolve(null, null)); + } + + @Test + void testResolveEmptySrcReturnsNull() { + assertNull(resolver.resolve("", null)); + } + + @Test + void testResolveBlankSrcReturnsNull() { + assertNull(resolver.resolve(" ", null)); + } + + @Test + void testResolveWithNoSupportingLoaderReturnsNull() { + PdfImageResolver emptyResolver = new PdfImageResolver(List.of()); + byte[] result = emptyResolver.resolve("http://example.com/img.png", null); + assertNull(result); + } +} diff --git a/ai/src/test/java/org/conductoross/conductor/ai/pdf/PdfRenderContextTest.java b/ai/src/test/java/org/conductoross/conductor/ai/pdf/PdfRenderContextTest.java new file mode 100644 index 0000000000..d744955670 --- /dev/null +++ 
b/ai/src/test/java/org/conductoross/conductor/ai/pdf/PdfRenderContextTest.java @@ -0,0 +1,266 @@ +/* + * Copyright 2026 Conductor Authors. + *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package org.conductoross.conductor.ai.pdf; + +import java.io.IOException; + +import org.apache.pdfbox.pdmodel.PDDocument; +import org.apache.pdfbox.pdmodel.common.PDRectangle; +import org.junit.jupiter.api.AfterEach; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import static org.junit.jupiter.api.Assertions.*; + +class PdfRenderContextTest { + + private PDDocument document; + private PdfRenderContext ctx; + + @BeforeEach + void setUp() { + document = new PDDocument(); + ctx = + new PdfRenderContext( + document, + PDRectangle.A4, + 72f, // marginTop + 72f, // marginRight + 72f, // marginBottom + 72f, // marginLeft + 11f, // baseFontSize + false); // compact + } + + @AfterEach + void tearDown() throws IOException { + if (ctx.getContentStream() != null) { + ctx.getContentStream().close(); + } + document.close(); + } + + // ========== Construction Tests ========== + + @Test + void testDefaultConstructionSetsProperties() { + assertEquals(PDRectangle.A4.getWidth(), ctx.getPageWidth(), 0.01f); + assertEquals(PDRectangle.A4.getHeight(), ctx.getPageHeight(), 0.01f); + assertEquals(72f, ctx.getMarginTop(), 0.01f); + assertEquals(72f, ctx.getMarginRight(), 0.01f); + assertEquals(72f, ctx.getMarginBottom(), 0.01f); + assertEquals(72f, ctx.getMarginLeft(), 0.01f); + assertEquals(11f, ctx.getBaseFontSize(), 0.01f); + assertEquals(1.5f, ctx.getLineSpacing(), 0.01f); + assertFalse(ctx.isCompact()); + } + + @Test + void testCompactModeReducesLineSpacing() throws IOException { + PdfRenderContext compactCtx = + new PdfRenderContext(document, PDRectangle.A4, 72f, 72f, 72f, 72f, 11f, true); + assertEquals(1.3f, 
compactCtx.getLineSpacing(), 0.01f); + assertTrue(compactCtx.isCompact()); + } + + @Test + void testFontsAreInitialized() { + assertNotNull(ctx.getRegularFont()); + assertNotNull(ctx.getBoldFont()); + assertNotNull(ctx.getItalicFont()); + assertNotNull(ctx.getBoldItalicFont()); + assertNotNull(ctx.getMonoFont()); + assertNotNull(ctx.getMonoBoldFont()); + } + + // ========== Layout Calculation Tests ========== + + @Test + void testContentWidthWithDefaultMargins() { + // A4 width = 595.28, margins = 72 + 72 = 144 + float expected = PDRectangle.A4.getWidth() - 72f - 72f; + assertEquals(expected, ctx.getContentWidth(), 0.01f); + } + + @Test + void testContentWidthWithListIndent() { + ctx.setListIndentLevel(1); + float expectedWithIndent = PDRectangle.A4.getWidth() - 72f - 72f - 20f; + assertEquals(expectedWithIndent, ctx.getContentWidth(), 0.01f); + } + + @Test + void testContentWidthWithNestedListIndent() { + ctx.setListIndentLevel(3); + float expectedWithIndent = PDRectangle.A4.getWidth() - 72f - 72f - 60f; + assertEquals(expectedWithIndent, ctx.getContentWidth(), 0.01f); + } + + @Test + void testContentWidthWithBlockquote() { + ctx.setInBlockquote(true); + float expectedWithBQ = PDRectangle.A4.getWidth() - 72f - 72f - 15f; + assertEquals(expectedWithBQ, ctx.getContentWidth(), 0.01f); + } + + @Test + void testContentWidthWithBlockquoteAndListIndent() { + ctx.setInBlockquote(true); + ctx.setListIndentLevel(2); + float expected = PDRectangle.A4.getWidth() - 72f - 72f - 40f - 15f; + assertEquals(expected, ctx.getContentWidth(), 0.01f); + } + + @Test + void testLeftXWithDefaultMargins() { + assertEquals(72f, ctx.getLeftX(), 0.01f); + } + + @Test + void testLeftXWithListIndent() { + ctx.setListIndentLevel(2); + assertEquals(72f + 40f, ctx.getLeftX(), 0.01f); + } + + @Test + void testRightX() { + float expected = PDRectangle.A4.getWidth() - 72f; + assertEquals(expected, ctx.getRightX(), 0.01f); + } + + // ========== Line Height Tests ========== + + @Test + void 
testLineHeightDefaultSpacing() { + // lineSpacing = 1.5 (non-compact) + assertEquals(11f * 1.5f, ctx.getLineHeight(11f), 0.01f); + } + + @Test + void testLineHeightCompactSpacing() throws IOException { + PdfRenderContext compactCtx = + new PdfRenderContext(document, PDRectangle.A4, 72f, 72f, 72f, 72f, 11f, true); + assertEquals(11f * 1.3f, compactCtx.getLineHeight(11f), 0.01f); + } + + @Test + void testLineHeightWithDifferentFontSizes() { + assertEquals(24f * 1.5f, ctx.getLineHeight(24f), 0.01f); + assertEquals(8f * 1.5f, ctx.getLineHeight(8f), 0.01f); + } + + // ========== Page Management Tests ========== + + @Test + void testNewPageCreatesPageAndContentStream() throws IOException { + ctx.newPage(); + + assertNotNull(ctx.getContentStream()); + assertEquals(1, document.getNumberOfPages()); + float expectedCursorY = PDRectangle.A4.getHeight() - 72f; + assertEquals(expectedCursorY, ctx.getCursorY(), 0.01f); + } + + @Test + void testMultipleNewPagesIncreasePageCount() throws IOException { + ctx.newPage(); + ctx.newPage(); + ctx.newPage(); + + assertEquals(3, document.getNumberOfPages()); + } + + @Test + void testAdvanceCursorMovesDown() throws IOException { + ctx.newPage(); + float initial = ctx.getCursorY(); + ctx.advanceCursor(50f); + assertEquals(initial - 50f, ctx.getCursorY(), 0.01f); + } + + @Test + void testEnsureSpaceCreatesNewPageWhenNeeded() throws IOException { + ctx.newPage(); + // Move cursor near the bottom + ctx.setCursorY(ctx.getMarginBottom() + 5f); + + // Request more space than available + ctx.ensureSpace(20f); + + // Should have created a new page + assertEquals(2, document.getNumberOfPages()); + float expectedCursorY = PDRectangle.A4.getHeight() - 72f; + assertEquals(expectedCursorY, ctx.getCursorY(), 0.01f); + } + + @Test + void testEnsureSpaceDoesNotCreatePageWhenEnoughRoom() throws IOException { + ctx.newPage(); + float savedY = ctx.getCursorY(); + + // Request small amount of space + ctx.ensureSpace(10f); + + // Should stay on same page + 
assertEquals(1, document.getNumberOfPages()); + assertEquals(savedY, ctx.getCursorY(), 0.01f); + } + + // ========== Text Width Tests ========== + + @Test + void testTextWidthReturnsPositiveValue() throws IOException { + float width = ctx.getTextWidth("Hello World", ctx.getRegularFont(), 11f); + assertTrue(width > 0f); + } + + @Test + void testTextWidthScalesWithFontSize() throws IOException { + float width11 = ctx.getTextWidth("Test", ctx.getRegularFont(), 11f); + float width22 = ctx.getTextWidth("Test", ctx.getRegularFont(), 22f); + assertEquals(width11 * 2, width22, 0.01f); + } + + @Test + void testEmptyTextHasZeroWidth() throws IOException { + float width = ctx.getTextWidth("", ctx.getRegularFont(), 11f); + assertEquals(0f, width, 0.01f); + } + + @Test + void testMonoFontHasConsistentCharWidth() throws IOException { + // Courier is monospaced, so all chars should have same width + float widthI = ctx.getTextWidth("iii", ctx.getMonoFont(), 11f); + float widthM = ctx.getTextWidth("mmm", ctx.getMonoFont(), 11f); + assertEquals(widthI, widthM, 0.01f); + } + + // ========== Different Page Sizes ========== + + @Test + void testLetterPageSize() throws IOException { + PdfRenderContext letterCtx = + new PdfRenderContext(document, PDRectangle.LETTER, 72f, 72f, 72f, 72f, 11f, false); + assertEquals(PDRectangle.LETTER.getWidth(), letterCtx.getPageWidth(), 0.01f); + assertEquals(PDRectangle.LETTER.getHeight(), letterCtx.getPageHeight(), 0.01f); + } + + @Test + void testCustomMargins() throws IOException { + PdfRenderContext customCtx = + new PdfRenderContext(document, PDRectangle.A4, 36f, 50f, 36f, 50f, 12f, false); + float expectedWidth = PDRectangle.A4.getWidth() - 50f - 50f; + assertEquals(expectedWidth, customCtx.getContentWidth(), 0.01f); + assertEquals(50f, customCtx.getLeftX(), 0.01f); + } +} diff --git a/build.gradle b/build.gradle index 4827ad3160..0cf3af768b 100644 --- a/build.gradle +++ b/build.gradle @@ -8,7 +8,7 @@ buildscript { } } dependencies { - classpath 
'org.springframework.boot:spring-boot-gradle-plugin:3.3.5' + classpath 'org.springframework.boot:spring-boot-gradle-plugin:3.3.11' classpath 'com.diffplug.spotless:spotless-plugin-gradle:6.+' } } @@ -68,6 +68,20 @@ allprojects { if (details.requested.group == 'org.apache.commons' && details.requested.name == 'commons-lang3') { details.useVersion "3.18.0" } + // Security: CVE-2025-12183 - force lz4-java to patched version + if (details.requested.group == 'org.lz4' && details.requested.name == 'lz4-java') { + details.useVersion '1.8.1' + } + } + // Security: at.yawk.lz4:lz4-java declares Gradle capability org.lz4:lz4-java in its + // module metadata. When lz4-java is upgraded to 1.8.1 (CVE-2025-12183), both modules + // claim the same capability and Gradle can't resolve the conflict automatically. + // Tell Gradle to always prefer the canonical org.lz4 artifact. + resolutionStrategy.capabilitiesResolution.withCapability('org.lz4:lz4-java') { resolution -> + def preferred = resolution.candidates.find { it.id.group == 'org.lz4' } + if (preferred) { + resolution.select(preferred) + } } } } @@ -78,7 +92,7 @@ allprojects { dependencyManagement { imports { - // dependency versions for the BOM can be found at https://docs.spring.io/spring-boot/docs/3.1.4/reference/htmlsingle/#appendix.dependency-versions + // dependency versions for the BOM can be found at https://docs.spring.io/spring-boot/docs/3.3.11/reference/htmlsingle/#appendix.dependency-versions mavenBom(SpringBootPlugin.BOM_COORDINATES) } } @@ -109,6 +123,19 @@ allprojects { } } implementation('org.apache.tomcat.embed:tomcat-embed-core') + + // Security: force minimum versions for vulnerable transitive dependencies + constraints { + implementation('org.apache.tika:tika-core:3.2.2') { + because 'CVE-2025-66516: tika-parser-pdf-module vulnerability' + } + implementation('commons-beanutils:commons-beanutils:1.11.0') { + because 'CVE-2025-48734: PropertyUtilsBean enum suppression' + } + 
implementation('com.microsoft.sqlserver:mssql-jdbc:12.8.2.jre11') { + because 'CVE-2025-59250: improper input validation' + } + } } // processes additional configuration metadata json file as described here // https://docs.spring.io/spring-boot/docs/2.3.1.RELEASE/reference/html/appendix-configuration-metadata.html#configuration-metadata-additional-metadata diff --git a/core/src/main/java/com/netflix/conductor/core/execution/WorkflowExecutorOps.java b/core/src/main/java/com/netflix/conductor/core/execution/WorkflowExecutorOps.java index fa70b8f9f0..56d9c08a89 100644 --- a/core/src/main/java/com/netflix/conductor/core/execution/WorkflowExecutorOps.java +++ b/core/src/main/java/com/netflix/conductor/core/execution/WorkflowExecutorOps.java @@ -917,8 +917,11 @@ public TaskModel updateTask(TaskResult taskResult) { LOGGER.error(errorMsg, e); } - taskResult.getLogs().forEach(taskExecLog -> taskExecLog.setTaskId(task.getTaskId())); - executionDAOFacade.addTaskExecLog(taskResult.getLogs()); + List<TaskExecLog> taskLogs = taskResult.getLogs(); + if (taskLogs != null) { + taskLogs.forEach(taskExecLog -> taskExecLog.setTaskId(task.getTaskId())); + executionDAOFacade.addTaskExecLog(taskLogs); + } if (task.getStatus().isTerminal()) { long duration = getTaskDuration(0, task); diff --git a/core/src/main/java/com/netflix/conductor/core/execution/mapper/SimpleTaskMapper.java b/core/src/main/java/com/netflix/conductor/core/execution/mapper/SimpleTaskMapper.java index 0e1368cd1a..db06679fff 100644 --- a/core/src/main/java/com/netflix/conductor/core/execution/mapper/SimpleTaskMapper.java +++ b/core/src/main/java/com/netflix/conductor/core/execution/mapper/SimpleTaskMapper.java @@ -24,7 +24,6 @@ import com.netflix.conductor.common.metadata.tasks.TaskType; import com.netflix.conductor.common.metadata.workflow.WorkflowDef; import com.netflix.conductor.common.metadata.workflow.WorkflowTask; -import com.netflix.conductor.core.exception.TerminateWorkflowException; import 
com.netflix.conductor.core.utils.ParametersUtils; import com.netflix.conductor.model.TaskModel; import com.netflix.conductor.model.WorkflowModel; @@ -55,12 +54,10 @@ public String getTaskType() { * * @param taskMapperContext: A wrapper class containing the {@link WorkflowTask}, {@link * WorkflowDef}, {@link WorkflowModel} and a string representation of the TaskId - * @throws TerminateWorkflowException In case if the task definition does not exist * @return a List with just one simple task */ @Override - public List<TaskModel> getMappedTasks(TaskMapperContext taskMapperContext) - throws TerminateWorkflowException { + public List<TaskModel> getMappedTasks(TaskMapperContext taskMapperContext) { LOGGER.debug("TaskMapperContext {} in SimpleTaskMapper", taskMapperContext); @@ -71,13 +68,12 @@ public List<TaskModel> getMappedTasks(TaskMapperContext taskMapperContext) TaskDef taskDefinition = Optional.ofNullable(workflowTask.getTaskDefinition()) - .orElseThrow( + .orElseGet( () -> { - String reason = - String.format( - "Invalid task. 
Task %s does not have a definition", - workflowTask.getName()); - return new TerminateWorkflowException(reason); + LOGGER.warn( + "Task {} does not have a definition, using defaults", + workflowTask.getName()); + return new TaskDef(); }); Map<String, Object> input = diff --git a/core/src/main/java/com/netflix/conductor/validations/WorkflowTaskTypeConstraint.java b/core/src/main/java/com/netflix/conductor/validations/WorkflowTaskTypeConstraint.java index d80fd07f50..07f4f06ae9 100644 --- a/core/src/main/java/com/netflix/conductor/validations/WorkflowTaskTypeConstraint.java +++ b/core/src/main/java/com/netflix/conductor/validations/WorkflowTaskTypeConstraint.java @@ -257,7 +257,11 @@ private boolean isSwitchTaskValid( private boolean isDoWhileTaskValid( WorkflowTask workflowTask, ConstraintValidatorContext context) { boolean valid = true; - if (workflowTask.getLoopCondition() == null) { + boolean isListIterationMode = + (workflowTask.getItems() != null && !workflowTask.getItems().trim().isEmpty()) + || (workflowTask.getInputParameters() != null + && workflowTask.getInputParameters().containsKey("_items")); + if (!isListIterationMode && workflowTask.getLoopCondition() == null) { String message = String.format( PARAM_REQUIRED_STRING_FORMAT, @@ -407,14 +411,23 @@ private boolean isHttpTaskValid( boolean valid = true; boolean isInputParameterSet = false; boolean isInputTemplateSet = false; + boolean isTopLevelInputSet = false; // Either http_request in WorkflowTask inputParam should be set or in inputTemplate - // Taskdef should be set + // Taskdef should be set, or the input parameters can be provided at the top level + // (e.g. 
uri, method directly in inputParameters) if (workflowTask.getInputParameters() != null && workflowTask.getInputParameters().containsKey("http_request")) { isInputParameterSet = true; } + // Check if top-level HTTP parameters are provided (uri or method) + if (workflowTask.getInputParameters() != null + && (workflowTask.getInputParameters().containsKey("uri") + || workflowTask.getInputParameters().containsKey("method"))) { + isTopLevelInputSet = true; + } + TaskDef taskDef = Optional.ofNullable(workflowTask.getTaskDefinition()) .orElse( @@ -427,11 +440,11 @@ private boolean isHttpTaskValid( isInputTemplateSet = true; } - if (!(isInputParameterSet || isInputTemplateSet)) { + if (!(isInputParameterSet || isInputTemplateSet || isTopLevelInputSet)) { String message = String.format( PARAM_REQUIRED_STRING_FORMAT, - "inputParameters.http_request", + "inputParameters.http_request or inputParameters.uri", TaskType.HTTP, workflowTask.getName()); context.buildConstraintViolationWithTemplate(message).addConstraintViolation(); diff --git a/core/src/test/java/com/netflix/conductor/core/execution/mapper/SimpleTaskMapperTest.java b/core/src/test/java/com/netflix/conductor/core/execution/mapper/SimpleTaskMapperTest.java index bebf2e249d..0bc7c04b4a 100644 --- a/core/src/test/java/com/netflix/conductor/core/execution/mapper/SimpleTaskMapperTest.java +++ b/core/src/test/java/com/netflix/conductor/core/execution/mapper/SimpleTaskMapperTest.java @@ -16,14 +16,11 @@ import java.util.List; import org.junit.Before; -import org.junit.Rule; import org.junit.Test; -import org.junit.rules.ExpectedException; import com.netflix.conductor.common.metadata.tasks.TaskDef; import com.netflix.conductor.common.metadata.workflow.WorkflowDef; import com.netflix.conductor.common.metadata.workflow.WorkflowTask; -import com.netflix.conductor.core.exception.TerminateWorkflowException; import com.netflix.conductor.core.utils.IDGenerator; import com.netflix.conductor.core.utils.ParametersUtils; import 
com.netflix.conductor.model.TaskModel; @@ -39,8 +36,6 @@ public class SimpleTaskMapperTest { private IDGenerator idGenerator = new IDGenerator(); - @Rule public ExpectedException expectedException = ExpectedException.none(); - @Before public void setUp() { ParametersUtils parametersUtils = mock(ParametersUtils.class); @@ -78,9 +73,9 @@ public void getMappedTasks() { } @Test - public void getMappedTasksException() { + public void getMappedTasksWithNoTaskDefinition() { - // Given + // Given a workflow task without a task definition WorkflowTask workflowTask = new WorkflowTask(); workflowTask.setName("simple_task"); String taskId = idGenerator.generate(); @@ -101,14 +96,12 @@ public void getMappedTasksException() { .withTaskId(taskId) .build(); - // then - expectedException.expect(TerminateWorkflowException.class); - expectedException.expectMessage( - String.format( - "Invalid task. Task %s does not have a definition", - workflowTask.getName())); - // when - simpleTaskMapper.getMappedTasks(taskMapperContext); + List<TaskModel> mappedTasks = simpleTaskMapper.getMappedTasks(taskMapperContext); + + // then a task is created with default task definition values + assertNotNull(mappedTasks); + assertEquals(1, mappedTasks.size()); + assertEquals(TaskModel.Status.SCHEDULED, mappedTasks.get(0).getStatus()); } } diff --git a/core/src/test/java/com/netflix/conductor/validations/WorkflowTaskTypeConstraintTest.java b/core/src/test/java/com/netflix/conductor/validations/WorkflowTaskTypeConstraintTest.java index f7fb3cd71f..a975ac88a5 100644 --- a/core/src/test/java/com/netflix/conductor/validations/WorkflowTaskTypeConstraintTest.java +++ b/core/src/test/java/com/netflix/conductor/validations/WorkflowTaskTypeConstraintTest.java @@ -171,6 +171,46 @@ public void testWorkflowTaskTypeDoWhile() { "loopOver field is required for taskType: DO_WHILE taskName: encode")); } + @Test + public void testWorkflowTaskTypeDoWhileListIterationMode() { + // When 'items' is set, loopCondition should NOT be 
required (list-iteration mode) + WorkflowTask workflowTask = createSampleWorkflowTask(); + workflowTask.setType("DO_WHILE"); + workflowTask.setItems("${workflow.input.itemList}"); + + WorkflowTask loopTask = new WorkflowTask(); + loopTask.setName("http"); + loopTask.setTaskReferenceName("http_ref"); + loopTask.setType("HTTP"); + workflowTask.setLoopOver(List.of(loopTask)); + + when(mockMetadataDao.getTaskDef(anyString())).thenReturn(new TaskDef()); + + Set<ConstraintViolation<WorkflowTask>> result = validator.validate(workflowTask); + assertEquals(0, result.size()); + } + + @Test + public void testWorkflowTaskTypeDoWhileListIterationModeOrkesCompat() { + // When '_items' is in inputParameters (Orkes compat), loopCondition should NOT be required + WorkflowTask workflowTask = createSampleWorkflowTask(); + workflowTask.setType("DO_WHILE"); + Map<String, Object> inputParam = new HashMap<>(); + inputParam.put("_items", "${workflow.input.itemList}"); + workflowTask.setInputParameters(inputParam); + + WorkflowTask loopTask = new WorkflowTask(); + loopTask.setName("http"); + loopTask.setTaskReferenceName("http_ref"); + loopTask.setType("HTTP"); + workflowTask.setLoopOver(List.of(loopTask)); + + when(mockMetadataDao.getTaskDef(anyString())).thenReturn(new TaskDef()); + + Set<ConstraintViolation<WorkflowTask>> result = validator.validate(workflowTask); + assertEquals(0, result.size()); + } + @Test public void testWorkflowTaskTypeWait() { WorkflowTask workflowTask = createSampleWorkflowTask(); @@ -344,7 +384,32 @@ public void testWorkflowTaskTypeHTTPWithHttpParamMissing() { assertTrue( validationErrors.contains( - "inputParameters.http_request field is required for taskType: HTTP taskName: encode")); + "inputParameters.http_request or inputParameters.uri field is required for taskType: HTTP taskName: encode")); + } + + @Test + public void testWorkflowTaskTypeHTTPWithTopLevelParams() { + WorkflowTask workflowTask = createSampleWorkflowTask(); + workflowTask.setType("HTTP"); + workflowTask.getInputParameters().put("uri", "http://www.netflix.com"); + 
workflowTask.getInputParameters().put("method", "GET"); + + when(mockMetadataDao.getTaskDef(anyString())).thenReturn(new TaskDef()); + + Set<ConstraintViolation<WorkflowTask>> result = validator.validate(workflowTask); + assertEquals(0, result.size()); + } + + @Test + public void testWorkflowTaskTypeHTTPWithTopLevelUriOnly() { + WorkflowTask workflowTask = createSampleWorkflowTask(); + workflowTask.setType("HTTP"); + workflowTask.getInputParameters().put("uri", "http://www.netflix.com"); + + when(mockMetadataDao.getTaskDef(anyString())).thenReturn(new TaskDef()); + + Set<ConstraintViolation<WorkflowTask>> result = validator.validate(workflowTask); + assertEquals(0, result.size()); } @Test diff --git a/dependencies.gradle b/dependencies.gradle index a55706865c..1389ec34ac 100644 --- a/dependencies.gradle +++ b/dependencies.gradle @@ -75,4 +75,5 @@ ext { revMCP = '0.13.0' revCommonsCompress = '1.26.1' pinecone = '3.0.0' + revFlexmark = '0.64.8' } diff --git a/docker/server/config/config.properties b/docker/server/config/config.properties index a85b024333..ff6554beb8 100755 --- a/docker/server/config/config.properties +++ b/docker/server/config/config.properties @@ -40,3 +40,13 @@ # conductor.elasticsearch.version=7 # conductor.elasticsearch.indexName=conductor +# =====================================================# +# Status Notifier Configuration Properties +# =====================================================# + +# Task statuses to publish (comma-separated list) +# conductor.status-notifier.notification.subscribed-task-statuses=SCHEDULED,COMPLETED,FAILED,TIMED_OUT,IN_PROGRESS + +# Workflow statuses to publish (comma-separated list) +# Valid values: RUNNING,COMPLETED,FAILED,TIMED_OUT,TERMINATED,PAUSED,RESUMED,RESTARTED,RETRIED,RERAN,FINALIZED +# conductor.status-notifier.notification.subscribed-workflow-statuses=RUNNING,COMPLETED,FAILED,TIMED_OUT,TERMINATED,PAUSED diff --git a/http-task/src/main/java/com/netflix/conductor/tasks/http/HttpTask.java b/http-task/src/main/java/com/netflix/conductor/tasks/http/HttpTask.java index 
180c1238ea..55ff5886a5 100644 --- a/http-task/src/main/java/com/netflix/conductor/tasks/http/HttpTask.java +++ b/http-task/src/main/java/com/netflix/conductor/tasks/http/HttpTask.java @@ -13,10 +13,12 @@ package com.netflix.conductor.tasks.http; import java.io.IOException; +import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.stream.Collectors; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -34,6 +36,7 @@ import com.netflix.conductor.model.WorkflowModel; import com.netflix.conductor.tasks.http.providers.RestTemplateProvider; +import com.fasterxml.jackson.annotation.JsonSetter; import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.JsonNode; import com.fasterxml.jackson.databind.ObjectMapper; @@ -51,7 +54,7 @@ public class HttpTask extends WorkflowSystemTask { static final String MISSING_REQUEST = "Missing HTTP request. Task input MUST have a '" + REQUEST_PARAMETER_NAME - + "' key with HttpTask.Input as value. See documentation for HttpTask for required input parameters"; + + "' key with HttpTask.Input as value OR provide the input parameters directly. 
See documentation for HttpTask for required input parameters"; private final TypeReference<Map<String, Object>> mapOfObj = new TypeReference<Map<String, Object>>() {}; @@ -77,12 +80,10 @@ public HttpTask( @Override public void start(WorkflowModel workflow, TaskModel task, WorkflowExecutor executor) { Object request = task.getInputData().get(requestParameter); - task.setWorkerId(Utils.getServerId()); if (request == null) { - task.setReasonForIncompletion(MISSING_REQUEST); - task.setStatus(TaskModel.Status.FAILED); - return; + request = task.getInputData(); } + task.setWorkerId(Utils.getServerId()); Input input = objectMapper.convertValue(request, Input.class); if (input.getUri() == null) { @@ -153,7 +154,10 @@ protected HttpResponse httpCall(Input input) throws Exception { HttpHeaders headers = new HttpHeaders(); headers.setContentType(MediaType.valueOf(input.getContentType())); - headers.setAccept(Collections.singletonList(MediaType.valueOf(input.getAccept()))); + headers.setAccept( + input.getAcceptList().stream() + .map(MediaType::valueOf) + .collect(Collectors.toList())); input.headers.forEach( (key, value) -> { @@ -264,7 +268,8 @@ public static class Input { private Map<String, Object> headers = new HashMap<>(); private String uri; private Object body; - private String accept = MediaType.APPLICATION_JSON_VALUE; + private List<String> accept = + new ArrayList<>(Collections.singletonList(MediaType.APPLICATION_JSON_VALUE)); private String contentType = MediaType.APPLICATION_JSON_VALUE; private Integer connectionTimeOut = 3000; private Integer readTimeOut = 3000; @@ -340,17 +345,36 @@ public void setVipAddress(String vipAddress) { } /** - * @return the accept + * @return the first accept media type (for backward compatibility) */ public String getAccept() { + return accept != null && !accept.isEmpty() + ? 
accept.get(0) + : MediaType.APPLICATION_JSON_VALUE; + } + + /** + * @return the full list of accept media types + */ + public List<String> getAcceptList() { return accept; } /** - * @param accept the accept to set + * @param accept the accept to set; accepts a String or a List of Strings */ - public void setAccept(String accept) { - this.accept = accept; + @JsonSetter("accept") + public void setAccept(Object accept) { + if (accept == null) { + return; + } + if (accept instanceof String) { + this.accept = Collections.singletonList((String) accept); + } else if (accept instanceof List) { + this.accept = + ((List<?>) accept) + .stream().map(Object::toString).collect(Collectors.toList()); + } } /** diff --git a/http-task/src/test/java/com/netflix/conductor/tasks/http/HttpTaskTest.java b/http-task/src/test/java/com/netflix/conductor/tasks/http/HttpTaskTest.java index 5be0e5c26c..68632ae44c 100644 --- a/http-task/src/test/java/com/netflix/conductor/tasks/http/HttpTaskTest.java +++ b/http-task/src/test/java/com/netflix/conductor/tasks/http/HttpTaskTest.java @@ -189,7 +189,10 @@ public void testFailure() { task.getInputData().remove(HttpTask.REQUEST_PARAMETER_NAME); httpTask.start(workflow, task, workflowExecutor); assertEquals(TaskModel.Status.FAILED, task.getStatus()); - assertEquals(HttpTask.MISSING_REQUEST, task.getReasonForIncompletion()); + // Without http_request key, falls back to inputData directly which lacks uri/method + assertTrue( + task.getReasonForIncompletion().contains("Missing HTTP URI") + || task.getReasonForIncompletion().contains("No HTTP method specified")); } @Test @@ -338,7 +341,10 @@ public void testOptional() { task.setReferenceTaskName("t1"); httpTask.start(workflow, task, workflowExecutor); assertEquals(TaskModel.Status.FAILED, task.getStatus()); - assertEquals(HttpTask.MISSING_REQUEST, task.getReasonForIncompletion()); + // Without http_request key, falls back to inputData directly which lacks uri/method + assertTrue( + 
task.getReasonForIncompletion().contains("Missing HTTP URI") + || task.getReasonForIncompletion().contains("No HTTP method specified")); assertFalse(task.getStatus().isSuccessful()); WorkflowTask workflowTask = new WorkflowTask(); diff --git a/http-task/src/test/java/com/netflix/conductor/tasks/http/HttpTaskUnitTest.java b/http-task/src/test/java/com/netflix/conductor/tasks/http/HttpTaskUnitTest.java new file mode 100644 index 0000000000..4e14e4fc37 --- /dev/null +++ b/http-task/src/test/java/com/netflix/conductor/tasks/http/HttpTaskUnitTest.java @@ -0,0 +1,225 @@ +/* + * Copyright 2022 Conductor Authors. + *

+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *

+ * http://www.apache.org/licenses/LICENSE-2.0 + *

+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package com.netflix.conductor.tasks.http; + +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import org.junit.Before; +import org.junit.Test; +import org.springframework.http.MediaType; + +import com.netflix.conductor.core.execution.WorkflowExecutor; +import com.netflix.conductor.model.TaskModel; +import com.netflix.conductor.model.WorkflowModel; +import com.netflix.conductor.tasks.http.providers.DefaultRestTemplateProvider; + +import com.fasterxml.jackson.databind.ObjectMapper; + +import static org.junit.Assert.*; +import static org.mockito.Mockito.mock; + +/** + * Unit tests for HttpTask that do not require Docker/Testcontainers. Tests input resolution (with + * and without http_request key) and accept parameter handling (single string vs list). 
+ */ +public class HttpTaskUnitTest { + + private static final ObjectMapper objectMapper = new ObjectMapper(); + private WorkflowExecutor workflowExecutor; + private HttpTask httpTask; + private final WorkflowModel workflow = new WorkflowModel(); + + @Before + public void setup() { + workflowExecutor = mock(WorkflowExecutor.class); + DefaultRestTemplateProvider restTemplateProvider = + new DefaultRestTemplateProvider( + java.time.Duration.ofMillis(150), java.time.Duration.ofMillis(100)); + httpTask = new HttpTask(restTemplateProvider, objectMapper); + } + + // ---- Accept parameter tests (Input deserialization) ---- + + @Test + public void testAcceptDefaultValue() { + HttpTask.Input input = new HttpTask.Input(); + assertEquals("application/json", input.getAccept()); + assertEquals(1, input.getAcceptList().size()); + assertEquals("application/json", input.getAcceptList().get(0)); + } + + @Test + public void testAcceptSingleString() { + HttpTask.Input input = new HttpTask.Input(); + input.setAccept("text/plain"); + assertEquals("text/plain", input.getAccept()); + assertEquals(1, input.getAcceptList().size()); + assertEquals("text/plain", input.getAcceptList().get(0)); + } + + @Test + public void testAcceptMultipleValues() { + HttpTask.Input input = new HttpTask.Input(); + input.setAccept(Arrays.asList("application/json", "text/plain", "application/xml")); + assertEquals("application/json", input.getAccept()); + List<String> acceptList = input.getAcceptList(); + assertEquals(3, acceptList.size()); + assertEquals("application/json", acceptList.get(0)); + assertEquals("text/plain", acceptList.get(1)); + assertEquals("application/xml", acceptList.get(2)); + } + + @Test + public void testAcceptSingleStringDeserialization() { + Map<String, Object> inputMap = new HashMap<>(); + inputMap.put("uri", "http://example.com"); + inputMap.put("method", "GET"); + inputMap.put("accept", "text/html"); + + HttpTask.Input input = objectMapper.convertValue(inputMap, HttpTask.Input.class); + 
assertEquals("text/html", input.getAccept()); + assertEquals(1, input.getAcceptList().size()); + assertEquals("text/html", input.getAcceptList().get(0)); + } + + @Test + public void testAcceptListDeserialization() { + Map<String, Object> inputMap = new HashMap<>(); + inputMap.put("uri", "http://example.com"); + inputMap.put("method", "GET"); + inputMap.put("accept", Arrays.asList("application/json", "text/plain")); + + HttpTask.Input input = objectMapper.convertValue(inputMap, HttpTask.Input.class); + assertEquals("application/json", input.getAccept()); + List<String> acceptList = input.getAcceptList(); + assertEquals(2, acceptList.size()); + assertEquals("application/json", acceptList.get(0)); + assertEquals("text/plain", acceptList.get(1)); + } + + @Test + public void testAcceptMediaTypeParsing() { + HttpTask.Input input = new HttpTask.Input(); + input.setAccept(Arrays.asList("application/json", "text/plain")); + List<String> acceptList = input.getAcceptList(); + // Verify all values are valid MediaType strings + for (String accept : acceptList) { + MediaType mediaType = MediaType.valueOf(accept); + assertNotNull(mediaType); + } + } + + // ---- Input resolution tests (http_request key vs direct input) ---- + + @Test + public void testInputWithHttpRequestKey() { + TaskModel task = new TaskModel(); + Map<String, Object> httpRequest = new HashMap<>(); + httpRequest.put("uri", "http://example.com"); + httpRequest.put("method", "GET"); + httpRequest.put("accept", "text/html"); + task.getInputData().put(HttpTask.REQUEST_PARAMETER_NAME, httpRequest); + + httpTask.start(workflow, task, workflowExecutor); + assertTrue( + "Should complete successfully with http_request key", + task.getStatus() == TaskModel.Status.COMPLETED + || task.getStatus() == TaskModel.Status.IN_PROGRESS); + assertNotNull(task.getOutputData().get("response")); + } + + @Test + public void testInputWithoutHttpRequestKey() { + TaskModel task = new TaskModel(); + task.getInputData().put("uri", "http://example.com"); + task.getInputData().put("method", 
"GET"); + task.getInputData().put("accept", "text/html"); + + httpTask.start(workflow, task, workflowExecutor); + assertTrue( + "Should complete successfully without http_request key", + task.getStatus() == TaskModel.Status.COMPLETED + || task.getStatus() == TaskModel.Status.IN_PROGRESS); + assertNotNull(task.getOutputData().get("response")); + } + + @Test + public void testInputWithoutHttpRequestKeyAndMissingUri() { + TaskModel task = new TaskModel(); + task.getInputData().put("method", "GET"); + + httpTask.start(workflow, task, workflowExecutor); + assertEquals(TaskModel.Status.FAILED, task.getStatus()); + assertTrue( + "Should fail with missing URI", + task.getReasonForIncompletion().contains("Missing HTTP URI")); + } + + @Test + public void testInputWithoutHttpRequestKeyAndMissingMethod() { + TaskModel task = new TaskModel(); + task.getInputData().put("uri", "http://example.com"); + + httpTask.start(workflow, task, workflowExecutor); + assertEquals(TaskModel.Status.FAILED, task.getStatus()); + assertTrue( + "Should fail with missing method", + task.getReasonForIncompletion().contains("No HTTP method specified")); + } + + @Test + public void testInputDirectWithListAccept() { + TaskModel task = new TaskModel(); + task.getInputData().put("uri", "http://example.com"); + task.getInputData().put("method", "GET"); + task.getInputData().put("accept", Arrays.asList("application/json", "text/plain")); + + httpTask.start(workflow, task, workflowExecutor); + assertTrue( + "Should complete successfully with list accept and direct input", + task.getStatus() == TaskModel.Status.COMPLETED + || task.getStatus() == TaskModel.Status.IN_PROGRESS); + assertNotNull(task.getOutputData().get("response")); + } + + @Test + public void testInputHttpRequestKeyWithListAccept() { + TaskModel task = new TaskModel(); + Map<String, Object> httpRequest = new HashMap<>(); + httpRequest.put("uri", "http://example.com"); + httpRequest.put("method", "GET"); + httpRequest.put("accept", 
Arrays.asList("application/json", "application/xml")); + task.getInputData().put(HttpTask.REQUEST_PARAMETER_NAME, httpRequest); + + httpTask.start(workflow, task, workflowExecutor); + assertTrue( + "Should complete successfully with list accept and http_request key", + task.getStatus() == TaskModel.Status.COMPLETED + || task.getStatus() == TaskModel.Status.IN_PROGRESS); + assertNotNull(task.getOutputData().get("response")); + } + + @Test + public void testEmptyInputFails() { + TaskModel task = new TaskModel(); + // No input at all — falls back to empty inputData map + httpTask.start(workflow, task, workflowExecutor); + assertEquals(TaskModel.Status.FAILED, task.getStatus()); + assertTrue( + "Should fail with missing URI", + task.getReasonForIncompletion().contains("Missing HTTP URI")); + } +} diff --git a/org/springframework/ai/anthropic/api/AnthropicApi.java b/org/springframework/ai/anthropic/api/AnthropicApi.java new file mode 100644 index 0000000000..c5af76ae0b --- /dev/null +++ b/org/springframework/ai/anthropic/api/AnthropicApi.java @@ -0,0 +1,2265 @@ +/* + * Copyright 2023-2025 the original author or authors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.springframework.ai.anthropic.api; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Consumer; +import java.util.function.Predicate; + +import com.fasterxml.jackson.annotation.JsonIgnoreProperties; +import com.fasterxml.jackson.annotation.JsonInclude; +import com.fasterxml.jackson.annotation.JsonInclude.Include; +import com.fasterxml.jackson.annotation.JsonProperty; +import com.fasterxml.jackson.annotation.JsonSubTypes; +import com.fasterxml.jackson.annotation.JsonTypeInfo; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; + +import org.springframework.ai.anthropic.api.AnthropicApi.ChatCompletionRequest.CacheControl; +import org.springframework.ai.anthropic.api.StreamHelper.ChatCompletionResponseBuilder; +import org.springframework.ai.model.ApiKey; +import org.springframework.ai.model.ChatModelDescription; +import org.springframework.ai.model.ModelOptionsUtils; +import org.springframework.ai.model.SimpleApiKey; +import org.springframework.ai.observation.conventions.AiProvider; +import org.springframework.ai.retry.RetryUtils; +import org.springframework.http.HttpHeaders; +import org.springframework.http.HttpStatusCode; +import org.springframework.http.MediaType; +import org.springframework.http.ResponseEntity; +import org.springframework.util.Assert; +import org.springframework.util.LinkedMultiValueMap; +import org.springframework.util.MultiValueMap; +import org.springframework.util.StringUtils; +import org.springframework.web.client.ResponseErrorHandler; +import org.springframework.web.client.RestClient; +import org.springframework.web.reactive.function.client.WebClient; + +/** + * The Anthropic API client. 
+ * + * @author Christian Tzolov + * @author Mariusz Bernacki + * @author Thomas Vitale + * @author Jihoon Kim + * @author Alexandros Pappas + * @author Jonghoon Park + * @author Claudio Silva Junior + * @author Filip Hrisafov + * @author Soby Chacko + * @author Austin Dase + * @since 1.0.0 + */ +public final class AnthropicApi { + + private static final Logger logger = LoggerFactory.getLogger(AnthropicApi.class); + + public static Builder builder() { + return new Builder(); + } + + public static final String PROVIDER_NAME = AiProvider.ANTHROPIC.value(); + + public static final String DEFAULT_BASE_URL = "https://api.anthropic.com"; + + public static final String DEFAULT_MESSAGE_COMPLETIONS_PATH = "/v1/messages"; + + public static final String DEFAULT_ANTHROPIC_VERSION = "2023-06-01"; + + public static final String DEFAULT_ANTHROPIC_BETA_VERSION = "tools-2024-04-04,pdfs-2024-09-25,structured-outputs-2025-11-13"; + + public static final String BETA_EXTENDED_CACHE_TTL = "extended-cache-ttl-2025-04-11"; + + public static final String BETA_SKILLS = "skills-2025-10-02"; + + public static final String BETA_FILES_API = "files-api-2025-04-14"; + + public static final String BETA_CODE_EXECUTION = "code-execution-2025-08-25"; + + public static final String CODE_EXECUTION_TOOL_TYPE = "code_execution_20250825"; + + private static final String FILES_PATH = "/v1/files"; + + private static final String HEADER_X_API_KEY = "x-api-key"; + + private static final String HEADER_ANTHROPIC_VERSION = "anthropic-version"; + + private static final String HEADER_ANTHROPIC_BETA = "anthropic-beta"; + + private static final Predicate<String> SSE_DONE_PREDICATE = "[DONE]"::equals; + + private final String completionsPath; + + private final RestClient restClient; + + private final StreamHelper streamHelper = new StreamHelper(); + + private final WebClient webClient; + + private final ApiKey apiKey; + + /** + * Create a new client api. + * @param baseUrl api base URL. 
+ * @param completionsPath path to append to the base URL. + * @param anthropicApiKey Anthropic api Key. + * @param anthropicVersion Anthropic version. + * @param restClientBuilder RestClient builder. + * @param webClientBuilder WebClient builder. + * @param responseErrorHandler Response error handler. + * @param anthropicBetaFeatures Anthropic beta features. + */ + private AnthropicApi(String baseUrl, String completionsPath, ApiKey anthropicApiKey, String anthropicVersion, + RestClient.Builder restClientBuilder, WebClient.Builder webClientBuilder, + ResponseErrorHandler responseErrorHandler, String anthropicBetaFeatures) { + + Consumer<HttpHeaders> jsonContentHeaders = headers -> { + headers.add(HEADER_ANTHROPIC_VERSION, anthropicVersion); + headers.add(HEADER_ANTHROPIC_BETA, anthropicBetaFeatures); + headers.setContentType(MediaType.APPLICATION_JSON); + }; + + this.completionsPath = completionsPath; + this.apiKey = anthropicApiKey; + + this.restClient = restClientBuilder.clone() + .baseUrl(baseUrl) + .defaultHeaders(jsonContentHeaders) + .defaultStatusHandler(responseErrorHandler) + .build(); + + this.webClient = webClientBuilder.clone() + .baseUrl(baseUrl) + .defaultHeaders(jsonContentHeaders) + .defaultStatusHandler(HttpStatusCode::isError, + resp -> resp.bodyToMono(String.class) + .flatMap(it -> Mono.error(new RuntimeException( + "Response exception, Status: [" + resp.statusCode() + "], Body:[" + it + "]")))) + .build(); + } + + /** + * Creates a model response for the given chat conversation. + * @param chatRequest The chat completion request. + * @return Entity response with {@link ChatCompletionResponse} as a body and HTTP + * status code and headers. + */ + public ResponseEntity<ChatCompletionResponse> chatCompletionEntity(ChatCompletionRequest chatRequest) { + return chatCompletionEntity(chatRequest, new LinkedMultiValueMap<>()); + } + + /** + * Creates a model response for the given chat conversation. + * @param chatRequest The chat completion request. 
+     * @param additionalHttpHeader Additional HTTP headers.
+     * @return Entity response with {@link ChatCompletionResponse} as a body and HTTP
+     * status code and headers.
+     */
+    public ResponseEntity<ChatCompletionResponse> chatCompletionEntity(ChatCompletionRequest chatRequest,
+            MultiValueMap<String, String> additionalHttpHeader) {
+
+        Assert.notNull(chatRequest, "The request body cannot be null.");
+        Assert.isTrue(!chatRequest.stream(), "Request must set the stream property to false.");
+        Assert.notNull(additionalHttpHeader, "The additional HTTP headers cannot be null.");
+
+        // @formatter:off
+        return this.restClient.post()
+            .uri(this.completionsPath)
+            .headers(headers -> {
+                headers.addAll(additionalHttpHeader);
+                addDefaultHeadersIfMissing(headers);
+            })
+            .body(chatRequest)
+            .retrieve()
+            .toEntity(ChatCompletionResponse.class);
+        // @formatter:on
+    }
+
+    /**
+     * Creates a streaming chat response for the given chat conversation.
+     * @param chatRequest The chat completion request. Must have the stream property set
+     * to true.
+     * @return Returns a {@link Flux} stream from chat completion chunks.
+     */
+    public Flux<ChatCompletionResponse> chatCompletionStream(ChatCompletionRequest chatRequest) {
+        return chatCompletionStream(chatRequest, new HttpHeaders());
+    }
+
+    /**
+     * Creates a streaming chat response for the given chat conversation.
+     * @param chatRequest The chat completion request. Must have the stream property set
+     * to true.
+     * @param additionalHttpHeader Additional HTTP headers.
+     * @return Returns a {@link Flux} stream from chat completion chunks.
+     */
+    public Flux<ChatCompletionResponse> chatCompletionStream(ChatCompletionRequest chatRequest,
+            MultiValueMap<String, String> additionalHttpHeader) {
+
+        Assert.notNull(chatRequest, "The request body cannot be null.");
+        Assert.isTrue(chatRequest.stream(), "Request must set the stream property to true.");
+        Assert.notNull(additionalHttpHeader, "The additional HTTP headers cannot be null.");
+
+        AtomicBoolean isInsideTool = new AtomicBoolean(false);
+
+        AtomicReference<ChatCompletionResponseBuilder> chatCompletionReference = new AtomicReference<>();
+
+        // @formatter:off
+        return this.webClient.post()
+            .uri(this.completionsPath)
+            .headers(headers -> {
+                headers.addAll(additionalHttpHeader);
+                addDefaultHeadersIfMissing(headers);
+            })
+            .body(Mono.just(chatRequest), ChatCompletionRequest.class)
+            .retrieve()
+            .bodyToFlux(String.class)
+            .takeUntil(SSE_DONE_PREDICATE)
+            .filter(SSE_DONE_PREDICATE.negate())
+            .map(content -> ModelOptionsUtils.jsonToObject(content, StreamEvent.class))
+            .filter(event -> event.type() != EventType.PING)
+            // Detect if the chunk is part of a streaming function call.
+            .map(event -> {
+                logger.debug("Received event: {}", event);
+
+                if (this.streamHelper.isToolUseStart(event)) {
+                    isInsideTool.set(true);
+                }
+                return event;
+            })
+            // Group all chunks belonging to the same function call.
+            .windowUntil(event -> {
+                if (isInsideTool.get() && this.streamHelper.isToolUseFinish(event)) {
+                    isInsideTool.set(false);
+                    return true;
+                }
+                return !isInsideTool.get();
+            })
+            // Merging the window chunks into a single chunk.
+            .concatMapIterable(window -> {
+                Mono<StreamEvent> monoChunk = window.reduce(new ToolUseAggregationEvent(),
+                        this.streamHelper::mergeToolUseEvents);
+                return List.of(monoChunk);
+            })
+            .flatMap(mono -> mono)
+            .map(event -> this.streamHelper.eventToChatCompletionResponse(event, chatCompletionReference))
+            .filter(chatCompletionResponse -> chatCompletionResponse.type() != null);
+    }
+
+    // Files API Methods
+
+    /**
+     * Get metadata for a specific file generated by Skills or uploaded via Files API.
+     * @param fileId The ID of the file (format: file_*)
+     * @return File metadata including filename, size, mime type, and expiration
+     */
+    public FileMetadata getFileMetadata(String fileId) {
+        Assert.hasText(fileId, "File ID cannot be empty");
+
+        return this.restClient.get()
+            .uri(FILES_PATH + "/{id}", fileId)
+            .headers(headers -> {
+                addDefaultHeadersIfMissing(headers);
+                // Append files-api beta to existing beta headers if not already present
+                String existingBeta = headers.getFirst(HEADER_ANTHROPIC_BETA);
+                if (existingBeta != null && !existingBeta.contains(BETA_FILES_API)) {
+                    headers.set(HEADER_ANTHROPIC_BETA, existingBeta + "," + BETA_FILES_API);
+                }
+                else if (existingBeta == null) {
+                    headers.set(HEADER_ANTHROPIC_BETA, BETA_FILES_API);
+                }
+            })
+            .retrieve()
+            .body(FileMetadata.class);
+    }
+
+    /**
+     * Download file content by ID.
+ * @param fileId The ID of the file (format: file_*) + * @return File content as bytes + */ + public byte[] downloadFile(String fileId) { + Assert.hasText(fileId, "File ID cannot be empty"); + + return this.restClient.get() + .uri(FILES_PATH + "/{id}/content", fileId) + .headers(headers -> { + addDefaultHeadersIfMissing(headers); + // Append files-api beta to existing beta headers if not already present + String existingBeta = headers.getFirst(HEADER_ANTHROPIC_BETA); + if (existingBeta != null && !existingBeta.contains(BETA_FILES_API)) { + headers.set(HEADER_ANTHROPIC_BETA, existingBeta + "," + BETA_FILES_API); + } + else if (existingBeta == null) { + headers.set(HEADER_ANTHROPIC_BETA, BETA_FILES_API); + } + }) + .retrieve() + .body(byte[].class); + } + + private void addDefaultHeadersIfMissing(HttpHeaders headers) { + if (!headers.containsKey(HEADER_X_API_KEY)) { + String apiKeyValue = this.apiKey.getValue(); + if (StringUtils.hasText(apiKeyValue)) { + headers.add(HEADER_X_API_KEY, apiKeyValue); + } + } + } + + /** + * Check the Models + * overview and model + * comparison for additional details and options. + */ + public enum ChatModel implements ChatModelDescription { + + // @formatter:off + /** + * The claude-sonnet-4-5 model. + */ + CLAUDE_SONNET_4_5("claude-sonnet-4-5"), + + /** + * The claude-opus-4-5 model. + */ + CLAUDE_OPUS_4_5("claude-opus-4-5"), + + /** + * The claude-haiku-4-5 model. + */ + CLAUDE_HAIKU_4_5("claude-haiku-4-5"), + + /** + * The claude-opus-4-1 model. + */ + CLAUDE_OPUS_4_1("claude-opus-4-1"), + + /** + * The claude-opus-4-0 model. + */ + CLAUDE_OPUS_4_0("claude-opus-4-0"), + + /** + * The claude-sonnet-4-0 model. + */ + CLAUDE_SONNET_4_0("claude-sonnet-4-0"), + + /** + * The claude-3-7-sonnet-latest model. 
+ */ + CLAUDE_3_7_SONNET("claude-3-7-sonnet-latest"), + + /** + * The claude-3-5-sonnet-latest model.(Deprecated on October 28, 2025) + */ + CLAUDE_3_5_SONNET("claude-3-5-sonnet-latest"), + + /** + * The CLAUDE_3_OPUS + */ + CLAUDE_3_OPUS("claude-3-opus-latest"), + + /** + * The CLAUDE_3_SONNET (Deprecated. To be removed on July 21, 2025) + */ + CLAUDE_3_SONNET("claude-3-sonnet-20240229"), + + /** + * The CLAUDE 3.5 HAIKU + */ + CLAUDE_3_5_HAIKU("claude-3-5-haiku-latest"), + + /** + * The CLAUDE_3_HAIKU + */ + CLAUDE_3_HAIKU("claude-3-haiku-20240307"); + + // @formatter:on + + private final String value; + + ChatModel(String value) { + this.value = value; + } + + /** + * Get the value of the model. + * @return The value of the model. + */ + public String getValue() { + return this.value; + } + + /** + * Get the name of the model. + * @return The name of the model. + */ + @Override + public String getName() { + return this.value; + } + + } + + /** + * The role of the author of this message. + */ + public enum Role { + + // @formatter:off + /** + * The user role. + */ + @JsonProperty("user") + USER, + + /** + * The assistant role. + */ + @JsonProperty("assistant") + ASSISTANT + // @formatter:on + + } + + /** + * The thinking type. + */ + public enum ThinkingType { + + /** + * Enabled thinking type. + */ + @JsonProperty("enabled") + ENABLED, + + /** + * Disabled thinking type. + */ + @JsonProperty("disabled") + DISABLED + + } + + /** + * The event type of the streamed chunk. + */ + public enum EventType { + + /** + * Message start event. Contains a Message object with empty content. + */ + @JsonProperty("message_start") + MESSAGE_START, + + /** + * Message delta event, indicating top-level changes to the final Message object. + */ + @JsonProperty("message_delta") + MESSAGE_DELTA, + + /** + * A final message stop event. + */ + @JsonProperty("message_stop") + MESSAGE_STOP, + + /** + * Content block start event. 
+ */ + @JsonProperty("content_block_start") + CONTENT_BLOCK_START, + + /** + * Content block delta event. + */ + @JsonProperty("content_block_delta") + CONTENT_BLOCK_DELTA, + + /** + * A final content block stop event. + */ + @JsonProperty("content_block_stop") + CONTENT_BLOCK_STOP, + + /** + * Error event. + */ + @JsonProperty("error") + ERROR, + + /** + * Ping event. + */ + @JsonProperty("ping") + PING, + + /** + * Artificially created event to aggregate tool use events. + */ + TOOL_USE_AGGREGATE + + } + + /** + * Types of Claude Skills. + */ + public enum SkillType { + + /** + * Pre-built Anthropic skills (xlsx, pptx, docx, pdf). + */ + @JsonProperty("anthropic") + ANTHROPIC("anthropic"), + + /** + * Custom user-created skills. + */ + @JsonProperty("custom") + CUSTOM("custom"); + + private final String value; + + SkillType(String value) { + this.value = value; + } + + public String getValue() { + return this.value; + } + + } + + /** + * Pre-built Anthropic Skills for document generation. + */ + public enum AnthropicSkill { + + /** + * Excel spreadsheet generation skill. + */ + XLSX("xlsx", "Creates Excel spreadsheets"), + + /** + * PowerPoint presentation generation skill. + */ + PPTX("pptx", "Creates PowerPoint presentations"), + + /** + * Word document generation skill. + */ + DOCX("docx", "Creates Word documents"), + + /** + * PDF document generation skill. + */ + PDF("pdf", "Creates PDF documents"); + + private final String skillId; + + private final String description; + + AnthropicSkill(String skillId, String description) { + this.skillId = skillId; + this.description = description; + } + + public String getSkillId() { + return this.skillId; + } + + public String getDescription() { + return this.description; + } + + /** + * Convert to a Skill record with latest version. + * @return Skill record + */ + public Skill toSkill() { + return new Skill(SkillType.ANTHROPIC, this.skillId, "latest"); + } + + /** + * Convert to a Skill record with specific version. 
+ * @param version Version string + * @return Skill record + */ + public Skill toSkill(String version) { + return new Skill(SkillType.ANTHROPIC, this.skillId, version); + } + + } + + /** + * Represents a Claude Skill - either pre-built Anthropic skill or custom skill. + * Skills are collections of instructions, scripts, and resources that extend Claude's + * capabilities for specific domains. + * + * @param type Skill type - ANTHROPIC for pre-built skills or CUSTOM for user-created. + * @param skillId Skill identifier - short name for Anthropic skills (e.g., "xlsx", + * "pptx") or custom skill ID. + * @param version Optional version string (e.g., "latest", "20251013"). Defaults to + * "latest". + */ + @JsonInclude(Include.NON_NULL) + public record Skill(@JsonProperty("type") SkillType type, @JsonProperty("skill_id") String skillId, + @JsonProperty("version") String version) { + + /** + * Create a Skill with default "latest" version. + * @param type Skill type + * @param skillId Skill ID + */ + public Skill(SkillType type, String skillId) { + this(type, skillId, "latest"); + } + + public static SkillBuilder builder() { + return new SkillBuilder(); + } + + public static final class SkillBuilder { + + private SkillType type; + + private String skillId; + + private String version = "latest"; + + public SkillBuilder type(SkillType type) { + this.type = type; + return this; + } + + public SkillBuilder skillId(String skillId) { + this.skillId = skillId; + return this; + } + + public SkillBuilder version(String version) { + this.version = version; + return this; + } + + public Skill build() { + Assert.notNull(this.type, "Skill type cannot be null"); + Assert.hasText(this.skillId, "Skill ID cannot be empty"); + return new Skill(this.type, this.skillId, this.version); + } + + } + + } + + /** + * Container for Claude Skills in a chat completion request. Maximum of 8 skills per + * request. 
+     * request.
+     *
+     * @param skills List of skills to make available
+     */
+    @JsonInclude(Include.NON_NULL)
+    public record SkillContainer(@JsonProperty("skills") List<Skill> skills) {
+
+        public SkillContainer {
+            Assert.notNull(skills, "Skills list cannot be null");
+            Assert.isTrue(skills.size() <= 8, "Maximum of 8 skills per request");
+        }
+
+        public static SkillContainerBuilder builder() {
+            return new SkillContainerBuilder();
+        }
+
+        public static final class SkillContainerBuilder {
+
+            private final List<Skill> skills = new ArrayList<>();
+
+            public SkillContainerBuilder skill(Skill skill) {
+                Assert.notNull(skill, "Skill cannot be null");
+                this.skills.add(skill);
+                return this;
+            }
+
+            public SkillContainerBuilder skill(SkillType type, String skillId) {
+                return skill(new Skill(type, skillId));
+            }
+
+            public SkillContainerBuilder skill(SkillType type, String skillId, String version) {
+                return skill(new Skill(type, skillId, version));
+            }
+
+            public SkillContainerBuilder anthropicSkill(AnthropicSkill skill) {
+                return skill(skill.toSkill());
+            }
+
+            public SkillContainerBuilder anthropicSkill(AnthropicSkill skill, String version) {
+                return skill(skill.toSkill(version));
+            }
+
+            public SkillContainerBuilder customSkill(String skillId) {
+                return skill(new Skill(SkillType.CUSTOM, skillId));
+            }
+
+            public SkillContainerBuilder customSkill(String skillId, String version) {
+                return skill(new Skill(SkillType.CUSTOM, skillId, version));
+            }
+
+            public SkillContainerBuilder skills(List<Skill> skills) {
+                Assert.notNull(skills, "Skills list cannot be null");
+                this.skills.addAll(skills);
+                return this;
+            }
+
+            public SkillContainer build() {
+                return new SkillContainer(new ArrayList<>(this.skills));
+            }
+
+        }
+
+    }
+
+    /**
+     * File metadata returned from the Files API.
+ * + * @param id Unique file identifier (format: file_*) + * @param filename Original filename + * @param size File size in bytes + * @param mimeType MIME type of the file + * @param createdAt Creation timestamp + * @param expiresAt Expiration timestamp + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record FileMetadata(@JsonProperty("id") String id, @JsonProperty("filename") String filename, + @JsonProperty("size") Long size, @JsonProperty("mime_type") String mimeType, + @JsonProperty("created_at") String createdAt, @JsonProperty("expires_at") String expiresAt) { + } + + @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXISTING_PROPERTY, property = "type", + visible = true) + @JsonSubTypes({ @JsonSubTypes.Type(value = ContentBlockStartEvent.class, name = "content_block_start"), + @JsonSubTypes.Type(value = ContentBlockDeltaEvent.class, name = "content_block_delta"), + @JsonSubTypes.Type(value = ContentBlockStopEvent.class, name = "content_block_stop"), + @JsonSubTypes.Type(value = PingEvent.class, name = "ping"), + @JsonSubTypes.Type(value = ErrorEvent.class, name = "error"), + @JsonSubTypes.Type(value = MessageStartEvent.class, name = "message_start"), + @JsonSubTypes.Type(value = MessageDeltaEvent.class, name = "message_delta"), + @JsonSubTypes.Type(value = MessageStopEvent.class, name = "message_stop") }) + public interface StreamEvent { + + @JsonProperty("type") + EventType type(); + + } + + /** + * Chat completion request object. + * + * @param model The model that will complete your prompt. See the list of + * models for + * additional details and options. + * @param messages Input messages. + * @param system System prompt. Can be a String (for compatibility) or a + * List<ContentBlock> (for caching support). A system prompt is a way of + * providing context and instructions to Claude, such as specifying a particular goal + * or role. See our + * guide to system + * prompts. 
+     * @param maxTokens The maximum number of tokens to generate before stopping. Note
+     * that our models may stop before reaching this maximum. This parameter only
+     * specifies the absolute maximum number of tokens to generate. Different models have
+     * different maximum values for this parameter.
+     * @param metadata An object describing metadata about the request.
+     * @param stopSequences Custom text sequences that will cause the model to stop
+     * generating. Our models will normally stop when they have naturally completed their
+     * turn, which will result in a response stop_reason of "end_turn". If you want the
+     * model to stop generating when it encounters custom strings of text, you can use the
+     * stop_sequences parameter. If the model encounters one of the custom sequences, the
+     * response stop_reason value will be "stop_sequence" and the response stop_sequence
+     * value will contain the matched stop sequence.
+     * @param stream Whether to incrementally stream the response using server-sent
+     * events.
+     * @param temperature Amount of randomness injected into the response. Defaults to
+     * 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical /
+     * multiple choice, and closer to 1.0 for creative and generative tasks. Note that
+     * even with temperature of 0.0, the results will not be fully deterministic.
+     * @param topP Use nucleus sampling. In nucleus sampling, we compute the cumulative
+     * distribution over all the options for each subsequent token in decreasing
+     * probability order and cut it off once it reaches a particular probability specified
+     * by top_p. You should either alter temperature or top_p, but not both. Recommended
+     * for advanced use cases only. You usually only need to use temperature.
+     * @param topK Only sample from the top K options for each subsequent token. Used to
+     * remove "long tail" low probability responses. Recommended for advanced use cases
+     * only. You usually only need to use temperature.
+     * @param tools Definitions of tools that the model may use. If provided the model may
+     * return tool_use content blocks that represent the model's use of those tools. You
+     * can then run those tools using the tool input generated by the model and then
+     * optionally return results back to the model using tool_result content blocks.
+     * @param toolChoice How the model should use the provided tools. The model can use a
+     * specific tool, any available tool, decide by itself, or not use tools at all.
+     * @param thinking Configuration for the model's thinking mode. When enabled, the
+     * model can perform more in-depth reasoning before responding to a query.
+     */
+    @JsonInclude(Include.NON_NULL)
+    public record ChatCompletionRequest(
+    // @formatter:off
+        @JsonProperty("model") String model,
+        @JsonProperty("messages") List<AnthropicMessage> messages,
+        @JsonProperty("system") Object system,
+        @JsonProperty("max_tokens") Integer maxTokens,
+        @JsonProperty("metadata") Metadata metadata,
+        @JsonProperty("stop_sequences") List<String> stopSequences,
+        @JsonProperty("stream") Boolean stream,
+        @JsonProperty("temperature") Double temperature,
+        @JsonProperty("top_p") Double topP,
+        @JsonProperty("top_k") Integer topK,
+        @JsonProperty("tools") List<Tool> tools,
+        @JsonProperty("tool_choice") ToolChoice toolChoice,
+        @JsonProperty("thinking") ThinkingConfig thinking,
+        @JsonProperty("output_format") OutputFormat outputFormat,
+        @JsonProperty("container") SkillContainer container) {
+        // @formatter:on
+
+        public ChatCompletionRequest(String model, List<AnthropicMessage> messages, Object system, Integer maxTokens,
+                Double temperature, Boolean stream) {
+            this(model, messages, system, maxTokens, null, null, stream, temperature, null, null, null, null, null,
+                    null, null);
+        }
+
+        public ChatCompletionRequest(String model, List<AnthropicMessage> messages, Object system, Integer maxTokens,
+                List<String> stopSequences, Double temperature, Boolean stream) {
+            this(model, messages, system, maxTokens, null,
+                    stopSequences, stream, temperature, null, null, null, null,
+                    null, null, null);
+        }
+
+        public static ChatCompletionRequestBuilder builder() {
+            return new ChatCompletionRequestBuilder();
+        }
+
+        public static ChatCompletionRequestBuilder from(ChatCompletionRequest request) {
+            return new ChatCompletionRequestBuilder(request);
+        }
+
+        @JsonInclude(Include.NON_NULL)
+        public record OutputFormat(@JsonProperty("type") String type,
+                @JsonProperty("schema") Map<String, Object> schema) {
+
+            public OutputFormat(String jsonSchema) {
+                this("json_schema", ModelOptionsUtils.jsonToMap(jsonSchema));
+            }
+        }
+
+        /**
+         * Metadata about the request.
+         *
+         * @param userId An external identifier for the user who is associated with the
+         * request. This should be a uuid, hash value, or other opaque identifier.
+         * Anthropic may use this id to help detect abuse. Do not include any identifying
+         * information such as name, email address, or phone number.
+         */
+        @JsonInclude(Include.NON_NULL)
+        public record Metadata(@JsonProperty("user_id") String userId) {
+
+        }
+
+        /**
+         * Cache control settings for a content block.
+         *
+         * @param type the cache type supported by Anthropic.
+         * @param ttl the cache time-to-live, e.g. "5m".
+         */
+        @JsonInclude(Include.NON_NULL)
+        public record CacheControl(@JsonProperty("type") String type, @JsonProperty("ttl") String ttl) {
+
+            public CacheControl(String type) {
+                this(type, "5m");
+            }
+        }
+
+        /**
+         * Configuration for the model's thinking mode.
+         *
+         * @param type The type of thinking mode. Currently, "enabled" is supported.
+         * @param budgetTokens The token budget available for the thinking process. Must
+         * be ≥1024 and less than max_tokens.
+         */
+        @JsonInclude(Include.NON_NULL)
+        public record ThinkingConfig(@JsonProperty("type") ThinkingType type,
+                @JsonProperty("budget_tokens") Integer budgetTokens) {
+        }
+
+    }
+
+    public static final class ChatCompletionRequestBuilder {
+
+        private String model;
+
+        private List<AnthropicMessage> messages;
+
+        private Object system;
+
+        private Integer maxTokens;
+
+        private ChatCompletionRequest.Metadata metadata;
+
+        private List<String> stopSequences;
+
+        private Boolean stream = false;
+
+        private Double temperature;
+
+        private Double topP;
+
+        private Integer topK;
+
+        private List<Tool> tools;
+
+        private ToolChoice toolChoice;
+
+        private ChatCompletionRequest.ThinkingConfig thinking;
+
+        private SkillContainer container;
+
+        private ChatCompletionRequest.OutputFormat outputFormat;
+
+        private ChatCompletionRequestBuilder() {
+        }
+
+        private ChatCompletionRequestBuilder(ChatCompletionRequest request) {
+            this.model = request.model;
+            this.messages = request.messages;
+            this.system = request.system;
+            this.maxTokens = request.maxTokens;
+            this.metadata = request.metadata;
+            this.stopSequences = request.stopSequences;
+            this.stream = request.stream;
+            this.temperature = request.temperature;
+            this.topP = request.topP;
+            this.topK = request.topK;
+            this.tools = request.tools;
+            this.toolChoice = request.toolChoice;
+            this.thinking = request.thinking;
+            this.container = request.container;
+            this.outputFormat = request.outputFormat;
+        }
+
+        public ChatCompletionRequestBuilder model(ChatModel model) {
+            this.model = model.getValue();
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder model(String model) {
+            this.model = model;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder messages(List<AnthropicMessage> messages) {
+            this.messages = messages;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder system(Object system) {
+            this.system = system;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder maxTokens(Integer maxTokens) {
+            this.maxTokens = maxTokens;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder metadata(ChatCompletionRequest.Metadata metadata) {
+            this.metadata = metadata;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder stopSequences(List<String> stopSequences) {
+            this.stopSequences = stopSequences;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder stream(Boolean stream) {
+            this.stream = stream;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder temperature(Double temperature) {
+            this.temperature = temperature;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder topP(Double topP) {
+            this.topP = topP;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder topK(Integer topK) {
+            this.topK = topK;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder tools(List<Tool> tools) {
+            this.tools = tools;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder toolChoice(ToolChoice toolChoice) {
+            this.toolChoice = toolChoice;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder thinking(ChatCompletionRequest.ThinkingConfig thinking) {
+            this.thinking = thinking;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder thinking(ThinkingType type, Integer budgetTokens) {
+            this.thinking = new ChatCompletionRequest.ThinkingConfig(type, budgetTokens);
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder container(SkillContainer container) {
+            this.container = container;
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder skills(List<Skill> skills) {
+            if (skills != null && !skills.isEmpty()) {
+                this.container = new SkillContainer(skills);
+            }
+            return this;
+        }
+
+        public ChatCompletionRequestBuilder outputFormat(ChatCompletionRequest.OutputFormat outputFormat) {
+            this.outputFormat = outputFormat;
+            return this;
+        }
+
+        public ChatCompletionRequest build() {
+            return new ChatCompletionRequest(this.model, this.messages, this.system, this.maxTokens, this.metadata,
+                    this.stopSequences, this.stream, this.temperature, this.topP, this.topK, this.tools,
+                    this.toolChoice, this.thinking,
+                    this.outputFormat, this.container);
+        }
+
+    }
+
+    ///////////////////////////////////////
+    /// ERROR EVENT
+    ///////////////////////////////////////
+
+    /**
+     * Input messages.
+     *
+     * Our models are trained to operate on alternating user and assistant conversational
+     * turns. When creating a new Message, you specify the prior conversational turns with
+     * the messages parameter, and the model then generates the next Message in the
+     * conversation. Each input message must be an object with a role and content. You can
+     * specify a single user-role message, or you can include multiple user and assistant
+     * messages. The first message must always use the user role. If the final message
+     * uses the assistant role, the response content will continue immediately from the
+     * content in that message. This can be used to constrain part of the model's
+     * response.
+     *
+     * @param content The contents of the message. Can be one of String or
+     * MultiModalContent.
+     * @param role The role of the messages author. Could be one of the {@link Role}
+     * types.
+     */
+    @JsonInclude(Include.NON_NULL)
+    public record AnthropicMessage(
+    // @formatter:off
+        @JsonProperty("content") List<ContentBlock> content,
+        @JsonProperty("role") Role role) {
+        // @formatter:on
+    }
+
+    /**
+     * Citations configuration for document ContentBlocks.
+     */
+    @JsonInclude(Include.NON_NULL)
+    public record CitationsConfig(@JsonProperty("enabled") Boolean enabled) {
+    }
+
+    /**
+     * Citation response structure from Anthropic API. Maps to the actual API response
+     * format for citations. Contains location information that varies by document type:
+     * character indices for plain text, page numbers for PDFs, or content block indices
+     * for custom content.
+ * + * @param type The citation location type ("char_location", "page_location", or + * "content_block_location") + * @param citedText The text that was cited from the document + * @param documentIndex The index of the document that was cited (0-based) + * @param documentTitle The title of the document that was cited + * @param startCharIndex The starting character index for "char_location" type + * (0-based, inclusive) + * @param endCharIndex The ending character index for "char_location" type (exclusive) + * @param startPageNumber The starting page number for "page_location" type (1-based, + * inclusive) + * @param endPageNumber The ending page number for "page_location" type (exclusive) + * @param startBlockIndex The starting content block index for + * "content_block_location" type (0-based, inclusive) + * @param endBlockIndex The ending content block index for "content_block_location" + * type (exclusive) + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record CitationResponse(@JsonProperty("type") String type, @JsonProperty("cited_text") String citedText, + @JsonProperty("document_index") Integer documentIndex, @JsonProperty("document_title") String documentTitle, + + // For char_location type + @JsonProperty("start_char_index") Integer startCharIndex, + @JsonProperty("end_char_index") Integer endCharIndex, + + // For page_location type + @JsonProperty("start_page_number") Integer startPageNumber, + @JsonProperty("end_page_number") Integer endPageNumber, + + // For content_block_location type + @JsonProperty("start_block_index") Integer startBlockIndex, + @JsonProperty("end_block_index") Integer endBlockIndex) { + } + + /** + * The content block of the message. + * + * @param type the content type can be "text", "image", "tool_use", "tool_result" or + * "text_delta". + * @param source The source of the media content. Applicable for "image" types only. + * @param text The text of the message. 
+     * Applicable for "text" types only.
+     * @param index The index of the content block. Applicable only for streaming
+     * responses.
+     * @param id The id of the tool use. Applicable only for tool_use response.
+     * @param name The name of the tool use. Applicable only for tool_use response.
+     * @param input The input of the tool use. Applicable only for tool_use response.
+     * @param toolUseId The id of the tool use. Applicable only for tool_result response.
+     * @param content The content of the tool result. Applicable only for tool_result
+     * response.
+     */
+    @JsonInclude(Include.NON_NULL)
+    @JsonIgnoreProperties(ignoreUnknown = true)
+    public record ContentBlock(
+    // @formatter:off
+        @JsonProperty("type") Type type,
+        @JsonProperty("source") Source source,
+        @JsonProperty("text") String text,
+
+        // applicable only for streaming responses.
+        @JsonProperty("index") Integer index,
+
+        // tool_use response only
+        @JsonProperty("id") String id,
+        @JsonProperty("name") String name,
+        @JsonProperty("input") Map<String, Object> input,
+
+        // tool_result response only
+        @JsonProperty("tool_use_id") String toolUseId,
+        @JsonProperty("content") Object content,
+
+        // Thinking only
+        @JsonProperty("signature") String signature,
+        @JsonProperty("thinking") String thinking,
+
+        // Redacted Thinking only
+        @JsonProperty("data") String data,
+
+        // cache object
+        @JsonProperty("cache_control") CacheControl cacheControl,
+
+        // Citation fields
+        @JsonProperty("title") String title,
+        @JsonProperty("context") String context,
+        @JsonProperty("citations") Object citations, // Can be CitationsConfig for requests or List<CitationResponse> for responses
+
+        // File fields (for Skills-generated files)
+        @JsonProperty("file_id") String fileId,
+        @JsonProperty("filename") String filename
+    ) {
+        // @formatter:on
+
+        /**
+         * Create content block
+         * @param mediaType The media type of the content.
+         * @param data The content data.
+ */ + public ContentBlock(String mediaType, String data) { + this(new Source(mediaType, data)); + } + + /** + * Create content block + * @param type The type of the content. + * @param source The source of the content. + */ + public ContentBlock(Type type, Source source) { + this(type, source, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, + null); + } + + /** + * Create content block + * @param source The source of the content. + */ + public ContentBlock(Source source) { + this(Type.IMAGE, source, null, null, null, null, null, null, null, null, null, null, null, null, null, null, + null, null); + } + + /** + * Create content block + * @param text The text of the content. + */ + public ContentBlock(String text) { + this(Type.TEXT, null, text, null, null, null, null, null, null, null, null, null, null, null, null, null, + null, null); + } + + public ContentBlock(String text, CacheControl cache) { + this(Type.TEXT, null, text, null, null, null, null, null, null, null, null, null, cache, null, null, null, + null, null); + } + + // Tool result + /** + * Create content block + * @param type The type of the content. + * @param toolUseId The id of the tool use. + * @param content The content of the tool result. + */ + public ContentBlock(Type type, String toolUseId, String content) { + this(type, null, null, null, null, null, null, toolUseId, content, null, null, null, null, null, null, null, + null, null); + } + + /** + * Create content block + * @param type The type of the content. + * @param source The source of the content. + * @param text The text of the content. + * @param index The index of the content block. 
+	 */
+	public ContentBlock(Type type, Source source, String text, Integer index) {
+		this(type, source, text, index, null, null, null, null, null, null, null, null, null, null, null, null,
+				null, null);
+	}
+
+	// Tool use input JSON delta streaming
+	/**
+	 * Create content block
+	 * @param type The type of the content.
+	 * @param id The id of the tool use.
+	 * @param name The name of the tool use.
+	 * @param input The input of the tool use.
+	 */
+	public ContentBlock(Type type, String id, String name, Map<String, Object> input) {
+		this(type, null, null, null, id, name, input, null, null, null, null, null, null, null, null, null, null,
+				null);
+	}
+
+	/**
+	 * Create a document ContentBlock with citations and optional caching.
+	 * @param source The document source
+	 * @param title Optional document title
+	 * @param context Optional document context
+	 * @param citationsEnabled Whether citations are enabled
+	 * @param cacheControl Optional cache control (can be null)
+	 */
+	public ContentBlock(Source source, String title, String context, boolean citationsEnabled,
+			CacheControl cacheControl) {
+		this(Type.DOCUMENT, source, null, null, null, null, null, null, null, null, null, null, cacheControl, title,
+				context, citationsEnabled ? new CitationsConfig(true) : null, null, null);
+	}
+
+	public static ContentBlockBuilder from(ContentBlock contentBlock) {
+		return new ContentBlockBuilder(contentBlock);
+	}
+
+	/**
+	 * Returns the content as a String if it is a String, null otherwise.
+	 * <p>
+ * Note: The {@link #content()} field was changed from {@code String} to + * {@code Object} to support Skills responses which may return complex nested + * structures. If you are using the low-level API and previously relied on + * {@code content()} returning a String, use this method instead. + * @return the content as String, or null if content is not a String + */ + public String contentAsString() { + return this.content instanceof String s ? s : null; + } + + /** + * The ContentBlock type. + */ + public enum Type { + + /** + * Tool request + */ + @JsonProperty("tool_use") + TOOL_USE("tool_use"), + + /** + * Send tool result back to LLM. + */ + @JsonProperty("tool_result") + TOOL_RESULT("tool_result"), + + /** + * Text message. + */ + @JsonProperty("text") + TEXT("text"), + + /** + * Text delta message. Returned from the streaming response. + */ + @JsonProperty("text_delta") + TEXT_DELTA("text_delta"), + + /** + * When using extended thinking with streaming enabled, you’ll receive + * thinking content via thinking_delta events. These deltas correspond to the + * thinking field of the thinking content blocks. + */ + @JsonProperty("thinking_delta") + THINKING_DELTA("thinking_delta"), + + /** + * For thinking content, a special signature_delta event is sent just before + * the content_block_stop event. This signature is used to verify the + * integrity of the thinking block. + */ + @JsonProperty("signature_delta") + SIGNATURE_DELTA("signature_delta"), + + /** + * Tool use input partial JSON delta streaming. + */ + @JsonProperty("input_json_delta") + INPUT_JSON_DELTA("input_json_delta"), + + /** + * Image message. + */ + @JsonProperty("image") + IMAGE("image"), + + /** + * Document message. + */ + @JsonProperty("document") + DOCUMENT("document"), + + /** + * Thinking message. + */ + @JsonProperty("thinking") + THINKING("thinking"), + + /** + * Redacted Thinking message. 
+ */ + @JsonProperty("redacted_thinking") + REDACTED_THINKING("redacted_thinking"), + + /** + * File content block representing a file generated by Skills. Used in + * {@link org.springframework.ai.anthropic.SkillsResponseHelper} to extract + * file IDs for downloading generated documents. + */ + @JsonProperty("file") + FILE("file"), + + /** + * Bash code execution tool result returned in Skills responses. Observed in + * actual API responses where file IDs are nested within this content block. + * Required for JSON deserialization. + */ + @JsonProperty("bash_code_execution_tool_result") + BASH_CODE_EXECUTION_TOOL_RESULT("bash_code_execution_tool_result"), + + /** + * Text editor code execution tool result returned in Skills responses. + * Observed in actual API responses. Required for JSON deserialization. + */ + @JsonProperty("text_editor_code_execution_tool_result") + TEXT_EDITOR_CODE_EXECUTION_TOOL_RESULT("text_editor_code_execution_tool_result"), + + /** + * Server-side tool use returned in Skills responses. Observed in actual API + * responses when Skills invoke server-side tools. Required for JSON + * deserialization. + */ + @JsonProperty("server_tool_use") + SERVER_TOOL_USE("server_tool_use"); + + public final String value; + + Type(String value) { + this.value = value; + } + + /** + * Get the value of the type. + * @return The value of the type. + */ + public String getValue() { + return this.value; + } + + } + + /** + * The source of the media content. (Applicable for "image" types only) + * + * @param type The type of the media content. Only "base64" is supported at the + * moment. + * @param mediaType The media type of the content. For example, "image/png" or + * "image/jpeg". + * @param data The base64-encoded data of the content. 
+	 */
+	@JsonInclude(Include.NON_NULL)
+	public record Source(
+	// @formatter:off
+		@JsonProperty("type") String type,
+		@JsonProperty("media_type") String mediaType,
+		@JsonProperty("data") String data,
+		@JsonProperty("url") String url,
+		@JsonProperty("content") List<Source> content) {
+	// @formatter:on
+
+		/**
+		 * Create source
+		 * @param mediaType The media type of the content.
+		 * @param data The content data.
+		 */
+		public Source(String mediaType, String data) {
+			this("base64", mediaType, data, null, null);
+		}
+
+		public Source(String url) {
+			this("url", null, null, url, null);
+		}
+
+		public Source(List<Source> content) {
+			this("content", null, null, null, content);
+		}
+
+	}
+
+	public static class ContentBlockBuilder {
+
+		private Type type;
+
+		private Source source;
+
+		private String text;
+
+		private Integer index;
+
+		private String id;
+
+		private String name;
+
+		private Map<String, Object> input;
+
+		private String toolUseId;
+
+		private Object content;
+
+		private String signature;
+
+		private String thinking;
+
+		private String data;
+
+		private CacheControl cacheControl;
+
+		private String title;
+
+		private String context;
+
+		private Object citations;
+
+		private String fileId;
+
+		private String filename;
+
+		public ContentBlockBuilder(ContentBlock contentBlock) {
+			this.type = contentBlock.type;
+			this.source = contentBlock.source;
+			this.text = contentBlock.text;
+			this.index = contentBlock.index;
+			this.id = contentBlock.id;
+			this.name = contentBlock.name;
+			this.input = contentBlock.input;
+			this.toolUseId = contentBlock.toolUseId;
+			this.content = contentBlock.content;
+			this.signature = contentBlock.signature;
+			this.thinking = contentBlock.thinking;
+			this.data = contentBlock.data;
+			this.cacheControl = contentBlock.cacheControl;
+			this.title = contentBlock.title;
+			this.context = contentBlock.context;
+			this.citations = contentBlock.citations;
+			this.fileId = contentBlock.fileId;
+			this.filename = contentBlock.filename;
+		}
+
+		public ContentBlockBuilder
type(Type type) {
+			this.type = type;
+			return this;
+		}
+
+		public ContentBlockBuilder source(Source source) {
+			this.source = source;
+			return this;
+		}
+
+		public ContentBlockBuilder text(String text) {
+			this.text = text;
+			return this;
+		}
+
+		public ContentBlockBuilder index(Integer index) {
+			this.index = index;
+			return this;
+		}
+
+		public ContentBlockBuilder id(String id) {
+			this.id = id;
+			return this;
+		}
+
+		public ContentBlockBuilder name(String name) {
+			this.name = name;
+			return this;
+		}
+
+		public ContentBlockBuilder input(Map<String, Object> input) {
+			this.input = input;
+			return this;
+		}
+
+		public ContentBlockBuilder toolUseId(String toolUseId) {
+			this.toolUseId = toolUseId;
+			return this;
+		}
+
+		public ContentBlockBuilder content(Object content) {
+			this.content = content;
+			return this;
+		}
+
+		public ContentBlockBuilder signature(String signature) {
+			this.signature = signature;
+			return this;
+		}
+
+		public ContentBlockBuilder thinking(String thinking) {
+			this.thinking = thinking;
+			return this;
+		}
+
+		public ContentBlockBuilder data(String data) {
+			this.data = data;
+			return this;
+		}
+
+		public ContentBlockBuilder cacheControl(CacheControl cacheControl) {
+			this.cacheControl = cacheControl;
+			return this;
+		}
+
+		public ContentBlockBuilder fileId(String fileId) {
+			this.fileId = fileId;
+			return this;
+		}
+
+		public ContentBlockBuilder filename(String filename) {
+			this.filename = filename;
+			return this;
+		}
+
+		public ContentBlock build() {
+			return new ContentBlock(this.type, this.source, this.text, this.index, this.id, this.name, this.input,
+					this.toolUseId, this.content, this.signature, this.thinking, this.data, this.cacheControl,
+					this.title, this.context, this.citations, this.fileId, this.filename);
+		}
+
+	}
+	}
+
+	///////////////////////////////////////
+	/// CONTENT_BLOCK EVENTS
+	///////////////////////////////////////
+
+	/**
+	 * Tool description.
+	 *
+	 * @param type The type of the tool (e.g., "code_execution_20250825" for code
+	 * execution).
+	 * @param name The name of the tool.
+	 * @param description A description of the tool.
+	 * @param inputSchema The input schema of the tool.
+	 * @param cacheControl Optional cache control for this tool.
+	 */
+	@JsonInclude(Include.NON_NULL)
+	public record Tool(
+	// @formatter:off
+		@JsonProperty("type") String type,
+		@JsonProperty("name") String name,
+		@JsonProperty("description") String description,
+		@JsonProperty("input_schema") Map<String, Object> inputSchema,
+		@JsonProperty("cache_control") CacheControl cacheControl) {
+	// @formatter:on
+
+		/**
+		 * Constructor for backward compatibility without type or cache control.
+		 */
+		public Tool(String name, String description, Map<String, Object> inputSchema) {
+			this(null, name, description, inputSchema, null);
+		}
+
+	}
+
+	/**
+	 * Base interface for tool choice options.
+	 */
+	@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXISTING_PROPERTY, property = "type",
+			visible = true)
+	@JsonSubTypes({ @JsonSubTypes.Type(value = ToolChoiceAuto.class, name = "auto"),
+			@JsonSubTypes.Type(value = ToolChoiceAny.class, name = "any"),
+			@JsonSubTypes.Type(value = ToolChoiceTool.class, name = "tool"),
+			@JsonSubTypes.Type(value = ToolChoiceNone.class, name = "none") })
+	public interface ToolChoice {
+
+		@JsonProperty("type")
+		String type();
+
+	}
+
+	/**
+	 * Auto tool choice - the model will automatically decide whether to use tools.
+	 *
+	 * @param type The type of tool choice, always "auto".
+	 * @param disableParallelToolUse Whether to disable parallel tool use. Defaults to
+	 * false. If set to true, the model will output at most one tool use.
+	 */
+	@JsonInclude(Include.NON_NULL)
+	public record ToolChoiceAuto(@JsonProperty("type") String type,
+			@JsonProperty("disable_parallel_tool_use") Boolean disableParallelToolUse) implements ToolChoice {
+
+		/**
+		 * Create an auto tool choice with default settings.
+ */ + public ToolChoiceAuto() { + this("auto", null); + } + + /** + * Create an auto tool choice with specific parallel tool use setting. + * @param disableParallelToolUse Whether to disable parallel tool use. + */ + public ToolChoiceAuto(Boolean disableParallelToolUse) { + this("auto", disableParallelToolUse); + } + + } + + /** + * Any tool choice - the model will use any available tools. + * + * @param type The type of tool choice, always "any". + * @param disableParallelToolUse Whether to disable parallel tool use. Defaults to + * false. If set to true, the model will output exactly one tool use. + */ + @JsonInclude(Include.NON_NULL) + public record ToolChoiceAny(@JsonProperty("type") String type, + @JsonProperty("disable_parallel_tool_use") Boolean disableParallelToolUse) implements ToolChoice { + + /** + * Create an any tool choice with default settings. + */ + public ToolChoiceAny() { + this("any", null); + } + + /** + * Create an any tool choice with specific parallel tool use setting. + * @param disableParallelToolUse Whether to disable parallel tool use. + */ + public ToolChoiceAny(Boolean disableParallelToolUse) { + this("any", disableParallelToolUse); + } + + } + + /** + * Tool choice - the model will use the specified tool. + * + * @param type The type of tool choice, always "tool". + * @param name The name of the tool to use. + * @param disableParallelToolUse Whether to disable parallel tool use. Defaults to + * false. If set to true, the model will output exactly one tool use. + */ + @JsonInclude(Include.NON_NULL) + public record ToolChoiceTool(@JsonProperty("type") String type, @JsonProperty("name") String name, + @JsonProperty("disable_parallel_tool_use") Boolean disableParallelToolUse) implements ToolChoice { + + /** + * Create a tool choice for a specific tool. + * @param name The name of the tool to use. 
+ */ + public ToolChoiceTool(String name) { + this("tool", name, null); + } + + /** + * Create a tool choice for a specific tool with parallel tool use setting. + * @param name The name of the tool to use. + * @param disableParallelToolUse Whether to disable parallel tool use. + */ + public ToolChoiceTool(String name, Boolean disableParallelToolUse) { + this("tool", name, disableParallelToolUse); + } + + } + + /** + * None tool choice - the model will not be allowed to use tools. + * + * @param type The type of tool choice, always "none". + */ + @JsonInclude(Include.NON_NULL) + public record ToolChoiceNone(@JsonProperty("type") String type) implements ToolChoice { + + /** + * Create a none tool choice. + */ + public ToolChoiceNone() { + this("none"); + } + + } + + // CB START EVENT + + /** + * Chat completion response object. + * + * @param id Unique object identifier. The format and length of IDs may change over + * time. + * @param type Object type. For Messages, this is always "message". + * @param role Conversational role of the generated message. This will always be + * "assistant". + * @param content Content generated by the model. This is an array of content blocks. + * @param model The model that handled the request. + * @param stopReason The reason the model stopped generating tokens. This will be one + * of "end_turn", "max_tokens", "stop_sequence", "tool_use", or "timeout". + * @param stopSequence Which custom stop sequence was generated, if any. + * @param usage Input and output token usage. 
+	 */
+	@JsonInclude(Include.NON_NULL)
+	@JsonIgnoreProperties(ignoreUnknown = true)
+	public record ChatCompletionResponse(
+	// @formatter:off
+		@JsonProperty("id") String id,
+		@JsonProperty("type") String type,
+		@JsonProperty("role") Role role,
+		@JsonProperty("content") List<ContentBlock> content,
+		@JsonProperty("model") String model,
+		@JsonProperty("stop_reason") String stopReason,
+		@JsonProperty("stop_sequence") String stopSequence,
+		@JsonProperty("usage") Usage usage,
+		@JsonProperty("container") Container container) {
+	// @formatter:on
+
+		/**
+		 * Container information for Skills execution context. Contains container_id that
+		 * can be reused in multi-turn conversations.
+		 *
+		 * @param id Container identifier (format: container_*)
+		 */
+		@JsonInclude(Include.NON_NULL)
+		@JsonIgnoreProperties(ignoreUnknown = true)
+		public record Container(@JsonProperty("id") String id) {
+		}
+	}
+
+	// CB DELTA EVENT
+
+	/**
+	 * Usage statistics.
+	 *
+	 * @param inputTokens The number of input tokens which were used.
+	 * @param outputTokens The number of output tokens which were used (completion).
+	 */
+	@JsonInclude(Include.NON_NULL)
+	@JsonIgnoreProperties(ignoreUnknown = true)
+	public record Usage(
+	// @formatter:off
+		@JsonProperty("input_tokens") Integer inputTokens,
+		@JsonProperty("output_tokens") Integer outputTokens,
+		@JsonProperty("cache_creation_input_tokens") Integer cacheCreationInputTokens,
+		@JsonProperty("cache_read_input_tokens") Integer cacheReadInputTokens) {
+	// @formatter:on
+	}
+
+	/// ECB STOP
+
+	/**
+	 * Special event used to aggregate multiple tool use events into a single event with
+	 * list of aggregated ContentBlockToolUse.
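The aggregation described above can be sketched in isolation. This is a hedged, stand-alone illustration, not the actual implementation: the class and method names below are invented for the sketch, while the real code lives in `ToolUseAggregationEvent` (which additionally tracks `index`/`id`/`name` and parses the JSON with `ModelOptionsUtils.jsonToMap`). The core idea is that streaming `input_json_delta` fragments are concatenated verbatim and only treated as one complete JSON document when the content block stops.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of tool-use delta aggregation: append each partial_json
// fragment, then "squash" the accumulated buffer into a completed block.
public class ToolUseJsonAggregation {

	private final StringBuilder partialJson = new StringBuilder();

	private final List<String> completedJsonBlocks = new ArrayList<>();

	// Mirrors appendPartialJson: each input_json_delta carries a raw fragment.
	public void append(String fragment) {
		this.partialJson.append(fragment);
	}

	// Mirrors squashIntoContentBlock: on content_block_stop the concatenated
	// fragments form one complete JSON document for the tool input.
	public void squash() {
		this.completedJsonBlocks.add(this.partialJson.toString());
		this.partialJson.setLength(0);
	}

	public List<String> completed() {
		return this.completedJsonBlocks;
	}

	public static void main(String[] args) {
		ToolUseJsonAggregation agg = new ToolUseJsonAggregation();
		agg.append("{\"locat");
		agg.append("ion\": \"Paris\"}");
		agg.squash();
		System.out.println(agg.completed().get(0)); // {"location": "Paris"}
	}
}
```

Note the design choice this mirrors: fragments are never parsed individually, because an `input_json_delta` payload is not guaranteed to be well-formed JSON on its own.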
+	 */
+	public static class ToolUseAggregationEvent implements StreamEvent {
+
+		private Integer index;
+
+		private String id;
+
+		private String name;
+
+		private String partialJson = "";
+
+		private List<ContentBlockStartEvent.ContentBlockToolUse> toolContentBlocks = new ArrayList<>();
+
+		@Override
+		public EventType type() {
+			return EventType.TOOL_USE_AGGREGATE;
+		}
+
+		/**
+		 * Get tool content blocks.
+		 * @return The tool content blocks.
+		 */
+		public List<ContentBlockStartEvent.ContentBlockToolUse> getToolContentBlocks() {
+			return this.toolContentBlocks;
+		}
+
+		/**
+		 * Check if the event is empty.
+		 * @return True if the event is empty, false otherwise.
+		 */
+		public boolean isEmpty() {
+			return (this.index == null || this.id == null || this.name == null);
+		}
+
+		ToolUseAggregationEvent withIndex(Integer index) {
+			this.index = index;
+			return this;
+		}
+
+		ToolUseAggregationEvent withId(String id) {
+			this.id = id;
+			return this;
+		}
+
+		ToolUseAggregationEvent withName(String name) {
+			this.name = name;
+			return this;
+		}
+
+		ToolUseAggregationEvent appendPartialJson(String partialJson) {
+			this.partialJson = this.partialJson + partialJson;
+			return this;
+		}
+
+		void squashIntoContentBlock() {
+			Map<String, Object> map = (StringUtils.hasText(this.partialJson))
+					? ModelOptionsUtils.jsonToMap(this.partialJson) : Map.of();
+			this.toolContentBlocks.add(new ContentBlockStartEvent.ContentBlockToolUse("tool_use", this.id, this.name, map));
+			this.index = null;
+			this.id = null;
+			this.name = null;
+			this.partialJson = "";
+		}
+
+		@Override
+		public String toString() {
+			return "EventToolUseBuilder [index=" + this.index + ", id=" + this.id + ", name=" + this.name + ", partialJson="
+					+ this.partialJson + ", toolUseMap=" + this.toolContentBlocks + "]";
+		}
+
+	}
+
+	///////////////////////////////////////
+	/// MESSAGE EVENTS
+	///////////////////////////////////////
+
+	// MESSAGE START EVENT
+
+	/**
+	 * Content block start event.
+	 * @param type The event type.
+	 * @param index The index of the content block.
+	 * @param contentBlock The content block body.
+	 */
+	@JsonInclude(Include.NON_NULL)
+	@JsonIgnoreProperties(ignoreUnknown = true)
+	public record ContentBlockStartEvent(
+	// @formatter:off
+		@JsonProperty("type") EventType type,
+		@JsonProperty("index") Integer index,
+		@JsonProperty("content_block") ContentBlockBody contentBlock) implements StreamEvent {
+
+		@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXISTING_PROPERTY, property = "type",
+				visible = true)
+		@JsonSubTypes({
+				@JsonSubTypes.Type(value = ContentBlockToolUse.class, name = "tool_use"),
+				@JsonSubTypes.Type(value = ContentBlockText.class, name = "text"),
+				@JsonSubTypes.Type(value = ContentBlockThinking.class, name = "thinking")
+		})
+		public interface ContentBlockBody {
+			String type();
+		}
+
+		/**
+		 * Tool use content block.
+		 * @param type The content block type.
+		 * @param id The tool use id.
+		 * @param name The tool use name.
+		 * @param input The tool use input.
+		 */
+		@JsonInclude(Include.NON_NULL)
+		@JsonIgnoreProperties(ignoreUnknown = true)
+		public record ContentBlockToolUse(
+			@JsonProperty("type") String type,
+			@JsonProperty("id") String id,
+			@JsonProperty("name") String name,
+			@JsonProperty("input") Map<String, Object> input) implements ContentBlockBody {
+		}
+
+		/**
+		 * Text content block.
+		 * @param type The content block type.
+		 * @param text The text content.
+		 */
+		@JsonInclude(Include.NON_NULL)
+		@JsonIgnoreProperties(ignoreUnknown = true)
+		public record ContentBlockText(
+			@JsonProperty("type") String type,
+			@JsonProperty("text") String text) implements ContentBlockBody {
+		}
+
+		/**
+		 * Thinking content block.
+		 * @param type The content block type.
+		 * @param thinking The thinking content.
+		 */
+		@JsonInclude(Include.NON_NULL)
+		public record ContentBlockThinking(
+			@JsonProperty("type") String type,
+			@JsonProperty("thinking") String thinking,
+			@JsonProperty("signature") String signature) implements ContentBlockBody {
+		}
+	}
+	// @formatter:on
+
+	// MESSAGE DELTA EVENT
+
+	/**
+	 * Content block delta event.
+ * + * @param type The event type. + * @param index The index of the content block. + * @param delta The content block delta body. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record ContentBlockDeltaEvent( + // @formatter:off + @JsonProperty("type") EventType type, + @JsonProperty("index") Integer index, + @JsonProperty("delta") ContentBlockDeltaBody delta) implements StreamEvent { + + @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXISTING_PROPERTY, property = "type", + visible = true) + @JsonSubTypes({ @JsonSubTypes.Type(value = ContentBlockDeltaText.class, name = "text_delta"), + @JsonSubTypes.Type(value = ContentBlockDeltaJson.class, name = "input_json_delta"), + @JsonSubTypes.Type(value = ContentBlockDeltaThinking.class, name = "thinking_delta"), + @JsonSubTypes.Type(value = ContentBlockDeltaSignature.class, name = "signature_delta") + }) + public interface ContentBlockDeltaBody { + String type(); + } + + /** + * Text content block delta. + * @param type The content block type. + * @param text The text content. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record ContentBlockDeltaText( + @JsonProperty("type") String type, + @JsonProperty("text") String text) implements ContentBlockDeltaBody { + } + + /** + * JSON content block delta. + * @param type The content block type. + * @param partialJson The partial JSON content. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record ContentBlockDeltaJson( + @JsonProperty("type") String type, + @JsonProperty("partial_json") String partialJson) implements ContentBlockDeltaBody { + } + + /** + * Thinking content block delta. + * @param type The content block type. + * @param thinking The thinking content. 
+ */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record ContentBlockDeltaThinking( + @JsonProperty("type") String type, + @JsonProperty("thinking") String thinking) implements ContentBlockDeltaBody { + } + + /** + * Signature content block delta. + * @param type The content block type. + * @param signature The signature content. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record ContentBlockDeltaSignature( + @JsonProperty("type") String type, + @JsonProperty("signature") String signature) implements ContentBlockDeltaBody { + } + } + // @formatter:on + + // MESSAGE STOP EVENT + + /** + * Content block stop event. + * + * @param type The event type. + * @param index The index of the content block. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record ContentBlockStopEvent( + // @formatter:off + @JsonProperty("type") EventType type, + @JsonProperty("index") Integer index) implements StreamEvent { + } + // @formatter:on + + /** + * Message start event. + * + * @param type The event type. + * @param message The message body. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record MessageStartEvent(// @formatter:off + @JsonProperty("type") EventType type, + @JsonProperty("message") ChatCompletionResponse message) implements StreamEvent { + } + // @formatter:on + + /** + * Message delta event. + * + * @param type The event type. + * @param delta The message delta body. + * @param usage The message delta usage. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record MessageDeltaEvent( + // @formatter:off + @JsonProperty("type") EventType type, + @JsonProperty("delta") MessageDelta delta, + @JsonProperty("usage") MessageDeltaUsage usage) implements StreamEvent { + + /** + * @param stopReason The stop reason. + * @param stopSequence The stop sequence. 
+ */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record MessageDelta( + @JsonProperty("stop_reason") String stopReason, + @JsonProperty("stop_sequence") String stopSequence) { + } + + /** + * Message delta usage. + * @param outputTokens The output tokens. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record MessageDeltaUsage( + @JsonProperty("output_tokens") Integer outputTokens) { + } + } + // @formatter:on + + /** + * Message stop event. + * + * @param type The event type. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record MessageStopEvent( + //@formatter:off + @JsonProperty("type") EventType type) implements StreamEvent { + } + // @formatter:on + + /////////////////////////////////////// + /// ERROR EVENT + /////////////////////////////////////// + /** + * Error event. + * + * @param type The event type. + * @param error The error body. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record ErrorEvent( + // @formatter:off + @JsonProperty("type") EventType type, + @JsonProperty("error") Error error) implements StreamEvent { + + /** + * Error body. + * @param type The error type. + * @param message The error message. + */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record Error( + @JsonProperty("type") String type, + @JsonProperty("message") String message) { + } + } + // @formatter:on + + /////////////////////////////////////// + /// PING EVENT + /////////////////////////////////////// + /** + * Ping event. + * + * @param type The event type. 
+ */ + @JsonInclude(Include.NON_NULL) + @JsonIgnoreProperties(ignoreUnknown = true) + public record PingEvent( + // @formatter:off + @JsonProperty("type") EventType type) implements StreamEvent { + } + // @formatter:on + + public static final class Builder { + + private String baseUrl = DEFAULT_BASE_URL; + + private String completionsPath = DEFAULT_MESSAGE_COMPLETIONS_PATH; + + private ApiKey apiKey; + + private String anthropicVersion = DEFAULT_ANTHROPIC_VERSION; + + private RestClient.Builder restClientBuilder = RestClient.builder(); + + private WebClient.Builder webClientBuilder = WebClient.builder(); + + private ResponseErrorHandler responseErrorHandler = RetryUtils.DEFAULT_RESPONSE_ERROR_HANDLER; + + private String anthropicBetaFeatures = DEFAULT_ANTHROPIC_BETA_VERSION; + + public Builder baseUrl(String baseUrl) { + Assert.hasText(baseUrl, "baseUrl cannot be null or empty"); + this.baseUrl = baseUrl; + return this; + } + + public Builder completionsPath(String completionsPath) { + Assert.hasText(completionsPath, "completionsPath cannot be null or empty"); + this.completionsPath = completionsPath; + return this; + } + + public Builder apiKey(ApiKey apiKey) { + Assert.notNull(apiKey, "apiKey cannot be null"); + this.apiKey = apiKey; + return this; + } + + public Builder apiKey(String simpleApiKey) { + Assert.notNull(simpleApiKey, "simpleApiKey cannot be null"); + this.apiKey = new SimpleApiKey(simpleApiKey); + return this; + } + + public Builder anthropicVersion(String anthropicVersion) { + Assert.notNull(anthropicVersion, "anthropicVersion cannot be null"); + this.anthropicVersion = anthropicVersion; + return this; + } + + public Builder restClientBuilder(RestClient.Builder restClientBuilder) { + Assert.notNull(restClientBuilder, "restClientBuilder cannot be null"); + this.restClientBuilder = restClientBuilder; + return this; + } + + public Builder webClientBuilder(WebClient.Builder webClientBuilder) { + Assert.notNull(webClientBuilder, "webClientBuilder cannot 
be null"); + this.webClientBuilder = webClientBuilder; + return this; + } + + public Builder responseErrorHandler(ResponseErrorHandler responseErrorHandler) { + Assert.notNull(responseErrorHandler, "responseErrorHandler cannot be null"); + this.responseErrorHandler = responseErrorHandler; + return this; + } + + public Builder anthropicBetaFeatures(String anthropicBetaFeatures) { + Assert.notNull(anthropicBetaFeatures, "anthropicBetaFeatures cannot be null"); + this.anthropicBetaFeatures = anthropicBetaFeatures; + return this; + } + + public AnthropicApi build() { + Assert.notNull(this.apiKey, "apiKey must be set"); + return new AnthropicApi(this.baseUrl, this.completionsPath, this.apiKey, this.anthropicVersion, + this.restClientBuilder, this.webClientBuilder, this.responseErrorHandler, + this.anthropicBetaFeatures); + } + + } + +} diff --git a/server-lite/src/main/java/org/conductoross/conductor/Conductor.java b/server-lite/src/main/java/org/conductoross/conductor/Conductor.java index 6ae70843fb..579546caa3 100644 --- a/server-lite/src/main/java/org/conductoross/conductor/Conductor.java +++ b/server-lite/src/main/java/org/conductoross/conductor/Conductor.java @@ -21,13 +21,14 @@ import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration; +import org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration; import org.springframework.context.annotation.ComponentScan; import org.springframework.core.io.FileSystemResource; // Prevents from the datasource beans to be loaded, AS they are needed only for specific databases. // In case that SQL database is selected this class will be imported back in the appropriate // database persistence module. 
-@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class}) +@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class, MongoAutoConfiguration.class}) @ComponentScan(basePackages = {"com.netflix.conductor", "io.orkes.conductor", "org.conductoross"}) public class Conductor { diff --git a/server/src/main/java/com/netflix/conductor/Conductor.java b/server/src/main/java/com/netflix/conductor/Conductor.java index d6f1a214aa..f25bbbd3ff 100644 --- a/server/src/main/java/com/netflix/conductor/Conductor.java +++ b/server/src/main/java/com/netflix/conductor/Conductor.java @@ -24,6 +24,7 @@ import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration; +import org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration; import org.springframework.context.annotation.ComponentScan; import org.springframework.core.env.Environment; import org.springframework.core.io.FileSystemResource; @@ -31,7 +32,7 @@ // Prevents from the datasource beans to be loaded, AS they are needed only for specific databases. // In case that SQL database is selected this class will be imported back in the appropriate // database persistence module. 
-@SpringBootApplication(exclude = DataSourceAutoConfiguration.class) +@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class, MongoAutoConfiguration.class}) @ComponentScan( basePackages = { "com.netflix.conductor", diff --git a/server/src/main/resources/application.properties b/server/src/main/resources/application.properties index 3a5968f09d..d8de35478b 100644 --- a/server/src/main/resources/application.properties +++ b/server/src/main/resources/application.properties @@ -189,6 +189,29 @@ conductor.metrics-logger.enabled=false # Start AI Workers conductor.integrations.ai.enabled=true +# ============================================================================= +# Document Access Policy - Security defaults for DocumentLoader +# ============================================================================= +# Prevents reading/writing sensitive files from local filesystem and blocks +# cloud metadata endpoints (AWS, GCP, Azure, Alibaba). Built-in defaults +# cover /etc/passwd, /etc/shadow, /proc/, ~/.ssh/, ~/.aws/, cloud metadata +# IPs (169.254.169.254, metadata.google.internal), and common secret files +# (.env, credentials.json, id_rsa, keystore.jks, etc.). +# +# The file-storage parentDir is automatically included as an allowed directory +# for local filesystem access. Only paths under allowed directories are permitted; +# all others are denied regardless of blocklists. 
+# To allow additional directories beyond the parentDir (comma-separated): +#conductor.document-access-policy.allowed-directories=/tmp/imports/,/data/shared/ +# +# Add custom entries (comma-separated) to extend the built-in blocklists: +#conductor.document-access-policy.blocked-path-prefixes=/custom/sensitive/,/internal/data/ +#conductor.document-access-policy.blocked-file-names=secret.yaml,api-key.txt +#conductor.document-access-policy.blocked-hosts=internal.corp.net,10.0.0.1 +# +# Emergency override (NOT recommended for production): +conductor.document-access-policy.disabled=false + # ============================================================================= # AI Provider Configuration - Environment Variable Defaults # ============================================================================= @@ -233,7 +256,8 @@ conductor.ai.bedrock.access-key=${AWS_ACCESS_KEY_ID:} conductor.ai.bedrock.secret-key=${AWS_SECRET_ACCESS_KEY:} conductor.ai.bedrock.region=${AWS_REGION:us-east-1} -# Google Vertex AI (Gemini, Veo) - also uses GOOGLE_APPLICATION_CREDENTIALS for auth +# Google Gemini / Vertex AI (Gemini, Veo) - use API key OR GOOGLE_APPLICATION_CREDENTIALS for auth +conductor.ai.gemini.api-key=${GEMINI_API_KEY:} conductor.ai.gemini.project-id=${GOOGLE_CLOUD_PROJECT:} conductor.ai.gemini.location=${GOOGLE_CLOUD_LOCATION:us-central1} diff --git a/springboot-bom-overrides.gradle b/springboot-bom-overrides.gradle index aa5bd41ad7..854141d6ba 100644 --- a/springboot-bom-overrides.gradle +++ b/springboot-bom-overrides.gradle @@ -12,7 +12,7 @@ */ // Contains overrides for Spring Boot Dependency Management plugin -// Dependency version override properties can be found at https://docs.spring.io/spring-boot/docs/3.1.4/reference/htmlsingle/#appendix.dependency-versions.properties +// Dependency version override properties can be found at https://docs.spring.io/spring-boot/docs/3.3.11/reference/htmlsingle/#appendix.dependency-versions.properties // Conductor's default is ES6, 
but Spring Boot brings in ES7 ext['elasticsearch.version'] = revElasticSearch7 diff --git a/task-status-listener/src/main/java/com/netflix/conductor/contribs/listener/StatusNotifierNotificationProperties.java b/task-status-listener/src/main/java/com/netflix/conductor/contribs/listener/StatusNotifierNotificationProperties.java index 7eaeddccb2..0bff115698 100644 --- a/task-status-listener/src/main/java/com/netflix/conductor/contribs/listener/StatusNotifierNotificationProperties.java +++ b/task-status-listener/src/main/java/com/netflix/conductor/contribs/listener/StatusNotifierNotificationProperties.java @@ -30,6 +30,8 @@ public class StatusNotifierNotificationProperties { private String endpointWorkflow; + private List<String> subscribedWorkflowStatuses; + private String headerPrefer = ""; private String headerPreferValue = ""; @@ -151,4 +153,12 @@ public List<String> getSubscribedTaskStatuses() { public void setSubscribedTaskStatuses(List<String> subscribedTaskStatuses) { this.subscribedTaskStatuses = subscribedTaskStatuses; } + + public List<String> getSubscribedWorkflowStatuses() { + return subscribedWorkflowStatuses; + } + + public void setSubscribedWorkflowStatuses(List<String> subscribedWorkflowStatuses) { + this.subscribedWorkflowStatuses = subscribedWorkflowStatuses; + } } diff --git a/test-harness/src/test/groovy/com/netflix/conductor/test/integration/DoWhileSpec.groovy b/test-harness/src/test/groovy/com/netflix/conductor/test/integration/DoWhileSpec.groovy index b5a9c877ef..d32aca9869 100644 --- a/test-harness/src/test/groovy/com/netflix/conductor/test/integration/DoWhileSpec.groovy +++ b/test-harness/src/test/groovy/com/netflix/conductor/test/integration/DoWhileSpec.groovy @@ -44,7 +44,8 @@ class DoWhileSpec extends AbstractSpecification { 'do_while_system_tasks.json', 'do_while_with_decision_task.json', 'do_while_set_variable_fix.json', - 'do_while_high_iteration_test.json') + 'do_while_high_iteration_test.json', + 'do_while_list_iteration_integration_test.json') } def "Test workflow with 2 iterations
of five tasks"() { @@ -1298,6 +1299,52 @@ class DoWhileSpec extends AbstractSpecification { } } + def "Test DO_WHILE list iteration with items parameter iterates over each item"() { + given: "A list of items to iterate over" + def workflowInput = new HashMap() + workflowInput['items'] = ['apple', 'banana', 'cherry'] + + when: "A do_while_list_iteration workflow is started" + def workflowInstanceId = startWorkflow("do_while_list_iteration", 1, "listtest", workflowInput, null) + + then: "Verify the workflow runs all 3 iterations via LAMBDA tasks and completes" + with(workflowExecutionService.getExecutionStatus(workflowInstanceId, true)) { + status == Workflow.WorkflowStatus.COMPLETED + // DO_WHILE task + 1 LAMBDA per iteration = 4 tasks total + tasks.size() == 4 + tasks[0].taskType == 'DO_WHILE' + tasks[0].status == Task.Status.COMPLETED + tasks[0].iteration == 3 + tasks[1].taskType == 'LAMBDA' + tasks[1].status == Task.Status.COMPLETED + tasks[1].iteration == 1 + tasks[2].taskType == 'LAMBDA' + tasks[2].status == Task.Status.COMPLETED + tasks[2].iteration == 2 + tasks[3].taskType == 'LAMBDA' + tasks[3].status == Task.Status.COMPLETED + tasks[3].iteration == 3 + } + } + + def "Test DO_WHILE list iteration with empty items list completes immediately"() { + given: "An empty items list" + def workflowInput = new HashMap() + workflowInput['items'] = [] + + when: "A do_while_list_iteration workflow is started with an empty list" + def workflowInstanceId = startWorkflow("do_while_list_iteration", 1, "emptylisttest", workflowInput, null) + + then: "Verify the workflow completes immediately without executing any loop body tasks" + with(workflowExecutionService.getExecutionStatus(workflowInstanceId, true)) { + status == Workflow.WorkflowStatus.COMPLETED + // Only the DO_WHILE task itself, no LAMBDA tasks scheduled + tasks.size() == 1 + tasks[0].taskType == 'DO_WHILE' + tasks[0].status == Task.Status.COMPLETED + } + } + void verifyTaskIteration(Task task, int iteration) { 
assert task.getReferenceTaskName().endsWith(TaskUtils.getLoopOverTaskRefNameSuffix(task.getIteration())) assert task.iteration == iteration diff --git a/test-harness/src/test/resources/do_while_list_iteration_integration_test.json b/test-harness/src/test/resources/do_while_list_iteration_integration_test.json new file mode 100644 index 0000000000..d3481a32a2 --- /dev/null +++ b/test-harness/src/test/resources/do_while_list_iteration_integration_test.json @@ -0,0 +1,36 @@ +{ + "name": "do_while_list_iteration", + "description": "DO_WHILE with native list iteration using items parameter", + "version": 1, + "tasks": [ + { + "name": "list_loop", + "taskReferenceName": "list_loop", + "inputParameters": {}, + "type": "DO_WHILE", + "items": "${workflow.input.items}", + "loopOver": [ + { + "name": "LAMBDA_TASK", + "taskReferenceName": "process_item", + "inputParameters": { + "scriptExpression": "return { processedItem: $.list_loop.loopItem, index: $.list_loop.loopIndex }", + "list_loop": "${list_loop.output}" + }, + "type": "LAMBDA", + "startDelay": 0, + "optional": false, + "asyncComplete": false + } + ] + } + ], + "inputParameters": ["items"], + "outputParameters": {}, + "schemaVersion": 2, + "restartable": true, + "workflowStatusListenerEnabled": false, + "timeoutPolicy": "ALERT_ONLY", + "timeoutSeconds": 0, + "ownerEmail": "test@harness.com" +} diff --git a/workflow-event-listener/src/main/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisher.java b/workflow-event-listener/src/main/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisher.java index 4fac33a3dd..735dbcf1d3 100644 --- a/workflow-event-listener/src/main/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisher.java +++ b/workflow-event-listener/src/main/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisher.java @@ -13,6 +13,8 @@ package com.netflix.conductor.contribs.listener.statuschange; import 
java.io.IOException; +import java.util.Arrays; +import java.util.List; import java.util.concurrent.BlockingQueue; import java.util.concurrent.LinkedBlockingDeque; @@ -37,6 +39,7 @@ public class StatusChangePublisher implements WorkflowStatusListener { private BlockingQueue<WorkflowModel> blockingQueue = new LinkedBlockingDeque<>(QDEPTH); private RestClientManager rcm; private ExecutionDAOFacade executionDAOFacade; + private List<String> subscribedWorkflowStatusList; class ExceptionHandler implements Thread.UncaughtExceptionHandler { public void uncaughtException(Thread t, Throwable e) { @@ -88,42 +91,90 @@ public void run() { } @Inject - public StatusChangePublisher(RestClientManager rcm, ExecutionDAOFacade executionDAOFacade) { + public StatusChangePublisher( + RestClientManager rcm, + ExecutionDAOFacade executionDAOFacade, + List<String> subscribedWorkflowStatuses) { this.rcm = rcm; this.executionDAOFacade = executionDAOFacade; + // Preserve backward compatibility - default to COMPLETED and TERMINATED + this.subscribedWorkflowStatusList = + (subscribedWorkflowStatuses != null && !subscribedWorkflowStatuses.isEmpty()) + ?
subscribedWorkflowStatuses + : Arrays.asList("COMPLETED", "TERMINATED"); ConsumerThread consumerThread = new ConsumerThread(); consumerThread.start(); } + @Override + public void onWorkflowStarted(WorkflowModel workflow) { + if (subscribedWorkflowStatusList != null + && subscribedWorkflowStatusList.contains("RUNNING")) { + enqueueWorkflow(workflow); + } + } + @Override public void onWorkflowCompleted(WorkflowModel workflow) { - LOGGER.debug( - "workflows completion {} {}", workflow.getWorkflowId(), workflow.getWorkflowName()); - try { - blockingQueue.put(workflow); - } catch (Exception e) { - LOGGER.error( - "Failed to enqueue workflow: Id {} Name {}", - workflow.getWorkflowId(), - workflow.getWorkflowName()); - LOGGER.error(e.toString()); + if (subscribedWorkflowStatusList != null + && subscribedWorkflowStatusList.contains("COMPLETED")) { + enqueueWorkflow(workflow); } } @Override public void onWorkflowTerminated(WorkflowModel workflow) { - LOGGER.debug( - "workflows termination {} {}", - workflow.getWorkflowId(), - workflow.getWorkflowName()); - try { - blockingQueue.put(workflow); - } catch (Exception e) { - LOGGER.error( - "Failed to enqueue workflow: Id {} Name {}", - workflow.getWorkflowId(), - workflow.getWorkflowName()); - LOGGER.error(e.getMessage()); + if (subscribedWorkflowStatusList != null + && subscribedWorkflowStatusList.contains("TERMINATED")) { + enqueueWorkflow(workflow); + } + } + + @Override + public void onWorkflowPaused(WorkflowModel workflow) { + if (subscribedWorkflowStatusList != null + && subscribedWorkflowStatusList.contains("PAUSED")) { + enqueueWorkflow(workflow); + } + } + + @Override + public void onWorkflowResumed(WorkflowModel workflow) { + if (subscribedWorkflowStatusList != null + && subscribedWorkflowStatusList.contains("RESUMED")) { + enqueueWorkflow(workflow); + } + } + + @Override + public void onWorkflowRestarted(WorkflowModel workflow) { + if (subscribedWorkflowStatusList != null + && 
subscribedWorkflowStatusList.contains("RESTARTED")) { + enqueueWorkflow(workflow); + } + } + + @Override + public void onWorkflowRetried(WorkflowModel workflow) { + if (subscribedWorkflowStatusList != null + && subscribedWorkflowStatusList.contains("RETRIED")) { + enqueueWorkflow(workflow); + } + } + + @Override + public void onWorkflowRerun(WorkflowModel workflow) { + if (subscribedWorkflowStatusList != null + && subscribedWorkflowStatusList.contains("RERAN")) { + enqueueWorkflow(workflow); + } + } + + @Override + public void onWorkflowFinalized(WorkflowModel workflow) { + if (subscribedWorkflowStatusList != null + && subscribedWorkflowStatusList.contains("FINALIZED")) { + enqueueWorkflow(workflow); } } @@ -137,6 +188,23 @@ public void onWorkflowTerminatedIfEnabled(WorkflowModel workflow) { onWorkflowTerminated(workflow); } + private void enqueueWorkflow(WorkflowModel workflow) { + LOGGER.debug( + "Enqueuing workflow status change: {} {} {}", + workflow.getWorkflowId(), + workflow.getWorkflowName(), + workflow.getStatus()); + try { + blockingQueue.put(workflow); + } catch (Exception e) { + LOGGER.error( + "Failed to enqueue workflow: Id {} Name {}", + workflow.getWorkflowId(), + workflow.getWorkflowName()); + LOGGER.error(e.getMessage()); + } + } + private void publishStatusChangeNotification(StatusChangeNotification statusChangeNotification) throws IOException { String jsonWorkflow = statusChangeNotification.toJsonStringWithInputOutput(); diff --git a/workflow-event-listener/src/main/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisherConfiguration.java b/workflow-event-listener/src/main/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisherConfiguration.java index 441475caf3..cd14728c9b 100644 --- a/workflow-event-listener/src/main/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisherConfiguration.java +++ 
b/workflow-event-listener/src/main/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisherConfiguration.java @@ -41,8 +41,11 @@ public RestClientManager getRestClientManager(StatusNotifierNotificationProperti @Bean public WorkflowStatusListener getWorkflowStatusListener( - RestClientManager restClientManager, ExecutionDAOFacade executionDAOFacade) { + RestClientManager restClientManager, + ExecutionDAOFacade executionDAOFacade, + StatusNotifierNotificationProperties config) { - return new StatusChangePublisher(restClientManager, executionDAOFacade); + return new StatusChangePublisher( + restClientManager, executionDAOFacade, config.getSubscribedWorkflowStatuses()); } } diff --git a/workflow-event-listener/src/test/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisherTest.java b/workflow-event-listener/src/test/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisherTest.java new file mode 100644 index 0000000000..23326905ce --- /dev/null +++ b/workflow-event-listener/src/test/java/com/netflix/conductor/contribs/listener/statuschange/StatusChangePublisherTest.java @@ -0,0 +1,417 @@ +/* + * Copyright 2024 Conductor Authors. + *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + *
+ * http://www.apache.org/licenses/LICENSE-2.0 + *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on + * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the + * specific language governing permissions and limitations under the License. + */ +package com.netflix.conductor.contribs.listener.statuschange; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.UUID; +import java.util.concurrent.TimeUnit; + +import org.junit.Before; +import org.junit.Test; +import org.mockito.Mockito; + +import com.netflix.conductor.common.metadata.workflow.WorkflowDef; +import com.netflix.conductor.contribs.listener.RestClientManager; +import com.netflix.conductor.core.dal.ExecutionDAOFacade; +import com.netflix.conductor.model.WorkflowModel; + +import static org.junit.Assert.*; +import static org.mockito.Mockito.*; + +/** Unit tests for StatusChangePublisher to verify configurable workflow status publishing. 
*/ +public class StatusChangePublisherTest { + + private WorkflowModel workflow; + private RestClientManager restClientManager; + private ExecutionDAOFacade executionDAOFacade; + + @Before + public void setUp() { + workflow = new WorkflowModel(); + WorkflowDef def = new WorkflowDef(); + def.setName("test_workflow"); + def.setVersion(1); + workflow.setWorkflowDefinition(def); + workflow.setWorkflowId(UUID.randomUUID().toString()); + workflow.setStatus(WorkflowModel.Status.RUNNING); + + restClientManager = Mockito.mock(RestClientManager.class); + executionDAOFacade = Mockito.mock(ExecutionDAOFacade.class); + } + + @Test + public void testOnWorkflowStarted_WithRunningInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("RUNNING", "COMPLETED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + publisher.onWorkflowStarted(workflow); + + // Give the consumer thread time to process + TimeUnit.MILLISECONDS.sleep(100); + + // Verify that the workflow was enqueued (would be published) + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowStarted_WithoutRunningInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("COMPLETED", "TERMINATED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + publisher.onWorkflowStarted(workflow); + + // Give time to ensure nothing is processed + TimeUnit.MILLISECONDS.sleep(100); + + // Verify that no notification was posted + verify(restClientManager, never()).postNotification(any(), anyString(), anyString(), any()); + } + + @Test + public void testOnWorkflowCompleted_WithCompletedInSubscribedList() throws Exception { + List<String> subscribedStatuses =
Arrays.asList("RUNNING", "COMPLETED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + workflow.setStatus(WorkflowModel.Status.COMPLETED); + publisher.onWorkflowCompleted(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowCompleted_WithoutCompletedInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("RUNNING", "TERMINATED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + workflow.setStatus(WorkflowModel.Status.COMPLETED); + publisher.onWorkflowCompleted(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, never()).postNotification(any(), anyString(), anyString(), any()); + } + + @Test + public void testOnWorkflowTerminated_WithTerminatedInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("COMPLETED", "TERMINATED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + workflow.setStatus(WorkflowModel.Status.TERMINATED); + publisher.onWorkflowTerminated(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowPaused_WithPausedInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("PAUSED", "RESUMED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + workflow.setStatus(WorkflowModel.Status.PAUSED); +
publisher.onWorkflowPaused(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowResumed_WithResumedInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("PAUSED", "RESUMED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + workflow.setStatus(WorkflowModel.Status.RUNNING); + publisher.onWorkflowResumed(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowRestarted_WithRestartedInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("RESTARTED", "RETRIED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + publisher.onWorkflowRestarted(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowRetried_WithRetriedInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("RESTARTED", "RETRIED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + publisher.onWorkflowRetried(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), +
any()); + } + + @Test + public void testOnWorkflowRerun_WithReranInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("RERAN", "FINALIZED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + publisher.onWorkflowRerun(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowFinalized_WithFinalizedInSubscribedList() throws Exception { + List<String> subscribedStatuses = Arrays.asList("RERAN", "FINALIZED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + workflow.setStatus(WorkflowModel.Status.COMPLETED); + publisher.onWorkflowFinalized(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testWithNullSubscribedList() throws Exception { + StatusChangePublisher publisher = + new StatusChangePublisher(restClientManager, executionDAOFacade, null); + + // These should NOT be published (not in the default list) + publisher.onWorkflowStarted(workflow); + publisher.onWorkflowPaused(workflow); + publisher.onWorkflowResumed(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + // Verify no notifications for non-default statuses + verify(restClientManager, never()).postNotification(any(), anyString(), anyString(), any()); + + // These SHOULD be published (backward-compatible defaults) + workflow.setStatus(WorkflowModel.Status.COMPLETED); + publisher.onWorkflowCompleted(workflow); + + workflow.setStatus(WorkflowModel.Status.TERMINATED); +
publisher.onWorkflowTerminated(workflow); + + // Verify COMPLETED and TERMINATED are published by default + verify(restClientManager, timeout(1000).atLeast(2)) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testWithEmptySubscribedList() throws Exception { + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, Collections.emptyList()); + + // This should NOT be published (not in the default list) + publisher.onWorkflowStarted(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + // Verify no notifications for non-default statuses + verify(restClientManager, never()).postNotification(any(), anyString(), anyString(), any()); + + // These SHOULD be published (backward-compatible defaults) + workflow.setStatus(WorkflowModel.Status.COMPLETED); + publisher.onWorkflowCompleted(workflow); + + workflow.setStatus(WorkflowModel.Status.TERMINATED); + publisher.onWorkflowTerminated(workflow); + + // Verify COMPLETED and TERMINATED are published by default + verify(restClientManager, timeout(1000).atLeast(2)) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testMultipleStatusesInSubscribedList() throws Exception { + List<String> subscribedStatuses = + Arrays.asList( + "RUNNING", + "COMPLETED", + "TERMINATED", + "PAUSED", + "RESUMED", + "RESTARTED", + "RETRIED", + "RERAN", + "FINALIZED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + // Test multiple status changes + publisher.onWorkflowStarted(workflow); + TimeUnit.MILLISECONDS.sleep(50); + + workflow.setStatus(WorkflowModel.Status.COMPLETED); + publisher.onWorkflowCompleted(workflow); + TimeUnit.MILLISECONDS.sleep(50); + + workflow.setStatus(WorkflowModel.Status.TERMINATED); +
publisher.onWorkflowTerminated(workflow); + TimeUnit.MILLISECONDS.sleep(50); + + // Verify that all three notifications were posted + verify(restClientManager, timeout(1000).atLeast(3)) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowCompletedIfEnabled() throws Exception { + List<String> subscribedStatuses = Arrays.asList("COMPLETED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + workflow.setStatus(WorkflowModel.Status.COMPLETED); + publisher.onWorkflowCompletedIfEnabled(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testOnWorkflowTerminatedIfEnabled() throws Exception { + List<String> subscribedStatuses = Arrays.asList("TERMINATED"); + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + workflow.setStatus(WorkflowModel.Status.TERMINATED); + publisher.onWorkflowTerminatedIfEnabled(workflow); + + TimeUnit.MILLISECONDS.sleep(100); + + verify(restClientManager, timeout(1000).atLeastOnce()) + .postNotification( + eq(RestClientManager.NotificationType.WORKFLOW), + anyString(), + eq(workflow.getWorkflowId()), + any()); + } + + @Test + public void testCaseSensitivityOfStatusNames() throws Exception { + // Test that status names must match exactly (case-sensitive) + List<String> subscribedStatuses = Arrays.asList("running", "completed"); // lowercase + StatusChangePublisher publisher = + new StatusChangePublisher( + restClientManager, executionDAOFacade, subscribedStatuses); + + publisher.onWorkflowStarted(workflow); // checks for "RUNNING" (uppercase) + + TimeUnit.MILLISECONDS.sleep(100); + + //
Should not publish because "running" != "RUNNING" + verify(restClientManager, never()).postNotification(any(), anyString(), anyString(), any()); + } +}
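Taken together, the `StatusChangePublisher` changes in this patch reduce to a case-sensitive membership check against a configurable status list, falling back to `COMPLETED` and `TERMINATED` when nothing is configured. The following standalone sketch isolates just that filtering logic so it can be reasoned about outside the Conductor runtime; the class `SubscriptionFilterSketch` and method `shouldPublish` are illustrative names, not part of the Conductor API.

```java
import java.util.Arrays;
import java.util.List;

class SubscriptionFilterSketch {
    private final List<String> subscribed;

    SubscriptionFilterSketch(List<String> configured) {
        // Mirror the constructor in the patch: preserve backward
        // compatibility by defaulting to COMPLETED and TERMINATED
        // when no statuses are configured.
        this.subscribed =
                (configured != null && !configured.isEmpty())
                        ? configured
                        : Arrays.asList("COMPLETED", "TERMINATED");
    }

    boolean shouldPublish(String status) {
        // Case-sensitive contains() check, as in the listener callbacks.
        return subscribed.contains(status);
    }

    public static void main(String[] args) {
        SubscriptionFilterSketch defaults = new SubscriptionFilterSketch(null);
        System.out.println(defaults.shouldPublish("COMPLETED")); // true (default)
        System.out.println(defaults.shouldPublish("RUNNING"));   // false

        SubscriptionFilterSketch custom =
                new SubscriptionFilterSketch(Arrays.asList("RUNNING", "PAUSED"));
        System.out.println(custom.shouldPublish("RUNNING"));     // true
        System.out.println(custom.shouldPublish("completed"));   // false: case-sensitive
    }
}
```

Note the case-sensitivity: the new `testCaseSensitivityOfStatusNames` test encodes the same behavior, so configured values must use the uppercase status names.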