diff --git a/skills/casely/SKILL.md b/skills/casely/SKILL.md new file mode 100644 index 000000000..95129bc53 --- /dev/null +++ b/skills/casely/SKILL.md @@ -0,0 +1,208 @@ +--- +name: casely +description: > + Intelligent QA assistant that automates writing test cases from project documentation. + Use when the user wants to generate test cases from requirements, runs /init, /parse, /style, /plan, /generate, /export, or works with PDF/DOCX/XLSX requirement documents and TestRail-ready Excel export. +license: "MIT" +metadata: + author: "John Wayne" + version: "1.5.0" + category: "QA Automation" + repository: "https://github.com/JohnWayneeee/casely-qa-skill" +--- + +# Casely — QA Test Case Generator + +Casely automates the most time-consuming part of a QA engineer's job: writing test cases. +It reads requirement documents and learns from your team's existing test case examples to produce +structured, style-consistent test suites ready for import into any Test Management System. + +## Why this matters + +Manual test case writing accounts for ~40% of a QA engineer's time. Requirements come in +fragmented formats (PDF, DOCX, XLSX). Every team has its own column structure, naming conventions, +and writing style. Casely solves this by: + +- Converting any document format to clean Markdown via `docling`. +- Extracting formal style rules from your team's example test cases. +- Generating test cases that match your team's exact structure and tone. +- Exporting to Excel with correct column mapping for TMS import. + +--- + +## Commands + +### `/init [ProjectName]` +Creates a new isolated project workspace and verifies the environment. + +### `/parse` +Runs the CaselyParser to convert all raw assets (requirements and examples) to Markdown. + +### `/style` +Analyzes example test cases and generates a persistent `test_style_guide.md`. + +### `/plan` +Scans parsed requirements and suggests a testing plan with modules and test types. 
+ +### `/generate [type]` +Generates atomic test cases of the specified type (functional, negative, integration, boundary, etc.). + +### `/export` +Converts generated Markdown test cases into a formatted `.xlsx` file. + +--- + +## Full Workflow + +### Phase 1: Project Initialization & Environment Setup (`/init`) + +When the user runs `/init [ProjectName]` (or asks to start a new testing project): + +1. **Create Directories:** Create the project directory structure under `projects/` in the repository root: + - `input/requirements/` + - `input/examples/` + - `processed/requirements/` + - `processed/examples/` + - `results/` + - `exports/` + +2. **Environment Setup via `uv`:** + - **Location:** Dependencies are defined in `pyproject.toml` at the **repository root** (not inside the skill folder). Scripts expect `uv sync` to have been run from that root. + - Check if `pyproject.toml` exists at the repo root. If not, run `uv init` there. + - Install/verify dependencies: `uv add docling openpyxl` (or `uv sync` from repo root). + - This ensures a lightning-fast setup and handles all sub-dependencies (e.g. `torch` for `docling`) automatically. + +3. **Confirm to the user:** + - "Project `{project_name}` initialized via UV. Environment and dependencies (`docling`, `openpyxl`) are ready." + - "Place your requirement documents into `projects/{project_name}/input/requirements/` and examples into `projects/{project_name}/input/examples/`." + +### Phase 2: Document Parsing (`/parse`) + +When the user runs `/parse` (or asks to parse/process documents): + +1. **Locate the project.** If there's only one project under `projects/`, use it automatically. + If multiple exist, ask the user which one. + +2. **Run CaselyParser** — The parser is located at `scripts/casely_parser.py` within this skill. + It uses `docling` and supports all major formats. 
+
+   Via CLI (optional arguments, auto-detects latest project if omitted):
+   ```bash
+   uv run python <skill_path>/scripts/casely_parser.py
+   ```
+   *(Or manual paths if needed)*
+   ```bash
+   uv run python <skill_path>/scripts/casely_parser.py "projects/{name}/input/requirements" "projects/{name}/processed/requirements"
+   ```
+
+3. **Report results** to the user: how many files were parsed, any errors, and a summary of processed files.
+
+### Phase 3: Style Guide Creation (`/style`)
+
+1. **Read all parsed example files** from `processed/examples/`.
+
+2. **Analyze the table structure** to extract headers, data types, and mandatory fields.
+   - **CRITICAL:** The style guide MUST be an exact replica of the example's column structure.
+   - **MANDATORY:** Transfer ALL headers from the example files to the `test_style_guide.md` in their exact order. Do not rename, omit (e.g., "Comments", "Author"), or add new columns unless explicitly requested.
+
+3. **Analyze the writing style** to extract language, tone, and formatting patterns (e.g., how steps are phrased).
+
+4. **Generate `test_style_guide.md`** in the project root. This file acts as the "source of truth" and must explicitly define the horizontal table row structure.
+
+5. **Present the style guide** to the user for review. Any manual adjustments to this file will be respected by the generator.
+
+### Phase 4: Professional Test Design & Planning (`/plan`)
+
+1. **Load Context & Analysis:**
+   - Read parsed requirements from `processed/requirements/`.
+   - Load `test_style_guide.md` to match the example structure (columns → test complexity).
+
+2. **Structural Breakdown:**
+   - Extract modules/endpoints/logic blocks from requirements.
+   - Categorize by **Level**: API (fields/status), Integration (flows), E2E (scenarios).
+
+3. **Smart Estimation (Style-Driven):**
+   - **Metrics from Style Guide:** Fields per test (from columns), branches from logic.
+   - **Coverage Tiers** (total cases based on examples):
+     | Tier | Cases/Module | Coverage | Focus |
+     |------|--------------|----------|-------|
+     | Smoke | 1-3 | Min | Golden path |
+     | Critical (80%) | N (fields × 0.8) | Key paths | High-risk (finance/auth) |
+     | Full | All permutations | 100% | Edges/negatives |
+   - **Risk Scoring:** High (security), Med (logic), Low (UI).
+
+4. **Traceability & Prep:**
+   - Quick **RTM Preview**: Req ID → Planned Cases (e.g., "REQ-001 → 5 cases").
+   - **Data/Deps:** Test data rules (valid/edge), mocks needed.
+
+5. **Output Plan:**
+   - Table by Module: *Module | Level | Est. Cases (80%) | Type | Tools*.
+   - **MANDATORY:** Provide ready-to-copy commands for each module.
+   - Save `test_plan.md` (importable to TMS).
+   - Ask: *"Generate Critical Path? `/generate functional MODULE_NAME`"* or *"`/generate negative MODULE_NAME`"*.
+
+**Next:** "`/generate [type]` will create exactly the estimated number of files, with each file containing one atomic test case matching your style guide."
+
+### Phase 5: Test Case Generation (`/generate [type]`)
+
+1. **Load context:**
+   - **BINDING:** Read `test_style_guide.md` (Mandatory Source of Truth).
+   - Read relevant parsed requirement files.
+   - Target a specific module and test type.
+
+2. **Generate ATOMIC test cases:**
+   - **One File = One Test Case (1 ID = 1 Scenario):** Each test case MUST be saved as a separate Markdown file in `results/`.
+   - **Horizontal Structure:** Each file MUST contain exactly ONE horizontal table row (header row + data row). Do NOT use vertical "key-value" lists.
+   - **Naming Convention:** `{type}_{id}_{short_description}.md`.
+   - **Match the style guide exactly** — same columns (1:1 with the example), same tone, same structure.
+   - **No Hallucinations** — only use columns and data points supported by the guide and requirements.
+
+3. **Proactive Report:**
+   - Notify the user of created files.
+   - **Mandatory Next Step:** Always advise the user on what else they can generate. Example:
+     *"I've generated functional cases. You can now run `/generate negative` to check error handling or `/generate security` for device metadata."*
+
+### Phase 6: Export to Excel (`/export`)
+
+1. **Convert Markdown files to Excel** using `scripts/export_to_xlsx.py`.
+   - **Smart Execution:** The script automatically detects the most recently modified project in the `projects/` directory if no paths are provided.
+2. **Atomic One-to-One Export:** For every `.md` file in `results/`, the tool creates exactly one corresponding `.xlsx` file in `exports/`.
+   - **Behavior:** Direct format conversion preserving the file count.
+   - **Naming:** Files are named identically to their source: `{type}_{id}_{short_description}.xlsx`.
+3. **Internal Structure:** Each Excel file contains a single sheet called "Test Case" with the columns exactly matching the project's style guide.
+4. **Plain Text Export:** Content is exported as plain text with support for multi-line cells (using `<br/>`).
+5. **Save to `exports/`**.
+
+---
+
+## Important Guidelines
+
+### Proactive Guidance (Crucial)
+After every command, Casely MUST provide a "Next Step" block.
+- After `/init` -> suggest `/parse`.
+- After `/parse` -> suggest `/style`.
+- After `/style` -> suggest `/plan`.
+- After `/plan` -> list specific commands like `/generate functional` or `/generate negative`.
+- After `/generate` -> suggest `/export` OR other generation types.
+
+### Language Awareness
+Casely is language-agnostic for data. It detects the language of the provided examples (e.g., Russian) and generates test cases in that same language; the style guide records the detected language so generation stays consistent with the existing test base.
+
+### Atomic over Composite
+Always prefer multiple specialized test cases over one "all-in-one" case. This ensures clearer test results and easier bug localization.
+
+### Style Guide is King
+The style guide is the single source of truth. Do not invent new columns or change formatting unless the style guide is updated first.
+
+---
+
+## Skill Files
+
+### Scripts (`scripts/`)
+- `scripts/casely_parser.py` — Document-to-Markdown converter (Docling).
+- `scripts/export_to_xlsx.py` — Markdown-to-Excel exporter.
+
+### References (`references/`)
+- `references/parser_usage.md` — Technical details on calling the parser.
+- `references/export_guide.md` — Details on the MD-to-Excel conversion logic.
+- `references/style_analysis_prompts.md` — Methodologies for style extraction.
diff --git a/skills/casely/evals/evals.json b/skills/casely/evals/evals.json
new file mode 100644
index 000000000..6d8ef7f36
--- /dev/null
+++ b/skills/casely/evals/evals.json
@@ -0,0 +1,29 @@
+{
+  "skill_name": "casely",
+  "evals": [
+    {
+      "id": 1,
+      "prompt": "Initialize a new testing project for the 'Payments' feature. 
I have requirement documents and some previous test case examples.",
+      "expected_output": "Skill creates the standard directory structure under projects/Payments/, checks the environment (uv and libraries), and guides the user on where to place files.",
+      "files": []
+    },
+    {
+      "id": 2,
+      "prompt": "I've uploaded my files and run the parser. Now create a style guide based on my parsed examples in the 'processed' folder.",
+      "expected_output": "Skill analyzes the Markdown tables in the processed folder that originated from examples, extracts column structure and writing style, and generates a comprehensive test_style_guide.md.",
+      "files": []
+    },
+    {
+      "id": 3,
+      "prompt": "Based on the requirements in the 'processed' folder and using 'test_style_guide.md', generate a suite of atomic functional test cases.",
+      "expected_output": "Skill identifies features from requirements, applies the style guide formatting, and generates atomic test cases (one scenario per row) in the results directory.",
+      "files": []
+    },
+    {
+      "id": 4,
+      "prompt": "Everything looks good. Export the generated test cases to a formatted Excel file for my TMS.",
+      "expected_output": "Skill executes the export script to convert Markdown tables in the results folder into professional .xlsx files in the exports folder.",
+      "files": []
+    }
+  ]
+}
\ No newline at end of file
diff --git a/skills/casely/references/export_guide.md b/skills/casely/references/export_guide.md
new file mode 100644
index 000000000..24c95f827
--- /dev/null
+++ b/skills/casely/references/export_guide.md
@@ -0,0 +1,35 @@
+# Export Guide: Markdown to Excel
+
+This guide describes how Casely converts generated Markdown test cases into formatted Excel files for TMS import.
+
+## Overview
+
+The `export_to_xlsx.py` script parses Markdown tables and recreates them in an Excel workbook using the `openpyxl` library.
+
+## Features
+
+- **Column Mapping:** Automatically maps Markdown headers to Excel columns.
+- **Formatting:** Applies bold, centered styling to header cells.
+- **Auto-Width:** Calculates appropriate column widths based on content.
+- **Multi-line Support:** Correctly handles line breaks (`<br/>` or `\n`) within cells.
+- **Styling:** Wraps long cell text and top-aligns it for readability.
+
+## Usage
+
+Run the script from the command line:
+
+```bash
+python scripts/export_to_xlsx.py <results_dir> <output_path>
+```
+
+- `results_dir`: Directory containing the `.md` files to export.
+- `output_path`: Directory where the `.xlsx` files will be created (one per Markdown file).
+
+If you omit both arguments, the script will:
+
+- Automatically detect the most recently modified project under `projects/`
+- Use its `results/` folder as the source and `exports/` as the output directory
+
+## Handling Special Characters
+
+The script cleans worksheet names by removing illegal characters (like `\ / * ? [ ] :`) to ensure Excel compatibility.
diff --git a/skills/casely/references/parser_usage.md b/skills/casely/references/parser_usage.md
new file mode 100644
index 000000000..e1b91eff2
--- /dev/null
+++ b/skills/casely/references/parser_usage.md
@@ -0,0 +1,45 @@
+# Parser Usage Guide
+
+This guide explains how to use the `casely_parser.py` script provided with the Casely skill.
+
+## Overview
+
+The parser uses the `docling` library to convert various document formats into Markdown. This allows the LLM to easily analyze requirements and test case examples.
+
+## Supported Formats
+
+- Documents: PDF, DOCX, PPTX, XLSX, HTML, HTM, TXT, MD
+- Images: PNG, JPG, JPEG, TIFF
+
+## Command Line Interface (CLI)
+
+You can run the parser directly from the terminal:
+
+```bash
+python scripts/casely_parser.py <input_dir> <output_dir>
+```
+
+- `input_dir`: Path to the folder containing your source documents.
+- `output_dir`: Path where the converted Markdown files will be saved.
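If both directories are omitted, the script falls back to the most recently modified project under `projects/` (the `find_latest_project()` helper shared by both Casely scripts). A self-contained sketch of that selection logic; the `projects_root` parameter is added here purely for illustration:

```python
from pathlib import Path
from typing import Optional

def find_latest_project(projects_root: str = "projects") -> Optional[Path]:
    """Return the most recently modified project directory, if any."""
    projects_dir = Path(projects_root)
    if not projects_dir.exists():
        return None
    subdirs = [d for d in projects_dir.iterdir() if d.is_dir()]
    if not subdirs:
        return None
    # Newest modification time wins: the project touched last is assumed active
    return max(subdirs, key=lambda d: d.stat().st_mtime)
```

This is why dropping new documents into a project's `input/` folder is enough for the next argument-less run to pick that project up.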
+ +## Programmatic Usage + +You can also import and use the `DocumentParser` class in your Python scripts: + +```python +from casely_parser import DocumentParser + +# Initialize the parser +parser = DocumentParser() + +# Parse a specific folder +result = parser.parse_folder("projects/MyProject/input/requirements", "projects/MyProject/processed") + +# Access results +print(f"Processed {result.processed} new files.") +``` + +## Naming Convention + +Processed files are saved in the output directory with the prefix `_parsed_` and the `.md` extension. +Example: `Requirement.pdf` becomes `_parsed_Requirement.md`. diff --git a/skills/casely/references/style_analysis_prompts.md b/skills/casely/references/style_analysis_prompts.md new file mode 100644 index 000000000..b2ec08b6b --- /dev/null +++ b/skills/casely/references/style_analysis_prompts.md @@ -0,0 +1,27 @@ +# Test Style Analysis Methodology + +This document outlines how Casely extracts formatting and stylistic rules from example test cases to ensure consistent generation. + +## Analysis Process + +1. **Structure Extraction:** + - Detects all column headers from the Markdown table. + - Identifies the order of columns to maintain the sequence in new test cases. + - Infers data types (numeric, date, enum) from the content of the cells. + +2. **Stylistic Patterns:** + - **Preconditions:** Checks if they are numbered (1, 2, 3) or bulleted. Analyzes the level of technical detail. + - **Steps:** Analyzes the verb tense (imperative, etc.) and punctuation style. + - **Expected Results:** Detects if results are grouped or single sentences. + +3. **Taxonomy Discovery:** + - Identifies allowed values for Priority, Status, and other enumerated fields. + - Detects specific prefixes or suffixes used in IDs or titles. + +## Persistence + +The results of this analysis are saved in the project's `test_style_guide.md`. This file serves as the strict template for all future generations. 
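The structure-extraction step described above boils down to reading the header row of an example table. A minimal, self-contained sketch (an illustration with a hypothetical example table, not the skill's actual implementation; it ignores escaped pipes for simplicity):

```python
import re
from typing import List

def extract_headers(markdown_table: str) -> List[str]:
    """Return the column headers of the first Markdown table found."""
    for line in markdown_table.strip().splitlines():
        line = line.strip()
        # The first row starting with '|' that is not a separator row holds the headers
        if line.startswith('|') and not re.match(r'^\|[\s\-:|]+\|$', line):
            return [cell.strip() for cell in line.split('|')[1:-1]]
    return []

# Hypothetical example test case table
example = """
| ID | Title | Steps | Expected Result | Priority |
|----|-------|-------|-----------------|----------|
| TC-1 | Login | 1. Open page | Dashboard shown | High |
"""
print(extract_headers(example))  # → ['ID', 'Title', 'Steps', 'Expected Result', 'Priority']
```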
+
+## Language Detection
+
+The analyzer detects the primary language of the examples. All future test cases for that project will be generated in that language by default to ensure consistency with the existing test base.
diff --git a/skills/casely/scripts/casely_parser.py b/skills/casely/scripts/casely_parser.py
new file mode 100644
index 000000000..f53a8c328
--- /dev/null
+++ b/skills/casely/scripts/casely_parser.py
@@ -0,0 +1,214 @@
+"""
+CaselyParser — Document-to-Markdown converter for LLM analysis.
+
+Uses docling to convert PDF, DOCX, PPTX, XLSX, HTML and other formats
+to clean Markdown while preserving structure, tables and text.
+
+Usage:
+    python casely_parser.py <input_dir> <output_dir>
+
+    Or programmatically:
+        from casely_parser import DocumentParser
+        dp = DocumentParser()
+        result = dp.parse_folder("input/requirements", "processed/requirements")
+"""
+
+import sys
+import logging
+import argparse
+from pathlib import Path
+from typing import List, Optional, cast
+from dataclasses import dataclass
+
+try:
+    from docling.document_converter import DocumentConverter
+except ImportError:
+    print("❌ Error: docling is not installed. Install with: pip install docling")
+    sys.exit(1)
+
+# Logger configuration
+logging.basicConfig(
+    level=logging.INFO,
+    format='%(asctime)s [%(levelname)s] %(message)s',
+    handlers=[
+        logging.StreamHandler()
+    ]
+)
+logger = logging.getLogger(__name__)
+
+
+@dataclass
+class ParsingResult:
+    """Result of a folder parsing operation"""
+    processed: int
+    skipped: int
+    errors: List[str]
+
+
+class DocumentParser:
+    """
+    Document parser based on docling.
+
+    Supported formats:
+        PDF, DOCX, PPTX, XLSX, HTML, HTM, MD, TXT, PNG, JPG, JPEG, TIFF, TIF
+    """
+
+    SUPPORTED_EXTENSIONS = {
+        '.pdf', '.docx', '.pptx', '.xlsx', '.html',
+        '.htm', '.md', '.txt', '.png', '.jpg',
+        '.jpeg', '.tiff', '.tif'
+    }
+
+    def __init__(self, config: Optional[dict] = None):
+        """
+        Initialize the parser.
+ + Args: + config: Configuration dictionary (encoding, overwrite, etc.) + """ + logger.info("🚀 Initializing Docling parser") + self.converter = DocumentConverter() + self.config = config or {} + + def parse_file(self, file_path: Path, output_dir: Path) -> bool: + """ + Parse a single file to Markdown. + + Args: + file_path: Path to the source file + output_dir: Directory to save the result + + Returns: + True if file was successfully converted, False if skipped + """ + output_file = output_dir / f"_parsed_{file_path.stem}.md" + + if output_file.exists(): + logger.info(f"⏭️ Already processed: {output_file.name}") + return False + + try: + logger.info(f"🔄 Parsing: {file_path.name}") + result = self.converter.convert(str(file_path)) + md_content = result.document.export_to_markdown() + output_file.write_text(md_content, encoding='utf-8') + logger.info(f"✅ Success: {file_path.name} → {output_file.name}") + return True + except Exception as e: + logger.error(f"❌ Error in {file_path.name}: {e}") + raise + + def parse_folder(self, raw_dir: str, ready_dir: str) -> ParsingResult: + """ + Parses all supported files from raw_dir into ready_dir. + + Saves as _parsed_{name}.md. Skips already processed files. 
+ + Args: + raw_dir: Source directory + ready_dir: Output directory + + Returns: + ParsingResult containing counts and errors + """ + raw_path = Path(raw_dir) + ready_path = Path(ready_dir) + ready_path.mkdir(parents=True, exist_ok=True) + + if not raw_path.exists(): + logger.error(f"❌ Directory not found: {raw_dir}") + return ParsingResult(0, 0, [f"Directory not found: {raw_dir}"]) + + result = ParsingResult(processed=0, skipped=0, errors=[]) + + for file_path in raw_path.iterdir(): + if not (file_path.is_file() and + file_path.suffix.lower() in self.SUPPORTED_EXTENSIONS): + logger.debug(f"⏭️ Unsupported format: {file_path.name}") + continue + + try: + success = self.parse_file(file_path, ready_path) + if success: + result.processed += 1 + else: + result.skipped += 1 + except Exception as e: + result.errors.append(f"{file_path.name}: {str(e)}") + + logger.info(f"🎉 Folder complete: {raw_dir} → {ready_dir} " + f"({result.processed} new, {result.skipped} skipped)") + if result.errors: + logger.warning(f"⚠️ Errors in {len(result.errors)} files") + + return result + + @classmethod + def get_supported_formats(cls) -> str: + """Returns string of supported extensions""" + return ', '.join(sorted(cls.SUPPORTED_EXTENSIONS)) + + +def find_latest_project() -> Optional[Path]: + """Find the most recently modified project directory.""" + projects_dir = Path("projects") + if not projects_dir.exists(): + return None + + subdirs = [d for d in projects_dir.iterdir() if d.is_dir()] + if not subdirs: + return None + + return max(subdirs, key=lambda d: d.stat().st_mtime) + + +def main(): + """CLI interface for the parser""" + arg_parser = argparse.ArgumentParser( + description='CaselyParser — Document to Markdown converter' + ) + arg_parser.add_argument('input_dir', nargs='?', help='Source directory') + arg_parser.add_argument('output_dir', nargs='?', help='Output directory (Markdown)') + args = arg_parser.parse_args() + + logger.info("📋 Supported formats: %s", 
DocumentParser.get_supported_formats())
+    parser = DocumentParser()
+
+    # Case 1: Manual paths provided
+    if args.input_dir and args.output_dir:
+        parser.parse_folder(args.input_dir, args.output_dir)
+        return
+
+    # Case 2: Auto-detect latest project
+    latest_project_opt = find_latest_project()
+    if not latest_project_opt:
+        logger.error("❌ No project found and no paths provided.")
+        sys.exit(1)
+
+    latest_project = cast(Path, latest_project_opt)
+    logger.info(f"📂 Auto-detected project: {latest_project.name}")
+
+    # List of sub-folders to process
+    sub_tasks = [
+        ("input/requirements", "processed/requirements"),
+        ("input/examples", "processed/examples")
+    ]
+
+    total_processed = 0
+    for inp_sub, out_sub in sub_tasks:
+        inp_path = latest_project / inp_sub
+        out_path = latest_project / out_sub
+
+        if inp_path.exists():
+            logger.info(f"🔎 Scanning {inp_sub}...")
+            result = parser.parse_folder(str(inp_path), str(out_path))
+            total_processed += result.processed
+
+    if total_processed == 0:
+        logger.info("ℹ️ No new files to process.")
+    else:
+        logger.info(f"✅ Finished! Total new files: {total_processed}")
+
+
+if __name__ == '__main__':
+    main()
diff --git a/skills/casely/scripts/export_to_xlsx.py b/skills/casely/scripts/export_to_xlsx.py
new file mode 100644
index 000000000..577d780a8
--- /dev/null
+++ b/skills/casely/scripts/export_to_xlsx.py
@@ -0,0 +1,181 @@
+"""
+Casely Export Module — converts Markdown test case tables into individual Excel files.
+"""
+
+import re
+import sys
+import argparse
+from pathlib import Path
+from typing import Optional, List, Tuple
+
+try:
+    from openpyxl import Workbook
+    from openpyxl.worksheet.worksheet import Worksheet
+    from openpyxl.styles import Font, Alignment
+    from openpyxl.utils import get_column_letter
+except ImportError:
+    print("Error: openpyxl is required. Install it with: pip install openpyxl")
+    sys.exit(1)
+
+MIN_COL_WIDTH = 10
+MAX_COL_WIDTH = 60
+
+
+def _split_table_row(line: str) -> List[str]:
+    """Split a Markdown table row into individual cell values."""
+    # Split on pipes that are not escaped as \|, then drop the empty
+    # fragments produced by the row's leading and trailing pipe.
+    raw_parts = re.split(r'(?<!\\)\|', line)
+    result: List[str] = []
+    for cell in raw_parts[1:-1]:
+        result.append(cell.strip().replace(r'\|', '|'))
+    return result
+
+
+def parse_md_table(md_content: str) -> Tuple[List[str], List[List[str]]]:
+    """Parse a Markdown table into headers and data rows."""
+    lines: List[str] = [
+        line.strip()
+        for line in md_content.strip().split('\n')
+        if line.strip().startswith('|')
+    ]
+    if len(lines) < 2:
+        return [], []
+
+    headers: List[str] = _split_table_row(lines[0])
+    rows: List[List[str]] = []
+
+    for line in lines[1:]:
+        # Skip separator rows like |---|---|
+        if re.match(r'^\|[\s\-:|]+\|$', line):
+            continue
+
+        row: List[str] = _split_table_row(line)
+        if not row:
+            continue
+
+        # Pad short rows with empty strings
+        while len(row) < len(headers):
+            row.append('')
+
+        # Trim extra columns
+        rows.append(row[0:len(headers)])
+
+    return headers, rows
+
+
+def export_to_xlsx(results_dir: str, output_path: str) -> None:
+    """Convert each Markdown test case to a separate Excel file."""
+    results_path = Path(results_dir)
+    out_dir = Path(output_path)
+
+    if not results_path.exists():
+        print(f"Error: Results directory not found: {results_dir}")
+        sys.exit(1)
+
+    # Create the export directory if it does not exist
+    out_dir.mkdir(parents=True, exist_ok=True)
+
+    md_files = sorted(results_path.glob('*.md'))
+    if not md_files:
+        print(f"Warning: No .md files found in {results_dir}")
+        return
+
+    for md_file in md_files:
+        headers, rows = parse_md_table(md_file.read_text(encoding='utf-8'))
+        if not headers:
+            print(f"Skipping {md_file.name}: No table found.")
+            continue
+
+        # Create a new workbook for each file
+        wb = Workbook()
+        ws_opt = wb.active
+        if ws_opt is None:
+            print(f"Skipping {md_file.name}: Failed to create worksheet.")
+            continue
+        ws: Worksheet = ws_opt  # type: ignore[assignment]
+        ws.title = "Test Case"
+
+        # Write headers
+        for col_idx, header in enumerate(headers, 1):
+            ws.cell(row=1, column=col_idx, value=header)
+
+        # Write data rows
+        for row_idx, row in enumerate(rows, 2):
+            for col_idx, value in enumerate(row, 1):
+                clean_value = value.replace('<br/>', '\n').replace('<br>', '\n') if value else ''
+                cell = ws.cell(row=row_idx, column=col_idx, value=clean_value)
+                cell.alignment = Alignment(wrap_text=True, vertical='top')
+
+        # Style headers: bold + center
+        header_font = Font(bold=True)
+        for col_idx in range(1, len(headers) + 1):
+            header_cell = ws.cell(row=1, column=col_idx)
+            header_cell.font = header_font
+            header_cell.alignment = Alignment(horizontal='center', vertical='center')
+
+        # Auto-fit column widths
+        for col_idx in range(1, len(headers) + 1):
+            max_len = len(str(ws.cell(row=1, column=col_idx).value or ''))
+            total_rows = len(rows)
+            for row_idx in range(2, total_rows + 2):
+                cell_val = str(ws.cell(row=row_idx, column=col_idx).value or '')
+                # For multiline cells, use the longest line
+                for line in cell_val.split('\n'):
+                    max_len = max(max_len, len(line))
+            col_letter = get_column_letter(col_idx)
+            width = min(max(max_len + 2, MIN_COL_WIDTH), MAX_COL_WIDTH)
+            ws.column_dimensions[col_letter].width = width
+
+        # Save with the same name but .xlsx extension
+        dest_file = out_dir / f"{md_file.stem}.xlsx"
+        wb.save(str(dest_file))
+        print(f"Exported: {dest_file.name}")
+
+
+def find_latest_project() -> Optional[Path]:
+    """Find the most recently modified project directory."""
+    projects_dir = Path("projects")
+    if not projects_dir.exists():
+        return None
+
+    subdirs = [d for d in projects_dir.iterdir() if d.is_dir()]
+    if not subdirs:
+        return None
+
+    return max(subdirs, key=lambda d: d.stat().st_mtime)
+
+
+def main() -> None:
+    """CLI interface for the exporter."""
+    arg_parser = argparse.ArgumentParser(
+        description='Casely Export — Markdown to Excel converter'
+    )
+    arg_parser.add_argument('results_dir', nargs='?', help='Path to results MD files')
+    arg_parser.add_argument('output_path', nargs='?', help='Path to export XLSX files')
+    args = arg_parser.parse_args()
+
+    results_dir: Optional[str] = args.results_dir
+    output_path: Optional[str] = args.output_path
+
+    # If arguments are not provided, try to find the project automatically
+    if not results_dir or not output_path:
+        latest = find_latest_project()
+        if latest is not None:
+            results_dir = str(latest / "results")
+            output_path = str(latest / "exports")
+            print(f"Auto-detected project: {latest.name}")
+        else:
+            print("Error: No paths provided and no projects found in 'projects/' directory.")
+            sys.exit(1)
+
+    export_to_xlsx(results_dir, output_path)
+
+
+if __name__ == '__main__':
+    main()
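A closing note on the exporter's table parsing: cell text may contain literal pipes escaped as `\|`, which must not be treated as column separators. A standalone re-implementation of the row-splitting rule for illustration (mirroring `_split_table_row`, not the module itself):

```python
import re
from typing import List

def split_table_row(line: str) -> List[str]:
    """Split a Markdown table row on pipes that are not escaped as \\|."""
    # The negative lookbehind skips pipes preceded by a backslash
    parts = re.split(r'(?<!\\)\|', line)
    # Drop the empty fragments before the leading and after the trailing pipe,
    # then unescape any literal pipes inside the cells
    return [cell.strip().replace(r'\|', '|') for cell in parts[1:-1]]

row = r'| TC-1 | Toggle the A\|B switch | Pass |'
print(split_table_row(row))  # → ['TC-1', 'Toggle the A|B switch', 'Pass']
```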