
Conversation


@petercrocker commented Dec 28, 2025

  • Add .venv to yamllint ignore list
  • Fix import sorting in all Python modules
  • Update uv.lock with new dev dependencies

Summary by CodeRabbit

  • New Features

    • None visible to end users.
  • Chores

    • Added repository CODEOWNERS and Apache 2.0 license.
    • Major CI/workflow overhaul and updates to development tooling, linters, and packaging config.
    • Added markdown lint task and expanded linting/formatting rules.
  • Bug Fixes / Reliability

    • More robust service-discovery handling with safer file and GraphQL error behavior.
  • Documentation

    • Updated README and project docs with clearer links and usage/configuration notes.


- Add CODEOWNERS file for code ownership
- Add Apache 2.0 LICENSE.txt
- Update Python version constraint to >=3.10,<3.13
- Add dev dependencies (mypy, pytest, yamllint, invoke, type stubs)
- Add ruff configuration (line-length, format, lint rules)
- Add pytest configuration
- Update CI workflow:
  - Add concurrency control and explicit permissions
  - Update action versions (checkout@v5, setup-uv@v7, setup-node@v6)
  - Add mypy type checking to python-lint job
  - Switch yaml-lint to use uv instead of pip
  - Add markdown-lint job
  - Update Vale version to 3.13.0
- Add .venv to yamllint ignore list
- Fix import sorting in all Python modules
- Update uv.lock with new dev dependencies

coderabbitai bot commented Dec 28, 2025

Walkthrough

Adds a repository CODEOWNERS file and an Apache-2.0 LICENSE.txt. Replaces and reorganizes CI workflows (.github/workflows/ci.yml) with renamed workflow, concurrency controls, updated actions/tool versions, new markdown-lint job, and modified lint/build steps. Updates pyproject.toml with dev dependency and tool configurations (ruff, mypy, pytest, packaging). Introduces YAML linter ignore for .venv, adds markdown lint task, updates documentation and README links, and makes multiple formatting/import reordering and minor API-typing refinements across infrahub_exporter sources.

Pre-merge checks

❌ Failed checks (2 warnings)

Docstring Coverage (⚠️ Warning): Docstring coverage is 55.56%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

Title check (⚠️ Warning): The title 'chore: update CI pipeline to be consistent' refers to CI pipeline updates, but the changeset encompasses far more: type annotation strengthening, markdown linting, documentation updates, license addition, CODEOWNERS configuration, multiple tool version upgrades, and broader code formatting improvements across the repository. Resolution: revise the title to reflect the broader scope, such as 'chore: comprehensive CI pipeline updates and code quality improvements', or split into multiple PRs by concern.

✅ Passed checks (1 passed)

Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.


@petercrocker requested a review from a team December 28, 2025 10:02

cloudflare-workers-and-pages bot commented Dec 28, 2025

Deploying infrahub-exporter with Cloudflare Pages

Latest commit: 0a0d764
Status: ✅  Deploy successful!
Preview URL: https://ccc58dd6.infrahub-exporter.pages.dev
Branch Preview URL: https://pmc-20251228-updates.infrahub-exporter.pages.dev

View logs

coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
pyproject.toml (1)

8-15: Remove Python 3.13 from pyproject.toml or add testing and documentation support.

The project documentation (AGENTS.md) explicitly states the target is Python 3.10-3.12, but pyproject.toml includes Python 3.13 in both the requires-python constraint (>=3.10,<3.14) and classifiers. Additionally, the CI workflow does not test Python 3.13. Either remove the Python 3.13 classifier and update requires-python to >=3.10,<3.13 to align with documented targets, or formally add Python 3.13 testing to CI and update documentation.

infrahub_exporter/metrics_exporter.py (3)

34-48: Missing type annotation for kp parameter.

Per coding guidelines, all functions must have type annotations. The kp parameter should be typed as MetricsKind.

🔎 Proposed fix
     class MetricMeter:
-        def __init__(self, kp: MetricsKind, exporter: "MetricsExporter"):
+        def __init__(self, kp: MetricsKind, exporter: "MetricsExporter") -> None:
             self.kp = kp
             self.exporter = exporter

50-54: Missing return type annotation on __init__.

Per coding guidelines, all functions must have type annotations. Add -> None to the __init__ method signature.

🔎 Proposed fix
-    def __init__(self, client: InfrahubClient, settings: SidecarSettings):
+    def __init__(self, client: InfrahubClient, settings: SidecarSettings) -> None:
         self.client = client

105-105: Use specific type annotation instead of Any.

The kp parameter is typed as Any but should be MetricsKind for proper type safety and IDE support. This is also consistent with how kp is used elsewhere in the class.

🔎 Proposed fix
-    async def _fetch_and_store(self, kp: Any) -> None:
+    async def _fetch_and_store(self, kp: MetricsKind) -> None:
🧹 Nitpick comments (7)
infrahub_exporter/main.py (4)

45-71: Return type annotation is misleading.

The method _setup_routes always returns None (line 71), but the return type annotation suggests it could return JSONResponse | None. The return type should simply be None.

🔎 Proposed fix
-    def _setup_routes(self) -> JSONResponse | None:
+    def _setup_routes(self) -> None:
         @self.app.get("/")
         async def health() -> PlainTextResponse:
             ...
         ...
-        return None

88-97: Server task reference is not stored for cleanup.

The asyncio.create_task(server.serve()) creates a fire-and-forget task without storing the reference. If the server encounters an error, it won't be handled. Consider storing the task reference for proper lifecycle management and to allow stop() to cancel it gracefully.

🔎 Proposed fix
+        self._server_task: asyncio.Task | None = None  # Add to __init__
+
     async def start(self) -> None:
         config = uvicorn.Config(
             app=self.app,
             host=self.listen_address,
             port=self.listen_port,
             log_level=logging.getLevelName(logger.level).lower(),
         )
         server = uvicorn.Server(config)
-        asyncio.create_task(server.serve())
+        self._server_task = asyncio.create_task(server.serve())
         logger.info(f"Server listening on {self.listen_address}:{self.listen_port}")

99-101: stop() method doesn't actually stop the server.

The method logs and sleeps but doesn't cancel the server task. This should cancel the stored server task for proper cleanup.

🔎 Proposed fix
     async def stop(self) -> None:
         logger.info("Stopping server...")
-        await asyncio.sleep(0.1)
+        if self._server_task:
+            self._server_task.cancel()
+            try:
+                await self._server_task
+            except asyncio.CancelledError:
+                pass

128-136: TLS configuration is hardcoded.

The tls_insecure=False and timeout=10 values are hardcoded. Consider making these configurable through SidecarSettings for flexibility in different deployment environments.
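
A minimal sketch of what that could look like, assuming the settings are plain Pydantic models (the class and field names below are illustrative, not the project's actual schema):

from pydantic import BaseModel

class InfrahubConnectionSettings(BaseModel):
    """Hypothetical connection options surfaced through YAML/env config."""

    tls_insecure: bool = False  # skip TLS certificate verification when True
    timeout: int = 10  # request timeout in seconds

The client construction would then read settings.tls_insecure and settings.timeout instead of the hardcoded literals.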

infrahub_exporter/metrics_exporter.py (3)

23-28: Consider using @dataclass for cleaner implementation.

MetricEntry is a simple data container that would benefit from using @dataclass decorator for reduced boilerplate.

🔎 Proposed refactor
+from dataclasses import dataclass
+
-class MetricEntry:
-    """Represents a single metric data point with labels and value."""
-
-    def __init__(self, labels: dict[str, Any], value: int) -> None:
-        self.labels = labels
-        self.value = value
+@dataclass
+class MetricEntry:
+    """Represents a single metric data point with labels and value."""
+    labels: dict[str, Any]
+    value: int

61-82: start_otlp is marked async but contains no await.

The method is declared as async but doesn't contain any await expressions. While this works, it's slightly misleading. Consider making it a sync method, or add a comment explaining it's async for API consistency.
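
For illustration, the two shapes side by side (the bodies are placeholders, not the real implementation):

class Exporter:
    async def start_otlp(self) -> None:
        # async kept only for symmetry with other start_* methods; nothing is awaited
        self._otlp_started = True

    def start_otlp_sync(self) -> None:
        # plain method: callers simply drop the await
        self._otlp_started = True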


184-190: Consider handling exceptions in asyncio.gather.

If one _fetch_and_store task raises an unhandled exception (not caught internally), asyncio.gather will propagate it and potentially crash the polling loop. Consider using return_exceptions=True to ensure one failing kind doesn't stop polling for others.

🔎 Proposed fix
     async def _poll_loop(self) -> None:
         """Background loop to fetch metrics periodically."""
         interval = self.settings.poll_interval_seconds
         while True:
             tasks = [self._fetch_and_store(kp) for kp in self.settings.metrics.kind]
-            await asyncio.gather(*tasks)
+            results = await asyncio.gather(*tasks, return_exceptions=True)
+            for kp, result in zip(self.settings.metrics.kind, results):
+                if isinstance(result, Exception):
+                    logger.error(f"Polling failed for kind '{kp.kind}': {result}")
             await asyncio.sleep(interval)
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c2cca55 and 5dc0217.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • .github/CODEOWNERS
  • .github/workflows/ci.yml
  • .yamllint.yml
  • LICENSE.txt
  • infrahub_exporter/config.py
  • infrahub_exporter/main.py
  • infrahub_exporter/metrics_exporter.py
  • infrahub_exporter/service_discovery.py
  • pyproject.toml
  • tasks.py
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: All functions must have type annotations
Use async/await pattern for asynchronous operations following event-driven architecture
Target Python 3.10-3.12 as specified in pyproject.toml

Files:

  • infrahub_exporter/service_discovery.py
  • infrahub_exporter/config.py
  • tasks.py
  • infrahub_exporter/metrics_exporter.py
  • infrahub_exporter/main.py
**/service_discovery.py

📄 CodeRabbit inference engine (AGENTS.md)

ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching

Files:

  • infrahub_exporter/service_discovery.py
**/config.py

📄 CodeRabbit inference engine (AGENTS.md)

Use Pydantic models for YAML configuration with environment variable overrides via INFRAHUB_SIDECAR_* prefix

Files:

  • infrahub_exporter/config.py
**/metrics_exporter.py

📄 CodeRabbit inference engine (AGENTS.md)

MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in _store dict

Files:

  • infrahub_exporter/metrics_exporter.py
**/main.py

📄 CodeRabbit inference engine (AGENTS.md)

HTTP server must expose health (/), metrics (/metrics), and service discovery (/sd/*) endpoints via FastAPI

Files:

  • infrahub_exporter/main.py
🧠 Learnings (4)
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching

Applied to files:

  • infrahub_exporter/service_discovery.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Run linters (yamllint, ruff, mypy) using invoke lint command before committing code

Applied to files:

  • .github/workflows/ci.yml
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/metrics_exporter.py : MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict

Applied to files:

  • infrahub_exporter/metrics_exporter.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/main.py : HTTP server must expose health (`/`), metrics (`/metrics`), and service discovery (`/sd/*`) endpoints via FastAPI

Applied to files:

  • infrahub_exporter/main.py
🧬 Code graph analysis (1)
infrahub_exporter/metrics_exporter.py (1)
infrahub_exporter/main.py (1)
  • metrics (51-56)
🪛 GitHub Actions: CI
infrahub_exporter/service_discovery.py

[error] 1-1: Ruff format check failed. 4 files would be reformatted by 'ruff format'. Run 'ruff format' to fix code style.

infrahub_exporter/config.py

[error] 1-1: Ruff format check failed. 4 files would be reformatted by 'ruff format'. Run 'ruff format' to fix code style.

infrahub_exporter/metrics_exporter.py

[error] 1-1: Ruff format check failed. 4 files would be reformatted by 'ruff format'. Run 'ruff format' to fix code style.

infrahub_exporter/main.py

[error] 1-1: Ruff format check failed. 4 files would be reformatted by 'ruff format'. Run 'ruff format' to fix code style.

🪛 LanguageTool
LICENSE.txt

[style] ~161-~161: ‘any and all’ might be wordy. Consider a shorter alternative.
Context: ...ge, computer failure or malfunction, or any and all other commercial damages or losse...

(EN_WORDINESS_PREMIUM_ANY_AND_ALL)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (22)
.yamllint.yml (1)

7-7: LGTM!

Adding .venv to the ignore list is a standard best practice that prevents yamllint from scanning virtual environment files.

tasks.py (1)

4-4: LGTM!

Import ordering follows alphabetical convention (Context before task).

.github/CODEOWNERS (1)

1-3: LGTM!

The CODEOWNERS file establishes clear ownership and review requirements for the repository.

infrahub_exporter/service_discovery.py (2)

21-145: Well-implemented service discovery with TTL caching.

The ServiceDiscoveryManager correctly:

  • Executes GraphQL queries against Infrahub
  • Implements per-query TTL caching via CachedTargets
  • Transforms results to Prometheus SD JSON format with proper label mappings
  • Handles nested field extraction with dot-notation
  • Uses async/await patterns appropriately

All functions have proper type annotations as required by coding guidelines.
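
For readers unfamiliar with the pattern, here is a minimal sketch of per-query TTL caching; the actual CachedTargets fields are not shown in this diff, so the names below are assumptions:

import time
from dataclasses import dataclass, field
from typing import Any

@dataclass
class CachedTargets:
    """Cached Prometheus SD targets for one query, with a freshness check."""

    targets: list[dict[str, Any]]
    fetched_at: float = field(default_factory=time.monotonic)

    def is_fresh(self, ttl_seconds: float) -> bool:
        # a monotonic clock is immune to wall-clock jumps
        return (time.monotonic() - self.fetched_at) < ttl_seconds

The manager can then consult is_fresh(query.ttl) before re-executing a GraphQL query and return the cached targets on a hit.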


1-9: Run ruff format to fix formatting.

The pipeline indicates this file needs formatting. The import reordering should be automatically handled by the formatter.

Run the following to fix:

ruff format infrahub_exporter/service_discovery.py
⛔ Skipped due to learnings
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching
LICENSE.txt (1)

1-190: LGTM!

The Apache License 2.0 has been properly added to the repository with the correct copyright notice for OpsMill.

pyproject.toml (4)

39-46: LGTM!

The dev dependencies align well with the CI workflow updates and support comprehensive code quality checks (linting, type checking, testing).


52-61: LGTM!

Ruff configuration is well-structured with:

  • Reasonable line length (120)
  • Standard formatting preferences (double quotes, space indentation)
  • Essential lint rules enabled (errors, naming, imports, warnings)

67-68: LGTM!

Adding warn_return_any and warn_unused_ignores strengthens type checking without being overly strict.


71-76: LGTM!

Pytest configuration properly supports async tests with asyncio_mode = "auto" and ignores deprecation warnings to reduce noise.

.github/workflows/ci.yml (5)

10-12: LGTM!

Adding concurrency control with cancel-in-progress: true prevents redundant workflow runs and saves CI resources.


14-15: LGTM!

Explicit read-only permissions follow the principle of least privilege for CI workflows.


51-59: LGTM!

The streamlined Python linting workflow using uv run is cleaner and aligns with the updated tooling in pyproject.toml. Separating format checks, import sorting, general linting, and type checking provides clear failure messages.


76-89: LGTM!

Adding a dedicated markdown-lint job improves documentation quality checks and complements the existing Vale-based style validation.


136-142: Manual Vale installation noted.

The comment indicates the GitHub Action for Vale is broken, necessitating manual installation. This is a reasonable workaround.

Consider tracking this as a technical debt item to revisit when the official Vale action is fixed. You may want to add a TODO comment or create an issue to monitor the upstream action status.

infrahub_exporter/main.py (2)

1-14: Import reorganization looks good.

The imports are properly organized and the file correctly imports all necessary dependencies for the HTTP server, Prometheus metrics, and service discovery functionality.


161-165: LGTM!

Standard async entrypoint pattern with proper KeyboardInterrupt handling to prevent traceback noise on Ctrl+C.

infrahub_exporter/metrics_exporter.py (5)

10-16: Import reorganization looks good.

The imports are properly structured for both Prometheus and OTLP metric export functionality.


192-200: LGTM!

The start method properly initializes exporters conditionally based on settings and stores the poll task reference for cleanup. Good implementation pattern.


202-210: LGTM!

Proper async task cancellation pattern with CancelledError handling.


84-103: Prometheus Collector interface properly implemented.

The collect method correctly implements the Prometheus Collector interface, yielding GaugeMetricFamily metrics from the _store dict as specified in the coding guidelines.
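
As background, a minimal sketch of the Collector pattern this refers to (metric and label names are illustrative):

from prometheus_client import REGISTRY
from prometheus_client.core import GaugeMetricFamily

class StoreCollector:
    """Yields gauges from an in-memory _store dict on every scrape."""

    def __init__(self) -> None:
        self._store: dict[str, float] = {}

    def collect(self):
        gauge = GaugeMetricFamily("infrahub_object_count", "Objects per kind", labels=["kind"])
        for kind, value in self._store.items():
            gauge.add_metric([kind], value)
        yield gauge

REGISTRY.register(StoreCollector())

Because collect() reads _store only at scrape time, the background polling loop just has to keep that dict current.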


148-174: Individual fetch calls inside loops may be redundant given relationships are already included in the initial query.

The code already requests relationships via the include parameter when fetching items (await self.client.all(kind=kp.kind, include=kp.include, ...)). However, for RelatedNode relationships, await attr.fetch() is called unconditionally on each item (line 152), and for RelationshipManager relationships, fetch is called per peer when not cached (line 170). Since relationships should already be loaded via include, these individual fetch calls may be unnecessary. If they are required due to SDK behavior, clarify why the initial include parameter doesn't fully load relationships.

Comment on lines +18 to +19
VALE_VERSION: "3.13.0"
UV_VERSION: "0.9.18"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Check the file and git history to confirm previous versions
git log --oneline --all -p .github/workflows/ci.yml | head -100

Repository: opsmill/infrahub-exporter

Length of output: 2698


🌐 Web query:

UV 0.5.10 to 0.9.18 breaking changes migration guide

💡 Result:

Here’s a concise migration checklist for upgrading uv from 0.5.10 → 0.9.18 (key breaking changes and what to do). Sources cited inline.

  1. Pin / verify Python versions
     • 0.9.x changed default/available Python handling (Python 3.14 surfaced as a default/stable in 0.9.0). Pin the Python you expect (or test on 3.14) and update any CI images. If your builds rely on an implicit system Python, add explicit python-version constraints. (newreleases.io)
  2. Update build-backend bounds in pyproject.toml
     • If you have an upper bound for uv_build in [build-system], update it to a 0.9.x-compatible range (examples in docs show uv_build>=0.9.18,<0.10.0). Verify build-backend compatibility after upgrade. (docs.astral.sh)
  3. Review deprecated/changed CLI flags and commands
     • Some flags/behaviors were deprecated or changed across 0.9.x (e.g., deprecation warnings such as --project on uv init in 0.9.9). Audit scripts/automation that call the uv CLI and adjust flags per current help output. Run uv -h to detect mismatches. (github.com)
  4. uv publish and environment-variable behavior
     • Earlier versions had issues where uv publish didn't read UV_PUBLISH_URL / UV_PUBLISH_TOKEN; test your publish workflow (and explicitly pass --publish-url/--token as a fallback). 0.9.x contains fixes/enhancements to publish handling; retest and adjust CI if necessary. (github.com)
  5. Configuration / tool settings
     • New configuration keys and flags landed across 0.9.x (example: accept --torch-backend in [tool.uv] in 0.9.18). Review your [tool.uv] config and environment variables for newly supported or renamed keys. (newreleases.io)
  6. New features that may change behavior
     • 0.9.x added features (resolver improvements, cache commands, export/sync changes, Python variant handling) that can alter resolver outcomes or CLI output; verify dependency resolution, lockfiles (uv.lock), and automation that parses uv output. Run a full install/lock and compare results. (github.com)
  7. Test matrix and CI
     • Run your full test matrix (all supported Python versions you target) under 0.9.18. Pay attention to interpreter detection changes and platform-specific messages that may affect automation. (github.com)
  8. Read the per-release notes / changelogs
     • For any remaining edge cases, consult the uv release notes / changelogs for each 0.6.x → 0.9.18 release to catch smaller breaking changes or behavioral tweaks you rely on. The project split changelogs by breaking version; use those files for detailed diffs. (git.joshthomas.dev)

If you want, I can:

  • generate a concrete changelog diff (0.5.10 → 0.9.18) with links to each release note, or
  • run a checklist against a sample pyproject.toml / CI config you paste.


🌐 Web query:

Vale 3.7.1 to 3.13.0 breaking changes release notes

💡 Result:

Summary of notable breaking / backwards-impacting changes from Vale v3.7.1 → v3.13.0

  • Global/big breaking change (pre-v3.x you may already have encountered): v3.0 moved vocabularies from $StylesPath/Vocab → $StylesPath/config/vocabularies (users upgrading from v2→v3 must move them). [1]

  • v3.7.x → v3.7.1: bugfixes only (Julia parser added; fixes to matchToken behavior) — no breaking changes called out. [2]

  • v3.9.x: added comment-based controls (switching styles per-block; ignore individual matches). These change how inline comment config can be used and may interact with existing in-file config; not strictly breaking, but can affect behavior if you relied on older comment semantics. [3]

  • v3.10.0: built-in MDX support and a config change — it’s no longer necessary to set CommentDelimiters or use a [formats] association for Markdown. That change can alter how previously-customized Markdown/comment configurations behave. [4]

  • v3.11.0: added front-matter field linting (YAML/TOML/JSON) producing new scopes (e.g., text.frontmatter.title). This creates additional scopes that rules can target — can change matching behavior if you relied on older scope names. [5]

  • v3.12.0: introduced Views (new way to extract scoped content from structured files and source code). Views change how content is scoped/linted (can be breaking for workflows expecting the previous blueprint/format handling). [5]

  • v3.13.0: added tree-sitter support for Java source files (affects how Java files are parsed/linted); minor fixes for continuation-character errors. [6]

If you need, I can:

  • extract the exact changelog text for each release (3.8 → 3.13) or
  • produce a migration checklist (config settings to review and example changes).

Sources

  1. v3.0 breaking notes (vocab path change). [1] (newreleases.io)
  2. v3.7.1 changelog (fixes, Julia parser). [2] (newreleases.io)
  3. v3.9 (comment syntax features). [3] (newreleases.io)
  4. v3.10 (MDX support; CommentDelimiters/formats change). [4] (newreleases.io)
  5. v3.11 & v3.12 (front matter scopes; Views). [5] (github.com)
  6. v3.13 (tree-sitter Java; fixes). [6] (github.com)


Test these tool versions for compatibility.

These versions include significant changes that warrant testing:

  • UV 0.9.18: Python version handling changes, CLI flag deprecations, and resolver improvements (major jump from 0.5.x).
  • Vale 3.13.0: New scoping behavior, Views feature, and front-matter linting (includes multiple feature additions from 3.7.x).

Run the full test suite with these versions to ensure no unexpected behavior changes or build failures.

🤖 Prompt for AI Agents
.github/workflows/ci.yml lines 18-19 specify VALE_VERSION "3.13.0" and
UV_VERSION "0.9.18"; the reviewer asks these versions be validated for
compatibility. Update CI to run the full test suite with these exact tool
versions (or add a matrix job that includes them), run the test matrix
locally/CI, capture and fix any failures caused by changed CLI flags, Python
handling, resolver behavior, or Vale scoping/front-matter changes (e.g., adjust
CLI invocations, update Python compatibility layers, and modify Vale
config/rules as needed), then commit CI adjustments and any code/config fixes so
the pipeline passes with these versions.

coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
infrahub_exporter/metrics_exporter.py (2)

35-37: Add missing return type annotation.

The __init__ method is missing a return type annotation. According to the coding guidelines, all functions must have type annotations.

🔎 Proposed fix
-    def __init__(self, kp: MetricsKind, exporter: "MetricsExporter"):
+    def __init__(self, kp: MetricsKind, exporter: "MetricsExporter") -> None:

Based on coding guidelines.


48-52: Add missing return type annotation.

The __init__ method is missing a return type annotation. According to the coding guidelines, all functions must have type annotations.

🔎 Proposed fix
-    def __init__(self, client: InfrahubClient, settings: SidecarSettings):
+    def __init__(self, client: InfrahubClient, settings: SidecarSettings) -> None:

Based on coding guidelines.

🧹 Nitpick comments (1)
infrahub_exporter/metrics_exporter.py (1)

103-103: Consider using a more specific type annotation.

The kp parameter uses Any type, which reduces type safety. Based on the usage context and the reference to self.settings.metrics.kind (line 176), this should likely be MetricsKind.

🔎 Proposed fix
-    async def _fetch_and_store(self, kp: Any) -> None:
+    async def _fetch_and_store(self, kp: MetricsKind) -> None:
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5dc0217 and d3c9c30.

📒 Files selected for processing (4)
  • infrahub_exporter/config.py
  • infrahub_exporter/main.py
  • infrahub_exporter/metrics_exporter.py
  • infrahub_exporter/service_discovery.py
✅ Files skipped from review due to trivial changes (1)
  • infrahub_exporter/config.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • infrahub_exporter/main.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: All functions must have type annotations
Use async/await pattern for asynchronous operations following event-driven architecture
Target Python 3.10-3.12 as specified in pyproject.toml

Files:

  • infrahub_exporter/service_discovery.py
  • infrahub_exporter/metrics_exporter.py
**/service_discovery.py

📄 CodeRabbit inference engine (AGENTS.md)

ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching

Files:

  • infrahub_exporter/service_discovery.py
**/metrics_exporter.py

📄 CodeRabbit inference engine (AGENTS.md)

MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in _store dict

Files:

  • infrahub_exporter/metrics_exporter.py
🧠 Learnings (2)
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching

Applied to files:

  • infrahub_exporter/service_discovery.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/metrics_exporter.py : MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict

Applied to files:

  • infrahub_exporter/metrics_exporter.py
🧬 Code graph analysis (2)
infrahub_exporter/service_discovery.py (1)
infrahub_exporter/config.py (1)
  • ServiceDiscoveryQuery (55-67)
infrahub_exporter/metrics_exporter.py (1)
infrahub_exporter/main.py (1)
  • metrics (47-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (2)
infrahub_exporter/service_discovery.py (1)

2-2: LGTM: Formatting and import organization improvements.

The changes in this file are limited to import reordering and code formatting (collapsing multi-line statements to single lines). No functional logic has been modified, and all error handling, caching, and GraphQL execution patterns remain intact.

Also applies to: 7-7, 43-43, 126-126

infrahub_exporter/metrics_exporter.py (1)

54-80: LGTM: Metrics exporter implementation follows requirements.

The implementation correctly:

  • Implements the Prometheus Collector interface with the collect() method
  • Runs a background polling loop via _poll_loop()
  • Stores metrics in the _store dict as required
  • Supports both Prometheus and OTLP export paths
  • Handles error cases gracefully with proper logging

The unified exporter design cleanly separates concerns between the two export backends.

Based on coding guidelines.

Also applies to: 82-101, 172-178, 180-198

-                        peer = itm._client.store.get(
-                            key=attr.peer.id, raise_when_missing=False
-                        )
+                        peer = itm._client.store.get(key=attr.peer.id, raise_when_missing=False)

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, search for _client usage patterns in the codebase
rg -nP --type=py '\._client\.store\.get|itm\._client' -C 2

Repository: opsmill/infrahub-exporter

Length of output: 1176


🏁 Script executed:

#!/bin/bash
# Check the context in the metrics_exporter.py file around lines 149 and 157
sed -n '145,160p' infrahub_exporter/metrics_exporter.py

Repository: opsmill/infrahub-exporter

Length of output: 924


🏁 Script executed:

#!/bin/bash
# Search for any documentation or comments about accessing _client
rg -nP --type=py '(_client|store\.get)' -C 1 | head -100

Repository: opsmill/infrahub-exporter

Length of output: 1332


🏁 Script executed:

#!/bin/bash
# Check if there are public API methods available instead of accessing _client directly
rg -nP --type=py 'class InfrahubNode|def.*store' | head -30

Repository: opsmill/infrahub-exporter

Length of output: 273


🏁 Script executed:

#!/bin/bash
# Search for InfrahubNode class and its public methods
rg -nP --type=py 'class InfrahubNode|def get_human_friendly_id|def fetch' -A 2 | head -50

Repository: opsmill/infrahub-exporter

Length of output: 159


🏁 Script executed:

#!/bin/bash
# Check if there's any documentation or comments about store access in the repo
find . -type f \( -name "*.md" -o -name "*.rst" -o -name "*.txt" \) ! -path "./.*" | xargs grep -l "store" 2>/dev/null | head -5

Repository: opsmill/infrahub-exporter

Length of output: 92


🏁 Script executed:

#!/bin/bash
# Search for any public methods that might provide access to related nodes without using _client
rg -nP --type=py 'def.*peer|def.*related.*node' -A 3 | head -40

Repository: opsmill/infrahub-exporter

Length of output: 159


🏁 Script executed:

#!/bin/bash
# Check the imports to understand what's available from infrahub_sdk
rg -nP --type=py '^from infrahub_sdk|^import infrahub' | head -20

Repository: opsmill/infrahub-exporter

Length of output: 792


🏁 Script executed:

#!/bin/bash
# Check README for SDK usage patterns related to store or relationships
head -200 README.md

Repository: opsmill/infrahub-exporter

Length of output: 1628


🏁 Script executed:

#!/bin/bash
# Search for any store-related documentation in the repo
rg -nP 'store.*get|_client.*store' -B 2 -A 2 --type=md

Repository: opsmill/infrahub-exporter

Length of output: 159


🏁 Script executed:

#!/bin/bash
# Look at how attr.peer and attr.peers are being used to understand the pattern
rg -nP --type=py 'attr\.peer|attr\.peers' -B 3 -A 3 | head -80

Repository: opsmill/infrahub-exporter

Length of output: 1534


🏁 Script executed:

#!/bin/bash
# Check if there's a way to access store through a public method or property
rg -nP --type=py 'def.*store|@property.*store|\.store' -A 1 | head -60

Repository: opsmill/infrahub-exporter

Length of output: 820


🏁 Script executed:

#!/bin/bash
# Look at the full context of the _fetch_and_store method
sed -n '100,170p' infrahub_exporter/metrics_exporter.py

Repository: opsmill/infrahub-exporter

Length of output: 3020


🏁 Script executed:

#!/bin/bash
# Search for any comments or documentation about why _client is needed
rg -nP --type=py 'store|_client' -B 2 -A 2 | head -100

Repository: opsmill/infrahub-exporter

Length of output: 4792


🏁 Script executed:

#!/bin/bash
# Check if there's a pyproject.toml or requirements file that shows infrahub-sdk version
cat pyproject.toml | grep -A 5 -B 5 infrahub

Repository: opsmill/infrahub-exporter

Length of output: 1029


🏁 Script executed:

#!/bin/bash
# Look for any issues or discussions about public vs private API access
find . -type f \( -name "*.md" -o -name "*.txt" -o -name "*.py" \) -exec grep -l "public.*api\|private\|_client\|store" {} \; 2>/dev/null | head -10

Repository: opsmill/infrahub-exporter

Length of output: 199


🌐 Web query:

infrahub-sdk InfrahubNode _client store API documentation

💡 Result:

The Infrahub Python SDK exposes a client with a local object store (client.store) used by the SDK to cache nodes and related objects. Key behaviors and API methods:

  • Purpose: caches objects returned by client.get/filters/all, related objects when prefetch_relationships or fetch is used, and newly created objects. [1]
  • Retrieve from store: client.store.get(key=...) — accepts id or hfid (when using hfid include kind). Example: tag_in_store = client.store.get(key=tag.id). [1]
  • Manually store: client.store.set(key=..., node=...) — you can pick a custom key (e.g., name value). [1]
  • Control population: query methods accept populate_store=False to avoid automatically storing returned objects. [1]
  • Context: how to create/configure InfrahubClient (async/sync) and examples for using the client that interact with the store are in the client guide. [2]

References:
[1] Infrahub SDK — Using the client store (API usage and examples).
[2] Infrahub SDK — Creating/configuring an Infrahub client (client usage, async/sync examples).
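
Putting the summarized store API together in a short usage sketch (the kind name and custom key are illustrative; parameter names are taken from the summary above):

# Results of query methods are cached in client.store by default.
tags = await client.all(kind="BuiltinTag")

# Retrieve a cached node by id, or store one under a custom key.
tag_in_store = client.store.get(key=tags[0].id)
client.store.set(key="my-tag", node=tags[0])

# Opt out of caching for a one-off query.
fresh = await client.all(kind="BuiltinTag", populate_store=False)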


🌐 Web query:

infrahub-sdk relationship handling public API store access

💡 Result:

Brief summary — how Infrahub SDK handles relationships, public API access, and the client store:

  • Client store: the SDK keeps a local cache ("store") of nodes returned by get/filters/all, created objects, and related objects when you use prefetch_relationships or the relationship fetch() method. You can get/set by id/hfid or custom keys and disable auto-population with populate_store=False. [1]

  • Relationship handling: related nodes can be returned inline when you call queries with prefetch_relationships, or fetched later via node.relationship.fetch() which will store the related node in the client store. Use the store to look up related objects without re-querying. [1][3]

  • Public API / GraphQL: you can also use the API/GraphQL endpoints directly (execute_graphql available in SDK) to retrieve relationships; query filters support filtering on relationship attributes. [3]

  • Authentication & access control: API access follows Infrahub auth rules — anonymous read-only by default (configurable), or use JWT (short-lived) or API tokens (long-lived) for programmatic access. Permissions are applied via roles/groups. Supply tokens in headers (X-INFRAHUB-KEY) or via the SDK auth flow. [4]

  • Practical notes:

    • To avoid storing results locally, call queries with populate_store=False. [1]
    • To eagerly load relations, use prefetch_relationships on queries; to lazy-load a single relation, call fetch() on the relationship. [1][3]
    • Use API or SDK tokens depending on your security/expiry needs; frontend only supports JWT, API tokens work for GraphQL Playground and long-lived integrations. [4]

Sources:
[1] Using the client store — Infrahub docs.
[2] How to create and configure an Infrahub client — Infrahub docs.
[3] Querying data in Infrahub (relationships, filters, execute_graphql) — Infrahub docs.
[4] User management and authentication (API tokens, JWT, anonymous access) — Infrahub docs.


Replace private _client attribute access with the public client.store API.

The code at lines 149 and 157 accesses the private _client attribute from InfrahubNode instances. The documented SDK pattern is to use the public client.store API. Since self.client is available in this class, use self.client.store.get(key=..., raise_when_missing=False) instead of itm._client.store.get(...) at both call sites.

🤖 Prompt for AI Agents
In infrahub_exporter/metrics_exporter.py around lines 149 to 157, replace direct
access to the private attribute itm._client with the public SDK client via
self.client.store; specifically change any calls like
itm._client.store.get(key=..., raise_when_missing=False) to use
self.client.store.get(key=..., raise_when_missing=False) (apply the same
replacement for all occurrences noted at ~149 and ~157).

- Change _fetch_and_store parameter from Any to MetricsKind for type safety
- Add -> None return type annotations to __init__ methods
- Fix _setup_routes return type from JSONResponse | None to None
- Update AGENTS.md to reflect Python 3.13 support per pyproject.toml
coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
AGENTS.md (3)

1-1: Filename mismatch: Update file header from CLAUDE.md to AGENTS.md.

Line 1 shows # CLAUDE.md but the file is named AGENTS.md. Update the header to match the actual filename.

🔎 Proposed fix
-# CLAUDE.md
+# AGENTS.md

8-8: Add blank line before the unordered list (MD032).

Markdown linting requires a blank line before lists. This blocks the CI pipeline.

🔎 Proposed fix
 Infrahub Exporter is a Python microservice that bridges Infrahub (infrastructure management platform) with monitoring systems. It provides:
+
 - Prometheus metrics export via `/metrics` endpoint

55-55: Add blank line before the unordered list (MD032).

Markdown linting requires a blank line before lists. This blocks the CI pipeline.

🔎 Proposed fix
 Configuration is YAML-based (see `examples/config.yml`). Key sections:
+
 - `infrahub`: Server address, API token, branch
♻️ Duplicate comments (1)
infrahub_exporter/metrics_exporter.py (1)

149-149: Use public self.client.store API instead of private _client attribute.

Lines 149 and 157 access the private _client attribute from InfrahubNode instances. As noted in a previous review, the documented SDK pattern is to use the public client.store API via self.client.store.get(key=..., raise_when_missing=False) instead of itm._client.store.get(...).

🔎 Suggested fix
-                        peer = itm._client.store.get(key=attr.peer.id, raise_when_missing=False)
+                        peer = self.client.store.get(key=attr.peer.id, raise_when_missing=False)

And at line 157:

-                            node = itm._client.store.get(key=p.id, raise_when_missing=False)
+                            node = self.client.store.get(key=p.id, raise_when_missing=False)

Also applies to: 157-157

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d3c9c30 and f36dad7.

📒 Files selected for processing (3)
  • AGENTS.md
  • infrahub_exporter/main.py
  • infrahub_exporter/metrics_exporter.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: All functions must have type annotations
Use async/await pattern for asynchronous operations following event-driven architecture
Target Python 3.10-3.12 as specified in pyproject.toml

Files:

  • infrahub_exporter/main.py
  • infrahub_exporter/metrics_exporter.py
**/main.py

📄 CodeRabbit inference engine (AGENTS.md)

HTTP server must expose health (/), metrics (/metrics), and service discovery (/sd/*) endpoints via FastAPI

Files:

  • infrahub_exporter/main.py
**/metrics_exporter.py

📄 CodeRabbit inference engine (AGENTS.md)

MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in _store dict

Files:

  • infrahub_exporter/metrics_exporter.py
🧠 Learnings (5)
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/*.py : Target Python 3.10-3.12 as specified in pyproject.toml

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/*.py : All functions must have type annotations

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/main.py : HTTP server must expose health (`/`), metrics (`/metrics`), and service discovery (`/sd/*`) endpoints via FastAPI

Applied to files:

  • infrahub_exporter/main.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching

Applied to files:

  • infrahub_exporter/main.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/metrics_exporter.py : MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict

Applied to files:

  • infrahub_exporter/metrics_exporter.py
🧬 Code graph analysis (2)
infrahub_exporter/main.py (3)
infrahub_exporter/config.py (4)
  • Config (90-92)
  • ServiceDiscoveryConfig (70-74)
  • ServiceDiscoveryQuery (55-67)
  • SidecarSettings (78-107)
infrahub_exporter/metrics_exporter.py (1)
  • MetricsExporter (31-198)
infrahub_exporter/service_discovery.py (2)
  • ServiceDiscoveryManager (21-139)
  • get_targets (28-41)
infrahub_exporter/metrics_exporter.py (2)
infrahub_exporter/main.py (1)
  • metrics (47-52)
infrahub_exporter/config.py (2)
  • MetricsKind (10-15)
  • SidecarSettings (78-107)
🪛 GitHub Actions: CI
AGENTS.md

[error] 8-8: MD032/blanks-around-lists Lists should be surrounded by blank lines [Context: "- Prometheus metrics export vi..."]


[error] 55-55: MD032/blanks-around-lists Lists should be surrounded by blank lines [Context: "- infrahub: Server address, ..."]

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (5)
AGENTS.md (1)

62-62: Verify Python 3.13 support in pyproject.toml.

The update extends Python target support from 3.10-3.12 to 3.10-3.13. Ensure that pyproject.toml has been updated to declare Python 3.13 as a supported version and that CI workflows (e.g., .github/workflows/ci.yml) include Python 3.13 in their test matrix.

infrahub_exporter/main.py (2)

41-41: Good fix: return type annotation corrected.

The return type change from JSONResponse | None to None is correct. This method registers routes but doesn't return anything, so -> None properly reflects its behavior.


56-64: LGTM: Proper closure variable capture with default argument.

The use of a default argument (q: ServiceDiscoveryQuery = query) correctly captures each loop iteration's query value at function definition time, avoiding the common Python closure late-binding gotcha. Each endpoint will properly reference its corresponding query configuration.
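
For context, a self-contained sketch of the late-binding gotcha this avoids (function names are illustrative):

def make_handlers_buggy(queries: list[str]) -> list:
    handlers = []
    for query in queries:
        async def handler() -> str:
            return query  # looked up at call time: every handler sees the last query
        handlers.append(handler)
    return handlers

def make_handlers_fixed(queries: list[str]) -> list:
    handlers = []
    for query in queries:
        async def handler(q: str = query) -> str:
            return q  # default evaluated per iteration, so each handler keeps its own value
        handlers.append(handler)
    return handlers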

infrahub_exporter/metrics_exporter.py (2)

35-35: Good improvement: complete type annotations added.

The addition of -> None return type annotations to __init__ methods (lines 35, 48) and the explicit return type on _otlp_callback (line 39) aligns with the coding guidelines requirement that all functions must have type annotations.

Also applies to: 39-39, 48-48


103-103: Excellent: parameter type strengthened from Any to MetricsKind.

Refining the kp parameter type from Any to MetricsKind provides stronger type safety and better IDE support, directly addressing the PR objective for improved typing.

- Change _fetch_and_store parameter from Any to MetricsKind for type safety
- Add -> None return type annotations to __init__ methods
- Fix _setup_routes return type from JSONResponse | None to None
- Update AGENTS.md to reflect Python 3.13 support per pyproject.toml
- Add lint_markdown task to run markdownlint on all Markdown files
- Include markdownlint in lint_all task to match CI pipeline
- Fix README.md link text from generic "here" to descriptive text
- Fix lint_yaml docstring (was incorrectly saying "Python files")
coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f36dad7 and 0a0d764.

📒 Files selected for processing (3)
  • AGENTS.md
  • README.md
  • tasks.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: All functions must have type annotations
Use async/await pattern for asynchronous operations following event-driven architecture
Target Python 3.10-3.12 as specified in pyproject.toml

Files:

  • tasks.py
🧠 Learnings (6)
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/metrics_exporter.py : MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/main.py : HTTP server must expose health (`/`), metrics (`/metrics`), and service discovery (`/sd/*`) endpoints via FastAPI

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/*.py : Target Python 3.10-3.12 as specified in pyproject.toml

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/config.py : Use Pydantic models for YAML configuration with environment variable overrides via `INFRAHUB_SIDECAR_*` prefix

Applied to files:

  • AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Run linters (yamllint, ruff, mypy) using invoke lint command before committing code

Applied to files:

  • tasks.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (5)
README.md (1)

25-25: Documentation link URL updated correctly.

The change updates the documentation reference to point to the correct integration docs path (/integrations/exporter/), which aligns with the PR's documentation alignment objectives.

Please verify that the updated URL is valid and accessible in the Infrahub documentation system. If possible, confirm this URL change is consistent with any other documentation references updated in this PR.

tasks.py (3)

4-4: Import order change is cosmetic.

The imports have been alphabetized. This is a minor style improvement with no functional impact.


23-24: Docstring correction is accurate.

The updated docstring now correctly reflects that this task checks YAML files, not Python files. Good catch!


62-62: LGTM: Integration into lint_all is appropriate.

The markdown linting task is correctly integrated into the lint_all sequence alongside other linters, matching the CI workflow structure.

AGENTS.md (1)

64-64: Python 3.10-3.13 support is correctly declared in pyproject.toml (requires-python = ">=3.10,<3.14").

Comment on lines +31 to +38
@task
def lint_markdown(context: Context) -> None:
"""Run Linter to check all Markdown files."""
print(" - Check code with markdownlint")
exec_cmd = "markdownlint '**/*.md' --ignore node_modules --ignore docs/node_modules"
with context.cd(MAIN_DIRECTORY_PATH):
context.run(exec_cmd)


⚠️ Potential issue | 🟠 Major

markdownlint-cli dependency is not declared in Python project.

The lint_markdown task assumes markdownlint is available in the environment. However, markdownlint-cli is not declared in pyproject.toml as a dev dependency—it's only installed via npm in the CI workflow. Local developers running invoke lint or invoke lint-markdown will encounter a command-not-found error unless they manually install markdownlint-cli via npm.

Consider one of the following:

  1. Document the npm prerequisite in the README or a development setup guide
  2. Add a check in the task to verify markdownlint is available and provide a helpful error message
  3. Use a Python-based markdown linter that can be declared as a dev dependency
🔎 Example: Add availability check with helpful error
 @task
 def lint_markdown(context: Context) -> None:
     """Run Linter to check all Markdown files."""
+    # Check if markdownlint is available
+    result = context.run("which markdownlint", warn=True, hide=True)
+    if result is None or result.exited != 0:
+        print("ERROR: markdownlint not found. Install with: npm install -g markdownlint-cli")
+        sys.exit(1)
+    
     print(" - Check code with markdownlint")
     exec_cmd = "markdownlint '**/*.md' --ignore node_modules --ignore docs/node_modules"
     with context.cd(MAIN_DIRECTORY_PATH):
         context.run(exec_cmd)

Committable suggestion skipped: line range outside the PR's diff.

@petercrocker changed the title from Pmc-20251228-updates to chore: update CI pipeline to be consistent on Dec 28, 2025