chore: update CI pipeline to be consistent #31
base: main
Conversation
- Add CODEOWNERS file for code ownership
- Add Apache 2.0 LICENSE.txt
- Update Python version constraint to >=3.10,<3.13
- Add dev dependencies (mypy, pytest, yamllint, invoke, type stubs)
- Add ruff configuration (line-length, format, lint rules)
- Add pytest configuration
- Update CI workflow:
  - Add concurrency control and explicit permissions
  - Update action versions (checkout@v5, setup-uv@v7, setup-node@v6)
  - Add mypy type checking to python-lint job
  - Switch yaml-lint to use uv instead of pip
  - Add markdown-lint job
  - Update Vale version to 3.13.0
- Add .venv to yamllint ignore list
- Fix import sorting in all Python modules
- Update uv.lock with new dev dependencies
Walkthrough

Adds a repository CODEOWNERS file and an Apache-2.0 LICENSE.txt. Replaces and reorganizes CI workflows (.github/workflows/ci.yml) with a renamed workflow, concurrency controls, updated actions/tool versions, a new markdown-lint job, and modified lint/build steps. Updates pyproject.toml with dev dependency and tool configurations (ruff, mypy, pytest, packaging). Introduces a YAML linter ignore for .venv, adds a markdown lint task, updates documentation and README links, and makes multiple formatting/import-reordering and minor API-typing refinements across infrahub_exporter sources.

Pre-merge checks

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Deploying infrahub-exporter with Cloudflare Pages

| Latest commit: | 0a0d764 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://ccc58dd6.infrahub-exporter.pages.dev |
| Branch Preview URL: | https://pmc-20251228-updates.infrahub-exporter.pages.dev |
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
pyproject.toml (1)
8-15: Remove Python 3.13 from pyproject.toml or add testing and documentation support.

The project documentation (AGENTS.md) explicitly states the target is Python 3.10-3.12, but `pyproject.toml` includes Python 3.13 in both the `requires-python` constraint (`>=3.10,<3.14`) and classifiers. Additionally, the CI workflow does not test Python 3.13. Either remove the Python 3.13 classifier and update `requires-python` to `>=3.10,<3.13` to align with documented targets, or formally add Python 3.13 testing to CI and update documentation.

infrahub_exporter/metrics_exporter.py (3)
34-48: Missing type annotation for `kp` parameter.

Per coding guidelines, all functions must have type annotations. The `kp` parameter should be typed as `MetricsKind`.

🔎 Proposed fix

```diff
 class MetricMeter:
-    def __init__(self, kp: MetricsKind, exporter: "MetricsExporter"):
+    def __init__(self, kp: MetricsKind, exporter: "MetricsExporter") -> None:
         self.kp = kp
         self.exporter = exporter
```
50-54: Missing return type annotation on `__init__`.

Per coding guidelines, all functions must have type annotations. Add `-> None` to the `__init__` method signature.

🔎 Proposed fix

```diff
-    def __init__(self, client: InfrahubClient, settings: SidecarSettings):
+    def __init__(self, client: InfrahubClient, settings: SidecarSettings) -> None:
         self.client = client
```
105-105: Use specific type annotation instead of `Any`.

The `kp` parameter is typed as `Any` but should be `MetricsKind` for proper type safety and IDE support. This is also consistent with how `kp` is used elsewhere in the class.

🔎 Proposed fix

```diff
-    async def _fetch_and_store(self, kp: Any) -> None:
+    async def _fetch_and_store(self, kp: MetricsKind) -> None:
```
🧹 Nitpick comments (7)
infrahub_exporter/main.py (4)
45-71: Return type annotation is misleading.

The method `_setup_routes` always returns `None` (line 71), but the return type annotation suggests it could return `JSONResponse | None`. The return type should simply be `None`.

🔎 Proposed fix

```diff
-    def _setup_routes(self) -> JSONResponse | None:
+    def _setup_routes(self) -> None:
         @self.app.get("/")
         async def health() -> PlainTextResponse:
             ...
         ...
-        return None
```
88-97: Server task reference is not stored for cleanup.

The `asyncio.create_task(server.serve())` creates a fire-and-forget task without storing the reference. If the server encounters an error, it won't be handled. Consider storing the task reference for proper lifecycle management and to allow `stop()` to cancel it gracefully.

🔎 Proposed fix

```diff
+    self._server_task: asyncio.Task | None = None  # Add to __init__
+
     async def start(self) -> None:
         config = uvicorn.Config(
             app=self.app,
             host=self.listen_address,
             port=self.listen_port,
             log_level=logging.getLevelName(logger.level).lower(),
         )
         server = uvicorn.Server(config)
-        asyncio.create_task(server.serve())
+        self._server_task = asyncio.create_task(server.serve())
         logger.info(f"Server listening on {self.listen_address}:{self.listen_port}")
```
99-101: `stop()` method doesn't actually stop the server.

The method logs and sleeps but doesn't cancel the server task. This should cancel the stored server task for proper cleanup.

🔎 Proposed fix

```diff
     async def stop(self) -> None:
         logger.info("Stopping server...")
-        await asyncio.sleep(0.1)
+        if self._server_task:
+            self._server_task.cancel()
+            try:
+                await self._server_task
+            except asyncio.CancelledError:
+                pass
```
128-136: TLS configuration is hardcoded.

The `tls_insecure=False` and `timeout=10` values are hardcoded. Consider making these configurable through `SidecarSettings` for flexibility in different deployment environments.

infrahub_exporter/metrics_exporter.py (3)
23-28: Consider using `@dataclass` for cleaner implementation.

`MetricEntry` is a simple data container that would benefit from the `@dataclass` decorator for reduced boilerplate.

🔎 Proposed refactor

```diff
+from dataclasses import dataclass
+
-class MetricEntry:
-    """Represents a single metric data point with labels and value."""
-
-    def __init__(self, labels: dict[str, Any], value: int) -> None:
-        self.labels = labels
-        self.value = value
+@dataclass
+class MetricEntry:
+    """Represents a single metric data point with labels and value."""
+
+    labels: dict[str, Any]
+    value: int
```
61-82: `start_otlp` is marked async but contains no await.

The method is declared as `async` but doesn't contain any `await` expressions. While this works, it's slightly misleading. Consider making it a sync method, or add a comment explaining it's async for API consistency.
184-190: Consider handling exceptions in `asyncio.gather`.

If one `_fetch_and_store` task raises an unhandled exception (not caught internally), `asyncio.gather` will propagate it and potentially crash the polling loop. Consider using `return_exceptions=True` to ensure one failing kind doesn't stop polling for others.

🔎 Proposed fix

```diff
     async def _poll_loop(self) -> None:
         """Background loop to fetch metrics periodically."""
         interval = self.settings.poll_interval_seconds
         while True:
             tasks = [self._fetch_and_store(kp) for kp in self.settings.metrics.kind]
-            await asyncio.gather(*tasks)
+            results = await asyncio.gather(*tasks, return_exceptions=True)
+            for kp, result in zip(self.settings.metrics.kind, results):
+                if isinstance(result, Exception):
+                    logger.error(f"Polling failed for kind '{kp.kind}': {result}")
             await asyncio.sleep(interval)
```
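As a standalone, runnable illustration of the `return_exceptions=True` behavior suggested in the fix above (the task names are invented, not the project's code):

```python
import asyncio

async def flaky(i: int) -> int:
    if i == 1:
        raise RuntimeError("boom")
    return i * 10

async def main() -> None:
    # With return_exceptions=True, failures come back as results instead of
    # propagating the first exception and aborting the whole gather call.
    results = await asyncio.gather(*(flaky(i) for i in range(3)), return_exceptions=True)
    for i, result in enumerate(results):
        if isinstance(result, Exception):
            print(f"task {i} failed: {result}")
        else:
            print(f"task {i} -> {result}")

asyncio.run(main())
```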
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (10)
- .github/CODEOWNERS
- .github/workflows/ci.yml
- .yamllint.yml
- LICENSE.txt
- infrahub_exporter/config.py
- infrahub_exporter/main.py
- infrahub_exporter/metrics_exporter.py
- infrahub_exporter/service_discovery.py
- pyproject.toml
- tasks.py
🧰 Additional context used
📓 Path-based instructions (5)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: All functions must have type annotations
Use async/await pattern for asynchronous operations following event-driven architecture
Target Python 3.10-3.12 as specified in pyproject.toml
Files:
- infrahub_exporter/service_discovery.py
- infrahub_exporter/config.py
- tasks.py
- infrahub_exporter/metrics_exporter.py
- infrahub_exporter/main.py
**/service_discovery.py
📄 CodeRabbit inference engine (AGENTS.md)
ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching
Files:
infrahub_exporter/service_discovery.py
**/config.py
📄 CodeRabbit inference engine (AGENTS.md)
Use Pydantic models for YAML configuration with environment variable overrides via `INFRAHUB_SIDECAR_*` prefix
Files:
infrahub_exporter/config.py
**/metrics_exporter.py
📄 CodeRabbit inference engine (AGENTS.md)
MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict
Files:
infrahub_exporter/metrics_exporter.py
**/main.py
📄 CodeRabbit inference engine (AGENTS.md)
HTTP server must expose health (`/`), metrics (`/metrics`), and service discovery (`/sd/*`) endpoints via FastAPI
Files:
infrahub_exporter/main.py
🧠 Learnings (4)
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching
Applied to files:
infrahub_exporter/service_discovery.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Run linters (yamllint, ruff, mypy) using invoke lint command before committing code
Applied to files:
.github/workflows/ci.yml
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/metrics_exporter.py : MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict
Applied to files:
infrahub_exporter/metrics_exporter.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/main.py : HTTP server must expose health (`/`), metrics (`/metrics`), and service discovery (`/sd/*`) endpoints via FastAPI
Applied to files:
infrahub_exporter/main.py
🧬 Code graph analysis (1)
infrahub_exporter/metrics_exporter.py (1)
infrahub_exporter/main.py (1)
`metrics` (51-56)
🪛 GitHub Actions: CI
infrahub_exporter/service_discovery.py
[error] 1-1: Ruff format check failed. 4 files would be reformatted by 'ruff format'. Run 'ruff format' to fix code style.
infrahub_exporter/config.py
[error] 1-1: Ruff format check failed. 4 files would be reformatted by 'ruff format'. Run 'ruff format' to fix code style.
infrahub_exporter/metrics_exporter.py
[error] 1-1: Ruff format check failed. 4 files would be reformatted by 'ruff format'. Run 'ruff format' to fix code style.
infrahub_exporter/main.py
[error] 1-1: Ruff format check failed. 4 files would be reformatted by 'ruff format'. Run 'ruff format' to fix code style.
🪛 LanguageTool
LICENSE.txt
[style] ~161-~161: ‘any and all’ might be wordy. Consider a shorter alternative.
Context: ...ge, computer failure or malfunction, or any and all other commercial damages or losse...
(EN_WORDINESS_PREMIUM_ANY_AND_ALL)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (22)
.yamllint.yml (1)
7-7: LGTM! Adding `.venv` to the ignore list is a standard best practice that prevents yamllint from scanning virtual environment files.

tasks.py (1)

4-4: LGTM! Import ordering follows alphabetical convention (`Context` before `task`).
.github/CODEOWNERS (1)
1-3: LGTM! The CODEOWNERS file establishes clear ownership and review requirements for the repository.
infrahub_exporter/service_discovery.py (2)
21-145: Well-implemented service discovery with TTL caching.

The ServiceDiscoveryManager correctly:
- Executes GraphQL queries against Infrahub
- Implements per-query TTL caching via CachedTargets
- Transforms results to Prometheus SD JSON format with proper label mappings
- Handles nested field extraction with dot-notation
- Uses async/await patterns appropriately
All functions have proper type annotations as required by coding guidelines.
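To make the per-query TTL caching mentioned above concrete, here is a minimal sketch of the pattern; the class and field names are assumptions for illustration, not the repository's actual `CachedTargets` implementation:

```python
import time
from dataclasses import dataclass, field
from typing import Any

@dataclass
class CachedTargets:
    """Cache entry pairing Prometheus SD targets with the time they were stored."""

    targets: list[dict[str, Any]]
    fetched_at: float = field(default_factory=time.monotonic)

    def is_fresh(self, ttl_seconds: float) -> bool:
        # An entry is served from cache while it is younger than the query's TTL;
        # otherwise the manager re-runs the GraphQL query and replaces the entry.
        return (time.monotonic() - self.fetched_at) < ttl_seconds
```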
1-9: Run `ruff format` to fix formatting.

The pipeline indicates this file needs formatting. The import reordering should be automatically handled by the formatter.

Run the following to fix:

```bash
ruff format infrahub_exporter/service_discovery.py
```

⛔ Skipped due to learnings

Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching

LICENSE.txt (1)
1-190: LGTM! The Apache License 2.0 has been properly added to the repository with the correct copyright notice for OpsMill.
pyproject.toml (4)
39-46: LGTM! The dev dependencies align well with the CI workflow updates and support comprehensive code quality checks (linting, type checking, testing).
52-61: LGTM! Ruff configuration is well-structured with:
- Reasonable line length (120)
- Standard formatting preferences (double quotes, space indentation)
- Essential lint rules enabled (errors, naming, imports, warnings)
67-68: LGTM! Adding `warn_return_any` and `warn_unused_ignores` strengthens type checking without being overly strict.
71-76: LGTM! Pytest configuration properly supports async tests with `asyncio_mode = "auto"` and ignores deprecation warnings to reduce noise.

.github/workflows/ci.yml (5)
10-12: LGTM! Adding concurrency control with `cancel-in-progress: true` prevents redundant workflow runs and saves CI resources.
14-15: LGTM! Explicit read-only permissions follow the principle of least privilege for CI workflows.
51-59: LGTM! The streamlined Python linting workflow using `uv run` is cleaner and aligns with the updated tooling in pyproject.toml. Separating format checks, import sorting, general linting, and type checking provides clear failure messages.
76-89: LGTM! Adding a dedicated markdown-lint job improves documentation quality checks and complements the existing Vale-based style validation.
136-142: Manual Vale installation noted.

The comment indicates the GitHub Action for Vale is broken, necessitating manual installation. This is a reasonable workaround.
Consider tracking this as a technical debt item to revisit when the official Vale action is fixed. You may want to add a TODO comment or create an issue to monitor the upstream action status.
infrahub_exporter/main.py (2)
1-14: Import reorganization looks good.

The imports are properly organized and the file correctly imports all necessary dependencies for the HTTP server, Prometheus metrics, and service discovery functionality.
161-165: LGTM! Standard async entrypoint pattern with proper `KeyboardInterrupt` handling to prevent traceback noise on Ctrl+C.

infrahub_exporter/metrics_exporter.py (5)
10-16: Import reorganization looks good.

The imports are properly structured for both Prometheus and OTLP metric export functionality.
192-200: LGTM! The `start` method properly initializes exporters conditionally based on settings and stores the poll task reference for cleanup. Good implementation pattern.
202-210: LGTM! Proper async task cancellation pattern with `CancelledError` handling.
84-103: Prometheus Collector interface properly implemented.

The `collect` method correctly implements the Prometheus `Collector` interface, yielding `GaugeMetricFamily` metrics from the `_store` dict as specified in the coding guidelines.
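For readers less familiar with the `prometheus_client` custom-collector pattern this comment refers to, here is a minimal, self-contained sketch; the store layout and metric names are invented for illustration and this is not the project's actual `MetricsExporter`:

```python
from collections.abc import Iterator

from prometheus_client import REGISTRY
from prometheus_client.core import GaugeMetricFamily

class StoreCollector:
    """Illustrative collector that yields gauges from a pre-populated store dict."""

    def __init__(self, store: dict[str, list[tuple[dict[str, str], float]]]) -> None:
        # Maps metric name -> list of (labels, value) pairs, refreshed elsewhere
        # (e.g., by a background polling loop).
        self._store = store

    def collect(self) -> Iterator[GaugeMetricFamily]:
        for name, entries in self._store.items():
            label_names = sorted(entries[0][0]) if entries else []
            family = GaugeMetricFamily(name, f"Gauge metric {name}", labels=label_names)
            for labels, value in entries:
                family.add_metric([labels[k] for k in label_names], value)
            yield family

store = {"example_device_count": [({"site": "paris"}, 12.0), ({"site": "nyc"}, 7.0)]}
REGISTRY.register(StoreCollector(store))  # /metrics output now includes these gauges
```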
148-174: Individual fetch calls inside loops may be redundant given relationships are already included in the initial query.

The code already requests relationships via the `include` parameter when fetching items (`await self.client.all(kind=kp.kind, include=kp.include, ...)`). However, for RelatedNode relationships, `await attr.fetch()` is called unconditionally on each item (line 152), and for RelationshipManager relationships, fetch is called per peer when not cached (line 170). Since relationships should already be loaded via `include`, these individual fetch calls may be unnecessary. If they are required due to SDK behavior, clarify why the initial `include` parameter doesn't fully load relationships.
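As a hedged sketch of the alternative hinted at here: the Infrahub SDK documents a `prefetch_relationships` option that loads related nodes up front and populates the client store, which would make the per-item fetches unnecessary. The exact keyword arguments should be verified against the SDK version pinned in this repository:

```python
from infrahub_sdk import InfrahubClient
from infrahub_sdk.node import InfrahubNode

async def load_items_with_relationships(client: InfrahubClient, kind: str) -> list[InfrahubNode]:
    # prefetch_relationships asks the SDK to resolve related nodes in the same
    # query pass and cache them in client.store, so per-item fetch() round-trips
    # should not be needed afterwards. Illustrative only; not verified against
    # this repository's pinned infrahub-sdk version.
    return await client.all(kind=kind, prefetch_relationships=True, populate_store=True)
```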
```yaml
VALE_VERSION: "3.13.0"
UV_VERSION: "0.9.18"
```
🧩 Analysis chain
🏁 Script executed:
```bash
# Check the file and git history to confirm previous versions
git log --oneline --all -p .github/workflows/ci.yml | head -100
```

Repository: opsmill/infrahub-exporter
Length of output: 2698
🌐 Web query:
UV 0.5.10 to 0.9.18 breaking changes migration guide
💡 Result:
Here’s a concise migration checklist for upgrading uv from 0.5.10 → 0.9.18 (key breaking changes and what to do). Sources cited inline.
- Pin / verify Python versions
- 0.9.x changed default/available Python handling (Python 3.14 surfaced as a default/stable in 0.9.0). Pin the Python you expect (or test on 3.14) and update any CI images. If your builds rely on an implicit system Python, add explicit python-version constraints. (newreleases.io)
- Update build-backend bounds in pyproject.toml
- If you have an upper bound for uv_build in [build-system], update it to a 0.9.x-compatible range (examples in docs show uv_build>=0.9.18,<0.10.0). Verify build-backend compatibility after upgrade. (docs.astral.sh)
- Review deprecated/changed CLI flags and commands
- Some flags/behaviors were deprecated or changed across 0.9.x (e.g., deprecation warnings such as --project on uv init in 0.9.9). Audit scripts/automation that call the uv CLI and adjust flags per current help output. Run `uv -h` (and each subcommand's `-h`) to detect mismatches. (github.com)
- uv publish and environment-variable behavior
- Earlier versions had issues where uv publish didn’t read UV_PUBLISH_URL / UV_PUBLISH_TOKEN; test your publish workflow (and explicitly pass --publish-url/--token as a fallback). 0.9.x contains fixes/enhancements to publish handling—retest and adjust CI if necessary. (github.com)
- Configuration / tool settings
- New configuration keys and flags landed across 0.9.x (example: accept --torch-backend in [tool.uv] in 0.9.18). Review your [tool.uv] config and environment variables for newly supported or renamed keys. (newreleases.io)
- New features that may change behavior
- 0.9.x added features (resolver improvements, cache commands, export/sync changes, Python variant handling) that can alter resolver outcomes or CLI output — verify dependency resolution, lockfiles (uv.lock), and automation that parses uv output. Run a full install/lock and compare results. (github.com)
- Test matrix and CI
- Run your full test matrix (all supported Python versions you target) under 0.9.18. Pay attention to interpreter detection changes and platform-specific messages that may affect automation. (github.com)
- Read the per-release notes / changelogs
- For any remaining edge cases, consult the uv release notes / changelogs for each 0.6.x → 0.9.18 release to catch smaller breaking changes or behavioral tweaks you rely on. The project split changelogs by breaking version — use those files for detailed diffs. (git.joshthomas.dev)
If you want, I can:
- generate a concrete changelog diff (0.5.10 → 0.9.18) with links to each release note, or
- run a checklist against a sample pyproject.toml / CI config you paste.
Citations:
- 1: https://newreleases.io/project/github/astral-sh/uv/release/0.9.0?utm_source=openai
- 2: https://docs.astral.sh/uv/concepts/projects/init/?utm_source=openai
- 3: https://github.com/astral-sh/uv/releases?utm_source=openai
- 4: uv publish does not read environment variables astral-sh/uv#9997
- 5: https://newreleases.io/project/github/astral-sh/uv/release/0.9.18?utm_source=openai
- 6: https://github.com/astral-sh/uv/releases?utm_source=openai
- 7: https://github.com/astral-sh/uv/releases?utm_source=openai
- 8: https://git.joshthomas.dev/mirrors/uv/blame/commit/19ea0f4932940ceb9d9d436193cd4809b3172e68/CHANGELOG.md?utm_source=openai
🌐 Web query:
Vale 3.7.1 to 3.13.0 breaking changes release notes
💡 Result:
Summary of notable breaking / backwards-impacting changes from Vale v3.7.1 → v3.13.0
-
Global/big breaking change (pre-v3.x you may already have encountered): v3.0 moved vocabularies from $StylesPath/Vocab → $StylesPath/config/vocabularies (users upgrading from v2→v3 must move them). [1]
-
v3.7.x → v3.7.1: bugfixes only (Julia parser added; fixes to matchToken behavior) — no breaking changes called out. [2]
-
v3.9.x: added comment-based controls (switching styles per-block; ignore individual matches). These change how inline comment config can be used and may interact with existing in-file config; not strictly breaking, but can affect behavior if you relied on older comment semantics. [3]
-
v3.10.0: built-in MDX support and a config change — it’s no longer necessary to set CommentDelimiters or use a [formats] association for Markdown. That change can alter how previously-customized Markdown/comment configurations behave. [4]
-
v3.11.0: added front-matter field linting (YAML/TOML/JSON) producing new scopes (e.g., text.frontmatter.title). This creates additional scopes that rules can target — can change matching behavior if you relied on older scope names. [5]
-
v3.12.0: introduced Views (new way to extract scoped content from structured files and source code). Views change how content is scoped/linted (can be breaking for workflows expecting the previous blueprint/format handling). [5]
-
v3.13.0: added tree-sitter support for Java source files (affects how Java files are parsed/linted); minor fixes for continuation-character errors. [6]
If you need, I can:
- extract the exact changelog text for each release (3.8 → 3.13) or
- produce a migration checklist (config settings to review and example changes).
Sources
- v3.0 breaking notes (vocab path change). [1] (newreleases.io)
- v3.7.1 changelog (fixes, Julia parser). [2] (newreleases.io)
- v3.9 (comment syntax features). [3] (newreleases.io)
- v3.10 (MDX support; CommentDelimiters/formats change). [4] (newreleases.io)
- v3.11 & v3.12 (front matter scopes; Views). [5] (github.com)
- v3.13 (tree-sitter Java; fixes). [6] (github.com)
Citations:
- 1: https://newreleases.io/project/github/errata-ai/vale/release/v3.0.0?utm_source=openai
- 2: https://newreleases.io/project/github/errata-ai/vale/release/v3.7.1?utm_source=openai
- 3: https://newreleases.io/project/github/errata-ai/vale/release/v3.9.0?utm_source=openai
- 4: https://newreleases.io/project/github/errata-ai/vale/release/v3.10.0?utm_source=openai
- 5: https://github.com/errata-ai/vale/releases?utm_source=openai
- 6: https://github.com/errata-ai/vale/releases?utm_source=openai
Test these tool versions for compatibility concerns.
These versions include significant changes that warrant testing:
- UV 0.9.18: Python version handling changes, CLI flag deprecations, and resolver improvements (major jump from 0.5.x).
- Vale 3.13.0: New scoping behavior, Views feature, and front-matter linting (includes multiple feature additions from 3.7.x).
Run the full test suite with these versions to ensure no unexpected behavior changes or build failures.
🤖 Prompt for AI Agents
.github/workflows/ci.yml lines 18-19 specify VALE_VERSION "3.13.0" and
UV_VERSION "0.9.18"; the reviewer asks these versions be validated for
compatibility. Update CI to run the full test suite with these exact tool
versions (or add a matrix job that includes them), run the test matrix
locally/CI, capture and fix any failures caused by changed CLI flags, Python
handling, resolver behavior, or Vale scoping/front-matter changes (e.g., adjust
CLI invocations, update Python compatibility layers, and modify Vale
config/rules as needed), then commit CI adjustments and any code/config fixes so
the pipeline passes with these versions.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
infrahub_exporter/metrics_exporter.py (2)
35-37: Add missing return type annotation.

The `__init__` method is missing a return type annotation. According to the coding guidelines, all functions must have type annotations.

🔎 Proposed fix

```diff
-    def __init__(self, kp: MetricsKind, exporter: "MetricsExporter"):
+    def __init__(self, kp: MetricsKind, exporter: "MetricsExporter") -> None:
```

Based on coding guidelines.
48-52: Add missing return type annotation.

The `__init__` method is missing a return type annotation. According to the coding guidelines, all functions must have type annotations.

🔎 Proposed fix

```diff
-    def __init__(self, client: InfrahubClient, settings: SidecarSettings):
+    def __init__(self, client: InfrahubClient, settings: SidecarSettings) -> None:
```

Based on coding guidelines.
🧹 Nitpick comments (1)
infrahub_exporter/metrics_exporter.py (1)
103-103: Consider using a more specific type annotation.

The `kp` parameter uses the `Any` type, which reduces type safety. Based on the usage context and the reference to `self.settings.metrics.kind` (line 176), this should likely be `MetricsKind`.

🔎 Proposed fix

```diff
-    async def _fetch_and_store(self, kp: Any) -> None:
+    async def _fetch_and_store(self, kp: MetricsKind) -> None:
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- infrahub_exporter/config.py
- infrahub_exporter/main.py
- infrahub_exporter/metrics_exporter.py
- infrahub_exporter/service_discovery.py
✅ Files skipped from review due to trivial changes (1)
- infrahub_exporter/config.py
🚧 Files skipped from review as they are similar to previous changes (1)
- infrahub_exporter/main.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: All functions must have type annotations
Use async/await pattern for asynchronous operations following event-driven architecture
Target Python 3.10-3.12 as specified in pyproject.toml
Files:
- infrahub_exporter/service_discovery.py
- infrahub_exporter/metrics_exporter.py
**/service_discovery.py
📄 CodeRabbit inference engine (AGENTS.md)
ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching
Files:
infrahub_exporter/service_discovery.py
**/metrics_exporter.py
📄 CodeRabbit inference engine (AGENTS.md)
MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict
Files:
infrahub_exporter/metrics_exporter.py
🧠 Learnings (2)
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching
Applied to files:
infrahub_exporter/service_discovery.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/metrics_exporter.py : MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict
Applied to files:
infrahub_exporter/metrics_exporter.py
🧬 Code graph analysis (2)
infrahub_exporter/service_discovery.py (1)
infrahub_exporter/config.py (1)
`ServiceDiscoveryQuery` (55-67)
infrahub_exporter/metrics_exporter.py (1)
infrahub_exporter/main.py (1)
`metrics` (47-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (2)
infrahub_exporter/service_discovery.py (1)
2-2: LGTM: Formatting and import organization improvements.

The changes in this file are limited to import reordering and code formatting (collapsing multi-line statements to single lines). No functional logic has been modified, and all error handling, caching, and GraphQL execution patterns remain intact.
Also applies to: 7-7, 43-43, 126-126
infrahub_exporter/metrics_exporter.py (1)
54-80: LGTM: Metrics exporter implementation follows requirements.

The implementation correctly:

- Implements the Prometheus `Collector` interface with the `collect()` method
- Runs a background polling loop via `_poll_loop()`
- Stores metrics in the `_store` dict as required
- Supports both Prometheus and OTLP export paths
- Handles error cases gracefully with proper logging
The unified exporter design cleanly separates concerns between the two export backends.
Based on coding guidelines.
Also applies to: 82-101, 172-178, 180-198
```diff
-                    peer = itm._client.store.get(
-                        key=attr.peer.id, raise_when_missing=False
-                    )
+                    peer = itm._client.store.get(key=attr.peer.id, raise_when_missing=False)
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# First, search for _client usage patterns in the codebase
rg -nP --type=py '\._client\.store\.get|itm\._client' -C 2
```

Repository: opsmill/infrahub-exporter
Length of output: 1176
🏁 Script executed:
```bash
#!/bin/bash
# Check the context in the metrics_exporter.py file around lines 149 and 157
sed -n '145,160p' infrahub_exporter/metrics_exporter.py
```

Repository: opsmill/infrahub-exporter
Length of output: 924
🏁 Script executed:
```bash
#!/bin/bash
# Search for any documentation or comments about accessing _client
rg -nP --type=py '(_client|store\.get)' -C 1 | head -100
```

Repository: opsmill/infrahub-exporter
Length of output: 1332
🏁 Script executed:
```bash
#!/bin/bash
# Check if there are public API methods available instead of accessing _client directly
rg -nP --type=py 'class InfrahubNode|def.*store' | head -30
```

Repository: opsmill/infrahub-exporter
Length of output: 273
🏁 Script executed:
```bash
#!/bin/bash
# Search for InfrahubNode class and its public methods
rg -nP --type=py 'class InfrahubNode|def get_human_friendly_id|def fetch' -A 2 | head -50
```

Repository: opsmill/infrahub-exporter
Length of output: 159
🏁 Script executed:
```bash
#!/bin/bash
# Check if there's any documentation or comments about store access in the repo
find . -type f \( -name "*.md" -o -name "*.rst" -o -name "*.txt" \) ! -path "./.*" | xargs grep -l "store" 2>/dev/null | head -5
```

Repository: opsmill/infrahub-exporter
Length of output: 92
🏁 Script executed:
```bash
#!/bin/bash
# Search for any public methods that might provide access to related nodes without using _client
rg -nP --type=py 'def.*peer|def.*related.*node' -A 3 | head -40
```

Repository: opsmill/infrahub-exporter
Length of output: 159
🏁 Script executed:
```bash
#!/bin/bash
# Check the imports to understand what's available from infrahub_sdk
rg -nP --type=py '^from infrahub_sdk|^import infrahub' | head -20
```

Repository: opsmill/infrahub-exporter
Length of output: 792
🏁 Script executed:
```bash
#!/bin/bash
# Check README for SDK usage patterns related to store or relationships
head -200 README.md
```

Repository: opsmill/infrahub-exporter
Length of output: 1628
🏁 Script executed:
```bash
#!/bin/bash
# Search for any store-related documentation in the repo
rg -nP 'store.*get|_client.*store' -B 2 -A 2 --type=md
```

Repository: opsmill/infrahub-exporter
Length of output: 159
🏁 Script executed:
```bash
#!/bin/bash
# Look at how attr.peer and attr.peers are being used to understand the pattern
rg -nP --type=py 'attr\.peer|attr\.peers' -B 3 -A 3 | head -80
```

Repository: opsmill/infrahub-exporter
Length of output: 1534
🏁 Script executed:
```bash
#!/bin/bash
# Check if there's a way to access store through a public method or property
rg -nP --type=py 'def.*store|@property.*store|\.store' -A 1 | head -60
```

Repository: opsmill/infrahub-exporter
Length of output: 820
🏁 Script executed:
```bash
#!/bin/bash
# Look at the full context of the _fetch_and_store method
sed -n '100,170p' infrahub_exporter/metrics_exporter.py
```

Repository: opsmill/infrahub-exporter
Length of output: 3020
🏁 Script executed:
```bash
#!/bin/bash
# Search for any comments or documentation about why _client is needed
rg -nP --type=py 'store|_client' -B 2 -A 2 | head -100
```

Repository: opsmill/infrahub-exporter
Length of output: 4792
🏁 Script executed:
```bash
#!/bin/bash
# Check if there's a pyproject.toml or requirements file that shows infrahub-sdk version
cat pyproject.toml | grep -A 5 -B 5 infrahub
```

Repository: opsmill/infrahub-exporter
Length of output: 1029
🏁 Script executed:
```bash
#!/bin/bash
# Look for any issues or discussions about public vs private API access
find . -type f \( -name "*.md" -o -name "*.txt" -o -name "*.py" \) -exec grep -l "public.*api\|private\|_client\|store" {} \; 2>/dev/null | head -10
```

Repository: opsmill/infrahub-exporter
Length of output: 199
🌐 Web query:
infrahub-sdk InfrahubNode _client store API documentation
💡 Result:
The Infrahub Python SDK exposes a client with a local object store (client.store) used by the SDK to cache nodes and related objects. Key behaviors and API methods:
- Purpose: caches objects returned by client.get/filters/all, related objects when prefetch_relationships or fetch is used, and newly created objects. [1]
- Retrieve from store: client.store.get(key=...) — accepts id or hfid (when using hfid include kind). Example: tag_in_store = client.store.get(key=tag.id). [1]
- Manually store: client.store.set(key=..., node=...) — you can pick a custom key (e.g., name value). [1]
- Control population: query methods accept populate_store=False to avoid automatically storing returned objects. [1]
- Context: how to create/configure InfrahubClient (async/sync) and examples for using the client that interact with the store are in the client guide. [2]
References:
[1] Infrahub SDK — Using the client store (API usage and examples).
[2] Infrahub SDK — Creating/configuring an Infrahub client (client usage, async/sync examples).
🌐 Web query:
infrahub-sdk relationship handling public API store access
💡 Result:
Brief summary — how Infrahub SDK handles relationships, public API access, and the client store:
-
Client store: the SDK keeps a local cache ("store") of nodes returned by get/filters/all, created objects, and related objects when you use prefetch_relationships or the relationship fetch() method. You can get/set by id/hfid or custom keys and disable auto-population with populate_store=False. [1]
-
Relationship handling: related nodes can be returned inline when you call queries with prefetch_relationships, or fetched later via node.relationship.fetch() which will store the related node in the client store. Use the store to look up related objects without re-querying. [1][3]
-
Public API / GraphQL: you can also use the API/GraphQL endpoints directly (execute_graphql available in SDK) to retrieve relationships; query filters support filtering on relationship attributes. [3]
-
Authentication & access control: API access follows Infrahub auth rules — anonymous read-only by default (configurable), or use JWT (short-lived) or API tokens (long-lived) for programmatic access. Permissions are applied via roles/groups. Supply tokens in headers (X-INFRAHUB-KEY) or via the SDK auth flow. [4]
-
Practical notes:
- To avoid storing results locally, call queries with populate_store=False. [1]
- To eagerly load relations, use prefetch_relationships on queries; to lazy-load a single relation, call fetch() on the relationship. [1][3]
- Use API or SDK tokens depending on your security/expiry needs; frontend only supports JWT, API tokens work for GraphQL Playground and long-lived integrations. [4]
Sources:
[1] Using the client store — Infrahub docs.
[2] How to create and configure an Infrahub client — Infrahub docs.
[3] Querying data in Infrahub (relationships, filters, execute_graphql) — Infrahub docs.
[4] User management and authentication (API tokens, JWT, anonymous access) — Infrahub docs.
Replace private _client attribute access with the public client.store API.
The code at lines 149 and 157 accesses the private `_client` attribute from InfrahubNode instances. The documented SDK pattern is to use the public `client.store` API. Since `self.client` is available in this class, use `self.client.store.get(key=..., raise_when_missing=False)` instead of `itm._client.store.get(...)` in both places.
🤖 Prompt for AI Agents
In infrahub_exporter/metrics_exporter.py around lines 149 to 157, replace direct
access to the private attribute itm._client with the public SDK client via
self.client.store; specifically change any calls like
itm._client.store.get(key=..., raise_when_missing=False) to use
self.client.store.get(key=..., raise_when_missing=False) (apply the same
replacement for all occurrences noted at ~149 and ~157).
- Change _fetch_and_store parameter from Any to MetricsKind for type safety
- Add -> None return type annotations to __init__ methods
- Fix _setup_routes return type from JSONResponse | None to None
- Update AGENTS.md to reflect Python 3.13 support per pyproject.toml
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
AGENTS.md (3)
1-1: Filename mismatch: Update file header from CLAUDE.md to AGENTS.md.

Line 1 shows `# CLAUDE.md` but the file is named `AGENTS.md`. Update the header to match the actual filename.

🔎 Proposed fix

```diff
-# CLAUDE.md
+# AGENTS.md
```
8-8: Add blank line before the unordered list (MD032).

Markdown linting requires a blank line before lists. This blocks the CI pipeline.

🔎 Proposed fix

```diff
 Infrahub Exporter is a Python microservice that bridges Infrahub (infrastructure management platform) with monitoring systems. It provides:
+
 - Prometheus metrics export via `/metrics` endpoint
```
55-55: Add blank line before the unordered list (MD032).

Markdown linting requires a blank line before lists. This blocks the CI pipeline.

🔎 Proposed fix

```diff
 Configuration is YAML-based (see `examples/config.yml`). Key sections:
+
 - `infrahub`: Server address, API token, branch
```
♻️ Duplicate comments (1)
infrahub_exporter/metrics_exporter.py (1)
149-149: Use public `self.client.store` API instead of private `_client` attribute.

Lines 149 and 157 access the private `_client` attribute from `InfrahubNode` instances. As noted in a previous review, the documented SDK pattern is to use the public `client.store` API via `self.client.store.get(key=..., raise_when_missing=False)` instead of `itm._client.store.get(...)`.

🔎 Suggested fix

```diff
-                    peer = itm._client.store.get(key=attr.peer.id, raise_when_missing=False)
+                    peer = self.client.store.get(key=attr.peer.id, raise_when_missing=False)
```

And at line 157:

```diff
-                        node = itm._client.store.get(key=p.id, raise_when_missing=False)
+                        node = self.client.store.get(key=p.id, raise_when_missing=False)
```

Also applies to: 157-157
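For context, the public store API the reviewer refers to looks roughly like this in SDK usage, based on the Infrahub SDK documentation summarized above; the helper name is invented and the details are illustrative:

```python
from infrahub_sdk import InfrahubClient
from infrahub_sdk.node import InfrahubNode

def lookup_cached_peer(client: InfrahubClient, peer_id: str) -> InfrahubNode | None:
    # Nodes returned by client.get/filters/all (and relationships loaded via
    # prefetch or fetch()) are cached in client.store; get() looks them up by
    # id or hfid. raise_when_missing=False returns None on a cache miss
    # instead of raising.
    return client.store.get(key=peer_id, raise_when_missing=False)
```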
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- AGENTS.md
- infrahub_exporter/main.py
- infrahub_exporter/metrics_exporter.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: All functions must have type annotations
Use async/await pattern for asynchronous operations following event-driven architecture
Target Python 3.10-3.12 as specified in pyproject.toml
Files:
- infrahub_exporter/main.py
- infrahub_exporter/metrics_exporter.py
**/main.py
📄 CodeRabbit inference engine (AGENTS.md)
HTTP server must expose health (`/`), metrics (`/metrics`), and service discovery (`/sd/*`) endpoints via FastAPI
Files:
infrahub_exporter/main.py
**/metrics_exporter.py
📄 CodeRabbit inference engine (AGENTS.md)
MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict
Files:
infrahub_exporter/metrics_exporter.py
🧠 Learnings (5)
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/*.py : Target Python 3.10-3.12 as specified in pyproject.toml
Applied to files:
AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/*.py : All functions must have type annotations
Applied to files:
AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/main.py : HTTP server must expose health (`/`), metrics (`/metrics`), and service discovery (`/sd/*`) endpoints via FastAPI
Applied to files:
infrahub_exporter/main.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching
Applied to files:
infrahub_exporter/main.py
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/metrics_exporter.py : MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict
Applied to files:
infrahub_exporter/metrics_exporter.py
🧬 Code graph analysis (2)
infrahub_exporter/main.py (3)
infrahub_exporter/config.py (4)
- `Config` (90-92)
- `ServiceDiscoveryConfig` (70-74)
- `ServiceDiscoveryQuery` (55-67)
- `SidecarSettings` (78-107)

infrahub_exporter/metrics_exporter.py (1)

- `MetricsExporter` (31-198)

infrahub_exporter/service_discovery.py (2)

- `ServiceDiscoveryManager` (21-139)
- `get_targets` (28-41)
infrahub_exporter/metrics_exporter.py (2)
infrahub_exporter/main.py (1)
- `metrics` (47-52)

infrahub_exporter/config.py (2)

- `MetricsKind` (10-15)
- `SidecarSettings` (78-107)
🪛 GitHub Actions: CI
AGENTS.md
[error] 8-8: MD032/blanks-around-lists Lists should be surrounded by blank lines [Context: "- Prometheus metrics export vi..."]
[error] 55-55: MD032/blanks-around-lists Lists should be surrounded by blank lines [Context: "- infrahub: Server address, ..."]
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (5)
AGENTS.md (1)
62-62: Verify Python 3.13 support in pyproject.toml.

The update extends Python target support from 3.10-3.12 to 3.10-3.13. Ensure that `pyproject.toml` has been updated to declare Python 3.13 as a supported version and that CI workflows (e.g., `.github/workflows/ci.yml`) include Python 3.13 in their test matrix.

infrahub_exporter/main.py (2)
41-41: Good fix: return type annotation corrected.

The return type change from `JSONResponse | None` to `None` is correct. This method registers routes but doesn't return anything, so `-> None` properly reflects its behavior.
56-64: LGTM: Proper closure variable capture with default argument.

The use of a default argument (`q: ServiceDiscoveryQuery = query`) correctly captures each loop iteration's `query` value at function definition time, avoiding the common Python closure late-binding gotcha. Each endpoint will properly reference its corresponding query configuration, as the sketch below illustrates.
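A standalone illustration of the late-binding behavior and the default-argument fix, independent of the FastAPI routes in question:

```python
# Late binding: each lambda closes over the loop variable itself, so all of
# them see its final value once the loop has finished.
handlers = [lambda: query for query in ["a", "b", "c"]]
print([h() for h in handlers])  # ['c', 'c', 'c']

# Default-argument capture: the current value is bound per iteration at
# function definition time, which is the pattern praised above.
handlers = [lambda q=query: q for query in ["a", "b", "c"]]
print([h() for h in handlers])  # ['a', 'b', 'c']
```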
35-35: Good improvement: complete type annotations added.

The addition of `-> None` return type annotations to `__init__` methods (lines 35, 48) and the explicit return type on `_otlp_callback` (line 39) aligns with the coding guidelines requirement that all functions must have type annotations.

Also applies to: 39-39, 48-48
103-103: Excellent: parameter type strengthened from `Any` to `MetricsKind`.

Refining the `kp` parameter type from `Any` to `MetricsKind` provides stronger type safety and better IDE support, directly addressing the PR objective for improved typing.
- Add lint_markdown task to run markdownlint on all Markdown files
- Include markdownlint in lint_all task to match CI pipeline
- Fix README.md link text from generic "here" to descriptive text
- Fix lint_yaml docstring (was incorrectly saying "Python files")
…porter into pmc-20251228-updates
Actionable comments posted: 1
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- AGENTS.md
- README.md
- tasks.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py: All functions must have type annotations
Use async/await pattern for asynchronous operations following event-driven architecture
Target Python 3.10-3.12 as specified in pyproject.toml
Files:
tasks.py
🧠 Learnings (6)
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/service_discovery.py : ServiceDiscoveryManager must execute GraphQL queries against Infrahub and transform results to Prometheus SD JSON format with per-query TTL caching
Applied to files:
AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/metrics_exporter.py : MetricsExporter must implement the Prometheus Collector interface and run a background polling loop with metrics stored in `_store` dict
Applied to files:
AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/main.py : HTTP server must expose health (`/`), metrics (`/metrics`), and service discovery (`/sd/*`) endpoints via FastAPI
Applied to files:
AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/*.py : Target Python 3.10-3.12 as specified in pyproject.toml
Applied to files:
AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Applies to **/config.py : Use Pydantic models for YAML configuration with environment variable overrides via `INFRAHUB_SIDECAR_*` prefix
Applied to files:
AGENTS.md
📚 Learning: 2025-12-28T09:33:46.407Z
Learnt from: CR
Repo: opsmill/infrahub-exporter PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-28T09:33:46.407Z
Learning: Run linters (yamllint, ruff, mypy) using invoke lint command before committing code
Applied to files:
tasks.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (5)
README.md (1)
25-25: Documentation link URL updated correctly.

The change updates the documentation reference to point to the correct integration docs path (`/integrations/exporter/`), which aligns with the PR's documentation alignment objectives.

Please verify that the updated URL is valid and accessible in the Infrahub documentation system. If possible, confirm this URL change is consistent with any other documentation references updated in this PR.
tasks.py (3)
4-4: Import order change is cosmetic.

The imports have been alphabetized. This is a minor style improvement with no functional impact.
23-24: Docstring correction is accurate.

The updated docstring now correctly reflects that this task checks YAML files, not Python files. Good catch!
62-62: LGTM: Integration into lint_all is appropriate.

The markdown linting task is correctly integrated into the `lint_all` sequence alongside other linters, matching the CI workflow structure.

AGENTS.md (1)
64-64: Python 3.10-3.13 support is correctly declared in pyproject.toml (`requires-python = ">=3.10,<3.14"`).
```python
@task
def lint_markdown(context: Context) -> None:
    """Run Linter to check all Markdown files."""
    print(" - Check code with markdownlint")
    exec_cmd = "markdownlint '**/*.md' --ignore node_modules --ignore docs/node_modules"
    with context.cd(MAIN_DIRECTORY_PATH):
        context.run(exec_cmd)
```
markdownlint-cli dependency is not declared in the Python project.
The lint_markdown task assumes markdownlint is available in the environment. However, markdownlint-cli is not declared in pyproject.toml as a dev dependency—it's only installed via npm in the CI workflow. Local developers running invoke lint or invoke lint-markdown will encounter a command-not-found error unless they manually install markdownlint-cli via npm.
Consider one of the following:
- Document the npm prerequisite in the README or a development setup guide
- Add a check in the task to verify markdownlint is available and provide a helpful error message
- Use a Python-based markdown linter that can be declared as a dev dependency
🔎 Example: Add availability check with helpful error

```diff
 @task
 def lint_markdown(context: Context) -> None:
     """Run Linter to check all Markdown files."""
+    # Check if markdownlint is available
+    result = context.run("which markdownlint", warn=True, hide=True)
+    if result is None or result.exited != 0:
+        print("ERROR: markdownlint not found. Install with: npm install -g markdownlint-cli")
+        sys.exit(1)
+
     print(" - Check code with markdownlint")
     exec_cmd = "markdownlint '**/*.md' --ignore node_modules --ignore docs/node_modules"
     with context.cd(MAIN_DIRECTORY_PATH):
         context.run(exec_cmd)
```

Committable suggestion skipped: line range outside the PR's diff.
Summary by CodeRabbit
New Features
Chores
Bug Fixes / Reliability
Documentation