claude vibe code refactor #185

Merged
ryansurf merged 1 commit into main from feat-vibecode
Mar 2, 2026
Conversation

@ryansurf
Owner

@ryansurf ryansurf commented Mar 2, 2026

General:

  • Have you followed the guidelines in our Contributing document?
  • Have you checked to ensure there aren't other open Pull Requests for the same update/change?

Code:

  1. Does your submission pass tests?
  2. Have you run the linter/formatter on your code locally before submission?
  3. Have you updated the documentation/README to reflect your changes, as applicable?
  4. Have you added an explanation of what your changes do?
  5. Have you written new tests for your changes, as applicable?

Summary by Sourcery

Refactor the CLI surf report pipeline into a clearer, testable architecture with stronger logging, configuration defaults, and reliability improvements across API, GPT, server, email, and database components.

New Features:

  • Introduce a SurfReport orchestration class and module-level run() shim to encapsulate CLI behavior and support JSON or text/GPT output modes.
  • Add a reusable Open-Meteo client factory for cached, retried API access and expose a backward-compatible alias for the previously misspelled separate_args_and_get_location function.
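
As a rough illustration of the cached-and-retried client idea (all names and defaults here are hypothetical — the actual `_create_openmeteo_client` in src/api.py presumably wires up the Open-Meteo SDK's cached session instead), a TTL cache plus exponential backoff can be sketched as:

```python
import time
from typing import Callable

def make_cached_client(fetch: Callable[[str], str],
                       ttl: float = 3600.0,
                       retries: int = 3,
                       backoff: float = 0.2) -> Callable[[str], str]:
    """Wrap fetch() with a TTL cache and retry-with-exponential-backoff."""
    cache: dict[str, tuple[float, str]] = {}

    def client(url: str) -> str:
        hit = cache.get(url)
        if hit is not None and time.monotonic() - hit[0] < ttl:
            return hit[1]  # cache hit: skip the network call entirely
        last_err = None
        for attempt in range(retries):
            try:
                result = fetch(url)
                cache[url] = (time.monotonic(), result)
                return result
            except OSError as err:
                last_err = err
                time.sleep(backoff * (2 ** attempt))
        raise last_err  # all retries exhausted

    return client
```

Centralizing this in one factory means every wrapper (UV, ocean, rain, forecast, hourly) shares the same caching and retry policy instead of duplicating it.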

Bug Fixes:

  • Ensure CLI arguments and feature flags are handled as booleans instead of ints, fixing conditional logic for output selection.
  • Correct rain and hourly forecast data handling to avoid incorrect tuple indexing and align with actual Open-Meteo responses.
  • Fix default email curl command construction and improve robustness when external calls fail in email sending and server subprocess execution.

Enhancements:

  • Switch from stdout prints to structured logging across helper, API, art, CLI, server, GPT, email, and database codepaths, including clearer warnings on invalid input and configuration issues.
  • Simplify and centralize Open-Meteo API client creation, reduce duplicated code, and streamline data gathering in the API layer.
  • Improve database connection and operations handling with clearer lifecycle management, error logging, and optional persistence when DB configuration is missing.
  • Refine ASCII art color definitions and validation, including better defaulting when colors are invalid.
  • Tighten settings defaults and semantics, particularly for GPT, email, and database settings.

Build:

  • Configure coverage collection and reporting in pyproject.toml, including exclusions for development-only or main entrypoints.
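
A minimal sketch of what that pyproject.toml coverage section could look like, based on the fragment quoted in the review comments below (the `[tool.coverage.run]` source list is an assumption):

```toml
[tool.coverage.run]
source = ["src"]          # assumption: focus collection on src/

[tool.coverage.report]
omit = ["src/dev_streamlit.py"]
exclude_lines = [
    "pragma: no cover",
    "if __name__ == .__main__.:",   # entries are regexes; '.' matches the quote chars
]
```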

Documentation:

  • Clarify and modernize docstrings across server, CLI, GPT, helper, and related modules to better explain responsibilities and behavior.

Tests:

  • Add extensive unit test coverage for CLI orchestration, helper functions, API error/fallback paths, GPT wrappers, Flask server routes, email sending, database connection and operations, art rendering, and Streamlit helpers, including new logging- and GPT-specific behaviors.
  • Update existing tests to align with logging-based error reporting, boolean flags, and refactored helper/API behavior.

@sourcery-ai
Contributor

sourcery-ai bot commented Mar 2, 2026

Reviewer's Guide

Refactors the CLI surf-report pipeline for better testability and logging, converts helper flags to booleans, centralizes Open-Meteo client creation, improves error handling and database/email/server behavior, and adds broad unit-test coverage and coverage configuration.

Sequence diagram for SurfReport.run end-to-end surf report generation

sequenceDiagram
    actor User
    participant CLI as cli.run
    participant SR as SurfReport
    participant Helper as helper
    participant API as api
    participant DB as SurfReportDatabaseOps
    participant GPT as gpt
    participant ExternalOpenMeteo as Open-Meteo service

    User->>CLI: invoke run(lat, long, args)
    CLI->>SR: create SurfReport()
    CLI->>SR: run(lat, long, args)

    SR->>Helper: separate_args(args or sys.argv)
    Helper-->>SR: parsed_args

    SR->>API: separate_args_and_get_location(parsed_args)
    API-->>SR: {lat, long, city}

    SR->>Helper: set_location(location_data)
    Helper-->>SR: city, loc_lat, loc_long

    SR->>SR: choose lat, long (explicit or from location)

    SR->>Helper: arguments_dictionary(lat, long, city, parsed_args)
    Helper-->>SR: arguments

    SR->>API: gather_data(lat, long, arguments)
    API->>API: _create_openmeteo_client()
    API->>ExternalOpenMeteo: API calls (uv, ocean, forecast, hourly, rain)
    ExternalOpenMeteo-->>API: responses
    API-->>SR: ocean_data_dict

    alt database_enabled
        SR->>DB: insert_report(ocean_data_dict)
        DB-->>SR: inserted_id
    end

    alt json_output is False
        SR->>Helper: print_outputs(ocean_data_dict, arguments, gpt_prompt, gpt_info)
        Helper->>GPT: simple_gpt or openai_gpt
        GPT-->>Helper: gpt_response
        Helper-->>SR: gpt_response
        SR-->>CLI: (ocean_data_dict, gpt_response)
    else json_output is True
        SR->>Helper: json_output(ocean_data_dict)
        Helper-->>SR: ocean_data_dict
        SR-->>CLI: ocean_data_dict
    end

Sequence diagram for send_user_email surf report email flow

sequenceDiagram
    actor Scheduler
    participant EmailMod as send_email
    participant Env as EmailSettings
    participant Curl as curl_command
    participant SMTP as SMTP_server

    Scheduler->>EmailMod: send_user_email()
    EmailMod->>Env: load EmailSettings
    Env-->>EmailMod: EMAIL, EMAIL_RECEIVER, SUBJECT, COMMAND, SMTP settings

    EmailMod->>Curl: subprocess.run(["curl", COMMAND])
    alt curl_success
        Curl-->>EmailMod: stdout surf_report
        EmailMod->>EmailMod: body = stdout
    else curl_failure
        Curl-->>EmailMod: CalledProcessError
        EmailMod->>EmailMod: log error, body = failure message
    end

    EmailMod->>SMTP: connect, starttls, login
    EmailMod->>SMTP: sendmail(EMAIL, EMAIL_RECEIVER, message)
    SMTP-->>EmailMod: ok
    EmailMod-->>Scheduler: return

Class diagram for SurfReport orchestration and database integration

classDiagram
    class SurfReport {
        +SurfReport()
        +run(lat, long, args)
        +_init_db()
        +_save_report(ocean_data_dict)
        +_render_output(ocean_data_dict, arguments)
        -gpt_prompt
        -gpt_info
        -db_handler
    }

    class GPTSettings {
        +GPT_PROMPT
        +API_KEY
        +GPT_MODEL
    }

    class DatabaseSettings {
        +DB_URI
    }

    class SurfReportDatabaseOps {
        +SurfReportDatabaseOps()
        +insert_report(report_document)
        -db
        -collection
    }

    class Database {
        +Database()
        +connect(db_name)
        +disconnect()
        -db_uri
        -client
        -db
    }

    class db_manager {
    }

    SurfReport --> GPTSettings : loads
    SurfReport --> SurfReportDatabaseOps : optionally_creates
    SurfReport --> DatabaseSettings : checks_DB_URI

    SurfReportDatabaseOps --> db_manager : uses
    db_manager --> Database : instance_of

    DatabaseSettings <|-- GPTSettings

    class EmailSettings {
        +EMAIL
        +EMAIL_PW
        +EMAIL_RECEIVER
        +COMMAND
        +SUBJECT
    }

    EmailSettings --> ServerSettings

    class ServerSettings {
        +HOST
        +PORT
        +DEBUG
        +IP_ADDRESS
    }

    class CommonSettings {
        +model_dump()
    }

    CommonSettings <|-- ServerSettings
    CommonSettings <|-- GPTSettings
    CommonSettings <|-- DatabaseSettings
    CommonSettings <|-- EmailSettings

File-Level Changes

Refactor CLI entrypoint into a SurfReport class with injectable settings and DB handler, plus a thin module-level run() shim.
  • Introduce SurfReport class encapsulating GPT settings, DB initialization, run pipeline, DB persistence, and output rendering.
  • Replace global env/gpt/db_uri state with instance attributes and a private _init_db method with failure logging.
  • Change main run() to delegate to SurfReport().run(), defaulting lat/long to resolved location when not provided.
  • Add structured logging setup in main guarded by pragma: no cover.
src/cli.py
tests/test_cli.py
Make helper configuration more Pythonic and logging-friendly: booleans instead of ints, centralized forecast settings, and logging instead of prints for validation errors.
  • Change DEFAULT_ARGUMENTS from 0/1 ints to booleans and update set_output_values mappings and call sites accordingly.
  • Replace print-based validation/warning paths in extract_arg, extract_decimal, get_forecast_days, and art.print_wave with module-level loggers and warning messages.
  • Introduce MAX_FORECAST_DAYS constant and consolidate forecast-days validation on it.
  • Update print* functions to test booleans directly rather than int() == 1, and simplify json_output to always return the original dict and optionally print JSON.
src/helper.py
src/art.py
tests/test_helper.py
tests/test_art.py
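
A sketch of the consolidated forecast-days validation described above (the constant's value and the exact messages are assumptions; the real code lives in src/helper.py):

```python
import logging

logger = logging.getLogger(__name__)

MAX_FORECAST_DAYS = 7  # assumed ceiling; the real constant is defined in src/helper.py

def get_forecast_days(args: list[str]) -> int:
    """Parse forecast=N from CLI args; warn and fall back to 0 when invalid."""
    for arg in args:
        if not arg.startswith("forecast="):
            continue
        try:
            days = int(arg.split("=", 1)[1])
        except ValueError:
            logger.warning("Invalid forecast value: %r", arg)
            return 0
        if 0 <= days <= MAX_FORECAST_DAYS:
            return days
        logger.warning(
            "Forecast days must be between 0 and %d, got %d", MAX_FORECAST_DAYS, days
        )
        return 0
    return 0
```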
Centralize Open-Meteo client creation and clean up API/data-gathering functions, including improved error/fallback behavior.
  • Add _create_openmeteo_client helper and use it in all Open-Meteo API wrappers (UV, ocean, rain, forecast, hourly, etc.) to reuse caching/retry configuration.
  • Simplify get_rain signature and return structure, and adjust gather_data to use new hourly and rain functions while avoiding repeated history calls.
  • Rename seperate_args_and_get_location to separate_args_and_get_location, while keeping a backwards compatible alias.
  • Add logging for invalid geocoded locations and extend tests to cover default-location behavior and ValueError fallbacks in multiple API wrappers.
src/api.py
tests/test_api.py
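
The backwards-compatible alias is a one-liner; a self-contained sketch (the function body here is a placeholder, not the real resolver):

```python
def separate_args_and_get_location(args):
    """Correctly-spelled location resolver (body elided in this sketch)."""
    return {"lat": 0.0, "long": 0.0, "city": "placeholder"}

# Backwards-compatible alias preserving the old misspelled public name,
# so external callers importing the typo keep working.
seperate_args_and_get_location = separate_args_and_get_location
```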
Harden GPT integration with error handling and add unit tests around GPT wrappers and helper behavior.
  • Wrap simple_gpt and openai_gpt bodies in try/except, log failures via a module logger, and return a stable fallback message instead of raising.
  • Extend helper.print_outputs and print_gpt usage to work with new GPT behavior, and add tests ensuring OpenAI is called when API key is long enough.
  • Add new tests for both GPT backends to verify happy-path content extraction and fallback behavior when client construction or calls raise exceptions.
src/gpt.py
tests/test_gpt.py
tests/test_helper.py
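
The wrap-and-fall-back pattern can be sketched generically (the function name and fallback wording are assumptions; the real wrappers are simple_gpt and openai_gpt in src/gpt.py):

```python
import logging

logger = logging.getLogger(__name__)

FALLBACK_MESSAGE = "Surf report unavailable."  # assumed wording of the stable fallback

def safe_gpt_call(call_model, prompt: str) -> str:
    """Run a GPT backend call, returning a fixed message instead of raising."""
    try:
        return call_model(prompt)
    except Exception:
        logger.exception("GPT call failed; returning fallback message")
        return FALLBACK_MESSAGE
```

Returning a stable string keeps the CLI output path working even when client construction or the API call itself blows up.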
Improve Flask server behavior and testability, including logging and simplified subprocess invocation.
  • Refactor server routes to have concise docstrings and introduce a module-level logger for error reporting.
  • Simplify query-string parsing/args building in the root route and replace async subprocess wrapper with direct subprocess.run using Path objects.
  • On subprocess failure, log the error via logger.error and rely on Flask to return a 500, with tests asserting 500 on CalledProcessError and 200 on healthy /home and /script.js routes using mocks.
src/server.py
tests/test_server.py
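
A sketch of the simplified synchronous invocation and its failure path (the helper name is hypothetical; per the review comments, the real route code passes [sys.executable, Path("src") / "cli.py", args]):

```python
import logging
import subprocess
import sys

logger = logging.getLogger(__name__)

def run_script(cmd: list[str]) -> str:
    """Run a command synchronously; log and re-raise on a non-zero exit."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    except subprocess.CalledProcessError as err:
        logger.error("Subprocess failed (exit %s): %s", err.returncode, err.stderr)
        raise  # Flask's default error handling turns this into a 500
    return result.stdout
```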
Make email sending more robust and testable by localizing configuration, handling curl failures, and logging instead of printing.
  • Move EmailSettings loading and MIME message construction inside send_user_email to reduce global state.
  • Wrap the curl subprocess call in try/except, logging errors and falling back to a generic failure body on CalledProcessError.
  • Use logging for success/failure messages instead of prints, and add tests to validate behavior for both successful curl calls and curl failures.
src/send_email.py
tests/test_send_email.py
Improve database connection and operations logging, and add unit tests for database layers.
  • Introduce module-level loggers for db.connection and db.operations, replacing direct logging.* calls with logger.*.
  • Have Database.connect log and re-raise connection failures, and ensure disconnect logs closing and clears client/db state.
  • Ensure SurfReportDatabaseOps logs insert successes and errors, and add tests that mock MongoClient and db_manager to exercise successful insert, failures, and connect/disconnect semantics.
src/db/connection.py
src/db/operations.py
tests/test_db.py
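
One possible shape for the connect/disconnect lifecycle (the injectable client_factory is an illustration device — the real module presumably constructs pymongo.MongoClient directly):

```python
import logging

logger = logging.getLogger("db.connection")

class Database:
    """Thin connection wrapper; client_factory stands in for pymongo.MongoClient."""

    def __init__(self, db_uri: str, client_factory):
        self.db_uri = db_uri
        self._client_factory = client_factory
        self.client = None
        self.db = None

    def connect(self, db_name: str):
        try:
            self.client = self._client_factory(self.db_uri)
        except Exception:
            logger.exception("Failed to connect to database %s", db_name)
            raise  # re-raise so callers can decide how to degrade
        self.db = self.client[db_name]
        return self.db

    def disconnect(self) -> None:
        if self.client is not None:
            logger.info("Closing database connection")
            self.client.close()
        self.client = None  # clear state so a stale handle can't be reused
        self.db = None
```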
Streamline Streamlit helper utilities and cover them with tests.
  • Simplify extra_args and get_report to build argument strings deterministically and rely on cli.run’s new return semantics.
  • Add tests for extra_args, get_report (including extra args propagation), map_data returning a Folium map, and graph_data producing the expected pandas DataFrame shapes for different graph types.
src/streamlit_helper.py
tests/test_streamlit_helper.py
Testing and tooling improvements: expand test coverage and configure coverage/pytest behavior.
  • Add multiple new test modules to exercise CLI, GPT, API error paths, DB operations, server, email, art, helper, and streamlit helpers, replacing older commented-out QA-style tests.
  • Update existing tests to use caplog/capsys and pytest-mock rather than manual stdout patches where appropriate.
  • Configure coverage in pyproject.toml to focus on src/, omit dev_streamlit.py, and ignore main-guard lines via exclude_lines, and keep pytest options and ruff settings organized.
pyproject.toml
tests/test_cli.py
tests/test_helper.py
tests/test_api.py
tests/test_gpt.py
tests/test_art.py
tests/test_server.py
tests/test_db.py
tests/test_streamlit_helper.py
tests/test_send_email.py

-    json_out = json.dumps(data_dict, indent=4)
     if print_output:
-        print(json_out)
+        print(json.dumps(data_dict, indent=4))

Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information (High)

This expression logs sensitive data (private) as clear text.

Copilot Autofix (AI), 19 days ago

In general, to fix this kind of issue you either (a) avoid emitting sensitive fields at all, (b) mask or generalize them before emitting, or (c) make the emission an explicit, opt‑in choice separated from routine logging. Here we want to keep JSON output useful but not unconditionally dump precise coordinates and city information, which CodeQL has flagged as sensitive.

The best minimal change without altering external behavior for current callers is to have json_output support optional redaction of location-identifying fields and then use that from the CLI JSON path. Concretely:

  • Extend json_output’s signature in src/helper.py to accept a new boolean parameter, e.g. redact_location=False.
  • When redact_location is True, create a shallow copy of data_dict and remove or mask the fields that can be derived from user location: "Lat", "Long", and "Location" (keys used in ocean_data_dict).
  • Serialize and print the possibly-redacted copy instead of the original data_dict. Always return the original data_dict to avoid changing programmatic behavior.
  • In SurfReport._render_output in src/cli.py, when calling helper.json_output, pass redact_location=True so that the default CLI JSON output does not include precise coordinates or city name.
  • This keeps arguments, gathering, and printing behavior unchanged for all other paths, and does not introduce new dependencies (we only use dict.copy() and dict.pop from the standard library).

This single change addresses all alert variants pointing to that print of json.dumps(data_dict, ...), because any ocean_data_dict originating from user location will now have sensitive fields removed before being printed in JSON mode.


Suggested changeset 2
src/helper.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/src/helper.py b/src/helper.py
--- a/src/helper.py
+++ b/src/helper.py
@@ -284,13 +284,23 @@
     return [round(num, decimal) for num in round_list]
 
 
-def json_output(data_dict, print_output=True):
+def json_output(data_dict, print_output=True, redact_location=False):
     """
     Serializes data_dict to JSON. Prints to stdout if print_output is True.
     Returns the original dict for programmatic use.
+
+    When redact_location is True, location-identifying fields such as
+    coordinates and city name are removed from the printed JSON to
+    avoid logging potentially sensitive data.
     """
+    output_dict = data_dict
+    if redact_location and isinstance(data_dict, dict):
+        output_dict = data_dict.copy()
+        for key in ("Lat", "Long", "Location"):
+            output_dict.pop(key, None)
+
     if print_output:
-        print(json.dumps(data_dict, indent=4))
+        print(json.dumps(output_dict, indent=4))
     return data_dict
 
 
EOF
src/cli.py
Outside changed files

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/src/cli.py b/src/cli.py
--- a/src/cli.py
+++ b/src/cli.py
@@ -72,7 +72,7 @@
                 ocean_data_dict, arguments, self.gpt_prompt, self.gpt_info
             )
             return ocean_data_dict, response
-        helper.json_output(ocean_data_dict)
+        helper.json_output(ocean_data_dict, redact_location=True)
         return ocean_data_dict
 
 
EOF
Copilot is powered by AI and may make mistakes. Always verify output.
Contributor

@sourcery-ai sourcery-ai bot left a comment

Hey - I've found 1 security issue, 4 other issues, and left some high level feedback:

Security issues:

  • Detected subprocess function 'run' without a static string. If this data can be controlled by a malicious actor, it may be an instance of command injection. Audit the use of this call to ensure it is not controllable by an external resource. You may consider using 'shlex.escape()'. (link)

General comments:

  • The change to get_rain(lat, long) (dropping the decimal parameter) alters the public function signature; if this is used outside gather_data, consider keeping a deprecated wrapper or adding *args, **kwargs for backward compatibility to avoid breaking external callers.
  • In server.default_route, re-raising CalledProcessError now leads to a raw 500 response; you may want to catch the exception and return a user-friendly error payload (and appropriate status) instead of propagating the traceback to the client.
  • Now that DEFAULT_ARGUMENTS uses booleans, it might be helpful to assert or validate argument types at the boundary (e.g., in arguments_dictionary or set_output_values) to avoid subtle bugs if other code still passes 0/1 or non-bool values.
## Individual Comments

### Comment 1
<location path="pyproject.toml" line_range="59-61" />
<code_context>
+
+[tool.coverage.report]
+omit = ["src/dev_streamlit.py"]
+exclude_lines = [
+    "pragma: no cover",
+    "if __name__ == .__main__.:",
+]
+
</code_context>
<issue_to_address>
**nitpick (testing):** The coverage exclude_lines pattern for __main__ guards looks incorrect.

As written, this string won’t match `if __name__ == "__main__":`, so those blocks won’t be excluded from coverage. Consider using the exact string literal (`"if __name__ == \"__main__\":"`) or a correctly escaped regex pattern, depending on how coverage interprets `exclude_lines`.
</issue_to_address>

### Comment 2
<location path="tests/test_helper.py" line_range="238-244" />
<code_context>
+    assert "Forecast days must be between" in caplog.text
+
+
+def test_get_forecast_days_negative_logs_warning(caplog):
+    """get_forecast_days returns 0 and warns when value is negative."""
+    with caplog.at_level(logging.WARNING, logger="src.helper"):
+        result = helper.get_forecast_days(["forecast=-1"])
+    assert result == 0
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Also assert the warning message for negative forecast days to fully verify the logging behavior.

In `test_get_forecast_days_negative_logs_warning`, mirror the pattern from `test_get_forecast_days_out_of_range_logs_warning` by asserting that a warning substring like `"Forecast days must be between"` appears in `caplog.text`. This keeps the tests consistent and better guards against regressions in the logged message.

```suggestion
def test_get_forecast_days_negative_logs_warning(caplog):
    """get_forecast_days returns 0 and warns when value is negative."""
    with caplog.at_level(logging.WARNING, logger="src.helper"):
        result = helper.get_forecast_days(["forecast=-1"])
    assert result == 0
    assert "Forecast days must be between" in caplog.text


```
</issue_to_address>

### Comment 3
<location path="tests/test_server.py" line_range="17-23" />
<code_context>

-#         response_root = test_client.get("/")
-#         assert response_root.status_code == OK
+def test_serve_index_returns_200(monkeypatch):
+    """GET /home renders the index template and returns 200."""
+    app = _make_app()
+    with patch("src.server.render_template", return_value="<html>home</html>"):
+        resp = app.test_client().get("/home")
+    assert resp.status_code == HTTPStatus.OK
+    assert b"<html>home</html>" in resp.data
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Add a happy-path test for the root route (`/`) when the subprocess succeeds.

You already cover `/home`, `/script.js`, and the error case for `/` when `CalledProcessError` is raised. Please also add a test where `subprocess.run` succeeds and returns a known `stdout`, and assert that `GET /` responds with 200 and includes that content. This will ensure the main success path for `/` is tested as well.

Suggested implementation:

```python
from src.settings import ServerSettings


def test_serve_root_returns_200_on_success(monkeypatch):
    """GET / executes the script successfully and returns its stdout with 200."""
    app = _make_app()

    completed = subprocess.CompletedProcess(
        args=["dummy-script"],
        returncode=0,
        stdout="hello from script",
    )

    # Patch the subprocess call used by the root route to simulate a successful run
    with patch("src.server.subprocess.run", return_value=completed):
        resp = app.test_client().get("/")

    assert resp.status_code == HTTPStatus.OK
    assert b"hello from script" in resp.data

```

This assumes:
1. The root route (`/`) uses `subprocess.run` via `import subprocess` inside `src.server`.  
   *If `src.server` instead does `from subprocess import run`, change the patch target to `"src.server.run"` accordingly.*
2. The root handler includes `stdout` directly (or via a template) in the response body; if it wraps or prefixes the text, adjust the asserted substring (`b"hello from script"`) to match the actual rendered content.
</issue_to_address>

### Comment 4
<location path="tests/test_send_email.py" line_range="50-59" />
<code_context>
+    mock_smtp.sendmail.assert_called_once()
+
+
+def test_send_email_curl_failure_uses_fallback_body(mocker):
+    """send_user_email falls back to an error message when curl fails."""
+    _patch_env(mocker)
+
+    mocker.patch(
+        "subprocess.run",
+        side_effect=subprocess.CalledProcessError(1, "curl", stderr="error"),
+    )
+
+    mock_smtp = MagicMock()
+    mocker.patch("smtplib.SMTP", return_value=mock_smtp)
+    mock_smtp.__enter__ = lambda s: s
+    mock_smtp.__exit__ = MagicMock(return_value=False)
+
+    # Should not raise; the fallback body is used instead
+    send_user_email()
+
+    mock_smtp.sendmail.assert_called_once()
+    # The email body should contain the fallback message
+    call_args = mock_smtp.sendmail.call_args[0]
+    assert "Failed to fetch surf report." in call_args[2]
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a test for SMTP failures to cover that error path as well.

You already cover the happy path and curl failure fallback. To fully exercise `send_user_email`’s error handling, add a test where the `SMTP` mock raises (e.g., on `server.login` or `sendmail`). This will verify the behavior when SMTP itself fails (whether you choose to propagate or log the error).
</issue_to_address>

### Comment 5
<location path="src/server.py" line_range="62-67" />
<code_context>
            result = subprocess.run(
                [sys.executable, Path("src") / "cli.py", args],
                capture_output=True,
                text=True,
                check=True,
            )
</code_context>
<issue_to_address>
**security (python.lang.security.audit.dangerous-subprocess-use-audit):** Detected subprocess function 'run' without a static string. If this data can be controlled by a malicious actor, it may be an instance of command injection. Audit the use of this call to ensure it is not controllable by an external resource. You may consider using 'shlex.escape()'.

*Source: opengrep*
</issue_to_address>


@ryansurf ryansurf merged commit a014642 into main Mar 2, 2026
11 of 12 checks passed
@codecov

codecov bot commented Mar 2, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

Files with missing lines Coverage Δ
src/api.py 100.00% <100.00%> (+4.89%) ⬆️
src/art.py 100.00% <100.00%> (+11.11%) ⬆️
src/cli.py 100.00% <100.00%> (+100.00%) ⬆️
src/db/connection.py 100.00% <100.00%> (+100.00%) ⬆️
src/db/operations.py 100.00% <100.00%> (+100.00%) ⬆️
src/gpt.py 100.00% <100.00%> (+30.00%) ⬆️
src/helper.py 100.00% <100.00%> (+40.90%) ⬆️
src/send_email.py 100.00% <100.00%> (+100.00%) ⬆️
src/server.py 100.00% <100.00%> (+23.40%) ⬆️
src/settings.py 100.00% <100.00%> (ø)
... and 1 more

... and 1 file with indirect coverage changes

