docs: add comprehensive status badges to README #784
alexsuperpremium-collab wants to merge 3 commits into SolFoundry:main
Conversation
Force-pushed fb305cb to f920412
📝 Walkthrough
Added a centered badge strip to README.md including CI build status, contributor count, open bounties, total paid, license, and social metrics. Added a new FastAPI endpoint GET /api/stats/shields/payouts returning a Shields.io-compatible JSON badge payload and a helper format_payout_amount(total_paid: int) -> str to format amounts (commas, k, M). Updated tests to cover the new badge endpoint and loosened some /api/stats assertions. Added scripts/setup.sh and tests/scripts/setup_test.bats to provision and validate local development setup steps.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
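To make the walkthrough concrete, here is a minimal sketch of the endpoint/helper pairing it describes. The function name and the comma/k/M behavior come from the walkthrough; the badge label, color, and exact rounding rules are assumptions, not the PR's actual values (the FastAPI routing is omitted so the sketch stays self-contained):

```python
def format_payout_amount(total_paid: int) -> str:
    """Format a raw amount with commas, or k/M suffixes (sketch)."""
    if total_paid >= 1_000_000:
        return f"{total_paid / 1_000_000:.1f}M".replace(".0M", "M")
    if total_paid >= 1_000:
        return f"{total_paid / 1_000:.1f}k".replace(".0k", "k")
    return f"{total_paid:,}"

def shields_payouts_badge(total_paid: int) -> dict:
    """Build a Shields.io endpoint-badge payload (label/color are placeholders)."""
    return {
        "schemaVersion": 1,      # required by the Shields endpoint-badge schema
        "label": "total paid",   # hypothetical label text
        "message": format_payout_amount(total_paid),
        "color": "brightgreen",  # hypothetical color
    }
```

In the real PR this dict would be returned from the GET /api/stats/shields/payouts route so Shields.io can render it as a badge.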
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/stats.py`:
- Around line 165-166: Wrap the call to _get_cached_stats() inside a try/except
in the stats endpoint so exceptions from the underlying stores don't produce a
500; on exception set total_paid (and any dependent variables) to a safe
fallback like "N/A" or 0 and proceed to generate the badge/response instead of
re-raising. Specifically, surround data = _get_cached_stats() and total_paid =
data.get("total_fndry_paid", 0) with error handling that logs the exception and
assigns the fallback to total_paid before continuing to the badge
generation/return logic.
In `@backend/tests/test_stats.py`:
- Around line 184-206: Add edge-case tests for the shields payouts endpoint:
create new test functions (e.g., test_shields_payouts_empty,
test_shields_payouts_small_amounts, test_shields_payouts_millions,
test_shields_payouts_boundaries) that call
client.get("/api/stats/shields/payouts") and assert the returned data["message"]
formatting for 0, values <1000 (comma/no suffix), exactly 1_000, 250_000
(thousands), 999_999, and ≥1_000_000 (X.XM). Use the existing helpers/imports
(_bounty_store and BountyDB) to insert example bounties (or leave store empty
for zero case) and verify schemaVersion/label/color remain unchanged while only
message changes.
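The edge cases above can be captured in a table-driven check. This sketch re-implements the k/M formatting as the walkthrough describes it rather than importing the real helper; note that 999,999 is deliberately excluded from the table because the naive k-branch formats it as "1000k", the boundary issue flagged elsewhere in this review:

```python
def format_payout_amount(total_paid: int) -> str:
    """Naive k/M formatter mirroring the walkthrough description (sketch)."""
    if total_paid >= 1_000_000:
        return f"{total_paid / 1_000_000:.1f}M".replace(".0M", "M")
    if total_paid >= 1_000:
        return f"{total_paid / 1_000:.1f}k".replace(".0k", "k")
    return f"{total_paid:,}"

# Expected badge "message" values for the suggested edge cases
EDGE_CASES = {
    0: "0",
    999: "999",
    1_000: "1k",
    250_000: "250k",
    1_000_000: "1M",
    2_500_000: "2.5M",
}
for amount, expected in EDGE_CASES.items():
    assert format_payout_amount(amount) == expected, (amount, expected)
```

In the real tests each amount would be inserted via _bounty_store/BountyDB and asserted through client.get("/api/stats/shields/payouts"), with schemaVersion, label, and color held constant.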
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 6509de9a-564e-451b-b089-0be22a70a02a
📒 Files selected for processing (3)
README.md, backend/app/api/stats.py, backend/tests/test_stats.py
- Wrapped _get_cached_stats() in try/except to gracefully handle store exceptions, preventing 500 errors and badge breakage.
- Extracted format_payout_amount() into a helper function for cleaner logic.
- Updated thousands formatting to keep one decimal place (e.g., 1.5k instead of rounding to 2k) to match millions logic.
- Expanded test coverage in test_stats.py to verify edge cases: 0, <1000, 1500, exact millions, fractions of millions, and error handling fallback states.
…iew v3
- Updated Bats tests to mock external dependencies reliably using PATH overriding for docker-compose and SKIP_DEP_CHECKS for Node/Python/Rust.
- Asserted $status code on all Bats test runs.
- Resolved issue where missing docker-compose.yml printed a warning but still reported full setup success (SETUP_FAILED=1 is now correctly set).
- Replaced GNU `sort -V` version_gt logic with a robust Python semver parser and POSIX-compliant awk fallback for macOS compatibility.
- Added version checks for Rust (1.76+) and Anchor (0.30+).
Actionable comments posted: 11
♻️ Duplicate comments (1)
backend/tests/test_stats.py (1)
90-127: ⚠️ Potential issue | 🟡 Minor
Boundary coverage is still incomplete for payout formatter thresholds.
Line 90 through Line 127 improves coverage, but exact threshold cases (1000, 999999) are still not asserted. That leaves the unit-transition edge unguarded and can miss regressions like near-million rollover formatting.
As per coding guidelines: backend/**: Python FastAPI backend. Analyze thoroughly: Error handling and edge case coverage; API contract consistency with spec.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_stats.py` around lines 90 - 127, Add explicit boundary tests for the payout formatter at the 1000 and 999999 thresholds by extending the existing test_shields_payouts_thousands and test_shields_payouts_millions flows: create BountyDB entries with reward_amount set to 1000 and 999999 respectively (using the same _bounty_store manipulation and clearing of stats_module._cache), call client.get("/api/stats/shields/payouts") and assert the response.json()["message"] matches the exact expected formatted string for those edge cases; ensure you reference the same helpers (_bounty_store, BountyDB, stats_module._cache) and keep test structure consistent with the surrounding assertions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/stats.py`:
- Around line 143-146: The k-range can round up to "1000k" for values like
999,999; update the formatting logic that uses total_paid and the two branches
returning f"{total_paid / 1000000:.1f}M" and f"{total_paid / 1000:.1f}k" so
rounded k-values of 1000 are normalized to the M format: either (a) compute the
k-value first, round it and if rounded_k == 1000 return "1M", or (b) check if
total_paid / 1000 >= 1000 before emitting the k-branch and fall through to the
M-branch; apply the same .replace(".0", "") behavior so "1000k" never appears.
In `@scripts/setup_test.bats`:
- Around line 40-44: Replace fragile inline greps that consume $output (e.g.,
occurrences of echo "$output" | grep "Created Root .env" and similar uses on the
same file) with assertions that capture exit status and provide clear failure
messages: either use the shell pattern test [[ "$output" == *"Created Root .env
from .env.example"* ]] (and analogous checks for backend/.env messages) or run
grep -q "pattern" && assert_success/handle failure with a descriptive message;
update all instances that pipe $output to grep (the echo "$output" | grep
occurrences) so failures produce explicit assertion errors instead of silent
grep exit codes.
- Around line 30-45: The test "creates .env from .env.example correctly" in
scripts/setup_test.bats only checks existence and logs; update it to also assert
that the created .env contents match their corresponding .env.example sources,
that the original .env.example files still exist after running run ./setup.sh,
and add a sub-case where a pre-existing .env file is present (create .env before
running setup.sh) to confirm setup.sh skips overwriting (per the skip behavior
in setup.sh); reference the test name ("creates .env from .env.example
correctly") and the script under test (setup.sh) to locate where to add content
assertions, existence checks for .env.example, and the pre-existing .env
scenario.
- Around line 71-82: Update the "skips docker compose if docker-compose.yml is
missing" test to (1) explicitly assert the environment variable SETUP_FAILED is
set to "1" after running ./setup.sh to verify the failure state transition from
setup.sh and (2) avoid creating sdk/package.json so the SDK install step (and
any npm behavior) is not triggered during this test; reference the test name and
the SETUP_FAILED variable in your changes and ensure the test still checks for
the existing warning output strings and exit status.
- Around line 26-28: The teardown function currently runs rm -rf "$TEST_DIR"
which can be dangerous or a no-op if TEST_DIR is unset; update the teardown() to
guard against empty or unset TEST_DIR by checking that TEST_DIR is non-empty and
points to a directory (e.g., [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]) before
calling rm -rf, and ensure the check is used in the teardown() function so
teardown only removes a validated path.
- Around line 3-24: The test setup() currently mocks only the docker-compose
binary causing the docker compose (v2) fallback in setup.sh to be untested and
tests to depend on host docker when SKIP_DEP_CHECKS isn't set; add a mock
"docker" executable into the test bin (similar to the existing docker-compose
mock) that accepts "compose" and common subcommands and exits 0, ensure it is
made executable and PATH includes TEST_DIR/bin so setup.sh's docker compose
fallback and the dependency checks behave deterministically.
In `@scripts/setup.sh`:
- Around line 148-157: The conditional logic around the backend .env check is
inconsistent: change the elif test to also verify the backend directory exists
so the success message only runs when the backend directory contains a .env;
specifically update the elif condition from checking only '[ -f backend/.env ]'
to check both '[ -d "backend" ]' and '[ -f backend/.env ]' so the block tied to
the success message only executes when the backend directory is present and the
.env file exists (adjust the conditional that controls the print_success
"backend/.env already exists." branch).
- Around line 165-179: The cd usage in the setup steps (the block starting with
cd backend and ending with cd .. around the virtualenv/install logic) can leave
the script in the wrong directory if an inner command fails inside an if (which
can suppress set -e) or if the script was invoked from another cwd; wrap the
backend work in a safe context by either running it in a subshell (e.g., ( cd
backend && ... )) or using pushd backend / popd to guarantee returning to the
original directory, and ensure any failure from run_cmd, venv/bin/pip, or the if
branches still executes popd or exits the subshell; apply the same pattern to
the other cd blocks referenced (the blocks around lines that include run_cmd,
venv, and pip install) so directory state is always restored.
- Around line 28-31: The current print_error function unconditionally calls exit
1, causing backend pip install failures (which call print_error) to terminate
the script instead of marking SETUP_FAILED=1 and continuing like the SDK/npm
path; update error handling so print_error (or a new nonfatal helper, e.g.,
mark_setup_failed) sets SETUP_FAILED=1 and prints the red error message without
exiting, and only use a separate fatal_exit function that prints and exits for
truly unrecoverable errors; then replace the backend pip install and frontend
install calls that currently call print_error to call the nonfatal handler (or
mark_setup_failed) so failures are recorded but the script continues, while
keeping the SDK/npm path behavior unchanged.
- Around line 223-247: The file-detection conditional in the startup block (used
by the run_cmd calls that start services and that set
DOCKER_SUCCESS/SETUP_FAILED and call print_success/print_warning) misses the
compose.yaml filename; update the if test that currently checks
"docker-compose.yml" || "docker-compose.yaml" || "compose.yml" to also check for
"compose.yaml" so the branch that attempts docker-compose/dockerd compose up -d
db redis runs when a compose.yaml file exists.
---
Duplicate comments:
In `@backend/tests/test_stats.py`:
- Around line 90-127: Add explicit boundary tests for the payout formatter at
the 1000 and 999999 thresholds by extending the existing
test_shields_payouts_thousands and test_shields_payouts_millions flows: create
BountyDB entries with reward_amount set to 1000 and 999999 respectively (using
the same _bounty_store manipulation and clearing of stats_module._cache), call
client.get("/api/stats/shields/payouts") and assert the
response.json()["message"] matches the exact expected formatted string for those
edge cases; ensure you reference the same helpers (_bounty_store, BountyDB,
stats_module._cache) and keep test structure consistent with the surrounding
assertions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 6e098925-6a3d-4fda-84b5-f5c2b92da40f
📒 Files selected for processing (4)
backend/app/api/stats.py, backend/tests/test_stats.py, scripts/setup.sh, scripts/setup_test.bats
    if total_paid >= 1000000:
        return f"{total_paid / 1000000:.1f}M".replace(".0M", "M")
    elif total_paid >= 1000:
        return f"{total_paid / 1000:.1f}k".replace(".0k", "k")
k-range rounding can produce invalid threshold output.
At Line 146, values just below one million can round to "1000k" (for example 999999), which is a unit-boundary inconsistency in the badge message and can misrepresent the amount format.
As per coding guidelines backend/**: Python FastAPI backend. Analyze thoroughly: API contract consistency with spec; Error handling and edge case coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/stats.py` around lines 143 - 146, The k-range can round up to
"1000k" for values like 999,999; update the formatting logic that uses
total_paid and the two branches returning f"{total_paid / 1000000:.1f}M" and
f"{total_paid / 1000:.1f}k" so rounded k-values of 1000 are normalized to the M
format: either (a) compute the k-value first, round it and if rounded_k == 1000
return "1M", or (b) check if total_paid / 1000 >= 1000 before emitting the
k-branch and fall through to the M-branch; apply the same .replace(".0", "")
behavior so "1000k" never appears.
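One way to implement option (b) from the prompt above, as a sketch (the function name matches the PR's helper; the boundary guard itself is my own and has not been checked against the actual fix):

```python
def format_payout_amount(total_paid: int) -> str:
    """k/M formatter that never emits '1000k' at the unit boundary (sketch)."""
    # Promote to the M branch whenever the rounded k-value would reach 1000,
    # so 999_999 renders as "1M" instead of "1000k".
    if total_paid >= 1_000_000 or round(total_paid / 1_000, 1) >= 1000:
        return f"{total_paid / 1_000_000:.1f}M".replace(".0M", "M")
    if total_paid >= 1_000:
        return f"{total_paid / 1_000:.1f}k".replace(".0k", "k")
    return f"{total_paid:,}"
```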
setup() {
    export TEST_DIR="$(mktemp -d)"

    # Locate the script relative to this test file
    local BATS_SCRIPT_DIR="$(cd "$(dirname "$BATS_TEST_FILENAME")" && pwd)"
    cp "$BATS_SCRIPT_DIR/setup.sh" "$TEST_DIR/setup.sh"
    chmod +x "$TEST_DIR/setup.sh"

    # Export mock control flags
    export DRY_RUN=1

    cd "$TEST_DIR"

    # Mock external tools so tests don't depend on host system
    mkdir -p bin
    cat << 'MOCK' > bin/docker-compose
#!/bin/bash
exit 0
MOCK
    chmod +x bin/docker-compose
    export PATH="$TEST_DIR/bin:$PATH"
}
🧹 Nitpick | 🔵 Trivial
Test setup lacks mock for docker command, causing potential false positives.
The setup function mocks docker-compose but not the docker binary itself. At setup.sh line 232, the script falls back to docker compose (v2 syntax) if docker-compose isn't found. Since the test mocks docker-compose to always succeed, it never exercises the docker compose fallback path.
Additionally, line 125-129 of setup.sh checks for Docker availability during dependency checks. With SKIP_DEP_CHECKS=1, this is bypassed, but if tests ever run without that flag, they'd depend on host Docker installation.
Proposed enhancement to mock docker as well
  cat << 'MOCK' > bin/docker-compose
  #!/bin/bash
  exit 0
  MOCK
  chmod +x bin/docker-compose
+ cat << 'MOCK' > bin/docker
+#!/bin/bash
+# Mock docker command - support 'compose' subcommand
+if [ "$1" = "compose" ]; then
+  exit 0
+fi
+exit 0
+MOCK
+ chmod +x bin/docker
  export PATH="$TEST_DIR/bin:$PATH"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup_test.bats` around lines 3 - 24, The test setup() currently
mocks only the docker-compose binary causing the docker compose (v2) fallback in
setup.sh to be untested and tests to depend on host docker when SKIP_DEP_CHECKS
isn't set; add a mock "docker" executable into the test bin (similar to the
existing docker-compose mock) that accepts "compose" and common subcommands and
exits 0, ensure it is made executable and PATH includes TEST_DIR/bin so
setup.sh's docker compose fallback and the dependency checks behave
deterministically.
teardown() {
    rm -rf "$TEST_DIR"
}
🧹 Nitpick | 🔵 Trivial
Teardown could fail silently if TEST_DIR is unset.
If a test fails before TEST_DIR is set in setup(), the teardown's rm -rf "$TEST_DIR" would expand to rm -rf "" which is a no-op on most systems but could be dangerous with certain shell configurations. Adding a guard would be safer.
Proposed defensive teardown
teardown() {
- rm -rf "$TEST_DIR"
+ [ -n "${TEST_DIR:-}" ] && rm -rf "$TEST_DIR"
}📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup_test.bats` around lines 26 - 28, The teardown function
currently runs rm -rf "$TEST_DIR" which can be dangerous or a no-op if TEST_DIR
is unset; update the teardown() to guard against empty or unset TEST_DIR by
checking that TEST_DIR is non-empty and points to a directory (e.g., [ -n
"$TEST_DIR" ] && [ -d "$TEST_DIR" ]) before calling rm -rf, and ensure the check
is used in the teardown() function so teardown only removes a validated path.
@test "creates .env from .env.example correctly" {
    # Skip deps check so it doesn't fail on missing node/python locally
    export SKIP_DEP_CHECKS=1

    touch .env.example
    mkdir -p backend
    touch backend/.env.example

    run ./setup.sh

    [ "$status" -eq 0 ]
    [ -f .env ]
    [ -f backend/.env ]
    echo "$output" | grep "Created Root .env from .env.example"
    echo "$output" | grep "Created backend/.env from backend/.env.example"
}
🧹 Nitpick | 🔵 Trivial
Test doesn't verify .env file content or that source .env.example is preserved.
The test on lines 30-45 verifies that .env files are created but doesn't check:
- That the content matches the source .env.example
- That the original .env.example files still exist after copy
- Behavior when .env already exists (should be skipped per setup.sh lines 137-146)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup_test.bats` around lines 30 - 45, The test "creates .env from
.env.example correctly" in scripts/setup_test.bats only checks existence and
logs; update it to also assert that the created .env contents match their
corresponding .env.example sources, that the original .env.example files still
exist after running run ./setup.sh, and add a sub-case where a pre-existing .env
file is present (create .env before running setup.sh) to confirm setup.sh skips
overwriting (per the skip behavior in setup.sh); reference the test name
("creates .env from .env.example correctly") and the script under test
(setup.sh) to locate where to add content assertions, existence checks for
.env.example, and the pre-existing .env scenario.
[ "$status" -eq 0 ]
[ -f .env ]
[ -f backend/.env ]
echo "$output" | grep "Created Root .env from .env.example"
echo "$output" | grep "Created backend/.env from backend/.env.example"
🧹 Nitpick | 🔵 Trivial
Test assertions using grep without [[ ]] or proper exit code handling are fragile.
Lines 43-44, 56, 68, 80-81, 94, and 106-107 pipe output to grep without capturing the exit status properly. In Bats, if grep doesn't find a match, the test fails with a non-obvious error message. Using [[ "$output" == *"expected text"* ]] or grep -q with proper assertion messages would improve test debugging.
Example on line 43: echo "$output" | grep "Created Root .env" will fail silently if the text isn't found, making test failures harder to diagnose.
Proposed improvement for clearer assertions
-echo "$output" | grep "Created Root .env from .env.example"
-echo "$output" | grep "Created backend/.env from backend/.env.example"
+[[ "$output" == *"Created Root .env from .env.example"* ]]
+[[ "$output" == *"Created backend/.env from backend/.env.example"* ]]

Or use Bats' assert_output --partial if using the bats-assert library.
Also applies to: 55-56, 67-68, 79-81, 93-94, 105-107
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup_test.bats` around lines 40 - 44, Replace fragile inline greps
that consume $output (e.g., occurrences of echo "$output" | grep "Created Root
.env" and similar uses on the same file) with assertions that capture exit
status and provide clear failure messages: either use the shell pattern test [[
"$output" == *"Created Root .env from .env.example"* ]] (and analogous checks
for backend/.env messages) or run grep -q "pattern" && assert_success/handle
failure with a descriptive message; update all instances that pipe $output to
grep (the echo "$output" | grep occurrences) so failures produce explicit
assertion errors instead of silent grep exit codes.
print_error() {
    echo -e "${RED}✗ $1${NC}"
    exit 1
}
Inconsistent error handling: print_error exits immediately but SETUP_FAILED tracking is bypassed.
print_error on line 30 calls exit 1, which terminates the script immediately. However, backend dependency installation failure on line 174 uses print_error, causing immediate script termination rather than setting SETUP_FAILED=1 and continuing (as done for SDK failures on line 211).
This inconsistency means:
- Backend pip install failure → script exits immediately
- SDK npm install failure → script continues with warning
For a setup script, graceful degradation with warnings (like SDK) is typically preferred over hard failures, allowing users to see all issues at once.
Proposed fix for consistent error handling
 if [ -f "requirements.txt" ]; then
     if run_cmd venv/bin/pip install -r requirements.txt; then
         print_success "Backend dependencies installed."
     else
-        print_error "Backend dependencies failed to install."
+        print_warning "Backend dependencies failed to install."
+        SETUP_FAILED=1
     fi
 else

Apply similar change for frontend on lines 191-192.
Also applies to: 174-174
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup.sh` around lines 28 - 31, The current print_error function
unconditionally calls exit 1, causing backend pip install failures (which call
print_error) to terminate the script instead of marking SETUP_FAILED=1 and
continuing like the SDK/npm path; update error handling so print_error (or a new
nonfatal helper, e.g., mark_setup_failed) sets SETUP_FAILED=1 and prints the red
error message without exiting, and only use a separate fatal_exit function that
prints and exits for truly unrecoverable errors; then replace the backend pip
install and frontend install calls that currently call print_error to call the
nonfatal handler (or mark_setup_failed) so failures are recorded but the script
continues, while keeping the SDK/npm path behavior unchanged.
version_gt() {
    # If python is available, use it for reliable semver comparison
    if command -v python3 &> /dev/null; then
        python3 -c "import sys; from packaging.version import parse; sys.exit(0 if parse('$1') > parse('$2') else 1)" 2>/dev/null && return 0 || return 1
    fi
    # Fallback to pure awk
    awk -v v1="$1" -v v2="$2" '
    BEGIN {
        split(v1, a, "."); split(v2, b, ".");
        for (i = 1; i <= 3; i++) {
            if (a[i] + 0 > b[i] + 0) exit 0;
            if (a[i] + 0 < b[i] + 0) exit 1;
        }
        exit 1; # they are equal
    }'
}
version_gt function has unreliable fallback and Python dependency issue.
Several concerns with the version comparison logic:
- Line 41: The Python branch depends on the packaging.version module, which may not be installed in a fresh Python environment. This will silently fail and fall through to the awk fallback, but the error suppression (2>/dev/null) masks installation issues.
- Lines 44-52: The awk fallback only compares up to 3 version segments. Versions like 18.0.0.1 vs 18.0.0 would incorrectly compare only the first three parts.
- Line 51: Returns exit code 1 for equal versions, which is correct for > semantics, but the function name version_gt combined with usage like version_gt "$NODE_VERSION" "17.9.9" for checking >= 18 is fragile; it relies on "17.9.9" being carefully chosen.
#!/bin/bash
# Verify if packaging module is typically available in standard Python installations
python3 -c "from packaging.version import parse; print('packaging available')" 2>&1 || echo "packaging module not available by default"

if [ -d "backend" ] && [ ! -f backend/.env ]; then
    if [ -f backend/.env.example ]; then
        cp backend/.env.example backend/.env
        print_success "Created backend/.env from backend/.env.example."
    else
        print_warning "backend/.env.example not found. Skipping..."
    fi
elif [ -f backend/.env ]; then
    print_success "backend/.env already exists."
fi
🧹 Nitpick | 🔵 Trivial
Backend .env check has inconsistent conditional structure.
Lines 148-157 check [ -d "backend" ] && [ ! -f backend/.env ] in the initial condition, but the elif on line 155 only checks [ -f backend/.env ] without verifying the directory exists. This is logically safe but inconsistent and could print a misleading success message if backend/.env exists as a file outside a backend/ directory (edge case).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup.sh` around lines 148 - 157, The conditional logic around the
backend .env check is inconsistent: change the elif test to also verify the
backend directory exists so the success message only runs when the backend
directory contains a .env; specifically update the elif condition from checking
only '[ -f backend/.env ]' to check both '[ -d "backend" ]' and '[ -f
backend/.env ]' so the block tied to the success message only executes when the
backend directory is present and the .env file exists (adjust the conditional
that controls the print_success "backend/.env already exists." branch).
cd backend
if [ ! -d "venv" ]; then
    run_cmd python3 -m venv venv
fi

if [ -f "requirements.txt" ]; then
    if run_cmd venv/bin/pip install -r requirements.txt; then
        print_success "Backend dependencies installed."
    else
        print_error "Backend dependencies failed to install."
    fi
else
    print_warning "backend/requirements.txt not found."
fi
cd ..
Missing cd failure handling could leave script in wrong directory on error.
Lines 165, 179, 187, and 197 use cd commands without error checking. While set -e is enabled, if any command between cd backend (line 165) and cd .. (line 179) fails and is caught by an if statement (which suppresses set -e), subsequent cd .. may not execute, leaving the script in the wrong directory for later steps.
Additionally, the cd .. commands assume the script started from the repository root. If invoked from a different directory, this could navigate to unexpected locations.
Proposed fix using subshells or pushd/popd
 if [ -d "backend" ]; then
     print_step "Installing backend dependencies (Python)..."
-    cd backend
+    pushd backend > /dev/null
     if [ ! -d "venv" ]; then
         run_cmd python3 -m venv venv
     fi
     if [ -f "requirements.txt" ]; then
         if run_cmd venv/bin/pip install -r requirements.txt; then
             print_success "Backend dependencies installed."
         else
             print_warning "Backend dependencies failed to install."
             SETUP_FAILED=1
         fi
     else
         print_warning "backend/requirements.txt not found."
     fi
-    cd ..
+    popd > /dev/null

Also applies to: 187-197
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup.sh` around lines 165 - 179, The cd usage in the setup steps
(the block starting with cd backend and ending with cd .. around the
virtualenv/install logic) can leave the script in the wrong directory if an
inner command fails inside an if (which can suppress set -e) or if the script
was invoked from another cwd; wrap the backend work in a safe context by either
running it in a subshell (e.g., ( cd backend && ... )) or using pushd backend /
popd to guarantee returning to the original directory, and ensure any failure
from run_cmd, venv/bin/pip, or the if branches still executes popd or exits the
subshell; apply the same pattern to the other cd blocks referenced (the blocks
around lines that include run_cmd, venv, and pip install) so directory state is
always restored.
if [ -f "docker-compose.yml" ] || [ -f "docker-compose.yaml" ] || [ -f "compose.yml" ]; then
    if command -v docker-compose &> /dev/null; then
        if run_cmd docker-compose up -d db redis; then
            print_success "Database and Redis started via docker-compose."
            DOCKER_SUCCESS=1
        else
            print_warning "Failed to start services via docker-compose. Is Docker daemon running?"
            SETUP_FAILED=1
        fi
    elif command -v docker &> /dev/null && docker compose version &> /dev/null; then
        if run_cmd docker compose up -d db redis; then
            print_success "Database and Redis started via docker compose."
            DOCKER_SUCCESS=1
        else
            print_warning "Failed to start services via docker compose. Is Docker daemon running?"
            SETUP_FAILED=1
        fi
    else
        print_warning "Docker Compose not found. Please ensure Postgres and Redis are running locally."
        SETUP_FAILED=1
    fi
else
    print_warning "docker-compose.yml not found. Skipping local service startup."
    SETUP_FAILED=1
fi
Docker compose file detection doesn't check for compose.yaml variant.
Line 223 checks for `docker-compose.yml`, `docker-compose.yaml`, and `compose.yml`, but misses the `compose.yaml` variant, which is also valid per the Docker Compose specification. This could cause the script to incorrectly skip service startup when only `compose.yaml` exists.
Proposed fix

```diff
-if [ -f "docker-compose.yml" ] || [ -f "docker-compose.yaml" ] || [ -f "compose.yml" ]; then
+if [ -f "docker-compose.yml" ] || [ -f "docker-compose.yaml" ] || [ -f "compose.yml" ] || [ -f "compose.yaml" ]; then
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if [ -f "docker-compose.yml" ] || [ -f "docker-compose.yaml" ] || [ -f "compose.yml" ]; then | |
| if command -v docker-compose &> /dev/null; then | |
| if run_cmd docker-compose up -d db redis; then | |
| print_success "Database and Redis started via docker-compose." | |
| DOCKER_SUCCESS=1 | |
| else | |
| print_warning "Failed to start services via docker-compose. Is Docker daemon running?" | |
| SETUP_FAILED=1 | |
| fi | |
| elif command -v docker &> /dev/null && docker compose version &> /dev/null; then | |
| if run_cmd docker compose up -d db redis; then | |
| print_success "Database and Redis started via docker compose." | |
| DOCKER_SUCCESS=1 | |
| else | |
| print_warning "Failed to start services via docker compose. Is Docker daemon running?" | |
| SETUP_FAILED=1 | |
| fi | |
| else | |
| print_warning "Docker Compose not found. Please ensure Postgres and Redis are running locally." | |
| SETUP_FAILED=1 | |
| fi | |
| else | |
| print_warning "docker-compose.yml not found. Skipping local service startup." | |
| SETUP_FAILED=1 | |
| fi | |
| if [ -f "docker-compose.yml" ] || [ -f "docker-compose.yaml" ] || [ -f "compose.yml" ] || [ -f "compose.yaml" ]; then | |
| if command -v docker-compose &> /dev/null; then | |
| if run_cmd docker-compose up -d db redis; then | |
| print_success "Database and Redis started via docker-compose." | |
| DOCKER_SUCCESS=1 | |
| else | |
| print_warning "Failed to start services via docker-compose. Is Docker daemon running?" | |
| SETUP_FAILED=1 | |
| fi | |
| elif command -v docker &> /dev/null && docker compose version &> /dev/null; then | |
| if run_cmd docker compose up -d db redis; then | |
| print_success "Database and Redis started via docker compose." | |
| DOCKER_SUCCESS=1 | |
| else | |
| print_warning "Failed to start services via docker compose. Is Docker daemon running?" | |
| SETUP_FAILED=1 | |
| fi | |
| else | |
| print_warning "Docker Compose not found. Please ensure Postgres and Redis are running locally." | |
| SETUP_FAILED=1 | |
| fi | |
| else | |
| print_warning "docker-compose.yml not found. Skipping local service startup." | |
| SETUP_FAILED=1 | |
| fi |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/setup.sh` around lines 223 - 247: the file-detection conditional in the startup block (used by the `run_cmd` calls that start services, set `DOCKER_SUCCESS`/`SETUP_FAILED`, and call `print_success`/`print_warning`) misses the `compose.yaml` filename. Update the `if` test that currently checks for `docker-compose.yml`, `docker-compose.yaml`, and `compose.yml` to also check for `compose.yaml`, so the branch that attempts `docker-compose` / `docker compose up -d db redis` runs when a `compose.yaml` file exists.
Closing test/invalid PR. Apologies for the noise.
Summary
Added a row of status badges to the top of the README, including a custom endpoint to dynamically display the total paid out.
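A minimal sketch of what the custom badge payload might look like. Shields.io "endpoint" badges expect a JSON object with `schemaVersion`, `label`, and `message` fields; the thresholds, label text, and `FNDRY` suffix below are assumptions for illustration, not the PR's actual values:

```python
def format_payout_amount(total_paid: int) -> str:
    """Format a raw payout total as a compact badge string (commas, k, M)."""
    if total_paid >= 1_000_000:
        return f"{total_paid / 1_000_000:.1f}M"
    if total_paid >= 10_000:  # assumed threshold for switching to "k"
        return f"{total_paid / 1_000:.1f}k"
    return f"{total_paid:,}"


def shields_payload(total_paid: int) -> dict:
    # Shape required by the shields.io custom JSON endpoint specification.
    return {
        "schemaVersion": 1,
        "label": "total paid",
        "message": f"{format_payout_amount(total_paid)} FNDRY",
        "color": "brightgreen",
    }


print(shields_payload(2_000_000))
```

The README badge then points shields.io at the endpoint, e.g. `https://img.shields.io/endpoint?url=<api-url>/api/stats/shields/payouts`, and shields renders whatever `label`/`message`/`color` the endpoint returns.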
Changes
- Added `/api/stats/shields/payouts`, formatted to the shields.io custom JSON endpoint specification, to dynamically report total paid out.
- Added tests for the `/api/stats/shields/payouts` endpoint in `backend/tests/test_stats.py`.

Testing
Fixes #488
Wallet: 7y4cEfLgX5m7ymTHdmwybBJoD6TYWgfWhcXG3yaXXniv