This repository was archived by the owner on Jan 22, 2026. It is now read-only.

Test fully specifying client config with env var #18

Merged: mangelajo merged 8 commits into main from env on May 29, 2025
Conversation

@NickCao (Collaborator) commented May 28, 2025

Summary by CodeRabbit

  • Tests
    • Introduced a comprehensive automated test suite covering client and exporter creation, login, configuration, lease management, command execution, and deletion.
    • Simplified the workflow by replacing multiple manual setup steps with a single streamlined automated test execution for improved reliability and clarity.

@coderabbitai (Bot) commented May 28, 2025

Walkthrough

The workflow has been streamlined by removing a series of manual shell commands for setting up and testing clients and exporters. Instead, it now installs the bats testing framework and runs a single automated test suite (tests.bats), which comprehensively covers client and exporter lifecycle operations in a Jumpstarter environment.

Changes

| File(s) | Change Summary |
| --- | --- |
| action.yml | Removed manual test/setup shell commands; now installs bats and dependencies, then runs tests.bats. |
| tests.bats | Added new BATS test script automating setup, testing, and teardown of clients/exporters and leases. |

Sequence Diagram(s)

sequenceDiagram
    participant Workflow
    participant BATS Test Suite
    participant Jumpstarter CLI
    participant Kubernetes

    Workflow->>BATS Test Suite: Run tests.bats
    BATS Test Suite->>Jumpstarter CLI: Create clients/exporters
    BATS Test Suite->>Jumpstarter CLI: Login with various methods
    BATS Test Suite->>Jumpstarter CLI: Apply YAML overlays
    BATS Test Suite->>Jumpstarter CLI: Operate leases and run exporters
    BATS Test Suite->>Jumpstarter CLI: Retrieve and delete clients/exporters

Poem

In the warren where the tests now run,
Manual steps are gone—just one by one!
BATS hops in, with scripts so neat,
Automating every feat.
Clients, exporters, leases in tow—
The rabbit’s tests are good to go! 🐇✨


@coderabbitai (Bot) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
tests.bats (3)

20-30: Simple and effective cleanup.

The teardown function properly mirrors the setup by cleaning up all created resources. Consider making it more robust by adding error handling to continue cleanup even if individual delete commands fail.

teardown_file() {
  # delete clients
-  jmp admin delete client   test-client-oidc
-  jmp admin delete client   test-client-sa
-  jmp admin delete client   test-client-legacy
+  jmp admin delete client   test-client-oidc || true
+  jmp admin delete client   test-client-sa || true
+  jmp admin delete client   test-client-legacy || true

  # delete exporters
-  jmp admin delete exporter test-exporter-oidc
-  jmp admin delete exporter test-exporter-sa
-  jmp admin delete exporter test-exporter-legacy
+  jmp admin delete exporter test-exporter-oidc || true
+  jmp admin delete exporter test-exporter-sa || true
+  jmp admin delete exporter test-exporter-legacy || true
}
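The effect of the suggested `|| true` can be reproduced in two lines of plain bash (a sketch, independent of BATS and of the jmp CLI):

```shell
# Under errexit semantics, a failing delete aborts the rest of teardown:
bash -c 'set -e; false; echo "cleanup continued"' || echo "teardown aborted"
# prints "teardown aborted"

# With '|| true', the failure is swallowed and later cleanup still runs:
bash -c 'set -e; false || true; echo "cleanup continued"'
# prints "cleanup continued"
```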

68-77: Consider adding background process cleanup.

Starting exporters in background is necessary for the test, but consider adding cleanup mechanisms to handle cases where the test fails and processes might be left running.

+  # Store background PIDs for cleanup
+  jmp run --exporter test-exporter-oidc &
+  EXPORTER_OIDC_PID=$!
+  jmp run --exporter test-exporter-sa &
+  EXPORTER_SA_PID=$!
+  jmp run --exporter test-exporter-legacy &
+  EXPORTER_LEGACY_PID=$!
+
+  # Add trap for cleanup on exit
+  trap 'kill $EXPORTER_OIDC_PID $EXPORTER_SA_PID $EXPORTER_LEGACY_PID 2>/dev/null || true' EXIT
-  jmp run --exporter test-exporter-oidc &
-  jmp run --exporter test-exporter-sa &
-  jmp run --exporter test-exporter-legacy &

32-106: Consider breaking down the large test case.

While the current test is comprehensive, consider splitting it into multiple focused test cases for better maintainability and clearer failure isolation:

  • Authentication and login tests
  • Exporter configuration and startup tests
  • Lease management tests
  • Admin command tests
  • Cleanup and deletion tests

This would make it easier to identify which specific functionality failed and improve test debugging.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5839fd8 and dc93fd0.

📒 Files selected for processing (2)
  • action.yml (2 hunks)
  • tests.bats (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test (ubuntu-24.04)
  • GitHub Check: test (ubuntu-24.04-arm)
🔇 Additional comments (9)
action.yml (2)

18-22: LGTM! Good addition of BATS testing framework.

Installing BATS via apt-get is appropriate for the Ubuntu-based GitHub Actions environment and will enable structured testing.


97-97: Excellent refactoring to use structured testing.

Replacing the manual shell commands with a comprehensive BATS test suite improves maintainability and readability. The --trace --verbose-run flags will provide good debugging information if tests fail.

tests.bats (7)

1-18: Comprehensive setup covering different authentication scenarios.

The setup function properly creates clients and exporters with various authentication methods (OIDC, service account, legacy) and appropriate labels for selector-based testing. The flag usage is consistent and appropriate.


32-46: Well-structured authentication testing.

The test properly covers different authentication methods including OIDC username/password and Kubernetes service account tokens. Good use of different client types to validate various authentication flows.


58-63: Good use of yq for configuration overlay.

Using yq to apply YAML overlays to exporter configurations is the right approach for merging additional configuration without overwriting the base setup.


72-77: Excellent use of Kubernetes condition waiting.

Waiting for both "Online" and "Registered" conditions ensures the exporters are fully ready before proceeding with tests. This prevents race conditions.


81-88: Comprehensive lease and admin command testing.

Good coverage of lease lifecycle (create, list, delete) and admin commands for different resource types. This validates the full management workflow.


94-96: Effective selector-based testing.

Testing shell commands with different clients and selectors validates that the labeling and targeting system works correctly across different authentication methods.


104-105: Proper verification of resource cleanup.

Using ! to verify that secrets are properly deleted after cleanup is the correct approach for negative assertions in BATS.

@coderabbitai (Bot) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
tests.bats (2)

25-25: Fix misleading test name.

The test name "can login with oidc" is misleading as it tests multiple authentication methods including service accounts and legacy authentication, not just OIDC.

-@test "can login with oidc" {
+@test "can login with different authentication methods" {

75-116: Address the commented-out comprehensive test.

This large commented-out test block suggests incomplete functionality or work in progress. It includes important testing scenarios like lease management, shell command execution, and secret cleanup verification.

The commented-out test covers critical functionality including:

  • Exporter runtime testing
  • Kubernetes resource status waiting
  • Lease management
  • Shell command execution
  • Secret cleanup verification

Consider either:

  1. Enabling this test if the functionality is ready
  2. Creating separate test cases for each scenario
  3. Adding TODO comments explaining why it's disabled

Do you want me to help break this down into smaller, manageable test cases or create an issue to track this work?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dc93fd0 and eee5c87.

📒 Files selected for processing (2)
  • action.yml (2 hunks)
  • tests.bats (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test (ubuntu-24.04)
  • GitHub Check: test (ubuntu-24.04-arm)
🔇 Additional comments (7)
action.yml (1)

18-22: LGTM! Good approach to install BATS testing framework.

Installing BATS via apt-get is straightforward and includes the necessary support libraries (bats-support and bats-assert) for enhanced testing capabilities.

tests.bats (6)

1-4: LGTM! Proper BATS setup.

The setup function correctly loads the necessary BATS libraries for enhanced testing capabilities.


6-12: LGTM! Well-structured client creation test.

The test covers different authentication methods consistently. The use of --unsafe for OIDC/SA and --save for legacy clients appears intentional based on the authentication patterns.


14-23: LGTM! Comprehensive exporter creation test.

Good coverage of different exporter types with appropriate labels for identification in later tests.


62-72: LGTM! Proper cleanup tests.

The deletion tests provide necessary cleanup functionality and use consistent patterns with the --delete flag.


29-38: ✅ Verification successful

Verify service account token lifecycle management.

The test creates service account tokens but doesn't explicitly clean them up. Consider if token cleanup is needed or if they have appropriate expiration.

Run the following script to check if there are any token cleanup mechanisms or expiration policies:


🏁 Script executed:

#!/bin/bash
# Description: Check for service account token management patterns
# Expected: Find token creation/deletion patterns or expiration settings

# Look for token creation and cleanup patterns
rg -A 5 -B 5 'kubectl.*token' .

# Check for any token-related cleanup in the codebase
ast-grep --pattern 'kubectl delete $_ token $_'

Length of output: 1447


No manual cleanup required for ephemeral service account tokens

The kubectl create token calls in tests.bats use the TokenRequest API, which issues short-lived tokens (default TTL ~1 hour) without creating persisted Kubernetes objects. They expire automatically, so no explicit deletion is needed.
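When even the default TTL is longer than a CI run needs, the TokenRequest API also accepts an explicit lifetime. A sketch (the service account name is taken from this PR's tests; the JWT below is a synthetic, padded stand-in, since real tokens use unpadded base64url):

```shell
# With a cluster available, a shorter-lived token could be requested as:
#   kubectl create token test-exporter-sa --duration=10m
#
# The expiry baked into any such token can be inspected locally by decoding
# the JWT payload (the second dot-separated segment). A padded standard-base64
# token is fabricated here so the pipeline is self-contained:
payload=$(printf '{"exp":1750000000,"iss":"kubernetes/serviceaccount"}' | base64 | tr -d '\n')
token="header.${payload}.signature"
printf '%s' "$token" | cut -d. -f2 | base64 -d
# prints {"exp":1750000000,"iss":"kubernetes/serviceaccount"}
```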


51-56:

Details

✅ Verification successful

Verify exporter.yaml file dependency.

The test applies YAML overlays from $GITHUB_ACTION_PATH/exporter.yaml but this file dependency isn't explicitly documented. Ensure this file exists and contains the expected configuration.

Run the following script to verify the exporter.yaml file exists:


🏁 Script executed:

#!/bin/bash
# Description: Verify exporter.yaml file exists and examine its structure
# Expected: File should exist with valid YAML configuration

# Check if exporter.yaml exists
fd exporter.yaml

# If found, examine its structure
if [ -f ./exporter.yaml ]; then
  echo "=== exporter.yaml content ==="
  cat ./exporter.yaml
fi

Length of output: 291


exporter.yaml test dependency verified

exporter.yaml is present at the repository root and contains valid YAML:

export:
  power:
    type: jumpstarter_driver_power.driver.MockPower
  storage:
    type: jumpstarter_driver_opendal.driver.MockStorageMux

No missing file or configuration issues detected.

@coderabbitai (Bot) left a comment

Actionable comments posted: 4

🧹 Nitpick comments (1)
tests.bats (1)

83-90: Consider test isolation concerns.

This test depends on the previous test having created and run exporters. Consider making tests more independent or documenting the dependencies clearly.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eee5c87 and 24970d3.

📒 Files selected for processing (1)
  • tests.bats (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test (ubuntu-24.04)
  • GitHub Check: test (ubuntu-24.04-arm)
🔇 Additional comments (4)
tests.bats (4)

1-4: LGTM: Proper BATS setup.

The setup function correctly loads the required BATS libraries for support and assertions.


6-12: LGTM: Client creation test is well-structured.

The test properly covers different authentication methods (OIDC username, Kubernetes service account, and legacy). The use of --unsafe and --out /dev/null flags appropriately handles testing scenarios.


14-23: LGTM: Exporter creation with proper labeling.

The test correctly creates exporters with different authentication methods and applies appropriate labels for testing scenarios.


111-111: ✅ Verification successful

Verify negative assertion syntax.

Ensure that the run ! syntax correctly captures the expected failure when secrets are deleted.

Also applies to: 121-121


🏁 Script executed:

#!/bin/bash
# Description: Verify BATS negative assertion syntax usage
# Expected: Documentation on proper usage of `run !` for negative assertions

rg -A 3 -B 3 "run !" --type bash

Length of output: 63


🏁 Script executed:

#!/bin/bash
# Locate tests.bats and show lines 105–130 for context around `run !`
file=$(fd --type f tests.bats | head -n1)
echo "Inspecting file: $file"
sed -n '105,130p' "$file"

Length of output: 790


🌐 Web query:

How to assert a command fails in Bats tests? Does using "run ! command" correctly invert the exit status?

💡 Result:

In Bats (Bash Automated Testing System), there are several ways to assert that a command fails. The run ! command syntax is indeed one approach, but it's important to understand how it works and when to use it.

Asserting Command Failures in Bats

Using run ! (Bats 1.5.0 and newer)

For Bats versions 1.5.0 and newer, you can use the run ! syntax to run a command and capture its output while expecting it to fail:

@test "command should fail" {
  run ! some_command
  [ "$status" -ne 0 ]
  # Additional assertions on $output if needed
}

This correctly inverts the exit status expectation, running the command and expecting it to fail (non-zero exit status)[3].

Using assert_failure (with bats-assert library)

The bats-assert library provides a cleaner way to test for command failures:

@test "command should fail" {
  load 'test_helper/bats-support/load'
  load 'test_helper/bats-assert/load'
  
  run some_command
  assert_failure
  # Optionally check specific exit code
  # assert_failure 1
}

This approach is more readable and provides better error messages when tests fail[1][2]. The assert_failure function checks that the command failed and can optionally verify a specific non-zero exit code.

Alternative Methods (for older Bats versions)

If you're using an older version of Bats without run ! support, you can use one of these approaches:

  1. Using ! command || false (note: without run, $status is not populated, so the negation itself carries the assertion):
@test "command should fail" {
  ! some_command || false
}
  2. Running the command and checking its status:
@test "command should fail" {
  run some_command
  [ "$status" -ne 0 ]
}

Important Gotchas with Negation

There's an important gotcha to be aware of: Bash deliberately excludes negated return values from causing a pipeline to exit (related to Bash's -e option behavior)[3]. This means:

  • If a negated command (! command) is not the final statement in a test, it won't fail the test even if it should.
  • The exit status will only propagate through to the test's return code if it's the final statement in the test[3].

This is why using run ! (in newer Bats versions) or ! command || false (in older versions) is recommended, as they explicitly cause the test to fail when the command unexpectedly succeeds[3].

Example with Output Verification

When testing command failures, you often want to verify both the exit status and the error message:

@test "invoking foo with a nonexistent file fails with error message" {
  run foo nonexistent_filename
  [ "$status" -eq 1 ]
  [ "$output" = "foo: no such file 'nonexistent_filename'" ]
}

When using bats-assert, this becomes even clearer:

@test "invoking foo with a nonexistent file fails with error message" {
  run foo nonexistent_filename
  assert_failure
  assert_output "foo: no such file 'nonexistent_filename'"
}

This approach provides better error reporting when the test fails, showing you the expected vs. actual output[5][6].



Confirm Bats negative-assertion syntax
The run ! kubectl … form is supported in Bats 1.5.0+ and will correctly invert the exit status—causing the test to fail if the secret still exists. No code changes are required. Ensure your test runner is using Bats ≥ 1.5.0.
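The negation gotcha described above can be reproduced in plain bash, without BATS (a minimal sketch):

```shell
# Under set -e, the failure of a negated pipeline is deliberately ignored
# mid-script, so '! true' does not abort even though it "fails":
bash -c 'set -e; ! true; echo survived'
# prints "survived" and exits 0

# Appending '|| false' makes the unexpected success fatal again:
bash -c 'set -e; ! true || false; echo survived' || echo "exit=$?"
# prints "exit=1"; "survived" is never reached
```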

@coderabbitai (Bot) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
action.yml (1)

84-85: Consider enabling strict Bash error handling
Adding set -euxo pipefail at the top of your run blocks helps catch failures, undefined variables, and pipeline errors early. For example:

 run: |
-  export ENDPOINT=$(helm get values jumpstarter --output json | jq -r '."jumpstarter-controller".grpc.endpoint')
+  set -euxo pipefail
+  export ENDPOINT=$(helm get values jumpstarter --output json | jq -r '."jumpstarter-controller".grpc.endpoint')
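Of these flags, pipefail is the least intuitive; its effect can be shown in isolation:

```shell
# Without pipefail, a pipeline's exit status is that of its LAST command,
# so an early failure is masked:
bash -c 'false | true; echo "exit=$?"'
# prints "exit=0"

# With pipefail, any failing stage surfaces in the pipeline's status:
bash -c 'set -o pipefail; false | true; echo "exit=$?"'
# prints "exit=1"
```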
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 24970d3 and 3acee95.

📒 Files selected for processing (2)
  • action.yml (3 hunks)
  • tests.bats (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests.bats
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test (ubuntu-24.04)
  • GitHub Check: test (ubuntu-24.04-arm)
🔇 Additional comments (2)
action.yml (2)

18-22: Add BATS testing framework installation
The newly added step correctly installs bats along with bats-support and bats-assert, ensuring your end-to-end tests have all necessary dependencies.


85-85: Export ENDPOINT within the same shell
By exporting ENDPOINT at the top of the run block, it’s available to all subsequent commands in this step (including the BATS invocation). Nice improvement.

@NickCao changed the title from "[WIP] test fully specifying client config with env var" to "Test fully specifying client config with env var" on May 29, 2025

@coderabbitai (Bot) left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 39c70f7 and 1b4421e.

📒 Files selected for processing (1)
  • tests.bats (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
tests.bats (3)
Learnt from: NickCao
PR: jumpstarter-dev/jumpstarter-e2e#18
File: tests.bats:62-81
Timestamp: 2025-05-28T18:32:19.124Z
Learning: In the jumpstarter E2E test suite (tests.bats), the background exporter processes started in the "can run exporters" test are intentionally kept running to support subsequent tests like "can operate on leases" and "can lease and connect to exporters". These processes should not be cleaned up within individual test cases.
Learnt from: NickCao
PR: jumpstarter-dev/jumpstarter-e2e#18
File: tests.bats:0-0
Timestamp: 2025-05-28T18:31:42.647Z
Learning: In BATS testing framework, commands automatically cause test failure when they return non-zero exit codes. Explicit validation with `run` and `assert_success` is only needed when you want to capture output or prevent automatic failure behavior.
Learnt from: NickCao
PR: jumpstarter-dev/jumpstarter-e2e#18
File: tests.bats:51-56
Timestamp: 2025-05-28T18:32:24.154Z
Learning: In BATS test files, explicit error handling with if statements and manual returns is unnecessary because BATS automatically fails a test if any command within the test returns a non-zero exit code.
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test (ubuntu-24.04-arm)
  • GitHub Check: test (ubuntu-24.04)
🔇 Additional comments (10)
tests.bats (10)

1-4: LGTM: Proper BATS setup configuration.

The setup function correctly loads the required BATS libraries for support and assertions.


6-12: LGTM: Client creation test covers multiple authentication methods.

The test properly creates clients using different authentication methods (OIDC username, service account, and legacy), which provides good coverage of the authentication scenarios.


14-23: LGTM: Exporter creation with proper labeling.

The test correctly creates exporters with different authentication methods and applies appropriate labels for targeting in subsequent tests.


25-60: LGTM: Comprehensive OIDC login testing.

The test covers various login scenarios including username/password authentication, service account tokens, and YAML overlay operations. The configuration listing commands provide good verification points.


62-87: LGTM: Background exporter processes for E2E testing.

Based on the retrieved learnings, these background processes are intentionally kept running to support subsequent tests that depend on active exporters. The kubectl wait commands properly verify the exporters reach the expected conditions.


100-110: LGTM: Lease lifecycle operations testing.

The test properly covers lease creation, listing, and deletion operations with appropriate client configuration and cleanup.


112-123: LGTM: Shell command execution across different authentication methods.

The test validates that shell commands can be executed through different client types targeting various labeled exporters, providing good coverage of the connection functionality.


125-129: LGTM: Admin CLI CRD retrieval testing.

Simple and effective test for verifying admin CLI can retrieve the different custom resource types.


131-145: LGTM: Proper client deletion with verification.

The test correctly verifies resource existence before deletion and uses the run ! pattern to confirm resources are properly removed after deletion.


147-161: LGTM: Consistent exporter deletion testing.

The test follows the same pattern as client deletion, properly verifying both secret and custom resource cleanup.

Comment on lines +89 to +98
@test "can specify client config only using environment variables" {
kubectl -n default wait --for=condition=Online --for=condition=Registered \
exporters.jumpstarter.dev/test-exporter-oidc

JMP_NAMEPSACE=default \
JMP_NAME=test-exporter-legacy \
JMP_ENDPOINT=$(kubectl get clients.jumpstarter.dev -n default test-client-legacy -o 'jsonpath={.status.endpoint}') \
JMP_TOKEN=$(kubectl get secrets -n default test-client-legacy-client -o 'jsonpath={.data.token}' | base64 -d) \
jmp shell --selector example.com/board=oidc j power on
}

⚠️ Potential issue

Fix typo in environment variable name and clarify naming convention.

There's a typo in the environment variable name that will prevent proper configuration.

Apply this diff to fix the typo:

- JMP_NAMEPSACE=default \
+ JMP_NAMESPACE=default \

Additionally, consider clarifying the naming convention - using JMP_NAME=test-exporter-legacy for client configuration might be confusing since it suggests an exporter name rather than a client session name.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (original version followed by corrected version):
@test "can specify client config only using environment variables" {
kubectl -n default wait --for=condition=Online --for=condition=Registered \
exporters.jumpstarter.dev/test-exporter-oidc
JMP_NAMEPSACE=default \
JMP_NAME=test-exporter-legacy \
JMP_ENDPOINT=$(kubectl get clients.jumpstarter.dev -n default test-client-legacy -o 'jsonpath={.status.endpoint}') \
JMP_TOKEN=$(kubectl get secrets -n default test-client-legacy-client -o 'jsonpath={.data.token}' | base64 -d) \
jmp shell --selector example.com/board=oidc j power on
}
@test "can specify client config only using environment variables" {
kubectl -n default wait --for=condition=Online --for=condition=Registered \
exporters.jumpstarter.dev/test-exporter-oidc
JMP_NAMESPACE=default \
JMP_NAME=test-exporter-legacy \
JMP_ENDPOINT=$(kubectl get clients.jumpstarter.dev -n default test-client-legacy -o 'jsonpath={.status.endpoint}') \
JMP_TOKEN=$(kubectl get secrets -n default test-client-legacy-client -o 'jsonpath={.data.token}' | base64 -d) \
jmp shell --selector example.com/board=oidc j power on
}
🤖 Prompt for AI Agents
In tests.bats around lines 89 to 98, there is a typo in the environment variable
name "JMP_NAMEPSACE" which should be corrected to "JMP_NAMESPACE" to ensure
proper configuration. Additionally, rename "JMP_NAME" to something more
descriptive like "JMP_CLIENT_NAME" to clearly indicate it refers to the client
session name rather than an exporter name, improving clarity and
maintainability.
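Beyond fixing this one instance, a small guard could catch misspelled JMP_* variables generically. A hedged sketch (the accepted-variable list is assumed from this test alone, not from jumpstarter's documentation):

```shell
#!/usr/bin/env bash
# check_jmp_env: warn about exported JMP_* variables that are not in the
# known list, so a typo like JMP_NAMEPSACE fails fast instead of being
# silently ignored by the client. The "known" list is an assumption based
# on the variables used in this PR.
check_jmp_env() {
  local known=" JMP_NAMESPACE JMP_NAME JMP_ENDPOINT JMP_TOKEN "
  local name status=0
  for name in $(compgen -A export | grep '^JMP_' || true); do
    case "$known" in
      *" $name "*) ;;  # recognized variable, nothing to do
      *) echo "unrecognized variable: $name (typo?)" >&2; status=1 ;;
    esac
  done
  return $status
}
```

Calling such a guard at the top of the test would have flagged JMP_NAMEPSACE before any jmp invocation ran.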

@mangelajo (Member) left a comment

Really nice!!!

@mangelajo merged commit 4671491 into main on May 29, 2025
3 checks passed
@NickCao deleted the env branch on May 29, 2025 at 23:09
3 participants