
Continuous Fuzzing #1420

Workflow file for this run

name: Continuous Fuzzing

# Runs hourly to continuously explore the input space
on:
  schedule:
    - cron: '0 * * * *'
  workflow_dispatch:
  push:
    branches: [main]
    paths:
      - 'internal/**/*.go'
      - 'cmd/**/*.go'
      - '.github/workflows/fuzz.yml'
  pull_request:
    branches: [main]
    paths:
      - 'internal/**/*.go'
      - 'cmd/**/*.go'

jobs:
  fuzz:
    name: Fuzz (${{ matrix.target.test }})
    runs-on: ubuntu-latest
    timeout-minutes: 60
    strategy:
      fail-fast: false
      matrix:
        target:
          - package: github.com/blackwell-systems/goldenthread/internal/emitter/zod
            test: FuzzEmit
            time: 10m
          - package: github.com/blackwell-systems/goldenthread/internal/emitter/zod
            test: FuzzEmitFieldName
            time: 5m
          - package: github.com/blackwell-systems/goldenthread/internal/emitter/zod
            test: FuzzEmitValidation
            time: 5m
          - package: github.com/blackwell-systems/goldenthread/internal/emitter/zod
            test: FuzzEmitPattern
            time: 10m
          - package: github.com/blackwell-systems/goldenthread/internal/emitter/zod
            test: FuzzEmitEnum
            time: 5m
          - package: github.com/blackwell-systems/goldenthread/internal/hash
            test: FuzzComputeSchemaHash
            time: 10m
          - package: github.com/blackwell-systems/goldenthread/internal/hash
            test: FuzzComputeSchemaHash_Stability
            time: 5m
          - package: github.com/blackwell-systems/goldenthread/internal/hash
            test: FuzzComputeSchemaHash_TypeChanges
            time: 5m
          - package: github.com/blackwell-systems/goldenthread/internal/hash
            test: FuzzComputeSchemaHash_FieldOrder
            time: 5m
          - package: github.com/blackwell-systems/goldenthread/internal/parser
            test: FuzzParsePackages
            time: 5m
    steps:
      - uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.25.6'
          cache: true

      # Seed corpus entries live under testdata/fuzz/<FuzzFunc>/...; the corpus
      # generated by -fuzz accumulates in the Go build cache ($GOCACHE/fuzz),
      # so both locations are cached to let the corpus grow across runs.
      - name: Restore fuzz corpus cache
        uses: actions/cache@v4
        with:
          path: |
            **/testdata/fuzz
            ~/.cache/go-build/fuzz
          key: fuzz-corpus-${{ github.ref_name }}-${{ matrix.target.package }}-${{ matrix.target.test }}
          restore-keys: |
            fuzz-corpus-${{ github.ref_name }}-${{ matrix.target.package }}-
            fuzz-corpus-${{ github.ref_name }}-
            fuzz-corpus-

      - name: Run fuzzing
        id: fuzz
        continue-on-error: true
        shell: bash
        run: |
          set -o pipefail
          echo "Running ${{ matrix.target.test }} in ${{ matrix.target.package }}"
          go test ${{ matrix.target.package }} \
            -fuzz='^${{ matrix.target.test }}$' \
            -fuzztime=${{ matrix.target.time }} \
            -v 2>&1 | tee fuzz-output.log
          echo "exit_code=${PIPESTATUS[0]}" >> "$GITHUB_OUTPUT"

      - name: Check for failures
        if: steps.fuzz.outputs.exit_code != '0'
        run: |
          echo "::error::Fuzzing discovered a bug in ${{ matrix.target.test }}!"
          echo "Fuzz output:"
          cat fuzz-output.log
          # Extract the failing case path (<FuzzFunc>/<hash>) if present
          CASE_ID=$(grep -oP 'Failing input written to testdata/fuzz/\K[^/]+/[a-f0-9]+' fuzz-output.log || echo "unknown")
          # go test -run expects <FuzzFunc>/<hash>, so keep only the hash here
          # to avoid duplicating the function name in the -run pattern
          CASE_HASH=${CASE_ID##*/}
          echo "::error::Failing test case: $CASE_ID"
          echo "::error::Re-run locally with: go test ${{ matrix.target.package }} -run=${{ matrix.target.test }}/$CASE_HASH -v"
          exit 1
      - name: Create GitHub issue for fuzz failure
        if: failure()
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const output = fs.readFileSync('fuzz-output.log', 'utf8');
            // Go prints: "Failing input written to testdata/fuzz/<FuzzFunc>/<hash>"
            const caseMatch = output.match(/Failing input written to testdata\/fuzz\/([^/]+\/[a-f0-9]+)/);
            const caseId = caseMatch ? caseMatch[1] : 'unknown';
            const caseHash = caseId.includes('/') ? caseId.split('/').pop() : caseId;
            const testName = '${{ matrix.target.test }}';
            const pkg = '${{ matrix.target.package }}';
            // Create issue
            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: `Fuzzing found bug in ${testName}`,
              body: `## Fuzzing Failure

            **Fuzz Target**: \`${testName}\`
            **Package**: \`${pkg}\`
            **Failing Case**: \`${caseId}\`
            **Run**: [Workflow run](https://github.com/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId})

            ### How to Reproduce

            \`\`\`bash
            go test ${pkg} -run=${testName}/${caseHash} -v
            \`\`\`

            ### Artifacts

            - Download the failing input from the workflow artifacts: \`fuzz-failure-${testName}-${context.runId}\`
            - The full fuzz output is available in the workflow logs

            ### Next Steps

            1. Reproduce locally with the command above
            2. Examine the failing input case
            3. Fix the bug
            4. Add a regression test to prevent recurrence
            5. Update FUZZING_BUGS.md with details

            ---

            <details>
            <summary>Fuzz output (last 100 lines)</summary>

            \`\`\`
            ${output.split('\n').slice(-100).join('\n')}
            \`\`\`

            </details>`,
              labels: ['bug', 'fuzzing', 'automated']
            });

      - name: Upload failure artifacts
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: fuzz-failure-${{ matrix.target.test }}-${{ github.run_id }}
          path: |
            fuzz-output.log
            **/testdata/fuzz
          retention-days: 30
      - name: Parse fuzzing stats
        if: always()
        run: |
          # Extract fuzzing statistics from the output; default to 0 when the
          # log is missing or has no matching lines. (A plain `|| echo 0` after
          # the pipe would never fire, because tail exits 0 even on empty input.)
          EXECS=$(grep -oP 'execs: \K[0-9]+' fuzz-output.log 2>/dev/null | tail -n 1)
          EXECS=${EXECS:-0}
          INTERESTING=$(grep -oP 'new interesting: \K[0-9]+' fuzz-output.log 2>/dev/null | tail -n 1)
          INTERESTING=${INTERESTING:-0}
          echo "Fuzzing stats for ${{ matrix.target.test }}:"
          echo "- Executions: $EXECS"
          echo "- New interesting inputs: $INTERESTING"
          # Append to the job summary
          cat >> "$GITHUB_STEP_SUMMARY" << EOF
          ## Fuzzing: ${{ matrix.target.test }}

          - **Package**: \`${{ matrix.target.package }}\`
          - **Duration**: ${{ matrix.target.time }}
          - **Executions**: $EXECS
          - **New Interesting Inputs**: $INTERESTING
          - **Status**: ${{ steps.fuzz.outputs.exit_code == '0' && 'PASS' || 'FAIL' }}
          EOF
  summary:
    name: Fuzzing Summary
    runs-on: ubuntu-latest
    needs: fuzz
    if: always()
    steps:
      - name: Create overall summary
        run: |
          cat >> "$GITHUB_STEP_SUMMARY" << 'EOF'
          # Continuous Fuzzing Complete

          Fuzzing has been running across multiple targets to discover edge cases and bugs.

          ## What Is Fuzzing?

          Fuzzing automatically generates millions of random test inputs to find:

          - Crashes and panics
          - Invalid UTF-8 output
          - Incorrect behavior on edge cases
          - Security vulnerabilities

          ## Corpus Growth

          The fuzz corpus (the set of interesting test cases) is cached and grows
          over time, making each run more effective than the last.

          ## Next Steps

          - If all targets passed: the corpus has been updated with new interesting cases
          - If a target failed: check the artifacts for the failing inputs and fix the bugs
          EOF