
[BUG] HTTP Client Doesn't Handle Chunked Transfer-Encoding #38

@olddev94

Project

vgrep

Description

The HTTP client in src/server/client.rs lines 89-105 reads the response body without checking the Content-Length or Transfer-Encoding headers. When the server uses chunked transfer encoding (common with Axum's streaming responses), the bytes the client treats as the body still contain the chunk framing (size markers and terminators), which corrupts JSON parsing.
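A minimal sketch of the missing header check (names like `Framing` and `body_framing` are illustrative, not from the vgrep codebase). Per the HTTP/1.1 spec, Transfer-Encoding takes precedence over Content-Length when both are present:

```rust
// Illustrative sketch: decide how to frame the response body
// from the already-parsed headers.
#[derive(Debug, PartialEq)]
enum Framing {
    Chunked,        // Transfer-Encoding: chunked
    Length(usize),  // Content-Length: N
    ReadToEof,      // neither header present
}

fn body_framing(headers: &[(&str, &str)]) -> Framing {
    // Transfer-Encoding wins over Content-Length if both appear.
    for (name, value) in headers {
        if name.eq_ignore_ascii_case("transfer-encoding")
            && value.to_ascii_lowercase().contains("chunked")
        {
            return Framing::Chunked;
        }
    }
    for (name, value) in headers {
        if name.eq_ignore_ascii_case("content-length") {
            if let Ok(n) = value.trim().parse() {
                return Framing::Length(n);
            }
        }
    }
    Framing::ReadToEof
}

fn main() {
    assert_eq!(body_framing(&[("Transfer-Encoding", "chunked")]), Framing::Chunked);
    assert_eq!(body_framing(&[("Content-Length", "42")]), Framing::Length(42));
    println!("framing checks passed");
}
```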

Error Message

Error: Failed to parse server response

Caused by:
    expected value at line 1 column 1

Debug Logs

System Information

- Bounty Version: 0.1.0
- OS: Ubuntu 24.04 LTS
- Rust: 1.75+

Screenshots

No response

Steps to Reproduce

  1. Start vgrep server: vgrep serve
  2. Index a large codebase with many files
  3. Perform a search that returns many results: vgrep "function" -m 100
  4. Observe intermittent JSON parse failures

Expected Behavior

The HTTP client should:

  1. Parse response headers to detect Transfer-Encoding: chunked
  2. If chunked, decode chunks properly (read size, read data, repeat until size=0)
  3. If Content-Length is present, read exactly that many bytes
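The chunked branch of step 2 could look roughly like the following standalone decoder (a sketch, not the actual client code; it assumes the full raw body is already buffered and ignores trailers beyond the zero-size terminator):

```rust
// Sketch of a minimal chunked-body decoder: repeatedly read a
// "<hex size>\r\n<data>\r\n" frame until a zero-size chunk ends the body.
fn decode_chunked(raw: &[u8]) -> Option<Vec<u8>> {
    let mut body = Vec::new();
    let mut pos = 0;
    loop {
        // Find the CRLF that ends the chunk-size line.
        let line_end = raw[pos..].windows(2).position(|w| w == b"\r\n")? + pos;
        let size_line = std::str::from_utf8(&raw[pos..line_end]).ok()?;
        // The size may carry extensions after ';' — ignore them.
        let size_hex = size_line.split(';').next()?.trim();
        let size = usize::from_str_radix(size_hex, 16).ok()?;
        pos = line_end + 2;
        if size == 0 {
            return Some(body); // reached the 0\r\n\r\n terminator
        }
        body.extend_from_slice(raw.get(pos..pos + size)?);
        pos += size + 2; // skip the chunk data and its trailing \r\n
    }
}

fn main() {
    // 0x1a = 26 bytes of JSON, then the terminating zero-size chunk.
    let raw = b"1a\r\n{\"results\":[\"aa\",\"b\",\"c\"]}\r\n0\r\n\r\n";
    let body = decode_chunked(raw).expect("valid chunked body");
    assert_eq!(body, b"{\"results\":[\"aa\",\"b\",\"c\"]}");
    println!("{}", String::from_utf8(body).unwrap());
}
```

Returning `Option` keeps the sketch short; the real client would surface a proper error for truncated or malformed framing.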

Actual Behavior

The client reads everything after headers as raw body content, including:

  • Chunk size markers (e.g., 1a\r\n)
  • Chunk terminators (0\r\n\r\n)

This corrupts the JSON response and causes parse failures.
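To make the failure mode concrete, here is a hypothetical chunk-encoded response body (illustrative, not a captured vgrep log). The first byte the JSON parser sees is the hex size digit `1`, not `{`, which matches the "expected value at line 1 column 1" error:

```rust
fn main() {
    // What the naive client hands to the JSON parser when the server
    // chunk-encodes a 26-byte (0x1a) response:
    let wire = "1a\r\n{\"results\":[\"aa\",\"b\",\"c\"]}\r\n0\r\n\r\n";
    // A JSON value must start with '{', '[', a quote, a digit of a
    // number, or a literal — '1' followed by 'a' parses as neither,
    // so the parser fails at line 1 column 1.
    assert!(!wire.starts_with('{'));
    println!("parser input starts with: {:?}", &wire[..4]);
}
```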

Additional Context

This bug may not manifest in all cases because Axum doesn't always use chunked encoding. Small responses may be sent with Content-Length. The bug is more likely with:

  • Large search results
  • Many files indexed
  • Slow network conditions

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working), valid (Valid issue), vgrep

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
