fix: tail-based log reads, hour presets, Live default #246
alexanderwanyoike merged 1 commit into dev from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

Adds ISO datetime range parsing to logs queries, implements single-day tail-based log reads with partial-object fetches and datetime-window filtering, extends log streaming to apply time filters, augments MinIO test mocks, and updates frontend interval presets/serialization for hour-based and ISO datetime ranges.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LogsService
    participant Storage as MinIO/Storage
    participant Processor as StreamProcessor
    rect rgba(0,128,0,0.5)
        Client->>LogsService: GET /logs?dateRange=...&limit
    end
    alt Single-day & not metrics (tail path)
        LogsService->>Storage: statObject(key)
        Storage-->>LogsService: { size }
        LogsService->>Storage: getPartialObject(range=tail~512KB) or getObject(full)
        Storage-->>LogsService: bytes
        LogsService->>Processor: split into lines
        Processor->>Processor: drop first partial line if mid-file
        Processor->>Processor: normalize lines
        Processor->>Processor: apply startTime/endTime filter (if present)
        Processor-->>LogsService: last N matching lines
        LogsService-->>Client: 200 { data: [...] }
    else Multi-day or metrics (stream path)
        loop per-day
            LogsService->>Storage: getObject(day)
            Storage-->>LogsService: stream bytes
            LogsService->>Processor: stream-split lines
            Processor->>Processor: normalize & apply time filter (timestamped lines only)
        end
        Processor-->>LogsService: accumulated chronological lines (apply offset/limit)
        LogsService-->>Client: 200 { data: [...] }
    end
```
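A minimal TypeScript sketch of the tail path shown in the diagram above. The 512 KB window matches the diagram; `tailWindow` and `tailLines` are hypothetical helper names, split out as pure functions so the windowing logic is testable without a MinIO client.

```typescript
// Sketch of the single-day tail read: stat the object, fetch only the last
// TAIL_BYTES, drop a leading partial line, keep the newest `limit` entries.
const TAIL_BYTES = 512 * 1024;

export function tailWindow(fileSize: number, tailBytes: number = TAIL_BYTES) {
  // Read from `start` to EOF; if start is 0 the whole object fits the window.
  const start = Math.max(0, fileSize - tailBytes);
  return { start, readWhole: start === 0 };
}

export function tailLines(content: string, start: number, limit: number): string[] {
  const lines = content.split("\n").filter((l) => l.length > 0);
  // Blind shift mirrors the reviewed code; the review later notes it can drop
  // a valid line when the window lands exactly on a line boundary.
  if (start > 0) lines.shift();
  return lines.slice(-limit); // newest `limit` entries
}
```

In practice the `start` from `tailWindow` decides between `getPartialObject` and a full `getObject`, as in the diagram.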
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (1)
api/src/logs/__tests__/logs.service.spec.ts (1)
638-646: Minor: First `statObject` mock on line 638 is immediately overwritten.

The mock on line 638 (`mockResolvedValue({ size: fileSize })`) is overwritten by line 646 before it's used. Consider removing the unused setup to improve test clarity.

🧹 Suggested cleanup

```diff
- mockMinioClient.statObject.mockResolvedValue({ size: fileSize });
- // Tail window covers the whole file in this case, so getObject is used
- // But for a proper tail test, simulate a file larger than TAIL_BYTES
- // by using getPartialObject with the tail portion
  const tailContent = lines.slice(-15).join("\n") + "\n";
- const tailSize = Buffer.byteLength(tailContent, "utf-8");
  const bigFileSize = 512 * 1024 + 1000; // bigger than TAIL_BYTES
  mockMinioClient.statObject.mockResolvedValue({ size: bigFileSize });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/__tests__/logs.service.spec.ts` around lines 638 - 646, The first mock call mockMinioClient.statObject.mockResolvedValue({ size: fileSize }) is overwritten by the later mockResolvedValue({ size: bigFileSize }) and never used; remove the unused setup (the reference to fileSize) or consolidate so only the intended statObject mock for the big file remains, ensuring the test uses bigFileSize for the tail scenario (check nearby references to fileSize, bigFileSize, tailContent and getPartialObject/getObject to keep the expected behavior).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/src/logs/logs.service.ts`:
- Around line 156-201: tailFilteredLogs currently ignores query.offset causing
pagination to fail; implement offset support by trimming entries to the last
(query.limit + query.offset) lines then dropping the first query.offset results
before returning. Concretely, in tailFilteredLogs after populating entries (and
before the final splice that enforces limit), compute const keep =
Math.min(entries.length, query.limit + (query.offset ?? 0)); if (entries.length
> keep) entries.splice(0, entries.length - keep); then if (query.offset &&
query.offset > 0) entries.splice(0, Math.min(query.offset, entries.length));
this uses existing symbols entries, query.offset, query.limit and keeps
normalizeLine and the rest of the method unchanged; also update callers/test or
add a defensive check in use-bot-logs.ts loadMore if needed.
In `@frontend/src/components/bot/interval-picker.tsx`:
- Around line 35-36: The hour presets in interval-picker.tsx currently set both
"1h" and "6h" with days: 0 which yields identical date-only queries; update the
preset data structure (e.g., add an hours or duration field to the preset
objects used by the IntervalPicker component) and wire that field either to the
API call (include hours for server-side filtering in the request payload from
the function that builds query params) or perform client-side filtering by
timestamp after fetch (use the new hours value to compute start timestamp and
filter results). Specifically, modify the presets array entries for label "1h"
and "6h" to include a unique hours/duration property and update the code paths
that build the API request or process fetched logs so they consume that new
property (look for usages of the presets and the function that constructs the
date filter).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 974aa316-bd25-43a7-9826-1790fc0df68e
📒 Files selected for processing (6)
- api/src/logs/__tests__/logs.service.spec.ts
- api/src/logs/logs.service.ts
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
- frontend/src/components/bot/interval-picker.tsx
- frontend/src/components/dashboard/bot-detail-panel.tsx
- frontend/src/hooks/use-bot-logs.ts
1da061c to cb5ac95 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
api/src/logs/__tests__/logs.service.spec.ts (1)
242-255: ⚠️ Potential issue | 🟡 Minor: Early-stop test no longer exercises the stream path.

Using `dateRange: "20260401-20260401"` now routes to the single-date tail branch, so this test can pass without validating `streamFilteredLogs` early termination.

✅ Suggested test correction

```diff
- mockMinioClient.getObject.mockResolvedValue(stream);
+ mockMinioClient.getObject
+   .mockResolvedValueOnce(stream)
+   .mockResolvedValueOnce(stringStream(""));
  const result = await service.getLogs("bot-1", {
-   dateRange: "20260401-20260401",
+   dateRange: "20260401-20260402",
    limit: 2,
    offset: 0,
  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/__tests__/logs.service.spec.ts` around lines 242 - 255, The test for early-stop currently uses dateRange "20260401-20260401" which triggers the single-date tail branch instead of the stream-from-start branch, so it no longer validates stream early termination; update the test to force the stream path by using a dateRange that triggers the stream branch (e.g., a range where start != end or whatever condition your getLogs implementation uses) and keep the rest of the setup (stringStream lines, jest.spyOn(stream, "destroy") as destroySpy, and mockMinioClient.getObject.mockResolvedValue(stream)) so that service.getLogs("bot-1", ...) exercises streamFilteredLogs early termination and asserts destroySpy was called and limit behavior.
♻️ Duplicate comments (1)
api/src/logs/logs.service.ts (1)
67-77: ⚠️ Potential issue | 🟠 Major: Single-date tail path silently ignores `offset`.

`offset` is applied in `streamFilteredLogs` but not in `tailFilteredLogs`, so single-date requests return inconsistent pagination semantics and can repeat the same slice for `offset > 0`.

🐛 Minimal fail-fast fix (contract-safe)

```diff
  if (dates.length === 1 && query.type !== "metrics") {
+   if (query.offset > 0) {
+     return Failure(
+       "offset is not supported for single-date tail queries; use a dateRange query for paginated reads",
+     );
+   }
    // Single date, non-metrics: tail for latest entries
    const logPath = `logs/${botId}/${dates[0]}.log`;
    await this.tailFilteredLogs(logPath, dates[0], query, entries);
  } else {
```

Also applies to: 192-253
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/logs.service.ts` around lines 67 - 77, The single-date path uses tailFilteredLogs without honoring query.offset, causing duplicate slices for offset>0; update the call site or tailFilteredLogs implementation so offset is applied consistently: either pass query.offset into tailFilteredLogs (and make tailFilteredLogs accept an offset param) or have tailFilteredLogs skip the first query.offset matching entries (same skipping semantics as streamFilteredLogs uses with the skipped object), ensuring entries and query.limit are enforced identically between tailFilteredLogs and streamFilteredLogs.
🧹 Nitpick comments (1)
frontend/src/hooks/use-bot-logs.ts (1)
204-207: Consider centralizing dateRange serialization logic.

The same ISO-vs-day separator logic is now duplicated across multiple hooks; extracting a shared helper would reduce drift risk.

♻️ Suggested refactor

```diff
+// e.g. frontend/src/lib/logs/date-range.ts
+export function serializeDateRange(start: string, end: string): string {
+  const separator = start.includes("T") ? "--" : "-";
+  return `${start}${separator}${end}`;
+}

-const separator = startDate.includes("T") ? "--" : "-";
-updateQuery({
-  dateRange: `${startDate}${separator}${endDate}`,
+updateQuery({
+  dateRange: serializeDateRange(startDate, endDate),
   date: undefined,
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/hooks/use-bot-logs.ts` around lines 204 - 207, The dateRange serialization (choosing "--" for ISO datetimes vs "-" for YYYYMMDD) is duplicated; extract a shared helper like serializeDateRange(startDate, endDate) and use it wherever date ranges are built (e.g., in use-bot-logs.ts around updateQuery and other hooks that replicate this logic) so callers simply call updateQuery({ dateRange: serializeDateRange(startDate, endDate) }); ensure the helper lives in a common util module and includes unit tests to cover both ISO and date-only inputs.
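To make the dual-format contract concrete, here is the suggested serializer together with a hypothetical parse counterpart for the API side (the `parseDateRange` name and return shape are assumptions, not the service's actual code). ISO datetimes contain `-` themselves, which is why ISO ranges use `--` while day ranges stay `YYYYMMDD-YYYYMMDD`:

```typescript
// Serialize: pick "--" for ISO datetime ranges, "-" for day ranges.
export function serializeDateRange(start: string, end: string): string {
  const separator = start.includes("T") ? "--" : "-";
  return `${start}${separator}${end}`;
}

// Parse: check for the ISO separator first, since ISO strings also contain "-".
export function parseDateRange(range: string): { start: string; end: string; iso: boolean } {
  if (range.includes("--")) {
    const [start, end] = range.split("--");
    return { start, end, iso: true };
  }
  const [start, end] = range.split("-");
  return { start, end, iso: false };
}
```

Round-tripping both formats through these two functions is the property a shared helper's unit tests would pin down.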
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/components/dashboard/bot-detail-panel.tsx`:
- Around line 91-100: The component-scoped ref streamingInitialized prevents
per-bot defaulting; update the useEffect that currently checks
streamingInitialized, useStreaming, and calls setInterval_ so it resets the
guard on bot change: include the unique bot identifier (e.g., props.bot.id or
botId) in the effect dependencies and when that id changes clear
streamingInitialized.current (set to false) before running the existing logic,
then keep the existing check (if useStreaming && !streamingInitialized.current)
to set streamingInitialized.current = true and call setInterval_(LIVE_INTERVAL);
this ensures streamingInitialized is scoped to each bot and the interval
defaulting behaves correctly across bot switches.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c71fe195-f5ef-4e63-9a3f-5eee126b4de1
📒 Files selected for processing (8)
- api/src/logs/__tests__/logs.service.spec.ts
- api/src/logs/logs.service.ts
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
- frontend/src/components/bot/interval-picker.tsx
- frontend/src/components/dashboard/bot-detail-panel.tsx
- frontend/src/hooks/use-bot-events.ts
- frontend/src/hooks/use-bot-logs-stream.ts
- frontend/src/hooks/use-bot-logs.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
- frontend/src/components/bot/interval-picker.tsx
cb5ac95 to f512a3b (Compare)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
api/src/logs/logs.service.ts (1)
122-145: ⚠️ Potential issue | 🟠 Major: Paginate the filtered result set, not raw log lines.

`query.offset` is applied before the new datetime-window check, so ISO datetime queries can skip lines that would have been filtered out anyway and return the wrong page.

Suggested fix

```diff
- // Handle offset
- if (skipped.count < query.offset) {
-   skipped.count++;
-   continue;
- }
-
  const normalized = this.normalizeLine(line, logDate);
  // Filter by datetime window if set
  if (query.startTime && query.endTime) {
    const entryTime = normalized.timestamp
      ? new Date(normalized.timestamp)
      : null;
    if (
      entryTime &&
      (entryTime < query.startTime || entryTime > query.endTime)
    ) {
      continue; // Outside time window
    }
    // Lines without timestamps are included (can't filter, don't exclude)
  }
+
+ if (skipped.count < query.offset) {
+   skipped.count++;
+   continue;
+ }
  entries.push(normalized);
```

Mirror the same ordering in the leftover branch as well.
Also applies to: 166-186
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/logs.service.ts` around lines 122 - 145, The offset is being applied before the datetime-window filter which causes pages to skip lines that would be filtered out; move the skipped.count / query.offset check to after normalizing the line and after the datetime-window check so that query.offset and pagination are applied to the already-filtered entries (i.e., perform normalizeLine → datetime filter → then increment/check skipped.count and continue), and make the same change in the leftover branch that also handles pagination so both paths paginate the filtered result set consistently (refer to normalizeLine, entries, skipped.count, and query.offset).

api/src/logs/__tests__/logs.service.spec.ts (1)
242-261: ⚠️ Potential issue | 🟡 Minor: This test never reaches the stream path.

`dateRange: "20260401-20260401"` still parses to a single date, so `getLogs()` calls `tailFilteredLogs()` here. That means this assertion can pass without covering the early-stop logic in `streamFilteredLogs()`.

Suggested fix

```diff
- it("should stop reading early once limit is reached (stream path)", async () => {
-   // Use date range to trigger stream-from-start path
+ it("should stop reading early once limit is reached (stream path)", async () => {
+   // Use a multi-day range to trigger stream-from-start path
    const lines = Array.from(
      { length: 1000 },
      (_, i) => `{"level":"info","msg":"line-${i}"}`,
    ).join("\n");
@@
    const result = await service.getLogs("bot-1", {
-     dateRange: "20260401-20260401",
+     dateRange: "20260401-20260402",
      limit: 2,
      offset: 0,
    });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/__tests__/logs.service.spec.ts` around lines 242 - 261, The test is accidentally hitting tailFilteredLogs because the provided dateRange ("20260401-20260401") parses as a single day; change the test input so getLogs triggers the streaming branch (streamFilteredLogs) instead — e.g. use a multi-day range like "20260401-20260402" or omit/alter dateRange to force the stream-from-start path; update the assertion to still expect the stream to be destroyed and that two lines are returned, and keep references to getLogs, streamFilteredLogs, and tailFilteredLogs to locate the logic.
♻️ Duplicate comments (1)
api/src/logs/logs.service.ts (1)
249-252: ⚠️ Potential issue | 🟠 Major: `offset` still has no effect on tail reads.

This path always returns the newest `limit` entries and never consumes `query.offset`, so single-date pagination cannot advance beyond the first page.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/logs.service.ts` around lines 249 - 252, The tail-read truncation ignores query.offset and always returns the newest query.limit entries; update the trimming logic in the method that uses entries, query.limit and query.offset to apply offset before limiting—for example compute start = Math.max(0, entries.length - query.offset - query.limit) and end = entries.length - query.offset, then slice/splice to keep entries between start and end so pagination advances correctly when query.offset > 0.
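The trimming logic described in the prompt above reduces to one pure function (the `tailSlice` name is hypothetical):

```typescript
// Offset-aware tail trimming: return the window of up to `limit` entries that
// ends `offset` entries before the newest line, so page 2 of a tail read
// actually advances instead of repeating page 1.
export function tailSlice<T>(entries: T[], limit: number, offset = 0): T[] {
  const end = Math.max(0, entries.length - offset);
  const start = Math.max(0, end - limit);
  return entries.slice(start, end);
}
```

This keeps the tail path's pagination semantics aligned with `streamFilteredLogs`: page N is always `limit` entries further back in time.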
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/src/logs/logs.service.ts`:
- Around line 224-226: The code currently drops the first line whenever start >
0 (lines.shift()), which wrongly discards a valid line when the read offset
landed exactly at a newline boundary; change the condition to only drop the
first line when the read offset is mid-line by checking the actual content start
(e.g., if start > 0 && content[0] !== '\n' then lines.shift()). Update the logic
around the variables content, start, and lines in logs.service.ts so you only
remove the first partial line when the content does not begin with a newline
(consider also handling '\r\n' if your logs use CRLF).
- Around line 198-216: The code calls statObject(this.logBucket, logPath) and
then separately calls getPartialObject/getObject, but if the object is deleted
between calls the current catch only handles NoSuchKey on statObject and still
lets the subsequent read throw; update the read path to handle missing-object
the same way as the stream path: wrap the await
this.minioClient.getPartialObject(...) and await this.minioClient.getObject(...)
calls in try/catch (or check the returned stream error) and if
hasErrorCode(error) && error.code === "NoSuchKey" return an empty result (as the
stream path does), ensuring use of the same this.logBucket, logPath, and
LogsService.TAIL_BYTES symbols and preserving existing behavior for other
errors.
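The "filter first, then paginate" ordering requested for `logs.service.ts` can be sketched as a pure function (the `filterThenPaginate` name and `LogEntry` shape are hypothetical):

```typescript
// Apply the datetime-window filter before offset/limit so pagination counts
// only entries a client could actually receive. Lines without timestamps are
// kept, matching the reviewed behavior ("can't filter, don't exclude").
interface LogEntry {
  timestamp?: string;
  msg: string;
}

export function filterThenPaginate(
  entries: LogEntry[],
  offset: number,
  limit: number,
  startTime?: Date,
  endTime?: Date,
): LogEntry[] {
  const filtered = entries.filter((e) => {
    if (!startTime || !endTime || !e.timestamp) return true;
    const t = new Date(e.timestamp);
    return t >= startTime && t <= endTime;
  });
  return filtered.slice(offset, offset + limit);
}
```

With this ordering, `offset=5` skips the first 5 matching entries rather than the first 5 raw lines in the file.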
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 83b40da6-0d4d-4cdc-92b8-74b7cc03a0fe
📒 Files selected for processing (8)
- api/src/logs/__tests__/logs.service.spec.ts
- api/src/logs/logs.service.ts
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
- frontend/src/components/bot/interval-picker.tsx
- frontend/src/components/dashboard/bot-detail-panel.tsx
- frontend/src/hooks/use-bot-events.ts
- frontend/src/hooks/use-bot-logs-stream.ts
- frontend/src/hooks/use-bot-logs.ts
🚧 Files skipped from review as they are similar to previous changes (5)
- frontend/src/hooks/use-bot-events.ts
- frontend/src/hooks/use-bot-logs-stream.ts
- frontend/src/hooks/use-bot-logs.ts
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
- frontend/src/components/dashboard/bot-detail-panel.tsx
f512a3b to 5064a2d (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/src/components/bot/interval-picker.tsx (1)
91-101: ⚠️ Potential issue | 🔴 Critical: Missing `type` property causes TypeScript build failure.

The pipeline failure indicates that `handleRangeSelect` creates an object missing the required `type` property. Since custom calendar ranges are date-based (not live), this should be `type: "range"`.

🐛 Proposed fix

```diff
  const handleRangeSelect = (selected: DateRange | undefined) => {
    setRange(selected);
    if (selected?.from && selected?.to) {
      onChange({
+       type: "range",
        label: "custom",
        start: formatDate(selected.from),
        end: formatDate(selected.to),
      });
      setCalendarOpen(false);
      setRange(undefined);
    }
  };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/bot/interval-picker.tsx` around lines 91 - 101, handleRangeSelect is building the onChange payload without the required type field, causing TypeScript failures; update the object passed to onChange inside handleRangeSelect to include type: "range" (since this is a custom date range) along with label, start, and end so the shape matches the expected type/interface used by onChange.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/components/bot/interval-picker.tsx`:
- Around line 44-66: The test is calling computeInterval with the old signature
(computeInterval("7d", 7)) so it silently defaults to 1 day; update the test in
frontend/src/components/bot/__tests__/interval-picker.test.tsx to call
computeInterval("7d", { days: 7 }) to match the new preset object signature used
by computeInterval and ensure the intended 7-day range is produced.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 964b6986-a907-4ed1-a0b7-f9221738d096
📒 Files selected for processing (9)
- api/src/logs/__tests__/logs.service.spec.ts
- api/src/logs/logs.service.ts
- frontend/src/app/api/logs/[botId]/route.ts
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
- frontend/src/components/bot/interval-picker.tsx
- frontend/src/components/dashboard/bot-detail-panel.tsx
- frontend/src/hooks/use-bot-events.ts
- frontend/src/hooks/use-bot-logs-stream.ts
- frontend/src/hooks/use-bot-logs.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- frontend/src/hooks/use-bot-events.ts
- frontend/src/hooks/use-bot-logs-stream.ts
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
5064a2d to b2c4c15 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
api/src/logs/logs.service.ts (1)
122-144: ⚠️ Potential issue | 🟠 Major: Apply `offset` after the datetime filter.

`offset` is consumed before the new `startTime`/`endTime` check, so multi-day ISO datetime ranges skip lines that would never be returned. `offset=5` should skip the first 5 matching entries, not the first 5 lines in the file.

Move the skip check below the time-window filter

```diff
- // Handle offset
- if (skipped.count < query.offset) {
-   skipped.count++;
-   continue;
- }
-
  const normalized = this.normalizeLine(line, logDate);
  // Filter by datetime window if set
  if (query.startTime && query.endTime) {
    const entryTime = normalized.timestamp
      ? new Date(normalized.timestamp)
      : null;
    if (
      entryTime &&
      (entryTime < query.startTime || entryTime > query.endTime)
    ) {
      continue; // Outside time window
    }
    // Lines without timestamps are included (can't filter, don't exclude)
  }
+
+ if (skipped.count < query.offset) {
+   skipped.count++;
+   continue;
+ }
  entries.push(normalized);
```

Apply the same reordering in Lines 165-186 for the leftover branch.
Also applies to: 169-186
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/logs.service.ts` around lines 122 - 144, The offset is being applied before the datetime filter causing non-matching lines to consume skipped.count; move the skip logic so it runs after the time-window check: first call this.normalizeLine(line, logDate) and perform the startTime/endTime filtering using normalized.timestamp (as currently done), and only if the line passes the time filter apply the skipped.count/offset check and then push into entries; make the identical reorder in the alternate/leftover branch where skipped.count is currently checked (the same logic around skipped.count, normalizeLine, time-window check, and entries.push should be applied there too).

api/src/logs/__tests__/logs.service.spec.ts (1)
242-257: ⚠️ Potential issue | 🟡 Minor: This test still takes the tail branch.

`dateRange: "20260401-20260401"` collapses to a single date, so `getLogs()` now calls `tailFilteredLogs()`, not `streamFilteredLogs()`. The `destroy()` assertion can pass without proving the early-stop behavior you're trying to cover.

Use a multi-day range to force streaming

```diff
- dateRange: "20260401-20260401",
+ dateRange: "20260401-20260402",
```
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/__tests__/logs.service.spec.ts` around lines 242 - 257, The test "should stop reading early once limit is reached (stream path)" is using a single-day dateRange so service.getLogs() calls tailFilteredLogs() instead of streamFilteredLogs(); change the test to use a multi-day range (e.g., "20260401-20260402" or any span >1 day) so getLogs() exercises the stream path, then keep the destroy() spy/assertion to verify early stop behavior; refer to service.getLogs, tailFilteredLogs, and streamFilteredLogs when updating the test.
♻️ Duplicate comments (3)
frontend/src/components/dashboard/bot-detail-panel.tsx (1)
91-100: ⚠️ Potential issue | 🟠 Major: Reset the Live-default guard per bot.

`streamingInitialized` is still component-scoped, so switching bots reuses the previous bot's interval state. Realtime → scheduled can leave the new bot on Live, and realtime → realtime can skip re-defaulting the new bot to Live.

Suggested fix

```diff
- const streamingInitialized = useRef(false);
+ const initializedForBotId = useRef<string | null>(null);

  useEffect(() => {
-   if (useStreaming && !streamingInitialized.current) {
-     streamingInitialized.current = true;
-     setInterval_(LIVE_INTERVAL);
-   }
- }, [useStreaming]);
+   if (!bot) return;
+   if (initializedForBotId.current === bot.id) return;
+
+   initializedForBotId.current = bot.id;
+   setInterval_(useStreaming ? LIVE_INTERVAL : DEFAULT_DAY_INTERVAL);
+ }, [bot, useStreaming]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/dashboard/bot-detail-panel.tsx` around lines 91 - 100, The current streamingInitialized ref is component-scoped so it persists across different bots; update the logic in the useEffect around streamingInitialized/useStreaming so the guard is reset whenever the active bot changes (e.g., reset streamingInitialized.current = false when the bot id/identifier changes or include the bot id in the effect dependency and reinitialize the ref), then run the existing check to set streamingInitialized.current = true and call setInterval_(LIVE_INTERVAL) only for the newly loaded bot; reference streamingInitialized (useRef), the useEffect that depends on useStreaming, and setInterval_/LIVE_INTERVAL when applying the change.

api/src/logs/logs.service.ts (2)
224-226: ⚠️ Potential issue | 🟡 Minor

Do not drop the first tailed line blindly.

If the tail boundary lands exactly on a line boundary, `lines.shift()` removes a valid entry. Read one byte earlier, or inspect the preceding byte, before deciding whether the first line is partial.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/logs.service.ts` around lines 224 - 226, The code currently does "const lines = content.split('\n'); if (start > 0) lines.shift()" which blindly drops the first line when start>0 and thus can remove a valid line if the read boundary landed exactly at a line break; instead, before calling lines.shift() check the actual preceding byte/char at position start-1 (or read one byte earlier) — only drop the first element if that preceding byte is not a newline (i.e., the first returned line is partial); reference the variables/content handling where you split into lines, the start variable, and the conditional shift to implement this guard.
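The boundary check the prompt describes can be sketched as follows. This is a minimal illustration, not the service's actual code: `splitTail` and its parameters are hypothetical names, assuming the caller has fetched the byte just before the tail window (or read one byte earlier).

```typescript
// Sketch: only drop the first tailed line when the read actually started
// mid-line. `start` is the byte offset the tail read began at; `prevByte`
// is the byte at position start - 1 (null when reading from the beginning).
function splitTail(
  content: string,
  start: number,
  prevByte: number | null,
): string[] {
  const lines = content.split("\n");
  // If the byte just before our window is a newline (0x0A), the first
  // line in `content` is complete and must be kept.
  if (start > 0 && prevByte !== 0x0a) {
    lines.shift(); // first line is partial: its beginning was cut off
  }
  return lines;
}
```

With this guard, a tail read that happens to begin exactly after a newline no longer loses a valid log line.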
209-216: ⚠️ Potential issue | 🟠 Major

Catch `NoSuchKey` on the tail read too.

This is still a stat-then-read sequence. If the object is rotated or deleted between those two MinIO calls, this path fails the request instead of matching `streamFilteredLogs()`'s empty-result behavior.

Suggested fix
```diff
-    const stream: NodeJS.ReadableStream =
-      start > 0
-        ? await this.minioClient.getPartialObject(
-            this.logBucket,
-            logPath,
-            start,
-          )
-        : await this.minioClient.getObject(this.logBucket, logPath);
+    let stream: NodeJS.ReadableStream;
+    try {
+      stream =
+        start > 0
+          ? await this.minioClient.getPartialObject(
+              this.logBucket,
+              logPath,
+              start,
+            )
+          : await this.minioClient.getObject(this.logBucket, logPath);
+    } catch (error: unknown) {
+      if (hasErrorCode(error) && error.code === "NoSuchKey") return;
+      throw error;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/src/logs/logs.service.ts` around lines 209 - 216, The tail-read path currently calls this.minioClient.getPartialObject / getObject and fails if the object is deleted between stat and read; wrap the getPartialObject/getObject call in a try/catch inside the same method and catch MinIO "NoSuchKey" errors (check error.code or error.name === 'NoSuchKey'), and when caught return the same empty-result behavior used by streamFilteredLogs() (e.g., resolve to an empty Readable/empty result) so rotated/deleted objects are treated like no-logs instead of an error; update references to this.minioClient.getPartialObject, this.minioClient.getObject and ensure the catch only handles NoSuchKey while rethrowing other errors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@api/src/logs/__tests__/logs.service.spec.ts`:
- Around line 242-257: The test "should stop reading early once limit is reached
(stream path)" is using a single-day dateRange so service.getLogs() calls
tailFilteredLogs() instead of streamFilteredLogs(); change the test to use a
multi-day range (e.g., "20260401-20260402" or any span >1 day) so getLogs()
exercises the stream path, then keep the destroy() spy/assertion to verify early
stop behavior; refer to service.getLogs, tailFilteredLogs, and
streamFilteredLogs when updating the test.
In `@api/src/logs/logs.service.ts`:
- Around line 122-144: The offset is being applied before the datetime filter
causing non-matching lines to consume skipped.count; move the skip logic so it
runs after the time-window check: first call this.normalizeLine(line, logDate)
and perform the startTime/endTime filtering using normalized.timestamp (as
currently done), and only if the line passes the time filter apply the
skipped.count/offset check and then push into entries; make the identical
reorder in the alternate/leftover branch where skipped.count is currently
checked (the same logic around skipped.count, normalizeLine, time-window check,
and entries.push should be applied there too).
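The reorder described above — time-filter first, then consume the offset — can be sketched like this. All names (`LogEntry`, `collectLines`, etc.) are illustrative stand-ins for the service's internals, not its actual API:

```typescript
// Sketch: apply the datetime window BEFORE the offset, so only lines that
// match the window consume the skip count.
interface LogEntry {
  timestamp: number;
  line: string;
}

function collectLines(
  lines: LogEntry[],
  startTime: number | null,
  endTime: number | null,
  offset: number,
  limit: number,
): LogEntry[] {
  const entries: LogEntry[] = [];
  let skipped = 0;
  for (const entry of lines) {
    // 1. Time-window check first.
    if (startTime !== null && entry.timestamp < startTime) continue;
    if (endTime !== null && entry.timestamp > endTime) continue;
    // 2. Only matching lines consume the offset.
    if (skipped < offset) {
      skipped++;
      continue;
    }
    entries.push(entry);
    if (entries.length >= limit) break; // early stop once limit is reached
  }
  return entries;
}
```

Filtering before skipping keeps pagination stable: `offset` always refers to positions within the filtered result set, not within the raw file.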
---
Duplicate comments:
In `@api/src/logs/logs.service.ts`:
- Around line 224-226: The code currently does "const lines =
content.split('\n'); if (start > 0) lines.shift()" which blindly drops the first
line when start>0 and thus can remove a valid line if the read boundary landed
exactly at a line break; instead, before calling lines.shift() check the actual
preceding byte/char at position start-1 (or read one byte earlier) — only drop
the first element if that preceding byte is not a newline (i.e., the first
returned line is partial); reference the variables/content handling where you
split into lines, the start variable, and the conditional shift to implement
this guard.
- Around line 209-216: The tail-read path currently calls
this.minioClient.getPartialObject / getObject and fails if the object is deleted
between stat and read; wrap the getPartialObject/getObject call in a try/catch
inside the same method and catch MinIO "NoSuchKey" errors (check error.code or
error.name === 'NoSuchKey'), and when caught return the same empty-result
behavior used by streamFilteredLogs() (e.g., resolve to an empty Readable/empty
result) so rotated/deleted objects are treated like no-logs instead of an error;
update references to this.minioClient.getPartialObject,
this.minioClient.getObject and ensure the catch only handles NoSuchKey while
rethrowing other errors.
In `@frontend/src/components/dashboard/bot-detail-panel.tsx`:
- Around line 91-100: The current streamingInitialized ref is component-scoped
so it persists across different bots; update the logic in the useEffect around
streamingInitialized/useStreaming so the guard is reset whenever the active bot
changes (e.g., reset streamingInitialized.current = false when the bot
id/identifier changes or include the bot id in the effect dependency and
reinitialize the ref), then run the existing check to set
streamingInitialized.current = true and call setInterval_(LIVE_INTERVAL) only
for the newly loaded bot; reference streamingInitialized (useRef), the useEffect
that depends on useStreaming, and setInterval_/LIVE_INTERVAL when applying the
change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ac2e6aa7-c280-4a0a-9397-2da7cf72dd54
📒 Files selected for processing (9)
- api/src/logs/__tests__/logs.service.spec.ts
- api/src/logs/logs.service.ts
- frontend/src/app/api/logs/[botId]/route.ts
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
- frontend/src/components/bot/interval-picker.tsx
- frontend/src/components/dashboard/bot-detail-panel.tsx
- frontend/src/hooks/use-bot-events.ts
- frontend/src/hooks/use-bot-logs-stream.ts
- frontend/src/hooks/use-bot-logs.ts
✅ Files skipped from review due to trivial changes (1)
- frontend/src/app/api/logs/[botId]/route.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/src/components/bot/__tests__/interval-picker.test.tsx
Force-pushed from f553daa to 1994802
API: tail reads from end of file for single-date queries. Supports ISO datetime ranges (start--end format) for sub-day filtering alongside existing YYYYMMDD-YYYYMMDD format. CLI unchanged.

Frontend: 1h/6h presets send actual datetime ranges. Day presets send YYYYMMDD as before. Default limit bumped to 1000. Live is default for realtime bots. IntervalValue has type field (live|range). API proxy now forwards all query params instead of cherry-picking.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
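The two range formats the commit message mentions could be distinguished roughly as follows. The `--` separator for ISO datetime ranges comes from the commit message; the helper name and return shape are assumptions for illustration:

```typescript
// Sketch: distinguish an ISO datetime range ("start--end") from the legacy
// day range ("YYYYMMDD-YYYYMMDD"). Checking for "--" first matters, because
// ISO datetimes themselves contain single hyphens.
interface ParsedRange {
  kind: "datetime" | "day";
  start: string;
  end: string;
}

function parseDateRange(raw: string): ParsedRange {
  if (raw.includes("--")) {
    // e.g. "2026-04-01T10:00:00Z--2026-04-01T11:00:00Z"
    const [start, end] = raw.split("--");
    return { kind: "datetime", start, end };
  }
  // e.g. "20260401-20260402"
  const [start, end] = raw.split("-");
  return { kind: "day", start, end };
}
```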
Force-pushed from 1994802 to 6909bac
Summary
- API: Tail-based log reads via `getPartialObject` (512KB tail window)
- Frontend: Hour presets
- Frontend: Fixes
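The 512KB tail window from the summary implies a stat-then-partial-read sequence: stat the object for its size, then read only the last window. A minimal sketch of the offset computation (the constant and helper name are assumptions; `statObject`/`getPartialObject` are the MinIO client calls named in the review):

```typescript
// Sketch: compute the byte offset for a ~512KB tail read. If the object is
// smaller than the window, read from offset 0 — in that case the first line
// is guaranteed complete and must not be dropped.
const TAIL_WINDOW_BYTES = 512 * 1024;

function tailStart(
  objectSize: number,
  window: number = TAIL_WINDOW_BYTES,
): number {
  return objectSize > window ? objectSize - window : 0;
}
```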
Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Tests