When using streaming (`stream=True`), token counts are not available until the stream completes, but `guard.record()` currently requires known token counts upfront.
We need a context manager or callback that accumulates token counts from stream chunks:
```python
with guard.stream_tracker(model="openai/gpt-4o") as tracker:
    for chunk in stream:
        tracker.add_chunk(chunk)
# auto-records on exit
```
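A minimal sketch of what the tracker could look like. The `guard` internals aren't shown here, so the standalone `stream_tracker` / `_Tracker` / `record` names below are assumptions, as is the chunk shape: OpenAI-style streams attach a `usage` object to the final chunk when `stream_options={"include_usage": True}` is set, and carry `usage=None` on the rest.

```python
from contextlib import contextmanager
from dataclasses import dataclass


@dataclass
class _Tracker:
    """Accumulates token counts from streamed chunks."""
    prompt_tokens: int = 0
    completion_tokens: int = 0

    def add_chunk(self, chunk) -> None:
        # OpenAI-style streams report usage only on the final chunk
        # (when stream_options={"include_usage": True} is requested);
        # intermediate chunks have usage=None, which we skip.
        usage = getattr(chunk, "usage", None)
        if usage is not None:
            self.prompt_tokens = usage.prompt_tokens
            self.completion_tokens = usage.completion_tokens


@contextmanager
def stream_tracker(record, model: str):
    """Yield a tracker, then record accumulated counts on exit.

    `record` stands in for guard.record() here; in the real guard this
    would be a bound method and the counts would flow into its ledger.
    """
    tracker = _Tracker()
    try:
        yield tracker
    finally:
        record(
            model=model,
            prompt_tokens=tracker.prompt_tokens,
            completion_tokens=tracker.completion_tokens,
        )
```

Recording in `finally` means a stream that raises mid-iteration still records whatever usage was seen so far, rather than silently dropping the call.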