First part of openAI realtime cleanup. #176
Caution: Review failed. The pull request is closed.

Walkthrough

This PR tightens agent track-handling safety, refactors the OpenAI Realtime integration (new Realtime/RTCManager constructors, client-based SDP exchange, async connect/send APIs), removes an AudioForwarder info log, and bumps dependencies.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test
    participant Realtime
    participant RTCManager
    participant AsyncOpenAI
    participant WebRTC as "WebRTC Peer"
    Test->>Realtime: __init__(model, api_key, client, fps, realtime_session)
    Realtime->>Realtime: build realtime_session (audio/transcription/VAD)
    Realtime->>Realtime: _set_instructions("be friendly")
    Realtime->>RTCManager: __init__(realtime_session, client)
    Test->>Realtime: connect()
    Realtime->>RTCManager: connect()
    RTCManager->>RTCManager: _add_data_channel / set tracks
    RTCManager->>AsyncOpenAI: POST /realtime (SDP offer via _exchange_sdp)
    AsyncOpenAI-->>RTCManager: SDP answer
    RTCManager->>WebRTC: setRemoteDescription(answer)
    RTCManager-->>Realtime: connection established
    Test->>Realtime: send_text("Hello")
    Realtime->>RTCManager: send_text()
    RTCManager->>RTCManager: _send_event (data channel)
    RTCManager->>WebRTC: data channel send
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📜 Recent review details

Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
📒 Files selected for processing (1)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)

126-126: Drop the write to `self.rtc.instructions`

`RTCManager` has no `instructions` attribute, so this line is both a mypy error and a dead write. With the guarded assignment above, the session already carries the right instructions. Please remove the line (or replace it with a proper session update event if you still need to propagate changes).

plugins/openai/vision_agents/plugins/openai/rtc_manager.py (1)

8-51: Fix WebRTC track typings

`self._video_to_openai_track` is always instantiated, yet typed optional; `self._video_sender` starts as `None` but later stores an `RTCRtpSender`. This leads to the mypy errors surfaced in CI. Please tighten the types and import `RTCRtpSender`:

```diff
-from aiortc import RTCPeerConnection, RTCSessionDescription, RTCDataChannel
+from aiortc import (
+    RTCPeerConnection,
+    RTCSessionDescription,
+    RTCDataChannel,
+    RTCRtpSender,
+)
 …
-        self._audio_to_openai_track: QueuedAudioTrack = QueuedAudioTrack(
-            sample_rate=48000
-        )
-        self._video_to_openai_track: Optional[QueuedVideoTrack] = QueuedVideoTrack()
-        self._video_sender = None
+        self._audio_to_openai_track: QueuedAudioTrack = QueuedAudioTrack(
+            sample_rate=48000
+        )
+        self._video_to_openai_track: QueuedVideoTrack = QueuedVideoTrack()
+        self._video_sender: Optional[RTCRtpSender] = None
```
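The typing pattern the suggested diff relies on (declare the field `Optional[...]`, then narrow it before use) works the same in any class. A minimal standalone sketch, with `Sender` and `Manager` as illustrative stand-ins rather than the plugin's real classes:

```python
from typing import Optional


class Sender:
    """Stand-in for aiortc's RTCRtpSender in this sketch."""

    def __init__(self, track: str) -> None:
        self.track = track


class Manager:
    def __init__(self) -> None:
        # Always instantiated, so no Optional needed on the track itself.
        self.video_track: str = "video0"
        # Starts empty, later holds a Sender; Optional lets mypy accept both.
        self.video_sender: Optional[Sender] = None

    def add_track(self) -> Sender:
        if self.video_sender is None:  # narrow the Optional before reuse
            self.video_sender = Sender(self.video_track)
        return self.video_sender


m = Manager()
assert m.video_sender is None
s = m.add_track()
assert m.add_track() is s  # second call reuses the existing sender
```

Without the `Optional[Sender]` annotation, mypy infers `None` from the initializer and rejects the later assignment, which is exactly the CI error quoted above.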
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (7)
- agents-core/vision_agents/core/agents/agents.py (2 hunks)
- agents-core/vision_agents/core/utils/audio_forwarder.py (0 hunks)
- plugins/openai/pyproject.toml (1 hunks)
- plugins/openai/tests/test_openai_realtime.py (1 hunks)
- plugins/openai/vision_agents/plugins/openai/openai_realtime.py (7 hunks)
- plugins/openai/vision_agents/plugins/openai/rtc_manager.py (7 hunks)
- tests/test_audio_queue.py (1 hunks)
💤 Files with no reviewable changes (1)
- agents-core/vision_agents/core/utils/audio_forwarder.py
🧰 Additional context used
🧬 Code graph analysis (3)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (4)
- plugins/openai/tests/test_openai_realtime.py (1): `realtime` (20-32)
- agents-core/vision_agents/core/edge/sfu_events.py (1): `Participant` (229-270)
- plugins/openai/vision_agents/plugins/openai/rtc_manager.py (1): `RTCManager` (29-297)
- agents-core/vision_agents/core/llm/llm.py (2): `_set_instructions` (206-210), `_build_enhanced_instructions` (83-109)

plugins/openai/tests/test_openai_realtime.py (2)
- plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1): `_set_instructions` (439-441)
- agents-core/vision_agents/core/llm/llm.py (1): `_set_instructions` (206-210)

plugins/openai/vision_agents/plugins/openai/rtc_manager.py (6)
- plugins/openai/tests/test_openai_realtime.py (1): `realtime` (20-32)
- agents-core/vision_agents/core/utils/audio_track.py (1): `QueuedAudioTrack` (9-11)
- agents-core/vision_agents/core/utils/video_track.py (2): `QueuedVideoTrack` (12-74), `stop` (73-74)
- plugins/openai/vision_agents/plugins/openai/openai_realtime.py (2): `connect` (114-140), `close` (180-181)
- agents-core/vision_agents/core/utils/audio_forwarder.py (3): `AudioForwarder` (12-68), `start` (30-35), `stop` (37-46)
- agents-core/vision_agents/core/utils/video_forwarder.py (4): `start` (94-100), `remove_frame_handler` (76-92), `add_frame_handler` (48-74), `stop` (102-112)
🪛 GitHub Actions: CI (unit)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py
[error] 90-90: Value of "instructions" has incompatible type "str | None"; expected "str" [typeddict-item]
[error] 126-126: "RTCManager" has no attribute "instructions" [attr-defined]
[error] 270-270: Incompatible types in assignment (expression has type "RateLimitsUpdatedEvent", variable has type "SessionCreatedEvent") [assignment]
[error] 272-272: Incompatible types in assignment (expression has type "ResponseDoneEvent", variable has type "SessionCreatedEvent") [assignment]
[error] 274-274: "SessionCreatedEvent" has no attribute "response" [attr-defined]
[error] 275-275: "SessionCreatedEvent" has no attribute "response" [attr-defined]
[error] 441-441: Value of "instructions" has incompatible type "str | None"; expected "str" [typeddict-item]
plugins/openai/vision_agents/plugins/openai/rtc_manager.py
[error] 76-76: Incompatible types in assignment (expression has type "RTCRtpSender", variable has type "None") [assignment]
[error] 76-76: Argument 1 to "addTrack" of "RTCPeerConnection" has incompatible type "QueuedVideoTrack | None"; expected "MediaStreamTrack" [arg-type]
[error] 87-87: Argument "role" to "ConversationItem" has incompatible type "str"; expected "Literal['user', 'assistant', 'system'] | None" [arg-type]
[error] 198-198: Incompatible types in assignment (expression has type "RTCRtpSender", variable has type "None") [assignment]
[error] 297-297: Value of type "Coroutine[Any, Any, None]" must be used. Are you missing an await?
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: unit / Ruff & mypy
- GitHub Check: unit / Test "not integration"
```python
self.realtime_session["audio"]["output"] = RealtimeAudioConfigOutputParam()
self.realtime_session["audio"]["output"]["voice"] = self.voice

self.realtime_session["instructions"] = self.instructions
```
Avoid writing None into session instructions
RealtimeSessionCreateRequestParam["instructions"] is declared as str, but both here and in _set_instructions we can assign None. That violates the API contract (see the CI failure at Line 90) and will surface as a runtime bug once the typed dict is validated. Please guard the assignment and drop the key when we have no content.
```diff
-        self.realtime_session["instructions"] = self.instructions
+        if self.instructions is not None:
+            self.realtime_session["instructions"] = self.instructions
+        else:
+            self.realtime_session.pop("instructions", None)
 …
-        self.realtime_session["instructions"] = self._build_enhanced_instructions()
+        enhanced_instructions = self._build_enhanced_instructions()
+        if enhanced_instructions is not None:
+            self.realtime_session["instructions"] = enhanced_instructions
+        else:
+            self.realtime_session.pop("instructions", None)
```

Also applies to: 439-441
🧰 Tools
🪛 GitHub Actions: CI (unit)
[error] 90-90: Value of "instructions" has incompatible type "str | None"; expected "str" [typeddict-item]
🤖 Prompt for AI Agents
In plugins/openai/vision_agents/plugins/openai/openai_realtime.py around line 90
(and similarly around lines 439-441), the code assigns self.instructions (which
may be None) directly into realtime_session["instructions"], violating the
declared str type; change the logic to only set the "instructions" key when
self.instructions is a non-empty string (or truthy), and otherwise ensure the
key is removed or not present in the dict (i.e., guard the assignment and drop
the key when there is no content).
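The guard the prompt describes reduces to a small pure function. The dict below stands in for the realtime session TypedDict and is illustrative only:

```python
from typing import Optional


def apply_instructions(session: dict, instructions: Optional[str]) -> dict:
    """Set the key only when there is real content; drop it otherwise."""
    if instructions:
        session["instructions"] = instructions
    else:
        # Removing the key keeps the dict valid against a `str`-typed field,
        # instead of smuggling None (or "") into it.
        session.pop("instructions", None)
    return session


session = {"instructions": "stale"}
apply_instructions(session, None)
assert "instructions" not in session
apply_instructions(session, "be friendly")
assert session["instructions"] == "be friendly"
```

Dropping the key rather than writing `None` is what keeps the typed-dict contract (`instructions: str`) intact at both type-check and run time.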
```python
async def start_video_sender(
    self, stream_video_track: MediaStreamTrack, fps: int = 1, shared_forwarder=None
) -> None:
    await self._set_video_track()

    # This method can be called twice with different forwarders
    # Remove handler from old forwarder if it exists
    if self._current_video_forwarder is not None:
        await self._current_video_forwarder.remove_frame_handler(self._send_video_frame)
        logger.debug("Removed old video frame handler from previous forwarder")

    # Store reference to new forwarder and add handler
    self._current_video_forwarder = shared_forwarder
    shared_forwarder.add_frame_handler(
        self._send_video_frame, fps=float(fps), name="openai"
    )
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Handle missing shared_forwarder before using it
start_video_sender is called from watch_video_track with shared_forwarder=None, which currently raises an AttributeError on the very next line. Please either require the caller to supply a forwarder or create one here before attaching handlers. For example:
```diff
-    if self._current_video_forwarder is not None:
-        await self._current_video_forwarder.remove_frame_handler(self._send_video_frame)
-        logger.debug("Removed old video frame handler from previous forwarder")
-
-    # Store reference to new forwarder and add handler
-    self._current_video_forwarder = shared_forwarder
-    shared_forwarder.add_frame_handler(
-        self._send_video_frame, fps=float(fps), name="openai"
-    )
+    if self._current_video_forwarder is not None:
+        await self._current_video_forwarder.remove_frame_handler(
+            self._send_video_frame
+        )
+        logger.debug("Removed old video frame handler from previous forwarder")
+
+    if shared_forwarder is None:
+        raise ValueError(
+            "shared_forwarder must be provided when starting the video sender"
+        )
+
+    self._current_video_forwarder = shared_forwarder
+    shared_forwarder.add_frame_handler(
+        self._send_video_frame, fps=float(fps), name="openai"
+    )
```

This turns the crash into a clear error and prompts the caller to supply the forwarder it intends to share.
🤖 Prompt for AI Agents
In plugins/openai/vision_agents/plugins/openai/rtc_manager.py around lines 103
to 118, start_video_sender dereferences shared_forwarder without checking for
None which causes an AttributeError when called with shared_forwarder=None;
update the method to validate shared_forwarder before use — either raise a clear
ValueError indicating the caller must provide a forwarder or instantiate a
default forwarder locally (and document which behavior is chosen), and only call
add_frame_handler/remove_frame_handler when shared_forwarder is not None; ensure
_current_video_forwarder is updated consistently and add a debug log for the
chosen path.
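A reduced, runnable sketch of the fail-fast guard; `Forwarder` here is a hypothetical stand-in for the shared video forwarder, not the project's class:

```python
class Forwarder:
    """Hypothetical stand-in for the shared video forwarder."""

    def __init__(self) -> None:
        self.handlers = []

    def add_frame_handler(self, handler, fps: float, name: str) -> None:
        self.handlers.append((handler, fps, name))


def start_video_sender(shared_forwarder, fps: int = 1):
    # Fail fast with a clear message instead of an AttributeError
    # deep inside the handler-registration call.
    if shared_forwarder is None:
        raise ValueError("shared_forwarder must be provided")
    shared_forwarder.add_frame_handler(lambda frame: frame, fps=float(fps), name="openai")
    return shared_forwarder


fwd = start_video_sender(Forwarder())
assert len(fwd.handlers) == 1
```

The explicit `ValueError` names the contract the caller violated, which is far easier to diagnose than `'NoneType' object has no attribute 'add_frame_handler'`.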
Actionable comments posted: 2
♻️ Duplicate comments (2)
plugins/openai/vision_agents/plugins/openai/rtc_manager.py (1)

103-118: `shared_forwarder` can still be `None` when dereferenced.

Line 116 calls `shared_forwarder.add_frame_handler(...)` without verifying that `shared_forwarder` is not `None`, which will raise an `AttributeError` when called with the default parameter value. Apply this diff to validate the parameter:

```diff
 async def start_video_sender(
     self, stream_video_track: MediaStreamTrack, fps: int = 1, shared_forwarder=None
 ) -> None:
     await self._set_video_track()

     # This method can be called twice with different forwarders
     # Remove handler from old forwarder if it exists
     if self._current_video_forwarder is not None:
         await self._current_video_forwarder.remove_frame_handler(self._send_video_frame)
         logger.debug("Removed old video frame handler from previous forwarder")

+    if shared_forwarder is None:
+        raise ValueError(
+            "shared_forwarder must be provided when starting the video sender"
+        )
+
     # Store reference to new forwarder and add handler
     self._current_video_forwarder = shared_forwarder
     shared_forwarder.add_frame_handler(
         self._send_video_frame, fps=float(fps), name="openai"
     )
```

Based on past review comments.
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)

432-434: Instructions assignment may need refinement.

The `or ""` fallback ensures a string type, but the past review comment suggested omitting the key entirely when there's no content rather than setting an empty string. An empty string might have different semantics than an absent key in the OpenAI API. Consider this approach to conditionally set the key:

```diff
 def _set_instructions(self, instructions: str):
     super()._set_instructions(instructions)
-    self.realtime_session["instructions"] = self._build_enhanced_instructions() or ""
+    enhanced = self._build_enhanced_instructions()
+    if enhanced:
+        self.realtime_session["instructions"] = enhanced
+    else:
+        self.realtime_session.pop("instructions", None)
```

Based on past review comments.
🧹 Nitpick comments (2)
plugins/openai/vision_agents/plugins/openai/rtc_manager.py (2)

193-200: Reduce logging verbosity for internal operations.

Lines 194 and 197 use `logger.info` for internal method calls. These should be `logger.debug` to avoid cluttering production logs. Apply this diff:

```diff
 async def _set_video_track(self) -> None:
-    logger.info("_set_video_track")
+    logger.debug("_set_video_track")
     if self._video_to_openai_track:
         if self._video_sender is None:
-            logger.info("_set_video_track enableing addTrack")
+            logger.debug("_set_video_track enabling addTrack")
             self._video_sender = self.pc.addTrack(self._video_to_openai_track)
             # adding tracks requires renegotiation
             await self.renegotiate()
```

233-239: Video frame logging will flood production logs.

Line 237 logs every video frame at `info` level. With typical video at 1+ fps, this will generate excessive log volume. Apply this diff:

```diff
 async def _send_video_frame(self, frame: av.VideoFrame) -> None:
     """
     Send a video frame to Gemini using send_realtime_input
     """
-    logger.info(f"Sending video frame: {frame}")
+    logger.debug(f"Sending video frame: {frame}")
     if self._video_to_openai_track:
         await self._video_to_openai_track.add_frame(frame)
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- plugins/openai/vision_agents/plugins/openai/openai_realtime.py (7 hunks)
- plugins/openai/vision_agents/plugins/openai/rtc_manager.py (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
plugins/openai/vision_agents/plugins/openai/rtc_manager.py (5)
- agents-core/vision_agents/core/utils/audio_track.py (1): `QueuedAudioTrack` (9-11)
- agents-core/vision_agents/core/utils/video_track.py (2): `QueuedVideoTrack` (12-74), `stop` (73-74)
- plugins/openai/vision_agents/plugins/openai/openai_realtime.py (2): `connect` (112-130), `close` (170-171)
- agents-core/vision_agents/core/utils/audio_forwarder.py (3): `AudioForwarder` (12-68), `start` (30-35), `stop` (37-46)
- agents-core/vision_agents/core/utils/video_forwarder.py (4): `start` (94-100), `remove_frame_handler` (76-92), `add_frame_handler` (48-74), `stop` (102-112)

plugins/openai/vision_agents/plugins/openai/openai_realtime.py (4)
- plugins/openai/tests/test_openai_realtime.py (1): `realtime` (20-32)
- agents-core/vision_agents/core/edge/sfu_events.py (1): `Participant` (229-270)
- plugins/openai/vision_agents/plugins/openai/rtc_manager.py (1): `RTCManager` (29-298)
- agents-core/vision_agents/core/llm/llm.py (2): `_set_instructions` (206-210), `_build_enhanced_instructions` (83-109)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Ruff & mypy
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Ruff & mypy
🔇 Additional comments (12)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (6)
64-106: Constructor refactoring looks solid.

The new signature accommodates flexible client injection and realtime session configuration. The audio config defaults (semantic VAD, gpt-4o-mini-transcribe) are sensible, and the three-tier client initialization (injected → api_key → env) provides good flexibility.

112-130: Clean connection flow.

The connect method properly sequences callback wiring, RTC connection establishment, tool registration, and event emission. Well-structured.

132-168: LGTM on simple response methods.

Both methods appropriately delegate to the RTCManager layer. The participant tracking in `simple_audio_response` supports multi-user scenarios correctly.

196-214: Event normalization approach is sound.

The workaround for OpenAI's event type inconsistency (copying and normalizing to `response.audio_transcript.done`) is a pragmatic solution to the upstream issue.

329-430: Tool execution flow is well-designed.

The tool call handling properly extracts parameters, executes via `_run_one_tool`, and sends results back. The automatic `response.create` trigger (lines 419-427) ensures the conversation continues with audio after tool execution, which maintains natural interaction flow.

436-520: Helper methods are well-implemented.

The tool conversion and registration logic properly transforms internal schemas to OpenAI's format and gracefully handles edge cases (empty tool lists, missing fields).
plugins/openai/vision_agents/plugins/openai/rtc_manager.py (6)
1-10: Import additions support the refactored API.

The `Literal` import addresses the past review comment about constraining the `role` parameter. The OpenAI realtime types enable proper type checking.

39-57: Constructor refactoring aligns with new architecture.

Accepting `realtime_session` and `client` as parameters enables proper dependency injection and removes hard-coded configuration.

59-77: Connection setup is properly sequenced.

The `connect()` method establishes the data channel, audio/video tracks, and incoming audio handling in the correct order. Renegotiation after adding the video track is appropriate.

82-101: send_text properly addresses past type safety concern.

The `role` parameter is now correctly typed as `Literal["user", "assistant", "system"]`, which prevents invalid role values and satisfies mypy.

162-189: Renegotiation and data channel setup are correct.

The SDP exchange and data channel initialization follow standard WebRTC patterns. Event handling via asyncio task creation is appropriate.

250-277: SDP exchange properly uses client API key.

The refactored `_exchange_sdp` correctly extracts the API key from the injected client and uses the model from `realtime_session`. The debug logging will aid troubleshooting.
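How a `Literal`-typed `role` constrains callers can be shown in isolation. The runtime check below mirrors what mypy enforces statically; the helper is illustrative, not the plugin's API:

```python
from typing import Literal, get_args

Role = Literal["user", "assistant", "system"]


def make_item(role: Role, text: str) -> dict:
    # mypy rejects make_item("bot", ...) at type-check time; this runtime
    # check mirrors that guarantee for the sketch.
    if role not in get_args(Role):
        raise ValueError(f"invalid role: {role}")
    return {"role": role, "content": text}


item = make_item("user", "Hello")
assert item["role"] == "user"
```

With a plain `str` parameter, a typo like `"asistant"` would pass silently to the API; the `Literal` type surfaces it before the request is ever sent.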
```diff
 elif et == "session.created":
     SessionCreatedEvent(**event)
 elif et == "rate_limits.updated":
     RateLimitsUpdatedEvent(**event)
 elif et == "response.done":
     logger.info("OpenAI response done %s", event)
-    e = ResponseDoneEvent(**event)
+    response_done_event = ResponseDoneEvent.model_validate(event)

-    if e.response.status == "failed":
-        raise Exception("OpenAI realtime failure %s", e.response)
+    if response_done_event.response.status == "failed":
+        raise Exception("OpenAI realtime failure %s", response_done_event.response)
```
| raise Exception("OpenAI realtime failure %s", response_done_event.response) |
Fix the exception format string.
Line 268 uses %s format placeholder but doesn't apply the % operator, so it will print the literal string "OpenAI realtime failure %s" rather than interpolating the response object.
Apply this diff:
```diff
-    if response_done_event.response.status == "failed":
-        raise Exception("OpenAI realtime failure %s", response_done_event.response)
+    if response_done_event.response.status == "failed":
+        raise Exception(f"OpenAI realtime failure {response_done_event.response}")
```

🤖 Prompt for AI Agents
In plugins/openai/vision_agents/plugins/openai/openai_realtime.py around lines
260 to 268, the exception raised for a failed response uses a format placeholder
without applying it, resulting in the literal string being raised; replace the
current raise Exception("OpenAI realtime failure %s",
response_done_event.response) with a single string that interpolates the
response (for example using an f-string or .format) so the response object is
included in the Exception message and remove the comma to avoid creating a
tuple.
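The bug class the fix targets is easy to demonstrate: passing extra arguments to `Exception` just builds an args tuple, it never interpolates `%s` the way logging calls do. The payload dict below is illustrative only:

```python
resp = {"status": "failed", "id": "resp_123"}  # illustrative payload

# Bug: the comma creates a two-element args tuple; "%s" stays literal.
bad = Exception("OpenAI realtime failure %s", resp)
assert bad.args == ("OpenAI realtime failure %s", resp)
assert "resp_123" not in bad.args[0]

# Fix: an f-string interpolates the response into a single message.
good = Exception(f"OpenAI realtime failure {resp}")
assert "resp_123" in str(good)
```

Logging functions (`logger.info("... %s", x)`) do lazy `%`-formatting internally, which is why the comma style is correct there but wrong in a `raise`.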
```diff
 async def close(self) -> None:
     """Close the WebRTC connection and clean up resources."""
-    try:
-        if self.data_channel is not None:
-            self.data_channel.close()
-            self.data_channel = None
-        if self._audio_to_openai_track is not None:
-            self._audio_to_openai_track.stop()
-        if self._video_to_openai_track is not None:
-            self._video_to_openai_track.stop()
-        await self.pc.close()
-    except Exception as e:
-        logger.debug(f"RTCManager close error: {e}")
+    if self.data_channel is not None:
+        self.data_channel.close()
+        self.data_channel = None
+    self._audio_to_openai_track.stop()
+    if self._video_to_openai_track is not None:
+        self._video_to_openai_track.stop()
+    close_coro = self.pc.close()
+    await asyncio.to_thread(lambda: asyncio.run(close_coro))
```
🛠️ Refactor suggestion | 🟠 Major
Simplify the peer connection close pattern.
Line 298 uses asyncio.to_thread(lambda: asyncio.run(close_coro)), which creates a new event loop in a thread to run an async coroutine. This is unnecessarily complex.
If self.pc.close() returns a coroutine, directly await it:
```diff
-        close_coro = self.pc.close()
-        await asyncio.to_thread(lambda: asyncio.run(close_coro))
+        await self.pc.close()
```

If `self.pc.close()` is actually a blocking synchronous call, then:

```diff
-        close_coro = self.pc.close()
-        await asyncio.to_thread(lambda: asyncio.run(close_coro))
+        await asyncio.to_thread(self.pc.close)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 async def close(self) -> None:
-    """Close the WebRTC connection and clean up resources."""
-    try:
-        if self.data_channel is not None:
-            self.data_channel.close()
-            self.data_channel = None
-        if self._audio_to_openai_track is not None:
-            self._audio_to_openai_track.stop()
-        if self._video_to_openai_track is not None:
-            self._video_to_openai_track.stop()
-        await self.pc.close()
-    except Exception as e:
-        logger.debug(f"RTCManager close error: {e}")
-    if self.data_channel is not None:
-        self.data_channel.close()
-        self.data_channel = None
-    self._audio_to_openai_track.stop()
-    if self._video_to_openai_track is not None:
-        self._video_to_openai_track.stop()
-    close_coro = self.pc.close()
-    await asyncio.to_thread(lambda: asyncio.run(close_coro))
+    if self.data_channel is not None:
+        self.data_channel.close()
+        self.data_channel = None
+    self._audio_to_openai_track.stop()
+    if self._video_to_openai_track is not None:
+        self._video_to_openai_track.stop()
+    await self.pc.close()
```
🤖 Prompt for AI Agents
In plugins/openai/vision_agents/plugins/openai/rtc_manager.py around lines 290
to 298, the peer-connection close uses asyncio.to_thread(lambda:
asyncio.run(close_coro)) which spawns a new event loop in a thread; instead, if
self.pc.close() returns a coroutine, directly await it (await self.pc.close());
if self.pc.close() is a blocking synchronous call, call it in a thread without
creating a new event loop (await asyncio.to_thread(self.pc.close)); also keep
existing cleanup (closing data_channel, stopping tracks) but avoid wrapping an
already-async coroutine in asyncio.run inside to_thread.
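Both corrected shapes can be exercised with small stand-ins. `AsyncPC` and `SyncPC` below are illustrative, not aiortc classes:

```python
import asyncio


class AsyncPC:
    """Stand-in whose close() is a coroutine, like an async peer connection."""

    def __init__(self) -> None:
        self.closed = False

    async def close(self) -> None:
        self.closed = True


class SyncPC:
    """Stand-in whose close() is a blocking synchronous call."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


async def close_async(pc: AsyncPC) -> None:
    # Coroutine: await it directly; no extra event loop or thread needed.
    await pc.close()


async def close_sync(pc: SyncPC) -> None:
    # Blocking call: offload to a worker thread, without asyncio.run inside.
    await asyncio.to_thread(pc.close)


apc, spc = AsyncPC(), SyncPC()
asyncio.run(close_async(apc))
asyncio.run(close_sync(spc))
assert apc.closed and spc.closed
```

Wrapping `asyncio.run` inside `asyncio.to_thread` spins up a second event loop in a worker thread just to drive one coroutine; either direct form above avoids that overhead and the cross-loop pitfalls it invites.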
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Dependencies