LibMedia+LibWeb: Create demuxer and extract tracks on a separate thread #7049
Conversation
I think we may be able to avoid having a ThreadData class like in AudioDataProvider and VideoDataProvider if we just use WeakPlaybackManager to directly modify the relevant fields of the PlaybackManager in a deferred_invoke back to the main thread in initialize_from_data. We should probably also use the existing error callback for initialization errors: make it private and set it as part of the constructor, so that the media element can react to initialization errors the same way as playback errors (assuming the spec doesn't differentiate those?). Also, I think it makes sense to have the track lists start empty and append to them as we receive data instead of replacing them, since that's my understanding of how new data is handled once MSE comes into the mix.
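To illustrate the pattern being proposed, here is a minimal stand-in sketch, not Ladybird's actual API: `FakeEventLoop`, `Manager`, and the track/duration fields are all hypothetical, with `std::weak_ptr` playing the role of `WeakPlaybackManager` and a plain task queue playing the role of `deferred_invoke` back to the main thread.

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <queue>
#include <string>
#include <vector>

// Stand-in for the main thread's event loop: queued tasks run later, on the main thread.
struct FakeEventLoop {
    std::queue<std::function<void()>> tasks;
    void deferred_invoke(std::function<void()> task) { tasks.push(std::move(task)); }
    void pump() { while (!tasks.empty()) { tasks.front()(); tasks.pop(); } }
};

// Hypothetical manager; std::weak_ptr models a WeakPlaybackManager-style handle.
struct Manager {
    std::vector<std::string> tracks;
    double duration = 0;
};

// Runs "on the demuxing thread": computes results first, then defers the
// mutation of Manager fields back to the main loop, skipping it entirely
// if the manager was destroyed before the deferred task ran.
void initialize_from_data(std::weak_ptr<Manager> weak_manager, FakeEventLoop& loop)
{
    std::vector<std::string> found_tracks { "audio/0", "video/0" }; // pretend demux result
    double found_duration = 12.5;
    loop.deferred_invoke([weak_manager, found_tracks, found_duration] {
        auto manager = weak_manager.lock();
        if (!manager)
            return; // manager died before the task ran; nothing to update
        manager->tracks = found_tracks;
        manager->duration = found_duration;
    });
}
```

The point of the sketch is the ordering: no field of the manager is touched from the worker, and the weak handle makes the deferred task a no-op if the manager is gone.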
(force-pushed 7afb16f to f98a3f2)
https://html.spec.whatwg.org/multipage/media.html#media-data-processing-steps-list seems to define error handling differently for those cases.
Sure, I made the appropriate changes.
Gotcha, in that case we would probably be fine to pass a callback for handling those format errors early enough that they can be guaranteed to be handled. For playback errors, we can keep the public callback, since it can be set before any of the decoding threads start working.
(force-pushed f98a3f2 to a655e19)
Sorry to bring you another potential refactor, but I wonder if we could make the playback manager setup something like:

```cpp
m_playback_manager = PlaybackManager::create();
m_playback_manager->on_track_added = [...](Track const& track) { ... }; // Add a track based on the instructions for audio or video tracks in the spec; set the video resolution if a video track is selected
m_playback_manager->on_metadata_parsed = [...] { ... }; // Set the ready state to HAVE_METADATA, set the duration
m_playback_manager->add_media_source(m_media_data); // Triggers the above callbacks once the necessary data has been received
```

Hopefully we can also keep the getters for the preferred tracks and set their backing fields before we call on_track_added, so that the media element steps can check if there's a preferred track to enable when it receives a new track to run the processing steps on.

I also have my branch that adds an on_duration_changed event, but I think we probably want to keep the metadata-parsed part separate.
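A toy model of the callback-first setup being suggested, under stated assumptions: `ToyPlaybackManager` and its `Track` type are hypothetical stand-ins, not the real LibMedia classes, and parsing is done synchronously here purely so the control flow is visible. The key property is that both callbacks are assigned before any media data is attached, so no track or metadata notification can be missed.

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-in for a track description.
struct Track {
    std::string kind;
};

// Hypothetical stand-in for PlaybackManager: callbacks are set by the caller
// first, then add_media_source() fires them as "parsing" completes.
struct ToyPlaybackManager {
    std::function<void(Track const&)> on_track_added;
    std::function<void()> on_metadata_parsed;

    static std::shared_ptr<ToyPlaybackManager> create()
    {
        return std::make_shared<ToyPlaybackManager>();
    }

    // Synchronous for simplicity; the real design would parse on the demuxing
    // thread and defer these callbacks back to the main loop.
    void add_media_source(std::vector<Track> const& parsed_tracks)
    {
        for (auto const& track : parsed_tracks)
            if (on_track_added)
                on_track_added(track);
        if (on_metadata_parsed)
            on_metadata_parsed();
    }
};
```

Usage mirrors the snippet above: create, assign both callbacks, then attach the media source last.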
```cpp
if (audio_sink)
    return make_ref_counted<WrapperTimeProvider<AudioMixingSink>>(*audio_sink);
return make_ref_counted<GenericTimeProvider>();
})();
```
We could potentially create a GenericTimeProvider by default when initializing the PlaybackManager, and then swap it out for an AudioMixingSink when we find an audio track, so we can keep the time provider field non-null and avoid some assertions.
I think it should suffice to do:

```cpp
if (audio_sink) {
    auto current_time = playback_manager->current_time();
    playback_manager->m_time_provider = make_ref_counted<WrapperTimeProvider<AudioMixingSink>>(*audio_sink);
    playback_manager->m_time_provider->set_time(current_time);
}
```
I tried to implement it precisely as you suggested above, but AudioMixingSink::set_time() would then try to access an uninitialized m_playback_stream, so instead I made it verify that current_time() of the previous time provider is zero. I think it should be safe to assume current_time() is always zero at this point, right?
Ah, I guess I forgot to note how to handle when we already have an audio mixing sink, but in that case we don't need to recreate it. Technically, we could always use an AudioMixingSink, since it will handle adding tracks just fine, but GenericTimeProvider is obviously much lighter.
So it would be something along the lines of:

```cpp
if (!playback_manager->m_audio_tracks.is_empty() && playback_manager->m_audio_sink == nullptr) {
    playback_manager->m_audio_sink = MUST(AudioMixingSink::try_create());
    auto current_time = playback_manager->current_time();
    playback_manager->m_time_provider = make_ref_counted<WrapperTimeProvider<AudioMixingSink>>(*playback_manager->m_audio_sink);
    playback_manager->m_time_provider->set_time(current_time);
}
```

I believe this should avoid any issues with an unset playback stream, hopefully.

(I should probably look at making it so that setting the time doesn't assume the playback stream exists, though; that setup is deferred, so it is possible for that to happen in normal use...)
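The provider-swap idea discussed above can be sketched with toy types; everything here (`TimeProvider`, `GenericProvider`, `AudioBackedProvider`, `ToyManager`) is a hypothetical stand-in for the real GenericTimeProvider/AudioMixingSink classes. The invariant being demonstrated is that the time-provider field is never null and the playback position is carried across the swap, so the clock does not jump when an audio track appears.

```cpp
#include <cassert>
#include <memory>

// Minimal time-provider interface, modelled on the discussion above.
struct TimeProvider {
    virtual ~TimeProvider() = default;
    virtual double current_time() const = 0;
    virtual void set_time(double) = 0;
};

// Cheap default provider, used before any audio track is known.
struct GenericProvider : TimeProvider {
    double m_time = 0;
    double current_time() const override { return m_time; }
    void set_time(double time) override { m_time = time; }
};

// Stand-in for a provider backed by an audio sink's clock.
struct AudioBackedProvider : TimeProvider {
    double m_time = 0;
    double current_time() const override { return m_time; }
    void set_time(double time) override { m_time = time; }
};

struct ToyManager {
    // Non-null from construction, so no null checks or assertions are needed.
    std::shared_ptr<TimeProvider> m_time_provider = std::make_shared<GenericProvider>();

    void on_audio_track_found()
    {
        auto previous_time = m_time_provider->current_time();
        auto audio_provider = std::make_shared<AudioBackedProvider>();
        audio_provider->set_time(previous_time); // preserve position across the swap
        m_time_provider = std::move(audio_provider);
    }
};
```

Setting the time on the new provider before it is installed also sidesteps any window in which the manager could observe a provider with a stale clock.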
(force-pushed ccf585f to 13616d4)
(force-pushed 13616d4 to dc96d2d)
Demuxer creation and track+duration extraction are moved to a separate thread so that the media data byte buffer is no longer accessed from the main thread. This will be important once the buffer is populated incrementally, as having the main thread both populate and read from the same buffer could easily lead to deadlocks. Aside from that, moving demuxer creation off the main thread helps keep the main thread responsive. `VideoDataProvider` and `AudioDataProvider` now accept a pointer to the main thread's event loop, as they are constructed from the thread responsible for demuxer creation.
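The threading shape described in the PR can be sketched as follows; this is a minimal stand-in, assuming a mutex-guarded task queue in place of the real main-thread event loop, and a pretend "demux" step in place of the real demuxer. `MainLoopQueue` and `demux_on_worker` are hypothetical names. The byte buffer is only ever touched by the worker thread, and results reach main-thread state exclusively through posted tasks.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Thread-safe task queue standing in for the main thread's event loop; the
// worker never touches main-thread state directly, it only posts tasks here.
struct MainLoopQueue {
    std::mutex mutex;
    std::queue<std::function<void()>> tasks;

    void post(std::function<void()> task)
    {
        std::lock_guard<std::mutex> lock(mutex);
        tasks.push(std::move(task));
    }

    void pump() // called from the "main thread" only
    {
        for (;;) {
            std::function<void()> task;
            {
                std::lock_guard<std::mutex> lock(mutex);
                if (tasks.empty())
                    return;
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();
        }
    }
};

// Pretend demuxing: the worker owns the buffer, "extracts" track names, and
// reports them back via the main loop rather than writing shared state.
void demux_on_worker(std::vector<uint8_t> media_data, MainLoopQueue& loop, std::vector<std::string>& tracks_out)
{
    std::thread worker([media_data = std::move(media_data), &loop, &tracks_out] {
        (void)media_data; // a real demuxer would parse this buffer here
        std::vector<std::string> found { "video/0", "audio/0" };
        loop.post([&tracks_out, found] { tracks_out = found; });
    });
    worker.join(); // joined here for determinism; real code keeps the thread alive
}
```

Note that the track list is still empty after the worker finishes and only fills in once the main loop pumps its queue, which mirrors why the real callbacks must be wired up before the media source is attached.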
(force-pushed dc96d2d to a51d3d2)
This is a part of #6921 that could be merged independently.