Live Streaming Support #189
This is a great use case. What I'd recommend to start with is to look at waveform-data.js, the module that produces the data for drawing the waveform (the algorithm is quite simple). You could possibly use the Web Audio API to get the raw audio samples. Peaks.js assumes the audio has a fixed length, so integrating a real-time waveform update into the existing code will require some thought. But if you can get a simple proof of concept working, I'd be happy to look at how it could be done.
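As a rough illustration of that algorithm: for each block of N samples, keep only the minimum and maximum values. This is a hedged sketch, not waveform-data.js's actual code; the function name and the interleaved output layout are illustrative.

```javascript
// Min/max downsampling, the core idea behind waveform-data.js.
// Illustrative sketch, not the library's actual implementation.
function downsample(samples, samplesPerPixel) {
  const peaks = [];
  for (let i = 0; i < samples.length; i += samplesPerPixel) {
    let min = Infinity;
    let max = -Infinity;
    const end = Math.min(i + samplesPerPixel, samples.length);
    for (let j = i; j < end; j++) {
      if (samples[j] < min) min = samples[j];
      if (samples[j] > max) max = samples[j];
    }
    peaks.push(min, max); // one interleaved min/max pair per output pixel
  }
  return peaks;
}
```

For a live stream, the open question is how to run this incrementally as samples arrive rather than over a complete file.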
Thank you for your fast reply. I looked at d3 visualizations regarding MediaElementAudioSourceNode. There is a similar solution for visualizing audio waveforms which also uses canvas and the Web Audio API. There is an example using the microphone as the audio source node, but it uses so much memory while recording that my Firefox hung. So I will try to leverage the browser's internal cache for storing the audio instead of creating my own buffer. I do expect high processor usage for this feature. I will try to implement it first. Thank you for your guidance.
Just want to share my plan and ask for pointers. So far, what I understand is that peaks.js handles the UI rendering, and waveform-data.js handles the waveform generation that peaks.js consumes. I have been learning a lot about how to tap into the data. The idea is to keep the way peaks.js works (handling waveform visualization and native player synchronization) while continuously drawing the waveform as new audio frames arrive, or scheduling the update at some interval to avoid high processor usage. I saw that waveform-data.js provides an API that might help here. What do you think, is this possible @chrisn? I will learn further while continuing the experiment.
You can find details of the data format we use here. I haven't had time to look into this yet, but I imagine that adapting Peaks.js to display a real-time updated waveform will be non-trivial.
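For orientation, the JSON variant of that data format looks roughly like the sample below. The field values here are made up; see the linked format documentation for the authoritative definition. `data` holds interleaved min/max pairs, and `length` counts pairs, not array elements.

```json
{
  "version": 2,
  "channels": 1,
  "sample_rate": 44100,
  "samples_per_pixel": 512,
  "bits": 8,
  "length": 3,
  "data": [-12, 14, -9, 11, -15, 17]
}
```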
Thank you for the guidance. I'll keep sharing the progress. |
Hello @chrisn, I understand that audiowaveform exists to generate digitized waveform data. When doing live streaming, I understand why you said it won't be trivial. Yes, the hard part is generating the waveform data out of the audio stream. After that, we have to notify peaks.js to redraw the waveform. I was playing with ScriptProcessorNode, from which we can get the raw PCM data for each channel, and found that these values were consistent for a given song.
From AnalyserNode we can get frequency data and time-domain data. To be frank, I have trouble understanding your audiowaveform code as I am not familiar with C++. Could you please share the algorithm? It seems the important part is how the peak values are derived from the samples. I'm new to audio processing/visualization, so I apologize for any misplaced context. Your guidance is highly appreciated.
Fortunately there is webaudio-peaks, which calculates the peaks. I tested it with a song, but not a live-streaming one. The next step is a live update of the peaks, but peaks.js is still a puzzle to me. I'm learning further so I can make it redraw at runtime.
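For the live case, the same min/max peak extraction can be computed incrementally as audio arrives. The following is a hedged sketch, not webaudio-peaks code: `PeakAccumulator` is an illustrative name, and the wiring uses the (now deprecated but then standard) ScriptProcessorNode discussed above.

```javascript
// Incremental min/max peak extraction for live audio.
// PeakAccumulator is an illustrative name, not an existing library API.
class PeakAccumulator {
  constructor(samplesPerPeak) {
    this.samplesPerPeak = samplesPerPeak;
    this.count = 0;
    this.min = Infinity;
    this.max = -Infinity;
    this.peaks = []; // interleaved min/max pairs
  }
  add(samples) {
    for (const s of samples) {
      if (s < this.min) this.min = s;
      if (s > this.max) this.max = s;
      if (++this.count === this.samplesPerPeak) {
        this.peaks.push(this.min, this.max);
        this.count = 0;
        this.min = Infinity;
        this.max = -Infinity;
      }
    }
  }
}

// Browser-only wiring (guarded so the pure part is reusable elsewhere):
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const acc = new PeakAccumulator(512);
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (e) => {
    // Channel 0 PCM as floats in [-1, 1]
    acc.add(e.inputBuffer.getChannelData(0));
  };
  // source.connect(processor); processor.connect(ctx.destination);
}
```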
Hi @chrisn, I managed to create a (not so stable) live-streaming POC with the help of some backend code that updates peaks.json (the waveform data in JSON format) on the server, due to my lack of understanding of peaks.js's inner workings for redrawing the waveform from new peaks data. The idea is to send the peaks to the backend through STOMP for efficiency, and the backend updates the data based on that information. Then, at a specific interval, I recreate (destroy and re-init) the waveform. Could you please give me a pointer on how to introduce a live-update feature for the waveform? Hopefully the playhead will still point at the latest position in the audio, because it is a live stream. The API might look like:
peaksInstance.updatePeaks(peaksData);
peaksInstance.repaint();
Best regards.
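In the meantime, the destroy/re-init workaround can be sketched as below. `Peaks.init` and `instance.destroy()` are real Peaks.js calls, and the `container`/`mediaElement`/`dataUri` option names follow the Peaks.js README; the cache-busting helper, element ids, and interval are illustrative assumptions.

```javascript
// Periodically rebuild the Peaks.js instance from a freshly generated
// peaks.json on the server. nextDataUri is an illustrative cache-busting
// helper, not a Peaks.js API.
function nextDataUri(base, counter) {
  return base + '?v=' + counter;
}

if (typeof document !== 'undefined') {
  let instance = null;
  let counter = 0;

  function reload() {
    if (instance) instance.destroy(); // tear down the old waveform views
    Peaks.init({
      container: document.getElementById('waveform'),
      mediaElement: document.getElementById('audio'),
      dataUri: { json: nextDataUri('peaks.json', counter++) },
    }, (err, peaks) => {
      if (!err) instance = peaks;
    });
  }

  setInterval(reload, 10000); // re-read the server-side peaks.json every 10 s
}
```

Rebuilding the whole instance is wasteful compared to a real update API, but it avoids touching Peaks.js internals.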
Hello @achmadns, kind regards
Hello @manuliner, unfortunately, the latest progress only loads peaks metadata from a remote file that is updated at a specific interval. We need another copy of the audio source to make it work, and we update the peaks diagram once some of the audio stream has been analysed. You can check it in my fork.
Hey @achmadns, I am currently looking into this problem. My approach is to fetch my m3u8 playlist via videojs-contrib-hls, which gives me an array of TS files. The only problem at the moment is that I can't decode the TS files with webaudiokit. Do you have any idea how to transform them into the ArrayBuffer that peaks.js needs to generate the waveform data? Once I've solved this, I will iterate through my m3u8 file, generate the waveform data per segment, and merge them.
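One likely stumbling block: `decodeAudioData` only accepts formats the browser's media stack can parse directly (e.g. MP3, WAV, plain AAC), so raw MPEG-TS segments usually need demuxing first (mux.js is one option) to extract the elementary audio stream. Assuming you can get per-segment audio ArrayBuffers, merging and decoding might look like this sketch; `concatBuffers` and `decodeSegments` are illustrative names, not part of peaks.js.

```javascript
// Merge per-segment buffers into one ArrayBuffer for decodeAudioData.
// concatBuffers is an illustrative helper, not an existing library API.
function concatBuffers(buffers) {
  const total = buffers.reduce((n, b) => n + b.byteLength, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const b of buffers) {
    out.set(new Uint8Array(b), offset);
    offset += b.byteLength;
  }
  return out.buffer;
}

if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  // segmentUrls would come from the parsed m3u8 playlist.
  async function decodeSegments(segmentUrls) {
    const buffers = await Promise.all(
      segmentUrls.map((url) => fetch(url).then((r) => r.arrayBuffer()))
    );
    // Note: raw .ts files must be demuxed to elementary audio first;
    // decodeAudioData will reject an MPEG-TS container.
    return ctx.decodeAudioData(concatBuffers(buffers));
  }
}
```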
Hi @manuliner, sorry, I haven't worked with that yet.
@achmadns, @manuliner, did either of you ever find a solution to this problem? In my application, the user is recording audio, and I would like to show a scrolling waveform like in Pro Tools or Audacity. It looks like it shouldn't be too difficult to code something up using the Web Audio API and canvases, but I would rather use an existing library if there is something that solves this. |
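For the "code something up with canvases" route, one simple scheme is to keep a fixed-length array of per-column min/max pairs, append a column for each incoming audio chunk, drop the oldest when full, and redraw left to right. A hedged sketch of that bookkeeping plus the browser-only drawing; `pushColumn` and `drawColumns` are illustrative names, not an existing library.

```javascript
// Fixed-width column buffer for a Pro Tools / Audacity style scrolling
// waveform. Illustrative sketch, not an existing library API.
function pushColumn(columns, maxColumns, min, max) {
  columns.push([min, max]);
  if (columns.length > maxColumns) columns.shift(); // drop the oldest column
  return columns;
}

// Draw the current columns onto a 2D canvas context (browser-only).
function drawColumns(canvasCtx, columns, width, height) {
  if (columns.length === 0) return;
  canvasCtx.clearRect(0, 0, width, height);
  const colWidth = width / columns.length;
  columns.forEach(([min, max], i) => {
    const y = ((1 - max) / 2) * height;              // map [-1, 1] to canvas y
    const h = Math.max(1, ((max - min) / 2) * height); // at least 1 px tall
    canvasCtx.fillRect(i * colWidth, y, colWidth, h);
  });
}
```

Feeding each audio buffer's min/max into `pushColumn` and calling `drawColumns` from `requestAnimationFrame` gives the scrolling effect without retaining the full recording in memory.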
Dear team, thank you for the amazing tools you have here. The visualization is beautiful.
I tried the example, generating the waveform in real time from an Icecast source, which is a live stream. I just replaced the audio src with an Icecast URL that allows any origin (to avoid the CORS problem); this is configured in /etc/icecast.xml. Then, to stream the audio, I used gstreamer, and finally replaced the audio URL in the example. I can play the audio, but the waveform is not drawn. If I want to contribute this feature, where should I start?
Thank you,