
Live Streaming Support #189

Open
achmadns opened this issue May 18, 2017 · 13 comments

@achmadns

Dear team, thank you for the amazing tools you have here. The visualization is beautiful.

I tried the example to generate the waveform in real time from an Icecast source, i.e. a live stream. I just replaced the audio src with an Icecast URL that allows any origin (to avoid CORS problems). This is the configuration (part of /etc/icecast.xml):

<icecast>
......

	<http-headers>
		<header name="Access-Control-Allow-Origin" value="*" />
	</http-headers>

.......
</icecast>

Then, to stream the audio, I used GStreamer:

gst-launch-1.0 uridecodebin uri=file://$HOME/Music/audio.ogg ! audioconvert ! vorbisenc ! oggmux ! shout2send mount=/audio.ogg port=8000 username=source password=hackme ip=icecast.local

Finally, I replaced the audio URL:

<source src="http://icecast.local:8000/audio.ogg" type="audio/ogg">

I can play the audio, but the waveform is not drawn. If I want to contribute this feature, where should I start?

Thank you,

@chrisn
Member

chrisn commented May 19, 2017

This is a great use case. What I'd recommend to start with is to look at waveform-data.js, the module that produces the data for drawing the waveform (the algorithm is quite simple).

You could possibly use the Web Audio API to get the raw audio samples from the <audio> element, using MediaElementAudioSourceNode. Pass the audio samples to the downsampling algorithm, and render the results to a <canvas> element.
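
Something along these lines, perhaps (an untested sketch; the buffer size and channel count are arbitrary):

var audioCtx = new AudioContext();
var audioElement = document.querySelector('audio');

// Route the <audio> element through the Web Audio graph
var source = audioCtx.createMediaElementSource(audioElement);
var processor = audioCtx.createScriptProcessor(4096, 2, 2);

processor.onaudioprocess = function (event) {
  var samples = event.inputBuffer.getChannelData(0);
  // Downsample these samples to min/max pairs and draw them to a <canvas>
};

source.connect(processor);
source.connect(audioCtx.destination); // keep playback audible
processor.connect(audioCtx.destination); // some browsers need this for onaudioprocess to fire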

Peaks.js assumes the audio has a fixed length, so integrating a real-time waveform update into the existing code will require some thought. But if you can get a simple proof of concept working, I'd be happy to look at how it could be done.

@achmadns
Author

achmadns commented May 22, 2017

Thank you for your fast reply. I looked at a d3 visualization regarding MediaElementAudioSourceNode. There is a similar solution for visualizing an audio waveform which also employs canvas and the Web Audio API. There is an example using the microphone as the audio source node, but it uses so much memory while recording that my Firefox hung. So I will try to leverage the browser's internal cache for storing the audio instead of creating my own copy.

I do expect high processor usage for this feature. I will try to implement it first.

Thank you for your guidance.

@achmadns
Author

achmadns commented May 24, 2017

Just want to share my plan and ask for pointers.

So far, my understanding is that peaks.js handles the UI rendering and waveform-data.js handles the waveform generation used by peaks.js. I learned a lot about how to tap the data from <audio>, which already supports streaming; I experimented with ScriptProcessorNode and AnalyserNode here.

The idea is to keep the way peaks.js works, handling waveform visualization and native player synchronization, while continuously drawing the waveform as new audio frames arrive, or scheduling the update at some interval to avoid high processor usage.

I saw that waveform-data.js accepts JSON as a source. I plan to gradually build the waveform by appending the frames/binaries/PCM from the audio stream, though I don't know yet whether waveform-data.js is capable of interpreting a waveform from each chunk of PCM data. I am still figuring out the JSON format from the test data example.

What do you think? Is this possible, @chrisn? I will learn more as I continue the experiment.

@chrisn
Member

chrisn commented May 25, 2017

You can find details of the data format we use here. I haven't had time to look into this yet, but I imagine that adapting Peaks.js to display a real-time updated waveform will be non-trivial.
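
For quick reference, a (version 1) JSON waveform data file looks something like the following, where data holds interleaved min/max sample pairs, one pair per output pixel (values here are illustrative):

{
  "version": 1,
  "sample_rate": 44100,
  "samples_per_pixel": 512,
  "bits": 8,
  "length": 3,
  "data": [-65, 63, -66, 64, -40, 41]
}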

@achmadns
Author

Thank you for the guidance. I'll keep sharing the progress.

@achmadns
Author

achmadns commented May 29, 2017

Hello @chrisn, I understand that audiowaveform exists to generate digitized waveform data in the form of a .dat or .json file. In my tests, I found that the minimum audio length supported by peaks.js was 13 seconds, with a minimum zoom level of 512 and a maximum of 512 samples per pixel (my test with 1024 samples per pixel failed). Audio shorter than that could not be visualized. Well, I am the type who tries things before reading the manual. Just to let you know.
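
For reference, my test configuration looked roughly like this (element IDs and the data file name are placeholders):

var peaksInstance = Peaks.init({
  container: document.getElementById('waveform-container'),
  mediaElement: document.getElementById('audio'),
  dataUri: 'audio.json', // generated by audiowaveform
  zoomLevels: [512, 1024, 2048] // samples per pixel at each zoom level
});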

When it comes to live streaming, I understand why you said it won't be trivial. Yes, the hard part is generating the waveform data from the audio stream. After that, we have to notify peaks.js to redraw the waveform.

I was playing with ScriptProcessorNode, from which we can get raw data (PCM) for each channel, and found that these values were consistent throughout a song (example values from audioProcessEvent.inputBuffer, which is an AudioBuffer):

sampleRate: 44100
length: 4096
numberOfChannels: 2
duration: 0.09287981859410431

From AnalyserNode we can get frequency data and time domain data. To be frank, I have trouble understanding your audiowaveform code, as I am not familiar with C++. Could you please share the algorithm? It seems the important things are the samples_per_pixel and bits fields in the waveform data format, and I'll try to generate the waveform data from the information I gathered from ScriptProcessorNode and AnalyserNode. There is a further challenge: those two processors keep processing data even when no audio is being played, which makes the PCM data silent, not to mention that the calculation should run on a worker thread.
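
My current guess at the algorithm, based on the data format rather than on the C++ source: for every block of samples_per_pixel input samples, keep only the minimum and maximum, scaled to the range given by bits. Something like this:

// Guessed downsampling: reduce each block of samplesPerPixel float samples
// (in the range -1.0..1.0) to one min/max pair scaled to 8 bits (-128..127)
function downsample(samples, samplesPerPixel) {
  var pairs = [];
  for (var i = 0; i < samples.length; i += samplesPerPixel) {
    var min = 1.0, max = -1.0;
    var end = Math.min(i + samplesPerPixel, samples.length);
    for (var j = i; j < end; j++) {
      if (samples[j] < min) min = samples[j];
      if (samples[j] > max) max = samples[j];
    }
    pairs.push(Math.round(min * 127), Math.round(max * 127));
  }
  return pairs;
}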

To be frank, I'm new to audio processing/visualization, so I apologize if I've used the wrong terminology. Your guidance is highly appreciated.

@achmadns
Author

achmadns commented May 29, 2017

Fortunately, there is webaudio-peaks, which calculates the peaks. I tested it with a song, but not with a live stream. The next step is live updating of the peaks, but peaks.js is still a puzzle to me. I'm learning more to make it redraw at runtime.

@achmadns
Author

Hi @chrisn, I managed to create a (not so stable) live streaming POC, with the help of some backend code to update the peaks.json (waveform data in JSON format) on the server, owing to my lack of understanding of the inner workings of peaks.js to redraw the waveform from new peaks data.

The idea here is to send the peaks to the backend through STOMP for efficiency, and the backend updates the data based on that information. Then, at a specific interval, I recreate (destroy and re-init) the waveform, roughly as sketched below. Could you please give me a pointer on how to introduce a proper live update feature for the waveform? Hopefully the slider would keep pointing at the latest position in the audio, since it is a live stream.
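
The destroy/re-init cycle looks roughly like this (a sketch; the data URI and interval are placeholders):

function createPeaks() {
  return Peaks.init({
    container: document.getElementById('waveform-container'),
    mediaElement: document.getElementById('audio'),
    dataUri: 'peaks.json' // rewritten by the backend as new audio arrives
  });
}

var peaksInstance = createPeaks();

// Every few seconds, throw the old instance away and rebuild from the new data
setInterval(function () {
  peaksInstance.destroy();
  peaksInstance = createPeaks();
}, 5000);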

The live update API might be something like:

peaksInstance.updatePeaks(peaksData);
peaksInstance.repaint();

Best regards.

@manuliner

manuliner commented Aug 15, 2017

Hello @achmadns,
Do you have any news on this? I am looking for the same kind of functionality.

Kind regards,
Manu

@achmadns
Author

Hello @manuliner, unfortunately, the latest progress only loads the peaks metadata from a remote file that is updated at a specific interval. We need another copy of the audio source to make it work and to update the peaks diagram once the audio stream has been analysed. You can check it in my fork.

@manuliner

Hey @achmadns, I am currently looking into this problem. My approach is to fetch my m3u8 playlist via videojs-contrib-hls. I get an array with my TS files. The only problem at the moment is that I can't decode the TS files with webaudiokit.
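
What I'm trying looks roughly like this (the segment URL is a placeholder; decodeAudioData rejects the raw TS data):

var audioCtx = new AudioContext();

fetch('segment0.ts') // one entry from the m3u8 playlist
  .then(function (response) { return response.arrayBuffer(); })
  .then(function (buffer) {
    // fails for MPEG-TS; decodeAudioData expects a format such as WAV/MP3/Ogg
    return audioCtx.decodeAudioData(buffer);
  })
  .then(function (audioBuffer) {
    // would feed the AudioBuffer's channel data into waveform generation
  });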

Do you have any ideas on how to transform them into the ArrayBuffer that peaks.js needs to generate the waveform data?

Once I've solved this, I will iterate through my m3u8 file, generate the waveform data for each segment, and merge them.

@achmadns
Author

achmadns commented Oct 9, 2017

Hi @manuliner, sorry, I haven't run into that yet.

@greaber

greaber commented Feb 18, 2019

@achmadns, @manuliner, did either of you ever find a solution to this problem? In my application, the user is recording audio, and I would like to show a scrolling waveform like in Pro Tools or Audacity. It looks like it shouldn't be too difficult to code something up using the Web Audio API and canvases, but I would rather use an existing library if there is something that solves this.
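
For concreteness, the rough shape I have in mind (untested; drawScrollingWaveform is a hypothetical canvas-drawing helper):

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  var processor = audioCtx.createScriptProcessor(4096, 1, 1);
  var peaks = []; // one [min, max] pair per processed buffer

  processor.onaudioprocess = function (event) {
    var samples = event.inputBuffer.getChannelData(0);
    var min = 1.0, max = -1.0;
    for (var i = 0; i < samples.length; i++) {
      if (samples[i] < min) min = samples[i];
      if (samples[i] > max) max = samples[i];
    }
    peaks.push([min, max]);
    drawScrollingWaveform(peaks); // redraw, newest pair at the right edge
  };

  source.connect(processor);
  // the processor's output buffer stays silent, so this produces no sound,
  // but some browsers need the node connected for onaudioprocess to fire
  processor.connect(audioCtx.destination);
});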
