Why Generating a Waveform for a Large Audio File on the Client Side May Not Be a Good Idea

Thanks to the Web Audio API's AudioContext.decodeAudioData.

The other day I was trying to create a fancy, purely client-side, SoundCloud-like waveform like the one embedded below for my podcast player Shikwasa. With the Web Audio API's handy decodeAudioData, extracting the complete waveform data looked easy as pie at first glance.

SoundCloud's waveform

Until my Chrome crashed the moment it ran the script.
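The script that crashed was essentially the textbook decodeAudioData pipeline. Here is a sketch of it (the function names are mine, not Shikwasa's actual code): decode the whole file into an AudioBuffer, then downsample the raw PCM into a small array of peaks to draw.

```javascript
// Pure helper: reduce raw PCM samples to `bars` peak values in [0, 1].
// This part is cheap; it's the decoding step below that eats memory.
function extractPeaks(samples, bars) {
  const blockSize = Math.floor(samples.length / bars);
  const peaks = [];
  for (let i = 0; i < bars; i++) {
    let max = 0;
    for (let j = i * blockSize; j < (i + 1) * blockSize; j++) {
      const v = Math.abs(samples[j]);
      if (v > max) max = v;
    }
    peaks.push(max);
  }
  return peaks;
}

// In the browser (this is the part that blew up on a 70.9MB file):
// const ctx = new AudioContext();
// const raw = await fetch(url).then(r => r.arrayBuffer());
// const audio = await ctx.decodeAudioData(raw); // whole file decoded at once
// const peaks = extractPeaks(audio.getChannelData(0), 200);
```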

To be fair, the audio file to be decoded was 70.9MB. After some inspection I found that AudioContext.decodeAudioData alone used up almost 10GB of memory simply decoding the audio. There was no way I could implement the feature with this page bloater.
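A back-of-the-envelope calculation shows why the blow-up is inevitable: decodeAudioData expands compressed audio into 32-bit float PCM held entirely in memory, so the decoded size depends only on duration, sample rate, and channel count, not on the compressed file size. The figures below are illustrative assumptions (a roughly 90-minute stereo episode at 44.1kHz), not measurements of my actual file:

```javascript
// Size of the decoded PCM that decodeAudioData must hold in memory,
// before counting any intermediate copies the decoder itself makes.
function decodedBytes(sampleRate, channels, seconds) {
  return sampleRate * channels * 4 * seconds; // 4 bytes per Float32 sample
}

// Assumed example: a 90-minute stereo podcast at 44.1 kHz.
const bytes = decodedBytes(44100, 2, 90 * 60);
console.log((bytes / 1e9).toFixed(2) + " GB"); // roughly 1.9 GB of raw PCM
```

So even before browser overhead, a long episode costs gigabytes of RAM just to exist as an AudioBuffer.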

On the Web Audio API's GitHub repository there has been a discussion of this issue, in which someone somewhat aggressively asked the team to remove decodeAudioData for the same reason:

...it will waste hundreds of megs of memory and take several seconds of CPU power, and battery on mobile.

Apparently, SoundCloud decodes its audio server-side, but there are plenty of purely front-end open-source libraries that generate audio waveforms themselves. How do they manage it?

I went through amplitude.js, wavesurfer.js, soundcloud-waveform-generator and even the BBC's waveform-data.js, and they all use the same core method to extract the data: unfortunately, the disastrous AudioContext.decodeAudioData. In the last repo, someone also reported a similar issue.

It might work okay on a small, heavily compressed .mp3 song, but never, ever on a 30-minute-plus podcast episode.

So far the client-side solution looks like a dead end to me, and I'd advise you not to frustrate yourself with it either. If you have a better solution, please leave a comment and let me know.