
I recently played around with the Web Audio API a bit. I managed to "read" a microphone and play it through my speakers, which worked quite seamlessly.

Using the Web Audio API, I would now like to resample an incoming audio stream (i.e. the microphone) from 44.1 kHz to 16 kHz; 16 kHz, because I am using some tools that require 16 kHz. Since 44.1 kHz divided by 16 kHz is not an integer (44100 / 16000 = 2.75625), I believe I cannot simply apply a low-pass filter and then "skip samples", right?

I also saw that some people suggested using .createScriptProcessor(), but since it is deprecated I'd rather not rely on it, so I am looking for a different approach. Also, I don't necessarily need audioContext.destination to hear the result! It is still fine if I just get the "raw" data of the resampled output.


My approaches so far

  • Creating an AudioContext({sampleRate: 16000}) --> throws an error: "Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."
  • Using an OfflineAudioContext --> but it seems to have no option for streams (only for buffers)
  • Using an AudioWorkletProcessor to resample. In this case, I think that I could use the processor to actually resample the input and output the "resampled" source. But I couldn't really figure out how to do the resampling.

main.js

...
microphoneGranted: async function(stream){
    audioContext = new AudioContext();
    var microphone = audioContext.createMediaStreamSource(stream);
    await audioContext.audioWorklet.addModule('resample_proc.js');
    const resampleNode = new AudioWorkletNode(audioContext, 'resample_proc');
    microphone.connect(resampleNode).connect(audioContext.destination);
}
...

resample_proc.js (assuming only one input and output channel)

class ResampleProcessor extends AudioWorkletProcessor {
    ...
    process(inputs, outputs, parameters) {
        const input = inputs[0];
        const output = outputs[0];

        if (input.length > 0) {
            const inputChannel0 = input[0];
            const outputChannel0 = output[0];

            for (let i = 0; i < inputChannel0.length; ++i) {
                // do something with resample here?
            }
        }

        // Return true unconditionally so the processor is kept alive,
        // even for render quanta that carry no input.
        return true;
    }
}
registerProcessor('resample_proc', ResampleProcessor);

Thank you!

greycatbug

2 Answers


The Web Audio API now allows resampling by passing the desired sample rate to the AudioContext constructor. This code works in Chrome and Safari:

const audioStream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false })
const audioContext = new AudioContext({ sampleRate: 16000 })
const audioStreamSource = audioContext.createMediaStreamSource(audioStream);
audioStreamSource.connect(audioContext.destination)

But it fails in Firefox, which throws a NotSupportedError exception: "AudioContext.createMediaStreamSource: Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported."

In this CodeSandbox example, I've downsampled the audio coming from the microphone to 8 kHz and added a one-second delay so we can clearly hear the effect of the downsampling: https://codesandbox.io/s/magical-rain-xr4g80
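
If you only need the raw 16 kHz samples rather than audible playback, one possible sketch is to hang an AudioWorkletNode off the 16 kHz context above and post every input frame back to the main thread. The worklet name 'capture-proc' and the port-based transfer here are just illustrative choices, not a required API:

// main thread, continuing from the 16 kHz context above
await audioContext.audioWorklet.addModule('capture_proc.js');
const captureNode = new AudioWorkletNode(audioContext, 'capture-proc');
captureNode.port.onmessage = (e) => {
    const frame = e.data;  // Float32Array of 128 samples, already at 16 kHz
    // ...hand this to whatever tool expects 16 kHz audio...
};
audioStreamSource.connect(captureNode);

// capture_proc.js
class CaptureProcessor extends AudioWorkletProcessor {
    process(inputs) {
        const channel = inputs[0][0];
        if (channel) {
            // Copy before posting; the underlying buffer is reused between render quanta.
            this.port.postMessage(channel.slice(0));
        }
        return true;
    }
}
registerProcessor('capture-proc', CaptureProcessor);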

Philippe Sultan

Your general idea looks good. While I won't provide a full, production-ready resampler, I can point out that you might want to start with the Wikipedia article on Sample-rate conversion. Method 1 would work here with L/M = 160/441. Designing the filters takes a bit of work but only needs to be done once. You can also search for polyphase filtering for hints on how to do this efficiently.
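
To make Method 1 a bit more concrete, here is a rough structural sketch of rational-ratio resampling with L/M = 160/441, assuming the low-pass FIR taps have already been designed (that filter design is exactly the part that takes a bit of work and is not shown). The polyphase idea is simply to step through the taps in strides of L so the zero-stuffed samples are never touched:

const L = 160;  // upsampling factor
const M = 441;  // downsampling factor

// `taps`: a low-pass FIR designed at the upsampled rate (44.1 kHz * 160),
// cutoff 8 kHz, with its gain scaled by L to compensate for zero-stuffing.
function resampleRational(input, taps) {
    const outLength = Math.floor(input.length * L / M);
    const output = new Float32Array(outLength);

    for (let n = 0; n < outLength; n++) {
        const pos = n * M;                  // position in the virtual upsampled stream
        const phase = pos % L;              // which polyphase branch applies
        let inIndex = Math.floor(pos / L);  // newest input sample involved
        let acc = 0;

        // Only every L-th tap lines up with a real (non-stuffed) input sample.
        for (let k = phase; k < taps.length && inIndex >= 0; k += L, inIndex--) {
            acc += taps[k] * input[inIndex];
        }
        output[n] = acc;
    }
    return output;
}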

What Chrome does in various places is use a windowed-sinc function to resample between arbitrary rates. This is Method 2 in the Wikipedia link.
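
For illustration, a very naive windowed-sinc resampler between arbitrary rates could look like the sketch below; the Hann window and the half-width of 16 samples are arbitrary choices here, not what Chrome actually uses internally:

// Naive windowed-sinc resampling between arbitrary rates (Method 2).
function resampleSinc(input, inRate, outRate, halfWidth = 16) {
    const ratio = inRate / outRate;
    const cutoff = Math.min(1, 1 / ratio);  // anti-aliasing cutoff when downsampling
    const outLength = Math.floor(input.length / ratio);
    const output = new Float32Array(outLength);

    for (let n = 0; n < outLength; n++) {
        const center = n * ratio;           // fractional position in the input
        const start = Math.max(0, Math.ceil(center - halfWidth));
        const end = Math.min(input.length - 1, Math.floor(center + halfWidth));
        let acc = 0;

        for (let i = start; i <= end; i++) {
            const x = (i - center) * cutoff;
            const sinc = x === 0 ? 1 : Math.sin(Math.PI * x) / (Math.PI * x);
            // Hann window centered on the output position.
            const win = 0.5 * (1 + Math.cos(Math.PI * (i - center) / halfWidth));
            acc += input[i] * sinc * win * cutoff;
        }
        output[n] = acc;
    }
    return output;
}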

Raymond Toy