/* We create the device on the JavaScript side and reference it using an index. We use this to make it possible to reference the device between JavaScript and C. */
device.intermediaryBufferView = new Float32Array(Module.HEAPF32.buffer, device.intermediaryBuffer, device.intermediaryBufferSizeInBytes);
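A minimal sketch of the index-based registry the comment above describes. The names here (`deviceRegistry`, `trackDevice`, `untrackDevice`, `getDevice`) are hypothetical illustrations, not miniaudio's actual identifiers; the idea is just that the JavaScript side stores device objects in an array and hands the C side a plain integer index as the handle.

```javascript
/* Hypothetical sketch of a JS-side device registry: devices live in an
   array and the C side refers to them by index. */
var deviceRegistry = {
    devices: [],    /* Indexed by the handle passed back to C. */

    /* Stores a device object and returns the index used as its handle. */
    trackDevice: function(device) {
        /* Reuse a freed slot, if any, so indices stay small and stable. */
        for (var i = 0; i < this.devices.length; ++i) {
            if (this.devices[i] === null) {
                this.devices[i] = device;
                return i;
            }
        }
        this.devices.push(device);
        return this.devices.length - 1;
    },

    /* Releases the slot so the index can be reused by a later device. */
    untrackDevice: function(index) {
        this.devices[index] = null;
    },

    /* Resolves an index received from C back to the device object. */
    getDevice: function(index) {
        return this.devices[index];
    }
};
```

An integer handle like this survives the JS/C boundary trivially, which is why it is used instead of trying to pass a JavaScript object reference into WebAssembly memory.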
/*
Both playback and capture devices use a ScriptProcessorNode for performing per-sample operations. ScriptProcessorNode is
deprecated, but thanks to Emscripten it can work nicely with miniaudio. Therefore, this code will be considered legacy
once the AudioWorklet implementation is enabled by default in miniaudio.

The use of ScriptProcessorNode is quite simple - you simply provide a callback and do your audio processing in there. For
capture it's slightly unintuitive because you need to attach your node to the destination in order to capture anything.
Therefore, the output channel count needs to be set for capture devices or else you'll get errors from the browser. In the
callback we just output silence to ensure nothing comes out of the speakers.
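For the playback side, the callback's job is to move the interleaved f32 frames that the C side wrote into `device.intermediaryBufferView` out to the node's per-channel output buffers. The handler below is a hedged sketch under that assumption; the function name and the `intermediaryBufferView` parameter are illustrative, not miniaudio's exact code.

```javascript
/* Sketch of a playback onaudioprocess handler. Assumes the C side has already
   filled intermediaryBufferView with interleaved f32 frames; here we simply
   deinterleave them into the Web Audio output channels. */
function onPlaybackAudioProcess(e, intermediaryBufferView) {
    var channels = e.outputBuffer.numberOfChannels;
    var frames   = e.outputBuffer.length;
    for (var c = 0; c < channels; ++c) {
        var out = e.outputBuffer.getChannelData(c);
        for (var f = 0; f < frames; ++f) {
            /* Interleaved layout: frame f, channel c sits at f*channels + c. */
            out[f] = intermediaryBufferView[f * channels + c];
        }
    }
}
```

In a real device this would be assigned to `node.onaudioprocess`, with the C-side callback invoked first to refill the intermediary buffer for each block.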
For capture, we use the ScriptProcessorNode _only_ to get the raw PCM data. It is connected to an AudioContext just like the
playback case, however we just output silence to the AudioContext instead of passing any real data. It would make more sense to use the
MediaRecorder API, but unfortunately you need to specify a MIME type (Opus, Vorbis, etc.) for the binary blob that's returned to the client, and I've
been unable to figure out how to get this as raw PCM. The closest I can think of is to use the MIME type for WAV files and just parse it, but I don't
know how well this would work. Although ScriptProcessorNode is deprecated, in practice it seems to have pretty good browser support so I'm leaving it
like this for now. If anyone knows how I could get raw PCM data using the MediaRecorder API please let me know!
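The capture wiring described above can be sketched as below. This is an assumed shape, not miniaudio's actual code: the `AudioContext` and `MediaStream` are taken as parameters (in the browser they come from `new AudioContext()` and `getUserMedia`), the `setupCaptureGraph`/`onPCM` names are hypothetical, and only the first input channel is forwarded for brevity.

```javascript
/* Hedged sketch of the capture graph: media stream -> ScriptProcessorNode ->
   destination, with the node outputting silence so nothing is audible. */
function setupCaptureGraph(audioContext, mediaStream, channels, bufferFrames, onPCM) {
    var source = audioContext.createMediaStreamSource(mediaStream);

    /* The output channel count must be non-zero even though we only ever
       output silence, otherwise the browser raises errors. */
    var node = audioContext.createScriptProcessor(bufferFrames, channels, channels);

    node.onaudioprocess = function(e) {
        /* Hand the raw PCM to the application (e.g. copy it into the
           intermediary buffer for the C side to consume). */
        onPCM(e.inputBuffer.getChannelData(0));

        /* Write silence to every output channel so the speakers stay quiet. */
        for (var c = 0; c < e.outputBuffer.numberOfChannels; ++c) {
            e.outputBuffer.getChannelData(c).fill(0);
        }
    };

    /* The node must be attached to the destination or onaudioprocess
       is never fired for capture. */
    source.connect(node);
    node.connect(audioContext.destination);
    return node;
}
```

Passing the context and stream in as parameters keeps the wiring logic separate from the permission prompt and context lifecycle, which browsers gate behind user gestures.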