Cleanup formating in audio_worklet JS code. NFC #21203

Merged · 1 commit · Jan 29, 2024
83 changes: 52 additions & 31 deletions src/audio_worklet.js
@@ -1,14 +1,16 @@
// This file is the main bootstrap script for Wasm Audio Worklets loaded in an Emscripten application.
// Build with -sAUDIO_WORKLET=1 linker flag to enable targeting Audio Worklets.
// This file is the main bootstrap script for Wasm Audio Worklets loaded in an
// Emscripten application. Build with -sAUDIO_WORKLET=1 linker flag to enable
// targeting Audio Worklets.

// AudioWorkletGlobalScope does not have an onmessage/postMessage() functionality at the global scope, which
// means that after creating an AudioWorkletGlobalScope and loading this script into it, we cannot
// AudioWorkletGlobalScope does not have an onmessage/postMessage() functionality
// at the global scope, which means that after creating an
// AudioWorkletGlobalScope and loading this script into it, we cannot
// postMessage() information into it like one would do with Web Workers.

// Instead, we must create an AudioWorkletProcessor class, then instantiate a Web Audio graph node from it
// on the main thread. Using its message port and the node constructor's
// "processorOptions" field, we can share the necessary bootstrap information from the main thread to
// the AudioWorkletGlobalScope.
// Instead, we must create an AudioWorkletProcessor class, then instantiate a
// Web Audio graph node from it on the main thread. Using its message port and
// the node constructor's "processorOptions" field, we can share the necessary
// bootstrap information from the main thread to the AudioWorkletGlobalScope.

function createWasmAudioWorkletProcessor(audioParams) {
class WasmAudioWorkletProcessor extends AudioWorkletProcessor {
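To make the bootstrap pattern described in the comments above concrete, here is a minimal standalone sketch of both sides. The names (`bootstrap.js`, `BootstrapProcessor`, the payload fields) are hypothetical, not the PR's code:

```js
// --- Worklet side, in a file loaded via audioWorklet.addModule() ---
// The processor constructor is the only hook that receives data at node
// creation time (via processorOptions), and this.port is the only message
// channel into the AudioWorkletGlobalScope.
class BootstrapProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    globalThis.bootstrapData = options.processorOptions; // structured-cloned payload
    this.port.onmessage = (msg) => console.log('from main thread:', msg.data);
  }
  process() { return false; } // never connected to the graph; does no audio work
}
registerProcessor('bootstrap', BootstrapProcessor);

// --- Main-thread side ---
async function startWorklet() {
  const ctx = new AudioContext();
  await ctx.audioWorklet.addModule('bootstrap.js');
  const node = new AudioWorkletNode(ctx, 'bootstrap', {
    processorOptions: { greeting: 'hello scope' }, // initial bootstrap payload
  });
  node.port.postMessage({ cmd: 'later message' }); // ongoing two-way channel
}
```

The BootstrapMessages class in this diff is exactly this pattern, with the Emscripten Module object as the payload.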
@@ -98,8 +100,9 @@ function createWasmAudioWorkletProcessor(audioParams) {
// Call out to Wasm callback to perform audio processing
if (didProduceAudio = this.callbackFunction(numInputs, inputsPtr, numOutputs, outputsPtr, numParams, paramsPtr, this.userData)) {
// Read back the produced audio data to all outputs and their channels.
// (A garbage-free function TypedArray.copy(dstTypedArray, dstOffset, srcTypedArray, srcOffset, count) would sure be handy...
// but the web does not have one, so manually copy all bytes in)
// (A garbage-free function TypedArray.copy(dstTypedArray, dstOffset,
// srcTypedArray, srcOffset, count) would sure be handy... but the web does
// not have one, so manually copy all bytes in)
for (i of outputList) {
for (j of i) {
for (k = 0; k < 128; ++k) {
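For context on the wished-for garbage-free copy: `TypedArray.prototype.set` can express an offset copy, but only by allocating a temporary `subarray` view per call, which is what a real-time audio callback wants to avoid. A sketch of the hypothetical helper the comment imagines:

```js
// Hypothetical helper with the signature the comment wishes existed.
// The loop is allocation-free, matching what the worklet code does inline.
function typedArrayCopy(dstTypedArray, dstOffset, srcTypedArray, srcOffset, count) {
  for (let k = 0; k < count; ++k) {
    dstTypedArray[dstOffset + k] = srcTypedArray[srcOffset + k];
  }
  // Built-in alternative, but it creates a garbage view object on every call:
  // dstTypedArray.set(srcTypedArray.subarray(srcOffset, srcOffset + count), dstOffset);
}
```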
@@ -111,24 +114,28 @@

stackRestore(oldStackPtr);

// Return 'true' to tell the browser to continue running this processor. (Returning 1 or any other truthy value won't work in Chrome)
// Return 'true' to tell the browser to continue running this processor.
// (Returning 1 or any other truthy value won't work in Chrome)
return !!didProduceAudio;
}
}
return WasmAudioWorkletProcessor;
}

// Specify a worklet processor that will be used to receive messages sent to this AudioWorkletGlobalScope.
// We never connect this initial AudioWorkletProcessor to the audio graph to do any audio processing.
// Specify a worklet processor that will be used to receive messages sent to
// this AudioWorkletGlobalScope. We never connect this initial
// AudioWorkletProcessor to the audio graph to do any audio processing.
class BootstrapMessages extends AudioWorkletProcessor {
constructor(arg) {
super();
// Initialize the global Emscripten Module object that contains e.g. the Wasm Module and Memory objects.
// After this we are ready to load in the main application JS script, which the main thread will addModule()
// Initialize the global Emscripten Module object that contains e.g. the
// Wasm Module and Memory objects. After this we are ready to load in the
// main application JS script, which the main thread will addModule()
// to this scope.
globalThis.Module = arg['processorOptions'];
#if !MINIMAL_RUNTIME
// Default runtime relies on an injected instantiateWasm() function to initialize the Wasm Module.
// Default runtime relies on an injected instantiateWasm() function to
// initialize the Wasm Module.
globalThis.Module['instantiateWasm'] = (info, receiveInstance) => {
var instance = new WebAssembly.Instance(Module['wasm'], info);
receiveInstance(instance, Module['wasm']);
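The synchronous `new WebAssembly.Instance` above is viable because the main thread compiles the module once and ships it to the worklet scope; `WebAssembly.Module` objects survive structured clone. A main-thread sketch of producing that `Module['wasm']` value (hypothetical URL `app.wasm`; not the PR's code):

```js
// Compile once on the main thread; the compiled module can then ride along
// in processorOptions to the 'message' bootstrap node used by this PR.
async function shipWasmToWorklet(audioContext) {
  const wasmModule = await WebAssembly.compileStreaming(fetch('app.wasm'));
  return new AudioWorkletNode(audioContext, 'message', {
    processorOptions: { wasm: wasmModule /* plus memory, stack, etc. */ },
  });
}
```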
@@ -139,41 +146,55 @@ class BootstrapMessages extends AudioWorkletProcessor {
console.log('AudioWorklet global scope looks like this:');
console.dir(globalThis);
#endif
// Listen to messages from the main thread. These messages will ask this scope to create the real
// AudioWorkletProcessors that call out to Wasm to do audio processing.
// Listen to messages from the main thread. These messages will ask this
// scope to create the real AudioWorkletProcessors that call out to Wasm to
// do audio processing.
let p = globalThis['messagePort'] = this.port;
p.onmessage = (msg) => {
let d = msg.data;
if (d['_wpn']) { // '_wpn' is short for 'Worklet Processor Node', using an identifier that will never conflict with user messages
if (d['_wpn']) {
// '_wpn' is short for 'Worklet Processor Node', using an identifier
// that will never conflict with user messages
#if MODULARIZE
// Instantiate the MODULARIZEd Module function, which is stored for us under the special global
// name AudioWorkletModule in MODULARIZE+AUDIO_WORKLET builds.
// Instantiate the MODULARIZEd Module function, which is stored for us
// under the special global name AudioWorkletModule in
// MODULARIZE+AUDIO_WORKLET builds.
if (globalThis.AudioWorkletModule) {
AudioWorkletModule(Module); // This populates the Module object with all the Wasm properties
delete globalThis.AudioWorkletModule; // We have now instantiated the Module function, can discard it from global scope
// This populates the Module object with all the Wasm properties
AudioWorkletModule(Module);
// We have now instantiated the Module function, can discard it from
// global scope
delete globalThis.AudioWorkletModule;
}
#endif
// Register a real AudioWorkletProcessor that will actually do audio processing.
registerProcessor(d['_wpn'], createWasmAudioWorkletProcessor(d['audioParams']));
#if WEBAUDIO_DEBUG
console.log(`Registered a new WasmAudioWorkletProcessor "${d['_wpn']}" with AudioParams: ${d['audioParams']}`);
#endif
// Post a Wasm Call message back telling that we have now registered the AudioWorkletProcessor class,
// and should trigger the user onSuccess callback of the emscripten_create_wasm_audio_worklet_processor_async() call.
// Post a Wasm Call message back telling that we have now registered the
// AudioWorkletProcessor class, and should trigger the user onSuccess
// callback of the
// emscripten_create_wasm_audio_worklet_processor_async() call.
p.postMessage({'_wsc': d['callback'], 'x': [d['contextHandle'], 1/*EM_TRUE*/, d['userData']] }); // "WaSm Call"
} else if (d['_wsc']) { // '_wsc' is short for 'wasm call', using an identifier that will never conflict with user messages
} else if (d['_wsc']) {
// '_wsc' is short for 'wasm call', using an identifier that will never
// conflict with user messages
Module['wasmTable'].get(d['_wsc'])(...d['x']);
};
}
}

// No-op, not doing audio processing in this processor. It is just for receiving bootstrap messages.
// However, browsers require it to still be present. It should never be called because we never add a
// node to the graph with this processor, although it does look like Chrome still calls this function.
// No-op, not doing audio processing in this processor. It is just for
// receiving bootstrap messages. However, browsers require it to still be
// present. It should never be called because we never add a node to the graph
// with this processor, although it does look like Chrome still calls this
// function.
process() {
// keep this function a no-op. Chrome redundantly wants to call this even though this processor is never added to the graph.
// keep this function a no-op. Chrome redundantly wants to call this even
// though this processor is never added to the graph.
}
};

// Register the dummy processor that will just receive messages.
registerProcessor("message", BootstrapMessages);
registerProcessor('message', BootstrapMessages);
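Taken together, the two mangled keys form the scope's whole message protocol. A sketch of the registration round trip, assuming a MessagePort `port` wired to the bootstrap node; field names are from the diff, the concrete values are illustrative only:

```js
// Sketch only: 'port' is the bootstrap node's MessagePort, and the pointer
// values below are illustrative.
const contextHandle = 1, callbackPtr = 42, userDataPtr = 0;

// Main thread -> worklet: ask the scope to register a named processor class.
port.postMessage({
  '_wpn': 'my-processor',    // Worklet Processor Name, mangled to avoid user messages
  'audioParams': [],         // AudioParam descriptors for the new class
  'contextHandle': contextHandle,
  'callback': callbackPtr,   // index into the Wasm function table
  'userData': userDataPtr,
});

// Worklet -> main thread, once registered: a "WaSm Call" message telling the
// main thread to invoke the user's onSuccess callback through the Wasm table.
port.postMessage({ '_wsc': callbackPtr, 'x': [contextHandle, 1 /*EM_TRUE*/, userDataPtr] });
```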
35 changes: 25 additions & 10 deletions src/library_webaudio.js
@@ -37,9 +37,11 @@ let LibraryWebAudio = {
// Wasm handle ID.
$emscriptenGetAudioObject: (objectHandle) => EmAudio[objectHandle],

// emscripten_create_audio_context() does not itself use the emscriptenGetAudioObject() function, but mark it as a
// dependency, because the user will not be able to utilize the node unless they call emscriptenGetAudioObject()
// on it on the JS side to connect it to the graph, so this avoids the user needing to manually do it on the command line.
// emscripten_create_audio_context() does not itself use the
// emscriptenGetAudioObject() function, but mark it as a dependency, because
// the user will not be able to utilize the node unless they call
// emscriptenGetAudioObject() on it on the JS side to connect it to the graph,
// so this avoids the user needing to manually do it on the command line.
emscripten_create_audio_context__deps: ['$emscriptenRegisterAudioObject', '$emscriptenGetAudioObject'],
emscripten_create_audio_context: (options) => {
let ctx = window.AudioContext || window.webkitAudioContext;
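As an illustration of that dependency comment: JS-side code (for example inside an EM_JS/EM_ASM block) resolves Wasm-side integer handles back to live Web Audio objects with emscriptenGetAudioObject() and wires the node up manually. A sketch, assuming `contextHandle` and `nodeHandle` are hypothetical handles previously returned by the C API:

```js
// Hypothetical handles obtained from emscripten_create_audio_context() and
// the worklet-node creation call on the Wasm side.
const audioContext = emscriptenGetAudioObject(contextHandle);
const workletNode = emscriptenGetAudioObject(nodeHandle);
workletNode.connect(audioContext.destination); // connect to the graph manually
```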
@@ -58,7 +60,7 @@ let LibraryWebAudio = {
console.dir(opts);
#endif

return ctx && emscriptenRegisterAudioObject(new ctx(opts));
return ctx && emscriptenRegisterAudioObject(new ctx(opts));
},

emscripten_resume_audio_context_async: (contextHandle, callback, userData) => {
@@ -166,14 +168,18 @@ let LibraryWebAudio = {
return audioWorkletCreationFailed();
}

// TODO: In MINIMAL_RUNTIME builds, read this file off of a preloaded Blob, and/or embed from a string like with WASM_WORKERS==2 mode.
// TODO: In MINIMAL_RUNTIME builds, read this file off of a preloaded Blob,
// and/or embed from a string like with WASM_WORKERS==2 mode.
audioWorklet.addModule('{{{ TARGET_BASENAME }}}.aw.js').then(() => {
#if WEBAUDIO_DEBUG
console.log(`emscripten_start_wasm_audio_worklet_thread_async() addModule('audioworklet.js') completed`);
#endif
audioWorklet.bootstrapMessage = new AudioWorkletNode(audioContext, 'message', {
processorOptions: {
'$ww': _wasmWorkersID++, // Assign the loaded AudioWorkletGlobalScope a Wasm Worker ID so that it can utilize its own TLS slots, and it is recognized as not being the main browser thread.
// Assign the loaded AudioWorkletGlobalScope a Wasm Worker ID so that
// it can utilize its own TLS slots, and it is recognized as not being
// the main browser thread.
'$ww': _wasmWorkersID++,
#if MINIMAL_RUNTIME
'wasm': Module['wasm'],
'mem': wasmMemory,
@@ -187,8 +193,11 @@ let LibraryWebAudio = {
});
audioWorklet.bootstrapMessage.port.onmessage = _EmAudioDispatchProcessorCallback;

// AudioWorklets do not have an importScripts() function like Web Workers do (and AudioWorkletGlobalScope does not allow dynamic import() either),
// but instead, the main thread must load all JS code into the worklet scope. Send the application main JS script to the audio worklet.
// AudioWorklets do not have an importScripts() function like Web Workers
// do (and AudioWorkletGlobalScope does not allow dynamic import()
// either), but instead, the main thread must load all JS code into the
// worklet scope. Send the application main JS script to the audio
// worklet.
return audioWorklet.addModule(
#if MINIMAL_RUNTIME
Module['js']
@@ -205,7 +214,10 @@
},

$_EmAudioDispatchProcessorCallback: (e) => {
let data = e.data, wasmCall = data['_wsc']; // '_wsc' is short for 'wasm call', trying to use an identifier name that will never conflict with user code
let data = e.data;
// '_wsc' is short for 'wasm call', trying to use an identifier name that
// will never conflict with user code
let wasmCall = data['_wsc'];
wasmCall && getWasmTableEntry(wasmCall)(...data['x']);
},

@@ -237,7 +249,10 @@ let LibraryWebAudio = {
#endif

EmAudio[contextHandle].audioWorklet.bootstrapMessage.port.postMessage({
_wpn: UTF8ToString(HEAPU32[options]), // '_wpn' == 'Worklet Processor Name', use a deliberately mangled name so that this field won't accidentally be mixed with user submitted messages.
// '_wpn' == 'Worklet Processor Name', use a deliberately mangled name so
// that this field won't accidentally be mixed with user submitted
// messages.
_wpn: UTF8ToString(HEAPU32[options]),
audioParams,
contextHandle,
callback,
2 changes: 1 addition & 1 deletion test/test_interactive.py
@@ -278,7 +278,7 @@ def test_audio_worklet_emscripten_futex_wake(self):

# Tests a second AudioWorklet example: sine wave tone generator.
def test_audio_worklet_tone_generator(self):
self.btest('webaudio/tone_generator.c', expected='0', args=['-sAUDIO_WORKLET', '-sWASM_WORKERS'])
self.btest('webaudio/audio_worklet_tone_generator.c', expected='0', args=['-sAUDIO_WORKLET', '-sWASM_WORKERS'])

# Tests that AUDIO_WORKLET+MINIMAL_RUNTIME+MODULARIZE combination works together.
def test_audio_worklet_modularize(self):