Commit 79aca688 authored by David Reid

Make documentation narrower.

parent 3181b0c2
/*
1. Introduction
===============
miniaudio is a single file library for audio playback and capture. To use it, do the following in
one .c file:

```c
#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
```

You can do `#include "miniaudio.h"` in other parts of the program just like any other header.

miniaudio uses the concept of a "device" as the abstraction for physical devices. The idea is that
you choose a physical device to emit or capture audio from, and then move data to/from the device
when miniaudio tells you to. Data is delivered to and from devices asynchronously via a callback
which you specify when initializing the device.

When initializing the device you first need to configure it. The device configuration allows you to
specify things like the format of the data delivered via the callback, the size of the internal
buffer and the ID of the device you want to emit or capture audio from.

Once you have the device configuration set up you can initialize the device. When initializing a
device you need to allocate memory for the device object beforehand. This gives the application
complete control over how the memory is allocated. In the example below we initialize a playback
device on the stack, but you could allocate it on the heap if that suits your situation better.

```c
void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ...
}
```

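A minimal sketch of the configuration and initialization flow that the following paragraphs
describe; MY_FORMAT, MY_CHANNEL_COUNT, MY_SAMPLE_RATE and pMyCustomData are placeholders for values
of your choosing:

```c
ma_device_config config = ma_device_config_init(ma_device_type_playback);
config.playback.format   = MY_FORMAT;        // E.g. ma_format_f32.
config.playback.channels = MY_CHANNEL_COUNT; // E.g. 2 for stereo.
config.sampleRate        = MY_SAMPLE_RATE;   // E.g. 48000.
config.dataCallback      = data_callback;    // The callback defined above.
config.pUserData         = pMyCustomData;    // Accessible from the callback as pDevice->pUserData.

ma_device device;
if (ma_device_init(NULL, &config, &device) != MA_SUCCESS) {
    return -1;  // Failed to initialize the device.
}

ma_device_start(&device);     // The device is in a stopped state after initialization.

// ... your program's main loop ...

ma_device_uninit(&device);    // Uninitializing also stops the device.
```
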
In the example above, `data_callback()` is where audio data is written and read from the device.
The idea is that in playback mode you cause sound to be emitted from the speakers by writing audio
data to the output buffer (`pOutput` in the example). In capture mode you read data from the input
buffer (`pInput`) to extract sound captured by the microphone. The `frameCount` parameter tells you
how many frames can be written to the output buffer and read from the input buffer. A "frame" is
one sample for each channel. For example, in a stereo stream (2 channels), one frame is 2 samples:
one for the left, one for the right. The channel count is defined by the device config. The size in
bytes of an individual sample is defined by the sample format which is also specified in the device
config. Multi-channel audio data is always interleaved, which means the samples for each frame are
stored next to each other in memory. For example, in a stereo stream the first pair of samples will
be the left and right samples for the first frame, the second pair of samples will be the left and
right samples for the second frame, etc.

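As a concrete illustration, assuming the device was configured for stereo `ma_format_f32`, the
callback could fill the output buffer one interleaved frame at a time like this (writing silence
here):

```c
float* pOut = (float*)pOutput;
for (ma_uint32 iFrame = 0; iFrame < frameCount; iFrame += 1) {
    pOut[iFrame*2 + 0] = 0.0f;  // Left sample of frame iFrame.
    pOut[iFrame*2 + 1] = 0.0f;  // Right sample of frame iFrame.
}
```
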
The configuration of the device is defined by the `ma_device_config` structure. The config object
is always initialized with `ma_device_config_init()`. It's important to always initialize the
config with this function as it initializes it with logical defaults and ensures your program
doesn't break when new members are added to the `ma_device_config` structure. The example above
uses a fairly simple and standard device configuration. The call to `ma_device_config_init()` takes
a single parameter, which is whether the device is a playback, capture, duplex or loopback device
(loopback devices are not supported on all backends). The `config.playback.format` member sets the
sample format which can be one of the following (all formats are native-endian):

+---------------+----------------------------------------+---------------------------+
| Symbol        | Description                            | Range                     |
+---------------+----------------------------------------+---------------------------+
| ma_format_f32 | 32-bit floating point                  | [-1, 1]                   |
| ma_format_s16 | 16-bit signed integer                  | [-32768, 32767]           |
| ma_format_s24 | 24-bit signed integer (tightly packed) | [-8388608, 8388607]       |
| ma_format_s32 | 32-bit signed integer                  | [-2147483648, 2147483647] |
| ma_format_u8  | 8-bit unsigned integer                 | [0, 255]                  |
+---------------+----------------------------------------+---------------------------+

The `config.playback.channels` member sets the number of channels to use with the device. The
channel count cannot exceed MA_MAX_CHANNELS. The `config.sampleRate` member sets the sample rate
(which must be the same for both playback and capture in full-duplex configurations). This is
usually set to 44100 or 48000, but can be set to anything. It's recommended to keep this between
8000 and 384000, however.

Note that leaving the format, channel count and/or sample rate at their default values will result
in the internal device's native configuration being used, which is useful if you want to avoid the
overhead of miniaudio's automatic data conversion.

In addition to the sample format, channel count and sample rate, the data callback and user data
pointer are also set via the config. The user data pointer is not passed into the callback as a
parameter, but is instead set to the `pUserData` member of `ma_device` which you can access
directly since all miniaudio structures are transparent.

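For example, a sketch of retrieving application state inside the callback; the MyAppState type is
hypothetical and stands in for whatever you assigned to `config.pUserData`:

```c
typedef struct
{
    float volume;
} MyAppState;  // Hypothetical application state.

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    MyAppState* pState = (MyAppState*)pDevice->pUserData;  // Set via config.pUserData at init time.

    (void)pState; (void)pOutput; (void)pInput; (void)frameCount;
}
```
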
Initializing the device is done with `ma_device_init()`. This will return a result code telling you
what went wrong, if anything. On success it will return `MA_SUCCESS`. After initialization is
complete the device will be in a stopped state. To start it, use `ma_device_start()`.
Uninitializing the device will stop it, which is what the example above does, but you can also stop
the device with `ma_device_stop()`. To resume the device simply call `ma_device_start()` again.

Note that it's important to never stop or start the device from inside the callback. This will
result in a deadlock. Instead you set a variable or signal an event indicating that the device
needs to stop and handle it in a different thread. The following APIs must never be called inside
the callback:

```c
ma_device_init()
ma_device_init_ex()
ma_device_uninit()
ma_device_start()
ma_device_stop()
```

You must never try uninitializing and reinitializing a device inside the callback. You must also
never try to stop and start it from inside the callback. There are a few other things you shouldn't
do in the callback depending on your requirements, however this isn't so much a thread-safety
thing, but rather a real-time processing thing which is beyond the scope of this introduction.

The example above demonstrates the initialization of a playback device, but it works exactly the
same for capture. All you need to do is change the device type from `ma_device_type_playback` to
`ma_device_type_capture` when setting up the config, like so:

```c
ma_device_config config = ma_device_config_init(ma_device_type_capture);
config.capture.format   = MY_FORMAT;
config.capture.channels = MY_CHANNEL_COUNT;
```

In the data callback you just read from the input buffer (`pInput` in the example above) and leave
the output buffer alone (it will be set to NULL when the device type is set to
`ma_device_type_capture`).

These are the available device types and how you should handle the buffers in the callback:

+-------------------------+--------------------------------------------------------+
| ma_device_type_playback | Write to output buffer, leave input buffer untouched.  |
| ma_device_type_capture  | Read from input buffer, leave output buffer untouched. |
| ma_device_type_duplex   | Read from input buffer, write to output buffer.        |
| ma_device_type_loopback | Read from input buffer, leave output buffer untouched. |
+-------------------------+--------------------------------------------------------+

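For instance, a duplex callback that loops captured audio straight back out might look like the
sketch below. This assumes the playback and capture sides were configured with the same format and
channel count:

```c
#include <string.h>  /* For memcpy(). */

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /* Assumes the playback and capture sides share the same format and channel count. */
    ma_uint32 bytesPerFrame = ma_get_bytes_per_frame(pDevice->capture.format, pDevice->capture.channels);
    memcpy(pOutput, pInput, (size_t)frameCount * bytesPerFrame);
}
```
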
You will notice in the example above that the sample format and channel count are specified
separately for playback and capture. This is to support different data formats between the playback
and capture devices in a full-duplex system. An example may be that you want to capture audio data
as a monaural stream (one channel), but output sound to a stereo speaker system. Note that if you
use different formats between playback and capture in a full-duplex configuration you will need to
convert the data yourself. There are functions available to help you do this which will be
explained later.

The example above did not specify a physical device to connect to which means it will use the
operating system's default device. If you have multiple physical devices connected and you want to
use a specific one you will need to specify the device ID in the configuration, like so:

```c
config.playback.pDeviceID = pMyPlaybackDeviceID; // Only if requesting a playback or duplex device.
config.capture.pDeviceID  = pMyCaptureDeviceID;  // Only if requesting a capture, duplex or loopback device.
```

To retrieve the device ID you will need to perform device enumeration, however this requires the
use of a new concept called the "context". Conceptually speaking the context sits above the device.
There is one context to many devices. The purpose of the context is to represent the backend at a
more global level and to perform operations outside the scope of an individual device. Mainly it is
used for performing run-time linking against backend libraries, initializing backends and
enumerating devices. The example below shows how to enumerate devices.

```c
ma_context context;
...
ma_context_uninit(&context);
```

The first thing we do in this example is initialize a `ma_context` object with `ma_context_init()`.
The first parameter is a pointer to a list of `ma_backend` values which are used to override the
default backend priorities. When this is NULL, as in this example, miniaudio's default priorities
are used. The second parameter is the number of backends listed in the array pointed to by the
first parameter. The third parameter is a pointer to a `ma_context_config` object which can be
NULL, in which case defaults are used. The context configuration is used for setting the logging
callback, custom memory allocation callbacks, user-defined data and some backend-specific
configurations.

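For illustration, a context that prefers WASAPI and then falls back to DirectSound could be created
like this; any `ma_backend` values can be listed in priority order:

```c
ma_backend backends[] = { ma_backend_wasapi, ma_backend_dsound };

ma_context context;
if (ma_context_init(backends, sizeof(backends)/sizeof(backends[0]), NULL, &context) != MA_SUCCESS) {
    // Error.
}
```
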
Once the context has been initialized you can enumerate devices. In the example above we use the
simpler `ma_context_get_devices()`, however you can also use a callback for handling devices by
using `ma_context_enumerate_devices()`. When using `ma_context_get_devices()` you provide a pointer
to a pointer that will, upon output, be set to a pointer to a buffer containing a list of
`ma_device_info` structures. You also provide a pointer to an unsigned integer that will receive
the number of items in the returned buffer. Do not free the returned buffers as their memory is
managed internally by miniaudio.

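A sketch of that call, printing the name of each playback device so the user can pick one:

```c
ma_device_info* pPlaybackInfos;
ma_uint32 playbackCount;
ma_device_info* pCaptureInfos;
ma_uint32 captureCount;
if (ma_context_get_devices(&context, &pPlaybackInfos, &playbackCount, &pCaptureInfos, &captureCount) != MA_SUCCESS) {
    // Error.
}

for (ma_uint32 iDevice = 0; iDevice < playbackCount; iDevice += 1) {
    printf("%u - %s\n", iDevice, pPlaybackInfos[iDevice].name);  // Requires <stdio.h>.
}
```
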
The `ma_device_info` structure contains an `id` member which is the ID you pass to the device
config. It also contains the name of the device which is useful for presenting a list of devices
to the user via the UI.

When creating your own context you will want to pass it to `ma_device_init()` when initializing the
device. Passing in NULL, like we do in the first example, will result in miniaudio creating the
context for you, which you don't want to do since you've already created a context. Note that
internally the context is only tracked by its pointer, which means you must not change the location
of the `ma_context` object. If this is an issue, consider using `malloc()` to allocate memory for
the context.

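Putting it together, a sketch of initializing a device against an explicitly created context, using
the first enumerated playback device (index 0 here is just an example; normally you would let the
user choose):

```c
ma_device_config config = ma_device_config_init(ma_device_type_playback);
config.playback.pDeviceID = &pPlaybackInfos[0].id;  // ID obtained from enumeration.
config.playback.format    = ma_format_f32;
config.playback.channels  = 2;
config.sampleRate         = 48000;
config.dataCallback       = data_callback;

ma_device device;
if (ma_device_init(&context, &config, &device) != MA_SUCCESS) {
    // Error.
}
```
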
2. Building
===========
miniaudio should work cleanly out of the box without the need to download or install any
dependencies. See below for platform-specific details.

2.1. Windows
------------
The Windows build should compile cleanly on all popular compilers without the need to configure any
include paths nor link to any libraries.

2.2. macOS and iOS
------------------
The macOS build should compile cleanly without the need to download any dependencies nor link to
any libraries or frameworks. The iOS build needs to be compiled as Objective-C and will need to
link the relevant frameworks but should compile cleanly out of the box with Xcode. Compiling
through the command line requires linking to `-lpthread` and `-lm`.

Due to the way miniaudio links to frameworks at runtime, your application may not pass Apple's
notarization process. To fix this there are two options. The first is to use the
`MA_NO_RUNTIME_LINKING` option, like so:

```c
#ifdef __APPLE__
    #define MA_NO_RUNTIME_LINKING
#endif
#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
```

This will require linking with `-framework CoreFoundation -framework CoreAudio -framework AudioUnit`.
Alternatively, if you would rather keep using runtime linking you can add the following to your
entitlements.xcent file:

```
<key>com.apple.security.cs.allow-dyld-environment-variables</key>
<true/>
...
```

2.3. Linux
----------
The Linux build only requires linking to `-ldl`, `-lpthread` and `-lm`. You do not need any
development packages.

2.4. BSD
--------
The BSD build only requires linking to `-lpthread` and `-lm`. NetBSD uses audio(4), OpenBSD uses
sndio and FreeBSD uses OSS.

2.5. Android
------------
AAudio is the highest priority backend on Android. This should work out of the box without needing
any kind of compiler configuration. Support for AAudio starts with Android 8 which means older
versions will fall back to OpenSL|ES which requires API level 16+.

There have been reports that the OpenSL|ES backend fails to initialize on some Android-based
devices due to `dlopen()` failing to open "libOpenSLES.so". If this happens on your platform
you'll need to disable run-time linking with `MA_NO_RUNTIME_LINKING` and link with -lOpenSLES.

2.6. Emscripten
---------------
The Emscripten build emits Web Audio JavaScript directly and should compile cleanly out of the box.
You cannot use -std=c* compiler flags, nor -ansi.

2.7. Build Options
------------------
...

3. Definitions
==============
This section defines common terms used throughout miniaudio. Unfortunately there is often ambiguity
in the use of terms throughout the audio space, so this section is intended to clarify how
miniaudio uses each term.

3.1. Sample
-----------
A sample is a single unit of audio data. If the sample format is f32, then one sample is one 32-bit
floating point number.

3.2. Frame / PCM Frame
----------------------
A frame is a group of samples equal to the number of channels. For a stereo stream a frame is 2
samples, a mono frame is 1 sample, a 5.1 surround sound frame is 6 samples, etc. The terms "frame"
and "PCM frame" are the same thing in miniaudio. Note that this is different to a compressed frame.
If ever miniaudio needs to refer to a compressed frame, such as a FLAC frame, it will always
clarify what it's referring to with something like "FLAC frame".

3.3. Channel
------------
A stream of monaural audio that is emitted from an individual speaker in a speaker system, or
received from an individual microphone in a microphone system. A stereo stream has two channels (a
left channel, and a right channel), a 5.1 surround sound system has 6 channels, etc. Some audio
systems refer to a channel as a complex audio stream that's mixed with other channels to produce
the final mix - this is completely different to miniaudio's use of the term "channel" and should
not be confused.

3.4. Sample Rate
----------------
The sample rate in miniaudio is always expressed in Hz, such as 44100, 48000, etc. It's the number
of PCM frames that are processed per second.

3.5. Formats
------------
...

All formats are native-endian.

4. Decoding
===========
The `ma_decoder` API is used for reading audio files. Decoders are completely decoupled from
devices and can be used independently. The following formats are supported:

+---------+------------------+----------+
| Format  | Decoding Backend | Built-In |
+---------+------------------+----------+
| WAV     | dr_wav           | Yes      |
| FLAC    | dr_flac          | Yes      |
| MP3     | dr_mp3           | Yes      |
| Vorbis  | stb_vorbis       | No       |
+---------+------------------+----------+

Vorbis is supported via stb_vorbis which can be enabled by including the header section before the
implementation of miniaudio, like the following:

```c
#define STB_VORBIS_HEADER_ONLY
#include "extras/stb_vorbis.c"
...
```

A copy of stb_vorbis is included in the "extras" folder in the miniaudio repository
(https://github.com/mackron/miniaudio).

Built-in decoders are amalgamated into the implementation section of miniaudio. You can disable the
built-in decoders by specifying one or more of the following options before the miniaudio
implementation:

```c
#define MA_NO_WAV
#define MA_NO_MP3
#define MA_NO_FLAC
```

Disabling built-in decoding libraries is useful if you use these libraries independently of the
`ma_decoder` API.

A decoder can be initialized from a file with `ma_decoder_init_file()`, a block of memory with
`ma_decoder_init_memory()`, or from data delivered via callbacks with `ma_decoder_init()`. Here is
an example for loading a decoder from a file:

```c
ma_decoder decoder;
ma_result result = ma_decoder_init_file("MySong.mp3", NULL, &decoder);
if (result != MA_SUCCESS) {
    return -1;  // An error occurred.
}

...

ma_decoder_uninit(&decoder);
```

When initializing a decoder, you can optionally pass in a pointer to a `ma_decoder_config` object
(the `NULL` argument in the example above) which allows you to configure the output format, channel
count, sample rate and channel map:

```c
ma_decoder_config config = ma_decoder_config_init(ma_format_f32, 2, 48000);
```

When passing in `NULL` for the decoder config in `ma_decoder_init*()`, the output format will be
the same as that defined by the decoding backend.

Data is read from the decoder as PCM frames with `ma_decoder_read_pcm_frames()`. This will return
the number of PCM frames actually read. If the return value is less than the requested number of
PCM frames it means you've reached the end:

```c
ma_uint64 framesRead = ma_decoder_read_pcm_frames(pDecoder, pFrames, framesToRead);
```

If you want to loop back to the start, you can simply seek back to the first PCM frame:

```c
ma_decoder_seek_to_pcm_frame(pDecoder, 0);
```

When loading a decoder, miniaudio uses a trial and error technique to find the appropriate decoding
backend. This can be unnecessarily inefficient if the type is already known. In this case you can
use the `encodingFormat` variable in the decoder config to specify a specific encoding format you
want to decode:

```c
decoderConfig.encodingFormat = ma_encoding_format_wav;
```

See the `ma_encoding_format` enum for possible encoding formats.

The `ma_decoder_init_file()` API will try using the file extension to determine which decoding
backend to prefer.

5. Encoding
===========
The `ma_encoder` API is used for writing audio files. The only supported output format is WAV,
which is handled via dr_wav (amalgamated into the implementation section of miniaudio). This can be
disabled by specifying the following option before the implementation of miniaudio:

```c
#define MA_NO_WAV
```

An encoder can be initialized to write to a file with `ma_encoder_init_file()` or from data
delivered via callbacks with `ma_encoder_init()`. Below is an example for initializing an encoder
to output to a file.

```c
ma_encoder_config config = ma_encoder_config_init(ma_encoding_format_wav, FORMAT, CHANNELS, SAMPLE_RATE);

ma_encoder encoder;
if (ma_encoder_init_file("my_file.wav", &config, &encoder) != MA_SUCCESS) {
    // Error.
}

...

ma_encoder_uninit(&encoder);
```

When initializing an encoder you must specify a config which is initialized with
`ma_encoder_config_init()`. Here you must specify the file type, the output sample format, output
channel count and output sample rate. The following file types are supported:

+------------------------+-------------+
| Enum                   | Description |
+------------------------+-------------+
| ma_encoding_format_wav | WAV         |
+------------------------+-------------+

If the format, channel count or sample rate is not supported by the output file type an error will
be returned. The encoder will not perform data conversion, so you will need to convert your data
before outputting any audio. To output audio data, use `ma_encoder_write_pcm_frames()`, like in the
example below:

```c
framesWritten = ma_encoder_write_pcm_frames(&encoder, pPCMFramesToWrite, framesToWrite);
```

Encoders must be uninitialized with `ma_encoder_uninit()`.

6. Data Conversion
==================
A data conversion API is included with miniaudio which supports the majority of data conversion
requirements. This supports conversion between sample formats, channel counts (with channel
mapping) and sample rates.

6.1. Sample Format Conversion
-----------------------------
Conversion between sample formats is achieved with the `ma_pcm_*_to_*()`, `ma_pcm_convert()` and
`ma_convert_pcm_frames_format()` APIs. Use `ma_pcm_*_to_*()` to convert between two specific
formats. Use `ma_pcm_convert()` to convert based on a `ma_format` variable. Use
`ma_convert_pcm_frames_format()` to convert PCM frames where you want to specify the frame count
and channel count as a variable instead of the total sample count.

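As a sketch, converting a block of interleaved stereo frames from s16 to f32 might look like this.
The parameter orders shown (output first, then input, then counts and dither mode) are assumptions
to verify against the header:

```c
ma_int16 s16Frames[1024 * 2];  // 1024 interleaved stereo frames, assumed already filled.
float    f32Frames[1024 * 2];

// Frame-count based conversion between two ma_format values.
ma_convert_pcm_frames_format(f32Frames, ma_format_f32, s16Frames, ma_format_s16, 1024, 2, ma_dither_mode_none);

// The same conversion expressed as a total sample count with the format-specific API.
ma_pcm_s16_to_f32(f32Frames, s16Frames, 1024 * 2, ma_dither_mode_none);
```
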
6.1.1. Dithering
----------------
...

The different dithering modes include the following, in order of efficiency:

+-----------+--------------------------+
| None      | ma_dither_mode_none      |
| Rectangle | ma_dither_mode_rectangle |
| Triangle  | ma_dither_mode_triangle  |
+-----------+--------------------------+

Note that even if the dither mode is set to something other than `ma_dither_mode_none`, it will be
ignored for conversions where dithering is not needed. Dithering is available for the following
conversions:

```
s16 -> u8
...
f32 -> s16
```

Note that it is not an error to pass something other than ma_dither_mode_none for conversions where
dither is not used. It will just be ignored.

6.2. Channel Conversion
-----------------------
Channel conversion is used for channel rearrangement and conversion from one channel count to
another. The `ma_channel_converter` API is used for channel conversion. Below is an example of
initializing a simple channel converter which converts from mono to stereo.

```c
ma_channel_converter_config config = ma_channel_converter_config_init(
    ...);
```

To perform the conversion simply call `ma_channel_converter_process_pcm_frames()`, like so:

```c
if (ma_channel_converter_process_pcm_frames(&converter, pFramesOut, pFramesIn, frameCount) != MA_SUCCESS) {
    // An error occurred.
}
```

It is up to the caller to ensure the output buffer is large enough to accommodate the new PCM
frames.

Input and output PCM frames are always interleaved. Deinterleaved layouts are not supported.

6.2.1. Channel Mapping
----------------------
In addition to converting from one channel count to another, like the example above, the channel
converter can also be used to rearrange channels. When initializing the channel converter, you can
optionally pass in channel maps for both the input and output frames. If the channel counts are the
same, and each channel map contains the same channel positions with the exception that they're in
a different order, a simple shuffling of the channels will be performed. If, however, there is not
a 1:1 mapping of channel positions, or the channel counts differ, the input channels will be mixed
based on a mixing mode which is specified when initializing the `ma_channel_converter_config`
object.

When converting from mono to multi-channel, the mono channel is simply copied to each output
channel. When going the other way around, the audio of each input channel is averaged and copied
to the mono channel.

In more complicated cases blending is used. The `ma_channel_mix_mode_simple` mode will drop excess
channels and silence extra channels. For example, converting from 4 to 2 channels, the 3rd and 4th
channels will be dropped, whereas converting from 2 to 4 channels will put silence into the 3rd and
4th channels.

The `ma_channel_mix_mode_rectangle` mode uses spatial locality based on a rectangle to compute a
simple distribution between input and output. Imagine sitting in the middle of a room, with
speakers on the walls representing channel positions. The `MA_CHANNEL_FRONT_LEFT` position can be
thought of as being in the corner of the front and left walls.

Finally, the `ma_channel_mix_mode_custom_weights` mode lets you supply custom user-defined weights.
Custom weights can be passed in as the last parameter of `ma_channel_converter_config_init()`.

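As an illustration, a converter that swaps the left and right channels of a stereo stream could be
configured with explicit channel maps like this; the parameter order of
`ma_channel_converter_config_init()` (format, input channels and map, output channels and map,
mixing mode) is assumed:

```c
ma_channel inputMap[2]  = { MA_CHANNEL_FRONT_LEFT,  MA_CHANNEL_FRONT_RIGHT };
ma_channel outputMap[2] = { MA_CHANNEL_FRONT_RIGHT, MA_CHANNEL_FRONT_LEFT  };  // Swapped order.

ma_channel_converter_config config = ma_channel_converter_config_init(
    ma_format_f32, 2, inputMap, 2, outputMap, ma_channel_mix_mode_simple);
```
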
Predefined channel maps can be retrieved with `ma_get_standard_channel_map()`. This takes a
`ma_standard_channel_map` enum as its first parameter, which can be one of the following:

+-----------------------------------+-----------------------------------------------------------+
| Name                              | Description                                               |
+-----------------------------------+-----------------------------------------------------------+
...

Below are the channel maps used by default in miniaudio (`ma_standard_channel_map_default`):

...

6.3. Resampling
---------------
Resampling is achieved with the `ma_resampler` object. To create a resampler object, do something
like the following:

```c
ma_resampler_config config = ma_resampler_config_init(
    ...);

ma_resampler resampler;
if (ma_resampler_init(&config, &resampler) != MA_SUCCESS) {
    // An error occurred.
}
```

The following example shows how data can be processed:

```c
ma_uint64 frameCountIn  = ...;   // Number of input frames available.
ma_uint64 frameCountOut = ...;   // Capacity of the output buffer, in frames.
ma_resampler_process_pcm_frames(&resampler, pFramesIn, &frameCountIn, pFramesOut, &frameCountOut);

// On output, frameCountIn contains the number of input frames that were consumed and frameCountOut
// contains the number of output frames written.
```

To initialize the resampler you first need to set up a config (`ma_resampler_config`) with
`ma_resampler_config_init()`. You need to specify the sample format you want to use, the number of
channels, the input and output sample rate, and the algorithm.

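A sketch of that call, assuming the parameter order (format, channels, input rate, output rate,
algorithm):

```c
ma_resampler_config config = ma_resampler_config_init(
    ma_format_s16,                   // Sample format: ma_format_s16 or ma_format_f32.
    2,                               // Channel count.
    44100,                           // Input sample rate.
    48000,                           // Output sample rate.
    ma_resample_algorithm_linear);   // Resampling algorithm.
```
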
The sample format can be either `ma_format_s16` or `ma_format_f32`. If you need a different format
you will need to perform pre- and post-conversions yourself where necessary. Note that the format
is the same for both input and output. The format cannot be changed after initialization.

The resampler supports multiple channels and is always interleaved (both input and output). The
channel count cannot be changed after initialization.

The sample rates can be anything other than zero, and are always specified in hertz. They should be
set to something like 44100, etc. The sample rate is the only configuration property that can be
changed after initialization.

The miniaudio resampler has built-in support for the following algorithms:

...

The algorithm cannot be changed after initialization.

Processing always happens on a per PCM frame basis and always assumes interleaved input and output.
De-interleaved processing is not supported. To process frames, use
`ma_resampler_process_pcm_frames()`. On input, this function takes the number of output frames you
can fit in the output buffer and the number of input frames contained in the input buffer. On
output these variables contain the number of output frames that were written to the output buffer
and the number of input frames that were consumed in the process. You can pass in NULL for the
input buffer, in which case it will be treated as an infinitely large buffer of zeros. The output
buffer can also be NULL, in which case the processing will be treated as a seek.

The sample rate can be changed dynamically on the fly. You can set it with explicit sample rates
using `ma_resampler_set_rate()`, or with a decimal ratio using `ma_resampler_set_rate_ratio()`. The
ratio is expressed as in/out.

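For example, switching an existing resampler to a 44100 -> 22050 conversion could be expressed
either way (both calls are assumed to return a result code):

```c
// With explicit input and output sample rates...
ma_resampler_set_rate(&resampler, 44100, 22050);

// ...or as a ratio of input rate to output rate (in/out), which here is 2.0.
ma_resampler_set_rate_ratio(&resampler, 2.0f);
```
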
Sometimes it's useful to know exactly how many input frames will be required to output a specific
number of frames. You can calculate this with `ma_resampler_get_required_input_frame_count()`.
Likewise, it's sometimes useful to know exactly how many frames would be output given a certain
number of input frames. You can do this with `ma_resampler_get_expected_output_frame_count()`.

Due to the nature of how resampling works, the resampler introduces some latency. This can be
retrieved in terms of both the input rate and the output rate with
`ma_resampler_get_input_latency()` and `ma_resampler_get_output_latency()`.

6.3.1. Resampling Algorithms
----------------------------
The choice of resampling algorithm depends on your situation and requirements.

6.3.1.1. Linear Resampling 6.3.1.1. Linear Resampling
-------------------------- --------------------------
The linear resampler is the fastest, but comes at the expense of poorer quality. There is, however, some control over the quality of the linear resampler which The linear resampler is the fastest, but comes at the expense of poorer quality. There is, however,
may make it a suitable option depending on your requirements. some control over the quality of the linear resampler which may make it a suitable option depending
on your requirements.
The linear resampler performs low-pass filtering before or after downsampling or upsampling,
depending on the sample rates you're converting between. When decreasing the sample rate, the
low-pass filter will be applied before downsampling. When increasing the rate it will be performed
after upsampling. By default a fourth order low-pass filter will be applied. This can be configured
via the `lpfOrder` configuration variable. Setting this to 0 will disable filtering.
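For instance, a sketch of a linear resampler with filtering disabled might look like the following
(the format, channel count and sample rates are placeholders):

```c
/* Sketch: a linear resampler with the low-pass filter disabled. Values are illustrative. */
ma_resampler_config config = ma_resampler_config_init(
    ma_format_f32, 2, 44100, 48000, ma_resample_algorithm_linear);
config.linear.lpfOrder = 0;     /* 0 disables low-pass filtering. */

ma_resampler resampler;
ma_result result = ma_resampler_init(&config, &resampler);
if (result != MA_SUCCESS) {
    // Failed to initialize the resampler.
}
```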
The low-pass filter has a cutoff frequency which defaults to half the sample rate of the lowest of
the input and output sample rates (Nyquist Frequency). This can be controlled with the
`lpfNyquistFactor` config variable. This defaults to 1, and should be in the range of 0..1,
although a value of 0 does not make sense and should be avoided. A value of 1 will use the Nyquist
Frequency as the cutoff. A value of 0.5 will use half the Nyquist Frequency as the cutoff, etc.
Values less than 1 will result in a more washed out sound due to more of the higher frequencies
being removed. This config variable has no impact on performance and is a purely perceptual
configuration.
The API for the linear resampler is the same as the main resampler API, only it's called
`ma_linear_resampler`.
6.4. General Data Conversion
----------------------------
The `ma_data_converter` API can be used to wrap sample format conversion, channel conversion and
resampling into one operation. This is what miniaudio uses internally to convert between the format
requested when the device was initialized and the format of the backend's native device. The API
for general data conversion is very similar to the resampling API. Create a `ma_data_converter`
object like this:
```c
ma_data_converter_config config = ma_data_converter_config_init(
...@@ -900,8 +998,9 @@ conversion is very similar to the resampling API. Create a `ma_data_converter` o
}
```
In the example above we use `ma_data_converter_config_init()` to initialize the config, however
there are many more properties that can be configured, such as channel maps and resampling quality.
Something like the following may be more suitable depending on your requirements:
```c
ma_data_converter_config config = ma_data_converter_config_init_default();
...@@ -935,25 +1034,34 @@ The following example shows how data can be processed
// of output frames written.
```
The data converter supports multiple channels and is always interleaved (both input and output).
The channel count cannot be changed after initialization.
Sample rates can be anything other than zero, and are always specified in hertz. They should be set
to something like 44100, etc. The sample rate is the only configuration property that can be
changed after initialization, but only if the `resampling.allowDynamicSampleRate` member of
`ma_data_converter_config` is set to `MA_TRUE`. To change the sample rate, use
`ma_data_converter_set_rate()` or `ma_data_converter_set_rate_ratio()`. The ratio must be in/out.
The resampling algorithm cannot be changed after initialization.
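A sketch of what that might look like (assuming a `converter` object initialized as in the example
above; the sample rates are placeholders):

```c
/* Sketch: dynamic sample rate changes must be enabled when the config is set up. */
ma_data_converter_config config = ma_data_converter_config_init_default();
config.resampling.allowDynamicSampleRate = MA_TRUE;
/* ...set the remaining config properties and initialize `converter` as shown earlier... */

/* Later, change the rate on the fly. */
ma_result result = ma_data_converter_set_rate(&converter, 44100, 48000);
if (result != MA_SUCCESS) {
    // Failed to change the sample rate.
}
```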
Processing always happens on a per PCM frame basis and always assumes interleaved input and output.
De-interleaved processing is not supported. To process frames, use
`ma_data_converter_process_pcm_frames()`. On input, this function takes the number of output frames
you can fit in the output buffer and the number of input frames contained in the input buffer. On
output these variables contain the number of output frames that were written to the output buffer
and the number of input frames that were consumed in the process. You can pass in NULL for the
input buffer in which case it will be treated as an infinitely large buffer of zeros. The output
buffer can also be NULL, in which case the processing will be treated as seek.
Sometimes it's useful to know exactly how many input frames will be required to output a specific
number of frames. You can calculate this with `ma_data_converter_get_required_input_frame_count()`.
Likewise, it's sometimes useful to know exactly how many frames would be output given a certain
number of input frames. You can do this with `ma_data_converter_get_expected_output_frame_count()`.
Due to the nature of how resampling works, the data converter introduces some latency if resampling
is required. This can be retrieved in terms of both the input rate and the output rate with
`ma_data_converter_get_input_latency()` and `ma_data_converter_get_output_latency()`.
...@@ -976,24 +1084,29 @@ Biquad filtering is achieved with the `ma_biquad` API. Example:
ma_biquad_process_pcm_frames(&biquad, pFramesOut, pFramesIn, frameCount);
```
Biquad filtering is implemented using transposed direct form 2. The numerator coefficients are b0,
b1 and b2, and the denominator coefficients are a0, a1 and a2. The a0 coefficient is required and
coefficients must not be pre-normalized.
Supported formats are `ma_format_s16` and `ma_format_f32`. If you need to use a different format
you need to convert it yourself beforehand. When using `ma_format_s16` the biquad filter will use
fixed point arithmetic. When using `ma_format_f32`, floating point arithmetic will be used.
Input and output frames are always interleaved.
Filtering can be applied in-place by passing in the same pointer for both the input and output
buffers, like so:
```c
ma_biquad_process_pcm_frames(&biquad, pMyData, pMyData, frameCount);
```
If you need to change the values of the coefficients, but maintain the values in the registers you
can do so with `ma_biquad_reinit()`. This is useful if you need to change the properties of the
filter while keeping the values of registers valid to avoid glitching. Do not use
`ma_biquad_init()` for this as it will do a full initialization which involves clearing the
registers to 0. Note that changing the format or channel count after initialization is invalid and
will result in an error.
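A minimal sketch of reinitializing in place (the coefficient values, `channels` and the `biquad`
object are placeholders for your own values):

```c
/* Sketch: swap in new coefficients without resetting the filter's internal registers. */
ma_biquad_config newConfig = ma_biquad_config_init(
    ma_format_f32, channels,
    newB0, newB1, newB2,    /* Numerator coefficients (placeholders). */
    newA0, newA1, newA2);   /* Denominator coefficients (placeholders). */

ma_result result = ma_biquad_reinit(&newConfig, &biquad);
if (result != MA_SUCCESS) {
    // Failed to update the filter.
}
```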
7.2. Low-Pass Filtering
...@@ -1022,16 +1135,18 @@ Low-pass filter example:
ma_lpf_process_pcm_frames(&lpf, pFramesOut, pFramesIn, frameCount);
```
Supported formats are `ma_format_s16` and `ma_format_f32`. If you need to use a different format
you need to convert it yourself beforehand. Input and output frames are always interleaved.
Filtering can be applied in-place by passing in the same pointer for both the input and output
buffers, like so:
```c
ma_lpf_process_pcm_frames(&lpf, pMyData, pMyData, frameCount);
```
The maximum filter order is limited to `MA_MAX_FILTER_ORDER` which is set to 8. If you need more,
you can chain first and second order filters together.
```c
for (iFilter = 0; iFilter < filterCount; iFilter += 1) {
...@@ -1039,15 +1154,18 @@ The maximum filter order is limited to `MA_MAX_FILTER_ORDER` which is set to 8.
}
```
If you need to change the configuration of the filter, but need to maintain the state of internal
registers you can do so with `ma_lpf_reinit()`. This may be useful if you need to change the sample
rate and/or cutoff frequency dynamically while maintaining smooth transitions. Note that changing
the format or channel count after initialization is invalid and will result in an error.
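As a sketch (assuming an `lpf` object initialized earlier; `channels`, `sampleRate` and
`newCutoffInHz` are placeholders):

```c
/* Sketch: change the cutoff frequency on the fly while keeping the filter's internal state. */
ma_lpf_config newConfig = ma_lpf_config_init(ma_format_f32, channels, sampleRate, newCutoffInHz, 4);

ma_result result = ma_lpf_reinit(&newConfig, &lpf);
if (result != MA_SUCCESS) {
    // Failed to update the filter.
}
```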
The `ma_lpf` object supports a configurable order, but if you only need a first order filter you
may want to consider using `ma_lpf1`. Likewise, if you only need a second order filter you can use
`ma_lpf2`. The advantage of this is that they're lighter weight and a bit more efficient.
If an even filter order is specified, a series of second order filters will be processed in a
chain. If an odd filter order is specified, a first order filter will be applied, followed by a
series of second order filters in a chain.
7.3. High-Pass Filtering
...@@ -1062,8 +1180,8 @@ High-pass filtering is achieved with the following APIs:
| ma_hpf  | High order high-pass filter (Butterworth) |
+---------+-------------------------------------------+
High-pass filters work exactly the same as low-pass filters, only the APIs are called `ma_hpf1`,
`ma_hpf2` and `ma_hpf`. See example code for low-pass filters for example usage.
7.4. Band-Pass Filtering
...@@ -1077,9 +1195,10 @@ Band-pass filtering is achieved with the following APIs:
| ma_bpf  | High order band-pass filter   |
+---------+-------------------------------+
Band-pass filters work exactly the same as low-pass filters, only the APIs are called `ma_bpf2` and
`ma_bpf`. See example code for low-pass filters for example usage. Note that the order for
band-pass filters must be an even number which means there is no first order band-pass filter,
unlike low-pass and high-pass filters.
7.5. Notch Filtering
...@@ -1114,7 +1233,8 @@ Low shelf filtering is achieved with the following APIs:
| ma_loshelf2 | Second order low shelf filter            |
+-------------+------------------------------------------+
Where a high-pass filter is used to eliminate lower frequencies, a low shelf filter can be used to
just turn them down rather than eliminate them entirely.
7.8. High Shelf Filtering
...@@ -1127,8 +1247,9 @@ High shelf filtering is achieved with the following APIs:
| ma_hishelf2 | Second order high shelf filter           |
+-------------+------------------------------------------+
The high shelf filter has the same API as the low shelf filter, only you would use `ma_hishelf`
instead of `ma_loshelf`. Where a low shelf filter is used to adjust the volume of low frequencies,
the high shelf filter does the same thing for high frequencies.
...@@ -1138,7 +1259,8 @@ adjust the volume of low frequencies, the high shelf filter does the same thing
8.1. Waveforms
--------------
miniaudio supports generation of sine, square, triangle and sawtooth waveforms. This is achieved
with the `ma_waveform` API. Example:
```c
ma_waveform_config config = ma_waveform_config_init(
...@@ -1160,11 +1282,12 @@ miniaudio supports generation of sine, square, triangle and sawtooth waveforms.
ma_waveform_read_pcm_frames(&waveform, pOutput, frameCount);
```
The amplitude, frequency, type, and sample rate can be changed dynamically with
`ma_waveform_set_amplitude()`, `ma_waveform_set_frequency()`, `ma_waveform_set_type()`, and
`ma_waveform_set_sample_rate()` respectively.
You can invert the waveform by setting the amplitude to a negative value. You can use this to
control whether or not a sawtooth has a positive or negative ramp, for example.
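For example, something along these lines (the values are arbitrary and `waveform` is assumed to be
the object initialized above):

```c
/* Sketch: adjust the generator at runtime. The values here are arbitrary. */
ma_waveform_set_type(&waveform, ma_waveform_type_sawtooth);
ma_waveform_set_frequency(&waveform, 440);
ma_waveform_set_amplitude(&waveform, -0.2);     /* A negative amplitude inverts the ramp. */
ma_waveform_set_sample_rate(&waveform, 48000);
```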
Below are the supported waveform types:
...@@ -1202,13 +1325,16 @@ miniaudio supports generation of white, pink and Brownian noise via the `ma_nois
ma_noise_read_pcm_frames(&noise, pOutput, frameCount);
```
The noise API uses simple LCG random number generation. It supports a custom seed which is useful
for things like automated testing requiring reproducibility. Setting the seed to zero will default
to `MA_DEFAULT_LCG_SEED`.
The amplitude, seed, and type can be changed dynamically with `ma_noise_set_amplitude()`,
`ma_noise_set_seed()`, and `ma_noise_set_type()` respectively.
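A quick sketch of this (assuming `noise` is the object from the example above; the values are
arbitrary):

```c
/* Sketch: reconfigure the noise generator at runtime. Values are arbitrary. */
ma_noise_set_type(&noise, ma_noise_type_pink);
ma_noise_set_seed(&noise, 1234);            /* A fixed seed gives reproducible output. */
ma_noise_set_amplitude(&noise, 0.25);
```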
By default, the noise API will use different values for different channels. So, for example, the
left side in a stereo stream will be different to the right side. To instead have each channel use
the same random value, set the `duplicateChannels` member of the noise config to true, like so:
```c
config.duplicateChannels = MA_TRUE;
...@@ -1228,8 +1354,9 @@ Below are the supported noise types.
9. Audio Buffers
================
miniaudio supports reading from a buffer of raw audio data via the `ma_audio_buffer` API. This can
read from memory that's managed by the application, but can also handle the memory management for
you internally. Memory management is flexible and should support most use cases.
Audio buffers are initialised using the standard configuration system used everywhere in miniaudio:
...@@ -1252,11 +1379,14 @@ Audio buffers are initialised using the standard configuration system used every
ma_audio_buffer_uninit(&buffer);
```
In the example above, the memory pointed to by `pExistingData` will *not* be copied and is how an
application can do self-managed memory allocation. If you would rather make a copy of the data, use
`ma_audio_buffer_init_copy()`. To uninitialize the buffer, use `ma_audio_buffer_uninit()`.
Sometimes it can be convenient to allocate the memory for the `ma_audio_buffer` structure and the
raw audio data in a contiguous block of memory. That is, the raw audio data will be located
immediately after the `ma_audio_buffer` structure. To do this, use
`ma_audio_buffer_alloc_and_init()`:
```c
ma_audio_buffer_config config = ma_audio_buffer_config_init(
...@@ -1277,13 +1407,18 @@ the raw audio data will be located immediately after the `ma_audio_buffer` struc
ma_audio_buffer_uninit_and_free(&buffer);
```
If you initialize the buffer with `ma_audio_buffer_alloc_and_init()` you should uninitialize it
with `ma_audio_buffer_uninit_and_free()`. In the example above, the memory pointed to by
`pExistingData` will be copied into the buffer, which is contrary to the behavior of
`ma_audio_buffer_init()`.
An audio buffer has a playback cursor just like a decoder. As you read frames from the buffer, the
cursor moves forward. The last parameter (`loop`) can be used to determine if the buffer should
loop. The return value is the number of frames actually read. If this is less than the number of
frames requested it means the end has been reached. This should never happen if the `loop`
parameter is set to true. If you want to manually loop back to the start, you can do so with
`ma_audio_buffer_seek_to_pcm_frame(pAudioBuffer, 0)`. Below is an example for reading data from an
audio buffer.
```c
ma_uint64 framesRead = ma_audio_buffer_read_pcm_frames(pAudioBuffer, pFramesOut, desiredFrameCount, isLooping);
...@@ -1292,8 +1427,8 @@ with with `ma_audio_buffer_seek_to_pcm_frame(pAudioBuffer, 0)`. Below is an exam
}
```
Sometimes you may want to avoid the cost of data movement between the internal buffer and the
output buffer. Instead you can use memory mapping to retrieve a pointer to a segment of data:
```c
void* pMappedFrames;
...@@ -1309,23 +1444,30 @@ pointer to a segment of data:
}
```
When you use memory mapping, the read cursor is incremented by the frame count passed in to
`ma_audio_buffer_unmap()`. If you decide not to process every frame you can pass in a value smaller
than the value returned by `ma_audio_buffer_map()`. The disadvantage to using memory mapping is
that it does not handle looping for you. You can determine if the buffer is at the end for the
purpose of looping with `ma_audio_buffer_at_end()` or by inspecting the return value of
`ma_audio_buffer_unmap()` and checking if it equals `MA_AT_END`. You should not treat `MA_AT_END`
as an error when returned by `ma_audio_buffer_unmap()`.
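A small sketch of that pattern (the `framesActuallyProcessed` count is a placeholder for however
many of the mapped frames you ended up consuming):

```c
/* Sketch: commit only the frames that were actually processed and treat MA_AT_END as end-of-buffer. */
ma_result result = ma_audio_buffer_unmap(pAudioBuffer, framesActuallyProcessed);
if (result == MA_AT_END) {
    ma_audio_buffer_seek_to_pcm_frame(pAudioBuffer, 0);     /* Loop manually if desired. */
} else if (result != MA_SUCCESS) {
    // An error occurred.
}
```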
10. Ring Buffers
================
miniaudio supports lock free (single producer, single consumer) ring buffers which are exposed via
the `ma_rb` and `ma_pcm_rb` APIs. The `ma_rb` API operates on bytes, whereas the `ma_pcm_rb`
operates on PCM frames. They are otherwise identical as `ma_pcm_rb` is just a wrapper around
`ma_rb`.
Unlike most other APIs in miniaudio, ring buffers support both interleaved and deinterleaved
streams. The caller can also allocate their own backing memory for the ring buffer to use
internally for added flexibility. Otherwise the ring buffer will manage its internal memory for
you.
The examples below use the PCM frame variant of the ring buffer since that's most likely the one
you will want to use. To initialize a ring buffer, do something like the following:
```c
ma_pcm_rb rb;
...@@ -1335,35 +1477,49 @@ something like the following:
}
```
The `ma_pcm_rb_init()` function takes the sample format and channel count as parameters because
it's the PCM variant of the ring buffer API. For the regular ring buffer that operates on bytes you
would call `ma_rb_init()` which leaves these out and just takes the size of the buffer in bytes
instead of frames. The fourth parameter is an optional pre-allocated buffer and the fifth parameter
is a pointer to a `ma_allocation_callbacks` structure for custom memory allocation routines.
Passing in `NULL` for this results in `MA_MALLOC()` and `MA_FREE()` being used.
Use `ma_pcm_rb_init_ex()` if you need a deinterleaved buffer. The data for each sub-buffer is
offset from each other based on the stride. To manage your sub-buffers you can use
`ma_pcm_rb_get_subbuffer_stride()`, `ma_pcm_rb_get_subbuffer_offset()` and
`ma_pcm_rb_get_subbuffer_ptr()`.
Use `ma_pcm_rb_acquire_read()` and `ma_pcm_rb_acquire_write()` to retrieve a pointer to a section
of the ring buffer. You specify the number of frames you need, and on output it will be set to what
was actually acquired. If the read or write pointer is positioned such that the number of frames
requested will require a loop, it will be clamped to the end of the buffer. Therefore, the number
of frames you're given may be less than the number you requested.
After calling `ma_pcm_rb_acquire_read()` or `ma_pcm_rb_acquire_write()`, you do your work on the
buffer and then "commit" it with `ma_pcm_rb_commit_read()` or `ma_pcm_rb_commit_write()`. This is
where the read/write pointers are updated. When you commit you need to pass in the buffer that was
returned by the earlier call to `ma_pcm_rb_acquire_read()` or `ma_pcm_rb_acquire_write()`, which is
only used for validation. The number of frames passed to `ma_pcm_rb_commit_read()` and
`ma_pcm_rb_commit_write()` is what's used to increment the pointers, and can be less than what was
originally requested.
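The acquire/commit pattern on the write (producer) side might look something like this sketch,
using the `rb` object from the earlier example (the frame count and fill logic are placeholders):

```c
/* Sketch of the acquire/commit pattern on the producer side. */
void* pWriteBuffer;
ma_uint32 framesToWrite = 256;  /* On input: how many frames we want. On output: how many we got. */

ma_result result = ma_pcm_rb_acquire_write(&rb, &framesToWrite, &pWriteBuffer);
if (result == MA_SUCCESS) {
    /* Fill pWriteBuffer with up to framesToWrite frames of audio here. */
    ma_pcm_rb_commit_write(&rb, framesToWrite, pWriteBuffer);
}
```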
If you want to correct for drift between the write pointer and the read pointer you can use a
combination of `ma_pcm_rb_pointer_distance()`, `ma_pcm_rb_seek_read()` and
`ma_pcm_rb_seek_write()`. Note that you can only move the pointers forward, and you should only
move the read pointer forward via the consumer thread, and the write pointer forward by the
producer thread. If there is too much space between the pointers, move the read pointer forward. If
there is too little space between the pointers, move the write pointer forward.
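As a rough sketch of drift correction from the consumer side (`maxLatencyInFrames` is an assumed
threshold you would tune for your application):

```c
/* Sketch: if the reader has fallen too far behind the writer, skip the read pointer forward. */
ma_int32 distanceInFrames = ma_pcm_rb_pointer_distance(&rb);
if (distanceInFrames > (ma_int32)maxLatencyInFrames) {
    ma_pcm_rb_seek_read(&rb, (ma_uint32)(distanceInFrames - (ma_int32)maxLatencyInFrames));
}
```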
You can use a ring buffer at the byte level instead of the PCM frame level by using the `ma_rb`
API. This is exactly the same, only you will use the `ma_rb` functions instead of `ma_pcm_rb` and
instead of frame counts you will pass around byte counts.
The maximum size of the buffer in bytes is `0x7FFFFFFF-(MA_SIMD_ALIGNMENT-1)` due to the most
significant bit being used to encode a loop flag and the internally managed buffers always being
aligned to `MA_SIMD_ALIGNMENT`.
Note that the ring buffer is only thread safe when used by a single consumer thread and single
producer thread.
...@@ -1395,24 +1551,32 @@ Some backends have some nuance details you may want to be aware of.
11.1. WASAPI
------------
- Low-latency shared mode will be disabled when using an application-defined sample rate which is
  different to the device's native sample rate. To work around this, set `wasapi.noAutoConvertSRC`
  to true in the device config. This is due to IAudioClient3_InitializeSharedAudioStream() failing
  when the `AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM` flag is specified. Setting wasapi.noAutoConvertSRC
  will result in miniaudio's internal resampler being used instead which will in turn enable the
  use of low-latency shared mode.
11.2. PulseAudio
----------------
- If you experience bad glitching/noise on Arch Linux, consider this fix from the Arch wiki:
  https://wiki.archlinux.org/index.php/PulseAudio/Troubleshooting#Glitches,_skips_or_crackling.
  Alternatively, consider using a different backend such as ALSA.
11.3. Android
-------------
- To capture audio on Android, remember to add the RECORD_AUDIO permission to your manifest:
  `<uses-permission android:name="android.permission.RECORD_AUDIO" />`
- With OpenSL|ES, only a single ma_context can be active at any given time. This is due to a
  limitation with OpenSL|ES.
- With AAudio, only default devices are enumerated. This is due to AAudio not having an enumeration
  API (devices are enumerated through Java). You can however perform your own device enumeration
  through Java and then set the ID in the ma_device_id structure (ma_device_id.aaudio) and pass it
  to ma_device_init().
- The backend API will perform resampling where possible. The reason for this as opposed to using
  miniaudio's built-in resampler is to take advantage of any potential device-specific
  optimizations the driver may implement.
11.4. UWP
---------
...@@ -1431,26 +1595,34 @@ Some backends have some nuance details you may want to be aware of.
11.5. Web Audio / Emscripten
----------------------------
- You cannot use `-std=c*` compiler flags, nor `-ansi`. This only applies to the Emscripten build.
- The first time a context is initialized it will create a global object called "miniaudio" whose
  primary purpose is to act as a factory for device objects.
- Currently the Web Audio backend uses ScriptProcessorNodes, but this may need to change later as
  they've been deprecated.
- Google has implemented a policy in their browsers that prevents automatic media output without
  first receiving some kind of user input. The following web page has additional details:
  https://developers.google.com/web/updates/2017/09/autoplay-policy-changes. Starting the device
  may fail if you try to start playback without first handling some kind of user input.
12. Miscellaneous Notes
=======================
- Automatic stream routing is enabled on a per-backend basis. Support is explicitly enabled for
  WASAPI and Core Audio, however other backends such as PulseAudio may naturally support it, though
  not all have been tested.
- The contents of the output buffer passed into the data callback will always be pre-initialized to
  silence unless the `noPreZeroedOutputBuffer` config variable in `ma_device_config` is set to
  true, in which case it'll be undefined which will require you to write something to the entire
  buffer.
- By default miniaudio will automatically clip samples. This only applies when the playback sample
  format is configured as `ma_format_f32`. If you are doing clipping yourself, you can disable this
  overhead by setting `noClip` to true in the device config.
- The sndio backend is currently only enabled on OpenBSD builds.
- The audio(4) backend is supported on OpenBSD, but you may need to disable sndiod before you can
  use it.
- Note that GCC and Clang require `-msse2`, `-mavx2`, etc. for SIMD optimizations.
- When compiling with VC6 and earlier, decoding is restricted to files less than 2GB in size. This
  is due to 64-bit file APIs not being available.
*/
#ifndef miniaudio_h