Commit 3d82237e authored by David Reid

Update documentation for examples.

parent b80f7f94
/*
Shows one way to implement a data callback that is called with a fixed frame count.

miniaudio does not have built-in support for firing the data callback with fixed sized buffers. In order to support
this you need to implement a layer that sits on top of the normal data callback. This example demonstrates one way of
doing this.

This example uses a ring buffer to act as the intermediary buffer between the low-level device callback and the fixed
sized callback. You do not need to use a ring buffer here, but it's a good opportunity to demonstrate how to use
miniaudio's ring buffer API. The ring buffer in this example is in global scope for simplicity, but you can pass it
around as user data for the device (device.pUserData).

This example only works for output devices, but can be implemented for input devices by simply swapping the direction
of data movement.
*/
#define MINIAUDIO_IMPLEMENTATION
......
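As a rough sketch of the layering idea described above (not the example's actual code, which is elided here and uses miniaudio's ring buffer API), the device callback below drains a small intermediate buffer and refills it by invoking a fixed sized callback in FIXED_FRAME_COUNT chunks. The names `fixed_size_callback()`, the globals and the frame/channel counts are hypothetical, and `miniaudio.h` is assumed to be included as in the example.

```
#include <string.h> /* For memcpy() and memset(). */

#define FIXED_FRAME_COUNT 480
#define CHANNEL_COUNT     2

static float     g_buffer[FIXED_FRAME_COUNT * CHANNEL_COUNT];
static ma_uint32 g_framesRemaining = 0;  /* Frames in g_buffer not yet delivered to the device. */

/* Always called with exactly FIXED_FRAME_COUNT frames. Generate or mix your audio here. */
static void fixed_size_callback(float* pFramesOut, ma_uint32 frameCount)
{
    memset(pFramesOut, 0, frameCount * CHANNEL_COUNT * sizeof(float));  /* Silence for the sketch. */
}

static void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    float* pOut = (float*)pOutput;

    while (frameCount > 0) {
        ma_uint32 framesToCopy;
        ma_uint32 offsetInFrames;

        /* Refill the intermediate buffer in fixed sized chunks whenever it runs dry. */
        if (g_framesRemaining == 0) {
            fixed_size_callback(g_buffer, FIXED_FRAME_COUNT);
            g_framesRemaining = FIXED_FRAME_COUNT;
        }

        framesToCopy   = (frameCount < g_framesRemaining) ? frameCount : g_framesRemaining;
        offsetInFrames = FIXED_FRAME_COUNT - g_framesRemaining;

        memcpy(pOut, g_buffer + offsetInFrames * CHANNEL_COUNT, framesToCopy * CHANNEL_COUNT * sizeof(float));

        pOut              += framesToCopy * CHANNEL_COUNT;
        frameCount        -= framesToCopy;
        g_framesRemaining -= framesToCopy;
    }

    (void)pDevice;
    (void)pInput;
}
```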
/*
Demonstrates how to capture data from a microphone using the low-level API.

This example simply captures data from your default microphone until you press Enter. The output is saved to the file
specified on the command line.

Capturing works in a very similar way to playback. The only difference is the direction of data movement. Instead of
the application sending data to the device, the device will send data to the application. This example just writes the
data received by the microphone straight to a WAV file.
*/
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
......
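A minimal capture-device sketch along the lines of what this example describes (not the example's actual code): the format, channel count and sample rate are arbitrary, and the WAV writing is left as a comment because the encoder API's exact signature differs between miniaudio versions.

```
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
#include <stdio.h>

static void capture_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /* pInput holds frameCount frames of captured audio. Write them to your WAV encoder here. */
    (void)pDevice;
    (void)pOutput;    /* There is no playback side on a capture-only device. */
    (void)pInput;
    (void)frameCount;
}

int main(void)
{
    ma_device device;
    ma_device_config config = ma_device_config_init(ma_device_type_capture);
    config.capture.format   = ma_format_f32;
    config.capture.channels = 2;
    config.sampleRate       = 44100;
    config.dataCallback     = capture_callback;

    if (ma_device_init(NULL, &config, &device) != MA_SUCCESS) {
        return -1;
    }

    ma_device_start(&device);   /* Audio now arrives via capture_callback(). */
    printf("Recording... press Enter to stop.\n");
    getchar();

    ma_device_uninit(&device);
    return 0;
}
```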
/*
Demonstrates duplex mode, where data is captured from a microphone and then output to a playback device.

This example captures audio from the default microphone and then outputs it straight to the default playback device
without any kind of modification. If you wanted to, you could also apply filters and effects to the input stream
before outputting to the playback device.

Note that the microphone and playback device must run in lockstep. Any kind of timing deviation will result in audible
glitching which the backend may not be able to recover from. For this reason, miniaudio forces you to use the same
sample rate for both capture and playback. If internally the native sample rates differ, miniaudio will perform the
sample rate conversion for you automatically.
*/
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
......
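A duplex sketch of the setup described above (not the example's actual code): a single device of type ma_device_type_duplex is created and the callback copies the captured frames straight to the output. It assumes capture and playback use the same format and channel count so the copy is a plain memcpy.

```
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
#include <string.h>
#include <stdio.h>

static void duplex_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /* Both sides use f32 stereo here, so one frame is two floats. */
    memcpy(pOutput, pInput, frameCount * 2 * sizeof(float));
    (void)pDevice;
}

int main(void)
{
    ma_device device;
    ma_device_config config  = ma_device_config_init(ma_device_type_duplex);
    config.capture.format    = ma_format_f32;
    config.capture.channels  = 2;
    config.playback.format   = ma_format_f32;
    config.playback.channels = 2;
    config.sampleRate        = 48000;
    config.dataCallback      = duplex_callback;

    if (ma_device_init(NULL, &config, &device) != MA_SUCCESS) {
        return -1;
    }

    ma_device_start(&device);
    printf("Echoing the microphone to the speakers... press Enter to stop.\n");
    getchar();

    ma_device_uninit(&device);
    return 0;
}
```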
/*
Demonstrates how to enumerate over devices.

Device enumeration requires a `ma_context` object which is initialized with `ma_context_init()`. Conceptually, the
context sits above a device. You can have many devices to one context.

If you use device enumeration, you should explicitly specify the same context you used for enumeration in the call to
`ma_device_init()` when you initialize your devices.
*/
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
......
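A minimal enumeration sketch following the description above (not the example's actual code): a context is initialized, the playback and capture devices are listed by name, and the context is torn down. The same context would then be passed to ma_device_init() when initializing a device.

```
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
#include <stdio.h>

int main(void)
{
    ma_context      context;
    ma_device_info* pPlaybackInfos;
    ma_uint32       playbackCount;
    ma_device_info* pCaptureInfos;
    ma_uint32       captureCount;
    ma_uint32       i;

    if (ma_context_init(NULL, 0, NULL, &context) != MA_SUCCESS) {
        return -1;
    }

    if (ma_context_get_devices(&context, &pPlaybackInfos, &playbackCount, &pCaptureInfos, &captureCount) != MA_SUCCESS) {
        ma_context_uninit(&context);
        return -1;
    }

    for (i = 0; i < playbackCount; i += 1) {
        printf("Playback %u: %s\n", i, pPlaybackInfos[i].name);
    }
    for (i = 0; i < captureCount; i += 1) {
        printf("Capture %u: %s\n", i, pCaptureInfos[i].name);
    }

    ma_context_uninit(&context);
    return 0;
}
```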
/*
Demonstrates how to implement loopback recording.

This example simply captures data from your default playback device until you press Enter. The output is saved to the
file specified on the command line.

Loopback mode is when you record audio that is played from a given speaker. It is only supported on WASAPI, but can be
used indirectly with PulseAudio by choosing the appropriate loopback device after enumeration.

To use loopback mode you just need to set the device type to ma_device_type_loopback and set the capture device config
properties. The output buffer in the callback will be null whereas the input buffer will be valid.
......
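A loopback sketch of the configuration described above (not the example's actual code): the device type is ma_device_type_loopback, the capture side of the config is what gets filled out, and in the callback pOutput is always NULL while pInput holds what the system is playing.

```
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
#include <stdio.h>

static void loopback_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /* pInput is the audio the system is currently playing. Write it to a file, analyze it, etc. */
    (void)pDevice;
    (void)pOutput;      /* Always NULL in loopback mode. */
    (void)pInput;
    (void)frameCount;
}

int main(void)
{
    ma_device device;
    ma_device_config config = ma_device_config_init(ma_device_type_loopback);
    config.capture.format   = ma_format_f32;
    config.capture.channels = 2;
    config.sampleRate       = 44100;
    config.dataCallback     = loopback_callback;

    if (ma_device_init(NULL, &config, &device) != MA_SUCCESS) {
        return -1;   /* Loopback mode is only supported on WASAPI. */
    }

    ma_device_start(&device);
    printf("Capturing system output... press Enter to stop.\n");
    getchar();

    ma_device_uninit(&device);
    return 0;
}
```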
/*
Shows one way to handle looping of a sound.

This example uses a decoder as the data source. Decoders can be used with the `ma_data_source` API which, conveniently,
supports looping via the `ma_data_source_read_pcm_frames()` API. To use it, all you need to do is pass a pointer to the
decoder straight into `ma_data_source_read_pcm_frames()` and it will just work.
*/
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
......
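A minimal sketch of the callback this describes (not the example's actual code), assuming a decoder initialized elsewhere (e.g. with ma_decoder_init_file()) has been attached through the device config's pUserData. The loop flag shown here matches the ma_data_source_read_pcm_frames() signature from the miniaudio version these examples target; later releases drop it in favour of ma_data_source_set_looping().

```
static void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_decoder* pDecoder = (ma_decoder*)pDevice->pUserData;

    /* Reading through the data source API with looping enabled makes the decoder seek back to
       the start automatically when it reaches the end of the sound. */
    ma_data_source_read_pcm_frames(pDecoder, pOutput, frameCount, NULL, MA_TRUE);

    (void)pInput;
}
```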
/*
Demonstrates one way to load multiple files and play them all back at the same time.

When mixing multiple sounds together, you should not create multiple devices. Instead you should create only a single
device and then mix your sounds together, which you can do by simply summing their samples. The simplest way to do
this is to use floating point samples and let miniaudio's built-in clipper handle clipping for you. (Clipping is when
samples are clamped to their minimum and maximum range, which for floating point is -1..1.)

```
Usage: simple_mixing [input file 0] [input file 1] ... [input file n]
Example: simple_mixing file1.wav file2.flac
```
*/
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
......
#include <stdio.h>
/*
For simplicity, this example requires the device to use floating point samples.
*/
#define SAMPLE_FORMAT ma_format_f32
#define CHANNEL_COUNT 2
......
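A mixing-callback sketch along the lines described above (not the example's actual code), using the CHANNEL_COUNT define from the example. Each decoder is read into a zeroed temporary buffer and summed into the output, which miniaudio pre-initializes to silence; zeroing the temp buffer first means a decoder that has reached the end simply contributes silence, so the sketch does not rely on the read function's return value. The three-argument read call matches the miniaudio version these examples target (newer releases add a frames-read out-parameter), and g_pDecoders/g_decoderCount are hypothetical globals holding the opened decoders.

```
#include <string.h>  /* For memset(). */

static ma_decoder* g_pDecoders    = NULL;  /* Hypothetical: one decoder per input file. */
static ma_uint32   g_decoderCount = 0;

static void mixing_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    float     temp[4096];
    float*    pOut = (float*)pOutput;   /* Pre-initialized to silence, so we can accumulate into it. */
    ma_uint32 tempCapInFrames = (sizeof(temp) / sizeof(temp[0])) / CHANNEL_COUNT;
    ma_uint32 iDecoder;

    for (iDecoder = 0; iDecoder < g_decoderCount; iDecoder += 1) {
        ma_uint32 framesRemaining = frameCount;

        while (framesRemaining > 0) {
            ma_uint32 iSample;
            ma_uint32 offset;
            ma_uint32 framesToRead = tempCapInFrames;
            if (framesToRead > framesRemaining) {
                framesToRead = framesRemaining;
            }

            /* Zero the temp buffer so a finished decoder contributes silence for the unread tail. */
            memset(temp, 0, framesToRead * CHANNEL_COUNT * sizeof(float));
            ma_decoder_read_pcm_frames(&g_pDecoders[iDecoder], temp, framesToRead);

            /* Mix by simple summation. The device output is f32, so miniaudio clips the result for us. */
            offset = (frameCount - framesRemaining) * CHANNEL_COUNT;
            for (iSample = 0; iSample < framesToRead * CHANNEL_COUNT; iSample += 1) {
                pOut[offset + iSample] += temp[iSample];
            }

            framesRemaining -= framesToRead;
        }
    }

    (void)pDevice;
    (void)pInput;
}
```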
/*
Demonstrates how to load a sound file and play it back using the low-level API.

The low-level API uses a callback to deliver audio between the application and miniaudio for playback or recording. When
in playback mode, as in this example, the application sends raw audio data to miniaudio which is then played back through
the default playback device as defined by the operating system.

This example uses the `ma_decoder` API to load a sound and play it back. The decoder is entirely decoupled from the
device and can be used independently of it. This example only plays back a single sound file, but it's possible to play
back multiple files by simply loading multiple decoders and mixing them (do not create multiple devices to do this). See
the simple_mixing example for how best to do this.
*/
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
......
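A decoder-based playback sketch in the spirit of this example (not its actual code): a decoder is opened, the device is configured to match the decoder's output format, and the callback pulls frames from the decoder. The three-argument read call matches the miniaudio version these examples target; newer releases add a frames-read out-parameter.

```
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
#include <stdio.h>

static void playback_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_decoder* pDecoder = (ma_decoder*)pDevice->pUserData;
    ma_decoder_read_pcm_frames(pDecoder, pOutput, frameCount);  /* Past the end, the output stays silent. */
    (void)pInput;
}

int main(int argc, char** argv)
{
    ma_decoder decoder;
    ma_device  device;
    ma_device_config config;

    if (argc < 2) {
        printf("Usage: simple_playback [input file]\n");
        return -1;
    }

    if (ma_decoder_init_file(argv[1], NULL, &decoder) != MA_SUCCESS) {
        return -1;
    }

    config                   = ma_device_config_init(ma_device_type_playback);
    config.playback.format   = decoder.outputFormat;
    config.playback.channels = decoder.outputChannels;
    config.sampleRate        = decoder.outputSampleRate;
    config.dataCallback      = playback_callback;
    config.pUserData         = &decoder;

    if (ma_device_init(NULL, &config, &device) != MA_SUCCESS) {
        ma_decoder_uninit(&decoder);
        return -1;
    }

    ma_device_start(&device);
    printf("Playing... press Enter to quit.\n");
    getchar();

    ma_device_uninit(&device);
    ma_decoder_uninit(&decoder);
    return 0;
}
```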
/*
Demonstrates playback of a sine wave.

Since all this example is doing is playing back a sine wave, we can disable decoding (and encoding) which will slightly
reduce the size of the executable. This is done with the `MA_NO_DECODING` and `MA_NO_ENCODING` options.

The generation of the sine wave is achieved via the `ma_waveform` API. A waveform is a data source which means it can be
seamlessly plugged into the `ma_data_source_*()` family of APIs as well.

A waveform is initialized using the standard config/init pattern used throughout all of miniaudio. Frames are read via
the `ma_waveform_read_pcm_frames()` API.

This example works with Emscripten.
*/
#define MA_NO_DECODING
#define MA_NO_ENCODING
#define MINIAUDIO_IMPLEMENTATION
......
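A sine wave sketch following the description above (not the example's actual code): a ma_waveform is configured to match the device and read from in the callback. The amplitude and frequency values are arbitrary, and the three-argument read call matches the miniaudio version these examples target; newer releases add a frames-read out-parameter.

```
#define MA_NO_DECODING
#define MA_NO_ENCODING
#define MINIAUDIO_IMPLEMENTATION
#include "../miniaudio.h"
#include <stdio.h>

static void sine_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_waveform* pSineWave = (ma_waveform*)pDevice->pUserData;
    ma_waveform_read_pcm_frames(pSineWave, pOutput, frameCount);
    (void)pInput;
}

int main(void)
{
    ma_waveform        sineWave;
    ma_waveform_config sineConfig;
    ma_device          device;
    ma_device_config   config;

    sineConfig = ma_waveform_config_init(ma_format_f32, 2, 48000, ma_waveform_type_sine, 0.2, 220);
    if (ma_waveform_init(&sineConfig, &sineWave) != MA_SUCCESS) {
        return -1;
    }

    config                   = ma_device_config_init(ma_device_type_playback);
    config.playback.format   = ma_format_f32;
    config.playback.channels = 2;
    config.sampleRate        = 48000;
    config.dataCallback      = sine_callback;
    config.pUserData         = &sineWave;

    if (ma_device_init(NULL, &config, &device) != MA_SUCCESS) {
        return -1;
    }

    ma_device_start(&device);
    printf("Playing a 220Hz sine wave... press Enter to quit.\n");
    getchar();

    ma_device_uninit(&device);
    return 0;
}
```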