* Call this function continuously to generate your output.
* carrier_buffer and modulator_buffer should contain the carrier and modulator signals respectively.
* The modulator must always have one channel.
* If the carrier has two channels, the samples in carrier_buffer must be interleaved.
* output_buffer will be filled with the result, and must be able to hold as many channels as the carrier.
* If the carrier has two channels, the output buffer will be filled with interleaved samples.
* output_buffer may be the same pointer as either carrier_buffer or modulator_buffer, as long as it can hold the same number of channels as the carrier.
* In that case the processing is performed in place.
* frames specifies the number of sample frames that should be processed.
* Returns nonzero (true) on success or 0 (false) on failure.
* The function will only fail if one or more of the parameters are invalid.
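*
* A minimal usage sketch, assuming voclib's public API: voclib_initialize() taking the instance, band
* count, filters per band, sample rate and carrier channel count, and voclib_process() taking the
* instance, the three buffers and the frame count. The exact names and parameter order are assumptions
* and may not match this file exactly.
*
*     voclib_instance vocoder;
*     float carrier[512] = {0}, modulator[512] = {0}, output[512];
*
*     if (!voclib_initialize(&vocoder, 16, 6, 44100, 1)) return -1;
*     if (!voclib_process(&vocoder, carrier, modulator, output, 512)) return -1;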
/* The main instance structure. This is the structure that you will create an instance of when using the vocoder. */
struct voclib_instance
{
    voclib_band analysis_bands[VOCLIB_MAX_BANDS];         /* The filterbank used for analysis (these are applied to the modulator). */
    voclib_envelope analysis_envelopes[VOCLIB_MAX_BANDS]; /* The envelopes used to smooth the analysis bands. */
    voclib_band synthesis_bands[VOCLIB_MAX_BANDS * 2];    /* The filterbank used for synthesis (these are applied to the carrier). The second half of the array is only used for stereo carriers. */

    float reaction_time;  /* In seconds. Higher values make the vocoder respond more slowly to changes in the modulator. */
    float formant_shift;  /* In octaves. 1.0 is unchanged. */
/* !!! THIS FILE WILL BE MERGED INTO miniaudio.h WHEN COMPLETE !!! */
/*
EXPERIMENTAL
============
Everything in this file is experimental and subject to change. Some stuff isn't yet implemented, in particular spatialization. I've noted some ideas that are
basically straight off the top of my head - many of these are probably outright wrong or just generally bad ideas.
Very simple APIs for spatialization are declared but not yet implemented. They're just placeholders to give myself an idea of some of the API design.
The idea is that you have an `ma_engine` object - one per listener. Decoupled from that is the `ma_resource_manager` object. You can have one `ma_resource_manager`
object to many `ma_engine` objects. This allows you to share resources across listeners. The `ma_engine` is responsible for the playback of audio from a
list of data sources. The `ma_resource_manager` is responsible for the actual loading, caching and unloading of those data sources. This decoupling is
something that I'm really liking right now and will likely stay in place for the final version.
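As a sketch of that decoupling (using the names from the API that eventually shipped in miniaudio - ma_resource_manager_config_init,
ma_resource_manager_init and the pResourceManager member of ma_engine_config; this experimental draft may name things differently):

    ma_resource_manager_config resourceManagerConfig = ma_resource_manager_config_init();
    ma_resource_manager resourceManager;
    ma_resource_manager_init(&resourceManagerConfig, &resourceManager);

    ma_engine_config engineConfig = ma_engine_config_init();
    engineConfig.pResourceManager = &resourceManager;

    ma_engine engine1;
    ma_engine engine2;
    ma_engine_init(&engineConfig, &engine1);
    ma_engine_init(&engineConfig, &engine2);

Both engines now load, cache and unload their data sources through the same resource manager.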
You create "sounds" from the engine which represent a sound/voice in the world. You first need to create a sound, and then you need to start it. Sounds do not
start by default. You can use `ma_engine_play_sound()` to "fire and forget" sounds.
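For example, an explicitly managed sound versus the fire-and-forget path might look like this (a sketch using
the shipped names; the fence parameter of ma_sound_init_from_file is an assumption taken from the final API):

    ma_sound sound;
    ma_sound_init_from_file(&engine, "music.wav", 0, NULL, NULL, &sound);
    ma_sound_start(&sound);

    ma_engine_play_sound(&engine, "explosion.wav", NULL);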
Sounds can be allocated to groups called `ma_sound_group`. This is how you can support submixing and is one way you could achieve the kinds of groupings you see
in games for things like SFX, Music and Voices. Unlike sounds, groups are started by default. When you stop a group, all sounds within that group will be
stopped atomically. When the group is started again, all sounds attached to the group will also be started, so long as the sound is also marked as started.
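A sketch of grouping, assuming the shipped ma_sound_group API:

    ma_sound_group sfxGroup;
    ma_sound_group_init(&engine, 0, NULL, &sfxGroup);

    ma_sound sound;
    ma_sound_init_from_file(&engine, "laser.wav", 0, &sfxGroup, NULL, &sound);
    ma_sound_start(&sound);

    ma_sound_group_stop(&sfxGroup);
    ma_sound_group_start(&sfxGroup);

Stopping the group stops every sound in it atomically; starting it again resumes the sounds that are still
marked as started.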
The creation and deletion of sounds and groups should be thread safe.
The engine runs on top of a node graph, and sounds and groups are just nodes within that graph. The output of a sound can be attached to the input of any node
on the graph. To apply an effect to a sound or group, attach its output to the input of an effect node. See the Routing Infrastructure section below for
details on this.
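For example, routing a sound through a low pass filter node might look like this (a sketch using the node
API that eventually shipped - ma_lpf_node and ma_node_attach_output_bus - and reusing the engine and sound
variables from the sketches above; the draft may differ):

    ma_lpf_node_config lpfConfig = ma_lpf_node_config_init(2, 48000, 800);
    ma_lpf_node lpfNode;
    ma_lpf_node_init(ma_engine_get_node_graph(&engine), &lpfConfig, NULL, &lpfNode);

    ma_node_attach_output_bus(&lpfNode, 0, ma_engine_get_endpoint(&engine), 0);
    ma_node_attach_output_bus(&sound, 0, &lpfNode, 0);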
The best resource to use when understanding the API is the function declarations for `ma_engine`. I expect you should be able to figure it out! :)
*/
#ifndef miniaudio_engine_h
#define miniaudio_engine_h
#ifdef __cplusplus
extern"C"{
#endif
/*
Engine
======
The `ma_engine` API is a high-level API for audio playback. Internally it contains sounds (`ma_sound`) with resources managed via a resource manager
(`ma_resource_manager`).
Within the world there is the concept of a "listener". Each `ma_engine` instance has a single listener, but you can instantiate multiple `ma_engine` instances
if you need more than one listener. In this case you will want to share a resource manager, which you can do by initializing one manually and passing it into
`ma_engine_config`. Using this method will require your application to manage groups and sounds on a per `ma_engine` basis.
*/
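
/*
A minimal sketch of the high-level API described above, assuming the names that shipped in miniaudio
(this experimental draft may still change):

    ma_engine engine;
    if (ma_engine_init(NULL, &engine) != MA_SUCCESS) {
        return -1;
    }

    ma_engine_play_sound(&engine, "sound.wav", NULL);

    ma_engine_uninit(&engine);
*/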
/* Adjust the volume of the splitter node's endpoints. We'll just do it 50/50 so that both of them combine to reproduce the original signal at the endpoint. */
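/* A sketch of that 50/50 split, assuming the shipped ma_node_set_output_bus_volume API and a splitter node variable named splitterNode (hypothetical, since the surrounding example code is not shown here). */
ma_node_set_output_bus_volume(&splitterNode, 0, 0.5f);
ma_node_set_output_bus_volume(&splitterNode, 1, 0.5f);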