This is happening because the processing function keeps reading audio
data from the AudioBuffer even after it has been marked as stopped.
There is also an error in ReadAudioBufferFramesInInternalFormat(): when
it is called on a stopped sound, it still returns audio frames. This has
also been addressed in this commit.
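The guard added by the fix can be sketched as follows. This is illustrative only: the struct and function below are minimal hypothetical stand-ins, not raylib's actual AudioBuffer or ReadAudioBufferFramesInInternalFormat() definitions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

// Hypothetical stand-in for raylib's AudioBuffer, reduced to the fields
// needed to show the guard.
typedef struct {
    bool playing;      // cleared when the sound is stopped
    float data[64];    // samples in the buffer's internal format
} MiniAudioBuffer;

// After the fix, reading from a stopped buffer yields zero frames instead
// of continuing to return audio data.
static int ReadInternalFrames(MiniAudioBuffer *buffer, float *out, int frameCount)
{
    if (!buffer->playing) return 0;   // stopped: produce no frames

    memcpy(out, buffer->data, (size_t)frameCount*sizeof(float));
    return frameCount;
}
```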
* Audio: Stop setting capture config options.
Since the device is being configured as a playback device, all capture
config options are unused and therefore should not be set.
* Audio: Stop pre-silencing the miniaudio output buffer.
raylib already manually silences the output buffer prior to mixing so
there is no reason to have miniaudio also do it. It can therefore be
disabled via the device config to make data processing slightly more
efficient.
* Audio: Stop forcing fixed sized processing callbacks.
There is no requirement for raylib to have guaranteed fixed sized
audio processing. By disabling it, audio processing can be made more
efficient by not having to run the data through an internal intermediary
buffer.
* Audio: Make the period size (latency) configurable.
The default period size is 10ms, but this is inappropriate for certain
platforms, so it is useful to allow those platforms to configure the
period size as required.
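The three device-config commits above can be summarized in one fragment. The field names below follow miniaudio's 0.11-era public `ma_device_config` API, but treat this as an illustrative sketch rather than raylib's exact initialization code (it will not compile without miniaudio.h):

```c
// Playback-only device: capture config options are simply left untouched.
ma_device_config config = ma_device_config_init(ma_device_type_playback);

config.playback.format   = ma_format_f32;
config.playback.channels = 2;
config.sampleRate        = 48000;

// raylib silences the output buffer itself before mixing, so skip
// miniaudio's pre-silencing pass.
config.noPreSilencedOutputBuffer = MA_TRUE;

// No fixed-sized callbacks needed; avoids an internal intermediary buffer.
config.noFixedSizedCallback = MA_TRUE;

// Period size (latency) is now configurable; 10 ms is the default.
config.periodSizeInMilliseconds = 10;
```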
* Audio: Fix documentation for pan.
The pan is -1..1, not 0..1.
* Audio: Remove use of ma_data_converter_get_required_input_frame_count().
This function is being removed from miniaudio. Making this work with
raylib's current architecture requires the use of a cache. This commit
implements a generic solution that works across all AudioBuffer types
(static, stream, and callback-based). The static case could be optimized
to avoid the cache by incorporating the functionality of
ReadAudioBufferFramesInInternalFormat() into
ReadAudioBufferFramesInMixingFormat(), but it would be impractical to
avoid the cache with streams and callback-based AudioBuffers, so this
commit sticks with the generic solution.
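The cache-based approach can be sketched as below: instead of asking the converter how many input frames it needs up front (the removed ma_data_converter_get_required_input_frame_count() pattern), frames are pulled through a small fixed-size intermediate buffer until the request is satisfied. All names here are hypothetical, and for clarity the "conversion" is a plain 1:1 copy; the real code converts format/channels/rate.

```c
#include <stddef.h>

#define CONVERSION_CACHE_FRAMES 8   // hypothetical cache size (see the later
                                    // memory-usage commit: 256 -> 8 frames)

// Stand-in for the AudioBuffer's internal-format reader.
typedef size_t (*ReadInternalFn)(void *userData, float *out, size_t frameCount);

// Pull frames through the cache in chunks until frameCount frames have been
// produced or the source runs dry. Returns the number of frames produced.
static size_t ReadFramesViaCache(ReadInternalFn read, void *userData,
                                 float *out, size_t frameCount)
{
    float cache[CONVERSION_CACHE_FRAMES];
    size_t total = 0;

    while (total < frameCount) {
        size_t want = frameCount - total;
        if (want > CONVERSION_CACHE_FRAMES) want = CONVERSION_CACHE_FRAMES;

        size_t got = read(userData, cache, want);   // fill the cache
        if (got == 0) break;                        // source exhausted

        for (size_t i = 0; i < got; i++) out[total + i] = cache[i];  // "convert"
        total += got;
    }

    return total;
}

// Demo source for exercising the loop: yields src->length ramp samples.
typedef struct { size_t pos, length; } DemoSource;

static size_t DemoRead(void *userData, float *out, size_t frameCount)
{
    DemoSource *src = (DemoSource *)userData;
    size_t n = 0;
    while (n < frameCount && src->pos < src->length) out[n++] = (float)src->pos++;
    return n;
}
```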
* Audio: Correct usage of miniaudio's dynamic rate adjustment.
This affects pitch shifting. The output rate is modified with
ma_data_converter_set_rate(), but that modified value is then used in the
computation of the output rate the next time SetAudioBufferPitch() is
called, which results in a cascade. The correct way to do this is to use
an anchored output rate as the basis for the calculation when pitch
shifting. In this case, it's the device's sample rate that acts as the
anchor.
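The cascade and its fix reduce to simple arithmetic, sketched below. The macro and struct are illustrative, not raylib's actual definitions:

```c
// Hypothetical anchor: the playback device's sample rate.
#define AUDIO_DEVICE_SAMPLE_RATE 48000

typedef struct {
    unsigned int outputRate;   // rate currently applied to the converter
} PitchedBuffer;

// Buggy version: each call derives the new rate from the previously modified
// rate, so repeated calls with the same pitch keep compounding (a cascade).
static void SetPitchCascading(PitchedBuffer *b, float pitch)
{
    b->outputRate = (unsigned int)((float)b->outputRate/pitch);
}

// Fixed version: always derive the rate from the fixed anchor, so the same
// pitch always yields the same output rate no matter how often it is set.
static void SetPitchAnchored(PitchedBuffer *b, float pitch)
{
    b->outputRate = (unsigned int)((float)AUDIO_DEVICE_SAMPLE_RATE/pitch);
}
```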
* Audio: Optimize memory usage for data conversion.
This reduces the per-AudioBuffer conversion cache from 256 PCM frames
down to 8.
[utils] was created a long time ago, when [rcore] contained all the platform code. The purpose of the file was exposing basic filesystem functionality across modules, along with the logging mechanism, but many things have changed since then and there is no need to keep using this module.
- The logging system has been moved to the [rcore] module and its macros are exposed to other modules through `config.h`
- Filesystem functionality has also been centralized in the [rcore] module, which over the years had already been accumulating more and more filesystem functions; now they are all in the same module
- Android-specific code has been moved to `rcore_android.c`; it made no sense to keep platform-specific code in `utils`, as [rcore] is responsible for all platform code
In the case of a failure within miniaudio's ma_convert_frames() function, the memory dynamically allocated for the `data` variable leaks on the early return.
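The leak pattern and its fix can be sketched as follows. LoadSamples() and FrameConvert() are hypothetical stand-ins for the raylib/miniaudio calls involved, with plain malloc/free standing in for raylib's allocators:

```c
#include <stdlib.h>

// Stand-in for ma_convert_frames(): 0 frames converted signals failure.
static int FrameConvert(int shouldFail)
{
    return shouldFail ? 0 : 1;
}

// Stand-in for the raylib loader holding the `data` allocation.
static float *LoadSamples(int failConversion)
{
    float *data = (float *)malloc(1024*sizeof(float));
    if (data == NULL) return NULL;

    if (FrameConvert(failConversion) == 0) {
        free(data);    // the fix: release the buffer before the early
        return NULL;   // return instead of leaking it
    }

    return data;
}
```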