Description
Currently, each object that utilizes the audiosample API must support multiple formats and manage common audio tasks on its own. To reduce flash size, complexity, and the barrier to entry for creating new audiosample objects (particularly audio effects), the audio pathway should be streamlined by standardizing on a single format and adding shared resources for general audio processing tasks. The optimizations include, but are not limited to, the lists below. Feel free to suggest other areas of improvement.
For all audio objects:
- Remove single_channel_output (see Remove single_channel_output from the audio API layer #9877)
- Limit the audio format to signed 16-bit integers
- Use bytes_per_sample instead of bits_per_sample (nearly all calculations operate at the byte level)
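To illustrate the single-format idea, here is a hedged sketch of a conversion helper. The function name and signature are assumptions for illustration only, not an existing CircuitPython API; it shows how arbitrary input samples could be normalized to signed 16-bit integers, with `bytes_per_sample` derived once at the byte level.

```python
import array

def to_signed_16bit(samples, bits_per_sample, samples_signed):
    """Hypothetical helper: normalize samples to the single signed
    16-bit format that the streamlined pathway would use."""
    bytes_per_sample = bits_per_sample // 8  # calculations work per byte
    out = array.array("h")
    if bytes_per_sample == 1:
        for s in samples:
            if not samples_signed:
                s -= 128       # unsigned 8-bit -> signed 8-bit
            out.append(s << 8)  # widen to 16 bits
    else:
        for s in samples:
            if not samples_signed:
                s -= 32768     # unsigned 16-bit -> signed 16-bit
            out.append(s)
    return out
```

With every source normalized this way, downstream objects only ever see one format and can drop their per-format code paths.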
For audio effects objects:
- Unify the double-buffering and sample-processing implementation, including ticking the LFO at regular intervals
- Provide a shared mixing implementation (linear and cross-fade)
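The shared mixing helpers might look like the sketch below. These function names and signatures are assumptions, not existing APIs; the point is that every effect could reuse one clamped linear mix and one cross-fade instead of reimplementing them.

```python
import array

INT16_MIN, INT16_MAX = -32768, 32767

def mix_linear(dry, wet, level):
    """Hypothetical shared helper: out = dry + wet * level,
    clamped to the signed 16-bit range."""
    out = array.array("h")
    for d, w in zip(dry, wet):
        s = d + int(w * level)
        out.append(max(INT16_MIN, min(INT16_MAX, s)))
    return out

def mix_crossfade(dry, wet, fade):
    """Hypothetical shared helper: blend from dry (fade=0.0)
    to wet (fade=1.0), clamped to the signed 16-bit range."""
    out = array.array("h")
    for d, w in zip(dry, wet):
        s = int(d * (1.0 - fade) + w * fade)
        out.append(max(INT16_MIN, min(INT16_MAX, s)))
    return out
```

Clamping in one place also means every effect saturates identically on overflow rather than wrapping.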
If these updates are carried out, some audio object constructor arguments and properties, such as `bits_per_sample` and `samples_signed`, may need to be deprecated, as they will be dictated by the output. `sample_rate` and `channel_count` are likely to still be necessary per object.
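As a rough sketch of what the deprecation could look like at the Python layer (the class and argument handling here are assumptions for illustration; the core would implement this in C, and CircuitPython's actual deprecation mechanism may differ):

```python
class AudioEffect:
    """Hypothetical audio object after the format is unified."""

    def __init__(self, sample_rate=8000, channel_count=1,
                 bits_per_sample=None, samples_signed=None):
        # Deprecated arguments are accepted but ignored, since the
        # unified pathway dictates the output format.
        self._deprecated_args_used = (
            bits_per_sample is not None or samples_signed is not None
        )
        self.sample_rate = sample_rate       # still needed per object
        self.channel_count = channel_count   # still needed per object

    @property
    def bits_per_sample(self):
        # Fixed by the single format: signed 16-bit integers.
        return 16

    @property
    def samples_signed(self):
        return True
```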
This issue was inspired by the discussion within #10052 with @jepler and @gamblor21.