AppSoundEngine is a fast, low-latency framework built upon System Sound Services for easy implementation of user-interface sound effects in your iOS application. It is essentially an Objective-C wrapper around SystemSoundID (which represents a system sound object) and the raw C functions of System Sound Services, most importantly sound-completion callbacks.
What is its history and what kinds of problems does it solve?
During development of Countdown Me I had to solve some problems with sound effects.
First of all – latency. Latency is the delay between pressing a button and actually hearing the sound. Acceptable latency is under 10 ms; if you manage to squeeze it down that far, the user gets a “hardware” feeling of immediacy. So this was my aim. There is great information available on the audio frameworks of the iOS platform. The easiest to use is probably the AV Foundation framework with AVPlayer. Its serious drawback is latency: I measured over 100 ms, which is completely unusable for UI sounds from my point of view. So I switched to the Audio Toolbox framework and System Sound Services. This helped a lot with latency, especially when I did not create each sound on demand, but created all the sounds once during startup and cached them in the app delegate (or in another dedicated object, so that the single responsibility principle is not broken). Latency went well below 10 ms, so the latency problem was solved.
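The caching approach can be sketched roughly like this. Note that this is a minimal illustration, not AppSoundEngine's actual API – the `SoundCache` class and its method names are hypothetical, and error handling is reduced to the bare minimum:

```objc
#import <AudioToolbox/AudioToolbox.h>
#import <Foundation/Foundation.h>

// Hypothetical cache: each SystemSoundID is created once at startup,
// so playback later carries no per-play setup cost.
@interface SoundCache : NSObject
- (void)preloadSoundNamed:(NSString *)name;
- (void)playSoundNamed:(NSString *)name;
@end

@implementation SoundCache {
    NSMutableDictionary *_sounds; // name -> boxed SystemSoundID
}

- (instancetype)init {
    if ((self = [super init])) {
        _sounds = [NSMutableDictionary dictionary];
    }
    return self;
}

// Call once during startup, e.g. from the app delegate.
- (void)preloadSoundNamed:(NSString *)name {
    NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"caf"];
    SystemSoundID soundID = 0;
    if (url && AudioServicesCreateSystemSoundID((__bridge CFURLRef)url,
                                                &soundID) == kAudioServicesNoError) {
        _sounds[name] = @(soundID);
    }
}

// Low latency: the system sound object already exists, we only trigger it.
- (void)playSoundNamed:(NSString *)name {
    NSNumber *boxed = _sounds[name];
    if (boxed) AudioServicesPlaySystemSound((SystemSoundID)boxed.unsignedIntValue);
}

- (void)dealloc {
    for (NSNumber *boxed in _sounds.allValues) {
        AudioServicesDisposeSystemSoundID((SystemSoundID)boxed.unsignedIntValue);
    }
}
@end
```

Keeping the cache in a dedicated object rather than the app delegate is the variant that preserves the single responsibility principle mentioned above.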
But System Sound Services has one serious drawback – it can play only one sound at a time. If you start a new sound while another one is still playing, the one already playing is abruptly stopped. This is extremely unpleasant behaviour from the user's point of view. A typical scenario occurs in a utility app (like Countdown Me): such an app has an info button which flips the current view and shows some settings. When you tap the button there is a tap sound, and the flip sound should play immediately during the flip – but the tap sound may be cut off too early by the flip sound.
There are two ways to solve this.
We could set up the flip method to wait for the tap sound to finish, and only then execute the flip together with the flip sound. But this solution is not good – it postpones the flip action (degrading the user experience) and it slows down the execution of unit tests as well.
In AppSoundEngine I decided not to wait for a sound to finish inside the method. Instead, sounds are dispatched to AppSoundEngine immediately and asynchronously, and AppSoundEngine plays them one after another, so that each sound starts only after the previous one has finished – independently of the rest of the code flow. The app feels faster, methods return without waiting, and unit tests run fast too.
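The queue-and-completion idea can be sketched as follows. This is an illustrative outline, not AppSoundEngine's actual implementation – the `SerialSoundPlayer` class is hypothetical – but it shows the key mechanism: the next sound is started from the completion callback of the previous one, so callers never block:

```objc
#import <AudioToolbox/AudioToolbox.h>
#import <Foundation/Foundation.h>

@interface SerialSoundPlayer : NSObject
- (void)enqueueSound:(SystemSoundID)soundID; // returns immediately
@end

static void SoundCompleted(SystemSoundID soundID, void *clientData);

@implementation SerialSoundPlayer {
    NSMutableArray *_queue; // boxed SystemSoundIDs waiting to play
    BOOL _playing;
}

- (instancetype)init {
    if ((self = [super init])) _queue = [NSMutableArray array];
    return self;
}

// Asynchronous: the caller's method continues without waiting.
- (void)enqueueSound:(SystemSoundID)soundID {
    [_queue addObject:@(soundID)];
    if (!_playing) [self playNext];
}

- (void)playNext {
    if (_queue.count == 0) { _playing = NO; return; }
    _playing = YES;
    SystemSoundID soundID = (SystemSoundID)[_queue[0] unsignedIntValue];
    [_queue removeObjectAtIndex:0];
    // Ask System Sound Services to call us back when this sound finishes.
    AudioServicesAddSystemSoundCompletion(soundID, NULL, NULL,
                                          SoundCompleted,
                                          (__bridge void *)self);
    AudioServicesPlaySystemSound(soundID);
}

@end

// C completion callback: chain to the next queued sound.
static void SoundCompleted(SystemSoundID soundID, void *clientData) {
    AudioServicesRemoveSystemSoundCompletion(soundID);
    [(__bridge SerialSoundPlayer *)clientData playNext];
}
```

Because each sound only starts from the previous sound's completion callback, no sound ever interrupts another, and the tap/flip scenario above plays both sounds in full.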
What sounds to use?
The AudioServicesPlaySystemSound function* lets you very simply play short sound files. The simplicity comes with a few restrictions. Your sound files must be:
- No longer than 30 seconds in duration
- In linear PCM or IMA4 (IMA/ADPCM) format
- Packaged in a .caf, .aif, or .wav file
The file size of an uncompressed audio (linear PCM) recording can be calculated using this formula:
File Size (Bytes) = (sampling rate) x (bit depth) x (number of channels) x (seconds) / 8
44100 x 16 x 2 x 1 / 8 = 176400 bytes (that is, about 172.27 kilobytes per second of your UI sounds, if uncompressed at CD quality)
To keep files small, you can convert a sound to IMA4 in a CAF container with the afconvert command-line tool:

afconvert -f caff -d ima4 audiofile.wav
Now just add the sounds to the project and load them into the engine. Your app just got another dimension – sound.
I hope you find this article, and AppSoundEngine itself, useful. If you think of an enhancement or some new functionality, please let me know.
Instructions and source code are on GitHub.
Thanks zoul for inspiration.
* The AudioServicesPlaySystemSound function is what AppSoundEngine uses to actually play sounds.