objective-c - Example of using Audio Queue Services
objective-c core-audio (3)
I am looking for an example of using Audio Queue Services.
I would like to create a sound from a mathematical equation and then hear it.
Here is my code for generating sound from a function. It assumes you know how to use Audio Queue Services, set up an audio session, and properly start and stop an audio output queue.
Here is a snippet that sets up and starts an AudioQueue output:
// Get the preferred sample rate (8,000 Hz on iPhone, 44,100 Hz on iPod touch)
size = sizeof(sampleRate);
err = AudioSessionGetProperty (kAudioSessionProperty_CurrentHardwareSampleRate, &size, &sampleRate);
if (err != noErr) NSLog(@"AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate) error: %d", err);
//NSLog (@"Current hardware sample rate: %1.0f", sampleRate);

BOOL isHighSampleRate = (sampleRate > 16000);
int bufferByteSize;
AudioQueueBufferRef buffer;

// Set up stream format fields
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = sampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mBytesPerFrame = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;

// New output queue ---- PLAYBACK ----
if (isPlaying == NO) {
    err = AudioQueueNewOutput (&streamFormat, AudioEngineOutputBufferCallback, self, nil, nil, 0, &outputQueue);
    if (err != noErr) NSLog(@"AudioQueueNewOutput() error: %d", err);

    // Enqueue buffers
    //outputFrequency = 0.0;
    outputBuffersToRewrite = 3;
    bufferByteSize = (sampleRate > 16000) ? 2176 : 512; // 40.5 Hz : 31.25 Hz
    for (i = 0; i < 3; i++) {
        err = AudioQueueAllocateBuffer (outputQueue, bufferByteSize, &buffer);
        if (err == noErr) {
            [self generateTone: buffer];
            err = AudioQueueEnqueueBuffer (outputQueue, buffer, 0, nil);
            if (err != noErr) NSLog(@"AudioQueueEnqueueBuffer() error: %d", err);
        } else {
            NSLog(@"AudioQueueAllocateBuffer() error: %d", err);
            return;
        }
    }

    // Start playback
    isPlaying = YES;
    err = AudioQueueStart(outputQueue, nil);
    if (err != noErr) { NSLog(@"AudioQueueStart() error: %d", err); isPlaying = NO; return; }
} else {
    NSLog (@"Error: audio is already playing back.");
}
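For reference, here is a minimal sketch of the audio session setup and queue teardown that the snippet above assumes. The method names are illustrative (not from the original code), and it uses the same now-deprecated AudioSession C API as the snippet:

- (void) setUpAudioSession {
    // Initialize and activate the audio session before querying the hardware sample rate.
    OSStatus err = AudioSessionInitialize (NULL, NULL, NULL, NULL);
    if (err != noErr) NSLog(@"AudioSessionInitialize() error: %d", err);

    UInt32 category = kAudioSessionCategory_MediaPlayback;
    err = AudioSessionSetProperty (kAudioSessionProperty_AudioCategory, sizeof(category), &category);
    if (err != noErr) NSLog(@"AudioSessionSetProperty() error: %d", err);

    err = AudioSessionSetActive (true);
    if (err != noErr) NSLog(@"AudioSessionSetActive() error: %d", err);
}

- (void) stopPlayback {
    // Stop and dispose of the queue; the callback checks isPlaying before re-enqueueing.
    if (isPlaying) {
        isPlaying = NO;
        OSStatus err = AudioQueueStop (outputQueue, YES); // YES = stop immediately
        if (err != noErr) NSLog(@"AudioQueueStop() error: %d", err);
        err = AudioQueueDispose (outputQueue, YES);
        if (err != noErr) NSLog(@"AudioQueueDispose() error: %d", err);
    }
}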
Here is the part that generates the tone:
// AudioQueue output queue callback.
void AudioEngineOutputBufferCallback (void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    AudioEngine *engine = (AudioEngine*) inUserData;
    [engine processOutputBuffer:inBuffer queue:inAQ];
}

- (void) processOutputBuffer: (AudioQueueBufferRef) buffer queue:(AudioQueueRef) queue {
    OSStatus err;
    if (isPlaying == YES) {
        [outputLock lock];
        if (outputBuffersToRewrite > 0) {
            outputBuffersToRewrite--;
            [self generateTone:buffer];
        }
        err = AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
        if (err == 560030580) { // Queue is not active due to Music being started or other reasons
            isPlaying = NO;
        } else if (err != noErr) {
            NSLog(@"AudioQueueEnqueueBuffer() error %d", err);
        }
        [outputLock unlock];
    } else {
        err = AudioQueueStop (queue, NO);
        if (err != noErr) NSLog(@"AudioQueueStop() error: %d", err);
    }
}

-(void) generateTone: (AudioQueueBufferRef) buffer {
    if (outputFrequency == 0.0) {
        memset(buffer->mAudioData, 0, buffer->mAudioDataBytesCapacity);
        buffer->mAudioDataByteSize = buffer->mAudioDataBytesCapacity;
    } else {
        // Make the buffer length a multiple of the wavelength for the output frequency.
        int sampleCount = buffer->mAudioDataBytesCapacity / sizeof (SInt16);
        double bufferLength = sampleCount;
        double wavelength = sampleRate / outputFrequency;
        double repetitions = floor (bufferLength / wavelength);
        if (repetitions > 0.0) {
            sampleCount = round (wavelength * repetitions);
        }

        double x, y;
        double sd = 1.0 / sampleRate;
        double amp = 0.9;
        double max16bit = SHRT_MAX;
        int i;
        SInt16 *p = buffer->mAudioData;

        for (i = 0; i < sampleCount; i++) {
            x = i * sd * outputFrequency;
            switch (outputWaveform) {
                case kSine:
                    y = sin (x * 2.0 * M_PI);
                    break;
                case kTriangle:
                    x = fmod (x, 1.0);
                    if (x < 0.25)
                        y = x * 4.0; // up 0.0 to 1.0
                    else if (x < 0.75)
                        y = (1.0 - x) * 4.0 - 2.0; // down 1.0 to -1.0
                    else
                        y = (x - 1.0) * 4.0; // up -1.0 to 0.0
                    break;
                case kSawtooth:
                    y = 0.8 - fmod (x, 1.0) * 1.8;
                    break;
                case kSquare:
                    y = (fmod(x, 1.0) < 0.5) ? 0.7 : -0.7;
                    break;
                default:
                    y = 0;
                    break;
            }
            p[i] = y * max16bit * amp;
        }
        buffer->mAudioDataByteSize = sampleCount * sizeof (SInt16);
    }
}
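As a concrete example of the buffer-length rounding above (numbers are illustrative): at a 44,100 Hz sample rate with outputFrequency = 440 Hz, the wavelength is 44100 / 440 ≈ 100.2 samples. A 2,176-byte buffer holds 1,088 16-bit samples, so repetitions = floor(1088 / 100.2) = 10 and sampleCount is rounded to 1,002. Each buffer then ends very close to the phase it started at, so consecutive buffers join without an audible click.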
One thing to keep in mind is that your callback will be called on a non-main thread, so you have to practice thread safety with locks, mutexes, or other techniques.
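For example, a setter called from the main thread can take the same lock before touching the fields the callback reads. This is only a minimal sketch: outputLock is assumed to be an NSLock ivar created elsewhere, and the other names match the code above.

- (void) setOutputFrequency: (double) newFrequency {
    [outputLock lock];
    outputFrequency = newFrequency;
    outputBuffersToRewrite = 3; // ask the callback to refill all three queued buffers with the new tone
    [outputLock unlock];
}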
Here is a C# version of the same sample from @lucius:
unsafe void SetupAudio ()
{
    AudioSession.Initialize ();
    AudioSession.Category = AudioSessionCategory.MediaPlayback;
    sampleRate = AudioSession.CurrentHardwareSampleRate;

    var format = new AudioStreamBasicDescription () {
        SampleRate = sampleRate,
        Format = AudioFormatType.LinearPCM,
        FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked,
        BitsPerChannel = 16,
        ChannelsPerFrame = 1,
        BytesPerFrame = 2,
        BytesPerPacket = 2,
        FramesPerPacket = 1,
    };

    var queue = new OutputAudioQueue (format);
    var bufferByteSize = (sampleRate > 16000) ? 2176 : 512; // 40.5 Hz : 31.25 Hz

    var buffers = new AudioQueueBuffer* [numBuffers];
    for (int i = 0; i < numBuffers; i++){
        queue.AllocateBuffer (bufferByteSize, out buffers [i]);
        GenerateTone (buffers [i]);
        queue.EnqueueBuffer (buffers [i], null);
    }

    queue.OutputCompleted += (object sender, OutputCompletedEventArgs e) => {
        queue.EnqueueBuffer (e.UnsafeBuffer, null);
    };

    queue.Start ();
}
This is the tone generator:
unsafe void GenerateTone (AudioQueueBuffer *buffer)
{
    // Make the buffer length a multiple of the wavelength for the output frequency.
    uint sampleCount = buffer->AudioDataBytesCapacity / 2;
    double bufferLength = sampleCount;
    double wavelength = sampleRate / outputFrequency;
    double repetitions = Math.Floor (bufferLength / wavelength);
    if (repetitions > 0)
        sampleCount = (uint) Math.Round (wavelength * repetitions);

    double x, y;
    double sd = 1.0 / sampleRate;
    double amp = 0.9;
    double max16bit = Int16.MaxValue;
    int i;
    short *p = (short *) buffer->AudioData;

    for (i = 0; i < sampleCount; i++) {
        x = i * sd * outputFrequency;
        switch (outputWaveForm) {
        case WaveForm.Sine:
            y = Math.Sin (x * 2.0 * Math.PI);
            break;
        case WaveForm.Triangle:
            x = x % 1.0;
            if (x < 0.25)
                y = x * 4.0; // up 0.0 to 1.0
            else if (x < 0.75)
                y = (1.0 - x) * 4.0 - 2.0; // down 1.0 to -1.0
            else
                y = (x - 1.0) * 4.0; // up -1.0 to 0.0
            break;
        case WaveForm.Sawtooth:
            y = 0.8 - (x % 1.0) * 1.8;
            break;
        case WaveForm.Square:
            y = ((x % 1.0) < 0.5) ? 0.7 : -0.7;
            break;
        default:
            y = 0;
            break;
        }
        p[i] = (short)(y * max16bit * amp);
    }
    buffer->AudioDataByteSize = sampleCount * 2;
}
You will also want these definitions:
enum WaveForm {
    Sine, Triangle, Sawtooth, Square
}
WaveForm outputWaveForm;
const float outputFrequency = 220;
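The sample also references sampleRate and numBuffers, which are not declared above. A minimal sketch of those fields (the buffer count of 3 simply mirrors the Objective-C version and is an assumption here); note that the containing class must be compiled with unsafe code enabled because of the pointer types:

double sampleRate;        // filled in by SetupAudio from AudioSession
const int numBuffers = 3; // same buffer count as the Objective-C sample (assumed)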
High level: use AVAudioPlayer: https://github.com/hollance/AVBufferPlayer
Medium level: audio queues will get you going nicely: trailinthesand.com/exploring-iphone-audio-part-1/ (note: the http prefix was removed so the old link could stay, but it now goes to the wrong site, so the page has apparently moved).
Low level: alternatively, you can drop down a level and do it with Audio Units: http://cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html (a sketch of that approach follows below).
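To give a feel for the Audio Unit route, here is a sketch of the kind of render callback that approach uses. This is not code from the linked article; it assumes a RemoteIO unit already configured for mono 32-bit float output, and sampleRate/frequency are placeholder values:

static OSStatus RenderTone (void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    const double sampleRate = 44100.0; // assumed; query the hardware in real code
    const double frequency = 440.0;    // assumed fixed tone
    static double theta = 0.0;         // phase carried across callbacks

    Float32 *out = (Float32 *) ioData->mBuffers[0].mData;
    double thetaIncrement = 2.0 * M_PI * frequency / sampleRate;

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        out[frame] = (Float32) (sin(theta) * 0.25); // quarter-amplitude sine
        theta += thetaIncrement;
        if (theta > 2.0 * M_PI) theta -= 2.0 * M_PI;
    }
    return noErr;
}

The callback is installed on the RemoteIO unit with AudioUnitSetProperty and kAudioUnitProperty_SetRenderCallback; see the linked article for the full setup.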