I’m working on an audio project and have a question regarding the meaning of the timestamp retrieved by AudioQueueGetCurrentTime().
According to the Apple Developer Documentation, the following calculation gives the audio time being played (since AudioQueueStart):
- (Float64)GetCurrentTime {
    AudioTimeStamp c;
    AudioQueueGetCurrentTime(playState.queue, NULL, &c, NULL);
    return c.mSampleTime / _av->audio.sample_rate;
}
However, in a project I’m working on, I noticed the following code inside the fillAudioBuffer callback function of AudioQueue:
static void fillAudioBuffer(AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    int lengthCopied = INT32_MAX;
    int dts = 0;
    int isDone = 0;

    buffer->mAudioDataByteSize = 0;
    buffer->mPacketDescriptionCount = 0;

    OSStatus err = 0;
    AudioTimeStamp bufferStartTime;
    AudioQueueGetCurrentTime(queue, NULL, &bufferStartTime, NULL);

    while (buffer->mPacketDescriptionCount < numPacketsToRead && lengthCopied > 0) {
        if (buffer->mAudioDataByteSize) {
            break;
        }

        lengthCopied = getNextAudio(_av,
                                    buffer->mAudioDataBytesCapacity - buffer->mAudioDataByteSize,
                                    (uint8_t *)buffer->mAudioData + buffer->mAudioDataByteSize,
                                    &dts, &isDone);
        if (!lengthCopied || isDone) break;

        if (aqStartDts < 0) aqStartDts = dts;
        if (dts > 0) currentDts = dts;

        if (buffer->mPacketDescriptionCount == 0) {
            bufferStartTime.mFlags = kAudioTimeStampSampleTimeValid;
            bufferStartTime.mSampleTime = (Float64)(dts - aqStartDts) * _av->audio.frame_size;
            if (bufferStartTime.mSampleTime < 0) bufferStartTime.mSampleTime = 0;

            PMSG2("AQHandler.m fillAudioBuffer: DTS for %x: %lf time base: %lf StartDTS: %d\n",
                  (unsigned int)buffer, bufferStartTime.mSampleTime, _av->audio.time_base, aqStartDts);
        }

        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mStartOffset = buffer->mAudioDataByteSize;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mDataByteSize = lengthCopied;
        buffer->mPacketDescriptions[buffer->mPacketDescriptionCount].mVariableFramesInPacket = _av->audio.frame_size;
        buffer->mPacketDescriptionCount++;
        buffer->mAudioDataByteSize += lengthCopied;
    }

#ifdef DEBUG
    int audioBufferCount, audioBufferTotal, videoBufferCount, videoBufferTotal;
    bufferCheck(_av, &videoBufferCount, &videoBufferTotal, &audioBufferCount, &audioBufferTotal);

    PMSG2("AQHandler.m fillAudioBuffer: Video Buffer: %d/%d Audio Buffer: %d/%d\n",
          videoBufferCount, videoBufferTotal, audioBufferCount, audioBufferTotal);
    PMSG2("AQHandler.m fillAudioBuffer: Bytes copied for buffer 0x%x: %d\n",
          (unsigned int)buffer, (int)buffer->mAudioDataByteSize);
#endif

    if (buffer->mAudioDataByteSize) {
        if ((err = AudioQueueEnqueueBufferWithParameters(queue, buffer, 0, NULL, 0, 0, 0, NULL,
                                                         &bufferStartTime, NULL)))
        {
#ifdef DEBUG
            char sErr[10];
            PMSG2("AQHandler.m fillAudioBuffer: Could not enqueue buffer 0x%x: %d %s.",
                  buffer, err, FormatError(sErr, err));
#endif
        }
    }
}
Based on the documentation for AudioQueueEnqueueBufferWithParameters and the author's variable naming, bufferStartTime seems to represent the time at which the newly filled audio buffer will start playing, i.e., the time when all audio currently in the queue has finished playing and the new audio begins. Under that interpretation, bufferStartTime is not the same as the time of the audio currently being played.
I have browsed through many related questions, but I still have some doubts. I'm currently fixing an audio-video synchronization issue in my project, and the Apple Developer Documentation doesn't cover this point in much detail (or maybe my search skills are lacking).
Can someone clarify the exact meaning of the timestamp returned by AudioQueueGetCurrentTime() in this context? Is it the time when the current audio will finish playing, or is it the time when the new audio will start playing? Any additional resources or documentation that explain this in detail would also be appreciated.