ags wrote:As I tried to explain in my previous post, there will ALWAYS be variability in the time between when a given frame is processed in an SDL and when it sounds. The biggest variability is between the time required for the onset of the first sound (played via bytecode interpretation) and the onsets of subsequent sounds (if played via code loaded into memory). But JVM thread switching also accounts for variability.
I think this breaks down into two related but separate issues:
First, the latency between writing audio (to an SDL) and when that audio is actually heard. If I could find a way to measure that, and it was consistent, I could adjust my processing and delivery of audio data to that SDL.
The other problem is that even if I were able to get an accurate measurement of the delay (latency) in the audio lines, and then set system timers to trigger actions in sync with the audio frame rate (the frame rate for uncompressed PCM audio, derived from the sample rate), there is still a source of error. Again from observation rather than direct knowledge, there seems to be no direct link between the sound card clock (which controls the real-time frame rate and should really be the master of all actions) and the system clock (which I would use as a surrogate for the sound card clock to synchronize my actions with the audio being heard). Even if I were to establish perfect synchronization at the start of a song, by the time 5 or 10 minutes of audio has been processed, I suspect an accumulation of errors between the sound card clock and the system clock would become noticeable.

The sound card clock and the system clock are perfectly in sync. The problem is the granularity of the system clock. With Microsoft, the clock info is only updated once every 15 msec or so. As a consequence, System.currentTimeMillis() will only report the time as of the last update, and Thread.sleep(millis) and Timer scheduling are subject to the same constraint. The error doesn't accumulate, though. It is just an issue of reduced "granularity" that creates a predictable, bounded error. (System.nanoTime() will always be more accurate--it is not subject to letting the OS determine the update frequency.)
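You can see the granularity issue directly. Here is a minimal sketch that spins until System.currentTimeMillis() ticks over and reports the size of one visible update step; on Windows this is where the ~15 msec figure shows up, while System.nanoTime() resolves much finer because it is not tied to the OS timer update interval:

```java
public class ClockGranularity {

    // Spin until the millisecond clock changes, and return the step size.
    public static long measureMillisStep() {
        long start = System.currentTimeMillis();
        long next = start;
        while (next == start) {            // busy-wait until the clock ticks over
            next = System.currentTimeMillis();
        }
        return next - start;               // size of one visible update step
    }

    public static void main(String[] args) {
        System.out.println("currentTimeMillis step: " + measureMillisStep() + " ms");
        // nanoTime is read from a high-resolution source, so consecutive
        // reads differ by far less than the millisecond clock's step.
        long a = System.nanoTime();
        long b = System.nanoTime();
        System.out.println("nanoTime delta between two reads: " + (b - a) + " ns");
    }
}
```

The measured step varies by OS and timer configuration, so treat the printed value as diagnostic output, not a constant.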
I'm open to ideas, criticism or examples of how I might achieve my goal. While it might not be practical, or the only way, the best I can describe what I'm looking for is an audio-event-driven system where I can determine a frame position corresponding to the audio being rendered in real time, and trigger events to keep other actions synchronized with that audio. If a constant delay (latency) was maintained and measurable, that would be OK.

Agreed about the difficulty!
This is a difficult problem.
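One way to frame the "constant, measurable latency" idea is as pure frame arithmetic. The sketch below is a hypothetical illustration, not working audio code: the sample rate and the latency value (in frames) are assumed constants you would have to measure for your own line, but given those, a future cue frame converts into a nanosecond countdown you can schedule against:

```java
public class CueTiming {
    static final double SAMPLE_RATE = 44_100.0;  // frames per second (assumed)
    static final long LATENCY_FRAMES = 4_096;    // hypothetical measured latency

    // Estimated frame currently sounding, given frames written to the line:
    // what has been written, minus what is still sitting in the buffer chain.
    static long heardFrame(long framesWritten) {
        return Math.max(0, framesWritten - LATENCY_FRAMES);
    }

    // Nanoseconds from "now" until a given cue frame should be heard.
    static long nanosUntilCue(long cueFrame, long framesWritten) {
        long framesAhead = cueFrame - heardFrame(framesWritten);
        return (long) (framesAhead / SAMPLE_RATE * 1_000_000_000L);
    }

    public static void main(String[] args) {
        // Cue at frame 88,200 (2 seconds in), with 50,000 frames written so far.
        System.out.println("ns until cue: " + nanosUntilCue(88_200, 50_000));
    }
}
```

The whole scheme stands or falls on the latency actually being constant; if the mixer rebuffers, the measured value has to be refreshed (e.g. by periodically comparing frames written against the line's reported frame position).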
Phil Freihofner wrote:1) I know to the exact frame where my cues are. The problem is that they need to be synced with the frame when it is rendered (heard), not when it is written to the line.
Given all this, I think the questions are:
1) how much in advance do you know about given frames that you wish to use as cues?
2) how well can you guesstimate an anchor point? (and, is there a way to improve upon the guess with feedback?)
3) how accurately can you trigger the visual cue?
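On question 3, one common workaround for the coarse granularity of Thread.sleep() is a sleep-then-spin pattern: sleep until a couple of milliseconds before the target time, then busy-wait on System.nanoTime() for the final stretch. A minimal sketch (the 2 ms safety margin is an assumption to tune for your platform):

```java
public class PreciseTrigger {

    // Fire the action as close as possible to the given nanoTime target.
    static void triggerAt(long targetNanos, Runnable action) {
        long margin = 2_000_000L; // 2 ms safety margin (tunable assumption)
        long remaining = targetNanos - System.nanoTime();
        if (remaining > margin) {
            try {
                // Coarse wait: cheap, but only accurate to the OS timer step.
                Thread.sleep((remaining - margin) / 1_000_000L);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // stop waiting if interrupted
            }
        }
        while (System.nanoTime() < targetNanos) {
            // Fine wait: spin against the high-resolution clock.
        }
        action.run();
    }

    public static void main(String[] args) {
        long target = System.nanoTime() + 10_000_000L; // 10 ms from now
        triggerAt(target, () -> System.out.println("cue fired"));
        System.out.println("overshoot (ns): " + (System.nanoTime() - target));
    }
}
```

The spin burns a core for the last millisecond or two, which is usually acceptable for occasional visual cues; for dense cue streams you would want a dedicated scheduling thread instead.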