I'm experimenting to understand how best to use SourceDataLine/TargetDataLine to synchronize audio with other actions in my project. I'm realizing that it is beyond the capabilities of Java Sound to provide an instantaneous, accurate frame position in the audio stream (beyond about 0.1 seconds or so). This is probably more a reality of building a cross-platform sound system, and of how tightly the sound hardware (card) is integrated into the compute platform, than a failure of the Java Sound implementation.
In my experimentation, it looks like there is some undocumented behavior relating to how a SourceDataLine reports its frame position. This may be machine-specific, depending on the drivers and sound hardware, but it may also be an effect of the version of the JVM/JRE I'm using. The documentation for SourceDataLine.getLongFramePosition() states:
the number of frames already processed since the line was opened
However, what I'm seeing is that the reported frame position "sticks" at 0 after the SDL.start() call until the first SDL.write() call. After that, the behavior is as expected. In addition, I'm seeing that if the SDL is not refreshed within a certain period of time (becomes starved of audio data), the frame position is again reset to 0. This makes some sense: it allows more accurate frame reporting without having to worry about the time between opening and starting the SDL and actually beginning to load it with audio data.

At first I thought the reset occurred when the SDL buffer was allowed to drain completely of audio data; that is, if I wrote <n> bytes into an <m>-byte buffer, the reset would happen as soon as those <n> bytes had been "loaded" into the SDL's internal buffer (distinct from the buffer created with the SDL.open() call; I assume this is what is happening in the Java Sound implementation). Then I thought it happened if there was no SDL.write() call for a length of time equal to the time represented by the size of the SDL buffer (that is, bufferSizeInBytes / audioFormatFrameSize / audioFormatFrameRate). Neither of those proved to be true; apparently some amount of time has to elapse, and it does seem to be related to the amount of data that was written to the SDL buffer, but I can't determine exactly what the relationship is. If source is available, that would be most helpful.
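To make the experiment concrete, here is a minimal sketch of the kind of probe I'm describing (the class and method names are my own invention, and the format and sizes are just example values, not anything from the Java Sound source). The helper method is just the buffer-time formula above, bufferSizeInBytes / audioFormatFrameSize / audioFormatFrameRate:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class FramePositionProbe {

    // Milliseconds of audio represented by the open buffer:
    // bufferSizeInBytes / frameSize / frameRate, scaled to ms.
    // Pure arithmetic; no hardware involved.
    static double bufferMillis(int bufferSizeInBytes, int frameSize, float frameRate) {
        return 1000.0 * bufferSizeInBytes / frameSize / frameRate;
    }

    public static void main(String[] args) throws Exception {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian => 4-byte frames
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        try {
            SourceDataLine sdl = AudioSystem.getSourceDataLine(fmt);
            sdl.open(fmt);
            sdl.start();

            // Observed behavior: position "sticks" at 0 here, before any write().
            System.out.println("before write: " + sdl.getLongFramePosition());

            // Write 100 ms of silence (4410 frames), then sample the position again.
            byte[] silence = new byte[fmt.getFrameSize() * 4410];
            sdl.write(silence, 0, silence.length);
            Thread.sleep(50);
            System.out.println("after write:  " + sdl.getLongFramePosition());

            sdl.drain();
            sdl.close();
        } catch (LineUnavailableException | IllegalArgumentException e) {
            // e.g. headless machine with no audio device
            System.out.println("no audio line available: " + e);
        }

        // Playback time represented by an 88200-byte buffer of 4-byte frames at 44.1 kHz
        System.out.println("buffer time (ms): " + bufferMillis(88200, 4, 44100f));
    }
}
```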
Is there any way to determine whether I can count on this behavior? (Probably not, since it is a side effect rather than documented, guaranteed behavior.)
Thanks, I've responded in that other thread as well. Is there any way to gain access (with permission) to the Java Sound implementation source, or an example of what it might look like on some systems, as a way to understand more about how I can use it properly? The existing tutorial and examples do not provide enough detail for what I'm trying to do.
Source code came with my JDK, and that includes the SourceDataLine interface. But getting the actual code that implements it might be tricky.
I think it is in rt.jar, in the package com.sun.media.sound, and the source for this package is not included in the JDK source. Within that package, I suspect the implementation is the DirectAudioDevice class, but I am far from certain. I got my copy from Java.net, "for research purposes only", but I can't remember the URL!
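If it helps while hunting for the source, one quick way to see which concrete class your JVM actually hands back is to ask the line itself. This is just a sketch; the printed class name is an undocumented implementation detail and will likely differ across platforms and JVM versions (on some desktop JVMs it may be a DirectAudioDevice inner class, but that's my guess, not documented behavior):

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

public class ImplProbe {
    public static void main(String[] args) {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian => 4-byte frames
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        try {
            SourceDataLine sdl = AudioSystem.getSourceDataLine(fmt);
            // Print the runtime class of the SourceDataLine implementation.
            System.out.println(sdl.getClass().getName());
        } catch (LineUnavailableException | IllegalArgumentException e) {
            // No matching line on this machine (e.g. headless server)
            System.out.println("no audio line available: " + e);
        }
    }
}
```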
But I don't think this will help you much, as the variability in timing is going to be a given regardless of the implementation. Please see the article that I linked in your other post: "Real Time Low Latency Audio Processing in Java".