0 Replies Latest reply: May 27, 2012 11:53 PM by 845492 RSS

    Looking for empirical data (hints) to minimize CPU load when writing to SDL

      This may be as much a Java question as a Java Sound question. I've started looking at tuning an application that is writing PCM audio (byte[]) to a SourceDataLine. I noticed that it was consuming a lot of CPU time (about 50% on a dual-core laptop, used for testing on low-end hardware). I did a quick comparison using Audacity, and saw it only used about 10% CPU time for the same audio. After scratching my head for a while, it dawned on me that I was spending a lot of time with the audio player thread blocked waiting for SourceDataLine.write() to complete. I presumed that the SDL.write() call would put the calling thread in a wait() state and notify() it when there was room in the buffer. That seems incorrect. So I've gone from this:

      while (SDL.write(buffer, offset, bytesToWrite) != -1) {
          ...
      }

      to this:

      while (SDL.write(buffer, offset, SDL.available()) != -1) {
          sleep(20); // in try/catch block of course
          ...
      }

      The CPU usage is now down to about 15%.
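For reference, here is a hedged sketch of that throttled approach with the offset bookkeeping filled in. The names (`pcm`, `line`, `ThrottledPlayback`) are mine, not from the original code, and this assumes the line is already open and started:

```java
import javax.sound.sampled.SourceDataLine;

public class ThrottledPlayback {
    /** Bytes to pass to write(): whatever currently fits, capped by what's left to play. */
    static int nextChunk(int available, int remaining) {
        return Math.min(Math.max(available, 0), remaining);
    }

    /** Plays pcm on an open, started line, sleeping between writes instead of blocking inside write(). */
    static void play(SourceDataLine line, byte[] pcm) throws InterruptedException {
        int pos = 0;
        while (pos < pcm.length) {
            int n = nextChunk(line.available(), pcm.length - pos);
            if (n > 0) {
                pos += line.write(pcm, pos, n); // write() returns the byte count actually written
            }
            Thread.sleep(20); // yield the CPU rather than spin
        }
        line.drain(); // let the tail of the internal buffer play out
    }
}
```

Only writing what available() reports means write() should never have to block, so the sleep() is the only place the thread waits.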

      Question: is there empirical knowledge/best or accepted practices on how to do this for optimal (lowest) CPU use? For instance, I see that when using the default SDL.open() call, the default buffer is sized to hold 1 second of audio data for the specified format. Is there a known relationship between this buffer size and performance/efficiency? Would I be better off writing a specific amount of data (SDL.write()) as a percentage of the total buffer size, and sleep for an amount of time that is a bit less than the real time represented by that size? Maybe the entire approach is flawed?
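To make the buffer-size/sleep-time relationship concrete, here is a small arithmetic sketch of what I mean. The format (44.1 kHz, 16-bit stereo) and the quarter-buffer chunk size are just assumed examples, not measured recommendations:

```java
import javax.sound.sampled.AudioFormat;

public class ChunkTiming {
    /** Milliseconds of real audio represented by chunkBytes of data in the given format. */
    static long chunkMillis(AudioFormat fmt, int chunkBytes) {
        return (long) (1000.0 * chunkBytes / (fmt.getFrameSize() * fmt.getFrameRate()));
    }

    public static void main(String[] args) {
        // Assumed example format: 44.1 kHz, 16-bit, stereo, signed, little-endian PCM
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        int bufferBytes = (int) (fmt.getFrameRate() * fmt.getFrameSize()); // ~1 second, like the default open()
        int chunkBytes  = bufferBytes / 4;                                 // hypothetical quarter-buffer writes
        // A quarter of a 1-second buffer represents 250 ms of audio, so one might
        // sleep a bit less than 250 ms between writes of that size.
        System.out.println(bufferBytes + " " + chunkBytes + " " + chunkMillis(fmt, chunkBytes));
        // prints 176400 44100 250
    }
}
```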

      I appreciate any insight the experts may be able to shed on this. It's embarrassing that I didn't see the problem with my first implementation immediately.

      [Aside: it +appears+ that the TDL.read() method does indeed suspend the calling thread while the buffer is being filled, as I suspected would be the case for SDL.write(). I say this based on experience where I used a similar implementation to the first one above when reading, and CPU usage on the same machine was well below the 50% consumed by the SDL.write(). That is:

      while (TDL.read(buffer,offset,bytesToRead)!=-1) {
          ...
      }

      However, I was using a much smaller "bytesToRead" value to reduce latency. Can anyone prove/disprove my suspicion?]