There seems to be a discrepancy between what is reported via pq and what is visible through ipcs (the actual IPC queues). We have a script that polls pq (via tmadmin) and pumps the statistics into a tool called Introscope, which we use to present system stats historically. Through a Java agent we also pump stats from WebLogic's WTC. On the same service, with no queue building visible in IPC, there is queueing reported in pq and a clear time difference (100+ ms) between the imported WTC service and the actual Tuxedo service.
This we have confirmed several times in our QA and production.
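A polling script of the kind described can be sketched roughly as below. This is a hypothetical sketch, not our actual script: the pq column layout (two header lines, queued count in field 5) and the `sum_queued` helper name are assumptions that must be adapted to the pq output of your Tuxedo release.

```shell
#!/bin/sh
# Hypothetical sketch: compare the queue depth tmadmin pq reports
# with what the OS reports via ipcs -q for the same IPC queues.

# sum_queued: read tmadmin pq output on stdin and sum an assumed
# "# Queued" column (field 5, after two assumed header lines).
sum_queued() {
    awk 'NR > 2 && NF >= 5 { total += $5 } END { print total + 0 }'
}

# Only talk to Tuxedo if tmadmin is actually on the PATH
# (requires TUXCONFIG etc. to be set in the environment).
if command -v tmadmin >/dev/null 2>&1; then
    # -r opens tmadmin read-only; the pq command is fed on stdin.
    tux_queued=$(echo pq | tmadmin -r 2>/dev/null | sum_queued)
    echo "pq reports ${tux_queued} queued requests"

    # OS-level view of the message queues, for comparison.
    ipcs -q
fi
```

Comparing the two numbers side by side on each poll is what exposes the discrepancy discussed here.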
My theory is that, despite there being no OS-level queue building (ipc), Tuxedo as a container is reacting to something internal to itself. If there is no queue building outside the container, then it must be inside it?
Another thing to note: I don't know the details of how Tuxedo performs the actual service call. Is it an IPC message queue alone that holds the call, including its data (the in-argument), while the pool of servers offering the called service is busy? Or are semaphores involved to help with the in-argument storage, to keep the size of the data on the message queues down? What I am getting at is that the Tuxedo implementation may be what gives rise to the difference between pq and ipcs.
It is a common misconception that Tuxedo's tmadmin pq command reports the same thing as the operating system's ipcs command.
To force Tuxedo to report ipcs results through pq, a few things are needed, and they depend on the server you are interested in, the release of Tuxedo, and the rolling patch level of Tuxedo.
For general application servers: Tuxedo 10.3 RP091 (Bug 13109309) enhances the environment variable TM_SVRLOAD_REMEDY_ROUNDS to use ipcs results (e.g. when set to 50). See KM Doc ID 1463650.1.
For the GWTDOMAIN server: Tuxedo 8.0 RP391 (Bug 8130933), Tuxedo 8.1 RP298, and Tuxedo 9.0 RP046 add the environment variable TM_GWT_READIPCQUEUE. When it is set to 'y', tmadmin pq will provide ipcs results. See KM Doc ID 777521.1.
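Assuming these variables are exported in the server's environment (e.g. via an ENVFILE or the booting shell) on a release that carries the patches above, a minimal sketch looks like this; the value 50 is only the example value from the note, not a recommendation:

```shell
# Sketch: enable ipcs-backed pq results. Availability depends on the
# Tuxedo release / rolling patch levels cited above.

# For the GWTDOMAIN server (8.0 RP391 / 8.1 RP298 / 9.0 RP046+):
export TM_GWT_READIPCQUEUE=y

# For general application servers (10.3 RP091+); 50 is just the
# example value mentioned in the KM note:
export TM_SVRLOAD_REMEDY_ROUNDS=50

# Restart the affected servers so they pick up the new environment,
# then re-check with: echo pq | tmadmin -r
```

After rebooting the servers, pq and ipcs should tell the same story for the configured servers.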