How is it calculated (IIRC): 100 - 100 * (cell physical IO interconnect bytes / cell physical IO bytes eligible for predicate offload)
If the value is positive, it is the percentage saved: how much less data was actually sent over the interconnect than was read from disk (projection and restriction having been applied on the cells).
If the value is negative, it means more bytes crossed the interconnect than were read from disk. At first glance that seems possible only when the data is stored compressed and decompressed on the cells before being returned, but there is another cause, described below.
If I recall, the challenge is that the expression combines a "one-directional" number with a "bi-directional" measurement. The bytes_eligible_for_predicate_offload statistic is one-directional: it only measures data going from the cells to the db server. The io_interconnect_bytes statistic is bi-directional: it counts bytes regardless of whether they were sent from cell to db server or from db server to cell. So in cases where there is a large amount of traffic from the db server to the cells, the efficiency number gets reduced and can even be driven negative.
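The asymmetry can be sketched numerically. Assuming, purely for illustration, a query whose smart scan ships back 1 GB but which also writes 12 GB of temp to the cells (all numbers invented):

```python
eligible_bytes      = 10 * 2**30  # cell -> db direction only
smart_scan_returned = 1 * 2**30   # cell -> db, after projection/restriction
temp_writes         = 12 * 2**30  # db -> cell (e.g. temp sort segments)

# io_interconnect_bytes counts traffic in BOTH directions:
interconnect_bytes = smart_scan_returned + temp_writes

efficiency = 100 - 100 * interconnect_bytes / eligible_bytes
print(efficiency)  # -30.0: the write traffic drives the metric negative
```

The scan itself saved 90% of the eligible bytes, yet the reported number is negative purely because of the db-to-cell writes.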
Use cases with a large amount of db-server-to-cell traffic include writing the temp sort segments during a disk sort. If I recall correctly, you also see this with DML statements like INSERT ... SELECT, CTAS, etc. I am not sure whether writes get double- or triple-counted depending on whether you use normal or high redundancy.
Since, for a given explain plan, dataset, and SGA, you'd write the same temp sort segments whether or not you used Exadata storage, it seems unfortunate to me that the "cell efficiency" metric gets influenced by temp sort writes and other write traffic.
Of course, I might be wrong, as it has been half a year since I was hands-on with Exadata, and they might have refined how they calculate this in later releases.
In any case, check your query in Real-Time SQL Monitoring and see whether it used TEMP space, was doing direct path writes, and so on.