You might be able to use dbconsole on the affected tables to revert the stats to a known working time and lock them there while you figure out the actual cause. However, if the actual cause turns out to be that the current data is no longer properly described by the old stats, this may not help; it depends. I've had good luck with this in "it just passed a threshold" cases, for example a compound key with a division column in it, where one particularly successful division is skewing the data.
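If you prefer the command line over dbconsole, DBMS_STATS can do the same revert-and-lock. A minimal sketch, assuming the affected table is SCOTT.ORDERS (a stand-in for your table) and that the plan was still good two days ago:

BEGIN
  -- Revert the table's statistics to how they looked two days ago
  DBMS_STATS.RESTORE_TABLE_STATS(
    ownname         => 'SCOTT',
    tabname         => 'ORDERS',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '2' DAY);
  -- Lock them so the nightly auto-gather job cannot overwrite them
  -- while you investigate (undo later with UNLOCK_TABLE_STATS)
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'ORDERS');
END;
/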
One way or another you need to figure out the actual problem. An explain plan with predicates for both the good and the bad executions is usually a good start. See HOW TO: Post a SQL statement tuning request - template posting (the code formatting instructions there seem to have broken; just use the advanced editor to select a fixed font instead).
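For reference, the default DBMS_XPLAN format already includes the Predicate Information section. A sketch, where 'a1b2c3d4e5fgh' is a hypothetical SQL_ID standing in for your statement's (find the real one in V$SQL):

EXPLAIN PLAN FOR
SELECT /* your problem statement here */ * FROM dual;

-- TYPICAL (the default) includes the Predicate Information section
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(format => 'TYPICAL'));

-- For the plan actually used by an execution still in the cursor cache:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('a1b2c3d4e5fgh', NULL, 'TYPICAL'));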
Have you tried restoring statistics to a time when performance was acceptable with dbms_stats.RESTORE_TABLE_STATS?
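If you go that route, it is worth checking first how far back the statistics history reaches (31 days by default) and which saved versions exist. A sketch, again with SCOTT.ORDERS standing in for the real table:

-- How far back can statistics be restored?
SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM dual;

-- Which historical versions exist for this table?
SELECT stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = 'SCOTT'
AND    table_name = 'ORDERS'
ORDER  BY stats_update_time;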
On a side note, consider using SQL Plan Management, as it would avoid situations like this: with SPM you could safely review the new plan and decide whether it was worth accepting or not.
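A minimal sketch of the SPM approach, assuming the good plan is still in the cursor cache under the same hypothetical SQL_ID:

DECLARE
  n PLS_INTEGER;
BEGIN
  -- Capture the known-good plan as an accepted SQL plan baseline.
  -- Plans the optimizer finds later are recorded but not used
  -- until they are evolved and accepted.
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'a1b2c3d4e5fgh');
END;
/

-- Review and accept (or reject) a newly captured plan later;
-- '<handle>' is a placeholder for the baseline's SQL handle:
-- SELECT DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => '<handle>') FROM dual;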
Also, if you are not absolutely sure that the new statistics-gathering options will be beneficial, use pending statistics to test them for yourself before publishing them.
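A sketch of the pending-statistics workflow, once more with a hypothetical SCOTT.ORDERS:

-- Gather into the pending area instead of publishing immediately
EXEC DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'ORDERS', 'PUBLISH', 'FALSE')
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'ORDERS')

-- In a test session, let the optimizer see the pending stats
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
-- ... run the critical statements and compare the plans ...

-- Keep them, or discard them and stay on the published stats
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('SCOTT', 'ORDERS')
-- EXEC DBMS_STATS.DELETE_PENDING_STATS('SCOTT', 'ORDERS')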
What was the change? What was the rollback?
What are the parameters you are passing to the Gather_Table_Stats procedure? (An illustrative call is sketched below.)
Has the underlying data changed significantly? Even a simple crossing of a threshold in the pattern of the data can cause the optimizer to use a different execution plan as its costing changes.
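By way of illustration, the parameters that most often change plan behaviour are estimate_percent, method_opt and cascade. A hypothetical call (SCOTT.ORDERS is a stand-in, not your table):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'ORDERS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',  -- controls histogram creation
    cascade          => TRUE);                        -- also gathers index statistics
END;
/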
Hemant K Chitale