Yes, this is a tricky one. The first thing that comes to my mind is some sort of MANOVA or MANCOVA, but I can't immediately see how to bend it into the shape that you want. I'll carry on thinking along these lines, but for the moment...
Perhaps a reasonable approach is to go back, not to p-values, but to normalized effect sizes. There is a one-to-one correspondence between the two, but the effect-size scale would seem to make more sense than the non-linear transform you go through to get a p-value.
So, consider one of the parameters $x_i$ that you're interested in, and consider its values before and after the intervention in samples A and B. In each of these samples calculate the effect size (after $-$ before) and divide it by its standard deviation to get $\theta_i^A$ and $\theta_i^B$. Then, to compare $\theta_i^A$ with $\theta_i^B$, either subtract one from the other, or take a ratio, or something like that. Then order the differences (or the ratios).
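As a minimal sketch of that recipe (the data layout and the effect magnitudes here are entirely made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows are subjects, columns are the parameters x_i,
# each measured before and after the intervention in samples A and B.
n_A, n_B, n_params = 30, 25, 5
before_A = rng.normal(0, 1, (n_A, n_params))
after_A = before_A + rng.normal(0.5, 1, (n_A, n_params))  # A responds strongly
before_B = rng.normal(0, 1, (n_B, n_params))
after_B = before_B + rng.normal(0.1, 1, (n_B, n_params))  # B responds weakly

def standardized_effect(before, after):
    """Per-parameter effect size: mean(after - before) / sd(after - before)."""
    diff = after - before
    return diff.mean(axis=0) / diff.std(axis=0, ddof=1)

theta_A = standardized_effect(before_A, after_A)
theta_B = standardized_effect(before_B, after_B)

# Compare sample-wise effects per parameter, then order by magnitude
delta = theta_A - theta_B
order = np.argsort(-np.abs(delta))
for i in order:
    print(f"parameter {i}: theta_A={theta_A[i]:+.2f}  "
          f"theta_B={theta_B[i]:+.2f}  delta={delta[i]:+.2f}")
```

The ratio variant is one line's change (`theta_A / theta_B`), though ratios get unstable when the denominator effect is near zero, which is one reason to prefer differences.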
This approach has the advantage of simplicity, but it has the disadvantage that there is no obvious analytic way of testing whether one parameter really differentiates the effect in A and B more than another. It can be done, using a computationally intensive resampling technique called the bootstrap, but this would require a bit of non-trivial statistical programming.
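To give a flavour of what that bootstrap might look like, here is one possible sketch for two parameters (the data are simulated; the choice of a percentile interval on the contrast $(\theta_0^A - \theta_0^B) - (\theta_1^A - \theta_1^B)$ is just one reasonable design, not the only one):

```python
import numpy as np

rng = np.random.default_rng(1)

def std_effect(diff):
    # Standardized effect per column: mean / sd of paired (after - before) diffs
    return diff.mean(axis=0) / diff.std(axis=0, ddof=1)

# Hypothetical paired (after - before) differences for two parameters,
# measured on the same subjects in each of samples A and B
n_A, n_B = 30, 25
diffs_A = rng.normal([0.8, 0.2], 1.0, size=(n_A, 2))
diffs_B = rng.normal([0.1, 0.1], 1.0, size=(n_B, 2))

def contrast(dA, dB):
    # How much more does parameter 0 separate A from B than parameter 1?
    delta = std_effect(dA) - std_effect(dB)  # per-parameter A-vs-B difference
    return delta[0] - delta[1]

observed = contrast(diffs_A, diffs_B)

# Bootstrap: resample subjects (rows) with replacement within each sample,
# keeping the parameters together so their correlation is preserved
n_boot = 5000
boot = np.empty(n_boot)
for b in range(n_boot):
    rA = diffs_A[rng.integers(0, n_A, n_A)]
    rB = diffs_B[rng.integers(0, n_B, n_B)]
    boot[b] = contrast(rA, rB)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"observed contrast = {observed:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# If the interval excludes 0, parameter 0 is credibly more differentiating
# than parameter 1 (at roughly the 5% level)
```

The essential point is that subjects, not individual measurements, are resampled, so the dependence between parameters measured on the same subject survives into the bootstrap distribution.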