I want to propose an adversarial perturbation method for global optimization. I don’t know much about perturbation methods for global optimization and I haven’t seen any theorems about them yet, but I guess this one can be categorized as an adversarial perturbation method! I haven’t run any simulations either, since it was only this morning that I realized this method is as ridiculous as any other perturbation method! Anyway, I may investigate it further in the future:
Suppose that you can write your objective function in the additive form: f^k(X, \theta^k) = \sum_{i=1}^{N} \theta^k_i g_i(X). The primary objective function has all \theta^k_i equal to 1.
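To make the setup concrete, here is a minimal sketch of such an additive objective. The component functions g_i, their number, and the dimension of X are all made up for illustration; any decomposition f = \sum_i g_i would do:

```python
import numpy as np

# Hypothetical component functions g_i (illustrative only).
def g1(x): return (x[0] - 1.0) ** 2
def g2(x): return (x[1] + 2.0) ** 2
def g3(x): return np.sin(x[0]) * np.sin(x[1])

components = [g1, g2, g3]

def f(x, theta):
    """Perturbed objective f^k(X, theta^k) = sum_i theta^k_i * g_i(X)."""
    return sum(t * g(x) for t, g in zip(theta, components))

# The primary objective has all theta_i = 1.
theta_primary = np.ones(len(components))
```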
We are at step k and have done a local search, so X^k is a local minimum of f^k(X, \theta^k).
Now, as the perturbation, we can change the \theta^k_i. If we change the sign of some \theta^k_i, we get an “Alternating Trajectory Method” (named after whom?!). The conventional adversarial method finds the gradient of f with respect to \theta and moves uphill along it. I propose instead to flip the sign of a few \theta^k_i at random with probability \mu, where \mu decays in such a way that every \theta^k_i is 1 in the limit.
This way we (hopefully) solve the primary objective function in the limit; a sign flip can turn the current minimum into a saddle point instantly, so we move away from the local minimum very fast (which is considered an advantage of adversarial methods); and we make a link between a trajectory method and a perturbation one.
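The whole loop might be sketched as follows. Everything beyond the idea above is an assumption of this sketch: the components g_i, the decay schedule \mu_k = \mu_0/(k+1), the resampling rule (each \theta_i is redrawn as -1 with probability \mu_k, which guarantees \theta^k \to (1, \ldots, 1) as \mu_k \to 0), and the search bounds, which are needed because flipping the sign of a convex component can make f^k unbounded below:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical additive components (illustrative only).
def g1(x): return (x[0] - 1.0) ** 2
def g2(x): return (x[1] + 2.0) ** 2
def g3(x): return np.sin(x[0]) * np.sin(x[1])
components = [g1, g2, g3]

def f(x, theta):
    """Perturbed objective f^k(X, theta^k) = sum_i theta^k_i * g_i(X)."""
    return sum(t * g(x) for t, g in zip(theta, components))

def perturbation_search(x0, n_steps=20, mu0=0.5):
    n = len(components)
    theta = np.ones(n)                  # start from the primary objective
    x = np.asarray(x0, dtype=float)
    best_x, best_val = None, np.inf
    # Bound the search box so sign-flipped objectives stay bounded below.
    bounds = [(-10.0, 10.0)] * len(x)
    for k in range(n_steps):
        # Local search: X^k is a local minimum of the current f^k(X, theta^k).
        res = minimize(f, x, args=(theta,), method="L-BFGS-B", bounds=bounds)
        x = res.x
        # Score X^k under the PRIMARY objective (all theta_i = 1).
        val = f(x, np.ones(n))
        if val < best_val:
            best_x, best_val = x.copy(), val
        # Perturbation: redraw theta with a decaying flip probability mu_k,
        # so theta^k -> (1, ..., 1) and we recover the primary problem.
        mu_k = mu0 / (k + 1)
        theta = np.where(rng.random(n) < mu_k, -1.0, 1.0)
    return best_x, best_val
```

The “escape the local minimum” behaviour shows up whenever a redrawn \theta_i turns a locally convex direction concave, so the previous X^k is no longer a minimum of the new f^k.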