SoloGen’s Adversarial Perturbation Method for Global Optimization

I want to propose an adversarial perturbation method for global optimization. I don’t know much about perturbation methods for global optimization and I haven’t seen any theorems about them yet, but I guess this can be categorized as an adversarial perturbation method! I have not run any simulations yet, because just this morning I found out that this method is as ridiculous as any other perturbation method! Anyway, I may investigate it more in the future:

Suppose that you can write your objective function in the additive form f^k(X, \theta^k) = \sum_{i=1}^{N} \theta^k_i g_i(X). The main objective function has all \theta^k_i equal to 1.
We are at step k and we have done a local search, so X^k is a local minimum of f^k(X, \theta^k).
Now, as the perturbation, we change the \theta^k_i. If we changed the sign of the \theta^k_i, we would get an “Alternating Trajectory Method” (named after whom?!). The conventional adversarial method finds the gradient of f w.r.t. \theta and moves uphill along it. I propose instead to flip the sign of a few \theta^k_i at random, each with probability \mu, where \mu decreases so that every \theta^k_i is 1 in the limit.
This way, we (hopefully) solve the primary objective function in the limit; we can turn the current minimum into a saddle point instantly (so we move away from the local minimum very fast, which is considered an advantage of an adversarial method); and we make a link between a trajectory method and a perturbation one.
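As a minimal sketch of how I read this proposal: flip the sign of each component with a decaying probability, run a local search on the perturbed landscape, then re-minimize the true objective and keep the best point. The toy components g_i, the flip-probability schedule, and all parameter values below are my own illustrative choices, not part of the method as stated.

```python
import math
import random

# Toy 1-D multimodal objective in the additive form f(x) = sum_i theta_i * g_i(x).
# The components g_i and their derivatives dg_i are illustrative choices.
g  = [lambda x: 0.1 * x * x, lambda x: math.cos(3.0 * x)]
dg = [lambda x: 0.2 * x,     lambda x: -3.0 * math.sin(3.0 * x)]

def f(x, theta):
    return sum(t * gi(x) for t, gi in zip(theta, g))

def local_search(x, theta, step=0.01, iters=2000):
    """Plain gradient descent on the (possibly sign-perturbed) objective."""
    for _ in range(iters):
        x -= step * sum(t * d(x) for t, d in zip(theta, dg))
    return x

def adversarial_perturbation_search(x0, rounds=30, mu0=0.5, decay=0.9, seed=0):
    rng = random.Random(seed)
    ones = [1.0] * len(g)
    best_x = local_search(x0, ones)        # step k: local minimum of the true f
    best_f = f(best_x, ones)
    for k in range(rounds):
        mu = mu0 * decay ** k              # flip probability -> 0, so theta -> all ones
        theta = [-1.0 if rng.random() < mu else 1.0 for _ in g]
        x = local_search(best_x, theta)    # descend on the sign-perturbed landscape
        x = local_search(x, ones)          # then re-minimize the true objective
        fx = f(x, ones)
        if fx < best_f:                    # keep the best point on the true objective
            best_x, best_f = x, fx
    return best_x, best_f
```

Note the mechanism at work in this toy: flipping the sign of the cosine component turns each of its minima into a maximum, so the perturbed descent is pushed out of the current basin immediately, which is the “change the minimum to a saddle point instantly” effect described above.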

Reviewing Night

Well! Just now, I finished writing three reviews for my Teaching and Research Methods course. Well! Those are not the kind of reviews I would like to write if they were real papers (I like to read the cited papers too), but they are not that bad. Well! It is too late at night, I have not eaten anything, and I should go home and cook something. Wow … a sample of hard work?!

Duality of Inversion and Gradient Descent?

Suppose that there is a linear problem defined as X = inv(A)b. You solve it using an iterative gradient-descent method on some error function like ||X – inv(A)b||^2 (or ||AX – b||^2). Moreover, suppose that you know X is a member of class_X, which simplifies the gradient-descent method in some way (you know that you should not go in some directions). Can we incorporate this knowledge into traditional linear-system solvers (not those that are based on calculating the gradient), or use it to find the inverse of A somehow more efficiently? I know that I have not defined this problem clearly, but anyway, my description may give some hint of what the problem is: is there any duality relation between calculating the inverse of A and solving the minimization problem I mentioned?!
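I cannot answer the duality question, but here is one minimal, concrete reading of “incorporating class_X knowledge”: gradient descent on ||AX – b||^2 where the gradient is projected so the iterate never moves along the forbidden directions. The matrix, the right-hand side, and the particular subspace (third coordinate known to be zero) are toy assumptions of my own, just to make the idea runnable.

```python
# Solve A x = b by gradient descent on ||A x - b||^2, using the prior knowledge
# that x lies in a known subspace ("class_X") by zeroing the gradient along
# the forbidden coordinates. All concrete values here are toy assumptions.

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [6.0, 7.0, 2.0]      # equals A @ [1, 2, 0], so the true solution is in class_X
free = [1.0, 1.0, 0.0]   # class_X knowledge: the third coordinate is known to be 0

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def solve_projected_gd(A, b, free, step=0.01, iters=1000):
    x = [0.0] * len(b)
    At = [list(col) for col in zip(*A)]                    # transpose of A
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]   # residual A x - b
        grad = [2.0 * gi for gi in matvec(At, r)]          # gradient of ||A x - b||^2
        # project the update: never move along the forbidden coordinates
        x = [xi - step * gi * fi for xi, gi, fi in zip(x, grad, free)]
    return x

x = solve_projected_gd(A, b, free)
```

The point of the sketch is that the projection restricts the descent to a lower-dimensional problem (here effectively 2-D instead of 3-D), which is exactly the kind of saving that a direct solver or an explicit inverse of A would not automatically exploit.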