Suppose we have a linear system Ax = b, so that x = inv(A)b. One way to solve it is an iterative gradient-descent method on an error function such as ||Ax - b||^2 (or, equivalently in exact arithmetic, ||x - inv(A)b||^2). Now suppose we also know that x belongs to some class_X, and this knowledge simplifies the gradient descent in some way (for instance, we know certain descent directions can be ruled out).

Can this knowledge be incorporated into traditional linear-system solvers (direct ones, not those based on computing a gradient), or be used to compute the inverse of A more efficiently? I know I have not stated the problem precisely, but my description may hint at what I am after: is there any duality relation between computing the inverse of A and solving the minimization problem I mentioned?
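To make the gradient-descent side of the question concrete, here is a minimal projected-gradient sketch in Python/NumPy. The choice of class_X (the nonnegative orthant) and the function names are my own assumptions for illustration; the idea is just that prior knowledge about x is enforced by a projection after each descent step.

```python
import numpy as np

def projected_gradient_solve(A, b, project, steps=5000, lr=None):
    """Minimize 0.5 * ||Ax - b||^2, projecting each iterate onto a set
    that encodes prior knowledge about x (a stand-in for class_X)."""
    if lr is None:
        # Step size below 1 / ||A||_2^2 guarantees descent for this objective.
        lr = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)      # gradient of 0.5 * ||Ax - b||^2
        x = project(x - lr * grad)    # descend, then enforce membership in class_X
    return x

# Hypothetical class_X: nonnegative vectors.
nonneg = lambda v: np.maximum(v, 0.0)

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([5.0, 5.0])
x = projected_gradient_solve(A, b, nonneg)
print(x)  # close to the true solution [1, 2], which is indeed nonnegative
```

Note that the projection step is easy to bolt onto an iterative method but has no obvious counterpart inside a direct solver (e.g., an LU factorization), which is essentially the gap the question is asking about.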