Suppose there is a linear problem defined as X = inv(A)b. You solve it using an iterative gradient-descent method defined on some error function like (X - inv(A)b).^2 (or (AX - b).^2). Moreover, suppose you know that X is a member of class_X, which simplifies the gradient-descent method in some way (you know that you should not move in some directions). Can we incorporate this knowledge into traditional linear-matrix-equation solvers (not those based on computing the gradient of an error function), or somehow use it to find the inverse of A more efficiently? I know that I have not defined this problem clearly, but my description may give some hint of what the problem is: is there any duality relation between computing the inverse of A and solving the minimization problem I mentioned?!
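In case a concrete toy helps later readers, here is a minimal sketch of the gradient-descent formulation above (my own illustration, assuming a small dense A; the example matrix and the step-size rule are just assumptions for the example):

    import numpy as np

    def solve_by_gradient_descent(A, b, eta=None, iters=5000):
        # Minimize f(x) = ||Ax - b||^2; the gradient is 2 A'(Ax - b).
        # A step size of 1/L, where L = 2 * (largest singular value of A)^2
        # is the Lipschitz constant of the gradient, guarantees convergence.
        if eta is None:
            eta = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x -= eta * 2.0 * A.T @ (A @ x - b)
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(solve_by_gradient_descent(A, b))  # close to inv(A) @ b
    print(np.linalg.solve(A, b))            # reference answer

Restricting the descent to a subset of directions, as the question suggests, would amount to projecting that gradient step onto the allowed subspace.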
These two look to me like different methods of solving the same problem. The first one is incremental and the second one is not. Maybe some extended RLS (recursive least squares) is the path you are looking for.
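For reference, a minimal sketch of plain RLS (not the extended variant) applied to this problem, assuming the equations a_k'x = b_k arrive one at a time; delta is the usual large initialization constant:

    import numpy as np

    def rls_solve(A, b, delta=1e6):
        # Plain recursive least squares: process one equation a'x = y at a
        # time, keeping P ~ inv(A'A) updated via the Sherman-Morrison
        # formula, so no matrix is ever inverted explicitly.
        n = A.shape[1]
        x = np.zeros(n)
        P = delta * np.eye(n)          # large P = weak prior on x
        for a, y in zip(A, b):
            Pa = P @ a
            k = Pa / (1.0 + a @ Pa)    # gain vector
            x += k * (y - a @ x)       # correct the estimate
            P -= np.outer(k, Pa)       # rank-one downdate of P
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(rls_solve(A, b))             # close to inv(A) @ b

This makes the "incremental" flavor explicit: the estimate is revised after every equation rather than after seeing all of A.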
Yeap! It seems the same to me too. But I couldn't show the similarity between the performance of the two methods.
Suppose A is full-rank and positive-definite, and you solve 2Ax = b by minimizing x'Ax - b'x (whose gradient, 2Ax - b, vanishes exactly at the solution) using conjugate gradient. Then every direction is feasible, and every direction of descent leads towards the solution. So what sort of constraints could you have on the gradient, and where would you get them?
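For concreteness, here is a textbook conjugate-gradient loop for that setup (nothing beyond standard CG; since minimizing x'Ax - b'x solves 2Ax = b, the driver passes b/2 as the right-hand side):

    import numpy as np

    def conjugate_gradient(A, c, iters=None, tol=1e-10):
        # Standard CG for SPD A: minimizes (1/2)x'Ax - c'x, i.e. solves Ax = c.
        n = len(c)
        x = np.zeros(n)
        r = c - A @ x                       # residual = negative gradient
        p = r.copy()                        # first direction: steepest descent
        for _ in range(iters or n):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)      # exact line search along p
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p            # next direction, A-conjugate to p
            r = r_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b / 2))     # minimizer of x'Ax - b'x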
Or are you thinking of the case when A is full-rank but not positive-definite?
I guess my point is: we can solve matrix inversion by forming a quadratic and looking for its minimum. But if you have some constraints on your gradient, you are not minimizing a quadratic, hence this doesn't correspond to a matrix-inversion procedure. Right?
To Yaroslav: emmm ... the problem needs more clarification. But I guess you are right that imposing some constraints changes the problem. But what if the constraints are of this form:
(Minimization problem) You may follow the gradient only along some random directions (not conjugate directions).
What is the equivalent statement in linear-matrix-equation terms?! Multiplying b by only a few of the corresponding random rows of the inverse of A?! If that is the case, is there any computationally efficient method to do so without calculating the inverse? I guess yes!
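If it helps, one concrete reading of this is randomized coordinate descent: minimize (1/2)x'Ax - b'x (for SPD A) exactly along one randomly chosen coordinate direction per step. That is Gauss-Seidel with a random update order, and it never forms inv(A). A sketch, with an arbitrary example matrix:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_coordinate_descent(A, b, iters=2000):
        # Minimize (1/2)x'Ax - b'x for SPD A by exact minimization along one
        # random coordinate per step (Gauss-Seidel in random order).
        n = len(b)
        x = np.zeros(n)
        for _ in range(iters):
            i = rng.integers(n)
            # Zero the i-th gradient component (Ax)_i - b_i, holding the rest:
            x[i] += (b[i] - A[i] @ x) / A[i, i]
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(random_coordinate_descent(A, b))  # close to inv(A) @ b
    print(np.linalg.solve(A, b))            # reference answer

As for the "a few rows of inv(A)" reading: row i of inv(A) is the solution y of A'y = e_i, so a single component x_i = y'b costs one linear solve rather than a full inversion.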