Johnson-Lindenstrauss Lemma:
For any 0 < eps < 1 and any integer n, let k be a positive integer such that k >= 4 (eps^2/2 - eps^3/3)^(-1) ln(n). Then for any set V of n points in R^d, there is a map f: R^d -> R^k such that for all u, v in V,
(1-eps)||v-u||^2 <= ||f(v) - f(u)||^2 <= (1+eps)||v-u||^2. Moreover, this map can be found in randomized polynomial time. In other words, this theorem (or lemma!) states that we can always reduce the dimension of n data points down to k = O(ln(n)/eps^2) while distorting pairwise distances by at most a factor of (1 +/- eps). Isn't that a useful result for dimension reduction and manifold learning? Especially when we do not want to assume that the samples lie strictly on the manifold, but only close to it (actually it seems that Sanjoy Dasgupta and Anupam Gupta, whose paper I am referring to, considered the same type of application). For further reading, see Sanjoy Dasgupta and Anupam Gupta, "An elementary proof of the Johnson-Lindenstrauss Lemma," Random Structures and Algorithms, 22(1):60-65, 2003.
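To see the lemma in action, here is a minimal sketch using one standard construction of the map f: a random Gaussian projection scaled by 1/sqrt(k). This particular construction is my choice for illustration (the lemma itself only asserts existence plus a randomized polynomial-time algorithm); the value of k follows the bound above.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

n, d, eps = 100, 10_000, 0.5
# Target dimension from the lemma's bound: k >= 4 ln(n) / (eps^2/2 - eps^3/3)
k = int(np.ceil(4 * np.log(n) / (eps**2 / 2 - eps**3 / 3)))

X = rng.normal(size=(n, d))  # n random points in R^d

# Random Gaussian projection, scaled so squared distances are preserved
# in expectation; with high probability it satisfies the JL guarantee.
R = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ R

# Ratio of projected to original squared distance for every pair of points
ratios = np.array([
    np.sum((Y[i] - Y[j]) ** 2) / np.sum((X[i] - X[j]) ** 2)
    for i, j in combinations(range(n), 2)
])
print(k, ratios.min(), ratios.max())
```

For n = 100 and eps = 0.5 this gives k around 222, and all pairwise squared distances should land within the (1-eps, 1+eps) band, which is easy to verify by checking the min and max of `ratios`.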