Walter J. Freeman, “Happiness doesn’t Come in Bottles”

Have you ever felt that you are not happy with your life even though you have an *objectively* successful one (e.g., a lot of money, many publications, etc.)? And have you asked yourself what was wrong with your life?

By chance, I came across this article, “Happiness doesn’t Come in Bottles” by Walter J. Freeman. It is about happiness and its connection to the dynamical and chemical phenomena happening in our brain. I do not want to summarize the article; instead, I suggest you read this short piece yourself.

I do not know whether all the results and suggestions in this paper are accurate (the paper is a bit old, and it is not written as a report of a scientific discovery), but I do know that reading it was, and still is, stimulating for me. Maybe we geeks in the community (be it ML, CS, EE, Math, etc.) need more advice of this kind (no offense, for sure!).

Scholarpedia

Have you seen Scholarpedia?
It is a wiki project on scientific subjects whose articles are written by experts in the field and peer reviewed by others. This means you will know who actually wrote most of an article, and you know that s/he is an expert.

The project is in its infancy now, so you probably cannot find the subject you would like to read about, but several authors have promised to dedicate an article to it (and the good point is that I know many of them – maybe just because they are really big figures). Currently, three main subjects are covered: Computational Intelligence, Computational Neuroscience, and Dynamical Systems.

This idea looks interesting to me, but I am not completely certain whether it will go well. The potential strength of such a wiki project compared to the usual edited volumes is the self-sustainability generated by people’s continual contributions. Because of the rather strict conditions for starting an article (I cannot contribute a new article unless I am really well known in the field), and because of the difficulty of editing existing articles (your edits must be approved by the curator of the article), I believe the dynamics of the system is close to a damped one. I am not saying these properties are bad. Actually, they bring some strength to the project (i.e., the quality guaranteed by experts), but they have some negative impacts too.
Whether Scholarpedia will continue to grow or just converge to a fixed point(!) is not clear to me. We may need two years to be able to predict its fate. (;

If people continue to expand this project for the next, say, 10 years, it will be a great human project. If, however, it needs a constant push from its original editors, the project will not be *so* special anymore.
The other problem with the project is that it is copyrighted! I don’t like it personally, but maybe it is the best choice.
All said, I hope this project goes well and covers other subjects of science too.

Computational Learning and Motor Control Lab@USC

You may like to visit Stefan Schaal’s Computational Learning and Motor Control Lab at the University of Southern California (USC). Besides many papers on applications of RL, supervised ML, nonlinear control, etc. in the robotics domain, you can find some nice movies too. For instance, I very much like the one in which the robot learns pole-balancing by imitation (here!).
I should read a few of those papers soon. I guess I read a paper by this group (Peters et al.) about the natural policy gradient about a year ago.

Johnson-Lindenstrauss Lemma

Johnson-Lindenstrauss Lemma:

For any 0 < eps < 1 and any integer n, let k be a positive integer such that k >= 4(eps^2/2 - eps^3/3)^-1 * ln(n). Then for any n points in R^d, there is a map f from R^d to R^k such that the distances between any two mapped points f(u) and f(v) (u and v in R^d) satisfy

(1-eps)||v-u||^2 <= ||f(v) - f(u)||^2 <= (1+eps)||v-u||^2.

Moreover, such a map can be found in randomized polynomial time. In other words, this theorem (or lemma!) states that we can always reduce the dimension of n data points down to k = O(ln(n)/eps^2) while distorting pairwise squared distances by a factor of at most (1 +/- eps). Isn’t that a useful result for dimension reduction and manifold learning? Especially when we do not want to assume that the samples lie strictly on the manifold, but only that they are close to it. (Actually, it seems that Sanjoy Dasgupta and Anupam Gupta, whose paper I am referring to, considered the same type of application.) For more, see Sanjoy Dasgupta and Anupam Gupta, “An elementary proof of the Johnson-Lindenstrauss Lemma,” Random Structures and Algorithms, 22(1):60-65, 2003.
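
To make the statement concrete, here is a minimal Python sketch of one standard way to realize such a map: multiplying the data by a Gaussian random matrix whose target dimension k is taken from the bound above. (Dasgupta and Gupta analyze projection onto a random subspace; the Gaussian variant, the helper names, and the numbers below are my own illustrative assumptions, not something taken from their paper.)

import numpy as np

def jl_dimension(n, eps):
    # Target dimension from the lemma's bound: k >= 4 ln(n) / (eps^2/2 - eps^3/3).
    return int(np.ceil(4 * np.log(n) / (eps**2 / 2 - eps**3 / 3)))

def random_projection(X, eps, seed=None):
    # Project the rows of X (n points in R^d) down to R^k with a Gaussian
    # random matrix; entries ~ N(0, 1/k) so that squared distances are
    # preserved in expectation.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = jl_dimension(n, eps)
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return X @ R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10000))   # 100 points in R^10000
    eps = 0.5
    Y = random_projection(X, eps, seed=1)
    # Spot-check the (1 +/- eps) distortion on a few random pairs.
    for _ in range(5):
        i, j = rng.choice(100, size=2, replace=False)
        d_old = np.sum((X[i] - X[j]) ** 2)
        d_new = np.sum((Y[i] - Y[j]) ** 2)
        print(d_new / d_old)            # should lie in [1 - eps, 1 + eps] w.h.p.

With eps = 0.5 and 100 points, the bound gives k around 220, and the printed ratios should fall inside [1 - eps, 1 + eps] with high probability.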

CFP: 2007 IEEE Symposium Series on Computational Intelligence and Scheduling (CISched 2007)

2007 IEEE Symposium Series on Computational Intelligence and Scheduling (CISched 2007)
April 1-5, 2007, Hilton Hawaiian Village Resort, Honolulu, HI, USA

This symposium is part of the IEEE Symposium on Computational Intelligence 2007 (IEEE SSCI 2007).
Registration for CISched 2007 gives access to the other Computational Intelligence symposia taking place at the same time. These include the following (but see here for the complete list):

  • First IEEE Symposium on Computational Intelligence in Image and Signal Processing
  • First IEEE Symposium on Computational Intelligence in Multicriteria Decision Making
  • First IEEE Symposium on Computational Intelligence in Data Mining
  • First IEEE Symposium on Computational Intelligence in Scheduling
  • 2007 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB)
  • 2007 IEEE Swarm Intelligence Symposium (SIS)
  • First IEEE Symposium on Artificial Life
  • Workshop on Evolvable Hardware

Scope of Symposium
===============

Continue reading “CFP: 2007 IEEE Symposium Series on Computational Intelligence and Scheduling (CISched 2007)”

Several Fancy Robots

I found this page on NewScientist.com very interesting. You can find videos of several fascinating robots on the page. Among those that I have watched (not all of them), I found 1) the self-replicating robots from Cornell University (see Hod Lipson’s page), 2) the dancing Qrio robots of Sony, and 3) Honda’s Asimo the most interesting.

This post is a great place to ask this question: what are the remaining open problems in robotics?
I know there are many difficult problems in the field. However, is there any problem that we are still far from solving? One answer is, of course, the problem of building intelligent machines. Putting that aside, is there any truly difficult mechanical/control-oriented challenge?