No recent posts! It's a pity. I'll try to fix that. I must make a few fundamental changes to keep this blog active.
Ph.C.
According to this funny article, doing a Ph.D. can be made much easier by eating chocolate. I have not tested that systematically (yes, I eat chocolate too!), but I'll try to. If my weight increases in the near future, you will know the reason.
HH!
Happy Holidays!
Tim Berners-Lee’s weblog
And from now on, you can read the weblog of Tim Berners-Lee, the grandfather of the WWW! Welcome, Tim!
Generation of Greatness (To Read)
Image Colorization
I just found it interesting:
Anat Levin, Dani Lischinski, Yair Weiss, “Colorization Using Optimization,” ACM Transactions on Graphics, Aug 2004.
On Reliability
For a successful technology, reality must take precedence over public relations, for nature cannot be fooled. [*]
You may like to read R. P. Feynman, Personal Observations on the Reliability of the Shuttle. I think every engineer can get something from this paper.
Active Learning and Reinforcement Learning
The problem of active learning can be considered a special case of reinforcement learning (as Sanjoy Dasgupta noted). We can view it as learning a policy (one that selects new data points) that maximizes the improvement in some measure of classification performance, e.g. the empirical risk, our estimate of the structural risk, or something similar.
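To make the analogy concrete, here is a minimal sketch (my own toy construction, not from any paper): the hypothesis class is thresholds on [0, 1], and the "policy" is a function that picks the next unlabeled point to query, namely the one the current version space is most uncertain about.

```python
import random

def label(x, t_true=0.37):
    # Hypothetical ground-truth concept: 1 iff x >= t_true.
    return int(x >= t_true)

def policy(pool, lo, hi):
    # The "policy" of the active learner: query the pool point closest
    # to the middle of the current version space [lo, hi] -- the point
    # whose label is most informative.
    mid = (lo + hi) / 2
    return min(pool, key=lambda x: abs(x - mid))

random.seed(0)
pool = [random.random() for _ in range(1000)]
lo, hi = 0.0, 1.0            # interval of thresholds consistent so far
for _ in range(10):          # each query roughly halves the interval
    x = policy(pool, lo, hi)
    pool.remove(x)
    if label(x):
        hi = min(hi, x)      # true threshold is at most x
    else:
        lo = max(lo, x)      # true threshold is above x
print(hi - lo)               # width of the remaining uncertainty region
```

Each query acts like an action chosen to maximize the reduction of uncertainty, which is exactly the policy-learning view above.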
@NIPS 2005
Right now, I am in the middle of the NIPS 2005 conference, waiting for another oral session to begin.
In this afternoon's session, Sanjoy Dasgupta talked about his new results on active learning and its possible merits compared with the traditional supervised case.
He defined something named the searchability index, which measures the difficulty of an active learning problem: it shows to what extent the ideal binary-search-like division of the hypothesis space is applicable. If that index is high (or constant over the whole hypothesis space), the sample complexity of the problem is VC_dim * log(1/eps) (binary division). The other extreme case is VC_dim * (1/eps) (the supervised case).
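Just to see how large that gap is, here is a quick back-of-the-envelope computation (my own illustration, with an arbitrarily chosen VC dimension of 10) comparing the two regimes:

```python
import math

d = 10  # assumed VC dimension, chosen for illustration
for eps in (0.1, 0.01, 0.001):
    active = d * math.log(1 / eps)   # binary-division regime
    passive = d * (1 / eps)          # supervised regime
    print(f"eps={eps}: active ~ {active:.0f}, passive ~ {passive:.0f}")
```

The gap widens quickly as eps shrinks, which is why the binary-division regime is so attractive when the index allows it.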
Anyway, you may like to read his paper (I'll cite it later; I can't remember the title of the talk).
I like the idea of active learning. Moreover, I wonder if RL can benefit from it (or vice versa). It seems that an active learning strategy is somewhat like choosing an exploration strategy.
Going to the NIPS
I will attend the NIPS 2005 conference this year. I'll try to log the conference; I don't know whether I can do it or not. Anyway, let's try it …