Right now, I am in the middle of the NIPS 2005 conference waiting for another oral session to begin.
In this afternoon's session, Sanjoy Dasgupta talked about his new result on active learning and its possible merits compared with the traditional supervised setting.
He defined something he called a searchability index, which measures how hard a problem is for active learning. The index captures to what extent an ideal, binary-search-like halving of the hypothesis space is possible. If the index is high (or constant over the whole hypothesis space), the label complexity is about VC_dim*log(1/eps), just like binary search; at the other extreme it degrades to VC_dim*(1/eps), no better than the supervised case.
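To make the contrast concrete, here is a quick sketch (my own toy code, not anything from the talk) of the standard one-dimensional threshold example: with binary search over a sorted unlabeled pool you need about log(1/eps) label queries, while the passive baseline labels all of the roughly 1/eps points. The function names and the threshold value are made up for illustration.

```python
# Toy comparison: active (binary-search) vs. passive label counts for
# learning a threshold classifier on [0, 1].
import random

def binary_search_labels(points, true_threshold):
    """Actively query labels, halving the version space at each step."""
    points = sorted(points)
    lo, hi = 0, len(points) - 1
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1                      # one label query per halving step
        if points[mid] < true_threshold:  # "negative" label: threshold lies to the right
            lo = mid + 1
        else:                             # "positive" label: threshold lies to the left
            hi = mid
    return queries

def passive_labels(points):
    """Supervised baseline: every point in the pool gets labeled."""
    return len(points)

if __name__ == "__main__":
    eps = 0.001
    pool = [random.random() for _ in range(int(1 / eps))]  # ~1/eps unlabeled points
    theta = 0.37                                           # hypothetical true threshold
    print("active  queries:", binary_search_labels(pool, theta))  # ~log2(1/eps), about 10
    print("passive queries:", passive_labels(pool))               # ~1/eps = 1000
```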
Anyway, you may want to read his paper (I'll cite it later; I can't remember the title of the talk).
I like the idea of active learning. Moreover, I wonder whether RL can benefit from it (or vice versa). Choosing which samples to query in active learning feels somewhat like choosing an exploration strategy.