Myths of Reinforcement Learning

RL is slow?
Large state spaces are hard for RL?!
RL does not work well with function approximation?!
Non-Markovianness invalidates standard RL methods?!
POMDPs are hard for RL to deal with?!

In Satinder Singh’s opinion, these are myths of reinforcement learning. He responds to each of them and argues that none of them holds up. Moreover, there is much more information about reinforcement learning (myths, successes, and …) at The University of Michigan Reinforcement Learning Group.
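
To make this concrete for readers new to RL, below is a minimal sketch of tabular Q-learning on a toy chain problem. It is not taken from Singh’s page; the environment and every parameter value are illustrative assumptions of mine, but it shows how small the core algorithm behind these debates really is.

```python
# A minimal sketch, not from Singh's page: tabular Q-learning on a small
# "chain" problem, the kind of textbook setting these myths concern.
# The environment, reward, and all parameter values are illustrative
# assumptions.
import random

N_STATES = 10        # states 0..9; state 9 is terminal and pays reward 1
ACTIONS = [-1, +1]   # move left or right along the chain
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(s, a):
    """Move along the chain, clamped at both ends."""
    s2 = max(0, min(N_STATES - 1, s + a))
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    """Greedy action with random tie-breaking."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(300):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # one-step Q-learning update
        target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print("Greedy policy:", [greedy(s) for s in range(N_STATES)])
```

After a few hundred episodes the learned greedy policy should move right from every state, which is optimal for this toy chain, so even this bare-bones implementation is neither slow nor fragile at this scale.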

5 Replies to “Myths of Reinforcement Learning”

  1. Hello, my friend!! 😀

    I’m very happy with your comment on my blog, The Genetic Argonaut. 😀

    So far I haven’t tested what you suggested on my blog, because my first aim was to search for the best solution I could find. And I found a value very close to it: 38.9, where the optimum is 40. But thank you for the recommendation, and I’ll try Simulated Annealing soon.

    Your blog is very cool. With your permission, I’m adding it to my favourites. 😀

    []´s

    Nosophorus

  2. Thanks for your comment and for adding my weblog to your favourites! (; I hope you succeed in finding the optimum! 😀

  3. Your article,
    “Behavior Hierarchy Learning in a Behavior-based System using Reinforcement Learning,”
    was a good inspiration.

  4. I am happy that my article was inspiring for you, Lew. What is your research about? Behavior-based systems, or something else?!
