RL is slow?
Large state spaces are hard for RL?!
RL does not work well with function approximation?!
Non-Markovianness invalidates standard RL methods?!
POMDPs are hard for RL to deal with?!
…
In the opinion of Satinder Singh, these are myths of RL. He responds to each of them and argues that they are not true. There is much more information about reinforcement learning (myths, successes, and …) at The University of Michigan Reinforcement Learning Group.
Hello, my friend!!
I’m very happy with your comment on my blog, The Genetic Argonaut.
So far I haven’t tested what you suggested on my blog, because my first aim was to find the best solution I could. And I found a value very close to it: 38.9, while the optimum is 40. But thank you for the recommendation, and I’ll try Simulated Annealing soon.
Your blog is very cool. If you don’t mind, I’m adding it to my favourites.
[]´s
Nosophorus
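Since the comment above mentions plans to test simulated annealing after a hill-climbing-style search reached 38.9 against a known optimum of 40, here is a minimal sketch of the algorithm. The objective, starting point, and neighbor function are purely illustrative assumptions (the actual problem from The Genetic Argonaut is not specified here); the toy objective just echoes the "optimum is 40" figure.

```python
import math
import random

def simulated_annealing(objective, start, neighbor, steps=10_000,
                        t_start=1.0, t_end=1e-3):
    """Maximize `objective` by simulated annealing (generic sketch)."""
    current = start
    current_val = objective(current)
    best, best_val = current, current_val
    for i in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        candidate = neighbor(current)
        cand_val = objective(candidate)
        delta = cand_val - current_val
        # Always accept improvements; accept worse moves with
        # probability exp(delta / t) to escape local optima.
        if delta >= 0 or random.random() < math.exp(delta / t):
            current, current_val = candidate, cand_val
            if current_val > best_val:
                best, best_val = current, current_val
    return best, best_val

# Illustrative objective with a known maximum of 40 at x = 0.
f = lambda x: 40 - x * x
x, fx = simulated_annealing(
    f, start=5.0,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5))
print(round(fx, 1))
```

The key difference from a greedy search is the acceptance of occasional downhill moves early on (high temperature), which is what lets the method jump out of local optima like the 38.9 plateau described above.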
Thanks for your comment and for adding my weblog to your favorites! (; I hope you succeed in finding the optimum!
Your article,
“Behavior Hierarchy Learning in a Behavior-based System using Reinforcement Learning,”
was a good inspiration for my research.
Thank you for it.
I am happy that my article was inspiring for you, Lew. What is your research about? Behavior-based systems, or something else?