A reward optimization method based on action subrewards in hierarchical reinforcement learning.

Fu Y, Liu Q, Ling X, Cui Z - ScientificWorldJournal (2014)

Bottom Line: Applied to online learning in the game of Tetris, the experiments show that convergence speed is markedly improved by the new method, which combines a hierarchical reinforcement learning algorithm with action subrewards. The "curse of dimensionality" problem is also alleviated to a certain extent by the hierarchical method. Performance under different parameter settings is compared and analyzed as well.

View Article: PubMed Central - PubMed

Affiliation: Suzhou Industrial Park Institute of Services Outsourcing, Suzhou, Jiangsu 215123, China; School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China.

ABSTRACT
Reinforcement learning (RL) is a kind of interactive learning method whose main characteristics are "trial and error" and "delayed reward." A hierarchical reinforcement learning method based on action subrewards is proposed to address the "curse of dimensionality" (the state space grows exponentially with the number of features) and the slow convergence that results from it. The method greatly reduces the state space and chooses actions purposefully and efficiently, so as to optimize the reward function and speed up convergence. Applied to online learning in the game of Tetris, the experimental results show that convergence speed is markedly improved by the new method, which combines a hierarchical reinforcement learning algorithm with action subrewards. The "curse of dimensionality" problem is also alleviated to a certain extent by the hierarchical method. Performance under different parameter settings is compared and analyzed as well.
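The abstract does not reproduce the reward definition itself. As an illustration of the action-subreward idea, the sketch below scores a single Tetris placement with per-aspect subrewards and combines them into one scalar reward. The feature set (lines, holes, stack height) and any weights are hypothetical choices for illustration, not values taken from the paper.

    # Illustrative sketch only: hypothetical Tetris features and weights,
    # not the reward definition used in the paper.

    def count_holes(board):
        """Count empty cells lying below a filled cell in the same column.
        `board` is a list of rows (top to bottom), each a list of 0/1 cells."""
        holes = 0
        for col in range(len(board[0])):
            seen_block = False
            for row in range(len(board)):
                if board[row][col]:
                    seen_block = True
                elif seen_block:
                    holes += 1
        return holes

    def max_height(board):
        """Height of the tallest occupied column."""
        for row in range(len(board)):
            if any(board[row]):
                return len(board) - row
        return 0

    def action_subrewards(before, after, lines_cleared):
        """Per-aspect subrewards for one piece placement."""
        return {
            "lines":  float(lines_cleared),                        # reward cleared lines
            "holes":  -(count_holes(after) - count_holes(before)), # punish new holes
            "height": -(max_height(after) - max_height(before)),   # punish stack growth
        }

    def shaped_reward(subs, weights):
        """Weighted sum of action subrewards -> one scalar reward."""
        return sum(weights[k] * subs[k] for k in subs)

A placement that clears one line while opening a new hole would then be scored as, e.g., shaped_reward(action_subrewards(b0, b1, 1), {"lines": 1.0, "holes": 0.5, "height": 0.2}); tuning such weights is one natural reading of the parameter comparison mentioned above.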


Mentions: On the basis of the theory discussed above, the framework of the reward function optimization algorithm is given in Algorithm 1 (available only as a figure in the original article).
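Since Algorithm 1 is not reproduced in this text, the sketch below is a rough, non-authoritative stand-in: a generic epsilon-greedy tabular Q-learning loop driven by a subreward-shaped reward, reusing the shaped_reward helper from the earlier sketch. The environment interface (env.reset, env.step, env.legal_actions) and all hyperparameters are assumptions, not the authors' specification.

    # Generic Q-learning loop; an assumed stand-in, NOT the paper's Algorithm 1.
    # Assumes env.reset() -> state, env.step(a) -> (next_state, subs, done)
    # where `subs` is a dict of action subrewards, and env.legal_actions(state)
    # -> list of actions. States must be hashable.
    import random
    from collections import defaultdict

    def q_learning_with_subrewards(env, weights, episodes=500,
                                   alpha=0.1, gamma=0.95, eps=0.1):
        Q = defaultdict(float)
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                actions = env.legal_actions(state)
                if random.random() < eps:                  # explore
                    action = random.choice(actions)
                else:                                      # exploit
                    action = max(actions, key=lambda a: Q[(state, a)])
                next_state, subs, done = env.step(action)
                reward = shaped_reward(subs, weights)      # combine subrewards
                best_next = 0.0 if done else max(
                    (Q[(next_state, a)] for a in env.legal_actions(next_state)),
                    default=0.0)
                # One-step temporal-difference update toward reward + discounted max
                Q[(state, action)] += alpha * (reward + gamma * best_next
                                               - Q[(state, action)])
                state = next_state
        return Q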

