Intelligent control of a sensor-actuator system via kernelized least-squares policy iteration.

Liu B, Chen S, Li S, Liang Y - Sensors (Basel) (2012)

Bottom Line: Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation or coordinate sampling. The theoretical foundation underlying this approach is a fast approximation of Singular Value Decomposition (SVD). Finally, simulation results are presented on benchmark MDP domains, confirming gains both in computation time and in performance in large feature spaces.


Affiliation: Key Laboratory of Visual Media Processing and Transmission, Shenzhen Institute of Information Technology, Shenzhen, Guangdong 518029, China. boliu@cs.umass.edu

ABSTRACT
In this paper, a new framework called Compressive Kernelized Reinforcement Learning (CKRL) is proposed for computing near-optimal policies in sequential decision making under uncertainty, by combining non-adaptive, data-independent Random Projections with nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation or coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of Singular Value Decomposition (SVD). Finally, simulation results are presented on benchmark MDP domains, confirming gains both in computation time and in performance in large feature spaces.
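As a rough illustration of the projection step described in the abstract, the following sketch (not the authors' code; the feature matrix, target dimension, and seed are hypothetical) maps a high-dimensional feature matrix onto a random low-dimensional subspace using a data-independent, spherically random (Gaussian) projection matrix.

```python
# Minimal sketch of the random-projection step, assuming a Gaussian
# (Johnson-Lindenstrauss-style) projection; not the paper's implementation.
import numpy as np

def random_projection(Phi, d, seed=None):
    """Project an (n_samples x D) feature matrix onto d random directions."""
    rng = np.random.default_rng(seed)
    D = Phi.shape[1]
    # Spherically random projection matrix, scaled so that squared
    # distances are preserved in expectation.
    Omega = rng.standard_normal((D, d)) / np.sqrt(d)
    return Phi @ Omega

# Hypothetical example: compress 10,000-dimensional features to 50 dimensions.
Phi = np.random.rand(200, 10_000)
Phi_low = random_projection(Phi, d=50, seed=0)
print(Phi_low.shape)  # (200, 50)
```

Because the projection matrix is data-independent, it can be drawn once and applied to all sampled features, which is what makes the reduction non-adaptive and fast relative to adaptive methods such as exact SVD.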

No MeSH data available.


Figure 1 (f1-sensors-12-02632): Illustration of Kernelized Bellman Error Decomposition.

Mentions: We now introduce the Bellman error of the kernelized value function, which is the one-step temporal-difference error:

(22) BE(\hat{V}) = R + \gamma P\hat{V} - \hat{V},  KBE(\hat{V}) = R + \gamma P\mathcal{K}w - \mathcal{K}w

According to [15], the Bellman error can be decomposed into two parts that are not orthogonal to each other, the reward error and the transition error, which reflect the approximation errors of the reward vector R and the transition matrix P, respectively. A geometric illustration is given in Figure 1, which is the kernelized version of Figure 1 in [20]. The Bellman error decomposition is as follows:

(23) BE(\hat{V}) = \Delta_R + \gamma \Delta_\Phi w_\Phi,  KBE(\hat{V}) = \Delta_R + \gamma \Delta_{\mathcal{K}'} w
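To make the decomposition in Eq. (23) concrete, the sketch below (a minimal illustration, not the paper's code; the random MDP, the Gaussian kernel features, and the discount factor are assumptions made for the example) builds a small MDP, solves for the least-squares fixed-point weights w, and checks numerically that the Bellman error of V̂ = 𝒦w equals the reward error plus γ times the transition error applied to w.

```python
# Numerical check of the Bellman-error decomposition on a toy MDP.
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 30, 0.9

# Random MDP: row-stochastic transition matrix P and reward vector R.
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
R = rng.random(n)

# Kernel feature matrix K: Gaussian kernels centered at 10 of the states.
states = rng.random((n, 2))
d2 = ((states[:, None, :] - states[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 0.5)[:, :10]

# Least-squares fixed-point weights: K^T (K - gamma P K) w = K^T R.
w = np.linalg.solve(K.T @ (K - gamma * P @ K), K.T @ R)

# Bellman error of the kernelized value function V_hat = K w  (Eq. 22).
BE = R + gamma * P @ K @ w - K @ w

# Reward error Delta_R and transition error Delta_K: residuals of
# projecting R and P K onto the span of the kernel features.
proj = K @ np.linalg.pinv(K)            # orthogonal projection onto span(K)
Delta_R = R - proj @ R
Delta_K = P @ K - proj @ (P @ K)

# At the fixed point the decomposition holds: BE = Delta_R + gamma Delta_K w.
print(np.allclose(BE, Delta_R + gamma * Delta_K @ w))  # True
```

At the fixed point the component of the Bellman error lying in the span of the features vanishes, so only the two residual terms remain; they are each orthogonal to that span but, as the text notes, not orthogonal to each other.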

