Human action recognition is fundamental to human activity and behavior analysis, especially in video analysis technologies. In this paper, we present an improvement to the human action recognition method proposed by P. Chawalitsittikul et al. Actions captured from multiple RGBD views, recorded by cameras at different static viewpoints overlooking an overlapping Area of Interest, are fused at the decision level. Our empirical fusion model is derived from the recognition performance observed at various viewpoints: front, slant, side, back-slant, and back. The results show that the fusion model significantly improves accuracy using only one additional camera.
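The decision-level fusion described above can be sketched as an accuracy-weighted vote across viewpoints. This is a minimal illustrative sketch, not the paper's actual model: the weights and the `fuse_decisions` helper are hypothetical, and the accuracy values are placeholders rather than the paper's reported numbers.

```python
# Sketch of decision-level fusion across camera viewpoints:
# each camera's predicted label votes with a weight proportional
# to that viewpoint's empirical recognition accuracy.
from collections import defaultdict

# Assumed per-viewpoint accuracies (illustrative placeholders only).
VIEW_WEIGHTS = {"front": 0.90, "slant": 0.85, "side": 0.80,
                "back-slant": 0.75, "back": 0.70}

def fuse_decisions(predictions):
    """predictions: dict mapping viewpoint name -> predicted action label.
    Returns the label with the highest accuracy-weighted vote."""
    scores = defaultdict(float)
    for view, label in predictions.items():
        scores[label] += VIEW_WEIGHTS.get(view, 0.0)
    return max(scores, key=scores.get)

# Example: when two cameras disagree, the more reliable viewpoint wins.
print(fuse_decisions({"front": "walk", "side": "run"}))  # -> walk
```

With only one extra camera, such a weighted vote lets a reliable viewpoint override a weaker one, which is consistent with the abstract's claim that a single additional view already yields a significant accuracy gain.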