A variety of portable and wearable navigation systems mounted on smart glasses and smartphones have been developed over the last decade to assist visually impaired people. In these systems, collision detection is a key component. Many conventional monocular-vision methods estimate the collision risk from the motion of obstacles in the image, measuring the size change of objects using detected feature points and their corresponding motion vectors. However, this size change is sometimes measured incorrectly because of unreliable feature points and motion vectors. To overcome this problem, we present a motion clustering scheme that removes outliers from both the feature points and the motion vectors. Experimental results indicate that the proposed collision detection method outperforms the conventional one in terms of detection rate and false positive rate.
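The outlier-removal idea described above can be illustrated with a simplified sketch (this is not the paper's actual algorithm; the function names, the 1-D greedy clustering rule, and the tolerance value are illustrative assumptions). Size change is estimated from pairwise distances between matched feature points across two frames; ratios produced by unreliable matches fall outside the dominant cluster and are discarded before averaging.

```python
import math
from itertools import combinations

def dominant_cluster(values, tol=0.05):
    # 1-D greedy clustering (illustrative): sort the values, split wherever
    # the gap between consecutive values exceeds tol, keep the largest group.
    vals = sorted(values)
    clusters, cur = [], [vals[0]]
    for v in vals[1:]:
        if v - cur[-1] <= tol:
            cur.append(v)
        else:
            clusters.append(cur)
            cur = [v]
    clusters.append(cur)
    return max(clusters, key=len)

def size_change_ratio(prev_pts, curr_pts, tol=0.05):
    # Estimate the object's scale change between frames. Each pair of
    # matched feature points yields one distance ratio; ratios from
    # mismatched (outlier) features are rejected by keeping only the
    # dominant cluster, then the surviving ratios are averaged.
    ratios = []
    for i, j in combinations(range(len(prev_pts)), 2):
        d_prev = math.dist(prev_pts[i], prev_pts[j])
        d_curr = math.dist(curr_pts[i], curr_pts[j])
        if d_prev > 1e-6:
            ratios.append(d_curr / d_prev)
    inliers = dominant_cluster(ratios, tol)
    return sum(inliers) / len(inliers)

# Four points expanding by a factor of 1.2 about (120, 120), i.e. an
# approaching obstacle, plus one feature with a bogus motion vector.
prev = [(100, 100), (140, 100), (100, 140), (140, 140), (160, 160)]
curr = [(96, 96), (144, 96), (96, 144), (144, 144), (120, 120)]
print(size_change_ratio(prev, curr))  # close to 1.2 despite the outlier
```

A ratio greater than 1 indicates an approaching object; with frame interval Δt, the classic time-to-collision approximation is Δt / (ratio − 1), which could feed a risk threshold.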