This paper develops a vision-based driver assistance system for scene awareness using video frames obtained from a dashboard camera. A saliency image map is devised from features pertinent to the driving scene. This saliency map mimics human contour- and motion-sensitive visual perception by extracting spatial, spectral, and temporal information from the input frames and applying entropy-driven image-context-feature data fusion. The resulting fusion output comprises high-level descriptors of still segment boundaries and non-stationary object appearance. Following the segmentation and foreground object detection stage, an adaptive maximum-likelihood classifier selects road surface regions. The proposed scene-driven vision system improves the driver’s situational awareness by enabling adaptive road surface classification. Experimental results demonstrate that context-aware low-level to high-level information fusion based on a human vision model produces superior segmentation, tracking, and classification results, leading to a high-level abstraction of the driving scene.