Visual odometry and mapping are likely to be essential components in future ADAS and autonomous driving systems. In particular, visual odometry can provide localization when GPS estimates are lost or degraded, and can provide localization relative to the surrounding environment for purposes of navigation and hazard avoidance. Feature-based visual odometry algorithms extract distinct corner points from the scene and track them over time in order to maintain an estimate of ego-motion. From prior work, it is known that odometry can fail depending on scene content. Tracking is lost when too few of the detected points contribute to tracking, because the remaining points are false matches that appear as outliers to the motion estimator. Excluding these unreliable corners in advance can increase the robustness of visual odometry algorithms, particularly under challenging visual conditions caused by weather, time of day, and the nature of the driving environment. This paper investigates the effect of scene content on visual odometry. A scene content classifier is used as a first step to analyze the scene and identify the image regions that produce the most reliable image corners. The classifier is based on structural features computed over image regions, and it classifies the image into Random, Textured, and Transient tiles. Results show that corners detected in Random tiles are least likely to contribute to tracking, while corners in Textured and Transient tiles are more reliable. Excluding the unreliable image regions in advance of motion estimation yields a more robust algorithm capable of maintaining accurate motion estimation even with few tracked corners. The approach is evaluated using the widely used KITTI dataset (a project by the Karlsruhe Institute of Technology and the Toyota Technological Institute).
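The corner-filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tile size, the label grid, and the `filter_corners` helper are all hypothetical, and the scene content classifier that produces the Random/Textured/Transient labels is assumed to have already run.

```python
import numpy as np

TILE = 32  # hypothetical tile size in pixels (not specified here)


def filter_corners(corners, tile_labels, tile=TILE):
    """Keep only corners that fall in Textured or Transient tiles.

    corners     : (N, 2) array-like of (x, y) pixel coordinates
    tile_labels : 2-D array of strings, tile_labels[row, col] in
                  {"Random", "Textured", "Transient"}, as output by
                  some scene content classifier (assumed given)
    """
    kept = []
    for x, y in corners:
        row, col = int(y) // tile, int(x) // tile
        # Random tiles are the least likely to contribute to tracking,
        # so corners falling in them are discarded before motion estimation.
        if tile_labels[row, col] != "Random":
            kept.append((x, y))
    return np.asarray(kept)


# Toy example: a 2x2 grid of 32-px tiles with mixed labels.
labels = np.array([["Random", "Textured"],
                   ["Transient", "Random"]])
corners = [(10, 10), (40, 10), (10, 40), (40, 40)]
reliable = filter_corners(corners, labels)
# Only the corners in the Textured and Transient tiles survive.
```

The surviving corners would then be passed to the matcher and motion estimator, so that false matches from unreliable regions never reach the outlier-rejection stage.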