Real-time and reliable perception of the surrounding environment is an important prerequisite for advanced driver assistance systems (ADAS) and automated driving, and vision-based detection plays a significant role in environment perception for autonomous vehicles. Although deep convolutional neural networks enable efficient recognition of many common objects, they still have difficulty accurately detecting special vehicles, rocks, road piles, construction sites, fences, and similar obstacles. In this work, we address the task of traffic scene understanding with semantic image segmentation: both the drivable area and the class of each object can be obtained from the segmentation result. First, we define 29 classes of objects in traffic scenarios with distinct labels and modify the DeepLab V2 network. Then, to reduce the running time, the MobileNet architecture is applied to generate the feature maps in place of the original backbone. After that, the Cityscapes dataset, which focuses on semantic understanding of urban street scenes, is used to train the network with the modified labels. Finally, we test the network and measure its performance. For comparison, VGG-16 and ResNet-101 backbones are also tested with the same DeepLab V2 network. The MobileNet model attains performance similar to ResNet-101 while requiring far fewer operations and much less time; compared with VGG-16, the MobileNet architecture is both more accurate and more efficient. Using lightweight mobile models reduces the computational cost and enables on-device applications of semantic segmentation for traffic scene understanding.
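MobileNet's efficiency gain over standard backbones comes from replacing full convolutions with depthwise separable convolutions (a depthwise spatial filter followed by a 1x1 pointwise convolution). A minimal sketch of the multiply-add counts illustrates why the MobileNet backbone needs far fewer operations; the layer sizes below (3x3 kernel, 256 channels, 64x64 feature map) are illustrative assumptions, not figures from this work.

```python
def conv_madds(k, m, n, f):
    """Multiply-adds for a standard k x k convolution with
    m input channels, n output channels, on an f x f feature map."""
    return k * k * m * n * f * f

def depthwise_separable_madds(k, m, n, f):
    """Multiply-adds for a depthwise k x k convolution over m channels
    plus a 1x1 pointwise convolution mapping m -> n channels."""
    return k * k * m * f * f + m * n * f * f

# Hypothetical layer: 3x3 kernel, 256 -> 256 channels, 64x64 feature map.
std = conv_madds(3, 256, 256, 64)
sep = depthwise_separable_madds(3, 256, 256, 64)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

For these sizes the separable form is roughly 8.7x cheaper, matching the theoretical reduction factor 1/n + 1/k^2; this per-layer saving is what makes the MobileNet-based segmentation network suitable for on-device use.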