Traffic light detection is of great significance for unmanned vehicles and driver assistance systems, and many detection algorithms have been proposed in recent years. However, traffic light detection still cannot achieve desirable results under complicated illumination, bad weather conditions, and complex road environments. Moreover, it is difficult for embedded devices to detect traffic lights at multiple scales simultaneously, especially tiny ones. To solve these problems, this paper presents a robust vision-based traffic light detection method consisting of two main stages: a region proposal stage and a traffic light recognition stage. In the region proposal stage, we utilize lane detection to remove part of the background from the original images. Then, we apply adaptive Canny edge detection to highlight region proposals in the Cr color channel, where red and green proposals can be separated easily. Finally, we extract the enlarged traffic light RoIs (Regions of Interest) for classification. In the recognition stage, a tiny but effective convolutional neural network (CNN), named TLRNet, classifies each traffic light RoI into its own class. Because deep learning (DL) detectors often perform poorly on small objects in many fields, we use the region proposal stage to obtain RoIs and CNN classification to achieve a good result. We validate our method both on the Laboratory for Intelligent and Safe Automobiles (LISA) Traffic Lights Dataset and on video sequences captured on Beijing's streets. The experimental results show that the proposed method achieves good results for multi-scale traffic lights on the TX1 embedded platform and reaches real-time performance at 28 fps.