Intelligent driving, aimed at collision avoidance and self-navigation, relies mainly on environmental sensing via radar, lidar, and/or cameras. While each sensor has its own unique pros and cons, cameras are especially good at object detection, recognition, and tracking. However, unpredictable environmental illumination can cause missed or false detections. To investigate the influence of illumination conditions on detection algorithms, we reproduced various illumination intensities in a photo-realistic virtual world, which leverages recent progress in computer graphics, and evaluated vehicle detection performance there. In the virtual world, the environmental illumination is controlled precisely from low to high to simulate different illumination conditions in driving scenarios (with relative luminous intensity ranging from 0.01 to 400). Sedan cars of different colors were modelled in the virtual world and used for the detection task. Faster R-CNN and You Only Look Once (YOLO), object detection neural networks with high accuracy and efficiency, were chosen for the experiments. Results show that: (1) a vehicle under too high an illumination condition can hardly be detected; (2) as the illumination intensity is adjusted from 0.01 to 400, the detection confidences of red and blue cars are higher than those of other colors, and their detection confidence deviations are also small, which means they are robust to variations in illumination. This work can provide insights not only into future autonomous vehicle design, but also into future on-board camera design.
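The mechanism behind finding (1) can be illustrated with a minimal sketch. The abstract's experiments run inside a rendered virtual world, but the effect of an illumination factor on an 8-bit image can be approximated by scaling pixel intensities and clipping: at very high factors the image saturates and image contrast collapses, leaving a detector little structure to work with. The function names, the toy scene, and the use of standard deviation as a contrast proxy below are illustrative assumptions, not part of the paper's method.

```python
import numpy as np

def apply_illumination(img, intensity):
    """Scale pixel values by a relative illumination factor and clip to the
    8-bit range; a simple stand-in (assumption) for the virtual world's
    precise lighting control."""
    return np.clip(img.astype(np.float64) * intensity, 0, 255).astype(np.uint8)

def contrast(img):
    """Standard deviation of pixel values: a crude proxy for how much
    structure a detector could still see in the image."""
    return float(img.std())

# A toy "scene": a mid-gray background with a brighter "vehicle" patch.
scene = np.full((64, 64), 100, dtype=np.uint8)
scene[20:40, 20:40] = 180

low = apply_illumination(scene, 0.01)   # severe underexposure
mid = apply_illumination(scene, 1.0)    # nominal lighting
high = apply_illumination(scene, 400)   # severe overexposure

# Under extreme illumination the pixels saturate and contrast collapses,
# consistent with the finding that over-lit vehicles are hardly detectable.
print(contrast(low), contrast(mid), contrast(high))
```

At a factor of 400 every pixel clips to 255 and the contrast proxy drops to zero, which is the intuition behind why detection fails under too-strong illumination; a real experiment would instead render the scene and feed the frames to Faster R-CNN or YOLO.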