Impact of Operating Parameters on Ignition System Energy Consumption (SAE 2014-01-1233)
The use of cooled EGR in gasoline engines improves fuel efficiency through a variety of mechanisms, including improved charge properties (e.g. a higher ratio of specific heats), reduced knock that enables higher compression ratio operation, and, at part-load conditions in particular, reduced pumping work. One of the limiting factors on the level of improvement from cooled EGR is the ability of the ignition system to ignite a dilute mixture and maintain engine stability. Previous work from SwRI has shown that, by increasing the ignition duration and using a continuous-discharge ignition system, an improved ignition system can substantially increase the EGR tolerance of an engine [1, 2]. This improvement comes at the cost, however, of increased ignition system energy requirements and a potential decrease in spark plug durability.
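The specific-heat-ratio mechanism mentioned above can be illustrated with the ideal Otto-cycle efficiency relation, eta = 1 - r^(1 - gamma). This is a textbook idealization, not an analysis from the paper; the gamma and compression-ratio values below are illustrative assumptions only.

```python
def otto_efficiency(r, gamma):
    """Ideal Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

# Diluting the charge with cooled EGR shifts the mixture's effective ratio of
# specific heats (gamma) upward, toward that of air (~1.40) and away from that
# of hotter, richer burned gas (~1.30), raising the ideal-cycle efficiency.
for gamma in (1.30, 1.35, 1.40):
    print(f"gamma={gamma:.2f}: eta={otto_efficiency(10.0, gamma):.3f}")
```

At an assumed compression ratio of 10, the sweep shows efficiency rising from roughly 0.50 to 0.60 as gamma increases from 1.30 to 1.40, which is the directional effect the abstract attributes to improved charge properties.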
This work examines the impact of engine operating parameters on ignition energy requirements under highly dilute operation. The goal was to define the boundaries of the operating map that require high-energy ignition and to identify the regions where lower ignition energy can be employed. To accomplish this, the engine was run at the three part-load conditions from the previous work as well as at two wide-open-throttle (WOT) conditions. At these steady-state conditions, EGR sweeps were performed at varying ignition energy levels to define the impact of ignition energy on EGR tolerance and combustion performance. The results show that the ignition energy requirement varies with engine speed and load and that there are many conditions where it can be decreased. A brief additional task examined the effect of ignition energy under different in-cylinder flow fields; as predicted, increased charge motion reduced the ignition energy requirement.
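A common way to quantify the EGR tolerance probed by such sweeps is the coefficient of variation (COV) of IMEP across many cycles, with the dilution limit taken as the highest EGR rate that stays under a stability threshold (often around 3%). The sketch below assumes that convention; the function names, the 3% limit, and the cycle data in the usage example are illustrative assumptions, not values from the paper.

```python
import statistics

def cov_imep(imep_cycles):
    """Cycle-to-cycle combustion stability: COV of IMEP in percent."""
    return 100.0 * statistics.stdev(imep_cycles) / statistics.mean(imep_cycles)

def egr_tolerance(sweep, limit_pct=3.0):
    """Highest EGR rate (%) whose cycles stay within the COV limit.

    sweep: list of (egr_rate_pct, imep_cycles) pairs in increasing EGR order.
    Returns None if even the lowest EGR point exceeds the limit.
    """
    tolerated = None
    for egr_rate, cycles in sweep:
        if cov_imep(cycles) <= limit_pct:
            tolerated = egr_rate
        else:
            break  # stability lost; higher dilution points are not tolerated
    return tolerated

# Illustrative sweep: stable cycles at 10% EGR, unstable cycles at 20% EGR.
sweep = [(10.0, [3.00, 3.01, 2.99, 3.00]), (20.0, [2.6, 3.4, 3.0, 2.5])]
print(egr_tolerance(sweep))
```

Repeating this evaluation at each ignition energy level yields the kind of EGR-tolerance-versus-ignition-energy comparison the abstract describes.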