The sheer popularity and potential of modifying the 5 liter Mustang has given rise to an aftermarket that is overflowing with options. The world of forced induction is no exception. Adding a compressor is an appealing way for a 5.0 owner to gain a solid 100 hp or more on a stock engine, but the number of choices can be overwhelming for those who are new to the topic. To make the best decision about where to spend a rather large sum of money, it's important to understand what's out there and what it means for a particular application. This primer is intended to offer a basic technical overview of full time forced induction topics as they relate to the 5.0L EFI Mustang.
Quality vs. Quantity
Roughly speaking, for a given engine, the amount of power produced is going to be proportional to the amount of air (oxygen) that can be combusted. On the surface this would indicate that the larger the compressor the better. Unfortunately the onset of detonation becomes a limiting factor that occurs long before reaching the limits of how much air can be physically forced through an engine.
During a normal combustion event, the spark plug ignites the fuel mixture, creating a flame front which burns in a controlled fashion across the combustion chamber. The energy from the reaction is applied to the piston evenly (relatively speaking) throughout the power stroke. During detonation the fuel mixture burns in a more spontaneous manner, releasing the bulk of the energy over a much shorter time period. The result is a peak pressure on the piston and combustion chamber walls that, in an extreme case, can be over an order of magnitude higher than that produced during the normal combustion event. To put this in perspective, a 200 hp engine that is detonating badly can experience the same stress that a 2000 hp engine experiences during normal combustion. Those familiar with the limitations of the stock 5 liter block know that it will not last long under these conditions.
For a given fuel (octane), detonation is a first order function of local temperature within the combustion chamber. As a result the minimization of charge temperature is of primary importance in a high performance forced induction setup. When air is compressed before entering the combustion chamber it heats up. For a given amount of compression there is a thermodynamic minimum temperature rise associated with it. This temperature increase is given by
ΔTmin = T1((P2/P1)^0.283 - 1)
where T1 is the initial temperature in degrees Rankine, and P1 and P2 are the initial and final pressures in psia (absolute pressure, not gauge). The conversion from Fahrenheit to Rankine is R = F + 460, and from gauge to absolute pressure is psia = psig + 14.7.
The adiabatic efficiency of a compressor is a measure of how much it heats the compressed air with respect to this thermodynamic minimum. The formula for adiabatic efficiency is given by
AE = (ΔTmin/ΔTactual)*100%
where ΔTactual is the actual temperature rise measured at the compressor outlet. So a higher AE is desirable as it will allow more power to be made within the limits of detonation. Some ballpark peak efficiencies of importance to the 5 liter world:
Kenne Bell 1500 ~75-80%
Note the use of the term peak efficiency. In actuality AE is a function of pressure ratio and CFM. A peak efficiency number by itself does nothing to indicate the suitability of a compressor to a particular application. For this a compressor map is required. A compressor map is a plot of AE and compressor rpm over the entire operating space of the compressor. With this information in hand, the operating space of an engine can be overlaid to determine the suitability of a particular compressor/engine combination with regard to its intended application.
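To make the two formulas above concrete, here's a minimal sketch in Python. The 90°F inlet, 10 psi of boost, and 214.3°F outlet are illustrative numbers only; the key detail is converting to absolute pressure and degrees Rankine before applying the formulas:

```python
def delta_t_min(t1_f, boost_psig, ambient_psia=14.7):
    """Thermodynamic minimum temperature rise (deg F) for a given boost.

    The pressure ratio must be formed from absolute pressures (psia),
    so gauge boost is added to ambient pressure first.
    """
    t1_rankine = t1_f + 460.0                       # Fahrenheit -> Rankine
    pressure_ratio = (boost_psig + ambient_psia) / ambient_psia
    return t1_rankine * (pressure_ratio ** 0.283 - 1.0)

def adiabatic_efficiency(t1_f, boost_psig, t_outlet_f):
    """AE (%) computed from a measured compressor outlet temperature."""
    dt_actual = t_outlet_f - t1_f
    return 100.0 * delta_t_min(t1_f, boost_psig) / dt_actual

# Example: 90 F inlet air, 10 psi of boost, measured outlet of 214.3 F
print(round(delta_t_min(90.0, 10.0), 1))                  # -> 87.0
print(round(adiabatic_efficiency(90.0, 10.0, 214.3), 1))  # -> 70.0
```

At 70% AE the actual rise is 87.0/0.70, or about 124°F, which puts the outlet around 214°F on a 90°F day.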
Minimizing Charge Temperatures
OK, so now that we understand that charge temperature, through the onset of detonation, limits how much power we can produce, let's discuss two techniques for minimizing this problem.
An intercooler is nothing more than a heat exchanger that dissipates some portion of the heat in the compressed charge to the atmosphere before the charge enters the engine. Just like compressors, intercoolers are rated in terms of their efficiency
cooling efficiency = [(Tinlet - Toutlet)/(Tinlet - Tambient)]*100%
A good air to air intercooler will have an efficiency of at least 80%. This effect is not an insignificant one. Let's look at an example:
90° F ambient ----- 70% Adiabatic efficiency ----- 10 psi output
non-intercooled outlet temp= 214.3° F
80% efficient intercooler
intercooler outlet temp=114.9° F
On a 400hp car this is probably worth 50-80 hp in increased margin to detonation when properly compensated with additional boost.
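The intercooler step of the example can be sketched the same way. Note that the efficiency here is normalized by the inlet-to-ambient temperature difference, which reproduces the figures above:

```python
def intercooler_outlet(t_inlet_f, t_ambient_f, efficiency_pct):
    """Charge temperature (deg F) after the intercooler.

    cooling efficiency = (T_inlet - T_outlet) / (T_inlet - T_ambient),
    solved here for T_outlet.
    """
    eff = efficiency_pct / 100.0
    return t_inlet_f - eff * (t_inlet_f - t_ambient_f)

# 214.3 F compressor outlet, 90 F day, 80% efficient core
print(round(intercooler_outlet(214.3, 90.0, 80.0), 1))  # -> 114.9
```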
In a water injection setup a fine mist of water is sprayed directly into the intake charge in a manner analogous to a fuel injector. By taking advantage of the high latent heat of evaporation of water, the intake charge can be cooled considerably. Water injection has a very important advantage over conventional intercooling. The increase in charge temperature is a result of compression, and in a forced induction engine there are two sources of compression: the compressor and the engine piston. Only water injection can control the temperature increase associated with the static compression of the engine. For this reason it has benefits on both an intercooled forced induction car and a naturally aspirated car. The other advantages of water injection are that it is typically cheaper to implement than intercooling and can be easier to fit in a crowded engine bay. Its disadvantage is that the system must be reliable and water levels must be monitored closely to prevent potential engine damage.
An intercooler spray is an external nozzle (or nozzles) set up to spray water on the cooling surfaces of an intercooler. This helps increase the cooling efficiency of the intercooler and is most beneficial for turbos, since wastegate control (turbo rpm independent of engine rpm) allows the turbos to dynamically compensate for changes in density (pressure, actually) as the additional charge cooling is being employed.
The advantage of such a system is that a very crude (read: inexpensive) setup can be effectively employed. If water runs out at WOT the engine is not at risk, as it is in a water injection system that is being used to move beyond the normal point of detonation. The biggest drawback is that intercooler sprays can use A LOT of water and must be refilled quite frequently.
Boost is boost?
Let's talk a little bit about the number that shows up on the boost gauge. Most people use boost as the fundamental metric for making comparisons between compressors and setups. In reality you would be hard pressed to find a less meaningful basis for comparison. Instead of a lot of theory let's just look at a couple of real world examples to illustrate two of the more important effects.
Effect of charge temperature
So what is boost? Boost is just a pressure measurement, usually taken around the area of the intake manifold. But manifold pressure by itself does not determine power. Power is more directly related to the mass flow of oxygen into the combustion chamber. One problem with trying to equate boost with oxygen mass is that temperature is left uncontrolled. As charge temperatures increase so does the measured boost, and both can change independently of the mass transport of oxygen.
This effect is shown rather dramatically when comparing an intercooled to a non-intercooled setup. A Vortech s-trim producing a measured 8 psi on a stock 5.0 will generally add between 100-120 rwhp. The intercooled Incon twin turbo can add close to 200 rwhp on a stock 5.0 at the same 8 psi.
Of course part of that difference in rwhp can be attributed to the lower parasitic loss of the turbo but it still serves to illustrate the fact that manifold pressure is not a very good indicator of hp across setups.
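The density side of this comparison is simple ideal gas behavior: at the same manifold pressure, oxygen mass scales inversely with absolute charge temperature. A quick sketch (the 120°F and 250°F charge temperatures are assumed for illustration):

```python
def relative_oxygen_mass(temp_f, ref_temp_f):
    """Oxygen mass per cylinder fill at the same manifold pressure,
    relative to a hotter reference charge (ideal gas: density ~ 1/T)."""
    return (ref_temp_f + 460.0) / (temp_f + 460.0)

# Same 8 psi on the boost gauge, intercooled (120 F) vs. not (250 F):
print(round(relative_oxygen_mass(120.0, 250.0), 2))  # -> 1.22
```

So the cooler charge carries roughly 22% more oxygen at an identical boost reading.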
Effect of engine pumping capacity/efficiency
Another important point to realize is that boost is a measure of the dynamic equilibrium pressure between two pumps. Just as increasing the output of a compressor will increase manifold pressure, increasing the capacity of the engine (displacement) or volumetric efficiency (heads, intake, etc.) will result in a reduction of the manifold pressure.
For example, let's put a supercharger on a bone stock 5.0 and pulley it to produce 12 psi at the manifold. What happens to the measured boost when we do nothing more than add a set of high flowing heads and an intake? Boost decreases. Now if boost were your only metric for comparing the two setups you might be tempted to say that less power was being produced with the upgraded induction path. In practice we know that this isn't the case. What you have is an inverse relationship between manifold pressure and cylinder pressure. High boost at the manifold can result from a restrictive intake path that doesn't allow much air to reach the cylinder where it is combusted to produce power. If the restrictions in the intake path are reduced then a greater amount of the air backed up in the intake is permitted to make its way into the combustion chamber. The result is increased hp and a reduction in pressure at the manifold.
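This inverse relationship can be sketched with a simple speed-density model. All numbers below (mass flow, volumetric efficiencies, charge temperature) are illustrative assumptions, not measurements:

```python
R_AIR = 53.35  # specific gas constant for air, ft*lbf/(lbm*R)

def manifold_pressure_psia(mass_flow_lb_min, ve, disp_ci, rpm, charge_temp_f):
    """Manifold pressure required to push a fixed mass flow through the engine.

    Speed-density model: mass flow = density * VE * (displacement * rpm / 2).
    Solve for density, then convert to pressure with the ideal gas law.
    """
    cfm = ve * (disp_ci / 1728.0) * (rpm / 2.0)    # engine pumping volume, ft^3/min
    density = mass_flow_lb_min / cfm               # lbm/ft^3 needed in the manifold
    t_rankine = charge_temp_f + 460.0
    return density * R_AIR * t_rankine / 144.0     # lbf/ft^2 -> psi

# Same compressor output (50 lb/min), stock vs. ported heads on a 302 at 5500 rpm
stock = manifold_pressure_psia(50.0, 0.80, 302, 5500, 150.0)
ported = manifold_pressure_psia(50.0, 0.95, 302, 5500, 150.0)
print(round(stock - 14.7, 1), round(ported - 14.7, 1))  # boost drops as VE rises
```

With the better-flowing heads the same mass flow, and hence roughly the same power, shows up as several psi less on the gauge.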
How much boost can my engine take?
Here's a popular question that trips people up. Hopefully, after having read the above, your first reaction was that there's no clear-cut answer. The question becomes even more confusing when you throw in detonation. Let's consider the question from two standpoints: 1) the mechanical limitation of the engine without detonation and 2) the onset of detonation.
In the first case, if detonation is not an issue then what determines how much power you can produce? The answer is the mechanical strength of the block. Hp is a good first order indicator of the stress being produced. Hence most mechanical engine parts are rated in terms of how much hp they can withstand. As we have seen above there is no set correlation between boost and hp making the question difficult and generally misleading to try to answer.
In the second case the hp produced is well below the mechanical limits of the engine, but the amount of boost that can be run is limited by the onset of detonation. In this case you are primarily concerned with charge temperature and the octane rating of the fuel being used. Again, the fundamental problem experienced in trying to answer such a question is that there is not a set relationship between boost and the relevant parameters.
The moral of the story is that boost is simply not a good number by which to gauge a forced induction setup. A much better question would be "How much hp can my engine support?" or "How much hp will this compressor be able to produce without detonation on pump gas on my particular engine?"
So what are parasitic losses? Parasitic losses are the amount of hp lost by the engine to drive the compressor. In other words it's hp that would go to the rear wheels if the compressor wasn't present. It's also the fundamental advantage of a turbocharger over a supercharger. Measuring the hp required to drive a supercharger is a fairly straightforward process. With a turbocharger things become more involved especially if changes to the cam, exhaust, etc. are also made in a truly optimized turbo setup. Without getting bogged down in a lot of detail let's say that conservatively a turbo will only impose about 10-15% of the parasitic loss of an equivalent supercharger. For an otherwise stock 5.0 this will be in the neighborhood of 20-30 extra rwhp for a turbo. For a 7 second strip car the hp advantage for a turbo will easily be in the triple figures.
Boost threshold refers to the engine rpm at which full boost is available at WOT. Boost response is how the maximum boost changes over the entire operating rpm of the engine. These characteristics affect the powerband of a forced induction setup as well as the general driving characteristics. Let's define the two basic types of compressors:
1) Positive Displacement - this type of compressor is defined by boost production that is independent of operating rpm. In other words the compressor will move the same amount of air per compressor revolution independent of how fast the compressor is spinning.
2) Dynamic - for this type of compressor, boost is a function of operating rpm. The faster the compressor is spinning the more boost it will produce per revolution.
Some specific examples:
1) Kenne Bell 1500 - this is a positive displacement blower with a very low boost threshold of about 2000 rpm. Above the boost threshold, boost is constant all the way to the maximum operating speed of the blower. The result is a very big-block-like feel.
2) Vortech s-trim - all centrifugal blowers are of the dynamic variety. Since the compressor's rpm is mechanically linked to the engine rpm, what results is a rising boost curve all the way to redline, in effect making the engine redline the boost threshold. The result is a car with a more gradual onset of power, maximized for top end.
3) Turbo - a turbo is a centrifugal compressor with one extremely important difference: its operating speed, and consequently boost, may be controlled independently of engine rpm via a wastegate, which regulates the flow of exhaust gases over the drive turbine. What this means is that a boost response similar to a positive displacement compressor can be obtained. In fact, with the addition of an electronic boost controller, the boost response can be programmed as a function of rpm.
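The three response types can be sketched numerically. The flat curve above the threshold and the square-law rise for a crank-driven centrifugal are idealizations assumed for illustration, not measured curves:

```python
def pd_boost(rpm, max_boost=9.0, threshold=2000):
    """Positive displacement: boost is flat once past the boost threshold."""
    return max_boost if rpm >= threshold else max_boost * rpm / threshold

def centrifugal_boost(rpm, max_boost=9.0, redline=6000):
    """Crank-driven centrifugal: boost rises roughly with the square of
    impeller speed, reaching its maximum only at redline."""
    return max_boost * (rpm / redline) ** 2

def turbo_boost(rpm, max_boost=9.0, spool_rpm=2800):
    """Wastegated turbo: once spooled, the wastegate holds a flat target."""
    return max_boost if rpm >= spool_rpm else max_boost * (rpm / spool_rpm) ** 2

for rpm in (2000, 3000, 4000, 5000, 6000):
    print(rpm, round(pd_boost(rpm), 1),
          round(centrifugal_boost(rpm), 1), round(turbo_boost(rpm), 1))
```

Note how the centrifugal makes only a fraction of its peak boost through the midrange, while the other two are already at their maximum.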
There is an important distinction to be made between how much hp a compressor will support and how much hp a compressor will produce on a particular application. Hp capacity relates to how much air the compressor can physically move at its maximum operating rpm. Recall that the amount of boost that may be run is limited by detonation, not by compressor capacity. For instance, a Vortech s-trim can support around 600hp. This does not mean that it will produce 600hp on an otherwise stock engine. Unless a compressor is too small for a particular application, detonation will always limit how much air can be forced through an engine. For a given fuel this limit is primarily a function of charge temperature.
This brings us to compressor sizing. As alluded to earlier, the adiabatic efficiency of a compressor is a function of how much air it is flowing and its particular pressure ratio (boost). As it turns out, a compressor has a relatively small optimum operating range in which the AEs are maximized. Improperly matching a compressor's size to its application can result in one of two situations:
1) If a compressor is too small for a particular engine, the compressor will act as a restriction, effectively choking the engine and producing less power than it is capable of within its detonation limits. Example: A Kenne Bell 1500 will only support around 450 rwhp. If you were to swap a KB in place of a larger blower on a combo producing 600 hp, the hp would fall off dramatically.
2) If a compressor is too large for a particular application it will be forced to operate far out of its optimum operating range, resulting in higher discharge temps and less margin to detonation than a properly sized blower. Example: Replacing a Vortech s-trim on a 400 hp combo with a t-trim will result in hotter output temps and likely greater parasitic losses.
There are a couple of areas in which optimizing an engine for naturally aspirated power and forced induction power are at complete odds with one another. Static (engine) compression ratio is one of those. Most musclecar enthusiasts are readily aware of the fact that increasing the compression ratio of a naturally aspirated engine results in more power per unit of fuel combusted. The amount of engine compression that can be run is limited by the fuel octane that will be used.
Forced induction engines are fundamentally different in that two sources of compression are available: the compressor and the engine. Effective compression is the term often used to describe the impact of the external compressor on the combustion efficiency.
So how does a lower static compression benefit a forced induction engine? Let's look at an example. Compare these two situations:
1) On a stock 5.0 with ~9:1 static compression it is found that a non intercooled supercharger can run a maximum of 9psi of boost without detonation. The effective compression is ~14.5:1 and 320 peak rwhp is produced.
2) Same engine/supercharger combination but with low compression 8:1 pistons installed. It is found that a maximum of 13 psi can be run without detonation. Effective compression is again ~14.5:1 but significantly more air is allowed to enter the combustion chamber. Peak rwhp is now 380. In both cases the same octane fuel is used.
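The effective compression figures in these two cases follow from a common rule of thumb: scale the static ratio by the absolute pressure ratio. This is only one of several approximations (it gives roughly 14.5:1 for the first case and roughly 15:1 for the second), but it captures the tradeoff:

```python
def effective_compression(static_cr, boost_psig, ambient_psia=14.7):
    """Rule-of-thumb effective compression: static ratio scaled by the
    absolute pressure ratio. Illustrative approximation only."""
    return static_cr * (boost_psig + ambient_psia) / ambient_psia

print(round(effective_compression(9.0, 9.0), 1))   # stock 9:1 pistons at 9 psi -> 14.5
print(round(effective_compression(8.0, 13.0), 1))  # 8:1 pistons at 13 psi -> 15.1
```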
So a static compression change in a forced induction setup, when compensated for with additional boost, is analogous to adding displacement in a naturally aspirated engine. Sounds easy enough, right? So what is to stop us from running 7:1 or even 6:1 static compression? Well, there are a couple of considerations.
First is the size of the compressor. As static compression decreases, the flow requirements of the compressor increase. You don't want to go so low on the static compression that the flow requirements exceed the capacity of the compressor. This is primarily a concern only if a compressor upgrade is not in the budget. In other words the static compression must be matched to the compressor being used and the fuel octane being run.
The more fundamental limitation comes about as a result of the boost threshold. Since the compressor is being relied upon to maintain the effective compression at its optimum ratio, below the rpm at which maximum boost is available the effective compression will drop, along with some power and throttle response.
With this in mind some compressor types are better suited to taking advantage of a low compression engine.
Perhaps the best is the 2.2L Kenne Bell (Blowzilla). With full boost available at 2000rpm and plenty of capacity there is almost no downside to running as low a static compression as the blower can support. Centrifugals are at the other end of the spectrum. With the boost threshold at redline there is a clear tradeoff of some low end for top end power.
Low compression/high boost combinations are a staple of the fast import cars. This is the primary reason the fastest turbo 4 and 6 cylinder cars typically put out a higher hp/displacement ratio than 5.0 liter Mustangs. I've seen a 4 cylinder car running as low as 6:1 compression and 45psi of boost.
The problem with this technique is that, for small displacement cars that are inherently weak on low end torque, sacrificing some of this low end torque can lead to a car that is unpleasant to drive around town. On the other hand, this is where an engine like a pushrod 5 liter V8 has a huge fundamental advantage. Anyone who has been in a 400 hp Mustang can tell you that low end torque is not a problem, even with a worst case centrifugal supercharger setup. Trading some of the excess low end for large gains in top end power is almost a freebie. It is also one of the most beneficial and underutilized techniques for making power in the 5.0 world.