The University of New South Wales

Dynamic Voltage & Frequency Scaling

DVFS is a technique whereby a CPU's operating frequency is reduced in order to reduce its power consumption. When a CPU's frequency is reduced, its supply voltage can also be reduced (since the transistors do not need to switch as quickly). Reducing the supply voltage can lead to a reduction in the energy used by some parts of the CPU for a given workload.
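The conventional CMOS argument behind this can be sketched as follows. All constants here are illustrative assumptions, not measurements from PLEB 2:

```python
# Sketch of the conventional CMOS dynamic-power model.
#   P_dyn = C * V^2 * f          (switching power)
#   E_dyn = P_dyn * t, and t = cycles / f for CPU-bound work,
# so E_dyn = C * V^2 * cycles: for a fixed cycle count, lowering the
# voltage (enabled by lowering the frequency) reduces the dynamic
# energy quadratically.

def dynamic_energy(capacitance, voltage, cycles):
    """Dynamic switching energy (J) for a fixed number of CPU cycles."""
    return capacitance * voltage ** 2 * cycles

C = 1e-9        # effective switched capacitance (F), assumed
CYCLES = 100e6  # amount of work, in cycles

e_high = dynamic_energy(C, 1.3, CYCLES)  # high frequency, high voltage
e_low = dynamic_energy(C, 0.9, CYCLES)   # low frequency, low voltage
print(e_low / e_high)  # ≈ (0.9/1.3)^2 ≈ 0.48
```

Note that this idealised model is exactly what the rest of the article puts to the test against real measurements.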

It is common for published research on DVFS techniques to ignore the frequency-independent component of the power consumption, both internal and external to the CPU. This leads to misleading results, since the effect of execution time on the total energy used is underestimated. It is also common for research to ignore the CPU-frequency-independent component of the execution time (i.e. to assume a constant number of cycles across all CPU frequencies).
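A minimal sketch of a model that keeps both of these commonly dropped terms (all constants are illustrative assumptions, not measured values):

```python
# Energy model retaining the terms the naive analysis drops:
#   p_static - frequency-independent power (leakage, memory, I/O)
#   t_mem    - frequency-independent part of the execution time
#              (e.g. memory stalls that do not scale with f_cpu)

def total_energy(c_eff, voltage, f_cpu, cpu_cycles, t_mem, p_static):
    t = cpu_cycles / f_cpu + t_mem         # runtime does NOT scale as 1/f
    p_dyn = c_eff * voltage ** 2 * f_cpu   # dynamic CPU power
    return (p_dyn + p_static) * t

# At the lower setpoint the dynamic term shrinks, but the static power
# is paid for longer, so total energy can *rise* as f is reduced.
slow = total_energy(1e-9, 0.9, 200e6, 100e6, t_mem=0.1, p_static=1.0)
fast = total_energy(1e-9, 1.3, 400e6, 100e6, t_mem=0.1, p_static=1.0)
```

With a sizeable static component, `fast` comes out below `slow`, illustrating why ignoring frequency-independent power biases published results towards low frequencies.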

We have conducted a number of experiments using our instrumented hardware platform, PLEB 2, which allows us to measure the CPU-core, memory and I/O power consumption independently. Our experiments involve running the MiBench benchmark suite, representative of embedded multimedia systems, at a number of different frequency operating points. These operating points vary both the CPU core frequency and the on-chip bus frequency.

Looking only at the CPU energy, the traditional model appears very roughly correct: CPU energy is approximately proportional to the square of the voltage (on which the maximum frequency depends). The graph below shows our measurements for the various MiBench benchmarks. Note that the first and last operating points behave slightly differently from the middle three; this is because of the different f_intbus values.

CPU Energy vs. Setpoint

The memory power (in contrast to the energy shown in the earlier graph) is shown below. The applications that execute a large number of memory operations clearly stand out. The memory has a constant baseline power consumption and consumes more power when accessed (i.e. the average power is higher for memory-bound applications, as shown below).

Memory power vs. Setpoint

Note that the above shows the time-independent, average power. When we examine how much energy the above benchmarks consumed over their complete execution time, a different story unfolds. The execution time (for a non-memory-bound application) decreases as the CPU frequency increases. This shorter running time results in a lower overall energy.
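The distinction between average power and total energy can be made concrete with a small sketch (numbers illustrative, not measured):

```python
# Memory energy = average memory power * execution time.
# For a non-memory-bound application the memory power is roughly
# constant across setpoints, but a higher CPU frequency shortens
# the run, so the memory *energy* falls even though the *power*
# does not.

P_MEM = 0.05  # average memory power (W), assumed roughly constant

# Runtime (s) at two hypothetical CPU frequencies for CPU-bound work
runtime = {200e6: 2.0, 400e6: 1.0}

mem_energy = {f: P_MEM * t for f, t in runtime.items()}
# Doubling the frequency halves the memory energy here.
```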

Memory energy vs. Setpoint

When combined with the CPU energy, the total energy for each benchmark run is shown below.

Total energy vs. Setpoint

This data suggests that running at the highest frequency consumes the least energy, a counter-intuitive result. However, this assumes that the system is fully loaded, with no idle time (or that time spent idle costs zero power). This is rarely the case. To give a more realistic idea of the energy savings available via dynamic adjustment of the frequency and voltage, the idle power must be considered as well. For each of the benchmarks run in these experiments, the longest running time was measured. Any time saved by running at a higher frequency was treated as idle time, during which the processor was assumed to draw the measured idle power. The graph below shows these results. Interestingly, the lowest-energy operating point is clearly dependent on the workload.
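The idle-padding method described above can be sketched as follows. The powers and runtimes here are hypothetical placeholders, not the PLEB 2 measurements:

```python
# Pad every run out to the longest runtime, charging the saved time
# at the measured idle power.

P_IDLE = 0.2  # measured idle power (W), assumed here

def padded_energy(p_active, t_run, t_max, p_idle=P_IDLE):
    """Energy when time saved relative to t_max is spent idling."""
    return p_active * t_run + p_idle * (t_max - t_run)

# (average active power W, runtime s) at three hypothetical setpoints
runs = {"slow": (0.6, 2.0), "mid": (0.9, 1.4), "fast": (1.4, 1.0)}
t_max = max(t for _, t in runs.values())

padded = {k: padded_energy(p, t, t_max) for k, (p, t) in runs.items()}
```

Whether a slow or fast setpoint wins under this accounting depends on the relative magnitudes of active and idle power, which is why the lowest-energy operating point varies with the workload.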

Total energy, padded for idle time, vs. Setpoint

In reality, the amount of time spent idle will be dependent on the specific system being implemented. In some systems, it will be possible to eliminate idle time altogether by entering a deep sleep mode. In others, periodic tasks will make the overhead of entering deep sleep too large, and idling will be necessary.
