About Virtual Machines
What are virtual machines?
A virtual machine is defined as an efficient, isolated duplicate of a real machine. This means it should be possible to run software written for the real machine, including operating systems, as a guest on the virtual machine, with behaviour identical to running it on a real machine, except for:
- fewer resources being available on the virtual machine than on the physical machine
- some unavoidable timing differences (evident when dealing with devices).
The isolation property implies that several virtual machines can be run concurrently without interference.
The efficiency requirement means that execution speed should be close to that of real hardware, which in turn requires that most instructions be executed directly by the hardware.
The virtual machine typically consists of a privileged part running directly on the hardware, which traps and interprets privileged instructions executed by the guest system running inside the virtual machine. That privileged part is called the virtual-machine monitor (VMM) or hypervisor. The VMM controls resources and schedules guests (if there are multiple guests).
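The trap-and-emulate structure described above can be sketched in a few lines of Python. The instruction names and the guest program below are invented for illustration; on real hardware the CPU itself raises the trap on a privileged instruction, rather than software inspecting each one:

```python
# Toy trap-and-emulate loop. Privileged instructions are diverted to the
# VMM's handler, which updates the guest's *virtual* state only; everything
# else is "executed directly" (here: interpreted inline).

PRIVILEGED = {"cli", "sti"}   # hypothetical privileged instructions

class Guest:
    def __init__(self):
        self.virtual_irq_enabled = True  # guest-visible interrupt flag
        self.r0 = 0                      # one general-purpose register

def emulate_privileged(guest, op):
    """The VMM's trap handler: touch virtual state, never real hardware."""
    if op == "cli":
        guest.virtual_irq_enabled = False
    elif op == "sti":
        guest.virtual_irq_enabled = True

def run(guest, program):
    for op, arg in program:
        if op in PRIVILEGED:       # real hardware would trap to the VMM here
            emulate_privileged(guest, op)
        elif op == "mov":          # unprivileged: runs without VMM involvement
            guest.r0 = arg
    return guest

g = run(Guest(), [("mov", 7), ("cli", None)])
# afterwards g.r0 == 7 and the guest *believes* interrupts are disabled
```

The key point the sketch illustrates is that the guest's attempt to disable interrupts never reaches the physical interrupt flag; the VMM records it in per-guest virtual state, preserving isolation between guests.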
What are virtual machines not?
Virtual machines in this sense are similar to simulators and emulators. The difference is that they are faster than simulators (due to native execution of most code), and more accurate than emulators (which only provide a behavioural model of the hardware). Unlike simulators or emulators, virtual machines can normally only execute on the hardware they are virtualising (or at least hardware that is very close to what is virtualised).
The kind of virtual machine we are concerned with here always virtualises some (almost) real hardware at the level of the instruction-set architecture (ISA). This is different from so-called process virtual machines, such as Java virtual machines.
Why virtual machines?
Typically virtual machines are used to run a complete operating system (or, especially in the embedded space, some other bare-metal code) on top of another system, rather than on bare hardware. The reasons for doing this include:
- Server consolidation
- The administration, air-conditioning and power costs for a single large machine may be smaller than for many mid-range or smaller machines. But each department that used to own its own machine wants a similar degree of control in the consolidated system, including the ability to configure the system to its needs. Solution: give each such workload its own virtual machine, and use a hypervisor that partitions resources between the virtual machines.
- Resource isolation
- It is often desirable to run certain applications in a fully isolated environment, so that they can neither interfere with other applications nor be interfered with. A virtual machine provides such strong isolation.
Yes, this is really the operating system's job, but most modern operating systems are surprisingly bad at resource isolation. The current boom in virtual machines is witness to the fact that contemporary OSes fail at this core responsibility.
- A virtual machine can provide a quick and easy vehicle for experimenting with kernel features, giving many of the advantages of running on a simulator at much better performance. This approach (like the two below) is particularly attractive with hosted virtual machines (shown in the diagram at the right), which run as user programs on top of a normal operating system.
- Virtual machines can provide high-performance debugging environments which can exhibit properties not available on real hardware (including the ability to go backwards in time).
- Different hardware
- The virtualised system need not be identical to the host system. One can emulate different amounts of memory, different numbers of processors, different I/O subsystems, slightly different instruction sets, etc., etc.
There are a number of different techniques for making virtualization work.