Computers cannot exactly represent many fractions, just as we have trouble representing some fractions as decimals. For example, writing 1 1/3 as a decimal gives 1.33333, with the 3s repeating forever. Instead, computers represent real numbers (numbers with a fractional part) using *floating point* numbers.
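A quick way to see this in practice is the classic 0.1 example. The short Python sketch below shows that 0.1, like 1/3 in decimal, has no exact binary representation:

```python
# 0.1 has no finite binary expansion, so the stored float is only an
# approximation, just as 1.33333 only approximates 1 1/3.
print(f"{0.1:.20f}")     # shows the approximation error past ~17 digits
print(0.1 + 0.2 == 0.3)  # False: the tiny errors accumulate
```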

Floating point numbers use three essential components: a sign, an exponent and a mantissa. See diagram:

Floating point numbers are usually stored as 32 bits: 23 for the mantissa, 8 for the exponent and 1 for the sign. Due to the complexity of working with floating point numbers, computers now pair their microprocessors with coprocessors called 'Floating-point Units', or FPUs. The IEEE (Institute of Electrical and Electronics Engineers) produced the standard for this system, now known as IEEE 754, which defines how floating point numbers are converted and stored. For example, we can convert the number 418.125 to a 32 bit floating point number as follows:

Firstly, convert the number to binary:

418.125 = 110100010.001

Then normalise the binary number:

110100010.001 becomes 1.10100010001 x 2^8

Find the sign, mantissa and exponent:

Sign = 0 (therefore the number is positive)

Mantissa = 10100010001 (padded with trailing zeros to fill the 23 bits)

Exponent = 127 + 8 = 135, which is 10000111 in binary (the stored exponent is offset by a bias of 127)

Therefore the 32 bit number is equal to 0 10000111 10100010001000000000000
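As a sanity check, the worked example above can be reproduced in Python with the standard `struct` module (the function name `float_to_fields` is just for illustration):

```python
import struct

def float_to_fields(x):
    # Pack x as an IEEE 754 single precision float (big-endian),
    # reinterpret the 4 bytes as an unsigned integer, then slice
    # the 32-bit string into the sign, exponent and mantissa fields.
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    bits = f"{n:032b}"
    return bits[0], bits[1:9], bits[9:]

print(float_to_fields(418.125))
# ('0', '10000111', '10100010001000000000000')
```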

Floating point numbers sacrifice accuracy for range. A greater range of numbers can be represented, but the accuracy of larger numbers is not always sufficient, because the mantissa only holds 23 bits of precision. Because of this, double precision floating point numbers were created. They use 64 bits (52 for the mantissa, 11 for the exponent and 1 for the sign), allowing large numbers to be represented far more accurately. However, this in turn can slow things down, as double the bits must be processed.
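One way to see this precision limit is to round-trip a number through single precision. The sketch below shows the 23-bit mantissa running out at 2^24 + 1, a value a 64-bit double still stores exactly:

```python
import struct

x = 16777217.0  # 2**24 + 1: one more than a 23-bit mantissa can count to
single = struct.unpack(">f", struct.pack(">f", x))[0]
print(single)       # 16777216.0 -- the final 1 was rounded away
print(x == single)  # False

double = struct.unpack(">d", struct.pack(">d", x))[0]
print(x == double)  # True: 64 bits keep the value exact
```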

Because it is blatantly obvious that you do not want to do these conversions by hand, you can visit this site to gain a greater, simpler understanding of the conversion process.