Is the leftmost bit of the mantissa always 1?
I am reading the book "The Secret Life of Programs". In the first chapter there is an explanation of a trick invented by Digital Equipment Corporation (DEC) that doubles the accuracy: "throwing away the leftmost bit of the mantissa doubles the accuracy, since we know that it will always be 1, which makes room for one more bit."
I can't understand this. As an example, let's consider a 4-bit floating-point representation with 2 bits of mantissa and 2 bits of exponent. The binary number 01.11 then represents 4.0. The leftmost bit of the mantissa is 0, not 1.
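To make my example concrete, here is a small sketch of how I am decoding that 4-bit pattern. This is only my own interpretation, not code from the book: I read the two mantissa bits as the fixed-point value b1.b2 and the two exponent bits as an unsigned power of two, and the name `decode_toy` is made up.

```python
def decode_toy(mantissa_bits: str, exponent_bits: str) -> float:
    """Decode my toy 4-bit format: two mantissa bits read as the
    fixed-point value b1.b2, two exponent bits read as an unsigned
    power of two. This is only my reading of the book's example."""
    b1, b2 = int(mantissa_bits[0]), int(mantissa_bits[1])
    mantissa = b1 + b2 / 2              # "01" -> 0.5, "10" -> 1.0, "11" -> 1.5
    exponent = int(exponent_bits, 2)    # "11" -> 3
    return mantissa * 2 ** exponent

print(decode_toy("01", "11"))  # 4.0, and the leftmost mantissa bit is 0
print(decode_toy("10", "10"))  # also 4.0, this time with a leading mantissa bit of 1
```

So in this little format the same value 4.0 can apparently be written with a leading mantissa bit of either 0 or 1, which is where I get lost.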
Can anyone explain, with a simple example, what is meant by "throwing away the leftmost bit" and "doubling accuracy"?
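For reference, here is how I currently understand the hidden-bit idea in standard IEEE 754 single precision (a rough sketch of my own, not the book's or DEC's code; `decode_float32` is just a name I chose). The 23 stored fraction bits never include the leading 1 of a normalized significand, so it has to be added back when decoding:

```python
import struct

def decode_float32(x: float):
    """Unpack an IEEE 754 single-precision float and rebuild its value,
    adding back the implicit (hidden) leading 1 of the significand.
    Only handles normal numbers, not zero/subnormals/inf/NaN."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    biased_exponent = (bits >> 23) & 0xFF   # 8 exponent bits, bias 127
    fraction = bits & 0x7FFFFF              # 23 mantissa bits, leading 1 not stored
    significand = 1 + fraction / 2**23      # the hidden 1 is put back here
    value = (-1) ** sign * significand * 2.0 ** (biased_exponent - 127)
    return fraction, significand, value

fraction, significand, value = decode_float32(6.5)
print(f"{fraction:023b}")   # 10100000000000000000000  (no leading 1 stored)
print(significand)          # 1.625
print(value)                # 6.5 == 1.625 * 2**2
```

Is this the same trick the book attributes to DEC, and if so, how does it square with my 4-bit example above, where the leading mantissa bit can be 0?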
