Real numbers and numerical precision


== Introduction ==

An important aspect of computational physics is the numerical precision involved. To design a good algorithm, one needs to have a basic understanding of propagation of inaccuracies and errors involved in calculations. There is no magic recipe for dealing with underflow, overflow, accumulation of errors and loss of precision, and only a careful analysis of the functions involved can save one from serious problems.

Since we are interested in the precision of numerical calculations, we need to understand how computers represent real and integer numbers. Most computers deal with real numbers in the binary system (or in octal and hexadecimal), in contrast to the decimal system that we humans prefer to use. The binary system uses 2 as its base, in much the same way that the decimal system uses 10. Since the typical computer communicates with us in the decimal system but works internally in, e.g., the binary system, conversion procedures must be executed by the computer, and these conversions hopefully involve only small roundoff errors.

Computers are also unable to operate on real numbers expressed with more than a fixed number of digits, and the set of representable values is only a subset of the mathematical integers or real numbers. The so-called word length that we reserve for a given number places a restriction on the precision with which that number is represented. This means, in turn, that floating-point numbers are always rounded to a machine-dependent precision, typically with 6-15 leading digits to the right of the decimal point. Furthermore, each such set of values has a processor-dependent smallest negative and largest positive value. Why should we care at all about rounding and machine precision?

== Example: Loss of precision in subtracting nearly equal numbers ==

Assume that we can represent a floating-point number with a precision of only 5 digits to the right of the decimal point. This is nothing but a mere choice of ours, but it mimics the way numbers are represented in the machine. Then we try to evaluate the function

<math> f(x) = \frac{1-\cos x}{\sin x} </math>

for small values of <math> x </math>. Note that we can also rewrite this expression by multiplying the numerator and denominator by <math> 1+\cos x </math> to obtain the equivalent expression

<math> f(x) = \frac{\sin x}{1+\cos x} </math>.

If we now choose <math> x = 0.007 </math> (in radians), our choice of precision results in

<math> \sin(0.007) \approx 0.69999 \times 10^{-2} </math>,

and

<math> \cos(0.007) \approx 0.99998 </math>. The first expression for <math> f(x) </math> results in

<math> f(x) = \frac{1-0.99998}{0.69999\times 10^{-2}} = \frac{0.2\times 10^{-4}}{0.69999\times 10^{-2}} = 0.28572\times 10^{-2}, </math>

while the second expression results in

<math> f(x) = \frac{0.69999\times 10^{-2}}{1+0.99998} = \frac{0.69999\times 10^{-2}}{1.99998} = 0.35000\times 10^{-2}, </math>

which is also the exact result. In the first expression, due to our choice of precision, we have only one relevant digit in the numerator after the subtraction. This leads to a loss of precision and a wrong result due to a cancellation of two nearly equal numbers. If we had chosen a precision of six leading digits, both expressions would yield the same answer. If we were to evaluate <math> x \approx \pi </math>, then the second expression for <math> f(x) </math> could lead to potential losses of precision due to cancellations of nearly equal numbers, since <math> 1 + \cos x \approx 0 </math> near <math> x = \pi </math>.
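The same cancellation can be seen directly in single-precision arithmetic. The following minimal C++ sketch (an illustration added here, not part of the original example; the argument <math> x = 10^{-4} </math> is an arbitrary choice) compares the two algebraically equivalent expressions:

 // Evaluate f(x) = (1 - cos x)/sin x and the reformulated sin x/(1 + cos x)
 // in single precision; the first form loses all significant digits for small x.
 #include <cstdio>
 #include <cmath>
 
 int main() {
     float x = 1.0e-4f;                                    // small argument
     float f1 = (1.0f - std::cos(x)) / std::sin(x);        // cancellation in 1 - cos x
     float f2 = std::sin(x) / (1.0f + std::cos(x));        // equivalent form, no cancellation
     double ref = std::sin((double) x) / (1.0 + std::cos((double) x));  // double-precision reference
     std::printf("naive:        %.8e\n", f1);              // prints 0: all digits lost
     std::printf("reformulated: %.8e\n", f2);              // close to 5.0e-05
     std::printf("reference:    %.8e\n", ref);
     return 0;
 }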

This simple example demonstrates the loss of numerical precision due to roundoff errors, where leading digits are lost in a subtraction of two nearly equal numbers. The lesson to be drawn is that we cannot blindly compute a function; we will always need to analyze our algorithm carefully in the search for potential pitfalls. There is no magic recipe, however: the only guideline is an understanding of the fact that a machine cannot represent all numbers correctly.

== Example: Subtleties in solving a quadratic equation ==

Solve the quadratic equation

<math> ax^2 + bx + c = 0, </math>

where <math> a = 1 </math>, <math> b = -12.4 </math>, and <math> c = 0.494 </math>, by evaluating the quadratic formula using three-digit decimal arithmetic and unbiased rounding. The exact roots rounded to 6 digits are 0.0399675 and 12.3600.
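Carrying out the three-digit computation step by step shows where the damage occurs; each intermediate result below is rounded to three significant digits:

<math> b^2 = 153.76 \rightarrow 154, \qquad 4ac = 1.976 \rightarrow 1.98, \qquad b^2 - 4ac = 152.02 \rightarrow 152, \qquad \sqrt{152} = 12.328\ldots \rightarrow 12.3, </math>

<math> x_1 = \frac{12.4 + 12.3}{2} = 12.35 \rightarrow 12.4, \qquad x_2 = \frac{12.4 - 12.3}{2} = 0.0500. </math>

The larger root, 12.4, is as accurate as three digits allow, but the smaller root carries a relative error of about 25% (0.0500 versus 0.0399675): the subtraction <math> 12.4 - 12.3 </math> has left a single significant digit.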

This example illustrates an important numerical problem called ''cancellation'' or ''loss of significance'', which manifests itself when subtracting values of nearly equal magnitude. Cancellation occurs when the digits necessary to define the difference accurately have been discarded by rounding in previous calculations, due to the finite precision of machine arithmetic. Problems arise when this difference is an intermediate result which must be used to complete the calculation: most of the significant digits that remain after rounding are eliminated by the subtraction.

To complicate the situation, the digits that become significant after subtraction may be accurate to only a few places due to previous rounding errors in the two values being subtracted. For example, suppose that a calculation on a computer with 8-digit decimal arithmetic produces two intermediate values that agree in their first six significant figures, each correct to only 6 significant figures, with the last two digits incorrect due to rounding errors in previous calculations. The computed difference of the two numbers then consists entirely of their unreliable trailing digits. On the assumption that the last two digits are incorrect due to previous rounding errors, this difference contains no correct figures, all of which have been brought to significance by the subtraction.

Such severe cancellation can usually be ''eliminated by algebraic reformulation''. In the case of the quadratic equation, the cancellation observed in the previous exercise results from the subtraction performed between <math> -b </math> and <math> \sqrt{b^2-4ac} </math>. This cancellation occurs when <math> 4ac </math> is small relative to <math> b^2 </math>, so that <math> \sqrt{b^2-4ac} \approx b </math>. This problem may be resolved by calculating the larger root (in absolute value) using the quadratic formula and obtaining the smaller root by another means. The larger root (in absolute value) can be obtained from the quadratic formula by choosing the sign of <math> \sqrt{b^2-4ac} </math> so that no subtraction occurs. The smaller root (in absolute value) can be obtained by observing that the product of the roots of a quadratic equation must equal the constant term divided by the leading coefficient. So, for a general quadratic equation <math> ax^2 + bx + c = 0 </math>, the product of the roots is <math> r_1 r_2 = c/a </math>. Thus, the second root may be obtained by division, circumventing the cancellation problem discussed above.
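A minimal C++ sketch of this reformulated algorithm (an illustration of the idea described above, not a transcription of the course's Matlab routines; it assumes real roots and <math> a \neq 0 </math>):

 #include <cstdio>
 #include <cmath>
 
 // Solve a*x^2 + b*x + c = 0, assuming b^2 >= 4ac and a != 0.
 void quad_roots(double a, double b, double c, double &r1, double &r2) {
     double sq = std::sqrt(b * b - 4.0 * a * c);
     // Choose the sign so that -b and the square root are added, never subtracted.
     double q = (b >= 0.0) ? -0.5 * (b + sq) : -0.5 * (b - sq);
     r1 = q / a;   // larger root in absolute value, from the quadratic formula
     r2 = c / q;   // smaller root, from the product of roots r1 * r2 = c/a
 }
 
 int main() {
     double r1, r2;
     quad_roots(1.0, -12.4, 0.494, r1, r2);
     std::printf("r1 = %.6f  r2 = %.7f\n", r1, r2);  // roots near 12.3600 and 0.0399675
     return 0;
 }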

The two different algorithms to solve a quadratic equation are encoded in the functions [[Media:gquad_solver.m|gquad_solver.m]] and [[Media:bquad_solver.m|bquad_solver.m]] on the [[Code Examples]] page.

== Representation of real numbers in digital computers: Single vs. double precision ==

Real numbers are stored with a decimal precision (or mantissa) and the decimal exponent range. The mantissa contains the significant figures of the number (and thereby the precision of the number). A number like <math> (9.90625)_{10} </math> in the decimal representation is given in a binary representation by:

<math> (1001.11101)_2 = 1 \times 2^3 + 0 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 + 1 \times 2^{-1} + 1 \times 2^{-2} + 1 \times 2^{-3} + 0 \times 2^{-4} + 1 \times 2^{-5} </math>

and it has an exact machine number representation since we need only a finite number of bits to represent this number. This representation is, however, not very practical. Rather, we prefer to use a scientific notation. In the decimal system we would write a number like 9.90625 in what is called the normalized scientific notation. This means simply that the decimal point is shifted and appropriate powers of 10 are supplied. Our number could then be written as

<math> 9.90625 = 0.990625 \times 10^{1}, </math>

and a real non-zero number could be generalized as

<math> x = \pm r \times 10^{n}, </math>

with <math> r </math> a number in the range <math> 1/10 \le r < 1 </math>. In a similar way we can represent a binary number in scientific notation as

<math> x = \pm q \times 2^{m}, </math>

with <math> q </math> a number in the range <math> 1/2 \le q < 1 </math>. This means that the mantissa of a binary number would be represented by the general formula

<math> (0.a_{-1}a_{-2}\dots a_{-n})_2 = a_{-1}\times 2^{-1} + a_{-2}\times 2^{-2} + \dots + a_{-n}\times 2^{-n}. </math>
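As a quick aside (an illustration added here, not part of the original notes), the C-style <code>%a</code> format prints the exact binary mantissa and exponent of a floating-point value, confirming that 9.90625 has an exact representation:

 #include <cstdio>
 
 int main() {
     // 9.90625 = (1001.11101)_2 = (1.00111101)_2 x 2^3; %a prints the
     // hexadecimal-fraction form 0x1.3dp+3, where 0x3d encodes the bits 00111101.
     std::printf("%a\n", 9.90625);
     return 0;
 }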

In a typical computer, floating-point numbers are represented in the way described above, but with certain restrictions on <math> q </math> and <math> m </math> imposed by the available word length. In the machine, our number <math> x </math> is represented as

<math> x = (-1)^{s} \times \mathrm{mantissa} \times 2^{\mathrm{exponent}}, </math>

where <math> s </math> is the sign bit, and the exponent gives the available range. With a single-precision word of 32 bits, 8 bits would typically be reserved for the exponent, 1 bit for the sign and 23 for the mantissa. This means that if we declare a variable size_of_cube as

 Fortran: REAL(4) :: size_of_cube
 C++:     float size_of_cube;

we are reserving 4 bytes in memory, with 8 bits for the exponent, 1 for the sign, and 23 bits for the mantissa, implying a numerical precision to the sixth or seventh digit, since the least significant digit is given by <math> 2^{-23} \approx 10^{-7} </math>. The range of the exponent goes from <math> 2^{-128} = 2.9\times 10^{-39} </math> to <math> 2^{127} = 3.4\times 10^{38} </math>, where 128 stems from the fact that 8 bits are reserved for the exponent.

A modification of the scientific notation for binary numbers is to require that the leading binary digit 1 appears to the left of the binary point. In this case the representation of the mantissa would be <math> (1.f)_2 </math> with <math> 1 \le (1.f)_2 < 2 </math>. This form is rather useful when storing binary numbers in a computer word, since we can always assume that the leading bit 1 is there. One bit of space can then be saved, meaning that a 23-bit mantissa effectively has 24 bits. This means explicitly that a binary number with 23 bits for the mantissa reads

<math> (1.a_{-1}a_{-2}\dots a_{-23})_2 \times 2^{n}. </math>

As an example, consider the 32-bit binary number

<math> (10111110111101000000000000000000)_2, </math>

where the first bit is reserved for the sign, 1 in this case yielding a negative sign. The exponent <math> m </math> is given by the next 8 binary numbers, <math> 01111101 </math>, resulting in 125 in the decimal system. However, since the exponent has eight bits, it has <math> 2^8 - 1 = 255 </math> possible values in the interval <math> -127 \le m \le 127 </math>; our final exponent is then <math> 125 - 127 = -2 </math>, resulting in a factor <math> 2^{-2} </math>. Inserting the sign and the mantissa yields the final number in the decimal representation as:

<math> -2^{-2} \left(1 \times 2^0 + 1\times 2^{-1} + 1\times 2^{-2} + 1 \times 2^{-3} + 0\times 2^{-4} + 1\times 2^{-5} \right) = (-0.4765625)_{10} </math>

In this case we have an exact machine representation with 32 bits (actually, we need fewer than 23 bits for the mantissa). If our number <math> x </math> can be exactly represented in the machine, we call such an <math> x </math> a ''machine number''.
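The 32-bit example above is easy to verify in code. The following C++ sketch (an illustration added here, not part of the original notes) reinterprets the bit pattern as an IEEE single-precision float and unpacks its fields:

 #include <cstdio>
 #include <cstdint>
 #include <cstring>
 
 int main() {
     uint32_t bits = 0b10111110111101000000000000000000u;  // sign | exponent | mantissa
     float x;
     std::memcpy(&x, &bits, sizeof x);        // reinterpret the bit pattern as a float
     unsigned sign = bits >> 31;              // 1 sign bit
     unsigned expo = (bits >> 23) & 0xFFu;    // 8 exponent bits, biased by 127
     unsigned frac = bits & 0x7FFFFFu;        // 23 mantissa bits (hidden leading 1)
     std::printf("sign=%u  exponent=%u (unbiased %d)  fraction=0x%06X\n",
                 sign, expo, (int) expo - 127, frac);
     std::printf("value = %.7f\n", x);        // prints -0.4765625
     return 0;
 }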

Unfortunately, most numbers cannot be represented exactly and are thereby only approximated in the machine. When such a number occurs as the result of reading some input data or of a computation, an inevitable error will arise in representing it as accurately as possible by a machine number. A floating-point number <math> x </math>, labelled <math> fl(x) </math>, will therefore always be represented as

<math> fl(x) = x (1 \pm \varepsilon_x), </math>

with <math> x </math> the exact number and the error <math> |\varepsilon_x| \le |\varepsilon_M| </math>, where <math> \varepsilon_M </math> is the assigned precision. A number like <math> 1/10 </math> has no exact binary representation with single or double precision. Since the mantissa

<math> \left(1.a_{-1}a_{-2}\dots a_{-n}\right)_2 </math>

is always truncated at some stage, due to its limited number of bits, there is only a limited number of real binary numbers. The spacing between any two neighbouring real binary numbers is set by the chosen machine precision. For a 32-bit word this number is approximately <math> \varepsilon_M \sim 10^{-7} </math>, and for double precision (64 bits) we have <math> \varepsilon_M \sim 10^{-16} </math>, or in terms of a binary base, <math> 2^{-23} </math> and <math> 2^{-52} </math> for single and double precision, respectively.
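These spacings are easy to confirm. A short C++ sketch (an illustration added here, not part of the original notes) prints the machine precision for single and double precision and shows that <math> 1/10 </math> is stored inexactly:

 #include <cstdio>
 #include <limits>
 
 int main() {
     // Machine epsilon: 2^-23 ~ 1.2e-07 for float, 2^-52 ~ 2.2e-16 for double.
     std::printf("float  epsilon: %.8e\n", (double) std::numeric_limits<float>::epsilon());
     std::printf("double epsilon: %.16e\n", std::numeric_limits<double>::epsilon());
     // fl(0.1) differs from 1/10 from the 17th decimal place onwards:
     std::printf("0.1 as double:  %.20f\n", 0.1);  // 0.10000000000000000555...
     return 0;
 }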