Real numbers and numerical precision

From phys660

Revision as of 23:46, 19 February 2012

Introduction

An important aspect of computational physics is the numerical precision involved. To design a good algorithm, one needs to have a basic understanding of propagation of inaccuracies and errors involved in calculations. There is no magic recipe for dealing with underflow, overflow, accumulation of errors and loss of precision, and only a careful analysis of the functions involved can save one from serious problems.

Since we are interested in the precision of numerical calculations, we need to understand how computers represent real and integer numbers. Most computers deal with real numbers in the binary system (or in octal and hexadecimal), in contrast to the decimal system that we humans prefer to use. The binary system uses 2 as its base, in much the same way that the decimal system uses 10. Since the typical computer communicates with us in the decimal system but works internally in, e.g., the binary system, conversion procedures must be executed by the computer, and these conversions should ideally involve only small roundoff errors.
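As a small illustration (a Python sketch added here, not part of the original notes): the decimal fraction 0.1 has no finite binary expansion, so the decimal-to-binary conversion alone already introduces a roundoff error.

```python
from decimal import Decimal

# The decimal fraction 0.1 has no finite binary expansion, so the
# stored 64-bit float is only the nearest machine number to 1/10.
print(Decimal(0.1))       # 0.1000000000000000055511151231257827021181583404541015625
print(0.1 + 0.2 == 0.3)   # False: each operand carries its own conversion roundoff
```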

Computers are also unable to operate on real numbers expressed with more than a fixed number of digits, so the set of representable values is only a subset of the mathematical integers or real numbers. The so-called word length we reserve for a given number places a restriction on the precision with which that number is represented. This means, in turn, that for example floating-point numbers are always rounded to a machine-dependent precision, typically with 6-15 leading digits to the right of the decimal point. Furthermore, each such set of values has a processor-dependent smallest negative and largest positive value. Why do we care at all about rounding and machine precision?
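The finite word length can be probed directly. The following Python sketch (an illustration we add here, under the assumption of standard 64-bit IEEE floats) finds the machine epsilon, the gap between 1.0 and the next representable number:

```python
import sys

# Halve eps until adding it to 1.0 no longer changes the result; the
# smallest eps for which 1.0 + eps > 1.0 holds is the machine epsilon.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0

print(eps)                     # 2.220446049250313e-16 for a 64-bit float
print(sys.float_info.epsilon)  # the same value, as reported by the runtime
print(sys.float_info.max)      # largest representable double, about 1.8e308
```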

Example: Loss of precision in subtracting nearly equal numbers

Assume that we can represent a floating-point number with a precision of 5 digits only. This is nothing but a mere choice of ours, but it mimics the way numbers are represented in the machine. Then we try to evaluate the function

<math> f(x) = \frac{1 - \cos x}{\sin x} </math>

for small values of <math>x</math>. Note that we can also rewrite this expression by multiplying the numerator and denominator by <math>1 + \cos x</math> to obtain the equivalent expression

<math> f(x) = \frac{\sin x}{1 + \cos x}. </math>

If we now choose <math>x = 0.007</math> (in radians), our choice of precision results in

<math> \sin(0.007) \approx 0.69999 \times 10^{-2} </math>

and

<math> \cos(0.007) \approx 0.99998. </math>

The first expression for <math>f(x)</math> results in

<math> f(0.007) = \frac{1 - 0.99998}{0.69999 \times 10^{-2}} = \frac{0.2 \times 10^{-4}}{0.69999 \times 10^{-2}} = 0.28572 \times 10^{-2}, </math>

while the second expression results in

<math> f(0.007) = \frac{0.69999 \times 10^{-2}}{1 + 0.99998} = \frac{0.69999 \times 10^{-2}}{1.99998} = 0.35 \times 10^{-2}, </math>

which is also the exact result. In the first expression, due to our choice of precision, we have only one relevant digit in the numerator after the subtraction. This leads to a loss of precision and a wrong result due to the cancellation of two nearly equal numbers. If we had chosen a precision of six leading digits, both expressions would yield the same answer. If we were to evaluate <math>x \approx \pi</math>, then the second expression for <math>f(x)</math> could lead to potential losses of precision, due to the cancellation of the nearly equal numbers in the denominator <math>1 + \cos x</math>.

This simple example demonstrates the loss of numerical precision due to roundoff errors, where leading digits are lost in the subtraction of two nearly equal numbers. The lesson to be drawn is that we cannot blindly compute a function. We will always need to analyze our algorithm carefully in the search for potential pitfalls. There is, however, no magic recipe; the only guideline is an understanding of the fact that a machine cannot represent all numbers correctly.

Example: Subtleties in solving a quadratic equation

Solve the quadratic equation

<math> ax^2 + bx + c = 0, </math>

where <math>a = 1</math>, <math>b = -12.4</math>, and <math>c = 0.494</math>, by evaluating the quadratic formula

<math> x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} </math>

using three-digit decimal arithmetic and unbiased rounding. The exact roots, rounded to 6 digits, are 0.0399675 and 12.3600.

This example illustrates an important numerical problem called cancellation, or loss of significance, which manifests itself when subtracting values of nearly equal magnitude. Cancellation occurs when the digits necessary to accurately define the difference have been discarded by rounding in previous calculations, due to the finite precision of machine arithmetic. Problems arise when this difference is an intermediate result that must be used to complete the calculation: most of the significant digits that remain after rounding are eliminated by the subtraction.

To complicate the situation, the digits that become significant after subtraction may be accurate to only a few places due to previous rounding errors in the two values being subtracted. For example, suppose that a calculation produces two intermediate values that agree in their first six significant figures, each correct only to 6 significant figures, with the last two digits incorrect due to rounding errors in previous calculations. On a computer with 8-digit decimal arithmetic, the computed difference of the two numbers is then formed entirely from the unreliable trailing digits: the difference contains no correct figures, all of which have been brought to significance after subtraction.

Such severe cancellation can usually be eliminated by algebraic reformulation. In the case of the quadratic equation, the cancellation observed in the previous exercise results from the subtraction performed between <math>-b</math> and <math>\sqrt{b^2 - 4ac}</math>. This cancellation occurs when <math>4ac</math> is small relative to <math>b^2</math>, so that <math>\sqrt{b^2 - 4ac} \approx |b|</math>. This problem may be resolved by calculating the larger root (in absolute value) using the quadratic formula and obtaining the smaller root by another means. The larger root (in absolute value) can be obtained from the quadratic formula by choosing the sign in front of <math>\sqrt{b^2 - 4ac}</math> so that no subtraction occurs. The smaller root (in absolute value) can be obtained by observing that the product of the roots of a quadratic equation must equal the constant term divided by the leading coefficient: for a general quadratic equation <math>ax^2 + bx + c = 0</math>, the product of the roots is <math>x_1 x_2 = c/a</math>. Thus, the second root may be obtained by division, <math>x_2 = c/(a x_1)</math>, circumventing the cancellation problem discussed above.
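A minimal Python sketch of this strategy (our illustration; the function name `stable_roots` is ours, and real roots are assumed):

```python
import math

def stable_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0, avoiding cancellation.
    Assumes b*b - 4*a*c >= 0 and a != 0."""
    d = math.sqrt(b * b - 4.0 * a * c)
    # Pick the sign of +/- sqrt(...) that matches -b, so no subtraction occurs:
    x1 = (-b - d) / (2.0 * a) if b >= 0.0 else (-b + d) / (2.0 * a)
    # The product of the roots equals c/a, so the small root comes by division:
    x2 = c / (a * x1)
    return x1, x2

print(stable_roots(1.0, -12.4, 0.494))  # approximately (12.3600, 0.0399675)
```

Both roots come out to full working precision, whereas the naive formula loses most of the small root's digits whenever <math>4ac \ll b^2</math>.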

Theory: Representation of real numbers in digital computers

Real numbers are stored with a decimal precision (or mantissa) and the decimal exponent range. The mantissa contains the significant figures of the number (and thereby the precision of the number). A number like <math>(9.90625)_{10}</math> in the decimal representation is given in a binary representation by:

<math> (1001.11101)_2 = 1 \times 2^3 + 0 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 + 1 \times 2^{-1} + 1 \times 2^{-2} + 1 \times 2^{-3} + 0 \times 2^{-4} + 1 \times 2^{-5} </math>

and it has an exact machine number representation, since we need only a finite number of bits to represent this number. This representation is, however, not very practical. Rather, we prefer to use a scientific notation. In the decimal system we would write a number like 9.90625 in what is called the normalized scientific notation. This means simply that the decimal point is shifted and appropriate powers of 10 are supplied. Our number could then be written as:

<math> 9.90625 = 0.990625 \times 10^1, </math>

and a real non-zero number could be generalized as

<math> x = \pm r \times 10^n, </math>

with <math>r</math> a number in the range <math>1/10 \le r < 1</math>. In a similar way we can represent a binary number in scientific notation as:

<math> x = \pm q \times 2^m, </math>

with <math>q</math> a number in the range <math>1/2 \le q < 1</math>. This means that the mantissa of a binary number would be represented by the general formula:

<math> (0.a_{-1}a_{-2} \ldots a_{-n})_2 = a_{-1} \times 2^{-1} + a_{-2} \times 2^{-2} + \ldots + a_{-n} \times 2^{-n}. </math>
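This decomposition can be inspected directly in Python (a sketch added for illustration, not from the original notes): `math.frexp` returns exactly the pair <math>(q, m)</math> with <math>1/2 \le q < 1</math>, and repeated doubling recovers the mantissa bits <math>a_{-1}, a_{-2}, \ldots</math>:

```python
import math

x = 9.90625
q, m = math.frexp(x)             # x = q * 2**m with 0.5 <= q < 1
print(q, m)                      # 0.619140625 4
assert x == math.ldexp(q, m)     # ldexp reverses the decomposition exactly

# Recover the mantissa bits a_{-1}, a_{-2}, ... by repeated doubling:
bits = []
r = q
for _ in range(10):
    r *= 2.0
    bits.append(int(r))          # the integer part after doubling is the next bit
    r -= int(r)
print("".join(map(str, bits)))   # 1001111010, i.e. (0.100111101)_2 * 2^4 = 9.90625
```

Doubling a binary fraction is exact in floating-point arithmetic, so for a number with a short mantissa like this one the recovered bits are exact.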