The Wolfram Language supports three types of computer arithmetic: machine precision, arbitrary precision, and infinite (exact) precision.
Each has its own benefits and weaknesses, and understanding when to use each for the fastest and most accurate results is an important part of any Wolfram Language programmer’s toolbox. This article aims to provide a general overview of each type without diving into all the technical specifics. More details can be found in the Wolfram Language documentation, which will be linked throughout.
Machine-precision arithmetic is used when a number is entered with a decimal point (for example, 4.2 or 1.) or when the single-argument N function is used (for example, N[1/2]). On most modern computers, machine-precision numbers carry roughly 16 significant decimal digits.
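An illustrative session (not the article's original example) showing how to check this with the Precision function:

```wolfram
(* A number entered with a decimal point is a machine-precision number *)
Precision[4.2]
(* MachinePrecision *)

(* Single-argument N also produces machine precision *)
Precision[N[1/2]]
(* MachinePrecision *)

(* The symbol MachinePrecision evaluates numerically to about 15.95 digits *)
N[MachinePrecision]
(* 15.9546 *)
```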
The main advantage of using machine-precision arithmetic is speed. Arbitrary-precision numerical calculations are usually many times slower than machine-precision calculations.
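As a rough illustration of the speed difference (timings vary by machine; this comparison is a sketch, not from the original article):

```wolfram
(* Machine-precision sum over a million terms: fast *)
AbsoluteTiming[Total[Sqrt[Range[1., 10^6]]]]

(* The same sum carried out at 30-digit arbitrary precision:
   typically many times slower *)
AbsoluteTiming[Total[Sqrt[N[Range[10^6], 30]]]]
```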
While machine-precision computation is fast, it should be avoided when working with very large or very small numbers where accuracy is critical (use arbitrary precision instead) or when a symbolic result is required (use infinite precision instead).
It is also worth noting that machine precision does not track roundoff errors. Skipping this bookkeeping is part of what makes it fast, but it can produce incorrect results, as in the following:
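One classic illustration (not necessarily the article's original example): a quantity smaller than machine epsilon is silently rounded away when added to 1.

```wolfram
(* 2^-60 is far below machine epsilon (about 2.2*10^-16), so adding it
   to 1. changes nothing, and no warning is given *)
(1. + 2^-60) - 1.
(* 0., although the exact answer is 2^-60 *)
```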
Most of the Wolfram Language’s built-in mathematical functions produce output that matches the precision of their input. If you feed these functions machine-precision input, they return machine-precision output. For example:
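An illustrative session (not the original notebook cell):

```wolfram
(* Machine-precision input gives machine-precision output *)
Sin[1.5]
(* 0.997495 *)

Precision[Sin[1.5]]
(* MachinePrecision *)
```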
Similarly, if you combine machine-precision inputs with arbitrary- or infinite-precision inputs in a single calculation, the result will be given in machine precision:
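For instance (an illustrative session), mixing a machine number with a 30-digit number and an exact rational yields machine precision:

```wolfram
(* Machine precision "wins" when mixed with arbitrary or exact precision *)
Precision[1.5 + N[Pi, 30] + 1/3]
(* MachinePrecision *)
```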
Numerical operators (NIntegrate, NSum, NDSolve, etc.) return machine-precision results by default.
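A quick check (illustrative, not from the original article):

```wolfram
(* NIntegrate returns a machine-precision result unless told otherwise *)
Precision[NIntegrate[Sin[x], {x, 0, Pi}]]
(* MachinePrecision *)
```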
When you do calculations with arbitrary-precision arithmetic numbers, the Wolfram Language keeps track of precision at all points. In general, the Wolfram Language tries to give you results which have the highest possible precision, given the precision of the input you provided.
Arbitrary-precision numbers are most commonly created by using the N function with its second argument. For example, N[Pi, 20] gives the numeric result of Pi to 20 digits of precision (3.1415926535897932385).
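In an illustrative session:

```wolfram
N[Pi, 20]
(* 3.1415926535897932385 *)

(* The precision of the result is tracked as an exact quantity *)
Precision[N[Pi, 20]]
(* 20. *)
```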
Arbitrary precision is useful for calculations that require a high degree of precision, including those involving very large or very small numbers. However, arbitrary-precision computations are slower than machine-precision ones, so arbitrary-precision numbers are best reserved for situations where the extra accuracy is actually needed.
As with machine-precision numbers, if you feed built-in mathematical functions arbitrary-precision input, they return arbitrary-precision output. For example:
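An illustrative session (not the original notebook cell); note that the result's precision is tracked, so it may differ slightly from the input's 30 digits:

```wolfram
(* Arbitrary-precision input gives arbitrary-precision output *)
x = N[1, 30];
Sin[x]

(* The precision of the output is tracked through the computation *)
Precision[Sin[x]]
```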
Combining machine-precision and arbitrary-precision input results in machine-precision output, so roundoff errors will not be tracked.
An effective way to evaluate an expression in arbitrary precision is to use SetPrecision:
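An illustrative use (not the original example): SetPrecision replaces each number in an expression with a version of the requested precision; applied to a machine number, the extra digits expose its underlying binary value.

```wolfram
(* Raise a machine number to 30-digit precision; the trailing digits come
   from the binary representation of 0.1, not from new information *)
SetPrecision[0.1, 30]

Precision[SetPrecision[0.1, 30]]
(* 30. *)
```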
You can force numerical operators (NIntegrate, NSum, NDSolve, etc.) to use arbitrary precision by setting their WorkingPrecision option.
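For instance (an illustrative call, not from the original article):

```wolfram
(* WorkingPrecision -> 30 makes NIntegrate carry 30 digits internally
   and return an arbitrary-precision result *)
NIntegrate[Sin[x], {x, 0, Pi}, WorkingPrecision -> 30]
```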
Infinite-precision arithmetic is used when exact inputs are known and exact outputs are desired, or when manipulating expressions algebraically.
Rationalize is a useful function for converting floating-point numbers to exact rational numbers.
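An illustrative session (the tolerance example is a sketch, not from the original article):

```wolfram
(* Convert a floating-point number to a nearby exact rational *)
Rationalize[0.5]
(* 1/2 *)

(* An optional second argument gives an explicit tolerance: Rationalize
   returns the rational with the smallest denominator within it *)
Rationalize[N[Pi], 10^-5]
(* 355/113 *)
```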
If you feed built-in mathematical functions infinite-precision input, they return infinite-precision output. For example:
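An illustrative session (not the original notebook cell):

```wolfram
(* Exact input gives exact output *)
Sin[Pi/6]
(* 1/2 *)

Integrate[x^2, {x, 0, 1}]
(* 1/3 *)
```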
Combining infinite-precision and machine-precision input will result in machine-precision output.