
Solution to Precision Problem in JavaScript Numbers

in JavaScript on November 27, 2020

We know that, unlike many other programming languages, JavaScript does not define different types of numbers, such as integer, short, long, or floating-point.

JavaScript numbers are always 64-bit floating point, so there are exactly 64 bits to store a number: 52 of them store the fraction (the significant digits), 11 store the exponent (which positions the decimal point), and 1 bit is for the sign.

Value (fraction): 52 bits (bits 0 – 51)
Exponent:         11 bits (bits 52 – 62)
Sign:             1 bit  (bit 63)
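You can inspect this layout yourself. The sketch below (the helper name `bitsOf` is my own, not from the article) writes a number into an 8-byte buffer with `DataView.setFloat64` and splits the resulting bit string into the three fields:

```javascript
// Inspect the raw IEEE-754 layout of a 64-bit JavaScript number.
function bitsOf(num) {
  const buf = new ArrayBuffer(8);
  const view = new DataView(buf);
  view.setFloat64(0, num); // big-endian by default: sign bit comes first
  let bits = "";
  for (let i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, "0");
  }
  return {
    sign: bits.slice(0, 1),       // 1 bit
    exponent: bits.slice(1, 12),  // 11 bits (biased by 1023)
    fraction: bits.slice(12),     // 52 bits
  };
}

console.log(bitsOf(1));
// { sign: '0', exponent: '01111111111', fraction: '0000…(52 zeros)' }
```

For 1.0 the exponent field is `01111111111` (1023, the bias), showing that the exponent bits are not zero even for integers.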

If a number is too big, it overflows the 64-bit storage and becomes Infinity:


console.log( 1e309 ); // Infinity
console.log( 1e308 ); // 1e+308
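The exact boundary is exposed as Number.MAX_VALUE, the largest finite double. Anything that lands past it overflows to Infinity, and Infinity then swallows further arithmetic:

```javascript
// The largest finite double is about 1.7976931348623157e308.
console.log(Number.MAX_VALUE); // 1.7976931348623157e+308

// Doubling it overflows the exponent range and becomes Infinity.
console.log(Number.MAX_VALUE * 2); // Infinity

// Infinity propagates: subtracting MAX_VALUE back does not recover it.
console.log(Number.MAX_VALUE * 2 - Number.MAX_VALUE); // Infinity
```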

Precision (or imprecision?!)

Integers (by which I mean numbers without a period or exponent notation) are accurate up to 15 digits. The exact limit is Number.MAX_SAFE_INTEGER, which is 2^53 − 1 = 9007199254740991.

That means,


var x = 999999999999999;   // x will be 999999999999999
var y = 9999999999999999;  // y will be 10000000000000000
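Since ES2015 you can check this limit programmatically with Number.isSafeInteger, instead of memorizing a digit count:

```javascript
// 2^53 - 1 is the largest integer JavaScript can represent exactly.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// Beyond it, distinct integers collapse onto the same double:
console.log(9007199254740992 === 9007199254740993); // true

// Number.isSafeInteger reports whether a value is in the exact range.
console.log(Number.isSafeInteger(999999999999999));  // true  (15 nines)
console.log(Number.isSafeInteger(9999999999999999)); // false (16 nines)
```

When you genuinely need exact integers beyond this range, BigInt (the `9999999999999999n` literal form) avoids the problem entirely.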

Floating-point values carry at most about 17 significant decimal digits, and floating-point arithmetic is not always 100% accurate:


var x = 0.2 + 0.1;         // x will be 0.30000000000000004

So, this comparison evaluates to false!


console.log( 0.1 + 0.2 == 0.3 ); // false

A number is stored in memory in its binary form, a sequence of bits – ones and zeroes. But fractions like 0.1, 0.2 that look simple in the decimal numeric system are actually unending fractions in their binary form.

In other words, what is 0.1? It is one divided by ten: 1/10, one-tenth. In the decimal numeral system such numbers are easily representable. Compare it to one-third, 1/3: it becomes the endless fraction 0.33333(3).

So, division by powers of 10 is guaranteed to work well in the decimal system, but division by 3 is not. For the same reason, in the binary numeral system, division by powers of 2 is guaranteed to work, but 1/10 becomes an endless binary fraction.

There’s just no way to store exactly 0.1 or exactly 0.2 using the binary system, just like there is no way to store one-third as a decimal fraction.

The numeric format IEEE-754 solves this by rounding to the nearest possible number. These rounding rules normally don’t allow us to see that “tiny precision loss”, but it exists.
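You can see both the repeating binary pattern and the rounding directly, since toString(2) prints the binary digits of the double that is actually stored:

```javascript
// The 0011 pattern repeats until the 52 fraction bits run out
// and the value is rounded.
console.log((0.1).toString(2));
// 0.0001100110011001100110011001100110011001100110011001101

// The tiny rounding error becomes visible when we ask for more
// decimal digits than the default formatting shows:
console.log((0.1).toFixed(20)); // 0.10000000000000000555
```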

The same issue exists in many other programming languages.

PHP, Java, C, Perl, and Ruby give exactly the same result, because they are based on the same numeric format.

Work around the problem?

The most reliable method is to round the result with the toFixed(n) method:


console.log( 0.1 + 0.2 == 0.3 ); // false

let sum = 0.1 + 0.2;

console.log( sum.toFixed(2) == 0.3 ); // true

toFixed(n) always returns a string with exactly n digits after the decimal point. We can use the unary plus to coerce it back into a number:


let sum = 0.1 + 0.2;
console.log( +sum.toFixed(2) ); // 0.3

One more solution


var x = (0.2 * 10 + 0.1 * 10) / 10;       // x will be 0.3
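Another common pattern, standard since ES2015 though not covered above, is to compare with a tolerance instead of testing exact equality. A minimal sketch (the helper name `nearlyEqual` is my own):

```javascript
// Number.EPSILON is the gap between 1 and the next representable
// double (about 2.22e-16) -- a reasonable tolerance for values near 1.
function nearlyEqual(a, b, tolerance = Number.EPSILON) {
  return Math.abs(a - b) < tolerance;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```

Note that Number.EPSILON is only a sensible tolerance for values close to 1; for much larger numbers you would scale the tolerance accordingly.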

