When You Can’t Count On Your Numbers

By YUI Team, March 10th, 2009

JavaScript has a single number type: IEEE 754 Double Precision floating point. Having a single number type is one of JavaScript’s best features. Multiple number types can be a source of complexity, confusion, and error. A single type is simplifying and stabilizing.

Unfortunately, a binary floating point type has some significant disadvantages. The worst is that it cannot accurately represent decimal fractions, which is a big problem because humanity has been doing commerce in decimals for a long, long time. There would be advantages to switching to a binary-based number system, but that is not going to happen. As a consequence, 0.1 + 0.2 === 0.3 is false, which is the source of a lot of confusion.

When working with floating point numbers, it is important to understand the limitations and program defensively. For example, the Associative Law does not hold. (((a + b) + c) + d) is not guaranteed to produce the same result as ((a + b) + (c + d)).
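A quick check in the console makes the failure concrete (the digits in the comments are what IEEE 754 doubles actually produce):

```javascript
// Same operands, different grouping: each intermediate sum is rounded
// to the nearest double, so the two groupings can disagree.
var left = (0.1 + 0.2) + 0.3;  // 0.1 + 0.2 rounds up slightly first
var right = 0.1 + (0.2 + 0.3); // 0.2 + 0.3 happens to round to exactly 0.5

console.log(left);           // 0.6000000000000001
console.log(right);          // 0.6
console.log(left === right); // false
```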

Let’s demonstrate this. We’ll start with a partial_reduce function. We pass it an array and a function, and it returns an array containing the results of calling the function on pairs of elements. This sort of thing might become popular in the future as a way to take advantage of parallelism, because the work on each of the pairs could happen simultaneously.

    var partial_reduce = function (array, func) {
        var i, result = [], x = array.length - 1;
        for (i = 0; i < x; i += 2) {
            result.push(func(array[i], array[i + 1]));
        }
        if (i === x) {
            result.push(array[x]);
        }
        return result;
    };
We can then write an add function and a totalizer function that works by looping over partial_reduce until it produces a single value.

    var add = function (a, b) {
        return a + b;
    };

    var totalizer = function (array) {
        while (array.length > 1) {
            array = partial_reduce(array, add);
        }
        return array[0];
    };

If I make an array containing 10000 elements all set to 0.01, then totalizer(array) produces 100, which is good.

Now let’s try totaling the same array the old-fashioned, sequential way. array.reduce(add, 0) produces 100.00000000001425, which is close, but no cigar. Every floating point operation can potentially accumulate some noise. The order in which you perform the operations can have an impact on the amount of noise you get.
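Here is the whole experiment in one runnable piece (the values in the comments are the ones reported above):

```javascript
// Pairwise totaling vs. sequential reduce over 10000 copies of 0.01.
var partial_reduce = function (array, func) {
    var i, result = [], x = array.length - 1;
    for (i = 0; i < x; i += 2) {
        result.push(func(array[i], array[i + 1]));
    }
    if (i === x) {              // odd length: carry the last element over
        result.push(array[x]);
    }
    return result;
};

var add = function (a, b) {
    return a + b;
};

var totalizer = function (array) {
    while (array.length > 1) {
        array = partial_reduce(array, add);
    }
    return array[0];
};

var array = [];
for (var j = 0; j < 10000; j += 1) {
    array.push(0.01);
}

var sequential = array.reduce(add, 0); // 100.00000000001425
var pairwise = totalizer(array);       // 100

console.log(pairwise, sequential);
```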

There is work on a decimal flavor of IEEE 754, and we looked at incorporating it into the next edition of ECMAScript. Unfortunately, adding a second number type to a language having only one can do a lot of violence to the language, so we deferred consideration of the decimal type to a future edition. Also, the proposed decimal type is extremely slow in execution, and to my eye is much too complicated in its specification.

Note: The reduce method used above will appear in the next edition of ECMAScript.


  1. Jeffrey Gilbert said:
    March 10, 2009 at 2:51 pm

    This did little or nothing to explain where the noise comes from.

    console.log((0.1 + 0.2) == 0.3);
    console.log((0.1 + 0.2) === 0.3);
    console.log(0.1 + 0.2);


    Yeah, I had no idea this was the case. What’s the best way to combat this when doing decimal math in JavaScript?

  2. @Jeffrey, I’ve been doing something similar to what’s below (except I pre-set the number of decimal places). I doubt it’s the best, most elegant, or most proper way. It is a quick fix, however, that seems to get the job done. Modifying it to accept the arguments object may be a good idea.

    function addDecimal(num1, num2) {
        var str1 = num1.toString().split('.')[1],
            str2 = num2.toString().split('.')[1],
            places = str1.length > str2.length ? str1.length : str2.length,
            v = num1 + num2;
        return v.toFixed(places);
    }
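    If I run it (assuming both inputs actually have a decimal point, since split('.')[1] would be undefined otherwise), it behaves like this; note that toFixed returns a string:

```javascript
// The commenter's helper, self-contained: pick the larger number of
// decimal places, add, then round the sum back to that many places.
function addDecimal(num1, num2) {
    var str1 = num1.toString().split('.')[1],
        str2 = num2.toString().split('.')[1],
        places = str1.length > str2.length ? str1.length : str2.length,
        v = num1 + num2;
    return v.toFixed(places);
}

console.log(addDecimal(0.1, 0.2));   // "0.3"
console.log(addDecimal(0.25, 0.25)); // "0.50"
```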

  3. It comes from 0.1, 0.2, and 0.3 all being non-terminating fractions in binary, so each has to be rounded for storage. The rounded 0.1 plus the rounded 0.2 comes out as 0.30000000000000004, which is not the same double that 0.3 rounds to.

    Usually, one doesn’t do decimal math where it matters.

    For financial stuff, store it as integers and only shift it smaller for display.

    Do bounds checking, not equality testing. Instead of this:

    x == 0.3

    use this:

    x > 0.3 - epsilon && x < 0.3 + epsilon

    More generally, do the math in a server language that has a good math library with BigNums.
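    A minimal sketch of the two suggestions above (the epsilon value here is an arbitrary choice of mine):

```javascript
// 1) Money: keep an integer count of cents; shift smaller only for display.
var totalCents = 10 + 20;                    // integer arithmetic is exact
console.log((totalCents / 100).toFixed(2));  // "0.30"

// 2) Comparison: bounds checking instead of equality testing.
var epsilon = 1e-9;
var sum = 0.1 + 0.2;
console.log(sum === 0.3);                                // false
console.log(sum > 0.3 - epsilon && sum < 0.3 + epsilon); // true
```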

  4. mm… There are problems when I try to apply the demonstration code…

    But if I changed the "x = 1" and the while condition in totalizer to "array.length > 1", it worked fine.

  5. There are some ways to deal with this, by monkey patching some precision calculations using significant digits.

    Math.precision = function (n) { return n.toString(10).replace(/^0+([^0])/, '$1').replace(/([^0]+)0+$/, '$1').replace(/^0+$/, '0').replace(/\./, '').length; };

    Math.roundTo = function (n, prec) { return Math.round(n * Math.pow(10, prec)) / Math.pow(10, prec); };

    Number.prototype.equals = function (n) { var prec = Math.min(Math.precision(this), Math.precision(n)); return Math.roundTo(this, prec) === Math.roundTo(n, prec); };

    Number.prototype.add = function (n) { var prec = Math.min(Math.precision(this), Math.precision(n)); return Math.roundTo(this + n, prec); };

    (0.1 + 0.2).equals(0.3); // true
    (0.1).add(0.2) === 0.3; // true

    Unless you’re using a very high precision, this wouldn’t be a problem.

  6. Julian Wong said:
    March 11, 2009 at 5:56 am

    The code didn’t work for me.

    var partial_reduce = function (array, func) {
        var i, result = [], x = array.length - 1;

    Only a condition like (x > 0) makes sense to me.

  7. Would love to hear more about the binary number system and why it’s not going to happen…

    The link points to another bibliography link that is broken…

    With JavaScript gaining more and more importance these days, it seems to me that the real solution is to update to a more intelligent yet backwardly compatible numbering system.

  8. Daniel Hart said:
    March 14, 2009 at 3:27 am

    A minor change to your add function will fix the problem:

    var add = function (a, b) {
        return Math.round((a + b) * 100) / 100;
    };

    array.reduce(add, 0) now produces 100 instead of 100.00000000001425

  9. Hi, you may try this library “…allow calculations with nearly arbitrary precision”:




  10. The real problem is that JavaScript painted itself into a corner by trying to look cute. While on the surface it does seem like having one numeric type is “simplifying and stabilizing”, the amount of confusion it caused users and the complexity of the things people have to do to work around it should be the deciding factor to update the spec. I have no idea what a right way to do that would be though. Oh well.

  11. No programmer should be using equality tests on floating-point numbers, although I admit I don’t hear that kind of advice tossed around much anymore. You don’t ask if 0.1 + 0.2 === 0.3.

    Instead, you ask if 0.1 + 0.2 - 0.3 < 0.00001 (or some other suitably small number). This works for all real-world cases where you’d use a float. It isn’t that big a deal. Please do not flame me with complaints that it increases download size.

  12. Color me confused. Why should 0.1 + 0.2 ever not equal 0.3?

    floats are usually stored as: NNNN x 10^POW

    For the above example it should be something like:

    1 x 10^-1 + 2 x 10^-1 = 3 x 10^-1

    How on earth is it being stored that there is noise like that?

    Lua, for example, uses only floating point[1] and it can handle 0.1 + 0.2 = 0.3 just fine.


    [1] http://lua-users.org/wiki/FloatingPoint

  13. Christian, I highly recommend that you read the document whose URL you posted.

  14. I just did and it still doesn’t explain why it works in Lua and not in JavaScript and Python.

    I’ve looked in the Lua code and I don’t see why it would be different, though I haven’t exhausted the Lua source yet.

    Did you read the Lua article?


  15. Ah-hah! I discovered why Lua looks like it gets it right and Python and JavaScript look like they don’t.

    They are all using floats underneath. But Lua is using a different sprintf string to represent the numbers. This is potentially confusing, but it is also nice in other ways.

    lua> print(0.1 + 0.2)
    0.3
    lua> print((0.1 + 0.2) == 0.3)
    false

    This is because Lua is using the sprintf format: "%.14g"

    It seems Python and JavaScript use "%.17f" or something similar.

    Now I understand. Comment to myself when I was coloring myself confused; floats are binary exponents, not decimal.

    I think that probably replacing JavaScript numbers with something like Perl’s arbitrary-precision numbers would be best.

    Code using workarounds for floats, such as (0.1 + 0.2 - 0.3 < 0.00001), would still work. And naive code would work correctly.
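    For what it’s worth, the %.14g effect can be approximated in JavaScript with toPrecision; this only re-rounds the displayed value, it does not change the arithmetic underneath:

```javascript
// Round the printed representation to 14 significant digits,
// roughly what Lua's "%.14g" format does when it displays a number.
var show14 = function (n) {
    return parseFloat(n.toPrecision(14));
};

console.log(0.1 + 0.2);                 // 0.30000000000000004
console.log(show14(0.1 + 0.2));         // 0.3
console.log(show14(0.1 + 0.2) === 0.3); // true
```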


  16. I second the Goldberg reference.

    This same issue happens with every single language that has floating point numbers. The IEEE standard means that everyone does these calculations in the same way with the same (very good) set of rules. But it’s impossible to remove the issue and still use binary floating point.

    One common workaround is to round to the nearest 1/100th or 1/10000th; this is common in financial software. (More common than storing values in cents.)

    Another is to use vulgar fractions, either with the denominator as a power of 10 or in the general form with any denominator. (This is common in bignum packages.)

    But none of these (including IEEE 754) is a simple solution; that’s one of the reasons why IEEE 754 is as common as it is: a lot of work has gone into that standard.