# Comparing Floating Point Numbers

## Floating Points and Rounding Errors

Working with floating-point numbers can sometimes produce (un)pleasant surprises, since many real numbers do not have a finite binary representation, and this can lead to rounding errors.

If you need to refresh your knowledge (and have some time to spare), you can find a great treatment in the article *What Every Computer Scientist Should Know About Floating-Point Arithmetic*:

> Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.

A small example should make the problem clearly visible:
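The original listing is not included here, so the following is a minimal reconstruction of the experiment described below; the starting value of `a` is my own choice:

```csharp
using System;

class Program
{
    static void Main()
    {
        // The starting value is an assumption; any value whose quotient
        // by 131 is not exactly representable will show the effect.
        double a = 10.0;
        double b = a / 131;

        // Rebuild a by summing b 131 times.
        double c = 0;
        for (int i = 0; i < 131; i++)
        {
            c += b;
        }

        Console.WriteLine($"Are they equal?: {a == c}");
        Console.WriteLine($"Difference (a - c) is {a - c}");
    }
}
```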

Here we are taking a double (`a`), dividing it by 131, and then creating a new number (`c`) by adding the result of that division (`b`) 131 times. Mathematically, `a` and `c` should be equal. If you try this small program yourself, this is the output that you would get:

```
Are they equal?: False
Difference (a - c) is -4,9737991503207E-14
```


As you can see, the result is not what you might expect: it is affected by a rounding error that builds up every time `b` is added to the partial result `c`.

Note that this behavior depends on the number that I have chosen. If you substitute 131 with 5, you will not experience this problem.

Sometimes this behavior can be annoying, especially when you don't need such high precision and a “rough” equality is all you need. A typical situation is unit testing an algorithm against some expected results: your assertions may fail because of a rounding error, even though the logic of your algorithm is formally correct.

In such a case, you might want to implement your own comparer. A desirable property for such a comparer is a threshold value: if the difference between two floating-point values is below that threshold, they are considered equal.

All you need to do is implement the .NET `IEqualityComparer<double>` interface.

I have created my own comparer to use in my unit tests. I only have to implement two methods: `Equals()` and `GetHashCode()`.

I don't care much about the actual hash code algorithm, so I am reusing `double`'s own implementation.

Instead, I have applied my own logic to the `Equals()` method: to me, two doubles are equal if the absolute value of their difference is within a given threshold (which I pass to the comparer in the constructor).
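The comparer's listing is not shown in the original, so here is a sketch of what it could look like; the class name `ToleranceDoubleComparer` is my own:

```csharp
using System;
using System.Collections.Generic;

public class ToleranceDoubleComparer : IEqualityComparer<double>
{
    private readonly double threshold;

    // The threshold is passed in via the constructor.
    public ToleranceDoubleComparer(double threshold)
    {
        this.threshold = threshold;
    }

    // Two doubles are equal if the absolute value of their
    // difference is within the given threshold.
    public bool Equals(double x, double y)
    {
        return Math.Abs(x - y) <= threshold;
    }

    // Reuse double's own hash code algorithm; the hash is not
    // relevant for this use case.
    public int GetHashCode(double obj)
    {
        return obj.GetHashCode();
    }
}
```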

## The Comparer In Use

Suppose I have an algorithm to calculate the histogram of a given collection of doubles, and a unit test to validate the calculation against a collection of expected results.

In the example below, `samples` contains the input values which I want to create the histogram for, and `expectedBreaks` is a sequence containing the left and right limits of the bins of the histogram (in the example I am requesting 5 bins, so the number of breaks is 5 + 1 = 6).

I have figured out the threshold to pass (0.01) based on the relative size of the doubles under analysis. Of course, you should use your own judgement to come up with a sensible threshold according to the range of your input data.
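The test listing itself is missing here, so the snippet below is an illustrative sketch: the sample values, the expected breaks, and the `Histogram.CalculateBreaks` method are all hypothetical, and I assume the threshold-based comparer described above is named `ToleranceDoubleComparer`:

```csharp
using System;
using System.Linq;

// Hypothetical input values for the histogram.
double[] samples = { 1.0, 2.5, 3.1, 4.7, 5.2, 6.9, 8.4, 9.9 };

// Expected bin limits for 5 bins (5 + 1 = 6 breaks); illustrative only.
double[] expectedBreaks = { 1.0, 2.78, 4.56, 6.34, 8.12, 9.9 };

// Hypothetical histogram implementation under test.
double[] actualBreaks = Histogram.CalculateBreaks(samples, bins: 5);

// Compare the sequences with a tolerance instead of exact equality.
var comparer = new ToleranceDoubleComparer(0.01);
bool equal = expectedBreaks.SequenceEqual(actualBreaks, comparer);

Console.WriteLine($"Breaks match: {equal}");
```

The key point is passing the custom comparer to `SequenceEqual`, so that each pair of breaks is compared within the 0.01 threshold rather than bit-for-bit.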
