Floating point numbers have been given a bad rap. They're mocked, maligned, and feared: the butt of every joke, the scapegoat for every rounding error.
But this stigma is not deserved. Floats are friends! Friends that have been stuck between a rock and a computationally hard place and forced to make some compromises along the way… but friends nevertheless!
In this talk we'll look at the compromises that were made while designing the floating point standard (IEEE 754), how to work within those compromises so that 0.1 + 0.2 comes out as 0.3 and not 0.30000000000000004, how and when floats can and cannot be safely used, and some interesting history around fixed point number representation.
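(As a quick taste, here's a minimal sketch in a Python REPL; comparing with a tolerance via math.isclose is just one illustrative fix assumed here for brevity, not the only approach the talk covers.)

    >>> 0.1 + 0.2                       # the infamous surprise
    0.30000000000000004
    >>> 0.1 + 0.2 == 0.3
    False
    >>> import math
    >>> math.isclose(0.1 + 0.2, 0.3)    # compare with a tolerance instead of exact equality
    True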
This talk is ideal for anyone who understands (at least in principle) binary numbers, anyone who has been frustrated by NaN or the fact that 0.1 + 0.2 == 0.3 evaluates to False, and anyone who wants to be the life of their next party.
This talk will not cover more advanced numerical methods for, e.g., ensuring that algorithms are floating-point safe. Also, if you're already familiar with the significance of "52" and the term "mantissa", this talk may be more entertaining than educational for you.