## 1 + 0 = 1 - often but not always

Numbers play an important role in our everyday life, and computers are known to work fast and efficiently with numbers, hence their name. It might therefore come as a surprise that numbers, as represented by modern computers, do not always behave as one expects.

For example, consider the simple task of computing the sum of 0.1 and 0.2:

```
In [1]: 0.1+0.2
Out[1]: 0.30000000000000004
```

This does not produce the expected result of 0.3. There are also numbers to which we can add one and still get the same number back:

```
In [2]: 1e16+1 == 1e16
Out[2]: True
```

Both behaviors are due to the representation of floating-point numbers specified by the IEEE 754 standard. These numbers are not, as many might assume, a faithful representation of the mathematical reals: they are an approximation, with many corner cases that one needs to take care of when implementing safety- or security-critical software, or software that should simply be correct.
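Both effects can be made visible with a few lines of standard-library Python, a small sketch to illustrate the point: `decimal.Decimal` reveals the binary64 value actually stored for the literal `0.1`, `math.ulp` shows that neighboring doubles near 1e16 are already 2.0 apart, and `math.isclose` is one common way to compare floats despite the approximation:

```python
import math
from decimal import Decimal

# The literal 0.1 is stored as the nearest binary64 value, which is
# slightly larger than one tenth:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Near 1e16 the gap between adjacent doubles (one "unit in the last
# place") is already 2.0, so adding 1 rounds back to the same value:
print(math.ulp(1e16))    # 2.0
print(1e16 + 1 == 1e16)  # True
print(1e16 + 2 == 1e16)  # False: 2 is exactly one ulp

# Comparing with a tolerance instead of == avoids the first surprise:
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True
```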

OK, floating-point numbers are difficult. But integers are much simpler and work as expected, right? Consider the following program, running on an imaginary 8-bit computer:

```
#include <stdio.h>

int main() {
    signed int x = 126; // Let's assume 8-bit integers, i.e., we
    signed int y = 127; // can represent numbers between -128 and 127.
    if (x + y > 0) {
        printf("Hello World\n");
    } else {
        printf("Hello Universe\n");
    }
    return 0;
}
```

Assuming signed 8-bit two's-complement integers, the maximum positive number that can be represented is 127. The sum 126 + 127 is mathematically 253, but on our imaginary machine it wraps around to -3, which shows the unexpected behavior that adding two positive numbers can produce a negative result: the program prints "Hello Universe" instead of "Hello World".
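Since standard Python integers never overflow, the wrap-around of the imaginary 8-bit machine can be sketched with `ctypes.c_int8`, which truncates a value to signed 8 bits just as two's-complement hardware would (the helper name is illustrative):

```python
from ctypes import c_int8

def add_int8(a: int, b: int) -> int:
    """Add two numbers as an 8-bit two's-complement machine would."""
    return c_int8(a + b).value  # c_int8 keeps only the low 8 bits, signed

print(add_int8(126, 127))  # -3: 253 does not fit into -128..127
print(add_int8(70, 50))    # 120: still fits, no overflow
```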

These special cases can lead to functional errors and can be the root cause of severe security vulnerabilities.
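As a purely hypothetical illustration of how such a wrap-around can defeat a security check (all names and sizes here are made up, again simulating 8-bit arithmetic with `ctypes.c_int8`): an attacker-controlled sum overflows to a negative number, so a naive `total <= limit` bounds check passes even though far more data than the buffer can hold would be copied:

```python
from ctypes import c_int8

BUFFER_SIZE = 100  # hypothetical buffer capacity on the 8-bit machine

def check_fits(header_len: int, payload_len: int) -> bool:
    """Naive bounds check, computed as the 8-bit machine would."""
    total = c_int8(header_len + payload_len).value  # may wrap to negative
    return total <= BUFFER_SIZE

# 120 + 120 = 240 bytes would overrun the buffer, but 240 wraps to -16,
# so the broken check accepts it anyway:
print(check_fits(120, 120))  # True, although 240 > 100
print(check_fits(30, 50))    # True, legitimately fits (80 <= 100)
```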

I will discuss these and other examples, and how they can result in security vulnerabilities, in my talk at the Heise devSec conference, which takes place from the 4th to the 6th of October 2022 in Karlsruhe, Germany.