Adding two binary numbers works in the same way as adding decimal numbers. Let us first consider the sum $$a+b$$ of two bits $$a$$ and $$b\text{.}$$ In general, the sum of two bits is two bits long, because the addition can cause a carry. Bit 0 of the sum is given by the xor of both bits: $$a\bxor b\text{.}$$ The carry $$c$$ of that position is 1 if and only if $$a=b=1\text{,}$$ i.e. $$a\band b=1\text{.}$$
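As a minimal sketch (in Python, with hypothetical function names), the single-bit addition just described can be written as:

```python
# Half adder for two bits a and b: bit 0 of the sum is a XOR b,
# and the carry is a AND b.
def half_add(a, b):
    s = a ^ b      # bit 0 of the sum
    c = a & b      # carry: 1 if and only if a = b = 1
    return c, s    # the two-bit result "cs"
```

For example, `half_add(1, 1)` yields the pair `(1, 0)`, i.e. the two-bit string `10`, which represents 2.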

When adding entire bit strings, not just single digits, the carry at a position needs to be propagated into the next higher position. So, if $$c_i$$ denotes the carry bit that position $$i-1$$ generates, the sum bit at position $$i$$ is given as

\begin{equation} s_i\defeq a_i\bxor b_i\bxor c_i\tag{1.1} \end{equation}

The carry bit of position $$i-1$$ may also influence the carry bit $$c_{i+1}$$ out of position $$i\text{.}$$ For example, if $$a_i=b_i=c_i=1\text{,}$$ then $$c_{i+1}=1\text{.}$$ The carry bit that position $$i$$ generates is defined as

\begin{equation} c_{i+1} \defeq (a_i\band b_i)\bor (a_i\band c_i)\bor (b_i\band c_i)\tag{1.2} \end{equation}
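Equations (1.1) and (1.2) together form what is commonly called a full adder. A sketch in Python (the function name is hypothetical):

```python
# Full adder for one position i: computes the sum bit s_i per equation (1.1)
# and the outgoing carry c_{i+1} per equation (1.2).
def full_add(a_i, b_i, c_i):
    s_i = a_i ^ b_i ^ c_i                              # (1.1)
    c_next = (a_i & b_i) | (a_i & c_i) | (b_i & c_i)   # (1.2)
    return c_next, s_i
```

Checking `2*c_next + s_i == a_i + b_i + c_i` for all eight input combinations reproduces the value table below.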

The following lemma proves that $$c_{i+1}s_i$$ is indeed the sum of $$a_i\text{,}$$ $$b_i\text{,}$$ and $$c_i\text{.}$$

By value table:

\begin{equation*}
\begin{array}[b]{ccc|c||cc|c}
a_i \amp b_i \amp c_i \amp a_i+b_i+c_i \amp c_{i+1} \amp s_i \amp \buintp{c_{i+1}s_i} \\
\hline
0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \amp 0 \\
1 \amp 0 \amp 0 \amp 1 \amp 0 \amp 1 \amp 1 \\
0 \amp 1 \amp 0 \amp 1 \amp 0 \amp 1 \amp 1 \\
1 \amp 1 \amp 0 \amp 2 \amp 1 \amp 0 \amp 2 \\
0 \amp 0 \amp 1 \amp 1 \amp 0 \amp 1 \amp 1 \\
1 \amp 0 \amp 1 \amp 2 \amp 1 \amp 0 \amp 2 \\
0 \amp 1 \amp 1 \amp 2 \amp 1 \amp 0 \amp 2 \\
1 \amp 1 \amp 1 \amp 3 \amp 1 \amp 1 \amp 3 \\
\end{array}
\end{equation*}

Figure 1.3.2. The bits in the dashed box represent the right-hand side of Lemma 1.3.1 and the digits in the solid box the ones on the left-hand side.

Now, consider two bit strings $$a$$ and $$b$$ of equal length $$n\text{.}$$ When adding $$a$$ and $$b$$ we assume that no carry flows into the least-significant position, i.e. $$c_0=0\text{.}$$

### Definition 1.3.3. Addition.

$$a+b\defeq c_ns_{n-1}\dots s_0$$

### Remark 1.3.4. Efficiency and practicability of our adder.

While Lemma 1.3.5 shows that the addition algorithm we defined is sound, it has a significant practical disadvantage. Our algorithm is a so-called ripple-carry adder, which means that the carry “ripples” through the positions: in order to compute $$s_i$$ you have to have computed $$c_i\text{,}$$ for which you need $$c_{i-1}\text{,}$$ and so forth. This makes the depth of the actual circuit of ands, ors, and xors that implements the adder linear in the number of positions. Since the circuit depth significantly influences the frequency at which you can clock the computer, and therefore its speed, practical implementations use different adders, so-called carry-lookahead adders, which have more gates but are flatter and can therefore be clocked faster.
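To illustrate the lookahead idea (this is a sketch, not the circuit itself): each position computes a generate bit $$g_i=a_i\band b_i$$ and a propagate bit $$p_i=a_i\bxor b_i\text{,}$$ and the carry recurrence $$c_{i+1}=g_i\bor(p_i\band c_i)$$ is then unrolled in hardware into a flat two-level expression, e.g. $$c_2 = g_1\bor(p_1\band g_0)\bor(p_1\band p_0\band c_0)\text{,}$$ so that no carry depends on an earlier sum bit. The function name below is hypothetical:

```python
def lookahead_carries(a, b, c0=0):
    """Compute all carries c_0 .. c_n from generate/propagate bits.

    Bit lists use index 0 as the least significant bit. Note: this loop
    still iterates; in a real carry-lookahead circuit the recurrence is
    unrolled so every c_i is computed at constant depth."""
    g = [x & y for x, y in zip(a, b)]   # generate bits g_i = a_i AND b_i
    p = [x ^ y for x, y in zip(a, b)]   # propagate bits p_i = a_i XOR b_i
    c = [c0]
    for i in range(len(a)):
        c.append(g[i] | (p[i] & c[i]))  # c_{i+1} = g_i OR (p_i AND c_i)
    return c
```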

The following lemma shows that this definition is sound, i.e. the bit operations by which we defined the addition really produce a bit string that represents the sum of the natural numbers represented by the individual bit strings.

By induction over $$n\text{.}$$

1. Base case $$n=1$$.

Follows directly from Lemma 1.3.1.

2. Induction step $$n-1\to n$$.

The induction hypothesis is

\begin{equation*} \buintp{c_{n-1}\bseqq{s}{n-2}}=\buintp{\bseqq{a}{n-2}}+\buintp{\bseqq{b}{n-2}} \end{equation*}

By using the definitions and lemmas we have established above, we get:

\begin{align*} \buintp{a}+\buintp{b} \amp = a_{n-1}2^{n-1} + \buintp{\bseqq a{n-2}} + b_{n-1}2^{n-1} + \buintp{\bseqq{b}{n-2}} \amp \knowl{./knowl/lem-arith-splitval.html}{\text{Lemma 1.1.2}}\\ \amp = (a_{n-1}+b_{n-1})\cdot 2^{n-1} + \buintp{\bseqq{a}{n-2}} + \buintp{\bseqq{b}{n-2}} \\ \amp = (a_{n-1}+b_{n-1})\cdot 2^{n-1} + \buintp{c_{n-1}\bseqq{s}{n-2}} \amp \text{IH}\\ \amp = (a_{n-1}+b_{n-1}+c_{n-1})\cdot 2^{n-1} + \buintp{\bseqq {s}{n-2}} \amp \knowl{./knowl/lem-arith-splitval.html}{\text{Lemma 1.1.2}}\\ \amp = \buintp{c_ns_{n-1}}\cdot 2^{n-1} + \buintp{\bseqq {s}{n-2}} \amp \knowl{./knowl/lem-arith-place.html}{\text{Lemma 1.3.1}}\\ \amp = \buintp{c_ns_{n-1}\cdots s_0} \amp \knowl{./knowl/lem-arith-splitval.html}{\text{Lemma 1.1.2}} \end{align*}

Note that the sum contains $$n+1$$ bits. If the carry bit out of the last position is $$0\text{,}$$ Lemma 1.3.5 tells us that the lower $$n$$ bits are accurate, i.e. $$\buintp{\bseq sn}=\buintp{a}+\buintp{b}\text{.}$$ If the carry out of the last position is $$1\text{,}$$ the lower $$n$$ bits are not accurate and the sum is greater than or equal to $$2^n\text{.}$$ In this case an overflow happened and we would need all $$n+1$$ bits to represent the result accurately. Since computers internally operate on bit strings of fixed size, there is no space to store this additional bit in-place with the result. If we are only interested in the $$n$$ least significant bits when adding, we use a shorthand notation to indicate that we ignore the carry out of the most significant place.
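In other words, the machine keeps the sum modulo $$2^n$$ and reports $$c_n$$ separately. A sketch with hypothetical names, operating on Python integers for brevity:

```python
def add_mod(a, b, n):
    """Add two n-bit values, keeping only the n low bits.

    The carry out of the most significant place (c_n) is returned
    separately; c_n = 1 exactly when the true sum is >= 2**n,
    i.e. when an overflow happened."""
    total = a + b
    carry_out = total >> n               # c_n
    return total & ((1 << n) - 1), carry_out
```

For 4-bit operands, `add_mod(0b1111, 0b0001, 4)` returns `(0, 1)`: the true sum 16 does not fit into 4 bits, and the carry flag records the overflow.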

Let us now turn to subtraction. Instead of inventing a new algorithm for subtracting, we are going to use a trick to express subtraction by addition. This trick hinges on the fact that we are only interested in subtracting words of $$n$$ bits and not bit strings of arbitrary length. We define subtraction to fulfill the following specification:

\begin{equation} \buintp{a-b}\defeq\buintp{a}+2^n-\buintp{b}\tag{1.4} \end{equation}

At first sight, this looks odd. Why would we add $$2^n\text{?}$$ Consider the following example in base 10 arithmetic:

### Example 1.3.6.

Assume we want to subtract base 10 numbers of length 3, and more concretely suppose we want to compute $$3-2\text{.}$$

\begin{equation*} 3-2\equiv 3-2+1000\equiv 3+998\equiv 1001\equiv 1\mod 1000 \end{equation*}

What essentially happens here is that, instead of subtracting $$2\text{,}$$ we added $$998=1000-2$$ and considered the result modulo $$1000\text{,}$$ which is $$1\text{.}$$ This of course only works if we limit the digit sequences to a certain length beforehand (here 3).
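The same trick, written out in Python for 3-digit base-10 numbers (function name and parameters are illustrative):

```python
def sub_base10(a, b, digits=3):
    """Subtract b from a by adding the ten's complement of b.

    Instead of computing a - b directly, add 10**digits - b and keep
    only the low `digits` digits, i.e. reduce modulo 10**digits."""
    m = 10 ** digits          # here 1000
    return (a + (m - b)) % m  # e.g. 3 - 2 becomes (3 + 998) mod 1000
```

As in the example, `sub_base10(3, 2)` evaluates `(3 + 998) % 1000` and returns `1`.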

So instead of subtracting $$\buintp b$$ from $$\buintp a$$ we are going to add $$2^n-\buintp b$$ to $$\buintp a\text{.}$$ But $$2^n-\buintp b$$ is still a subtraction; how is this going to help? We will see in the soundness proof of the subtraction how we can leverage Lemma 1.1.6 to reduce $$2^n-\buintp b$$ to negation and compute it efficiently.

### Definition 1.3.7. Subtraction.

$$a-b\defeq a+\bneg b\text{ with }c_0=1$$

\begin{align*} \buintp{a-b} \amp = \buintp{a+\bneg b}\text{ with }c_0=1 \amp \knowl{./knowl/def-arith-sub.html}{\text{Definition 1.3.7}}\\ \amp = \buintp a+\buintp{\bneg b}+1 \amp \knowl{./knowl/lem-arith-uadd.html}{\text{Lemma 1.3.5}}\\ \amp = \buintp a+\left(\sum_{i=0}^{n-1}(1-b_i)2^i\right)+1 \amp \knowl{./knowl/def-arith-uintp.html}{\text{Definition 1.1.1}}\\ \amp = \buintp a+\left(\sum_{i=0}^{n-1}2^i\right)+1-\left(\sum_{i=0}^{n-1}b_i2^i\right) \amp \\ \amp = \buintp a+\left(\sum_{i=0}^{n-1}2^i\right)+1-\buintp b \amp \knowl{./knowl/def-arith-uintp.html}{\text{Definition 1.1.1}}\\ \amp = \buintp a+2^n-\buintp b \amp \knowl{./knowl/lem-sumnine.html}{\text{Lemma 1.1.6}} \end{align*}

Definition 1.3.7 gives us a nice way to subtract essentially by adding the two's complement of the subtrahend. The fact that the result does not fit into $$n$$ bits does not trouble us; on the contrary, it gives us a way to check whether the result is even valid! If the carry out of the last position is set, then $$\buintp a\ge\buintp b\text{,}$$ the subtraction is well-defined, and the lower $$n$$ bits represent the accurate result. If $$c_n=0$$ then $$\buintp a\lt\buintp b\text{,}$$ an overflow occurred, and the lower $$n$$ bits do not mean a thing.
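A sketch of Definition 1.3.7 together with this validity check, in Python with hypothetical names (integers stand in for $$n$$-bit strings):

```python
def ripple_sub(a, b, n):
    """Subtract b from a via a + NOT(b) with c_0 = 1 (Definition 1.3.7).

    Returns (n-bit result, c_n). c_n = 1 means a >= b and the low n bits
    are the accurate difference; c_n = 0 signals a < b (overflow)."""
    not_b = ~b & ((1 << n) - 1)   # bitwise negation restricted to n bits
    total = a + not_b + 1         # the trailing +1 plays the role of c_0 = 1
    return total & ((1 << n) - 1), total >> n
```

For 4-bit operands, `ripple_sub(5, 3, 4)` returns `(2, 1)`: the carry is set, so the result 2 is valid; `ripple_sub(3, 5, 4)` returns `(14, 0)`: the carry is clear, flagging that the low bits do not represent $$3-5\text{.}$$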

### Remark 1.3.9. Practical Considerations.

Definition 1.3.7 already suggests how an adder has to be extended to support subtraction: the right operand of the ALU must be optionally negatable, and the carry bit into the least position must be presettable to either $$0$$ or $$1\text{.}$$

Many processors provide the carry out of the last position in a special register, the so-called flag register. It is then often just called the carry bit because all the intermediate carries are inaccessible to the programmer. Some architectures negate the carry bit when subtracting, effectively making it a borrow bit. For example, on an x86 machine the carry bit is clear when computing $$3-2\text{,}$$ whereas an ARM processor would set the carry in that case.

Pro comment: This is why on x86 the instruction that subtracts and reads $$c_0$$ from the flags is called sbb (subtract with borrow), while it is called sbc (subtract with carry) on ARM processors. Such instructions are used when implementing arithmetic on bit strings that are longer than the processor's word size.