Chapter 1 Computer Arithmetic

Most computers are digital (from Latin digitus = finger). This means that a piece of information is represented by a symbol, called a digit, drawn from a finite set. (There are also analog computers where information is represented by a continuous quantity like voltage.) Almost all digital computers are binary, which means that the set of symbols has two elements, commonly called 0 and 1. A binary digit is called a bit.

One important property of computers is that hardware typically operates on bit strings of fixed size and is therefore not able to work with arbitrarily large numbers directly. This size is called the word size of the machine. Many programming languages reflect this fact in their type systems by providing types like uint32_t, which in the C programming language stands for a natural number between \(0\) and \(2^{32}-1\).

Computers define a special kind of arithmetic on these finite-length bit strings called two's-complement arithmetic. Two's-complement arithmetic is predominant in modern computers and at the core of the integer types of many modern programming languages. It allows bit strings to be interpreted either as unsigned numbers (naturals) or as signed numbers (integers) and defines the appropriate operations for both interpretations.

Understanding the details of two's-complement arithmetic is important for understanding the effects of computations in computer programs, especially in corner cases where overflows occur, or in low-level settings when debugging code or analyzing machine-code programs.