Up to now we have only been assigning non-negative numbers to bit strings using the function \(\buintp\cdot\text{.}\) However, it would be nice to be able to perform meaningful computations on bit strings that represent negative numbers as well. To do this, we need to come up with a new interpretation of bit strings that covers positive and negative numbers.
Definition 1.4.1. Signed interpretation of bit strings.
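A natural candidate, and the one consistent with the lemmas below, is the two's complement interpretation: for a bit string \(b=b_{n-1}\ldots b_1b_0\) of length \(n\text{,}\)
\begin{equation*}
\bsintp b := -b_{n-1}\cdot 2^{n-1}+\sum_{i=0}^{n-2}b_i\cdot 2^i = \buintp b - b_{n-1}\cdot 2^n\text{.}
\end{equation*}
For \(n=4\text{,}\) for example, the bit string 1111 represents \(-1\) and 1000 represents \(-8\text{,}\) while every bit string with a leading 0 keeps its unsigned value.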
We have proven (Lemma 1.3.5) that the addition algorithm of Definition 1.3.3 always preserves the values under the unsigned interpretation modulo \(2^n\text{.}\) We have already discussed the role of the carry out of the most significant position to detect overflows in the case of adding or subtracting unsigned numbers. We will see shortly that our addition algorithm also works for signed numbers in a similar way: It preserves the signed interpretation modulo \(2^n\) but has a different criterion for determining an overflow, as the following examples show:
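For instance, take \(n=4\text{.}\) Adding 0111 and 0001 yields 1000 with \(c_4=0\) and \(c_3=1\text{:}\) the unsigned addition \(7+1=8\) is exact, but under the signed interpretation the result reads \(-8\) instead of \(8\text{,}\) a signed overflow even though there is no carry out. Conversely, adding 1111 and 0001 yields 0000 with \(c_4=1\) and \(c_3=1\text{:}\) the unsigned addition \(15+1\) overflows, while the signed addition \(-1+1=0\) is exact.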
The next lemma clarifies under which circumstances the lower \(n\) bits of an addition represent the accurate result with respect to the signed interpretation and also gives a condition for the overflow:
Lemma 1.4.3. Signed addition.
\(\bsintp a+\bsintp b=\bsintp{a\baddn b}\) if and only if \(c_n= c_{n-1}\text{.}\)
As mentioned before, many processors provide the carry out of the most significant position as the carry flag in a flag register. Additionally, these processors typically provide an overflow bit which is just the xor between the carry into and out of the most significant position as indicated by Lemma 1.4.3.
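To make the role of \(c_{n-1}\) and \(c_n\) concrete, here is a small C sketch (our own illustration, not code from the text; the function name and the ripple-carry model are assumptions) that adds two \(n\)-bit strings bit by bit and derives the carry and overflow flags as just described:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Add two n-bit values bit by bit (ripple carry) and return the n-bit sum.
 * On return, *carry holds c_n (the carry flag) and *overflow holds
 * c_n XOR c_{n-1} (the overflow flag of Lemma 1.4.3). */
static uint32_t add_with_flags(uint32_t a, uint32_t b, int n,
                               bool *carry, bool *overflow)
{
    uint32_t sum = 0;
    unsigned c = 0;        /* c_0 = 0 */
    unsigned c_msb_in = 0; /* will hold c_{n-1}, the carry into the MSB */
    for (int i = 0; i < n; i++) {
        unsigned ai = (a >> i) & 1u;
        unsigned bi = (b >> i) & 1u;
        unsigned t = ai + bi + c;
        sum |= (uint32_t)(t & 1u) << i;
        if (i == n - 1)
            c_msb_in = c;  /* carry into position n-1, i.e. c_{n-1} */
        c = t >> 1;        /* carry out of position i, i.e. c_{i+1} */
    }
    *carry = (c != 0);                 /* c_n */
    *overflow = ((c ^ c_msb_in) != 0); /* c_n XOR c_{n-1} */
    return sum;
}

int main(void)
{
    bool cf, of;
    uint32_t s = add_with_flags(0x7, 0x1, 4, &cf, &of); /* 0111 + 0001 */
    printf("sum=%x carry=%d overflow=%d\n", s, cf, of); /* sum=8 carry=0 overflow=1 */
    return 0;
}
```

Running it on the example 0111 + 0001 with \(n=4\) reports carry 0 and overflow 1, matching Lemma 1.4.3.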
Suppose your ALU does not provide you with an overflow bit and suppose you do not have access to \(c_{n-1}\text{.}\) So you cannot use Lemma 1.4.3 to compute the overflow bit. Explore how you can compute the overflow bit from the most significant bits of the added numbers, of the sum, and maybe from the carry bit. Come up with adequate boolean expressions that compute the overflow bit from these components.
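One way to explore this is a brute-force check: enumerate all operand pairs for a small width and compare your candidate expression against a reference computed with exact arithmetic. The C sketch below (our own harness, with a hypothetical helper name and the width fixed at 4 bits) leaves candidate_overflow as a stub for your own boolean expression:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { N = 4 }; /* word width used for the exhaustive check */

/* Replace this stub with your boolean expression built only from the most
 * significant bits of a, b, the sum s, and (optionally) the carry out. */
static bool candidate_overflow(bool a_msb, bool b_msb, bool s_msb, bool carry_out)
{
    (void)a_msb; (void)b_msb; (void)s_msb; (void)carry_out;
    return false; /* deliberately wrong stub: the harness will report mismatches */
}

int main(void)
{
    int mismatches = 0;
    for (uint32_t a = 0; a < (1u << N); a++) {
        for (uint32_t b = 0; b < (1u << N); b++) {
            uint32_t wide = a + b;                /* (N+1)-bit result */
            uint32_t s = wide & ((1u << N) - 1u); /* lower N bits */
            bool c_n = ((wide >> N) & 1u) != 0;   /* carry out of the MSB */
            /* reference: the signed sum overflows iff it does not fit into N bits */
            int32_t sa = (int32_t)a - (int32_t)((a >> (N - 1)) << N);
            int32_t sb = (int32_t)b - (int32_t)((b >> (N - 1)) << N);
            int32_t exact = sa + sb;
            bool reference = exact < -(1 << (N - 1)) || exact >= (1 << (N - 1));
            bool guess = candidate_overflow((a >> (N - 1)) & 1u,
                                            (b >> (N - 1)) & 1u,
                                            (s >> (N - 1)) & 1u, c_n);
            if (guess != reference)
                mismatches++;
        }
    }
    printf("%d mismatches\n", mismatches);
    return 0;
}
```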
Not every operation gives meaningful results under the signed and the unsigned interpretation alike. Consider the less-than operation \(a\lt b\text{,}\) which yields 1 if \(a\lt b\) and 0 otherwise. For example:
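Take the 4-bit strings 1000 and 0111. Under the unsigned interpretation the comparison asks whether \(8\lt 7\text{,}\) which is false, so it yields 0; under the signed interpretation it asks whether \(-8\lt 7\text{,}\) which is true, so it yields 1.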
So, when performing a comparison on two bit strings, one has to decide if the bit strings are to be interpreted as signed or unsigned numbers. Consequently, modern microprocessors have two operations for comparing integers. On MIPS, which we will discuss in the next section, there is slt which interprets the bit strings as signed integers and sltu which interprets them as unsigned integers.
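The same distinction is visible in C, where the types of the operands select signed or unsigned comparison; the following sketch (an illustration of ours with 8-bit values) mirrors the difference between slt and sltu:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t bits_a = 0x80; /* bit string 1000 0000 */
    uint8_t bits_b = 0x7F; /* bit string 0111 1111 */

    /* sltu-style comparison: interpret both operands as unsigned */
    int sltu = bits_a < bits_b;                /* 128 < 127 -> 0 */

    /* slt-style comparison: reinterpret the same bits as signed;
     * the cast assumes the usual two's complement representation */
    int slt = (int8_t)bits_a < (int8_t)bits_b; /* -128 < 127 -> 1 */

    printf("sltu=%d slt=%d\n", sltu, slt);     /* prints: sltu=0 slt=1 */
    return 0;
}
```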
Remark 1.4.8.
Being signed or unsigned is not a property of the bit string itself but how you interpret it. A binary operation on bit strings just produces another bit string. It may, however, be sound with respect to the signed or the unsigned interpretation (comparison) or with respect to both (addition).
Sign and Zero Extension.
It happens frequently that we want to convert a bit string of length \(m\) into a longer bit string of length \(n\) so that the longer bit string represents the same number as the shorter one.
In such a situation it matters whether we interpret the bit string as a signed or as an unsigned number, because the conversion is different in each case. Converting a shorter bit string into a longer one such that its value stays the same under the unsigned interpretation is called zero extension, whereas sign extension preserves the value under the signed interpretation.
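As a small C illustration (extending from 8 to 32 bits; the function names are ours, not from the text), zero extension pads the upper bits with zeros, while sign extension replicates the most significant bit:

```c
#include <stdint.h>
#include <stdio.h>

/* Zero extension: the upper bits are filled with 0,
 * preserving the unsigned value. */
static uint32_t zero_extend_8_to_32(uint8_t x)
{
    return (uint32_t)x;
}

/* Sign extension: the upper bits are filled with copies of bit 7,
 * preserving the signed (two's complement) value. */
static uint32_t sign_extend_8_to_32(uint8_t x)
{
    uint32_t msb = (x >> 7) & 1u;
    return msb ? ((uint32_t)x | 0xFFFFFF00u) : (uint32_t)x;
}

int main(void)
{
    uint8_t x = 0xFF; /* unsigned 255, signed -1 */
    printf("zero: 0x%08X\n", zero_extend_8_to_32(x)); /* 0x000000FF */
    printf("sign: 0x%08X\n", sign_extend_8_to_32(x)); /* 0xFFFFFFFF */
    return 0;
}
```

Both functions return the same 32-bit pattern for inputs with a leading 0; they differ exactly when the most significant bit of the input is 1.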
Figure 1.4.9 shows all bit strings of lengths 4, 3, and 2. As the figure shows, when the longer bit string preserves the value of the shorter one under \(\buintp\cdot\text{,}\) the bit string is extended with zeros. If the signed value is to be preserved, the most significant bit is replicated: 11 becomes 111 and then 1111, whereas 01 becomes 001 and then 0001. Proving that zero extension preserves the unsigned value is straightforward. The signed case is slightly more interesting: