
Binary Number System


The Binary Number Base System

Most modern computer systems (including the IBM PC) operate using binary logic. The computer represents values using two voltage levels (usually 0 V for logic 0 and either +3.3 V or +5 V for logic 1). With two levels we can represent exactly two different values. These could be any two different values, but by convention we use the values zero and one. These two values, coincidentally, correspond to the two digits used by the binary number system.

Since there is a correspondence between the logic levels used by the computer and the two digits used in the binary numbering system, it should come as no surprise that computers employ the binary system. The binary number system works like the decimal number system, except that the binary number system:

uses base 2
includes only the digits 0 and 1 (any other digit would make the number an invalid binary number)

The weighted value for each position is determined as follows:

2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0  2^-1  2^-2
128  64   32   16   8    4    2    1    0.5   0.25

In the United States, among other countries, every three decimal digits are separated with a comma to make larger numbers easier to read. For example, 123,456,789 is much easier to read and comprehend than 123456789. We will adopt a similar convention for binary numbers. To make binary numbers more readable, we will add a space every four digits, starting from the least significant digit just to the left of the binary point. For example, the binary value 1010111110110010 will be written 1010 1111 1011 0010.
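
As a small illustration of this convention, here is a JavaScript sketch (the function name groupBits is mine, not from the original page) that inserts a space every four digits, working from the least significant digit:

  // Group a binary string into blocks of four digits, starting from the
  // least significant digit (the right end) and working left.
  // Note: groupBits is a hypothetical helper name used for illustration.
  function groupBits(binStr) {
    var grouped = "";
    for (var i = binStr.length; i > 0; i -= 4) {
      var start = Math.max(0, i - 4);
      grouped = binStr.substring(start, i) + (grouped ? " " + grouped : "");
    }
    return grouped;
  }

  // groupBits("1010111110110010") returns "1010 1111 1011 0010".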


Number Base Conversion

Binary to Decimal

It is very easy to convert from a binary number to a decimal number. Just like the decimal system, we multiply each digit by its weighted position, and add each of the weighted values together. For example, the binary value 1100 1010 represents:

1*2^7 + 1*2^6 + 0*2^5 + 0*2^4 + 1*2^3 + 0*2^2 + 1*2^1 + 0*2^0 =

1 * 128 + 1 * 64 + 0 * 32 + 0 * 16 + 1 * 8 + 0 * 4 + 1 * 2 + 0 * 1 =

128 + 64 + 0 + 0 + 8 + 0 + 2 + 0 =

202
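
In JavaScript, this weighted-sum conversion might be sketched as follows (binaryToDecimal is an illustrative name; the built-in parseInt does the same job):

  // Convert a binary string to decimal: each step multiplies the running
  // value by 2 (shifting the weights up) and adds the next digit.
  function binaryToDecimal(binStr) {
    var value = 0;
    for (var i = 0; i < binStr.length; i++) {
      value = value * 2 + (binStr.charAt(i) === "1" ? 1 : 0);
    }
    return value;
  }

  // binaryToDecimal("11001010") returns 202, matching the example above.
  // The built-in parseInt("11001010", 2) gives the same result.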


Decimal to Binary

Converting decimal to binary is slightly more difficult. There are two methods that may be used to convert from decimal to binary: repeated division by 2, and repeated subtraction of the weighted position values.

Repeated Division By 2

For this method, divide the decimal number by 2. If the remainder is 0, write down a 0; if the remainder is 1, write down a 1. Continue by dividing the quotient by 2 and recording each new remainder until the quotient is 0. The remainders, taken in order, represent the binary equivalent of the decimal number: the first remainder is the least significant digit (on the right), and each new digit is written to the left of the previous digit. Consider the number 2671.

Division   Quotient  Remainder  Binary Number
2671 / 2   1335      1          1
1335 / 2   667       1          11
667 / 2    333       1          111
333 / 2    166       1          1111
166 / 2    83        0          0 1111
83 / 2     41        1          10 1111
41 / 2     20        1          110 1111
20 / 2     10        0          0110 1111
10 / 2     5         0          0 0110 1111
5 / 2      2         1          10 0110 1111
2 / 2      1         0          010 0110 1111
1 / 2      0         1          1010 0110 1111
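
A minimal JavaScript sketch of the repeated division method (decimalToBinary is my name for the helper, not from the page):

  // Repeated division by 2: each remainder becomes the next binary digit,
  // written to the left of the previous one, until the quotient is 0.
  function decimalToBinary(num) {
    if (num === 0) return "0";
    var bits = "";
    while (num > 0) {
      bits = (num % 2) + bits;   // remainder is the next digit to the left
      num = Math.floor(num / 2); // continue with the quotient
    }
    return bits;
  }

  // decimalToBinary(2671) returns "101001101111", i.e. 1010 0110 1111.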

The Subtraction Method

For this method, start with a weighted position value greater than the number.

If the number is greater than or equal to the weighted position value for the digit, write down a 1 and subtract the weighted position value.

If the number is less than the weighted position value for the digit, write down a 0 and subtract nothing.

This process is continued until the result is 0. When performing the subtraction, the digits which will represent the binary equivalent of the decimal number are written beginning at the most significant digit (on the left), and each new digit is written one position to the right of the previous digit. Consider the same number, 2671, using this method.

Weighted Value  Subtraction  Remainder  Binary Number
2^12 = 4096     2671 - 0     2671       0
2^11 = 2048     2671 - 2048  623        0 1
2^10 = 1024     623 - 0      623        0 10
2^9 = 512       623 - 512    111        0 101
2^8 = 256       111 - 0      111        0 1010
2^7 = 128       111 - 0      111        0 1010 0
2^6 = 64        111 - 64     47         0 1010 01
2^5 = 32        47 - 32      15         0 1010 011
2^4 = 16        15 - 0       15         0 1010 0110
2^3 = 8         15 - 8       7          0 1010 0110 1
2^2 = 4         7 - 4        3          0 1010 0110 11
2^1 = 2         3 - 2        1          0 1010 0110 111
2^0 = 1         1 - 1        0          0 1010 0110 1111
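
The same subtraction method as a JavaScript sketch (toBinaryBySubtraction is a hypothetical name; the starting power 2^12 matches the table above):

  // Subtraction method: walk down the weighted position values, writing a 1
  // and subtracting whenever the weight fits into what remains.
  function toBinaryBySubtraction(num, topPower) {
    var bits = "";
    for (var p = topPower; p >= 0; p--) {
      var weight = Math.pow(2, p);
      if (num >= weight) {  // weight fits: write 1 and subtract it
        bits += "1";
        num -= weight;
      } else {              // weight too large: write 0, subtract nothing
        bits += "0";
      }
    }
    return bits;
  }

  // toBinaryBySubtraction(2671, 12) returns "0101001101111",
  // matching the table's result 0 1010 0110 1111.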

Binary Number Formats

We typically write binary numbers as a sequence of bits ("bit" is short for "binary digit"). We have defined boundaries for these bits. These boundaries are:

Name         Size (bits)  Example
Bit          1            1
Nibble       4            0101
Byte         8            0000 0101
Word         16           0000 0000 0000 0101
Double Word  32           0000 0000 0000 0000 0000 0000 0000 0101

In any number base, we may add as many leading zeroes as we wish without changing a number's value. However, we normally add leading zeroes to adjust a binary number to a desired size boundary. For example, we can represent the number five as:

Three bits  101
Nibble      0101
Byte        0000 0101
Word        0000 0000 0000 0101
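
In JavaScript, this zero padding can be done with the standard toString and padStart methods; a minimal sketch:

  // Represent five in binary, padded with leading zeroes to each boundary.
  var five = (5).toString(2);            // "101"
  var nibble = five.padStart(4, "0");    // "0101"
  var byteVal = five.padStart(8, "0");   // "00000101"
  var wordVal = five.padStart(16, "0");  // "0000000000000101"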

We'll number each bit as follows:

  1. The rightmost bit in a binary number is bit position zero.
  2. Each bit to the left is given the next successive bit number.

Bit zero is usually referred to as the LSB (least significant bit). The leftmost bit is typically called the MSB (most significant bit). We will refer to the intermediate bits by their respective bit numbers.
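
Bit numbering maps directly onto the shift-and-mask idiom; a minimal sketch (getBit is a hypothetical helper name):

  // Read bit n of a value: shift it down to position zero, then mask it.
  function getBit(value, n) {
    return (value >> n) & 1;
  }

  // For 0000 0101 (five): getBit(5, 0) is 1 (the LSB), getBit(5, 1) is 0,
  // and getBit(5, 2) is 1.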


The Bit

The smallest "unit" of data on a binary computer is a single bit. Since a single bit is capable of representing only two different values (typically zero or one) you may get the impression that there are a very small number of items you can represent with a single bit. Not true! There are an infinite number of items you can represent with a single bit.

With a single bit, you can represent any two distinct items. Examples include zero or one, true or false, on or off, male or female, and right or wrong. However, you are not limited to representing binary data types (that is, those objects which have only two distinct values).

To confuse things even more, different bits can represent different things. For example, one bit might be used to represent the values zero and one, while an adjacent bit might be used to represent the values true and false. How can you tell by looking at the bits? The answer, of course, is that you can't. But this illustrates the whole idea behind computer data structures: data is what you define it to be.

If you use a bit to represent a boolean (true/false) value then that bit (by your definition) represents true or false. For the bit to have any true meaning, you must be consistent. That is, if you're using a bit to represent true or false at one point in your program, you shouldn't use the true/false value stored in that bit to represent red or blue later.

Since most items you will be trying to model require more than two different values, single bit values aren't the most popular data type. However, since everything else consists of groups of bits, bits will play an important role in your programs. Of course, there are several data types that require two distinct values, so it would seem that bits are important by themselves. However, you will soon see that individual bits are difficult to manipulate, so we'll often use other data types to represent boolean values.


The Nibble

A nibble is a collection of bits on a 4-bit boundary. It wouldn't be a particularly interesting data structure except for two items: BCD (binary coded decimal) numbers and hexadecimal (base 16) numbers. It takes four bits to represent a single BCD or hexadecimal digit.

With a nibble, we can represent up to 16 distinct values. In the case of hexadecimal numbers, the values 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F are represented with four bits. BCD uses ten different digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and requires four bits. In fact, any sixteen distinct values can be represented with a nibble, but hexadecimal and BCD digits are the primary items we can represent with a single nibble.

The bits in a nibble are numbered from bit zero (b0) through three (b3) as follows:

b3 b2 b1 b0
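
Since a nibble holds exactly one hexadecimal digit, the mapping is easy to demonstrate in JavaScript:

  // A nibble's sixteen values 0-15 map one-to-one onto the
  // hexadecimal digits 0-F.
  parseInt("0101", 2).toString(16);  // "5"
  parseInt("1111", 2).toString(16);  // "f" (decimal 15)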

The Byte

Without question, the most important data structure used by the 80x86 microprocessor is the byte. This is true since the ASCII code is a 7-bit non-weighted binary code that is used on the byte boundary in most computers. A byte consists of eight bits and is the smallest addressable datum (data item) in the microprocessor.

Main memory and I/O addresses in the PC are all byte addresses. This means that the smallest item that can be individually accessed by an 80x86 program is an 8-bit value. To access anything smaller requires that you read the byte containing the data and mask out the unwanted bits.

The bits in a byte are numbered from bit zero (b0) through seven (b7) as follows:

b7 b6 b5 b4 b3 b2 b1 b0

Bit 0 is the low order bit, or least significant bit; bit 7 is the high order bit, or most significant bit, of the byte. We'll refer to all other bits by their number.

A byte also contains exactly two nibbles. Bits b0 through b3 comprise the low order nibble, and bits b4 through b7 form the high order nibble. Since a byte contains exactly two nibbles, byte values require two hexadecimal digits.
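
A sketch of splitting a byte into its two nibbles with a mask and a shift (lowNibble and highNibble are my names for the helpers):

  // The low nibble is the bottom four bits; the high nibble is
  // the top four bits shifted down.
  function lowNibble(b)  { return b & 0x0F; }
  function highNibble(b) { return (b >> 4) & 0x0F; }

  // For 1100 1010 (0xCA, decimal 202): highNibble gives 12 (0xC, 1100) and
  // lowNibble gives 10 (0xA, 1010), so the byte needs the two hex digits "CA".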

Since a byte contains eight bits, it can represent 2^8, or 256, different values. Generally, we'll use a byte to represent:

  1. unsigned numeric values in the range 0 => 255
  2. signed numbers in the range -128 => +127
  3. ASCII character codes
  4. other special data types requiring no more than 256 different values. Many data types have fewer than 256 items so eight bits is usually sufficient.

Since the PC is a byte addressable machine, it turns out to be more efficient to manipulate a whole byte than an individual bit or nibble. For this reason, most programmers use a whole byte to represent data types that require no more than 256 items, even if fewer than eight bits would suffice. For example, we'll often represent the boolean values true and false by 00000001 and 00000000 (respectively).

Probably the most important use for a byte is holding a character code. Characters typed at the keyboard, displayed on the screen, and printed on the printer all have numeric values. To allow it to communicate with the rest of the world, the IBM PC uses a variant of the ASCII character set. There are 128 defined codes in the ASCII character set. IBM uses the remaining 128 possible values for extended character codes including European characters, graphic symbols, Greek letters, and math symbols.
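
In JavaScript, character codes are read and written with the standard charCodeAt and String.fromCharCode methods (for codes below 128 these coincide with ASCII):

  // Map between characters and their numeric codes.
  "A".charCodeAt(0);        // 65, i.e. 0100 0001 in binary
  String.fromCharCode(97);  // "a"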


The Word

NOTE:
The boundary for a Word is defined as either 16 bits or the size of the data bus for the processor, and a Double Word is two Words. Therefore, a Word and a Double Word are not fixed sizes but vary from system to system depending on the processor. However, for our discussion, we will define a word as two bytes.

For the 8085 and 8086, a word is a group of 16 bits. We will number the bits in a word starting from bit zero (b0) through fifteen (b15) as follows:

b15 b14 b13 b12 b11 b10 b9 b8 b7 b6 b5 b4 b3 b2 b1 b0

Like the byte, bit 0 is the LSB and bit 15 is the MSB. When referencing the other bits in a word, use their bit position numbers.

Notice that a word contains exactly two bytes. Bits b0 through b7 form the low order byte, and bits b8 through b15 form the high order byte. Naturally, a word may be further broken down into four nibbles. Nibble zero is the low order nibble in the word and nibble three is the high order nibble of the word. The other two nibbles are "nibble one" and "nibble two".
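
Splitting a word into its two bytes follows the same mask-and-shift pattern as the nibble example (lowByte and highByte are hypothetical names):

  // The low order byte is the bottom eight bits; the high order byte
  // is the top eight bits shifted down.
  function lowByte(w)  { return w & 0xFF; }
  function highByte(w) { return (w >> 8) & 0xFF; }

  // For the word 0x1234: highByte gives 0x12 and lowByte gives 0x34.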

With 16 bits, you can represent 2^16 (65,536) different values. These could be unsigned numeric values in the range of 0 => 65,535, signed numeric values in the range of -32,768 => +32,767, or any other data type with no more than 65,536 values. The three major uses for words are:

  1. 16-bit integer data values
  2. 16-bit memory addresses
  3. any number system requiring 16 bits or less


The Double Word

A double word is exactly what its name implies: two words. Therefore, a double word quantity is 32 bits. Naturally, this double word can be divided into a high order word and a low order word, four bytes, or eight nibbles.
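
Likewise, a double word splits into a high order word and a low order word; a minimal JavaScript sketch (the helper names are mine; >>> keeps the shift unsigned):

  // The low order word is the bottom sixteen bits; the high order word
  // is the top sixteen bits shifted down without sign extension.
  function lowWord(d)  { return d & 0xFFFF; }
  function highWord(d) { return (d >>> 16) & 0xFFFF; }

  // For 0x12345678: highWord gives 0x1234 and lowWord gives 0x5678.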

Double words can represent all kinds of different data. A double word may be:

  1. an unsigned double word in the range 0 => 4,294,967,295
  2. a signed double word in the range -2,147,483,648 => +2,147,483,647
  3. a 32-bit floating point value
  4. any data that requires 32 bits or less


Working with Logarithms

A logarithm is used when working with exponentiation. We all learned that the formula X = Y^Z means take the value Y and multiply it by itself the number of times specified by Z. For example, 2^3 = 8 (2*2*2). The value Z is the exponent in the equation. As long as you know what the Y and Z values in the equation are, it is easy to calculate the value of X.

Unfortunately, you may not always know the values of Y and Z. How do you determine Z if you know the values of X and Y? This is when you use a logarithm. A logarithm is the exponent value that indicates the number of times the value Y needs to be multiplied by itself to get the value X. The value that is multiplied (Y) is considered to be the base of the formula.

There are two basic types of logarithms: common and natural. A common logarithm uses the value 10 as the base. Therefore, in the basic formula for exponentiation above, X = Y^Z, the value of Y is 10, and Z is the number of times that Y needs to be multiplied by itself to return the value indicated by X.

Natural logarithms use a base value of approximately 2.71828182845905, normally referred to as e. The mathematical notation e is Euler's constant, the base of natural logarithms, made common by the mathematician Leonhard Euler (born in Basel, Switzerland, April 15, 1707; died in Russia, September 18, 1783). VBScript provides two functions for working with logarithms: Exp() and Log(). Each of these functions assumes that the base value is e. The Log() function returns the natural logarithm of the supplied numeric expression, and the Exp() function raises e to the power of the supplied numeric expression. The corresponding methods in JavaScript are called Math.exp() and Math.log().

It is possible to use these VBScript functions or JavaScript methods with a different base value by using a simple formula. By dividing the natural log of the desired number (X) by the natural log of the desired base (Y), you can determine the desired logarithm value (Z) in VBScript: Z = Log(X) / Log(Y), or similarly in JavaScript: Z = Math.log(X) / Math.log(Y);.
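
For example, the formula recovers the base-2 logarithm used throughout this page:

  // log base 2 of 65,536: ln(65536) / ln(2) is 16 (up to floating
  // point rounding), since 2^16 = 65,536.
  var z = Math.log(65536) / Math.log(2);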

JavaScript notes:

The custom function Pow2(NumDbl), which raises the base 2 to a given exponent, and the custom function Log2(NumDbl), which calculates base-2 logarithms, can be seen in this page's source code. They use, respectively, the JavaScript Math.pow() method, which returns the base raised to the exponent power, and a formula based on the JavaScript Math.log() method, which returns the natural logarithm (base e) of a number.
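
The originals are only visible in the page's source; here is a minimal sketch of what they likely look like, given the description above (a reconstruction, not the page's exact code):

  // Pow2: 2 raised to the given exponent, via Math.pow().
  function Pow2(NumDbl) {
    return Math.pow(2, NumDbl);
  }

  // Log2: base-2 logarithm, via the natural-log formula above.
  function Log2(NumDbl) {
    return Math.log(NumDbl) / Math.log(2);
  }

  // Pow2(10) returns 1024; Log2(1024) returns 10 (up to rounding).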






