- Worldstart's Tech Tips And Computer Help - http://www.worldstart.com -

32-Bit and 64-Bit Explained Part One


In response to multiple reader requests, we decided to attempt to explain (and, in my case, to understand) 32-bit and 64-bit operating systems. In this first segment of a three-part series, we'll define some of the basic terms. The next segment will review speed, memory, hardware, and software. In the third installment, we'll show how to find out which system you're running and, if it's a 32-bit system, whether an upgrade is possible and practical. If this information affects you the same way it did me, it will be helpful, confusing, a cure for insomnia, or all three.

In one context, 32-bit and 64-bit refer to how a CPU (computer processor) handles information. The terms can also indicate the number of bits that make up a single data element (for example, a pixel in an image). In that case, when dealing with resource-hogging data like images, audio, or video, a 64-bit system has a distinct advantage. When writing emails or text documents, however, the benefits of 64-bit are less apparent.
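To make the pixel example concrete, here's a minimal Python sketch. The channel values and layout are illustrative (many image formats store pixels this way, but not all); it simply shows how four 8-bit color channels can pack into one 32-bit data element:

```python
# One common pixel format uses 32 bits: 8 bits each for
# red, green, blue, and alpha (transparency).
red, green, blue, alpha = 255, 128, 64, 255

# Shift each 8-bit channel into place and combine them
# into a single 32-bit value.
pixel = (red << 24) | (green << 16) | (blue << 8) | alpha

print(hex(pixel))                # 0xff8040ff
print(pixel.bit_length() <= 32)  # True -- it fits in 32 bits
```

A 64-bit pixel format would simply allot more bits per channel, which is why image and video work benefits from wider data elements.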

What is a bit?

A bit (short for binary digit) is the smallest unit of digital information, represented by either 0 or 1. Arranging bits in sequence creates the binary language that processing chips understand, and CPUs are identified by the width of the sequences they can process (32-bit or 64-bit). Eight consecutive bits make up a byte (short for binary term). Large numbers of bytes are then combined to create kilobytes, megabytes, gigabytes, terabytes, and so on. To further understand these terms, and to learn how to make conversions, please see the article, Gigabyte/Megabyte Conversions [1].
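The bit-to-byte arithmetic above can be sketched in a few lines of Python, using the traditional binary convention where each step up is a factor of 1,024 (some disk makers count in factors of 1,000 instead):

```python
# A byte is eight bits; larger units are built from bytes.
BITS_PER_BYTE = 8
KILOBYTE = 1024                # bytes in a kilobyte
MEGABYTE = 1024 * KILOBYTE     # bytes in a megabyte
GIGABYTE = 1024 * MEGABYTE     # bytes in a gigabyte

# A 32-bit value occupies 4 bytes; a 64-bit value occupies 8.
print(32 // BITS_PER_BYTE)     # 4
print(64 // BITS_PER_BYTE)     # 8
print(GIGABYTE // MEGABYTE)    # 1024 megabytes per gigabyte
```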

Not confusing enough?

The terms 32-bit and 64-bit indicate the width of the registers, which are small storage areas inside the processor. A register can hold either the address of the location in the computer's memory where data is stored, or the data itself. All computer data is processed through these registers.
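Register width matters most for addresses: a register can only point to as many memory locations as its bits allow. This rough Python sketch shows why 32-bit systems top out around 4 gigabytes of usable memory while 64-bit systems effectively remove that ceiling (real systems reserve some of this range for other purposes, so the numbers are theoretical maximums):

```python
# The width of an address register limits how many distinct
# memory locations (addresses) the processor can reach.
addresses_32 = 2 ** 32   # addresses a 32-bit register can hold
addresses_64 = 2 ** 64   # addresses a 64-bit register can hold

print(addresses_32 // (1024 ** 3))  # 4 -> about 4 gigabytes
print(addresses_64 // (1024 ** 4))  # 16777216 -> millions of terabytes
```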

Each instruction (the most basic computer command) can process as many bits as the register is wide. So a 64-bit machine processes 64 bits of data with each instruction, and a 32-bit machine processes 32 bits. While it would seem that a 64-bit processor would naturally be faster, actual processing speed depends on the number of instructions executed per cycle (the fundamental unit of time measurement in a device), so that may not always be the case.
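As a rough illustration of what register width buys you (this is simple arithmetic, not a model of any particular CPU), here is the largest unsigned whole number a register of each width can hold in one piece. A 32-bit machine must split anything larger across multiple instructions, while a 64-bit machine handles it in one:

```python
# Largest unsigned value a register of the given width can hold:
# every bit set to 1, which is 2 to the power of the width, minus 1.
def max_value(bits):
    return 2 ** bits - 1

print(max_value(32))  # 4294967295
print(max_value(64))  # 18446744073709551615
```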

It's the combination of hardware and software elements making up the computer's architecture that determines processing speed. This will be discussed in part two of this series, where we'll take a more in-depth look at processors and memory, and at how hardware and software interact to improve (or, if not correctly balanced, reduce) overall performance.