64-Bit Computing


The question of why we need 64-bit computing is often asked but rarely answered in a satisfactory manner. There are good reasons for the confusion surrounding the question. So first of all, let's look through the list of users who need 64-bit addressing and 64-bit calculations today:

- Users of CAD, design systems, and simulators need more than 4 GB of RAM. Although there are ways to work around this limitation (for example, Intel PAE), they cost performance. The Xeon processors support a 36-bit addressing mode in which they can address up to 64 GB of RAM: the RAM is divided into segments, and an address consists of a segment number and a location inside the segment. This approach causes almost 30% performance loss in operations with memory. Besides, programming is much simpler and more convenient with a flat memory model in a 64-bit address space, where, thanks to the large address space, every location has a simple address processed in one pass. Many design offices have long used quite expensive workstations built on RISC processors, where 64-bit addressing and large memory sizes have been available for years.
- Users of databases. Any big company has a huge database, and extending the maximum memory size and being able to address data in the database directly is very valuable. Although in special modes the 32-bit IA-32 architecture can address up to 64 GB of memory, a transition to a flat memory model in a 64-bit address space is much more advantageous in terms of speed and ease of programming.
- Scientific calculations. Memory size, a flat memory model, and the absence of limits on the size of processed data are the key factors here. Besides, some algorithms take a much simpler form in a 64-bit representation.
- Cryptography and security applications, which benefit greatly from 64-bit integer calculations.
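The address-space arithmetic behind these limits can be sketched in a few lines of Python (the function name here is illustrative, not from any real API):

```python
# Sketch: how many bytes an n-bit address space can reach.
def addressable_bytes(address_bits):
    """Each additional address bit doubles the reachable memory."""
    return 2 ** address_bits

GB = 2 ** 30
# Flat 32-bit addressing tops out at 4 GB.
print(addressable_bytes(32) // GB)   # 4
# PAE-style 36-bit physical addressing reaches 64 GB.
print(addressable_bytes(36) // GB)   # 64
```

This also shows why the jump to 64 bits is so large: each extra address bit doubles the ceiling, and 64-bit addressing adds 32 of them.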

In computer architecture, 64-bit integers, memory addresses, or other data units are those that are at most 64 bits (8 octets) wide. Also, 64-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size.

64-bit CPUs have existed in supercomputers since the 1960s and in RISC-based workstations and servers since the early 1990s. In 2003 they were introduced to the (previously 32-bit) mainstream personal computer arena, in the form of the x86-64 and 64-bit PowerPC processor architectures.

A CPU that is 64-bit internally might have external data buses or address buses with a different size, either larger or smaller; the term "64-bit" is often used to describe the size of these buses as well. For instance, many current machines with 32-bit processors use 64-bit buses (e.g. the original Pentium and later CPUs), and may occasionally be referred to as "64-bit" for this reason. Likewise, some 16-bit processors (for instance, the MC68000) were referred to as 16-/32-bit processors as they had 16-bit buses, but had some internal 32-bit capabilities. The term may also refer to the size of an instruction in the computer's instruction set or to any other item of data (e.g. 64-bit double-precision floating-point quantities are common). Without further qualification, "64-bit" computer architecture generally has integer registers that are 64 bits wide, which allows it to support (both internally and externally) 64-bit "chunks" of integer data.

Registers in a processor are generally divided into three groups: integer, floating point, and other. In all common general purpose processors, only the integer registers are capable of storing pointer values (that is, an address of some data in memory). The non-integer registers cannot be used to store pointers for the purpose of reading or writing to memory, and therefore cannot be used to bypass any memory restrictions imposed by the size of the integer registers.
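Because pointers must fit in the integer registers, the register width is visible from ordinary software as the size of a machine pointer. As a sketch, Python's standard ctypes module can report it (the value printed depends on the platform the snippet runs on):

```python
import ctypes

# Size of a generic data pointer, in bytes: 4 on a 32-bit
# platform, 8 on a 64-bit one.
pointer_bytes = ctypes.sizeof(ctypes.c_void_p)
print(pointer_bytes * 8, "bit pointers")
```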

Nearly all common general purpose processors (with the notable exception of most ARM and 32-bit MIPS implementations) have integrated floating point hardware, which may or may not use 64-bit registers to hold data for processing. For example, the x86 architecture includes the x87 floating-point instructions, which use 8 80-bit registers in a stack configuration; later revisions of x86 also include the SSE instructions, which use 8 128-bit wide registers. By contrast, the 64-bit Alpha family of processors defines 32 64-bit wide floating point registers in addition to its 32 64-bit wide integer registers.

Most CPUs are designed so that the contents of a single integer register can store the address (location) of any datum in the computer's virtual memory. Therefore, the total number of addresses in the virtual memory – the total amount of data the computer can keep in its working area – is determined by the width of these registers. Beginning in the 1960s with the IBM System/360, then (amongst many others) the DEC VAX minicomputer in the 1970s, and then with the Intel 80386 in the mid-1980s, a de facto consensus developed that 32 bits was a convenient register size. A 32-bit register meant that 2^32 addresses, or 4 GB of RAM, could be referenced. At the time these architectures were devised, 4 GB of memory was so far beyond the typical quantities (0.016 GB) available in installations that this was considered to be enough "headroom" for addressing. A 4 GB address space was also considered an appropriate size to work with for another important reason: 4 billion integers are enough to assign unique references to most physically countable things in applications like databases.

However, by the early 1990s, the continual reductions in the cost of memory led to installations with quantities of RAM approaching 4 GB, and the use of virtual memory spaces exceeding the 4-gigabyte ceiling became desirable for handling certain types of problems. In response, a number of companies began releasing new families of chips with 64-bit architectures, initially for supercomputers and high-end workstation and server machines. 64-bit computing has gradually drifted down to the personal computer desktop, with some models in Apple's Macintosh lines switching to PowerPC 970 processors (termed "G5" by Apple) in 2003, with 64-bit x86-64 processors arriving the same year (with the launch of the AMD Athlon 64), and with x86-64 processors becoming common in high-end PCs. The emergence of the 64-bit architecture effectively increases the memory ceiling to 2^64 addresses, equivalent to approximately 17.2 billion gigabytes, 16.8 million terabytes, or 16 exabytes of RAM. To put this in perspective, in the days when 4 MB of main memory was commonplace, the maximum memory ceiling of 2^32 addresses was about 1,000 times larger than typical memory configurations. Today, when 2 GB of main memory is common, the ceiling of 2^64 addresses is about ten billion times larger, i.e. ten million times more headroom than the 2^32 case.
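The headline figures can be checked with a little integer arithmetic, using binary units (GB = 2^30 bytes, TB = 2^40 bytes, EB = 2^60 bytes):

```python
total = 2 ** 64          # byte addresses reachable with 64-bit pointers
GiB = 2 ** 30
TiB = 2 ** 40
EiB = 2 ** 60
print(total // GiB)      # 17179869184 -> ~17.2 billion gigabytes
print(total // TiB)      # 16777216    -> ~16.8 million terabytes
print(total // EiB)      # 16          -> 16 exabytes
```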

Most 64-bit consumer PCs on the market today have an artificial limit on the amount of memory they can recognize, because physical constraints make it highly unlikely that one will need support for the full 16.8 million terabyte capacity. Apple's Mac Pro, for example, can be physically configured with up to 32 gigabytes of memory.

When reading about PCs and servers, you'll often see the CPU described by the number of bits (e.g., 32-bit or 64-bit). Here's a little info about what that means.

32-bit refers to the number of bits (the smallest unit of information on a machine) that can be processed or transmitted in parallel, or the number of bits used for a single element in a data format. When used in conjunction with a microprocessor, the term indicates the width of the registers, the special high-speed storage areas within the CPU. A 32-bit microprocessor can process data and memory addresses that are represented by 32 bits.

64-bit therefore refers to a processor with registers that store 64-bit numbers. A generalization would be to suggest that 64-bit architecture doubles the amount of data a CPU can process per clock cycle. Users would note a performance increase because a 64-bit CPU can handle more memory and larger files. One of the most attractive features of 64-bit processors is the amount of memory the system can support: 64-bit architecture allows systems to address up to 1 terabyte (1,000 GB) of memory. In today's 32-bit desktop systems, you can have up to 4 GB of RAM (provided your motherboard can handle that much RAM), which is split between the applications and the operating system (OS).
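The difference in element width is easy to see with Python's standard struct module, which packs fixed-size machine integers: a 32-bit integer occupies 4 bytes, a 64-bit one 8.

```python
import struct

# 'i' is a 32-bit signed integer, 'q' a 64-bit one
# (standard sizes, forced by the '<' prefix).
print(struct.calcsize('<i'))   # 4
print(struct.calcsize('<q'))   # 8
# The highest address a 32-bit register can hold:
print(2 ** 32 - 1)             # 4294967295, i.e. the 4 GB ceiling
```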

The majority of desktop computers today don't even have 4 GB of memory installed, and most small-business and home desktop software doesn't require that much memory either. As more complex software and 3D games become available, however, we could actually see this become a limitation, but for the average home user that is very far down the road indeed.

Unfortunately, most benefits of a 64-bit CPU will go unnoticed without the key components of a 64-bit operating system and 64-bit software and drivers that are able to take advantage of 64-bit processor features. Additionally, for the average home computer user, 32 bits is more than adequate computing power.

When making the transition from 32-bit to 64-bit desktop PCs, users won't actually see Web browsers and word processing programs run faster. Benefits of 64-bit processors will be seen with more demanding applications such as video encoding, scientific research, and searching massive databases – tasks where being able to load massive amounts of data into the system's memory is required.

While talk of 64-bit architecture may make one think this is a new technology, 64-bit computing has been used over the past ten years in supercomputing and database management systems. Many companies and organizations with the need to access huge amounts of data have already made the transition to 64-bit servers, since a 64-bit server can support a greater number of larger files and can effectively load large enterprise databases into memory, allowing for faster searches and data retrieval. Additionally, a 64-bit server means organizations can support more simultaneous users on each server, potentially removing the need for extra hardware, as one 64-bit server could replace several 32-bit servers on a network.

It is in the scientific and data management industries where the 4 GB memory limitation of a 32-bit system has been reached and the need for 64-bit processing becomes apparent. Some of the major vendors in the database management systems business, such as Oracle and Microsoft with SQL Server, to name just two, offer 64-bit versions of their database management systems.

While 64-bit servers were once used only by those organizations with massive amounts of data and big budgets, we do see in the near future 64-bit enabled systems hitting the mainstream market. It is only a matter of time until 64-bit software and retail OS packages become available thereby making 64-bit computing an attractive solution for business and home computing needs.

The essence of the move to 64-bit computing is a set of extensions to the x86 instruction set pioneered by AMD and now known as AMD64. During development, they were sensibly called x86-64, but AMD decided to rename them to AMD64, probably for marketing reasons. In fact, AMD64 is also the official name of AMD's K8 microarchitecture, just to keep things confusing. When Intel decided to play ball and make its chips compatible with the AMD64 extensions, there was little chance they would advertise their processors "now with AMD64 compatibility!" Heart attacks all around in the boardroom. And so EM64T, Intel's carbon copy of AMD64 renamed to Intel Extended Memory 64 Technology, was born.

The difference in names obscures a distinct lack of difference in functionality. Code compiled for AMD64 will run on a processor with EM64T and vice versa. They are, for our purposes, the same thing.

Whatever you call 'em, 64-bit extensions are increasingly common in newer x86-compatible processors. Right now, all Athlon 64 and Opteron processors have x86-64 capability, as do Intel's Pentium 4 600 series processors and newer Xeons. Intel has pledged to bring 64-bit capability throughout its desktop CPU line, right down into the Celeron realm. AMD hasn't committed to bringing AMD64 extensions to its Sempron lineup, but one would think they'd have to once the Celeron makes the move.

For some time now, various flavors of Linux compiled for 64-bit processors have been available, but Microsoft's version of Windows for x86-64 is still in beta. That's about to change, at long last, in April. Windows XP Professional x64 Edition, as it's called, is finally upon us, as are server versions of Windows with 64-bit support. (You'll want to note that these operating systems are distinct from Windows XP 64-bit Edition, intended for Intel Itanium processors, which is a whole different ball of wax.) Windows x64 is currently available to the public as a Release Candidate 2, and judging by our experience with it, it's nearly ready to roll. Once the Windows XP x64 Edition hits the stores, I expect that we'll see the 64-bit marketing push begin in earnest, and folks will want to know more about what 64-bit computing really means for them.

The immediate impact, in a positive sense, isn't much at all. Windows x64 can run current 32-bit applications transparently, with few perceptible performance differences, via a facility Microsoft has dubbed WOW64, for Windows on Windows 64-bit. WOW64 allows 32-bit programs to execute normally on a 64-bit OS. Using Windows XP Pro x64 is very much like using the 32-bit version of Windows XP Pro, with the same basic look and feel. Generally, things just work as they should.
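A quick way to see whether a given process is itself running as 32-bit or 64-bit code (for instance, a Python interpreter running under WOW64 on 64-bit Windows) is to check the interpreter's native integer range; this sketch uses only the standard library:

```python
import sys

# sys.maxsize is 2**31 - 1 in a 32-bit build of Python
# and 2**63 - 1 in a 64-bit build.
is_64bit = sys.maxsize > 2 ** 32
print("64-bit process" if is_64bit else "32-bit process")
```

A 32-bit build reports itself as 32-bit even on a 64-bit OS, which is exactly the distinction WOW64 maintains.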

There are differences, though. Device drivers, in particular, must be recompiled for Windows x64. The 32-bit versions won't work. In many cases, Windows x64 ships with drivers for existing hardware. We were able to test on the Intel 925X and nForce4 platforms without any additional chipset drivers, for example. In other cases, we'll have to rely on hardware vendors to do the right thing and release 64-bit drivers for their products. Both RealTek and NVIDIA, for instance, supply 64-bit versions of their audio and video drivers, respectively, that share version numbers and feature sets with the 32-bit equivalents, and we were able to use them in our testing. ATI has a 64-bit beta version of its Catalyst video drivers available, as well, but not all hardware makers are so on the ball.

Some other types of programs won't make the transition to Windows x64 seamlessly, either. Microsoft ships WinXP x64 with two versions of Internet Explorer, a 32-bit version and a 64-bit version. The 32-bit version is the OS default because nearly all ActiveX controls and the like are 32-bit code, and where would we be if we couldn't execute the full range of spyware available to us? Similarly, some system-level utilities and programs that do black magic with direct hardware access are likely to break in the 64-bit version of Windows. There will no doubt be teething pains and patches required for certain types of programs, despite Microsoft's best efforts.

Of course, many applications will be recompiled as native 64-bit programs as time passes, and those 64-bit binaries will only be compatible with 64-bit processors and operating systems. Those applications should benefit in several ways from making the transition.

Microsoft 64-Bit

Today, 64-bit processors have become the standard for systems ranging from the most scalable servers to desktop PCs. The way to take full advantage of these systems is with 64-bit editions of Microsoft Windows products.

The 64-bit systems offer direct access to more virtual and physical memory than 32-bit systems and process more data per clock cycle, enabling more scalable, higher performing computing solutions. There are two 64-bit Windows platforms: x64-based and Itanium-based.

x64 solutions are the direct descendants of x86 32-bit products, and are the natural choice for most server application deployments—small or large. Itanium-based systems offer alternative system designs and a processor architecture best suited to extremely large database and custom application solutions.