Jumpers & Transfer Modes - Helpful and Thorough Information
Hard drive Jumper Settings and Hard drive
Transfer Modes Information
Hard Disk Basics
Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a
code name used for a popular IBM product). They later became known as "hard disks" to distinguish them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as opposed to the flexible plastic
film found in tapes and floppies.
At the simplest level, a hard disk is not that different from a cassette tape. Both hard disks and cassette tapes use the same magnetic recording techniques described in How Tape Recorders
Work. Hard disks and cassette tapes also share the major benefits of magnetic storage -- the magnetic medium can be easily erased and rewritten, and it will "remember" the magnetic flux patterns stored onto the medium for many years.
Choosing between NTFS, FAT, or FAT32
You can choose between three file systems for disk partitions on a computer running Windows 2000 Server: NTFS, FAT, and FAT32. NTFS is the recommended file system. FAT and FAT32 are similar to each other, except that FAT32 is designed for larger disks than FAT. (The file system that works most easily with large disks is NTFS.) This topic provides information to help you compare the file systems.
NTFS has always been a more powerful file system than FAT and FAT32. Windows 2000 Server includes a new version of NTFS, with support for a variety
of features including Active Directory, which is needed for domains, user accounts, and other important security features.
The Setup program makes it easy to convert your partition to the new version of NTFS, even if it used FAT or FAT32 before. This kind of conversion
keeps your files intact (unlike formatting a partition). If you do not need to keep your files intact and you have a FAT or FAT32 partition, it is recommended that you format the partition with NTFS rather than converting from FAT or FAT32. Formatting a partition erases all data on the partition, but a partition that is formatted with NTFS rather than converted from FAT or FAT32 will have less fragmentation and better performance.
However, it is still advantageous to use NTFS, regardless of whether the partition was formatted with NTFS or converted. A partition can also be
converted after Setup by using Convert.exe. For more information about Convert.exe, after completing Setup, click Start, click Run, type cmd, and then press ENTER. In the command window, type help
convert and then press ENTER.
You can use important features such as Active Directory and domain-based security only by choosing NTFS as your file system.
There is one situation in which you might want to choose FAT or FAT32 as your file system. If it is necessary to have a computer that will sometimes
run an earlier operating system and sometimes run Windows 2000, you will need to have a FAT or FAT32 partition as the primary (or startup) partition on the hard disk. This is because earlier operating systems, with one
exception, can't access a partition if it uses the latest version of NTFS.
The one exception is Windows NT version 4.0 with Service Pack 4 or later, which has access to partitions with the latest version of NTFS, but with
some limitations. Windows NT 4.0 cannot access files that have been stored using NTFS features that did not exist when Windows NT 4.0 was released.
For anything other than a situation with multiple operating systems, however, the recommended file system is NTFS.
The following table describes the compatibility of each file system with various operating systems.

NTFS: A computer running Windows 2000 can access files on an NTFS partition. A computer running Windows NT 4.0 with Service Pack 4 or later might be able to access some files. Other operating systems allow no access.
FAT: Access is available through MS-DOS, all versions of Windows, Windows NT, Windows 2000, and OS/2.
FAT32: Access is available only through Windows 95 OSR2, Windows 98, and Windows 2000.

The following table compares disk and file sizes possible with each file system.

NTFS: Recommended minimum volume size is approximately 10 MB. Recommended practical maximum for volumes is 2 TB (terabytes); much larger sizes are possible. Cannot be used on floppy disks. File size is limited only by the size of the volume.
FAT: Volumes from floppy disk size up to 4 GB. Does not support domains. Maximum file size is 2 GB.
FAT32: Volumes from 512 MB to 2 TB (in Windows 2000, you can format a FAT32 volume only up to 32 GB). Does not support domains. Maximum file size is 4 GB.
Programmed I/O (PIO) Modes
The oldest method of transferring data over the IDE/ATA interface is through the use of programmed I/O. This is a technique whereby the system CPU and support hardware directly control
the transfer of data between the system and the hard disk. There are several different speeds of programmed I/O, which are of course called programmed I/O modes, or more commonly, PIO modes.
Through the mid-1990s, programmed I/O was the only way that most systems ever accessed IDE/ATA hard disks. Three lower-speed modes were defined as part of the
original ATA standards document; two more were added as part of ATA-2 as well as part of several unofficial standards. The table
below shows the five different PIO modes, along with the cycle time for each transfer and the corresponding throughput of the PIO mode:
A few things about this table bear mention. First of all, the PIO modes are defined in terms of their cycle time, representing how many nanoseconds it takes for each transfer to occur.
The maximum transfer rate is the reciprocal of the cycle time, doubled because the IDE/ATA interface is two bytes (16 bits) wide. Also, conspicuous by its absence from the table above is the so-called "PIO mode 5", which does not
exist and was never implemented in any IDE/ATA hard disks. Apparently, at one point some discussion occurred about creating a faster PIO mode, which was tentatively called "PIO mode 5". This mode was to support a transfer rate of
22.2 MB/s, but it was never implemented (probably because the much faster 33 MB/s Ultra DMA mode 2 was on the horizon). Some motherboard manufacturers made a point of providing early support for
this proposed mode in their BIOS setup programs, so you may occasionally see it mentioned.
Obviously, faster modes are better, because they mean a higher theoretical burst transfer rate over the interface. This transfer rate represents the external data transfer rate for
the drive. Remember that this is the speed of the interface and not necessarily the sustained transfer rate of the drive itself, which is almost always slower (and should be). Of course today all new drives have sustained transfer
rates well in excess of what even the fastest PIO mode can handle, which is one reason why PIO has fallen out of favor. Also worth mention is that very old systems using ISA hard disk controllers cannot use even PIO modes 3 or 4,
because their transfer rate exceeds the capacity of the ISA bus!
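The relationship between cycle time and maximum transfer rate described above is simple arithmetic: each cycle moves two bytes, so the rate is 2 bytes divided by the cycle time. A quick sketch in Python (the cycle-time figures are the standard published PIO values):

```python
# Maximum PIO transfer rate = (2 bytes per transfer) / (cycle time)
# Cycle times in nanoseconds for PIO modes 0-4.
pio_cycle_ns = {0: 600, 1: 383, 2: 240, 3: 180, 4: 120}

for mode, ns in pio_cycle_ns.items():
    mb_per_s = 2 / (ns * 1e-9) / 1e6  # two bytes per cycle, rate in MB/s
    print(f"PIO mode {mode}: {ns} ns cycle -> {mb_per_s:.1f} MB/s")
```

For example, PIO mode 4's 120 ns cycle works out to 2 bytes / 120 ns = 16.7 MB/s, matching the table below.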
As I mentioned, programmed I/O is performed by the system CPU; the system processor is responsible for executing the instructions that transfer the data to and from the drive, using
special I/O locations. This technique works fine for slow devices like keyboards and modems, but for performance components like hard disks it causes performance issues. Not only does PIO involve a lot of wasteful overhead, the CPU is "distracted" from its ordinary work whenever a hard disk read or write is needed. This means that PIO is really suited only to lower-performance applications and single tasking. It also means that the more data the system must transfer, the more the CPU gets bogged down. As hard disk transfer rates continued to increase, the load on the CPU would only have continued to grow. This is the other key reason why PIO modes are no longer used on new systems, having
been replaced by DMA modes, and then later, Ultra DMA.
Each IDE channel supports the use of two devices, designated as master and slave. Modern systems allow the use of master and slave devices running at different PIO modes on the same
channel; this is called independent device timing and is a function of the system chipset and BIOS. When this feature is not supported, both devices may be limited to the slower of the two devices' maximum PIO mode, but this hasn't
been a big issue since the mid-to-late 1990s.
PIO modes do not require any special drivers under normal circumstances; support for them is built into the system BIOS. This universal support, along with their conceptual simplicity,
is why they were traditionally the default way that most drives are used. Today, however, PIO is just not up to handling modern drives, which use Ultra DMA to keep the load on the CPU down and to allow access to Ultra DMA's much higher
performance. Support for PIO modes is still universal on almost all systems and drives made since the mid-1990s, for backwards compatibility. It is used, for example, as a "last resort" when driver or software issues cause problems
with Ultra DMA accesses.
PIO Mode    Cycle Time (nanoseconds)    Maximum Transfer Rate (MB/s)
0           600                         3.3
1           383                         5.2
2           240                         8.3
3           180                         11.1
4           120                         16.7
The first formal standard defining the AT Attachment interface was submitted to ANSI for approval in 1990. It took a long time for this first ATA standard to be approved. Presumably,
it took so long because it was the first standard to define the interface, and therefore much debate and discussion probably took place during the approval process. It was finally published in 1994 as ANSI standard X3.221-1994, titled
AT Attachment Interface for Disk Drives. This standard is sometimes called ATA-1 to distinguish it from its successors.
The original IDE/ATA standard defines the following features and transfer modes:
"Plain" ATA does not include support for enhancements such as ATAPI support for non-hard-disk IDE/ATA devices, block mode transfers,
logical block addressing, Ultra DMA modes or other advanced features. Drives developed to meet this standard are no longer made, as the standard is old and obsolete. In fact, at the recommendation of the T13 Technical Committee, ATA-1
was withdrawn as an official ANSI standard in 1999. This is presumably due to its age, and the large number of replacement ATA standards already published by that time.
- Two Hard Disks: The specification calls for a single channel in a PC, shared by two devices that are configured as master and slave.
- PIO Modes: ATA includes support for PIO modes 0, 1 and 2.
- DMA Modes: ATA includes support for single word DMA modes 0, 1 and 2, and multiword DMA mode 0.
The original ATA standard defined features that were appropriate for early IDE/ATA hard disks. However, it was not well-suited to support the growing size and performance needs of a
newer breed of hard disks. These disks required faster transfer rates and support for enhanced features.
In an ideal world, the standards committee would have gotten the various hard disk manufacturers together to define a new standard to support the added features everyone wanted. Unfortunately,
several companies were impatient, and once again started the industry down the road to incompatible proprietary extensions to the original ATA standard. Seagate defined what it called "Fast ATA", an extension to regular ATA, and "Fast
ATA-2" soon followed. These extensions were also picked up and used by Quantum. Western Digital, meanwhile, created "Enhanced IDE" or "EIDE", a somewhat different ATA feature set expansion. All of this happened around 1994.
To try to once again correct the growing confusion being caused by all these unofficial standards,
the ATA interface committee created a new, official ATA-2 specification that essentially combines the features and attributes defined by the marketing programs created by Seagate, Quantum and Western Digital. This standard was published
in 1996 as ANSI standard X3.279-1996, AT Attachment Interface with Extensions.
ATA-2 was a significant enhancement of the original ATA standard. It defines the following improvements over the base ATA standard (with which it is backward compatible):
Unfortunately, even after consensus was reached on ATA-2, the old marketing terms continued to be used. Fortunately, all of the drives of this era have now passed into obsolescence, and the hard disk
companies are in much better agreement now on what terms should be used to describe the hard disk interface. Although the marketing people keep trying.
- Faster PIO Modes: ATA-2 adds the faster PIO modes 3 and 4 to those supported by ATA.
- Faster DMA Modes: ATA-2 adds multiword DMA modes 1 and 2 to the ATA modes.
- Block Transfers: ATA-2 adds commands to allow block transfers for improved performance.
- Logical Block Addressing (LBA): ATA-2 defines support (by the hard disk) for logical block addressing. Using LBA requires BIOS support on the other end of the interface as well.
- Improved "Identify Drive" Command: This command allows hard disks to respond to inquiries from software, with more accurate information about their geometry and other characteristics.
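To illustrate what logical block addressing replaces: under the older cylinder/head/sector (CHS) scheme, a sector is named by its physical coordinates, while LBA simply numbers sectors linearly. The standard CHS-to-LBA mapping can be sketched as follows (the geometry values in the example are hypothetical, chosen only for illustration):

```python
def chs_to_lba(c, h, s, heads_per_cyl, sectors_per_track):
    """Standard CHS -> LBA mapping; CHS sector numbers start at 1, LBA at 0."""
    return (c * heads_per_cyl + h) * sectors_per_track + (s - 1)

# Hypothetical geometry: 16 heads, 63 sectors per track.
print(chs_to_lba(0, 0, 1, 16, 63))   # first sector on the disk -> LBA 0
print(chs_to_lba(1, 0, 1, 16, 63))   # first sector of cylinder 1 -> LBA 1008
```

The BIOS support mentioned above is needed because something on the host side must perform (or bypass) exactly this kind of translation.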
The ATA-3 standard is a minor revision of ATA-2, which was published in 1997 as ANSI standard X3.298-1997, AT Attachment 3 Interface. It defines the following improvements compared to ATA-2 with
which it is backward compatible:
ATA-3 was approved rather quickly after ATA-2, while the market was still spinning from all the non-standard "ATA-2-like" interface names being tossed around. This, combined with the fact that ATA-3 introduced
no higher-performance transfer modes, caused it to be all but ignored in the marketplace. Hard disk manufacturers added features defined in the standard (such as SMART) to their drives, but didn't tend to use the "ATA-3" term itself
in their literature.
Note: You may see a so-called "PIO Mode 5" described in some places, with the claim that it was introduced
in ATA-3. This mode was suggested by some controller manufacturers but never approved and never implemented. It is not defined in any of the ATA standards and only exists in some BIOS setup programs...
See the discussion of PIO modes for more information.
Note: ATA-3 does not define any of the Ultra DMA modes; these were first defined with
ATA/ATAPI-4. ATA-3 is also not the same as "ATA-33", a slang term for the 33 MB/s first version of Ultra ATA, itself a slang term for the 33 MB/s
Ultra DMA transfer mode 2.
- Improved Reliability: ATA-3 improves the reliability of the higher-speed transfer modes, which can be an issue due to the low-performance standard cable used up to that point in IDE/ATA. (An
improved cable was defined as part of ATA/ATAPI-4.)
- Self-Monitoring Analysis and Reporting Technology (SMART): ATA-3 introduced this reliability feature.
- Security Feature: ATA-3 defined security mode, which allows devices to be protected with a password.
SFF-8020 / ATA Packet Interface (ATAPI)
Originally, the IDE/ATA interface was designed to work only with hard disks. CD-ROMs and tape drives used either proprietary interfaces (often implemented on sound cards), the floppy disk interface
(which is slow and cumbersome) or SCSI. In the early 1990s it became apparent that there would be enormous advantages to using the standard IDE/ATA interface to support devices other than hard disks, due to its high performance, relative
simplicity, and universality. The intention was not to replace SCSI of course, but rather to get rid of the proprietary interfaces (which nobody really likes) and the slow floppy interface for tape drives.
Unfortunately, because of how the ATA command structure works, it wasn't possible to simply put non-hard-disk devices on the IDE channel and expect them to work. Therefore, a special protocol was
developed called the AT Attachment Packet Interface or ATAPI. The ATAPI standard is used for devices like optical, tape and removable storage drives. It enables them to plug into the standard IDE cable used by IDE/ATA hard disks,
and be configured as master or slave, etc. just like a hard disk would be. When you see a CD-ROM or other non-hard-disk peripheral advertised as being an "IDE device" or working with IDE, it is really using the ATAPI protocol.
Internally, however, the ATAPI protocol is not identical to the standard ATA (ATA-2, etc.) command set used by hard disks at all. The name "packet interface" comes from the fact that commands to ATAPI
devices are sent in groups called packets. ATAPI in general is a much more complex interface than regular ATA, and in some ways resembles SCSI more than IDE in terms of its command set and operation. At the time it was created, SCSI
was the interface of choice for many CD-ROM and higher-end tape drives.
A special ATAPI driver is used to communicate with ATAPI devices. This driver must be loaded into memory before the device can be accessed (most newer operating systems support ATAPI internally and
in essence, load their own drivers for the interface). The actual transfers over the channel use regular PIO or DMA modes, just like hard disks, although support for the various modes differs much more widely by device than it does
for hard disks. For the most part, ATAPI devices will coexist with IDE/ATA devices and from the user's perspective, they behave as if they are regular IDE/ATA hard disks on the channel. Newer BIOSes will even allow booting from ATAPI devices.
The first standard that described ATAPI wasn't actually even developed by the people who maintain the ATA standards. It was defined by the Small Form Factor committee, an industry group that traditionally
defined standards for physical issues like PC cables and screw hole locations, but somehow got involved in storage interfacing. The first ATAPI standard document produced by this group was called SFF-8020 (later renamed INF-8020),
which is now quite old and obsolete. In the late 1990s, the T13 Technical Committee took over control of the ATAPI command set and protocol, combining it with ATA into the ATA/ATAPI-4 standard.
The next significant enhancement to the ATA standard after ATA-2 saw the ATA Packet Interface (ATAPI) feature set merged with the conventional ATA command set and protocols to create ATA/ATAPI-4.
This standard was published by ANSI in 1998 as NCITS 317-1998, AT Attachment with Packet Interface Extensions. Note the change to "NCITS" in the document number, from the "X3" used in earlier ATA standards.
Aside from combining ATA and ATAPI, this standard defined several other significant enhancements and changes:
Of course, the Ultra DMA modes were the most exciting part of this new standard. Ultra DMA modes 0 and 1 were never really implemented by hard disk manufacturers, but UDMA mode 2 made quite a splash,
as it doubled the throughput of the fastest transfer mode then available. Ultra DMA mode 2 was quickly dubbed "Ultra DMA/33", and drives conforming to ATA/ATAPI-4 are often called "Ultra ATA/33" drives, even though that term appears nowhere in the standard itself.
- Ultra DMA Modes: High-speed Ultra DMA modes 0, 1 and 2, defining transfer rates of 16.7, 25 and 33.3 MB/s were created.
- High-Performance IDE Cable: An improved, 80-conductor IDE cable was first defined in this standard. It was thought that the higher-speed Ultra DMA modes would require the use of this cable in order to eliminate interference caused by their higher speed. In the end, the use of this cable was left "optional" for these modes. (It became mandatory under the still faster UDMA modes defined in ATA/ATAPI-5.)
- Cyclical Redundancy Checking (CRC): This feature was added to ensure the integrity of data sent using the faster Ultra DMA modes. Read more about it here.
- Advanced Commands Defined: Special command queuing and overlapping protocols were defined.
- Command Removal: The command set was "cleaned up", with several older, obsolete commands removed.
Not content to rest on their laurels with the adoption of ATA/ATAPI-4, the T13 committee immediately began work on its next generation, ATA/ATAPI-5. This standard was published by ANSI in 2000 as
NCITS 340-2000, AT Attachment with Packet Interface - 5.
The changes defined in ATA/ATAPI-5 include:
Like ATA-3, not that many changes were made in ATA/ATAPI-5 (compared to ATA/ATAPI-4 and ATA-2, for example). Unlike ATA-3, the main change made here was a high-profile one: another doubling of the
throughput of the interface to 66.7 MB/second. Unsurprisingly, the same companies that called ATA/ATAPI-4 drives "Ultra ATA/33" labeled ATA/ATAPI-5 drives running Ultra DMA mode 4 as "Ultra ATA/66". During 1999 and early 2000, new
IDE/ATA drives conforming to this standard were the most common on the market.
- New Ultra DMA Modes: Higher-speed Ultra DMA modes 3 and 4, defining transfer rates of 44.4 and 66.7 MB/s were specified.
- Mandatory 80-Conductor IDE Cable Use: The improved 80-conductor IDE cable, first defined in ATA/ATAPI-4 for optional use, is made mandatory for UDMA modes 3 and 4. ATA/ATAPI-5 also defines a method by which a host system can detect whether an 80-conductor cable is in use, so it can determine whether or not to enable the higher-speed transfer modes.
- Miscellaneous Command Changes: A few interface commands were changed, and some old ones deleted.
At the time that I write this in late 2000, the T13 Technical Committee is working on the next version of
the ATA standard, ATA/ATAPI-6. It is likely that this standard will be completed in 2001 and published sometime later that year or early in 2002.
Since this standard is still in development, it is impossible to be sure exactly what features and changes it will include. One addition to the standard does seem almost certain: the new Ultra DMA
mode 5, which increases transfer throughput to 100 MB/s. Since this is already the standard on currently-shipping drives, I can't imagine it not making the next standard! Beyond that, only the T13 folks know, and at this point, perhaps
not even them. Aside from Ultra DMA mode 5, some of the rumored possible new features for the next standard include:
- LBA Address Size Expansion: Hard disk sizes are now approaching the maximum that can be represented under the traditional 28-bit LBA scheme; this is the 137 GB size barrier. Most people don't
know anything about this "size barrier of the future" now, but I predict that in 2001 or 2002 it's going to be a hot topic of conversation. To get around this limitation, an addressing mode will probably be included in ATA/ATAPI-6
that expands the address width from 28 bits to either 48 or 64 (either of which will keep us busy for quite a while...)
- Hard Disk Noise Reduction ("Acoustic Management"): Some hard disk companies are working on technologies to allow the mechanics of the hard disk to be modified under software control, letting
the user choose between higher performance or quieter operation. Commands to cover this feature may make the next standard, as they are mentioned in the current draft.
- Audio and Video Streaming: Some manufacturers have apparently suggested new commands related to multimedia streaming, but I don't know any more about them than that right now.
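The 137 GB figure in the LBA item above falls straight out of the arithmetic: 28 address bits and 512-byte sectors. A quick check, including what a 48-bit address width would buy:

```python
SECTOR_BYTES = 512

for bits in (28, 48):
    max_bytes = (2 ** bits) * SECTOR_BYTES
    print(f"{bits}-bit LBA: {max_bytes / 1e9:,.1f} GB addressable")

# 28-bit LBA tops out at 2^28 * 512 = 137,438,953,472 bytes, about 137.4 GB;
# widening to 48 bits raises the ceiling by a factor of 2^20 (about a million).
```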
- The PC Guide (http://www.PCGuide.com)
Site Version: 2.2.0 - Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.
SCSI History and Future:
SCSI stands for "Small Computer Systems Interface" or some such nonsense. Apple and some server manufacturers made it what it is. It has always managed to be a better interface for hard drives and CD-ROM drives than IDE and its various versions, even the new ATA66/100/133 (Ultra DMA 66/100/133), even though "IDE" keeps improving and staying pretty close in performance in many ways. SCSI has also managed to support a number of other devices besides hard drives and CD-ROM drives, unlike IDE. SCSI supports scanners and other peripherals (those that support it) and over the years has been THE best computer peripheral interface where speed is concerned. Nowadays FireWire (IEEE-1394), and now USB and USB 2.0, are attempting to wipe it out as a peripheral interface, and they will probably be successful. As a server and high-performance interface for storage (hard drives, RAID, etc.), SCSI is still way ahead of these newcomers and is likely to continue to be for some time.
SCSI is a "parallel" interface. That means it sends an entire "chunk" (byte) of information at a time, rather than sending things one "bit" at a time. This can give it great speed, but also tends to cause problems with the length of the cables involved. It also means a cable will have a lot of wires in it rather than a few, which is basically what causes the whole cable-length problem with SCSI. Cable length may be a problem with SCSI, but with "IDE" the length problem is REALLY bad: every IDE spec sets the maximum cable length at about 18 inches, while SCSI allows way longer runs, up to 12 meters nowadays. That's like 36 feet for us American non-metric types! IDE is also very limited as far as the number of devices per "channel". Two devices (hard drives, CD-ROMs) and that's it! Every "modern" IDE controller has two channels, so that's four devices, maximum. SCSI goes way past that. Depending on the SCSI type you can have 7, 15, or even more devices connected. This can be greatly reduced by which type of SCSI you have, which types of devices you have connected, and their cable lengths.
The main thing to remember now is that there are basically two types of SCSI out there for the average SCSI user: "single ended" (SE) and "Low Voltage Differential" (LVD). Everything before Ultra2 and Ultra160 SCSI (not Ultra SCSI-2 or Ultra SCSI-3, which came before Ultra2) is single ended, or "SE". These devices have slower speeds, and can possibly slow down an LVD (Ultra2/Ultra160) device when connected together with them. They can also severely limit the cable lengths allowed when connected with LVD (Ultra2/Ultra160) devices. Got it? I didn't think so... OK, so say you have a new groovy Ultra160 or even Ultra320 SCSI controller card connected to some amazingly fast Seagate Cheetah 15K (15,000 RPM for you complete non-geeks, that's a real fast freaking hard drive) drives, and you want to hook up a scanner or a Zip drive or maybe some CD burner to the same cable or the external connector. BAM! You could be slowing down the hard drives' "throughput" (slowing down their data transfer rate). Some SCSI cards do have the various connectors "segmented" or isolated from each other so that there will be no problem as long as you don't connect these slower devices to the same cable. RTFM! (read the "friendly" manual).
Another very misunderstood problem is the "High Byte Termination" problem.
When a wide SCSI bus or device must connect to a narrow SCSI bus or device, care should be taken to assure the proper termination of the high data byte. What's the high data byte? Well, "wide SCSI" means 16 bit or 2 "bytes" (8 bits
make a byte) SCSI. Narrow SCSI is 1 byte (8 bits) SCSI. Normally wide SCSI devices will work just fine using only the first byte, and therefore can be used on a narrow bus (cable). The problem is that if the second or "high byte"
just, sort of, disappears, then it will not be terminated properly and all sorts of problems can occur. Also, whereas the wide drives of the past (non-LVD) were perfectly happy to be directly connected to a narrow bus with no termination at the drive itself, it seems that most LVD (Ultra2 or Ultra160) drives require this high byte to be terminated at the drive.
Ultra DMA (UDMA) Modes
With the increase in performance of hard disks over the last few years, the use of programmed I/O modes became a hindrance to performance. As a result, focus was placed on the use of direct memory access (DMA) modes. In particular, bus mastering DMA on the PCI bus became mainstream due to its efficiency advantages.
Of course, hard disks get faster and faster, and the maximum speed of multiword DMA mode 2, 16.7 MB/s, quickly became insufficient for the fastest drives. However, the engineers who went to work to
speed up the interface discovered that this was no simple task. The IDE/ATA interface, and the flat ribbon cable it used, were designed for slow data transfer--about 5 MB/s. Increasing the speed of the interface (by reducing the cycle
time) caused all sorts of signaling problems related to interference. So instead of making the interface run faster, a different approach had to be taken: improving the efficiency of the interface itself. The result was the creation
of a new type of DMA transfer modes, which were called Ultra DMA modes.
The key technological advance introduced to IDE/ATA in Ultra DMA was double transition clocking. Before Ultra DMA, one transfer of data occurred on each clock cycle, triggered by the rising edge of
the interface clock (or "strobe"). With Ultra DMA, data is transferred on both the rising and falling edges of the clock. (For a complete description of clocked data transfer and double transition clocking, see this fundamentals section.)
Double transition clocking, along with some other minor changes made to the signaling technique to improve efficiency, allowed the data throughput of the interface to be doubled for any given clock speed.
In order to improve the integrity of this now faster interface, Ultra DMA also introduced the use of cyclical redundancy checking or CRC on the interface. The device sending data uses the CRC algorithm
to calculate redundant information from each block of data sent over the interface. This "CRC code" is sent along with the data. On the other end of the interface, the recipient of the data does the same CRC calculation and compares
its result to the code the sender delivered. If there is a mismatch, this means data was corrupted somehow and the block of data is resent. CRC is similar in concept and operation to the way error checking is done on the system memory.
If errors occur frequently, the system may determine that there are hardware issues and thus drop down to a slower Ultra DMA mode, or even disable Ultra DMA operation.
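The send/compare procedure described above can be sketched generically. The polynomial and seed below are common CRC-16 parameters chosen for illustration, not the exact values from the ATA standard; the point is the mechanism: both ends compute the same checksum over the block, and a mismatch flags the transfer for a resend.

```python
def crc16(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over a block of data (illustrative parameters)."""
    for byte in data:
        crc ^= byte << 8                 # fold the next byte into the register
        for _ in range(8):               # shift out one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

block = bytes(range(64))                 # a block of data to "send"
sent_code = crc16(block)                 # sender computes and transmits the CRC

received = bytearray(block)
received[10] ^= 0x04                     # a single corrupted bit on the cable
assert crc16(bytes(received)) != sent_code   # receiver detects the mismatch
assert crc16(block) == sent_code             # clean transfer: codes agree
```

Any single-bit error is guaranteed to change the CRC, which is exactly the kind of corruption a marginal cable tends to produce.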
The first implementation of Ultra DMA was specified in the ATA/ATAPI-4 standard and included three Ultra DMA modes, providing up to 33 MB/s of throughput. Several newer, faster Ultra DMA modes were
added in subsequent years. The following table shows all of the current Ultra DMA modes, along with their cycle times and maximum transfer rates:

Ultra DMA Mode | Cycle Time (nanoseconds) | Maximum Transfer Rate (MB/s)
0              | 240                      | 16.7
1              | 160                      | 25.0
2              | 120                      | 33.3
3              | 90                       | 44.4
4              | 60                       | 66.7
5              | 40                       | 100.0

The cycle time shows the speed of the interface clock; the clock's frequency is the reciprocal of this number. The maximum transfer rate is four times the reciprocal of the cycle time--double transition
clocking means each cycle has two transfers, and each transfer moves two bytes (16 bits). Only modes 2, 4 and 5 have ever been used in drives; I'm not sure why they even bothered with mode 0, perhaps for compatibility. Ultra DMA mode
5 is the latest, and is implemented in all currently-shipping drives. It is anticipated that it will be included in the forthcoming ATA/ATAPI-6 standard.
Note: In common parlance, drives that use Ultra DMA are often called "Ultra ATA/xx", where "xx" is the speed of the interface. So few people really talk about current drives being in "Ultra DMA mode
5"; they say they are "Ultra ATA/100".
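The rule just stated--four bytes per clock cycle thanks to double transition clocking--can be sanity-checked with a few lines of arithmetic (a sketch; the function name is mine):

```python
def udma_max_rate_mbps(cycle_time_ns: float) -> float:
    # Double transition clocking: 2 transfers per cycle (rising and
    # falling edges), 2 bytes (16 bits) per transfer = 4 bytes/cycle.
    bytes_per_cycle = 4
    cycles_per_second = 1e9 / cycle_time_ns        # clock frequency in Hz
    return bytes_per_cycle * cycles_per_second / 1e6   # decimal MB/s

print(round(udma_max_rate_mbps(120), 1))  # Ultra DMA mode 2 -> 33.3
print(round(udma_max_rate_mbps(40), 1))   # Ultra DMA mode 5 -> 100.0
```

Without double transition clocking, only two bytes would move per cycle, and each rate above would be cut in half.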
Double transition clocking is what allows Ultra DMA mode 2 to have a maximum transfer rate of 33.3 MB/s despite having a clock cycle time identical to "regular DMA" multiword mode 2, which has half
that maximum. Now, you may be asking yourself: if they had to go to double transition clocking to get to 33.3 MB/s, how did they get to 66 MB/s, and then 100 MB/s? Well, they did in fact speed up the interface after all. But the
use of double transition clocking let them do it while staying at half the speed they would otherwise have needed. Without double transition clocking, Ultra DMA mode 5 would have required a cycle time of 20 nanoseconds instead of 40, making
implementation much more difficult.
Even with the advantage of double transition clocking, going above 33 MB/s finally exceeded the capabilities of the old 40-conductor standard IDE cable. To use Ultra DMA modes over 2, a special 80-conductor
IDE cable is required. This cable uses the same 40 pins as the old cables, but adds 40 ground lines between the original 40 signals to separate those lines from each other and prevent interference and data corruption. I discuss the
80-conductor cable in much more detail here. (The 80-conductor cable was actually specified in ATA/ATAPI-4 along with the first Ultra DMA modes, but it was "optional" for modes 0, 1 and 2.)
Today, all modern systems that use IDE/ATA drives should be using one of the Ultra DMA modes. There are several specific requirements for running Ultra DMA:
- Hard Disk Support: The hard disk itself must support Ultra DMA. In addition, the appropriate Ultra DMA mode must be enabled on the drive.
- Controller Support: A controller capable of Ultra DMA transfers must be used. This can be either the interface controller built into the motherboard, or an add-in IDE/ATA interface card.
- BIOS/Operating System Support: The BIOS and/or operating system must support Ultra DMA transfers, and the hard disk must be set to operate in Ultra DMA mode in the operating system.
- 80-Conductor Cable: For Ultra DMA modes over 2, an 80-conductor cable must be used. If an 80-conductor cable is not detected by the system, 66 MB/s or 100 MB/s operation will be disabled. See
the discussion of the 80-conductor cable for more.
On new systems there are few issues with running Ultra DMA, because the hardware is all new and designed to run in Ultra DMA mode. With older systems, things are a bit more complex. In theory, new
drives should be backward compatible with older controllers, and putting an Ultra DMA drive in an older PC should cause it to automatically run in a slower mode, such as PIO mode 4. Unfortunately, certain motherboards don't function
well when an Ultra DMA drive is connected, and this may result in lockups or errors. A BIOS upgrade from the motherboard manufacturer is a good idea, if you are able to do one. Otherwise, you may need to use a special Ultra DMA software
utility (available from the drive manufacturer) to tell the hard disk not to try to run in Ultra DMA mode. The same utility can be used to enable Ultra DMA mode on a drive that is set not to use it. You should use the utility specific
to whatever make of drive you have.
16-Bit and 32-Bit Access
One of the options on some chipsets and BIOS is so-called 32-bit access or 32-bit transfers. In fact, the IDE/ATA interface always does transfers 16 bits at a time, reflecting its name ("AT attachment"--the
original AT used a 16-bit data bus and a 16-bit ISA I/O bus). For this reason, the name "32-bit" access or transfer is somewhat of a misnomer.
Since modern PCs use 32-bit I/O buses such as the PCI bus, doing 16-bit transfers is a waste of half of the potential bandwidth of the bus. Enabling 32-bit access in the BIOS (if available) causes
the PCI hard disk interface controller to bundle together two 16-bit chunks of data from the drive into a 32-bit group, which is then transmitted to the processor or memory. This results in a small performance increase.
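Conceptually, the bundling is nothing more than packing two successive 16-bit words into one 32-bit value before it crosses the 32-bit bus. A toy sketch of the idea (the names are mine, and real controllers of course do this in hardware, not software):

```python
def bundle_words(low_word: int, high_word: int) -> int:
    """Pack two 16-bit IDE transfers into one 32-bit value, so a
    single transaction on the 32-bit PCI bus carries both."""
    assert 0 <= low_word <= 0xFFFF and 0 <= high_word <= 0xFFFF
    return (high_word << 16) | low_word

# Two successive 16-bit chunks from the drive ...
first, second = 0x5678, 0x1234
# ... become one 32-bit group on the bus:
assert bundle_words(first, second) == 0x12345678
```

Since one 32-bit bus transaction now does the work of two 16-bit ones, the otherwise-wasted upper half of the bus is put to use, which is where the small performance gain comes from.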
Note: Some BIOS (or add-in controller cards) may automatically and permanently enable this feature,
and therefore not bother to mention it in the BIOS setup program.
Note: This has nothing at all to do with the very similar-sounding "32-bit
disk access" and "32-bit file access" options within Windows 3.x. Those have more to do with how Windows and its drivers function than with the hard disk itself.
Block Mode
On some systems you will find an option in the system BIOS called block mode. Block mode is a performance enhancement that allows the grouping of multiple read or write commands over the IDE/ATA interface,
so that they can be handled on a single interrupt.
Interrupts are used to signal when data is ready to be transferred from the hard disk; each one, well, interrupts other work being done by the processor. Newer drives, when used with a supporting
BIOS, allow you to transfer as many as 16 or 32 sectors with a single interrupt. Since the processor is interrupted far less frequently, and more data moves with less command overhead, performance improves noticeably
compared to transferring data one sector at a time.
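The saving is easy to quantify. Reading a 1 MB file (2,048 sectors of 512 bytes) one sector per interrupt costs 2,048 interrupts; with a 16-sector block size it takes only 128. A quick sketch (the function name is mine):

```python
import math

def interrupts_needed(total_sectors: int, sectors_per_interrupt: int) -> int:
    # Each interrupt services one block of up to sectors_per_interrupt
    # sectors; the last block may be partial, hence the ceiling.
    return math.ceil(total_sectors / sectors_per_interrupt)

sectors = (1 * 1024 * 1024) // 512            # 2048 sectors in 1 MB
print(interrupts_needed(sectors, 1))          # no block mode: 2048
print(interrupts_needed(sectors, 16))         # 16-sector blocks: 128
```

A sixteenfold reduction in interrupts means the processor spends far less time switching contexts to service the drive, which is exactly the efficiency the BIOS option is buying you.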
Single, Master and Slave Drives and Jumpering
Each IDE/ATA channel can support either one or two devices. IDE/ATA devices of course each contain their own integrated controllers, and so in order to maintain order on the channel, it is necessary
to have some way of differentiating between the two devices. This is done by giving each device a designation as either master or slave, and then having the controller address commands and data to either one or the other.
The drive that is the target of the command responds to it, and the other one ignores the command, remaining silent.
Note that despite the hierarchical-sounding names of "master" and "slave", the master drive does not have any special status compared to the slave one; they are really equals in most respects. The
slave drive doesn't rely on the master drive for its operation or anything like that, despite the names (which are poorly-chosen--in the standards
the master is usually just "drive 0" and the slave "drive 1"). The only practical difference between master and slave is that the PC considers the master "first" and the slave "second" in general terms. For example, DOS/Windows will
assign drive letters to the master drive before the slave drive. If you have a master and slave on the primary IDE channel and each has only one regular, primary partition, the master will be "C:" and the slave "D:". This means that
the master drive (on the primary channel) is the one that is booted, and not the slave.
Devices are designated as master or slave using jumpers, small connectors that fit over pairs of pins to program the drive through hardware. Each manufacturer uses a different combination of jumpers
for specifying whether its drive is master or slave on the channel, though they are all similar. Some manufacturers put this information right on the top label of the drive itself, while many do not; it sometimes takes some hunting
around to find where the jumper pins are on the drive even once you know how the jumpers are supposed to go. The manufacturers are better about this now than they have been in the past, and jumpering information is always available
in the manual of the hard disk, or by checking the manufacturer's web site and searching for the model number.
ATAPI devices such as optical, Zip and tape drives are jumpered in pretty much the same way as hard disks. They have the advantage of often having
their jumpers much more clearly labeled than their hard disk counterparts. Most optical drives, for example, have three jumper blocks at the back, labeled "MA" (master), "SL" (slave) or "CS" (cable select).
If you are using two drives on a channel, it is important to ensure that they are jumpered correctly. Making both drives the master, or both the slave, will likely result in a very confused system.
Note that in a standard IDE setup, it makes no difference which connector on the cable is used, because it is the jumpers that control master and slave, not the cable. This does not apply when
cable select is being used, however. Also, there can be electrical signaling issues if one connects a single drive to only the middle connector on a cable, leaving the end connector unattached. In particular, the use of
Ultra DMA is not supported in such a configuration.
As long as one drive is jumpered as master and the other as slave, any two IDE/ATA/ATAPI devices should work together on a single channel. Unfortunately, some older hard disks will fail to
work properly when they are placed on a channel with another manufacturer's disk. One of the reasons why drives don't always "play nicely together" has to do with the Drive Active / Signal Present (/DASP) signal.
This is an IDE/ATA interface signal carried on pin #39, which is used for two functions: indicating that a drive is active (during operation), and also indicating that a slave drive is present on the channel (at startup). Some early
drives don't handle this signal properly, a residue of poor adherence to ATA standards many years ago. If an older slave drive won't work with a newer master, see if your master drive has an "SP" (slave present) jumper, and if so,
enable it. This may allow the slave drive to be detected.
Drive compatibility problems can be extremely frustrating, and beyond the suggestion above, there usually is no solution other than separating the drives onto different channels. Sometimes brand
X won't work as a slave when brand Y is the master, but X will work as a master when Y is the slave! Modern drives adhere to the formal ATA standards, so as time goes on and more of these older "problem" drives fall out of the
market, all of this becomes less and less of a concern. Any hard disk bought in the last five years should work just fine with any other of the same vintage or newer.
When using only a single drive on a channel, there are some considerations to be aware of. Some hard disks have only a jumper for master or slave; when the drive is being used solo on a channel it
should be set to master. Other manufacturers, notably Western Digital, actually have three settings for their drives: master, slave, and single. The last setting is intended for use when the drive is alone on the channel. This
type of disk should be set to single, and not master, when being used alone.
Also, a single device on an IDE channel "officially" should not be jumpered as a slave. In practice, this will often work despite being formally "illegal". Many ATAPI drives come jumpered by default
as slave--because they are often made slaves to a hard disk's master on the primary IDE channel, this saves setup time. However, for performance reasons they are sometimes put on the secondary channel, and often the system assemblers
don't bother to change the jumpers. It will work, but I don't recommend it; if nothing else, it's confusing to find a slave with no master when you or someone else goes back into the box a year or two later to upgrade.
For performance reasons, it is better to avoid mixing slower and faster devices on the same channel. If you are going to share a channel between a hard disk and an ATAPI device, it is generally a
good idea to make the hard disk the master. In some situations there can be problems slaving a hard disk to an optical drive; it will usually work but it is non-standard, and since there is no advantage to making the ATAPI device
the master, the configuration is best avoided.
There are many more performance considerations to take into account when deciding how to jumper your IDE devices, if you are using several different ones on more than one channel. Since only one of
the master and slave can use any channel at a time, there are sometimes advantages to using more than one IDE/ATA channel even if not strictly necessary based on the number of devices you are trying to support. There can also be issues
with using a drive that has support for a fast transfer mode like Ultra DMA with older devices that don't support these faster modes.
Official IDE/ATA Standards and Feature Sets
There's an old joke that says the great thing about standards is that there are so many to choose from. Anyone who has tried to understand hard disk interface standards knows exactly what this means.
To help you comprehend what can be a very confusing subject, I have spent considerable time researching all of the issues related to IDE/ATA standards, and have presented them here in what I hope is fairly plain English.
Standards probably get just a bit too much flak. Despite the confusion that standards can cause, that's nothing compared to the confusion caused by a lack of standards. So it was in the early days
of the IDE/ATA interface. During the late 1980s, IDE/ATA grew in popularity, but there were no standards in place to ensure that the interface decisions made by different hardware companies were compatible with each other. Several
manufacturers succumbed to the temptation to make slight "improvements" to the interface. As a result, many early IDE/ATA drives exhibited compatibility problems, especially when one attempted to hook up a master and slave device
on the same cable.
Recognizing the potential for utter chaos here, a number of designers and manufacturers of hard disks and related technology got together to form the Common Access Method (CAM) committee on the AT
Attachment interface. The first document describing the proposed IDE/ATA standard was introduced in early 1989. It was submitted in 1990 to the American National Standards Institute (ANSI), and eventually became the first formal ATA
standard. The CAM committee was eventually replaced with other similar groups charged with the various tasks associated with managing the IDE/ATA interface.
Today, ATA standards are developed, maintained and approved by a number of related organizations, each playing a particular role. Here's how they all fit together:
- American National Standards Institute: ANSI is commonly thought of as an organization that develops and maintains standards, but in fact it does neither. It is an oversight and accrediting organization that facilitates and manages the standards development process. As such, ANSI is the "high-level management" of the standards world. It qualifies other organizations as Standards Developing Organizations, or SDOs, and also publishes standards once they have been developed and approved.
- Information Technology Industry Council: ITIC is a group of several dozen companies in the information technology (computer) industry. ITIC is the SDO approved by ANSI to develop and process standards related to many computer-related topics.
- National Committee for Information Technology Standards: NCITS is a committee established by ITIC to develop and maintain standards related to the information technology world. NCITS was formerly known as "Accredited Standards Committee X3, Information Technology", or more commonly just "X3". It maintains several sub-committees that develop and maintain standards for various technical subjects.
- T13 Technical Committee: T13 is the NCITS technical standards committee responsible for the IDE/ATA interface.
So basically, T13 is the group that actually does the work of developing new IDE/ATA standards. The T13 group is comprised primarily of technical people from various hard disk and other technology
companies, but the group (and the development process itself) is open to all interested parties. Comments and opinions on standards under development are welcomed from anyone, not just T13 members. The standards development process
is intended to create consensus, to ensure that everyone who will be developing hardware and software agrees on how to implement new technology.
Once the T13 group is done with a particular version of the standard, they submit it to NCITS and ANSI for approval. This approval process can take some time, which is why the official standards are
usually published several months, or even years, after the technology they describe is actually implemented. While approval of the standard is underway, companies develop products using technology described in the standard, confident
that agreement has already been reached. Meanwhile, the T13 group starts work on the next version of the standard.
Now that you understand how the standards process works, you are in much better shape to read the rest of this section, which describes all of the different standards that define the IDE/ATA interface.
They are listed in approximately chronological order. Remember when reading that subsequent standards build upon earlier ones, and that in general, hardware implementing newer standards is backward compatible with older hardware.
Note: Standards that have been approved and published by ANSI are available for purchase in
either print or electronic format from ANSI's web site. Draft standards that are under development (as well as older drafts of approved
standards) can be found at the T13 Technical Committee web site.
Hard-drive Fragmentation Effects
Disk fragmentation occurs when a file is broken up into pieces to fit on the disk. Because files are constantly being written, deleted, and resized, fragmentation is a natural occurrence. When a file is spread out over
several locations, it takes longer to read and write. But the effects of fragmentation are far more widespread: slow performance, long boot times, random crashes and freeze-ups--even a complete inability to boot at all. Many users
blame these problems on the operating system, when disk fragmentation is often the real culprit. Common symptoms of a badly fragmented disk include:
1) Sluggish performance, especially when running multiple applications
2) Slow application load times
3) Slow booting/startup (or inability to boot at all)
4) Long load times to open graphics
5) Slow file saves
6) Random crashes, freezes and lock-ups