What port does even a beginner in the computer world know about? USB, of course. We use it to connect external storage to our PCs, don't we? But it was not always so. In the early 90's, when USB did not yet exist, the company Microsolutions released the Backpack hard disk. Its peculiarity was how it connected: through the printer port. The first model held 80 MB.
The device's driver required only one line in CONFIG.SYS and consumed less than 5 KB of memory. At release, the Backpack was as fast as internal 80-512 MB IDE drives, with a transfer rate of about 1 MB per second. Unlike Zip drives, Backpack units were real hard disks with a typical performance indicator – a seek time of 12 ms. You could even daisy-chain several of them, connecting one to the other.
With the Bigfoot line, Quantum set itself the task of creating a reliable and fast storage device at minimal cost per megabyte. And it approached the problem very unusually, using the 5.25-inch form factor (the size of a CD-ROM drive or an old floppy drive). What was the point? Platters measuring 5.25 inches could store more data than standard 3.5-inch ones. The spindle speed was not the greatest – only 3600 rpm – but thanks to the larger circumference, the linear speed under the head was higher than on a comparable 3.5-inch drive. Not to mention that the Quantum Bigfoot cost significantly less.
However, the large form factor had significant flaws. The Bigfoot's average seek time was 12-14 milliseconds, which was actually a fine achievement for such a large disk. An even more important problem was rotational latency. To understand this problem better, comparative figures for the Bigfoot and other manufacturers' HDDs are given below:
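The latency problem follows directly from the spindle speed: on average the target sector is half a revolution away from the head. A small sketch makes the 3600 rpm penalty concrete (the rpm values are the ones mentioned in this article, not a full market survey):

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: the target sector is, on average,
    half a revolution away, so latency = half the rotation period."""
    seconds_per_rev = 60.0 / rpm
    return seconds_per_rev / 2 * 1000  # convert to milliseconds

for rpm in (3600, 5400, 7200):
    print(f"{rpm} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms average latency")
```

At 3600 rpm the average latency is about 8.3 ms – added on top of every seek, and half again as much as a 5400 rpm drive's 5.6 ms.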
Looking at the rightmost column, we see that the two SCSI drives (the WD and the Cheetah) doubled the performance, but they cost about $1000 more than the IDE drives. The 3.5-inch Medalist, however, was only $30 more expensive than the Bigfoot and much faster. And for another $60 on top of the Bigfoot's price you could get a 5400 rpm Deskstar 4.
The Bigfoot is certainly an interesting drive, but Quantum's focus was not speed and performance – it was cost.
Iomega Zip Drive
Hard disks with removable media have never been very popular, with the exception of the Zip Drive from Iomega, which was all the rage for several years. For a couple of hundred dollars you could get an external parallel-port device with 100 MB cartridges that looked like overgrown floppy disks and could be swapped at will. Performance was quite acceptable, although the cost per megabyte was extremely high. The data transfer rate was 1 MB per second. Not bad. But the seek time was less cheerful – a full 29 ms.
Overall, the Zip Drive was too small, too slow and too expensive to become a mass product. Still, it was very popular for a time, because it was something new and unusual on the market.
SDX
SDX stood for accelerated data storage. It was not quite a hard drive; rather, it was a new way of implementing a CD-ROM drive connection.
The standard methods for connecting a drive to a PC each had a dedicated bus: early models (mostly single- and double-speed ones) used one of three proprietary interfaces – Panasonic, Mitsumi or Sony. SCSI was, and still is, another effective, albeit quite expensive, option. Virtually all later CD-ROM drives used the ATA bus, better known as IDE. IDE CD-ROMs were fast, inexpensive and easy to set up. For a long time every personal computer had two IDE controllers, each capable of supporting two IDE devices: usually hard disks and CD-ROMs, although IDE tape drives and CD recorders were also available. (DVD drives came later, but worked the same way.)
Usually the CD-ROM drive was configured as the master on the secondary IDE controller, or as a slave to the hard drive on the primary one. Performance was good: the maximum IDE transfer rate for ATA-33 reached 33 MB/s.
The principle of SDX from Western Digital was completely different: an SDX CD-ROM drive connected to an SDX hard disk, which in turn connected to a standard IDE controller. The CD-ROM was not a slave IDE device; SDX used a different connection altogether – a 10-pin cable instead of the 40-pin IDE cable.
The SDX hard disk automatically cached the relatively slow CD-ROM drive, which greatly improved effective seek times and gave a slightly higher data rate, boosting overall CD-ROM performance. That was the whole point of the SDX hard-drive cache.
A cache can significantly affect CD performance. RAM caching of CD drives has existed since the very first double-speed drives and is a standard part of all major operating systems. For example, the old DOS SmartDrive automatically used a small amount of RAM to cache the CD drive as well as the hard drive. (Assuming you loaded SMARTDRV after MSCDEX, of course.) However, although RAM was very fast – about 2000 times faster than a CD-ROM drive – a reasonable RAM-based cache in those days was only a few MB, while a CD held about 650 MB.
A more practical idea was to use the hard disk to cache the CD drive. In the 1990s a hard drive was much slower than RAM, but still about 10 times faster than a CD-ROM. Even then, most hard drives were large enough that setting aside a gigabyte or so for the CD cache was usually practical. This is what SDX was about: the SDX hard disk cached the SDX CD-ROM drives connected to it. Roughly speaking, SDX could double the performance of the CD drive, and it did so without using any additional CPU or PCI resources and without adding another layer to the file system.
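The caching idea itself is simple, whether done in SDX firmware or in software. A minimal read-through block cache sketch (the class and its parameters are illustrative, not WD's actual firmware design): slow reads from the "CD-ROM" are answered from fast storage after the first access.

```python
class BlockCache:
    """Read-through block cache: blocks fetched from a slow device
    (the 'CD-ROM') are kept on fast storage (the 'hard disk cache')."""

    def __init__(self, slow_read, capacity_blocks=4):
        self.slow_read = slow_read   # function: block number -> bytes
        self.capacity = capacity_blocks
        self.cache = {}              # block number -> bytes, insertion-ordered
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:      # served from the fast cache
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.slow_read(block)
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict oldest block
        self.cache[block] = data
        return data

# Simulated CD-ROM: each block's content is just its number as bytes.
cd = BlockCache(lambda n: n.to_bytes(2, "big"))
for n in (0, 1, 0, 1, 2, 0):
    cd.read(n)
print(cd.hits, cd.misses)  # 3 3
```

Repeated reads of the same blocks – exactly the access pattern of 1990s CD-ROM software – turn into cache hits, which is where the claimed doubling of effective performance came from.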
On the other hand, you did not need a new, non-standard type of CD-ROM drive to get the advantages of a hard-disk CD cache. The same thing could easily be done in software, with a performance gain equal to or greater than SDX's. Several software CD caches existed, and they worked with any type of drive: old or new, IDE or SCSI.
Advantages of SDX
- Cached the CD drive without using additional CPU or PCI bus resources.
- The firmware cache required no operating-system support, so SDX automatically worked with Unix, OS/2, Linux and MacOS, not just DOS and Windows.
- A future hard drive upgrade would also effectively boost CD-ROM performance.
- SDX CD-ROM drives were slightly cheaper to manufacture.
Disadvantages of SDX
- An SDX CD-ROM drive could only be upgraded to another SDX model of the same brand.
- Very limited support.
- Slightly increased hard drive cost.
- Unusable in laptops.
Advantages common to both SDX and software caching
- Significant increase in CD performance: a minimal gain for large files, a much larger one for small files.
- Longer life of CD-ROM drives.
Advantages of software caching
- Far more room for intelligent caching, including prefetching specific files for popular programs.
- The total cache size could be reconfigured dynamically without repartitioning the hard disk.
- Worked with any hard disk or CD-ROM drive, new or existing.
- Free choice of hard drive brand and cache software.
- Much easier to upgrade.
Disadvantages of software caching
- Slightly increased CPU usage: 3-4%.
- Added another layer to the Windows file system.
- Limited operating system support.
To succeed fully, SDX needed support from other hard disk manufacturers, not just Western Digital. WD said it intended to make the SDX interface an open standard like IDE or SCSI, but did not do so in time. Other hard drive manufacturers, Maxtor in particular, wanted to use SDX, but WD demanded licensing fees, which strongly put off vendors who felt this contradicted the principles of open access. Quantum's Ultra ATA technology, for example, was absolutely free.
One could conclude that WD simply wanted to earn extra money on licensing fees before opening up its technology and developing it as a common standard. By the end of 1997 this had led to serious disagreements among hard disk manufacturers, and WD had to abandon its Napoleonic plans.
Western Digital Portfolio
In the mid-90s Western Digital took a slap from the world of computer technology for its strange ideas (see above). But it decided not to give up. It introduced the new Portfolio drive with a non-standard form factor for laptops: 3 inches instead of the standard 2.5. At first this seemed like another crazy idea. If the principle of a laptop is smaller and lighter, why make a component bigger? However, WD said it had found a way to fit more storage into less space by using a slightly larger disk.
At the time, notebook design was moving toward larger footprints but flatter profiles, driven by growing screens. The Portfolio kept pace with this trend. Increasing the platter diameter by half an inch raised per-platter capacity by up to 70%. Using the slightly larger WD platter, roughly twice as much information could be stored per cubic inch of drive. In addition, the larger disk was faster and easier to spin.
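To see where a disproportionate capacity gain can come from, compare the usable (annular) recording area of a 3-inch and a 2.5-inch platter. The hub radius below is an assumed value for the sketch, not a WD spec, and the 70% figure above also involved other factors:

```python
import math

def usable_area_sq_in(diameter_in: float, hub_radius_in: float = 0.5) -> float:
    """Recordable annulus of a platter: full disc minus the central hub.
    The 0.5-inch hub radius is an illustrative assumption, not a spec."""
    outer_radius = diameter_in / 2
    return math.pi * (outer_radius**2 - hub_radius_in**2)

a25 = usable_area_sq_in(2.5)
a30 = usable_area_sq_in(3.0)
print(f'3.0" vs 2.5" usable area: +{(a30 / a25 - 1) * 100:.0f}%')  # +52%
```

Because the unusable hub is a fixed size, growing the diameter by only 20% grows the recordable area by roughly half – which is why a "slightly larger disk" pays off so well.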
Meanwhile, WD had a very big competitor with its own vision of how hard disks should develop: IBM. IBM raised performance through methods and technologies such as, first of all, a significant increase in data density, MR and GMR read heads, glass platters, advanced PRML, higher spindle speeds and smart formatting. Of course, holding 40% of the world market, IBM could afford to conduct research in the field of hard drives.
WD's innovative 3-inch drive, by contrast, saved the company a great deal of money and did not require the high data density it had always struggled with. Thus WD was able to squeeze into the market and capture part of it.
The flagship of the Portfolio line had a very impressive specification:
- 2 platters, 2.1 GB;
- MR heads and PRML;
- shock resistance 300/100 g;
- seek time 14 ms;
- spindle speed 4000 rpm;
- data transfer rate 83 Mbit/s.
These figures were second only to the flagships of IBM's notebook drive line.
All would have been well, but WD's financial problems led to the Portfolio line being shut down in January 1998. The other manufacturer of 3-inch drives, JTS, did the same. In those days many companies suffered from a lack of finance rather than a lack of ideas, and WD is a good example.
Seagate Barracuda 2HP
Unfortunately for Seagate especially, these disks are no longer produced. But they were very popular in their day, because they differed from the rest in using two read heads at once, which ultimately provided twice the data transfer rate.
A large number of read-write heads is nothing unique, but normally they are used one at a time, and the maximum transfer rate is limited by how much data passes under a single head. The Barracuda 2HP, by contrast, split the data stream into two halves, sending one half to each of the two heads and recombining the data on read. At the time, the best drives at 7200 rpm delivered 50-60 Mbit/s, while the Barracuda delivered a full 113 Mbit/s, making it the fastest disk on the planet.
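A minimal sketch of the split-and-recombine idea – conceptually the same striping that RAID 0 later made commonplace. The chunk size and channel count here are illustrative, not Seagate's actual firmware parameters:

```python
def stripe(data: bytes, channels: int = 2, chunk: int = 4) -> list:
    """Deal fixed-size chunks of a stream round-robin across channels
    (here: the two Barracuda 2HP heads writing in parallel)."""
    streams = [bytearray() for _ in range(channels)]
    for i in range(0, len(data), chunk):
        streams[(i // chunk) % channels] += data[i:i + chunk]
    return [bytes(s) for s in streams]

def recombine(streams: list, chunk: int = 4) -> bytes:
    """Interleave the per-channel streams back into the original order."""
    out = bytearray()
    pos = [0] * len(streams)
    ch = 0
    while any(p < len(s) for p, s in zip(pos, streams)):
        out += streams[ch][pos[ch]:pos[ch] + chunk]
        pos[ch] += chunk
        ch = (ch + 1) % len(streams)
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"
halves = stripe(data)
print(halves)                     # [b'ABCDIJKL', b'EFGHMNOP']
print(recombine(halves) == data)  # True
```

Each channel carries half the bytes, so with both heads streaming simultaneously the sustained transfer rate roughly doubles – exactly the 2HP pitch.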
The idea was not new, but this was the first time it had been realized in practice. So why did 2HP technology nevertheless disappear?
First of all, the same speeds can be achieved with a RAID array (combining several disks into one structure). In that case the complex task of splitting and recombining data is handled by the RAID controller or by controller software. Moreover, over the following years RAID became the cheaper way to increase disk performance.
The second reason for 2HP's disappearance was time to market. The 2HP developers spent a long time bringing out new models based on the technology, while other manufacturers wasted no time at all.
Specification of Barracuda 2HP
IBM Microdrive
The IBM Microdrive is 1 inch across and much lighter than a PCMCIA card, but it is still a real hard disk with moving read-write heads and a tiny one-inch platter spinning at what was then an incredible 4500 rpm.
The two models released stored 170 and 360 MB of data, and their performance was comparable to that of a 20 MB desktop disk.
The 170 MB version was unique in having only one read-write head; no other disk had a single head until the early 2000s.
Specification of IBM Microdrive models
Of course, there are now plenty of modern drives far faster than those on our list, but these models remain an integral part of hard drive history thanks to technology that was unique for its time, and so they deserve attention. They served as an impetus for HDD development and research, and subsequently for SSDs.