Why does the Cray 2 use 400 Hz power, and why generate that from motors?
This Cray sales brochure states on page 4 that the Cray-2 uses 400 Hz power, generated by motor-generators.
400 Hz power from motor generators
I'm not sure that in 1986 the state of the art used 400 Hz as a frequency for timing signals and circuits. I'm also not sure that, if 400 Hz is needed, motors are the best way to generate it. I'd guess they're too imprecise for timing a supercomputer.
400 Hz is indeed a standard in aircraft and especially military installations. I can only guess, but would assume the generators were used to completely isolate and protect the non-negligible investment in a Cray from mains disturbances like over-current, frequency drift, and especially lightning strikes. Mechanical coupling can achieve that; purely electrical means cannot.
– tofro
Aug 29 at 13:49
It's perhaps worth noting at this point that Seymour Cray had a military background, and that many of his customers were military or other government groups. If this was the kind of power supply they'd expect, he'd know that and would provide it for them.
– Jules
Aug 29 at 17:22
@tofro Indeed, I can confirm that Navy radar and associated control systems runs 400 Hz. There is a whole separate electrical system installed to provide this power to the combat systems suite.
– kingledion
Aug 30 at 20:41
@kingledion Mil standards do, however, specify 108 V instead of the 208 V the Cray wants.
– tofro
Aug 30 at 21:16
6 Answers
The reason for any higher frequency supply is almost always the same: you get to convert a single input to different voltages with less wasted heat in a smaller volume. Such systems were very common in applications with lots of different voltages downstream from the plug. The first small version I'm aware of was the motor-generator sets used for powering AI (airborne interception) radars in WWII, which was literally a guy unscrewing the DC generator from an engine and saying "make me an AC version exactly like this". But they were already very common at that point for industrial supplies in factories and such.
It's very easy to make a synchronous motor that will run at a fixed RPM even under changing loads. A flywheel can go a long way. In computer applications, or radar, the loads tend to be fairly even on any macroscopic time frame on the order of seconds (although not always, see DRTE). You can then use that smoothed motor output to drive a generator for practically anything you might need. A motor running at fixed speed will get you well over 90% efficiency, as will a generator, so the end-to-end efficiency is very good.
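To put rough numbers on that composition of efficiencies (the figures below are assumed, typical order-of-magnitude values, not from any Cray documentation):

```python
# End-to-end efficiency of a motor-generator set is the product of the two stages.
motor_eff = 0.95      # assumed: large synchronous motor, "well over 90%"
generator_eff = 0.95  # assumed: similar for the generator stage

print(f"end-to-end: {motor_eff * generator_eff:.1%}")  # ~90.2%
```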
Switching power supplies, which are now basically universal, had long been available, but were not widely used. For one thing, the electronics were very large (I've seen versions several feet tall) and very expensive. They also gave off enough UV (from the mercury-arc rectifiers) to cause sunburn and had to be isolated in metal cabinets. By the 1970s there were solid-state versions good enough for low-end systems, like the famous example in the Apple II, but these were not nearly powerful enough for a mainframe. Some thyristors were large enough, but tended to have lower efficiency.
It wasn't until about the 1990s that really large thyristors and IGBTs became available that could handle this sort of power. They rapidly took over, and are now basically universal, from the iPhone to the TGV train. As they allowed very rapid switching, which is basically their efficiency measure, modern examples are well into the 95%+ range - the inverters on my roof are 93% efficient, but the same box made today, 8 years on, is a whopping 97% efficient. That's still 10 W out the window, though...
It's an interesting side note that the emergence of the modern EV has as much to do with these electronics as it does with li-ion batteries. Using older conversion systems, like the DC systems you used to see in experimental cars in the '90s, you were throwing away miles of range as heat in the electronics.
"Less heat in less volume" is critical. For its time, the Cray-2 was small - it had to be, to minimize the wiring lengths between the boards and maximize the clock speed. (FWIW I've actually stood right next to a working Cray-2, back in the day - people's first reaction to the tiny "fishtank" cabinet tended to be, "OK so where is the rest of it?" You are talking about a system that would (almost) fit inside a 4-foot cube, but consumed nearly 200kW of power. Compare that with a modern server rack!
– alephzero
Aug 29 at 16:05
… Incidentally, Seymour Cray was a rather short guy (about 5 foot 3 inches IIRC), so the iconic pictures of Cray in front of his own computers make them look bigger than they really are.
– alephzero
Aug 29 at 16:12
Btw, low-power SMPS (like mobile chargers, computer power supplies, etc.) use neither IGBTs nor thyristors nowadays. They use MOSFETs - and some cheap ones probably still BJTs. The IGBT range starts from, say, kilowatts of output power.
– lvd
Aug 29 at 23:35
@DigitalTrauma - the early rectifiers were vacuum tubes with traces of mercury. Mercury has strong emissions in the UV region. The ones in radars were well into the 5 MW range (pulsed), so we're talking about maybe 100 kW per square meter, compared to 1 kW/m² for all sunlight put together, of which UV is a tiny component.
– Maury Markowitz
Aug 30 at 15:35
@MauryMarkowitz - mercury arc rectifiers contained a lot more than a trace of mercury. A high power rectifier mentioned here magazine.ieee-pes.org/marchapril-2014/history-12 contained 2.5 litres!
– grahamj42
Aug 30 at 20:42
I'm not sure that in 1986, the state of the art used 400 Hz as a frequency for timing the signals and the circuits.
The 400 Hz is not related to any kind of timing. It's about getting the size and losses of the transformers down. With increasing frequency, the efficiency of transformation increases, producing less waste heat. At the same time, transformers shrink for a given load, saving cost and, equally important, space.
It's the same principle that gave us the switching power supply with its high efficiency and low losses. There, too, the primary AC gets turned into a high-frequency current so a rather tiny transformer can be used instead of the huge copper piles in a linear PSU. Unlike the Cray, a modern power supply uses tens of kilohertz instead of 400 Hz.
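As a back-of-the-envelope illustration of that scaling: the transformer EMF equation V = 4.44·f·N·A·B implies that, for a fixed voltage, turn count, and flux density, the required core cross-section shrinks in proportion to frequency. The voltage, turns, and flux density below are assumed values, not Cray figures:

```python
# Transformer core size vs. supply frequency, from V_rms = 4.44 * f * N * A * B.
# For fixed voltage, turns, and flux density, core area A scales as 1/f.

def core_area(v_rms, f_hz, turns, b_max):
    """Required core cross-section (m^2) for one winding."""
    return v_rms / (4.44 * f_hz * turns * b_max)

V, N, B = 208.0, 100, 1.2   # assumed: 208 V winding, 100 turns, 1.2 T peak flux

a60 = core_area(V, 60.0, N, B)
a400 = core_area(V, 400.0, N, B)
print(f"core area at  60 Hz: {a60 * 1e4:.1f} cm^2")
print(f"core area at 400 Hz: {a400 * 1e4:.1f} cm^2 (~{a60 / a400:.1f}x smaller)")
```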
I'm also not sure that if 400 Hz is needed, that motors are the best way to generate that.
It's not so much about exactly 400 Hz as about converting a lot of power. In the mid-1980s, power semiconductors weren't at a point where such loads could be handled fully electronically (*1). Instead, one couples a motor running at mains frequency to a generator on its output shaft that emits a higher-frequency current - the generator is basically the same machine as the motor, just with more poles. Prior to power electronics, this was the only way to transform frequency. For example, rail power in Germany (and elsewhere) is 16⅔ Hz while mains is 50 Hz. Besides dedicated rail power stations with generators producing 16⅔ Hz, there were also converters from mains power using motor-generator units.
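A small sketch of the pole arithmetic: the output frequency depends only on the shaft speed and the generator's pole count. The pole counts below are illustrative, not documented Cray hardware:

```python
# Motor-generator frequency conversion:
#   shaft_rpm = 120 * f_in / motor_poles   (synchronous speed)
#   f_out     = generator_poles * shaft_rpm / 120

def mg_output_freq(f_in_hz, motor_poles, generator_poles):
    shaft_rpm = 120.0 * f_in_hz / motor_poles
    return generator_poles * shaft_rpm / 120.0

print(mg_output_freq(50, 2, 16))   # 400.0 Hz from 50 Hz mains
print(mg_output_freq(60, 6, 40))   # 400.0 Hz from 60 Hz mains
print(mg_output_freq(50, 6, 2))    # ~16.67 Hz - the German rail frequency
```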
Besides the advantages of lower copper cost, more compact design, and higher efficiency, such a motor-generator inherently works as a UPS. Due to inertia, short interruptions on the input side are smoothed out, making operation far more reliable. In fact, depending on the size of the motor-generator, it could even supply the whole machine (including disks) for several seconds. That's more than enough time to write a savepoint (*2) and perform an orderly (emergency) shutdown. That's why mainframes often have a special power-fail interrupt state.
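For a feel of how much ride-through a flywheel buys, here's a sketch using the kinetic-energy relation E = ½·I·ω². All numbers are assumed for illustration, not Cray specifications:

```python
import math

# Only the energy above the minimum acceptable shaft speed is usable,
# since the output frequency sags along with the speed.
inertia = 50.0            # kg*m^2, assumed combined rotor + flywheel inertia
rpm_nominal = 3000.0      # 2-pole machine on 50 Hz mains
load_kw = 200.0           # roughly Cray-2-class load
max_speed_droop = 0.10    # assume a 10% frequency sag is tolerable

w_nom = rpm_nominal * 2 * math.pi / 60
w_min = w_nom * (1 - max_speed_droop)
usable_energy = 0.5 * inertia * (w_nom**2 - w_min**2)   # joules

print(f"usable energy: {usable_energy / 1e3:.0f} kJ")
print(f"hold-up time at {load_kw:.0f} kW: {usable_energy / (load_kw * 1e3):.1f} s")
```

With these assumed numbers you get a couple of seconds of hold-up, consistent with "several seconds" for a larger set.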
I'd guess it's too imprecise for timing a supercomputer.
As said before, it's not about timing at all, but about the size and efficiency of the power conversion.
400 Hz in particular was chosen because it had been standard for certain military equipment and airplanes since the 1940s. It's always better to use an existing standard and the standardized equipment that comes with it - so Cray didn't have to create their own converters.
*1 - And even today, when we can do it fully electronically, we still use the same trick. Only now the mains first gets rectified, then turned into a high-frequency pulsed current (several dozen kHz), which still gets transformed to whatever is needed and rectified again.
*2 - A savepoint is a recording of the current state of a job, made under program control, that allows a restart when restored. It's not the same as freezing the program; it's more like an application automatically saving its files when closed.
I suppose writing a savepoint may be compared to the modern notion of hibernation.
– Ruslan
Aug 29 at 21:54
@Ruslan No, rather the opposite. Hibernation is the OS saving the memory content of an application. A savepoint is an action an application program takes after being called to do so - because of an imminent threat of being terminated. For a modern database application, think of committing all open transactions (or rolling back) and writing a record of where to continue in the input stream, and the like. On restart, the application has to read that savepoint and try to resume - or fail. It's an application process supported by the OS, not an OS process.
– Raffzahn
Aug 29 at 22:05
@Ruslan It's more analogous to what happens after you do a soft power-off on a PC or phone and you then have to wait while it "closes running applications" or "the system is shutting down" until it actually turns off.
– Jason C
Aug 31 at 2:21
Aviation converters seem to produce 108 V according to MIL standards - retro datacenter equipment like the Cray (and a lot of other mainframes and supercomputers) wants 208 V. So you cannot use existing aviation equipment.
– tofro
Aug 31 at 12:00
The discussion of savepoints brings back memories: In the late 90s I took a 300-level operating systems class from a professor who insisted that paged virtual memory only won out by an unfortunate accident of history, and dropfiles were the way to go.
– Charles Duffy
Sep 1 at 17:14
As others have said, the choice of a higher frequency enables smaller transformers, but also smaller smoothing capacitors. 3-phase supplies were used to reduce the ripple even further. Don't forget we are talking about very high currents with ECL systems.
Motor generators existed right from the earliest days of electrical distribution for AC-DC or DC-DC conversion - semiconductors were discovered later. The effect of a heavy flywheel between the two handled brownouts and switching to a backup generator very well.
Where I worked, we had motor generators on the 240V running the disks as well as a 400Hz set for the CPU. The 240V sets were about 30 years old when we replaced them with a UPS at the time we upgraded to a RAID cabinet from a room full of single-density disks and (slightly) downgraded to an IBM 9121-521 (IIRC) from a 3084Q.
In 1986 a semiconductor supply would have been feasible, but the prospective customer would probably already have had the 400 Hz MG set, so why make him pay more to replace something he already had?
I think there is a lot of truth in the "smaller capacitors", ripple reduction, and "mechanical" smoothing of the DC mentioned here - rectified DC at higher frequencies is way easier to smooth than at lower frequencies. Not so sure if the transformer size as mentioned here and in other answers really was an issue - I consider this relevant in an airplane, not in a Cray (after all, it's not a laptop).
– tofro
Aug 30 at 7:58
@tofro Cray had a flair for the presentation of computers, and that's at least part of what captured the public imagination about them. They always looked the part, sleek and futuristic. In this instance, he had a design idea that required the power supply to be a similar size to the computer itself. If it cost a bit more to make the power system small enough to fit, it really isn't likely anyone would have cared. Anyone who needed a Cray 2 would have set aside however much budget was required. It was a seller's market, and at that end of it, only Cray was selling anything.
– Jules
Aug 30 at 12:18
The Cray needed stable DC power at extremely high current. By using a 3-phase 400 Hz system they could use high-frequency transformers in each power supply unit to generate 12 phases**, which results in very little ripple voltage (about 1%) at a relatively high frequency (4.8 kHz), so they did not need to use enormous capacitors (which wear out). The transformers were distributed and located under the card racks, so size was a factor.
By using motor-generator sets (three, according to this document) they could get some UPS-like hold-up time and the benefits of higher frequency (proportionally smaller transformers). Only the 50 or 60 Hz three-phase motor had to be large; all the other components could be smaller. 400 Hz is a standard military/aerospace frequency (chosen because it represents the highest frequency practical with steel core materials), so it was a sensible choice, and the transformer vendors would have ready-made solutions.
**That's assuming it's the same topology as the Cray-3 and Cray-1 (Cray 1 described below). No regulation! ECL circuits draw essentially constant current, so that helped a lot.
There are twenty -5.2 volt supplies and sixteen -2.0 volt supplies. The supplies are divided into twelve groups of three. Each group supplies one column. The power supply design assumes a constant load. The power supplies do not have internal regulation, but depend on the motor-generator to isolate and regulate incoming power. The power supplies use a twelve-phase transformer, silicon diodes, balancing coil, and a filter choke to supply low ripple DC voltages. The entire supply is mounted on a Freon-cooled heat sink. Power is distributed via bus bars to the load.
The motor generator supplies 208 V, 400-cycle, three-phase power to the power distribution cabinet, which supplies it via a variac to each power supply.
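A rough cross-check of the ripple figures above, using the standard relations for an m-pulse rectifier. This is a sketch of the arithmetic, not the actual Cray filter design:

```python
import math

# For an m-pulse rectifier on an f-Hz supply (before the choke/filter):
#   ripple frequency            = m * f
#   peak-to-peak ripple fraction ~= 1 - cos(pi / m)

def rectifier_ripple(f_hz, pulses):
    return pulses * f_hz, 1 - math.cos(math.pi / pulses)

for f, m in [(60, 6), (400, 6), (400, 12)]:
    freq, p2p = rectifier_ripple(f, m)
    print(f"{m:2d}-pulse at {f} Hz: ripple {freq} Hz, ~{p2p:.1%} p-p unfiltered")
```

A 12-phase rectifier on 400 Hz gives the 4.8 kHz ripple frequency quoted above, with only ~3.4% peak-to-peak ripple even before the balancing coil and filter choke bring it down further.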
In addition to earlier answers - smaller transformers, less ripple on the rectified voltage, and the motor-generator's inherent ability to ride through momentary power loss - such a seemingly over-engineered system also protects well against (rare) power line surges, which increases machine stability.
Some such surges can pass rather well through the inter-winding capacitance of transformers, and capacitors do not filter them very well either. When the only power link with the external world is a mechanical shaft - which can be made of dielectric material and be (almost) arbitrarily long - power surges are easily defeated.
The Cray is not a home computer that derives timing from the mains frequency - that is a shortcut cheap computers could take to save some components, but nothing that was done in the "professional" world. In a sufficiently professional computer (and the Cray, if anything, is one...), there is no coherence whatsoever between mains frequency and internal timing.
A short side note: the specific power requirements of the Cray 2 were in no way unique. Cray's earlier works, like the CDC 6600 designed in the 1960s, used the same kind of power supply. Some of my earlier assumptions - e.g. that the supply selection was based on specific Cray 2 features like its cooling system - are thus void. IBM celebrated having "eliminated the need for 400 Hz" only as late as 1989.
I am not so convinced that the reasons for the selection of this specific power source in the Cray were the same as why it is an established standard in aviation technology. After all, the Cray is not a laptop, so space and weight requirements like in an aircraft weren't really so much of an issue.
I suspect rather different reasons behind Cray's decision:
A 400 Hz supply at 208 V was simply the standard as set by IBM, and what you would expect to find in the supercomputing data centers that were the target customers for Cray machines. The IBM System/360 largely relied on these input characteristics, so any Cray machine would be expected to fit the same requirements. So, after all, this apparently was not a Cray decision (nor driven by their preference for small-scale design); rather, they had to adapt to an existing environment.
As for the why behind the original (apparently IBM) design decision, most of the other answers apply - mainly size and efficiency. Also, given the wide range of non-standardized power supply voltages in the world of the mid-1960s, one can assume mainframe and supercomputer vendors were pretty much free in their choice of input requirements, and 400 Hz was apparently chosen because the technology was already available from the aviation and defense industry.
Did any "home computer" use the mains frequency for anything timing-related?
– Spehro Pefhany
Aug 30 at 14:56
@SpehroPefhany I myself have heard of clocks that do that, but never computers.
– Wilson
Aug 30 at 15:47
@SpehroPefhany Not to my knowledge. The TV set you connected your computer to, however, typically did.
– tofro
Aug 30 at 15:47
The VIC-20 had 9VAC inside it from the transformer but I think the timing was from a 14.31818MHz crystal.
– Spehro Pefhany
Aug 30 at 16:07
I don't know why, but 400 Hz was used for large mainframe systems before the Cray 2. It's used in the aircraft industry and on ships because it allows the use of smaller and lighter transformers (lighter because they don't need as much iron as 50/60 Hz transformers). So perhaps it was used here to allow smaller motors in the cooling pumps?
– Stephen Kitt
Aug 29 at 13:42