Part-bad chips other than RAM
In the early eighties, you could buy half-bad 64k RAM chips at a discount. Some cost-conscious manufacturers such as Sinclair and Tandy took advantage of this, buying eight such chips to make a 32K memory bank.
Intuitively it seems like this should be possible with other kinds of chips. For example, the VIC-II is a big complex chip for its time; at least in the early days, yield must have been significantly less than 100%. Most of the area of the VIC-II is used for sprites. That suggests the majority of reject chips would be perfectly usable for non-sprite display. Still useless for the C64, of course, but Commodore did (mistakenly, but still did) introduce the C16 as a spriteless lower-end machine. Instead of basing it on the Plus 4, could they not have given it reject VIC-II chips and saved money? But Commodore was a very cost-conscious company that was known to base designs on what kind of chip they currently had in surplus; if they didn't do that, is there a reason I am overlooking?
Similarly the excellent SID (C64 sound chip). Intuitively it would seem a defect in one of the voices, say, should leave the chip still usable with a smaller repertoire in a lower-end machine like the C16. Is there a reason why this was not done?
The one historical case I know of where part-bad chips were successfully sold with reduced functionality (apart from the possibly apocryphal case of Soviet-made CPUs each coming with a list of instructions that particular chip would successfully execute) is the 486, where units with a defective FPU were sold as the 486SX. Are there any other cases?
Keep in mind that verifying what exactly does not work might be more expensive than the value of the defective chips. Some functions (like: does a sprite show up properly on screen?) are really not that easy to test automatically.
– tofro
Aug 26 at 16:35
This is not how it worked. Before modern exhaustive chip testing technologies like boundary scan and JTAG were invented in the '90s, the only way to test a chip was to apply a series of test patterns to the inputs and observe the outputs. These tests rarely broke a chip down into working and non-working functional areas, but rather sorted chips into "passed" and "failed" - and the failed ones simply went to the trash without further investigation. For a simple RAM chip, you could check whether it held storage at least in an upper and lower area and could sell the other half as functional, but …
– tofro
Aug 26 at 16:53
@rwallace Tofro is perfectly right here. Chip testing isn't that easy; even with RAMs it's quite some work, and they are incredibly simple compared to chips like a VIC-II. I wouldn't wonder if the tests at MOS were rather frugal, and the real tests were only done after the whole machine was assembled. It's way less costly to plug a ROM cartridge into a finished C64 and have a cheap employee check if it seems fine. Remembering how incredibly high the rejection rates and returns of newly bought C64s were does support this.
– Raffzahn
Aug 26 at 17:21
The VIC-II chip that shipped in my Commodore 64 had two partially-defective sprites (sprite 0 and one other) whose output was slow to "turn on". At places where a transparent pixel was followed by a non-transparent pixel, the background would show through. One of the CIA chips was also partially defective, with a "real-time clock" circuit that would not advance between seconds and minutes (which made the last level of Raid over Moscow much easier than it should have been, since the timer would never expire).
– supercat
Aug 27 at 18:47
3 Answers
For example, the VIC-II is a big complex chip for its time; at least in the early days, yield must have been significantly less than 100%.
Not really. While the VIC-II had a transistor count a bit larger than the original 8 µm NMOS 6502 - not by a lot - it was manufactured in a 5 µm process, resulting in a smaller die and higher yield. The 1981 65C02 had almost three times the transistor count of the NMOS CPU, but due to being manufactured in 3 µm, its die was only a little over a quarter of the size (6 vs 21 mm²) - and 5 µm was already considered outdated in 1980.
Most of the area of the VIC-II is used for sprites.
I wouldn't call the 40 bytes of line buffer insignificant, either.
That suggests the majority of reject chips would be perfectly usable for non-sprite display.
The whole idea of using rejected chips, especially when it's about rather small (even at that time) chips like the VIC, doesn't work out commercially. At least not if you're the original manufacturer and/or customer for such a 'less function' version.
Let's assume, for the sake of the argument, that the area used for sprites is one quarter (25%) of the chip, and that the failure location is random. Then, of all chips with only one failure location, only the 25% with that failure in the sprite area can be used as 'without sprites' (again, given that it isn't a failure that disables the whole chip - like a contact between +5V and ground :)). For chips with two failures it's maybe another 5% where both fall in that area, so it isn't worth looking at even higher failure counts.
Next, this is all relative to the fault rate of the production. Here it is important that the 5 µm process on which the VIC-II was produced was well proven and established - nowhere near cutting edge, which would have been around 1-2 µm at that time. I think it is safe to assume that Commodore got a yield of well over 80% already on the first run, and would have been able to push this to 95+% within a few months.
With these numbers there is no business case for the manufacturer to offer 'less functional' versions. After all, at an 80% good rate (which is rather bad), the best possible quota (if all failing chips had only a single failure) would be 5%. That's not a production number even worth making a different stamp to mark them with. Especially since, from a manufacturer's perspective, all there is to do is reduce the fault rate to improve sales of the main product.
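To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch using the purely illustrative numbers from above (25% sprite area, 80% yield, every failing chip assumed to have exactly one randomly located defect) - assumptions, not measured figures:

```python
# Back-of-the-envelope estimate of how many VIC-II rejects could be sold
# as 'spriteless' parts. The 25% sprite area and 80% yield are the
# illustrative assumptions from the text, not measured figures.

sprite_area = 0.25   # fraction of the die used only by sprite logic (assumed)
yield_good  = 0.80   # fraction of chips that pass as fully functional (assumed)

failed = 1.0 - yield_good                 # 20% of production fails
# Best case: every failed chip has exactly one randomly located defect,
# so only those whose defect falls inside the sprite area are salvageable.
salvageable = failed * sprite_area        # 0.20 * 0.25 = 0.05

print(f"Salvageable as 'spriteless' VIC-II: {salvageable:.1%} of production")
# -> 5.0% of production, before any testing and handling costs
```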
Last but not least, thorough testing slows down production and costs a lot of money - much more so when done in every detail, as would be needed to select chips that still work at a reduced level. That money is wasted on good chips. So getting quality up and restricting tests to detect faults as fast as possible, then aborting, makes the whole production more cost efficient.
From a customer's perspective (in this case the Commodore computer factories) it's even worse, as the ability to build such a machine depends on a random reject quota from a manufacturer who doesn't want to produce such a product in the first place. With every iteration the VIC-II yield improves, and your supply goes away. To keep shipping machines you must then go ahead and use fully functional VIC-IIs - which means you earn less money (assuming the final customer pays less for a less capable machine) from the same resource (VIC-II chip production).
[But then why did the 32 KiBit RAM chips surface?]
A simple matter of scale. For one, 64 KiBit RAM chips are about 10 times more complex than a VIC-II, while at the same time their structure is far more symmetric, improving the chance that a fault's impact is truly local. Even more important, they were produced in (almost) infinitely larger numbers than, for example, VIC-IIs - just think, each C64 already had 8 of them compared to only one VIC-II. Also there were many more manufacturers, each of them trying to get their production running and quality up - each of them going through a cycle from many faulty chips to fewer of them.
Also, fault-wise it is simpler with RAMs, as any chip with only a single fault that is not located in the control or interface logic (which is less than 2% of the surface) can be used as a 32 KiBit one - that makes, using the above reasoning, 98% of single-fault and 49% of double-fault chips usable. Sounds like a way better quota, doesn't it?
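The same kind of sketch, under the assumptions used here (roughly 2% of the die is control/interface logic that must be fault-free, defects land at random and independently, and a chip is salvageable as a 32 KiBit part if at least one half of the cell array is defect-free), roughly reproduces those figures:

```python
# Estimate for 64 KiBit DRAMs sold as half-good 32 KiBit parts, using the
# answer's assumptions: ~2% of the die is control/interface logic (any hit
# there kills the chip), defects land at random, and the chip is salvageable
# if at least one 32 KiBit half of the cell array is defect-free.

control_area = 0.02                        # assumed fraction of the die
array_half   = (1.0 - control_area) / 2    # each half of the cell array

# One defect: usable as long as it misses the control/interface logic.
one_fault_ok = 1.0 - control_area          # 98%

# Two defects: both must miss the control logic AND land in the same half.
two_faults_ok = 2 * array_half ** 2        # ~48%

print(f"Single-fault chips usable as 32 KiBit: {one_fault_ok:.0%}")
print(f"Double-fault chips usable as 32 KiBit: {two_faults_ok:.0%}")
```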
It's further a matter of scaling. With many millions of 64 KiBit chips, each factory, especially during ramp-up, produced hundreds of thousands of partially faulty ones, making it a good business for others to buy this waste at next to nothing and spend money on testing and repackaging them as 32 KiBit parts.
Bottom line, it's about numbers and Commodore wasn't going anywhere near this.
Still useless for the C64, of course, but Commodore did (mistakenly, but still did) introduce the C16 as a spriteless lower-end machine.
I need to object here (even though it's not part of the core question). The C16 was neither a mistake nor a bad idea. Its design was meant to counter Sinclair - delivering a low-end machine that could beat the Monster from the Island :)) Seriously, the US home computer industry was shocked by the ZX-80/81 and the prospect of an even more capable future colour machine. The TED, as the C116, was directly set against that - and the C16 was only a quick byproduct: a cheaper 'real' computer using VIC-20/C64 parts.
Instead of basing it on the Plus 4, could they not have given it reject VIC-II chips and saved money?
As noted above, there wasn't any real business case to save money. More importantly, the TED had a much higher gate count than the VIC-II - not least due to its 75-byte (instead of 40-byte) line buffer; it also had a 16-bit timer, a sound channel and an I/O port. Except for the sprites, all its graphics capabilities surpassed the VIC-II. That it was more complex while at the same time targeting a far lower-priced market should tell how little transistor count mattered - at least in this region of rather low integration.
Similarly the excellent SID (C64 sound chip). Intuitively it would seem a defect in one of the voices, say, should leave the chip still usable with a smaller repertoire in a lower-end machine like the C16. Is there a reason why this was not done?
Most likely again, not enough failing chips to make it worthwhile. See above.
But more importantly, it would have meant that the C16 would have had two chips, like the C64, making production way more expensive than targeted. Besides more board space and traces, another 64 drill holes would have been needed - plus having to produce two chips instead of one.
The one historical case I know of where part-bad chips were successfully sold with reduced functionality is the 486, where units with a defective FPU were sold as the 486SX. Are there any other cases?
If at all, only in the beginning. For most of their availability, 486SX chips were crippled on purpose to allow sales at lower prices without hurting the profits on 486DX sales.
Many people in the U.S., myself included, tended to judge the success or failure of Commodore's machines based upon sales in this country, even though a large portion of Commodore's target market was in Europe; I think the 16 and Plus/4 did better there than in the U.S.
– supercat
Aug 27 at 18:49
@supercat Not really; while there were considerable sales as low-cost machines, which kept the C16 available for several years, it was dwarfed by the C64's success. Being produced for several years should also give a good hint that they were in fact a success - even in the US - as I don't think Commodore would have continued to build them while losing money, would they?
– Raffzahn
Aug 27 at 19:33
@supercat Thinking of it, I'm a bit puzzled by your comment, as my answer nowhere stated anything about the success or failure of either machine. All it mentions is the reasoning behind creating the C116 to counter the ZX machines at their price level, and creating the C16 as a 'real' machine (read: no rubber keyboard) positioned below the C64 but above the rubber-keyboard class. Also, they were part of a bigger strategy conceived before the gap-closing C64 became the unexpected success it was. The Plus/4 was more of a last resort to use the now surplus stock (and designs) made for the new line.
– Raffzahn
Aug 27 at 19:39
I was responding to your statement that the C16 wasn't a mistake, nor a bad idea, and wanted to acknowledge why some people might think it was: many people in the U.S. had no clue how popular various machines were in Europe.
– supercat
Aug 27 at 19:42
@supercat Considering something a mistake or a bad idea is not related in any way to it being a success or not. Many great ideas never became successful - just as some bad ideas did (the C64 being one of them). I still fail to see the relation.
– Raffzahn
Aug 27 at 19:51
There was a brief fad in the early eighties for what's called "Wafer Scale Integration". That is to say, producing an entire wafer of silicon for a single circuit. The best known example was Gene Amdahl's Trilogy Systems. A Wafer Scale circuit can be used to build a massively powerful computer system in a single component, but as wafers are almost never produced without defects the concept relies on providing redundant units and being able to configure the system to use working ones and disable failed ones. The idea turned out to be too difficult to implement successfully, and AFAIK has never really been repeated, but a lot of research was put into it, and yielded some very good ways of testing wafers and disabling failed components.
Today, my understanding is that similar approaches are sometimes used with multicore processors: if you buy a two-core chip, it's entirely possible that what you actually get is a four-core chip with two of the cores disabled because they tested faulty.
This answer makes me wonder how system-on-a-chip designs compare to those Wafer Scale Integration designs...
– Michael Kjörling
Aug 29 at 20:01
If you include FPGAs and flash memory (SSD chips), etc., many designs are (still?) manufactured with extra rows or blocks of stuff on the chip, where some number are expected to be disabled after failing device test (or when too many pass?), with the devices then still being sold under the same generic part number.
Are you able to dig out more knowledge about that Soviet CPU?
– Wilson
Aug 26 at 16:22