As an example - when I have to retest a wafer, I sometimes find that a few devices which had passed the first time fail at retest, or vice versa. What I needed was a tool which 'merged' two input files and produced a single output file which, for each serial number, contained the 'best' result from the two input files.
... I can picture the scenario now: 'So, if the yield was 90% both times you tested the wafer - how come you now claim it's 95%?'
Hmm. I suppose I'd be reluctant to use this approach to summarizing failure rates as well. Testing circuits on wafers is way outside my field, but I would have expected that if there is a subset "A" that fails on one pass, and a subset "B" that fails on another pass, then the set of troublesome serial numbers to report as unreliable should be the union of sets A and B, rather than their intersection.
I can understand the perspective that the only "real" failures are the ones that consistently failed on every pass. But there is the other perspective: that the only "real" successes are the ones that never failed on any pass. Fortunately, perl makes it easy to report the results, no matter which perspective you choose.
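Just to illustrate what I mean, here's a minimal sketch, assuming a made-up input format of one "serial_number PASS|FAIL" record per line (the real test-result format isn't shown in the thread). It collects the failing serial numbers from two passes and reports both perspectives: the union (failed at least once) and the intersection (failed every time).

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical input format: one "serial_number PASS|FAIL" record per line.
die "usage: $0 first_pass.txt second_pass.txt\n" unless @ARGV == 2;
my ($file_a, $file_b) = @ARGV;

my %fail_a = read_failures($file_a);
my %fail_b = read_failures($file_b);

# Union: serial numbers that failed on at least one pass
# (the "only consistent passes are real passes" perspective).
my %union = (%fail_a, %fail_b);

# Intersection: serial numbers that failed on every pass
# (the "only consistent failures are real failures" perspective).
my @both = grep { $fail_b{$_} } keys %fail_a;

printf "Failed at least once: %d\n", scalar keys %union;
printf "Failed every time:    %d\n", scalar @both;

# Collect the failing serial numbers from one result file into a hash.
sub read_failures {
    my ($file) = @_;
    my %fail;
    open my $fh, '<', $file or die "Cannot open $file: $!";
    while (my $line = <$fh>) {
        my ($serial, $result) = split ' ', $line;
        $fail{$serial} = 1 if defined $result and $result eq 'FAIL';
    }
    close $fh;
    return %fail;
}

Swapping which set gets written out (or merging 'best' results per serial, as in the original request) is just a matter of which hash you walk at the end.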
In reply to Re: If at first you don't succeed ... by graff
in thread If at first you don't succeed ... by pavium