The focus of this section is the testing of the CMOS digital logic portions of highly complex digital logic devices such as microprocessors and, more generally, the testing of logic cores that could stand alone or be integrated into more complex devices. Of primary concern are the trends of key logic test attributes, assuming the continuation of fundamental roadmap trends. The “high volume microprocessor” and “Consumer SoC” devices are chosen as the primary references because the most trend data is available for them. Specific test requirements for embedded memory (such as cache), I/O, mixed-signal, or RF are addressed in their respective sections and must also be comprehended when considering complex logic devices that contain these technologies.

Spreadsheet Table TST7 – Logic Assumptions

Spreadsheet Table TST8 – Logic Test Data Volume

Spreadsheet Table TST9 – Logic ATE Requirements

=== 1.1.1    High Volume Microprocessor Trends Drivers ===


The trends in Spreadsheet Table TST7 are extracted from other parts of the ITRS and are reproduced here to form the foundation of the key assumptions used to forecast future logic testing requirements. The first two line items in Spreadsheet Table TST7 show the trends of functions per chip (number of transistors) and chip size at production. Chip size in terms of area is held relatively constant aside from incremental yearly reductions within a process generation. The next line item reflects the trend toward multiple-core designs, which addresses, in part, what has become the primary microprocessor scaling constraint: the diminishing returns of clock frequency increases. The number of cores is expected to increase greatly with each process generation, and these will include multiple cores of a particular instruction set, but also other types of cores, such as graphics units (GPUs), specialized I/O units (e.g., USB controllers), and various other cores not necessarily specific to that microprocessor.

=== 1.1.2    System Trends Drivers ===

System trends drivers are very important to consider when projecting future test requirements. For example, one of the most critical system constraints is power consumption. The proliferation of mobile applications, lagging battery technology improvements, system power dissipation issues, and increased energy costs are all contributing to a practical cap on device power consumption. The era of device power increasing unconstrained with increasing performance is over. This does not necessarily mean that performance will be similarly capped, but this is one of the main challenges to be overcome if Moore’s Law is to continue. Innovations in transistors, process technology, design architecture, and system technologies (including 3-D chip stacking) could all have a major impact.

One system technology innovation that could impact test is the integration of voltage regulation on-chip or in-package. Increasing chip power and an increasing number of cores make this ever more likely, for at least two reasons. The first is that the package eventually limits power consumption by constraining the number of power/ground pins and the maximum current per pin. These constraints can be greatly eased with on-chip regulation, since power can then be delivered to the chip at a significantly higher voltage. The second is that multi-core architectures may necessitate more sophisticated independent power delivery to each core in order to fully optimize power consumption, and eventually it is likely that this will need to be done on-chip. Overall, this trend would simplify the problem of delivering power to devices under test, but it could also create many new test issues, since precise voltage control and current measurement have always been an important aspect of testing high-power devices.
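The pin-count arithmetic behind the first reason can be made concrete with a minimal sketch. All values below (device power, supply voltages, per-pin current limit) are illustrative assumptions rather than roadmap numbers; the point is only that delivering the same power at a higher voltage requires proportionally less current and therefore far fewer power pins.

```python
import math

def power_pins_needed(chip_power_w: float, supply_v: float,
                      max_current_per_pin_a: float) -> int:
    """Power pins required to carry the supply current (a comparable
    number of ground pins is typically needed as well)."""
    current_a = chip_power_w / supply_v  # I = P / V
    return math.ceil(current_a / max_current_per_pin_a)

# A 200 W device supplied at ~1 V core voltage, versus at 12 V with
# on-chip/in-package regulation stepping the voltage down internally.
print(power_pins_needed(200.0, 1.0, 1.0))   # 200 power pins at 1 V
print(power_pins_needed(200.0, 12.0, 1.0))  # 17 power pins at 12 V
```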

Another important system trend is the continuous increase in required chip-to-chip data bandwidth over time. This translates into increasing chip I/O data rates and/or an increasing number of I/O pins. In order to reliably achieve speeds much greater than one gigatransfer per second (GT/s), it is necessary to incorporate high-speed serial signaling techniques such as differential signaling or embedding the clock together with the data. Refer to the High Speed Input/Output Interface section for more detailed discussion of the test requirements and challenges for testing these interfaces.

One manifestation of increasing data bandwidth will likely be the use of 3-D stacked memory with high-density / low-power through-silicon via (TSV) interfaces.  This presents a number of test challenges, including the economic imperative to identify Known Good Die (KGD) prior to stacking, the inability to physically probe the TSVs, the associated loss of access to pins which are often re-purposed as scan inputs/outputs, and the increasing test cost associated with post-bond testing of a stacked system.

Because of the huge number of transistors able to be placed on current and future chips, producing a new chip design in a reasonable amount of time practically requires the use of an IP/SoC design methodology based on embedded cores. Such a core-based or IP/SoC design methodology allows some chip components to be designed by different groups or companies and then integrated into a single SoC. Using multiple instances of the same core design helps reduce the designer’s time and effort to apply the available transistors for meaningful benefit in the final product. Test can also exploit the use of cores by applying a hierarchical test methodology that isolates cores from surrounding logic using, for example, the IEEE 1500 standard for Embedded Core Testing. Much of the logic test analysis is based on the assumption that hierarchical, core-based testing is used for both microprocessor and SoC devices now and going forward. Specifically, the calculations for test time and test data volume assume that all embedded cores are wrapped and tested with a two-pass approach: first, the individual cores are tested with the wrappers in “inward-facing” (or INTEST) mode; then the connections and glue logic between the cores are tested with the wrappers in “outward-facing” (or EXTEST) mode.
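The two-pass approach can be sketched as a simple cycle-count model. The core parameters, the serial scheduling of the INTEST pass, and the per-pattern cost formula (one full chain shift plus a capture cycle) are illustrative assumptions for this sketch, not the published roadmap equations (those appear in Spreadsheet Table TST22).

```python
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    intest_patterns: int  # scan patterns applied in inward-facing (INTEST) mode
    chain_length: int     # scan chain length, i.e. shift cycles per pattern

def scan_cycles(patterns: int, chain_length: int) -> int:
    # Each pattern requires a full shift of the chain plus one capture cycle.
    return patterns * (chain_length + 1)

def two_pass_cycles(cores: list[Core], extest_patterns: int,
                    wrapper_chain_length: int) -> int:
    # Pass 1: each wrapped core tested in INTEST mode (scheduled serially
    # here; concurrent scheduling would reduce this total).
    pass1 = sum(scan_cycles(c.intest_patterns, c.chain_length) for c in cores)
    # Pass 2: connections and glue logic tested through the wrappers in EXTEST mode.
    pass2 = scan_cycles(extest_patterns, wrapper_chain_length)
    return pass1 + pass2

cores = [Core("cpu", 4_000, 800), Core("gpu", 6_000, 1_000)]
print(two_pass_cycles(cores, extest_patterns=1_500, wrapper_chain_length=300))
```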

=== 1.1.3    DFT Trends Drivers ===

In order for test cost not to increase proportionally with chip scale trends, continuing improvements to DFT coverage and effectiveness will be crucial. The general trend toward multiple reusable cores offers many exciting DFT possibilities. Could the cores be tested in parallel? Could the cores test each other? Could one or more cores be redundant so that “bad” cores could be disabled by test or even in end-use? Also, could there be opportunity for general purpose test cores—sort of an on-chip ATE? It is very difficult to determine exactly how this will evolve and how this will impact manufacturing test requirements. However, there are some clear trends:

·        When multiple identical cores are designed into a device, the trend for these cores to be tested in parallel while sharing the same scan data stream will continue. It is assumed here that there is an ongoing requirement for visibility into any faults on a per-core basis, which will require individual scan output data access to all cores – even identical copies of the same logic.

·        Structural, self-test, and test data compression techniques will continue to be used and will be important for containing test data volume increases as well as constraining the device I/O interface during test; in many cases it is expected that core designs will include such self-test and/or test data compression functions to be utilized when isolated from their surrounding logic during core INTEST modes of die testing. The use of scan compression on a per-core basis is assumed in the test cost projections.

·        DFT will continue to be essential for localizing failures to accelerate yield learning.

·        DFT is required to minimize the complexity of testing embedded technologies such as memories and I/O. For example, memory BIST engines are routinely integrated into the device to alleviate the need for external algorithmic pattern generator capability (a sketch of the kind of march algorithm such an engine applies follows this list) and, similarly, I/O DFT features (such as internal loopback BIST and eye diagram mapping circuits) are increasingly being employed to alleviate the need for sophisticated I/O testing capabilities in high volume manufacturing test.

·        DFT will increasingly be required to ensure deterministic device behavior or to accommodate non-deterministic device behavior. Power management features, I/O communication protocols, and device self-repair mechanisms are a few examples of desirable non-deterministic behaviors.
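As an illustration of the memory BIST example above, the sketch below runs the well-known March C- algorithm over a simple bit-per-address memory model. An on-chip BIST engine generates this kind of address/read/write sequence itself, which is what removes the need for an external algorithmic pattern generator; the memory model and function names here are purely illustrative.

```python
def march_c_minus(mem: list[int]) -> bool:
    """Apply March C- to a bit-per-address memory model; True if it passes."""
    n = len(mem)

    def element(order, expect, write_bit):
        # One march element: for each address in order, read and compare,
        # then overwrite with the complementary value.
        ok = True
        for a in order:
            ok &= (mem[a] == expect)
            mem[a] = write_bit
        return ok

    for a in range(n):                        # up(w0): initialize to 0
        mem[a] = 0
    ok  = element(range(n), 0, 1)             # up(r0, w1)
    ok &= element(range(n), 1, 0)             # up(r1, w0)
    ok &= element(reversed(range(n)), 0, 1)   # down(r0, w1)
    ok &= element(reversed(range(n)), 1, 0)   # down(r1, w0)
    ok &= all(mem[a] == 0 for a in range(n))  # down(r0)
    return ok

print(march_c_minus([0] * 1024))  # True for a fault-free memory model
```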

=== 1.1.4    ATE Trends Drivers ===

Automated Test Equipment (ATE) must continue to provide the necessary operating environment and interfaces to meet the test requirements of future SOCs. Drivers include both device parameters, such as I/O speeds, chip power and thermal environment, and test length, and test floor requirements, such as multi-site testing and tester footprint and cost.

I/O data rates are bounded on the low end by the need to provide slow-speed access for structural or DFT-based testing and on the high end by the native speeds and protocols of the chip interfaces. Requirements for cycle-accurate tester determinism may be relaxed with the advent of protocol-aware testers. Support for a wide variety of different I/O types, including on the same device, will be necessary through at least the near-term horizon.

There is a trend toward low-power design to serve the mobile client and dense server markets, and even the traditional high-performance microprocessor designs appear to be peaking at a power consumption during test of 400 W through the end of the roadmap (though it is likely that there will always be a segment of the microprocessor market which pushes the power envelope).  The low-power design trends include device states with specified power envelopes, necessitating accurate power measurement instrumentation which can be applied during various test conditions.

Test equipment vector memory requirements are expected to grow modestly over time, but are at risk if on-chip scan compression ratios hit a plateau.  The following section explains the context which drives the test data volume calculations.
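Before moving to that context, the compression-plateau risk can be illustrated with a first-order model of per-channel vector memory depth. The linear relationship and all parameter values below are assumptions for the sketch, not roadmap projections.

```python
def ate_vector_depth(total_scan_cells: int, compression_ratio: float,
                     scan_channels: int, n_patterns: int) -> float:
    # Compressed chain length seen by each tester channel.
    compressed_chain = total_scan_cells / (compression_ratio * scan_channels)
    # Vector memory depth per channel: patterns x compressed chain length.
    return n_patterns * compressed_chain

# If designs double in scan cell count while the compression ratio stays
# flat at 100x, the required per-channel vector depth doubles as well.
print(ate_vector_depth(50_000_000, 100, 8, 10_000))   # 6.25e8
print(ate_vector_depth(100_000_000, 100, 8, 10_000))  # 1.25e9
```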

=== 1.1.5    Logic Test Assumptions and Context ===

The scan test context for an individual core is shown in Figure TST17.

Figure TST17 – Scan test view of a core

Each scan-testable core is assumed to include scan compression hardware (a decompressor and a compressor, referred to collectively as a “codec”) which expands a small number of scan input channels into a large number of scan chains (each with a uniform length), then compacts those chains into a small number of scan output channels.  The core includes scan input pins and scan output pins for these scan channels.  If there are pipeline flops added within the codecs to support a desired scan shift frequency, these are simply added as overhead to the scan chain length.
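Under this codec model, a common first-order approximation of per-core scan data volume is the product of pattern count, channel count, and effective chain length. The formula and parameter values below are illustrative assumptions, not the roadmap’s published equations.

```python
def core_scan_data_bits(n_patterns: int, scan_in_channels: int,
                        scan_out_channels: int, chain_length: int,
                        codec_pipeline_stages: int = 0) -> int:
    # Pipeline flops in the codec are modeled as added scan chain length,
    # as described above.
    shift_cycles = chain_length + codec_pipeline_stages
    # Stimulus bits shifted in plus response bits shifted out, per pattern.
    per_pattern_bits = (scan_in_channels + scan_out_channels) * shift_cycles
    return n_patterns * per_pattern_bits

# e.g. 10,000 patterns, 8 input and 8 output channels, chains of length 500,
# with 2 codec pipeline stages folded into the chain length:
print(core_scan_data_bits(10_000, 8, 8, 500, 2))  # 80,320,000 bits
```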

An SOC consists of a number of cores, as shown in Figure TST18.

Figure TST18 – SOC containing many scan-testable cores

The SOC consists of a number of scan-testable cores. In order to provide stimulus to the cores and observe responses from them, the chip pins must be connected to the core scan-in and scan-out ports. There are a number of techniques used for this purpose, including the use of dedicated test pins, the re-use of functional chip pins in test mode, or combinations thereof. Figure TST18 shows networks between the pins and the cores (on both the input and output sides) which are responsible for mapping chip pins to core scan ports. If there are enough chip pins to connect to all the core scan ports, then these networks are trivial (i.e., wires). However, it is often the case that there are insufficient chip pins, so the networks may implement schemes which share, multiplex, sequence, combine, or otherwise map the pins to the cores. The approach chosen has an impact on the test application time and test data volume. However, for the purposes of keeping the projections independent of this network architecture, the calculations simply model the situation by assuming that the number of chip pins is sufficient to connect to all the scan ports of all the cores. Compensation for the pin shortfall is then made by allowing the length of the scan chains to increase. While this isn’t physically accurate – cores come with chains of a pre-determined length – it does result in the correct calculation of test data volume without nearly as much complexity in the model.
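That modeling simplification can be sketched as follows: when the chip pins cannot serve all core scan channels at once, the serialization is folded into a longer effective chain length so that total data volume and shift cycles still come out correctly. The function and parameter names are illustrative.

```python
import math

def effective_chain_length(total_core_scan_channels: int, chip_scan_pins: int,
                           physical_chain_length: int) -> int:
    if chip_scan_pins >= total_core_scan_channels:
        return physical_chain_length  # trivial network: just wires
    # Otherwise the cores must be shared/sequenced across the available
    # pins; the model folds that serialization into a longer apparent chain.
    sessions = math.ceil(total_core_scan_channels / chip_scan_pins)
    return physical_chain_length * sessions

# 640 core scan channels but only 160 chip scan pins: the model behaves as
# if all 640 channels were connected, with chains four times as long.
print(effective_chain_length(640, 160, 500))  # 2000
```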

One important variant of this architecture is shown in Figure TST19, where there are multiple identical copies of Core_2.

Figure TST19 – SOC with multiple copies of Core_2

Multiple identical cores offer an opportunity for significant test cost savings, since the copies may all be tested in parallel. There are a number of methods to accomplish this, the most common of which is illustrated in Figure TST19: the inputs to all identical copies of the core are broadcast, while the outputs from each core are uniquely observed. This can lead to the need for a large number of chip output pins to perform all tests at once. If the number of SoC pins is less than the number of core pins, the tests for the cores must be broken into a number of sessions. However, as noted above, this is handled in the calculations via the trick of simply pretending that the pins are there but that the scan chains are longer.
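A sketch of the session arithmetic for identical cores follows. Since inputs are broadcast, only the uniquely observed outputs constrain how many copies can be tested at once; all parameter values are illustrative assumptions.

```python
import math

def broadcast_sessions(n_identical_cores: int, out_channels_per_core: int,
                       chip_output_pins: int) -> int:
    # Inputs are shared by broadcast, so the number of copies testable in
    # parallel is limited only by the chip output pins available.
    cores_per_session = max(1, chip_output_pins // out_channels_per_core)
    return math.ceil(n_identical_cores / cores_per_session)

# 16 identical cores with 8 output channels each and 64 chip output pins:
# 8 cores per session, hence 2 sessions (equivalently, the model doubles
# the effective scan chain length and keeps all the pins).
print(broadcast_sessions(16, 8, 64))  # 2
```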

The final concept to set the context for the calculations is the separation of core logic from random logic, as illustrated in Figure TST20.

Figure TST20 – Core logic vs. random logic between cores

In addition to the core-level scan tests (sometimes called “inward-facing” or “intest” modes), there are also tests for the random (or “glue”) logic between cores.  These tests utilize the wrapper flops inside the cores in their “outward-facing” or “extest” mode to exercise the random logic.  The calculations separate core logic from random logic.

=== 1.1.6    Logic Test Calculations ===

The assumptions used for the test data volume calculations are shown in Spreadsheet Table TST7.

The test data volume projections are shown in Spreadsheet Table TST8.

New this year, we have published the equations behind the various logic table calculations in Spreadsheet Table TST22.

Graphs of some of the key conclusions from this work are shown in Spreadsheet Table TST21.

One of the key results of the projections is shown in the graph of test data volume in Figure TST21.

Figure TST21 - Test data volume trend over time

Rather than the historically steady increase in test data volume, the coming trend toward the use of many copies of identical cores (and the associated opportunity for concurrent testing of those cores) will significantly flatten out the growth over time. This trend is more apparent in highly homogeneous server MPU devices and less impactful on SOC-CP devices, which tend to be more heterogeneous.

The projected ATE needs in terms of pattern depth and test times are shown in Spreadsheet Table TST9.