Optimize Yields for Semiconductor Manufacturing
Applying predictive failure analysis to statistical models makes it possible to simulate processor behavior
A processor consists of billions of transistors, and each transistor's geometry depends on its function. Each unique transistor shape can have distinct physical characteristics, for example in current-carrying capacity and switching speed, and will furthermore exhibit its own variations in the manufacturing process. Transistors are grouped into small cells and blocks to create logical and computational units called standard cells, which can perform functions such as storing bits, performing logical comparisons, and evaluating numerical expressions.
When transistors are placed into standard cells, their behaviors also change, because the surrounding patterns of transistors and wires affect both behavior and statistical variation. Circuit designers face the formidable challenge of comprehending these variations and designing circuits that work well under large variations in the manufacturing process and in field operating conditions. These conditions include extreme temperature fluctuations that can cause significant changes in transistor and circuit behavior, which must also be accounted for. The key is to have statistical mathematical models of how the transistors and standard cells behave, collected into what is commonly referred to as a process design kit (PDK).
To help ensure the models in the PDK are accurate, every processor is typically manufactured with a set of experimental structures. These structures represent the patterns of transistors and standard cells on the processor, and they can be measured during its testing. Often, millions of data points per processor are collected across millions of different processors. As a result, the amount of data is staggering, and all of it must be collected, analyzed, and used to adjust the models in the PDK that designers rely on. This is where parallel processing and analysis of the data on the cloud can play a major role. With each generation of integrated circuit manufacturing, increasing amounts of data need to be collected and analyzed.
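As a rough illustration of this parallel analysis step, the sketch below fans per-wafer summarization out to worker processes, so that only compact statistics (rather than raw samples) flow back for model fitting. The data, wafer counts, and `summarize` function are all hypothetical stand-ins; a real flow would pull measurements from test equipment and run across a cloud cluster rather than local processes.

```python
import random
import statistics
from multiprocessing import Pool

# Hypothetical sketch of the parallel measurement-analysis step: each
# worker summarizes the test-structure readings from one "wafer" so only
# compact statistics, not raw samples, are returned for PDK model fitting.
# The readings are synthetic; real flows ingest tester measurements.

def summarize(wafer_id):
    rng = random.Random(wafer_id)
    # stand-in for millions of on-die test-structure measurements
    readings = [rng.gauss(1.0, 0.05) for _ in range(100_000)]
    return wafer_id, statistics.fmean(readings), statistics.stdev(readings)

if __name__ == "__main__":
    # distribute wafers across worker processes and collect summaries
    with Pool(processes=4) as pool:
        for wafer, mean, sigma in pool.map(summarize, range(8)):
            print(f"wafer {wafer}: mean={mean:.4f} sigma={sigma:.4f}")
```

The same map-then-reduce shape scales naturally to cloud batch frameworks, since each wafer's summary is independent of the others.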
During integrated circuit design, the statistical models in the PDK are used to simulate how the entire processor, composed of millions of standard cells, will behave. This part of the problem is called predictive failure analytics for the integrated circuit design, and it has recently become a critical factor in the design of leading-edge products in 20 nm and 16 nm manufacturing processes. It is expected to become even more crucial for future 12 nm and 10 nm manufacturing nodes. (For more information on predictive failure analytics, see the sidebar, “Predictive Failure Analytics for Semiconductor Design.”)
The simulation problem of capturing the behavior of the processor, even without statistical variations, is quite astounding. A system of differential equations is built with hundreds of millions or even billions of variables, each representing a node voltage in the circuit. Each node voltage is itself a function of several transistors, whose behavior is described by complex nonlinear equations with hundreds of parameters and thousands of lines of code. This system is then solved across tens of thousands of time points to capture the circuit behavior during multiple clock cycles of the processor.
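A toy version of this solve can make the math concrete. The sketch below simulates a single circuit node (a capacitor charged through a resistor, with a diode-style nonlinear element) using the standard approach of an implicit time step plus Newton iteration on the nonlinear node equation. All component values are illustrative, not drawn from any real PDK; a production simulator couples millions of such nodes in one Jacobian.

```python
import math

# Toy transient simulation of one circuit node: a capacitor C charged
# through resistor R from supply VDD, with a diode-like nonlinear leak.
# Real simulators solve millions of coupled node equations; this sketches
# the per-node math: backward Euler in time, Newton's method per step.
# All component values are illustrative, not from any real PDK.
VDD, R, C = 1.0, 1e3, 1e-12        # supply (V), resistance (ohm), capacitance (F)
I_S, V_T = 1e-14, 0.025            # diode saturation current, thermal voltage

def residual(v_new, v_old, dt):
    # KCL at the node: capacitor current + diode current - resistor current = 0
    i_cap = C * (v_new - v_old) / dt
    i_diode = I_S * (math.exp(v_new / V_T) - 1.0)
    i_res = (VDD - v_new) / R
    return i_cap + i_diode - i_res

def jacobian(v_new, dt):
    # derivative of the residual with respect to v_new
    return C / dt + (I_S / V_T) * math.exp(v_new / V_T) + 1.0 / R

def step(v_old, dt, tol=1e-12, max_iter=50):
    # Newton iteration for the implicit (backward Euler) update
    v = v_old
    for _ in range(max_iter):
        f = residual(v, v_old, dt)
        v -= f / jacobian(v, dt)
        if abs(f) < tol:
            break
    return v

v, dt = 0.0, 1e-11
for _ in range(2000):               # 20 ns of simulated time
    v = step(v, dt)
print(f"node voltage after transient: {v:.4f} V")
```

Here the node charges toward the supply until the diode clamps it near its forward voltage. Scaling this idea to billions of coupled variables is what makes full-processor simulation so demanding.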
To allow run times of less than one week for such demanding simulations, powerful multi-core servers are deployed, often with 64 cores or more. However, that effort captures the behavior of the circuit at only one sample point in the manufacturing process. To accurately predict circuit behaviors for memory circuits with hundreds of millions of components, the complex simulation process must be repeated thousands or even billions of times, depending on the yield requirements of the circuit. The case in which billions of sample points are needed is called a high sigma problem, and it requires special mathematical techniques that can reduce the number of samples from billions to thousands to make the problem tractable.
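One classic sample-reduction technique of this kind is importance sampling, sketched below on a deliberately simplified failure model: a device fails when a standardized parameter drifts past 5 sigma, an event with probability around 3 in 10 million. Naive Monte Carlo would need billions of trials to observe enough failures; sampling from a distribution shifted toward the failure region and reweighting each draw recovers the same estimate from a few thousand samples. The threshold, shift, and sample count are illustrative choices, not the specific algorithm any particular yield tool uses.

```python
import math
import random

# Toy "high sigma" estimate: probability that a standardized device
# parameter exceeds a 5-sigma failure threshold. Importance sampling
# shifts the sampling mean onto the threshold and corrects each draw
# with a likelihood ratio, so thousands of samples replace billions.
random.seed(0)
THRESHOLD = 5.0          # failure if the standardized parameter exceeds 5 sigma
N = 20000                # thousands of samples instead of billions
SHIFT = THRESHOLD        # draw from N(SHIFT, 1) instead of N(0, 1)

total = 0.0
for _ in range(N):
    x = random.gauss(SHIFT, 1.0)
    if x > THRESHOLD:
        # likelihood ratio N(0,1)/N(SHIFT,1) corrects for the shifted draw
        weight = math.exp(-x * x / 2.0) / math.exp(-((x - SHIFT) ** 2) / 2.0)
        total += weight
p_fail = total / N

# exact standard-normal tail probability, for comparison
p_exact = 0.5 * math.erfc(THRESHOLD / math.sqrt(2.0))
print(f"importance sampling: {p_fail:.3e}, exact: {p_exact:.3e}")
```

With the shifted distribution, roughly half the draws land in the failure region, so the estimator converges to within a few percent of the exact tail probability at this sample count.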
Optimal circuit behavior
The end result is that circuit behaviors can be predicted and adjusted to work optimally under the wide range of operating conditions and the great variations inherent in the manufacturing process. Without this capability, the processor would either not work at all or have a very low yield, rendering costs prohibitive. The underlying high sigma algorithms used in predictive yield analytics for integrated circuits are generic and can be applied to big data analysis in other fields, especially those in which the modeling and simulation challenges are very complex. These fields include integrated circuit design and manufacturing in medical and aerospace applications. The techniques and framework would be very amenable to cloud computing architectures, both for scalability of processing power and data handling, and for enabling such analysis for organizations that would otherwise not have the means.
Please share any thoughts or questions in the comments.