Optimize Yields for Semiconductor Manufacturing

Applying predictive failure analysis and statistical models to simulate processor behavior

Cloud and mobile computing are expected to be the dominant computing platforms in the near future, and big data—together with predictive analytics—is likely to play a big role in driving productivity and profitability for organizations spanning a wide range of industries. At the same time, low-power and high-performance integrated circuits provide the foundation of this revolution, and the design and manufacturing of these integrated circuits are entering an advanced, revolutionary phase.

Moreover, as the atomic limits of transistors are approached, local atomic-scale random variations in the manufacturing process make the cost-effective production of integrated circuits extremely challenging. And this challenge is especially formidable considering the simultaneous requirements of high performance, low power, and low cost—the essential characteristics of both the cloud computing backplane and mobile smartphone and tablet systems.

Fortunately, combining predictive analytics and data mining techniques to analyze and optimize integrated circuit designs can maximize yield and help overcome these challenges. With each new process generation, inaccuracies in the models and variations in the process become more pronounced, and designers are forced to understand the variability effects in a processor with greater accuracy and fidelity and to consider more physical effects than ever before.

 


Sneak Preview

Predictive Failure Analytics for Semiconductor Design

Attendees of Insight 2014, October 26–30 at Mandalay Bay in Las Vegas, Nevada, can learn much more about this topic at session TMA-6770A, “Predictive Failure Analytics: How to Use Big Data to Optimize Yields in Semiconductor Manufacturing.” This session offers a deep dive into applying data mining and advanced predictive analytics to help diagnose and improve yield for semiconductor design and manufacturing.

In this case study approach, predictive failure analytics is used to optimize critical components of integrated circuits and to handle the massive amounts of data arising from monitoring and modeling the manufacturing process. Server farms are used for parallel processing, and the technique can handle large numbers of process and design variables along with massive amounts of data, demonstrating its ability to cope with high dimensionality. The algorithms are generic and can be applied to a wide range of fields, including medicine, finance, and other areas of engineering.

Staggering proportions

A processor consists of billions of transistors, each with different geometries and shapes depending on its function. Each unique transistor shape can result in distinct physical characteristics—for example, current-carrying capability and switching speed—and distinctive variations in the manufacturing process. Transistors are grouped into small cells and blocks to create logical and computational units called standard cells, which can perform functions such as storing bits, performing logical comparisons, and evaluating numerical expressions.

When transistors are put into standard cells, their behaviors also change, because the patterns of transistors and wires can affect both behavior and statistical variation. Circuit designers face the difficult task of comprehending these variations and designing circuits that will work well despite large variations in the manufacturing process and in operating conditions in the field. These conditions include extreme temperature fluctuations that can cause significant changes in transistor and circuit behavior and must also be accounted for. The key is to have statistical mathematical models of how the transistors and standard cells behave, collected into what is commonly referred to as a process design kit (PDK).
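
To make the idea of a statistical device model concrete, here is a minimal sketch, assuming a purely hypothetical transistor described only by a threshold-voltage distribution and a textbook square-law current equation. Real PDK models are vastly more detailed, but the principle of sampling device parameters from measured distributions is the same.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical statistical parameters for one transistor geometry.
# A production PDK encodes hundreds of parameters per device; this
# sketch keeps only a nominal threshold voltage and its process sigma.
VTH_NOMINAL = 0.45   # volts
VTH_SIGMA = 0.03     # volts, local random variation
K_PRIME = 2.0e-4     # A/V^2, simplified transconductance factor

def sample_drain_current(vgs, n_samples=10_000):
    """Monte Carlo samples of saturation drain current under threshold-voltage
    variation, using a textbook square-law model as a stand-in for real
    PDK device equations."""
    vth = rng.normal(VTH_NOMINAL, VTH_SIGMA, size=n_samples)
    overdrive = np.clip(vgs - vth, 0.0, None)  # device is off when vgs < vth
    return 0.5 * K_PRIME * overdrive**2

currents = sample_drain_current(vgs=0.8)
print(f"mean Id = {currents.mean():.3e} A, sigma = {currents.std():.3e} A")
```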

To help ensure the models in the PDK are accurate, every processor is typically manufactured with a set of experimental structures. These structures represent the patterns of transistors and standard cells on the processor, and they can be measured during testing. Often, millions of data points per processor are collected across millions of different processors. As a result, the amount of data is staggering, and all of it must be collected, analyzed, and used to adjust the models in the PDK that designers rely on. This is where parallel processing and analysis of the data in the cloud can play a major role. With each generation of integrated circuit manufacturing, increasing amounts of data need to be collected and analyzed.
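
As a hedged illustration of that data-reduction step, the sketch below fans synthetic per-batch measurements out to worker processes and merges the batch summaries into the mean and sigma a statistical model would be adjusted to reflect. The batch generation, parameter names, and numbers are invented for the example; a production flow would read real test databases and fit many parameters at once.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def chunk_stats(seed):
    """Summarize one batch of (synthetic) test-structure measurements.
    Returns count, sum, and sum of squares so batches combine exactly."""
    data = np.random.default_rng(seed).normal(0.46, 0.028, size=1_000_000)
    return data.size, data.sum(), np.square(data).sum()

def combine(stats):
    """Merge per-batch summaries into a global mean and sigma for the model."""
    n = sum(s[0] for s in stats)
    total = sum(s[1] for s in stats)
    total_sq = sum(s[2] for s in stats)
    mean = total / n
    sigma = np.sqrt(total_sq / n - mean**2)
    return mean, sigma

if __name__ == "__main__":
    # Each "batch" stands in for one wafer lot's worth of measurements;
    # in practice the batches would be read from test databases, not simulated.
    with ProcessPoolExecutor() as pool:
        summaries = list(pool.map(chunk_stats, range(16)))
    mean, sigma = combine(summaries)
    print(f"updated Vth model: mean = {mean:.4f} V, sigma = {sigma:.4f} V")
```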

During the design of integrated circuits, the statistical models in the PDK are used to simulate how an entire processor composed of millions of standard cells will behave. This part of the problem is called predictive failure analytics for integrated circuit design, and it has recently become a critical factor in the design of leading-edge products in 20 nm and 16 nm manufacturing processes. It is expected to become even more crucial for future 12 nm and 10 nm manufacturing nodes. (For more information on predictive failure analytics, see the sidebar, “Predictive Failure Analytics for Semiconductor Design.”)

Even without statistical variations, the simulation problem for capturing the behavior of the processor is astounding. A differential equation is built with hundreds of millions or even billions of variables, each representing a node voltage in the circuit. Each node voltage is in turn a function of several transistors, and each transistor's behavior is described by a complex nonlinear equation with hundreds of parameters and thousands of lines of code. This system is then solved across tens of thousands of time points to capture the circuit behavior during multiple clock cycles of the processor.
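
To give a sense of what that computation looks like, here is a drastically shrunken sketch: a single node driven through a resistor, with a capacitor and a diode standing in for a transistor's nonlinear current, time-stepped with backward Euler and a Newton solve at each time point. The circuit and numbers are invented for illustration; a real simulator solves millions of such coupled equations with far richer device models.

```python
import numpy as np

# One-node stand-in for the full problem: a supply driving a node through
# a resistor, with a capacitor and a nonlinear device (a diode here, in
# place of a full transistor model) pulling current to ground.
R, C = 1e3, 1e-12           # ohms, farads
IS, VT = 1e-14, 0.0259      # diode saturation current (A), thermal voltage (V)
VSUPPLY = 1.0               # volts
DT, N_STEPS = 1e-12, 2000   # time step (s) and number of time points

def solve_node_voltage(v_prev):
    """Backward-Euler update for one time step, solved with Newton iteration.
    In a real simulator this Newton solve spans millions of coupled nodes."""
    v = v_prev
    for _ in range(50):
        f = C * (v - v_prev) / DT + (v - VSUPPLY) / R + IS * (np.exp(v / VT) - 1)
        df = C / DT + 1 / R + (IS / VT) * np.exp(v / VT)
        step = f / df
        v -= step
        if abs(step) < 1e-12:
            break
    return v

v = 0.0
waveform = []
for _ in range(N_STEPS):
    v = solve_node_voltage(v)
    waveform.append(v)

print(f"node voltage settles near {waveform[-1]:.4f} V after {N_STEPS} steps")
```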

To keep run times under one week for such demanding simulations, powerful multi-core servers are deployed, often with 64 cores or more. However, that effort captures the behavior of the circuit at only one sample point in the manufacturing process. To accurately predict the behavior of memory circuits with hundreds of millions of components, the complex simulation must be repeated thousands or even billions of times, depending on the yield requirements of the circuit. The case in which billions of sample points are needed is called a high-sigma problem, and it requires special mathematical techniques that reduce the number of samples from billions to thousands to make the problem tractable.
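
As a hedged sketch of why such sample-reduction techniques matter, the example below uses a toy failure model invented purely for illustration: a single normalized Gaussian parameter whose excursion beyond five sigma counts as a failure. Plain Monte Carlo at a realistic sample count typically sees no failures at all, while mean-shifted importance sampling, one common approach to rare-event estimation, concentrates its samples near the failure region and reweights them to recover the tiny probability with far fewer samples.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

FAIL_THRESHOLD = 5.0   # failure when the normalized parameter exceeds 5 sigma
N_SAMPLES = 200_000

# True failure probability for this toy model, for reference.
p_true = norm.sf(FAIL_THRESHOLD)

# Plain Monte Carlo: at ~2.9e-7 true probability, 200k samples will
# usually see zero failures and estimate the yield loss as exactly 0.
x_mc = rng.standard_normal(N_SAMPLES)
p_mc = np.mean(x_mc > FAIL_THRESHOLD)

# Mean-shifted importance sampling: draw from N(5, 1) so failures are
# common, then reweight each sample by the likelihood ratio.
shift = FAIL_THRESHOLD
x_is = rng.normal(shift, 1.0, N_SAMPLES)
weights = norm.pdf(x_is) / norm.pdf(x_is, loc=shift)
p_is = np.mean((x_is > FAIL_THRESHOLD) * weights)

print(f"true probability:           {p_true:.3e}")
print(f"plain Monte Carlo estimate: {p_mc:.3e}")
print(f"importance sampling:        {p_is:.3e}")
```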

Optimal circuit behavior

The end result is that circuit behaviors can be predicted, and designs adjusted, to work optimally under the wide range of operating conditions and the great variations inherent in the manufacturing process. Without this capability, the processor would either not work at all or have such poor yield that costs would be prohibitive. The underlying high-sigma algorithms used in predictive yield analytics for integrated circuits are generic and can be applied to big data analysis in other fields, especially those in which the modeling and simulation challenges are very complex, such as integrated circuit design and manufacturing for medical and aerospace applications. The techniques and framework are also well suited to cloud computing architectures, both for scalability of processing power and data handling and for enabling such analysis for organizations that would otherwise not have the means.

Please share any thoughts or questions in the comments.