
Data Science Applications in Semiconductor Manufacturing

In this post, we will look at how data science can be used to improve mechanical and materials engineering in the semiconductor manufacturing industry by summarizing the work that Pivotal’s Data Science team did for a real-world customer.


The Objective

As in any manufacturing or engineering process, it is always best to “fail fast”. Early detection saves wasted materials and effort, and can also be invaluable for safety. While working with a semiconductor company, we set out to systematically identify defect patterns on a wafer earlier in manufacturing and tie them back to the production process for root cause analysis.


For background, a few vocabulary terms are useful for this case study:

  • Die: a small block of semiconducting material on which a circuit is fabricated.

  • Wafer: a thin substrate that holds multiple dies.

  • Wafer prober: a machine used to test the dies on a wafer for functional defects.


Wafers move through a long series of manufacturing steps in groups of approximately 25. The wafers are tested to characterize each die as good or bad, generating a wafer bin map (WBM) that shows the specific test for which a die has failed. If the fraction of non-functional dies on a wafer exceeds a threshold, the wafer is discarded.


The Process

As illustrated in the diagram below, the process is a multi-step pipeline of de-noising, preprocessing, feature extraction, dimensionality reduction, outlier detection, and clustering, designed to improve yield and profitability.



De-Noising and Preprocessing

For simplicity, each die is considered failed if it fails at least one test; otherwise, it is considered functional. Fails are marked as 1, and passes are marked as 0. Since we are working to identify a pattern in the failures, we need to reduce the noise and enhance the signal.


To reduce the noise, we used a median filtering technique, where the median value of die failures in a bin neighborhood is used to replace the central bin value. 
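This step can be sketched in a few lines, assuming a hypothetical wafer bin map (the `wbm` array below is synthetic, not data from the study) and using SciPy's `median_filter` over a 3×3 neighborhood:

```python
import numpy as np
from scipy.ndimage import median_filter

# Hypothetical 2-D wafer bin map: 1 = failed die, 0 = passing die.
rng = np.random.default_rng(0)
wbm = (rng.random((40, 40)) < 0.05).astype(int)  # sparse random "noise" failures
wbm[18:22, 18:22] = 1                            # a contiguous defect cluster in the center

# Replace each value with the median of its 3x3 neighborhood:
# isolated failures vanish, while contiguous defect regions survive.
denoised = median_filter(wbm, size=3)
```

After filtering, the scattered single-die failures are suppressed and the central defect cluster remains, which is exactly the signal-enhancing behavior described above.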


Fig. 1: Wafer bin map images before (left) and after (right) denoising. Blue denotes that a die on the wafer has failed, while red denotes that it has passed.

Feature Extraction

Next, we need to extract features from each wafer by creating a feature vector. Scanning from the top-left position to the bottom-right position, we mapped each wafer to a 1519-dimensional binary feature vector representing the positions of die failures on the wafer.
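A minimal sketch of this flattening step, using a hypothetical rectangular wafer map (a real wafer is roughly circular, which is why the study's vector has 1519 valid die positions rather than a perfect square count):

```python
import numpy as np

# Hypothetical denoised wafer bin map (1 = failed die, 0 = passing die).
wbm = np.zeros((39, 39), dtype=int)
wbm[19, 19] = 1  # a single failure near the center

# Scan top-left to bottom-right: row-major flattening yields the
# binary feature vector of die-failure positions.
feature_vector = wbm.ravel()
```

Each wafer thus becomes one fixed-length binary vector, so all wafers can later be stacked into a single matrix.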


Dimensionality Reduction

Since 1519 dimensions are a lot to work with, we used a dimensionality reduction technique called Non-negative Matrix Factorization (NMF). The advantages here are threefold: we can account for collinearity among the dies, reduce the computational complexity, and visualize the data more effectively. In this step, the feature vectors from all the wafers are first arranged in the form of a matrix.
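The factorization can be sketched with scikit-learn's `NMF`; the data below is a random stand-in for the real 130-wafer matrix, and the choice of two components is purely for visualization:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 130 wafers x 1519 binary failure features.
X = (rng.random((130, 1519)) < 0.1).astype(float)

# Factor X ~ W @ H: each wafer is then described by k non-negative
# weights over k learned "defect pattern" components.
k = 2  # two components, convenient for 2-D visualization
model = NMF(n_components=k, init="random", random_state=0, max_iter=500)
W = model.fit_transform(X)   # (130, k) reduced wafer representations
H = model.components_        # (k, 1519) non-negative basis patterns
```

The non-negativity of both factors is what makes the components interpretable as additive defect patterns, which is the main reason to prefer NMF over SVD or PCA here.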


Note: Alternative techniques for dimensionality reduction, such as Singular Value Decomposition (SVD) or Principal Component Analysis (PCA), could also have been used.


Fig. 2:  Visualizing 130 wafers in two dimensions after dimensionality reduction.



Outlier Detection and Clustering

Since outliers do not fall into any specific pattern, we next needed to remove them from the wafer data. To do so, each wafer is first represented in a lower-dimensional space with K dimensions, using the NMF technique described above. For visualization purposes, the data is reduced to a two-dimensional space, which reveals clear outliers in the data.


For each point, the sum of Euclidean distances to every other point is calculated. Points that lie far from all other points receive higher scores and are deemed outliers. The outlier scores for each of the 130 wafers are shown in Fig. 3.
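A minimal sketch of this scoring scheme, on hypothetical 2-D wafer representations with one deliberately planted outlying point:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
# Hypothetical reduced representations: 129 wafers clustered near the
# origin, plus one far-away point standing in for an outlying wafer.
W = np.vstack([rng.normal(0, 1, size=(129, 2)), [[10.0, 10.0]]])

# Outlier score of each wafer: sum of Euclidean distances to all others.
scores = cdist(W, W).sum(axis=1)
```

The planted point accumulates a large distance to every other wafer, so its score dominates and it would be the first wafer removed.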


Note: Alternative outlier detection measures like Local Outlier Factor (LOF) can also be used, where distance from K-nearest neighbors is used to calculate the density of a point and detect outliers.
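For comparison, the LOF alternative mentioned in the note can be sketched with scikit-learn's `LocalOutlierFactor` on the same kind of hypothetical data:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
# Same hypothetical setup: 129 clustered wafers plus one planted outlier.
W = np.vstack([rng.normal(0, 1, size=(129, 2)), [[10.0, 10.0]]])

# LOF compares each point's local density to that of its k nearest
# neighbors; fit_predict returns -1 for points in regions much sparser
# than their neighborhood, and 1 for inliers.
labels = LocalOutlierFactor(n_neighbors=20).fit_predict(W)
```

Unlike the global sum-of-distances score, LOF is density-based, so it can also flag points that sit close to the data overall but far from any dense cluster.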


Fig. 3: Outlier scores for 130 wafers using Euclidean distance.


Once the outlying wafers are removed, the remaining wafers are clustered to obtain wafer groups. A k-means clustering algorithm, available through the incubating Apache MADlib project, is used to group the wafers into 20 clusters with random initial seeding. As shown in Fig. 5, wafers with similar defect patterns were grouped into one cluster.


Note: We chose the number 20 somewhat arbitrarily; however, a simulation-based approach could be used to tune this parameter.
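The study ran k-means in Apache MADlib; the same step can be sketched with scikit-learn's `KMeans` on hypothetical reduced features (the array below is random stand-in data, not the study's wafers):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical reduced wafer features after outlier removal.
W = rng.random((120, 5))

# Group the wafers into 20 clusters with random initial seeding,
# mirroring the MADlib k-means step described above.
km = KMeans(n_clusters=20, init="random", n_init=10, random_state=0)
cluster_ids = km.fit_predict(W)  # one cluster label per wafer
```

Wafers assigned the same label share a similar failure signature, which is what allows each cluster to be inspected as a candidate defect pattern.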


Fig. 5: Wafers belonging to a single cluster share the same defect pattern (a defect in the center). Blue denotes that a die has failed, while red denotes that it passed.




Next Steps

Once we established the defect patterns from the wafers, we were able to correlate these failures back to the manufacturer's specific process parameters for root cause analysis, and thereby improve the overall profitability of the manufacturing process.


For more on this case study, see the full story on the Pivotal Data Science blog.
