IEEE TRANSACTIONS ON COMPONENTS, PACKAGING, AND MANUFACTURING TECHNOLOGY—PART C, VOL. 20, NO. 4, OCTOBER 1997
Equipment Fault Detection Using Spatial Signatures

Martha M. Gardner, Jye-Chyi Lu, Ronald S. Gyurcsik, Member, IEEE, Jimmie J. Wortman, Senior Member, IEEE, Brian E. Hornung, Member, IEEE, Holger H. Heinisch, Student Member, IEEE, Eric A. Rying, Student Member, IEEE, Suraj Rao, Joseph C. Davis, Member, IEEE, and Purnendu K. Mozumder, Senior Member, IEEE
Abstract—This paper describes a new methodology for equipment fault detection. Its key features are that it incorporates spatial information and that it can be used to detect and diagnose equipment faults simultaneously. The methodology consists of constructing a virtual wafer surface from spatial data and using physically based spatial signature metrics to compare the virtual wafer surface to an established baseline process surface in order to detect equipment faults. Statistical distributional studies of the spatial signature metrics provide the justification for determining the significance of a spatial signature. Data collected from a rapid thermal chemical vapor deposition (RTCVD) process and from a plasma enhanced chemical vapor deposition (PECVD) process are used to illustrate the procedures. The method detected equipment faults for all 11 wafers that were subjected to induced equipment faults in the RTCVD process, and diagnosed the type of equipment fault for 10 of these wafers. It also detected 42 of 44 induced equipment faults in the PECVD process.

Index Terms—Equipment fault diagnosis, process improvement, simulation, statistical metrology.
I. INTRODUCTION

EQUIPMENT faults are often the cause of major variations in semiconductor manufacturing processes. Considering the expense of processing, these variations can cause dramatic yield losses [1]. Traditionally, the mean or signal-to-noise ratio of the wafer surface data is modeled, and the resulting model is used to detect equipment faults according to statistical process control (SPC) techniques; however, as wafer sizes increase and film thicknesses are reduced, the use of integrated spatial information will have a greater impact on detecting equipment faults.

The use of site-specific models has been shown to have better sensitivity, with respect to spatially dependent process variations, than mean-based models [1]. However, detection of equipment faults from models based on data from different sites can give inconsistent results; i.e., some site models may detect a certain type of equipment fault while other site models do not [1]. Saxena et al. [2] have used a monitor wafer controller (MWC) to address this problem to some degree. It has also been shown that the use of a virtual wafer surface, rather than specific sites on a wafer, captures even more information about the spatial signatures generated by different equipment conditions [3]. Kibarian and Strojwas [4] have also developed models which account for spatial dependencies and shown how these models can be used to separate spatial dependencies from other causes.

The detection and diagnosis of equipment faults in semiconductor processes is usually a two-step procedure: detection refers to identifying that an equipment fault has occurred, whereas diagnosis refers to classifying the fault, and the two steps typically use different methods. Current research in the literature has concentrated on equipment fault diagnosis rather than on detecting the existence of equipment faults. For example, pattern recognition techniques including statistical discriminant analysis [1], fuzzy logic [5], and neural networks [6] have been used for diagnosis purposes. Hu et al. [7], Butler and Stefani [8], and Bombay and Spanos [9] have applied empirical (or semi-empirical) polynomial modeling techniques to relate process outputs to process settings, and May and Spanos [10] have used evidential reasoning to integrate in-line, off-line, and maintenance data for fault diagnosis. However, methods such as statistical discriminant analysis do not make use of the spatial information and the physical knowledge of equipment faults.

The equipment fault detection methodology described in this paper is unique not only in that it incorporates integrated spatial information in a virtual wafer surface, but also in that it can be used to detect and classify equipment faults at the same time. The main focus of this paper is using the spatial signatures of the differences between observed and expected virtual wafer surfaces to construct physically based metrics which can be used to detect and diagnose various types of equipment faults. When establishing an equipment fault signature library, it would be ideal to have experiments conducted to model wafer spatial measurements at process conditions without faults and with certain known faults; however, historical data on existing faults may also be used. Using the

1083–4400/97$10.00 © 1997 IEEE
Manuscript received March 24, 1997; revised September 29, 1997. This work was supported in part by Texas Instruments, Inc. and the NSF Engineering Research Centers Program through the Center for Advanced Electronics Materials Processing, Grant CDR 8721505, the Semiconductor Research Corporation, SRC Contract 94-MP-132, and the SRC SCOE program at NCSU, SRC Contract 94-MC-509.
M. M. Gardner was with the Department of Statistics, North Carolina State University, Raleigh, NC 27695 USA, and is now with General Electric, Niskayuna, NY 12309 USA.
J.-C. Lu is with the Department of Statistics, North Carolina State University, Raleigh, NC 27695 USA.
R. S. Gyurcsik is with the Semiconductor Research Corporation, Research Triangle Park, NC 27709 USA.
J. J. Wortman, H. H. Heinisch, and E. A. Rying are with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27695 USA.
B. E. Hornung is with Motorola, Austin, TX 78721 USA.
S. Rao, J. C. Davis, and P. K. Mozumder are with Texas Instruments, Inc., Dallas, TX 75265 USA.
Publisher Item Identifier S 1083-4400(97)09163-8.
Applied Materials, Inc. Ex. 1012
Applied v. Ocean, IPR Patent No. 6,836,691

Fig. 1. Equipment fault detection and diagnosis chart.
experimental data, one can construct physically based signature metrics to detect and identify equipment faults. When the basic faults are understood, new faults can be added into the study. Fig. 1 shows a flow chart of all the steps in the process. If a certain type of fault is known to have a specific shape, then classification of faults can also be verified by comparing a newly fitted surface to the known fault surface. In this case, by treating the fault surface as the "target," the methodology of spatial signature metrics can be used to statistically compare the newly fitted surface to this "target" to determine if the newly fitted surface belongs in this fault class.

Section II describes the equipment fault detection methodology using spatial signatures in detail. Sections III and IV provide illustrating examples from experiments conducted at North Carolina State University (NCSU) and Texas Instruments, Inc. (TI), respectively. Section V draws conclusions from this study and points to potential future work.
II. FAULT DETECTION METHODOLOGY

Fig. 2(a) and (b) show how equipment faults can be manifest in the spatial response of the process. Fig. 2(a) shows the gate oxide thickness surface of a wafer that was processed under fault-free conditions, and Fig. 2(b) shows the gate oxide thickness surface of a wafer processed under known equipment faults; x and y represent the distances from the center of the wafer. Not only is there an apparent decrease in thickness between the two surfaces, but also a change in spatial pattern.

The next five subsections present the new methodology of using spatial signatures to detect equipment faults:
1) modeling wafer surface data using thin-plate splines;
2) estimation of the baseline or "fault-free" surface;
3) construction of physically based signature metrics for comparing wafer surfaces;
4) estimation of the statistical distribution of the metrics;
5) use of the spatial metrics for equipment fault detection.
Fig. 2. Fitted wafer surfaces from wafers processed (a) with no equipment faults and (b) with known equipment faults.
A. Modeling Wafer Surface Data Using Thin-Plate Splines

While recognizing that other modeling methods are available, this study uses thin-plate splines to model the virtual wafer surface. A virtual wafer surface model of spatial process behavior is less sensitive to the position of measurement sites, measurement error, and angular orientation than techniques focusing on individual data points [11]. The thin-plate spline can be viewed as a multidimensional extension of the cubic smoothing spline. Although splines, in general, are constrained to pass through the knots of the function [e.g., gate oxide thickness measurements at (x, y) distances from the center of a wafer], the thin-plate spline attempts to produce the smoothest surface possible between the knots and, therefore, is not required to pass through the knots. The estimator f̂ of the thin-plate spline is the minimizer of the following penalized sum of squares [12]:

  Σ_{i=1}^{n} [z_i − f(t_i)]² + λ J(f)   (1)

where z_i is the measurement at site t_i, the first term represents the lack of fit, J(f) is the roughness penalty function, and λ is the spline smoothing parameter. For this study, thin-plate spline fits were computed using a collection of routines called FUNFITS written for use in the S-Plus statistical software [13]. A thin-plate spline can then be used to predict the response at any location on the wafer and thus can be used to predict the entire wafer surface. In this study, λ is set to be very small (0.001), which gives more of an interpolating surface, as recommended by Davis et al. [3].
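As a concrete illustration of this fitting step, the following Python sketch uses SciPy's RBFInterpolator in place of the FUNFITS/S-Plus routines used in the paper; the 17 site coordinates and thickness values are invented for the example, and the smoothing argument plays the role of λ:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical data: 17 measurement sites (x, y distances from the wafer
# center, in cm) and gate oxide thicknesses in Angstroms.
sites = rng.uniform(-4.0, 4.0, size=(17, 2))
thickness = 70.0 + 0.2 * sites[:, 0] + rng.normal(scale=0.5, size=17)

# Thin-plate spline fit; the small smoothing value plays the role of the
# smoothing parameter lambda = 0.001, giving a near-interpolating surface.
spline = RBFInterpolator(sites, thickness,
                         kernel="thin_plate_spline", smoothing=0.001)

# The spline can predict the response at any location on the wafer.
surface_at_center = spline(np.array([[0.0, 0.0]]))
residuals = spline(sites) - thickness
```

Because the smoothing parameter is tiny, the fitted surface nearly interpolates the measurements while remaining smooth between them.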
B. Estimation of the Target Surface

A target surface needs to be specified for evaluating equipment performance. In many cases, the target surface may be known; e.g., a nonuniformity study where the target thickness is 70 Å at all locations. However, due to process effects as
Fig. 3. Replicate wafer surfaces from wafers processed under fault-free conditions in a RTCVD experiment.

Fig. 4. Individual wafer surface from wafer processed under fault-free conditions in a PECVD experiment.

Fig. 5. Two replicates from fault-free conditions averaged to form target surface.
a result of equipment design, the wafer surface may not be flat even though there is no equipment fault. Assuming a flat target surface in this situation may lead to incorrect equipment fault detection. Thus, if the experimental wafer surfaces under equipment fault-free conditions are not flat, then these surfaces should be used as the target surface rather than a constant. For example, the two surfaces shown in Fig. 3(a) and (b) are from two replicates at the fault-free condition in a RTCVD experiment conducted at NCSU and have a nonlinear pattern. Fig. 4 shows a surface from the fault-free condition in a PECVD silicon nitride experiment conducted at Texas Instruments; this surface has a linear pattern. However, none of the surfaces shown in Fig. 3(a) and (b) or Fig. 4 reflects a constant baseline process surface.

In addition, the target surface should be revalidated after preventive maintenance or any other procedure which alters the tool. The proposed methodology can be used to determine whether any significant changes in the tool have occurred; if no significant changes have occurred, then the new data can be used in conjunction with the historical data to update the target surface. The following method allows a target surface to be estimated from data collected from fault-free runs.

Data is collected from wafers under the equipment fault-free condition to obtain a good estimate of the target surface. If there is slow drift, and this slow drift is considered a typical phenomenon, then the wafers are still considered fault-free. Statistical outlier diagnosis can be used to screen the data. After fitting a spline surface to each set of wafer measurements, the target surface is obtained by averaging location-specific parameter estimates from the individual spline equations. As an example, by averaging the two surfaces in Fig. 3(a) and (b) from the RTCVD experiment at NCSU, we obtain the target surface shown in Fig. 5. A randomization procedure for wafer surfaces processed under the fault-free condition is currently being studied to better incorporate wafer-to-wafer variation into the proposed methodology, but that procedure is beyond the scope of this paper.
A typical method for deriving the target surface is to first average the data collected from specific sites on "replicated" wafers at the fault-free condition and then fit a spline surface to the averaged data. However, this approach averages the data at only n sites, where n is the number of data points collected on a wafer; it requires that data be collected at the same sites on all wafers; and it does not take into account any wafer-to-wafer variation. The approach described in the previous paragraph instead averages the spline estimates which, in the intuitive sense, averages the spline surface at all possible sites. In the spatial signature metrics developed in the next section, a grid of close to 700 sites is used for prediction.
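The prediction-averaging approach above can be sketched as follows; this is a minimal illustration in which the wafer radius, site locations, and thickness values are hypothetical, and SciPy's thin-plate spline stands in for the paper's FUNFITS fits:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

radius = 5.0  # hypothetical wafer radius, cm

def fit_wafer(seed):
    """Fit a thin-plate spline to one (simulated) replicate wafer."""
    r = np.random.default_rng(seed)
    sites = r.uniform(-radius, radius, size=(17, 2))
    thick = 70.0 + 0.1 * sites[:, 1] + r.normal(scale=0.5, size=17)
    return RBFInterpolator(sites, thick,
                           kernel="thin_plate_spline", smoothing=0.001)

splines = [fit_wafer(s) for s in (10, 11)]

# 30 x 30 grid of prediction sites, trimmed to points inside the wafer.
g = np.linspace(-radius, radius, 30)
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel()])
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= radius]

# Target surface: location-wise average of the replicate spline predictions.
target = np.mean([s(pts) for s in splines], axis=0)
```

Note that the replicate wafers need not share measurement sites: each spline is evaluated on the same prediction grid before averaging.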
If the purpose of the statistical test is to compare an average of wafer surfaces to a target, then averaging the wafer data first would be appropriate. However, our concentration in this work is comparing the spatial surface of a single wafer to the target, subject to random variation, and the variance is underestimated if the data from the target wafers are averaged before the spline is fit. For example, if the spline fits from three replicate wafers have individual variances σ₁², σ₂², and σ₃², then
averaging the variances after the splines are fit yields a variance of (σ₁² + σ₂² + σ₃²)/3. Now if we let ȳ represent the vector of averaged data, then, since the variance of a mean ȳ_i is σ_i²/3, where σ_i² is the variance of y_i, averaging the data first introduces an extra factor of 1/3 into the variation before the spline is ever fit. Thus, the variance of the spline fit when averaging the wafer data first will be r times smaller than the variance of the spline fit to the individual wafer data, where r is the number of replicated wafers.

Another alternative is to treat data from all wafers processed under the fault-free condition as coming from a single wafer, then construct a spline to estimate the target surface. Although this method can capture variation at a particular site without deflating it, it loses individual wafer characteristics.
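The factor-of-r variance deflation argued above can be checked numerically; the replicate count, number of sites, and noise level below are arbitrary choices for the simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
r, n, sigma = 3, 17, 1.0  # replicates, sites per wafer, per-site noise SD

# Simulate many experiments and compare the per-site variance of a single
# wafer's data with that of data averaged over r replicate wafers.
draws = rng.normal(scale=sigma, size=(20000, r, n))
avg_first = draws.mean(axis=1)            # site-by-site average of r wafers
var_avg_first = float(avg_first.var(axis=0).mean())
var_single = float(draws[:, 0, :].var(axis=0).mean())
# var_avg_first comes out close to sigma**2 / r: averaging the data before
# the spline is fit deflates the variance by a factor of r.
```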
C. Construction of Physically Based Spatial Signature Metrics

Different equipment faults may produce distinct spatial signatures. For instance, an equipment fault may affect only a specific region of the wafer surface rather than the entire surface; in that case, a particular performance evaluation metric may better detect that type of fault. It is also possible that several metrics may have to be used simultaneously to detect certain types of faults. Understanding the physical processes that create faults, and their resulting signatures, also greatly aids in constructing and selecting evaluation metrics. Four metrics are presented below as examples of how different metrics may be needed for detecting certain fault signatures. The metrics are extensions of the uniformity metrics presented by Davis et al. [3], with an expected surface used as the target surface. All metrics discussed here are based on loss functions. For all of these metrics, f̂ denotes a newly fitted thin-plate spline surface, T denotes the target surface, and R denotes the wafer surface region.

The quadratic and absolute loss functions are commonly used in many fields to quantify the penalty for departing from the target. The first two metrics used in this work are a squared deviation-from-target metric and an absolute deviation-from-target metric. Both statistics are general metrics that quantify the surface difference (f̂ − T) and are nonlinear functions of the error volume between the two spline surfaces. The metrics are calculated as

  M1 = ∫∫_R (f̂ − T)² dA,  M2 = ∫∫_R |f̂ − T| dA.   (2)
The squared metric, M1, penalizes large departures from the target much more heavily than the absolute metric, M2. Both metrics cover the entire wafer surface and place equal weight on information at all wafer sites. In addition, both metrics yield the same equipment fault detection results in this study.

The following metric is an example of one that can be used to detect an equipment fault that leads to a thicker wafer surface. This metric is calculated as

  M = ∫∫_R [ g1(f̂ − T)·1{f̂ > T} + g2(f̂ − T)·1{f̂ ≤ T} ] dA   (3)

where g1(·) and g2(·) could be any functions. For example, g1 could be the squared error loss function with g2 set to 0; this choice would penalize only surfaces thicker than the target. Another example is to place different weights on the penalties for f̂ > T than for f̂ < T. Again, understanding the causes of equipment faults and their resulting spatial signatures plays an important role in the selection of the g functions.
Another type of metric that should be considered is one which allows different regions of the wafer surface to be weighted differently. For instance, error in the center of the wafer may be more important than error toward the edge. Also, certain equipment faults may cause defects, such as a thinner surface, in specific regions of the wafer rather than across the entire surface. An example of a metric that weights wafer surface regions differently is calculated as

  M = Σ_{j=1}^{k} w_j ∫∫_{R_j} g_j(f̂ − T) dA   (4)

where k denotes the number of nonoverlapping regions, and w_j and g_j denote the weight and penalty functions for the jth region, respectively. This metric has the potential to be very useful, particularly in the equipment fault diagnosis stage, since it is more general than the other suggested metrics. In fact, the previous metrics may be considered special cases of this one.
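A minimal numerical sketch of the region-weighted metric in (4), assuming two concentric regions with hypothetical weights and a squared-error penalty (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
radius = 5.0  # hypothetical wafer radius

# Grid of evaluation points inside the wafer and the area of a grid element.
g = np.linspace(-radius, radius, 30)
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel()])
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= radius]
dA = (g[1] - g[0]) ** 2

# Stand-ins for the fitted surface and the target at those points.
f_hat = 70.0 + 0.3 * pts[:, 0] + rng.normal(scale=0.2, size=len(pts))
target = np.full(len(pts), 70.0)

# Two nonoverlapping regions: wafer center weighted more than the edge.
rr = np.hypot(pts[:, 0], pts[:, 1])
regions = {"center": rr <= radius / 2, "edge": rr > radius / 2}
weights = {"center": 2.0, "edge": 1.0}

diff = f_hat - target
m4 = sum(weights[k] * np.sum(diff[mask] ** 2) * dA
         for k, mask in regions.items())
```

With center weight 2 and edge weight 1, a given error volume near the center contributes twice as much to the metric as the same error volume near the edge.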
D. Estimation of Signature Metrics

The calculation of the metrics discussed in Section II-C involves integration of the difference of two spline surfaces over certain regions. This integration can be done with a numerical technique in which the fitted surface f̂ is evaluated on an m × m grid (with points outside the radius of the wafer removed, leaving N grid points), the evaluated results, e.g., L(f̂ − T), are summed over the N grid points, and the sum is multiplied by the area of one grid element. The metrics may be approximated as [3]

  metrics ≈ Σ_{i=1}^{N} L( (By)_i − T_i ) ΔA   (5)

where L is the loss function incorporated into the metric, T is an N × 1 vector of the target, B is an N × n matrix of thin-plate spline coefficients for the measurements, y is an n × 1 vector of the measurements, n is the number of measurements taken on the wafer, and ΔA is the area of one grid element. This approximation was shown to give good results for m ≥ 30 [3]; thus, m = 30 is used in the experimental examples presented in this paper.
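The grid-sum approximation in (5) can be sketched as follows; the measurement sites and thickness values are invented, a flat 70 Å target is assumed for illustration, and SciPy's thin-plate spline stands in for the paper's spline fits:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
radius, m = 5.0, 30  # hypothetical wafer radius; m x m grid as in the paper

# Fit a spline to invented measurements.
sites = rng.uniform(-radius, radius, size=(17, 2))
spline = RBFInterpolator(sites, 70.0 + rng.normal(scale=0.5, size=17),
                         kernel="thin_plate_spline", smoothing=0.001)

g = np.linspace(-radius, radius, m)
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel()])
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= radius]  # N grid points kept
dA = (g[1] - g[0]) ** 2                               # area of a grid element

f_hat = spline(pts)
target = np.full(len(pts), 70.0)  # flat 70 A target, for illustration

# Sum the loss over the N grid points and multiply by the element area.
m1 = np.sum((f_hat - target) ** 2) * dA   # squared-deviate metric
m2 = np.sum(np.abs(f_hat - target)) * dA  # absolute-deviate metric
```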
E. Use of the Signature Metrics in Equipment Fault Detection

If the metrics indicate that the surface of a newly processed wafer is statistically significantly different from the target wafer surface, then the conclusion is that an equipment fault has occurred. Therefore, the null distributions of the metrics should be studied in order to set up the "cut-off value" for determining if there is a significant difference between a newly
TABLE I
WAFER LABELS FOR WAFERS PROCESSED UNDER EACH COMBINATION OF EXPERIMENTAL CONDITIONS
fitted surface and the target wafer surface. In other words, the distributions of the metrics under the "fault-free" condition are needed to determine the "cut-off values" in the tail(s) of the distributions for a specified level of significance.

An analytical approach based on standard statistical asymptotic normal approximation theory was first considered. This theory cannot be applied in the present study, however: traditional distributional results for splines hold for the independent, identically distributed case, whereas as n (the number of spatial measurements) becomes very large, many devices are sampled on a fixed-size wafer and the data become increasingly dependent because of spatial correlations. This increasing dependence is called infill asymptotics [14]. An alternative Bayesian (simulation) approach can instead be taken to determine the null distribution of the metrics, using the following steps.
1) According to a procedure given in Green and Silverman [12], assuming a Gaussian prior distribution, the posterior distribution of the spline surface f is the following multivariate normal distribution [12]:

  f | y ~ MVN( f̂, σ̂² A )   (6)

where f̂ is the vector of fitted values, σ̂² is calculated as (the residual sum of squares about the fitted curve)/(equivalent error degrees of freedom), and A is the projection matrix which maps the vector of observed values to their predicted values. Since multiple wafers were processed independently at the baseline conditions in this study, the averages of the f̂'s and σ̂²A's from all baseline wafers were used in (6).
2) The following parametric bootstrapping approach can be used to simulate independent observations from the null posterior distribution of each metric. First, 5000 sets of n observations are simulated from the multivariate normal model (6). A spline surface is fitted to each set of observations, and the spatial signature metrics are calculated using (2)–(5). As a result, 5000 independent observations are obtained from the null posterior distribution of each signature metric.
3) With a prespecified significance level α = 0.01 (or 0.05), a "cut-off" value is defined as the 99th (or 95th) percentile of the null distribution of a specific metric. If the calculated metric (2), (3), or (4) for a newly fitted surface is larger than its respective "cut-off value," then it is statistically significant at level α that the wafer was not processed under fault-free conditions. The decision rule remains applicable even if the sampling scheme changes, since the target surface remains the same. The decision rule is selected to balance the Type I error (false positive) and the Type II error (false negative) as follows: fix Prob(Type I error) below a chosen level, e.g., 0.01, and select the test that minimizes Prob(Type II error). The choice of α is at the discretion of the practitioner; in this study, more conservative "cut-off" values were desired, so α = 0.01 was selected.
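Steps 1)–3) can be sketched as the following parametric bootstrap; the fitted values, the covariance matrix standing in for σ̂²A, and the number of bootstrap replicates (500 rather than the paper's 5000, for speed) are all placeholders:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)
radius, n = 5.0, 17

# Placeholder posterior pieces for (6): fitted values at the n sites and a
# covariance matrix standing in for sigma^2 * A.
sites = rng.uniform(-radius, radius, size=(n, 2))
f_hat = 70.0 + 0.1 * sites[:, 1]
cov = 0.25 * np.eye(n)

# Prediction grid trimmed to the wafer, and the target surface on it.
g = np.linspace(-radius, radius, 30)
xx, yy = np.meshgrid(g, g)
pts = np.column_stack([xx.ravel(), yy.ravel()])
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= radius]
dA = (g[1] - g[0]) ** 2
target = RBFInterpolator(sites, f_hat, kernel="thin_plate_spline",
                         smoothing=0.001)(pts)

# Parametric bootstrap: simulate from MVN(f_hat, cov), refit the spline,
# and compute the squared-deviate metric for each simulated wafer.
B = 500  # 5000 in the paper; reduced here for speed
m1 = np.empty(B)
for b in range(B):
    y = rng.multivariate_normal(f_hat, cov)
    s = RBFInterpolator(sites, y, kernel="thin_plate_spline", smoothing=0.001)
    m1[b] = np.sum((s(pts) - target) ** 2) * dA

cutoff = float(np.quantile(m1, 0.99))  # alpha = 0.01 "cut-off" value
```

A newly processed wafer whose metric exceeds `cutoff` would be flagged as statistically inconsistent with the fault-free condition.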
III. EXPERIMENTAL VERIFICATION WITH NCSU LABORATORY EQUIPMENT

A. Design of Experiment and Data Collection

To test the proposed methodology, a laboratory experiment was conducted at NCSU in which the following two types of equipment faults were induced in a prototype RTP (rapid thermal processing) single-wafer system: lamps burning out and a miscalibrated SiH4/Ar mass flow controller. The response measured was SiO2 thickness. There were 15 wafers available for the experiment. Preliminary experiments, in which one lamp at a time was removed, were performed to decide how many lamps should be disengaged during an experiment. The removal of a single lamp caused a marked decrease in oxide thickness: when a side lamp was disengaged, a 13.5 Å decrease in average oxide thickness was observed, and when a bottom center lamp was disengaged, the average oxide thickness decreased by 10.22 Å.
The final experimental design used to induce equipment faults was an unbalanced 3² design. While the N2O flow was held constant at 500 sccm, three 10% SiH4/Ar flow rates of 20, 25, and 30 sccm were used for the low, medium, and high flow rates, respectively. Table I shows all possible experimental conditions along with the labels of the wafers processed under each combination of conditions. The baseline "fault-free" state was the condition with all lamps working and the medium flow rate. Three replicates were allocated for estimating the target surface and for constructing the posterior distributions of the spatial signature metrics. For each wafer, the oxide thickness was estimated at 17 points; the sampling scheme was designed to cover the wafer regions evenly, as shown in Fig. 6.
The oxide thickness was estimated by measuring the C–V characteristics of capacitors located at the measurement points using a Keithley 595 Quasistatic C–V Meter and a Keithley 590 C–V Analyzer. The gate oxide thickness was then extracted from the C–V data while accounting for polysilicon depletion and quantum effects using a program written by Hauser [15]. Unfortunately, data could not be collected from wafer 5 (low flow rate and a side lamp out) since the wafer was damaged during processing. However, since the goal of this study is not an experimental design analysis, this was not a major concern.
B. Application of the Proposed Methodology to the Experimental Data

The laboratory experiment had three runs processed under the baseline condition. One of the wafers appeared to be consistently thicker than the other two wafers at all sampled sites. The data from this wafer was excluded from the estimation of the target surface, but was set aside for verification
purposes in later wafer surface comparisons. The remaining two wafers had thin-plate splines fitted to their gate oxide thickness measurements, and the surfaces were predicted on a 30 × 30 grid of equally spaced points, which was then trimmed to a circle with radius equal to 2 in. The predicted points from the two wafers were then averaged across the remaining 692 locations to construct the target surface.

Fig. 6. Sampling scheme.

As stated in the introduction, experimental data that is collected at fault conditions will help identify physically based spatial signature metrics for equipment fault detection. General metrics are used to detect the presence of an equipment fault, and specific metrics are used to capture distinct shapes of spatial surfaces to detect specific fault patterns. Thus, the following general and specific metrics were designed:

1) M1 = ∫∫_R (f̂ − T)² dA (squared metric);
2) M2 = ∫∫_R |f̂ − T| dA (absolute value metric);
3) M3 = ∫∫_R g(f̂ − T) dA, where g(u) = u² when |u| exceeds the specification limits (in Å) and g(u) = 0 otherwise (spec limits metric);
4) M4 = ∫∫_R g(f̂ − T) dA, where g(u) = u² if u > 0 and g(u) = |u| if u ≤ 0 (square above–absolute value below metric);
5) M5 = ∫∫_R g(f̂ − T) dA, where g(u) = u² if u < 0 and g(u) = |u| if u ≥ 0 (square below–absolute value above metric).

Metrics 1–3 are all general metrics used to determine whether or not an equipment fault has occurred. Metric 3 incorporates specification limits on the target surface to increase robustness to a given noise level in the data. It was noted that the experimental runs with high flow rate and all lamps working yielded gate oxide thicknesses much thicker than the target surface; metric 4 was designed specifically to detect this type of equipment fault. It was also noted that whenever a lamp was disengaged, the resulting surface was much thinner; metric 5 was designed specifically to detect that type of equipment fault.

The null posterior distributions of each of these metrics were simulated using the average predicted surface of wafers 1 and 11 as the target surface. For each of the five metrics, 5000 observations were simulated using parametric bootstrapping from a MVN(f̄, Σ̄), where f̄ is the average of the predicted values at the actual 17 measurement locations using the thin-plate spline fits for wafers 1 and 11, and Σ̄ is the average of the covariance matrices from those fits. Each set of 5000 simulated observations was then used to calculate each metric, resulting in 5000 independent observations from the null distribution of each metric. The resulting null distributions for metrics 1–5 are shown in Fig. 7.

Empirical α = 0.01 "cut-off values" for each metric were then determined from the null distributions, i.e., the point above which 1% of the simulated observations fall. Each metric was also calculated for each wafer (excluding the two wafers used to estimate the target surface), and these calculated values were compared to the "cut-off values" to determine whether or not a fault is detected. Recall that the experimental combination of all lamps working and a medium flow rate represents the fault-free condition, with all other experimental combinations representing fault conditions. The general metrics (metrics 1–3) detected equipment faults for all wafers subjected to induced equipment faults (wafers 2–4, 6–10, and 13–15). Metric 4, designed to detect faults causing a thicker-than-target surface, detected equipment faults only for the two wafers with a high flow rate but all lamps working (wafers 3 and 8) and, therefore, effectively detected a specific type of equipment fault: high flow rate only. Metric 5, designed to detect faults causing a thinner-than-target surface, detected equipment faults for all wafers subjected to a disengaged lamp, regardless of the flow rate. It was also observed that the metric 5 values (25.991, 31.827, and 21.46) for a disengaged side lamp (wafers 6, 9, and 13) were much higher than the metric 5 values (2.346, 10.781, 10.766, 2.655, and 1.052) for the wafers with the disengaged bottom lamp (wafers 2, 4, 10, 14, and 15). Thus, metric 5 also effectively distinguished a specific type of equipment fault: side lamp versus bottom lamp burning out. No faults were detected for wafer 12 by any of the metrics, which was expected since this wafer was the fault-free replicate withheld from the estimation of the target surface for verification purposes. Table II shows the numerical results for each metric along with the corresponding "cut-off value" at the α = 0.01 level of significance. One-tailed tests are used here because of the physically based design of the metrics, particularly metrics 4 and 5, which are used to detect thicker-than-target and thinner-than-target surfaces, respectively. The shaded areas represent the wafers for which equipment faults were detected, i.e., wafers for which the metric values are greater than the "cut-off values."

TABLE II
RTCVD CALCULATED METRICS AND "CUT-OFF" VALUES

Fig. 7. Null distributions of metrics for RTCVD experiment: (a) squared deviate from target metric, (b) absolute deviate from target metric, (c) spec limits metric, (d) square above–absolute below metric, and (e) square below–absolute above metric.
IV. EXPERIMENTAL VERIFICATION WITH PECVD SILICON NITRIDE DATA

In the previous section, an example was given in which metrics were designed to detect certain types of equipment faults whose spatial signatures were known. In practice, however, there may be too many potential faults, and the relationships between the faults and the spatial signatures may not be clear. Even in this case, the general metrics can still be used to detect the presence of equipment faults. To illustrate this, the proposed equipment fault detection methodology was also applied to experimental data collected at Texas Instruments, Inc. An experiment had been conducted there for the purpose of developing equipment models for use in a diagnosis system prototype [1]. The data was collected from a six-factor central composite design experiment with 45 combinations of equipment conditions (without replicates) plus an additional five runs replicated at the center point. The six factors were pressure, flow, sum of silane and ammonia flows (SiH4 + NH3), ratio of silane and ammonia flows (SiH4/NH3), ratio of RF power to electrode g
