Process quality monitoring and control: Acceptance Sampling Plan, Types of Sampling Plans, Random Sampling, Process Improvement Tools, ANOVA, Statistical Analysis, and Designed Experiment.

Process quality monitoring and control

Process quality is monitored using acceptance sampling techniques, and control is achieved through statistical process control (SPC) charts. Monitoring is essential in order to utilize the full potential of the process. Statistical quality control dates back to the 1920s. Dr. Walter A. Shewhart of the Bell Telephone Laboratories was one of the early pioneers of the field. In 1924 he wrote a memorandum showing a modern control chart, one of the basic tools of statistical process control. Dr. W. Edwards Deming and Dr. Joseph M. Juran have been instrumental in spreading statistical quality-control methods since World War II.

In any production process, regardless of how well designed or carefully maintained it is, a certain amount of inherent or natural variability will always exist. This natural variability, or "noise," is the cumulative effect of many small, essentially unavoidable causes. When the noise in a process is relatively small, we usually consider it an acceptable level of process performance. In the framework of statistical quality control, this natural variability is often called "chance cause variability." A process that is operating with only chance causes of variation is said to be in statistical control. In other words, the chance causes are an inherent part of the process.

Other kinds of variability may occasionally be present in the output of a process. This variability in key quality characteristics usually arises from sources such as improperly adjusted machines, operator errors, or defective raw materials. Such variability is generally large when compared to the background noise, and it usually represents an unacceptable level of process performance. We refer to these sources of variability, which are not part of the chance cause pattern, as assignable causes. A process that is operating in the presence of assignable causes is said to be out of control. There are various statistical control charts to identify out-of-control signals. Typically, any control chart will have an upper and a lower control limit and a central line, as shown in Figure 2-17.


Figure 2-17 Control Chart Limits

There is a close connection between control charts and hypothesis testing. Essentially the control chart is a test of the hypothesis that the process is in a state of statistical control.

There can be attribute control charts (for monitoring defects or defectives) and variable control charts. An example of a variable control chart was given earlier. A 'c' attribute chart monitors defects. A c chart is shown below, where defect counts in engine assemblies are collected over a period of time. Thus, at a particular point in time a sample engine assembly is selected, and the number of defects in the assembly is recorded and monitored.

[c chart of defect counts in engine assemblies over time]

A 'p' chart is used to monitor defectives (the fraction of nonconforming units).
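
As an illustration, a minimal Python sketch of how 3-sigma c chart limits could be computed from a series of defect counts; the counts below are hypothetical, and the limits follow the standard Poisson-based formulas CL = c̄, UCL = c̄ + 3√c̄, LCL = max(0, c̄ - 3√c̄):

```python
import numpy as np

# Hypothetical defect counts observed on successive engine-assembly samples
defect_counts = np.array([4, 7, 3, 5, 6, 2, 8, 5, 4, 6])

c_bar = defect_counts.mean()                 # center line: average defects per assembly
ucl = c_bar + 3 * np.sqrt(c_bar)             # upper control limit (3-sigma, Poisson-based)
lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))   # lower control limit, floored at zero

out_of_control = np.where((defect_counts > ucl) | (defect_counts < lcl))[0]
print(f"CL = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
print("out-of-control sample indices:", out_of_control)
```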

Details on various types of control charts and their specific applications in different situations can be found in many well-known textbooks (Mitra, A., 2008; Montgomery, D.C., 2008) or on the web (http://www.youtube.com/watch?v=gTxaQkuv6sU).

The principles of control charts are based on acceptance sampling plans, developed by Harold F. Dodge and Harry G. Romig, who were employees of the Bell System. Acceptance sampling plans are discussed in the following section.

Acceptance sampling plan

Acceptance sampling is concerned with inspection and decision making regarding product quality. In the 1930s and 1940s, acceptance sampling was one of the major components of quality control and was used primarily for incoming or receiving inspection.

A typical application of acceptance sampling is as follows: a company receives a shipment of product from its vendor. This product is often a component or raw material used in the company's manufacturing process. A sample is taken from the lot, and some quality characteristic of the units in the sample is inspected against the specification. On the basis of the information in this sample, a decision is made regarding acceptance or rejection of the whole lot. Sometimes we refer to this decision as lot sentencing. Accepted lots are put into production; rejected lots are returned to the vendor or may be subjected to some other lot-disposition action.

Although it is customary to think of acceptance sampling as a receiving inspection activity, there are other uses of sampling methods. For example, frequently a manufacturer will sample and inspect its own product at various stages of production. Lots that are accepted are sent forward for further processing, and rejected lots may be reworked or scrapped.

Three important aspects of sampling are:

1) The purpose of acceptance sampling is to decide on the acceptance of lots, not to estimate lot quality. Acceptance-sampling plans do not provide any direct form of quality control.

2) Acceptance sampling simply accepts or rejects lots. This is a post-mortem kind of activity. Statistical process control is used to control and systematically improve quality by reducing variability; acceptance sampling is not.

3) The most effective use of acceptance sampling is not to "inspect quality into the product," but rather to serve as an audit tool that ensures the output of a process conforms to requirements.

Advantages and Limitations of Sampling Plans Compared to 100% Inspection

In comparison with 100% inspection, acceptance sampling has the following advantages.

(i) It is usually less expensive because there is less inspection.

(ii) There is less handling of the product, and hence reduced damage.

(iii) It is highly effective and applicable to destructive testing.

(iv) Fewer personnel are involved in inspection activities.

(v) It often greatly reduces the amount of inspection error.

(vi) Rejection of entire lots, as opposed to the simple return of defectives, provides a stronger motivation to suppliers for quality improvement.

Acceptance sampling also has several limitations which include:

(i) There are risks of accepting "bad" lots (Type II error) and rejecting "good" lots (Type I error).

(ii) Less information is usually generated about the product.

Types of Sampling Plans

There are a number of ways to classify acceptance-sampling plans. One major classification is by attributes and variables. Variables are quality characteristics that are measured on a numerical scale, whereas attributes are quality characteristics that are expressed on a "go, no-go" basis.

A single-sampling plan is a lot-sentencing procedure in which one sample of units is selected at random from the lot, and the disposition of the lot is determined based on the information contained in that sample. For example, a single-sampling plan for attributes would consist of a sample size n and an acceptance number c. The procedure operates as follows: select n items at random from the lot. If c or fewer defectives are found in the sample, accept the lot; if more than c defectives are found, reject the lot.
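
As a sketch of the single-sampling logic just described, the following Python fragment makes the accept/reject decision and, using the binomial distribution, computes the probability of accepting a lot for a few values of the true fraction defective (the n = 50, c = 2 plan and the p values are assumed purely for illustration):

```python
from scipy.stats import binom

def accept_lot(defectives_found, c):
    """Single-sampling decision rule: accept the lot if defectives found <= c."""
    return defectives_found <= c

# Probability of accepting a lot versus its true fraction defective p
# (points on the operating-characteristic curve), using a binomial model.
n, c = 50, 2                         # hypothetical sample size and acceptance number
for p in (0.01, 0.02, 0.05, 0.10):
    p_accept = binom.cdf(c, n, p)    # P(number of defectives in the sample <= c)
    print(f"p = {p:.2f}  P(accept) = {p_accept:.3f}")

print(accept_lot(defectives_found=1, c=c))   # True: 1 <= 2, so the lot is accepted
```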

Double-sampling plans are somewhat more complicated. Following an initial sample, a decision based on the information in that sample is made either to accept the lot, reject the lot, or take a second sample. If the second sample is taken, the information from both the first and second samples is combined in order to reach a decision on whether to accept or reject the lot.
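
A minimal sketch of this double-sampling decision logic, assuming hypothetical first-stage acceptance and rejection numbers c1 and r1 and a combined acceptance number c2:

```python
def double_sampling_decision(d1, d2, c1, r1, c2):
    """Double-sampling logic with hypothetical plan parameters:
    d1 = defectives in the first sample, d2 = defectives in the second sample
    (None if not yet taken), c1/r1 = first-stage acceptance/rejection numbers,
    c2 = acceptance number for the combined samples."""
    if d1 <= c1:
        return "accept"
    if d1 >= r1:
        return "reject"
    if d2 is None:
        return "take second sample"
    return "accept" if d1 + d2 <= c2 else "reject"

print(double_sampling_decision(d1=2, d2=None, c1=1, r1=5, c2=4))  # take second sample
print(double_sampling_decision(d1=2, d2=1, c1=1, r1=5, c2=4))     # accept
```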

A multiple-sampling plan is an extension of the double-sampling concept, in that more than two samples may be required in order to reach a decision regarding the disposition of the lot. Sample sizes in multiple sampling are usually smaller than they are in either single or double sampling. The ultimate extension of multiple sampling is sequential sampling, in which units are selected from the lot one at a time, and following inspection of each unit, a decision is made either to accept the lot, reject the lot, or select another unit.

Random Sampling

Units selected for inspection from the lot should be chosen at random, and they should be representative of all the items in the lot. The random-sampling concept is extremely important in acceptance sampling and statistical quality control. Unless random samples are used, bias may be introduced. For example, a supplier may ensure that the units packaged at the top of the lot are of extremely good quality, knowing that the inspector will select the sample from the top layer. Randomization also helps in averaging out the influence of any hidden factors during experimentation.

The technique often suggested for drawing a random sample is to first assign a number to each item in the lot. Then n random numbers are drawn (from a random number table or using Excel or statistical software), where the range of these numbers is from 1 to the maximum number of units in the lot. This sequence of random numbers determines which units in the lot constitute the sample. If products have serial or other code numbers, these numbers can be used to avoid the process of actually assigning numbers to each unit. Details on different sampling plans can be found in Mitra, A. (2008).
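
A short Python sketch of the numbering approach described above, assuming a hypothetical lot of 500 units from which a sample of 20 is to be drawn:

```python
import random

lot_size = 500        # hypothetical number of units in the lot
n = 20                # required sample size

random.seed(2024)     # fixed seed only so the illustration is reproducible
# Number the units 1..lot_size and draw n of them without replacement.
sample_ids = sorted(random.sample(range(1, lot_size + 1), n))
print(sample_ids)
```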

Process improvement Tools

Acceptance sampling and statistical quality control techniques may not significantly reduce variability in the output. Process improvement through variation reduction is an important feature of quality management. A variety of statistical tools is available for improving processes. Some of them are discussed below.

ANOVA

Many experiments involve more than two levels of a factor, and the experimenter is interested in understanding the influence of the factor on the variability of the output characteristic. In this case, analysis of variance (ANOVA) is the appropriate statistical technique. This technique is explained with the help of the following example.

Say a product development engineer is interested in investigating the tensile strength of a new synthetic fiber. The engineer knows from previous experience that the strength is affected by the weight percent of cotton used in the blend of materials for the fiber. Furthermore, she suspects that increasing the cotton content will increase the strength, at least initially. She also knows that the cotton content should range from 1 to 25 percent if the final product is to have the other quality characteristics that are desired. The engineer decides to test specimens at five levels of cotton weight percent: 5, 10, 15, 20, and 25 percent, with five specimens tested at each level of cotton content. This is an example of a single-factor (cotton weight %) experiment with a = 5 levels of the factor and n = 5 replicates. The 25 runs should be made in random sequence.

[Table of tensile strength observations for each cotton weight % level]

The randomized test sequence is necessary to prevent the effects of unknown nuisance variables (or hidden factors), which may vary out of control during the experiment, from contaminating the results. Balanced experiments, with an equal number of replicates at each level, are also preferred to minimize the experimental error.
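
A minimal Python sketch of generating such a randomized, balanced test sequence for the cotton experiment (the five levels and five replicates are as assumed above):

```python
import random

levels = [5, 10, 15, 20, 25]   # cotton weight % levels
replicates = 5

# Build the 25 planned runs (level, replicate number) and shuffle them
# into a random test sequence.
runs = [(lvl, rep) for lvl in levels for rep in range(1, replicates + 1)]
random.seed(7)                 # seed only for reproducibility of the illustration
random.shuffle(runs)

for order, (lvl, rep) in enumerate(runs, start=1):
    print(f"run {order:2d}: cotton {lvl}%  (replicate {rep})")
```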

The experiment described above is the so-called single-factor analysis of variance model, and it is known as a fixed effects model because the factor levels are fixed by the experimenter. Recall that yij represents the jth observation (j = 1, ..., n replicates) taken under the ith treatment. Let yi· represent the total of the observations under the ith treatment and ȳi· their average. Similarly, let y·· represent the grand total of all the observations and ȳ·· the grand average of all the observations. Expressed symbolically,

$$y_{i\cdot} = \sum_{j=1}^{n} y_{ij}, \qquad \bar{y}_{i\cdot} = \frac{y_{i\cdot}}{n}, \qquad i = 1, 2, \ldots, a$$

$$y_{\cdot\cdot} = \sum_{i=1}^{a}\sum_{j=1}^{n} y_{ij}, \qquad \bar{y}_{\cdot\cdot} = \frac{y_{\cdot\cdot}}{N}$$

where N = an is the total number of observations. The "dot" subscript notation used in the above equations implies summation over the subscript that it replaces.

The appropriate hypotheses are

$$H_0: \mu_1 = \mu_2 = \cdots = \mu_a$$

$$H_1: \mu_i \neq \mu_j \ \text{for at least one pair } (i, j)$$

Decomposition of the total sum of squares

The name analysis of variance is derived from a partitioning of total variability into its component parts. The total corrected sum of squares, SST, is used as a measure of overall variability in the data. Intuitively, this is reasonable because, if we were to divide SST by the appropriate number of degrees of freedom (in this case, N - 1), we would have the sample variance of the y's. The sample variance is, of course, a standard measure of variability.

Note that the total corrected sum of squares SST may be written as

$$SS_T = \sum_{i=1}^{a}\sum_{j=1}^{n}\left(y_{ij} - \bar{y}_{\cdot\cdot}\right)^2 = n\sum_{i=1}^{a}\left(\bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot}\right)^2 + \sum_{i=1}^{a}\sum_{j=1}^{n}\left(y_{ij} - \bar{y}_{i\cdot}\right)^2 + 2\sum_{i=1}^{a}\sum_{j=1}^{n}\left(\bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot}\right)\left(y_{ij} - \bar{y}_{i\cdot}\right)$$

and, since the cross-product term can be shown to vanish, we can rewrite the overall expression for the total sum of squares SST as

SST = SSTreatments + SSE

where SSTreatments is called the sum of squares due to treatments (i.e., between treatments), and SSE is called the sum of squares due to error (i.e., within treatments). There are N total observations; thus SST has N - 1 degrees of freedom. There are a levels of the factor (and a treatment means), so SSTreatments has a - 1 degrees of freedom. Finally, within any treatment there are n replicates, providing n - 1 degrees of freedom with which to estimate the experimental error. Because there are a treatments, we have a(n - 1) = an - a = N - a degrees of freedom for error.
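
A short numerical check of the identity SST = SSTreatments + SSE, using a small set of hypothetical observations (a = 3 treatments, n = 4 replicates):

```python
import numpy as np

# Hypothetical data: a = 3 treatments (rows) with n = 4 replicates each.
y = np.array([[10.2,  9.8, 10.5, 10.1],
              [11.0, 11.4, 10.9, 11.2],
              [ 9.5,  9.9,  9.7,  9.6]])
a, n = y.shape

grand_mean = y.mean()
treatment_means = y.mean(axis=1)

ss_total = ((y - grand_mean) ** 2).sum()
ss_treatments = n * ((treatment_means - grand_mean) ** 2).sum()
ss_error = ((y - treatment_means[:, None]) ** 2).sum()

print(f"SS_T = {ss_total:.3f}, SS_Treatments = {ss_treatments:.3f}, SS_E = {ss_error:.3f}")
print("identity holds:", np.isclose(ss_total, ss_treatments + ss_error))
```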

Statistical Analysis

The analysis of variance table (Table 2-4) for the single-factor fixed effects model is given below.

Table 2-4 ANOVA Table with formula

Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F0
Between treatments | SSTreatments | a - 1 | MSTreatments = SSTreatments / (a - 1) | F0 = MSTreatments / MSE
Error (within treatments) | SSE | N - a | MSE = SSE / (N - a) |
Total | SST | N - 1 | |

Because the degrees of freedom for SSTreatments and SSE add up to N - 1, the total number of degrees of freedom, Cochran's theorem implies that SSTreatments/σ² and SSE/σ² are independently distributed chi-square random variables. Therefore, if the null hypothesis of no difference in treatment means is true, the ratio

$$F_0 = \frac{SS_{Treatments}/(a-1)}{SS_E/(N-a)} = \frac{MS_{Treatments}}{MS_E}$$

is distributed as F with a - 1 and N - a degrees of freedom. The equation above gives the test statistic for the hypothesis of no differences in treatment means.

From the expected mean squares we see that, in general, MSE is an unbiased estimator of σ². Also, under the null hypothesis, MSTreatments is an unbiased estimator of σ². However, if the null hypothesis is false, the expected value of MSTreatments is greater than σ². Therefore, under the alternative hypothesis, the expected value of the numerator of the test statistic (F0 above) is greater than the expected value of the denominator, and we should reject H0 for values of the test statistic that are too large. This implies an upper-tail, one-sided critical region. Therefore, we should reject H0 and conclude that there are differences in the treatment means if

$$F_0 > F_{\alpha,\, a-1,\, N-a}$$

where F0 is computed from the equation above. Alternatively, we can use the p-value approach for decision making, as provided by statistical software such as MINITAB or SAS.
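
Outside MINITAB or SAS, the same F test can be sketched in Python; the observations below are hypothetical stand-ins for the tensile-strength data:

```python
from scipy import stats

# Hypothetical tensile-strength observations, five replicates per cotton weight %.
strength = {
    5:  [ 7,  7, 15, 11,  9],
    10: [12, 17, 12, 18, 18],
    15: [14, 18, 18, 19, 19],
    20: [19, 25, 22, 19, 23],
    25: [ 7, 10, 11, 15, 11],
}

f0, p_value = stats.f_oneway(*strength.values())
print(f"F0 = {f0:.2f}, p-value = {p_value:.4f}")
# Reject H0 (all treatment means equal) if the p-value is below the chosen alpha, e.g. 0.05.
```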

Using MINITAB, we can obtain the following graphs and results for the above-mentioned experiment on tensile strength:

[Box plot of tensile strength by cotton weight %]

From the box plot it is observed that as cotton weight % increases, tensile strength also improves. However, whether any two means are significantly different cannot be determined from the box plot alone.

[Residual plots for the tensile strength data]

The residual plot confirms the normality assumption of the error. Conclusions drawn when the error is non-normal may be erroneous. The normality assumption can be tested using the Anderson-Darling test statistic value provided in MINITAB (Stat -> Basic Statistics -> Normality Test).
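
A sketch of the same normality check in Python, applying the Anderson-Darling test from scipy to residuals computed from hypothetical data (each observation minus its treatment mean):

```python
import numpy as np
from scipy import stats

# Residuals = each observation minus its own treatment mean (hypothetical data).
groups = [[7, 7, 15, 11, 9], [12, 17, 12, 18, 18], [14, 18, 18, 19, 19],
          [19, 25, 22, 19, 23], [7, 10, 11, 15, 11]]
residuals = np.concatenate([np.asarray(g, float) - np.mean(g) for g in groups])

result = stats.anderson(residuals, dist='norm')   # Anderson-Darling test for normality
print(f"A-D statistic = {result.statistic:.3f}")
for crit, sig in zip(result.critical_values, result.significance_level):
    verdict = "reject" if result.statistic > crit else "cannot reject"
    print(f"  at {sig:.0f}% significance: critical value {crit:.3f} -> {verdict} normality")
```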

[MINITAB one-way ANOVA output for tensile strength versus cotton weight %]

The ANOVA results given above confirm that changing the cotton weight % influences tensile strength, and the R-square value indicates how much of the variation in strength is explained by the factor.

To determine the best setting of cotton weight %, one can carry out the Fisher LSD comparison test, as given in Figure 2-22.

Fisher 95% Individual Confidence Intervals

[Figure 2-22: Fisher 95% individual confidence intervals for differences in mean tensile strength]

The comparison tests confirm that a cotton weight % of 20 is significantly different from 15%, and thus a setting of 20 is suggested. Details on the interpretation of comparison tests and ANOVA are given in the MINITAB example, MINITAB help, and any standard textbook on quality (Montgomery, D.C., 2014).
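
For a balanced design, the Fisher LSD comparison can also be sketched by hand: two treatment means differ significantly if their absolute difference exceeds LSD = t(alpha/2, N - a) * sqrt(2 MSE / n). The Python fragment below uses assumed treatment means and an assumed error mean square (MSE) consistent with the hypothetical data above:

```python
import numpy as np
from scipy import stats

# Fisher LSD for a balanced one-way design: means ybar_i and ybar_j differ
# significantly if |ybar_i - ybar_j| > t(alpha/2, N - a) * sqrt(2 * MSE / n).
a, n, alpha = 5, 5, 0.05
N = a * n
mse = 8.06                                                # assumed error mean square
ybar = {5: 9.8, 10: 15.4, 15: 17.6, 20: 21.6, 25: 10.8}   # assumed treatment means

lsd = stats.t.ppf(1 - alpha / 2, N - a) * np.sqrt(2 * mse / n)
print(f"LSD = {lsd:.2f}")
diff = abs(ybar[20] - ybar[15])
print(f"|ybar_20 - ybar_15| = {diff:.2f} ->",
      "significantly different" if diff > lsd else "not significantly different")
```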

Designed Experiment

Statistical design of experiments refers to the process of planning an experiment so that appropriate data, which can be analyzed by statistical methods, are collected, resulting in valid and objective conclusions. A statistical approach to experimental design is necessary if we wish to draw meaningful conclusions from the data; it helps in confirming any causal relationship. When a problem involves data that are subject to experimental errors, statistical methodology is the only objective approach to analysis. Thus, there are two aspects to any experimental problem: the design of the experiment and the statistical analysis of the data.

Three basic principles of experimental design are replication, randomization, and blocking (local control). By replication we mean repetition of the basic trial on different samples. In a metallurgical experiment, two replications would consist of treating two specimens by oil quenching. Thus, if five specimens are treated in a quenching medium at different points in time, we say that five replicates have been obtained.

Randomization is the cornerstone underlying the use of statistical methods in experimental design. By randomization we mean that both the allocation of the experimental material and the order in which the individual runs or trials of the experiment are performed are randomly determined. Statistical methods require that the observations (or errors) be independently distributed random variables. Randomization usually validates this assumption. By properly randomizing an experiment, we also assist in "averaging out" the effects of extraneous (hidden) factors that may be present.

Blocking (or local control of) nuisance variables is a design technique used to improve the precision with which comparisons among the factors of interest are made. For example, an experiment in a chemical process may require two batches of raw material to make all the required runs. However, there could be differences between the batches due to supplier-to-supplier variability, and if we are not specifically interested in this effect, we would treat the different batches of raw material as a nuisance factor. Generally, a block is a set of relatively homogeneous experimental conditions. There are many design options for blocking nuisance variables.

Guidelines for designing an experiment

To use a statistical approach in designing and analyzing an experiment, it is necessary for everyone involved in the experiment to have a clear idea in advance of exactly what is to be studied (the objective of the study), how the data are to be collected, and at least a basic understanding of how these data are to be analyzed. The paragraphs below outline and elaborate on some of the key steps in Design of Experiments (DOE). Remember that an experiment can fail; however, it always provides some meaningful information.

Recognition of a problem and its statement - This may seem a rather obvious point, but in practice it is often not simple to realize that a problem requiring experimentation exists, nor is it simple to develop a clear and generally accepted statement of the problem. It is necessary to develop all ideas about the objectives of the experiment. Usually, it is important to solicit input from all concerned parties: engineering, quality assurance, manufacturing, marketing, management, customers (internal or external), and operating personnel.

It is usually helpful to prepare a list of specific problems or questions that are to be addressed by the experiment. A clear statement of the problem often contributes substantially to a better understanding of the phenomenon being studied and to the final solution of the problem. It is also important to keep the overall objective in mind.

Choice of factors, levels, and range - When considering the factors that may influence the performance of a process or system, the experimenter usually finds that these factors can be classified as either potential design (x) factors or nuisance (z) factors. Potential design factors are those factors that the experimenter may wish to vary during the experiment. Often we find that there are a lot of potential design factors, and some further classification of them is necessary. A useful classification is into design factors, held-constant factors, and allowed-to-vary factors. The design factors are the factors actually selected for study in the experiment. Held-constant factors are variables that may exert some effect on the response, but for the purposes of the present experiment these factors are not of interest, so they are held at a specific level.

Nuisance (allowed-to-vary) factors, on the other hand, may have large effects that must be accounted for, yet we may not be interested in them in the context of the present experiment.

Selection of the response variable - In selecting the response variable, the experimenter should be certain that this variable really provides useful information about the process under study. Most often, the average or standard deviation (or both) of the measured characteristic will be the response variable. It is critically important to identify issues related to defining the responses of interest and how they are to be measured before conducting the experiment. Sometimes designed experiments are employed to study and improve the performance of measurement systems.

Choice of experimental design - If the pre-experimental planning activities described above are done correctly, this step is relatively easy. The choice of design involves consideration of the sample size (number of replicates), keeping in mind the precision required for the experiment, selection of a suitable run order for the experimental trials, and determination of whether or not blocking restrictions are involved.

Performing the experiment - When running an experiment, it is vital to monitor the process carefully to ensure that everything is being done according to plan. Errors in experimental procedure or instrument error during measurement at this stage will usually destroy experimental validity. Up-front planning is crucial to success; it is easy to underestimate the logistical and planning aspects of running a designed experiment in a complex manufacturing or research and development environment. Coleman and Montgomery (1993) suggest that a few trial runs or pilot runs prior to conducting the experiment are often helpful.

Statistical analysis of the data - Statistical methods should be used to analyze the data so that results and conclusions are objective rather than judgmental in nature. If the experiment has been designed correctly, the statistical methods required are not elaborate. There are many excellent software packages (JMP, MINITAB, and DESIGN EXPERT) to assist in data analysis. Often we find that simple graphical methods play an important role in data analysis and interpretation.

Conclusions and recommendations - Once the data have been analyzed, the experimenter must draw practical conclusions from the results and recommend a course of action. Graphical methods are often useful at this stage, particularly in presenting the results to others. Follow-up runs and confirmation testing should also be performed to validate the conclusions from the experiment.

There are many design options for statistical experiments. For a two-factor experiment, the basic design is a two-way ANOVA. For a higher number of factors, a factorial design using an orthogonal array is typically used. There are also central composite designs (CCD), which are extremely useful for identifying higher-order terms in the response surface model developed from a factorial design (http://www.youtube.com/watch?v=Z-uqadwwFsU). The three-level Box–Behnken design (BBD) (http://www.itl.nist.gov/div898/handbook/pri/section3/pri3362.htm) is also very useful in situations where quadratic terms and interactions need to be identified. A fractional factorial design is recommended if more than 8 factors are to be studied and there is a need to reduce the number of runs while screening out unimportant factors; this is also known as a 'screening experiment'. Sequential DOE or response surface designs are used to reach a global optimal setting in the case of a unimodal function. Taguchi's method is a nonconventional approach used when there is little or no higher-order interaction. Desirability functions and dual response optimization may be used when there are multiple y's to be optimized simultaneously. The book by Montgomery, D.C. (2014) is an excellent reference for learning DOE techniques.
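
As a small illustration of the factorial idea, the following Python sketch enumerates a 2^3 full factorial design for three hypothetical factors in coded units:

```python
from itertools import product

# Enumerate a 2^3 full factorial design for three hypothetical factors,
# each at a low (-1) and high (+1) coded level.
factors = {"temperature": (-1, +1), "pressure": (-1, +1), "catalyst": (-1, +1)}

design = list(product(*factors.values()))
print(f"{len(design)} runs")
for run, levels in enumerate(design, start=1):
    print(run, dict(zip(factors.keys(), levels)))
```

More elaborate designs (fractional factorials, CCD, BBD) build on the same idea of systematically enumerating factor-level combinations, either by selecting a structured subset of the runs or by augmenting them with center and axial points.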
