Statistical Design of Experiments (DOE)
Topics covered:
|Some Tools for Testing||Accuracy|
|Documented Cost Savings by Using DOE||The Miracle of DOE|
|Characteristics of a Designed Experiment||The Importance of DOE|
|DOE Areas of Application||Steps in Using DOE|
|Terminology||Importance of the Test Design|
|Variable Effects||Advantages of DOE|
|When Do We Need Designed Tests?||Examples of Interactions|
|Formula for a Successful Test||Two Types of Interactions|
|Advantages of Statistical Modeling||Requisites of a Good Test|
|Simple Statistical Model|
People in biblical days used writing instruments, the forerunners of modern-day pencils. Some time later, paper in the form of scrolls was used to record information.
During the industrial revolution of the 1700s-1800s, test instruments became widely used as measurement tools. The early 20th century saw the first use of calculators, and in the 1940s computers were first used.
In 1951, Box and Wilson introduced a technique called "Response Surface Methods" for conducting designed experiments.
Designed experiments (DOE) can be classified as another tool, or set of tools, for gathering test data.
A statistically designed test emphasizes planning before the test is run. Objectives are clearly outlined, and the data analysis is outlined before the test starts, including the assumed model type and the model terms to be included. For example, the model might include linear terms, 2-factor interactions, and quadratics.
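For a two-factor test, such a model might look like the following (an illustrative form, not a model fitted to any particular data set):

```latex
Y = b_0 + b_1 X_1 + b_2 X_2 + b_{12} X_1 X_2 + b_{11} X_1^2 + b_{22} X_2^2 + \varepsilon
```

Here the b1 and b2 terms are the linear (main) effects, b12 is the 2-factor interaction, and b11 and b22 are the quadratic (curvature) terms.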
Using DOE reduces the amount of testing versus testing without DOE. The author has documented the following from a sample of tests:
|Number of Test Parts (without DOE)||Number of Test Parts (with DOE)|
|378||168|
Using DOE saves money versus testing without DOE. The author has documented the above from a sample of tests. The savings ranged from $7,000 to $40,000 per test. Of course, the amount of savings depends on the cost of testing. Our long-term average shows a 55% reduction in testing using DOE. This is in line with the above reduction from 378 total test parts to 168, which is also about a 55% reduction in testing.
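The 55% figure can be checked directly from the totals quoted above (a quick sketch; the 378 and 168 counts are the author's figures):

```python
# Percent reduction in test parts when using DOE,
# based on the totals reported above.
without_doe = 378  # total test parts without DOE
with_doe = 168     # total test parts with DOE

reduction = (without_doe - with_doe) / without_doe
print(f"{reduction:.0%}")  # about 56%, in line with the reported 55% long-term average
```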
|Design Variables||Response Variables|
The terms in column (1) are synonyms, as are the terms in column (2). Design variables are the variables we have control over. Response variables are the variables we can't control but can measure. Typically we want to achieve certain values of response variables by manipulating the levels of design variables. Examples of design variables include weight, size, % ingredients, and processing settings such as time and temperature. Examples of response variables include consumer acceptance measures, quality measures, purity, yield, cost, and tensile strength.
The distinction between design and response variables can be blurred. If two variables are highly correlated and can't be varied independently, then it is best to choose one as a design variable and measure the other as a response variable.
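One way to spot such a pair is to check their correlation in existing data. The sketch below uses made-up values for two hypothetical variables, mixing time and viscosity:

```python
# If two candidate design variables are highly correlated, pick one to
# control and measure the other as a response. The data below is made up.
def correlation(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

mix_time = [2.0, 3.0, 4.0, 5.0, 6.0]        # minutes (candidate design variable)
viscosity = [11.0, 14.2, 16.9, 20.1, 23.0]  # rises with mix time (hypothetical)

r = correlation(mix_time, viscosity)
if abs(r) > 0.9:
    print("Highly correlated: control mix_time, measure viscosity as a response")
```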
The influence of controlled design variables on measured response variables
Frequently, we want to find the effects of variables when conducting a test.
Types of variable effects on measured responses:
1. Linear (main effects)
2. Two-factor interactions
3. Quadratic (curvature)
When do we need designed tests?
1. When theory is unknown or inadequate
2. When the risk is high
3. When there are a lot of unknowns
4. For new products
5. When other people are not convinced
If we can understand the underlying mechanism of a system and can formulate a model between design and response variables, then we may not need a designed test. But this is usually not the case. Using DOE produces empirical models that are more than adequate replacements for theoretical models.
When the risk of making incorrect product decisions is high we need DOE. For example, when making a change to a profitable product we usually want concrete evidence that the change is for the better. Using DOE is like taking out an insurance policy against making bad product decisions.
DOE is especially useful when making decisions involving a lot of unknowns. For example, when developing a new product there are usually a lot of unknowns about how best to design the product. DOE can turn unknowns into accurate estimates of the effects of variables.
Many times people involved in product development need to be convinced that a certain direction is best. These people require hard evidence on which to base decisions. They are not willing to go forward on the basis of a product expert's recommendations. A properly designed test will convince the skeptics of the best course of action.
Frequently, many theories circulate through an organization about how to design or redesign a product.
An excellent test starts with the experimenter’s knowledge of the product. A designed test is not a substitute for product knowledge. A designed test provides data that enhances product knowledge.
In a designed test we may want to learn what the best (optimum) product is. Many times this best product is not one that was actually included in the test. Statistical modeling allows us to interpolate the data and search for the best possible product within the test variable ranges.
In addition to finding the best product (or best operating conditions) we want to know why this product was the best. The model allows us to plot graphs showing how variables are related and what levels of variables make up the optimum product.
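A minimal sketch of this idea, using a quadratic model fitted to hypothetical (temperature, yield) test data and searching only within the tested range:

```python
import numpy as np

# Fit a quadratic model to hypothetical test data, then search for the
# best setting within the tested range only (no extrapolation).
temp = np.array([150.0, 160.0, 170.0, 180.0, 190.0])  # design variable levels
yield_pct = np.array([71.0, 78.5, 82.0, 80.5, 74.0])  # measured response (made up)

coeffs = np.polyfit(temp, yield_pct, 2)          # quadratic: a*t^2 + b*t + c
grid = np.linspace(temp.min(), temp.max(), 401)  # fine grid inside the range
pred = np.polyval(coeffs, grid)

best = grid[np.argmax(pred)]
print(f"Predicted best temperature: {best:.1f}")  # falls between tested levels
```

Note that the predicted optimum need not be one of the five settings actually run; the model lets us find a best product that was never built.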
There may be tradeoffs between levels of variables, especially measured responses. For example, an expensive ingredient may improve quality but raise cost. Use of models shows us these interrelationships.
(Arrival Time) = (Beginning Time) + (Total Distance in miles) / (Average Speed in miles per hour)
Or in mathematical form,
Y = A + B / X
Leaving at 12:00 and traveling 150 miles at an average speed of 50 miles per hour gives an arrival time of 3:00.
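The model above can be evaluated directly. A minimal sketch of Y = A + B / X, using the same numbers:

```python
# Arrival time = beginning time + distance / speed, working in hours.
def arrival_time(begin_hour, distance_miles, speed_mph):
    return begin_hour + distance_miles / speed_mph

# Leaving at 12:00 and traveling 150 miles at 50 mph:
print(arrival_time(12.0, 150.0, 50.0))  # 15.0, i.e. 3:00 PM
```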
The accuracy of statistical models is excellent. In one study where cost could be calculated to four decimal places, the model's predicted cost agreed closely with the calculated cost. That is quite good accuracy, especially considering the product predicted was not in the test design.
In a statistically designed test, several factors are varied simultaneously rather than one at a time.
No matter what the experiment's result, there will always be someone eager to believe it supports his own pet theory.
Steps in using DOE:
1. Plan the test
• Absolutely necessary to meet objectives
1. DOE eliminates the 'confounding of effects', whereby the effects of design variables are mixed up. When effects are confounded, we can't correlate product changes with product characteristics.
2. DOE helps us handle experimental error. Any data point may contain bad data, i.e., may be an outlier.
3. DOE helps us determine the important variables that need to be controlled.
4. DOE helps us find the unimportant variables that may not need to be controlled.
5. DOE helps us measure interactions, which is very important.
Examples of Interactions
Synergistic – variables together are good, e.g. teamwork
Antagonistic – variables together are bad
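In a two-level, two-factor test, an interaction shows up when the effect of one variable depends on the level of the other. The sketch below computes a main effect and an interaction from hypothetical responses:

```python
# Responses from a hypothetical 2x2 test: factors A and B,
# each at a low (-1) and high (+1) level.
responses = {(-1, -1): 10.0, (1, -1): 12.0,
             (-1, 1): 11.0, (1, 1): 18.0}

# Main effect of A: average change in response when A goes low -> high.
effect_a = ((responses[(1, -1)] + responses[(1, 1)])
            - (responses[(-1, -1)] + responses[(-1, 1)])) / 2

# Interaction AB: does A's effect differ depending on B's level?
effect_a_at_b_high = responses[(1, 1)] - responses[(-1, 1)]   # 7.0
effect_a_at_b_low = responses[(1, -1)] - responses[(-1, -1)]  # 2.0
interaction_ab = (effect_a_at_b_high - effect_a_at_b_low) / 2

print(effect_a, interaction_ab)  # 4.5 2.5
```

Here raising A helps much more when B is also high, a synergistic interaction; one-factor-at-a-time testing would miss it.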
Randomization is the running of test parts in random order. It is the opposite of running tests systematically. Running tests randomly prevents the confounding of effects that can happen when tests are run in a standard order. For example, if temperature is a controlled design variable, it would be best not to run all the test points at a given temperature at the same time. If all test points at a given temperature are run at the same time, the effects of time can be confounded (mixed up) with the effects of temperature. Using randomization is like taking out an insurance policy against the effects of extraneous variables such as time.
Blocking is the deliberate screening out of the effects of variables thought to have an influence on the test results. For example, it may be thought that test instruments have an effect on the test results. If so, then we may want to conduct the test using several test instruments to reduce and quantify the effects of test instruments.
Replication is the running of one or more test parts under the same conditions. That is, we repeat some of the test parts in the test design. Repeat testing of test parts builds confidence in the test results and enables us to compute the statistical significance of test results.
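The three ideas above can be sketched together: build a simple two-level, two-factor design, replicate it, and randomize the run order. The factor names and levels here are hypothetical:

```python
import itertools
import random

# A two-level, two-factor design with replication, run in random order.
temps = [150, 190]   # degrees (hypothetical design variable levels)
times = [10, 20]     # minutes
replicates = 2       # each condition is run twice

runs = list(itertools.product(temps, times)) * replicates

random.seed(7)       # fixed seed so the order is reproducible
random.shuffle(runs) # randomization guards against time trends

for i, (t, m) in enumerate(runs, 1):
    print(f"Run {i}: temp={t}, time={m}")
```

With 2 replicates of the 4 conditions, the 8 runs come out in shuffled order, so no temperature level is clustered in time.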
It is of utmost importance that the ranges of controlled design variables be reasonable. These ranges should be in line with the test objectives. If the test objective is to improve the current product, then the test variable ranges should reflect this. Typically the ranges would be clustered around the current product values. If, on the other hand, the test objective is to develop a new product, then the test variable ranges would encompass all achievable values, typically wider ranges than for improving a current product. Of course, the ranges of test variables should include only achievable products.
The increment between levels of test variables should be realistic. Increments can be too wide or too narrow. If increments are too wide, we may miss finding information between the levels. If increments are too narrow, especially compared to our ability to hit the level, we may not get a good reading of the variable.
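A small sketch of choosing levels within a range and sanity-checking the increment against setting precision. The range, level count, and the rule of thumb comparing the increment to the setting precision are all assumptions for illustration:

```python
# Pick equally spaced levels inside the test range, then sanity-check
# the increment against how precisely the variable can be set.
low, high, n_levels = 150.0, 190.0, 5  # hypothetical temperature range
setting_precision = 2.0                # can hold temperature to +/- 2 degrees

step = (high - low) / (n_levels - 1)
levels = [low + i * step for i in range(n_levels)]

print(levels)  # [150.0, 160.0, 170.0, 180.0, 190.0]

# Assumed rule of thumb: increments much smaller than a few times the
# setting precision risk a poor reading of the variable's effect.
if step < 4 * setting_precision:
    print("Increment may be too narrow relative to setting precision")
```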
The above requisites should be implemented while working with a statistician.
DOES, Inc. · 2531 Woodbine Road · Winston-Salem, NC 27104 · Phone: (336) 830-DOES (3637) · Fax: (336) 721-2441 · Email: Bill@Doesinc.com