
Do we need maths proficiency for design of experiments?

I use SAS JMP to run multivariate testing experiments (MVT, regression); it generates a test plan, performs the regression analysis, and identifies the significant effects. The most important thing is to be able to interpret the results and to know the pitfalls.
Mathematical calculations, as such, are not needed.

I have read statistics books such as Montgomery's Design and Analysis of Experiments, and they are usually full of formulas.

I still cannot understand how they are used in the real world.

Yes, I understand that I need to know the concepts, the different types of designs, and basic statistics such as p-values, standard deviation, how to calculate a proper sample size, etc.
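Since sample-size calculation comes up here: as one concrete illustration, below is a minimal stdlib-only sketch for the classic two-proportion (A/B conversion-rate) case, using the usual normal approximation. The 2% and 3% conversion rates are invented numbers, and in practice a dedicated tool such as JMP's sample size calculators would be preferred.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
    """n per group for a two-sided two-proportion z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# detecting a lift from a 2% to a 3% conversion rate:
n = sample_size_per_group(0.02, 0.03)
print(n)  # several thousand per group — small lifts on rare events need big samples
```

Note how quickly the required n grows as the effect shrinks: halving the detectable lift roughly quadruples the sample size.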

I am speaking about design of experiments mostly in the web marketing and user experience fields.


Replies to This Discussion

I am an engineer, NOT a statistician, but I have been involved in hundreds of experiments of many types over the past 50 years, mostly in industrial settings. Lately, I have been using JMP to study non-industrial datasets.
To run experiments involving "human factors" rather than "machine factors" or "weather factors", you have problems with complex variance due to bias, capabilities, and the complexity of the variance components. And the responses are often NOMINAL or ORDINAL rather than CONTINUOUS, so sample sizes may need to be quite large to reach statistical significance. Then there is the question of economic importance. Design the experiment first with the computerized Custom Design platform, being very careful to describe the factors in terms of "easy" or "hard" to adjust in running the experiment, and in terms of DATA TYPES. The software will guide you, but if your data comes from a survey or web crawling you may not have a "CONTROL" group of factor settings. And you may not even have any ability to adjust factor settings.
Then you are in the world of Exploratory Data Analysis, not Statistical Design of Experiments... but the analysis is similar; you just get "associations" rather than "causal factors."

JMP and other DOE stat software are great helps in all these cases, as long as you are careful about DATA TYPE (N, O, C, %) and sample size (which shows up easily in graphs and tabular columns IF you correctly set the data type). And before any complex math analysis, you can graph the data 3 ways to view OUTLIERS, and then look back at the experimental procedures and data collection to see if the outliers are TYPOS or SOURCE TOOL ERRORS (fuzzy gages, or bad software used in the data collection or data restructuring).

So if you plot the dots of responses vs. factors, try 3 common plot types to help illuminate the data and see if it deserves deeper analysis:
1. Distribution of ALL data (Distribution platform in JMP) to spot outliers and the shape of the data.
2. Variance components (nested is good enough) of response(s) vs. factors, in nests of roughly 2 to 5, to visualize and then quantify the % variance explained by each factor, with ALL OF THE ERROR CONFOUNDED WITH THE LOWEST NESTED FACTOR. Often re-arranging the ORDER of the nesting clarifies both the graphical view and the % contributed, and may reduce the noise % (lowest factor values + noise). This is called the Variability platform in JMP.
3. Fit platforms in JMP such as Fit Y by X look at EACH factor vs. EACH response as if it were the only important effect, but they can help you think about what the result would be for "one factor at a time" experiments... confounded by interactions, perhaps. For continuous data, after screening outliers, you can try Fit Model for all factors and responses, which gets very complex, but JMP has great tools AND SUPPORT HELP IN CONTEXT and in BOOKS attached to the software. REGRESSIONS can be very misleading with messy data. Ordinal vs. continuous factors use different models and sub-platforms. Often the keys are found in the first 2 platforms IF you have domain knowledge, and without domain knowledge you really need a TEAM rather than a "lone wolf" approach.
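The % variance idea in item 2 can be roughed out without JMP. A stdlib-only sketch, with invented data and a single (un-nested) two-level factor, computing the between-group share of the total sum of squares — JMP's Variability platform does this nested and graphically:

```python
from statistics import mean

# invented data: a continuous response grouped by one two-level factor
groups = {
    "A": [10.1, 9.8, 10.3, 10.0],
    "B": [12.2, 12.5, 11.9, 12.4],
}

all_vals = [v for vals in groups.values() for v in vals]
grand = mean(all_vals)

# sums of squares: total = between-group + within-group (the "noise")
ss_total = sum((v - grand) ** 2 for v in all_vals)
ss_between = sum(len(vals) * (mean(vals) - grand) ** 2 for vals in groups.values())
pct_explained = 100 * ss_between / ss_total

print(f"factor explains {pct_explained:.1f}% of the variance")
```

With a real nested design you would repeat this decomposition level by level, which is exactly where re-ordering the nesting changes the picture.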

Just my first thoughts based on your field and questions.
P values are a serious topic in themselves.
Nowadays, with automated data collection, large sample sizes, and precision gages for continuous data, you get very low P values simply due to large sample size, so everything looks significant. P values, t tests, and even ANOVA were created when data was sparse, experiments were expensive, and hints were needed in order to drive further studies. Randomization and replication were the key, and if you could not randomize, you needed more replicates and blocked designs.
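The large-sample effect is easy to demonstrate. A minimal sketch (invented counts; a standard pooled two-proportion z-test, nothing JMP-specific) testing the same tiny 0.1-percentage-point lift at two sample sizes:

```python
import math
from statistics import NormalDist

def two_prop_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test (normal approx.)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# the same 2.0% vs 2.1% conversion difference at two sample sizes:
p_small = two_prop_p_value(20, 1_000, 21, 1_000)            # 1,000 per arm
p_large = two_prop_p_value(20_000, 1_000_000, 21_000, 1_000_000)  # 1,000,000 per arm
print(p_small, p_large)  # same effect: far from significant, then highly "significant"
```

The effect size did not change; only n did — which is the author's point that statistical significance and economic importance are different questions.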

Web click-throughs, etc. are massive samples but not continuous data. DataScienceCentral.com has many tools to study messy data.
Wikipedia nowadays covers P value issues as well as EDA vs. SPC vs. DOE (and ASPC, the automation of manual statistical process control systems).
Confidence intervals added to graphical studies are more helpful than P values, imo.
And hypothesis testing can mislead if the data type or method of data collection is stated wrong, and it is easily OVERWHELMED by large sample sizes.
You can take the huge dataset, then draw smaller random samples and analyze each of those "cuts" to see the problem.
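That "cuts" idea can be sketched as follows (the population and its 2% click rate are invented): draw several smaller random samples from the big dataset and watch how the estimate varies from cut to cut, which is the sampling variability a single huge-n p-value hides.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# invented "huge" dataset: 1/0 click-throughs with a true rate of 2%
population = [1] * 20_000 + [0] * 980_000

# take several smaller random cuts and look at the spread of the estimates
cut_rates = []
for _ in range(5):
    cut = random.sample(population, 5_000)
    cut_rates.append(sum(cut) / len(cut))

print([f"{r:.3%}" for r in cut_rates])  # rates vary cut to cut around 2%
```

Analyzing each cut separately (and seeing how often an "effect" appears or vanishes) is a cheap robustness check before trusting a single whole-dataset result.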
Newer methods for classifying exploratory data may be more useful than forcing the data to fit a design platform after the fact.

Michael, thank you very very much for your comprehensive answer.

I am not a statistician either; I have a degree in psychology of management and experience in conversion optimisation using design of experiments.

I now see some job postings, such as "Digital Optimisation Analyst", that look very appealing to me, because they include mainly design of experiments in their responsibilities, and I want to prepare myself for them. I am quite passionate about DOE.

You say: "Then you are in the world of Exploratory Data Analysis".

For DOE in, let's say, direct mail, you define several factors of the mail piece (letter, envelope, flyer; their characteristics: color, call to action, discounts, content, etc.), each at 2 levels, Yes/No.

You then create the different runs (= mail-piece versions) and randomly send them out. Depending on the conversion rate, you can estimate a sample size.
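The run-generation step described above can be sketched directly as a full 2-level factorial (the factor names are illustrative; with many factors, a real test would likely use a fractional design from JMP's Custom Design platform to cut the run count):

```python
from itertools import product

# illustrative Yes/No factors for a direct-mail piece
factors = ["color", "call_to_action", "discount", "flyer_included"]

# one run = one mail-piece version; a full factorial enumerates every combination
runs = [dict(zip(factors, levels))
        for levels in product(["Yes", "No"], repeat=len(factors))]

print(len(runs))  # 2**4 = 16 candidate mail pieces
for run in runs[:2]:
    print(run)
```

At 2^k runs the full factorial explodes quickly (10 factors would already mean 1,024 mail-piece versions), which is exactly why screening and fractional designs exist.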

Thank you for your explanation of how to view OUTLIERS and test hypotheses with large sample sizes. It is very helpful.

© 2020 Data Science Central ®

Badges  |  Report an Issue  |  Privacy Policy  |  Terms of Service