Exploratory vs. Hypothesis-Driven Experiments

The night of December 9, 2008, I had dinner at the P.R. Grill in Pentagon City with Drs. K.H.K. and W.J.K., as well as with I.K. An animated dinner conversation accompanied the good food. This post is a follow-up to one of the topics covered.

In exploratory analyses or experiments, a great many variables may be correlated with an outcome just to see which, if any, might possibly be related to the effect of interest. It is a sort of fishing expedition. Such “data mining” can lead to many false positives (Type I errors).

You may recall that I brought up as an example the case of butter production in Bangladesh. D. Leinweber sifted through a UN data CD-ROM and correlated many indicators (I do not know exactly how many) with the S&P 500, and found that butter production in Bangladesh was the single best predictor of the S&P 500 (“He Who Mines Data May Strike Fool’s Gold”, Business Week, 6/16/1997). So why don’t we all follow Bangladeshi butter production to time the stock market?
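
To see how easily this happens, here is a small Python sketch (entirely my own illustrative setup, not Leinweber’s actual data): it pits hundreds of pure-noise “indicators” against a random “market” series, and the best of them still looks impressively correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 20 annual observations and 500 candidate indicators,
# all generated as pure noise, unrelated to the "market" series.
n_years = 20
n_indicators = 500

market = rng.standard_normal(n_years)
indicators = rng.standard_normal((n_indicators, n_years))

# Correlate every indicator with the market and keep the best one.
corrs = np.array([np.corrcoef(x, market)[0, 1] for x in indicators])
best = np.argmax(np.abs(corrs))
print(f"best indicator: #{best}, |r| = {abs(corrs[best]):.2f}")
# With 500 noise series and only 20 observations, the winning
# correlation is typically |r| > 0.5 -- impressive-looking, yet meaningless.
```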

Here’s a Gedankenexperiment. Imagine you perform 10,000 experiments, all of which are just random noise without a true effect or signal. If you do your significance testing with an alpha level of 0.05, then you’d expect about 5%, or 500, of those experiments to appear significant at the 0.05 level by chance alone. (As an aside, the way we do statistics these days is apparently a horrible conflation of Fisher’s significance testing with p-values and Neyman-Pearson hypothesis testing with alpha levels.) That is, the data could be telling you that 500 of the experiments were “successes”, when in reality they were just random noise.
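
A minimal simulation of this Gedankenexperiment, assuming a two-sample t-test with 30 subjects per group (both are my arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_experiments = 10_000
n_per_group = 30   # assumed sample size; any reasonable value works
alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the same distribution: no true effect.
    a = rng.standard_normal(n_per_group)
    b = rng.standard_normal(n_per_group)
    _, p = stats.ttest_ind(a, b)
    false_positives += p < alpha

# Expect roughly alpha * n_experiments = 500 spurious "successes".
print(f"{false_positives} of {n_experiments} significant at alpha = {alpha}")
```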

Here’s another thought experiment. Ravens’ Stadium in Baltimore can hold 70,000 people. If the stadium were full and each of the 70,000 people flipped a coin fifteen times, there’s a greater than 88% chance that at least one person would flip fifteen heads in a row. But if this happened, it would be due to chance, not because that particular person is an expert at flipping coins.
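
The 88% figure is easy to verify; assuming fair coins and independent flippers, the exact calculation is:

```python
# Probability that at least one of 70,000 people flips
# fifteen heads in fifteen tries (fair, independent coins).
p_one = 0.5 ** 15                   # one person succeeds: 1/32768
p_none = (1 - p_one) ** 70_000      # nobody in the stadium succeeds
print(f"P(at least one) = {1 - p_none:.3f}")   # about 0.882
```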

So, we need multiple-testing procedures to protect ourselves against these errors. For example, there is the famous Bonferroni correction. In functional brain imaging, we use methods based on (Gaussian) random field theory. More recently, we (in functional brain imaging) have started using methods based on controlling the false discovery rate (FDR).
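
For concreteness, here is a generic, textbook-style sketch of two of these procedures, the Bonferroni correction and the Benjamini–Hochberg step-up procedure for FDR control (this is not the random-field-theory machinery used in brain imaging):

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject p-values below alpha / m, controlling the family-wise error rate."""
    p = np.asarray(pvals)
    return p < alpha / len(p)

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure, controlling the FDR at level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest i with p_(i) <= q*i/m
        reject[order[:k + 1]] = True
    return reject

# Toy usage with made-up p-values:
pvals = [0.001, 0.008, 0.028, 0.041, 0.170]
print(bonferroni(pvals))          # [ True  True False False False]
print(benjamini_hochberg(pvals))  # [ True  True  True False False]
```

Note the trade-off visible even in this toy example: Bonferroni guards against any false positive and so rejects less, while FDR control tolerates a small, controlled fraction of false discoveries in exchange for more power.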

Exploratory analyses can suggest an interesting phenomenon, e.g., that two variables are correlated; hence they are sometimes called “hypothesis-generating experiments”. But then you need to do a hypothesis-driven experiment to really convince everybody that the two variables are indeed correlated, and that the finding wasn’t merely the product of a fishing expedition (i.e., a case of finding somebody who flipped fifteen heads in a row in Ravens’ Stadium). In a hypothesis-driven experiment, you declare at the outset that your hypothesis is that X and Y are correlated, and you design the experiment specifically to test that hypothesis. It’s a surgical strike, rather than a fishing expedition.

So, exploratory analyses are one case where the data may not truly reflect a “real” signal, which was actually your original question. I think another, somewhat slippery, case is the selection of a significance threshold. You might obtain a t-test result that is significant had you chosen an alpha level of 0.05, but non-significant had you chosen an alpha level of 0.01. So, “significance” depends on the threshold you chose. (Again, I am aware that there are controversies regarding the use of p-values; e.g., see this book.)
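
A small worked example, using made-up summary statistics chosen so that the p-value lands between the two thresholds:

```python
from scipy import stats

# Hypothetical summary statistics for two groups of 30 observations each.
t, p = stats.ttest_ind_from_stats(mean1=10.0, std1=2.0, nobs1=30,
                                  mean2=11.2, std2=2.0, nobs2=30)
print(f"t = {t:.2f}, p = {p:.3f}")               # p is roughly 0.02
print("significant at alpha = 0.05:", p < 0.05)  # True
print("significant at alpha = 0.01:", p < 0.01)  # False
# Same data, same test; only the pre-chosen threshold changes the verdict.
```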

Published on 14 December 2008 at 3:17 pm

4 Comments

  1. Actually, the problems are far worse than merely technical. Journals usually don’t publish negative results; everyone likes positive results. The standard social science model, in which the researcher presents a hypothesis and then “tests” it, is simply not credible.

  2. Exploratory analyses, observational studies, etc., can only show association, not causation. To show causation you have to do a well-controlled, randomized interventional study. But something that has always bothered me is this: even when a non-primary variable or endpoint is analyzed, and the study had enough power and was well enough controlled to show a difference for that endpoint, the result still isn’t considered valid. The endpoint has to be prospectively chosen in order to be valid. It’s probably better evidence of causation than an observational study, but it still isn’t valid, at least for the FDA. I guess that’s one of the rules of the game and we just have to accept it.

  3. I have to admit that I don’t yet really understand how to show that A actually causes B; it’s a topic I haven’t looked into too deeply yet. Suppose we do a well-controlled, randomized interventional study, and we use some basic analysis such as a correlational analysis or t-test. Aren’t we still stuck with showing only that the independent and dependent variables are associated, rather than that one causes the other? Or does the “interventional” aspect lend some sort of temporal dimension to the experiment, which allows us to infer causality?

  4. From the statistics lectures I’ve attended, well-controlled randomized trials are supposed to show causality. I guess if there were no randomization and intervention it would only show association, but then wouldn’t it be a cohort study instead of a randomized interventional study? If you gave A to one group and a placebo to another, the two groups had the same demographics, and after the two treatments were given over a certain period an endpoint was higher in the group given A, then that shows causality. It could be a Type I error, but I guess that’s why a meta-analysis of similar trials is needed.

