OK, the results are in from my colleague, Dr. Toija Riggins:
What does this mean? Well, in short, we found no statistically significant difference in response between the group that was exposed to infrasound and the group that was not. Unlike in Wiseman and Angliss' study (described in Part I of this series), we also had only a handful of reported "unusual" experiences. We haven't finished processing them yet, but having gone through them, we don't anticipate finding anything there either.
Here are the details from Dr. Riggins:
I used SPSS version 15, and the analysis I conducted is called a Multivariate Analysis of Variance (MANOVA) with 5 dependent variables (our five scales - Happy/Sad, Aroused/Sleepy, Excited/Bored, Angry/Calm, Confident/Fearful) and one main independent variable (Infrasound On or Off). The MANOVA didn't find any statistically significant differences based on whether the infrasound was on or off. Gender, age, and superstition didn't make a difference either. I've attached a file [download a pdf copy here] with some output from the basic test of whether having infrasound on or off made a difference.

The first section you want to look at is the one called "Multivariate tests." Look at the "Wilks' Lambda" statistic in the second set of figures, for the variable called "OnOff" (NOT the first set, labeled the intercept). You'll see that the significance level (Sig.) is greater than .05, actually a lot greater, meaning that we cannot reject the null hypothesis that the means of the two groups (infrasound on and off) are equal. In English, that means we didn't find a statistically significant difference in the means of the two groups on any of the emotional scales.

If we had found something in this first statistic, we could pay attention to the section called "Between-subjects effects," which shows whether there are significant differences between infrasound on and off for each of the individual emotional scales. When we have multiple dependent variables (our scales, in this case), we can't use the regular .05 significance level as our cutoff, because we increase the risk of Type 1 error (i.e., finding a significant result when there isn't really one). So we do what's called a Bonferroni adjustment: divide the .05 level by the number of dependent variables (we have 5 scale variables) and use .01 as the cutoff for statistical significance.
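For readers who'd like to see the shape of this analysis outside SPSS, here is a minimal sketch of the same test using Python's statsmodels library. The data are simulated, and the column names and 100/100 group split are illustrative assumptions, not the study's actual data:

```python
# Sketch of a one-way MANOVA like the one described above, using
# statsmodels in place of SPSS. All data here are SIMULATED noise
# (no true group difference), purely to show the mechanics.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 200  # total participants, split evenly between conditions

df = pd.DataFrame({
    "on_off": np.repeat(["on", "off"], n // 2),
    # Five emotional scales, simulated with identical distributions
    "happy_sad":         rng.normal(3.5, 1.0, n),
    "aroused_sleepy":    rng.normal(3.5, 1.0, n),
    "excited_bored":     rng.normal(3.5, 1.0, n),
    "angry_calm":        rng.normal(3.5, 1.0, n),
    "confident_fearful": rng.normal(3.5, 1.0, n),
})

# One MANOVA: five dependent variables, one independent variable
mv = MANOVA.from_formula(
    "happy_sad + aroused_sleepy + excited_bored + angry_calm "
    "+ confident_fearful ~ on_off",
    data=df,
)
# As in the SPSS output, read Wilks' lambda for the on_off term,
# not for the intercept
print(mv.mv_test())
```

The printed table is the analogue of SPSS's "Multivariate tests" section: if the Wilks' lambda significance for the condition term is above .05, you stop there, just as Dr. Riggins describes.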
As we would expect, since we didn't get anything in the overall analysis, you'll see that none of the Sig. values for our 5 emotional scales fall below that .01 cutoff. I didn't include them here, but I did go ahead and do the additional analyses looking at gender, superstition, and age. Nothing significant was there either, as we would expect, given that we didn't find anything in the overall analysis of just the group with infrasound versus the group without it.

Finally, you can take a look at the very last table, "Estimated Marginal (group) Means." You will see that the means are extremely similar for the group with infrasound and the group without it on every one of our five emotional scales. For example, on the Happy/Sad scale, the mean for the group with infrasound off is 3.467 and the mean for the group with infrasound on is 3.492. Those are pretty much statistically identical, even with a sample size of 200 people. The same pattern follows on the other scales, with all of the means being statistically indistinguishable from each other. If something had been there, we would have seen a big difference between the groups, such that the infrasound on group's mean was significantly higher or lower than the infrasound off group's mean.
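The per-scale logic above (compare the two group means on one scale against the Bonferroni-adjusted cutoff) can be sketched as follows. An independent-samples t-test stands in for SPSS's univariate between-subjects F test, which is equivalent when there are only two groups. The ratings are simulated around the reported Happy/Sad group means; the standard deviation and per-group sample size are assumptions for illustration:

```python
# Sketch of one follow-up comparison with a Bonferroni-adjusted cutoff.
# The ratings are SIMULATED around the reported group means; sd and
# group sizes are illustrative assumptions, not the study's data.
import numpy as np
from scipy import stats

alpha, n_scales = 0.05, 5
cutoff = alpha / n_scales  # Bonferroni adjustment: .05 / 5 = .01

rng = np.random.default_rng(1)
# Happy/Sad ratings centered on the reported means (3.467 vs 3.492)
off = rng.normal(3.467, 1.0, 100)
on = rng.normal(3.492, 1.0, 100)

t_stat, p_value = stats.ttest_ind(on, off)
print(f"Bonferroni cutoff = {cutoff:.2f}, p = {p_value:.3f}")
if p_value >= cutoff:
    print("No statistically significant difference on this scale")
```

With means this close and this much spread in the ratings, the p-value stays far above the .01 cutoff, which is exactly the pattern the SPSS output showed.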
So, it was an enormous amount of work, and in the end we found nothing. But that's science! And it really wasn't nothing--we documented a case in which 19 Hz infrasound did not cause anyone to have unusual experiences.
My special thanks to Dr. Riggins, Andrew Puccio, Dr. Richard Wiseman, Sarah Angliss, all our volunteers (listed in Part II) and everyone who helped us run this experiment. I personally learned a huge amount, and know now what it would take to test all the other ideas I've been considering. Next time, I'll pick something we can test that doesn't have to work in conjunction with the project (Gravesend Inn) that takes up all my time each fall.
CUNYMedia has a nice video piece on the project here.
Welcome Boing Boing readers!
Final wrap up including raw survey data and statistical analysis of the unusual experiences here.