The second statistics book I ever read was How to Lie With Statistics by Darrell Huff (1954/1982). If any statistics book could be considered a classic, it would be this one. At the time, I found it amusing but did not really take it to heart. I learned that in order to get two out of three dentists to recommend almost anything, all you had to do was ask enough sets of three dentists the question; eventually, you would likely get two of the three to recommend it. I knew such tactics were really just marketing tactics and assumed they had nothing to do with scholarly research, especially not scholarly research in psychology.
A few years later, I started working on my PhD in clinical psychology. I completed my master’s and doctoral research projects, and each time felt like a kid in a candy store when I was able to start analyzing the results. As I “discovered” these “truths” that my “research” revealed, I was eager to share them with the world. After graduating, I continued to do research with the same zeal. I wanted to learn more advanced statistical procedures and developed more ideas for research projects than I could actually follow through on.
Yet, the more I did research, the more I began to question my own research and that of others, especially as I reflected on my results after the glow of the original findings began to wear off. I could see how the data could be interpreted in different ways, and the limitations of the measures became increasingly evident. More and more, I heard people who participated in the research talk about why they answered the way they did and realized, “Well, that completely changes the meaning of this.” This was particularly true when we included focus groups as part of research projects. The participants did not interpret the questions on these measures, despite the measures’ strong reliability and validity, the way they were supposed to.
The more I heard this, the more difficult it was to assume that these participants were the outliers, as many researchers would claim. The more I looked at it, the more I became convinced that there was a great deal of subjectivity embedded in “objective” research. I started to realize that many of the things researchers like myself told ourselves to justify our results were just that: justifications. They comforted and justified the researcher, but did not truly address the limitations of the research.
Bad Faith
I was beginning to learn that research was more subjective than I was originally willing to admit. Yet, I believed that researchers did their best to be responsible with their research. So, I still kept faith in science, kept doing research, and even taught a few classes on research. At the same time, I was also teaching classes on the history and philosophy of psychology, which included addressing epistemology and the philosophy of science. In trying to incorporate diversity into these courses, I read Guthrie’s (2003) very important book Even the Rat Was White and added it as required reading. Few books more powerfully illustrate how science can be misused.
Guthrie (2003) demonstrated how much of the early psychological research was based on what could be called the White male standard. It was assumed that the White male was superior in essentially every way. Thus, when research on differences between racial groups began, it was implicitly, if not explicitly, assumed that the White male results reflected the ideal. So when people of African descent were quicker on a finger-tapping test, this was interpreted as evidence that they must be genetically designed for manual labor.
This was only one of many very offensive interpretations of research in which the underlying assumptions played as strong a role in determining “the truth” or “the facts” as did the actual statistical data. These early researchers did not necessarily have bad intentions; rather, they were unaware of how their biases were distorting their “objective” research. Furthermore, this lack of awareness caused real harm to people.
Reflecting upon this, I remembered something one of my graduate school professors told me. He stated that getting published was less about the results and more about how the researcher interpreted the results. It was being able to “sell” our results as significant that often started the path to distortion. As many researchers lived in the “publish or perish” world, selling one’s results was pivotal in being able to assure job security and advance in one’s career. Selling results had high stakes.
This same professor also liked to advocate for a new journal to be titled The Journal of Research in Support of the Null Hypothesis. This journal, which has since become a reality, was intended to publish research that did not produce significant results. He pointed out that it was not uncommon for several similar research projects to be conducted with only one yielding significant results. Of course, the one that was published was the minority: the one with positive results. He provided us with examples of research that was frequently referred to in the literature but had never been successfully replicated. Attempts at replication produced nonsignificant results and, because of this, were never published. Few knew, and science moved on, largely taking the positive results for granted.
Stacking the Variables
When designing research projects, a critical element is deciding what variables to include. This is tricky business. If you include too many variables, it is challenging to identify unique contributions from each variable. If you do not include enough variables, then your research will not be taken as seriously and will be questioned as to how you know that other, unmeasured variables are not actually responsible for the change or influence you found. Let me provide two examples to help explain.
Many factors influence depression. If you are doing a correlational study, you are most likely interested in finding the unique contributions of one or more variables. Let’s say, just for the sake of the example, that 50 variables influence depression. All of those influences may be very small and many of them may function together. You are interested in how the lack of perceived support contributes to depression. Thus, you will include a measure of lack of perceived support and a measure of depression. However, if you do not include any other measures, critics may respond saying, “Maybe the lack of perceived support is actually the result of criticalness of important people in one’s life,” or a variety of other factors.
Thus, it is important to include enough other variables to “control for.” In this example, any variance in depression that is shared by criticalness from others and lack of perceived support will be statistically removed, and only the relationship between lack of perceived support and depression that remains after criticalness of others is accounted for will be considered. Typically, research will include two to four control variables. If you included 10, there would be a much smaller chance of finding unique variance, because you would be removing any variance that overlaps with the other variables, regardless of whether this overlap occurs by chance or because the variables are related to or influence each other.
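To make this concrete, here is a minimal sketch of what “controlling for” a variable looks like statistically, using simulated data and purely hypothetical variable names drawn from the example above. It illustrates a partial correlation, not a reanalysis of any actual study; the effect sizes are assumptions chosen only to show how the unique relationship shrinks once a control variable is removed.

```python
# Minimal sketch of "controlling for" a variable, using simulated, hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated constructs (names are illustrative, not real measures):
criticalness = rng.normal(size=n)                        # criticalness of important others
lack_support = 0.7 * criticalness + rng.normal(size=n)   # overlaps with criticalness
depression = 0.4 * criticalness + 0.3 * lack_support + rng.normal(size=n)

def partial_corr(x, y, control):
    """Correlation of x and y after regressing the control variable out of both."""
    c = np.column_stack([np.ones_like(control), control])
    x_res = x - c @ np.linalg.lstsq(c, x, rcond=None)[0]
    y_res = y - c @ np.linalg.lstsq(c, y, rcond=None)[0]
    return np.corrcoef(x_res, y_res)[0, 1]

print("zero-order r(lack_support, depression):",
      round(np.corrcoef(lack_support, depression)[0, 1], 2))
print("partial r, controlling for criticalness:",
      round(partial_corr(lack_support, depression, criticalness), 2))
```

In this simulation, the simple correlation between lack of perceived support and depression is noticeably larger than the partial correlation once criticalness is controlled; every additional control variable that overlaps with the predictor removes more of that shared variance, which is exactly why adding many controls makes unique effects harder to find.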
In a second example, let’s say you are examining how Therapy X influences depression over a 20-week period. Again, you need to “control” for other variables, which is often done through a control group that is either not receiving therapy or only receiving supportive therapy. It is then assumed that the techniques of Therapy X are the cause of the change that occurs in the treatment group and not the control group.
However, there are again many other factors that could come into play. For example, leaving one’s home, aspects of the therapy relationship, the belief that therapy will work, or various other factors could be causing the change. Indeed, research on the common factors of therapy suggests this is often the case (Elkins, 2009; Wampold, 2001). The more one measures control variables, such as empathy or providing a rationale for why therapy will work, the less chance one has of finding unique change that can be attributed to the therapy modality. Yet, most live in denial about these common factors and assume that whatever the researcher chooses to include, and chooses to interpret as the change factor, is indeed responsible for the change.
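A small simulation can illustrate the point. The numbers below are assumptions, not findings: both groups are given the same improvement from common factors (relationship, expectancy, simply showing up for supportive therapy), and Therapy X is given only a modest technique-specific boost on top. The standard analysis sees only the between-group difference and credits it to the technique.

```python
# Minimal sketch (simulated, hypothetical numbers) of a Therapy X vs. supportive-therapy
# comparison in which common factors drive most of the improvement in both groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60  # participants per group

common_factor_change = 6.0   # improvement shared by both groups (assumed value)
technique_change = 2.0       # extra improvement specific to Therapy X (assumed value)

# Change in depression scores over 20 weeks (positive = improvement)
control_change = common_factor_change + rng.normal(0, 4, n)
therapy_x_change = common_factor_change + technique_change + rng.normal(0, 4, n)

t, p = stats.ttest_ind(therapy_x_change, control_change)
print(f"Supportive-therapy control improved by {control_change.mean():.1f} points on average")
print(f"Therapy X improved by {therapy_x_change.mean():.1f} points on average")
print(f"Between-group difference: {therapy_x_change.mean() - control_change.mean():.1f} "
      f"(t = {t:.2f}, p = {p:.3f})")
# The analysis only "sees" the roughly 2-point gap between groups; the much larger shared
# improvement from common factors is not attributed to any specific technique at all.
```

Under these assumed numbers, the bulk of the improvement in the Therapy X group has nothing to do with the technique, yet it is the technique that gets named as “what works” when the results are written up.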
Thus, in designing research projects, there is often a discussion about which control variables to consider and how many to include. Including too many, or including others that are likely to similarly influence the dependent variable (e.g., depression, in the above example), means you are unlikely to find unique variance. Including too few, or omitting variables that others are likely to suggest as the actual cause, often leads to criticism and possibly to your research not being accepted for publication. Thus, there is a political game that researchers must play when deciding what variables to include in a study. This “game” is strongly influenced by the desire to find positive results and get published. This introduces a limitation and a subjective element into research that is rarely considered by the consumers of that research.
Conclusion… For Now
As we conclude, let me emphasize that in describing my loss of faith in research I am not suggesting that we abandon research. Rather, I am suggesting that we need to be honest about it. Additionally, I am not suggesting that researchers are maliciously or intentionally distorting their results. Rather, I am concerned that they do not adequately consider the factors I have pointed out as influencing the research.
In this first part, I have focused heavily on the limitations inherent in research, and in particular, on the reality that research, even what is generally considered objective research, is much more subjective than is often acknowledged. In part two, I will continue the journey of my loss of faith by focusing more on the political aspects of research. Additionally, I will conclude by suggesting ways that we can be more responsible in our consumption of research, as well as in our conducting and reporting of research.
References
Elkins, D. N. (2009). Humanistic psychology: A clinical manifesto. Colorado Springs, CO: University of the Rockies Press.
Guthrie, R. V. (2003). Even the rat was white: A historical view of psychology (2nd ed.). Boston, MA: Pearson.
Huff, D. (1982). How to lie with statistics. New York, NY: Norton. (Original work published 1954)
Wampold, B. E. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, NJ: Taylor & Francis.
— Louis Hoffman