I’ve finally gotten a handle on the frustration that’s been building within me regarding psychology as an academic field.
First, I want to be clear that I’m still deeply interested in psychology and current research. Many of the people I follow on Twitter are doing intriguing work and engaging with others in fascinating discussions that span nearly all of psychology and often reach back to its forebears, philosophy and biology. That’s what prompted me to start this blog. They aren’t directly responsible for my frustration with psychology, though. Its roots are in my vision of what psychology should be.
My interest in understanding human functioning—as well as dysfunctioning and malfunctioning—is what made psychological research so appealing to me. I see psychology as a means to discover whether there are universal principles in these realms, and if so, what they are and what their limits are. The knowledge gained could also help improve our functioning. So by definition, psychological research should focus, as much as possible, on real-world activities and contexts. But for the most part, that isn’t how it’s gone.
Philosophical assumptions and -isms
When psychology started to separate from philosophy as a field of study, some fundamental philosophical assumptions were carried over. This makes some sense: psychological research still needed a foundation, and it’s often useful to have a theoretical framework in mind. Somewhere along the way, though, some of those assumptions came to be implicitly accepted as true, and some psychological theories grew out of that state.
Reductionism is a very common example. It holds that the best way to study a phenomenon is to simplify it as much as possible and then investigate it under the most rigorous, controlled conditions one can manage. Findings from related studies can then be pieced together to explain how the whole phenomenon works.
Dr. Hermann Ebbinghaus’ early work in memory research offers a good demonstration of reductionism, as the essay “Introduction to Memory: Hermann Ebbinghaus (1885/1913)” describes. To summarize, in order to study how we memorize information, Ebbinghaus created lists of nonsense syllables (all consonant–vowel–consonant) as his test materials, set a criterion for achieving memorization, and tested himself extensively under a variety of conditions to both identify and replicate robust results. He then used his findings to estimate how long it would take to memorize “meaningful information”.
Most memory research has continued in this vein, except that these days computers typically control most aspects of creating stimuli and presenting them to participants. I participated in some of this research as an undergrad; it was a requirement for all introductory psychology courses. The setup and task were very artificial, leading me to wonder for the first time how well all the memory research I’d been learning about actually captured how we remember things in daily life. (Spoiler: it wasn’t the last time I wondered.)
Models do what?
Analogies and other comparisons have long been used to try to understand aspects of functioning. Today the computer metaphor for the brain is extremely common; when I was an undergrad in the pre-computer era, the telephone system served as the analogy for how the nervous system works, with the brain as the “master switchboard”. Analogy as a tool has been used in philosophy for a very long time, and these models of the brain and nervous system trace directly back to René Descartes’ mechanistic view of biology: bodies operate much like machines, the main difference being that a spirit or soul animates living things, while external energy moves machines.
Based on this assumption, many models of functioning today are tested by simulation. Machine learning as a simulation of how human brains learn is a common example. Maybe I’m missing something in this endeavor, but it seems to me that this essentially makes too much of correlated observations. Someone develops a model of facial recognition, for example, and at first it isn’t very good compared to a typical person’s ability to recognize faces. So things are tweaked, and as the computer model’s facial recognition improves, it’s claimed that the researchers have demonstrated how brains do this task.
Huh?
That’s a huge logical leap—and a dangerous one, in my opinion.
Seeing this time and again in graduate school shifted my thinking, and my teaching. I stopped focusing on “facts” and models and focused instead on the relationship between theories, their assumptions, and the research that comes out of them. On the first day of each class, I take some time to talk with my students about the cliché “the map is not the territory” and how it applies to psychology.
Teaching: Facts or critical thinking?
I’ve taught undergrad courses throughout my academic career. Based on what I’ve observed among many of my colleagues—and what I’ve heard from students who didn’t have me for intro psych—my approach to teaching is atypical. While in grad school at Ohio State, I was able to find and read the original articles behind classic psychological research in its excellent library. I invested a fair bit of time in this because I discovered that the intro psych textbooks pushed at us typically had wildly different presentations of the same research or phenomenon (Phineas Gage’s brain injury is a great example). Sadly, it’s gotten even worse now that it’s much easier to set oneself up online as some kind of expert.
The texts also practically fetishize new research. This is lost on college students, who generally lack the background to understand it, much less place it in a philosophical and historical context.
I took a different approach in my courses, choosing to present those contexts to my students for the topics we covered (because of time constraints, my recent intro psychology courses didn’t cover every chapter). We also discussed problematic elements where they arose and explored the relevance of historical findings to people’s lives today. This is perhaps clearest in some areas of developmental psychology, where the age ranges for developing certain cognitive skills have shifted since they were first identified and described. Cultural and social issues came up in many topics as well.
In other words, I built my courses around my long-held belief and hope that psychology would increase our understanding of ourselves and improve our functioning. I hoped to give my students a basis for seeing and understanding where psychology is now, and how it can improve. I wanted them to understand that it isn’t wise to think of ourselves as “brains in meatsacks”, slaves to brain chemicals and hormones. And yes, I introduced them to ecological theory as a more holistic and realistic approach to psychological research. I neither wanted nor expected them all to agree with it, of course. I just wanted them to see that how a psychologist builds a theory affects the research process and the findings they’ll get.
What’s the best way forward?
In my years of teaching, I’ve encountered many students who are suspicious of psychology. Some don’t see any value in it. Others think they’ll learn how to read and/or influence people, and are disappointed when I don’t focus on that. Many think of psychology only in a mental health context, and are disappointed when they see that topic gets just a couple of weeks at the end of the term.
I think much of this stems from the same source as my frustration with psychology: it’s been oversold—especially for a fairly young social science—and has under-delivered. Assumptions have become implicit foundations of many psychological theories, which has led much research into the weeds rather than toward a realistic, holistic, dynamic approach.
This situation is far more dangerous to psychology than the ongoing replication crisis. And I’ve barely scratched the surface here. I’m not planning to focus heavily on my crotchety discontents, but now that I have a deeper understanding of them, I’ll probably return to the topic a few times.