Notes and thoughts on three classic works in psychology
These are notes taken while I am reading.
Miller begins his paper with a rather informative introduction to the topic. Interestingly, he feels it necessary to describe the theoretical framework he is working in. This is unusual because most papers nowadays may introduce some theoretical background, but they almost never fully explicate their base assumptions in the way that Miller does. Someone working on, say, embodied linguistic representations usually does not feel obligated to open their papers with an introduction to the notion of representation, how co-variation in neural spikes is assumed to carry information about objects in the world, the mind-as-computer metaphor, or other such fundamental assumptions of their field. They may give an introduction to what it means for a concept to be embodied, but that is a more specific theoretical consideration that rests on several more basic theoretical assumptions.
It is now clearer why Miller goes to such lengths to discuss the information-processing metaphor for the mind; this paper is an attempt to push it rather heavily. Anyone who has taken an undergraduate course in cognitive psychology will be familiar with the notion that working memory has a capacity of seven items, plus or minus two. However, the way I just described it uses the notion of working memory as a container, where there is a limit on the contents. Miller is using the limits of working memory to motivate the notion that channel capacity is a decent metaphor for the mind. Here, channel capacity can be understood as the limit on the information that can pass through some transducer. For example, my Wi-Fi capacity could be around twenty-five megabits per second, which means that my router cannot process more than twenty-five megabits in any given second. It appears that the human mind can only process seven items at once, thus it has a seven-item channel capacity (plus or minus two).
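As a rough back-of-the-envelope translation into information-theoretic terms (my own arithmetic, not a figure quoted from Miller): a channel that can reliably discriminate among N equally likely alternatives carries \( \log_2 N \) bits per judgment, so

\[ \log_2 7 \approx 2.8 \text{ bits per judgment,} \]

which is the sense in which “seven items” can be restated as a channel capacity.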
There isn’t really much left to say about this paper. It is a classic I’m glad I eventually read, and it is interesting to see information theory put forward as a novel approach, given that in cognitive science today it is treated as the orthodoxy and there is a push to move away from it. I wonder how useful information theory really is in the context of the limits of memory. Mathematical psychology of the nineteenth century tracked similar phenomena without any need for information theory. For example, Weber’s law was constructed to track the just-noticeable difference between magnitudes of stimuli. Here we might say, in informational terms, that it is tracking some form of sub-capacity of the channel. But Weber’s law was effective for roughly a century before information theory and deals with a concept similar to the caps on memory. Maybe the idea is that information theory makes sense of phenomena like Weber’s law, but isn’t the mathematical model supposed to be what makes sense of the phenomena? Weber’s law makes sense of itself.
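For reference, the standard statement of Weber’s law (a textbook formulation, not something taken from Miller’s paper): the just-noticeable change in a stimulus is a constant fraction of its current magnitude,

\[ \frac{\Delta I}{I} = k, \]

where \( I \) is the stimulus intensity, \( \Delta I \) the just-noticeable difference, and \( k \) the Weber fraction for that sensory dimension. Nothing in the formula requires talk of channels or bits.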
This one is really interesting in that it is essentially a discussion about the philosophy of psychological explanation. I am not quite sure exactly what his target is. Is it the fact that investigations are done with an assumption of binary results, or is it that progress is being made bottom-up, with individual phenomena being explained piecemeal without an overarching narrative that ties them together?
Newell claims that one of the big problems is the lack of “control structure” in psychological theory. This is difficult to understand, and it does not help that his explanation of it invokes outdated programming languages (admittedly not his own fault). Here is my best attempt at unpacking it. The control structure refers to the relations between phenomena that make sense of them. In other words, the control structure is the syntax, and the experiments are the semantics. Without syntax, we may understand individual words, but any larger strings of words would be impossible to articulate. Thus, we need syntax to connect words into larger constructions, so that words may make sense together, not just apart.
In the sense used by Newell, it seems that there is no framework connecting the disparate experimental phenomena. This seems wrong, simply because he admits that these experiments advance the overarching theoretical approach. Isn’t that the control structure? The information-processing metaphor for cognition would be the glue that holds those findings together. It is hard to imagine any other way to discuss the control structure. He uses the computer analogy, but this can be misleading, especially since computer science has come so far, and newer connectionist, neural-net-focused approaches do not have the same type of architecture. Instead, look at other, more established scientific explanations. What would be the “control structure” in Newtonian physics? The structure that makes sense of the relation between two objects is the overarching theory of gravity. Here, the semantics (the experimental results) and the syntax (the theoretical structure that makes sense of the semantics) are resolved by simply positing a gravitational force and then modeling that force. Thus, we have a satisfying control structure for a vast collection of experimental results. (We are ignoring relativity and quantum mechanics for simplicity’s sake, as well as because I cannot, for the life of me, understand the latter. Though perhaps those would be an example of several control structures operating at different levels of explanation, something we now readily accept in the cognitive sciences.)
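To make that concrete (a standard textbook statement, not anything drawn from Newell): the single posited force that organizes all of those observations is

\[ F = G \frac{m_1 m_2}{r^2}, \]

where \( m_1 \) and \( m_2 \) are the two masses, \( r \) the distance between them, and \( G \) the gravitational constant. The “control structure”, on my reading, is just the commitment that every pairwise interaction between masses is an instance of this one law.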
I am ignoring Newell’s focus on information processing in my concluding remarks, as I have little to say other than that the field has moved on from the strict computer metaphor, and even proponents of the computer metaphor today do not mean it in such a fashion. Instead, I want to talk about his notion of psychological progress. Part of me thinks he is correct to be worried. Oftentimes people like to use the metaphor of a puzzle to describe scientific progress. We chip away at some unknown by slow and steady methods of observation and experimentation, putting together a mosaic of evidence that eventually fits together to form a coherent whole. However, Newell does not think this is how psychology should work, and indeed I doubt that this is seriously how science works. Rather, science needs an idea of the big picture in order to effectively investigate the puzzle pieces. Thus, we scientists keep guessing at what the puzzle is a picture of while attempting to investigate it further. We could call this a path to discovery. Without an idea of where we are going, we simply wouldn’t have an idea of how to get there. I think this is what Newell is getting at. Too much is left unconstrained and disconnected in these smaller theories.
At the same time, too much emphasis on looking forward towards a grand theory is dangerous. It can bias the kinds of explorations we conduct and close us off to important investigations because those areas are not covered by our overarching theory. There is no room in the information-theory paradigm for dynamic and embodied cognition, so if one is set only on working towards that theory, important aspects of the mind would be ignored and never investigated. We also need researchers willing to do important, somewhat theoretically neutral, but minor experiments on phenomena that may not be theoretically interesting on their own but accumulate into impossible-to-ignore sets of data. Every famous researcher who puts forward a grand theory is doing so off the backs of normal scientists accumulating data without any particular paradigm in mind.
It is good to see that Semon carries on the tradition of classic German thinkers. He is very difficult to understand, but what he has to say is consistently remarked upon as important.
I have very little to say as commentary on Semon’s theory. Honestly, I have a difficult time understanding the verbiage, and for that reason I have little patience for reading literature of this era. I am also not well versed in memory research, either of the 1970s or of today, so I can’t comment on the forward-thinking nature of his work.
I consider myself fairly well versed in cognitive psychology, and cognitive science in general, but I have never taken a course in or read serious work on memory. I see memory mentioned quite often, and it is implicitly always there in the background, but it has never been seriously discussed in the work I’ve read. Instead, I am much more comfortable discussing mental representations and attractor networks, both of which assume a theory of memory, but neither of which really details one. As such, I found the Semon text to be impenetrable, with interesting insights sprinkled throughout.
I will comment on the notion of separate memory paths. Semon claims that repetition creates new engrams, which I interpret as representations, rather than just strengthening old ones. This seems obviously incorrect, but only with hindsight that even the authors of this paper (written in the late 70s) did not have. Repetition of stimuli (and likely of the associated behavior) would most plausibly strengthen neural connections, resulting in stronger connection weights between neurons. There is a sense in which repetition might create new representations, in that repetition may adjust weights depending on context, but those new weights would still be operating within the same representation, not a different one. Besides, what would happen to the old representation on repeated exposure? Is it still “stored”, or is it forgotten? If it is forgotten, then it seems like a waste, especially if the repetitions came under different contexts, which is likely. Rather, it would be better to say that there is one representation that is adjusted over time. But again, it would be completely unfair to count this as a criticism of Semon, considering the state of biological psychology at the time.
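To make the contrast I’m drawing concrete, here is a toy sketch (entirely my own illustration, not Semon’s model or anything from the commentary paper) of the two readings of repetition in a simple associative memory: strengthening one existing trace versus storing a fresh engram on every exposure.

import numpy as np

def strengthen(weights, pattern, lr=0.1):
    # One-trace reading: repetition nudges the existing weight matrix
    # (a Hebbian-style outer-product update).
    return weights + lr * np.outer(pattern, pattern)

def store_new_engram(traces, pattern):
    # Separate-paths reading (my gloss on Semon): each repetition lays
    # down its own copy of the pattern.
    return traces + [pattern.copy()]

rng = np.random.default_rng(0)
pattern = rng.standard_normal(8)

weights = np.zeros((8, 8))
traces = []
for _ in range(5):  # five repeated exposures of the same stimulus
    weights = strengthen(weights, pattern)
    traces = store_new_engram(traces, pattern)

# One matrix whose weights have simply grown, versus five stored copies.
print(np.linalg.norm(weights), len(traces))

On the strengthening reading the five exposures leave behind a single, heavier trace; on the separate-engram reading they leave five largely redundant copies, which is exactly the waste worried about above.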