a short introduction to functionalism
These two sections were part of a chapter on the mind. I re-wrote them for the current version of the chapter so that the presentation of functionalism would be a little less technical. This version might still be of interest, however, to anyone looking for a (very) short introduction to the topic. So, here they are.
4. Functionalism
Monism (the idea that the universe is composed of only one substance, matter) is really a category of theories. The version of monism that dominated the second half of the twentieth century began with the philosopher Hilary Putnam's observation that a mental state such as pain can be experienced by very different kinds of creatures. His examples were mammals, reptiles, octopuses (which are a type of mollusk), and aliens. The first three have, here on earth, taken different evolutionary paths, and so their brains are not that similar. (Of course, the brain of a cat and the brain of a primate are not that similar either, but since they are both mammals and share an evolutionary history, their brains are more similar to each other than either is to a reptile or a mollusk.) Still, mammals, reptiles, and octopuses can all experience pain. And an alien will have yet another type of brain but can still, presumably, experience pain.
Putnam's response to this observation was to suggest that pain and all other mental states should not (and, in fact, could not) be defined as cellular or molecular or chemical states of the brain. Rather, they should be defined in terms of how they function. Pain is not "c-fibers firing" (to use a popular example in the philosophical literature). It is rather the mental state that causes me to say "ouch" and pull back from the stimulus causing the pain.
Before going any further, let's think a little more about the difference between function and structure. Sitting on my desk is a pen that is mostly made of plastic. The cylinder is clear, the cap is blue, inside the cylinder is a thin tube filled with blue ink, and, at the tip of the pen, is a small ball made of tungsten carbide. Those features of the pen (a cylinder a little over 5 inches long and a quarter of an inch in diameter, a small ball at one end, and so forth) are structural features. Without them, the pen wouldn't exist. But what makes a pen a pen is the function (or the task or the job) that it performs, not its specific structural features. It has only one function, facilitating the manual application of ink to a surface, but various structures can perform this function. Instead of plastic, a pen can be made of metal, reed, or a large feather. Instead of a quarter inch in diameter, it can be wider or narrower. Instead of a ball at one end, it can have a nib (as fountain pens do), a felt tip, or the sharpened end of a feather. But whatever its structure, as long as it performs the correct function, it's a pen. That insight, that certain things are defined in terms of their function, is the core idea for this theory of the mind, which is called, appropriately enough, functionalism.
So, if I tried to define a pen as a clear plastic cylinder, a little over 5 inches long, a quarter of an inch in diameter, with a small tungsten carbide ball at one end, and blue ink stored in a tube inside the cylinder, that definition would be incorrect. It's a description of some pens, but since it leaves out many other instruments that count as pens, it fails as a definition. In other words, pens are multiply realized; that is, there are multiple structures that realize (or instantiate) the function of facilitating the manual application of ink to a surface. Similarly, at least according to Putnam and many other philosophers, pain and all other mental states are multiply realized. Pain can be instantiated in the mammalian brain, in the reptile brain, in the mollusk brain, and even, presumably, in the alien brain. Consequently, if we try to define pain as a particular sort of activity in the human brain, that definition will be wrong because it leaves out the other kinds of neural activity that are pain in other species.
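For readers who find a programming analogy helpful, here is a small, purely illustrative sketch in Python (the names and details are my own invention, not part of the argument): anything that can perform the pen's function counts as a pen, however it happens to be built.

    from typing import Protocol

    class Pen(Protocol):
        """Functionally defined: a pen is anything that can apply ink to a surface."""
        def apply_ink(self, surface: str) -> str: ...

    class BallpointPen:
        """One structural realization: plastic cylinder, tungsten carbide ball."""
        material = "plastic"
        def apply_ink(self, surface: str) -> str:
            return f"blue ink rolled onto {surface}"

    class QuillPen:
        """A very different structure that realizes the same function."""
        material = "feather"
        def apply_ink(self, surface: str) -> str:
            return f"ink scratched onto {surface}"

    def sign(document: str, pen: Pen) -> str:
        # All that matters here is the functional role, not the structure.
        return pen.apply_ink(document)

    print(sign("the contract", BallpointPen()))
    print(sign("the contract", QuillPen()))

The sign function cares only about what the object can do, not what it is made of; that is the multiple-realization point in miniature.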
Weâll return to functionalism, but letâs clarify one more thing first. By mental states, we mean, for instance, beliefs, desires (i.e., wants), thoughts, ideas (although thoughts and ideas may be the same as beliefs), intentions, sensations, and emotions. Most mental states, although not all, have content. For instance, my belief that today is Thursday is a belief that has the content today is Thursday. Similarly, my desire that it snow this weekend is a desire with the content it snow this weekend.
Functionalism holds that mental states are functional states. It is still a version of monism, though, so it agrees that, just as a pen has to be instantiated in some physical object, mental states are instantiated in the brain. But, according to functionalism, we don't have to (in fact, it would be wrong to) define mental states as particular states of the brain. That gives functionalism a certain appeal. First, because it is a version of monism, it doesn't have any of the problems that dualism encountered. Second, it allows us to characterize the mind in a way that is very familiar to us. I feel (a mental state!) as though I have beliefs, desires, hopes, fears, and so forth. Furthermore, those mental states, just as Descartes said, seem to define who I am. I might be disappointed if the best theory of the mind told me that the mind is really just a series of neurons firing in the brain. Some people would be more than just disappointed. The philosopher Jerry Fodor, who along with Putnam was instrumental in developing functionalism, says at one point, "if it isn't literally true that my wanting is causally responsible for my reaching, and my itching is causally responsible for my scratching, and my believing is causally responsible for my saying . . . if none of that is literally true, then practically everything I believe about anything is false and it's the end of the world."[1] But luckily for Fodor, functionalism tells us that beliefs, desires, sensations, emotions, and so forth are real and have scientific credibility.
Consider another non-mental example, but one that is more complex than a pen. Here is a simplified description of how a car runs:
The fuel injector sprays a mixture of air and gasoline at the intake valve. The intake valve opens, letting the air-gas mixture into the cylinder chamber. The piston compresses the air-gas mixture. The spark plug emits a spark that ignites the gasoline, which drives the piston down the cylinder chamber. The downward movement of the piston turns the crankshaft, which turns the wheels.
Notice that each component in this system is described in terms of what it does (that is, its function), not what it is made of. For instance, based on this description, we have no idea what an intake valve looks like or how it is constructed. But that doesn't really seem to matter because we know that an intake valve is the component that lets the air-gas mixture pass from the fuel injector into the cylinder chamber. Letting the air-gas mixture into the cylinder chamber is the intake valve's function, and, perhaps, to understand what an intake valve is, that's all we need to know.
Now, a mental example:
I look at the clock and see that it is 6:00 pm. Seeing that it is 6:00 pm causes the belief that it is 6:00 pm, which causes the thought that it's time to stop working and the desire for a beer. I already have the belief that there is a beer in the refrigerator, and so the belief that there is a beer in the refrigerator plus my desire for a beer cause me to get up and walk toward the kitchen.
The mental states are the belief that it is 6:00 pm, the thought that it's time to stop working, the desire for a beer, and the belief that there is a beer in the refrigerator. According to functionalism, this little scenario is the way that we define these mental states. The desire for a beer, at least for me, is the mental state that is caused by the belief that it is 6:00 pm and causes the action: walking into the kitchen. That's a functional characterization of my desire for a beer.
Of course, to be made complete, the description would have to include all of the mental states that can cause the desire for a beer and all of the other mental states and actions that this desire causes. It would also have to include the caveat that the belief that it is 6:00 pm won't always cause the desire for a beer. I have to have the belief that I'm in the right environment for a beer and the belief that I don't have any more pressing work. There might also be other things about my history (for instance, having tasted beer before) that contribute to the desire for a beer. And once I have the desire for a beer, I won't walk into the kitchen if I have the belief that there isn't a beer in the refrigerator.
The full story for that desire and for all my other mental states is going to get quite complicated, but if we wanted to do the work, functionalism provides the framework for explaining the entire mind. Since each mental state is defined by what causes it and what it causes, all that is needed for a complete description of the mind is a description of every mental state's causal interactions with inputs from the environment, other mental states, and our reactions and behaviors.
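To give a rough, informal picture of what such a framework might look like, here is a toy sketch of the beer scenario in Python (again, purely illustrative and far simpler than any real account would be): each mental state is listed with the states and actions it causes, and tracing the network from a perception reproduces the chain described above.

    # A toy causal network for the beer scenario. On the functionalist picture,
    # each mental state is identified purely by its place in such a network:
    # what causes it and what it in turn causes.
    CAUSES = {
        "seeing that it is 6:00 pm": ["belief that it is 6:00 pm"],
        "belief that it is 6:00 pm": ["thought that it's time to stop working",
                                      "desire for a beer"],
        "desire for a beer": ["walking to the kitchen"],  # given the belief about the refrigerator
    }

    def trace(state, depth=0):
        """Print a state and everything it causes, directly or indirectly."""
        print("  " * depth + state)
        for effect in CAUSES.get(state, []):
            trace(effect, depth + 1)

    trace("seeing that it is 6:00 pm")

A complete description would, as noted above, also have to include the beliefs that enable or block each link; this sketch shows only the bare causal skeleton.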
5. Functionalism: Consequences and a Problem
It's not a coincidence that functionalism was developed and gained momentum at the same time that electronic computers were becoming widely used. Functionalism is often explained by analogy with computer programs, which are also functionally described processes for generating outputs in response to inputs. Given this theory of the mind, then, we have a straightforward answer to the question Can a computer or a robot have a mind? The answer is yes. If the mind is just a series of functionally defined internal states, then not only can a computer have a mind, but our minds are essentially just programs. There are problems with the theory, however, and we'll examine one of them now.
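Before turning to that problem, the program analogy can be made a little more concrete with a toy machine (an illustrative sketch of my own, not anything drawn from the functionalist literature itself): its internal states are defined entirely by what they do, that is, by which outputs they produce and which states they cause next in response to inputs.

    # A toy state machine: a coin-operated turnstile. Each internal state is
    # characterized only by how it responds to inputs -- what it outputs and
    # which state it moves to next.
    TRANSITIONS = {
        ("locked",   "coin"): ("unlocked", "click"),
        ("locked",   "push"): ("locked",   "buzz"),
        ("unlocked", "push"): ("locked",   "clunk"),
        ("unlocked", "coin"): ("unlocked", "refund"),
    }

    def run(inputs, state="locked"):
        outputs = []
        for symbol in inputs:
            state, output = TRANSITIONS[(state, symbol)]
            outputs.append(output)
        return outputs

    print(run(["coin", "push", "push"]))  # ['click', 'clunk', 'buzz']

Whether we call the internal states "locked" and "unlocked" or simply s1 and s2 makes no difference to how the machine behaves; on the functionalist picture, the same goes for the labels we attach to mental states, a point the discussion below returns to.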
Earlier, I said that the mental state that is caused by the belief that it is 6:00 pm and causes walking to the kitchen is the desire for a beer. Similarly, the mental state that is caused by seeing that it is 6:00 pm and causes the desire for a beer is the belief that it is 6:00 pm. We can diagram those interactions, with arrows indicating "causes", this way:
seeing that it is 6:00 pm → the belief that it is 6:00 pm → desire for a beer.
The mental state in the middle would still be the same mental state if I had called it anything else or simply labeled it x. For instance, in this process:
seeing that it is 6:00 pm → x → desire for a beer
x is still caused by the same perception and it still causes the same desire. According to functionalism, that's all that there is to the belief that it is 6:00 pm, or whatever we want to call it.
Functionalism embraces the implication that a robot could have all of the parts of this process: seeing that it is 6:00 pm (which is just a perception), the belief that it is 6:00 pm (which is just the mental state that is caused by that perception), the desire for a beer (which is a mental state caused by that belief), and walking into the kitchen (which is an action caused by that desire). It may seem a little odd to say that a robot can have the belief that it is 6:00 pm or the desire for a beer, but functionalism may be right that those mental states are nothing more than how they function in that process (and maybe in other processes in the mind). If that's so, then it's clear that a robot could have those mental states in its robot mind.
Now, consider the following. Let's say that as I'm entering the kitchen to get my beer, I hit my elbow on the door frame. This causes pain, which causes me to utter "ouch!" The mental state here is pain. It is caused by hitting my elbow against the door frame and causes the utterance "ouch!" Again, we can diagram this process this way:
hitting elbow → pain → "ouch!"
But unlike the belief that it is 6:00 pm, for pain, there seems to be more to the mental state than just what causes it and what it then causes. There is also, as we said earlier, a certain kind of experience that accompanies this mental state. A robot could have a mental state that is caused by hitting its elbow on a door frame and which causes it to say "ouch!" But our intuition is that the robot isn't going to have the experience of pain, or any experience at all, for that matter.
The problem, then, for functionalism is that this theory doesn't have an obvious way of characterizing conscious experience. Philosophers have worked to correct this by modifying the theory. One way of doing this is by introducing second-order beliefs. A second-order belief is a belief about another mental state, and it is in virtue of having a belief about another mental state that the latter becomes a conscious mental state. In other words, mental states become conscious when we think about them.
Take a belief that, unless you are standing, I am sure that you have right now: the belief that the chair you are sitting in will hold you. Your behavior gives away that you have it. If you didn't have this belief, then, unless you were feeling especially daring, you wouldn't be sitting in that chair. But until you read the last three sentences, the belief that the chair will hold you was an unconscious belief. It was residing somewhere in your mind outside of consciousness. Now, however, you have a belief about that belief. That is, right now, you have this second-order belief: the belief that you have "the belief that the chair will hold you", and because you have that second-order belief, the belief that the chair will hold you has become a conscious belief.
Similarly, I can have an unconscious desire for a beer, but when I think about that desire, it becomes a conscious desire. In other words, once I form a second-order belief about the desire for a beer, that desire enters consciousness. And let's say that it is the second-order belief about this desire (the belief that I have "a desire for a beer") that causes walking to the kitchen. Although it makes functionalism more complex, the idea that thinking about a mental state is what makes that mental state conscious seems to make sense.
But, while this modification to functionalism works for beliefs and desires, it's a little bit more of a stretch for a sensation like pain. One issue is whether pain can even be an unconscious mental state. There may be times when someone should be in pain. But if he or she is not having the conscious experience of pain, then, apparently, there just isn't any pain. But that aside, while we might have to think about a belief for that belief to become conscious, pain appears to be much more direct and immediate. It doesn't seem quite right to say that we can only have the conscious experience of pain when we have this second-order belief: the belief that I am in pain.
Of course, what doesn't seem quite right sometimes turns out to be true. But there is also a more significant problem here. The initial charge was that functionalism couldn't account for consciousness. A creature, such as a robot, that lacked consciousness could have all of the mental states described in the original version of functionalism. Introducing second-order beliefs doesn't change that at all. A robot could have second-order beliefs just as well as it could have any other mental states. Just as it can have the belief that it is 6:00 pm, it can have the belief that it has "the belief that it is 6:00 pm". But simply by having second-order beliefs, it wouldn't become a conscious creature. Or, put another way, second-order beliefs don't explain conscious experience, and there's nothing about second-order beliefs that would make it impossible for an unconscious creature that had other mental states to have them.
There are other ways that philosophers have attempted to incorporate conscious experience into functionalism, but, despite these attempts, functionalism just doesn't seem equipped to explain consciousness. Nevertheless, many philosophers, psychologists, and cognitive scientists still consider functionalism a viable theory. In recent years, however, two other theories about the mind have gained momentum.
-
1. Fodor, J. (1989). Making mind matter more. Philosophical Topics, 17, p. 77.