BA in Psychology & Arabic, 2013, University of St. Thomas, Minnesota
An important aspect of communicating a message is choosing an appropriate combination of words. Much work in language production has focused on how we access and combine words (or signs). I’m interested in how words compete and are selected to convey a message. For example, when rushed, do speakers systematically produce less accurate but more readily available words? The hypothesis I’m pursuing is that our general experience using words may bias word selection, sometimes against the needs of a particular message. Such an account would provide cognitive evidence for why we don’t always say exactly what we mean.
Two important sources of information for how we order our words are our lifelong experience with language (for example, English sentences tend to be ordered subject-verb-object) and the more recent things we’ve heard or said. For example, hearing passive sentences will encourage you to produce your next sentence in the passive voice. I’m part of a large collaborative project examining how similar these planning effects are to those found in non-linguistic complex plans, such as ordering a series of actions in service of a goal like reaching for or touching an object. Our preliminary work suggests that when multiple options are available, the planning of both touching a series of dots and saying a sentence is similarly biased by previous experience.
Much of my recent work has focused on the factors that push you to choose one set of words over another for the same message. Words vary in how well they capture your intended message. For example, you might refer to a cat as kitty, Whiskers, or the little furball. Some labels might be more appropriate for what you’re trying to convey, and some might be easier to generate. How do these competing factors interact? By testing how much speakers are willing to compromise the accuracy of what they say for the ease of saying it, we can get a sense of which parts of speaking are more carefully planned, or more important to the speaker.
Past Research Projects
My first project with Maryellen looked at how the properties of words themselves affect the structure of what is said. We asked native speakers of English to communicate silently using gestures of their own invention. One way communicating with your hands differs from communicating with your mouth is that a single gesture can visually depict a complex relationship that would normally require a combination of spoken words. For example, imagine gesturing (compared to verbally describing) a person throwing a football. We showed participants 20 scenes depicting events of similar complexity (a person doing something with or to an object), and most participants spontaneously invented a single complex gesture depicting these relationships. While these gestured productions lacked many properties of language, they suggest that the communicative task of describing the relationship between multiple entities, such as a person acting on an object, does not require a word for each if the language system can represent the complex information in one fell swoop.
At the University of St. Thomas I explored discourse factors that affect the frequency of co-speech gesture. We wanted to know how co-speech gesture varies within a conversation as a function of listener feedback, and we found that listener requests for repair elicited more co-speech gestures than listener affirmations did.