Mark Koranda

Graduate Student

BA in Psychology & Arabic, 2013, University of St. Thomas, Minnesota


Research Interests

In my work with Maryellen MacDonald, I've been interested in how the surface properties of communicating (hands vs. voice, recently used words and their sounds, sentence structures) influence what you say. For example, the sound of the last word you said will affect which word you say next, and what the hands are good at depicting will affect how you gesture an event. We've looked at sentence planning using picture-naming and silent-gesture paradigms, and at word choice using artificial-language and picture-word naming paradigms. These factors matter because they are present nearly every time we communicate: they may explain why we sometimes speak imprecisely, and they may contribute to why words' meanings shift over time.


Research has shown that many signs obey "structural mappings" between meaning and hand shape (iconicity). To what extent does this shape non-signers' intuitions when they invent gesture-only messages (the silent gesture paradigm)? We asked non-signers to make up gestures to communicate events shown in pictures. Some events afforded clear structural mappings, while others did not; critically, the events that did not were semantically related to ones that did. For the latter events, non-signers produced structurally mapped gestures that undermined message accuracy, showing a compromise between message accuracy and linguistic form.

Along with other collaborators, we've examined the extent to which serial-order planning of linguistic elements (producing syntactic structure) shares abstract planning resources with motor action planning (executing a series of motor tasks).

More recently I've been studying cognitive biases in production choices at the lexical level, in two different ways. In one line of work (with Maryellen MacDonald and Martin Zettersten), participants learn an 8-word artificial language in order to help elves hunt for gold. The words vary in frequency during both learning and use. In timed, free-response trials, participants use highly frequent words instead of more accurate, known, but less frequent words.

In another paradigm, I'm interested in how phonological properties shape word choice. We're using an adapted picture-word interference paradigm to show that participants choose a label less often when the word on the preceding trial has a high degree of onset overlap with it.

As an extension of a class project, I'm collaborating on a computational phonology project with Eric Raimy and Calvin Kosmatka (UW-Madison Linguistics). Even the smallest units of sound in a language (e.g., phonemes) are composed of multiple phonetic features, and theories differ in which features are relevant. Given a theory of sounds, what minimal combinations of features can successfully distinguish the sounds of a language? Along with my collaborators, I developed a Python program that generates all possible combinations given a theory and the known sounds of a language. This approach lets us demonstrate the relative importance of some features and the irrelevance of others, as well as the strengths and weaknesses of phonological theories.
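The core idea can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions, not the actual program: the toy feature theory below (three labial sounds described by invented binary feature values) is hypothetical, and `minimal_feature_sets` simply enumerates feature subsets smallest-first, keeping those whose value tuples distinguish every sound and that contain no smaller distinguishing subset.

```python
from itertools import combinations

# Hypothetical toy theory: each sound maps to binary feature values
# (illustrative only; a real theory would cover a full inventory).
theory = {
    "p": {"voice": 0, "nasal": 0, "labial": 1},
    "b": {"voice": 1, "nasal": 0, "labial": 1},
    "m": {"voice": 1, "nasal": 1, "labial": 1},
}

def distinguishes(features, theory):
    """True if this feature subset gives every sound a unique value tuple."""
    profiles = {tuple(spec[f] for f in features) for spec in theory.values()}
    return len(profiles) == len(theory)

def minimal_feature_sets(theory):
    """Enumerate feature subsets smallest-first, keeping each subset that
    distinguishes all sounds and contains no smaller distinguishing subset."""
    all_features = sorted(next(iter(theory.values())))
    found = []
    for size in range(1, len(all_features) + 1):
        for combo in combinations(all_features, size):
            if distinguishes(combo, theory) and not any(
                set(f) <= set(combo) for f in found
            ):
                found.append(combo)
    return found

print(minimal_feature_sets(theory))  # → [('nasal', 'voice')]
```

In this toy inventory, [labial] does no distinguishing work (every sound shares it), while [nasal] and [voice] together suffice, which is the kind of relative-importance result the paragraph describes.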

Personal Interests

I take photographs of abandoned buildings, enjoy writing about identities such as deafhood and military life, and like rearranging the furniture in my house.