We're using this design to work out how people select the words they put into speech. True to form in the sciences, this is not nearly as simple as it sounds. It turns out there are quite a few stages in the speech production process. You have a semantic/conceptual stage, where concepts are abstract representations with no words or sounds attached yet (there is research suggesting that concepts are actually complex sensory representations). You then have a lexical stage, where the concept is attached to a kind of grammatical code called a lemma. The lemma then gets actual sounds attached to it at the phonological stage, at which point we think it is forwarded to motor areas to be turned into instructions for your vocal cords and the rest of the articulators to carry out. And of course all of this happens only after your low-level visual processes have dealt with the basic properties of what you perceive.
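If it helps to see the pipeline laid out, here is a toy sketch of the staged model just described: concept, then lemma, then phonological form, then off to motor planning. Everything in it (the concept labels, the lookup tables, the function name) is my illustrative assumption, not an actual model from the literature.

```python
from dataclasses import dataclass

@dataclass
class Lemma:
    word: str        # the word the concept maps onto
    word_class: str  # grammatical information the lemma carries

# Hypothetical lookup tables standing in for semantic and lexical memory.
CONCEPT_TO_LEMMA = {"FELINE_PET": Lemma("cat", "noun")}
LEMMA_TO_PHONOLOGY = {"cat": ["k", "ae", "t"]}

def produce(concept: str) -> list[str]:
    """Walk one concept through the lexical and phonological stages."""
    lemma = CONCEPT_TO_LEMMA[concept]          # lexical stage: select a lemma
    phonemes = LEMMA_TO_PHONOLOGY[lemma.word]  # phonological stage: attach sounds
    return phonemes                            # handed off to motor planning

print(produce("FELINE_PET"))  # ['k', 'ae', 't']
```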
Essentially, the PWI (picture-word interference) task throws a cognitive spanner in the works: by varying the relationship between the picture and the word (sometimes the word is related to the picture, sometimes not), we can probe the process and structure by which we select words and meanings. Happily, in recent years a few papers have generated an almighty stink amongst speech production researchers, so new research that tries to resolve matters to some extent is most welcome, and that is what I'm aiming to do.
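To make the manipulation concrete, here is a minimal sketch of what the two trial types look like; the specific pictures and distractor words are made-up examples of mine, not items from the actual experiment.

```python
# Each PWI trial pairs a to-be-named picture with a distractor word
# that is either semantically related or unrelated to it.
trials = [
    {"picture": "cat", "distractor": "dog",   "condition": "related"},
    {"picture": "cat", "distractor": "chair", "condition": "unrelated"},
]

for t in trials:
    # Comparing naming times across the two conditions is what lets us
    # infer where in the pipeline word selection happens.
    print(f"name '{t['picture']}' while ignoring '{t['distractor']}' "
          f"({t['condition']} condition)")
```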
Speaking of which, my work ethic awakens with a roar and a snarl, and so I should get back into it. See you next week!