

Education

1974, BA/BSc — St. John’s College (Annapolis, MD)
1976, Cert. — Anthropology Film Centre (Santa Fe, NM)
1978, MA — Indiana University (Bloomington, IN)
1987, PhD — Indiana University (Bloomington, IN)

Research Interests

I am interested in the structure, organization, and function of communicative expression in any form, but particularly language and music. My research focuses on quantitative analysis of the observable attributes of producing and perceiving spoken and musical performance. In particular, I am interested in computing the ubiquitous but time-varying coordination that exists within and between interacting individuals and that is necessary for establishing and maintaining perceptibly stable and meaningful patterns of behavior. Adriano V. Barbosa and I have developed non-invasive techniques for assessing communicative expression using measures of visible motion derived from video, enabling us to acquire the same quality of data in the field as in the lab. [COD lab site under construction]

Recent Teaching

COGS 200, 300, 303, 401, 402
LING 100, 314, 447, 507, 508, 518, 530

Selected Publications


  1. Burton, S., Déchaine, R.-M., & Vatikiotis-Bateson, E. (2012). Linguistics for dummies. Toronto: John Wiley & Sons Canada. 336 pp.

Auditory-Visual Speech Processing

  1. Bailly, G., Perrier, P., & Vatikiotis-Bateson, E. (Eds.). (2012). Advances in audio-visual speech processing. Cambridge: Cambridge University Press. 506 pp.
  2. Vatikiotis-Bateson, E., & Kuratate, T. (2012). Audiovisual speech processing: Progress and prospects. Acoustical Science and Technology, 33(3), 135-141.

Animation and synthesis of talking heads

  1. Kuratate, T., Vatikiotis-Bateson, E., & Yehia, H. C. (2005). Estimation and animation of faces using facial motion mapping and a 3D face database. In J. G. Clement & M. K. Marks (Eds.), Computer-graphic facial reconstruction (pp. 325-346). Amsterdam: Academic Press.
  2. Rubin, P., & Vatikiotis-Bateson, E. (1998). Talking heads. In D. Burnham, J. Robert-Ribes, & E. Vatikiotis-Bateson (Eds.), International Conference on Auditory-Visual Speech Processing – AVSP’98 (pp. 231-235). Terrigal, Australia. see [Talking Heads website](http://www.haskins.yale.edu/featured/heads/heads.html)
  3. Rubin, P., Fels, S., & Vatikiotis-Bateson, E. (2011). Gesture to Gesture: creating the gesture-based articulatory synthesizer of the future. In S. Fels & N. d’Allessandro (Eds.), First International Conference on Performative Speech and Singing Synthesis – P3S 2011 (pp. 119-134). Vancouver, BC.
  4. Vatikiotis-Bateson, E., Kroos, C., Kuratate, T., Munhall, K. G., & Pitermann, M. (2000). Task constraints on robot realism: The case of talking heads. In K. Kamejima (Ed.), 9th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2000) (pp. 352-357). Osaka, Japan: IEEE.
  5. Vatikiotis-Bateson, E., Kroos, C., Kuratate, T., Munhall, K. G., Rubin, P., & Yehia, H. C. (2000). Building talking heads: Production based synthesis of audiovisual speech. Paper presented at the Humanoids 2000 — First IEEE-RAS International Conference on Humanoid Robots, Cambridge, MA.
  6. Yehia, H. C., Kuratate, T., & Vatikiotis-Bateson, E. (2002). Linking facial animation, head motion, and speech acoustics. Journal of Phonetics, 30(3), 555-568.

Audiovisual “brain function” studies

  1. Callan, D. E., Jones, J. A., Munhall, K., Callan, A. M., Kroos, C., & Vatikiotis-Bateson, E. (2004). Neural processes underlying perceptual enhancement by visual speech gestures. NeuroReport, 14(17), 2213-2218.
  2. Callan, D. E., Jones, J. A., Munhall, K. G., Kroos, C., Callan, A. M., & Vatikiotis-Bateson, E. (2002). Mirror neuron system activity and audiovisual speech perception. Paper presented at the Eighth International Conference on Functional Mapping of the Human Brain.
  3. Callan, D. E., Jones, J. A., Munhall, K. G., Kroos, C., Callan, A. M., & Vatikiotis-Bateson, E. (2004). Multisensory Integration Sites Identified by Perception of Spatial Wavelet Filtered Visual Speech Gesture Information. Journal of Cognitive Neuroscience, 16(5), 805–816.

Audiovisual perception and production

  1. Abel, J., Barbosa, A. V., Black, A., Mayer, C., & Vatikiotis-Bateson, E. (2011). The labial viseme reconsidered: Evidence from production and perception. In Y. Laprie & I. Steiner (Eds.), 9th International Seminar on Speech Production (ISSP) (pp. 337-344). Montreal, PQ.
  2. Fais, L., Cass, B., Leibowich, J., Barbosa, A. V., & Vatikiotis-Bateson, E. (2012). Here’s looking at you, baby: What gaze and movement reveal about minimal pair word-object association at 14 months. Journal of Laboratory Phonology, 3(1), 91-124.
  3. Lander, K., Hill, H., Kamachi, M., & Vatikiotis-Bateson, E. (2007). It’s not what you say but the way you say it: Matching faces and voices. Journal of Experimental Psychology: Human Perception and Performance, 33(4), 905-914.
  4. Munhall, K. G., Jones, J. A., Callan, D. E., Kuratate, T., & Vatikiotis-Bateson, E. (2004). Visual prosody and speech intelligibility: Head movement improves auditory speech perception. Psychological Science, 15(2), 133-137.
  5. Munhall, K. G., Jozan, G., Kroos, C., & Vatikiotis-Bateson, E. (2004). Spatial frequency requirements for audiovisual speech perception. Perception & Psychophysics, 66(4), 574-583.
  6. Munhall, K. G., & Vatikiotis-Bateson, E. (1998). The moving face during speech communication. In R. Campbell, B. Dodd & D. Burnham (Eds.), Hearing by Eye, Part 2: Advances in the psychology of speechreading and auditory-visual speech (pp. 123-139). Sussex: Taylor & Francis – Psychology Press.
  7. de Paula, H., Yehia, H. C., Shiller, D., Jozan, G., Munhall, K. G., & Vatikiotis-Bateson, E. (2006). Analysis of audiovisual speech intelligibility based on spatial and temporal filtering of visual speech information. In J. Harrington & M. Tabain (Eds.), Speech production: Models, phonetic processes, and techniques (pp. 135-147). London: Psychology Press.
  8. Vatikiotis-Bateson, E., Eigsti, I.-M., Yano, S., & Munhall, K. G. (1998). Eye movement of perceivers during audiovisual speech perception. Perception & Psychophysics, 60(6), 926-940.
  9. Vatikiotis-Bateson, E., & Munhall, K. G. (2012). Empirical perceptual-motor linkage of multimodal speech. In G. Bailly, P. Perrier & E. Vatikiotis-Bateson (Eds.), Advances in auditory and visual speech perception (pp. 346-367). Cambridge, UK: Cambridge University Press.
  10. Vatikiotis-Bateson, E., & Yehia, H. C. (2002). Speaking mode variability in multimodal speech production. IEEE Transactions on Neural Networks, 13(4), 894-899.
  11. Yehia, H. C., Rubin, P. E., & Vatikiotis-Bateson, E. (1998). Quantitative association of vocal-tract and facial behavior. Speech Communication, 26, 23-44.

Computational modeling of motor control

  1. Hirayama, M., Vatikiotis-Bateson, E., Honda, K., Koike, Y., & Kawato, M. (1993). Physiologically based speech synthesis. In S. J. Hanson, J. D. Cowan & C. L. Giles (Eds.), Advances in Neural Information Processing Systems (Vol. 5, pp. 658-665). San Mateo, CA: Morgan Kaufmann Publishers.
  2. Hirayama, M., Vatikiotis-Bateson, E., & Kawato, M. (1994). Inverse dynamics of speech motor control. In S. J. Hanson, J. D. Cowan & C. L. Giles (Eds.), Advances in Neural Information Processing Systems (Vol. 6, pp. 1043-1050). San Mateo, CA: Morgan Kaufmann Publishers.
  3. Vatikiotis-Bateson, E., & Kelso, J. A. S. (1993). Rhythm type and articulatory dynamics in English, French, and Japanese. Journal of Phonetics, 21, 231-265.
  4. Vatikiotis-Bateson, E., & Ostry, D. J. (1995). An analysis of the dimensionality of jaw motion in speech. Journal of Phonetics, 23, 101-117.
  5. Wada, Y., Koike, Y., Vatikiotis-Bateson, E., & Kawato, M. (1995). A computational theory for movement pattern recognition based on optimal movement pattern generation. Biological Cybernetics, 73, 15-25.

Time-varying coordination in communicative behavior

  1. Barbosa, A. V., Yehia, H. C., & Vatikiotis-Bateson, E. (2008). Linguistically Valid Movement Behavior Measured Non-Invasively. In R. Goecke, P. Lucey & S. Lucey (Eds.), Auditory and Visual Speech Processing — AVSP08 (pp. 173-177). Moreton Island, Australia: Causal Productions.
  2. Barbosa, A. V., Déchaine, R.-M., Vatikiotis-Bateson, E., & Yehia, H. C. (2012). Quantifying time-varying coordination of multimodal speech signals using correlation map analysis. Journal of the Acoustical Society of America, 131(3), 2162-2172.
  3. Kozima, H., & Vatikiotis-Bateson, E. (2001). Communicative criteria for processing time/space-varying information. In P. Coiffet (Ed.), 10th IEEE International Workshop on Robot and Human Communication (ROMAN 2001) (pp. 377-382). Bordeaux-Paris: IEEE.
  4. Latif, N., Barbosa, A. V., Vatikiotis-Bateson, E., Castelhano, M. S., & Munhall, K. G. (2014). Movement coordination during conversation. PLoS ONE, 9(8), e105036.
  5. Oberg, M. A., Vatikiotis-Bateson, E., & Barbosa, A. V. (2013). Coordinating conversation through posture. Proceedings of Meetings on Acoustics (POMA), 19, 060046.
  6. Sharon, R., Fais, L., & Vatikiotis-Bateson, E. (2014). Wiggle room: How gestural parameters affect singer and audience cognition in Art Song performance. In M. Borkent, B. Dancygier, & J. Hinnell (Eds.), Language and the creative mind (pp. 347-373). Stanford: CSLI.
  7. Tiede, M., et al. (2012). Speech articulator movements recorded from facing talkers using two electromagnetic articulometer systems simultaneously. Proceedings of Meetings on Acoustics (POMA), 11, 060007-060009.
  8. Vatikiotis-Bateson, E. (2010). Coordination, concurrency, and synchrony in communication. Transactions of Instituto de Estudos Avançados Transdisciplinares – IEAT, 11, 1-40.
  9. Vatikiotis-Bateson, E., & Munhall, K. G. (2012). Time-Varying Coordination in Multisensory Speech Processing. In B. Stein (Ed.), The New handbook of multisensory processing (pp. 421-434). Cambridge, MA: MIT Press.
  10. Vatikiotis-Bateson, E., Barbosa, A. V., & Best, C. T. (2014). Articulatory coordination of two vocal tracts. Journal of Phonetics, 44, 167-181.
  11. Vatikiotis-Bateson, E., Oberg, M., Barbosa, A. V., McAllister, K., Hermiston, N., & Kurth, R. (2009). Postural entrainment by vocal effort in singing and speech. In J. Louhivuori, T. Eerola, S. Saarikallio, T. Himberg, & P.-S. Eerola (Eds.), Proceedings of the 7th Triennial Conference of the European Society for the Cognitive Sciences of Music (ESCOM 2009) (pp. 604-609). Jyväskylä, Finland.