Developmental Perspectives on Speech Planning and Production
Abstract: It is easy to identify young children’s voices in a crowd: they are typically higher-pitched than those of adults, but their speech may also lack grammatical morphemes, contain more pauses, and proceed at a slower speaking rate. These differences suggest that children may plan their speech differently from the way adults do, and raise questions about how their speech planning processes develop and change over time. However, most models of speech planning assume an adult language processor; there has been little attempt to model connected speech processes in children. Recent adult research shows that speech initiation time and the size of (prosodic) planning units can differ as a function of task, suggesting that even adult speakers may not always plan their speech in the same way for the same utterance. These findings present both a challenge and an opportunity for exploring similar issues in child speech. This talk thus presents preliminary findings regarding aspects of children’s speech planning processes, with the goal of moving toward a developmental model of speech planning and production.
Presenter: Dr. Katherine Demuth is Distinguished Professor in Linguistics at Macquarie University, an ARC Laureate Fellow, and a member of both ASSA and RSNSW. She is the Director of the Child Language Lab at Macquarie University, a member of its Centre for Language Sciences (CLaS), and a member of the ARC Centre of Excellence for Cognition and its Disorders (CCD). Her research focuses on children's acquisition of language, including studies of perception, production, and processing. She is especially interested in the development of phonological, morphological, and syntactic representations in bilinguals, children with language learning disorder, and those with hearing loss. Much of her work has been crosslinguistic, using acoustic/phonetic and phonological insights from the structure of different languages to better understand the mechanisms underlying the developmental process of language acquisition.
Vocal Biomarkers of Neurological Conditions Based on Motor Timing and Coordination
Abstract: Toward the goal of noninvasive, objective means to detect and monitor psychological, neurodegenerative, and neurotraumatic conditions, MIT Lincoln Laboratory is developing vocal biomarkers that reflect changes in brain functioning as manifested in motor control. Specifically, vocal features are based on the timing and coordination of articulatory components of vocal expression, motivated by the hypothesis that these relations are associated with neural coordination across different parts of the brain essential to speech motor control. Timing- and coordination-based features are extracted using behavioral measures from the acoustic signal and from associated facial measures during speaking, as well as from neurocomputational models of speech production. These models are governed by motor control parameters that are constrained by neurophysiology and that may complement acoustic-derived features. This presentation gives the foundation for extracting our vocal features and illustrates the use of these markers with an example from each of the three application areas above: major depressive disorder, Parkinson’s disease, and mild traumatic brain injury. The measurement and modeling framework may provide a common neurophysiological feature basis for detecting and monitoring neurological disease from speech, while potentially providing features to distinguish between disorders and to monitor and predict the effects of treatments.
Presenter: Dr. Thomas F. Quatieri received the B.S. degree (summa cum laude) from Tufts University, and the S.M., E.E., and Sc.D. degrees from the Massachusetts Institute of Technology (MIT). He is currently a Senior Member of the Technical Staff with MIT Lincoln Laboratory, Lexington, involved in bridging human language and bioengineering research and technologies. He holds a faculty appointment in the Harvard-MIT Speech and Hearing Bioscience and Technology Program. His interests include speech and auditory signal processing and neuro-biophysical modeling for biomedical and biometric applications, with a focus on multi-modal recognition and monitoring of neurological and cognitive stress disorders. He is the author of the textbook Discrete-Time Speech Signal Processing: Principles and Practice (Prentice-Hall, 2001) and has developed the MIT graduate course Digital Speech Processing. He is active in advising graduate students on the MIT and Harvard campuses. Dr. Quatieri is a recipient of four IEEE Best Paper Awards, including the IEEE W.R.G. Baker Award, as well as the 2010 MIT Lincoln Laboratory Best Paper Award for an IEEE Transactions on Audio, Speech, and Language Processing article. Dr. Quatieri is a member of the team that won the 2004 MIT Lincoln Laboratory Team Award for excellence in speech research and technology transfer, and he led the MIT Lincoln Laboratory team that won the 2013 and 2014 AVEC Depression Challenges, as well as the 2015 MIT Lincoln Laboratory Team Award for vocal and facial biomarkers. He has served on the IEEE Digital Signal Processing Technical Committee, the IEEE Speech and Language Technical Committee, and the IEEE James L. Flanagan Speech and Audio Awards committee. He has also served as Associate Editor for the IEEE Transactions on Signal Processing in the area of nonlinear systems and is currently an associate editor of Computer Speech and Language in the area of neurological disorders. Dr. Quatieri is a Fellow of the IEEE and a member of Tau Beta Pi, Eta Kappa Nu, Sigma Xi, ISCA, ARO, and ASA.
Dialogues, Data and Daily Activities: Research on Socially Intelligent Robots
Abstract: Several humanoid robot agents have appeared in recent years, enabling potentially useful applications for everyday tasks in healthcare assistance, home services, nursing, caregiving, education, etc. In these tasks, dialogue capability is an important part of the robot's functionality: the robot can provide useful information, chat about interesting topics, and instruct human users in natural language. The robot appears as a social agent which acts and interacts in the physical world. However, many practical challenges remain to be solved before robots become fluently conversing companions that can act appropriately in real-world situations. In this talk I will focus especially on dialogue modeling to enable spoken interaction between users and social robots. I will survey some of the important issues in expanding the robot's interaction capabilities with knowledge about human activities and multimodality, and exemplify them with a robot application for care-taking tasks. I will also present our work on situational awareness and eye-tracking experiments related to human-robot interaction. I will conclude the talk with technological and conversational challenges brought forward by the robot's dual character as an elaborated computer on the one hand and an autonomous agent on the other.
Presenter: Dr. Kristiina Jokinen is Senior Researcher at the AI Research Center (AIRC) at AIST Tokyo Waterfront. Before joining AIRC, she was Professor and Project Manager at the University of Helsinki and at the University of Tartu. She received her PhD from UMIST, Manchester, and was awarded a JSPS Fellowship for research at NAIST, Japan. She was Invited Researcher at ATR Research Labs in Kyoto, and Visiting Professor at Doshisha University in Kyoto in 2009-2010. She was Nokia Foundation Fellow at Stanford in 2006, and is a Life Member of Clare Hall at the University of Cambridge. Her research focuses on spoken dialogue systems, corpus analysis, and cooperative and multimodal human-robot communication. She has published widely on these topics, including three books. Together with G. Wilcock she developed the WikiTalk open-domain dialogue application for social robots. She has had a leading role in multiple national and international cooperation projects. She served as General Chair for SIGDial 2017 and ICMI 2013, and Area Chair for Interspeech 2017 and COLING 2014. She organised IWSDS 2016, the northernmost dialogue conference, in Lapland, and edited the Springer book "Dialogues with Social Robots" (LNEE 427). She is on the organising committee for the workshop "AI and Multimodal Human-Robot Interaction" at IJCAI-ECAI 2018.