Although we use speech every day, researchers who study it know it’s much more than the words we choose. But when they look for technology to measure the acoustics of pronunciation, pitch, breathiness and other elements, their options are limited.
With a grant from the National Science Foundation, College of Allied Health Sciences Professor Suzanne Boyce, PhD, was able to expand the discussion around those options. Boyce used the grant to bring the field’s top software developers from around the world to a one-day workshop titled “Software to Empower Learning and Research in Speech (STELARIS): A Workshop for Developers and Teachers,” held on January 31.
The event, also attended by an international group of researchers and teachers specializing in speech, focused on making speech analysis technology more accessible in education.
“The problem is that most software that analyzes speech acoustics is designed in research labs for a specific research focus,” says Boyce, a professor of communication sciences and disorders.
“The research labs make it open source, but it isn’t user-friendly for the speech-language pathologists, neuroscientists, audiologists, linguists and engineers who study and teach about speech every day.”
Because commercial software is costly, she says, teachers are often forced to use research-focused software in their classes. With more accessible, user-friendly software, Boyce believes, more students will become interested in speech, teachers will teach it better and research in the field will improve.
“The ultimate aim is user-friendly software that students from high school to graduate levels can download themselves and work through exercises at their own pace without needing a lot of hands-on guidance,” she says.
Boyce says the workshop organizers—herself and collaborators from three other universities in Europe and the U.S.—received an “amazing” response from the participants, who are making plans to continue the discussion at future conferences.
Potential steps include having speech educators spend a sabbatical in a computer programming lab, where they can suggest simple ways to make programs applicable to the classroom—such as making it easier to adjust the parameters of a test or model.
“It’s not always obvious how to change those parameters in the software,” says Boyce, “but when you’re teaching, you want that to be very transparent for students. It allows them to get a glimpse into the guts of the concept and a glimpse into the guts of the computer program, without being computer programmers themselves.”
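The kind of transparency Boyce describes can be sketched in a few lines of code. The snippet below is a hypothetical classroom-style illustration (not software from the workshop): a simple autocorrelation pitch estimator whose analysis parameters—the pitch search bounds `fmin` and `fmax`—are plain keyword arguments a student can adjust and observe directly, rather than settings buried in a menu.

```python
import math

def estimate_pitch(samples, sample_rate, fmin=75.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) by autocorrelation.

    fmin and fmax are exposed so students can see how the search
    range for the period is derived from them.
    """
    # Convert the frequency bounds into a range of candidate lags (in samples).
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)

    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, len(samples) - 1) + 1):
        # Correlate the signal with a delayed copy of itself; the lag
        # with the strongest correlation approximates the pitch period.
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag

    return sample_rate / best_lag if best_lag else 0.0

# Synthesize a 200 Hz tone and check that the estimator recovers it.
sr = 16000
tone = [math.sin(2 * math.pi * 200 * n / sr) for n in range(1024)]
print(round(estimate_pitch(tone, sr)))  # 200
```

Changing `fmin` or `fmax` immediately changes which periods the estimator will even consider—exactly the kind of parameter a teacher might want students to experiment with on real voice recordings.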
The group has now established a website, which interested members are using as a medium of exchange, and it plans to meet again at the Interspeech Conference in Italy in October.
Uses for Speech Analysis Technology:
For speech-language pathologists: To evaluate voice and speech disorders and help patients track their progress.
For neuroscientists: To investigate how the brain processes speech.
For audiologists: To evaluate the effectiveness of hearing aids and cochlear implants.
For linguists: To study dialect and social differences in speech.
For engineers: In speech recognition systems and to improve speech transmission devices.