Apple researchers are working on ways to teach Siri to determine if a speaker is stuttering and to compensate so the voice assistant can understand what they are saying. A new study showcases their work on designing AI training tools for spotting when someone speaks with a stutter. The goal is to build up a database of audio clips in which people with a stutter are speaking. That audio can then be fed into the automatic speech recognition models used by Siri and other voice assistants to teach them to recognize when someone with an atypical speech pattern is talking.