A virtual assistant, simply put, is a software agent that responds to commands or instructions from a human being and performs tasks accordingly. It eliminates the need to press buttons or flip switches to operate a device and get a particular task done. Several kinds of virtual assistants are available today, but the one that became truly popular and paved the way for many others was Siri.
Launched by Apple in 2011, Siri was designed to use voice queries, focus-tracking, gesture-based controls, and a natural-language user interface to answer questions, make recommendations, and carry out tasks based on the user's instructions. The more time a user spends with Siri, the better it adapts to that user's voice commands.
Apple has always looked for ways to improve its products over time. While Siri remains a popular virtual assistant, it faces competition from rival assistants developed by other companies, so it is all the more important for the global tech giant to keep updating Siri and ensure it stays relevant.
Apple is now researching techniques that could help Siri better understand commands from people with atypical speech, including those who stutter. According to a report in The Wall Street Journal, the company's internal team has been conducting extensive research on this for a while.
Apple has compiled as many as 28,000 audio clips from podcasts featuring individuals who stutter. This content will be used to train Siri to understand instructions given by people with atypical speech. According to an Apple spokesperson, the data the company has gathered will help improve voice recognition systems for atypical speech patterns.
Apart from this initiative, Apple has also developed a Hold to Talk feature that lets users control how long Siri listens to them. This eliminates the possibility of Siri cutting off individuals with a stutter before they have finished issuing a command or asking a question. Apple had earlier introduced a Type to Siri feature in iOS 11, which gave people the option of interacting with the virtual assistant without speaking at all.
Apple is planning to publish a research paper detailing its efforts to improve Siri. The paper is expected to be released sometime this week.
Meanwhile, Google and Amazon are also exploring ways to train Google Assistant and Alexa so that they can understand their diverse range of users more effectively.