“Siri, do I look sad?” may well become one of iPhone users’ oft-repeated queries. An Apple patent reveals that the company could be developing a way to add facial analysis to Siri, so that the assistant can better interpret a user’s request by reading the emotions tied to it.
Such a breakthrough in artificial intelligence has yet to be fully realized, and the technology will take time to develop. Apple’s initial aim, though, is to cut down on how often a person has to repeat a request or question to Siri, and to reduce misinterpretation.
To do this, Siri will have to analyze emotions, or at least attempt to.
“Intelligent software agents can perform actions on behalf of a user. Actions can be performed in response to a natural-language user input, such as a sentence spoken by the user. In some circumstances, an action taken by an intelligent software agent may not match the action that the user intended,” as stated in Apple’s patent.
“As an example, the face image in the video input… may be analysed to determine whether particular muscles or muscle groups are activated by identifying shapes or motions,” it adds.
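The patent does not publish any code, but the idea it describes — checking which facial muscles or muscle groups are activated and matching the pattern to an emotion — can be sketched in a few lines. The sketch below uses FACS-style “action unit” codes and a hypothetical rule table purely for illustration; none of the mappings are Apple’s.

```python
# Illustrative sketch only, not Apple's implementation. The patent talks
# about identifying which facial muscles or muscle groups are active;
# FACS-style "action units" (AUs) are one common way to encode that.
# The emotion rules below are assumptions chosen for demonstration.

# Real FACS codes for reference: AU4 = brow lowerer, AU6 = cheek raiser,
# AU7 = lid tightener, AU12 = lip corner puller, AU15 = lip corner depressor
EMOTION_RULES = {
    frozenset({"AU6", "AU12"}): "pleased",   # raised cheeks + smile
    frozenset({"AU4", "AU15"}): "sad",       # lowered brow + frown
    frozenset({"AU4", "AU7"}): "annoyed",    # lowered brow + tightened lids
}

def classify_emotion(active_units):
    """Return an emotion label when the set of active muscle groups
    contains a known pattern; otherwise return 'neutral'."""
    active = set(active_units)
    for pattern, label in EMOTION_RULES.items():
        if pattern <= active:  # every unit in the pattern is active
            return label
    return "neutral"

print(classify_emotion({"AU6", "AU12"}))  # a smile pattern -> "pleased"
print(classify_emotion({"AU1"}))          # no match -> "neutral"
```

A real system would sit downstream of a face-landmark detector that estimates which muscle groups are moving from video frames; the rule lookup above only stands in for that final classification step.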
If successful, Siri could also offer personalized responses or actions that can be further customized. Email and personal music playlists come to mind as a few of the uses for such an A.I. advancement.
Siri could then recognize whether a user is annoyed or pleased based on facial analysis along with other factors, such as tone of voice or manner of speaking. For now, Apple hasn’t revealed any further details about its plans for a more perceptive Siri.