TechCrunch/MSN, 4/29/2017
Amazon is furthering the humanization of its virtual assistant Alexa by giving it more expressive speech capabilities. Developers can now use Speech Synthesis Markup Language (SSML), a markup language that lets them control Alexa’s intonation, emphasis, pauses, whispering, and region-specific pronunciation. This opens up new possibilities for app companies and for how their virtual assistants are used in the world.
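As a brief illustration, an SSML response combining a pause, a whisper, and emphasis might look like the sketch below; the tag names follow Amazon's published SSML reference for Alexa skills, and the sentence content is invented for the example.

```xml
<speak>
    I have a secret to tell you.
    <!-- insert a half-second pause before the whispered line -->
    <break time="500ms"/>
    <!-- the whispered effect is Alexa-specific, via the amazon: namespace -->
    <amazon:effect name="whispered">I am not a real human.</amazon:effect>
    <break time="300ms"/>
    Can you believe <emphasis level="strong">that</emphasis>?
</speak>
```

A skill returns markup like this in place of plain output text, and Alexa renders the tags as changes in delivery rather than speaking them aloud.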
[See the full post at: Alexa Learns to Talk Like a Human With Whispers, Pauses, and Emotion]