EMMA: Extensible MultiModal Annotation markup language Version 1.1 Draft Published

27 June 2013

The Multimodal Interaction Working Group has published a Working Draft of EMMA: Extensible MultiModal Annotation markup language Version 1.1. This specification defines an XML markup language for containing and annotating the interpretation of user input. Examples of such interpretations include a transcription into words of a raw signal, for instance derived from speech, pen, or keystroke input; a set of attribute/value pairs describing its meaning; or a set of attribute/value pairs describing a gesture. The interpretation of the user's input is expected to be generated by signal interpretation processes, such as speech and ink recognition, semantic interpreters, and other types of processors, for use by components that act on the user's inputs, such as interaction managers. Learn more about the Multimodal Interaction Activity.
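
For context, an EMMA document wraps one or more interpretations of a user's input in XML, together with standard annotations such as the recognized tokens, the input medium and mode, and a confidence score. Below is a minimal sketch of such a document for a spoken travel query; the emma: elements and annotation attributes follow the EMMA specification, while the application-specific content (origin, destination) and the attribute values are illustrative.

  <emma:emma version="1.1"
      xmlns:emma="http://www.w3.org/2003/04/emma">
    <!-- One interpretation of the utterance, as a speech
         recognizer and semantic interpreter might produce it. -->
    <emma:interpretation id="interp1"
        emma:medium="acoustic"
        emma:mode="voice"
        emma:confidence="0.92"
        emma:tokens="flights from boston to denver">
      <!-- Application-specific attribute/value pairs describing
           the meaning of the input (hypothetical vocabulary). -->
      <origin>Boston</origin>
      <destination>Denver</destination>
    </emma:interpretation>
  </emma:emma>

A single emma:emma container can also hold several competing interpretations (for example, an N-best list grouped under emma:one-of), which is how a recognizer can convey its uncertainty to an interaction manager.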
