XML News from Wednesday, February 14, 2007

The W3C CSS Working Group has posted a new working draft of CSS3 module: Generated Content for Paged Media that "describes how CSS style sheets can express named strings, leaders, cross-references, footnotes, endnotes, running headers and footers, named flows, new counter styles, page and column floats, hyphenation, bookmarks, change bars, continuation markers, named page lists, and generated lists."
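For instance, the draft's named strings and running headers might combine roughly like this (a hedged sketch based on the draft's string-set property and string() function; exact syntax may change before the module stabilizes):

```css
/* Capture each chapter title as a named string... */
h1 { string-set: chapter content(); }

/* ...then repeat it in the running header of every page */
@page {
  @top-center { content: string(chapter); }
}
```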

The W3C CSS Working Group has resurrected Behavioral Extensions to CSS. "Behavioral Extensions provide a way to link to binding technologies, such as XBL, from CSS style sheets. This allows bindings to be selected using the CSS cascade, and thus enables bindings to transparently benefit from the user style sheet mechanism, media selection, and alternate style sheets." In brief this proposes a new binding property that can locate rendering descriptions in another document:

input[type="checkbox"] {
  binding: url("http://example.org/htmlBindings.xml#checkbox");
}
The W3C Web Application Formats Working Group has posted a working draft of Widgets 1.0 Requirements. The goal is to

standardize the way client-side web applications (widgets) are to be scripted, digitally signed, secured, packaged and deployed in a way that is device independent.

The type of web applications that are addressed by this document are usually small client-side applications for displaying and updating remote data, packaged in a way to allow a single download and installation on a client machine. The application may execute outside of the typical web browser interface. Examples include clocks, stock tickers, currency converters, news readers, games and weather forecasters. Some existing industry solutions go by the names "widgets", "gadgets" or "modules".

The W3C Voice Browser Activity has published the proposed recommendation of Semantic Interpretation for Speech Recognition (SISR) Version 1.0.

This document defines the process of Semantic Interpretation for Speech Recognition and the syntax and semantics of semantic interpretation tags that can be added to speech recognition grammars to compute information to return to an application on the basis of rules and tokens that were matched by the speech recognizer. In particular, it defines the syntax and semantics of the contents of Tags in the Speech Recognition Grammar Specification [SRGS].

The results of semantic interpretation describe the meaning of a natural language utterance. The current specification represents this information as an ECMAScript object, and defines a mechanism to serialize the result into XML. The W3C Multimodal Interaction Activity [MMI] is defining an XML data format [EMMA] for containing and annotating the information in user utterances. It is expected that the EMMA language will be able to integrate results generated by Semantic Interpretation for Speech Recognition.
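As a rough illustration of how these tags sit inside an SRGS grammar, consider this sketch (the grammar and the normalized values here are invented for the example; the tag-format and the out variable follow the SISR semantics/1.0 conventions):

```xml
<grammar xmlns="http://www.w3.org/2001/06/grammar"
         version="1.0" xml:lang="en-US"
         root="drink" tag-format="semantics/1.0">
  <rule id="drink">
    <one-of>
      <!-- Each tag assigns a normalized value to the rule variable "out" -->
      <item>coffee <tag>out = "espresso";</tag></item>
      <item>cup of joe <tag>out = "espresso";</tag></item>
      <item>tea <tag>out = "tea";</tag></item>
    </one-of>
  </rule>
</grammar>
```

Whichever alternative the recognizer matches, the application receives the same ECMAScript result ("espresso" for either of the first two utterances), which is the kind of meaning extraction the specification describes.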