
Let's talk with SALT...

CIOL Bureau

The world has fast moved from text to audio, video and multimedia. Software companies are building tools that deliver voice-based web applications and provide multimodal access, giving end users several ways of interacting with an application.


With multimodal access, users can give input to an application using voice, keyboard, keypad, mouse or stylus, and receive output as synthesized speech, audio, plain text, motion video and/or graphics.

Back in 2002, Cisco Systems, Comverse, Intel, Microsoft, Philips and ScanSoft started work on an API for speech applications and announced a collection of Speech Application Language Tags (SALT). The group, known as the SALT Forum, published the SALT 1.0 specification that year and contributed it to the World Wide Web Consortium (W3C).

According to the SALT Forum website, "The Speech Application Language Tags (SALT) 1.0 specification enables multimodal and telephony-enabled access to information, applications, and Web services from PCs, telephones, tablet PCs, and wireless personal digital assistants (PDAs). The Speech Application Language Tags extend existing mark-up languages such as HTML, XHTML, and XML. Multimodal access will enable users to interact with an application in a variety of ways: they will be able to input data using speech, a keyboard, keypad, mouse and/or stylus, and produce data as synthesized speech, audio, plain text, motion video, and/or graphics. Each of these modes will be able to be used independently or concurrently."
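To make the idea concrete, here is a minimal sketch of what a SALT-enabled HTML page might look like, based on the elements defined in the SALT 1.0 specification (`prompt`, `listen`, `grammar` and `bind`). The field name, grammar file `cities.grxml` and script function are illustrative assumptions, not taken from the specification:

```xml
<html xmlns:salt="http://www.saltforum.org/2002/SALT">
<body onload="startDialog()">
  <!-- Visual input: the same field can also be filled by voice -->
  <input name="txtCity" type="text" />

  <!-- Speech output: a prompt played to the user -->
  <salt:prompt id="askCity">Which city are you flying to?</salt:prompt>

  <!-- Speech input: recognize against a grammar, then bind the
       recognition result into the visual form field -->
  <salt:listen id="listenCity">
    <salt:grammar src="cities.grxml" />
    <salt:bind targetelement="txtCity" value="//city" />
  </salt:listen>

  <script>
    // Illustrative glue code: SALT objects are driven from script
    function startDialog() {
      askCity.Start();     // play the prompt
      listenCity.Start();  // start speech recognition
    }
  </script>
</body>
</html>
```

The user can either type into the text box or speak the answer; either way the same field ends up holding the value, which is the essence of the multimodal approach the quote describes.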

Read more about SALT: take a detailed look at the SALT 1.0 specification, and read about SALT implementation tools and products.