Media-Independent Interfaces in a Media-Dependent World
Ken Arnold
Ken.Arnold@east.sun.com
Sun Microsystems Labs
2 Elizabeth Dr.
Chelmsford, MA 01824
Kee Hinckley
nazgul@utopia.com
Utopia, Inc.
25 Forest Circle
Winchester, MA 01890
Eric Shienbrood
ers@wildfire.com
Wildfire Communications
20 Maguire Rd.
Lexington, MA 02173
Abstract
Wildfire is a communications assistant that uses speech recognition to
work over phone lines. At least that's what it is today. But in the
future it wants to run on desktops, PDAs (like the Newton Message
Pad), and who knows what all. To provide a level of media
independence, we designed a subsystem to isolate the communications
knowledge of the assistant from the mechanisms of prompt/response.
This layer is called the MMUI. It provides abstractions of input and
output that let the assistant ask questions and get responses without
knowledge of the specifics of the communication channels involved. The
specifics of speech recognition, as well as the degree of abstraction
desired, make this an interesting case of a presentation/semantic split
using object polymorphism. This presentation will cover the design of
the MMUI, its fundamental weaknesses, and furious handwaving over
future directions to mend them.
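The presentation/semantic split described above can be illustrated with a
small sketch. This is not the actual MMUI API; all class and method names
here (Channel, ask, assistant_confirm) are hypothetical, chosen only to
show how object polymorphism lets the assistant's communications knowledge
stay independent of the prompt/response mechanics of each medium.

```python
# Hypothetical sketch of a presentation/semantic split via polymorphism.
# Names and signatures are invented for illustration, not taken from MMUI.
from abc import ABC, abstractmethod

class Channel(ABC):
    """Media-independent prompt/response abstraction."""
    @abstractmethod
    def ask(self, prompt: str, choices: list) -> str:
        ...

class TextChannel(Channel):
    """Desktop/PDA-style rendering: present choices as a printed menu."""
    def __init__(self, scripted_reply: str):
        # Stands in for real user input in this self-contained sketch.
        self.scripted_reply = scripted_reply

    def ask(self, prompt, choices):
        print(f"{prompt} [{'/'.join(choices)}]")
        return self.scripted_reply

class SpeechChannel(Channel):
    """Phone-line rendering: speak the prompt, recognize the reply."""
    def __init__(self, recognizer):
        # A real implementation would play audio and constrain the
        # recognition grammar to `choices`; here a stub callable suffices.
        self.recognizer = recognizer

    def ask(self, prompt, choices):
        return self.recognizer(prompt, choices)

def assistant_confirm(channel: Channel) -> bool:
    """Semantic layer: knows WHAT to ask, not HOW the medium asks it."""
    return channel.ask("Place the call?", ["yes", "no"]) == "yes"
```

The semantic layer calls only the abstract ask(); swapping a SpeechChannel
for a TextChannel changes the presentation without touching the assistant's
logic, which is the isolation the MMUI is designed to provide.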
The full text of this paper is available in ASCII (39,035 bytes) and
PostScript (107,965 bytes) form.