Frederic VERNIER
HCIL Talk
October 16, 2000 (Monday)

Abstract

After saying a few words about the lab in Grenoble where I did my work, I will present the two main research axes of my thesis: multimodality and visualization. The goal of my research is to apply multimodal interaction to the design of graphical user interfaces for large information spaces. The large variety of multimodal interfaces stems from the diversity of available modalities AND from the many ways of combining them.

For input user interfaces, the "put that there" paradigm illustrates, through the use of different modalities (speech and mouse), the variety of interaction means. This richness stems not only from the different available modalities but also from the multiple ways of combining them: there are spatial, temporal, syntactic, and semantic aspects involved in the combination of modalities.
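
To make the temporal aspect concrete, here is a minimal sketch, in Java since the talk touches on Java implementations, of how a "put that there" engine might fuse a deictic speech token with a recent pointing event. Everything in it (the class names, the 500 ms fusion window) is a hypothetical illustration, not the actual thesis implementation.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical sketch: temporal fusion for "put that there".
    // A deictic word ("that", "there") is bound to the most recent
    // pointer event that falls inside a fixed time window.
    public class PutThatThereFusion {

        static final long WINDOW_MS = 500;  // assumed fusion window

        record PointerEvent(long timeMs, int x, int y) {}

        private final Deque<PointerEvent> recentPointers = new ArrayDeque<>();

        public void onPointer(PointerEvent e) {
            recentPointers.addLast(e);
        }

        // Called when the recognizer emits a deictic word.
        public PointerEvent resolveDeictic(long speechTimeMs) {
            // Drop pointer events too old to be combined with the speech.
            while (!recentPointers.isEmpty()
                    && speechTimeMs - recentPointers.peekFirst().timeMs() > WINDOW_MS) {
                recentPointers.removeFirst();
            }
            return recentPointers.peekLast();  // null if nothing to fuse
        }

        public static void main(String[] args) {
            PutThatThereFusion fusion = new PutThatThereFusion();
            fusion.onPointer(new PointerEvent(1000, 120, 80));  // user points
            System.out.println("\"that\" -> " + fusion.resolveDeictic(1300));
        }
    }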

For output multimodality, the "look at that" paradigm will be introduced. For example, a system can display a graphic while playing a spoken message: "this graphic was updated 15 minutes ago". The aspects of combination for output are similar to those for input. The main difference is that the system itself must be able to combine the modalities (selection, synchronization, etc.): that is, the combination is no longer performed by the user.
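
As a rough, hypothetical illustration of this system-side combination, the sketch below shows an output coordinator that selects two modalities for an update notification and fires them together. The interfaces and the naive synchronization are invented for the example, not taken from the systems presented in the talk.

    import java.util.List;

    // Hypothetical sketch: system-side combination of output modalities.
    // The coordinator selects which modalities to use and synchronizes
    // them so that both render the same notification together.
    public class OutputCoordinator {

        interface OutputModality {
            void render(String message);
        }

        static class GraphicModality implements OutputModality {
            public void render(String message) {
                System.out.println("[graphic] view redrawn");
            }
        }

        static class SpeechModality implements OutputModality {
            public void render(String message) {
                System.out.println("[speech] \"" + message + "\"");
            }
        }

        // Selection: the modalities chosen for this kind of event.
        private final List<OutputModality> selected =
                List.of(new GraphicModality(), new SpeechModality());

        // Synchronization: fire all selected modalities back to back.
        public void notifyUpdate(String message) {
            for (OutputModality m : selected) {
                m.render(message);
            }
        }

        public static void main(String[] args) {
            new OutputCoordinator()
                    .notifyUpdate("this graphic was updated 15 minutes ago");
        }
    }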

Screen-based modalities play a central role in my work because my research focuses on the design of output multimodal user interfaces for large information spaces. Graphical modalities define a vast range of possibilities. A graphical multimodal user interface can, for example, provide the user with several views of different data at the same time, or with different complementary views of the same data, as in the focus+context design rule. In both cases, the richness of the interface comes from the use of different graphical modalities and the various ways of combining them. To better understand this combination, I will present several existing visualization taxonomies as well as our design space, which covers various combinations of output modalities.
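
As one concrete example of a focus+context deformation, here is a minimal Java sketch of the classical 1D fisheye transform of Sarkar and Brown, g(x) = (d + 1)x / (dx + 1): positions near the focus are magnified and positions far from it are compressed. The transform itself is textbook material; it is shown only as an illustration, not necessarily as one of the deformations classified in our taxonomy.

    // Minimal sketch of the classical Sarkar-Brown 1D fisheye transform
    // g(x) = (d + 1) * x / (d * x + 1), where x in [0, 1] is the distance
    // from the focus and d >= 0 is the distortion (d = 0: no deformation).
    public class Fisheye {

        static double fisheye(double x, double focus, double d, double max) {
            double sign = x >= focus ? 1.0 : -1.0;
            double range = sign > 0 ? max - focus : focus;   // room on that side
            if (range == 0) return x;
            double t = Math.abs(x - focus) / range;          // normalized distance
            double g = (d + 1) * t / (d * t + 1);            // magnified distance
            return focus + sign * g * range;
        }

        public static void main(String[] args) {
            // Eleven evenly spaced positions, focus at 0.5, distortion d = 3.
            for (int i = 0; i <= 10; i++) {
                double x = i / 10.0;
                System.out.printf("%.2f -> %.3f%n", x, fisheye(x, 0.5, 3.0, 1.0));
            }
        }
    }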

Finally, I will present three systems, or give three demonstrations, depending on what is possible. These three visualization systems illustrate the concepts of the taxonomies and raise new challenges and new problems. The talk will surely finish with informal discussions about implementing visualization systems in Java, adding new features to my systems, exchanging pieces of code ...



Introduction
  • Who I am, where I am coming from ...
  • My approach to visualization
  • My background in the multimodality field

Multimodality applied to visualization
  • Output multimodality applied to visualization means multiple views or focus+context
  • Design space for the composition of output modalities
  • Links between output multimodality (multiple views, focus+context) and input multimodality (multiple navigation tools)

Visualization: Taxonomies to understand
  • Ed Chi's framework, Card's taxonomy, Shneiderman's taxonomy, ...
  • Taxonomy of points of view on data (C. Bruley & S. Card)
  • Taxonomy of deformations (for fisheye views)

Visualization: from concepts to implementation (links in French, sorry)
  • Parent, a hierarchy visualization based on Treemaps (1); a slice-and-dice layout sketch follows this list
  • MulTab, huge spreadsheet visualization using fisheye views (2)
  • VITESSE, an interface to WWW search engines using many different distortion-based views (4)
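
Since Parent builds on Treemaps, the following sketch shows the classic slice-and-dice Treemap layout (Shneiderman's original scheme): each node divides its rectangle among its children in proportion to their sizes, alternating the slicing direction at each level. The tiny tree and the names are made up for illustration; this is not Parent's actual code.

    import java.util.List;

    // Minimal sketch of slice-and-dice Treemap layout: a node's rectangle
    // is split among its children proportionally to their sizes, and the
    // slicing direction alternates at each level of the hierarchy.
    public class SliceAndDice {

        record Node(String name, double size, List<Node> children) {
            static Node leaf(String name, double size) {
                return new Node(name, size, List.of());
            }
        }

        static void layout(Node n, double x, double y, double w, double h,
                           boolean horizontal) {
            System.out.printf("%-5s x=%5.1f y=%5.1f w=%5.1f h=%5.1f%n",
                    n.name(), x, y, w, h);
            double offset = 0;
            for (Node child : n.children()) {
                double frac = child.size() / n.size();  // share of the area
                if (horizontal) {
                    layout(child, x + offset, y, w * frac, h, false);
                    offset += w * frac;
                } else {
                    layout(child, x, y + offset, w, h * frac, true);
                    offset += h * frac;
                }
            }
        }

        public static void main(String[] args) {
            Node root = new Node("root", 10, List.of(
                    Node.leaf("a", 4),
                    new Node("b", 6, List.of(
                            Node.leaf("b1", 2), Node.leaf("b2", 4)))));
            layout(root, 0, 0, 100, 100, true);  // root rectangle 100 x 100
        }
    }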