by Hang Ung and Thomas Sandholm
Ecole Polytechnique and HP Labs
In this paper we describe a system that enables audio-based real-time coordination of a group of users with mobile devices. The assumption is that at least one person in the group can easily enter text from a keyboard-like control, e.g. on a desktop PC or tablet. This person, whom we call the coordinator, can then communicate with one or more people, called operatives, who carry mobile devices and are engaged in an activity that makes it hard or impossible for them to see the device screen or to use touch-based input mechanisms. Examples include driving, running, biking, and walking.

A simple example is when you are in a meeting and want to communicate in real-time with a friend who is driving a car. A phone call might be optimal for the driver, while IM or SMS would be optimal for you. In this case our system provides the optimal interface for each user while still allowing them to communicate in real-time.

Another goal is that the only system requirement, for both the coordinator and the operatives, is a browser capable of rendering HTML5 content, which allows coordination across a diverse fleet of devices. We scope our work to use cases in which the operatives never have to provide any explicit input back to the coordinator, apart from automatically detected device properties such as geolocation. This restriction is not as limiting as it seems, since the same user can easily switch between the operative and coordinator roles.