A multipurpose test-bed for integrating user-interface and sensor technologies has been developed, based on a client-server architecture. The various interaction modalities (speech recognition, 3-D audio, pointing, wireless handheld-PC-based control and interaction, sensor interaction, etc.) are implemented as servers that encapsulate and expose commercial and research software packages. The system supports integrated user interaction with large and small displays using speech commands combined with pointing, spatialized audio, and other modalities. Simultaneous, independent speech recognition is supported for two users, who may be equipped with conventional acoustic or novel body-coupled microphones.
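The modality-as-server pattern described above can be sketched as a small TCP service that wraps one underlying software package behind a uniform text protocol; clients query any modality the same way. This is an illustrative sketch only, assuming a line-oriented request/reply protocol: the class and method names (`ModalityServer`, `serve_once`, `query`) and the stub "speech" handler are hypothetical, not part of the actual system.

```python
# Illustrative sketch of the client-server modality architecture:
# each modality (speech, pointing, 3-D audio, ...) runs as a server
# that encapsulates an underlying package behind a text protocol.
# All names here are assumptions for illustration, not from the paper.
import socket
import threading


class ModalityServer:
    """Wraps one interaction modality behind a simple TCP text protocol."""

    def __init__(self, name, handler, host="127.0.0.1", port=0):
        self.name = name
        self.handler = handler  # callable: request string -> reply string
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))  # port 0: let the OS pick a free port
        self.sock.listen(1)
        self.port = self.sock.getsockname()[1]

    def serve_once(self):
        """Accept one client connection, answer one request, then close."""
        conn, _ = self.sock.accept()
        with conn:
            request = conn.recv(1024).decode().strip()
            conn.sendall((self.handler(request) + "\n").encode())
        self.sock.close()


def query(port, message):
    """Client side: send one request to a modality server, return its reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall((message + "\n").encode())
        return c.recv(1024).decode().strip()


if __name__ == "__main__":
    # A stub "speech" server that pretends to recognize a spoken command.
    server = ModalityServer("speech", lambda req: "RECOGNIZED " + req.upper())
    threading.Thread(target=server.serve_once, daemon=True).start()
    print(query(server.port, "select left display"))
```

In the real system each such server would wrap a commercial or research recognition package rather than a stub, and a central client would fuse replies from several modality servers (e.g. speech plus pointing) into one interaction event.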