Research

We are a technical HCI research lab contributing new methods, tools, and technologies for creating novel interactive systems.

We have a history of building and studying next-generation user interfaces involving touch, gesture, speech, multi-modal, and multi-device interactions, and now virtual, augmented, and mixed reality interfaces, as well as novel techniques and tools for designing them.

As part of our research, we have created many systems and tools, including jQMetrics, jQMultiTouch, W3Touch, CrowdAdapt, CrowdStudy, Kinect Analysis, Kinect Browser, XDStudio, XDKinect, and XDBrowser.

Latest Research

Our current focus is on virtual, augmented, and mixed reality interfaces. We have created new systems and tools to support cross-device augmented and mixed reality interfaces, as well as new prototyping tools, with our first three papers appearing at CHI 2018. For example, we have prepared a video summary of our upcoming ProtoAR tool.

Below are some examples from our ongoing research:

  • XD-AR: a platform we are developing for multi-user, multi-device augmented reality experiences that allow users to “edit” the physical world in real time
  • 3D Virtual/Physical Lab: virtual and physical 3D models of our lab environment that we are using to design new forms of natural user interfaces and cross-device interactions for both users in the lab and users participating remotely
  • Surface Pad: initially conceived as a sketching interface for new kinds of input devices; we are currently exploring how it can be extended to easily import and blend content from the digital and physical worlds

Previous Research

Previously, while at Carnegie Mellon University, the lab’s PI Michael Nebeling investigated ways of orchestrating multiple devices and crowds to enable complex information seeking, sensemaking, and productivity tasks on small mobile and wearable devices. He also contributed to the Google IoT project led by CMU. This work led to three papers at ACM CHI 2016, including:

  • XDBrowser: a new cross-device web browser that he used to elicit 144 multi-device web page designs for five popular web interfaces, leading to seven cross-device web page design patterns (CHI’16 paper, CHI’16 talk)
  • Snap-To-It: a mobile app allowing users to opportunistically interact with appliances in multi-device environments simply by taking a picture of them (CHI’16 paper)

Selected Publications

For the full list of publications, please check Google Scholar and DBLP.

  1. ProtoAR: Rapid Physical-Digital Prototyping of Mobile Augmented Reality Applications CHI’18a
    M. Nebeling, J. Nebeling, A. Yu, R. Rumble
  2. GestureWiz: A Human-Powered Gesture Design Environment for User Interface Prototypes CHI’18b
    M. Speicher, M. Nebeling
  3. User-Driven Design Principles for Gesture Representations CHI’18c
    E. McAweeney, H. Zhang, M. Nebeling
  4. XDBrowser 2.0: Semi-Automatic Generation of Cross-Device Interfaces CHI’17
    M. Nebeling