Between phones, tablets, computers, games and TVs, the average American spends nearly ten hours a day staring at a screen. “The interface” is, of course, a key part of every interaction with technology. But Christian Glover Wilson, VP, Technology & Strategy at Tigerspike, believes we’re headed to a future where we don’t need any interfaces at all to interact with our technology.

DOMINIQUE LEE, VP of Research & Strategy at Tigerspike, asked Christian a few questions about this screenless future.

DOMINIQUE: Let’s start with the basics – what is a screenless interface?

CHRISTIAN: Screenless interfaces borrow from natural human experiences – like voice and face recognition, haptics, gestures, ambient communication and thought control – to remove the need for a screen altogether. Examples include talking to an Amazon Echo, or a Fitbit communicating with you via a subtle vibration.

DOMINIQUE: Why get rid of the screen? Is it all that bad?

CHRISTIAN: We don’t communicate with computers because we enjoy it — we do so because they help us achieve something else. Screenless interfaces reduce the time we spend looking at computers while still achieving similar outcomes. It’s technology that understands us in our own natural words, behaviors and gestures, so we can get on with our lives.

DOMINIQUE: How have advances in technology led to the point we are at now, where we are on the cusp of getting rid of the interface altogether?

CHRISTIAN: Until now, interacting with a computer meant using the interface it provided: learn to use a QWERTY keyboard, see whatever fits on the computer’s screen. We were bending to the limits and capabilities of the technology. The mouse was a more natural and intuitive interface than pressing keys on a keyboard, touchscreens got even closer to the metaphor, and modern multi-touch screens add gestures and even react to the force of a touch – every advance moving closer to a natural, human-scaled mode of interaction.

Today we have to make even fewer compromises in our use of technology thanks to recent advances: smaller machines, improved understanding of user experience, and the democratization of computers mean technology is adapting to human capabilities rather than the other way around. For example, it turns out we can sense even subtle vibrations against our skin through a pocket, so mobile phones adopted vibration as a silent means of communication. Facebook announced at their latest ‘F8’ conference that they are taking this idea further still by developing technology to allow users to ‘hear’ language through their skin; this could allow simultaneous translation while listening to someone speak … adding touch to the list of senses we can use to communicate complex concepts.

Today, we’re seeing the screen disappear as the sole manifestation of human and computer interaction – a switch to smart objects that communicate to our different senses.


[Timeline: milestones in consumer technology, from screen-bound to screenless]

- TV, microwave ovens, refrigerators
- Computer & keyboard
- Digital camera, computer mouse
- Cellphone, vacuum cleaner
- CD players, Internet / World Wide Web
- VoIP, streaming media, HDTV, WiFi
- MP3 players (iPod), smartphones (touchscreen)
- Touchscreen cellphone (iPhone)
- Smart TV
- Touchscreen tablet (iPad), 3D TV
- Motion-sensor gaming console (Kinect), smart home automation (Nest)
- Smartwatch (Apple Watch)
- Nanotechnology, smart speakers (Amazon Echo / Google Home)
- Mixed reality experiences, networks of connected devices
- Human biotechnologies

DOMINIQUE: You mentioned a few before, but what technologies are leading invisible user interface design?

CHRISTIAN: It’s all about technologies that communicate with different senses – not just a screen for us to look at, but voice assistants (like Siri, Alexa or Google Assistant), gesture interfaces (like Xbox Kinect or Magic Leap), and subtle haptic interfaces like the Apple Watch.

Augmented and mixed reality systems (like HoloLens or Oculus Rift) are combining many of these interface approaches to produce immersive computing experiences that blur the line between physical and virtual worlds.

DOMINIQUE: What design or development challenges need to be considered when there is no UI to work with?

CHRISTIAN: Designers need to think about user education, user interaction, user commands, best practices and security. Learning how to set up and use technology without a screen can be frustrating. A quick-start guide might accompany a product, but it’s rarely read by the end user. To be successful, learning and usage need to be simple, and can’t be targeted only at a tech-savvy audience.

As far as bigger-picture implications, few tech giants or startups in this space have addressed the overarching social and anthropological questions around creating a more natural relationship between humans and computers.

DOMINIQUE: What about the nitty gritty of user interaction? Any concerns there?

CHRISTIAN: One of the main problems with screenless interfaces is accuracy: ensuring the technology correctly detects user activation and commands without a keyboard, a mouse or a touchable screen. For example, background noise from dishwashers, air vents and a host of other noisemakers may prevent voice assistants from hearing users properly.

User commands can also trigger unexpected responses. With screenless interfaces, the control mechanism is often a user’s voice or gesture. A user may intend to control the interface only some of the time, but could inadvertently trigger it if it’s not designed properly.
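The inadvertent-trigger problem is often handled by gating activation on a wake word plus a detector confidence threshold. As a minimal sketch of that idea (hypothetical names and values – this is not any vendor’s actual API):

```python
# Sketch: gate voice activation on an exact wake word AND a minimum
# detector confidence, to reduce accidental triggers from background speech.

WAKE_WORD = "hey assistant"          # hypothetical wake phrase
CONFIDENCE_THRESHOLD = 0.85          # tune: higher = fewer false activations,
                                     # but more missed real commands

def should_activate(transcript: str, confidence: float) -> bool:
    """Activate only when the wake word is heard with high confidence."""
    return WAKE_WORD in transcript.lower() and confidence >= CONFIDENCE_THRESHOLD

print(should_activate("Hey assistant, lights on", 0.93))  # True
print(should_activate("hay assistance maybe?", 0.91))     # False: no wake word
print(should_activate("hey assistant", 0.60))             # False: low confidence
```

The threshold embodies the trade-off Christian describes: set it too low and dishwasher chatter wakes the device; set it too high and real users have to repeat themselves.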

DOMINIQUE: Are there any guidelines and standards for creating screenless experiences?

CHRISTIAN: We’re still in the Wild West: there’s no clear vernacular or set of standards for designing minimal or screenless interfaces. Design guidelines and standards are limited, and a universal language for smart objects is still many years off. So now is a good time to experiment and play a role in creating those standards. The field is wide open.

DOMINIQUE: In the future, will we say goodbye to the UI?

CHRISTIAN: Looking to the future, it might not be life without devices, screens and hardware – but it will certainly be life without the devices we know today.

DOMINIQUE: Do you think adoption of screenless tech will be limited to an elite few?

CHRISTIAN: I don’t think so. Technology has become incredibly democratized in recent years. More people in the world now own smartphones than have toilets. As the need for devices and hardware (computers, monitors, keyboards) is removed, the barriers to entry will be even lower for people all along the economic spectrum.

DOMINIQUE: Paint a picture for me of what this future of screenless technology looks like.

CHRISTIAN: The ambient technologies of the future will weave themselves into our everyday lives. They will be unobtrusive, intelligent, often invisible in our natural environment, yet ever-present and accessible whenever we need them.

Today, humans have to augment themselves to communicate with their machines. For example, I grew up in England, and with American voice user interfaces I sometimes have to change the most natural thing there is – my accent and pronunciation – to accord with the vagaries of the AUI (audio user interface). Saying “carrrrr” just so that Siri understands me isn’t the most dignified way to interact with technology. In the future, that won’t be the case, because the technology will adapt to suit us … I hope!

We will see a blurring of the boundaries between human-embedded devices (e.g. RFID chips) and personal devices (e.g. a smartphone or Fitbit). A Fitbit under the skin? Neural lace for a mouse? Users will blend interfaces, and creators need to design for a blended experience rather than backing one interface or one sense. This might mean that a user learns a system by looking at a visual screen, then switches from one interface to another based on their need and context of use.

Finding the right fidelity of message for your medium is really the key takeaway for this multi-device, user-empowered future world.

Interview with Christian Glover Wilson, VP, Technology & Strategy at Tigerspike by Dominique Lee, Vice President, Research & Strategy, Tigerspike NYC