The Island of Misfit Responsive Designs, Part 1

Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.

—Steve Jobs

Thousands of job reqs for UX designers over the last few years have listed a desire for experience in responsive design. And why wouldn’t product teams want to build software this way? One design to develop, maintain, and support. It simplifies a number of things. I have advocated for responsive design on a number of occasions myself. When resources are short, a responsive design is a MUCH better choice than simply ignoring mobile. I mean… people don’t really use mobile devices to check the Internet or anything, do they?

But, as the conscientious designers that we all are, we must also consider the use contexts and experiences that come with a responsive design. We should take a step back and A) figure out whether responsive design makes sense at all, and B) understand which components of an experience aren’t handled by responsive design and how we might address them.

Here are four ideas that muddy the waters of responsive design just a bit:

  • Differing use contexts
  • Differing technical capabilities
  • Multidevice “smart” ecosystems
  • Challenges unique to the responsive design process

1. Differing experiences/differing use contexts

When diving into a responsive design, it’s easy to forget that because the design can be used in so many different situations, the contexts of those situations may be vastly different. When a designer is asked to think about a design that works specifically for mobile or specifically for desktop, that designer can have a laser focus on how the software will be asked to perform (by the user) in that situation. With responsive design, we suddenly have to split our focus on the user across all these different contexts, and potentially even across different users altogether.

Let’s just think about the difference between a site viewed on a mobile device and a desktop.


Mobile Website

  • Used most frequently for short bursts of activity
  • Use is often distracted or for multitasking—tasks are short and well-defined
  • User frequently stands and may have other things in their hand during use
  • Location may change during the course of use
  • May be moving the mobile device in 3D space and using different ways of holding and interacting with the device: one-handed, two-handed horizontally, one-handed in the lap, etc.

Desktop (and, to a slightly lesser extent, laptop) Website

  • Used for more extended projects
  • Use may also be distracted and multitasking, but across much longer tasks
  • Use is almost always seated
  • User is stationary
  • Computer is designed to stay in place; mouse and keyboard use is the priority
*These are generalizations, and any given app may not follow these patterns exactly.

 

These contexts of use may necessitate vastly different designs to support the type of use and experience expected from the software. But let’s take it a step further. Perhaps users of mobile devices have a shorter attention span than desktop users, so entire flows need to be more linear and discrete. Perhaps mobile users are more expert, while desktop users are more novice, necessitating more hinting and instruction in the desktop version. In my line of work, I design software for managers at various entertainment venues, and we have found that when helping customers, mobile phones carry a stigma of “messing around on your phone,” whereas a tablet is seen as much more acceptable and professional.

Do we have any pet personas?

Perhaps users are more likely to toss their phone across the room when it doesn’t work as expected (I don’t know); if so, then clearly making sure the phone’s interface is as easy to use as possible is a high priority. The exact details will differ from one use context to the next, but there will certainly be some differences.

Ask yourself these questions (or questions like these) before engaging in a responsive design:

  • Assuming the users on each device are different (not always a safe assumption), how do their needs for this design differ across devices? Can one design serve all those needs?
  • What are all the different use cases of this design? Where do they use it? What else might they be doing at the same time? Who else is with them?
  • Are there any use cases that make sense on one device but not others?
  • What level of experience do users of each device have? What are their expectations, and can we meet those expectations with a single design?

2. Devices provide alternative capabilities

So let’s say a design clears the use-context hurdle (sorry, the Olympics are on and I couldn’t help myself). We still have to take into consideration that there are a lot of capabilities one device may have that others don’t.

Many mobile phones have telephony, cameras, gyroscopes, and biometric scanning, and they often interlink and share contact lists, news streams, social media, and other interconnected apps. Various wearables can be tethered to other devices, collect vitals, record data via cameras, and so forth. Tablets share a lot of similarities with both mobile phones and laptops, blending the two together with cameras, detachable keyboards, and stylus recognition. Laptops enable some mobility while also providing the standard WIMP (windows, icons, menus, pointer) model that most of us grew up with. Lastly, standalone desktops provide access to printers and other peripherals (e.g., scanners) that require stationary use.

If we think about responsive design without considering the devices such a design will be used on, we might miss opportunities to provide appropriately enhanced experiences. By building one design, we might not think about one of my favorite design concepts: affordances. Affordances are cues a design gives about how it can be used.

Now, affordance purists would hold to traditional affordance theory, in which cues exist only in the physical characteristics of a design. For example, a chair affords sitting due to its shape, or buttons on a webpage afford clicking due to the way they look. Another example: moving a phone to discover the gyroscopic capabilities of an app like Swarm is both natural and pleasurable. I believe that we can “learn” non-physical affordances as well, such as that a touchscreen can be touched or that smartphones can take pictures. (If you doubt me, consider Peirce’s typology of signs in semiotics: icon vs. symbol.)

As I move my phone in Swarm, the coins at the top of the screen move around, reminding me to earn points by checking in and serving as a small delighter for using the app.

So, again, considering responsive design from the perspective of differing device types:

  • What will someone using this software want to do with this data?
  • Will people using this software on a desktop/laptop want to print any of it?
  • Can a camera on a mobile device help alleviate the pain of, say, data entry? Can biometrics alleviate the pain of logging in?
  • Does using device-specific technology cause more problems than it’s worth? And if it does, are we short-changing the user experience to save some development and support time?

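Questions like the last two often come down to progressive enhancement: detect a capability at runtime and offer the richer path only when it actually exists, rather than assuming it from screen width. Here is a minimal TypeScript sketch of that idea; the capability names, flow names, and `pickEnhancements` function are my own illustrative assumptions, not any standard API (in a real browser you would populate the profile from checks like `"mediaDevices" in navigator` or `window.PublicKeyCredential`):

```typescript
// Hypothetical capability profile, normally filled in from runtime
// feature checks rather than from the device's screen size.
interface Capabilities {
  hasCamera: boolean;     // e.g. navigator.mediaDevices exists
  hasBiometrics: boolean; // e.g. a platform authenticator is available
  canPrint: boolean;      // e.g. a stationary device with window.print
}

// Decide which enhanced flows to offer, falling back gracefully
// when a capability is absent.
function pickEnhancements(caps: Capabilities): string[] {
  const flows: string[] = [];
  // A camera can replace manual data entry (scan instead of type).
  if (caps.hasCamera) flows.push("scan-to-enter");
  // Biometrics can replace password login where supported.
  flows.push(caps.hasBiometrics ? "biometric-login" : "password-login");
  // Printing only makes sense where a printer is plausibly attached.
  if (caps.canPrint) flows.push("print-report");
  return flows;
}

// Example profiles: a typical phone vs. a typical desktop.
const phone = pickEnhancements({
  hasCamera: true, hasBiometrics: true, canPrint: false,
});
const desktop = pickEnhancements({
  hasCamera: false, hasBiometrics: false, canPrint: true,
});
```

The point is not the specific flows but the structure: one codebase, one decision function, and device differences expressed as detected capabilities instead of hard-coded breakpoints.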
So, we have covered the varying use contexts of responsive design and the differences in the devices on which a design is used. In my next post, I’ll follow up this critique by describing new innovations in multidevice interaction and the often invisible cost of doing responsive design.

Don’t miss it! Same UX time, same UX channel…