Short-term Memory and UI Design

Not too long ago, I attended a talk by Jeff Johnson hosted by BayCHI. He introduced his new book Designing with the Mind in Mind, which reveals the psychology behind user interface design. His lessons covered everything from Gestalt theory to blind spots, but what I found most interesting was the influence of memory.

Short-term memory is best described as the conscious mind: it is what is happening right now. It governs how many items you can hold at once, which is 3-5 unrelated items (e.g. a zip code) or more if the items are related (e.g. 3-5 random words vs. a sentence of words). The latter uses the brain’s feature detection, which draws on connections from previous experiences—more neurons fire and trigger recognition.

Consider a scenario: I type a collection of words into a search engine, and those words are out of sight once the results are presented. I may become frustrated as a user because the task has distracted me from recalling what I entered into the search field. To help, some search engines highlight the query terms within the results. Providing cues like this helps the user focus on the task and aids the recall of information.
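As a small illustration of that cue (a hypothetical sketch in TypeScript, not any search engine’s actual code), a results page can wrap each query term in a highlight tag:

```typescript
// Highlight each query term inside a search-result snippet so the
// user doesn't have to hold the query in short-term memory.
function highlightTerms(snippet: string, query: string): string {
  let result = snippet;
  for (const term of query.split(/\s+/).filter((t) => t.length > 0)) {
    // Escape regex metacharacters in the term, then wrap matches in <mark>.
    const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    result = result.replace(new RegExp(escaped, "gi"), (m) => `<mark>${m}</mark>`);
  }
  return result;
}

// The words the user typed stay visible on the results page.
console.log(highlightTerms("Designing with the Mind in Mind", "mind design"));
```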

What I walked away with was that asking a user to keep track of features in his/her short-term memory is work. Good design is invisible and UI/UX is no exception. Intuition is based on experience, so the more unified and consistent an experience is, the more likely a task will seem effortless.

Short-term memory does have its faults, as seen in this video. While entertaining in a prank sort of way, it also shows how task focus and distraction can blind us to what is literally right in front of us. Enjoy!

http://www.youtube.com/watch?v=UYeJ1BHHDIg

Letterpress Printing

I recently completed a series of workshops at the Center for the Book in San Francisco in order to be certified to rent their letterpress printing equipment. After spending so much time on the computer designing complex interfaces and using programs to lay out pages of content, it was refreshing to get back to good old ink, paper, and type.

Each of the three workshops lasted for 8 hours, totaling 24 hours worth of press time. The instructors were experts on the Vandercook press and taught us the mechanics of the machines as well as printmaking techniques. We started each class with a task (usually laying out a page of a small chapbook), which involved choosing a typeface and handsetting it on a composing stick, as seen below.

Composing Stick, Letterpress printing

California job case

This often took the most time, since you had to find each character in the giant California job case (a drawer segmenting the individual letters of one typeface), place the letters on the composing stick spelling the words backwards, and use leading and spaces (solid metal pieces for filling space) to lock up any loose areas. Once the type was all set, it was transferred to a galley (a metal tray), then onto the bed of the press. Blocks of wood called furniture were placed around the composition, which was then locked into place with a quoin.

From there, ink was mixed by hand and applied to the rollers of the Vandercook press. Letting the rollers run for a bit helped distribute the ink so its color and density were even across the composition. Next, the rollers were set on ‘trip’ to ink the composition and the paper was aligned to the printing area.

Test runs were then done to determine the impression or bite the type had into the paper, to judge the density of the ink, to check if any of the characters were damaged, and to check the registration of the page. Once all adjustments and corrections were made, it was a repetitive process of inserting paper, rolling the rollers over the bed, and removing the printed piece to dry. After the print run was completed, the type was removed, cleaned, and sorted back into its case, and the rollers were cleaned following a strict process so none of the ink we used was left over for the next print run to pick up.

Other details, like adjusting the impression in the paper, printing multiple colors, using photopolymer plates, and printing on damp paper, were taught in the second and third workshops. Now the challenge is remembering it all and doing it on my own.

Letterpress I, a book of overheard sayings

Letterpress II, posters

Letterpress III, Chapbook of false truths

Letterpress III, Chapbook close up

Here’s a great short documentary on how letterpress printing is done and why it is so appealing:
http://www.youtube.com/watch?v=Iv69kB_e9KY

Here is an interesting description of what a chapbook is, its origins, and where the inspiration for my composition came from:
http://en.wikipedia.org/wiki/Chapbook
http://en.wikipedia.org/wiki/The_Wise_Men_of_Gotham

Here are photos of the prints currently on exhibit at the Center for the Book. After going through all three workshops, I truly appreciate how much of a craft this is and how exceptional these prints are—a must see!
http://www.aardvarkletterpressfinearteditions.com/editions.html

Take the classes and learn the craft of letterpress printing! Students ranged from computer programmers to graphic designers to Tesla engineers!
http://sfcb.org/workshops

Emotional Contagions

Think of the last time you felt moved by a television commercial. Was it the story it told that triggered your emotional response? Was it a song? Perhaps it was just an image of another person showing emotion. Each of these examples has an explanation and a reason for being used in communications — especially advertising.

When a baby is born, it is immediately wired to copy mechanical behaviors. If you smile at a baby, it is likely he/she will smile back. Mirror neurons are responsible for this; a baby, after all, hasn’t really learned yet that a smile represents happiness. Another wired-in behavior is the emotional contagion. This is seen when a baby cries simply because another one is crying. If you put ten babies in a room and provoke one to cry, it is likely that you’ll have a room full of crying babies in no time. It is this emotional contagion that follows us into adulthood.

There is currently one television commercial that seems to trigger an emotional response from me (besides laughter) and I’ve been curious to find out why. It isn’t the sight of another person with tears rolling down his/her face, but a rapid flood of smaller cues that trigger stories I can relate to. The commercial is from Chevron’s Human Energy campaign, which launched in 2007.

In this 30-second ad, a total of 15 clips of candid and seemingly unrelated scenes appear during a voice-over:

“The world is changing and how we use energy today cannot be how we use it tomorrow. There is no one solution. It’s not simply more oil or more renewables or being more efficient. It’s all of it. Our way of life depends on developing all forms of energy and to use less of it. It’s time to put our differences aside. Will you be part of the solution?”

The cast of talent recruited to create this 30-second “rallying cry” included director Lance Acord (cinematographer), British composer Paul Leonard-Morgan, and voice-over narrator Campbell Scott (Damages). The tone of voice, complemented by the gentle piano melody, reinforces the analogy-triggering clips of video, which evoke feelings of chaos and problem-solving and contrast them with family and responsibility. All of this to present a plea for awareness, participation, and cooperation.

Now, if you really want to sob, throw in a curve ball and create a story that has heightened exposure at the same time — an immediate, very visible analogy. A perfect example is another Chevron commercial (aired in 2007) about The Impossible. If you’ve been watching the news over the last month, I guarantee it will leave you with goosebumps. It has convinced me that I need to be part of the ‘solution’.

Hesitation

Measuring hesitation can be valuable. Already, devices running Google’s Android use information from the phone’s GPS to detect traffic speed. The data is then sent to Google Maps and appears as a visual overlay of information — red means there’s a traffic jam. Hesitation can also come in forms that indicate whether problem solving is taking place or doubt exists. I often watch people as they use an app on their mobile device to see if they are in fact saving or killing time.
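As an illustrative sketch of measuring it (TypeScript against standard browser APIs; the event list and callback shape are my own assumptions, not any product’s code), hesitation on a single screen can be approximated as the gap between when a view is shown and the user’s first action:

```typescript
// Measure hesitation as the time from view render to first interaction.
// A long gap can mean reading, problem solving, or doubt; pairing the
// number with the user's eventual action helps tell these apart.
function measureHesitation(onMeasured: (ms: number) => void): void {
  const shownAt = performance.now();
  const events = ["click", "keydown", "touchstart"];

  const handler = (): void => {
    onMeasured(performance.now() - shownAt);
    // Only the first interaction counts, so stop listening afterwards.
    events.forEach((type) => document.removeEventListener(type, handler));
  };

  events.forEach((type) => document.addEventListener(type, handler));
}

measureHesitation((ms) => console.log(`User hesitated for ${ms.toFixed(0)} ms`));
```

Aggregated over many users, unusually long gaps on one screen are a hint of exactly the doubt described above.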

I was among the first consumers to use the Starbucks app, which is basically a digital version of their gift cards. When I used it for the first time, I fumbled through the steps to produce a QR code for the cashier to scan, and I thought “wouldn’t it be faster if I just handed them my plastic card?”. Hesitation can kill an app like this.

So, how do we manage hesitation? We hire user experience designers, cognitive scientists, information architects, talented developers, and visual designers to make a product as intuitive and responsive as possible. The negative effect of hesitation is that it can turn a user away (download times), frustrate (this is taking too long to learn; it isn’t sticking), confuse (I’m lost and have to search for navigation paths), or erode trust (why isn’t this saving?). Hesitation can also be positive, meaning the user is persuaded by the product/service because the content is engaging.

With mobile devices becoming more popular, it will become increasingly important to factor in hesitation times. When sitting at a desktop computer, the user is static and less likely to be confronted with environmental distractions such as moving in a line at a coffee shop or paying attention when crossing the street. This means user testing, like the device, should be mobile.

Tron Returns to the Big Screen

After the impressive success of James Cameron’s Avatar, a pioneer of computer-generated imagery (CGI) returns to the big screen. Tron: Legacy, a sequel to the 1982 cult classic Tron, is due to be released this December. The first fans to see the trailer were, appropriately, at Comic-Con and are already buzzing.

Anyone who has taken a class in 3D animation has likely been lectured about the 1982 breakthrough in CGI known as Tron. Those who haven’t will hopefully be looking it up on YouTube. It is a great way to see just how far computer graphics have come, especially when integrated with live actors.

The story was ahead of its time too. Some may see it as an early Matrix, where the main character is held captive in a digital world after hacking into a large corporation’s master control program. Instead of the coolest part of the film being a character dodging bullets in slow motion, though, it’s a light cycle racing scene. In the original film, the computer animation would only allow the bikes to turn at right angles, so it looked like something from an Atari game. Nevertheless, the scene became legendary.

So how will the new film pay homage to the original’s iconic visuals? Will IMAX 3D finally make it a blockbuster film? Was it the 1982 technology that didn’t quite satisfy the viewer’s expectations or was it the story? What does CGI need to do to be fully persuasive and appreciated by both the animation gurus and the general public?

If done well, it could be a successful sequel like the latest Star Trek film, where computer graphics dazzle and delight us while the characters and story remain true to the culture and emotion of its fans. If done poorly, it could be another flop, but with cooler graphics. Either way, the original will remain a pioneer of computer graphics in film.

View the 2010 trailer

View the 1982 trailer

YouTube Testing HTML5

Google has expanded its adoption of HTML5 by releasing a beta version of YouTube using the new, not-yet-approved markup (Feb 2010). However, the only browsers that support it are Google’s Chrome, Internet Explorer with the Chrome Frame plugin, and, you guessed it if you read our earlier article, Apple’s Safari.

The opt-in page http://www.youtube.com/html5 states that these are the only browsers that support both the HTML5 video tag and the H.264 video codec. In previous posts, the H.264 codec (a form of video compression) was described as the Apple-backed codec that competed with Flash’s video compression. Before the iPhone was released, Apple worked with YouTube to convert most of their Flash videos to H.264 so the iPhone could display them. Today, I am sure the YouTube HTML5 beta is trying to achieve the same thing, but this time for the sake of the iPad.
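Checking whether a given browser meets both requirements is something you can do yourself. Here is a minimal sketch in TypeScript; the video element and canPlayType are standard DOM APIs, though the exact codec string is one common H.264 profile chosen as an assumption, not anything from YouTube’s page:

```typescript
// Test the two requirements the YouTube opt-in page lists: the HTML5
// <video> element and an H.264 decoder. canPlayType answers with
// "", "maybe", or "probably".
function supportsHtml5H264(): boolean {
  const video = document.createElement("video");
  if (typeof video.canPlayType !== "function") {
    return false; // the browser has no HTML5 video element at all
  }
  // "avc1.42E01E" is the H.264 Baseline profile widely used for web
  // video; "mp4a.40.2" is AAC audio.
  const verdict = video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  return verdict === "probably" || verdict === "maybe";
}

console.log(supportsHtml5H264() ? "HTML5 + H.264 available" : "Flash fallback needed");
```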

So, where is Adobe’s voice in all of this? Only a few weeks ago, the company sent out invitations to a webcast announcing Creative Suite 5 (CS5). That same day, Steve Jobs and Google’s CEO Eric Schmidt were seen in a very public setting having coffee on the sidewalk of a cafe in Palo Alto. Bystanders took photos and reported hearing Jobs say “They’re going to see it all eventually so who cares how they get it.” Many suspect this was a PR stunt, but no one seems to know what for. Web content was more than likely the topic of conversation, but was it about HTML5 adoption?

In the meantime, Apple is presenting a gallery of iPad-friendly websites that have Flash content but use HTML5 instead of the Flash plugin to display it. Even though this was always an issue on the iPhone, it is only now that the public is really complaining about it. Designers and programmers are catching on, and major tech publishers like O’Reilly are getting ready to release instructional books on the subject (June 2010).

The iPad is giving us a push to move forward with HTML5, but will Adobe respond? Designers need the tools to keep up with the challenges that tech and new devices present. This includes enabling designers, Flash developers, and web developers to work together using similar tools and languages to create an experience that all end users can connect with. It isn’t Flash or the iPad that is the problem; it is the issue of downloading plugins. HTML5 will solve this, but designers need the ‘go’ signal. Hopefully, that will happen next Monday.

A User Experience in Budapest

Our interaction with computers is rapidly changing as new devices, online applications, and operating systems continue to compete for a place in our daily routine. It is the role of a user experience designer to detect errors, ease the learning curve, and make these new interactions seem effortless and welcomed.

While attending lectures on interaction design and reading articles on user experience, I often hear professionals voice their frustration at the industry’s slow acceptance of user experience design. This is in part due to the profession still being quite young. To help educate the industry on what a user experience designer does, a wealth of videos, books, slide decks, and articles have been created. Many use everyday events to illustrate what user experience is and how important it is to study it, detect the problem, hypothesize a solution, then prototype and test it. Because I believe this is the best way to present new information in an entertaining way, I’m going to use my own experience as an example.

Last August, I had the privilege of traveling around Eastern Europe with my husband. I was fascinated with signage systems and how each city handled communicating to such an international crowd. I kept anticipating communication breakdowns or mishaps during the use of different transit systems. It didn’t happen. The only frustration and complete confusion I experienced occurred in the simplest of places — the changing room at the Széchenyi Fürdő in Budapest.

The grand mineral pool in the city’s main park is a paradise. It is like swimming in the courtyard of a palace. Getting into it is something else. Once you pay your admission, you enter a room full of changing stalls. All the wooden doors were closed, so like a good patron I waited patiently. None opened. Eventually, I tried turning the knobs of some of the doors and finally got into one, only to find another wooden door on the far side of the stall. No locks were on the doors. I recalled being given instructions to exit the stall through the far door, where I would then find a locker to store my belongings in, but this was the only place to get changed into swimwear. No ladies room here. Annoyed but determined, I put a hand on one door and a foot on the other and somehow managed to get changed without someone walking in on me. I pulled open the far door of the stall, entered the locker room, and then exited into the wonderful steaming mineral pools.

After my fill of whirlpools, fountains, water jets, and steaming baths, I returned to the locker area. Then, the stalls. Once the door closed behind me, I thought “this can’t be right — what am I doing wrong?”. And then it occurred to me that both doors swung into the stall (you enter a room by turning a handle and pushing forward, and the reverse when leaving). Then I saw it. Neatly camouflaged against the wall of the stall was a piece of wood with a hinge on it. I pulled it towards me and it fell to a position just long enough to overlap the doors. If someone tried to get in, the bench would barricade the door from opening. Filled with a bit of glee, I got changed and made my way out just in time to hear a woman in the stall next to me say “Qu’est-ce que c’est?” (“What is this?”).

Clever, right? An intuitive and effortless experience? No. This is an example of a poor user experience. Here is what we can learn from it.

1. Highly repetitive actions become invisible (e.g. we turn a handle clockwise and push forward to enter a room without thinking about it). When building a website, keep highly used elements simple and in expected locations. A previous entry on log-in buttons is another example of this.

2. When expectations are not met, problem solving begins (e.g. I expected to find a locking device on the door). This can increase the amount of time a user spends on a page, misleading the information analytics capture. Instead of assuming a page is popular because users spend most of their time on it, consider that it could actually be a problem page because users are spending their time processing an error. Eye tracking tests can help diagnose this, as can instrumentation that separates idle time from engaged time (see the sketch after this list).

3. When there is no aid to assist in our problem solving, we turn to our own past experiences (e.g. I’ve been in a room with no lock before and have used my foot to hold the door closed). This is how we learn. When a person is confronted with a new situation, they compare it to previous experiences and build on it. It is why analogies and metaphors can be so powerful.

4. When a conclusion is settled on, don’t assume the user will be satisfied (e.g. I was damn determined to figure it out). Test, test, test. The user may have figured out how something works, but it doesn’t mean they’ll remember it the next time they visit the site.

5. Reward the user by listening to feedback and responding (e.g. if you’re ever in a Budapest changing room, look for the folding bench).
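As promised in point 2, here is a rough sketch of separating idle time from engaged time (TypeScript with standard DOM APIs; the 5-second idle threshold and the event list are arbitrary assumptions for illustration, and this is only one extra signal, not a full diagnosis):

```typescript
// Raw time-on-page counts a confused, stalled user the same as an
// engaged one. Tracking "active" seconds, where the user recently
// interacted, gives analytics one more signal to tell them apart.
let activeMs = 0;
let lastInteraction = performance.now();

["click", "scroll", "keydown", "mousemove"].forEach((type) =>
  document.addEventListener(type, () => {
    lastInteraction = performance.now();
  })
);

setInterval(() => {
  // Count this second only if the user did something in the last 5 s.
  if (performance.now() - lastInteraction < 5000) {
    activeMs += 1000;
  }
}, 1000);

window.addEventListener("beforeunload", () => {
  // A large gap between total and active time hints at stuck or idle
  // users rather than popular content.
  console.log(`total: ${Math.round(performance.now())} ms, active: ${activeMs} ms`);
});
```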

More reading:

Design and Design Failures

10 Most Common Misconceptions About User Experience Design

Designing for the Digital Age: How to Create Human-Centered Products and Services

Pantone Announces 2010 Color of the Year

Ladies and gentlemen, we have a color for 2010. Last year it was Pantone 14-0848 Mimosa, a bright yellow encouraging a positive outlook in anticipation of a gloomy year. This year, continuing the uplifting colors theme, Pantone has unveiled its color for 2010 — 15-5519 Turquoise. It reflects a serene tropical environment — a place for relaxation and renewal after a stressful year.

HTML5 on the iPad

In last month’s article “How Tablets Could Influence Online Marketing”, the issue of enabling Flash on mobile devices was raised. On January 27th, Apple revealed the much anticipated tablet we now know as the iPad. One disappointment (though expected) was that, like the iPhone, it would not play Flash.

So what’s the big deal? Why do users want Flash on mobile devices and why won’t the providers allow it? For the users, it means having access to sites like Hulu. In Apple’s case it’s business. In order to make large media files, such as video, small enough for reasonable download time, a codec (short for coder/decoder) needs to be used to compress the file. On these mobile devices, Apple only wants you to use theirs.

Microsoft, Apple, and Adobe each developed their own video technologies, which we recognize by their container formats: .avi (Audio Video Interleave), .mov (QuickTime), and .swf or .flv (Flash video). Flash became the preferred format, with large video sites like YouTube using it for their video compression and embedding. It was a good solution during a time when QuickTime only played on Macs and Windows Media on PCs. Apple was persistent and responded by backing the H.264 standard for video compression in its digital video software, such as QuickTime and Final Cut Pro. Before releasing the iPhone, Apple also approached YouTube and had them convert their videos to the H.264 codec so they would play on the iPhone’s OS.

Why so persistent? Jobs explained at an employee meeting following the iPad release that “Apple does not support Flash because it is so buggy. Whenever a Mac crashes more often than not it’s because of Flash. No one will be using Flash. The world is moving to HTML5.” Maybe not the world, but Google did with its Google Voice app, which was rejected by Apple’s App Store late last year. Google fired back by rebuilding it in HTML5, which, conveniently, Apple’s Safari browser has adopted.

What HTML5 (the next revision of HTML) could do for Flash content is exactly what it did for Google Voice: make an otherwise inaccessible media format accessible via the <embed> and <video> tags in browsers. So far, the language has been in development for five years and hasn’t been approved by the World Wide Web Consortium (W3C). According to the W3C timetable, HTML5 is estimated to reach W3C Recommendation by late 2010, though its editors (one from Google and the other from Apple) expect it will be closer to 2012. Until the language is recommended, browsers adopt it, and designers and developers educate themselves on how to work in HTML5, users will continue to complain and business will be lost.

In an interview conducted by Charlie Rose, TechCrunch blogger Michael Arrington shared his thoughts on the strengths of the iPad. He was then asked what he didn’t like about it. Immediately, he responded “I don’t like the fact that it doesn’t allow Flash in a browser…I think that’s a real problem”. It is a problem, but only for as long as it takes websites to adapt. The device is not meant to download plug-ins, so it will not play any media that isn’t prepared with a technology it supports (i.e. H.264). It will, however, allow you to view embedded media. So, instead of relying on users to download the latest Flash plug-in, or hoping they’ll choose to visit your Flash site on a desktop or laptop, consider using HTML5 to embed the media.
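As a minimal sketch of that advice (TypeScript against the standard DOM; the file names and container id are placeholders of my own, not from any real site), a page can serve the H.264 file through the HTML5 video element and fall back to the Flash plugin only when the element is missing:

```typescript
// Prefer the plugin-free HTML5 <video> element and use the Flash
// plugin only as a fallback for older desktop browsers.
// "movie.mp4", "movie.swf", and "player" are placeholder names.
function embedVideo(container: HTMLElement): void {
  const video = document.createElement("video");
  const canPlayMp4 =
    typeof video.canPlayType === "function" &&
    video.canPlayType('video/mp4; codecs="avc1.42E01E"') !== "";

  if (canPlayMp4) {
    video.src = "movie.mp4"; // H.264-encoded file; this path works on the iPad
    video.controls = true;
    video.width = 640;
    container.appendChild(video);
  } else {
    // Legacy path: embed the Flash player via the <embed> tag.
    container.innerHTML =
      '<embed src="movie.swf" type="application/x-shockwave-flash" ' +
      'width="640" height="360">';
  }
}

embedVideo(document.getElementById("player")!);
```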

More reading/viewing:
Charlie Rose interview on the iPad
The iAgency — How the iPad Will Change the Advertising Business

The Future of Web Content — HTML 5, Flash, and Mobile Apps

Apple Shows Off Safari 4’s Pioneering HTML 5 Support
http://www.w3schools.com/html5/html5_reference.asp
http://en.wikipedia.org/wiki/HTML5

Oya Attends the Crunchies Awards

It has become a tradition to welcome the new year by attending the Crunchies Awards hosted by TechCrunch. Clearly competing this year for an audience that might otherwise be at CES, the ceremony welcomed Facebook CEO Mark Zuckerberg, Google’s VP of Engineering Vic Gundotra, Linkin Park vocalist Mike Shinoda (he has an app), Zynga CEO Mark Pincus, the fabulous entertainers the Richter Scales, and many others.

The annual event, featuring on-stage interviews by TechCrunch founder and blogger Michael Arrington, acknowledges the success and efforts of internet and tech industry startups. The award categories include best apps (mobile, social, internet), tech achievement, bootstrapped startup, international startup, design, enterprise, clean tech, PR, VC firm, angels, new gadget, founder, CEO, and best overall startup. Facebook has dominated the CEO, founder, and overall startup categories for the last three years, leaving some questioning what a startup actually means today.

Although the event is in its third year now, the ceremony continues to build a reputation for being disorganized and terribly casual: music abruptly turning on and shutting off, presenters getting lost backstage, dead air, mics not turned on (or off), slides skipping forward to reveal the winner for a split second before it was even announced, titles on the slides that could hardly be read, and so on. Many jabs were taken at these mishaps by presenters, award winners, and even the hosts. While Mike Shinoda (Linkin Park… does anyone else struggle with calling it Linkin and not LinkedIn?) was preparing to announce the winner of the award for best new gadget, he muttered “I have been to a lot of award shows and I just want to give you guys a little kudos for your production value today. I think it should be noted that any one of the nominees here tonight could probably buy the VMAs ten times over. It’s nice to see you guys are keeping it modest”. The crowd’s chuckle confirmed his observation.

Despite desperately needing a stage manager and a presentation designer, the awards ceremony has become an event that the tech industry looks forward to each year. It’s a chance to step out from behind the virtual curtain and mingle with the people who create the gadgets, networks, communities, and internet phenomena. And, if that doesn’t interest you, there’s plenty of food, drinks, music, and photo opportunities at the after party.

See you next year.

Find out who the winners are.

Watch footage from the event.

View photos from the event.

Check out Oya’s photos from the event.