Jan 31, 2013

Presenting Conference Monitor at Social Informatics 2012

This was the first annual Social Informatics conference, and it was held in DC. I was hoping that my paper would get accepted. I hope the same for every paper and every conference, but this time I was especially optimistic; I had worked a lot on this one. I was very happy when I got the acceptance email, along with very positive reviews from the committee. My flight to Dhaka was right after the last day of the conference, so I was preparing my talk at the same time I was preparing myself for my trip home. I am glad that my colleagues at the HCIL helped me with a practice talk. This is a culture I highly value here: getting feedback from our peers before we actually give the talk at a conference. After the practice talk, I changed my presentation a lot, and later made a demo video of the tool that I presented at the conference. Showing the demo helped a lot: it made clear what my research was and what the tool does. The conference did not have a separate demonstration session, so I showed the demo of Conference Monitor during my talk.

I was overloaded with coffee; the talk was right after the final exam of the Information Visualization course, for which I was the TA. After the final exam, I collected all the exam papers, took the Metro to the conference venue, and had one more hour before my talk. The talk went better than I expected: I received positive feedback on my research topic, and other researchers showed interest in the tool and wanted to use it for their own research. And, above all, no one fell asleep during my talk; what more can you expect? :) I demonstrated how Conference Monitor can be used to analyze and visualize tweets during an event in real time, and how it can identify the influential and active people in the back-channel communication during an event.
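This is not the actual Conference Monitor implementation, but a minimal sketch of one way to separate "active" from "influential" users in an event back-channel, assuming tweets arrive as simple (author, text) pairs: count tweets authored per user for activity, and @-mentions received (including retweets) for influence.

```python
from collections import Counter
import re

def rank_backchannel_users(tweets, top_k=3):
    """Rank event back-channel users two ways.

    tweets: list of (author, text) pairs -- a simplified stand-in for
    real Twitter API payloads (hypothetical input format).
    Returns (most active authors, most mentioned users), each as
    (username, count) pairs.
    """
    activity = Counter()   # tweets authored per user
    influence = Counter()  # @-mentions (including RTs) received per user
    for author, text in tweets:
        activity[author] += 1
        for mention in re.findall(r'@(\w+)', text):
            influence[mention] += 1
    return activity.most_common(top_k), influence.most_common(top_k)

tweets = [
    ("alice", "Great talk! RT @bob: slides are online"),
    ("carol", "@bob loved the demo #socinfo12"),
    ("bob",   "Thanks everyone! #socinfo12"),
    ("alice", "Heading to the next session"),
]
active, influential = rank_backchannel_users(tweets)
# alice authored the most tweets; bob received the most mentions
```

A real tool would of course work on a live stream and use richer signals (retweet graphs, follower counts), but even this split shows why "active" and "influential" are different lists.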

I could not attend the second day because I needed to grade the final exam papers, but at least I could attend the keynote by Noshir Contractor. From the keynote speech and the three sessions I attended, I learned about new techniques and metrics for analyzing social networks and people's influence in a network. Afterwards, I felt that even though many people are using visual analytics for network visualization, their analysis and presentation could be more understandable if they knew more about NetViz Nirvana: you can do a lot just by following some principles for removing node clutter and edge overlaps. You can use visualization to make your presentation look flashy and cool, but that is not visual analytics. You know how important your research is when you see other people not doing it right. And again, as a visual analytics researcher, I wondered: how can we make people realize the importance of using visualization in the RIGHT way?


Jan 30, 2013

Understanding the Cognitive Walkthrough Process

In one of my interviews for a summer research internship, I was asked how I would evaluate my software without running a usability study or getting expert feedback, and when I should be confident that it is ready. Yes, I can always use my own judgement, but that does not count as a scientific method. If I am the only person evaluating a UI, without any user experiment or expert feedback, what method can I rely on? I knew about the Cognitive Walkthrough process for evaluating a user interface without involving users, but had never actually used it on any project. I thought I had better understand it properly this time.

Cognitive Walkthrough:
The objective is to evaluate the usability of a user interface.
It does not require a user study; it can be performed on an early prototype of a tool.
The designer of the interface can perform this evaluation.

But what is this walkthrough? First, look back at the cognitive theory: how do users interact with an interface without prior knowledge of the system and without any learning? They have a particular goal in mind, say, searching for a word in a document. They scan the UI and look for interface elements that might serve the purpose: for example, a button labeled 'Search', or a search option in the right-click menu. Then, if available, they select that button or click the 'Search' option from the menu. What happens after that? If the button is actually for searching words in the document, it will prompt the user to enter the word and then perform the search; after searching, it might show a dialog box with how many times the word appears in the document and highlight the occurrences. On the other hand, that search button may not be for word search at all; it might instead search for other files in the directory. In that case, the user should get feedback from the system that it is a file search input. This process of setting a goal, searching the interface, selecting an interface element, and processing feedback from the interface comprises the cognitive process of a user interacting with an interface.

To evaluate an interface, designers and their peers follow the same model, which we can call a Cognitive Walkthrough. They define a goal, suggest the possible courses of action a user might follow to reach the goal, evaluate the likelihood that a user will select each course of action, and finally evaluate the system's feedback to the user.

The evaluators can be UI experts or the designers themselves. They evaluate:
-whether the user's goal (searching a word) is clear to the users, and whether the user will know what to do to get what they intend,
-the accessibility of the control designed for the goal (is the search button easily visible and accessible?),
-whether the control labels match the goal (does the label or tooltip correctly say that it is for searching words inside the document?), and
-whether the feedback provided by the action is understandable to the users (the button should open an input dialog box for word entry; after searching, it should say that the search is complete, show the count and highlight the words, or say if there is no match).

So finally, a Walkthrough Evaluation Sheet contains the four criteria mentioned above. This evaluation can be done at an early stage of design and development, even using a paper prototype; it can detect early design flaws, and it is cheaper than recruiting users for a usability test. However, it cannot fully replace a usability study: it can miss lots of usability issues due to false assumptions about the users, incomplete description and decomposition of the tasks, and, finally, the fact that the real interface may not be the same as the early prototype.
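The evaluation sheet can be sketched as a small data structure: one row per action step, with a yes/no answer to each of the four questions. This is only an illustrative sketch, not a standard tool; the questions paraphrase the criteria above, and the task and steps in the usage example are hypothetical.

```python
# Hypothetical sketch of a Walkthrough Evaluation Sheet.
QUESTIONS = [
    "Will the user know what to do to achieve the goal?",        # goal clarity
    "Is the control for this step visible and accessible?",      # accessibility
    "Will the user connect the control's label with the goal?",  # labeling
    "Is the system's feedback on the action understandable?",    # feedback
]

def walkthrough(steps):
    """steps: list of (action, answers) pairs, with one boolean answer
    per question. Returns the (action, question) pairs that failed,
    i.e. the usability problems the walkthrough uncovered."""
    problems = []
    for action, answers in steps:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                problems.append((action, question))
    return problems
```

For the word-search task, recording a step like `("Click the 'Search' toolbar button", [True, True, False, True])` would flag the labeling question, telling the designer that the button's label does not clearly promise in-document search.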

Reference: http://www.sigchi.org/chi95/proceedings/tutors/jr_bdy.htm