Wednesday, October 15, 2008

User Needs during Social Search

There has been a lot of buzz around social search in the online tech community, but I am largely disappointed by the new tools and services I've encountered. It's not that these sites are unusable, but that they each seem to take on a different conception of what social search is and when/how it will be useful. Have these sites actually studied users doing social search tasks?

Social search may never have one clear, precise definition---and that's fine. However, my instinct is to look at users and their behaviors, goals, and needs before designing technology. Genuinely useful social search facilities may still be some way off (despite the numerous social search sites that advertise themselves as the future of search). First, we need to address some questions, such as:

  1. Where are social interactions useful in the search process?

  2. Why are social interactions useful when they occur?

Study Methods
To answer these questions, Ed Chi & I ran a survey on Mechanical Turk asking 150 users to recount their most recent search experience (also briefly described here and here). We didn't provide grand incentives for completing our survey (merely 20-35 cents), but we structured the survey in a narrative format and figured that most people completed it because it was fun or interesting. (This is a major reason for Turker participation.)

For example, instead of asking a single open-ended question about the search process, we first asked people when the episode occurred, what type of information they were seeking, why they needed it, and what they were doing immediately before they began their search. We then probed for details of the search act itself, along with actions users took after the search. Our 27-question survey was structured in a before-during-after format, primarily to establish a narrative and to collect as much detail as possible about the context and purpose of users' actions.

We collected responses from 150 anonymous, English-speaking users with diverse backgrounds and occupations. In fact, there was so much diversity in our sample that the most highly represented professions were Education (9%) and Financial Services (9%), followed by Healthcare (7%) and Government Agency (6%) positions. We were quite surprised by the range of companies people worked for: from 1-person companies run out of people's homes to LexisNexis, Liberty Mutual, EA Games, and the IRS!

Our data analysis resulted in a model of social search that integrates our findings on the role of social interactions during search with related work on search, information seeking, and information foraging. Without presenting the whole model here, I will highlight the summary points and conclusions from our work. (The full paper is available here.)

Search Motivations
There were two classes of "users" in our sample, whom we named according to their inherent search motivations. The majority of searchers were "self-motivated" (69%): their searches were self-initiated, done for their own personal benefit, or driven by a personal interest in finding the answer to a question. The remaining 31% were "externally motivated": they were performing searches because of a specific request by a boss, customer, or client.

Not surprisingly, a majority (70%) of externally-motivated searchers interacted with others before they executed a search. Because these searches were prompted by other people, they often led to conversations between the searcher and the requester so that the searcher could gather enough information to establish the guidelines for the task. This class of behavior is noteworthy because even though these users engaged in social interactions, they were often required to do so and might not otherwise have had occasion to interact.

Although only 30% of self-motivated searchers interacted with others before they executed a search, their reasons for interacting were more varied. While some still needed to establish search guidelines, others were seeking advice, brainstorming ideas, or collecting search tips (e.g., keywords, URLs, etc.). In many cases, these social interactions were natural extensions of the search process itself; these users were performing self-initiated searches, after all. Again, this is noteworthy because it suggests that self-motivated searchers would be best supported by social search facilities.

Search Acts
Next, we identified three types of search acts: navigational, transactional, and informational. These classifications were based on Broder’s (2002) taxonomy of information needs in web search, and I'm only going to review our users' informational search patterns (searching for information assumed to be present, but otherwise unknown) since it proved to be the most interesting. Informational search is typically an exploratory process, combining foraging and sensemaking. As an example:
An environmental engineer began searching online for a digital schematic of a storm-water pump while simultaneously browsing through printed materials to get "a better idea of what the tool is called." The search was iteratively refined as the engineer encountered new information, first in the printed materials and then on Google, that allowed him to update his representation of the search space, or what might be called a "search schema." He finally discovered a keyword combination that provided the desired results.

Over half of the search experiences in our sample were informational in nature (59.3%), and their associated search behaviors (foraging and sensemaking) led to interactions with others nearly half the time. Furthermore, 61.1% of informational searchers were self-motivated. It appears there is a demand and a desire for social inputs when the search query is undeveloped or poorly specified and the information need is personally relevant.

Post-Search Sharing
Finally, we noticed that, again, nearly half of our users (47.3%) shared information with others following their search. This is not wholly unexpected, but it points to the need for better online organizing and sharing tools, especially ones built into the web browser or search engine itself. More interesting, though, is why people chose to share information.

Externally-motivated searchers almost always shared information out of obligation---to provide information back to the boss or client who requested the search in the first place. Self-motivated searchers, however, often shared information to get feedback, to make sure the information was accurate and valid, or because they thought others would find it interesting.

Summary and Conclusion
In summary, we classified two types of users in our study: externally-prompted searchers and self-motivated searchers. The self-motivated were the most interesting because of their search habits, propensity to seek help from others, and the reasons behind their social exchanges. For this class of users, a majority performed informational, exploratory searches where the search query was ambiguous, unclear, or poorly specified, leading to a need for guidance from others. Their social interactions, therefore, were primarily used to brainstorm, get more information, and further develop their search schema before embarking on their search. Finally, the search process didn't end after these users identified preliminary search results---they often shared their findings out of interest to others, but also to get feedback, validate their results, and contemplate refining and repeating their search.

It is noteworthy that we did not ask users to report social search experiences in the survey. Instead, we asked for their most recent search act, whatever it was, expecting that across all 150 examples we would begin to find generalizable patterns. Indeed, a large majority performed social search acts, but nearly all of the social exchanges happened through real-world interactions, not through online tools. It is no surprise that online tools need to better support social search experiences (our study is further evidence of this). More importantly, our study contributes to a better understanding of user needs during "social" search, which may lead to tools that can identify and support the classes of users and search types best suited for explicit and implicit social support during search.

Finally, in response to the questions I posed at the very beginning:

Where are social interactions useful in the search process?
Before, during, and after a "search act"! Over 2/3 of our sample interacted with others at some point during the course of searching. However, social interactions may not benefit everyone equally---they appear to provide the best support for self-motivated users and users performing informational searches.

Why are social interactions useful when they occur?
It depends! The reasons for engaging with others ranged from a need to establish search guidelines to a need for brainstorming, collecting search tips, seeking advice, getting feedback, and validating search results. Social support during search may be best appreciated and adopted if it directly addresses these types of user needs.

Brynn M. Evans and Ed H. Chi. Towards a Model of Understanding Social Search. In Proceedings of Computer-Supported Cooperative Work (CSCW 2008), San Diego, CA. ACM Press, to appear.


Unknown said...

As the founder of a social search engine, Tusavvy,
I have read the post and full paper a couple of times,
and I found the survey and its design suggestions around the 'canonical social model' very intriguing.

Moreover, I was stunned by the similarity of this line of thought to our beta implementation.

I strongly encourage you to explore our public beta.

It seems to me that our query-construction helper overlaps considerably with the "before search" suggestions.
Moreover, Tusavvy provides a built-in web interface with search results for post-search sharing.

Lastly, note that Tusavvy's search results are drawn from human expertise.

JaeSung Ro | Founder | zSoup

Brynn Evans said...

Hi JaeSung! I haven't heard of Tusavvy but I am checking it out.

It sounds very promising that the suggestions in the paper align with the thoughts on your beta implementation.

I'm glad you found our post and hope we can continue the conversation about social search.

Unknown said...


We're hoping to continue the dialogue as well. In particular, one topic we'd like to discuss, which we don't think was covered in the paper, is how to use the social graph in a demographic sense rather than at the level of individuals.


Ed H. Chi said...


We have been building a social exploration engine that uses social tags, and it was partly informed by the findings in the study. We're encouraged to see that you have a social search engine based on tags, so there is definitely a meeting of minds here. Stay tuned for our release soon.

Unknown said...


There is definitely a meeting of minds and a similar line of thinking here.
Our team was also glad to learn of ASC's work.

Very briefly, from an IR (information retrieval) perspective,
using tags for the index/lexicon was unconventional, and it was a bumpy road over the course of a year-long beta development.
As Mr. Heyman investigated, there is both good and bad news about using them.

A couple of things we have learned:

- Tags alone are insufficient for building a lexicon; for IR purposes, they are messy in raw form, so fairly significant processing is required (cleansing, tokenizing, and so forth).
- Sparsity and scale are both big issues, and mitigating them was challenging.
- Once you scale up and supplement socially annotated data, a lexical approach to tags and their associated URLs no longer suffices; statistical methods are needed.
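As a rough illustration of the raw-tag cleansing and tokenizing steps mentioned above (the tag data, normalization rules, and function names here are invented for the example, not Tusavvy's actual pipeline):

```python
import re
from collections import Counter

def normalize_tag(tag):
    """Lowercase a raw tag, strip stray punctuation, and split
    compound tags like 'social-search' or 'social_search' into tokens."""
    tag = tag.lower().strip()
    tag = re.sub(r"[^a-z0-9\s_-]", "", tag)          # drop punctuation like '!' or '.'
    return [t for t in re.split(r"[\s_-]+", tag) if t]

def build_lexicon(raw_tags, min_count=2):
    """Count normalized tokens and keep only those seen at least
    min_count times, a crude way to mitigate the sparsity of raw tags."""
    counts = Counter(tok for tag in raw_tags for tok in normalize_tag(tag))
    return {tok: n for tok, n in counts.items() if n >= min_count}

raw = ["Social-Search", "social_search!!", "IR", "ir", "web2.0", "toRead"]
print(build_lexicon(raw))  # → {'social': 2, 'search': 2, 'ir': 2}
```

Even this toy version shows why raw tags are messy: variant spellings collapse only after normalization, and low-frequency tokens dominate unless they are filtered out.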

But the fact of the matter is that social search engines can deliver "shared domain knowledge and expertise," as you can try on Tusavvy.
We believe it returns "really unusual, but accurate results."
That is why this space is so intriguing.

If there is a chance to share more findings in a private setting, we're happy to do so.

Looking forward to seeing what you are building.


Anonymous said...

Hi Brynn,

Very interesting search model!

Are you planning more research in this field (social search)?

-- Moti

Ed H. Chi said...

Brynn and I are working on failure cases now; we want to see whether failures have different characteristics.