Showing posts with label social transparency.

Friday, September 3, 2010

Revamping WikiDashboard

I released WikiDashboard almost three years ago. Believe it or not, the server for WikiDashboard has been running under my desk for three full years (the photo shows the actual server). It was launched in a rush to meet a deadline for an academic paper that we published at a conference (ACM SIGCHI 2008), and it has received only limited maintenance since.

The old Power Mac (http://en.wikipedia.org/wiki/Power_Mac_G5) has been pretty reliable, but it has become increasingly untrustworthy lately. Frustrated with frequent crashes, hangs, and sluggishness, I finally decided to do something. While migrating the tool off the old machine, I've added a few new features. I hope you find them useful.


Faster and more scalable infrastructure
The server is now running on Google App Engine. WikiDashboard is hosted as a web app on the same systems that power Google applications, so it should provide faster, more reliable, and more scalable service. I plan to keep the old server running for a while, but it will eventually forward its traffic to the new server.
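
For anyone curious what that forwarding step looks like: once the migration settles, the old machine only needs to redirect each request to the new host. Below is a minimal sketch of such a forwarder, assuming a plain Python WSGI setup; the hostname is a placeholder, not the actual WikiDashboard address.

```python
# Minimal sketch of a redirect-only forwarder for the old server.
# NEW_HOST is a placeholder; the real App Engine URL is not shown here.
from wsgiref.simple_server import make_server

NEW_HOST = "http://wikidashboard.example.appspot.com"  # hypothetical host

def forward(environ, start_response):
    # Preserve the requested path and query string in the redirect target.
    path = environ.get("PATH_INFO", "/")
    query = environ.get("QUERY_STRING", "")
    target = NEW_HOST + path + ("?" + query if query else "")
    start_response("301 Moved Permanently", [("Location", target)])
    return [b""]

if __name__ == "__main__":
    # Serve redirects on the old machine until DNS can be switched over.
    make_server("", 8080, forward).serve_forever()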

Support for ten more languages
Thank you to everyone who showed interest in having WikiDashboard in your own language!

Bongwon Suh
http://www.parc.com/suh
@billsuh http://twitter.com/billsuh

Monday, June 29, 2009

Live data again: WikiDashboard visualizes the editing patterns of the 'David Rohde' case...

Yesterday, the NYTimes finally broke the silence on the kidnapping of David S. Rohde by the Taliban. It turns out Rohde had escaped, and the news media finally reported the kidnapping because the publicity would no longer be a bargaining chip for his captors. The NYTimes article showed how keeping this news off of Wikipedia would have been nearly impossible without the coordinated effort of several administrators and Jimbo Wales himself.

WikiDashboard visualized this editing pattern directly. In the figure below, I've highlighted the various edit wars between the anonymous editors (97.106.51.95, 97.106.45.230, and 97.106.52.36, which are believed to be the same person) and administrators such as Rjd0060 and MBisanz, as well as the involvement of a robot, XLinkBot. You can also see the surge of attention this article has received in the last week or so in the visualization.


Check out the edit war in detail by reading the article's edit history.

All of this makes for a great way to announce that WikiDashboard now works on live Wikipedia data again, thanks to the heroic efforts of Bongwon Suh in my group, who figured out how to execute his SQL queries quickly on the new DB server.
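
I don't have his actual query to share, but the flavor of the problem is easy to sketch: per-editor activity over time for a single article, computed against the standard MediaWiki schema. Something along these lines, where the connection details are placeholders and the query is an illustration rather than the real WikiDashboard code:

```python
# Illustrative only: per-editor monthly edit counts for one article,
# using standard MediaWiki table/column names (revision, page).
# Connection parameters are placeholders, not the real setup.
import MySQLdb  # provided by the "mysqlclient" driver

QUERY = """
SELECT rev_user_text,
       LEFT(rev_timestamp, 6) AS month,   -- rev_timestamp is YYYYMMDDHHMMSS
       COUNT(*)               AS edits
FROM revision
JOIN page ON rev_page = page_id
WHERE page_namespace = 0 AND page_title = %s
GROUP BY rev_user_text, month
ORDER BY month, edits DESC
"""

conn = MySQLdb.connect(host="localhost", db="enwiki")  # placeholder
cur = conn.cursor()
cur.execute(QUERY, ("David_Rohde",))
for editor, month, edits in cur.fetchall():
    print(editor, month, edits)
```

Making a query like this fast over Wikipedia-scale revision tables is exactly the hard part; the sketch only shows the shape of the computation.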

Thursday, August 14, 2008

Anonymous / pseudonymous edits in Wikipedia: still a good idea?


In the last few months, a somewhat sticky issue around the use of pseudonyms arose on this blog. A writer for the newspaper SF Weekly was being attacked online for writing an article about editor wars, in which she focused on a Wikipedian named "Griot". We blogged about this article, and a flurry of anonymous and pseudonymous comments ensued. I hesitated to step in and censor the comments, since our research group very much believes in "social transparency": presenting all of the information for everyone to see, and letting the social process sort out the truth.

Yesterday, the Electronic Frontier Foundation helped Wikipedia win an important lawsuit, which "found that federal law immunizes the Wikimedia Foundation from liability for statements made by its users." An interesting question is whether this covers _all_ statements, or just some of them. What if someone pretends to be someone else (which happened in the comments section of our blog post)? If I obtained a handle (pseudonym) like BillGates or BarackObama and pretended to be him, could I really say anything I wanted? What about libel, slander, and defamation?

How far does anonymity get us in eliciting all of the material that needs to be said? And how damaging is it to have it as part of Wikipedia? What about the use of pseudonyms? These are interesting research questions. Giant experiments like Citizendium are trying to answer some of them. What about different degrees of pseudonymity, such as non-disposable vs. disposable pseudonyms, or pseudonyms that resolve to a real person and a real name under court order? (Disposable pseudonyms are handles that you can throw away easily and simply replace with a new one; the commenting feature here on blogger.com has this option, for example.)

In the spirit of "social transparency", I believe that disposable pseudonyms can be quite destructive to an online community. When accountability is not maintained, the quality of the material is suspect. "Social transparency" means an increase in accountability; it's a form of reputation system. Some researchers are suggesting that accountable online pseudonyms are the way to deal with these identity problems. SezWho and Disqus are examples of attempts to deal with reputation and identity in the blog-comment space. I think it is inevitable that we will need better reputation and identity systems on the web.

As linked above, a good discussion about pseudonyms can be found in:
Bryan Ford and Jacob Strauss. An Offline Foundation for Online Accountable Pseudonyms. In Proceedings of the First International Workshop on Social Network Systems (SocialNets 2008), Glasgow, Scotland, April 2008.

Friday, October 5, 2007

Social transparency and the quality of co-created contents

How do you measure the accuracy and quality of what people are collectively creating? For example, on Yahoo! Answers, people post questions and tons of people respond. How would you measure the quality of the content?

What’s amazing about this as a research area is that it starts to touch on deep, classic philosophical questions like: What do we know about authority? What does it mean? Where does authority come from? What makes someone trust you? When you ask a question about the quality of any information, you have to answer these questions. Who is the person who wrote it? Why should I trust that person? Just because Encyclopedia Britannica hires a bunch of experts to write for them, why should I believe them? What makes them an authoritative figure on how bees build their beehives? What is it about their authority, just because they’re attached to some higher education institution, that makes you want to believe them more than someone else?

When the Augmented Social Cognition research group tried to answer these questions, we ended up in an internal debate about what we mean by “quality.” And I think we came up with a model for understanding quality. We realized that, in academia, much of authority and the assignment of trust actually comes from transparency. Why should I believe in calculus? Well, because the mathematics is built on a foundation of axioms and rule sets that you can follow, look up, and examine. You trust calculus because there is a transparency built into the system. You can come to your own conclusion about the quality of the information based upon an examination of the facts. This is the scientific method!

What’s interesting is that exactly the same argument is being applied to Wikipedia. It says to you: you should believe in the quality of the information in Wikipedia because it’s transparent. Anyone can look at the editing history and see who has edited an entry, whether they chose to sign their name after it, and what kind of edits they made in other parts of Wikipedia. Everything is transparent and completely traceable; you can examine Wikipedia back to the first word that was written. And Wikipedia is relying on the fact that it’s completely transparent to gain authority. There is nothing opaque about it. I think that’s why Wikipedia has become so successful. It’s because they stumbled upon some of these fundamental design principles and paradigms that make this work. They could have made a design decision that let one examine only the last 50 edits. Wikipedia could have come up with many other design choices that would not make the system completely transparent. Is it an accident that they ended up with a system that can be traced back to the first edits? I think not.
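
That traceability is not just rhetorical: anyone can walk an article's history back to its first revision through the public MediaWiki API. Here is a minimal sketch using the documented action=query / prop=revisions parameters; it streams every revision of an article, oldest first.

```python
# Sketch: stream (user, timestamp) for every revision of a Wikipedia
# article, oldest first, via the public MediaWiki API.
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def revisions(title):
    params = {
        "action": "query", "format": "json",
        "titles": title,
        "prop": "revisions",
        "rvprop": "user|timestamp",
        "rvlimit": "500",
        "rvdir": "newer",      # start from the very first edit
    }
    while True:
        url = API + "?" + urllib.parse.urlencode(params)
        req = urllib.request.Request(
            url, headers={"User-Agent": "history-sketch/0.1"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        page = next(iter(data["query"]["pages"].values()))
        for rev in page.get("revisions", []):
            # Suppressed usernames may be absent from a revision record.
            yield rev.get("user", "(hidden)"), rev["timestamp"]
        if "continue" not in data:   # no more batches: history exhausted
            return
        params.update(data["continue"])  # resume where the last batch ended

for user, ts in revisions("Calculus"):
    print(ts, user)
```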

However, (and that's a big however!), some people still have trouble with the quality of information on Wikipedia even though it’s transparent. Why? One possibility is that they have an all-or-nothing attitude. Well, if one article could be way off, why should I trust another article? They don't, and probably don't want to, examine the history of individual articles before deciding on their individual trustworthiness, perhaps because it's too hard and too time-consuming.

So one hypothesis is that readers don't have the right tools to easily examine and trace back the editing history. That's why the idea of WikiDashboard might be a really powerful way to address these problems. Social dashboards of this kind are visualizations or graphical depictions of editing histories that make it much easier for people to look at the history of an article and make up their own minds about its trustworthiness. The tool will enable us to do fundamental research testing the hypothesis that transparency is what enables trust.
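
To make the dashboard idea concrete, the core computation is a simple aggregation: bucket revisions by editor and week, then draw the counts. Here is a toy sketch with invented timestamps (the editor names echo the Rohde example above); a real dashboard would feed it actual revision data and render graphics rather than text.

```python
# Toy sketch of the aggregation behind a dashboard-style history view:
# bucket revisions by (ISO week, editor) and render a crude text histogram.
# The timestamps below are invented for illustration.
from collections import Counter
from datetime import datetime

sample = [
    ("Rjd0060",       "2009-06-20T14:03:00Z"),
    ("97.106.51.95",  "2009-06-20T15:10:00Z"),
    ("XLinkBot",      "2009-06-20T15:11:00Z"),
    ("97.106.45.230", "2009-06-21T09:42:00Z"),
    ("97.106.45.230", "2009-06-21T09:55:00Z"),
    ("MBisanz",       "2009-06-28T18:30:00Z"),
]

counts = Counter()
for editor, ts in sample:
    year, week, _ = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").isocalendar()
    counts[(f"{year}-W{week:02d}", editor)] += 1

for (week, editor), n in sorted(counts.items()):
    print(f"{week}  {editor:<14} {'#' * n}")
```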

One thing we have done is to actually run some experiments to understand whether people are more willing to believe information when the editing histories and activities are made more transparent. More on that in the next post.