2012.10.19

Author: 小红猪小分队 (Little Red Pig translation team)

Source:http://www.newscientist.com/article/mg21528821.700-reality-checker-how-to-cut-nonsense-from-the-net.html?full=true&print=true

HOW many times have you believed a lie today? Probably at least once if you’ve read the news: long-running studies show that just over half of US newspaper stories contain at least one error. If you’ve looked up health information, you can probably chalk up another mistake: several surveys have found widespread errors in popular health websites.

Some falsehoods have spread so far and so wide that few people question them anymore. Guess, for instance, which of the following statements are incorrect: Napoleon was short, all bats are blind, you must drink eight glasses of water per day. Answer: all of them. Napoleon was of above average height for his time, many species of bat can see, and healthy people can meet their hydration needs simply by drinking when thirsty.

The notion that everyday life is rife with misinformation is hardly new, but plenty of people are worried that things are getting worse. Thanks to the internet, rumours, inaccuracies and lies have the means to bounce around social networks, blogs and news sites with unprecedented speed – and often with significant consequences. Take one of the biggest political fibs of recent years: the claim that US healthcare reforms would include “death panels”, groups of bureaucrats that would rule on the fate of patients. That suggestion spread quickly online, and corrupted honest debate about the reforms.

It’s no surprise, then, that a number of organisations are fighting back. They are converging on the idea that new technologies can prevent nonsense from spreading on the internet, and by extension through popular discourse itself. Several groups are about to roll out tools designed to flag up errors in any online content, and so nip them in the bud before they spread. One group is taking aim at Twitter. Another has email in its sights. Several want to annotate the whole of the web. When combined, these systems may forge a future in which erroneous material, whether it appears on The New York Times website or a conspiracy-mad blog, comes with a warning sign. “That day is not far away,” says Bill Adair, editor of PolitiFact, an independent fact-checking organisation. The question is: are we ready for a healthy dose of reality?

The idea that our daily information diet contains errors arguably dates back to the arrival of the printing press. And if errors are ancient, so are attempts to combat them.

Yet many people believe that the internet has upped the ante. They argue that, as well as acting as an almighty echo chamber for lies and falsehoods, the internet has given a more powerful voice to those who wish to sow confusion and conspiracy. Meanwhile, trust in mainstream news organisations of all political viewpoints has been declining for more than a decade – in some cases deservedly, in others perhaps less so.

On the flip side, the technology underpinning the internet is also offering the opportunity to tackle misinformation in ways that have not been possible until now.

Pants on fire

The first of these tools has emerged in the political sphere, and builds upon work done by Adair and colleagues at PolitiFact. The organisation employs journalists to scrutinise around 35 political statements a week, awarding each a “Truth-O-Meter” rating from “true” to “pants on fire”. Their reasoning is published, along with links to sources. For instance, PolitiFact has found that, of 400 statements made by Barack Obama, just over 1 in 4 were “mostly false” or worse. For Mitt Romney, Obama’s challenger in the presidential election this November, the figure was over 40 per cent. Between them, the pair have earned 21 “pants on fire” verdicts.

PolitiFact has made waves – it won a 2009 Pulitzer prize and has inspired similar groups in Europe – but its “pants on fire” determinations do no good if they are confined to its website. For a fact-check to change a belief, it needs to be available at the moment the information is consumed. And that is where the new tools come in.

Another initiative, Fact Spreaders, aims to integrate PolitiFact’s checks into Twitter, as well as those performed by FactCheck, a similar US organisation. It begins with software, due to launch later this year, that looks at whether tweets contain a URL linking to information that checkers have flagged as problematic. The team is also experimenting with artificial intelligence that scrutinises the content of tweets, says co-founder Paul Resnick, a computer scientist at the University of Michigan in Ann Arbor. Then volunteers post a Twitter response to the tweet, containing a link to the fact-check. “We need to have facts spread as far as rumour spreads,” says Resnick.
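
To make that first step concrete, here is a minimal sketch in Python of matching tweet URLs against a database of debunked links. The flagged URL, the fact-check address and the function name are invented for illustration; the article does not describe Fact Spreaders’ actual code or data.

```python
import re

# Hypothetical database mapping debunked URLs to fact-check pages
# (placeholder addresses, not real PolitiFact entries).
FLAGGED = {
    "http://example.com/death-panels": "https://factcheck.example.org/death-panels",
}

URL_PATTERN = re.compile(r"https?://\S+")

def fact_checks_for_tweet(tweet_text):
    """Return fact-check links for any flagged URLs found in a tweet."""
    return [FLAGGED[url] for url in URL_PATTERN.findall(tweet_text)
            if url in FLAGGED]

# A volunteer would then reply to the tweet with the returned link.
print(fact_checks_for_tweet("Shocking news! http://example.com/death-panels"))
```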

Others are using the PolitiFact database in different ways. Truth Goggles, a browser extension being developed by Massachusetts Institute of Technology researcher Dan Schultz, alerts a user when they encounter a questionable statement on a web page. Clicking on the statement produces a window that summarises PolitiFact’s result. And LazyTruth, built by Schultz’s colleague Matt Stempeck and others, runs checks on email.
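
The lookup behind such tools can be pictured as a statement on a page or in an email being normalised and matched against a table of verdicts. This is only a sketch under that assumption: the claim and verdict below are placeholders rather than PolitiFact data, and the real tools would need more forgiving phrase matching.

```python
# Placeholder verdicts keyed by normalised claim text.
VERDICTS = {
    "napoleon was short": "False: he was of above average height for his era.",
}

def check_statement(sentence):
    """Return a verdict summary if the sentence matches a known claim."""
    key = sentence.lower().strip(" .!?")
    return VERDICTS.get(key)  # None means no fact-check on record

print(check_statement("Napoleon was short."))
```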

If these services prove popular, they would help to deal with many high-profile political falsehoods that get passed around. But what about all the other errors? It would surely take thousands of checkers to keep up.

The solution may be to recruit that army online. Perhaps the most ambitious attempt to use the crowd to purge the internet of falsehoods is a tool called Hypothes.is, which is due to launch next year. Dan Whaley, the founder of the non-profit organisation in San Francisco that is developing it, decided to act after watching the confusion caused by the debates around healthcare legislation and the banking crisis: “Like the rest of us, I feel the pain of trying to understand what is going on and what information to trust.”

His solution is to build software that would allow the annotation of almost any assertion online, be it a video hosted on foxnews.com or a single sentence in an article at nytimes.com. Whaley calls it “the internet, peer-reviewed”.

This involves building a browser extension that lets people place layers of annotations onto a web page. Crucially, the original publisher cannot hide or delete them. To avoid crowding the screen, Hypothes.is users will see a “heat map”: a thin bar along the side of their screen that directs them to disputed statements. They can then click on the bar to see what annotators have said.
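
One way to picture that annotation layer, as a sketch rather than Hypothes.is’s actual schema, is a record storing the page address, the exact quoted text (so the note can be re-anchored to the disputed statement), and the annotator’s comment:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    page_url: str  # the page being annotated
    quote: str     # the disputed statement, used to re-anchor the note
    note: str      # the annotator's comment, e.g. a link to a source

notes = [
    Annotation("https://example.com/story",
               "all bats are blind",
               "Many bat species can see: https://factcheck.example.org/bats"),
]

# The "heat map" sidebar would count annotations per region of the page.
print(len(notes), "annotation(s) on", notes[0].page_url)
```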

Judged by the mixed quality of comments left after news articles, this might not be appealing. So Whaley’s team is developing a sophisticated ranking system to prioritise insightful annotations. It will feature a voting system that draws on advice gleaned from the likes of reddit, a news-sharing service that makes it easy for users to vote on others’ submissions. The system will also employ “meta-moderation”, a process in which high-ranking users are asked to rate the contributions of others. A favourable rating will increase a user’s reputation score, which in turn will increase the visibility of their annotations. And meta-moderators will be chosen at random, which should help prevent distortion by a group of annotators with an axe to grind.
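
A minimal sketch of how such reputation-weighted ranking could work, with illustrative numbers rather than anything Whaley’s team has published: each vote is scaled by the voter’s reputation, and favourable meta-moderation ratings nudge reputation upwards.

```python
# Hypothetical users and starting reputations.
reputation = {"alice": 2.0, "bob": 1.0}

def annotation_score(votes):
    """votes: list of (user, +1 or -1). Higher-reputation votes count more."""
    return sum(direction * reputation.get(user, 1.0)
               for user, direction in votes)

def meta_moderate(user, favourable):
    """A rating from a randomly chosen meta-moderator adjusts reputation."""
    reputation[user] = reputation.get(user, 1.0) + (0.1 if favourable else -0.1)

print(annotation_score([("alice", +1), ("bob", -1)]))  # 1.0
```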

Chicken-and-egg problem

The beauty of Hypothes.is lies in its flexibility. An annotator can link a political fib to a fact-check at PolitiFact, or a health scare to a page of reputable medical information. In principle, efforts could range from correcting serious scientific misinformation to common myths like the idea that Napoleon was short.

The potential is huge. But Hypothes.is will surely face a recruitment challenge. According to programmer Rob Ennals, it’s the “chicken-and-egg” problem. In 2009, Ennals launched Dispute Finder, a browser extension that linked users to counter-arguments for whatever they were reading. It was intended as a demonstration, and Ennals shut the service down when he took a job at Google. Even so, his experience was enough to convince him of the difficulty of basing such a service on user submissions. Many contributors are needed to build a database of annotations or rebuttals, but users have little incentive to join until that database is already in place.

Still, think of Wikipedia: few people were aware of it when it launched with a small but committed group of contributors in 2001. The site managed to attract enough like-minded folks, and even though most people today have never edited a Wikipedia article, the English-language version alone now boasts more than 4 million entries.

Whaley and colleagues will try to pull off something similar when they launch. Hypothes.is will begin by focusing on a specific type of content, perhaps legislative documents or scientific papers. For science, there is arguably already a demand and, perhaps, a dedicated workforce: a 2006 survey by the Pew Research Center’s Internet and American Life project showed that 80 per cent of people have tried to fact-check scientific information they have read online. If the system attracts informed and intelligent users, everyone will benefit: many sites will come with “fact or fiction” warnings.

Yet hanging over all of the projects is a cynical but important question: do people really want to hear the truth? Pretty much everyone claims that they do, but recent research paints a more depressing picture.

For starters, there is the effect that psychologists call “naive realism”: the mistaken belief that our views are based on a rational analysis of the world. Consider what tends to happen after a controversial call in a football game, when fans of opposing teams often reach opposite conclusions about the decision.

Such disagreements can be shrugged off in sports, but the same mechanism underlies political controversies too. When Bill Clinton was accused of lying to a 1998 investigation into allegations of sexual harassment, some liberals saw a witch hunt. Many conservatives decided he should be impeached. People on both sides of the political spectrum sincerely believed that they were being objective, even though their opinions matched their party allegiance. Psychologists also say that everybody suffers from a “bias blind spot” – an inability to see that their interpretation of facts is inevitably shaped by their own biases and world view.

Still, surely unambiguous corrections help change minds? Several studies suggest that the situation is more complex. In one experiment, political scientists Brendan Nyhan, now at Dartmouth College in Hanover, New Hampshire, and Jason Reifler of Georgia State University, Atlanta, asked students to read news stories wrongly stating that George W. Bush had banned all stem-cell research. Some stories included a correction. Many students who were sympathetic to Bush took the correction on board, but their liberal-leaning peers did not. A second study using a false story about the discovery of weapons of mass destruction in Iraq produced similar results. In fact, reading the correction made some more certain that WMDs had been found.

“It’s threatening to admit that you’re wrong or that your side is wrong,” says Nyhan, who is working with Resnick on Fact Spreaders. “So people think of reasons to disbelieve the information that they are given.”

If Nyhan is right, no amount of fact-checking will change the minds of, say, climate change deniers or anti-vaccination campaigners. That might, however, be the wrong way to look at the potential of the fact-checking services. The battle for hearts and minds of those groups has indeed been lost, but new conspiracy theories will emerge. If services like Fact Spreaders and Hypothes.is are up and running, they could nip those theories in the bud.

Take the example of the “death panel” rumour that almost derailed US healthcare reforms. It began with a political commentator named Betsy McCaughey. She claimed that the bills would “absolutely require” elderly people to attend sessions at which they would learn “how to end their life sooner”. The claim spread widely online before extra fuel was added by talk-show hosts and by Sarah Palin, who asserted on Facebook that it meant that the government would set up death panels to rule on the fate of patients. Could the falsehood have been extinguished if web browsers had highlighted it early, before it reached the mainstream? Possibly.

There is also hope of overcoming people’s resistance to being told they are wrong. Nyhan and Reifler recently reviewed the social science of misinformation and came up with advice on how to maximise the impact of corrections. They recommend not using negations, for example. Saying “John is not a criminal” risks people becoming more familiar with the allegations against John, so “John is exonerated” would be better.

It is also often better to provide a visual correction. Take the rumour that the Obamas had decided to call Christmas trees “holiday trees” and banned religious ornaments from the White House. A written rebuttal might have helped, but a video of the Obamas talking about the religious meaning of Christmas would probably have been more effective.

Such thinking will help shape Fact Spreaders. Resnick says that its volunteers will be advised to try to align themselves with the people they are attempting to correct, perhaps using phrases like “I was shocked by that as well until I found out it wasn’t true”. The idea here, borne out by studies, is that people are more likely to be persuaded if they think the messenger shares their world view.

These advances will not banish lies or convince conspiracy theorists. But we should not expect them to, say the people behind the fact-checking technology. Instead, we should see the services as tools that will strip the junk out of our unhealthy information diets. And if there will always be people who are willing to fabricate statements, there will also always be people who prefer to base their arguments on evidence. They will continue to fight lies where they find them, and the new technology gives them a better means of doing so. “We’re going to have an ongoing arms race,” says Resnick. “It’s up to those of us who care about the truth to participate.”


