Why the US intelligence community is investing in a global effort to boost analytical thinking by tapping the wisdom of crowds
Published 6 February 2017
Seventeenth-century German polymath Gottfried Leibniz said ruefully that if we had a set of scales to weigh the merits of competing arguments, we would have something more valuable than any miraculous science for making gold.
In what may be one of the largest research efforts aimed at improving human reasoning, the US Government is putting real gold behind a global effort to find Leibniz’s wished-for scales. But it is not looking to artificial intelligence or to highly trained experts. It hopes to find the answer in the crowd – you and me.
“We all know different things, and different people think differently. The challenge is to work out a way to improve reasoning by tapping into the diversity of knowledge and expertise out there,” says University of Melbourne cognitive scientist and philosopher Associate Professor Tim van Gelder. He is a co-leader of one of only four international research teams tasked with developing platforms that enable groups of intelligence analysts to collaborate on complex analyses.
But for Associate Professor van Gelder and his colleagues on the University of Melbourne-led team, the prize isn’t just a tool for intelligence analysts, but a platform that can also help improve reasoning in the public arena. Such a platform, which they are calling an arguwiki, could help nudge our politics away from slanging matches fuelled by biases and fake news towards something more deliberative, where just about anyone could help build a rational consensus on controversial issues.
The US intelligence community’s research arm, the Intelligence Advanced Research Projects Activity (IARPA), has allocated over US$100 million for research into how the wisdom of crowds can be harnessed to improve intelligence analysis. The four-and-a-half-year project has been dubbed CREATE, which stands for Crowdsourcing Evidence, Argumentation, Thinking and Evaluation. Four research teams from around the world have been awarded a share of the funding to develop and road-test solutions.
The University of Melbourne’s team, known as SWARM (Smartly-assembled Wiki-style Argument Marshalling), was awarded up to US$19 million for the project and is now in friendly competition with teams led by Monash University, New York’s Syracuse University and Virginia’s George Mason University.
| Group bias | How it skews our thinking |
|---|---|
| Shared information bias | The social tendency to focus group interactions on shared information rather than information that only one person has. It can result in overweighting shared information that may be irrelevant to the decision at hand. |
| Hidden profiles | When private pieces of information that are critical to the decision remain unshared, sometimes with disastrous consequences. |
| Information cascades | People in a group who are quick to argue for an idea or perspective can influence others, who then question their own competing perspective, often unreasonably, before they make a contribution. |
| Groupthink | Irrational decision making that occurs in groups that lack diverse thinking, are isolated, or are distracted by other decision pressures. |
| Halo effect | When people in a group uncritically accept arguments put forward by charismatic, high-status or convincing people rather than listening to more expert voices. |
The table above lists common pitfalls of thinking in groups. The SWARM project will look to combat these pitfalls by developing a platform that can identify and crowdsource good reasoning.
The SWARM project is a collaboration between the School of BioSciences, the School of Historical and Philosophical Studies and the School of Engineering. It also includes Imperial College London, where co-leader Professor Mark Burgman, the former head of the School of BioSciences, is now Director of the Centre for Environmental Policy. Also involved in SWARM are collaborating researchers from Stanford University in California, as well as Monash.
IARPA was established in 2006 in a bid to rethink intelligence analysis to better head off blunders such as the failure to foresee the 9/11 terrorist attacks on the US and the misreading of Iraq’s capability to deploy weapons of mass destruction that sparked the Second Gulf War. CREATE is therefore primarily focused on developing a system to help intelligence analysis, but Associate Professor van Gelder says similar platforms could be used in any context where groups of people try to reason their way through complex issues.
“We are trying to tackle Leibniz’s philosophical problem – that we don’t have an effective method for weighing up competing sets of arguments – but we are also trying to build the answer into a system that people find easy and attractive to use,” says Associate Professor van Gelder.
“We believe we can build a better tool for intelligence analysts if we can find something that would work even in the public domain, dealing with hotly contested issues such as climate change.”
The SWARM acronym was inspired by the way bees work together to build a beehive from small individual contributions. In the same way, the researchers are aiming to find a way for well-reasoned consensus to be built from crowds of individuals.
“The SWARM approach starts from the observation that many people take readily to arguing in online forums, and to collaboratively drafting documents such as wiki pages and Google docs,” says Associate Professor van Gelder. “Indeed some Wikipedia pages already present reasonably coherent summaries of complex debates.”
In practical terms, SWARM aims to develop new ways to support better-reasoned argument, which can yield better collective results. Such support could include, for example, templates that require arguments to be set out in a reasoned way and that distinguish between contested claims and agreed facts. The platform will be delivered on the cloud by the Melbourne eResearch Group under the leadership of Professor Richard Sinnott.
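As a rough illustration of the idea, the hypothetical sketch below shows how such a template might be represented in software, separating agreed facts from contested claims and flagging claims that still lack supporting reasons. The class names and fields are invented for this example and are not the actual SWARM design.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a structured argument template of the kind the
# article describes. It is not the SWARM schema, just an illustration of how
# a platform might separate agreed facts from contested claims.

@dataclass
class Claim:
    text: str
    contested: bool = True                      # contested claims need supporting reasons
    supporting_reasons: List[str] = field(default_factory=list)

@dataclass
class ArgumentTemplate:
    question: str                               # the analytical question under discussion
    agreed_facts: List[str] = field(default_factory=list)
    claims: List[Claim] = field(default_factory=list)

    def unsupported_claims(self) -> List[Claim]:
        """Return contested claims that have no supporting reasons yet."""
        return [c for c in self.claims if c.contested and not c.supporting_reasons]

# Example usage
template = ArgumentTemplate(
    question="Did the author of the plays attributed to Shakespeare write them?",
    agreed_facts=["The First Folio was published in 1623."],
    claims=[Claim("The plays were written by a committee.")],
)
print(len(template.unsupported_claims()))  # -> 1 claim still needs reasons
```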
SWARM will also explore using algorithms to identify a participant’s “reasoning profile” – a statistical summary of their strengths and biases. This information could then be used to improve the collective outputs.
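The article doesn’t say how such a profile would be computed. A minimal sketch, assuming contributions are peer-rated on a few dimensions of reasoning quality, might simply aggregate those ratings into a per-participant summary, as below. The dimensions, field names and scoring scale are illustrative assumptions, not SWARM’s actual algorithm.

```python
from statistics import mean
from typing import Dict, List

def reasoning_profile(ratings: List[Dict[str, float]]) -> Dict[str, float]:
    """Summarise peer ratings of one participant's contributions.

    Each rating is assumed to hold scores in [0, 1] for how well the
    contribution used evidence, handled counter-arguments, and calibrated
    its confidence.
    """
    if not ratings:
        return {}
    return {
        "evidence_use": mean(r["evidence"] for r in ratings),
        "counterargument_handling": mean(r["counterarguments"] for r in ratings),
        "calibration": mean(r["calibration"] for r in ratings),
        "contributions": float(len(ratings)),
    }

# Example: two peer-rated contributions from one participant
profile = reasoning_profile([
    {"evidence": 0.8, "counterarguments": 0.4, "calibration": 0.7},
    {"evidence": 0.6, "counterarguments": 0.5, "calibration": 0.9},
])
print(profile)
```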
Associate Professor van Gelder says this isn’t about using artificial intelligence to think for us, but using it to support human intelligence. “The software will intelligently leverage people’s ability to produce and to evaluate reasoning. The platform itself won’t be doing any reasoning.”
“Artificial Intelligence (AI) is a big topic at the moment, but what we are focusing on is Intelligence Augmentation, or IA,” says Associate Professor van Gelder. “The most impressive advances being made in AI at the moment are in areas where you have a particular kind of judgement being made and you have a vast number of prior examples and outcomes to generalise from. But we are tackling what humans have to do in a context where you don’t have all that information.”
| Individual bias | How it skews our thinking |
|---|---|
| Confirmation bias | Interpreting all new evidence as confirming what one already believes; discounting evidence that doesn’t fit with one’s beliefs. |
| Availability bias | Giving excess weight to evidence that is easily called to mind, such as what is in the news. For example, overestimating the likelihood of shark attacks because they are widely reported. |
| Overconfidence bias | Having confidence in one’s own judgements that isn’t justified by the facts. |
The table above lists some of the common mistakes we make when we reason and argue that SWARM’s platform could help filter out.
While the Internet has made information more accessible, people still need to be able to make sense of it, says SWARM co-leader Dr Richard de Rozario, who is responsible for the technical side of the platform development.
“We have Big Data, but we don’t have Big Sense,” says Dr de Rozario. “That is what we are trying to find – a way to move towards Big Sense – and one way of doing that is to find the ingredients for helping us reason as a crowd.”
Co-leader Associate Professor Fiona Fidler says our thinking is easily hampered by our own biases, such as the tendency to only believe or seek out evidence that supports our views.
“It is a problem that is compounded by the Internet and social media. People can easily surround themselves with information that reinforces their own biases,” she says.
“People are dealing with arguments where the sources are unclear or suspect, and where language is vague. When someone says something is ‘probable’, what does that actually mean? Does it mean there is a 50 per cent chance of it happening, or an 80 per cent chance?”
Associate Professor Fidler says a key benefit of the kind of platform the team is developing is that it helps source knowledge and ideas from people who are often not consulted, or who are discouraged from contributing.
“When groups of people come together to discuss problems, people can defer too much to the status and influence of some individuals. In addition, some groups, like women and minorities, self-select out of contributing. Crowdsourcing is about better accessing the available diversity in thinking and knowledge.”
The project is now actively recruiting volunteers to participate in its research, which will look at how people work together in groups to solve challenging analytical problems. Participants will have the opportunity to increase their knowledge and awareness of advanced reasoning and analysis skills, and help test and develop the SWARM platform design and configuration. You can register here.
A small pilot arguwiki has already been running on the question of whether Shakespeare really wrote his plays and, if not, who did. Another example of the kind of problem the team will be tackling is what happened to Malaysia Airlines Flight 370, which disappeared in 2014.
Of course, having a better way of deciding on the best rational response to a policy problem doesn’t mean that response will be adopted. Associate Professor van Gelder and his colleagues readily concede that politics will always be a battle of competing interests. But they say injecting even a little more reason into public debates has to be a good thing, and if done right the payoff could benefit everyone.
“We aren’t blind to the deep challenges,” says Associate Professor van Gelder. “People’s positions are deeply grounded in their values and identities, so even having the perfect platform for discovering and promoting the best reasoned position on a question isn’t going to magically resolve our disagreements.”
“But what we do have, is the opportunity to provide a better way for the public to argue about complex issues and to arrive at a considered collective viewpoint. Something like this is needed more urgently than ever.”
Banner Image: iStock