Crowdsourcing security intelligence

Inspired by how bees make collective decisions, researchers are exploring how crowdsourcing techniques may help intelligence analysts produce the best-reasoned analysis from the available data

By Associate Professor Tim van Gelder, University of Melbourne


Published 6 June 2018

In a memorable scene in the 2012 thriller Zero Dark Thirty, the CIA Director asks his inner circle for views about the chance that Osama Bin Laden was in a particular compound in Abbottabad, Pakistan.

“100 per cent he’s there. OK fine, 95 per cent, because I know certainty freaks you guys out, but it’s 100,” says the film’s hero, intelligence agent Maya.

Jessica Chastain as Agent Maya in the 2012 film Zero Dark Thirty. Picture: Columbia Pictures

She’s right of course, but in reality, analysts are rarely entitled to such confidence. Typically, they have an overwhelming amount of information, but it is incomplete and much of it is unreliable. They may also be working to tight deadlines in a rapidly evolving, high-stakes situation.

Under such circumstances, nobody can be right all the time. But that is small comfort when the stakes are high and the ramifications international. Who fired a missile or carried out a bombing? These are questions that intelligence agencies can’t afford to get wrong. So, are there ways of ensuring you’re right more often?

Hit rates

Improving intelligence analysis is largely about lifting hit rates – the proportion of judgements that are essentially correct. Doing so is a major focus for IARPA (the Intelligence Advanced Research Projects Activity), the US organisation tasked with bringing the best research to the country’s intelligence community.
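To make the metric concrete, here is a minimal sketch of the calculation. This is my own illustration with invented numbers, not anything drawn from IARPA or the SWARM Project:

```python
# A minimal sketch (not SWARM or IARPA code): a hit rate is simply the
# share of judgements that turned out to be essentially correct.

def hit_rate(judgements, outcomes):
    """Fraction of judgements that matched the eventual outcome."""
    correct = sum(1 for j, o in zip(judgements, outcomes) if j == o)
    return correct / len(judgements)

# Invented example: ten yes/no calls, seven of which proved right.
calls   = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
actuals = [1, 1, 0, 0, 0, 1, 1, 1, 0, 0]
print(f"Hit rate: {hit_rate(calls, actuals):.0%}")  # Hit rate: 70%
```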

It is exploring whether intelligence agencies could effectively “crowdsource” better analytical reasoning. Crowds have already demonstrated exceptional performance in other areas like forecasting, so could they also articulate good analytical reasoning? Can the many and diverse contributions of crowd members be pulled into a single coherent piece of reasoning?

One idea is to have the members collaborate wiki-style, much like on Wikipedia. But this is not very promising, for the same reason that crowds can’t wiki-write novels. Good analytical writing has what the well-known cognitive psychologist Steven Pinker has called “arcs of coherence”. The larger and more complex the arcs, the harder it is for wiki-writing crowds to create and sustain them.

Bees and hives

Another model comes from a rather different source. In Honeybee Democracy, entomologist Thomas Seeley describes how swarms of bees find new homes. When a swarm departs its old hive, the bees first huddle at a temporary location such as a tree branch. Scouts head off in all directions looking for a suitable cavity. When a scout comes across something promising, she returns and advertises her find by waggle dancing on the surface of the huddle. The better the site, the longer the dance.

Other bees see the dance, and some head off to inspect the potential new home for themselves. They return and dance out their own assessment. More and more bees end up promoting the better sites. Eventually one site wins out and becomes the swarm’s collective choice.

Bees collectively assess the best hive locations as intelligence comes back from scouts. Picture: Getty Images

According to Professor Seeley, this quasi-democratic process works remarkably well – swarms select the best site around nine times out of ten.
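The dynamics are simple enough to mimic in a few lines of code. The following toy simulation is my own sketch of the positive-feedback loop Seeley describes, not a model from his book or from any IARPA project; the site names, qualities and bee counts are all invented:

```python
import random

# A toy model of the positive-feedback loop Seeley describes: my own
# sketch, not code from his book or from any IARPA project. Real swarms
# are messier (dancing bees also retire and re-scout).

random.seed(42)

site_quality = {"hollow oak": 0.9, "wall cavity": 0.6, "old crate": 0.3}
supporters = {site: 1 for site in site_quality}  # one initial scout each
idle_bees = 97

while idle_bees > 0:
    # Dance effort per site: supporters dance longer for better sites.
    effort = {s: supporters[s] * site_quality[s] for s in site_quality}
    # One idle bee follows a dance chosen in proportion to its effort,
    # then inspects and supports that site.
    chosen = random.choices(list(effort), weights=list(effort.values()))[0]
    supporters[chosen] += 1
    idle_bees -= 1

winner = max(supporters, key=supporters.get)
print(supporters, "->", winner)
```

Because supporters of the best cavity dance the longest, recruitment compounds in its favour, which is why the swarm tends to converge on the best site.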

The heart of the process is the simultaneous exploration of many options. But this is not how reports are typically produced in intelligence organisations. A common practice is for an individual analyst, or a small team, to produce a draft analysis. This document is then passed to others who suggest revisions.

As a result, the group tends to focus on one document and one analytical approach. It is as if the bee swarm explored only a single potential site, debating at length whether it is good enough and how to improve it, while unwittingly ignoring other sites that might be better.

Contending analyses

One research team funded by IARPA’s crowdsourcing program is the SWARM Project, based at the University of Melbourne and inspired, in part, by the bees.

The SWARM Project is developing a cloud platform on which analysts collaborate to produce a single well-reasoned intelligence-type report. The platform is similar to Google Docs, but is designed to support simultaneous development and evaluation of alternative analyses.

Any member can post a draft analysis, and other members can rate drafts on various criteria. Better drafts float to the top, much as better hive sites attract more bee supporters.

The alternative analyses are in effect contending to be selected by the group as the most promising; hence the approach is called “contending analyses.” The most promising analysis is further developed into a final report, with the advantage that the final report can draw on the good ideas embodied in all the also-ran analyses.
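A minimal sketch of that rate-and-rank mechanic might look like the following. This is my own illustration of the idea as described above, not the SWARM platform’s actual code or data model; the drafts, criteria and scores are invented:

```python
from statistics import mean

# A minimal sketch of the rate-and-rank mechanic described above: my own
# illustration, not the SWARM platform's actual data model. The drafts,
# criteria and scores are invented.

drafts = {
    "Draft A": {"evidence": [4, 5, 4], "logic": [5, 4], "clarity": [3, 4]},
    "Draft B": {"evidence": [3, 2], "logic": [3, 3], "clarity": [5, 4]},
}

def overall(ratings_by_criterion):
    """Average each criterion's ratings, then average across criteria."""
    return mean(mean(scores) for scores in ratings_by_criterion.values())

# Better-rated drafts float to the top; the winner seeds the final report.
ranked = sorted(drafts, key=lambda name: overall(drafts[name]), reverse=True)
for name in ranked:
    print(f"{name}: {overall(drafts[name]):.2f}")
```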

Initial testing indicates that groups on the platform can work well together. We have been recruiting teams of up to thirty people via Facebook and giving them challenging fictional intelligence-type problems, including one that echoes the poisoning of former Russian spy Sergei Skripal and his daughter in March.

These groups have never met, get little training and only interact on the platform using pseudonyms, but often produce very good analytical reports. Indeed, sometimes their work is quite exceptional.

Team challenge

It’s early days, but the secret of success appears to be finding just the right mix of individual work, with its autonomy and ownership, and collaboration, while supporting the emergence of that mix with suitable technology.

In late 2018, the SWARM platform, along with systems from other IARPA-funded teams, is to undergo rigorous, large-scale testing by a US-based independent evaluation team.

Meanwhile, starting in late June, the SWARM Project is running its own test, the 2018 Challenge. Here teams from Australian organisations with intelligence functions will be pitted against teams recruited from the general public. These teams will compete to produce the highest-quality analytical reasoning, and the SWARM researchers will be assessing whether the contending analyses approach can produce substantially better reasoning than normal methods.

We can’t all be 100 per cent certain like Hollywood heroes, but decisions still have to be made based on what we know and don’t know.

Maybe bees know something we don’t.

Starting June 25, SWARM will be conducting a major test of the new system, inviting organisations and individuals to participate in a research exercise in which each team will work on four challenging intelligence-type problems on the new SWARM platform.

Banner Image: Getty Images
