Paper audits crucial for automated counting

Lessons from the iVote report: why scrutineers must be able to audit the Senate ballots

By Dr Vanessa Teague, Department of Computing and Information Systems, University of Melbourne

Published 2 August 2016

Australia’s paper-based elections have built up trust over generations through the deployment of carefully considered processes. The most crucial of these is candidate-appointed scrutineering. The ability to observe and scrutinise the process provides critical protection against errors and malfeasance.

The Senate count is partially automated, which is a good thing, but the evidence of its accuracy must be able to be checked by human scrutineers. This is more important than ever because the new Senate voting rules mean handwritten preferences will make a difference.

This article has been co-authored with Dr Michelle Blom, Dr Andrew Conway, Dr Chris Culnane (all University of Melbourne), Professor Rajeev Goré (Australian National University), Dr Robert Merkel (Monash University), Professor Ron Rivest (MIT), Professor Philip Stark (UC Berkeley) and Dr Damjan Vukcevic (University of Melbourne).

We agree that “it is essential that a random sample of the paper Senate ballots be checked against the final published electronic votes used for counting”.

Australian Electoral Commission workers emptying Senate ballot boxes to commence the count. Picture: aec.gov.au

Computers don’t remove the need for scrutiny

Election software in Australia has had procedural errors, software bugs and security problems that have affected running systems and (probabilistically) at least one election outcome. (Physical ballot security is also important of course.)

The NSW Electoral Commission recently released PwC’s report into the 2015 iVote internet voting project, which fed 280,000 votes into the state election tally. Voters could vote from home or from a polling place over the internet. Since meaningful candidate-appointed scrutineering was impossible, the auditing task was entrusted to PwC. Their report was released last month, after a delay of more than a year; it is now too late for any unsuccessful candidate to challenge the accuracy of the outcome.

PwC observed a series of testing and certification processes, culminating in a “lockdown” of the software prior to the election run, which they summarise: “Review of the lockdown process ... Prior to live voting NSWEC (New South Wales Electoral Commission) were required to secure iVote to prevent unauthorised access to the system, as this could invalidate or impact the integrity of the voting process.”

There are a number of concerning aspects of the report:

a) There was an unplanned software update two weeks into early polling, the day before election day: “PwC was present on 27/03/2015 to observe the unlocking of the system, due to a change that was required” and “Lockdown was removed for Scytl to make performance improvements to the CVS database”.

The CVS database is the Core Voting System database, the equivalent of an electronic ballot box holding all the iVotes. An update to that database is the electronic equivalent of undoing the seals on that ballot box and rearranging the ballots, an action we would not expect a contractor to perform on paper ballots except under careful observation by scrutineers.

We have argued elsewhere that private certification and testing is no substitute for public scrutiny. We learn now that even that process was bypassed. We don’t see how that unplanned software update could possibly have been subjected to a rigorous testing and certification process while the system was running. And if the software had been thoroughly tested and certified in advance, why did it need to be updated?

b) Even more concerning is a one-line remark about the verification process, “Fix signature file, which was preventing verification”. So how many verifications did the un-fixed signature file prevent? When was it fixed?

c) Perhaps the strangest thing about PwC’s report is that it doesn’t say, of all those people who tried to verify their vote, what fraction failed.

Senate ballot papers in New South Wales are secured. Picture: aec.gov.au

How to scrutinise the Senate count

The lesson for the Senate count is clear: the integrity of the election result must depend, not on the perfect security and accuracy of the software, but on Australian scrutineers watching a careful audit of the paper evidence of how Australians voted.

So how can we do that?

The preferences will be manually entered in parallel with the automated digitising system, and scrutineers will be able to compare a digital image of the ballot with the interpreted preferences displayed on the screen. This might be a good way of checking for accidental scanning errors or character recognition problems. However, if there’s a software bug, database problem, procedural error, or security breach, those numbers that the scrutineers see on the screen might not match the paper votes or the final output.

There are two quite separate steps: a digitising step, in which the paper ballots are converted into an electronic list of preferences, and a counting step, in which those preferences are turned into a list of winning senators.

The counting step is easy to check: download one of the open source Senate vote counting applications available online and recount the published preferences for yourself. Of course, that code too might be buggy, but at least you can read it or reimplement your own. (The official code is a trade secret, despite the Senate itself having passed a motion requesting its publication.)
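As a starting point, a scrutineer with basic programming skills could re-tally first preferences from the published preference data before moving on to a full recount with one of those open source tools. The sketch below is purely illustrative: the file name, column layout and formality rule are assumptions rather than the AEC’s actual published format, and a genuine recount must apply the full Senate counting rules set out in the Electoral Act.

```python
# A minimal sketch, not the AEC's actual data format or the full STV count.
# It assumes a hypothetical CSV where each row is one ballot and each column
# holds the preference number written against that candidate (blank = none).
import csv
from collections import Counter

def first_preference_tally(path):
    """Tally first preferences from a (hypothetical) ballot-level CSV."""
    tallies = Counter()
    with open(path, newline="") as f:
        reader = csv.DictReader(f)          # header row names the candidates
        for ballot in reader:
            # Find the candidate marked '1' on this ballot, if any.
            firsts = [cand for cand, pref in ballot.items()
                      if pref and pref.strip() == "1"]
            if len(firsts) == 1:            # exactly one first preference
                tallies[firsts[0]] += 1
    return tallies

if __name__ == "__main__":
    for candidate, votes in first_preference_tally("preferences.csv").most_common():
        print(f"{candidate}: {votes}")
```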

The digitising step is also easy to check: the Australian Electoral Commission should, in the presence of scrutineers, perform a rigorous random audit of the paper evidence to check that it matches the preference data file. Some ideas:

Choose a reasonable sample size and do an audit. This at least gives us an estimate of the error rate (a minimal sketch of this idea appears after this list).

Perform a Bayesian Audit to see if any small samples of the paper ballots produce a different election outcome. Perhaps extend this by looking at individual ballots and keeping track of discrepancies between the data file and the paper.

Try to compute a bound on the number of errors that could change the election outcome, then use a Risk Limiting Audit against that possibility.

We could have several teams each doing a different analysis of the same audit data.
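To make the first of these ideas concrete, here is a minimal sketch of a ballot-level comparison audit: draw a publicly seeded uniform random sample of ballot identifiers, have scrutineers compare each sampled paper ballot against the published preference data file, and report the observed discrepancy rate together with an exact one-sided (Clopper-Pearson) upper confidence bound. The ballot identifiers, sample size and compare_to_paper check are hypothetical placeholders, not AEC procedure; a Bayesian or risk-limiting audit would build on the same sampled data.

```python
# Illustrative sketch of a simple random-sample comparison audit.
import random
from scipy.stats import beta

def sample_ballot_ids(all_ids, n, seed):
    """Publicly seeded uniform random sample of n ballot identifiers."""
    rng = random.Random(seed)   # seed should be generated publicly, e.g. dice rolls
    return rng.sample(list(all_ids), n)

def audit(sampled_ids, compare_to_paper, alpha=0.05):
    """Count discrepancies and bound the true error rate at confidence 1 - alpha."""
    n = len(sampled_ids)
    k = sum(0 if compare_to_paper(b) else 1 for b in sampled_ids)
    # Exact Clopper-Pearson upper bound on the ballot-level discrepancy rate.
    upper = 1.0 if k == n else beta.ppf(1 - alpha, k + 1, n - k)
    return k, k / n, upper

if __name__ == "__main__":
    ids = range(100_000)        # placeholder universe of ballot identifiers
    sample = sample_ballot_ids(ids, 1_000, seed="publicly generated dice rolls")
    # compare_to_paper would be a human scrutineer's check; stubbed as always-matching here.
    k, rate, upper = audit(sample, compare_to_paper=lambda ballot_id: True)
    print(f"{k} discrepancies; observed rate {rate:.4f}, 95% upper bound {upper:.4f}")
```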

Conclusion

Scrutineers must have a chance to verify that the election outcome was right. A simple statistical audit of the paper evidence will greatly lower the chance that bugs or security problems give us an unnoticed wrong Senate outcome.

Banner Image: Polling officials and scrutineers counting and rechecking ballot papers from the WA Senate re-run at an AEC counting centre at Belmont, Perth, in 2014. Picture: Richard Wainwright/AAP
