A little bird didn’t tell me
The hack attack on Twitter was aimed at financial gain, but what happens when hacking turns political? Regulation is part of the answer, as is becoming more savvy ourselves
This week’s hack on Twitter came as a surprise to many, but not to all. These attacks are inevitable.
And that is a big problem.
While it appears the point of the attack was simply robbery – an attempt to dupe the followers of the hacked accounts of the rich and famous into sending the scammers Bitcoin – what if the motivation had been more sinister? Political even?
The attack on Twitter can’t be classed as merely a technical vulnerability, it is also the result of human vulnerability. Twitter said it was “a coordinated social engineering attack by people who successfully targeted some of our employees”.
Simply put, someone with administrative rights, either willingly or unwittingly, gave the scammers access.
Social engineering attacks are commonplace, and many of us have witnessed some form of them before. They range from simple scam emails promising riches, to coordinated phishing attacks that send fake SMS messages pretending to be from your bank.
Even where technical systems are gold-standard and locked down more securely than the proverbial Fort Knox, the humans involved in those systems can make them vulnerable.
A common security maxim is that “the human is the weakest link”, as the Twitter hack exemplifies. In the IT landscape, a disgruntled employee, a careless choice of security question, or even a device used in a public space can introduce vulnerabilities.
These risks are exacerbated now that remote working has become the norm during the ongoing global COVID-19 pandemic, which makes security practices more decentralised.
But a hack on a major social media platform is entirely different to criminals hacking individuals’ emails. The stakes are now incredibly high. Social media platforms have become, for many, authoritative sources of news and information – Twitter is the preferred mode of communication for the President of the United States.
At the same time, any social media platform may be hacked, tampered with, or present disinformation.
In 2013, the official Twitter handle of the respected Associated Press news agency was compromised, allowing hackers to tweet out “Breaking: Two Explosions in the White House and Barack Obama is Injured.” The AP quickly corrected the news, warning it had been hacked, but not before the stock market had temporarily fallen by more than $US136.5 billion.
What if, on the eve of an election, a reliable source tweets that a Presidential or Prime Ministerial candidate has committed a crime, or is seriously ill? What if false information about international armed attacks is shared from a reliable news source’s handle? The impacts of such an event could be profound, and more far reaching than just scammed financial losses.
The asymmetrical nature of social networks like Twitter means that, by compromising just a single high-profile account with over a hundred million followers, hackers could theoretically influence more than 1,000 users even if a mere 0.001 per cent of those who saw a malicious tweet believed it or otherwise acted on it. If this were a misinformation campaign, the original owner of the Twitter account could also suffer a blow to their credibility.
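The arithmetic behind that figure can be sketched as follows (a minimal illustration only; the follower count and belief rate are the hypothetical figures discussed above, not measured data):

```python
# Illustrative back-of-the-envelope calculation for the asymmetry argument.
followers = 100_000_000     # a single high-profile account (hypothetical)
belief_rate = 0.001 / 100   # 0.001 per cent, expressed as a fraction

# Even at this vanishingly small rate, the absolute number of
# affected users is large because the audience is so big.
affected = followers * belief_rate
print(round(affected))  # 1000
```

The point of the sketch is that scale, not persuasiveness, does the work: a tiny conversion rate on an enormous audience still yields a four-figure number of misled users.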
Well-documented social media phenomena like algorithmic ‘filter bubbles’ and ‘online echo chambers’ can amplify misinformation and mislead those unfortunate enough to be caught up in it.
Who polices social media platforms remains a perennial question. Although there was much ire about verified accounts being temporarily locked out of the platform during the hack, the real issue is who decides what is censored or shut down, and under what circumstances.
In a twist of irony, Twitter has deleted leaked screenshots of the purported internal administrative tool used to orchestrate the hacks “... and has suspended users who have tweeted them, claiming that the tweets violate its rules”.
Broader questions have already been raised about when Twitter should censor tweets. Only recently, US President Donald Trump’s tweets were labelled ‘potentially misleading’ by Twitter amid pressure from users for the platform to be more transparent.
What is the responsibility of such platforms, and who should govern them, as we become more heavily reliant on the news they mediate? These are important questions that require consistent regulatory frameworks to ensure that existing laws are upheld by enormously powerful digital platforms.
While Australia has no individual piece of legislation dedicated to regulating cyber security, it has a range of laws with similar effect, including those that govern data breaches. Last month the Australian government announced the nation’s cyber defence agency, the Australian Signals Directorate, would increase its funding by $1.3 billion and recruit 500 employees to increase its offensive capabilities.
Hopefully Australia’s forthcoming 2020 Cyber Security Strategy will include strategies and initiatives to support those working on minimising technical and human risks in the cyber context from an offensive perspective, and provide further funding for broader education to increase Australian digital literacy.
In today’s age of social media consumption, avoiding social media altogether is possible, albeit very hard. That means that, to protect themselves from threats – technological and human – social media users need to become more digitally literate.
For starters, best practices for social media and other online accounts should be followed: from using complex, non-reusable passwords, to being wary of the scams that open the door to social engineering attacks.
As for the human aspect, it requires users to be open-minded yet critical in their consumption of news – from refraining from sharing a post until it has been fact-checked, to diversifying their news sources. Philosophers call this epistemically virtuous behaviour. You could also call it just being a good citizen.
An earlier version of this story first appeared on The Conversation.
Banner: Getty Images