Sorting fact from fiction in a post-truth world
When social media micro-targeting is shaping political views, and ‘alternative facts’ abound, is there any hope for democracy?
Chris Hatzis
Eavesdrop on Experts, a podcast about stories of inspiration and insights. It’s where expert types obsess, confess and profess. I’m Chris Hatzis, let’s eavesdrop on experts changing the world - one lecture, one experiment, one interview at a time.
They say you’re entitled to your own opinion, just not your own facts. Imagine a world that considers knowledge to be 'elitist'. A world in which, for example, it’s not medical knowledge but a free-for-all opinion market on Twitter that determines whether a newly emergent strain of avian flu is really contagious to humans. This dystopian future is still just that – a possible future. However, there are signs that public discourse is evolving in this direction. Terms like 'post-truth' and 'fake news', largely unknown until 2016, have exploded into the media and public discourse.
Professor Stephan Lewandowsky is a cognitive scientist and Chair of Cognitive Psychology at the University of Bristol. His work explores the implications of the growing abundance of misinformation in the public sphere, how it influences people, and how to counter it. He was recently invited to the University of Melbourne to give a free public lecture entitled “'Post-truth' Politics: Democratisation of Information or Gateway to Authoritarianism”. His most recent research examines the potential conflict between human cognition and the physics of the global climate, which has led him into research in climate science and climate modelling.
Our reporter Dr Andi Horvath sat down for a chat with Stephan to discuss his work that examines people's memory, decision making and knowledge structures, with a particular emphasis on how people update information in memory.
Andi Horvath
Professor Stephan, how do you describe what you do when you meet people?
Stephan Lewandowsky
Well, I study fake news and misinformation. I’m concerned with how people respond to information that turns out not to be true later on. So I’m fascinated by how people update their memories and their beliefs.
Andi Horvath
How do you go about studying that because it seems like a complex thing, the human memory?
Stephan Lewandowsky
Yes, it is complex but we actually do understand it quite well, at least in broad strokes. We have a fairly good understanding of it and we gather that understanding by running experiments. So we take people into the laboratory - or, increasingly these days, we recruit them online - and we present them with information to memorise. In my particular case, what I then do is correct some of the information that I present to participants, because I'm interested in how they respond to that. Then sometime later, I ask them questions about what they've read or what they've heard, and I find under most circumstances that people continue to rely on misinformation, even if I tell them that it was false and even if they acknowledge that they've received the correction. So people will say, oh yeah, I know you told me that's false, but then when I ask them to draw inferences about the information, they still use it.
Andi Horvath
So it’s quite difficult for humans to change their mind once it’s set?
Stephan Lewandowsky
Yes, it’s difficult because fundamentally when we listen to information two things happen. The first thing that happens is that we believe it to be true by default. We have a presumption that people tell us the truth and that makes a lot of sense because, in our daily lives, we’re walking around, we’re doing our shopping and meeting our friends and 99.99 per cent of the time, whatever people tell us is basically true. So we have developed that expectation that things are true.
Now, the second thing that happens is that as I’m listening to information, I’m trying to build what we call a mental model of an event. So if you tell me something that is happening, I will incrementally follow what you’re saying, I put a story in my head and I develop a model of the situation.
Now, if I then say that a key element in that story is wrong, that's extremely difficult for you to handle because you've just built this edifice in your mind that tells you something about an event and now all of a sudden, I'm telling you bing - this core element of it is untrue.
Now, in that situation what happens is that people encode the fact that this is untrue and if I ask them about it, they will say, oh yeah, you told me that that particular thing is untrue. But even though they know it’s untrue, they cannot have a big gaping hole in their mental model now. So they’re continuing to rely on the original information when I ask them to draw inferences. So let me give you an example perhaps.
Andi Horvath
Yes, I mean I’m thinking now anti-vaxxers and climate sceptics.
Stephan Lewandowsky
Well, exactly. Well, it leads up to that, but before we even get there, let me give you a simple example. If I tell you a story about a jewellery theft; somebody breaks into a house and steals the jewellery and I’m dropping all sorts of hints here that it might have been an inside job, that the parents were on vacation and the son, who was a drug addict, he always needs money and so maybe it was the son.
Now, if I then tell you, no actually it wasn’t, then what I can show in my experiments is that you still rely on that information that it was the son even though I told you it was false.
However, if I tell you no, it wasn’t the son and we know it wasn’t because we’ve arrested somebody else, the neighbour, then all of a sudden you will say, ah ha and you will change your mental model to accommodate that alternative information.
Andi Horvath
Right, so it’s like I've imprinted the first story I heard.
Stephan Lewandowsky
Exactly.
Andi Horvath
So how do you then rewrite that?
Stephan Lewandowsky
Well, that's the key thing: you have to present an alternative. Instead of just saying, no it wasn't the son, you have to say, no it wasn't the son and we've arrested somebody else, because the moment that's the case, you still have a coherent mental model of the event, except it has now changed. So when we try to correct people, it is always advisable and advantageous if you can present a coherent alternative explanation of the same narrative, the same story.
Andi Horvath
Professor, how did you get into this area? What event sparked your interest in misconceptions in society?
Stephan Lewandowsky
Well, it was sparked in 2003 during the invasion of Iraq, because I was living here in Australia at the time and I was following the news fairly closely, and one of the things that struck me from day one was how many news reports would appear overnight, early in the morning here in Australia, and then a couple of hours later be corrected or withdrawn. So there were all these reports about, oh, a potential chemical weapons factory was found in Iraq and we have preliminary tests confirming the presence of chemical weapons, et cetera, et cetera. Because of course the war was fought ostensibly over these weapons of mass destruction, such as chemical weapons, and then none of them were ever found. So what happened on a daily basis was that you got this initial information that a couple of hours later turned out to be untrue and was withdrawn. Then it said, hey actually no, those weren't chemical weapons, it was an oil spill or something.
That fascinated me at the time because I thought, gee, I wonder what happens to people when they're exposed to that much that turns out to be untrue later on? So I ran an experiment together with colleagues - it was conducted while the initial phase of the war was still ongoing - in three different countries: the United States, Germany and Australia. What we found was quite fascinating: people who were sceptical about the reasons for the war, who thought that it might not have been fought over weapons of mass destruction, were far more able to differentiate between truth and falsehood than people who did believe the official reason for the war.
In fact, in our American sample, what we found was that the people who knew that something had been corrected, who were dead certain that this information was false, if we asked them 30 seconds later whether they still believed it, their answer was yes, I still believe it.
So we again had this dissociation between knowing that something is false and that knowledge not actually affecting your belief about the event in question, and that is because people in America at the time predominantly accepted this narrative of weapons of mass destruction. Therefore, they weren't sceptical enough to consider the alternative, which was that the war was about something completely different and that you wouldn't find any weapons of mass destruction.
Andi Horvath
There’s some relief in deciding that you know something. For instance, there’s a fine line between opinion and knowledge it seems. Is that true?
Stephan Lewandowsky
Absolutely, and it's a line that is becoming increasingly blurred. This is one of the concerns I have at the moment in my research, where I'm dealing with what many people call the post-truth world, post-truth politics, where it appears to be the case that facts no longer matter in some sense.
Just to illustrate, I’m giving a talk later this afternoon and I prepared it this morning by updating the latest statistics on Donald Trump and it turns out that now he is speaking an untruth or something that’s misleading at the rate of about six times a day. He’s been doing this at an increasing rate. He started out at four pieces of misinformation per day; he’s now up to six per day. He’s been doing this ever since he got elected and of course, he did it during the election campaign as well. So we have a president who is clearly habitually misstating the truth, misleading, stating untruths at the rate of six a day.
Andi Horvath
No one fact checks the president?
Stephan Lewandowsky
Well, people do fact check the president, but the problem is that it doesn't seem to make a dent in his approval ratings. If you look at the long-term trend in his approval - and I just did this again recently because I wanted to know; Gallup runs a weekly poll of presidential approval - what you find is that among Republicans, his approval has remained invariant at above 80 per cent, sometimes above 90 per cent, since he became president. Among Democrats, he's down at about 10 per cent. Independents are somewhere in the middle. But the striking thing is that there is absolutely no change, no significant trend, in his approval among his own partisans.
So it appears as though stating all these untruths doesn't make a difference to a politician's survivability and popularity, at least among his own partisans. As it turns out, I published a study last year - it was actually run during the primary campaign - in which we provided experimental evidence for that.
What we did there was to present an online sample of American voters with statements that Donald Trump had made; half of which were true, half of which were false. We asked for people’s belief ratings in these statements. Having done that, we then either corrected the statements that were false or we affirmed the statements that were true.
Now, when you do that, what happens at first glance was very gratifying because we found that regardless of partisanship, people endorsed the true statements more and the false statements less, as you would expect, as you would want people to do after a correction.
Now the problem is that what we also found is that there was no association between the extent to which people changed their beliefs and any change in voting intentions or feelings about Donald Trump. So basically what our study showed is that you can disabuse people of believing certain statements, but it makes absolutely no difference to their voting intentions. We've replicated that since then with a similar but slightly different procedure and again found roughly the same thing, and colleagues of ours have replicated the same phenomenon using a slightly different but very similar procedure. So we now have at least three studies showing that among American voters, the number of false statements that a politician makes simply doesn't matter to how they feel about him. The only thing that mattered was people's prior identification as a Trump supporter or not.
Andi Horvath
These influences or persuaders have the potential to really sway populations into dangerous areas, and again, I want to bring up climate change and anti-vaxxers because these are two areas that are pretty much established by science as having made a difference.
Stephan Lewandowsky
Yes, well there’s no scientific doubt about climate change, nor is there about the safety of vaccinations, yeah.
Andi Horvath
That’s right, they save lives in population senses.
Stephan Lewandowsky
Exactly.
Andi Horvath
So is there a tipping point we should be looking out for?
Stephan Lewandowsky
Well, we certainly have to be on the lookout for a lot of things and let me get back to climate change and vaccinations in a moment. I just want to take a slight detour because just over the last week or so, you may know that Cambridge Analytica has turned into a major scandal which I presume was covered in Australia as well.
So basically what happened there is that evidence has now become incontrovertible, I think, that Cambridge Analytica was using data harvested off Facebook to send messages to voters in the UK during the Brexit referendum and the US during the last presidential election. Now, what does that mean and why, if at all, should we be concerned about that? This is where I think we have to be very careful to look at this and we have to be very concerned about it.
The reason is that there is published research that tells us that if I have access to 300 of your likes on Facebook, then using a computer program I can predict your personality better than your own spouse can. So in other words…
Andi Horvath
I find that frightening.
Stephan Lewandowsky
That's right - well, it is. It has a frightening potential because it means that whatever we do on Facebook is leaving a fingerprint, a digital fingerprint of ourselves, and our personality, and our preferences that can then be known by other people. So we know that access to Facebook data gives us access to a person's personality.
Now, if those data are then passed on, legally or otherwise that remains to be seen, to a private corporation that specialises in persuasion, then this offers an opportunity for micro-targeting which means that they can send out messages that are customised to exploit people’s particular vulnerabilities. Whatever makes you vulnerable or susceptible to persuasion on the basis of your most intimate personality aspects, they can target. Now, when you then get those messages, you have no idea that you’re being targeted by professional manipulators in that manner.
Andi Horvath
This is mind-control.
Stephan Lewandowsky
It is basically getting very close to that. Now, it remains to be seen how much of that took place, it remains to be seen how precise the micro-targeting was - there are a lot of things we don't know. But what we do know from the existing research is the potential: messages can be sent through Facebook that are personalised to target you as an individual, with all your personality characteristics known to this unknown corporation that is targeting you, and you have no idea that that is what's happening.
Now what’s worse is that no one else knows what messages you’re receiving and how you’re being manipulated. So what we’ve now done, if this takes place in the political domain, is to completely undermine the whole notion of democracy which is based on a free marketplace of ideas; where ideas are being debated, challenged, rebutted and exchanged.
Now, none of that can happen if a political opponent doesn’t even know what’s being said about them on Facebook in darkness and so I think what’s happening right now, is that we have a situation where potentially the whole idea of a democratic debate is annihilated and negated by these dark ads that no one knows about.
That to my mind is an absolutely serious problem and we should be desperately concerned about that because you cannot have a political debate if half of it isn’t even known to the political opponent. How can you have a debate with somebody if that other person is talking to voters and you have no idea what they’re saying; all you know is that they’re being targeted by some machine that is exploiting their vulnerabilities. To my mind, that’s unacceptable.
Andi Horvath
Democracy works on an informed society and this is not really being informed, this is being manipulated.
Stephan Lewandowsky
Absolutely and there’s a crucial difference between being manipulated in that manner and having a public debate where politicians can still try to manipulate the public, of course, they do, let’s be honest about it. They’ve been doing this for decades ever since day one of democracy. Politicians aren’t innocent, they never have been and they try their best to put a spin on events. But what we’ve never had before is this escape from scrutiny and from the possibility of rebuttal into this dark underground of having personalised ads that no one else will know even exist. That to me is subverting the idea of democracy and I think we have to take that very, very seriously.
Andi Horvath
Professor Stephan, what advice do you have for Citizen Average that makes sense of the world in this knowledge era?
Stephan Lewandowsky
Well, I think it’s absolutely crucial to be alert and to give up this idea that we all hold that people are telling us the truth. Now, unfortunately, that’s not the case. We have to acknowledge the fact that there are a lot of political operatives out there who are bombarding us with messages on Facebook or Twitter or even in the mainstream media that are plain false.
You mentioned vaccinations and climate change earlier and that is a - especially climate change - is an absolute case in point because the discrepancy between the established science and the agreement in the scientific community about this on the one hand and the public’s perception of an ongoing debate, I mean that discrepancy is just absurd.
I mean, I do a lot of work in climate science and I attend the annual meeting of the American Geophysical Union almost every year. That's attended by 20,000 scientists, it goes for weeks, and there are thousands of talks and posters, and there never is a debate about whether or not climate change exists or whether we cause it - not at all, because there's nothing to debate. The physics is 150 years old, we know exactly that we're changing the climate through CO2 emissions, and to say otherwise is scientifically pathetic, so no one actually does that at a scientific conference, where the debate is much more on the details of how this will unfold and how much damage we have to expect, and so on.
What we see in public discourse by contrast, especially here in Australia unfortunately, is a debate that is drenched in misinformation and I am actually inclined to say disinformation because we know that about a billion dollars is spent every year in the United States on conservative think tanks that are producing various political talking points and among these talking points is the denial of basic physics of climate science.
So you have these pseudo-debates that are taking place here in Australia, in particular, and they're aided and abetted by media organs that are interested either in pursuing an agenda or in stirring up a controversy just to create the appearance of a debate where there, in fact, is none. So yeah, we have to be very careful as consumers of information where we go and what we believe.
Andi Horvath
Is installing some sort of legal framework around the ethics of marketing a possible solution for what is almost a disease?
Stephan Lewandowsky
Yes, well, it's certainly a crisis, I think. Yes and no. I'm inclined to say yes to the idea that we can do something and that we do have the political means and regulatory authority to change the information landscape. So I think it's important to realise that we're not helpless, we are not victims of Facebook or Twitter - quite the contrary; we can actually control them, as Germany, for example, has been demonstrating recently with their new anti-hate speech legislation, which has forced Facebook to hire a large number of moderators who are now working in Germany and stripping hate speech from Facebook content. So it is possible to do this.
So let me say yes first. But now I'm going to say no, because here's the problem: I'm totally against having a ministry of truth or some other outfit that says, hey, this is ethical to say and that isn't. Perhaps we can regulate hate speech - incitement to hatred and so forth - I think we can probably deal with that, come up with a definition of what it might be, and perhaps quarantine it and have regulations for it. But generally, to determine whether something is true or false by fiat or by diktat, by some authority, is to my mind extremely dangerous. So I'm against regulating content.
However, what I think we can do is regulate the architecture in which all of this takes place. I've labelled this Technocognition - it's something I proposed in an article I wrote last year - and the idea is that we look at what makes social media unique from what we had previously, and ask how we might change the architecture to make it more conducive to the emergence of truth as opposed to the proliferation of falsehoods.
So how might that work? Well, let me give you one example that's already been done, and it's my favourite example, from the Norwegian state broadcaster, the equivalent of the ABC in Norway. What they've done is institute a policy whereby if you want to leave a comment on a controversial article on their website, you first have to pass a quiz - a multiple choice quiz to demonstrate that you actually understood the article. If you fail that quiz, sorry, you can't leave a comment. Now, I think that's a wonderful idea because no one is being censored, right - any reader can leave a comment, but they have to demonstrate that they understood what they read.
Andi Horvath
I love that.
Stephan Lewandowsky
Exactly, and then they can comment and they can still say anything they want, perhaps subject to hate crime legislation, that sort of thing. But the point is by just having that architectural little filter in there, you’re eliminating bots because they wouldn’t know how to do that, you’re eliminating people who can’t be bothered to actually read and understand the article and even the people who do understand, you’re forcing them to cool down because it takes a certain amount of time before you get around to posting your comment. If you have a serious comment to make, well then hopefully 30 seconds later after you’ve taken the quiz, your blood pressure is down to the point where you might say something you won’t later on regret.
So that is just one example of how we can deal with the information architecture to improve the quality of discourse without any overt censorship.
Andi Horvath
Professor Stephan, give us some advice. What do you want us to think about next time we see a Facebook ad that we feel a little suss about?
Stephan Lewandowsky
Well, I would say check it out. Where did it come from? Go to snopes.com and see if they've debunked it - I think they're a very reliable website. And there are all sorts of websites out there - Snopes is perhaps one of the better ones - that tell you how to read the media critically and how to analyse sources.
Now, here’s a word of caution about that and this is going back to the whole idea of democracy and what does it mean to have a democracy and how can we preserve it.
One of the dangers I see in all this, and I actually have data to suggest that, one of the dangers is that this constant exposure to misinformation is making people cynical about the very existence of the notion of truth. If you go back in history and if you look at the writing of Hannah Arendt, for example, who’s one of the preeminent philosophers and analysts of Nazi Germany and who’s made some amazing contributions to understanding fascism, one of her big points was that the point of lying in politics isn’t necessarily the lie itself, but it is the creation in the extreme case of an environment where lies have become so commonplace that the population is giving up on the notion of truth to begin with.
I think if we have a president who is lying six times a day, and he has a press secretary who very blatantly talks about alternative facts - we are having alternative facts - well, if you talk about a world like that, then clearly we're not talking about a world where the lie is just serving a purpose. This is, I think, a systematic campaign to erode the notion of truth.
If we look at history and we trust Hannah Arendt as I very much do, then that is a precursor to totalitarianism and authoritarianism because truth is something that keeps the powerful in check. So if you undo the concept of truth, then all you have left are people in power who are just lying the loudest and who can do what they want.
So I think it’s absolutely crucial to maintain the idea that certain things are true and other things are lies. So we have to maintain that distinction and in order to maintain the distinction, however, we also must believe certain things. We cannot just throw up our hands and say, oh I don’t know what truth is, everybody is lying to me therefore nothing is true. Well actually, that’s not the case. Climate scientists are not lying to you; they are publishing peer-reviewed science telling us that climate change is real. The same for the medical research community telling us about vaccinations; they’re not lying to us. They’re doing science, and the same is true for many, many other situations where it is actually possible to do a careful analysis and to figure out what’s true and what’s false.
We must maintain that distinction; we can't just throw up our hands and say, oh well, who cares? I'll just believe whatever I want.
Andi Horvath
Professor Stephan Lewandowsky, thank you for holding power to account and thank you for your amazing insights and truths.
Stephan Lewandowsky
Okay, thanks so much. Been a pleasure.
Chris Hatzis
Thanks to Professor Stephan Lewandowsky, cognitive scientist and Chair of Cognitive Psychology at the University of Bristol. And thanks to our reporter Dr Andi Horvath.
Eavesdrop on Experts - stories of inspiration and insights - was made possible by the University of Melbourne. This episode was recorded on March 26, 2018. You’ll find a full transcript on the Pursuit website.
Audio engineering by yours truly. Co-production by Dr Andi Horvath and Silvi Vann-Wall.
Eavesdrop on Experts is licensed under Creative Commons, Copyright 2018, the University of Melbourne.
If you enjoyed this podcast, drop us a review on iTunes and check out the rest of the episodes in our archive.
I’m Chris Hatzis, producer and editor. Join us again next time for another Eavesdrop on Experts.

Whose fake news?
Cognitive psychologist Professor Stephan Lewandowsky explains why we still believe something to be true, even after we have been told it is not, and why we are all so willing to believe what we read - including fake news.
In our post-truth era he suggests we all need to become a little more cynical, to ward off misinformation and guard against its potential to manipulate.
Episode recorded: 26 March 2018
Producers: Dr Andi Horvath, Chris Hatzis and Silvi Vann-Wall
Audio engineer and editor: Chris Hatzis
Banner image: Getty Images
Subscribe to Eavesdrop on Experts through iTunes, SoundCloud or RSS.