AI and humans: collaboration rather than domination
When algorithms make important decisions, we also need to involve humans who understand the context, explains AI and ethics researcher Jeannie Paterson
CHRIS HATZIS
Eavesdrop on Experts - stories of inspiration and insights. It’s where expert types obsess, confess and profess. I’m Chris Hatzis, let’s eavesdrop on experts changing the world - one lecture, one experiment, one interview at a time.
For many of us, Google, Alexa, Siri or our digital assistant of choice is now part of our daily lives. But increasingly, these virtual assistants are keeping us company pretty much all the time. They might be listening to our meetings or noting our everyday routines, and apps like Fitbit are tracking our exercise patterns.
They’re extremely convenient, but there’s often a trade-off in relation to our data, privacy and security. Yet many of us still welcome the convenience, and maybe even the apparent connection, they offer; continuing to use them has the potential to erode our privacy, and even our autonomy.
Jeannie Paterson is Professor of Law and Co-director of the Centre for AI and Digital Ethics at the Melbourne Law School, University of Melbourne. The Centre is dedicated to cross-disciplinary research on AI, with a particular focus on ethics, regulation and the law.
Jeannie Paterson sat down for a Zoom chat with Dr Andi Horvath.
ANDI HORVATH
Jeannie, if I was your taxi driver and you hopped into my cab and I asked you, what do you do, what do you say?
JEANNIE PATERSON
Well, I'd usually start by saying I'm an academic and also a lawyer, and I'm interested in protecting consumers - and, because of the way the world's gone, protecting consumers means thinking about the impact of emerging technologies, including artificial intelligence, on consumers' rights and interests.
ANDI HORVATH
So what's gone wrong with technology and consumers?
JEANNIE PATERSON
Well, I don't think it's so much that it's gone wrong as that consumers don't really know what they're getting, actually. Technology has a lot of potential for improving people's lives - I'm a technology optimist, in fact - and there are some really interesting things that technology could do in terms of including people who are currently marginalised or providing access and equity to people who are otherwise disadvantaged. But the problem is that, at the moment, access to technology is very uneven. Most people don't have enough understanding of the impacts of technology on their lives, or even that it's being used against them. That's what needs to be redressed first and foremost, and then we can think about how we govern and how we distribute this useful new tool.
ANDI HORVATH
So, artificial intelligence is sort of everywhere, it's in our shopping, it's in our communications. It obviously helps us but how does it harm us?
JEANNIE PATERSON
Well, the way it harms us is actually not so much the AI itself; it's the uses to which it's put. So, at the moment, most people would be aware, I think, that their engagements with social media result in information or data being collected about them, which is then somehow compiled and turned into the ads that they see. Now, that seems pretty harmless, but the issue there is this: increasingly, our interactions with the world are being mediated through these digital profiles that are created about us, so we cease to be ourselves - full, rich, interesting humans - and instead become flattened representations of an entity that's been combined and composed by an algorithm. It's that digital identity that determines what we see on the screen: what we see when we're watching Netflix, what we see on Facebook, what we see on Twitter.
Now, a lot of people just go, well, don't engage with social media, but it goes deeper than that. When we're in supermarkets, the supermarkets are keeping track of what type of products we're buying and how much we're spending, and that information can be used to decide what ads are sent to us and even what products cost us. So, in that way you can see we start to lose control of how we interact with the world, and also we have no visibility of who is being left out and who is being disadvantaged.
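To make that flattening concrete, here is a minimal sketch in Python. The events, category names and inference rules are all invented for illustration - real profiling systems are vastly more elaborate - but the shape is the same: a rich history of behaviour is reduced to a few machine-guessed attributes that then stand in for the person.

```python
from collections import Counter

# Hypothetical behavioural events - invented purely for illustration.
events = [
    {"type": "purchase", "category": "baby_food"},
    {"type": "purchase", "category": "baby_food"},
    {"type": "ad_view", "category": "insurance", "seconds": 12},
    {"type": "ad_view", "category": "sports", "seconds": 1},
]

def flatten(events):
    """Reduce a rich event history to a flat profile of inferred traits."""
    categories = Counter(e["category"] for e in events)
    attention = {e["category"]: e["seconds"] for e in events if e["type"] == "ad_view"}
    return {
        # Crude inference rules: the whole person becomes a couple of
        # machine-guessed attributes.
        "likely_new_parent": categories["baby_food"] >= 2,
        "most_responsive_ad_topic": max(attention, key=attention.get),
    }

print(flatten(events))
# {'likely_new_parent': True, 'most_responsive_ad_topic': 'insurance'}
```

It is this flattened dictionary, not the person, that downstream systems then use to decide which ads to show and, potentially, what prices to offer.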
ANDI HORVATH
So, it's not just about nefarious activities, it's about transparency?
JEANNIE PATERSON
That's a really good point, because I think if we watch a lot of media about technology, we might say, oh, there are bad actors, hackers, aggressive governments trying to infiltrate and steal our information, and that probably is true. But it's also that our day-to-day interactions are resulting in us being consistently surveilled without our real understanding, and then that information is used to determine who we are and what we get. So, there's a lack of transparency, but transparency is just the starting point; there's also a lack of accountability. We as consumers and citizens have very little say about how these technologies are used, what control we have over how they're used and what is said about us, in fact.
ANDI HORVATH
Do you recommend limits for data usage? Is that what your group does?
JEANNIE PATERSON
I think limits on data usage are a starting point. So, if you look at what's happening in Europe, for example, which has the strongest data protection in the world, there are lots of controls over when data can be collected, what people need to be told about the data collection and what it can be used for. But the premise of that regime is that people choose: they decide whether they want to share their data, for how long and in what capacity. The problem with relying purely on individual rights to control access to data is this: most people don't care at the moment. Most people are really busy and preoccupied with other things - particularly in this COVID pandemic world - and if we know anything about human decision making, it is that things that are a bit abstract, and risks that sit in the future, get put off; people don't pay attention to them.
So, if we vest control over data in individuals - in individuals' decisions - we end up with a really bad outcome, because individuals aren't going to exercise that control. They don't have the incentive to do it, they don't have the time to do it and they don't have the foresight to do it, so the individualisation of control over data - in my view - is problematic. I think we need to look at the technology and the uses that are being made of that data, and focus on the regulation of that as much as on the data that feeds it.
ANDI HORVATH
Personal assistants like Alexa, Siri, M or Google Assistant are kind of labour-saving devices, but I guess if you put an ethical, political and legal lens to them, it can sort of shine a light on some different things. Jeannie, this morning I asked Siri, should I trust you, and she replied…
JEANNIE PATERSON
[Laughs]
ANDI HORVATH
…by saying [laughs], thank you, your trust means a lot to me, and I thought that was kind of spooky.
JEANNIE PATERSON
It is kind of spooky, and Siri and Alexa and all of these are really good examples of the way we have AI in our homes, and of the reason why, at the centre I work with - that's the Centre for AI and Digital Ethics - we think the focus needs to go beyond just individual rights over data. Digital assistants - AI in the home - are a really good example of this, because increasingly people have various labour-saving smart devices in their homes - Alexa is a good example, so is Siri, but also smart TVs or fridges that know when you're running out of milk - and they seem really helpful, they seem useful.
I can say to Alexa: Alexa, play the song I want; Alexa, tell me what the weather's like; Siri - or Alexa - do you trust me? That's amusing and helpful, and perhaps doesn't seem particularly invasive of our privacy or our autonomy, but if we think about the escalation of these sorts of devices, what we're gradually doing is giving away ourselves. We're giving ourselves away to an automated process that pretends to be our friend. Now, to me that's problematic in all sorts of ways. It's problematic in a legal sense because we don't know what we're getting back - how good are Alexa's or Siri's suggestions? I don't think anybody has looked at that particularly critically. But also, in terms of our interpersonal and human essence, what are we doing if our best friend is really simply an algorithm?
If I can go on here, one of the problems with AI is that because it's called artificial intelligence, we tend to go, oh, it's a human, it's kind of a human, it's evolving, it's like us. It's nothing like us. It is an algorithm that is making predictions about what you might want on the basis of what you've done in the past and what your friends do. There's nothing intuitive or imaginative or creative about that; it's an algorithm. It's a formula that's being deployed by large multinational companies to extract value from us [laughs]. So, I think there are all sorts of issues there, both immediate and existential.
ANDI HORVATH
I hope I've confused the algorithms by suggesting that I like riding motorcycles and that I enjoy looking at pompom dancing routines.
JEANNIE PATERSON
[Laughs]
ANDI HORVATH
Can I beat the system?
JEANNIE PATERSON
[Laughs] Can you beat the system? Lots of people ask this. They go, well, I know that social media is gathering information about me, and I don't really want that information known, so lots of people put a false name into their social media profile, or they don't tell Alexa everything about themselves. But the point about digital profiling is that it's not really looking at what you say, it's looking at what you do. So, what my name is, what my age is and where I live are among the least important things to the collection of data that feeds the AI in our lives. What it's really looking at is who we socialise with, how long we look at particular advertisements, how long we look at YouTube films of pompom dancers - or in my case, dogs running around with cats and horses - and it makes predictions from that sort of activity.
So, the only way to beat the system [laughs] is in fact not to be on the system, because that's the only way you can prevent information being collected about you, which is then used to profile and sell products back to you and otherwise influence your day-to-day behaviour. It doesn't really stop there either, because my interactions on Facebook are interactions with other people, so I'm also providing information about other people. So, you may choose not to be on the net or on social media, but if other people who know you are, you're still having information collected about you, which will inform the kinds of decisions that are made about you in all sorts of ways by algorithmic processes.
It's not just advertising. Increasingly, decisions about access to a variety of services - government services, and also commercial services like insurance, credit and telecommunications - are made on the basis of predictions about your behaviour. There are all sorts of stories about the way that insurance pricing is now determined by your credit score or the friends you associate with. So, you can see that our digital profiles have an impact on not just the advertisements we see but all sorts of other interactions we have.
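As a minimal sketch of that point - profiling keys off what you do, not what you say - consider the following toy scoring function. The feature names and weights are invented for illustration; the takeaway is simply that declared fields like a name never enter the calculation.

```python
# Invented behavioural features and weights, purely for illustration.
BEHAVIOURAL_WEIGHTS = {
    "seconds_on_motorcycle_videos": 0.5,
    "friends_who_ride": 2.0,
    "late_night_sessions": 1.0,
}

def ad_relevance(profile):
    """Score a user for a motorcycle ad using only behavioural signals."""
    # Declared fields such as profile["name"] or profile["stated_age"]
    # never enter the sum - which is why a false name doesn't beat the system.
    return sum(w * profile.get(feature, 0) for feature, w in BEHAVIOURAL_WEIGHTS.items())

user = {
    "name": "A. Nonymous",   # fake name: irrelevant to the score
    "stated_age": 21,        # fake age: also irrelevant
    "seconds_on_motorcycle_videos": 300,
    "friends_who_ride": 4,
}

print(ad_relevance(user))  # 158.0, regardless of what the user claims to be
```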
ANDI HORVATH
I'm getting really spooked now [laughs].
JEANNIE PATERSON
[Laughs]
ANDI HORVATH
Are we going to reach a tipping point where the machines become self-aware and I'm no longer in control of who I think I'm connecting with?
JEANNIE PATERSON
I think we are, but it's not because the machines are self-aware; the machines aren't self-aware. I'll go back to that point I made earlier: the machine is just a set of algorithms. It's just a mathematical process, essentially, that is using computing power to look for correlations and patterns in large amounts of data. That's actually the scary thing, because these correlations, these patterns, these profiles that are created about us may in fact bear very little resemblance to you, the way you understand yourself and the interactions you have with your friends, and yet they become the basis for decisions.
Decisions about the type of advertising you see, decisions about the political advertising you see, decisions about your access to goods and services...
ANDI HORVATH
Doesn't this fly in the face of democracy? Isn't democracy a well-informed society? Isn't that how…
JEANNIE PATERSON
Yes.
ANDI HORVATH
...it works?
JEANNIE PATERSON
That's right. Democracy works best under sunshine [laughs], and now we have a lot of decisions informed by, or made by, processes that we don't understand. So, as I said a little while ago, transparency is the beginning. The first step is for people to understand the degree to which decisions and interactions are now mediated by algorithms, and then to think critically about the kinds of governance structures and controls we want to place on these processes to preserve both our human values and our democratic values.
So, it's not enough to know it's happening - though that's the first step - we also need to think about how we control what's happening, how we make it accountable. So…
ANDI HORVATH
How do we do that?
JEANNIE PATERSON
Well, how do we do that? This is where I think the field of AI ethics - which the Centre for AI and Digital Ethics studies - is really important. A lot of people, when I mention I work in the field of ethics, go, well, that doesn't do much, does it? But here's the thing. The laws we have in society are informed - in a broad sense - by the values of that society, so we need to work out what our values are in relation to these new emerging technologies - these new algorithmic processes - in order to decide what sort of laws we need and how they should apply. Moreover, if we want to change behaviour, we need to have lots of discussions about the ethics of the behaviour, because ethics is really asking people to think: what is the right thing? How should we behave? Just because we can do things doesn't mean we should.
So, the conversation about ethics is key to interrogating the role of algorithms in society but also to thinking about the kinds of limits and controls we want to place on their use.
ANDI HORVATH
As a lawyer, what sort of prosecutions have you seen in this area of AI technology and society?
JEANNIE PATERSON
There's been very little legal action in this field; I think it's going to come. Most of the attention in law has been focused on that first process: how the data is collected. I mentioned earlier that most of the law regulating personal data is premised on ideas of consent - of asking individuals to consent to the use of their data. But increasingly, lawyers are saying that's not enough; we need to think about what is a fair use of the data, and moreover about the kinds of technologies being developed to make use of that data and how we want those technologies to work.
Now, let's not catastrophise here. Some of the uses of the new technologies are fantastic and could really make a change to society. So, AI is currently being used - and increasingly being used - to monitor and predict deforestation, to identify the spread of contagious disease, and to develop new treatments and diagnoses of disease. The police are using AI in ways that are problematic and discriminatory, but it's also being used to profile and identify child sex offenders. So, AI has the potential to help us resolve a lot of the problems we're dealing with in the world today, and we want that to happen, but it can only happen if we have a degree of understanding and insight into how this technology is being used, and really strong legal frameworks for that use.
I'll come back to this idea of accountability. We may remember that in the UK last year, because final-year students couldn't sit their exams, an algorithm was used to predict their results on the basis of what had gone before. Now, the problem there was not merely the use of an algorithm to predict their results, but that there was no accountability or contestability in that process. The prediction was based on the past performance of schools, which meant that a lot of state schools did badly, and a lot of what are called in the UK public schools - but here we would call private schools - did well, because of their past performance.
What that meant - and it sort of summed up a lot of the problems that we feel about AI - was that there was no scope in there for the outliers: for the person who went to a previously poor-performing school but had a fantastic teacher, a fantastic mind and a fantastic cohort, and who, had the exams been sat, was going to do really well. There was no mechanism in that algorithmic process for understanding those sorts of anomalies in the predicted outcome; there was no accountability, there was no contestability. So, if we're going to use algorithms to make important decisions, we also need to have processes involving humans who understand the context, and who understand where - for social reasons or other political or policy reasons - we shouldn't really be relying merely on past performance or past behaviour to make important decisions about the future.
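A deliberately simplified sketch of that failure mode - this is not the actual Ofqual method, and the numbers are hypothetical - shows how anchoring predictions to a school's history erases the outlier:

```python
def predicted_score(teacher_assessment, school_history):
    """Cap a student's assessed score at the school's best historical score."""
    return min(teacher_assessment, max(school_history))

school_history = [52, 58, 61, 55]   # hypothetical past exam scores at a weaker school
brilliant_outlier = 95              # what the teacher believes this student merits

print(predicted_score(brilliant_outlier, school_history))
# 61 - the student is pulled down to the school's historical ceiling,
# and without an appeal or human review step there is no way to contest it.
```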
ANDI HORVATH
What is your perspective on facial recognition? It's really pervasive now.
JEANNIE PATERSON
It's really pervasive, and badly understood. People may now be aware that facial recognition technology is used in some security contexts, to identify wanted persons by some police forces, and also in China in relation to the social credit system. But what people may not realise is that facial recognition technology is also increasingly being used to scan the faces of school children as they arrive at school, instead of taking the roll, and in supermarkets and shopping centres to gauge the mood of customers so that appropriate ads can be beamed at them to persuade them to buy things.
Now, facial recognition technology goes beyond even the data profiling that comes from the use of social media and other internet sites, because it is capturing information about you that is unique to you - your face - which can be used in all sorts of ways subsequently. And yet, currently, facial recognition technology is really inaccurate. This is one of the problems with AI: its capacities, I think, are often overrated. We think it can do things it's simply not capable of, and that's dangerous, because if decisions about people's threat to national security, their emotional states, their mental wellbeing, or whether school children are attentive and engaged in their learning are being made on the basis of a technology that is inherently inaccurate - if not biased - then that's really problematic for our interactions with both public institutions, like schools, and private institutions, like employers.
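A short worked example, using purely illustrative numbers rather than real system figures, shows why inaccuracy is so dangerous at scale: when the people being searched for are rare, even a seemingly accurate system produces mostly false matches.

```python
# Illustrative figures only - not measurements of any real system.
crowd_size = 100_000        # faces scanned at, say, a stadium
wanted = 10                 # genuinely wanted persons in that crowd
true_positive_rate = 0.99   # chance a wanted face is correctly flagged
false_positive_rate = 0.01  # chance an innocent face is wrongly flagged

true_alerts = wanted * true_positive_rate
false_alerts = (crowd_size - wanted) * false_positive_rate

print(f"true alerts:  {true_alerts:.0f}")   # ~10
print(f"false alerts: {false_alerts:.0f}")  # ~1000
print(f"share of alerts that are wrong: {false_alerts / (true_alerts + false_alerts):.0%}")
# ~99% - a '99 per cent accurate' system is still mostly wrong about who it flags
```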
ANDI HORVATH
We almost need to coin a term like AI wellbeing.
JEANNIE PATERSON
I'd go further and say we need to lose the term AI. We need to say predictive technology, or something that really explains that it's a technological, not an emotional, process - therefore underlining, I think, Andi, your point that we need to keep this technology at arm's length. We need to think carefully about where we deploy it. Think of facial recognition technology in schoolyards, for example; do we really want our children to grow up knowing that they are always watched and monitored by a technology they have no control over? I don't think so. I think that stunts their growth as humans.
ANDI HORVATH
Jeannie, how on earth did you end up in this area? A little bird told me you were at first interested in becoming a vet, but it all went on a different path.
JEANNIE PATERSON
Well, actually, it started before then. My father is an electrical engineer. He worked on the Apollo moon landing - at a place called Honeysuckle Creek, which was one of the tracking stations that followed the moon landing - and he was also one of the first people on the street to have a computer. So, of course, as the child of an engineer, I said, I'm not doing that, I'm going to do something else, and the something else was that I was going to be a vet, because I really liked animals. I was kind of like Gerald Durrell - constantly collecting animals and bringing them home to study them. [Laughs] The problem was that when I did work experience with a vet, I found that I could not abide the sight of blood [laughs]. So what do you do if you're a kid who's studied physics, chemistry and biology [laughs] and decides they don't want to pursue that area? You become a lawyer. I don't know how that happened, but there you go.
I've kind of managed to combine my father's interest in technology through the work I'm doing, and very soon I'll be combining my passion for animals and vet science by interviewing some vets who are interested in AI and animals.
ANDI HORVATH
So, how do AI and animals work together?
JEANNIE PATERSON
Well, if you think about AI, one of the things it does is challenge how we understand ourselves. The famous Turing test for recognising general AI was: can it persuade us that we're dealing with a human? And we keep moving the goalposts for how that test would work. It comes down to that fundamental question behind our fascination with artificial intelligence: what does it tell us about ourselves? How do our brains work in comparison to a brain that might be created by a scientist? What is it about being human that makes us different from a machine? Is it our empathy, is it our creativity, is it our selfishness?
Well, all of those questions also apply to animals. Animals, as we are increasingly aware, are thinking, sentient beings with their own objectives, emotions and ways of being. So when we apply AI to animals, what does that tell us about how animals interact with the world, and what does it tell us about our relationship with animals? Should we be using AI to subjugate and control animals, or should we be using AI to understand more about what animals need to operate in this world? One of the great things Melbourne Zoo is doing is using AI to keep their really intelligent animals - their orangutans - entertained. How cool is that?
ANDI HORVATH
[Laughs] I'm blown away.
JEANNIE PATERSON
I know…
ANDI HORVATH
Entertain…
JEANNIE PATERSON
…it's great.
ANDI HORVATH
[Laughs] I think entertaining myself, I could probably learn something from the orangutans.
[Laughter]
JEANNIE PATERSON
Well, AI…
ANDI HORVATH
All this COVID lockdown is not good for us [laughs].
JEANNIE PATERSON
Indeed. But AI works best when it's used in collaboration with humans, rather than to dominate or expel humans from decision-making processes. So, if you think about the promise of AI in medicine, for example, it's not that it will replace doctors, but that it will help doctors do the job that they want to do better. I think that's important to remember as well: the best relationship with AI is one of collaboration rather than domination or control. A lot of these discussions about AI ethics and the laws around AI are trying to work out a process by which we can work with these machines to improve the lives of people and animals, and indeed the planet, rather than end up in some dystopian future where we've lost our innate humanity and are all sitting isolated and alone, with our only interactions being with an in-home device that is pretending to be our friend.
ANDI HORVATH
Next time we see an ad pop up during our internet sessions and we realise, oh yes, I have been looking at one too many crochet sites or one too many dual-clutch transmission motorcycle sites or one too many pompom fitness dance classes, what would you like us to think about?
JEANNIE PATERSON
I'd like you to think about what happens when that same process - shooting ads back to you based on what you looked at previously - is applied to what you read and watch. Think about what it means for our political and democratic processes when what's being shot back to you is news reports, media reports, conspiracy theories or political views based on something you looked at in the past. That's the challenge to democracy and to ourselves.
ANDI HORVATH
Professor Jeannie Paterson, thank you.
JEANNIE PATERSON
Andi, it's been lovely to talk to you.
CHRIS HATZIS
Thank you to Jeannie Paterson, Professor of Law and Co-director of the Centre for AI and Digital Ethics at the Melbourne Law School, University of Melbourne. And thanks to Dr Andi Horvath.
Eavesdrop on Experts - stories of inspiration and insights - was made possible by the University of Melbourne. This episode was recorded on June 16, 2021. You’ll find a full transcript on the Pursuit website. Production, audio engineering and editing by me, Chris Hatzis. Co-production - Silvi Vann-Wall and Dr Andi Horvath. Eavesdrop on Experts is licensed under Creative Commons, Copyright 2021, The University of Melbourne. If you enjoyed this episode, review us on Apple Podcasts and check out the rest of the Eavesdrop episodes in our archive. I’m Chris Hatzis. Join us again next time for another Eavesdrop on Experts.
As consumers and citizens we have very little say about how AI technologies are used, what control we have over their use and what is said about us, says Jeannie Paterson, Professor of Law and Co-director of the Centre for AI and Digital Ethics at the Melbourne Law School, University of Melbourne.
“Technology has a lot of potential for improving people’s lives, in terms of including marginalised people or providing access and equity to people who are otherwise disadvantaged. In fact, I’m a technology optimist,” says Professor Paterson.

She points out that most people would be aware that their social media activity generates information and data about them that is being collected and used to target advertising at them.
“The issue is that our interactions with the world are being mediated through these digital profiles that are created about us, so we cease to be ourselves – full, rich, interesting humans.”
For example, “there are all sorts of stories about the way that insurance pricing is now determined by your credit score or the friends you associate with.”
Professor Paterson explains that when algorithms are used to make important decisions, we also need to have oversight by people who understand the context of the information being used. For example, there may be valid social reasons or other political or policy reasons why we shouldn’t really be relying merely on past performance or past behaviour to make important decisions about the future.
“When the ads that are being shot back to you are news reports, conspiracy theories or political views based on something you looked at in the past, that’s the challenge to democracy and to ourselves,” she says.
“And the promise of AI in medicine, for example, isn’t that it will replace doctors, but that it will help doctors do the job that they want to do better. The best relationship with AI is one of collaboration rather than domination or control.”
Episode recorded: June 16, 2021.
Interviewer: Dr Andi Horvath.
Producer, audio engineer and editor: Chris Hatzis.
Co-producers: Silvi Vann-Wall and Dr Andi Horvath.
Banner: Getty Images
Subscribe to Eavesdrop on Experts through iTunes, SoundCloud or RSS.