Online monitoring raises serious questions about privacy and rights, but where justified it can be used for good if organisations consider wider issues like transparency and fairness
Published 28 October 2021
Pedestrian, a youth-focused online news and entertainment site, recently published an anonymous article about a boss requiring all staff working remotely to leave their cameras on – all day, every day.
And when the boss asked why staff were turning their cameras off, those who did speak up were told to pipe down and put their cameras back on.
This sounds like an extreme example, but the rise of monitoring technologies – drastically accelerated by COVID-19 forcing people to work and study from home – is raising new questions about what is and isn’t appropriate, or even legal, in the push for ever greater productivity or to safeguard the wellbeing of staff.
In the workplace, companies like Sneek and ActivTrak have promised to help employers and their staff by not only facilitating remote meetings and interactions but also monitoring and analysing the hourly and daily productivity of remote workforces.
Another offering, PointGrab, allows for real-time monitoring of office density, with the aim of ensuring that workers adhere to COVID-19 social distancing protocols.
The education sector has also seen a rise in monitoring technologies.
According to US survey research from the Center for Democracy and Technology, 80 per cent of responding teachers and 77 per cent of responding students reported that their school had provided them with devices – like laptops – that contain surveillance software.
Again, this is often said to be justified by the values of the institution – reducing cheating or promoting wellbeing. But is this really the right way to go about it?
The same report showed that some teachers felt this technology helped them to ensure their students were safe at home and coping mentally. However, the students were largely unaware that the surveillance was happening.
Preventing students from cheating or plagiarising has always been a legitimate concern for educators. Traditionally, exam monitoring was conducted in the hallowed halls of examination rooms.
But technology is now being used to monitor students sitting exams in their own homes. Indeed, COVID-19 has caused an explosion in online proctoring platforms. During extended lockdowns, these products were quickly embraced by many institutions and students, and soundly rejected by others.
In our recently published research on exam surveillance, we offer a guide to the sorts of questions and governance procedures institutions need to consider when evaluating whether to use these technologies.
Remote proctoring systems can record online information from student laptops and, using AI-based facial detection, can ‘flag’ apparently suspicious exam behaviour.
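To make that flagging mechanism concrete, here is a minimal sketch in Python using OpenCV’s stock Haar-cascade face detector. It is an illustration only – a simple stand-in for the proprietary models commercial platforms actually use, with an arbitrary one-minute monitoring window – but the underlying logic is the same: frames where no face, or more than one face, is visible get queued for human review.

```python
import time

import cv2

# Stock OpenCV face detector -- a simple stand-in for the proprietary
# models that commercial proctoring platforms actually use.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture(0)  # the candidate's webcam
flags = []                     # (timestamp, reason) events for human review

start = time.time()
while time.time() - start < 60:  # arbitrary one-minute demo window
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        flags.append((time.time(), "no face visible"))
    elif len(faces) > 1:
        flags.append((time.time(), "multiple faces visible"))

capture.release()
print(f"{len(flags)} frames flagged for human review")
```

Even this toy version hints at where unfairness can creep in: poor lighting or a low-quality webcam makes faces harder to detect, so some students will be flagged more often than others through no fault of their own.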
Institutions using these programs will need to consider, for example, whether students without good internet connections or devices will be unfairly disadvantaged, whether the privacy intrusions of exam surveillance outweigh the benefits of remote proctoring, and whether this monitoring is setting a worrying precedent for future intrusions.
And do the institutions using these technologies know how the AI and machine learning components actually work? Is there transparent information available for students who are concerned?
Whenever an organisation considers the use of monitoring technology, it also needs to consider these ethical questions. And depending on the circumstances, these considerations may lead to a decision not to use a given technology, or prompt ideas on alternative ways of achieving the goals of monitoring or surveillance.
It is easy to get tied up in the bad stories about technologies that track and monitor us.
Surveillance technologies can be readily, and often accurately, criticised as invasive, creepy and riddled with power imbalances that compromise the rights and privacy of the surveilled.
On the other hand, the widely adopted QR check-in apps used to better control the COVID-19 pandemic show that when presented with a public interest cause, we are sometimes prepared to be monitored, if only temporarily.
In some circumstances then, some monitoring – done fairly and in good faith – might be justified by the good it produces.
For example, allowing students to sit their exams remotely with the use of privacy-safe exam proctoring technologies could be life-changing for those with disabilities or accessibility issues that make it difficult to attend campus, or those who cannot afford to live near campus.
The same could be said for other technologies. At the Centre for Artificial Intelligence and Digital Ethics (CAIDE), our next project is monitoring the monitoring technologies used in workplaces.
Here again, we see the relevance of context – the problem being addressed, the role the technology plays, and whether there are non-technological or less privacy-invasive alternatives.
For example, using artificial intelligence (AI) to monitor density limits in buildings as part of ongoing COVID-19 management might be efficient and effective. But organisations also need to ask whether this is something that can be done without identifying or collecting data on individual workers.
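As a rough sketch of what that might look like, the snippet below counts people in a camera feed with OpenCV’s built-in pedestrian detector and keeps only the aggregate headcount – the frames themselves, and any identifying detail in them, are discarded immediately. The feed name and capacity limit are hypothetical placeholders.

```python
import cv2

# OpenCV's built-in HOG pedestrian detector -- a simple, widely
# available stand-in for commercial people-counting systems.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

capture = cv2.VideoCapture("office_camera_feed.mp4")  # hypothetical feed
DENSITY_LIMIT = 20  # hypothetical per-floor capacity

while True:
    ok, frame = capture.read()
    if not ok:
        break
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > DENSITY_LIMIT:
        print(f"Over capacity: {len(boxes)} people detected")
    # Crucially, the frame is discarded here -- only the headcount
    # ever leaves the loop, so no individual is identified or stored.

capture.release()
```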
Similarly, applications that monitor the use of hard hats on building sites could be a non-invasive way to keep employees safe and ensure occupational health and safety (OHS) compliance.
But in implementing a system like this, organisations need to go beyond just the end goal and consider the wider impact on the rights and privacy of their workforce.
Considering these wider factors should make it clear that the organisation needs to make all employees aware the technology is being deployed, and to take steps so that compliance is enforced fairly rather than used to target individuals.
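One way to honour that in the design – sketched below under the assumption that some upstream detection model (not shown) supplies per-frame counts of people and hard hats – is to aggregate before anything is stored: the reporting layer only ever sees site-level numbers, never images or identities, so no individual worker can be singled out.

```python
from collections import Counter

def site_compliance_report(detections: list[tuple[int, int]]) -> dict:
    """Summarise hard-hat compliance at site level only.

    Each tuple is (people_detected, hats_detected) for one frame, as
    produced by a hypothetical upstream detection model. No worker
    identities are recorded at any point.
    """
    totals = Counter()
    for people, hats in detections:
        totals["people"] += people
        totals["hats"] += min(hats, people)  # hats can't exceed headcount
    rate = totals["hats"] / totals["people"] if totals["people"] else 1.0
    return {"frames": len(detections), "compliance_rate": round(rate, 2)}

# Hypothetical per-frame counts from a morning shift.
print(site_compliance_report([(5, 5), (6, 4), (5, 5), (7, 6)]))
# -> {'frames': 4, 'compliance_rate': 0.87}
```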
These considerations may also persuade an organisation that it needs to look at a different way of encouraging compliance.
In considering issues of surveillance, it is clear that we need to consider the whole picture. A singular focus on the technology itself without assessing its social context or wider ramifications is too narrow.
Powerful technology can now easily be created and rapidly adopted, but Silicon Valley has shown us that when developers and users operate in a silo, insulated from these wider concerns, they may overlook the impact of technological surveillance on our mental health, on our privacy and even on our ability to be human.
Monitoring and surveillance technology can be a force for good, but only when it is done right, thought through carefully and examined from a variety of perspectives – including the rights of the surveilled.
Banner: Getty Images