Sciences & Technology
We need to keep Big Tech in check
Geospatial AI could transform healthcare and disaster management, but we need comprehensive guidelines and laws to mitigate misinformation and safeguard users
Published 25 September 2024
The combination of artificial intelligence (AI) and location intelligence has given rise to a revolutionary technology known as geospatial artificial intelligence or GeoAI.
The technology brings together AI techniques like machine learning and deep learning with geospatial technologies including Geographic Information Systems (GIS), remote sensing and spatial data analysis.
GeoAI is transforming decision-making across various sectors including urban planning, disaster management, healthcare and defence by offering innovative solutions to complex spatial challenges.
For instance, in disaster management, GeoAI enables more timely damage assessment and enhances disaster response and defence operations through improved situational awareness and predictive modelling.
In healthcare, GeoAI tracks population trends and disease outbreaks, including the COVID-19 pandemic, where it identified hotspots and guided resource allocation.
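As a toy illustration of the hotspot detection such systems perform, the sketch below bins reported case coordinates into grid cells and flags unusually dense cells. This is only a minimal stand-in for the spatial clustering or density estimation a real GeoAI pipeline would use; the function name, coordinates and thresholds are illustrative, not drawn from any system mentioned in this article.

```python
from collections import Counter

def find_hotspots(points, cell_size=0.01, min_cases=3):
    """Flag grid cells containing an unusual density of case locations.

    Each (lat, lon) point is assigned to a square cell of side
    `cell_size` degrees; cells with at least `min_cases` points are
    returned as hotspots. A toy stand-in for proper spatial clustering
    (e.g. DBSCAN or kernel density estimation).
    """
    cells = Counter(
        (int(lat // cell_size), int(lon // cell_size)) for lat, lon in points
    )
    return {cell for cell, count in cells.items() if count >= min_cases}

# Hypothetical case reports: three clustered points and two isolated ones.
cases = [
    (-37.8136, 144.9631), (-37.8137, 144.9632), (-37.8135, 144.9630),
    (-37.7520, 144.9030), (-37.9010, 145.0530),
]
hotspots = find_hotspots(cases)  # one dense cell detected
```

In practice, outputs like these would feed resource-allocation decisions, which is exactly why the data-quality and bias concerns discussed later in this article matter.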
However, as GeoAI technology advances, it brings serious ethical concerns and risks to the forefront, and these must be addressed.
These risks were highlighted in several recent incidents. In April 2024, the US Federal Communications Commission imposed nearly US$200 million in fines on wireless carriers for illegally sharing customer location data.
A data breach at Christie’s Auction House exposed the GPS locations of artworks owned by wealthy collectors.
This breach went beyond merely disclosing street addresses; it unveiled sensitive private details, severely undermining client privacy and damaging trust.
German cybersecurity experts Martin Tschirsich and André Zilch uncovered a vulnerability in Christie’s cybersecurity protocols: images on the auction house’s website exposed GPS data precise enough to pinpoint where each photo was taken, and consequently the storage sites of hundreds of consigned artworks.
They noted that around 10 per cent of the uploaded images contained exact GPS coordinates.
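The underlying leak is mundane: many cameras and phones embed GPS coordinates in a photo’s EXIF metadata as degrees/minutes/seconds values plus a hemisphere reference, and anyone who downloads the original file can convert them to a point on a map. A minimal sketch of that conversion follows; the function name and sample coordinates are illustrative assumptions, not taken from the Christie’s report.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF GPS degrees/minutes/seconds to signed decimal degrees.

    EXIF stores GPSLatitude/GPSLongitude as three numbers plus a
    reference letter (N/S/E/W); 'S' and 'W' indicate negative values.
    """
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Hypothetical EXIF values for a photographed artwork:
# 51°30'26.0"N, 0°4'46.6"W
lat = dms_to_decimal(51, 30, 26.0, "N")  # about 51.5072
lon = dms_to_decimal(0, 4, 46.6, "W")    # about -0.0796
```

A standard mitigation is to strip this metadata server-side, for example by re-encoding uploaded images without their EXIF tags before publishing them.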
The safety and reliability of GeoAI technologies are crucial, especially in vehicles. This was starkly highlighted by a recent incident in Kerala, India, where a Google Maps mishap directed a car into a river, resulting in a dramatic rescue as two men narrowly escaped.
As autonomous vehicle technology advances, ensuring these systems’ safety and reliability becomes increasingly important.
According to Forbes, 93 per cent of Americans have concerns about some aspect of self-driving cars, with safety and technological malfunctions being the primary worries.
Additionally, 61 per cent of Americans would not trust a self-driving car with their loved ones or children, reflecting a significant trust gap that needs to be addressed.
AI systems can also perpetuate or even exacerbate biases inherent in their training data.
For example, biases in mapping data can lead to the inaccurate identification of slum areas, impacting emergency response efforts by hindering effective service distribution.
Similarly, biases in data used to train autonomous vehicles can result in unsafe scenarios.
Research from King’s College London revealed that widely used datasets were significantly less effective at detecting darker-skinned pedestrians compared to lighter-skinned ones, with a detection failure rate nearly eight per cent higher for darker-complexioned individuals.
This highlights the critical need to address AI biases to ensure safety and fairness.
It also underscores the importance of ensuring AI systems are transparent and accountable to maintain trust and mitigate potential biases and errors in GeoAI applications.
While understanding AI decision-making processes is essential for maintaining trust, the complexity and ‘black box’ nature of these algorithms often obscure how decisions are made.
Jie Zhang, a lecturer in computer science at King’s College London, notes that while models are often based on open-source frameworks, the specifics of their implementations remain confidential.
Hong Kong’s open data policy, introduced in 2018, made government data freely accessible to the public. “Previously, we had to buy the data, but now we can get it for free,” geographic information system (GIS) expert Paulina Wong explains.
While this policy has significantly expanded the scope of GIS applications in Hong Kong, it raises questions and concerns about the quality of the data now being made widely available.
Ensuring data accuracy and reliability remains a critical issue as the volume and use of open data increase.
The rise of deepfake technology has introduced significant risks, including the unauthorised imitation of public figures’ voices and videos. Deepfake audio has notably influenced several elections worldwide.
For instance, a fake audio recording in Slovakia falsely accused a leading candidate of election rigging. In another case, more than 130 million people watched a deepfake video of Kamala Harris reposted by X owner Elon Musk.
A recent alarming development involves the use of AI to generate misleading geographic data. For example, fake satellite images and maps could add non-existent features to landscapes, resulting in ‘deepfake geography’.
As well as leading to incorrect planning decisions, such fake imagery could create security risks or hinder disaster response efforts.
If deepfake technology depicts false military bases, it could mislead intelligence agencies and escalate diplomatic tensions.
These developments pose a broader risk: as deepfakes blur the line between reality and fabrication, they challenge our ability to discern truth, as we are already seeing on social media.
As Simon Robinson, Executive Editor at Reuters put it, the real risk with AI is not where we question whether something is fake, but where we question reality and facts themselves.
“If we don't know what's real or true anymore, people can use that murkiness to their advantage. Such a paralysing state, a society which does not have a shared truth or reality, undermines trust in the media, trust in each other, and even in democracy itself,” he said.
While there are no widely reported instances of such deepfake geographic data causing significant issues yet, we have seen similar technologies creating problems in news and social media.
Our legislation should anticipate and mitigate these risks before they cause harm, rather than repeating the delays we have seen with other emerging technologies.
Now is the time not only for Australia but also for the global community to ensure the responsible use of GeoAI.
Policymakers and legislators must collaborate closely with geospatial experts to establish effective AI governance.
Developing comprehensive ethical guidelines and spatial laws that cover privacy, safety and bias is crucial for mitigating misinformation and safeguarding user anonymity.
While GeoAI has transformative benefits, addressing its ethical challenges requires urgent and proactive solutions.
At our research Centre for Spatial Data Infrastructures and Land Administration, we are actively exploring the emerging field of GeoAI. Our investigations focus not only on the technical advancements but also on the ethical and legal implications associated with its use, including data privacy, algorithmic bias, and regulatory compliance.
We acknowledge Aboriginal and Torres Strait Islander people as the Traditional Owners of the unceded lands on which we work, learn and live. We pay respect to Elders past, present and future, and acknowledge the importance of Indigenous knowledge in the Academy.