Transparency and Trust in Collaborative Mapping: Concerns and Dilemmas in AI-Assisted Road Integration within OpenStreetMap


Bibliographic Details
Main Authors: Andorful, Francis (Author), Herfort, Benjamin (Author), Melanda, Edson Augusto (Author), Damas Antonio, Nathan (Author), Zipf, Alexander (Author), Camboim, Silvana Philippi (Author)
Format: Article (Journal)
Language: English
Published: 08 Dec 2025
In: Annals of the American Association of Geographers
Year: 2025, Pages: 1-22
ISSN: 2469-4460
DOI: 10.1080/24694452.2025.2589286
Online Access: Resolving system, free of charge, full text: https://doi.org/10.1080/24694452.2025.2589286
Publisher, free of charge, full text: https://www.tandfonline.com/doi/full/10.1080/24694452.2025.2589286
Author Notes: Francis Andorful, Benjamin Herfort, Edson Augusto Melanda, Nathan Damas Antonio, Alexander Zipf, and Silvana Philippi Camboim
Description
Summary: The influx of machine-generated data from geospatial artificial intelligence (AI) has grown significantly in less than a decade. AI-assisted mapping, where human validation refines machine-generated output, is increasingly used to update crowdsourced databases such as OpenStreetMap (OSM). OSM contributors, however, have expressed mixed sentiments about the presence of AI content in the database, raising questions about trust and authenticity. We first analyzed community discussions, identifying emotional concerns and the evolving role of human mappers. Next, we assessed whether AI-assisted roads (AI-aR) can be reliably detected within OSM, using machine learning (ML) models as diagnostic tools to reveal transparency limitations and advocate for improved tagging practices. Community debates reveal tensions over inadequate tagging, loss of local context, and corporate influence. ML models perform best on human-free benchmark data and worse on AI-aR edits, but improve when temporal patterns are included. Although geometric or temporal patterns might help identify AI-aR, these approaches remain uncertain and unstable over time. Our findings underscore the danger of data quality erosion through issues such as validation-loop bias and the accountability sink effect. The distinction between AI-aR and human-generated roads will continue to blur with the growth of human-AI collaboration. Current models might perform best on fresh AI contributions with minimal human modification.
Item Description: Viewed on 05.01.2026
Physical Description: Online Resource