Who decides what the future will look, feel or sound like? When we think about AI, the conversation tends towards its biases and its asymmetric development and implementation. It is clear that the goals behind AI's mainstream development have not been human-centered, nor has its design embedded a horizontal, diverse set of voices. Science fiction has played a very interesting role in how both specialized and non-specialized audiences approach AI.
The strong narratives behind ideas such as the singularity and AI overpowering humans have been written and rewritten in novels, films, podcasts, video games and more. But how do these narratives shape our relationship with AI? Can we build new narratives that envision a more horizontal, sustainable and caring AI? How can we speculate about the future of AI by rethinking what privacy, agency and trust mean for these systems?
Privacy, Agency, and Trust in Human-AI Ecosystems (PATH-AI) is a collaborative, multidisciplinary research project between The Alan Turing Institute, the University of Edinburgh, and the RIKEN research institute in Japan. The project examines how the three interrelated values of privacy, agency, and trust operate in the very different cultural contexts of the UK and Japan, in relation to AI and other data-driven technologies.