
Inherently Privacy-Preserving Robotic Vision

Do you have a robot vacuum cleaner? Perhaps one of the new generation robots that uses a camera to navigate around your house? Where do those camera images go? Who can see them? Perhaps the images should never leave your house. Perhaps they should never leave the robot or the camera chip. Perhaps, to best protect your privacy, the images, as we know them, should never be formed in the first place.

  • We present a robotic vision framework that never captures images and never reveals information from which they could be reconstructed
  • We propose guidelines for creating inherently privacy-preserving vision systems
  • Our simulated localisation study shows comparable performance to conventional approaches
  • We identify four practical approaches to implementing the system in hardware

This is a first step, and we hope to inspire future work that expands the range of applications open to sighted robotic systems.

Publications

•  A. K. Taras, N. Suenderhauf, P. Corke, and D. G. Dansereau, “Inherently privacy-preserving vision for trustworthy autonomous systems: Needs and solutions,” Journal of Responsible Technology, vol. 17, p. 100079, 2024. Preprint here.

•  Best Poster Award. Adam K. Taras, Niko Suenderhauf, Peter Corke, and Donald G. Dansereau, “The Need for Inherently Privacy-Preserving Vision in Trustworthy Autonomous Systems,” International Conference on Robotics and Automation (ICRA) Workshop: Multidisciplinary Approaches to Co-Creating Trustworthy Autonomous Systems, 2023. Preprint, poster.

•  Summary video, Honours thesis seminar

•  Appearance on local news and associated press release.

Citing

If you find this work useful, please cite:
@article{taras2024inherently,
  title = {Inherently Privacy-Preserving Vision for Trustworthy Autonomous Systems: Needs and Solutions},
  author = {Adam K. Taras and Niko Suenderhauf and Peter Corke and Donald G. Dansereau},
  journal = {Journal of Responsible Technology},
  volume = {17},
  pages = {100079},
  year = {2024}
}

Collaborators

This work was a collaboration between the Robotic Imaging Group at the Australian Centre for Robotics, University of Sydney, and the QUT Centre for Robotics, Queensland University of Technology.

Reconstruction challenge

We are interested in putting our inherently privacy-preserving implementations to the test. As part of this, we encourage researchers in this space to try the reconstruction challenge: attack an example of our proposed system to break the code and find a secret message!

See the GitHub repo and discussion forum to get started on the challenge and to share your progress and ideas.

Dataset

The handheld dataset of the office floor used in the paper is available here. It contains four trajectories through a set of classrooms and hallways (ABS), each 130 images long at 1440×1080 resolution.

See the dataset readme file here for further details.
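
To get started, here is a minimal loading sketch; the folder name abs_dataset, the trajectory naming, and the PNG extension are assumptions, so check the readme for the actual layout:

from pathlib import Path
import cv2  # pip install opencv-python

DATASET_ROOT = Path("abs_dataset")  # hypothetical; see the readme

def load_trajectory(name):
    # Load one trajectory as a list of 1440x1080 BGR images.
    frames = sorted((DATASET_ROOT / name).glob("*.png"))
    return [cv2.imread(str(f)) for f in frames]

images = load_trajectory("trajectory_01")  # roughly 130 frames per trajectory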

Inherently Private Imaging Architecture

Current robotic vision systems form human-interpretable images that are rich with private data. Features are extracted and processed in the vulnerable digital domain, where attackers can access all of this information.

Instead, we propose to shift processing into the optical-analogue domain. In this example, a micromirror device filters light from the scene to multiplex data onto a single-pixel sensor. Analogue processing performs privacy-preserving summarisation and hashing before digitisation. Specialised task-specific algorithms then operate on the secure hashes in the digital domain to perform important robotics tasks. Privacy is ensured because inverting the optical-analogue hashing is intractable.
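
As a minimal NumPy sketch of this idea (not the authors' hardware design; the pattern generation, measurement count, and 1-bit comparison are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(42)

def optical_analogue_hash(scene, n_measurements=64):
    # Each measurement multiplexes the scene through a random micromirror
    # pattern onto a single-pixel sensor; a 1-bit comparison before
    # digitisation destroys the information needed to invert the process.
    flat = scene.ravel().astype(float)
    patterns = rng.integers(0, 2, size=(n_measurements, flat.size))
    readings = patterns @ flat  # analogue summation at the sensor
    return (readings > readings.mean()).astype(np.uint8)

scene = rng.random((32, 32))         # stand-in for incident light
code = optical_analogue_hash(scene)  # digital algorithms see only this

In this sketch only 64 bits ever reach the digital domain from 1024 pixels, so an attacker who compromises the robot's software sees the hash, never an image.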

For more results, including an analysis of which pixels the feature data comes from, please see the paper.

How-to: implementing inherently privacy-preserving vision

We propose the following principles for implementing inherently privacy-preserving vision (a toy sketch illustrating the last two follows the list):
  • Specialise the camera to the task; this sacrifices generality for privacy as the camera can only be used for the task(s) it's designed for,
  • Shift as much processing as possible out of the digital domain, keeping it out of reach of remote attack,
  • Maximise information-destroying operations prior to digitisation,
  • Apply obscuration prior to digitisation such that brute-force attack becomes the only option for inverting the imaging process,
  • Consider all information already available to the attacker, e.g. sequences of data and priors, both to improve task performance and to ensure privacy, and
  • Maximise ambiguity, so that even a successful brute-force inversion of the imaging process is not likely to yield the correct image.
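
As a toy illustration of the last two principles (a sketch with assumed parameters, not the system from the paper), the snippet below counts how many scenes a brute-force attacker cannot distinguish after 1-bit, information-destroying measurements:

import itertools
import numpy as np

rng = np.random.default_rng(0)

PATTERNS = rng.integers(0, 2, size=(3, 4))  # three measurements of a 2x2 binary scene

def hash_scene(scene_bits):
    readings = PATTERNS @ np.array(scene_bits)
    return tuple(int(r) % 2 for r in readings)  # keep only parity: 1 bit each

# A brute-force attacker enumerates every binary scene matching the target hash.
target = hash_scene((1, 0, 1, 1))
collisions = [s for s in itertools.product((0, 1), repeat=4)
              if hash_scene(s) == target]
print(f"{len(collisions)} of 16 scenes share the hash {target}")

With fewer output bits than scene bits, multiple scenes collide on each hash, so even an exhaustive inversion leaves the attacker guessing which one was real.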

Call to Action

This work is a call to action for the robotic vision community. We see a path forward for establishing the trustworthiness of inherently privacy-preserving vision systems:
  • Characterising and refining hardware implementations based on the approaches proposed here;
  • Establishing meaningful metrics for privacy in the context of optical-analogue processing and hashing;
  • End-to-end design of optical, analogue and algorithmic processing to tackle a broader range of vision tasks;
  • Establishing trustworthiness through rigorous attack and refinement of the proposed concepts;
  • Communicating and educating in an accessible manner to address the barriers to societal acceptance of sighted, privacy-preserving systems.