The increasing presence of wearable cameras — on smartphones, Google Glass and lifelogging devices like the Narrative Clip and Autographer — has brought benefits to a variety of societal areas, including police investigations, lifestyle monitoring, and support for patients with memory loss and families of autistic children.
But for two Indiana University professors, the trend toward pervasive, automatic image capturing raises new and important questions about privacy, surveillance and the use of technical data derived from those images.
Assistant professors Apu Kapadia and David Crandall, both at IU Bloomington’s School of Informatics and Computing, along with Dartmouth College sociology professor Denise Anthony, will use $1.2 million in new funding from the National Science Foundation to advance their work developing technologies that improve the privacy of people captured in such images. The same technologies could also protect captured images from various types of automated analysis, and the project aims to build a clearer understanding of how ubiquitous cameras affect individual and societal perceptions of privacy.
“A sudden rise in such image gathering has novel privacy implications for both individuals and society,” Kapadia said. “Our challenge is to understand these privacy implications from both a sociological and technical perspective, and to design new and relevant image and context analysis tools that can help people manage their privacy.”
The needs are multifaceted, previous work by the same researchers has shown. Many lifelogging device users would respect the privacy of bystanders — blurring or blocking out faces or computer screens — if they had the technology to do so. At the same time, similar technologies are needed to protect the privacy of the camera wearers themselves.
“These cameras open up all sorts of novel and exciting applications, but so many private images are also going to be collected — images of other people, of sensitive documents, of private emails on computer screens, of private places like homes and offices, even inside bathrooms and bedrooms,” Crandall said. “So we’re investigating computer vision techniques that can automatically find potentially private content in images.”
At an international computer security conference in 2013, a team led by Crandall and Kapadia introduced PlaceRaider, a proof-of-concept mobile app that demonstrated how such automated image collection could be abused: it harvested sensor data and images from a victim’s phone to generate 3-D models of the victim’s environment. A year later they created PlaceAvoider, a system that uses computer vision algorithms to numerically fingerprint sensitive spaces, such as bathrooms and bedrooms, and then blacklist those spaces from future capture. More recently they created ScreenAvoider, a system designed to recognize and blacklist computer screens.
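The fingerprint-and-blacklist idea behind a system like PlaceAvoider can be illustrated with a deliberately simplified sketch. The hashing scheme, threshold and function names below are hypothetical stand-ins, not the actual PlaceAvoider algorithm: images of a sensitive space are enrolled as compact fingerprints, and any new capture whose fingerprint is close to an enrolled one is suppressed.

```python
def average_hash(pixels):
    """Fingerprint a grayscale image (list of rows of 0-255 ints)
    as a bit string: 1 where a pixel exceeds the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def is_blacklisted(pixels, blacklist, threshold=2):
    """True if the image's fingerprint is within the Hamming-distance
    threshold of any enrolled fingerprint, i.e. capture is suppressed."""
    h = average_hash(pixels)
    return any(hamming(h, b) <= threshold for b in blacklist)

# Enroll a tiny "bathroom" reference image, then test a near-duplicate.
bathroom = [[200, 210, 60, 50], [190, 205, 55, 45],
            [60, 50, 200, 210], [55, 40, 190, 205]]
blacklist = [average_hash(bathroom)]

near_copy = [[198, 212, 58, 52], [188, 207, 53, 47],
             [62, 48, 202, 208], [57, 42, 192, 203]]
print(is_blacklisted(near_copy, blacklist))  # True: capture blocked
```

A production system would use learned visual features rather than a toy average hash, but the control flow — enroll, fingerprint, compare, suppress — is the same.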
“Of course, what makes a photo private is complicated, subjective and highly context-dependent,” Crandall said. “Instead of us defining what is private ahead of time, we’re interested in letting people write their own policies based both on content and context of images, like times, locations, buildings, people, activities or objects, and then recognize these attributes of images automatically.”
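The policy idea Crandall describes — users defining privacy in terms of content and context rather than a fixed rule — can be sketched as a simple rule matcher. The attribute names and rule format here are invented for illustration; the researchers' actual policy language is not specified in this article.

```python
def matches(rule, attrs):
    """A rule matches an image when every condition it names holds
    for that image's automatically recognized attributes."""
    if "location" in rule and attrs.get("location") != rule["location"]:
        return False
    if "objects" in rule and not rule["objects"] & attrs.get("objects", set()):
        return False
    if "hours" in rule and attrs.get("hour") not in rule["hours"]:
        return False
    return True

def is_private(attrs, policy):
    """An image is private if any user-written rule matches it."""
    return any(matches(rule, attrs) for rule in policy)

# Example policy: anything captured at home after 10 p.m., or any
# image containing a computer screen, is treated as private.
policy = [
    {"location": "home", "hours": set(range(22, 24))},
    {"objects": {"computer_screen"}},
]

photo = {"location": "office", "hour": 14, "objects": {"computer_screen"}}
print(is_private(photo, policy))  # True: a screen was recognized
```

The hard research problem, as the quote notes, is not the rule matching but reliably recognizing attributes like people, places, activities and screens in the first place.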
Image gathering and dissemination for lifelogging, recording family histories and collecting personal information is expected to become increasingly popular. With devices like the Narrative Clip, which takes 120 photos an hour, or the Autographer, which takes 360 per hour, users will also need help collecting, curating and editing their images. The researchers think the same types of technologies could be used to organize, edit and produce user content, in addition to providing societal benefits such as aids for vision-impaired populations. But for these technologies to succeed, the privacy concerns of people captured in the images must be addressed.
“We’ve found that lifeloggers are concerned about their privacy and that of others, yet it’s doubtful they would give up control of their devices to have their images managed by someone or something else without their participation,” Kapadia said. “Here we are moving toward a socio-technical approach where wearers of cameras could express ‘propriety settings’ and thereby reduce the privacy concerns of bystanders.”
Source: Indiana University