Commentary

When robots are everywhere, what happens to the data they collect?

Incheon, South Korea: Employees of Incheon International Airport conduct a test operation of an autonomous vehicle and an autonomous robot cart in the pre-boarding area of the airport's second terminal, October 14, 2020.

As robots have grown more prevalent and capable in recent years, they have increasingly been used to perform tasks that allow them to collect personal information about individuals. Roombas—the robotic vacuums—cruise around our houses, building detailed maps of them. Assistive robots give us directions in airports and take our photographs in the process. But because robots are often anthropomorphized and zoomorphized—made to resemble humans and animals—they possess a unique ability to coax people into giving up sensitive data. These resemblances can lead people to believe that a humanoid social robot has smirked at them, that each of their Roombas has a distinct personality, and that robots can feel pain. Lulled by a robot's resemblance to a human or animal, a person interacting with it is far less on guard than when dealing with the myriad other digital privacy threats they encounter on any given day.

It is easy to underappreciate the privacy risks posed by robots that are more likely to be viewed as pets than as data-hungry devices. Owners of Roombas, for example, have a remarkable ability to zoomorphize their robotic vacuum cleaners, in some cases even describing them as family members. Roombas are given free run of people's homes and build maps of the spaces they move through. But what happens with the data collected by robots such as these is far from clear. It is therefore crucial that consumers using robots understand how they function and what data they collect—and that policymakers write effective rules governing the data collected by robots.

When robots meet humans 

When we encounter a robot in public, it is often difficult to learn what, exactly, it is doing and what happens to the data it collects. In 2019, I was excited to come across an assistive social robot roaming the halls at Incheon International Airport in South Korea. I left my bags with my partner and rushed after the robot so I could study it. I tried to get close to the robot to understand its functionality but was thwarted by a throng of people taking turns trying to get its attention by waving or pressing buttons. But the robot was oblivious and on a mission, and none of the poking or prodding had any impact. When it reached its destination at the other side of the hall, it was apparent that the robot considered its task complete: it had led a requester to a check-in counter, and it now started to roam aimlessly. Before I could get to the robot and figure out its controls, another woman walked up and scanned her boarding pass on the back of the robot. The robot displayed a new map on the rectangular screen on its back and led the woman back across the hall. Elsewhere in the airport, people were having their photos taken by these assistive robots—and providing their email addresses to get a copy of the photo.

These robots raised a huge number of questions: What information were they collecting as people scanned their boarding passes? Was the information stored locally or in the cloud (since robots typically don't have much storage space)? How long was that information retained? Where was the privacy policy for these robots? What other sensors might they have, and what data could those sensors collect? And most importantly, who was responsible for the robot and the information it collected? There was no easy way of knowing.

I wanted to know more about these robots, and although I knew what to look for, I still couldn't find answers to my questions. To verify that the robots belonged to Incheon Airport, I scoured its website for information. I found nothing. Since most social robots in public spaces are still a novelty, I next turned to news sources to see if there were articles on the Incheon Airport robots. The articles I uncovered revealed that the robots were deployed by Incheon Airport and developed by LG. Instead of finding one company responsible for the robots and the data they collect, I suddenly had two contenders. After reviewing the websites of both LG and Incheon Airport, I could not find schematics, a privacy policy for the robots, or any other information about what the robots could do beyond what I'd experienced in the airport. Both LG and Incheon Airport likely had some responsibility for the robots and the information they collected, but without clearer documentation it wasn't obvious where the lines of responsibility were drawn. This is an unacceptable amount of effort for anyone to go through just to understand how the robots around them affect their lives.

When a social robot in a public space is wearing nothing but a sticker to declare its allegiance, it is extremely difficult to verify that the robot really reports to the entity emblazoned on that sticker. As a sticker aficionado and collector, I know how easy it is to acquire a roll of stickers with any design, and as a robot social engineering researcher, I know that it can be deceptively easy to get people to socially connect with a robot. While the robot could be affiliated with the entity decaled on its body, it could also have been placed there by a third party with some money and a roll of stickers.

Robots, either on their own or through human controllers, can mimic human behaviors in ways that encourage individuals to act against their own interests. For example, robots may manipulate individuals into divulging private information, giving unauthorized entities access to restricted spaces, or handing over valued resources to unintended parties. This is robot social engineering. Robots can collectively process information, have sensors integrated into their bodies (space permitting), and can blend in with other robots of the same model bearing the same markings. It is therefore urgent to understand what data a robot is collecting, why, and where that data is going. The urgency only grows when we realize that robots can easily breach physical security measures: in a 2017 experimental study, students repeatedly let a testing robot posing as a delivery robot into a locked and secured student dorm, even when some rightly suspected that the robot could have been delivering a bomb.

Despite the rapid proliferation of robots and the unique threat that they pose to our privacy, there exists a remarkable lack of tailored privacy policies for robots and frameworks for the kind of information they are permitted to collect from human beings. That needs to change. Policymakers, robot manufacturers, and users all have a role to play by making information about robot use, their deployment, and the data they collect far more readily available to the people interacting with these devices.

Addressing the policy challenge

The data-privacy questions raised by human-robot interactions can be conceptualized in three buckets: collection, transit, and storage. When a robot collects data from a human being, what does that data include? The possibilities are myriad: biometrics, like facial structure, height, or voice patterns; identifiers, like name and email address; or personal information entered as part of a query. Once that data is collected, it is usually transferred to another system. But how is the data treated in transit? Is it transmitted in plaintext, or is it encrypted? Once data is received, it must be stored—hopefully subject to proper security safeguards. And once it's stored, data can be put to different uses, including training AI models. But who should be granted access to the data, and what rules should govern its use, including in model training? Figuring out what to do about data collected by robots is only made more complicated by the fact that human beings are likely to treat a device that can (potentially) walk and talk very differently than an ordinary computer.
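To make those three buckets concrete, here is a minimal sketch of the lifecycle for the boarding-pass scenario above. Every name in it (BoardingPassScan, transmit, store) is hypothetical and invented for illustration; the point is simply that a record is created at collection, should be encrypted before it ever leaves the robot, and afterward is only as private as the storage and key management behind it.

```python
# Illustrative sketch of the collection -> transit -> storage lifecycle.
# All names here are hypothetical; no real robot platform is depicted.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

from cryptography.fernet import Fernet  # third-party: pip install cryptography


@dataclass
class BoardingPassScan:
    """Stage 1, collection: what the robot reads from one scan."""
    passenger_name: str
    flight_number: str
    scanned_at: str


def collect() -> BoardingPassScan:
    # A stand-in for the robot's scanner; real scans carry far more detail.
    return BoardingPassScan(
        passenger_name="J. Doe",
        flight_number="KE123",
        scanned_at=datetime.now(timezone.utc).isoformat(),
    )


def transmit(scan: BoardingPassScan, key: bytes) -> bytes:
    """Stage 2, transit: encrypt the record before it leaves the robot."""
    payload = json.dumps(asdict(scan)).encode()
    return Fernet(key).encrypt(payload)  # ciphertext, never plaintext


def store(ciphertext: bytes, db: list[bytes]) -> None:
    """Stage 3, storage: whoever holds the key controls who can read this."""
    db.append(ciphertext)


if __name__ == "__main__":
    key = Fernet.generate_key()  # in practice, managed by the robot's operator
    db: list[bytes] = []
    store(transmit(collect(), key), db)
    print(f"stored {len(db)} encrypted record(s)")
```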

Policymakers need to address the concerns raised by the rapid proliferation of robots by making robot-specific privacy policies easily and publicly accessible. Robot manufacturers need to publish free, public manuals describing their robots' functionality and sensors, so that people can be aware of what robots are able to do and what information they are capable of collecting. Privacy policies for robots should be distinct from website or company privacy policies, since the unique abilities of robots are not covered sufficiently in general policy documents. People should be able to easily validate which companies are responsible for a robot's actions by having information about the robot on the front page of a company website. And regulators responsible for consumer safety, like the U.S. Federal Trade Commission or Canada's Office of Consumer Affairs, should enforce rules for data collection by robots and specify consequences for failing to abide by those rules.

Given previous efforts by regulators and public authorities to create robot law, it is worth reviewing what researchers have identified as the basics a robot privacy policy should cover. There are two main components to such a policy. First, companies that deploy robots should post clear signage indicating that the space is serviced by robots and what information the robots in that space can collect. Second, that signage should point, via a URL or a QR code, to a website with further information. If signage isn't available, understandable, or accessible to some people, robots should have a fallback mechanism in the form of a physical device, like a button, that explains simply and easily what their sensors do and what information they are collecting—with an ability to opt out of whatever information the robot has already collected about a person. The reason this should be a fallback to the signage and website is that merely touching that button may mean the robot already holds a great deal of information about you.
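As a thought experiment, the website behind that QR code could even expose a machine-readable disclosure, so a phone could summarize a robot's data practices on the spot. The sketch below is purely hypothetical; every field name and value is invented to show the kind of information such a disclosure might carry.

```python
# Hypothetical machine-readable robot privacy disclosure, the kind of
# document a sign's QR code might link to. All fields and values are
# invented for illustration; no real robot platform publishes this.
import json

disclosure = {
    "operator": "Example Airport Authority",   # who deploys the robot
    "manufacturer": "Example Robotics Co.",    # who built it
    "robot_model": "guide-bot-2",
    "sensors": ["camera", "microphone", "lidar", "boarding-pass scanner"],
    "data_collected": ["photos", "email addresses", "flight itineraries"],
    "retention_days": 30,                      # how long data is kept
    "privacy_policy_url": "https://example.com/robot-privacy",
    "opt_out_contact": "privacy@example.com",
}

print(json.dumps(disclosure, indent=2))
```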

In short, we should be watching robots as closely as they are watching us. 

Brittany “Straithe” Postnikoff is an information security researcher, hardware hacker, and open source maintainer. She published the first defining works on robot social engineering and has spoken on the topic at over 30 conferences, including Black Hat, RightsCon, and DEF CON.