Your domestic robot: A spy, a helpful housekeeper, or a loyal companion

1. Introduction
Robots will soon be an integral part of our lives (Gupta, 2015). People use domestic robots to assist with chores, care for the elderly, or help children learn. While these robots offer numerous benefits, they also introduce new security and privacy vulnerabilities into the home (Denning et al., 2009). Because they collect and process data, domestic robots are subject to regulation under data protection frameworks such as the General Data Protection Regulation (GDPR) and the Fair Information Practice Principles (FIPPs). In this paper, however, I will argue that the risk from domestic robots extends beyond breaches of “informational privacy”, which concerns users’ personal information, to social, psychological, and physical privacy. These additional dimensions derive from robots’ ability to move, act autonomously, interact with people, and connect with other smart devices in the Internet of Things (IoT). I will also argue that the current regulatory framework is insufficient to govern the information flows entailed by cloud robots, much of whose software and storage resides externally in the cloud. From there, I will make recommendations based on the four regulatory tools of law, social norms, market, and code for a comprehensive regulatory framework and better protection of users, especially naive users.

2. Domestic robots and the current commercial state-of-the-art
As defined by Bekey (2012), a robot is “a machine, situated in the world, that senses, thinks, and acts”. In this context, I distinguish between service robots, which “perform useful tasks for humans”, and industrial robots, which are used for “industrial automation applications” (ISO 8373). Among service robots, domestic robots are those deployed in the home or other domestic settings (Villaronga & Millard, 2018). Steinfeld et al. (2006) categorise domestic robots into chore robots, communication robots, entertainment robots, and companion robots; examples include the Roomba vacuum cleaner, the RoboSapien toy, and the Spykee and Rovio telepresence robots. Domestic robots are characterised by their physical presence, ability to interact with people, and level of autonomy (Lutz et al., 2019; Young et al., 2009). Contemporary robots are increasingly based on artificial intelligence (AI), with human-like abilities such as sensing, language, interaction, problem solving, and learning (House of Lords, 2018). Cloud robots are physical robotic systems integrated with cloud-based services, in which the software implementing their intelligent or autonomous behaviours is partially or fully shifted to the cloud (Proia & Simshaw, 2015). Operating in “smart home” environments, robots can connect with and control multiple devices, such as heating, air-conditioning, and alarms, based on the user’s needs and activity (Prescott & Caleb-Solly, 2017).

The market for domestic robots is growing at an impressive rate globally. A report from the International Federation of Robotics (2020) shows a 32% increase in sales of professional service robots in 2019, from USD 8.5 billion to USD 11.2 billion. Within that, service robots for domestic tasks showed the fastest growth, with 18.6 million units sold in 2019. Covid-19 has undoubtedly boosted the market and created additional demand for robotic solutions, especially in logistics, health care, and the home.

3. Domestic robots: Four dimensions of privacy & security threats

Building on the work of Burgoon (1982), Leino-Kilpi et al. (2001) identify four dimensions of privacy:
  • Physical privacy: personal space and territory;
  • Informational privacy: personal information of individuals;
  • Social privacy: abilities and efforts in managing social contacts and influence;
  • Psychological privacy: abilities to control cognitive inputs and outputs in forming one’s values and beliefs.

While informational privacy dominates privacy research, the other dimensions are occasionally mentioned in the literature (Lutz et al., 2018). In the following sections, I will discuss how each type of privacy is affected by robots’ intrusive data acquisition, their degree of autonomy and agency, and human-robot interactions, with particular regard to the vulnerabilities of users.

3.1. Informational privacy
Equipped with numerous sensors, and with increased mobility, domestic robots can obtain a vast amount of information about users and their environment, including sensitive information such as emotional and mental states, interactions between users, and images of children (Sharkey, 2016). Other information concerning “race, religious beliefs, and criminal records” is also classified as “per se sensitive” (Proia & Simshaw, 2015). This includes data detected by robots but imperceptible to humans, such as brain waves or inaccessible places in the house, which raises the question of who owns such data (Calo, 2010a; Rueben et al., 2017). Because of their social character, domestic robots can also elicit “secret” information through interactions with users (Pagallo, 2013). In the absence of user control, these data can move outside the boundaries of the home and be used for marketing, policing, social control, and manipulation (Urquhart et al., 2019). Users, on the other hand, lack awareness of what data is being collected, how it is processed, and for what purposes it is used (Lees et al., 2011); the degree of awareness varies with users’ age, demographic characteristics, and familiarity with the technology (Draper & Sorell, 2017). The lack of transparency in data flows managed by an ecosystem of stakeholders makes informed consent insufficient, biased, and incomprehensible (Villaronga et al., 2020). Although there are generally trade-offs between privacy and utility, Beach et al. (2009) show that people with a greater need for care, especially the elderly, are more willing to give up their privacy in exchange for services, which makes them even more vulnerable.

3.2. Physical & social privacy
The presence of robots and their “always on” mode changes the character of the private home as a place of retreat and might negatively affect users’ social behaviours (Sedenberg et al., 2016). A sense of surveillance and a lack of personal space can lead to reduced self-development and self-reflection, or to feelings of insecurity and alienation (Hofmann, 2013). Sharkey and Sharkey (2010) point out that robots can inhibit normal developmental stages in children, affecting their identity formation. Robots can also undermine users’ autonomy and dignity where they exert control, make users do unwanted things, or increase users’ dependency (Simshaw et al., 2016). Because the home is a shared environment, its data is co-constructed and represents a shared digital life (Goulden et al., 2018). This challenges domestic politics: who has authority over the system, and to what extent does one person control another’s information (Tolmie & Crabtree, 2018)? Current legislation such as the GDPR, however, regulates only the single user and still lacks a collective mechanism for meaningful control shared by individuals (Urquhart et al., 2019).

3.3. Psychological privacy
The human tendency to perceive robots as people stems partly from unfamiliarity with the technology and from the social inclination to anthropomorphise robots (Darling, 2012). With a humanoid appearance or behaviour, a robot can act as a “device of deception”, leading users to form trust and emotional connections with robots rather than with humans (Sharkey, 2012); robots can also be used to patronise and infantilise users, especially older adults (Körtner, 2016). The use of assistive robots has generated concerns about the ethics of care and the emotional implications for users (Leenes et al., 2017). Aristotle, an AI-powered babysitter, is a case in point: it was pulled from the market amid public fear of “replacing the care, judgment and companionship of loving family members” (Hern, 2017).

3.4. Security threats
Because domestic robots operate in a network connected to multiple devices and the Internet, they are susceptible to hacking (Sedenberg et al., 2016). Cloud robots can share information with the public environment through remote access, without a “human in the loop” (Pagallo, 2013). Denning et al. (2009) show that domestic robots can be infiltrated by outsiders and used for malicious purposes such as spying, vandalism, multi-robot attacks, and psychological manipulation. In practice, there have been warnings about how toys, such as the Hello Barbie doll and connected stuffed animals, can be hijacked to spy on children (Gibbs, 2015; Bicchierai, 2017). Consequently, domestic robots have introduced a “new expectation of privacy” and challenged the current legal framework of privacy and data protection (Pagallo, 2013). In the next section, I will discuss the limitations of present regulations and the dilemma of responsibilities among groups of stakeholders. From there, I will make some practical recommendations for the use of domestic robots with regard to vulnerable users.

4. Regulating domestic robots: uncertainty and responsibilities
Robot technologies, powered by AI, pose unprecedented challenges to the current regulatory framework, which was not designed for progressive and adaptive AI (Leenes et al., 2017). This entails uncertainty, concerning the ontological status of robots, the degree of agency in robots’ actions, and the lack of common ethical, legal, and social standards; and responsibilities, concerning the clarity and extent of liability, transparency, and accountability of each stakeholder involved in the process (Villaronga et al., 2020). The current framework has major drawbacks: it focuses largely on the physical safety of the robot itself, ignores background information processing, and inadequately addresses the risks of human-robot interaction and autonomous action, such as social and psychological harm (Pagallo, 2016; Simshaw et al., 2016). Despite their privacy and legal risks, cloud robots are not yet covered by present regulations (Villaronga & Millard, 2018). In the following sections, I will discuss the four regulatory modalities identified by Lessig (1999), namely law, social norms, market, and code, in relation to domestic robots.

4.1. Law
Where domestic robots have data-acquisition abilities, they are subject to regulation under data protection frameworks such as the GDPR and the FIPPs (Proia & Simshaw, 2015). As products, they are regulated under Directive 2001/95/EC on general product safety. Additional regulations apply to specific types of robot: care robots classified as medical devices, for example, fall under Regulation (EU) 2017/745. Regulation of applications in other domains, such as therapy and education, is still lacking (European Parliament, 2017). These frameworks mandate ex-ante transparency, whereby users must be informed about data usage before giving consent (Felzmann et al., 2019). Consent can, however, be made dynamic, so that users choose the situations in which to share their data and re-evaluate their choices if the purposes change (Villaronga et al., 2020); a minimal sketch of such a mechanism is given below. Where users have limited capacity to consent, such as persons with dementia or children with autism, informed consent by proxy can be applied (Shepherd et al., 2019). A waiver of consent can also be considered where the risk is minimal compared to the expected benefits (Körtner, 2016).
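To make this idea concrete, here is a minimal sketch in Python of dynamic, purpose-bound consent. It is illustrative only: the ConsentRegistry class, its methods, and the purpose labels are my own assumptions rather than constructs taken from the GDPR, the FIPPs, or any cited work.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    purpose: str    # e.g. "navigation-mapping" (hypothetical label)
    granted: bool
    context: str    # situation in which sharing is allowed
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ConsentRegistry:
    """Hypothetical per-purpose consent store kept on the robot itself."""

    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, purpose: str, context: str) -> None:
        # Consent is granted purpose by purpose and tied to a situation.
        self._records[purpose] = ConsentRecord(purpose, True, context)

    def revoke(self, purpose: str) -> None:
        record = self._records.get(purpose)
        if record:
            record.granted = False

    def is_permitted(self, purpose: str) -> bool:
        record = self._records.get(purpose)
        return record is not None and record.granted

    def purpose_changed(self, purpose: str) -> None:
        # A changed processing purpose invalidates earlier consent,
        # forcing the user to re-evaluate before any further data flows.
        self.revoke(purpose)

# Usage: consent granted for mapping becomes void once the purpose changes.
registry = ConsentRegistry()
registry.grant("navigation-mapping", context="cleaning sessions only")
assert registry.is_permitted("navigation-mapping")
registry.purpose_changed("navigation-mapping")  # e.g. data repurposed for marketing
assert not registry.is_permitted("navigation-mapping")
```

On this model, the robot would check is_permitted before each transmission, so a repurposed data flow simply stops until the user consents again.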

4.2. Social norms
A number of professional bodies provide general guidance on robotics through their codes of ethics, such as the Association for Computing Machinery (ACM), the British Computer Society (BCS), and the Institute of Electrical and Electronics Engineers (IEEE). Common themes across these codes are privacy and safety, accountability, respect for fundamental rights, inclusiveness, and the precautionary principle (Urquhart et al., 2019). However, there is little evidence of those principles being translated into practice (Winfield & Jirotka, 2018). As Holder et al. (2016) highlight, user acceptance is critical for robot uptake, and acceptance is based on trust. Educational programmes on robots’ real abilities and limitations, as well as on privacy rules, can help users feel comfortable and form trust in appropriate forms; this can also reduce the risks of emotional and psychological harm from human-robot interaction (Sharkey & Sharkey, 2012). Designing robots that adapt to users’ personalities can also contribute to greater acceptance (Chammas et al., 2015).

4.3. Market
Liability law can help balance technological innovation against consumer protection (Leenes et al., 2017). Robots are subject to Directive 85/374/EEC on liability for defective products, but the autonomous and cognitive features of a smart domestic robot make the current rules insufficient (Villaronga et al., 2020). Moreover, the consequences of social and psychological impact, or moral damage, are not yet covered. The “sophisticated interdependencies both within products and across interconnected devices” of cloud robots further complicate the allocation of liability (EC, 2017). Ongoing discussion at the European Commission suggests risk-based approaches that set a safeguard baseline for the use and development of robots and AI (Malgieri, 2019). Another solution is the creation of an obligatory insurance scheme or a compensation fund (European Parliament, 2017). These ideas, however, are still research prototypes that might prove complicated in practice.

4.4. Code (or architecture)
Compared to the other regulatory modalities, code is self-enforcing and can inhibit non-compliance (Leenes et al., 2017). It allows universal and/or localised values to be embedded in the design process, an approach known as “value sensitive design” (Le Dantec et al., 2009). User-centred design embraces cultural and societal aspects and users’ needs alongside functional design (Bødker, 2015). For example, privacy and data protection rules can be partly enforced through “privacy by design and default” (Article 25, GDPR) and “security by design” (Article 32, GDPR): faces can be automatically blurred, and data can be anonymised, encrypted, and deleted after a period of time, as the sketch below illustrates. Domestic and international laws can also be incorporated through “regulation by design”, which can lead to future “compliance by default” (Villaronga et al., 2020). However, this poses questions of legitimacy: how were the designs established, and to what extent are they transparent (Brownsword, 2005)?
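As a purely illustrative sketch of “privacy by design and default”, the following Python code blurs detected faces before a camera frame leaves the robot and deletes stored frames after a retention period. It assumes OpenCV (the opencv-python package) with its bundled Haar cascade face detector; the function names and the 30-day retention period are my own assumptions, not drawn from any robot vendor’s API or mandated by the GDPR.

```python
import os
import time

import cv2  # opencv-python

# OpenCV ships a pre-trained frontal-face Haar cascade with the package.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    """Blur every detected face before the frame leaves the robot."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y + h, x:x + w]
        # A heavy Gaussian blur (odd kernel size) renders the face unrecognisable.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return frame

RETENTION_SECONDS = 30 * 24 * 3600  # assumed 30-day retention period

def purge_expired(storage_dir: str) -> None:
    """Storage limitation by default: delete frames older than the retention period."""
    now = time.time()
    for name in os.listdir(storage_dir):
        path = os.path.join(storage_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > RETENTION_SECONDS:
            os.remove(path)
```

Run on the robot itself, before any upload, such defaults make privacy-protective behaviour the path of least resistance rather than an option the user must enable.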

5. Conclusion
In sum, while domestic robots offer numerous benefits, they also introduce considerable security and privacy risks into the home environment. The current regulatory framework focuses largely on informational privacy, but the risks from domestic robots extend to physical, social, and psychological privacy, as well as to security threats. Legislation needs to be upgraded to keep pace with the uncertainty surrounding the progressive development of robots and AI, through clear mechanisms of responsibility shared among stakeholders. Until robots “become or are made self-aware, Asimov’s laws must be regarded” (European Parliament, 2017). In the near future, however, robots could be designed by default as moral agents, able to make decisions that avoid unethical outcomes by themselves (Winfield & Jirotka, 2018). Even then, humans would bear a greater oversight responsibility to ensure that the system does not make wrong decisions. If that happens, users might no longer need to worry about their domestic robots acting as spies, because the robots themselves could act as data controllers and protect users’ privacy and interests on their behalf.

References

  • Beach, S., Schulz, R., Downs, J., Matthews, J., Barron, B., & Seelman, K. (2009). Disability, age, and informational privacy attitudes in quality of life technology applications: Results from a national web survey. ACM Transactions on Accessible Computing, 2(1), 1–21.
  • Bicchierai, L. (2017). How this Internet of Things stuffed animal can be remotely turned into a spy device. Available at: https://www.vice.com/en/article/qkm48b/how-this-internet-of-things-teddy-bear-can-be-remotely-turned-into-a-spy-device. Accessed on 1 December 2020.
  • Bødker, S. (2015). Third-wave HCI, 10 years later: Participation and sharing. Interactions, 22(5), 24–31.
  • Brownsword, R. (2005). Code, Control, and Choice: Why East is East and West is West. 25 Legal Studies 1.
  • Burgoon, J. (1982). Privacy and communication. In Burgoon, M. (Ed.), Communication Yearbook 6 (pp. 206–249). Beverly Hills, CA: SAGE.
  • Calo, R. (2010a). People can be so fake: A new dimension to privacy and technology scholarship. Penn State Law Review, 114(3).
  • Calo, R. (2010b). Robots and privacy. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 187–201). Cambridge, MA: The MIT Press.
  • Chammas, A., Quaresma, M., & Mont’Alvão, C. (2015). A closer look on the user centred design. Procedia Manufacturing, 3, 5397–5404.
  • Darling, K. (2012). Extending legal protection to social robots. IEEE Spectrum.
  • Denning, T., Matuszek, C., Koscher, K., Smith, J., & Kohno, T. (2009). A spotlight on security and privacy risks with future household robots. In Proceedings of the 11th International Conference on Ubiquitous Computing (pp. 105–114). New York, NY: ACM.
  • Draper, H., & Sorell, T. (2017). Ethical values and social care robots for older people. Ethics and Information Technology, 19, 49–68.
  • Simshaw, D., Terry, N., Hauser, K., & Cummings, M. (2016). Regulating healthcare robots: Maximizing opportunities while minimizing risks. Richmond Journal of Law & Technology, 22(2).
  • Gupta, S.K. (2015) Six recent trends in robotics and their implications. IEEE Spectrum. Available at https://spectrum.ieee.org/automaton/robotics/home-robots/six-recent-trends-in-robotics-and-their-implications. Accessed 02 December 2020.
  • Hern, A. (2017). ‘Kids should not be guinea pigs’: Mattel pulls AI babysitter. The Guardian. Available at: www.theguardian.com/technology/2017/oct/06/mattel-aristotle-ai-babysitter-children-campaign. Accessed on 29 November 2020.
  • Holder, C., Khurana, V., Harrison, F. and Jacobs, L. (2016). Robotics and law: key legal and regulatory implications of the robotics age (part I of II). Computer Law and Security Review, Vol. 32 No. 3, pp. 383-402.
  • Hofmann, B. (2013). Ethical challenges with welfare technology. Science Engineering Ethics, 19(2), 389–406.
  • House of Lords. Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, willing and able?
  • Körtner, T. (2016). Ethical challenges in the use of social service robots for elderly people. Z Gerontol Geriat 49, 303–307.
  • Le Dantec, C., Poole, E.A. and Wyche, S. (2009). Values as lived experience: evolving value sensitive design in support of value discovery. Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York, NY, pp. 1141-1150.
  • Lessig, L. (1999). The Law of the Horse: What cyberlaw might teach. Harvard Law Review, 113, 501.
  • Lees, M., Tang, K., Forlizzi, J., & Kiesler, S. (2011). Understanding users’ perception of privacy in human–robot interaction. In Proceedings of the 6th International Conference on Human–Robot Interaction (pp. 181–182). New York, NY: ACM.
  • Leenes, R., Palmerini, E., Koops, B., Bertolini, A., Salvini, P. & Lucivero, F. (2017) Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Law, Innovation and Technology, 9:1, 1-44.
  • Leino-Kilpi, H., Välimäki, M., Dassen, T., Gasull, M., Lemonidou, C., & Arndt, M. (2001). Privacy: A review of the literature. International Journal of Nursing Studies, 38, 663–671.
  • Lutz, C., Hoffmann, C., Bucher, E., Fieseler, C. (2018). The role of privacy concerns in the sharing economy. Information, Communication & Society, 21(10), 1472–1492.
  • Lutz, C., Schöttler, M., & Hoffmann, C. (2019). The privacy implications of social robots: Scoping review and expert interviews. Mobile Media & Communication, Vol. 7(3) 412–434.
  • Malgieri, G. (2019). Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations. Computer Law & Security Review, 35(5).
  • Pagallo, U. (2013). Robots in the cloud with privacy: A new threat to data protection? Computer Law & Security Review, 29(5), 501–508.
  • Pagallo, U. (2016) The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design. In: Gutwirth S., Leenes R., De Hert P. (eds) Data Protection on the Move. Law, Governance and Technology Series, vol 24. Springer, Dordrecht.
  • Prescott, T. & Caleb-Solly, P. (2017). Robotics in Social Care: A Connected Care Ecosystem for Independent Living. UK RAS White Papers. Robotics and Autonomous Systems Network.
  • Proia, A., & Simshaw, D. (2015). Consumer Cloud Robotics and the Fair Information Practice Principles: Recognizing the Challenges and Opportunities Ahead. The Minnesota Journal of Law, Science & Technology. Volume 16. Issue 1.
  • Sedenberg, E., Chuang, J., Mulligan, D. (2016). Designing commercial therapeutic robots for privacy preserving systems and ethical research practices within the home. International Journal of Social Robotics, 8(4), 575–587.
  • Sharkey, A. J. C., & Sharkey, N. (2010). The crying shame of robot nannies. Interaction Studies, 11(2), 161–190.
  • Sharkey, N., & Sharkey, A. (2012). The eldercare factory. Gerontology, 58(3), 282–288.
  • Sharkey, A. J. C. (2016). Should we welcome robot teachers? Ethics and Information Technology, 18(4), 283–297.
  • Shepherd, V., Wood, F., & Griffith, R. (2019). Research involving adults lacking capacity to consent: a content analysis of participant information sheets for consultees and legal representatives in England and Wales. Trials 20, 233.
  • Steinfeld, A., Fong, T., Kaber, D., Lewis, M., Scholtz, J., Schultz, A., & Goodrich, M. (2006). Common metrics for human-robot interaction. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction. New York, NY: ACM.
  • Tolmie, P. and Crabtree, A. (2018). The practical politics of sharing personal data. Personal and Ubiquitous Computing, Vol. 22 No. 2, pp. 293-315.
  • Urquhart, L., Reedman-Flint, D., and Leesakul, N. (2019). Responsible domestic robotics: exploring ethical implications of robots in the home. Journal of Information, Communication and Ethics in Society Vol. 17 No. 2.
  • Villaronga, E. F., & Millard, C. (2018). Cloud robotics law and regulation. Queen Mary School of Law Legal Studies Research Paper No. 295/2018.
  • Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2020). Gathering expert opinions for social robots’ ethical, legal, and societal concerns: Findings from four international workshops. International Journal of Social Robotics, 12, 441–458.
  • World Commission on the Ethics of Scientific Knowledge and Technology (COMEST). (2017). Report of COMEST on Robotics Ethics. SHS/YES/COMEST-10/17/2 REV.
  • Winfield, A., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Phil. Trans. R. Soc. A 376: 20180085.