On October 24, 2023, NTNU held the fourth of a series of digital workshops on robotics for agile production entitled “Addressing data challenges in robotics for agile production”.
During this workshop, we invited the audience, composed of members from the scientific community, industry and civil society, to reflect on themes of responsible robotics in agile production and manufacturing. Previous events showed that issues connected to data privacy, cybersecurity and regulation are key to agile production, and that they intersect with all other non-technological issues of robotics. In this workshop, we therefore invited expert talks on data regulation and liability when problems arise, as well as on the role of human oversight in mitigating issues of safety and privacy.
The event saw the participation of two keynote speakers: Athina Sachoulidou, Assistant Professor of Criminal Law at the University of Thessaloniki, and Winston Maxwell, Director of Law and Technology Studies at Télécom Paris, Institut Polytechnique de Paris.
First, Dr. Sachoulidou delineated the changes required in criminal law to respond to innovations in Artificial Intelligence and robotics and the data challenges they raise. As of today, robots have no criminal personhood, and there is no specific offence for illegal acts committed through or against AI, leaving questions of liability open to interpretation. Existing frameworks, such as the standard of care or the doctrine of permissible risk, can be applied to cases involving robots; however, their application needs to be monitored to evaluate their suitability. A case-by-case balancing of responsibilities would be desirable.
Maintaining the focus on the issue of responsibility, Professor Maxwell connected questions of safety and data privacy to the figure of the human overseer. The AI Act stresses human oversight of AI as a necessary safeguard against non-technological issues. This practice can take three different forms: human-on-the-loop, with humans monitoring the system and able to stop it if problems occur; human-in-the-loop, requiring humans to validate an AI decision before it takes effect; and human verification after the fact, taking place once issues have emerged. Nevertheless, all three of these models present challenges.
To conclude, Professor Maxwell presented three main takeaways. First, important decisions about humans should be made by humans, not by AI systems; second, AI decisions should be contestable, which requires an acceptable degree of transparency and traceability; and third, AI systems should always respect the principles of equality and the rule of law. The audience was very interested in the keynote presentations and asked how the principles presented might be applied in a corporate context and in the area of agile production. In the breakout sessions, data issues were discussed at length: participants mentioned problems connected to biased datasets, data manipulation, data loss and hacking. Human-in-the-loop was mentioned as a necessary practice to ensure the safe use of AI and robot systems. Participants also expressed concern about data being collected for marketing purposes and called for regulation addressing corporate responsibility.
The goal of the Robotics4EU project is to promote wider adoption of AI-based robots in Europe. In this workshop, we collaborated with innovators, researchers, citizens and decision-makers working in agile production, which helped raise awareness of the non-technological aspects of robotics among stakeholders. The workshop was also important for Robotics4EU as an opportunity to gather feedback on the RoboCompass and to understand sector-specific challenges.
Author: Silvia Ecclesia