
Report Offers Guidelines for Ethics of Technology Design

December 14, 2017


Not all the ethical ramifications of new technology revolve around weaponization, but that is certainly a genuinely important area needing firm guidelines. (Photo: Campaign to Stop Killer Robots/CC BY 2.0 /Wikimedia Commons)

If kids spend hours a day speaking to digital personal assistant Alexa, how will this affect the way they connect to real people? When a self-driving car runs over a pedestrian, who do you take to court? Is it OK to manipulate people’s emotions if it’s making them happier?

Together with an international team of researchers in fields as diverse as philosophy, engineering and anthropology, we set out to tackle these questions. The result is a new set of guidelines focused on the ethical and social implications of autonomous and intelligent systems. That includes everything from big data and social media algorithms to autonomous weapons.


This article by Rafael A Calvo and Dorian Peters originally appeared at The Conversation, a Social Science Space partner site, under the title “Engineers, philosophers and sociologists release ethical design guidelines for future technology”

The report, Ethically Aligned Design, was released Tuesday, December 12, by the Institute of Electrical and Electronics Engineers (IEEE). It is the culmination of a year’s work by 250 world leaders in technology, law, social science, business and government spanning six continents.

IEEE is the world’s largest technical professional organization. With more than 420,000 members in 160 countries, it’s the global authority for professional standards related to technology. The latest report proposes a set of recommendations that are open to public feedback.

Once adopted, the guidelines in the report will be implemented by professional organizations, accreditation boards and educational institutions to ensure the next generation of engineers incorporate ethical considerations into their work.

Guiding principles
The big questions posed by our digital future sit at the intersection of technology and ethics. This is complex territory that requires input from experts in many different fields if we are to navigate it successfully.

To prepare the report, economists and sociologists researched the effect of technology on disempowered groups. Lawyers considered the future of privacy and justice. Doctors and psychologists examined impacts on physical and mental health. Philosophers unpacked hidden biases and moral questions.

The report suggests all technologies should be guided by five general principles:

  • protecting human rights
  • prioritizing and employing established metrics for measuring well-being
  • ensuring designers and operators of new technologies are accountable
  • making processes transparent
  • minimizing the risks of misuse

Sticky questions
The report runs the spectrum from practical to more abstract concerns, touching on personal data ownership, autonomous weapons, job displacement and questions like “can decisions made by amoral systems have moral consequences?”


One section deals with a “lack of ownership or responsibility from the tech community.” It points to a divide between how the technology community sees its ethical responsibilities and the broader social concerns raised by public, legal, and professional communities.

Each issue tackled includes background discussion and a set of candidate recommendations. For example, the section on autonomous weapons recommends measures to ensure meaningful human control. The section on employment recommends the creation of an independent body to track the impact of robotics on jobs and economic growth.

A section on affective computing – an area that studies how computers can detect, express and even “feel” emotions – raises concerns about how long-term interaction with computers could change the way people interact with each other.

This brings us back to our question: if kids spend hours a day speaking to Siri or Alexa how will these interactions change them?

The report makes two recommendations on this point:

1) To acknowledge how much we don’t know (we need to learn much more before these systems become widely used);

2) That humans who witness negative impacts – parents, social workers, governments – learn to detect them and have ways to address them, or even shut technologies down. Experience shows this is not always easy – try forbidding your child from watching YouTube and see how well that flies.

Affective computing is clearly an area where evidence of technology’s human impact is particularly scarce.

Consultation and feedback
IEEE standards are developed iteratively, and the organization will use the findings in this report to build a definitive set of guidelines over time.

Early feedback on an earlier version of the report highlighted its Western-centric bias. As a result, a larger and more diverse panel was recruited. A number of new sections were added, including the section on affective computing, along with policy, classical ethics, mixed reality (including augmented reality technologies like Google Glass) and wellbeing.

Over the next year, the final version will be released as a handbook with recommendations that technologists and policy makers can turn to, and be held accountable for, as our technological future unfolds.

This is an important step toward breaking the protective wall of specialization that allows technologists to separate themselves from the impact of their work on society at large. It will demand that future tech leaders take responsibility for ensuring that the technology we build as humans genuinely benefits us and our planet.


Rafael Calvo is a professor at the University of Sydney, ARC Future Fellow and Director of the Wellbeing Technologies Lab. Dorian Peters is a designer, author, and specialist in user experience for learning and wellbeing. She is currently creative leader at the Positive Computing Lab at the Faculty of Engineering, University of Sydney and is also a member of the Centre for Research on Computer Supported Learning & Cognition.

