
Engineering and Computer Ethics: A Brief History

7 September 2021

The first ethical code for engineers appeared in 1912. It was adopted by the American Institute of Electrical Engineers and stated that an engineer must consider the protection of the interests of his client or his employer as his first professional obligation (Zandvoort et al., 2000). The fields of engineering ethics and computational ethics, however, regard technology as a public good that carries a form of social responsibility. These ethics emerged during the 1950s in the wake of the work of Norbert Wiener, who anticipated a new industrial revolution that would replace thousands of manual and intellectual jobs and who therefore judged it necessary to define ethical rules framing artificial agents capable of making decisions (Van Den Hoven and Weckert, 2008). This vision was pessimistic, since it predicted « bad uses » of technologies that would profoundly transform the world (Himma and Tavani, 2008). Ethical considerations became more prominent during the following decade, as computer science advanced even though it remained reserved for an elite. Reflection began with questions about the autonomy of decision-making systems and about how, in practice, engineers' moral responsibility could be engaged. In the United States, the first programs focused on the ethical aspects of technology emerged during this decade. However, the driving force behind this movement was less computing than environmental issues, with researchers denouncing the harmful effects of an industry making uncontrolled use of technologies (Zandvoort et al., 2000).

In 1967, Marvin Minsky, one of the founding fathers of artificial intelligence, argued that there is no reason to think that machines have limits that are not shared with human beings (Minsky, 1967). The same year, the philosopher Philippa Foot revisited the doctrine of double effect through dilemmas in which every available choice leads to harmful consequences; such dilemmas have since gained prominence in debates around autonomous technologies (Allen et al., 2006). Two years later, the Hastings Center, an ethics research institute based in the United States, was founded; it contributed to introducing ethics into engineering, a field that can be defined as « dealing with judgments and decisions concerning the actions of engineers (individually or collectively) which involve moral principles of one sort or another » (Baum, 1980). In 1974, the Engineers' Council for Professional Development (ECPD) adopted « The Canons of Ethics of Engineers ». The current version of the text states that « As members of this profession, engineers are expected to exhibit the highest standards of honesty and integrity. Engineering has a direct and vital impact on the quality of life for all people. Accordingly, the services provided by engineers require honesty, impartiality, fairness, and equity, and must be dedicated to the protection of public health, safety, and welfare ». This statement encapsulates most codes of ethics governing the engineering profession, which is caught in a pincer movement between the requirements of the employer and the responsibility that engages the engineer towards society (Zandvoort et al., 2000).

Nowadays, as distinct from personal morality, the ethics of the engineer frames standards that can be commonly shared. Engineers also experience a conflict between the requirements of their employer on the one hand and their responsibility towards society on the other (Harris et al., 1996). Moreover, an engineer cannot resolve every ethical dilemma alone, because of the multidimensional nature of a technology that no single person shapes (Turilli and Floridi, 2009). There is no reason to underestimate the impact, whether positive or negative, of engineers on the social world (Van Der Vorst, 1998), even though several voices claim that an engineer cannot be held responsible for the bad uses to which a technology is put. Engineers always find good reasons not to recognize responsibility for the results of their actions: a technological product is never created by a single person, a single consequence can have multiple causes, the people involved in an engineering project are interchangeable, institutional constraints also have to be considered, and engineers are often individuals without power within an organization (Davis, 2012). However, when they claim the neutrality of technology, it can be countered that technology is not neutral: it is about how humans shape it and what humans do with it.

When the purpose of a given technology is to be autonomous, other ethical questions arise. This particular field is covered by computational ethics, which is about implementing moral decision-making in computers or bots (Allen et al., 2006). Computational ethics results from the analysis of the social impact of computational technologies and concerns the ethical uses of those technologies (Moor, 1985). Because developers are often unaware of the effects of their creations, Palm and Hansson (2006) suggest an assessment model that considers, among other factors, the dissemination and use of information; the concepts of control, influence, and power; the social impact of the technology; and its impact on human values. These questions have become prominent in recent years with the development of artificial intelligence technologies, where ethical considerations can be tackled through the lens of codes of ethics that might include injunctions or recommendations, moral strategies to align AI with human values, and technological strategies to make AI error-free or safer (Boddington, 2017: 27-37). Considering that its mission encompasses the promotion of excellence and trust in AI, the European Commission published ethics guidelines in April 2019, highlighting principles such as accuracy, accountability, privacy protection, data governance, traceability, and transparency.

Despite the malleability of computer programs, not everything can be controlled (Moor, 1998). In the domain of algorithm-based decision-making, of which automated news production is one field, it is essential to remember that these technologies result from human decisions made upstream (Benjamin, 2012). An algorithmic procedure can follow different paths and give different results depending on how it was programmed, and bad judgments can appear at different levels: algorithmic processes are therefore not error-free (McCosker and Milne, 2014). It must also not be forgotten that automation rests on principles of human-agent association in which the human being is primarily responsible for the results. Humans are actively involved in creating automated processes and are therefore likely to incorporate human biases into them (Dutton and Kraemer, 1980). However, biases are not only human; they can also be institutional or technical, and they are not always easy to identify (Friedman and Nissenbaum, 1996). To the question of biases must be added that of the failure of procedures. Although procedures are generally predictable, they are not immune to risks related to unrealistic objectives, poor specifications, unmanaged risks, or immature technology, an aspect that receives little consideration in organizations (Charette, 2005). The opacity of these processes, often compared to black boxes, casts doubt on their sincerity even though they are considered mechanically reliable, precise, and credible (Gillespie, 2014). This opacity is not always explained by underlying intentions: in most cases it stems from code owners' concerns to protect their code, and even this form of opacity has potential social implications (Burrell, 2016).
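As a purely illustrative sketch, not drawn from any of the works cited here, the following hypothetical Python fragment shows how a single threshold and weighting chosen by a developer embed human judgments in an ostensibly mechanical news-selection process; all names and values are invented for the example.

```python
# Hypothetical example: an "automated" selection rule for news items.
# The threshold and the weighting below are human choices made upstream;
# they encode editorial judgments even though the execution is mechanical.

RELEVANCE_THRESHOLD = 0.7   # chosen by a developer, not by the machine
RECENCY_WEIGHT = 0.4        # implicit value judgment: newer is "better"

def select_items(items):
    """Return items whose weighted score exceeds a hard-coded threshold.

    `items` is a list of dicts with 'relevance' and 'recency' scores in [0, 1].
    Changing RELEVANCE_THRESHOLD or RECENCY_WEIGHT changes which items the
    'automated' process publishes: the bias lives in these parameters.
    """
    selected = []
    for item in items:
        score = (1 - RECENCY_WEIGHT) * item["relevance"] + RECENCY_WEIGHT * item["recency"]
        if score >= RELEVANCE_THRESHOLD:
            selected.append(item)
    return selected

if __name__ == "__main__":
    sample = [
        {"title": "A", "relevance": 0.9, "recency": 0.2},
        {"title": "B", "relevance": 0.6, "recency": 0.95},
    ]
    # With these settings, the fresher but less relevant item B is selected.
    print([i["title"] for i in select_items(sample)])
```

Here the outcome changes as soon as the weighting changes, which is the point: the "error" or bias is not in the execution but in decisions made upstream by humans.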

Computational code formalizes a set of rules, routines, and institutionalized procedures operating within a predetermined social framework and underpinned by professional expertise. Algorithms involve human judgments which cannot, by nature, be considered « neutral » (Gillespie, 2014). An algorithm results from a socio-technical assembly whose components can be understood through the values or issues they convey (Kitchin and Dodge, 2011; Geiger, 2014; McCarthy and Wright, 2007). No information is ethically neutral, because it depends on the principles that regulate information flows. These processes concern not only data and information: even transparency is the result of choices that are not only ethical but also dependent on economic and legal constraints. Therefore, whether in terms of the choice of processes, the methodology applied, or the configuration settings, the responsibility lies with the architects of the system (Turilli and Floridi, 2009).
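To make the point about configuration concrete, here is a minimal hypothetical sketch, not taken from the cited literature, in which the degree of transparency a system offers is itself a parameter set by its architects; every name in it is invented for illustration.

```python
# Hypothetical illustration: "transparency" as an architect-chosen setting.
# What the system discloses about its own decisions is itself configured
# by humans, under economic and legal constraints.

from dataclasses import dataclass

@dataclass
class DisclosurePolicy:
    log_inputs: bool = False      # record what data the decision used
    log_rationale: bool = False   # record which rule fired and why

def decide(value: float, policy: DisclosurePolicy) -> dict:
    """Apply a trivial rule and disclose only what the policy allows."""
    decision = "accept" if value >= 0.5 else "reject"
    record = {"decision": decision}
    if policy.log_inputs:
        record["input"] = value
    if policy.log_rationale:
        record["rationale"] = "value >= 0.5" if decision == "accept" else "value < 0.5"
    return record

# Same rule, two configurations: an opaque one and a more transparent one.
print(decide(0.4, DisclosurePolicy()))
print(decide(0.4, DisclosurePolicy(log_inputs=True, log_rationale=True)))
```

The rule never changes; only the disclosure settings do, which illustrates why responsibility for how much of the process is visible rests with those who configure the system rather than with the code itself.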

For all of these reasons, social agents from the technological world have to remain critical of their work, which presupposes an awareness of their social responsibility. While ethical codes do exist, their moral authority is not self-evident, because professional ethics can conflict with the personal morality of the individual. A formal code can also be recognized as professionally authoritative within a profession without exercising any form of moral authority (Davis, 1987). Nevertheless, an ethical code of conduct remains a kind of contract of trust with society and can be considered a step toward the recognition of technology as a public good that needs to be framed.

In 2011, in Berlin, three engineers published The Critical Engineering Manifesto. The document pleads for a critical, if not responsible, attitude from the engineer. The manifesto describes engineering as « the most transformative language of our time, shaping the way we move, communicate and think ». It does not consider technology as good or bad, but calls for awareness of our dependence on technology as a challenge as much as a threat. It can be read as a call for self-critical examination, attesting to a form of moral and individual responsibility. However, in some research laboratories, technocentric discourses remain dominant: they are less about how to shape technology for society than about adapting society to technology (Sabanovic, 2010). This deterministic approach denies that the limits of technology are less technical than social (Dourish, 2016; Seaver, 2017).


References

ALLEN, C., WALLACH, W. and SMIT, I. 2006. Why machine ethics? IEEE Intelligent Systems, 21(4):12–17.

BAUM, R. J. 1980. Ethics and Engineering Curricula (The Hastings Center). The Teaching of Ethics VII.

BENJAMIN, S.M. 2012. Algorithms and speech. University of Pennsylvania Law Review, 161:1445–1494.

BODDINGTON, P. 2017. Towards a code of ethics for artificial intelligence. Springer, Cham.

BURRELL, J. 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data and Society, 3(1):1–12.

CHARETTE, R. N. 2005. Why software fails. IEEE Spectrum, 42(9):36.

DAVIS, M. 1987. The moral authority of a professional code. Nomos, 29:302–337.

DAVIS, M. 2012. « Ain’t no one here but us social forces »: Constructing the professional responsibility of engineers. Science and Engineering Ethics, 18(1):13–34.

DOURISH, P. 2016. Algorithms and their others: Algorithmic culture in context. Big Data and Society, DOI: 10.1177/2053951716665128

DUTTON, W. H. and KRAEMER, K. L. 1980. Automating bias. Society, 17(2):36–41.

FRIEDMAN, B. and NISSENBAUM, H. 1996. Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3):330–347.

GEIGER, R. S. 2014. Bots, bespoke, code and the materiality of software platforms. Information, Communication and Society, 17(3):342–356.

GILLESPIE, T. 2014. The relevance of algorithms. In GILLESPIE, T., BOCZKOWSKI, P. and FOOT, K., eds: Media Technologies: Essays on Communication, Materiality, and Society, Inside Technology, pp. 167–194. MIT Press, Cambridge, Massachusetts.

HARRIS, C. E., DAVIS, M., PRITCHARD, M. S. and RABINS, M. J. 1996. Engineering ethics: What? Why? How? And when? Journal of Engineering Education, 85:93–96.

HIMMA, K. E. and TAVANI, H. T. 2008. The handbook of information and computer ethics. John Wiley and Sons, Hoboken, New Jersey.

KITCHIN, R. and DODGE, M. 2011. Code/space: Software and everyday life. MIT Press, Cambridge, Massachusetts.

MCCARTHY, J. and WRIGHT, P. 2007. Technology as experience. MIT Press, Cambridge, Massachusetts.

MCCOSKER, A. and MILNE, E. 2014. Coding Labour. Cultural Studies Review, 20(1):4–29.

MINSKY, M. L. 1967. Computation: Finite and infinite machines. Prentice Hall, Upper Saddle River, New Jersey.

MOOR, J. H. 1985. What is computer ethics? Metaphilosophy, 16(4):266–275.

MOOR, J. H. 1998. Reason, relativity, and responsibility in computer ethics. ACM SIGCAS Computers and Society, 28(1):14–21.

PALM, E. and HANSSON, S. O. 2006. The case for ethical technology assessment (eTA). Technological Forecasting and Social Change, 73(5):543–558.

SABANOVIC, S. 2010. Robots in society, society in robots: Mutual shaping of society and technology as a framework for social robot design. International Journal of Social Robotics, 2(4):439–450.

SEAVER, N. 2017. Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data and Society, 4(2). DOI: 10.1177/2053951717738104.

TURILLI, M. and FLORIDI, L. 2009. The ethics of information transparency. Ethics and Information Technology, 11(2):105–112.

VAN DEN HOVEN, J. and WECKERT, J. 2008. Information technology and moral philosophy. Cambridge University Press, Cambridge.

VAN DER VORST, R. 1998. Engineering, ethics and professionalism. European Journal of Engineering Education, 23(2):171–179.

ZANDVOORT, H., VAN DE POEL, I. and BRUMSEN, M. 2000. Ethics in the engineering curricula: Topics, trends and challenges for the future. European Journal of Engineering Education, 25(4):291–302.
