Deep learning techniques are considered state-of-the-art machine learning algorithms. They are increasingly being adopted in diverse fields because of their ease of use and superior performance over traditional machine learning algorithms. They allow users to process large amounts of data very effectively and provide results that were previously considered impossible to achieve [1]. As deep learning models become more capable over time, people depend increasingly on their outputs. This has raised several ethical concerns: what protocols should we maintain when developing a model, who is responsible for the decisions the model makes, where should we use deep learning, to what extent should we rely on its decisions, and so on [2]. This article examines ethics in deep learning. It is divided into three sections: the first discusses ethics in developing a deep learning model, the second discusses ethics in using deep learning, and the third evaluates the necessity of ethics in both developing and using deep learning.
Ethical Concerns in Developing Deep Learning Models
Despite its widespread usage, deep learning is still considered a black box [3]. The development process of a model can be explained simply: first, we need a collection of data on which to perform a particular task, e.g., classification. Next, we select a deep learning architecture to perform the task and produce results. Then, we compare the model's outputs against the expected results and optimize the architecture until it produces the desired results. The two main aspects of developing a deep learning model are therefore the data and the optimization of the architecture.
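The loop described above, predict, compare with the actual labels, then optimize, can be sketched in a few lines of plain Python. The toy data, the single-weight logistic model, and the learning rate below are all invented for illustration; they stand in for the real dataset and architecture a practitioner would choose.

```python
import math

# Toy labeled data: (feature, class) pairs -- the "collection of data".
data = [(0.5, 0), (1.0, 0), (1.5, 0), (3.0, 1), (3.5, 1), (4.0, 1)]

# A minimal "architecture": logistic regression with one weight and a bias.
w, b = 0.0, 0.0

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output in (0, 1)

def loss():
    # Cross-entropy: compares the model's outputs with the actual labels.
    return -sum(y * math.log(predict(x)) + (1 - y) * math.log(1 - predict(x))
                for x, y in data) / len(data)

lr = 0.5
initial_loss = loss()
for _ in range(200):  # optimize: nudge the parameters to reduce the loss
    grad_w = sum((predict(x) - y) * x for x, y in data) / len(data)
    grad_b = sum((predict(x) - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b
```

After the loop the loss is lower than at the start, which is exactly the "compare and optimize" step of the description above; everything else about a real deep learning model (more layers, more data) is elaboration of this same cycle.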
Data is considered the most important element in designing a deep learning model: a model is only as good as the data it uses [4]. As the performance of the model depends heavily on the data, it is necessary to use unbiased data. The United States National Institute of Standards and Technology tested 189 facial recognition algorithms and found that software developed in the USA produced higher false-positive rates, including in one-to-many matching, for Asians, Native Americans, and Africans [5]. Another concern is the data collection process itself. Tech giants such as Amazon, Facebook, Google, and IBM continuously collect data from users without proper consent and use it to develop services based on deep learning algorithms [6], [7]. Recently, a report showed that Facebook uses user data to predict the purchases a user is likely to make and provides that information to advertisers [8]. This is a concerning violation of user privacy. We need to develop rules on data collection for deep learning models, and developers should be aware of, and responsible for, the data that they use.
Due to the limited understanding of how deep learning models work internally, developers build models by trial and error [9]–[11]. In this approach, a model is evaluated by its accuracy, and accuracy depends on the data: if the data is biased, the results of the model will also be biased [12], [13]. Developers also sometimes report results on a very specific dataset, optimizing the model so that it works efficiently for that dataset but fails to generalize to other data. This problem can be mitigated by requiring researchers to explain what the model has learned along with the results. This would not only help developers build transparent deep learning models but also allow users to gain a better understanding of them.
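The failure to generalize described above is easy to demonstrate with a held-out evaluation. The sketch below uses a 1-nearest-neighbour "memoriser" and made-up points: it scores perfectly on the data it was fit to, so reporting only training accuracy would hide the gap that the unseen test points reveal.

```python
# Hypothetical 2-D points with binary labels; both lists are invented.
train = [((0.0, 0.0), 0), ((1.0, 0.0), 0), ((0.0, 1.0), 1), ((1.0, 1.0), 1)]
test  = [((0.1, 0.0), 0), ((0.9, 0.9), 1), ((0.5, 0.45), 1)]

def classify(point, memory):
    # 1-nearest-neighbour: predict the label of the closest remembered point.
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(memory, key=lambda item: dist(item[0], point))[1]

def accuracy(dataset, memory):
    hits = sum(classify(p, memory) == y for p, y in dataset)
    return hits / len(dataset)

train_acc = accuracy(train, train)  # always 1.0: each point is its own neighbour
test_acc = accuracy(test, train)    # the honest number to report
```

Reporting `test_acc` alongside `train_acc` is the minimal form of the transparency the paragraph above asks for: it forces the developer to show how the model behaves beyond the dataset it was optimized on.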
The ethical concerns in the development of deep learning models arise from their widespread usage. It is necessary to ensure that developers understand the systems they are building and remain in full control of them. Though the singularity might seem improbable, we need to stay in control to avoid it [14].
Ethics in Usage of Deep Learning Models
We live in a world where many aspects of our lives are affected by deep learning models. Deep learning is used in education, health care, agriculture, military training, human resources and recruiting, autonomous vehicles, social media newsfeeds, and so on [2]. Deep learning has allowed industrial tools to become autonomous, decreasing the need for human labor. Not only industry but other sectors as well are affected by job automation. As machines become ever more efficient at making complex and accurate decisions, the need for human labor decreases day by day. This is gradually creating unemployment in society, and the effect will soon become more evident [15]. Though automation reduces the manual labor done by humans, we need to find ways to combine human labor with automated tools to keep unemployment under control.
Recently, Google showcased Google Duplex, which can make phone calls and make decisions by imitating humans using deep learning techniques [16]. This compromises both privacy and security. It is possible to train the automated system's voice to imitate a specific person, which can be considered identity theft, and the system could be used to gather information from other people. It also violates privacy, as the listener may be unaware that the voice on the phone is a machine rather than an actual human. Laws requiring machines to identify themselves before a conversation starts will not stop such breaches, because someone intent on gathering information by impersonation will not abide by them. Deepfakes are another recent development, in which deep learning models create fake video or audio clips [17]. Their impact is massive, as they can be used to spread false information and launch cybercrime attacks [18]. Last year, a scammer used a deepfake voice to complete a $243,000 fraudulent wire transfer [19]. As a result, government agencies and social media platforms are continuously integrating techniques to detect deepfakes. The advancement of technology will only produce better models that imitate humans more accurately [20]. It is high time we decide how deep learning may be used so as to ensure security and privacy [21].
Another important application of deep learning is health care, where machines have outperformed clinicians in medical diagnosis and treatment recommendation. DeepMind's researchers used a deep learning model to predict protein folding, a problem that had remained unsolved for 50 years [22]. Though the contribution of deep learning models to healthcare is remarkable, it poses a direct risk to human life. The most pressing question in medical diagnosis is who will be responsible for a death caused by misdiagnosis: the machine, the clinician using the machine, or the developer who built the diagnosis system [23]. Geis et al. [24] published a statement outlining the necessity of considering ethical aspects when designing intelligent clinical systems based on deep learning models. They stated that deep learning might increase the accuracy of clinical diagnosis, but we are far from being ready to deploy it into the clinical environment right away.
Autonomous vehicles are another sector where decisions made by deep learning models are directly connected to human life. With the help of the vehicle's sensors, a deep learning model can predict when to accelerate or brake, but it has no notion of ethics. A situation might arise in which the system has to decide between the lives of the car's occupants, pedestrians, or occupants of other vehicles [25]. Lacking ethical consideration, the system cannot determine the appropriate course of action; and even if such decisions were made, the question remains of who would be responsible for the incident. Autonomous vehicles can only simulate events they are aware of, and unforeseen events can cause accidents on the road [26].
A private company, Northpointe, proposed an algorithm called COMPAS for use in judicial decisions [27]. However, Angwin et al. [28] studied the decisions made by COMPAS and found that the algorithm was skewed toward labeling black people as likely criminals. According to Lo Piano [2], these results stem from bias in the data used to train COMPAS. Though accuracy is often treated as the sole performance metric when developing deep learning models, it is also necessary to introduce fairness as a performance metric in systems that make judicial decisions.
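One concrete way to treat fairness as a metric is to compute error rates per group rather than a single overall accuracy; disparate false-positive rates are the kind of skew Angwin et al. reported. The records below are entirely invented for illustration and have nothing to do with the actual COMPAS data.

```python
from collections import defaultdict

# Hypothetical records: (group, true_label, predicted_label),
# where 1 = flagged as high risk and 0 = not flagged.
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 1), ("B", 1, 1),
]

def false_positive_rates(rows):
    # Per-group FPR: fraction of truly negative cases wrongly flagged positive.
    fp, negatives = defaultdict(int), defaultdict(int)
    for group, truth, pred in rows:
        if truth == 0:
            negatives[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

rates = false_positive_rates(records)
# A large gap between groups signals the kind of bias described above,
# even when overall accuracy looks acceptable.
disparity = max(rates.values()) - min(rates.values())
```

Tracking a disparity number like this alongside accuracy makes the fairness requirement measurable instead of aspirational; which fairness definition to use (equal false-positive rates, demographic parity, and so on) remains a policy choice.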
There are numerous uses of deep learning models; however, to build an acceptable and reliable model we need to ensure that there is no data bias in the system. We have to consider the social impact of deploying such automated systems and investigate their various ethical aspects. We must also be aware of the threats that deep learning models might pose and have sufficient resources to mitigate the resulting problems as much as possible. Rather than depending solely on the decisions of deep learning models, it is better to use the models to aid humans in making decisions.
The Necessity of Ethics in Deep Learning
Deep learning models cannot, by themselves, tell right from wrong; they lack empathy, emotion, and, above all, a moral compass. The ethical behavior of a deep learning model depends on its developer, since a machine can only make decisions based on what it was taught [29]. We are entrusting our lives to the decisions of machines, and it is possible to tamper with those decisions in ways that cause an accident. Due to the lack of transparency and accountability, developers are building deep learning models and integrating them into our daily lives without weighing the consequences of the decisions those models make.
We are long past the time when the results of an algorithm alone were sufficient; we now require a better understanding of the process [10], [30], [31]. A regulation establishing a "right to explanation" was proposed in the European Union, giving users the right to ask for an explanation of an algorithm's output [32]. This will compel developers to be accountable for their decisions. It will also help reduce decision bias, as developers will be forced to build transparent models.
In recent times, ethics in deep learning has come to be considered as important as the development of the models themselves. As more and more industries adopt deep learning models, we need to enact laws, rules, and regulations on their development and usage. This will ensure maximum benefit with minimum potential harm. Fortunately, global technology companies are raising concerns over the usage of deep learning and incorporating ethical measures into the development of deep learning based services [33]. We, too, should consider the ethical aspects of our innovations and develop technologies accordingly.
[1] S. Dargan, M. Kumar, M. R. Ayyagari, and G. Kumar, “A Survey of Deep Learning and Its Applications: A New Paradigm to Machine Learning,” Arch. Comput. Methods Eng., vol. 27, no. 4, pp. 1071–1092, Sep. 2020, doi: 10.1007/s11831-019-09344-w.
[2] S. Lo Piano, “Ethical principles in machine learning and artificial intelligence: cases from the field and possible ways forward,” Humanit. Soc. Sci. Commun., vol. 7, no. 1, Dec. 2020, doi: 10.1057/s41599-020-0501-9.
[3] Fan-Yin Tzeng and Kwan-Liu Ma, “Opening the Black Box — Data Driven Visualization of Neural Networks,” 2006, pp. 383–390, doi: 10.1109/visual.2005.1532820.
[4] “Ethical Use of Data for Training Machine Learning Technology — Part 2.” [Online]. Available: https://info.aiim.org/aiim-blog/ethical-use-of-data-for-training-machine-learning-technology-part-2. [Accessed: 04-Dec-2020].
[5] “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software | NIST.” [Online]. Available: https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software. [Accessed: 04-Dec-2020].
[6] “Companies are making money from our personal data — but at what cost? | Technology | The Guardian.” [Online]. Available: https://www.theguardian.com/technology/2016/aug/31/personal-data-corporate-use-google-amazon. [Accessed: 04-Dec-2020].
[7] “Amazon accused of recording children without consent | WIRED UK.” [Online]. Available: https://www.wired.co.uk/article/wired-awake-130619. [Accessed: 04-Dec-2020].
[8] “Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers.” [Online]. Available: https://theintercept.com/2018/04/13/facebook-advertising-data-artificial-intelligence-ai/. [Accessed: 04-Dec-2020].
[9] A. Holzinger, “From machine learning to explainable AI,” in DISA 2018 — IEEE World Symposium on Digital Intelligence for Systems and Machines, Proceedings, 2018, pp. 55–66, doi: 10.1109/DISA.2018.8490530.
[10] A. Holzinger, P. Kieseberg, E. Weippl, and A. M. Tjoa, “Current advances, trends and challenges of machine learning and knowledge extraction: From machine learning to explainable AI,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2018, vol. 11015 LNCS, pp. 1–8, doi: 10.1007/978-3-319-99740-7_1.
[11] A. Morichetta, P. Casas, and M. Mellia, “Explain-IT: Towards explainable AI for unsupervised network traffic analysis,” in Big-DAMA 2019 — Proceedings of the 3rd ACM CoNEXT Workshop on Big DAta, Machine Learning and Artificial Intelligence for Data Communication Networks, Part of CoNEXT 2019, 2019, pp. 22–28, doi: 10.1145/3359992.3366639.
[12] “Ethics in Machine Learning. The ethics of how a Machine Learning… | by Apara Venkateswaran | Towards Data Science.” [Online]. Available: https://towardsdatascience.com/ethics-in-machine-learning-9fa5b1aadc12. [Accessed: 04-Dec-2020].
[13] “Machine learning ethics: what you need to know and what you can do | Packt Hub.” [Online]. Available: https://hub.packtpub.com/machine-learning-ethics-what-you-need-to-know-and-what-you-can-do/. [Accessed: 04-Dec-2020].
[14] “Are the robots about to rise? Google’s new director of engineering thinks so… | Technology | The Guardian.” [Online]. Available: https://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence. [Accessed: 04-Dec-2020].
[15] “AI, Machine Learning, and Unemployment — Growth Acceleration Partners.” [Online]. Available: https://www.growthaccelerationpartners.com/blog/ai-machine-learning-unemployment/. [Accessed: 04-Dec-2020].
[16] “Google Duplex: A.I. Assistant Calls Local Businesses To Make Appointments — YouTube.” [Online]. Available: https://www.youtube.com/watch?v=D5VN56jQMWM&ab_channel=JeffreyGrubb. [Accessed: 04-Dec-2020].
[17] “The Emergence of Deepfake Technology: A Review | TIM Review.” [Online]. Available: https://timreview.ca/article/1282. [Accessed: 04-Dec-2020].
[18] “Why We Need Ethical AI: 5 Initiatives to Ensure Ethics in AI.” [Online]. Available: https://datafloq.com/read/we-need-ethical-ai-5-initiatives-ensure-ethical-ai/7571. [Accessed: 04-Dec-2020].
[19] “A Voice Deepfake Was Used To Scam A CEO Out Of $243,000.” [Online]. Available: https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=5adfb8352241. [Accessed: 04-Dec-2020].
[20] “Interview: Eugene Goostman Passes the Turing Test | Time.” [Online]. Available: https://time.com/2847900/eugene-goostman-turing-test/. [Accessed: 04-Dec-2020].
[21] “Google’s AI Assistant Is a Reminder that Privacy and Security Are Not the Same.” [Online]. Available: https://hbr.org/2018/05/googles-ai-assistant-is-a-reminder-that-privacy-and-security-are-not-the-same. [Accessed: 04-Dec-2020].
[22] “DeepMind’s protein-folding AI has solved a 50-year-old grand challenge of biology | MIT Technology Review.” [Online]. Available: https://www.technologyreview.com/2020/11/30/1012712/deepmind-protein-folding-ai-solved-biology-science-drugs-disease/. [Accessed: 04-Dec-2020].
[23] T. Grote and P. Berens, “On the ethics of algorithmic decision-making in healthcare,” Journal of Medical Ethics, vol. 46, no. 3. BMJ Publishing Group, pp. 205–211, 01-Mar-2020, doi: 10.1136/medethics-2019-105586.
[24] J. R. Geis et al., “Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement,” Can. Assoc. Radiol. J., vol. 70, no. 4, pp. 329–334, Nov. 2019, doi: 10.1016/j.carj.2019.08.010.
[25] J. F. Bonnefon, A. Shariff, and I. Rahwan, “The trolley, the bull bar, and why engineers should care about the ethics of autonomous cars,” Proc. IEEE, vol. 107, no. 3, pp. 502–504, Mar. 2019, doi: 10.1109/JPROC.2019.2897447.
[26] E. Yurtsever, L. Capito, K. Redmill, and U. Ozguner, “Integrating Deep Reinforcement Learning with Model-based Path Planners for Automated Driving,” arXiv, Feb. 2020.
[27] “Practitioners Guide to COMPAS,” 2012.
[28] “Machine Bias — ProPublica.” [Online]. Available: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. [Accessed: 04-Dec-2020].
[29] “The Importance of Ethics in Artificial Intelligence | by Ferry Hoes | Towards Data Science.” [Online]. Available: https://towardsdatascience.com/the-importance-of-ethics-in-artificial-intelligence-16af073dedf8. [Accessed: 04-Dec-2020].
[30] R. R. Hoffman, S. T. Mueller, G. Klein, and J. Litman, “Metrics for Explainable AI: Challenges and Prospects.”
[31] D. Doran, S. Schulz, and T. R. Besold, “What Does Explainable AI Really Mean? A New Conceptualization of Perspectives.”
[32] B. Goodman and S. Flaxman, “European union regulations on algorithmic decision making and a ‘right to explanation,’” AI Mag., vol. 38, no. 3, pp. 50–57, Sep. 2017, doi: 10.1609/aimag.v38i3.2741.
[33] “Artificial Intelligence: CIO Survey Sees Big Gains in 2020 | Fortune.” [Online]. Available: https://fortune.com/2019/11/19/artificial-intelligence-2020-cio-survey/. [Accessed: 04-Dec-2020].