Journal
2020 | 19 | no. 3 | 411--433
Article title

Artificial Intelligence in Economic Decision Making: How to Assure a Trust?

Title variants
Publication languages
EN
Abstracts
EN
Motivation: The decisions made by modern 'black box' artificial intelligence models are not understandable, and therefore people do not trust them. This limits the potential of artificial intelligence in practice. Aim: This text surveys initiatives in different countries showing how AI, and black-box AI in particular, can be made transparent and trustworthy, and which regulations have been implemented or are under discussion. We also show how a commonly used machine-learning development process can be enriched to fulfil requirements such as those of the Ethics guidelines for trustworthy AI of the European Union's High-Level Expert Group. We support the discussion with a proposal of empirical tools providing interpretability. Results: The full potential of AI, and of products using AI, can only be realised if the decisions of AI models are transparent and trustworthy. Regulations that are followed over the whole life cycle of AI models, algorithms and the products built on them are therefore necessary, as is the understandability or explainability of the decisions these models and algorithms make. Initiatives have started at every stakeholder level: internationally in the European Union, at the country level in the USA, China and elsewhere, and at the company level. Post-hoc local interpretability methods could and should be implemented by economic decision makers to ensure compliance with these regulations. (original abstract)
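
The abstract's closing recommendation of post-hoc local interpretability can be made concrete with a short sketch. The Python example below is illustrative only and is not the authors' empirical setup: it assumes scikit-learn and the lime package (Ribeiro et al., 2016) are installed, and it uses synthetic placeholder data and feature names instead of the Lending Club loan statistics cited in the bibliography.

# Minimal sketch of a post-hoc local interpretability method of the kind the
# abstract recommends. Assumptions: scikit-learn and the 'lime' package
# (Ribeiro et al., 2016) are available; the data and feature names are
# synthetic placeholders, not the Lending Club loan data used in the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(seed=0)
feature_names = ["loan_amount", "annual_income", "debt_to_income"]

# Synthetic credit-scoring data: label 1 = default, 0 = repaid (hypothetical).
X = rng.normal(size=(500, 3))
y = (X[:, 2] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# An opaque 'black box' model whose individual decisions need explaining.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc local explanation: approximate the model around one applicant and
# report per-feature contributions to that single decision.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["repaid", "default"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # e.g. [('debt_to_income > 0.61', 0.32), ...]

Under GDPR-style transparency requirements, feature-level reasons of this kind for an individual credit decision are the sort of artefact an economic decision maker could retain as compliance documentation.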
Year
2020
Volume
19
Issue
3
Pages
411--433
Physical description
Authors
  • Nicolaus Copernicus University in Toruń, Poland
  • Nicolaus Copernicus University in Toruń, Poland
Bibliography
  • Algorithm Watch. (2020). AI ethics guidelines global inventory. Retrieved 29.03.2020 from https://algorithmwatch.org.
  • Anderson, M., & Anderson, S.L. (2007). Machine ethics: creating an ethical intelligent agent. AI Magazine, 28(4). doi:10.1609/aimag.v28i4.2065.
  • Arendt, H. (2007). Über das Böse: Eine Vorlesung zu Fragen der Ethik. München-Zürich: Piper.
  • Arya, V., Bellamy, R.K.E., Chen, P.Y., Dhurandhar, A., Hind, M., Hoffman, S.C., Houde, S., Liao, Q.V., Luss, R., Mojsilović, A., Mourad, S., Pedemonte, P., Raghavendra, R., Richards, J., Sattigeri, P., Shanmugam, K., Singh, M., Varshney, K.R., Wei, D., & Zhang, Y. (2019). One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. Retrieved 29.03.2020 from https://arxiv.org.
  • Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. Plos One, 10(7). doi:10.1371/journal.pone.0130140.
  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58. doi:10.1016/j.inffus.2019.12.012.
  • Bejger, S., & Elster, S. (2019). Das blackbox problem: Künstlicher Intelligenz vertrauen. AI-Spektrum, 1.
  • Bostrom, N. (2014). Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press.
  • Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish, & W.M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020.
  • Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: how the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Lexington: Digital Frontier Press.
  • Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T., Shearer, C.R., & Wirth, R. (2000). CRISP-DM 1.0: step-by-step data mining guide. Retrieved 29.03.2020 from https://www.the-modeling-agency.com.
  • DARPA. (2016). Explainable artificial intelligence (XAI). Retrieved 27.03.2020 from https://www.darpa.mil.
  • Datta, A., Sen, S., & Zick, Y. (2016). Algorithmic transparency via quantitative input influence: theory and experiments with learning systems. In Proceedings of the 2016 IEEE symposium on security and privacy. San Jose: IEEE. doi:10.1109/SP.2016.42.
  • de Laat, P.B. (2018). Algorithmic decision-making based on machine learning from big data: can transparency restore accountability? Philosophy & Technology, 31(4). doi:10.1007/s13347-017-0293-z.
  • Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. Retrieved 29.03.2020 from https://arxiv.org.
  • Dutton, T. (2018). An overview of national AI strategies. Retrieved 27.03.2020 from https://medium.com.
  • European Commission. (2020a). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: A European strategy for data (COM/2020).
  • European Commission. (2020b). National strategies on artificial intelligence: a European perspective in 2019: country report: Poland. Retrieved 12.05.2020 from https://ec.europa.eu.
  • European Commission. (2020c). White paper on artificial intelligence: a European approach to excellence and trust. Retrieved 27.03.2020 from https://ec.europa.eu.
  • Executive Office of the President. (2019). Maintaining American leadership in artificial intelligence (E.O. 13859). Retrieved 29.03.2020 from https://www.federalregister.gov.
  • Ford, M. (2015). Rise of the robots: technology and the threat of a jobless future. New York: Basic Books.
  • Future of Life Institute. (2020). National and international AI strategies. Retrieved 12.05.2020 from https://futureoflife.org.
  • Gunkel, D.J. (2018). Robot rights. Cambridge-London: MIT Press.
  • High-Level Expert Group on Artificial Intelligence. (2019). Ethics guidelines for trustworthy AI. Retrieved 27.03.2020 from https://ec.europa.eu.
  • Holzinger, A. (2018). Explainable AI (ex-AI). Informatik-Spektrum, 41(2). doi:10.1007/s00287-018-1102-5.
  • Kaggle. (2019). Loan statistics Lending Club. Retrieved 21.10.2019 from https://www.kaggle.com.
  • Kurzweil, R. (2005). The singularity is near: when humans transcend biology. New York: Penguin Books.
  • Lending Club. (2019). Loan statistics. Retrieved 19.10.2019 from https://www.lendingclub.com.
  • Library of Congress. (2019). Regulation of artificial intelligence in selected jurisdictions. Retrieved 29.03.2020 from https://www.loc.gov.
  • Lipton, Z.C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10). doi:10.1145/3233231.
  • Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., & Lee, S.I. (2019). Explainable AI for trees: from local explanations to global understanding. Retrieved 29.03.2020 from https://arxiv.org.
  • Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data & Society, 3(2). doi:10.1177/2053951716679679.
  • Molnar, C. (2020). Interpretable machine learning: a guide for making black box models explainable. Retrieved 27.03.2020 from https://christophm.github.io.
  • Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1). doi:10.1007/BF02639315.
  • Plumb, G., Molitor, D., & Talwalkar, A.S. (2018). Model agnostic supervised local explanations. Advances in Neural Information Processing Systems, 31.
  • Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (GDPR) (OJ L 119).
  • Reichmann, W. (2019). Die Banalität des Algorithmus. In M. Rath, F. Krotz, & M. Karmasin (Eds.), Maschinenethik. Wiesbaden: Springer. doi:10.1007/978-3-658-21083-0_9.
  • Ribeiro, M.T. (2018). Anchor experiments. Retrieved 12.05.2020 from https://github.com.
  • Ribeiro, M.T., Singh, S., & Guestrin, C. (2016). Why should I trust you?: explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. New York: ACM.
  • Ribeiro, M.T., Singh, S., & Guestrin, C. (2018). Anchors: high-precision model-agnostic explanations. In Proceedings of the thirty-second AAAI conference on artificial intelligence. New Orleans: AAAI.
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5). doi:10.1038/s42256-019-0048-x.
  • Saabas, A. (2019). Treeinterpreter Python package. Retrieved 07.07.2019 from https://github.com.
  • Sauerwein, F. (2019). Automatisierung, Algorithmen, Accountability. In M. Rath, F. Krotz, & M. Karmasin (Eds.), Maschinenethik. Wiesbaden: Springer. doi:10.1007/978-3-658-21083-0_3.
  • Shrikumar, A., Greenside, P., Shcherbina, A., & Kundaje, A. (2016). Not just a black box: learning important features through propagating activation differences. Retrieved 29.03.2020 from https://arxiv.org.
  • Turner, J. (2019). Robot rules: regulating artificial intelligence. Cham: Palgrave Macmillan. doi:10.1007/978-3-319-96235-1.
  • Vapnik, V.N. (2000). The nature of statistical learning theory. New York: Springer. doi:10.1007/978-1-4757-2440-0.
  • Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: automated decisions and the GDPR. Retrieved 29.03.2020 from https://arxiv.org.
  • Waltl, B., & Vogl, R. (2018). Increasing transparency in algorithmic decision-making with explainable AI. Datenschutz und Datensicherheit, 42(10). doi:10.1007/s11623-018-1011-4.
  • Wischmeyer, T. (2020). Artificial intelligence and transparency: opening the black box. In T. Wischmeyer, & T. Rademacher (Eds.), Regulating artificial intelligence. Cham: Springer. doi:10.1007/978-3-030-32361-5_4.
  • Yudkowsky, E. (2001). Creating friendly AI 1.0: the analysis and design of benevolent goal architectures. Retrieved 12.05.2020 from https://intelligence.org.
Document type
Bibliography
Identifiers
YADDA identifier
bwmeta1.element.ekon-element-000171602365
