2022 | No. 44 | 446-494
Article title

Stakeholder-accountability model for artificial intelligence projects

Title variants
Publication languages
EN
Abstracts
EN
Aim/purpose - This research presents a conceptual stakeholder-accountability model for mapping project actors to the conduct for which they should be held accountable in artificial intelligence (AI) projects. AI projects differ from other projects in important ways, including in their capacity to inflict harm and to impact human and civil rights on a global scale. In-project decisions are high stakes, and who decides the system's features is critical. Even well-designed AI systems can be deployed in ways that harm individuals, local communities, and society.

Design/methodology/approach - The study uses a systematic literature review, accountability theory, and AI success factors to elaborate the relationships between AI project actors and stakeholders. The literature review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement process. Bovens' accountability model and AI success factors serve as the basis for the coding framework in the thematic analysis. The study also uses a web-based survey to collect data from respondents in the United States and Germany, employing statistical analysis to assess public opinion on AI fairness, sustainability, and accountability.

Findings - The AI stakeholder-accountability model specifies the complex relationships between 16 actors and 22 stakeholder forums, using 78 AI success factors to define the conduct, obligations, and consequences that characterize those relationships. The survey analysis suggests that more than 80% of the public thinks AI development should be fair and sustainable, and that the public sees the government and development organizations as most accountable in this regard. There are some differences between the United States and Germany regarding fairness, sustainability, and accountability.

Research implications/limitations - The results should benefit project managers and project sponsors in stakeholder identification and resource assignment. The definitions offer policy advisors insights for updating AI governance practices. The model presented here is conceptual and has not been validated on real-world projects.

Originality/value/contribution - The study adds context-specific information on AI to the project management literature. It defines project actors as moral agents and provides a model for mapping the accountability of project actors to stakeholder expectations and system impacts. (original abstract)
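As an illustration of the kind of country comparison the survey analysis describes, the sketch below shows how agreement rates on an AI-fairness item might be compared between the United States and Germany with a two-proportion z-test. The counts, sample sizes, and the choice of test are assumptions made for illustration only; they are not taken from the paper.

# Illustrative sketch only: hypothetical counts chosen to mirror the abstract's
# claim that more than 80% of respondents want AI development to be fair.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical respondents agreeing that AI development should be fair,
# out of the total sample per country (US, Germany).
agree = [412, 338]    # made-up agreement counts
totals = [500, 400]   # made-up sample sizes

stat, p_value = proportions_ztest(count=agree, nobs=totals)
print(f"US agreement:      {agree[0] / totals[0]:.1%}")
print(f"Germany agreement: {agree[1] / totals[1]:.1%}")
print(f"two-proportion z = {stat:.2f}, p = {p_value:.3f}")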
Year
Number
Pages
446-494
Physical description
Creators
  • Maxmetrics Heidelberg, Germany
Bibliography
  • 116th Congress (2019-2020). (2020). National Artificial Intelligence Initiative Act of 2020 (H.R. 6216). https://www.congress.gov/bill/116th-congress/house-bill/6216/all-actions
  • Aggarwal, J., & Kumar, S. (2018). A survey on artificial intelligence. International Journal of Research in Engineering, Science and Management, 1(12), 244-245. https://doi.org/10.31224/osf.io/47a85
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In FAccT 2021: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
  • Bonsón, E., Lavorato, D., Lamboglia, R., & Mancini, D. (2021). Artificial intelligence activities and ethical approaches in leading listed companies in the European Union. International Journal of Accounting Information Systems, 43, 100535. https://doi.org/10.1016/j.accinf.2021.100535
  • Bovens, M. (2007). Analysing and assessing accountability: A conceptual framework. European Law Journal, 13(4), 447-468. https://doi.org/10.1111/j.1468-0386.2007.00378.x
  • Bovens, M., Schillemans, T., & Hart, P. T. (2008). Does public accountability work? An assessment tool. Public Administration, 86(1), 225-242. https://doi.org/10.1111/j.1467-9299.2008.00716.x
  • Boyer, M., & Veigl, S. (2015, July 15-17). Privacy preserving video surveillance infrastructure with particular regard to modular video analytics. 6th International Conference on Imaging for Crime Prevention and Detection (ICDP-15), Queen Mary University, London, UK. https://doi.org/10.1049/ic.2015.0120
  • Brandsma, G. J. (2014). Quantitative analysis. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford handbook of public accountability (pp. 143-158). Oxford University Press. https://books.google.pl/books?hl=th&lr=&id=pip8AwAAQBAJ&oi=fnd&pg=PA143&ots=ksisAB5c4P&sig=keACNkGzRMWSOIvEL6DChCcuILI&redir_esc=y
Document type
Bibliography
Identifiers
YADDA identifier
bwmeta1.element.ekon-element-000171657154
