Closing the Gaps on Inscrutability: Tackling Challenges with Knowledge Integration during AI Development
DOI: https://doi.org/10.3127/ajis.v29.5567

Keywords: Artificial intelligence, Machine learning, Explainability, Inscrutability, Case study

Abstract
The development of complex artificial intelligence (AI) systems presents a compelling knowledge integration challenge to organisations. As organisations strive to integrate complex domain knowledge into algorithmic models, they must also equip domain experts with the technical understanding of how such models work so that the models can be used responsibly. The inscrutability of AI technology – stemming from challenges related to both the technical explainability of the models and their social interpretability – makes knowledge integration particularly challenging by creating and deepening knowledge gaps between the AI model, its human users, and domain reality. To increase understanding of how such knowledge gaps can be addressed in AI development, this study reports on three qualitative case studies of AI projects in which inscrutability needed to be managed. Building on the gap model (Kayande et al., 2009), we identify three sociotechnical mechanisms for addressing knowledge gaps related to AI inscrutability and thereby facilitating organisational learning. Our work provides contributions to both theory and practice.
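To make the notion of technical explainability more concrete, the following minimal sketch (Python, using scikit-learn) applies permutation feature importance, one common post-hoc explanation technique from the XAI literature surveyed in the references below (e.g., Guidotti et al., 2018), to an otherwise opaque model. The synthetic dataset, model choice, and feature labels are illustrative assumptions and are not drawn from the article's case studies.

# Minimal illustrative sketch only: post-hoc explanation of an opaque model
# via permutation feature importance. The synthetic data and model choice are
# assumptions for demonstration; this does not reproduce the article's cases.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for complex domain data.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black-box" model: accurate, but its internal logic is hard to inspect.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, mean_imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance = {mean_imp:.3f}")

Such feature-level explanations address only the technical side of inscrutability; as the abstract notes, closing the gaps between the model, its users, and domain reality also requires social interpretability work that no single technique provides.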
References
Ågerfalk, P. J., Conboy, K., Crowston, K., Eriksson Lundström, J., Jarvenpaa, S. L., Ram, S., & Mikalef, P. (2022). Artificial Intelligence in Information Systems: State of the Art and Research Roadmap. Communications of the Association for Information Systems, 50(1), 420–438. doi.org/10.17705/1CAIS.05017
Akbarighatar, P., Rinta-Kahila, T., & Someh, I. (2025). When Welfare Goes Digital: Lessons from the Dutch SyRI Risk Indicator in Public Sector. Academy of Management Proceedings, 2025(1), 10358, Copenhagen, Denmark.
Allen, R. T., & Choudhury, P. (2022). Algorithm-Augmented Work and Domain Experience: The Countervailing Forces of Ability and Aversion. Organization Science, 33(1), 149–169. doi.org/10.1287/orsc.2021.1554
Arnold, V., Clark, N., Collier, P. A., Leech, S. A., & Sutton, S. G. (2006). The Differential Use and Effect of Knowledge-Based System Explanations in Novice and Expert Judgment Decisions. MIS Quarterly, 30(1), 79–97.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
Asatiani, A., Malo, P., Nagbol, P., Penttinen, E., Rinta-Kahila, T., & Salovaara, A. (2020). Challenges of Explaining the Behavior of Black-box AI Systems. MIS Quarterly Executive, 19(4), 259–274.
Asatiani, A., Malo, P., Nagbol, P., Penttinen, E., Rinta-Kahila, T., & Salovaara, A. (2021). Sociotechnical Envelopment of Artificial Intelligence: Resolving the Challenges of Explainability in an Organization. Journal of the Association for Information Systems, 22(2), 325–352. doi.org/10.17705/1jais.00664
Australian Department of Industry, Science and Resources. (2024). Australia’s AI Ethics Principles. In Australia’s Artificial Intelligence Ethics Framework. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles#principle-6
Bauer, K., Hinz, O., van der Aalst, W., & Weinhardt, C. (2021). Expl(AI)n It to Me – Explainable AI and Information Systems Research. Business and Information Systems Engineering, 63(2), 79–82. doi.org/10.1007/s12599-021-00683-2
Bell, A., Solano-Kamaiko, I., Nov, O., & Stoyanovich, J. (2022). It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy. ACM International Conference Proceeding Series, 248–266. doi.org/10.1145/3531146.3533090
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing Artificial Intelligence. MIS Quarterly, 45(3), 1433–1450.
Business Wire. (2023). Survey: AI Adoption Among Federal Agencies Is Up But Trust Continues to Be An Obstacle to Future Adoption and Use. https://www.businesswire.com/news/home/20231214493999/en/
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. Working paper, arXiv. http://arxiv.org/abs/2303.10130
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation.” AI Magazine, 38(3), 50–57. doi.org/10.1609/aimag.v38i3.2741
Gregor, S., & Benbasat, I. (1999). Explanations from Intelligent Systems: Theoretical Foundations and Implications for Practice. MIS Quarterly, 23(4), 497–530. doi.org/10.2307/249487
Grønsund, T., & Aanestad, M. (2020). Augmenting the algorithm: Emerging human-in-the-loop work configurations. Journal of Strategic Information Systems, 29(2), 101614. doi.org/10.1016/j.jsis.2020.101614
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., & Giannotti, F. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 51(5), Article 93. doi.org/10.1145/3236009
Gunning, D., Vorm, E., Wang, J. Y., & Turek, M. (2021). DARPA’s explainable AI (XAI) program: A retrospective. Applied AI Letters, 2(4), 1–12. doi.org/10.1002/ail2.61
Hagiu, A., & Wright, J. (2023). Data‐enabled learning, network effects, and competitive advantage. The RAND Journal of Economics, 54(4), 638-667.
Hahn, J., & Lee, G. (2021). The complex effects of cross-domain knowledge on IS development: A simulation-based theory development. MIS Quarterly, 45(4), 2023–2054. doi.org/10.25300/MISQ/2022/16292
Hammer, M. (1990). Reengineering work: don’t automate, obliterate. Harvard Business Review, 68(4), 104–112.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Kayande, U., De Bruyn, A., Lilien, G. L., Rangaswamy, A., & van Bruggen, G. H. (2009). How incorporating feedback mechanisms in a DSS affects DSS evaluations. Information Systems Research, 20(4), 527–546. doi.org/10.1287/isre.1080.0198
Keil, F. C. (2006). Explanation and understanding. Annual Review of Psychology, 57(1), 227–254.
Lebovitz, S., Levina, N., & Lifshitz-Assaf, H. (2021). Is AI ground truth really true? The dangers of training and evaluating AI tools based on experts’ know-what. MIS Quarterly, 45(3), 1501–1525. doi.org/10.25300/MISQ/2021/16564
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To Engage or Not to Engage with AI for Critical Judgments: How Professionals Deal with Opacity When Using AI for Medical Diagnosis. Organization Science, 33(1), 126–148. doi.org/10.1287/orsc.2021.1549
Lipton, Z. C. (2018). The Mythos of Model Interpretability. ACM Queue, 16(3), 30. doi.org/10.1145/3233231
Mahya, P., & Fürnkranz, J. (2023). An Empirical Comparison of Interpretable Models to Post-Hoc Explanations. AI, 4(2), 426–436. doi.org/10.3390/ai4020023
Marjanovic, O., Cecez-Kecmanovic, D., & Vidgen, R. (2022). Theorising Algorithmic Justice. European Journal of Information Systems, 31(3), 269–287. doi.org/10.1080/0960085X.2021.1934130
Martens, D., & Provost, F. (2014). Explaining Data-Driven Document Classifications. MIS Quarterly, 38(1), 73–99. doi.org/10.25300/misq/2014/38.1.04
Martin, K. (2019). Designing ethical algorithms. MIS Quarterly Executive, 18(2), 129–142. doi.org/10.17705/2msqe.00012
Matook, S., Lee, G., & Fitzgerald, B. (2021). Information Systems Development. In A. Burton-Jones & P. Seetharaman (Eds.), MIS Quarterly Research Curations. Retrieved from http://misq.org/research-curations
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. doi.org/10.1016/j.artint.2018.07.007
Minh, D., Wang, H. X., Li, Y. F., & Nguyen, T. N. (2022). Explainable artificial intelligence: a comprehensive review. Artificial Intelligence Review, 55(5). doi.org/10.1007/s10462-021-10088-y
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?” Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’16, 1135–1144. doi.org/10.1145/2939672.2939778
Ribera, M., & Lapedriza, A. (2019). Can we do better explanations? A proposal of user-centered explainable AI. CEUR Workshop Proceedings, 2327.
Rinta-Kahila, T., Someh, I., Gillespie, N., Indulska, M., & Gregor, S. (2022). Algorithmic decision-making and system destructiveness: A case of automatic debt recovery. European Journal of Information Systems, 31(3), 313–338. doi.org/10.1080/0960085x.2021.1960905
Rinta-Kahila, T., Someh, I., Indulska, M., & Ryan, I. (2023a). Building Artificial Intelligence capability in the public sector. Australasian Conference on Information Systems 2023, Wellington, New Zealand.
Rinta-Kahila, T., Penttinen, E., Salovaara, A., Soliman, W., & Ruissalo, J. (2023b). The Vicious Circles of Skill Erosion: A Case Study of Cognitive Automation. Journal of the Association for Information Systems, 24(5), 1378–1412. doi.org/10.17705/1jais.00829
Rosenfeld, A., & Richardson, A. (2019). Explainability in human–agent systems. Autonomous Agents and Multi-Agent Systems. doi.org/10.1007/s10458-019-09408-y
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. doi.org/10.1038/s42256-019-0048-x
Russell, S., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Pearson Education.
Shollo, A., Hopf, K., Thiess, T., & Müller, O. (2022). Shifting ML value creation mechanisms: A process model of ML value creation. The Journal of Strategic Information Systems, 31(3), 101734.
Someh, I., Wixom, B. H., Beath, C. M., & Zutavern, A. (2022). Building an Artificial Intelligence Explanation Capability. MIS Quarterly Executive, 21(2).
Someh, I., Wixom, B., Davern, M., & Shanks, G. (2023). Configuring relationships between analytics and business domain groups for knowledge integration. Journal of the Association for Information Systems, 24(2), 592-618.
Strauss, A., & Corbin, J. (1998). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (2nd ed.). Sage Publications.
Strich, F., Mayer, A. S., & Fiedler, M. (2021). What Do I Do in a World of Artificial Intelligence? Investigating the Impact of Substitutive Decision-Making AI Systems on Employees’ Professional Role Identity. Journal of the Association for Information Systems, 22(2), 304–324. doi.org/10.17705/1jais.00663
Teodorescu, M. H. M., Morse, L., Awwad, Y., & Kane, G. C. (2021). Failures of fairness in automation require a deeper understanding of human–ML augmentation. MIS Quarterly, 45(3), 1483–1499. doi.org/10.25300/MISQ/2021/16535
Van den Broek, E., Sergeeva, A., & Huysman, M. (2021). When the machine meets the expert: An ethnography of developing AI for hiring. MIS Quarterly, 45(3), 1557-1580. doi.org/10.25300/MISQ/2021/16559
Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., & Krishna, R. (2023). Explanations can reduce overreliance on AI systems during decision-making. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1-38.
Waardenburg, L., Huysman, M., & Sergeeva, A. V. (2022). In the land of the blind, the one-eyed man is king: Knowledge brokerage in the age of learning algorithms. Organization Science, 33(1), 59-82. doi.org/10.1287/orsc.2021.1544
Wessel, L., Baiyere, A., Ologeanu-Taddei, R., Cha, J., & Jensen, T. B. (2021). Unpacking the difference between digital transformation and IT-enabled organizational transformation. Journal of the Association for Information Systems, 22(1), 102–129. doi.org/10.17705/1jais.00655
Yin, R. K. (2018). Case Study Research and Applications: Design and Methods (6th ed.). SAGE Publications Inc.
Zacharias, J., von Zahn, M., Chen, J., & Hinz, O. (2022). Designing a feature selection method based on explainable artificial intelligence. Electronic Markets, 32(4), 2159–2184. doi.org/10.1007/s12525-022-00608-1
Zuboff, S. (1991). Informate the enterprise: An agenda for the twenty-first century. National Forum, Honor Society of Phi Kappa Phi, 71(3).
License
Copyright (c) 2025 Tapani Rinta-Kahila, Ida Someh, Ali Darvishi, Reihaneh Bidar, Marta Indulska

This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.
AJIS publishes open-access articles distributed under the terms of a Creative Commons Non-Commercial and Attribution License, which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and AJIS are credited. All other rights, including granting permissions beyond those in the above license, remain the property of the author(s).