The Role of Roles: When are LLMs Behavioural in Information Systems Decision-Making

Authors

  • Nazmiye Guler, School of Information Systems and Technology Management, University of New South Wales, Sydney, Australia
  • Michael Cahalane, School of Information Systems and Technology Management, University of New South Wales, Sydney, Australia
  • Samuel Kirshner, School of Information Systems and Technology Management, University of New South Wales, Sydney, Australia
  • Richard Vidgen, School of Information Systems and Technology Management, University of New South Wales, Sydney, Australia

DOI:

https://doi.org/10.3127/ajis.v29.5573

Keywords:

Large language models, ChatGPT, Behavioural information systems, AI cognition, Prompt-engineering

Abstract

Large Language Models (LLMs) are increasingly embedded in organisational workflows, serving as decision-making tools and, as “silicon samples”, as proxies for human behaviour. While these models offer significant potential, emerging research highlights concerns about biases in LLM-generated outputs, raising questions about their reliability in complex decision-making contexts. To explore how LLMs respond to challenges in Information Systems (IS) scenarios, we examine ChatGPT’s decision-making in three experimental tasks from the IS literature: identifying phishing threats, making product launch decisions, and managing IT projects. Crucially, we test the impact of role assignment, a prompt-engineering technique, on guiding ChatGPT towards behavioural or rational decision approaches. Our findings reveal that ChatGPT often behaves like a human decision-maker when prompted to assume a human role, demonstrating susceptibility to similar biases. However, when instructed to act as an AI, ChatGPT exhibits greater consistency and reduced susceptibility to behavioural factors. These results suggest that subtle prompt variations can significantly influence decision-making outcomes. This study contributes to the growing literature on LLMs by demonstrating their dual potential to mirror human behaviour and to improve decision-making reliability in IS contexts, highlighting how LLMs can enhance efficiency and reliability in organisational decision-making.
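The role-assignment manipulation described in the abstract can be illustrated with any chat-style LLM API. The sketch below uses the OpenAI Python client; the role framings, decision task, and model name are hypothetical stand-ins for illustration, not the study's actual experimental materials.

```python
# Minimal sketch of role-assignment prompting, assuming the OpenAI
# Python client (pip install openai) and an OPENAI_API_KEY env var.
# Role wordings, task text, and model name are hypothetical examples,
# not the study's actual experimental materials.
from openai import OpenAI

client = OpenAI()

# Two contrasting role framings: a "human" role and an "AI" role.
ROLES = {
    "human": "You are a human IT project manager deciding on a product launch.",
    "ai": "You are an AI system. Analyse the problem rationally and consistently.",
}

# A decision task in the spirit of the paper's product-launch scenario.
TASK = (
    "A software product has known defects but a publicly committed launch "
    "date. Should the launch proceed on schedule or be delayed? Answer "
    "'proceed' or 'delay' and justify briefly."
)

def ask(role_key: str) -> str:
    """Query the model under one role framing and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study prompted ChatGPT
        temperature=0,   # damp run-to-run variation when comparing roles
        messages=[
            {"role": "system", "content": ROLES[role_key]},
            {"role": "user", "content": TASK},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for role_key in ROLES:
        print(f"--- {role_key} role ---")
        print(ask(role_key))
```

Running the same task under both system prompts, ideally over repeated trials, produces the behavioural-versus-rational contrast the study examines.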

Published

2025-10-14

How to Cite

Guler, N., Cahalane, M., Kirshner, S., & Vidgen, R. (2025). The Role of Roles: When are LLMs Behavioural in Information Systems Decision-Making. Australasian Journal of Information Systems, 29. https://doi.org/10.3127/ajis.v29.5573

Issue

Vol. 29 (2025)

Section

Research Articles