Does AI Actually Help? How Employees Judge the Effectiveness of AI Tools in the Workplace
Abstract
Organisations around the world are spending heavily on artificial intelligence tools, trusting that those investments will make their employees faster, smarter, and more productive. But the people actually sitting at those desks tell a more complicated story. This paper critically examines how employees measure the effectiveness of the AI tools they use in their day-to-day organisational roles, and why trust in those tools is far from automatic. Drawing on the technology acceptance literature, automation trust research, and documented workplace cases, the paper shows that employee judgments of AI effectiveness are shaped by perceived usefulness, transparency of outputs, past errors, and the degree to which a tool genuinely reduces cognitive load rather than adding to it. The paper also identifies the conditions under which AI tools erode rather than build employee trust, and it concludes with a set of practical recommendations for workplace managers who want to close the gap between what AI promises and what it actually delivers.