Balancing Bytes and Ethics: Stakeholder Implications of Private LLMs
DOI: https://doi.org/10.33423/jabe.v26i4.7280

Keywords: business, economics, artificial intelligence, private large language models, AI governance, data privacy, stakeholder theory, ethics

Abstract
This research explores the ethical implications of private large language models (PLLMs) through the lens of stakeholder theory. Private LLMs, tailored for specific organizational needs, present unique privacy and data protection challenges. We examine the historical development of LLMs and their impact on stakeholders, including shareholders, employees, customers, and society. Our proposed framework balances stakeholder interests with ethical considerations, offering a comprehensive approach to the ethical development and deployment of PLLMs. This framework emphasizes transparency, accountability, and sustainable practices to ensure long-term value creation. Future research directions include developing regulatory frameworks, conducting detailed social impact assessments, and exploring strategies for effective human-AI collaboration. This study contributes to academic discourse by providing a multi-faceted approach to managing the ethical challenges posed by PLLMs, fostering best practices, and mitigating potential conflicts among stakeholders.
References
Anderljung, M., Smith, E., O’Brien, J., Soder, L., Bucknall, B., Bluemke, E., . . . Chowdhury, R. (2023, November). Towards publicly accountable frontier LLMs. In Socially Responsible Language Modelling Research.
Cai, Y., Jo, H., & Pan, C. (2012). Doing well while doing bad? CSR in controversial industry sectors. Journal of Business Ethics, 108, 467–480.
Carroll, A.B. (1979). A three-dimensional conceptual model of corporate performance. Academy of Management Review, 4(4), 497–505.
Carroll, A.B. (1999). Corporate social responsibility: Evolution of a definitional construct. Business & Society, 38(3), 268–295.
Caruana, R., & Chatzidakis, A. (2014). Consumer social responsibility (CnSR): Toward a multi-level, multi-agent conceptualization of the “other CSR.” Journal of Business Ethics, 121, 577–592.
Chelliah, J. (2017). Will artificial intelligence usurp white-collar jobs? Human Resource Management International Digest, 25(3), 1–3.
Chintala, S. (2023). AI-driven personalised treatment plans: The future of precision medicine. Machine Intelligence Research, 17(2), 9718–9728.
Chui, M., Manyika, J., & Miremadi, M. (2016). Where machines could replace humans—and where they can’t (yet). The McKinsey Quarterly, pp. 1–12.
Dastin, J. (2018, October 10). INSIGHT - Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.com. Retrieved July 10, 2024, from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/
de Almeida, P.G.R., dos Santos, C.D., & Farias, J.S. (2021). Artificial intelligence regulation: A framework for governance. Ethics and Information Technology, 23(3), 505–525.
Deng, X., Kang, J.-k., & Low, B.S. (2013). Corporate social responsibility and stakeholder value maximization: Evidence from mergers. Journal of Financial Economics, 110(1), 87–109.
Drage, E., McInerney, K., & Browne, J. (2024). Engineers on responsibility: Feminist approaches to who’s responsible for ethical AI. Ethics and Information Technology, 26(1), 4.
Duhaime, I.M., Hitt, M.A., & Lyles, M.A. (Eds.). (2021). Strategic management: State of the field and its future. Oxford University Press.
Dutta, K., & Ring, K. (2021). Do do-gooders do well? Corporate social responsibility, business models and IPO performance. Journal of Applied Business and Economics, 23(2).
Eccles, R.G., Ioannou, I., & Serafeim, G. (2014). The impact of corporate sustainability on organizational processes and performance. Management Science, 60(11), 2835–2857.
Evertz, J., Chlosta, M., Schönherr, L., & Eisenhofer, T. (2024). Whispers in the machine: Confidentiality in LLM-integrated systems. arXiv preprint. arXiv:2402.06922.
Felzmann, H., Villaronga, E.F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 2053951719860542.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., . . . Vayena, E. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707.
Fotheringham, D., & Wiles, M.A. (2023). The effect of implementing chatbot customer service on stock returns: An event study analysis. Journal of the Academy of Marketing Science, 51(4), 802–822.
Freeman, R.E. (1984). Strategic management: A stakeholder approach. Pitman.
Google AI. (n.d.). Google AI and social good. Retrieved from https://ai.google/responsibility/social-good/
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy. IEEE Access.
Hnatushenko, V., Ostrovska, K., & Nosov, V. (2024). Development and research of a chatbot using the linguistic core of Amazon Lex V2. In COLINS (Issue 3, pp. 50–62).
Hunt, E. (2016, March 24). Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter
Ioannidis, J., Harper, J., Quah, M.S., & Hunter, D. (2023, June). Gracenote.ai: Legal generative AI for regulatory compliance. In Proceedings of the Third International Workshop on Artificial Intelligence and Intelligent Assistance for Legal Professionals in the Digital Workplace (LegalAIIA 2023).
Khalil, M., & Rashed, A. (2023). The impact of female directors on the relationship between corporate social responsibility and capital structure: Evidence from Egypt. Journal of Applied Business and Economics, 25(2).
Kharlamova, A., Kruglov, A., & Succi, G. (2024, May). State-of-the-art review of life insurtech: Machine learning for underwriting decisions and a shift toward data-driven, society-oriented environment. In 2024 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA) (pp. 1–12). IEEE.
Kocak, B., Keles, A., & Akinci D’Antonoli, T. (2024). Self-reporting with checklists in artificial intelligence research on medical imaging: A systematic review based on citations of CLAIM. European Radiology, 34(4), 2805–2815.
Li, L. (2022). Reskilling and upskilling the future-ready workforce for Industry 4.0 and beyond. Information Systems Frontiers, pp. 1–16.
Mayer, R.C., Davis, J.H., & Schoorman, F.D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.
Meltzer, J.P. (2023, November 1). Toward international cooperation on foundational AI models: An expanded role for trade agreements and international economic policy. SSRN. Retrieved from https://ssrn.com/abstract=4685309
Mills, S., Sampanthar, K., & Dardaman, E. (2023, February 5). Getting stakeholder engagement right in responsible AI. VentureBeat.com. Retrieved July 10, 2024, from https://venturebeat.com/ai/getting-stakeholder-engagement-right-in-responsible-ai/
Mintzberg, H. (1983). The case for corporate social responsibility. Journal of Business Strategy, 4(2), 3–15.
Mongan, J., Moy, L., & Kahn, Jr., C.E. (2020). Checklist for artificial intelligence in medical imaging (CLAIM): A guide for authors and reviewers. Radiology: Artificial Intelligence, 2(2), e200029.
Nova, K. (2023). Generative AI in healthcare: Advancements in electronic health records, facilitating medical languages, and personalized patient care. Journal of Advanced Analytics in Healthcare Management, 7(1), 115–131.
Phillips, R.A. (1997). Stakeholder theory and a principle of fairness. Business Ethics Quarterly, 7(1), 51–66.
Porter, M.E., & Kramer, M.R. (2011, January–February). Creating shared value. Harvard Business Review. Retrieved from https://hbr.org/2011/01/the-big-idea-creating-shared-value
Post, J.E., Preston, L.E., & Sachs, S. (2002). Managing the extended enterprise: The new stakeholder view. California Management Review, 45(1), 6–28.
Rana, M.S., & Shuford, J. (2024). AI in healthcare: Transforming patient care through predictive analytics and decision support systems. Journal of Artificial Intelligence General Science (JAIGS), 1(1).
Sai, S., Gaur, A., Sai, R., Chamola, V., Guizani, M., & Rodrigues, J.J. (2024). Generative AI for transformative healthcare: A comprehensive study of emerging models, applications, case studies, and limitations. IEEE Access.
Santhosh, S. (2023, January 15). Reinforcement learning from human feedback (RLHF)–ChatGPT. Medium. Retrieved from https://medium.com/@sthanikamsanthosh1994/reinforcement-learning-from-human-feedback-rlhf-532e014fb4ae
Shoetan, P.O., & Familoni, B.T. (2024). Transforming fintech fraud detection with advanced artificial intelligence algorithms. Finance & Accounting Research Journal, 6(4), 602–625.
Sofia, M., Fraboni, F., DeAngelis, M., Puzzo, G., Giusino, D., & Pietrantoni, L. (2023). The impact of artificial intelligence on workers’ skills: Upskilling and reskilling in organisations. Informing Science: The International Journal of an Emerging Transdiscipline, 26, 39–68.
Stoelhorst, J.W., & Vishwanathan, P. (2024). Beyond primacy: A stakeholder theory of corporate governance. Academy of Management Review, 49(1), 107–134.
Talati, D. (2023). Artificial intelligence (AI) in mental health diagnosis and treatment. Journal of Knowledge Learning and Science Technology, 2(3), 251–253.
Temelkov, Z., & Georgieva Svrtinov, V. (2024). AI impact on traditional credit scoring models. Journal of Economics, 9(1), 1–9.
Tutun, S., Johnson, M.E., Ahmed, A., Albizri, A., Irgil, S., Yesilkaya, I., . . . Harfouche, A. (2023). An AI-based decision support system for predicting mental health disorders. Information Systems Frontiers, 25(3), 1261–1276.
Villegas-Ch, W., & García-Ortiz, J. (2023). Toward a comprehensive framework for ensuring security and privacy in artificial intelligence. Electronics, 12(18), 3786.
Wang, Y. (2023). The large language model (LLM) paradox: Job creation and loss in the age of advanced AI. Authorea Preprints.
Wulf, A.J., & Seizov, O. (2024). “Please understand we cannot provide further information”: Evaluating content and transparency of GDPR-mandated AI disclosures. AI & Society, 39(1), 235–256.
Yao, Y., Duan, J., Xu, K., Cai, Y., Sun, Z., & Zhang, Y. (2024). A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly. High-Confidence Computing, 4(2), 100211.
Zilliz. (2024, April 11). What are private LLMs? Running large language models privately - privateGPT and beyond. Retrieved from https://zilliz.com/learn/what-are-private-llms