Large Language Models and Privacy in the Corporate Context
GUENDALINA CAPECE (Methodology)
2025-01-01
Abstract
The rapid development of Large Language Models (LLMs), such as GPT-3 and GPT-4, has transformed corporate functions, yet their integration raises critical privacy concerns. These models require vast amounts of training data, often including sensitive corporate information, which increases the risk of data breaches and unauthorized access. This systematic review explores privacy challenges, model privacy strategies, and ethical considerations. By identifying key legal frameworks such as the GDPR and the AI Act, this paper aims to provide insights into corporate compliance and the responsible deployment of LLMs.

