Explainable Federated Learning for Secure Telemedicine: Protecting Patient Identity through Privacy-Preserving Deepfake Detection in Digital Health Platforms

Raimondo Fanale (Writing – Original Draft Preparation)
Fabio Liberti (Member of the Collaboration Group)
Vittorio Stile (Member of the Collaboration Group)
2025-01-01

Abstract

Secure telemedicine requires privacy-preserving deepfake detection in digital health systems to protect patient identity. Post-pandemic growth of the European telemedicine market from €45B to €380B has created a key vulnerability: medical deepfakes. Multiple European sources warn that deepfakes could destabilize digital healthcare infrastructure (European Data Protection Supervisor, 2023; European Union, 2024). This study proposes explainable federated learning (XFL) as the identity verification standard for telemedicine systems; without such a standard, otherwise unavoidable breaches could severely erode public trust. Deepfake technology renders basic webcam checks and static ID images obsolete, so federated, privacy-preserving detection systems need immediate regulation. These systems enable GDPR compliance and prevent medical identity fraud. Our investigation has three goals. First, we highlight the fundamental healthcare privacy risks of centralized deepfake detection methods and argue that federated alternatives uniquely promote data sovereignty, since only model updates, never raw patient data, cross institutional boundaries. Second, we propose concrete regulatory frameworks with timetables, compliance standards, and penalties to mandate XFL adoption. Third, we analyze the economic effects of proactive versus reactive solutions and show that compulsory implementation is usually cheaper than the losses from medical identity fraud...
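To make the privacy claim above concrete, the following is a minimal sketch of federated averaging for a deepfake-detection classifier, not the authors' XFL system: each hospital trains locally on its own per-frame features and ships only model weights to an aggregator, so raw biometric data never leaves the institution. The toy model, the synthetic data, and all names here are hypothetical.

```python
# Minimal FedAvg sketch for privacy-preserving deepfake detection.
# Hypothetical illustration: a logistic-regression detector over
# per-frame feature vectors; only weights leave each hospital.
import numpy as np

def local_train(weights, features, labels, lr=0.1, epochs=20):
    """One hospital's local update: raw frames and labels never leave this function."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(features @ w)))       # sigmoid scores
        grad = features.T @ (preds - labels) / len(labels)  # logistic-loss gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: weight each client's model by its dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
dim = 16                                    # toy per-frame feature dimension
global_w = np.zeros(dim)

# Three hypothetical hospitals, each holding private synthetic data.
hospitals = []
for _ in range(3):
    n = int(rng.integers(200, 400))
    X = rng.normal(size=(n, dim))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)         # 1 = deepfake frame
    hospitals.append((X, y))

for _ in range(10):                         # federated training rounds
    updates = [local_train(global_w, X, y) for X, y in hospitals]
    sizes = [len(y) for _, y in hospitals]
    global_w = federated_average(updates, sizes)

accuracy = np.mean([(((X @ global_w) > 0).astype(float) == y).mean()
                    for X, y in hospitals])
print(f"mean local accuracy after federation: {accuracy:.2f}")
```

An explainable variant, as the XFL framing implies, would additionally attach per-decision attributions (for example, gradient-based saliency over the input features) so that a flagged session can be audited; that component is omitted from this sketch.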
2025
Explainable Federated Learning, Privacy-Preserving Deepfake Detection, Telemedicine Security, GDPR Compliance, Digital Health Regulation
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12606/35326