Explainable Federated Learning for Secure Telemedicine: Protecting Patient Identity through Privacy-Preserving Deepfake Detection in Digital Health Platforms
Raimondo Fanale (Writing – Original Draft Preparation)
Fabio Liberti (Collaboration Group Member)
Vittorio Stile (Collaboration Group Member)
2025-01-01
Abstract
Secure telemedicine requires privacy-preserving deepfake detection in digital health systems to protect patient identity. Post-pandemic growth of the European telemedicine market from €45B to €380B has created a key vulnerability: medical deepfakes. Multiple European sources warn that deepfakes could destabilize digital healthcare infrastructure (European Data Protection Supervisor, 2023; European Union, 2024). This study proposes explainable federated learning (XFL) as the identity verification standard for telemedicine systems. Without such a norm, inevitable breaches could severely erode public trust. Deepfake technology renders basic webcam checks and static ID images obsolete; federated, privacy-preserving detection systems therefore need immediate regulation. These systems enable GDPR compliance and prevent medical identity fraud. Our investigation has three goals. First, we highlight the fundamental healthcare privacy risks of centralized deepfake detection methods, arguing that federated alternatives uniquely promote data sovereignty. Second, we propose concrete regulatory frameworks, with timetables, compliance standards, and penalties, to mandate XFL adoption. Third, we analyze the economic effects of proactive versus reactive solutions, showing that mandatory implementation is usually cheaper than the losses from medical identity fraud. …

