
Enabling the Digitalization of Claim Management in the Insurance Value Chain Through AI-Based Prototypes: The ELIS Innovation Hub Approach

Ricciardi Celsi, L.
In press

Abstract

(a) Situation faced: Digital transformation in the insurance value chain is fostering the adoption of artificial intelligence, namely deep learning methods, to improve and automate two relevant tasks in the claim management process: (i) sensitive data detection and anonymization, and (ii) manipulation detection on images. The proposed approach is technically feasible, lightweight, and sufficiently scalable thanks to the properties offered by currently available cloud platforms, and it also yields a considerable reduction in operational costs.

(b) Action taken: Since well-established guidelines for addressing insurance digitalization use cases requiring deep learning do not yet exist, we propose a customized data science workflow for designing and developing two prototypes that tackle (i) sensitive data detection and anonymization, and (ii) manipulation detection on claim images. We propose a six-step method that is implemented using deep convolutional neural networks in Keras and TensorFlow and is seamlessly integrable with the most frequently used cloud environments. During prototyping, several training and testing iterations were carried out, progressively fine-tuning the detection models until the desired performance was achieved.

(c) Results achieved: The developed prototypes are able to (i) robustly anonymize claim images and (ii) robustly detect manipulations on claim images, where robustness means that, from a statistical viewpoint, the declared performance level is preserved even in the presence of highly heterogeneous distributions of the input data. The technical realization relies on open-source software and on the availability of cloud platforms, the latter both for training and for scalability. This demonstrates the applicability of our methodology, given a reliable analysis of the available resources, including the preparation of an appropriate training dataset for the models.

(d) Lessons learned: The present work demonstrates the feasibility of the proposed deep learning-based six-step methodology for image anonymization and manipulation detection, and discusses the challenges and learnings encountered during implementation. Key learnings include the importance of business translation, data quality, data preparation, and model training.
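For illustration only, the sketch below shows the kind of Keras/TensorFlow convolutional classifier the abstract refers to for manipulation detection on claim images. The architecture, input size, and hyperparameters are assumptions made for this sketch and do not reflect the prototypes' actual models.

```python
# Minimal illustrative sketch (not the authors' actual model): a small Keras
# convolutional network that outputs the probability that a claim image has
# been manipulated. Input size and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


def build_manipulation_detector(input_shape=(224, 224, 3)):
    # Convolution + pooling blocks extract image features; a small dense
    # head performs binary classification (manipulated vs. genuine).
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    # Inspect the (hypothetical) architecture; training would use a labeled
    # dataset of genuine and manipulated claim images.
    build_manipulation_detector().summary()
```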
ISBN: 978-3-030-80002-4


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12606/23770