Human-Centred XAI: Enhancing AI Acceptability for Healthcare

In Conjunction with the IEEE International Conference on Healthcare Informatics (ICHI) 2024

3rd June 2024, Orlando, Florida, USA (and Online)

Deadline Extended to 31st March

Overview

As Artificial Intelligence (AI) continues to reshape healthcare, the acceptability of Explainable Artificial Intelligence (XAI) remains paramount. This workshop delves into the convergence of XAI and healthcare, with a focus on fostering acceptability through human-in-the-loop oversight. Participants will engage in discussions on practical applications, challenges, and strategies for developing transparent, interpretable, and ethically sound AI solutions in healthcare. Through exploration of challenges, sharing of best practices, and facilitated discussions, the workshop aims to empower participants to navigate the intricacies of XAI implementation in healthcare, ensuring that AI systems are not only accurate but also understandable and trusted by end-users.


Topics of Interest (including, but not limited to):

  1. Interpretable Models in Healthcare:
    Exploring challenges and solutions in developing AI models that are not only accurate but also interpretable in healthcare applications.

  2. User Trust and Acceptance:
    Examining factors influencing user trust and acceptance of XAI in healthcare, considering the unique ethical considerations and sensitivities of the domain.

  3. Ethical Considerations:
    Discussing the ethical implications of AI in healthcare and how incorporating human-in-the-loop oversight can address concerns related to bias, fairness, and accountability.

  4. Practical Implementation:
    Sharing experiences and case studies related to the practical implementation of XAI in healthcare settings, highlighting success stories and lessons learned.

Important Dates

Deadline for full paper submissions: 31st March 2024 (extended from 1st March; final extension).

Author notification: 12th April 2024.

Camera-ready submission: 21st April 2024.

Workshop day: 3rd June 2024.

Submissions

Prospective authors are invited to submit original contributions, including research papers, case studies, and position papers. Submissions should be formatted according to the IEEE conference template. We will accept full papers (up to 8 pages, including references) and short papers (up to 4 pages, including references).
Papers can be submitted through the conference submission system at https://easychair.org/conferences/?conf=ieeeichi2024; please select the workshop track "Human-Centered XAI Workshop".

Submissions will undergo a rigorous single-blind peer review process, with each submission receiving at least two reviews from the program committee. Final acceptance decisions will be made by the program chairs based on the reviewers' feedback. Accepted submissions will be presented at the workshop and published in the IEEE ICHI 2024 Proceedings, which will be archived in the IEEE Xplore Digital Library.

Speakers can present remotely if unable to travel to the venue (hybrid event).

Program Committee

  • David Western, University of the West of England, Bristol, United Kingdom.

  • Belal Alsinglawi, Swinburne University of Technology, Australia.

  • Pedro A. Moreno-Sánchez, Tampere University, Finland.

  • Philippos Papaphilippou, Trinity College Dublin, Ireland.

  • Federica Cilia, Université de Picardie Jules Verne, France.

  • Maksims Ivanovs, Institute of Electronics and Computer Science (EDI), Latvia.

  • Ruairi O'Reilly, Munster Technological University, Ireland.

Organizers

  • Gilles DEQUEN, Université de Picardie Jules Verne, France.

  • Daniel Aïham GHAZALI, University Hospital of Amiens, Université de Picardie Jules Verne, and Université Paris Cité, France.

  • Emilien ARNAUD, University Hospital of Amiens, and Université de Picardie Jules Verne, France.

  • Mahmoud ELBATTAH, University of the West of England Bristol, United Kingdom, and Université de Picardie Jules Verne, France.

For any queries, please contact: mahmoud.elbattah@uwe.ac.uk


Agenda

TBD