
About

We are witnessing the emergence of an “AI economy and society” in which AI technologies increasingly impact many aspects of business as well as everyday life. We read with great interest about recent advances in AI medical diagnostic systems and self-driving cars, and about the ability of AI technology to automate many business decisions such as loan approvals, hiring, and policing. However, as recent experience shows, AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and judicial transparency and explainability. These shortcomings have been documented not only in the scientific literature but also, importantly, in the general press (accidents involving self-driving cars; biases in AI-based policing, hiring, and loan systems; biases in face recognition systems for people of color; seemingly correct medical diagnoses later found to have been made for the wrong reasons; etc.). They raise many ethical and policy concerns not only in technical and academic communities but also among policymakers and the general public, and will inevitably impede the wider adoption of AI in society.

 

The problems related to Ethical AI are complex and broad, encompassing not only technical issues but also legal, political, and ethical ones. One of the key components of Ethical AI systems is explainability, or transparency, but other issues, such as detecting bias, the ability to control outcomes, and the ability to objectively audit AI systems for ethics, are also critical for the successful application and adoption of AI in society. Consequently, explainable and Ethical AI are highly active topics in the technical community as well as in the business, legal, and philosophy communities. Many workshops in this field are held at top conferences, and we believe ICPR should address this topic broadly with a focus on its technical aspects. Our workshop aims to address the technical aspects of explainable and Ethical AI in general, including related applications and case studies, in order to tackle these very important problems from a broad technical perspective.

 

This is the fourth edition of the workshop; the websites of the previous editions are available here: 2024, 2022, 2020.

Topics

The topics comprise but are not limited to:

  • Naturally explainable AI methods
  • Post-Hoc Explanation methods of Deep Neural Networks and Transformers
  • Technical issues in AI ethics including automated audits, detection of bias, ability to control AI systems to prevent harm and others
  • Methods to improve AI explainability in general, including algorithms and evaluation methods
  • User interface and visualization for achieving more explainable and ethical AI
  • Real world applications and case studies

Reviewing committee

  • Alexandre Benoit, Univ. Savoie Mont Blanc / LISTIC
  • Damien Garreau, Julius-Maximilians-Universität Würzburg
  • Hervé Le Borgne, CEA LIST
  • Marco Angelini, Univ. Rome
  • Mark Keane, UCD Dublin / Insight SFI Centre for Data Analytics
  • Martha Larson, Radboud University
  • Romain Bourqui, Univ. Bordeaux
  • Jenny Benois Pineau, Univ. Bordeaux
  • Romain Giot, Univ. Bordeaux
  • Romain Xu Darme, CEA LIST
  • Sebastian Lapuschkin, Fraunhofer Institute for Telecommunications
  • Sébastien Destercke, Université de Technologie de Compiègne
  • Stefanos Kollias, National Technical University of Athens
  • Vasilis Mezaris, Information Technologies Institute / Centre for Research and Technology Hellas
  • Victoria Bourgeais, Univ. Bordeaux
  • Wassila Ouerdane, CentraleSupélec
 
 
Each paper will be reviewed by at least two reviewers. Double-blind submission is possible but not mandatory.

Program chairs

  • Marco Angelini, Univ. Rome 3
  • Romain Bourqui, Univ. Bordeaux
  • Jenny Benois Pineau, Univ. Bordeaux
  • Romain Giot, Univ. Bordeaux
  • Sebastian Lapuschkin, Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute

Important dates

  • 1st May: paper submission
  • 1st July: reviewer notification
  • 31 July: camera-ready version
  • 21 August: workshop

 

Submission

The proceedings of the XAIE 2026 workshop will be published in the Springer Lecture Notes in Computer Science (LNCS) series. Papers will be selected through a single-blind review process (reviewers are anonymous). Submissions must be formatted in accordance with Springer's Computer Science Proceedings guidelines and be 12–15 pages long.

 

The proceedings of the workshop will be edited by the ICPR workshop chairs and published with LNCS. You will find the LaTeX template on the submission page: https://xaie4.sciencesconf.org/user/submissions.

A paper should not exceed 15 pages, including references. If a paper exceeds 15 pages, a fee of €150 per additional page applies.
