The keyword eXplainable AI (XAI) has become one of the mantras of recent research in AI. The reason is that current technology allows the design of machine-learning-based systems that interpret data and make decisions in contexts traditionally reserved for human intelligence. Some of these applications directly affect people's lives and thus touch areas traditionally guarded by law, ethics, and social conventions, i.e., by modes of reasoning far removed from the massively quantitative machinery of today's AI, and that accept conclusions only if they can be explained and debated in human terms.
In this seminar we will argue mainly along two directions. The first is that explainability is a cost from at least two points of view: developing an XAI system is more expensive than developing a plain AI system and, more importantly, there is an implicit explainability-performance trade-off that constrains explainable solutions to perform worse than unexplainable ones. The second is that there are relevant applications in which not only is explainability unnecessary, but unexplainability is implicitly sought as a design criterion, thus becoming the driving factor of the solution.
In light of all this, we will argue that deciding whether to impose an explainability constraint may itself be a non-trivial task, perhaps one that can be solved by a properly designed AI. Whether this further AI should in turn be explainable then becomes the next, unavoidable meta-question.