Explanations are not helpful or desirable per se; they can also be a burden. Their value is not absolute but instrumental. For the field of explainable AI, it is therefore crucial to determine when and for whom which kind of explanation is most likely to be valuable, and when it is not. This normativity of explanations can be approached from an economic point of view (efficiency, comfort), from a social point of view (roles, constellations, contexts, stakeholder interests), and from an ethical-political point of view (obligations, duties, virtues, and the impacts of (not) explaining). The workshop aims to sort out these different normative expectations and implications and to link them to the task of value-oriented technology design. The urgency of this topic is emphasised by the proliferation of AI systems in decision-making processes, ranging from credit score assessments and risk assessment models to judicial rulings.
Organizing Team: Tobias Matzner, Suzana Alpsancar, Martina Philippi, Wessel Reijers
For more details, venue, and registration, please visit the workshop homepage.