CLEAR-AI
Compliance, Legal, and Ethical Aspects of AI Regulation
to be organized as part of the 21st
International Conference on Intelligent Environments
June 23-26, 2025, Darmstadt, Germany
Overview
The rapid advancement of artificial intelligence
(AI) has opened up a plethora of transformative possibilities across numerous
sectors. However, this rapid advancement also raises a number of profound legal
and ethical questions. As AI systems are integrated into decision-making
processes, concerns over accountability, transparency, bias, privacy, and the
impact on human rights intensify. Addressing these challenges requires robust
regulatory frameworks to ensure that AI technologies are deployed in ways that
protect individual rights, promote fairness, and mitigate risks associated with
their growing influence on society.
The legal governance of AI remains an ongoing topic of discussion and debate, with a variety of national and
international frameworks being proposed to address the ethical, social, and
legal challenges posed by AI technologies. The EU's AI Act and the General Data
Protection Regulation (GDPR) are of particular significance for regulatory oversight within Europe. They set out basic rules that AI systems
must comply with, including references to basic principles such as
accountability, transparency, and fairness. The AI Act, in particular, is
designed to mitigate the risks associated with high-risk AI systems, safeguard
human rights, and establish benchmarks for responsible AI innovation. The
global and cross-border nature of AI further necessitates the examination of
additional frameworks beyond Europe, including standards emerging from
international organizations and non-EU countries, such as the OECD AI
Principles, U.S. initiatives like the Algorithmic Accountability Act, and
various sector-specific regulations.
This workshop serves as an interdisciplinary
platform, connecting legal, ethical, and technical discussions to navigate the
complexities of AI governance. Technical experts will gain insight into
regulatory impacts on AI development, while legal professionals and researchers
will better understand the technical constraints and capabilities of AI systems. Overall, the aim of the workshop is to offer a comprehensive platform for researchers, practitioners from academia and industry, and regulatory experts to discuss the legal and ethical implications of AI, focusing on how these issues are addressed by current regulations such as the AI Act.
Theme
The workshop's central theme revolves around the balance between innovation and responsibility: ensuring that AI development fosters progress while upholding ethical standards and legal compliance. The theme of the workshop is therefore identified as "Responsible AI Development: Balancing Innovation, Ethics, and Regulation," focusing on the convergence of innovation and compliance and offering a space for dialogue on global AI governance and practices.
For
speakers, this workshop provides an opportunity to share insights and
strategies for addressing legal and ethical challenges. Engaging with peers
from academia, industry, and regulatory bodies, speakers will have a platform
to shape the conversation around regulatory frameworks and ethical AI
development.
Participants,
whether technical experts, legal professionals, or social scientists, will gain
valuable insights into how ethical principles and regulatory frameworks shape
AI development. They will benefit from a deep dive into the key legal and
ethical concerns driving AI governance. They will gain practical knowledge on
regulatory frameworks such as the AI Act, learn about the latest developments
in AI legislation, and discuss best practices for creating AI systems that are
not only innovative but also compliant with ethical standards.
Legal professionals and researchers will also present their ideas and insights, fostering a deeper understanding of how legal frameworks can shape the responsible deployment of AI. Participants from technical, legal, and social science backgrounds will benefit from in-depth discussions and case studies that connect regulatory frameworks to practical applications, ensuring responsible and compliant AI innovation.
Topics of interest include, but are not limited to, any relevant subjects connected to the topics below:
Paper submission: EasyChair Submission Link
Submitted papers will undergo a meticulous and comprehensive evaluation process. Each paper will be assessed by a panel of experts in the field, ensuring that only those of the highest quality are selected for presentation during the event.
All submissions must adhere to the IOS Press format and be between six and ten pages in length. Papers must be submitted by the deadlines listed below, and submissions are handled through the EasyChair conference management system. The confidentiality of submissions is maintained throughout the review process.
Following acceptance, a final revised camera-ready version of the paper will be required in electronic form for inclusion in the proceedings, accompanied by a signed copyright form.
Should you encounter any difficulties in meeting
the deadline or require further clarification regarding the submission process,
we kindly request that you inform us at your earliest convenience via: clearaiworkshop@gmail.com
Important Dates:
Paper submission deadline: 7 March 2025
Notification of acceptance: 4 April 2025
Camera-ready version: 11 April 2025
Workshop date: 23 or 24 June 2025
Accepted Papers
See the list here: TBD
Preliminary Program
TBD
Publication
All accepted papers will be published in the proceedings of the event, which will be an Open Access volume in the Ambient Intelligence and Smart Environments book series (IOS Press). As of 2015, the workshop proceedings published in this book series are indexed in the Conference Proceedings Citation Index - Science (CPCI-S) by Thomson Reuters.
Program Committee
Gizem Gültekin-Várkonyi, University of Szeged, Hungary
Gordon Hunter, Kingston University, UK
Anton Gradisek, Jožef Stefan Institute, Slovenia
Norbert Tribl, University of Szeged, Hungary
Anastasia Nefeli Vidaki, Vrije Universiteit Brussel, Belgium
Sabire Sanem Yılmaz, Sant'Anna School of Advanced Studies, Italy
Contact
For any questions, please contact us at clearaiworkshop@gmail.com