AI STAR Symposium

Artificial Intelligence Symposium on Theory, Application & Research

2023/09/27 08:30 – 2023/09/28 17:30

Location: ESOC, Robert-Bosch Strasse 5, 64293 Darmstadt

Organizers: ESA, FAIR/GSI, TU Darmstadt


Abstract

The Artificial Intelligence Symposium on Theory, Application and Research is a two-day event featuring technical talks, networking opportunities, and project poster sessions in the fields of research, methods and algorithms, and AI applications. The Symposium brings together AI experts and enthusiasts from industry and academia to explore, connect, network, and exchange ideas.

After a successful first edition in 2021 with more than 1,000 participants, AI STAR returns this September in hybrid mode, tackling new topics with the same intention: bringing the AI community together to bolster innovation.

The Symposium is designed to:

  • Identify concrete technological solutions addressing current and future needs in our organizations.
  • Connect the communities and enable interactions to lay the foundations for further collaborations.
  • Inspire and enable the public to learn about AI research and applications from experts.

Themes for the event include general AI, AI for Cybersecurity and Cybersecurity for AI, as well as Diagnostics, Preventative Maintenance, and Assistance.

Prof. Marc Fischlin, spokesperson of CROSSING as well as of the Profile Topic Cybersecurity and Privacy, is part of the organizing committee and will chair a panel discussion on the “Implications of AI Systems on Cybersecurity Posture”. During the discussion, experts will analyze the multifaceted impact of AI systems on cybersecurity. As AI technologies become more prevalent across domains, so do the associated cyber threats and vulnerabilities. The discussion will explore how AI can be both an enabler and a mitigator of cyber threats. Panel members will discuss the critical role of AI in enhancing cybersecurity posture, while also addressing concerns about adversarial attacks and potential biases in AI-driven security systems.

Further Information
Website