Marek Pawlicki, Ph.D. Eng., holds an adjunct position at the Bydgoszcz University of Science and Technology. He has been involved in a number of international projects related to cybersecurity, critical infrastructure protection, and software quality (e.g. H2020 SPARTA, H2020 SIMARGL, H2020 PREVISION, H2020 MAGNETO, H2020 Q-Rapids, H2020 SocialTruth). He is the author of over 80 peer-reviewed scientific publications. His interests pertain to the application of machine learning in several domains, including cybersecurity.
Enhancing Network Cybersecurity with Novel Trustworthy AI Solutions
This presentation will focus on novel trustworthy AI solutions in the field of network intrusion detection systems (NIDS). The research and development work, particularly in the context of EU-funded projects such as H2020 STARLIGHT, HE AI4Cyber, H2020 SPARTA, H2020 APPRAISE, H2020 ELEGANT, and H2020 SIMARGL, has led to significant advancements in NIDS and the security of AI systems.
The core of this presentation details the development of AI-based intrusion detection technologies that leverage flow-based data for real-time threat analysis. These systems are designed with modularity and scalability in mind, utilizing tools like Apache Spark and Kafka for efficient data handling and processing.
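As a rough illustration of the flow-based approach described above, a stream of flow records can be reduced to a fixed feature vector per flow and scored in real time. The record fields and the threshold below are hypothetical, for illustration only; in the actual systems the records would arrive via Apache Kafka and be processed with Apache Spark, for which plain Python stands in here:

```python
# Minimal sketch of flow-based feature extraction for intrusion detection.
# Field names and the alert threshold are assumed, not taken from the
# projects' actual pipelines.

def flow_features(flow: dict) -> list[float]:
    """Turn a raw flow record into a numeric feature vector."""
    duration = max(flow["duration_s"], 1e-6)        # guard against zero-length flows
    return [
        flow["bytes"] / duration,                   # throughput (B/s)
        flow["packets"] / duration,                 # packet rate (pkt/s)
        flow["bytes"] / max(flow["packets"], 1),    # mean packet size (B)
    ]

def is_suspicious(features: list[float], threshold: float = 1e6) -> bool:
    """Flag a flow whose throughput exceeds a (hypothetical) threshold."""
    return features[0] > threshold

flows = [
    {"duration_s": 2.0, "bytes": 3000, "packets": 20},              # ordinary flow
    {"duration_s": 0.5, "bytes": 900_000_000, "packets": 600_000},  # traffic burst
]
alerts = [is_suspicious(flow_features(f)) for f in flows]
print(alerts)  # [False, True]
```

In a deployed NIDS the threshold rule would be replaced by a trained classifier, but the per-flow feature-vector interface stays the same, which is what makes such pipelines modular.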
Another major focus is explainability in AI, which is crucial for gaining user trust and enhancing system transparency. Methodologies for integrating explainable AI (xAI) techniques with existing AI models will be presented; these are critical for sectors that require an understanding of AI decision-making processes. The practical implementation of these technologies in various industrial and academic projects will be discussed, showcasing their effectiveness in live environments and their adaptability to different types of cyberthreats. The presentation concludes with insights into future research directions and opportunities for further innovation in AI-driven cybersecurity, aiming to improve reliability, security, and user trust.
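To make the explainability discussion above concrete, the sketch below shows the simplest form of local attribution: decomposing a prediction into per-feature contributions. The linear model, feature names, and weights are assumed for illustration; xAI libraries such as SHAP and LIME generalise this additive-decomposition idea to complex models like the ones used in NIDS:

```python
# Local attribution for a linear anomaly score: for a linear model,
# contribution_i = weight_i * feature_i decomposes the score exactly.
# Feature names and weights are hypothetical.

FEATURES = ["throughput", "packet_rate", "mean_packet_size"]
WEIGHTS = [0.5, 0.25, 0.25]

def explain(x: list[float]) -> dict[str, float]:
    """Return each feature's contribution to the linear anomaly score."""
    return {name: w * v for name, w, v in zip(FEATURES, WEIGHTS, x)}

x = [4.0, 2.0, 2.0]                  # one flow's feature vector
contributions = explain(x)
score = sum(contributions.values())  # contributions sum to the model's score

print(contributions)  # {'throughput': 2.0, 'packet_rate': 0.5, 'mean_packet_size': 0.5}
print(score)          # 3.0
```

An analyst reading such an attribution can see that throughput, not packet rate, drove this alert, which is exactly the kind of transparency that builds trust in AI-based detection.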