CFP Deadline: July 2, 2018, 11:59 p.m.
Adversarial attacks on Machine Learning systems have become an indisputable threat. Attackers can compromise the training of Machine Learning models by injecting malicious data into the training set (so-called poisoning attacks), or by crafting adversarial samples that exploit the blind spots of Machine Learning models at test time (so-called evasion attacks). Adversarial attacks have been demonstrated in a number of application domains, including malware detection, spam filtering, visual recognition, speech-to-text conversion, and natural language understanding. Devising comprehensive defences against poisoning and evasion attacks by adaptive adversaries remains an open challenge. Gaining a better understanding of the threat posed by adversarial attacks, and developing more effective defence systems and methods, is therefore paramount for the adoption of Machine Learning systems in security-critical real-world applications.
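To make the evasion setting concrete, the following is a minimal sketch of a one-step fast-gradient-sign (FGSM-style) attack against a toy logistic-regression classifier. All weights, inputs, and the perturbation budget here are made-up illustrative values, not part of any real system; practical attacks target trained deep models and use far smaller, often imperceptible, perturbations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """One-step fast-gradient-sign perturbation of input x.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to x is (p - y) * w, where p is the model's
    predicted probability. Stepping in the sign of that gradient
    maximally increases the loss under an L-infinity budget eps.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Illustrative (made-up) model weights and a clean input of true class 1.
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([0.4, -0.2, 0.3])
y = 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x))      # high confidence on the clean input
print(predict(w, b, x_adv))  # confidence collapses after the perturbation
```

On this toy example the model's confidence in the true class drops from about 0.79 on the clean input to about 0.34 on the perturbed one, flipping the predicted label, which is exactly the "blind spot" behaviour evasion attacks exploit.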
The Nemesis ’18 tutorial and workshop aims to bring together researchers and practitioners to discuss recent advances in the rapidly evolving field of Adversarial Machine Learning. Particular emphasis will be on:
- Reviewing both theoretical and practical aspects of Adversarial Machine Learning;
- Sharing experience from Adversarial Machine Learning in various business applications, including (but not limited to): malware detection, spam filtering, visual recognition, speech-to-text conversion, and natural language understanding;
- Discussing adversarial attacks from both a Machine Learning and a Security/Privacy perspective;
- Gaining hands-on experience with the latest tools for researchers and developers working on Adversarial Machine Learning;
- Identifying strategic areas for future research in Adversarial Machine Learning, with a clear focus on how that research will advance the security of real-world Machine Learning applications against adversarial threats.