Image Scaling Attack Simulation: A Measure of Stealth and Detectability
Date: 2023-12-14
Abstract
Cybersecurity practices require constant effort to maintain, and one major weakness within the machine learning field is a lack of awareness of potential attacks, not only against deployed machine learning models but also during the model development process. Datasets can be poisoned to benefit an attacker and to degrade the performance of models trained on them. Previous studies have established that preprocessing attacks, such as image scaling attacks, can be difficult to detect both visually and algorithmically. However, these studies place little emphasis on the real-world performance of such attacks or on how detectable their presence is. The purpose of this work is to analyze how awareness of image scaling attacks relates to demographic background and experience. We conduct a survey in which we gather the subjects’ demographics, assess the subjects’ experience in cybersecurity, record their responses to a poorly performing convolutional neural network model that has been covertly hindered by an image scaling attack on its dataset, and note their reactions after we reveal that the images used within the broken model have been attacked. The subjects in our pilot analysis consist of students taking computer science courses and computer science professors at Texas A&M University. We find that the overall detection rate of the attack is low enough for it to be viable in a workplace or academic setting, and that even after disclosure, subjects cannot conclusively distinguish benign images from attacked images.
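For readers unfamiliar with the attack class studied here, the following is a minimal illustrative sketch (not the authors’ implementation) of how an image scaling attack works, under the simplifying assumption that the victim’s preprocessing uses a known nearest-neighbor downscaler. Because nearest-neighbor scaling copies only a sparse grid of source pixels into the output, an attacker can overwrite exactly those pixels with a payload image: the full-resolution image still looks benign, but the model’s preprocessing sees the payload. The function names `nn_downscale` and `craft_attack_image` are hypothetical; attacks against real library scalers (e.g., bilinear resizing in OpenCV, Pillow, or TensorFlow) instead solve an optimization problem over the scaler’s sampling weights.

```python
# Minimal sketch of an image scaling attack against nearest-neighbor
# downscaling. Assumes the attacker knows the victim's scaler and its
# output size; hypothetical demo code, not the thesis implementation.
import numpy as np

def nn_downscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor downscale: each output pixel copies one source pixel."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # sampled source rows
    cols = np.arange(out_w) * in_w // out_w  # sampled source columns
    return img[rows[:, None], cols[None, :]]

def craft_attack_image(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Embed `target` into `source` at exactly the pixels the scaler samples.

    The result looks like `source` at full resolution (only a sparse grid
    of pixels changes) but downscales to `target`.
    """
    out_h, out_w = target.shape[:2]
    in_h, in_w = source.shape[:2]
    attack = source.copy()
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    attack[rows[:, None], cols[None, :]] = target
    return attack

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)  # benign-looking image
    target = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)    # poison payload
    attack = craft_attack_image(source, target)
    # The downscaled attack image is exactly the payload.
    assert np.array_equal(nn_downscale(attack, 64, 64), target)
    # Only ~64*64 of 512*512 pixels change (~1.6%), so the attack image
    # remains visually close to the source at full resolution.
    changed = np.mean(np.any(attack != source, axis=-1))
    print(f"fraction of pixels modified: {changed:.4f}")
```

In this sketch roughly 1.6% of the pixels are modified, which illustrates why such attacks are hard to spot visually and why the survey subjects in this study struggled to tell benign images from attacked ones even after the attack was disclosed.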