Jolio and Yenzo discuss new attack vectors against Deep Neural Network (DNN) executables, focusing on side-channel and bit-flip vulnerabilities.
Here’s a summary of the key points:
- DNN Executables and Vulnerabilities: The talk begins by explaining that cloud service providers deploy machine learning models as valuable services. Attackers can exploit side-channels to learn the architecture of these models, which opens the door to further attacks. The speakers investigate whether DNN executables, created through deep learning compilation for performance, are also vulnerable.
- Deep Learning Compilation and Side-Channels: Deep learning compilers optimize models for specific hardware and package them as standalone executables (a minimal compilation sketch follows this list). While traditional side-channel attacks may not transfer to these executables directly, the researchers found that compiler optimizations leave “fingerprints” in the cache access patterns, which can be analyzed to infer the model’s architecture (see the toy fingerprint-matching sketch below).
- Bit-Flip Attacks: The second part of the talk focuses on bit-flip attacks, whose goal is to corrupt the model’s behavior by flipping bits in the compiled machine code (a simulation sketch follows this list). The researchers found that by identifying “super bits”, vulnerable bit locations that recur across different compiled models, they could disable a model’s functionality with a high success rate. This approach proved significantly more effective than previous attacks targeting the model’s weights.
- Key Takeaways: The presentation concludes that deep learning compiler optimizations can inadvertently leak model architecture information through side-channels. Furthermore, DNN executables are more susceptible to bit-flip attacks than the model weights themselves. The speakers call for more security research in this area to develop both offensive and defensive strategies.
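As an illustration of the compilation step described above, the following is a minimal sketch of compiling a model into a standalone shared library with Apache TVM. The ONNX file name, input shape, and target are placeholders; the speakers’ exact toolchain and models may differ.

```python
# Minimal sketch: compiling a DNN into a standalone executable with Apache TVM.
# "resnet18.onnx" and the input shape are placeholders, not the speakers' setup.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("resnet18.onnx")           # hypothetical model file
shape_dict = {"input": (1, 3, 224, 224)}          # assumed input name and shape

# Import the model into TVM's Relay IR and apply compiler optimizations
# (operator fusion, layout transforms, etc.) for a CPU target.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# The result is a self-contained native library: the "DNN executable"
# whose cache behavior and machine code the attacks target.
lib.export_library("resnet18_compiled.so")
```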
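The talk describes the side-channel analysis only at a high level. The toy sketch below, invented for illustration and not the speakers’ pipeline, shows the general idea of matching an observed cache-access trace against per-layer “fingerprint” templates to recover architectural information.

```python
# Toy illustration (not the speakers' method): guess which layer type produced
# a cache-access trace by correlating it against known per-layer "fingerprints".
import numpy as np

# Hypothetical per-layer templates: cache-miss counts over time windows, as an
# attacker might profile offline on executables they compiled themselves.
templates = {
    "conv3x3": np.array([9.0, 8.5, 9.2, 3.1, 3.0]),
    "dense":   np.array([2.0, 2.1, 7.8, 8.0, 8.1]),
    "pool":    np.array([1.0, 1.1, 1.0, 0.9, 1.2]),
}

def classify(trace: np.ndarray) -> str:
    """Return the template name with the highest normalized correlation."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.dot(a, b) / len(a))
    return max(templates, key=lambda name: ncc(trace, templates[name]))

# A noisy observation of the victim executable's cache behavior (simulated here).
observed = np.array([8.7, 8.9, 9.0, 3.3, 2.8])
print(classify(observed))  # -> "conv3x3"
```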
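To make the bit-flip threat model concrete, this sketch simulates flipping a single bit at a chosen offset in the compiled binary and saving a patched copy for evaluation. The file name and offset are hypothetical, and a real attack would induce the flip in memory at run time (for example with a fault-injection technique such as Rowhammer) rather than by editing the file on disk.

```python
# Sketch: simulate a single bit flip in a compiled DNN executable and save a
# patched copy for evaluation. File name and offset are placeholders.
from pathlib import Path

def flip_bit(data: bytes, byte_offset: int, bit_index: int) -> bytes:
    """Return a copy of `data` with one bit flipped (bit_index in 0..7)."""
    patched = bytearray(data)
    patched[byte_offset] ^= 1 << bit_index
    return bytes(patched)

original = Path("resnet18_compiled.so").read_bytes()

# Hypothetical "super bit" location inside the executable's code section;
# the actual vulnerable offsets would come from the attacker's offline search.
byte_offset, bit_index = 0x4F20, 3

Path("resnet18_flipped.so").write_bytes(flip_bit(original, byte_offset, bit_index))
# Running both libraries on the same inputs would then show whether this single
# flip degrades the model's accuracy (e.g., down to random guessing).
```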