NeRF Robustness Study Against Adversarial Bit Flip Attack
Description
Recently, there has been a notable surge in the development of generative models for synthesizing 3D scenes. Among these, Neural Radiance Fields (NeRF) is one of the most popular AI approaches due to its outstanding performance, relatively small model size, and fast training and rendering time. Owing to this popularity, it is important to investigate the security of the NeRF model: if it is widely deployed across different applications while carrying fatal security flaws, serious problems could follow.
Meanwhile, in AI security and model robustness research, the emerging adversarial Bit Flip Attack (BFA) has been demonstrated to greatly reduce a model's accuracy by flipping only a few bits out of the millions of weight parameters stored in a computer's main memory. Such a malicious fault injection attack raises a new robustness concern for the widely used NeRF-based 3D modeling. This master's thesis studies the robustness of the NeRF model against the adversarial bit flip attack. The results reveal that the NeRF model is highly vulnerable to BFA: rendered image quality degrades severely after only a few bit flips in the model parameters.
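As a rough illustration of the attack model described above (a minimal sketch of a single memory bit flip, not the thesis's actual bit-search procedure), the snippet below flips one bit in the IEEE-754 representation of a float32 weight; the function name and the NumPy-based setup are illustrative assumptions:

```python
import numpy as np

def flip_bit(weights: np.ndarray, flat_index: int, bit: int) -> None:
    """Flip one bit of one float32 weight in place (illustrative attack model)."""
    view = weights.reshape(-1).view(np.uint32)  # reinterpret the IEEE-754 bits
    view[flat_index] ^= np.uint32(1) << np.uint32(bit)

w = np.full((4,), 0.05, dtype=np.float32)  # stand-in for a slice of model weights
flip_bit(w, 0, 30)  # flipping the exponent MSB turns 0.05 into a huge value
print(w)            # one flipped bit is enough to corrupt the layer's output
```

Flipping a high exponent bit, as here, is the worst case: a small weight can jump by dozens of orders of magnitude, which is why only a handful of such flips can wreck rendering quality.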
Date Created
2023
Agent
- Author (aut): Yu, Zhou
- Thesis advisor (ths): Fan, Deliang
- Committee member: Chakrabarti, Chaitali
- Committee member: Zhang, Yanchao
- Publisher (pbl): Arizona State University