Full metadata
Title
NeRF Robustness Study Against Adversarial Bit Flip Attack
Description
Recently, there has been a notable surge in the development of generative models for synthesizing 3D scenes. Among these, Neural Radiance Fields (NeRF) is one of the most popular AI approaches owing to its strong performance with a relatively small model size and fast training and rendering times. Given this popularity, it is important to investigate the security of NeRF models: if they are widely deployed across applications while harboring fatal security flaws, serious problems could follow.
Meanwhile, in AI security and model robustness research, the emerging adversarial Bit Flip Attack (BFA) has been demonstrated to greatly reduce a model's accuracy by flipping only a few bits among the millions of weight parameters stored in a computer's main memory. Such malicious fault-injection attacks raise new robustness concerns for widely used NeRF-based 3D modeling. This master's thesis studies the robustness of the NeRF model against the adversarial bit flip attack. The study finds that the NeRF model is highly vulnerable to BFA: rendered image quality degrades severely after only a few bit flips in the model parameters.
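As a loose illustration of why a handful of flips can be so damaging (not code from the thesis), the sketch below flips a single exponent bit of an IEEE-754 float32 weight, changing its magnitude by dozens of orders of magnitude. The helper name flip_bit and the sample weight value are made up for this example; a real adversarial BFA additionally searches for the most vulnerable bits, whereas this only shows the per-bit effect.

import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB ... 31 = sign bit) of a float32 and return the result."""
    # Reinterpret the float's 4 bytes as an unsigned 32-bit integer.
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit  # toggle the chosen bit
    # Reinterpret the modified bits as a float again.
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

w = 0.0312                 # a typical small MLP weight (made-up value)
print(flip_bit(w, 2))      # low mantissa bit: value barely changes
print(flip_bit(w, 30))     # top exponent bit: value explodes to ~1e36

A single flip in a high exponent bit turns a small weight into an astronomically large one, which is why corrupting just a few well-chosen bits in memory can wreck a model's output.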
Date Created
2023
Contributors
- Yu, Zhou (Author)
- Fan, Deliang (Thesis advisor)
- Chakrabarti, Chaitali (Committee member)
- Zhang, Yanchao (Committee member)
- Arizona State University (Publisher)
Topical Subject
Resource Type
Extent
22 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.190982
Level of coding
minimal
Cataloging Standards
Note
Partial requirement for: M.S., Arizona State University, 2023
Field of study: Computer Engineering
System Created
- 2023-12-14 02:04:19
System Modified
- 2023-12-14 02:04:24