Description
Correctly implemented instructions are fundamental to computer architecture and to the security of a computer. Without them, there is a risk of security vulnerabilities such as privilege escalation. Current methods for detecting specification mismatches rely on variations of manual review or on symbolic execution, and both can be time-consuming or suffer from scalability and efficiency problems. This thesis proposes an approach that improves on these methods by applying machine learning, specifically large language models (LLMs), and evaluates it on the RISC-V architecture. RISC-V is chosen as the test target because of its simplicity and its smaller instruction set compared to architectures such as x86. ChatGPT is selected as the LLM because of its rising popularity as well as its capability. The approach combines manual effort with ChatGPT to evaluate how well the model generates expressions and test cases for detecting specification mismatches. The ChatGPT-generated test cases are evaluated on a RISC-V framework to determine whether they can be used in the future to detect specification mismatches and whether the approach can extend to more complex architectures.
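To make the idea concrete, the minimal sketch below (not taken from the thesis) shows one way the test-case-checking step described above could look: a hypothetical query_llm call stands in for ChatGPT, which is asked to produce a test case for a RISC-V instruction from its manual description, and a toy execute_addi stands in for the RISC-V framework under test; a disagreement between the two flags a possible specification mismatch. All function names and the JSON test-case format are illustrative assumptions.

# Illustrative sketch only; query_llm and execute_addi are hypothetical
# placeholders, not names used in the thesis.
import json

def query_llm(manual_excerpt: str) -> str:
    """Placeholder for a ChatGPT call that returns a JSON test case,
    e.g. {"rs1_value": 5, "imm": -3, "expected_rd": 2}."""
    raise NotImplementedError("connect an LLM client here")

def execute_addi(rs1_value: int, imm: int) -> int:
    """Toy stand-in for the implementation under test: RV64 ADDI adds a
    sign-extended immediate to rs1, wrapping at 64 bits."""
    return (rs1_value + imm) & ((1 << 64) - 1)

def check_addi_for_mismatch(manual_excerpt: str) -> bool:
    """Run one LLM-generated test case against the implementation and
    report whether the observed result diverges from the expected one."""
    case = json.loads(query_llm(manual_excerpt))
    observed = execute_addi(case["rs1_value"], case["imm"])
    expected = case["expected_rd"] & ((1 << 64) - 1)
    if observed != expected:
        print(f"Possible specification mismatch: expected {expected}, got {observed}")
        return True
    return False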
Details
Title
- Detecting Specification Mismatches using Machine Learning-Based Analysis of CPU Manuals
Contributors
- Guzman, Rachel (Author)
- Xiao, Xusheng (Thesis advisor)
- Ahmad, Adil (Committee member)
- Ghayekhloo, Samira (Committee member)
- Arizona State University (Publisher)
Date Created
2024
Note
- Partial requirement for: M.S., Arizona State University, 2024
- Field of study: Computer Science