Accelerating Deep Learning Inference in Relational Database Systems
Description
Deep learning has become a powerful method for deriving insights and predictions from massive amounts of data. In practical applications, however, conventional deep learning frameworks frequently run into problems, especially when the data resides in relational database systems. In recent years, a growing body of research has therefore focused on integrating machine learning model inference with relational databases to obtain benefits such as avoiding privacy issues and data transfer overheads. The logic for performing inference with a DNN model can be encapsulated in a user-defined function (UDF); these UDFs can then be integrated with the query interface of the DBMS and executed by the query execution engine. While it is relatively straightforward to parallelize machine learning algorithms through UDFs, such implementations are not always optimal and may suffer from contention between the database's threading and the threading of the libraries that the UDFs invoke. Since relational databases provide native support for relational operators, a cost model can be used to decide when to selectively transform UDF-based inference logic into a model-parallel implementation for better performance. This thesis therefore focuses on the following: 1. designing a domain-specific language for implementing the UDFs on top of the Velox library, which can be lowered to a graph-based intermediate representation (IR); and 2. providing a cost model that guides the decision to convert a UDF-centric implementation into a relation-centric one.
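To make the UDF-centric path concrete, the sketch below registers a scalar UDF that wraps a toy PyTorch model and invokes it from SQL. This is an illustrative sketch only, not the thesis implementation: the thesis targets the Velox library and a DSL lowered to a graph IR, whereas the sketch uses DuckDB's Python UDF API because it exposes the same pattern of executing inference logic inside the query engine. The `predict` function, the `samples` table, and the model itself are hypothetical.

```python
# Minimal sketch (assumptions: DuckDB's Python UDF API, a toy PyTorch model).
# The thesis builds on Velox; DuckDB is used here only to illustrate the
# UDF-in-SQL inference pattern described above.
import duckdb
import torch
from duckdb.typing import DOUBLE

# Toy "DNN" standing in for a trained model that would normally be loaded from disk.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 4),
    torch.nn.ReLU(),
    torch.nn.Linear(4, 1),
    torch.nn.Sigmoid(),
)
model.eval()

def predict(f1: float, f2: float) -> float:
    """Scalar UDF: run the model's forward pass on one row's features."""
    with torch.no_grad():
        x = torch.tensor([f1, f2], dtype=torch.float32)
        return float(model(x).item())

con = duckdb.connect()
con.execute(
    "CREATE TABLE samples AS SELECT random() AS f1, random() AS f2 FROM range(1000)"
)

# Register the inference logic with the query interface; the query execution
# engine then calls the UDF once per qualifying row.
con.create_function("predict", predict, [DOUBLE, DOUBLE], DOUBLE)
print(con.execute(
    "SELECT count(*) FROM samples WHERE predict(f1, f2) > 0.5"
).fetchone())
```

In this row-at-a-time form, the UDF is opaque to the optimizer and the DNN library's internal threading competes with the database's worker threads, which is precisely the tension the thesis's cost model is meant to resolve by deciding when to rewrite such logic into relational operators.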
Date Created
2024
Agent
- Author (aut): Masood, Saif
- Thesis advisor (ths): Zou, Jia
- Committee member: Xiao, Xusheng
- Committee member: Yang, Yingzhen
- Publisher (pbl): Arizona State University