Towards Fine-Grained Control of Visual Data in Mobile Systems
Description
With the rapid development of both hardware and software, mobile devices, with their advantages in mobility, interactivity, and privacy, have enabled various applications, including social networking, mixed reality, entertainment, and authentication. In diverse forms such as smartphones, glasses, and watches, the number of mobile devices is expected to grow by one billion per year in the coming years.
These devices not only generate and exchange small data, such as GPS coordinates, but also large data, including videos and point clouds.
Such massive visual data presents many challenges for processing on mobile devices.
First, continuously capturing and processing high-resolution visual data is energy-intensive and can drain the battery of a mobile device very quickly.
Second, offloading data to edge or cloud computing helps, but users worry that their privacy may be exposed to malicious developers.
Third, interactivity and user experience degrade if mobile devices cannot process large-scale visual data, such as off-device high-precision point clouds, in real time.
To address these challenges, this work presents three solutions toward fine-grained control of visual data in mobile systems, revolving around two core ideas: enabling resolution-based tradeoffs and adopting split-process protection of visual data. In particular, this work introduces: (1) the Banner media framework, which removes resolution reconfiguration latency in the operating system to enable seamless dynamic resolution-based tradeoffs;
(2) the LesnCap split-process application development framework, which protects users' visual privacy against malicious data collection in cloud-based Augmented Reality (AR) applications by isolating visual processing in a distinct process;
(3) a novel voxel grid schema that enables adaptive sampling at the edge device, flexibly sampling point clouds for interactive 3D vision use cases across mobile devices and mobile networks.
Evaluation in several mobile environments demonstrates that, by controlling visual data at a fine granularity, energy efficiency can be improved by 49% when switching between resolutions, visual privacy can be protected through split-process execution with negligible overhead, and point clouds can be delivered at high throughput to meet various requirements. Thus, this work can enable more continuous mobile vision applications for the future of a new reality.
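The split-process idea behind contribution (2) can be illustrated with a generic sketch; the function and field names below are illustrative assumptions, not the LesnCap API. The key principle is that the code holding raw camera frames runs in its own OS process, and only derived, low-risk results cross the process boundary to the (potentially untrusted) application:

```python
from multiprocessing import Pipe, Process

def trusted_vision_worker(conn):
    """Runs in an isolated process that holds the raw camera frames.

    Only derived, low-risk results (here, a made-up marker pose) cross
    the process boundary; the frames themselves never leave.
    """
    raw_frame = [[0] * 4 for _ in range(4)]  # stand-in for camera pixels
    pose = {"x": 1.0, "y": 2.0, "z": 0.5}    # result of visual processing
    conn.send(pose)                           # share results, not pixels
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    worker = Process(target=trusted_vision_worker, args=(child_conn,))
    worker.start()
    # The app-side process receives only the pose, never raw_frame.
    pose = parent_conn.recv()
    worker.join()
    print(pose)
```

Because the operating system enforces the process boundary, even a compromised application library cannot read the frames held by the worker; it sees only what is explicitly sent over the pipe.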
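The voxel grid idea behind contribution (3) can likewise be sketched generically; the function name and the leaf sizes below are illustrative assumptions, not the schema proposed in this work. A classic voxel grid filter buckets points into cubic cells and keeps one representative (here, the centroid) per occupied cell, so the cell edge length becomes a knob for trading point-cloud density against transmission and processing cost:

```python
import numpy as np

def voxel_grid_downsample(points, leaf_size):
    """Downsample an (N, 3) point cloud to one centroid per voxel.

    `leaf_size` is the voxel edge length: larger values keep fewer
    points, trading 3D detail for bandwidth and compute.
    """
    # Map each point to integer voxel coordinates.
    voxels = np.floor(points / leaf_size).astype(np.int64)
    # Group points by voxel; `inverse[i]` is the voxel index of point i.
    _, inverse = np.unique(voxels, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    # Average the points falling into each voxel.
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=n_voxels).reshape(-1, 1)
    return sums / counts

# An edge device could adapt `leaf_size` to network conditions:
cloud = np.random.rand(10000, 3)
coarse = voxel_grid_downsample(cloud, leaf_size=0.1)   # fewer points
fine = voxel_grid_downsample(cloud, leaf_size=0.02)    # more points
```

Adapting `leaf_size` per client, as hinted above, is one plausible way an edge device could serve the same scene at different densities to devices with different display, compute, and network budgets.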
Date Created
The date the item was originally created (prior to any relationship with the ASU Digital Repository).
2022
Agent
- Author (aut): Hu, Jinhan
- Thesis advisor (ths): LiKamWa, Robert
- Committee member: Wu, Carole-Jean
- Committee member: Doupe, Adam
- Committee member: Jayasuriya, Suren
- Publisher (pbl): Arizona State University