Analysis of the FAA's Small UAS Regulations for Airport Security Support UAS Operations

Description
Unmanned Aircraft Systems (UAS) are finding new applications in various industries, including airport operations. One of the most recent examples is the Security Support UAS, which serves as a supporting technology for airport security. The ability to conduct surveillance autonomously, at higher speeds, and in areas inaccessible to ground vehicles makes UAS a well-suited instrument for supporting numerous airport security activities such as perimeter patrol, security alert response, and threat tracking. Despite these benefits, present regulations restrict the use of this technology at airports. This study aims to determine how well equipped the regulatory framework is to support safe UAS flights at US airports. The Federal Aviation Administration's (FAA) Small Unmanned Aircraft Systems rule, generally referred to as Part 107, was analyzed qualitatively to examine the current regulatory environment. Although Part 107 is intended to regulate only low-risk small UAS flights, findings indicate that waivers to Part 107 that incorporate appropriate risk mitigation can enable more complex UAS operations. Through its waiver procedure, the FAA has made significant strides in adapting current regulations to meet the operational requirements of numerous emergent UAS applications. However, more permanent solutions are required to enable scalable operations.
Date Created
2022
Agent

Towards Human-Machine Symbiosis: Design for Effective AI Facilitation

Description
The rapid increase in the volume and complexity of data has led to accelerated adoption of Artificial Intelligence (AI) applications, primarily as intelligent machines, in everyday life. Providing explanations, which convey the rationale behind an AI agent's decision-making, is considered an imperative ability for an AI agent in a human-robot teaming framework. The validity of AI models is therefore constrained by their ability to explain their decision-making rationale. On the other hand, AI agents cannot perceive the social situations that human experts recognize using their background knowledge, particularly in domains such as cybersecurity and the military. Social behavior depends on situation awareness, which in turn relies on interpretability, transparency, and fairness if efficient human-AI collaboration is to be achieved. Consequently, the human remains an essential element in planning, especially when the problem's constraints are difficult for an agent to express in a dynamic setting. This dissertation first develops several model-based explanation generation approaches that predict where the human teammate would misunderstand the plan and generate an explanation accordingly. The robot's generated explanations, or its interactive explicable behavior, manage the human teammate's cognitive workload and increase overall team situation awareness throughout human-robot interaction. The dissertation then focuses on a rule-based model that preserves the collaborative engagement of the team by exploring essential aspects of the facilitator agent's design. In addition to recognizing where discrepancies in the plan might arise, focusing on the decision-making process provides insight into the reasons behind conflicts between human expectations and the robot's behavior. Employing a rule-based framework shifts the focus from assisting an individual (human) teammate to helping the team interactively while maintaining collaboration.
Concentrating on teaming also makes it possible to recognize cognitive biases that skew teammates' expectations and affect interaction behavior. This dissertation investigates how to maintain collaborative engagement, or cognitive readiness, during collaborative planning tasks. Moreover, it lays out a planning framework that accounts for the human teammate's cognitive ability to understand machine-provided explanations while collaborating on a planning task. Accordingly, the dissertation explores the design of an AI facilitator that helps a team plan collaboratively on a challenging task, mitigates teaming biases, and supports effective communication. It also investigates how certain cognitive biases affect the task outcome and shape the utility function. The facilitator's role is to support goal alignment, consensus on planning strategies, utility management, effective communication, and bias mitigation.
Date Created
2021
Agent