Multi-Camera Bird-Eye-View Occupancy Detection for Intelligent Transportation System

Description
3D perception poses a significant challenge in Intelligent Transportation Systems (ITS) due to occlusion and limited field of view. The necessity for real-time processing and alignment with existing traffic infrastructure compounds these limitations. To counter these issues, this work introduces a novel multi-camera Bird's-Eye View (BEV) occupancy detection framework. The approach leverages multi-camera setups to overcome occlusion and field-of-view limitations, while BEV occupancy simplifies the 3D perception task and still retains the information critical to traffic applications. A novel dataset for BEV occupancy detection, encompassing diverse scenes and varying camera configurations, was created using the CARLA simulator. Extensive evaluation of several multi-view occupancy detection models on this dataset showed that scene diversity and occupancy grid resolution play critical roles in model performance. A structured framework that complements the generated data is proposed for real-world data collection, and the trained model is validated under real-world conditions to confirm its practical applicability, demonstrating the influence of robust dataset design on ITS perception systems. These contributions support significant advancements in traffic management, safety, and operational efficiency.
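As a rough illustration of the fused-grid idea (a minimal sketch, not the thesis's actual pipeline), the code below OR-fuses per-camera ground-plane detections into a single BEV occupancy grid. The grid extent, the 0.5 m cell resolution, and the world_to_cell/fuse_cameras helpers are all illustrative assumptions; each camera is assumed to have already projected its detections from image space to world coordinates (e.g., via a ground-plane homography).

```python
# Minimal sketch: fusing per-camera ground-plane detections into a shared
# BEV occupancy grid. Grid extent, resolution, and helper names are
# illustrative assumptions, not values from the thesis.
import numpy as np

GRID_SIZE = (200, 200)             # cells (assumed)
CELL_METERS = 0.5                  # occupancy grid resolution (assumed)
ORIGIN = np.array([-50.0, -50.0])  # world coords of grid cell (0, 0)

def world_to_cell(points_xy: np.ndarray) -> np.ndarray:
    """Map Nx2 world-frame (x, y) points to integer grid indices."""
    return np.floor((points_xy - ORIGIN) / CELL_METERS).astype(int)

def fuse_cameras(detections_per_camera: list[np.ndarray]) -> np.ndarray:
    """OR-fuse ground-plane detections from all cameras into one grid.

    Each array holds Nx2 world-frame object footprints, assumed already
    projected from image space by the per-camera calibration.
    """
    grid = np.zeros(GRID_SIZE, dtype=bool)
    for pts in detections_per_camera:
        cells = world_to_cell(pts)
        # Keep only cells that fall inside the grid extent.
        valid = np.all((cells >= 0) & (cells < np.array(GRID_SIZE)), axis=1)
        grid[cells[valid, 0], cells[valid, 1]] = True
    return grid

# Example: two cameras see the same object with slightly different
# position estimates; both land in the same occupancy cell.
cam_a = np.array([[3.2, -1.0]])
cam_b = np.array([[3.4, -0.8]])
occupancy = fuse_cameras([cam_a, cam_b])
print(occupancy.sum(), "occupied cell(s)")
```

A coarser CELL_METERS shrinks the grid and speeds up processing but merges nearby objects into one cell, which is one way the occupancy grid resolution noted above can affect model performance.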
Date Created
2024

Interpretable Hate Speech Detection via Large Language Model-extracted Rationales

Description
Social media platforms have become widely used for open communication, yet their lack of moderation has led to the proliferation of harmful content, including hate speech. Manual monitoring of such vast amounts of user-generated data is impractical, necessitating automated hate speech detection methods. Pre-trained language models possess strong base capabilities: they not only excel at in-distribution language modeling but also show powerful abilities in out-of-distribution language modeling, transfer learning, and few-shot learning. However, these models operate as complex function approximators, mapping input text to a hate speech classification without providing any insight into the reasoning behind their predictions. Existing methods therefore often lack transparency, hindering their effectiveness, particularly in sensitive content moderation contexts. Recent efforts have sought to pair such classifiers with large language models like ChatGPT and Llama 2, which exhibit reasoning capabilities and broad knowledge utilization. This thesis explores leveraging the reasoning abilities of large language models to enhance the interpretability of hate speech detection. A novel framework is proposed that utilizes state-of-the-art Large Language Models (LLMs) to extract interpretable rationales from input text, highlighting the key phrases or sentences relevant to hate speech classification. By incorporating these rationale features into a hate speech classifier, the framework provides inherently transparent and interpretable results. This approach combines the language understanding prowess of LLMs with the discriminative power of advanced hate speech classifiers, offering a promising solution to the challenge of interpreting automated hate speech detection models.
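As a rough illustration of the rationale-augmented pipeline described above (a sketch under assumptions, not the thesis's implementation), the code below has an LLM list the phrases driving a hate speech judgment and then feeds those rationales to a downstream classifier. The prompt wording, the query_llm placeholder, and the RationalePrediction container are all hypothetical; query_llm stands in for any chat-completion client (e.g., for ChatGPT or Llama 2).

```python
# Minimal sketch of LLM-extracted rationales feeding a hate speech
# classifier. All names and the prompt are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class RationalePrediction:
    label: str              # e.g., "hate" or "not hate"
    rationales: list[str]   # phrases the LLM flagged as driving the label

RATIONALE_PROMPT = (
    "List the exact phrases in the following text that indicate hate "
    "speech, one per line. If none, reply NONE.\n\nText: {text}"
)

def query_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (assumption)."""
    raise NotImplementedError("wire up an actual LLM client here")

def extract_rationales(text: str) -> list[str]:
    """Parse the LLM reply into a list of rationale phrases."""
    reply = query_llm(RATIONALE_PROMPT.format(text=text))
    lines = [ln.strip() for ln in reply.splitlines() if ln.strip()]
    return [] if lines == ["NONE"] else lines

def classify(text: str, classifier) -> RationalePrediction:
    """Augment the input with its rationales before classifying."""
    rationales = extract_rationales(text)
    # Here the rationales are simply concatenated onto the input; the
    # thesis may fuse rationale features with the classifier differently.
    augmented = text + " [RATIONALE] " + " ; ".join(rationales)
    label = classifier(augmented)  # any text-classifier callable
    return RationalePrediction(label=label, rationales=rationales)
```

Because the returned rationales are surfaced alongside the label, a moderator can inspect exactly which phrases the model relied on, which is the transparency property the framework targets.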
Date Created
2024