Deployable Web GUI for LLM Applications

Description
The scientific manuscript review stage is a key part of the modern scientific process. It involves rigorous evaluation of new papers by peers to assess the significance of contributions in a particular area of study and to ensure that papers meet high standards. This process helps maintain the quality and credibility of research. However, some reviews can be toxic or overly discouraging, leading to unintentional psychological harm (such as anxiety or depression) to paper authors and detracting from the constructive tone of the review space. This Thesis/Creative Project was completed alongside a capstone project. Our capstone project aims to address this issue. The goal is to fine-tune a Large Language Model (LLM) that can first accurately identify toxic sentences within a paper review. The LLM then revises any toxic sentences in a way that maintains the criticism but delivers it in a friendlier, more encouraging tone. To be used effectively, the LLM requires a Graphical User Interface (GUI) so that end-users (such as editors, associate editors, and reviewers) can easily interact with it. This allows them to update the wording of the review efficiently while maintaining scientific integrity. While the GUI provides a user-friendly interface for interacting with the LLM, there are technical challenges in running an LLM application in a web-based framework. LLMs are computationally expensive to run and require significant GPU RAM, which can be a limiting factor, especially in a web-based framework with limited resources. One potential solution is model quantization, which reduces the memory footprint of the model. However, quantization introduces the problem of model drift: the model's performance may degrade when quantized, and this degradation must be measured to ensure the model continues to provide accurate results.
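As a rough illustration of the quantization idea described above, the sketch below loads a causal LLM in 4-bit precision with the Hugging Face transformers and bitsandbytes libraries. The checkpoint name, prompt, and generation settings are placeholders, since the abstract does not name the model or the tooling actually used.

```python
# Minimal sketch, assuming Hugging Face transformers + bitsandbytes.
# The model name is a hypothetical fine-tuned checkpoint, not the project's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "your-org/review-detox-llm"  # placeholder checkpoint name

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit NF4 format
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # run compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=quant_config,
    device_map="auto",                     # place layers on available GPU(s)
)

prompt = "Rewrite this review sentence in a constructive tone: 'This paper is a waste of time.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running the same held-out reviews through the full-precision and quantized checkpoints and comparing their outputs is one way the drift mentioned above could be quantified.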
Date Created
2024-05
Agent

Anomaly Detection using Cascade Variational Autoencoder Coupled with Zero Shot Learning – Medical Imaging Use Cases

Description
Detection of anomalies before they are included in downstream diagnosis/prognosis models is an important criterion for maintaining medical AI imaging model performance across internal and external datasets. Furthermore, the need to curate huge amounts of data to train supervised models that produce precise results also requires an automated model that can accurately identify in-distribution (ID) and out-of-distribution (OOD) data to ensure training dataset quality. However, the core challenges in designing such a system are: (i) given the infinite variations of the anomaly, curation of training data is infeasible; (ii) assumptions about the types of anomalies are often hypothetical. The proposed work designs an unsupervised anomaly detection model using a cascade variational autoencoder coupled with a zero-shot learning network that maps the latent vectors to semantic attributes. The performance of the proposed model is demonstrated on two different use cases – skin images and chest radiographs – and compared against the same class of state-of-the-art generative OOD detection models.
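The sketch below shows, in broad strokes, how a variational autoencoder's latent vector can be coupled with a zero-shot attribute head for OOD scoring. It is a simplified single-stage stand-in for the cascade architecture; the layer sizes, attribute dimensionality, and scoring rule are illustrative assumptions, not the thesis's design.

```python
# Minimal sketch, assuming PyTorch: a VAE whose latent code is also mapped to
# a semantic-attribute space, with an OOD score combining reconstruction error
# and attribute mismatch. All dimensions and the scoring rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeVAE(nn.Module):
    def __init__(self, in_dim=64 * 64, latent_dim=32, attr_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, in_dim), nn.Sigmoid())
        # zero-shot head: maps the latent code to semantic attributes
        self.attr_head = nn.Linear(latent_dim, attr_dim)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), self.attr_head(z), mu, logvar

def ood_score(model, x, class_attributes):
    """Higher score = more likely out-of-distribution (illustrative rule)."""
    recon, attrs, _, _ = model(x)
    recon_err = F.mse_loss(recon, x, reduction="none").mean(dim=1)
    # distance from predicted attributes to the nearest known class prototype
    attr_dist = torch.cdist(attrs, class_attributes).min(dim=1).values
    return recon_err + attr_dist
```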
Date Created
2023
Agent

Developing an Echocardiography-Based, Automatic Deep Learning Framework for the Differentiation of Increased Left Ventricular Wall Thickness Etiologies

Description
Increased LV wall thickness is frequently encountered in transthoracic echocardiography (TTE). While accurate and early diagnosis is clinically important, given the differences in available therapeutic options and prognosis, an extensive workup is often required to establish the diagnosis. I propose the first echo-based, automated deep learning model with a fusion architecture to facilitate the evaluation and diagnosis of increased left ventricular (LV) wall thickness. Patients with an established diagnosis of increased LV wall thickness (hypertrophic cardiomyopathy (HCM), cardiac amyloidosis (CA), and hypertensive heart disease (HTN)/others) between 1/2015 and 11/2019 at Mayo Clinic Arizona were identified. The cohort was divided into 80%/10%/10% training, validation, and testing sets, respectively. Six baseline TTE views were each used to optimize a pre-trained InceptionResnetV2 model, and the per-view model outputs were used to train a meta-learner under a fusion architecture. Model performance was assessed by multiclass area under the receiver operating characteristic curve (AUROC). A total of 586 patients were used for the final analysis (194 HCM, 201 CA, and 191 HTN/others). The mean age was 55.0 years, and 57.8% were male. Among the individual view-dependent models, the apical 4-chamber model had the best performance (AUROC: HCM: 0.94, CA: 0.73, and HTN/other: 0.87). The final fusion model outperformed all the view-dependent models (AUROC: CA: 0.90, HCM: 0.93, and HTN/other: 0.92). I successfully established an automatic end-to-end deep learning framework that accurately differentiates the major etiologies of increased LV wall thickness, including HCM and CA, from the background of HTN/other diagnoses.
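A minimal sketch of the view-level fusion idea follows, assuming per-view InceptionResnetV2 backbones (via timm), a logistic-regression meta-learner over the concatenated per-view probabilities, and multiclass AUROC from scikit-learn. The meta-learner choice and all hyperparameters are illustrative, not those used in the study.

```python
# Minimal late-fusion sketch, assuming timm / PyTorch / scikit-learn.
# One backbone per TTE view -> concatenated class probabilities -> meta-learner.
import numpy as np
import timm
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

NUM_CLASSES = 3   # HCM, CA, HTN/other
NUM_VIEWS = 6     # six baseline TTE views

# one backbone per echo view (pretrained weights, new 3-class head)
view_models = [
    timm.create_model("inception_resnet_v2", pretrained=True, num_classes=NUM_CLASSES)
    for _ in range(NUM_VIEWS)
]

def view_probabilities(models, view_batches):
    """Stack softmax outputs from each view model into one feature vector per study."""
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=1) for m, x in zip(models, view_batches)]
    return torch.cat(probs, dim=1).cpu().numpy()  # (n_studies, NUM_VIEWS * NUM_CLASSES)

def fit_and_evaluate(X_train, y_train, X_test, y_test):
    """Train an illustrative meta-learner on fused features and report multiclass AUROC."""
    meta = LogisticRegression(max_iter=1000)
    meta.fit(X_train, y_train)
    y_prob = meta.predict_proba(X_test)
    return roc_auc_score(y_test, y_prob, multi_class="ovr")
```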
Date Created
2022
Agent