Development of horizontal coordination mechanisms for planning agricultural production

Description
Agricultural supply chains are complex systems which pose significant challenges beyond those of traditional supply chains. These challenges include long lead times, stochastic yields, short shelf lives and a highly distributed supply base. This complexity makes coordination critical to prevent food waste and other inefficiencies. Yet supply chains of fresh produce suffer from high levels of food waste; moreover, their high fragmentation places a great economic burden on small and medium-sized farms.

This research develops planning tools tailored to the production/consolidation level in the supply chain, taking the perspective of an agricultural cooperative—a business model which presents unique coordination challenges. These institutions are prone to internal conflict brought about by strategic behavior, internal competition and the distributed nature of production information, which members keep private.

A mechanism is designed to coordinate agricultural production in a distributed manner with asymmetrically distributed information. Coordination is achieved by varying the prices of goods in an auction-like format and allowing participants to choose their supply quantities; the auction terminates when production commitments match desired supply.
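
As a rough illustration of this kind of price-driven coordination, the sketch below implements a simple iterative price-adjustment loop. It is a hypothetical toy, not the dissertation's actual mechanism: the farmer response rule, step size, and tolerance are all illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the dissertation's mechanism): an iterative
# price-adjustment auction in which each farmer privately chooses a supply
# quantity in response to a posted price, and the coordinator adjusts the price
# until total commitments match the desired supply.

def farmer_supply(price, marginal_cost, capacity):
    """Toy private response: supply up to capacity whenever the price covers cost."""
    return capacity if price >= marginal_cost else 0.0

def coordinate(desired_supply, farmers, price=1.0, step=0.05, tol=1.0, max_iters=500):
    """Raise or lower the posted price until committed supply is close to the target."""
    for _ in range(max_iters):
        commitments = [farmer_supply(price, c, cap) for (c, cap) in farmers]
        total = sum(commitments)
        if abs(total - desired_supply) <= tol:
            return price, commitments
        price += step if total < desired_supply else -step
    return price, commitments

# Example: three farmers with private (marginal_cost, capacity) pairs.
farmers = [(0.8, 40.0), (1.1, 60.0), (1.5, 50.0)]
print(coordinate(desired_supply=100.0, farmers=farmers))
```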

To prevent participants from misrepresenting their information, strategic bidding is formulated from the farmer's perspective as an optimization problem; the resulting optimal bidding strategies are then used to refine the structure of the coordination mechanism and minimize the negative impact of strategic bidding. The coordination mechanism is shown to be robust against strategic behavior and to provide solutions with a small optimality gap. Additional information and managerial insights are obtained from the bidding data collected throughout the mechanism: it is shown that, through hierarchical clustering, farmers can be effectively classified according to their cost structures.

Finally, considerations of stochastic yields as they pertain to coordination are addressed. Here, the farmer's decision of how much to plant in order to meet contracted supply is modeled as a newsvendor problem with stochastic yields; furthermore, option contracts are made available to the farmer as tools for enhancing coordination. It is shown that the use of option contracts reduces the gap between expected harvest quantities and the contracted supply, thus facilitating coordination.
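
The planting decision under random yield can be illustrated numerically. The sketch below is a simple Monte Carlo version of a newsvendor-style trade-off under a multiplicative yield; the yield distribution, cost coefficients, and grid search are illustrative assumptions, not the dissertation's model.

```python
# Minimal sketch (illustrative assumptions): a farmer contracts quantity Q and
# decides how much x to plant under a multiplicative random yield Y, trading
# off underage (shortfall vs. contract) and overage (excess harvest) costs.
import numpy as np

rng = np.random.default_rng(0)

def expected_cost(x, Q, yields, c_under=5.0, c_over=1.0):
    """Monte Carlo estimate of the expected underage/overage cost of planting x."""
    harvest = x * yields
    under = np.maximum(Q - harvest, 0.0)   # shortfall vs. contracted supply
    over = np.maximum(harvest - Q, 0.0)    # excess harvest
    return np.mean(c_under * under + c_over * over)

Q = 100.0
yields = rng.beta(8, 2, size=10_000)       # assumed yield fraction on (0, 1)
grid = np.linspace(100, 200, 201)
best_x = min(grid, key=lambda x: expected_cost(x, Q, yields))
print(f"plant {best_x:.1f} to cover a contract of {Q:.0f}, "
      f"expected harvest {best_x * yields.mean():.1f}")
```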
Date Created
2015

A probabilistic framework of transfer learning: theory and application

Description
Transfer learning refers to statistical machine learning methods that integrate the knowledge of one domain (source domain) and the data of another domain (target domain) in an appropriate way, in order to develop a model for the target domain that is better than a model using the data of the target domain alone. Transfer learning emerged because classic machine learning, when used to model different domains, has to take one of two mechanical approaches: either assume the data distributions of the different domains to be the same and thereby develop one model that fits all, or develop one model for each domain independently. Transfer learning, on the other hand, aims to mitigate the limitations of the two approaches by accounting for both the similarity and specificity of related domains. The objective of my dissertation research is to develop new transfer learning methods and demonstrate the utility of the methods in real-world applications. Specifically, in my methodological development, I focus on two different transfer learning scenarios: spatial transfer learning across different domains and temporal transfer learning along time in the same domain. Furthermore, I apply the proposed spatial transfer learning approach to the modeling of degenerate biological systems. Degeneracy is a well-known characteristic, widely existing in many biological systems, that contributes to their heterogeneity, complexity, and robustness. In particular, I study one degenerate biological system, in which transcription factor (TF) binding sites are used to predict gene expression across multiple cell lines. Also, I apply the proposed temporal transfer learning approach to change detection in dynamic network data. Change detection is a classic research area in Statistical Process Control (SPC), but change detection in network data has received only limited study. I integrate the temporal transfer learning method, called the Network State Space Model (NSSM), with SPC and formulate the problem of change detection from dynamic networks as a covariance monitoring problem. I demonstrate the performance of the NSSM in change detection of dynamic social networks.
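
To give a flavor of the covariance-monitoring formulation, the sketch below applies a generalized-variance style control chart to a sequence of latent state vectors. It is only an illustration under simplifying assumptions (simulated states, a simple 3-sigma limit); it does not reproduce the NSSM itself.

```python
# Minimal sketch (not the dissertation's NSSM): once a dynamic network has been
# reduced to a low-dimensional latent state per time step, change detection can
# be cast as covariance monitoring. Here the log-determinant of a windowed
# sample covariance is compared against limits from an in-control reference.
import numpy as np

def window_stats(states, window):
    """Log-determinant of the sample covariance in sliding windows."""
    stats = []
    for t in range(window, len(states) + 1):
        cov = np.cov(states[t - window:t].T)
        stats.append(np.linalg.slogdet(cov)[1])
    return np.array(stats)

rng = np.random.default_rng(1)
baseline = rng.normal(size=(200, 3))               # in-control latent states
shifted = rng.normal(scale=2.0, size=(50, 3))      # covariance change
stats_ref = window_stats(baseline, window=30)
ucl = stats_ref.mean() + 3 * stats_ref.std()       # simple 3-sigma upper limit
stats_all = window_stats(np.vstack([baseline, shifted]), window=30)
print("first alarm at window index:", int(np.argmax(stats_all > ucl)))
```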
Date Created
2015

Capacitated vehicle routing problem with time windows: a case study on pickup of dietary products in nonprofit organization

Description
This thesis presents a successful application of operations research techniques in a nonprofit distribution system to improve distribution efficiency and increase customer service quality. It focuses on the truck routing problem faced by St. Mary’s Food Bank Distribution Center. The problem is modeled as a capacitated vehicle routing problem to improve distribution efficiency and is extended to a capacitated vehicle routing problem with time windows to increase customer service quality. Several heuristics are applied to solve these vehicle routing problems and are tested on well-known benchmark problems. The algorithms are also evaluated by comparing their results with the plan currently used by St. Mary’s Food Bank Distribution Center. The results suggest the heuristics are quite competitive: on average, the heuristic solutions use 17% fewer trucks and 28.52% less travel time.
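
For intuition about how such routes can be constructed, the sketch below shows a simple greedy nearest-neighbor construction for a capacitated routing instance. It is an illustrative baseline only, not one of the thesis's heuristics; time-window checks and improvement moves are omitted.

```python
# Minimal sketch (illustrative, not the thesis's heuristics): greedily build
# capacitated routes by always visiting the closest unserved stop that still
# fits within the vehicle's remaining capacity.
import math

def nearest_neighbor_routes(depot, stops, demands, capacity):
    unserved = set(demands)
    routes = []
    while unserved:
        route, load, current = [], 0.0, depot
        while True:
            feasible = [s for s in unserved if load + demands[s] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda s: math.dist(current, stops[s]))
            route.append(nxt)
            load += demands[nxt]
            current = stops[nxt]
            unserved.remove(nxt)
        routes.append(route)
    return routes

stops = {"A": (2, 1), "B": (5, 4), "C": (1, 6), "D": (7, 2)}
demands = {"A": 3, "B": 4, "C": 2, "D": 5}
print(nearest_neighbor_routes(depot=(0, 0), stops=stops, demands=demands, capacity=8))
```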
Date Created
2015

Fix-and-optimize heuristic and MP-based approaches for capacitated lot sizing problem with setup carryover, setup splitting and backlogging

Description
In this thesis, a single-level, multi-item capacitated lot sizing problem with setup carryover, setup splitting and backlogging is investigated. This problem typically arises in the tactical and operational planning stage, where the optimal production quantities and sequencing for all the products in the planning horizon are determined. Although capacitated lot sizing problems have been investigated with many different features by researchers, the simultaneous consideration of setup carryover and setup splitting is relatively new. This consideration helps reduce costs and produce feasible production schedules. Setup carryover allows a production setup to be continued between two adjacent periods without incurring extra setup costs and setup times. Setup splitting permits a setup to be partially finished in one period and continued in the next period, utilizing capacity more efficiently and removing infeasibility from the production schedule.

The main approach is as follows. First, the simple plant location formulation is adopted to reformulate the original model. Furthermore, an extended formulation, obtained by redefining the idle-period constraints, is developed to make the formulation tighter. Then, for the purpose of evaluating the solution quality of the heuristic, three types of valid inequalities are added to the model. A fix-and-optimize heuristic with two-stage product-decomposition and period-decomposition strategies is proposed to solve the formulation; a structural sketch of such a loop is given below. This generic heuristic solves a small portion of the binary variables and all the continuous variables rapidly in each subproblem. In addition, the case with demand backlogging is also incorporated to demonstrate that making additional assumptions to the basic formulation does not require completely altering the heuristic.
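
The sketch below shows only the structure of a fix-and-optimize pass over product groups; the MIP subproblem solver `solve_subproblem` is a hypothetical placeholder rather than a real library call, and a period-decomposition pass would follow the same pattern with time-window groups instead of product groups.

```python
# Minimal sketch of a fix-and-optimize decomposition loop (structure only; the
# subproblem solver is a hypothetical placeholder). Binary setup variables are
# freed for one product group at a time while all other binaries stay fixed at
# their incumbent values, so each subproblem remains small and fast.
def fix_and_optimize(product_groups, periods, incumbent, solve_subproblem, passes=3):
    """incumbent: dict mapping (product, period) -> 0/1 setup decision."""
    best = dict(incumbent)
    for _ in range(passes):
        for group in product_groups:                 # product-decomposition pass
            free = {(p, t) for p in group for t in periods}
            fixed = {k: v for k, v in best.items() if k not in free}
            # Re-optimize only the freed binaries (plus all continuous
            # variables) while the rest of the schedule is held fixed.
            solution, cost = solve_subproblem(free, fixed)
            best.update(solution)
    return best
```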

The contribution of this thesis includes several aspects. The computational results show the capability, flexibility and effectiveness of the approaches: the average optimality gap is 6% for data without backlogging and 8% for data with backlogging, respectively. In addition, when backlogging is not allowed, the performance of the fix-and-optimize heuristic is stable regardless of period length, which gives it an advantage for planning longer production schedules. Furthermore, the performance of the proposed solution approaches is analyzed so that later research on similar topics can compare results across different solution strategies.
Date Created
2015

Small blob detection in medical images

Description
Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally effective and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this dissertation, one type of imaging object is of interest: small blobs. Examples of small blob objects are cells in histopathology images, small breast lesions in ultrasound images, and glomeruli in kidney MR images. This problem is particularly challenging because small blobs often have inhomogeneous intensity distributions and indistinct boundaries against the background.

This research develops a generalized four-phase system for small blob detection. The system includes (1) raw image transformation, (2) Hessian pre-segmentation, (3) feature extraction and (4) unsupervised clustering for post-pruning. First, detecting blobs from 2D images is studied, and a Hessian-based Laplacian of Gaussian (HLoG) detector is proposed. Using scale space theory as the foundation, the image is smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale, based on which a pre-segmentation is conducted. Novel regional features are extracted from the pre-segmented blob candidates and fed to a Variational Bayesian Gaussian Mixture Model (VBGMM) for post-pruning. Sixteen cell histology images and two hundred cell fluorescent images are tested to demonstrate the performance of HLoG. Next, as an extension, a Hessian-based Difference of Gaussians (HDoG) detector is proposed, which is capable of identifying small blobs in 3D images. Specifically, kidney glomeruli segmentation from 3D MRI (6 rats, 3 humans) is investigated. The experimental results show that HDoG has the potential to automatically detect glomeruli, enabling new measurements of renal microstructure and pathology in preclinical and clinical studies. Realizing that computation time is a key factor impacting clinical adoption, the last phase of this research investigates data reduction techniques for the VBGMM in HDoG to handle large-scale datasets. A new coreset algorithm is developed for variational Bayesian mixture models. Using the same MRI dataset, it is observed that the four-phase system with coreset-VBGMM achieves similar performance to using the full dataset but is about 20 times faster.
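
A heavily simplified 2D sketch of the HLoG pipeline is given below for orientation: LoG smoothing, a pixelwise Hessian test for candidate regions, and VBGMM pruning on simple regional features. The single fixed scale, the Hessian sign test, and the two features used are illustrative assumptions, not the dissertation's exact choices.

```python
# Minimal sketch (simplified from the four-phase HLoG system described above).
import numpy as np
from scipy.ndimage import gaussian_laplace, label
from sklearn.mixture import BayesianGaussianMixture

def detect_blobs(image, sigma=2.0):
    log_img = gaussian_laplace(image.astype(float), sigma=sigma)
    gy, gx = np.gradient(log_img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    det_h = hxx * hyy - hxy * hyx
    trace_h = hxx + hyy
    # Candidate blob pixels: a simple (illustrative) Hessian sign test.
    candidate_mask = (det_h > 0) & (trace_h > 0)
    labeled, n = label(candidate_mask)
    feats = []
    for i in range(1, n + 1):
        region = labeled == i
        feats.append([region.sum(), log_img[region].mean()])  # size, response
    if len(feats) < 2:
        return labeled, np.array([])
    vbgmm = BayesianGaussianMixture(n_components=2, random_state=0)
    clusters = vbgmm.fit_predict(np.array(feats))
    return labeled, clusters   # one cluster ~ true blobs, the other ~ noise
```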
Date Created
2015

A new machine learning based approach to NASA's propulsion engine diagnostic benchmark problem

Description
The gas turbine engine for aircraft propulsion represents one of the most physics-complex and safety-critical systems in the world. Its failure diagnosis is challenging due to the complexity of the system model, the difficulty involved in practical testing, and the infeasibility of creating homogeneous diagnostic performance evaluation criteria for the diverse engine makes.

NASA has designed and publicized a standard benchmark problem for propulsion engine gas path diagnostics that enables comparisons among different engine diagnostic approaches. Some traditional model-based approaches, as well as novel purely data-driven approaches such as machine learning, have been applied to this problem.

This study focuses on a different machine learning approach to the diagnostic problem. Some of the most common machine learning techniques, such as the support vector machine, multi-layer perceptron, and self-organizing map, are used to help gain insight into the different engine failure modes from the perspective of big data. They are organically integrated to achieve good performance based on a good understanding of the complex dataset.

The study presents a new hierarchical machine learning structure to enhance classification accuracy in NASA's engine diagnostic benchmark problem. The designed hierarchical structure produces an average diagnostic accuracy of 73.6%, which outperforms comparable studies that were most recently published.
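
The sketch below illustrates the general idea of a two-level hierarchical classifier of the kind described above; the specific models (an SVM at the top, per-category MLP experts), their hyperparameters, and the two label levels are illustrative assumptions, not the study's exact architecture.

```python
# Minimal sketch of a two-level hierarchical classifier: a first-stage model
# separates broad health categories, and a specialized second-stage model
# discriminates failure modes within each category.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def train_hierarchy(X, y_category, y_mode):
    top = SVC(kernel="rbf", gamma="scale").fit(X, y_category)
    experts = {}
    for cat in np.unique(y_category):
        idx = y_category == cat
        experts[cat] = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                     random_state=0).fit(X[idx], y_mode[idx])
    return top, experts

def predict_hierarchy(top, experts, X):
    cats = top.predict(X)
    return np.array([experts[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(cats, X)])
```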
Date Created
2015

An improved framework for design concept generation based on experiential and intuitive methods

Description
The conceptual design stage plays a critical role in product development. However, few systematic methods and tools exist to support conceptual design. The long-term aim of this project is to develop a tool for facilitating holistic ideation for conceptual design. This research is a continuation of past efforts in the ASU Design Automation Lab. In past research, an interactive software test bed (Holistic Ideation Tool, version 1) was developed to explore logical ideation methods. Ideation states were identified and ideation strategies were developed to overcome common ideation blocks. The next version (version 2) of the holistic ideation tool added the Cascading Evolutionary Morphological Chart (CEMC) framework and intuitive ideation strategies (reframing, restructuring, random connection, and forced connection).

Despite these remarkable contributions, there exist shortcomings in the previous versions (version 1 and version 2) of the holistic ideation tool. First, there is a need to add new ideation methods to the holistic ideation tool. Second, the organizational framework provided by the previous versions needs to be improved, and a holistic approach needs to be devised instead of separate logical or intuitive approaches. Therefore, the main objective of this thesis is to make these improvements and to resolve the technical issues involved in their implementation.

Towards this objective, a new web-based holistic ideation tool (version 3) has been created. The new tool adds and integrates knowledge bases of mechanisms and Components Off-The-Shelf (COTS) into the logical ideation methods. Additionally, an improved CEMC framework has been devised for organizing ideas efficiently. Furthermore, the usability of the tool has been improved by designing and implementing a new, more user-friendly graphical user interface (GUI). It is hoped that these new features will lead to a platform for designers to not only generate creative ideas but also effectively organize and store them in the conceptual design stage. By placing it on the web for public use, the test bed has the potential to be used for research on the ideation process by effectively collecting large amounts of data from designers.
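
As a small illustration of the general idea behind a morphological chart (not the tool's actual CEMC data model), the sketch below represents sub-functions mapped to candidate solution principles and enumerates design concepts as cross-combinations. All function and solution names are made up for illustration.

```python
# Minimal sketch: a morphological chart as a dict of sub-function -> candidate
# solutions, with design concepts enumerated as combinations across functions.
from itertools import product

chart = {
    "convert energy": ["electric motor", "IC engine"],
    "transmit motion": ["gear train", "belt drive", "chain"],
    "control speed": ["throttle", "variable-frequency drive"],
}

concepts = [dict(zip(chart, combo)) for combo in product(*chart.values())]
print(len(concepts), "candidate concepts;", concepts[0])
```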
Date Created
2014

A statistical approach to solar photovoltaic module lifetime prediction

Description
The main objective of this research is to develop an approach to PV module lifetime prediction. In doing so, the aim is to move from empirical generalizations to a formal predictive science based on data-driven case studies of crystalline silicon PV systems. The evaluation of PV systems aged 5 to 30 years results in a systematic predictive capability that is absent today. The warranty period provided by manufacturers typically ranges from 20 to 25 years for crystalline silicon modules. The end of lifetime (for example, the time to degrade by 20% from rated power) of PV modules is usually calculated using a simple linear extrapolation based on the annual field degradation rate (say, a 0.8% drop in power output per year). It has been 26 years since systematic studies on solar PV module lifetime prediction were undertaken as part of the 11-year Flat-Plate Solar Array (FSA) project of the Jet Propulsion Laboratory (JPL), funded by DOE. Since then, PV modules have gone through significant changes in construction materials and design, making most of the field data obsolete, though the effect of field stressors on the old designs/materials is still valuable to understand. Efforts have been made to adapt some of the techniques developed to current technologies, but they are too often limited in scope and too reliant on empirical generalizations of previous results. Some systematic approaches have been proposed based on accelerated testing, but few or no experimental studies have followed. Consequently, the industry does not exactly know today how to test modules for a 20-30 year lifetime.
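
The simple linear extrapolation mentioned above can be made concrete with the numbers given in the text: at a 0.8% annual drop in power output, a 20% degradation threshold is reached in 20 / 0.8 = 25 years.

```python
# Worked example of the linear end-of-life extrapolation described above.
def linear_end_of_life(annual_degradation_pct, eol_drop_pct=20.0):
    """Years until cumulative degradation reaches the end-of-life threshold."""
    return eol_drop_pct / annual_degradation_pct

print(linear_end_of_life(0.8))   # 25.0 years
```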

This research study focuses on the behavior of crystalline silicon PV module technology in the dry and hot climatic conditions of Tempe/Phoenix, Arizona. A three-phase approach was developed: (1) a quantitative failure modes, effects, and criticality analysis (FMECA) was developed for prioritizing failure modes or mechanisms in a given environment; (2) a time-series approach was used to model the environmental stress variables involved and prioritize their effect on the power output drop; and (3) a procedure for developing a prediction model was proposed for the specific climatic condition based on accelerated degradation testing.
Date Created
2014

Applied meta-analysis of lead-free solder reliability

Description
This thesis presents a meta-analysis of lead-free solder reliability. Qualitative analyses of the failure modes of lead-free solder under different stress tests, including the drop test, bend test, thermal test and vibration test, are discussed. The main cause of failure of lead-free solder is fatigue cracking, and the speed of propagation of the initial crack can differ across test conditions and solder materials. A quantitative analysis of the fatigue behavior of SAC lead-free solder under a thermal preconditioning process is conducted. This thesis presents a method for predicting the failure life of solder alloys by building a Weibull regression model. The failure life of solder on a circuit board is assumed to be Weibull distributed. Different materials and test conditions can affect the distribution by changing the shape and scale parameters of the Weibull distribution. The method models the regression of these parameters with the different test conditions as predictors, based on Bayesian inference concepts. In the process of building the regression models, prior distributions are generated according to previous studies, and Markov Chain Monte Carlo (MCMC) is used under the WinBUGS environment.
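
To illustrate the structure of a Weibull regression of this kind, the sketch below lets test conditions enter through the scale parameter while the shape parameter is shared, and fits the parameters by maximum likelihood on simulated data. This is an illustrative frequentist stand-in: the thesis instead places priors on the parameters and samples the posterior with MCMC in WinBUGS.

```python
# Minimal sketch (illustrative, not the thesis's WinBUGS model): Weibull
# regression with scale eta = exp(X @ beta) and a shared shape parameter k.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(params, X, t):
    k, beta = params[0], params[1:]
    if k <= 0:
        return np.inf
    eta = np.exp(X @ beta)                       # condition-dependent scale
    return -np.sum(weibull_min.logpdf(t, c=k, scale=eta))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.integers(0, 2, 200)])  # intercept + test condition
true_eta = np.exp(X @ np.array([6.0, -0.5]))
t = weibull_min.rvs(c=2.0, scale=true_eta, random_state=0)    # simulated failure lives
fit = minimize(neg_log_lik, x0=np.array([1.0, 5.0, 0.0]), args=(X, t),
               method="Nelder-Mead")
print(fit.x)   # approx [shape, beta0, beta1]
```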
Date Created
2014

A model fusion based framework for imbalanced classification problem with noisy dataset

Description
Data imbalance and data noise often coexist in real-world datasets. Data imbalance affects the learning classifier by degrading its recognition power on the minority class, while data noise affects the learning classifier by providing inaccurate information and thus misleading it. Because of these differences, data imbalance and data noise have been treated separately in the data mining field. Yet such an approach ignores their mutual effects and, as a result, may lead to new problems. A desirable solution is to tackle these two issues jointly. Noting the complementary nature of generative and discriminative models, this research proposes a unified model fusion based framework to handle imbalanced classification with noisy datasets.

The phase I study focuses on the imbalanced classification problem. A generative classifier, the Gaussian Mixture Model (GMM), is studied, which can learn the distribution of the imbalanced data to improve the discrimination power on the imbalanced classes. By fusing this knowledge into a cost-sensitive SVM (cSVM), a CSG method is proposed. Experimental results show the effectiveness of CSG in dealing with imbalanced classification problems.
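
One simple way to realize this kind of generative/discriminative fusion is sketched below: per-class GMM log-likelihoods are appended as features to a cost-sensitive SVM whose class weights penalize minority-class errors more heavily. This is an illustrative variant under assumed 0/1 labels, not necessarily the thesis's exact CSG formulation.

```python
# Minimal sketch of fusing a generative GMM with a cost-sensitive SVM
# (illustrative; assumes binary labels 0 = majority, 1 = minority).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fit_csg_like(X, y, n_components=2, minority_weight=10.0):
    gmms = {c: GaussianMixture(n_components=n_components, random_state=0)
                .fit(X[y == c]) for c in np.unique(y)}
    loglik = np.column_stack([g.score_samples(X) for g in gmms.values()])
    svm = SVC(class_weight={0: 1.0, 1: minority_weight})
    svm.fit(np.hstack([X, loglik]), y)     # original features + GMM log-likelihoods
    return gmms, svm

def predict_csg_like(gmms, svm, X):
    loglik = np.column_stack([g.score_samples(X) for g in gmms.values()])
    return svm.predict(np.hstack([X, loglik]))
```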

The phase II study expands the research scope to include noisy datasets in the imbalanced classification problem. A model fusion based framework, K Nearest Gaussian (KNG), is proposed. KNG employs a generative modeling method, the GMM, to model the training data as Gaussian mixtures and form adjustable confidence regions which are less sensitive to data imbalance and noise. Motivated by the K-nearest-neighbor algorithm, the neighboring Gaussians are used to classify the testing instances. Experimental results show the KNG method greatly outperforms traditional classification methods in dealing with imbalanced classification problems with noisy datasets.
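
A highly simplified reading of the neighboring-Gaussian idea is sketched below: fit a GMM per class, then label a test point by majority vote among the K mixture components closest to it. The Euclidean distance to component means and the voting rule are illustrative choices, not the thesis's exact definition.

```python
# Minimal sketch of classifying by nearby Gaussian components (simplified;
# assumes non-negative integer class labels for np.bincount voting).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(X, y, n_components=3):
    return {c: GaussianMixture(n_components=n_components, random_state=0)
                .fit(X[y == c]) for c in np.unique(y)}

def predict_kng_like(gmms, X, k=3):
    centers, labels = [], []
    for c, g in gmms.items():
        centers.append(g.means_)
        labels.extend([c] * g.n_components)
    centers, labels = np.vstack(centers), np.array(labels)
    preds = []
    for x in X:
        nearest = np.argsort(np.linalg.norm(centers - x, axis=1))[:k]
        preds.append(np.bincount(labels[nearest]).argmax())
    return np.array(preds)
```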

The phase III study addresses the issues of feature selection and parameter tuning for the KNG algorithm. To further improve the performance of KNG, a Particle Swarm Optimization based method (PSO-KNG) is proposed. PSO-KNG formulates the model parameters and data features into the same particle vector and can thus search for the best feature and parameter combination jointly. The experimental results show that PSO can greatly improve the performance of KNG with better accuracy and much lower computational cost.
Date Created
2014