A Bayesian network approach to early reliability assessment of complex systems

Description
Bayesian networks are powerful tools in system reliability assessment due to their flexibility in modeling the reliability structure of complex systems. This dissertation develops Bayesian network models for system reliability analysis through the use of Bayesian inference techniques.

Bayesian networks generalize fault trees by allowing components and subsystems to be related through conditional probabilities instead of deterministic relationships; they therefore offer analytical advantages when the failure structure is not well understood, especially during the product design stage. Tackling this problem requires auxiliary information such as reliability data from similar products and domain expertise. For this purpose, a Bayesian network approach is proposed to incorporate data from functional analysis and parent products. Functions with low reliability, and their impact on other functions in the network, are identified so that design changes can be suggested to improve system reliability.
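As a minimal sketch of the conditional-probability idea (with made-up numbers, not the dissertation's model), the reliability of a two-component system can be computed by marginalizing a conditional probability table over the component states:

```python
# Hypothetical two-component system. Unlike a fault tree's deterministic
# AND/OR gates, the CPT relates the system state to the component states
# probabilistically. All numbers below are illustrative assumptions.
cpt = {                      # P(system works | state of c1, state of c2)
    (1, 1): 0.99, (1, 0): 0.40,
    (0, 1): 0.30, (0, 0): 0.01,
}

def system_reliability(p1, p2, cpt):
    """Marginalize the CPT over independent component states."""
    r = 0.0
    for s1 in (0, 1):
        for s2 in (0, 1):
            w1 = p1 if s1 else 1.0 - p1
            w2 = p2 if s2 else 1.0 - p2
            r += cpt[(s1, s2)] * w1 * w2
    return r

r = system_reliability(0.95, 0.90, cpt)   # about 0.898 for these numbers
```

A deterministic series system would be the special case where the CPT entries are 1 for (1, 1) and 0 otherwise, recovering the fault-tree result p1 * p2.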

A complex system does not necessarily have all of its components monitored at the same time, which poses another challenge for reliability assessment. Often a limited number of sensors monitor the states of some components or subsystems, but not all of them. Data collected simultaneously from multiple sensors on the same system are analyzed with a Bayesian network approach, and the conditional probabilities of the network are estimated by combining failure information and expert opinion at both the system and component levels. Several data scenarios with discrete, continuous, and hybrid (both discrete and continuous) data are analyzed, and posterior distributions of the system and component reliability parameters are assessed from the simultaneous data.

Finally, a Bayesian framework is proposed to incorporate and reconcile different sources of prior information, including expert opinions and component-level information, in order to form a prior distribution for the system. Incorporating expert opinion in the form of pseudo-observations substantially simplifies statistical modeling compared with the pooling techniques and supra-Bayesian methods used in the literature for combining prior distributions.
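The pseudo-observation idea can be sketched with a conjugate Beta-binomial update; all counts below are assumed for illustration, and the dissertation's framework is more general:

```python
# Expert opinion encoded as pseudo-observations: a belief of roughly 90%
# reliability held with the weight of 10 hypothetical tests becomes a
# Beta(9, 1) prior. The counts here are illustrative assumptions.
prior_pass, prior_fail = 9.0, 1.0    # expert pseudo-counts
obs_pass, obs_fail = 18, 2           # assumed test data: 18 of 20 passed

# Conjugate update: pseudo-counts and real counts simply add, which is
# why pseudo-observations avoid the machinery of pooling prior densities.
post_a = prior_pass + obs_pass
post_b = prior_fail + obs_fail
post_mean = post_a / (post_a + post_b)
print(post_mean)  # 0.9
```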

The methods proposed are demonstrated with several case studies.
Date Created
2016

Categorical responses in mixture experiments

Description
Mixture experiments are useful when the interest is in determining how changes in the proportions of the experimental components affect the response. This research focuses on the modeling and design of mixture experiments when the response is categorical, namely binary and ordinal. Data from mixture experiments are characterized by perfect collinearity among the experimental components, resulting in model matrices that are singular and inestimable under likelihood estimation procedures. To alleviate these estimation problems, this research proposes reparameterizations of two nonlinear models for ordinal data: the proportional-odds model with a logistic link and the stereotype model. A study involving subjective ordinal responses from a mixture experiment demonstrates that the stereotype model reveals useful information about the relationship between the mixture components and the ordinality of the response, which the proportional-odds model fails to detect.

The second half of this research deals with the construction of exact D-optimal designs for binary and ordinal responses. For both types, the base models fall under the class of Generalized Linear Models (GLMs) with a logistic link. First, the properties of exact D-optimal mixture designs for binary responses are investigated. It is shown that standard mixture designs and designs proposed for normal-theory responses are poor surrogates for the true D-optimal designs. In contrast with D-optimal designs for normal-theory responses, which locate support points at the boundaries of the mixture region, exact D-optimal designs for GLMs tend to locate support points in regions of uncertainty. Alternative D-optimal designs for binary responses with high D-efficiencies are proposed by utilizing information about these regions.
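The "regions of uncertainty" effect can be seen in a toy one-variable logistic example (not a mixture design; the parameter values are assumed): the information weight p(1 - p) favors points where the response probability is near 0.5, so a design pulled in from the boundary can beat an extreme-point design on the D-criterion.

```python
import numpy as np

def logistic_information(xs, beta):
    """Fisher information X' W X for a logistic GLM with an assumed beta."""
    X = np.column_stack([np.ones_like(xs, dtype=float), xs])
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    W = np.diag(p * (1.0 - p))        # logistic weights, largest at p = 0.5
    return X.T @ W @ X

beta = np.array([0.0, 2.0])           # assumed parameter values
extreme = np.array([-1.5, 1.5])       # boundary-style two-point design
moderate = np.array([-0.7, 0.7])      # pulled toward the uncertain region

d_extreme = np.linalg.det(logistic_information(extreme, beta))
d_moderate = np.linalg.det(logistic_information(moderate, beta))
# the moderate design wins on the D-criterion despite having less spread
```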

The Mixture Exchange Algorithm (MEA), a search heuristic tailored to the construction of efficient mixture designs with GLM-type responses, is proposed. MEA introduces a new and efficient updating formula that lessens the computational expense of calculating the D-criterion for multi-categorical response systems, such as ordinal response models. MEA computationally outperforms comparable search heuristics by several orders of magnitude, and its computational expense grows more slowly with problem size. Finally, local and robust D-optimal designs for ordinal-response mixture systems are constructed using MEA, investigated, and shown to have high D-efficiency.
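MEA's updating formula is the dissertation's own contribution, but the classical identity behind such exchange updates is the matrix determinant lemma, which revises the D-criterion after a candidate point is added without refactorizing the information matrix:

```python
import numpy as np

# Matrix determinant lemma: det(M + x x^T) = det(M) * (1 + x^T M^{-1} x).
# Exchange algorithms use it (and its deletion analogue) to update the
# D-criterion cheaply per swap instead of recomputing a full determinant.
M = np.array([[2.0, 0.0], [0.0, 3.0]])   # assumed information matrix
x = np.array([1.0, 1.0])                 # candidate design point to add

direct = np.linalg.det(M + np.outer(x, x))
via_lemma = np.linalg.det(M) * (1.0 + x @ np.linalg.inv(M) @ x)
# both evaluate to 11 for these numbers
```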
Date Created
2016

A pairwise comparison matrix framework for large-scale decision making

Description
A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues that limit its application to large-scale decision problems: (1) the curse of dimensionality, that is, a large number of pairwise comparisons must be elicited from a decision maker (DM); (2) inconsistent preferences; and (3) imprecise preferences, the latter two arising from the limited cognitive power of DMs. This dissertation proposes a PCM framework for large-scale decisions that addresses these limitations in three phases. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets and derive the global weights of the elements from the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, the optimal number of subsets is provided subjectively by the DM and is therefore subject to biases and judgment errors. The second phase proposes a trade-off PCM decomposition methodology that decomposes a PCM into an optimally identified number of subsets. A BIP is proposed to balance (1) the time savings from reducing pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The proposed methodology is applied to the AHP to demonstrate its advantages and is compared to established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology.
A Non-Linear Programming model is then developed to calculate PCM element weights, simultaneously maximizing the DM's preferences and minimizing inconsistency. Comparison experiments using datasets collected from the literature validate the proposed methodology.
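For context, the standard PCM machinery the framework builds on can be sketched with a small Saaty-style example (the judgment values are assumed): weights via the row geometric mean, and consistency via Saaty's consistency ratio.

```python
import numpy as np

# Assumed reciprocal judgment matrix: entry a_ij expresses how strongly
# criterion i is preferred over criterion j on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

gm = A.prod(axis=1) ** (1.0 / A.shape[0])   # row geometric means
w = gm / gm.sum()                           # normalized priority weights

lam = float(np.mean((A @ w) / w))           # approximate lambda_max
n = A.shape[0]
ci = (lam - n) / (n - 1)                    # consistency index
cr = ci / 0.58                              # random index RI = 0.58 for n = 3
# cr < 0.1 is the conventional acceptability threshold
```

A perfectly consistent PCM would give lambda_max = n and cr = 0; eliciting fewer comparisons, as the decomposition phases above do, shrinks both the DM's burden and the opportunity for inconsistency.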
Date Created
2013

A simulation study of Kanban levels for assembly lines and systems

Description
In the entire supply chain, demand planning is one of the crucial aspects of the production planning process; if demand is not estimated accurately, revenue is lost. Past research has shown that forecasting can support the demand planning process for production, but accurate forecasting from historical data is difficult in today's complex, volatile market, and it is not the only factor that influences demand planning: consumers' shifting interests and buying power also influence future demand. Hence, this research focuses on the Just-In-Time (JIT) philosophy, using a pull control strategy implemented with a kanban control system to control inventory flow. Two product structures, a serial product structure and an assembly product structure, are considered. Three methods are used to determine the number of kanbans in a computer-simulated Just-In-Time kanban system: the Toyota Production System model (Method-1), a histogram model (Method-2), and a cost minimization model (Method-3). The simulation model executes the designed scenarios for both product structures, and a test is performed to check the significance of the effects of various factors on system performance. Results from all three methods are compared to indicate which provides the most effective way to determine the number of kanbans under various conditions. The histogram and cost minimization models prove more accurate in calculating the required kanbans across manufacturing conditions; Method-1 fails to adjust the kanbans when the backorder cost increases or when the product structure changes. Between the product structures, the serial product structure proves effective when Method-2 or Method-3 is used to calculate the kanban numbers.
The experimental results also indicate that, for both serial and assembly product structures, a lower container capacity accumulates more backorders in the system, and hence a higher inventory cost, than a higher container capacity.
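The Toyota Production System rule of thumb behind Method-1 is commonly written as N = D * L * (1 + alpha) / C. A sketch with illustrative numbers also shows why it cannot react to a rising backorder cost: no cost term appears in the formula.

```python
import math

def kanban_count(demand_rate, lead_time, safety_factor, container_size):
    """Toyota rule of thumb: N = D * L * (1 + alpha) / C, rounded up.
    There is no cost term, so the count cannot respond to backorder
    cost, consistent with the finding about Method-1 above."""
    return math.ceil(demand_rate * lead_time * (1.0 + safety_factor)
                     / container_size)

# Assumed values: 120 units/day demand, 1-day lead time, 25% safety factor.
print(kanban_count(120, 1.0, 0.25, 10))  # 15 kanbans
print(kanban_count(120, 1.0, 0.25, 6))   # 25 -- smaller containers need more kanbans
```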
Date Created
2012

Mixture-process variable design experiments with control and noise variables within a split-plot structure

Description
In mixture-process variable experiments, it is common for the number of runs to be greater than in mixture-only or process-variable experiments, because parameters must be estimated for the mixture components, the process variables, and the interactions between the two. In some of these experiments there are variables that are hard to change or cannot be controlled under normal operating conditions. These situations often prohibit complete randomization of the experimental runs for practical and economic reasons. Furthermore, the process variables can be categorized into two types: variables that are controllable and directly affect the response, and variables that are uncontrollable and primarily affect the variability of the response. The uncontrollable variables are called noise factors and are assumed controllable in a laboratory environment for the purpose of conducting experiments. A model containing both noise variables and control factors can be used to determine settings of the control factors that make the response "robust" to the variability transmitted from the noise factors. Such experiments can be analyzed with a model for the mean response and a model for the slope of the response within a split-plot structure. When evaluating candidate experimental designs, low prediction variances for both the mean and slope models are desirable. Methods for mixture-process variable designs with noise variables under restricted randomization are demonstrated, and several mixture-process variable designs that are robust to the coefficients of interaction with noise variables are evaluated using fraction of design space plots with respect to their prediction variance properties. Finally, a G-optimal design that minimizes the maximum prediction variance over the entire design region is created using a genetic algorithm.
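The prediction-variance quantities involved can be illustrated on a toy (non-mixture) linear model; the G-criterion is the maximum of this scaled variance over the design region, and a fraction of design space plot summarizes its distribution.

```python
import numpy as np

def scaled_pred_variance(X, x):
    """Scaled prediction variance N * x' (X'X)^{-1} x at a point x."""
    n = X.shape[0]
    return n * x @ np.linalg.inv(X.T @ X) @ x

# Assumed 3-run design for a simple linear model with regressors (1, x).
X = np.array([[1.0, -1.0],
              [1.0,  0.0],
              [1.0,  1.0]])

grid = [np.array([1.0, t]) for t in np.linspace(-1, 1, 21)]
spv = [scaled_pred_variance(X, x) for x in grid]
# The G-criterion is max(spv); for this design the maximum (2.5) occurs
# at the region boundary x = +/- 1, while the center has spv = 1.
```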
Date Created
2010