Projection properties and analysis methods for six to fourteen factor no-confounding designs in 16 runs

Description

During the initial stages of experimentation, there are usually a large number of factors to be investigated. Fractional factorial (2^(k-p)) designs are particularly useful during this initial phase of experimental work. These experiments, often referred to as screening experiments, help reduce the large number of factors to a smaller set. The 16-run regular fractional factorial designs for six, seven, and eight factors are in common usage. These designs allow clear estimation of all main effects when the three-factor and higher-order interactions are negligible, but all two-factor interactions are aliased with each other, making estimation of these effects problematic without additional runs. Alternatively, certain nonregular designs, called no-confounding (NC) designs by Jones and Montgomery (Jones & Montgomery, Alternatives to resolution IV screening designs in 16 runs, 2010), partially confound the main effects with the two-factor interactions but do not completely confound any two-factor interactions with each other. The NC designs are useful for independently estimating main effects and two-factor interactions without additional runs. While several methods have been suggested for the analysis of data from nonregular designs, stepwise regression is familiar to practitioners, available in commercial software, and widely used in practice. However, given that an NC design has been run, the performance of stepwise regression for model selection is unknown. In this dissertation I present a comprehensive simulation study evaluating stepwise regression for analyzing both regular fractional factorial and NC designs. Next, the projection properties of the six-, seven-, and eight-factor NC designs are studied; these properties support the development of methods for analyzing these designs. Lastly, the designs and the projection properties of the nine- to fourteen-factor NC designs onto three and four factors are presented, along with recommendations on analysis methods for these designs.
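As an illustration of the model-selection procedure evaluated in the simulation study, the sketch below implements a basic forward stepwise regression in Python (statsmodels). The p-value entry threshold, the randomly generated ±1 design matrix, and the simulated response are assumptions for demonstration only; they are not an actual no-confounding design or the dissertation's simulation protocol.

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, alpha_enter=0.05):
    """Forward stepwise selection: repeatedly add the candidate column with
    the smallest p-value until no candidate enters below alpha_enter."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            cols = selected + [j]
            model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
            pvals[j] = model.pvalues[-1]   # p-value of the newest term (last column)
        best = min(pvals, key=pvals.get)
        if pvals[best] > alpha_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Illustrative 16-run, six-factor coded (-1/+1) matrix; a real NC design matrix
# (main effects plus two-factor interaction columns) would be screened the same way.
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(16, 6))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=16)
print(forward_stepwise(X, y))
```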
Date Created
2012

A simulation study of Kanban levels for assembly lines and systems

Description

In the entire supply chain, demand planning is one of the crucial aspects of the production planning process: if demand is not estimated accurately, revenue is lost. Past research has shown that forecasting can be used to help the demand planning process for production. However, accurate forecasting from historical data is difficult in today's complex, volatile market, and forecasting is not the only factor that influences demand planning; shifting consumer interest and buying power also influence future demand. Hence, this research focuses on the Just-In-Time (JIT) philosophy, using a pull control strategy implemented with a Kanban control system to control inventory flow. Two different product structures, a serial product structure and an assembly product structure, are considered. Three different methods were used to find the number of kanbans in a computer-simulated Just-In-Time Kanban system: the Toyota Production System model (Method 1), a histogram model (Method 2), and a cost minimization model (Method 3). The simulation model was built to execute the designed scenarios for both the serial and assembly product structures. A test was performed to check the significance of the effects of various factors on system performance. Results of all three methods were collected and compared to indicate which method provides the most effective way to determine the number of kanbans under various conditions. It was inferred that the histogram and cost minimization models are more accurate in calculating the required kanbans for various manufacturing conditions; Method 1 fails to adjust the kanbans when the backorder cost increases or when the product structure changes. Among the product structures, the serial product structure proved to be effective when Method 2 or Method 3 is used to calculate the number of kanbans for the system. The experimental results also indicated that, for both serial and assembly product structures, a lower container capacity accumulates more backorders in the system, and therefore higher inventory cost, than a higher container capacity.
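For reference, the Toyota Production System heuristic (Method 1 above) is commonly written as demand during the replenishment lead time, inflated by a safety factor, divided by the container capacity. The sketch below assumes illustrative values for the demand rate, lead time, capacity, and safety factor; it is not the dissertation's simulation model.

```python
import math

def tps_kanban_count(demand_rate, lead_time, container_capacity, safety_factor=0.1):
    """Toyota Production System heuristic:
    kanbans = demand during lead time * (1 + safety factor) / container capacity,
    rounded up so the loop never starves."""
    return math.ceil(demand_rate * lead_time * (1 + safety_factor) / container_capacity)

# Illustrative numbers: 120 parts/hour demand, 0.5 hour replenishment lead time,
# containers of 10 parts, 10% safety allowance.
print(tps_kanban_count(demand_rate=120, lead_time=0.5, container_capacity=10))  # -> 7
```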
Date Created
2012

Adaptive operation decisions for a system of smart buildings

Description

Buildings (approximately half commercial and half residential) consume over 70% of the electricity among all consumption units in the United States. Buildings are also responsible for approximately 40% of CO2 emissions, which is more than any other industry sector. As a result, the smart building initiative, which aims not only to manage electrical consumption efficiently but also to reduce the damaging effect of greenhouse gases on the environment, has been launched. Another important technology being promoted by government agencies is the smart grid, which manages energy usage across a wide range of buildings in an effort to reduce cost and increase reliability and transparency. While a great deal of effort has been devoted to these two initiatives, either by exploring smart grid designs or by developing technologies for smart buildings, research on how smart buildings and the smart grid coordinate to use energy more efficiently is currently lacking. In this dissertation, a "system-of-systems" approach is employed to develop an integrated building model that consists of a number of buildings (a building cluster) interacting with the smart grid. The buildings can function as both energy consumption units and energy generation/storage units. Memetic Algorithm (MA) and Particle Swarm Optimization (PSO) based decision frameworks are developed for building operation decisions. In addition, the Particle Filter (PF) is explored as a means of fusing online sensor and meter data so that adaptive decisions can be made in response to the dynamic environment. The dissertation is divided into three inter-connected research components. First, an integrated building energy model, including the consumption, storage, and generation sub-systems of the building cluster, is developed. Then a bi-level Memetic Algorithm (MA) based decentralized decision framework is developed to identify Pareto optimal operation strategies for the building cluster. The Pareto solutions not only enable multi-dimensional tradeoff analysis, but also provide valuable insight for determining pricing mechanisms and power grid capacity. Second, a multi-objective PSO based decision framework is developed to reduce the computational effort of the MA based framework without sacrificing accuracy. With the improved performance, the decision time scale can be refined to support hourly operation decisions. Finally, by integrating the multi-objective PSO based decision framework with the PF, an adaptive framework is developed for adaptive operation decisions for the smart building cluster. The adaptive framework not only enables me to develop a high fidelity decision model but also enables the building cluster to respond to the dynamics and uncertainties inherent in the system.
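A minimal sketch of the particle swarm update at the core of a PSO-based decision framework is given below; the quadratic cost function, swarm size, and inertia/acceleration coefficients are illustrative assumptions, not the building cluster objectives or the multi-objective version developed in the dissertation.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization: each particle tracks its personal best,
    and the swarm tracks a global best that pulls all particles."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_cost)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        costs = np.apply_along_axis(cost, 1, x)
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        g = pbest[np.argmin(pbest_cost)]
    return g, pbest_cost.min()

# Stand-in objective, e.g. deviation of hourly setpoints from a target schedule.
best, best_cost = pso_minimize(lambda z: np.sum((z - 0.3) ** 2), dim=4)
print(best, best_cost)
```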
Date Created
2012

Harm during hospitalizations for heart failure: adverse events as a reliability measure of hospital policies and procedures

Description

For more than twenty years, clinical researchers have been publishing data regarding the incidence and risk of adverse events (AEs) incurred during hospitalizations. Hospitals have standard operating policies and procedures (SOPP) to protect patients from AEs. The specifics of AEs (rates, SOPP failures, timing, and risk factors) during heart failure (HF) hospitalizations are unknown. There were 1,722 patients discharged with a primary diagnosis of HF from an academic hospital between January 2005 and December 2007. Three hundred eighty-one patients experienced 566 AEs, classified into four categories: medication (43.9%), infection (18.9%), patient care (26.3%), or procedural (10.9%). Three distinct analyses were performed: 1) the patient's perspective of SOPP reliability, including cumulative distribution and hazard functions of time to AEs; 2) a Cox proportional hazards model to determine independent patient-specific risk factors for AEs; and 3) the hospital administration's perspective of SOPP reliability over the three years of the study, including cumulative distribution and hazard functions of time between AEs and moving range statistical process control (SPC) charts for days between failures of each type. This is the first study, to our knowledge, to consider the reliability of SOPP from both the patient's and the hospital administration's perspectives. AE rates in hospitalized patients are similar to other recently published reports and did not improve during the study period. Operations research methodologies will be necessary to improve the reliability of care delivered to hospitalized patients.
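For the SPC portion of the third analysis, the sketch below shows the standard individuals/moving-range calculation for days between failures, using the usual 2.66 and 3.267 chart constants for moving ranges of size two; the event spacings are hypothetical, and the actual study data are not reproduced here.

```python
import numpy as np

def individuals_mr_limits(days_between):
    """Individuals (X) and moving range (MR) chart limits for days between
    adverse events, based on the average moving range of consecutive values."""
    x = np.asarray(days_between, dtype=float)
    mr = np.abs(np.diff(x))
    mr_bar, x_bar = mr.mean(), x.mean()
    return {
        "X chart (LCL, CL, UCL)": (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar),
        "MR chart (LCL, CL, UCL)": (0.0, mr_bar, 3.267 * mr_bar),
    }

# Hypothetical days between consecutive medication-related AEs.
print(individuals_mr_limits([3, 5, 2, 8, 4, 6, 1, 7]))
```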
Date Created
2012

Single machine scheduling: comparison of MIP formulations and heuristics for interfering job sets

Description

This research studies the computational performance of four different mixed integer programming (MIP) formulations for single machine scheduling problems of varying complexity. These formulations are based on (1) start and completion time variables, (2) time index variables, (3) linear ordering variables, and (4) assignment and positional date variables. The objective functions studied are total weighted completion time, maximum lateness, number of tardy jobs, and total weighted tardiness. Based on the computational results, discussion and recommendations are given on which MIP formulation might work best for these problems. The performance of these formulations depends strongly on the objective function, the number of jobs, and the sum of the processing times of all the jobs. Two sets of inequalities are presented that can be used to improve the performance of the formulation with assignment and positional date variables. Further, this research is extended to single machine bicriteria scheduling problems in which jobs belong to one of two disjoint sets, each set having its own performance measure. These problems have been referred to as interfering job sets in the scheduling literature and have also been called multi-agent scheduling, where each agent's objective function is to be minimized. In the first single machine interfering problem (P1), the criteria of minimizing total completion time and the number of tardy jobs for the two sets of jobs are studied. This problem is NP-hard, and a Forward SPT-EDD heuristic is presented that attempts to generate the set of non-dominated solutions. The computational efficiency of the heuristic is compared against the pseudo-polynomial algorithm proposed by Ng et al. [2006]. In the second single machine interfering job sets problem (P2), the criteria of minimizing total weighted completion time and maximum lateness are studied. This is an established NP-hard problem for which a Forward WSPT-EDD heuristic is presented that attempts to generate the set of supported points, and the solution quality is compared with that of the MIP formulations. For both problems, all jobs are available at time zero and preemption is not allowed.
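As a small illustration of one MIP formulation family, the sketch below models total weighted completion time on a single machine with completion-time variables and disjunctive (big-M) sequencing constraints, in the spirit of formulation (1); the job data are made up, and it is written with PuLP and the bundled CBC solver rather than any solver used in this research.

```python
import pulp

# Hypothetical jobs: processing times p and weights w.
p = {1: 3, 2: 5, 3: 2, 4: 4}
w = {1: 2, 2: 1, 3: 3, 4: 2}
jobs = list(p)
M = sum(p.values())  # scheduling horizon used as big-M

m = pulp.LpProblem("single_machine_TWC", pulp.LpMinimize)
C = pulp.LpVariable.dicts("C", jobs, lowBound=0)        # completion time of each job
y = pulp.LpVariable.dicts("y", (jobs, jobs), cat="Binary")  # y[i][j]=1 if i precedes j

m += pulp.lpSum(w[j] * C[j] for j in jobs)              # total weighted completion time

for j in jobs:
    m += C[j] >= p[j]
for i in jobs:
    for j in jobs:
        if i < j:
            # Disjunctive pair: either i precedes j or j precedes i.
            m += C[j] >= C[i] + p[j] - M * (1 - y[i][j])
            m += C[i] >= C[j] + p[i] - M * y[i][j]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({j: C[j].value() for j in jobs}, pulp.value(m.objective))
```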
Date Created
2012

Ethernet passive optical network dynamic bandwidth allocation study

Description

A Fiber-Wireless (FiWi) network is a future network configuration that uses optical fiber as the backbone transmission medium and provides a wireless network for the end user. Our study focuses on Dynamic Bandwidth Allocation (DBA) algorithms for EPON upstream transmission. DBA, if designed properly, can dramatically improve packet transmission delay and overall bandwidth utilization. With new DBA components emerging in the research literature, a comprehensive study of DBA is conducted in this thesis, adding Double Phase Polling coupled with a novel Limited with Share credits Excess distribution method. By conducting a series of simulations of DBAs built from different components, we found that grant sizing has the strongest impact on average packet delay, while grant scheduling also has a significant impact on average packet delay. Grant scheduling has the strongest impact on the stability limit, or maximum achievable channel utilization, whereas grant sizing has only a modest impact on the stability limit. The SPD grant scheduling policy in the Double Phase Polling scheduling framework, coupled with Limited with Share credits Excess distribution grant sizing, produced both the lowest average packet delay and the highest stability limit.
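The sketch below illustrates the general idea behind limited grant sizing with excess distribution: each ONU's grant is capped, and the unused allowance of lightly loaded ONUs is shared among the heavily loaded ones. The cap, the queue reports, and the proportional sharing rule are simplifying assumptions, not the exact Limited with Share credits Excess method studied in this thesis.

```python
def limited_with_excess(requests, max_grant):
    """Cap each ONU's grant at max_grant, pool the unused allowance of
    underloaded ONUs, then share that excess among the overloaded ONUs
    in proportion to their remaining demand."""
    grants = {onu: min(req, max_grant) for onu, req in requests.items()}
    excess = sum(max_grant - g for g in grants.values())
    shortfall = {onu: requests[onu] - grants[onu]
                 for onu in requests if requests[onu] > max_grant}
    total_short = sum(shortfall.values())
    if excess > 0 and total_short > 0:
        for onu, short in shortfall.items():
            grants[onu] += min(short, excess * short / total_short)
    return grants

# Hypothetical queue reports (bytes) from four ONUs with a 15000-byte grant cap.
print(limited_with_excess({"ONU1": 4000, "ONU2": 25000, "ONU3": 9000, "ONU4": 30000},
                          max_grant=15000))
```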
Date Created
2011

Semiconductor yield modeling using generalized linear models

Description

Yield is a key process performance characteristic in the capital-intensive semiconductor fabrication process. In an industry where machines cost millions of dollars and cycle times are several months, predicting and optimizing yield are critical to process improvement, customer satisfaction, and financial success. Semiconductor yield modeling is essential to identifying processing issues, improving quality, and meeting customer demand in the industry. However, the complicated fabrication process, the massive amount of data collected, and the number of models available make yield modeling a complex and challenging task. This work presents modeling strategies to forecast yield using generalized linear models (GLMs) based on defect metrology data. The research is divided into three main parts. First, the data integration and aggregation necessary for model building are described, and GLMs are constructed for yield forecasting. This technique yields results at both the die and the wafer levels, outperforms existing models found in the literature in terms of prediction error, and identifies significant factors that can drive process improvement. This method also allows the nested structure of the process to be considered in the model, improving predictive capabilities and violating fewer assumptions. To account for the random sampling typically used in fabrication, the work is extended by using generalized linear mixed models (GLMMs) and a larger dataset to show the differences between batch-specific and population-averaged models in this application and how they compare to GLMs. These results show some additional improvements in forecasting ability under certain conditions and highlight the differences between the significant effects identified in the GLM and GLMM models. The effects of link functions and sample size are also examined at the die and wafer levels. The third part of this research describes a methodology for integrating classification and regression trees (CART) with GLMs. This technique uses the terminal nodes identified in the classification tree to add predictors to a GLM. This method enables the model to consider important interaction terms in a simpler way than with the GLM alone, and provides valuable insight into the fabrication process through the combination of the tree structure and the statistical analysis of the GLM.
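A minimal sketch of a binomial GLM for wafer-level yield as a function of defect metrology counts is shown below using statsmodels; the defect counts, die counts, and coefficients are simulated for illustration and do not reflect the dissertation's dataset or model structure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_wafers, dies_per_wafer = 200, 500

# Simulated defect metrology predictors (counts per wafer at two inspection layers).
defects = rng.poisson(lam=[4.0, 2.0], size=(n_wafers, 2))
X = sm.add_constant(defects)

# Simulated yield: logit of die-pass probability decreases with defect counts.
eta = 2.0 - 0.20 * defects[:, 0] - 0.35 * defects[:, 1]
good = rng.binomial(dies_per_wafer, 1 / (1 + np.exp(-eta)))
y = np.column_stack([good, dies_per_wafer - good])   # (successes, failures) per wafer

# Binomial GLM (default logit link) for wafer-level yield.
model = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(model.summary())
print("Predicted yield at 3 and 1 defects:",
      model.predict(sm.add_constant(np.array([[3.0, 1.0]]), has_constant="add")))
```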
Date Created
2011

Profile monitoring: control chart schemes for monitoring linear and low order polynomial profiles

Description

The emergence of new technologies, as well as a fresh look at analyzing existing processes, has given rise to a new type of response characteristic known as a profile. Profiles are useful when a quality variable is functionally dependent on one or more explanatory, or independent, variables. So, instead of observing a single measurement on each unit or product, a set of values is obtained over a range which, when plotted, takes the shape of a curve. Traditional multivariate monitoring schemes are inadequate for monitoring profiles because of high dimensionality and poor use of the information stored in functional form, which leads to very large variance-covariance matrices. Profile monitoring has become an important area of study in statistical process control and is being actively addressed by researchers across the globe. This research explores the area in three parts. First, a comparative analysis is conducted of two linear profile-monitoring techniques based on the probability of a false alarm and the average run length (ARL) under shifts in the model parameters. The two techniques studied are a control chart based on the classical calibration statistic and a control chart based on the parameters of a linear model. The research demonstrates that a scheme based on a parametric model of the profile is a more efficient monitoring scheme than one based on monitoring only the individual features of the profile. Second, a likelihood ratio based changepoint control chart is proposed for detecting a sustained step shift in low order polynomial profiles. The test statistic is plotted on a Shewhart-like chart with control limits derived from asymptotic distribution theory. The statistic is factored to reflect the variation due to each parameter, to aid in interpreting an out-of-control signal. Finally, the research considers the robust parameter design of profiles, also referred to as signal-response systems. Such experiments are often necessary for understanding and reducing the common cause variation in systems. A split-plot approach is proposed to analyze the profiles. It is demonstrated that explicit modeling of variance components using a generalized linear mixed models approach gives more precise point estimates and tighter confidence intervals.
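The sketch below illustrates parameter-based monitoring of a linear profile: each observed profile is reduced to its fitted intercept and slope, which are then compared against Shewhart-style limits. The in-control parameter values, their standard deviations, and the simulated slope shift are illustrative assumptions, not the charts or limits derived in this research.

```python
import numpy as np

def profile_parameters(x, y):
    """Least-squares [intercept, slope] for one observed profile."""
    return np.polynomial.polynomial.polyfit(x, y, 1)

def parameter_chart(profiles, x, mu, sigma, k=3.0):
    """Flag profiles whose fitted intercept or slope falls outside
    mu +/- k*sigma (assumed in-control parameter means and standard deviations)."""
    signals = []
    for t, y in enumerate(profiles):
        b = profile_parameters(x, y)
        if np.any(np.abs(b - mu) > k * sigma):
            signals.append(t)
    return signals

# Illustrative in-control model y = 10 + 2x + noise, with a slope shift at profile 30.
rng = np.random.default_rng(3)
x = np.linspace(0, 5, 10)
profiles = [10 + 2.0 * x + rng.normal(0, 0.3, x.size) for _ in range(30)]
profiles += [10 + 2.6 * x + rng.normal(0, 0.3, x.size) for _ in range(10)]
print(parameter_chart(profiles, x, mu=np.array([10.0, 2.0]), sigma=np.array([0.2, 0.06])))
```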
Date Created
2010

Multivariate charts for multivariate Poisson-distributed data

Description

There has been much research involving the simultaneous monitoring of several correlated quality characteristics that relies on the assumptions of multivariate normality and independence. In real-world applications, these assumptions are not always met, particularly when small counts are of interest. In general, the use of a normal approximation to the Poisson distribution seems to be justified when the Poisson means are large enough. A new two-sided Multivariate Poisson Exponentially Weighted Moving Average (MPEWMA) control chart is proposed, and its control limits are derived directly from the multivariate Poisson distribution. The MPEWMA and the conventional Multivariate Exponentially Weighted Moving Average (MEWMA) charts are evaluated using the multivariate Poisson framework. The MPEWMA chart outperforms the MEWMA with normal-theory limits in terms of the in-control average run length. The two-sided MPEWMA is then extended to a one-sided version, which is useful for detecting an increase in the count means; the results of the comparison with the one-sided MEWMA chart are quite similar to the two-sided case. The implementation of the MPEWMA scheme for multiple count data is illustrated with step-by-step guidelines and several examples. In addition, the method is compared to other model-based control charts that monitor residual values, such as regression adjustment. The MPEWMA scheme shows better performance in detecting mean shifts in count data when positive correlation exists among all variables.
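For orientation, the sketch below applies the standard multivariate EWMA recursion to correlated count vectors and flags points exceeding an assumed control limit h; the smoothing constant, in-control means, covariance matrix, and h are placeholders, and the MPEWMA limits derived from the multivariate Poisson distribution in this work are not reproduced here.

```python
import numpy as np

def mewma_statistics(counts, mu0, sigma0, lam=0.2):
    """EWMA recursion z_t = lam*(x_t - mu0) + (1-lam)*z_{t-1} and the
    corresponding T^2-style statistic using the asymptotic EWMA covariance."""
    sigma_z = (lam / (2 - lam)) * sigma0
    inv = np.linalg.inv(sigma_z)
    z = np.zeros(len(mu0))
    stats = []
    for x in counts:
        z = lam * (np.asarray(x) - mu0) + (1 - lam) * z
        stats.append(float(z @ inv @ z))
    return np.array(stats)

# Illustrative correlated counts (shared Poisson term) with an upward shift after t=25.
rng = np.random.default_rng(11)
base = rng.poisson(3.0, size=(40, 1))
counts = base + rng.poisson([2.0, 1.0], size=(40, 2))   # shared term induces correlation
counts[25:] += rng.poisson(3.0, size=(15, 2))           # shift in both count means
mu0, sigma0 = np.array([5.0, 4.0]), np.array([[5.0, 3.0], [3.0, 4.0]])
stats = mewma_statistics(counts, mu0, sigma0)
print(np.where(stats > 11.0)[0])                        # assumed control limit h = 11
```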
Date Created
2010

Modeling supply chain dynamics with calibrated simulation using data fusion

Description

In today's global market, companies are facing unprecedented levels of uncertainty in supply, demand, and the economic environment. A critical issue for companies seeking to survive increasing competition is monitoring the changing business environment and managing disturbances and changes in real time. In this dissertation, an integrated framework is proposed using simulation and online calibration methods to enable the adaptive management of large-scale complex supply chain systems. The design, implementation, and verification of the integrated approach are studied in this dissertation. The research contributions are two-fold. First, this work enriches symbiotic simulation methodology by proposing a framework of simulation and advanced data fusion methods to improve simulation accuracy. Data fusion techniques optimally calibrate the simulation states/parameters by considering errors in both the simulation models and in the measurements of the real-world system. The data fusion methods (Kalman Filtering, Extended Kalman Filtering, and Ensemble Kalman Filtering) are examined and discussed under varied conditions of system chaotic level, data quality, and data availability. Second, the proposed framework is developed, validated, and demonstrated in 'proof-of-concept' case studies on representative supply chain problems. In the case study of a simplified supply chain system, Kalman Filtering is applied to fuse simulation data and emulation data to effectively improve the accuracy of detecting abnormalities. In the case study of the 'beer game' supply chain model, the system's chaotic level is identified as a key factor influencing simulation performance and the choice of data fusion method. Ensemble Kalman Filtering is found to be more robust than Extended Kalman Filtering in a highly chaotic system. With appropriate tuning, the improvement in simulation accuracy is up to 80% in a chaotic system and 60% in a stable system. In the last study, the integrated framework is applied to adaptive inventory control of a multi-echelon supply chain with non-stationary demand. It is worth pointing out that the framework proposed in this dissertation is useful not only in supply chain management, but also for modeling other complex dynamic systems, such as healthcare delivery systems and energy consumption networks.
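A minimal sketch of the linear Kalman filter predict/update cycle used to fuse a simulation forecast with a noisy measurement is given below; the two-state model, noise covariances, and measurements are illustrative assumptions, and the extended and ensemble variants used in the case studies are not shown.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle: propagate the simulation state estimate,
    then correct it with the real-world measurement z."""
    # Predict (simulation model forecast).
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update (fuse with measurement).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative 2-state example: inventory level and incoming shipment rate,
# with only the inventory level measured.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.1 * np.eye(2), np.array([[2.0]])
x, P = np.array([50.0, 5.0]), np.eye(2)
for z in [56.0, 61.0, 65.0]:
    x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
print(x)
```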
Date Created
2010