DESI - Tesis de Doctorado en Ciencias de la Ingeniería
https://hdl.handle.net/11117/5149

Non-Invasive Statistical Approach to Evaluate Processes Variability Using Fuzzy Process Capability Indices and Fuzzy Individual Control Charts
https://hdl.handle.net/11117/8448 (published 2022-02-01)
Rodríguez-Álvarez, José L.
Statistical thinking and the monitoring of product quality characteristics, whether of variable or attribute type, play a key role in successful quality improvement aimed at reducing process or product variation. For this purpose, control chart techniques and process capability indices are widely used in a variety of manufacturing and service industries to carry out an overall evaluation of process performance. However, because these techniques operate under certain assumptions, their results generally show variation over time, mainly in complex processes in which it is difficult to collect sufficient data on the quality variables, and in processes with uncertainty in the measurements. Therefore, special care must be taken in choosing the appropriate technique so that the evaluation stays close to the reality of the process. This doctoral dissertation first presents alternative fuzzy individual and moving range control charts based on the α-cut fuzzy midrange approach. The proposed method for generating the fuzzy numbers is based on the sigma level of the process and the variation observed in each sample. These fuzzy control charts are thus more flexible, because the span between the upper and lower control limits is wider than in traditional control charts. The second proposal presents an alternative method for estimating process capability indices under a fuzzy approach. This alternative couples predictive modeling with experimental designs, presented together as a non-invasive approach. The application has a double purpose: first, to characterize the process variability, and second, to reduce variability in the quality control variable. When a process performance evaluation based on control charts and capability indices is carried out, data on the quality variables of interest are required.
Generally, such data are recorded by the quality control department or by the measuring instruments that are part of the process. In the proposed method, the data set used to evaluate process capability consists of values predicted by a model trained to reasonable accuracy. This implies that the measurements include the variability of each independent variable affecting the response, in addition to the model error. The method was validated using a real basis weight dataset. The findings showed that the overall process capability indices estimated with the proposed method are closer to the process reality than those obtained with existing traditional and fuzzy methods.
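As a point of reference, the crisp (non-fuzzy) individuals/moving-range control limits and capability indices that the fuzzy variants above generalize can be sketched as follows. This is the standard textbook baseline, not the dissertation's fuzzy method, and the data and specification limits are hypothetical:

```python
import numpy as np

def imr_limits(x):
    """Crisp Shewhart individuals / moving-range control limits.
    Sigma is estimated from the mean moving range with d2 = 1.128
    (the constant for moving ranges of size 2)."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.abs(np.diff(x)).mean()   # mean of consecutive moving ranges
    sigma_hat = mr_bar / 1.128           # within-process sigma estimate
    center = x.mean()
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

def cp_cpk(x, lsl, usl):
    """Crisp process capability indices Cp and Cpk against the
    lower/upper specification limits (lsl, usl)."""
    x = np.asarray(x, dtype=float)
    sigma_hat = np.abs(np.diff(x)).mean() / 1.128
    mu = x.mean()
    cp = (usl - lsl) / (6 * sigma_hat)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma_hat)
    return cp, cpk

# Hypothetical individual measurements and spec limits
lcl, center, ucl = imr_limits([10, 12, 11, 13, 12])
cp, cpk = cp_cpk([10, 12, 11, 13, 12], lsl=5, usl=18)
```

In the dissertation's fuzzy extension, crisp limits like these are replaced by fuzzy numbers built from the sigma level and per-sample variation; in the non-invasive proposal, `x` would hold model-predicted rather than directly measured values.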
Hybrid Service Simulation Model for Circular Economy Implementation
https://hdl.handle.net/11117/8395 (published 2022-11-01)
Guevara-Rivera, Edna L.
The circular economy (CE) principles were created in response to the depletion of natural resources, as a set of guidelines to replace the linear take-use-dispose model of product consumption. The consequences of moving from a linear supply chain to a circular one are difficult to visualize in the long term. Therefore, in some cases, implementing a circular economy simulation tool in the linear processes of small and medium-sized enterprises (SMEs) is essential to test policies before applying them in the real world. This study evaluated service-dominant logic, ecosystem services, system dynamics, and agent-based modeling to design a methodology for a simulation model, implemented in two life-cycle case studies: a food bank and a confectionery factory. In both cases, visits and interviews with stakeholders were conducted to evaluate the simulation model during the development phase. The circular economy indicator prototype (CEIP), whose score was 52% (rated as a "good" product), was used as the circular-maturity measure of the confectionery factory. We used NetLogo software to execute the simulation models for each case study, implementing a scenario analysis based on CE policies. The variables used in these analyses related to process costs and to the amount of discarded and recycled products. The main contribution of this work is the methodology, implemented in two real case studies in Mexico, for which we designed two simulation models to evaluate circular economy strategies in future scenarios. In addition, in the case of the confectionery factory, the simulator allowed stakeholders to understand the operation of the recycling process and to visualize all the variables involved in the system.
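The NetLogo models themselves are not reproduced here, but the CE scenario-analysis idea can be illustrated with a minimal, hypothetical stand-in: the defect rate, recycling policy, and time horizon below are illustrative assumptions, not values from the case studies:

```python
import random

def simulate(recycle_rate, days=365, daily_output=100, seed=42):
    """Toy scenario run: each day a fraction `recycle_rate` of the
    defective product is recycled back into the process instead of
    being discarded. Returns totals for scenario comparison."""
    rng = random.Random(seed)
    discarded = recycled = 0
    for _ in range(days):
        # Hypothetical 5% per-unit defect probability
        defective = sum(rng.random() < 0.05 for _ in range(daily_output))
        back = round(defective * recycle_rate)
        recycled += back
        discarded += defective - back
    return {"discarded": discarded, "recycled": recycled}

# Compare a linear baseline (no recycling) against a CE policy scenario
baseline = simulate(recycle_rate=0.0)
ce_policy = simulate(recycle_rate=0.6)
```

Running both scenarios with the same seed isolates the effect of the policy, mirroring how the NetLogo simulators let stakeholders compare discarded and recycled product totals under different CE policies.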
Decoupling Capacitors Optimization Methodologies for Power Delivery Networks in Computer Platforms
https://hdl.handle.net/11117/8393 (published 2022-11-01)
Moreno-Mojica, Aurea E.
Every computing platform requires a power delivery network (PDN) to energize its devices. When the signals in the different devices of a PDN begin to switch, they cause current spikes that create voltage noise. Failing to control noise in the PDN can degrade performance and cause serious functional failures in the computing platform. The voltage level required by the chips depends on the frequency spectrum of the current they consume; thus, a good PDN design must have a low impedance profile. This is achieved by placing several stages of decoupling capacitors to reduce the impedance and provide local sources of charge. These parallel capacitor arrays introduce resonant frequencies that can magnify noise problems and that translate in the time domain into voltage droops. This doctoral thesis presents a numerical procedure to find the parallel resonant frequencies of a parallel array of more than two capacitors, as well as analytical equations to find the parallel resonant frequencies of a three-capacitor array, which also approximate the resonant frequencies of arrays of more than three capacitors. It then presents several numerical techniques to optimize the number of decoupling capacitors in a PDN and the values of the compensation elements of a voltage regulator that ensure stability, considering effects in both the frequency and time domains. In addition, this thesis presents a performance-optimization approach in the frequency and time domains that considers the impact of capacitance tolerances in the decoupling capacitors. Finally, the thesis provides the first steps toward obtaining a lumped equivalent circuit of the discretized planes of a PDN, allowing decoupling capacitors to be placed anywhere in the PDN. Each proposed methodology is properly validated with suitable test cases, demonstrating the efficiency of the proposed techniques. Some future research opportunities are also outlined.
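To make the parallel-resonance problem concrete, a minimal numerical sketch follows. Each capacitor is modeled as a series R-L-C branch (ESR, ESL, C) with hypothetical element values, and the peak search is a generic local-maximum scan over the impedance magnitude, not the thesis's procedure or its analytical equations:

```python
import numpy as np

def pdn_impedance(freqs, caps):
    """|Z| of decoupling capacitors in parallel; each capacitor is a
    series R-L-C branch described by (ESR [ohm], ESL [H], C [F])."""
    w = 2 * np.pi * freqs
    y_total = np.zeros_like(freqs, dtype=complex)
    for esr, esl, c in caps:
        z_branch = esr + 1j * w * esl + 1 / (1j * w * c)
        y_total += 1 / z_branch          # parallel branches: admittances add
    return np.abs(1 / y_total)

# Hypothetical three-capacitor array: bulk, mid-frequency, high-frequency
caps = [(10e-3, 1.0e-9, 100e-6),
        (10e-3, 0.8e-9, 1e-6),
        (10e-3, 0.5e-9, 10e-9)]
f = np.logspace(4, 9, 20000)             # 10 kHz .. 1 GHz sweep
z = pdn_impedance(f, caps)

# Parallel (anti-)resonances appear as local maxima of |Z|;
# a three-capacitor array produces two of them, between the
# series resonances of adjacent branches.
peaks = [f[i] for i in range(1, len(z) - 1)
         if z[i] > z[i - 1] and z[i] > z[i + 1]]
```

These |Z| peaks are the frequencies at which switching-current harmonics can be magnified into time-domain voltage droops, which is why the thesis's optimization procedures target them.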
Using Interpretive Semantics Techniques to Enhance Ontology Learning
https://hdl.handle.net/11117/7461 (published 2021-06-01)
Escobar-Vega, Luis M.
As intelligent virtual assistants scale to the mass market, traditional validation techniques for question-answering systems become inadequate for achieving full functional coverage of the system. The inherent complexities of natural language and conversational dialog introduce design challenges in guaranteeing processing, dialog, and understanding performance. In addition, there is an increasing number of trained language models in question-answering systems, a significant portion of which are statistic-based language models. Improvements in datasets, natural language processing techniques, and processing speed have pushed their scores beyond 90%. Even so, the lack of interpretation can create multiple understanding-integrity problems when answering a question, and the problem is aggravated when the model faces a context new and different from the one used during training. Challenges for meaning comprehension are continuously increasing. Therefore, information retrieval processes extract key elements of the language that can be critical for building more useful question-answering systems. Using appropriate information retrieval techniques to extract the critical elements needed to create new knowledge structures is a significant challenge, and combining information retrieval with ontology learning can make validation very time-consuming. Typical practices in building question-answering systems are statistic-based; consequently, they require massive datasets to train their models, making the information retrieval process too lengthy and prohibitive when the model faces new contexts. In this doctoral dissertation, a combination of interpretive semantics, semantic similarity, and ontology learning methods with suitable statistical functions is proposed to improve the efficiency of extracting semantic elements from text.
The proposed methods are implemented in a software tool, and their performance is evaluated on real question-answering platforms such as virtual assistants. The results show both the efficiency of the proposed methods and significant improvements over state-of-the-art practices.
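The interpretive-semantics pipeline itself is not reproduced here; as a minimal illustration of the semantic-similarity building block the dissertation combines with ontology learning, a bag-of-words cosine similarity can be sketched (real systems would use richer representations such as trained embeddings):

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two texts under a simple
    bag-of-words term-frequency representation."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)            # shared-term overlap
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

A score near 1.0 indicates near-identical term usage and 0.0 indicates no lexical overlap; similarity scores of this kind can rank candidate semantic elements extracted from a text before they are admitted into a learned ontology.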