Refine
Document Type
- Working Paper (13)
- Report (8)
- Article (7)
- Preprint (3)
Has Fulltext
- yes (31)
Keywords
- Optimization (9)
- Optimierung (6)
- Benchmarking (3)
- Modeling (3)
- 3D Printing (2)
- Combined simulation (2)
- Evolutionary Computation (2)
- Evolutionärer Algorithmus (2)
- Globale Optimierung (2)
- Machine Learning (2)
- Maschinelles Lernen (2)
- Soft Computing (2)
- Surrogate (2)
- Surrogate Models (2)
- Surrogate-based (2)
- 3D-Druck (1)
- Abgasreinigung (1)
- Algorithm Tuning (1)
- Algorithmus (1)
- Angewandte Mathematik (1)
- Artificial intelligence (1)
- Automated Learning (1)
- Automation (1)
- BBOB (1)
- Bayesian Learning (1)
- Bayesian Optimization (1)
- Bayesian Regression (1)
- Big Data (1)
- Big data platform (1)
- Business Intelligence (1)
- Cognition (1)
- Computational fluid dynamics (2)
- Conditional inference tree (1)
- Data Analysis (1)
- Data Mining (1)
- Data Modelling (1)
- Data-Warehouse-Konzept (1)
- Decision tree (1)
- Design of Experiments (1)
- Discrete Optimization (1)
- Electrostatic Precipitator (1)
- Ensemble Methods (1)
- Ensemble based modeling (1)
- Entdeckendes Lernen (1)
- Erfahrungsbericht (1)
- Event Detection (1)
- Evolutionsstrategie (1)
- Expensive Optimization (1)
- Experiment (1)
- Experimental Algorithmics (1)
- Faserverbundwerkstoffe (1)
- Feature selection (1)
- Flowcurve (1)
- Forschendes Lernen (1)
- Function Approximation (1)
- Funktionstest (1)
- Gaussian Process (1)
- Genetische Algorithmen (1)
- Genetisches Programmieren (1)
- Health condition monitoring (1)
- Hot rolling (1)
- Imputation (1)
- Industrie 4.0 (1)
- Industry 4.0 (1)
- Knowledge extraction (1)
- Kognition (1)
- Kriging (1)
- Künstliche Intelligenz (1)
- Lineare Regression (1)
- Machine learning (1)
- Massive Online Analysis (1)
- Meta-model (1)
- Metaheuristics (1)
- Metaheuristik (1)
- Metal (1)
- Metamodel (1)
- Metamodell (1)
- Metamodels (1)
- Model Selection (1)
- Modellierung (1)
- Muschelknautz Method of Modelling (1)
- Neural and Evolutionary Computing (1)
- Numerische Strömungssimulation (1)
- On-line Algorithm (1)
- Parallelization (1)
- Performance (1)
- Predictive Analytics (1)
- Promotion (1)
- R (1)
- Referenzmodell (1)
- Regression (1)
- SAP (1)
- SPOT (1)
- Sensor placement (1)
- Sensortechnik (1)
- Sequential Parameter Optimization (1)
- Signalanalyse (1)
- Simulation (1)
- Simulation-based Optimization (1)
- Stacked Generalization (1)
- Stacking (1)
- Standardisierung (1)
- Staubabscheider (1)
- Structural Health Monitoring (1)
- Surrogate Mod (1)
- Surrogate Model (1)
- Surrogate model (1)
- Surrogate model based optimization (1)
- Surrogates (1)
- Tauchrohrtiefe (1)
- Taxonomie (1)
- Taxonomy (1)
- Test Function (1)
- Test function generator (1)
- Time Series (1)
- Trinkwasser (1)
- Univariate Data (1)
- Variable reduction (1)
- Verunreinigung (1)
- Wasserverteilung (1)
- Water distribution systems (1)
Institute
- Fakultät für Informatik und Ingenieurwissenschaften (F10) (31)
Keeping the air clean plays a more important role today than ever before. Society and politics debate diesel driving bans in inner cities in order to reduce urban particulate pollution. Industry in particular faces the task of reducing particle emissions and finding ways to preserve healthy air. Filters are often used for exhaust gas cleaning, but they exhibit high energy losses, and the constant cleaning or replacement of the filters costs time and money. Besides filters, one of the most common methods is therefore exhaust gas cleaning by means of dust separators. Dust separators work without filters, which eliminates recurring filter cleaning and regular filter replacement. The technology of dust separators has its origin in nature: from the observation of cyclones (tropical storms), a process was developed for separating dust-laden fluids from their contaminants. Exhaust gas cleaning with cyclone dust separators is used in many different industries, nowadays mostly as a pre-separation stage. Examples are the lignite-processing industry, the stone industry, and the paper- and wood-processing industries, especially where large amounts of dust or larger chips enter the air. Cyclone dust separators can also be found in everyday life, for example in bagless vacuum cleaners or as pre-separators for vacuum cleaners in woodworking.

The processes inside a cyclone dust separator have already been described by mathematical models. These models are approximations rather than exact representations of reality, which is why they are continually being refined and improved. A CFD (Computational Fluid Dynamics) simulation usually yields the best results, but it is very laborious and must be developed anew for every dust separator. Work therefore continues on the mathematical models in order to arrive at an optimized calculation that applies to all dust separators. Muschelknautz conducted research in this field over many years and developed one of the most important methods for calculating cyclone separators. Its results often agree very well with reality. However, when the depth of the vortex finder (immersion tube) in the cyclone is considered, it turns out that the calculated separation efficiency becomes maximal when the vortex finder does not protrude into the separation chamber but is flush with the cyclone lid. This phenomenon occurs neither in the CFD simulations that were performed nor in the measurements taken on the actual component.

The goal of this thesis is to investigate this discrepancy between calculation and measurement and to identify its causes. To this end, the state of the art and the Muschelknautz model are first presented, followed by a closer examination of the calculation method, in order to determine whether the cause of the deviations from reality becomes apparent from an analysis of the calculation itself. For example, it is checked whether the conclusion of maximal separation efficiency at minimal vortex finder depth depends on specific factors. A series of example calculations is carried out to reveal the relationship between separation efficiency and vortex finder depth; the geometry parameters of the separator are varied in order to investigate their influence on the vortex finder depth.
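As a rough illustration of how such a parameter study could be organized in code, the following Python sketch sweeps the vortex finder depth for several geometry variants. The separation-efficiency function is a made-up placeholder, not the Muschelknautz calculation; it only shows the structure of the sweep.

```python
# Hypothetical parameter study: separation efficiency vs. vortex finder depth.
# The model function below is a placeholder, NOT the Muschelknautz calculation;
# it only illustrates how the geometry sweep described above could be organized.
import numpy as np

def separation_efficiency(depth_ratio, diameter_ratio):
    """Placeholder stand-in for a cyclone separation-efficiency model.

    depth_ratio:    vortex finder depth / cyclone body height (0 = flush with lid)
    diameter_ratio: vortex finder diameter / cyclone body diameter
    """
    # Illustrative shape only: efficiency varies smoothly with both parameters.
    return 0.9 - 0.2 * (depth_ratio - 0.3) ** 2 - 0.1 * diameter_ratio

depths = np.linspace(0.0, 1.0, 11)          # swept vortex finder depths
for d_ratio in (0.3, 0.4, 0.5):             # varied geometry parameter
    eff = [separation_efficiency(t, d_ratio) for t in depths]
    best = depths[int(np.argmax(eff))]
    print(f"diameter ratio {d_ratio}: efficiency peaks at depth ratio {best:.1f}")
```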
Architectural approaches are considered to simplify the generation of re-usable building blocks in the field of data warehousing. While SAP's Layered Scalable Architecture (LSA) offers a reference model for creating data warehousing infrastructure based on SAP software, extended reference models are needed to guide the integration of SAP and non-SAP tools. Therefore, SAP's LSA is compared to the Data Warehouse Architectural Reference Model (DWARM), which aims to cover the classical data warehouse topologies.
Contaminations in the water distribution network can directly endanger large parts of the population. The hazard potential is not limited to possible criminal acts and terrorist attacks; operational malfunctions, system failures, and natural disasters can also lead to contamination.
In this paper we present a comparison of different data-driven modeling methods. A data-driven linear Bayesian model is compared with several linear regression models, a Kriging model, and a genetic programming model. The models are built on industrial data for the development of a robust gas sensor. The data contain a limited number of samples and exhibit high variance. The mean squared error of the models on a test dataset is used as the comparison criterion. The results indicate that standard linear regression approaches as well as Kriging and genetic programming show good results, whereas the Bayesian approach, despite requiring additional resources, does not lead to improved results.
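A minimal sketch of this comparison protocol, using scikit-learn and synthetic data in place of the industrial gas-sensor data (which is not public): each model is fit on the same training split and ranked by mean squared error on a held-out test split.

```python
# Hedged sketch of the comparison protocol: several data-driven models are fit
# on the same training data and ranked by test-set MSE. Synthetic data stands
# in for the proprietary gas-sensor data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, BayesianRidge
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 4))                  # few samples, as in the study
y = X @ np.array([1.5, -2.0, 0.5, 0.0]) + rng.normal(0, 0.5, 60)  # high variance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "Bayesian linear":   BayesianRidge(),
    # Kriging corresponds to Gaussian process regression with a noise term.
    "Kriging (GP)":      GaussianProcessRegressor(kernel=RBF() + WhiteKernel()),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {mse:.3f}")
```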
Modelling Zero-inflated Rainfall Data through the Use of Gaussian Process and Bayesian Regression
(2018)
Rainfall is a key parameter for understanding the water cycle, and accurate rainfall measurements are vital for the development of hydrological models. By means of indirect measurement, satellites can nowadays estimate rainfall around the world; however, these estimates are not always accurate. As a first approach to generating a bias-corrected rainfall estimate from satellite data, the performance of Gaussian process regression and Bayesian regression is studied. The results show the Gaussian process to be the better option for this dataset but leave room for improvement in both modelling strategies.
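The bias-correction idea can be sketched as learning a mapping from satellite estimates to ground-gauge measurements. The data below are synthetic stand-ins (including artificial zero inflation), and BayesianRidge is used as one plausible choice of Bayesian regression model.

```python
# Sketch of the bias-correction idea: learn a mapping from (zero-inflated)
# satellite rainfall estimates to assumed-available gauge measurements.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)
satellite = rng.gamma(2.0, 2.0, size=(100, 1))            # biased satellite estimates
satellite[rng.random((100, 1)) < 0.3] = 0.0               # zero inflation (dry days)
gauge = 0.8 * satellite.ravel() + rng.normal(0, 0.5, 100)  # "true" rainfall

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(satellite, gauge)
br = BayesianRidge().fit(satellite, gauge)

x_new = np.array([[3.0]])                                  # new satellite estimate
print("GP correction:", gp.predict(x_new))
print("Bayesian regression correction:", br.predict(x_new))
```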
Sensor placement for contaminant detection in water distribution systems (WDS) has become a topic of great interest, aiming to secure a population's water supply. Several approaches can be found in the literature, with differences ranging from the objective selected for optimization to the methods implemented to solve the optimization problem. In this work we give an overview of current work on sensor placement with a focus on contaminant detection in WDS. We present some of the objectives for which the sensor placement problem is defined, along with common optimization algorithms and toolkits available to help with algorithm testing and comparison.
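One common formulation from this literature poses sensor placement as scenario coverage, which a greedy heuristic solves approximately. The sketch below assumes a precomputed detection matrix; in practice it would come from hydraulic simulations of the network (for instance with an EPANET-based toolkit).

```python
# Hedged sketch of one common formulation: choose k sensor nodes maximizing the
# number of contamination scenarios detected, via a greedy heuristic. The random
# detection matrix is a stand-in for hydraulic simulation output.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_scenarios, k = 20, 50, 3
# detects[i, j] = True if a sensor at node i would detect contamination scenario j
detects = rng.random((n_nodes, n_scenarios)) < 0.15

chosen, covered = [], np.zeros(n_scenarios, dtype=bool)
for _ in range(k):
    # pick the node detecting the most not-yet-covered scenarios
    gains = [(detects[i] & ~covered).sum() for i in range(n_nodes)]
    best = int(np.argmax(gains))
    chosen.append(best)
    covered |= detects[best]

print(f"sensors at nodes {chosen}, detecting {covered.sum()}/{n_scenarios} scenarios")
```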
Many black-box optimization problems rely on simulations to evaluate the quality of candidate solutions. These evaluations can be computationally expensive and very time-consuming. We present an approach to mitigate this problem by taking two factors into consideration: the number of evaluations and the execution time. We aim to keep the number of evaluations low by using Bayesian optimization (BO), which is known to be sample-efficient, and to reduce wall-clock times by executing evaluations in parallel. Four parallelization methods using BO as the optimizer are compared against the inherently parallel CMA-ES. Each method is evaluated on all 24 objective functions of the Black-Box Optimization Benchmarking (BBOB) test suite in their 20-dimensional versions. The results show that parallelized BO outperforms the state-of-the-art CMA-ES on most of the test functions, also in higher dimensions.
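To illustrate how a batch of points for parallel evaluation can be obtained from BO, the following sketch uses the "constant liar" strategy, a standard batch method but not necessarily one of the four compared in the paper: after each pick, the GP is refit with a pessimistic pseudo-observation so that the batch points spread out.

```python
# Minimal sketch of batch Bayesian optimization via the "constant liar" strategy
# (one possible parallelization scheme, not the paper's specific methods).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(X_cand, gp, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma            # minimization convention
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def constant_liar_batch(X_obs, y_obs, q, rng, dim):
    X_fake, y_fake, batch = X_obs.copy(), y_obs.copy(), []
    for _ in range(q):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X_fake, y_fake)
        cand = rng.uniform(-5, 5, size=(2000, dim))      # random candidate pool
        x_next = cand[np.argmax(expected_improvement(cand, gp, y_fake.min()))]
        batch.append(x_next)
        # "lie": pretend x_next already returned the current best value
        X_fake = np.vstack([X_fake, x_next])
        y_fake = np.append(y_fake, y_fake.min())
    return np.array(batch)   # evaluate these q points in parallel

rng = np.random.default_rng(3)
X0 = rng.uniform(-5, 5, size=(10, 2))
y0 = np.sum(X0**2, axis=1)               # sphere function as a stand-in objective
print(constant_liar_batch(X0, y0, q=4, rng=rng, dim=2))
```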
We propose a hybridization approach called Regularized-Surrogate-Optimization (RSO) aimed at overcoming difficulties related to high-dimensionality. It combines standard Kriging-based SMBO with regularization techniques. The employed regularization methods use the least absolute shrinkage and selection operator (LASSO). An extensive study is performed on a set of artificial test functions and two real-world applications: the electrostatic precipitator problem and a multilayered composite design problem. Experiments reveal that RSO requires significantly less time than Kriging to obtain comparable results. The pros and cons of the RSO approach are discussed and recommendations for practitioners are presented.
Real-world problems such as computational fluid dynamics simulations and finite element analyses are computationally expensive. A standard approach to mitigating the high computational expense is Surrogate-Based Optimization (SBO). Yet, due to the high-dimensionality of many simulation problems, SBO is not directly applicable or not efficient. Reducing the dimensionality of the search space is one method to overcome this limitation. In addition to the applicability of SBO, dimensionality reduction enables easier data handling and improved data and model interpretability. Regularization is considered as one state-of-the-art technique for dimensionality reduction. We propose a hybridization approach called Regularized-Surrogate-Optimization (RSO) aimed at overcoming difficulties related to high-dimensionality. It couples standard Kriging-based SBO with regularization techniques. The employed regularization methods are based on three adaptations of the least absolute shrinkage and selection operator (LASSO). In addition, tree-based methods are analyzed as an alternative variable selection method. An extensive study is performed on a set of artificial test functions and two real-world applications: the electrostatic precipitator problem and a multilayered composite design problem. Experiments reveal that RSO requires significantly less time than standard SBO to obtain comparable results. The pros and cons of the RSO approach are discussed, and recommendations for practitioners are presented.
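The core RSO idea, as described, can be sketched in two steps: LASSO shrinks away irrelevant input dimensions, and the Kriging surrogate is then fit only on the surviving variables. The toy objective below has deliberately inactive inputs; the exact LASSO variants and their coupling with the optimizer in RSO may differ.

```python
# Sketch of LASSO-based variable selection followed by a Kriging surrogate on
# the reduced space (the two-step idea behind RSO, under toy assumptions).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(80, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 80)   # only dims 0 and 1 matter

lasso = LassoCV(cv=5).fit(X, y)
active = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)      # surviving dimensions
print("selected dimensions:", active)

# Kriging surrogate fit only on the reduced, low-dimensional search space
gp = GaussianProcessRegressor(normalize_y=True).fit(X[:, active], y)
```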
Surrogate-based optimization relies on so-called infill criteria (acquisition functions) to decide which point to evaluate next. When Kriging is used as the surrogate model of choice (also called Bayesian optimization), one of the most frequently chosen criteria is expected improvement. We argue that the popularity of expected improvement largely relies on its theoretical properties rather than empirically validated performance. A few results from the literature provide evidence that, under certain conditions, expected improvement may perform worse than something as simple as the predicted value of the surrogate model. We benchmark both infill criteria in an extensive empirical study on the ‘BBOB’ function set. This investigation includes a detailed study of the impact of problem dimensionality on algorithm performance. The results support the hypothesis that exploration loses importance with increasing problem dimensionality. A statistical analysis reveals that the purely exploitative search with the predicted-value criterion performs better on most problems of five or more dimensions. Possible reasons for these results are discussed. In addition, we give an in-depth guide for choosing the infill criterion based on prior knowledge about the problem at hand, its dimensionality, and the available budget.
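For concreteness, the two criteria can be written down directly. For minimization with current best y*, expected improvement is EI(x) = (y* − μ(x)) Φ(z) + σ(x) φ(z) with z = (y* − μ(x)) / σ(x), while the predicted-value criterion simply minimizes μ(x). The sketch below evaluates both on a toy 1-D GP model.

```python
# The two infill criteria side by side: expected improvement trades off
# predicted value and uncertainty; the predicted-value criterion is purely
# exploitative. Toy 1-D example, minimization convention.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 10, size=(8, 1))
y = np.sin(X).ravel()
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

cand = np.linspace(0, 10, 500).reshape(-1, 1)
mu, sigma = gp.predict(cand, return_std=True)
sigma = np.maximum(sigma, 1e-12)

z = (y.min() - mu) / sigma
ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement

print("EI would evaluate x =", cand[np.argmax(ei)])               # explorative
print("predicted value would evaluate x =", cand[np.argmin(mu)])  # exploitative
```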