CIplus
The research focus CIplus is part of the Computational Services and Software Quality cluster at TH Köln. Its goal is to improve the internal exchange between and the external visibility of the participating disciplines.
Further information on the research focus is available on the website Computational Intelligence plus - CIplus.
Editors:
Prof. Dr. Thomas Bartz-Beielstein (Managing Editor)
Prof. Dr. Wolfgang Konen
Prof. Dr. Boris Naujoks
2/2023
This work describes the development and spread of artificial intelligence (AI) and the challenges and opportunities associated with it. It highlights that, despite the obvious benefits of AI, there are concerns about undesirable side effects of faulty or abusive applications. To meet these challenges, an approach termed “convivial artificial intelligence” is proposed. This approach aims at a harmonious interplay between AI and humans and emphasizes the need for human-centered design in the development and deployment of AI models.
1/2023
The increasing complexity of production systems, particularly in mechanical engineering, places a burden on automation specialists and plant manufacturers. To counter this burden, Industrie 4.0 offers a solution with cyber-physical systems and intelligent automation systems. Human expert knowledge is shifted into the automation by formulating goals declaratively instead of describing procedural sequences of actions. This approach gives intelligent systems sufficient freedom of action and reduces the human effort required for optimization, commissioning, and plant reconfiguration. Implementing intelligent automation requires new automation techniques and software services that draw on methods such as machine learning, condition monitoring, diagnosis algorithms, and optimization procedures. Currently, these services are implemented independently of one another, and their interfaces are often proprietary, which hampers the exchange of data, models, and results. Industrie 4.0 nevertheless strives for the interoperation of devices and components from different manufacturers. As one solution, this project developed a cognitive reference architecture that addresses these points.
11/2020
Drinking water supply and distribution systems are critical infrastructure that has to be well maintained for public safety. One important tool in the maintenance of water distribution systems (WDS) is flushing, a process carried out periodically to clean sediments and other contaminants out of the water pipes. Given the differences in topography, water composition, and supply demand between WDS, no single flushing strategy suits them all. This report gives a non-exhaustive overview of optimization methods for flushing in WDS. Implementations of optimization methods for the flushing procedure and for flushing planning are presented, and suggestions are given for optimizing existing flushing-planning frameworks.
9/2020
EventDetectR: an efficient Event Detection System (EDS) capable of detecting unexpected water quality conditions. The approach uses multiple algorithms to model the relationship between various multivariate water quality signals. The residuals of these models are then used to construct the event detection algorithm, which provides a continuous measure of the probability of an event at every time step. The proposed framework was tested on water contamination events with industrial data from automated water quality sensors. The results show that the framework is reliable, performs well, and is highly suitable for event detection.
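The residual-based scoring can be sketched in a few lines. The following Python snippet is only a minimal illustration of the idea, not the EventDetectR implementation itself; the linear model, window length, and critical z-score are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def event_probabilities(X, y, window=60, z_crit=3.0):
    """Residual-based event scoring: predict one signal from the
    others, standardize the residuals over a rolling window, and
    map them to a probability-like score in [0, 1]."""
    model = LinearRegression().fit(X, y)
    resid = y - model.predict(X)
    probs = np.zeros_like(resid)
    for t in range(window, len(resid)):
        mu = resid[t - window:t].mean()
        sigma = resid[t - window:t].std() + 1e-9
        z = abs(resid[t] - mu) / sigma
        probs[t] = min(z / z_crit, 1.0)  # saturates at the critical z-score
    return probs

# Illustrative usage with synthetic sensor data (columns: pH, conductivity)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.1, size=500)
y[400:420] += 1.5  # injected contamination event
print(event_probabilities(X, y).argmax())  # flags a step inside the event
```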
10/2020
Sensor placement for contaminant detection in water distribution systems (WDS) has become a topic of great interest, aiming to secure a population's water supply. Several approaches can be found in the literature, with differences ranging from the objective selected for optimization to the methods implemented to solve the optimization problem. In this work we aim to give an overview of the current work on sensor placement with a focus on contaminant detection in WDS. We present some of the objectives for which the sensor placement problem is defined, along with common optimization algorithms and toolkits available to help with algorithm testing and comparison.
8/2020
Benchmark experiments are required to test, compare, tune, and understand optimization algorithms. Ideally, benchmark problems closely reflect real-world problem behavior. Yet, real-world problems are not always readily available for benchmarking. For example, evaluation costs may be too high, or resources are unavailable (e.g., software or equipment). As a solution, data from previous evaluations can be used to train surrogate models which are then used for benchmarking. The goal is to generate test functions on which the performance of an algorithm is similar to that on the real-world objective function. However, predictions from data-driven models tend to be smoother than the ground-truth from which the training data is derived. This is especially problematic when the training data becomes sparse. The resulting benchmarks may not reflect the landscape features of the ground-truth, may be too easy, and may lead to biased conclusions.
To resolve this, we use simulation of Gaussian processes instead of estimation (or prediction). This retains the covariance properties estimated during model training. While previous research suggested a decomposition-based approach for a small-scale, discrete problem, we show that the spectral simulation method enables simulation for continuous optimization problems. In a set of experiments with an artificial ground-truth, we demonstrate that this yields more accurate benchmarks than simply predicting with the Gaussian process model.
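As a rough illustration of the spectral approach, one can draw an approximate sample path of a Gaussian process with a squared-exponential kernel from a finite cosine expansion whose frequencies follow the kernel's spectral density. The Python sketch below makes its own assumptions about kernel, lengthscale, and feature count; it is not the authors' code.

```python
import numpy as np

def spectral_gp_sample(dim=2, n_features=500, lengthscale=0.3, seed=0):
    """Draw one approximate sample path of a zero-mean GP with a
    Gaussian (squared-exponential) kernel via the spectral method:
    frequencies are sampled from the kernel's spectral density and
    combined into a finite cosine expansion. The returned function
    is deterministic and can be evaluated anywhere, which makes it
    usable as a test function for continuous optimization."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / lengthscale, size=(n_features, dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    a = rng.normal(size=n_features)

    def f(x):
        x = np.atleast_2d(x)
        phi = np.sqrt(2.0 / n_features) * np.cos(x @ W.T + b)
        return phi @ a

    return f

test_fn = spectral_gp_sample()
print(test_fn([[0.1, 0.2], [0.5, 0.5]]))  # evaluate the simulated landscape
```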
7/2020
An important class of black-box optimization problems relies on simulations to assess the quality of a given candidate solution. Solving such problems can be computationally expensive because each simulation is very time-consuming. We present an approach to mitigate this problem by distinguishing two factors of computational cost: the number of trials and the time needed to execute the trials. Our approach tries to keep the number of trials down by using Bayesian optimization (BO), known to be sample efficient, and reduces wall-clock time by parallel execution of trials. We compare the performance of four parallelization methods and two model-free alternatives. Each method is evaluated on all 24 objective functions of the Black-Box Optimization Benchmarking (BBOB) test suite in their five-, ten-, and 20-dimensional versions. Additionally, their performance is investigated on six test cases in robot learning. The results show that parallelized BO outperforms the state-of-the-art CMA-ES on the BBOB test functions, especially in higher dimensions. On the robot learning tasks, the differences are less clear, but the data do support parallelized BO as the 'best guess', winning in some cases and never losing.
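To make the parallelization idea concrete, here is a minimal Python sketch of one possible batch-BO loop: a "constant liar" heuristic proposes a batch of points from an expected-improvement criterion, and the batch is then evaluated in parallel. The surrogate, candidate sampling, and stand-in test function are illustrative assumptions, not the paper's four methods.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.stats import norm

def expected_improvement(gp, X_cand, y_best):
    mu, sd = gp.predict(X_cand, return_std=True)
    sd = np.maximum(sd, 1e-12)
    z = (y_best - mu) / sd
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

def propose_batch(X, y, batch_size, rng):
    """Constant-liar batch proposal: after picking a point, pretend it
    returned the current best value and refit, so the next pick moves
    elsewhere. One of several ways to parallelize BO."""
    Xb, yb, batch = X.copy(), y.copy(), []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor(normalize_y=True).fit(Xb, yb)
        cand = rng.uniform(-5, 5, size=(2000, X.shape[1]))
        best = cand[np.argmax(expected_improvement(gp, cand, yb.min()))]
        batch.append(best)
        Xb = np.vstack([Xb, best])
        yb = np.append(yb, yb.min())  # the "lie"
    return np.array(batch)

def sphere(x):  # stand-in for an expensive simulation
    return float(np.sum(np.asarray(x) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, size=(8, 3))
    y = np.array([sphere(x) for x in X])
    for _ in range(5):
        batch = propose_batch(X, y, batch_size=4, rng=rng)
        with ProcessPoolExecutor() as ex:  # trials run in parallel
            y_new = list(ex.map(sphere, batch))
        X, y = np.vstack([X, batch]), np.append(y, y_new)
    print("best value:", y.min())
```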
6/2020
Many black-box optimization problems rely on simulations to evaluate the quality of candidate solutions. These evaluations can be computationally expensive and very time-consuming. We present an approach to mitigate this problem by taking two factors into consideration: the number of evaluations and the execution time. We aim to keep the number of evaluations low by using Bayesian optimization (BO), known to be sample efficient, and to reduce wall-clock time by executing evaluations in parallel. Four parallelization methods using BO as the optimizer are compared against the inherently parallel CMA-ES. Each method is evaluated on all 24 objective functions of the Black-Box Optimization Benchmarking test suite in their 20-dimensional versions. The results show that parallelized BO outperforms the state-of-the-art CMA-ES on most of the test functions, also in higher dimensions.
5/2020
Real-world problems such as computational fluid dynamics simulations and finite element analyses are computationally expensive. A standard approach to mitigating the high computational expense is Surrogate-Based Optimization (SBO). Yet, due to the high dimensionality of many simulation problems, SBO is not directly applicable or not efficient. Reducing the dimensionality of the search space is one method to overcome this limitation. In addition to the applicability of SBO, dimensionality reduction enables easier data handling and improved data and model interpretability. Regularization is considered a state-of-the-art technique for dimensionality reduction. We propose a hybridization approach called Regularized-Surrogate-Optimization (RSO) aimed at overcoming difficulties related to high dimensionality. It couples standard Kriging-based SBO with regularization techniques. The employed regularization methods are based on three adaptations of the least absolute shrinkage and selection operator (LASSO). In addition, tree-based methods are analyzed as an alternative variable selection method. An extensive study is performed on a set of artificial test functions and two real-world applications: the electrostatic precipitator problem and a multilayered composite design problem. Experiments reveal that RSO requires significantly less time than standard SBO to obtain comparable results. The pros and cons of the RSO approach are discussed, and recommendations for practitioners are presented.
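The coupling of regularization and Kriging can be sketched as a two-stage procedure: LASSO screens the input dimensions, and the surrogate is fitted only on the surviving ones. The snippet below is an illustration of this coupling under assumed defaults (LassoCV, a standard Gaussian process), not the published RSO implementation.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.gaussian_process import GaussianProcessRegressor

def lasso_screened_surrogate(X, y):
    """Sketch of the regularization idea behind RSO: use LASSO to
    select the influential input dimensions, then fit the Kriging
    (Gaussian process) surrogate only on those dimensions."""
    lasso = LassoCV(cv=5).fit(X, y)
    active = np.flatnonzero(lasso.coef_)  # surviving variables
    if active.size == 0:                  # fall back to all dimensions
        active = np.arange(X.shape[1])
    gp = GaussianProcessRegressor(normalize_y=True).fit(X[:, active], y)
    return gp, active

# 20-dimensional toy problem in which only 3 inputs matter
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 20))
y = 1.5 * X[:, 0] + 0.5 * X[:, 3] - X[:, 7] + rng.normal(scale=0.01, size=80)
gp, active = lasso_screened_surrogate(X, y)
print("selected dimensions:", active)
```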
4/2020
Surrogate-based optimization relies on so-called infill criteria (acquisition functions) to decide which point to evaluate next. When Kriging is used as the surrogate model of choice (also called Bayesian optimization), one of the most frequently chosen criteria is expected improvement. We argue that the popularity of expected improvement largely relies on its theoretical properties rather than empirically validated performance. A few results from the literature provide evidence that, under certain conditions, expected improvement may perform worse than something as simple as the predicted value of the surrogate model. We benchmark both infill criteria in an extensive empirical study on the 'BBOB' function set. This investigation includes a detailed study of the impact of problem dimensionality on algorithm performance. The results support the hypothesis that exploration loses importance with increasing problem dimensionality. A statistical analysis reveals that the purely exploitative search with the predicted value criterion performs better on most problems of five or more dimensions. Possible reasons for these results are discussed. In addition, we give an in-depth guide for choosing the infill criteria based on prior knowledge about the problem at hand, its dimensionality, and the available budget.
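For reference, both criteria can be computed directly from the surrogate's predictive mean and standard deviation (here for minimization); this minimal sketch assumes nothing beyond those two quantities.

```python
import numpy as np
from scipy.stats import norm

def infill_criteria(mu, sd, y_best):
    """Expected improvement trades off exploration and exploitation;
    the predicted-value criterion is purely exploitative."""
    sd = np.maximum(sd, 1e-12)  # guard against zero predictive variance
    z = (y_best - mu) / sd
    ei = (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    pv = -mu                    # maximizing -mu == picking the lowest prediction
    return ei, pv
```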
3/2020
We propose a hybridization approach called Regularized-Surrogate-Optimization (RSO) aimed at overcoming difficulties related to high dimensionality. It combines standard Kriging-based SMBO with regularization techniques. The employed regularization methods use the least absolute shrinkage and selection operator (LASSO). An extensive study is performed on a set of artificial test functions and two real-world applications: the electrostatic precipitator problem and a multilayered composite design problem. Experiments reveal that RSO requires significantly less time than Kriging to obtain comparable results. The pros and cons of the RSO approach are discussed and recommendations for practitioners are presented.
2/2020
This survey compiles ideas and recommendations from more than a dozen researchers with different backgrounds and from different institutes around the world. Promoting best practice in benchmarking is its main goal. The article discusses eight essential topics in benchmarking: clearly stated goals, well-specified problems, suitable algorithms, adequate performance measures, thoughtful analysis, effective and efficient designs, comprehensible presentations, and guaranteed reproducibility. The final goal is to provide well-accepted guidelines (rules) that might be useful for authors and reviewers. As benchmarking in optimization is an active and evolving field of research, this manuscript is meant to co-evolve over time by means of periodic updates.
1/2020
This paper introduces CAAI, a novel cognitive architecture for artificial intelligence in cyber-physical production systems. The goal of the architecture is to reduce the implementation effort for the use of artificial intelligence algorithms. The core of CAAI is a cognitive module that processes the user's declarative goals, selects suitable models and algorithms, and creates a configuration for the execution of a processing pipeline on a big data platform. Constant observation and evaluation against performance criteria assess the performance of pipelines across many and varying use cases. Based on these evaluations, the pipelines are adapted automatically if necessary. The modular design with well-defined interfaces enables the reusability and extensibility of pipeline components. A big data platform implements this modular design, supported by technologies such as Docker, Kubernetes, and Kafka for virtualization and orchestration of the individual components and their communication. The implementation of the architecture is evaluated using a real-world use case.
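Purely as a toy illustration of what processing declarative goals could mean, the sketch below filters candidate pipelines by a declared capability and picks the best-evaluated one; all names and the selection rule are invented for illustration and are far simpler than the actual CAAI cognitive module.

```python
from dataclasses import dataclass

@dataclass
class Pipeline:
    name: str
    capability: str   # e.g. "optimization", "condition-monitoring" (invented labels)
    score: float      # performance measured against the user's criteria

def select_pipeline(goal: str, candidates: list[Pipeline]) -> Pipeline:
    """Toy version of the cognitive module's job: filter the available
    processing pipelines by the declaratively stated goal and pick the
    one that currently evaluates best."""
    suitable = [p for p in candidates if p.capability == goal]
    if not suitable:
        raise ValueError(f"no pipeline offers capability '{goal}'")
    return max(suitable, key=lambda p: p.score)

pipelines = [
    Pipeline("kriging-opt", "optimization", 0.81),
    Pipeline("evolution-opt", "optimization", 0.77),
    Pipeline("autoencoder-cm", "condition-monitoring", 0.90),
]
print(select_pipeline("optimization", pipelines).name)  # kriging-opt
```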
7/2018
The availability of several CPU cores on current computers enables parallelization and increases the computational power significantly. Optimization algorithms have to be adapted to exploit these highly parallelized systems and evaluate multiple candidate solutions in each iteration. This issue is especially challenging for expensive optimization problems, where surrogate models are employed to reduce the load of objective function evaluations. This paper compares different approaches for surrogate model-based optimization in parallel environments. Additionally, an easy-to-use method, which was developed for an industrial project, is proposed. All described algorithms are tested with a variety of standard benchmark functions. Furthermore, they are applied to a real-world engineering problem, the electrostatic precipitator problem. Expensive computational fluid dynamics simulations are required to estimate the performance of the precipitator. The task is to optimize a gas-distribution system so that a desired velocity distribution is achieved for the gas flow throughout the precipitator. The vast amount of possible configurations leads to a complex discrete-valued optimization problem. The experiments indicate that a hybrid approach, which proposes candidate solutions based on different surrogate model-based infill criteria and evolutionary operators, works best.
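A minimal sketch of such a hybrid proposal step is given below: one parallel batch is assembled from two surrogate-based infill criteria plus an evolutionary mutation of the incumbent. The concrete criteria and mutation scale are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def hybrid_batch(X, y, n_cand=1000, rng=None):
    """Assemble one batch of candidate solutions from several sources:
    two different surrogate-based infill criteria plus an evolutionary
    mutation of the incumbent (best point found so far)."""
    rng = rng or np.random.default_rng()
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(X.min(0), X.max(0), size=(n_cand, X.shape[1]))
    mu, sd = gp.predict(cand, return_std=True)
    incumbent = X[np.argmin(y)]
    return np.vstack([
        cand[np.argmin(mu)],             # infill 1: best predicted value
        cand[np.argmin(mu - 2.0 * sd)],  # infill 2: lower confidence bound
        incumbent + rng.normal(scale=0.1, size=X.shape[1]),  # mutation
    ])

X = np.random.default_rng(1).uniform(-5, 5, size=(10, 3))
y = (X ** 2).sum(axis=1)
print(hybrid_batch(X, y, rng=np.random.default_rng(2)))
```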
6/2018
Clean air plays a more important role today than ever. Society and politics debate diesel driving bans in city centers to reduce particulate pollution. Industry in particular faces the task of reducing particle emissions and finding ways to preserve healthy air. Filters are often used for exhaust gas cleaning, but they exhibit high energy losses, and the constant cleaning or replacement of filters costs time and money. Besides filters, one of the most common methods is therefore exhaust gas cleaning with dust separators. Dust separators work without filters, eliminating recurring filter cleaning or regular filter replacement. The technology of dust separators has its origins in nature: from the observation of cyclones (tropical storms), a process was developed to separate dust-laden fluids from their contaminants. Exhaust gas cleaning with cyclone dust separators is used in many different industries, nowadays mostly as a pre-separator. Examples include the lignite-processing industry, the rock and stone industry, and the paper- or wood-processing industry, especially where a lot of dust or larger shavings get into the air. Cyclone dust separators can also be found in everyday life, in bagless vacuum cleaners or as pre-separators for vacuum cleaners in woodworking.
The processes inside a cyclone dust separator have already been described by mathematical models. These are approximations rather than exact representations of reality, which is why the models are still being refined and improved today. A CFD (computational fluid dynamics) simulation usually yields the best results, but it is very costly and must be developed anew for each dust separator. Work therefore continues on the mathematical models, aiming at an optimized calculation that holds for all dust separators. Muschelknautz researched this field for years and developed one of the most important methods for calculating cyclone separators, which often agrees very well with reality. However, looking at the depth of the dip tube in the cyclone, one notices that the calculated separation efficiency becomes maximal when the dip tube does not protrude into the separation chamber but is flush with the cyclone's lid. This phenomenon occurs neither in the CFD simulations performed nor in the measurements taken on the component. The goal of this work is to investigate this discrepancy between calculation and measurement and to find its causes. To this end, the state of the art and the Muschelknautz model are first presented, followed by a closer examination of the calculation method, in order to determine whether the cause of the deviations from reality becomes apparent from an analysis of the calculation method. For example, it is checked whether the conclusion of maximal separation efficiency at minimal dip tube depth depends on specific factors. A series of example calculations is carried out to reveal the relationship between separation efficiency and dip tube depth, varying the geometry parameters of the separator to study their influence on the dip tube depth.
5/2018
Modelling Zero-inflated Rainfall Data through the Use of Gaussian Process and Bayesian Regression
(2018)
Rainfall is a key parameter for understanding the water cycle. An accurate rainfall measurement is vital in the development of hydrological models. By means of indirect measurement, satellites can nowadays estimate rainfall around the world. However, these measurements are not always accurate. As a first approach to generating a bias-corrected rainfall estimate using satellite data, the performance of Gaussian process and Bayesian regression is studied. The results show the Gaussian process to be the better option for this dataset but leave room for improvement in both modelling strategies.
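A comparison of this kind can be set up in a few lines; the sketch below fits a Gaussian process and a Bayesian ridge regression on synthetic, zero-inflated rainfall-like data and compares test error. The data generation and model defaults are assumptions for illustration, not the study's data or settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic, zero-inflated data: many dry time steps, gamma-distributed wet ones
rng = np.random.default_rng(42)
satellite = rng.gamma(2.0, 2.0, size=300) * rng.binomial(1, 0.4, size=300)
ground = np.maximum(0.8 * satellite + rng.normal(scale=0.5, size=300), 0.0)

X = satellite.reshape(-1, 1)
X_tr, X_te, y_tr, y_te = train_test_split(X, ground, random_state=0)

for name, model in [("GP", GaussianProcessRegressor(normalize_y=True)),
                    ("Bayesian regression", BayesianRidge())]:
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {mse:.3f}")
```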
4/2018
Surrogate-based optimization and nature-inspired metaheuristics have become the state of the art in solving real-world optimization problems. Still, it is difficult for beginners and even experts to get an overview that explains their advantages over the large number of other methods available in the scope of continuous optimization. Available taxonomies lack the integration of surrogate-based approaches and thus their embedding in the larger context of this broad field.
This article presents a taxonomy of the field that further matches the idea of nature-inspired algorithms, as it is based on human behavior in pathfinding. Intuitive analogies make it easy to grasp the most basic principles of the search algorithms, even for beginners and non-experts in this area of research. However, this scheme does not oversimplify the high complexity of the different algorithms, as the class identifier only defines a descriptive meta-level of the algorithms' search strategies. The taxonomy was established by exploring and matching algorithm schemes, extracting similarities and differences, and creating a set of classification indicators to distinguish between five distinct classes. In practice, this taxonomy allows recommendations for the applicability of the corresponding algorithms and helps developers trying to create or improve their own algorithms.
3/2018
Architectural approaches are considered to simplify the generation of re-usable building blocks in the field of data warehousing. While SAP's Layered Scalable Architecture (LSA) offers a reference model for creating data warehousing infrastructure based on SAP software, extended reference models are needed to guide the integration of SAP and non-SAP tools. Therefore, SAP's LSA is compared to the Data Warehouse Architectural Reference Model (DWARM), which aims to cover the classical data warehouse topologies.
2/2018
1/2018
Increasing computational power and the availability of 3D printers provide new tools for the combination of modeling and experimentation. Several simulation tools can be run independently and in parallel, e.g., long-running computational fluid dynamics simulations can be accompanied by experiments with 3D printers. Furthermore, results from analytical and data-driven models can be incorporated. However, there are fundamental differences between these modeling approaches: some models, e.g., analytical models, use domain knowledge, whereas data-driven models do not require any information about the underlying processes.
At the same time, data-driven models require input and output data, but analytical models do not. Combining results from models with different input-output structures might improve and accelerate the optimization process. The optimization via multimodel simulation (OMMS) approach, which is able to combine results from these different models, is introduced in this paper.
Using cyclonic dust separators as a real-world simulation problem, the feasibility of this approach is demonstrated and a proof-of-concept is presented. Cyclones are popular devices used to filter dust from the emitted flue gases. They are applied as pre-filters in many industrial processes including energy production and grain processing facilities. Pros and cons of this multimodel optimization approach are discussed and experiences from experiments are presented.
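One way to picture the combination of models with different input-output structures is residual correction: a cheap analytical model provides the baseline, and a data-driven model learns its deviation from the few expensive simulation results available. The sketch below illustrates that pattern with invented toy models; it is not the published OMMS algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def analytical_model(x):
    """Stand-in for a physics-based estimate of separator efficiency:
    cheap, knowledge-driven, and needing no training data."""
    return 1.0 - np.exp(-2.0 * x)

def multimodel_prediction(x_query, X_sim, y_sim):
    """Correct the analytical model with a data-driven model of its
    residuals, trained on the few expensive simulation results."""
    resid = y_sim - analytical_model(X_sim).ravel()
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_sim, resid)
    return analytical_model(x_query).ravel() + gp.predict(x_query)

X_sim = np.linspace(0.1, 2.0, 8).reshape(-1, 1)    # a few expensive CFD runs
y_sim = 1.0 - np.exp(-2.2 * X_sim.ravel()) + 0.02  # their "true" responses
print(multimodel_prediction(np.array([[0.5], [1.5]]), X_sim, y_sim))
```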