CIplus
The research focus CIplus is part of the Computational Services and Software Quality cluster at TH Köln. Its goal is to improve internal exchange among the participating disciplines and their external visibility.
Further information about the research focus is available on the website Computational Intelligence plus - CIplus.
Editors:
Prof. Dr. Thomas Bartz-Beielstein (Managing Editor)
Prof. Dr. Wolfgang Konen
Prof. Dr. Boris Naujoks
2/2023
This paper describes the development and spread of artificial intelligence (AI) and the associated challenges and opportunities. It highlights that, despite the obvious benefits of AI, there are concerns about undesirable side effects caused by faulty or abusive applications. To address these challenges, an approach termed “convivial artificial intelligence” is proposed. This approach aims at a harmonious interplay between AI and humans and emphasizes the need for human-centered design in the development and deployment of AI models.
8/2020
Benchmark experiments are required to test, compare, tune, and understand optimization algorithms. Ideally, benchmark problems closely reflect real-world problem behavior. Yet, real-world problems are not always readily available for benchmarking. For example, evaluation costs may be too high, or resources are unavailable (e.g., software or equipment). As a solution, data from previous evaluations can be used to train surrogate models which are then used for benchmarking. The goal is to generate test functions on which the performance of an algorithm is similar to that on the real-world objective function. However, predictions from data-driven models tend to be smoother than the ground-truth from which the training data is derived. This is especially problematic when the training data becomes sparse. The resulting benchmarks may not reflect the landscape features of the ground-truth, are too easy, and may lead to biased conclusions.
To resolve this, we use simulation of Gaussian processes instead of estimation (or prediction). This retains the covariance properties estimated during model training. While previous research suggested a decomposition-based approach for a small-scale, discrete problem, we show that the spectral simulation method enables simulation for continuous optimization problems. In a set of experiments with an artificial ground-truth, we demonstrate that this yields more accurate benchmarks than simply predicting with the Gaussian process model.
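To make the spectral method concrete, here is a minimal Python sketch of unconditional Gaussian process simulation for a squared-exponential kernel. Conditioning on training data, which the benchmarking use case requires, is omitted for brevity, and the length scale and number of spectral points are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def simulate_gp_spectral(x, lengthscale=0.2, n_features=500, seed=0):
    """Draw one GP sample path via the spectral method.

    For a squared-exponential kernel the spectral density is Gaussian,
    so frequencies can be sampled from N(0, 1/lengthscale^2).
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0 / lengthscale, size=n_features)  # spectral frequencies
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    # A superposition of cosines approximates a stationary GP sample path.
    return np.sqrt(2.0 / n_features) * np.cos(np.outer(x, omega) + phase).sum(axis=1)

x = np.linspace(0.0, 1.0, 200)
f = simulate_gp_spectral(x)  # one realization, usable as a test function
```

Unlike the posterior mean of a fitted model, each realization retains the roughness implied by the covariance function, which is exactly the property the benchmarks need.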
7/2020
An important class of black-box optimization problems relies on using simulations to assess the quality of a given candidate solution. Solving such problems can be computationally expensive because each simulation is very time-consuming. We present an approach to mitigate this problem by distinguishing two factors of computational cost: the number of trials and the time needed to execute the trials. Our approach tries to keep down the number of trials by using Bayesian optimization (BO), known to be sample-efficient, and to reduce wall-clock time by parallel execution of trials. We compare the performance of four parallelization methods and two model-free alternatives. Each method is evaluated on all 24 objective functions of the Black-Box Optimization Benchmarking (BBOB) test suite in their five-, ten-, and 20-dimensional versions. Additionally, their performance is investigated on six test cases in robot learning. The results show that parallelized BO outperforms the state-of-the-art CMA-ES on the BBOB test functions, especially for higher dimensions. On the robot learning tasks, the differences are less clear, but the data do support parallelized BO as the ‘best guess’, winning in some cases and never losing.
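As one illustration of how BO can be parallelized (the constant-liar heuristic, a common batch strategy, though not necessarily among the four methods compared in the paper), here is a minimal sketch using scikit-learn; all names and settings are illustrative:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(model, X_cand, y_best):
    """EI for minimization: expected amount by which a candidate beats y_best."""
    mu, sigma = model.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def propose_batch(X, y, bounds, q=4, n_cand=2000, seed=0):
    """Constant-liar batch proposal: after each pick, pretend its outcome
    equals the current best so the next pick is pushed elsewhere."""
    rng = np.random.default_rng(seed)
    X_fant, y_fant = X.copy(), y.copy()
    batch = []
    for _ in range(q):
        model = GaussianProcessRegressor(normalize_y=True).fit(X_fant, y_fant)
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_cand, bounds.shape[0]))
        x_next = cand[np.argmax(expected_improvement(model, cand, y_fant.min()))]
        batch.append(x_next)
        X_fant = np.vstack([X_fant, x_next])
        y_fant = np.append(y_fant, y_fant.min())  # the "lie"
    return np.array(batch)  # evaluate these q points in parallel
```

The q proposed trials can then be simulated concurrently, cutting wall-clock time roughly by the batch size while keeping the total number of trials low.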
2/2020
This survey compiles ideas and recommendations from more than a dozen researchers with different backgrounds and from different institutes around the world. Promoting best practice in benchmarking is its main goal. The article discusses eight essential topics in benchmarking: clearly stated goals, well-specified problems, suitable algorithms, adequate performance measures, thoughtful analysis, effective and efficient designs, comprehensible presentations, and guaranteed reproducibility. The final goal is to provide well-accepted guidelines (rules) that might be useful for authors and reviewers. As benchmarking in optimization is an active and evolving field of research, this manuscript is meant to co-evolve over time by means of periodic updates.
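As a small illustration of two of these topics (reproducibility and recorded performance measures), a benchmark loop with fixed, documented seeds might look like the following; the function names and the random-search baseline are invented for the example, not taken from the survey:

```python
import json
import numpy as np

def benchmark(algorithm, problems, n_reps=15, budget=1000):
    """Run each algorithm/problem pairing with fixed, documented seeds so
    the experiment can be re-run exactly; record the performance measure."""
    results = []
    for prob in problems:
        for rep in range(n_reps):
            rng = np.random.default_rng(rep)     # seed = repetition index
            best = algorithm(prob, budget, rng)  # best objective value found
            results.append({"problem": prob.__name__, "rep": rep, "best": best})
    return results

def random_search(prob, budget, rng):
    return min(prob(rng.uniform(-5, 5, 10)) for _ in range(budget))

sphere = lambda x: float(np.sum(x**2)); sphere.__name__ = "sphere"
print(json.dumps(benchmark(random_search, [sphere])[:2], indent=2))
```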
1/2020
This paper introduces CAAI, a novel cognitive architecture for artificial intelligence in cyber-physical production systems. The goal of the architecture is to reduce the implementation effort required to use artificial intelligence algorithms. The core of the CAAI is a cognitive module that processes the user's declarative goals, selects suitable models and algorithms, and creates a configuration for the execution of a processing pipeline on a big data platform. The pipelines are constantly observed and evaluated against performance criteria for many and varying use cases, and are automatically adapted if necessary based on these evaluations. The modular design with well-defined interfaces enables the reusability and extensibility of pipeline components. A big data platform implements this modular design, supported by technologies such as Docker, Kubernetes, and Kafka for virtualization and orchestration of the individual components and their communication. The implementation of the architecture is evaluated using a real-world use case.
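A toy sketch of the cognitive module's core loop as described (declarative goal in, best-scoring pipeline out); all class and field names below are invented for illustration and are not CAAI's actual interfaces:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pipeline:
    name: str
    run: Callable[[dict], float]  # returns a score on the declared goal

def select_pipeline(goal: dict, candidates: list[Pipeline]) -> Pipeline:
    """Score each candidate pipeline against the user's declarative goal
    and keep the best; a real cognitive module would repeat this
    continuously and reconfigure the deployment when a candidate wins."""
    return max(candidates, key=lambda p: p.run(goal))

# Two dummy pipelines standing in for configured big-data jobs.
candidates = [
    Pipeline("regression", lambda goal: 0.8),
    Pipeline("random-forest", lambda goal: 0.9),
]
best = select_pipeline({"target": "minimize_scrap"}, candidates)
print(best.name)
```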
7/2018
The availability of several CPU cores on current computers enables parallelization and increases the computational power significantly. Optimization algorithms have to be adapted to exploit these highly parallelized systems and evaluate multiple candidate solutions in each iteration. This issue is especially challenging for expensive optimization problems, where surrogate models are employed to reduce the load of objective function evaluations.
This paper compares different approaches for surrogate model-based optimization in parallel environments. Additionally, an easy-to-use method, which was developed for an industrial project, is proposed. All described algorithms are tested with a variety of standard benchmark functions. Furthermore, they are applied to a real-world engineering problem, the electrostatic precipitator problem. Expensive computational fluid dynamics simulations are required to estimate the performance of the precipitator. The task is to optimize a gas-distribution system so that a desired velocity distribution is achieved for the gas flow throughout the precipitator. The vast amount of possible configurations leads to a complex discrete-valued optimization problem. The experiments indicate that a hybrid approach works best, which proposes candidate solutions based on different surrogate model-based infill criteria and evolutionary operators.
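A minimal sketch of such a hybrid proposal step (the specific combination of criteria below is illustrative, not the paper's exact method): one candidate each from an exploitative infill criterion, an explorative one, and an evolutionary mutation, to be evaluated in parallel:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def hybrid_batch(X, y, bounds, n_cand=1000, seed=0):
    """Propose 3 points per iteration from different sources (minimization):
    the predicted-mean minimizer (exploit), the lower-confidence-bound
    minimizer (explore), and a Gaussian mutation of the incumbent."""
    rng = np.random.default_rng(seed)
    model = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_cand, bounds.shape[0]))
    mu, sd = model.predict(cand, return_std=True)
    x_mean = cand[np.argmin(mu)]              # surrogate infill: predicted mean
    x_lcb = cand[np.argmin(mu - 2.0 * sd)]    # surrogate infill: confidence bound
    x_best = X[np.argmin(y)]
    x_evo = np.clip(x_best + rng.normal(0, 0.1, x_best.shape),
                    bounds[:, 0], bounds[:, 1])  # evolutionary operator: mutation
    return np.vstack([x_mean, x_lcb, x_evo])  # evaluate in parallel
```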
6/2018
Keeping the air clean plays a more important role today than ever. Society and politics debate diesel driving bans in inner cities to reduce particulate pollution, and industry in particular faces the task of lowering particle emissions and finding ways to maintain healthy air. Filters are often used for exhaust gas cleaning, but they exhibit high energy losses, and the constant cleaning or replacement of the filters costs time and money. Besides filters, one of the most common methods is therefore exhaust gas cleaning with dust separators. Dust separators work without filters, which eliminates recurring filter cleaning and regular filter replacement. The technology has its origin in nature: from the observation of cyclones (tropical storms), a process was developed to separate dust-laden fluids from their contaminants. Exhaust gas cleaning with cyclone dust separators is used in many different industries, nowadays mostly as a pre-separation stage. Examples include the lignite-processing industry, the stone industry, and the paper- and wood-processing industries, especially where large amounts of dust or larger chips are released into the air. Cyclone dust separators can also be found in everyday life, for instance in bagless vacuum cleaners or as pre-separators for vacuum cleaners in woodworking.
The processes inside a cyclone dust separator have already been described by mathematical models. These are approximations rather than exact representations of reality, which is why the models are still being refined and improved today. A CFD (computational fluid dynamics) simulation usually yields the best results, but it is very laborious and has to be developed anew for each dust separator. Work therefore continues on the mathematical models, aiming at a calculation that holds for all dust separators. Muschelknautz researched this area for years and developed one of the most important methods for calculating cyclone separators, which often agrees very well with reality. However, when the immersion depth of the vortex finder (immersion tube) in the cyclone is considered, the calculated separation efficiency becomes maximal when the tube does not protrude into the separation chamber but is flush with the cyclone lid. This phenomenon occurs neither in the CFD simulations carried out nor in the measurements performed on the component. The goal of this thesis is to investigate this discrepancy between calculation and measurement and to find its causes. To this end, the state of the art and the Muschelknautz model are first presented, followed by a closer examination of the calculation method, in order to determine whether the cause of the deviations from reality becomes apparent from an analysis of the method. For example, it is checked whether the conclusion of a maximal separation efficiency at minimal immersion tube depth depends on specific factors. A series of example calculations is carried out to reveal the relationship between separation efficiency and immersion tube depth; the geometry parameters of the separator are varied in order to investigate their influence.
3/2018
Architectural approaches are considered to simplify the generation of reusable building blocks in the field of data warehousing. While SAP's Layered Scalable Architecture (LSA) offers a reference model for creating data warehousing infrastructure based on SAP software, extended reference models are needed to guide the integration of SAP and non-SAP tools. Therefore, SAP's LSA is compared to the Data Warehouse Architectural Reference Model (DWARM), which aims to cover the classical data warehouse topologies.
8/2017
Surrogate-assisted optimization has proven to be very successful when applied to industrial problems. Using a data-driven surrogate model of an objective function during an optimization cycle has many benefits, such as being cheap to evaluate and providing information about both the objective landscape and the parameter space. In preliminary work, we investigated how surrogate-assisted optimization can help to optimize the structure of a neural network (NN) controller. In this work, we focus on how surrogates can help to improve the direct learning process of a transparent feed-forward neural network controller. As an initial case study we consider a manageable real-world control task: the elevator supervisory group control (ESGC) problem, using a simplified simulation model. We use this model as a benchmark to indicate the applicability and performance of surrogate-assisted optimization for this kind of task. While the optimization process itself is not considered expensive in this case, the results show that surrogate-assisted optimization is capable of outperforming metaheuristic optimization methods for a low number of evaluations. Furthermore, the surrogate can be used for significance analysis of the inputs and weighted connections to further exploit problem information.
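As a hedged illustration of surrogate-based significance analysis, the sketch below fits a random-forest surrogate to an archive of (controller weights, fitness) pairs and reads off its feature importances; this is a stand-in for whatever surrogate and analysis the study actually employs, and all data are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical archive of evaluated controllers: each row is a flat vector
# of NN controller weights; fitness is the simulated cost (lower = better).
rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=(200, 12))
fitness = np.sum(weights[:, :3] ** 2, axis=1) + 0.1 * rng.normal(size=200)

# Fit a surrogate of the weight->fitness mapping, then rank how strongly
# each weight influences the predicted fitness.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(weights, fitness)
ranking = np.argsort(surrogate.feature_importances_)[::-1]
print("most influential weights:", ranking[:3])  # should recover indices 0-2
```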
6/2017
To maximize the throughput of a hot rolling mill, the number of passes has to be reduced. This can be achieved by maximizing the thickness reduction in each pass. For this purpose, exact predictions of roll force and torque are required. Hence, the predictive models that describe the physical behavior of the product have to be accurate and cover a wide range of different materials.
Due to market requirements, many new materials are tested and rolled. If a material is to be rolled more often, a suitable flow curve has to be established. Determining these flow curves in the laboratory is not reasonable because of cost and time. A strong demand for quick parameter determination and for optimizing flow curve parameters at minimum cost is the logical consequence. Therefore, estimating and optimizing the parameters from real data collected during previous runs is a promising idea. Producers benefit from this data-driven approach and gain considerable flexibility when rolling new materials, optimizing current production, and increasing quality. The concept also allows optimizing flow curve parameters that have already been determined by standard methods. In this article, a new data-driven approach for predicting the physical behavior of the product and setting important parameters is presented.
We demonstrate how the prediction quality of the roll force and roll torque can be optimized sustainably. This offers the opportunity to continuously increase the workload in each pass toward the theoretical maximum while also improving product quality and process stability.
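As a rough, self-contained illustration of data-driven flow curve fitting (the multiplicative stress law and all numbers below are assumptions for the example, not the article's model), one can fit flow curve parameters to mill data with nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def flow_stress(X, K, n, m):
    """Simple multiplicative flow curve: sigma = K * strain^n * strain_rate^m.
    A stand-in for industrial flow curve models such as Hensel-Spittel."""
    strain, strain_rate = X
    return K * strain**n * strain_rate**m

# Synthetic stand-in for (strain, strain rate, flow stress) mill measurements.
rng = np.random.default_rng(1)
strain = rng.uniform(0.05, 0.5, 300)
rate = rng.uniform(1.0, 50.0, 300)
stress = 900.0 * strain**0.2 * rate**0.1 * (1 + 0.02 * rng.normal(size=300))

(K, n, m), _ = curve_fit(flow_stress, (strain, rate), stress, p0=(500.0, 0.1, 0.05))
print(f"K={K:.0f} MPa, n={n:.3f}, m={m:.3f}")  # fitted flow curve parameters
```

The fitted parameters can then feed the roll force and torque models, and can be re-estimated whenever new pass data arrive.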