When designing or developing optimization algorithms, test functions are crucial to evaluate
performance. Often, test functions are not sufficiently difficult, diverse, flexible or relevant to real-world
applications. Previously,
test functions with real-world relevance were generated by training a machine learning model
on real-world data; the model's estimation was then used as the test function.
We propose a more principled approach using simulation instead of estimation.
Thus, relevant and varied test functions
are created which represent the behavior of real-world fitness landscapes.
Importantly, estimation can lead to excessively smooth test functions
while simulation may avoid this pitfall. Moreover, the simulation
can be conditioned by the data, so that the simulation reproduces the training data
but features diverse behavior in unobserved regions of the search space.
The proposed test function generator is illustrated with an intuitive, one-dimensional
example. To demonstrate the utility of this approach it
is applied to a protein sequence optimization problem.
This application demonstrates the advantages as well as practical limits of simulation-based
test functions.
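
A minimal one-dimensional sketch of the idea, assuming a Gaussian process with a squared-exponential kernel (the kernel and its length scale are illustrative choices, not those of the paper): a sample path drawn from the process conditioned on the training data reproduces those observations, yet varies freely elsewhere, and can serve as a test function.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    # squared-exponential covariance between two sets of 1-d points
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def conditional_simulation(x_train, y_train, x_new, rng, noise=1e-8):
    # draw one sample path of a GP conditioned on the training data;
    # the path reproduces y_train at x_train but varies elsewhere
    k_tt = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_tn = rbf_kernel(x_train, x_new)
    k_nn = rbf_kernel(x_new, x_new)
    solve = np.linalg.solve(k_tt, np.column_stack([y_train, k_tn]))
    mean = k_tn.T @ solve[:, 0]
    cov = k_nn - k_tn.T @ solve[:, 1:]
    # jitter keeps the covariance numerically positive semi-definite
    return rng.multivariate_normal(mean, cov + noise * np.eye(len(x_new)))

rng = np.random.default_rng(0)
x_train = np.array([0.0, 0.5, 1.0])
y_train = np.array([1.0, -1.0, 2.0])
x_new = np.linspace(0.0, 1.0, 21)
test_function = conditional_simulation(x_train, y_train, x_new, rng)
```

Each call with a different random stream yields a different, yet data-consistent, test function; repeating the draw is what produces the diverse behavior in unobserved regions.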

Surrogate-assisted optimization has proven to be very successful when applied to industrial problems. The use of a data-driven surrogate model of an objective function during an optimization cycle has many benefits, such as being cheap to evaluate and providing information about both the objective landscape and the parameter space. In preliminary work, it was researched how surrogate-assisted optimization can help to optimize the structure of a neural network (NN) controller. In this work, we focus on how surrogates can help to improve the direct learning process of a transparent feed-forward neural network controller. As an initial case study, we consider a manageable real-world control task: the elevator supervisory group control (ESGC) problem, using a simplified simulation model. We use this model as a benchmark that should indicate the applicability and performance of surrogate-assisted optimization for this kind of task. While the optimization process itself is in this case not considered expensive, the results show that surrogate-assisted optimization is capable of outperforming metaheuristic optimization methods for a low number of evaluations. Furthermore, the surrogate can be used for significance analysis of the inputs and weighted connections to further exploit problem information.
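
The surrogate-assisted learning loop described above can be sketched as follows; the toy objective standing in for the elevator simulation, the quadratic surrogate, and all parameter values are hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

def objective(w):
    # hypothetical stand-in for an expensive controller simulation:
    # quality of the weight vector w on a toy control task
    return np.sum((w - np.array([0.3, -0.7])) ** 2)

def fit_quadratic_surrogate(X, y):
    # cheap data-driven surrogate: least-squares quadratic model
    feats = np.column_stack([np.ones(len(X)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(feats, y, rcond=None)
    return lambda Z: np.column_stack([np.ones(len(Z)), Z, Z ** 2]) @ coef

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(8, 2))        # initial design of weight vectors
y = np.array([objective(w) for w in X])
for _ in range(10):                        # optimization cycle
    surrogate = fit_quadratic_surrogate(X, y)
    cand = rng.uniform(-1, 1, size=(200, 2))
    w_next = cand[np.argmin(surrogate(cand))]   # cheap search on surrogate
    X = np.vstack([X, w_next])
    y = np.append(y, objective(w_next))    # one expensive evaluation
best = X[np.argmin(y)]
```

The loop spends almost all evaluations on the cheap surrogate and only one expensive simulation per cycle, which is the key economy behind surrogate-assisted learning.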

Surrogate-based optimization and nature-inspired metaheuristics have become the state of the art in solving real-world optimization problems. Still, it is difficult for beginners and even experts to get an overview that explains their advantages in comparison to the large number of available methods in the scope of continuous optimization. Available taxonomies lack the integration of surrogate-based approaches and thus their embedding in the larger context of this broad field.
This article presents a taxonomy of the field that also matches the idea of nature-inspired algorithms, as it is based on human behavior in path finding. Intuitive analogies make it easy to grasp the most basic principles of the search algorithms, even for beginners and non-experts in this area of research. However, this scheme does not oversimplify the high complexity of the different algorithms, as the class identifier only defines a descriptive meta-level of the algorithms' search strategies. The taxonomy was established by exploring and matching algorithm schemes, extracting similarities and differences, and creating a set of classification indicators to distinguish between five distinct classes. In practice, this taxonomy allows recommendations on the applicability of the corresponding algorithms and helps developers trying to create or improve their own algorithms.

As the amount of data gathered by monitoring systems increases, using computational tools to analyze it becomes a necessity.
Machine learning algorithms can be used in both regression and classification problems, providing useful insights while avoiding human bias and proneness to error. In this paper, a specific kind of decision tree algorithm, called a conditional inference tree, is used to extract relevant knowledge from data that pertains to electrical motors. The model is chosen due to its flexibility and strong statistical foundation, as well as its great capability to generalize and cope with problems in the data. The obtained knowledge is organized in a structured way and then analyzed in the context of health condition monitoring. The final results illustrate how the approach can be used to gain insight into the system and present the results in an understandable, user-friendly manner.
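
Conditional inference trees are usually fitted with `ctree` from the R package party; the simplified Python stump below only mimics their central idea, splitting solely when a statistical association test is significant, and then reporting an interpretable rule. The synthetic motor data and the correlation-based test are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def ctree_stump(x, y, alpha=0.05):
    """One split of a conditional-inference-style tree (simplified):
    split on x only if its association with y is statistically
    significant, then report an interpretable rule."""
    r, p_value = stats.pearsonr(x, y)   # association test (illustrative)
    if p_value >= alpha:
        return None                     # no significant structure: no split
    threshold = np.median(x)
    left, right = y[x <= threshold], y[x > threshold]
    return {"threshold": threshold,
            "p_value": p_value,
            "left_mean": left.mean(),
            "right_mean": right.mean()}

# synthetic motor data (hypothetical): temperature vs. degradation score
rng = np.random.default_rng(2)
temp = rng.uniform(20, 90, 200)
score = 0.05 * temp + rng.normal(0, 0.2, 200)
rule = ctree_stump(temp, score)
```

The returned dictionary is the kind of structured, human-readable knowledge the abstract refers to: a condition, its statistical support, and the expected outcome on each side of the split.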

The availability of several CPU cores on current computers enables
parallelization and increases the computational power significantly.
Optimization algorithms have to be adapted to exploit these highly
parallelized systems and evaluate multiple candidate solutions in
each iteration. This issue is especially challenging for expensive
optimization problems, where surrogate models are employed to
reduce the load of objective function evaluations.
This paper compares different approaches for surrogate model-based
optimization in parallel environments. Additionally, an easy-to-use
method, which was developed for an industrial project, is
to use method, which was developed for an industrial project, is
proposed. All described algorithms are tested with a variety of
standard benchmark functions. Furthermore, they are applied to
a real-world engineering problem, the electrostatic precipitator
problem. Expensive computational fluid dynamics simulations are
required to estimate the performance of the precipitator. The task
is to optimize a gas-distribution system so that a desired velocity
distribution is achieved for the gas flow throughout the precipitator.
The vast amount of possible configurations leads to a complex
discrete-valued optimization problem. The experiments indicate
that a hybrid approach works best, one which proposes candidate
solutions based on different surrogate model-based infill criteria
and evolutionary operators.
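
A rough sketch of such a hybrid batch proposal, assuming a one-dimensional toy problem and a simple Kriging-style predictor with unit prior variance (all values are illustrative): each iteration yields one exploitative point, one explorative point, and mutation-style points around the incumbent, which could then be evaluated in parallel.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # squared-exponential covariance between 1-d point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_tr, y_tr, x, noise=1e-6):
    # Kriging-style predictive mean and variance (unit prior variance)
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    k = rbf(x_tr, x)
    mean = k.T @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(k * np.linalg.solve(K, k), axis=0)
    return mean, np.maximum(var, 0.0)

def propose_batch(x_tr, y_tr, rng, q=4):
    """Hybrid batch proposal: one exploitative point (minimal predicted
    value), one explorative point (maximal predictive variance), and
    mutation-style points around the incumbent."""
    cand = rng.uniform(0, 1, 256)
    mean, var = gp_predict(x_tr, y_tr, cand)
    best = x_tr[np.argmin(y_tr)]
    batch = [cand[np.argmin(mean)],       # surrogate infill: exploit
             cand[np.argmax(var)]]        # surrogate infill: explore
    while len(batch) < q:                 # evolutionary mutation operator
        batch.append(np.clip(best + rng.normal(0, 0.05), 0, 1))
    return np.array(batch)

rng = np.random.default_rng(3)
x_tr = np.array([0.1, 0.5, 0.9])
y_tr = np.sin(6 * x_tr)
batch = propose_batch(x_tr, y_tr, rng)
```

The batch size would typically be matched to the number of available CPU cores, so that all proposed candidates can be simulated simultaneously.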

Surrogate-based optimization relies on so-called infill criteria (acquisition functions) to decide which point to evaluate next. When Kriging is used as the surrogate model of choice (also called Bayesian optimization), one of the most frequently chosen criteria is expected improvement. We argue that the popularity of expected improvement largely relies on its theoretical properties rather than empirically validated performance. A few results from the literature show evidence that, under certain conditions, expected improvement may perform worse than something as simple as the predicted value of the surrogate model. We benchmark both infill criteria in an extensive empirical study on the ‘BBOB’ function set. This investigation includes a detailed study of the impact of problem dimensionality on algorithm performance. The results support the hypothesis that exploration loses importance with increasing problem dimensionality. A statistical analysis reveals that the purely exploitative search with the predicted-value criterion performs better on most problems of five or more dimensions. Possible reasons for these results are discussed. In addition, we give an in-depth guide for choosing the infill criterion based on prior knowledge about the problem at hand, its dimensionality, and the available budget.
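
The two competing infill criteria have simple closed forms; the sketch below, with made-up surrogate predictions, shows how expected improvement favors an uncertain candidate while the predicted value simply picks the lowest surrogate mean:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mean, sd, y_best):
    # closed-form EI for minimization; sd = 0 reduces to max(y_best - mean, 0)
    sd = np.asarray(sd, dtype=float)
    imp = y_best - mean
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.where(sd > 0, imp / sd, 0.0)
        ei = imp * norm.cdf(z) + sd * norm.pdf(z)
    return np.where(sd > 0, ei, np.maximum(imp, 0.0))

def predicted_value(mean):
    # purely exploitative criterion: negate so "larger is better",
    # matching the maximization convention of EI
    return -mean

# made-up surrogate predictions at three candidate points
mean = np.array([0.0, 0.2, -0.1])
sd = np.array([0.0, 1.0, 0.1])
y_best = 0.05
ei = expected_improvement(mean, sd, y_best)
```

With these numbers, EI selects the second candidate because its large predictive uncertainty dominates its mediocre mean, whereas the predicted-value criterion selects the third candidate, the one with the lowest mean. This is exactly the exploration-vs-exploitation trade-off the study benchmarks.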

Real-world problems such as computational fluid dynamics simulations and finite element analyses are computationally expensive. A standard approach to mitigating the high computational expense is Surrogate-Based Optimization (SBO). Yet, due to the high dimensionality of many simulation problems, SBO is not directly applicable or not efficient. Reducing the dimensionality of the search space is one method to overcome this limitation. In addition to restoring the applicability of SBO, dimensionality reduction enables easier data handling and improved data and model interpretability. Regularization is considered a state-of-the-art technique for dimensionality reduction. We propose a hybridization approach called Regularized-Surrogate-Optimization (RSO) aimed at overcoming difficulties related to high dimensionality. It couples standard Kriging-based SBO with regularization techniques. The employed regularization methods are based on three adaptations of the least absolute shrinkage and selection operator (LASSO). In addition, tree-based methods are analyzed as an alternative variable selection method. An extensive study is performed on a set of artificial test functions and two real-world applications: the electrostatic precipitator problem and a multilayered composite design problem. Experiments reveal that RSO requires significantly less time than standard SBO to obtain comparable results. The pros and cons of the RSO approach are discussed, and recommendations for practitioners are presented.
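
The variable-screening step of such a hybridization can be illustrated with a plain coordinate-descent LASSO on synthetic data (the data, penalty, and iteration count are illustrative assumptions, not the paper's setup); the surviving coefficients define the reduced search space on which the Kriging surrogate would then be built:

```python
import numpy as np

def lasso_cd(X, y, lam=0.1, iters=200):
    # plain coordinate-descent LASSO; assumes roughly standardized columns
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            # soft-thresholding update drives irrelevant coefficients to 0
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

# synthetic data: only variables 0 and 4 actually matter
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + rng.normal(0, 0.1, 100)
beta = lasso_cd(X, y)
active = np.flatnonzero(np.abs(beta) > 1e-6)   # reduced search space
```

After screening, the ten-dimensional problem collapses to the handful of active variables, which is what makes the subsequent Kriging model cheap to fit and easier to interpret.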

We propose a hybridization approach called Regularized-Surrogate-Optimization (RSO) aimed at overcoming difficulties related to high dimensionality. It combines standard Kriging-based SMBO with regularization techniques. The employed regularization methods use the least absolute shrinkage and selection operator (LASSO). An extensive study is performed on a set of artificial test functions and two real-world applications: the electrostatic precipitator problem and a multilayered composite design problem. Experiments reveal that RSO requires significantly less time than Kriging to obtain comparable results. The pros and cons of the RSO approach are discussed, and recommendations for practitioners are presented.

Many black-box optimization problems rely on simulations to evaluate the quality of candidate solutions. These evaluations can be computationally expensive and very time-consuming. We present an approach to mitigate this problem by taking two factors into consideration: the number of evaluations and the execution time. We aim to keep the number of evaluations low by using Bayesian optimization (BO), known to be sample-efficient, and to reduce wall-clock times by executing parallel evaluations. Four parallelization methods using BO as the optimizer are compared against the inherently parallel CMA-ES. Each method is evaluated on all 24 objective functions of the Black-Box Optimization Benchmarking (BBOB) test suite in their 20-dimensional versions. The results show that parallelized BO outperforms the state-of-the-art CMA-ES on most of the test functions, also in higher dimensions.
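
The wall-clock benefit of evaluating a batch of candidates in parallel can be sketched with standard-library threads; the sleeping toy simulator and all timings are illustrative assumptions:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def expensive_simulation(x):
    # hypothetical stand-in for a costly simulator call
    time.sleep(0.05)
    return sum(xi ** 2 for xi in x)

batch = [[0.1 * i, 0.2 * i] for i in range(8)]   # candidates from one BO step
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    values = list(pool.map(expensive_simulation, batch))
elapsed = time.perf_counter() - start
# 8 evaluations take roughly one simulation's wall-clock time, not eight
```

In a real setting each worker would launch a separate simulator process; the point of the sketch is only that batch proposals turn the number of evaluations and the wall-clock time into two independently controllable quantities.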

Sensor placement for contaminant detection in water distribution systems (WDS) has become a topic of great interest, aiming to secure a population's water supply. Several approaches can be found in the literature, with differences ranging from the objective selected for optimization to the methods implemented to solve the optimization problem. In this work, we aim to give an overview of current work on sensor placement with a focus on contaminant detection in WDS. We present some of the objectives for which the sensor placement problem is defined, along with common optimization algorithms and toolkits available to help with algorithm testing and comparison.