Document Type
- Report (22)
- Working Paper (13)
- Article (7)
- Preprint (5)
- Conference Proceeding (1)
Keywords
- Optimierung (17)
- Optimization (12)
- Soft Computing (9)
- Modellierung (7)
- Evolutionärer Algorithmus (5)
- Globale Optimierung (5)
- Simulation (5)
- Benchmarking (4)
- Metaheuristik (4)
- Computational Intelligence (3)
- Genetisches Programmieren (3)
- Kriging (3)
- Maschinelles Lernen (3)
- Mehrkriterielle Optimierung (3)
- Modeling (3)
- Sequential Parameter Optimization (3)
- Sequentielle Parameter Optimierung (3)
- Surrogate (3)
- Surrogate Models (3)
- Versuchsplanung (3)
- 3D Printing (2)
- Bayesian Optimization (2)
- Co-Kriging (2)
- Combined simulation (2)
- Design of Experiments (2)
- Ensemble Methods (2)
- Event Detection (2)
- Evolutionary Computation (2)
- Evolutionsstrategie (2)
- Genetic Programming (2)
- Genetische Algorithmen (2)
- Imputation (2)
- Künstliche Intelligenz (2)
- Machine Learning (2)
- Metamodel (2)
- Multi-Criteria Optimization (2)
- Multiobjective Optimization (2)
- Optimierungsproblem (2)
- Parallelization (2)
- Prognose (2)
- R (2)
- Surrogat-Modellierung (2)
- Surrogate Modeling (2)
- Surrogate-based (2)
- Taxonomie (2)
- Taxonomy (2)
- 3D-Druck (1)
- Algorithm Tuning (1)
- Algorithmus (1)
- Angewandte Mathematik (1)
- Artificial intelligence (1)
- Automated Learning (1)
- BBOB (1)
- Bayesian Learning (1)
- Bayesian Regression (1)
- Big Data (1)
- Big data platform (1)
- Biogas (1)
- Biogas Plant (1)
- Computational fluid dynamics (1)
- Cognition (1)
- Computational fluid dynamics (1)
- Conditional inference tree (1)
- Cyber-physische Produktionssysteme (1)
- Cyclone Dust Separator (1)
- Data Analysis (1)
- Data Mining (1)
- Data Modelling (1)
- Datenanalyse (1)
- Decision tree (1)
- Discrete Optimization (1)
- Electrostatic Precipitator (1)
- Ensemble based modeling (1)
- Entdeckendes Lernen (1)
- Entstauber (1)
- Erfahrungsbericht (1)
- Evolution Strategies (1)
- Evolutionary Algorithms (1)
- Evolutionary Robotics (1)
- Evolutionsstrategien (1)
- Evolutionäre Algorithmen (1)
- Expected Improvement (1)
- Expensive Optimization (1)
- Experiment (1)
- Experimental Algorithmics (1)
- Faserverbundwerkstoffe (1)
- Feature selection (1)
- Fehlende Daten (1)
- Finanzwirtschaft (1)
- Flowcurve (1)
- Flushing (1)
- Forschendes Lernen (1)
- Function Approximation (1)
- Funktionstest (1)
- Gaussian Process (1)
- Genetic Algorithms (1)
- Genetic programming (1)
- Health condition monitoring (1)
- Heuristics (1)
- Hot rolling (1)
- Industrie 4.0 (1)
- Industry 4.0 (1)
- Knowledge extraction (1)
- Kognition (1)
- Kognitive Referenzarchitektur (1)
- Lineare Regression (1)
- Machine learning (1)
- Massive Online Analysis (1)
- Meta-model (1)
- Metaheuristics (1)
- Metal (1)
- Metamodell (1)
- Metamodels (1)
- Missing Data (1)
- Mixed Models (1)
- Mixed-Effects Models (1)
- Model Selection (1)
- Modellierung (1)
- Multi-criteria Optimization (1)
- Multi-fidelity (1)
- Neural Networks (1)
- Neural and Evolutionary Computing (1)
- Numerische Strömungssimulation (1)
- On-line Algorithm (1)
- Parallelisierung (1)
- Parameter Tuning (1)
- Parametertuning (1)
- Performance (1)
- Predictive Analytics (1)
- Promotion (1)
- Regression (1)
- Robotics (1)
- SPOT (1)
- Sensor placement (1)
- Sensortechnik (1)
- Signalanalyse (1)
- Simulated annealing (1)
- Simulation-based Optimization (1)
- Simulationsmodell (1)
- Social Learning (1)
- Spülen (1)
- Stacked Generalization (1)
- Stacking (1)
- Statistics (1)
- Statistische Versuchsplanung (1)
- Structural Health Monitoring (1)
- Surrogate Mod (1)
- Surrogate Model (1)
- Surrogate Optimization (1)
- Surrogate model (1)
- Surrogate model based optimization (1)
- Surrogate-Model-Based Optimization (1)
- Surrogate-model-based Optimization (1)
- Surrogates (1)
- Surrogatmodellbasierte Optimierung (1)
- Test Function (1)
- Test function generator (1)
- Time Series (1)
- Time-series (1)
- Trinkwasser (1)
- Trinkwasserversorgung (1)
- Univariate Data (1)
- Unsicherheit (1)
- Variable reduction (1)
- Varianzanalyse (1)
- Verunreinigung (1)
- Vorgehensmodell (1)
- Vorverarbeitung (1)
- Wasserverteilung (1)
- Wasserwirtschaft (1)
- Water Distribution Systems (1)
- Water Quality Monitoring (1)
- Water distribution systems (1)
- Zeitreihe (1)
- Zeitreihenanalyse (1)
- Zyklon Entstauber (1)
This final report describes the results achieved in the project "CI-basierte mehrkriterielle Optimierungsverfahren für Anwendungen in der Industrie" (CIMO) between November 2011 and October 2014. Suitable solution methods were developed for expensive optimization problems from industry. The focus was on methods from the fields of computational intelligence (CI) and surrogate modeling, which make it possible to address key challenges of expensive, complex optimization problems. The developed methods can take several conflicting objectives into account, integrate different hierarchy levels of the problem into the optimization, respect constraints, process vectorial as well as structured data (combinatorial optimization), and reduce the need for expensive or time-consuming objective function evaluations. The methods were primarily applied to a problem from power plant engineering, namely the optimization of the geometry of a cyclonic dust separator (cyclone), which filters dust from flue gases. The optimization problem posed by these cyclones involves conflicting objectives (e.g., pressure loss and collection efficiency). Cyclones can be evaluated by means of expensive computational fluid dynamics (CFD) simulations, but simple analytical equations are also available as estimates. Linking the two exemplifies how hierarchy levels of an optimization problem can be combined with the methods developed in the project. Beyond this main application, it was also shown that the methods can be applied successfully in many other areas: biogas production, water management, and the steel industry. The particular challenges of the problems and methods addressed offer many important research opportunities for future projects, which are currently being prepared by the project partners.
Computational intelligence methods have gained importance in several real-world domains such as process optimization, system identification, data mining, or statistical quality control. However, tools that determine the applicability of computational intelligence methods in these application domains in an objective manner are missing. Statistics provides methods for comparing algorithms on certain data sets. In the past, several test suites were presented and considered as state of the art. However, these test suites have several drawbacks, namely: (i) problem instances are somewhat artificial and have no direct link to real-world settings; (ii) since there is a fixed number of test instances, algorithms can be fitted or tuned to this specific and very limited set of test functions; (iii) statistical tools for comparing several algorithms on several test problem instances are relatively complex and not easy to analyze. We propose a methodology to overcome these difficulties. It is based on standard ideas from statistics: analysis of variance and its extension to mixed models. This work combines essential ideas from two approaches: problem generation and statistical analysis of computer experiments.
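The mixed-model idea behind this abstract can be illustrated with a small, hedged example: a linear mixed-effects model with the algorithm as a fixed effect and the problem instance as a random effect, fitted with statsmodels. The data, column names, and effect sizes below are purely illustrative and are not taken from the paper.

```python
# Hedged sketch: comparing two algorithms across randomly drawn problem
# instances with a linear mixed-effects model (algorithm = fixed effect,
# instance = random intercept). All data are synthetic and illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for i in range(20):                                # 20 randomly drawn instances
    instance_offset = rng.normal(0, 1.0)           # instance-specific difficulty
    for algo, effect in [("A", 0.0), ("B", -0.5)]: # algorithm B slightly better
        for _ in range(5):                         # repeated runs per cell
            y = 10 + instance_offset + effect + rng.normal(0, 0.3)
            rows.append({"instance": f"inst_{i}", "algorithm": algo, "performance": y})
df = pd.DataFrame(rows)

# Random intercept for each problem instance; fixed effect for the algorithm.
model = smf.mixedlm("performance ~ algorithm", data=df, groups="instance")
result = model.fit()
print(result.summary())   # the 'algorithm[T.B]' coefficient estimates the difference
```

The random intercept absorbs instance-to-instance variation, so the algorithm effect is not confounded with how hard the sampled instances happen to be.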
Evolutionary algorithm (EA) is an umbrella term for population-based stochastic direct search algorithms that in some sense mimic natural evolution. Prominent representatives of such algorithms are genetic algorithms, evolution strategies, evolutionary programming, and genetic programming. On the basis of the evolutionary cycle, similarities and differences between these algorithms are described. We briefly discuss how EAs can be adapted to work well in the case of multiple objectives and dynamic or noisy optimization problems. We look at the tuning of algorithms and present some recent developments coming from theory. Finally, typical applications of EAs to real-world problems are shown, with special emphasis on data-mining applications.
The aim of the research project "Mehrkriterielle CI-basierte Optimierungsverfahren für den industriellen Einsatz" (MCIOP) was the reduction of pollutant emissions in coal-fired power plants. The scientific focus was on the development of methods that are able to automatically generate interpretable models of pollutant emissions. To this end, multi-criteria optimization methods were developed and applied. To reduce time and cost, the optimization was carried out via surrogate models, which were used in a tiered manner together with more expensive simulations ("optimization via simulation"). In the study of dust separators, multi-criteria optimization made it possible to consider different objectives, such as collection efficiency and pressure loss, simultaneously.
This report describes the results achieved in the MCIOP project between August 2011 and June 2015.
Research-based learning ("Forschendes Lernen") is a methodological principle that integrates research orientation and the linking of research and teaching into degree programs and courses and uses them productively for student learning processes. Students thereby become part of the scientific community.
This article is an experience report presenting a variant of the concept of research-based learning that has been developed over the last ten years at a German university of applied sciences for engineering degree programs.
Since there is no single definition of research-based learning, the aspects relevant to this article are compiled first. Building on this, a process model of research-based learning is presented. This model enables research-based learning for bachelor's and master's students as well as for doctoral candidates.
This paper proposes an experimental methodology for on-line machine learning algorithms, i.e., for algorithms that work on data that are available in a sequential order.
It is demonstrated how established tools from experimental algorithmics (EA) can be applied in the on-line or streaming data setting.
The Massive Online Analysis (MOA) framework is used to perform the experiments.
Benefits of a well-defined report structure are discussed.
The application of methods from the EA community to on-line or streaming data is referred to as experimental algorithmics for streaming data (EADS).
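The core evaluation scheme for streaming data, prequential ("test-then-train") evaluation, can be sketched as follows. MOA itself is a Java framework; this Python stand-in only illustrates the scheme with a scikit-learn online learner, and the simulated stream is an assumption made for the example.

```python
# Minimal prequential ("test-then-train") sketch for a streaming learner.
# Not the MOA API: a scikit-learn online classifier stands in for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

correct, seen = 0, 0
for t in range(2000):                              # simulated data stream
    x = rng.normal(size=(1, 5))
    y = np.array([int(x[0, 0] + 0.5 * x[0, 1] > 0)])
    if seen > 0:                                   # test first ...
        correct += int(model.predict(x)[0] == y[0])
    model.partial_fit(x, y, classes=classes)       # ... then train on the same item
    seen += 1

print("prequential accuracy:", correct / (seen - 1))
```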
In this paper we present a comparison of different data-driven modeling methods. A data-driven linear Bayesian model is compared with several linear regression models, a Kriging model, and a genetic programming model. The models are built on industrial data for the development of a robust gas sensor. The data contain only a limited number of samples and exhibit high variance. The mean squared error of the models on a test dataset is used as the comparison criterion. The results indicate that standard linear regression approaches as well as Kriging and GP show good results, whereas the Bayesian approach, despite requiring additional resources, does not lead to improved results.
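A hedged sketch of the comparison strategy only (not the original gas-sensor data or models): several regression models are fitted on a training split and ranked by their mean squared error on a held-out test set; here, a Kriging surrogate is represented by a Gaussian process regressor.

```python
# Hedged sketch of model comparison by test-set MSE on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(60, 3))               # few samples, noisy response
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + rng.normal(0, 0.5, size=60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

models = {
    "linear regression": LinearRegression(),
    "Kriging (GP, RBF kernel)": GaussianProcessRegressor(kernel=RBF(), alpha=0.25),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, m.predict(X_te))
    print(f"{name}: test MSE = {mse:.3f}")
```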
When using machine learning techniques to learn a function approximation from given data, it is often difficult to select the right modeling technique. In many real-world settings, no prior knowledge about the objective function is available. It can then be beneficial if the algorithm learns all models by itself and selects the model that suits the problem best. This approach is known as automated model selection. In this work we propose a generalization of this approach: it combines the predictions of several surrogate models into one more accurate ensemble surrogate model. The approach is studied in a fundamental way, first evaluating minimalistic ensembles of only two surrogate models in detail and then proceeding to ensembles with three and more surrogate models. The results show to what extent combinations of models can perform better than single surrogate models and provide insights into the scalability and robustness of the approach. The study focuses on multimodal function topologies, which are important in surrogate-assisted global optimization.
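A minimal, hedged sketch of the simplest case mentioned above, an ensemble of two surrogate models: the predictions of a Gaussian process and a random forest are blended with a convex weight chosen on a validation split. The test function and the weight grid are illustrative assumptions, not the study's setup.

```python
# Hedged sketch of a two-member surrogate ensemble with a validated blend weight.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(-5, 5, size=(80, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2 + rng.normal(0, 0.1, size=80)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=3)

gp = GaussianProcessRegressor(alpha=0.01).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=100, random_state=3).fit(X_tr, y_tr)

best_w, best_mse = 0.0, np.inf
for w in np.linspace(0, 1, 21):                    # convex weight for the GP member
    pred = w * gp.predict(X_val) + (1 - w) * rf.predict(X_val)
    mse = mean_squared_error(y_val, pred)
    if mse < best_mse:
        best_w, best_mse = w, mse
print(f"best weight for GP member: {best_w:.2f}, validation MSE: {best_mse:.3f}")
```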
This report presents a practical approach to stacked generalization in surrogate model based optimization. It exemplifies the integration of stacking methods into the surrogate model building process. First, a brief overview of the current state in surrogate model based optimization is presented. Stacked generalization is introduced as a promising ensemble surrogate modeling approach. Then two examples (the first based on a real-world application and the second on a set of artificial test functions) are presented. These examples clearly illustrate two properties of stacked generalization: (i) combining information from two poorly performing models can result in a well performing model and (ii) even if the ensemble contains a well performing model, combining its information with information from poorly performing models results in only a relatively small performance decrease.
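A hedged sketch of stacked generalization for surrogate modeling: two level-0 surrogates are combined by a level-1 linear meta-model trained on cross-validated level-0 predictions. The data and model choices below are illustrative; the report's own implementation (e.g., in SPOT2) may differ.

```python
# Hedged sketch: stacking two level-0 surrogates with a linear level-1 combiner.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(100, 2))
y = X[:, 0] ** 2 + np.cos(3 * X[:, 1]) + rng.normal(0, 0.2, size=100)

stack = StackingRegressor(
    estimators=[
        ("gp", GaussianProcessRegressor(alpha=0.05)),                    # level-0 surrogate 1
        ("rf", RandomForestRegressor(n_estimators=50, random_state=4)),  # level-0 surrogate 2
    ],
    final_estimator=LinearRegression(),                                  # level-1 combiner
    cv=5,                                     # out-of-fold predictions feed the combiner
)
score = cross_val_score(stack, X, y, scoring="neg_mean_squared_error", cv=5)
print("stacked surrogate, CV MSE:", -score.mean())
```

Because the level-1 model only sees out-of-fold predictions, it learns how to weight the members without simply memorizing their training fit.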
When researchers and practitioners in the field of computational intelligence are confronted with real-world problems, the question arises which method is best to apply. Nowadays, several well-established test suites and well-known artificial benchmark functions are available. However, the relevance and applicability of these methods to real-world problems remains an open question in many situations, and their generalizability cannot be taken for granted. This paper describes a data-driven approach for the generation of test instances, which is based on real-world data. The test instance generation uses data preprocessing, feature extraction, modeling, and parameterization. We apply this methodology to a classical design-of-experiments real-world project and generate test instances for benchmarking, e.g., design methods, surrogate techniques, and optimization algorithms. Since most available results of methods applied to real-world problems lack accompanying data for comparison, our future goal is to create a toolbox covering multiple data sets from real-world projects in order to provide a test function generator to the research community.
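The essential idea of a data-driven test instance can be sketched as follows (all data and model choices are illustrative assumptions, not the project data): a model is fitted to observed design points and its prediction function is then exposed as a benchmark objective for optimization algorithms.

```python
# Hedged sketch of a data-driven test instance: a surrogate fitted to stand-in
# observations is wrapped as a benchmark objective for any optimizer.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.optimize import differential_evolution

rng = np.random.default_rng(5)
X_obs = rng.uniform(0, 10, size=(40, 2))            # "measured" design points
y_obs = np.sin(X_obs[:, 0]) * np.cos(X_obs[:, 1]) + rng.normal(0, 0.05, size=40)

model = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.01).fit(X_obs, y_obs)

def test_instance(x):
    """Benchmark objective derived from the fitted model."""
    return float(model.predict(np.atleast_2d(x))[0])

# any optimization algorithm can now be benchmarked on the generated instance
result = differential_evolution(test_instance, bounds=[(0, 10), (0, 10)], seed=5)
print("optimizer result on generated test instance:", result.x, result.fun)
```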
Data pre-processing is a key research topic in data mining because it plays a crucial role in improving the accuracy of any data mining algorithm. In most real-world cases, a significant amount of the recorded data is found missing due to a wide variety of errors, and this loss of data is nearly always unavoidable. Recovery of missing data plays a vital role in avoiding inaccurate data mining decisions. Most multivariate imputation methods are not compatible with univariate datasets, and the traditional univariate imputation techniques become highly biased as the missing data gap increases. With current technological advancements, abundant data is being captured every second. Hence, we intend to develop a new algorithm that enables maximum utilization of the available big datasets for imputation. In this paper, we present a Seasonal and Trend decomposition using Loess (STL) based seasonal moving window algorithm, which is capable of handling patterns with trend as well as cyclic characteristics. We show that the algorithm is highly suitable for pre-processing of large datasets.
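A hedged sketch of the seasonal moving-window idea only (not the authors' full STL-based algorithm): each missing value is replaced by the mean of the observed values at the same seasonal position within a window of neighbouring cycles. Period, window size, and the toy series are assumptions made for the example.

```python
# Hedged sketch of seasonal moving-window imputation for a univariate series.
import numpy as np

def seasonal_window_impute(x, period, window=3):
    """Impute NaNs using observations at the same phase in +/- `window` cycles."""
    x = np.asarray(x, dtype=float).copy()
    for i in np.flatnonzero(np.isnan(x)):
        idx = i + period * np.arange(-window, window + 1)   # same phase, nearby cycles
        idx = idx[(idx >= 0) & (idx < len(x))]
        donors = x[idx]
        donors = donors[~np.isnan(donors)]
        if donors.size:
            x[i] = donors.mean()
    return x

# daily-like toy series with a weekly cycle, a mild trend, and a gap of NaNs
t = np.arange(200)
series = 10 + 3 * np.sin(2 * np.pi * t / 7) + 0.01 * t
series[60:75] = np.nan
imputed = seasonal_window_impute(series, period=7, window=4)
print("remaining NaNs:", int(np.isnan(imputed).sum()))
```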
The use of surrogate models is a standard method to deal with complex, real-world optimization problems. The first surrogate models were applied to continuous optimization problems. In recent years, surrogate models have gained importance for discrete optimization problems. This article, which consists of three parts, addresses this development. The first part presents a survey of model-based methods, focusing on continuous optimization. It introduces a taxonomy, which is useful as a guideline for selecting adequate model-based optimization tools. The second part provides details for the case of discrete optimization problems. Here, six strategies for dealing with discrete data structures are introduced. A new approach for combining surrogate information via stacking is proposed in the third part. The implementation of this approach will be available in the open source R package SPOT2. The article concludes with a discussion of recent developments and challenges in both application domains.
To maximize the throughput of a hot rolling mill, the number of passes has to be reduced. This can be achieved by maximizing the thickness reduction in each pass. For this purpose, exact predictions of roll force and torque are required. Hence, the predictive models that describe the physical behavior of the product have to be accurate and cover a wide range of different materials.
Due to market requirements, many new materials are tested and rolled. If these materials are chosen to be rolled more often, a suitable flow curve has to be established. It is not reasonable to determine those flow curves in the laboratory, because of cost and time. A strong demand for quick parameter determination and the optimization of flow curve parameters at minimum cost is the logical consequence. Therefore, parameter estimation and optimization based on real data collected during previous runs is a promising idea. Producers benefit from this data-driven approach and gain considerable flexibility when rolling new materials, optimizing current production, and increasing quality. This concept would also allow optimizing flow curve parameters that have already been treated by standard methods. In this article, a new data-driven approach for predicting the physical behavior of the product and setting important parameters is presented.
We demonstrate how the prediction quality of the roll force and roll torque can be optimized sustainably. This offers the opportunity to continuously increase the workload in each pass to the theoretical maximum while product quality and process stability can also be improved.
Cyclone separators are popular devices used to filter dust from the emitted flue gases. They are applied as pre-filters in many industrial processes including energy production and grain processing facilities.
Increasing computational power and the availability of 3D printers provide new tools for the combination of modeling and experimentation, which is necessary for constructing efficient cyclones. Several simulation tools can be run in parallel, e.g., long running CFD simulations can be accompanied by experiments with 3D printers. Furthermore, results from analytical and data-driven models can be incorporated. There are fundamental differences between these modeling approaches: some models, e.g., analytical models, use domain knowledge, whereas data-driven models do not require any information about the underlying processes.
At the same time, data-driven models require input and output data, whereas analytical models do not. Combining results from models with different input-output structure is of great interest. This combination inspired the development of a new methodology. An optimization via multimodel simulation approach, which combines results from different models, is introduced.
Using cyclonic dust separators (cyclones) as a real-world simulation problem, the feasibility of this approach is demonstrated. Pros and cons of this approach are discussed and experiences from the experiments are presented.
Furthermore, technical problems, which are related to 3D-printing approaches, are discussed.
Increasing computational power and the availability of 3D printers provide new tools for the combination of modeling and experimentation. Several simulation tools can be run independently and in parallel, e.g., long running computational fluid dynamics simulations can be accompanied by experiments with 3D printers. Furthermore, results from analytical and data-driven models can be incorporated. However, there are fundamental differences between these modeling approaches: some models, e.g., analytical models, use domain knowledge, whereas data-driven models do not require any information about the underlying processes.
At the same time, data-driven models require input and output data, but analytical models do not. Combining results from models with different input-output structures might improve and accelerate the optimization process. The optimization via multimodel simulation (OMMS) approach, which is able to combine results from these different models, is introduced in this paper.
Using cyclonic dust separators as a real-world simulation problem, the feasibility of this approach is demonstrated and a proof-of-concept is presented. Cyclones are popular devices used to filter dust from the emitted flue gases. They are applied as pre-filters in many industrial processes including energy production and grain processing facilities. Pros and cons of this multimodel optimization approach are discussed and experiences from experiments are presented.
The performance of optimization algorithms relies crucially on their parameterizations. Finding good parameter settings is called algorithm tuning. Using a simple simulated annealing algorithm, we demonstrate how optimization algorithms can be tuned using the Sequential Parameter Optimization Toolbox (SPOT). SPOT provides several tools for automated and interactive tuning. The underlying concepts of the SPOT approach are explained, including key techniques such as exploratory fitness landscape analysis and response surface methodology. Many examples illustrate how SPOT can be used for understanding the performance of algorithms and gaining insight into algorithm behavior. Furthermore, we demonstrate how SPOT can be used as an optimizer and how a sophisticated ensemble approach is able to combine several metamodels via stacking.
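SPOT itself is an R package; as a hedged stand-in, the following sketch shows the underlying sequential model-based tuning loop in Python: the start temperature of a small simulated annealing algorithm is tuned by evaluating an initial design, fitting a Gaussian process response surface, and proposing the next candidate from that surface. All functions, budgets, and settings are illustrative assumptions, not the SPOT defaults.

```python
# Hedged stand-in for SPOT-style tuning: a tiny sequential model-based loop
# tunes the start temperature of a simple simulated annealing (SA) algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(6)

def sphere(x):
    return float(np.sum(x ** 2))

def simulated_annealing(temp0, budget=300, dim=5):
    """Very small SA on the sphere function; returns the best value found."""
    x = rng.uniform(-5, 5, size=dim)
    fx = best = sphere(x)
    temp = temp0
    for _ in range(budget):
        cand = x + rng.normal(0, 0.5, size=dim)
        fc = sphere(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
        best = min(best, fx)
        temp *= 0.99                                  # geometric cooling
    return best

def tune_objective(temp0, repeats=5):                 # average over noisy runs
    return float(np.mean([simulated_annealing(temp0) for _ in range(repeats)]))

# initial design over the tuning parameter, then a few model-based iterations
temps = list(np.linspace(0.1, 20.0, 5))
perf = [tune_objective(t) for t in temps]
for _ in range(10):
    gp = GaussianProcessRegressor(alpha=1e-3, normalize_y=True)
    gp.fit(np.array(temps).reshape(-1, 1), perf)
    grid = np.linspace(0.1, 20.0, 200).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    nxt = float(grid[np.argmin(mu - sd), 0])          # lower-confidence-bound proposal
    temps.append(nxt)
    perf.append(tune_objective(nxt))

print("best start temperature found:", temps[int(np.argmin(perf))])
```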
Surrogate-assisted optimization has proven to be very successful when applied to industrial problems. The use of a data-driven surrogate model of an objective function during an optimization cycle has many benefits, such as being cheap to evaluate and providing information about both the objective landscape and the parameter space. In preliminary work, it was investigated how surrogate-assisted optimization can help to optimize the structure of a neural network (NN) controller. In this work, we focus on how surrogates can help to improve the direct learning process of a transparent feed-forward neural network controller. As an initial case study we consider a manageable real-world control task: the elevator supervisory group control (ESGC) problem using a simplified simulation model. We use this model as a benchmark which should indicate the applicability and performance of surrogate-assisted optimization for this kind of task. While the optimization process itself is in this case not considered expensive, the results show that surrogate-assisted optimization is capable of outperforming metaheuristic optimization methods for a low number of evaluations. Furthermore, the surrogate can be used for significance analysis of the inputs and weighted connections to further exploit problem information.
As the amount of data gathered by monitoring systems increases, using computational tools to analyze it becomes a necessity. Machine learning algorithms can be used in both regression and classification problems, providing useful insights while avoiding human bias and proneness to error. In this paper, a specific kind of decision tree algorithm, called conditional inference tree, is used to extract relevant knowledge from data that pertains to electrical motors. The model is chosen due to its flexibility, strong statistical foundation, and great capability to generalize and cope with problems in the data. The obtained knowledge is organized in a structured way and then analyzed in the context of health condition monitoring. The final results illustrate how the approach can be used to gain insight into the system and present the results in an understandable, user-friendly manner.
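Conditional inference trees are typically fitted with R's partykit::ctree; the following hedged sketch uses a plain scikit-learn decision tree as a stand-in, only to illustrate how tree rules can be extracted as human-readable knowledge from synthetic motor monitoring data. The feature names, failure rule, and all thresholds are invented for the example.

```python
# Hedged stand-in: rule extraction from a decision tree on synthetic motor data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 500
vibration = rng.gamma(2.0, 1.0, size=n)
temperature = rng.normal(60, 8, size=n)
# hypothetical failure rule: high vibration together with high temperature
faulty = ((vibration > 3.0) & (temperature > 65)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=7)
tree.fit(np.column_stack([vibration, temperature]), faulty)

# human-readable rules as a simple form of knowledge extraction
print(export_text(tree, feature_names=["vibration", "temperature"]))
```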
Fiber composite materials (FVW) and composites are of great importance in the aerospace industry, in automotive engineering, in the construction of wind turbines, and in many other forward-looking industries. Measures that allow damage to be detected as it develops and that can predict remaining operating times are suitable for increasing the service life of FVW structures. In addition, they enable condition-oriented and thus cost-effective maintenance of these components.
Both the prognosis and the detection of damage would enable a resource-saving use of this group of materials. In this context, structural health monitoring (SHM) denotes a method that makes it possible to continuously obtain indications of the functional capability of components and structures.
This article describes the planning, execution, and analysis of SHM experiments. The main objective was the design of experiments for acquiring measurement data by means of piezoelectric elements on test panels into which structural damage was deliberately introduced. Statistical analysis methods are to be tested for their suitability to draw conclusions about the type of structural damage from the experimentally obtained data.
When designing or developing optimization algorithms, test functions are crucial to evaluate performance. Often, test functions are not sufficiently difficult, diverse, flexible, or relevant to real-world applications. Previously, test functions with real-world relevance were generated by training a machine learning model based on real-world data, and the model's estimation was used as a test function. We propose a more principled approach using simulation instead of estimation. Thus, relevant and varied test functions are created which represent the behavior of real-world fitness landscapes. Importantly, estimation can lead to excessively smooth test functions, while simulation may avoid this pitfall. Moreover, the simulation can be conditioned on the data, so that the simulation reproduces the training data but features diverse behavior in unobserved regions of the search space. The proposed test function generator is illustrated with an intuitive, one-dimensional example. To demonstrate the utility of this approach, it is applied to a protein sequence optimization problem. This application demonstrates the advantages as well as practical limits of simulation-based test functions.
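The estimation-versus-simulation distinction can be made concrete with a small, hedged one-dimensional sketch: a Gaussian process is fitted to a few observations; its posterior mean (estimation) yields a single smooth test function, while conditional sample paths (simulation) also reproduce the training data but vary between observations, yielding several diverse test functions. The data and kernel below are assumptions, not the protein sequence application from the abstract.

```python
# Hedged 1-D sketch: estimation (GP posterior mean) vs. simulation
# (conditional GP sample paths) as generated test functions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(8)
X_train = np.sort(rng.uniform(0, 10, size=8)).reshape(-1, 1)   # few observations
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=8)

gp = GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=1e-4).fit(X_train, y_train)
X_grid = np.linspace(0, 10, 500).reshape(-1, 1)

estimated = gp.predict(X_grid)                                  # one smooth function
simulated = gp.sample_y(X_grid, n_samples=3, random_state=8)    # three conditioned paths

# each column of `simulated` is a data-conditioned sample path usable as a test function
print("estimation (mean) range:", estimated.min(), estimated.max())
print("simulated test function ranges:", simulated.min(axis=0), simulated.max(axis=0))
```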