Cyclone separators are popular devices used to filter dust from the emitted flue gases. They are applied as pre-filters in many industrial processes including energy production and grain processing facilities.
Increasing computational power and the availability of 3D printers provide new tools for combining modeling and experimentation, which is necessary for constructing efficient cyclones. Several simulation tools can be run in parallel, e.g., long-running CFD simulations can be accompanied by experiments with 3D printers. Furthermore, results from analytical and data-driven models can be incorporated. There are fundamental differences between these modeling approaches: some models, e.g., analytical models, use domain knowledge, whereas data-driven models do not require any information about the underlying processes.
At the same time, data-driven models require input and output data, whereas analytical models do not. Combining results from models with different input-output structures is of great interest. This combination inspired the development of a new methodology. An optimization via multimodel simulation approach, which combines results from different models, is introduced.
Using cyclonic dust separators (cyclones) as a real-world simulation problem, the feasibility of this approach is demonstrated. Pros and cons of this approach are discussed and experiences from the experiments are presented.
Furthermore, technical problems related to 3D-printing approaches are discussed.
Increasing computational power and the availability of 3D printers provide new tools for the combination of modeling and experimentation. Several simulation tools can be run independently and in parallel, e.g., long-running computational fluid dynamics simulations can be accompanied by experiments with 3D printers. Furthermore, results from analytical and data-driven models can be incorporated. However, there are fundamental differences between these modeling approaches: some models, e.g., analytical models, use domain knowledge, whereas data-driven models do not require any information about the underlying processes.
At the same time, data-driven models require input and output data, but analytical models do not. Combining results from models with different input-output structures might improve and accelerate the optimization process. The optimization via multimodel simulation (OMMS) approach, which is able to combine results from these different models, is introduced in this paper.
Using cyclonic dust separators as a real-world simulation problem, the feasibility of this approach is demonstrated and a proof-of-concept is presented. Cyclones are popular devices used to filter dust from the emitted flue gases. They are applied as pre-filters in many industrial processes including energy production and grain processing facilities. Pros and cons of this multimodel optimization approach are discussed and experiences from experiments are presented.
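The OMMS implementation itself is not given in the abstract. As a minimal, hedged sketch of the underlying idea of combining an analytical model (built from domain knowledge, needing no data) with a data-driven correction (fitted to input-output data), the following Python snippet fits the residual of a hypothetical analytical model with a least-squares line; all function names and the quadratic "physics" are invented for illustration:

```python
def analytical_model(x):
    # Domain-knowledge model (hypothetical formula, e.g., a pressure-drop law).
    return 2.0 * x ** 2

def true_process(x):
    # Stand-in for the real system the analytical model only approximates.
    return 2.0 * x ** 2 + 0.5 * x + 0.1

# Data-driven part: fit the residual (true minus analytical) with a line.
samples = [0.0, 0.25, 0.5, 0.75, 1.0]
resid = [true_process(x) - analytical_model(x) for x in samples]
n = len(samples)
mx = sum(samples) / n
my = sum(resid) / n
slope = (sum((x - mx) * (r - my) for x, r in zip(samples, resid))
         / sum((x - mx) ** 2 for x in samples))
intercept = my - slope * mx

def combined_model(x):
    """Analytical model plus data-driven residual correction."""
    return analytical_model(x) + slope * x + intercept

print(combined_model(0.6), true_process(0.6))
```

Because the toy residual happens to be exactly linear, the combined model reproduces the "true" process; in practice the correction only reduces, not removes, the analytical model's bias.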
The performance of optimization algorithms relies crucially on their parameterizations. Finding good parameter settings is called algorithm tuning. Using a simple simulated annealing algorithm, we will demonstrate how optimization algorithms can be tuned using the Sequential Parameter Optimization Toolbox (SPOT). SPOT provides several tools for automated and interactive tuning. The underlying concepts of the SPOT approach are explained. This includes key techniques such as exploratory fitness landscape analysis and response surface methodology. Many examples illustrate how SPOT can be used for understanding the performance of algorithms and gaining insight into algorithm behavior. Furthermore, we demonstrate how SPOT can be used as an optimizer and how a sophisticated ensemble approach is able to combine several meta models via stacking.
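SPOT itself is an R package; the following stdlib-Python sketch only illustrates the tuning idea, assuming a simple simulated annealing solver on the sphere function and a plain comparison of candidate (temperature, cooling-rate) settings by repeated runs, rather than SPOT's model-based sequential search:

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def simulated_annealing(f, dim, temp, alpha, steps, rng):
    """Minimize f with basic SA: Gaussian moves, geometric cooling."""
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(steps):
        y = [xi + rng.gauss(0, 1) for xi in x]
        fy = f(y)
        # Accept improvements always, worsenings with Metropolis probability.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
        temp *= alpha
    return fx

def tune(configs, repeats=10, seed=1):
    """Rank (temperature, cooling rate) settings by mean final objective."""
    rng = random.Random(seed)
    results = []
    for temp, alpha in configs:
        scores = [simulated_annealing(sphere, 5, temp, alpha, 200, rng)
                  for _ in range(repeats)]
        results.append((sum(scores) / repeats, temp, alpha))
    return min(results)  # best mean performance over repeats

best = tune([(10.0, 0.80), (10.0, 0.95), (1.0, 0.99)])
print(best)
```

Replacing the brute-force comparison with a surrogate over the parameter space is exactly the step SPOT automates.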
Surrogate-assisted optimization has proven to be very successful when applied to industrial problems. The use of a data-driven surrogate model of an objective function during an optimization cycle has many benefits, such as being cheap to evaluate and additionally providing information about both the objective landscape and the parameter space. In preliminary work, it was investigated how surrogate-assisted optimization can help to optimize the structure of a neural network (NN) controller. In this work, we focus on how surrogates can help to improve the direct learning process of a transparent feed-forward neural network controller. As an initial case study, we consider a manageable real-world control task: the elevator supervisory group control (ESGC) problem, using a simplified simulation model. We use this model as a benchmark, which should indicate the applicability and performance of surrogate-assisted optimization for this kind of task. While the optimization process itself is in this case not considered expensive, the results show that surrogate-assisted optimization is capable of outperforming metaheuristic optimization methods for a low number of evaluations. Furthermore, the surrogate can be used for significance analysis of the inputs and weighted connections to further exploit problem information.
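A minimal sketch of the surrogate-assisted loop the abstract describes, with an inverse-distance-weighted interpolator standing in for the Kriging or ensemble surrogates typically used, and a cheap quadratic standing in for the controller simulation (all names hypothetical):

```python
import random

def expensive(x):
    # Stand-in for an expensive simulation (here: a cheap quadratic).
    return (x - 0.3) ** 2

def surrogate_predict(archive, x):
    """Inverse-distance-weighted prediction from evaluated points."""
    num = den = 0.0
    for xi, yi in archive:
        d = abs(x - xi)
        if d < 1e-12:
            return yi          # exact interpolation at evaluated points
        w = 1.0 / d ** 2
        num += w * yi
        den += w
    return num / den

def surrogate_assisted_minimize(f, budget=15, seed=0):
    rng = random.Random(seed)
    # Initial design: a handful of real evaluations.
    archive = [(x, f(x)) for x in (rng.uniform(-1, 1) for _ in range(5))]
    while len(archive) < budget:
        # Screen many candidates cheaply on the surrogate ...
        cands = [rng.uniform(-1, 1) for _ in range(200)]
        best = min(cands, key=lambda x: surrogate_predict(archive, x))
        # ... and spend a real evaluation only on the most promising one.
        archive.append((best, f(best)))
    return min(archive, key=lambda p: p[1])

x_best, y_best = surrogate_assisted_minimize(expensive)
print(x_best, y_best)
```

The budget of real evaluations is fixed; all exploration beyond it happens on the cheap surrogate, which is the source of the low-evaluation-count advantage claimed above.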
As the amount of data gathered by monitoring systems increases, using computational tools to analyze it becomes a necessity.
Machine learning algorithms can be used in both regression and classification problems, providing useful insights while avoiding the bias and proneness to error of humans. In this paper, a specific kind of decision tree algorithm, called a conditional inference tree, is used to extract relevant knowledge from data that pertains to electrical motors. The model is chosen due to its flexibility and strong statistical foundation, as well as its great capability to generalize and cope with problems in the data. The obtained knowledge is organized in a structured way and then analyzed in the context of health condition monitoring. The final results illustrate how the approach can be used to gain insight into the system and present the results in an understandable, user-friendly manner.
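Conditional inference trees (as in R's ctree) select splits via permutation tests rather than raw impurity gain. The toy Python stump below, which picks the best single threshold and then checks it with a permutation test, is a heavily simplified stand-in; the motor data shown is invented for illustration:

```python
import random
from statistics import mean

def split_score(xs, ys, t):
    """Sum of squared errors around group means for the split x <= t."""
    left = [y for x, y in zip(xs, ys) if x <= t]
    right = [y for x, y in zip(xs, ys) if x > t]
    if not left or not right:
        return float("inf")
    return (sum((y - mean(left)) ** 2 for y in left)
            + sum((y - mean(right)) ** 2 for y in right))

def stump(xs, ys):
    """Best single split threshold on one feature (lower SSE is better)."""
    return min(set(xs), key=lambda t: split_score(xs, ys, t))

def permutation_p_value(xs, ys, t, n_perm=500, seed=0):
    """How often does a random relabeling split as well as the real one?"""
    rng = random.Random(seed)
    observed = split_score(xs, ys, t)
    ys = list(ys)  # work on a copy; shuffle repeatedly
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if split_score(xs, ys, t) <= observed:
            hits += 1
    return hits / n_perm

# Hypothetical motor data: bearing temperature vs. vibration level.
temps = [40, 42, 45, 47, 60, 62, 65, 68]
vibr = [0.1, 0.2, 0.1, 0.2, 0.9, 1.0, 0.8, 1.1]
t = stump(temps, vibr)
p = permutation_p_value(temps, vibr, t)
print(t, p)
```

A real conditional inference tree would recurse on each branch and stop where the permutation test no longer rejects; the single split here only demonstrates the statistical gatekeeping idea.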
Fiber-reinforced materials (FVW) and composites are of great importance in the aerospace industry, in automotive engineering, in the construction of wind turbines, and in many other forward-looking sectors. Measures that allow damage to be detected as it occurs and remaining operating times to be predicted are suitable for increasing the service life of FVW structures. In addition, condition-based and thus cost-effective maintenance of these components becomes possible.
Both the prediction and the detection of damage would enable the resource-efficient use of this group of materials. In this context, so-called Structural Health Monitoring (SHM) denotes a method that makes it possible to continuously obtain indications of the operability of components and structures.
This article describes the planning, execution, and analysis of SHM experiments. The main goal was to design experiments for acquiring measurement data by means of piezoelectric elements on test panels into which structural damage was deliberately introduced. Statistical analysis methods are to be tested for their suitability to draw conclusions from the experimentally obtained data about the type of structural damage.
When designing or developing optimization algorithms, test functions are crucial to evaluate performance. Often, test functions are not sufficiently difficult, diverse, flexible, or relevant to real-world applications. Previously, test functions with real-world relevance were generated by training a machine learning model based on real-world data; the model estimation is then used as a test function. We propose a more principled approach using simulation instead of estimation. Thus, relevant and varied test functions are created which represent the behavior of real-world fitness landscapes. Importantly, estimation can lead to excessively smooth test functions, while simulation may avoid this pitfall. Moreover, the simulation can be conditioned on the data, so that the simulation reproduces the training data but features diverse behavior in unobserved regions of the search space. The proposed test function generator is illustrated with an intuitive, one-dimensional example. To demonstrate the utility of this approach, it is applied to a protein sequence optimization problem. This application demonstrates the advantages as well as the practical limits of simulation-based test functions.
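A minimal one-dimensional sketch of the estimation-vs-simulation distinction, assuming a Gaussian process with an RBF kernel: the conditional simulation below interpolates the training data but, unlike the smooth conditional mean (the "estimation"), varies from seed to seed in unobserved regions. All numbers are illustrative, and the dense-matrix linear algebra is written out only to keep the snippet dependency-free:

```python
import math
import random

def rbf(a, b, length=0.3):
    # Squared-exponential kernel for 1-D inputs.
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def cholesky(A):
    # Lower-triangular L with A = L L^T (guarded against tiny negatives).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(max(A[i][i] - s, 1e-12))
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_chol(L, b):
    # Solve A x = b given A = L L^T via forward/backward substitution.
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

def gp_simulate(X, y, Xs, seed=0, nugget=1e-8):
    """Draw one GP sample path conditioned on data (X, y) at points Xs."""
    rng = random.Random(seed)
    n, m = len(X), len(Xs)
    K = [[rbf(X[i], X[j]) + (nugget if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    L = cholesky(K)
    alpha = solve_chol(L, y)
    Ks = [[rbf(Xs[i], X[j]) for j in range(n)] for i in range(m)]
    mu = [sum(Ks[i][j] * alpha[j] for j in range(n)) for i in range(m)]
    V = [solve_chol(L, Ks[i]) for i in range(m)]
    # Conditional covariance at the new points (small jitter on the diagonal).
    C = [[rbf(Xs[i], Xs[j]) - sum(Ks[i][p] * V[j][p] for p in range(n))
          + (1e-6 if i == j else 0.0) for j in range(m)] for i in range(m)]
    Lc = cholesky(C)
    z = [rng.gauss(0.0, 1.0) for _ in range(m)]
    return [mu[i] + sum(Lc[i][k] * z[k] for k in range(i + 1)) for i in range(m)]

X = [0.0, 0.5, 1.0]            # observed inputs
y = [0.0, 0.8, 0.2]            # observed outputs
Xs = [i / 10 for i in range(11)]
path = gp_simulate(X, y, Xs, seed=1)
path2 = gp_simulate(X, y, Xs, seed=2)  # a second, different test function
print(path)
```

Each seed yields a different interpolating function, so one data set can spawn a whole family of related but non-identical test functions.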
Contamination in the water distribution network can directly endanger large parts of the population. Potential hazards arise not only from possible criminal acts and terrorist attacks; operational disruptions, system failures, and natural disasters can also lead to contamination.
Architectural approaches are considered to simplify the generation of reusable building blocks in the field of data warehousing. While SAP's Layered Scalable Architecture (LSA) offers a reference model for creating data warehousing infrastructure based on SAP software, extended reference models are needed to guide the integration of SAP and non-SAP tools. Therefore, SAP's LSA is compared to the Data Warehouse Architectural Reference Model (DWARM), which aims to cover the classical data warehouse topologies.
Surrogate-based optimization and nature-inspired metaheuristics have become the state of the art in solving real-world optimization problems. Still, it is difficult for beginners and even experts to get an overview that explains their advantages in comparison to the large number of available methods in the scope of continuous optimization. Available taxonomies lack the integration of surrogate-based approaches and thus their embedding in the larger context of this broad field.
This article presents a taxonomy of the field that also matches the idea of nature-inspired algorithms, as it is based on human behavior in path finding. Intuitive analogies make it easy to grasp the most basic principles of the search algorithms, even for beginners and non-experts in this area of research. However, this scheme does not oversimplify the high complexity of the different algorithms, as the class identifier only defines a descriptive meta-level of the algorithms' search strategies. The taxonomy was established by exploring and matching algorithm schemes, extracting similarities and differences, and creating a set of classification indicators to distinguish between five distinct classes. In practice, this taxonomy allows recommendations regarding the applicability of the corresponding algorithms and helps developers trying to create or improve their own algorithms.