We propose to apply typed Genetic Programming (GP) to the problem of finding surrogate-model ensembles for global optimization on compute-intensive target functions. In a model ensemble, base models such as linear models, random forest models, or Kriging models, as well as pre- and post-processing methods, are combined. In theory, an optimal ensemble joins the strengths of its constituent base models while avoiding their weaknesses, offering higher prediction accuracy and robustness. This study defines a grammar of model-ensemble expressions and searches the resulting set for optimal ensembles via GP. We performed an extensive experimental study based on 10 different objective functions and 2 sets of base models. We arrive at promising results: on unseen test data, our ensembles do not perform significantly worse than the best base model.
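The grammar-based ensemble idea can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the system from the study: two toy base models on 1-D data and the tiny grammar E ::= base-model | average(E, E), sampled at random rather than evolved by GP.

```python
import random

def linear_model(xs, ys):
    # least-squares fit y = a*x + b on 1-D training data
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def nearest_neighbour(xs, ys):
    # predict the y of the closest training point
    return lambda x: min(zip(xs, ys), key=lambda p: abs(p[0] - x))[1]

BASE_MODELS = [linear_model, nearest_neighbour]

def random_ensemble(xs, ys, depth=2):
    # grammar: E ::= base-model | average(E, E)
    if depth == 0 or random.random() < 0.5:
        return random.choice(BASE_MODELS)(xs, ys)
    left = random_ensemble(xs, ys, depth - 1)
    right = random_ensemble(xs, ys, depth - 1)
    return lambda x: 0.5 * (left(x) + right(x))
```

A GP system would search expressions of this grammar by fitness (prediction error) instead of sampling them uniformly, but the evaluation of an ensemble expression works exactly as above.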
Sequential Parameter Optimization is a model-based optimization methodology which includes several techniques for handling uncertainty. Simple approaches such as sharpening and more sophisticated approaches such as optimal computing budget allocation are available. For many real-world engineering problems, the objective function can be evaluated at different levels of fidelity. For instance, a CFD simulation might provide a very time-consuming but accurate way to estimate the quality of a solution. The same solution could be evaluated based on simplified mathematical equations, leading to a cheaper but less accurate estimate. Combining these different levels of fidelity in a model-based optimization process is referred to as multi-fidelity optimization. This chapter describes uncertainty-handling techniques for meta-model-based search heuristics in combination with multi-fidelity optimization. Co-Kriging is one powerful method to correlate multiple sets of data from different levels of fidelity. For the first time, Sequential Parameter Optimization with Co-Kriging is applied to noisy test functions. This study introduces these techniques and discusses how they can be applied to real-world examples.
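The core idea of correlating fidelity levels can be shown with a drastically simplified stand-in for Co-Kriging: an additive/multiplicative correction that approximates the expensive function as rho * f_lo(x) + delta, estimated from the few expensive samples. This is a hedged sketch of the general multi-fidelity principle, not the Co-Kriging model itself, which uses correlated Gaussian processes rather than a scalar correction.

```python
def fit_correction(f_lo, xs_hi, ys_hi):
    # Estimate a scaling rho and offset delta such that
    # rho * f_lo(x) + delta approximates the expensive responses ys_hi,
    # using a least-squares fit on the few expensive sample points.
    lo = [f_lo(x) for x in xs_hi]
    n = len(lo)
    ml, mh = sum(lo) / n, sum(ys_hi) / n
    rho = sum((l - ml) * (y - mh) for l, y in zip(lo, ys_hi)) / \
          sum((l - ml) ** 2 for l in lo)
    delta = mh - rho * ml
    return lambda x: rho * f_lo(x) + delta
```

If the cheap and expensive functions are strongly correlated, a handful of expensive evaluations suffices to calibrate the correction, and the cheap function carries the rest of the modeling burden.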
This final report describes the results achieved in the project "CI-basierte mehrkriterielle Optimierungsverfahren für Anwendungen in der Industrie" (CIMO; CI-based multi-criteria optimization methods for industrial applications) between November 2011 and October 2014. Suitable solution methods were developed for expensive optimization problems from industry. The focus was on methods from the fields of Computational Intelligence (CI) and surrogate modeling. These offer the possibility of addressing key challenges of expensive, complex optimization problems. The developed methods can take several conflicting objectives into account, integrate different hierarchy levels of the problem into the optimization, respect constraints, process vectorial as well as structured data (combinatorial optimization), and reduce the need for expensive or time-consuming objective function evaluations. The methods were primarily applied to a problem from power plant engineering, namely the optimization of the geometry of a cyclone dust separator, which filters dust from exhaust gases. The optimization problem posed by these cyclone separators leads to conflicting objectives (e.g., pressure loss and collection efficiency). Cyclones can be evaluated with expensive Computational Fluid Dynamics (CFD) simulations, but simple analytical equations are also available as estimates. Linking the two exemplifies how hierarchy levels of an optimization problem can be combined with the methods developed in the project. Beyond this main application, it was also shown that the methods can be applied successfully in many other areas: biogas production, water management, and the steel industry.
The particular challenges of the problems and methods addressed offer many important research opportunities for future projects, which are currently being prepared by the project partners.
Computational intelligence methods have gained importance in several real-world domains such as process optimization, system identification, data mining, or statistical quality control. However, tools that determine the applicability of computational intelligence methods in these application domains in an objective manner are missing. Statistics provides methods for comparing algorithms on certain data sets. In the past, several test suites were presented and considered state of the art. However, these test suites have several drawbacks, namely: (i) problem instances are somewhat artificial and have no direct link to real-world settings; (ii) since there is a fixed number of test instances, algorithms can be fitted or tuned to this specific and very limited set of test functions; (iii) statistical tools for comparing several algorithms on several test problem instances are relatively complex and not easy to analyze. We propose a methodology to overcome these difficulties. It is based on standard ideas from statistics: analysis of variance and its extension to mixed models. This work combines essential ideas from two approaches: problem generation and statistical analysis of computer experiments.
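The analysis-of-variance building block mentioned above can be sketched in a few lines. This is only the simple one-way case (one factor: the algorithm, with repeated runs as observations); the mixed-model extension for crossed algorithm and problem-instance factors goes beyond this illustration.

```python
def anova_f(groups):
    # One-way ANOVA F statistic: ratio of between-group variance
    # (do the algorithms differ?) to within-group variance (run noise).
    k = len(groups)                       # number of algorithms
    n = sum(len(g) for g in groups)       # total number of runs
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F value relative to the F distribution's critical value indicates that at least one algorithm's mean performance differs from the others.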
Evolutionary algorithm (EA) is an umbrella term used to describe population-based stochastic direct search algorithms that in some sense mimic natural evolution. Prominent representatives of such algorithms are genetic algorithms, evolution strategies, evolutionary programming, and genetic programming. On the basis of the evolutionary cycle, similarities and differences between these algorithms are described. We briefly discuss how EAs can be adapted to work well in the case of multiple objectives and dynamic or noisy optimization problems. We look at the tuning of algorithms and present some recent developments coming from theory. Finally, typical applications of EAs to real-world problems are shown, with special emphasis on data-mining applications.
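The evolutionary cycle common to these algorithms (parent selection, variation, survivor selection) can be illustrated with a minimal (mu+lambda) evolution strategy. All parameter values are illustrative defaults; real ES implementations additionally self-adapt the mutation strength sigma.

```python
import random

def evolution_strategy(f, dim=2, mu=5, lam=20, sigma=0.3,
                       generations=50, seed=None):
    # Minimal (mu+lambda)-ES: Gaussian mutation, then truncation
    # selection over the union of parents and offspring (elitist).
    rng = random.Random(seed)
    parents = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [
            [x + rng.gauss(0, sigma) for x in rng.choice(parents)]
            for _ in range(lam)
        ]
        pool = parents + offspring        # plus-selection keeps the elite
        pool.sort(key=f)                  # minimize f
        parents = pool[:mu]
    return parents[0]
```

Genetic algorithms, evolutionary programming, and genetic programming instantiate the same cycle with different representations (bit strings, finite-state machines, program trees) and variation operators.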
Learning board games by self-play has a long tradition in computational intelligence for games. Following Tesauro's seminal success with TD-Gammon in 1994, many successful agents use temporal difference learning today. But in order to be successful with temporal difference learning on game tasks, a careful selection of features and a large number of training games are often necessary. Even for board games of moderate complexity like Connect-4, we found in previous work that a very rich initial feature set and several million game plays are required. In this work we investigate whether online-adaptable learning rates such as Incremental Delta Bar Delta (IDBD) or Temporal Coherence Learning (TCL) have the potential to speed up learning for such a complex task. We propose a new variant of TCL with geometric step-size changes. We compare these algorithms with several other state-of-the-art learning-rate adaptation algorithms and perform a case study on their sensitivity with respect to their meta parameters. We show that, within this set of learning algorithms, those with geometric step-size changes outperform those with constant step-size changes. Algorithms with nonlinear output functions are slightly better than linear ones. Algorithms with geometric step-size changes learn faster by a factor of 4 compared to previously published results on the task Connect-4.
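One common formulation of the TCL idea can be sketched as follows: each weight keeps a counter N_i of signed and A_i of absolute recommended changes, and its individual learning rate |N_i| / A_i shrinks wherever successive updates point in conflicting directions. This is a hedged sketch of that basic counter scheme only; the geometric step-size variant proposed in the paper is not reproduced here.

```python
def tcl_update(w, N, A, features, delta, alpha0=1.0):
    # Temporal Coherence Learning step for a linear value function.
    # delta is the TD error; features are the active feature values.
    for i, x in enumerate(features):
        r = delta * x                     # recommended change for weight i
        if r != 0.0:
            lr = alpha0 if A[i] == 0.0 else abs(N[i]) / A[i]
            w[i] += lr * r
            N[i] += r                     # signed accumulator
            A[i] += abs(r)                # absolute accumulator
```

Consistent update directions keep |N_i| close to A_i and hence the rate near alpha0, while oscillating updates drive the rate toward zero, damping noisy weights.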
SOMA - Systematic Optimization of Models in IT and Automation Technology (Final Report)
(2013)
The research project Systematic Optimization of Models for Information and Automation Technology (SOMA), funded within the IngenieurNachwuchs programme, started in August 2009. A central goal was the development and optimization of models for predicting target quantities. An important aspect is the efficient optimization of these models, which should make it possible to determine good parameter settings with a strictly limited number of evaluations. With these more accurate parameterizations of the underlying models, and by incorporating new feature-generating methods, improved solutions can be achieved, in particular for small and medium-sized enterprises. As a direct benefit of such improvements, a suitable framework for modeling and prediction tasks was provided for SMEs, so that performant and near-optimal solutions can be obtained with little technical and personnel effort. This final report describes the measures carried out in the project and the results achieved.
An essential task for the operation and planning of biogas plants is the optimization of substrate feed mixtures. Optimizing the monetary gain requires determining the exact amounts of maize, manure, grass silage, and other substrates. Accurate simulation models are mandatory for this optimization, because the underlying chemical processes are very slow. The simulation models themselves may be time-consuming to evaluate, hence we show how surrogate-model-based approaches can be used to optimize biogas plants efficiently. In detail, a Kriging surrogate is employed. To improve the model quality of this surrogate, we integrate cheaply available data into the optimization process, using multi-fidelity modeling methods such as Co-Kriging. Furthermore, a two-layered modeling approach avoids the deterioration of model quality due to discontinuities in the search space. At the same time, the cheaply available data proves very useful for initializing the optimization algorithms. Overall, we show how biogas plants can be efficiently modeled using data-driven methods, avoiding discontinuities as well as including cheaply available data. Applying the derived surrogate models in an optimization process proves difficult, yet successful for a lower problem dimension.
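The role of cheap data in such a loop can be sketched with a drastically simplified stand-in for the Kriging/Co-Kriging machinery: screen many candidate feed settings with a cheap low-fidelity function and spend the scarce expensive simulator evaluations only on the most promising candidate of each round. All names and the screening scheme are illustrative assumptions, not the paper's method.

```python
import random

def screened_optimize(expensive_f, cheap_f, bounds, budget=10,
                      candidates=200):
    # Each round: sample many candidates, rank them with the cheap
    # low-fidelity function, evaluate only the best one expensively.
    lo, hi = bounds
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        cands = [random.uniform(lo, hi) for _ in range(candidates)]
        x = min(cands, key=cheap_f)       # cheap pre-screening
        y = expensive_f(x)                # one expensive evaluation
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

A surrogate-based optimizer replaces the static cheap function with a model refitted to all expensive evaluations so far, so the screening improves as the budget is spent.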
Multi-criteria optimization has gained increasing attention during the last decades. This article exemplifies the multi-criteria features implemented in the statistical software package SPOT. It describes related software packages such as mco and emoa and gives a comprehensive introduction to simple multi-criteria optimization tasks. Several hands-on examples are used for illustration. The article is well suited as a starting point for performing multi-criteria optimization tasks with SPOT.
RGP is a genetic programming system based on, and fully integrated into, the R environment. The system implements classical tree-based genetic programming as well as other variants, including strongly typed genetic programming and Pareto genetic programming. It strives for high modularity through a consistent architecture that allows the customization and replacement of every algorithm component, while maintaining accessibility for new users by adhering to the "convention over configuration" principle.