Cyclone separators are popular devices used to filter dust from the emitted flue gases. They are applied as pre-filters in many industrial processes including energy production and grain processing facilities.
Increasing computational power and the availability of 3D printers provide new tools for combining modeling and experimentation, which is necessary for constructing efficient cyclones. Several simulation tools can be run in parallel; e.g., long-running CFD simulations can be accompanied by experiments with 3D printers. Furthermore, results from analytical and data-driven models can be incorporated. There are fundamental differences between these modeling approaches: some models, e.g., analytical models, use domain knowledge, whereas data-driven models do not require any information about the underlying processes.
At the same time, data-driven models require input and output data, whereas analytical models do not. Combining results from models with different input-output structures is of great interest and inspired the development of a new methodology: an optimization-via-multimodel-simulation approach, which combines results from different models, is introduced.
Using cyclonic dust separators (cyclones) as a real-world simulation problem, the feasibility of this approach is demonstrated. Pros and cons of this approach are discussed and experiences from the experiments are presented.
Furthermore, technical problems, which are related to 3D-printing approaches, are discussed.
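As a rough illustration of the multimodel idea (not the models used in the study), the following Python sketch blends a placeholder analytical pressure-drop estimate with a data-driven surrogate trained on stand-in CFD results; the formula, variable ranges, and blending weight are hypothetical assumptions.

```python
# Minimal sketch: blend an analytical estimate with a data-driven surrogate.
# All formulas, ranges, and weights below are placeholders, not the paper's models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def analytical_pressure_drop(diameter, inlet_velocity):
    """Placeholder analytical estimate (domain knowledge, no training data needed)."""
    return 0.5 * 1.2 * inlet_velocity**2 * (4.0 / diameter)   # hypothetical relation

# Data-driven surrogate trained on stand-in "CFD" results.
rng = np.random.default_rng(1)
X_cfd = rng.uniform([0.2, 10.0], [0.6, 25.0], size=(30, 2))   # diameter, inlet velocity
y_cfd = np.array([analytical_pressure_drop(d, v) * rng.normal(1.0, 0.05)
                  for d, v in X_cfd])                          # stand-in for CFD output
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_cfd, y_cfd)

def multimodel_estimate(diameter, velocity, w=0.5):
    """Blend both model types; in practice w would reflect their relative fidelity."""
    return (w * analytical_pressure_drop(diameter, velocity)
            + (1 - w) * surrogate.predict([[diameter, velocity]])[0])

print(multimodel_estimate(0.4, 18.0))
```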
This survey compiles ideas and recommendations from more than a dozen researchers with different backgrounds and from different institutes around the world. Promoting best practice in benchmarking is its main goal. The article discusses eight essential topics in benchmarking: clearly stated goals, well-specified problems, suitable algorithms, adequate performance measures, thoughtful analysis, effective and efficient designs, comprehensible presentations, and guaranteed reproducibility. The final goal is to provide well-accepted guidelines (rules) that might be useful for authors and reviewers. As benchmarking in optimization is an active and evolving field of research, this manuscript is meant to co-evolve over time by means of periodic updates.
When researchers and practitioners in the field of computational intelligence are confronted with real-world problems, the question arises which method is best to apply. Nowadays, several well-established test suites and well-known artificial benchmark functions are available. However, the relevance and applicability of these methods to real-world problems remains an open question in many situations, and their generalizability cannot be taken for granted. This paper describes a data-driven approach for the generation of test instances that is based on real-world data. The test instance generation uses data pre-processing, feature extraction, modeling, and parameterization. We apply this methodology to a classical design-of-experiments real-world project and generate test instances for benchmarking, e.g., design methods, surrogate techniques, and optimization algorithms. While the data behind most published results of methods applied to real-world problems are not available for comparison, our future goal is to create a toolbox covering multiple data sets from real-world projects in order to provide a test function generator to the research community.
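To illustrate how such a generator might look, the sketch below fits a Gaussian process to stand-in "real-world" data and exposes its mean prediction as a test instance for an optimizer; the data, kernel, and optimizer choice are assumptions, not the pipeline from the paper.

```python
# Simplified sketch: fit a model to observed data and use its prediction
# function as a synthetic benchmark objective. Data are random stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X_real = rng.uniform(-5, 5, size=(40, 2))            # pre-processed "real-world" inputs
y_real = np.sin(X_real[:, 0]) + 0.1 * X_real[:, 1]**2 + rng.normal(0, 0.1, 40)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_real, y_real)

def test_instance(x: np.ndarray) -> float:
    """Generated test function: the model's mean prediction at x."""
    return float(gp.predict(np.atleast_2d(x))[0])

# Any optimizer can now be benchmarked on `test_instance`, for example:
result = differential_evolution(test_instance, bounds=[(-5, 5), (-5, 5)], seed=1)
print(result.x, result.fun)
```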
Data pre-processing is a key research topic in data mining because it plays a crucial role in improving the accuracy of any data mining algorithm. In most real-world cases, a significant amount of the recorded data is missing due to a wide variety of errors, and this loss of data is nearly always unavoidable. Recovery of missing data plays a vital role in avoiding inaccurate data mining decisions. Most multivariate imputation methods are not applicable to univariate datasets, and the traditional univariate imputation techniques become highly biased as the gap of missing data increases. With the current technological advancements, abundant data is being captured every second. Hence, we intend to develop a new algorithm that enables maximum utilization of the available big datasets for imputation. In this paper, we present a Seasonal Moving Window Algorithm based on Seasonal and Trend decomposition using Loess (STL), which is capable of handling patterns with trend as well as cyclic characteristics. We show that the algorithm is highly suitable for the pre-processing of large datasets.
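A minimal sketch of the seasonal-window part of the idea is given below; it fills each gap from observations at the same seasonal position in neighboring cycles and deliberately omits the STL decomposition step, so it is not the proposed algorithm itself. The function name, period, and example data are hypothetical.

```python
# Illustrative seasonal-window imputation (without the STL step of the paper).
import numpy as np
import pandas as pd

def seasonal_window_impute(series: pd.Series, period: int, window: int = 3) -> pd.Series:
    """Fill each missing value with the mean of observations at the same
    seasonal position in the `window` nearest cycles."""
    values = series.to_numpy(dtype=float)
    n = len(values)
    filled = values.copy()
    for i in np.where(np.isnan(values))[0]:
        # candidate indices at the same phase of the seasonal cycle
        candidates = np.arange(i % period, n, period)
        observed = candidates[~np.isnan(values[candidates])]
        if observed.size == 0:
            continue
        # keep the `window` observed values closest in time to the gap
        nearest = observed[np.argsort(np.abs(observed - i))][:window]
        filled[i] = np.mean(values[nearest])
    return pd.Series(filled, index=series.index)

# Usage with hypothetical hourly data and daily seasonality (period = 24).
idx = pd.date_range("2020-01-01", periods=24 * 30, freq="h")
y = pd.Series(np.sin(2 * np.pi * np.arange(len(idx)) / 24)
              + 0.1 * np.random.randn(len(idx)), index=idx)
y.iloc[100:130] = np.nan                      # introduce a gap
y_imputed = seasonal_window_impute(y, period=24)
```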
The use of surrogate models is a standard method for dealing with complex, real-world optimization problems. The first surrogate models were applied to continuous optimization problems; in recent years, surrogate models have gained importance for discrete optimization problems. This article, which consists of three parts, addresses this development. The first part presents a survey of model-based methods, focusing on continuous optimization. It introduces a taxonomy, which is useful as a guideline for selecting adequate model-based optimization tools. The second part provides details for the case of discrete optimization problems; here, six strategies for dealing with discrete data structures are introduced. A new approach for combining surrogate information via stacking is proposed in the third part. The implementation of this approach will be available in the open source R package SPOT2. The article concludes with a discussion of recent developments and challenges in both application domains.
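The stacking idea in the third part can be sketched as follows, assuming scikit-learn in place of the R package SPOT2; the base models and the Ridge meta-learner are illustrative choices, not the published configuration.

```python
# Minimal sketch of combining surrogates via stacking (illustrative setup only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(60, 3))
y = np.sum(X**2, axis=1) + rng.normal(0, 0.05, 60)    # stand-in objective data

stacked_surrogate = StackingRegressor(
    estimators=[
        ("lm", LinearRegression()),
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gp", GaussianProcessRegressor(normalize_y=True)),
    ],
    final_estimator=Ridge(),          # meta-model combines the base predictions
    cv=5,                             # out-of-fold predictions feed the meta-model
)
stacked_surrogate.fit(X, y)
print(stacked_surrogate.predict(X[:3]))
```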
We propose to apply typed Genetic Programming (GP) to the problem of finding surrogate-model ensembles for global optimization on compute-intensive target functions. In a model ensemble, base models such as linear models, random forest models, or Kriging models, as well as pre- and post-processing methods, are combined. In theory, an optimal ensemble joins the strengths of its constituent base models while avoiding their weaknesses, offering higher prediction accuracy and robustness. This study defines a grammar of model-ensemble expressions and searches this set for optimal ensembles via GP. We performed an extensive experimental study based on 10 different objective functions and 2 sets of base models. We arrive at promising results: on unseen test data, our ensembles do not perform significantly worse than the best base model.
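To make the notion of an ensemble-expression grammar concrete, the toy sketch below samples nested ensemble expressions from a small grammar by random expansion rather than typed GP; the grammar symbols and base-model names are invented for illustration.

```python
# Toy grammar over ensemble expressions, sampled randomly (not typed GP search).
import random

GRAMMAR = {
    "<ensemble>": [["<model>"],
                   ["mean", "<model>", "<model>"]],
    "<model>":    [["linear"], ["random_forest"], ["kriging"],
                   ["scale", "<model>"]],         # a pre-processing wrapper
}

def expand(symbol, rng, depth=0, max_depth=4):
    """Expand a grammar symbol into a nested ensemble expression."""
    if symbol not in GRAMMAR:
        return symbol                             # terminal symbol
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        # prefer productions without non-terminals to keep the tree finite
        terminal_only = [p for p in options if all(s not in GRAMMAR for s in p)]
        options = terminal_only or options
    rule = rng.choice(options)
    return [expand(s, rng, depth + 1, max_depth) for s in rule]

rng = random.Random(42)
for _ in range(3):
    print(expand("<ensemble>", rng))              # e.g. ['mean', ['linear'], ['kriging']]
```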
Sequential Parameter Optimization is a model-based optimization methodology, which includes several techniques for handling uncertainty. Simple approaches such as sharpening and more sophisticated approaches such as optimal computing budget allocation are available. For many real-world engineering problems, the objective function can be evaluated at different levels of fidelity. For instance, a CFD simulation might provide a very time-consuming but accurate way to estimate the quality of a solution. The same solution could be evaluated based on simplified mathematical equations, leading to a cheaper but less accurate estimate. Combining these different levels of fidelity in a model-based optimization process is referred to as multi-fidelity optimization. This chapter describes uncertainty-handling techniques for meta-model-based search heuristics in combination with multi-fidelity optimization. Co-Kriging is one powerful method to correlate multiple sets of data from different levels of fidelity. For the first time, Sequential Parameter Optimization with co-Kriging is applied to noisy test functions. This study will introduce these techniques and discuss how they can be applied to real-world examples.
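The multi-fidelity setting can be illustrated with a deliberately simple two-fidelity screening loop; this is neither co-Kriging nor the SPO machinery, and all functions and numbers below are placeholders.

```python
# Toy two-fidelity screening: a cheap model ranks many candidates and only the
# most promising few are passed to the expensive high-fidelity evaluation.
import numpy as np

def cheap_model(x):                      # e.g. simplified analytical equations
    return np.sum(x**2, axis=-1)

def expensive_model(x, rng):             # e.g. a noisy, long-running CFD run
    return np.sum(x**2, axis=-1) + 0.3 * np.sin(5 * x[..., 0]) + rng.normal(0, 0.05)

rng = np.random.default_rng(3)
candidates = rng.uniform(-1, 1, size=(200, 2))
low_fi = cheap_model(candidates)                       # cheap screening of all designs
top_k = candidates[np.argsort(low_fi)[:5]]             # keep the 5 most promising
high_fi = np.array([expensive_model(x, rng) for x in top_k])
best = top_k[np.argmin(high_fi)]
print("best design after high-fidelity check:", best)
```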
This final report describes the results achieved in the project "CI-basierte mehrkriterielle Optimierungsverfahren für Anwendungen in der Industrie" (CIMO; CI-based multi-criteria optimization methods for industrial applications) between November 2011 and October 2014. Suitable solution methods were developed for expensive optimization problems from industry. The focus was on methods from the fields of Computational Intelligence (CI) and surrogate modeling, which offer the possibility of addressing key challenges of expensive, complex optimization problems. The developed methods can take several conflicting objectives into account, integrate different hierarchy levels of the problem into the optimization, respect constraints, process vectorial as well as structured data (combinatorial optimization), and reduce the need for expensive/time-consuming objective function evaluations. The methods were applied primarily to a problem from power plant engineering, namely the optimization of the geometry of a cyclonic dust separator (cyclone) that filters dust from flue gases. The optimization problem posed by these separators leads to conflicting objectives (e.g., pressure loss, collection efficiency). Cyclones can be evaluated via expensive Computational Fluid Dynamics (CFD) simulations, but simple analytical equations are also available as estimates. The combination of both exemplifies how hierarchy levels of an optimization problem can be connected using the methods developed in the project. Beyond this main application, it was also shown that the methods can be applied successfully in many other areas: biogas production, water management, and the steel industry. The particular challenges of the problems and methods treated here offer many important research opportunities for future projects, which are currently being prepared by the project partners.
This paper proposes an experimental methodology for on-line machine learning algorithms, i.e., for algorithms that work on data that are available in a sequential order.
It is demonstrated how established tools from experimental algorithmics (EA) can be applied in the on-line or streaming data setting.
The massive on-line analysis (MOA) framework is used to perform the experiments.
Benefits of a well-defined report structure are discussed.
The application of methods from the EA community to on-line or streaming data is referred to as experimental algorithmics for streaming data (EADS).
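One common way to structure such an on-line experiment is a test-then-train (prequential) loop, sketched below with a stand-in learner and data stream; the paper itself works with the Java-based MOA framework, so this is only an assumed minimal analogue.

```python
# Minimal test-then-train (prequential) loop for a simulated data stream.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(learning_rate="constant", eta0=0.01)
errors = []

for t in range(1000):                          # simulated stream of labeled examples
    x = rng.uniform(-1, 1, size=(1, 3))
    y = np.array([2 * x[0, 0] - x[0, 2] + rng.normal(0, 0.1)])
    if t > 0:                                  # predict before the label is used
        errors.append((model.predict(x)[0] - y[0]) ** 2)
    model.partial_fit(x, y)                    # then train on the revealed label

print("mean prequential squared error:", np.mean(errors))
```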
In this paper, we present a comparison of different data-driven modeling methods. A first instance of a data-driven linear Bayesian model is compared with several linear regression models, a Kriging model, and a genetic programming model. The models are built on industrial data for the development of a robust gas sensor. The data contain a limited number of samples and exhibit high variance. The mean squared error of the models on a test dataset is used as the comparison criterion. The results indicate that standard linear regression approaches, as well as Kriging and genetic programming, show good results, whereas the Bayesian approach, despite requiring additional resources, does not lead to improved results.
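The comparison protocol can be sketched generically as below, with stand-in data and a subset of model types (linear regression, BayesianRidge as a Bayesian linear model, and a Gaussian process standing in for Kriging); none of these settings reproduce the industrial study.

```python
# Generic sketch of the comparison protocol: fit several models and rank them
# by mean squared error on a held-out test set. Data are random stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression, BayesianRidge
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(80, 4))                       # few samples, as in the study
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, 80)    # high-variance stand-in data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

models = {
    "linear regression": LinearRegression(),
    "Bayesian (BayesianRidge)": BayesianRidge(),
    "Kriging (Gaussian process)": GaussianProcessRegressor(normalize_y=True),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.3f}")
```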