Seminars Spring 2011

The Laboratoire d'Automatique of EPFL is pleased to invite you to the seminars listed below. Regularly updated information about these seminars is available on this page. In particular, external visitors are advised to check that the seminars take place as scheduled below.

Where: Salle de séminaire LA-EPFL, ME C2 405 (2nd floor), 1015 Lausanne

When: Friday at 10.15am


Older seminars can be found here.


18.02.2011     Pr. Moritz Diehl (FRIDAY 10:15am–11:15am)
Embedded Optimization for Control and Signal Processing.
Many branches of engineering employ linear mappings between input and output sequences, most prominently control engineering and signal processing. Examples are PID and other linear controllers, the Kalman filter, and the many filters used in sound processing, e.g. in loudspeakers or hearing aids. These linear maps are typically useful only under one particular set of conditions, when no constraints are violated, and must be adapted whenever the conditions change.
A completely different approach is the following: we generate a map between inputs and outputs via embedded optimization, i.e. the outputs are generated as the solution of parametric optimization problems that are solved again and again, each time for different values of the input parameters. This approach directly generates a nonlinear map between inputs and outputs, and makes it easy to incorporate constraints and user-defined objectives. It can be shown that this approach can generate any continuous input-output map, even if we require the optimization problems to be convex in both inputs and outputs, which is the most favorable case.
The structure of the embedded optimization problems needs to be exploited to the maximum, as many applications require sampling times on the order of milliseconds or even microseconds. We present three structure-exploiting algorithms that were used in applications:
(a) a duality- and Fourier-based approach to optimal clipping in hearing aids
(b) an online active set strategy for an optimal pre-filter for machine tools
(c) nonlinear real-time iterations for model predictive control of power-generating kite systems
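The core idea of a map generated by embedded optimization can be seen in its simplest possible instance: a one-dimensional convex QP re-solved for every new input. The sketch below is our own toy illustration (not one of the algorithms above, and all names are hypothetical): the closed-form solution is a projection, so repeated optimization yields a nonlinear (saturating) input-output map.

```python
def embedded_opt_map(p, u_min=-1.0, u_max=1.0):
    """Solve min_u (u - p)**2  s.t.  u_min <= u <= u_max for input p.

    The closed-form solution of this 1-D convex QP is a projection
    (saturation): re-solving the parametric problem for each input p
    yields a nonlinear input-output map, even though the problem is
    convex in both input and output.
    """
    return max(u_min, min(u_max, p))

# The resulting map is linear inside the constraints and clipped outside.
outputs = [embedded_opt_map(p) for p in (-2.0, -0.5, 0.0, 0.5, 2.0)]
```

In a real controller this scalar projection is replaced by a structured QP or NLP solved at every sampling instant, which is why the structure-exploiting algorithms above matter.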
01.03.2011     Dr. L. Fagiano (TUESDAY 10:15am–11:15am)
High-altitude Wind Energy Generation.
Sustainable energy generation is one of the most urgent challenges mankind faces today. Unfortunately, today's renewable energy sources are not competitive with fossil sources, owing to the high costs of the related technologies, their variable and non-uniform availability, and their low power density per unit area. A breakthrough is therefore needed in renewable energy generation to foster its adoption and to produce large quantities of cheap, green energy everywhere. Such a radical innovation may be obtained by converting high-altitude wind energy into electricity. The idea, first investigated about 30 years ago but only recently developed by research groups and companies around the world, is to exploit the aerodynamic forces generated by automatically controlled tethered wings. Such wings can reach much higher altitudes than today's wind turbines, where stronger and more constant wind can be found practically everywhere. The generated forces are then converted into mechanical and electrical power using suitable devices, either on board or at ground level. Automatic control is a key aspect of this technology, since the system to be controlled is open-loop unstable, highly nonlinear and subject to operational constraints. In this talk, the main characteristics and potential of the outlined concept of high-altitude wind energy will be described, and the main results obtained so far in several research projects at Politecnico di Torino, including theoretical analyses, numerical simulations and experimental activities, will be discussed.
18.03.2011     Pr. C. Georgakis (FRIDAY 10:15am–11:15am)
Data-Driven Optimization of Batch Processes: The Design of Dynamic Experiments.
Many batch processes cannot be optimized using knowledge-driven process models, because such models do not exist. This is due to our incomplete understanding of the inner workings of many batch processes, and to unfavorable economics resulting from their small production rates. To resolve this impasse, a new data-driven methodology is presented for optimizing the operation of a variety of batch processes when at least one time-varying operating condition needs to be selected. This methodology calculates optimal time-varying conditions without the use of an a priori knowledge-driven model. The approach generalizes the classical Design of Experiments (DoE) methodology, which is limited to time-invariant decision variables. The new approach, called the Design of Dynamic Experiments (DoDE), designs experiments that explore a set of “dynamic signatures” of the unknown decision function(s). Constrained optimization of the interpolating response surface model, calculated from the results of the performed experiments, leads to the selection of the optimal operating conditions. Results from two simulated examples and an experimental pharmaceutical process demonstrate the utility of the method. The first example examines a simple reversible reaction in a batch reactor, where the time-dependent reactor temperature is the decision function. The second examines the optimization of a penicillin fermentation process, where the feeding profile of the substrate is the decision variable. In both cases, a finite number of experiments leads to an effective approximation of the optimal operation of the process. The third example examines an asymmetric catalytic hydrogenation reaction in the production of an active pharmaceutical ingredient; here the best of the DoDE experiments is 50% better than the best experiment of the DoE set.
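The response-surface step of this kind of methodology can be sketched with a toy one-coefficient "dynamic signature": a time-varying profile u(t) = u_nom(t) + a·delta(t) is reduced to the single coefficient a, one experiment is run per design point, and a quadratic surface is interpolated and optimized. This is an illustrative reduction under our own assumptions, not Prof. Georgakis' actual formulation; all names are hypothetical.

```python
def dode_sketch(run_experiment, designs=(-1.0, 0.0, 1.0)):
    """Run one experiment per design point of the signature coefficient
    a, interpolate a quadratic response surface y(a) = A*a**2 + B*a + C
    through the results, and return its stationary point -B/(2A)."""
    y0, y1, y2 = (run_experiment(a) for a in designs)
    a0, a1, a2 = designs
    # Quadratic coefficients via Lagrange interpolation through 3 points.
    denom = (a0 - a1) * (a0 - a2) * (a1 - a2)
    A = (a2 * (y1 - y0) + a1 * (y0 - y2) + a0 * (y2 - y1)) / denom
    B = (a2 ** 2 * (y0 - y1) + a1 ** 2 * (y2 - y0) + a0 ** 2 * (y1 - y2)) / denom
    return -B / (2.0 * A)

# Toy 'process': yield is maximized at a* = 0.3 (unknown to the method).
a_opt = dode_sketch(lambda a: 5.0 - (a - 0.3) ** 2)
```

The real method uses several signature coefficients, replicated experiments, and constrained optimization of the fitted surface; the three-point vertex computation above is only the simplest unconstrained case.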
25.03.2011     Pr. D. Dochain (FRIDAY 10:15am–11:15am)
Extremum Seeking Control and its Application to Process and Reaction Systems: a Survey.
Most adaptive control schemes documented in the literature are developed for regulation to known set-points or tracking of known reference trajectories. Yet in some applications the control objective may be to optimize an objective function, which can depend on unknown parameters, or to select the desired states so as to keep a performance function at its extremum value. Extremum seeking control is one method for handling such optimization problems. It allows the optimization problem to be solved as a control problem, with the associated advantages of sensitivity reduction and disturbance rejection. In the past few years, Krstic et al. have presented several schemes for extremum seeking control of nonlinear systems. In a first class of methods, the system is perturbed using an external excitation signal in order to estimate the gradient numerically.
Although this technique has proven useful in some applications, the lack of guaranteed transient performance of such black-box schemes remains a significant drawback. Alternatively, an adapted model of the system is used for analytical evaluation of the gradient. The extremum seeking framework proposed by Guay and Zhang assumes that the objective function is explicitly known as a function of the system states and of uncertain parameters from the system dynamic equations. Parametric uncertainties make the on-line reconstruction of the true cost impossible, so that only an estimate based on parameter estimates is available. The control objective is to simultaneously identify and regulate the system to the lowest-cost operating point, which depends on the uncertain parameters. The main advantage of this approach is that some degree of transient performance can be guaranteed while achieving the optimization objectives, provided a reasonable functional approximation of the objective function is available. The objective of this seminar is to present a survey of extremum seeking control methods and their applications to process and reaction systems. Two important classes of extremum seeking control approaches are considered: perturbation-based and model-based methods.
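The perturbation-based class can be sketched in its textbook form for a static scalar cost: a sinusoidal dither is added to the current estimate, the measured cost is demodulated with the same sinusoid to obtain a gradient estimate, and that estimate is integrated with a negative gain. This is a minimal sketch under our own toy assumptions (static map, hand-picked parameters), not a scheme from the talk.

```python
import math

def extremum_seek(f, u0=0.0, a=0.5, omega=1.0, gain=0.05, dt=0.05, steps=8000):
    """Perturbation-based extremum seeking (minimization) on a static cost f.

    The dither a*sin(omega*t) probes the cost around the estimate u_hat;
    multiplying the measurement by sin(omega*t) leaves, on average, a term
    proportional to f'(u_hat), so the integrator performs gradient descent.
    """
    u_hat = u0
    for k in range(steps):
        t = k * dt
        y = f(u_hat + a * math.sin(omega * t))   # perturbed measurement
        grad_est = y * math.sin(omega * t)       # demodulation
        u_hat -= gain * grad_est * dt            # integrate with negative gain
    return u_hat

# Toy cost with unknown minimizer u* = 2; the scheme finds it without a model.
u_star = extremum_seek(lambda u: (u - 2.0) ** 2)
```

Practical schemes add washout (high-pass) and low-pass filters around the demodulator to reduce ripple; they are omitted here for brevity.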
08.04.2011     Pr. R. Tempo (FRIDAY 10:15am–11:15am)
Design of Uncertain Complex Systems: A Randomization Viewpoint.
In recent years, we have seen a growing interest in probabilistic and randomized methods for the design of uncertain complex systems. In this lecture, we provide a broad perspective on this research area and discuss several randomized algorithms. In particular, we present sequential algorithms for convex problems and non-sequential algorithms for non-convex problems. Furthermore, we demonstrate the utility of this approach for analyzing specific applications, such as the design of autonomous unmanned aerial vehicles.
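The non-sequential (Monte Carlo) side of this area can be sketched as follows: sample the uncertainty, check a specification on each sample, and size the sample with the standard additive Chernoff bound so the empirical probability is within eps of the true one with confidence 1 − delta. The bound is standard in the randomized-algorithms literature; the toy "specification" below is our own assumption.

```python
import math
import random

def randomized_check(spec_ok, sample_uncertainty, eps=0.05, delta=0.01):
    """Estimate P[spec satisfied] over the uncertainty by Monte Carlo.

    The additive Chernoff bound N >= log(2/delta) / (2*eps**2) guarantees
    |estimate - true probability| <= eps with confidence 1 - delta,
    independently of the (possibly non-convex) problem structure.
    """
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    hits = sum(spec_ok(sample_uncertainty()) for _ in range(n))
    return hits / n, n

random.seed(1)
# Toy example: x' = -(1 + q) x is stable iff q > -1; here q is uniform on
# [-0.5, 1.5], so the specification holds for every sample.
prob, n = randomized_check(lambda q: q > -1.0,
                           lambda: random.uniform(-0.5, 1.5))
```

Sequential algorithms for convex design problems replace the fixed sample size with samples drawn one at a time inside an update rule; the sampling principle is the same.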
06.05.2011     Dr. M. Butcher (FRIDAY 10:15am–11:15am)
A Kalman Filter Based, Sensorless Stepping Motor Driver for Positioning Systems in Radioactive Environments.
The Equipment, Controls and Electronics (ECE) section of the EN-STI group at CERN is responsible for the design, installation and maintenance of high precision control systems for movable devices (e.g. scrapers, collimators, shielding and targets) in highly radioactive environments, as found in CERN’s particle accelerators. Motor drive electronics are damaged by high levels of radiation and must therefore be placed in radiation-safe zones, up to 1 km from the motors. The ECE section is thus currently developing a sensorless PWM stepping motor driver able to ensure high positioning repeatability and low Electro-Magnetic Interference (EMI) when driving a motor via long cables. High positioning repeatability of the motors can be achieved using feedback control based on the motor’s position. However, in order to increase driver robustness and reliability, a sensorless approach, i.e. one without direct position measurement, is desirable. An Extended Kalman filter-based method has been selected for its combination of rigorous theoretical foundations and proven effectiveness in application. In this talk the design of the estimation algorithm, its real-time implementation on an industrial DSP and the proposed Kalman filter tuning approach will be presented.
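The predict/update cycle at the heart of any Kalman filter can be sketched in its simplest scalar, linear form. This is a generic illustration only: the actual driver uses an Extended Kalman filter built around the stepping-motor electrical and mechanical equations, which replace the trivial random-walk model assumed here.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly varying state.

    q: process-noise variance, r: measurement-noise variance.
    Each step predicts with the model x_{k+1} = x_k (so only the
    covariance grows), then corrects with the new measurement z.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: covariance grows by q
        g = p / (p + r)           # Kalman gain
        x = x + g * (z - x)       # update with measurement innovation
        p = (1.0 - g) * p
        estimates.append(x)
    return estimates

# Toy data: a constant true position observed through heavy sensor noise.
random.seed(0)
zs = [1.0 + random.gauss(0.0, 0.2) for _ in range(200)]
est = kalman_1d(zs)
```

With q much smaller than r the filter averages heavily, so the final estimate sits far inside the raw noise band; the tuning of q and r is exactly the kind of trade-off the talk's tuning approach addresses.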
20.05.2011     Dr. D. Roberge (FRIDAY 10:15am–11:15am)
Industrial Designs and Use of Microreactors.
Microreactor technology enables processes to be run in a continuous manner using a minimal quantity of reagent. It thus permits the rapid and scalable development of continuous processes in the fine chemical and pharmaceutical industries. Under such circumstances, advantages are associated both with the continuous mode of operation and with the micro structure itself, such as good thermal control. It is important to identify where the advantage comes from, because this significantly influences the scale-up strategy for larger productions.
In the fine chemical industry, production takes place in multi-purpose plants of high flexibility. The integration of a continuous system in such an environment is feasible provided appropriate modules are employed with (i) good chemical resistance (i.e. glass, Hastelloy, Teflon), (ii) excellent material stability over a large range of temperatures, and (iii) ease of connection avoiding dead volume. In addition, the various modules need to take into account the physico-chemical properties of the reaction, such as the reaction kinetics (which set the residence time) and the reaction phases (solid – liquid – gas). This approach leads systematically to a toolbox concept.
A detailed analysis at Lonza showed that ca. 50% of the reactions studied could fit into a microreactor based on their kinetics. However, taking the reaction phases into account shrinks this number to ca. 20% of potential candidates, because a solid phase is present in more than 60% of the cases. In addition, the reactions were classified into three classes, namely Type A (mixing-controlled reactions), Type B (rapid but kinetically controlled reactions), and Type C (batch reactions with thermal hazard).
These three reaction types naturally define three types of reactor modules required to operate various pharmaceutical reactions in a flexible manner. This talk will present an analysis of the industrial designs of reactors, review their use, and address the scale-up concept in detail. The Lonza MicroReactors have already produced several tons of material; manufacturing examples will therefore be presented, based on Lonza's experience, showing the applicability of continuous processes and microreactors in an industrial environment.
27.05.2011     Pr. S. Narasimhan (FRIDAY 10:15am–11:15am)
Treatment of Noise in Multivariate Data Analysis Techniques.
The last decade has seen an explosion in the quantity of data available on different systems. This has led to a growth in the development and use of techniques for mining this data to extract valuable information. The spectrum of applications includes speech and image processing, biomedical signal processing, bioinformatics, environmetrics and chemometrics. Several multivariate data analysis techniques, such as Principal Components Analysis (PCA) and its variants, Non-negative Matrix Factorization (NMF) and Independent Components Analysis (ICA), are among the popular techniques currently in use. Despite the fact that the data obtained in many of the above applications contain a significant amount of noise, relatively little effort has been directed at treating noise in a systematic and theoretically rigorous manner. Methods such as NMF and ICA are developed from a deterministic viewpoint, and PCA is typically used as a pre-processing technique for dealing with noise. The purpose of this talk is first to review the conditions under which PCA is an optimal technique for denoising data. The Iterative PCA (IPCA) method, which we have developed for dealing with heteroscedastic errors in a rigorous manner by estimating both the noise parameters and the regression model simultaneously, is also discussed. IPCA is further integrated with functional PCA to develop a powerful combined univariate-multivariate denoising technique. The use of the proposed approach in developing more accurate multivariate calibration models and as a pre-processing technique for accurate extraction of source signals from mixtures is illustrated.
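Plain PCA denoising, the baseline the talk starts from, can be sketched in two dimensions: center the data, find the leading principal direction, and project each point onto it. This is the optimal rank-1 denoiser only when errors are homoscedastic, which is precisely the limitation IPCA removes; the sketch below is our own toy example, not the IPCA algorithm.

```python
def pca_denoise_2d(points, iters=100):
    """Rank-1 PCA denoising of 2-D points via power iteration.

    Centers the data, finds the leading eigenvector of the 2x2 sample
    covariance by power iteration, and projects every point onto the
    resulting principal line through the mean.
    """
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    c = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 sample covariance matrix.
    sxx = sum(x * x for x, _ in c) / n
    sxy = sum(x * y for x, y in c) / n
    syy = sum(y * y for _, y in c) / n
    # Power iteration for the leading eigenvector (principal direction).
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        wx, wy = sxx * vx + sxy * vy, sxy * vx + syy * vy
        norm = (wx * wx + wy * wy) ** 0.5
        vx, vy = wx / norm, wy / norm
    # Project each centered point onto the principal direction.
    return [(mx + (x * vx + y * vy) * vx, my + (x * vx + y * vy) * vy)
            for x, y in c]

# Toy data: points near the line y = 2x with small measurement noise.
noisy = [(0.0, 0.1), (1.0, 2.1), (2.0, 3.9), (3.0, 6.0), (4.0, 8.1)]
denoised = pca_denoise_2d(noisy)
```

The denoised points are exactly collinear (the rank-1 reconstruction), and each lies close to its noisy original; with heteroscedastic noise this projection is biased, motivating the iterative noise-parameter estimation of IPCA.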
01.07.2011     Pr. S. Shah (FRIDAY 10:15am–11:15am)
Data, Data Everywhere… How to Shelter from the Digital Tsunami?
Sensor-fusion and Signal Processing for Plant Health Management.
It is now common to have an archival history of thousands of sensors sampled every second over long time periods. Yet we frequently hear process engineers complain: “…We are drowning in data but starving for information…”. How can these rich data sets be put to use? This seminar will address the issue of information and knowledge extraction from data, with emphasis on process and performance monitoring. Most major plant, factory, process, equipment and tool disruptions are avoidable, and yet preventive fault detection and diagnosis strategies are not the norm in most industries. It is not uncommon to see simple and preventable faults disrupt the operation of an entire integrated manufacturing facility. For example, faults such as malfunctioning sensors or actuators, inoperative alarm systems, and poor controller tuning or configuration can render the most sophisticated control systems useless. Such disruptions can cost in excess of $1 million per day, and on average they rob a plant of 7% of its annual capacity.
Over the last decade, the fields of multivariate statistics, controller performance monitoring and Bayesian inference have merged to produce powerful sensing and condition-based monitoring systems for predictive fault detection and diagnosis. These methods rely on the notion of sensor fusion, whereby data from many sensors or units are combined with process information, such as the physical connectivity of process units, to give a holistic picture of the health of an integrated plant. These strategies have reached the stage where they are being implemented for off-line and on-line deployment.
This presentation will outline the field of sensor fusion: the application of signal processing methods, in the temporal as well as spectral domains, to a multitude of sensor signals rather than a single one, in order to detect incipient process abnormality before a catastrophic breakdown is likely to occur. The talk will be complemented with industrial case studies demonstrating the success of these methods. The same techniques can also be applied in other fields; for example, the fusion of pixels of information from digital images will be illustrated via the automated detection and diagnosis of malaria parasites in microscopic images.