Invited talks and tutorials
Invited talks and tutorials have always been an important way to add value to the LION conference, not only for the presentations but also because of the interactions during the event. The program of invited talks and tutorials at LION7 has been defined as follows.
Plenary Speakers: Invited Talks
Youssef Hamadi
Microsoft Research, United Kingdom
Automating Parallel SAT Solving
In the last 15 years, tremendous algorithmic progress has turned Propositional Satisfiability into an efficient problem-solving formalism. Nowadays, SAT engines benefit important domains like hardware and software verification, and often show surprising efficiency in emerging fields like Computational Biology. In the last 5 years, with the practical availability of cheap multicore platforms, the focus has shifted to Parallel Satisfiability. This has raised new algorithmic questions, among which the control of the cooperation between different SAT engines has become prominent. In this talk, I will summarize our work on the automated control of cooperation between independent SAT solvers. Our objective is to automate and streamline this cooperation, removing the need for parameter tuning. I will present solutions to control throughput, topology, and synchronisation in state-of-the-art parallel SAT solvers.
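In its simplest portfolio form, the cooperation of independent solvers can be pictured as racing differently configured engines on the same formula and taking the first answer. The tiny DPLL engine and branching heuristics below are illustrative assumptions for this sketch, not the systems discussed in the talk:

```python
# Portfolio-style parallel SAT sketch: differently configured solvers race
# on the same formula; the first completed answer wins. The minimal DPLL
# engine here is an illustrative stand-in for a real SAT engine.
from concurrent.futures import ThreadPoolExecutor, as_completed

def dpll(clauses, choose):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    if not clauses:
        return set()                           # all clauses satisfied
    if any(not c for c in clauses):
        return None                            # empty clause: conflict
    lit = choose(clauses)                      # branching heuristic = "configuration"
    for trial in (lit, -lit):
        # keep clauses not satisfied by `trial`, and drop the falsified literal
        reduced = [c - {-trial} for c in clauses if trial not in c]
        sub = dpll(reduced, choose)
        if sub is not None:
            return sub | {trial}
    return None

def portfolio_solve(cnf, heuristics):
    """Run one solver per heuristic in parallel; return the first answer."""
    with ThreadPoolExecutor(max_workers=len(heuristics)) as pool:
        futures = [pool.submit(dpll, [set(c) for c in cnf], h) for h in heuristics]
        return next(as_completed(futures)).result()

# Two toy branching heuristics -- the "parameters" a portfolio diversifies over.
shortest_clause = lambda cs: next(iter(min(cs, key=len)))
first_literal = lambda cs: next(iter(cs[0]))

model = portfolio_solve([[1, 2], [-1, 3], [-3]], [shortest_clause, first_literal])
```

A real parallel portfolio would additionally exchange learnt clauses between engines; deciding what to share, with whom, and how often is precisely the cooperation that the talk proposes to control automatically.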
Youssef Hamadi is a Senior Researcher at Microsoft Research. His work involves the practical resolution of large-scale real-life problems set at the intersection of Optimization and Artificial Intelligence. His current research considers the design of complex systems based on multiple formalisms, fed by different information channels, which plan ahead and make smart decisions. His current focus is on Autonomous Search, Parallel Search, and Propositional Satisfiability, with applications to Environmental Intelligence, Business Intelligence, and Software Verification.
Mauricio G.C. Resende
Algorithms & Optimization Research Department
AT&T Labs Research, Shannon Laboratory
Florham Park, NJ 07932 USA
Biased Random-Key Genetic Algorithms
A biased random-key genetic algorithm (BRKGA) is a metaheuristic for combinatorial and continuous global optimization. In this talk, we introduce BRKGAs and show how they can be applied to find optimal or near-optimal solutions to optimization problems.
We first review the basic idea of genetic algorithms (GAs). Then we present BRKGAs, pointing out the differences between them and classical GAs. We consider the basic architecture of BRKGAs and then concentrate on each specific component. Finally, we consider extensions to the basic architecture.
We conclude by considering implementations and applications of BRKGAs. We begin by describing an application programming interface (API) for BRKGAs and then describe applications of BRKGAs to several combinatorial and continuous global optimization problems.
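The basic architecture described above fits in a few lines: a population of random-key vectors evolved with elitism, mutants, and biased uniform crossover, with a problem-specific decoder supplying the cost. The population sizes, the bias rho, and the sphere-function decoder below are illustrative assumptions, not the talk's reference implementation:

```python
import random

def brkga(cost_of_keys, n, pop=30, n_elite=6, n_mutants=6, rho=0.7,
          gens=100, seed=1):
    """Minimal BRKGA sketch: chromosomes are vectors of random keys in [0, 1];
    a problem-specific decoder (folded into cost_of_keys here) gives the cost."""
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost_of_keys)
        nxt = P[:n_elite]                                   # elites pass unchanged
        nxt += [[rng.random() for _ in range(n)] for _ in range(n_mutants)]
        while len(nxt) < pop:                               # biased crossover:
            e = rng.choice(P[:n_elite])                     #   one elite parent,
            o = rng.choice(P[n_elite:])                     #   one non-elite parent
            nxt.append([e[i] if rng.random() < rho else o[i] for i in range(n)])
        P = nxt
    return min(P, key=cost_of_keys)

# Illustrative decoder: map keys in [0, 1]^3 to [-5, 5]^3, score the sphere function.
def sphere_cost(keys):
    xs = [10 * k - 5 for k in keys]
    return sum(x * x for x in xs)

best = brkga(sphere_cost, n=3)
```

Each offspring gene comes from the elite parent with probability rho > 0.5; this bias, together with the elite/non-elite pairing, is what distinguishes a BRKGA from a classical random-key GA. Only the decoder changes from one application to the next.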
Mauricio G. C. Resende is a research scientist at the Algorithms and Optimization Research Department of AT&T Labs Research. His undergraduate studies were in electrical engineering (systems concentration) at the Catholic University of Rio de Janeiro, Brazil, and he earned an M.Sc. in operations research at the Georgia Institute of Technology. He has been at AT&T Bell Labs and AT&T Labs Research since 1987, when he earned a Ph.D. in operations research at the University of California, Berkeley.
His research has focused on optimization, including interior point algorithms for linear programming, network optimization, and nonlinear programming, as well as heuristics for discrete optimization problems arising in telecommunications, scheduling, location, assignment, and graph theory. Most of his work with heuristics has concentrated on GRASP (greedy randomized adaptive search procedures), a metaheuristic that he and Thomas A. Feo developed in the late 1980s, and more recently on biased random-key genetic algorithms.
He has published over 100 papers and is co-editor of five books, including the Handbook of Optimization in Telecommunications and the Handbook of Applied Optimization. He is on the editorial boards of many journals, including Networks, Journal of Heuristics, Journal of Global Optimization, and Computational Optimization and Applications.
Besides working in the telecommunications industry, he has worked in the electrical power and semiconductor manufacturing industries, where he developed several decision support systems (tools) for optimization problems.
Qingfu Zhang
School of Computer Science & Electronic Engineering,
University of Essex, UK
Multiobjective Evolutionary Computation, Decomposition and Regularity
Many real-world optimization problems have multiple conflicting objectives by nature. Unlike a single-objective optimization problem, a multiobjective optimization problem has a set of Pareto optimal solutions (the Pareto front), which a decision maker may require in order to make her final decision. Evolutionary algorithms are able to generate an approximation to the Pareto front in a single run, and many traditional optimization methods have also been developed for dealing with multiple objectives. Although not much work has been done in this direction, the combination of evolutionary algorithms and traditional optimization methods should produce the next generation of multiobjective optimization solvers. Decomposition techniques have been widely used and studied in traditional multiobjective optimization, and it is well known that the Pareto optimal solution set of a continuous multiobjective problem often exhibits some regularity. In this talk, I will describe two multiobjective evolutionary algorithms that borrow these ideas from traditional optimization: MOEA/D and RM-MEDA. MOEA/D decomposes a multiobjective problem into a number of single-objective subproblems or simple multiobjective subproblems, and then solves these subproblems in a collaborative manner. RM-MEDA makes use of the regularity property to model the distribution of Pareto optimal solutions in the search space, and then generates new solutions from the model thus built. I will also outline some possible research issues in multiobjective evolutionary computation.
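The decomposition idea behind MOEA/D can be illustrated on a toy bi-objective problem: each evenly spread weight vector defines one scalar subproblem (here via the weighted Tchebycheff function), and neighbouring subproblems collaborate by sharing offspring. The test problem (Schaffer's f1 = x^2, f2 = (x-2)^2), the variation operator, and all parameter values below are illustrative assumptions, not the algorithm's reference implementation:

```python
import random

def moead(f, lo, hi, n_sub=11, gens=60, T=3, seed=0):
    """Minimal MOEA/D-style sketch for a bi-objective problem f(x) -> (f1, f2)
    with a single decision variable x in [lo, hi]."""
    rng = random.Random(seed)
    # Decomposition: evenly spread weights, one scalar subproblem per weight.
    W = [(i / (n_sub - 1), 1 - i / (n_sub - 1)) for i in range(n_sub)]
    X = [lo + rng.random() * (hi - lo) for _ in range(n_sub)]
    F = [f(x) for x in X]
    z = [min(fx[k] for fx in F) for k in (0, 1)]          # ideal-point estimate

    def g(fx, w):                                         # weighted Tchebycheff
        return max(w[k] * abs(fx[k] - z[k]) for k in (0, 1))

    for _ in range(gens):
        for i in range(n_sub):
            # collaborate with the neighbourhood of subproblem i
            nb = range(max(0, i - T), min(n_sub, i + T + 1))
            j = rng.choice(list(nb))
            y = min(max((X[i] + X[j]) / 2 + rng.gauss(0, 0.3), lo), hi)
            fy = f(y)
            z = [min(z[k], fy[k]) for k in (0, 1)]        # update ideal point
            for k in nb:                                  # replace worse neighbours
                if g(fy, W[k]) < g(F[k], W[k]):
                    X[k], F[k] = y, fy
    return X, F

X, F = moead(lambda x: (x * x, (x - 2) ** 2), lo=-5.0, hi=10.0)
```

Because each subproblem is scalar, classical single-objective machinery applies to it directly, while the neighbourhood replacement step is what makes the subproblems collaborate rather than run independently.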
Qingfu Zhang is currently a Professor with the School of Computer Science and Electronic Engineering, University of Essex, UK. He is also a Changjiang Visiting Chair Professor at Xidian University, China. From 1994 to 2000, he was with the National Laboratory of Parallel Processing and Computing, National University of Defence Science and Technology, China; Hong Kong Polytechnic University, Hong Kong; the German National Research Centre for Information Technology (now Fraunhofer-Gesellschaft, Germany); and the University of Manchester Institute of Science and Technology, Manchester, U.K. He holds two patents and is the author of many research publications. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. Dr. Zhang is an Associate Editor of the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Systems, Man, and Cybernetics-Part B. He is also an Editorial Board Member of three other international journals. MOEA/D, a multiobjective optimization algorithm developed in his group, won the Unconstrained Multiobjective Optimization Algorithm Competition at the Congress of Evolutionary Computation 2009, and was awarded the 2010 IEEE Transactions on Evolutionary Computation Outstanding Paper Award.
Paola Festa
Dipartimento di Matematica e Applicazioni "R. Caccioppoli"
Università degli Studi di Napoli "Federico II", Italy
Combinatorial Optimization Approaches for Data Clustering and Biclustering
Clustering algorithms aim to group data such that the most similar objects belong to the same group or cluster, and dissimilar objects are assigned to different clusters. In more detail, they take as input a data set and a similarity (or distance) function over the domain, with the aim of finding a partition of the data into groups of mutually similar elements.
Biclustering is a variant of this task that is needed when the input data comes from two domain sets and some relation over the Cartesian product of these two sets is given. In this case, one could be interested in partitioning each of the sets, such that the subsets from one domain exhibit similar behavior across the subsets of the other domain. Roughly speaking, biclustering can be viewed as simultaneous data clustering and feature selection, i.e., detection of significant clusters and the features that are uniquely associated with them, given that not all features are relevant to certain clusters.
Clustering and biclustering have both generated considerable interest over the past few decades, due to the increasing need to efficiently analyze high-dimensional gene expression data in several different and heterogeneous contexts, such as information retrieval, knowledge discovery, and data mining.
Unfortunately, both the problem of clustering and the problem of locating the most significant bicluster have been shown to be NP-complete. Therefore, given the inherent computational intractability of these problems, heuristic and metaheuristic approaches are needed to efficiently find good solutions in reasonable running times.
In this talk, some efficient metaheuristic algorithms will be presented to approach these problems.
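As a concrete illustration of the kind of heuristic involved, the sketch below clusters data with a swap-based local search for k-medoids: it repeatedly tries replacing one medoid with a non-medoid point and keeps any improving swap. This is a generic illustration only, not one of the specific metaheuristics presented in the talk:

```python
import random

def kmedoids(points, dist, k, iters=200, seed=0):
    """Swap-based local search for k-medoids: each point is assigned to its
    nearest medoid, and the total assignment cost is greedily reduced."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    cost = lambda meds: sum(min(dist(p, points[m]) for m in meds) for p in points)
    best = cost(medoids)
    for _ in range(iters):
        i, j = rng.randrange(k), rng.randrange(len(points))
        if j in medoids:
            continue                       # swap candidate must be a non-medoid
        trial = medoids[:i] + [j] + medoids[i + 1:]
        c = cost(trial)
        if c < best:                       # keep only improving swaps
            medoids, best = trial, c
    return medoids, best

# Two well-separated 1-D groups; the search should pick one medoid per group.
pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
meds, total = kmedoids(pts, dist=lambda a, b: abs(a - b), k=2)
```

Restricting cluster centers to actual data points is what lets the same scheme run with an arbitrary similarity (or distance) function over the domain, exactly the input assumed in the clustering formulation above.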
Paola Festa obtained her Ph.D. degree in Operations Research in 2000. She is currently Associate Professor in Operations Research at the University of Napoli FEDERICO II, where since 2002 she has been teaching Programming (undergraduate degree), Operations Research (undergraduate degree), and Combinatorial Optimization (Master degree). She is on the scientific committee of the Ph.D. Program in Operations Research (University of Calabria) and of the Ph.D. Program in Computational Sciences and Computer Science, and has been on the scientific committee of the Ph.D. Program in Computational Biology and Bioinformatics (University of Napoli FEDERICO II). Since 1999 she has frequently been a Research Scholar at several national and international research institutes, including CNR institutes, the MIT Laboratory for Information and Decision Systems (USA), AT&T Labs Research (USA), and the Department of Industrial and Systems Engineering, University of Florida (USA). Her research spans several fields, including network optimization, shortest paths, hard combinatorial optimization, and computational biology. She is a referee for several international journals, including Computers & Operations Research, Discrete Optimization, Discrete Mathematics, Optimization and Engineering, Global Optimization, Optimization Methods & Software, and Journal of Heuristics. She is author/coauthor of more than 60 publications that have appeared in international journals, books, and peer-reviewed conference proceedings.
Mario Guarracino
High Performance Computing and Networking Institute, CNR, Italy
Intelligent Feature Selection for Supervised Classification
This tutorial introduces algorithms and software for feature selection, with an emphasis on those using mathematical programming as a computational kernel. An analysis of existing algorithms for supervised classification is detailed. A comparison of different methods, with respect to different evaluation criteria and data mining tasks, provides guidelines for choosing feature selection algorithms for different problems.
Intelligent feature selection solutions based on the integration of existing algorithms are described, highlighting how those solutions can help in choosing a suitable algorithm with or without prior knowledge. Some real-world applications from genomics and transcriptomics are included to demonstrate the use of feature selection in mining problems. Finally, open research problems and challenges are outlined.
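For readers new to the topic, the following is a minimal filter-style feature selector: it ranks features by a simple class-separation score and keeps the top k. It is a generic illustration only; the tutorial's methods, in particular those using mathematical programming as a kernel, are more sophisticated:

```python
import statistics

def select_features(X, y, k):
    """Rank the features of a binary-labelled dataset by |difference of class
    means| over pooled spread, and return the indices of the top k features."""
    def score(j):
        a = [row[j] for row, lab in zip(X, y) if lab == 0]
        b = [row[j] for row, lab in zip(X, y) if lab == 1]
        spread = (statistics.pstdev(a) + statistics.pstdev(b)) or 1.0
        return abs(statistics.mean(a) - statistics.mean(b)) / spread
    return sorted(range(len(X[0])), key=score, reverse=True)[:k]

# Feature 0 separates the classes; feature 1 is constant and uninformative.
X = [[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [1.1, 5.0]]
y = [0, 0, 1, 1]
```

Filters like this score each feature independently of the classifier; wrapper and embedded methods, also covered by the taxonomy the tutorial analyzes, instead evaluate feature subsets against the learning algorithm itself.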
Mario Guarracino is a researcher at the High Performance Computing and Networking Institute of the National Research Council of Italy. He is also affiliated with the Center for Applied Optimization at the University of Florida and the Italian National Institute for Higher Mathematics "F. Severi". He received a Ph.D. in Mathematics and an M.Sc. cum laude in Applied Mathematics. He has taught various graduate and doctoral courses in bioinformatics, computer science, applied mathematics, and statistics at various European universities. His research interests include high performance scientific computing, and machine learning and data mining methods for biological applications.
Boris Goldengorin
Department of Higher Mathematics, Faculty of Economics,
National Research University Higher School of Economics, Moscow, Russia
Patterns in the Linear Assignment Problem and their Applications in Modelling and Optimization
In this tutorial we introduce a new pattern-based approach within the Linear Assignment Model, with the purpose of modelling and solving (either exactly or by high-quality heuristics) many Combinatorial Optimization Problems (COPs) and their applications in industrial engineering and the social and management sciences.
We assume that the COP has an additive (separable) objective function and that the structure of a feasible (optimal) solution to the COP is predefined by a collection of cells (positions) in an input file. We define a pattern as a collection of positions in a problem instance represented by its input file (matrix). We illustrate the notion of pattern by means of some well-known COPs, among them the Linear Ordering Problem and the Cell Formation Problem (CFP), just to mention a couple. The CFP is defined on an input Boolean matrix whose rows represent machines and whose columns represent parts. The CFP consists in finding three optimal objects: a block-diagonal collection of rectangles, of optimal number and sizes, an optimal row (machine) permutation, and an optimal column (part) permutation, such that the grouping efficacy is maximized. The CFP provides an example of a problem with an unknown pattern, i.e. the CFP pattern itself is one of the optimization parameters. As the main result of this tutorial we announce an insightful understanding of the Linear Assignment Model and its applications to solving many real-world problems.
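As background for the CFP discussion, the grouping efficacy of a candidate cell assignment can be computed directly from the input matrix. The sketch below assumes the commonly used measure tau = (n1 - n1_out) / (n1 + n0_in), where n1 counts the ones in the matrix, n1_out the ones outside the diagonal blocks, and n0_in the zeros inside them; the tutorial abstract itself does not fix a formula:

```python
def grouping_efficacy(A, row_cell, col_cell):
    """Grouping efficacy of a 0/1 machine-part matrix A, where row_cell[i]
    and col_cell[j] give the cell (diagonal block) of each machine and part."""
    ones = [(i, j) for i, row in enumerate(A) for j, a in enumerate(row) if a]
    n1 = len(ones)                                             # all ones
    n1_out = sum(1 for i, j in ones if row_cell[i] != col_cell[j])
    n0_in = sum(1 for i, row in enumerate(A) for j, a in enumerate(row)
                if not a and row_cell[i] == col_cell[j])       # zeros in blocks
    return (n1 - n1_out) / (n1 + n0_in)

# A perfectly block-diagonal instance: efficacy reaches its maximum, 1.0.
A = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
```

Note that the cell labels, not explicit permutations, carry the pattern here: choosing the number of cells and the row and column assignments is exactly the "unknown pattern" optimized in the CFP.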
Prerequisites: no prior knowledge is required.
Boris Goldengorin holds an M.Sc. in Computer Science from the Radio Engineering University, Riazan, Russia; an M.Sc. in Applied Mathematics from the Moscow University of Mathematics & Electronics; a Ph.D. in Standardization and Production Quality Control from the National Institute of Standardization, Moscow, Russia; an Sc.D. in Operations Research from the Institute of System Analysis, Russian Academy of Sciences, Moscow, Russia; and a Ph.D. in Economics from the University of Groningen, The Netherlands. He received the title of Professor in Engineering Cybernetics from the Ministry of Science, High School and Engineering of the Russian Federation, Moscow, Russia, and an Honorary Doctor of Science degree from the Khmelnitsky National University, Ukraine.
Dr. Goldengorin is the author of four monographs and two textbooks, and his research articles have been published in Soviet Math. Doklady, Automation and Remote Control, Journal of Computer and Systems Sciences International (formerly Engineering Cybernetics), Management Science, Computers & Operations Research, Journal of Global Optimization, Discrete Optimization, Journal of Algebraic Combinatorics, Lecture Notes in Computer Science, European Journal of Operational Research, Handbook of Combinatorial Optimization, Theory of Optimization, Journal of Heuristics, Algorithmic Operations Research, Computers and Mathematics with Applications, Journal of Combinatorial Optimization, Operations Research, and a number of other professional journals.
Currently he is Professor in the Department of Higher Mathematics and Scientific Deputy Head of the LATNA Laboratory of Algorithms and Technologies for Networks at the National Research University Higher School of Economics, Russia. He is also affiliated with the Advanced Marketing Models Company (http://ammodelsinc.com/core_talent.html) in New York (USA).
Yaroslav D. Sergeyev
Università della Calabria, Italy
Deterministic global optimization using the Lipschitz condition
In this lecture, we consider the global optimization problem for a multidimensional function satisfying the Lipschitz condition, with an unknown Lipschitz constant, over a multidimensional box. The objective function may be a "black box", multiextremal, and non-differentiable, and evaluating it at a point is assumed to be a time-consuming operation. Many algorithms for solving this problem have been discussed in the literature. They can be distinguished, for example, by the way they obtain information about the Lipschitz constant and by the strategy used to explore the search domain. Different exploration techniques based on various adaptive partition strategies are analyzed. The main attention is dedicated to diagonal algorithms, since they have a number of attractive theoretical properties and have proved to be efficient in solving applied problems. In these algorithms, the search box is adaptively partitioned into sub-boxes and the objective function is evaluated only at the two vertices corresponding to the main diagonal of each generated sub-box. It is demonstrated that the traditional diagonal partition strategies do not meet the requirements of computational efficiency, because they execute many redundant evaluations of the objective function. A new adaptive diagonal partition strategy that avoids such computational redundancy is described. Some powerful multidimensional global optimization algorithms based on the new strategy are introduced. Results of extensive numerical experiments show their advantages over traditional diagonal algorithms in terms of both the number of trials of the objective function and the qualitative analysis of the search domain, characterized by the number of generated boxes.
Finally, problems with Lipschitz first derivatives are considered and connections between the Lipschitz global optimization and interval analysis global optimization are discussed.
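To make the setting concrete, the classical one-dimensional case can be sketched with the Piyavskii-Shubert method, where the Lipschitz constant L yields a saw-tooth lower bound over each interval; the multidimensional diagonal algorithms of the talk generalize this idea to boxes evaluated at the two endpoints of their main diagonal. The constant L and the test function below are illustrative assumptions:

```python
import heapq

def shubert(f, a, b, L, iters=50):
    """Piyavskii-Shubert minimization of an L-Lipschitz f on [a, b].
    Over [l, r] the saw-tooth lower bound attains (f(l)+f(r))/2 - L*(r-l)/2
    at x = (l+r)/2 + (f(l)-f(r))/(2L); we always split the most promising
    interval at that point."""
    fa, fb = f(a), f(b)
    best = min((fa, a), (fb, b))                       # (value, location)
    heap = [((fa + fb) / 2 - L * (b - a) / 2, a, fa, b, fb)]
    for _ in range(iters):
        lb, l, fl, r, fr = heapq.heappop(heap)
        if lb >= best[0]:                              # no interval can beat the incumbent
            break
        m = min(max((l + r) / 2 + (fl - fr) / (2 * L), l), r)
        fm = f(m)
        best = min(best, (fm, m))
        for u, fu, v, fv in ((l, fl, m, fm), (m, fm, r, fr)):
            heapq.heappush(heap, ((fu + fv) / 2 - L * (v - u) / 2, u, fu, v, fv))
    return best

val, x = shubert(lambda t: (t - 1) ** 2, a=0.0, b=3.0, L=4.0)
```

Each interval costs only its two endpoint evaluations, mirroring how the diagonal schemes charge each sub-box just two evaluations of the objective function; avoiding redundant evaluations when sub-boxes share vertices is precisely where the new partition strategy improves on traditional ones.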
Yaroslav D. Sergeyev is Distinguished Professor at the University of Calabria, Italy (professorship awarded by the Italian Government). He is also Professor (part-time contract) at N.I. Lobachevski Nizhni Novgorod State University, Russia, and Affiliated Researcher at the Institute of High Performance Computing and Networking of the Italian National Research Council. He received his Ph.D. from N.I. Lobachevski Nizhni Novgorod State University and his D.Sc. degree from M.V. Lomonosov State University, Moscow (this degree is the Habilitation for full professorship at Russian universities). His research interests include numerical analysis, global optimization, infinity computing and calculus, set theory, number theory, fractals, parallel computing, and interval analysis. Prof. Sergeyev has been awarded several national and international prizes (Pythagoras International Prize in Mathematics, Italy, 2010; Outstanding Achievement Award from the 2010 World Congress in Computer Science, Computer Engineering, and Applied Computing, USA; Lagrange Lecture, Turin University, Italy, 2010; MAIK Prize for the best scientific monograph published in Russian, Moscow, 2008; etc.). He has published more than 60 papers in leading international journals, 4 books, and 4 patents (his complete list of publications contains almost 200 items). He is a member of the editorial boards of 4 international journals and co-editor of 3 special issues. He has given more than 30 plenary and keynote lectures at prestigious international congresses and has been a member of the Scientific Committees of 40 international congresses. He has coordinated numerous national and international research and educational projects; in particular, since 1994 he has been Coordinator of the international research and educational program "Italian-Russian University". Software developed under his supervision is used in more than 40 countries of the world.
Numerous magazines, newspapers, and TV and radio channels have devoted considerable space to his research. Detailed information: http://si.deis.unical.it/~yaro/