Excursions in Algorithmics: Brown University, October 27-28, 2006
Workshop Speakers
Abstract
The problem of characterizing and detecting recurrent sequence patterns
such as substrings or motifs and related associations or rules is pursued
ubiquitously in order to compress data, unveil structure, infer succinct
descriptions, extract and classify features, etc. The discovery,
particularly on a massive scale, of significant patterns and correlations
thereof poses interesting methodological and algorithmic problems, insofar
as it often exposes scenarios in which tables and descriptors grow
faster and bigger than the phenomena they are meant to encapsulate.
While part of this problem is endemic, part of it can be attributed to the
traditional approaches to the definition of a pattern, that hinge
alternatively on syntactic or statistical properties alone. It has been
seen recently that at least this part of the problem may be mitigated
by more prudent paradigms, in which the syntactic description of
a pattern and the list of all of its occurrences are tightly intertwined.
This approach leads to the identification of regions of monotonicity
for the surprise scores in use, within which it suffices to consider
and weigh only extremal terms and values. This talk reviews concepts, constructs,
and application results obtained along this line of research.
Biography
Alberto Apostolico's research interests are in the areas of algorithm
analysis and design, and their applications. His work deals with algorithms and data
structures for combinatorial pattern matching and discovery. His collaboration
with Franco Preparata on this subject led to papers for finding repetitions in
strings and for computing exhaustive substring statistics. He is a co-editor
(with Z. Galil) of the books ''Combinatorial Algorithms on Words''
(Springer-Verlag) and ''Pattern Matching Algorithms'' (Oxford Univ. Press), serves on the
editorial boards of Parallel Processing Letters, Theoretical Computer Science,
Journal of Computational Biology, International
J. of Bioinformatics Research and Applications, International Journal on
Foundations of Computer Science, and the Springer-Verlag Lecture Notes in
Bioinformatics series, and has served as guest editor for special issues of Algorithmica,
Information Sciences, Journal of Discrete Algorithms, PPL, and TCS.
A founding member of the steering committee of the International Symposia on Combinatorial Pattern Matching, the Proceedings of which he co-edited in 1993, 1994, 1997, 2002, and 2005, he also serves on the steering committees of the International Conferences on Discovery Science and of the Symposia on String Processing and Information Retrieval. With Franco Preparata he was a founding executive committee member of the Fibonacci Institute for the Foundations of Computer Science, which held Summer Research Schools in Trento from 1987 to 1993. He has served on the program committees of many international conferences, most recently Research in Computational Molecular Biology (RECOMB), the Workshop on Algorithms in Bioinformatics (WABI), the IEEE Data Compression Conference, String Processing and Information Retrieval (SPIRE), and Combinatorial Pattern Matching (CPM), among others, and as an invited speaker at numerous international conferences and advanced research schools.
In his career, Alberto Apostolico has held visiting and permanent appointments in the U.S. (CMU, UIUC, Rensselaer, Purdue, IBM, GATech) and Europe (U. of Salerno, U. of L'Aquila, U. of Padova, IASI-CNR, U. of Paris, U. of London, King's College, ZiF-Bielefeld, Renyi-Hungarian Acad. of Science). He has been the (co-)recipient of US (Air Force, NIH, NSF, IBM), British, French, Italian, collaborative (Israel, Korea, Japan), and international (Fulbright, NATO, ESPRIT) research grants.
Abstract
In several circumstances we have to solve a
problem whose instance is not completely known in advance.
Situations of this kind occur in computer systems and networks
management, in financial decision making, in robotics, etc. Problems
of this kind are called on-line problems. In most applications (e.g., paging)
an agent is required to serve requests in an on-line fashion, and a request
has to be served by the agent before a new request is revealed. In such
cases the notion of time simply refers to a totally ordered
sequence of discrete instants (t0, t1, ..., tn). Both the requests
and the agent's service actions occur instantaneously at one of these moments.
The actual time elapsing between two instants is not specified and
is irrelevant in every respect. A different situation occurs in
other problems where the time dimension is continuous and plays a specific
role. We call problems of this kind real-time on-line problems. In this
paper we address a specific class of real-time on-line problems: vehicle
routing problems. By examining the properties of this paradigmatic type of
problem, we highlight the role of real time in the design of
competitive algorithms, in the construction of adversarial arguments, and
in the definition of clairvoyance.
(joint work with L. Allulli, V. Bonifaci, L. Laura)
Biography
Giorgio Ausiello graduated in Physics in 1966 from the University of Rome
"La Sapienza". He has been Full Professor of Theoretical Computer Science at
the Faculty of Engineering of the same university since 1980. Over the
years, his research activity has addressed the following domains:
computational complexity, database theory, approximation of NP-hard
optimization problems, and dynamic and on-line algorithms. On these topics
he has published 9 books and over 100 papers in major journals and
conferences. In 1973 he was one of the founders of the European
Association for Theoretical Computer Science (EATCS), and he has been a
member of the EATCS Council almost continuously since its foundation.
Since 1980 he has been on the Advisory Board of the Springer series of
EATCS monographs. Since January 2001 he has been Editor-in-Chief of the
Elsevier journal Theoretical Computer Science (Series A: Algorithms,
automata, complexity and games). He is also an Editor of the International
Journal of Foundations of Computer Science. From 1996 to 2000 he
coordinated a group of eleven universities in the EU Basic Research
project ALCOM-IT (Algorithms and Complexity in Information Technology).
From January 1997 to December 2002 he was Chairman of the IFIP
Technical Committee on Foundations of Computer Science (TC1). From 1996
to 2000 he served on the Board of Trustees of the International
Computer Science Institute (ICSI) in Berkeley, CA. In 1996 he was
elected a member of Academia Europaea. In 2004 he was granted the
title of Doctor Honoris Causa by Dauphine University (Paris).
Abstract
It is widely agreed that machines based on sequential programming are
increasingly inadequate to fully exploit the potential of present and
forthcoming technologies. However, a rigorous quantitative assessment
of the inherent limitations of the sequential approach does not seem
to be available. In this context, we take a look at two interrelated
issues:
Biography
Gianfranco Bilardi earned his Ph.D. in Electrical Engineering
in 1985 from the Department of Electrical Engineering of the
University of Illinois at Urbana-Champaign, with a thesis supervised
by Franco Preparata. He first joined Computer Science at Cornell
University as an Assistant Professor. In 1990, he became a Professor
of Computer Science at the University of Padova, where he is currently
the Coordinator of the Center of Excellence "Science and Applications
of Advanced Computing Paradigms." Since 2000, he has also been
affiliated with the T.J. Watson Research Laboratory of IBM. Bilardi's
research interests lie in the areas of theory of computation, and
algorithms and architectures for high performance computing.
Abstract
Franco's book (co-authored with M. Shamos), "Computational Geometry: An
Introduction", the very first textbook in Computational Geometry, was
devoted to linear geometry: points, lines, triangles, and rectangles were
the objects of primary interest. Twenty years later, Non-Linear
Computational Geometry is emerging as a new exciting area. The talk
will revisit some chapters of Franco's book from this perspective,
discuss some recent results on proximity problems and the geometry of
spheres, and present some applications in Information Computational
Geometry and Geometric Approximation.
Biography
Jean-Daniel Boissonnat graduated from the Ecole Superieure
d'Electricité in 1976. He obtained a Ph.D. in Control Theory from
Rennes University in 1979. He joined INRIA-Rocquencourt in 1980 and
started working there on surface reconstruction while reading
M. Shamos's thesis (which, after Franco's collaboration, became the
first textbook in Computational Geometry). He moved to INRIA
Sophia-Antipolis in 1986 where he successively led the PRISME and
Geometrica projects devoted to Discrete and Computational Geometry and
their applications, most notably geometric modeling, surface
reconstruction, mesh generation and motion planning. He initiated the
development of the CGAL library at INRIA and, partly in collaboration
with Franco, contributed to robust geometric computing. He
co-authored, with M. Yvinec, the book ``Algorithmic Geometry''
(Ediscience International and Cambridge University Press). He has
received the IBM award in Computer Science in 1987, the FIAT/French
Academy of Science award in 1989 and has been elected Chevalier de
l'Ordre National du Mérite in 2006. He is currently on the editorial
board of four international journals devoted to algorithms,
computational geometry and graphics. He co-chaired the program
committee of the 20th ACM Symposium on Computational Geometry.
Jean-Daniel met Franco for the first time at the Ecole Normale
Superieure in Paris in 1987 where Franco was on sabbatical leave from
the University of Illinois. Since then, they have worked together on fundamental
geometric algorithms, robust computation, and motion planning.
Abstract
Motivated by the need to cope with data that is massive, high-dimensional,
noisy, uncertain, unevenly priced, low-entropic, or all of the above,
algorithm design is undergoing profound changes. I will discuss some of
these developments; in particular, sublinear algorithms, online reconstruction,
self-improving algorithms, and dimension reduction.
Biography
Bernard Chazelle is professor of computer science
at Princeton University, where he has been on the faculty since 1986.
He has held research and faculty positions
at Carnegie-Mellon University, Brown University,
Ecole Polytechnique, Ecole Normale Superieure, University
of Paris, INRIA, Xerox PARC, DEC SRC, and NEC Research.
He received his Ph.D. in computer science from Yale University in 1980.
Abstract
I will present some recent results on the computation of
equilibria both in game-theoretic and economic scenarios. I will
particularly focus on problems which pose significant challenges to
algorithmists.
Biography
Bruno Codenotti is a Research Director of the Italian National
Research Council. He was the Director of the Institute for
Computational Mathematics in Pisa.
He has been a visiting professor at several institutions in the US,
including the University of Illinois at Urbana Champaign, ICSI in
Berkeley, Cornell University, Sandia National Labs, The University of
Chicago, University of Iowa, and TTI-Chicago. He has taught
undergraduate and graduate courses in Italy, China, and the US. His
research interests are in algorithmics, particularly problems with
an algebraic flavor. In the last few years he has worked extensively
on the computation of equilibria for markets and games.
Abstract
In this talk, we address the problem of representing the connectivity
information of geometric objects using as little memory as
possible. As opposed to raw compression issues, the focus here is
on designing data structures that preserve the possibility of
answering incidence queries in constant time. We propose in
particular the first optimal representation for 3-connected
planar graphs and triangulations, which are the most standard
classes of graphs underlying meshes with spherical topology.
Optimal means that these representations asymptotically match the
respective entropy of the two classes, namely 2 bits per edge for
3-connected planar graphs, and 1.62 bits per triangle, or equivalently 3.24
bits per vertex, for triangulations.
This work takes a rather theoretical point of view;
in conclusion, we will discuss the possibility of more practical versions.
This is joint work with Luca Castelli Aleardi, Abdelkrim Mebarki and Gilles Schaeffer.
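As a side note, the quoted entropy figures are mutually consistent, assuming the classical enumeration result (not stated in the abstract) that the number of planar triangulations grows roughly like (256/27)^n in the number of vertices n; a quick check:

```python
import math

# Entropy per vertex of the class of planar triangulations, under the
# assumed (256/27)^n growth rate of their number in the vertex count n.
bits_per_vertex = math.log2(256 / 27)      # about 3.24 bits per vertex
# A triangulation on n vertices has about 2n triangles, so the
# per-triangle cost is half the per-vertex cost.
bits_per_triangle = bits_per_vertex / 2    # about 1.62 bits per triangle

print(round(bits_per_vertex, 2), round(bits_per_triangle, 2))
```

This reproduces the 3.24 bits per vertex and 1.62 bits per triangle quoted above.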
Biography
Olivier Devillers is a former student of École Normale Supérieure
in Paris, and he received his PhD in computer graphics in 1988
from Paris XI University, advised by Claude Puech.
Then his interest focused on computational geometry
and he obtained his Habilitation à diriger des recherches
in 1993 from Nice Sophia-Antipolis University.
Olivier Devillers' work took place during a period when computational geometry was evolving from a combinatorial and rather theoretical point of view to a more practical one. He participated in this evolution through his work on the design of simpler randomized algorithms, the treatment of robustness issues, and the development of the Computational Geometry Algorithms Library (CGAL). His Delaunay hierarchy algorithm for Delaunay computation combined all these ingredients and is available in CGAL.
Olivier Devillers has been collaborating with Franco Preparata since 1990, and has visited him several times at Brown University. They have co-authored several papers.
Abstract
In this talk I will survey areas of research I have been
involved in that have been inspired by work pioneered by Franco
Preparata. Topics include algorithms for problems involving parallelism
and computational geometry. This talk will be presented at a level that
should be of interest to theory and non-theory people alike.
Biography
Dr. Michael Goodrich is a Professor of Computer Science at the University of California,
Irvine, where he has served since 2001. Prior to that, he was a Professor of Computer
Science at Johns Hopkins University for 14 years. He received his PhD in Computer
Sciences from Purdue University, in West Lafayette, Indiana, in 1987 and his B.A.
in Mathematics and Computer Science at Calvin College, in Grand Rapids, Michigan, in 1983.
He has published nearly 200 research papers and is a co-author on three textbooks: Data Structures and Algorithms in Java, which is now in its 4th edition, Data Structures and Algorithms in C++, and Algorithm Design: Foundations, Analysis, and Internet Examples. He is on the editorial board of Journal of Computer and System Sciences, Computational Geometry: Theory and Applications, International Journal of Computational Geometry and Applications, and Journal of Graph Algorithms and Applications. In 1996 he received an ACM Recognition of Service Award and in 2002 he received a Technology Transfer Award from DARPA. He is a member of ACM, a senior member of IEEE, and a member of the Sigma Xi Scientific Research Honor Society. His research has been funded by grants from NSF, including two ITR grants and a Research Initiation Award, as well as grants from DARPA, NSA, and a MURI grant from ARO led by Franco Preparata as Principal Investigator.
Dr. Goodrich has done research in parallel algorithms, computational geometry, combinatorial algorithms, and computer security. He has also published several pioneering papers on external memory algorithms for problems in graph algorithms and computational geometry.
Abstract
The analysis of 3D structures of objects and the identification of regions of
similarity among structures arise in diverse applications and have been the
subject of much study.
In computational biology, the search for common structural motifs in a set of
proteins is believed to help reveal functional relationships among different
proteins and to lead to the discovery of evolutionary relations not evident
from the amino-acid sequence alone.
In this talk, I discuss various matching techniques and present recent
approaches to protein surface comparison that can be useful in docking
applications.
Biography
Concettina Guerra earned her Dr. Sc. degree in Mathematics (with honors) from
the University of Naples in 1972. She has been on the faculty of the Dept. of
Computer Sciences of Purdue University and of the Dept. of Informatics of the
University of Rome, and has visited several institutions in the US and in
Europe, including Carnegie Mellon University, Rensselaer Polytechnic Institute, and the
University of Helsinki.
Concettina Guerra works in the areas of Computational Biology and Computer Vision. Her recent interests fall in the domains of protein classification, recognition and docking, where she develops geometric approaches to such tasks as protein comparison and identification of sites on molecular surfaces that are likely to be of biological interest.
She has organized and served on the committees of several meetings; most recently, she has been conference chair of RECOMB (2006), co-director of the Lipari School on Proteomes and Proteins (2006), and co-chair and a founding member of the steering committee of 3DPVT (2002).
Abstract
Stable matching is one of the oldest problems studied from
an algorithmic point of view. It also has many real-world
applications, including the National Resident Matching Program in the
US, which started in 1952. Thus the problem has a very long history of
research, but there still remain many interesting open questions. For
example, for its most general version, allowing both ties and
incomplete lists, nothing was known about its complexity until 1999.
Very recently, Iwama, Miyazaki and Yamauchi gave an approximation
algorithm whose approximation factor is strictly less than two (SODA
07). This talk explains why this problem is so interesting and deep,
from several different angles, including the newest result mentioned
above.
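For readers unfamiliar with the problem, the basic version (complete preference lists, no ties) is solved by the classical deferred-acceptance algorithm of Gale and Shapley; a minimal sketch follows (function and variable names are illustrative, and this is not the approximation algorithm discussed in the talk):

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance for complete, strict preference lists.
    men_prefs[m] and women_prefs[w] list partners in decreasing preference.
    Returns a stable matching as a dict man -> woman."""
    # rank[w][m] = position of m in w's list, for O(1) comparisons
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # next woman each man proposes to
    engaged_to = {}                          # woman -> current partner
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                # w was free: accept
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free.append(engaged_to[w])       # w prefers m: jilt old partner
            engaged_to[w] = m
        else:
            free.append(m)                   # w rejects m; he tries again
    return {m: w for w, m in engaged_to.items()}

print(gale_shapley({'a': ['x', 'y'], 'b': ['y', 'x']},
                   {'x': ['a', 'b'], 'y': ['b', 'a']}))  # {'a': 'x', 'b': 'y'}
```

The hardness results mentioned above concern the extensions of this setting with ties and incomplete lists, where this simple scan no longer suffices.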
Biography
Kazuo Iwama received his PhD from Kyoto University in 1980, and has held positions at
Kyoto Sangyo University (1978-90), Kyushu University (1990-97), and Kyoto University
(1997-present), where he is now a Professor in the School of Informatics. He serves on the
editorial boards of Information Processing Letters, Parallel Computing, the
International Journal of Foundations of Computer Science, and others.
He is a Council Member of the European Association for Theoretical Computer
Science (EATCS), a Steering Committee Member of the European Symposium on
Algorithms (ESA), and Chair of the New Horizons in Computing (NHC)
project.
Franco Preparata loves Kyoto and has had a long relationship (since well before I knew him) with the research group of Shuzo Yajima and Yahiko Kambayashi at Kyoto University. After I joined this group, Franco occasionally visited us; the two visits most important to me were in the mid '80s and in '97. It was he who pointed out that the string problem I was working on in the early '80s was nothing other than parallel computation, which led me to that topic and, more importantly, to algorithms from automata theory. Also, he gave a very impressive talk on Grover's algorithm in '97, which made me interested in quantum computation, now one of my most important research topics.
Abstract
In this talk we present an overview of problems related to sequence
manipulation, including searching subsequences of maximum density or
maximum sum, and selecting subsequences of a certain density or a
given rank, with or without length restrictions. Problem transformations
and the use of efficient data structures and problem-solving methods
will be presented. These problem-solving methods are fundamental to
computational problems that arise, for example, in bioinformatics.
(Joint work with Tien-Ching Lin.)
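As a concrete instance of the maximum-sum problem mentioned above, the unrestricted case (no length constraints) admits a classic linear-time scan; a minimal sketch (Kadane's algorithm, not necessarily the method of the talk, with illustrative names):

```python
def max_sum_segment(a):
    """Return (best_sum, start, end) for the maximum-sum contiguous
    segment a[start:end] of a non-empty list, via Kadane's linear scan."""
    best, best_start, best_end = a[0], 0, 1
    cur, cur_start = a[0], 0
    for i in range(1, len(a)):
        if cur < 0:                      # restarting at i beats extending
            cur, cur_start = a[i], i
        else:
            cur += a[i]
        if cur > best:
            best, best_start, best_end = cur, cur_start, i + 1
    return best, best_start, best_end

print(max_sum_segment([2, -3, 4, -1, 2, -5, 1]))  # (5, 2, 5): segment [4, -1, 2]
```

The length-restricted and maximum-density variants surveyed in the talk require more machinery than this simple scan.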
Biography
Dr. Lee received his B.S. degree in Electrical Engineering from the National Taiwan
University in 1971, and the M.S. and Ph.D. degrees in Computer Science from the
University of Illinois at Urbana-Champaign in 1976 and 1978, respectively.
Dr. Lee has been with the Institute of Information Science, Academia Sinica, Taiwan,
where he has been Director and a Distinguished Research Fellow since July 1, 1998. Prior to
joining the Institute, he was a Professor in the Department of Electrical Engineering
and Computer Science, Northwestern University, where he had worked since 1978.
Dr. Lee holds a joint appointment with the Department of Computer Science and
Information Engineering and the Graduate Institute of Electronics Engineering,
National Taiwan University, and also serves as Director of the Taiwan Information Security
Center (TWISC) and Chief Executive Officer of the National Digital Archives
Program, both sponsored by the National Science Council, Taiwan.
Dr. Lee's research interests include design and analysis of algorithms, computational geometry, web-based computing, algorithm visualization, software tools development, bio-informatics, digital libraries, and software security. He has published over 120 technical articles in scientific journals and conference proceedings, and he also holds three U.S. patents and one R.O.C. patent. He is an Editor of Algorithmica, an Editor of the ACM Journal on Computers and Cultural Heritage, Co-Chief Editor of the Int'l Journal of Computational Geometry & Applications, Editor-in-Chief of the Journal of Information Science and Engineering, and Series Editor of the Lecture Notes Series on Computing for World Scientific Publishing Co., Singapore. He is a Fellow of the IEEE, a Fellow of the ACM, and an elected Member of Academia Sinica (2004).
During the years 1974-1978, D. T. was a graduate student in the Department of Computer Science, University of Illinois at Urbana-Champaign, and worked at the Coordinated Science Laboratory from 1975 through 1978 under the supervision of Prof. Franco Preparata. The Coordinated Science Laboratory was established in 1951, and is one of the premier laboratories in the U.S. in information technology and telecommunications. It was during these years with Franco that D.T. received excellent training in research and in problem-solving skills in the scientific discipline. And it was during those years with Franco that D.T. was constantly and consistently influenced by his example of being a sound and whole individual, striving for knowledge of ancient culture and the modern world, and living a life full of energy, affection, hope, and respect.
Abstract
Semantic constraints for the vertices of a graph G define the
placement that these vertices must have in a readable visualization
of G. For example, in the context of database design, some particularly
relevant entities of an ER schema may be required to be
represented in the center and/or along the boundary of the
diagram. An effective way of modelling semantic constraints
is to map groups of semantically related
vertices to corresponding given sets of points in the plane.
This talk surveys some recent findings about drawing algorithms that receive as input a planar graph G, a partitioning of the vertices of G into k different semantic categories V0, ..., Vk-1, and k disjoint sets S0, ..., Sk-1 of points in the plane with |Vi|=|Si| (i in {0, ..., k-1}). The desired output is a planar drawing such that the vertices of Vi are mapped onto the points of Si and such that the curve complexity of the edges (i.e. the number of bends along each edge) is kept small. Particular attention will be devoted to proving lower bounds and upper bounds on the curve complexity of these types of drawings for the apparently innocent cases of just two or three semantic categories.
Biography
Giuseppe "Beppe" Liotta received a Ph.D. in Computer Engineering
from the University of Rome ``La Sapienza'' in 1995. From May 1995
to September 1996 he was a Post Doc at Brown University, where he
worked with Franco Preparata and Roberto Tamassia mainly on
computational geometry problems and more specifically on questions
of robust geometric computing. The many exciting hours of research
spent under the supervision of Franco during those years not only
produced scientific results but also strongly influenced the next
years of Beppe's scientific career.
From October 1996 to October 1998 Beppe was an assistant professor (ricercatore) at the Department of Computer Engineering of the University of Rome ``La Sapienza''. He then joined the Department of Electrical Engineering and Computer Science of the University of Perugia, where he served as associate professor from 1998 to 2002. Since 2002 Beppe has been a professor of computer science, and he currently serves as the chair of the studies in Computer Science and Electrical Engineering at the University of Perugia.
His research interests are mainly directed at the design of algorithms for solving problems motivated by graph drawing, computational geometry, and information visualization. On these topics he has edited special issues, written book chapters and surveys, organized conferences and workshops, and published about 100 research papers. He has received awards from EATCS and from ACM. His research has been funded by MIUR, CNR, NATO, the EU, and several industrial sponsors.
Abstract
XML is fast becoming the standard format in which to store, exchange, and publish data over the web.
Two challenges in handling XML are its size (the XML representation of a document is
significantly larger than its native form) and the complexity of its search (XML search
involves path and content searches on labeled tree structures). We address the basic
problems of compression, navigation, and searching of XML documents. Adapting some algorithms
proposed in FOCS 05 for succinct tree representations, we design and implement a compressed
index for XML, called XBzipINDEX, in which the XML document is maintained in a highly
compressed format, and both navigation and searching can be done by uncompressing only a
tiny fraction of the data. Detailed experiments show that XBzipINDEX achieves compression
ratios up to 35% better than those achievable by other tools, and its time performance
on some path and content search operations is orders of magnitude faster. The method is
being patented by the University of Pisa.
Joint work with Paolo Ferragina, Giovanni Manzini, and S. Muthu Muthukrishnan.
Biography
Fabrizio Luccio was born in Tripoli, Libya, in 1938. He is a citizen of Italy.
He received the Dr.Ing. degree in electrical engineering from the Politecnico
di Milano, in 1962, and the Libera Docenza in electronic computers from the
Italian university system in 1968. He is currently a Professor of Informatics
at the University of Pisa.
After industrial experience with Olivetti, he taught and conducted research in various fields of C.S. at the Politecnico di Milano, M.I.T. (Project MAC), the University of Southern California, and New York University, before returning permanently to Italy, where he became a professor at the University of Pisa, serving as department chairman for six years and as coordinator of the Ph.D. program in Informatics for another six years. He spent several sabbatical periods as a visiting professor at U.C.L.A., the University of Illinois, the National University of Singapore, Carleton University in Canada, and the B.U.A.P. in Puebla, Mexico. He has been a visiting scientist at the T.J. Watson Research Center of IBM, USA, and a distinguished foreign scholar at the NTT LSI Laboratories in Morinosato, Japan. He has also been a distinguished scientist in the city of Ottawa, Canada, invited by the municipality to carry on cooperative research with the local universities. He has pursued intense activities with UNESCO for the dissemination of informatics at the university level in developing countries. He has received several scientific and academic prizes, and is a Life Fellow of the IEEE.
For a long time his major scientific interest was in computing systems, particularly in parallel and distributed processing. It is now in algorithms and data structures, with attention to Web problems. He is the author of more than one hundred and fifty papers published in international research journals and conference proceedings, and of four major university textbooks published in Italy.
Abstract
We consider a scenario where sensing devices equipped with wireless
transceivers are distributed over a remote area to acquire and store
data. Stored data are then collected by a mobile data host (e.g. an
aircraft) visiting sporadically the area and establishing wireless
communication with the sensors. Sensors are inexpensive devices
possessing limited storage and limited, renewable power sources
(e.g., tiny batteries fed by solar cells). In any single visit, the
mobile host may be unable to establish communication with some individual
sensor, due to temporary depletion of its battery or to obstacles shadowing
the sensor. The same sensor may again be able to connect to the host
and to upload data in a later visit, but some acquired data may be
lost if the local storage capacity has in the meantime been exceeded. Security
issues are also a major concern if sensors are set to collect sensitive
data in a hostile environment. Reliable and secure data collection in
this scenario is a challenging task, and redundancy appears to be the only
reasonable approach. Setting multiple sensors to collect the same data is
easily feasible, since sensors are cheap resources.
We propose an approach exploiting redundancy, where the same data are acquired by multiple sensors. Acquired data are encoded in a Residue Number System with Non-Pairwise-Prime Moduli (RNS-NPM). RNS-NPM are defined over a set M of composite moduli and a set P of pairwise-prime factors, where every factor p in P divides exactly two moduli in M. Every sensor is tied to a modulus mi in M and stores the residue digit modulo mi of the local readings. Information is thus fragmented and dispersed over the sensors: the residue digits are the fragments; every sensor stores a fragment and every fragment is stored in at least one sensor. Given the redundancy of the code, the information can be reconstructed from the fragments in any subset D' of the set D of all digits, provided the cardinality d' of D' is no less than a threshold depending on the code redundancy. With this approach, in every visit the mobile host attempts connection with all sensors. Some sensors may fail to respond, but the temporary loss of communication does not prevent reconstructing the full information, as long as at least d' residue digits can be collected. It should be observed that sensors acquire data independently of each other, and thus different sensors may read different values, although the readings can be expected to be very close in value unless some fault occurs. Since every sensor stores a single digit of the residue encoding of its own reading and the mobile data host collects every digit from a different sensor, the set of collected digits may be inconsistent. Reconstructing the information from the raw digits may thus produce a wrong result. However, a nice property of RNS-NPM enables the decoding procedure to recover the average of the readings, as long as every reading is within d/4 of the average, where d is the minimum of the factors in the set P. Fragmentation and dispersal of data also save the scarce storage resources of the sensors and contribute to data security.
In fact, the data acquired by the sensors cannot be reconstructed unless all digits in a subset D' of sufficient cardinality are collected and the association of every sensor to a unique modulus in M is known. Eavesdropping on the communication between sensors and the data host, or even seizing sensors, does not allow reconstruction unless the above conditions are satisfied. Of course, this feature can be combined with encryption of the residue digits.
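In the simplified, redundancy-free setting of pairwise-prime moduli (unlike the non-pairwise-prime codes of [BM1980] described above), the reconstruction step is the standard Chinese Remainder computation; a minimal sketch, with illustrative names and a hypothetical reading:

```python
from math import prod

def crt_reconstruct(digits, moduli):
    """Recover x from its residue digits x mod m_i via the Chinese
    Remainder Theorem, assuming pairwise-coprime moduli and
    0 <= x < product(moduli).  The RNS-NPM codes of the talk use
    non-pairwise-prime moduli to obtain redundancy; this sketch covers
    only the redundancy-free case."""
    M = prod(moduli)
    x = 0
    for r, m in zip(digits, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse
    return x % M

reading = 123456                        # hypothetical sensor reading
moduli = [97, 101, 103, 107]            # pairwise-prime, product > reading
digits = [reading % m for m in moduli]  # one digit stored per sensor
print(crt_reconstruct(digits, moduli))  # 123456
```

The redundant codes extend this so that reconstruction still succeeds from a sufficiently large subset of the digits, which is what tolerates unresponsive sensors.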
Joint work with Stefano Chessa.
[BM1980] Barsi, F. and Maestrini, P. "Error codes constructed in residue number systems with non-pairwise-prime moduli". Information and Control, vol. 46 n. 1 (1980), pp. 16-25.
Biography
Piero Maestrini received the Laurea degree in Electronic Engineering from the University
of Pisa and the Scuola Superiore SantAnna in 1962. In 1971 he was also granted the Libera
Docenza, a national academic degree, in Switching Theory and Electronic Computers. From
1962 to 1981 he was a member of the research staff of the Istituto di Elaborazione dell'Informazione
of the National Research Council (IEI-CNR, Pisa). Since 1981 he has been a full
professor of Computer Science in the Computer Science Department of the University of Pisa.
From 1974 to 1981 he chaired the System Architecture Department of IEI-CNR, and from 1974 to 1977
he also directed a joint research center of IEI-CNR and Selenia SpA. In 1986 and 1987 he was
the delegate for research of the Rector of the University of Pisa. From 1987 to 1990 he was the
Dean of the Faculty of Science of the University of Pisa. From 1995 to 2002 he was the director
of the IEI-CNR, Pisa, and subsequently, from 2002 to June 2006, he was the director of
the Istituto di Scienza e Tecnologie dell'Informazione "A. Faedo" of the National Research Council
(ISTI-CNR, Pisa). In 1992 he was honored by the University of Pisa with the Ordine del Cherubino.
He has visited the University of California at Los Angeles, the University of Illinois at
Urbana-Champaign, the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, the Tata Institute
of Fundamental Research, Bombay, India, and the Centre for the Development of Advanced Computing
(C-DAC), Pune and Bangalore, India. He is the author or co-author of about 100 published papers
and two books, on Computer Arithmetic and Operating Systems. His early research addressed Computer Architecture
and Design and Computer Arithmetic. The bulk of his subsequent work has been in Coding Theory, with
major contributions to Arithmetic Residue Codes, and in many theoretical and algorithmic aspects
of the Self-Diagnosis of Multiprocessor Systems. Recent and present research addresses Wireless and Mobile
Networks and Sensor Networks, with prevailing emphasis on routing algorithms, dependability, and data security.
Personal Note: I first met Franco in the early seventies, when for a while he joined the University of Pisa, and I was inspired by his early work on System-Level Diagnosis, a topic that he initiated and that became a recurrent theme of my research. In 1977-78 I visited the University of Illinois and, besides being associated with his research, I found support and friendship from him and his family. From 1998 to 2005, Franco was the most distinguished member of the Advisory Board of IEI-CNR and ISTI-CNR, and I owe him gratitude for invaluable advice and encouragement.
Abstract
Reliable implementation of geometric algorithms is a notoriously difficult
task. In fact, most implementations (academic and commercial) of geometric algorithms
fail on some input instances, i.e., crash or give an incorrect answer.
In the introductory part of the talk, we illustrate the pitfalls of geometric computing [ClassRoomExample] and explain for one algorithm in detail where the problem lies and what goes wrong.
In the main part of the talk, we discuss a recent approach to reliable and efficient geometric computing: controlled perturbation. The approach was introduced by D. Halperin and co-workers [Halperin-Shelton:perturbation, Halperin-Raab:perturbation, Halperin-Leiserowitz] and refined by us [Controlled-Perturbation-General-Strategy, controlled-perturbation]. We show that the approach applies to a large class of geometric algorithms, in particular most algorithms in CGAL (the Computational Geometry Algorithms Library), and yields correct and efficient implementations. The method is easy to use.
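The flavor of both the pitfall and the remedy can be conveyed with a small sketch: the orientation predicate evaluates to exactly zero on degenerate input, where rounding error makes its floating-point sign untrustworthy; controlled perturbation moves the input until every predicate is decided by a safe margin. The bounds EPS and DELTA below are illustrative assumptions, not the rigorously derived values of the papers.

```python
import random

def orient(p, q, r):
    """Orientation predicate: sign of the determinant of (q-p, r-p).
    > 0 is a left turn, < 0 a right turn, 0 means collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

EPS = 1e-9     # assumed bound on the rounding error of orient(); illustrative only
DELTA = 1e-6   # assumed perturbation radius; illustrative only

def controlled_perturbation(points):
    """Re-perturb the input (from the original coordinates each round)
    until every orientation predicate is decided by a margin larger
    than EPS, so plain floating-point evaluation can be trusted."""
    n = len(points)
    while True:
        pts = [(x + random.uniform(-DELTA, DELTA),
                y + random.uniform(-DELTA, DELTA)) for x, y in points]
        if all(abs(orient(pts[i], pts[j], pts[k])) > EPS
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n)):
            return pts

# Three of these points are exactly collinear, so orient() is 0 and no
# floating-point filter can decide the predicate; after perturbation,
# every predicate is bounded safely away from zero.
degenerate = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (0.0, 1.0)]
safe = controlled_perturbation(degenerate)
```

The perturbed input stays within DELTA of the original, so the answer computed on it is the exact answer to a nearby instance; the real method derives EPS from an error analysis of the predicates and DELTA from the algorithm's degeneracy conditions.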
Biography
Prof. Dr. Kurt Mehlhorn is a
Director at the Max-Planck-Institute for Computer Science, Saarbrücken, Germany.
He was born in 1949, received his Ph.D. from Cornell University in 1974, and has
been a professor in Saarbrücken since 1975.
He is the recipient of many awards, including:
Leibniz-Preis der deutschen Forschungsgemeinschaft (1986),
Humboldt-Award (1989),
Karl-Heinz-Beckurts-Award (1994), Member of Academia Europaea (1995),
Konrad-Zuse-Award (1995), ACM Fellow (1999), honorary doctorate degrees from
Magdeburg and Waterloo, and membership in the Berlin-Brandenburg Academy of Sciences and the Leopoldina.
His research interests include data structures, graph algorithms, computational geometry,
parallel algorithms, implementation of algorithms, algorithm engineering.
He has published extensively, including 6 books and close to 200
articles in scientific journals and conferences.
Kurt and Franco have shared two research visits: Kurt spent the summer of 1982 at UIUC, and Franco spent the summer of 1985 in Saarbrücken. These visits resulted in a number of joint publications.
Abstract
We prove a general result bridging the fields of Secure Protocols
and Game Theory.
In game-theoretic terms, we show that ANY normal-form mechanism has an equivalent UNMEDIATED, extensive-form mechanism played with ballots and a ballot box, the venerable devices used throughout the world to privately and correctly compute the tally of secret votes.
In cryptographic terms, we show that, in ANY joint computation, security can be achieved based solely on the players' RATIONALITY, rather than on the HONESTY of some of them.
Our result has broad implications for Mechanism Design; in particular, it enables one to design mechanisms in a MODULAR and COMPETITIVE fashion.
Joint work with Sergei Izmalkov and Matt Lepinski.
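The kind of joint computation at issue, privately and correctly tallying secret votes, can be illustrated by a standard cryptographic device, additive secret sharing. This is a conventional honest-player sketch, not the ballot-box mechanism of the talk (whose point is precisely to dispense with honesty assumptions); the modulus and vote values are illustrative.

```python
import random

P = 2**31 - 1   # a public prime modulus (illustrative)

def share(vote, n):
    """Split a secret vote into n random additive shares summing to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((vote - sum(shares)) % P)
    return shares

def tally(columns):
    """Each player announces the sum of the shares it holds; the grand
    total equals the tally, while any n-1 share columns alone look
    uniformly random and reveal nothing about individual votes."""
    announced = [sum(col) % P for col in columns]
    return sum(announced) % P

votes = [1, 0, 1, 1]                       # one secret bit per player
n = len(votes)
dealt = [share(v, n) for v in votes]       # player i deals shares of its own vote
columns = [[dealt[i][j] for i in range(n)] for j in range(n)]  # shares held by player j
print(tally(columns))   # 3
```

Here privacy rests on players honestly following the protocol; the result of the talk replaces that honesty assumption with the players' rationality, using only physical ballots and a ballot box.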
Biography
Silvio Micali received his Laurea in Mathematics from the University
of Rome in 1978, and his Ph.D. in Computer Science from the University
of California at Berkeley in 1982. He joined MIT's faculty in 1983,
where he is the co-founder and co-leader of the Information and Computer
Security Group. His research focuses on developing the modern theory
of pseudo-random generation, encryption, digital signatures, and secure
protocols. A member of the American Academy of Arts and Sciences, he
has won the Gödel Prize in theoretical computer science and the
RSA Prize in cryptography.
More importantly, Silvio has been a friend of Franco's for 30 years, and continues to share with him jokes and serious thoughts in huge and equal amounts. Important lessons that Silvio learned from Franco:
Grazie Franco!
Abstract
Light-weight embedded systems are now gaining more popularity due to recent
technological advances in fabrication of more powerful tiny processors.
In particular, the field of pervasive computing and medical monitoring is
emerging due to the development of tiny, low-profile computing nodes,
mated with sensors and actuators, in close proximity to the human body.
In distributed embedded systems, which are often battery-powered, it is critical to extend the lifetime of each individual node in order to retain the functionality of the entire network for the longest possible time. We will describe an ε-optimal polynomial-time technique that evenly distributes the communication load over the nodes. This formulation can be exploited to optimize system power consumption or to enhance network reliability. Both power and reliability are among the most significant concerns in medical embedded systems.
Biography
Majid Sarrafzadeh (M'87, SM'92, F'96) (http://www.cs.ucla.edu/~majid) received his Ph.D.
in 1987 from the University of Illinois at Urbana-Champaign in Electrical and Computer
Engineering under the supervision of Professor Franco Preparata.
He joined Northwestern University as an Assistant Professor in 1987.
In 2000, he joined the Computer Science Department at University of California at Los
Angeles (UCLA). His recent research interests lie in the area of Embedded and
Reconfigurable Computing, VLSI CAD, and design and analysis of algorithms.
Dr. Sarrafzadeh is a Fellow of IEEE for his contribution to "Theory and Practice of VLSI Design".
He has served on the technical program committee of numerous conferences
in the area of VLSI Design and CAD, including ICCAD, DAC, EDAC, ISPD, FPGA, and DesignCon.
He has served as committee chair of a number of these conferences. He is on the executive
committee/steering committee of several conferences such as ICCAD, ISPD, and ISQED.
He was the program chair and the general chair of ICCAD, the premier conference
in CAD, in 2004 and 2005, respectively.
Professor Sarrafzadeh has published approximately 350 papers, co-authored 5 books, and is a named inventor on 6 US patents. Dr. Sarrafzadeh is an Associate Editor of ACM Transactions on Design Automation of Electronic Systems (TODAES), of IEEE Transactions on Computers, and of a number of other journals.
Dr. Sarrafzadeh has collaborated with many companies over the past fifteen years, including IBM, Motorola, and many CAD companies. He was the architect of the physical design subsystem of Monterey Design Systems, which was later acquired by Synopsys, and a co-founder of Hier Design, Inc., which was acquired by Xilinx in 2004.
Professor Sarrafzadeh has graduated about 30 Ph.D. students, 11 of whom are working in academia.
Personal Note: I took a course with Franco in 1981 on Advanced Logic. I loved his style of teaching and loved the topic, and I decided to work with him. When I entered grad school in 1982 he offered me an RA, the best news of my life at the time. I studied Channel Routing for my Ph.D. The kinds of things I do today have nothing to do with Channel Routing and are not always deeply theoretical; however, the foundation I obtained as Franco's student has proved most valuable. Quoting the famous Chinese proverb, one of Franco's favorite sayings: "Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime." I must say that I had the best fishing instructor.
Abstract
Franco Preparata has had a major influence on the research directions that many of
us have pursued. In this talk I will reminisce about his impact on my involvement
in VLSI as well as describe my current work on nanotechnology that was inspired by
the early work. The academic community was introduced to the problems of VLSI in the
1970s by interlocutors who bridged the world of chip manufacture and academia.
Very quickly theoreticians formulated and addressed many new and interesting
questions. As we enter the age of nanotechnology similar interesting questions
arise. I will give an overview of VLSI research as well as provide a quick introduction
to the new problems of nanotechnology.
Biography
John Savage began his research career as a coding and information theorist within electrical
engineering. The observed complexity of decoders for error-correcting codes propelled him
to investigate the complexity of decoders. This naturally led into a study of tradeoffs
between computational resources and to the study of circuit complexity, a fundamental
computational measure in terms of which tradeoffs can be expressed. Much of this work was
captured in his 1976 book The Complexity of Computing, which made evident the importance of
circuit complexity to theoretical computer science. In the mid-1980s he also wrote, with two
undergraduates, a textbook for a computer literacy course, and in the late 1990s a nearly
comprehensive introduction to theoretical computer science.
A recurring theme in Prof. Savage's work is the development of fundamental limits on computation, a theme that emerged in the study of decoder complexity. It has been expressed in the development of lower bounds, whether they be on a single resource, such as circuit size or depth, or expressed as tradeoffs between computational resources, such as space and time in serial computation, area and time in VLSI computation, or primary storage capacity and I/O operations in I/O-limited computations. It has also been seen in the classification of problems by their membership in traditional complexity classes, such as P, NP, and BPP. Another recurring theme has been the design and analysis of algorithms, which has been expressed in several areas including serial and parallel computer aided design, distributed computing for scientific computation, and his recent work on computational nanotechnology. The challenge of the latter area is to produce deterministic computing elements when the assembly process is stochastic and unpredictable. Computer architecture, probability theory, and complexity analysis play important roles here.
Abstract
Graphs are important tools for solving many problems
in a multitude of application areas.
In this talk we will describe several results from VLSI
layout, graph drawing, computational geometry, and
biomedical informatics. The common underlying theme
is that in order to obtain these results we use graphs combined
with geometry, a combination originally taught to me by
the master, Franco Preparata.
Biography
Dr. Ioannis (Yanni) G. Tollis is a Professor of Computer Science at the
University of Crete, and the head of the Biomedical Informatics Laboratory
(BMI lab.) at the Institute of Computer Science at FORTH (ICS-FORTH).
He received his Ph. D. degree in Computer Science from the
University of Illinois at Urbana-Champaign in 1987. His Ph.D. thesis
dealt with Algorithms for VLSI Layout and was done under the guidance
of Franco Preparata. The nice mix of graphs, geometry, and VLSI theory that
Franco provided through his advice and insight was extremely important
not only for the thesis itself, but for the rest of Yanni's career.
After his Ph.D. he joined the faculty of The University of Texas at Dallas, where he was a Professor of Computer Science and director of the CAD and Visualization lab until 2004; he is now a research professor there.
He has published seven books, over 110 journal and conference papers, and has given more than 50 invited lectures world-wide. His research has been funded by numerous funding agencies and companies. He is the owner of a U.S. patent and several of his projects have been licensed by companies for commercial distribution. He is editor-in-chief of the electronic Journal of Graph Algorithms and Applications and was a member of the editorial board of the IEEE Transactions on Computers (2000-2004). He has served in several program committees of conferences including, among others, the International Symposium on Graph Drawing, and is a founding member of the steering committee for Graph Drawing.
Abstract
Sequencing by hybridization is a novel DNA sequencing technique in which
an array (SBH chip) of short sequences of nucleotides (probes) is
brought in contact with a solution of (replicas of) the target DNA
sequence. A biochemical method determines the subset of probes that bind
to the target sequence (the spectrum of the sequence), and a combinatorial
method is used to reconstruct the DNA sequence from the spectrum.
Since technology limits the number of probes on the SBH chip, a
challenging combinatorial question is the design of a smallest set of
probes that can sequence almost all DNA strings of a given length.
We'll present combinatorial designs that improve the performance
of SBH, using universal bases (bases that bind to any nucleotide) and
related methods. The talk will focus on the combinatorial and
algorithmic issues, with only a brief introduction to the microbiology
background. (Joint work with Franco Preparata and Alan Frieze.)
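A minimal sketch of classical SBH with standard (non-universal) probes may help fix ideas: compute the spectrum of a target string, then reconstruct it by extending along (k-1)-symbol overlaps. The target sequence and probe length below are illustrative; the talk's designs with universal bases are considerably more powerful.

```python
def spectrum(seq, k):
    """The set of length-k substrings of seq (the probes that bind)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def reconstruct(spec, k, length):
    """Depth-first search for a string of the given length whose spectrum
    equals spec, extending one nucleotide at a time via (k-1)-overlaps."""
    def extend(s):
        if len(s) == length:
            return s if spectrum(s, k) == spec else None
        for kmer in spec:
            if kmer.startswith(s[-(k - 1):]):
                result = extend(s + kmer[-1])
                if result is not None:
                    return result
        return None
    for start in sorted(spec):
        result = extend(start)
        if result is not None:
            return result
    return None

target = "ACGTACGGTCA"
spec = spectrum(target, 4)               # unordered: the chip reveals no positions
print(reconstruct(spec, 4, len(target)))   # ACGTACGGTCA
```

Because the spectrum is an unordered set, some spectra admit several reconstructions; the combinatorial question of the talk is choosing a probe set small enough for a chip yet ambiguous for almost no target of a given length.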
Biography
Eli Upfal is a professor of computer science at Brown University and since 2002 also the
department chair. Before coming to Brown in 1998, he was a researcher and project manager
at the IBM Almaden Research Center in California, and a professor at the Weizmann Institute
in Israel. He received an undergraduate degree in mathematics and statistics and a
doctorate in computer science from the Hebrew University of Jerusalem, Israel.
His research focuses on the design and analysis of algorithms. In particular he is interested in randomized algorithms and probabilistic analysis of algorithms. Applications range from combinatorial and stochastic optimization to routing and communication networks, computational biology, and computational finance.
Francofest Home | francofest@cs.tamu.edu