The Seventh International Workshop on the Algorithmic Foundations of Robotics
New York City, July 16-18, 2006
National Science Foundation | Microsoft Research | NYU | RPI | Texas A&M
We have a stellar line-up of invited speakers for WAFR 2006, including researchers who defined the field and researchers who are today defining its frontiers - in several cases the same people:
Abstract
The paradigm shift of nanotechnology owes a great deal to radical changes
that occurred in microscopy in the mid-eighties. The Scanning Tunneling
Microscope came into existence through the work of Gerd Binnig and
Heinrich Rohrer at the IBM Zurich Research Laboratory. The principle of
the Scanning Tunneling Microscope, or STM, involved tactile sensing as a
means of magnification: a "finger" in the form of a sharp tip, terminated
by a single atom, that used quantum tunneling to feel the atoms one by
one. The images recreated by this feeling process at first provided a
sense of wonderment about the atomic world, which could now be reached in
a manner so intimate that traditional scientists initially shunned it,
perhaps because that intimacy removed the layers of subjectivity and
distance associated with 19th-century science. The STM pushed the limits
not only of microscopy; just as importantly, it took scientists to a new
edge of imagination and creativity: the manipulation of, and
experimentation with, atoms and molecules on an individual basis. Up to
the mid-eighties, scientists tended to experiment on many atoms and
molecules at a time. Our new-found ability to 'see' and to fabricate on
the single-atom and single-molecule scale demonstrated the feasibility
and proof of principle of precise atomic and molecular control on an
individual basis, as well as revealing new scientific insights. In our
human context as tool builders, our survival and success throughout
history have depended on our capability to develop and use new tools, and
the process of remotely touching, repositioning and performing other
mechanical operations on single atoms and molecules is, in a sense, an
ultimate limit of that path. The subject of this talk is partly from the
perspective of science and engineering and partly from the human level of
essentially connecting one's own sight, brain and hands to objects that
are a billion times smaller than ourselves. The two are connected, and I
will describe the evolution, process and technology behind this new
mastery of mind over atom.
Biography
Dr. James K. Gimzewski is a Professor in the Dept. of Chemistry and
Biochemistry at UCLA, and Member of the California NanoSystems Institute
(CNSI). Previously, he worked 20 years at the IBM Corporate Research
Laboratories in Zurich, Switzerland. His achievements include: the Feynman
Prize in Nanotechnology (1997), the Discover Award (1997), the "Wired 25"
Award (Wired magazine, 1998), and the Institute of Physics' "Duddell" prize
and medal (2001). He is recognized by the Guinness Book of Records for the
world's smallest calculator. He is a Fellow of the Royal Academy of
Engineering, a Fellow of the Institute of Physics and a Fellow of the World
Innovation Foundation. He was co-director of "Nano" an art and science
exhibit of nanotechnology (LACMA 2003-2006). His work has received
worldwide press coverage.
Abstract
Computer animations and virtual environments both require a controllable
source of motion for their characters. Most of the currently available
technologies require significant training and are not useful tools for
casual users. Over the past few years, we have explored several
different approaches to this problem. Each solution relies on the information
about natural human motion inherent in a motion capture database. For
example, the user can sketch an approximate path for an animated character which
is then refined by searching a graph constructed from a motion database.
We can also find a natural looking motion for a particular behavior
based on sparse constraints from the user (foot contact locations and timing,
for example) by optimizing in a low-dimensional, behavior-specific space
found from motion capture. And finally, we have developed performance
animation systems that use video input of the user to build a local
model of the user's motion and reproduce it on an animated character.
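The sketch-to-motion idea can be illustrated in miniature. The toy below invents a tiny "motion graph" (three clips with fixed 2D displacements and hand-written transition rules - all hypothetical, far simpler than a real motion capture database) and greedily picks, at each step, the permissible clip that keeps the character closest to the next point on the user's sketched path; real systems search the graph globally rather than greedily.

```python
import math

# Each toy clip moves the character by a fixed 2D displacement; the
# `follows` table says which clips may follow which (transition compatibility).
clips = {"step_fwd": (0.0, 1.0), "turn_l": (-0.5, 0.5), "turn_r": (0.5, 0.5)}
follows = {"step_fwd": ["step_fwd", "turn_l", "turn_r"],
           "turn_l": ["step_fwd"], "turn_r": ["step_fwd"]}

def follow_sketch(sketch, start="step_fwd"):
    """Greedily pick, for each target point on the sketched path, the
    successor clip that lands the character closest to that point."""
    pos, clip, plan = (0.0, 0.0), start, [start]
    for target in sketch:
        clip = min(follows[clip],
                   key=lambda c: math.dist((pos[0] + clips[c][0],
                                            pos[1] + clips[c][1]), target))
        pos = (pos[0] + clips[clip][0], pos[1] + clips[clip][1])
        plan.append(clip)
    return plan

# A straight-ahead sketch should be followed with forward steps only.
plan = follow_sketch([(0.0, 1.0), (0.0, 2.0), (0.0, 3.0)])
```

Replacing the greedy choice with a shortest-path search over the same graph recovers the flavor of the path-refinement approach described in the abstract.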
Biography
Jessica Hodgins is a Professor in the Robotics Institute and Computer
Science Department at Carnegie Mellon University. Prior to moving to
CMU in 2000, she was an Associate Professor and Assistant Dean in the
College of Computing at Georgia Institute of Technology. She received
her Ph.D. in Computer Science from Carnegie Mellon University in 1989.
Her research focuses on computer graphics, animation, and robotics.
She has received an NSF Young Investigator Award, a Packard Fellowship,
and a Sloan Fellowship. She was editor-in-chief of ACM Transactions on
Graphics from 2000-2002 and SIGGRAPH Papers Chair in 2003.
Abstract
Why is probabilistic roadmap (PRM) planning probabilistic? Since no inherent randomness or
uncertainty exists in the classic formulation of motion planning problems, one may wonder why
probabilistic sampling helps to solve them. In this talk, I will argue that the probabilistic nature
of PRM planning follows from the foundational choice to avoid computing an exact representation of
the robot's free space F. So, a PRM planner never knows the exact shape of this space. It works very
much like a robot exploring an unknown environment to build a map. At any time, many hypotheses on
the shape/connectivity of the free space are consistent with the information gathered so far, and
the probability measure used by the planner to sample F derives from this uncertainty. Hence, PRM
planning trades the cost of computing F exactly against the cost of dealing with uncertainty. This
choice is beneficial only if probabilistic sampling leads to a roadmap that is much smaller in size
than that of an exact representation of F and still represents F well enough to answer motion planning
queries correctly. This view of PRM leads to a series of other questions, in particular: What does the
empirical success of PRM planning imply? Based on both previous and new results, I will argue that
this success implies that most free spaces encountered in practice have favorable visibility properties,
so that surprisingly small roadmaps are often sufficient to answer queries correctly. This fact was
a priori unsuspected, but in retrospect it is not so surprising. Poor visibility is caused by narrow
passages, which are unstable geometric features: small random perturbations of workspace geometry are
likely to either eliminate them or make them wider. So, narrow passages rarely occur by accident. I
will show, however, that, as visibility is not uniformly favorable across a free space, non-uniform
probabilistic sampling measures are critical to speed up PRM planning. In comparison, randomness
plays a minor role. I will conclude the talk by suggesting new research directions to improve PRM planning.
Joint work with David Hsu and Hanna Kurniawati, Computer Science Department, National University of Singapore.
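As a concrete illustration of the sampling-based idea discussed above, here is a minimal PRM construction sketch. The 2D world, the disc obstacle, the connection radius, and the sample count are all invented for illustration; a real planner would use proper collision checking, nearest-neighbor data structures, and graph search over the roadmap to answer queries.

```python
import math
import random

def prm(sample_free, is_edge_free, n_samples=200, radius=0.3):
    """Build a probabilistic roadmap: sample free configurations, then
    connect nearby pairs whose straight-line edge stays in free space."""
    nodes = [sample_free() for _ in range(n_samples)]
    edges = {i: [] for i in range(n_samples)}
    for i in range(n_samples):
        for j in range(i + 1, n_samples):
            if (math.dist(nodes[i], nodes[j]) < radius
                    and is_edge_free(nodes[i], nodes[j])):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges

# Toy world: free space F is the unit square minus a disc obstacle.
def in_free(q):
    return math.dist(q, (0.5, 0.5)) > 0.2

def sample_free():
    # Rejection sampling: the planner only ever sees samples of F,
    # never an exact representation of F itself.
    while True:
        q = (random.random(), random.random())
        if in_free(q):
            return q

def is_edge_free(a, b, steps=20):
    # Check the edge by testing interpolated points along it.
    return all(in_free((a[0] + t * (b[0] - a[0]),
                        a[1] + t * (b[1] - a[1])))
               for t in (k / steps for k in range(steps + 1)))

nodes, edges = prm(sample_free, is_edge_free)
```

Note that the roadmap here has only 200 nodes - tiny compared with any exact representation of F - which is exactly the trade-off the abstract describes.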
Biography
Jean-Claude Latombe is the Kumagai Professor of Computer Science at Stanford University. He received his
PhD from the National Polytechnic Institute of Grenoble (INPG) in 1977. He was on the faculty of INPG from
1980 to 1984, when he joined ITMI (Industry and Technology for Machine Intelligence), a company that he
had co-founded in 1982. He moved to Stanford in 1987, where his main research interests are in Artificial
Intelligence, Robotics, Computational Biology, Computer-Aided Surgery, and Graphic Animation. He served
as the Chairman of the Computer Science Department from 1997 to 2001, and on the Leadership Council of BioX,
a multidisciplinary program centered on Biology, from 2002 to 2004. He has held visiting professor positions
at the National University of Singapore, the Indian Institute of Technology in Kanpur, and the TEC of
Monterrey (Mexico).
Abstract
Structural Biology and Robotics share a central concern with tree-like structures with many
degrees of motion freedom. This talk draws examples from drug-activity prediction, protein design,
computer vision and robot grasping to illustrate some problem formulations and algorithmic
techniques of common interest in both disciplines; in particular, I will discuss multiple-instance
learning and maximum a posteriori assignment problems. I will conclude with some speculation into
the role of machine learning and statistical methods in robot manipulation.
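A miniature example of the multiple-instance setting mentioned above: each bag (say, a molecule's set of conformations) is labeled positive if at least one of its instances is "active". The 1D interval learner and the toy data below are invented for illustration - real MIL methods, such as learning axis-parallel regions in many dimensions, are far more elaborate.

```python
def fit_interval(bags, labels):
    """Brute-force MIL in 1D: find the tightest interval [lo, hi] that
    contains at least one instance of every positive bag and no instance
    of any negative bag (fine for toy-sized data)."""
    values = sorted(x for bag in bags for x in bag)
    best = None
    for lo in values:
        for hi in values:
            if lo > hi:
                continue
            # A bag is predicted positive iff some instance falls inside.
            pred = [any(lo <= x <= hi for x in bag) for bag in bags]
            if pred == labels and (best is None or hi - lo < best[1] - best[0]):
                best = (lo, hi)
    return best

# Toy data: four bags of 1D instances, one negative bag.
bags = [[0.1, 0.9], [0.2, 0.5], [0.3], [0.85, 0.95]]
labels = [True, True, False, True]
interval = fit_interval(bags, labels)
```

The key MIL twist is visible in the `pred` line: supervision is only at the bag level, so the learner must decide which instance in each positive bag is the "active" one.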
Biography
Tomás Lozano-Pérez is the TIBCO Founders Professor of Computer
Science and Engineering at MIT, where he is a member of the Computer
Science and Artificial Intelligence Laboratory. He has been on the MIT
faculty since 1981. From 1990 to 1994, he was a Senior Staff Fellow at Arris
Pharmaceuticals. He has been Associate
Director of the Artificial Intelligence Laboratory and Associate Head
for Computer Science of MIT's Department of Electrical Engineering and
Computer Science. His research has been in robotics
(configuration-space approach to motion planning), computer vision
(interpretation-tree approach to object recognition), machine learning
(multiple-instance learning), medical imaging (computer-assisted
surgery) and computational chemistry (drug activity prediction and
protein structure determination). He has been
co-editor of the International Journal of Robotics Research and a
recipient of a Presidential Young Investigator Award from the NSF.
Abstract
Force driven motions in biology begin close to the atomic level with the dynamic self-assembly
and disassembly of 'tracks' traversed by energy-consuming motor molecules. These nano-mechanisms
are used to organize surprisingly complex transport processes within cells; to achieve the
intricate dance involved in chromosome separation and cell separation during mitosis; and to
give all cells 'micro-muscles' which can be used to achieve motility. During embryo development,
entire cell populations, guided by synchronized patterns of chemical signals, migrate to organize
tissues and also to form the connections on which the nervous system relies.
This talk will introduce some of these fascinating and still poorly understood mechanisms, which may be a fruitful area for collaboration between roboticists and molecular cell biologists.
Biography
Jack Schwartz, recently retired from New York University's Courant Institute of Mathematical
Sciences, has worked in a wide variety of fields including pure mathematics, hardware and
software design, multimedia, robotics, psychophysics of vision, and most recently biology.
He is a former Director of NYU's robotics and multimedia laboratories, and the author
(with Micha Sharir of Tel Aviv University) of a series of early papers on algorithmic
motion-planning techniques.
Abstract
The DARPA Grand Challenge has been the most significant challenge to
the mobile robotics community in more than a decade. The challenge
was to build an autonomous robot capable of traversing 132 miles of
unrehearsed desert terrain in less than 10 hours. In 2004, the best
robot traveled only 7.3 miles. In 2005, Stanford won the challenge and the
$2M prize money by successfully traversing the course in less than 7
hours. This talk, delivered by the leader of the Stanford Racing
Team, will provide insights into the software architecture of Stanford's
winning robot. The robot relied massively on machine learning and
probabilistic modeling for sensor interpretation, and robot motion
planning algorithms for vehicle guidance and control. The speaker will
explain some of the basic algorithms and share some of the excitement
characterizing this historic event. He will also discuss the
implications of this work for the future of transportation.
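As a flavor of the probabilistic modeling for sensor interpretation mentioned above, here is a toy 1D Kalman filter fusing noisy position measurements with a motion prediction. This is purely illustrative - it is not the Stanford vehicle's actual software, and all noise parameters and measurements are invented.

```python
def kalman_step(mean, var, z, motion=1.0, motion_var=0.5, meas_var=0.25):
    """One predict-update cycle of a 1D Kalman filter."""
    # Predict: the vehicle moves forward; uncertainty grows.
    mean, var = mean + motion, var + motion_var
    # Update: blend in the measurement z, weighted by the Kalman gain.
    gain = var / (var + meas_var)
    return mean + gain * (z - mean), (1 - gain) * var

# Fuse a short sequence of noisy position readings.
mean, var = 0.0, 1.0
for z in [1.1, 1.9, 3.2, 4.0]:
    mean, var = kalman_step(mean, var, z)
```

After a few steps the estimate tracks the measurements closely while the variance settles well below its initial value - the essence of trading off a motion model against noisy sensors.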
Biography
Professor Sebastian Thrun is Director of the Stanford Artificial
Intelligence Laboratory (SAIL). He has published 300 refereed articles
(including several books), has won six best paper awards, a German Olympus
Award, and an NSF CAREER Award, and is a Fellow of the AAAI.
Thrun's research focuses on robotics, machine learning, and artificial
intelligence.