# Books

- M. Spong, S. Hutchinson, M. Vidyasagar,
Robot Modeling and Control, John Wiley and Sons, New York, 2006.

- H. Choset, K. M. Lynch, S. Hutchinson,
G. Kantor, W. Burgard, L. E. Kavraki and S. Thrun,
Principles of Robot Motion: Theory, Algorithms, and Implementations, MIT Press, Cambridge, MA, 2005.

- J.-D. Boissonnat, J. Burdick, K. Goldberg and S. Hutchinson, eds.,
Algorithmic Foundations of Robotics V, Springer-Verlag, Heidelberg, Germany, 2003.


# Articles

- G. Lopez-Nicolas, N. Gans, S. Bhattacharya,
C. Sagues, J. J. Guerrero, and S. Hutchinson,
Homography-Based Control Scheme for Mobile Robots With Nonholonomic and Field-of-View Constraints, *IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics*, Vol. 40, No. 4, Aug. 2010, pp. 1115-1127.

Abstract: In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. Selecting the appropriate control law does, however, require a homography decomposition before navigation begins. We provide a controllability and stability analysis for our system and give experimental results.

@ARTICLE{5337950, author={Lopez-Nicolas, G. and Gans, N. R. and Bhattacharya, S. and Sagues, C. and Guerrero, J. J. and Hutchinson, S.}, title={Homography-Based Control Scheme for Mobile Robots With Nonholonomic and Field-of-View Constraints}, journal={{IEEE} Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics}, year={2010}, month={aug}, volume={40}, number={4}, pages={1115-1127}}

- S. Bhattacharya and S. Hutchinson,
On the Existence of Nash Equilibrium for a Two-player Pursuit-Evasion Game with Visibility Constraints, *Int'l Journal of Robotics Research*, Vol. 29, No. 7, 2010, pp. 831-839.

Abstract: In this paper, we present a game-theoretic analysis of a visibility-based pursuit-evasion game in a planar environment containing obstacles. The pursuer and the evader are holonomic and have bounded speeds. Both players have a complete map of the environment. Both players have omnidirectional vision and have knowledge about each other's current position as long as they are visible to each other. The pursuer wants to maintain visibility of the evader for the maximum possible time, and the evader wants to escape the pursuer's sight as soon as possible. Under this information structure, we present necessary and sufficient conditions for surveillance and escape. We present strategies for the players that are in Nash equilibrium. The strategies are a function of the value of the game. Using these strategies, we construct a value function by integrating the adjoint equations backward in time from the termination situations provided by the corners in the environment. From these value functions we recompute the control strategies to obtain optimal trajectories for the players near the termination situation. This is the first work that presents necessary and sufficient conditions for tracking in a visibility-based pursuit-evasion game and presents the equilibrium strategies for the players.

@article{BhatHut10, author = {S. Bhattacharya and S. Hutchinson}, title = {On the Existence of {N}ash Equilibrium for a Two-player Pursuit-Evasion Game with Visibility Constraints}, journal = {International Journal of Robotics Research}, volume = {29}, number = 7, year = 2010, pages = {831-839}}

- A. Sarmiento, R. Murrieta and S. Hutchinson,
An Efficient Motion Strategy to Compute Expected-Time Locally Optimal Continuous Search Paths in Known Environments, *Advanced Robotics*, Vol. 23, Nos. 12-13, 2009, pp. 1533-1560.

Abstract: In this paper we address the problem of finding time-optimal search paths in known environments. In particular, we address the problem of searching a known environment for an object whose unknown location is characterized by a known probability density function (PDF). With this formulation, the time required to find the object is a random variable induced by the choice of search path together with the PDF for the object's location. The optimization problem we consider is that of finding the path that minimizes the expected value of the time required to find the object. As the complexity of the problem precludes finding an exact optimal solution, we propose a two-level, heuristic approach to finding the optimal search path. At the top level, we use a decomposition of the workspace based on critical curves to impose a qualitative structure on the solution trajectory. At the lower level, individual segments of this trajectory are refined using local numerical optimization methods. We have implemented the algorithm and present simulation results for the particular case when the object's location is specified by the uniform PDF.

@article{SarMurHut09, author = {A. Sarmiento and R. Murrieta and S. Hutchinson}, title = {An Efficient Motion Strategy to Compute Expected-Time Locally Optimal Continuous Search Paths in Known Environments}, journal = {Advanced Robotics}, volume = 23, number = {12-13}, year = 2009, pages = {1533-1560}}
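
The optimization objective described in the abstract above, minimizing the expected time to find the object, can be sketched with a simple discretized computation. This is a hypothetical discretization for illustration only; the paper works with continuous paths and a critical-curve decomposition of the workspace.

```python
# Toy illustration: expected time to find an object along a search path.
# Assume the environment is discretized into cells, each with a known
# probability p_i of containing the object, and a candidate path first
# sees cell i at time t_i.  The expected detection time is then
# E[T] = sum_i p_i * t_i.  (Hypothetical setup, not the paper's algorithm.)

def expected_detection_time(visit_times, probs):
    """E[T] = sum_i p_i * t_i, assuming the probabilities sum to 1."""
    assert abs(sum(probs) - 1.0) < 1e-9
    return sum(p * t for p, t in zip(probs, visit_times))

# Two candidate orderings over three cells with probabilities 0.6, 0.3, 0.1:
probs = [0.6, 0.3, 0.1]
path_a = [1.0, 2.0, 3.0]   # sees the likeliest cell first
path_b = [3.0, 2.0, 1.0]   # sees it last
# Visiting high-probability regions early lowers the expected time.
assert expected_detection_time(path_a, probs) < expected_detection_time(path_b, probs)
```

The same trade-off drives the paper's heuristic: the choice of which regions to sweep first dominates the expected-time cost, while local refinement polishes each segment.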

- N. Gans and S. Hutchinson,
Multi-Attribute Utility Analysis in the Choice of Vision-Based Robot Controllers, *The Int'l Journal of Optomechatronics*, Vol. 2, No. 3, 2008, pp. 326-360.

Abstract: Multi-attribute utility analysis is an ideal tool for comparing the disparate performance of multiple visual servo controllers. Its strength lies in the fact that very different metrics can be compared and that it takes into account human preferences and risk attitudes. In this article, multi-attribute utility analysis is used to choose between multiple visual servo controllers and choices of camera lenses. The resulting visual servo controllers are suited to the needs of a specific user for specific tasks.

@article{GanHut08, author = {N. Gans and S. Hutchinson}, title = {Multi-Attribute Utility Analysis in the Choice of Vision-Based Robot Controllers}, journal = {The Int'l Journal of Optomechatronics}, volume = 2, number = 3, year = 2008, pages = {326-360}}

- F. Chaumette and S. Hutchinson,
Visual Servo Control, Part II: Advanced Approaches, *IEEE Robotics and Automation Magazine*, Vol. 14, No. 1, Mar. 2007, pp. 109-118.

Abstract: This article is the second of a two-part tutorial on visual servo control. In this tutorial, we have considered only velocity controllers, which are convenient for most classical robot arms. However, the dynamics of the robot must of course be taken into account for high-speed tasks, or when dealing with nonholonomic or underactuated mobile robots. As for the sensor, we have considered geometric features from a classical perspective camera. Features related to image motion, or coming from other vision sensors, require revisiting the modeling issues in order to select adequate visual features. Finally, fusing visual features with data from other sensors at the level of the control scheme will open new research topics.

@article{ChaHut07, author = {F. Chaumette and S. Hutchinson}, title = {Visual Servo Control, Part {II}: Advanced Approaches}, journal = {IEEE Robotics and Automation Magazine}, volume = 14, number = 1, month = mar, year = 2007, pages = {109-118}}

- N. Gans and S. Hutchinson,
Stable Visual Servoing through Hybrid Switched-System Control, *IEEE Trans. on Robotics*, Vol. 23, No. 3, 2007, pp. 530-540.

Abstract: Visual servoing methods are commonly classified as image-based or position-based, depending on whether image features or the camera position define the signal error in the feedback loop of the control law. Choosing one method over the other gives asymptotic stability of the chosen error but surrenders control over the other. This can lead to system failure if feature points are lost or the robot moves to the end of its reachable space. We present a hybrid switched-system visual servo method that utilizes both image-based and position-based control laws. We prove the stability of a specific, state-based switching scheme and present simulated and experimental results.

@article{GansHut07, author = {N. Gans and S. Hutchinson}, title = {Stable Visual Servoing Through Hybrid Switched-System Control}, journal = {IEEE Trans. on Robotics}, volume = 23, number = 3, month = jun, year = 2007, pages = {530-540}}

- R. Murrieta-Cid, T. Muppirala, A. Sarmiento,
S. Bhattacharya, and S. Hutchinson,
Surveillance Strategies for a Pursuer with Finite Sensor Range, *Int'l Journal of Robotics Research*, Vol. 26, No. 3, 2007, pp. 233-253.

Abstract: This paper addresses the pursuit-evasion problem of maintaining surveillance by a pursuer of an evader in a world populated by polygonal obstacles. This requires the pursuer to plan collision-free motions that honor distance constraints imposed by sensor capabilities, while avoiding occlusion of the evader by any obstacle. The paper extends the three-dimensional cellular decomposition of Schwartz and Sharir to represent the four-dimensional configuration space of the pursuer-evader system, and derives necessary conditions for surveillance (equivalently, sufficient conditions for escape) in terms of this new representation. A game-theoretic formulation of the problem is then given, and this formulation is used to characterize optimal escape trajectories for the evader. A shooting algorithm is proposed that finds these trajectories using the minimum principle. Finally, noting the similarities between this surveillance problem and the problem of cooperative manipulation by two robots, several cooperation strategies are presented that maximize system performance for cooperative motions.

@article{MurMupSar06, author = {R. Murrieta-Cid and T. Muppirala and A. Sarmiento and S. Bhattacharya and S. Hutchinson}, title = {Surveillance Strategies for a Pursuer with Finite Sensor Range}, journal = {International Journal of Robotics Research}, volume = 26, number = 3, year = 2007, pages = {233-253}}

- S. Bhattacharya, R. Murrieta-Cid, and S. Hutchinson,
Optimal Paths for Landmark-based Navigation by Differential Drive Vehicles with Field-of-View Constraints, *IEEE Trans. on Robotics*, Vol. 23, No. 1, Feb., 2007, pp. 47-59.

Abstract: In this paper, we consider the problem of planning optimal paths for a differential-drive robot with limited sensing that must maintain visibility of a fixed landmark as it navigates in its environment. In particular, we assume that the robot's vision sensor has a limited field of view (FOV), and that the fixed landmark must remain within the FOV throughout the robot's motion. We first investigate the nature of extremal paths that satisfy the FOV constraint. These extremal paths saturate the camera pan angle. We then show that optimal paths are composed of straight-line segments and sections of these extremal paths. We provide a complete characterization of the shortest paths for the system by partitioning the plane into a set of disjoint regions, such that the structure of the optimal path is invariant over the individual regions.

@article{BhaMurHut06, author = {S. Bhattacharya and R. Murrieta-Cid and S. Hutchinson}, title = {Optimal Paths for Landmark-based Navigation by Differential Drive Vehicles with Field-of-View Constraints}, journal = {IEEE Trans. on Robotics}, volume = 23, number = 1, month = feb, year = 2007, pages = {47-59}}

- B. Tovar, L. Munoz-Gomez, R. Murrieta-Cid, M. Alencastre-Miranda,
R. Monroy and S. Hutchinson,
Planning Exploration Strategies for Simultaneous Localization and Mapping, *Journal of Robotics and Autonomous Systems*, Vol. 54, Iss. 4, Apr., 2006, pp. 314-331.

Abstract: In this paper, we present techniques that allow one or multiple mobile robots to efficiently explore and model their environment. While much existing research in the area of Simultaneous Localization and Mapping (SLAM) focuses on issues related to uncertainty in sensor data, our work focuses on the problem of planning optimal exploration strategies. We develop a utility function that measures the quality of proposed sensing locations, give a randomized algorithm for selecting an optimal next sensing location, and provide methods for extracting features from sensor data and merging these into an incrementally constructed map.

We also provide an efficient algorithm driven by our utility function. This algorithm is able to explore several steps ahead without incurring too high a computational cost. We have compared this exploration strategy with a totally greedy algorithm that optimizes our utility function with a one-step look-ahead.

The planning algorithms which have been developed operate using simple but flexible models of the robot sensors and actuator abilities. Techniques that allow implementation of these sensor models on top of the capabilities of actual sensors have been provided.

All of the proposed algorithms have been implemented either on real robots (for the case of individual robots) or in simulation (for the case of multiple robots), and experimental results are given.

@article{TovMunCid06, author = {B. Tovar and L. Munoz-Gomez and R. Murrieta-Cid and M. Alencastre-Miranda and R. Monroy and S. Hutchinson}, title = {Planning Exploration Strategies for Simultaneous Localization and Mapping}, journal = {Journal of Robotics and Autonomous Systems}, volume = 54, issue = 4, month = apr, year = 2006, pages = {314-331}}

- F. Chaumette and S. Hutchinson,
Visual Servo Control, Part I: Basic Approaches, *IEEE Robotics and Automation Magazine*, Vol. 13, No. 4, Dec., 2006, pp. 82-90.

Abstract: This paper is the first of a two-part series on the topic of visual servo control using computer vision data in the servo loop to control the motion of a robot. In this paper, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques.

@article{ChaHut06, author = {F. Chaumette and S. Hutchinson}, title = {Visual Servo Control, Part {I}: Basic Approaches}, journal = {IEEE Robotics and Automation Magazine}, volume = 13, number = 4, month = dec, year = 2006, pages = {82-90}}
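
The image-based scheme surveyed in this tutorial drives a feature error e = s - s* to zero with a velocity law of the form v = -λL⁺e, where L is the interaction matrix. Below is a minimal sketch restricted to a single point feature and pure x-y camera translation at fixed depth Z, a simplification chosen so that L reduces to -(1/Z)I; it is illustrative only, not the tutorial's full six-degree-of-freedom treatment.

```python
# Minimal IBVS sketch: one point feature, camera limited to x-y translation
# at constant depth Z.  Here the interaction matrix is L = -(1/Z) * I, so
# the classic law v = -lambda * L^{-1} * e becomes v = lambda * Z * e and
# the feature error decays exponentially (e_dot = -lambda * e).

def ibvs_step(s, s_star, Z, lam, dt):
    """One Euler step of the closed-loop feature dynamics s_dot = L v."""
    e = [si - gi for si, gi in zip(s, s_star)]
    v = [lam * Z * ei for ei in e]            # v = -lambda * L^{-1} e
    s_dot = [(-1.0 / Z) * vi for vi in v]     # s_dot = L v
    return [si + dt * sdi for si, sdi in zip(s, s_dot)]

s, s_star = [0.4, -0.2], [0.0, 0.0]
for _ in range(200):
    s = ibvs_step(s, s_star, Z=1.0, lam=1.0, dt=0.05)
assert max(abs(si - gi) for si, gi in zip(s, s_star)) < 1e-3  # error decays
```

In the general case L is a 2n×6 matrix stacked over n features and only an estimate of it is available, which is precisely where the stability questions discussed in the article arise.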

- S. Kloder and S. Hutchinson,
Path Planning for Permutation-Invariant MultiRobot Formations, *IEEE Trans. on Robotics*, Vol. 22, No. 4, 2006, pp. 650-665.

Abstract: In many multirobot applications, the specific assignment of goal configurations to robots is less important than the overall behavior of the robot formation. In such cases, it is convenient to define a permutation-invariant multirobot formation as a set of robot configurations, without assigning specific configurations to specific robots. For the case of robots that translate in the plane, we can represent such a formation by the coefficients of a complex polynomial whose roots represent the robot configurations. Since these coefficients are invariant with respect to permutation of the roots of the polynomial, they provide an effective representation for permutation-invariant formations. In this paper, we extend this idea to build a full representation of a permutation-invariant formation space. We describe the properties of the representation, and show how it can be used to construct collision-free paths for permutation-invariant formations.

@article{KloHut06, author = {S. Kloder and S. Hutchinson}, title = {Path Planning for Permutation-Invariant MultiRobot Formations}, journal = {IEEE Trans. on Robotics}, volume = 22, number = 4, year = 2006, pages = {650-665}}
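
The core representation described in the abstract is easy to sketch: planar positions become complex roots, and the monic polynomial's coefficients are the same no matter how the robots are ordered. A minimal version (illustrative only; the paper develops a full formation-space representation and planner on top of this):

```python
# Permutation-invariant encoding of a planar formation: robot positions are
# complex numbers p_i, and the formation is represented by the coefficients
# of the monic polynomial prod_i (z - p_i).  Reordering the robots permutes
# the roots but leaves the coefficients unchanged.

def formation_coefficients(positions):
    """Expand prod_i (z - p_i); coefficients are highest degree first."""
    coeffs = [1 + 0j]                              # the polynomial "1"
    for p in positions:
        shifted = coeffs + [0j]                    # multiply by z
        scaled = [0j] + [p * c for c in coeffs]    # multiply by p, aligned
        coeffs = [a - b for a, b in zip(shifted, scaled)]
    return coeffs

# Two orderings of the same three robots give identical coefficients:
a = formation_coefficients([1 + 2j, -0.5 + 0j, 3 - 1j])
b = formation_coefficients([3 - 1j, -0.5 + 0j, 1 + 2j])
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
```

Recovering the configurations from the coefficients amounts to root-finding, which returns the positions as an unordered set, exactly the permutation invariance the paper exploits.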

- R. Murrieta, B. Tovar and S. Hutchinson,
A Sampling-Based Motion Planning Approach to Maintain Visibility of Unpredictable Targets, *Autonomous Robots*, Vol. 19, No. 3, 2005, pp. 285-300.

Abstract: This paper deals with the surveillance problem of computing the motions of one or more robot observers in order to maintain visibility of one or several moving targets. The targets are assumed to move unpredictably, and the distribution of obstacles in the workspace is assumed to be known in advance. Our algorithm computes a motion strategy by maximizing the shortest distance to escape, i.e., the shortest distance the target must move to escape an observer's visibility region. Since this optimization problem is intractable, we use randomized methods to generate candidate surveillance paths for the observers. We have implemented our algorithms, and we provide experimental results using real mobile robots for the single-target case, and simulation results for the case of two targets and two observers.

@article{MurTovHut05, author = {R. Murrieta and B. Tovar and S. Hutchinson}, title = {A Sampling-Based Motion Planning Approach to Maintain Visibility of Unpredictable Targets}, journal = {Autonomous Robots}, volume = 19, number = 3, year = 2005, pages = {285-300}}

- N. Gans, S. Hutchinson and P. Corke,
Performance Tests for Visual Servo Control Systems, with Application to Partitioned Approaches to Visual Servo Control, *Int'l Journal of Robotics Research*, Vol. 22, No. 10-11, Oct.-Nov. 2003, pp. 955-981.

Abstract: Visual servoing has been a viable method of robot manipulator control for more than a decade. Initial developments involved position-based visual servoing (PBVS), in which the control signal exists in Cartesian space. The younger method, image-based visual servoing (IBVS), has seen considerable development in recent years. PBVS and IBVS offer tradeoffs in performance, and neither can solve all tasks that may confront a robot. In response to these issues, several methods have been devised that partition the control scheme, allowing some motions to be performed in the manner of a PBVS system, while the remaining motions are performed using an IBVS approach. To date, there has been little research that explores the relative strengths and weaknesses of these methods. In this paper we present such an evaluation. We have chosen three recent visual servo approaches for evaluation in addition to the traditional PBVS and IBVS approaches. We posit a set of performance metrics that measure quantitatively the performance of a visual servo controller for a specific task. We then evaluate each of the candidate visual servo methods for four canonical tasks with simulations and with experiments in a robotic work cell.

@article{GanHutCor03, author = {N. Gans and S. Hutchinson and P. Corke}, title = {Performance Tests for Visual Servo Control Systems, with Application to Partitioned Approaches to Visual Servo Control}, journal = {International Journal of Robotics Research}, volume = 22, number = {10-11}, month = {oct-nov}, year = 2003, pages = {955-981}}

- P. Leven and S. Hutchinson,
Using Manipulability to Bias Sampling During the Construction of Probabilistic Roadmaps, *IEEE Trans. on Robotics and Automation*, Vol. 19, No. 6, Dec. 2003, pp. 1020-1026.

Abstract: Probabilistic roadmaps (PRMs) are a popular representation used by many current path planners. Construction of a PRM requires the ability to generate a set of random samples from the robot's configuration space, and much recent research has concentrated on new methods to do this. In this paper, we present a sampling scheme that is based on the manipulability measure associated with a robot arm. Intuitively, manipulability characterizes the arm's freedom of motion for a given configuration. Thus, our approach is to densely sample those regions of the configuration space in which manipulability is low (and therefore, the robot has less dexterity), while sampling more sparsely those regions in which the manipulability is high. We have implemented our approach, and performed extensive evaluations using prototypical problems from the path planning literature. Our results show this new sampling scheme to be effective in generating PRMs that can solve a large range of path planning problems.

@article{LevHut03, author = {P. Leven and S. Hutchinson}, title = {Using Manipulability to Bias Sampling During the Construction of Probabilistic Roadmaps}, journal = {IEEE Trans. on Robotics and Automation}, volume = 19, number = 6, month = dec, year = 2003, pages = {1020-1026}}
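
The sampling bias described in the abstract can be sketched for a planar two-link arm, for which Yoshikawa's manipulability measure w = sqrt(det(J Jᵀ)) reduces to l1·l2·|sin(q2)|, depending only on the elbow angle. The rejection scheme below is an illustrative stand-in for the paper's sampler: it accepts a configuration with probability that grows as manipulability falls, so low-dexterity regions end up densely sampled.

```python
import math
import random

def manipulability_2link(q2, l1=1.0, l2=1.0):
    """Yoshikawa's w = sqrt(det(J J^T)) for a planar 2-link arm,
    which works out to l1 * l2 * |sin(q2)|."""
    return l1 * l2 * abs(math.sin(q2))

def biased_sample(w_max=1.0, floor=0.1, rng=random):
    """Rejection-sample a configuration (q1, q2), keeping low-manipulability
    (near-singular) configurations with higher probability.  `floor` keeps
    a nonzero acceptance rate even at maximum manipulability."""
    while True:
        q1 = rng.uniform(-math.pi, math.pi)
        q2 = rng.uniform(-math.pi, math.pi)
        w = manipulability_2link(q2)
        accept_prob = floor + (1.0 - floor) * (1.0 - w / w_max)
        if rng.random() < accept_prob:
            return q1, q2

random.seed(0)
samples = [biased_sample() for _ in range(5000)]
# Under uniform sampling only about a third of elbow angles satisfy w < 0.5;
# the bias pushes that fraction above one half.
near_singular = sum(1 for _, q2 in samples if manipulability_2link(q2) < 0.5)
assert near_singular > len(samples) // 2
```

For a real arm the Jacobian determinant would be evaluated numerically at each sample, but the accept/reject structure is unchanged.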

- P. Leven and S. Hutchinson,
Realtime Path Planning in Changing Environments, *Int'l Journal of Robotics Research*, Vol. 21, No. 12, Dec. 2002, pp. 999-1030.

Abstract: We present a new method for generating collision-free paths for robots operating in changing environments. Our approach is closely related to recent probabilistic roadmap approaches. These planners use preprocessing and query stages, and are aimed at planning many times in the same environment. In contrast, our preprocessing stage creates a representation of the configuration space that can be easily modified in real time to account for changes in the environment, thus facilitating real-time planning. As with previous approaches, we begin by constructing a graph that represents a roadmap in the configuration space, but we do not construct this graph for a specific workspace. Instead, we construct the graph for an obstacle-free workspace, and encode the mapping from workspace cells to nodes and arcs in the graph. When the environment changes, this mapping is used to make the appropriate modifications to the graph, and plans can be generated by searching the modified graph. In this paper, we first discuss the construction of the roadmap, including how random samples of the configuration space are generated using an importance sampling approach and how these samples are connected to form the roadmap. We then discuss the mapping from the workspace to the configuration space roadmap, explaining how the mapping is generated and how it can be encoded efficiently using compression schemes that exploit redundancy in the mapping. We then introduce quantitative robustness measures and show how these can be used to enhance the robustness of the roadmap to changes in the environment. Finally, we evaluate an implementation of our approach for serial-link manipulators with up to 20 joints.

@article{LevHut02, author = { P. Leven and S. Hutchinson}, title = {Realtime Path Planning in Changing Environments}, journal = {International Journal of Robotics Research}, volume = 21, number = 12, month = dec, year = 2002, pages = {999-1030}}

- P. Ranganathan, J.B. Hayet, M. Devy, S. Hutchinson and F. Lerasle,
Topological Navigation and Qualitative Localization for Indoor Environment Using Multisensory Perception, *Robotics and Autonomous Systems*, Vol. 41, Nos. 2-3, Nov. 2002, pp. 137-144.

Abstract: This article describes a navigation system for a mobile robot that must execute motions in a building; the robot is equipped with a belt of ultrasonic sensors and with a camera. The environment is represented by a topological model based on a Generalized Voronoi Graph (GVG) and by a set of visual landmarks. Typically, the topological graph describes the free space in which the robot must navigate; a node is associated with an intersection between corridors, or with a crossing towards another topological area (an open space: rooms, hallways, etc.); an edge corresponds to a corridor or to a path in an open space. Landmarks correspond to static, rectangular, planar objects (e.g., doors, windows, posters) located on the walls. The landmarks are located only with respect to the topological graph: some of them are associated with nodes, others with edges. The paper focuses on the preliminary exploration task, i.e., the incremental construction of the topological model. The navigation task is based on this model: the robot's self-localization is expressed only with respect to the graph.

@article{RanHayDev02, author = {P. Ranganathan and J.B. Hayet and M. Devy and S. Hutchinson and F. Lerasle}, title = {Topological Navigation and Qualitative Localization for Indoor Environment Using Multisensory Perception}, journal = {Robotics and Autonomous Systems}, volume = 41, number = {2-3}, month = nov, year = 2002, pages = {137-144}}

- K. Nickels and S. Hutchinson,
Estimating Uncertainty in SSD-Based Feature Tracking, *Image and Vision Computing*, Vol. 20, No. 1, 2002, pp. 47-58.

Abstract: Sum-of-squared-differences (SSD)-based feature trackers have enjoyed growing popularity in recent years, particularly in the field of visual servo control of robotic manipulators. These trackers use SSD correlation measures to locate target features in sequences of images. The results can then be used to estimate the motion of objects in the scene, to infer the 3D structure of the scene, or to control robot motions.

The reliability of the information provided by these trackers can be degraded by a variety of factors, including changes in illumination, poor image contrast, occlusion of features, or unmodeled changes in objects. This has led other researchers to develop confidence measures that are used to either accept or reject individual features that are located by the tracker. In this paper, we derive quantitative measures for the spatial uncertainty of the results provided by SSD-based feature trackers. Unlike previous confidence measures that have been used only to accept or reject hypotheses, our new measure allows the uncertainty associated with a feature to be used to weight its influence on the overall tracking process. Specifically, we scale the SSD correlation surface, fit a Gaussian distribution to this surface, and use this distribution to estimate values for a covariance matrix. We illustrate the efficacy of these measures by showing the performance of an example object tracking system with and without the measures.

@article{NicHut02, author = {K. Nickels and S. Hutchinson}, title = {Estimating Uncertainty in SSD-Based Feature Tracking}, journal = {Image and Vision Computing}, volume = 20, number = 1, year = 2002, pages = {47-58}}
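
The covariance-estimation idea in the abstract (scale the SSD correlation surface, fit a Gaussian, and read off a covariance) can be sketched as follows. The exponential scaling and the moment-based fit below are illustrative choices, not necessarily the paper's exact procedure.

```python
import math

def ssd_covariance(ssd, k=1.0):
    """Turn a 2D SSD surface over candidate displacements into a spatial
    covariance estimate: scale it into a probability-like response
    r = exp(-k * ssd), normalize, and compute second moments about the
    weighted mean.  (Sketch of the abstract's idea; illustrative scaling.)"""
    rows, cols = len(ssd), len(ssd[0])
    w = [[math.exp(-k * ssd[i][j]) for j in range(cols)] for i in range(rows)]
    total = sum(sum(row) for row in w)
    mx = sum(j * w[i][j] for i in range(rows) for j in range(cols)) / total
    my = sum(i * w[i][j] for i in range(rows) for j in range(cols)) / total
    cxx = sum((j - mx) ** 2 * w[i][j] for i in range(rows) for j in range(cols)) / total
    cyy = sum((i - my) ** 2 * w[i][j] for i in range(rows) for j in range(cols)) / total
    cxy = sum((j - mx) * (i - my) * w[i][j] for i in range(rows) for j in range(cols)) / total
    return [[cxx, cxy], [cxy, cyy]]

# A sharp SSD minimum (well-localized feature) yields a small covariance;
# a flat surface (ambiguous match) yields a large one.
sharp = [[((i - 2) ** 2 + (j - 2) ** 2) * 4.0 for j in range(5)] for i in range(5)]
flat = [[((i - 2) ** 2 + (j - 2) ** 2) * 0.1 for j in range(5)] for i in range(5)]
assert ssd_covariance(sharp)[0][0] < ssd_covariance(flat)[0][0]
```

The resulting 2×2 covariance is exactly the kind of per-feature weight the abstract describes feeding into the overall tracking process, rather than a binary accept/reject decision.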

- P. I. Corke and S. A. Hutchinson,
A New Partitioned Approach to Image-Based Visual Servo Control, *IEEE Trans. on Robotics and Automation*, Vol. 17, No. 4, Aug. 2001, pp. 507-515.

Abstract: In image-based visual servo control, where control is effected with respect to the image, there is no direct control over the Cartesian velocities of the robot end effector. As a result, the robot executes trajectories that are desirable in the image, but which can be indirect and seemingly contorted in Cartesian space. We introduce a partitioned approach to visual servo control that overcomes this problem. In particular, we decouple the z-axis rotational and translational components of the control from the remaining degrees of freedom. Then, to guarantee that all features remain in the image throughout the entire trajectory, we incorporate a potential function that repels feature points from the boundary of the image plane. We illustrate our control scheme with a variety of results.

@article{CorHut01, author = {P. I. Corke and S. A. Hutchinson}, title = {A New Partitioned Approach to Image-Based Visual Servo Control}, journal = {IEEE Trans. on Robotics and Automation}, volume = 17, number = 4, month = aug, year = 2001, pages = {507-515}}
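
The boundary-repelling potential mentioned in the abstract can be illustrated with a simple one-dimensional form: zero in the image interior and growing as a feature approaches the border, so its negative gradient pushes the feature back inward. This is an illustrative potential, not the paper's exact function.

```python
def boundary_potential(u, width, margin):
    """1-D repulsive potential for an image coordinate u in [0, width]:
    zero when at least `margin` pixels from either border, growing
    quadratically inside the margin band.  (Illustrative form only.)"""
    d = min(u, width - u)        # distance to the nearest image border
    if d >= margin:
        return 0.0               # interior: no repulsion
    return 0.5 * (margin - d) ** 2

# A feature at the image center feels no force; one near the border feels a
# repulsion that grows the closer it gets.
assert boundary_potential(320, 640, 20) == 0.0
assert boundary_potential(5, 640, 20) > boundary_potential(10, 640, 20)
```

In the full scheme one such term per feature (in both image axes) is added to the task function, so the controller trades off trajectory straightness against keeping every feature visible.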

- K. Nickels and S. Hutchinson,
Model-Based Tracking of Complex Articulated Objects, *IEEE Trans. on Robotics and Automation*, Vol. 17, No. 1, Feb. 2001, pp. 28-36.

Abstract: In this paper, we present methods for tracking complex, articulated objects. We assume that an appearance model and the kinematic structure of the object to be tracked are given, leading to what is termed a model-based object tracker. At each time step, this tracker observes a new monocular grayscale image of the scene and combines information gathered from this image with knowledge of the previous configuration of the object to estimate the configuration of the object at the time the image was acquired. Each degree of freedom in the model has an uncertainty associated with it, indicating the confidence in the current estimate for that degree of freedom. These uncertainty estimates are updated after each observation. An extended Kalman filter with appropriate observation and system models is used to implement this updating process. The methods that we describe are potentially beneficial to areas such as automated visual tracking in general, visual servo control, and human-computer interaction.

@article{NicHut01, author = {K. Nickels and S. Hutchinson}, title={Model-Based Tracking of Complex Articulated Objects}, journal = {IEEE Trans. on Robotics and Automation}, volume = 17, number = 1, month = feb, year = 2001, pages = {28-36}}

- H. Rifai, I. Bloch, S. Hutchinson,
J. Wiart and L. Garnero,
Segmentation of the Skull Using Deformable Model and Taking Partial Volume Effect into Account, *Medical Image Analysis*, Vol. 4, Iss. 3, Sept. 2000, pp. 219-233.

Abstract: Segmentation of the skull in medical imagery is an important stage in applications that require the construction of realistic models of the head. Such models are used, for example, to simulate the behavior of electromagnetic fields in the head and to model the electrical activity of the cortex in EEG and MEG data. In this paper, we present a new approach for segmenting regions of bone in MRI volumes using deformable models. Our method takes into account the partial volume effects that occur with MRI data, thus permitting a precise segmentation of these bone regions. At each iteration of the propagation of the model, partial volume is estimated in a narrow band around the deformable model. Our segmentation method begins with a pre-segmentation stage, in which a preliminary segmentation of the skull is constructed using a region-growing method. The surface that bounds the pre-segmented skull region offers an automatic 3D initialization of the deformable model. This surface is then propagated (in 3D) in the direction of its normal. This propagation is achieved using the level set method, thus permitting changes to occur in the topology of the surface as it evolves, an essential capability for our problem. The speed at which the surface evolves is a function of the estimated partial volume. This provides sub-voxel accuracy in the resulting segmentation.

@article{RifEtAl00, author = {H. Rifai and I. Bloch and S. Hutchinson and J. Wiart and L. Garnero}, title = {Segmentation of the Skull Using Deformable Model and Taking Partial Volume Effect into Account}, journal = {Medical Image Analysis}, volume = 4, issue = 3, month = sep, year = 2000, pages = {219-233}}

- H. Rifai, I. Bloch, S. Hutchinson,
J. Wiart and L. Garnero,
Segmentation par modèle déformable des régions osseuses de la tête dans les volumes IRM (Deformable-model segmentation of the bone regions of the head in MRI volumes), *Traitement du Signal*, Vol. 16, No. 4, 1999, pp. 319-330.

- S. LaValle and S. A. Hutchinson,
Optimal Motion Planning for Multiple Robots Having Independent Goals, *IEEE Trans. on Robotics and Automation*, Vol. 14, No. 6, Dec. 1998, pp. 912-925.

[Abstract] [BibTex] [IEEEXplore]This work makes two contributions to geometric motion planning for multiple robots: 1) motion plans are computed that simultaneously optimize an independent performance measure for each robot; 2) a general spectrum is defined between decoupled and centralized planning, in which we introduce coordination along independent roadmaps. By considering independent performance measures, we introduce a form of optimality that is consistent with concepts from multiobjective optimization and game theory literature. We present implemented, multiple-robot motion planning algorithms that are derived from the principle of optimality, for three problem classes along the spectrum between centralized and decoupled planning: 1) coordination along fixed, independent paths; 2) coordination along independent roadmaps; and 3) general, unconstrained motion planning. Computed examples are presented for all three problem classes that illustrate the concepts and algorithms.

@article{LavHut98a, author ={S. LaValle and S. A. Hutchinson}, title = {Optimal Motion Planning for Multiple Robots Having Independent Goals}, journal = {IEEE Trans. on Robotics and Automation}, volume = {14}, number = 6, month = dec, year = 1998, pages = {912-925}}

- S. LaValle and S. A. Hutchinson,
An Objective-Based Framework for Motion Planning Under Sensing and Control Uncertainties, *Int'l Journal of Robotics Research*, Vol. 17, No. 1, Jan. 1998, pp. 19-42.

[Abstract] [BibTex] [IJRR]The authors consider the problem of determining robot motion plans under sensing and control uncertainties. Traditional approaches are often based on a methodology known as preimage planning, which involves worst-case analysis. The authors have developed a general framework for determining feedback strategies by blending ideas from stochastic optimal control and dynamic game theory with traditional preimage planning concepts. This generalizes classical preimages to performance preimages and preimage planning for designing motion strategies with information feedback. For a given problem, one can define a performance criterion that evaluates any executed trajectory of the robot. The authors present methods for selecting a motion strategy that optimizes this criterion under either nondeterministic uncertainty (resulting in worst-case analysis) or probabilistic uncertainty (resulting in expected-case analysis). The authors present dynamic programming-based algorithms that numerically compute performance preimages and optimal strategies; several computed examples of forward projections, performance preimages, and optimal strategies are presented.

@article{LavHut98b, author ={S. LaValle and S. A. Hutchinson}, title={An Objective-Based Framework for Motion Planning Under Sensing and Control Uncertainties}, journal = {International Journal of Robotics Research}, volume = {17}, number = 1, month = jan, year = 1998, pages = {19-42}}

- S. LaValle, K. Moroney and S. A. Hutchinson,
Methods for Numerical Integration of High-Dimensional Posterior Densities with Application to Statistical Image Models, *IEEE Trans. on Image Processing*, Vol. 6, No. 12, Dec. 1997, pp. 1659-1672.

[Abstract] [BibTex] [IEEEXplore]Numerical computation with Bayesian posterior densities has recently received much attention both in the applied statistics and image processing communities. This paper surveys previous literature and presents efficient methods for computing marginal density values for image models that have been widely considered in computer vision and image processing. The particular models chosen are a Markov random field (MRF) formulation, implicit polynomial surface models, and parametric polynomial surface models. The computations can be used to make a variety of statistically based decisions, such as assessing region homogeneity for segmentation or performing model selection. Detailed descriptions of the methods are provided, along with demonstrative experiments on real imagery.

@article{LavMorHut97, author={S. M. LaValle, K. J. Moroney and S. A. Hutchinson}, title={Methods for Numerical Integration of High-Dimensional Posterior Densities with Application to Statistical Image Models}, journal={IEEE Transactions on Image Processing}, volume = {6}, number = 12, month = dec, year = 1997, pages = {1659-1672}}

- K. Nickels and S. Hutchinson,
Textured Image Segmentation: Returning Multiple Solutions, *Image and Vision Computing*, Vol. 15, No. 10, 1997, pp. 781-795.

[Abstract] [BibTex]Traditionally, the goal of image segmentation has been to produce a single partition of an image. This partition is compared to some 'ground truth', or human approved partition, to evaluate the performance of the algorithm. This paper utilizes a framework for considering a range of possible partitions of the image to compute a probability distribution on the space of possible partitions of the image. This is an important distinction from the traditional model of segmentation, and has many implications in the integration of segmentation and recognition research. The probabilistic framework that enables us to return a confidence measure on each result also allows us to discard from consideration entire classes of results due to their low cumulative probability. The distributions thus returned may be passed to higher-level algorithms to better enable them to interpret the segmentation results. Several experimental results are presented using Markov random fields as texture models to generate distributions of segments and segmentations on textured images. Both simple homogeneous images and natural scenes are presented.

@article{NicHut97, author = {K. Nickels and S. Hutchinson}, title = {Textured Image Segmentation: Returning Multiple Solutions}, journal = {Image and Vision Computing}, volume = 15, number = 10, year = 1997, pages = {781-795}}

- R. Sharma and S. Hutchinson,
Motion Perceptibility and Its Application to Active Vision-Based Servo Control, *IEEE Trans. on Robotics and Automation*, Vol. 13, No. 4, 1997, pp. 607-617.

[Abstract] [BibTex] [IEEEXplore]We address the ability of a computer vision system to perceive the motion of an object (possibly a robot manipulator) in its field of view. We derive a quantitative measure of motion perceptibility, which relates the magnitude of the rate of change in an object's position to the magnitude of the rate of change in the image of that object. We then show how motion perceptibility can be combined with the traditional notion of manipulability, into a composite perceptibility/manipulability measure. We demonstrate how this composite measure may be applied to a number of different problems involving relative hand/eye positioning and control.

@article{ShaHut97, author = {R. Sharma and S. Hutchinson}, title = {Motion Perceptibility and its Application to Active Vision-Based Servo Control}, journal = {IEEE Trans. on Robotics and Automation}, year = 1997, month = aug, volume = 13, number = 4, pages = {607-617}}
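The perceptibility measure in this paper parallels Yoshikawa's manipulability measure. The sketch below is only an illustration of that kind of quantity, built from the standard point-feature interaction matrix of the visual servoing literature; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Standard image Jacobian (interaction matrix) for a point feature at
    # normalized image coordinates (x, y) with depth Z: relates camera
    # velocity to the feature's image-plane velocity.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def motion_perceptibility(J):
    # Perceptibility, by analogy with manipulability: sqrt(det(J J^T)).
    # Small values indicate object motions that produce little image motion.
    return np.sqrt(np.linalg.det(J @ J.T))
```

A composite perceptibility/manipulability measure, as in the paper, would combine this with the robot's manipulator Jacobian.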

- R. L. Castano and S. A. Hutchinson,
A Probabilistic Approach to Perceptual Grouping, *Computer Vision and Image Understanding*, Vol.64, No. 3, Nov. 1996, pp. 399-419.

[Abstract] [BibTex]We present a general framework for determining probability distributions over the space of possible image feature groupings. The framework can be used to find several of the most probable partitions of image features into groupings, rather than just returning a single partition of the features as do most feature grouping techniques. In addition to the groupings themselves, the probability of each partition is computed, providing information on the relative probability of multiple partitions that few grouping techniques offer. In determining the probability distribution of groupings, no parameters are estimated, thus eliminating problems that occur with small data sets and outliers such as the compounding of errors that can occur when parameters are estimated and the estimated parameters are used in the next grouping step. We have instantiated our framework for the two special cases of grouping line segments into straight lines and for grouping bilateral symmetries with parallel axes, where bilateral symmetries are formed by pairs of edges. Results are presented for these cases on several real images.

@article{CasHut96, author = {R. Castano and S. Hutchinson}, title = {A Probabilistic Approach to Perceptual Grouping}, journal = { Computer Vision and Image Understanding}, volume = 64, number = 3, month = nov, year = 1996, pages = {399-419}}

- B. Bishop, S. A. Hutchinson and M. W. Spong,
Camera Modelling for Visual Servo Control Applications, *Mathematical and Computer Modelling*, Special issue on Modelling Issues in Visual Sensing, Vol. 24, No. 5/6, 1996, pp. 79-102.

[Abstract] [BibTex]When designing a visual servo system, it is important to have a complete and accurate model of the imaging process. Unmodelled imaging dynamics may play an important role in the stability and performance of such systems. In this paper, we present a detailed camera model which can be used in the design and analysis of visual servo systems. Using the free-standing acrobot as a testbed, we analyze the effects of unmodelled imaging dynamics on visual servo control systems. We show that certain camera parameters strongly influence the performance of this system, and that accurate modeling is necessary for proper selection of imaging hardware.

@article{BisHutSpo96, author = {B. Bishop and S. A. Hutchinson and M. W. Spong}, title = {Camera Modelling for Visual Servo Control Applications}, journal = {Mathematical and Computer Modelling}, note = {Special issue on Modelling Issues in Visual Sensing}, volume = 24, number = {5/6}, year = 1996, pages = {79-102}}

- S. Hutchinson, G. Hager and P. Corke,
A Tutorial on Visual Servo Control, *IEEE Trans. on Robotics and Automation*, Vol. 12, No. 5, Oct. 1996, pp. 651-670.

[Abstract] [BibTex] [IEEEXplore]This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.

@article{HutHagCor96, author = { S. Hutchinson and G. Hager and P. Corke}, title = {A Tutorial on Visual Servo Control}, journal = {IEEE Trans. on Robotics and Automation}, volume = 12, number = 5, month = oct, year = 1996, pages = {651-670}}
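The image-based class of systems surveyed in this tutorial is commonly summarized by the control law v = -lambda L+ (s - s*), driving the image-feature error to zero. A minimal sketch, assuming a stacked interaction matrix L is given; the names are illustrative, not from the tutorial:

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    # Classic image-based visual servo law: v = -lambda * L^+ (s - s*),
    # where L stacks the interaction matrices of the tracked features and
    # (s - s*) is the error between current and desired image features.
    return -lam * np.linalg.pinv(L) @ (s - s_star)
```

In practice L depends on (estimated) feature depth and is re-evaluated, or approximated at the goal configuration, at each control step.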

- J. Reed and S. Hutchinson,
Image Fusion and Subpixel Parameter Estimation for Automated Optical Inspection of Electronic Components, *IEEE Trans. on Industrial Electronics*, Vol. 43, No. 3, June 1996, pp. 346-354.

[Abstract] [BibTex] [IEEEXplore]The authors present a new approach to automated optical inspection (AOI) of circular features that combines image fusion with subpixel edge detection and parameter estimation. In their method, several digital images are taken of each part as it moves past a camera, creating an image sequence. These images are fused to produce a high-resolution image of the features to be inspected. Subpixel edge detection is performed on the high-resolution image, producing a set of data points that is used for ellipse parameter estimation. The fitted ellipses are then back-projected into 3-space in order to obtain the sizes of the circular features being inspected, assuming that the depth is known. The method is accurate, efficient, and easily implemented. The authors present experimental results for real intensity images of circular features of varying sizes. Their results demonstrate that their algorithm shows greatest improvement over traditional methods in cases where the feature size is small relative to the resolution of the imaging device.

@article{ReeHut96, author = {J. Reed and S. Hutchinson}, title = {Image Fusion and Subpixel Parameter Estimation for Automated Optical Inspection of Electronic Components}, journal = {IEEE Transactions on Industrial Electronics}, volume = 43, number = 3, month = jun, year = 1996, pages = {346-354}}

- R. Sharma, S. LaValle and S. A. Hutchinson,
Optimizing Robot Motion Strategies for Assembly with Stochastic Models of the Assembly Process, *IEEE Trans. on Robotics and Automation*, Vol. 12, No. 2, Apr. 1996, pp. 160-174.

[Abstract] [BibTex] [IEEEXplore]Gross-motion planning for assembly is commonly considered as a distinct, isolated step between task sequencing/scheduling and fine-motion planning. In this paper the authors formulate a problem of delivering parts for assembly in a manner that integrates it with both the manufacturing process and the fine motions involved in the final assembly stages. One distinct characteristic of gross-motion planning for assembly is the prevalence of uncertainty involving time-in parts arrival, in request arrival, etc. The authors propose a stochastic representation of the assembly process, and design a state-feedback controller that optimizes the expected time that parts wait to be delivered. This leads to increased performance and a greater likelihood of stability in a manufacturing process. Six specific instances of the general framework are modeled and solved to yield optimal motion strategies for different robots operating under different assembly situations. Several extensions are also discussed.

@article{ShaLavHut96, author = {R. Sharma and S. LaValle and S. A. Hutchinson}, title = {Optimizing Robot Motion Strategies for Assembly with Stochastic Models of the Assembly Process}, journal = {IEEE Trans. on Robotics and Automation}, volume = 12, number = 2, month = apr, year = 1996, pages = {160-174}}

- M. Barbehenn and S. Hutchinson,
Efficient Search and Hierarchical Motion Planning By Dynamically Maintaining Single-Source Shortest Paths Trees, *IEEE Trans. on Robotics and Automation*, Vol. 11, No. 2, Apr. 1995, pp. 198-214.

[Abstract] [BibTex] [IEEEXplore]Hierarchical approximate cell decomposition is a popular approach to the geometric robot motion planning problem. In many cases, the search effort expended at a particular iteration can be greatly reduced by exploiting the work done during previous iterations. In this paper, we describe how this exploitation of past computation can be effected by the use of a dynamically maintained single-source shortest paths tree. We embed a single-source shortest paths tree in the connectivity graph of the approximate representation of the robot configuration space. This shortest paths tree records the most promising path to each vertex in the connectivity graph from the vertex corresponding to the robot's initial configuration. At each iteration, some vertex in the connectivity graph is replaced with a new set of vertices, corresponding to a more detailed representation of the configuration space. Our new, dynamic algorithm is then used to update the single-source shortest paths tree to reflect these changes to the underlying connectivity graph.

@article{BarHut95, author = {M. Barbehenn and S. A. Hutchinson}, title={Efficient Search and Hierarchical Motion Planning By Dynamically Maintaining Single-source Shortest Paths Trees}, journal = {IEEE Trans. on Robotics and Automation}, month = apr, year = 1995, volume={11}, number={2}, pages={198-214}}
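The tree this paper maintains dynamically is the single-source shortest paths tree that Dijkstra's algorithm produces statically. Below is a sketch of the static construction only (illustrative names; the incremental update under connectivity-graph refinement is the paper's contribution and is not reproduced here):

```python
import heapq

def shortest_paths_tree(graph, source):
    # Dijkstra's algorithm, returning (dist, parent). The parent map is the
    # single-source shortest paths tree: parent[v] is v's predecessor on
    # the best known path from source. graph: {u: [(v, weight), ...]}.
    dist = {source: 0.0}
    parent = {source: None}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, parent
```

The dynamic algorithm in the paper avoids rebuilding this tree from scratch when a vertex is refined into several new vertices.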

- S. LaValle and S. A. Hutchinson,
A Bayesian Framework for Constructing Probability Distributions on the Space of Image Segmentations, *Computer Vision and Image Understanding*, Vol. 61, No. 2, March 1995, pp. 203-230.

[Abstract] [BibTex]The goal of traditional probabilistic approaches to image segmentation has been to derive a single, optimal segmentation, given statistical models for the image formation process. In this paper, we describe a new probabilistic approach to segmentation, in which the goal is to derive a set of plausible segmentation hypotheses and their corresponding probabilities. Because the space of possible image segmentations is too large to represent explicitly, we present a representation scheme that allows the implicit representation of large sets of segmentation hypotheses that have low probability. We then derive a probabilistic mechanism for applying Bayesian, model-based evidence to guide the construction of this representation. One key to our approach is a general Bayesian method for determining the posterior probability that the union of regions is homogeneous, given that the individual regions are homogeneous. This method does not rely on estimation and properly treats the issues involved when sample sets are small and estimation performance degrades. We present experimental results for both real and synthetic range data, obtained from objects composed of piecewise planar and implicit quadric patches.

@article{LavHut95a, author = {S. LaValle and S. Hutchinson}, title = {A Bayesian Framework for Constructing Probability Distributions on the Space of Image Segmentations}, journal = {Computer Vision and Image Understanding}, volume = 61, number = 2, month = mar, year = 1995, pages = {203-230}}

- A. Fox and S. A. Hutchinson,
Exploiting Visual Constraints in the Synthesis of Uncertainty-Tolerant Motion Plans, *IEEE Trans. on Robotics and Automation*, Vol. 11, No. 1, Feb. 1995, pp. 56-71.

[Abstract] [BibTex] [IEEEXplore]We introduce visual constraint surfaces as a mechanism to effectively exploit visual constraints in the synthesis of uncertainty-tolerant robot motion plans. We first show how object features, together with their projections onto a camera image plane, define a set of visual constraint surfaces. These visual constraint surfaces can be used to effect visual guarded and visual compliant motions. We then show how the backprojection approach to fine-motion planning can be extended to exploit visual constraints. Specifically, by deriving a configuration space representation of visual constraint surfaces, we are able to include visual constraint surfaces as boundaries of the directional backprojection. By examining the effect of visual constraints as a function of the direction of the commanded velocity, we are able to determine new criteria for critical velocity orientations, i.e. velocity orientations at which the topology of the directional backprojection might change.

@article{FoxHut94, author = {A. Fox and S. Hutchinson}, title = {Exploiting Visual Constraints in the Synthesis of Uncertainty-Tolerant Motion Plans}, journal = {IEEE Trans. on Robotics and Automation}, volume = {11}, number = {1}, month = feb, pages = {56-71}, year = {1995} }

- S. LaValle and S. A. Hutchinson,
A Bayesian segmentation methodology for parametric image models, *IEEE Trans. on Pattern Analysis and Machine Intelligence*, Vol. 17, No. 2, Feb. 1995, pp. 211-217.

[Abstract] [BibTex] [IEEEXplore]Region-based image segmentation methods require some criterion for determining when to merge regions. This paper presents a novel approach by introducing a Bayesian probability of homogeneity in a general statistical context. The authors' approach does not require parameter estimation and is therefore particularly beneficial for cases in which estimation-based methods are most prone to error: when little information is contained in some of the regions and, therefore, parameter estimates are unreliable. The authors apply this formulation to three distinct parametric model families that have been used in past segmentation schemes: implicit polynomial surfaces, parametric polynomial surfaces, and Gaussian Markov random fields. The authors present results on a variety of real range and intensity images.

@article{LavHut95, author = {S. LaValle and S. Hutchinson}, title = { A {B}ayesian segmentation methodology for parametric image models}, journal = {IEEE Trans. Pattern Analysis Machine Intelligence}, volume = {17}, number = {2}, month = feb, pages = {211-217}, year = {1995} }

- R. Spence and S. Hutchinson,
An Integrated Architecture for Robot Motion Planning and Control in the Presence of Obstacles with Unknown Trajectories, *IEEE Trans. on Systems, Man, and Cybernetics*, Vol. 25, No. 1, Jan. 1995, pp. 100-110.

[Abstract] [BibTex] [IEEEXplore]We present an integrated architecture for real-time planning and control of robot motions, for a robot operating in the presence of moving obstacles whose trajectories are not known a priori. The architecture comprises three control loops: an inner loop to linearize the robot dynamics, and two outer loops to implement the attractive and repulsive forces used by an artificial potential field motion planning algorithm. From a control theory perspective, our approach is unique in that the outer control loops are used to effect both desirable transient response and collision avoidance. From a motion planning perspective, our approach is unique in that the dynamic characteristics of both the robot and the moving obstacles are considered. Several simulations are presented that demonstrate the effectiveness of the planner/controller combination.

@article{SpeHut95, author = {R. Spence and S. Hutchinson}, title = {An Integrated Architecture for Robot Motion Planning and Control in the Presence of Obstacles with Unknown Trajectories}, journal = {IEEE Trans. on Systems, Man, and Cybernetics}, volume = 25, number = 1, month = jan, year = 1995, pages = {100-110}}
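The attractive/repulsive structure of the two outer control loops follows the artificial potential field idea. A minimal Khatib-style sketch, with illustrative names and gains (not the paper's controller, which also accounts for robot and obstacle dynamics):

```python
import numpy as np

def potential_force(q, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=1.0):
    # Attractive force pulling the configuration q toward the goal, plus a
    # repulsive force from each obstacle within influence distance rho0.
    f = -k_att * (q - goal)
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0 < d < rho0:
            # Repulsion grows without bound as the obstacle is approached.
            f += k_rep * (1.0 / d - 1.0 / rho0) * (1.0 / d**2) * (q - obs) / d
    return f
```

In the paper's architecture, forces of this kind are applied as outer-loop inputs around an inner feedback-linearizing loop, rather than directly as commanded velocities.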

- A. Castano and S. A. Hutchinson,
Visual Compliance: Task-Directed Visual Servo Control, *IEEE Trans. on Robotics and Automation*, Vol. 10, No. 3, June 1994, pp. 334-342.

[Abstract] [BibTex] [IEEEXplore]This paper introduces visual compliance, a new vision-based control scheme that lends itself to task-level specification of manipulation goals. Visual compliance is effected by a hybrid vision/position control structure. Specifically, the two degrees of freedom parallel to the image plane of a supervisory camera are controlled using visual feedback, and the remaining degree of freedom (perpendicular to the camera image plane) is controlled using position feedback provided by the robot joint encoders. With visual compliance, the motion of the end effector is constrained so that the tool center of the end effector maintains “contact” with a specified projection ray of the imaging system. This type of constrained motion can be exploited for grasping, parts mating, and assembly. The authors begin by deriving the projection equations for the vision system. They then derive equations used to position the manipulator prior to the execution of visual compliant motion. Following this, the authors derive the hybrid Jacobian matrix that is used to effect visual compliance. Experimental results are given for a number of scenarios, including grasping using visual compliance.

@article{CasHut94, author = {A. Castano and S. A. Hutchinson}, title = {Visual Compliance: Task-Directed Visual Servo Control}, journal = {IEEE Trans. on Robotics and Automation}, volume = {10}, number = 3, month = jun, year = 1994, pages = {334-342}}

- N. Mahadevamurty, T-C. Tsao and S. Hutchinson,
Multi-Rate Analysis and Design of Visual Feedback Digital Servo Control Systems, *ASME Journal of Dynamic Systems, Measurement and Control*, Vol. 116, No. 1, March 1994, pp. 45-55.

[Abstract] [BibTex]This paper addresses the analysis and design of digital motion control system with machine vision as a feedback measurement in the servo loop. The camera vision is modeled as a discrete time-delayed sensor. A multirate formulation is proposed based on the fact that vision update rate is slower than the digital servo-control update rate and is analyzed through the lifting technique which converts the periodic time varying multirate system to a time invariant one. Some interesting properties of this specific multirate system are found and are utilized in control system design. An l-1 norm optimal control problem is formulated to minimize the maximum time domain error, which has direct connection to camera field of view and mechanical tolerance. A numerical example is given to demonstrate the presented methods.

@article{MahTsaHut94, author = {N. Mahadevamurty and T-C. Tsao and S. Hutchinson}, title = {Multi-Rate Analysis and Design of Visual Feedback Digital Servo Control Systems}, journal = { ASME Journal of Dynamic Systems, Measurement and Control}, volume = 116, number = 1, month = mar, year = 1994, pages = {45-55}}
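The lifting step described in the abstract can be illustrated for a linear plant sampled at the fast servo rate, with a camera measurement available only every N steps. The helper below is a generic textbook sketch of lifting, not the paper's formulation:

```python
import numpy as np

def lift_multirate(A, B, N):
    # Lift x_{k+1} = A x_k + B u_k over a vision period of N servo steps:
    #   x_{k+N} = A^N x_k + [A^{N-1} B, ..., A B, B] U_k,
    # where U_k stacks the N fast-rate inputs. The lifted model is time
    # invariant at the slow (camera) rate.
    A_lift = np.linalg.matrix_power(A, N)
    B_lift = np.hstack([np.linalg.matrix_power(A, N - 1 - i) @ B
                        for i in range(N)])
    return A_lift, B_lift
```

Design (e.g. the l-1 optimal control problem mentioned above) can then proceed on the lifted time-invariant system.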

- S. A. Hutchinson and A. C. Kak,
SPAR: A Planner that Satisfies Operational and Geometric Goals in Uncertain Environments, *AI Magazine*, Vol. 11, No. 1, Spring 1990, pp. 30-61.

[Abstract] [BibTex]In this article, we present Spar (simultaneous planner for assembly robots), an implemented system that reasons about high-level operational goals, geometric goals, and uncertainty-reduction goals to create task plans for an assembly robot. These plans contain manipulations to achieve the assembly goals and sensory operations to cope with uncertainties in the robot's environment. High-level goals (which we refer to as operational goals) are satisfied by adding operations to the plan using a nonlinear, constraint-posting method. Geometric goals are satisfied by placing constraints on the execution of these operations. If the geometric configuration of the world prevents this, Spar adds new operations to the plan along with the necessary set of constraints on the execution of these operations. When the uncertainty in the world description exceeds that specified by the uncertainty-reduction goals, Spar introduces either sensing operations or manipulations to reduce this uncertainty to acceptable levels. If Spar cannot find a way to sufficiently reduce uncertainties, it augments the plan with sensing operations to be used to verify the execution of the action and, when possible, posts possible error-recovery plans, although at this point, the verification operations and recovery plans are predefined.

@article{HutKak90, author = {S. A. Hutchinson and A. C. Kak}, title = {{SPAR:} A Planner that Satisfies Operational and Geometric Goals in Uncertain Environments}, journal = {AI Magazine}, volume = {11}, number = {1}, month = Spring, pages = {30-61}, year = {1990} }

- S. A. Hutchinson and A. C. Kak,
Planning Sensing Strategies in a Robot Work Cell with Multi-Sensor Capabilities, *IEEE Trans. on Robotics and Automation*, Vol. 5, No. 6, Dec. 1989, pp. 765-783.

[Abstract] [BibTex] [IEEEXplore]An approach is presented for planning sensing strategies dynamically on the basis of the system's current best information about the world. The approach is for the system to propose a sensing operation automatically and then to determine the maximum ambiguity which might remain in the world description if that sensing operation were applied. The system then applies that sensing operation which minimizes this ambiguity. To do this, the system formulates object hypotheses and assesses its relative belief in those hypotheses to predict what features might be observed by a proposed sensing operation. Furthermore, since the number of sensing operations available to the system can be arbitrarily large, equivalent sensing operations are grouped together using a data structure that is based on the aspect graph. In order to measure the ambiguity in a set of hypotheses, the authors apply the concept of entropy from information theory. This allows them to determine the ambiguity in a hypothesis set in terms of the number of hypotheses and the system's distribution of belief among those hypotheses.

@article{HutKak89, author = {S. A. Hutchinson and A. C. Kak}, title = {Planning Sensing Strategies in a Robot Work Cell with Multi-Sensor Capabilities}, journal = {IEEE Trans. on Robotics and Automation}, volume = {5}, number = {6}, month = dec, pages = {765-783}, year = {1989} }
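The ambiguity measure described in the abstract is Shannon entropy applied to the system's belief distribution over object hypotheses. A minimal sketch with illustrative names (the paper's sensing planner would evaluate this for each candidate sensing operation and pick the minimizer):

```python
import math

def hypothesis_entropy(beliefs):
    # Shannon entropy (in bits) of a belief distribution over hypotheses;
    # higher entropy means more ambiguity in the world description.
    total = sum(beliefs)
    return -sum((b / total) * math.log2(b / total)
                for b in beliefs if b > 0)
```

Entropy is maximal when belief is spread evenly over many hypotheses and zero when a single hypothesis holds all the belief.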

# Book Chapters

- S. Bhattacharya and S. Hutchinson,
On the Existence of Nash Equilibrium for a Visibility Based Pursuit Evasion Game, in *Algorithmic Foundations of Robotics VIII*, G. Chirikjian, H. Choset, M. Morales and T. Murphey, Eds., Springer-Verlag, Heidelberg, Germany, 2010, pp. 251-266.

- J. Davidson and S. Hutchinson,
A Sampling Hyperbelief Optimization Technique for Stochastic Systems, in *Algorithmic Foundations of Robotics VIII*, G. Chirikjian, H. Choset, M. Morales and T. Murphey, Eds., Springer-Verlag, Heidelberg, Germany, 2010, pp. 217-231.

- R. Murrieta-Cid, A. Sarmiento, T. Muppirala, S. Hutchinson, R. Monroy, M. Alencastre-Miranda, L. Munoz-Gomez and R. Swain,
A Framework for Reactive Motion and Sensing Planning: A Critical Events-Based Approach, in *Advances in Artificial Intelligence – MICAI*, A. Gelbukh, A. Albornoz and H. Terashima-Marín, Eds., Springer-Verlag, LNCS 3789, 2005, pp. 990-1000.

- S. Hutchinson and P. Leven,
Planning Collision-Free Paths Using Probabilistic Roadmaps, in *Handbook of Geometric Computing: Applications in Pattern Recognition, Computer Vision, Neurocomputing, and Robotics*, Eduardo Bayro Corrochano, Ed., Springer-Verlag, Heidelberg, 2005, pp. 717-748.

- A. Sarmiento, R. Murrieta-Cid and S. Hutchinson,
A Multi-robot Strategy for Rapidly Searching a Polygonal Environment, in *Advances in Artificial Intelligence – IBERAMIA*, C. Lemaitre, C. A. Reyes and J. A. Gonzalez, Eds., Springer-Verlag, Heidelberg, LNCS 3315, 2004, pp. 484-493.

- P. I. Corke, S. A. Hutchinson and N. R. Gans,
Partitioned Image-Based Visual Servo Control: Some New Results, in *Sensor Based Intelligent Robots*, G. D. Hager, H. I. Christensen, H. Bunke and R. Klein, Eds., Springer, LNCS 2238, 2002, pp. 122-140.

- P. Leven and S. Hutchinson,
Toward Real-Time Motion Planning in Dynamic Environments, in *Algorithmic and Computational Robotics: New Directions*, B. R. Donald, K. M. Lynch and D. Rus, Eds., A. K. Peters, Natick, MA, 2001, pp. 363-376.

- J. Reed and S. A. Hutchinson,
Data Fusion for Inspection of Electronic Components, in *Applications of NDT Data Fusion*, X. Gros, Ed., Kluwer Academic Publishers, Norwell, MA, 2001, pp. 105-128.

- K. Nickels and S. A. Hutchinson,
Integrated Object Models for Robust Visual Tracking, in *Robust Vision for Vision-Based Control of Motion*, M. Vincze and G. D. Hager, Eds., IEEE Press, 2000, pp. 30-51.

- S. M. LaValle and S. A. Hutchinson,
Considering Multiple-Surface Hypotheses in a Bayesian Hierarchy, in *Selected SPIE Papers on CD-ROM, Vol. 8: Mathematical Imaging and Vision*, Gerhard Ritter, Ed., SPIE Press, 1999.

- M. J. Shaw, N. Ahuja and S. A. Hutchinson,
Coordination, Collaboration, and Control of Multirobot Systems, in *Handbook of Industrial Robotics*, S. Y. Nof, Ed., John Wiley & Sons Ltd., 1999, pp. 423-438.

- M. Barbehenn and S. A. Hutchinson,
Toward Incremental Geometric Robot Motion Planning, in *Practical Motion Planning in Robotics: Current Approaches and Future Directions*, K. Gupta and A. del Pobil, Eds., John Wiley & Sons Ltd., 1998, pp. 133-152.

- A. Fox and S. A. Hutchinson,
Exploiting Visual Constraints in the Synthesis of Uncertainty-Tolerant Motion Plans, in *The Algorithmic Foundations of Robotics*, K. Goldberg, D. Halperin, J.-C. Latombe and R. Wilson, Eds., A. K. Peters, Boston, MA, 1995.

- S. A. Hutchinson and A. C. Kak,
Multi-Sensor Strategies Using Dempster/Shafer Belief Accumulation, in *Data Fusion in Robotics and Machine Intelligence*, M. A. Abidi, S. C. Thomopoulos and R. C. Gonzalez, Eds., Academic Press, Cambridge, MA, 1992, pp. 165-205.

- S. A. Hutchinson and A. C. Kak,
FProlog: A Language to Integrate Logic and Functional Programming for Automated Assembly, in *Control and Programming in Advanced Manufacturing*, K. Rathmill, Ed., IFS Publications Ltd, UK, 1988, pp. 361-371.

# Proceedings of Technical Meetings

- S. Candido, J. Davidson and S. Hutchinson,
Exploiting domain knowledge in planning for uncertain robot systems modeled as POMDPs, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, 2010, pp. 3596-3603.

[Abstract] [BibTex] [IEEEXplore]We propose a planning algorithm that allows user-supplied domain knowledge to be exploited in the synthesis of information feedback policies for systems modeled as partially observable Markov decision processes (POMDPs). POMDP models, which are increasingly popular in the robotics literature, permit a planner to consider future uncertainty in both the application of actions and sensing of observations. With our approach, domain experts can inject specialized knowledge into the planning process by providing a set of local policies that are used as primitives by the planner. If the local policies are chosen appropriately, the planner can evaluate further into the future, even for large problems, which can lead to better overall policies at decreased computational cost. We use a structured approach to encode the provided domain knowledge into the value function approximation. We demonstrate our approach on a multi-robot fire fighting problem, in which a team of robots cooperates to extinguish a spreading fire, modeled as a stochastic process. The state space for this problem is significantly larger than is typical in the POMDP literature, and the geometry of the problem allows for the application of an intuitive set of local policies, thus demonstrating the effectiveness of our approach.

@INPROCEEDINGS{5509494, author={Candido, S. and Davidson, J. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Exploiting domain knowledge in planning for uncertain robot systems modeled as POMDPs}, year={2010}, month={may.}, volume={}, number={}, pages={3596 -3603}}
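
The POMDP model underlying this planner maintains a belief, a probability distribution over states, updated after every action and observation. A minimal sketch of that standard Bayes update in Python, on a toy two-state model invented here for illustration (not the paper's fire-fighting domain):

```python
def belief_update(b, a, o, T, O):
    """Bayes filter over a discrete POMDP: b'(s') ∝ O[o][s'] * sum_s T[a][s][s'] * b[s]."""
    states = range(len(b))
    unnorm = [O[o][s2] * sum(T[a][s][s2] * b[s] for s in states) for s2 in states]
    z = sum(unnorm)  # normalizing constant (probability of the observation)
    return [u / z for u in unnorm]

# Hypothetical 2-state model: state 0 = "fire out", state 1 = "still burning".
T = {"spray": [[1.0, 0.0], [0.6, 0.4]]}          # T[action][s][s']
O = {"clear": [0.9, 0.1], "smoke": [0.1, 0.9]}   # O[observation][s']

b = [0.5, 0.5]                                   # uniform prior
b = belief_update(b, "spray", "clear", T, O)     # belief shifts toward "fire out"
```

A planner evaluates candidate policies by propagating beliefs like this forward; the paper's contribution is letting user-supplied local policies stand in for primitive actions during that search.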

- R. Fomena, H. Yoon, A. Cherubini, F. Chaumette, and S. Hutchinson,
Coarsely calibrated visual servoing of a mobile robot using a catadioptric vision system, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, 2009, pp. 5432-5437.

[Abstract] [BibTex] [IEEEXplore]A catadioptric vision system combines a camera and a mirror to achieve a wide field of view imaging system. This type of vision system has many potential applications in mobile robotics. This paper is concerned with the design of a robust image-based control scheme using a catadioptric vision system mounted on a mobile robot. We exploit the fact that the decoupling property contributes to the robustness of a control method. More precisely, from the image of a point, we propose a minimal and decoupled set of features measurable on any catadioptric vision system. Using the minimal set, a classical control method is proved to be robust in the presence of point range errors. Finally, experimental results with a coarsely calibrated mobile robot validate the robustness of the new decoupled scheme.

@INPROCEEDINGS{5354130, author={Fomena, R.T. and Han Ul Yoon and Cherubini, A. and Chaumette, F. and Hutchinson, S.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Coarsely calibrated visual servoing of a mobile robot using a catadioptric vision system}, year={2009}, month={oct.}, volume={}, number={}, pages={5432 -5437}}

- S. Bhattacharya, S. Hutchinson, and T. Basar,
Game-theoretic analysis of a visibility based pursuit-evasion game in the presence of obstacles, *Proc. of the American Control Conference*, June, 2009, pp. 373-378.

[Abstract] [BibTex] [IEEEXplore]In this paper, we present a game theoretic analysis of a visibility-based pursuit-evasion game in an environment containing obstacles. The pursuer and the evader are holonomic, with bounded speeds. Both players have a complete map of the environment. Both players have omnidirectional vision and have knowledge about each other's current position as long as they are visible to each other. Under this information structure, the pursuer wants to maintain visibility of the evader for the maximum possible time and the evader wants to escape the pursuer's sight as soon as possible. We present strategies for the players that are in Nash equilibrium. The strategies are a function of the value of the game. Using these strategies, we construct a value function by integrating the retrogressive path equations backward in time from the termination situations provided by the corners in the environment. From these value functions we recompute the control strategies to obtain optimal trajectories for the players near the termination situation.

@INPROCEEDINGS{BhaHutBas09, author = {Bhattacharya, S. and Hutchinson, S. and Basar, T.}, title = {Game-theoretic analysis of a visibility based pursuit-evasion game in the presence of obstacles}, journal = {Proc. of the American Control Conference}, month = jun, year = 2009, pages = {373-378}}

- S. Candido and S. Hutchinson,
Detecting intrusion faults in remotely controlled systems, *Proc. of the American Control Conference*, June, 2009, pp. 4968-4973.

[Abstract] [BibTex] [IEEEXplore]In this paper, we propose a method to detect an unauthorized control signal being sent to a remote-controlled system (deemed an "intrusion fault" or "intrusion") despite attempts to conceal the intrusion. We propose adding a random perturbation to the control signal and using signal detection techniques to determine the presence of that signal in observations of the system. Detection of these perturbations indicates that an authorized or "trusted" operator is in control of the system. We analyze a worst-case scenario (in terms of detection of the intrusion), discuss construction of signal detectors, and demonstrate our method through a simple example of a point robot with dynamics.

@INPROCEEDINGS{CanHut09, author = {S. Candido and S. Hutchinson}, title = {Detecting intrusion faults in remotely controlled systems}, journal = {Proc. of the American Control Conference}, month = jun, year = 2009, pages = {4968-4973}}
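
The core idea above — superimpose a pseudorandom perturbation known only to the trusted operator, then test for its presence by correlating observations against it — can be sketched generically. This is an illustrative matched-filter toy with invented amplitudes and noise levels, not the paper's actual detector:

```python
import random

def correlate(x, y):
    """Sample correlation (unnormalized matched filter) between two signals."""
    return sum(a * b for a, b in zip(x, y)) / len(x)

rng = random.Random(0)
n = 2000
# The trusted operator superimposes a known +/-1 pseudorandom sequence.
perturbation = [rng.choice((-1.0, 1.0)) for _ in range(n)]

def observe(authorized):
    """Observations: unit-variance noise, plus the watermark when the operator is trusted."""
    noise = [rng.gauss(0.0, 1.0) for _ in range(n)]
    if authorized:
        return [0.3 * p + w for p, w in zip(perturbation, noise)]
    return noise

trusted = correlate(observe(True), perturbation)    # concentrates near 0.3
intruder = correlate(observe(False), perturbation)  # concentrates near 0.0
threshold = 0.15                                    # decision threshold (illustrative)
```

An intruder who does not know the pseudorandom sequence cannot reproduce the correlation, so observations falling below the threshold flag the intrusion.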

- S. Candido, Y.-T. Kim and S. Hutchinson,
An improved hierarchical motion planner for humanoid robots, *Proc. IEEE-RAS Int'l. Conf. on Humanoid Robots*, 2008, pp. 654-661.

[Abstract] [BibTex]In our previous work, we proposed a hierarchical planner for bipedal and humanoid robots navigating complex environments based on a motion-primitives framework. In this paper, we extend and improve that planner by proposing a different approach for the global and subgoal components of our planner. We continue to use a workspace decomposition that consists of a passage map, obstacle map, gradient map, and local map. We verify our approach both in simulation and in experiments on a mechanical humanoid system.

@INPROCEEDINGS{CanKimHut08, author = {S. Candido and Y.-T. Kim and S. Hutchinson}, title = {An improved hierarchical motion planner for humanoid robots}, journal = {Proc. IEEE-RAS Int'l. Conf. on Humanoid Robots}, year = 2008, pages = {654-661}}

- S. Bhattacharya and S. Hutchinson,
On the Existence of Nash Equilibrium for a Visibility-Based Pursuit Evasion Game, *Proc. Workshop on the Algorithmic Foundations of Robotics*, Guanajuato, 2008.

[Abstract] [BibTex]In this paper, we present a game theoretic analysis of a visibility-based pursuit-evasion game in a planar environment containing obstacles. The pursuer and the evader are holonomic, with bounded speeds. Both players have a complete map of the environment. Both players have omnidirectional vision and have knowledge about each other's current position as long as they are visible to each other. The pursuer wants to maintain visibility of the evader for the maximum possible time and the evader wants to escape the pursuer's sight as soon as possible. Under this information structure, we present necessary and sufficient conditions for surveillance and escape. We present strategies for the players that are in Nash equilibrium. The strategies are a function of the value of the game. Using these strategies, we construct a value function by integrating the adjoint equations backward in time from the termination situations provided by the corners in the environment. From these value functions we recompute the control strategies to obtain optimal trajectories for the players near the termination situation. As far as we know, this is the first work that presents necessary and sufficient conditions for tracking in a visibility-based pursuit-evasion game and presents the equilibrium strategies for the players.

@INPROCEEDINGS{BhaHut08, author = {S. Bhattacharya and S. Hutchinson}, title = {On the Existence of Nash Equilibrium for a Visibility-Based Pursuit Evasion Game}, journal = {Proc. Workshop on the Algorithmic Foundations of Robotics}, year = 2008}

- J. Davidson and S. Hutchinson,
A Sampling Hyperbelief Optimization Technique for Stochastic Systems, *Proc. Workshop on the Algorithmic Foundations of Robotics*, Guanajuato, 2008.

[Abstract] [BibTex]In this paper we propose an anytime algorithm for determining nearly optimal policies for total cost and finite time horizon partially observed Markov decision processes (POMDPs) using a sampling-based approach. The proposed technique, sampling hyperbelief optimization technique (SHOT), attempts to exploit the notion that small changes in a policy have little impact on the quality of the solution except at a small set of critical points. The result is a technique to represent POMDPs independent of the initial conditions and the particular cost function, so that the initial conditions and the cost function may vary without having to reperform the majority of the computational analysis.

@INPROCEEDINGS{DavHut08, author = {J. Davidson and S. Hutchinson}, title = {A Sampling Hyperbelief Optimization Technique for Stochastic Systems}, journal = {Proc. Workshop on the Algorithmic Foundations of Robotics}, year = 2008}

- J. Davidson and S. Hutchinson,
Hyper-particle filtering for stochastic systems, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, 2008, pp. 2770-2777.

[Abstract] [BibTex] [IEEEXplore]Information-feedback control schemes (more specifically, sensor-based control schemes) select an action at each stage based on the sensory data provided at that stage. Since it is impossible to know future sensor readings in advance, predicting the future behavior of a system becomes difficult. Hyper-particle filtering is a sequential computational scheme that enables probabilistic evaluation of future system performance in the face of this uncertainty. Rather than evaluating individual sample paths or relying on point estimates of state, hyper-particle filtering maintains at each stage an approximation of the full probability density function over the belief space (i.e., the space of possible posterior densities for the state estimate). By applying hyper-particle filtering, control policies can be more accurately assessed and can be evaluated from one stage to the next. These aspects of hyper-particle filtering may prove to be useful when determining policies, not just when evaluating them.

@INPROCEEDINGS{4543630, author={Davidson, J.C. and Hutchinson, S.A.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Hyper-particle filtering for stochastic systems}, year={2008}, month={may.}, pages={2770 -2777}}

- R. Murrieta-Cid, R. Monroy, S. Hutchinson and J.-P. Laumond,
A Complexity result for the pursuit-evasion game of maintaining visibility of a moving evader, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, 2008, pp. 2657-2664.

[Abstract] [BibTex] [IEEEXplore]In this paper we consider the problem of maintaining visibility of a moving evader by a mobile robot, the pursuer, in an environment with obstacles. We simultaneously consider bounded speed for both players and a variable distance separating them. Unlike our previous efforts [R. Murrieta-Cid et al., 2007], we give special attention to the combinatorial problem that arises when searching for a solution through visiting several locations. We approach evader tracking by decomposing the environment into convex regions. We define two graphs: one is called the mutual visibility graph (MVG) and the other the accessibility graph (AG). The MVG provides a sufficient condition to maintain visibility of the evader, while the AG defines possible regions to which either the pursuer or the evader may go. The problem is framed as a noncooperative game. We establish the existence of a solution, based on a k-Min approach, given the environment, the initial states of the evader and the pursuer, and their maximal speeds. We show that the problem of finding a solution to this game is NP-complete.

@INPROCEEDINGS{4543613, author={Murrieta-Cid, R. and Monroy, R. and Hutchinson, S. and Laumond, J.-P.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={A Complexity result for the pursuit-evasion game of maintaining visibility of a moving evader}, year={2008}, month={may.}, volume={}, number={}, pages={2657 -2664}}

- S. Kloder and S. Hutchinson,
Partial barrier coverage: Using game theory to optimize probability of undetected intrusion in polygonal environments, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, 2008, pp. 2671-2676.

[Abstract] [BibTex] [IEEEXplore]In this paper, we formalize the problem of partial barrier coverage, that is, the problem of using robot sensors (guards) to minimize the probability of undetected intrusion in a particular region by an intruder. We use ideas from noncooperative game theory together with previous results from complete barrier coverage - the problem of completely preventing undetected intrusion - to develop new methods that solve this problem for the specific case of bounded-range line-of-sight sensors in a two-dimensional polygonally-bounded region. Our solution constructs equilibrium strategies for the intruder and guards, and calculates the level of partial coverage.

@INPROCEEDINGS{4543615, author={Kloder, S. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Partial barrier coverage: Using game theory to optimize probability of undetected intrusion in polygonal environments}, year={2008}, month={may.}, volume={}, number={}, pages={2671 -2676}}

- S. Bhattacharya and S. Hutchinson,
Approximation Schemes for Two-Player Pursuit Evasion Games with Visibility Constraints, *Proc. Robotics: Science and Systems IV*, 2008, pp. 81-88.

[Abstract] [BibTex] [RSS page]In this paper, we consider the problem in which a mobile pursuer attempts to maintain visual contact with an evader as it moves through an environment containing obstacles. This surveillance problem is a variation of traditional pursuit-evasion games, with the additional condition that the pursuer immediately loses the game if at any time it loses sight of the evader. Since it has been shown that the problem of deciding whether or not the pursuer is able to maintain visibility of the evader is at least NP-complete, we present schemes to approximate the set of initial positions of the pursuer from which it might be able to track the evader. We first consider the case of an environment containing only polygonal obstacles. We prove that in this case the set of initial pursuer configurations from which it does not lose the game is bounded. Moreover, we provide polynomial time approximation schemes to bound this set. We then extend our results to the case of arbitrary obstacles with smooth boundaries.

@INPROCEEDINGS{Bhattacharya-RSS08, AUTHOR = {Sourabh Bhattacharya and Seth Hutchinson}, TITLE = {Approximation Schemes for Two-Player Pursuit Evasion Games with Visibility Constraints}, BOOKTITLE = {Proceedings of Robotics: Science and Systems IV}, YEAR = {2008}, ADDRESS = {Zurich, Switzerland}, MONTH = {June}}

- S. Candido, Y.T. Kim and S. Hutchinson,
A Workspace Decomposition for Hierarchical Motion Planning with Humanoid Robots, *Proc. Int'l. Conf. on Advanced Robotics*, Jeju island, South Korea, 2007.

[Abstract] [BibTex]This paper presents a hierarchical motion planner for humanoid robots navigating complex environments. We use a workspace decomposition that allows our planning algorithm to be separated into high-level, subgoal, and local algorithms. The workspace decomposition consists of a passage map, obstacle map, gradient map, and local map. We verify our approach using a virtual humanoid robot in a simulated environment.

@inproceedings{candido_icar2007, title = {A Workspace Decomposition for Hierarchical Motion Planning with Humanoid Robots}, author = {Yong-Tae Kim AND Salvatore Candido AND Seth Hutchinson}, booktitle = {Proceedings of the IEEE International Conference on Advanced Robotics}, year = {2007}, address = {Jeju island, South Korea}, month = {August} }

- S. Kloder and S. Hutchinson,
Barrier Coverage for Variable Bounded-Range Line-of-Sight Guards, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, 2007, pp. 391-396.

[Abstract] [BibTex] [IEEEXplore]In this paper, we formalize the problem of barrier coverage, that is, the problem of preventing undetected intrusion in a particular region using robot sensors. We solve the problem of finding the minimum-length barrier in the case of variable bounded-range line-of-sight sensors in a two-dimensional polygonally-bounded region. We do this by building a graph of candidate barriers that could potentially be in the minimum barrier. The dual of this graph shows the connectivity of the free space. We thus reduce the problem to the maximum-flow/minimum-cut problem from network flow theory.

@INPROCEEDINGS{4209123, author={Kloder, S. and Hutchinson, S.}, journal={ICRA}, title={Barrier Coverage for Variable Bounded-Range Line-of-Sight Guards}, year={2007}, month={apr.}, volume={}, number={}, pages={391 -396}}
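
The reduction described in the abstract bottoms out in a standard maximum-flow computation, since by max-flow/min-cut duality the value of the minimum cut equals the maximum flow. A generic Edmonds-Karp sketch on a small invented network (not the paper's candidate-barrier graph):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: augment along shortest residual paths until none remain."""
    n = len(cap)
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total  # no augmenting path: flow value equals the min cut
        # Bottleneck capacity along the path, then update residual capacities.
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck

# Toy 4-node network (invented): node 0 = source, node 3 = sink; min cut is 3.
cap = [[0, 2, 2, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
flow_value = max_flow(cap, 0, 3)
```

In the paper's setting the edge weights would be candidate-barrier lengths, so the minimum cut identifies the minimum-length barrier.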

- N. Gans and S. Hutchinson,
A Stable Vision-Based Control Scheme for Nonholonomic Vehicles to Keep a Landmark in the Field of View, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, 2007, pp. 2196-2201.

[Abstract] [BibTex] [IEEEXplore]Control of wheeled vehicles is a difficult problem due to nonholonomic constraints. This problem is compounded by sensor limitations. A previously developed control scheme for a wheeled robot, which keeps a target in the view of a mounted camera, is one solution to the problem. In this paper, we prove the controllability and stability of the control scheme. We present an implementation of the controller, as well as present the results of simulations and physical experiments.

@INPROCEEDINGS{4209410, author={Gans, N.R. and Hutchinson, S.A.}, journal={ICRA}, title={A Stable Vision-Based Control Scheme for Nonholonomic Vehicles to Keep a Landmark in the Field of View}, year={2007}, month={apr.}, volume={}, number={}, pages={2196 -2201}}

- G. Lopez-Nicolas, S. Bhattacharya, J.J. Guerrero, C. Sagues, and S. Hutchinson,
Switched Homography-Based Visual Control of Differential Drive Vehicles with Field-of-View Constraints, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, 2007, pp. 4238-4244.

[Abstract] [BibTex] [IEEEXplore]This paper presents a switched homography-based visual control for differential drive vehicles. The goal is defined by an image taken at the desired position, which is the only previous information needed from the scene. The control takes into account the field-of-view constraints of the vision system through the specific design of the paths with optimality criteria. The optimal paths consist of straight lines and curves that saturate the sensor viewing angle. We present the controls that move the robot along these paths based on the convergence of the elements of the homography matrix. Our contribution is the design of the switched homography-based control, following optimal paths guaranteeing the visibility of the target.

@INPROCEEDINGS{4209749, author={Lopez-Nicolas, G. and Bhattacharya, S. and Guerrero, J.J. and Sagues, C. and Hutchinson, S.}, journal={ICRA}, title={Switched Homography-Based Visual Control of Differential Drive Vehicles with Field-of-View Constraints}, year={2007}, month={apr.}, volume={}, number={}, pages={4238 -4244}}

- S. Bhattacharya and S. Candido and S. Hutchinson,
Motion Strategies for Surveillance, *Proceedings of Robotics: Science and Systems*, June, 2007.

[Abstract] [BibTex] [RSS Page]We address the problem of surveillance in an environment with obstacles. We show that the problem of tracking an evader with one pursuer around one corner is completely decidable. The pursuer and evader are assumed to have complete information about each other's instantaneous position and velocity. We present a partition of the visibility region of the pursuer where, based on the region in which the evader lies, we provide strategies for the evader to escape the visibility region of the pursuer or for the pursuer to track the target for all future time. We also present the solution to the inverse problem: given the position of the evader, the positions of the pursuer for which the evader can escape the pursuer's visibility region. These results are provided for varying speeds of the pursuer and the evader. Based on the results of the inverse problem, we provide an $O(n^3 \log n)$ algorithm that can decide whether the evader can escape from the visibility region of a pursuer for some initial pursuer and evader positions. Finally, we extend the result of the target-tracking problem around a corner in two dimensions to an edge in three dimensions.

@INPROCEEDINGS{ Bhattacharya-RSS-07, AUTHOR = {S. Bhattacharya and S. Candido and S. Hutchinson}, TITLE = {Motion Strategies for Surveillance}, BOOKTITLE = {Proceedings of Robotics: Science and Systems}, YEAR = {2007}, ADDRESS = {Atlanta, GA, USA}, MONTH = {June} }

- N.R. Gans and S. A. Hutchinson,
Visual Servo Velocity and Pose Control of a Wheeled Inverted Pendulum through Partial-Feedback Linearization, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, 2006, pp. 3823-3828.

[Abstract] [BibTex] [IEEEXplore]Vision-based control of wheeled vehicles is a difficult problem due to nonholonomic constraints on velocities. This is further complicated in the control of vehicles with drift terms and dynamics containing fewer actuators than velocity terms. We explore one such system, the wheeled inverted pendulum, embodied by the Segway. We present two methods of eliminating the effects of nonactuated attitude motions and a novel controller based on partial feedback linearization. This novel controller outperforms a controller based on typical linearization about an equilibrium point.

@INPROCEEDINGS{4059002, author={Gans, N.R. and Hutchinson, S.A.}, journal={ Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Visual Servo Velocity and Pose Control of a Wheeled Inverted Pendulum through Partial-Feedback Linearization}, year={2006}, month={oct.}, volume={}, number={}, pages={3823 -3828}}

- S. Bhattacharya and S. Hutchinson,
Controllability and Properties of Optimal Paths for a Differential Drive Robot with Field-of-View Constraints, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Orlando, 2006, pp. 1624-1629.

[Abstract] [BibTex] [IEEEXplore]This work presents the proof of controllability for a differential drive robot that maintains visibility of a landmark. The robot has limited sensing capabilities (angle of view). We also present properties of optimal paths for this system.

@INPROCEEDINGS{1641939, author={Bhattacharya, S. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Controllability and properties of optimal paths for a differential drive robot with field-of-view constraints}, year={2006}, month={may.}, volume={}, number={}, pages={1624 -1629}}

- R. Katz and S. Hutchinson,
Efficiently Biasing PRMs with Passage Potentials, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Orlando, 2006, pp. 889-894.

[Abstract] [BibTex] [IEEEXplore]This paper presents a passage potential based biasing scheme for PRMs to specifically address the narrow passage problem. The biasing strategy fulfills minimum requirements for an efficient biasing, considering not only location issues, but also intensity, sparseness and applicability of the biasing criterion. Conforming to these features a particular family of passage potential functions has been defined and integrated within a basic PRM to achieve biasing. Simulations have demonstrated the reliable and successful implementation of the proposed architecture under several experimental settings and robot configurations.

@INPROCEEDINGS{1641822, author={Katz, R. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Efficiently biasing PRMs with passage potentials}, year={2006}, month={may.}, volume={}, number={}, pages={889 -894}}

- R. Murrieta-Cid, L. Munoz, M. Alencastre, A. Sarmiento, S. Kloder,
S. Hutchinson, F. Lamiraux and J.P. Laumond,
Maintaining Visibility of a Moving Holonomic Target at a Fixed Distance with a Non-Holonomic Robot, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Edmonton, Canada, 2005, pp. 2687 - 2693.

[Abstract] [BibTex] [IEEEXplore]In this paper we consider the problem of maintaining surveillance of a moving target by a nonholonomic mobile observer. The observer's goal is to maintain visibility of the target from a predefined, fixed distance, l. The target escapes if (a) it moves behind an obstacle to occlude the observer's view, (b) it causes the observer to collide with an obstacle, or (c) it exploits the nonholonomic constraints on the observer motion to increase its distance from the observer beyond the surveillance distance l. We deal specifically with the situation in which the only constraint on the target's velocity is a bound on speed (i.e., there are no nonholonomic constraints on the target's motion), and the observer is a nonholonomic, differential drive system having bounded speed. We develop the system model, from which we derive a lower bound for the required observer speed. Finally, we consider the effect of obstacles on the observer's ability to successfully track the target.

@INPROCEEDINGS{1545275, author={Murrieta-Cid, R. and Munoz-Gomez, L. and Alencastre-Miranda, M. and Sarmiento, A. and Kloder, S. and Hutchinson, S. and Lamiraux, F. and Laumond, J.P.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Maintaining visibility of a moving holonomic target at a fixed distance with a non-holonomic robot}, year={2005}, month={aug.}, volume={}, number={}, pages={ 2687 - 2693}}

- T. Muppirala, R. Murrieta-Cid and S. Hutchinson,
Optimal Motion Strategies Based on Critical Events to Maintain Visibility of a Moving Target, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Barcelona, 2005, pp. 3837-3842.

[Abstract] [BibTex]In this paper, we consider the surveillance problem of maintaining visibility at a fixed distance of a mobile evader using a mobile robot equipped with sensors. Optimal motion for the target to escape is found. Symmetrically, an optimal motion strategy for the observer to always maintain visibility of the evader is determined. The optimal motion strategies proposed in this paper are based on critical events. The critical events are defined with respect to the obstacles in the environment.

@INPROCEEDINGS{1570704, author={ Muppirala, T. and Hutchinson, S. and Murrieta-Cid, R.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Optimal Motion Strategies Based on Critical Events to Maintain Visibility of a Moving Target}, year={2005}, month={apr.}, volume={}, number={}, pages={ 3826 - 3831}}

- A. Sarmiento, R. Murrieta-Cid and S. Hutchinson
A Sample-based Convex Cover for Rapidly Finding an Object in a 3-D Environment, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Barcelona, 2005, pp. 3497-350.

[Abstract] [BibTex] [IEEEXplore]In this paper we address the problem of generating a motion strategy to find an object in a known 3-D environment as quickly as possible on average. We use a sampling scheme that generates an initial set of sensing locations for the robot and then we propose a convex cover algorithm based on this sampling. Our algorithm tries to reduce the cardinality of the resulting set and has the main advantage of scaling well with the dimensionality of the environment. We then use the resulting convex covering to generate a graph that captures the connectivity of the workspace. Finally, we search this graph to generate trajectories that try to minimize the expected value of the time to find the object.

@INPROCEEDINGS{1570649, author={ Sarmiento, A. and Murrieta-Cid, R. and Hutchinson, S.}, journal={Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={A Sample-based Convex Cover for Rapidly Finding an Object in a 3-D Environment}, year={2005}, month={apr.}, volume={}, number={}, pages={ 3486 - 3491}}

- S. Kloder and S. Hutchinson,
Path Planning for Permutation-Invariant Multi-Robot Formations, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Barcelona, 2005, pp. 1797-1802.

[Abstract] [BibTex]In this paper we demonstrate path planning for our formation space that represents permutation-invariant multi-robot formations. Earlier methods generally pre-assign roles for each individual robot, rely on local planning and behaviors to build emergent behaviors, or give robots implicit constraints to meet. Our method first directly plans the formation as a set, and only afterwards determines which robot takes which role. To build our representation of this formation space, we make use of a property of complex polynomials: they are unchanged by permutations of their roots. Thus we build a characteristic polynomial whose roots are the robot locations, and use its coefficients as a representation of the formation. Mappings between work spaces and formation spaces amount to building and solving polynomials. In this paper, we construct an efficient obstacle collision detector, and use it in a local planner. From this we construct a basic roadmap planner. We thus demonstrate that our polynomial based representation can be used for effective permutation invariant formation planning.

@INPROCEEDINGS{1570374, author={ Kloder, S. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Path Planning for Permutation-Invariant Multi-Robot Formations}, year={2005}, month={apr.}, volume={}, number={}, pages={ 1797 - 1802}}
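
The representation described in the abstract can be illustrated directly: treat the robot positions as the complex roots of a polynomial; because multiplication of the factors $(z - p_i)$ is commutative, the coefficients are identical for any permutation of the robots. A small pure-Python sketch (illustrative, not the paper's implementation):

```python
def formation_coeffs(positions):
    """Coefficients of prod_i (z - p_i), leading coefficient first.

    The product is unchanged by reordering its factors, so the coefficient
    vector is a permutation-invariant code for the formation."""
    coeffs = [1 + 0j]
    for p in positions:
        coeffs = coeffs + [0j]              # multiply the polynomial by z
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= p * coeffs[i - 1]  # then subtract p times the old polynomial
    return coeffs

# Three robots as points in the complex plane; any ordering gives the same code.
robots = [0 + 0j, 1 + 0j, 0 + 1j]
shuffled = [robots[2], robots[0], robots[1]]
a = formation_coeffs(robots)
b = formation_coeffs(shuffled)
```

Recovering individual robot locations amounts to finding the roots of this polynomial, which is where role assignment happens after planning in coefficient space.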

- A. Sarmiento, R. Murrieta-Cid and S. Hutchinson,
Planning Expected-Time Optimal Paths for Searching Known Environments, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Sendai, Japan, 2004, pp. 872-878.

[Abstract] [BibTex] [IEEEXplore]In this paper we address the problem of finding time optimal search paths in known environments. In particular, the task is to search a known environment for an object whose unknown location is characterized by a known probability density function (pdf). With this formulation, the time required to find the object is a random variable induced by the choice of search path together with the pdf for the object's location. The optimization problem is to find the path that yields the minimum expected value of the time required to find the object. We propose a two layered approach. Our algorithm first determines an efficient ordering of visiting regions in a decomposition that is defined by critical curves that are related to the aspect graph of the space to be searched. It then generates locally optimal trajectories within each of these regions to construct a complete continuous path. We have implemented this algorithm and present results.

@INPROCEEDINGS{1389462, author={Sarmiento, A. and Murrieta-Cid, R. and Hutchinson, S.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Planning expected-time optimal paths for searching known environments}, year={2004}, month={sep.}, volume={1}, number={}, pages={ 872 - 878 vol.1}}
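
The objective in the abstract has a simple discrete analogue: if region $i$ holds the object with probability $p_i$ and takes $t_i$ time to reach and search, then an ordering's expected time is the sum of each $p_i$ times the elapsed time when region $i$ is searched. A brute-force sketch over a hypothetical three-region instance (numbers invented for illustration):

```python
from itertools import permutations

def expected_time(order, prob, visit_time):
    """E[T] = sum over regions of P(object in region) * elapsed time when it is searched."""
    elapsed, expected = 0.0, 0.0
    for region in order:
        elapsed += visit_time[region]
        expected += prob[region] * elapsed
    return expected

# Hypothetical 3-region instance: probabilities sum to 1.
prob = {"A": 0.6, "B": 0.3, "C": 0.1}
visit_time = {"A": 5.0, "B": 2.0, "C": 4.0}

# Brute-force all orderings; larger instances need structured approaches like the paper's.
best = min(permutations(prob), key=lambda o: expected_time(o, prob, visit_time))
```

Note that searching the highest-probability region first is not optimal here: the best ordering visits the quick-to-search region B first, trading probability against time.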

- N.R. Gans and S. A. Hutchinson,
Multi-Attribute Utility Analysis in the Choice of a Vision-based Robot Controller, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Sendai, Japan, 2004, pp. 355-362.

[Abstract] [BibTex] [IEEEXplore]We present an example of the use of multi-attribute utility analysis in the design of a robot system. Multi-attribute utility analysis is a tool used by systems engineers to aid in deciding amongst numerous alternatives. Its strength lies in the fact that very different metrics can be compared, and that it takes into account human preferences and risk attitudes. As a design tool, multi-attribute utility analysis is performed off-line, during the system design phase, to choose among possible designs, components, gains, etc. We offer a demonstration of multi-attribute utility analysis in designing a hybrid switched-system visual servo system. We have previously introduced such a system, and here use multi-attribute utility analysis to select a switching algorithm that best suits the needs of a specific user.

@INPROCEEDINGS{1389377, author={Gans, N.R. and Hutchinson, S.A.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Multi-attribute utility analysis in the choice of a vision-based robot controller}, year={2004}, month={sep.}, volume={1}, number={}, pages={ 355 - 362 vol.1}}

- B. Chambers and S. Hutchinson,
Integrated Tracking and Control Using Condensation-based Critical-Point Matching, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Sendai, Japan, 2004, pp. 949-956.

[Abstract] [BibTex] [IEEEXplore]Image matching via multiresolution critical-point hierarchies has been shown to be useful in feature point selection, real-time tracking, volume rendering, and image interpolation. Drawbacks of the method include computational complexity and a lack of constraints on rigid motion. In this paper we present a method by which robot end-effector velocities are tracked using the condensation algorithm and critical-point image observations. By using a window-based approach, we immediately reduce complexity while imposing constraints on camera motion. We show that the critical-point observations are successful in estimating camera motion by evaluating the similarity of sample windows.

@INPROCEEDINGS{1389475, author={Chambers, B. and Hutchinson, S.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Integrated tracking and control using condensation-based critical-point matching}, year={2004}, month={sep.}, volume={1}, number={}, pages={ 949 - 956 vol.1}}

- S. Bhattacharya, R. Murrieta-Cid and S. Hutchinson,
Path Planning for a Differential Drive Robot: Minimal Length Paths - a Geometric Approach, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Sendai, Japan, 2004, pp. 2793-2798.

[Abstract] [BibTex] [IEEEXplore]This work presents minimal-length paths for a robot that maintains visibility of a landmark. The robot is a differential drive system and has limited sensing capabilities (range and angle of view). The optimal paths are composed of straight lines and curves that saturate the camera pan angle.

@INPROCEEDINGS{1389832, author={Bhattacharya, S. and Murrieta-Cid, R. and Hutchinson, S.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Path planning for a differential drive robot: minimal length paths - a geometric approach}, year={2004}, month={sep.}, volume={3}, number={}, pages={ 2793 - 2798 vol.3}}

- S. Kloder, S. Bhattacharya and S. Hutchinson,
A Configuration Space for Permutation-Invariant Multi-Robot Formations, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, New Orleans, 2004, pp. 2746-2751.

[Abstract] [BibTex] [IEEEXplore]In this paper we describe a new representation for a configuration space for formations of robots that translate in the plane. What makes this representation unique is that it is permutation-invariant, so the relabeling of robots does not affect the configuration. Earlier methods generally either pre-assign roles for each individual robot, or rely on local planning and behaviors to build emergent behaviors. Our method first plans the formation as a set, and only afterwards determines which robot takes which role. To build our representation of this formation space, we make use of a property of complex polynomials: they are unchanged by permutations of their roots. Thus we build a characteristic polynomial whose roots are the robot locations, and use its coefficients as a representation. Mappings between work spaces and formation spaces amount to building and solving polynomials. In this paper we also perform basic path planning on this new representation, and show some practical and theoretical properties. We show that the paths generated are invariant, relative to their endpoints, with respect to linear coordinate transforms, and in most cases produce reasonable, if not linear, paths from start to finish.

@INPROCEEDINGS{1307476, author={Kloder, S. and Bhattacharya, S. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={A configuration space for permutation-invariant multi-robot formations}, year={2004}, month={apr.}, volume={3}, number={}, pages={ 2746 - 2751 Vol.3}}
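The permutation-invariance property that both formation-space papers above rely on, namely that a monic polynomial's coefficients are the elementary symmetric functions of its roots and so are unchanged when the robots are relabeled, can be sketched directly. This is a minimal illustration with made-up robot positions, not the authors' planner:

```python
from itertools import permutations

def formation_coeffs(positions):
    """Coefficients (lowest degree first) of the monic polynomial
    prod_j (z - p_j) whose roots are the robot positions, each planar
    position encoded as a complex number x + iy."""
    c = [1 + 0j]
    for p in positions:
        r = [0j] * (len(c) + 1)
        for i, ci in enumerate(c):
            r[i + 1] += ci   # multiply the term ci*z^i by z
            r[i] -= p * ci   # multiply the term ci*z^i by -p
        c = r
    return c

# Three robots at arbitrary planar locations (illustrative values).
robots = [1 + 2j, -0.5 + 0j, 3 - 1j]
base = formation_coeffs(robots)

# Relabeling the robots leaves the representation unchanged (up to rounding).
for perm in permutations(robots):
    assert all(abs(a - b) < 1e-9 for a, b in zip(base, formation_coeffs(perm)))
```

Recovering the formation from the coefficients amounts to root finding (e.g. `numpy.roots`), which returns the positions as an unordered set; assigning robots to roles then happens afterwards, as the abstracts describe.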

- R. Murrieta-Cid, A. Sarmiento, S. Bhattacharya and S. Hutchinson,
Maintaining Visibility of a Moving Target at a Fixed Distance: the Case of Observer Bounded Speed, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, New Orleans, 2004, pp. 479-484.

[Abstract] [BibTex] [IEEEXplore]This work addresses the problem of computing the motions of a robot observer in order to maintain visibility of a moving target at a fixed surveillance distance. In this paper, we deal specifically with the situation in which the observer has bounded velocity. We give necessary conditions for the existence of a surveillance strategy and give an algorithm that generates surveillance strategies.

@INPROCEEDINGS{1307195, author={Murrieta, R. and Sarmiento, A. and Bhattacharya, S. and Hutchinson, S.A.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Maintaining visibility of a moving target at a fixed distance: the case of observer bounded speed}, year={2004}, month={apr.}, volume={1}, number={}, pages={ 479 - 484 Vol.1}}

- A. Sarmiento, R. Murrieta-Cid and S. Hutchinson,
An Efficient Strategy for Rapidly Finding an Object in a Polygonal World, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Las Vegas, 2003, pp. 1153-1158.

[Abstract] [BibTex]In this paper, we propose an approach to solve the problem of finding an object in a polygon which may contain holes. We define an optimal solution as the route that minimizes the expected time it takes to find said object. The object search problem is shown to be NP-hard by reduction; therefore, we propose a heuristic utility function, defined as the ratio of a gain over a cost, and a greedy algorithm in a reduced search space that is able to explore several steps ahead without incurring too high a computational cost. This approach was implemented and simulation results are shown.

@INPROCEEDINGS{1248801, author={Sarmiento, A. and Murrieta, R. and Hutchinson, S.A.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={An efficient strategy for rapidly finding an object in a polygonal world}, year={2003}, month={oct.}, volume={2}, number={}, pages={ 1153 - 1158 vol.2}}
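The gain/cost utility driving the greedy search described above can be sketched as follows. The regions, probabilities, and travel times are invented for illustration, and this one-step-lookahead version omits the paper's multi-step exploration:

```python
def greedy_route(prob, travel_time, start):
    """Greedily visit regions in decreasing utility = gain / cost, where
    gain is the probability mass of a region and cost is the travel time
    to reach it from the current location. Returns the route and the
    expected time to find the object along it."""
    unvisited = set(prob)
    route, here, t, expected_t = [], start, 0.0, 0.0
    while unvisited:
        nxt = max(unvisited, key=lambda r: prob[r] / travel_time[(here, r)])
        t += travel_time[(here, nxt)]
        expected_t += prob[nxt] * t   # contribution to E[time to find object]
        route.append(nxt)
        unvisited.remove(nxt)
        here = nxt
    return route, expected_t

# Hypothetical pdf over three regions and symmetric travel times from start S.
prob = {"A": 0.5, "B": 0.3, "C": 0.2}
travel_time = {("S", "A"): 2.0, ("S", "B"): 1.0, ("S", "C"): 4.0,
               ("A", "B"): 1.5, ("B", "A"): 1.5,
               ("A", "C"): 3.0, ("C", "A"): 3.0,
               ("B", "C"): 2.5, ("C", "B"): 2.5}
route, expected = greedy_route(prob, travel_time, "S")
```

Note that the greedy choice need not visit the highest-probability region first: a nearby region with moderate probability can have a better probability-per-second ratio.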

- R. Murrieta-Cid, A. Sarmiento and S. Hutchinson,
On the Existence of a Strategy to Maintain a Moving Target within the Sensing Range of an Observer Reacting with Delay, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Las Vegas, 2003, pp. 1184-1191.

[Abstract] [BibTex]This paper deals with the problem of computing the motions of a robot observer in order to maintain visibility of a moving target. The target moves unpredictably, and the distribution of obstacles in the workspace is known in advance. Our algorithm computes a motion strategy based on partitioning the configuration space and the workspace in non-critical regions separated by critical curves. In this work, the existence of a solution for a given polygon and delay are determined.

@INPROCEEDINGS{1248806, author={Murrieta, R. and Sarmiento, A. and Hutchinson, S.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={On the existence of a strategy to maintain a moving target within the sensing range of an observer reacting with delay}, year={2003}, month={oct.}, volume={2}, number={}, pages={ 1184 - 1191 vol.2}}

- B. Chambers, J. Durand, N. Gans and S. Hutchinson,
Dynamic Feature Point Detection for Visual Servoing Using Multiresolution Critical-Point Filters, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Las Vegas, Oct. 2003, pp. 504-509.

[Abstract] [BibTex] [IEEEXplore]In this paper we examine the selection of feature points for visual servoing methods using multiresolution critical-point filters (CPF). With the increased number of feature points made available to us using CPF, we hope to improve the robustness of the system by allowing the algorithm to automatically detect usable feature points on virtually any object without any a priori knowledge of the object. Furthermore, the algorithm revises these points at each iteration to account for events that may have otherwise caused feature points to be lost and led to the visual servo method ending in failure.

@INPROCEEDINGS{1250679, author={Chambers, B. and Gans, N. and Durand, J. and Hutchinson, S.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Dynamic feature point detection for visual servoing using multiresolution critical-point filters}, year={2003}, month={oct.}, volume={1}, number={}, pages={ 504 - 509 vol.1}}

- J. C. Davidson and S. A. Hutchinson,
Recognition of Traversable Areas for Mobile Robotic Navigation in Outdoor Environments, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Las Vegas, 2003, pp. 297-304.

[Abstract] [BibTex] [IEEEXplore]In this paper we consider the problem of automatically determining whether regions in an outdoor environment can be traversed by a mobile robot. We propose a two-level classifier that uses data from a single color image to make this determination. At the low level, we have implemented three classifiers based on color histograms, directional filters and local binary patterns. The outputs of these low level classifiers are combined using a voting scheme that weights the results of each classifier using an estimate of its error probability. We present results from a large number of trials using a database of representative images acquired in real outdoor environments.

@INPROCEEDINGS{1250644, author={Davidson, J.C. and Hutchinson, S.A.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Recognition of traversable areas for mobile robotic navigation in outdoor environments}, year={2003}, month={oct.}, volume={1}, number={}, pages={ 297 - 304 vol.1}}

- N.R. Gans and S. A. Hutchinson,
An Asymptotically Stable Switched System Visual Controller for Eye-in-Hand Robots, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Las Vegas, 2003, pp. 735-742.

[Abstract] [BibTex] [IEEEXplore]Visual servoing methods are commonly classified as image based or position based, depending on whether image features or the robot pose is used in the feedback loop of the control law. Choosing one method over the other gives stability in the chosen state but surrenders all control over the other, which can lead to system failure if feature points are lost or the robot moves to the end of its reachable space. We present a hybrid switched system visual servo method that utilizes both image based and position based control laws. Through a switching scheme we present, this method provides asymptotic stability in both the image and pose and prevents system failure.

@INPROCEEDINGS{1250717, author={Gans, N.R. and Hutchinson, S.A.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={An asymptotically stable switched system visual controller for eye in hand robots}, year={2003}, month={oct.}, volume={1}, number={}, pages={ 735 - 742 vol.1}}

- J. Durand and S. A. Hutchinson,
Real-Time Object Tracking using Multi-Resolution Critical Points Filters, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Taipei, Taiwan, 2003, pp. 1682-1687.

[Abstract] [BibTex]In this paper, we propose a new method for object tracking, which is primarily based on the results of Prof. Shinagawa's image-matching work. We provide a method that tracks an object and follows it in real time through a sequence of images which are given, for example, by a robotic camera. The main feature of the method is that it is not affected by movements (within a certain reasonable range) of the camera or the object, such as translation, rotation or scaling. The algorithm is also insensitive to regular changes of the object's shape. For real-time applications, the algorithm allows the tracking of an object through a sequence of 64×64 images at a rate of over 8 frames/second.

@INPROCEEDINGS{1241836, author={Durand, J. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Real-time object tracking using multi-res. critical points filters}, year={2003}, month={sep.}, volume={2}, number={}, pages={ 1682 - 1687 vol.2}}

- N.R. Gans and S. A. Hutchinson,
An Experimental Study of Hybrid Switched System Approaches to Visual Servoing, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Taipei, Taiwan, 2003, pp. 3061-3068.

[Abstract] [BibTex] [IEEEXplore]In the recent past, many researchers have developed control algorithms for visual servo applications. In this paper, we introduce a new hybrid switched system approach, in which a high-level decision maker selects between two visual servo controllers. We have evaluated our approach with simulations and experiments using three individual visual servo systems and three candidate switching rules. The proposed method is very promising for visual servo tasks in which there is a significant distance between the initial and goal configuration, or the task is one that can cause an individual visual servo system to fail.

@INPROCEEDINGS{1242061, author={Gans, N.R. and Hutchinson, S.A.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={An experimental study of hybrid switched system approaches to visual servoing}, year={2003}, month={sep.}, volume={3}, number={}, pages={ 3061 - 3068 vol.3}}

- A. Sarmiento, R. Murrieta-Cid and S. A. Hutchinson,
A Strategy for Searching an Object with a Mobile Robot, *Proc. Int'l. Conf. on Advanced Robotics*, Coimbra, Portugal, June, 2003, pp. 234-239.

- R. Murrieta-Cid, A. Sarmiento, and S. A. Hutchinson,
A Motion Planning Strategy to Maintain Visibility of a Moving Target at a Fixed Distance in a Polygon, *Proc. Int'l. Conf. on Advanced Robotics*, Coimbra, Portugal, June, 2003, pp. 228-233.

- N. Gans and S. Hutchinson,
Switching Approaches to Visual Servo Control, *Proc. IEEE Workshop on Visual Servoing*, Lausanne, Switzerland, 2002 (invited).

- N. Gans and S. Hutchinson,
A Switching Approach to Visual Servo Control, *Proc. 17th IEEE International Symposium on Intelligent Control*, Vancouver, Canada, 2002, pp. 770-776 (invited).

[Abstract] [BibTex] [IEEEXplore]In the recent past, many researchers have developed control algorithms for visual servo applications. In this paper we introduce a new switching approach, in which a high-level decision maker determines which of two low level visual servo controllers should be used at each control cycle. We introduce two new low-level controllers, one that relies on the homography between initial and goal images, and one that uses an affine transformation to approximate the motion between initial and goal camera configurations. Since an affine transformation can only approximate a restricted set of camera motions, this choice of low-level controllers illustrates the strength of the switching approach. We evaluated our approach with several simulations using three candidate switching rules. Although our results are preliminary, we believe that the proposed method is very promising for visual servo tasks in which there is a significant distance between the initial and goal configurations.

@INPROCEEDINGS{1157859, author={Gans, N.R. and Hutchinson, S.A.}, journal={ Proc. 17th IEEE International Symposium on Intelligent Control}, title={A switching approach to visual servo control}, year={2002}, month={}, volume={}, number={}, pages={ 770 - 776}}

- S. Akella and S. Hutchinson,
Coordinating the Motions of Multiple Robots with Specified Trajectories, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Washington D.C., 2002, pp. 624-631.

[Abstract] [BibTex] [IEEEXplore]Coordinating the motions of multiple robots operating in a shared workspace without collisions is an important capability. We address the task of coordinating the motions of multiple robots when their trajectories (defined by both the path and velocity along the path) are specified. This problem of collision-free trajectory coordination arises in welding and painting workcells in the automotive industry. We identify sufficient and necessary conditions for collision-free coordination of the robots when only the robot start times can be varied, and define corresponding optimization problems. We develop mixed integer programming formulations of these problems to automatically generate minimum time solutions. This method is applicable to both mobile robots and articulated arms, and places no restrictions on the number of degrees of freedom of the robots. The primary advantage of this method is its ability to coordinate the motions of several robots, with as many as 20 robots being considered. We show that, even when the robot trajectories are specified, minimum time coordination of multiple robots is NP-hard.

@INPROCEEDINGS{1013428, author={Akella, S. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Coordinating the motions of multiple robots with specified trajectories}, year={2002}, month={}, volume={1}, number={}, pages={ 624 - 631 vol.1}}

- N. R. Gans, P. I. Corke and S. A. Hutchinson,
Performance Tests of Partitioned Approaches to Visual Servo Control, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Washington D.C., 2002, pp. 1616-1623.

[Abstract] [BibTex] [IEEEXplore]Visual servoing has been a viable method of robot manipulator control for more than a decade. Image-based visual servoing (IBVS), in particular, has seen considerable development in recent years. Recently, a number of researchers have reported tasks for which traditional IBVS methods fail, or experience serious difficulties. In response to these difficulties, several methods have been devised that partition the control scheme, allowing troublesome motions to be handled by methods that do not rely solely on the image Jacobian. To date, there has been little research that explores the relative strengths and weaknesses of these methods. In this paper we present such an evaluation. We have chosen three recent visual servo approaches for evaluation, in addition to the traditional IBVS approach. We posit a set of performance metrics that measure quantitatively the performance of a visual servo controller for a specific task. We then simulate each of the candidate visual servo methods for four canonical tasks, under perfect and nonideal experimental conditions.

@INPROCEEDINGS{1014774, author={Gans, N.R. and Corke, P.I. and Hutchinson, S.A.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Performance tests of partitioned approaches to visual servo control}, year={2002}, month={}, volume={2}, number={}, pages={ 1616 - 1623 vol.2}}

- P. Leven and S. Hutchinson,
Using Manipulability to Bias Sampling During the Construction of Probabilistic Roadmaps, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Washington D.C., 2002, pp. 2134-2140.

[Abstract] [BibTex] [IEEEXplore]Probabilistic roadmaps (PRMs) are a popular representation used by many current path planners. Construction of a PRM requires the ability to generate a set of random samples from the robot's configuration space, and much recent research has concentrated on new methods to do this. In this paper, we present a sampling scheme that is based on the manipulability measure associated with a robot arm. Intuitively, manipulability characterizes the arm's freedom of motion for a given configuration. Thus, our approach is to sample densely those regions of the configuration space in which manipulability is low (and therefore the robot has less dexterity), while sampling more sparsely those regions in which the manipulability is high. We have implemented our approach, and performed extensive evaluations using prototypical problems from the path planning literature. Our results show this new sampling scheme to be quite effective in generating PRMs that can solve a large range of path planning problems.

@INPROCEEDINGS{1014855, author={Leven, P. and Hutchinson, S.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Using manipulability to bias sampling during the construction of probabilistic roadmaps}, year={2002}, month={}, volume={2}, number={}, pages={ 2134 - 2140 vol.2}}
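A minimal sketch of manipulability-biased sampling for the simplest case, a planar two-link arm, where Yoshikawa's measure reduces to w = l1\*l2\*|sin q2|. The link lengths, acceptance rule, and floor value are assumptions made for illustration, not the paper's scheme:

```python
import math
import random

L1, L2 = 1.0, 0.8  # link lengths of a planar two-link arm (assumed values)
W_MAX = L1 * L2    # maximum manipulability, reached at q2 = +/- pi/2

def manipulability(q2):
    """Yoshikawa's measure sqrt(det(J J^T)) for a planar 2R arm reduces
    to l1*l2*|sin(q2)|; it vanishes at the arm's singularities."""
    return L1 * L2 * abs(math.sin(q2))

def sample_biased(n, floor=0.05):
    """Rejection sampling: low-manipulability (low-dexterity) regions of
    the configuration space are accepted with high probability and so end
    up sampled densely; `floor` keeps some coverage of dexterous regions."""
    samples = []
    while len(samples) < n:
        q1 = random.uniform(-math.pi, math.pi)
        q2 = random.uniform(-math.pi, math.pi)
        accept = max(floor, 1.0 - manipulability(q2) / W_MAX)
        if random.random() < accept:
            samples.append((q1, q2))
    return samples

random.seed(0)
pts = sample_biased(200)
avg_w = sum(manipulability(q2) for _, q2 in pts) / len(pts)
# Uniform sampling would give an average near (2/pi)*W_MAX; the bias
# pulls the average manipulability of accepted samples well below that.
assert avg_w < 0.55 * W_MAX
```

For a real arm, `manipulability` would evaluate the full Jacobian at each sampled configuration instead of the closed-form 2R expression.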

- N. R. Gans, P. I. Corke and S. A. Hutchinson,
Comparison of Robustness and Performance of Partitioned Image Based Visual Servo Systems, *Proc. Australian Conference on Robotics and Automation*, Sydney, 2001.

- P. Ranganathan, J.B. Hayet, M. Devy, S. Hutchinson and F. Lerasle,
Topological Navigation and Qualitative Localization for Indoor Environments Using Multisensory Perception, *Proc. Ninth International Symposium on Intelligent Robotic Systems*, Toulouse, July, 2001.

- P. Leven and S. Hutchinson,
Robust, Compact Representations for Real-Time Path Planning in Changing Environments, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Maui, Oct. 2001, pp. 1483-1490.

[Abstract] [BibTex] [IEEEXplore]We have previously (2000) developed a new method for generating collision-free paths for robots operating in changing environments. Our approach relies on creating a representation of the configuration space that can be easily modified in real time to account for changes in the environment. In this paper we address the issues of efficiency and robustness. First, we develop a novel, efficient encoding scheme that exploits the redundancy in the map from the robot's Euclidean workspace to its configuration space. Then, we introduce the concept of ε-robustness, and show how it can be used to enhance the representations that are used by the planner. Along the way, we present quantitative results that illustrate the efficiency and robustness of our approach.

@INPROCEEDINGS{977190, author={Leven, P. and Hutchinson, S.}, journal={ Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Robust, compact representations for real-time path planning in changing environments}, year={2001}, month={}, volume={3}, number={}, pages={1483 -1490 vol.3}}

- R. Swain-Oropeza, M. Devy and S. Hutchinson,
Sensor-Based Navigation in Cluttered Environments, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Maui, Oct. 2001, pp. 1662-1669.

[Abstract] [BibTex] [IEEEXplore]We present a new approach to sensor-based navigation in cluttered environments. In our system, tasks are specified in terms of visual goals, and obstacles are detected by a laser range finder. To effect task performance, we introduce a new gain scheduling visual servo controller. Our approach uses a diagonal gain matrix whose entries are adjusted during execution according to one of several proposed gain schedules. Obstacle avoidance is achieved by allowing the detected obstacles to generate artificial repulsive potential fields, which alter the motion of the mobile robot base. Since this motion affects the vision-based control, it is compensated by corresponding camera motions. Finally, we combine the obstacle avoiding and visual servo components of the system so that visual servo tasks can be performed as obstacles are avoided. We illustrate our approach with both simulations and real experiments using our experimental platform Hilare2Bis.

@INPROCEEDINGS{977217, author={Swain-Oropeza, R. and Devy, M. and Hutchinson, S.}, journal={ Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Sensor-based navigation in cluttered environments}, year={2001}, month={}, volume={3}, number={}, pages={1662 -1669 vol.3}}
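The repulsive-field component described above follows the classical artificial-potential-field construction. Below is a minimal sketch assuming the standard potential U = (1/2)η(1/ρ - 1/ρ0)² inside an influence radius ρ0; the gains and the simple velocity superposition are illustrative, not the paper's gain-scheduled controller:

```python
import math

ETA, RHO0 = 1.0, 1.5  # repulsive gain and influence distance (assumed values)

def repulsive_force(robot, obstacle):
    """Negative gradient of U = 0.5*ETA*(1/rho - 1/RHO0)**2 for rho < RHO0,
    zero outside the influence radius; it points away from the obstacle."""
    dx, dy = robot[0] - obstacle[0], robot[1] - obstacle[1]
    rho = math.hypot(dx, dy)
    if rho >= RHO0 or rho == 0.0:
        return (0.0, 0.0)
    mag = ETA * (1.0 / rho - 1.0 / RHO0) / rho**2
    return (mag * dx / rho, mag * dy / rho)

def combined_velocity(robot, goal, obstacles, k_att=0.5):
    """Superpose a proportional attraction to the goal with the repulsive
    contributions of all detected obstacles."""
    vx = k_att * (goal[0] - robot[0])
    vy = k_att * (goal[1] - robot[1])
    for ob in obstacles:
        fx, fy = repulsive_force(robot, ob)
        vx += fx
        vy += fy
    return (vx, vy)
```

In the system described above, the resulting base motion would additionally be compensated by camera motions so that the visual servo task is undisturbed.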

- P. Leven and S. Hutchinson,
Toward Real-Time Motion Planning in Dynamic Environments, *Proc. Workshop on the Algorithmic Foundations of Robotics*, Dartmouth, March, 2000.

- P. I. Corke and S. A. Hutchinson,
Recent Results in Visual Servo Control, *Proc. IEEE Workshop on Integrating Sensors into Mobility and Manipulation*, San Francisco, April, 2000 (invited).

- P. I. Corke and S. A. Hutchinson,
Real-Time Vision, Tracking and Control, *Proc. IEEE Int'l Conf. on Robotics and Automation*, San Francisco, April 2000, pp. 622-629 (invited).

[Abstract] [BibTex] [IEEEXplore]This paper provides a broad sketch of visual servoing and the application of real-time vision, tracking and control for robot guidance. It outlines the basic theoretical approaches to the problem, describes a typical architecture, and discusses major milestones, applications and the significant vision sub-problems that must be solved.

@INPROCEEDINGS{844122, author={Corke, P.I. and Hutchinson, S.A.}, journal={ Proc. IEEE Int'l. Conf. on Robotics and Automation}, title={Real-time vision, tracking and control}, year={2000}, month={}, volume={1}, number={}, pages={622 -629 vol.1}}

- P. I. Corke and S. A. Hutchinson,
A New Hybrid Image-Based Visual Servo Control Scheme, *Proc. 39th Conf. on Decision and Control*, Sydney, Dec., 2000, pp. 2521-2526. [Also in *Proc. 31st Int'l Symposium on Robotics*, Montreal, May 2000, pp. 30-35, and *Proc. Eighth IEEE Mediterranean Conference on Control and Automation*, Rio, Greece, July, 2000.]

[Abstract] [BibTex] [IEEEXplore]In image-based visual servo control, where control is effected with respect to the image, there is no direct control over the Cartesian velocities of the robot end effector. As a result, the robot executes trajectories that are desirable in the image, but which can be indirect and seemingly contorted in Cartesian space. We describe the cause of these phenomena, and introduce a new partitioned approach to visual servo control that overcomes the problem. In particular, we decouple the z-axis rotational and translational components of the control from the remaining degrees of freedom. Then, to guarantee that all features remain in the image throughout the entire trajectory, we incorporate a potential function that repels feature points from the boundary of the image plane. We illustrate our control scheme with a variety of simulation results.

@INPROCEEDINGS{914182, author={Corke, P.I. and Hutchinson, S.A.}, journal={Proc. 39th Conf. on Decision and Control}, title={A new hybrid image-based visual servo control scheme}, year={2000}, month={}, volume={3}, number={}, pages={2521 -2526 vol.3}}

- P. Leven and S. Hutchinson,
Real-Time Motion Planning in Changing Environments: Some Preliminary Results, *Proc. 31st Int'l Symposium on Robotics*, Montreal, May 2000, pp. 12-17.

- K. Nickels and S. Hutchinson,
Measurement Error Estimation for Feature Tracking, *Proc. IEEE Int'l Conf. on Robotics and Automation*, Detroit, 1999, pp. 3230-3235.

[Abstract] [BibTex] [IEEEXplore]Performance estimation for feature tracking is a critical issue if feature tracking results are to be used intelligently. In this paper, we derive quantitative measures for the spatial accuracy of a particular feature tracker. This method uses the results from the sum-of-squared-differences correlation measure commonly used for feature tracking to estimate the accuracy (in the image plane) of the feature tracking result. In this way, feature tracking results can be analyzed and exploited to a greater extent without placing undue confidence in inaccurate results or throwing out accurate results. We argue that this interpretation of results is more flexible and useful than simply using a confidence measure on tracking results to accept or reject features. For example, an extended Kalman filtering framework can assimilate these tracking results directly to monitor the uncertainty in the estimation process for the state of an articulated object.

@INPROCEEDINGS{774090, author={Nickels, K. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Measurement error estimation for feature tracking}, year={1999}, month={}, volume={4}, number={}, pages={3230 -3235 vol.4}}
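One way to picture an SSD-based accuracy estimate like the one described above: the curvature of the SSD surface at its minimum indicates how well-localized a match is (a sharp valley means an accurate feature position, a flat one means high uncertainty). The toy image, template, and discrete-curvature proxy below are assumptions for illustration, not the paper's estimator:

```python
def ssd(template, image, u, v):
    """Sum of squared differences between the template and the image
    patch whose top-left corner is at offset (u, v)."""
    return sum((template[i][j] - image[u + i][v + j]) ** 2
               for i in range(len(template))
               for j in range(len(template[0])))

def localize_with_uncertainty(template, image):
    """Find the best-matching offset and return the discrete curvature of
    the SSD surface there; low curvature signals an unreliable match."""
    h, w = len(template), len(template[0])
    H, W = len(image), len(image[0])
    scores = {(u, v): ssd(template, image, u, v)
              for u in range(H - h + 1) for v in range(W - w + 1)}
    u0, v0 = min(scores, key=scores.get)

    def curvature(du, dv):
        a = scores.get((u0 - du, v0 - dv))
        b = scores.get((u0 + du, v0 + dv))
        if a is None or b is None:
            return float("inf")  # minimum on the border: curvature unknown
        return a - 2 * scores[(u0, v0)] + b

    return (u0, v0), (curvature(1, 0), curvature(0, 1))

# Toy example: a 2x2 template embedded in an otherwise uniform image.
image = [[0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 9, 8, 0, 0],
         [0, 7, 6, 0, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 8], [7, 6]]
best, (cu, cv) = localize_with_uncertainty(template, image)
```

An estimate of this kind can feed an extended Kalman filter as a per-measurement error covariance, in the spirit of the abstract, rather than merely gating features as accepted or rejected.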

- T. Kurpjuhn, K. Nickels, A. Hauck and S. Hutchinson,
Development of a Visual-Space Mouse, *Proc. IEEE Int'l Conf. on Robotics and Automation*, Detroit, 1999, pp. 2527-2532.

[Abstract] [BibTex] [IEEEXplore]The pervasiveness of computers in everyday life coupled with recent rapid advances in computer technology have created both the need and the means for sophisticated human-computer interaction (HCI) technology. Despite all the progress in computer technology and robotic manipulation, the interfaces for controlling manipulators have changed very little in the last decade. Therefore human-computer interfaces for controlling robotic manipulators are of great interest. A flexible and useful robotic manipulator is one capable of movement in three translational degrees of freedom and three rotational degrees of freedom. In addition to research labs, six-degree-of-freedom robots can be found in construction areas or other environments unfavorable for human beings. This paper proposes an intuitive and convenient visually guided interface for controlling a robot with six degrees of freedom. Two orthogonal cameras are used to track the position and the orientation of the hand of the user. This allows the user to control the robotic arm in a natural way.

@INPROCEEDINGS{773977, author={Kurpjuhn, T.P. and Nickels, K. and Hauck, A. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Development of a visual space-mouse}, year={1999}, month={}, volume={4}, number={}, pages={2527 -2532 vol.4}}

- R. Kelly, F. Reyes, J. Moreno and S. Hutchinson,
A Two Loops Direct Visual Control of Direct-Drive Planar Robots with Moving Target, *Proc. IEEE Int'l Conf. on Robotics and Automation*, Detroit, 1999, pp. 599-604.

[Abstract] [BibTex] [IEEEXplore]This paper addresses the visual servoing of robot manipulators in a fixed-camera configuration with a moving target. We propose a control scheme consisting of two loops: an inner loop, which is a joint velocity controller; and an outer loop, which is an image-based feedback loop. We present the stability analysis and the experimental evaluation on a two-degree-of-freedom direct-drive planar robot arm.

@INPROCEEDINGS{770041, author={Kelly, R. and Reyes, F. and Moreno, J. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={A two loops direct visual control of direct-drive planar robots with moving target}, year={1999}, month={}, volume={1}, number={}, pages={599 -604 vol.1}}

- P. Leven, D. Burschka, S. Hutchinson,
Perception-Based Motion Planning for Indoor Exploration, *Proc. IEEE Int'l Conf. on Robotics and Automation*, Detroit, 1999, pp. 695-701.

[Abstract] [BibTex] [IEEEXplore]This paper proposes an approach for motion planning in indoor environments based on incomplete and uncertain information from a line-based binocular stereo system. The primary goal of the planning process is to plan an optimal path through an unknown or partially known environment, depending on the information gained from exploration and the current mission goal. This paper presents an adaptable motion planner that supports sensor-based map construction, object recognition and navigation in an unknown environment while carrying out a mission. Also presented are some preliminary experimental results that demonstrate the utility of the approach.

@INPROCEEDINGS{770056, author={Leven, P. and Hutchinson, S. and Burschka, D. and Farber, G.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Perception-based motion planning for indoor exploration}, year={1999}, month={}, volume={1}, number={}, pages={695 -701 vol.1}}

- H. Rifai, I. Bloch, S. A. Hutchinson, J. Wiart and L. Garnero,
Segmentation of the Skull in MRI Volumes Using Deformable Model and Taking the Partial Volume Effect into Account, *Proc. SPIE Medical Imaging Symposium*, San Diego, 1999, pp. 288-299.

- K. Nickels and S. A. Hutchinson,
Weighting Observations: The Use of Kinematic Models in Object Tracking, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Leuven, Belgium, May, 1998, pp. 1677-1682.

[Abstract] [BibTex] [IEEEXplore]We describe a model-based object tracking system that updates the configuration parameters of an object model based upon information gathered from a sequence of monocular images. Realistic object and imaging models are used to determine the expected visibility of object features, and to determine the expected appearance of all visible features. We formulate the tracking problem as one of parameter estimation from partially observed data, and apply the extended Kalman filtering (EKF) algorithm. The models are also used to determine what point feature movement reveals about the configuration parameters of the object. This information is used by the EKF to update estimates for parameters, and for the uncertainty in the current estimates, based on observations of point features in monocular images.

@INPROCEEDINGS{677401, author={Nickels, K. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Weighting observations: the use of kinematic models in object tracking}, year={1998}, month={may.}, volume={2}, number={}, pages={1677 -1682 vol.2}}

- K. Nickels and S. A. Hutchinson,
Integrated Object Models for Robust Visual Tracking, *Proc. IEEE Workshop on Robust Vision for Vision-based Control of Motion*, Leuven, Belgium, May, 1998 (invited).

- K. Nickels and S. Hutchinson,
Characterizing the Uncertainties in Point Feature Motion for Model-Based Object Tracking, *Proc. IEEE Workshop on New Trends in Image-Based Robot Servoing*, Grenoble, France, 1997, pp. 53-63 (invited).

- M. Barbehenn and S. Hutchinson,
Toward Incremental Geometric Robot Motion Planning, *Proc. IEEE Workshop on Practical Motion Planning in Robotics*, April, 1996 (invited).

- S. Hutchinson,
Using Projective Geometry to Derive Constraints for Calibration-Free Visual Servo Control, *Proc. Sixth Int'l Symposium on Robotics and Manufacturing*, Montpellier, France, pp. 305-310, 1996.

- S. LaValle and S. A. Hutchinson,
Optimal Motion Planning for Multiple Robots Having Independent Goals, *Proc. IEEE Int'l Conf. on Robotics and Automation*, Minneapolis, pp. 2847-2852, 1996.

[Abstract] [BibTex] [IEEEXplore]This work makes two contributions to geometric motion planning for multiple robots: i) motion plans can be determined that simultaneously optimize an independent performance criterion for each robot; ii) a general spectrum is defined between decoupled and centralized planning. By considering independent performance criteria, we introduce a form of optimality that is consistent with concepts from multi-objective optimization and game theory research. Previous multiple-robot motion planning approaches that consider optimality combine individual criteria into a single criterion. As a result, these methods can fail to find many potentially useful motion plans. We present implemented, multi-robot motion planning algorithms that are derived from the principle of optimality, for three problem classes along the spectrum between centralized and decoupled planning: i) coordination along fixed, independent paths; ii) coordination along independent roadmaps; iii) general, unconstrained motion planning. Several computed examples are presented for all three problem classes that illustrate the concepts and algorithms.

@INPROCEEDINGS{506594, author={LaValle, S.M. and Hutchinson, S.A.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Optimal motion planning for multiple robots having independent goals}, year={1996}, month={apr.}, volume={3}, number={}, pages={2847 -2852 vol.3}}

- S. LaValle and S. A. Hutchinson,
Evaluating Motion Strategies under Nondeterministic or Probabilistic Uncertainties in Sensing and Control, *Proc. IEEE Int'l Conf. on Robotics and Automation*, Minneapolis, pp. 3034-3039, 1996.

[Abstract] [BibTex] [IEEEXplore]Provides a method for characterizing future configurations under the implementation of a motion strategy in the presence of sensing and control uncertainties. We provide general techniques which can apply to either nondeterministic models of uncertainty (as typically considered in preimage planning research) or probabilistic models. Information-space concepts from modern control theory are utilized to define the notion of a strategy in this general context. We have implemented algorithms and show several computed examples that generalize the forward projection concepts from traditional literature in this area.

@INPROCEEDINGS{509173, author={LaValle, S.M. and Hutchinson, S.A.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Evaluating motion strategies under nondeterministic or probabilistic uncertainties in sensing and control}, year={1996}, month={apr.}, volume={4}, number={}, pages={3034 -3039 vol.4}}

- B. Bishop, A. Castano, S. Hutchinson, R. Sharma,
P. Shirkey, M. W. Spong and N. Srinivasa,
Some Experiments in Vision-Based Robotics at the University of Illinois, *Proc. IEEE Vision for Robotics Workshop*, 1995 (invited).

- R. L. Castano and S. A. Hutchinson,
A Probabilistic Framework for Grouping Image Features, *Proc. IEEE Int'l Symposium on Computer Vision*, pp. 611-616, 1995.

[Abstract] [BibTex] [IEEEXplore]Presents a framework for determining probability distributions over the space of possible image feature groupings. Such a framework allows higher level processes to reason over many plausible perceptual groupings in an image, rather than committing to a specific image segmentation in the early stages of processing. The authors first derive an expression for the probability that a set of features should be grouped together, conditioned on the observed image data associated with those features. This probability measure formalizes the principle that features in an image should be grouped together when they participate in a common underlying geometric structure. The authors then present a representation scheme in which only those groupings with high probability are explicitly represented, while large sets of unlikely grouping hypotheses are implicitly represented. The authors present experimental results for a variety of real intensity images.

@INPROCEEDINGS{477069, author={Castano, R.L. and Hutchinson, S.}, journal={Proc. IEEE Int'l Symposium on Computer Vision}, title={A probabilistic framework for grouping image features}, year={1995}, month={nov.}, volume={}, number={}, pages={611 -616}}

- R. Sharma, S. LaValle and S. A. Hutchinson,
Optimizing Robot Motion Strategies for Assembly with Stochastic Models of the Assembly Process, *Proc. IEEE Int'l Symposium on Assembly and Task Planning*, 1995.

[Abstract] [BibTex] [IEEEXplore]Gross-motion planning for assembly is commonly considered as a distinct, isolated step between task sequencing/scheduling and fine-motion planning. In this paper the authors formulate the problem of gross-motion planning for assembly in a manner that integrates it with both the manufacturing process and the fine motions involved in the final assembly stages. One distinct characteristic of gross-motion planning for assembly is the prevalence of uncertainty involving time-in parts arrival, in request arrival, etc. The authors propose a stochastic representation of the assembly process that improves the robot performance in the uncertain assembly environment by optimizing an appropriate criterion in the expected sense.

@INPROCEEDINGS{518792, author={Sharma, R. and LaValle, S.M. and Hutchinson, S.A.}, journal={Proc. IEEE Int'l Symposium on Assembly and Task Planning}, title={Optimizing robot motion strategies for assembly with stochastic models of the assembly process}, year={1995}, month={aug.}, volume={}, number={}, pages={341 -346}}

- M. Barbehenn and S. Hutchinson,
Toward an Exact Incremental Geometric Robot Motion Planner, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Pittsburgh, 1995, pp. 39-44, vol. 3.

[Abstract] [BibTex] [IEEEXplore]In this paper we introduce a new class of geometric robot motion planning problems that we call incremental problems. We also introduce the concept of incremental algorithms to solve this class of problems. As an example, we describe an incremental critical curve based exact cell decomposition algorithm for a line segment robot moving freely amidst polygonal obstacles. In the example, after computing an initial representation of the robot's free space, the algorithm maintains the representation as obstacles are moved between planning problems. The cost to maintain the representation is expected to be small relative to the cost of its initial construction.

@INPROCEEDINGS{525859, author={Barbehenn, M. and Hutchinson, S.}, journal={ Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Toward an exact incremental geometric robot motion planner}, year={1995}, month={aug.}, volume={3}, number={}, pages={39 -44 vol.3}}

- R. Sharma and S. Hutchinson,
Optimizing Hand/Eye Configuration for Visual-Servo Systems, *Proc. IEEE Int'l Conf. on Robotics and Automation*, Nagoya, Japan, pp. 172-177, 1995.

[Abstract] [BibTex] [IEEEXplore]The authors (1994) derived a quantitative measure of the ability of a camera setup to observe the changes in image features due to relative motion. This measure of motion perceptibility has many applications in evaluating a robot hand/eye setup with respect to the ease of achieving vision-based control, and steering away from singular-configurations. Motion perceptibility can be combined with the traditional notion of manipulability, into a composite perceptibility/manipulability measure. In this paper the authors demonstrate how this composite measure may be applied to a number of different problems involving relative hand/eye positioning and control. These problems include optimal camera placement, active camera trajectory planning, robot trajectory planning, and feature selection for visual servo control. The authors consider the general formulation of each of these problems, and several others, in terms of the motion perceptibility/manipulability measure and illustrate the solution for particular hand/eye configurations.

@INPROCEEDINGS{525281, author={Sharma, R. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Optimizing hand/eye configuration for visual-servo systems}, year={1995}, month={may.}, volume={1}, number={}, pages={172 -177 vol.1}}

- B. Bishop, S. Hutchinson and M. Spong,
On the Performance of State Estimation for Visual Servo Systems, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, San Diego, 1994, pp. 168-173 (invited).

[Abstract] [BibTex] [IEEEXplore]Discusses the use of computer vision for real-time state estimation in feedback control systems. To this end, the authors construct a system for visual state estimation of simple state vectors and study the effects of various real-world disturbances on the state estimates. Simulations are performed using a detailed camera model to study the performance of an image plane position estimation algorithm for a single circular feature. Various disturbances, such as lens distortion, noise, defocus, and blurring are simulated and analyzed with respect to this estimation routine and visual state estimation in general.

@INPROCEEDINGS{350993, author={Bishop, B. and Hutchinson, S. and Spong, M.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={On the performance of state estimation for visual servo systems}, year={1994}, month={may.}, volume={}, number={}, pages={168 -173 vol.1}}

- J. Reed and S. Hutchinson,
Subpixel Parameter Estimation for Elliptical Shapes Using Image Sequences, *Proc. IEEE Int'l Conf. on Multisensor Fusion and Integration for Intelligent Systems*, 1994, pp. 567-574.

[Abstract] [BibTex] [IEEEXplore]We present a method of ellipse parameter estimation that can be used in performing automated inspection of circular features. In our method, several digital images are taken of each part as it moves past a camera, creating an image sequence. Image enhancement is performed using the image sequence, yielding a high-resolution image. Subpixel edge detection is performed on the high-resolution image, producing a set of data points that is used for ellipse parameter estimation.

@INPROCEEDINGS{398403, author={Reed, J.N. and Hutchinson, S.A.}, journal={Proc. IEEE Int'l Conf. on Multisensor Fusion and Integration for Intelligent Systems}, title={Subpixel parameter estimation for elliptical shapes using image sequences}, year={1994}, month={oct.}, volume={}, number={}, pages={567 -574}}

- S. LaValle and S. A. Hutchinson,
An Objective-Based Stochastic Framework for Manipulation Planning, *Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems*, Munich, Germany, pp. 1772-1779, 1994.

[Abstract] [BibTex] [IEEEXplore]We consider the problem of determining robot manipulation plans when sensing and control uncertainties are specified as conditional probability densities. Traditional approaches are usually based on worst-case error analysis in a methodology known as preimage backchaining. We have developed a general framework for determining sensor-based robot plans by blending ideas from stochastic optimal control and dynamic game theory with traditional preimage backchaining concepts. We argue that the consideration of a precise loss (or performance) functional is crucial to determining and evaluating manipulation plans in a probabilistic setting. We consequently introduce a stochastic, performance preimage that generalizes previous preimage notions. We also present some optimal strategies for planar manipulation tasks that were computed by a dynamic programming-based algorithm.

@INPROCEEDINGS{407618, author={Lavalle, S.M. and Hutchinson, S.A.}, journal={Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={An objective-based stochastic framework for manipulation planning }, year={1994}, month={sep.}, volume={3}, number={}, pages={1772 -1779 vol.3}}

- G. D. Hager and S. Hutchinson,
Visual Servoing: Achievements, Issues, and Applications, *Proc. IEEE Workshop on Visual Servoing: Achievements, Applications and Open Problems*, 1994.

- R. Sharma and S. Hutchinson,
Evaluating a Camera Position for Vision-Guided Manipulation, *Proc. AAAI Spring Symposium on Physical Interaction and Manipulation*, 1994.

- S. LaValle and S. Hutchinson,
Path Selection and Coordination for Multiple Robots via Nash Equilibria, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, San Diego, 1994, pp. 1847-1852.

[Abstract] [BibTex] [IEEEXplore]We present a method for analyzing and selecting time-optimal coordination strategies for n robots whose configurations are constrained to lie on a C-space roadmap (which could, for instance, represent a Voronoi diagram). We consider independent objective functionals, associated with each robot, together in a game-theoretic context in which maximal Nash equilibria represent the favorable strategies. Within this framework additional criteria, such as priority or the amount of sacrifice one robot makes, can be applied to select a particular equilibrium. An algorithm that determines all of the maximal Nash equilibria for a given problem is presented along with several computed examples for two and three robots.

@INPROCEEDINGS{351192, author={LaValle, S.M. and Hutchinson, S.A.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Path selection and coordination for multiple robots via Nash equilibria}, year={1994}, month={may.}, volume={}, number={}, pages={1847 -1852 vol.3}}

- R. Sharma and S. Hutchinson,
On the Observability of Robot Motion Under Active Camera Control, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, San Diego, 1994, pp. 162-167.

[Abstract] [BibTex] [IEEEXplore]Defines a measure of "observability" of robot motion that can be used in evaluating a hand/eye set-up with respect to the ease of achieving vision-based control. This extends the analysis of "manipulability" of a robotic mechanism in Yoshikawa (1983) to incorporate the effect of visual features. The authors discuss how the observability measure can be applied for active camera placement and for robot trajectory planning to improve the visual servo control. The authors use the examples of a planar 2-DOF arm and a PUMA-type 3-DOF arm to show the variation of the observability and manipulability measures with respect to the relative position of the active camera.

@INPROCEEDINGS{350994, author={Sharma, R. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={On the observability of robot motion under active camera control }, year={1994}, month={may.}, volume={}, number={}, pages={162 -167 vol.1}}

- M. Barbehenn, P. Chen and S. Hutchinson,
An Efficient Hybrid Planner in Changing Environments, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, San Diego, 1994, pp. 2755-2761.

[Abstract] [BibTex] [IEEEXplore]In this paper, we present a new hybrid motion planner that is capable of exploiting previous planning episodes when confronted with new planning problems. Our approach is applicable when several (similar) problems are successively posed for the same static environment, or when the environment changes incrementally between planning episodes. At the heart of our system lie two low-level motion planners: a fast, but incomplete planner LOCAL, and a computationally costly (possibly resolution) complete planner GLOBAL. When a new planning problem is presented to our planner, an efficient meta-level planner MANAGER decomposes the problem into segments that are amenable to solution by LOCAL. This decomposition is made by exploiting a task graph, in which successful planning episodes have been recorded. In cases where the decomposition fails, GLOBAL is invoked. The key to our planner's success is a novel representation of solution trajectories, in which segments of collision-free paths are associated with the boundary of nearby obstacles. Thus we effectively combine the efficiency of one planner with the completeness of another to obtain a more efficient complete planner.

@INPROCEEDINGS{350920, author={Barbehenn, M. and Chen, P.C. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={An efficient hybrid planner in changing environments}, year={1994}, month={may.}, volume={}, number={}, pages={2755 -2760 vol.4}}

- N. Mahadevamurty, T-C. Tsao and S. Hutchinson,
Multi-Rate Analysis and Design of Visual Feedback Digital Servo Control Systems, DSC-Vol. 50/PED-Vol. 63, Symposium on Mechatronics, *Proc. ASME Winter Annual Meeting*, 1993, pp. 7-14.

- M. Spong, G. DeJong and S. Hutchinson,
Integration of Machine Learning and Sensor-Based Control in Intelligent Robotic Systems, *Proc. American Control Conf.*, 1993, pp. 352-356 (invited).

[Abstract] [BibTex]This paper discusses the integration of machine learning and sensor-based control in intelligent robotic systems. Our research is interdisciplinary and combines techniques of explanation-based control with robust and adaptive nonlinear control, computer vision, and motion planning. Our intent in this research is to go beyond the strict hierarchical control architectures typically used in robotic systems by integrating modeling, dynamics, and control across traditional levels of planning and control at all levels of intelligence. Our ultimate goal is to combine analytical techniques of nonlinear dynamics and control with artificial intelligence into a single new paradigm in which symbolic reasoning holds an equal place with differential equation based modeling and control.

@INPROCEEDINGS{4792873, author={DeJong, Gerald and Hutchinson, Seth and Spong, Mark W.}, journal={American Control Conference, 1993}, title={Integration of Machine Learning and Sensor-Based Control in Intelligent Robotic Systems}, year={1993}, month={jun.}, volume={}, number={}, pages={352 -356}}

- E. Welton, S. Hutchinson and M. Spong,
A Modular, Interdisciplinary Approach to Undergraduate Robotics Education, *Proc. Frontiers in Education*, Washington, D.C., 1993, pp. 714-719.

[Abstract] [BibTex] [IEEEXplore]The authors describe three modular, half-semester courses that constitute the emerging undergraduate robotics curriculum at the University of Illinois. They also present two software systems that are currently being tested in these courses. These systems allow students to make progress, unrestrained by occasional hardware inaccessibility and undaunted by implementation details which have been an issue in the past. Students are also able to run their own solutions in parallel with actual solutions. For example, a student could run a forward kinematics solver while simultaneously observing the isomorphic operation of the physical robot. Furthermore, use of these systems provides an opportunity for students to design and implement solutions to real-world problems within the course of a semester.

@INPROCEEDINGS{405430, author={Welton, E. and Hutchinson, S. and Spong, M.}, journal={Proc. Frontiers in Education}, title={A modular, interdisciplinary approach to undergraduate robotics education}, year={1993}, month={nov.}, volume={}, number={}, pages={714 -719}}

- S. LaValle and S. Hutchinson,
Game Theory as a Unifying Structure for a Variety of Robot Tasks, *Proc. IEEE Int'l. Symposium on Intelligent Control*, 1993, pp. 429-434.

[Abstract] [BibTex] [IEEEXplore]The use of game theory as a general formalism for representing, comparing, and providing insight into solutions to a wide class of robotics problems is proposed. It is shown how game theory can be applied to problems of multiple robot coordination, high-level strategy planning, information gathering through manipulation and/or sensor planning, and pursuit-evasion scenarios. A very general game structure is considered which has broad application. Some preliminary experiments on a two-robot corridor navigation problem in which the robots have independent tasks, and the equilibria in a dynamic game with a rolling time horizon are used for coordination.

@INPROCEEDINGS{397675, author={LaValle, S.M. and Hutchinson, S.}, journal={Proc. IEEE Int'l. Symposium on Intelligent Control}, title={Game theory as a unifying structure for a variety of robot tasks }, year={1993}, month={aug.}, volume={}, number={}, pages={429 -434}}

- S. LaValle and S. Hutchinson,
On Considering Uncertainty and Alternatives in Low-Level Vision, *Proc. Ninth Conf. on Uncertainty in Artificial Intelligence*, 1993, pp. 55-65.

- S. M. LaValle and S. A. Hutchinson,
Bayesian Region Merging Probability for Parametric Image Models, *Proc. IEEE Conf. on Computer Vision and Pattern Recognition*, New York, 1993, pp. 778-779.

[Abstract] [BibTex] [IEEEXplore]A novel Bayesian approach to region merging is described. It directly uses statistical image models to determine the probability that the union of two regions is homogeneous, and does not require parameter estimation. This approach is particularly beneficial for cases in which the merging decision is most likely to be incorrect, i.e., when little information is contained in one or both of the regions and when parameter estimates are unreliable. The formulation is applied to the implicit polynomial surface model for range data, and texture models for intensity images.

@INPROCEEDINGS{341171, author={LaValle, S.M. and Hutchinson, S.A.}, journal={ Proc. IEEE Conf. on Computer Vision and Pattern Recognition}, title={Bayesian region merging probability for parametric image models}, year={1993}, month={jun.}, volume={}, number={}, pages={778 -779}}

- S. M. LaValle, K. J. Moroney and S. A. Hutchinson,
Agglomerative Clustering on Range Data with a Unified Probabilistic Merging Function and Termination Criterion, *Proc. IEEE Conf. on Computer Vision and Pattern Recognition*, New York, 1993, pp. 798-799.

[Abstract] [BibTex] [IEEEXplore]Clustering methods, which are frequently employed for region-based segmentation, are inherently metric based. A fundamental problem with an estimation-based criterion is that as the amount of information in a region decreases, the parameter estimates become extremely unreliable and incorrect decisions are likely to be made. It is shown that clustering need not be metric based. A rigorous region merging probability function is used. It makes use of all information available in the probability densities of a statistical image model. By using this probability function as a termination criterion it is possible to produce segmentations in which all region merges are performed above some level of confidence.

@INPROCEEDINGS{341182, author={LaValle, S.M. and Moroney, K.J. and Hutchinson, S.A.}, journal={Proc. IEEE Conf. on Computer Vision and Pattern Recognition}, title={Agglomerative clustering on range data with a unified probabilistic merging function and termination criterion}, year={1993}, month={jun.}, volume={}, number={}, pages={798 -799}}

- S. LaValle, K. J. Moroney and S. A. Hutchinson,
Methods for Numerical Integration of High-Dimensional Posterior Densities with Application to Statistical Image Models, *Proc. SPIE Conf. on Neural and Stochastic Methods in Image and Signal Processing*, 1993, pp. 292-303.

- A. Fox and S. A. Hutchinson,
Exploiting Visual Constraints in the Synthesis of Uncertainty-Tolerant Motion Plans I: The Directional Backprojection, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Atlanta, 1993, pp. 305-310.

[Abstract] [BibTex] [IEEEXplore]It is shown how the backprojection approach to fine-motion planning can be extended to exploit visual constraints. Specifically, by deriving a configuration space representation of visual constraint surfaces, visual constraint surfaces can be included as boundaries of the directional backprojection. An implemented backprojection planner for C=R2 that is based on a plane-sweep algorithm for computing the directional backprojection is described. The effects of visual constraints on the asymptotic time complexity of the modified algorithm are discussed.

@INPROCEEDINGS{291999, author={Fox, A. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Exploiting visual constraints in the synthesis of uncertainty-tolerant motion plans. I. The directional backprojection}, year={1993}, month={may.}, volume={}, number={}, pages={305 -310 vol.1}}

- A. Fox and S. A. Hutchinson,
Exploiting Visual Constraints in the Synthesis of Uncertainty-Tolerant Motion Plans II: The Nondirectional Backprojection, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Atlanta, 1993, pp. 311-316.

[Abstract] [BibTex] [IEEEXplore]For Pt.I see ibid., p.305-10, (1993). It is shown how the introduction of visual constraints into the backprojection formalism affects the computation and structure of the nondirectional backprojection. Specifically, by examining the behavior of the visual constraints as a function of the direction of the commanded velocity, it is possible to determine the new criteria for critical velocity orientations (i.e., velocity orientations at which the topology of the directional backprojection, including visual constraint rays, might change).

@INPROCEEDINGS{292000, author={Fox, A. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Exploiting visual constraints in the synthesis of uncertainty-tolerant motion plans. II. The nondirectional backprojection}, year={1993}, month={may.}, volume={}, number={}, pages={311 -316 vol.1}}

- M. Barbehenn and S. A. Hutchinson,
Efficient Search in Hierarchical Motion Planning Using Dynamic Single Source Shortest Paths Trees, *Proc. IEEE Int'l. Conf. on Robotics and Automation*, Atlanta, 1993, pp. 566-571.

[Abstract] [BibTex] [IEEEXplore]All previous robot motion planners based on approximate cell decomposition exhibit redundancy between successive searches for a sequence of empty cells. A search method that eliminates this redundancy is presented. It is founded on the ability to efficiently maintain a single-source shortest paths tree embedded in the connectivity graph that is subject to the dynamic modifications that result from incremental subdivision of cells. The convergence of the algorithm is controlled by the vertex cost function, which relies on an estimate for the proportion of free space in a cell. The planner is fully implemented, and empirical results are given to illustrate the performance improvements of the dynamic algorithm compared to Dijkstra's algorithm.

@INPROCEEDINGS{292039, author={Barbehenn, M. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Efficient search and hierarchical motion planning using dynamic single-source shortest paths trees}, year={1993}, month={may.}, volume={}, number={}, pages={566 -571 vol.1}}

- R. Spence and S. A. Hutchinson,
Dealing with Unexpected Moving Obstacles by Integrating Potential Field Planning with Inverse Dynamics Control, *Proc. of the IEEE Int'l Conf. on Intelligent Robots and Systems*, Raleigh, 1992, pp. 1485-1490.

[BibTex]@INPROCEEDINGS{594180, author={Spence, R. and Hutchinson, S.}, journal={ Proc. IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems}, title={Dealing With Unexpected Moving Obstacles By Integrating Potential Field Planning With Inverse Dynamics Control}, year={1992}, month={jul.}, volume={3}, number={}, pages={1485 -1490}}

- A. Castano and S. A. Hutchinson,
Hybrid Vision/Position Servo Control of a Robotic Manipulator, *Proc. of the IEEE Int'l Conf. on Robotics and Automation*, Nice, France, 1992, pp. 1264-1269.

[Abstract] [BibTex] [IEEEXplore]The authors address a number of issues associated with visual servo control of robotic manipulators. They derive a set of projection equations that are used in the derivation of the Jacobian matrix for resolved-rate visual servo control. A calibration procedure that determines those parameters that appear in the projection equations is presented. Given the projection equations and calibration procedure, a set of equations is derived that can be used to initially position the manipulator at a specified perpendicular distance from the camera such that the tool center of the end effector projects onto a specified pixel on the image plane. A Jacobian matrix that is used to effect hybrid control of the manipulator is derived. Specifically, the two degrees of freedom parallel to the image plane of the camera are controlled using visual feedback, and the remaining degree of freedom is controlled using position feedback provided by the robot joint encoders.

@INPROCEEDINGS{220075, author={Castano, A. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Hybrid vision/position servo control of a robotic manipulator}, year={1992}, month={may.}, volume={}, number={}, pages={1264 -1269 vol.2}}

- S. Pandya and S. A. Hutchinson,
A Case-Based Approach to Robot Motion Planning, *Proc. of the IEEE Int'l Conf. on Systems, Man and Cybernetics*, 1992, pp. 492-497.

[Abstract] [BibTex] [IEEEXplore]The authors present a case-based robot motion planning system. Case-based planning affords on the system good average case performance by allowing it to understand and exploit the tradeoff between completeness and computational cost, and permits it to successfully plan and learn in a complex domain without the need for an extensively engineered and possibly incomplete domain theory. The system automatically classifies motion planning problems according to the solution method that is most appropriate, not by using a fixed classification of problems, but by learning with experience how to map problems to solution methods.

@INPROCEEDINGS{271726, author={Pandya, S. and Hutchinson, S.}, journal={ Proc. of the IEEE Int'l Conf. on Systems Man and Cybernetics}, title={A case-based approach to robot motion planning}, year={1992}, month={oct.}, volume={}, number={}, pages={492 -497 vol.1}}

- S. M. LaValle and S. A. Hutchinson,
Representing Probability Distributions of Image Segments and Segmentations, *Proc. IEEE Int'l Conf. on Systems Man and Cybernetics*, 1992, pp. 1552-1557 (invited).

[Abstract] [BibTex] [IEEEXplore]The authors develop a method for probabilistically maintaining sets of alternative homogeneous regions and segmentations. Depending on the image size and complexity and on the applications, a probability distribution can be constructed over the entire image, or a distribution over partial segmentations can be formed. The authors develop an efficient representation structure and a probabilistic mechanism for applying Bayesian model-based evidence to guide the construction of the representation and influence the resulting posterior distribution over the space of alternatives. The formalism is applied to range images using a piecewise-planar model with additive Gaussian noise.

@INPROCEEDINGS{271519, author={LaValle, S.M. and Hutchinson, S.A.}, journal={ Proc. IEEE Int'l Conf. on Systems Man and Cybernetics}, title={Representing probability distributions of image segments and segmentations}, year={1992}, month={oct.}, volume={}, number={}, pages={1552 -1557 vol.2}}

- S. A. Hutchinson,
Planning Visually Controlled Robot Motions, *Proc. of the AAAI Fall Symposium on Sensory Aspects of Robotic Intelligence*, 1991, pp. 38-43.

- A. Fox, A. Castano and S. A. Hutchinson,
Planning and Executing Visually Constrained Robot Motions, *Proc. of the SPIE Symposium on Advances in Intelligent Robotic Systems*, 1991.

- S. A. Hutchinson,
Exploiting Visual Constraints in Robot Motion Planning, *Proc. of the IEEE Int'l Conf. on Robotics and Automation*, Sacramento, 1991, pp. 1722-1727.

[Abstract] [BibTex] [IEEEXplore]A number of issues concerning the integration of visual and physical constraints for the synthesis and execution of error-tolerant motion strategies are addressed. Object features and their projections onto the image plane of a supervisory camera are used to define visual constraint surfaces. These surfaces can be directly used to enforce the following types of constrained motion: motion terminated on contact with a visual constraint surface, motion maintaining constant contact with a visual constraint surface, and motion that is simultaneously constrained by both visual and physical constraint surfaces. Preimage planning techniques are extended to the synthesis of motion strategies that exploit these types of motion.

@INPROCEEDINGS{131869, author={Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Exploiting visual constraints in robot motion planning}, year={1991}, month={apr.}, volume={}, number={}, pages={1722 -1727 vol. 2}}

- M. Barbehenn and S. A. Hutchinson,
Learning Conditional Effects of Actions for Robot Navigation, *Proc. of the IEEE Int'l Conf. on Robotics and Automation*, Sacramento, 1991, pp. 260-265 [Also in *Proc. of the Florida Artificial Intelligence Research Symposium*, 1991, pp. 37-41].

[Abstract] [BibTex] [IEEEXplore]GINKO, an integrated learning and planning system that has been applied to an autonomous mobile robot domain, is described. The goal of GINKO's learning system is to partition the robot's configuration space into regions in which actions exhibit a uniform qualitative behavior. This partitioning is performed by an inductive learning algorithm that classifies regions of the configuration space with regard to the effects of the robot's actions when executed in those regions. GINKO's learning is driven by its attempts to perform tasks. Thus, the learned effects of actions are directly applicable to normal system performance.

@INPROCEEDINGS{131584, author={Barbehenn, M. and Hutchinson, S.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Learning conditional effects of actions for robot navigation}, year={1991}, month={apr.}, volume={}, number={}, pages={260 -265 vol.1}}

- M. Barbehenn and S. A. Hutchinson,
An Integrated Architecture for Learning and Planning in Robotic Domains, *Proc. of the AAAI Spring Symposium on Integrated Architectures*, 1991, pp. 15-19 [Reprinted in *ACM SIGART*, Vol. 2, No. 4, Aug. 1991, pp. 29-33].

- S. M. LaValle and S. A. Hutchinson,
Considering Multiple Surface Hypotheses in a Bayesian Hierarchy, *Proc. of the SPIE Conf. on Stochastic Methods in Signal Processing, Image Processing, and Computer Vision*, 1991, pp. 1-15.

- S. A. Hutchinson and A. C. Kak,
Extending the Classical AI Planning Paradigm to Robotic Assembly Planning, *Proc. of the IEEE Conf. on Robotics and Automation*, Cincinnati, 1990, pp. 182-189.

[Abstract] [BibTex] [IEEEXplore]A description is given of SPAR, a task planner that has been implemented on a PUMA 762. SPAR is capable of formulating manipulation plans to meet specified assembly goals; these manipulation plans include grasping and regrasping operations if they are deemed necessary for successful completion of assembly. SPAR goes beyond classical AI planners in the sense that SPAR is capable of solving geometric goals associated with high-level symbolic goals. Consequently, if a high-level symbolic goal is on(A, B), SPAR can also entertain the geometric conditions associated with such a goal. Therefore, a simple goal such as on(A, B) may or may not be found to be feasible depending on the kinematic constraints implied by the associated geometric conditions. SPAR has available to it a user-defined repertoire of actions for solving goals and associated with each action is an uncertainty precondition that defines the maximum uncertainty in the world description that would guarantee the successful execution of that action. SPAR has been implemented as a nonlinear constraint posting planner.

@INPROCEEDINGS{125969, author={Hutchinson, S.A. and Kak, A.C.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Extending the classical AI planning paradigm to robotic assembly planning}, year={1990}, month={may.}, volume={}, number={}, pages={182 -189 vol.1}}

- A. C. Kak, S. A. Hutchinson, C. H. Chen, S. N. Gottschlich, and K. D. Smith,
Coordinated Use of Multiple Sensors in a Robotic Workcell, *Proc. of the NATO Advanced Research Workshop on Multisensor Fusion for Computer Vision*, Grenoble, France, July 1989.

- S. A. Hutchinson, R. L. Cromwell and A. C. Kak,
Applying Uncertainty Reasoning to Model Based Object Recognition, *Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition*, 1989, pp. 541-548.

[Abstract] [BibTex] [IEEEXplore]An architecture for reasoning with uncertainty about the identities of objects in a scene is described. The main components of this architecture create and assign credibility to object hypotheses based on feature-match, object, relational, and aspect consistencies. The Dempster-Shafer formalism is used for representing uncertainty, so these credibilities are expressed as belief functions which are combined using Dempster's combination rule to yield the system's aggregate belief in each object hypothesis. One of the principal objections to the use of Dempster's rule is that its worst-case time complexity is exponential in the size of the hypothesis set. The structure of the hypothesis sets developed by this system allow for a polynomial implementation of the combination rule. Experimental results affirm the effectiveness of the method in assessing the credibility of candidate object hypotheses.

@INPROCEEDINGS{37899, author={Hutchinson, S.A. and Cromwell, R.L. and Kak, A.C.}, journal={Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition}, title={Applying uncertainty reasoning to model based object recognition }, year={1989}, month={jun.}, volume={}, number={}, pages={541 -548}}
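The aggregate belief described in the abstract above comes from Dempster's rule of combination. A minimal sketch of the general rule for two mass functions over frozenset focal elements (the sensor readings and belief values are hypothetical, and this naive pairwise form does not include the polynomial-time structure the paper exploits):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions, renormalizing
    by the mass assigned to conflicting (disjoint) focal elements."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        w = wa * wb
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w
        else:
            conflict += w
    k = 1.0 - conflict  # normalization constant
    return {s: w / k for s, w in combined.items()}

# Two hypothetical evidence sources weighing object identities A and B.
m1 = {frozenset({'A'}): 0.6, frozenset({'A', 'B'}): 0.4}
m2 = {frozenset({'B'}): 0.3, frozenset({'A', 'B'}): 0.7}
m = dempster_combine(m1, m2)   # combined belief, summing to 1
```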

- S. A. Hutchinson and A. C. Kak,
A Task Planner for Simultaneous Fulfillment of Operational, Geometric and Uncertainty-Reduction Goals, *Proc. of the Workshop on Human-Machine Symbiotic Systems*, ORAU 89/C-140, Oak Ridge National Lab., 1988.

- A. C. Kak, S. A. Hutchinson and K. A. Andress,
Planning and Reasoning in Sensor Based Robotics, *Proc. of the IEEE Int'l Workshop on Intelligent Robots and Systems*, 1988, pp. 239-245.

[BibTex]

@INPROCEEDINGS{593283, author={Kak, A.C. and Hutchinson, S.A. and Andress, K.M.}, journal={Proc. of the IEEE Int'l Workshop on Intelligent Robots and Systems}, title={Planning and Reasoning in Sensor Based Robotics}, year={1988}, month={oct.}, volume={}, number={}, pages={239 -245}}

- S. A. Hutchinson and A. C. Kak,
Applying Uncertainty Reasoning to Planning Sensing Strategies in a Robot Work Cell with Multi-Sensor Capabilities, *Proc. of the IEEE Symposium on Intelligent Control*, Tokyo, Japan, 1988, pp. 129-134.

[Abstract] [BibTex] [IEEEXplore]An approach to planning sensing strategies dynamically on the basis of the system's current best information about the world is described. The approach is for the system to propose a sensing operation automatically and then to determine the maximum ambiguity which might remain in the world description if that sensing operation were applied. When this maximum ambiguity is sufficiently small, the corresponding sensing operation is applied. To do this, the system formulates object hypotheses and assesses its relative belief in those hypotheses to predict what features might be observed by a proposed sensing operation. Furthermore, since the number of sensing operations available to the system can be arbitrarily large, equivalent sensing operations are grouped together using a data structure that is based on the aspect graph. In order to measure the ambiguity in a set of hypotheses, the concept of entropy from information theory is applied. This allows the determination of ambiguity in a hypothesis set in terms of the number of hypotheses and the system's distribution of belief among those hypotheses.

@INPROCEEDINGS{65418, author={Hutchinson, S.A. and Kak, A.C.}, journal={ Proc. of the IEEE Symposium on Intelligent Control}, title={Applying uncertainty reasoning to planning sensing strategies in a robot workcell with multi-sensor capabilities}, year={1988}, month={aug.}, volume={}, number={}, pages={129 -134}}
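The ambiguity measure described in the abstract above is the Shannon entropy of the system's belief distribution over object hypotheses. A minimal sketch, with hypothetical belief values:

```python
import math

def hypothesis_ambiguity(beliefs):
    """Shannon entropy (in bits) of a belief distribution over object
    hypotheses: higher entropy means a more ambiguous hypothesis set."""
    return -sum(p * math.log2(p) for p in beliefs if p > 0)

# A proposed sensing operation is worth applying once the maximum
# remaining ambiguity is sufficiently small (values hypothetical).
before = [0.25, 0.25, 0.25, 0.25]   # four equally believable hypotheses
after = [0.85, 0.05, 0.05, 0.05]    # belief concentrated after sensing
```

As the example suggests, the measure accounts for both the number of hypotheses and how belief is distributed among them: four equally weighted hypotheses give 2 bits of ambiguity, while the concentrated distribution gives well under 1 bit.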

- S. A. Hutchinson, R. L. Cromwell and A. C. Kak,
Planning Sensing Strategies in a Robot Work Cell with Multi-Sensor Capabilities, *Proc. of the IEEE Int'l Conf. on Robotics and Automation*, Philadelphia, 1988, pp. 1068-1075.

[Abstract] [BibTex] [IEEEXplore]The authors present an approach to planning sensing strategies in a robot workcell with multisensor capabilities. The system first forms an initial set of object hypotheses by using one of the sensors. Subsequently, the system reasons over different possibilities for selecting the next sensing operation, this being done in a manner so as to maximally disambiguate the initial set of hypotheses. The "next sensing operation" is characterized by both the choice of the sensor and the viewpoint to be used. Aspect graph representation of objects plays a central role in the selection of the viewpoint, these representations being derived automatically by a solid modelling program.

@INPROCEEDINGS{12202, author={Hutchinson, S.A. and Cromwell, R.L. and Kak, A.C.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={Planning sensing strategies in a robot work cell with multi-sensor capabilities}, year={1988}, month={apr.}, volume={}, number={}, pages={1068 -1075 vol.2}}

- S. A. Hutchinson and A. C. Kak,
FProlog: A Language to Integrate Logic and Functional Programming for Automated Assembly, *Proc. of the IEEE Int'l Conf. on Robotics and Automation*, San Francisco, 1986, pp. 904-909.

[Abstract] [BibTex]In this paper, we present FProlog, a programming language designed to act as the top level in a robot assembly system. FProlog is a logic programming language, with the ability to interface with LISP. This allows the use of a logic programming environment to construct assembly plans, while using LISP programs to interface with vision systems, world modeling systems, robot manipulators, etc. FProlog differs from hybrid logic programming languages, such as LOGLISP, in that FProlog may invoke functional programs as goals, and functional programs may invoke FProlog's inference engine. Also, FProlog differs from traditional robot assembly languages, such as AUTOPASS, in its generality, and therefore its ability to interface with many different subsystems. As a demonstration of the applicability of FProlog, we also present an FProlog program which is used as the top level in a robot assembly system which performs a version of the blocks world experiment.

@INPROCEEDINGS{1087614, author={ Hutchinson, S. and Kak, A.}, journal={Proc. of the IEEE Int'l Conf. on Robotics and Automation}, title={FProlog: A language to integrate logic and functional programming for automated assembly}, year={1986}, month={apr.}, volume={3}, number={}, pages={ 904 - 909}}

# Technical Reports

- S. Hutchinson, G. Hager and P. Corke,
A Tutorial on Visual Servo Control, Yale University, Department of Computer Science, Research Report YALEU/DCS/RR-1068, March, 1995.

- R. Sharma, S. LaValle and S. Hutchinson,
Optimizing Robot Motion Strategies for Assembly with Stochastic Models of the Assembly Process, University of Illinois at Urbana-Champaign, Technical Report UIUC-BI-AI-RCV-94-11, 1994.

- S. LaValle and S. Hutchinson,
Multiple-robot Motion Planning Under Independent Objectives, University of Illinois at Urbana-Champaign, Technical Report UIUC-BI-AI-RCV-94-10, 1994.

- R. Sharma and S. Hutchinson,
Motion Perceptibility and its Application to Active Vision-Based Servo Control, University of Illinois at Urbana-Champaign, Technical Report UIUC-BI-AI-RCV-94-05, 1994.

- S. LaValle and S. Hutchinson,
A Bayesian Segmentation Methodology for Parametric Image Models, University of Illinois at Urbana-Champaign, Technical Report UIUC-BI-AI-RCV-93-06, 1993.

- M. Barbehenn and S. Hutchinson,
Efficient Search and Hierarchical Motion Planning By Dynamically Maintaining Single-Source Shortest Paths Trees, University of Illinois at Urbana-Champaign, Technical Report UIUC-BI-AI-RCV-93-04, 1993.

- S. LaValle and S. A. Hutchinson,
Image Segmentation Using a Bayesian Region Merging Probability, University of Illinois at Urbana-Champaign, Technical Report UIUC-BI-AI-RCV-93-02, 1993.

- A. Castano and S. A. Hutchinson,
Visual Compliance: Task-Directed Visual Servo Control, University of Illinois at Urbana-Champaign, Technical Report UIUC-BI-AI-RCV-93-01, 1993.

- A. Fox and S. A. Hutchinson,
Exploiting Visual Constraints in the Synthesis of Uncertainty-tolerant Motion Plans, University of Illinois at Urbana-Champaign, Technical Report UIUC-BI-AI-RCV-92-05, 1992.

- S. A. Hutchinson and A. C. Kak,
A Task Planner for Simultaneous Fulfillment of Operational, Geometric and Uncertainty-Reduction Goals, Purdue University Technical Report TR 88-46, 1988.