Research in Artificial Vision:

at the Robotic Intelligence Laboratory, Centre for Robotics and Intelligent Systems, School of Computing, Communications and Electronics, University of Plymouth
Dr. Guido Bugmann, tel. +44 (0)1752 23 25 66

Research in Artificial Vision is conducted with domestic robots in mind.

These need vision for recognizing objects that they may be asked to retrieve.

Vision is also needed for recognizing the spatial layout during navigation.

 Spatial Vision
  • Template-Based Landmark Recognition for Urban Navigation.  
  • Predictive vision-based Control.  
  • Artificial vision for Micromouse.  
  • Visual indoor layout detection for autonomous vehicles.
  • Visual outdoor layout detection for autonomous vehicles.

 Object Vision:

  • Object-in-Frame recognition.
  • Invariant Object recognition.
  • Facial feature detection for PC control.
  • Chess board localization and tracking of the movement of pieces.
  • Visual control of robot grasping.
  • Visual recognition of piece positions on a game board.
  • Visual measurement of ball bearing sizes.
  • Robot Football control system.

 Spatial Vision

  • Template-Based Landmark Recognition for Urban Navigation.

    This is part of a PhD pursued by Theocharis Kyriacou, aimed at developing vision-based navigation procedures for verbally instructed robots. For instance, a robot told to "take the first right" would search the visual scene for a right turn only. This makes visual scene analysis task-specific and, in principle, much simpler. The steps in the process are: 1. Produce a top view of the scene (A) using projective geometry (not shown). 2. Apply a color filter to extract navigable areas (B). 3. Attempt to match the "right turn" template to the filtered image (C). See details in publications of the IBL project.
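The template-matching step (3) can be sketched as a brute-force binary correlation. The minimal NumPy illustration below is an assumption for clarity: the toy map, the "right turn" template, and the scoring rule are invented for this sketch and are not the project's actual code.

```python
import numpy as np

def match_template(binary_map, template):
    """Slide a binary 'right turn' template over a binary navigability
    map and return the offset with the highest agreement score."""
    H, W = binary_map.shape
    h, w = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = binary_map[y:y + h, x:x + w]
            # Score: fraction of pixels where template and map agree.
            score = np.mean(patch == template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Toy navigability map (1 = navigable): a corridor along column 3
# with a branch to the right starting at row 2.
nav = np.zeros((8, 8), dtype=int)
nav[:, 3] = 1
nav[2, 3:8] = 1

# Toy "right turn" template: vertical path meeting a rightward branch.
tmpl = np.zeros((3, 3), dtype=int)
tmpl[:, 0] = 1
tmpl[0, :] = 1

pos, score = match_template(nav, tmpl)   # best match at the junction
```

A real system would match over position and scale in the top-view image; the exhaustive scan here only shows the principle.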
  • Predictive vision-based Control.

    This is a PhD project pursued by Kheng Koay Lee. He works with a robot that uses vision to map its environment, recognize and localize obstacles, then plan a route towards a goal, e.g. the power supply. The problem is that slow overall processing leads to motion interrupted by stops. The challenge is therefore to compute while the robot is in motion, and a new predictive technique has been developed for this purpose.
    More details.
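One way to compute while the robot is in motion is to plan from a predicted pose rather than the current one. The dead-reckoning sketch below is only an illustration of that idea under stated assumptions (constant speed and heading, known planning duration); the source does not describe the developed technique in this detail, and all numbers are invented.

```python
import math

def predict_pose(x, y, heading, speed, planning_time):
    """Predict where a robot moving at constant speed and heading will
    be once a planning computation of known duration has finished."""
    return (x + speed * planning_time * math.cos(heading),
            y + speed * planning_time * math.sin(heading),
            heading)

# Robot at the origin, heading along +x at 0.5 m/s; planning takes 2 s.
px, py, ph = predict_pose(0.0, 0.0, 0.0, 0.5, 2.0)
# Plan the route from (px, py), so it is valid when computation ends,
# instead of stopping the robot while it thinks.
```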

  • Artificial vision for Micromouse.

    This is pursued as a series of student projects. An effective algorithm for self-localization in a maze, on the basis of visually detected floor-wall edges, was developed by Vincent Onillon (see internal report #55). Now the algorithm is being converted for on-board computing with a 68000-based microcontroller, which is quite a challenge.

  • Visual indoor layout detection for autonomous vehicles.

    This project is aimed at detecting free floor space in a hospital environment, to enable navigation of an autonomous wheelchair. For that purpose, color and texture vision is used. This project is handled as a sequence of MSc projects. The two images on the right show the kind of problems that are to be solved.
  • Visual outdoor layout detection for autonomous vehicles.

    This project is aimed at outdoor navigation for an autonomous wheelchair. The plan is to segment navigable space using color and texture criteria, then to fit the edges with "snakes", which will then be used for further processing. This is a current MSc project. (More on autonomous wheelchair)
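The first stage, segmenting navigable space by color, can be sketched as a per-pixel classifier; the edge of the resulting mask is what a snake (active contour) would then be fitted to. The reference color and tolerance below are illustrative assumptions, not values from the project.

```python
import numpy as np

def segment_ground(rgb, ref_color, tol=30.0):
    """Label pixels whose color lies within a Euclidean tolerance of a
    reference 'ground' color, giving a binary navigability mask."""
    dist = np.linalg.norm(rgb.astype(float) - ref_color, axis=-1)
    return dist < tol

# Toy 2x2 image: two grey "path" pixels, one green, one blue.
img = np.array([[[120, 120, 120], [125, 118, 122]],
                [[40, 180, 40], [30, 60, 200]]], dtype=np.uint8)
mask = segment_ground(img, ref_color=np.array([120.0, 120.0, 120.0]))
```

A deployed segmenter would also use texture statistics and an adaptive color model, since a fixed threshold fails under changing outdoor illumination.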
 Object Vision:

  • Object-in-Frame recognition.

    In this project, the aim was to build a representation of space in the form of a sequence of views and gaze saccades. Views were rough images taken by a camera on board a small robot. Some of the views contained objects. To simplify their recognition, the objects were surrounded by a frame, and an image processing algorithm was designed to i) localize one or several frames in an image, then ii) copy the content of the frame into the input of an RBF neural network (see paper #72).
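Stage (ii) feeds the frame content to an RBF network. A minimal NumPy sketch of such a network follows; the centres, weights, and two-object setup are invented for illustration and do not reproduce the system of paper #72.

```python
import numpy as np

def rbf_outputs(x, centres, weights, sigma=1.0):
    """Radial basis function network: Gaussian activations around each
    stored centre, linearly combined by an output weight matrix."""
    d2 = np.sum((centres - x) ** 2, axis=1)    # squared distances to centres
    phi = np.exp(-d2 / (2.0 * sigma ** 2))     # hidden-layer activations
    return weights.T @ phi                     # linear output layer

# Two stored object views act as centres; identity weights make each
# output unit vote for "its" object.
centres = np.array([[0.0, 0.0], [1.0, 1.0]])
weights = np.eye(2)
scores = rbf_outputs(np.array([0.1, 0.0]), centres, weights)
recognised = int(np.argmax(scores))   # nearest centre wins: object 0
```

In the real system the input vector would be the pixels copied from inside the detected frame, and the centres would be stored training views.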
  • Invariant Object recognition.

    The problem here is to detect an object in a natural, cluttered scene. Many existing methods have been reviewed, and evidence-based systems, such as SEEMORE by Bartlett Mel, appear to be the best candidates. These methods suffer from long computation times due to the large numbers of filters being used. Current work is aimed at defining a small set of problem-specific filters that may reduce the computation time.
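The evidence-based idea can be sketched as filters voting for object hypotheses: each filter response contributes weighted evidence, and keeping the filter bank small and problem-specific keeps the cost down. The filters, weights, and responses below are illustrative assumptions.

```python
import numpy as np

def accumulate_evidence(responses, evidence_weights):
    """Each filter response adds weighted evidence to each object
    hypothesis; the hypothesis with the most evidence wins."""
    return responses @ evidence_weights

# 3 problem-specific filters, 2 object hypotheses.
# Row i: how strongly filter i supports each object.
W = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.7, 0.3]])

responses = np.array([1.0, 0.1, 0.8])   # filter outputs for one scene
evidence = accumulate_evidence(responses, W)
best = int(np.argmax(evidence))
```

The computation scales with the number of filters, which is why a small, task-tuned bank is attractive compared with the large general-purpose banks of systems like SEEMORE.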
  • Facial feature detection for PC control.

    This project is pursued as a series of student projects. Initial work by Javier Gonzalez Bernardo (Video camera remote control over the internet) led to the design of software that could:
    i) process the image from a camera looking at the user of a PC and determine where his head was turned;
    ii) send a command to another machine through the internet, rotate a camera attached to that PC, then send the image seen by that camera back to the first PC.
    Current work is aimed at refining the set of facial features usable for PC control.
  • Chess board localization and tracking of the movement of pieces.

    This was a project by Scott Blunsden, as part of a longer-term aim of developing a chess-playing robot. A surprisingly difficult problem was locating the board and its cells. We developed a filter for intersections of cells and a new type of adaptive shape based on SOM (self-organizing map) neural network principles.

    White Knight moves from 1B to 3A; correct identification of the move.
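The SOM idea, a lattice of nodes that adapts towards detected cell intersections, can be sketched with the standard Kohonen update rule. The one-dimensional chain, learning rate, and toy data below are illustrative and do not reproduce Blunsden's detector.

```python
import numpy as np

def som_step(nodes, point, lr=0.5, radius=1):
    """One Kohonen update: find the node nearest to a detected
    intersection and pull it (and its chain neighbours) towards it."""
    d = np.linalg.norm(nodes - point, axis=1)
    win = int(np.argmin(d))
    for i in range(len(nodes)):
        if abs(i - win) <= radius:   # neighbourhood on the chain
            nodes[i] += lr * (point - nodes[i])
    return nodes

# A 3-node chain adapting to an intersection detected at (0.5, 0).
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
nodes = som_step(nodes, np.array([0.5, 0.0]))
```

Repeating such updates over all detected intersections lets a 2-D node grid settle onto the board's cell corners even when the board is tilted or partially occluded.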

  • Visual control of robot grasping.

    This was a project by Derrick Tapscott, using a QUICKCAM attached to the end of a robot arm to detect an object on a dark background. It was a very effective system, and most observers did not notice the camera, which was frustrating for the student.
  • Visual measurement of ball bearing sizes.

    This was a student project that ran into problems with non-square pixels, optical aberrations and illumination-dependent apparent sizes.

  • Visual recognition of piece positions on a game board.

    This was a student project aimed at programming a robot to play noughts and crosses. The vision problem was to recognize the pieces and determine their positions, in order to i) calculate the next move and ii) provide visual feedback for the robot gripper. The main problems were dealing with optical aberrations and lighting conditions.

  • Robot Football control system.

    This is a challenging student project, in which images from the pitch must be processed at frame rate in order to determine the position and orientation of all 6 robots and the ball. Recognition rates of 100% are currently achieved. Current work aims at: 1) increasing the processing speed; 2) handling the problem of changing hue when natural light is used (blue sky, then clouds...).
    The image on the right shows the detection of identification color disks on the robots.
    More on robot football.
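Detecting the identification disks can be sketched as thresholding the hue channel around each disk's color and taking the centroid of matching pixels. The tiny hue image and tolerance below are illustrative assumptions, not the actual pitch data.

```python
import numpy as np

def disk_centroid(hue_img, target_hue, tol=10):
    """Locate an identification disk: threshold the hue channel around
    the disk's color and return the centroid of matching pixels."""
    match = np.abs(hue_img.astype(int) - target_hue) <= tol
    ys, xs = np.nonzero(match)
    if len(ys) == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))

# Toy 5x5 hue image: green background (hue 90) with a small "red"
# (hue 0) disk centred near row 2, column 3.
hue = np.full((5, 5), 90, dtype=np.uint8)
hue[2, 2:5] = 0
hue[1, 3] = hue[3, 3] = 0

centre = disk_centroid(hue, target_hue=0)
```

Orientation would follow from the relative positions of two differently colored disks on the same robot; the hue drift under natural light mentioned above is exactly what breaks a fixed `target_hue`.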
 Links:

    The British Machine Vision Association
    Pattern Recognition on the Web
    The Computer Vision Home Page
    Bayesian Decision Theory with Gaussian Distributions
    CV-Online: The On-Line Compendium of Computer Vision

    Back to Home Page

    Guido Bugmann, February 2006