ROBOTIC FUTURE: The Robotics and Intelligent Systems Seminar Series
Unless stated otherwise, all lectures will be from 13.30 - 14.30. The location of the seminars varies, so see each talk for the location.
Friday 20 November 2009. *Rolle 115* Dr. Jonathan Ginzburg. Senior Lecturer in the Department of Computer Science, King's College London
Title : Integrating Semantics with Interaction - Hosted by Angelo Cangelosi
Abstract : There is a long and highly fruitful tradition of doing semantics going back to Frege. In a sentence, this is *semantics as characterizing successful communication*. Since it abstracts away from individual differences and from the communicative process, one might dub it Communitarian Semantics. As I will discuss in this talk, a conversation-oriented semantics cannot restrict itself in this way. As emphasized by Conversation Analysis and much subsequent psycholinguistic work, metacommunicative acts such as acknowledgements of understanding and clarification requests are coherent and ubiquitous. Moreover, there is evidence from computational simulations (see e.g. Macura, 2007) that a metacommunicative component in interaction is not some incidental add-on but rather plays a vital role in keeping a language from irretrievably diverging, Tower of Babel style, across its speakers. Thus, a concrete task for a conversation-oriented theory of meaning is
to explicate the potential for and range of clarification requests that can occur following uses of a given utterance type: (1) A: Did Bo kowtow? B: Bo? / Your cousin? / Did WHO kowtow? / Bo Jackson or Bo Diddley? / Kowtow? / What do you mean 'kowtow'? / Why? In order to do this---and a variety of new semantic tasks involving conversation---I will argue that Communitarian Semantics needs to be
supplanted by the far more general *Interactive Stance*. The Interactive Stance involves integrating the communicative process within semantics and places importance on explicating the potential for misunderstanding, rejection, and correction, as well as success. I will sketch a theory of context dynamics which allows the Interactive Stance to be implemented.
As a final application, I will turn to consider interaction between unequals, most prototypically between adult caregivers and children. In such interaction, modelling even simple exchanges requires
a more complex contextual dynamics---I will offer some possible benchmarks guiding the development of such a theory.
Friday 04 December 2009. *Cookworthy 505* Dr. Torbjorn Dahl. Senior Lecturer in the Department of Computing, and Coordinator of the Robotic Intelligence Lab at the University of Wales
Title : TBA - Hosted by Guido Bugmann
Friday 13 November 2009 *Rolle 115*. Dr. Bill Bigge. Associate Tutor in the Department of Informatics, University of Sussex
Title : May the Force Be with You - Torque Control and Mechanical Emulators for Cheap Robots - Hosted by Anthony Morse
Abstract: Traditional motion control is dominated by the paradigm of high-stiffness position and angle control, but more modern approaches to robotic actuation have highlighted the benefits of compliance, and the need to control force, rather than just position, in the joints of autonomous robotic systems. I will talk about my particular approach to compliant actuator design, which has concentrated on developing a general-purpose actuator system that uses active force control and can be programmed in software to emulate a variety of mechanical spring-damper systems. Rather than
develop an expensive high fidelity system I have concentrated on creating a low cost compact design as a direct replacement for the ubiquitous 'hobby servo', which currently forms the basis for most
robots built by the hobbyist, and by researchers working on a budget.
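The spring-damper emulation idea at the heart of this approach can be sketched in a few lines. The following is an illustrative sketch, not Dr Bigge's actual controller: a force-controlled actuator measures joint angle and velocity, then commands the torque that a mechanical spring and damper would have produced, with the stiffness and damping chosen in software.

```python
def virtual_spring_damper(theta, theta_dot, theta_ref, k=2.0, b=0.5):
    """Torque for a virtual spring (stiffness k) and damper (coefficient b).

    theta, theta_dot: measured joint angle (rad) and velocity (rad/s)
    theta_ref: desired equilibrium angle (rad)
    Returns the torque (N*m) the force-controlled actuator should apply.
    (k and b here are arbitrary example values.)
    """
    return -k * (theta - theta_ref) - b * theta_dot

# Changing k and b in software re-tunes the emulated mechanics: a low k gives
# a soft, compliant joint; a higher b adds damping without extra hardware.
```

Because k and b are just parameters, one actuator design can emulate a whole family of passive mechanisms, which is what distinguishes this scheme from fixed-stiffness position control.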
Friday 30 October 2009. *RL-302* Prof. Jian Dai. Head of the Centre for Mechatronics and Manufacturing Systems at King's College London
Title : Novel Mechanisms and Intuitive Robotic Manipulation - Hosted by Guido Bugmann and Peter Gibbons
Abstract : The presentation introduces novel mechanisms developed at King's which lead to new robotic structures, including the metamorphic hand and the ankle rehabilitation robot. The origami-inspired mechanism development admits new mechanisms based on reconfiguration and adaptation. The second part of the presentation introduces the robotic manipulation developed at King's for origami folding, fabric handling, automatic ironing, and packaging. The remit covers both domestic and industrial applications and leads to fine robotic manipulation.
Tuesday 27 October 2009. *PSQ Devonport from 11.00 - 12.00* Dr. Steven Greenberg. Steven Greenberg is a scientist whose research focuses on the perception and modeling of spoken language processing for speech technology applications. He is currently President of Silicon Speech, a research company based in Santa Venetia, California and also serves as Visiting Professor at the Centre for Applied Hearing Research at the Technical University of Denmark. In the past, he has worked in the Department of Neurophysiology, University of Wisconsin-Madison, the Department of Linguistics, University of California, Berkeley, and the International Computer Science Institute (Berkeley, CA).
Title : Time Perspective in Spoken Language - hosted by Sue Denham
Friday 23 October 2009. *Rolle Building Room 115* Dr Manuel Lopes. Manuel Lopes did his PhD in Electrical and Computer Engineering on the topic of Cognitive Robotics. He participated in several European research projects on artificial development, robot learning and cognitive robotics, and has organized several activities on Robot Learning. Currently he is Lecturer in Humanoid Robotics and Intelligent Systems at the University of Plymouth.
Title: Developmental Robotics
Abstract: This talk will present a developmental perspective on cognitive robotics. The robot starts with little knowledge and, as the complexity of its physical and social environment is carefully increased, acquires progressively more complex skills. At the top of these skills is learning by imitation, with a model based on research from experimental psychology. Several techniques will be shown, including supervised learning, reinforcement learning, inverse reinforcement learning, Bayesian methods and active learning.
Wednesday 27 May 2009. Prof. Chyi-Yeu Jerry Lin. National Taiwan University of Science and Technology. Development of Autonomous Intelligent Robots: Multifunctional Entertaining Robots, Robot Theatre, and Next Generation Service Robots.
The first part of his talk will introduce a number of autonomous intelligent robots developed in Taiwan Tech (National Taiwan University of Science and Technology), which include three desktop multifunctional companion robots DOC-1/DOC-2/DOC-3, and a unique theatrical robot team comprising two adult-size biped androids and two dual-wheeled humanoid robots. These robots can perform various autonomous functions in educational and entertaining applications. The second part of the talk will present the fundamental scheme of an ongoing research project aiming to create highly practical autonomous service robots for new generations.
Wed 6 May 2009. Dr Boris Vladimirskiy, University of Bern, Switzerland. Unsupervised learning of natural stimulus statistics and hierarchical novelty-familiarity representation in the visual cortex, with a detour into why intrinsic noise is not a prerequisite for fast reinforcement learning
Information processing in the visual system can be viewed as driven by the statistical structure in natural stimuli due to evolutionary adaptation processes. Predictive coding, in which population feedback from higher areas carries expectations of lower-level activity (familiarity signal), whereas the population feedforward (novelty) signals carry discrepancies between the expectations and the stimuli, has been suggested to serve as an organizing principle for the entire cortical hierarchy (Lee and Mumford, 2003; Friston, 2005). Modeling results (Olshausen and Field, 1996, 1997; Rao and Ballard, 1999) have shown that the statistics of natural images can explain some receptive field (RF) properties, but no attempt had been made at coding entire images. Furthermore, the proposed connectivity appeared too slow to be biologically compatible, and unrealistic RFs with only 3 small patches of 5 natural images were used. We investigate how good predictive coding actually is at coding entire natural stimuli, using a natural topographic connectivity and a hierarchy of processing levels, each effectively performing fast visual recognition. Our neural network model is based on the principle of prediction error minimization, natural to expect of an organism in order for it to survive, and is neurobiologically feasible.
The network is trained on 1000 natural images, following which the coding performance is evaluated on a set of 200 different images. Despite a compression factor of 4 for each level, the image reconstruction quality is quite good and strongly exceeds that of local averaging, implying that the learning results in the extraction of features characteristic of the set of natural images as a whole. We analytically show that this unsupervised learning is equivalent to generalized principal component analysis carried out, importantly, in a biologically reasonable way for the first time. With our model, we are able to reproduce several classical and extra-classical receptive-field effects in V1. Finally, the proposed architecture allows for the simultaneous representation of familiarity and novelty (e.g., a predator suddenly appearing in a familiar scene) and could be an effective way for the visual system to combine fast hierarchical visual recognition with higher information processing, such as providing a read-out signal for attention, in the brain.
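The predictive-coding loop described above can be illustrated with a deliberately minimal sketch (my own reduction for intuition, not the speaker's network): one level keeps a compressed code r of the input x, feeds back its prediction W @ r as the familiarity signal, and passes the residual x - W @ r forward as the novelty signal. The Hebbian-style update below is Oja's subspace rule, which is the sense in which this kind of unsupervised learning relates to (generalized) principal component analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_code = 16, 4                       # compression factor of 4, as in the talk
A = 0.5 * rng.normal(size=(n_in, n_code))  # hidden 4-dim structure in the "stimuli"
W = 0.1 * rng.normal(size=(n_in, n_code))  # learned generative weights

def step(x, W, lr=0.01):
    r = W.T @ x                    # feedforward: compressed code
    x_hat = W @ r                  # feedback: prediction (familiarity signal)
    err = x - x_hat                # feedforward residual (novelty signal)
    W = W + lr * np.outer(err, r)  # error-driven Hebbian update (Oja subspace rule)
    return err, W

err_norms = []
for _ in range(3000):
    x = A @ rng.normal(size=n_code)        # stimulus drawn from the subspace
    err, W = step(x, W)
    err_norms.append(np.linalg.norm(err))
```

After training, the prediction error on stimuli from the learned subspace shrinks toward zero: the "familiar" part of the input is explained away by feedback, and only genuinely novel components would survive in the feedforward signal.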
In the last five minutes of my talk, I will very briefly describe the main result of a previous project. The fundamental problem of reinforcement learning is how to make a useful interpretation of a single global, non-specific feedback (reward) signal at each local synapse in a large network so as to improve the performance of the whole network. We show that, contrary to the previously accepted opinion, intrinsic noise sources are not necessary, and can even be harmful, for effective exploration during reinforcement learning. Instead, we demonstrate that stimulus sampling, resulting from a random stimulus choice out of a (large) stimulus set, is perfectly adequate to produce a level of performance in a simple abstract neural network that matches that of a monkey on a four-choice visuomotor association task.
In contrast, two well-established classes of intrinsic-noise based algorithms turn out to be too slow. Additionally, our approach possesses the important advantage of simpler and inherently faster network architecture, without any need to rely on additional noise sources whose biological realism is often questionable.
Friday 20 March 2009. Roger Moore, University of Sheffield. Spoken Language Processing: Where Do We Go From Here?
Recent years have seen steady improvements in the quality and performance of speech-based human-machine interaction driven by a significant convergence in the methods and techniques employed. However, the quantity of training data required to improve state-of-the-art systems seems to be growing exponentially, yet performance appears to be reaching an asymptote that is not only well short of human performance, but which may also be inadequate for many real-world applications. This suggests that there may be a fundamental flaw in the underlying architecture of contemporary systems, and the future direction for research into spoken language processing is currently uncertain. This talk addresses these issues by stepping outside the usual domains of speech science and technology, and instead draws inspiration from recent findings in the neurobiology of living systems. In particular, four areas will be discussed: the growing evidence for an intimate relationship between sensor and motor behaviour in living organisms, the power of negative feedback control to accommodate unpredictable disturbances in real-world environments, mechanisms for imitation and mental imagery for learning and modelling, and hierarchical models of temporal memory for predicting future behaviour and anticipating the outcome of events. The talk will conclude by showing how these results point towards a novel architecture for speech-based human-machine interaction that blurs the distinction between the core components of a traditional spoken language dialogue system; an architecture in which cooperative and communicative behaviour emerges as a by-product of a model of interaction where the system has in mind the needs and intentions of a user, and a user has in mind the needs and intentions of the system.
Friday 13 March 2009. Dr Takashi Hashimoto, Japan Advanced Institute of Science and Technology. Grammaticalisation, language evolution and creativity.
A possible mechanism of language evolution is grammaticalisation, a kind of language change in which content words acquire grammatical functions. The characteristics of grammaticalisation are the unidirectionality of changes and the universality of the changing patterns across languages. A mechanism for the unidirectional meaning change in grammaticalisation is studied by constructing and operating an abstract cognitive model of the language acquisition process. It was shown that two cognitive biases, pragmatic extension and co-occurrence, were critical to realizing the unidirectionality. We also found that the ability of linguistic analogy, which is to apply acquired grammatical rules extensively to other language knowledge, was important for language to be acquired and to change. We will discuss the relationship among linguistic analogy, displacement and creativity by considering evidence from human evolution and archaeology, and will suggest a hypothesis on the origin and evolution of language.
Friday 20 Feb 2009, Dr Michael Meredith, University of Sheffield. Mathematical Mechanics of Motion
This seminar will look at how articulated structures are postured and animated using a range of approaches from kinematics to dynamics. We will also explore how individualised postures can be achieved and thus similar motions can be portrayed differently using the range of techniques. While this research work is grounded in computer graphics, many of the underlying techniques have been borrowed from the field of robotics and, hopefully, because of the recent explosion in computer gaming, research in this area can offer something back. Humanoid computer character animation and display techniques will also be discussed briefly during this presentation.
Friday 13 February 2009. Darren Cosker, University of Bath.
The modelling and animation of faces is an area of intense research, and facial models, developed by computer graphics and vision researchers, have a wide range of applications.
Arguably the most publicly visible result of facial research appears in movies and video games. However, facial models also have a major role to play in psychology and neuroscience research - typically for testing hypotheses regarding how humans process different static and dynamic facial expressions and performances. This latter research can teach computer graphics and vision researchers some powerful lessons. For instance, just a small change in expression dynamics in a facial display can alter an observer's opinion of a person or their decision-making process. It therefore seems clear that in order to reach 'human realism' in animation we also have to understand how faces are perceived. Under this theme, I will discuss the modelling of faces from 2D images and dynamic 3D facial data (i.e. 3D captured at 60 frames per second), the parameterisation of such data for the creation of facial animations, and their application to computer vision tasks and psychology research. I will also discuss the generation of animations automatically from speech, as well as perceptual methods to assess the realism of such animations in comparison to real speakers. Finally, I will report on some recent collaborative studies that highlight the power of manipulating photorealistic facial expressions to create different overall impressions of a person and influence decision-making.
Friday 6 February 2009. Sandor Veres, University of Southampton. Engineering the Behaviour of Autonomous Vehicles.
The Autonomous Vehicle Control Systems Lab at Southampton has a series of projects on AUVs, UAVs, spacecraft and autonomous ground vehicles. The talk will review some of the common architectural features for sensing, data fusion, planning, decision making and execution of missions. Other aspects that the talk will examine are formal verification and reconfigurability of autonomous vehicles.
Friday 30 January 2009, Martin Woolner, INNOVATE, University of Plymouth.
The seminar will explore the potential of 3D scanning, virtual object manipulation, rapid prototyping and the production of multiples as applied across a broad range of industries.
Friday 12 December 2008, Phil Culverhouse, University of Plymouth. Automated plankton identification.
Phil has been researching methods for computer-based visual identification of marine plankton for many years. He will explore some of the methods available and the problems in identification that arise.
Friday 31 October 2008, Martin Peniak, University of Plymouth. Autonomous Robot Exploration of Unknown Terrain: A Model of a Mars Rover Robot.
In the last few years, evolutionary robotics has challenged more traditional approaches by focusing on control systems inspired by natural evolution, exploiting active perception, sensory-motor coordination and embodiment. Too often these studies are based on a simple robot, such as the Khepera, and are tested in a well-structured environment. We believe that a new challenge for evolutionary robotics techniques can be interplanetary exploration, which demands robust and intelligent autonomous robot control. The rovers Spirit and Opportunity, for example, currently exploring the Martian surface, are capable of autonomous navigation with hazard avoidance using stereo cameras. However, the rovers have no other way to avoid obstacles in case of camera failure. Navigation based on infrared sensors could be used as a possible back-up solution for such a failure.
We present a new simulation model of a Mars rover robot based on infrared sensors. This work has the objective of investigating the possibility of using an alternative obstacle avoidance system for future rovers capable of performing autonomous tasks in challenging planetary terrain. The simulation model of the robot and of the Mars terrain is based on the physics engine Open Dynamics Engine. The rover model consists of six wheels attached to a rocker-bogie suspension system allowing independent movement of different parts of the robot. The robot has eighteen sensors attached at two different height levels to detect low and high obstacles as well as steep slopes and holes. The robot's control system consists of an artificial neural network trained using evolutionary computation techniques. Results show that the robot is able to avoid rocks, holes and steep slopes based purely on the information provided by the infrared sensors.
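The control scheme described above, a neural network mapping infrared readings to motor commands and tuned by evolutionary computation, can be sketched as a toy (this is an illustration, not the authors' code; the real work evaluated controllers in an Open Dynamics Engine simulation, whereas the stand-in fitness below simply rewards forward drive while penalising it near hypothetical "front" obstacles).

```python
import numpy as np

rng = np.random.default_rng(1)
N_SENSORS, N_MOTORS = 18, 2                 # 18 infrared sensors, 2 drive commands
SAMPLES = rng.random((20, N_SENSORS))       # fixed batch of sensor readings in [0, 1]

def controller(weights, sensors):
    """Feedforward net: weights has shape (N_MOTORS, N_SENSORS + 1), last column a bias."""
    x = np.append(sensors, 1.0)
    return np.tanh(weights @ x)             # motor commands bounded in (-1, 1)

def fitness(weights):
    """Toy objective: mean forward drive, penalised when 'front' sensors read high."""
    score = 0.0
    for s in SAMPLES:
        m = controller(weights, s)
        obstacle = s[:3].mean()             # pretend sensors 0-2 face forward
        score += m.mean() * (1.0 - obstacle)
    return score / len(SAMPLES)

best = rng.normal(scale=0.1, size=(N_MOTORS, N_SENSORS + 1))
init_fit = best_fit = fitness(best)
for _ in range(200):                        # (1+1)-ES: mutate, keep only improvements
    child = best + rng.normal(scale=0.05, size=best.shape)
    f = fitness(child)
    if f > best_fit:
        best, best_fit = child, f
```

A population-based genetic algorithm, as typically used in evolutionary robotics, follows the same mutate-evaluate-select pattern; the (1+1) strategy here is just its smallest instance.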
Our current research is focused on using a similar but more accurate physical rover model equipped with an active vision system consisting of a pan/tilt CCD camera controlled by a neural network. It has been shown that complex visual tasks, such as shape recognition and spatial navigation, can be tackled with simple architectures generated by a co-evolutionary process of active vision and feature selection. Our aim is to take this research further and investigate the possibility of exploiting the active perception paradigm to develop more autonomous and intelligent control systems that could be used for future planetary robotics missions.
Friday 14 November 2008, Joerg Wolf. The Secrets of Robot Football
The University of Plymouth has a track record of developing robot football systems over the last 11 years. By reviewing this development, we may be able to predict what future robotic football players will look like and how this will influence the future of robotics research. This seminar will show in detail how robot football systems work. How are these little robots detected by the camera? How do you control a robot's position? Over the years, many spin-off student projects have been completed using robot football algorithms and circuits. There may be something useful in them for your project.
Friday 17 October 2008. Ehsan Honary. The Hunt for Aliens: Where no Robot Has Gone Before.
Are we alone out there? How can we find out if there are non-terrestrial life forms in the universe, intelligent or not? What are the challenges facing us when we want to use remote devices to collect more information in hostile environments? How far do we need to go, and at what cost? This talk focuses on many aspects of robotics, intelligence and life. It explores the challenges we face in a variety of environments, such as space, air or even underwater, and reports on some of the latest state-of-the-art projects and studies carried out to extend our understanding of the physical world.
Friday 29 February 2008, Dr Mark Norman, Merlin Systems Corp. Adventures with Robot Snakes
Dr Mark Norman, Chief Executive of Merlin Systems Corp. Ltd, will describe how Merlin Robotics has created a wide range of innovative robotics products and technologies, including mobile robot platforms, vision systems and compliant muscles, and applied them in real-world environments over the last 9 years. The most recent robotics adventure is the creation of a 2m tall upright robot snake constructed from 28 Merlin Artificial Muscles and 9 articulated segments. Merlin Systems Corp. Ltd has been researching and creating new robot technologies to meet the predicted emerging service robotics market since 1998, but in that time the service robotics market has only emerged a little! Market research year on year predicts massive growth for a new generation of service robots, and everyone knows that robots will one day provide for our every need. So why has it not yet happened, and what is getting in the way? Surely, if we can create Asimo or P3, the 'robot servant' must be very close? Or perhaps there is a deeper problem; maybe the technology is not the major issue...
Paul Robinson, University of Plymouth. Robot football - a goal in itself? Friday 7 December 2007.
Tony Pipe, University of the West of England. I, Service Robot: The rise of the machines?
Friday 23 November 2007.
Will Jackson, Engineered Arts Ltd. Friday 9 November 2007.
Paul Newman, University of Oxford. Using Scene Appearance for Loop Closing in SLAM. Friday 12 October 2007.
Larry Bull, University of the West of England. Towards artificial creativity: open-ended search in interactive evolution. Friday 2 March 2007.
Tony J. Prescott, University of Sheffield. Computational Ethology of an Active Sense System: Biological and Robotic Investigations of Tactile Perception in the Rat. 16 February 2007.
Marián Beszédeš, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology. Normalisation for face recognition using Active Appearance Models. 9 February 2007. Powerpoint slides (4Mb).
Antonella di Angeli, University of Manchester. Disinhibition in virtual partners. 2 February 2007.
Tony Belpaeme, University of Plymouth. Epigenetic robotics: what can we learn from children to make smart robots? 8 December 2006.
Owen Holland, University of Essex. Could we build a conscious robot? 1 December 2006.
Michael Punt, University of Plymouth. 10 November 2006. Cinema, space and robotics: the value of an imaginative dimension.
Will Browne, University of Reading. 3 November 2006. Emotional robots: paranoid androids or intelligent agents?
Guido Bugmann, University of Plymouth. 20 October 2006.
Phil Culverhouse, University of Plymouth. 6 October 2006. (abstract)
Chris Melhuish, University of the West of England, Bristol, UK. Friday 19 May 2006.
Alan Bunkum, Loughborough University. 9 December 2005 (abstract)
Yiannis Demiris, Imperial College London. 25 November 2005.
Joanna Bryson, University of Bath, UK. Friday 28 April 2005.
Giulio Sandini, University Genoa, Italy. Friday 24 March 2005.
Mon 12 Oct 2009. Dr Mark Norman, Merlin Systems Corp. Ltd. Adventures with Robot Snakes (see the abstract for the 29 February 2008 talk above).