Robotics and Robot Learning
MULTI-AGENT SYSTEMS
A multi-agent system comprises multiple intelligent entities that interact with one another. These entities, or agents, are autonomous, but the decisions and the outcomes of an agent's actions are influenced by the actions and decisions of the others. Usually the agents cooperate to attain a common goal (say, mapping), but there are also competitive multi-agent systems in which each agent acts individually to maximize its own gain. It is also interesting and useful to model how the learning of one agent can be transferred to another, and how the other agents adapt to and use it. A large number of problems can be tackled effectively by multi-agent systems, and this has spurred research in the area. In general, large problems that can be split into smaller sub-tasks and solved in a distributed fashion are the target applications of multi-agent research.
ROBOT LEARNING AND TRANSFER LEARNING
In transfer learning we aim to solve new problems with fewer examples by using information gained from solving related problems. It is motivated by the observation that people apply knowledge gained from solving one problem to other related problems, and also learn from knowledge shared or transferred by another person. Extending this motivation, we aim to transfer what one agent has learned to another agent performing a similar or related task. While transfer between similar agents is relatively straightforward, it is not easy for a team of heterogeneous agents. The action spaces (sets of possible actions) of different real-world agents are not the same, and the action space of a single agent can also change over time because of mechanical, electrical and environmental factors. Hence the learning of the first agent must be represented appropriately for the second agent to use it effectively, reach its goal, and learn its policy with fewer trials.
- Discussion on Options
- Player/Stage Integration with robots
-Integrate Player/Stage with the Bluetooth-based control of the robot
-Reflect the movement of the physical robot in Stage
-Common module that controls the physical robot as well as the simulated robot in Stage
-A*-based algorithm for path planning with random obstacles in the environment
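The A*-based planning item above can be sketched as follows. This is a minimal illustration on an occupancy grid with randomly placed obstacles; the grid size, obstacle density, 4-connected moves and Manhattan heuristic are assumptions for the sketch, not the exact Player/Stage setup used in the project.

```python
# Minimal A* sketch: grid cells are 0 (free) or 1 (obstacle).
import heapq
import random

def a_star(grid, start, goal):
    """Return a list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):  # admissible Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    best_g = {start: 0}
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# Random obstacles (20% density), keeping start and goal cells free.
random.seed(0)
grid = [[1 if random.random() < 0.2 and (r, c) not in ((0, 0), (7, 7)) else 0
         for c in range(8)] for r in range(8)]
path = a_star(grid, (0, 0), (7, 7))
```

In the actual integration the returned cell sequence would be converted into motion commands for the Stage robot, but that glue code is omitted here.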
- Ultrasound based distance finder module
-Ultrasound sensor module component
-Parallel-port input format studied and found unsuitable
-Serial port with a USB-to-serial converter studied and finalized for distance estimation
-Data format for ultrasound-based distance estimation, designed with the 180° sweeping unit in mind
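The distance computation behind the ultrasound module is a time-of-flight calculation: the sensor reports the round-trip echo time, and distance is speed of sound times half that time. The sketch below assumes a hypothetical "angle,echo_us" line format for the sweep data arriving over the serial link; the project's actual wire format may differ.

```python
# Time-of-flight distance estimation for a sweeping ultrasound unit.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def echo_to_distance(echo_us):
    """Round-trip echo time in microseconds -> one-way distance in metres."""
    return SPEED_OF_SOUND * (echo_us * 1e-6) / 2.0

def parse_sweep(lines):
    """Parse hypothetical 'angle,echo_us' lines from a 0-180 degree sweep
    into a dict mapping angle (degrees) to distance (metres)."""
    sweep = {}
    for line in lines:
        angle_s, echo_s = line.strip().split(",")
        sweep[int(angle_s)] = echo_to_distance(float(echo_s))
    return sweep

# A 5831 us round trip corresponds to roughly 1.0 m.
sweep = parse_sweep(["0,5831", "90,2915", "180,11662"])
```

Reading the bytes from the USB-to-serial device itself would use a serial library on top of this parsing step.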
-Bluetooth-based autonomous master/slave robots for guidance and following; this can be extended to heterogeneous robots and control-change estimation
-Wheelchair debugging
-Real-world simulation trial in Stage
- Lab Presentation
- Transfer Learning between robots with heterogeneous action models.
- Mapping of sequence of actions of two robots for reuse during exploration.
- Prior Work in Transfer Learning.
- Implementation and Result Discussion
- Transfer from more capable robot to less capable robot
- A more capable robot has an action model that supports finer movements and can therefore reach more closely spaced states in the environment.
- The robot with the finer action model transfers its knowledge to the robot with the coarser action model.
- Experiments show an improvement of 90%.
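The fine-to-coarse transfer above can be illustrated with a toy action-model mapping. The step sizes (0.1 m for the more capable robot, 0.5 m for the less capable one) and the one-dimensional setting are assumptions for the sketch; the project's actual action models and mapping are richer than this.

```python
# Toy sketch: approximate a fine-action trajectory with coarse actions.
def map_fine_to_coarse(fine_steps, fine_size=0.1, coarse_size=0.5):
    """Map a run of forward steps in the fine action model to the number of
    coarse steps whose total displacement is closest to it, i.e. the nearest
    state the coarse robot can actually reach along this axis."""
    target = fine_steps * fine_size  # displacement achieved by the fine robot
    return round(target / coarse_size)

# 12 fine steps (1.2 m) map to 2 coarse steps (1.0 m),
# leaving a 0.2 m residual error that the coarse robot cannot remove.
coarse_steps = map_fine_to_coarse(12)
```

The residual error is exactly the sense in which the finer robot "can reach closer states": the coarse robot's reachable states are spaced farther apart, so transferred action sequences must be rounded to them.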
- Journal papers on robot learning and transfer learning.
- Reinforcement learning algorithm integration with Player/Stage, with Mansimar.
- Discussion on simulating sinusoidal movement of robots to model robots with different action models.
- Implementation of a greedy (distance-from-goal) algorithm for path planning in Player/Stage.
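The greedy planner can be sketched as follows: from each cell, move to the free neighbour closest to the goal. The grid layout and 4-connectivity are assumptions; the interesting contrast with A* is that greedy descent can stall in dead ends or local minima near concave obstacles.

```python
# Greedy distance-to-goal path planning on an occupancy grid.
import math

def greedy_path(grid, start, goal, max_steps=100):
    """Follow the neighbour nearest to the goal; None if stuck or over budget."""
    rows, cols = len(grid), len(grid[0])
    path, cell = [start], start
    for _ in range(max_steps):
        if cell == goal:
            return path
        candidates = []
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in path):
                candidates.append(nxt)
        if not candidates:
            return None  # stuck: greedy has no mechanism to back out
        cell = min(candidates, key=lambda c: math.dist(c, goal))
        path.append(cell)
    return None

# On an obstacle-free grid, greedy matches the Manhattan-optimal length.
grid = [[0] * 4 for _ in range(4)]
path = greedy_path(grid, (0, 0), (3, 3))
```

On open ground this reaches the goal in the minimum number of steps, which is why it makes a useful cheap baseline against A* in Player/Stage.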
- Effect of online knowledge of state adjacency during exploration on policy convergence.
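The adjacency-knowledge item above can be illustrated with a toy tabular Q-learning setup: when the agent knows which states border its current one, it samples only feasible actions instead of wasting trials on moves that leave the state space. The one-dimensional chain world, rewards, and learning constants below are assumptions for illustration, not the project's actual experiment.

```python
# Toy Q-learning where adjacency knowledge prunes infeasible actions.
import random

N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)  # step left / step right along the chain

def feasible(state):
    """Adjacency knowledge: only actions leading to existing states."""
    return [a for a in ACTIONS if 0 <= state + a < N_STATES]

def q_learn(episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            acts = feasible(s)  # adjacency restricts exploration
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda a: q[(s, a)]))
            s2 = s + a
            r = 1.0 if s2 == GOAL else 0.0
            q[(s, a)] += alpha * (
                r + gamma * max(q[(s2, b)] for b in feasible(s2)) - q[(s, a)])
            s = s2
    return q

q = q_learn()
# Greedy policy after training: the learned action in each non-goal state.
policy = [max(feasible(s), key=lambda a: q[(s, a)]) for s in range(GOAL)]
```

Without the `feasible` mask the agent would repeatedly attempt off-grid actions, which is one way the presence of online adjacency knowledge speeds up convergence.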
- GTC Report
- POMDP reading.