PROJECTS IN THE INTELLIGENT ROBOTICS LABORATORY IN YEAR 2005.

Marek Perkowski, Professor ECE, Intelligent Robotics Laboratory



1. Please contact the ECE secretary for keys to the laboratory.

2. Be responsible for the lab and do not leave it open; many items have been stolen in the past, including a very expensive analog camera, a PC, and a PC projector. When you are done, put the items in your lockers and lock them. Do not leave the robot head or camera in the public space of the lab.

3. Report any strangers who come to the lab immediately to the security guard and the PSU Police. In the past, street people and former students have stolen equipment while our students were in the lab.

4. You may reuse other people's software and ideas; you are even advised to do so. But remember that you must always acknowledge the source of every piece of code, slide, sentence, or idea taken from other people.

5. Keep the lab in order and do not eat in the lab.

6. All groups will give their presentations at the beginning of February. Start reading early. Partition the tasks among group members. Try to find similar work on my WWW page or on the Internet.

PROJECT NUMBER 1

"HEXOR ROBOT" OR HOW TO FIND A SODA CAN IN A MESSY ROOM.

PEOPLE:

GROUP MANAGER: YOU SELECT. MUST BE MORE OF A SOFTWARE PERSON THAN A HARDWARE PERSON.

SOFTWARE INTEGRATION: Joey Baranski

ROBOT VISION AND FUZZY LOGIC: Seho Park

HARDWARE AND SOFTWARE ON PC, INTERFACE, RADIO: Paul Benoit. This person is also responsible for connecting the claws to the robot body and programming them. Please contact Martin Lukac for access to the Hexor robot.

CONSULTANTS: Natalie Metzger and Chris Brawn (do not ask them too many questions). Ask Martin Lukac for all components, software, and possibly previous related projects. I have a CD-ROM with the previous group's software; ask me on Monday. If I forget, Martin has access to my room. It is labeled "HEXOR SOFTWARE FROM BRAWN. Spring 2004." Martin knows where the claws are, or Jake knows; ask them. Ask Jake for the claws documentation. Jeff also knows about the claws. Ask Jeff about the new vision software.
EQUIPMENT: Hexor robot, arms for Hexor, TV card for a PC, labyrinth. If you want, you can add more sensors, such as microphones. You should also add a speaker to the robot. The commands to the speaker are transmitted by radio from the controlling PC.

PROJECT STARTED: January 3, 2004.

The goal of this project is to familiarize students with some of the basic technologies used in robotics, in particular: remote control by radio, sensors, image processing, walking-robot control, visual feedback, and fuzzy logic.

PROJECT GOAL: You have to adapt the Hexor robot to be able to do the following:
  1. Collect literature (papers, books, webpages) on the following topics:
    1. Complete documentation of the state of this project from the year 2004. You have to reuse all of their software. You have to add claws.
    2. Obstacle avoidance by mobile robots.
    3. Shortest path algorithms in labyrinths.
    4. Robots in mazes - algorithms, sensors, gripping and transporting some items from location to location.
    5. Algorithms for soda-can collection - see MIT.
    6. Speech generation by robots.
  2. You have to build a square labyrinth, 4 meters by 4 meters (or similar), with different kinds of obstacles. The obstacles should be movable so that you can design, or randomly generate, various kinds of labyrinths. Experiment with both few and many obstacles of different types.
  3. Put a small speaker on the robot through which speech sent from the controlling PC will be played.
  4. Grasp items with the grippers. The robot will have two grippers/arms controlled remotely from the PC using the visual feedback of a controlling human: the human operates the robot remotely by viewing the image from a TV camera located on the robot, which is transmitted by radio to his PC. Example: grasping a soda can.
  5. Transport simple items such as soda cans or boxes from location to location. We assume that all objects are located at floor level.
  6. Navigate a labyrinth full of various kinds of obstacles. The obstacles are of different kinds, colors, shapes, and sizes. They may block certain kinds of sensors, such as infrared, but not others, such as the TV camera.
  7. Map the whole environment to be sure that all cans have been collected.
  8. Try to collect as many cans as possible in as short a period of time as possible. A competition between the various programs/designs will be held to find the winner. For instance, 15 minutes will be given and the total number of cans collected over 10 runs will be counted.
  9. The cans should be placed in order somewhere at the edge of the labyrinth, and the robot has to communicate by voice "I have collected five cans" or something like that. You can add more speech behaviors to the robot. The robot may communicate its decisions or internal states through speech, such as "I am searching for a way out of this loop", "Now I am trying to go back to the entrance", or finally "My battery is low, I am dying".
  10. All the intelligence will be in the PC. The processor on board will be used only to collect information from the servos and to perform the lowest-level motor control. This way you will be able to use the most sophisticated programming concepts of learning, knowledge acquisition, obstacle avoidance, etc., and program them in any sophisticated language of your choice without concern for how much memory you have.
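
For item 10, here is a minimal sketch, in Python, of the split described above: the PC keeps all the intelligence and sends short, fixed-format commands over the radio/serial link, while the on-board processor only executes them. The command names, the 3-byte frame format, and the serial port settings are illustrative assumptions, not the actual Hexor protocol; check the 2004 group's documentation for the real one.

    # Assumed PC-side command link (pyserial); all opcodes and the frame
    # layout are hypothetical placeholders for the real Hexor protocol.
    import serial

    link = serial.Serial("COM1", 9600, timeout=0.1)   # assumed port and baud rate

    # Hypothetical opcodes understood by the on-board low-level controller.
    FORWARD, TURN_LEFT, TURN_RIGHT, STOP, OPEN_CLAW, CLOSE_CLAW = range(6)

    def send_command(opcode, argument=0):
        """Send a 3-byte frame: opcode, argument, simple checksum."""
        frame = bytes([opcode, argument, (opcode + argument) % 256])
        link.write(frame)

    send_command(FORWARD, 10)    # walk forward 10 steps
    send_command(CLOSE_CLAW)     # grip the can

All higher-level behavior (mapping, path planning, can collection) then lives on the PC and only ever calls send_command().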

I need a complete report with a plan of your work and information about your algorithms and ideas by February 1. The person responsible for the report is the Group Manager. Please write to me about the mechanical design of the labyrinth, or just build it. What can you find in the lab, and what do you need to purchase? This has high priority, since your group needs to build the labyrinth for experiments.
Problems to be discussed in the report and presentations.
  • Gaits for hexapods. General principles, types of gaits. Gaits for Hexor. New commands for rotation and turning right/left. New gaits? (See the gait sketch after this list.)
  • Grippers. Function and control.
  • Architecture of the board on the robot.
  • Radio communication and formats.
  • How to have all sensor readings available on the PC. What is the delay?
  • Algorithms for labyrinth navigation.
  • Algorithms for environment mapping.
  • Algorithms for can collection (see the planning sketch after this list):
    1. First do a complete mapping, then plan the shortest path to collect the cans. Collect two cans at once.
    2. Whenever you see a can, take it and put it in storage, following the predicted shortest path.
    3. A combination of the above two algorithms.
  • Computer vision algorithms used for map building. What algorithms? Where to find them? Tests.
  • Sonar used for map building.
  • Infrared sensors used for map building.
  • Integration of the three maps (one from the camera, one from the sonars, and one from the infrared sensors) into a single map. This map is 2.5-dimensional (the height of obstacles is taken into account). (See the map-fusion sketch after this list.)
  • Shortest-path algorithms. Dijkstra. A*. Genetic. Simulated annealing. Mixed. (See the A* sketch after this list.)
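
For the gaits bullet, here is a minimal sketch of the standard alternating tripod gait for a hexapod, written in Python. The leg numbering and the primitive calls (lift_leg, swing_leg, lower_leg, push_body) are assumptions made for illustration; they are not Hexor's actual servo interface.

    # Alternating tripod gait: legs 0, 2, 4 form tripod A; legs 1, 3, 5 form tripod B.
    TRIPOD_A = (0, 2, 4)
    TRIPOD_B = (1, 3, 5)

    def half_cycle(robot, swing_legs, stance_legs):
        """Swing one tripod forward while the other tripod pushes the body."""
        for leg in swing_legs:
            robot.lift_leg(leg)
            robot.swing_leg(leg, forward=True)
            robot.lower_leg(leg)
        robot.push_body(stance_legs)     # stance tripod propels the body forward

    def walk_forward(robot, cycles):
        for _ in range(cycles):
            half_cycle(robot, TRIPOD_A, TRIPOD_B)
            half_cycle(robot, TRIPOD_B, TRIPOD_A)

Turning can be sketched the same way by swinging the legs on one side through a shorter arc than on the other side.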
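
For the map-integration bullet, here is a sketch of fusing the camera, sonar, and infrared maps into a single 2.5-dimensional occupancy grid in which every cell stores an estimated obstacle height. The 10 cm cell size and the rule of keeping the maximum reported height per cell are assumptions chosen for illustration.

    import numpy as np

    CELLS = 40            # 4 m x 4 m labyrinth at an assumed 10 cm resolution
    UNKNOWN = -1.0        # cells no sensor has reported on yet

    def empty_map():
        """2.5-D map: one obstacle-height estimate (meters) per cell."""
        return np.full((CELLS, CELLS), UNKNOWN)

    def fuse(camera_map, sonar_map, ir_map):
        """Combine three per-sensor height maps of shape (CELLS, CELLS).

        Taking the element-wise maximum keeps any obstacle that at least one
        sensor detected, which matters because some obstacles block infrared
        but are still visible to the TV camera."""
        return np.maximum(np.maximum(camera_map, sonar_map), ir_map)

    def is_blocked(height_map, row, col, clearance=0.05):
        """A cell blocks the robot if its obstacle is taller than the clearance.
        UNKNOWN cells are optimistically treated as free in this sketch."""
        return height_map[row, col] > clearance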
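
For the shortest-path and can-collection bullets, here is a sketch of A* over the fused grid, plus the greedy strategy (variant 2 of the can-collection list): always plan a path to the nearest known can next. The grid representation and the Manhattan heuristic are assumptions; Dijkstra is the same code with the heuristic set to zero, and variant 1 (map everything first, then plan a tour over all cans) can reuse astar() as its distance function.

    import heapq

    def astar(blocked, start, goal):
        """A* on a grid. blocked(r, c) must return True for obstacle cells and
        for cells outside the grid. Returns a list of cells from start to goal,
        or None if the goal is unreachable."""
        def h(cell):                      # Manhattan-distance heuristic
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        frontier = [(h(start), 0, start, None)]
        came_from, best_cost = {}, {start: 0}
        while frontier:
            _, g, current, parent = heapq.heappop(frontier)
            if current in came_from:      # already expanded with a better cost
                continue
            came_from[current] = parent
            if current == goal:
                path = []
                while current is not None:
                    path.append(current)
                    current = came_from[current]
                return path[::-1]
            r, c = current
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if blocked(*nxt) or nxt in came_from:
                    continue
                new_cost = g + 1
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost + h(nxt), new_cost, nxt, current))
        return None

    def collect_cans_greedy(robot_cell, cans, blocked):
        """Variant 2: repeatedly go to the nearest reachable can (by path length)."""
        remaining, plan = set(cans), []
        while remaining:
            paths = [(astar(blocked, robot_cell, can), can) for can in remaining]
            paths = [(p, can) for p, can in paths if p is not None]
            if not paths:
                break                     # no remaining can is reachable
            path, can = min(paths, key=lambda pc: len(pc[0]))
            plan.append(path)
            robot_cell = can
            remaining.remove(can)
        return plan

Here blocked can simply wrap is_blocked() from the map-fusion sketch above.
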
    MOST URGENT TASK: Take all materials of the previous group from Marek Perkowski. After reading the material from last year and playing with the robot, write a prioritized list of tasks for this quarter and the next quarter. Our goal is to have an even more successful demo than last year's for the Intel competition and PDXBOT.

    COLLABORATION WITH OTHER GROUPS: You do not have to do everything from scratch. Much can be shared with other groups, especially the vision software. Talk to the Walking Robots or Professor Muval groups.

    PROJECT NUMBER 2

    "PROFESSOR MUVAL" OR HOW TO USE MACHINE LEARNING BASED ON RELATION DECOMPOSITION TO CONTROL HOW PROFESSOR ROBOT HEAD MOVES AND WHAT IT SAYS.

    PEOPLE:

    GROUP MANAGER: Reto Toengi

    MACHINE LEARNING: Wael Refaai

    ROBOT VISION AND INTEGRATION: Reto Toengi.

    NETWORKING AND INTEGRATION: David Tabachnik.

    SPEECH AND DIALOG: Anupama Seshagin.

    CONSULTANTS: Martin Lukac and Jeff Allen. Ask Martin for the head and software. Ask Jeff for software. You can take and integrate any software from Martin and Jeff. You can also take any software from the Internet, but you must always give credit, and do not remove the original header of the software, even if you modify it.


    EQUIPMENT:
    1. Professor MUVAL robot. Possibly add simple touch or temperature sensors. Use a microphone and a camera, but do not put the camera in the eyes.
    2. The camera is stand-alone and may be controlled from the PC.
    3. Interfaces to the PC.
    4. Fonix software.
    5. Intel Vision Library.
    6. CMU Image Processing and robotics software.


    The goal of this project is to familiarize students with several robotics technologies: voice synthesis, voice recognition, computer vision, and human-humanoid robot interface. Other goals are:
    1. Manual control of robot movements. "Emotions".
    2. Simple robotic control language.
    3. Learning by relation decomposition and logic synthesis using MVSIS.
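
    For goal 3, here is a minimal sketch, assuming the learning data takes the form of a multiple-valued input/output table built from logged interactions. Each logged situation becomes one row of a multiple-valued relation, which the relation-decomposition and logic-synthesis step can then compress into a controller. The variable names, the value encodings, and the plain-text output format are illustrative assumptions; they are not the actual MVSIS input format, so consult the MVSIS documentation before feeding it data.

      # Build a multiple-valued truth table from logged (situation, reaction) pairs.
      # All variables and encodings below are hypothetical examples.
      PERSONS  = {"unknown": 0, "student": 1, "colleague": 2}
      TOPICS   = {"none": 0, "greeting": 1, "logic": 2, "insult": 3}
      EMOTIONS = {"neutral": 0, "smile": 1, "surprise": 2, "anger": 3, "boredom": 4}

      def log_to_rows(log):
          """log: iterable of (person, topic, idle_minutes, emotion, phrase_id)."""
          rows = []
          for person, topic, idle, emotion, phrase in log:
              idle_level = 0 if idle < 1 else (1 if idle < 10 else 2)
              rows.append((PERSONS[person], TOPICS[topic], idle_level,
                           EMOTIONS[emotion], phrase))
          return rows

      def write_table(rows, path):
          """Dump the relation as whitespace-separated values, one row per line
          (a generic format; converting it to what MVSIS expects is a separate step)."""
          with open(path, "w") as f:
              for row in rows:
                  f.write(" ".join(str(v) for v in row) + "\n")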


    This is a latex-skin type of robot with many degrees of freedom. You have to insert servos and mount wires to move the skin. Ask Martin for help. This should NOT be an attempt at a realistic robot; its behavior should be exaggerated, as in a puppet theatre or a Pizza Hut robot show. It is, however, not a caricature. The robot should be friendly, funny, and smart, like an old-fashioned European professor. Muval will be given as a best-software award at the MVL conference. It is called Jan Emil Muval to honor two creators of multiple-valued logic: Jan Lukasiewicz and Emil Post. You may read about these two great scholars to create a meaningful dialog for the robot. His latex face is, however, not similar to either of them, and we should not attempt to simulate their gestures (very few films of them remain).
    The goal of this project is to create realistic human-like emotions. You have to invent the personality of this robot: his knowledge, responses, profession, and so on. It is similar to writing a scenario.

    TASKS:
    1. Read about "emotional robots". There are several books and WWW pages. Find articles. Read Cynthia Breazeal's Ph.D. thesis.
    2. Find a book (Martin has one) that has models of the movements of human facial muscles used to represent various emotions. There is a rich literature about the animation of characters for video games and cartoons. You can use 3D modelling first.
    3. Write a scenario for a conversation with your robot where he will be able to go from one emotion to another. As a professor, he should perhaps be emotional only about science. The emotions should be:
      1. anger
      2. smile
      3. surprise
      4. boredom
      The emotions as expressed by facial gestures are linked to what the robot says, and what it says depends on what it hears from the human. The robot can also react with its emotional state to whom it is speaking; for instance, the robot can smile whenever it sees person X and frown when it sees person ZX.
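
      A minimal sketch of such reaction rules, written in Python, assuming the perception side already reports the name of the recognized person and a rough category of what was heard, and that each emotion maps to a set of face-servo positions and a spoken line. All names, servo channels, angles, and phrases below are illustrative assumptions, not the actual Muval hardware mapping.

        # Hypothetical mapping from (person seen, phrase category) to an emotion,
        # and from an emotion to face-servo positions and a spoken line.
        REACTIONS = {
            ("person_X",  "greeting"): "smile",
            ("person_ZX", "greeting"): "anger",       # frown at person ZX, as above
            ("anyone",    "science_question"): "surprise",
            ("anyone",    "silence"): "boredom",
        }

        EMOTION_SERVOS = {                             # servo channel -> angle (degrees)
            "smile":    {"mouth_left": 40,  "mouth_right": 40,  "brows": 10},
            "anger":    {"mouth_left": -20, "mouth_right": -20, "brows": -30},
            "surprise": {"mouth_left": 10,  "mouth_right": 10,  "brows": 45},
            "boredom":  {"mouth_left": 0,   "mouth_right": 0,   "brows": -10},
        }

        PHRASES = {
            "smile": "Ah, how nice to see you again!",
            "anger": "Please do not interrupt my lecture.",
            "surprise": "What an interesting question about logic!",
            "boredom": "Is anybody still listening?",
        }

        def react(face, speech, person, heard_category):
            """Pick an emotion for the current situation and act it out."""
            emotion = (REACTIONS.get((person, heard_category))
                       or REACTIONS.get(("anyone", heard_category))
                       or "boredom")
            for channel, angle in EMOTION_SERVOS[emotion].items():
                face.set_servo(channel, angle)         # hypothetical servo interface
            speech.say(PHRASES[emotion])               # e.g. through the Fonix engine
            return emotion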



    PROJECT NUMBER 3

    LITTLE WALKING ROBOTS.

    PEOPLE:

    GROUP LEADER: YOU DECIDE.

    Robosapien person: John Ferrell.

    Japanese-Korean Wow Humanoid Robot person: Kristine Smith.

    Software: Tzewen Wang.
    1. EQUIPMENT:
      (1) A little walking and exercising robot from Japan. Take this robot from Jake Biamonte; he will also give you a manual. The English-language manual is also on the webpage of Steven Mbah. Steven assembles and programs another robot of this type; he may be a very useful consultant for you. His email is on my Thursday seminar group webpage. There are WWW pages about this robot. Look first at Steve Mbah's webpage; it is linked from my main webpage. You may reuse much of our previous robot-soccer software for control and vision. In the previous project we controlled small hexapods. The software, developed under the leadership of Mikhail Pivtoraiko (now at CMU), is available from Martin. Martin Lukac also has the manual written by that group. Parts of the stage are also available in the lab. The software that we have can be easily modified to add sensors to the robot. Consult also with the Hexor group, since this project is somewhat similar; in particular, much of the vision software can be reused if you put a camera on the head of the robot.
      (2) Robosapien robot. This robot has become very popular recently and there are many webpages that present in detail how the robot can be adapted. You can add a camera and a speaker to it, and you can control it from a laptop, Pocket PC, or Philips Pronto, as well as many other devices. Start searching the WWW from here.

    2. YOUR TASK
      You have to be able to control both of these robots by radio from the PC. For the Korean robot you will have to purchase radio controllers of the same type as used previously in the robot soccer project (ask Martin Lukac and check on the webpage; another consultant is Tony Muilenburg).
      The receivers will be located on the robots. There will be a ceiling or front camera that will see where the robots are located and what happens to them. The camera is connected to the computer that gives instructions to both robots about what to do. To verify that the system works, I would like to see some simple coordinated tasks of the robots, like walking in a line or in a row, shaking hands (tough), dancing, or playing with a ball (tough). I give you complete freedom in choosing these tasks at this stage. The only requirement is that you must create a robot control language for two or more robots. This language should be easy to use and be based on C, C++, or any other high-level language. It would be great if the robots were programmed in Lisp. We have done such a project in the past, and it allowed us to nicely program recursive and rule-based behaviors, advanced vision, emotions, and natural-language conversation. (A minimal sketch of such a control language is given at the end of this section.)
      In any case, the software on the robots is trivial, if there is any at all; there is no intelligence on board, only communication tasks (there are no sensors on the robots). All intelligence is in the vision software and in the planning / obstacle-avoidance / movement software on the PC. The basic actions of the robots are ready and built into them; you do not have to program them.
      The complexity of this project is in the integration, since many components can be taken from other projects, including the vision software.
      The example actions of the robots may be: (1) getting out of a labyrinth, (2) fighting, (3) playing with a ball, (4) dancing and singing, (5) doing anything together.
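
      As a starting point for the required control language, here is a minimal sketch of a Python-embedded command layer on the PC. The command names, the per-robot radio channels, and the send() callback are illustrative assumptions, since the actual transmitters and built-in actions differ between the Robosapien and the Korean humanoid.

        # Tiny robot-control language embedded in Python: each Robot object turns
        # high-level commands into the low-level codes its radio link needs.
        class Robot:
            def __init__(self, name, radio_channel, send):
                self.name = name
                self.channel = radio_channel
                self.send = send             # send(channel, action): assumed radio driver

            def walk(self, steps):
                for _ in range(steps):
                    self.send(self.channel, "STEP_FORWARD")

            def turn(self, direction):
                self.send(self.channel, "TURN_" + direction.upper())

            def wave(self):
                self.send(self.channel, "RAISE_ARM")

        def walk_in_line(robots, steps):
            """Coordinated behavior: all robots step together, one step at a time."""
            for _ in range(steps):
                for robot in robots:
                    robot.walk(1)

        # Example usage with a placeholder radio driver that only prints commands.
        def print_send(channel, action):
            print("channel", channel, ":", action)

        robosapien = Robot("Robosapien", 1, print_send)
        humanoid   = Robot("Wow humanoid", 2, print_send)
        walk_in_line([robosapien, humanoid], steps=3)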