ABOUT THE ENTIRE ROBOTICS LABORATORY PROJECT

  • Marek Perkowski, Professor.

    Project Summary

    We proposed to Intel to augment the traditional integrated undergraduate electronics/instrumentation laboratory of the ECE Department at PSU with the materials, software, and equipment required for conducting robot-building, robot-application, and machine learning/evolution experiments. These experiments will illustrate material from the following courses: Introductory Logic Circuits, Analog Circuits, Finite State Machine Design, Digital Systems, Microprocessors, Artificial Intelligence (AI), Neural Nets, and Testing and Design for Test.

    The proposal has been accepted, and several faculty members and students will work on this project. Your task in this class will be to serve as "guinea pigs" for this new project. You will be the first to design some devices according to our plans. The next student projects will be more complicated and will involve more complex robots than the OWI arm.

    The projects will illustrate the following topics: robotics in integrated test and manufacturing, sensor integration, image processing for robotics, voice control of robots, and reactive robots based on learning state machines.

    Introducing these ideas a few years ago would have been impossible, but the availability of inexpensive robotic hardware kits and the power of modern computers now make it possible to design such experiments for the first time. We believe that robotics illustrates ideas across the computer engineering undergraduate curriculum, and we feel that such a model will increase students' enthusiasm as well as the practicality of their attitude toward learning.

    As I told you in class, I have a long-standing fascination with the construction of robots, animats, and puppets. I built my first robot, AZOR (Animowany Zwierz Oraz Robot; in Polish, Animated Animal and a Robot), in 1970. This mobile robot was based on early cybernetics ideas of modeling instincts. In 1982 I worked in Minnesota on project ARM, a software controller for robotics. At PSU, from 1983 to 1990, I supervised several student robotics projects that converted various toys to computer-controlled robots: Big Bird, Robbie the Robot, Fred the Robot, electric cars, and Armatron hands.

    I built the entire EE 406 "By Design" course on microprocessor interfacing and robotics, but at that time the costs of useful components and hardware were still too high to allow their use in a mass-scale laboratory rather than in individual projects.

    At PSU, I also supervised two long-term robotics projects: Micro-Mouse (a complete mobile robot with an on-board computer and sensors to navigate a labyrinth) and PSUBOT (Portland State University roBOT), a voice-controlled robotized wheelchair using AI techniques (shown on Oregon television). I have also sculpted mechanical face-puppets, and new software and hardware are available now.

    Now, I hope to build with you a robotized face sculpture with intelligent feedback, better than the famous Furby toy and based on the techniques presented below. I expect your ideas and help.

    Current Situation at PSU and our Motivation

    Currently, undergraduate students in the Electrical and Computer Engineering Department at PSU gain laboratory experience mostly through a mandatory sequence of laboratories: ECE 201 (what is there), ECE 202 (add more), ECE 203. Our faculty has long felt that these laboratories need an intensive upgrade.

    As operating systems and software environments become increasingly user-friendly and sophisticated, the laboratory environment becomes well insulated from the lower-level features of the underlying machine and its software components.

    Observing students who use the previously introduced integrated laboratory based on LabView (Schaumann/Stegawski), we feel that although this very modern lab is excellent in some respects, some other crucial educational concerns are not yet well addressed. For instance, students have a tendency to fall into the trap of "software simulation" and to disassociate themselves from the real hardware.

    Also, based on our previous experiments with robotics, which generated a lot of student enthusiasm, we think that adding robotics projects will make this laboratory more exciting for undergraduate students. We strongly believe that generating students' enthusiasm and creativity is extremely important, especially in the freshman and sophomore years, and we know from experience that it helps student retention (lack of engineering excitement was mentioned by a few good students who dropped out of our curriculum early).

    Robotics will allow us to create a variety of different projects using the same equipment, thus preventing the cheating that exists now as students accumulate past years' lab descriptions from their seniors.

    A typical undergraduate computerized measurements laboratory focuses mostly on measurement techniques. Our lab also has a simple design component, but the integration component is definitely missing. The design and simulation are done in software, always for the same circuit, and then programmed into an EPLD. The "engineering feeling" of inserting chips into sockets, wire-wrapping or soldering, troubleshooting and testing with probes, and the excitement of a digital circuit finally working and controlling a motor, which still existed 10 years ago in undergraduate projects, has been lost. We do not believe it is good that we have more and more students who are afraid of touching real circuits and prefer to do everything on computer models instead.

    On the other hand, the integration of software and hardware, and of analog and digital circuits, is increasingly the primary industrial experience, rather than developing subsystems from scratch. Similarly, in software classes, our students learn programming and the analysis of algorithms.

    Thus, again the emphasis is on software development, and the students may never experience the underlying hardware of the computers they use. The computer exercises related to microprocessors that come in the junior and senior years are not related to applications of microprocessors, such as control of motors or reading from real sensors. Thus, even in these exercises, there is minimal exposure to actual physical hardware components, especially from the point of view of hardware integration and software/hardware integration.

    Situation at top Universities

    Undergraduate robotics projects are becoming very popular at top universities, because they are treated as the first introduction to software-hardware integration, control, and systems theory. For instance, such curricula exist at MIT and Carnegie Mellon University. Students learn principles of analog and digital electronics, control, Lisp and Prolog programming, assembly programming, robotics, machine learning, neural nets, fuzzy logic, and artificial intelligence.

    It is relatively easy nowadays to design an inexpensive robot, either a robotic hand or a mobile robot, that is controlled by a computer. Such projects can teach a student a variety of subjects: DC motors, stepper-motor control, analog electronics for sensors, microprocessor interfacing, and sonars and their interfacing through simple EPLD/FPGA-built combinational logic and state machines.

    Arrival of inexpensive robotic kits

    Prices for computer-related equipment, voice-recognition software, motors, and complete robotics kits are dropping dramatically, so that high prices are no longer an obstacle even for moderately budgeted universities. For instance, just very recently the Robotic Arm Trainer from OWI arrived on the market. It costs only $45 (on sale). I assembled this arm from the kit in 5 hours, but one of my students was smarter and needed only 4 hours.

    A functionally similar robotic manipulator with 5 degrees of freedom cost at least $1,500 a few years ago. Moreover, last Christmas the LEGO Mindstorms kit was introduced, and it is now so popular that it is impossible to purchase. This kit has received excellent reviews in the robotics press and on the WWW, and we will try to purchase it for evaluation soon. It costs $200, can be upgraded in modules costing approximately $80 each, and has been designed by the top world authorities in robotics, computer education, and curriculum development from MIT.

    It allows students to build robotic manipulators, hands, legs, and various kinds of animals and space creatures. A CD-ROM and software exist, as well as books with hundreds of projects; this is the "LEGO for the 21st century." Another very impressive kit, called Robix ($500), allows one to build many different animals and models of industrial manipulators. PC software is included, and there is a bulletin board and a users' group. The availability of various complete sub-systems, such as voice recognition, voice synthesis, image processing, and pattern recognition, has also improved dramatically since 1997.

    There are now cameras and voice-recognition software for under $100. The prices of earlier toys such as Big Bird have gone down, and many new toys have been introduced, such as Mr. Arnold, Furby, Barney, and others. While they are not as flexible as those presented above, their mechanical parts and sensors can still be re-used in interesting projects.

    What have we learned from past robotics projects

    Based on our limited but personal robotics experiences mentioned above, we can formulate some ideas that will be the basis of the new projects. What was good about our previous robotics projects:
    1. Students learned integration of analog, digital, and mechanical design, sensors/motors integration.
    2. Students learned integration of software and hardware and assembly/C programming in real-time environment.
    3. Students became less afraid of building hardware with their hands. They built PCBs and learned how to wire-wrap, assemble, test, etc.
    4. Students had a sense of accomplishment and were proud during Open Houses, national exhibitions, Science Fairs, etc.
    5. Robotics enthusiasts created the Cybernetics (WSU) and Student Robotics (PSU) Societies and participated in competitions. Some returned to PSU as M.S. students.

    Unfortunately, not everything was perfect in our experience, so let us try to analyze what was wrong and how we can improve on it.
      What was wrong with our previous robotics projects, and what to do about it:
      1. Some toys were difficult to convert (Big Bird, cars), which caused a waste of time and little learning experience. The toy should be designed for projects and experimentation. The new kits have this property, and literature on how to use them is now available for free or very inexpensively.
      2. Projects were too time-consuming, and their costs were too high for students. Again, the new kits are very inexpensive, and students will not have to build everything from scratch. In the past, building, for instance, the PCB board, or making mechanical adaptations, was so time-consuming that there was little time for writing software and experimenting with the robot's behavior. Some time should be spent on mechanical/electrical construction, but not too much.
      3. Once an experiment was successfully completed and the student, with his experience, was gone, the project was dead and practically unusable. This will be improved by having many copies of robotic kits and an integrated software environment, plus WWW pages of previous students available for future use.
      4. Equipment was not reliable. New equipment seems to be more reliable. Because it is very inexpensive, with this grant we will be able to purchase many copies of the kits, which will allow us to keep the lab operating for at least 7 more years.
      5. It was not possible to run this kind of project for many students in a regular lab environment. In the first years, the lab students will also assemble the OWI robots from kits. This will take them much less time than converting toys, and it will be fun. With many kits, there will be unification and (partial) repeatability of projects.
      6. Although it was a great learning experience to build the robot, once it was built, no learning was achieved. Thus, pre-canned intelligent programming exercises should be created that will be interesting and will also teach the student some important concepts. More time will be spent on integration based on the "building block" philosophy of LEGO and on experimentation/machine learning. Each project will be sufficiently different from the others to prevent cheating.



      The main objectives of the proposed set of new laboratory experiments will be the following:
      1. Understand the basic robotic components and concepts.
      2. Reinforce the concepts of a combinational function and a Finite State Machine, in application to robotics.
      3. Gain practical experience with EPLDs for various practical applications.
      4. Understand principles of automatic test.
      5. Learn about rule-based programming for robotics.
      6. Learn about machine learning and pattern recognition software and hardware in simple robotic applications.
      7. Understand the basic principles of the most important current advanced robotic programming methods.
      8. Create and practically develop a complete, interesting robotic system from existing software and hardware modules.
      9. Learn how to document work and present results using WWW pages.


      Construction of robots involves interfacing of sensors, and, more importantly, dealing with the physical connection between a computer and the controller board (via parallel or serial ports). Developing software to control the behavior of physical robots also involves working under constrained resources (especially with respect to speed and memory) imposed by the robot's software or hardware. This provides a direct exposure to the time as well as the space complexity of algorithms. Yet another link that can be emphasized via the laboratory exercises is that between the algorithms embodied in the agents and their equivalence as a whole to an automaton, the Reactive State Machine (RSM).
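      The equivalence between an agent's control program and an automaton can be made concrete with a minimal sketch of such a Reactive State Machine. The states, sensor values, and motor actions below are hypothetical illustrations, not the actual DUAL representation:

```python
# Minimal sketch of a Reactive State Machine (RSM): a Mealy-style automaton
# mapping (state, sensor input) pairs to (next state, motor action) pairs.

class ReactiveStateMachine:
    def __init__(self, transitions, start):
        self.transitions = transitions  # (state, input) -> (next_state, action)
        self.state = start

    def step(self, sensor_input):
        self.state, action = self.transitions[(self.state, sensor_input)]
        return action

# A two-state obstacle-avoidance behavior as a hypothetical example.
rsm = ReactiveStateMachine(
    transitions={
        ("cruise", "clear"): ("cruise", "FORWARD"),
        ("cruise", "bump"):  ("avoid",  "STOP"),
        ("avoid",  "clear"): ("cruise", "FORWARD"),
        ("avoid",  "bump"):  ("avoid",  "TURN-LEFT"),
    },
    start="cruise",
)

actions = [rsm.step(s) for s in ["clear", "bump", "bump", "clear"]]
print(actions)  # ['FORWARD', 'STOP', 'TURN-LEFT', 'FORWARD']
```

      Because the whole controller is such a transition table, resource constraints show up directly: the table's size is the "space" of the algorithm, and one `step` per sensor reading is its "time".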


      Equipment and Software for the Robotics Projects

      We propose to introduce robotics as a part of the existing laboratory. This will decrease costs and allow us to use the existing software and infrastructure. Some PC computers will be augmented with additional external hardware and software. Each unit will require the following:
      1. A computer (PC or UNIX; the software uses a lot of PC-Windows graphics).
      2. The interface to the robot (parallel or serial port).
      3. The robot itself. It can be: (1) completely assembled (from the kit, by student staff); (2) assembled by the laboratory students from the kit in a standard way; (3) assembled by the laboratory students from the kit in a non-standard way, according to their own idea (a new animal? a face? a new space creature? a new kind of tool or crane?).
      4. Sensors to be mounted on the robot (touch, bump, photoresistors, microphone, camera, etc.)
      5. Other additional robot-building materials: LEGO Technic resource sets; children's motorized building kits from Meccano, K'Nex, Fischer Technik, Capsela, etc.; containers, ON/OFF switches, photoresistors, and a standard television remote (for infrared signals); boxes, etc., provided by the lab or by students. In our experience, we have found LEGO materials to be the most effective in price and performance. All these components are relatively cheap. Also, students bring their old toys and donate them to the lab.
      6. Basic control software (comes with the kits), written in C.
      7. The visual software DUAL by Alan Mishchenko, adapted to learning Reactive State Machines (RSMs), written in C++.

      Proposed Experiments

      The experiments will illustrate three levels of robot learning. At each of these levels there will be various project variants each year, to prevent cheating.

      Experiment 1: Robot Programming for Test of Electronic Boards in Industrial Environment.

      This is a simple exercise in algorithmic robot programming. The robot mechanics will be fixed (OWI arms), and the positions of the electronic board under test will be fixed and known to the computer. During the experiment design, the graduate student helpers will prespecify all possible signals from sensors and A/D converters. The task will be to perform certain measurements on the board and decide where the hardware fault (simulated by a switch) is located. Using its measurements and an internal model of the circuit, the robot's fault-location program will detect where the fault is and communicate it to the user. Next, the robot will find the switch with which the user simulated the fault and toggle it. If the robot's procedure was correct, the circuit on the board is now fault-free, so the robot tests it again and reports that it is correct.

      In this type of experiment, the robot has fixed mechanics, sensor locations, and motors, and is totally pre-programmed by the lab students. Each input signal combination is predicted and analyzed, and every respective output action is fully specified by the students. Such input/output behavioral sequences are given to the DUAL RSM learning software, which designs the simplest machine for the set of behavioral sequences. The learning lies only in the constructive induction process of creating the simplest machine from the given set of examples. The user can change this machine (the robot's brain) by changing the set of examples. Observe, however, that the examples are created in software and the evaluation is done by the human who observes the robot's behavior. Also, the machine is completely specified, so no generalization occurs.

      This is a very simple type of learning: being taught rules from positive examples only. There is no place for any "creativity" of the robot here; in a new situation the robot will not know what to do. When the reactive state machine (RSM) is fully designed and correct behaviors are observed on the robot in a real environment, the machine is converted to the KISS format for state machines, next to the Espresso format for Boolean functions, and this format is then used to program an Electronically Programmable Logic Device (EPLD) with the "brain contents". The robot is disconnected from the computer and connected to the EPLD, now located in a socket on a protoboard. The same behavior of the robot should be observed as before, when it was controlled by the computer.
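      To give a feel for the first step of that tool chain, the sketch below emits a tiny machine in a KISS2-style text format (header lines followed by `input current-state next-state output` transitions). The machine itself (a one-bit bump-sensor controller) and the exact header fields are illustrative assumptions, not output of the DUAL tool:

```python
# Emit a state machine as KISS2-style text: .i/.o give input/output counts,
# .p the number of transition terms, .s the number of states, .r the reset
# state; each remaining line is "input current_state next_state output".

def to_kiss(transitions, reset, n_inputs, n_outputs):
    states = sorted({s for s, _ in transitions} |
                    {ns for ns, _ in transitions.values()})
    lines = [f".i {n_inputs}", f".o {n_outputs}",
             f".p {len(transitions)}", f".s {len(states)}", f".r {reset}"]
    for (state, inp), (nxt, out) in sorted(transitions.items()):
        lines.append(f"{inp} {state} {nxt} {out}")
    return "\n".join(lines)

# One input bit (bump sensor), one output bit (motor on/off).
fsm = {
    ("s0", "0"): ("s0", "1"),   # no bump: keep driving
    ("s0", "1"): ("s1", "0"),   # bump: stop
    ("s1", "0"): ("s0", "1"),   # clear again: resume
    ("s1", "1"): ("s1", "0"),   # still bumped: stay stopped
}
print(to_kiss(fsm, "s0", 1, 1))
```

      After state encoding, each such transition line becomes a row of a Boolean truth table, which is the form a two-level minimizer like Espresso consumes before the EPLD is programmed.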

      Similarly to this project, we could design experiments such as the robot solving puzzles (wolf/cabbage/goat/farmer, missionaries and cannibals), building pyramids, etc.

      Experiment 2: Reactive State Machines.

      As before, the robot will perform simple programmable movements: $MOTOR-1-UP$, $MOTOR-2-DOWN$, $MOTOR-3-LEFT$, $PICK$, $RELEASE$. These correspond to states of output variables. Each OWI arm has 5 motors, each in one of 3 states (left, right, none). All signals are programmed as multi-valued, and timing information is added. For instance: $MOTOR-1-UP$ for 2 seconds, $MOTOR-2-LEFT$ and $MOTOR-3-UP$ for 3 seconds. One multi-valued variable, with 4 bits, will be used for time; it will describe time for both input and output variables. This will allow us to give commands such as: IF $INPUT-SIGNAL-SENSOR-1$ for two seconds AND $INPUT-SIGNAL-SENSOR-3$ for one second, THEN generate the sequence $MOTOR-1-UP$ and $MOTOR-2-LEFT$ and $MOTOR-3-UP$ for 10 seconds. Multi-valued variables will be created for a total of 50 equivalent binary variables for learning. The state machine will be hierarchical and distributed; each component will specify a certain behavior and will include, for instance, counters for counting time. The user will describe the learning data in the form of tables available in a new window of the DUAL software.
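      The timed-rule idea above can be sketched as follows. At each clock tick we track how long each sensor has been continuously active and fire a rule when all of its duration conditions hold; the sensor names and the 1-tick-per-second granularity are assumptions for illustration:

```python
# Timed rule evaluation: "IF sensor-1 held for 2 s AND sensor-3 held for 1 s
# THEN emit an action". Durations are counted in whole-second ticks.

def run_timed_rule(readings, conditions, action):
    """readings: one dict of sensor -> bool per second.
    conditions: dict of sensor -> required consecutive seconds."""
    held = {s: 0 for s in conditions}      # how long each sensor has been on
    fired_at = []
    for t, snapshot in enumerate(readings):
        for s in held:
            held[s] = held[s] + 1 if snapshot.get(s) else 0
        if all(held[s] >= need for s, need in conditions.items()):
            fired_at.append((t, action))
    return fired_at

readings = [
    {"SENSOR-1": True,  "SENSOR-3": False},
    {"SENSOR-1": True,  "SENSOR-3": True},   # sensor-1 held 2 s, sensor-3 1 s
    {"SENSOR-1": False, "SENSOR-3": True},
]
rule = {"SENSOR-1": 2, "SENSOR-3": 1}
print(run_timed_rule(readings, rule, "MOTOR-1-UP for 10s"))
# -> [(1, 'MOTOR-1-UP for 10s')]
```

      In the real design, the "held" counters would be the time-counting components of the hierarchical state machine rather than Python variables.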

      There will also be a voice-recognition module. The human will control the robot with simple voice commands such as LEFT, RIGHT, STOP. Simple image processing software will allow the robot to recognize rough shapes and colors. For instance, color can be used to recognize a human: a big area of green (a jacket, etc.) will symbolize man Green, red will be man Red, etc.

      The experiment will illustrate behavior-based programming based on RSMs. In this paradigm, robot programs (i.e., RSMs) are constructed as a hierarchy of behavior modules that execute concurrently in a multi-tasking environment. This enables a robot builder to create robots that exhibit simple behaviors, to which more sophisticated behaviors can be added simply by coding higher-level behavior modules. This way of organizing a hierarchy of behavior modules is also termed the subsumption architecture. It has proved to be a unique and concrete way of introducing multitasking and issues related to real-time control into the computer science curriculum, and it is used at MIT, CMU, etc. What is new in our DUAL approach is the acquisition of the machine in a higher-level language based on input-output constraints.
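      The subsumption idea can be sketched in a few lines: behavior modules are ordered by priority, and a higher-priority module, when triggered, suppresses the output of the layers below it. The behaviors and sensor fields below are invented for illustration:

```python
# Subsumption-style arbitration: walk the layers from highest priority down
# and return the first triggered module's action, suppressing lower layers.

def subsume(layers, sensors):
    """layers: list of (trigger, action) pairs, highest priority first."""
    for trigger, action in layers:
        if trigger(sensors):
            return action
    return "IDLE"

layers = [
    (lambda s: s["bump"],        "BACK-UP"),     # highest priority: escape
    (lambda s: s["light"] > 0.5, "SEEK-LIGHT"),  # middle layer: phototaxis
    (lambda s: True,             "WANDER"),      # default behavior
]

print(subsume(layers, {"bump": False, "light": 0.2}))  # WANDER
print(subsume(layers, {"bump": False, "light": 0.9}))  # SEEK-LIGHT
print(subsume(layers, {"bump": True,  "light": 0.9}))  # BACK-UP
```

      Note how the escape behavior works without the light-seeking layer ever knowing it was overridden; new behaviors are added on top without editing the layers below.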

      Typically, the tasks only provide an abstract description of the desired mapping from perception to action. Much of the detailed specification of how to behave in a particular situation is not given and may not be known. For example, a typical robotics task is to navigate from a starting point to a goal point. How will the robot recognize that it has arrived at the goal location?

      In these experiments, the mechanics/electronics of the robot (an animal, a robotic hand, a mobile robot, a human head, etc.) is built by the student. He locates the sensors and decides on the effectors. He next designs the robot's software, not by writing programs but by teaching the robot with examples. This is the standard "learning from a supervisor" approach, and the student is the supervisor. He creates all the sequences for the reactive state machine. It is as if a parent taught a child by directly re-wiring its brain based on positive and negative examples. Thus, the learning is done in software rather than in hardware. The set of sequences is incomplete, so the machine performs generalization automatically. Adding or removing rules, by the human supervisor or automatically/randomly, will change the behavior. The students can experiment with this a lot.

      What obstacles can the robot expect to encounter along the way? If a map is available, how do positions on the map correspond to the robot's sensor readings? Due to the open ended nature of robotics problems, a lengthy process of trial and error is often necessary to answer these types of questions clearly enough to develop a working algorithm. Students must repeatedly run their RSM-based programs on the robot and watch how well it performs. They must analyze the failures and determine how they emanate from the RSM. They must then modify the RSMs in order to correct the failures or improve the performance. During this debugging period, students learn a great deal about the interaction between the robot's sensors and its physical environment, and in turn must translate this knowledge back into their programs.

      The goal of this project is to design machines that will react to sound, temperature, touch, words (text) from speech recognition, simplified image recognition, light sensors, etc. Here, we want to design something like the famous Furby toy, but with real learning.

      Let us first discuss how Furby works. It can be observed that Furby's internal states are prespecified; its "learning" consists only of transiting to prespecified new states in the labyrinth of its possible "states of emotions and knowledge" (it sleeps, plays games, sings, is ill or healthy, etc.). Appropriate learning patterns (such as petting the head twice and then the back once) lead to the display of appropriate behaviors pre-stored in the toy's memory, which are hidden from the user only because some internal states have not been entered earlier. Although this is not real learning, it is perceived as astounding by people who observe this toy. Now, we want a toy with a similar sensor/actuator Pavlovian/Skinnerian learning model, but one that will build its "world model" with unlimited behaviors. Totally new states, with their respective input/output behaviors, will be created using the DUAL approach.
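      The Furby-style scheme can be caricatured in a few lines: every state and behavior is prespecified, and particular stimulus sequences merely unlock transitions to states the user has not seen yet. The states and stimuli below are invented for illustration; they are not Furby's actual state table:

```python
# A prespecified "emotions and knowledge" labyrinth: the toy only ever moves
# along transitions that were hard-coded at design time.

FURBY_LIKE = {
    ("asleep", "pet-head"):     "awake",
    ("awake",  "pet-head"):     "plays-game",
    ("awake",  "pet-back"):     "sings",
    ("plays-game", "pet-back"): "knows-new-game",  # looks like learning, but
}                                                  # was always in the table

def react(state, stimuli):
    for s in stimuli:
        state = FURBY_LIKE.get((state, s), state)  # unknown stimulus: no change
    return state

print(react("asleep", ["pet-head", "pet-head", "pet-back"]))  # knows-new-game
```

      The contrast with our goal is that here the state set is fixed forever, whereas the DUAL-induced machines are meant to create states that no designer wrote down.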

      In contrast to Furby, we will create a robot that has new internal states, created automatically and not known to the designer. The power of the constraint-based RSM descriptions induced from examples by DUAL will again be used to achieve this task. Instead of only transiting to "higher levels of consciousness" as Furby does, the robot will create its own space of internal states and transitions, modeling a simplified environment.

      Observe, however, that there is still no real world involved directly in the learning. The observation of the results of the robot's behavior is done by the student-supervisor, and it is he who, in a God-like manner (or a brain-surgeon manner), directly inserts knowledge into the software/silicon brain of the robot (the RSM and next the EPLD).

      In this world, there exist two kinds of learning: the learning in the DUAL software, which designs the robot's brain, and the learning of the student, who learns to invent good learning sequences. Observing how the robot reacts to the sequences, he invents new learning sequences that directly affect the robot's brain. He does this by adding/removing/modifying temporal equations in three windows of DUAL, called Input/Output Specification, Initial State, and Environmental Constraints. The student has the power to directly change the brain of the robot, as he wishes and according to what he observes in the real world, but there is no mechanism by which the world can tell the robot directly what is wrong with its actions. That would require evolution and an evolving robot, which is done in Experiment 3.



      Experiment 3: Evolvable Robot that learns from its mistakes in environment

      In this experiment the robot's mechanics is fixed, and there is no human supervisor; the real world, or the environment, now serves as the supervisor. This is the most complex of the learning modes. We will simulate a genetic algorithm, with its cycle of parent-chromosome crossover, mutation, fitness functions, and survival of the fittest. Having, however, only one robot, we will grow only the brains of the robots in the population (i.e., each reactive state machine plays the role of one chromosome), and the populations of these machines will be tested and evaluated on the real mechanics/hardware. Thus, a programmed robot's genotype is its reactive state machine. The programmed robot's phenotype is the physical robot with this brain in the real physical environment of robotic arms and boxes, which punishes or rewards it for its actions.

      At the beginning of its life the robot phenotype obtains a certain number of units of energy. With time this energy dissipates (the robot is getting old), and the robot is supposed to achieve some tasks (like bringing a box from place to place). For each task achieved, the robot phenotype (the part of its memory other than the RSM) is rewarded: a number is added to the value of the fitness function. For each task not achieved, or for a mistake (like bumping one hand into the other), the robot is penalized: a number is subtracted from the fitness function. When a certain given time passes (i.e., when the robot dies), the total fitness of the robot is known. If it is small, the robot's genotype is sent to eternal damnation. If it is high, the robot is allowed to reproduce, which means new reactive state machines will be created by reproducing the most successful genotype RSMs. Thus the corresponding child genotype, and next the child phenotype robot, are created. The child will be tested again in the same environment (reincarnation?). The supervisor is thus removed from the loop; the human provides only the formulation of the fitness function. It will be up to the student to define the fitness function and to develop experiments. He will evolve the robot as a series of RSM files, created from positive and negative feedback from the physical environment.
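      The evolutionary cycle described above can be sketched under invented encodings: here each chromosome is the output column of a fixed-structure reactive state machine (which action to take in each of 8 sensor situations), and fitness simply counts how many situations the environment rewards. In the real experiment, fitness would instead be accumulated by running the RSM on the physical arms:

```python
import random

# Genetic-algorithm cycle: fitness ranking, survival of the fittest,
# one-point crossover of parent genotypes, and per-gene mutation.

ACTIONS = ["LEFT", "RIGHT", "PICK", "RELEASE"]
REWARDED = ["LEFT", "LEFT", "PICK", "RIGHT",      # hypothetical stand-in for
            "RELEASE", "PICK", "RIGHT", "LEFT"]   # the environment's rewards

def fitness(chromosome):
    return sum(a == b for a, b in zip(chromosome, REWARDED))

def evolve(pop_size=30, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(ACTIONS) for _ in REWARDED] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # survival of the fittest
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)        # two parent genotypes
            cut = rng.randrange(1, len(REWARDED))  # one-point crossover
            child = [g if rng.random() > 0.1 else rng.choice(ACTIONS)
                     for g in a[:cut] + b[cut:]]   # per-gene mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

      Replacing the `fitness` function is exactly the students' design freedom described above: the rest of the loop never changes.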

      This project will be based on ready-made pre-built hardware consisting of two OWI robotic arms as two hands, and a modified OWI arm playing the role of a head. (This can be done with just minor modifications; for instance, the gripper will become the mouth, and a head sculpted from balsa wood will cover the rebuilt arm mechanism.) Thus the upper body of a human will be assembled, controlled by a total of 3 * 5 = 15 motors. There will also be several sensors on each hand and on the face: temperature, touch, infrared, and others. Their signals will be converted by A/D converters into multi-valued input signals of the reactive state machine.

      In this project, there will be an additional emphasis on image processing. In the first variant, the camera will face the human. The solid parameters of the human's face and body (as in the Avatar project that we are now working on for Intel as a Capstone Project) are transmitted to the robot, and the robot is supposed to reproduce the hand waving, head turning, or other actions of the human teacher. Its behaviors are evaluated by the fitness function. Like a little child, the robot will learn by mimicking the human. This time, however, the feedback from the supervisor comes through real-world interaction, not directly by "brain surgery".

      In the other variant, the camera will face the robot. The fast standard Hough transform will be used for feature extraction, because the robot's body has many straight lines and we created a very fast program for it in the past. The features extracted from the camera will serve as additional feedback for the robot, in addition to the signals from the sensors. Here the robot will learn by analyzing its own successes and failures.
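      The standard Hough transform mentioned above works by letting each edge pixel vote for every quantized (theta, rho) line passing through it; peaks in the vote accumulator correspond to straight lines such as the edges of the robot's arm segments. The toy image and coarse quantization below are illustrative assumptions, not our past program:

```python
import math
from collections import Counter

# Hough transform for lines in normal form: rho = x*cos(theta) + y*sin(theta).
# Each point votes for all quantized (theta, rho) pairs it lies on.

def hough_lines(points, thetas_deg=range(0, 180, 15)):
    acc = Counter()
    for x, y in points:
        for t in thetas_deg:
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1
    return acc

# Four pixels on the vertical line x = 3, plus one outlier.
points = [(3, 0), (3, 1), (3, 2), (3, 3), (7, 1)]
(theta, rho), votes = hough_lines(points).most_common(1)[0]
print(theta, rho, votes)  # -> 0 3 4  (theta = 0 degrees, rho = 3: the line x = 3)
```

      The outlier never accumulates enough votes to form a peak, which is why the transform is robust to the sensor noise expected in this setup.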

      Bibliography

      1. R.A. Brooks, ``A Robust Layered Control System for a Mobile Robot,'' IEEE Journal of Robotics and Automation, 2(1):14-23, March 1986.
      2. M. Domsch, ``MIT 6.270 LEGO Robot Design Competition,'' World Wide Web, URL is http://www.mit.edu/courses/6.270/home.html.
      3. S. Russell and P. Norvig, ``Artificial Intelligence: A Modern Approach,'' Prentice Hall, Englewood Cliffs, NJ, 1995.
      4. J. Iovine, ``Robots, Androids, and Animatrons: 12 Incredible Projects You Can Build,'' McGraw-Hill, 1998.
      5. H. Moravec, ``Robot: Mere Machine to Transcendent Mind,'' Oxford University Press, 1999.