Eric B. Olsen
GeneSys-1 is the name I chose for my first complete autonomous robot, even before it was built. GeneSys stands for "General System," which described my vision of a powerful, advanced home-built amateur robot. While I have often longed to work from one of the commercially built robots that flourished in the mid 80s, such as Hero, Androbot, and Gemini, they were out of reach of my college-day finances. By the time I was making more money in the real world, many of those costly robots had disappeared. At any rate, my confidence level was higher, and I thought I could design my own version for much less.
My goals for GeneSys-1 were not very concrete at first. However, I wanted a mobile autonomous robot, one that could at least navigate around my house. From there, everything else would be icing. My basic philosophy was that the larger the number of sensors and peripherals, the more useful the robot would be.
From a mechanical viewpoint, I envisioned a robot that was professionally constructed, an apparatus that would stand the test of time. My hope was to construct a robot base from precision milled aluminum. In terms of electronics packaging, I envisioned professionally designed PCB boards. However, practicality prevailed, and due to time and resource limitations, GeneSys-1 is no more than a toy robot base supporting several layers of wire-wrapped electronics. In retrospect, this down-sizing of my expectations was a blessing. GeneSys-1 is no less complex, and having learned from the experience, one can always aim higher the next time around.
The robot base for GeneSys-1 consists of a surplus toy robot base originally used to operate a toy inflatable robot. It cost me $19.95 plus shipping. The base included two independently controlled wheels in a "turtle drive" configuration, a battery compartment, and a transistor-based H-bridge driver circuit to operate each motor. Also included was a photo-emitter and photo-receiver optic pair tied to one wheel through a notched gear, which allows speed detection of that wheel. The other wheel had provisions for an optic photo-detector, but it was not included.
I enhanced the robot base in two ways. First, I added a rear castor wheel using an RC airplane rear landing gear. The castor wheel is needed to stabilize the turtle wheel configuration; otherwise, the platform will rock back and forth. Initially I thought the castor should be free spinning, but later I decided that more control would be gained if I could electronically position it. Positioning was accomplished using a Futaba servo, since the RC model landing wheel allowed a simple mechanical connection.
The second modification was to equip the non-encoded wheel with an optic pair so that the robot could accurately measure the speed of both wheels; in the turtle drive configuration, this is important for accurately controlled turning. I bought an IR optic pair from Radio Shack and mounted it in the locations intended for the factory optics. The Radio Shack optics worked better than the factory-supplied version, which suffers from occasional glitches, probably due to light reflections in the optic arrangement.
There is a lot to gain by learning how to operate hobby servos in their intended mode, that is, as a servo. This allows easy positioning of sensors and wheels, and for the enthusiast looking to reach out, it allows simple construction of robot arms and the like. Operating Futaba hobby servos is relatively simple. The servo has three wires: two are for power, and the third is the position control signal. Both the power and control signal require 5 volts.
After trial and error with a pulse generator, I learned that the Futaba servo operates using pulse-width control. The required signal is a simple positive-going pulse having a width between 200 and 2150 us (microseconds). In my original design, I constructed a circuit that would accept an 8-bit value and output a corresponding pulse with a width in that range. My circuit was also designed to continually send this pulse out at a rate of 60 times per second. When a new 8-bit value is written to the circuit, the circuit adjusts the pulse width accordingly, and the servo moves to its new position.
My servo control circuit is implemented in a Xilinx FPGA (field programmable gate array). However, during testing, I realized the circuit is overkill. I now believe that control of the Futaba servos is easy enough without such dedicated logic capability. With this in mind, I've included a block diagram of a simple circuit for testing; see figure 2. While I haven't tried it, I believe that control of Futaba servos is within the capabilities of most hobby CPU interfaces. The following paragraphs summarize my findings.
The secret is that the Futaba servo remembers the last pulse "position" sent to it. Therefore, it is not overly important to continue sending the servo the same pulse; no pulse train is required. (However, this is not true if the servo is to provide holding strength!) The required pulse width is very small, but it is likely that a microprocessor can create this pulse using a simple timing loop (in assembly, of course). The timing loop controls the duration that a single output port bit is held high.
The goal is to create different pulse widths within the indicated range. By using multiple output bits, or one output bit that is multiplexed, any number of servos can be controlled. It may be necessary to produce a given pulse more than once to fully position the servo. Figure 3 summarizes the range of pulse generation required; any pulse produced within this range will position the servo between its extreme positions accordingly.
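To make the idea concrete, here is a minimal sketch in C of the mapping my FPGA circuit performs in hardware. The 200-2150 us range comes from my pulse-generator measurements; the simple linear mapping and the function name are illustrative assumptions, not the exact circuit behavior.

```c
#include <assert.h>

/* Map an 8-bit position value (0..255) to a servo pulse width in
   microseconds. The 200-2150 us range is from my measurements; a
   linear mapping is assumed. Intermediate math is done in a long so
   it also works on 16-bit-int embedded compilers. */
unsigned int servo_pulse_us(unsigned char value)
{
    /* 1950 us of span spread across the 255 steps above zero */
    return 200u + (unsigned int)(((unsigned long)value * 1950ul) / 255ul);
}
```

On an actual 8-bit target, the computed width would become the count for the timing loop (or a timer reload value) that holds the output port bit high.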
GeneSys-1 needs to know where it is going in order to avoid obstacles. Indeed, obstacle detection is one of the most daunting tasks in the construction of any robot. As in many high-end robots, like Hero and Androbot, obstacle detection is accomplished using sonar. Following such precedents, I decided to purchase two sonar sensors from Polaroid. This cost me $99 plus postage and handling.
The Polaroid sonar transducer and associated driver circuit employ some very interesting features. For one, the unit uses a single transducer, which means the sonar transmitter and receiver are one and the same. Also, the driver circuit employs a variable-gain amplifier for receiving return echoes; the gain of the receiver increases as time elapses from a transmission pulse (called a "ping"). This keeps the unit sensitive at long distances, since return echoes from increasingly distant sources become increasingly weak.
It took only one evening to get the Polaroid sensor interfaced to a microprocessor. However, my first interface was a bit problematic because I attempted to operate the Polaroid device at 6 volts; for this, I used opto-couplers to level-shift the signals to and from 5 volts. In the end, I did away with the need for opto-couplers by powering the Polaroid unit at 5 volts. This works fine as long as the 5-volt source can supply very short but relatively high peak currents. Wall packs limited to 1 amp or so will generally not suffice for correct operation, and this can lead to hours of frustration.
The basic operation of the Polaroid interface goes something like this: the robot processor initiates a transmission signal to the Polaroid driver board, starts a high-speed timer running, and then awaits a return signal from the driver board. Once this return signal is detected, the timer value is captured. Since GeneSys-1 uses an 8051-based microprocessor, the timer2 function works very nicely. This timer, if properly configured, will capture its contents automatically when an interrupt signal (driven by the Polaroid receiver circuit) arrives at the CPU. The timer capture process is automatic, but the process also triggers a software interrupt so that driver software can read the timer value and clear it for the next sonar transmission. Keep in mind that the timer must count in microseconds, since the elapsed time produced by the speed of sound at short distances is small.
Of course, a well-written software driver should support a time-out in the event that a return signal never arrives. Also, a conversion formula, which accounts for the speed of sound in air, is used to convert the time of travel to a usable distance. I was able to reduce all required equations to a simple integer calculation yielding inches as the resulting units of distance. Keep in mind that the measured time represents twice the obstacle distance, since the sound travels both "to" and "from" the obstacle.
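As a sketch of the kind of integer calculation involved (the constants here are my reconstruction, not necessarily the exact formula I used): sound travels roughly 1130 ft/s at room temperature, which works out to about 147.5 us of round-trip time per inch of range.

```c
#include <assert.h>

/* Convert a round-trip echo time (in microseconds) to range in inches.
   Assumes sound travels about 13560 in/s (1130 ft/s at room
   temperature), so each inch of range costs ~147.5 us round trip.
   The constants are approximate reconstructions for illustration. */
unsigned int sonar_inches(unsigned long echo_us)
{
    return (unsigned int)((echo_us * 10ul) / 1475ul);
}
```

Note the scale of the numbers: the 30-foot maximum range corresponds to a round trip of roughly 53 ms, while a 6-inch obstacle returns in under a millisecond, which is why the capture timer must count in microseconds.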
As impressed as I initially was, I realized that this preliminary interface was not going to work as well as I thought (even though I wrote some simple obstacle avoidance programs for my robot ... just for fun!). The simple interface limited the sonar to a minimum ranging distance of 18 inches, and unfortunately, 18 inches is a long distance indeed for indoor obstacle avoidance. A typical hallway is only 36" wide! Anything closer than 18" would not be reliably detected, and in fact, my software gave erroneous results if the obstacle was closer. The reason for this is explained by Polaroid: the sonar transmission burst is long enough that, at very short distances, the Polaroid unit is likely to receive its own transmission pulses as return echoes!
Don't worry! Polaroid has a solution, and it comes in the form of another control signal, called "blanking". By using the blanking signal, it is possible to shorten the transmission pulse, allowing time for the Polaroid unit to be placed into its receive mode. This allows the unit to listen for very quick return echoes. Keep in mind that the Polaroid unit must "turn itself around" from being a transmitter to being a receiver. During this period the driver circuitry must let the transducer settle down; this phenomenon is known as transducer "ringing".
After some delays in the project, and after a couple of hard-working nights, I finally succeeded in refining the interface to allow detection of obstacles as close as 6 inches! This gave the little beast the ability to range from 6 inches all the way to 30 feet in increments of one inch!
Shortly after that, I decided to implement the last control line included on the Polaroid driver module. This control line resets the return echo pulse so that "multiple echoes" can be detected, providing the ability to detect more than one obstacle with the same sonar ping. This feature is analogous to radar detecting more than one plane, each at a different altitude. It turns out that this feature is very important, especially if the robot is expected to see through doorway passages and the like! The reason is that sonar beams spread out, and they tend to bounce off the corners of door entrances, making it appear to the robot that no door exists at all! Therefore, in some instances, the first sonar echo may not be the echo of interest.
I decided to mount two sonar sensors so they could be pointed in any horizontal direction; thus, I mounted them on top of Futaba servos. I often wondered why this was not done more often! (One little robot did use the sonar-servo combination on the cover of Circuit Cellar very recently.) Hero, Androbot, and many other robots seemed severely handicapped without the ability to direct their sonar in a flexible way. This reduced these robots to the very clumsy mode of having to position their entire body in order to see in a given direction. It's like having a stiff neck and no ability to move your eyes! Furthermore, I found the Polaroid resolution to be quite high, such that angular increments of one degree can result in the detection of different surface features and different obstacles.
The culmination of my sonar work was realized when I wrote an application program to map out all obstacles in the robot's path, printing this "map" to a computer terminal. The program works much like a "radar screen". It's quite impressive, since both sonar units sweep in a circular fashion (under servo control), and all obstacles detected by the sonar sensors are printed on the terminal, preserving the scale of the distances between them.
In theory, creating a sonar map relies on triangulation, and with modular programming techniques, it's not as difficult as it might seem. In short, I brushed up a bit on my trigonometry and created some conversion functions so that I could position my servos using angular arguments, like 45 degrees. These positions are relative to the front of the robot. Given that the angular position of the servo is known, and that the range of a detected obstacle is known, the obstacle's position can be plotted on an X-Y coordinate map, also relative to the front of the robot. Once this was accomplished, I used a terminal emulation program (like Crosstalk) and standard ANSI commands to place asterisks at different coordinates of my computer screen, each coordinate the result of a sonar obstacle detection. A picture of my PC "sonar" screen is shown in figure 3.
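The angle-and-range-to-coordinates step can be sketched as follows, using an integer sine approximation in the spirit of an 8-bit platform. The sweep convention (0 degrees = full right, 90 = dead ahead, 180 = full left), the x1000 scaling, and the use of Bhaskara's sine approximation are my choices for illustration, not my original code.

```c
#include <assert.h>

/* Integer sine, scaled by 1000, valid for 0..180 degrees.
   Uses Bhaskara I's approximation (accurate to about 0.2%). */
long isin1000(long deg)
{
    long t = deg * (180 - deg);
    return (4000 * t) / (40500 - t);
}

/* Cosine derived from the sine table; handles the 90..180 quadrant. */
long icos1000(long deg)
{
    return (deg <= 90) ? isin1000(90 - deg) : -isin1000(deg - 90);
}

/* Plot a sonar hit: angle in degrees across the sweep (90 = straight
   ahead), range in inches. x is the lateral offset (positive right),
   y the distance ahead, both relative to the front of the robot. */
void sonar_to_xy(long angle_deg, long range_in, long *x, long *y)
{
    *x = (range_in * icos1000(angle_deg)) / 1000;
    *y = (range_in * isin1000(angle_deg)) / 1000;
}
```

The resulting x-y pairs only need to be scaled to character-cell coordinates before being sent to the terminal as ANSI cursor-position commands.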
It seemed that all was complete once the sonar was working. Not! How would the robot be able to navigate in a room? Even if the robot's environment was programmed into its memory, how would the robot know what direction it was facing? How would the robot know how far it had turned? The fact is, successful navigation requires more information. Another sensor is needed.
Knowing that more information was needed, I'd always pondered some sort of gyroscope system, and one of my friends suggested satellite-based global positioning, but both of these solutions seemed out of reach, and even if I could get them, they had their own problems... and still they're not complete solutions. What's needed is some sort of reliable compass to give directional orientation. Realizing this, I attempted to adapt a typical car-mounted compass with some optics to detect a given compass reading. I wasn't very successful. Later, I read about an electronic compass made from Hall-effect sensors. Unfortunately, I never got all of the needed components together. Then I discovered the Vector compass sold through mail order by Jameco. It seemed the perfect solution.
Two types of Vector compasses are offered by Jameco. The basic compass module is about $50, and that's the one I bought. The other is the same unit mounted on gimbals, which allows the compass module to maintain a level orientation. The gimbal mount is good to +/- 15 degrees. This unit sells for about $100.
In retrospect, I would have bought the gimbal-mounted compass, since I was not aware of one basic limitation of the Vector electronic compass: the compass must stay (perfectly) level in order to be accurate. In fact, for every degree that the compass is off level, the direction reading can be off by as much as 3 degrees. That's a pretty big error ratio! This being the case, my robot is going to have to work on level surfaces until I can afford an upgrade!
The Vector compass is a cool module. There are plenty of methods for interfacing it to a microprocessor, and the included documentation, being quite complete, describes many alternatives. The output from the compass is serial, and you must choose between a slave-mode and a master-mode interface. In slave mode, the CPU must clock the device; this may be the most flexible for hobby applications, and can likely be accomplished using Basic and simple I/O. In master mode, the Vector compass does the clocking. I chose master mode and developed a shift register-latch circuit to read the data output by the Vector compass. I also implemented an interrupt to the robot CPU to signal that the reading process is complete. (I chose master mode since, with the additional hardware, there is no CPU intervention other than to initiate acquisition and respond to interrupts.)
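In software terms, the shift-register half of the interface simply accumulates one data bit per compass clock until a full reading is latched. A sketch of that accumulation follows; the 16-bit width and MSB-first bit order are illustrative assumptions on my part, not the Vector documentation, so check the module's data sheet for the actual frame format.

```c
#include <assert.h>

/* Shift one serial data bit into an accumulating word, MSB first,
   as a shift-register stage would on each compass clock edge.
   The 16-bit width and bit order are assumptions for illustration. */
unsigned int shift_in_bit(unsigned int reg, int bit)
{
    return ((reg << 1) | (bit & 1)) & 0xFFFFu;
}
```

A slave-mode interface would call this once per CPU-generated clock pulse; in my master-mode design, the equivalent happens in hardware and the CPU only reads the latched result when the interrupt arrives.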
Once an interface is developed, the unit can deliver up to 5 direction measurements per second, and this is plenty for any home robot application. The unit is robust in that it can cancel out any "static" magnetic field that may be present from the robot itself. However, as one may suspect, the unit is susceptible to variable and stray magnetic effects from the environment, so this must be considered before relying on it in any given application.
I chose to mount the Vector compass on a servo. However, this is not required. The Vector compass will read all directions from a fixed orientation; there is no need to rotate the compass to use it. There is, however, a need to rotate the compass 180 degrees for calibration purposes, and I wanted the robot to "self calibrate" the compass. So far, calibration has not been a big issue; when I turn on the device, it seems accurate to within +/- 1 degree. So far, the servo has not been needed!
To summarize, the Vector compass provides critical direction information. I hope to find time to finalize GeneSys-1 by developing an application program that uses the compass in coordination with the sonar and wheel encoders. With this combination, I hope it will be possible to demonstrate robust navigation capabilities with GeneSys-1.
A robot that cannot communicate is merely a machine; it cannot interact with people. For an autonomous robot, the ideal form of communication is through an audible voice, because it is both a wireless form of communication, and of course, it is the method most preferred by humans. Thus, a speech synthesizer was an important peripheral and goal for GeneSys-1.
Which synthesizer should be used? My personal preference was for a speech synthesizer that sounded very machine-like, unlike a human voice (most people seem to prefer a human-like voice, but I thought a mechanical sound was most fitting for my mechanical creature!). My limited market search came up with only a few low-cost alternatives that were simple enough to implement. I discarded the growing number of "solid state" recorder chips, as they are very limited in the number of words that can be stored; furthermore, they are difficult to use, since they must first be recorded into and then re-addressed for playback in a fairly complex manner. Sound Blaster technology, while very impressive, seemed difficult to interface with, especially since I wasn't using a PC platform to drive the robot. At last, I found the "Digitalker" chip set offered through Jameco. I settled on this, and in fact, it had the mechanical voice I was looking for!
The Digitalker chip set is an old speech synthesis solution manufactured by National Semiconductor. The copyright on the chips is 1981! However, for a low-end 8-bit robot platform, the Digitalker solution is ideal. This speech synthesizer, in combination with an expansion vocabulary chip set (all sold through Jameco for $50 plus shipping and handling) supports a set of 275 words. These words are fixed and cannot be changed. Basically, the vocabulary consists of many words that can be used for technical, industrial, and home automation speech phrases. Complete sentence formation is possible, but is somewhat limited due to too few verbs and nouns. Also included are all of the numbers needed to allow the robot to speak any number up to 100 million! In general, I believed the chip set to be acceptable for GeneSys-1.
The construction of the Digitalker circuit is straightforward, as schematics are provided by Jameco when you order the parts. (A word list is also sent with each vocabulary chip set.) The speech system is basically a dedicated processor that directly addresses memory chips. In fact, the vocabulary memory chips are old-style EPROMs, probably 2764 devices. It may be possible to place all of the speech information in a single, larger EPROM device. An analog output circuit must also be provided, and the documentation details two alternatives. All parts for the analog section can be obtained at Radio Shack for about $15-$20.
The processor interface to the Digitalker is also straightforward. Basically, an 8-bit value is written to the Digitalker control port, and a one-bit select line is needed to address one of the two banks of vocabulary ROMs (if using the expansion vocabulary chip set, which I recommend). When writing to the control port, I used the write signal directly from the 8051-based processor. The Digitalker also provides a "word completed" signal. I tied this signal to an interrupt line to notify my program of the completion of each word. In this way, the software requires only an array of words for the phrase or sentence to be spoken. The software "kicks off" the first word, while an interrupt routine, activated by the word-complete signal, triggers each subsequent word until the end of the sentence is reached. This type of software driver allows the robot to speak complete sentences with very little overhead, and allows the robot to continue processing other tasks while talking! Alternatively, it is possible to "poll" for the word-complete signal; this decision is left to the designer.
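The driver structure just described can be sketched in C as follows. The port write is stubbed out with a test counter, and the word codes and end-of-phrase marker are invented for illustration; in the real system, word_done_isr() would be the interrupt handler tied to the Digitalker's word-completed line.

```c
#include <assert.h>
#include <stddef.h>

#define END_OF_PHRASE 0xFF  /* sentinel; assumes 0xFF is not a word code */

/* A phrase is an array of Digitalker word codes; these are made up. */
const unsigned char demo_phrase[] = { 42, 7, 19, END_OF_PHRASE };

const unsigned char *phrase = NULL;
unsigned char last_written = 0;
int words_spoken = 0;

/* Stub for the actual write to the Digitalker control port. */
void write_digitalker(unsigned char word_code)
{
    last_written = word_code;
    words_spoken++;
}

/* Kick off the first word; the interrupt routine does the rest. */
void speak_phrase(const unsigned char *words)
{
    phrase = words;
    write_digitalker(*phrase);
}

/* Interrupt handler for the "word completed" signal: start the next
   word, or fall silent at the end of the phrase. */
void word_done_isr(void)
{
    phrase++;
    if (*phrase != END_OF_PHRASE)
        write_digitalker(*phrase);
}
```

Because all the sequencing lives in the interrupt handler, the main program is free to go about its other tasks while the sentence plays out.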
Now that GeneSys-1 has speech capability, what about hearing? Well, as most of us know, this is far more difficult. However, it need not be out of reach. The fact is, many companies are presently offering advanced speech recognition chips with all sorts of capabilities. Unfortunately, it can be a big project to get them going. Luckily, I had bought an old chip from Radio Shack many years ago that caught my interest: a speaker-independent voice recognition chip called the VCP200. I had never even opened the package! Well, my choice was easy. I would consider using the device for GeneSys-1; therefore, I built a prototype and evaluated the part.
The VCP200 has provisions to recognize up to five words in a speaker-independent fashion. This means that anybody can talk to the chip. The chip has the ability to recognize the following words: "Forward", "Backward", "Stop", "Left Turn" and "Turn Right". Another mode allows the chip to detect two words, "Yes" and "No", with a third signal indicating "unknown word". Experimenting with the chip was fairly easy; I built the circuit according to the documentation, and it worked the first time! However, the performance was not so impressive. When the VCP200 was placed in the mode to recognize the five words (since these words were very applicable to robot commands), I found that it was necessary to speak directly into the microphone. When attempting to speak even one foot away, the circuit tended to misrecognize about 50% of my commands. The documentation admitted this limitation and gave all sorts of hints, the main one being to speak directly into the microphone. Unfortunately, a robot is not very interactive if you have to kiss it every time you want to speak to it!
When placing the VCP200 in the "Yes" and "No" mode, far better results were obtained. I was able to speak from as far away as 10 feet, with the VCP200 recognizing my commands about 90% of the time. When it did not recognize correctly, the VCP200 asserted the "unknown word" signal about 90% of the time. I began to think that this might really work! But wait! What use is the ability to recognize only two words? After a few days of pondering, I came to a significant conclusion: the robot could ask me questions, and I could respond either affirmatively or negatively.
It turns out that only a few pertinent questions need be asked by the robot for quite impressive communication to be achieved. Eureka! I had stumbled onto a technique by which a robot can communicate effectively through the spoken word! As a final and very important note, the fact that I've structured the robot to ask all the questions solves another significant problem with the voice recognition chip: the robot enables its hearing (listens) only when it asks a question. At all other times, the VCP200 recognition chip is disabled. This is truly important, as the VCP200 chip can falsely recognize all kinds of environmental noise from the outside world. This is undoubtedly true for many types of voice recognition systems.
The construction and operation of GeneSys-1 took far more time than implied in the above sections. This last section will cover some of the less exciting, but equally important issues that are essential for the construction of a complex robot pet!
Before I talk about my processor choice, I'd like to describe my development language preferences. I prefer the C language, which offers much to the developer. I describe C as the "high level, low level" language: while C allows the developer to write high-level, structured code, it equally supports low-level, direct I/O. C is also fast! It can approach the speed of assembly language if properly written. C is modular and highly portable; routines written on today's robot platform can be used on tomorrow's platform. As an added bonus, most modern C compilers allow interrupt handlers to be written directly in C! This significantly speeds the development of advanced peripheral interfaces. GeneSys-1 greatly benefited from such advantages!
The drawback of C is that it is a compiled language. Not that compiling irritates me, just the cost of compilers! Thus, the processor choice for my robot was partially dictated by the availability of a C compiler. I also desired a good embedded controller full of timers, serial ports, and the like. My choice was the Dallas 80C320 "speed-it" microcontroller. This processor was used by the company I work for; the compiler was thus available, and I knew it well.
The Dallas 320, as we call it, is a third-party, high-performance 8032 derivative. The part can run about 6 times faster than the original 8032 from Intel. It also includes an internal watchdog timer, an extra serial port, and 4 additional external interrupts ... a big advantage for my robot project! The 8032 architecture also sports some great features, such as a full 64K address space for code and a separate 64K address space for RAM (and memory-mapped devices), double the address space of most other 8-bit controllers. The part can operate at up to 25 MHz, includes bit addressing, and provides three timers.
As with all of us, we quickly outgrow our means. In the future, I will be using only 16/32 bit controllers, and I hope to migrate to the C++ language as this offers enormous advantages, even over C. Until then, my initial choices have allowed me rapid development for my first (working) robot. But truthfully, the GeneSys-1 robot will never really be completed! Are they ever?
One of the critical aspects of any complex robot design is power distribution and regulation. The power demands of 4 servos, two sonar transducers, two motorized wheels, a high-performance CPU, and, needless to say, all the other support circuitry, lights, bells and whistles can be dazzling! Single battery sources with simple (linear) regulator designs don't cut it. Don't even try!
Luckily, todays modern electronics smorgasbord offers some very good solutions to these problems. I highly recommend on-board switching regulators as they offer plenty of instant power, good regulation, and great DC to DC power/voltage conversion for all of those weird voltages that you will undoubtedly need.
In the GeneSys-1 design, there are actually two battery supplies. One is used for all motorized systems, i.e., the servos and the wheel motors. This supply consists of 6 Ni-Cad D cells, mounted in the original battery compartment of the toy robot base. Six volts is tapped from four of the D cells to power the wheels. For the servos, a DC-DC switching regulator circuit, powered from all 6 batteries, delivers 5 volts at up to 2 amps. The DC-DC circuit is actually a module from Linear Technology; it is a sample board demonstrating one of their highly efficient chips. I would not have constructed it on my own, but it does work well. Grab what you can!
For the sonar circuits, CPU electronics, and other 5-volt demands, I chose Linear Technology's LT1076 switching regulator. This part is very similar to National Semiconductor's line of simple-to-use DC-DC switching regulators. The regulator circuit consists of about 5 parts and is powered from a 9.6-volt RC Ni-Cad battery pack. This regulator works well and is far more efficient than typical linear regulators, such as the classic 7805 (which I originally used, but quickly outgrew).
Before I solved my power distribution problems, it was common to watch servos go into unstable oscillations, or see the CPU continually reset; most of the time, stuff just refused to work!
I never thought I would recommend something so trivial, yet so important: kill switches. In the development of GeneSys, it is imperative to isolate the main power supply lines using well-placed toggle switches. I can't remember how many times GeneSys-1 lurched unexpectedly forward, sending it on a doomsday path off my workbench! After several of these episodes, I took the time to wire in three toggle switches: one for the main electronics, one for the main wheels, and one for the power-hungry servos. Keep the power to the wheels off during non-locomotive experimentation!
As in the movies, robots are not very fond of being disassembled! However, I can't count how many times I've disassembled GeneSys-1. To this end, I highly recommend spending the time and money it takes to add connectors to any subsystem that may need to be disassembled. Even systems you believe will never be disassembled may need to be when you discover a special enhancement or new invention! I've spent many hours re-wiring GeneSys-1 because I failed to plan for such provisions. Just a tip!
This section describes some of the remaining tasks in the development and construction of GeneSys-1.
GeneSys-1 still lacks several basic sensors, especially in the area of "touch". For example, it will be important to add bumper switches to the perimeter of the robot base. To complement this type of mechanical touch, I am considering the addition of optic whiskers. An optic whisker is an electronic touch sensor that detects close-by obstacles by looking for reflected light. Typically, this type of sensor operates up to about 10" or so, depending on the reflectivity of the obstacle.
Using only one input port bit, a relatively simple method for implementing optic whiskers is possible. By using commonly available IR receiver modules (even Radio Shack sells a version), and by driving an IR LED at the correct frequency (matched to the IR receiver, typically around 40 kHz), it is possible to detect faint reflections of the IR source. The receiver output will go active in the presence of reflected IR light. Because this system uses modulated light, it is essentially immune to stray light from the environment, which is very important. Photo 6 shows my proposed mounting and arrangement of the IR receiver and IR LED.
Previous testing has shown that the IR receiver output goes active when the reflected IR light is relatively strong; in this case the output stays continuously active. More importantly, the output becomes intermittently active as the received light grows weaker. In other words, the IR receiver output pulses active but is not stable. I found the output of the receiver to be more active when the faint light source is relatively strong, and less active when the light source is very faint. Thus, a more complex version of the optic whisker is a circuit that measures how much of the time the IR receiver's output is active versus inactive over a given sample period. With this measure, the system should be able to feel how close a given object is.
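The proposed measurement reduces to counting active reads over a sample window. A sketch follows; the function name, the array standing in for repeated reads of the input port bit, and the percentage scale are all my illustrative choices.

```c
#include <assert.h>

/* Estimate obstacle proximity from the IR receiver output by counting
   how often it reads active over a fixed sample window. samples[]
   stands in for repeated reads of the input port bit (nonzero =
   active); the result is the active duty as a percentage, 0..100. */
unsigned int whisker_activity_pct(const int *samples, unsigned int count)
{
    unsigned int active = 0, i;
    for (i = 0; i < count; i++)
        if (samples[i])
            active++;
    return (active * 100u) / count;
}
```

A reading near 100% would mean a strong reflection (a close or highly reflective obstacle), while a low but nonzero percentage would suggest something near the edge of the whisker's reach.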
Under CPU control, I propose that each IR receiver module and IR LED source be enabled in sequence to provide a scanning process. This will lower power requirements and also shield each IR receiver-LED pair from false readings from the others, hence eliminating cross-talk. For a more ambitious project, the IR LED could be mounted on a servo to give some directional capability, but I will not go this far with GeneSys-1 unless there is a strong indication to do so.
The development of application software is a difficult and time-consuming task, and for GeneSys-1, there are plenty of applications I have envisioned. However, I have defined two primary goals. The first is to develop a navigation program that allows the robot to successfully move from room to room in my house. This capability implies that a map of my house is programmed into the robot. Perhaps the robot would be capable of recognizing whether a door was closed, and possibly even of asking someone to open it. Some type of simple obstacle avoidance capability would also be important (as I have dogs!).
The second application program goal is to develop a program that maps out a room, forming an internal representation of the environment on its own. This procedure would be part of a learning program, and could be used as an add-on to the first application program. Obviously, there are limitations to such program development as the CPU is only 8-bit, having limited code and data memory.
I have identified a very important component for autonomous robot development, especially in the latter stages, that is, a radio link capability. While this choice may seem obvious and already popular, I envision a two-way terminal link-up to aid in debugging and remote monitoring of complex application programs. In the end, the designer must let his creature go! How else is one to know what decision paths the robot takes unless such capability is added?
The more you build, the more there is to do! That is the conundrum of GeneSys. I surmise this is the case with any home robot project. Even so, I hope to reach a definite end to the project, as I am eager to plan and start GeneSys-2.
GeneSys-1 has taught me much about robot development, and of course, there is an infinite amount to learn. If I ignore the final, but most important tasks, I will sacrifice the development of GeneSys-2, since the essential lessons will not be learned. Hopefully, I will resist my worst habit of moving on to another project before the previous one is complete. Only time will tell!
Eric Olsen is an Electrical Engineer employed as an engineering manager in the gaming equipment manufacturing industry. He holds a BSEE from the University of Nevada, Las Vegas. He enjoys many areas of electrical engineering, especially those of robotics, artificial intelligence, and machine vision. He hopes someday to witness the evolution of pseudo-intelligent machines that can interact with humans and benefit society as a whole. You can reach him via E-mail at email@example.com.