Conveyor Belts

In total, we estimate the robot would be approximately 5-6 feet long, 4 feet wide, and 250 pounds. Its design would be based on a single-person pontoon boat; the pontoons would leave space for a set of conveyor belts at either end of the robot. The robot would use two conveyor belts made of stainless steel woven wire: the first to lift aquatic vegetation from the water's surface, and the second to move the collected material from the robot to the shore, where it can either be left to decompose properly or be deposited into a portable container.

 

Paddle Wheels and Batteries

The robot would use two paddle wheels, one on either side. These wheels would be able to run both forward and in reverse, allowing the robot to rotate in place around its own axis. A large 12-volt battery would be placed in the bottom of the craft, where it would also serve as ballast for our top-heavy design. We will also need to build an AC-powered docking station to charge the robot when it is not in use.
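Since we have not yet selected motor driver hardware, the following is only a rough Java sketch of how the two wheel commands could be mixed; the MotorController interface and the PaddleDrive class are placeholder names, not a settled design.

/** Hypothetical interface over whatever motor driver board we select. */
interface MotorController {
    void setPower(double power); // -1.0 full reverse .. +1.0 full forward
}

/** Tank-style drive mixing for the two paddle wheels. */
public class PaddleDrive {
    private final MotorController leftWheel;
    private final MotorController rightWheel;

    public PaddleDrive(MotorController leftWheel, MotorController rightWheel) {
        this.leftWheel = leftWheel;
        this.rightWheel = rightWheel;
    }

    /**
     * Mixes a forward command and a turn command into per-wheel powers.
     * Equal and opposite wheel powers (forward = 0, turn != 0) spin the
     * robot in place around its own axis.
     */
    public void drive(double forward, double turn) {
        leftWheel.setPower(clamp(forward + turn));
        rightWheel.setPower(clamp(forward - turn));
    }

    private static double clamp(double value) {
        return Math.max(-1.0, Math.min(1.0, value));
    }
}

Calling drive(0.0, 0.5), for example, would run the wheels in opposite directions and rotate the robot about its own axis, as described above.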

 

Control System

Our robot will use a Raspberry Pi or BeagleBone as the controller, with software written in Java. To locate each robot, we will use the Piksi, an RTK (real-time kinematic) GPS made by Swift Navigation that is accurate to within two inches. One GPS unit mounted on a base station will calculate ionospheric interference and transmit correction data to the GPS on each robot. Communication between the robots and the base station will use XBee Pro radios, which have limited bandwidth but a range of 1.5 miles, enough for all units working in a single body of water to share a central base station.
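The exact messages sent over the XBee link are still undecided. As a starting point, the sketch below shows one possible fixed-size position report a robot could send to the base station; the field layout and class name are our own assumptions, since the radios simply carry bytes over a serial connection, and we may end up forwarding the Piksi's own messages instead.

import java.nio.ByteBuffer;

/** One possible fixed-size position report sent over the XBee link (assumed layout). */
public class PositionReport {
    public final byte robotId;
    public final double latitude;   // degrees, from the Piksi RTK solution
    public final double longitude;  // degrees
    public final long timestampMs;  // when the fix was computed

    public PositionReport(byte robotId, double latitude, double longitude, long timestampMs) {
        this.robotId = robotId;
        this.latitude = latitude;
        this.longitude = longitude;
        this.timestampMs = timestampMs;
    }

    /** Packs the report into a small fixed-size frame suitable for a low-bandwidth radio. */
    public byte[] toBytes() {
        ByteBuffer buf = ByteBuffer.allocate(1 + 8 + 8 + 8);
        buf.put(robotId);
        buf.putDouble(latitude);
        buf.putDouble(longitude);
        buf.putLong(timestampMs);
        return buf.array();
    }

    /** Reverses toBytes() on the base-station side. */
    public static PositionReport fromBytes(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        return new PositionReport(buf.get(), buf.getDouble(), buf.getDouble(), buf.getLong());
    }
}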

 

Software Phases

We will develop the software in phases, so that the hardware can be tested early and more advanced navigation code can later be tested on an already functional robot.

Phase 1:

Our first software phase will be basic joystick control. The drive code will be similar to that of the FIRST robots built by our robotics program, except that it will monitor heading with a compass and compensate for drift.
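As an illustration of the drift compensation, the sketch below keeps a target heading and returns a proportional turn correction based on the compass error. The Compass interface and the gain value are placeholders until we choose a magnetometer and tune the controller on the water.

/** Sketch of Phase 1 drift compensation using a compass heading hold. */
public class HeadingHold {
    /** Hypothetical compass interface; returns heading in degrees, 0-360. */
    interface Compass {
        double headingDegrees();
    }

    private static final double KP = 0.01; // proportional gain, to be tuned in testing

    private final Compass compass;
    private double targetHeading;

    public HeadingHold(Compass compass) {
        this.compass = compass;
        this.targetHeading = compass.headingDegrees();
    }

    /** Call when the operator commands a turn, so the hold does not fight the joystick. */
    public void resetTarget() {
        targetHeading = compass.headingDegrees();
    }

    /** Turn correction to add to the joystick turn command (-1.0 .. 1.0). */
    public double correction() {
        double error = targetHeading - compass.headingDegrees();
        // Wrap the error into -180..180 so the robot always turns the short way.
        error = ((error + 540.0) % 360.0) - 180.0;
        return Math.max(-1.0, Math.min(1.0, KP * error));
    }
}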

Phase 2:

The next phase will consist of basic navigation around shorelines and the location of disposal sites. The layout of islands and other obstacles in a body of water will be recorded by manually driving the robot around each hazard and storing their locations in a grid. Each grid square will be one square foot and marked as either land or water. Two ultrasonic sensors, one facing downward under the robot and one facing forward, will alert it to additional hazards such as dogs or boats. At this stage, the robot will drive straight until it approaches a hazard, turn, and continue. While this will not produce intelligent or systematic sweeps, it will allow us to develop and test the GPS and radio input. We will also record the positions of possible disposal locations and use a capacitive proximity sensor to detect when disposal is necessary; the robot will approach these locations and orient itself using a compass.

Once we can build accurate map files and obtain accurate GPS coordinates, we can begin development of a more advanced autonomous mode. This will involve grouping the grid squares into sections and keeping track of the time since each section was last visited. The robot will then navigate to the desired section using a node graph and a simple path-finding algorithm (see the sketch below), perform a systematic sweep of the section, and repeat the cycle.

This approach would be effective if all areas of the lake required an equal amount of attention, which is not the case: wind, water flow, and sunlight levels all affect how much vegetation builds up in each area of a body of water. Therefore, we will also develop a desktop application that communicates with the robots and allows users to select areas of the lake for the robots to prioritize.
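To make the grid and path-finding ideas concrete, the sketch below shows a minimal occupancy grid of one-square-foot cells and a breadth-first search that finds a water-only route between two cells. The class and method names are placeholders, and a node-graph search over larger sections may replace this cell-level search in practice.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Sketch of the Phase 2 occupancy grid: one cell per square foot, land or water. */
public class LakeMap {
    public static final int WATER = 0;
    public static final int LAND = 1;

    private final int[][] cells; // cells[row][col]

    public LakeMap(int rows, int cols) {
        this.cells = new int[rows][cols];
    }

    /** Called while manually driving the robot around a hazard. */
    public void markLand(int row, int col) {
        cells[row][col] = LAND;
    }

    /** Shortest water-only path from start to goal, as a list of {row, col} cells. */
    public List<int[]> findPath(int[] start, int[] goal) {
        int rows = cells.length, cols = cells[0].length;
        int[][] cameFrom = new int[rows][cols]; // packed predecessor index, -1 = unvisited
        for (int[] rowArr : cameFrom) Arrays.fill(rowArr, -1);
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        queue.add(start);
        cameFrom[start[0]][start[1]] = start[0] * cols + start[1]; // marks the start visited
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            if (cur[0] == goal[0] && cur[1] == goal[1]) {
                return reconstruct(cameFrom, start, goal, cols);
            }
            for (int[] m : moves) {
                int r = cur[0] + m[0], c = cur[1] + m[1];
                if (r >= 0 && r < rows && c >= 0 && c < cols
                        && cells[r][c] == WATER && cameFrom[r][c] == -1) {
                    cameFrom[r][c] = cur[0] * cols + cur[1];
                    queue.add(new int[]{r, c});
                }
            }
        }
        return new ArrayList<>(); // no water route exists
    }

    /** Walks the predecessor table backwards from the goal to rebuild the route. */
    private List<int[]> reconstruct(int[][] cameFrom, int[] start, int[] goal, int cols) {
        List<int[]> path = new ArrayList<>();
        int[] cur = goal;
        while (!(cur[0] == start[0] && cur[1] == start[1])) {
            path.add(0, cur);
            int packed = cameFrom[cur[0]][cur[1]];
            cur = new int[]{packed / cols, packed % cols};
        }
        path.add(0, start);
        return path;
    }
}

Section grouping and the last-visited timestamps would sit on top of this map rather than being stored per cell, so the same grid can serve both hazard avoidance and sweep scheduling.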

Phase 3:

The final phase of development will involve collaboration between multiple robots operating in one area. The robots will communicate directly with each other, sharing a map of the area. Upon completing a task, a robot will find the area with the most urgent priority and proceed to clean it, updating the shared map so that other units do not start work on the same area.
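The claim-and-release bookkeeping could be as simple as the sketch below, which each robot would keep locally and update from broadcasts received over the radio link. The section numbering and the message names are assumptions, not a finished protocol.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of the shared-map claim bookkeeping each robot keeps during Phase 3. */
public class SharedSectionLedger {
    /** Maps section id -> id of the robot currently working that section. */
    private final Map<Integer, Integer> claims = new ConcurrentHashMap<>();

    /** Apply a "robot X is cleaning section S" broadcast from another unit. */
    public void recordClaim(int sectionId, int robotId) {
        claims.put(sectionId, robotId);
    }

    /** Apply a "robot X finished section S" broadcast. */
    public void recordRelease(int sectionId) {
        claims.remove(sectionId);
    }

    /**
     * Picks the highest-priority unclaimed section, or -1 if every section is
     * already being worked. prioritizedSections is assumed to be ordered from
     * most to least urgent according to the shared map.
     */
    public int nextSection(int[] prioritizedSections) {
        for (int sectionId : prioritizedSections) {
            if (!claims.containsKey(sectionId)) {
                return sectionId;
            }
        }
        return -1;
    }
}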