As part of our project, we set out to streamline the testing process by harnessing drones. Our primary objective was to establish a seamless, automated workflow that would allow drones to navigate fields autonomously, conduct real-time crop scans, and swiftly detect the fluorescence given off by the protein. We wanted this system to be not only efficient but also cost-effective.

Our project had clear advantages: it saved time compared to sending samples to a lab, reduced energy usage versus manual sample collection, and made the process less labor-intensive.

However, there were hurdles to overcome. Initially, there was a need for a significant upfront investment in the drones. Moreover, making sure that the system was easy to use, especially for those unfamiliar with drones, was a priority.

To overcome these hurdles, we built multiple prototypes, each more cost-effective than the last. At the same time, we wrote the software from the ground up, making it both cost-efficient and tailored to our specific needs. We also positioned our project as a service, sparing farmers the need for extensive effort on their part.

Throughout the project, we worked on three distinct prototypes, complemented by the development of an online portal named "FungiLink." This portal serves as a valuable resource, giving farmers a deeper understanding of their crop's health. Every drone prototype we engineered required a consistent set of features.

It's important to note that farmers will still be responsible for applying the protein construct to their crops using their existing machinery, such as the equipment they already use for fertilizer distribution.

Our initial prototype was constructed using various DJI products, most notably the DJI Matrice 300 RTK drone, complemented by additional equipment. Below, you'll find a table listing all the equipment used for this prototype.

The DJI Matrice 300 RTK drone stood out as an excellent choice for our Fusarium infection detection project. Its flexibility in customizing payloads, impressive battery life of up to 2.5 hours, and the capability to establish predefined flight paths made it a versatile tool. Using its dual-payload system, we outfitted the drone with the components needed to detect Fusarium infections effectively. The ability to set flight paths ensured efficient and systematic coverage of the fields, greatly enhancing our detection capabilities. We automated the flight using DJI's Waypoints 2.0 feature, allowing the drone to take off, navigate the specified path, and land autonomously. To set up automated flight, follow these steps:

  1. Ensure you're in P mode, which is DJI's default flying mode.
  2. Start DJI GO, connect your mobile device to the remote controller, and power on both the controller and the drone.
  3. Tap the intelligent flight modes icon on the left side of DJI GO.
  4. Tap the Waypoints icon to set up your waypoints.
  5. Scroll the map to your desired location for the mission.
  6. Add waypoints to the map following the app's instructions.

For detection, we used a multispectral camera and a custom light attachment. We placed a combination of colored light filters in front of the light source to serve as an excitation source for mScarlet. As the drone moved, the light excited mScarlet, and the resulting fluorescence was captured by the MicaSense Altum multispectral camera. Multispectral cameras work by capturing images at specific wavelengths of light; in our case, we set the capture at 668 nm. You can see sample scan results below:

Figure 1. The Red, Green, and Blue channels, similar to what our eyes perceive. Please note that these images depict mScarlet in E. coli, not Fusarium.
Figure 2. The same image captured at 668 nm, with white portions indicating fluorescence, where Fusarium should be. Please note that these images depict mScarlet in E. coli, not Fusarium.

While this prototype demonstrated functionality, it came with certain limitations. It was not a cost-effective solution, primarily due to the high cost of flagship drones. Additionally, integrating AI for automated detection was challenging and required expertise with DJI systems, so the operator needed to monitor the controller for detections manually. While it worked as a proof of concept, improvements were necessary; for instance, obtaining a custom multispectral camera optimized for mScarlet would be a step forward. Please be aware that this prototype serves only as a proof of concept, as we were unable to conduct real-world Fusarium testing in crop fields. However, we did perform independent testing of flight automation and detection within a Level 2 biosafety lab.

Table 1. Cost breakdown for Prototype I. Prices and conversion rates are as of September 19, 2023, and do not include taxes. We did not purchase any DJI equipment; all DJI equipment was borrowed from Dr. Church's lab.
Part | Cost | Purchase
Matrice 300 RTK | USD $12,500.00 | —
Matrice 300 Series DJI Smart Controller Enterprise | USD $960.00 | —
DJI D-RTK 2 High Precision GNSS Mobile Station | USD $2,900.00 | —
MicaSense Altum | CAD $14,699.00 | —
Z15 Spotlight | CAD $3,625.00 | —
Matrice 300 Series Dual Gimbal Connector | USD $175.00 | —
Color Gel Filter | CAD $35.99 | —
Total | CAD $40,667.75 |

After completing Prototype I, we realized that we could design a prototype better suited to our needs. For this iteration, our goals were clear: implement AI, reduce costs, and make the design more accessible as an open-source solution. To achieve this, we used a DJI Phantom 4 drone and a Raspberry Pi, and incorporated 3D printing into the design.

Initially, we attempted to work with a multispectral camera tailored for this drone, but we encountered challenges in obtaining satisfactory results. To overcome this issue, we transitioned to using a standard GoPro-like camera paired with the Raspberry Pi, incorporating a built-in AI component. The setup included the following components:

  • A rear-mounted light attachment on the drone at an angle.
  • A light source from Amazon, equipped with the same light filter papers as in the previous prototype.
  • The Raspberry Pi mounted on top of the drone.
  • The camera positioned at the front of the drone for capturing images.
  • All necessary wiring to connect the camera, integrated into the drone, with power for the Raspberry Pi sourced from the drone's battery.

The process for this prototype is as follows:

To set the flight path, we used the DJI GO app to create a grid pattern. Here's a general overview of the steps to automate a flight path:

  1. Prepare the Equipment:
    1. Ensure your DJI Phantom 4 is fully charged and updated with the latest firmware.
    2. Make sure your remote controller is also charged and connected to the drone.
  2. Connect a Device:
    1. Connect your smartphone or tablet to the remote controller via a USB cable or wireless connection.
  3. Set Up Flight Plan:
    1. Create a new mission or flight plan.
  4. Define Waypoints:
    1. Specify the altitude (in this case, the lowest was 16 ft), heading, and other parameters for each waypoint.
  5. Review and Confirm:
    1. Double-check your flight plan to ensure it's safe and meets your requirements.
    2. Ensure your drone's return-to-home (RTH) settings are properly configured in case of any issues during the automated flight.
  6. Execute the Mission:
    1. Place the drone at the starting point of the mission.
    2. Start the mission within the app, and the drone will follow the predefined flight path autonomously.
  7. Monitor the Flight:
    1. Keep an eye on the drone and the app throughout the mission to ensure everything is going smoothly.
    2. Be ready to intervene if necessary.
  8. Complete the Mission:
    1. Once the mission is completed, the drone may return to the home point automatically, depending on your settings.
  9. Safety and Legal Considerations:
    1. Always comply with local regulations and laws regarding drone flight.
    2. Ensure you have proper permissions or permits if required.
    3. Monitor weather conditions and avoid flying in adverse weather.

As the drone executes its flight path, the Raspberry Pi runs code responsible for capturing images with the action camera once the drone has moved more than 2 meters from its initial starting location. For precision, we attached a GPS module to the Raspberry Pi, giving us GPS coordinates accurate to roughly 11.11 centimeters (six decimal places of latitude and longitude).

The code followed these steps (a minimal sketch follows the list):

  1. Continuously monitor the current location.
  2. If the drone moves beyond 2 meters from the starting point, proceed to step 3.
  3. Repeat steps 4 to 7 until the drone returns to within 2 meters of the starting point.
  4. Capture an image and analyze it using the built-in AI component.
  5. If the AI detects fungi, record the GPS coordinates.
  6. Keep track of any GPS coordinates where the AI encountered errors.
  7. Check if the current GPS coordinates are back within 2 meters of the starting point.
  8. Record the coordinates in a CSV file.
  9. Terminate
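
Below is a minimal Python sketch of this capture loop, under several assumptions: the action camera is exposed to the Raspberry Pi as a standard video device, read_gps() is a hypothetical helper wrapping the USB GPS module, and scan() is the OpenCV colour-detection routine described later, adapted here to accept a frame and return 1 (detection), 0 (nothing), or -1 (error). It illustrates the logic above rather than reproducing our exact flight code.

```python
import csv
import math
import time

import cv2

from gps_reader import read_gps   # hypothetical helper returning (lat, lon) from the USB GPS
from detection import scan        # colour-detection routine (assumed to accept a frame here)


def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres between two lat/lon points (haversine)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def survey(csv_path="detections.csv", threshold_m=2.0):
    cam = cv2.VideoCapture(0)            # action camera assumed to appear as a standard webcam
    home_lat, home_lon = read_gps()      # starting location
    detections, errors = [], []

    # Steps 1-2: wait until the drone has moved more than 2 m from its starting point.
    while distance_m(home_lat, home_lon, *read_gps()) <= threshold_m:
        time.sleep(0.5)

    # Steps 3-7: capture and analyse frames until the drone is back within 2 m of home.
    while True:
        lat, lon = read_gps()
        ok, frame = cam.read()
        if not ok:
            errors.append((lat, lon))
        else:
            result = scan(frame)
            if result == 1:
                detections.append((lat, lon))
            elif result == -1:
                errors.append((lat, lon))
        if distance_m(home_lat, home_lon, lat, lon) <= threshold_m:
            break
        time.sleep(1.0)

    # Steps 8-9: write the recorded coordinates to a CSV file and terminate.
    cam.release()
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["latitude", "longitude", "status"])
        writer.writerows([(la, lo, "detected") for la, lo in detections])
        writer.writerows([(la, lo, "error") for la, lo in errors])
```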

Once the drone returns to its initial coordinates, it writes the collected GPS coordinates into a CSV file. This file is subsequently uploaded to our FungiLink portal, enabling farmers to access and review their results. Additional details about FungiLink can be found on the dedicated FungiLink page.

It's worth noting that all the code for this prototype was written in Python, chosen for its simplicity and accessibility. Image processing was accomplished using OpenCV, an open-source image processing library that seamlessly integrated with our project. The results obtained from the image processing code are detailed below.

We used Python and OpenCV to implement the color detection methods. The scan() function first reads an image. Then, we determine the lower and upper range of the color we hope to detect from the image. We employ cv2.inRange() to create a binary mask for this color range. Next, a 5x5 rectangular kernel is created using np.ones(), and we use cv2.dilate() to expand the pink_mask. Dilation is a morphological operation that expands the white regions in the binary mask. This can help connect nearby pink regions that might not have been fully captured by the initial mask. (Robert Fisher et al.)

Following that, we use cv2.bitwise_and() to apply the dilated mask and extract the pink region we intend to retain. We then use cv2.findContours() to locate contours in the mask and outline them. The returned value, cnts, holds the contours detected in the dilated mask. If a contour is found, we draw it and the scan() function returns 1; if no contour is found, it returns 0; and if an error occurs, it returns -1.
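
The sketch below reconstructs this scan() function from the description above. The HSV colour bounds are illustrative placeholders rather than our calibrated values, and the contour unpacking assumes OpenCV 4.

```python
import cv2
import numpy as np


def scan(image_path):
    """Colour-based fluorescence detection.
    Returns 1 if a pink/mScarlet-like region is found, 0 if none, -1 on error."""
    try:
        image = cv2.imread(image_path)
        if image is None:
            return -1
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

        # Lower and upper range of the colour we hope to detect (assumed values).
        lower_pink = np.array([140, 60, 60])
        upper_pink = np.array([175, 255, 255])
        pink_mask = cv2.inRange(hsv, lower_pink, upper_pink)

        # 5x5 rectangular kernel; dilation expands the white regions of the mask
        # so that nearby pink areas merge into one region.
        kernel = np.ones((5, 5), np.uint8)
        pink_mask = cv2.dilate(pink_mask, kernel, iterations=1)

        # Keep only the pink region of the original image (useful for visual checks).
        pink_region = cv2.bitwise_and(image, image, mask=pink_mask)

        # Locate and outline contours in the dilated mask (OpenCV 4 return signature).
        cnts, _ = cv2.findContours(pink_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if cnts:
            cv2.drawContours(image, cnts, -1, (0, 255, 0), 2)
            return 1
        return 0
    except Exception:
        return -1
```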

Figure 3. The Red, Green, and Blue channels, similar to what our eyes perceive. Please note that these images depict mScarlet in E. coli, not Fusarium.
Figure 4. The same image at 668 nm, with white portions indicating fluorescence, where Fusarium should be. Please note that these images depict mScarlet in E. coli, not Fusarium.

Power was going to be an issue for our drone, since adding extra batteries would have made it heavier. To overcome this problem, we used a pre-modified drone that had a battery eliminator circuit, which made it possible to power the Raspberry Pi directly from the drone's main battery.

We had to create some hardware to attach the camera and light to the drone. For the light, we used an adjustable headlamp from Amazon that ran on AAA batteries, and we taped a filter over it to turn its white output into a green hue. To attach the light, we designed custom ledges. Similarly, we used a ledge to attach the camera to the drone, allowing it to be angled downward at 45 degrees from the drone's horizon. You can find all our hardware designs on our GitHub profile (Here).

While this prototype was a notable improvement, with a 92.97% price decrease over the previous one, there was still room for enhancement. We used an action camera for image processing, but a multispectral camera would have performed better. We also lacked enough data to properly train a robust AI model. More tests could have helped, but finding an isolated field for introducing Fusarium was a challenge, and we chose to follow iGEM rules and local laws.

One way to improve results would have been a custom multispectral camera, which would have simplified AI training. Unfortunately, when we inquired about custom multispectral cameras, we received a quote of USD $9,910.00, which was beyond our budget. Another area for improvement was the number of external hardware modifications required on the Phantom 4, which was unavoidable without a custom-made drone. In addition, the camera attachment was fixed at a 45-degree angle, unlike the adjustable light attachment, limiting its versatility.

Table 2. Cost breakdown for Prototype II. Prices and conversion rates are as of September 19, 2023, and do not include taxes. We did not purchase any DJI equipment; all DJI equipment was borrowed from Dr. Church's lab.
Part | Cost | Purchase
Phantom 4 | USD $1,399.00 | No longer sold
Action Camera | CAD $89.99 | —
Headlamp | CAD $79.95 | —
Raspberry Pi | CAD $178.44 | —
Color Gel Filter | CAD $35.99 | —
Raspberry Pi Touchscreen | CAD $26.21 | —
USB GPS | CAD $16.99 | —
3D Printed Material | CAD ~$5.00 | Free campus print
Code | CAD $0.00 | Self-developed
Total | CAD $2,321.00 |

Prototype III was intended to be a custom-designed drone tailored specifically for our project's needs. Our plan was to purchase a drone kit from a supplier like Drone Dojo and then add the necessary components for our detection purposes. Regrettably, due to budget constraints, we were unable to proceed with this plan. Nevertheless, we began working on the code that would have been used if we had acquired the drone. This code would have incorporated the same AI from Prototype II but with modifications to better suit our requirements. All of the code we intended to use was written in Python and can be found on our GitLab page.

Ideally, even for this prototype, a multispectral camera would have been preferable since it simplifies the task of distinguishing between white and black, an essential aspect of our project. The work we completed serves as a foundational step for future teams that may have the resources to continue this part of our project.

The program designed to connect to the drone was intended to perform five key functions:

  1. Connect with Drone: Establish communication with the drone through User Datagram Protocol (UDP) to issue commands.
  2. Take-off to a Specified Altitude: Initiate take-off to a predetermined altitude.
  3. Fly in a Lawn Mower Pattern: Implement a lawnmower-style flight pattern across the designated area.
  4. Detect Fluorescence and Record Coordinates: Capture fluorescence detection coordinates and store them in a comma-separated value (CSV) file.
  5. Return to Starting Location and Land: Guide the drone back to its initial position and execute a safe landing.

The program relied on the DroneKit library for drone development, integrated with our code using Python. It's important to note that Python 2.7 was required, as DroneKit was compatible with this version of Python.

Before discussing the code responsible for flight and fluorescence detection, it's crucial to understand how coordinates were created and stored. This structure was fundamental for defining the drone's flight path and the visual representation for farmers accessing our web portal. Our program incorporated two classes: the Waypoint class and the Coordinate class. The Waypoint class handled the creation and storage of coordinates for the drone's path, following a structure inspired by waypoint files commonly used in aviation. The Waypoint file accommodated the four corner coordinates of the field, forming breakpoints for the drone's path. The path was stored as a list of Coordinate objects, with each Coordinate object containing latitude and longitude information.
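
A minimal sketch of these two classes is shown below. It assumes the field is an axis-aligned rectangle and that the number of rows and points per row are tunable parameters; the actual class definitions on our GitLab may differ in detail.

```python
class Coordinate:
    """A single latitude/longitude point on the drone's path."""

    def __init__(self, lat, lon):
        self.lat = lat
        self.lon = lon


class Waypoint:
    """Builds a lawnmower path from the four corner Coordinates of the field.

    The corners are assumed to form an axis-aligned rectangle; `rows` and `cols`
    (how finely the field is sampled) are assumed parameters for this sketch.
    """

    def __init__(self, top_left, top_right, bottom_left, bottom_right):
        self.top_left = top_left
        self.top_right = top_right
        self.bottom_left = bottom_left
        self.bottom_right = bottom_right

    def path(self, rows=10, cols=10):
        """Return a list of Coordinates sweeping row by row, alternating direction."""
        lat_step = (self.bottom_left.lat - self.top_left.lat) / (rows - 1)
        lon_step = (self.top_right.lon - self.top_left.lon) / (cols - 1)
        points = []
        for r in range(rows):
            lat = self.top_left.lat + r * lat_step
            lons = [self.top_left.lon + c * lon_step for c in range(cols)]
            if r % 2 == 1:            # reverse every other row to form the lawnmower pattern
                lons.reverse()
            points.extend(Coordinate(lat, lon) for lon in lons)
        return points
```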

The first thing the program must do is connect to the vehicle, our drone. The program connects to the drone through the User Datagram Protocol (UDP), a communication protocol that allows our program to talk to the drone and give it commands. Since the Raspberry Pi is connected to the drone via USB, the connection string is "/dev/ttyAMA0"; if the connection is made over the network, port "14550" is used instead. Once connected, we can control the drone from our program. A dedicated function is responsible for take-off: it takes an altitude value and brings the drone up to that altitude. The function first ensures that the drone is armed, meaning that it has passed the pre-flight checks and the motors are armed; if the pre-flight checks are not passed, the drone will not arm and take off. This check is done by the DroneKit library. The drone's mode is set to GUIDED, which means it flies to coordinates that are not predefined, a necessary part of taking off. Since our program creates the lawnmower path from the coordinates it receives, GUIDED mode is used for the whole flight path. Once the mode is set, the drone can take off using DroneKit's "simple_takeoff" function, which elevates the drone to the specified altitude.
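
The following sketch shows how this connection and take-off sequence could look with DroneKit. The serial connection string matches the one above, while the baud rate is an assumption; over the network, a string such as "udp:127.0.0.1:14550" would be used instead.

```python
import time

from dronekit import connect, VehicleMode

# Serial connection from the Raspberry Pi (assumed baud rate); use
# "udp:127.0.0.1:14550" instead when connecting over the network.
vehicle = connect("/dev/ttyAMA0", wait_ready=True, baud=57600)


def arm_and_takeoff(target_altitude):
    """Arm the drone once DroneKit's pre-flight checks pass, then climb to target_altitude (metres)."""
    while not vehicle.is_armable:            # wait for pre-flight checks to pass
        time.sleep(1)

    vehicle.mode = VehicleMode("GUIDED")     # GUIDED: fly to coordinates sent at runtime
    vehicle.armed = True
    while not vehicle.armed:                 # wait for the motors to arm
        time.sleep(1)

    vehicle.simple_takeoff(target_altitude)
    # Block until the drone is close to the requested altitude.
    while vehicle.location.global_relative_frame.alt < target_altitude * 0.95:
        time.sleep(1)
```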

Once the drone has reached the target altitude, the Waypoint class provides a method that returns the list of Coordinate objects where the drone should stop and capture a frame to check for fluorescence from Fusarium graminearum. The path starts at the top left of the crop field and runs along the same latitude until it reaches the end; the drone then steps down one longitude interval and travels back in the opposite direction. The path continues to scan the field in this way until the last coordinate is reached. When the drone gets to a coordinate on the path, it stops and captures an image, which is processed by our AI algorithm to determine whether there is any fluorescence in the crops. If there is, the latitude and longitude are recorded in a CSV file, which is then displayed as a heat map for farmers to access on our web portal. To ensure that two images are not credited with the same fluorescence, we remove a significant digit from the coordinates being compared, so that only a genuinely new area is scanned and counted.
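
A rough sketch of this scanning loop is given below. detect_fluorescence() stands in for the image-processing/AI step, the fixed pause is a placeholder for waiting until each waypoint is reached, and rounding to five decimal places is one possible reading of "removing a significant digit" for de-duplication.

```python
import csv
import time

from dronekit import LocationGlobalRelative


def fly_and_scan(vehicle, path, altitude, detect_fluorescence):
    """Visit each Coordinate in `path`, scan for fluorescence, and record unique hits."""
    seen, hits = set(), []
    for point in path:
        vehicle.simple_goto(LocationGlobalRelative(point.lat, point.lon, altitude))
        time.sleep(10)   # placeholder: the real code would wait until the waypoint is reached
        if detect_fluorescence():
            # Truncate the coordinates so the same spot is not logged twice.
            key = (round(point.lat, 5), round(point.lon, 5))
            if key not in seen:
                seen.add(key)
                hits.append(key)
    with open("detections.csv", "w") as f:
        writer = csv.writer(f)
        writer.writerow(["latitude", "longitude"])
        writer.writerows(hits)
    return hits
```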

Once the last coordinate of the path has been processed, the drone returns to its starting position using the simple_goto function. Once it reaches that location, it starts the landing protocol through the land function, which sets the mode to LAND and disarms the drone's motors once it has landed. The vehicle object is then closed, meaning the program and drone are no longer connected, and the program ends there.
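
A short sketch of this return-and-land sequence, assuming the home position was recorded before take-off and using an assumed return altitude:

```python
import time

from dronekit import VehicleMode, LocationGlobalRelative


def return_and_land(vehicle, home_lat, home_lon, altitude=10):
    """Fly back to the starting location, land, and close the connection."""
    vehicle.simple_goto(LocationGlobalRelative(home_lat, home_lon, altitude))
    # (The real program would wait here until the drone is back over the home point.)
    vehicle.mode = VehicleMode("LAND")   # LAND descends and disarms the motors on touchdown
    while vehicle.armed:                 # wait until the motors have disarmed
        time.sleep(1)
    vehicle.close()                      # the program and drone are no longer connected
```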

Since we did not have the drone available, we used the drone simulation kit known as DroneKit-SITL. To run this simulator, an x86 (32-bit) architecture is needed, as the binaries it provides are built for that specific computer architecture. We would connect our drone to the simulator through MAVProxy once all the required packages were installed: DroneKit-SITL, MAVProxy, and the firmware for the various types of drones, such as ArduCopter and ArduPlane. To start the simulation, the following command would be executed: "dronekit-sitl copter --home=latitude,longitude,altitude,heading --instance N". In this case, the vehicle being simulated is a copter, and "latitude,longitude,altitude,heading" sets the home location of the simulated drone. Afterwards, we would open another terminal window and start MAVProxy to connect to the simulation. At this point, we can send commands to our simulated drone using MAVProxy or our preferred ground control software, and we can also use DroneKit-Python to write scripts that interact with the simulated drone programmatically.
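
As an illustration, connecting to a locally running SITL instance from DroneKit-Python could look like the sketch below; the TCP address is SITL's default listening port, and the printed fields are just examples of the telemetry available.

```python
from dronekit import connect

# Assumes "dronekit-sitl copter --home=latitude,longitude,altitude,heading" is already
# running; SITL listens on TCP port 5760 by default.
vehicle = connect("tcp:127.0.0.1:5760", wait_ready=True)

print("Mode:     %s" % vehicle.mode.name)
print("Armed:    %s" % vehicle.armed)
print("Location: %s" % vehicle.location.global_frame)

vehicle.close()
```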

In the output of a DroneKit-SITL simulation session, we can expect to see a series of informative messages and status updates detailing the behavior of the simulated drone. These typically include initialization details, pre-flight checks, flight mode changes, altitude control updates, command execution, telemetry data, and indications of landing and disarming. The exact content of these messages depends on your actions within the simulation and the scripts or commands you use to interact with the virtual drone.

In our case, we encountered challenges in attempting to successfully simulate the drone in the provided environment due to a compatibility issue. The simulation requires an x86 (32-bit) architecture, as the binaries generated by the simulation tools were designed specifically for this computer architecture. Unfortunately, our system did not have this particular architecture, which prevented us from running the simulator effectively.

The ultimate stage of our project aimed to embrace the possibilities of 3D printing by building a fully 3D printed drone, drawing on the insights gained from our prior prototypes. Our aspiration was to make this a completely open-source endeavor so that, once successfully accomplished, anyone could readily obtain the necessary hardware, 3D print the essential components, and combat infections independently.

Prototype I

It is worth mentioning that before we were introduced to multispectral cameras, we had been working on the same detection technique for Prototype I as we did for Prototype II. We were developing a specialized 3D-printed payload that carried the same components as the second prototype. The only difference was that power came from a USB-C plug already present on the drone; all we needed was an adapter that plugged into the USB-C port and had a micro-USB connector on the other side to connect to the Raspberry Pi, which would then power everything else. All our designs for all prototypes can be found [insert link to the designs] here. We tried uploading them to the iGEM GitLab; however, it did not work as the files were too big. Rest assured, this GitHub repository will never be deleted.

Prototype II

For the second prototype, we also attempted to work with the Parrot Sequoia multispectral camera; however, we encountered two issues. Firstly, we were unable to successfully integrate it with the Raspberry Pi. Secondly, the camera's range was not optimal for this specific use case.

AI development

We experimented with other techniques, such as the k-means method. However, when attempting to solve the problem using k-means, we discovered that it is highly dependent on the number of clusters (k) we choose to use. If k is not sufficiently large, the results are significantly affected. For instance, in this case, when we set k = 5 and treated a clustered label as 1, k-means was able to detect the fluorescence color successfully. However, when we used k = 3, the original image could not be adequately represented with only 3 cluster colors, resulting in the inability to detect the fluorescence. Additionally, another issue with this method is the extensive time it takes to obtain results, particularly when k is set to a high value. Consequently, for this project, we ultimately decided to utilize the color detection method.
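
For reference, the k-means experiments can be reproduced with a short OpenCV snippet along these lines; the termination criteria and attempt count here are conventional defaults, not necessarily the exact values we used.

```python
import cv2
import numpy as np


def segment_kmeans(image_path, k=5):
    """Segment an image into k colour clusters with cv2.kmeans and return the result."""
    image = cv2.imread(image_path)
    pixels = image.reshape((-1, 3)).astype(np.float32)

    # Stop after 10 iterations or when cluster centres move by less than 1.0.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

    # Recolour every pixel with its cluster centre to build the segmented image.
    centers = centers.astype(np.uint8)
    segmented = centers[labels.flatten()].reshape(image.shape)
    return segmented
```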

Figure 13. The segmented image of Figure 3 at k = 3. With k = 3, the growth cannot be seen properly. Code run time at k = 3 for Figure 3 was 75.9 seconds.
Figure 14. The segmented image of Figure 3 at k = 5. With k = 5, the growth just barely starts to become visible; to improve this, the k value must be set higher, but the trade-off is time. Code run time at k = 5 for Figure 3 was 58.6 seconds.
Figure 15. The segmented image of Figure 3 at k = 7. Code run time at k = 7 for Figure 3 was 84.1 seconds.
Figure 16. The segmented image of Figure 3 at k = 9. Code run time at k = 9 for Figure 3 was 136.7 seconds.

In retrospect, our project serves as a stepping stone for teams venturing into drone automation. What we've achieved is not just the culmination of our own efforts but a platform for others to embark on their own drone automation journeys. Our system is designed with flexibility in mind, offering future teams the opportunity to adapt and customize our work to suit their specific requirements.

In terms of detection, only a few numerical adjustments within our code are needed to target different colors or properties. This adaptability opens the door to a multitude of applications, expanding the scope of possibilities. The potential for future growth is vast, and the sky truly becomes the limit for those who dare to explore and innovate further.


REFERENCES

  1. Fisher, R., Perkins, S., Walker, A., and Wolfart, E. 2000. Hypermedia Image Processing Reference. https://homepages.inf.ed.ac.uk/rbf/HIPR2/dilate.htm