According to our genetic circuit, the output of our test kit is a green or red fluorescent signal, corresponding to GFP or RFP expression. However, color perception is to some extent subjective: the fluorescent light may not be easy to observe with the naked eye, let alone for users with color blindness. In such cases, users might read the results incorrectly and the test kit would be wasted. We therefore find it necessary to develop an app complementary to our hardware package so that the results become more readable and interpretable. In addition, to suit users of diverse backgrounds, the app will provide a detailed manual for our Cereulide Testing Kit as well as educational information related to synthetic biology.
      Our app development process consists of three steps. First, we trained a machine learning model to be used in the Color Identification function; the model is trained with Google Teachable Machine[1]. Next, we designed the prototype of our app. Finally, we will program the app in Android Studio[2] according to the prototype design, so that it can help users analyze the results of the test kits with their mobile devices in the real world.
      As we aim to identify whether the output fluorescent signal is green or red, we plan to train a machine learning model with Google Teachable Machine to perform color identification. Google Teachable Machine is a platform on which users can easily create their own machine learning model by providing a dataset of inputs and their labels. To train our model, we are collaborating with the hardware team to collect photos of gels containing fluorescent protein and to divide them into a Green group and a Red group accordingly.
      Once the training is done, the model will output the probability that a test case belongs to each class. The class with the highest probability will be displayed as the final result in our app.
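A minimal sketch of how this last step could look in the app's code, assuming the two labels are "Green" and "Red" (the helper name and output formatting are our own illustration, not part of the exported model):

```kotlin
// Reduce the per-class probabilities returned by the model to the single
// label shown to the user. Label names are assumptions for illustration.
fun pickResult(
    probabilities: FloatArray,
    labels: List<String> = listOf("Green", "Red")
): String {
    require(probabilities.size == labels.size) { "one probability per label expected" }
    val best = probabilities.indices.maxByOrNull { probabilities[it] } ?: return "Unknown"
    return "${labels[best]} (${"%.1f".format(probabilities[best] * 100)}%)"
}
```

For example, pickResult(floatArrayOf(0.87f, 0.13f)) would return "Green (87.0%)".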
      Google Teachable Machine lets users enter an input and run real-time classification on its web page. However, we want to integrate the whole classification process into our app. Therefore, we exported the model generated by Google Teachable Machine and programmed it into our app in the later steps.
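Teachable Machine can export an image model in TensorFlow Lite format, which can then be bundled in the app's assets. The following is a rough sketch, not our final implementation, of how such an exported model might be loaded and run with the TensorFlow Lite Interpreter; the asset file name, the 224×224 RGB input size and the two-class output are assumptions that have to match the model actually exported:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Sketch of running a Teachable Machine model exported as TensorFlow Lite.
// The asset file name, 224x224 RGB input and two-class output are assumptions
// that must match the model actually exported for our kit.
class FluorescenceClassifier(context: Context, modelAsset: String = "model_unquant.tflite") {

    private val interpreter = Interpreter(loadModel(context, modelAsset))

    // Read the .tflite file from the app's assets into a direct buffer,
    // which is the form the TensorFlow Lite Interpreter accepts.
    private fun loadModel(context: Context, name: String): ByteBuffer =
        context.assets.open(name).use { stream ->
            val bytes = stream.readBytes()
            ByteBuffer.allocateDirect(bytes.size).order(ByteOrder.nativeOrder()).apply {
                put(bytes)
                rewind()
            }
        }

    // Convert the photo to the float tensor the model expects and return the
    // raw class probabilities, e.g. [p(Green), p(Red)].
    fun classify(bitmap: Bitmap): FloatArray {
        val size = 224
        val scaled = Bitmap.createScaledBitmap(bitmap, size, size, true)
        val input = ByteBuffer.allocateDirect(4 * size * size * 3).order(ByteOrder.nativeOrder())
        val pixels = IntArray(size * size)
        scaled.getPixels(pixels, 0, size, 0, 0, size, size)
        for (p in pixels) {
            input.putFloat(((p shr 16) and 0xFF) / 255f) // red channel
            input.putFloat(((p shr 8) and 0xFF) / 255f)  // green channel
            input.putFloat((p and 0xFF) / 255f)          // blue channel
        }
        input.rewind()
        val output = Array(1) { FloatArray(2) }
        interpreter.run(input, output)
        return output[0]
    }
}
```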
We have programmed the prototype in Android Studio, the official Integrated Development Environment (IDE) for Android app development. Here is the current interface we have developed:
      After that, the user can tap the 'Predict' button to generate the result. As we do not have a training dataset yet, we have not built the model (check Color identification machine learning model for our future plan). Below is a sketch of how the button could eventually be wired to the classifier, followed by a demo of what the result would look like:
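As a rough sketch of that wiring, assuming the view IDs shown here and reusing the hypothetical FluorescenceClassifier and pickResult helpers from the earlier snippets:

```kotlin
import android.graphics.drawable.BitmapDrawable
import android.os.Bundle
import android.widget.Button
import android.widget.ImageView
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity

// Sketch of the 'Predict' button handler. The view IDs (photo_view,
// result_text, predict_button) and the helper classes are assumptions
// for illustration, not the final app code.
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val photoView = findViewById<ImageView>(R.id.photo_view)
        val resultText = findViewById<TextView>(R.id.result_text)
        val classifier = FluorescenceClassifier(this)

        findViewById<Button>(R.id.predict_button).setOnClickListener {
            val bitmap = (photoView.drawable as? BitmapDrawable)?.bitmap
            resultText.text = if (bitmap != null) {
                pickResult(classifier.classify(bitmap))
            } else {
                "Please take or choose a photo first"
            }
        }
    }
}
```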