Engineering can be described as the branch of science and technology concerned with the design, building, and use of engines, machines, and structures. Based on this definition, engineering principles are key to constructing our tools, whether software, biological tools, or models. Our reflection is guided through time by a series of steps: four stages called Design, Build, Test, and Learn. We represent them as a circle because, after the learning step, you can revisit all the previous stages to improve your tool or to take new parameters or information into account. This is a way to learn from errors or missing steps in the creation process, to overlook no parameter, and to build a tool that is as effective and complete as possible.
We present the five main cycles we used in our engineering approach. This is how we tackled the various challenges that arose while thinking through and building our project. First, you will learn about our software design approach, a necessary step in creating our biological tool. Then, you will discover the biological design of our SuperBugBuster. The modeling part of our project also involved an engineering process. Finally, surprising as it may seem, the engineering cycle also applies to constructing a social and historical work: indeed, it is a way of building a precise reflection on a specific subject. As human practices are at the heart of this project against antibiotic resistance, the cycle of that reflection is also represented on this page.
When looking for CRISPR motifs, we came up against a difficulty: we wanted to eliminate carbapenemase resistance. As this is a class of genes sharing the same function, some genes in the class may be related by homology. We therefore wanted to obtain a single motif capable of guiding the CRISPR module over several genes of the same class. However, existing CRISPR tools were unable to find motifs capable of binding across a whole gene database. We created our program starting from two existing software packages: https://cstb.ibcp.fr/ and http://www.rgenome.net/be-designer/.
First, we contacted the IBCP team, who created one of the programs we based ourselves on. Our program takes two databases as input: the first contains the list of genes for which we are looking for CRISPR motifs; the second contains the bacterial genome(s) in which the CRISPR motifs must not be recognized. The program then follows these steps:
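The core idea can be sketched as follows. This is a simplified illustration, not our actual implementation: the function names, the 20-nt spacer length, and the NGG PAM convention are assumptions made for the example.

```python
# Illustrative sketch: find spacers shared by all target genes but absent
# from the host genome(s). Names and parameters are assumptions.

def candidate_spacers(gene, spacer_len=20):
    """Collect every spacer-length window that sits 5' of an NGG PAM."""
    spacers = set()
    for i in range(len(gene) - spacer_len - 2):
        pam = gene[i + spacer_len: i + spacer_len + 3]
        if pam[1:] == "GG":  # NGG PAM immediately 3' of the spacer
            spacers.add(gene[i: i + spacer_len])
    return spacers

def shared_motifs(target_genes, host_genomes):
    """Spacers common to every target gene and absent from host genomes."""
    common = None
    for gene in target_genes:
        found = candidate_spacers(gene)
        common = found if common is None else common & found
    # discard any motif that also occurs in a genome we must not target
    return {s for s in (common or set())
            if not any(s in genome for genome in host_genomes)}
```

Intersecting the per-gene candidate sets is what makes a single motif cover several homologous genes at once, and the final filter implements the second database's role as an exclusion list.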
To validate the program, we ran comparative tests against http://www.rgenome.net/be-designer/ to ensure it functioned correctly. In addition, we carried out manual tests simulating the use of CRISPR on specific sequences to check that the program behaved as expected.
Our initial hypothesis was that, since a homology relationship linked the genes, the same motif could hybridize to a large number of genes in the same class. In practice, a motif may hybridize with several genes, but that number remains limited.
1st iteration: pEDIT1
The first plasmid we need is the entry vector of the Gateway system. This plasmid must contain a transfer origin, allowing conjugation with a wide range of bacteria to make our tool applicable to different species. Additionally, this plasmid must provide a carefully chosen replication origin.
A Gateway entry vector must include specific recombination sites attR3 and attR4, between which the sequences of interest are located. Outside of these recombination sites, the vector must contain antibiotic resistance for plasmid selection, as well as a pUC replication origin. The sequences of interest provided by this plasmid will be the replication origin, the transfer origin, and the resistance necessary to exert selection pressure on the final plasmid. We have chosen an RP4 plasmid transfer origin, allowing conjugation of the plasmid across a broader spectrum of bacteria. For the replication origin, we decided to design a mini F plasmid containing the ori2 replication origin and the associated replication protein, RepE. This replication origin is tightly regulated, and the F plasmid is present in a single copy per cell. We have removed partition genes to allow easy loss of this plasmid without selection pressure. This plasmid is constructed in five fragments assembled by Golden Gate cloning using the BsaI enzyme and T4 DNA ligase. Each fragment was previously obtained by PCR from a template plasmid using appropriate primers. These primers have 5' overhangs with BsaI sites possessing suitable sticky ends to allow directional assembly during Golden Gate cloning.
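Directional Golden Gate assembly works only if each fragment's sticky ends pair with exactly one neighbor. A minimal sanity check for such a design can be sketched as below; the fragment names and overhang sequences are invented for illustration, not our actual primer overhangs.

```python
# Check that a set of Golden Gate fragments, given as
# name -> (left_overhang, right_overhang), assembles in one unique circle.

def assembly_order(fragments):
    """Return the unique circular order of fragments, or None if the
    overhangs allow ambiguous or incomplete ligation."""
    by_left = {}
    for name, (left, _right) in fragments.items():
        if left in by_left:
            return None          # two fragments share an overhang: ambiguous
        by_left[left] = name
    start = next(iter(fragments))
    order = [start]
    _, right = fragments[start]
    while True:
        nxt = by_left.get(right)
        if nxt is None:
            return None          # chain breaks: no compatible overhang
        if nxt == start:         # circle closed: accept only if all used
            return order if len(order) == len(fragments) else None
        if nxt in order:
            return None
        order.append(nxt)
        _, right = fragments[nxt]
```

For five fragments this kind of check confirms on paper that the BsaI overhangs can only ligate in the intended order before any bench work is done.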
In a first step, the sizes of the PCR products are verified by agarose gel electrophoresis. The expected sizes are as follows:
Once ligation is completed, the size of the resulting plasmid is verified by XhoI enzymatic digestion.
Finally, sequencing covering the ligated regions is performed to ensure that the correct sequence is obtained (see Results page).
The results seem to indicate that we have successfully constructed the expected plasmid.
2nd iteration: pEDIT2
The second plasmid must incorporate an inducible editing system capable of inactivating a specific target gene.
DNA base editors, combining a deaminase with an inactive Cas9 protein, are genome-editing tools suited to making specific modifications at the genome level. Various base editors exist; in this project, we used a cytidine deaminase module derived from the sea lamprey Petromyzon marinus. This module converts cytidine to uridine within DNA. It is fused with the uracil glycosylase inhibitor (UGI) from the PBS1 bacteriophage of Bacillus subtilis, which inhibits bacterial uracil-DNA glycosylase and prevents the reversion of uridine to cytidine. Combining these proteins with the CRISPR-dCas9 system allows guide RNAs (gRNAs) to precisely target the genome region where the cytosine-to-thymine mutation is intended to occur. The base editor is guided to a specific site called the protospacer by a gRNA. A 20-nucleotide segment called the spacer, located at the 5' end of the gRNA, forms a heteroduplex with the complementary DNA strand of the protospacer. This complex exposes the protospacer DNA strand, making it accessible to the deaminase. For the cytidine deaminase base editor coupled with the CRISPR-dCas9 system, the deamination window is located 13 to 17 bases upstream of the NGG protospacer adjacent motif (PAM) recognized by the Cas9 protein.
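To make the editing window concrete, here is a small sketch that lists the cytosines such an editor could reach. It reflects one reading of the 13-17 nt window above (counting the base immediately 5' of the PAM as position 1); the function name and the 20-nt protospacer length are assumptions for the example.

```python
# Sketch of locating editable cytosines for a cytidine base editor,
# assuming the deamination window spans 13-17 nt upstream of the NGG PAM.

def editable_cytosines(seq):
    """Return (pam_index, cytosine_index) pairs a CBE could convert C->T."""
    hits = []
    for p in range(20, len(seq) - 2):      # PAM needs a full 20-nt protospacer 5' of it
        if seq[p + 1: p + 3] == "GG":      # NGG PAM starting at index p
            window = range(p - 17, p - 12)  # bases 13..17 upstream of the PAM
            hits.extend((p, i) for i in window if seq[i] == "C")
    return hits
```

In spacer design this is exactly the constraint to satisfy: the cytosine to be mutated must fall inside that five-base window for the chosen PAM.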
We decided to place our editing system under the control of the TetR induction system. This repressor regulates the expression of the tetA gene, which encodes the TetA efflux pump that exports tetracycline antibiotics out of the bacterial cell. In the absence of tetracycline, the TetR repressor blocks expression of the tetA gene. In the presence of tetracycline, the antibiotic binds the TetR regulator, which changes conformation and loses its affinity for the tetO operators. Consequently, the ptet promoter is released and the tetA gene is transcribed.
This plasmid was constructed by performing a double enzymatic digestion (SpeI + BamHI) of two parent plasmids, followed by ligation with T4 DNA ligase. After obtaining the plasmid resulting from this ligation, we needed to remove the region responsible for chloramphenicol resistance, to avoid having two chloramphenicol resistance regions on the final pEDIT5 plasmid. To achieve this, we performed a double enzymatic digestion (ZraI + SpeI), followed by blunting of the protruding ends and, finally, ligation using T4 DNA ligase.
We tested the sensitivity of the obtained strains to chloramphenicol; they should be sensitive. The size of the resulting plasmid is verified by (KpnI + SpeI) enzymatic digestion.
Finally, to ensure that the correct sequence is obtained, sequencing covering the ligated regions is performed (see Results page).
The only plasmids we could obtain with this method were the initial plasmids (chloramphenicol resistant). We think the source of error is the blunting kit, which is likely inefficient. We decided to perform a different enzymatic digestion of the first intermediate plasmid: a double digestion with ZraI + MlsI, in order to partially delete the region carrying the chloramphenicol resistance, hoping for a loss of function.
3rd iteration: pEDIT3
This plasmid must contain, between the attL1 and attL2 sites, the guide RNAs (two or four) assembled in tandem, each under the control of an anhydrotetracycline-inducible promoter.
This plasmid requires the construction of three intermediate plasmids:
The pHost-spacer plasmids contain the ptet promoter, a ccdB suicide cassette, and the guide RNA backbone necessary for interaction with the Cas9 protein.
Each of these plasmids will accept a specific spacer.
The pHost-spacer plasmids differ in the presence of distinct BbsI sites, allowing directional assembly of guide RNAs into the recipient vectors pV2-mScarlet or pV4-mScarlet.
The pHost-spacer plasmids consist of two fragments obtained by PCR from the template plasmids pINS-Rif and pT1, using primers with 5' overhangs carrying distinct BbsI sites.
The expression plasmids pEx-gRNA1 and pEx-gRNA2 contain the spacers S1-oxa48 and S2-oxa48, respectively, targeting [specific target]. They were obtained through Golden Gate cloning using BsaI and T4 DNA ligase into the plasmids pHost-spacer1 and pHost-spacer2.
The host plasmids for tandem guide RNAs, pV2-mScarlet and pV4-mScarlet, are derived from the pENTR4-dual plasmid, marketed by Thermo Fisher and containing the recombination sites attL1 and attL2. The objective is to replace the chloramphenicol resistance gene and the ccdB suicide gene located between attL1 and attL2 with the gene encoding mScarlet, flanked by two BbsI restriction sites. These BbsI sites will enable the directional insertion of tandem guide RNAs through Golden Gate cloning. The construction of these plasmids involves a series of steps:
We tested the loss of the BbsI site by enzymatic digestion.
Finally, to ensure that the correct sequence is obtained, sequencing covering the ligated regions is performed (see Results page).
We noticed that we had an issue with selection pressure. Indeed, since the pEDIT3 plasmid is KanR, and the plasmids used for its construction are also KanR, we had difficulty isolating the correct plasmid. Therefore, we decided to make pEDIT3 SmR to enhance our selection.
The design of the differential-equation model for the CRISPR part of our project was based on the needs of the team in charge of laboratory work. This model lets us explore tests of our tool in silico, so it had to answer the questions asked of our biological tool, both to support future experiments and to test things that would not be feasible in the time available. The main questions we had to answer were:
After answering these questions, we came up with a population dynamics model showing plasmid transitions that move bacteria between the following categories: susceptible, resistant, susceptible with CRISPR plasmid, resistant with CRISPR plasmid.
Building the model remained closely tied to the biological questions, but the more mathematical aspects now came into play. Many types of equations were available, so we had to choose the most appropriate ones for our situation. We chose to compare two different models: one with an Allee effect and one with a logistic (Verhulst) effect. This model-building stage led to a lot of backtracking to the design stage, as we repeatedly realized that our first design had shortcomings, or even errors, with respect to biological reality. It is up to the model to adapt to the experiments, not the other way round, hence the feedback. Once our final design appeared to be the right one, we still had to find parameter values. We therefore undertook bibliographical research and had to use some arbitrary values where knowledge was lacking; these were then adjusted in the next stage of the engineering cycle.
The mathematical equations were translated into Python code to run numerical simulations, using Euler's method to approximate the evolution of the variables. The test phase may revert to the build phase at times, as certain parameters need to be adjusted. Testing consists of displaying the curves and observing the evolution of the populations. We know what result we want to achieve (one corresponding to the correct functioning of our tool), and we try to understand how to write the model in a way that is biologically and mathematically correct.
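Such an Euler loop can be sketched as follows for the four populations named above (susceptible S, resistant R, and their CRISPR-plasmid-bearing counterparts Sc and Rc). All rate values and the exact transition terms here are illustrative placeholders, not our fitted parameters or our exact equations.

```python
# Illustrative Euler simulation of the four-population plasmid model.
# Placeholder parameters (NOT the values used in our actual simulations):
r, K = 1.0, 1e9        # growth rate, carrying capacity (logistic/Verhulst term)
beta = 1e-9            # conjugation rate of the CRISPR plasmid
gamma = 0.5            # rate at which CRISPR inactivates the resistance gene

def step(pop, dt):
    S, R, Sc, Rc = pop
    N = S + R + Sc + Rc
    growth = r * (1 - N / K)           # shared logistic growth factor
    donors = Sc + Rc                   # plasmid-bearing cells spread it by conjugation
    dS  = growth * S  - beta * S * donors
    dR  = growth * R  - beta * R * donors
    dSc = growth * Sc + beta * S * donors + gamma * Rc
    dRc = growth * Rc + beta * R * donors - gamma * Rc
    return [x + dt * dx for x, dx in zip(pop, (dS, dR, dSc, dRc))]

def simulate(pop0, dt=0.01, steps=5000):
    pop = list(pop0)
    for _ in range(steps):
        pop = step(pop, dt)
    return pop
```

Plotting the four trajectories returned by such a loop is exactly the "display the curves and observe" test described above: with these placeholder rates the resistant fraction collapses as the plasmid spreads and CRISPR cures Rc cells into Sc.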
We went back and forth between the learn, test, and build phases. Observing the curves allowed us to understand any inconsistencies that remained in our model. At a certain point, the model met our expectations and we decided that the curves obtained were our final results. These results allow us to learn about the effectiveness of our tool. This effectiveness remains an assumption, as it is based on simplifications, but these were made sparingly and appropriately, so we can rely on our model. Our SuperBugBuster tool is effective, since it enables us to eliminate resistance and, in the end, obtain only susceptible bacteria. This last phase also enabled us to revisit our simplifications, re-discuss them, and envisage new complexities to bring our model even closer to biological reality. The model can be used to test different initial conditions and observe the behavior of bacterial populations. It can therefore be used to prepare test experiments in the laboratory!
1st iteration: Onion Method
For the first iteration of the design phase of our human practices work, we carried out genuine bibliographical research. We had a few questions in mind about our subject, so we searched for information on the theme of antibiotic resistance: the history of antibiotics, the economic context around them, the consequences, and where our current situation is heading. This gave us a solid base from which to design our human practices reflection. But as information led to new questions, we extended our work to the imaginary surrounding science in general and how people perceive science. Moreover, the notion of One Health came up. We then asked ourselves how we could picture all of this, how we could represent all this information in an easy and understandable way. That is when the onion method came to us, and it seemed a great tool for our human practices approach to the subject of antibiotic resistance. Indeed, it allows us to structure our ideas, take a step back, and think critically about our subject.
We then built our onion method, which you can find on the Human Practices page. This tool was created and imagined by Jean François Trégouët and Céline Nguyen, teachers at INSA Lyon. We surrounded ourselves with science and humanities teachers to help us build our onions, and their expertise shaped the design we have now. Layer by layer, we drew on our bibliography to fill them in. We also included the One Health problematic through a color code.
To test our tool, which takes a written form, we had it proofread by different people around us. We tried to include people from different environments in this trial phase (friends, family, teachers) to gather as much feedback as possible. This gave us many different points of view on the subject: people close to the scientific environment, others completely outside it, and still others specializing in the social sciences. Their feedback was an important step in our reflection, which we took seriously into consideration; it was a real help in our Human Practices journey. The test phase was effective because it pointed out that the onion was a great tool for getting a global view of the subject, but that it was based only on bibliography. We needed real testimonies to gain deeper knowledge of the antibiotic resistance subject.
To finish the first iteration of the Human Practices engineering cycle, we took into account the feedback from our helpers. At this stage, we needed more material on antibiotic resistance, and we needed it in a different form than bibliography alone. Indeed, as antibiotic resistance is first of all a consumption problem, we needed testimonies: real people recounting their experience of it. But the onion method step was certainly not useless, because this first approach to the subject made the rest of our Human Practices journey possible, allowing us to include and integrate people at the core of this part of the SuperBugBuster project. Indeed, the Human Practices work was far from finished at this point: we were only at the beginning of our second iteration of the engineering cycle!
2nd iteration: Studies
Our second iteration of the cycle began with the creation of our qualitative study. Indeed, after our research work and the creation of our bibliography, we needed to go further into the antibiotic resistance problem, so we looked into how we could do so. We naturally turned to a full study of the subject, combining qualitative and quantitative approaches.
Building this full study involved two branches. First, we created our qualitative study by writing a complete interview guide with basic questions built from our initial research. We modified it according to the type of person we wanted to interview, ending up with five different interview guides matched to the interviewees' profiles. Of course, new questions came up during the actual interviews, and a new profile emerged along the way, bringing yet more questions. Each interview ended up influencing the next one, as well as the type of people to interview, by opening new questions on the subject. Building our qualitative study was a real work of adaptation! We then finished it by creating a reflection tree, which let us organize our thinking across the interviews. This allowed us to see all the links between interviews, and once again it was important for us to include the notion of One Health, which really matters in the context of antibiotic resistance. For the quantitative aspect, we enlisted a Sciences Po student to help us create a survey, so as to obtain broad information on several big questions raised by our bibliographical work. We asked what people know about antibiotic resistance, about their consumption of antibiotics, and how they think it affects the world. The creation of this survey was the final point of our Integrated Human Practices journey.
The test step of this iteration of the Human Practices work mainly consisted of deploying our survey, but we realized that it was biased. Indeed, as science students, we are surrounded by people from this field. By deploying our survey around us (through social media, emails, and so on) we would automatically reach younger people connected to science, who necessarily already have an idea of what antibiotic resistance is, or at least know that antibiotics must be used responsibly. We did not want biased results leading to false interpretations.
Thanks to this test, we learned that we should keep the survey as a proof of concept: a potential survey that we could deploy if we had a larger audience or a bigger impact. We detail all of this on the Integrated Human Practices page.
3rd iteration: Education
Last iteration, but not least, for the Human Practice work, after realizing the importance of educating people about antibiotic resistance, we decided to focus on education. We started by identifying the educational objectives we want to achieve such as equipping individuals with knowledge, skills, and values needed for personal development and societal contribution. Through education, individuals gain the tools to navigate the world, make informed decisions, and actively participate in shaping a better future. We conducted research, gathering information about existing educational games, successful awareness campaigns, and effective social media strategies used in education. Additionally, we defined our target audience, considering factors such as age group, educational level, and demographics. With all this information, we brainstormed creative ideas that aligned with our objectives and target audience.
We designed and created educational games that are interactive, engaging, easily adaptable, and accessible to all. Simultaneously, we developed an awareness campaign about antibiotic resistance through flyers and brochures. Talking about our project and its key concepts, we used our social media for science popularization, with the idea of reaching as many people as possible around the world! We encourage user engagement, discussion, and the sharing of educational resources.
To test our educational tools, we wanted to deploy our games in schools or summer camps. However, of the fifty establishments we contacted, only one middle school answered. We presented our project, focusing on the educational part, to 170 students. For the awareness campaign, we contacted doctors and sent them our flyers and brochure; two of them agreed to display them in their waiting rooms. From the first-year Biosciences students at INSA Lyon, we gathered feedback to evaluate whether our games and social media content were engaging and enjoyable, and whether they were suitable for children.
Finally, after the tests, we reviewed the feedback received on the games, the awareness campaign, and the social media content. We identified areas for improvement alongside what had been successful, which helped us refine and enhance our educational tools, and we implemented the modifications needed to improve their effectiveness and impact. By sharing our experiences, we contribute to the collective knowledge and improvement of education.
General
The first goal was to find a solution to degrade the OXA-48 protein in the cytoplasm. We came up with the idea of a BacPROTAC directly involving ClpX.
To design our PROTAC, we needed two binding moieties: one that could bind the OXA-48 protein and one for ClpX. We decided to use ClpX directly because it was simpler than attempting to design a ligand protein from scratch (something we did not know how to do). We also found in the literature a nanobody specific to CMY-2, a β-lactamase related to OXA-48, which we thought would interact with our protein of interest since the two molecules are quite similar. To link the two proteins, we decided to use a very common linker that normally performs well: Asn-Phe-Phe-Asn-Leu.
When we presented our project to a group of scientists from diverse specialties, even though they found the idea really interesting, they quickly pointed out that the nanobody would not bind the OXA-48 protein because it was highly specific to CMY-2. They also said that a linker is not something one can choose out of the blue: some modeling is needed to find one so that the BacPROTAC will work.
During this meeting, we learned several things. First, it is important to present our work to scientists so they can point out mistakes and miscalculations made along the way. Also, building BacPROTACs is a lot more complicated than we thought, and simply putting three proteins together will not work. We needed to think about it further.
2nd iteration : NANOBODY
We took the scientists' remarks into account and tried to model the OXA-48-nanobody interaction. As there was no interaction, we decided to modify this nanobody, with the help of Riccardo Pelarin, a scientist from the bioinformatics field, who gave us the idea.
To do so, we used a new algorithm, RF Diffusion (see the Docking page). It allowed us to start from the CMY-2-specific nanobody and obtain candidate nanobodies that potentially interact with OXA-48.
We modeled in silico the interaction of our best candidates and considered running in vitro tests, for example an ELISA (see the Experiments page), in order to find which nanobody interacts with the protein.
Here, we learned a great deal about how time-consuming in silico modeling is and how difficult it is to obtain a good result. The algorithms are really complicated, and therefore hard to fully understand, and because this is biology, it is even harder to model. Thanks to this work, we obtained three nanobody candidates which, when submitted to modeling platforms, produced an interaction with OXA-48.
2nd iteration bis: LINKER
The first linker we chose was simple and basic: Asn-Phe-Phe-Asn-Leu. But it was not necessarily the best option. Indeed, we realized later, after consulting a researcher, that we did not know whether this linker would avoid steric clashes or be long enough to allow both the nanobody and ClpX to perform their functions properly. So we decided to model it.
Based on this, we planned to use an algorithm called IMP-develop, which allows detailed structural characterization of assemblies. It could help us build our linker by visualizing the interaction between the nanobody, ClpX, and the chain we wanted to insert between them.
To do so, we needed to supply the IMP tool with the nanobody's and ClpX's sequences, plus the most basic linker possible, so that it could modify the latter. The nanobody and ClpX models are kept frozen, so that only the linker explores all possible conformations in this rigid environment, in order to obtain the best possible coordinates for the linker.
In the end we did not use this tool, as we would have had to write a script from scratch. We were nevertheless able to determine that the linker should be attached to the C-terminal part of ClpX, since studies using GFP as the protein to be degraded had linked GFP to the C-terminal part of that same protein. This reduces the two binding possibilities (C-terminal or N-terminal) to a single one, which we can impose right from the start in the software to work with a reduced data set.