Team:Johns Hopkins-BAG/Protocols

From 2009.igem.org


Revision as of 12:30, 5 October 2009

Protocols

The Ligase Chain Reaction Protocol

INTRODUCTION

The Ligase Chain Reaction Protocol (LCR) is designed to replace and improve upon the current Templateless Chain Reaction Protocol (TPCR). The LCR is thought to have several distinct advantages:
1.) The accuracy of the BB sequence should increase (theoretically).
2.) Construction of the BB should be considerably easier thanks to the overlapping design of the oligonucleotides.
3.) Money should be saved across the entire workflow, because the LCR only needs to be run once to produce workable results.
To arrive at a working protocol, we must first subject the given protocol to stress tests for robustness. The second part of our experiments tests known working samples in LCR trials, to confirm that the experimental procedure itself works correctly (we added this step because our own oligonucleotides were built incorrectly); we used samples from Jennifer Tullman’s batch of the 3L.3_23.A1 BB. The third part of our testing will be to assemble the failed BBs from the Intersession 2009 work that Mary and I did.

THEORY

The theoretical background for the efficiency and accuracy of the LCR rests on the un-gapped oligonucleotides and the manner in which the BB is constructed.

Step 1
Build un-gapped oligonucleotides of roughly 60bp each.
Step 2a
Mix the oligonucleotides together.
Step 2b
Phosphorylate the 5’ ends of the oligonucleotides so that the Taq DNA Ligase (thermostable) can ligate the nicks.
<PICTURE>
Step 3
Use a chained reaction to allow the overlapping pieces to anneal, then use the ligase to join the DNA backbone.
<PICTURE>
This is the step where the gains in efficiency and accuracy are most apparent. The oligonucleotides have very high specificity for their complementary strands, thanks to the ~30bp stretch of complementary DNA. There is also a very low chance of accidental loxP sites, since a palindromic sequence becomes much harder to find as the stretch of complementary annealing sites grows longer.

Step 4
Every step thereafter is the same as the old BAG protocol.
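The un-gapped tiling in Steps 1–3 can be sketched in code: top-strand oligos tile the BB end to end, while bottom-strand oligos are offset by half an oligo length so that every nick on one strand is bridged by ~30bp of complementary DNA on the other. This is an illustrative sketch only; `design_lcr_oligos` and `revcomp` are hypothetical helpers, not part of the BAG toolchain, and the real protocol's oligo boundaries may differ.

```python
# Sketch of the un-gapped oligo design in Steps 1-3 (hypothetical helpers).

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def design_lcr_oligos(bb: str, oligo_len: int = 60):
    """Split a building block (BB) into un-gapped oligos.

    Top-strand oligos tile the sequence end to end with no gaps;
    bottom-strand oligos are offset by half an oligo length, so each
    top-strand nick is bridged by ~30 bp of complementary DNA.
    """
    half = oligo_len // 2
    top = [bb[i:i + oligo_len] for i in range(0, len(bb), oligo_len)]
    bottom = [revcomp(bb[i:i + oligo_len])
              for i in range(half, len(bb), oligo_len)]
    # Leading half-length oligo covers the 5' end of the bottom strand.
    bottom = [revcomp(bb[:half])] + bottom
    return top, bottom
```

With a 240bp BB and 60bp oligos, the top strand has nicks at 60, 120, and 180, each sitting in the middle of a bottom-strand oligo, which is what gives the ligase a fully annealed junction to seal.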

COMPLETE OVERLAP PROTOCOL

DILUTION PROCEDURE/ GENERAL OPTIMIZATION

2009 ACCOMPLISHMENTS

ANALYSIS- Clone QC

Examples of ClustalW alignments for the 3R.3_23.C2.01 clones

MUTATIONS TABLE

Results

From our construction successes (< 95%) and our lack of perfect clones, we must conclude that our experiments gave a mixed result. This method has advantages and disadvantages that must both be accounted for in order to assess the ultimate efficiency and accuracy of the protocol.

First and foremost, this is a very costly protocol (refer to Jasper’s report), and even with one pass through the FPCR-to-CSPCR workflow, the price of the reagents cannot be reconciled with the lack of successes in our sequencing results. We also use more oligonucleotides, since overlapping every part of the BB requires more of them. This extra cost cannot be offset by the increased efficiency: our results suggest that even with the new protocol, not a single perfect clone could be found among the 12 submitted clones. The increased accuracy was supposed to improve the regions where the overlapped segments met.

We must also take into account the relative difficulty these BBs pose in their construction: they are all BBs that failed in previous years, and quite a few are over 800bp in length. From our sequence data, transversions were the most common type of mutation found in the LCR sequences, followed by deletions and then transitions. These mutations occurred at effectively random positions and never at the same place among copies of the same BB. All the highlighted areas were based on the .abi sequence files that had clear, well-defined quality ratings for each mutation. All in all, these BBs were poor examples relative to the design and composition of BBs encountered by BAG students over the course of a semester.

The time the LCR takes to run is staggering, considering the workflow jams that could occur if more PCR machines are not introduced into the classroom. Shortened annealing times, 5 minutes instead of 15, should be tested; this would require a comparative study on the same BBs.

Next time, a negative control is also needed at every step, in order to determine whether the LCR products arise from contaminated reagents.

This protocol is excellent for pushing BBs through the FPCR stage. Its ultimate accuracy is still up for debate, as our sample is too poor to draw any reasonable conclusions from. Ultimately, it may be that the commercial market has not yet made LCR reagents a low-cost alternative to the Taq-polymerase-driven reactions of our TPCR.
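The mutation spectrum reported above distinguishes transitions, transversions, and deletions. A minimal sketch of how such a tally can be computed from two aligned sequences follows; `classify_substitution` and `tally` are hypothetical helpers, not part of the BAG analysis pipeline.

```python
# Classify point mutations as transitions or transversions, and count
# them alongside deletions, as in the mutation tally above.
# Hypothetical helpers; not part of the BAG pipeline.

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify_substitution(ref: str, obs: str) -> str:
    """'transition' (purine<->purine or pyrimidine<->pyrimidine)
    or 'transversion' (purine<->pyrimidine)."""
    if ref == obs:
        raise ValueError("not a substitution")
    same_class = ({ref, obs} <= PURINES) or ({ref, obs} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

def tally(ref_seq: str, obs_seq: str) -> dict:
    """Count mutation types between two aligned, equal-length sequences,
    where '-' in the observed sequence marks a deletion."""
    counts = {"transition": 0, "transversion": 0, "deletion": 0}
    for r, o in zip(ref_seq, obs_seq):
        if o == "-":
            counts["deletion"] += 1
        elif r != o:
            counts[classify_substitution(r, o)] += 1
    return counts
```

Run over each clone's ClustalW alignment against the reference, a tally like this would reproduce the ordering reported above (transversions, then deletions, then transitions) if that is indeed the spectrum.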

Discussion
