<p>To efficiently predict the staple protein structures, we familiarized ourselves with supercomputing infrastructure and gained access to the bwForCluster Helix. Using the AlphaFold2 (AF2) module on the cluster, we predicted protein multimers (Jumper <i class="italic">et al.</i> 2021).</p>
<pclass="figcaption">Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873). https://doi.org/10.1038/s41586-021-03819-2</p>
</div>
<divclass="text-block block-text">
...
...
<divclass="text-block block-text">
<h3class="Build fs-h3 fw-medium">Build</h3>
<divclass="sm-vl"></div>
<p>We ran RoseTTAFold predictions on the bwForCluster Helix, using the open-source RoseTTAFold All-Atom (RFAA) package (Baek <i class="italic">et al.</i> 2023).</p>
<p class="figcaption">Baek, M., McHugh, R., Anishchenko, I., Jiang, H., Baker, D., & DiMaio, F. (2023). Accurate prediction of protein–nucleic acid complexes using RoseTTAFoldNA. Nature Methods, 21(1). https://doi.org/10.1038/s41592-023-02086-5</p>
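<p>In practice, we submit such predictions as batch jobs. The following Python sketch is illustrative only: the partition name, resource requests, and the RFAA command line are assumptions that should be checked against the Helix documentation and the RFAA repository README.</p>
<pre><code># Minimal sketch: submitting an RFAA prediction as a SLURM batch job.
# Partition name, resources, and the RFAA invocation are assumptions;
# the actual inference command follows the RFAA repository README.
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=rfaa_staple
#SBATCH --partition=gpu-single    # placeholder partition name
#SBATCH --gres=gpu:1
#SBATCH --time=12:00:00

# placeholder: RFAA inference command as documented in the RFAA repository
python -m rf2aa.run_inference --config-name protein
"""

with open("rfaa_job.sh", "w") as fh:
    fh.write(job_script)

subprocess.run(["sbatch", "rfaa_job.sh"], check=True)
</code></pre>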
</div>
...
...
<divclass="text-block block-text">
<h3class="Build fs-h3 fw-medium">Build</h3>
<divclass="sm-vl"></div>
<p>To optimize the runtime, we tuned the hardware configuration and how specific calculations were distributed across processes in GROMACS. On the bwForCluster Helix HPC system we implemented strategies for GPU acceleration and multi-node scalability <i class="italic">(Massively Improved Multi-Node NVIDIA GPU Scalability with GROMACS, 2023)</i>.</p>
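<p>A minimal sketch of these settings, assuming a single-node run with a recent GROMACS build: the input prefix and thread count are placeholders, while the offload flags and the direct-GPU-communication variable follow the cited guidance.</p>
<pre><code># Minimal sketch of the GPU-offload configuration described above, expressed
# as a Python wrapper around gmx mdrun. Input names and thread counts are
# placeholders; the environment variable follows the cited NVIDIA/GROMACS
# guidance on direct GPU communication.
import os
import subprocess

env = dict(os.environ)
env["GMX_ENABLE_DIRECT_GPU_COMM"] = "1"  # direct GPU-to-GPU communication

cmd = [
    "gmx", "mdrun",
    "-deffnm", "davinci_run",  # placeholder input/output file prefix
    "-nb", "gpu",              # nonbonded interactions on the GPU
    "-pme", "gpu",             # PME long-range electrostatics on the GPU
    "-bonded", "gpu",          # bonded forces on the GPU
    "-update", "gpu",          # integration and constraints on the GPU
    "-ntomp", "8",             # OpenMP threads per rank (placeholder)
]
subprocess.run(cmd, env=env, check=True)
</code></pre>
<p>For multi-node runs, the same settings carry over to an MPI-enabled build (e.g. <i class="italic">gmx_mpi mdrun</i> launched via srun), typically with one rank per GPU.</p>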
...
<br><br>
By combining the strengths of the aforementioned specialized tools with our customized adaptor and scoring functions, we created a <b>unified DaVinci modeling pipeline</b> that handles both local and long-range interactions and transforms static predictions into dynamic simulations. By exchanging information between DaVinci and PICasSO throughout their development, we <b>created a rapid engineering cycle</b>: we tested ideas <i class="italic">in silico</i>, applied them in the lab, and fed the experimental data back into the computational pipeline. By engineering and observing enhancer hijacking with this approach, we <b>demonstrate that DaVinci works successfully</b>.
<br><br>
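<p>Conceptually, this cycle can be summarized in a few lines of Python. The sketch below is purely illustrative: every name in it is hypothetical and stands in for the corresponding DaVinci component, not its actual API.</p>
<pre><code># Purely illustrative sketch of the DaVinci engineering cycle described
# above; all function and class names here are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Candidate:
    sequence: str
    score: float = 0.0

def predict_structure(seq):        # stand-in for the AF2/RFAA prediction step
    ...

def simulate_dynamics(structure):  # stand-in for the GROMACS simulation step
    ...

def score(trajectory):             # stand-in for our custom scoring functions
    return 0.0

def engineering_cycle(candidates):
    """One in silico pass: predict, simulate, and score each candidate, then
    rank them for the lab; experimental results re-enter as new candidates."""
    for c in candidates:
        structure = predict_structure(c.sequence)
        trajectory = simulate_dynamics(structure)
        c.score = score(trajectory)
    return sorted(candidates, key=lambda c: c.score, reverse=True)
</code></pre>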
DaVinci runs on the <b>high-performance computing cluster bwForCluster Helix</b>, which is accessible free of charge to academic institutions in the state of Baden-Württemberg.