diff --git a/core/core.html b/core/core.html index c70262a..e25024f 100644 --- a/core/core.html +++ b/core/core.html @@ -341,7 +341,7 @@

Core Data Analysis

Published
-

3 December, 2023

+

4 December, 2023

diff --git a/core/week-1/overview.html b/core/week-1/overview.html index 060ba32..9d58f5b 100644 --- a/core/week-1/overview.html +++ b/core/week-1/overview.html @@ -337,7 +337,7 @@

Overview

Published
-

3 December, 2023

+

4 December, 2023

diff --git a/core/week-1/study_after_workshop.html b/core/week-1/study_after_workshop.html index e637a2e..07d6ade 100644 --- a/core/week-1/study_after_workshop.html +++ b/core/week-1/study_after_workshop.html @@ -337,7 +337,7 @@

Independent Study to consolidate this week

Published
-

3 December, 2023

+

4 December, 2023

diff --git a/core/week-1/study_before_workshop.html b/core/week-1/study_before_workshop.html index 39bb405..2a8eb25 100644 --- a/core/week-1/study_before_workshop.html +++ b/core/week-1/study_before_workshop.html @@ -330,7 +330,7 @@

Independent Study to prepare for workshop

Published
-

3 December, 2023

+

4 December, 2023

diff --git a/core/week-1/workshop.html b/core/week-1/workshop.html index 305d040..8e94433 100644 --- a/core/week-1/workshop.html +++ b/core/week-1/workshop.html @@ -390,7 +390,7 @@

Workshop

Published
-

3 December, 2023

+

4 December, 2023

diff --git a/core/week-11/overview.html b/core/week-11/overview.html index 3a12ef4..5f10e68 100644 --- a/core/week-11/overview.html +++ b/core/week-11/overview.html @@ -337,7 +337,7 @@

Overview

Published
-

3 December, 2023

+

4 December, 2023

@@ -350,31 +350,27 @@

Overview

xxxxx

Learning objectives

+

The successful student will be able to:

Instructions

    -
  1. Prepare

    -
      -
    1. 📖 Read
    2. -
  2. -
  3. Workshop

    -
      -
    1. 💻 dd.

    2. -
    3. 💻 ddd

    4. -
    5. 💻 ddd

    6. -
  4. -
  5. Consolidate

    -
      -
    1. 💻 dd

    2. -
    3. 💻 dd

    4. -
  6. +
  7. Prepare

  8. +
  9. Workshop

diff --git a/core/week-11/study_after_workshop.html b/core/week-11/study_after_workshop.html index e7d6b3a..3a5fad9 100644 --- a/core/week-11/study_after_workshop.html +++ b/core/week-11/study_after_workshop.html @@ -341,7 +341,7 @@

Independent Study to consolidate this week

Published
-

3 December, 2023

+

4 December, 2023

diff --git a/core/week-11/study_before_workshop.html b/core/week-11/study_before_workshop.html index 98b6079..d75f2c8 100644 --- a/core/week-11/study_before_workshop.html +++ b/core/week-11/study_before_workshop.html @@ -384,51 +384,55 @@

Independent Study to prepare for workshop

-

3 December, 2023

+

4 December, 2023

Module assessment

This module is assessed by:

-

These slides are a guide to Research compendium.

-

What is a Research Compendium?

Overview of assessment

+

Stage 3 Integrated Masters students are expected to submit a Research Compendium that is a documented collection of all the digital parts of the research project including data (or access to data), code and outputs. The collection is organised and documented in such a way that reproducing all the results is straightforward for another individual.

Students will be assessed on the technical complexity, completeness and organisation of their compendium and the completeness, reproducibility and clarity of their documentation at the project and code/process level. Marking will focus on the reproducibility of the results and the clarity of the decision-making processes rather than on the interpretation of the results, which is covered in the report. There is no word or size limit for any part of the compendium but its contents should be concise and minimal. Extraneous text, code or files will be penalised.

+

What is a Research Compendium?

Overview of assessment

+

Stage 3 Integrated Masters students are expected to submit a Research Compendium that is a documented collection of all the digital parts of the research project including data (or access to data), code and outputs. The collection is organised and documented in such a way that reproducing all the results is straightforward for another individual.

Students will be assessed on the technical complexity, completeness and organisation of their compendium and the completeness, reproducibility and clarity of their documentation at the project and code/process level. Marking will focus on the reproducibility of the results and the clarity of the decision-making processes rather than on the interpretation of the results, which is covered in the report. There is no word or size limit for any part of the compendium but its contents should be concise and minimal. Extraneous text, code or files will be penalised.

+

What is a Research Compendium?

+
    -
  • Zipped folder containing all data, code, and text associated with a research project organised and documented clearly. Any unscripted processing should be described.

  • -
  • Everything needed to reproduce the results, and no more. The compendium should not be a dumping ground for data files and scripts. It needs to be curated. You may generate files that are not needed to reproduce your work and these should be removed.

  • +
  • Zipped folder containing all data, code and text associated with a research project organised and documented clearly. Any unscripted processing should be described.

  • +
  • Everything needed to understand what the project is and reproduce the results, and no more. The compendium should not be a dumping ground for data files and scripts. It needs to be curated. You may generate files that are not needed to reproduce your work and these should be removed.

  • Your compendium might be a single Quarto/RStudio Project, or it might be a folder including an RStudio Project and some additional materials, such as the description of unscripted processing.

  • Ideally uses literate programming to create the submitted report

+

Use guidelines from Core 1 and 2

@@ -437,7 +441,7 @@

Project level documentation

diff --git a/core/week-11/workshop.html b/core/week-11/workshop.html index a333272..98e1b04 100644 --- a/core/week-11/workshop.html +++ b/core/week-11/workshop.html @@ -18,6 +18,40 @@ margin: 0 0.8em 0.2em -1em; /* quarto-specific, see https://github.com/quarto-dev/quarto-cli/issues/4556 */ vertical-align: middle; } +/* CSS for syntax highlighting */ +pre > code.sourceCode { white-space: pre; position: relative; } +pre > code.sourceCode > span { display: inline-block; line-height: 1.25; } +pre > code.sourceCode > span:empty { height: 1.2em; } +.sourceCode { overflow: visible; } +code.sourceCode > span { color: inherit; text-decoration: inherit; } +div.sourceCode { margin: 1em 0; } +pre.sourceCode { margin: 0; } +@media screen { +div.sourceCode { overflow: auto; } +} +@media print { +pre > code.sourceCode { white-space: pre-wrap; } +pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; } +} +pre.numberSource code + { counter-reset: source-line 0; } +pre.numberSource code > span + { position: relative; left: -4em; counter-increment: source-line; } +pre.numberSource code > span > a:first-child::before + { content: counter(source-line); + position: relative; left: -1em; text-align: right; vertical-align: baseline; + border: none; display: inline-block; + -webkit-touch-callout: none; -webkit-user-select: none; + -khtml-user-select: none; -moz-user-select: none; + -ms-user-select: none; user-select: none; + padding: 0 4px; width: 4em; + } +pre.numberSource { margin-left: 3em; padding-left: 4px; } +div.sourceCode + { } +@media screen { +pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; } +} /* CSS for citations */ div.csl-bib-body { } div.csl-entry { @@ -343,7 +377,7 @@

Workshop

Published
-

3 December, 2023

+

4 December, 2023

@@ -360,15 +394,19 @@

Workshop

Session overview

In this workshop we will go through an example quarto document. You will learn:

Exercise

-

🎬 make a start on your compendium

-

🎬 make a start on a quarto doc

+

🎬 There is an example RStudio project containing this code here: chaffinch. You can download the project as a zip file from there, but there is some code that will do that automatically for you. Since this is an RStudio Project, do not run the code from inside another project. You may want to navigate to a particular directory or edit the destdir:

+
usethis::use_course(url = "3mmaRand/chaffinch", destdir = ".")
+

You can agree to delete the zip. You should find RStudio restarts and you have a new project called chaffinch-xxxxxx. The xxxxxx is a commit reference; you do not need to worry about that, it is just a way of telling you which version of the repo you downloaded. You can now run the code in the project.

+

🎬 Make an outline of your compendium. This could be a sketch on paper or a slide, or a mindmap made with the software you usually use. Or it could be a skeleton of folders and files on your computer.

+

🎬 Make a start on a quarto doc.
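If you choose the folder-and-file skeleton option for your outline, it can be sketched from the shell in a few commands. The folder and file names below are hypothetical examples, not a required layout; adapt them to your own project:

```shell
# Hypothetical compendium skeleton - adapt the names to your project
mkdir -p compendium/data-raw compendium/data-processed compendium/figures compendium/R
touch compendium/README.md compendium/report.qmd
# List the skeleton to check the structure
ls -R compendium
```

You could equally create the same structure from the RStudio Files pane or with `dir.create()` in R; the point is to commit to an organisation before you start filling it.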

You’re finished!

🥳 Well Done! 🎉

Independent study following the workshop

diff --git a/core/week-2/overview.html b/core/week-2/overview.html index 643f4d3..3beebf7 100644 --- a/core/week-2/overview.html +++ b/core/week-2/overview.html @@ -328,7 +328,7 @@

Overview

Published
-

3 December, 2023

+

4 December, 2023

diff --git a/core/week-2/study_after_workshop.html b/core/week-2/study_after_workshop.html index d26aa3a..8919168 100644 --- a/core/week-2/study_after_workshop.html +++ b/core/week-2/study_after_workshop.html @@ -328,7 +328,7 @@

Independent Study to consolidate this week

Published
-

3 December, 2023

+

4 December, 2023

diff --git a/core/week-2/study_before_workshop.html b/core/week-2/study_before_workshop.html index 4fc0405..e15c783 100644 --- a/core/week-2/study_before_workshop.html +++ b/core/week-2/study_before_workshop.html @@ -444,7 +444,7 @@ -

3 December, 2023

+

4 December, 2023

Overview

+
+

On the vertical axis are genes which are differentially expressed at the 0.01 level. On the horizontal axis are samples. We can see that the FGF-treated samples cluster together and the control samples cluster together. We can also see two clusters of genes: one shows genes upregulated (more yellow) in the FGF-treated samples (the pink cluster) and the other shows genes downregulated (more blue) in the FGF-treated samples (the blue cluster).

@@ -1022,8 +1022,8 @@

Workshop

labRow = rownames(mat), heatmap_layers = theme(axis.line = element_blank()))
-
- +
+

It will take a minute to run and display. On the vertical axis are genes which are differentially expressed at the 0.01 level. On the horizontal axis are cells. We can see that cells of the same type don't cluster that well together. We can also see two clusters of genes, but the pattern of gene expression is not as clear as it was for the frogs and the correspondence with the cell clusters is not as strong.

diff --git a/omics/week-5/workshop_files/figure-html/unnamed-chunk-33-1.png b/omics/week-5/workshop_files/figure-html/unnamed-chunk-33-1.png index 8925d31..b0a7a16 100644 Binary files a/omics/week-5/workshop_files/figure-html/unnamed-chunk-33-1.png and b/omics/week-5/workshop_files/figure-html/unnamed-chunk-33-1.png differ diff --git a/omics/week-5/workshop_files/figure-html/unnamed-chunk-65-1.png b/omics/week-5/workshop_files/figure-html/unnamed-chunk-65-1.png index a876699..7620f9c 100644 Binary files a/omics/week-5/workshop_files/figure-html/unnamed-chunk-65-1.png and b/omics/week-5/workshop_files/figure-html/unnamed-chunk-65-1.png differ diff --git a/search.json b/search.json index 74f7266..0e07722 100644 --- a/search.json +++ b/search.json @@ -4,14 +4,14 @@ "href": "structures/structures.html", "title": "Structure Data Analysis for Group Project", "section": "", - "text": "There is an RStudio project containing a Quarto version of the the Antibody Mimetics Workshop by Michael Plevin & Jon Agirre. Instructions to obtain the RStudio project are at the bottom of this document after the set up instructions.\nYou might find RStudio useful for Python because you are already familiar with it. It is also a good way to create Quarto documents with code chunks in more than one language. Quarto documents can be used in RStudio, VS Code or Jupyter notebooks\nSome set up is required before you will be able to execute code in antibody_mimetics_workshop_3.qmd. This in contrast to the Colab notebook which is a cloud-based Jupyter notebook and does not require any set up (except installing packages).\n\n🎬 If using your own machine, install Python from https://www.python.org/downloads/. This should not be necessary if you are using a university machine where Python is already installed.\n🎬 If using your own machine and you did not install Quarto in the Core 1 workshop, install it now from https://quarto.org/docs/get-started/. 
This should not be necessary if you are using a university machine where quarto is already installed.\n🎬 Open RStudio and check you are using a “Git bash” Terminal: Tools | Global Options| Terminal | New Terminal opens with… . If the option to choose Git bash, you will need to install Git from https://git-scm.com/downloads. Quit RStudio first. This should not be necessary if you are using a university machine where Git bash is already installed.\n🎬 If on your own machine: In RStudio, install the quarto and the recticulate packages. This should not be necessary if you are using a university machine where these packages are already installed.\n🎬 Whether you are using your own machine or a university machine, you need to install some python packages. In RStudio and go to the Terminal window (behind the Console window). Run the following commands in the Terminal window:\npython -m pip install --upgrade pip setuptools wheel\nYou may get these warnings about scripts not being on the path. You can ignore these.\n WARNING: The script wheel.exe is installed in 'C:\\Users\\er13\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\n WARNING: The scripts pip.exe, pip3.11.exe, pip3.9.exe and pip3.exe are installed in 'C:\\Users\\er13\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\nspyder 5.1.5 requires pyqt5<5.13, which is not installed.\nspyder 5.1.5 requires pyqtwebengine<5.13, which is not installed.\nconda-repo-cli 1.0.4 requires pathlib, which is not installed.\nanaconda-project 0.10.2 requires ruamel-yaml, which is not installed.\nSuccessfully installed pip-23.3.1 setuptools-69.0.2 wheel-0.41.3\npython -m pip install session_info\npython -m pip install wget\npython -m pip install gemmi\nNote: On my windows laptop at home, I also had to install C++ Build Tools to be able to install the gemmi python package. If this is true for you, you will get a fail message telling you to install C++ build tools if you need them. These are from https://visualstudio.microsoft.com/visual-cpp-build-tools/ You need to check the Workloads tab and select C++ build tools.\n\nYou can then install the gemmi package again.\nI think that’s it! You can now download the RStudio project and run each chunk in the quarto document.\nThere is an example RStudio project here: structure-analysis. You can also download the project as a zip file from there but there is some code that will do that automatically for you. Since this is an RStudio Project, do not run the code from inside a project:\n\nusethis::use_course(url = \"3mmaRand/structure-analysis\")\n\nYou can agree to deleting the zip. You should find RStudio restarts and you have a new project called structure-analysis-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repo you downloaded.\nYou should be able to open the antibody_mimetics_workshop_3.qmd file and run each chunk. You can also knit the document to html." + "text": "There is an RStudio project containing a Quarto version of the the Antibody Mimetics Workshop by Michael Plevin & Jon Agirre. 
Instructions to obtain the RStudio project are at the bottom of this document after the set up instructions.\nYou might find RStudio useful for Python because you are already familiar with it. It is also a good way to create Quarto documents with code chunks in more than one language. Quarto documents can be used in RStudio, VS Code or Jupyter notebooks\nSome set up is required before you will be able to execute code in antibody_mimetics_workshop_3.qmd. This in contrast to the Colab notebook which is a cloud-based Jupyter notebook and does not require any set up (except installing packages).\n\n🎬 If using your own machine, install Python from https://www.python.org/downloads/. This should not be necessary if you are using a university machine where Python is already installed.\n🎬 If using your own machine and you did not install Quarto in the Core 1 workshop, install it now from https://quarto.org/docs/get-started/. This should not be necessary if you are using a university machine where quarto is already installed.\n🎬 Open RStudio and check you are using a “Git bash” Terminal: Tools | Global Options| Terminal | New Terminal opens with… . If the option to choose Git bash, you will need to install Git from https://git-scm.com/downloads. Quit RStudio first. This should not be necessary if you are using a university machine where Git bash is already installed.\n🎬 If on your own machine: In RStudio, install the quarto and the recticulate packages. This should not be necessary if you are using a university machine where these packages are already installed.\n🎬 Whether you are using your own machine or a university machine, you need to install some python packages. In RStudio and go to the Terminal window (behind the Console window). Run the following commands in the Terminal window:\npython -m pip install --upgrade pip setuptools wheel\nYou may get these warnings about scripts not being on the path. 
You can ignore these.\n WARNING: The script wheel.exe is installed in 'C:\\Users\\er13\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\n WARNING: The scripts pip.exe, pip3.11.exe, pip3.9.exe and pip3.exe are installed in 'C:\\Users\\er13\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\nspyder 5.1.5 requires pyqt5<5.13, which is not installed.\nspyder 5.1.5 requires pyqtwebengine<5.13, which is not installed.\nconda-repo-cli 1.0.4 requires pathlib, which is not installed.\nanaconda-project 0.10.2 requires ruamel-yaml, which is not installed.\nSuccessfully installed pip-23.3.1 setuptools-69.0.2 wheel-0.41.3\npython -m pip install session_info\npython -m pip install wget\npython -m pip install gemmi\nNote: On my windows laptop at home, I also had to install C++ Build Tools to be able to install the gemmi python package. If this is true for you, you will get a fail message telling you to install C++ build tools if you need them. These are from https://visualstudio.microsoft.com/visual-cpp-build-tools/ You need to check the Workloads tab and select C++ build tools.\n\nYou can then install the gemmi package again.\nI think that’s it! You can now download the RStudio project and run each chunk in the quarto document.\nThere is an example RStudio project here: structure-analysis. You can also download the project as a zip file from there but there is some code that will do that automatically for you. Since this is an RStudio Project, do not run the code from inside a project. 
You may want to navigate to a particular directory or edit the destdir:\n\nusethis::use_course(url = \"3mmaRand/structure-analysis\", destdir = \".\")\n\nYou can agree to deleting the zip. You should find RStudio restarts and you have a new project called structure-analysis-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repo you downloaded.\nYou should be able to open the antibody_mimetics_workshop_3.qmd file and run each chunk. You can also knit the document to html." }, { "objectID": "structures/structures.html#programmatic-protein-structure-analysis", "href": "structures/structures.html#programmatic-protein-structure-analysis", "title": "Structure Data Analysis for Group Project", "section": "", - "text": "There is an RStudio project containing a Quarto version of the the Antibody Mimetics Workshop by Michael Plevin & Jon Agirre. Instructions to obtain the RStudio project are at the bottom of this document after the set up instructions.\nYou might find RStudio useful for Python because you are already familiar with it. It is also a good way to create Quarto documents with code chunks in more than one language. Quarto documents can be used in RStudio, VS Code or Jupyter notebooks\nSome set up is required before you will be able to execute code in antibody_mimetics_workshop_3.qmd. This in contrast to the Colab notebook which is a cloud-based Jupyter notebook and does not require any set up (except installing packages).\n\n🎬 If using your own machine, install Python from https://www.python.org/downloads/. This should not be necessary if you are using a university machine where Python is already installed.\n🎬 If using your own machine and you did not install Quarto in the Core 1 workshop, install it now from https://quarto.org/docs/get-started/. 
This should not be necessary if you are using a university machine where quarto is already installed.\n🎬 Open RStudio and check you are using a “Git bash” Terminal: Tools | Global Options| Terminal | New Terminal opens with… . If the option to choose Git bash, you will need to install Git from https://git-scm.com/downloads. Quit RStudio first. This should not be necessary if you are using a university machine where Git bash is already installed.\n🎬 If on your own machine: In RStudio, install the quarto and the recticulate packages. This should not be necessary if you are using a university machine where these packages are already installed.\n🎬 Whether you are using your own machine or a university machine, you need to install some python packages. In RStudio and go to the Terminal window (behind the Console window). Run the following commands in the Terminal window:\npython -m pip install --upgrade pip setuptools wheel\nYou may get these warnings about scripts not being on the path. You can ignore these.\n WARNING: The script wheel.exe is installed in 'C:\\Users\\er13\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\n WARNING: The scripts pip.exe, pip3.11.exe, pip3.9.exe and pip3.exe are installed in 'C:\\Users\\er13\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\nspyder 5.1.5 requires pyqt5<5.13, which is not installed.\nspyder 5.1.5 requires pyqtwebengine<5.13, which is not installed.\nconda-repo-cli 1.0.4 requires pathlib, which is not installed.\nanaconda-project 0.10.2 requires ruamel-yaml, which is not installed.\nSuccessfully installed pip-23.3.1 setuptools-69.0.2 wheel-0.41.3\npython -m pip install session_info\npython -m pip install wget\npython -m pip install gemmi\nNote: On my windows laptop at home, I also had to install C++ Build Tools to be able to install the gemmi python package. If this is true for you, you will get a fail message telling you to install C++ build tools if you need them. These are from https://visualstudio.microsoft.com/visual-cpp-build-tools/ You need to check the Workloads tab and select C++ build tools.\n\nYou can then install the gemmi package again.\nI think that’s it! You can now download the RStudio project and run each chunk in the quarto document.\nThere is an example RStudio project here: structure-analysis. You can also download the project as a zip file from there but there is some code that will do that automatically for you. Since this is an RStudio Project, do not run the code from inside a project:\n\nusethis::use_course(url = \"3mmaRand/structure-analysis\")\n\nYou can agree to deleting the zip. You should find RStudio restarts and you have a new project called structure-analysis-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repo you downloaded.\nYou should be able to open the antibody_mimetics_workshop_3.qmd file and run each chunk. You can also knit the document to html." + "text": "There is an RStudio project containing a Quarto version of the the Antibody Mimetics Workshop by Michael Plevin & Jon Agirre. 
Instructions to obtain the RStudio project are at the bottom of this document after the set up instructions.\nYou might find RStudio useful for Python because you are already familiar with it. It is also a good way to create Quarto documents with code chunks in more than one language. Quarto documents can be used in RStudio, VS Code or Jupyter notebooks\nSome set up is required before you will be able to execute code in antibody_mimetics_workshop_3.qmd. This in contrast to the Colab notebook which is a cloud-based Jupyter notebook and does not require any set up (except installing packages).\n\n🎬 If using your own machine, install Python from https://www.python.org/downloads/. This should not be necessary if you are using a university machine where Python is already installed.\n🎬 If using your own machine and you did not install Quarto in the Core 1 workshop, install it now from https://quarto.org/docs/get-started/. This should not be necessary if you are using a university machine where quarto is already installed.\n🎬 Open RStudio and check you are using a “Git bash” Terminal: Tools | Global Options| Terminal | New Terminal opens with… . If the option to choose Git bash, you will need to install Git from https://git-scm.com/downloads. Quit RStudio first. This should not be necessary if you are using a university machine where Git bash is already installed.\n🎬 If on your own machine: In RStudio, install the quarto and the recticulate packages. This should not be necessary if you are using a university machine where these packages are already installed.\n🎬 Whether you are using your own machine or a university machine, you need to install some python packages. In RStudio and go to the Terminal window (behind the Console window). Run the following commands in the Terminal window:\npython -m pip install --upgrade pip setuptools wheel\nYou may get these warnings about scripts not being on the path. 
You can ignore these.\n WARNING: The script wheel.exe is installed in 'C:\\Users\\er13\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\n WARNING: The scripts pip.exe, pip3.11.exe, pip3.9.exe and pip3.exe are installed in 'C:\\Users\\er13\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\n Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\nspyder 5.1.5 requires pyqt5<5.13, which is not installed.\nspyder 5.1.5 requires pyqtwebengine<5.13, which is not installed.\nconda-repo-cli 1.0.4 requires pathlib, which is not installed.\nanaconda-project 0.10.2 requires ruamel-yaml, which is not installed.\nSuccessfully installed pip-23.3.1 setuptools-69.0.2 wheel-0.41.3\npython -m pip install session_info\npython -m pip install wget\npython -m pip install gemmi\nNote: On my windows laptop at home, I also had to install C++ Build Tools to be able to install the gemmi python package. If this is true for you, you will get a fail message telling you to install C++ build tools if you need them. These are from https://visualstudio.microsoft.com/visual-cpp-build-tools/ You need to check the Workloads tab and select C++ build tools.\n\nYou can then install the gemmi package again.\nI think that’s it! You can now download the RStudio project and run each chunk in the quarto document.\nThere is an example RStudio project here: structure-analysis. You can also download the project as a zip file from there but there is some code that will do that automatically for you. Since this is an RStudio Project, do not run the code from inside a project. 
You may want to navigate to a particular directory or edit the destdir:\n\nusethis::use_course(url = \"3mmaRand/structure-analysis\", destdir = \".\")\n\nYou can agree to deleting the zip. You should find RStudio restarts and you have a new project called structure-analysis-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repo you downloaded.\nYou should be able to open the antibody_mimetics_workshop_3.qmd file and run each chunk. You can also knit the document to html." }, { "objectID": "core/week-2/workshop.html", @@ -60,7 +60,7 @@ "href": "core/week-2/workshop.html#rstudio-terminal", "title": "Workshop", "section": "RStudio terminal", - "text": "RStudio terminal\nThe RStudio terminal is a convenient interface to the shell without leaving RStudio. It is useful for running commands that are not available in R. For example, you can use it to run other programs like fasqc, git, ftp, ssh\nNavigating your file system\nSeveral commands are frequently used to create, inspect, rename, and delete files and directories.\n$\nThe dollar sign is the prompt (like > on the R console), which shows us that the shell is waiting for input.\nYou can find out where you are using the pwd command, which stands for “print working directory”.\n\npwd\n\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2\n\n\nYou can find out what you can see with ls which stands for “list”.\n\nls\n\ndata\nimages\noverview.qmd\nstudy_after_workshop.html\nstudy_after_workshop.qmd\nstudy_before_workshop.html\nstudy_before_workshop.ipynb\nstudy_before_workshop.qmd\nworkshop.html\nworkshop.qmd\nworkshop.rmarkdown\nworkshop_files\n\n\nYou might have noticed that unlike R, the commands do not have brackets after them. Instead, options (or switches) are given after the command. 
For example, we can modify the ls command to give us more information with the -l option, which stands for “long”.\n\nls -l\n\ntotal 228\ndrwxr-xr-x 2 runner docker 4096 Dec 3 14:01 data\ndrwxr-xr-x 2 runner docker 4096 Dec 3 14:01 images\n-rw-r--r-- 1 runner docker 1597 Dec 3 14:01 overview.qmd\n-rw-r--r-- 1 runner docker 25553 Dec 3 14:05 study_after_workshop.html\n-rw-r--r-- 1 runner docker 184 Dec 3 14:01 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 70839 Dec 3 14:05 study_before_workshop.html\n-rw-r--r-- 1 runner docker 4807 Dec 3 14:01 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13029 Dec 3 14:01 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 58063 Dec 3 14:01 workshop.html\n-rw-r--r-- 1 runner docker 8550 Dec 3 14:01 workshop.qmd\n-rw-r--r-- 1 runner docker 8564 Dec 3 14:05 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4096 Dec 3 14:01 workshop_files\n\n\nYou can use more than one option at once. The -h option stands for “human readable” and makes the file sizes easier to understand for humans:\n\nls -hl\n\ntotal 228K\ndrwxr-xr-x 2 runner docker 4.0K Dec 3 14:01 data\ndrwxr-xr-x 2 runner docker 4.0K Dec 3 14:01 images\n-rw-r--r-- 1 runner docker 1.6K Dec 3 14:01 overview.qmd\n-rw-r--r-- 1 runner docker 25K Dec 3 14:05 study_after_workshop.html\n-rw-r--r-- 1 runner docker 184 Dec 3 14:01 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 70K Dec 3 14:05 study_before_workshop.html\n-rw-r--r-- 1 runner docker 4.7K Dec 3 14:01 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13K Dec 3 14:01 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 57K Dec 3 14:01 workshop.html\n-rw-r--r-- 1 runner docker 8.4K Dec 3 14:01 workshop.qmd\n-rw-r--r-- 1 runner docker 8.4K Dec 3 14:05 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4.0K Dec 3 14:01 workshop_files\n\n\nThe -a option stands for “all” and shows us all the files, including hidden files.\n\nls -alh\n\ntotal 236K\ndrwxr-xr-x 5 runner docker 4.0K Dec 3 14:05 .\ndrwxr-xr-x 6 
runner docker 4.0K Dec 3 14:05 ..\ndrwxr-xr-x 2 runner docker 4.0K Dec 3 14:01 data\ndrwxr-xr-x 2 runner docker 4.0K Dec 3 14:01 images\n-rw-r--r-- 1 runner docker 1.6K Dec 3 14:01 overview.qmd\n-rw-r--r-- 1 runner docker 25K Dec 3 14:05 study_after_workshop.html\n-rw-r--r-- 1 runner docker 184 Dec 3 14:01 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 70K Dec 3 14:05 study_before_workshop.html\n-rw-r--r-- 1 runner docker 4.7K Dec 3 14:01 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13K Dec 3 14:01 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 57K Dec 3 14:01 workshop.html\n-rw-r--r-- 1 runner docker 8.4K Dec 3 14:01 workshop.qmd\n-rw-r--r-- 1 runner docker 8.4K Dec 3 14:05 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4.0K Dec 3 14:01 workshop_files\n\n\nYou can move about with the cd command, which stands for “change directory”. You can use it to move into a directory by specifying the path to the directory:\n\ncd data\npwd\ncd ..\npwd\ncd data\npwd\n\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2/data\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2/data\n\n\nhead 1cq2.pdb\nHEADER OXYGEN STORAGE/TRANSPORT 04-AUG-99 1CQ2 \nTITLE NEUTRON STRUCTURE OF FULLY DEUTERATED SPERM WHALE MYOGLOBIN AT 2.0 \nTITLE 2 ANGSTROM \nCOMPND MOL_ID: 1; \nCOMPND 2 MOLECULE: MYOGLOBIN; \nCOMPND 3 CHAIN: A; \nCOMPND 4 ENGINEERED: YES; \nCOMPND 5 OTHER_DETAILS: PROTEIN IS FULLY DEUTERATED \nSOURCE MOL_ID: 1; \nSOURCE 2 ORGANISM_SCIENTIFIC: PHYSETER CATODON; \nhead -20 data/1cq2.pdb\nHEADER OXYGEN STORAGE/TRANSPORT 04-AUG-99 1CQ2 \nTITLE NEUTRON STRUCTURE OF FULLY DEUTERATED SPERM WHALE MYOGLOBIN AT 2.0 \nTITLE 2 ANGSTROM \nCOMPND MOL_ID: 1; \nCOMPND 2 MOLECULE: MYOGLOBIN; \nCOMPND 3 CHAIN: A; \nCOMPND 4 ENGINEERED: YES; \nCOMPND 5 OTHER_DETAILS: PROTEIN IS FULLY DEUTERATED \nSOURCE MOL_ID: 1; \nSOURCE 2 ORGANISM_SCIENTIFIC: PHYSETER CATODON; \nSOURCE 3 ORGANISM_COMMON: SPERM 
WHALE; \nSOURCE 4 ORGANISM_TAXID: 9755; \nSOURCE 5 EXPRESSION_SYSTEM: ESCHERICHIA COLI; \nSOURCE 6 EXPRESSION_SYSTEM_TAXID: 562; \nSOURCE 7 EXPRESSION_SYSTEM_VECTOR_TYPE: PLASMID; \nSOURCE 8 EXPRESSION_SYSTEM_PLASMID: PET15A \nKEYWDS HELICAL, GLOBULAR, ALL-HYDROGEN CONTAINING STRUCTURE, OXYGEN STORAGE- \nKEYWDS 2 TRANSPORT COMPLEX \nEXPDTA NEUTRON DIFFRACTION \nAUTHOR F.SHU,V.RAMAKRISHNAN,B.P.SCHOENBORN \nless 1cq2.pdb\nless is a program that displays the contents of a file, one page at a time. It is useful for viewing large files because it does not load the whole file into memory before displaying it. Instead, it reads and displays a few lines at a time. You can navigate forward through the file with the spacebar, and backwards with the b key. Press q to quit.\nA wildcard is a character that can be used as a substitute for any of a class of characters in a search, The most common wildcard characters are the asterisk (*) and the question mark (?).\nls *.csv\ncp stands for “copy”. You can copy a file from one directory to another by giving cp the path to the file you want to copy and the path to the destination directory.\ncp 1cq2.pdb copy_of_1cq2.pdb\ncp 1cq2.pdb ../copy_of_1cq2.pdb\ncp 1cq2.pdb ../bob.txt\nTo delete a file use the rm command, which stands for “remove”.\nrm ../bob.txt\nbut be careful because the file will be gone forever. There is no “are you sure?” or undo.\nTo move a file from one directory to another, use the mv command. mv works like cp except that it also deletes the original file.\nmv ../copy_of_1cq2.pdb .\nMake a directory\nmkdir mynewdir" + "text": "RStudio terminal\nThe RStudio terminal is a convenient interface to the shell without leaving RStudio. It is useful for running commands that are not available in R. 
For example, you can use it to run other programs like fastqc, git, ftp, ssh\nNavigating your file system\nSeveral commands are frequently used to create, inspect, rename, and delete files and directories.\n$\nThe dollar sign is the prompt (like > on the R console), which shows us that the shell is waiting for input.\nYou can find out where you are using the pwd command, which stands for “print working directory”.\n\npwd\n\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2\n\n\nYou can find out what you can see with ls which stands for “list”.\n\nls\n\ndata\nimages\noverview.qmd\nstudy_after_workshop.html\nstudy_after_workshop.qmd\nstudy_before_workshop.html\nstudy_before_workshop.ipynb\nstudy_before_workshop.qmd\nworkshop.html\nworkshop.qmd\nworkshop.rmarkdown\nworkshop_files\n\n\nYou might have noticed that unlike R, the commands do not have brackets after them. Instead, options (or switches) are given after the command. For example, we can modify the ls command to give us more information with the -l option, which stands for “long”.\n\nls -l\n\ntotal 228\ndrwxr-xr-x 2 runner docker 4096 Dec 4 10:05 data\ndrwxr-xr-x 2 runner docker 4096 Dec 4 10:05 images\n-rw-r--r-- 1 runner docker 1597 Dec 4 10:05 overview.qmd\n-rw-r--r-- 1 runner docker 25553 Dec 4 10:09 study_after_workshop.html\n-rw-r--r-- 1 runner docker 184 Dec 4 10:05 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 70839 Dec 4 10:09 study_before_workshop.html\n-rw-r--r-- 1 runner docker 4807 Dec 4 10:05 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13029 Dec 4 10:05 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 58063 Dec 4 10:05 workshop.html\n-rw-r--r-- 1 runner docker 8550 Dec 4 10:05 workshop.qmd\n-rw-r--r-- 1 runner docker 8564 Dec 4 10:09 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4096 Dec 4 10:05 workshop_files\n\n\nYou can use more than one option at once. 
The -h option stands for “human readable” and makes the file sizes easier to understand for humans:\n\nls -hl\n\ntotal 228K\ndrwxr-xr-x 2 runner docker 4.0K Dec 4 10:05 data\ndrwxr-xr-x 2 runner docker 4.0K Dec 4 10:05 images\n-rw-r--r-- 1 runner docker 1.6K Dec 4 10:05 overview.qmd\n-rw-r--r-- 1 runner docker 25K Dec 4 10:09 study_after_workshop.html\n-rw-r--r-- 1 runner docker 184 Dec 4 10:05 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 70K Dec 4 10:09 study_before_workshop.html\n-rw-r--r-- 1 runner docker 4.7K Dec 4 10:05 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13K Dec 4 10:05 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 57K Dec 4 10:05 workshop.html\n-rw-r--r-- 1 runner docker 8.4K Dec 4 10:05 workshop.qmd\n-rw-r--r-- 1 runner docker 8.4K Dec 4 10:09 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4.0K Dec 4 10:05 workshop_files\n\n\nThe -a option stands for “all” and shows us all the files, including hidden files.\n\nls -alh\n\ntotal 236K\ndrwxr-xr-x 5 runner docker 4.0K Dec 4 10:09 .\ndrwxr-xr-x 6 runner docker 4.0K Dec 4 10:08 ..\ndrwxr-xr-x 2 runner docker 4.0K Dec 4 10:05 data\ndrwxr-xr-x 2 runner docker 4.0K Dec 4 10:05 images\n-rw-r--r-- 1 runner docker 1.6K Dec 4 10:05 overview.qmd\n-rw-r--r-- 1 runner docker 25K Dec 4 10:09 study_after_workshop.html\n-rw-r--r-- 1 runner docker 184 Dec 4 10:05 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 70K Dec 4 10:09 study_before_workshop.html\n-rw-r--r-- 1 runner docker 4.7K Dec 4 10:05 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13K Dec 4 10:05 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 57K Dec 4 10:05 workshop.html\n-rw-r--r-- 1 runner docker 8.4K Dec 4 10:05 workshop.qmd\n-rw-r--r-- 1 runner docker 8.4K Dec 4 10:09 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4.0K Dec 4 10:05 workshop_files\n\n\nYou can move about with the cd command, which stands for “change directory”. 
You can use it to move into a directory by specifying the path to the directory:\n\ncd data\npwd\ncd ..\npwd\ncd data\npwd\n\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2/data\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2/data\n\n\nhead 1cq2.pdb\nHEADER OXYGEN STORAGE/TRANSPORT 04-AUG-99 1CQ2 \nTITLE NEUTRON STRUCTURE OF FULLY DEUTERATED SPERM WHALE MYOGLOBIN AT 2.0 \nTITLE 2 ANGSTROM \nCOMPND MOL_ID: 1; \nCOMPND 2 MOLECULE: MYOGLOBIN; \nCOMPND 3 CHAIN: A; \nCOMPND 4 ENGINEERED: YES; \nCOMPND 5 OTHER_DETAILS: PROTEIN IS FULLY DEUTERATED \nSOURCE MOL_ID: 1; \nSOURCE 2 ORGANISM_SCIENTIFIC: PHYSETER CATODON; \nhead -20 data/1cq2.pdb\nHEADER OXYGEN STORAGE/TRANSPORT 04-AUG-99 1CQ2 \nTITLE NEUTRON STRUCTURE OF FULLY DEUTERATED SPERM WHALE MYOGLOBIN AT 2.0 \nTITLE 2 ANGSTROM \nCOMPND MOL_ID: 1; \nCOMPND 2 MOLECULE: MYOGLOBIN; \nCOMPND 3 CHAIN: A; \nCOMPND 4 ENGINEERED: YES; \nCOMPND 5 OTHER_DETAILS: PROTEIN IS FULLY DEUTERATED \nSOURCE MOL_ID: 1; \nSOURCE 2 ORGANISM_SCIENTIFIC: PHYSETER CATODON; \nSOURCE 3 ORGANISM_COMMON: SPERM WHALE; \nSOURCE 4 ORGANISM_TAXID: 9755; \nSOURCE 5 EXPRESSION_SYSTEM: ESCHERICHIA COLI; \nSOURCE 6 EXPRESSION_SYSTEM_TAXID: 562; \nSOURCE 7 EXPRESSION_SYSTEM_VECTOR_TYPE: PLASMID; \nSOURCE 8 EXPRESSION_SYSTEM_PLASMID: PET15A \nKEYWDS HELICAL, GLOBULAR, ALL-HYDROGEN CONTAINING STRUCTURE, OXYGEN STORAGE- \nKEYWDS 2 TRANSPORT COMPLEX \nEXPDTA NEUTRON DIFFRACTION \nAUTHOR F.SHU,V.RAMAKRISHNAN,B.P.SCHOENBORN \nless 1cq2.pdb\nless is a program that displays the contents of a file, one page at a time. It is useful for viewing large files because it does not load the whole file into memory before displaying it. Instead, it reads and displays a few lines at a time. You can navigate forward through the file with the spacebar, and backwards with the b key. 
Press q to quit.\nA wildcard is a character that can be used as a substitute for any of a class of characters in a search. The most common wildcard characters are the asterisk (*) and the question mark (?).\nls *.csv\ncp stands for “copy”. You can copy a file from one directory to another by giving cp the path to the file you want to copy and the path to the destination directory.\ncp 1cq2.pdb copy_of_1cq2.pdb\ncp 1cq2.pdb ../copy_of_1cq2.pdb\ncp 1cq2.pdb ../bob.txt\nTo delete a file use the rm command, which stands for “remove”.\nrm ../bob.txt\nbut be careful because the file will be gone forever. There is no “are you sure?” or undo.\nTo move a file from one directory to another, use the mv command. mv works like cp except that it also deletes the original file.\nmv ../copy_of_1cq2.pdb .\nMake a directory\nmkdir mynewdir" }, { "objectID": "core/week-2/workshop.html#differences-between-r-and-python", @@ -375,7 +375,7 @@ "href": "core/week-11/workshop.html", "title": "Workshop", "section": "", - "text": "Literate programming is a way of writing code and text together in a single document\nThe document is then processed to produce a report\nQuarto (recommended) or R Markdown\n\nIn this workshop we will go through an example quarto document. 
You will learn:\n\nwhat the YAML header is\nformatting (bold, italics, headings)\nto control default and individual chunk options\nhow to add citations\nfigures and tables with cross referencing and automatic numbering\nhow to use inline coding to report results\nhow to insert special characters and equations" }, { "objectID": "core/week-11/workshop.html#literate-programming", @@ -389,49 +389,49 @@ "href": "core/week-11/workshop.html#session-overview", "title": "Workshop", "section": "", - "text": "In this workshop we will go through an example quarto document. You will learn:\n\nformatting (bold, italics, headings)\nchunk options to control whether code and output are in the rendered document\nadding citations\nfigures and tables with cross referencing and autonumberiung\ninline coding to report results\nspecial characters and equations" + "text": "In this workshop we will go through an example quarto document. You will learn:\n\nwhat the YAML header is\nformatting (bold, italics, headings)\nto control default and individual chunk options\nhow to add citations\nfigures and tables with cross referencing and automatic numbering\nhow to use inline coding to report results\nhow to insert special characters and equations" }, { "objectID": "core/week-11/study_before_workshop.html#module-assessment", "href": "core/week-11/study_before_workshop.html#module-assessment", "title": "Independent Study to prepare for workshop", "section": "Module assessment", - "text": "Module assessment\nThis module is assessed by:\n\n\nOral presentation 30%\nProject Report and Research Compendium 70% of which\n\n70% report (i.e., 49% of the total mark)\n30% compendium (i.e., 21% of the total mark)\n\n\nThese slides are a guide to Research compendium." 
+ "text": "Module assessment\nThis module is assessed by:\n\nOral presentation 30%\nProject Report and Research Compendium 70% of which\n\n70% report (i.e., 49% of the total mark)\n30% compendium (i.e., 21% of the total mark)\n\n\nThese slides are a guide to Research compendium." }, { "objectID": "core/week-11/study_before_workshop.html#what-is-a-research-compendium", "href": "core/week-11/study_before_workshop.html#what-is-a-research-compendium", "title": "Independent Study to prepare for workshop", "section": "What is a Research Compendium?", - "text": "What is a Research Compendium?\nOverview of assessment\nStage 3 Integrated Masters students are expected to submit a Research Compendium that is a documented collection of all the digital parts of the research project including data (or access to data), code and outputs. The collection is organised and documented in such a way that reproducing all the results is straightforward for another individual.\nStudents will be assessed on the technical complexity, completeness and organisation of their compendium and the completeness, reproducibility and clarity of their documentation at the project and code/process level. Marking will focus on the reproducibility of the results and the clarity of the decision making processes rather than the interpretation of the results which is covered in the report. There is no word or size limit for any part of the compendium but its contents should be concise and minimal. Extraneous text, code or files will be penalised." + "text": "What is a Research Compendium?\nOverview of assessment\n\nStage 3 Integrated Masters students are expected to submit a Research Compendium that is a documented collection of all the digital parts of the research project including data (or access to data), code and outputs. 
The collection is organised and documented in such a way that reproducing all the results is straightforward for another individual.\nStudents will be assessed on the technical complexity, completeness and organisation of their compendium and the completeness, reproducibility and clarity of their documentation at the project and code/process level. Marking will focus on the reproducibility of the results and the clarity of the decision making processes rather than the interpretation of the results which is covered in the report. There is no word or size limit for any part of the compendium but its contents should be concise and minimal. Extraneous text, code or files will be penalised." }, { "objectID": "core/week-11/study_before_workshop.html#what-is-a-research-compendium-1", "href": "core/week-11/study_before_workshop.html#what-is-a-research-compendium-1", "title": "Independent Study to prepare for workshop", "section": "What is a Research Compendium?", - "text": "What is a Research Compendium?\nOverview of assessment\nStage 3 Integrated Masters students are expected to submit a Research Compendium that is a documented collection of all the digital parts of the research project including data (or access to data), code and outputs. The collection is organised and documented in such a way that reproducing all the results is straightforward for another individual.\nStudents will be assessed on the technical complexity, completeness and organisation of their compendium and the completeness, reproducibility and clarity of their documentation at the project and code/process level. Marking will focus on the reproducibility of the results and the clarity of the decision making processes rather than the interpretation of the results which is covered in the report. There is no word or size limit for any part of the compendium but its contents should be concise and minimal. Extraneous text, code or files will be penalised." 
+ "text": "What is a Research Compendium?\nOverview of assessment\n\nStage 3 Integrated Masters students are expected to submit a Research Compendium that is a documented collection of all the digital parts of the research project including data (or access to data), code and outputs. The collection is organised and documented in such a way that reproducing all the results is straightforward for another individual.\nStudents will be assessed on the technical complexity, completeness and organisation of their compendium and the completeness, reproducibility and clarity of their documentation at the project and code/process level. Marking will focus on the reproducibility of the results and the clarity of the decision making processes rather than the interpretation of the results which is covered in the report. There is no word or size limit for any part of the compendium but its contents should be concise and minimal. Extraneous text, code or files will be penalised." }, { "objectID": "core/week-11/study_before_workshop.html#what-is-a-research-compendium-2", "href": "core/week-11/study_before_workshop.html#what-is-a-research-compendium-2", "title": "Independent Study to prepare for workshop", "section": "What is a Research Compendium?", - "text": "What is a Research Compendium?\n\n\nZipped folder containing all data, code, and text associated with a research project organised and documented clearly. Any unscripted processing should be described.\nEverything needed to reproduce the results, and no more. The compendium should not be a dumping ground for data files and scripts. It needs to be curated. 
You may generate files that are not needed to reproduce your work and these should be removed.\nYour compendium might be a single Quarto/RStudio Project, or it might be folder including an RStudio Project and some additional materials including the description of unscripted processing.\nIdeally uses literate programming to create submitted report" + "text": "What is a Research Compendium?\n\n\n\nZipped folder containing all data, code and text associated with a research project organised and documented clearly. Any unscripted processing should be described.\nEverything needed to understand what the project is and reproduce the results, and no more. The compendium should not be a dumping ground for data files and scripts. It needs to be curated. You may generate files that are not needed to reproduce your work and these should be removed.\nYour compendium might be a single Quarto/RStudio Project, or it might be folder including an RStudio Project and some additional materials including the description of unscripted processing.\nIdeally uses literate programming to create submitted report" }, { "objectID": "core/week-11/study_before_workshop.html#use-guidelines-from-core-1-and-2", "href": "core/week-11/study_before_workshop.html#use-guidelines-from-core-1-and-2", "title": "Independent Study to prepare for workshop", "section": "Use guidelines from Core 1 and 2", - "text": "Use guidelines from Core 1 and 2\n\nfollow the guidance in Core 1 on organisation, naming things, and documentation\nfollow the guidance in Core 2 well-formatted code, consistency, modularisation and documentation" + "text": "Use guidelines from Core 1 and 2\n\nfollow the guidance in Core 1 on organisation, naming things and documentation\nfollow the guidance in Core 2 on well-formatted code, consistency, modularisation and documentation" }, { "objectID": "core/week-11/study_before_workshop.html#project-level-documentation", "href": 
"core/week-11/study_before_workshop.html#project-level-documentation", "title": "Independent Study to prepare for workshop", "section": "Project level documentation", - "text": "Project level documentation\n\n\nas concise as possible, bullet points are good\nprimarily in the README file but some details may be in scripts\ntitle, concise description of the work, date, overview of compendium contents\nall the software information including versions\ninstructions needed to reproduce the work, order of workflow, settings/parameter values for software" + "text": "Project level documentation\n\n\nas concise as possible, bullet points are good\nprimarily in the README file but some details may be in scripts\ntitle, concise description of the work, author exam number, date, overview of compendium contents\nall the software information including versions\ninstructions needed to reproduce the work, order of workflow, settings/parameter values for software" }, { "objectID": "core/week-11/study_before_workshop.html#project-level-documentation---cont", @@ -487,14 +487,14 @@ "href": "images/images.html", "title": "Image Data Analysis for Group Project", "section": "", - "text": "The following ImageJ workflow uses the processing steps you used in workshop 3 with one change. That change is to save the results to file rather than having the results window pop up and saving from there. Or maybe two changes: it also tells you to use meaning systematic file names that will be easy to process when importing data. 
The RStudio workflow shows you how to import multiple files into one dataframe with columns indicating the treatment.\n\nSave files with systematic names: ev_0.avi 343_0.avi ev_1.avi 343_1.avi ev_2.5.avi 343_2.5.avi\nOpen ImageJ\nOpen video file eg ev_2.5.avi\n\nConvert to 8-bit: Image | Type | 8-bit\nCrop to petri dish: Select then Image | Crop\nCalculate average pixel intensity: Image | Stacks | Z Project\n\nProjection type: Average Intensity to create AVG_ev_2.5.avi\n\n\n\nSubtract average from image: Process | Image Calculator\n\nImage 1: ev_2.5.avi\n\nOperation: Subtract\nImage 2: AVG_ev_2.5.avi\n\nCreate new window: checked\nOK, Yes to Process all\n\n\nInvert: Edit | Invert\nAdjust threshold: Image | Adjust | Threshold\n\nMethod: Default\nThresholding: Default, B&W\nDark background: checked\nAuto or adjust a little but make sure the larvae do not disappear at later points in the video (use the slider)\nApply\n\n\nInvert: Edit | Invert\nTrack: Plugins | wrMTrck\n\nSet minSize: 10\nSet maxSize: 400\nSet maxVelocity: 10\nSet maxAreaChange: 200\nSet bendThreshold: 1\n\nImportant: check Save Results File This is different to what you did in the workshop. It will help because the results will be saved automatically rather than to saving from the Results window that other pops up. Consequently, you will be able to save the results files with systematic names relating to their treatments and then read them into R simultaneously. That will also allow you to add information from the name of the file (which has the treatment information) to the resulting dataframes\n\n\nwrMTrck window with the settings listed above shown\n\n\nClick OK. Save to a folder for all the tracking data files. 
I recommend deleting the “Results of..” part of the name\n\n\nCheck that the Summary window indicates 3 tracks and that the 3 larvae are what is tracked by using the slider on the Result image\nRepeat for all videos\n\nThis is the code you need to import multiple csv files into a single dataframe and add a column with the treatment information from the file name. This is why systematic file names are good.\nIt assumes\n\nyour files are called type_concentration.txt for example: ev_0.txt 343_0.txt ev_1.txt 343_1.txt ev_2.5.txt 343_2.5.txt.\nthe .txt datafile are in a folder called track inside your working directory\nyou have installed the following packages: tidyverse, janitor\n\n\n🎬 Load the tidyverse\n\nlibrary(tidyverse)\n\n🎬 Put the file names into a vector we will iterate through\n\n# get a vector of the file names\nfiles <- list.files(path = \"track\", full.names = TRUE )\n\nWe can use map_df() from the purrr package which is one of the tidyverse gems loaded with tidyvserse. map_df() will iterate through files and read them into a dataframe with a specified import function. We are using read_table(). map_df() keeps track of the file by adding an index column called file to the resulting dataframe. Instead of this being a number (1 - 6 here) we can use set_names() to use the file names instead. The clean_names() function from the janitor package will clean up the column names (make them lower case, replace spaces with _ remove special characters etc)\n🎬 Import multiple csv files into one dataframe called tracking\n\n# import multiple data files into one dataframe called tracking\n# using map_df() from purrr package\n# clean the column names up using janitor::clean_names()\ntracking <- files |> \n set_names() |>\n map_dfr(read_table, .id = \"file\") |>\n janitor::clean_names()\n\nYou will get a warning Duplicated column names deduplicated: 'avgX' => 'avgX_1' [15] for each of the files because the csv files each have two columns called avgX. 
If you click on the tracking dataframe you see is contains the data from all the files.\nNow we can add columns for the type and the concentration by processing the values in the file. The values are like track/343_0.txt so we need to remove .txt and track/ and separate the remaining words into two columns.\n🎬 Process the file column to add columns for the type and the concentration\n\n# extract type and concentration from file name\n# and put them into additopnal separate columns\ntracking <- tracking |> \n mutate(file = str_remove(file, \".txt\")) |>\n mutate(file = str_remove(file, \"track/\")) |>\n extract(file, remove = \n FALSE,\n into = c(\"type\", \"conc\"), \n regex = \"([^_]{2,3})_(.+)\") \n\n[^_]{2,3} matches two or three characters that are not _ at the start of the string (^)\n.+ matches one or more characters. The extract() function puts the first match into the first column, type, and the second match into the second column, conc. The remove = FALSE argument means the original column is kept.\nYou now have a dataframe with all the tracking data which is relatively easy to summarise and plot using tools you know.\nThere is an example RStudio project containing this code here: tips. You can also download the project as a zip file from there but there is some code that will do that automatically for you. Since this is an RStudio Project, do not run the code from inside a project:\n\nusethis::use_course(url = \"3mmaRand/tips\")\n\nYou can agree to deleting the zip. You should find RStudio restarts and you have a new project called tips-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repro you downloaded. You can now run the code in the project." + "text": "The following ImageJ workflow uses the processing steps you used in workshop 3 with one change. That change is to save the results to file rather than having the results window pop up and saving from there. 
Or maybe two changes: it also tells you to use meaningful systematic file names that will be easy to process when importing data. The RStudio workflow shows you how to import multiple files into one dataframe with columns indicating the treatment.\n\nSave files with systematic names: ev_0.avi 343_0.avi ev_1.avi 343_1.avi ev_2.5.avi 343_2.5.avi\nOpen ImageJ\nOpen video file e.g. ev_2.5.avi\n\nConvert to 8-bit: Image | Type | 8-bit\nCrop to petri dish: Select then Image | Crop\nCalculate average pixel intensity: Image | Stacks | Z Project\n\nProjection type: Average Intensity to create AVG_ev_2.5.avi\n\n\n\nSubtract average from image: Process | Image Calculator\n\nImage 1: ev_2.5.avi\n\nOperation: Subtract\nImage 2: AVG_ev_2.5.avi\n\nCreate new window: checked\nOK, Yes to Process all\n\n\nInvert: Edit | Invert\nAdjust threshold: Image | Adjust | Threshold\n\nMethod: Default\nThresholding: Default, B&W\nDark background: checked\nAuto or adjust a little but make sure the larvae do not disappear at later points in the video (use the slider)\nApply\n\n\nInvert: Edit | Invert\nTrack: Plugins | wrMTrck\n\nSet minSize: 10\nSet maxSize: 400\nSet maxVelocity: 10\nSet maxAreaChange: 200\nSet bendThreshold: 1\n\nImportant: check Save Results File. This is different to what you did in the workshop. It will help because the results will be saved automatically rather than you saving from the Results window that otherwise pops up. Consequently, you will be able to save the results files with systematic names relating to their treatments and then read them into R simultaneously. That will also allow you to add information from the name of the file (which has the treatment information) to the resulting dataframes\n\n\nwrMTrck window with the settings listed above shown\n\n\nClick OK. Save to a folder for all the tracking data files. 
I recommend deleting the “Results of..” part of the name\n\n\nCheck that the Summary window indicates 3 tracks and that the 3 larvae are what is tracked by using the slider on the Result image\nRepeat for all videos\n\nThis is the code you need to import multiple csv files into a single dataframe and add a column with the treatment information from the file name. This is why systematic file names are good.\nIt assumes\n\nyour files are called type_concentration.txt for example: ev_0.txt 343_0.txt ev_1.txt 343_1.txt ev_2.5.txt 343_2.5.txt.\nthe .txt data files are in a folder called track inside your working directory\nyou have installed the following packages: tidyverse, janitor\n\n\n🎬 Load the tidyverse\n\nlibrary(tidyverse)\n\n🎬 Put the file names into a vector we will iterate through\n\n# get a vector of the file names\nfiles <- list.files(path = \"track\", full.names = TRUE )\n\nWe can use map_df() from the purrr package, which is one of the packages loaded with the tidyverse. map_df() will iterate through files and read them into a dataframe with a specified import function. We are using read_table(). map_df() keeps track of the file by adding an index column called file to the resulting dataframe. Instead of this being a number (1 - 6 here) we can use set_names() to use the file names instead. The clean_names() function from the janitor package will clean up the column names (make them lower case, replace spaces with _, remove special characters, etc.)\n🎬 Import multiple csv files into one dataframe called tracking\n\n# import multiple data files into one dataframe called tracking\n# using map_df() from purrr package\n# clean the column names up using janitor::clean_names()\ntracking <- files |> \n set_names() |>\n map_dfr(read_table, .id = \"file\") |>\n janitor::clean_names()\n\nYou will get a warning Duplicated column names deduplicated: 'avgX' => 'avgX_1' [15] for each of the files because the csv files each have two columns called avgX. 
If you click on the tracking dataframe you see it contains the data from all the files.\nNow we can add columns for the type and the concentration by processing the values in the file. The values are like track/343_0.txt so we need to remove .txt and track/ and separate the remaining words into two columns.\n🎬 Process the file column to add columns for the type and the concentration\n\n# extract type and concentration from file name\n# and put them into additional separate columns\ntracking <- tracking |> \n mutate(file = str_remove(file, \".txt\")) |>\n mutate(file = str_remove(file, \"track/\")) |>\n extract(file, remove = \n FALSE,\n into = c(\"type\", \"conc\"), \n regex = \"([^_]{2,3})_(.+)\") \n\n[^_]{2,3} matches two or three characters that are not _ (the ^ inside the square brackets means “not”)\n.+ matches one or more characters. The extract() function puts the first match into the first column, type, and the second match into the second column, conc. The remove = FALSE argument means the original column is kept.\nYou now have a dataframe with all the tracking data which is relatively easy to summarise and plot using tools you know.\nThere is an example RStudio project containing this code here: tips. You can also download the project as a zip file from there but there is some code that will do that automatically for you. Since this is an RStudio Project, do not run the code from inside a project. You may want to navigate to a particular directory or edit the destdir:\n\nusethis::use_course(url = \"3mmaRand/tips\", destdir = \".\")\n\nYou can agree to deleting the zip. You should find RStudio restarts and you have a new project called tips-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repo you downloaded. You can now run the code in the project."
}, { "objectID": "images/images.html#worm-tracking", "href": "images/images.html#worm-tracking", "title": "Image Data Analysis for Group Project", "section": "", - "text": "The following ImageJ workflow uses the processing steps you used in workshop 3 with one change. That change is to save the results to file rather than having the results window pop up and saving from there. Or maybe two changes: it also tells you to use meaning systematic file names that will be easy to process when importing data. The RStudio workflow shows you how to import multiple files into one dataframe with columns indicating the treatment.\n\nSave files with systematic names: ev_0.avi 343_0.avi ev_1.avi 343_1.avi ev_2.5.avi 343_2.5.avi\nOpen ImageJ\nOpen video file eg ev_2.5.avi\n\nConvert to 8-bit: Image | Type | 8-bit\nCrop to petri dish: Select then Image | Crop\nCalculate average pixel intensity: Image | Stacks | Z Project\n\nProjection type: Average Intensity to create AVG_ev_2.5.avi\n\n\n\nSubtract average from image: Process | Image Calculator\n\nImage 1: ev_2.5.avi\n\nOperation: Subtract\nImage 2: AVG_ev_2.5.avi\n\nCreate new window: checked\nOK, Yes to Process all\n\n\nInvert: Edit | Invert\nAdjust threshold: Image | Adjust | Threshold\n\nMethod: Default\nThresholding: Default, B&W\nDark background: checked\nAuto or adjust a little but make sure the larvae do not disappear at later points in the video (use the slider)\nApply\n\n\nInvert: Edit | Invert\nTrack: Plugins | wrMTrck\n\nSet minSize: 10\nSet maxSize: 400\nSet maxVelocity: 10\nSet maxAreaChange: 200\nSet bendThreshold: 1\n\nImportant: check Save Results File This is different to what you did in the workshop. It will help because the results will be saved automatically rather than to saving from the Results window that other pops up. Consequently, you will be able to save the results files with systematic names relating to their treatments and then read them into R simultaneously. 
That will also allow you to add information from the name of the file (which has the treatment information) to the resulting dataframes\n\n\nwrMTrck window with the settings listed above shown\n\n\nClick OK. Save to a folder for all the tracking data files. I recommend deleting the “Results of..” part of the name\n\n\nCheck that the Summary window indicates 3 tracks and that the 3 larvae are what is tracked by using the slider on the Result image\nRepeat for all videos\n\nThis is the code you need to import multiple csv files into a single dataframe and add a column with the treatment information from the file name. This is why systematic file names are good.\nIt assumes\n\nyour files are called type_concentration.txt for example: ev_0.txt 343_0.txt ev_1.txt 343_1.txt ev_2.5.txt 343_2.5.txt.\nthe .txt datafile are in a folder called track inside your working directory\nyou have installed the following packages: tidyverse, janitor\n\n\n🎬 Load the tidyverse\n\nlibrary(tidyverse)\n\n🎬 Put the file names into a vector we will iterate through\n\n# get a vector of the file names\nfiles <- list.files(path = \"track\", full.names = TRUE )\n\nWe can use map_df() from the purrr package which is one of the tidyverse gems loaded with tidyvserse. map_df() will iterate through files and read them into a dataframe with a specified import function. We are using read_table(). map_df() keeps track of the file by adding an index column called file to the resulting dataframe. Instead of this being a number (1 - 6 here) we can use set_names() to use the file names instead. 
The clean_names() function from the janitor package will clean up the column names (make them lower case, replace spaces with _ remove special characters etc)\n🎬 Import multiple csv files into one dataframe called tracking\n\n# import multiple data files into one dataframe called tracking\n# using map_df() from purrr package\n# clean the column names up using janitor::clean_names()\ntracking <- files |> \n set_names() |>\n map_dfr(read_table, .id = \"file\") |>\n janitor::clean_names()\n\nYou will get a warning Duplicated column names deduplicated: 'avgX' => 'avgX_1' [15] for each of the files because the csv files each have two columns called avgX. If you click on the tracking dataframe you see is contains the data from all the files.\nNow we can add columns for the type and the concentration by processing the values in the file. The values are like track/343_0.txt so we need to remove .txt and track/ and separate the remaining words into two columns.\n🎬 Process the file column to add columns for the type and the concentration\n\n# extract type and concentration from file name\n# and put them into additopnal separate columns\ntracking <- tracking |> \n mutate(file = str_remove(file, \".txt\")) |>\n mutate(file = str_remove(file, \"track/\")) |>\n extract(file, remove = \n FALSE,\n into = c(\"type\", \"conc\"), \n regex = \"([^_]{2,3})_(.+)\") \n\n[^_]{2,3} matches two or three characters that are not _ at the start of the string (^)\n.+ matches one or more characters. The extract() function puts the first match into the first column, type, and the second match into the second column, conc. The remove = FALSE argument means the original column is kept.\nYou now have a dataframe with all the tracking data which is relatively easy to summarise and plot using tools you know.\nThere is an example RStudio project containing this code here: tips. You can also download the project as a zip file from there but there is some code that will do that automatically for you. 
Since this is an RStudio Project, do not run the code from inside a project:\n\nusethis::use_course(url = \"3mmaRand/tips\")\n\nYou can agree to deleting the zip. You should find RStudio restarts and you have a new project called tips-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repro you downloaded. You can now run the code in the project." + "text": "The following ImageJ workflow uses the processing steps you used in workshop 3 with one change. That change is to save the results to file rather than having the results window pop up and saving from there. Or maybe two changes: it also tells you to use meaningful, systematic file names that will be easy to process when importing data. The RStudio workflow shows you how to import multiple files into one dataframe with columns indicating the treatment.\n\nSave files with systematic names: ev_0.avi 343_0.avi ev_1.avi 343_1.avi ev_2.5.avi 343_2.5.avi\nOpen ImageJ\nOpen video file eg ev_2.5.avi\n\nConvert to 8-bit: Image | Type | 8-bit\nCrop to petri dish: Select then Image | Crop\nCalculate average pixel intensity: Image | Stacks | Z Project\n\nProjection type: Average Intensity to create AVG_ev_2.5.avi\n\n\n\nSubtract average from image: Process | Image Calculator\n\nImage 1: ev_2.5.avi\n\nOperation: Subtract\nImage 2: AVG_ev_2.5.avi\n\nCreate new window: checked\nOK, Yes to Process all\n\n\nInvert: Edit | Invert\nAdjust threshold: Image | Adjust | Threshold\n\nMethod: Default\nThresholding: Default, B&W\nDark background: checked\nAuto or adjust a little but make sure the larvae do not disappear at later points in the video (use the slider)\nApply\n\n\nInvert: Edit | Invert\nTrack: Plugins | wrMTrck\n\nSet minSize: 10\nSet maxSize: 400\nSet maxVelocity: 10\nSet maxAreaChange: 200\nSet bendThreshold: 1\n\nImportant: check Save Results File. This is different to what you did in the workshop. 
It will help because the results will be saved automatically rather than saving from the Results window that otherwise pops up. Consequently, you will be able to save the results files with systematic names relating to their treatments and then read them into R simultaneously. That will also allow you to add information from the name of the file (which has the treatment information) to the resulting dataframes.\n\n\nwrMTrck window with the settings listed above shown\n\n\nClick OK. Save to a folder for all the tracking data files. I recommend deleting the “Results of..” part of the name\n\n\nCheck that the Summary window indicates 3 tracks and that the 3 larvae are what is tracked by using the slider on the Result image\nRepeat for all videos\n\nThis is the code you need to import multiple data files into a single dataframe and add a column with the treatment information from the file name. This is why systematic file names are good.\nIt assumes\n\nyour files are called type_concentration.txt for example: ev_0.txt 343_0.txt ev_1.txt 343_1.txt ev_2.5.txt 343_2.5.txt.\nthe .txt data files are in a folder called track inside your working directory\nyou have installed the following packages: tidyverse, janitor\n\n\n🎬 Load the tidyverse\n\nlibrary(tidyverse)\n\n🎬 Put the file names into a vector we will iterate through\n\n# get a vector of the file names\nfiles <- list.files(path = \"track\", full.names = TRUE )\n\nWe can use map_dfr() from the purrr package which is one of the tidyverse gems loaded with tidyverse. map_dfr() will iterate through files and read them into a dataframe with a specified import function. We are using read_table(). map_dfr() keeps track of the file by adding an index column called file to the resulting dataframe. Instead of this being a number (1 - 6 here) we can use set_names() to use the file names instead. 
The clean_names() function from the janitor package will clean up the column names (make them lower case, replace spaces with _, remove special characters, etc.)\n🎬 Import multiple data files into one dataframe called tracking\n\n# import multiple data files into one dataframe called tracking\n# using map_dfr() from purrr package\n# clean the column names up using janitor::clean_names()\ntracking <- files |> \n set_names() |>\n map_dfr(read_table, .id = \"file\") |>\n janitor::clean_names()\n\nYou will get a warning Duplicated column names deduplicated: 'avgX' => 'avgX_1' [15] for each of the files because the data files each have two columns called avgX. If you click on the tracking dataframe you will see it contains the data from all the files.\nNow we can add columns for the type and the concentration by processing the values in the file column. The values are like track/343_0.txt so we need to remove .txt and track/ and separate the remaining words into two columns.\n🎬 Process the file column to add columns for the type and the concentration\n\n# extract type and concentration from file name\n# and put them into additional separate columns\ntracking <- tracking |> \n mutate(file = str_remove(file, \".txt\")) |>\n mutate(file = str_remove(file, \"track/\")) |>\n extract(file, remove = \n FALSE,\n into = c(\"type\", \"conc\"), \n regex = \"([^_]{2,3})_(.+)\") \n\n[^_]{2,3} matches two or three characters that are not _ (the ^ inside the square brackets means “not”)\n.+ matches one or more characters. The extract() function puts the first match into the first column, type, and the second match into the second column, conc. The remove = FALSE argument means the original column is kept.\nYou now have a dataframe with all the tracking data which is relatively easy to summarise and plot using tools you know.\nThere is an example RStudio project containing this code here: tips. You can also download the project as a zip file from there but the following code will do that automatically for you. 
Since this is an RStudio Project, do not run this code from inside another project. You may want to navigate to a particular directory or edit the destdir:\n\nusethis::use_course(url = \"3mmaRand/tips\", destdir = \".\")\n\nYou can agree to deleting the zip. You should find RStudio restarts and you have a new project called tips-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repo you downloaded. You can now run the code in the project." 

Structure Data Analysis for Group Project

Published
-

3 December, 2023

+

4 December, 2023

@@ -246,9 +246,9 @@

Structure Data Analysis for Group Project

Workloads tab showing the C++ build tools selected

You can then install the gemmi package again.

I think that’s it! You can now download the RStudio project and run each chunk in the quarto document.

-

There is an example RStudio project here: structure-analysis. You can also download the project as a zip file from there but there is some code that will do that automatically for you. Since this is an RStudio Project, do not run the code from inside a project:

+

There is an example RStudio project here: structure-analysis. You can also download the project as a zip file from there but the following code will do that automatically for you. Since this is an RStudio Project, do not run this code from inside another project. You may want to navigate to a particular directory or edit the destdir:

-
usethis::use_course(url = "3mmaRand/structure-analysis")
+
usethis::use_course(url = "3mmaRand/structure-analysis", destdir = ".")

You can agree to deleting the zip. You should find RStudio restarts and you have a new project called structure-analysis-xxxxxx. The xxxxxx is a commit reference - you do not need to worry about that, it is just a way to tell you which version of the repo you downloaded.

You should be able to open the antibody_mimetics_workshop_3.qmd file and run each chunk. You can also render the document to HTML.