diff --git a/core/core.html b/core/core.html index a66936a..652e53a 100644 --- a/core/core.html +++ b/core/core.html @@ -350,7 +350,7 @@

Core Data Analysis

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-1/overview.html b/core/week-1/overview.html index d2bd87a..276f865 100644 --- a/core/week-1/overview.html +++ b/core/week-1/overview.html @@ -346,7 +346,7 @@

Overview

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-1/study_after_workshop.html b/core/week-1/study_after_workshop.html index 6827848..0837732 100644 --- a/core/week-1/study_after_workshop.html +++ b/core/week-1/study_after_workshop.html @@ -346,7 +346,7 @@

Independent Study to consolidate this week

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-1/study_before_workshop.html b/core/week-1/study_before_workshop.html index 4076be2..6b3c11d 100644 --- a/core/week-1/study_before_workshop.html +++ b/core/week-1/study_before_workshop.html @@ -339,7 +339,7 @@

Independent Study to prepare for workshop

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-1/workshop.html b/core/week-1/workshop.html index cb32db9..b34e371 100644 --- a/core/week-1/workshop.html +++ b/core/week-1/workshop.html @@ -400,7 +400,7 @@

Workshop

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-11/overview.html b/core/week-11/overview.html index d1805c0..89acd28 100644 --- a/core/week-11/overview.html +++ b/core/week-11/overview.html @@ -346,7 +346,7 @@

Overview

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-11/study_after_workshop.html b/core/week-11/study_after_workshop.html index 4525d86..0f36894 100644 --- a/core/week-11/study_after_workshop.html +++ b/core/week-11/study_after_workshop.html @@ -339,7 +339,7 @@

Independent Study to consolidate this week

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-11/study_before_workshop.html b/core/week-11/study_before_workshop.html index 3a48ebd..b220187 100644 --- a/core/week-11/study_before_workshop.html +++ b/core/week-11/study_before_workshop.html @@ -388,7 +388,7 @@

Independent Study to prepare for workshop

-

29 March, 2024

+

2 April, 2024

Module assessment

diff --git a/core/week-11/workshop.html b/core/week-11/workshop.html index 4bcafbc..adf799b 100644 --- a/core/week-11/workshop.html +++ b/core/week-11/workshop.html @@ -387,7 +387,7 @@

Workshop

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-2/overview.html b/core/week-2/overview.html index 6aa0309..3022faa 100644 --- a/core/week-2/overview.html +++ b/core/week-2/overview.html @@ -337,7 +337,7 @@

Overview

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-2/study_after_workshop.html b/core/week-2/study_after_workshop.html index e55886d..92e48f2 100644 --- a/core/week-2/study_after_workshop.html +++ b/core/week-2/study_after_workshop.html @@ -337,7 +337,7 @@

Independent Study to consolidate this week

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/core/week-2/study_before_workshop.html b/core/week-2/study_before_workshop.html index ca2742c..66d3ae2 100644 --- a/core/week-2/study_before_workshop.html +++ b/core/week-2/study_before_workshop.html @@ -448,7 +448,7 @@ -

29 March, 2024

+

2 April, 2024

Overview

1. Calculate Change in VFA g/l with time

-

🎬 Create dataframe for the change in VFA

+

🎬 Create dataframe for the change in VFA 📢 and the change in time

vfa_delta <- vfa_cummul |> 
     group_by(sample_replicate)  |> 
@@ -464,9 +465,11 @@ 

Workflow for VFA analysis

isopentanoate = isopentanoate - lag(isopentanoate), pentanoate = pentanoate - lag(pentanoate), isohexanoate = isohexanoate - lag(isohexanoate), - hexanoate = hexanoate - lag(hexanoate))
+ hexanoate = hexanoate - lag(hexanoate), + delta_time = time_day - lag(time_day))
-

Now we have two dataframes, one for the cumulative data and one for the change in VFA.

+

Now we have two dataframes, one for the cumulative data and one for the change in VFA and time. Note that the VFA values have been replaced by the change in VFA but the change in time is in a separate column. I have done this because we later want to plot flux (not yet added) against time.

+

📢 This code also depends on the sample_replicate column being in the form treatment-replicate. lag() returns the previous value in the series, so subtracting it gives the change between one time point and the next for each treatment-replicate combination.
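As a toy illustration (hypothetical numbers, written in base R so it runs without dplyr), this is the arithmetic that subtracting lag() performs within one treatment-replicate:

```r
# Cumulative acetate (mM) for one hypothetical treatment-replicate,
# already sorted by time_day
acetate  <- c(2.0, 5.5, 9.0, 11.0)

# base-R equivalent of dplyr::lag(): shift the series down by one,
# padding the first position with NA
lag_base <- c(NA, acetate[-length(acetate)])

# change between consecutive time points: NA, 3.5, 3.5, 2.0
delta    <- acetate - lag_base
```

The NA in the first position is why the delta dataframe has NAs at the earliest time point for every replicate.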

2. Recalculate the data into grams per litre

To make conversions from mM to g/l we need to do mM * 0.001 * MW. We will import the molecular weight data, pivot the VFA data to long format and join the molecular weight data to the VFA data. Then we can calculate the g/l. We will do this for both the cumulative and delta dataframes.
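As a worked example of the conversion (the 10 mM concentration is made up, and the molecular weight of 60.05 g/mol for acetic acid is an assumption; use the values supplied in mol_wt.txt):

```r
conc_mM   <- 10                     # hypothetical acetate concentration, millimolar
mw        <- 60.05                  # assumed molecular weight, g/mol
conc_g_l  <- conc_mM * 0.001 * mw   # mM -> M -> g/L: 0.6005 g/L
conc_mg_l <- conc_g_l * 1000        # same value in mg/L: 600.5 mg/L
```

This is exactly the mutate() applied after the join below, just for a single value.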

🎬 import molecular weight data

@@ -493,13 +496,14 @@

Workflow for VFA analysis

View vfa_cummul to check you understand what you have done.

Repeat for the delta data.

-

🎬 Pivot the change data, delta_vfa to long format:

+

🎬 Pivot the change data, vfa_delta, to long format (📢 delta_time is added to the list of columns that are repeated rather than pivoted):

vfa_delta <- vfa_delta |> 
   pivot_longer(cols = -c(sample_replicate,
                          treatment, 
                          replicate,
-                         time_day),
+                         time_day,
+                         delta_time),
                values_to = "conc_mM",
                names_to = "vfa") 
@@ -1349,18 +1353,91 @@

Workflow for VFA analysis

labRow = rownames(mat), heatmap_layers = theme(axis.line = element_blank()))
-
- +
+

The heatmap will open in the viewer pane (rather than the plot pane) because it is html. You can “Show in a new window” to see it in a larger format. You can also zoom in and out, pan around the heatmap, and download it as a png. You might feel the colour bar is not adding much to the plot. You can remove it by setting hide_colorbar = TRUE in the heatmaply() call.

One of the NC replicates at time = 22 is very different from the other replicates. The CN10 treatments cluster together at high time points. CN10 samples are more similar to NC samples early on. Most of the VFAs behave similarly, with highest values later in the experiment for CN10, but isohexanoate and hexanoate differ. The difference might be because isohexanoate is especially low in the NC replicates at time = 1 and hexanoate is especially high in NC replicate 2 at time = 22.

4. Calculate the flux - pending.

-

Calculate the flux(change in VFA concentration over a period of time, divided by weight or volume of material) of each VFA, by mM and by weight.

+

Calculate the flux (change in VFA concentration over a period of time, divided by weight or volume of material) of each VFA, by mM and by weight. Emma’s note: I think the terms flux and reaction rate are used interchangeably.

I’ve requested clarification: for the flux measurements, do they need graphs of the rate of change with respect to time? And is the sludge volume going to be a constant for all samples, or something they measure that varies by vial?

+

Answer: The sludge volume is constant, at 30 ml within a 120 ml vial. Some students will want to graph reaction rate with time; others will want to compare the measured GC-FID concentrations against the model output.

+

📢 Kelly asked for “.. a simple flux measurement, which is the change in VFA concentration over a period of time, divided by weight or volume of material. In this case it might be equal to == Delta(Acetate at 3 days - Acetate at 1 day)/Delta (3days - 1day)/50 mls sludge. This would provide a final flux with the units of mg acetate per ml sludge per day.”

+

Note: Kelly says mg/ml where earlier he used g/L. These are numerically the same (but I called my column conc_g_l).

+

We need to use the vfa_delta data frame. It contains the change in VFA concentration and the change in time. We will add a column for the flux of each VFA in g/L/day (mg/ml/day).

+
+
sludge_volume <- 30 # ml
+vfa_delta <- vfa_delta |> 
+  mutate(flux = conc_g_l / delta_time / sludge_volume)
+
+

NAs at time 1 are expected because there is no earlier time point from which to calculate a change.

5. Graph and extract the reaction rate - pending

Graph and extract the reaction rate assuming a first order chemical/biological reaction and an exponential falloff rate

I’ve requested clarification: for the nonlinear least squares curve fitting, I assume x is time but I’m not clear what the Y variable is - concentration? or change in concentration? or rate of change of concentration?

+

Answer: The non-linear equation describes concentration change with time. Effectively, the change in concentration is dependent upon the available concentration; in this example [Hex] represents the concentration of Hexanoic acid, while T0 and T1 represent time steps.

+

[Hex]T1 = [Hex]T0 - [Hex]T0 * k

+

Or: the amount of Hexanoic acid remaining at T1 (let’s say one hour after the last data point) is equal to the starting concentration ([Hex]T0) minus the concentration-dependent metabolism ([Hex]T0 * k).
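That recurrence is the discrete form of first-order exponential decay, C(t) = C0 * exp(-k * t). A minimal base-R sketch of extracting k with nls(), using simulated hexanoate-like values (not the real dataset; c0 and k here are made up):

```r
set.seed(42)
# Simulated concentrations (mM) at the sampled days: true c0 = 10, true k = 0.15,
# with a little multiplicative noise to stand in for measurement error
time_day <- c(1, 3, 5, 9, 11, 13, 16, 18, 20, 22)
conc     <- 10 * exp(-0.15 * time_day) * (1 + rnorm(10, 0, 0.02))

# Fit the first-order decay model; start values just need to be in the
# right ballpark for nls() to converge
fit <- nls(conc ~ c0 * exp(-k * time_day),
           start = list(c0 = max(conc), k = 0.1))
coef(fit)   # estimated c0 and k
```

The same call works per treatment-replicate-VFA on the real data (e.g. inside a grouped nest/map), and predict(fit) gives the modelled curve to overlay on the observed points.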

+

📢 We can now plot the observed fluxes (reaction rates) over time.

+

I’ve summarised the data to add means and error bars.

+
+
vfa_delta_summary <- vfa_delta |> 
+  group_by(treatment, time_day, vfa) |> 
+  summarise(mean_flux = mean(flux),
+            se_flux = sd(flux)/sqrt(length(flux))) |> 
+  ungroup()
+
+
+
ggplot(data = vfa_delta, aes(x = time_day, colour = vfa)) +
+  geom_point(aes(y = flux), alpha = 0.6) +
+  geom_errorbar(data = vfa_delta_summary, 
+                aes(ymin = mean_flux - se_flux, 
+                    ymax = mean_flux + se_flux), 
+                width = 1) +
+  geom_errorbar(data = vfa_delta_summary, 
+                aes(ymin = mean_flux, 
+                    ymax = mean_flux), 
+                width = 0.8) +
+  scale_color_viridis_d(name = NULL) +
+  scale_x_continuous(name = "Time (days)") +
+  scale_y_continuous(name = "VFA Flux mg/ml/day") +
+  theme_bw() +
+  facet_wrap(~treatment) +
+  theme(strip.background = element_blank())
+
+
+


Or maybe this is easier to read:

+
+
ggplot(data = vfa_delta, aes(x = time_day, colour = treatment)) +
+  geom_point(aes(y = flux), alpha = 0.6) +
+  geom_errorbar(data = vfa_delta_summary, 
+                aes(ymin = mean_flux - se_flux, 
+                    ymax = mean_flux + se_flux), 
+                width = 1) +
+  geom_errorbar(data = vfa_delta_summary, 
+                aes(ymin = mean_flux, 
+                    ymax = mean_flux), 
+                width = 0.8) +
+  scale_color_viridis_d(name = NULL, begin = 0.2, end = 0.7) +
+  scale_x_continuous(name = "Time (days)") +
+  scale_y_continuous(name = "VFA Flux mg/ml/day") +
+  theme_bw() +
+  facet_wrap(~ vfa, nrow = 2) +
+  theme(strip.background = element_blank(),
+        legend.position = "top")
+
+
+


I have not yet worked out the best way to plot the modelled reaction rate.

Pages made with R (R Core Team 2023), Quarto (Allaire et al. 2022), knitr (Xie 2022), kableExtra (Zhu 2021)

diff --git a/omics/kelly/workshop_files/figure-html/unnamed-chunk-30-1.png b/omics/kelly/workshop_files/figure-html/unnamed-chunk-30-1.png new file mode 100644 index 0000000..5f5ff3d Binary files /dev/null and b/omics/kelly/workshop_files/figure-html/unnamed-chunk-30-1.png differ diff --git a/omics/kelly/workshop_files/figure-html/unnamed-chunk-31-1.png b/omics/kelly/workshop_files/figure-html/unnamed-chunk-31-1.png new file mode 100644 index 0000000..b91730b Binary files /dev/null and b/omics/kelly/workshop_files/figure-html/unnamed-chunk-31-1.png differ diff --git a/omics/omics.html b/omics/omics.html index 7adcce7..543084c 100644 --- a/omics/omics.html +++ b/omics/omics.html @@ -351,7 +351,7 @@

Omics Data Analysis for Group Project

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/semester-2/workshop.html b/omics/semester-2/workshop.html index 1f471da..7444246 100644 --- a/omics/semester-2/workshop.html +++ b/omics/semester-2/workshop.html @@ -226,7 +226,7 @@

Workshop

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-3/overview.html b/omics/week-3/overview.html index 0404729..68c9655 100644 --- a/omics/week-3/overview.html +++ b/omics/week-3/overview.html @@ -328,7 +328,7 @@

Overview

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-3/study_after_workshop.html b/omics/week-3/study_after_workshop.html index 0588f96..2b8e011 100644 --- a/omics/week-3/study_after_workshop.html +++ b/omics/week-3/study_after_workshop.html @@ -321,7 +321,7 @@

Independent Study to consolidate this week

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-3/study_before_workshop.html b/omics/week-3/study_before_workshop.html index 9d87f53..387739d 100644 --- a/omics/week-3/study_before_workshop.html +++ b/omics/week-3/study_before_workshop.html @@ -388,7 +388,7 @@

Independent Study to prepare for workshop

-

29 March, 2024

+

2 April, 2024

Overview

diff --git a/omics/week-3/workshop.html b/omics/week-3/workshop.html index 83288e0..940c196 100644 --- a/omics/week-3/workshop.html +++ b/omics/week-3/workshop.html @@ -416,7 +416,7 @@

Workshop

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-4/overview.html b/omics/week-4/overview.html index 43475d6..c50b719 100644 --- a/omics/week-4/overview.html +++ b/omics/week-4/overview.html @@ -349,7 +349,7 @@

Overview

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-4/study_after_workshop.html b/omics/week-4/study_after_workshop.html index dc9ee3d..ccc0d52 100644 --- a/omics/week-4/study_after_workshop.html +++ b/omics/week-4/study_after_workshop.html @@ -321,7 +321,7 @@

Independent Study to consolidate this week

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-4/study_before_workshop.html b/omics/week-4/study_before_workshop.html index 7daf985..b501a78 100644 --- a/omics/week-4/study_before_workshop.html +++ b/omics/week-4/study_before_workshop.html @@ -446,7 +446,7 @@ -

29 March, 2024

+

2 April, 2024

Overview

In these slides we will:

diff --git a/omics/week-4/workshop.html b/omics/week-4/workshop.html index 210679e..ee8d38e 100644 --- a/omics/week-4/workshop.html +++ b/omics/week-4/workshop.html @@ -389,7 +389,7 @@

Workshop

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-5/figures/prog-hspc-volcano.png b/omics/week-5/figures/prog-hspc-volcano.png index 76fea74..c7fdb95 100644 Binary files a/omics/week-5/figures/prog-hspc-volcano.png and b/omics/week-5/figures/prog-hspc-volcano.png differ diff --git a/omics/week-5/overview.html b/omics/week-5/overview.html index fe78ad1..246bbce 100644 --- a/omics/week-5/overview.html +++ b/omics/week-5/overview.html @@ -329,7 +329,7 @@

Overview

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-5/study_after_workshop.html b/omics/week-5/study_after_workshop.html index 54ad694..2818e08 100644 --- a/omics/week-5/study_after_workshop.html +++ b/omics/week-5/study_after_workshop.html @@ -321,7 +321,7 @@

Independent Study to consolidate this week

Published
-

29 March, 2024

+

2 April, 2024

diff --git a/omics/week-5/study_before_workshop.html b/omics/week-5/study_before_workshop.html index 6d4b81a..03a1a22 100644 --- a/omics/week-5/study_before_workshop.html +++ b/omics/week-5/study_before_workshop.html @@ -446,7 +446,7 @@
-

29 March, 2024

+

2 April, 2024

Overview

In these slides we will:

diff --git a/omics/week-5/workshop.html b/omics/week-5/workshop.html index 484eaa9..79a9f27 100644 --- a/omics/week-5/workshop.html +++ b/omics/week-5/workshop.html @@ -394,7 +394,7 @@

Workshop

Published
-

29 March, 2024

+

2 April, 2024

@@ -739,8 +739,8 @@

Workshop

labRow = rownames(mat), heatmap_layers = theme(axis.line = element_blank()))
-
- +
+

On the vertical axis are genes which are differentially expressed at the 0.01 level. On the horizontal axis are samples. We can see that the FGF-treated samples cluster together and the control samples cluster together. We can also see two clusters of genes; one of these shows genes upregulated (more yellow) in the FGF-treated samples (the pink cluster) and the other shows genes down regulated (more blue, the blue cluster) in the FGF-treated samples.

@@ -1060,8 +1060,8 @@

Workshop

labRow = rownames(mat), heatmap_layers = theme(axis.line = element_blank()))
-
- +
+

It will take a minute to run and display. On the vertical axis are genes which are differentially expressed at the 0.01 level. On the horizontal axis are cells. We can see that cells of the same type don’t cluster that well together. We can also see two clusters of genes but the pattern of gene is not as clear as it was for the frogs and the correspondence with the cell clusters is not as strong.

diff --git a/omics/week-5/workshop_files/figure-html/unnamed-chunk-33-1.png b/omics/week-5/workshop_files/figure-html/unnamed-chunk-33-1.png index ffe6f0f..a7c523f 100644 Binary files a/omics/week-5/workshop_files/figure-html/unnamed-chunk-33-1.png and b/omics/week-5/workshop_files/figure-html/unnamed-chunk-33-1.png differ diff --git a/omics/week-5/workshop_files/figure-html/unnamed-chunk-65-1.png b/omics/week-5/workshop_files/figure-html/unnamed-chunk-65-1.png index 2aadb57..5178d45 100644 Binary files a/omics/week-5/workshop_files/figure-html/unnamed-chunk-65-1.png and b/omics/week-5/workshop_files/figure-html/unnamed-chunk-65-1.png differ diff --git a/search.json b/search.json index 304444a..85eb830 100644 --- a/search.json +++ b/search.json @@ -2016,7 +2016,7 @@ "href": "core/week-2/workshop.html#rstudio-terminal", "title": "Workshop", "section": "RStudio terminal", - "text": "RStudio terminal\nThe RStudio terminal is a convenient interface to the shell without leaving RStudio. It is useful for running commands that are not available in R. For example, you can use it to run other programs like fasqc, git, ftp, ssh\nNavigating your file system\nSeveral commands are frequently used to create, inspect, rename, and delete files and directories.\n$\nThe dollar sign is the prompt (like > on the R console), which shows us that the shell is waiting for input.\nYou can find out where you are using the pwd command, which stands for “print working directory”.\n\npwd\n\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2\n\n\nYou can find out what you can see with ls which stands for “list”.\n\nls\n\ndata\nimages\noverview.qmd\nstudy_after_workshop.qmd\nstudy_before_workshop.ipynb\nstudy_before_workshop.qmd\nworkshop.html\nworkshop.qmd\nworkshop.rmarkdown\nworkshop_files\n\n\nYou might have noticed that unlike R, the commands do not have brackets after them. Instead, options (or switches) are given after the command. 
For example, we can modify the ls command to give us more information with the -l option, which stands for “long”.\n\nls -l\n\ntotal 128\ndrwxr-xr-x 2 runner docker 4096 Mar 29 15:16 data\ndrwxr-xr-x 2 runner docker 4096 Mar 29 15:16 images\n-rw-r--r-- 1 runner docker 1597 Mar 29 15:16 overview.qmd\n-rw-r--r-- 1 runner docker 184 Mar 29 15:16 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 4807 Mar 29 15:16 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13029 Mar 29 15:16 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 58063 Mar 29 15:16 workshop.html\n-rw-r--r-- 1 runner docker 8550 Mar 29 15:16 workshop.qmd\n-rw-r--r-- 1 runner docker 8564 Mar 29 15:18 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4096 Mar 29 15:16 workshop_files\n\n\nYou can use more than one option at once. The -h option stands for “human readable” and makes the file sizes easier to understand for humans:\n\nls -hl\n\ntotal 128K\ndrwxr-xr-x 2 runner docker 4.0K Mar 29 15:16 data\ndrwxr-xr-x 2 runner docker 4.0K Mar 29 15:16 images\n-rw-r--r-- 1 runner docker 1.6K Mar 29 15:16 overview.qmd\n-rw-r--r-- 1 runner docker 184 Mar 29 15:16 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 4.7K Mar 29 15:16 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13K Mar 29 15:16 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 57K Mar 29 15:16 workshop.html\n-rw-r--r-- 1 runner docker 8.4K Mar 29 15:16 workshop.qmd\n-rw-r--r-- 1 runner docker 8.4K Mar 29 15:18 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4.0K Mar 29 15:16 workshop_files\n\n\nThe -a option stands for “all” and shows us all the files, including hidden files.\n\nls -alh\n\ntotal 136K\ndrwxr-xr-x 5 runner docker 4.0K Mar 29 15:18 .\ndrwxr-xr-x 6 runner docker 4.0K Mar 29 15:16 ..\ndrwxr-xr-x 2 runner docker 4.0K Mar 29 15:16 data\ndrwxr-xr-x 2 runner docker 4.0K Mar 29 15:16 images\n-rw-r--r-- 1 runner docker 1.6K Mar 29 15:16 overview.qmd\n-rw-r--r-- 1 runner docker 184 Mar 29 15:16 
study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 4.7K Mar 29 15:16 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13K Mar 29 15:16 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 57K Mar 29 15:16 workshop.html\n-rw-r--r-- 1 runner docker 8.4K Mar 29 15:16 workshop.qmd\n-rw-r--r-- 1 runner docker 8.4K Mar 29 15:18 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4.0K Mar 29 15:16 workshop_files\n\n\nYou can move about with the cd command, which stands for “change directory”. You can use it to move into a directory by specifying the path to the directory:\n\ncd data\npwd\ncd ..\npwd\ncd data\npwd\n\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2/data\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2/data\n\n\nhead 1cq2.pdb\nHEADER OXYGEN STORAGE/TRANSPORT 04-AUG-99 1CQ2 \nTITLE NEUTRON STRUCTURE OF FULLY DEUTERATED SPERM WHALE MYOGLOBIN AT 2.0 \nTITLE 2 ANGSTROM \nCOMPND MOL_ID: 1; \nCOMPND 2 MOLECULE: MYOGLOBIN; \nCOMPND 3 CHAIN: A; \nCOMPND 4 ENGINEERED: YES; \nCOMPND 5 OTHER_DETAILS: PROTEIN IS FULLY DEUTERATED \nSOURCE MOL_ID: 1; \nSOURCE 2 ORGANISM_SCIENTIFIC: PHYSETER CATODON; \nhead -20 data/1cq2.pdb\nHEADER OXYGEN STORAGE/TRANSPORT 04-AUG-99 1CQ2 \nTITLE NEUTRON STRUCTURE OF FULLY DEUTERATED SPERM WHALE MYOGLOBIN AT 2.0 \nTITLE 2 ANGSTROM \nCOMPND MOL_ID: 1; \nCOMPND 2 MOLECULE: MYOGLOBIN; \nCOMPND 3 CHAIN: A; \nCOMPND 4 ENGINEERED: YES; \nCOMPND 5 OTHER_DETAILS: PROTEIN IS FULLY DEUTERATED \nSOURCE MOL_ID: 1; \nSOURCE 2 ORGANISM_SCIENTIFIC: PHYSETER CATODON; \nSOURCE 3 ORGANISM_COMMON: SPERM WHALE; \nSOURCE 4 ORGANISM_TAXID: 9755; \nSOURCE 5 EXPRESSION_SYSTEM: ESCHERICHIA COLI; \nSOURCE 6 EXPRESSION_SYSTEM_TAXID: 562; \nSOURCE 7 EXPRESSION_SYSTEM_VECTOR_TYPE: PLASMID; \nSOURCE 8 EXPRESSION_SYSTEM_PLASMID: PET15A \nKEYWDS HELICAL, GLOBULAR, ALL-HYDROGEN CONTAINING STRUCTURE, OXYGEN STORAGE- \nKEYWDS 2 TRANSPORT COMPLEX \nEXPDTA NEUTRON DIFFRACTION \nAUTHOR 
F.SHU,V.RAMAKRISHNAN,B.P.SCHOENBORN \nless 1cq2.pdb\nless is a program that displays the contents of a file, one page at a time. It is useful for viewing large files because it does not load the whole file into memory before displaying it. Instead, it reads and displays a few lines at a time. You can navigate forward through the file with the spacebar, and backwards with the b key. Press q to quit.\nA wildcard is a character that can be used as a substitute for any of a class of characters in a search, The most common wildcard characters are the asterisk (*) and the question mark (?).\nls *.csv\ncp stands for “copy”. You can copy a file from one directory to another by giving cp the path to the file you want to copy and the path to the destination directory.\ncp 1cq2.pdb copy_of_1cq2.pdb\ncp 1cq2.pdb ../copy_of_1cq2.pdb\ncp 1cq2.pdb ../bob.txt\nTo delete a file use the rm command, which stands for “remove”.\nrm ../bob.txt\nbut be careful because the file will be gone forever. There is no “are you sure?” or undo.\nTo move a file from one directory to another, use the mv command. mv works like cp except that it also deletes the original file.\nmv ../copy_of_1cq2.pdb .\nMake a directory\nmkdir mynewdir", + "text": "RStudio terminal\nThe RStudio terminal is a convenient interface to the shell without leaving RStudio. It is useful for running commands that are not available in R. 
For example, you can use it to run other programs like fasqc, git, ftp, ssh\nNavigating your file system\nSeveral commands are frequently used to create, inspect, rename, and delete files and directories.\n$\nThe dollar sign is the prompt (like > on the R console), which shows us that the shell is waiting for input.\nYou can find out where you are using the pwd command, which stands for “print working directory”.\n\npwd\n\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2\n\n\nYou can find out what you can see with ls which stands for “list”.\n\nls\n\ndata\nimages\noverview.qmd\nstudy_after_workshop.qmd\nstudy_before_workshop.ipynb\nstudy_before_workshop.qmd\nworkshop.html\nworkshop.qmd\nworkshop.rmarkdown\nworkshop_files\n\n\nYou might have noticed that unlike R, the commands do not have brackets after them. Instead, options (or switches) are given after the command. For example, we can modify the ls command to give us more information with the -l option, which stands for “long”.\n\nls -l\n\ntotal 128\ndrwxr-xr-x 2 runner docker 4096 Apr 2 15:33 data\ndrwxr-xr-x 2 runner docker 4096 Apr 2 15:33 images\n-rw-r--r-- 1 runner docker 1597 Apr 2 15:33 overview.qmd\n-rw-r--r-- 1 runner docker 184 Apr 2 15:33 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 4807 Apr 2 15:33 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13029 Apr 2 15:33 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 58063 Apr 2 15:33 workshop.html\n-rw-r--r-- 1 runner docker 8550 Apr 2 15:33 workshop.qmd\n-rw-r--r-- 1 runner docker 8564 Apr 2 15:35 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4096 Apr 2 15:33 workshop_files\n\n\nYou can use more than one option at once. 
The -h option stands for “human readable” and makes the file sizes easier to understand for humans:\n\nls -hl\n\ntotal 128K\ndrwxr-xr-x 2 runner docker 4.0K Apr 2 15:33 data\ndrwxr-xr-x 2 runner docker 4.0K Apr 2 15:33 images\n-rw-r--r-- 1 runner docker 1.6K Apr 2 15:33 overview.qmd\n-rw-r--r-- 1 runner docker 184 Apr 2 15:33 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 4.7K Apr 2 15:33 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13K Apr 2 15:33 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 57K Apr 2 15:33 workshop.html\n-rw-r--r-- 1 runner docker 8.4K Apr 2 15:33 workshop.qmd\n-rw-r--r-- 1 runner docker 8.4K Apr 2 15:35 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4.0K Apr 2 15:33 workshop_files\n\n\nThe -a option stands for “all” and shows us all the files, including hidden files.\n\nls -alh\n\ntotal 136K\ndrwxr-xr-x 5 runner docker 4.0K Apr 2 15:35 .\ndrwxr-xr-x 6 runner docker 4.0K Apr 2 15:33 ..\ndrwxr-xr-x 2 runner docker 4.0K Apr 2 15:33 data\ndrwxr-xr-x 2 runner docker 4.0K Apr 2 15:33 images\n-rw-r--r-- 1 runner docker 1.6K Apr 2 15:33 overview.qmd\n-rw-r--r-- 1 runner docker 184 Apr 2 15:33 study_after_workshop.qmd\n-rw-r--r-- 1 runner docker 4.7K Apr 2 15:33 study_before_workshop.ipynb\n-rw-r--r-- 1 runner docker 13K Apr 2 15:33 study_before_workshop.qmd\n-rw-r--r-- 1 runner docker 57K Apr 2 15:33 workshop.html\n-rw-r--r-- 1 runner docker 8.4K Apr 2 15:33 workshop.qmd\n-rw-r--r-- 1 runner docker 8.4K Apr 2 15:35 workshop.rmarkdown\ndrwxr-xr-x 3 runner docker 4.0K Apr 2 15:33 workshop_files\n\n\nYou can move about with the cd command, which stands for “change directory”. 
You can use it to move into a directory by specifying the path to the directory:\n\ncd data\npwd\ncd ..\npwd\ncd data\npwd\n\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2/data\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2\n/home/runner/work/BIO00088H-data/BIO00088H-data/core/week-2/data\n\n\nhead 1cq2.pdb\nHEADER OXYGEN STORAGE/TRANSPORT 04-AUG-99 1CQ2 \nTITLE NEUTRON STRUCTURE OF FULLY DEUTERATED SPERM WHALE MYOGLOBIN AT 2.0 \nTITLE 2 ANGSTROM \nCOMPND MOL_ID: 1; \nCOMPND 2 MOLECULE: MYOGLOBIN; \nCOMPND 3 CHAIN: A; \nCOMPND 4 ENGINEERED: YES; \nCOMPND 5 OTHER_DETAILS: PROTEIN IS FULLY DEUTERATED \nSOURCE MOL_ID: 1; \nSOURCE 2 ORGANISM_SCIENTIFIC: PHYSETER CATODON; \nhead -20 data/1cq2.pdb\nHEADER OXYGEN STORAGE/TRANSPORT 04-AUG-99 1CQ2 \nTITLE NEUTRON STRUCTURE OF FULLY DEUTERATED SPERM WHALE MYOGLOBIN AT 2.0 \nTITLE 2 ANGSTROM \nCOMPND MOL_ID: 1; \nCOMPND 2 MOLECULE: MYOGLOBIN; \nCOMPND 3 CHAIN: A; \nCOMPND 4 ENGINEERED: YES; \nCOMPND 5 OTHER_DETAILS: PROTEIN IS FULLY DEUTERATED \nSOURCE MOL_ID: 1; \nSOURCE 2 ORGANISM_SCIENTIFIC: PHYSETER CATODON; \nSOURCE 3 ORGANISM_COMMON: SPERM WHALE; \nSOURCE 4 ORGANISM_TAXID: 9755; \nSOURCE 5 EXPRESSION_SYSTEM: ESCHERICHIA COLI; \nSOURCE 6 EXPRESSION_SYSTEM_TAXID: 562; \nSOURCE 7 EXPRESSION_SYSTEM_VECTOR_TYPE: PLASMID; \nSOURCE 8 EXPRESSION_SYSTEM_PLASMID: PET15A \nKEYWDS HELICAL, GLOBULAR, ALL-HYDROGEN CONTAINING STRUCTURE, OXYGEN STORAGE- \nKEYWDS 2 TRANSPORT COMPLEX \nEXPDTA NEUTRON DIFFRACTION \nAUTHOR F.SHU,V.RAMAKRISHNAN,B.P.SCHOENBORN \nless 1cq2.pdb\nless is a program that displays the contents of a file, one page at a time. It is useful for viewing large files because it does not load the whole file into memory before displaying it. Instead, it reads and displays a few lines at a time. You can navigate forward through the file with the spacebar, and backwards with the b key. 
Press q to quit.\nA wildcard is a character that can be used as a substitute for any of a class of characters in a search, The most common wildcard characters are the asterisk (*) and the question mark (?).\nls *.csv\ncp stands for “copy”. You can copy a file from one directory to another by giving cp the path to the file you want to copy and the path to the destination directory.\ncp 1cq2.pdb copy_of_1cq2.pdb\ncp 1cq2.pdb ../copy_of_1cq2.pdb\ncp 1cq2.pdb ../bob.txt\nTo delete a file use the rm command, which stands for “remove”.\nrm ../bob.txt\nbut be careful because the file will be gone forever. There is no “are you sure?” or undo.\nTo move a file from one directory to another, use the mv command. mv works like cp except that it also deletes the original file.\nmv ../copy_of_1cq2.pdb .\nMake a directory\nmkdir mynewdir", "crumbs": [ "Core", "Week 2: Workflow tips", @@ -2468,7 +2468,7 @@ "href": "omics/kelly/workshop.html", "title": "Workflow for VFA analysis", "section": "", - "text": "I have some data and information from Kelly. I have interpreted it and written some code to do the calculations.\nHowever, Kelly hasn’t had a chance to look at it yet so I am providing the exact information and data he supplied along with my suggested workflow based on my interpretation of the data and info.\n\nThe file is a CSV file, with some notes on top and the data in the following order, post notes and headers. Please note that all chemical data is in millimolar. There are 62 rows of actual data.\nSample Name – Replicate, Time (days), Acetate, Propanoate, Isobutyrate, Butyrate, Isopentanoate, Pentanoate, Isohexanoate, Hexanoate\nThe students should be able to transform the data from mM to mg/L, and to g/L. To do this they only need to multiply the molecular weight of the compound (listed in the notes in the file) by the concentration in mM to get mg/L. Obviously to get g/L they will just divide by 1000. 
I have some data and information from Kelly. I have interpreted it and written some code to do the calculations.
However, Kelly hasn’t had a chance to look at it yet, so I am providing the exact information and data he supplied, along with my suggested workflow based on my interpretation of the data and info.

The file is a CSV file, with some notes on top and the data in the following order, after the notes and headers. Please note that all chemical data are in millimolar. There are 62 rows of actual data.
Sample Name – Replicate, Time (days), Acetate, Propanoate, Isobutyrate, Butyrate, Isopentanoate, Pentanoate, Isohexanoate, Hexanoate
The students should be able to transform the data from mM to mg/L, and to g/L. To do this they only need to multiply the molecular weight of the compound (listed in the notes in the file) by the concentration in mM to get mg/L. Obviously, to get g/L they will just divide by 1000. They should be able to graph the VFA concentrations with time.
They should also be able to do a simple flux measurement, which is the change in VFA concentration over a period of time, divided by the weight or volume of material. In this case it might be equal to Delta(Acetate at 3 days - Acetate at 1 day)/Delta(3 days - 1 day)/50 ml sludge. This would provide a final flux with the units of mg acetate per ml sludge per day.
Let me know if this isn’t clear.
Perhaps more importantly, they should be able to graph and extract the reaction rate, assuming a first-order chemical/biological reaction and an exponential falloff rate. I found this as a starting point (https://martinlab.chem.umass.edu/r-fitting-data/), but I assume Emma has something much more effective already in the pipeline.

I created these two data files from the original.

8 VFA in mM for 60 samples: vfa.csv. There were 63 rows of data in the original file. There was no time 0 for one treatment and all values were zero for the other treatment, so I removed those.

Two treatments: straw (CN10) and water (NC)
10 time points: 1, 3, 5, 9, 11, 13, 16, 18, 20, 22
three replicates per treatment per time point
2 x 10 x 3 = 60 groups
8 VFA with concentration in mM (millimolar): acetate, propanoate, isobutyrate, butyrate, isopentanoate, pentanoate, isohexanoate, hexanoate

Molecular weights for each VFA in grams per mole: mol_wt.txt (VFAs from AD vials)

We need to:

Calculate the change in VFA g/l with time
Recalculate the data into grams per litre - convert to molar (1 millimolar = 0.001 molar) and multiply by the molecular weight of each VFA
Calculate the percent representation of each VFA, by mM and by weight
Calculate the flux (change in VFA concentration over a period of time, divided by weight or volume of material) of each VFA, by mM and by weight
Graph and extract the reaction rate, assuming a first-order chemical/biological reaction and an exponential falloff rate

🎬 Start RStudio from the Start menu
🎬 Make an RStudio project. Be deliberate about where you create it so that it is a good place for you
🎬 Use the Files pane to make new folders for the data. I suggest data-raw and data-processed
🎬 Make a new script called analysis.R to carry out the rest of the work.
🎬 Load tidyverse (Wickham et al. 2019) for importing, summarising, plotting and filtering.

library(tidyverse)

🎬 Save the files to data-raw. Open them and examine them. You may want to use Excel for the csv file.
🎬 Answer the following questions:

What is in the rows and columns of each file?
How many rows and columns are there in each file?
How are the data organised?

🎬 Import:

vfa_cummul <- read_csv("data-raw/vfa.csv") |> janitor::clean_names()

🎬 Split treatment and replicate into separate columns so there is a treatment column:

vfa_cummul <- vfa_cummul |> 
  separate(col = sample_replicate, 
           into = c("treatment", "replicate"), 
           sep = "-",
           remove = FALSE)

📢 This code depends on the sample_replicate column being in the form treatment-replicate. In the sample data, CN10 and NC are the treatments and the replicate is a number from 1 to 3. The value does not include an encoding for time. You might want to edit your file to match this format.
The provided data is cumulative/absolute. We need to calculate the change in VFA with time. There is a function, lag(), that will help us do this. It will take the previous value and subtract it from the current value. We need to do that separately for each sample_replicate, so we need to group by sample_replicate first.
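As a quick illustration of how lag() behaves, here is a toy vector (not the workshop data):

```r
library(dplyr)

x <- c(2, 5, 9, 14)
lag(x)      # NA  2  5  9  : each value shifted down by one position
x - lag(x)  # NA  3  4  5  : the change between successive values
```

The first element is NA because there is no previous value to subtract.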
We also need to make sure the data is in the right order, so we will arrange by sample_replicate and time_day.

🎬 Create a dataframe for the change in VFA 📢 and the change in time:

vfa_delta <- vfa_cummul |> 
  group_by(sample_replicate) |> 
  arrange(sample_replicate, time_day) |>
  mutate(acetate = acetate - lag(acetate),
         propanoate = propanoate - lag(propanoate),
         isobutyrate = isobutyrate - lag(isobutyrate),
         butyrate = butyrate - lag(butyrate),
         isopentanoate = isopentanoate - lag(isopentanoate),
         pentanoate = pentanoate - lag(pentanoate),
         isohexanoate = isohexanoate - lag(isohexanoate),
         hexanoate = hexanoate - lag(hexanoate),
         delta_time = time_day - lag(time_day))

Now we have two dataframes: one for the cumulative data and one for the change in VFA and time. Note that the VFA values have been replaced by the change in VFA, but the change in time is in a separate column. I have done this because we later want to plot flux (not yet added) against time.
📢 This code also depends on the sample_replicate column being in the form treatment-replicate. lag() is calculating the difference between a value at one time point and the next for a treatment-replicate combination.

To make conversions from mM to g/l we need to do mM * 0.001 * MW. We will import the molecular weight data, pivot the VFA data to long format and join the molecular weight data to the VFA data. Then we can calculate the g/l.
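To make the conversion concrete, here is the arithmetic for one hypothetical value. The molecular weight used below (60.05 g/mol, roughly that of acetic acid) is illustrative only; the real values come from mol_wt.txt:

```r
conc_mM   <- 10                 # example concentration in millimolar
mw        <- 60.05              # illustrative molecular weight in g/mol
conc_M    <- conc_mM * 0.001    # 0.01 mol/l
conc_g_l  <- conc_M * mw        # 0.6005 g/l
conc_mg_l <- conc_g_l * 1000    # 600.5 mg/l
```

So mM * 0.001 * MW gives g/l directly, and multiplying by 1000 recovers the mg/L figure Kelly describes.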
We will do this for both the cumulative and delta dataframes.
🎬 Import the molecular weight data:

mol_wt <- read_table("data-raw/mol_wt.txt") |>
  mutate(vfa = tolower(vfa))

🎬 Pivot the cumulative data to long format:

vfa_cummul <- vfa_cummul |> 
  pivot_longer(cols = -c(sample_replicate,
                         treatment, 
                         replicate,
                         time_day),
               values_to = "conc_mM",
               names_to = "vfa") 

View vfa_cummul to check you understand what you have done.
🎬 Join the molecular weights to the data and calculate g/l (mutate to convert: conc_mM * 0.001 * mw):

vfa_cummul <- vfa_cummul |> 
  left_join(mol_wt, by = "vfa") |>
  mutate(conc_g_l = conc_mM * 0.001 * mw)

View vfa_cummul to check you understand what you have done.
Repeat for the delta data.
🎬 Pivot the change data, vfa_delta, to long format (📢 delta_time is added to the list of columns that are not pivoted but repeated):

vfa_delta <- vfa_delta |> 
  pivot_longer(cols = -c(sample_replicate,
                         treatment, 
                         replicate,
                         time_day,
                         delta_time),
               values_to = "conc_mM",
               names_to = "vfa") 

View vfa_delta to check it looks like vfa_cummul.
🎬 Join the molecular weights to the data and calculate g/l (mutate to convert: conc_mM * 0.001 * mw):

vfa_delta <- vfa_delta |> 
  left_join(mol_wt, by = "vfa") |>
  mutate(conc_g_l = conc_mM * 0.001 * mw)

Calculate the percent representation of each VFA, by mM and by weight.
🎬 Add columns with the percent representation of each VFA for mM and g/l:

vfa_cummul <- vfa_cummul |> 
  group_by(sample_replicate, time_day) |> 
  mutate(percent_conc_g_l = conc_g_l / sum(conc_g_l) * 100,
         percent_conc_mM = conc_mM / sum(conc_mM) * 100)

🎬 Make summary data for graphing:

vfa_cummul_summary <- vfa_cummul |> 
  group_by(treatment, time_day, vfa) |> 
  summarise(mean_g_l = mean(conc_g_l),
            se_g_l = sd(conc_g_l)/sqrt(length(conc_g_l)),
            mean_mM = mean(conc_mM),
            se_mM = sd(conc_mM)/sqrt(length(conc_mM))) |> 
  ungroup()

vfa_delta_summary <- vfa_delta |> 
  group_by(treatment, time_day, vfa) |> 
  summarise(mean_g_l = mean(conc_g_l),
            se_g_l = sd(conc_g_l)/sqrt(length(conc_g_l)),
            mean_mM = mean(conc_mM),
            se_mM = sd(conc_mM)/sqrt(length(conc_mM))) |> 
  ungroup()

🎬 Graph the cumulative data, grams per litre:

vfa_cummul_summary |> 
  ggplot(aes(x = time_day, colour = vfa)) +
  geom_line(aes(y = mean_g_l), 
            linewidth = 1) +
  geom_errorbar(aes(ymin = mean_g_l - se_g_l,
                    ymax = mean_g_l + se_g_l),
                width = 0.5, 
                show.legend = F,
                linewidth = 1) +
  scale_color_viridis_d(name = NULL) +
  scale_x_continuous(name = "Time (days)") +
  scale_y_continuous(name = "Mean VFA concentration (g/l)") +
  theme_bw() +
  facet_wrap(~treatment) +
  theme(strip.background = element_blank())

🎬 Graph the change data, grams per litre:

vfa_delta_summary |> 
  ggplot(aes(x = time_day, colour = vfa)) +
  geom_line(aes(y = mean_g_l), 
            linewidth = 1) +
  geom_errorbar(aes(ymin = mean_g_l - se_g_l,
                    ymax = mean_g_l + se_g_l),
                width = 0.5, 
                show.legend = F,
                linewidth = 1) +
  scale_color_viridis_d(name = NULL) +
  scale_x_continuous(name = "Time (days)") +
  scale_y_continuous(name = "Mean change in VFA concentration (g/l)") +
  theme_bw() +
  facet_wrap(~treatment) +
  theme(strip.background = element_blank())

🎬 Graph the mean percent representation of each VFA, g/l. Note: geom_col() will plot proportions if we set position = "fill".

vfa_cummul_summary |> 
  ggplot(aes(x = time_day, y = mean_g_l, fill = vfa)) +
  geom_col(position = "fill") +
  scale_fill_viridis_d(name = NULL) +
  scale_x_continuous(name = "Time (days)") +
  scale_y_continuous(name = "Mean Proportion VFA") +
  theme_bw() +
  facet_wrap(~treatment) +
  theme(strip.background = element_blank())

We have 8 VFA in our dataset. PCA will allow us to plot our samples in the “VFA” space so we can see if treatments, time or replicate cluster.
However, PCA expects a matrix with samples in rows and the VFA, the variables, in columns.
We will need to select the columns we need and pivot wider, then convert to a matrix.
🎬 Select the columns and pivot wider:

vfa_cummul_pca <- vfa_cummul |> 
  select(sample_replicate, 
         treatment, 
         replicate, 
         time_day, 
         vfa, 
         conc_g_l) |> 
  pivot_wider(names_from = vfa, 
              values_from = conc_g_l)

mat <- vfa_cummul_pca |> 
  ungroup() |>
  select(-sample_replicate, 
         -treatment, 
         -replicate, 
         -time_day) |> 
  as.matrix()

🎬 Perform PCA on the matrix:

pca <- mat |>
  prcomp(scale. = TRUE, 
         rank. = 4) 

The scale. argument tells prcomp() to scale the data to have a mean of 0 and a standard deviation of 1. The rank. argument tells prcomp() to calculate only the first 4 principal components. This is useful for visualisation as we can only plot in 2 or 3 dimensions. We can see the results of the PCA by viewing the summary() of the pca object.

summary(pca)

Importance of first k=4 (out of 8) components:
                          PC1    PC2     PC3     PC4
Standard deviation     2.4977 0.9026 0.77959 0.45567
Proportion of Variance 0.7798 0.1018 0.07597 0.02595
Cumulative Proportion  0.7798 0.8816 0.95760 0.98355

The Proportion of Variance row tells us how much of the variance is explained by each component: the first component explains 0.7798 of the variance, the second 0.1018 and the third 0.07597. Together, the first three components explain nearly 96% of the total variance in the data. Plotting PC1 against PC2 will capture about 78% of the variance, which is likely much better than we would get plotting any two VFA against each other.
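Those proportions can also be computed directly from the standard deviations stored in the pca object, which is a handy sanity check (this assumes the pca object created above):

```r
# pca is the prcomp object created above; sdev holds the standard
# deviation of every component, so sdev^2 is the variance of each PC
prop_var <- pca$sdev^2 / sum(pca$sdev^2)

# compare with the Proportion of Variance row in summary(pca)
round(prop_var, 4)
```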
To plot PC1 against PC2 we will need to extract the PC1 and PC2 scores from the pca object and add labels for the samples.
🎬 Create a dataframe of the scores, which are in pca$x, and add the sample information from vfa_cummul_pca:

pca_labelled <- data.frame(pca$x,
                           sample_replicate = vfa_cummul_pca$sample_replicate,
                           treatment = vfa_cummul_pca$treatment,
                           replicate = vfa_cummul_pca$replicate,
                           time_day = vfa_cummul_pca$time_day) 

The dataframe should look like this:

| PC1 | PC2 | PC3 | PC4 | sample_replicate | treatment | replicate | time_day |
|---|---|---|---|---|---|---|---|
| -2.9592362 | 0.6710553 | 0.0068846 | -0.4453904 | CN10-1 | CN10 | 1 | 1 |
| -2.7153060 | 0.7338367 | -0.2856872 | -0.2030110 | CN10-2 | CN10 | 2 | 1 |
| -2.7423102 | 0.8246832 | -0.4964249 | -0.1434490 | CN10-3 | CN10 | 3 | 1 |
| -1.1909064 | -1.0360724 | 1.1249513 | -0.7360599 | CN10-1 | CN10 | 1 | 3 |
| -1.3831563 | 0.9572091 | -1.5561657 | 0.0582755 | CN10-2 | CN10 | 2 | 3 |
| -1.1628940 | -0.0865412 | -0.6046780 | -0.1976743 | CN10-3 | CN10 | 3 | 3 |
| -0.2769661 | -0.2221055 | 1.1579897 | -0.6079395 | CN10-1 | CN10 | 1 | 5 |
| 0.3480962 | 0.3612522 | 0.5841649 | -0.0612366 | CN10-2 | CN10 | 2 | 5 |
| -0.7281116 | 1.6179706 | -0.6430170 | 0.0660727 | CN10-3 | CN10 | 3 | 5 |
| 0.9333578 | -0.1339061 | 1.0870945 | -0.4374103 | CN10-1 | CN10 | 1 | 9 |
| 2.0277528 | 0.6993342 | 0.3850147 | 0.0723540 | CN10-2 | CN10 | 2 | 9 |
| 1.9931908 | 0.5127260 | 0.6605782 | 0.1841974 | CN10-3 | CN10 | 3 | 9 |
| 1.8365692 | -0.4189762 | 0.7029015 | -0.3873133 | CN10-1 | CN10 | 1 | 11 |
| 2.3313978 | 0.3274834 | -0.0135608 | 0.0264372 | CN10-2 | CN10 | 2 | 11 |
| 1.5833035 | 0.9263509 | -0.1909483 | 0.1358320 | CN10-3 | CN10 | 3 | 11 |
| 2.8498246 | 0.3815854 | -0.4763500 | -0.0280281 | CN10-1 | CN10 | 1 | 13 |
| 3.5652461 | -0.0836709 | -0.5948483 | -0.1612809 | CN10-2 | CN10 | 2 | 13 |
| 4.1314944 | -1.2254642 | 0.2699666 | -0.3152100 | CN10-3 | CN10 | 3 | 13 |
| 3.7338024 | -0.6744610 | 0.4344639 | -0.3736234 | CN10-1 | CN10 | 1 | 16 |
| 3.6748427 | 0.5202498 | -0.4333685 | -0.1607235 | CN10-2 | CN10 | 2 | 16 |
| 3.9057053 | 0.3599520 | -0.3049074 | 0.0540037 | CN10-3 | CN10 | 3 | 16 |
| 3.4561583 | -0.0996639 | 0.4472090 | -0.0185889 | CN10-1 | CN10 | 1 | 18 |
| 3.6354729 | 0.3809673 | -0.0934957 | 0.0018722 | CN10-2 | CN10 | 2 | 18 |
| 2.9872250 | 0.7890400 | -0.2361098 | -0.1628506 | CN10-3 | CN10 | 3 | 18 |
| 3.3562231 | -0.2866224 | 0.1331068 | -0.2056366 | CN10-1 | CN10 | 1 | 20 |
| 3.2009943 | 0.4795967 | -0.2092384 | -0.5962183 | CN10-2 | CN10 | 2 | 20 |
| 3.9948127 | 0.7772640 | -0.3181372 | 0.1218382 | CN10-3 | CN10 | 3 | 20 |
| 2.8874207 | 0.4554681 | 0.3106044 | -0.2220240 | CN10-1 | CN10 | 1 | 22 |
| 3.6868864 | 0.9681097 | -0.2174166 | -0.2246775 | CN10-2 | CN10 | 2 | 22 |
| 4.8689622 | 0.5218563 | -0.2906042 | 0.3532981 | CN10-3 | CN10 | 3 | 22 |
| -3.8483418 | 1.5205541 | -0.8809715 | -0.5306228 | NC-1 | NC | 1 | 1 |
| -3.7653460 | 1.5598499 | -1.0570798 | -0.4075397 | NC-2 | NC | 2 | 1 |
| -3.8586309 | 1.6044929 | -1.0936576 | -0.4292404 | NC-3 | NC | 3 | 1 |
| -2.6934553 | -0.9198406 | 0.7439841 | -0.9881115 | NC-1 | NC | 1 | 3 |
| -2.5064076 | -1.0856761 | 0.6334250 | -0.8999028 | NC-2 | NC | 2 | 3 |
| -2.4097945 | -1.2731546 | 1.1767665 | -0.8715948 | NC-3 | NC | 3 | 3 |
| -3.0567309 | 0.5804906 | -0.1391344 | -0.3701763 | NC-1 | NC | 1 | 5 |
| -2.3511737 | -0.3692016 | 0.7053757 | -0.3284113 | NC-2 | NC | 2 | 5 |
| -2.6752311 | -0.0637855 | 0.4692194 | -0.3841240 | NC-3 | NC | 3 | 5 |
| -1.2335368 | -0.6717374 | 0.2155285 | 0.1060486 | NC-1 | NC | 1 | 9 |
| -1.6550689 | 0.1576557 | 0.0687658 | 0.2750388 | NC-2 | NC | 2 | 9 |
| -0.8948103 | -0.8171884 | 0.8062876 | 0.5032756 | NC-3 | NC | 3 | 9 |
| -1.2512737 | -0.4720993 | 0.4071788 | 0.4693106 | NC-1 | NC | 1 | 11 |
| -1.8091407 | 0.0552546 | 0.0424090 | 0.3918222 | NC-2 | NC | 2 | 11 |
| -2.4225566 | 0.4998948 | -0.1987773 | 0.1959282 | NC-3 | NC | 3 | 11 |
| -0.9193427 | -0.7741826 | 0.0918984 | 0.5089847 | NC-1 | NC | 1 | 13 |
| -0.8800183 | -0.7850404 | 0.0895146 | 0.6050052 | NC-2 | NC | 2 | 13 |
| -1.3075763 | -0.2525829 | -0.2993318 | 0.5874269 | NC-3 | NC | 3 | 13 |
| -0.9543813 | -0.3170305 | 0.0885062 | 0.7153071 | NC-1 | NC | 1 | 16 |
| -0.4303679 | -0.9952374 | 0.2038883 | 0.8214647 | NC-2 | NC | 2 | 16 |
| -0.9457300 | -0.7180646 | 0.3081282 | 0.6563748 | NC-3 | NC | 3 | 16 |
| -1.3830063 | 0.0614677 | -0.2805342 | 0.5462137 | NC-1 | NC | 1 | 18 |
| -0.7960522 | -0.5792768 | -0.0369684 | 0.6621526 | NC-2 | NC | 2 | 18 |
| -1.6822927 | 0.1041656 | 0.0634251 | 0.4337240 | NC-3 | NC | 3 | 18 |
| -1.3157478 | -0.0835664 | -0.1246253 | 0.5599467 | NC-1 | NC | 1 | 20 |
| -1.7425068 | 0.3029227 | -0.0161466 | 0.5134360 | NC-2 | NC | 2 | 20 |
| -1.3970678 | -0.2923056 | 0.4324586 | 0.4765460 | NC-3 | NC | 3 | 20 |
| -1.0777451 | -0.1232925 | 0.2388682 | 0.7585307 | NC-1 | NC | 1 | 22 |
| 0.4851039 | -4.1291445 | -4.0625050 | -0.4582436 | NC-2 | NC | 2 | 22 |
| -1.0516226 | -0.7228479 | 1.0641320 | 0.4955951 | NC-3 | NC | 3 | 22 |

🎬 Plot PC1 against PC2, colour by time and shape by treatment:

pca_labelled |> 
  ggplot(aes(x = PC1, y = PC2, 
             colour = factor(time_day),
             shape = treatment)) +
  geom_point(size = 3) +
  scale_colour_viridis_d(end = 0.95, begin = 0.15,
                         name = "Time") +
  scale_shape_manual(values = c(17, 19),
                     name = NULL) +
  theme_classic()

🎬 Plot PC1 against PC2, colour by time and facet by treatment:

pca_labelled |> 
  ggplot(aes(x = PC1, y = PC2, colour = factor(time_day))) +
  geom_point(size = 3) +
  scale_colour_viridis_d(end = 0.95, begin = 0.15,
                         name = "Time") +
  facet_wrap(~treatment, ncol = 1) +
  theme_classic()

Replicates are similar at the same time and treatment, especially early on, as we might expect. PC1 is essentially an axis of time.

We are going to create an interactive heatmap with the heatmaply (Galili et al. 2017) package.
heatmaply takes a matrix as input so we can use mat\n🎬 Set the rownames to the sample id which is a combination of sample_replicate and time_day:\n\nrownames(mat) <- interaction(vfa_cummul_pca$sample_replicate, \n vfa_cummul_pca$time_day)\n\nYou might want to view the matrix by clicking on it in the environment pane.\n🎬 Load the heatmaply package:\n\nlibrary(heatmaply)\n\nWe need to tell the clustering algorithm how many clusters to create. We will set the number of clusters for the treatments to be 2 and the number of clusters for the vfa to be the same since it makes sense to see what clusters of VFAs correlate with the treatments.\n🎬 Set the number of clusters for the treatments and vfa:\n\nn_treatment_clusters <- 2\nn_vfa_clusters <- 2\n\n🎬 Create the heatmap:\n\nheatmaply(mat, \n scale = \"column\",\n k_col = n_vfa_clusters,\n k_row = n_treatment_clusters,\n fontsize_row = 7, fontsize_col = 10,\n labCol = colnames(mat),\n labRow = rownames(mat),\n heatmap_layers = theme(axis.line = element_blank()))\n\n\n\n\n\nThe heatmap will open in the viewer pane (rather than the plot pane) because it is HTML. You can “Show in a new window” to see it in a larger format. You can also zoom in and out and pan around the heatmap and download it as a png. You might feel the colour bar is not adding much to the plot. You can remove it by setting hide_colorbar = TRUE in the heatmaply() function.\nOne of the NC replicates at time = 22 is very different from the other replicates. The CN10 treatments cluster together at high time points. CN10 samples are more similar to NC samples early on. Most of the VFAs behave similarly with highest values later in the experiment for CN10 but isohexanoate and hexanoate differ. 
The difference might be because isohexanoate is especially low in the NC replicates at time = 1 and hexanoate is especially high in the NC replicate 2 at time = 22.\n\nCalculate the flux (change in VFA concentration over a period of time, divided by weight or volume of material) of each VFA, by mM and by weight. Emma’s note: I think the terms flux and reaction rate are used interchangeably.\nI’ve requested clarification: for the flux measurements, do they need graphs of the rate of change wrt time? And is the sludge volume going to be a constant for all samples or something they measure and varies by vial?\nAnswer: The sludge volume is constant, at 30 mls within a 120ml vial. Some students will want to graph reaction rate with time, others will want to compare the measured GC-FID concentrations against the model output.\n📢 Kelly asked for “.. a simple flux measurement, which is the change in VFA concentration over a period of time, divided by weight or volume of material. In this case it might be equal to == Delta(Acetate at 3 days - Acetate at 1 day)/Delta (3days - 1day)/50 mls sludge. This would provide a final flux with the units of mg acetate per ml sludge per day.”\nNote: Kelly says mg/ml where earlier he used g/L. These are the same (but I called my column conc_g_l)\nWe need to use the vfa_delta data frame. It contains the change in VFA concentration and the change in time. We will add a column for the flux of each VFA in g/L/day (mg/ml/day).\n\nsludge_volume <- 30 # ml\nvfa_delta <- vfa_delta |> \n mutate(flux = conc_g_l / delta_time / sludge_volume)\n\nNAs at time 1 are expected because there’s no time before that to calculate a change.\n\nGraph and extract the reaction rate assuming a first order chemical/biological reaction and an exponential falloff rate\nI’ve requested clarification: for the nonlinear least squares curve fitting, I assume x is time but I’m not clear what the Y variable is - concentration? or change in concentration? 
or rate of change of concentration?\nAnswer: The non-linear equation describes concentration change with time. Effectively the change in concentration is dependent upon the available concentration, in this example [Hex] represents the concentration of Hexanoic acid, while the T0 and T1 represent time steps.\n[Hex]T1 = [Hex]T0 - [Hex]T0 * k\nOr, the amount of Hexanoic acid remaining at T1 (let’s say one hour from the last data point) is equal to the starting concentration ([Hex]T0) minus the concentration dependent metabolism ([Hex]T0 * k).\n📢 We can now plot the observed fluxes (reaction rates) over time.\nI’ve summarised the data to add error bars and means.\n\nvfa_delta_summary <- vfa_delta |> \n group_by(treatment, time_day, vfa) |> \n summarise(mean_flux = mean(flux),\n se_flux = sd(flux)/sqrt(length(flux))) |> \n ungroup()\n\n\nggplot(data = vfa_delta, aes(x = time_day, colour = vfa)) +\n geom_point(aes(y = flux), alpha = 0.6) +\n geom_errorbar(data = vfa_delta_summary, \n aes(ymin = mean_flux - se_flux, \n ymax = mean_flux + se_flux), \n width = 1) +\n geom_errorbar(data = vfa_delta_summary, \n aes(ymin = mean_flux, \n ymax = mean_flux), \n width = 0.8) +\n scale_color_viridis_d(name = NULL) +\n scale_x_continuous(name = \"Time (days)\") +\n scale_y_continuous(name = \"VFA Flux mg/ml/day\") +\n theme_bw() +\n facet_wrap(~treatment) +\n theme(strip.background = element_blank())\n\n\n\n\n\n\n\nOr maybe this is easier to read:\n\nggplot(data = vfa_delta, aes(x = time_day, colour = treatment)) +\n geom_point(aes(y = flux), alpha = 0.6) +\n geom_errorbar(data = vfa_delta_summary, \n aes(ymin = mean_flux - se_flux, \n ymax = mean_flux + se_flux), \n width = 1) +\n geom_errorbar(data = vfa_delta_summary, \n aes(ymin = mean_flux, \n ymax = mean_flux), \n width = 0.8) +\n scale_color_viridis_d(name = NULL, begin = 0.2, end = 0.7) +\n scale_x_continuous(name = \"Time (days)\") +\n scale_y_continuous(name = \"VFA Flux mg/ml/day\") +\n theme_bw() +\n facet_wrap(~ vfa, nrow 
= 2) +\n theme(strip.background = element_blank(),\n legend.position = \"top\")\n\n\n\n\n\n\n\nI have not yet worked out the best way to plot the modelled reaction rate.\nPages made with R (R Core Team 2023), Quarto (Allaire et al. 2022), knitr (Xie 2022), kableExtra (Zhu 2021)",
    "crumbs": [
      "Omics",
      "Kelly's Project",
@@ -2528,7 +2528,7 @@
    "href": "omics/kelly/workshop.html#import",
    "title": "Workflow for VFA analysis",
    "section": "",
-    "text": "🎬 Import\n\nvfa_cummul <- read_csv(\"data-raw/vfa.csv\") |> janitor::clean_names()\n\n🎬 Split treatment and replicate to separate columns so there is a treatment column:\n\nvfa_cummul <- vfa_cummul |> \n separate(col = sample_replicate, \n into = c(\"treatment\", \"replicate\"), \n sep = \"-\",\n remove = FALSE)\n\nThe provided data is cumulative/absolute. We need to calculate the change in VFA with time. There is a function, lag() that will help us do this. It will take the previous value and subtract it from the current value. We need to do that separately for each sample_replicate so we need to group by sample_replicate first. We also need to make sure the data is in the right order so we will arrange by sample_replicate and time_day.",
+    "text": "🎬 Import\n\nvfa_cummul <- read_csv(\"data-raw/vfa.csv\") |> janitor::clean_names()\n\n🎬 Split treatment and replicate to separate columns so there is a treatment column:\n\nvfa_cummul <- vfa_cummul |> \n separate(col = sample_replicate, \n into = c(\"treatment\", \"replicate\"), \n sep = \"-\",\n remove = FALSE)\n\n📢 This code depends on the sample_replicate column being in the form treatment-replicate. In the sample data CN10 and NC are the treatments. The replicate is a number from 1 to 3. The value does not include an encoding for time. You might want to edit your file to match this format.\nThe provided data is cumulative/absolute. We need to calculate the change in VFA with time. There is a function, lag() that will help us do this. 
It will take the previous value and subtract it from the current value. We need to do that separately for each sample_replicate so we need to group by sample_replicate first. We also need to make sure the data is in the right order so we will arrange by sample_replicate and time_day.", "crumbs": [ "Omics", "Kelly's Project", @@ -2540,7 +2540,7 @@ "href": "omics/kelly/workshop.html#calculate-change-in-vfa-gl-with-time", "title": "Workflow for VFA analysis", "section": "", - "text": "🎬 Create dataframe for the change in VFA\n\nvfa_delta <- vfa_cummul |> \n group_by(sample_replicate) |> \n arrange(sample_replicate, time_day) |>\n mutate(acetate = acetate - lag(acetate),\n propanoate = propanoate - lag(propanoate),\n isobutyrate = isobutyrate - lag(isobutyrate),\n butyrate = butyrate - lag(butyrate),\n isopentanoate = isopentanoate - lag(isopentanoate),\n pentanoate = pentanoate - lag(pentanoate),\n isohexanoate = isohexanoate - lag(isohexanoate),\n hexanoate = hexanoate - lag(hexanoate))\n\nNow we have two dataframes, one for the cumulative data and one for the change in VFA.", + "text": "🎬 Create dataframe for the change in VFA 📢 and the change in time\n\nvfa_delta <- vfa_cummul |> \n group_by(sample_replicate) |> \n arrange(sample_replicate, time_day) |>\n mutate(acetate = acetate - lag(acetate),\n propanoate = propanoate - lag(propanoate),\n isobutyrate = isobutyrate - lag(isobutyrate),\n butyrate = butyrate - lag(butyrate),\n isopentanoate = isopentanoate - lag(isopentanoate),\n pentanoate = pentanoate - lag(pentanoate),\n isohexanoate = isohexanoate - lag(isohexanoate),\n hexanoate = hexanoate - lag(hexanoate),\n delta_time = time_day - lag(time_day))\n\nNow we have two dataframes, one for the cumulative data and one for the change in VFA and time. Note that the VFA values have been replaced by the change in VFA but the change in time is in a separate column. 
I have done this because we later want to plot flux (not yet added) against time\n📢 This code also depends on the sample_replicate column being in the form treatment-replicate. lag is calculating the difference between a value at one time point and the next for a treatment-replicate combination.", "crumbs": [ "Omics", "Kelly's Project", @@ -2552,7 +2552,7 @@ "href": "omics/kelly/workshop.html#recalculate-the-data-into-grams-per-litre", "title": "Workflow for VFA analysis", "section": "", - "text": "To make conversions from mM to g/l we need to do mM * 0.001 * MW. We will import the molecular weight data, pivot the VFA data to long format and join the molecular weight data to the VFA data. Then we can calculate the g/l. We will do this for both the cumulative and delta dataframes.\n🎬 import molecular weight data\n\nmol_wt <- read_table(\"data-raw/mol_wt.txt\") |>\n mutate(vfa = tolower(vfa))\n\n🎬 Pivot the cumulative data to long format:\n\nvfa_cummul <- vfa_cummul |> \n pivot_longer(cols = -c(sample_replicate,\n treatment, \n replicate,\n time_day),\n values_to = \"conc_mM\",\n names_to = \"vfa\") \n\nView vfa_cummul to check you understand what you have done.\n🎬 Join molecular weight to data and calculate g/l (mutate to convert to g/l * 0.001 * MW):\n\nvfa_cummul <- vfa_cummul |> \n left_join(mol_wt, by = \"vfa\") |>\n mutate(conc_g_l = conc_mM * 0.001 * mw)\n\nView vfa_cummul to check you understand what you have done.\nRepeat for the delta data.\n🎬 Pivot the change data, delta_vfa to long format:\n\nvfa_delta <- vfa_delta |> \n pivot_longer(cols = -c(sample_replicate,\n treatment, \n replicate,\n time_day),\n values_to = \"conc_mM\",\n names_to = \"vfa\") \n\nView vfa_delta to check it looks like vfa_cummul\n🎬 Join molecular weight to data and calculate g/l (mutate to convert to g/l * 0.001 * MW):\n\nvfa_delta <- vfa_delta |> \n left_join(mol_wt, by = \"vfa\") |>\n mutate(conc_g_l = conc_mM * 0.001 * mw)", + "text": "To make conversions from mM to g/l we need to 
do mM * 0.001 * MW. We will import the molecular weight data, pivot the VFA data to long format and join the molecular weight data to the VFA data. Then we can calculate the g/l. We will do this for both the cumulative and delta dataframes.\n🎬 Import molecular weight data\n\nmol_wt <- read_table(\"data-raw/mol_wt.txt\") |>\n mutate(vfa = tolower(vfa))\n\n🎬 Pivot the cumulative data to long format:\n\nvfa_cummul <- vfa_cummul |> \n pivot_longer(cols = -c(sample_replicate,\n treatment, \n replicate,\n time_day),\n values_to = \"conc_mM\",\n names_to = \"vfa\") \n\nView vfa_cummul to check you understand what you have done.\n🎬 Join molecular weight to data and calculate g/l (mutate to convert to g/l * 0.001 * MW):\n\nvfa_cummul <- vfa_cummul |> \n left_join(mol_wt, by = \"vfa\") |>\n mutate(conc_g_l = conc_mM * 0.001 * mw)\n\nView vfa_cummul to check you understand what you have done.\nRepeat for the delta data.\n🎬 Pivot the change data, vfa_delta to long format (📢 delta_time is added to the list of columns that do not need to be pivoted but repeated):\n\nvfa_delta <- vfa_delta |> \n pivot_longer(cols = -c(sample_replicate,\n treatment, \n replicate,\n time_day,\n delta_time),\n values_to = \"conc_mM\",\n names_to = \"vfa\") \n\nView vfa_delta to check it looks like vfa_cummul.\n🎬 Join molecular weight to data and calculate g/l (mutate to convert to g/l * 0.001 * MW):\n\nvfa_delta <- vfa_delta |> \n left_join(mol_wt, by = \"vfa\") |>\n mutate(conc_g_l = conc_mM * 0.001 * mw)",
    "crumbs": [
      "Omics",
      "Kelly's Project",
@@ -2588,7 +2588,7 @@
    "href": "omics/kelly/workshop.html#calculate-the-flux---pending.",
    "title": "Workflow for VFA analysis",
    "section": "",
-    "text": "Calculate the flux(change in VFA concentration over a period of time, divided by weight or volume of material) of each VFA, by mM and by weight.\nI’ve requested clarification: for the flux measurements, do they need graphs of the rate of change wrt time? 
And is the sludge volume going to be a constant for all samples or something they measure and varies by vial?",
+    "text": "Calculate the flux (change in VFA concentration over a period of time, divided by weight or volume of material) of each VFA, by mM and by weight. Emma’s note: I think the terms flux and reaction rate are used interchangeably.\nI’ve requested clarification: for the flux measurements, do they need graphs of the rate of change wrt time? And is the sludge volume going to be a constant for all samples or something they measure and varies by vial?\nAnswer: The sludge volume is constant, at 30 mls within a 120ml vial. Some students will want to graph reaction rate with time, others will want to compare the measured GC-FID concentrations against the model output.\n📢 Kelly asked for “.. a simple flux measurement, which is the change in VFA concentration over a period of time, divided by weight or volume of material. In this case it might be equal to == Delta(Acetate at 3 days - Acetate at 1 day)/Delta (3days - 1day)/50 mls sludge. This would provide a final flux with the units of mg acetate per ml sludge per day.”\nNote: Kelly says mg/ml where earlier he used g/L. These are the same (but I called my column conc_g_l)\nWe need to use the vfa_delta data frame. It contains the change in VFA concentration and the change in time. We will add a column for the flux of each VFA in g/L/day. 
(mg/ml/day)\n\nsludge_volume <- 30 # ml\nvfa_delta <- vfa_delta |> \n mutate(flux = conc_g_l / delta_time / sludge_volume)\n\nNAs at time 1 are expected because there’s no time before that to calculate a change.",
    "crumbs": [
      "Omics",
      "Kelly's Project",
@@ -2600,7 +2600,7 @@
    "href": "omics/kelly/workshop.html#graph-and-extract-the-reaction-rate---pending",
    "title": "Workflow for VFA analysis",
    "section": "",
-    "text": "Graph and extract the reaction rate assuming a first order chemical/biological reaction and an exponential falloff rate\nI’ve requested clarification: for the nonlinear least squares curve fitting, I assume x is time but I’m not clear what the Y variable is - concentration? or change in concentration? or rate of change of concentration?\nPages made with R (R Core Team 2023), Quarto (Allaire et al. 2022), knitr (Xie 2022), kableExtra (Zhu 2021)",
+    "text": "Graph and extract the reaction rate assuming a first order chemical/biological reaction and an exponential falloff rate\nI’ve requested clarification: for the nonlinear least squares curve fitting, I assume x is time but I’m not clear what the Y variable is - concentration? or change in concentration? or rate of change of concentration?\nAnswer: The non-linear equation describes concentration change with time. Effectively the change in concentration is dependent upon the available concentration, in this example [Hex] represents the concentration of Hexanoic acid, while the T0 and T1 represent time steps.\n[Hex]T1 = [Hex]T0 - [Hex]T0 * k\nOr, 
the amount of Hexanoic acid remaining at T1 (let’s say one hour from the last data point) is equal to the starting concentration ([Hex]T0) minus the concentration dependent metabolism ([Hex]T0 * k).\n📢 We can now plot the observed fluxes (reaction rates) over time.\nI’ve summarised the data to add error bars and means.\n\nvfa_delta_summary <- vfa_delta |> \n group_by(treatment, time_day, vfa) |> \n summarise(mean_flux = mean(flux),\n se_flux = sd(flux)/sqrt(length(flux))) |> \n ungroup()\n\n\nggplot(data = vfa_delta, aes(x = time_day, colour = vfa)) +\n geom_point(aes(y = flux), alpha = 0.6) +\n geom_errorbar(data = vfa_delta_summary, \n aes(ymin = mean_flux - se_flux, \n ymax = mean_flux + se_flux), \n width = 1) +\n geom_errorbar(data = vfa_delta_summary, \n aes(ymin = mean_flux, \n ymax = mean_flux), \n width = 0.8) +\n scale_color_viridis_d(name = NULL) +\n scale_x_continuous(name = \"Time (days)\") +\n scale_y_continuous(name = \"VFA Flux mg/ml/day\") +\n theme_bw() +\n facet_wrap(~treatment) +\n theme(strip.background = element_blank())\n\n\n\n\n\n\n\nOr maybe this is easier to read:\n\nggplot(data = vfa_delta, aes(x = time_day, colour = treatment)) +\n geom_point(aes(y = flux), alpha = 0.6) +\n geom_errorbar(data = vfa_delta_summary, \n aes(ymin = mean_flux - se_flux, \n ymax = mean_flux + se_flux), \n width = 1) +\n geom_errorbar(data = vfa_delta_summary, \n aes(ymin = mean_flux, \n ymax = mean_flux), \n width = 0.8) +\n scale_color_viridis_d(name = NULL, begin = 0.2, end = 0.7) +\n scale_x_continuous(name = \"Time (days)\") +\n scale_y_continuous(name = \"VFA Flux mg/ml/day\") +\n theme_bw() +\n facet_wrap(~ vfa, nrow = 2) +\n theme(strip.background = element_blank(),\n legend.position = \"top\")\n\n\n\n\n\n\n\nI have not yet worked out the best way to plot the modelled reaction rate.\nPages made with R (R Core Team 2023), Quarto (Allaire et al. 
2022), knitr (Xie 2022), kableExtra (Zhu 2021)", "crumbs": [ "Omics", "Kelly's Project", diff --git a/structures/structures.html b/structures/structures.html index f10f732..d963866 100644 --- a/structures/structures.html +++ b/structures/structures.html @@ -217,7 +217,7 @@

Structure Data Analysis for Group Project

Published
-

29 March, 2024

+

2 April, 2024