I am analysing a CRISPR screen dataset with CB2 and noticed that the software is sensitive to the total raw read count per sample. For example, if I take the Evers_CRISPRn_RT112 example dataset and multiply the input read counts by a factor of 10, the vhat estimate (derived from the fit_ab function) changes for each guide, and the effect is non-uniform across guides. I have attached an R script and an output plot to illustrate this.
Could you explain why this happens? My expectation was that multiplying everything by 10 would not affect the variance estimation, since the data are normalised for analysis.
cb2_raw_read_count_vhat-1.R.zip
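For reference, the comparison described above can be sketched roughly as follows. This is a minimal sketch, not the attached script: the function and field names (`measure_sgrna_stats`, `$count`, `$design`, the group labels, and the `vhat_a` column) follow my reading of the CB2 vignette and may need adjusting for your CB2 version.

```r
# Sketch (assumptions noted above): compare per-guide vhat estimates
# computed from the original counts vs. the same counts scaled by 10.
library(CB2)
data(Evers_CRISPRn_RT112)

cnt  <- Evers_CRISPRn_RT112$count   # raw sgRNA count matrix (assumed field)
dsgn <- Evers_CRISPRn_RT112$design  # sample annotation (assumed field)

# Group labels "before"/"after" are assumed from the example dataset's design.
stats_1x  <- measure_sgrna_stats(cnt,      dsgn, "before", "after")
stats_10x <- measure_sgrna_stats(cnt * 10, dsgn, "before", "after")

# If scaling the raw counts were neutral, every point would fall on y = x.
plot(stats_1x$vhat_a, stats_10x$vhat_a, log = "xy",
     xlab = "vhat (original counts)", ylab = "vhat (counts * 10)")
abline(0, 1)
```

In this sketch, guides whose points fall off the identity line are the ones whose variance estimates are affected non-uniformly by the scaling.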