griffin_merge_sites.py hangs and can't finish #17
Comments
Hi, thanks for the question and sorry for the delay in responding. From your description, I'm not sure what is going on. I'm hoping you figured out the solution, but if not, could you post more details? (What did the log files show? Did you re-run calc_cov for all 10,000 TFBS, or just re-run the merge_sites step?)
Hi, I'm commenting on this thread because I think I might have a related issue: I followed the tutorial and was able to complete the analysis using the provided demo file without any problems. However, I am experiencing difficulties when applying the same pipeline to my .bam files. Specifically, the program crashes when launching the Nucleosome Profiling step and generates the following error log:

Using shell: /bin/bash
[Thu Aug 29 12:48:32 2024] parameters: Skipping mappability correction
real 0m5.126s
[Thu Aug 29 12:48:37 2024] Skipping mappability correction
The above exception was the direct cause of the following exception:
The above exception was the direct cause of the following exception:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
real 0m7.892s

I have checked that the .bam files meet the requirements described in the documentation, but I cannot work out what might be causing the error. I was wondering if you could provide me with any pointers or suggestions to resolve this issue. The main difference is that my sequencing bams come from whole-genome Nanopore sequencing data, with an average read length of 165 bp and 8-15 million reads each. I tried several alignment methods and also followed your pipeline entirely. I aligned both trimmed and untrimmed fastqs, removed longer reads that might be present due to the differences between Nanopore and Illumina, and triple-checked my griffin_GC_and_mappability_correction.snakefile, config.yaml, and samples.yaml files. Thank you in advance for your time.
Hi @Ugreek95, thanks for the question and log file. It looks like the bw.values function is fetching a float rather than a list from the coverage bigwig. The problem is coming from this part of the script: Griffin/scripts/griffin_merge_sites.py, lines 349 to 356 (commit b624c7a).
I'm not sure why this is happening, but I have a few thoughts on things you could check:
Did you change anything in the config other than swapping out the demo bam for your bam?
What version of pyBigWig are you using? I tested Griffin using pyBigWig 0.3.17, so a different version could be causing a problem.
You could also try modifying the code to print out some variables (chrom, start, end, strand, and values) when it throws the error. If you tell me those variables, I can try to figure out why 'values' isn't in the same format as the one generated by the demo file.
Thanks,
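As a minimal debugging sketch of the check suggested above: one could wrap the bigwig fetch with a type test before downstream indexing. The helper `normalize_values` and the stand-in inputs below are illustrative assumptions, not Griffin's actual code; pyBigWig's `bw.values(chrom, start, end)` normally returns a list of per-base floats, and the symptom in this issue is a bare float coming back instead.

```python
# Hypothetical debugging sketch (not part of Griffin): normalize the
# result of a pyBigWig-style values() fetch and report anything odd.
def normalize_values(values, chrom=None, start=None, end=None, strand=None):
    """Return a per-base list of floats. If a bare float comes back
    (the symptom reported in this issue), print the query so it can be
    inspected, and wrap it so downstream code still gets a list."""
    if isinstance(values, float):
        # Unexpected: a single float instead of a per-base list.
        print(f"unexpected scalar for {chrom}:{start}-{end} ({strand}): {values}")
        return [values]
    return list(values)

# Stand-in data (no bigwig file needed):
print(normalize_values([0.0, 1.5, 2.0]))        # expected case: a list
print(normalize_values(3.5, "chr1", 100, 101))  # problematic scalar case
```

Printing the offending query coordinates this way would show whether the scalar only appears for particular chromosomes or intervals.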
Hi, the log file didn't report any error; it was just stuck at merging sites for the last gene (AHR in my case).
I used individual commands instead of snakemake. The code works fine for 1k TFBSs but not 10k.
python ${griffin_dir}/scripts/griffin_coverage.py
python ${griffin_dir}/scripts/griffin_merge_sites.py
I have 30x+ bam files. Interestingly, it got stuck for some samples but not all, so I was wondering if it is a memory issue? I gave them a lot of memory, though...
Hi
At the last step of Griffin, I used griffin_merge_sites.py to merge TFBSs for each TF. When I used the top 1k sites, everything worked fine, but when I switched to the top 10k TFBSs, the script seemed to get stuck at the last gene and couldn't finish running. I guess the issue is the multiprocessing? Any solution for that?
Thanks!
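One way to diagnose a silent multiprocessing hang like the one described above is to run the per-TF jobs through a pool with a global timeout and a bounded tasks-per-worker count, so a stuck or leaking worker raises an error instead of blocking forever. The sketch below is an assumed diagnostic wrapper, not Griffin's own code; `merge_one_site` is a placeholder for the real per-TF merge work.

```python
import multiprocessing as mp

def merge_one_site(tf_name):
    # Stand-in for the per-TF merge work done by griffin_merge_sites.py;
    # this placeholder just uppercases the TF name.
    return tf_name.upper()

def run_with_timeout(tf_names, processes=2, timeout=60):
    """Run the jobs with a global timeout. maxtasksperchild recycles
    workers periodically, which can help if per-task memory grows;
    the timeout turns a silent hang into multiprocessing.TimeoutError."""
    with mp.Pool(processes=processes, maxtasksperchild=50) as pool:
        async_result = pool.map_async(merge_one_site, tf_names)
        return async_result.get(timeout=timeout)

if __name__ == "__main__":
    print(run_with_timeout(["ahr", "ar", "ctcf"], timeout=30))
```

If the timeout fires only on the 10k-site runs, that points at one oversized or stuck task rather than the pool machinery itself, and lowering `maxtasksperchild` would also cap per-worker memory growth.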