Hello,

In the previous stage, I completed processing in WarpEM. After converting the resulting data into the format required by RELION, I successfully ran the InitialModel step in RELION 4.0. However, I ran into problems during the Refine3D step. Could you please help me resolve them? Thank you very much.
mpirun -n 3 $(which relion_refine_mpi) --o Refine3D/job001/run --auto_refine --split_random_halves --ios relion_pix10_2d_optimisation_set.star --ref initial.mrc --firstiter_cc --trust_ref_size --ini_high 60 --dont_combine_weights_via_disc --pool 30 --pad 2 --ctf --particle_diameter 230 --flatten_solvent --zero_mask --oversampling 1 --healpix_order 2 --auto_local_healpix_order 3 --offset_range 5 --offset_step 2 --sym C6 --low_resol_join_halves 40 --norm --scale --j 2 --gpu "0" --pipeline_control Refine3D/job001/
failed to open /dev/dri/renderD128: Permission denied
failed to open /dev/dri/renderD129: Permission denied
failed to open /dev/dri/renderD130: Permission denied
failed to open /dev/dri/renderD131: Permission denied
failed to open /dev/dri/renderD128: Permission denied
failed to open /dev/dri/renderD129: Permission denied
failed to open /dev/dri/renderD130: Permission denied
failed to open /dev/dri/renderD131: Permission denied
RELION version: 4.0.2
Precision: BASE=double
WARNING: No reference mask filename was found in file relion_pix10_2d_optimisation_set.star. Continuing without mask.
WARNING: No reference mask filename was found in file relion_pix10_2d_optimisation_set.star. Continuing without mask.
WARNING: No reference mask filename was found in file relion_pix10_2d_optimisation_set.star. Continuing without mask.
=== RELION MPI setup ===
Number of MPI processes = 3
Number of threads per MPI process = 2
Total number of threads therefore = 6
Leader (0) runs on host = cryoema100
Follower 1 runs on host = cryoema100
Follower 2 runs on host = cryoema100
=================
uniqueHost cryoema100 has 2 ranks.
Follower 1 will distribute threads over devices 0
Thread 0 on follower 1 mapped to device 0
Thread 1 on follower 1 mapped to device 0
Follower 2 will distribute threads over devices 0
Thread 0 on follower 2 mapped to device 0
Thread 1 on follower 2 mapped to device 0
Device 0 on cryoema100 is split between 2 followers
WARNING: will ignore (but maintain) values for the unknown label: rlnTomoVisibleFrames
WARNING: will ignore (but maintain) values for the unknown label: rlnTomoVisibleFrames
WARNING: will ignore (but maintain) values for the unknown label: rlnTomoVisibleFrames
Running CPU instructions in double precision.
WARNING: will ignore (but maintain) values for the unknown label: rlnTomoVisibleFrames
Estimating initial noise spectra from at most 5000 particles
000/??? sec ~~(,_,"> [oo]
Number of images = 41 Size(Y,X): 40x40 i=[-20..19] j=[-20..19]
ERROR:
Array_by_array: different shapes (+)
ERROR:
Array_by_array: different shapes (+)
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
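For context while debugging: RELION raises "Array_by_array: different shapes" when two arrays of different dimensions are combined element-wise, and in Refine3D this often points to a mismatch between the reference map and the particle images (the log above reports 40x40 particles). Below is a minimal sketch, not part of the run above, for checking the box and pixel size of the reference; it assumes the third-party mrcfile package (pip install mrcfile) and uses initial.mrc from the --ref argument:

    # Check the dimensions of the reference map passed via --ref.
    # Assumes the third-party `mrcfile` package; not part of RELION itself.
    import mrcfile

    with mrcfile.open("initial.mrc", permissive=True) as mrc:
        print("reference box (Z, Y, X):", mrc.data.shape)
        print("reference voxel size (A):", mrc.voxel_size)

    # The log reports particles of Size(Y,X) = 40x40, so the reference
    # should be a 40x40x40 cube at the matching pixel size (the STAR file
    # name suggests 10 A/px, but that is an inference, not a given).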