For external users who want to use CellBox on their own dataset, what is the best practice for training the model? How many models in total, differing only by the random seed (--working_index), should be trained before the collection of models achieves statistical power? This question follows the Network Interpretation part of the Methods section in the original CellBox paper, where 1000 models were trained for downstream analysis. CellBox and its ODE solver are susceptible to suboptimal weight initialization: an unlucky random seed (--working_index), with all other configs and arguments kept the same, can lead to very different results. So, for new users with a new dataset, should they train only one model, or multiple models with different random seeds, to get the best performance?
Thanks for the question. Users are encouraged to bootstrap their training multiple times and check the training stability. The template config provided was fine-tuned on the dataset used in the paper and can (and should) be changed when applying it to a different dataset.
Another recommended practice is to tune the model training configuration on random partitions first, and then use the leave-one-out scenario as the test.
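One way to "check the training stability" across seeds, as suggested above, is to compare the interaction matrices inferred by models trained with different --working_index values. The sketch below is a hypothetical helper, not part of the CellBox API: it assumes you have already exported each trained model's interaction matrix as a NumPy array, and it scores agreement as the mean pairwise Pearson correlation of the flattened matrices (here demonstrated on simulated matrices).

```python
import numpy as np

def pairwise_agreement(matrices):
    """Mean pairwise Pearson correlation between flattened matrices.

    Values near 1 suggest the seeds converged to similar networks;
    low or negative values suggest unstable training.
    """
    flat = [np.asarray(m).ravel() for m in matrices]
    corrs = []
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            corrs.append(np.corrcoef(flat[i], flat[j])[0, 1])
    return float(np.mean(corrs))

# Simulated example: three "seeds" that recovered similar networks,
# modeled as a shared base matrix plus small per-seed noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(5, 5))
mats = [base + 0.05 * rng.normal(size=(5, 5)) for _ in range(3)]
print(pairwise_agreement(mats))  # high agreement, close to 1
```

In practice you would replace the simulated matrices with the matrices loaded from each seed's results directory, and grow the number of seeds until the agreement score (or any downstream statistic of interest) stabilizes.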