Currently we learn how often a given restraint is violated, but have no information about by how much.
A small violation (in terms of distance) might not be a big deal considering we are doing rigid-body sampling.
An idea would be to analyse the set of models consistent with the largest possible number of restraints (i.e. not the total number of restraints) - this would be the value of N for which we still find acceptable models. For those models, we could then calculate the average distance violation for each restraint. This would tell us by how much, on average, a distance is violated, and would allow us to prioritise restraints for removal - a large violation is more likely to be a real false positive than a small one. A minimal sketch of this ranking idea is shown below.
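As a rough illustration of the idea (not an existing feature), here is a minimal sketch of how the per-restraint average violation could be computed and used to rank restraints for removal. The input array and the helper function name are hypothetical; in practice the per-model violation distances would come from the analysis of the acceptable models.

```python
import numpy as np

def rank_restraints_by_violation(violations, threshold=0.0):
    """Rank restraints by their average distance violation over the selected models.

    violations: array of shape (n_models, n_restraints) with the distance (e.g. in
    Angstrom) by which each restraint is violated in each model, 0.0 if satisfied.
    Both this helper and the input layout are assumptions for illustration only.
    """
    violations = np.asarray(violations, dtype=float)
    # Average violation per restraint over the acceptable models
    mean_violation = violations.mean(axis=0)
    # Fraction of models in which each restraint is violated at all
    violated_fraction = (violations > threshold).mean(axis=0)
    # Largest average violation first: top entries are the most likely false
    # positives and would be the first candidates for removal
    order = np.argsort(mean_violation)[::-1]
    return [(int(i), float(mean_violation[i]), float(violated_fraction[i]))
            for i in order]

# Example with made-up numbers: 3 acceptable models, 4 restraints
viol = [
    [0.0, 2.5, 0.1, 8.0],
    [0.0, 3.0, 0.0, 7.5],
    [0.2, 2.8, 0.0, 9.1],
]
for idx, mean_v, frac in rank_restraints_by_violation(viol):
    print(f"restraint {idx}: mean violation {mean_v:.2f} A, "
          f"violated in {frac:.0%} of models")
```

In this toy example, restraint 3 (large average violation) would be flagged as the most likely false positive, while restraint 2 (tiny, occasional violation) would be kept, which matches the intuition that small violations are tolerable under rigid-body sampling.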