@Meyenhofer
I played around with both MI nodes and I have a few theory questions (I hope you can remember):
- when would I need to change the method, and what does the bias result value tell me?
- what does the sigma result value tell me?
- when would I change the log value?
And one question concerning the 'Group MI' node:
I had expected that the MI between two groups gives an idea of how similar their histograms are. I created a dummy dataset: 3000 entries for 'library' with Gaussian data (m = 0, sd = 1) and 100 entries for 'control' with the same Gaussian parameters. I expected a high MI, since both histograms should be very similar (shape-wise). Instead, I get a low value. I repeated the test with 3000 entries for each group, and the value is still low. But if I sort the values within each group from low to high, I get a high MI. From a calculation point of view this makes sense (as it is based on the joint histogram). But I don't see how this could be useful for finding out whether one group carries information the reference does not (as mentioned in the help of that node).
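To make the effect reproducible, here is a minimal numpy sketch of a joint-histogram MI estimate (this is my own toy implementation, not the node's actual code; bin count and sample sizes are arbitrary choices). Two independently drawn Gaussian samples give near-zero MI because the row pairing is random, even though the marginal histograms match; sorting both columns induces a monotone pairing and the MI jumps:

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Estimate MI (in bits) between two equal-length samples
    from their joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                    # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0                                 # skip empty cells, avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 3000)   # 'library': same distribution ...
b = rng.normal(0.0, 1.0, 3000)   # 'control': ... but drawn independently

mi_raw = mutual_information(a, b)                        # near zero
mi_sorted = mutual_information(np.sort(a), np.sort(b))   # large
```

So the pairwise MI measures how well one column predicts the other row by row, not how similar the two marginal histograms are; identical distributions with random pairing still yield MI close to zero (plus a small positive estimation bias from the finite bin count).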
Any help to understand these nodes is appreciated :-)