Extract specific messages for inspection during inference #162
Comments
@wmkouw let's put it into our feature list. There is one nuance though: ReactiveMP.jl materializes (read: computes) messages at the moment of computing individual marginals. As an example, suppose a variable … My proposal: would it be useful to have an option to extend the marginal and attach to its structure the messages that have been used to compute it? For example, you could have a marginal for …
@bvdmitri In general I think that would be a useful feature. I have been doing similar operations manually in order to debug nodes/agents, and having that be straightforward would be very handy indeed!
Sorry for the late response; I was on vacation. There are two reasons for inspecting messages: 1) debugging and 2) education. Sepideh showed me an … As for debugging, it sounds like your proposal (a user switch to append constituent messages …) would work. If you want, Sepideh, Tim and I can try to implement this feature?
This issue may be solved by the addons in issue #172 and PR #215. Basically (once completed) it should be possible to save extra information in messages and marginals. This information could be, for example, the messages used to compute a marginal. Some kind of history object could be implemented with these addons that shows the full computation history of the messages/marginals. Would this solve this issue @wmkouw, @abpolym, @Sepideh-Adamiat?
Possibly. If I understand you correctly, you want to replace the scale factors in … But I would argue for making this user-friendly, as in, there would be a keyword argument in …
In #215 we introduce so-called addons:

```julia
struct Message{D, A}
    distribution::D
    addons::A
end
```

In this setup addons are enabled via the `@model` macro, e.g.

```julia
@model [ addons = ( AddonLogScale(), ) ] function model(...)
    # model here
end
```

For the memory idea, this could become something like

```julia
@model [ addons = ( AddonMemory(KeepLast()), ) ] function model(...)
    # model here
end
```

Memory is then easily accessible from the resulting marginals that you obtain at the end of the inference procedure.
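To make the structure above concrete, here is a minimal self-contained sketch (plain Julia, mirroring the `Message` struct from #215 but standing apart from ReactiveMP's actual internals) of how a distribution and its addons travel together:

```julia
# Standalone illustration of the addon-carrying message wrapper described
# above; this is NOT ReactiveMP's actual implementation.
struct AddonLogScale
    logscale::Float64
end

struct Message{D, A}
    distribution::D
    addons::A
end

# A message carries its distribution plus any number of addons in a tuple.
msg = Message((mean = 0.0, var = 1.0), (AddonLogScale(-0.92),))

msg.distribution            # the payload (here a stand-in named tuple)
first(msg.addons).logscale  # extra information travels alongside it
```

The point of the tuple of addons is that any number of orthogonal pieces of metadata (log scale factors, computation memory, etc.) can ride along with the distribution without changing the message-passing code itself.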
What's the status of this issue?
@wmkouw Do the memory addons provide the functionality you needed?
I don't know. @abpolym and @Sepideh-Adamiat would look into this but we haven't discussed it recently. I'll check with them when I get back to work.
Should be fixed in #240 |
This is wonderful! Works entirely as advertised. I tried

```julia
@model function regression()
    x = datavar(Float64)
    y = datavar(Float64)
    w ~ Normal(mean = 1.0, var = 1.0)
    y ~ Normal(mean = x * w, var = 1.0)
end

results = inference(
    model        = regression(),
    data         = (x = 0.5, y = 0.0),
    returnvars   = (w = KeepLast(),),
    initmessages = (w = NormalMeanVariance(0.0, 100.0),),
    free_energy  = true,
    addons       = (AddonMemory(),),
)
```

and got

```
Marginal(NormalWeightedMeanPrecision{Float64}(xi=1.0, w=1.25)) with (AddonMemory(Product memory:
  Message mapping memory:
    At the node: NormalMeanVariance
    Towards interface: Val{:out}
    With local constraint: Marginalisation()
    With addons: (AddonMemory(nothing),)
    With input marginals on Val{(:μ, :v)} edges: (PointMass{Float64}(1.0), PointMass{Float64}(1.0))
    With the result: NormalMeanVariance{Float64}(μ=1.0, v=1.0)
  Message mapping memory:
    At the node: *
    Towards interface: Val{:in}
    With local constraint: Marginalisation()
    With meta: TinyCorrection()
    With addons: (AddonMemory(nothing),)
    With input messages on Val{(:out, :A)} edges: (NormalMeanVariance{Float64}(μ=0.0, v=1.0), PointMass{Float64}(0.5))
    With the result: NormalWeightedMeanPrecision{Float64}(xi=0.0, w=0.25)
),)
```

That's a lot of information, which will be very useful, I think. We'll need an example of how to use this for debugging in the documentation. I also noticed that …
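For readers landing on this thread later, a hedged sketch of how the memory printed above might be pulled out of the inference result for programmatic inspection. The `posteriors` field matches the result object shown in this thread, but the accessor name `getaddons` is an assumption about ReactiveMP's API, not something confirmed here:

```julia
# Assuming `results` is the object returned by `inference(...)` above:
# the marginal for :w carries the AddonMemory printed in the output.
marginal = results.posteriors[:w]

# Hypothetical accessor; check the ReactiveMP/RxInfer docs for the
# actual function that exposes a marginal's addons.
memory = ReactiveMP.getaddons(marginal)
```

If such an accessor exists, the debugging workflow becomes: run `inference` once with `addons = (AddonMemory(),)`, then walk the returned memory object instead of re-running with print statements.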
Sounds good! Looking forward to the example. Perhaps we can even highlight this as a separate header in the docs of …
Yes! We can start something like a "Debugging" section in the documentation, where addons will be one part.
Sepideh and I will make a draft for a debugging section. We will aim for a PR in late March.
Perhaps this also relates to #60, asking for a "sharp bits" section.
Status update: we're working in branch rmp#162 of RxInfer to add a Debugging section to the docs there. If it should be part of the ReactiveMP docs instead, let us know.
@bvdmitri what do you think? I am fine with just having it in RxInfer.
I think it's fine. User-friendly high-level guides/tutorials should live in RxInfer; ReactiveMP should only provide the API description.
The current …
Closing this now due to #326 and RxInfer.jl#123
Do we have a procedure for inspecting specific messages during inference?

In ForneyLab, the

```julia
step!(data, marginals, messages)
```

functions accepted an empty `Array{Message}()` that would be populated during inference. That array could be inspected and visualized, which was useful for debugging and education purposes (see for instance here). I had a quick look through ReactiveMP's API (e.g., https://biaslab.github.io/ReactiveMP.jl/stable/lib/message/) but couldn't find a method for accessing specific messages sent by nodes. I know you can expose individual nodes through

```julia
node, variable ~ Node(args...)
```

(GraphPPL API), but does that also let you access messages belonging to that node? I imagine that if we have a reference to a specific message, then we can define an observable that subscribes to it and keeps the messages computed during inference for later inspection.
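The subscribe-and-collect idea in the last paragraph can be illustrated with plain Rocket.jl, the reactive library underlying ReactiveMP. This is a sketch of the pattern only: `from`, `keep`, and `subscribe!` are Rocket.jl primitives, but wiring the `keep` actor to an actual node's message stream would depend on ReactiveMP internals not shown in this thread, so the stream below is a stand-in:

```julia
using Rocket

# Stand-in for the stream of messages a node would emit during inference;
# in real use this would be the node's message observable.
message_stream = from(["μ(x)=N(0,1)", "μ(x)=N(0.4,0.8)", "μ(x)=N(0.5,0.8)"])

# `keep` is a Rocket.jl actor that stores every value it receives.
history = keep(String)
subscribe!(message_stream, history)

history.values  # all messages seen so far, available for later inspection
```

This is essentially the reactive analogue of ForneyLab's populated `Array{Message}()`: instead of passing a mutable array into `step!`, you attach a collecting actor to the message observable before inference runs.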