
Extract specific messages for inspection during inference #162

Closed
wmkouw opened this issue Jul 8, 2022 · 19 comments
Assignees: bartvanerp
Labels: documentation, enhancement, question

Comments

@wmkouw
Member

wmkouw commented Jul 8, 2022

Do we have a procedure for inspecting specific messages during inference?

In ForneyLab, the step!(data, marginals, messages) functions accepted an empty Array{Message}() that was populated during inference. That array could be inspected and visualized, which was useful for debugging and educational purposes (see for instance here).

I had a quick look through ReactiveMP's API (e.g., https://biaslab.github.io/ReactiveMP.jl/stable/lib/message/) but couldn't find a method for accessing specific messages sent by nodes. I know you can expose individual nodes through node, variable ~ Node(args...) (GraphPPL API), but does that also let you access messages belonging to that node?

I imagine that if we have a reference to a specific message, then we can define an observable that subscribes to it and keeps the messages computed during inference for later inspection.
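For instance, a minimal sketch using Rocket.jl (the reactive library underlying ReactiveMP.jl); the message_stream observable here is hypothetical and not part of the current public API:

using Rocket

message_history = []
subscription = subscribe!(message_stream, lambda(
    on_next = (msg) -> push!(message_history, msg)  # store each message as it is computed
))
# ... run inference ...
unsubscribe!(subscription)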

@wmkouw added the documentation, enhancement, and question labels on Jul 8, 2022
@bvdmitri
Member

bvdmitri commented Jul 8, 2022

@wmkouw let's put it on our feature list. There is one nuance though: ReactiveMP.jl materializes (read: computes) messages at the moment of computing individual marginals. As an example, suppose a variable x is connected to two nodes f_1 and f_2. At the moment when ReactiveMP.jl "sends" a message from f_1, it in fact sends a "notification" that a message could be computed. The actual computation happens when ReactiveMP.jl "sends" a message from f_2. At that moment the inference backend "stops", computes all relevant messages, and calculates the associated marginal. In principle, there is an undefined period of time between the moment of the "notification" and the actual message computation. (It is possible to force ReactiveMP.jl to compute a message in place, with the as_message function.)

My proposal: would it be useful to have an option to extend a marginal and attach to its structure the messages that were used to compute it? For example, you could have a marginal for q(x_1) and access to the messages f_1_x1 and f_2_x1 that were used to compute it.
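Schematically, the proposal amounts to something like the following (a hypothetical sketch, not an existing ReactiveMP.jl type):

struct MarginalWithMessages{Q, M}
    marginal :: Q   # the computed marginal, e.g. q(x_1)
    messages :: M   # the messages used to compute it, e.g. (f_1_x1, f_2_x1)
end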

@MagnusKoudahl
Contributor

@bvdmitri In general I think that would be a useful feature. I have been doing similar operations manually to debug nodes/agents, and having that be straightforward would be very handy indeed!

@wmkouw
Member Author

wmkouw commented Aug 11, 2022

Sorry for the late response; I was on vacation.

There are two reasons for inspecting messages: 1) debugging and 2) education. Sepideh showed me the @call_rule macro to force RMP to compute a message or marginal. I think that's perfectly fine for educational purposes: last year we compared manual message calculations with ForneyLab's message computations, and the macro will allow me to do the same with RMP this year.
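For reference, a minimal sketch of forcing a single message computation with @call_rule (the rule signature shown here, with q_-prefixed point-mass inputs, is an assumption; see the ReactiveMP.jl documentation for the exact interface):

using ReactiveMP

# Force computation of the outbound message on the :out interface of a
# NormalMeanVariance node, given point-mass inputs on its mean and variance edges.
msg = @call_rule NormalMeanVariance(:out, Marginalisation) (q_μ = PointMass(1.0), q_v = PointMass(1.0))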

As for debugging, it sounds like your proposal (a user switch to append constituent messages f_1_x1 and f_2_x1 to the marginal object q(x_1)) would meet our needs. I suggest we try it and if it turns out to be insufficient, then we can think of other solutions.

If you want, Sepideh, Tim, and I could try to implement this feature.

@bartvanerp
Member

This issue may be solved by the addons from issue #172 and PR #215. Basically, once completed, it should be possible to save extra information in messages and marginals. This information could, for example, be the messages used to compute a marginal. Some kind of history object, built on these addons, could show the full computation history of the messages/marginals. Would this solve the issue, @wmkouw, @abpolym, @Sepideh-Adamiat?

@wmkouw
Member Author

wmkouw commented Oct 21, 2022

Possibly. If I understand you correctly, you want to replace the scale factors in ScaledMessage with the history object? If that object is exposed to the user, then I can imagine it would indeed let us inspect the messages leading up to a bug / unexpected behaviour.

But I would argue for making this user-friendly, as in, there would be a keyword argument in inference that automatically creates that history object (KeepEach() I imagine?) in place of the scale factors and returns it as an entry in results. What is the intended protocol for ScaledMessage?

@bartvanerp
Member

bartvanerp commented Oct 21, 2022

In #215 we introduce so-called addons. These carry extra pieces of information that are propagated along with the message. A message is now (approximately) defined as:

struct Message{D,A}
    distribution::D
    addons::A
end

In this addons field we can pass extra pieces of information, which could potentially also be used to keep a memory of the preceding messages/marginals. For the user this won't be a burden, as it is just as easy as specifying MeanField. An example for scale factors:

@model [ addons = ( AddonLogScale(), ) ] function model(...)
    # model here
end

For the memory idea, this could become something like

@model [ addons = ( AddonMemory(KeepLast()), ) ] function model(...)
    # model here
end

Memory is then easily accessible from the resulting marginals that you obtain at the end of the inference procedure.
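Retrieving it could then look roughly like this (a usage sketch: my_model, observations, and the getaddons accessor name are assumptions, not settled API):

results = inference(
    model = my_model(),                  # hypothetical model
    data = (y = observations,),          # hypothetical data
    addons = (AddonMemory(KeepLast()),),
)
marginal = results.posteriors[:x]
memory = ReactiveMP.getaddons(marginal)  # inspect the attached computation history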

@bartvanerp self-assigned this on Nov 9, 2022
@albertpod
Member

What's the status of this issue?

@bartvanerp
Member

@wmkouw Do the memory addons provide the functionality you needed?

@wmkouw
Member Author

wmkouw commented Jan 10, 2023

I don't know yet. @abpolym and @Sepideh-Adamiat were going to look into this, but we haven't discussed it recently. I'll check with them when I get back to work.

@bvdmitri
Member

bvdmitri commented Feb 7, 2023

Should be fixed in #240

@wmkouw
Member Author

wmkouw commented Feb 20, 2023

This is wonderful! Works entirely as advertised. I tried

@model function regression()
           x = datavar(Float64)
           y = datavar(Float64)
           w ~ Normal(mean = 1.0, var = 1.0)
           y ~ Normal(mean = x*w, var = 1.0)
end

results = inference(
               model = regression(),
               data = (x = 0.5, y = 0.0),
               returnvars = (w = KeepLast(),),
               initmessages = (w = NormalMeanVariance(0.0, 100.0),),
               free_energy = true,
               addons = (AddonMemory(),),
           )

and got

Marginal(NormalWeightedMeanPrecision{Float64}(xi=1.0, w=1.25)) with (AddonMemory(Product memory:
 Message mapping memory:
    At the node: NormalMeanVariance
    Towards interface: Val{:out}
    With local constraint: Marginalisation()
    With addons: (AddonMemory(nothing),)
    With input marginals on Val{(:μ, :v)} edges: (PointMass{Float64}(1.0), PointMass{Float64}(1.0))
    With the result: NormalMeanVariance{Float64}(μ=1.0, v=1.0)
 Message mapping memory:
    At the node: *
    Towards interface: Val{:in}
    With local constraint: Marginalisation()
    With meta: TinyCorrection()
    With addons: (AddonMemory(nothing),)
    With input messages on Val{(:out, :A)} edges: (NormalMeanVariance{Float64}(μ=0.0, v=1.0), PointMass{Float64}(0.5))
    With the result: NormalWeightedMeanPrecision{Float64}(xi=0.0, w=0.25)
),)

That's a lot of information, which will be very useful, I think.

We'll need an example in the documentation of how to use this for debugging. I also noticed that AddonMemory is not covered in test_addons.jl. Tim, Sepideh, and I could pick this up; that would give us a chance to become familiar with it.

@bartvanerp
Member

Sounds good! Looking forward to the example. Perhaps we can even highlight this under a separate header in the RxInfer.jl docs, as a lot of people are looking for this feature. @bvdmitri, what do you think?

@bvdmitri
Member

Yes! We can start something like a "Debugging" section in the documentation, where addons will be one part.

@albertpod moved this to 🤔 Ideas in RxInfer on Feb 21, 2023
@albertpod moved this from 🤔 Ideas to 👉 Assigned in RxInfer on Feb 21, 2023
@wmkouw
Member Author

wmkouw commented Feb 26, 2023

Sepideh and I will make a draft for a debugging section. We will aim for a PR in late March.

@bartvanerp
Member

Perhaps this also relates to #60, asking for a "sharp bits" section.

@albertpod moved this from 👉 Assigned to 📝 In progress in RxInfer on Feb 28, 2023
@wmkouw
Member Author

wmkouw commented Mar 8, 2023

Status update: we're working in branch rmp#162 of RxInfer to add a Debugging section to the docs there. If it should be part of the ReactiveMP docs instead, let us know.

@bartvanerp
Member

@bvdmitri, what do you think? I am fine with just having it in RxInfer.

@bvdmitri
Member

bvdmitri commented Mar 9, 2023

I think it's fine. User-friendly, high-level guides/tutorials should live in RxInfer; ReactiveMP should only give the API description.

@Sepideh-Adamiat moved this from 📝 In progress to ❓ Under review in RxInfer on May 2, 2023
@wmkouw
Member Author

wmkouw commented Jun 6, 2023

The current Debugging.md section is just a start. I propose we add explanations to it when we develop new ways to debug RxInfer/ReactiveMP code.

Closing this now due to #326 and RxInfer.jl#123

@wmkouw closed this as completed on Jun 6, 2023
@github-project-automation bot moved this from ❓ Under review to ✅ Done in RxInfer on Jun 6, 2023