
Fix reticulate focus #713

Merged 7 commits from fix/reticulate-focus into main on Feb 26, 2025
Conversation

Conversation

@dfalbel (Contributor) commented Feb 14, 2025

When there's already a comm open for reticulate, we need to send an event message to the front-end:

    if let Some(id) = comm_id_guard.deref() {
        // There's a comm_id registered, we just send the focus event
        main.get_comm_manager_tx().send(CommManagerEvent::Message(
            id.clone(),
            CommMsg::Data(json!({
                "method": "focus",
                "params": {
                    "input": input_code
                }
            })),
        ))?;
        return Ok(R_NilValue);
    }

I don't know of a more direct way of doing this. But here, we send a message to the comm, and then forward it to the front-end through the outgoing_tx.

This will help address posit-dev/positron#3865: in that case, the quarto extension will call repl_python(input=<chunk_code>) and we'll forward that code to the front-end so that we can execute it within the reticulate Python session.

@dfalbel dfalbel requested a review from lionel- February 14, 2025 13:48
    let input_code: Option<String> = input.try_into()?;

    let mut comm_id_guard = RETICULATE_COMM_ID.lock().unwrap();
    let input_code: Option<String> = match r_is_null(input.sexp) {
dfalbel (Contributor, Author):
This is not the focus of this PR, but would be nice to sneak it here because reticulate by default will send a NULL.

lionel- (Contributor):

Add a comment about this reticulate behaviour?

@lionel- (Contributor) left a comment:

Since these messages are supposed to be coming from the frontend, it seems a bit roundabout to forward them back to the frontend.

How about creating a global singleton representing the ReticulateService that you could dereference from the C callbacks? Maybe add a send_event() method on that service. And then you can send the event in a more straightforward manner.

What do you think?

In any case we should probably still add handling on these CommMsg from the frontend to at least log them when they come in.
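The suggestion above might look roughly like the following minimal sketch. This is not ark's actual code: `ReticulateService` here is a stand-in with only what's needed to show a global `send_event()`, and std's `mpsc` channel stands in for the crossbeam `Sender<CommMsg>` the comm really uses.

```rust
use std::sync::mpsc::{channel, Sender};
use std::sync::Mutex;

// Hypothetical stand-in for ark's ReticulateService.
struct ReticulateService {
    outgoing_tx: Sender<String>,
}

impl ReticulateService {
    // Sends an event straight out on the comm's outgoing channel.
    fn send_event(&self, method: &str, params: &str) {
        let _ = self
            .outgoing_tx
            .send(format!("{{\"method\":\"{method}\",\"params\":{params}}}"));
    }
}

// Global singleton that C callbacks (running on the R thread) can reach.
static SERVICE: Mutex<Option<ReticulateService>> = Mutex::new(None);

fn main() {
    let (tx, rx) = channel();
    *SERVICE.lock().unwrap() = Some(ReticulateService { outgoing_tx: tx });

    // Later, a callback dereferences the global and sends the event directly:
    if let Some(svc) = SERVICE.lock().unwrap().as_ref() {
        svc.send_event("focus", "{\"input\":\"1 + 1\"}");
    }
    assert_eq!(
        rx.recv().unwrap(),
        "{\"method\":\"focus\",\"params\":{\"input\":\"1 + 1\"}}"
    );
    println!("ok");
}
```

This avoids the round trip through the comm manager: the event goes from the R callback onto the comm's outgoing channel in one step.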


@dfalbel dfalbel force-pushed the fix/reticulate-focus branch from eae54f3 to 52895fc on February 19, 2025 13:52
@dfalbel (Contributor, Author) commented Feb 19, 2025

The message is now sent directly to the front-end with 4695ccf and 52895fc.

It might still not be ideal, because we have to clone the service in order to send it to the execution thread. We could avoid cloning by introducing a separate crossbeam channel like we do for the UI comm:

    let (ui_comm_tx, ui_comm_rx) = crossbeam::channel::unbounded::<UiCommMessage>();

But this seemed like overkill; instead, we clean up the global when the thread is closed:

    // Reset the global service before closing
    let mut comm_guard = RETICULATE_SERVICE.lock().unwrap();
    log::info!("Reticulate Thread closing {:?}", self.comm.comm_id);
    *comm_guard = None;
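The clone-then-cleanup scheme above can be sketched as follows. The names (`SLOT`, the stored value) are placeholders, not ark's: the point is only that the execution thread resets the global slot as it shuts down, so later callers see `None` rather than a stale handle.

```rust
use std::sync::Mutex;
use std::thread;

// Hypothetical stand-in for RETICULATE_SERVICE: a global slot that the
// execution thread clears when it shuts down.
static SLOT: Mutex<Option<String>> = Mutex::new(None);

fn main() {
    // The R thread registers the running service (a comm id stands in here).
    *SLOT.lock().unwrap() = Some("comm-id-1234".to_string());

    let worker = thread::spawn(|| {
        // ... handle comm messages until the comm is closed ...
        // Reset the global before the thread exits, as in the PR.
        *SLOT.lock().unwrap() = None;
    });
    worker.join().unwrap();

    // After shutdown, the slot is empty again.
    assert!(SLOT.lock().unwrap().is_none());
    println!("ok");
}
```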

@dfalbel dfalbel requested a review from lionel- February 19, 2025 14:20
@lionel- (Contributor) left a comment:

Oops, in hindsight it does not really make sense to make the whole service a singleton, since the service runs on another thread. Having that global variable be a clone that could potentially run the service from another thread does not feel right.

How about storing the necessary channels in that global such as comm.outgoing_tx?

Since this will only be accessed from the R thread, it does not need to be in a Mutex. Instead you can store it in a thread_local! whose lazy initialiser panics in other threads. See

    thread_local! {
        // Safety: Set once by `RMain` on initialization
        static DEVICE_CONTEXT: RefCell<DeviceContext> =
            panic!("Must access `DEVICE_CONTEXT` from the R thread");
    }
for an example. It'll have to be stored in a RefCell, which would cause a panic if double-borrowed mutably (we need to be careful), but that's better than a deadlock if double-locked from the same thread.
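A runnable sketch of this pattern, with a hypothetical `COMM_TX` key standing in for the global: `LocalKey::set` initializes the slot without running the panicking lazy initializer, so the owning thread can store a value, while any other thread that merely touches the key trips the panic.

```rust
use std::cell::RefCell;
use std::thread;

thread_local! {
    // Hypothetical stand-in for the global. The lazy initializer panics,
    // so a thread that reads it without first calling `set` blows up.
    static COMM_TX: RefCell<String> =
        panic!("Must access `COMM_TX` from the R thread");
}

fn main() {
    // `set` initializes WITHOUT running the panicking initializer;
    // this is how the owning thread gets a value in.
    COMM_TX.set("outgoing_tx".to_string());
    COMM_TX.with_borrow(|tx| assert_eq!(tx, "outgoing_tx"));

    // Any other thread that touches the key hits the initializer and
    // panics, which `join` surfaces as an error.
    let other = thread::spawn(|| COMM_TX.with_borrow(|_| ()));
    assert!(other.join().is_err());
    println!("ok");
}
```

`LocalKey::set` and `with_borrow` are stable since Rust 1.73; the panic from the spawned thread is caught by `join` and does not abort the process.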

    spawn!(format!("ark-reticulate-{}", comm_id), move || {
        service
        serv.clone()
lionel- (Contributor):
It seems like this clone is unnecessary since serv was moved here?

lionel- (Contributor):
This is a more conventional way of moving a clone:

        spawn!(format!("ark-reticulate-{}", comm_id), {
            let service = service.clone();
            move || {
                service
                    .handle_messages()
                    .or_log_error("Reticulate: Error handling messages");
            }
        });

@dfalbel (Contributor, Author) commented Feb 25, 2025

I had an intermediate approach where I stored the comm socket (and thus the necessary channels) in the global: 4695ccf

Does this sound right to you? Are there problems with storing a clone of the comm socket and calling it from the R thread (potentially at the same time as the service thread)? It seemed fine to me, but I don't know about the socket internals.

@dfalbel (Contributor, Author) commented Feb 25, 2025

I have updated the PR to keep a global reference to the socket. I'm not sure it's accessed only by the R thread, because we need to clean it up when the current reticulate execution thread is closing. So I think we'll still need a mutex.

    @@ -18,21 +19,27 @@ use uuid::Uuid;

    use crate::interface::RMain;

    static RETICULATE_COMM_ID: LazyLock<Mutex<Option<String>>> = LazyLock::new(|| Mutex::new(None));
    static RETICULATE_SOCKET: LazyLock<Mutex<Option<CommSocket>>> = LazyLock::new(|| Mutex::new(None));
lionel- (Contributor):

I'd have made that as minimal as possible so that only the service is able to handle messages, but I guess we can also just be careful.

dfalbel (Contributor, Author):

Ok, I can probably just keep the outgoing_tx channel here.

@dfalbel (Contributor, Author) commented Feb 25, 2025

@lionel- I have updated to keep just the outgoing_tx channel in the global: 3e31307
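The final shape might look roughly like this sketch, under assumptions: only the comm's outgoing channel lives in the global, not the whole service or socket. The names and the `String` payload are placeholders (ark's real channel is a crossbeam `Sender<CommMsg>`; std's `mpsc` stands in here).

```rust
use std::sync::mpsc::{channel, Sender};
use std::sync::Mutex;

// Hypothetical global: just the outgoing channel, guarded by a mutex so
// the execution thread can also clear it on shutdown.
static RETICULATE_OUTGOING_TX: Mutex<Option<Sender<String>>> = Mutex::new(None);

// Called from the R thread to forward a focus event to the front-end.
fn send_focus(input: &str) -> Result<(), String> {
    match RETICULATE_OUTGOING_TX.lock().unwrap().as_ref() {
        Some(tx) => tx
            .send(format!("focus:{input}"))
            .map_err(|e| e.to_string()),
        None => Err("no reticulate comm open".to_string()),
    }
}

fn main() {
    // Before a comm is open, sending fails cleanly.
    assert!(send_focus("1 + 1").is_err());

    // Once the comm opens, its sender is stored and events flow through.
    let (tx, rx) = channel();
    *RETICULATE_OUTGOING_TX.lock().unwrap() = Some(tx);
    send_focus("1 + 1").unwrap();
    assert_eq!(rx.recv().unwrap(), "focus:1 + 1");
    println!("ok");
}
```

Storing only the sender keeps message handling in the service thread while still letting the R thread emit events, which matches the minimal-surface suggestion above.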

jonvanausdeln added a commit to posit-dev/positron that referenced this pull request on Feb 25, 2025:

Until posit-dev/ark#713 is merged, reticulate tests will fail.
@dfalbel dfalbel merged commit 684fd25 into main Feb 26, 2025
6 checks passed
@dfalbel dfalbel deleted the fix/reticulate-focus branch February 26, 2025 09:59