Feedback from users #71
This is a very interesting project!
Thanks for the input @chrbertsch. It's an interesting idea. While manageable, I think wrapping the clients as an FMU is quite hard. Since the FMU is static, it must be compiled against a target FMU running on the server. I guess one could modify the XML to take the IP address as an input, so that it would not be tied to a particular host machine. Or perhaps we could supply an FMU without a modelDescription.xml, to be populated by the users themselves.
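A hedged sketch of what such a generic proxy FMU's modelDescription.xml might look like, assuming hypothetical variable names `proxy.host` and `proxy.port` (these are not part of the actual project; the remaining variables would have to mirror the remote FMU's own description):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<fmiModelDescription fmiVersion="2.0" modelName="FmuProxyClient" guid="placeholder-guid">
  <CoSimulation modelIdentifier="fmu_proxy_client"/>
  <ModelVariables>
    <!-- endpoint of the remote server hosting the actual FMU -->
    <ScalarVariable name="proxy.host" valueReference="0" causality="parameter" variability="fixed">
      <String start="127.0.0.1"/>
    </ScalarVariable>
    <ScalarVariable name="proxy.port" valueReference="1" causality="parameter" variability="fixed">
      <Integer start="8080"/>
    </ScalarVariable>
    <!-- ...variables mirrored from the remote FMU's modelDescription.xml... -->
  </ModelVariables>
</fmiModelDescription>
```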
I would be interested to know the performance overhead of calling an FMU using FMU-Proxy in comparison to calling the FMU "natively" using FMI4cpp in a C++ app. I'm thinking of implementing a tool that could simulate connected ME/CS FMUs regardless of architecture and OS. I guess having a system of connected ME FMUs which are called via gRPC does incur a noticeable performance overhead? Thank you,
Wow, that was quick, thank you! :) Is the number in the "no. calls" column the 'absolute' number of function calls (say, doStep, getValues, setValues, ...) or just the number of steps (doStep calls)? Actually, I thought (and hoped) that gRPC would do better... did you try the FlatBuffers option? I did notice that it is faster to have a few larger messages than many small ones, though. I'm not sure whether that is due to converting to protobuf or to establishing the actual call. Do you have any experience with the streaming feature of gRPC? One could think of a bidirectional stream for controlling the simulation (sending doStep, getting values). That way, you reduce the number of separate calls over the wire to one. On the other hand, you lose the "rpc" functionality of having dedicated methods.
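A minimal Python sketch of the bidirectional-stream idea, with no real gRPC involved. The `step_stream` server function and the message dicts are stand-ins I made up for illustration; in a real service they would be generated protobuf messages and the stream would cross the wire exactly once per simulation run:

```python
# Sketch: one bidirectional stream replaces N separate doStep RPCs.
# The "server" consumes a request iterator and yields one response per
# request, so the whole run costs a single call setup instead of N.

def step_stream(requests):
    """Server side: handle a stream of doStep requests."""
    t = 0.0
    for req in requests:
        t += req["step_size"]
        # a real server would call the FMU's doStep and read its outputs here
        yield {"time": t, "outputs": {"x": t * 2.0}}

def run_simulation(n_steps, step_size):
    """Client side: drive the whole run through one streamed call."""
    requests = ({"step_size": step_size} for _ in range(n_steps))
    return list(step_stream(requests))

results = run_simulation(n_steps=5, step_size=0.1)
print(len(results), round(results[-1]["time"], 6))  # → 5 0.5
```

As noted above, the trade-off is that a single generic stream loses the self-documenting, per-method "rpc" interface.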
no. calls is just the number of doStep calls.
I have tried the stream API before, but I found no use for it in this project. E.g., the list of ScalarVariables could be a stream, but I reckon using a stream is only beneficial when the amount of data to receive is huge.
But were there calls to set/get during the benchmarking, or was it just doStep? I agree that for getting the list of variables etc. it's much better to use one single large message. But a stream could be useful when there are multiple messages that are not yet available at RPC call time, such as logging: in a gRPC-based application I am working on, I use a server stream for logging (the stream is started on instantiate and runs in a separate thread until freeInstance; when the logging callback is invoked, a new message is pushed to the stream). Thinking of that, this could also be useful for communication during simulation: start the stream at the start of the simulation, send doStep requests / receive data while it runs, and stop it after the simulation has finished. But I guess for ModelExchange this still gets very messy...
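A small Python sketch of the logging-stream pattern described above, with `queue.Queue` standing in for the gRPC server stream (all names here are illustrative, not the project's actual API): the logging callback pushes messages onto a queue, and a reader thread drains the "stream" from instantiate until freeInstance.

```python
import queue
import threading

_SENTINEL = object()  # pushed on freeInstance to end the stream

log_queue = queue.Queue()

def logging_callback(level, message):
    """FMI logging callback: push each log message onto the stream's queue."""
    log_queue.put((level, message))

def log_stream():
    """Server-stream body: yield messages until the instance is freed."""
    while True:
        item = log_queue.get()
        if item is _SENTINEL:
            return
        yield item

received = []

def client_reader():
    # Runs in its own thread for the lifetime of the instance,
    # mirroring the stream that is started on instantiate.
    for level, message in log_stream():
        received.append((level, message))

reader = threading.Thread(target=client_reader)
reader.start()

logging_callback("info", "instantiated")
logging_callback("warning", "step size reduced")
log_queue.put(_SENTINEL)  # freeInstance: terminate the stream
reader.join()

print(received)  # → [('info', 'instantiated'), ('warning', 'step size reduced')]
```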
Using the stream for logging sounds like a good idea.
Hi,
I would very much like to hear from you, the person reading this.
Why are you interested in this project, have you tried it?
What is unclear, what can be improved?
Note that it is also possible to chat on Gitter!