Support perf-cost experiment for local backend #119
I would like to work on this issue. Can you please guide me on which part of the codebase to focus on?
@Kaushik-Iyer Thank you for your interest in the project! You can find the implementation of the perf-cost experiment in the codebase. As you can see, it takes an instance of FaaSSystem (line 42) and uses it later to create functions, deploy them, and invoke them. Examples of FaaSSystem implementations are provided in the codebase.

You can start by running the local function deployment (see docs) - this will create Docker containers running a function's instance. Test it and see how you can invoke functions. Then, try to run the experiment with local deployment - keep the number of functions in a batch low to prevent overloading your system with too many containers. You should be able to experimentally verify which features are missing. The HTTP trigger for local deployment is actually already implemented.
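To make the shape of this interface concrete, here is a minimal, hypothetical sketch of an experiment driving a FaaSSystem-like backend. The class and method names (`LocalSystem`, `create_function`, `invoke`, `run_batch`) are illustrative assumptions, not the actual SeBS API:

```python
# Hypothetical sketch: an experiment drives an abstract FaaSSystem;
# names here are illustrative, not the real SeBS API.
from abc import ABC, abstractmethod

class FaaSSystem(ABC):
    @abstractmethod
    def create_function(self, name: str) -> None: ...

    @abstractmethod
    def invoke(self, name: str, payload: dict) -> dict: ...

class LocalSystem(FaaSSystem):
    """Toy stand-in for a Docker-based local backend."""
    def __init__(self):
        self.functions = {}

    def create_function(self, name):
        # A real backend would build an image and start a container here.
        self.functions[name] = lambda payload: {"echo": payload}

    def invoke(self, name, payload):
        return self.functions[name](payload)

def run_batch(system, name, batch_size=2):
    # Keep batch_size low locally to avoid spawning too many containers.
    system.create_function(name)
    return [system.invoke(name, {"i": i}) for i in range(batch_size)]

results = run_batch(LocalSystem(), "bench")
```

The point of the abstraction is that the experiment code never needs to know whether invocations hit a cloud provider or a local container.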
@Kaushik-Iyer are you working on this issue? Can I take it up?
You can work on it.
@Rajiv2605 Please let me know if you have any questions - happy to help.
@mcopik thanks! I will follow what you have suggested in the previous comment. Please let me know if there is any additional information I need to know.
Sounds good - please ask if you find anything unclear or if the documentation is missing/incomplete. |
@mcopik I am able to run the benchmark locally, but perf-cost is throwing errors.
Can you post your error? @Rajiv2605
rajiv@DESKTOP-B3BCG09:~/serverless-benchmarks$

@veenaamb config.json
@Rajiv2605 That looks like a bug completely unrelated to the local backend.
@Rajiv2605 I pushed to master a fix that should resolve the issue :) I hope that the remaining issues will only be related to the local backend.
@mcopik That issue is resolved but a new issue has come up:

rajiv@DESKTOP-B3BCG09:~/Desktop/serverless-benchmarks$
By looking into the codebase, I have figured out how the internals work. I am trying to find the cause of this error. Please let me know if this is a bug or if I missed a configuration step.
@Rajiv2605 The experiment is not fully supported because certain features are not implemented in the local backend. Here, we should follow the way it works for OpenWhisk - expect the user to deploy the storage separately and provide its configuration.

Looking at the stack trace, it seems that the storage object has not been initialized correctly. Either the config you provided (see instructions for OpenWhisk or for deploying local functions) is incorrect/broken, or we failed to correctly load the data. Please check whether you included the storage configuration in your experiment configuration, as the documentation shows for the OpenWhisk deployment.
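For orientation, a user-deployed storage (e.g., a MinIO instance) would be described in the experiment configuration roughly like the fragment below. The key names and layout here are assumptions for illustration, not the authoritative SeBS schema - consult the project documentation for the exact format:

```json
{
  "deployment": {
    "name": "local",
    "local": {
      "storage": {
        "address": "127.0.0.1:9000",
        "access_key": "your-minio-access-key",
        "secret_key": "your-minio-secret-key"
      }
    }
  }
}
```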
Hi @mcopik, I am able to run the benchmark and experiment (perf-cost) commands without errors now. I faced some issues, which I have listed below together with how I fixed them. After fixing those issues, the benchmark command outputs as expected. Please let me know what you think.

Issue 1:
Solution: Added the following line to the code.
Issue 2: Cache inconsistency
Solution: Manually clear the cache.
@mcopik I ran the perf-cost experiment after fixing the above issues. It is running without any errors now. The function kills the warm containers, and I added the following line to restart them. However, I get the following output when I run the perf-cost experiment command:

Please let me know if I have missed some key step.
@Rajiv2605 Good progress! :-)

Issue 1: yes, likely we do not update the cache at all. In cloud deployments, we always update the cache to store information about deployed functions, e.g., after creating and updating functions - see the cloud backends for the usage of the cache.

Issue 2: yes, the underlying assumption is that the cached resources are not invalidated by the user. This applies both to Docker containers in the local deployment and to resources in the cloud.
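The cache-update pattern described for Issue 1 can be sketched as follows. This is a hedged illustration: the class name `Cache`, the method `add_function`, and the on-disk JSON layout are assumptions, not the actual SeBS implementation:

```python
# Illustrative sketch of caching deployed-function metadata so a later
# run can reuse the deployment instead of redeploying from scratch.
import json
import os
import tempfile

class Cache:
    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def add_function(self, deployment, name, config):
        # Record the function right after it is created or updated.
        self.data.setdefault(deployment, {})[name] = config
        with open(self.path, "w") as f:
            json.dump(self.data, f)

path = os.path.join(tempfile.mkdtemp(), "cache.json")
Cache(path).add_function("local", "bench", {"containers": ["sebs-local-0"]})
reloaded = Cache(path)  # a fresh run sees the previous deployment
```

The key point is that the local backend would need to call such an update hook at the same places the cloud backends already do.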
Issue 3: Your approach looks correct - kill the warm containers. However, you then need to launch new containers to create deployments. The significant difference between cloud and local is that in the cloud, the provider will launch new containers hosting functions when we invoke the function. If there are 10 containers in the cloud and we invoke the function 15 times simultaneously, the cloud provider will create 5 more containers. Locally, we have to do it ourselves. Thus, we should spin up new containers when the existing ones are busy.

To simulate cold invocations for the experiment, I suggest applying the following change: in a Local deployment of a function, change it from having 1 to N underlying Docker containers. When you kill containers, the list of active deployments of a function will decrease. Then, in the HTTP trigger, keep track of which containers are "busy", and allocate new containers if there are not enough of them. Also, AFAIK, you can't have two containers with the same name. Maybe add a unique suffix to the container names.
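A minimal sketch of this busy-tracking scheme, assuming a pool abstraction: `ContainerPool`, `acquire`, and `release` are hypothetical names, and a real implementation would start and stop Docker containers where this sketch only generates names:

```python
# Hedged sketch: a function backed by N containers, where the trigger
# tracks busy instances and spawns uniquely-named ones on demand.
import uuid

class ContainerPool:
    def __init__(self, function_name, initial=1):
        self.function_name = function_name
        self.idle = []
        self.busy = set()
        for _ in range(initial):
            self.idle.append(self._spawn())

    def _spawn(self):
        # A unique suffix avoids name clashes between containers
        # belonging to the same function.
        return f"{self.function_name}-{uuid.uuid4().hex[:8]}"

    def acquire(self):
        # Allocate a fresh container when all existing ones are busy,
        # mimicking what a cloud provider does under concurrent load.
        name = self.idle.pop() if self.idle else self._spawn()
        self.busy.add(name)
        return name

    def release(self, name):
        self.busy.discard(name)
        self.idle.append(name)

pool = ContainerPool("bench", initial=1)
first, second = pool.acquire(), pool.acquire()  # second acquire spawns
```

Killing warm containers then simply shrinks the idle list, and the next invocation pays the cost of spawning a replacement, which is exactly the cold-start behavior the experiment wants to measure.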
@mcopik I have created a PR based on what I have done so far. Please let me know what you think. Regarding issue 1:
@Rajiv2605 Thanks for the PR! I realized this issue is more complex than we initially anticipated. Nevertheless, I think you made good progress toward completion. I think the last piece is moving the complexity of managing multiple function instances from the trigger to the deployment.
Good day, sir. This is a special request: may our esteemed collaborator @Mingtao Sun help us out on this issue, so that we can run the performance-cost experiments locally and build test-case scenarios? Thank you.
@jchigu The local backend is primarily used for local testing and low-level measurements, e.g., with PAPI. For actual FaaS measurements, I recommend using a real FaaS platform deployed at the edge, e.g., OpenWhisk, which we support, or Fission, which has partial support in SeBS (#61).

As I explained to you before, this is open-source research software where features are added through our research work and contributions from other students and researchers. We all work on issues that are top priorities for our own work. It's not fair to expect other contributors to work on projects they're not personally invested in. If you need a specific feature that is not currently being worked on, then you are welcome to implement it yourself - we will be happy to integrate your solution through a pull request. I've included all relevant information in the discussion above.
@mcopik Is it still an issue? I noticed that the issue was raised last year, and I just wanted to check for the most recent update. Additionally, could you provide a recent update on the situation so that I can pick up the task from there?
Thanks for contacting me. I used OpenWhisk for local edge deployments for my project.

Kind regards,
Justin Chigu
@kartikhans Hi Kartik! Apologies for the very late reply - I must have missed your comment. This issue is still a work in progress. For the full deployment of experiments on local platforms, we recommend using a supported open-source platform like OpenWhisk (Knative and Fission are coming soon). For the local platform using Docker containers, we plan to add the basic semantics of executing functions - but not complete scheduling policies. If you are still interested in this problem and would like to help, then please contact me :)
The local backend that deploys functions in Docker containers on the user's machine has been developed for testing purposes. However, our users want to use this backend for invocations and experiments, and we should support this as well. Currently, the local backend does not implement the full functionality of a FaaS platform as defined by the faas.System interface.