
Support perf-cost experiment for local backend #119

Open
mcopik opened this issue Feb 20, 2023 · 29 comments
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@mcopik
Collaborator

mcopik commented Feb 20, 2023

The local backend that deploys functions in Docker containers on the user machine has been developed for testing purposes. However, our users want to use this backend for invocations and experiments, and we should also support this. Currently, the local backend does not have the full functionality of a FaaS platform.

  • Implement missing parts of the faas.System interface.
  • Add HTTP triggers.
  • Support cold invocations with container restart.
  • Add limits on the number of parallel containers to avoid too much CPU and memory overhead.
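
For orientation, the faas.System surface that the tasks above touch might be sketched as follows. This is an illustrative sketch only - the method names are drawn from the tracebacks later in this thread (create_function, get_storage, enforce_cold_start), and the signatures are assumptions, not SeBS's actual definitions:

```python
from abc import ABC, abstractmethod
from typing import List, Optional


class System(ABC):
    """Illustrative sketch of the backend interface the local
    deployment must complete; not the verbatim SeBS definition."""

    @abstractmethod
    def create_function(self, code_package, func_name: str):
        ...

    @abstractmethod
    def get_storage(self, replace_existing: bool = False):
        ...

    @abstractmethod
    def enforce_cold_start(self, functions: List, code_package) -> None:
        ...


class Local(System):
    """Minimal stand-in showing that a backend must implement all methods."""

    def create_function(self, code_package, func_name: str):
        return func_name  # a real backend would start a Docker container here

    def get_storage(self, replace_existing: bool = False) -> Optional[object]:
        return None  # a real backend would return a storage client

    def enforce_cold_start(self, functions, code_package) -> None:
        pass  # a real backend would kill warm containers
```

A backend that leaves any of the abstract methods unimplemented cannot be instantiated, which makes the missing pieces easy to spot.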
@Kaushik-Iyer

I would like to work on this issue. Can you please guide me on which part of the codebase to focus on?

@mcopik
Collaborator Author

mcopik commented Feb 28, 2023

@Kaushik-Iyer Thank you for your interest in the project!

You can find the perf-cost experiment here. The experiment's goal is to invoke a function N times, in batches, with cold and warm starts. You can find an example of using it with cloud backends in the documentation.

As you can see, it takes an instance of FaaSSystem (line 42) and uses it later to create, deploy, and invoke functions. Examples of FaaSSystem are provided in sebs.aws, sebs.azure, sebs.gcp, and sebs.openwhisk. We have a module sebs.local, but its implementation is not as complete as the other systems'.

You can start by running the local function deployment (see docs) - this will create Docker containers running a function's instance. Test it and see how you can invoke functions. Then, try to run the experiment with the local deployment - keep the number of functions in a batch low to prevent overloading your system with too many containers. You should be able to experimentally verify which features are missing.

The HTTP trigger for local deployment is actually already implemented - see sebs.local.function.HTTPTrigger.

@Rajiv2605

@Kaushik-Iyer are you working on this issue? Can I take it up?

@Kaushik-Iyer

You can work on it

@mcopik
Collaborator Author

mcopik commented Mar 14, 2023

@Rajiv2605 Please let me know if you have any questions - happy to help.

@Rajiv2605

@mcopik thanks! I will follow what you have suggested in the previous comment. Please let me know if there is any additional information I need to know.

@mcopik
Collaborator Author

mcopik commented Mar 14, 2023

Sounds good - please ask if you find anything unclear or if the documentation is missing/incomplete.

@Rajiv2605

@mcopik I am able to run a benchmark locally, but perf-cost is throwing errors.

@veenaamb

veenaamb commented Mar 15, 2023

Can you post your error? @Rajiv2605

@Rajiv2605

rajiv@DESKTOP-B3BCG09:~/serverless-benchmarks$ sudo ./sebs.py experiment invoke perf-cost --config config/example.json --deployment local

ERROR:root:'Local' object has no attribute '_measure_interval'
Traceback (most recent call last):
  File "./sebs.py", line 30, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "./sebs.py", line 72, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 97, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 462, in experiment_invoke
    experiment.prepare(sebs_client, deployment_client)
  File "/home/rajiv/serverless-benchmarks/sebs/experiments/perf_cost.py", line 49, in prepare
    self._function = deployment_client.get_function(self._benchmark)
  File "/home/rajiv/serverless-benchmarks/sebs/faas/system.py", line 174, in get_function
    function = self.create_function(code_package, func_name)
  File "/home/rajiv/serverless-benchmarks/sebs/local/local.py", line 194, in create_function
    if self.measurements_enabled and self._memory_measurement_path is not None:
  File "/home/rajiv/serverless-benchmarks/sebs/local/local.py", line 54, in measurements_enabled
    return self._measure_interval > -1
AttributeError: 'Local' object has no attribute '_measure_interval'

@Rajiv2605

@veenaamb

> Can you post your error? @Rajiv2605

Please check the error above. Did I miss any step?

@Rajiv2605

rajiv@DESKTOP-B3BCG09:~/serverless-benchmarks$ sudo ./sebs.py experiment invoke perf-cost --config config/example.json --deployment local --verbose

[sudo] password for rajiv:
19:06:09,721 INFO SeBS-0912: Created experiment output at /home/rajiv/serverless-benchmarks
19:06:09,724 INFO Benchmark-d250: Using cached benchmark 110.dynamic-html at /home/rajiv/serverless-benchmarks/cache/110.dynamic-html/local/python/3.7/code
19:06:09,724 INFO Local-415d: Creating new function! Reason: function 110.dynamic-html-python-3.7 not found in cache.
ERROR:root:'Local' object has no attribute '_measure_interval'
Traceback (most recent call last):
  File "./sebs.py", line 30, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "./sebs.py", line 72, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 97, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 462, in experiment_invoke
    experiment.prepare(sebs_client, deployment_client)
  File "/home/rajiv/serverless-benchmarks/sebs/experiments/perf_cost.py", line 49, in prepare
    self._function = deployment_client.get_function(self._benchmark)
  File "/home/rajiv/serverless-benchmarks/sebs/faas/system.py", line 174, in get_function
    function = self.create_function(code_package, func_name)
  File "/home/rajiv/serverless-benchmarks/sebs/local/local.py", line 194, in create_function
    if self.measurements_enabled and self._memory_measurement_path is not None:
  File "/home/rajiv/serverless-benchmarks/sebs/local/local.py", line 54, in measurements_enabled
    return self._measure_interval > -1
AttributeError: 'Local' object has no attribute '_measure_interval'

@Rajiv2605

@veenaamb config.json

rajiv@DESKTOP-B3BCG09:~/serverless-benchmarks$ cat cache/110.dynamic-html/config.json

{
  "local": {
    "python": {
      "code_package": {
        "3.7": {
          "size": 4096,
          "hash": "f2a3c8ea36ffbbb7967c026fc690890c",
          "location": "110.dynamic-html/local/python/3.7/code",
          "date": {
            "created": "2023-03-15 13:52:34.280412",
            "modified": "2023-03-15 13:52:34.280412"
          }
        }
      },
      "functions": {}
    }
  }
}

@mcopik
Collaborator Author

mcopik commented Mar 15, 2023

@Rajiv2605 That looks like a bug completely unrelated to the perf_cost experiment - I will try to fix it tonight.

@mcopik
Collaborator Author

mcopik commented Mar 15, 2023

@Rajiv2605 I pushed to master a fix that should resolve the issue :)

I hope the remaining issues will only be related to the perf-cost experiment.

@Rajiv2605

@mcopik That issue is resolved but a new issue has come up:

rajiv@DESKTOP-B3BCG09:~/Desktop/serverless-benchmarks$ sudo ./sebs.py experiment invoke perf-cost --config config/example.json --deployment local --verbose

[sudo] password for rajiv:
17:40:05,309 INFO SeBS-7dd3: Created experiment output at /home/rajiv/Desktop/serverless-benchmarks
17:40:05,310 INFO SeBS-7dd3: hi-experiment-invoke
experiment: <sebs.experiments.perf_cost.PerfCost object at 0x7f7b110eb400>
17:40:05,310 INFO Experiment.PerfCost-f14a: hi-prepare
17:40:05,312 INFO Benchmark-ed5e: Using cached benchmark 110.dynamic-html at /home/rajiv/Desktop/serverless-benchmarks/cache/110.dynamic-html/local/python/3.7/code
17:40:05,323 INFO Local-5dbb: Using cached function 110.dynamic-html-python-3.7 in /home/rajiv/Desktop/serverless-benchmarks/cache/110.dynamic-html/local/python/3.7/code
17:40:05,323 INFO Local-5dbb: Cached function 110.dynamic-html-python-3.7 is up to date.
ERROR:root:'NoneType' object has no attribute 'reload'
Traceback (most recent call last):
  File "./sebs.py", line 30, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "./sebs.py", line 72, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 97, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 467, in experiment_invoke
    experiment.prepare(sebs_client, deployment_client)
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/experiments/perf_cost.py", line 52, in prepare
    self._storage = deployment_client.get_storage(replace_existing=self.config.update_storage)
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/local/local.py", line 93, in get_storage
    self.storage = Minio.deserialize(
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/storage/minio.py", line 230, in deserialize
    return Minio._deserialize(cached_config, cache_client, Minio)
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/storage/minio.py", line 225, in _deserialize
    obj.configure_connection()
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/storage/minio.py", line 92, in configure_connection
    self._storage_container.reload()
AttributeError: 'NoneType' object has no attribute 'reload'

@Rajiv2605

By looking into the codebase, I have figured out how the internals work. I am trying to find the cause for this error. Please let me know if this is a bug or if I missed any configuration step.

@mcopik
Collaborator Author

mcopik commented Mar 16, 2023

@Rajiv2605 The experiment is not fully supported because certain features are not implemented in the Local deployment. In this example, it seems that storage_container is not initialized.

Here, we should follow the way it works for OpenWhisk - expect the user to deploy the storage separately and provide configuration.

Looking at the stack trace, it seems that the storage object has not been initialized correctly. Either the config you provided (see instructions for OpenWhisk or for deploying local functions) is incorrect/broken, or we failed to correctly load the data.

If you did include storage configuration in your experiment configuration, as the documentation shows for the Local deployment (this step is very similar to OpenWhisk), then it is a bug. IIRC, this worked for regular invocations of functions with the Local deployment - so this code should work as expected.

@Rajiv2605

Rajiv2605 commented Mar 19, 2023

Hi @mcopik, I am able to run the benchmark and experiment (perf-cost) commands without errors now. I have listed below the issues I faced and how I fixed them. After fixing those issues, the benchmark command outputs as expected. Please let me know what you think.

Issue 1: KeyError "minio"
Occurrence: Whenever I run a benchmark
Error in detail:

rajiv@DESKTOP-B3BCG09:~/Desktop/serverless-benchmarks$ sudo ./sebs.py benchmark invoke 110.dynamic-html test --config config/local_deployment.json --deployment local --verbose
19:53:28,769 INFO SeBS-869a: Created experiment output at /home/rajiv/Desktop/serverless-benchmarks
19:53:28,769 INFO SeBS-869a: deployment_client initialized
19:53:28,771 INFO Benchmark-7e1c: Using cached benchmark 110.dynamic-html at /home/rajiv/Desktop/serverless-benchmarks/cache/110.dynamic-html/local/python/3.7/code
19:53:28,782 INFO Local-cc86: Using cached function 110.dynamic-html-python-3.7 in /home/rajiv/Desktop/serverless-benchmarks/cache/110.dynamic-html/local/python/3.7/code
19:53:28,782 INFO Local-cc86: Cached function 110.dynamic-html-python-3.7 is up to date.
ERROR:root:'minio'
Traceback (most recent call last):
  File "./sebs.py", line 30, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "./sebs.py", line 72, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 97, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 242, in invoke
    input_config = benchmark_obj.prepare_input(storage=storage, size=benchmark_input_size)
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/benchmark.py", line 539, in prepare_input
    storage.allocate_buckets(self.benchmark, buckets)
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/faas/storage.py", line 215, in allocate_buckets
    self.save_storage(benchmark)
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/faas/storage.py", line 236, in save_storage
    self.cache_client.update_storage(
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/cache.py", line 162, in update_storage
    cached_config[deployment]["storage"] = config
KeyError: 'minio'

Solution: Added the following line to update_storage() in ./sebs/cache.py
cached_config[deployment] = {}

Issue 2: Cache inconsistency
Occurrence: When a benchmark is run after a long time or after a container has been destroyed. SeBS is unaware of the container lifecycle, so the cache remains outdated.
Issue in detail:

rajiv@DESKTOP-B3BCG09:~/Desktop/serverless-benchmarks$ sudo ./sebs.py benchmark invoke 110.dynamic-html test --config config/example.json --deployment local --verbose
17:57:52,283 INFO SeBS-7819: Created experiment output at /home/rajiv/Desktop/serverless-benchmarks
17:57:52,284 INFO SeBS-7819: deployment_client initialized
17:57:52,285 INFO Benchmark-a47d: Using cached benchmark 110.dynamic-html at /home/rajiv/Desktop/serverless-benchmarks/cache/110.dynamic-html/local/python/3.7/code
ERROR:root:Cached container 210681f86315c316c3936382f43b23944544b1a0f8452c731b93bad589e518c0 not available anymore!
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 268, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/containers/210681f86315c316c3936382f43b23944544b1a0f8452c731b93bad589e518c0/json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/local/function.py", line 91, in deserialize
    instance = docker.from_env().containers.get(instance_id)
  File "/usr/local/lib/python3.8/dist-packages/docker/models/containers.py", line 925, in get
    resp = self.client.api.inspect_container(container_id)
  File "/usr/local/lib/python3.8/dist-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/container.py", line 783, in inspect_container
    return self._result(
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 274, in _result
    self._raise_for_status(response)
  File "/usr/local/lib/python3.8/dist-packages/docker/api/client.py", line 270, in _raise_for_status
    raise create_api_error_from_http_exception(e) from e
  File "/usr/local/lib/python3.8/dist-packages/docker/errors.py", line 39, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation) from e
docker.errors.NotFound: 404 Client Error for http+docker://localhost/v1.41/containers/210681f86315c316c3936382f43b23944544b1a0f8452c731b93bad589e518c0/json: Not Found ("No such container: 210681f86315c316c3936382f43b23944544b1a0f8452c731b93bad589e518c0")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./sebs.py", line 30, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "./sebs.py", line 72, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 97, in wrapper
    return func(*args, **kwargs)
  File "./sebs.py", line 237, in invoke
    func = deployment_client.get_function(
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/faas/system.py", line 190, in get_function
    function = self.function_type().deserialize(cached_function)
  File "/home/rajiv/Desktop/serverless-benchmarks/sebs/local/function.py", line 102, in deserialize
    raise RuntimeError(f"Cached container {instance_id} not available anymore!")
RuntimeError: Cached container 210681f86315c316c3936382f43b23944544b1a0f8452c731b93bad589e518c0 not available anymore!

Solution: Manually clear ./sebs/cache sub-directory of the benchmark being run.

@Rajiv2605

Rajiv2605 commented Mar 19, 2023

@mcopik I ran the perf-cost experiment after fixing the above issues. It is running without any errors now. The function enforce_cold_start() was not implemented in ./sebs/local/local.py. I implemented it as follows:

    def enforce_cold_start(self, functions: List[Function], code_package: Benchmark):
        # Kill every running container that belongs to one of the functions,
        # forcing the next invocation to start a fresh (cold) container.
        fn_names = [fn.name for fn in functions]
        for ctr in self._docker_client.containers.list():
            if ctr.name in fn_names:
                ctr.kill()

...and added the following line to self._docker_client.containers.run() in create_function():
name=func_name,

But, I get the following output when I run the perf-cost experiment command:

rajiv@DESKTOP-B3BCG09:~/Desktop/serverless-benchmarks$ sudo ./sebs.py experiment invoke perf-cost --config config/local_deployment.json --deployment local --verbose
02:08:42,549 INFO SeBS-9d51: Created experiment output at /home/rajiv/Desktop/serverless-benchmarks
02:08:42,549 INFO SeBS-9d51: deployment_client initialized
02:08:42,551 INFO Benchmark-6140: Using cached benchmark 110.dynamic-html at /home/rajiv/Desktop/serverless-benchmarks/cache/110.dynamic-html/local/python/3.7/code
02:08:42,561 INFO Local-7a6c: Using cached function 110.dynamic-html-python-3.7 in /home/rajiv/Desktop/serverless-benchmarks/cache/110.dynamic-html/local/python/3.7/code
02:08:42,561 INFO Local-7a6c: Cached function 110.dynamic-html-python-3.7 is up to date.
02:08:42,582 INFO Experiment.PerfCost-e4a2: Begin experiment on memory size 128
02:08:42,583 INFO Experiment.PerfCost-e4a2: Begin cold experiments
02:09:23,516 INFO Experiment.PerfCost-e4a2: Processed 0 warm-up samples, ignoring these results.
02:09:39,756 INFO Experiment.PerfCost-e4a2: Processed 0 samples out of 50, 100 errors
02:09:59,116 INFO Experiment.PerfCost-e4a2: Processed 0 samples out of 50, 150 errors
02:10:12,238 INFO Experiment.PerfCost-e4a2: Processed 0 samples out of 50, 200 errors

Please let me know if I have missed some key step.

@mcopik
Collaborator Author

mcopik commented Mar 20, 2023

@Rajiv2605 Good progress! :-)

Issue 1: yes, most likely we do not update the cache at all. In cloud deployments, we always update the cache to store information about deployed functions, e.g., after creating and updating functions - see the usage of cache_client in sebs.aws.aws. I'm not sure the issue is in sebs/cache.py - rather, it comes from the fact that we did not use the cache in Local. Since we did not use it, it was not initialized, and that's why you got an error :-) Definitely, you should not overwrite cache[deployment] - rather, check if the key is not present and only then insert a new dictionary.
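
A minimal sketch of that suggestion - guard the key instead of overwriting it. The names `cached_config`, `deployment`, and `config` follow the traceback above; this is illustrative, not the actual sebs/cache.py code:

```python
def update_storage(cached_config: dict, deployment: str, config: dict) -> None:
    # Create the per-deployment entry only if it is missing, so any
    # cached function data already stored under the same key survives.
    cached_config.setdefault(deployment, {})
    cached_config[deployment]["storage"] = config
```

Using `setdefault` (or an explicit `if deployment not in cached_config:` check) avoids the KeyError without clobbering previously cached entries.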

Issue 2: yes, the underlying assumption is that cached resources are not invalidated by the user. This applies both to Docker containers in Local and to Lambda functions in AWS. If we deploy containers, then we expect that we will kill them - not the user. What we could do here is catch the exception, emit a warning ("cached function is invalidated"), and remove it from the cache. This way, we do not force users to manually clean the cache. Instead, we remove the stale entry, and the experiment will deploy new functions to replace the missing ones.
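
A hedged sketch of that recovery path. The helper name and the dict-shaped cache are hypothetical; the real lookup lives in sebs/local/function.py's deserialize, and `containers` mirrors docker-py's `containers.get()`, which raises on a missing id:

```python
import logging


def get_cached_container(containers, instance_id: str, cache: dict):
    """Look up a cached container; on failure, warn and drop the stale
    cache entry instead of raising (sketch with hypothetical names)."""
    try:
        return containers.get(instance_id)
    except Exception:
        logging.warning(
            "Cached container %s is not available anymore; "
            "removing the stale cache entry.", instance_id
        )
        cache.pop(instance_id, None)
        return None  # the caller then deploys a fresh container
```

Returning None (rather than raising RuntimeError as the current code does) lets get_function fall through to create_function and redeploy transparently.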

@mcopik
Collaborator Author

mcopik commented Mar 20, 2023

Issue 3: Your approach looks correct - kill the warm containers. However, you then need to launch new containers to create deployments. The significant difference between cloud and local is that in the cloud, the provider launches new containers hosting the function when we invoke it. If there are 10 containers in the cloud and we invoke the function 15 times simultaneously, the cloud provider will create 5 more containers. Locally, we have to do this ourselves. Thus, we should spin up new containers when the perf-cost experiment invokes the function and no containers are available to process the invocation.

To simulate cold invocations for the experiment, I suggest the following change: in a Local deployment of a function, change it from having 1 to N underlying Docker containers. When you kill containers, the list of active deployments of a function will shrink. Then, in the HTTP trigger, keep track of which containers are "busy" and allocate new containers if there are not enough of them.
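
The busy/idle tracking described above could be sketched like this. It is a minimal, hypothetical pool - the real trigger in sebs.local would manage Docker containers rather than opaque objects, and the class name is invented for illustration:

```python
import threading


class ContainerPool:
    """Sketch: idle containers serve warm invocations; when none are
    idle, a new container is started, which counts as a cold start."""

    def __init__(self, start_container):
        self._start = start_container  # callable that launches a container
        self._idle = []
        self._lock = threading.Lock()
        self.cold_starts = 0

    def acquire(self):
        # Called by the HTTP trigger before forwarding an invocation.
        with self._lock:
            if self._idle:
                return self._idle.pop()  # warm invocation
            self.cold_starts += 1
            return self._start()         # cold invocation: launch a new one

    def release(self, container) -> None:
        # Called after the invocation finishes; the container is warm again.
        with self._lock:
            self._idle.append(container)
```

Killing containers in enforce_cold_start then simply empties the idle list, so the next batch of invocations is forced onto freshly started containers.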

Also, AFAIK, you can't have two containers with the same name. Maybe add func_name-{uuid} to ensure we do not have container collisions?
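
That suffix scheme could be as simple as the following sketch (the helper name and the 8-character suffix length are arbitrary choices):

```python
import uuid


def unique_container_name(func_name: str) -> str:
    # Docker requires container names to be unique, so append a short
    # random suffix to allow N parallel containers of the same function.
    return f"{func_name}-{uuid.uuid4().hex[:8]}"
```

The function name stays a prefix, so enforce_cold_start can still match containers to functions with a startswith check instead of exact name equality.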

@Rajiv2605

@mcopik I have created a PR based on what I have done so far. Please let me know.

Regarding issue 1: cache_client is used in the same way in both local and aws. It is used to initialize storage in get_storage() and to call update_function() in create_trigger(). The error occurs in the call to update_storage(), which is invoked from save_storage() in storage.py. There is no line that initializes the dictionary with the "deployment" key; cached_config is accessed directly with that key, and that is what causes the KeyError. Hence, I added the initialization line.

@mcopik
Collaborator Author

mcopik commented Mar 25, 2023

@Rajiv2605 Thanks for the PR!

I realized this issue is more complex than we initially anticipated. Nevertheless, I think you have made good progress toward completion. I think the last piece is moving the complexity of managing multiple function instances from the perf_cost experiment into a special trigger in Local that will handle adding function containers on the fly (cold invocations) and redirecting invocations to one of the existing, idle function containers (warm invocations).

@jchigu

jchigu commented Jul 12, 2023

Good day sir.

This is a special request.

May our esteemed collaborator @Mingtao Sun help us out on this issue so that we can run the performance-cost experiments locally and build test-case scenarios.

thank you

@mcopik
Collaborator Author

mcopik commented Jul 15, 2023

@jchigu The Local backend is primarily used for local testing and low-level measurements, e.g., with PAPI. For actual FaaS measurements, I recommend using a real FaaS platform deployed at the edge, e.g., OpenWhisk, which we support, or Fission, which has partial support in SeBS (#61).

As I explained to you before, this is an open-source research software where features are added through our research work and contributions from other students and researchers. We all work on issues that are top priorities for our work. It's not fair to expect other contributors to work on projects they're not personally invested in. If you need a specific feature that is not currently being worked on, then you are welcome to do it yourself - we will be happy to integrate your solution through a pull request. I've included all relevant information in the discussion above.

@kartikhans

@mcopik Is this still an issue? I noticed it was raised last year and wanted to check on the most recent status so that I can pick up the task from there.

@jchigu

jchigu commented Mar 16, 2024 via email

@mcopik
Collaborator Author

mcopik commented Jul 10, 2024

@kartikhans Hi Kartik! Apologies for the very late reply - I must have missed your comment. This issue is still a work in progress. For the full deployment of experiments on local platforms, we recommend using a supported open-source platform like OpenWhisk (Knative and Fission are coming soon). For the local platform using Docker containers, we plan to add basic semantics of executing functions - but not complete scheduling policies.

If you are still interested in this problem and maybe would like to help, then please contact me :)
