Scripts run after localstack is "ready" are no longer working #6
What do you mean by "not working"? I just tested putting the following script in the ready.d directory:

```bash
#!/bin/bash
echo "hello from ready script!"
awslocal s3api create-bucket --bucket testbucket
```

And once the container started, I could see the message in the container logs, and it correctly created (and then persisted) the bucket.
A potential issue with that method is that it would mean the resources wouldn't get persisted if the container doesn't exit cleanly.
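(For illustration: under standard Docker semantics, a stop sends SIGTERM and then SIGKILL once the grace period expires, and a hard kill skips shutdown handling entirely, so anything persisted only at shutdown could be lost. The container name `localstack` below is just an assumed example.)

```bash
# "docker stop" sends SIGTERM, then SIGKILL once the grace period expires;
# any persistence work still running at that point is cut short.
docker stop --time 60 localstack   # extend the grace period to 60 seconds

# A hard kill (or a host crash) never runs shutdown logic at all,
# so state written only at shutdown would not survive.
docker kill localstack
```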
@GREsau I guess my question is the following, then:
Loading persisted state is triggered by the …, so you should be fine to check the existence of resources in your ready.d scripts 🙂
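For reference, a minimal sketch of an idempotent ready.d script along those lines; the bucket name is just a placeholder:

```bash
#!/bin/bash
# Only create the bucket if it isn't already present in the loaded state.
if awslocal s3api head-bucket --bucket testbucket 2>/dev/null; then
  echo "bucket already exists, skipping creation"
else
  awslocal s3api create-bucket --bucket testbucket
fi
```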
@GREsau The result is a request taking a super long time and then finally timing out. This only seems to happen when I use the persist library. I am using `FROM gresau/localstack-persist:2.3.2`. Here is my localstack compose:
@GREsau Last thing:
Are you able to share a full reproducible example, e.g. including the lambda code and instructions on how they're deployed? That would make it much easier to diagnose the problem.
Hm, that will be a little complicated. What I can tell you is that this is deployed with python3.11.

```
localstack | {
```
@GREsau Even more interesting:
Just to add to this: I am wondering if there is some sort of race condition happening, where it is loading lambda data and this API call is getting stalled? Basically, somehow this statement results in my code stalling. If I run `awslocal --version` I get no issues. Please note all of this happens after the FIRST build and start, i.e. after everything gets persisted.
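One way to narrow this down (illustrative only; `lambda list-functions` stands in for whatever call is actually stalling) is to wrap the suspect call so it fails fast and records the AWS CLI's verbose client-side logs:

```bash
#!/bin/bash
# Fail fast instead of hanging the script, and keep verbose client logs.
if ! timeout 30 awslocal lambda list-functions --debug \
      > /tmp/call-out.json 2> /tmp/call-debug.log; then
  echo "call timed out or failed; see /tmp/call-debug.log"
fi
```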
I'm afraid without a minimal reproducible example, I can't really spend any time looking into what might be going wrong - especially if it's only happening on a previous version.
I have scripts that get run that:

In each of these scenarios I first check whether what I am looking to create already exists; if it does, the script doesn't run again.

This worked fine with LocalStack pods, as I first loaded the data from the pods and then ran these scripts. How can I ensure this works with your scripts as well?

Also, instead of persisting data throughout the lifespan of the running image, would it be easier and make more sense to override the "shutdown" logic of LocalStack and persist at the moment of shutdown?