Use docker_file to build node_modules and acquire additional libraries #6
I want to keep things simple in the simple case. I want to ensure that local development remains easy and has minimal dependencies. I agree that a mode that packaged up the execer library without compiling it would be helpful. You are still going to need to build the Go binary, but compiling that is fast and cross-platform. Maybe there is an npm flag to just copy the code without compiling it. I would be willing to add a Vagrantfile with an optional Docker provider. Would that help? That could help enable Windows development as well.
@iangudger I do think the local dev story and deployment will be pretty easy with the technique outlined above: all you'd really need is Docker and the native language's toolchain (in this case, just Go) and nothing else. Otherwise, you'd need node, make, and gcc too. Note, you can do Windows development here too (i.e., Docker does the heavy lifting and just copies the files from the temp container to your local system: the execv node_module, any additional LD_LIBRARY_PATH libs, etc.). I'm fine with deferring this until later, though; this is just extra stuff that should simplify dev/deployment. In the meantime, I'll post a question to the Google Container Builder team to see how it can help with GCF deployments (i.e., a feature request that does these steps for you and 'injects' the files needed into the GCF runtime, vs. me creating them and uploading them inside functions.zip).
We may be able to use this sample to simplify the build steps by offloading much of it to Google Container Builder + Cloud Source Repositories (I think).
Spent some time refactoring the .NET variation of GCF with multi-stage builds. It simplified the install/setup instructions considerably (basically, use multi-stage builds to compile execer, install the required .NET shared object, and finally 'copy them out' of the container to your laptop). I can make a PR for a similar flow here if you think it'd help too.
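For reference, a minimal sketch of the multi-stage idea as applied to this repo. The base images, the `/app` path, and the stage names are assumptions for illustration, not the repo's actual layout:

```dockerfile
# Stage 1: do the heavy lifting; node, make, and gcc live only in here
FROM node:8 AS builder
WORKDIR /app
COPY . .
# --ignore-scripts skips native compilation hooks; drop it if the
# module's shared object actually needs to be built in this stage
RUN npm install --ignore-scripts --save local_modules/execer

# Stage 2: a small image holding just the artifacts to copy out
FROM alpine AS artifacts
COPY --from=builder /app/node_modules /node_modules
```

Building with `docker build --target artifacts -t execer-artifacts .` and then `docker cp`-ing `/node_modules` out of a container created from that image lands the files on your laptop without any local node/make/gcc toolchain.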
@salrashid123 Feel free to make a PR. I am not personally a big fan of Docker, but maybe other people are.
Close -- I'm able to use Cloud Builder to deploy from the repo, but the Container Builder for Go is causing me problems...
OK, it's working.

```shell
export PROJECT=[[YOUR-PROJECT]]
export BILLING=[[YOUR-BILLING]]

gcloud projects create $PROJECT
gcloud beta billing projects link $PROJECT --billing-account=$BILLING

gcloud services enable cloudfunctions.googleapis.com --project=$PROJECT
gcloud services enable cloudbuild.googleapis.com --project=$PROJECT

# Permit Container Builder to deploy Cloud Functions
NUM=$(gcloud projects describe $PROJECT \
  --format='value(projectNumber)')
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/cloudfunctions.developer
```

Then add a `cloudbuild.yaml` to the cloned directory:
```yaml
steps:
- name: "gcr.io/cloud-builders/go:debian"
  # -tags takes a single space-separated list; passing -tags twice
  # would leave only the last one in effect
  args: ["build", "-tags", "netgo node", "main.go"]
  env: [
    "GOARCH=amd64",
    "GOOS=linux",
    "GOPATH=."
  ]
- name: "gcr.io/cloud-builders/npm"
  args: ["install", "--ignore-scripts", "--save", "local_modules/execer"]
- name: "gcr.io/cloud-builders/gcloud"
  args: [
    "beta",
    "functions",
    "deploy", "helloworld",
    "--entry-point", "helloworld",
    "--source", "./",
    "--trigger-http",
    "--project", "${PROJECT_ID}"
  ]
```

Then create a build trigger from (your fork of the) GitHub repo.
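As an aside, the same config can also be run as a one-off build without a trigger. This is a sketch: it assumes an authenticated gcloud and the $PROJECT variable from above, and older gcloud releases spelled this `gcloud container builds submit`:

```shell
# Submit cloudbuild.yaml from the cloned directory as a one-off build
gcloud builds submit --config cloudbuild.yaml --project "$PROJECT" .
```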
Pushing the change will trigger the Container Builder build, and, of course:

```shell
curl --request GET https://us-central1-[[YOUR-PROJECT]].cloudfunctions.net/helloworld
```

returns:

```
Hello, I'm native Go!
```
This is a longer-term FR:
At the moment, the build and test steps require gcc, make, and nodejs. However, those are only needed if you want to deploy with functions.zip or run the test node server.
The suggestion is two-part:

1. Allow users to run the cloud function alone locally using native toolchains, without having to deal with node and the execv shim.
2. Acquire those dependencies during deployment by other means: for example, build node_modules and execve entirely in Docker and then "copy" them to your workstation just prior to creating functions.zip.

The advantage is that you don't need node, make, gcc, or anything else installed locally.
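The "build in Docker, then copy to the workstation" step can be sketched with plain docker commands. The `Dockerfile_node_modules` name comes from the files below; the image tag and the `/app/node_modules` path inside the container are assumptions:

```shell
#!/bin/sh
set -e
# Build the image that produces node_modules
docker build -f Dockerfile_node_modules -t execer-modules .
# Create (but don't run) a container so we can copy files out of it
cid=$(docker create execer-modules)
docker cp "$cid":/app/node_modules ./node_modules
docker rm "$cid" >/dev/null
# node_modules is now on the workstation, ready to go into functions.zip
```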
The first part would require the runtime binary to accept sockets as arguments as well as listen on its own.
The following files are an example of this 'just in time' library/module support:

- `Makefile`
- To compile node_modules: `Dockerfile_node_modules`
- To acquire additional shared object (.so) files and copy them to lib/
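The "acquire additional shared objects" step could look something like the helper below. It is a hedged sketch, not the repo's actual script: the execer binary name and the lib/ destination come from the discussion above, and everything else is illustrative. It uses `ldd` to list a binary's shared-object dependencies and copies them into a directory that can ship inside functions.zip:

```shell
#!/bin/sh
# Copy the shared objects (.so) a binary needs into a destination directory.
copy_shared_objects() {
  bin="$1"
  dest="$2"
  mkdir -p "$dest"
  # ldd lines look like: "libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x...)"
  # keep only lines that resolve to a path and copy each library over
  ldd "$bin" | awk '/=> \//{print $3}' | while read -r so; do
    cp "$so" "$dest"/
  done
}
```

Usage would be `copy_shared_objects ./execer lib/`; the same idea works inside a Dockerfile build stage, with the populated lib/ copied out of the container afterwards.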