EC2 Marathon Binding Public IP #7

Open
ghost opened this issue May 9, 2015 · 7 comments

ghost commented May 9, 2015

The EC2 Marathon Binding checks for a Public IP in marathon.py.

hints['public'] = _peek('public-ipv4')

The _peek() method asserts that the command successfully pulls a public-ipv4 address from the AWS metadata service.

I think it is a valid use case to not associate a public IP address with an EC2 instance inside a VPC. The instance will only have a local-ipv4 address and not a public-ipv4 address. The curl command will fail and the assert will break the binding in this case.

Is there a more graceful way of handling EC2 instances with only private IPs?
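
For illustration, something along these lines might handle it more gracefully (just a sketch; it assumes _peek() can be relaxed to return an empty value on a failed metadata lookup instead of asserting):

# sketch: prefer the public IPv4 when the instance has one, otherwise
# fall back to the private address so the binding still boots inside a VPC
public = _peek('public-ipv4')
hints['public'] = public if public else _peek('local-ipv4')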


opaugam commented May 9, 2015

Hey Mehdi, good question!

Right, good point. The early design was the following: all intra-cluster I/O (container to container, typically to tell your followers how to configure themselves) uses private IPs (on K8S this is the sub-netted pod IP). Now, I also offer a way to get an externally resolvable IP (this really depends on the stack you use), mostly for reporting and to know which underlying node your container is running on.

In practice the "public" IP is not really used since we rely on a proxy mechanism. Check out https://github.com/autodesk-cloud/ochothon for instance. You deploy a public-facing portal (which runs in the cluster of course) and talk to that guy only. The portal then only ever uses internal IPs to talk to the containers.

Does that make sense, or did I manage to confuse you? :)


ghost commented May 9, 2015

Should the Docker containers still use ochopod as the entrypoint, even with an Ochothon setup? I currently have ochopod as the entrypoint for my Docker containers.


opaugam commented May 10, 2015

Yeah. You basically want your pod script to run (e.g. the Python code importing ochopod and running Pod().boot() at the end). It does not really matter whether this is the entrypoint or not, as long as it runs. If you look at my sample ZooKeeper image in ochonetes you will notice I tend to use supervisord as my entrypoint (cleaner in my opinion). But yes, you can totally use python your_pod_script.py as your CMD.
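
For reference, a minimal pod script could look roughly like this (just a sketch; the module paths, the lifecycle class passed to boot(), and the supervised command are assumptions based on the discussion above, so check the sample image for the exact layout):

# your_pod_script.py - hypothetical minimal pod, names are illustrative
from ochopod.bindings.ec2.marathon import Pod
from ochopod.models.piped import Actor as Piped

class Strategy(Piped):

    def configure(self, cluster):
        # hypothetical: return the long-running command the pod should
        # supervise, plus any extra environment variables
        return 'python -m my_app', {}

if __name__ == '__main__':
    Pod().boot(Strategy)

You would then either set python your_pod_script.py as the CMD in the Dockerfile or point a supervisord program entry at that same script.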


ghost commented May 10, 2015

I am rebuilding my setup with public IPs assigned to all the instances. I will try out a Marathon deployment with Ochopod when I have the cluster running again. Thanks.


opaugam commented May 10, 2015

Just use the Ochothon project; it's going to be a breeze (well, it should be :) and will take 5 minutes. The portal image is already on the Docker Hub anyway, so you don't have to build anything.

Do you want me to put a simple 'hello world' Docker image including ochopod on GitHub to get you started?


opaugam commented May 11, 2015

Actually I went ahead and did it proactively :)

https://github.com/opaugam/marathon-ec2-flask-sample


ghost commented May 11, 2015

Thanks for the example!
