This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

vagrant reload does not work #21

Open
deanmalmgren opened this issue Jun 7, 2013 · 25 comments

Comments

@deanmalmgren
Contributor

Based on the documentation, it seemed like the vagrant reload command would work, but I receive an error when I run it:

[unix]$ vagrant reload
Vagrant attempted to call the action 'reload' on the provider
'RackSpace Cloud', but this provider doesn't support this action. This
is probably a bug in either the provider or the plugin calling this
action, and should be reported.
[unix]$ vagrant --version
Vagrant version 1.2.2

What other information would be helpful for debugging here?

@benjiqq

benjiqq commented Oct 17, 2013

+1. same issue here. I need to be able to reload

@maxlinc
Contributor

maxlinc commented Oct 17, 2013

Can you guys give some more details on what you're trying to accomplish? I think the reload semantics are left to each provider. The built-in provider (virtualbox) does quite a bit, but I think most other providers have basic, if any, reload ability.

Otherwise... http://agilewarrior.wordpress.com/2010/11/06/the-agile-inception-deck/
(see first 3 images)

@deanmalmgren
Contributor Author

Thanks for picking this up, @maxlinc. Two outcomes would be satisfactory for me.

  1. Remove the reload functionality altogether in the vagrant-rackspace provider or, if you can't remove it because it's a vagrant thing, at least provide a better error message to make it clear that the vagrant reload method isn't supposed to do anything.
  2. Have vagrant reload do the equivalent of vagrant halt && vagrant up on a Rackspace machine, but without losing the IP address of the Rackspace machine. This seems like a viable option, provided #25 (vagrant halt does not work) is addressed.

To be honest, I haven't had a strong use case for vagrant reload and I'd probably lean towards option 1.

@deanmalmgren
Contributor Author

Actually...now that I'm looking a bit more carefully at Rackspace, it doesn't seem like there is a way to halt a machine, but you can restart it. A revised option 2 might then be to have vagrant reload restart the Rackspace machine. Does that make sense?

@benjiqq

benjiqq commented Oct 18, 2013

Yep, reload is a shortcut:

The equivalent of running a halt followed by an up.
This command is usually required for changes made in the Vagrantfile to take effect. After making any modifications to the Vagrantfile, a reload should be called.
The configured provisioners will not run again, by default. You can force the provisioners to re-run by specifying the --provision flag.

@deanmalmgren
Contributor Author

@BenjyC looks like we were writing at nearly the same time. The tricky bit is that there isn't an equivalent cloud operation to vagrant halt on Rackspace, is there?

@benjiqq

benjiqq commented Oct 18, 2013

@deanmalmgren
Contributor Author

Sorry...that wasn't terribly clear. I realize there isn't a halt action in the vagrant-rackspace provider.

What I'm wondering is whether it's possible to power down a machine on Rackspace (retaining the same IP) without destroying the virtual machine. That would be the Rackspace equivalent of the vagrant halt step, but from quickly playing around with the web interface, it doesn't seem like this is possible [upvote feature request here].

Independent of whether you can just power down a machine on Rackspace, you can certainly reboot the box. This seems like a reasonable fallback for the time being. Thoughts?

@krames
Collaborator

krames commented Oct 18, 2013

@deanmalmgren I am the racker that is responsible for this project. Sorry for taking so long to get back to you on this one!

I believe we could implement reload using the rebuild functionality in Cloud Servers. It essentially restores the VM with a clean image while retaining its IP. This might be just the ticket for reload.

http://docs.rackspace.com/servers/api/v2/cs-devguide/content/Rebuild_Server-d1e3538.html
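
Roughly, the call through fog (the library this plugin already uses to talk to the Rackspace API) would look something like the sketch below. It is untested, the option names and the rebuild signature are from memory and may need adjusting, and the credentials, server_id, and image_id are placeholders:

require 'fog'

# Connect to next-gen Cloud Servers (OpenStack compute v2) through fog.
compute = Fog::Compute.new(
  :provider           => 'Rackspace',
  :version            => :v2,
  :rackspace_username => 'my_username',   # placeholder
  :rackspace_api_key  => 'my_api_key',    # placeholder
  :rackspace_region   => :iad
)

server = compute.servers.get(server_id)  # id of the existing VM (placeholder)
server.rebuild(image_id)                 # re-image in place; the IP is kept (placeholder image id)
server.wait_for { ready? }               # block until the server is ACTIVE again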

Implementing halt, on the other hand, is a little harder. You can't just stop a VM and then resume its execution later. The closest thing we could do is take an image of that VM and then delete it. That would unfortunately be slow, and it wouldn't retain the IP either.

What are your thoughts?

@deanmalmgren
Contributor Author

No worries, @krames. Using the rebuild functionality of the Cloud Servers sounds perfectly reasonable to me. Thanks!

If halt isn't possible in a way that makes sense, then I'd suggest at least having a more friendly error message, a suggestion I'll add to the vagrant halt ticket #25.

@krames
Collaborator

krames commented Oct 18, 2013

@deanmalmgren Will do!

@krames
Collaborator

krames commented Oct 23, 2013

@deanmalmgren I started to implement this feature, but I ran into a cloud servers issue populating ssh keys with the rebuild command. I am going to pick this back up when it gets fixed.

In the meantime, thanks for your patience!

@deanmalmgren
Contributor Author

Sounds good, Kyle. Thanks for the update!


@deanmalmgren
Contributor Author

@krames was your ssh keys problem related to #58 or something else entirely?

@krames
Collaborator

krames commented Jan 7, 2014

@deanmalmgren This problem is with the underlying rebuild command itself. It is not using the ssh keys I am sending.

@deanmalmgren
Contributor Author

ahhhhh, gotcha!


@cgoldammer

Hi, I am wondering whether this is still being actively worked on? My current use case is that I'd like to be able to reset a Vagrant box on Rackspace while keeping the IP address. Thanks for any help!

Note: Referred from google groups (https://groups.google.com/forum/#!topic/vagrant-up/FRWf2e6XmVo).

@deanmalmgren
Contributor Author

Any progress, @krames?

@CodeCommander

Any progress on this? It would be nice to be able to update the Vagrantfile on these servers without losing our IP address.

@mttjohnson

It looks like @krames no longer works at Rackspace (LinkedIn Profile), and nobody else has contributed to the rackspace/vagrant-rackspace fork that @krames was working on back in 2014.

@maxlinc
Contributor

maxlinc commented Jul 15, 2016

Kyle was part of the group behind https://developer.rackspace.com. The best way to get in touch with the group and see if anyone is actively working on this project (or at least able to review, accept, and release a pull request) is probably to email [email protected].


@mttjohnson

It looks like this should be possible. I thought I would share what I have uncovered, in the event someone knows how to translate this into this vagrant plugin:

The Rackspace Cloud has some API calls you can use to rebuild the server, but from what I can tell, there isn't an option to set a public key on the server during the rebuild process. You could set a password during the rebuild, but who uses those for authentication these days? I found that if you associate an SSH key name with the server when it is created, the key will persist through a rebuild and allow you to log in with key authentication afterwards.

You can add an SSH key and assign it a key name in the Control Panel or you can import one via the API.

export RAX_USERNAME="<username>"
export RAX_API_KEY="<api_key>"
export KEY_NAME="<key_name>"

You would need to create the machine initially with a key_name specified referencing the key you use with Vagrant.

Vagrant.configure("2") do |config|
  # ... other stuff  
  config.vm.define :centos do |centos|
    centos.ssh.private_key_path = '~/.ssh/id_rsa'
    centos.vm.provider :rackspace do |rs|
      rs.username = ENV['RAX_USERNAME']
      rs.api_key  = ENV['RAX_API_KEY']
      rs.key_name = ENV['KEY_NAME']
      rs.public_key_path = '~/.ssh/id_rsa.pub'
      # ... other stuff
    end
  end
end

For interacting with the Rackspace API you can reference their documentation.

Get the tenant id, endpoint, and token for subsequent API calls

curl https://identity.api.rackspacecloud.com/v2.0/tokens  \
    -X POST \
    -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"'"$RAX_USERNAME"'","apiKey":"'"$RAX_API_KEY"'"}}}' \
    -H "Content-type: application/json" \
    | python -m json.tool

In the response look for the tenantId and endpoint address from the "cloudServersOpenStack" endpoint for the region you are working with.

export TENANT_ID="<tenantId>"
export API_ENDPOINT="iad.servers.api.rackspacecloud.com"

Also in the response from above get the token id

export AUTH_TOKEN="<token_id>"

If you don't already have a public key on your Rackspace Account you can import one or add one from the Control Panel

export KEY_NAME="<key_name>"
export KEY_PUB=$(cat ~/.ssh/id_rsa.pub)

Import key pair

curl -s https://$API_ENDPOINT/v2/$TENANT_ID/os-keypairs \
   -X POST \
   -H "Content-type: application/json" \
   -H "X-Auth-Token: $AUTH_TOKEN" \
   -H "Accept: application/json" \
   -d '{ "keypair":{ "name":"'"$KEY_NAME"'", "public_key":"'"$KEY_PUB"'" } }' \
   | python -m json.tool

List existing key pairs

curl -s https://$API_ENDPOINT/v2/$TENANT_ID/os-keypairs \
   -H "X-Auth-Token: $AUTH_TOKEN" | python -m json.tool

You need the server ID for the server you want to rebuild which you can get from a server list or from the Control Panel

curl -s https://$API_ENDPOINT/v2/$TENANT_ID/servers/detail \
       -H "X-Auth-Token: $AUTH_TOKEN" | python -m json.tool

Set the value for your server ID

export SERVER_ID="<server_id>"

You need the image ID, which you can get from the server details above, or list all images and find the one you want to rebuild with

vagrant rackspace images list

Set the value for the image ID

export IMAGE_REF="<image_id>"

Rebuild the server - based on API documentation

curl -s https://$API_ENDPOINT/v2/$TENANT_ID/servers/$SERVER_ID/action \
   -X POST \
   -H "Content-type: application/json" \
   -H "X-Auth-Token: $AUTH_TOKEN" \
   -H "Accept: application/json" \
   -d '{ "rebuild" :{ "imageRef" : "'"$IMAGE_REF"'" } }' \
   | python -m json.tool

Re-run vagrant provisioner after the rebuild

vagrant provision <machine_name>

@mttjohnson

So... I took a stab at trying to figure out how to add this functionality to the plugin:
mttjohnson@e55532f

I found the rebuild method listed on the fog-rackspace gem:
https://github.com/fog/fog-rackspace/blob/master/lib/fog/rackspace/models/compute_v2/server.rb#L447

Considering this is the first time I've tried to modify much Ruby code, let alone deal with a vagrant plugin, I'm still trying to figure out how to test my modifications with vagrant to see if anything I hacked up actually does anything.
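
For anyone curious, this is the rough shape I'm aiming for: a plugin command that dispatches a :rebuild action, which in turn calls fog's rebuild. Treat it strictly as a sketch; the env[:rackspace_compute] key, the provider_config.image attribute, and the action wiring are my guesses about how this plugin is put together, not something I've verified yet.

module VagrantPlugins
  module Rackspace
    # Hypothetical command: `vagrant rebuild <machine_name>`
    class RebuildCommand < Vagrant.plugin("2", :command)
      def execute
        with_target_vms(@argv) do |machine|
          machine.action(:rebuild)  # hand off to the provider's :rebuild action
        end
        0
      end
    end

    # Hypothetical middleware that performs the rebuild through fog.
    class Rebuild
      def initialize(app, env)
        @app = app
      end

      def call(env)
        machine = env[:machine]
        image   = machine.provider_config.image  # image ref to rebuild with
        # Assumes an earlier "connect" middleware stored the fog compute client here.
        server  = env[:rackspace_compute].servers.get(machine.id)

        env[:ui].info("Rebuilding #{machine.name} with image #{image}...")
        server.rebuild(image)
        server.wait_for { ready? }  # wait until the server is ACTIVE again

        @app.call(env)
      end
    end
  end
end

The command would still need to be registered in the plugin definition and the Rebuild middleware hooked into an action builder, which I've left out here.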

@mttjohnson

I am trying to install my local modified copy of the vagrant-rackspace plugin with no success.

  1. I cloned the vagrant-rackspace repo.
  2. I made my modifications to add the rebuild action.
  3. I ran rake build from the plugin source directory.
  4. That generated a .gem file.
  5. I ran vagrant plugin install pkg/vagrant-rackspace-0.1.11dev.gem and then I got an error:

Installing the 'pkg/vagrant-rackspace-0.1.11dev.gem' plugin. This can take a few minutes...
Bundler, the underlying system Vagrant uses to install plugins,
reported an error. The error is shown below. These errors are usually
caused by misconfigured plugin installations or transient network
issues. The error from Bundler is:

Could not find gem 'vagrant-rackspace (= 0.1.11dev)' in any of the gem sources listed in your Gemfile or available on this machine.

Am I missing something? All the documentation I can find indicates that those are the steps to follow, but I don't understand why it claims it can't find the gem when I gave it the exact path to the file.

@mttjohnson

I ended up figuring out how to run the modified version of the plugin by creating a Gemfile in the directory I wanted to run vagrant from and including the path to my modified version of vagrant-rackspace:

source 'https://rubygems.org'

group :plugins do
  gem "vagrant", git: "https://github.com/mitchellh/vagrant.git", :branch => 'master'
  gem "vagrant-rackspace", path: "./vagrant-rackspace"
  gem "vagrant-triggers", git: "https://github.com/emyl/vagrant-triggers.git", :branch => 'master'
end

Then I ran the commands via Bundler to test the new command and fix a couple of bugs:

bundle exec vagrant
bundle exec vagrant status
bundle exec vagrant rebuild mymachinename
bundle exec vagrant ssh mymachinename

Everything worked great rebuilding the machine. It kept the same IP address and ID, retained the storage volumes I had attached to the machine, re-ran the provisioners, and most importantly I was still able to SSH into the box using key authentication. Happy Day!
