Sunday, September 27, 2015

Ansible Setup for Nutanix on Acropolis

Getting started with a new provisioning platform like Acropolis on Nutanix can seem unfamiliar if you’ve spent many years with the way that vSphere creates virtual machines. However, you can bootstrap your environment with just a few VMs that allow you to clone and then set app configurations quickly.

You may already be familiar with the most popular configuration management platforms: in no particular order, Chef, Puppet, Ansible, and SaltStack. I’d like to begin with Ansible as a relatively straightforward example. If you’d like a more thorough Ansible introduction, check here. After imaging a Nutanix cluster with the Acropolis Hypervisor (or AHV), you should create a container and storage pool so that you can immediately begin laying the baseline for your environment. First, let's create a default template we can use; in this case I am using Ubuntu.
  1. Upload the Ubuntu server ISO via sftp to port 2222 with user "admin" and the admin password that you set for the Nutanix cluster. For example, on Windows I use WinSCP, and on Mac I use Cyberduck, but feel free to use whatever client you are most familiar with. Optionally, once logged in, you can create subdirectories in the container to help keep things organized, but they are not required.
  2. Create a new VM, customizing the local root disk size and the VLAN network attachment as needed.
  3. Update the CDROM to “Clone from ADSF file” and use the path of the uploaded ISO. This should auto-complete if you are using the correct path, for example "/container_name/ubuntu14.04.3-server-amd64.iso". If you have NOS version 4.1, this will look almost the same except for a cosmetic difference of "Clone from NDFS file".
  4. Power on the VM and install Ubuntu from ISO as you normally would. Feel free to customize what you wish to be used as part of a base template for your given OS at this point, but I would make sure to install OpenSSH Server.
  5. Detach the ISO from the VM and reboot.
  6. If you used DHCP, the Acropolis GUI should now show the VM's IP; otherwise use the static IP you assigned. SSH into the VM with that address. You may also launch the console, but I was having keyboard translation issues through my virtual desktop, so SSH avoids that just in case.
  7. Log in with your Ubuntu admin user and password, then prep your template. I would recommend at least updating with: sudo apt-get update
  8. Since Ansible distributes all of its commands over SSH, switch to root or to the admin user that will be responsible for running Ansible commands on each VM, then run ssh-keygen. You'll also want to copy the public key into the authorized keys file: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  9. Shut down the VM.
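The key setup in step 8 can be sketched as a short shell session. The key path and empty passphrase shown here are assumptions matching the OpenSSH defaults:

```shell
# Sketch of step 8: generate a passwordless keypair for the Ansible user
# and authorize it locally so every clone of this template trusts it.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Skip generation if a key already exists, to avoid an overwrite prompt
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

Because every VM is cloned from this template, each clone already holds both halves of the keypair, so the Ansible master can later SSH to any clone without a password.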
Now that we have a base template ready, we can use it to build our master Ansible server and any clients onto which we wish to overlay configurations.

  1. In the Acropolis GUI, clone the shutdown VM to a new VM, for example ‘ansiblemaster’ and power on.
  2. You'll want to make sure the IP is unique and change the hostname first:
    1. sudo hostname ansiblemaster
    2. sudo vi /etc/hosts
    3. sudo vi /etc/hostname
    4. sudo reboot
  3. If you are using an older version of Ubuntu (< 14.04), it may be necessary to run: sudo apt-get install software-properties-common
  4. Add a new repo for Ansible to be installed from: sudo apt-add-repository ppa:ansible/ansible
  5. Update your apt repos again: sudo apt-get update
  6. Install the latest version of Ansible: sudo apt-get install ansible -y
  7. Most of the playbooks I work with I find on github.com so I would recommend installing git at this point: sudo apt-get install git -y
  8. Update the starter Ansible hosts file. The file contains some example groups and hosts/IPs; once you have a grasp of the layout, comment out or delete the examples and add your own corresponding hosts: sudo vi /etc/ansible/hosts
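As a sketch of the layout step 8 refers to, here is a hypothetical inventory written to a local file. The group names and IPs are made up; substitute your own clones, and note that ansible_ssh_host is the variable name used by the 1.x-era Ansible current at the time:

```shell
# Hypothetical /etc/ansible/hosts layout: two groups, plain IPs, and one
# named host with an explicit address.
cat > hosts.example <<'EOF'
[webservers]
192.168.1.101
192.168.1.102

[dbservers]
ubuntutest ansible_ssh_host=192.168.1.103
EOF
```

Copy the same layout into /etc/ansible/hosts; a bracketed group name can then be used as the target pattern in ansible commands.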
At this point you'll want some test VMs to show off Ansible, so feel free to clone one or more VMs from your starting template and make sure they have unique IPs and hostnames. For Ansible to communicate successfully, you'll need id_rsa, id_rsa.pub, known_hosts, and authorized_keys files on each VM. By cloning the base template, you should have three of these four pieces, but without known_hosts, you will receive an interactive prompt the first time you initiate communication with a new client. To disable that host key confirmation, you can edit /etc/ansible/ansible.cfg or ~/.ansible.cfg and add the following lines:

[defaults]
host_key_checking = False
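One way to add that setting without opening an editor, using the per-user path (one of the two locations mentioned above):

```shell
# Append the [defaults] section to the per-user Ansible config
printf '[defaults]\nhost_key_checking = False\n' >> ~/.ansible.cfg
```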

Here is a simple command to verify that Ansible is working as intended:
ansible -m shell -a 'free -m' some_hostname

Ansible keeps track of the set of commands you want to run, or "plays", in a playbook. Roles may be used to encapsulate and help organize multiple playbooks that could be applied collectively. Example roles and playbooks are located on github here and on Ansible's galaxy site here. For a quick nginx example, follow the directions here, and to customize the installation for Ubuntu, check here. A successful run ends with a play recap summarizing the ok/changed/failed task counts for each host.
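As a minimal sketch of what such a playbook looks like (this is not the linked example, and the "webservers" group name is an assumption about your inventory):

```shell
# Write a one-task playbook that installs nginx on the "webservers" group.
cat > nginx.yml <<'EOF'
---
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt: name=nginx state=present update_cache=yes
EOF
```

Run it with ansible-playbook nginx.yml; the play recap at the end shows the result for each host in the group.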

Also worth noting is that Ansible playbooks can be applied to other parts of your environment, for example networking devices such as Cumulus Networks switches, with more details from Github here.

Additional links:
http://docs.ansible.com/ansible/intro_getting_started.html
