Wednesday, October 21, 2015

Running Tutum across Nutanix Acropolis and AWS for Hybrid Cloud PaaS

Docker acquired Tutum today, and it’s something I’ve been working with as I look at different PaaS models around the container ecosystem. I’ve linked my Tutum account (which is actually my Docker Hub account) to a Tutum-auth user inside my AWS account; you can set this under "Account Settings" after you register:

Did you know it is also possible to use Tutum on Nutanix to quickly enable a hybrid cloud deployment? Once you log in to your Tutum account, go to the Nodes tab and click on "Bring your own node" to get the deployment string used in the following steps.

  1. Let’s clone a few nodes (in this case three to start) from our base Linux template; directions are on my blog here. If the clones already have a salt-minion deployed, we can use the salt-master to push the install command to all of the nodes at once: salt 'tutumnode*' cmd.run 'curl -Ls https://get.tutum.co/ | sudo -H sh -s 9a...'
  2. Or we could of course log in to each node interactively and run the node-prep command: curl -Ls https://get.tutum.co/ | sudo -H sh -s 9a...
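Either way, a quick sanity check from the salt-master that the agent landed and Docker is answering doesn't hurt; this is just a sketch, and it assumes the agent process shows up as tutum-agent on the nodes:

salt 'tutumnode*' cmd.run 'ps aux | grep [t]utum-agent'
salt 'tutumnode*' cmd.run 'docker version'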



You should be able to see all of your nodes grab the Tutum agent and be recognized (as long as they are internet accessible) within the “Nodes” tab of your Tutum dashboard.
Now if you’re new to Tutum, we can deploy our first “stack”, or collection of Dockerized services, to our nodes. The example given here is a Redis, web and load-balancer stack: https://tutum.freshdesk.com/support/solutions/articles/5000583471
lb:
  image: tutum/haproxy
  links:
    - "web:web"
  ports:
    - "80:80"
  roles:
    - global
web:
  image: tutum/quickstart-python
  links:
    - "redis:redis"
  target_num_containers: 4
redis:
  image: tutum/redis
  environment:
    - REDIS_PASS=password
The collection of services will start on your Nutanix nodes, and you can seamlessly develop collections of services, or stacks, that deploy to your on-prem Nutanix cluster and, simultaneously and identically, to AWS or your public cloud vendor. By default the deployment strategy is emptiest_node, but you can also deploy to every node, and perhaps in the future we will see availability-zone-specific deployment strategies (tell them!).
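If you want a particular service pinned to a different strategy than the default, my understanding is that the stack file accepts a per-service deployment_strategy key; a hedged sketch for the web service above:

web:
  image: tutum/quickstart-python
  deployment_strategy: HIGH_AVAILABILITY
  links:
    - "redis:redis"
  target_num_containers: 4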




With a Nutanix cluster and Acropolis/AHV you can quickly spin up nodes for Tutum consumption and build a hybrid-cloud PaaS for development in just a few minutes. While running on Nutanix, these nodes benefit from the same shared and accelerated pool of compute and storage, as well as the infrastructure analytics and durable API accessibility available to any other platform project such as Hadoop, MongoDB, or an ELK stack (covered in my blog here and by my colleague Ray here). More on Nutanix benefits for next-gen apps here. As always, if you have questions or recommendations on more integration, feel free to reach out on Twitter @vmwnelson.

Saltstack Setup for Nutanix on Acropolis

In the spirit of my recent posts around config management and orchestration tools, I’ve also seen several customers using Saltstack and want to show how straightforward it is to set up and use with Nutanix and the Acropolis Hypervisor (AHV). Saltstack is a powerful tool for deploying 'states', idempotent (repeatably identical) sets of expected configuration criteria, to your VMs. Acropolis also uses Saltstack internally for our own security and config management. You can find help for creating a master image in my post here: http://virtual-hiking.blogspot.com/2015/10/acropolis-image-and-cloning-primer-for.html. With your baseline gold image in hand, let’s first install our salt-master server:
  1. Create a clone from your gold image and set the master’s hostname and a static IP address. I’ll be using Ubuntu 14.04, but for other OS images, please use the relevant package manager.
  2. Make sure to register the salt master in DNS so that all of the worker nodes will be able to resolve it correctly. By default, minions expect to find the master under the name ‘salt’, but this can be customized.
  3. Add the salt repo: add-apt-repository ppa:saltstack/salt
  4. Install the salt-master package: apt-get install salt-master -y
  5. Ensure you have the current hostname and salt-master key fingerprint ready to insert into your /etc/salt/minion file by running this command on the master and copying the output: salt-key -F master
Now we can prep a new worker template with the salt-minion pre-installed:
  1. Create a clone from your gold image; I’ll be using Ubuntu again, but for other OS images, please use the relevant package manager.
  2. Add the salt repo: add-apt-repository ppa:saltstack/salt
  3. Install the salt-minion package: apt-get install salt-minion -y
  4. Depending on whether you customized the salt-master hostname, either uncomment or replace the master hostname/IP entry in the /etc/salt/minion config file (see the example after this list).
  5. Add the salt-master key fingerprint to the /etc/salt/minion config file (also shown in the example below).
  6. With the salt-minion pre-installed, make sure to remove /etc/salt/minion_id and any other minion identification files: rm /etc/salt/minion_id; rm /etc/salt/minion.*
  7. Shutdown the salt-minion template.
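For reference, the two /etc/salt/minion entries from steps 4 and 5 end up looking something like this (the hostname and fingerprint below are placeholders for your own values):

master: saltmaster.mydomain.local
master_finger: 'aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99'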
Now after cloning (recommendations here), the VMs will power on, grab their hostname from DNS/DHCP, and create a new minion-id that will register with the salt master. You can accept the new salt-minions en masse from the salt-master with salt-key -A -y, and then they will be ready to apply formulas. Other options for bootstrapping minions include preseeding the keys on the master: https://docs.saltstack.com/en/latest/topics/tutorials/preseed_key.html
You also have the option of disabling the key-acceptance step, with the necessary "only do this if you know what you're doing" caveats, by editing /etc/salt/master:
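A minimal sketch of that change, assuming auto-acceptance of minion keys is what you're after (appropriate for lab networks only); restart the salt-master service afterwards:

# /etc/salt/master
auto_accept: True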
Finally, you also have the option of just using SSH via the salt-ssh package for an agentless (Ansible-like?) deployment: https://docs.saltstack.com/en/latest/topics/ssh/. For this to work, you will need to enable passwordless SSH; I described preparing for that here.

For next steps, you could use Salt to deploy some sample workloads like vim or nginx:
https://docs.saltstack.com/en/latest/topics/tutorials/walkthrough.html#the-first-sls-formula

...
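To give a flavor of that walkthrough, a first formula is about as small as a state file gets; as a sketch along those lines, drop an nginx.sls like this into /srv/salt on the master:

# /srv/salt/nginx.sls
nginx:
  pkg.installed

Then apply it to whichever minions you like, for example: salt '*' state.sls nginx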


And you can find more example formulas on GitHub, in the saltstack-formulas organization, to work with and modify to suit your intended environment:

If you want a quick ELK stack deployment on a single host:

  1. Clone the example on the salt-master server: git clone https://github.com/saltstack-formulas/elasticsearch-logstash-kibana-formula.git
  2. Move the state files to the salt state directory: mv elasticsearch-logstash-kibana-formula/kibana /srv/salt/
  3. Apply to one of your guest VMs: salt 'vm_name' state.sls kibana
     ...
     ...
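If you would rather assign the formula through the highstate instead of ad-hoc state.sls calls, a top file along these lines does it (the target glob is a placeholder):

# /srv/salt/top.sls
base:
  'vm_name*':
    - kibana

Then run: salt '*' state.highstate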


Friday, October 16, 2015

Using Packer to Build Images for the Acropolis Hypervisor

Packer is a tool from HashiCorp, the company behind Vagrant, Terraform, and Vault among others, that I had a lot of experience with on the vSphere and AWS side but hardly any on the KVM/OpenStack side until joining Nutanix. Like most tooling around the “Infrastructure-as-Code” mindset, I internally categorize these as “Infrastructure-as-Text-Files,” because I can define the build of a VM in a text file, JSON to be specific, instead of interactively installing an OS. In general, I really appreciate this technique: I can use a Vagrantfile to define a self-contained lab, or a Dockerfile to describe a container, for example.

Packer can actually generate images across multiple platforms simultaneously, and I’ve used it to build images for both my vSphere and AWS projects. One would think that, with OVFs and OVAs, we would at least have some measure of easy portability across environments, but alas, here we are. In AWS you have your AMIs, and in vSphere you have an entirely separate vmdk/OVA/OVF, and of course there are good reasons for both since each has its own constituent components.

Since the Acropolis hypervisor is built on a fork of the open source KVM hypervisor, we can use QEMU and Packer to facilitate building images on it, but there is a small learning curve if you are new to Packer or coming from a vSphere environment.

Packer runs on your workstation of choice with VirtualBox, Fusion, Workstation, or Parallels, but for this workflow you will need a Linux VM with QEMU installed on your OS and the VT instruction set exposed to the VM. Here is an example with VirtualBox:

And with VMware Fusion:

I would recommend installing CentOS 6.5+ or Ubuntu 14.04+ as a guest VM to create your Packer workstation. Make sure to select the "Virtualization Host" option during setup or install the qemu-kvm packages after you finish the base install. Some examples of what this may look like:
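Once the guest is up, a quick way to confirm the VT bits made it through and that QEMU/KVM is present (the package names below are the usual Ubuntu/CentOS ones; adjust for your distro):

egrep -c '(vmx|svm)' /proc/cpuinfo     # non-zero means VT-x/AMD-V is exposed to the VM
sudo apt-get install -y qemu-kvm       # Ubuntu 14.04
sudo yum install -y qemu-kvm           # CentOS 6.x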


Next, download and install Packer; directions are here. Once you have it installed, you need a JSON file specific to the VM you want to build, which you then run with:
packer build example_os.json

For a starter example of a CentOS VM, you can go to my Github repo here. Clone with: git clone https://github.com/nelsonad77/acropolis-packer-examples.git
In order to see some useful logs, especially if you're building a new or unfamiliar image, I usually run this:
export PACKER_LOG="yes" && export PACKER_LOG_PATH="packer.log" && packer build centos.json

This will do a base installation of CentOS 6.7 and you can customize either the json or the kickstart file as you wish. Some notable specs in the json file are:

"type": "qemu" –Of course this will have to be set as we are building using QEMU

"format": "raw" –Options are RAW or QCOW2. Since the Nutanix file system already handles copy-on-write quite well and has other acceleration already built-in, we just use raw.

"headless": true –This is because I am just running an Ubuntu server from command-line. You have to do this if you're not using a desktop gui to show windows, but since this is deliberately an automated install, the gui is being driven by Packer and not by you anyway.

"accelerator": "kvm" –You can use this if you are building with QEMU installed on your linux virtualization host or you could also use “none”.

"iso_url": "http://mirrors.rit.edu/centos/6.7/isos/x86_64/CentOS-6.7-x86_64-minimal.iso",
"iso_checksum": "9381a24b8bee2fed0c26896141a64b69" –Feel free to update this to your ISO of choice and make sure that the checksum matches or of course the build will not commence successfully.

"disk_size": 5000 –Default initial disk size, feel free to customize.

"output_directory": "output_image"
"vm_name": "centos67template.img" –Assuming the build completes successfully, this is the .img file and path to look in that you can import with the Prism Image Configuration service.

"provisioners":
  [
    {
      "type": "shell",
      "inline": [
        "sleep 3",
        "echo \"NOZEROCONF=yes\" >> /etc/sysconfig/network",
        "adduser nutanix-admin",
        "echo 'nutanix-admin:nutanix' |chpasswd",
        "mkdir /home/nutanix-admin/.ssh",
        "chown nutanix-admin:nutanix-admin /home/nutanix-admin/.ssh",
        "chmod 700 /home/nutanix-admin/.ssh",
        "echo \"nutanix-admin ALL=(ALL) ALL\" >> /etc/sudoers"
      ]
    },
    {
      "type": "file",
      "source": "centos.json",
      "destination": "/root/centos.json"
    },
    {
      "type": "file",
      "source": "httpdir/centos6-ks.cfg",
      "destination": "/root/centos6-ks.cfg"
    },
Under provisioners you can see some examples of what you can do to your image. You have the ability to add users, copy in SSH keys, install and update packages, and otherwise prep the image baseline exactly as you want. However, I would caution against adding too much to the default image, since app-specific packages can be added with Chef, Ansible, or your configuration management tool of choice after you clone your images.

Once the image build completes, you can login to your Nutanix cluster and upload your .img file with the Image Configuration service as a DISK instead of an ISO. From there, you can create a new template VM, choose to clone the disk from the Image service and power it on to verify that you have a new working template that you didn't even have to interactively install.

From there you can prep the image for mass cloning as I described here, if you didn't already perform these steps with the Packer inline provisioning, and then immediately move on to deploying and scaling out your apps.

There are not as many KVM examples in the wild as there are for AWS or vSphere, which is one of the main reasons I wrote this post, but at least there are other builds that can be adapted like the Bento boxes from Chef here: https://github.com/chef/bento
If you find this useful, I would encourage you to share out your own public packer build files if possible.

Additional links:

https://www.packer.io/docs/builders/qemu.html

Tuesday, October 13, 2015

Acropolis Image and Cloning Primer for Automation

Part of scaling out any environment is having a good template that can be spun up and incorporated easily into your imaging and configuration management prep before it’s added to your platform of choice. For Nutanix, I wanted to get these base templates ready and share what I needed to do for Ubuntu/Debian, CentOS/RHEL, and CoreOS:

Ubuntu 14.04
1. Run any updates and add any packages you want to be included in your baseline template with sudo apt-get. Keep in mind that for most packages and configurations, you will want as minimal an image as possible and add those components with Chef, Ansible, etc. Shared keys may be copied to an admin user's ~/.ssh directory to prepare for passwordless SSH.
2. Precreate DNS entries for VMs so that when clones boot they pull their hostname from DHCP.
3. Add a ‘hostname’ script under /etc/dhcp/dhclient-exit-hooks.d:

#!/bin/sh
# Filename:     /etc/dhcp/dhclient-exit-hooks.d/hostname
# Purpose:      Used by dhclient-script to set the hostname of the system
#               to match the DNS information for the host as provided by
#               DHCP.
#


# Only update the hostname for the primary interface (eth0/wlan0);
# skip assignments on any other (e.g. virtual) interfaces
if [ "$interface" != "eth0" ] && [ "$interface" != "wlan0" ]
then
    return
fi


if [ "$reason" != BOUND ] && [ "$reason" != RENEW ] \
   && [ "$reason" != REBIND ] && [ "$reason" != REBOOT ]
then
        return
fi

echo dhclient-exit-hooks.d/hostname: Dynamic IP address = $new_ip_address
hostname=$(host $new_ip_address | cut -d ' ' -f 5 | sed -r 's/((.*)[^\.])\.?/\1/g' )
echo $hostname > /etc/hostname
hostname $hostname

echo dhclient-exit-hooks.d/hostname: Dynamic Hostname = $hostname

4. Make the 'hostname' script readable (dhclient sources it, so read permission is enough): chmod a+r hostname
5. Poweroff the VM that will become the template.

CentOS 6.5
1. Run any updates and add any packages you want to be included in your baseline template with yum or the appropriate package manager. Keep in mind that for most packages and configurations, you will want as minimal an image as possible and add those components with Chef, Ansible, etc. Shared keys may be copied to an admin user's ~/.ssh directory to prepare for passwordless SSH.
2. Precreate DNS entries for VMs so that when clones boot they pull their hostname from DHCP.
3. Remove the HWADDR line from the /etc/sysconfig/network-scripts/ifcfg-eth0 file (see the example after this list).
4. Remove the mapped network device so that when a new clone boots, it grabs a new MAC and is able to re-use eth0: rm /etc/udev/rules.d/70-persistent-net.rules
5. You can leave the hostname at localhost.localdomain since the new hostname will be mapped on boot up from the DNS record lookup.
6. Poweroff the VM that will become the template.
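For reference, a cleaned-up, DHCP-based ifcfg-eth0 for a clone-ready template can be as small as this (a sketch; keep whatever other options your environment requires):

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp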

CoreOS
CoreOS is a special case, as it is deliberately “just enough OS” to run containers. To get into the really interesting work of CoreOS with Kubernetes and Tectonic, you’ll need one or more cluster masters; more details on cluster architectures are available from CoreOS: https://coreos.com/os/docs/latest/cluster-architectures.html
For cloning mass amounts of nodes, you’ll want to create your own cluster member/worker/minion template that feeds into that master with a cloud-config file on a config-drive. Configuring an etcd master is covered here until I customize my own procedure.

1. Download the latest ISO from the stable, beta, or alpha release channel. You can also set the channel when you install to disk.
2. Make sure you have one or more ssh key(s) generated on the host/desktop/laptop you would like to use to connect to any of these worker nodes for individual configuration: ssh-keygen -t rsa -b 2048. Also, since you'll basically have a shared ssh key across the hosts at this point, Ansible can take advantage of this easily; more info from CoreOS here and from me here.
3. Create a user_data text file according to the directions here.
4. Populate the user_data file with your favorite text editor. You’ll need a few changes for an etcd master node, but here is an example cloud-config file for a worker, adapted from CoreOS:

#cloud-config
ssh_authorized_keys:
  - ssh-rsa <copy the entirety of your id_rsa.pub file here>
coreos:
  etcd2:
    proxy: on
    initial-cluster: etcdserver=http://<etcd-master-ip-here>:2380
    listen-client-urls: http://localhost:2379
  fleet:
    etcd_servers: http://localhost:2379
    metadata: "role=etcd"
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start

5. Convert that text file into an ISO according to the directions copied from here. Depending on your OS, you may use mkisofs instead of hdiutil (a mkisofs variant is shown after this list):
mkdir -p /tmp/new-drive/openstack/latest
cp user_data /tmp/new-drive/openstack/latest
hdiutil makehybrid -iso -joliet -default-volume-name config-2 -o configdrive.iso /tmp/new-drive
rm -r /tmp/new-drive
6. Upload both the config-drive ISO and your chosen CoreOS ISO to the Nutanix image service.
7. Create your base CoreOS VM, attach both ISOs, and boot the VM.
8. When your VM boots, it should auto-login as the ‘core’ user and you can run the install to disk: sudo coreos-install -d /dev/sda -C stable -c /media/configdrive/openstack/latest/user_data
9. Eject both ISOs and power off the template: sudo poweroff
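As referenced in step 5, on a Linux workstation the hdiutil call can be swapped for mkisofs (or genisoimage) along these lines, keeping the config-2 volume label that CoreOS expects:

mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive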

Mass Cloning in acli
Using either the Acropolis CLI or REST API, you can set up the quick provisioning of a massive number of clones, given the number of nodes at your disposal.
1. To clone the VMs using the Acropolis CLI, SSH into either the cluster IP or a CVM IP as the 'nutanix' user.
2. At the command line, enter ‘acli’ and feel free to customize the upper bound on the number of VMs you build on your Nutanix cluster (code modified from https://vstorage.wordpress.com/2015/06/29/bulk-creating-vms-using-nutanix-acropolis-acli/):
for n in {1..10}
do
vm.clone vm$n clone_from_vm=basetemplate_name
vm.on vm$n
done
3. If you want to script the operation from the shell instead, just preface each vm.* operation with acli, as in:
for n in {1..10}
do
acli vm.clone vm$n clone_from_vm=basetemplate_name
acli vm.on vm$n
done
4. REST API examples of cloning VMs via Acropolis are on GitHub (https://github.com/nelsonad77/acropolis-api-examples), courtesy of Manish Lohani @ Nutanix.
5. The cloned VMs should power on, claim their IP/hostname, and be ready for deploying a configuration with your favorite CM tool.

Additional Links: