Sunday, September 27, 2015

Ansible Setup for Nutanix on Acropolis

Getting started with a new provisioning platform like Acropolis on Nutanix can seem unfamiliar if you’ve spent many years with the way that vSphere creates virtual machines. However, you can bootstrap your environment with just a few VMs that allow you to clone and then set app configurations quickly.

You may already be familiar with the most popular configuration management platforms: in no particular order, Chef, Puppet, Ansible, and SaltStack. I'd like to begin with Ansible as a relatively straightforward example. If you'd like a more thorough Ansible introduction, check here. After imaging a Nutanix cluster with the Acropolis Hypervisor (or AHV), you should create a container and storage pool so that you can immediately begin laying the baseline for your environment. First, let's create a default template we can use; in this case, I am using Ubuntu.
  1. Upload the Ubuntu server ISO via sftp to port 2222 with user "admin" and your admin's password that you set for the Nutanix cluster. For example, on Windows I use WinSCP, and on Mac I use Cyberduck but feel free to use whatever client you are most familiar with. Optionally, once logged in you can create subdirectories to help organize but these are not required in the container.
  2. Create a new VM and feel free to customize the following: your local root disk size and vlan network attachment to the VM.
  3. Update the CDROM to “Clone from ADSF file” and use the path of the uploaded ISO. This should auto-complete if you are using the correct path, for example "/container_name/ubuntu14.04.3-server-amd64.iso". If you have NOS version 4.1, this will look almost the same except for a cosmetic difference of "Clone from NDFS file".
  4. Power on the VM and install Ubuntu from ISO as you normally would. Feel free to customize what you wish to be used as part of a base template for your given OS at this point, but I would make sure to install OpenSSH Server.
  5. Detach the ISO from the VM and reboot.
  6. If you used DHCP, you should see from the Acropolis GUI that the VM now has an IP; otherwise, use the static IP you assigned to SSH into the VM. You may also launch the console, but through my virtual desktop I was having keyboard translation issues, so SSH avoids that just in case.
  7. Log in with your given Ubuntu admin user and password, then prep your template. I would recommend at least updating the package lists with: sudo apt-get update
  8. Since Ansible distributes all of its commands over SSH, either switch to root or to the admin user that will be responsible for running all Ansible commands on each VM, then run ssh-keygen. You'll also want to copy the public key into the authorized keys file: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  9. Shut down the VM.
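Steps 7 and 8 can be sketched as a short shell session, run as the admin user that will own Ansible. An empty key passphrase is assumed here for a lab setup; use ssh-agent or a passphrase in production:

```shell
# Generate a keypair for the Ansible admin user with an empty
# passphrase (skipped if one already exists)
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -b 2048 -N "" -f "$HOME/.ssh/id_rsa"

# Authorize the key for this same user, so clones of the template
# can SSH to one another without a password prompt
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```

Because this is baked into the template, every clone inherits the same keypair, which is what makes the later Ansible communication work out of the box.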
Now that we have a base template ready, we can use it to build our master Ansible server and any clients that we wish to overlay configurations.

  1. In the Acropolis GUI, clone the shutdown VM to a new VM, for example ‘ansiblemaster’ and power on.
  2. You'll want to make sure the IP is unique and change the hostname first:
    1. sudo hostname ansiblemaster
    2. sudo vi /etc/hosts
    3. sudo vi /etc/hostname
    4. sudo reboot
  3. If you want to use an older version of Ubuntu (< 14.04), it may be necessary to run: sudo apt-get install software-properties-common
  4. Add a new repo for Ansible to be installed from: sudo apt-add-repository ppa:ansible/ansible
  5. Update your apt repos again: sudo apt-get update
  6. Install the latest version of Ansible: sudo apt-get install ansible -y
  7. Most of the playbooks I work with I find on github.com, so I would recommend installing git at this point: sudo apt-get install git -y
  8. Update the starter Ansible hosts file: sudo vi /etc/ansible/hosts. It ships with example groups and hosts/IPs that should be commented out or deleted; add your own corresponding hosts once you have a grasp of the layout.
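The hosts file in step 8 uses a simple INI layout of groups and hosts. Here is a minimal sketch; the group name, hostnames, and IPs below are made up for illustration:

```shell
# Write an example inventory; on the real master this content
# would go in /etc/ansible/hosts (a local file is used here)
cat > ./example_hosts <<'EOF'
# Ungrouped hosts can be listed at the top
ansiblemaster

# Group test clients under a name you can target in plays
[ubuntu-clients]
client1.example.com
192.168.1.101
192.168.1.102
EOF

# Ad-hoc commands can point at an alternate inventory with -i, e.g.:
#   ansible ubuntu-clients -i ./example_hosts -m ping
```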
At this point you'll want some test VMs to show off Ansible, so feel free to clone one or more VMs from your starting template and make sure they have unique IPs and hostnames. For Ansible to communicate successfully, you'll need id_rsa, id_rsa.pub, known_hosts, and authorized_keys files on each VM. By cloning the base template, you should have 3 of these 4 pieces, but without known_hosts you will receive an interactive prompt the first time you initiate communication with a new client. To disable this host key check, you can edit /etc/ansible/ansible.cfg or ~/.ansible.cfg and add the following lines:

[defaults]
host_key_checking = False

Here is a simple command to verify that Ansible is working as intended:
ansible -m shell -a 'free -m' some_hostname

Ansible keeps track of the set of commands you want to run, or "plays", in a playbook. Roles may be used to encapsulate and help organize multiple playbooks that can be applied collectively. Example roles and playbooks are located on GitHub here and on Ansible's Galaxy site here. For a quick nginx example, follow the directions here, and to customize the installation for Ubuntu, check here. A successful run will look similar to the following:
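To give a flavor of what a playbook looks like, here is a minimal hypothetical play that installs nginx on an "ubuntu-clients" inventory group. The group name and file location are my own placeholders, not taken from the linked examples:

```shell
# Write a one-play playbook; on the master this could live anywhere,
# e.g. ~/playbooks/nginx.yml
cat > ./nginx.yml <<'EOF'
---
- hosts: ubuntu-clients
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
EOF

# Then run it against your inventory:
#   ansible-playbook nginx.yml
```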

Also worth noting is that Ansible playbooks can apply when configuring other parts of your environment as well, for example networking devices such as Cumulus Networks switches, with more details on GitHub here.

Additional links:
http://docs.ansible.com/ansible/intro_getting_started.html

Thursday, September 10, 2015

Cloud Foundry Setup on Nutanix

After 3 months at Nutanix, I’ve already seen customers realizing the value in consolidating their hardware stack. They want to focus on their platform of choice and spend less time chasing the exponential problem of aligning the perfect hardware and software matrices. Now, what that platform of choice (or Platform-as-a-Service) actually is can vary widely.

Consistent with other technical doctrines, there is still a lot of separation in how customers regard and evaluate what actually constitutes a PaaS. I find customers fall into two categories, matching the “Structured and Unstructured PaaS” dichotomy I first saw published by Brian Gracely.

Choosing either type, a structured or turnkey PaaS vs. a build-and-customize PaaS, indicates a desire to spend more time on development than on ops. I spoke about operationalizing containers at the Hadoop Summit in San Jose this summer with Mesos and Myriad:
https://www.youtube.com/watch?v=FAxmal6ozLY
and at VMworld last month in #CNA4725, where I covered Mesos with Marathon (and Docker) as another potential platform. Replays should be available from vmworld.com but will require a login. In future articles I will walk through deploying Mesos, Kubernetes, and other potential developer platforms on a given Nutanix cluster.

The quickest way to get started with deploying a PaaS in your Nutanix environment is to download and set up Pivotal's Cloud Foundry, which I will walk through below. PCF is arguably the best example of a turnkey PaaS today, as it comes with the Ops Manager tool for minimal, straightforward deployment and configuration of the Pivotal Elastic Runtime (the primary PaaS environment), as well as supplemental services for SQL and NoSQL, all available from: https://pivotal.io/platform.

Just as the storage and management layers are made ubiquitous and IO-accelerated across the cluster by Nutanix for simplicity and scalability, the communication, scheduling, load-balancing, and logging of app services is handled by the PaaS management layer.

For the quickest out-of-the-box experience today, setup of Pivotal Cloud Foundry is really easy:
·      Make sure your Nutanix cluster is imaged with vSphere 5.5 or 6.0.
·      Upload the vCenter Server Appliance (directions for 5.5 and 6.0) to one of the nodes and initialize it, or if you already have vCenter up and running, you can go straight to the next step.
·      Download the Pivotal Ops Manager ova and Elastic Runtime from http://network.pivotal.io. You may also download additional service components for later like Datastax Cassandra or MySQL. (Pivotal account required, but does not require purchase to evaluate.)

·      Upload the Pivotal Ops Manager ova to vCenter and give it a name, cluster to be deployed on, and network address settings.
·      Log into the Ops Manager IP in a web browser and give the admin user a name and password.
·      Run the Ops Manager configuration. It will ask for vSphere admin credentials and the datacenter and cluster names. You’ll also need a VM network port group name and a range of IP addresses to include or exclude for the individual VMs. More detailed requirements here: http://docs.pivotal.io/pivotalcf/customizing/requirements.html

·      You will need at least 1 wildcard domain (2 recommended) to assign to the environment for an apps and system domain so that these resolve to the HAproxy IP address(es). The method will depend on your DNS server of choice, but basically any *.apps.yourdomain.com or *.system.yourdomain.com subdomain should resolve to the load-balancer of choice (HAproxy by default) where it can then be resolved internally by Cloud Foundry. If this is not pre-created before trying to configure the Elastic Runtime piece, you will get an error and the installation will likely fail around the smoke tests run for validation.
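As a concrete illustration, in a BIND-style zone file the wildcard records might look like the fragment below; yourdomain.com and the 10.0.0.50 HAproxy address are placeholders, and the syntax for your DNS server of choice may differ:

```
; Both wildcards resolve to the HAproxy / load-balancer VIP
*.apps.yourdomain.com.    IN A 10.0.0.50
*.system.yourdomain.com.  IN A 10.0.0.50
```

You can verify resolution before installing the Elastic Runtime with something like: dig +short anything.apps.yourdomain.com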
·      Upload and configure the Pivotal Elastic Runtime. At a minimum, the Cloud Controller and Security line items will need additional configuration. You may configure the HAproxy or custom load-balancer piece for your environment if you prefer Nginx or something else.

·      After the installation and validation is complete, you should have all you need to start playing around with Pivotal Cloud Foundry on your Nutanix cluster. You may also upload additional services for your apps like Cassandra or MySQL:


·      In order to login interactively, you can copy the Admin credentials from within the Ops Manager UI, click on the Elastic Runtime component and the Credentials tab, then scroll to the UAA heading and Admin row for its current password.


·      From a command prompt, you can use the cf login command and push your first app. A helpful blog on using these commands is here:

From there, in Cloud Foundry you can create more Orgs and Spaces, set quotas, and focus on deploying apps that scale on your Nutanix infrastructure. Another interesting project to play with in your deployment is Chaos Lemur, the Cloud Foundry version of Chaos Monkey, which simulates targeted failures to determine the resiliency and availability of the platform in your environment.
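Pulling the last two bullets together, a typical first session from the command line looks like the sketch below. The API endpoint, org, and space names are placeholders, and the admin password comes from the UAA row in Ops Manager:

```shell
# Save the first-login workflow as a script you can adapt;
# substitute your own domain, org, space, and app names
cat > ./first-push.sh <<'EOF'
#!/bin/sh
# Target the API endpoint on your system domain and log in
cf login -a https://api.system.yourdomain.com -u admin --skip-ssl-validation

# Create and target an org and space for your first app
cf create-org demo-org
cf create-space demo-space -o demo-org
cf target -o demo-org -s demo-space

# Push an app from the current directory
cf push my-first-app
EOF
chmod +x ./first-push.sh
```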

In the next part of this series, I will walk through deploying Cloud Foundry on the Nutanix Acropolis environment.


Tuesday, September 8, 2015

An Introduction to Next-Gen Apps on Nutanix


I spent the first decade of my career doing managed and professional IT services around SAN and NAS for EMC, and I remember rigorously checking the EMC compatibility matrix to ensure an environment was ready to go before it was even built in the datacenter. But, did that actually guarantee no issues?

Of course not. There were still plenty of support calls filed—from lack of consistency in the environment, to firmware issues, to independent hardware failures that still incurred faults in other parts of the solution. Part of a project sign-off involved getting a HEAT report, a scripted check against the EMC support matrix, that didn’t show any mismatches or configuration issues. Then came E-Lab Advisor and many other iterations trying to solve the interoperability problem, but they were fundamentally unable to outpace the exponential growth of an HCL for a best-of-breed approach. Opposite this perspective, you have the undeniable acceleration of public cloud providers where you only pay for a virtual form factor. The underlying hardware is (and should be) irrelevant to what you, the customer, concentrate on—the software you want to build.

Customers have an abundance of software stacks to deliver, from traditional web/app/database platforms to more loosely coupled platform components designed for rapid iteration. The expectation of quick and constant evolution in any given constituent component at any given time is, in my opinion, the defining characteristic of the next generation of app environments, or “cloud-native apps”. For a far more rigorous rubric and definition, see http://12factor.net/. I’ve seen this firsthand in Hadoop and HPC environments as customers evaluate virtualization and try to decide whether to go with a siloed bare-metal approach, internal virtualization, or a service provider.

Take the evolution of Hadoop with regard to Big Data, for example: traditionally, product management, marketing, or R&D business units would provide input for a data warehouse with arbitrary expectations set a year or two in the future, and the DBAs would design for that without the stepping-model insight that only comes with experience. Compare that to HPC programmers, who may be building and tuning code for hardware that hasn’t even hit a datacenter floor yet, trying to optimize compilers for potentially theoretical working sets and hardware-accelerated solutions. In HPC and Hadoop, it has been very exciting to witness a shift in perspective. Customers are able to learn and scale their approach constantly. This gives them more options to experiment and grow along the way, because their business goals and technical roadblocks are always evolving as well.

Nutanix aims to be more than Yet-Another-Hyperconverged-Vendor by giving these environment owners more time to focus on their specialty and less on infrastructure, through:
·      A distributed management layer across the cluster for resiliency and durability of meta-data. This also becomes the distributed endpoint for API calls and stacking of higher-level services. A quickly changing environment means a lot of API interaction, so this by necessity is fault-tolerant and without bottlenecks.
·      A distributed logical storage space for performance, availability, and durability. At the same time the storage pool is a singular abstraction for transient and persistent data across any VMs, containers, or applications (or app-building platform).


While simplifying the management and storage layers, customers are allowed to choose:
·      Their virtualization hypervisor and the tooling available for it.
·      Their hardware form factors from Nutanix, Supermicro, and Dell.

Holistically, the Nutanix platform is designed to support all of these ideals, minimizing bespoke architectural designs while providing straightforward manageability and scalability. In the next post in this series, I review deploying Pivotal Cloud Foundry on Nutanix, here:
http://virtual-hiking.blogspot.com/2015/09/cloud-foundry-setup-on-nutanix.html