Nutanix


Happy New Year from Planetchopstick

By | January 12th, 2016 | Dell Storage, DellXC, Nutanix, Storage

Ahh, a brand new year. It smells fresh …

I’m not a New Year’s resolution guy, because I always break them. I prefer nondescript dates: innocuous, inconsequential (I’m all out of big words). A good example: when I started the LCHF eating plan, I picked Sunday, Nov 2nd 2014 for no other reason than that it wasn’t a key date and it wasn’t a Monday :)

So I’m breaking my own habit: like a lot of people, I’m making a resolution to post more, and about more things. And what an easy start, a post about posting more.

Just before the Christmas break we got a bunch of the Queensland Dell specialists and solution consultants together in the boardroom, and we brought in all our toys.

  • Dell XC Nutanix
  • Dell Networking running Cumulus
  • Dell Storage SC Series SCv2020 iSCSI array
  • “Cough” Supermicro Nutanix :)
  • Dell Storage PS Series PS6210XV
  • And a bunch of other bits, including my trusty 1G TP-LINK 16 port switch, which I call THE WORKHORSE

We may run out of ports here

Cumulus switch deployment and configuration with Ansible on a Dell Networking S4048-ON open networking switch.

Lots of playing around with the XC clusters: connecting SC storage into XC nodes, pulling cables, running failure scenarios. The networking team automated the deployment and configuration of Cumulus via Ansible, which was pretty awesome; a rough sketch of the idea is below. Cam from Nutanix tested replication from ESX to Acropolis and did a restore. (I won’t steal his thunder here, as he is going to do his own post about it at http://invisibleinfra.com/.)
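I don’t have the team’s actual playbooks to share, so as a rough illustration only, here’s the same idea driven from Python with paramiko instead of Ansible: pushing Cumulus NCLU commands to the switch over SSH. The hostname, credentials, and VLAN numbers are all made up.

```python
# Minimal sketch: configure a Cumulus Linux switch by running NCLU commands
# over SSH. The plugfest team used Ansible; this just shows the commands.
import paramiko

SWITCH = "s4048-on.lab.example"  # hypothetical lab switch
COMMANDS = [
    "net add vlan 100",                          # a VLAN for the XC cluster
    "net add interface swp1-4 bridge vids 100",  # trunk it to the node ports
    "net commit",                                # NCLU applies changes atomically
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only!
client.connect(SWITCH, username="cumulus", password="CumulusLinux!")
try:
    for cmd in COMMANDS:
        _, stdout, stderr = client.exec_command(cmd)
        print(f"$ {cmd}\n{stdout.read().decode()}{stderr.read().decode()}")
finally:
    client.close()
```

The Ansible approach does the same thing more cleanly (idempotently, and across a whole fabric at once), which is why the team went that way.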

One thing we did (sort of) get working is the Cumulus integration into Prism. Very cool. Full doco on how to do that is here.

We learnt a lot, especially that we tried to do too much at once, which slowed us down a bit. We will be looking to run these once a quarter, and if we get better at it we’ll get customers involved as well. I was using the hashtag #bnedellplugfest, but I think that was a bit long, and it would have been more interesting if it wasn’t just me tweeting :)

As you can see, fodder for a few interesting blog posts. Now to follow through …

I hope everyone has a great year. I know that, as a storage guy at Dell, this year is going to be VERY interesting.

Cheers Daniel


What was announced at .Next 2015 by Nutanix and Dell

By | June 15th, 2015 | Nutanix, Storage

Disclaimer: I work for Dell. I sell the Dell XC Nutanix appliance. I have a vested interest; however, those who know me know I like to be fair and as little Kool-Aid-y as possible, unless it’s about basketball. This is my hot take on the announcements. BTW, it turns out I can’t spell “announcements” consistently. Do what you need to do with that info.

There are a plethora of blogs, articles, tweets, and LinkedIn posts about .Next and the announcements, so I’ll try not to rehash them but instead give a Dell/storage guy’s opinion. At the bottom of the post I have collated a bunch of the articles I have read over the past few days.

Key Announcements

  • XCP – Xtreme Computing Platform
    • Acropolis
    • Prism
  • Storage node
  • Native NAS appliance
  • Nutanix App Framework
  • Erasure Coding
  • In-Guest iSCSI Support
  • Sync Replication
  • Nutanix CE
  • Global Search
  • Improved Analytics and Capacity Trending

NOS 4.1.3 (released last week)

  • Erasure coding
  • Sync replication
  • In-guest iSCSI support
  • ToR visibility
  • Acropolis Image Service and HA

Dell announcements

  • GPU offerings to support graphics intensive VDI workloads
  • Support for factory-installed KVM and ESX 6.0 hypervisors
  • Storage-only node to increase cluster storage capability when additional compute and hypervisor resources are not required
  • IDSDM – Internal Dell SD Module
  • XC430 for smaller workloads (lite-compute)

A Big Week

You can see it was a busy week, the biggest part of which was the XCP “invisible infrastructure” and Acropolis announcements. Andre Leibovici (@andreleibovici) had a number of good blog posts prepared for the event, but this one is a great overview of the new features and the vision for XCP.

XCP stands for Xtreme Computing Platform, which is a mix of Acropolis and Prism. Previously it was called VCP (Virtual Computing Platform); the idea now is that the hypervisor and the Nutanix Prism components stand separately on their own feet … if software had feet.

Notice it is now called XCP and our appliance is called XC Series. Coincidence? I think … probably. But still pretty cool!

App Mobility is the ability to migrate a VM workload from one hypervisor to another. The demo onstage converted an ESX VM to a KVM VM. The VM needs to be on the NDFS filesystem (that’s redundant, I know, like ATM machine), and then Acropolis converts some key metadata to make it compatible with KVM. Very nice.

I asked on Twitter whether you could move back just as easily, and @vcdxnz001 responded quickly.

It was recommended in one of the sessions (I can’t remember which one) to clone your existing VM and convert the clone rather than converting the original. I think that is sage advice, just in case there are any hiccups in the early days.

What this all leads to is Acropolis becoming your go-to point for VM management, pushing workloads between hypervisors and between clouds. Ambitious, but I think a great idea going forward.

The improved analytics and capacity trending build on the already excellent Prism reporting tools. There are loads of examples in this video.

New Dell XC Series Announcements

Here is the official press release for the new Dell XC Series features.

Of the Dell announcements, KVM support is a biggie, especially with the launch of the Nutanix Acropolis platform. AFAIK there wasn’t an architectural issue with XC supporting KVM; it was just a matter of getting our engineering processes and support infrastructure in place to support it effectively.

Karl Friedrich, who works in our Engineering department and presented the Dell XC session at .Next, had a great point: Dell announced the XC product only 10 months ago, and since then we have had two major releases, making three releases in 10 months. That’s a heavy workload, and there is more to come.

Two new appliances were announced as well, the XC730-16G and XC430-4.

  • The XC730-16G will support NVIDIA GRID K1 and K2 GPUs as well as up to 16 drives. CPU, memory, and networking are configurable.
  • The XC430-4 is a fully featured box, just in a short-depth (24″) form factor. Ideal for ROBO and deployments that use short-depth racks.

Just to explain the naming structure: all the XC appliances are based on the Dell 13G PowerEdge rack servers, e.g. an XC630 is an R630 with some tweaks to support Nutanix, like the IDSDM, BIOS, drives, and, most importantly, the front sticker :) The components inside, like CPU, memory, and networking, are configurable just like you would expect when buying a normal server.

The number at the end of the name (the 10 in XC630-10) is the maximum number of disks the appliance will support. You don’t have to fill it, but excellent disk density is one of the advantages of the XC product. Finally, the G in XC730-16G stands for GPU.


The Dell XC630-10 (top) and XC730-24 (bottom)

So, using all that new info, you can work out that the new appliances support up to 16 drives in the XC730-16G and up to 4 drives in the XC430-4. The little decoder script below spells out the same logic.
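This is purely my illustration, not an official Dell naming parser, but the scheme above is regular enough to put in a few lines of Python:

```python
import re

def decode_xc_model(name: str) -> dict:
    """Decode a Dell XC model name per the scheme described above:
    XC<nnn> maps to a 13G PowerEdge R<nnn>, the trailing number is the
    maximum drive count, and a 'G' suffix means GPU support.
    Illustrative only."""
    m = re.fullmatch(r"XC(\d+)-(\d+)(G?)", name)
    if not m:
        raise ValueError(f"unrecognised model name: {name}")
    server, drives, gpu = m.groups()
    return {
        "base_server": f"PowerEdge R{server}",
        "max_drives": int(drives),
        "gpu_capable": gpu == "G",
    }

for model in ("XC630-10", "XC730-12", "XC730-24", "XC730-16G", "XC430-4"):
    print(model, decode_xc_model(model))
```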

The IDSDM (Internal Dell SD Module) is a 16GB SD card residing on an internal motherboard riser card. It’s loaded during Dell factory software installation with all the content necessary to reset the XC system back to the Dell factory image. It’s available to Dell Support for initial deployment or remedial support when a factory reset is required, and it can restore the system to the factory state in 10-20 minutes.

One of the Dell XC advantages is that we can ship the appliance with ESXi already installed. Nutanix is fast to deploy, but with pre-installed ESX and rapid rails, XC is super fast :) Here is a video from the Dell Sydney Solutions Centre showing a 3-node XC cluster being racked and configured in under 30 minutes. You have to love the PowerEdge tool-less rapid rails.

If you want to see what all the Dell XC series appliances look like, along with their specs, you can view them via the Dell Virtual Rack. The new appliances aren’t there yet, but you can see the existing XC630-10, XC730-12, and XC730-24 appliances.

Remember, Dell is taking great pains not to touch the Nutanix software. “Your path, our platform”, as we like to say. When a new version of the software is released, it is available to all customers, XC included. A bad counter-example is Samsung and Android: a new Android version is released, and it might be a year until it’s available because of all the modifications and bloat that Samsung injects. Painful, and a great way to alienate customers.

Our goal is to do what we do best, which is produce great hardware backed by the sales and support that Dell can bring, and let the uber-smart dudes at Nutanix do their thing and produce great software. (*rant over, can you tell I’m dirty at Samsung?)


New Storage Features

Because of my background, the new storage-related features probably interested me the most: Erasure Coding, NAS functionality, and the storage-only node.

Erasure Coding is a great option to add and will really help get the best bang for buck by allowing more usable capacity from the same number of drives. If you are not familiar with erasure coding, @jpwarren does a good job of explaining it in a couple of paragraphs here. Effectively it’s like RAID, but rather than operating at the physical drive layer, it operates at the 4MB extent group layer (I think I recall someone mentioning that’s the granularity).

Thanks to Michael Webster for the clarification.

Andre again explains the Nutanix space savings here. One caveat is that you need a cluster of at least 4 nodes, but that’s not a biggy. You’ll see better storage savings the more nodes your cluster has; to get a feel for the numbers, there’s a rough worked example below. I’m sure more rules and best practices will come out about when and where it’s best to use EC.
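These numbers are mine, not official Nutanix sizing, and I’m assuming a simple data-plus-parity strip for the sake of the arithmetic, but they show why wider strips (i.e. bigger clusters) give better usable capacity:

```python
# Rough comparison of usable capacity: RF2 replication (two full copies of
# the data) versus a simple erasure-coded strip of N data + 1 parity blocks.
# Illustrative arithmetic only, not official Nutanix sizing guidance.

def usable_rf2(raw_tb: float) -> float:
    return raw_tb / 2  # two copies -> half the raw capacity is usable

def usable_ec(raw_tb: float, data: int, parity: int = 1) -> float:
    return raw_tb * data / (data + parity)  # overhead is (d+p)/d, not 2x

raw = 40.0  # hypothetical: 4 nodes x 10 TB raw each
print(f"RF2 usable:    {usable_rf2(raw):.1f} TB")   # 20.0 TB
for d in (2, 3, 4):  # bigger clusters allow wider strips
    print(f"EC {d}+1 usable: {usable_ec(raw, d):.1f} TB")
```

On those assumptions, a 4+1 strip gets you 32 TB usable from the same 40 TB raw that RF2 turns into 20 TB, which is where the “more usable capacity from the same drives” claim comes from.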

Next was native file serving straight out of a Nutanix cluster. The details are light at the moment, but it will be some sort of VM, like the CVM, running CIFS/NFS services. It will be very handy for VDI deployments where you need remote profiles, home directories, etc.

One of the customers I spoke to will be looking to move all their file serving to the new NAS appliance. I asked why they don’t just run a VM with Openfiler or something similar, and the response was that it wouldn’t be integrated into the Nutanix ecosystem: Prism, performance monitoring, support, etc. Fair enough, although I have spent a lot of time around NAS products like Celerra and FluidFS, and it’s a difficult thing to make simple.

When the storage-only node was announced I was a little surprised, not because Nutanix would do it, but because my first thought was “Oh shit, Dell doesn’t have one”. The good news is we do have one; it was announced straight after the kickoff. Whew. Goes to show where I sit in the pecking order at Dell 😉 I don’t have any details at the moment, but it will be like the NX-6035C: just enough compute to run the base KVM hypervisor, but not to run any VM workloads.

Josh Odgers has a writeup about the storage-only node. This has been a sorely needed piece of the Nutanix puzzle.

In-guest iSCSI helps out with those large Exchange installations out there. It means you can serve storage from NDFS directly into a guest via iSCSI; a quick sketch of what that looks like from the guest side is below. Does it mean you can serve out iSCSI to non-Nutanix nodes? From what I could gather it will work, but don’t do it, because it makes no sense. If you want to use it for migrations, use the whitebox option instead.
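From the guest’s point of view this is just normal iSCSI. As a hedged sketch only: on a Linux guest you’d discover and log in to the target with the standard open-iscsi tools, wrapped here in Python. The portal address and target IQN are invented placeholders; the real target details will come from the Nutanix side.

```python
# Sketch: attach an iSCSI LUN from inside a Linux guest using the standard
# open-iscsi CLI (iscsiadm). Portal and IQN below are hypothetical.
import subprocess

PORTAL = "10.0.0.50:3260"                        # placeholder portal address
TARGET = "iqn.2010-06.com.nutanix:example-lun"   # placeholder target IQN

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Discover the targets the portal exposes, then log in to the one we want.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))
print(run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"]))
```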

Nutanix Community Edition (CE)

Nutanix Community Edition (CE) is now officially available, and the best news is that it can be nested as a VM. There’s a good review of what features it supports here, but for hints and tips on how to install it, Matt Day (@idiomatically) and Joep Piscaer (@jpiscaer) are the men you want to talk to, as they ran the CE session at .Next.

Apparently, after getting Nutanix and Rubrik, Matt has heaps of time on his hands, so smash him with questions :)

Synchronous replication is a good new feature and is essential for any product wanting to play in larger, highly critical environments.

What’s .Next?

At the end of the conference, the .Next tour was announced. It will reach 100 cities around the world and luckily will hit a lot of locations in ANZ, including Brisbane, where I live!

Every man and his dog is blogging and tweeting about the conference and the new features, which is why I linked to so many in this post. Here is a list of some of the blogs I have read so far from around the web.


*Featured Image at the top courtesy of @vcdxnz001
