Blog

Dell FluidFS v5 2016 New Features

Dell engineering recently released FluidFS v5 for the FS8600, coinciding with SCOS 6.7 for the Dell Storage SC series. FluidFS has come along in leaps and bounds over the last few years: VAAI support, SMB3, advanced quota management, and scale-out to support 4PB in a single share.

FluidFS runs across the SC family, from the smaller SCv2000 up to the massive SC9000: same code, same kit, just different disk configs depending on the requirements.

FluidFS Features

Improved scalability with a single Global Namespace

Global Namespace uses folder redirections. You saw before that a single FS8600 cluster can support up to 4PB; with the GN functionality, you can have multiple FS8600 systems look like one massive system, tens of PB. Very attractive, especially when you consider that FluidFS has no capacity license, so it can grow as much as you want. GN works with SMB2, SMB3, and NFSv4.x.

For example:

  • System 1 \\cluster1\mainshare
  • System 2 \\cluster1\mainshare\archive
  • System 3 \\cluster1\mainshare\cctv
  • System 4 \\cluster1\mainshare\remotesite1 (with SMB BranchCache)

The systems above don’t all have to have the same backends. One system could be all flash, one could be optimised for CCTV streaming. Like I mentioned, it’s configurable based on your workload.

Data governance and security improvements

FluidFS v5 brings support for advanced auditing: access violations, ownership changes, and so on. Great for extra security and compliance. At the moment (Feb 2016) only Quest Change Auditor is supported, but more are coming in the next few months.

Powershell and REST API support

Very useful, especially once you start getting into large deployments. You can script day-to-day activities like provisioning shares and snapshots, and also use the monitoring features for reporting.
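To give a flavour of what that scripting looks like, here's a minimal Python sketch of provisioning shares over a REST API. The base URL, endpoint paths, and payload fields are hypothetical placeholders, not the documented FluidFS v5 schema; check the official API reference for the real calls (the same tasks can be driven from PowerShell).

```python
# Minimal sketch of scripting share provisioning over a REST API.
# Endpoint paths and payload fields here are illustrative only --
# consult the FluidFS v5 REST API documentation for the real schema.
import requests

BASE = "https://fluidfs-cluster.example.com/api"  # hypothetical base URL
session = requests.Session()
session.auth = ("admin", "password")
session.verify = False  # lab only; use proper certificates in production

def create_smb_share(name, path, quota_gb=None):
    """Provision an SMB share; quota is optional."""
    payload = {"name": name, "path": path}
    if quota_gb is not None:
        payload["quotaGB"] = quota_gb
    resp = session.post(f"{BASE}/shares", json=payload)
    resp.raise_for_status()
    return resp.json()

# Day-to-day automation: provision a share per project folder.
for project in ["finance", "engineering", "cctv"]:
    print(create_smb_share(project, f"/mainshare/{project}", quota_gb=500))
```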

Additional enhancements include FTP support, replication throttling, SMB BranchCache and two-way NDMP over FC.

http://www.dell.com/us/business/p/dell-compellent-fs8600/pd

By |February 7th, 2016|FluidFS, Storage|0 Comments

What's new with Dell XC for January 2016

Dell has recently announced new features and options in the XC Nutanix lineup to get the year started. The Dell/Nutanix relationship is still going strong with a bunch of announcements in December 2015 and with the release of NOS 4.6 due at the end of January there are even more improvements.

December 2015 Announcements

  • Improved SSD options
  • XC630-10F All-Flash node
  • XC6320-gF All-Flash node (4 nodes in 2RU)
  • Dell SAS HBA330 on all new models except the XC6320 (12G SAS)
  • 6TB drive support on XC730 & XC430
  • 64GB DIMM support

January 2016 Announcements

  • 40TB Capacity Nodes (previously 32TB)
  • MS SCOM Management Dell Agent Free Pack*
  • SED support on SSD & NLSAS across all models

* Documentation to be released shortly

Two useful Reference Architectures for Dell XC have been released as well. They explain how to set up a 3-node XC630 cluster and cover network setup (cabling, switch configurations, and VLAN tagging), initial cluster creation, and best practices.

An Acropolis (AHV) reference architecture is in the works, but is still a little while away.

The NOS 4.6 code about to be released enables support for 40TB per node on the XC730 series.

XC730xd-12

Up to 12 x 3.5″ drives. SSD support is Min 2 – Max 4 with sizes of 200/400/800/1600GB. For the HDD it's Min 4 – Max 10 with 2/4/6TB options.

XC730-24

Up to 24 x 2.5″ drives. SSD support is Min 2 – Max 4 with sizes of 200/400/800GB. For the HDD it's Min 4 – Max 20 drives with 1/2TB options.

XC730xd-12C

Up to 12 x 3.5″ drives. This is the Lite Compute Storage Node. It has 40TB fixed capacity with 2 x 400GB SSD and 10 x 4TB HDD.

For the All-Flash nodes currently all drive slots need to be populated but that restriction should be lifted in the coming months.

If you were to fill the XC630-10F with 1.6TB SSD drives you would have 16TB RAW per node BEFORE dedupe and compression, in 1RU. Very cool. At the moment it is best practice to create a separate cluster for All-Flash nodes.

Lots of information in the XC630 owner's manual.

There is a lot of interest around Nutanix and Dell Networking at the moment, especially around Software Defined Networking and Linux-based solutions such as Cumulus. The Open Networking (ON) range lets customers load the networking OS of their choice on the switch: Cumulus, BigSwitch or Dell FTOS, to name a few. There is a load of information on the Dell Networking for Nutanix wiki.

The Dell Storage wiki for XC Series has loads of information about XC solutions and models if you need to go deeper. If there are any questions please let me know in the comments below or on twitter @danmoz

By |January 19th, 2016|DellXC, Storage|0 Comments

Happy New Year from Planetchopstick

Ahh, a brand new year. It smells fresh …

I'm not a New Year's resolution guy, because I always break them. I prefer nondescript dates: innocuous, inconsequential (I'm all out of big words). A good example is when I started the LCHF eating plan; I picked Sunday, Nov 2nd 2014 for no other reason than that it wasn't a key date and it wasn't a Monday to start on :)

So I'm breaking habits, but like a lot of people I am making a resolution to post more, and about more things. And what an easy start: a post about posting more.

Just before the Christmas break we got a bunch of the Queensland Dell specialists and solution consultants together in the boardroom and we brought in all our toys.

  • Dell XC Nutanix
  • Dell Networking running Cumulus
  • Dell Storage SC Series SCv2020 iSCSI array
  • “Cough” Supermicro Nutanix :)
  • Dell Storage PS Series PS6210XV
  • And a bunch of other bits, including my trusty 1G TP-LINK 16 port switch, which I call THE WORKHORSE

We may run out of ports here

Cumulus switch deployment and configuration with Ansible on a Dell Networking S4048-ON open networking switch.

Lots of playing around with the XC clusters: connecting SC storage into XC nodes, pulling cables, failure scenarios. The networking team automated the deployment and configuration of Cumulus via Ansible, which was pretty awesome. Cam from Nutanix tested replication from ESX to Acropolis and did a restore. (I won't steal his thunder here, as he is going to do his own post about it at http://invisibleinfra.com/.)

One thing we did (sort-of) get working is the Cumulus integration into Prism. Very cool. Full doco on how to do that is here.

We learnt a lot, especially that we tried to do too much at once, which slowed us down a bit. We will look to run these once a quarter, and if we get better at it we'll get customers involved as well. I was using the hashtag #bnedellplugfest, but I think that was a bit long, and it would have been more interesting if it wasn't just me tweeting :)

As you can see, fodder for a few interesting blog posts. Now to follow through …

I hope everyone has a great year, I know as a Storage guy in Dell this year is going to be VERY interesting.

Cheers Daniel

By |January 12th, 2016|Dell Storage, DellXC, Nutanix, Storage|0 Comments

Why you should use TLC flash in your storage arrays

*Ed note* I wrote this over 3 months ago but didn’t get around to posting. Still relevant though.

Everyone knew SSD drives would change the storage landscape dramatically, but the speed of development and the rate at which capacities are growing are still impressive. Dell has taken the next step in SSD evolution by announcing support for TLC SSD drives in our SC series storage arrays. We were first to market with high capacity TLC drives in an enterprise storage array, and as far as I know we are still the only vendor that can mix multiple SSD types in the same pool.

Why do you care? It all comes down to the different SSD drive costs, quality, resiliency, and perhaps most importantly, *capacity*.

There is a lot of information out there about the various SSD types and their use cases, so I won't go into much detail here (see Pure's SSD breakdown article). There are three types of SSD drives supported in Dell SC series (Compellent) arrays:


  • SLC – WI (Write Intensive). Great for writes, great for reads; $/TB high
  • MLC – PRI (Premium Read Intensive). OK for writes, great for reads; $/TB better, with higher capacity drives
  • TLC – MRI (Mainstream Read Intensive). Average for writes, still great for reads; $/TB excellent and the highest capacity (in a 2.5-inch form factor to boot!). Massively outperforms a 15K drive

*Ed Note* The 1.6TB WI drives are Mixed Use WI drives.

The SC series' *magic sauce* is that it uses tiers of different disk types/speeds to move data within the array: hot data on Tier 1, warm or write-heavy data on Tier 2, and older, colder data on Tier 3. Typically Tier 3 would be 7.2K NLSAS spinning drives, as they offer the best cost/TB. SC series can mix and match drive types in the same pool because of *data progression*: new writes and the heavy lifting are handled by the top tiers, while the bottom tiers are used only periodically, and only for reads.

The largest TLC drive at the time of writing (Sept 2015) is 3.8TB. 3.8TB in a 2.5-inch caddy, low power consumption and no moving parts. I don't have exact performance details, but for read workloads the MRI drives perform about the same as the PRI drives, while for random write workloads they deliver about half the performance of a PRI SSD. (Rule of thumb: every workload is different. Speak to your friendly local storage specialist to get the right solution for yours.) Compared with a 15K spindle, that's better rack density, power savings, and a huge performance boost per drive. Then consider a 4TB NLSAS drive: 3.5-inch, 80 – 100 IOPS with a random workload, spinning constantly, so higher power consumption and moving parts. You can have situations where a NLSAS drive spins down when not in use, but that's not the norm. The TLC drive is going to be more expensive than the NLSAS drive, but once you take power, footprint, and the added performance over the life of the array into account, it becomes a different calculation.
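To see why, here's a back-of-the-envelope comparison in Python. Every input below (prices, wattages, SSD IOPS) is a made-up placeholder for illustration, not a Dell figure; only the 80-100 NLSAS IOPS rule of thumb comes from the text. Swap in real quotes before drawing any conclusions.

```python
# Back-of-the-envelope $/TB comparison over a 5-year array life.
# ALL inputs are hypothetical placeholders -- substitute real quotes
# and measured power draw before drawing conclusions.
YEARS = 5
KWH_PRICE = 0.25  # $/kWh, assumed

drives = {
    # name: (capacity_tb, unit_price, watts, iops)
    "3.8TB TLC MRI SSD": (3.8, 4000, 9, 20000),
    "4TB 7.2K NLSAS":    (4.0, 600, 11, 90),
}

for name, (tb, price, watts, iops) in drives.items():
    power_cost = watts / 1000 * 24 * 365 * YEARS * KWH_PRICE
    total = price + power_cost
    print(f"{name}: ${total/tb:,.0f}/TB over {YEARS}y, "
          f"{iops/tb:,.0f} IOPS/TB")
```

Even with generous placeholder numbers for the spinner, the IOPS-per-TB gap is what changes the calculation.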

*magic sauce – secret sauce and magic dust together, like ghost chilli sauce, just with more tiers (geddit? hot sauce .. tears? )

SC Drive Types Dec 2015

You can see there are 4 capacities being supported at the moment.

  • 480GB, 960GB, 1.9TB, 3.8TB!!!

Yup, a nearly 4TB drive in a 2.5-inch form factor that is low power and thousands of times faster than a 15K spinning drive, at about the same cost per GB. This is just the beginning; there are larger capacities on the roadmap. I wouldn't be surprised to see the end of 15K and 10K drives in our storage arrays by the end of next year.

While we are on the topic, this is an excellent blog on the newer types of flash storage being tested and developed to help take Enterprise Storage into the future, whatever it looks like.

What are the gotchas? It can't all be peaches and cream. As you can see in the table above, there are different SSD types for different workloads. If you have a high-write environment, the RI drives may not be a good fit because of the high erase cost and NAND cell resiliency. For that workload you would be better off with the WI SSDs.

However, most of the workloads I see, plus the stats that come from our awesome free DPACK tool, show that most environments run about 70/30 R/W% with average 32K IOs (a typical VM environment). These are great candidates for the RI drives.

Here is the great part for Compellent SC: if you want the best of both worlds, we can do that using tiering and Data Progression, leveraging a small group of WI drives to handle the write workload and a larger group of RI drives to handle all the read traffic, even though to the application it's just one bucket of flash. Now we can provide an all-flash (or hybrid) array with loads of flash but a much, much lower $/GB, which is essential with the current data growth rates.

Data Progression in SC series

Here is an example. You have a VMware workload that you would like to turbo charge. You want to be able to support more IOPS but you also want those IOPS to be sub millisecond. You reach out to me, I talk about myself for the first 15 mins and then we run the free DPACK tool to analyse your workload.

  • DPACK reports 70/30 R/W% and average 32K IOs, with the system sitting at 5,000 IOPS 95% of the time and peaking at 12,000 during backups.
  • There are latency spikes throughout the day when the SQL devs run large queries at 10am and 2pm, but it usually sits around 3ms – 10ms; not too bad, although during backups read latency sometimes jumps to 30ms.
  • Queue depth is pretty good and CPU/MEM usage is fine. Capacity is 60TB used, but a lot of it is probably cold data.
  • Looking at the backups, about 2TB of data changes per day.
  • The SQL devs want to lock the SQL volumes into flash because they write shitty queries and can't be assed optimising them. (I used to be an Oracle DBA; devs are lazy.)
  • Growth is no more than 30% a year, but a lot of that will be data growth, not workload growth.

This is a very common workload. It helps that Australia and New Zealand are very highly virtualised, so a lot of the workloads we see are ESX, with Hyper-V becoming more common. With this much information it's reasonably simple to design an SC array that I would be 100% confident would nail that workload.

It's not a massive system and growth will mainly be Tier 3, but there are a few writes from the SQL databases, so an SC4020 array with WI SSD, RI SSD, and NLSAS for the cold tier should do the trick.

The SC array handles tiering and incoming writes very differently to a lot of arrays in the market. All new writes land in Tier 1 (the fastest tier) as RAID 10 (the most efficient write). This is done on purpose to get the write committed and the ack back to the application as fast as possible. The challenge is that R10 has 50% overhead, and with flash that can mean $$$; this is where the two tiers of SSD come into their own. Every couple of hours (2 hours by default), the SC array takes a replay (snapshot) and marks the volume's blocks as read-only. That data is then instantly migrated to the second, RI flash tier as R5 to maximise usable capacity; because the data isn't R/W anymore, there is no need for it to be R10. SC uses redirect-on-write, so new writes land in Tier 1 as R10 and the volume pointers are simply updated.

A lot of info in a small paragraph, but you can see what is happening: the WI tier does all the heavy lifting in the array, and older data is moved to the RI tier to be read from. Then, as the data goes cold, it is typically moved to Tier 3 (NLSAS in my example) as R6. Same data, moved to the right tier at the right time to maximise performance and $/GB.

The replay is taken every 2 hours and the data then moves down to Tier 2. This means we only need to size Tier 1 for the required IOPS plus enough capacity to hold 2 hours' worth of writes x 2 (R10 overhead). In my example above there is about 2TB of data written every day (assuming the worst case, where every write is a new write). Break that into 2-hour chunks and it's less than 200GB per replay; double it for R10 and I would only need 400GB of WI SSD to service that 60TB workload. The reality is that there are spikes during the day, and DPACK identifies those, but you get my drift.
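As a sanity check, here's that replay math in a few lines of Python, a minimal sketch of the worst-case sizing described above:

```python
# Sizing the Tier 1 WI SSD tier from the DPACK numbers above.
# Worst case: assume every write is a new write.
daily_writes_gb = 2000          # ~2TB of changed data per day
replay_interval_h = 2           # SC replay (snapshot) cadence
replays_per_day = 24 // replay_interval_h

per_replay_gb = daily_writes_gb / replays_per_day   # data held between replays
tier1_needed_gb = per_replay_gb * 2                 # double for R10 overhead

print(f"{per_replay_gb:.0f} GB per replay -> "
      f"{tier1_needed_gb:.0f} GB of WI SSD needed")
# ~167 GB per replay -> ~333 GB of WI SSD, so ~400GB of WI flash
# comfortably services the 60TB workload's write ingest.
```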

So .. Tier 1: let's go with 6 x 400GB WI drives (one is a hot spare). I won't put the exact figures here, but those drives with that workload would smash it out of the park at 0.2ms latency.

Now I can focus on Tier 2 almost purely from a capacity standpoint. Remember, this tier holds the data being moved down from the WI tier, but it also holds data classified as hot that gets read a lot. Everything in this tier will be R5 to get the best usable capacity number. They have 60TB, change 2TB a day, and the SQL DB they want to pin is 10TB, so I want to aim for about 18TB usable in this tier just to be safe. I don't have to worry about SSD write performance here because it will be nearly 100% read, except when data moves down every couple of hours.

So .. Tier 2: I'll use 12 x 1.9TB MRI drives (one hot spare). This gives me 18TB usable (not raw; you'll find Dell guys always talk usable). Plenty of room for hot data and to lock the entire SQL workload into this tier. You would need shelves of 15K drives to get the same performance.
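For the curious, here's roughly how that Tier 2 number falls out, assuming a simple single-parity R5 model. Dell's distributed RAID and spare-capacity math differs a little, so treat this as an approximation:

```python
# Rough usable-capacity check for Tier 2, assuming a simple
# single-parity R5 model (an approximation of SC's distributed RAID).
drives = 12
hot_spares = 1
drive_tb = 1.9
r5_stripe = 11          # assumption: one parity share across the active set

active = drives - hot_spares
raw_tb = active * drive_tb
usable_tb = raw_tb * (r5_stripe - 1) / r5_stripe

print(f"{raw_tb:.1f} TB raw -> ~{usable_tb:.1f} TB usable at R5")
# 20.9 TB raw -> ~19.0 TB usable, in the ballpark of the 18TB target
# once metadata and reserved capacity are accounted for.
```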

By splitting up the WI and RI tiers you get a level of flexibility that is difficult without tiers. If the write workload stays static, in other words around the same IOPS number and TB/day, there is no need to grow it. But say some other business units see the benefits the SQL guys are getting and want in on that action: we can grow the WI and RI tiers separately. Simply add a couple more 1.9TB RI drives and that tier gets bigger. We then change the Storage Profile on that volume (and with VVols, we'll change it on the VM) and voila, that volume is now pinned to flash.

Finally, we need another 40TB for the rest of the workload + 30% a year for growth over three years = approx 90TB.

Note: you can add drives to an SC array at any time and the pool will expand and rebalance, so you don't have to purchase everything upfront. Also, with thin provisioning, thin writes, compression, RAID optimisation, etc., there are extra savings, but I'll leave those out for now.

Like the RI tier, all the data in here will be read-only, and on larger drives it will be R6. Because we don't write to this tier apart from internal data movement, we squeeze as much performance out of the spinning rust as possible. The key is to not have too much of a divide between the SSD tiers and the NLSAS tier. Again, DPACK allows us to size for the workload instead of guessing. We know the workload is 5,000 IOPS, so I want this tier to handle about 15-20% of that total: 1,000 IOPS (that's convenient). The NLSAS drives aren't being written to, so there is no RAID write penalty, and I can assume 80 IOPS per drive; 12 drives gets me very close to my IOPS number with a hot spare, and magically it's also the number of 3.5-inch drives we can fit in a 2U enclosure. It's almost like I'm making this up :) That gives me the drive count, but I also want to get to 90TB usable at R6. Different story: with 24 x 6TB drives we get about 100TB usable, and the good thing is I know I have met the performance brief.
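Here's the same Tier 3 arithmetic as a quick sketch. The 80 IOPS/drive figure is the read-only rule of thumb from above; the R6 stripe width is my assumption:

```python
# Tier 3 sizing sketch: IOPS first, then a capacity check.
workload_iops = 5000
tier3_share = 0.20          # aim for 15-20% of total IOPS on NLSAS
iops_per_drive = 80         # read-only rule of thumb, no RAID write penalty

drives_for_iops = workload_iops * tier3_share / iops_per_drive
print(f"IOPS target {workload_iops * tier3_share:.0f} -> "
      f"{drives_for_iops:.1f} drives, so 12 (incl. hot spare) fits a 2U shelf")

# Capacity check at R6, assuming a 12-wide stripe with 2 parity (my assumption)
drives, drive_tb, stripe, parity = 24, 6, 12, 2
usable_tb = drives * drive_tb * (stripe - parity) / stripe
print(f"~{usable_tb:.0f} TB usable from 24 x 6TB")
# This simple model gives ~120TB; the ~100TB quoted above also
# accounts for hot spares and reserved/metadata capacity.
```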

Still with me? This has been a longer explanation than I intended. Speaking of puns, I hoped some of the 10 puns I have in this post would make you laugh, but sadly no pun in ten did.

End result: I have an SC4020 with 18 SSDs (6 spare slots for expansion) plus 2 extra SC200 enclosures with 24 x 6TB NLSAS drives. 6RU in total, and it nails the performance and growth rates needed.

You can see, having the option for multiple flash types makes for very flexible and cost effective solutions.

Where to from here? I'm sure drive capacities will continue to grow and grow, with the newer types becoming more mainstream. Samsung recently announced a nearly 16TB SSD, and without doubt we'll see that sort of capacity in our arrays over time. Imagine having a 16TB SSD in 2.5-inch, or 32TB? A 1RU XC630 Nutanix node with 24 x 4TB 1.8-inch SSDs. The only issue is we still have to back it all up!!!

*Final Ed note* Since I wrote this post Dell has released the SC9000 platform. When it is paired up with all flash it is a monster.

By |December 10th, 2015|Compellent, Dell Storage, Storage|0 Comments

What was announced at .Next 2015 by Nutanix and Dell

Disclaimer: I work for Dell. I sell the Dell XC Nutanix appliance. I have a vested interest; however, those who know me know I like to be fair and as little Kool-Aid-y as possible, unless it's about basketball. This is my hot take on the announcements. BTW, it turns out I can't spell announcements consistently. Do what you need to do with that info.

There are a plethora of blogs, articles, tweets and LinkedIn posts about .Next and the announcements, so I'll try not to rehash them, but instead give a Dell/storage guy's opinion. At the bottom of the post I have collated a bunch of the articles I have read over the past few days.

Key Announcements

  • XCP – Xtreme Computing Platform
    • Acropolis
    • Prism
  • Storage node
  • Native NAS appliance
  • Nutanix App Framework
  • Erasure Coding
  • In-Guest iSCSI Support
  • Sync Replication
  • Nutanix CE
  • Global Search
  • Improved Analytics and Capacity Trending

NOS 4.1.3 (released last week)

  • erasure coding
  • sync replication
  • in-guest iSCSI support
  • ToR visibility
  • Acropolis Image Service and HA

Dell announcements

  • GPU offerings to support graphics intensive VDI workloads
  • Support for factory installed KVM and ESX 6.0 hypervisors
  • Storage-only node to increase cluster storage capability when additional compute and hypervisor resources are not required
  • IDSDM – Internal Dell SD Module
  • XC430 for smaller workloads (lite-compute)

A Big Week

You can see it was a busy week, the biggest part of which was the XCP "invisible infrastructure" and Acropolis announcements. Andre Leibovici (@andreleibovici) had a number of good blog posts prepared for the event, but this one is a great overview of the new features and the vision for XCP.

XCP stands for Xtreme Computing Platform, which is a mix of Acropolis and Prism. Previously it was called VCP (Virtual Computing Platform), and the idea is that the hypervisor and the Nutanix Prism components now stand separately on their own feet ... if software had feet.

Notice it is now called XCP and our appliance is called XC Series. Coincidence? I think … probably. But still pretty cool!

App Mobility is the ability to migrate a VM workload from one hypervisor to another. The demo onstage converted an ESX VM to a KVM VM. The VM needs to be on the NDFS filesystem (that's redundant, I know, like ATM machine), and then Acropolis converts some key metadata to make it compatible with KVM. Very nice.

I asked on twitter whether you could move back just as easily, and @vcdxnz001 responded quickly

It was recommended in one of the sessions (can't remember which) to clone your existing VM and convert the clone rather than the original. I think that is sage advice, just in case there are any hiccups in the early days.

What this all leads to is Acropolis becoming your go-to point for VM management, pushing workloads between hypervisors and between clouds. Ambitious, but I think a great idea going forward.

The improved analytics and capacity trending build on the already excellent Prism reporting tools. There are loads of examples in this video:

New Dell XC Series Announcements

Here is the official press release for the new Dell XC Series features.

Out of the Dell announcements, KVM support is a biggy, especially with the launch of the Nutanix Acropolis platform. AFAIK there wasn't an architectural issue with XC supporting KVM; it was just a matter of getting our engineering processes and support infrastructure in place to support it effectively.

Karl Friedrich, who works in our Engineering department and presented the Dell XC session at .Next, had a great point: Dell announced the XC product only 10 months ago and has had two major releases since, so three releases in 10 months. That's a heavy workload, and there is more to come.

Two new appliances were announced as well, the XC730-16G and XC430-4.

  • XC730-16G will support Nvidia GRID K1 & K2 GPUs as well as up to 16 drives. CPUs, memory and networking are configurable.
  • XC430-4 is a fully-featured box, just in a short-depth (24″) form factor. Ideal for ROBO and deployments that use short-depth racks.

Just to explain the naming structure: all the XC appliances are based on the Dell 13G PowerEdge rack servers, e.g. an XC630 is an R630 with some tweaks to support Nutanix, like IDSDM, BIOS, drives, and most importantly the front sticker :) The components inside, like CPU, memory and networking, are configurable just as you would expect when buying a normal server.

The number at the end of the name, e.g. XC630-10, is the maximum number of disks the appliance will support. You don't have to fill it, but excellent disk density is one of the advantages of the XC product. Finally, the G in XC730-16G stands for GPU.

Dell XC series

The Dell XC630-10 (top) and XC730-24 (bottom)

So using all that new info, you can work out that the new appliances support up to 16 drives in the XC730-16G and up to 4 drives in the XC430-4.

IDSDM is a 16GB SD card residing on an internal motherboard riser card. It’s loaded during Dell Factory software installation with all necessary content to reset the XC system back to the Dell factory image. It’s available to Dell Support for initial deployment or remedial support when a factory reset is required and can restore the system back to the factory state in 10-20 minutes.

One of the Dell XC advantages is that we can ship the appliance with ESXi already installed. Nutanix is fast to deploy but with pre-installed ESX and rapid rails XC is super fast :) Here is a video from the Dell Sydney Solutions Centre that has a 3 node XC cluster being racked and configured in under 30 mins. You have to love the PowerEdge tool-less rapid rails.

If you want to see what all the Dell XC series appliances look like and their specs, you can view them via the Dell Virtual Rack. The new appliances aren’t there yet but you can see the existing XC630-10, XC730-12, and XC730-24 appliances.

Remember, Dell is taking great pains to not touch the Nutanix software. Your path, our platform, as we like to say. When a new version of the software is released, it is available to all customers, XC included. A counter-example is Samsung and Android: a new Android version is released and it might be a year until it's available, because of all the modifications and bloat that Samsung injects. Painful, and a great way to alienate customers.

Our goal is to do what we do best which is produce great hardware with the sales and support that Dell can bring, and let the uber-smart dudes at Nutanix do their thing and produce great software. (*rant over, can you tell I’m dirty at Samsung?)

 

New Storage Features

Because of my background, the new storage-related features probably interested me the most: Erasure Coding, NAS functionality, and the storage-only node.

Erasure Coding is a great option to add and will really help get the best bang for buck by allowing more usable capacity from the same number of drives. If you are not familiar with erasure coding, @jpwarren does a good job of explaining it in a couple of paragraphs here. Effectively it's like RAID, but rather than being at the physical drive layer it's at the 4MB extent group layer (I think I recall someone mentioning that's the granularity).

Thanks to Michael Webster for the clarification

Andre again explains the Nutanix space savings here. One caveat is that you need a cluster of at least 4 nodes, but that's not a biggy. You'll see better storage savings the more nodes your cluster has, as the toy example below shows in miniature. I'm sure more rules and best practices will come out about when and where it's best to use EC.
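To see the core idea in miniature, here's a toy single-parity sketch in Python. Real Nutanix EC-X uses proper erasure codes over extent groups spread across nodes, so treat this as conceptual only:

```python
# Toy single-parity erasure coding: XOR N data blocks into one
# parity block, then rebuild any lost block from the survivors.
# Nutanix EC-X operates on extent groups across nodes with real
# erasure codes; this just illustrates the space-saving principle.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"extent-0", b"extent-1", b"extent-2", b"extent-3"]
parity = xor_blocks(data)

# 4 data + 1 parity = 1.25x overhead, vs 2x for a full replica.
lost = data.pop(1)                      # simulate losing one block
rebuilt = xor_blocks(data + [parity])   # XOR of survivors + parity
assert rebuilt == lost
print("rebuilt:", rebuilt)
```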

Next was native file serving straight out of a Nutanix cluster. The details are light at the moment, but it will be some sort of VM, like the CVM, running CIFS/NFS services. It will be very handy for VDI deployments where you need remote profiles, home directories, etc.

One of the customers I spoke to will be looking to move all their file serving to the new NAS appliance. I asked why not just run a VM with Openfiler or something similar, and the response was that it wouldn't be integrated into the Nutanix ecosystem: Prism, performance monitoring, support, etc. Fair enough, although I have spent a lot of time around NAS products like Celerra and FluidFS, and it's a difficult thing to make simple.

When the storage-only node was announced I was a little surprised, not because Nutanix would do it, but because my first thought was "Oh shit, Dell doesn't have one". The good news is we do have one, announced straight after the kickoff. Whew. Goes to show where I sit in the pecking order at Dell 😉 I don't have any details at the moment, but it will be like the NX-6035C: just enough compute to run the base KVM hypervisor, but not to run any VM workloads.

Josh Odgers has a writeup about the storage-only node. This has been a sorely needed piece of the Nutanix puzzle.

In-Guest iSCSI helps out with those large Exchange installations out there. It means you can serve storage from NDFS directly into a guest via iSCSI. Does it mean you can serve iSCSI to non-Nutanix nodes? From what I could gather it would work, but don't do it, because it makes no sense. If you want to use it for migrations, use the Whitebox option instead.

Nutanix Community Edition (CE)

Nutanix Community Edition (CE) is now officially available, and the best news is it can be nested as a VM. There's a good review of what features it supports here, but for hints and tips on how to install it, Matt Day (@idiomatically) and Joep Piscaer (@jpiscaer) are the men to talk to, as they ran the CE session at .Next.

Apparently after getting Nutanix and Rubrik, Matt has heaps of time on his hands, so smash him with questions :)

Synchronous replication is a good new feature and is essential for any product wanting to play in larger, highly-critical environments.

What's .Next?

At the end of the conference the .Next tour was announced, which will reach 100 cities around the world and luckily will hit a lot of locations in ANZ, including Brisbane, where I live!

Every man and his dog is blogging and tweeting about the conference and the new features, which is why I linked to so many in this post. Here is a list of some of the blogs I have read so far from around the web.

 

*Featured Image at the top courtesy of @vcdxnz001

By |June 15th, 2015|Nutanix, Storage|0 Comments

Looking back at my personal experiences at Nutanix .Next 2015

Disclaimer: I work for Dell. I sell the Dell XC Series Nutanix appliance. I have a vested interest; however, those who know me know I like to be fair and as little Kool-Aid-y as possible, unless it's about basketball. This is a personal post about my experience.

This last week I was fortunate to attend the inaugural Nutanix .Next user conference in Miami. I know what you're thinking: "Why was a Dell Storage pre-sales guy from sunny Brisbane there?" Turns out a lot of people were wondering that, because I got asked more than 20 times. Long before Dell announced the OEM partnership with Nutanix, I had been following and learning about Nutanix tech, because I have a few mates who have worked for Nutanix for a while and I trust their opinions.

I remember a while ago I asked one of them, "If you were a customer, would you run this in your company?" and the response was "Previously, maybe not, but it's rock solid now and yes, I would run it". This guy had never led me astray, so that was high praise IMHO.

So why was I there? Because I’m an (unofficial) internal XC champion here at Dell and, full disclosure, I begged. :)

It was held at the Fontainebleau Hotel in Miami: pretty cool location, but bloody expensive. Apparently it has a deep history, and the scene with the girl covered in gold paint in Goldfinger was filmed there. Good excuse to watch an old Bond movie again, if you ask me.

There were just under 1000 people there, and more than half were customers. It was a pretty short event, only 1 1/2 days plus a few drinks pre-event on the Monday night. They crammed heaps into that time, maybe a bit too much, as some sessions seemed rushed. The food was very nice, although very carb heavy, but I guess that's any event catering. The nightlife was excellent, although watching the Warriors lose two NBA Finals games was a downer. [Edit: at the time of writing it's back to 2-2 with game 5 about to start, #GoDubsGo]

Side note, what is with the people who go to that hotel to hang out by the pool? Some of the best looking people I have ever seen in my life, male or female, also some of the biggest train wrecks. It’s apparently the place to be seen, but sometimes I saw a little bit too much.

There were a heap of announcements during the event, enough that they got their own post, which I will finish next week. That will be a breakdown of the announcements with links to other blogs around the place.

Here is the video of the new announcements and demos

The sessions were OK. I am a partner and have been through NPP training, so I had already gone over a lot of the content, but I just didn't feel like I got as much as I could from the sessions. I also think I chose the wrong ones; e.g. it didn't help that I went to two advanced containers/Docker sessions without really understanding much about containers. (I still don't.)

The storage session had so much potential, but we just didn't have enough time to delve into the nitty gritty. In reality, I doubt many people there wanted to get into the weeds on storage as much as I did. Everyone was tired, but I think a few more sessions could have been repeated, which was unrealistic because as soon as the .Next conference finished, the .Now partner conference started an hour later. I'm sure it will be better next year.

Dell had a large stand at the event, and I manned the booth through the all-important welcome drinks session. We were right near the bar: good for customers, good for me :) (Dell sponsored, BTW!). Alongside me were product management and some of the engineering team; great people and nice to spend time with. As an Aussie I can't stress how useful it is to make face-to-face contact with the marketing folks and engineers creating the products I recommend to customers. I wish it could happen more, but the distance is a real barrier.

The coolest part of the stand was a massive touchscreen with the Dell Virtual Rack up on the screen. It is a virtual rack that has all of the Enterprise kit listed, with pictures and tech specs. I recommend checking out the site, it's impressive: http://esgvr.dell.com/. The Aussie marketing team needs to get one of these ASAP. There were a number of Dell XC customers at the event too, which was great.

Networking is always one of the best parts of these events, and because the number of people wasn't too large there were chances to have repeat conversations and not feel rushed. It was really great to meet a lot of people I only knew from online, and I made a few new friends.

A bit of a fanboi shout-out to the Nutanix exec team. I'm pretty sure I spoke to all of them more than once; they are very warm and made me feel very welcome. Great guys and nice to meet. But let's see what happens when I add them on LinkedIn :) Believe it or not, it reminds me a bit of the Dell execs. One thing I have liked about my 4 years at Dell is that the management team are very visible and very approachable, which isn't something I experienced much in my time at other companies.

A big part of the event for me was Twitter. I sent more tweets in 4 days than I had in the previous 6 months (sorry for the timeline abuse, BTW). I favourited heaps, but here are some of the top ones.

Apparently @ccstockwell thinks I have some monkey in my jeans genes

On the last morning I finally made it for a swim in the Atlantic. Turns out I was stupid to wait, because the water was glorious, although full of seaweed. The ocean was so flat it was eerie; I guess I'm just used to all the beaches in Australia having some form of waves. It was very peaceful. The good thing is I've now swum in the Pacific, Indian, and Atlantic oceans: tick on the bucket list.

The distance is a killer: door to door it was about 72 hours of travelling in 7 days. In a word, ug. All in all I found it a very worthwhile week, and I would love to make it to the event next year, although there is a low chance of that unless I change roles in the meantime.

A couple of photos from the week, excuse the ugly mug! P.S. I took the photo at the top of the page, believe it or not.

I'll end with this video, which is a short roundup of the week. I feature for about 1/millionth of a second, but I look like a troll. Enjoy!

 

By |June 15th, 2015|Storage|0 Comments

Where to watch the NBA Finals in Brisbane 2015

A very short post, based on the fact that I can't find this information anywhere on the web. The NBA Finals start tomorrow, and here in Australia we are in the perfect timezone to watch the games. But where do you watch them?

I know Australians have the highest signup ratio for NBA League Pass in the world, but it's good to watch these games in a social setting, not just at home on the lounge. A few phone calls later, I have come up with this very short list. Feel free to add more in the comments.

Hotels Showing the NBA Finals Live in Brisbane

  • Jubilee Hotel – Fortitude Valley
    • I called them and they will be showing it for sure. This is where I will be watching game 1! Come and say Hi! #GoDubsGo
  • Elephant & Wheelbarrow – Fortitude Valley 
    • Haven’t confirmed this one but I am assured they are showing it. They have a projector out the back I think.
  • Pig & Whistle – Fortitude Valley 
    • There are 4 P&W bars in Brisbane now but the newest one is in Brunswick St in the Valley. I was there last week to watch the Origin and the place is well set up for watching sport. They have a projector but I’m not sure if the game will be on it. I’m sure the other bars will have it on as well.
  • The Regatta – Toowong
    • They will be showing it for sure, as they have been showing most of the games in the playoffs. There is a massive TV in the bar near the fire. This would be my preferred place to watch it, but my car was towed from their carpark after I did the right thing and caught a cab home after a few beers. $560 later, I am boycotting the bar. Asshats.
  • Down Under Bar – City
    • They always show the games but without the sound. This might be different for the finals. They have a great $12 lunch deal with a steak and a pint, hard to beat that value.

There you go, not an exhaustive list by a long way but at least there is some content on the web when you search for it!

Here are the game times on TV in Australia:

  • Game 1 – Friday June 5th 11am
  • Game 2 – Monday June 8th 10am
  • Game 3 – Wednesday June 10th 11am
  • Game 4 – Friday June 12th 11am
  • Game 5* – Monday June 15th 10am
  • Game 6* – Wednesday June 17th 11am
  • Game 7* – Saturday June 20th 11am

* These games won’t be needed because the Warriors are going to sweep the Cavs :) Kidding, hopefully it’s an awesome series, and the best part is there is an Aussie on both teams so once again we’ll have an Aussie champion to go along with Gaze, Longley, & Mills/Baynes.

By |June 4th, 2015|Personal|1 Comment

Dell Storage Compellent Integration Tools for VMware 3.0 release (CITV)

The latest CITV v3 pack has been released. The CITV appliance is free to use for customers with an existing maintenance contract and enables advanced functionality in VMware environments. Using the CITV tools is highly recommended if you run VMware on Compellent.

With this release CITV 3.0 now supports:

  • SCv2000 series
  • vSphere 6.0
  • CentOS 5.10, which was introduced in CITV 2.1
  • Security updates

It also includes the following updates:

  • VASA Provider: updated to use TLSv1.1 for certificate exchange
  • vSphere Web Plugin: Added support for NAS (FluidFS), and Live Volume + Sync Replication
    • VMFS Datastore now has the Live Volume/Replication support Tab
    • vSphere 6.0 support
  • Replay Manager for VMware: updated to support SCv2000 (FC and iSCSI only) and vSphere 6.0

If you have an SCv2000 you can get access to it immediately, as it comes with the latest 6.6.20 code. If you are running another Compellent model (SC8000, SC4020, etc.), just talk to Copilot and get on the key release list.

If you want to get the version for the SCv2000 you can get it here 

By |May 18th, 2015|Compellent, Storage|0 Comments

Some beginner's information on the Keto (LCHF) diet I have been trying out.

I originally wrote this as a post on FB for my basketball team, who were asking about this diet change I have been embracing. Those who know me know that I'm tall and pretty active, but years of IT have made me a little soft around the middle. The best part is how alert I have been feeling through the day.

Keto Diet Info

OK, this is the abridged version of the information I have about this diet. I dropped about 10 kilos in 4 weeks by eating a high-fat diet and doing no exercise.

Firstly, it's called Keto and is very similar to Paleo, except its priority is fat, not protein. Basically you are getting your liver to process fat for energy (ketones) instead of carbs (sugar). LCHF stands for Low Carb High Fat. There are heaps of variants and books, but the Keto option works for me because there is heaps of info and recipes on Reddit.

This seems like a silly thing, especially given the way we are taught the food pyramid (small amount of fat, high amount of carbs). Current thinking is that they got it wrong: we should be flipping the pyramid, eating quality fats and few carbs. With the rise in fat buggers and diabetes it's a very interesting concept. The movie Cereal Killers, which I list below, explains all this very well.

When it says low-carb, it's super low-carb; e.g. I eat about 20g of carbs a day. A piece of Extra chewing gum is 1g of carbs; 1 cup of chopped carrots is 12g. Don't even look at anything made from wheat. It will take about 3-4 days to enter the "keto" state, where your body flicks over from using sugar to fats for energy.

On the 3rd day you will feel extremely flat; just push through and keep drinking water. You have to drink a lot of water for this diet to work: you won't do #2s anywhere near as often, so you need to flush out the toxins with water.

Edit: This is a good list of foods http://realmealrevolution.com/real-food-lists

To work out how much you should be eating for your size and goals, http://keto-calculator.ankerl.com/ is the best calculator on the web. Make sure you read the instructions.
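If you're curious what a calculator like that is doing under the hood, the macro math is simple. This sketch uses the standard energy densities (4 kcal/g for carbs and protein, 9 kcal/g for fat); the protein target is my assumed example, not a recommendation:

```python
# Rough sketch of the macro math a keto calculator performs, using
# standard energy densities: 4 kcal/g for carbs and protein, 9 for fat.
daily_kcal = 1500
carbs_g = 20                 # keto-level carb cap
protein_g = 100              # assumption: example target, adjust per the calculator

carb_kcal = carbs_g * 4
protein_kcal = protein_g * 4
fat_kcal = daily_kcal - carb_kcal - protein_kcal
fat_g = fat_kcal / 9

print(f"fat {fat_g:.0f}g ({fat_kcal/daily_kcal:.0%} of calories), "
      f"protein {protein_g}g, carbs {carbs_g}g")
# fat ~113g (~68% of calories) -- which is why keto menus look the way they do.
```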

You have to track your intake if you want to be successful, because it's so easy to kick yourself out of keto. Just imagine: your body is back to storing fats and 70% of what you are eating is fat. Swelling intensifies.

The best tracker I have found is MyFitnessPal; it's a free website and app (Apple & Android). You can customise the daily goals to ignore the default recommendation of mostly carbs.

So, an example day's meals. My goals were 1500 calories a day with 20g carbs.

  • 7am bacon and 2 eggs, black coffee with 30g double cream
  • 10am 30g almonds (high fat but lowest carb nuts)
  • 1pm can of tuna, half an avocado, 20g mayonnaise, 100g of baby spinach
  • 4pm cheese, or another coffee, or nuts again (so convenient)
  • 7pm Any meat, preferably fatty, cauliflower/broccoli/spinach/other low carb vege.
  • 10pm snack.

The more you keep to that small-meals-often balance, the more your body burns at rest, because it doesn't think it's going to starve. In the movie, the guy's resting metabolic rate went up 50%.

There are lots of resources on the web, just search for Keto, but the best ones are:
http://www.reddit.com/r/keto – Lots of conversations and tips here
http://www.reddit.com/r/keto/comments/2oq0sl/newbie_tuesday_are_you_new_here_need_advice_want/ – This is the getting started post, START HERE. It also has links to recipes and different subreddits, like how to bulk under keto
http://forum.bodybuilding.com/forumdisplay.php?f=61 – A bodybuilding forum devoted to Keto. It's awesome.

You can see there is a lot to read. If you want an easy intro, there is a great documentary on how a high-carb diet is slowly killing everyone, in which the guy does a high-fat diet with doctors monitoring his progress. It is called Cereal Killers.

I highly, highly recommend it, and at the moment you can watch it for free at the bottom of http://www.thepaleoway.com/, although I'm not sure for how much longer. If not, it may be available at some download sites. Maybe.

It's about an hour long. He is also about to release a sequel on keto for high-intensity athletes, called Run on Fat (https://www.facebook.com/CerealKillersMovie, http://www.runonfatmovie.com/).

There are a bunch of books on it as well if you are super keen. Let me know if you get to that point and I can dig them out.

By |December 17th, 2014|Personal|0 Comments

Dell to explain Project Blue Thunder and SDS strategy at Dell User Forum in Sydney.

Dell User Forum ANZ

This is a follow-up post to Dell User Forum registrations open for Sydney 15th October 2014

Date: Wednesday, 15th October, 2014 @ Randwick Racecourse, Sydney
Website: http://www.delluserforumaustralia.com.au/
Twitter: #DUFAU14
Register here: http://www.etouches.com/97034

Every large vendor needs their own shindig, and here at Dell Australia we have ours coming up in 3 weeks, on a grand scale.

Dell User Forum will be in Sydney on October 15th and is Dell’s largest customer event in ANZ this year. It’s being held at Randwick Racecourse which should be really interesting. Apparently the function centre has had some renovations recently and the word internally is that it looks fantastic.

I’m particularly keen to see what it’s like inside because for all the years I lived in Sydney I have never been to the track to watch a horse race. This is mainly because horse racing is phenomenally boring, and I have no idea what I’m doing when placing a bet, and I’m too big to be a jockey.

Fun fact: in the Triathlon series in Sydney, they have a separate category for guys over 92kg called the “Clydesdale Class”.

I did go there once to set up a CX3 system to keep track of all the horse husbandry data, which is an enormous industry in itself, e.g. the son of Silver Sovereign and I’llShoutyouGetEm was called MadonnaCantSing, but that’s a story for another time.

Now that the event has scaled to include all the main Dell pillars – Storage (Woo!!), Server, Software & Security – there will be tracks you can follow if you are interested in a certain topic. It's buzzword city, but after DUF in Miami in June these were the main topics attendees were interested in:

  • Big Data
  • Cloud
  • Mobility

Of course, if you want you can just pick and choose and go to whatever session you want.

DUF Agenda

The formal agenda has been released on the DUF website. The PDF file has full descriptions of what the breakout sessions will cover.

I'm a storage guy, or Dan Dan Storage Man as I seem to get called lately, so I HIGHLY recommend you ditch all the other crap and only come to the storage sessions. In particular ignore servers, servers are the past man, storage is the future, like *cough* .. Nutanix .. EVO:Rail .. Storage Spaces …….. oh shit :S

I'm only kidding. SDS and convergence are big topics this year, and Dell seems to have its paws all over them: the upcoming OEM agreement with Nutanix, the new VMware EVO:Rail appliance, and Microsoft is a Platinum sponsor, so there'll be heaps of info on Storage Spaces and what's coming from their part of the world. I have heard a rumour that we will have an EVO:Rail box on display at the Solutions Expo, but that's not confirmed. I'm keen to check it out. Hopefully I'll know soon and will put it up on twitter.

Helping the big push is the newly announced PowerEdge 13G platform that is going to drive all these SDS products. Density, flexibility in storage/connectivity, and of course raw compute power are needed to take this hyper-converged industry to the next level, and that is where the new R730XD and FX2 blade system will shine.

There will be 13G hardware demos and displays on the Expo floor.

Dell Storage

This time last year Dell was announcing our industry-first ability to mix Write Intensive SLC with Read Intensive MLC flash drives using our On Demand Data Progression (ODDP) technology. This allows us to have the blinding write performance and endurance of SLC with the capacity and read performance of MLC, at the same price/capacity as an equivalent spinning disk system.

You can see from this Register article from 17/09/14 that the author wistfully dreams of a future where you can tier between two types of flash; Compellent has been able to do it for over a year!

A year on, we have shipped a shed tonne of flash, and now we will be showcasing the latest Storage Center SC6.5 code, which features compression, greater addressable capacity and some other goodness. As always, the additional software features are free if you have a valid support contract.

There will be a running SC4020 on the Expo floor for you to get hands on and try out some live demos.

One session not to miss is "What's coming next with Dell Storage" (NDA session). We will have our very own storage Zen Master Andrew Diamond co-presenting with two of the guys running the Nextgen Storage strategy for Dell, Keith Swindell and Peter Korce. I expect to learn stuff in that session too, and I will be live tweeting it :)

On a side note, how do you live tweet an NDA session?

As most people reading this post will be aware, Dell is on a path to merge all of our storage systems into one common architecture over the next few years and SDS the fudge out of it. CRN has a good article on Project Blue Thunder with an interview with Alan Atkinson (VP Storage) on where Dell Storage is heading. Andrew's session will explain all of this in more detail.

Also, EqualLogic v8 firmware is about to be released with support for compression and VMware VVols. In the roadmap session we'll take you through the next generation of the PS4110 and PS6510 systems.

The “Talent”

As part of the keynote, Forrester Research will be "presenting their exclusive findings on how Australian businesses are innovating using Cloud, Big Data and Mobility to securely meet the changing needs of their customers." This will be an interesting session, as I'm curious to see what they think the different Australian states are doing. Here in QLD we are seeing a massive push for cloud and managed services, not just for storage but for everything in the datacenter. This is partially driven by government policy up here, but private businesses are looking to make the move too. In NSW & VIC the cloudy push isn't as widespread (IMHO). It probably helps that Dell already has an established data centre presence in QLD.

We have a lot of Dell execs coming down to present and talk with customers, fresh from DUF in Miami, including:

  • Alan Atkinson – VP Dell Storage @dell_storage
  • Enrico Bracalente – Director of Product Management for 13G
  • Kishore Gagrani – Product Director – Fluid Cache for SAN @kgagrani
  • Amit Midha – Dell APJ President
  • Daniel Morris – All around good bloke and chicken enthusiast @danmoz

I haven't personally met the other guys, but if you can, try and have a chat with Alan. He is remarkably approachable and happy to take the time to talk for someone at his level. Top bloke.

And then finally, at the end of the day we get to celebrate and have a beer or two. Sadly we have training at 8am the next morning in Frenchs Forest, so that's not so hot. The beers are sponsored by Cloud, which is as ambiguous as you can get.

Orange whip? Orange whip? Three Orange whips!!

If you see me at DUF come and say G’day. I’ll be the guy frantically trying to cut and paste ■ ■ ■ ■ continually on my phone :)

Register now

Dell User Forum ANZ

By |September 24th, 2014|Dell Storage, DUF, Storage|0 Comments