
What's new with Dell XC for January 2016

By |January 19th, 2016|DellXC, Storage|0 Comments

Dell recently announced new features and options in the XC Nutanix lineup to get the year started. The Dell/Nutanix relationship is still going strong: there were a bunch of announcements in December 2015, and with the release of NOS 4.6 due at the end of January there are even more improvements on the way.

December 2015 Announcements

  • Improved SSD options
  • XC630-10F All-Flash node
  • XC6320-gF All-Flash node (4 nodes in 2RU)
  • Dell SAS HBA330 on all new models except the XC6320 (12G SAS)
  • 6TB drive support on XC730 & XC430
  • 64GB DIMM support

January 2016 Announcements

  • 40TB Capacity Nodes (previously 32TB)
  • MS SCOM Dell Agent-Free Management Pack*
  • SED support on SSD & NLSAS across all models

* Documentation to be released shortly

Two useful Reference Architectures for Dell XC have been released as well. They explain how to set up a 3-node XC630 cluster and cover network setup (cabling, switch configurations, and VLAN tagging), initial cluster creation, and best practices.

An Acropolis (AHV) Reference Architecture is in the works, but it is still a little way off.

The NOS 4.6 code, about to be released, adds support for 40TB per node on the XC730 series.

XC730xd-12

Up to 12 x 3.5″ drives. SSD support is Min 2 – Max 4 with sizes of 200/400/800/1600GB. For the HDD it's Min 4 – Max 10 with 2/4/6TB options.

XC730-24

Up to 24 x 2.5″ drives. SSD support is Min 2 – Max 4 with sizes of 200/400/800GB. For the HDD it's Min 4 – Max 20 drives with 1/2TB options.

XC730xd-12C

Up to 12 x 3.5″ drives. This is the Lite Compute storage node. It has 40TB fixed capacity, with 2 x 400GB SSD and 10 x 4TB HDD.

For the All-Flash nodes, all drive slots currently need to be populated, but that restriction should be lifted in the coming months.

If you were to fill the XC630-10F with 1.6TB SSDs you would have 16TB RAW per node, BEFORE dedupe and compression, in 1RU. Very cool. At the moment it is best practice to create a separate cluster for All-Flash nodes.
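The raw numbers quoted here are just per-node multiplication; a throwaway Python sketch makes the arithmetic explicit (drive counts and sizes are the ones quoted in this post, and this is illustrative maths only, not an official sizing tool):

```python
def raw_capacity_tb(drive_count: int, drive_size_tb: float) -> float:
    """Raw (pre-dedupe, pre-compression) capacity of a set of drives."""
    return drive_count * drive_size_tb

# XC630-10F: all 10 slots filled with 1.6TB SSDs -> 16TB raw in 1RU
print(raw_capacity_tb(10, 1.6))   # 16.0

# XC730xd-12C: the fixed 40TB comes from the 10 x 4TB HDD tier;
# the 2 x 400GB SSDs sit in front as the flash tier
print(raw_capacity_tb(10, 4.0))   # 40.0
```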

Lots of information in the XC630 owner's manual.

There is a lot of interest around Nutanix and Dell Networking at the moment, especially around Software Defined Networking and Linux-based solutions such as Cumulus. The Open Networking (ON) range lets customers load the networking OS of their choice on the switch: Cumulus, Big Switch or Dell FTOS, to name a few. There is a load of information on the Dell Networking for Nutanix wiki.

The Dell Storage wiki for XC Series has loads of information about XC solutions and models if you need to go deeper. If there are any questions, please let me know in the comments below or on Twitter @danmoz.


Happy New Year from Planetchopstick

By |January 12th, 2016|Dell Storage, DellXC, Nutanix, Storage|0 Comments

Ahh, a brand new year. It smells fresh …

I'm not a New Year's resolution guy, because I always break them. I prefer nondescript dates: innocuous, inconsequential (I'm all out of big words). A good example is when I started the LCHF eating plan: I picked Sunday, Nov 2nd 2014 for no other reason than it wasn't a key date and it wasn't a Monday to start on :)

So I'm breaking habits, but like a lot of people I am making a resolution to post more, and about more things. And what an easy start: a post about posting more.

Just before the Christmas break we got a bunch of the Queensland Dell specialists and solution consultants together in the boardroom and we brought in all our toys.

  • Dell XC Nutanix
  • Dell Networking running Cumulus
  • Dell Storage SC Series SCv2020 iSCSI array
  • “Cough” Supermicro Nutanix :)
  • Dell Storage PS Series PS6210XV
  • And a bunch of other bits, including my trusty 1G TP-LINK 16 port switch, which I call THE WORKHORSE

We may run out of ports here

Cumulus switch deployment and configuration with Ansible on a Dell Networking S4048-ON open networking switch.

We spent lots of time playing around with the XC clusters: connecting SC storage into XC nodes, pulling cables, running failure scenarios. The networking team automated the deployment and configuration of Cumulus via Ansible, which was pretty awesome. Cam from Nutanix tested replication from ESX to Acropolis and did a restore. (I won't steal his thunder here, as he is going to do his own post about this at http://invisibleinfra.com/.)
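For anyone curious what driving Cumulus from Ansible looks like, here is a minimal sketch. Cumulus Linux is Debian-based, so the standard Ansible modules apply; the group name, template path and handler below are my own illustrative assumptions, not the playbook the team actually ran.

```yaml
# Hypothetical playbook sketch: push interface config to Cumulus switches.
# Inventory group, template path and package choice are assumptions.
- hosts: cumulus_switches
  become: true
  tasks:
    - name: Template out the bridge/VLAN interface configuration
      template:
        src: templates/interfaces.j2
        dest: /etc/network/interfaces
      notify: reload networking

  handlers:
    - name: reload networking
      # ifupdown2's ifreload applies the new config non-disruptively
      command: ifreload -a
```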

One thing we did (sort of) get working is the Cumulus integration into Prism. Very cool. Full doco on how to do that is here.

We learnt a lot, especially that we tried to do too much at once, which slowed us down a bit. We will be looking to run these once a quarter, and if we get better at it we'll get customers involved as well. I was using the hashtag #bnedellplugfest, but I think that was a bit long, and it would have been more interesting if it wasn't just me tweeting :)

As you can see, fodder for a few interesting blog posts. Now to follow through …

I hope everyone has a great year. I know that, as a storage guy at Dell, this year is going to be VERY interesting.

Cheers Daniel
