EqualLogic MasterClass 2012

On the back of the DSF sessions we are holding technical EqualLogic MasterClasses around the country, and soon in NZ too. They will be technical learning sessions with very little marketing fluff: an introduction session and two deep-dive advanced sessions. A lot of it will involve demos and examples of actual scenarios you might encounter day to day.

Who should attend?

  • If you currently manage an EQL environment and you want some free training and to find out what’s new and exciting.
  • Existing customers that weren’t able to attend Dell Storage Forum.
  • If you don’t know much about peer scaling storage arrays this is a good opportunity to grill the Dell storage experts and also talk to existing customers about their experiences.
  • Anyone who lives in Brisbane and wants to meet me :)

How to Register

Click here to register to attend.


It will be the same agenda for each state. You don’t have to attend all sessions, but why wouldn’t you? It’s free training and there is food, how can you say no. For the Sydney sessions:

09:00am – 09:30am  Registrations & breakfast

09:30am – 11:00am  101 – Core Technology: Entry-level storage course designed for users relatively new to the Dell EqualLogic series of arrays. Includes EQL architecture overview, load balancing & host MPIO, tiering, snapshots & auto-replication.

11:15am – 12:45pm  201 – Advanced Features: Builds upon the 101 course and includes auto-replication, Auto-Snapshot Manager, off-host backup techniques and SAN HQ monitoring.

12:45pm – 01:30pm  Buffet lunch

01:30pm – 03:00pm  301 – Advanced Features II: Covers EqualLogic SAN Headquarters reporting & analysis, the storage blade, networking considerations/DCB, synchronous replication, async failover/failback and the NEW Virtual Storage Manager 3.5.

Dates and Locations

Note there are two sessions in Sydney in Frenchs Forest, so you get to see our shiny sparkling Solutions Centre, which is an awesome setup.

Brisbane Tuesday 20 November Sofitel Brisbane Central. 249 Turbot St, Brisbane, QLD, 4000
Melbourne Thursday 22 November Grand Hyatt Melbourne. 123 Collins Street, Melbourne, VIC, 3000
Sydney Tuesday 27 November Dell Solutions Centre. Level 1, 14 Aquatic Drive, Frenchs Forest
Sydney Wednesday 28 November Dell Solutions Centre. Level 1, 14 Aquatic Drive, Frenchs Forest
Perth Thursday 29 November Hyatt Regency Perth. 99 Adelaide Terrace, Perth, WA, 6000
Adelaide Tuesday 4 December Intercontinental Hotel. North Terrace, Adelaide, SA, 5000

I will be assisting with the Brisbane sessions so make sure you come and say hi.

If you end up attending I would love to hear your feedback, good or bad, so just add it to the comments of this post.

By |November 8th, 2012|Education, EqualLogic|0 Comments

Dell Storage Forum Sydney a massive success

Those following me on twitter would have been sick to death of hearing me talk about it but the Aussie Dell Storage Forum (DSF) has come and gone and it was a bloody good day .. very long, but good. It was the best opportunity we have had to showcase Dell’s growing storage solutions and portfolio in Australia and the feedback so far has been very positive :).

I’m going to keep this post short, because @PenguinPunk has one of the best event overviews of all time (which saves me lots of time).

The keynote was the same style as the Boston DSF, with the whiteboard and talk about where Fluid Data is heading, especially around Fluid Cache and AppAssure. It was a bit different to Carter George’s slow-and-steady-wins-the-race style; Brett Roscoe led it well and was really funny. The room was so packed that I heard there were 50–80 people outside the room watching on the big screen.

Then there were four main session tracks: Compellent, EqualLogic, PowerVault and “everything else” :). I was in the everything-else stream presenting the first two AppAssure sessions with Andrew Diamond. I was handling the live demo .. yes .. live demo, danger .. and it was running on the world’s largest laptop – a Precision M6700. It had more guts than my R710 PowerEdge server, but after lugging it around for two days I am now an inch and a half shorter (in height, mind out of gutter please). It all went well and no one asked me to repeat myself because of mumbling, so I consider that a win!

For those who don’t know Andrew, I am like the Robin to his Batman for Dell Storage in Queensland and NT … because being from QLD, we love hanging out in our undies (thanks Dave). There’s not a whole lot that boy doesn’t know. I help him out with things like cranking up the twitter machine and knowing which meme is appropriate where :)

On the floor was where most of the action was with two racks full of kit and a bunch of the storage specialists answering questions and doing live demos. It had a bunch of new kit including:

  • The new EQL M4110 blade
  • SC8000 Compellent array
  • FS8600 Compellent FluidFS NAS box
  • The 60 drive “RoadKing” PowerVault MD3 array

Lunch for the masses apparently was good; I got to have a fancy lunch with all the media dudes because I am a “blogger”. I think I’m going to rock up to other tech or fashion shows and say “I’m a blogger, food please”. One piece of feedback I had about the public lunch was that the salmon was the driest they had ever eaten. I just assumed there was a salmon walking around cracking jokes like:

What are the two sexiest animals in the barn yard? “Brown chicken brown cow”.

So some stats if you care:

  • 411 customers, 4 were mine :)
  • 31 breakout sessions, 23 hours of content, 19 speakers and 1 sub woofa mate

We topped it off with a few tasty beverages at the end of the day, including these blue Fluid Cocktails – they had some kick – and Prinnie from The Voice singing some tunes. The girl can sing.

It doesn’t just stop there, here is a list of all the sessions that were run during the day. If there is something that interests you and you want to find out more about it, contact your Dell sales team and we’ll help you out. If you’re not currently a customer all good, either contact me through the blog or on twitter and we can go from there. Don’t forget to ask about DPACK assessment as well.

One of the highlights of the day was that this tweet made it onto the big board :)


Finally, here are a few photos from the day on Facebook.

The day was a great success and was exceptionally well run. I can’t wait for next year; it will be huge.

By |November 5th, 2012|AppAssure, Compellent, DSF, EqualLogic|0 Comments

Some good EqualLogic reference documents

Some of the EqualLogic technical documents have been updated on Dell TechCenter. The Snapshots and Clones one is a really handy doc. Enjoy.



Reference Architecture for a Virtualized SharePoint 2010 Document Management Solution (Solution Best Practice)

This white paper describes the results of performance tests for a virtualized SharePoint 2010 document management solution on an EqualLogic PS6100E storage array. The primary audience for this paper is IT managers and technical decision makers interested in sizing and performance results for a cost-effective SharePoint 2010 document management storage solution.

Silver Peak WAN Optimization and Dell EqualLogic PS Series (Tech Report)

This technical report details the benefits that Silver Peak’s WAN optimization appliances provide for Dell EqualLogic Auto-Replication deployments. This report includes performance results from the EqualLogic test lab.

Performance Baseline for Deploying Oracle 11g Release 2 Based Decision Support Systems using Dell EqualLogic PS6110XV Arrays (Best Practice Addendum)

This paper describes sizing and best practices for deploying Decision Support Systems (DSS) based on the Oracle® 11g Release 2 database using Dell™ EqualLogic™ storage.

Sizing MS Exchange 2010 on EqualLogic PS4100-6100 Arrays with VMware vSphere 5 (Solution Best Practices)

This white paper presents the results of a study centered on the characterization of I/O patterns of Microsoft Exchange Server when deployed on a storage subsystem based on an EqualLogic SAN, in a dense server environment built on Dell PowerEdge blade servers. It will guide Exchange and SAN administrators in understanding their messaging workload and predicting their SAN size requirements. The scope of this paper is a virtual infrastructure built on VMware vSphere, connecting to the SAN directly from the virtual machines, and storage sizing; server sizing is left aside.

EqualLogic Snapshots and Clones: Best Practices and Sizing Guidelines (Solution Best Practices)

This white paper is intended for storage administrators who are involved in the planning, implementation, configuration, and administration of EqualLogic iSCSI storage area networks and the use of snapshots and clones for the purpose of data recovery and availability.

Scaling and Best Practices for Virtual Workload Environments with the FS7500 (Solution Best Practices)

This paper provides best practices and performance optimization recommendations for the Dell EqualLogic FS7500 Unified NAS storage solution in virtualized environments based on VMware vSphere. It also shows how the EqualLogic FS7500 scales with the addition of storage resources for a specific virtual workload use-case scenario.

Dell EqualLogic Virtual Desktop Deployment Utility – Sizing and Best Practices (Solution Best Practices)

The purpose of this paper is to demonstrate how the Virtual Desktop Deployment Utility leverages the Thin Clone feature introduced in EqualLogic PS Series firmware version 5.0 to efficiently provision and deploy virtual desktops in a VMware View Virtual Desktop Infrastructure (VDI) environment.

By |November 5th, 2012|EqualLogic|0 Comments

DSF competitions and giveaways

Oooooh, chimpanzee that monkey news

The good stuff, the stuff that makes you want to sit up straight and turn off Gangnam Style for 5 minutes.

Competitions and prizes. Prizes and competitions. Actually, I don’t know why I am excited. I have been working for vendors for 10 years now so I’m never allowed to bloody win anything. Just because I can rig it, not fair.

Sorry, went to a dark place there


One thing Dell does well is laptops, so it makes sense to give away .. laptops. Sexy laptops. Laptops so hot they could make butter melt, but they’re not actually hot, they’re cool; so cool they’re hot. Umm, got myself mixed up there. Short story is they look nice and don’t warm your lap. I should be a novelist.

Up for grabs are:

Competition #1 – Twitter

Easy one, this one, and it’s already started; it doesn’t close until 6pm Thursday 25th Oct.

Using 140 characters or less, tweet what you learned at Dell Storage Forum Sydney 2012 and include the hashtag #DellSFau and mention @DellBizAU

Pour example –> “Hey @DellBizAU, before DSF I couldn’t tell your products apart because they all have the same bezel. Now I can spot a FS8600 in a snowstorm underwater #DellSFau”

Not my best but I wouldn’t give that to you because you would steal it and win and then I would have won but still not have won …. sob. If you can’t tell, I don’t have an XPS13 to call my own.

Win one of 3 XPS13 Ultrabooks

Competition #2 – Stamp Your Passport

This is even easier. Go to all 12 vendor booths, get your passport stamped, and take as much swag as you can. Win-win.

This is a draw; it closes at 5pm on Wednesday the 24th and is drawn 20 mins later.

Win an XPS All-In-One

Competition #3 – Event Feedback Form

Yes I know you don’t want to do it but there could be a laptop in your future  – my laptop you thief!

Just fill in the event feedback form before 5pm on Wed the 24th and it will also be drawn 20 mins later.

Win an XPS13 Ultrabook powered by Intel (the other ones are also powered by Intel but this one is sponsored by Intel. You see what I did there?)

Pretty bloody easy, and remember, if you win and don’t want it, give it to me and I will give you a year’s worth of high fives – promise.

By |October 22nd, 2012|DSF|0 Comments

New Dell Storage products just released in time for DSF Sydney

The good thing about trade shows is new stuff, and DSF Sydney is no exception. Actually there are a few so I will keep this to the point.

  • Hybrid EqualLogic storage array – SSD and NLSAS in the same shelf.
  • Compellent FS8600 – Unified NAS for Compellent
  • AppAssure 5.3 with Linux support
  • FluidFS support for Quest SharePoint Maximizer
  • Dell Active Infrastructure

Hybrid EqualLogic storage array

The PS65x0ES is a pretty nifty box. It is based on the 48-drive sumo form factor, but with 7 SSDs and 41 NLSAS drives. The array acts as one pool of storage and tiers data automatically, giving you the SSD punch with the NLSAS fat capacity. It would suit apps like VDI, some databases and high-bandwidth media.

  • PS6500ES is the 1Gbit version
  • PS6510ES is the 10Gbit version

Another bonus is that the new VMware HIT Kit will be released. The HIT Kit version is 3.5 and it now supports ESX 5.1 and the new EqualLogic v6 firmware. I wrote it like that because a lot of the marketing material says HIT/VMware 3.5, which confuses everyone because they think it’s for VMware version 3.5.

Compellent FS8600

Being a NAS guy at heart, I was personally very happy to see the FS8600 get released. It’s high-performance, scale-out NAS for Compellent. The FS8600 runs the same FluidFS system that the NX3600 and FS7600 use, but it is fully integrated into the Compellent management systems.

It’s tweaked to maximise the thin-all-day-every-day-ness of Compellent, as well as taking full advantage of the automatic tiering that made Compellent famous in the first place.

It’s a 2U box that contains two clustered controllers, so they are active-active with mirrored write cache. We can get up to 4 x FS8600s to act as a single namespace, serving out a single share up to 1PB in size. Those are the figures, but no one I know needs a filesystem that big at the moment, at least not in Australia. On the sister FS7500/FS7600 systems I more commonly see shares around the 50–100TB mark.

It supports replication, AV, quotas etc, but I think its main advantage is that because it front-ends Compellent, Compellent will auto-tier your NAS data, so the new and hot data is on tier 1 and the old stuff no one is using gets moved to cheap tier 3 disk.

I intend on doing a more in depth techo post in the next couple of weeks. 

AppAssure 5.3 with Linux support

AppAssure is another product I have been meaning to write about, but time and sleep have gotten in the way. It’s a next-gen backup system that Dell bought a few months ago. It uses snapshots and deltas to trickle-feed backups over the course of the day, forever-incremental style. Instead of the nightly big-bang backups that smash the system, the load is spread over the course of the day and sent to a centralised core system to minimise the impact on prod systems. It’s a great way of looking at backups and I have been treating it like my little baby since it came out. It doesn’t fit everywhere, but for a lot of customers I meet it’s a great fit, and quite affordable too.

Again, I intend on adding much more content about AppAssure .. promise. In the meantime, it’s free for a 14-day trial and takes about 30 mins to set up … pieceofpissmate

The big addition is Linux support, but the bulk-deploy features will save a lot of admins a fair chunk of time. Here are the main updates in AppAssure software (v5.3.1):

  • Capability to protect Linux servers

      Red Hat Enterprise Linux (RHEL) 6.3 (32 and 64 bit)
      CentOS 6.3 (32 and 64 bit)
      Ubuntu 12.04 LTS (32 and 64 bit)
      SuSE Linux Enterprise Server 11 SP2 (32 and 64 bit)

  • Bulk Deploy and Bulk Protect using Active Directory or vCenter
  • Command Line and GUI Usability Enhancements
  • Updated Reporting and Embedded Help

FluidFS support for Quest SharePoint Maximizer

If you didn’t know, Dell acquired Quest Software last month, which added about 42000* new products to the Dell Software offerings (*slight exaggeration).

One of those products is Quest Storage Maximizer for SharePoint, which now supports Dell’s FluidFS platform.

“Quest Storage Maximizer (SMAX) is the most efficient and lightest weight external storage solution for SharePoint on the market. Files that are typically housed within SQL can be externalized and stored on a Fluid File System (Fluid FS) thus reducing the burden on SQL and increasing SharePoint performance.”

What that is saying is that using the Quest tool, we can use FluidFS as the storage for all your SharePoint data, instead of messy blobs inside a SQL database. Not only does SharePoint go quicker, but it can scale a hell of a lot easier.

Dell Active Infrastructure

This one I’m still catching up with. Dell vStart all-in-one solutions have been around for a while; this now adds converged infrastructure directly in the blade chassis, with a new IO Aggregator and unified management across the system. Dell is also offering these systems with pre-integrated, optimised software and solutions like SharePoint out of the box, which will be fantastic. I am picturing a situation where a customer tells me they are looking at implementing SharePoint 2013 and will need to buy some more storage and compute, as well as some services to set it all up. Instead I give them one product number and the new system arrives in a few weeks, already racked, already cabled, with SharePoint installed and running and ready for production. Tres sexy.

It’s only just been announced, so there’s not a lot of in-depth info yet; let’s hope what I am thinking is right! :) And I also hope you can customise those stickers on the rack, like this.

That’s it

DSF is going to be a great day, it’s not too late to register and come along. Remember, use the #DellSFau tag and if you are leaving your computer don’t forget to

By |October 22nd, 2012|AppAssure, Compellent, DSF, EqualLogic|0 Comments

DSF Sydney nearly here – Wed 24th Oct

This Wednesday the 24th Oct the inaugural Dell Storage Forum kicks off in Sydney. I’m really looking forward to it, and not just because this will be my first time presenting in front of a large audience (well, co-presenting). We have had hundreds and hundreds of registrations so far, so if everyone turns out it’s going to be awesome. I will be doing the AppAssure sessions so come say hi :)

It’s going to be a pretty full day; the presenters have been asked to get there at 7am to get set up – that’s 6am Brisbane time. Curse you, non-daylight-savings-time Brisbane. I’m going to be a bleary-eyed wreck.

The hashtag for the day is #DellSFau. Remember that, because a prize could potentially be in your future.

One thing I am really happy with is the level of Dell US exec attendance, for the keynote and sessions throughout the day.

  • Brett Roscoe – GM & Executive Director of PowerVault and Data Management Business
  • Phil Davis – Vice President of Enterprise Solutions Group, Commercial Business APJ
  • Mike Davis – Director of Product Marketing, Dell FluidFS and Ocarina
  • Tim Plaud – Enterprise Technologist, Compellent
  • Marc Keating – Enterprise Technologist, EqualLogic
  • Harmeet Malhotra – Storage Director APJ
  • Alvin Kho – Enterprise Technologist APJ
  • Noorul Huq – Enterprise Technologist APJ

The keynote will highlight the future of Dell storage and where we are heading. It will be like the keynote at the US Dell Storage Forum.

Dell Storage Forum Boston Keynote – 50 mins

There are 5 different streams throughout the day, so there is plenty of variety of content to cater for most interests, and there is a healthy amount of NDA sessions to show you what’s up and coming.

On top of this goodness, of course, all the Dell ANZ storage team will be there, helping out with demos and displays of our kit and also answering any questions you might have.

Other Goodies

There will be competitions and giveaways and they are pretty bloody good. I am just confirming the exact prizes and I will put them into another post soon.

All delegates will get a bag (hopefully we don’t run out), and it’s a good bag – hopefully I can snaffle one.

To get there

  • Date
    Wednesday 24th October
    – Registrations & breakfast from 7:45am
    – Conference sessions: 8:30am to 4:45pm
    – Networking drinks: 4:45pm onwards
  • Location
    Australian Technology Park
    Bay 4, 2 Locomotive Street, Eveleigh NSW 2015

See you there,

Cheers, Daniel (@danmoz)

By |October 22nd, 2012|DSF, Storage|0 Comments

(*#&*^$$^ Hackers blah grr new site $*%

Unfortunately the site got hacked again. Not sure how they are getting in; I changed all passwords including DB passwords, checked .htaccess and went through all the hacking checklists, but they still got in. I tried restoring from backup but the backup was corrupted as well. DOH

I have about 10 other sites that run off the same server and they have been fine. They must have been getting through via a vulnerability in one of the plugins I was using. Who knows. The main thing is it was all too hard.

So I have deleted the site and am starting from scratch. Looking at all the posts, the main ones anyone ever looked at were the CAVA ones, so I kept the text of those and I will post them soon.

By |October 22nd, 2012|Storage|1 Comment

Dell Storage Forum Coming Down Under

Good news everyone, the Dell Storage Forum is coming to Sydney, Australia on October 24th 2012. It will be similar to the DSF in Boston, London, Paris and Mumbai and will be a mix of futures, tech deep dives and a fair bit of hands-on. It will be a one-day event that will go through the growing array of arrays Dell has – Compellent, EqualLogic and PowerVault – and also all the new sexy stuff like the DR4000, DX and AppAssure.

It’s going to be held at the Australian Technology Park, Locomotive St in Eveleigh, near Redfern train station. Of all the years I lived in Sydney I don’t think I have ever been to Eveleigh, so it should be interesting to check it out. You can get all the transport details here.

To kick things off, here is a really good video of Dell’s storage vision from the first DSF in Boston earlier in the year. It’s 50 mins long but gives a great insight into how all Dell’s hardware and acquisitions are coming together to form Fluid Data. It covers the new PowerEdge 12G servers, Dell Storage, RNA Networks and AppAssure to give a really interesting and different view on storage consumption and management.

There will be some great speakers from around the world.

An interesting touch is a pre-written “Convince your manager” document that has all the reasons, incentives and excuses you need to get your manager to fly you to Sydney. You can download it, insert your name and send it to your manager – easy.

It’s free to attend and will be a great day, so hopefully you can come. I am still trying to work out what my involvement on the day will be. I might have to do some talking, but I’m hoping I can be more involved in the social side of it, or at least something fun. Expect a few more posts leading up to the event; I’m hoping to organise a tweetup à la VMware, or something similar.

Follow me on twitter @danmoz

By |October 1st, 2012|DSF|0 Comments

CAVA troubleshooting

This post is to help you troubleshoot CAVA install and running issues and to give you a set of steps you can follow to try and find where the fault is.

1. What is CAVA?
2. CAVA considerations and basic setup
3. CAVA troubleshooting (which is really why I am doing this) (this post)

Before I start, let me say that EMC support can help you with this. They won’t install it for you, but they will help you out as much as they can, and they have access to secret-squirrel backend stuff if the problem turns out to be nice and hairy.

The Basics

Going back to the first post, here are the events that trigger virus checking. If one of these happens and you have CAVA enabled, the file will get scanned:

  • Modifying & closing an existing file
  • Creating and saving a file
  • Moving or copying a file
  • Restoring a file from backup
  • Renaming a file with a different extension
  • Scan on read if the access time is earlier than the reference time (CIFS clients)

Note: that last point is in reference to your AV virus definitions. When you receive an updated virus definitions file from your AV vendor, it changes the reference time in CAVA. So, if a file is accessed on Monday and is scanned, then AV updates on Tuesday, when the same file is accessed on Wednesday it will get scanned again against the new definitions file.
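The scan-on-read rule above can be sketched as a couple of lines of logic. This is an illustrative model only, not EMC code – CAVA tracks the reference time internally:

```python
from datetime import datetime

def needs_scan(last_scanned, definitions_updated):
    """Rescan a file on read if it has never been scanned, or was last
    scanned before the current virus definitions arrived (the reference time)."""
    return last_scanned is None or last_scanned < definitions_updated

# File scanned Monday, definitions updated Tuesday: a Wednesday read rescans it.
monday = datetime(2012, 10, 22)
tuesday = datetime(2012, 10, 23)
print(needs_scan(monday, tuesday))   # → True (stale scan, rescan)
print(needs_scan(tuesday, monday))   # → False (scanned after last update)
```

Same idea as the Monday/Tuesday/Wednesday example in the note: the file's last scan time just has to be newer than the definitions file.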

Common Commands via the CLI

Replace server_x with the data mover you are accessing, eg server_2.

  • server_viruschk server_x – shows if virus checking is running, and the scanning rules
  • server_viruschk server_x -audit – shows CAVA scanning stats and the scan queue. Very useful to see if the CAVA queue is blocked
  • server_log server_x – see if there are any errors on the data movers
  • server_setup server_x -P viruschk -o start=64 – start the virus checker service on the data mover
  • server_setup server_x -P viruschk -o stop – stop the virus checker service on the data mover
  • server_viruschk server_x -fsscan fs1 -create – start a virus scanning job on a file system
  • server_viruschk server_x -fsscan fs1 -delete – stop a virus scanning job on a file system
  • server_viruschk server_x -fsscan fs1 -list – show the scanning status

How to verify the CAVA install

  • Make sure the correct AV domain user has the “EMC Virus Checking Privileges”, using the Celerra MMC.
  • The AV engine should be installed BEFORE the CAVA agent, except in the case of Trend Micro.
  • The AV domain user needs to be a local admin on the CAVA CIFS server and all AV servers.
  • Make sure network share scanning is enabled in your AV software or it won’t scan the files on the NAS. It’s in the doco but this one pops up a bit.
  • The include/exclude options in the viruschecker.conf file need to match the settings on the AV server. If you have scan *.doc files enabled on the data mover but not in the AV engine, the Celerra will still send the scan request and the AV engine will ignore it.
  • The CAVA CIFS server cannot be in a VDM.
  • No spaces in your viruschecker.conf file. Look at the example here.
  • The CAVA service must “Run as” the domain AV user account. You can see the pattern here – the domain user is very important, as CAVA only works with CIFS. The user can only scan files it has access to, so it needs to have access all the way through, from the data mover to the AV engine.
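To give you a feel for the file format, here is a rough sketch of a minimal viruschecker.conf. Treat it as illustrative only – the parameter names and values below are from memory and vary between DART versions, so check the CAVA release notes for yours. Note there are no spaces around the equals signs:

```
masks=*.exe:*.doc:*.xls:*.zip
excl=*.tmp
addr=10.0.0.21:10.0.0.22
CIFSserver=cavacifs01
maxsize=100000000
shutdown=viruschecking
```

The masks/excl lines are the include/exclude lists that must match your AV engine settings, addr lists the CAVA agent servers, and CIFSserver names the CAVA CIFS server on the data mover.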

Troubleshooting CAVA

This is reasonably close to the top-down list of steps I go through when looking for an issue with CAVA.

It no worky!

1. Restart the viruschk process on the data mover and then check the server_log.

   Nine times out of ten this will show me where the issue is. Look at the viruschk output and see if the process makes contact with the CAVA agent on the AV servers. It should list the IP address and the AV engine vendor and version. If it doesn’t … NEXT!

2. Restart the AV engine servers.

   Hey, it’s not working, and if you read my other posts there are no other applications on these machines, so it’s fine … right? Once they are back, restart the viruschk process and check the server_log (see a pattern?). You can also check the Windows event logs to see if the CAVA agent is starting properly and finding the installed AV engine.

3. Check the viruschecker.conf file.

   Do you have the right options, IP addresses and CAVA CIFS server name? Has it been uploaded right? If you are not sure, cut it back to its bare bones and try again.

4. Do you have enough rights?

   Make sure the domain AV user has been given access in all the right places. I have mentioned them a few times in these posts so I won’t go through it again.

5. Can the data mover access the AV servers? Can the AV server access the data mover?

   Use the server_ping server_x -i command and specify the CAVA CIFS server interface. If you can’t ping it, it may be a routing issue on the data mover, or maybe the interface isn’t tagged with the right VLAN.

CAVA is running but not scanning anything.

If viruschk is running and you can see that the AV engines have made a connection to the agent, you can cross those off your list for now. This can still be a permissions issue, so check that first. Make sure the AV engine has network scanning enabled. The include/exclude lists need to match between the AV engine and the viruschecker.conf. To check if it’s scanning, use the server_viruschk server_2 -audit option, or if you want to be fancy, the command below on the control station (ctrl+c to exit):

    watch -d server_viruschk server_2 -audit

You can set debug logging on the data mover:

    .server_config server_2 "param viruschk Traces=0x00000004"    # turns on debug for AV in the server_log
    .server_config server_2 "param viruschk Traces=0x00000000"    # turns off debug for AV in the server_log

You can also look at the files in the AV queue. You can browse to \\\c$\.etc and look at the viruschecker.queue file. If there is a bottleneck you can see the file currently being scanned and take action from there. If AV is working OK you won’t be able to see the files, as AV will be too fast for you.

On the Windows side you can enable debug via the registry: under HKLM > software > EMC > CAVA, set the ‘Sizing’ value to 1 to turn on debug for AV in the event log, and back to 0 to turn it off.

It go slowy!

It can go slow for a number of reasons. Here are a few I encounter.

Is the include scan list in the viruschecker.conf appropriate?

If you have a mask of *.*, CAVA will scan every file that is accessed on the Celerra, regardless of whether it can harbour a virus or not. By refining this list you can reduce the number of unnecessary files that are scanned and speed up the process.

Is there a migration happening?

Are there any robocopy sessions running? A migration scheduled? Are users rearranging the file system? This can flood the AV process and steal scanning cycles from normal user traffic, giving the appearance that the NAS has slowed down when the real issue is that the CAVA queue is filling up. The fix? Nothing great, I’m afraid. You can disable CAVA during the copy/migration and then re-enable it, or you can ask your users to be mindful of the impact and do large copies out of normal working hours … sorry, I crack myself up. A good mask list can help minimise this impact.

Use stats to help you out.

You can use the server_viruschk server_x -audit command to see how fast files are being processed. It will show you the average time to scan a file in ms; anything over 80 isn’t good. It will also tell you how many files are in the queue and how many files have been scanned since CAVA was started. If you restart CAVA these stats reset.

Are there a lot of large files being accessed?

What about zip files? Zip files are compressed (I’m glad that’s off my chest), so the CAVA process can’t get the signature of the file. It has to send the entire file to the AV server to be scanned. If the file is massive this can take some time and clog up the AV queue.

How often do your AV virus definitions get updated?

If you have a nightly update cycle, you potentially have a situation where every day, when someone accesses a file, it has to be rescanned. If you are having load issues, try extending that update cycle to every 3 or 4 days and see what happens.

    Do you have enough AV engines to process the load?

    Are the AV servers overloaded? Do you need more? You have bought a NAS supercar, designed to get files to users as fast as possible. CAVA sits between the NAS and the users, and you don’t want it to be a bottleneck. Users don’t know about AV; they just think "the NAS is slow", and then management turn to you asking why you bought a slow NAS. Are you loco? Sizing this is very important, both CPU and network. Especially if the AV servers are virtual machines: they might be put in a low resource pool and starved for air, or the ESX farm might be smashed and the network slow. This is a tricky one — unless the VM admins are helpful they tend to blame the app, and then you’re stuck unless you can prove it’s not at fault. Save yourself the hassle and spec them correctly up front.
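As a rough way to think about the sizing, here's a back-of-envelope sketch using Little's law (my own arithmetic, not an EMC sizing tool — the 20-threads-per-server figure and the traffic numbers are assumptions you'd replace with your own):

```python
# Back-of-envelope sizing sketch (not an EMC tool):
# concurrent scans in flight = file-open rate x average scan time
# (Little's law), then divide by the threads each AV server can work.
import math

def av_servers_needed(opens_per_sec, avg_scan_ms, threads_per_server=20):
    in_flight = opens_per_sec * (avg_scan_ms / 1000.0)   # Little's law
    # Never fewer than two servers, so one can fail or be patched.
    return max(2, math.ceil(in_flight / threads_per_server))

print(av_servers_needed(opens_per_sec=200, avg_scan_ms=50))    # 2
print(av_servers_needed(opens_per_sec=1000, avg_scan_ms=80))   # 4
```

If the real -audit average creeps up, the same arithmetic tells you how many more engines the load now needs.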

    Any databases on the NAS?

    Antivirus and databases aren’t friends. Don’t worry, I tried. I had them over for dinner and put Mr Bean on TV and neither of them laughed. Old grumblepants I called them. Don’t let AV scan databases. How do you control it? Create a new file system and mount it with the noscan option; AV will then ignore that file system.

    Is there a manual scan running?

    If you have control of your environment then you will know whether you have run a manual scan or not. But if there are multiple admins, someone could run a scan and not tell you. Just use the server_viruschk server_x -fsscan-list command to see if anything is running.


    There are deeper options, but they are more internal tools. If you are stuck at this point then EMC support should be your next stop.

    That’s the end of the CAVA posts, I hope they help you through any pain points. As always, if you have any questions I’m happy to answer them unless you are clevererererer than me.


    By |September 3rd, 2012|EMC|2 Comments

    CAVA Considerations and basic setup

    This is the second post out of 3 about CAVA on the Celerra/VNX File.

    1. What is CAVA?
    2. CAVA Considerations and basic setup (this post)
    3. CAVA troubleshooting (which is really why I am doing this)

    Between the first post and this one, EMC released the new VNX range to replace the CLARiiON and Celerra. The new version of the Celerra is called VNX File and the name ‘Celerra’ has been dropped, which is a shame. So from now on I will use VNX File instead of Celerra. This post is a mash of my own notes and quotes from the doco.

    CAVA is part of the CEE or VEE framework, which is a mix of APIs, agents and events that enable things like quota management, antivirus and auditing on the VNX File. Mainly it’s for partners and 3rd party apps to interface with the Celer… umm … VNX. Bah, this is going to be a hard habit to break.

    How much do you care about it? Well, if you are just trying to run CAVA then not much, just note that all the CAVA downloads on Powerlink are in the VNX Event Enabler pack. At the time of writing you can still get it via

    [box type="2"]
    Home > Support > Software Downloads and Licencing > Downloads C > Celerra Anti Virus Agent (CAVA)

    but that might change when the VNX name starts taking over. The download is an ISO of about 120MB, which includes both the 32-bit and 64-bit versions. I have never understood why they weren’t individual downloads. Powerlink is a bit slow at the best of times outside the US, and it’s excruciatingly slow over a 3G card at a customer’s site when you forgot to download it before you got to site and you only have 3 hours to finish the job. But I would never do that … *cough*.

    Another tool you need is the Celerra MMC on the NAS Tools CD. You will get this as part of the software delivery when the hardware turns up. Install it on a machine where you can run it with heightened permissions (domain admin). If you can’t find the software, you can download it from Powerlink at
    [box type="2"]
    Home > Support > Product and Diagnostic Tools > Celerra Tools > NS-960
    and get the Celerra Network Server Version Applications and Tools CD (277MB). That’s right, 277MB, and from it you want a 2MB file. Handy hint – don’t misplace the CDs 😉 This version still works fine with the VNX.

    What you need:
    [list type="checklist"]

    A downloaded copy of the VEE Celerra install
    Minimum 2 x Windows servers to run the AV engine software (McAfee, Symantec etc.).
    A Windows domain service account to run the AV process and access files on the Celerra.
    Celerra NAS Tools Microsoft Management Console (MMC)
    A CIFS server on the data mover NOT in a VDM.
    A configured and installed viruschecker.conf file.

    [box type="info"]
    As far as I can tell, even though the names have changed, all these steps are the same for Celerra and VNX.

    So onto the install. All of these steps are covered in more detail in the "Using VNX Event Enabler" documentation available on Powerlink, which includes detailed instructions for installing and configuring antivirus engines like McAfee and Sophos. Because of that I won’t go into too much detail, just highlight the steps.

    Download and install, CAVA, your AV software and the NAS Tools MMC files.

    Provision your CAVA Windows servers, either physical or VMs. Install your AV software and then the CAVA agent on each machine. The CAVA install is very basic: next, next, next, finish.

    For our example I’ll call the AV servers AV1 and AV2, with IPs of …

    Create the CAVA CIFS server

    This is the easy bit: you just need a CIFS server on the data mover with an IP address. That’s it. It doesn’t need storage, but it must be on the physical data mover and not in a VDM.

    For our example I’ll call my CIFS server BOB; it’s on server_2 with an IP of …

    Create the Domain User Account and grant access

    The CAVA installation requires a Windows user account that is recognised by Celerra Data Movers as having the EMC virus-checking privilege. This user account enables the Data Mover to distinguish CAVA requests from all other client requests. To accomplish this, you should create a new domain user, assign to this user the EMC virus-checking right locally on the Data Mover, and run the CAVA service in this user context.

    For our example I will call the user CAVAservice

    Using compmgmt.msc and browsing to the CAVA CIFS server (BOB), create a new local group called Viruscheckers and add the CAVAservice user to it. While you are there, add the user to the local Administrators group on the CAVA AV servers and the CAVA CIFS servers.

    Using the NAS MMC, browse to the BOB server and add the Viruscheckers group to the “EMC Virus Checking” section. If you don’t have this, CAVA no worky.

    Finally, you need to change the CAVA service on the AV servers to "run as" the CAVAservice user. You can do this via services.

    Viruschecker.conf settings

    The viruschecker.conf file defines the Celerra virus-checking parameters for each Data Mover in the domain.

    This is an example viruschecker.conf configuration file. It lists the AV servers as well as the rules for what to scan. This is a common example of the file and can be customised.

    maxthreadWaiting=40       (20 on each AV server)
    CIFSserver=<CAVA CIFS server name> eg. BOB
    Addr=<IP addresses of AV engines, separated by semicolons>
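The masks section of my example file didn’t survive the formatting here, so purely as an illustration (parameter names are from my notes, the IPs and extension lists are invented — check the "Using VNX Event Enabler" doco and your AV vendor before using any of this), a populated file might look something like:

```
maxthreadWaiting=40
CIFSserver=BOB
Addr=192.168.1.21;192.168.1.22
masks=*.exe:*.com:*.dll:*.doc*:*.xls*:*.ppt*:*.zip
excl=*.pst:*.tmp
shutdown=viruschecking
```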





    [box type="info"]You can see that’s a large list. Always consult with the antivirus vendor to determine exactly which file types cannot and/or should not be scanned in real-time network scanning and what the workarounds are.[/box]

    Upload the viruschecker.conf to the data mover

    There are three ways of getting the viruschecker.conf file to the data mover.

    Use server_file server_2 -put viruschecker.conf viruschecker.conf. Think of that command like an FTP put: the file has to be on the Control Station and be called viruschecker.conf.
    You can create the viruschecker.conf file on your Windows machine and then copy it to \\BOB\c$\.etc\viruschecker.conf
    Use the NAS MMC to generate the file and it will save it to the right location.
    Start the CAVA service on the data mover.

    There are two ways of doing this:

    Use the NAS MMC to start the service
    Run server_setup server_2 -P viruschk -o start=32. The 32 is the number of CIFS threads you want to use; 32 is the default.
    Check to make sure the sucker is running!

    Seems obvious, but it is often overlooked, especially because the normal behaviour is that if something is wrong, the virus-checking service is stopped and the Celerra goes about its normal business. "What problem? I don’t see no stinking problem."

    I’ll go into it more in the troubleshooting post, but like everything else Celerra, always run server_log server_2 before you do anything else. If there is anything wrong it will pop up there. If you don’t see anything suspicious, use the server_viruschk commands to see if it’s scanning:
    [box type="2"]
    server_viruschk server_2
    server_viruschk server_2 -audit

    That’s about it for the installation and setup. There is a lot more detail in "Using VNX Event Enabler" including command definitions and screenshots.

    VNX File CAVA Considerations

    Below are some considerations when setting up and deploying CAVA.

    Virus Checker Configuration File Considerations

    (Filename viruschecker.conf, resides on the Data Mover)
    [list type="numlist"]

    The mask= parameter can greatly impact virus checking performance.  It is recommended that you do not use mask=*.* since this setting scans all files.  Many file types cannot harbor viruses, therefore, mask=*.* is not an efficient setting.  Most AV engines do not scan all file types.  Also scans of file types with an unknown extension will result in the entire file being scanned, increasing network bandwidth and resources.
    It is recommended that .pst and other similar “container files” be left out of the scanning queues for the Celerra AV functionality to work properly.  McAfee and Symantec do not scan Outlook .pst files and recommend excluding them from scans.  Either scan at desktop level or use Exchange server specific products or Exchange client snap-ins.  This is good advice for other AV products as well.
    It is recommended that you do not set up real-time scanning of databases.  Accessing a database usually triggers a high number of scans, which in turn can cause a large amount of lag.  To ensure that your database files are virus free, you should schedule regular scans for times when the database is not in use.  You can schedule scans through your AV engine.  See knowledgebase article emc60746 for specific extensions that should be excluded.
    Most AV vendors recommend excluding real-time network scanning of compressed and/or archive files.  Scanning compressed or archive files requires a lot of system resources.  In order to scan a compressed or archive file, the AV software must extract it to a temporary location, scan it, and then replace the file.  This functionality requires RAM, drive space, and CPU, thereby degrading the overall performance of the server.
    [box type="info"]In a complete network AV solution any infected files will be found as they are added to, moved from, or launched from compressed or archive files.[/box]

    Depending on the 3rd party AV package, contents of compressed and/or recursively zipped files may or may not be supported for scanning in real-time for network shares.  If the vendor does not support scanning compressed files or recursively zipped files in real-time or if scanning of compressed files is not enabled, they should be excluded from scans.
    Due to known issues with antivirus software compatibility with Microsoft Excel and MS Project software, add the following 8 characters “????????” to the exclude list as a workaround.
    This avoids a timing or deadlock issue with the 8-character temporary file that is created when files are saved or modified.  (For more information refer to knowledgebase article emc60253.)
    For maximum protection antivirus software vendors suggest scanning all executable files and files that contain executable code.  For maximum performance and to reduce network bandwidth and resources, exclude file types that do not contain executable code.  Always consult your antivirus vendor for latest information.
    Recommend that the “shutdown=” setting in the viruschecker.conf file is configured to shutdown=viruschecking.  This is the action taken when the VC client on the Data Mover does its routine polling to determine which AV servers are in “ONLINE” status and cannot find any “ONLINE” AV servers.  An alert should be configured when this action is triggered, for notification and to get the AV servers functioning again.  To protect the server during the timeframe the virus-checking service was offline, run a manual scan or use the scan on first read feature.
    This is recommended over allowing virus checking to continue when no AV servers are available.  If all the AV servers are offline for an extended period of time, the file types that meet the criteria for a virus check will wait in the collector queue until an AV server comes back “ONLINE”.  The files in the queue are locked to the user until the file is successfully scanned.  Each scan request ties up a thread on the Data Mover which can eventually exhaust all the Data Mover threads over a period of time.

    [box type="info"]Status “ONLINE” indicates successful communications between the VC client on the Data Mover, CAVA, and the 3rd party antivirus software running on the AV server(s).  To verify AV server(s) status at any given time run the server_viruschk server_x command.[/box]

    AV Server Workstations

    [list type="numlist"]

    Make sure all file types that are configured to meet the criteria for a virus check on the Data Mover can be checked on the AV servers.  3rd party antivirus “File Types” scan and exclude settings should match the viruschecker.conf file settings.  The VC client on the Data Mover should not be configured to trigger scan requests of file types that the AV servers’ antivirus software is not configured to scan.
    For every AV vendor EXCEPT Trend Micro, you need to install the AV engine first before the CAVA agent. For Trend you have to install CAVA first, then the AV software. This has got us a few times.
    AV servers should be strictly dedicated for CAVA use only.  They should NOT also be used for other windows services such as a Domain Controller, DNS, WINS, backup server, CIFS client, etc.  Each AV server should only be running one 3rd party antivirus software product at a time.
    The dedicated “AV user” domain account that the CAVA service starts under should always be configured so that the password doesn’t expire.  Make sure both the CAVA and 3rd party software services on the CAVA server are starting using the AV user domain account, and not a local Admin or AV user account.
    If the AV servers are “managed” by a group policy management software package from the AV vendor, the AV servers should be managed in a separate policy to safeguard the required user, permissions, and scanning options required for Celerra virus checking with CAVA from regular workstation settings.
    The AV server(s) should not be used for copy and/or scanning proof of concept testing.  These tests should only be executed on the client side.
    If an AV server is going to be temporarily or permanently removed, then its IP address should be removed from the viruschecker.conf file before the CAVA service is shutdown.

    Datamover Considerations

    [list type="numlist"]

    CIFS should be completely configured, tested, and working before setting up virus checker.  Before using Celerra virus checking for production use, test the configuration to verify it is suitable for the environment by simulating a production load on the Data Mover(s).
    Always ensure that the number of CIFS threads in use is greater than the number of virus checker threads.
    Do not modify the param “maxVCThreads=” unless directed by Engineering/TS2.
    VirusChecker can only be configured on a physical Data Mover using a regular CIFS Server and NOT on a CIFS VDM Server, since only the physical data mover root can host the CHECK$ share used for viruschecking operations.

    Other Considerations

    [list type="numlist"]

    Monitor the server_log(s) and/or system log (/nas/log/sys_log) for “VC: highwater mark reached” (peak activity) entries.  These messages may indicate the need for additional AV server(s).
    Avoid using real-time network scanning of Celerra shares in addition to the Celerra virus checker feature.  Client AV scanning should be disabled for Celerra CIFS shares; scanning the same shares twice can result in sharing violations and impact performance.
    [box type="info"]It is recommended to enforce desktop scanning for all clients.  It is important to understand that the CAVA solution is primarily meant to protect the data on the Celerra File Server.[/box]

    Virus checker must be disabled during migrations.  Files should be scanned prior to the migration or after it’s completed.  The virus checker solution assumes you are starting with a clean filesystem.
    Care must be taken when sizing a virtual machine as a CAVA server; all the sizing tools assume a physical machine.
    Protecting data against viruses is a critical service, and you do not want other services running in different VMware machines to starve it of resources.  If that happens, the DART queue of scanning requests can build up, affecting file access.  Hence the recommendation is to run CAVA in a non-VMware environment until substantial work has been done to understand guidelines for running CAVA in a VMware environment.

    I hope that helps someone out there. I’ll try and create the troubleshooting post as soon as I can.


    By |September 2nd, 2012|EMC|8 Comments