# Category Archives: CMS

## Automatically Update your MDT Reference Images

Typical workflow for handling reference images in MDT:

1. Import OS install media from an ISO you downloaded off Technet / Microsoft Volume Licensing
2. Build a task sequence to deploy it (I call this my Build Image task)
3. Capture it to the Captures folder on your deployment share
4. Import the captured OS
5. Build a task sequence to deploy it (I call this my Deploy Image task)
This looks mundane, but doing steps 3, 4, and 5 over and over sucks! Trying to remember exactly how you customized your task sequence is no way to live when it would be far easier to reuse the existing Deploy Image task when updating your reference image. I'd also love it if I weren't the only one who could update reference images... so I figured it all out, and now I live happily ever after!
It's a little extra work up front, but here's how you can turn updating your reference images into a one-step process that anyone can perform:
1. Create a script called Relocate.cmd in your Scripts directory off the Deployment Share that contains the following one-liner:
• move /Y "%DEPLOYDRIVE%\Captures\%1.wim" "%DEPLOYDRIVE%\Operating Systems\%1"
2. Create your Build Image task. Keep the ID short. For example, let’s say we’re deploying a general purpose Windows 8 image.  My Task Sequence ID that builds the reference image is 8_GP
3. Run this task sequence and capture your reference image. Make sure to save it to the Captures folder and name it after your task sequence. For my example, this is .\Captures\8_GP.wim
4. This one time, you’ll need to use the Import Operating System wizard. Be sure to name the folder for this operating system to match your task sequence that builds the reference image. For my example, I have .\Operating Systems\8_GP\8_GP.wim
5. Go back into your Build task sequence and add a custom task that runs the following command as the final step (you don’t need to set the Start In value):
• cmd /c %SCRIPTROOT%\Relocate.cmd %TaskSequenceID%

6. Create your new Deploy Image task sequence using the OS from the previous step. I recommend that for your Task Sequence ID you use something like 8_GP_DEPLOY
You're done! At this point, to get the latest Windows Updates into an image, just run your "Build Image" task sequence – the WIM is captured and automatically replaces the OS that gets deployed when someone runs the "Deploy Image" task.
There is one word of caution: significant changes to the OS in your WIM (a service pack, a new IE version, etc.) might break the Deploy Image task. If that happens, go through steps 3 and 6 again so that MDT can "refresh" what it knows about the deployment OS you're using.
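To tie the steps above together, here's a sketch of what Relocate.cmd can look like. The existence check and error message are my additions for safety, not part of the original one-liner:

```batch
@echo off
rem Relocate.cmd -- run as the final step of a Build Image task sequence.
rem %1 is the Task Sequence ID (e.g. 8_GP), passed in via %TaskSequenceID%.

if not exist "%DEPLOYDRIVE%\Captures\%1.wim" (
    echo No captured WIM found for task sequence %1 -- nothing to relocate.
    exit /b 1
)

rem Overwrite the previously imported OS with the fresh capture.
move /Y "%DEPLOYDRIVE%\Captures\%1.wim" "%DEPLOYDRIVE%\Operating Systems\%1"
```

Because the script is keyed off the Task Sequence ID, the same Relocate.cmd works for every Build/Deploy pair you create, as long as you keep the naming convention consistent.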

## Microsoft Deployment Toolkit trickiness

So over the past few days I learned a very long and slow lesson about why the Microsoft Deployment Toolkit only has instructions for using local storage. Turns out, there’s either a bug in WinPE or in certain storage filers’ (NetApp) implementation of CIFS.   Due to one of those two factors, connecting to network storage from WinPE is buggy. It works just enough to make you want to blame everything else in the universe because it should “just work”.

Well, after I got over that, I was still stuck with a server with no available local storage and a huge NetApp volume sitting there doing nothing. So I decided to get tricky. We have good network performance and an awesome storage admin, so I decided to virtualize my deployment share: I created a 200 GB fixed-disk VHD file and mounted it to a path on the local storage. Using the Disk Management GUI in Server 2008 R2 made this easy, but it could also have been done via diskpart if you want to go all CLI (I have yet to see straightforward PowerShell tooling for working with VHDs directly).
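For reference, the same fixed-size VHD can be created and mounted from a DiskPart script along these lines. The file path, size, and mount point here are placeholders, not the actual values from my environment:

```text
rem create-vhd.dp -- run with: diskpart /s create-vhd.dp
rem 204800 MB = 200 GB; type=fixed pre-allocates the full size.
create vdisk file="D:\DeployShare.vhd" maximum=204800 type=fixed
select vdisk file="D:\DeployShare.vhd"
attach vdisk
create partition primary
format fs=ntfs quick label="DeployShare"
assign mount="D:\Mount\DeployShare"
```

Mounting to a folder path instead of a drive letter keeps the deployment share looking like ordinary local storage to anything that touches it.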

This was cool and so far has been working pretty well, although I have yet to do an in depth comparison of deployment speeds when I have more than one or two clients doing an install.

One final challenge I had was that on a reboot the VHD would disappear until someone went and remounted it. I fixed that by doing the following:

1. Create a script containing the following two lines (without the bullets) and save it in the same folder on the NetApp/filer/whatever as your VHD file. I called mine "Dev.dp" because this is a Dev environment and the script will be run with DiskPart:
• SELECT VDISK FILE="\\unc\path\to\your\file.vhd"
• ATTACH VDISK
2. Open Task Scheduler and in the Actions pane pick Create Basic Task…
3. Give your task a name. For the trigger, pick “When the computer starts”
4. For the action, choose Start a Program. Use the following info:
5. Program/script: c:\windows\system32\diskpart.exe
6. Add arguments: /s followed by the UNC path to your script, e.g. /s \\unc\path\to\Dev.dp (the /s switch tells DiskPart to run a script file)
7. Click the check box to open properties when you’re done. In the General tab, change these settings:
1. Use Change User or Group to set an account that has permissions on the NetApp. For me that was MYDOMAIN\svc.deploy
2. Pick Run whether user is logged on or not
3. Check Run with highest privileges
8. In the Settings tab, you may want to back off the failure conditions or have the task retry if it fails (but remember: if for some reason the disk was already mounted, the task will fail for that reason alone)

At this point you’re all done. When diskpart mounts the VHD it will automatically restore the mount point or drive letter that was used last time it was mounted. You can now use a filer to store your deployment data, but have it behave as if it were local storage because that’s the only way your deployment server will work. And your system can survive a reboot without manually re-attaching the drive!
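If you'd rather skip the Task Scheduler GUI, the same task can be created from an elevated prompt with schtasks. The task name and script path below are placeholders; /RP * prompts for the service account's password:

```batch
schtasks /Create /TN "Mount Deployment VHD" ^
  /TR "c:\windows\system32\diskpart.exe /s \\unc\path\to\Dev.dp" ^
  /SC ONSTART /RU MYDOMAIN\svc.deploy /RP * /RL HIGHEST
```

Running at startup under a domain service account is what lets the attach succeed before anyone logs on, which is exactly the gap the GUI steps above are filling.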

## When Virtual Worlds Collide

Seems like lately I only remember to post after taking a training class. This time it was a series of two classes, both for vSphere. One was a "What's New" class that was mostly a repeat of a previous vSphere 5.0 Anything and Everything class, and the other was for automation and scripting via PowerCLI. One of the classes came with a voucher for a free VCP exam; I just barely squeezed that in before it expired and just barely squeezed out a passing score (more on that later).

I think I’ve stated my suspicions before, but I’ll reiterate that PowerShell is the future for Windows systems administration.  I’m almost at the point where I’m mad when I’m not using PowerShell to do things, but it’s a very conflicting state to be in since I never actually use PowerShell for anything…. well, until today that is. I did a quick script to list all the VMs in our environment along with their VMware Tools versions since we have many that are running sans Tools or with an old version. Yeah, it was simple and not anything beyond a sample script, but it felt great to do because it’s practical.
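For the curious, a minimal PowerCLI sketch along the lines of what I wrote — assuming you first connect with Connect-VIServer (the server name below is a placeholder) — looks like this:

```powershell
# Connect to vCenter first (placeholder server name).
Connect-VIServer -Server vcenter.example.com

# List every VM with its VMware Tools status and version.
Get-VM | Select-Object Name,
    @{Name='ToolsStatus';  Expression={$_.Guest.ExtensionData.ToolsStatus}},
    @{Name='ToolsVersion'; Expression={$_.Guest.ToolsVersion}} |
    Sort-Object Name | Format-Table -AutoSize
```

Filtering on ToolsStatus makes it easy to spot the VMs running without Tools or with an out-of-date install.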

As for my VCP certification... phew, that was an ordeal. Let me tell you, the testing facility was absolutely the worst I've ever seen. I'm pretty sure the front desk receptionist was a stripper, but she was also the main tech support for the testing systems. I hope her night job is a more fruitful career than her day job. My test was scheduled for 7 AM but didn't start until 8:30 due to what I believe was operator error on the part of the receptionist. Her troubleshooting methods for a broken desktop shortcut included using Windows Search (remember the XP sidebar with the dog that offers to find your files? Yeah, that search) to search for the shortcut (not the target, the shortcut itself) and then clicking it a million times expecting it to load better somehow. Seriously, an hour and a half of this and similar techniques. I walked out and sat in the lobby because it was too painful to watch after the first 30 minutes. This killed my nerves and my mojo for when I was finally able to start, and on top of that it's a genuinely hard test! Maybe I just don't use enough of the available features to have a strong working knowledge, but I really don't think the VCP certification syncs with real-world challenges.

I don't want to put the name of the company here and just blast out negativity towards them in case I had a unique and atypical experience (but mostly because I managed to pass my exam). Still, if you're planning on taking a test in San Diego County and are wondering who to avoid, or where to find questionable receptionists, feel free to contact me.

## I’m Still Alive… I think

Yes, it’s been quite a few months since I blogged, but sometimes life is just busy and you have to concentrate on what matters most. I’m in the midst of some back-to-back training – it’s nice when class gets done early, but since I just moved it’s not worth fighting traffic to head out – so instead here I am blogging. And hopefully I’ll keep it up without having to make a New Year’s resolution!

Virtualization is cool stuff. Last week I finished up a foundational class all about VMware's vSphere/vCenter products. It wasn't really "new" to me, but they went really in depth into enterprise storage fundamentals and how to hook up SANs. That's actually where I got the most benefit! And now this week I'm learning all about Puppet. I'm pretty jealous because we're diving headlong into wrangling our Linux environment and getting things properly managed. Now if only I could convince someone that doing the same for Windows is just as important!

Over the past few months I’ve been prepping my group (Systems Engineering) for taking ownership of our company’s AD environment (previous owner being “….uhh?”). Our boss is pushing hard to align what our customers want/need with specific services that IT provides. And at the same time we’re aligning our department’s strategy on managing those services in a Plan/Build/Run model. I have no idea if it’s an actual thing, but I like the premise – We have a team that plans it out, another that builds it, and another that does daily run tasks.

As an Engineer I'm excited because I might get to be a little more distanced from the daily break/fix distractions and do more quality 'building' work. My real question is where the line is between Planning and Building, but whatever. I ended up writing about 13 pages of a Word doc that spells out anything and everything related to the AD service; I believe it's what all our future projects should embrace when trying to match this PBR model. If we stick with it, I think there's actually some hope of getting out of technical debt and eventually becoming a much more valuable asset for the business teams our IT group supports.

## You Can Lead a Horse To Water…

I recently got back from my trip to Las Vegas for a Symantec conference. I never really thought that Symantec would be able to throw an event that would hold my interest and actually get me excited, but they pulled through. Just being in the presence of so many other companies struggling with CMS implementations and deployment strategies was a big morale boost for me. I'm not alone, and the difficulties in getting my company to the Utopia that our Symantec sales rep promised us are common and (more importantly) surmountable.

But not two days back and I'm facing the reality of how things are. We have four Helpdesk teams, all with their own way of doing things. Someone emailed me and said "Hey Slowest Zombie, we got some new VAIO laptops in and it's a pain in the butt to get them rolled out to our end-users. Can't you get this automated like everything else?" Well, my answer was not short. I could have said yes for this one new laptop, but what about the next new laptop and the one after that? I brought up the fact that the end-users we support are currently given the choice of whatever laptop they want, with no limits. The complexity of laptop drivers, dealing with custom system image discs, and the fact that (especially with VAIOs) there are rarely more than two users in the company who order the same specific laptop brand and model all add up to one conclusion: the time spent automating the laptop deployment process will probably never generate benefits greater than just dealing with each one manually.

It’s very disheartening to hear the response “This is how we’ve always done it and how we’ll keep on doing it forever” from one of the most senior helpdesk staff members. It’s completely understandable – their customers have come to expect that type of flexibility – but someone somewhere signed a VERY expensive contract that said “Let’s buy Altiris and in the end we’ll save money by making things efficient.”  Well, I’m offering up a path to get there, but no one is interested in even talking about possibilities or discussing things that would change the way they approach support. It makes me wonder if I’m just spinning my wheels trying to engineer a solution that no one really wants, and all this rant is just about dealing with one of the four helpdesk sites – I’ll be honest and say I’m not looking forward to even attempting to build a process that works for all of them.

## A Beginner’s Guide

On more than one occasion I’ve been asked where to start when it comes to automating Windows OS and Software deployment processes. I’ve never had a good answer because I couldn’t ever find a one-stop all-inclusive solution.  Most of my experience and expertise has come about by ramming into a wall over and over until I found a way through.  My goal in this post is to help my friends and colleagues who are looking to get started.

Before we get into the how or why, let’s go over what I imagine your current environment might be like. It’s hard to say where we’re going if we don’t know where we’re coming from, right?

• You’re involved in deploying/maintaining OSes and software
• Roughly 5-500+ employees are under your jurisdiction
• Primarily a Windows shop. If you haven’t already moved to Windows 7, it’s looming over you like a black cloud
• Triple Squeeze Play: Requests are up, responsibilities are increased, budgets are down
• Hardware could be standardized or could be all over the place: Dell, HP, custom, whatever’s on sale at Best Buy…
• OS deployments could be anything from the 1997 version of Norton Ghost to "stick a CD in there and wing it"
• You have a list of software to install by person/department, either officially or in your head

So you know what you’re up against, but maybe you’re not quite sure what would make life better for you.  While the business really is the ultimate dictator of your goals, it’s safe to agree on some commonalities:

• Faster
• Cheaper

You're in luck! Without cutting corners or racking up bills for software / training / professional services, you can be the hero. You may have to dedicate some personal time to pulling this off, but trust me: the payoff will be worth it.

### The Plan

There are two parts to this – the OS deployments and the software deployments. But both parts are going to use the same tools.