Wednesday, October 8, 2014

XenDesktop DSC resource and the Azure DSC VM Extension

After all of the previous blog posts, I thought I would wrap up with a few real examples of using the DSC resource for XenDesktop.

Earlier this year (just a few weeks ago, really) Microsoft released VM extensions for Virtual Machines.  This is the IaaS style VM role.  The VM Extensions are small modules that you can inject into your VM and interface with.  The simplest way to think of them is as purpose built agents.  You can configure them on provisioning, but also nifty is that you can update the configuration of an extension after deployment and do something else.

There is a script extension that can be used to download and execute any script(s) inside of your VM.  Handy for performing a number of things. You can use this to drive DSC if you like.
After that the DSC extension was released.  This one is purpose built just for DSC and is the one this post will focus on.

I will warn you, I am going to expose a few warts of the process as I do this, because the extension is built to support DSC and only DSC packages.  And this process requires external media - yes, the media could be bundled into the DSC resource (the only workaround at the moment), but that would make the resource really fat.  So, why not continue to think of it as two problems.

One thing - the DSC provider for XenDesktop expects the media to be on the machine where it runs or on a DVD-attached ISO.  So a ZIP, an ISO (DVD attached or downloaded to the machine), or a folder.  This way there is no requirement to save credentials for connecting to some share or other store.

Now, let's get everything set up.  (I am experimenting with a slightly different writing format here, so you will have to let me know if it works for you)

First of all, I am going to assume that you are sitting at your 'configuration computer' - that management computer that you use that has all of the consoles and what not installed.  In this case it is where you build all of your configurations that you later push to machines or place on your pull server.  Your configuration computer needs the XenDesktop resource module installed under the path "%ProgramFiles%\WindowsPowerShell\Modules" so a later cmdlet for the DSC script provider can automagically pick it up and bundle it for you.

Begin by connecting to your Azure subscription and set the storage account you will be using ( Set-AzureStorageAccount -StorageAccountName "YourCoolStorageAccount" )

Then upload the XenDesktop media.  (The media archive is created from the ISO: simply ZIP the contents of the ISO - don't add any extra folders into the path.)
Be sure that your container security is set to public blob (not public container) since we don't want folks discovering it.
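Assuming illustrative names for the container and ZIP (yours will differ), the upload might look like this:

```powershell
# container and file names here are placeholders - substitute your own
$container = "xdmedia"

# 'Blob' permission means blobs are readable by URL, but the container
# contents cannot be listed anonymously
New-AzureStorageContainer -Name $container -Permission Blob

# upload the ZIP created from the contents of the XenDesktop ISO
Set-AzureStorageBlobContent -Container $container `
    -File "C:\Media\XenDesktop75.zip" -Blob "XenDesktop75.zip"

# grab the URI of the blob for later steps
$mediaUri = (Get-AzureStorageBlob -Container $container -Blob "XenDesktop75.zip").ICloudBlob.Uri.AbsoluteUri
```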

For a later step you will need the URI to the blob that you just uploaded.

Now, we need to define the configuration (the DSC one) that will be applied to the VM.

For that I created a PowerShell script that accepts two parameters; the URI to the media and the role to be installed.
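A sketch of what that script might look like.  The XenDesktop resource name and its properties here are illustrative (the admin guide has the real schema), and the SetScript body is passed as a string so that $MediaUri is expanded when the configuration is compiled:

```powershell
configuration XenDesktopRole {
    param (
        [Parameter(Mandatory)][string] $MediaUri,   # URI of the media ZIP blob
        [Parameter(Mandatory)][string] $Role        # XenDesktop role to install
    )

    node 'localhost' {
        # let the LCM reboot the machine whenever an install step requires it
        LocalConfigurationManager {
            RebootNodeIfNeeded = $true
        }

        Script mediaDownload {
            TestScript = { Test-Path 'C:\Media\XenDesktop75.zip' }
            GetScript  = { @{ Result = (Test-Path 'C:\Media\XenDesktop75.zip') } }
            SetScript  = "
                New-Item -Path 'C:\Media' -ItemType Directory -Force
                (New-Object System.Net.WebClient).DownloadFile('$MediaUri', 'C:\Media\XenDesktop75.zip')
            "
        }

        Archive mediaUnzip {
            Path        = 'C:\Media\XenDesktop75.zip'
            Destination = 'C:\Media\XenDesktop'
            Ensure      = 'Present'
            DependsOn   = '[Script]mediaDownload'
        }

        # illustrative resource and property names for the XenDesktop module
        XenDesktop xdRole {
            Role      = $Role
            MediaPath = 'C:\Media\XenDesktop'
            Ensure    = 'Present'
            DependsOn = '[Archive]mediaUnzip'
        }
    }
}
```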

The DSC extension will run this and then apply the configuration.

Now that the configuration script exists, it must be packaged up, along with the module, for the Azure DSC extension to use.  I saved mine with the name "XenDesktopInAzure.ps1".

The Azure PowerShell module includes a cmdlet just for this action: Publish-AzureVMDscConfiguration
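Assuming the script name above, the publish step is a one-liner; by default the package lands in a container in your current storage account:

```powershell
# packages XenDesktopInAzure.ps1 plus the modules it imports (found under
# %ProgramFiles%\WindowsPowerShell\Modules) into a ZIP and uploads it
Publish-AzureVMDscConfiguration -ConfigurationPath '.\XenDesktopInAzure.ps1'
```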

After running the Publish-AzureVMDscConfiguration cmdlet, go check out your Azure management portal and look into the container and you will find a ZIP archive with a name that matches your script.  What the Publish command does is package the script and the modules that are used into the ZIP, then upload them to Azure for you.

Now, define the VM so that it can be created.

First needed is an Azure VM Image - an 'image' is a virtual disk that has an OS installed, has been prepared with sysprep, and is registered with Azure as an image.  To the management platform this means that the OS can be specialized on provisioning through mini-setup and the use of an unattended answer file.
Moving on to the next step: building the VM configuration.

First, begin with a base VM configuration.

Then add the provisioning configuration ( these are the specialization settings ).
Then configure the Azure DSC VM extension.

Then create the VM.  Azure and the VM extension take care of processing the configuration script, and the configuration script downloads the ZIP, unpacks it, and installs the role.
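Those steps, sketched end to end.  Every name here is a placeholder, and the configuration name and arguments must match the ones in your published script:

```powershell
# pick a sysprepped image name from Get-AzureVMImage output
$imageName = (Get-AzureVMImage | Where-Object { $_.Label -match 'Windows Server 2012 R2' })[-1].ImageName

# base VM configuration
$vmConfig = New-AzureVMConfig -Name 'xd-ddc-01' -InstanceSize 'Medium' -ImageName $imageName

# provisioning (specialization) settings
$vmConfig = Add-AzureProvisioningConfig -VM $vmConfig -Windows `
    -AdminUsername 'xdadmin' -Password 'P@ssw0rd!'      # placeholder credentials

# point the DSC extension at the published package, the configuration in it,
# and the arguments that configuration expects
$vmConfig = Set-AzureVMDscExtension -VM $vmConfig `
    -ConfigurationArchive 'XenDesktopInAzure.ps1.zip' `
    -ConfigurationName 'XenDesktopRole' `
    -ConfigurationArgument @{ MediaUri = $mediaUri; Role = 'Controller' }   # $mediaUri: the media blob URI from earlier

# create the VM; the extension applies the configuration after provisioning
New-AzureVM -ServiceName 'xd-demo-svc' -Location 'West US' -VM $vmConfig
```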

One really cool thing about DSC is that you can keep applying configurations.  The last applied setting for a specific item wins; configurations are not undone when a new one is added, only the changes defined in the new configuration are applied.  So, if you have some management layer, it can apply configuration after configuration if it so wants, or change a configuration over time to modify something.

As an option for delivering media, you can use a data disk with the XenDesktop media within it.  Then you can attach that to a new VM and have DSC mount the disk (rather, make sure it is mounted at a specific drive letter) and then perform the installation.

If you are using SCVMM, you can have an ISO attached to the VM as part of the template.  You could even use DSC to map a drive if that is what fits for your environment.  They are all possibilities.

Have fun! And please, send feedback!

Monday, October 6, 2014

Using the XenDesktop DSC resource

In my last article I covered using the built-in desired state configuration (DSC) resources to stand up a Citrix License server.  All but one of the resources I used shipped in the box with Server 2012 R2 - and you could argue that I really didn't have to use the custom resource to fetch and install my license file.

I used the package resource to install the Citrix License server. 

The package resource works great for the License Server, which is a really quick and simple install.  But what do you do when you have an installer that requires reboots mid-stream?  Or even multiple reboots? 

Some of you who have installed XenApp or XenDesktop over the years know that this takes extra time and you have to follow your checklist or the wizard to make sure you didn't miss a step.  What if you didn't need to do that?  Or what if we could greatly simplify the entire process?

Here is an example I think you will like.

Recently released is a Technology Preview of a desired state configuration resource for XenDesktop.  You can find it here:

Follow the admin guide and place the module in the PowerShell path on your target (and the machine where you create configurations) and move forward.

Since I will be using this in upcoming examples, let's give it a quick description.

This brings together a few of the previous articles.
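A reconstruction of that configuration from the walkthrough below.  The line numbers cited refer to the original sample and will be approximate here, and the XenDesktop resource's names are illustrative:

```powershell
configuration XenDesktopInstall {
    node 'localhost' {
        # lines 9 - 12: let the LCM reboot and resume on its own
        LocalConfigurationManager {
            RebootNodeIfNeeded = $true
        }

        # lines 15 - 20: unzip and logging path
        File installPaths {
            DestinationPath = 'C:\XDInstall\Logs'
            Type            = 'Directory'
            Ensure          = 'Present'
        }

        # lines 24 - 30: the media ZIP is expected alongside this script,
        # delivered by some other agent or process
        Archive mediaUnzip {
            Path        = "$PSScriptRoot\XenDesktop75.zip"
            Destination = 'C:\XDInstall\Media'
            Ensure      = 'Present'
            DependsOn   = '[File]installPaths'
        }

        # lines 33 - 38: illustrative names; the admin guide has the real schema
        XenDesktop xdRole {
            Role      = 'Controller'
            MediaPath = 'C:\XDInstall\Media'
            Ensure    = 'Present'
            DependsOn = '[Archive]mediaUnzip'
        }
    }
}
```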

Lines 9 - 12, coupled with line 47, set the local configuration manager (LCM) to handle reboots on its own.

Lines 15 - 20 create my unzip and logging paths.

Lines 24 - 30 unzip my media.  I know the name of my media and I expect it to be delivered to the same path where this script is run by some other agent or process - such as an external process that downloads the media plus this script to the same path and then runs this script.

Lines 33 - 38 call the XenDesktop resource and install one of the roles defined.

If the role installation indicates that the system requires a reboot, the LCM handles that.  On reboot it processes the configuration again, until the XenDesktop resource returns 'true' for Ensure being set to 'present'.

For the Controller this is generally one reboot; for a session host or VDA this could be two reboots.  The great thing for me is that it is totally hands off.  DevOps in the XenDesktop world.

Personally, I hope that you check this out, and I am looking for feedback on where this needs to go next and if it is useful to you.  So please, speak up in comments, in the forum, or complete the survey.

Next - let's show this all in action in some real use cases.

Tuesday, September 30, 2014

Using the native DSC resources

In my previous posts the focus was on giving some background into how desired state configuration (DSC) works, some ideas about how to use it, and some details around a very simple example.

Here I am going to walk through a configuration that uses a number of the built-in DSC resources.

First, I want to back up a bit and mention a concept.  If you read through many of the blog posts and TechNet documents out there, you could easily describe all of the examples you see as installation focused - that is, only making changes to the system that one would make when installing something.  And this observation is correct.  DSC is still in that v1 state at this writing, which means that folks are getting used to it, using it, extending it.  And frankly, talking about installation actions is relatively easy.

One thing I want to mention now is that anyone can build a Custom Resource.  And if you want to configure an application, this is the way to go.  Put all of your complex logic into that custom resource.  Or, wait for the software vendor to write that custom resource so you can use it.  And you may need to ask them to do that for you to get the ball rolling.  There is nothing that says that DSC cannot be used to configure the applications that it also installs in a previous step.  We will get closer to that later on.

Back to this example, installing a Citrix License Server using DSC resources.

Before we get into my example I want to mention a couple things:

  1. In my setup I am using the full XenDesktop media ISO as a ZIP archive and I am hosting it on an HTTP endpoint (IIS Server).  This is an internally hosted, isolated, and unsecured endpoint - it is strictly an example to demonstrate different resources and possibilities.  In other words, please learn from the example but please "don't do this at home". 
  2. Also,  I am not demonstrating any commands that are not already documented in Citrix eDocs nor the DSC documentation in TechNet.  You need to read those sources for the definitive and current documentation.

Here is the configuration script sample:
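A reconstruction of the sample from the walkthrough below.  Line numbers cited are approximate, and the URLs, paths, and package details are placeholders for my internal IIS endpoint and media:

```powershell
configuration licenseServer {
    # line 3: import the module that ships the xRemoteFile resource
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    node 'localhost' {
        # lines 7 - 12
        File citrixFolder {
            DestinationPath = 'C:\ProgramData\Citrix\Media'
            Type            = 'Directory'
            Ensure          = 'Present'
        }

        # lines 14 - 31: web client download of the media ZIP
        Script downloadMedia {
            TestScript = { Test-Path 'C:\ProgramData\Citrix\Media\XenDesktop75.zip' }
            GetScript  = { @{ Result = (Test-Path 'C:\ProgramData\Citrix\Media\XenDesktop75.zip') } }
            SetScript  = {
                (New-Object System.Net.WebClient).DownloadFile(
                    'http://media.example.internal/XenDesktop75.zip',
                    'C:\ProgramData\Citrix\Media\XenDesktop75.zip')
            }
            DependsOn  = '[File]citrixFolder'
        }

        # lines 33 - 39
        Archive mediaUnzip {
            Path        = 'C:\ProgramData\Citrix\Media\XenDesktop75.zip'
            Destination = 'C:\ProgramData\Citrix\Media\XenDesktop'
            Ensure      = 'Present'
            DependsOn   = '[Script]downloadMedia'
        }

        # lines 41 - 50: installer path and product name are illustrative
        Package licenseServer {
            Name      = 'Citrix Licensing'
            Path      = 'C:\ProgramData\Citrix\Media\XenDesktop\x64\Licensing\CTX_Licensing.msi'
            ProductId = ''
            Ensure    = 'Present'
            DependsOn = '[Archive]mediaUnzip'
        }

        # lines 52 - 57: fetch the license file straight to the default path
        xRemoteFile licenseFile {
            Uri             = 'http://media.example.internal/mylicense.lic'
            DestinationPath = 'C:\Program Files (x86)\Citrix\Licensing\MyFiles\mylicense.lic'
            DependsOn       = '[Package]licenseServer'
        }
    }
}
```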

Let's start to work through this example.

On line 3 is something new; Import-DscResource.  This is how modules are imported when they are not shipped as part of the operating system.  Prior to being able to apply this configuration the module xPSDesiredStateConfiguration must be in the PowerShell path.  If you observe the behavior of a Pull configuration you will notice that it uses the path "%programfiles%\WindowsPowerShell\Modules" - and because of that I tend to use that one myself.  More on custom resources later.

On lines 7 - 12 you see the [File]citrixFolder resource that I have been showing in the previous articles.

On lines 14 - 31 is something new as well: the Script resource.

This is a utility resource that fills a gap when there is no other resource available to perform your actions.  When you use the Script resource you essentially write a mini custom resource - it has all the same base requirements of a custom resource provider. 

In this case I am using the web client to download the XenDesktop media to the %programdata% path I defined with [File]citrixFolder.  You might also notice that there is a dependency on that previous resource to ensure that the path exists prior to downloading.

Lines 33 - 39 use the Archive resource.  Here I define the path where I know the ZIP exists (previous dependency), and the destination I want it extracted to.

If you run the -Verbose switch when applying this configuration you will notice some interesting behavior with this provider.  The archive provider actually opens the ZIP, reads every file and tests the destination for the existence of the file.  If a file exists, it is not extracted, if it does not exist it is extracted.  This is just a bit of insight into the thinking around how desired state works and what it means to apply a configuration.

Lines 41 - 50 use the Package resource.  This resource is used to install or uninstall packages.  It is centered on MSI packages but it can also be used with EXE packages.  The installer must support silent installation.  Also, as you can see, I left the ProductId empty - that is a property of MSI packages, so I had to define it this way.

Lines 52 - 57 are the reason for the Import-DscResource.  Within that module is the provider xRemoteFile.  This resource allows me to use a URI as the source.  I defined the source file and my destination - and this places my license file right where my license server default path is.  Ready to go!

Why didn't I use this in place of the Script resource download earlier?  Good question.  It is because the file I am downloading using the Script resource is too large for it - the download fails at 1.7 GB in a very reliable way (it is actually a very old bug).

That is it.  The hardest part is getting the infrastructure working to get the media and files delivered.

But quite honestly, MSFT is working on that problem.  Have you heard of OneGet or PSGet?  OneGet is all about packages, packages that could be used by DSC.  PSGet is about PowerShell modules.  And, you can set up your own internal repository if you don't trust the public ones.  But the nifty thing is this; ISVs can publish their packages to OneGet - and then their customers just point to it and before you know it that application is downloaded and installed.  Really nifty delivery method - almost like yum for Windows.

Friday, September 26, 2014

The next step for DSC configurations

Just to recap, this is a series regarding Desired State Configuration which shipped with Windows Management Framework v4 (Server 2012 R2 / Windows 8.1).  There are some references in my first article if you want to get deeper, and there is always TechNet and MSDN documentation.

In this article I am going to expand on my simple example from the last article.  Step it up a notch if you will.

The first thing I am going to cover is the DependsOn property.

This is why each of your declared resources has a name.

Let's consider the following sample:
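A minimal sketch of such a sample (the path is just for illustration):

```powershell
configuration sample {
    node 'localhost' {
        File citrixFolder {
            DestinationPath = 'C:\ProgramData\Citrix\EasyButton'
            Type            = 'Directory'
            Ensure          = 'Present'
        }

        # only attempted once the test for citrixFolder returns True
        Log afterCitrixFolder {
            Message   = 'The Citrix folder is in the desired state'
            DependsOn = '[File]citrixFolder'
        }
    }
}
```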

The Log resource DependsOn the File resource citrixFolder.  What this means is that the File citrixFolder must complete or else the Log afterCitrixFolder will not be attempted.  Another way to say this is that the test for citrixFolder must return True.

This allows a chain of dependencies to be defined.  Defining them here, the configuration author must know about the dependencies and declare them.  Another option is that the resource provider itself is aware of dependencies and enforces them, but that is definitely an advanced topic.

In the last article I had mentioned that the Local Configuration Manager configuration could be changed.  This too is a configuration, applied with a special command.

Here is my scenario, I have a package installation that DSC is handling for me.  This is a complex installation and it requires two reboots for everything to be installed properly.  I want DSC to handle the rebooting, I don't want to have to setup jobs and start-up commands and the like as in the past.

For this I use the following example:
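A sketch of that example; the package details are placeholders, and the line numbers discussed below refer to the original sample:

```powershell
configuration sample {
    node 'localhost' {
        # lines 6 - 10: a special resource that configures the LCM itself
        LocalConfigurationManager {
            RebootNodeIfNeeded = $true   # the default is $false
        }

        Package complexInstall {
            Name      = 'My Complex Package'            # placeholder package
            Path      = 'C:\Media\ComplexInstall.msi'   # placeholder path
            ProductId = ''
            Ensure    = 'Present'
        }
    }
}

sample
# applies .\sample\localhost.meta.mof, then .\sample\localhost.mof
Set-DscLocalConfigurationManager -Path .\sample
Start-DscConfiguration -Path .\sample -Wait -Verbose
```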

The first thing to notice is lines 6 through 10.  This is a special resource: LocalConfigurationManager.  It supports modifying the properties of the LCM itself.  In this case the reboot behavior is false by default and I want to change that.

After I generate the MOF documents, there will be two documents in the .\sample path: the localhost.mof as before and a localhost.meta.mof that defines the behavior of the LCM.  Set-DscLocalConfigurationManager applies the LCM configuration and then Start-DscConfiguration applies the remainder.

If you have done any blog reading of DSC you would most likely have run across the 'Pull' model (and realize that I have been showing the 'push' model) of applying and managing configurations.

I bring this up now just to mention that the Pull model requires the LCM to be configured as shown here:
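A sketch of that meta-configuration (the GUID and pull server URL are placeholders):

```powershell
configuration pullClient {
    node 'localhost' {
        LocalConfigurationManager {
            ConfigurationID           = '4b0eb252-0f7d-43ed-bfa8-c3e7bb0ee971'   # placeholder GUID
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{
                ServerUrl = 'http://pull.example.internal:8080/PSDSCPullServer.svc'
            }
            ConfigurationMode         = 'ApplyOnly'
            RebootNodeIfNeeded        = $true
            RefreshFrequencyMins      = 30   # the minimum
        }
    }
}
```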

In this sample I am setting the GUID that matches the configuration of this machine, I am telling it that it is using WebDownloadManager to download its configuration, the URL of my DSC Pull Server, that it will only apply the configuration and then stop, that it will reboot, and I set the frequency down to the minimum since I am impatient.

Just like above, this is a modification of the Local Configuration Manager on the target machine.

The Pull model is pretty handy, as there is simply one web endpoint where configurations call home to and then fetch their configuration and apply it.  If I change the configuration, I change it here and when the node performs its refresh it will look to see if the configuration changed and if it did the new configuration is downloaded and applied.

Pretty slick feature.  But I have to admit, keeping a bunch of GUIDs straight is not the easiest thing.  In this mode each configuration is identified by the GUID in its ConfigurationID.  And you have to make sure you give the right configuration ID to the right VM.

Optimally, you set up an array and simply assign one configuration to each machine out of a bunch of machines and wait for the magic to happen.  That is really where that model works.

If you really want to get into all of the possibilities of applying configurations to machines there is a good blog article that describes the options when you want your machine to automatically configure on boot up with DSC.  There are some interesting options in there such as

  • using mini-setup and the unattend.xml to set a task to run a configuration script on boot.
  • drop a pending.mof in the path %systemdrive%\Windows\System32\Configuration (mount the vhd and copy the file in) that the LCM will automatically process on boot
  • inject a meta-configuration (the configuration of the LCM) to download its configuration document

Not mentioned is that you can use an agent in the VM to execute a script, set a MOF, or trigger the LCM in a number of combinations.  And there is also remotely applying a configuration - this is what Start-DscConfiguration does when given the -ComputerName of a remote machine (you have to be an administrator).

Using SCVMM Service Templates the DSC configuration script would be the 'script application package' being executed by the SCVMM agent.  And the model is similar for a Windows Azure Pack Gallery Image.  If you are deploying to Azure you can use the VM DSC Extension or the VM Script Extension - both through the Azure API.

So, as you can see, the flexibility is all over the place.

Monday, September 22, 2014

How does Desired State Configuration work

Windows Management Framework v4 included a new feature; Desired State Configuration.  This is the second in a series of articles describing Desired State Configuration with the intent on giving a sip instead of a fire hose to get you into it.

In the previous post I made an attempt to introduce all of the moving parts of Desired State Configuration.  This is an introduction to the basics of the ability it brings and I am going to describe how it works by walking through a very basic sample.

Just to avoid confusion, I am going to follow the pattern that you see in many of the blogs, with a lot of words around it.  That said, it is rare to find a discussion of the MOF document itself.  Most articles focus on generating the MOF configuration and then applying it.  But I want you to be clear that there is a separation.

In my sample I am working in the PowerShell ISE instead of a console.  It is still just a PowerShell session running in my security context.

I am going to assume that you know what localhost is (the machine where I am executing commands).

First of all, I have a PowerShell script (a series of commands) that creates a configuration in memory, writes that configuration to a MOF document, and then applies that configuration to localhost.  MSFT refers to this configuration as configuration data at this point.  It is not a configuration until it is written to the MOF.

Here is the sample that relates to this article:
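A reconstruction of the sample from the walkthrough that follows (line positions referenced below are approximate):

```powershell
configuration sample {
    node 'localhost' {
        File citrixFolder {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\ProgramData\Citrix\EasyButton'
        }
    }
}

# generate .\sample\localhost.mof
sample

# apply the configuration
Start-DscConfiguration -Path .\sample -ComputerName localhost -Force -Verbose -Wait
```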

Let's first look at lines 2 - 14.  The special function 'configuration' is being used to declare a configuration named 'sample', its body surrounded in braces.

Within the configuration 'sample' is one node element, it is named 'localhost'.  This node specifies the computer(s) the configuration applies to. It must match / resolve to a computer name for the configuration to be applied.  I could quickly get complex here and I am consciously trying to avoid that.  Lets just keep this to the local computer at the moment*.

Within the node is a state of a DSC Resource.  In this case the File resource which is built-in.  I named this File state 'citrixFolder' (each defined state has a name, I will mention more about that later).

The File resource has parameters.  I have defined that ensure is 'present' (it exists), the type as 'Directory' (so that it does not think I am defining a File), and the path.  If I stated this I would say: "ensure that a directory named 'EasyButton' exists at the path C:\ProgramData\Citrix\EasyButton"

I then close each element of the configuration.

On line 17 I call the configuration named sample - the same way you would call a function.

What this does is generate a folder .\sample and within that a MOF document - .\sample\localhost.mof  (note the '.\' in the path, since I did not define a literal path for the output).

If I look at localhost.mof, I can see how this looks in the MOF format.

You can see that it was generated by me, and my workstation is named scooter, as well as the time and date, and the configuration that I defined for the node localhost.

Now, let's apply this configuration - make it a reality.  That is where line 20 of the sample script comes into play.

I start the DSC Configuration engine and tell it to apply the configurations found at .\sample to the computer localhost.  I force the action, I request verbose output, and I want to wait.  Verbose is handy to show you what is happening.  Waiting makes the configuration run within the session, instead of off on a job thread.  Useful for debugging.

As you can see from the verbose output the first thing that happens is that the current state is evaluated.  Then the local configuration manager decides if it needs to make a change.  In this case the test of "is the directory C:\ProgramData\Citrix\EasyButton present" returned false.  So the Set is called to make that change.  The change returns a success and the local configuration manager moves on.

The behavior of the local configuration manager can be modified as well.  Since a configuration can be applied (make the changes), or applied and enforced (make the changes and be sure nothing changes), or applied and monitored (make the changes and toss an event if something changes).  More about that later.

* I am sure that some of you could imagine multiple node names, or multiple node elements (one big document defining multiple nodes) with the same document fed to them all.  Then only the configuration that matches the computer name is applied.

Friday, September 19, 2014

DSC resource for XenDesktop in TechPreview

In true form to the blogging I am doing around desired state configuration - the resource for XenDesktop went live as a Technology Preview today.

You can find it here:

This goes along side the SCVMM Service Template and the Windows Azure Pack Gallery Image - which are available from here:

Why is there a DSC resource for XenDesktop?  Frankly, DSC is a really cool feature of Windows Management Framework (aka PowerShell) v4 and I think everyone wants to play in the DevOps pool (or should at least learn how to).

Most of all, I really want your feedback about the desired state configuration resource.  How you use it, what more you would like to see, what it is lacking, what it does not do for you.  I want to know it all.

We have a survey to capture this.  Or, you can simply reply with comments.  There is also a support forum where I will be fielding issues.

Happy hands off installs!

Wednesday, September 17, 2014

What is Desired State Configuration?

Desired State Configuration (DSC) is a feature of the Windows Management Framework v4 (many folks just call this PowerShell v4) which ships natively with Server 2012 R2 and Windows 8.1.

One thing to be clear on; DSC is not a single thing, it is a feature with different components.  And this is an exploration of the main components at a high level to understand all of the concepts.  Let's take a look at the moving parts that work together to enable DSC: the configuration keyword, a configuration, the Local Configuration Manager, resource providers, WinRM, and PowerShell.

Before I get too deep into this I want to thank some fellow MVPs in the PowerShell award area for helping me with a few things as I worked through my understanding.  I also want to thank the product team at MSFT for taking some time to sit with me and talk as a customer, helping give me more insight and taking my feedback on my scenarios and my customers' scenarios.

The MVP community is great at producing resources and communicating out.  Here are a few good resources:

Lets begin with the first stumbling block.  Windows Remote Management (WinRM)

WinRM is required for DSC to receive and process configurations in a Push or Pull model (more about those later).  Let me say here that if you are learning and follow any of the examples and apply a configuration, you will likely be applying it interactively at the console (Push model).  This will simply work out of the box if your target is Server 2012 R2.  It will simply fail out of the box if your target is Windows 8.1.

The difference?  WinRM is enabled by default on Server and disabled by default on Client.  So plan accordingly, or you will get error messages that you cannot apply your configuration because WinRM is not running.
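On a client SKU, one way to turn WinRM on before experimenting (a sketch; run from an elevated prompt):

```powershell
# enables the WinRM service, creates an HTTP listener, and adds the firewall rule
Enable-PSRemoting -Force

# verify that a listener exists
winrm enumerate winrm/config/listener
```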

PowerShell is the enabling technology and the owning product team at MSFT.  If you are following recent PowerShell developments you have heard about OMI.  And if you have recently listened to Jeffrey Snover talk about DSC you will have heard mention of the Monad Manifesto, and that DSC is a component of the original Monad vision that is realized as 'PowerShell'.

The Local Configuration Manager (LCM) can be thought of as the agent that makes it happen. 

The LCM is essentially a service that takes the configuration, parses it, sorts out the dependencies defined in it, then makes that configuration happen.  The key thing to understand is that all configurations are applied in the security context of the Local System.  By default this allows no access to anything outside the machine, and the LCM can only act on objects that are local to the machine.  For example; if a software package is being installed, that package must be local to the machine, or the machine must be able to access it within its security context.

Did you get that DSC is not a management layer?  It is the lowest level execution engine / the enabler / the Riker to Picard (this is the analogy that the PowerShell team originally used). 

In fact companies like Puppet and Chef are already taking advantage of DSC, using it to do their dirty work, and are not threatened by it.  And MSFT has enabled DSC as a VM Extension to apply configurations to Azure IaaS virtual machines through the API.  Also, if you have SCVMM Service Templates or Windows Azure Pack Gallery Items; they too can use DSC as the end engine that performs the configuration actions.

Now we get down to the configuration itself.

The configuration is a MOF format document that defines the end result that the Local Configuration Manager must make happen.  The MOF can be generated in a number of ways, but the easiest by far is to use the configuration keyword within a PowerShell script.

The configuration keyword is nearly the last enabler.  You can call it within a script to define this special thing called a configuration.  That is then realized as a MOF file.  That is in turn applied to the LCM.  Which in turn makes it so.

Now, there is the last enabler.  The resource provider. 

These are special modules - they are purpose built and their whole job is to Get the state of some resource, Set the state of some resource, and Test the state of some resource.  The resource is the item being manipulated.

There are resource providers that are built into the OS; archive (handling ZIP files), file (manipulating files), package (installing / uninstalling packages), and so on.  Each one of these is declared in a configuration with the state you desire it to have.  And you can perform actions such as unzipping an archive, copying a file to a specific path, installing / uninstalling a specific MSI, and so on.

There is also a framework for custom resource providers.  This is to extend support to third party applications such as XenDesktop, or to allow the community to build modules that do far more than the ones provided by MSFT.

There are also MSFT provided 'x' resource providers.  But frankly, I am hoping that PowerShell v5 will include many of these at a release quality level.  Until then, they are just like any other custom resource.

Wednesday, September 3, 2014

XenDesktop Windows Azure Pack Gallery Image Tech Preview

Today we launched our XenDesktop 7.5 Windows Azure Pack Gallery Image Tech Preview as a download from


This is open to customers and prospective customers and is intended to simplify and automate XenDesktop deployments for large enterprises and service providers who leverage Windows Azure Pack, System Center, and the Microsoft Cloud OS stack.

If you are not already familiar; a Windows Azure Pack Gallery Image is a standard, reusable, and sharable artifact that allows customers of a large enterprise or service provider to self-serve provision virtual machine roles as a repeatable configuration.

The current XenDesktop specific Gallery Image can install all XenDesktop roles.

We are also looking for feedback regarding where to take this next. So as always, you can add comments here, or find me on Twitter, or email me directly. 

This is a continuation of similar efforts to streamline XenDesktop deployments that we rolled out with the XenDesktop System Center Templates launched for XenDesktop 7.1 late last year and for XenDesktop 7.5 in the last couple of months.


Friday, August 15, 2014

Getting errors from the Azure VM custom script extension without RDP

Since Azure has begun adding the VM agent and other extensions to IaaS virtual machines (persistent VM Roles), a number of scenarios and possibilities have opened up.

The extension is a simple binary that is dropped into your VM and built to be triggered and to perform very specific actions in the security context of Local System.

The Desired State Configuration Azure extension is the very latest. 

Prior to this I have been spending some time with the Custom Script Extension.  And it is rather nifty.  But the biggest pain that I have had is in working through the process of troubleshooting the script as I develop it.

I have found no way to capture the standard output - other than directing to a text file.  But then I have to RDP into the VM to fetch it.

I can also look at the runtime status of the extension while connected over RDP - but that is one file with bad line endings, making it difficult to read in Notepad.

Through a bit of searching I came across a few tips and started poking around a bit with the Azure PowerShell cmdlets. 

What I discovered is that you cannot get the standard output, but you can get the standard error through the API.  So, if the script tosses some terminating error, there is output to be fetched. If there was no error, there is no output to be returned.

What I ended up doing is the following:

New-AzureVM -Location $location -VM $vmConfig -ServiceName $Service

(Get-Date -Format s) + " .. Watch the script extension to monitor the deployment and configuration"
Do {
    $Vm = Get-AzureVM -Name $vmName -ServiceName $Service

    Get-Date -Format s
    "  Machine status: " + $Vm.Status
    "  Guest agent status: " + $Vm.GuestAgentStatus.FormattedMessage.Message
    foreach ( $extension in $Vm.ResourceExtensionStatusList ) {
        If ( $extension.HandlerName -match 'CustomScriptExtension' ) {
            "  ExtensionStatus: " + $extension.ExtensionSettingStatus.FormattedMessage.Message
            $scriptStatus = $extension.ExtensionSettingStatus.FormattedMessage.Message
            # The standard error surfaces in the sub-status list, with literal \n sequences
            $scriptError = foreach ( $substatus in $extension.ExtensionSettingStatus.SubStatusList ) {
                ($substatus.FormattedMessage.Message).Replace("\n","`n")
            }
            If ( $scriptError ) { "  Script error: " + $scriptError }
        }
    }
    Start-Sleep 10
} Until ( $scriptStatus -eq "Finished executing command" )

I fetch the VM, drill into the object for the Custom Script Extension, then dig into the extension status, which itself has a sub-status list.  It is in this sub-status where the standard error ends up being bubbled up for the extension.

I realize that this leaves me waiting around and polling.  But a green light of "Finished executing command" only means the script extension completed running whatever I told it to run, not that it worked.

I just wish I could get the standard output.


Friday, August 1, 2014

WAP Gallery Image, Dynamic IP address, and the SCVMM DHCP switch extension

Recently I had to put together a hands on lab for a number of sales engineers.
The lab involved SCVMM Service Templates, a custom Windows Azure Pack Gallery Image, and a Desired State Configuration module.

I had my environment of Hyper-V 2012 R2, SCVMM 2012 R2, and WAP about 95% configured.  As much as I could and still support the students re-using my VMs with their own Hyper-V Server.

Since the lab was not about WAP, but instead about my gallery image, I wanted to keep it as simple as possible.  I had a cloud, the cloud had a VM Network assigned, the students created a static IP pool.

(I already had an Internal Virtual Switch being created by SCVMM as a Logical Switch so that all lines of dependency were properly drawn)

In the WAP Admin portal - I had the students add the cloud and the VM Network to their plan.

I deployed my Gallery Image, and the domain join failed.
Looking closer, I saw that my VM ended up with an APIPA address and not an address from the IP Pool.

Come to find out, the default behavior of a WAP Gallery Image is for dynamic IP address assignment. 
Which, if you only ever deploy a gallery image to a Windows Network Virtualization VM Network, you will never notice.  You will instead see that you get an IP from the IP Pool.

Something that I discovered long ago was that there is a custom Hyper-V Virtual Switch extension that ships with SCVMM.  It is actually a DHCP responder.  It catches the IP request, notifies SCVMM, and SCVMM responds with an IP from the SCVMM IP Pool assigned to the VM.  Nifty.

But, this path only happens if the VM is attached to a Windows Network Virtualization (NVGRE) network managed by SCVMM.
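If you are curious whether that responder is present on a host, you can list the extensions bound to a virtual switch from the Hyper-V host. The switch name below is a placeholder for your own logical switch:

```powershell
# List the extensions bound to a virtual switch and whether they are enabled.
# 'LabSwitch' is a placeholder for your switch name.
Get-VMSwitchExtension -VMSwitchName 'LabSwitch' |
    Select-Object Name, Vendor, Enabled |
    Format-Table -AutoSize

# On an SCVMM-managed host, expect to see an entry along the lines of a
# 'Microsoft VMM DHCPv4 Server Switch Extension' in this list.
```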

Back to the default Gallery Image behavior of a dynamic IP address.  No WNV network, no IP from an IP Pool.  How to fix this?

The only way to fix this is to open the Resource Definition of the Gallery Image, open the Network Profile, then the NIC, and change the AllocationMethod to Static.

While you are in there, you will most likely notice a number of other interesting settings.

But the thing to be aware of is this: these are hard-coded values, unless you work through exposing them as settings to your end customer (and at this time you can't expose these settings).
If a setting you change here creates a dependency on an SCVMM placement rule, SCVMM will have to find a host that can support all of the settings.  If it cannot, your VM will not be deployed.  And your tenant will call.
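To make the edit concrete, here is roughly how I go about it. The .resdefpkg is just a ZIP containing the resource definition JSON, but the file layout and property names below are reconstructed from memory, so treat this as a sketch and verify the names against your own package:

```powershell
# Sketch: crack open a gallery item package and flip NICs to static allocation.
# Paths and JSON property names are illustrative assumptions.
Add-Type -AssemblyName System.IO.Compression.FileSystem

$pkg  = 'C:\GalleryItems\MyImage.resdefpkg'
$work = 'C:\GalleryItems\MyImage'
[System.IO.Compression.ZipFile]::ExtractToDirectory($pkg, $work)

# Find the resource definition JSON inside the package
$resdefFile = Get-ChildItem -Path $work -Filter *.resdef | Select-Object -First 1
$resdef = Get-Content -Path $resdefFile.FullName -Raw | ConvertFrom-Json

# Drill into the Network Profile and change each NIC's allocation method
foreach ($nic in $resdef.IntrinsicSettings.NetworkProfile.NetworkAdapters) {
    $nic.IPV4AddressAllocationMethod = 'Static'
}

$resdef | ConvertTo-Json -Depth 20 | Set-Content -Path $resdefFile.FullName

# Re-zip the folder back to a .resdefpkg and re-import it through the WAP admin portal
```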

Tuesday, July 22, 2014

Copying files into Hyper-V VMs with Copy-VMFile

Over the life of Hyper-V there have been lots of convoluted ways that folks have used to get files in and out of Hyper-V VMs.

The most common method has been to mount the VHD and copy files in and out.  But you can't do this while the VM is running.

Then there is the issue of using differencing disks or snapshots when you want to replicate one file to many VMs.  Folks try to mount the parent virtual disk and copy files in - but due to the way that differencing disks work, this gives mixed results if it works at all.

Well, Hyper-V has a nifty feature of the Integration Components / Integration Services that allows you to inject files into a running VM.
The PowerShell cmdlet is Copy-VMFile.

I recently stumbled on this while getting some labs set up and I suddenly realized that I have 25 lab machines with 4 VMs each that my students will be using, and I have a broken lab if I don't correct one file.  Did I mention that I can't physically visit these servers?  I only have remote access.  What a pain.

Prior to being able to use the cmdlet you must have Guest Services enabled on your VM - and this is not on by default.

Enable-VMIntegrationService -Name 'Guest Service Interface' -VMName DSC01
Then, you can push a file into a VM from the Hyper-V Server by using -FileSource Host.  And Host is the only option: you can only push in, not pull out.

Copy-VMFile -Name DSC01 -SourcePath .\ -DestinationPath 'C:\Users\Public\' -FileSource Host -CreateFullPath
Use the -Force parameter if you are overwriting an existing file; you don't need it otherwise.
-CreateFullPath does just what you would think: it creates the folder path you defined if it is not already present.

Simple as that.
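For my 25-hosts-times-4-VMs problem, the Hyper-V cmdlets take -ComputerName, so the whole fix can be driven remotely from one machine. The host naming convention and file paths below are placeholders from my lab, not anything you need to match:

```powershell
# Push one corrected file into every VM on every lab host.
# 'LABHOST01'..'LABHOST25' and the file paths are placeholders.
$labHosts = 1..25 | ForEach-Object { 'LABHOST{0:D2}' -f $_ }

foreach ($h in $labHosts) {
    foreach ($vm in Get-VM -ComputerName $h) {
        # Guest Services is off by default, so enable it first
        Enable-VMIntegrationService -Name 'Guest Service Interface' -VMName $vm.Name -ComputerName $h

        # -Force because the broken file already exists in the VM
        Copy-VMFile -Name $vm.Name -ComputerName $h `
            -SourcePath 'C:\fixes\lab.config' `
            -DestinationPath 'C:\Users\Public\lab.config' `
            -FileSource Host -CreateFullPath -Force
    }
}
```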

There is some safety built into this, I will mention.  For example, you cannot copy into the system path, and there are other permissions blocks you will encounter.

Hyper-V has always approached the VM from the angle that it is evil, the VM is malicious.  This is the protection assumption.  Always keep that machine contained.

Wednesday, June 18, 2014

Cannot load Windows PowerShell snap-in microsoft.windowsazure.serviceruntime

There are some issues that simply lose attention because they are not pertinent enough, and I believe I have found one.
And I can see from searching that I am not alone here.
What is going on is that I am using startup tasks in a Worker Role to install an MSI and configure it.
The script being executed by my startup task attempts to: add-pssnapin microsoft.windowsazure.serviceruntime

I receive the error:
add-pssnapin : Cannot load Windows PowerShell snap-in microsoft.windowsazure.serviceruntime because of the following error: The Windows PowerShell snap-in module
F:\plugins\RemoteAccess\Microsoft.WindowsAzure.ServiceRuntime.Commands.dll does not have the required Windows PowerShell snap-in strong name Microsoft.WindowsAzure.ServiceRuntime.Commands, Version=, Culture=neutral,
The work around that has been posted is to edit the registry key for the DLL, because the version is wrong.
My impression of this workaround is this:  "So, now I have to script a work around for an MSFT provided DLL because no one is updating the DLL registration for the snap-in in MSFT provided Gallery Images in Azure."
Mind you, these are developers, and if a workaround can be done in code - my experience has been that the issue is ignored after the workaround exists.

A workaround is great, but there are caveats to all of the workarounds posted thus far;
  • if I update the registry key with the incorrect version number, it still won’t load.  
  • I have to get the right version number and fix the registry entry.
  • And edit both the Assembly Name string and the version string.
Each posted workaround that I can find through search is a point version, and a point version edit.  A lasting workaround (since it seems like this bug is not going to get fixed anytime soon) is to dynamically deal with any version of the DLL and strong name enforcement in the OS.
I always preface that my solution might not be the most elegant thing that a developer comes up with, but I hope that folks can follow it and understand what I did.
# Get the current execution path
$exPath = Split-Path -Parent $MyInvocation.MyCommand.Definition
$exPath # Echo for logging

### Before we can load the service runtime, we must fix the broken registry information for the DLL
### and the version must be right, or it will continue to be broken.

$runtimeDllReg = Get-Item HKLM:\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.WindowsAzure.ServiceRuntime
$runtimeDll = Get-Item $runtimeDllReg.GetValue('ModuleName')

# The version string we want everywhere: the actual file version of the DLL
$dllVersion = $runtimeDll.VersionInfo.FileMajorPart.ToString() + '.' + $runtimeDll.VersionInfo.FileMinorPart.ToString() + '.0.0'

$exportPath = $exPath + "\AzureServiceRuntime.reg"

& $env:SystemRoot\System32\regedt32.exe /E $exportPath 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.WindowsAzure.ServiceRuntime'

# Wait for the export file, because regedit returns before it actually completes.
Do {
    Start-Sleep 10
    $regFile = Get-Item -Path $exportPath -ErrorAction SilentlyContinue
} Until ( $regFile.Exists )

$runtimeRegFile = Get-Content -Path $exportPath
$newRuntimeRegFile = @()

foreach ($e in $runtimeRegFile) {
    switch -wildcard ($e) {
        # Fix the Version= token inside the AssemblyName strong name string
        "*Version=*" { $e = $e -replace 'Version=[\d\.]*', ('Version=' + $dllVersion) }
        # Fix the Version value of the snap-in registration itself
        "*`"Version`"=*" { $e = '"Version"="' + $dllVersion + '"' }
    }
    $newRuntimeRegFile += $e
}

Set-Content -Value $newRuntimeRegFile -Path $exportPath -Force

Start-Sleep 2

& $env:SystemRoot\System32\regedt32.exe /S $exportPath

Start-Sleep 2

# Load the necessary PowerShell snap-in for configuration and management of Web and Worker Roles
Add-PSSnapin microsoft.windowsazure.serviceruntime

# Take the VM Instance offline with Azure, or else Azure will keep attempting to start the script
Set-RoleInstanceStatus -Busy

You might wonder why I am using regedit.exe instead of the PowerShell cmdlets to set the value. What I found is that I kept running into permissions issues. And using regedit to export and import is a way around that.