NetApp PowerShell Toolkit has a PowerShell Provider

I was at the PowerShell Deep Dive the first half of this week. During a break, I had the chance to meet Clinton Knight, the lead developer behind the NetApp PowerShell Toolkit. Since v1, I have been asking for a provider. Well, it turns out that they slipped one into their latest release and I didn’t even notice. That will teach me to read release notes.

Anyway, with a connected filer you can now do things like the following. This makes it even easier to navigate all your volumes, qtrees, and LUNs.
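I don't have the toolkit in front of me, but navigating the provider looks roughly like browsing any other PowerShell drive. The drive, provider, and path names below are assumptions for illustration, not the toolkit's documented layout:

```powershell
# Hypothetical sketch - connect to a controller, then browse it like a drive.
# The provider name and paths here are illustrative assumptions.
Connect-NaController -Name filer01

New-PSDrive -Name filer01 -PSProvider DataONTAP -Root \ 
Set-Location filer01:\vol
Get-ChildItem          # list volumes, qtrees, and LUNs like files and folders
```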


Another very cool feature that I just found out about is the ability to store credentials used to connect to different filers. There are three cmdlets that allow you to get, add, and remove credentials. These are encrypted and stored on disk so that only the user who created them can access them. Definitely a nice touch.
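The credential cmdlets follow the standard Get/Add/Remove pattern. Something along these lines; the parameter names here are from memory, so double-check them against Get-Help:

```powershell
# Cache a credential for a filer (stored encrypted, per-user)
Add-NaCredential -Controller filer01 -Credential (Get-Credential)

# List the cached credentials
Get-NaCredential

# Remove one when it is no longer needed
Remove-NaCredential -Controller filer01
```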


NetApp has really done a fantastic job with their cmdlets. Their implementation of PowerShell is by far one of the best I have seen from anyone, Microsoft and third parties included. Companies looking to use PowerShell in their products should definitely take a look at what NetApp has done.

Use PowerShell to find out if a Disk Partition is GPT or MBR

When you create a partition on a disk in Windows, you can use one of two partition styles: MBR and GPT. These stand for Master Boot Record and GUID Partition Table. One of the major differences between the two is the maximum size of partition you can create. If you have SAN storage and are using CSVs in Hyper-V, you may well bump up against this. MBR partitions are limited to 2 terabytes (2.19 × 10^12 bytes). GPT allows for a maximum size of 9.4 zettabytes (9.4 × 10^21 bytes). In other words, at that point size doesn’t matter, at least in this decade.

Anyway, I had a situation where I needed to use diskpart.exe to reset a disk signature. The problem was that disk signatures are GUIDs for GPT disks and a set of hex digits for MBR disks. I needed to figure out which style a disk used so I could run the appropriate task in DiskPart. I ended up using WMI to figure it out.


Function Get-DiskPartitionStyle {
    [CmdletBinding()]
    param (
        [Parameter( ValueFromPipeline=$true,
                    ValueFromPipelineByPropertyName=$true)]
        $Disks = "*",

        [Parameter( ValueFromPipeline=$true,
                    ValueFromPipelineByPropertyName=$true)]
        $Computer = "."
    )

    # The Type property shows the partition style, e.g. "GPT: Basic Data"
    $partitions = Get-WmiObject Win32_DiskPartition -ComputerName $Computer

    foreach ($disk in $Disks) {
        $partitions | Select-Object @{name="DiskID";e={$_.DiskIndex}},Type,Name |
            Where-Object {$_.DiskID -like $disk}
    }
}

Get-DiskPartitionStyle
Get-DiskPartitionStyle -Disks 10
Get-DiskPartitionStyle -Disks 0,5,7,10

I used PowerShell to create an advanced function. By default, it will return all partitions on all disks. You can also specify just the disk you want, or an array of disks.

Generating Output as Objects

PowerShell is based on objects. Objects are sent down a pipeline. Cmdlets take objects as input and emit objects as output. Scripts and functions should do this as well. This enables end users to take advantage of things like Format-Table, Format-List, Export-Csv, and ConvertTo-Html.

This is pretty easy if you are getting a single object and returning a single object. But what if you want to create a new object that is a combination of several source objects, similar to a join in SQL? You can create what is called a PSObject using the New-Object cmdlet.

I have seen a lot of people in the Scripting Games use this technique very effectively. Let’s look at a quick example.
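The idea is along these lines: build an empty PSObject, then bolt on properties pulled from different source objects with Add-Member. The property names here are just illustrative:

```powershell
# Combine data from two different WMI sources into one output object
$os   = Get-WmiObject Win32_OperatingSystem
$bios = Get-WmiObject Win32_BIOS

$output = New-Object PSObject
$output | Add-Member -MemberType NoteProperty -Name ComputerName -Value $os.CSName
$output | Add-Member -MemberType NoteProperty -Name OSVersion    -Value $os.Version
$output | Add-Member -MemberType NoteProperty -Name BIOSVersion  -Value $bios.SMBIOSBIOSVersion

$output
```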


This is great, but there is a trick that might make life a bit simpler. You can pass a hash table to New-Object, which eliminates a few lines and makes things a bit easier to read and understand.
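Building the same kind of combined object with the -Property parameter and a hash table looks like this (again, the property names are just for illustration):

```powershell
$os   = Get-WmiObject Win32_OperatingSystem
$bios = Get-WmiObject Win32_BIOS

# One call to New-Object; the hash table becomes the object's properties
$output = New-Object PSObject -Property @{
    ComputerName = $os.CSName
    OSVersion    = $os.Version
    BIOSVersion  = $bios.SMBIOSBIOSVersion
}

$output
```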


Extra Points for Style when writing PowerShell Code

This is a blog post I have been meaning to write for a while. Really, being a judge for the 2011 Scripting Games caused me to get this post out. As a judge for the games, I have been reading dozens of scripts every day, and I am sure you can imagine it can get a little tiresome reading through code. That being said, I am learning a ton and have thoroughly enjoyed seeing some great scripts and very innovative solutions. This post is both a commentary on what I have seen in the Scripting Games and my opinions on some best practices.

Don’t make Scripts more complicated than they need to be

Use Cmdlets when possible. Only resort to invoking .NET code when there is no cmdlet available. 

Learn to use the pipeline. This can be extremely efficient. In order to do this well, your functions need to be pipeline friendly. Learn to use “ValueFromPipeline” in parameters, but don’t overuse it.
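A pipeline-friendly function needs a process block so it handles each incoming object as it arrives. A minimal sketch:

```powershell
Function Get-Square {
    param (
        [Parameter( Mandatory=$true,
                    ValueFromPipeline=$true)]
        [int]$Number
    )

    process {
        # Runs once for every object that comes down the pipeline
        $Number * $Number
    }
}

1..5 | Get-Square
```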

Use Proper Naming conventions for your functions and scripts.

There is a clearly defined list of acceptable verbs and strong guidance on how to name your nouns. Run Get-Verb to see all the available verbs you can use. For nouns, if you need to disambiguate from another set of cmdlets or functions, use a two- or three-letter prefix. Also, always use singular nouns; plural nouns get complicated. For example, the noun Child is used quite often in providers. Pluralizing “Child” turns it into “Children”, which is not very discoverable, especially if you consider localization.

Use variables that make it easy to understand what you are doing in your script. Comments in code are great, but your code should be readable and understood without them. In general, I would choose good variable names and clear processes over heavily commented code. Comment based help is completely different. It’s great. Use it. Include a lot of examples in your comment based help.


Comment Based Help

Comment Based Help is great. However, it can be very verbose in your script. In the games, I have seen people squeeze it all together and make it hard to read. I would say the most important part of help is the examples. In your examples, start with the simplest way to use your script and add complexity in subsequent examples.


Parameters and Advanced Functions

There is a ton of great functionality in Advanced Functions. In particular, with just a bit of annotation, you get a lot for free when it comes to parameters. You can force a parameter to be mandatory. You can say whether or not it can take values from the pipeline. You can run a number of validations against them, and the list goes on. However, with all these options, if you have three or more parameters it can get pretty verbose pretty quickly. Here is how I handle the verbosity: always put blank lines between parameters.
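Here is the kind of layout I mean: one attribute or setting per line, and a blank line between each parameter, so the eye can find where one ends and the next begins. The parameters themselves are made up for the example:

```powershell
Function Get-Something {
    [CmdletBinding()]
    param (
        [Parameter( Mandatory=$true,
                    ValueFromPipeline=$true,
                    ValueFromPipelineByPropertyName=$true)]
        [string]$Name,

        [Parameter( Mandatory=$false)]
        [ValidateRange(1,100)]
        [int]$Count = 1,

        [switch]$Force
    )

    # function body here
}
```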


Using Tabs and Curly Braces

In the ISE and most script editors, a TAB is equal to 4 empty spaces. I think this is a good number and is pleasing to the eye when reading code. You should use Tabs and Curly Braces to visually show where a block of code begins and ends.
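For example, each nested block steps in one tab, and the closing brace lines up under the statement that opened it:

```powershell
foreach ($service in Get-Service) {
    if ($service.Status -eq "Stopped") {
        Write-Host "$($service.Name) is stopped"
    }
}
```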


Along these lines, I would also like to point out when I use blank lines in code. Sometimes in a function or some block of code, I may be doing two or three sets of operations. If you have much more than that, I would suggest you need to break down your function, but that is a whole other blog post.

In this example, I have added blank lines between chunks of code that seem like they should be grouped together.


Looking at this code I wrote 2 years ago, I probably should have broken it down into more reusable code and written some functions like New-NaElement and New-OutputObject. Oh well, looking back and reflecting is a good thing; I know better now. The variable naming was kind of weak here as well: $NaElement and $NaElement2 are bad names. I should have named them $lunInfoElement and $occupiedSizeElement, or something along those lines.


  • Write the simplest code possible, but not so simple that you lose functionality
  • Let the code document itself
  • Make it pleasing to the eye and easy to follow visually – Think of it as art
  • Ask yourself, “If I were reading this for the first time, would I understand how it works?”

How to Package and Distribute PowerShell Cmdlets, Functions, and Scripts

I have noticed quite a bit of discussion and questions in various online communities about how to package and distribute PowerShell cmdlets, functions, and scripts. What I love about PowerShell is that you can be fast and loose with distributing functionality, very rigorous, or anywhere in between. I have sent one-liners via IM to folks, and I have checked code into Team Foundation Server. It all depends on what your goal is.

I would like to discuss distributing code with a little more rigor than using email or IM as a distribution vector, although that does work great in some situations. Whether you are a developer writing compiled binary cmdlets or an IT Pro writing functions, the answer is Modules. You can read all about them in the help documentation. Just run:

PS > help about_Modules


The beauty of Modules in PowerShell is that they allow you to easily distribute and deploy your code or cmdlets to others using nothing but copy and paste. Let’s say you have a module with a lot of functions to get, set, and remove network settings, and you named this module Network. All you need to do is create a folder called “Network” and put your module files into that folder.
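So deploying the hypothetical Network module is just a folder copy and an import:

```powershell
# Create the folder in the current user's Modules directory
New-Item -ItemType Directory -Path "$home\Documents\WindowsPowerShell\Modules\Network"

# Drop the module file(s) in - the PSM1 name should match the folder name
Copy-Item .\Network.psm1 "$home\Documents\WindowsPowerShell\Modules\Network"

# Now anyone with that folder can simply do this
Import-Module Network
```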

There are two default locations for Modules: one at the system level and one for the current user, similar to how profiles work. The system Modules directory is

C:\Windows\System32\WindowsPowerShell\v1.0\Modules

and the one for the current user can be found at

%UserProfile%\Documents\WindowsPowerShell\Modules
There are three basic types of modules: Script, Manifest, and Binary. Let’s look at each one, as they tend to build on one another.

Script Modules

Script modules are really nothing more than a .PS1 file renamed to .PSM1. For example, if you have a script called myfunctions.ps1 with a list of functions you use every day, you can turn it into a module by renaming the script to myfunctions.psm1. Then, instead of dot sourcing your script, you can just use Import-Module to bring all those functions into your PowerShell session.
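In other words, the whole conversion is:

```powershell
# Turn an everyday script of functions into a module
Rename-Item myfunctions.ps1 myfunctions.psm1

# Instead of dot sourcing (. .\myfunctions.ps1), import it
Import-Module .\myfunctions.psm1

# All the functions in the file are now in your session
Get-Command -Module myfunctions
```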

Manifest Modules

Manifests can be used to add a bunch of useful information for code authors and users. A manifest is a separate file that you can include with your PSM1 file or your compiled module, which is just a DLL. The manifest file is just a PowerShell hash table. You can use the New-ModuleManifest cmdlet to create one with some very basic information. Manifests are really nice for adding version information and prerequisites for your module. If you create a folder called C:\Windows\System32\WindowsPowerShell\v1.0\Modules\myModule and drop in a PSM1 file and a PSD1 file, PowerShell will load the manifest. You just need to set the ModuleToProcess field to point at your PSM1 or DLL.
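Creating one can look something like this (New-ModuleManifest will prompt for any values you leave out, and the exact parameter set varies a bit by PowerShell version):

```powershell
New-ModuleManifest -Path .\myModule.psd1 `
                   -ModuleToProcess myModule.psm1 `
                   -Author "Andy Schneider" `
                   -ModuleVersion 1.0 `
                   -Description "To demo a module manifest"
```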

# Module manifest for module 'demo'
# Generated by: Andy Schneider
# Generated on: 4/4/2011


# Script module or binary module file associated with this manifest
ModuleToProcess = ''

# Version number of this module.
ModuleVersion = '1.0'

# ID used to uniquely identify this module
GUID = '8e420ad8-c7d7-4139-8d2e-02d4e31416a9'

# Author of this module
Author = 'Andy Schneider'

# Company or vendor of this module
CompanyName = 'get-powershell'

# Copyright statement for this module
Copyright = '2011 - Use at your own discretion - if it kills your cat - not my fault'

# Description of the functionality provided by this module
Description = 'To demo a module manifest'

# Minimum version of the Windows PowerShell engine required by this module
PowerShellVersion = ''

# Name of the Windows PowerShell host required by this module
PowerShellHostName = ''

# Minimum version of the Windows PowerShell host required by this module
PowerShellHostVersion = ''

# Minimum version of the .NET Framework required by this module
DotNetFrameworkVersion = ''

# Minimum version of the common language runtime (CLR) required by this module
CLRVersion = ''

# Processor architecture (None, X86, Amd64, IA64) required by this module
ProcessorArchitecture = ''

# Modules that must be imported into the global environment prior to importing this module
RequiredModules = @()

# Assemblies that must be loaded prior to importing this module
RequiredAssemblies = @()

# Script files (.ps1) that are run in the caller's environment prior to importing this module
ScriptsToProcess = @()

# Type files (.ps1xml) to be loaded when importing this module
TypesToProcess = @()

# Format files (.ps1xml) to be loaded when importing this module
FormatsToProcess = @()

# Modules to import as nested modules of the module specified in ModuleToProcess
NestedModules = @()

# Functions to export from this module
FunctionsToExport = '*'

# Cmdlets to export from this module
CmdletsToExport = '*'

# Variables to export from this module
VariablesToExport = '*'

# Aliases to export from this module
AliasesToExport = '*'

# List of all modules packaged with this module
ModuleList = @()

# List of all files packaged with this module
FileList = @()

# Private data to pass to the module specified in ModuleToProcess
PrivateData = ''


Binary Modules

If you are a developer, or an IT Pro who loves to code, you might want to create a compiled module. This is a module that contains cmdlets and/or providers that are compiled, typically written in C# or VB.NET. A binary module is actually a DLL file. You can read all about creating compiled cmdlets on MSDN. Just like with a PSM1 module, you can create a module manifest and add the DLL to ModuleToProcess in your manifest.

Using Modules

There are several cmdlets that allow you to interact with Modules
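You can see the whole set with Get-Command:

```powershell
# List the module-related cmdlets
Get-Command -Noun Module, ModuleManifest, ModuleMember

# Get-Module, Import-Module, New-Module, Remove-Module,
# New-ModuleManifest, Test-ModuleManifest, Export-ModuleMember
```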


What most of these do should be pretty obvious from their names. One nifty trick is the -ListAvailable switch on Get-Module. This will list all the modules that are available on your system, so you know which ones you can import. You can even filter this based on module type.
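For example:

```powershell
# Everything installed in the module paths, imported or not
Get-Module -ListAvailable

# Just the script modules
Get-Module -ListAvailable | Where-Object { $_.ModuleType -eq "Script" }
```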