NetApp PowerShell Toolkit has a PowerShell Provider

I was at the PowerShell Deep Dive the first half of this week. During a break, I had the chance to meet Clinton Knight, the lead guy behind the NetApp PowerShell Toolkit. Since v1, I have been asking for a provider. Well, it turns out that they slipped one into their latest release and I didn’t even notice. That will teach me to start reading release notes.

Anyway, with a filer you can now do things like the following. This is going to make it even easier to navigate all your volumes, qtrees, and LUNs.

image

Another very cool feature that I just found out about is the ability to store credentials to connect to different filers. They have three cmdlets that allow you to Get, Add, and Remove credentials. These are encrypted and stored on disk so that only the user that created them can access them. Definitely a nice touch.

image

NetApp has really done a fantastic job with their Cmdlets. Their implementation of PowerShell is by far one of the best I have seen, including Microsoft and third parties. Companies looking to use PowerShell in their products should definitely take a look at what NetApp has done.

Use PowerShell to find out if a Disk Partition is GPT or MBR

When you go to create a partition on a disk in Windows, you can use two different styles, MBR and GPT. They stand for Master Boot Record and GUID Partition Table. One of the major differences between these two is the size of partition you can create. If you have SAN storage and are using CSVs in Hyper-V, you very well might bump up against this. MBR partitions are limited to 2 terabytes (2.19 × 10^12 bytes). GPT allows for a maximum size of 9.4 zettabytes (9.4 × 10^21 bytes). In other words, at this point size doesn’t matter, at least in this decade.
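That 2-terabyte MBR ceiling comes straight from the on-disk format: MBR records a partition's size as a 32-bit sector count, and with standard 512-byte sectors that caps out at 2^32 × 512 bytes. A quick sanity check of the math in PowerShell:

```powershell
# MBR stores a partition's size as a 32-bit sector count.
# With 512-byte sectors, the largest partition is 2^32 * 512 bytes.
$maxMbrBytes = [math]::Pow(2, 32) * 512
"{0:N0} bytes (~{1} TB)" -f $maxMbrBytes, ($maxMbrBytes / 1TB)
# 2,199,023,255,552 bytes (~2 TB)
```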

Anyway, I had a situation where I needed to use diskpart.exe to reset a disk signature. The problem was, disk signatures are GUIDs for GPT disks and a set of hex digits for MBR disks. I needed to figure out how to detect the partition style and do the appropriate task in DiskPart. I ended up using WMI to figure it out.


Function Get-DiskPartitionStyle {
    param
    (
        [Parameter( ValueFromPipeline = $true,
                    ValueFromPipelineByPropertyName = $true)]
        $Disks = "*",

        [Parameter( ValueFromPipeline = $true,
                    ValueFromPipelineByPropertyName = $true)]
        $Computer = "."
    )

    PROCESS
    {
        $partitions = Get-WmiObject Win32_DiskPartition -Computer $Computer

        foreach ($disk in $Disks) {
            $partitions | Select @{name="DiskID";e={$_.DiskIndex}},Type,Name |
            Where-Object {$_.DiskID -like $disk}
        }
    }
}


Get-DiskPartitionStyle
Get-DiskPartitionStyle -disks 10
Get-DiskPartitionStyle -disks 0,5,7,10

I used PowerShell to create an Advanced Function. By default, it will return all partitions and their disks. You can also specify just the disk you want, as well as an array of disks.

Generating Output as Objects

PowerShell is based on objects. Objects are sent down a pipeline. Cmdlets take objects as input and emit objects as output. Scripts and functions should do this as well. This enables end users to take advantage of things like Format-Table, Format-List, Export-Csv, and ConvertTo-Html.

This is pretty easy if you are taking a single object and returning a single object. But what if you want to create a new object that is a combination of several source objects? This would be like a join in SQL. You can create what is called a PSObject using the New-Object cmdlet.

I have seen a lot of people in the Scripting Games use this technique very effectively. Let’s look at a quick example

image
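In case the screenshot is hard to read, here is a minimal sketch of the New-Object / Add-Member pattern (the property names are invented for illustration):

```powershell
# Build a custom object one property at a time with Add-Member
$output = New-Object PSObject
$output | Add-Member -MemberType NoteProperty -Name ComputerName -Value "Server01"
$output | Add-Member -MemberType NoteProperty -Name FreeSpaceGB  -Value 42
$output
```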

This is great, but there is a trick that might make life a bit simpler. You can pass a hash table to New-Object, which eliminates a few lines and makes things a bit easier to read and understand.

image
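A sketch of the hash table shortcut, producing the same object (again with invented property names) in one statement:

```powershell
# Same custom object built in one shot by passing a hash table to New-Object
$output = New-Object PSObject -Property @{
    ComputerName = "Server01"
    FreeSpaceGB  = 42
}
$output
```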

Extra Points for Style when writing PowerShell Code

This is a blog post I have been meaning to write for a while. Really, being a judge for the 2011 Scripting Games caused me to get this post out. As a judge for the games, I have been reading dozens of scripts every day, and I am sure you can imagine it can get a little tiresome reading through code. That being said, I am learning a ton and have thoroughly enjoyed seeing some great scripts and very innovative solutions. This post is both a commentary on what I have seen in the Scripting Games and my opinions on some best practices.

Don’t make Scripts more complicated than they need to be

Use Cmdlets when possible. Only resort to invoking .NET code when there is no cmdlet available. 

Learn to use the pipeline. This can be extremely efficient. In order to do this well, your functions need to be pipeline friendly. Learn to use “ValueFromPipeline” in parameters, but don’t overuse it.

Use Proper Naming conventions for your functions and scripts.

There is a clearly defined list of acceptable verbs, and also strong guidance on how to name your nouns. Run Get-Verb to see all the available verbs that you can use. For nouns, if you need to disambiguate from another set of cmdlets or functions, use a two- or three-letter prefix. Also, always use singular nouns. Plural nouns get complicated. For example, the noun Child is used quite often in providers. Pluralizing “child” turns it into “children”, which is not very discoverable, especially if you consider localization.
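For example, Get-Verb makes it trivial to check whether a verb you have in mind is on the approved list:

```powershell
# Is the verb I want to use on the approved list?
$approved = (Get-Verb).Verb
$approved -contains "Get"     # True
$approved -contains "Fetch"   # False - use Get instead
```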

Use variables that make it easy to understand what you are doing in your script. Comments in code are great, but your code should be readable and understood without them. In general, I would choose good variable names and clear processes over heavily commented code. Comment based help is completely different. It’s great. Use it. Include a lot of examples in your comment based help.

image

Comment Based Help

Comment Based Help is great. However, it can be very verbose in your script. In the games, I have seen people squeeze it all together and make it hard to read. I would say the most important parts of help are the examples. In your examples, start with the easiest way to use your script and add more complexity in subsequent examples.

image
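As a minimal skeleton (the function name and examples here are invented for illustration), comment-based help looks like this, with the examples ordered from simplest to most involved:

```powershell
Function Get-Something {
<#
.SYNOPSIS
    One-line description of what the function does.
.EXAMPLE
    Get-Something
    The simplest possible invocation.
.EXAMPLE
    Get-Something -Name "foo"
    A more involved invocation, building on the first example.
#>
    param($Name = "*")
    "Getting $Name"
}
```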

Parameters and Advanced Functions

There is a ton of great functionality in Advanced Functions. In particular, with just a bit of annotation, you can get a lot for free when it comes to parameters. You can force a parameter to be mandatory. You can say whether or not it can take values from the pipeline. You can run a number of validations against them, and the list goes on. However, with all these options, if you have three or more parameters, it can get pretty verbose pretty quickly. Here’s how I handle the verbosity: always put spaces between parameters.

image
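A sketch of what that annotation looks like in practice (the function and parameters are invented for illustration), with a blank line separating each parameter for readability:

```powershell
Function Get-Widget {
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,
                   ValueFromPipeline = $true)]
        [ValidateNotNullOrEmpty()]
        [string]$Name,

        [Parameter()]
        [ValidateRange(1, 100)]
        [int]$Count = 1
    )

    PROCESS { "$Count x $Name" }
}
```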

Using Tabs and Curly Braces

In the ISE and most script editors, a TAB is equal to 4 empty spaces. I think this is a good number and is pleasing to the eye when reading code. You should use Tabs and Curly Braces to visually show where a block of code begins and ends.

image

Along these lines, I would also like to point out when I use blank lines in code. Sometimes in a function or some block of code, I may be doing two or three sets of operations. If you have much more than that, I would suggest you need to break down your function, but that is a whole other blog post.

In this example, I have added blank lines between chunks of code that seem like they should be grouped together.

image

Looking at this code I wrote two years ago, I probably should have broken it down into more reusable code and written some functions like New-NaElement and New-OutputObject. Oh well, looking back and reflecting is a good thing. I now know better. Also, the variable naming was kind of weak here. $NaElement and $NaElement2 are bad names. I should have named them $lunInfoElement and $occupiedSizeElement, or something along those lines.

Summary

  • Write the simplest code possible, but not so simple that you lose functionality
  • Let the code document itself
  • Make it pleasing to the eye and easy to follow visually – Think of it as art
  • Ask yourself “If I was reading this for the first time, would I understand how it works?”

How to Package and Distribute PowerShell Cmdlets, Functions, and Scripts

I have noticed quite a bit of discussion and many questions in various online communities on how to package and distribute PowerShell Cmdlets, Functions, and Scripts. What I love about PowerShell is that when it comes to code distribution, you can be fast and loose, very rigorous, or anywhere in between. I have sent one-liners via IM to folks, and I have checked code into Team Foundation Server. It all depends on what your goal is.

I would like to discuss distributing code with a little more rigor than using email or IM as a distribution vector, although those do work great in some situations. Whether you are a developer writing compiled binary cmdlets or an IT Pro writing functions, the answer is Modules. You can read all about them in the help documents. Just run

PS > Help about_Modules


The beauty of Modules in PowerShell is that they allow you to easily distribute and deploy your code or cmdlets to others using nothing but copy and paste. Let’s say you have a module with a lot of functions to Get, Set, and Remove network settings, and you named this module Network. All you need to do is create a folder called “Network” and put your module files into that folder.

There are two default locations for Modules: one at the system level, and one for the current user, similar to how profiles work. The system Modules directory is

%windir%\System32\WindowsPowerShell\v1.0\Modules

and the one for the current user can be found at

%UserProfile%\Documents\WindowsPowerShell\Modules 

There are three basic types of modules: Script, Manifest, and Binary. Let’s look at each one, as they tend to build on one another.

Script Modules

Script modules are really nothing more than a .PS1 file renamed to .PSM1. For example, if you have a script that has a list of functions you use everyday called myfunctions.ps1, you could turn it into a module by renaming the script to myfunctions.psm1. Then, instead of dot sourcing your script, you could just use Import-Module to bring all those functions into your PowerShell session.
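As a sketch of the whole workflow (the function, folder, and file names are invented for illustration), it really is just a rename and an Import-Module:

```powershell
# Create a module folder and drop a .psm1 into it (paths are illustrative)
$modDir = Join-Path ([IO.Path]::GetTempPath()) "MyFunctions"
New-Item -ItemType Directory -Path $modDir -Force | Out-Null
'Function Get-Greeting { "Hello" }' |
    Set-Content (Join-Path $modDir "MyFunctions.psm1")

# Instead of dot sourcing, import the module into the session
Import-Module (Join-Path $modDir "MyFunctions.psm1")
Get-Greeting    # the function is now available in the session
```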

Manifest Modules

Manifests can be used to add a bunch of useful information for code authors and users. A manifest is a separate file that you can include with your PSM1 file or your compiled module (which is just a DLL). The manifest file is just a PowerShell hash table. You can use a cmdlet called New-ModuleManifest to create one with some very basic information. Manifests are really nice for adding version information and prerequisites for your module. If you create a folder called c:\windows\system32\windowspowershell\v1.0\modules\myModule and drop in a PSM1 file and a PSD1 file, PowerShell will load the manifest. You just need to set the ModuleToProcess field to point at your PSM1 or DLL.
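For instance, a skeleton like the one below can be generated with a single command (the path and values here are illustrative):

```powershell
# Generate a manifest skeleton next to your module file (values are illustrative)
New-ModuleManifest -Path .\myModule.psd1 `
                   -ModuleToProcess "myModule.psm1" `
                   -Author "Andy Schneider" `
                   -ModuleVersion "1.0" `
                   -Description "To demo a module manifest"
```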

#
# Module manifest for module 'demo'
#
# Generated by: Andy Schneider
#
# Generated on: 4/4/2011
#

@{

# Script module or binary module file associated with this manifest
ModuleToProcess = ''

# Version number of this module.
ModuleVersion = '1.0'

# ID used to uniquely identify this module
GUID = '8e420ad8-c7d7-4139-8d2e-02d4e31416a9'

# Author of this module
Author = 'Andy Schneider'

# Company or vendor of this module
CompanyName = 'get-powershell'

# Copyright statement for this module
Copyright = '2011 - Use at your own discretion - if it kills your cat - not my fault'

# Description of the functionality provided by this module
Description = 'To demo a module manifest'

# Minimum version of the Windows PowerShell engine required by this module
PowerShellVersion = ''

# Name of the Windows PowerShell host required by this module
PowerShellHostName = ''

# Minimum version of the Windows PowerShell host required by this module
PowerShellHostVersion = ''

# Minimum version of the .NET Framework required by this module
DotNetFrameworkVersion = ''

# Minimum version of the common language runtime (CLR) required by this module
CLRVersion = ''

# Processor architecture (None, X86, Amd64, IA64) required by this module
ProcessorArchitecture = ''

# Modules that must be imported into the global environment prior to importing this module
RequiredModules = @()

# Assemblies that must be loaded prior to importing this module
RequiredAssemblies = @()

# Script files (.ps1) that are run in the caller's environment prior to importing this module
ScriptsToProcess = @()

# Type files (.ps1xml) to be loaded when importing this module
TypesToProcess = @()

# Format files (.ps1xml) to be loaded when importing this module
FormatsToProcess = @()

# Modules to import as nested modules of the module specified in ModuleToProcess
NestedModules = @()

# Functions to export from this module
FunctionsToExport = '*'

# Cmdlets to export from this module
CmdletsToExport = '*'

# Variables to export from this module
VariablesToExport = '*'

# Aliases to export from this module
AliasesToExport = '*'

# List of all modules packaged with this module
ModuleList = @()

# List of all files packaged with this module
FileList = @()

# Private data to pass to the module specified in ModuleToProcess
PrivateData = ''

}

Binary Modules

If you are a developer, or an IT Pro who loves to code, you might want to create a compiled module. This is a module that contains cmdlets and/or providers that are compiled, typically written in C# or VB.NET. A binary module is actually a DLL file. You can read all about creating compiled cmdlets on MSDN. Just like a PSM1 module, you can create a module manifest and add the DLL to ModuleToProcess in your manifest.

Using Modules

There are several cmdlets that allow you to interact with Modules

image

What most of these do should be pretty obvious from their names. One nifty trick is the -ListAvailable switch on Get-Module. This will list all the modules that are available on your system, so you know which ones you can import. You can even filter this based on module type.

image
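For example (the modules on your system will vary):

```powershell
# Show everything importable, then narrow the list to just script modules
Get-Module -ListAvailable |
    Where-Object { $_.ModuleType -eq "Script" } |
    Select-Object Name, ModuleType
```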

Moving Get-PowerShell from Word Press to Blog Engine .NET

If you happen to be one of the six people that read this blog on a regular basis, you will have noticed a few changes. I recently converted my blog from WordPress to BlogEngine.NET. There are a few reasons why I made the change. First, I wanted to start learning more about web development on .NET. Second, Microsoft just released a product called WebMatrix. WebMatrix is an app that allows a techy guy like me to start hacking and slashing at a web site, and it provides a vehicle for me to get into some ASP.NET code if I choose to do so. What’s great about WebMatrix is that it lets you pull down templates and sites from a large list of great open source projects that you can use in your own web site. From this list I chose BlogEngine.NET. This is where the fun begins.

My first problem was figuring out how to move content from my WordPress blog to my new BlogEngine.NET blog. Searching the interwebs with Bingle, I came across a project on CodePlex called BlogML. BlogML is essentially a form of XML used specifically to transfer blog data between different blog engines. BlogEngine.NET has native functionality to import a BlogML file. However, getting WordPress to export BlogML was a bit of a trick, until I found this export script.

Once I had the BlogML file, I needed to clean it up a little bit. Luckily, the BlogML project has an XML Schema file that I was able to use to check the validity of the XML. There were about 10 errors in the exported file that I had to clean up.

So now I have a new web site with my content pulled in, but there is one big problem. I needed to be able to maintain my permalinks. Here’s where PowerShell came into action. By the way, this is what I love about PowerShell. It’s a complete and total Swiss Army knife. You can use it to do just about anything anywhere.

I needed to get a list of all my permalinks. So what I did was tweak the RSS feeds on my old and new blogs to list all of my blog entries. Then I used System.Net.WebClient to download the XML and parse out the URLs that I needed.


Function Get-RssLink {
    param(
        $url = "http://www.get-powershell.com/syndication.axd"
    )

    $web = New-Object System.Net.WebClient
    $rss = $web.downloadString($url)
    $cleaned = $rss -replace '',""
    $xml = [xml]$cleaned
    $xml.rss.channel.item | select link
}


Once I had this from the old and new blogs, I generated a CSV file that had two columns, OldURL and NewURL. Armed with this information, I started looking at a tool in IIS called URL Rewrite 2.0. My blog is hosted at Cytanium. One thing that is really cool about them is that you can manage your web site directly using IIS Manager. Looking at my CSV file, there was an obvious basic pattern, so I looked at using RegEx in URL Rewrite. This worked for a few of my permalinks, but I quickly found out that there were many more exceptions than rules in how the URLs were translated. What I ended up doing was creating a rule for each permalink. Sure, this may not be the most optimal solution, but it’s not like I am running Amazon.com.

However, I needed to create a good chunk of URL Rewrite rules. Again, I busted out my trusty old Swiss Army knife, aka PowerShell. I actually found out that IIS Manager can generate scripts for you, but it generates C# and JavaScript. Now these are nice languages, but not ideal for IT admins. I took the liberty of converting the generated C# to PowerShell.

Here’s what I came up with:


Function Add-UrlRewriteRule {
    param(
        $name,
        $matchUrl,
        $actionType,
        $actionUrl,
        $actionRedirectType = "Found"
    )

    $serverManager = New-Object Microsoft.Web.Administration.ServerManager
    $config = $serverManager.GetWebConfiguration("blog")
    $rulesSection = $config.GetSection("system.webServer/rewrite/rules")
    $rulesCollection = $rulesSection.GetCollection()

    $ruleElement = $rulesCollection.CreateElement("rule")
    $ruleElement["name"] = $name
    $ruleElement["stopProcessing"] = $true

    $matchElement = $ruleElement.GetChildElement("match")
    $matchElement["url"] = $matchUrl

    $actionElement = $ruleElement.GetChildElement("action")
    $actionElement["type"] = $actionType
    $actionElement["redirectType"] = $actionRedirectType
    $actionElement["url"] = $actionUrl

    $rulesCollection.Add($ruleElement)
    $serverManager.CommitChanges()
}



$maps = Import-Csv "d:\Users\My Documents\UrlRewriteMapping.csv"

foreach ($map in $maps) {
    Add-UrlRewriteRule -name $map.old.trim() `
                       -matchUrl $map.old.trim() `
                       -actionUrl $map.new.trim() `
                       -actionType "Redirect"
}



Once I had my URL Rewrites working, I was pretty much ready to go. All the basic functionality that I need is up and running now. If there are any quirks, please send any feedback via my Contact Page.

Search TechNet Script Center Forum for Unanswered PowerShell Questions

I wanted a quick and easy way to search the TechNet Script Center forums for all questions that had the word “PowerShell” in either the title or in the question itself. Here is what I came up with:


Function Get-ScriptCenterUnanswered {
    $url = "http://social.technet.microsoft.com/Forums/en/ITCG/threads?outputAs=rss&filter=unanswered"
    $web = New-Object System.Net.WebClient
    $rss = $web.downloadString($url)
    $cleaned = $rss -replace '',""
    $xml = [xml]$cleaned
    $xml.rss.channel.item | ? {
        ($_.Title -like "*powershell*") -or ($_.Description -like "*powershell*")
    } | fl title,link
}

You can use a filter to get only the unanswered questions. Once I pulled those down, I found that the feed had some odd characters at the beginning. I just used the -replace operator to get rid of them. Once I did that, I could convert the text to XML, treat the output as an object, and use the Where-Object cmdlet to filter it based on my criteria.
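Here is a self-contained sketch of that cleanup; the "?" stands in for the junk characters, since the exact pattern was lost when the post was published:

```powershell
# A stray non-XML prefix ("?" stands in for the junk characters)
$raw = '?<rss><channel><item><title>PowerShell help</title>' +
       '<link>http://example.com/1</link></item></channel></rss>'

$cleaned = $raw -replace '^[^<]+',''   # drop everything before the first tag
$xml = [xml]$cleaned                   # now the cast succeeds
$xml.rss.channel.item |
    Where-Object { $_.title -like "*powershell*" } |
    ForEach-Object { $_.link }
```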

I slapped this together in about 10 minutes so this is not very reusable – but I thought I would share.

-Andy


Scripting Games 2011

I am very excited about the upcoming Scripting Games this year. This is always a great opportunity for people to learn an incredible amount. If you are a beginner, this is a great place to start learning. If you’ve scripted yourself out of some deep dark holes, it’s a great opportunity to learn from other top notch folks and to also share your expertise. Also, I get to be one of the judges this year. I am really looking forward to reviewing scripts and seeing all the great solutions that y’all are going to come up with! Hope to see you there!



It's what's on the inside that counts

Have you ever run a PowerShell cmdlet, looked at the properties, and wanted to do something with one of those properties, like sort or group the output, only to have it fail? This has happened to me a few times. One example of this was with the NetApp PowerShell Toolkit.

I was looking at snapshots and wanted to sort by their Created date. Notice that there is a Created column in the output.

image

Looks good, but when I try to sort by Created, I get the same output as when I sort by “foo”. (This is not a good thing.)

image

It turns out that “Created”, just like “foo”, is not a real property of the snapshot objects. Let’s see what is really there. There are a couple of ways you can see what’s really going on. First, you can use Get-Member.

image

Get-Member tells us that there are a couple of properties, AccessTime and AccessTimeDT, that could be useful. From this I can infer that Created is somehow related to AccessTimeDT. But how and why is the real question.

An author of a PowerShell module can create an XML file that tells PowerShell how to display data in the console. If we go and look at the formatting file in the NetApp toolkit, we find that Created is mapped to the AccessTimeDT property for snapshot objects.

image

So what does this mean for you as a PowerShell user? If sorting or grouping by a property is not working, make sure that property is real. You can use Get-Member or go hacking through the format.ps1xml file for that particular module.

What does this mean for PowerShell module authors? I would say be very careful about how you present data to end users. My humble suggestion is to use a types.ps1xml file to create script properties, and then expose those script properties to end users in the format file. This way, we can sort, group, and filter by the properties we expect to be there.
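As a hedged illustration (the type name and property are assumptions for this sketch, not the actual NetApp definitions), a ScriptProperty defined in a types.ps1xml file might look like this, making Created a real, sortable property:

```xml
<!-- Hypothetical types.ps1xml fragment: expose a real, sortable property -->
<Types>
  <Type>
    <!-- the .NET type name here is illustrative -->
    <Name>NetApp.Ontapi.Filer.Snapshot</Name>
    <Members>
      <ScriptProperty>
        <Name>Created</Name>
        <GetScriptBlock>$this.AccessTimeDT</GetScriptBlock>
      </ScriptProperty>
    </Members>
  </Type>
</Types>
```

With that in place, the format file can display Created, and Sort-Object Created works because the property actually exists on the object.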