Using Claims Authorization Rules in ADFS 2.0

I am willing to bet that 90% of the time you have created claim rules, you never really noticed that there are actually three tabs of rules that you can use.

[Screenshot: the three claim rule tabs in the Edit Claim Rules dialog]

Most of the time, we are just messing with the “Issuance Transform Rules.” When you walk through the “Add a Relying Party” wizard, you may not notice the rule that gets created automatically under “Issuance Authorization Rules.”

Here is a screen shot of the rule that says “Allow everyone”:

[Screenshot: the default allow-everyone authorization rule]

If we look at the rule, we can see that it issues a claim of type permit with a value of true.

[Screenshot: the rule’s claim, type permit with a value of true]
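For reference, the rule language behind this default rule is a single unconditional issue statement:

=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");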

You can also create a rule that says “Deny someone with this value of a claim.” For instance, I can add a rule that says “Deny access to this Relying Party if anyone tries to log in with a claim of type Name whose value looks like ‘Andy.’”

[Screenshot: the custom deny rule for Name values matching “Andy”]
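Behind that dialog, the rule would look something like this (a sketch; the exact regex is whatever you choose):

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", Value =~ "Andy"]
=> issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "DenyUsersWithClaim");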

By the way, the =~ syntax is saying: if a claim of type Name has a value that matches the regular expression, then issue a claim of type deny with a value of “DenyUsersWithClaim.”

You can also issue a strict deny-all with a straight, unconditional deny like this:

=> issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");

So this is all great, but what if you need to combine some of these rules and perhaps make an exception for a handful of users? When I first started messing with this, I figured these authorization claim rules would act like a firewall policy: processing would start at the top, act on the first rule that matched, and then stop. THIS IS NOT THE CASE. What really made me think it worked that way was that there is an option to rearrange the order of the rules.

[Screenshot: the option to reorder authorization rules]

You wish it were so easy! It turns out that if there is a “Deny” rule that matches a user’s claims anywhere, it always wins, no matter where on the list it sits. So in this case, even though the permit-all rule is first, if I have a claim that says my “Name” is “Andy,” I will get an access denied error.

We have to get a little tricky. Let’s say we want to deny everyone from a particular Identity Provider except for three specific users. How would we go about it? Here is one way.

First, let’s figure out an easy way to determine, at the claims authorization rule, which IDP the user came from. If you go to “Trust Relationships | Claims Provider Trusts,” you will see a trust for Active Directory and any other IDPs that you have added. For the sake of demonstration, let’s say you have one called Contoso. Right-click the Contoso IDP and select “Edit Claim Rules.”

You can add a claim using a custom rule. Choose “Send Claims Using a Custom Rule.” You can use any namespace that you want; I would suggest picking one namespace for custom claims and sticking with it. For this demo, I am using http://sso.contoso.com/users

This rule will add a claim to all users that log in from the Contoso STS that says “Company” = “Contoso”

[Screenshot: the custom rule on the Contoso claims provider trust]
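In rule language, the body can be as simple as an unconditional issue statement (a sketch using the namespace above):

=> issue(Type = "http://sso.contoso.com/users/Company", Value = "Contoso");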


All right, so now we have the claim for people from Contoso, but we want to create exemptions for foo@contoso.com, bar@contoso.com, and andy@contoso.com.

To do this, we need to go back to our Relying Party Authorization Rules and add a new custom rule.


[Screenshot: the custom authorization rule on the Relying Party]

Here’s how this rule breaks down. For a bit more detail, I would highly suggest reading through the claims rule language primer.

Here’s the actual claim rule:

c1:[Type == "http://sso.contoso.com/users/Company", Value =~ "^(?i)Contoso$"]
&& c2:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", Value =~ "(?<!foo|bar|andy)@contoso.com"]
=> issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "DenyUsersWithClaim");

Here’s the basic logic of the claim rule:

If ((claim.Company matches “Contoso”) AND (claim.upn does NOT match andy@contoso.com, foo@contoso.com, or bar@contoso.com)) Then (Issue “DenyUsersWithClaim”)

Perhaps the trickiest part here is the regex that filters out the exceptions. The (?<! ... ) construct is a negative lookbehind, meaning “not preceded by,” and the “|” symbol is the alternation operator, which functions as an “or.”

Now, all users from Contoso will get denied access except for the 3 that match the regex.

Using Enterprise AD Credentials to Manage Azure Access Control Service

ACS is Azure’s Access Control Service, a cloud-based Security Token Service (STS). With the recent advent of Windows Azure Active Directory, and with ACS now offered for free, I envision more and more enterprises beginning to leverage these services.

Typically, when you create an Azure ACS namespace, you log in with a Windows Live ID and create/delete/manage services. However, if you have an Identity and Access Management team in your enterprise, you may want a bit more control over who can manage ACS, and also ensure that they are using their AD credentials rather than personal Windows Live accounts. This is now completely possible using on-premises ADFS.

This post assumes you have built out and installed an ADFS infrastructure and are familiar with adding Relying Parties and using claims.

To create a new ACS namespace, you will need to go to https://manage.windowsazure.com, log in to the portal, and then click on your name and choose Previous Portal.

[Screenshot: the Previous Portal option in the management portal]

In the old portal, you can manage Service Bus, Access Control, and caching.

[Screenshot: the Service Bus, Access Control & Caching section of the old portal]

Click there and create a new ACS namespace. Once the namespace is created, you can go in and manage “Identity Providers.”

Typically, this is where you add identity providers that will authenticate users to your Relying Parties. Live ID is there by default, and you can add others like Google, Facebook, and Yahoo!

[Screenshot: the Identity Providers page in ACS]

The one you need here is the WS-Federation identity provider (ADFS 2.0).

From there you can give the URL of your ADFS federation metadata. It is typically something like https://sts.example.com/FederationMetadata/2007-06/FederationMetadata.xml

You must also add ACS as a Relying Party to your ADFS instance to establish the trust.

Now that you have added your ADFS service as a Trusted Identity Provider, you can use ADFS to authenticate your relying parties.

However, that is not the end goal in this scenario. We want to set up ACS so that we can log in to the management portal with our Active Directory Credentials. Here’s what else you need to do.

In ACS, go to Administration and choose Add Administrator.

[Screenshot: the Add Administrator page in ACS]

The one thing you will need to do is specify the claim type and value that grant permission to manage the portal.

I would suggest using a Role claim; then, in ADFS on your side, you can map an AD group to that role claim.

Here’s the role claim:

[Screenshot: the role claim in the Add Administrator page]

The value you specify is the value of the claim you issue from ADFS when you add a claim rule mapping a group membership to the role claim. An example would be Domain\ACSAdministrators.
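In ADFS, the “Send Group Membership as a Claim” rule template generates a rule along these lines (a sketch; the group SID is a placeholder):

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "S-1-5-21-<your-group-sid>", Issuer == "AD AUTHORITY"]
=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value = "Domain\ACSAdministrators", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, ValueType = c.ValueType);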

To test this out, add yourself to the ACSAdministrators group you created and then try to authenticate to the management URL for your ACS namespace. It will be something like https://get-powershell.accesscontrol.windows.net/v2/mgmt/. From there, you will be prompted for which identity provider you want to log in with. Choose your ADFS provider, log in with your corporate credentials, and you will have access to manage ACS.

A New Blog!

Hey folks. I wanted to let you know I have started a new blog over at The IT Fishing Pole. The basic concept of the Fishing Pole is the old saying, “Give a man a fish, feed him for a day. Teach him how to fish, feed him for a lifetime... or at least until he’s sick of seafood.”

I will still keep Get-PowerShell around and write occasional posts here, but I am starting to get into a lot more technologies, including Identity Management, ADFS, and PKI to name a few, and I wanted a more generic platform for those articles. Plus, I have to say, there is something that just feels good about starting something brand new.

Thanks for reading!

-Andy

Windows 8 Active Directory PowerShell Provider

One of the most potentially useful features of the AD tools provided by Microsoft is the AD PowerShell Provider. A Provider in PowerShell allows a user to interact with a data structure similarly to how they would interact with a file system. You can change directories, list items, create new items, and delete items. This is a really good model for any kind of hierarchical data. AD is a great example. I like to use PowerShell providers to find and navigate through data very quickly.

The problem I have with the AD Provider is the names that are used when you navigate into Active Directory. In this case, a picture is worth a thousand words.

[Screenshot: cd into a container under AD: failing with a path-does-not-exist error]

What do you mean the path doesn’t exist? According to the output of my last ls command, I should be able to cd into windev, configuration, and three others. It turns out you don’t use the value of the “Name” property; you have to cd into the “Distinguished Name.”

[Screenshot: navigating the AD: drive using distinguished names]

To me, that is very unintuitive and does not align well with the PowerShell mantra, “Think, Type, Get.” Well, in the Windows 8 Developer Preview of the server OS, you can address this issue. The default AD: drive still exists and behaves in the same way, but it doesn’t have to stay that way.

You can remove the AD PSDrive and create your own new one. The trick here is to use the –FormatType parameter with the value “Canonical.” This changes how the drive is set up and which names the provider binds to.

[Screenshot: recreating the AD: drive with -FormatType Canonical]
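A minimal sketch of the commands (assuming the Windows 8 ActiveDirectory module is available, and that -FormatType behaves as it did in the preview bits):

Import-Module ActiveDirectory
# Remove the default AD: drive and recreate it using canonical names
Remove-PSDrive -Name AD
New-PSDrive -Name AD -PSProvider ActiveDirectory -Root "" -FormatType Canonical
cd AD:
dir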

Now we can navigate through AD much more easily.

[Screenshot: navigating the AD: drive using canonical names]

Random Collection of New Features found while Exploring PowerShell V3

I spent some time this afternoon playing around with PowerShell V3 and came across a (very random) set of new features and functionality that I thought I would highlight here. My wife is pretty convinced I have Adult Attention Deficit Disorder. This random collection of features and tips seems to corroborate that theory. At any rate, here is what I have found.

  • There is a new add-on model in the ISE. From the help I read, “Windows PowerShell ISE now supports add-on tools, which are Windows Presentation Foundation (WPF) controls that are added by using the object model.” It is going to be awesome to see what some of the UI/WPF folks come up with using this new functionality.

  • The ISE now lets you edit XML natively, just like you can with PS1 files. You can even set the token colors using $PSIse.Options.XMLTokenColors.

  • The ISE now supports Intellisense. This is a great new feature. One thing that I found extremely helpful was to switch the option $psise.Options.UseEnterToSelectInCommandPanelIntellisense = $true. If you don’t do this, by default, when Intellisense pops up and you hit Enter, the command will execute without adding what you selected via Intellisense to the command. There is also an option to use Enter to select Intellisense in the script pane. I recommend setting both of these to $true.

  • There are cmdlets to mess with IP addresses, Get-NetIPAddress and Set-NetIPAddress to name just the first two obvious ones. This is just an example of the breadth of OS coverage we have. If you know PowerShell, you can manage all of Windows.

  • There is some new, simpler syntax for Where-Object and ForEach-Object. You no longer need to use braces. You can say something like get-service | where name -like svc* (see the sketch after this list).

  • Last but not least, we now have access to Control Panel items directly in PowerShell. You can get Control Panel items and start them. One example is Get-ControlPanelItem System | Show-ControlPanelItem.
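Here is a quick sketch of the simplified syntax next to the old style:

# V2 style: script block with braces and $_
Get-Service | Where-Object { $_.Name -like "svc*" }

# New V3 simplified syntax: no braces, no $_
Get-Service | Where-Object Name -like "svc*"
Get-Service | ForEach-Object Name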

So much good stuff! It feels like we might get to PowerShell V4 before we finish figuring out how to use all the goodness in V3!

PowerShell V3 Enables Windows 8 Server to be optimized for the Cloud

Windows Server 8 took the stage on Day 2 of the BUILD conference. This OS was built from the ground up to be highly optimized for private and public cloud computing. To have an OS that is optimized for the cloud, there must be a management framework built into the OS that can be used to manage hundreds of thousands of servers reliably and securely.

The Windows Management Framework is that framework, and the entire stack has been updated. Windows 8 brings with it a new version of WMI, WMI v2, and it is now an order of magnitude easier to write WMI providers. WSMan, the HTTP-based protocol used under the hood of PowerShell remoting, has been updated as well. In a presentation at BUILD, Jeffrey Snover mentioned that something gets built very well when you rely on it 100%. Microsoft is going all in on WSMan and the Windows Management Framework.

To manage an OS optimized for cloud computing, GUIs are simply not an option. Windows Server 2008 introduced the first version of Server Core, removing the GUI from the OS completely. Microsoft is starting to push Server Core as the preferred installation for Windows Server 8. To enable this, remoting must work all the time. Therefore, in Windows 8, PowerShell remoting and WSMan are on by default, out of the box.

Not only is remoting on by default, but MS has invested a great deal of engineering to make it incredibly robust. Users can now create a session, connect to it, disconnect, and reconnect at a later time. This enables many different scenarios. Just imagine you are at work and kick off a set of jobs across 250 servers that will take a while. You can disconnect from those sessions and go home. After dinner, you can check on the jobs by reconnecting to those sessions.
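In V3, that workflow looks roughly like the following sketch (the computer name is illustrative):

# Start a long-running command in a session that survives the client disconnecting
Invoke-Command -ComputerName web01 -InDisconnectedSession -ScriptBlock {
    Get-EventLog -LogName System -Newest 5000
}

# Later, from the same or another machine, reconnect and collect the results
Get-PSSession -ComputerName web01 | Receive-PSSession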

In addition to having a great robust remoting story, the Framework must also be able to cover as much of the OS as possible. MS has made it incredibly easy for teams to add PowerShell cmdlets to their product. When teams create a WMI V2 provider, there are now tools that can be used to automatically generate all the cmdlets associated with their WMI provider. Because of this, there are now cmdlets to manage nearly all aspects of Windows, including low level disk management and networking.

Finally, on top of all these requirements, the Framework must run as fast as possible. PowerShell V3 leverages the Dynamic Language Runtime (the DLR) in its execution engine, which allows frequently executed script code to be compiled on demand. The PowerShell team has seen up to a 600% increase in script execution performance because of this change.

The combination of the performance increase, a robust remoting infrastructure, and an incredibly large footprint across the OS will make Windows 8 a highly optimized cloud computing platform. It is absolutely clear that if you have invested in PowerShell, you will not be sorry. It is likely one of the best investments you have ever made in your IT Pro career.

PowerShell V3

I have the privilege of attending BUILD this year. I am super excited about Windows 8. All the BUILD attendees were given a new Samsung slate PC running a pre-release Developer Edition of Windows 8. One of the first things I did was crack open PowerShell and look at $PSVersionTable, and indeed, I was running PowerShell V3!

I’ll be sure to blog more details, but here are a couple of things I noticed right away:

  • ISE has Intellisense
  • 44 modules were available out of the box
  • 1015 cmdlets returned from Get-Command
  • There are a bunch of *-PSWorkflow cmdlets
  • There are iSCSI cmdlets
  • There are TPM (Trusted Platform Module) cmdlets

Basically, it looks like you can manage just about anything on the PC with PowerShell V3.

I’ll be posting more updates here on the blog as well as on Twitter @andys146


Use Nuget to Share PowerShell Modules in your Enterprise

Nuget is not just for developers! If you are an IT Pro, you can use it as well. Nuget is a relatively new tool from Microsoft that makes it easy for people to share and use code. Microsoft is marketing it to developers as the way to share and use open source code in Visual Studio projects. It does a great job at this and is really starting to take off in the developer community. But we IT Pros can take it and use it for PowerShell modules.

So why would you want to use Nuget to source your enterprise scripts?

  • Versioning – You can match your module version numbers with the Nuget package version
  • Dependencies – In a Nuget package, you can declare that this package depends on another package, so the client checks whether it’s installed and, if not, installs it for you
  • At least 10 other cool things that I haven’t discovered yet

Nuget is a client-server application. You can store a bunch of packages on a server, and you use the client to download and install those packages. Microsoft hosts a Nuget server that currently has around 1500 published packages that you can use.

Last week at TechEd, Scott Hanselman gave a talk on using Nuget in the enterprise. The quick version of the talk: you can host your own internal Nuget server; in fact, all you really need is a file share. The other key point is that all you need to access the packages on a Nuget server is the Nuget command-line tool.

After seeing this talk, I was thinking this could be a great tool to distribute and share PowerShell Modules with the rest of my IT Department. What I didn’t know was whether or not it would be easy to package up a module and then install it using Nuget. So I started playing. As an example, I am going to use a Module I wrote to manage my work items in TFS.

The first thing to do is download the Nuget command-line tool. After you download it, be sure to unblock it, because it was downloaded from the Internet. I put the file in a directory in my Documents folder and created an alias to it.

[Screenshot: creating an alias to nuget.exe]
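The setup amounts to something like this (the path is illustrative; Unblock-File requires PowerShell V3, so on older versions use the file’s Properties dialog to unblock it):

Unblock-File ~\Documents\nuget\nuget.exe
Set-Alias nuget ~\Documents\nuget\nuget.exe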

Once that is set up, you can start creating a package. There are a couple things you need to do if you are using the command line version of Nuget.

First, you need to create a spec file.  You can do this with the command nuget spec, as shown below. You can see that it creates an XML file.

[Screenshot: running nuget spec and the generated .nuspec XML]

There are all kinds of properties here that you can set. I am going to leave everything at the defaults for demo purposes, except that I am going to rip out the dependencies node and change the tag values to something more reasonable. This could obviously be automated very easily, but I simply used good old Notepad2.

Now the dependency node is gone and the tags are updated.

[Screenshot: the edited .nuspec file]

Once we have this, we can create the package, again using nuget.exe. I really should mention that there are conventions for structuring a package; you can read all about them on http://nuget.org. I am going to completely ignore them because all I really care about is the PowerShell module.

Nuget.exe has a command called pack. You can specify a directory to package up via the basepath parameter, along with the spec file and the output directory.

[Screenshot: running nuget pack]
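The command looks roughly like this (the paths are illustrative):

nuget pack .\tfs.nuspec -BasePath .\tfs -OutputDirectory C:\packages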

Now I have a package called tfs.1.0.nupkg. NUPKG is the extension for a Nuget package, but it is really just a zip file. In fact, you can rename it to tfs.1.0.zip and unzip the contents. Inside are all the files for my module.

OK, so now that I have a package, I need to put it somewhere people can access it and pull it down with a Nuget client. There are tons of articles on how to create your own Nuget server; see http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds.

If you just want to use a file share, you can do that. Or you can actually build a NuGet Server Web Site. I guarantee that you won’t have to write any code, but you would have to install Visual Studio Express, or get a developer to build a quick app for you. NuGet server itself is a Nuget Package. With Visual Studio installed, I had my NuGet Server up and running in under 10 minutes. 

After you set up your Nuget server, there is a directory where you can place your packages. It is appropriately named “Packages.” Here is that directory on my local instance of IIS. You can see that I have two packages here: one for the DataOnTap module and the one we just created.

[Screenshot: the Packages directory on the Nuget server]

Here is the client listing the available packages from this source.

[Screenshot: the client listing the available packages]

For the sake of demonstration, I created a folder called c:\modules and used nuget.exe to install my module there.

[Screenshot: installing the module with nuget install]
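The install command is along these lines (the source URL for my local server is illustrative):

nuget install tfs -Source http://localhost/nuget -OutputDirectory C:\modules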

The only thing wrong with this is that the folder name contains the version, and the .nupkg file is still around. This can all be cleaned up pretty easily: manually, with PowerShell, or probably with some Nuget options I haven’t found yet.
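For example, a quick PowerShell cleanup might look like this (a sketch; the paths and version are illustrative):

# Drop the leftover .nupkg and strip the version from the folder name
Remove-Item C:\modules\tfs.1.0\tfs.1.0.nupkg
Rename-Item C:\modules\tfs.1.0 -NewName tfs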

I will be testing this out this week to see what else I can come up with. I am looking forward to seeing how versioning works. Also, I want to wrap some of this functionality in advanced functions.

If you install Visual Studio and Nuget, you will also get a set of PowerShell cmdlets to manage packages. However, as of right now, the module is not a separate download. I’ll look into this and let you know what else I find. I am sure there will be at least one or two follow-up posts as I learn more about Nuget.

When Read-Host doesn’t quite cut it

Ninety percent of the time when you are writing PowerShell code, you can use parameters in advanced functions to get the data you need from a user. However, there are times that you may want to have a bit more control over the user experience. Out of the box, PowerShell provides a cmdlet called Read-Host.

[Screenshot: prompting for input with Read-Host]

From here you can use the variable in your code:

[Screenshot: using the captured variable]

This is cool, but what if you want to offer choices to the user, and customize the caption of the window in addition to the actual message? It turns out PowerShell has some prompting capabilities built in that are not exposed as direct cmdlets. If you have used -Confirm and -WhatIf on some cmdlets, you have probably seen this UI.

[Screenshot: the standard confirmation prompt]

I thought it would be pretty cool to be able to use this functionality with my own custom choices, caption, and message, so I wrote a function called New-Choice that is up on Poshcode.
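The function is built around $Host.UI.PromptForChoice. Here is a minimal sketch of the idea (not the exact Poshcode version):

function New-Choice {
    param(
        [string]$Caption = 'Confirm',
        [string]$Message = 'Are you sure?',
        [string[]]$Choices = ('&Yes', '&No')
    )
    # Build the choice descriptions; the "&" marks the hotkey letter
    $descriptions = foreach ($choice in $Choices) {
        New-Object System.Management.Automation.Host.ChoiceDescription $choice
    }
    # Returns the zero-based index of the choice the user selected
    $Host.UI.PromptForChoice($Caption, $Message, $descriptions, 0)
}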

Here are some examples of using the function.

In PowerShell.exe

[Screenshot: New-Choice in PowerShell.exe]

In ISE

[Screenshot: New-Choice in the ISE]

And even in PowerGUI

[Screenshot: New-Choice in PowerGUI]

In summary, this function provides a great way to offer a rich user experience while maintaining control over the inputs the user can provide.

NetApp PowerShell Toolkit 1.4 Released! Get-NaHyperVHost

Last Friday, NetApp released version 1.4 of their PowerShell Toolkit. They have a total of 501 Cmdlets with this release.

[Screenshot: the cmdlet count in the new toolkit release]

Their stuff just keeps getting better and better.

There are a couple of cmdlets that I wanted to highlight because they were extremely useful for me the other day. We have several 8-to-10-node Hyper-V clusters, all using NetApp and iSCSI storage. We have been moving VMs to faster disks on our NetApp. One challenge that can crop up is correlating which VMs in Hyper-V are stored on which volume or qtree on our NetApp.

We have a great ops guy who is super nitpicky about naming standards, and because of those standards, we know exactly how everything lines up, at least for the VMs that have been created in the last year or so. The problem is the legacy VMs in development and test environments that don’t adhere to our standards. This is where Get-NaHyperV comes in to save the day. This cmdlet has actually been around for a while, but with this release it now supports clustered disks, which is exactly what we needed. In addition to getting info on our CSVs and the exact location of VHDs, we were also able to enumerate exactly which NetApp volume, qtree, and LUN the VM disk resources were associated with. Absolutely brilliant!

[Screenshot: Get-NaHyperV output]

Here’s a screenshot of an example from the NetApp help on the cmdlet:

[Screenshot: the Get-NaHyperV example from the NetApp help]

There is also a more generic cmdlet called Get-NaHostDisk, which does essentially the same thing for disks that are on the SAN but not necessarily associated with Hyper-V VMs. This can be used for clustered SQL or anything else that uses shared storage.

I use these cmdlets nearly every day. I can’t tell you how much they have streamlined our processes and tooling for working with our storage. NetApp, keep up the good work!