Why teach your kid times tables when you can teach them times tables and a little bit of coding?

March 16, 2014 at 4:21 AM | Andy Schneider

So I promised my daughter a big present if she could learn all of her times tables, up to 12 x 12. Sure, we have some flash cards, and there are endless apps on every software platform for practicing math facts. But why do that when you can write your own? The side benefit is that she and I can personalize it however we want. Next up is division, but she is going to have to help me code up that functionality.







using System;

namespace PracticeMultiplication
{
    class Program
    {
        static void Main(string[] args)
        {
            // Get the largest number to multiply by, up to 12.
            // Choosing 2 practices 1 x 1 up through 2 x 12; choosing 6 practices up through 6 x 12.
            Console.Write("What's the largest number you want to multiply (1-12)? ");
            int max = Convert.ToInt32(Console.ReadLine());
            Console.WriteLine("Just type 'done' or 'q' to quit");

            string studentAnswer;
            int score = 0;                // total correct answers
            int correctAnswersInARow = 0; // correct answers in a row

            // Create the random number generator once, outside the loop,
            // so it isn't reseeded on every question.
            Random random = new Random();

            while (true)
            {
                ShowCurrentScore(score, correctAnswersInARow);

                // Generate a new problem; num1 is between 1 and max,
                // num2 is always between 1 and 12.
                int num1 = random.Next(1, max + 1);
                int num2 = random.Next(1, 13);
                Console.Write("{0} x {1} = ", num1, num2);
                studentAnswer = Console.ReadLine();

                // Check to see if we should quit
                if (studentAnswer == "done" || studentAnswer == "q")
                {
                    break;
                }

                // Convert to int so we can multiply and check the answer
                int studentNumber = Convert.ToInt32(studentAnswer);
                int answer = num1 * num2;

                if (answer == studentNumber)
                {
                    // Score and streak get bumped up if she gets it right
                    score++;
                    correctAnswersInARow++;

                    // If she gets 5 right in a row, she gets a bonus of 5 points.
                    // Modulo 5 tests for "divisible by 5".
                    if (correctAnswersInARow > 1 && correctAnswersInARow % 5 == 0)
                    {
                        score += 5;
                        WriteMessage("Bonus 5 points for getting 5 right in a row!", ConsoleColor.DarkGreen, score);
                    }
                    WriteMessage("Great Job Madeline! ", ConsoleColor.Green, score);
                }
                else
                {
                    WriteMessage("Oh bummer! Let's try another one", ConsoleColor.Red, score);
                    // Reset the streak to 0 since she got it wrong
                    correctAnswersInARow = 0;
                }
            }
        }

        private static void ShowCurrentScore(int score, int rightInARow)
        {
            Console.BackgroundColor = ConsoleColor.Blue;
            Console.ForegroundColor = ConsoleColor.Yellow;
            Console.WriteLine("Your score is {0}                        ", score);
            Console.WriteLine("You have got {0} correct answers in a row", rightInARow);
        }

        private static void WriteMessage(string message, ConsoleColor color, int score)
        {
            Console.ForegroundColor = color;
            Console.WriteLine(message);
            Console.WriteLine("Your score is {0}", score);
        }
    }
}


Posted in: C# | Parenting | Kids


Active Directory Search that works - Ambiguous Name Resolution

March 5, 2014 at 10:40 PM | Andy Schneider

I am not a big fan of having to specify filters using the syntax prescribed for Get-ADUser. Ambiguous Name Resolution (ANR) is an old API that allows you to query against multiple attributes at the same time. There is some more information on ANR here: http://support.microsoft.com/kb/243299

By default, the following attributes are set for ANR:

  • GivenName
  • Surname
  • displayName
  • LegacyExchangeDN
  • msExchMailNickname
  • RDN
  • physicalDeliveryOfficeName
  • proxyAddress
  • sAMAccountName

It turns out you just need to pass in an LDAP query. Once you get the list of results, you can pipe them into the Get-ADUser cmdlet to get the user objects as you would expect. All we have to do is build an LDAP filter and query against an attribute called ANR. This will return all objects that have an attribute from the list above that matches the user you pass in. You can think of it as a wildcard search on steroids.

Function Get-User {
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string]$User
    )
    BEGIN { Import-Module ActiveDirectory }
    PROCESS {
        $filter = "(&(ObjectClass=User)(ANR=$User))"
        Get-ADObject -LDAPFilter $filter |
            Get-ADUser
    }
}
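With the function loaded, a quick search looks like this (the name and the selected properties are just examples):

```powershell
# ANR matches against first name, surname, display name, alias, and so on
Get-User -User "schneider" | Select-Object Name, SamAccountName
```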

Hope this is helpful.



FIM and Orphaned Expected Rule Entries (ERE’s)

June 24, 2013 at 7:51 PM | Andy Schneider

I just started working with FIM 2010 R2 in a development environment. Before you read any further, please take this into consideration. All of this has come from a development environment. I would never do this in a production environment. I would probably go as far as to say that my level of fear and caution for “Hey, what does this button do?” type of scenarios is probably a little too low.

With that understanding, let’s get to the point. I was setting up a typical scenario with a SQL MA and an Active Directory MA. The SQL database has approximately 50,000 entries that I needed to pull into the metaverse. I like to learn by doing. Yeah, I read a little bit before I started, but clearly not enough. I was still trying to figure out if I was going to use classic rules or if I was going to need to use the FIM portal to create Synchronization Rules. After several imports, exports, and syncs, and then messing with sync rules, I finally decided that I actually did need to use the FIM portal and Synchronization Rules. I wanted to start with a completely clean slate. So I went in and deleted everything, including MA’s.

The problem came up when I deleted the FIM service Management Agent connector space. No matter what I did, every time I did an import, it would pull in about 150,000 Expected Rule Entries, or ERE’s.

Poking around in the FIM Script Box, I found some PowerShell code that could delete these for me. Basically, it does a search for all the orphaned ERE’s using the FIM web service under the hood of the Export-FIMConfig PowerShell cmdlet. Well, this is great if you are looking at a couple hundred objects. But with 150,000 objects, I was looking at more like days.

So I decided to poke around a bit more. I figured that at the end of the day, these objects have to be somewhere in the database. If you are getting worried now, don’t. I promise it gets better. Looking at the FIMService database, I found one table that was particularly interesting. There is a Fim.Objects table. Sweet!


Here’s where it got a little nuts. Looking in the connector space, I found the object ID of ONE of my orphaned ERE’s. It had an objectID of something like “B8A72DEB-6A7F-482F-81A6-8DD66D91D6EA.”

So I ran a quick query like the following

SELECT * FROM Fim.Objects
WHERE ObjectID = 'B8A72DEB-6A7F-482F-81A6-8DD66D91D6EA'

Here I found out that it had an ObjectTypeKey of 11. Looking at the table, that column references the ObjectTypeInternal table.


So the next thing I did was run a similar query and found, astonishingly, that there were about 150,000 ERE objects with ObjectTypeKey = 11.

Using this query, I was able to find the ObjectKeys of all the objects I needed to delete.

SELECT * FROM Fim.Objects
WHERE ObjectTypeKey = 11

Now I needed a way to delete these suckers. Looking over the database, I was wondering if there might be some stored procedure I might be able to use.

Lo and behold, sitting there was something called debug.DeleteObject.


Never mind the “Debug” prefix on this stored procedure. Using a little Excel magic, I created a huge, long SQL command (what can I say, I am not a SQL guy) that looked something like this: I took the ObjectKey column from the last query, dumped it into Excel, and then used some string manipulation to generate my SQL script.



Here’s what eventually went into SQL Query Analyzer – the actual script was 150,000 of these calls.
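The generated script was just the same stored procedure call repeated once per ObjectKey, something like the sketch below (the key values and the exact parameter name are illustrative; check the procedure's definition in your own FIMService database before running anything like this):

```sql
EXEC [debug].[DeleteObject] @ObjectKey = 35017;
EXEC [debug].[DeleteObject] @ObjectKey = 35018;
EXEC [debug].[DeleteObject] @ObjectKey = 35019;
-- ... one EXEC per orphaned ERE ObjectKey
```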


Running this took only a little over 2 hours. Way faster than using Import-FIMConfig and Export-FIMConfig. Using the web service abstraction layer is great and all, but sometimes you just need to bypass as much as possible.

So the next time you are working in a development environment and feel like taking a chance on blowing off your toe, or maybe your whole foot, this just might help!

Posted in: PowerShell | FIM


Using Claims Authorization Rules in ADFS 2.0

December 19, 2012 at 2:10 AM | Andy Schneider

I am willing to bet that 90% of the time you have created claims, you never really noticed that there are actually 3 tabs for claims that you can use.


Most of the time, we are just messing with the “Issuance Transform Rules.” When you walk through the “Add a Relying Party” Wizard, you may not notice the claim that gets created automatically in “Issuance Authorization Rules.”

Here is a screen shot of the rule that says “Allow everyone”


If we look at the claim, we can see that it is of type permit and the value is true.


You can also create a rule that says “Deny someone with this value of a claim.” For instance, I can add a rule that says “Deny access to this Relying Party if anyone tries to log in with a claim of type Name whose value looks like ‘Andy’.”


By the way, the =~ syntax is saying If a claim of type Name has a value that matches the regular expression, then issue the claim of type deny with a value of “DenyUsersWithClaim.”

You can also issue a strict deny all by doing a straight deny like this

=> issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");

So this is all great, but what if you need to combine some of these rules and perhaps make an exception for a handful of users? When I first started messing with this, I figured these authorization claim rules would act like a firewall policy: processing would start at the top, act on the first rule that matched, and then stop. THIS IS NOT THE CASE. What really made me think this was the case is that there is an option to rearrange the order of the claims.


You wish it were so easy! It turns out that if there is a “Deny” rule that matches a user’s claim anywhere, it will always win, no matter where it is on the list. So in this case, even though permit-all is first, if I have a claim that says my “Name” is “Andy,” then I will get an access denied error.

We have to get a little tricky. Let’s say we want to deny everyone from a particular identity provider except for 3 separate users. How would we go about it? Here is one way.

First, let’s figure out an easy way to determine, at the claims authorization rule, which IDP the user came from. If you go to “Trust Relationships | Claims Provider Trusts,” you will see a trust for Active Directory and any other IDPs that you have added. For the sake of demonstration, let’s say you have one called Contoso. Right-click on the Contoso IDP and select “Edit Claim Rules.”

You can add a claim using a custom rule. Choose “Send Claims Using a Custom Rule.” You can use any namespace that you want. I would suggest using one namespace for custom claims and sticking with it. For this demo, I am using http://sso.contoso.com/users

This rule will add a claim to all users that log in from the Contoso STS that says “Company” = “Contoso”
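In the claims rule language, a rule along these lines does the job (the namespace is the one chosen above; with no condition, the rule issues the static claim to everyone coming through the Contoso trust):

```
=> issue(Type = "http://sso.contoso.com/users/Company", Value = "Contoso");
```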



All right so now we have the claim for people from Contoso, but we want to create exemptions for foo@contoso.com, bar@contoso.com, and andy@contoso.com.

To do this, we need to go back to our Relying Party Authorization Rules and add a new custom rule.



Here’s how this rule breaks down. For more detail, I would highly suggest reading through the claims rule language primer.

Here’s the actual claim rule

c1:[Type == "http://sso.contoso.com/users/Company", Value =~ "^(?i)Contoso$"]
&& c2:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", Value =~ "(?<!foo|bar|andy)@contoso.com"]
=> issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "DenyUsersWithClaim");

Here’s the basic logic of the claim rule

If ((claim.Company matches “Contoso”) AND (claim.upn –NOTEQUAL (andy@contoso.com OR foo@contoso.com OR bar@contoso.com))) Then (Issue “DenyUsersWithClaim”)

Perhaps the trickiest part here is the regex that filters out the exceptions. The “(?<! … )” construct is a negative lookbehind: it matches “@contoso.com” only when it is not immediately preceded by one of the listed names. The “|” symbol is the alternation operator – functionally the “-or” operator.
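Since ADFS uses the .NET regex engine, and so does PowerShell, you can sanity-check the lookbehind with the -match operator before committing the rule (addresses here are examples):

```powershell
# Denied: "someone" is not one of the exempt prefixes, so the rule matches
"someone@contoso.com" -match "(?<!foo|bar|andy)@contoso.com"   # True

# Exempt: the negative lookbehind sees "andy" right before the @, so no match
"andy@contoso.com" -match "(?<!foo|bar|andy)@contoso.com"      # False
```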

Now, all users from Contoso will get denied access except for the 3 that match the regex.

Posted in: ADFS | Claims | Authorization


Using Enterprise AD Credentials to Manage Azure Access Control Service

December 15, 2012 at 2:10 AM | Andy Schneider

ACS is Azure’s Access Control Service. It is a cloud-based Secure Token Service (STS). With the recent advent of Windows Azure Active Directory, and with ACS being offered for free, I am envisioning more and more enterprises beginning to leverage these services.

Typically, when you create an Azure ACS namespace, you log in with a Windows Live ID and create/delete/manage services. However, if you have an Identity and Access Management team in your enterprise, you may want to have a bit more control over who can manage ACS, and also ensure that they are using their AD credentials rather than people’s personal Windows Live accounts. This is now completely possible using on-premises ADFS.

This post assumes you have built out and installed an ADFS infrastructure and are familiar with adding Relying Parties and using claims.

To create a new ACS namespace, you will need to go to https://manage.windowsazure.com, log in to the portal, and then click on your name and choose Previous Portal.


In the old portal, you can manage Service Bus, Access Control, and caching.


Click in there and create a new ACS namespace. Once the namespace is created, you can go in and manage “Identity Providers.”

Typically, this is where you add the identity providers that you will use to authenticate users to your relying parties. Live ID is there by default, and you can add more, like Google, Facebook, and Yahoo!


The one you need here is WS-Federation ID Provider (ADFS 2.0)

From there you can give the URL of your ADFS federation metadata. It is typically something like https://sts.example.com/FederationMetadata/2007-06/FederationMetadata.xml

You must also add ACS as a Relying Party to your ADFS instance to establish the trust.

Now that you have added your ADFS service as a trusted identity provider, you can use ADFS to authenticate users to your relying parties.

However, that is not the end goal in this scenario. We want to set up ACS so that we can log in to the management portal with our Active Directory Credentials. Here’s what else you need to do.

In ACS, go to Administration and choose Add Administrator.


The one thing you will need to do is specify the claim and value that has permission to manage the portal.

I would suggest you use a Role claim and then in ADFS on your side, you can map a group in AD to that role claim.

Here’s the role claim


The value you specify is the value of the claim you set in ADFS when you add the claim rule to map a claim to a Group Membership. An example would be Domain\ACSAdministrators
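In ADFS, that mapping can be done with the standard “Send Group Membership as a Claim” rule template; behind the scenes it generates a rule along these lines (the group SID is a placeholder, and Domain\ACSAdministrators is the example value from above):

```
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
   Value == "S-1-5-21-xxxxxxxxxx-xxxxxxxxxx-xxxxxxxxxx-1104", Issuer == "AD AUTHORITY"]
=> issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role",
   Value = "Domain\ACSAdministrators", Issuer = c.Issuer,
   OriginalIssuer = c.OriginalIssuer, ValueType = c.ValueType);
```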

To test this out, you should add yourself to the ACSAdministrators group you created and then try and authenticate to the management URL for your ACS Namespace. It will be something like https://get-powershell.accesscontrol.windows.net/v2/mgmt/ From there, you will be prompted for which ID Provider you want to log in with. Choose your ADFS provider, log in with your corporate credentials, and you will have access to manage ACS.



A New Blog !

August 9, 2012 at 5:19 AM | Andy Schneider

Hey folks. I wanted to let you know I have started a new blog over at The IT Fishing Pole. The basic concept of the Fishing Pole is the old saying “Give a man a fish, feed him for a day. Teach him how to fish, feed him for a lifetime.. or at least until he’s sick of seafood.”

I will still keep Get-PowerShell around and write occasional posts here but I am starting to get into a lot more technologies including Identity Management, ADFS and  PKI to name a few, and I wanted a more generic platform for these articles. Plus, I have to say, there is something that just feels good about starting something brand new.

Thanks for reading!




Windows 8 Active Directory PowerShell Provider

October 27, 2011 at 7:03 PM | Andy Schneider

One of the most potentially useful features of the AD tools provided by Microsoft is the AD PowerShell Provider. A Provider in PowerShell allows a user to interact with a data structure similarly to how they would interact with a file system. You can change directories, list items, create new items, and delete items. This is a really good model for any kind of hierarchical data. AD is a great example. I like to use PowerShell providers to find and navigate through data very quickly.

The problem I have with the AD Provider is the names that are used when you navigate into Active Directory. In this case, a picture is worth a thousand words.


What do you mean the path doesn’t exist? According to the output of my last ls command, I should be able to cd into windev, configuration, and 3 others. It turns out you don’t use the value of the “Name” property. You have to cd into the “Distinguished Name.”


To me, that is very unintuitive and does not align well with the PowerShell mantra, “Think,” “Type,” “Get.” Well, in the Windows Server 8 Developer Preview, you can address this issue. The default AD: drive still exists and behaves in the same way, but it doesn’t have to stay that way.

You can remove the AD PSDrive and create your own new one. The trick here is to use the –FormatType parameter with the value “Canonical.” This changes how the drive is set up and which names the provider binds to.
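A sketch of the commands (the drive name is up to you; –FormatType is a dynamic parameter that the AD provider adds to New-PSDrive, and the empty Root points the drive at the provider's default naming context):

```powershell
Import-Module ActiveDirectory

# Drop the default drive and recreate it with canonical naming
Remove-PSDrive -Name AD
New-PSDrive -Name AD -PSProvider ActiveDirectory -Root "" -FormatType Canonical

# Now the names shown by ls are the same names you can cd into
Set-Location AD:
Get-ChildItem
```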


Now we can navigate through AD much more easily.




Random Collection of New Features found while Exploring PowerShell V3

September 18, 2011 at 3:46 AM | Andy Schneider

I spent some time this afternoon playing around with PowerShell V3 and came across a (very random) set of new features and functionality that I thought I would highlight here. My wife is pretty convinced I have Adult Attention Deficit Disorder. This random collection of features and tips seem to corroborate that theory. At any rate, here is what I have found.

  • There is a new add-on model in the ISE. From the help I read: “Windows PowerShell ISE now supports add-on tools, which are Windows Presentation Foundation (WPF) controls that are added by using the object model.” It is going to be awesome to see what some of the UI/WPF folks come up with using this new functionality.

  • The ISE now lets you edit XML natively, just like you can with PS1 files. You can even set the token colors using $psISE.Options.XmlTokenColors.

  • The ISE now supports IntelliSense. This is a great new feature. One thing that I found extremely helpful was to switch the option $psISE.Options.UseEnterToSelectInCommandPaneIntellisense = $true. If you don’t do this, by default, when IntelliSense pops up and you hit Enter, the command will execute without adding what you selected via IntelliSense to the command. There is also an option to use Enter to select IntelliSense in the script pane. I recommend setting both of these to $true.

  • There are cmdlets to mess with IP addresses: Get-NetIPAddress and Set-NetIPAddress, to name just the first two obvious ones. This is just an example of the breadth of OS coverage we have. If you know PowerShell, you can manage all of Windows.

  • There is some new, simpler syntax for Where-Object and ForEach-Object. You no longer need to use braces. You can say something like get-service | where name -like svc*

  • Last but not least, we now have access to Control Panel items directly in PowerShell. You can get Control Panel items and start them. One example is Get-ControlPanelItem System | Start-ControlPanelItem

So much good stuff… It feels like we might get to PowerShell V4 before we finish finding out how to use all the goodness in V3!



A new PowerShell V3 Cmdlet - Invoke-WebRequest

September 18, 2011 at 1:00 AM | Andy Schneider

Playing around with PowerShell V3, I just came across an incredibly powerful new cmdlet called Invoke-WebRequest. This cmdlet returns the content of a web site organized into properties. The returned Microsoft.PowerShell.Commands.HtmlWebResponseObject exposes properties such as Content, RawContent, Headers, StatusCode, Links, Images, Forms, InputFields, Scripts, AllElements, and ParsedHtml.

With this cmdlet, I was able to get all the links on my blog:

$blog = invoke-webrequest get-powershell.com
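From there, pulling just the URLs out of the Links collection is one line:

```powershell
$blog.Links | Select-Object -ExpandProperty href
```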


This is really going to make it super easy to script against web pages and parse them when you don’t have a web service to hit. Again, PowerShell making it easy to reach into a messy world.

Posted in: PowerShell | V3 | Web


PowerShell V3 Enables Windows 8 Server to be optimized for the Cloud

September 15, 2011 at 6:44 PM | Andy Schneider

Windows Server 8 took the stage on Day 2 of the BUILD conference. This OS was built from the ground up to be highly optimized for private and public cloud computing. To have an OS that is optimized for the cloud, there must be a management framework built into the OS that can be used to manage hundreds of thousands of servers reliably and securely.

The Windows Management Framework is that framework. The entire stack has been updated. Windows 8 brings with it a new version of WMI, WMI v2. It is now an order of magnitude easier to write WMI providers. WSMan, the HTTP-based protocol used under the hood of PowerShell remoting, has been updated as well. In a presentation at BUILD, Jeffrey Snover mentioned that something gets built very well when you rely on it 100%. Microsoft is going all in on WSMan and the Windows Management Framework.

To manage an OS optimized for cloud computing, GUIs are simply not an option. Windows 2008 introduced the first version of Server Core, removing the GUI from the OS completely. Microsoft is starting to push Server Core as the preferred installation for Windows Server 8. To enable this, remoting must work all the time. Therefore, in Windows 8, PowerShell remoting and WSMan are on by default, out of the box.

Not only is remoting on by default, but MS has invested a great deal of engineering into this technology to make it incredibly robust. Users can now create a session, connect to it, disconnect, and reconnect at a later time. This enables many different scenarios. Just imagine you are at work and kick off a set of jobs across 250 servers that will take a while. You can disconnect from those sessions and go home. After dinner, you can connect back in and check on the jobs by reconnecting to those sessions.

In addition to having a great robust remoting story, the Framework must also be able to cover as much of the OS as possible. MS has made it incredibly easy for teams to add PowerShell cmdlets to their product. When teams create a WMI V2 provider, there are now tools that can be used to automatically generate all the cmdlets associated with their WMI provider. Because of this, there are now cmdlets to manage nearly all aspects of Windows, including low level disk management and networking.

Finally, on top of all these requirements, the Framework must run as fast as possible. PowerShell V3 leverages the Dynamic Language Runtime (DLR) in its execution engine. This allows frequently executed script code to be compiled on demand and then run. The PowerShell team has seen up to a 600% increase in script execution performance because of this change.

The combination of the performance increase, a robust remoting infrastructure, and an incredibly large footprint across the OS will make Windows 8 a highly optimized cloud computing platform. It is absolutely clear that if you have invested in PowerShell, you will not be sorry. It is likely one of the best investments you have ever made in your IT Pro career.

Posted in: PowerShell | V3 | Cloud | Windows Server 8