Wednesday, June 29, 2011

IAM project painpoints

In my experience IAM projects generally have severe pain points in three areas:

  1. Processes
  2. Data
  3. Technology

On the processes side it is often unclear whether the new system should reflect how things should be done or how things are actually done. There is also a built-in conflict between operations (things should be done as simply and straightforwardly as possible) and audit/compliance/security (the processes should provide adequate safeguards). One safe way to fail an IAM project is to not get your processes defined and accepted by the key stakeholders at an early stage of the project, but rather to discover this issue during UAT.

If your data is dirty it doesn't really matter how good your provisioning and/or access logic is. Data ownership is often a huge issue, as the owners, if they even exist, are usually blissfully unaware of how bad the data actually is. Data issues are interesting because there are many different kinds of data problems. In some cases the data lacks clear referential integrity between different systems, which will hit you during initial load. Another data issue, which may surface if you use user names to generate things like logins and email addresses, is that names themselves can cause problems. In many cases you also need a reporting structure to be able to communicate with a user's manager. If you don't really know who the manager is, which isn't that uncommon among contractors, you will have a problem.
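To make the name problem concrete, here is a toy sketch of a login generator of the kind many provisioning systems use. The rules (surname plus given name, ASCII only, eight characters) are my own illustrative assumptions, not any particular product's algorithm, but they show where real names break naive generation logic:

```python
import re
import unicodedata

def make_login(first, last, max_len=8):
    """Derive a login id from a name -- a naive illustration of how
    real-world names break simplistic generation rules."""
    def ascii_fold(s):
        # Strip diacritics, e.g. 'Söderström' -> 'Soderstrom'
        return unicodedata.normalize("NFKD", s).encode("ascii", "ignore").decode()
    # Drop anything that isn't a letter: apostrophes, hyphens, spaces
    clean = re.sub(r"[^a-z]", "", (ascii_fold(last) + ascii_fold(first)).lower())
    if not clean:
        # Names written entirely in non-Latin scripts fold away to nothing
        raise ValueError("name yields an empty login")
    return clean[:max_len]

print(make_login("Martin", "Söderström"))  # soderstr
print(make_login("Paddy", "O'Brien"))      # obrienpa
```

Even this tiny example silently mangles diacritics, drops apostrophes and hyphens, and fails outright on non-Latin scripts; add duplicate handling and legacy naming conventions and you see why dirty name data hurts.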

Finally, the technology part is about having a vendor that has experience not only in standing up the technology itself but also in integrating it with the target applications. It is not uncommon to spend 2-3 weeks implementing the technical part of an integration the first time you do it, while it takes 2-3 days (or even 2-3 hours) the second time. Experienced, high-quality technical resources are key to a quick and efficient implementation, but right now there are many more projects than qualified engineers and architects.

Sunday, June 26, 2011

JIT provisioning - the compliance view

JIT provisioning gives you significant advantages in operational agility, as the cost of integrating provisioning with an application, measured in time and effort, becomes a lot smaller than with the conventional provisioning approach. As always there are of course downsides with JIT provisioning, so let's talk about the issues and how to mitigate them.

The reason provisioning systems exist is basically to make onboarding, offboarding and maintenance of the access profile of the corporate user more efficient. The efficiency gain comes partly from automating the actual provisioning and deprovisioning operations and partly from automating compliance activities (who has access to what). It is clear that JIT addresses basic operations in an efficient manner, but what about compliance?

Conventional provisioning systems offer the ability to see what a user has access to and also why the user has access to these resources. The answer to the why question may be "because the user is an employee, the provisioning policy dictates that resource X should be granted" or "the user's manager raised a request for resource Y and the resource owner granted it". Some provisioning systems also support access recertification ("on May 15, 2011 the user's manager attested that the user should have access to resource Y"). The access information is often exposed through reporting functions and/or a pretty web interface so auditors can get the information they need without having to understand the inner workings of the provisioning system.
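The "what plus why" record a conventional system keeps can be sketched as a simple data structure. This is a hypothetical illustration of the audit trail idea, not any vendor's schema; the field names and reason codes are my own:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Grant:
    """One access grant with its provenance -- the 'why' an auditor asks for."""
    user: str
    resource: str
    reason: str   # hypothetical codes: "policy", "request", "recertified"
    detail: str   # human-readable provenance
    when: date

# A toy audit trail mirroring the examples in the text above
grants = [
    Grant("jdoe", "X", "policy", "employee provisioning policy", date(2011, 3, 1)),
    Grant("jdoe", "Y", "request", "requested by manager, approved by owner", date(2011, 4, 2)),
    Grant("jdoe", "Y", "recertified", "manager attested continued need", date(2011, 5, 15)),
]

def why(user, resource):
    """Answer the auditor's 'why does this user have this resource?'"""
    return [g.detail for g in grants if g.user == user and g.resource == resource]

print(why("jdoe", "Y"))
```

The point is that every grant carries its justification with it, which is exactly the record that evaporates in a pure JIT model.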

In the JIT world things get a bit more complex. In essence the authorization is based on what the party on the other end claims to be true. In its simplest form, anyone who comes over from the partner application would have full access to your application. In a more complex situation you may have the partner sending you either raw user information attributes (user Y is in cost center X) or some form of role attributes (user Y has the role of broker level two). The application then makes an authorization decision based on this information. This two-tiered authorization model makes the auditor's life substantially harder, but there are ways to increase transparency (i.e. use XACML instead of embedding the authorization decision in code).
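The two tiers can be sketched in a few lines. This is deliberately a hand-rolled check, not XACML, to show what the embedded-in-code variant looks like; the asserted attributes, resource names and role-to-resource policy are all hypothetical:

```python
# Tier one: attributes the partner asserts about the user (we must trust these)
ASSERTED = {"user": "y", "cost_center": "x", "role": "broker-level-2"}

# Tier two: the application's own mapping from asserted roles to resources.
# In code like this, the policy is invisible to an auditor; externalizing
# it (e.g. as XACML) is what makes it reviewable.
POLICY = {
    "trade-entry": {"broker-level-2", "broker-level-3"},
    "settlement":  {"back-office"},
}

def is_authorized(assertion, resource):
    """Local authorization decision based solely on the partner's claims."""
    return assertion.get("role") in POLICY.get(resource, set())

print(is_authorized(ASSERTED, "trade-entry"))  # True
print(is_authorized(ASSERTED, "settlement"))   # False
```

Notice that the application never verifies the role claim itself; that is precisely the trust the auditor has to reason about.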

Even with transparency measures in place, answering the questions "who has access to resource X" and "what resources does user Y have access to" becomes really tricky in a JIT world. If you also need to answer the why questions you are in real trouble. It is going to be really interesting to see which vendor in the access governance space will be first to address this need.

Tuesday, June 21, 2011

JIT provisioning - the application view

In JIT provisioning I looked at how you could create a just-in-time provisioning system. In that posting I discussed the case from the identity hub's point of view. Now let's take the other viewpoint and be the app instead.

As the app, you have basically made the choice to trust that the identity hub has done a good job with authentication and authorization. You don't really have any other choice than to trust the hub. If you are an application that doesn't need to persist state between sessions, your life becomes very simple: you serve the content based on the information provided in the request.

On the other hand, if you need to persist state you basically need to create a new account every time someone with a new primary key attribute shows up. You would also need some kind of mechanism to invalidate accounts that haven't been used for a while, as you would otherwise just accumulate active accounts indefinitely. The disablement could be done through straight ageing (no usage for one month results in the account being put in disabled status or perhaps even deleted) or by querying the identity hub. The query could either be a delta recon (what has been disabled/deleted since I last asked) or a full recon (get all accounts from the hub and see which accounts are present on your side but not on the hub side).
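A full recon plus straight ageing boils down to two set operations. A minimal sketch, with made-up account names and dates, assuming the hub can hand you its full account list and that the local store tracks last-used dates:

```python
from datetime import date, timedelta

# Hypothetical local account store: login -> date of last use
local = {"a1": date(2011, 6, 20), "a2": date(2011, 4, 1), "a3": date(2011, 6, 1)}

# Full recon: the complete set of accounts the hub still knows about
hub = {"a1", "a3"}

# Accounts present locally but unknown to the hub -> candidates for disablement
orphans = set(local) - hub

# Straight ageing: no usage for a month means disable (or even delete)
today = date(2011, 6, 26)
stale = {login for login, last_used in local.items()
         if last_used < today - timedelta(days=30)}

print(sorted(orphans | stale))  # ['a2']
```

A delta recon would replace the full set comparison with a question like "what changed since timestamp T", which is cheaper but requires the hub to keep a change log.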

One interesting aspect of this is that in a company-to-company situation it should be of interest to the hub company to be able to show the application partners what the hub authorization logic really is, as they are in fact trusting the hub blindly. This is a very interesting use case for XACML, as it is much easier to review some XACML than hundreds or even thousands of lines of Java/C#.

Monday, June 6, 2011

JIT provisioning

Let's take a break from the checklists and take a look at another interesting subject: Just In Time (JIT) provisioning.

Over the last couple of years SAML has emerged as the de facto standard for federated authentication and authorization. If you are working with a partner, the first question is usually "Do you support SAML?".

Incoming SAML makes it possible to essentially outsource the process of authentication and authorization to a business partner. The partner vouches for the identity of the user and you can essentially use this information to give the user access to your system. This solves the runtime part, but in most cases you still need a "back channel" provisioning process. Getting a SAML assertion telling you that the user "msandr01" would like to log in to your application is good, but most applications need more information to create a working system account.
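What the app-side JIT step amounts to can be sketched as follows. The attribute names and the dictionary-based "assertion" are hypothetical stand-ins for a parsed SAML assertion, not a real SAML library API:

```python
# Hypothetical local account store, keyed on the asserted subject
accounts = {}

def jit_provision(assertion):
    """Create a local account on first sight of a subject.

    A bare NameID like "msandr01" is rarely enough; the extra attributes
    must arrive in the assertion itself or via some back channel.
    """
    login = assertion["name_id"]
    if login not in accounts:
        accounts[login] = {
            "email": assertion.get("email"),
            "display_name": assertion.get("display_name"),
        }
    return accounts[login]

acct = jit_provision({"name_id": "msandr01",
                      "email": "msandr01@example.com",
                      "display_name": "Example User"})
print(acct["email"])  # msandr01@example.com
```

If the partner only sends the NameID, the `email` and `display_name` fields come back empty, which is exactly the gap the back-channel provisioning process has to fill.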

Nishant Kaushik published a very good four part series of blog postings on this subject about a year ago that I highly recommend. I ran into the problem in a discussion at Wisegate a couple of weeks ago and I am also looking into the problem for a couple of use cases at work.

The use case we talked about at Wisegate was a bank that had outsourced all of their customer facing applications to various vendors. One vendor did retail banking, another brokerage, a third investment banking etc. All of the different vendors had their own user repositories and own SSO solutions so a single bank customer could have multiple logins and multiple passwords and would have to login to each application separately. The business of course didn't like this and wanted an SSO solution.

The true high-tech JIT solution would be to use a federated authentication product such as IBM TFIM and do SAML with the apps all at run time. The hub would be truly lightweight and not persist any information about the users. A typical user interaction with the hub would look like:
  1. Take the request from the user
  2. Figure out which application this user is trying to login to
  3. Figure out if the user has any account in any of the apps by asking the apps
  4. Authenticate the user
  5. Authorize the user
  6. Create the SAML assertion and send it to the app
  7. Act as a reverse proxy in the interaction between the user and the app
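The steps above can be sketched as a skeleton of the hub's request handler. Every helper here is a hypothetical stub standing in for real federation machinery (directory lookups, an authentication service, a SAML library, a reverse proxy); the app and user names are made up:

```python
# Stub data: which users hold accounts in which outsourced app
APPS = {"retail": {"alice"}, "brokerage": {"alice", "bob"}}

def has_account(app, user):          # step 3: ask the app
    return user in APPS.get(app, set())

def authenticate(request):           # step 4: stubbed-out credential check
    return request.get("password") == "secret"

def authorize(user, app):            # step 5: stubbed-out policy check
    return True

def build_assertion(user, app):      # step 6: stand-in for a real SAML library
    return {"subject": user, "audience": app}

def handle_login(request):
    app = request["app"]                         # step 2: route the request
    if not has_account(app, request["user"]):    # step 3
        return None
    if not authenticate(request) or not authorize(request["user"], app):
        return None                              # steps 4-5
    # Step 7 (acting as a reverse proxy) is omitted; we just return
    # the assertion that would be handed to the app.
    return build_assertion(request["user"], app)

print(handle_login({"app": "brokerage", "user": "bob", "password": "secret"}))
```

Note that step 3 is a live round trip to the apps on every login, which is where the latency concern below comes from.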

In theory this is a great idea but there are some practical considerations.

One issue is latency. Given that this is an online, person-facing transaction, the login should ideally not take more than three seconds (or so), and if we end up pushing 10-15 seconds the business will start screaming. The SSO hub and the apps are physically in different places, which means that you will get latency even if you have lightning-fast machines processing the requests.

Another issue is complexity. This requires quite a lot of bleeding edge technology and there are plenty of things that could go wrong.

In the end the discussion landed on the conclusion that it is probably safer to go for a more conventional approach where the user populations of the apps are reconned back to a central repository in the SSO hub using a meta directory product. SAML would still be used to communicate assertions to the applications, but this solution is a lot faster and eliminates a lot of the unknowns. This solution pattern is very common among a number of vendors, including Symplified.

Not as high tech and cool, but it is guaranteed to work and won't cause hard-to-fix latency-based performance problems.

Sunday, June 5, 2011

Checklist manifesto part two - requirements gathering

This posting is a continuation of Checklist manifesto. In that post I discussed how the concept of checklists can be applied to IAM projects at the overall delivery methodology level. Let's talk a bit about how checklists can be used in the different parts of the delivery methodology.

Let's assume that you are using a classical waterfall. This gives you the following steps:

  1. Requirements gathering
  2. Design
  3. Implementation
  4. Test
  5. Go live
  6. Maintenance

In this post I will focus on how you would use checklists in the requirements gathering phase.

One thing I have noticed across my IAM implementations is that if you take a use case driven approach, most provisioning projects will contain almost the same use cases. Depending on how you slice and dice your cases and what your scope is, you usually end up with 30-50 core use cases which tend to cover the same subjects.

The use cases may be very different, as each and every company seems to like to do things its own special way, but you will cover the same overall business process.

This means that a mature implementation organization should be able to come up with a list of use cases that can be given to the more junior resources who will perform the actual requirements gathering. If you are a customer, I would definitely include this as a question in the RFP. If you are a junior resource, I would speak to your seniors and check if they have a list of use cases on their hard drives, or if they can quickly create one based on their previous projects.

More about my experiences in the lovely world of requirements gathering can be found in the post UAT and requirements gathering.