Saturday, September 25, 2010

RBAC vs ABAC

Over the last week there has been a very interesting discussion around role based access control vs attribute based access control. My personal opinion is that there really isn't any razor-sharp division between the two paradigms and that in many cases a blended solution is what gives the best support for the business case. Let me demonstrate what I mean with an example.

The business case is that a VPN system needs to be converted from "if you can authenticate then you have full access" to a system that only gives access to the systems the user needs. The idea is that you can somehow define groups of users that need access to certain systems. For now the only user groups that have been defined are the different company divisions and consultants vs employees, but the system needs to be flexible enough to support other group definitions further down the line.

The access control device supports definition of access groups through either AD groups or AD attributes. The AD attributes are analyzed in an XACML-light module (if attribute company="truck manufacturing" and user_type="employee" -> user_is_member_of_access_group_employee_in_truck_manufacturing). Alternatively the access control device can simply look for users that are members of the AD group employee_in_truck_manufacturing and then apply the access rules for this group.

This means that at first glance you have a choice between a more ABAC oriented approach (access device looks at attributes and makes a decision) or a more RBAC oriented approach (access device looks at a group membership/role).

The complexity here is how do you populate the attributes and/or place the user in the correct AD group?

Assuming that the attributes exist somewhere, e.g. in the HR system, you can populate the AD record using your favorite provisioning tool at AD account creation time and then maintain the attributes either using the provisioning tool or a separate metadirectory product.

The AD group membership can be solved by implementing a rule-based AD group allocation module in the provisioning product. The provisioning product simply evaluates rules written in XACML or another suitable language (LDAP filters, simple boolean logic, regexp) and then provisions AD group memberships.
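
As an illustration, here is a minimal sketch of what such a rule could look like when expressed as simple boolean logic in Java. The attribute names (company, user_type) and the group name come from the example above; the class itself is purely hypothetical and not tied to any particular provisioning product.

    // Minimal sketch of a rule-based group allocation check (illustrative only).
    // The equivalent LDAP filter would be roughly:
    //   (&(company=truck manufacturing)(user_type=employee))
    import java.util.Map;

    public class GroupAllocationRule {

        // Returns the AD group the user should be placed in, or null if no rule matches.
        public static String evaluate(Map<String, String> userAttributes) {
            String company = userAttributes.get("company");
            String userType = userAttributes.get("user_type");

            // Simple boolean logic version of the XACML-light rule in the text
            if ("truck manufacturing".equalsIgnoreCase(company)
                    && "employee".equalsIgnoreCase(userType)) {
                return "employee_in_truck_manufacturing";
            }
            return null;
        }
    }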

Now the conclusion you can draw from this is that, in the use case where the access control decision is made based on user attributes, the essential difference between the two choices is only the exact location of the evaluation logic. Should it live in the run time access control device or should the decision be made in the provisioning product?

In most cases I would argue that the correct choice depends more on the capabilities of the respective products. If the decision rules are easily modeled in the rules language offered by the provisioning product then it may make more sense to place the logic there. If the access control device offers the better platform then use that.

One advantage of using the provisioning platform may be that it is easier to explain to the helpdesk and the end users that "if you aren't a member of the AD group truck_manufacturing_employees then you won't have remote access to the trucking portal" than having to say "if your employee_type attribute is employee and your company attribute is truck_manufacturing then you have access". This is especially true if the attributes contain codes instead of names.

If you want to learn more about ABAC I recommend David Brossard's excellent ABAC primer "Authorization is not just about who you are".

Thursday, September 23, 2010

Upgrading MIIS 2003 to ILM 2007

I recently upgraded a Microsoft Identity Integration Server 2003 to Identity Lifecycle Manager 2007. As far as I have been able to determine there are very few differences between these two products other than the fact that ILM supports AD 2008.

MIIS/ILM is basically a quite decent metadirectory engine that can also be used as a poor man's provisioning solution, although the total lack of support for requests, approval workflows, self service and recertification, to just pick a few of the features you normally would expect in a provisioning solution, can be a tiny bit limiting. Microsoft has addressed some of these concerns in Microsoft Forefront, which was released earlier this year.

The upgrade process was actually very straightforward.

  1. Take backup of encryption key in old MIIS install
  2. Take backup of old database (SQL 2000)
  3. Import backup into new database (SQL 2005)
  4. Put the encryption key on new app server
  5. Start install and do some basic configuration
  6. Get some coffee and let the upgrade run for about an hour
  7. Load up the encryption key in the new ILM install
  8. Patch with the latest patch
  9. Done

The whole process took about two hours for the db steps and another hour or so for the application step. I was very impressed with the ease of the upgrade process. Normally IDM upgrades are really complex and time consuming so this was a very pleasant surprise.

One interesting feature was that the custom dlls that contain our custom rules actually got copied over to the file system of the new application server automatically. I assume that MIIS/ILM keeps them as blobs in the database and the upgrade process copies the files out of the db.

Friday, September 10, 2010

What a difference a year makes

There was a posting in one of the IDM groups on LinkedIn today that made me take another look at the 2009 Forrester IAM wave report. The report is getting a bit old but it is sometimes interesting to look back and see what has happened in the IAM market over the last year.

In my opinion the biggest change is clearly the acquisition of Sun and the demise of Sun Identity Manager. Suddenly one of the strongest players in the market just disappeared, which opened up a lot of room for other systems. One of the biggest winners seems to be Courion, which suddenly got a shining example of why buying a suite from one of the big boys doesn't necessarily mean that you have a stronger support and continued development track in front of you.

Other big changes are IBM TIM 5.1, Microsoft Forefront and Oracle 11g.

TIM 5.1 meant that IBM got substantially improved role management, access recertification and group management. I think the features are largely well implemented but they really don't have the depth that some of the freestanding role management tools have (e.g. Oracle Identity Analytics). Martin Kuppinger at Kuppinger Cole wrote an interesting posting about TIM 5.1 on his very good blog.

Microsoft Forefront really means that ILM stops being a glorified metadirectory engine and takes the step into being a proper provisioning platform. If I were in a Microsoft-only shop and had a business that was trying to deploy ten different Sharepoint portals (don't they all these days?) I would clearly consider taking a deeper look at the product.

Oracle 11g has a lot of nifty new features that I have been talking about in various posts.

Overall I think that the wave still gives a good overview of the IAM marketspace and I am really looking forward to the 2010 version of the Forrester IAM wave.

(Full disclosure note: I have an immediate family member who works for Forrester)

Monday, September 6, 2010

OIM Howto: Target system group memberships through OIM groups and access policies

In OIM there are often multiple ways to implement the same functionality.

One such case is target system group memberships. In Leverage standard connector group management I described how to leverage the functionality provided by the OIM AD connector to manage AD group memberships. You can also combine that same connector functionality with the OIM rules, groups and access policies framework to manage group memberships:

  1. Create a rule that adds users to an OIM group under certain circumstances (e.g. user location is "New York" or cost center is 2387)
  2. Add an access policy to that group that provisions the AD user object to the user with the group child form row set to give out the appropriate AD group

You can give a specific user more than one AD group through this strategy as the access policy evaluation engine basically adds the union of all child form rows to the process form of the access policy with the highest priority. Where you do run into trouble is if the same AD group membership is given to the same user by more than one access policy. If this happens, the second group membership add will result in an error.

Taking the route over OIM groups and access policies has the advantage of making things clearer for administrators as well as auditors. It makes it possible to use certain out of the box OIM reports that cover OIM group memberships as proxies for AD group membership reports, which certainly is helpful in certain situations.

OIM Howto: One resource object per target system group

In most cases of target system group management you need to manage a large number of different groups, but sometimes you only need to handle a handful of groups. This commonly happens if the primary purpose of the OIM system is to manage some specific target system that uses groups on an LDAP server (often AD) to do fine-, medium- or coarse-grained authorization. In some cases access to an application may be granted by an AD group membership (commonly used by portal software such as Plumtree).

In these cases it may be appropriate to create an independent resource object for each target system group. There are some substantial advantages to this approach:

  • In the user resource view an administrator will clearly see what target system group or application the user has access to
  • Attestation works more cleanly
  • Out of the box reports work better

There is also nothing that stops you from doing a "mix and match" approach where some AD groups are represented as independent resource objects and others are grouped under a general "Add AD group" resource object.

The implementation basically follows the steps in Support for request based OIM group memberships other than the fact that you will not need any object form as the group name is reflected in the resource object itself.

Thursday, September 2, 2010

The downside of OIM resource object proliferation

The basic function of a resource object in OIM is to represent access for a specific user to a specific system. In many OIM architectures you choose to leverage the resource object to represent all kinds of entities in order to make the entity requestable in the request interface or attestable in the attestation framework.

Examples of entities that can be modelled as resource objects are AD group memberships, user form data updates and access to individual applications.

One problem you will get if you have a lot of "transaction oriented resource objects" is that the user resource view can get drowned in objects, making it hard for the OIM administrators to find the resource object instances that they really are interested in. Let's say that you have a user that has worked for the company for five years and has twenty AD group membership adds, five removals and ten resource object instances that represent user data updates. This user will have thirty-five extra resource object instances in his resource object instance view.

You can of course argue that the state changes and group memberships should be properly recorded in the user's resource object instance view (it is not a bug, it is a feature!). If your customer insists that it will look ugly and limit administrator productivity you can simply change the design to have the resource object raised against a dummy organisation or user. Simply add the real target user login to the object and process form and the problem is solved.

Well, of course excluding training the end users about the fact that they need to pick "AD group user" instead of "John Smith" as the user when they want to request an AD group membership for John Smith. Depending on how big your user population with request generation privileges is, this may or may not be a problem.

Wednesday, September 1, 2010

OIM 11g: Approval Workflow Orchestration with BPEL

In OIM pre 11g you were basically forced to implement decently complex approval workflows in Java code as the built-in workflow editor simply didn't support more complex workflows. Coding the approval workflow was not a big problem if you had basic Java programming skills, but it did mean that the person that did the approval configuration had to be a programmer.

Implementing in code also meant that the implementation really lacked agility. You need to test code more extensively than configuration, and that means that your cycle time will be longer than if you could create approval workflows through configuration.

In OIM 11g approval workflow configuration is done through BPEL. This means that you can get a business analyst with BPEL skills to do most, if perhaps not all, of the approval configuration, and there are many more people with BPEL skills than with OIM approval API skills available on the market.

It is clear that this was a long overdue improvement of OIM that will help customers both to have quicker and less painful implementations as well as improving the maintainability of OIM.

One thought that kind of strikes you is that this may be the first step to move more and more of the workflows in OIM from the mysterious objects in the design console and the Java API code into the world of BPEL. Interesting concept isn't it?

Saturday, August 28, 2010

OIM Howto: Cascading user form changes using triggers

In the post Cascading user form changes with approval I suggested triggering the approval through an entity adapter. I got a very good question about this post.

The conventional approach for cascading user form changes down to process forms is to enter a task name in the Lookup.USR_PROCESS_TRIGGERS lookup table and then create a task in the provisioning process with the same task name. This task then writes the data to the process form. The question was why not simply let this task trigger the request creation and avoid the whole complicated business with the entity adapter.
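
For readers who have not used process triggers before, here is a hedged sketch of what the logic behind such a task can look like with the 9.x APIs. The form column name (UD_ADUSER_LOCATION) and the adapter response codes are illustrative assumptions; in a real adapter the process instance key and the new value arrive as mapped adapter variables.

    // Hedged sketch of a "copy user form value to process form" task body.
    import java.util.HashMap;
    import java.util.Map;

    import Thor.API.Operations.tcFormInstanceOperationsIntf;

    public class ProcessTriggerTasks {

        public static String copyLocationToProcessForm(tcFormInstanceOperationsIntf formOps,
                                                        long processInstanceKey,
                                                        String newLocation) {
            try {
                Map<String, String> data = new HashMap<String, String>();
                data.put("UD_ADUSER_LOCATION", newLocation); // assumed process form column
                formOps.setProcessFormData(processInstanceKey, data);
                return "SUCCESS"; // adapter response code
            } catch (Exception e) {
                return "ERROR";
            }
        }
    }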


The answer is that the process trigger approach in many cases works fine. If you only have one resource object or a small number of resource objects where you need this functionality, using the process trigger approach works great. If you on the other hand have many resource objects that need this function, things get a bit more complex if you want to stay with the process trigger approach.


One option is of course to implement the task in each resource object. That works fine but will cost you performance as each task initiation will take considerable time. You also will have more functionality to maintain, although that really isn't a big issue as the logic is contained in the Java methods and you can keep a single copy that is shared between the resource objects. One advantage of the approach is that unless the user has been provisioned with the resource, the task will never fire, so you don't have to create logic that controls what resource objects have been provisioned to the user.


The other option is to manage the update of several resource objects from a single task by using the APIs to check whether the user has been provisioned with the other resources or not and take appropriate action. This approach will make your code a bit more complex, and in general it is also not advisable to let a task in one resource object impact other, totally separate resource objects as this tends to be confusing (a sort of variant of a hidden code side effect).
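
A hedged sketch of what that provisioning check could look like with the 9.x user API is shown below; the result-set column names and the "Provisioned" status string are the commonly used ones, but treat them as assumptions rather than gospel.

    // Hedged sketch: check whether a user already has a given resource provisioned
    // before pushing an update from a single shared task.
    import Thor.API.Operations.tcUserOperationsIntf;
    import Thor.API.tcResultSet;

    public class ResourceChecks {

        public static boolean hasProvisionedResource(tcUserOperationsIntf userOps,
                                                     long userKey,
                                                     String resourceObjectName) throws Exception {
            tcResultSet objects = userOps.getObjects(userKey);
            for (int i = 0; i < objects.getRowCount(); i++) {
                objects.goToRow(i);
                String name = objects.getStringValue("Objects.Name");
                String status = objects.getStringValue("Objects.Object Status.Status");
                if (resourceObjectName.equals(name) && "Provisioned".equals(status)) {
                    return true;
                }
            }
            return false;
        }
    }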

Friday, August 27, 2010

OIM Howto: Support for request based OIM group memberships

The normal OIM group management interface is oriented towards an administrator. In many cases it can be useful to be able to support request based OIM group memberships. To do this you basically follow the same steps as documented in Leverage standard connector group management:
  1. Create a new RO called "OIM group membership" 
  2. Add an object form that lets the user indicate what OIM group they would like to become a member of.
  3. Add a process form and data sink or prepop from the object form
  4. Add approval process (if needed)
  5. Add provisioning process that basically calls a task that calls the addMemberUser method in tcGroupOperationsIntf.
This in turn can be leveraged to do request based AD group memberships by attaching access policies to the groups that add rows to the AD group membership child form of the AD User object. This will support multiple groups as the child form rows are added cumulatively.
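
To make step five a little more concrete, here is a hedged sketch of the kind of task code that resolves the requested group and calls addMemberUser. The result-set column names and the SUCCESS/ERROR response codes follow common OIM 9.x conventions and should be treated as assumptions.

    // Hedged sketch of the provisioning task behind step 5: look up the requested
    // OIM group by name and add the user to it.
    import java.util.HashMap;
    import java.util.Map;

    import Thor.API.Operations.tcGroupOperationsIntf;
    import Thor.API.tcResultSet;

    public class OimGroupMembershipTasks {

        public static String addUserToGroup(tcGroupOperationsIntf groupOps,
                                            String groupName,
                                            long userKey) {
            try {
                Map<String, String> filter = new HashMap<String, String>();
                filter.put("Groups.Group Name", groupName);
                tcResultSet groups = groupOps.findGroups(filter);
                if (groups.getRowCount() != 1) {
                    return "ERROR"; // group missing or ambiguous
                }
                groups.goToRow(0);
                long groupKey = groups.getLongValue("Groups.Key");
                groupOps.addMemberUser(groupKey, userKey);
                return "SUCCESS";
            } catch (Exception e) {
                return "ERROR";
            }
        }
    }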

There are a couple of different options for the object form in step two and which approach you choose largely depends on your requirements.

One is to use a drop down backed by a lookup table. The lookup table can either be populated manually or, as P.K. suggests in a recent thread on the OTN discussion forum, you can create a scheduled task and use the APIs to auto populate the lookup with the OIM groups. If you go down that path you may want to include logic that excludes certain OIM groups, e.g. system administrators, or only takes a subset of groups, e.g. all OIM groups whose names start with adGroups.
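
Along the lines of that suggestion, a hedged sketch of the body of such a scheduled task is shown below. The lookup code, the adGroups prefix filter and the minimal error handling are all assumptions made for the example.

    // Hedged sketch of a scheduled task body that refreshes a lookup with OIM group names.
    import java.util.HashMap;

    import Thor.API.Operations.tcGroupOperationsIntf;
    import Thor.API.Operations.tcLookupOperationsIntf;
    import Thor.API.tcResultSet;

    public class PopulateGroupLookup {

        public static void refresh(tcGroupOperationsIntf groupOps,
                                   tcLookupOperationsIntf lookupOps,
                                   String lookupCode) throws Exception {
            tcResultSet groups = groupOps.findGroups(new HashMap<String, String>());
            for (int i = 0; i < groups.getRowCount(); i++) {
                groups.goToRow(i);
                String groupName = groups.getStringValue("Groups.Group Name");

                // Only expose the subset of groups intended for self-service requests
                if (!groupName.startsWith("adGroups")) {
                    continue;
                }
                try {
                    lookupOps.addLookupValue(lookupCode, groupName, groupName, "", "");
                } catch (Exception duplicate) {
                    // value already present in the lookup; ignore and continue
                }
            }
        }
    }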

Another option is to use a child form, which would support requests for multiple groups in a single request. If you go for this option you have to add the support on the process form as well and your provisioning logic will be slightly more complex.
The target system net result is identical to the approach in Leverage standard connector group management but you can argue that it is a cleaner approach that leverages more of the standard OIM functionality. It also leverages the OIM group admin user interface, which makes it clearer what AD groups a specific user has access to.

Thursday, August 26, 2010

OIM 11g: Request management

OIM has always had support for request based provisioning but the OIM request model is strongly connected to resource objects. This works great if you want to request something that natively, out of the box, is a resource object, e.g. an AD account, but works less well if you need to be able to support requests for more granular things like attributes on a process form or target system roles on a child form connected to the process form.

There are a number of ways to work around this problem but none of these approaches is entirely problem free, and most of them require a lot of implementation work:
  1. Wrap the entity in a custom resource object (for example AD group memberships)
  2. Wrap the entity in a custom resource object and leverage the OIM group and access policy framework
  3. Create a custom menu item and do a custom request workflow
  4. Create a totally custom request interface and connect to OIM using the APIs. Potentially use web services as a communication channel
Options one and two require some OIM knowledge and a bit of Java prowess. Option three requires Java, OIM API skills, Spring and some basic GUI creation skills, and option four requires knowledge of some kind of web interface plus some understanding of the OIM APIs. Nothing extremely complicated but definitely requires more skill and time than simple configuration.

In 11g there is a new request framework that looks very promising and should hopefully mean that you no longer need to write custom code as soon as you need to support requests for anything outside of the base resource objects. This will make OIM implementations that include decently advanced requirements around requests substantially cheaper and faster.

If you look at the competition it is clearly a weak spot for OIM. IBM TIM has had a framework for handling application roles/groups (they call it "access") since 5.0, so OIM clearly needed to catch up on this feature. The OIM framework looks more flexible, so if the feature delivers on its promises it could be a strong advantage for OIM.

Tuesday, August 24, 2010

OIM Howto: Cascading user form changes with approval

The OIM user self service function makes it possible for users to update attributes on their own user form. This of course makes it possible to trigger provisioning events based on the updates, but in many cases you need an approval workflow to ensure that users do not give themselves inappropriate privileges. How do you do that?

Let's say you have a field called "primary role" on the user form that the user can update. On update of this field you want an approval process to be fired and, if the change is approved, "something" should happen through a provisioning task.

The first question is how do you detect the change to the user form? This can be done through an entity adapter set on post update on the user. This adapter will be fired on any update to the user form, so you need to add an additional, invisible field to the user form called "primary role old". The first thing you do in the adapter is to check if "primary role" and "primary role old" are the same. If they are, no change has been made to this field and you do nothing. If they differ, fire off the request. At the end, update "primary role old" to be the same as "primary role". This will trigger a second round of the entity adapter check but as the two attributes now are the same nothing will happen.
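
A hedged sketch of the comparison logic inside such an entity adapter is shown below. The method is deliberately minimal: the return codes are made up for the example, and the actual request creation is only indicated by a comment since it is covered further down.

    // Hedged sketch of the "has primary role changed?" check in the entity adapter.
    public class PrimaryRoleChangeAdapter {

        public static String checkForChange(String primaryRole, String primaryRoleOld) {
            // No change (or the second invocation after the old value was synced): do nothing
            boolean unchanged = (primaryRole == null)
                    ? primaryRoleOld == null
                    : primaryRole.equals(primaryRoleOld);
            if (unchanged) {
                return "NO_CHANGE";
            }

            // Value changed: fire off the request for the custom resource object here
            // (e.g. via tcRequestOperationsIntf as discussed below) and then update
            // "primary role old" to match "primary role".
            return "CHANGED";
        }
    }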

Next you need to create a custom resource object with the associated object form, process form, approval process and provisioning process.

Firing off the request for your shiny new resource object is done using the createRequest API from the tcRequestOperationsIntf class. The details for how to do this can be found in this OTN discussion on creating requests.

Now the update to the user form will create a request that is approved (or denied) by the appropriate parties and your users get provisioned with whatever you want to put in your provisioning process. Elegant, isn't it?

If you have more than one field on the user form that needs this treatment I strongly recommend that you use a single entity adapter instead of one adapter per field, as entity adapters are expensive to instantiate and your user updates will get very, very slow if you have 10+ entity adapters with each one looking at a specific attribute.

OIM 11g: X2 is finally here

Back in the summer of 2005 I got trained on a product called Xellerate from a company called Thor Technologies. I really liked some features, but a lot of the GUI felt distinctively old and Swing really never was a good GUI framework. No worries mate, said the nice company representative, X2 will be here soon and that will come with a brand new GUI. We even got to see some screenshots that looked quite nice.

As you all know Thor got bought by Oracle and the Oracleization process that turned Xellerate into OIM took a couple of years. Funnily enough the screenshots for OIM 11g are actually very similar to the screenshots that were shown to me in the hot conference room in London way back in the summer of 2005.

Historically Oracle has often promised very interesting features that were delivered but didn't really have enough functional depth to be really useful until a couple of versions down the line (ORM integration, SPML support and generic connectors are a few examples). Many of the new features have been talked about for many years, so hopefully this won't be the case this time.

The new OIM version looks really good with plenty of really strong features. A good feature overview can be found in A Primer on OIM 11g and you can also visit my, hopefully over time growing, list of in depth looks at the new features:

Sunday, August 22, 2010

Inappropriate network access is a material weakness?

I recently found a very interesting KPMG audit findings report on FEMA, the Federal Emergency Management Agency. The reason for this audit report being interesting was not so much the auditing target as the content of the audit report.

What did KPMG find during their audit?
During our audit engagement, we noted certain matters in the areas of security management, access controls, configuration management, and contingency planning with respect to FEMA’s financial systems information technology (IT) general controls which we believe contribute to a DHS-level significant deficiency that is considered a material weakness in IT controls and financial system functionality. These matters are described in the IT General Control and Financial System Functionality Findings by Audit Area section of this letter.

This clearly sounds interesting. Now let's look at the "IT General Control and Financial System Functionality Findings" in the access control section and see what it says.
  • Password, security patch management, and configuration deficiencies were identified during the vulnerability assessment on hosts supporting the key financial applications and general support systems;
  • Core IFMIS, G&T IFMIS, NEMIS, and PARS application and/or database accounts, network, and remote user accounts were not periodically reviewed for appropriateness, resulting in inappropriate authorizations and excessive user access privileges. For G&T IFMIS, we determined that recertification of user accounts had not been conducted since the application was implemented at FEMA in FY 2007;
  • Financial application, network, and remote user accounts were not disabled or removed promptly upon personnel termination;
  • Initial and modified access granted to Core and G&T IFMIS financial application and/or database, network, and remote users was not properly documented and authorized;

What does this really mean? A "material weakness" is something that in the long run could lead to a financial misstatement occurring. If you are a US based public company this would be very bad as post SOX that could lead to your CFO having to go to prison. CFOs generally don't like prison, not even club Fed, so they tend to be motivated to have "material weaknesses" fixed as soon as possible.

Usually the CFO and the auditors will give you some respite if you show signs of material progress towards the goal, but if you totally ignore the problem they will not be pleased.

The first thing that struck me when reading the report is that there seems to be a shift from "random data sampling auditing" to "auditing of the process".

Traditionally IT auditing has largely been done the same way as traditional financial auditing. When I was in college I helped out my student union by serving as an "amateur auditor" for the different societies that were run by the student union. These were mostly things like "the society that arranges parties" and "the other slightly different society that arranges parties as well". In many cases the societies were better at arranging parties than keeping books, so my job was to try to get the treasurers to at least keep some kind of financial records.

The audit process was quite simple. First you check that the general ledger exists and that there are transaction records connected to the general ledger (hundreds of receipts and income records in a shoebox does not count). Secondly you picked five to ten transactions at random and checked if the transactions sounded reasonable, i.e. the beer that was bought was reasonably priced and it looked like most of it was sold to the students after a reasonable length of time (drunk by the party association members themselves does not count).

IT auditing has up until now largely followed the same pattern. First the very high level processes are checked for existence (i.e. a process for how to give out accounts to new employees exists) and then a number of provisioning events and termination events are controlled in detail. Even if your process coverage is really bad and you have a lot of transactions that totally bypass your "official" processes, you usually won't be caught because in most cases the auditing is done based on events initiated by the trusted source, i.e. your HR system, and these events therefore followed your official processes.

If you look at the findings it is clear that KPMG looked substantially deeper at the core business processes such as initial provisioning, access level update and termination. They are also saying that a number of processes, e.g. access recertification, simply are mandatory and must be performed.

In my next postings I will take a little closer look at each issue found by KPMG and talk a bit about how to solve the issues that they point out.

Thursday, August 19, 2010

[OIM vs TIM] Physical deployment architecture

Between 2006 and 2008 I mostly did OIM implementations, but in December of 2008 I switched over to IBM TIM as I switched jobs and my new employer is an IBM shop.

Changing from one security stack to another was a little bit like a musician switching from playing guitar to keyboards. Many of the basic concepts are the same but it did take a while to figure out how you actually do things in the new environment. Functionally OIM and TIM are very similar. There are very few business processes that you can support in one product and not in the other. On the other hand there are a number of things where the architecture or the implementation hinders or helps you substantially compared to the other product.

One example where OIM and TIM have different approaches is deployment architecture.

OIM uses a very standard physical deployment architecture with webserver, appserver, application and database. Most of the business logic gets implemented by configuration changes and Java extensions deployed on the application server.

TIM uses the same basic structure but splits the data storage layer between a database and an LDAP server. Most of the business logic implementation is done inside of TIM either by straight configuration or by adding Javascript "scripts" into hooks in the GUI.

There are advantages as well as disadvantages with the TIM approach. The biggest disadvantage is that there is another critical piece of infrastructure that you need to support. As the LDAP server also needs a db you may end up having to support an Oracle DB and a DB2 instance for the Tivoli Directory Server. Not fun.

Aside from the support issue the LDAP server means that you end up with a large number of servers in an enterprise grade, no single point of failure install. As it generally isn't advisable to run a DB2 and an Oracle DB on the same physical host due to memory footprint, you suddenly need four data layer hosts. High hardware costs, lots of energy and cooling and many servers to patch.

The problem of server sprawl is further increased by the fact that you really need to make your final testing environment as similar as you can afford to your production environment, which means that it also should have full high availability. There you have another four servers just for the storage layer.

The advantage of the LDAP approach is that the users and their base data are easily accessible through LDAP calls, which makes certain business processes such as "warn manager about contractor that is about to expire" very easy to implement. It also means that you can peek into the user information using an LDAP browser instead of a SQL client, which may or may not be nicer depending on your personal preferences.
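
As a hedged illustration of why that is convenient, the snippet below shows the kind of plain JNDI query that becomes possible when user data sits in a directory. The host, base DN and attribute names (employeeType, endDate) are made-up placeholders, not TIM's actual schema.

    // Hedged sketch: find contractor accounts whose assumed end-date attribute has passed.
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class ExpiringContractorReport {

        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389");

            InitialDirContext ctx = new InitialDirContext(env);
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

            // Contractors whose end date is on or before the given timestamp
            String filter = "(&(employeeType=contractor)(endDate<=20100930000000Z))";
            NamingEnumeration<SearchResult> results =
                    ctx.search("ou=people,dc=example,dc=com", filter, controls);

            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
            ctx.close();
        }
    }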

Sunday, August 15, 2010

In defense of OIM IT resources

In OIM Resource Objects, provisioning processes and connectors and IT resources I discussed the different objects that make up an OIM system. One of the objects that isn't strictly necessary is the IT resource.

So if it isn't necessary why does it exist? The simple answer is that it is a very convenient place to store environment dependent information.

In most OIM projects you have a number of OIM environments. You have at least one dev, one integration and one production environment. In some cases you may have a number of dev environments or even one dev per developer, perhaps a UAT environment and a training environment. At a minimum the dev and test environments have separate target systems and you may even have a separate set of target systems for each environment.

You definitely don't want to risk provisioning to, or even worse deprovisioning from, production target systems when you are doing dev or testing. You also don't want to move dev target system environment configuration into production as a part of a code drop. How do you solve this problem?

The most simplistic solution would be to store the target system information in the source code and change it when you promote the code to the next level. Anyone that has ever done that knows that it is a certain path to pain and suffering. In OIM you can actually place environment configuration information in all kinds of interesting and exciting places. It can live in the Java source code, in adapters and in provisioning tasks. There are actually very few limits on what kind of pain a naive and inexperienced OIM developer can inflict on the poor souls that have to maintain the system.

The better option is of course to externalize and centralize the configuration into some kind of configuration repository. The standard OIM repository for environmental information is the IT resource and in many cases it is a good choice. It is easily available for reference and update in the design client. It is quite flexible and accessible by the OIM admins.
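
As a hedged sketch, this is roughly what reading a setting out of an IT resource looks like with the 9.x APIs; the IT resource name, parameter name and result-set column names are the commonly used ones but should be treated as assumptions.

    // Hedged sketch of reading environment settings from an IT resource at run time.
    import java.util.HashMap;
    import java.util.Map;

    import Thor.API.Operations.tcITResourceInstanceOperationsIntf;
    import Thor.API.tcResultSet;

    public class ItResourceConfig {

        public static String getParameter(tcITResourceInstanceOperationsIntf itResOps,
                                          String itResourceName,
                                          String parameterName) throws Exception {
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("IT Resources.Name", itResourceName);
            tcResultSet instances = itResOps.findITResourceInstances(filter);
            if (instances.getRowCount() == 0) {
                return null; // no such IT resource in this environment
            }
            instances.goToRow(0);
            long key = instances.getLongValue("IT Resources.Key");

            tcResultSet params = itResOps.getITResourceInstanceParameters(key);
            for (int i = 0; i < params.getRowCount(); i++) {
                params.goToRow(i);
                if (parameterName.equals(params.getStringValue("IT Resources Type Parameter.Name"))) {
                    return params.getStringValue("IT Resources Type Parameter Value.Value");
                }
            }
            return null;
        }
    }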

Java does offer you some good alternatives if you prefer physical configuration files. You can of course just use the FileReader class and write your own parser. I personally prefer the Java Properties framework.
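
A minimal sketch of the Properties approach is shown below; the file path and property keys are purely illustrative.

    // Minimal sketch: load environment settings from a properties file whose path
    // is the only thing that differs between environments.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class EnvironmentConfig {

        public static Properties load(String path) throws IOException {
            Properties props = new Properties();
            FileInputStream in = new FileInputStream(path);
            try {
                props.load(in);
            } finally {
                in.close();
            }
            return props;
        }

        public static void main(String[] args) throws IOException {
            Properties config = load("/opt/oim/conf/ad-connector.properties");
            System.out.println(config.getProperty("ad.host"));
        }
    }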

The main disadvantage of configuration files is when you are running in a cluster. I don't know how many times over the years I have spent considerable amounts of time debugging issues just to discover that a setting or a configuration file wasn't updated on one of the cluster member servers.