Monday, December 27, 2010
Personal names are a subject that will impact a provisioning rollout in many ways, some very upfront and others more convoluted. If you ignore the subject you may regret it once your system is live and the problems start showing up. Names are not only used to populate the basic name fields but also as generators for things like logins and email addresses, so by solving some of the problems up front in the feeds you can avoid a lot of issues in the target systems.
In this posting I will talk about static names while a later posting will discuss name changes.
If you are US based and you are rolling out an employee-only system the name seeding is quite straightforward. The HR system usually uses the name that is featured on the social security card, so that is the name that gets fed to you. For the last name this is usually straightforward, but many people go by a different first name. Their real first name may be "Joseph" but everyone calls them Joe, so they want their email to be joe.smith instead of joseph.smith. The solution to this problem is to have a preferred first name field and use it whenever it is populated.
The next problem comes in the form of names that contain special characters such as O'Malley. The normal solution is to filter out any characters outside a-z and A-Z from the feed; hyphens are also usually allowed.
Outside of the US the problems get worse. Names contain all kinds of characters that are not plain ASCII, and just filtering away every "strange" character may not work very well: the happy Swede "Åke Öhlund" would not be so happy with the email "ke.hlund". What I have found is that you can support most languages with a simple translation table of about thirty entries that drops the umlauts and accents and turns each character into its closest ASCII equivalent. Our friend "Åke Öhlund" would usually like to get the email "ake.ohlund", or at least he won't complain too loudly.
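As an illustration, here is a minimal sketch in Java of what such a translation table can look like. The exact mappings and the fallback filtering are assumptions about what your organization finds acceptable, so extend the table for the languages you actually need to support.

import java.util.HashMap;
import java.util.Map;

public class NameNormalizer {

    // Small translation table that folds common accented characters to ASCII
    private static final Map<Character, String> TRANSLATION = new HashMap<Character, String>();
    static {
        TRANSLATION.put('å', "a"); TRANSLATION.put('ä', "a"); TRANSLATION.put('ö', "o");
        TRANSLATION.put('Å', "A"); TRANSLATION.put('Ä', "A"); TRANSLATION.put('Ö', "O");
        TRANSLATION.put('é', "e"); TRANSLATION.put('è', "e"); TRANSLATION.put('ü', "u");
        TRANSLATION.put('ß', "ss"); TRANSLATION.put('ñ', "n"); TRANSLATION.put('ç', "c");
        // ...roughly thirty entries covers most European languages
    }

    // Folds accented characters and drops anything that is not a-z, A-Z or a hyphen
    public static String normalize(String name) {
        StringBuilder sb = new StringBuilder();
        for (char c : name.toCharArray()) {
            if (TRANSLATION.containsKey(c)) {
                sb.append(TRANSLATION.get(c));
            } else if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '-') {
                sb.append(c);
            }
            // everything else is silently dropped
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "Åke Öhlund" becomes "ake.ohlund" instead of "ke.hlund"
        System.out.println(normalize("Åke").toLowerCase() + "." + normalize("Öhlund").toLowerCase());
    }
}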
There are a number of national or regional issues that I have run into over the years. If your system will cover these regions it is worth investigating whether you will run into the specific issue or not.
In some parts of the world people have more than one first name as well as last name. For example, among expat Chinese in Singapore it is common to have an official Chinese name and in addition a western name. A preferred last name as well as a preferred first name solves this problem.
In Holland a lot of people tend to start their last name with van as in "van Fleet". There may be a request to generate email addresses with the van removed.
Germans like their formal titles and sometimes the Herr Doctors want their doctor degrees to be an integral part of their last name. Don't be surprised if you find "Schmitt, Dr" in the last name field in the HR feed. In severe cases you may even find "Schmitt, Dr, Dr" or "Schmitt, Professor Dr".
In Latin America people usually have two last names as you inherit one last name from your mother and one from your father.
Thursday, December 23, 2010
Gartner Magic Quadrants
Gartner has published a new provisioning magic quadrant. In my opinion it is a quite interesting read.
One of the main points is that there has been a shift away from provisioning towards auditing, access recertification (attestation in OIM speak) and governance. Gartner uses the term IAI (Identity and Access Intelligence). The main driver here is the fact that provisioning projects are long, complex and painful while IAI projects are easier and quicker.
In my experience this is a correct observation. The real challenge in provisioning projects is that the provisioning process in an enterprise tends to be quite complex. In many cases the process isn't properly documented or there may be multiple different processes in different parts of the enterprise. The business analysis work that is needed to document the process can be very time consuming and in many cases important points are missed.
An alternative is of course to simply adopt a "best practice" provisioning process but this requires a lot of political will and in many cases the complexities of the enterprise process are present for a reason.
On the other hand few companies have an established IAI process so adopting the "best practice" is relatively painless. This means that the time consuming step of documenting and implementing the current corporate business process can be almost completely skipped in an IAI project. The integrator can basically use whatever canned approach they happen to have handy which means that results can show up in weeks rather than months (or sometimes even years) which is the time scale you need for a custom provisioning implementation.
IAI projects are usually run to improve security and reach compliance but they can actually result in substantial operational efficiencies as well. In one IAI project we found 100+ user accounts for a quite expensive (1000+ USD per year license fee) application that really weren't needed. The lower license cost was a quite nice bonus for the customer, but they were actually even happier about the fact that we found 600 active remote access accounts for which no one could explain who they belonged to.
Monday, December 13, 2010
OIM Howto: Interacting with Tivoli Directory Server
OIM has a quite large list of connectors but sometimes you need to interact with a target system that lacks a standard connector. One such example is TDS (Tivoli Directory Server).
TDS is used as the internal directory server in TIM and can also be used as a standard corporate directory or as the user directory for the IBM Tivoli Access Manager. It is a standard LDAP v3 compliant directory so in theory you should be able to use any of the LDAP connectors (AD, eDirectory and Sun JDS). I would generally not recommend trying to use the AD connector as it includes a lot of functionality that addresses peculiarities in AD but either the eDirectory or Sun JDS will work fine.
The problems that may force you to write a custom connector will often have more to do with the lack of functionality in the standard connectors than with incompatibilities between TDS and the connectors. The eDirectory and the JDS connectors have basically gotten minimal updating during the last four years, so compared with the AD connector their functional depth is quite limited. One area where you may see compatibility issues is in the handling of roles and groups.
You may end up writing some custom JNDI logic to complement the functionality of the standard connector, which really isn't very complicated if you have some basic Java programming skills. An example implementation of a JNDI based connector can be found in the JNDI demo tool.
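To give a feel for what such custom JNDI logic looks like, here is a minimal sketch that binds to a directory and searches for a single user entry. The host, credentials, search base and filter are all placeholders, and any LDAP v3 server can be used in place of TDS.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class TdsSearchExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://tds.example.com:389");  // placeholder host and port
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=root");               // placeholder bind DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");              // placeholder password

        InitialDirContext ctx = new InitialDirContext(env);

        SearchControls ctrl = new SearchControls();
        ctrl.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // Look up a single user by uid under a placeholder search base
        NamingEnumeration<SearchResult> results =
                ctx.search("ou=people,dc=example,dc=com", "(uid=asmith)", ctrl);
        while (results.hasMoreElements()) {
            SearchResult entry = results.nextElement();
            System.out.println(entry.getNameInNamespace());
        }
        ctx.close();
    }
}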
Monday, November 29, 2010
Initial load
When you plan a typical internally focused provisioning system rollout, one of the problems that you have to solve is how to get the information about the already existing users and accounts loaded into your new and shiny IDM system. Let's talk a little bit about design patterns for solving this problem.
In most cases you start out with one or more sources of basic user identities. These are the canonical truths about who actually works for your company. Usually this includes a human resources system of some kind. If this system is connected to payroll it tends to contain very good data, as the employees tend to complain if they don't get paid and the company doesn't like to continue paying people who no longer work for the company. In some cases you will discover that the HR system is only linked to payroll in some parts of the world while others, e.g. the UK office, use another system to feed payroll. This usually results in the data in the HR system being less well maintained, which can cause serious issues.
In many cases there are entities that need to go into the IDM system that aren't present in the HR system, e.g. contractors. Getting hold of data about these users is often not easy, but there may be a contractor database somewhere. Worst case you may have to settle for data from the security badging system or from the corporate Active Directory. Even when you find basic information about the contractors you will often discover that the data quality can be very bad. Information such as manager id, current status or end date may not necessarily be well maintained. If you for example are planning to send warning emails to the manager of each contractor it will not be good if all 200 contractors in the manufacturing division report to the division VP.
Assuming that you managed to discover your base identities, the next step is to identify which target system accounts belong to which base identities. In a perfect world there should be a unique identifier in each target system account, such as an employeeid, that can be traced back to one and only one account in the trusted source (i.e. the HR system). In practice this is rarely true. In most cases some target system accounts contain the unique identifier while a large percentage will need to be linked using less exact methods such as email addresses or, in the worst case, names. Name based linking can be very time consuming and there is also a substantial risk that you will end up with false matches.
There are many tools available that will help with the linkage process, and if your account volume is decently high you want to start by doing automated matching using some form of script or program. Once you have cleared out the obvious matches you may want to switch over to a manual process that uses a simple Excel sheet to match trusted source accounts to target system accounts.
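Here is a minimal sketch of what that kind of automated matching can look like, assuming the trusted source has already been loaded into maps keyed on employee id and (lower-cased) email address. All type and field names here are placeholders for illustration, not OIM APIs.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class AccountMatcher {

    // Returns the target accounts that could not be linked and need manual review
    public static List<TargetAccount> correlate(Map<String, Identity> byEmployeeId,
                                                Map<String, Identity> byEmail,
                                                List<TargetAccount> targetAccounts) {
        List<TargetAccount> unmatched = new ArrayList<TargetAccount>();
        for (TargetAccount account : targetAccounts) {
            Identity owner = null;
            if (account.employeeId != null) {
                owner = byEmployeeId.get(account.employeeId);      // exact match on the unique identifier
            }
            if (owner == null && account.email != null) {
                owner = byEmail.get(account.email.toLowerCase());  // weaker fallback: email address
            }
            if (owner != null) {
                owner.linkedAccounts.add(account);
            } else {
                unmatched.add(account);                            // left for bucketed or manual matching
            }
        }
        return unmatched;
    }

    // Minimal placeholder types for the sketch
    public static class Identity {
        String employeeId, email;
        List<TargetAccount> linkedAccounts = new ArrayList<TargetAccount>();
    }
    public static class TargetAccount {
        String employeeId, email, country;
    }
}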
If you can use a divide and conquer approach and split the trusted source accounts and the target system accounts into distinct buckets, things get much easier. Let's say you have 3000 unmatched trusted source accounts and 5000 unmatched target system accounts: you want to investigate whether you can divide the accounts into ten country buckets based on the country attribute in both the trusted source account and the target account. This will reduce the problem to ten instances of matching 200-400 trusted accounts to 300-700 target accounts, which is a much easier problem to solve. This approach of course assumes that there is a suitable "bucketing" attribute available in the target system as well as in the trusted source.
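A minimal sketch of the bucketing step, assuming both feeds expose a country attribute and each unmatched account is represented as a simple attribute map (the attribute name is a placeholder):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CountryBucketer {

    // Groups unmatched accounts by their country attribute so each bucket
    // can be matched (or handed to a local administrator) separately
    public static Map<String, List<Map<String, String>>> bucketByCountry(
            List<Map<String, String>> unmatchedAccounts) {
        Map<String, List<Map<String, String>>> buckets =
                new HashMap<String, List<Map<String, String>>>();
        for (Map<String, String> account : unmatchedAccounts) {
            String country = account.get("country");
            if (country == null || country.trim().length() == 0) {
                country = "UNKNOWN";   // accounts without the attribute still need a home
            }
            List<Map<String, String>> bucket = buckets.get(country);
            if (bucket == null) {
                bucket = new ArrayList<Map<String, String>>();
                buckets.put(country, bucket);
            }
            bucket.add(account);
        }
        return buckets;
    }
}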
To sum things up preparing for the initial load essentially consists of the following steps:
- Discovering the trusted sources that will provide the base identities
- Extracting the user data from the trusted sources
- Cleaning or at least evaluating the quality of the data contained in the trusted sources
- Discovering the target system accounts
- Linking the trusted identities to the target system accounts
If the data is in good shape this can be a quick process, but if the data is bad it is not that uncommon that you need to spend 3-6 months on the data cleanup. It is therefore a good idea to include a data cleanup and correlation thread in your IDM program that starts at the same time as you kick off the provisioning implementation project.
Thursday, November 18, 2010
Polyarchies in practice
In A bridge(stream) too far I talked a little bit about using polyarchies to implement role mining. Let's take a closer look at this concept, including a simple example.
Let's say you have 1000 AD groups and 2000 users in your system and you would like to do some role mining in order to figure out if you could apply a role-based approach to automatically grant the correct entitlements (AD groups) to the right user.
First you look at the information you have available about your users. You may find that you are able to place them in a number of different hierarchies. Let's start by looking at the location-based hierarchy.
The company has ten locations and the locations can be organized in a country -> city -> building pattern. So for example the US has offices in two cities: Boston and LA. In LA there is a single location while Boston contains two locations.
Now sort the users according to their location. You may end up with something like:
USA total: 380 users
-> Boston total: 250 users
---> Boston FS (Federal Square): 150 users
---> Boston Haymarket: 100 users
->LA downtown: 130 users
The next step is to associate AD group memberships with each site and sort the groups according to how many members with that specific location exist in each group. The Boston Federal Square location may have the following groups:
Boston Distribution List 143 members
Boston FS Distribution List 140 members
Sales Distribution List 138 members
Boston FS printers and fileshare 132 members
Out of these it looks like Boston FS Distribution List and Boston FS printers and fileshares should be given to any users that have a Boston FS location. The Boston Distribution list could be checked against the parent node to see if it is also given to the Boston Haymarket users. If not then perhaps it is an additional group used for Boston FS.
The Sales Distribution List may be assigned through location but it looks more likely that it is tied to the functional hierarchy. It just happens that many sales people are based out of the Boston Federal Square office.
Doing this work by hand using Excel or a small database is very time consuming, but it is fairly easy to automate using Java or whatever your favorite programming language is (a small sketch follows the list below).
You basically need to:
- Extract your base user data out of the trusted source (often an HR csv file feed)
- Enumerate the unique values of suitable attributes (e.g. list all unique locations) that are present in the trusted source
- Extract the group memberships (JNDI is my favorite) as well as user identities from the target system
- Correlate the users from the trusted source and the target system
- Calculate the user population in each unique attribute value
- Get the group memberships of the user population for each unique attribute value from the previous step
- Sort the groups according to the number of members
- Output the result in a user friendly format (Excel sheets works great)
- Attach some kind of cut-off value, e.g. only list groups where at least 75% of the users in a particular location are members
- Look at the results and pick the likely candidates
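Here is a minimal sketch of the counting and cut-off steps, assuming the users have already been correlated and loaded into simple in-memory maps. The structure and attribute choices are assumptions for illustration only.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LocationRoleMiner {

    // usersByLocation: location -> logins of users at that location
    // groupsByUser:    login -> AD groups the login is a member of
    // cutoff:          e.g. 0.75 to require 75% membership
    public static Map<String, List<String>> mine(Map<String, List<String>> usersByLocation,
                                                 Map<String, List<String>> groupsByUser,
                                                 double cutoff) {
        Map<String, List<String>> candidates = new HashMap<String, List<String>>();
        for (Map.Entry<String, List<String>> location : usersByLocation.entrySet()) {
            List<String> users = location.getValue();
            Map<String, Integer> groupCounts = new HashMap<String, Integer>();
            for (String user : users) {
                List<String> groups = groupsByUser.get(user);
                if (groups == null) {
                    continue;
                }
                for (String group : groups) {
                    Integer count = groupCounts.get(group);
                    groupCounts.put(group, count == null ? 1 : count + 1);
                }
            }
            // Keep only groups where at least the cut-off share of the location's users are members
            List<String> likely = new ArrayList<String>();
            for (Map.Entry<String, Integer> entry : groupCounts.entrySet()) {
                if (entry.getValue() >= cutoff * users.size()) {
                    likely.add(entry.getKey());
                }
            }
            candidates.put(location.getKey(), likely);
        }
        return candidates;
    }
}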
If you prefer the COTS approach there are lots of different options. In my opinion Oracle Identity Analytics (ex Sun Role Manager, ex Vaau RBACx) is a quite nice implementation. IBM has also included some capability in TIM 5.1 that is worth taking a closer look at if you are an IBM shop.
For further reading Oracle published a whitepaper on this subject this summer that is well worth reading.
Happy mining!
Thursday, November 11, 2010
New to OIM
Back in the bad old days when you started with Thor Xellerate (or a little bit later OIM) you usually got the three-day Thor training workshop and then you basically had to figure out things on your own. There was some documentation, but most things in the central OIM platform you had to reverse engineer, and a good decompiler and sniffer were your best friends.
Oracle has made great steps forward on the documentation side over the last few years and a lot of the material is available to anyone.
If you are new to OIM I would suggest starting with downloading the OIM install and taking a look at the documentation folder (9.1 release). I would start by reading through the concepts document as this will give you a good overview of what the OIM tool actually can do.
The next step is to implement a couple of the exercises in the Oracle by Example series.
This will give you an overview of the basic functions of OIM. Once you are done with the basics you can continue to explore how to customize the user interface and how to create your own connectors.
The Oracle IDM forum is another great resource and if you have a metalink account there is also a lot of good information in the support knowledge DB.
On a slightly more humorous note I suggest So You Want to be a Game Designer. The skill set that makes a good game designer is actually quite similar to the skill set you need as an access and identity designer, although knowledge of world myths and world religions may be slightly more useful to the game designer (though both can benefit from knowing what Kerberos is).
Welcome and best of luck!
OIM Howto: Add process form child
There are a number of posts on this blog that talk about managing target system groups by manipulating child form contents. In a recent OTN IDM discussion thread Deepika posted some useful example code, so in order to make the previous posts on this subject more useful I thought it would be a good idea to link in the example.
public void AddProcessChildData(long pKey) {
    try {
        tcFormInstanceOperationsIntf f = (tcFormInstanceOperationsIntf)
                getUtility("Thor.API.Operations.tcFormInstanceOperationsIntf");
        tcResultSet childFormDef = f.getChildFormDefinition(
                f.getProcessFormDefinitionKey(pKey), f.getProcessFormVersion(pKey));
        // Works if there is only one child table for the parent form,
        // otherwise you need to iterate through the result set
        long childKey = childFormDef.getLongValue("Structure Utility.Child Tables.Child Key");
        Map attrChildData = new HashMap();
        String groupDN = "someValue";
        attrChildData.put("UD_ADUSRC_GROUPNAME", groupDN);
        f.addProcessFormChildData(childKey, pKey, attrChildData);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Monday, November 8, 2010
A bridge(stream) too far
Over the years I have run into a number of products that had lots of good ideas but perhaps simply didn't have enough resources to implement them properly. One of these is Bridgestream SmartRoles (aka Oracle Role Manager). This product is no longer with us, having been trumped by Sun Role Manager (aka Vaau RBACx), but I really think that it came with some good concepts that you can use in your IDM implementation no matter what product you are using.
Business roles vs IT roles
Roles are basically a tool to group entitlements together into manageable packets. If you ask an IT person and a business person what manageable packets are and how they should be named the two groups tend to disagree a lot.
One approach to solve this problem is to let the business have their business roles ("teller level two") and let the IT guys build their own roles (two AD groups, logins to two applications configured with specific application roles). Then you just combine the IT roles into business roles, and the business can then be asked to certify that Emma Smith the teller should have this business role. The fact that the business role actually results in three IT roles, which in turn result in a bucket load of entitlements (AD groups etc), is not really relevant to the certification decision.
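As a minimal sketch, the layering can be thought of as a plain data structure along these lines; the role and entitlement names are made up for illustration:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoleModelSketch {
    public static void main(String[] args) {
        // IT roles: technical bundles of entitlements
        Map<String, List<String>> itRoles = new HashMap<String, List<String>>();
        itRoles.put("AD-Branch-Base", Arrays.asList("AD:Branch Users", "AD:Branch Printers"));
        itRoles.put("CoreBanking-Teller", Arrays.asList("CoreBanking:login", "CoreBanking:role=teller"));
        itRoles.put("CRM-ReadOnly", Arrays.asList("CRM:login", "CRM:role=viewer"));

        // Business role: what the business certifies, expressed in their own language
        Map<String, List<String>> businessRoles = new HashMap<String, List<String>>();
        businessRoles.put("Teller Level Two",
                Arrays.asList("AD-Branch-Base", "CoreBanking-Teller", "CRM-ReadOnly"));

        // The certifier only sees "Teller Level Two"; provisioning expands it into entitlements
        for (String itRole : businessRoles.get("Teller Level Two")) {
            System.out.println(itRole + " -> " + itRoles.get(itRole));
        }
    }
}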
In reality things rarely work out this smoothly, but I have found the approach useful.
Everything is temporal
One of the nifty features in Bridgestream was the temporal engine, which addressed the eternal problem of the ever-changing nature of everything.
In many role-related IDM projects it is very easy to forget that everything, including the roles and the entitlements, has a lifecycle and will need to change at some point. Managing this without support in the base framework can get very complex, so building in proper support for temporality is key to making maintenance cheap and easy.
Polyarchy
Hierarchies are really useful when you want to organize users. One problem that you often run into is the fact that a specific user may be in a number of different hierarchies. One may be the geographical location, another may be the reporting chain and a third may be the kind of position you are in (i.e. embedded HR or IT departments that report into the business unit with dotted lines to the corporate HR or IT). It is literally impossible to capture all these relationships in a single hierarchy.
Bridgestream introduced, at least to me, the concept of polyarchy. Instead of trying to wrestle all these relationships into a single hierarchy you simply create multiple hierarchies, where each hierarchy reflects one aspect of the user's relationship with the surrounding nodes. If you also are able to divide up the entitlements into buckets where each specific bucket is likely to be assigned due to the user's position in a specific hierarchy (a role called "Cambridge campus" or "Floor 3 - building 6 - Cambridge campus" is likely location-based) you have a good tool that can reduce the complexity of the role discovery substantially.
There is a more expanded example of polyarchies in action in the post Polyarchies in practice.
Friday, October 29, 2010
Avoiding issues with referrals in LDAP searches
If the target LDAP server contains referrals, especially broken ones, this may cause issues during LDAP searches.
If you switch "is there any more objects left" test from:
When I have run into strange LDAP error messages I have found the LDAP Status Codes and JNDI Exceptions to be very helpful.
If you switch "is there any more objects left" test from:
while (results != null && results.hasMore())
to:
while (results != null && results.hasMoreElements())
this problem goes away.
When I have run into strange LDAP error messages I have found the LDAP Status Codes and JNDI Exceptions to be very helpful.
Wednesday, October 27, 2010
AD/LDAP reconciliation using paging
Depending on the settings on your target AD/LDAP server you may need to use paging when doing larger searches. AD 2008 by default only gives you 1000 objects in any one LDAP search, which means that most recons require either paging or many very well defined LDAP filters that give you very small result sets.
Paging is supported in JNDI by activating a specific request control (PagedResultsControl). The LDAP server will then serve you pageSize records at a time together with a cookie that you send back to get another batch. Rinse and repeat until you have all the result records.
The example code was written for TDS but can be used with any LDAP V3 compliant LDAP server.
private HashMap performTimSearch(String ouPath) {
    NamingEnumeration results = null;
    // keyed on the String name, contains SearchResult objects
    HashMap totalResults = new HashMap(30000);
    if (timAdminPassword.equals("")) {
        logger.debug("\nNo TIM admin password. No TIM search will be performed.");
        return totalResults;
    }
    try {
        // Create initial context
        LdapContext ctx = new InitialLdapContext(timEnv, null);

        // Activate paged results
        int pageSize = 400;
        byte[] cookie = null;

        // Activate search controls
        SearchControls searchCtrl = new SearchControls();
        searchCtrl.setReturningAttributes(timAttributes);
        searchCtrl.setSearchScope(SearchControls.SUBTREE_SCOPE);
        ctx.setRequestControls(new Control[] {
                new PagedResultsControl(pageSize, Control.NONCRITICAL) });

        int total = 0;
        // Specify the attributes to match
        String matchAttrs = "(objectclass=user)";
        do {
            // Perform the search
            results = ctx.search(ouPath, matchAttrs, searchCtrl);
            while (results != null && results.hasMore()) {
                SearchResult entry = (SearchResult) results.next();
                totalResults.put(entry.getName(), entry);
                total = total + 1;
            }
            // Examine the paged results control response
            Control[] controls = ctx.getResponseControls();
            if (controls != null) {
                for (int i = 0; i < controls.length; i++) {
                    if (controls[i] instanceof PagedResultsResponseControl) {
                        PagedResultsResponseControl prrc =
                                (PagedResultsResponseControl) controls[i];
                        cookie = prrc.getCookie();
                    }
                }
            } else {
                logger.debug("No controls were sent from the server");
            }
            // Re-activate paged results with the cookie from the last batch
            ctx.setRequestControls(new Control[] {
                    new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
        } while (cookie != null);
        logger.debug("Finished TIM search. Found " + total + " users.");

        // Close the context when we're done
        ctx.close();
    } catch (Exception e) {
        logger.error("TIM search failed");
        logger.error(e.getMessage());
    }
    return totalResults;
}
Monday, October 11, 2010
OIM Howto: Date based enable/disable of resources
In some cases you don't want to give a user access to a resource immediately but rather start the access at a certain point in time. In a similar fashion you may not want to give the access permanently but rather up until a certain end date.
There are a number of ways to implement this, but I would recommend the following approach:
- Add start date and end date to the process form.
- As a part of the provisioning process add a step that disables the resource if the start date hasn't arrived yet or the end date has passed.
- Create a scheduled task that checks all provisioned resources of this type and enables or disables them as appropriate given the start and end dates (a small sketch of the date check follows below).
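Here is a minimal sketch of the date check such a scheduled task needs to perform. The actual enable and disable calls are left as a comment since they depend on your OIM version and the APIs you prefer.

import java.util.Date;

public class DateWindowCheck {

    // Returns true if the resource should be enabled right now.
    // A null start date means "no start restriction", a null end date means "no expiry".
    public static boolean shouldBeEnabled(Date now, Date startDate, Date endDate) {
        if (startDate != null && now.before(startDate)) {
            return false;   // not started yet
        }
        if (endDate != null && now.after(endDate)) {
            return false;   // already expired
        }
        return true;
    }

    public static void main(String[] args) {
        // In the scheduled task you would loop over all provisioned instances of the
        // resource, read the start and end dates from the process form and then call
        // the appropriate OIM API to enable or disable each resource instance.
        System.out.println(shouldBeEnabled(new Date(), null, null));
    }
}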
Sunday, October 3, 2010
[OIM vs TIM] Basic RBAC
In my previous post RBAC vs ABAC I talked about using your favorite provisioning tool to implement a simple case of AD group membership management. I thought it might be interesting to compare how you implement this use case in OIM and TIM.
In OIM there are a large number of different ways to do AD group memberships, but let's concentrate on the most standard and out-of-the-box method: rules, groups and access policies. In the use case that we are dealing with, OIM's standard functionality actually works great. If the rules that govern which AD group is given to which group of users were slightly different, e.g. give everyone whose cost center is between 1200-1299 AD group X and 1300-1399 AD group Y, then the OIM standard functionality basically stops working and you will have to bring an entity adapter based approach to bear (see Role based group memberships in OIM for more details on how to solve this problem).
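A minimal sketch of the kind of range evaluation such an entity adapter would perform; the cost center ranges and group names are just the example values from above:

public class CostCenterGroupRule {

    // Maps a numeric cost center to the AD group that should be provisioned.
    // Returns null if no rule matches.
    public static String groupForCostCenter(int costCenter) {
        if (costCenter >= 1200 && costCenter <= 1299) {
            return "AD Group X";
        }
        if (costCenter >= 1300 && costCenter <= 1399) {
            return "AD Group Y";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(groupForCostCenter(1250));   // AD Group X
        System.out.println(groupForCostCenter(1350));   // AD Group Y
    }
}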
In TIM the equivalent of OIM groups are basically called roles. A TIM role contains its own equivalent of an OIM rule definition: the role membership is defined through an LDAP filter, which of course is a much more flexible mechanism than OIM's rules as you have access to wildcards and everything else an LDAP filter can offer you.
In TIM you use a provisioning policy to implement the mapping of TIM role (OIM group) to entitlement. A TIM provisioning policy is more or less equivalent to an access policy in OIM. The TIM provisioning policy contains more configuration, as some of the things that are configured in prepopulate adapters and the process form in OIM are controlled by the provisioning policy in TIM. The configuration in TIM is done through JavaScript while you usually tend to use Java code in OIM.
To implement the use case in TIM you would:
- Create a TIM role for each AD group that you want to provision.
- Configure each TIM role to only include the correct users using an LDAP filter.
- Attach a provisioning policy to each TIM role that gives the specific AD group.
- Done!
Overall I think that TIM is the winner in this specific battle as you can do more things without having to resort to coding in TIM. You can do the same things in OIM as in TIM but in many cases you will have to resort to coding at an earlier stage in OIM as TIM has a more flexible configuration interface.
Saturday, September 25, 2010
RBAC vs ABAC
Over the last week there has been a very interesting discussion around role-based access control vs attribute-based access control. My personal opinion is that there really isn't any razor-sharp division between the different paradigms and that in many cases a blended solution is what gives the best support for the business case. Let me demonstrate what I mean with an example.
The business case is that a VPN system needs to be converted from an "if you can authenticate then you have full access" model to a system that only gives access to the systems the user needs. The idea is that you can somehow define groups of users that need access to certain systems. For now the only user groups that have been defined are the different company divisions and consultants vs employees, but the system needs to be flexible enough to support other group definitions further down the line.
The access control device supports definition of access groups through either AD groups or AD attributes. The AD attributes are analyzed in an XACML-light module (if attribute company="truck manufacturing" and user_type="employee" -> user_is_member_of_access_group_employee_in_truck_manufacturing). Alternatively the access control device can simply look for users that are members of the AD group employee_in_truck_manufacturing and then apply the access rules for this group.
This means that at first glance you have a choice between a more ABAC oriented approach (access device looks at attributes and makes a decision) or a more RBAC oriented approach (access device looks at a group membership/role).
The complexity here is how do you populate the attributes and/or place the user in the correct AD group?
Assuming that the attributes exist somewhere, e.g. in the HR system, you can populate the AD record using your favorite provisioning tool at AD account creation time and then maintain them either using the provisioning tool or a separate metadirectory product.
The AD group membership can be solved by implementing a rule-based AD group allocation module in the provisioning product. The provisioning product simply evaluates rules written in XACML or another suitable language (LDAP filters, simple boolean logic, regexp) and then provisions AD group memberships.
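A minimal sketch of such a rule evaluation on the provisioning side, reusing the truck manufacturing example from above; the attribute and group names are placeholders:

import java.util.Map;

public class AccessGroupRule {

    // Simple boolean rule: employees in truck manufacturing get the remote access group.
    // The same logic could just as well live in the runtime access control device instead.
    public static boolean matches(Map<String, String> userAttributes) {
        return "employee".equalsIgnoreCase(userAttributes.get("user_type"))
                && "truck manufacturing".equalsIgnoreCase(userAttributes.get("company"));
    }

    public static String groupToProvision(Map<String, String> userAttributes) {
        return matches(userAttributes) ? "truck_manufacturing_employees" : null;
    }
}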
Now the conclusion you can draw from this is that in the use case where the access control decision is done based on user attributes the essential difference between the two choices is only the exact location of the evaluation logic. Should it live in the run time access control device or should the decision be made in the provisioning product?
In most cases I would argue that the correct choice depends more on the capabilities of the respective products. If the decision rules are easily modeled in the rules language offered by the provisioning product then it may make more sense to place the logic there. If the access control device offers the better platform then use that.
One advantage of using the provisioning platform may be that it is easier to explain to the helpdesk and the end users that "if you aren't a member of the AD group truck_manufacturing_employees then you won't have remote access to the trucking portal" than having to say "if your employee_type attribute is employee and your company attribute is truck_manufacturing then you have access". This is especially true if the attributes contain codes instead of names.
If you want to learn more about ABAC I recommend David Brossard's excellent ABAC primer "Authorization is not just about who you are".
Thursday, September 23, 2010
Upgrading MIIS 2003 to ILM 2007
I recently upgraded a Microsoft Identity Integration Server 2003 to Identity Lifecycle Manager 2007. As far as I have been able to determine there are very few differences between these two products other than the fact that ILM supports AD 2008.
MIIS/ILM is basically a quite decent metadirectory engine that can also be used as a poor man's provisioning solution, although the total lack of support for requests, approval workflows, self-service and recertification, to just pick a few of the features you normally would expect in a provisioning solution, can be a tiny bit limiting. Microsoft has addressed some of these concerns in Microsoft Forefront, which was released earlier this year.
The upgrade process was actually very straightforward.
- Take backup of encryption key in old MIIS install
- Take backup of old database (SQL 2000)
- Import backup into new database (SQL 2005)
- Put the encryption key on new app server
- Start install and do some basic configuration
- Get some coffee and let the upgrade run for about an hour
- Load up the encryption key in the new ILM install
- Patch with the latest patch
- Done
The whole process took about two hours for the db steps and another hour or so for the application step. I was very impressed with the ease of the upgrade process. Normally IDM upgrades are really complex and time consuming so this was a very pleasant surprise.
One interesting feature was that the custom DLLs that contain our custom rules actually got copied over to the file system of the new application server automatically. I assume that MIIS/ILM keeps them in the form of blobs in the database and the upgrade process copies the files out of the db.
Friday, September 10, 2010
What a difference a year makes
There was a posting in one of the IDM groups on LinkedIn today that made me take another look at the 2009 Forrester IAM wave report. The report is getting a bit old but it is sometimes interesting to look back and see what has happened in the IAM market over the last year.
In my opinion the biggest change is clearly the acquisition of Sun and the demise of Sun Identity Manager. Suddenly one of the strongest players in the market just disappeared, which opened up a lot of room for other systems. One of the biggest winners seems to be Courion, which suddenly got a shining example of why buying a suite from one of the big boys doesn't necessarily mean that you have a stronger support and continued development track in front of you.
Other big changes are IBM TIM 5.1, Microsoft Forefront and Oracle 11g.
TIM 5.1 meant that IBM got substantially improved role management, access recertification and group management. I largely think that the features are well implemented, but they really don't have the depth that some of the freestanding role management tools have (e.g. Oracle Identity Analytics). Martin Kuppinger at Kuppinger Cole wrote an interesting posting about TIM 5.1 in his very good blog.
Microsoft Forefront really means that ILM stops being a glorified metadirectory engine and takes the step into being a proper provisioning platform. If I were in a Microsoft-only shop and had a business that was trying to deploy ten different SharePoint portals (don't they all these days?) I would clearly consider taking a deeper look at the product.
Oracle 11g has a lot of nifty new features that I have been talking about in various posts.
Overall I think that the wave still gives a good overview of the IAM marketspace and I am really looking forward to the 2010 version of the Forrester IAM wave.
(Full disclosure: I have an immediate family member who works for Forrester.)
Monday, September 6, 2010
OIM Howto: Target system group memberships through OIM groups and access policies
In OIM there are often multiple ways to implement the same functionality.
One such case is target system group memberships. In Leverage standard connector group management I described how to leverage the functionality provided by the OIM AD connector to manage AD group memberships. You can also combine that same connector functionality with the OIM rules, groups and access policies framework to manage group memberships:
- Create a rule that adds users to an OIM group under certain circumstances (e.g. user location is "New York" or cost center is 2387)
- Add an access policy to that group that provisions the AD user object to the user with the group child form row set to give out the appropriate AD group
You can give a specific user more than one AD group through this strategy, as the access policy evaluation engine basically adds the union of all child form rows to the process form of the access policy with the highest priority. Where you do run into trouble is if the same AD group membership is given to the same user by more than one access policy. If this happens the second group membership add will result in an error.
Taking the route over OIM groups and access policies has the advantage of making things clearer for administrators as well as auditors. It makes it possible to use certain out-of-the-box OIM reports that cover OIM group memberships as proxies for AD group membership reports, which certainly is helpful in certain situations.
OIM Howto: One resource object per target system group
In most cases of target system group management you need to manage a large number of different groups, but sometimes you only need to handle a handful of groups. This commonly happens if the primary purpose of the OIM system is to manage some specific target system that actually uses groups on an LDAP server (often AD) to do fine-, medium- or coarse-grained authorization. In some cases access to an application may be granted by an AD group membership (commonly used by portal software such as Plumtree).
In these cases it may be appropriate to create an independent resource object for each target system group. There are some substantial advantages to this approach:
- In the user resource view an administrator will clearly see what target system group or application the user has access to
- Attestation works more cleanly
- Out of the box reports work better
There is also nothing that stops you from doing a "mix and match" approach where some AD groups are represented as independent resource objects and others are grouped under a general "Add AD group" resource object.
The implementation basically follows the steps in Support for request based OIM group memberships, except that you will not need an object form as the group name is reflected in the resource object itself.
Thursday, September 2, 2010
The downside of OIM resource object proliferation
The basic function of a resource object in OIM is to represent access for a specific user to a specific system. In many OIM architectures you choose to leverage the resource object to represent all kinds of entities in order to make the entity requestable in the request interface or attestable in the attestation framework.
Examples of entities that can be modelled as resource objects are:
- AD group memberships (adds as well as removals)
- Updates to process forms with approval workflow (with triggered tasks or with an entity adapter)
- Contractor extensions
- OIM group memberships
One problem you will get if you have a lot of "transaction oriented resource objects" is that the user resource view can get drowned in objects, making it hard for the OIM administrators to find the resource object instances that they really are interested in. Let's say that you have a user that has worked for the company for five years and has twenty AD group membership adds, five removals and ten resource object instances that represent user data updates. This user will have thirty-five extra resource object instances in his resource object instance view.
You can of course argue that the state changes and group memberships should be properly recorded in the user's resource object instance view (it is not a bug, it is a feature!). If your customer insists that it looks ugly and limits administrator productivity you can simply change the design to have the resource object raised against a dummy organisation or user. Simply add the real target user login to the object and process form and the problem is solved.
Well, of course excluding training the end users about the fact that they need to pick "AD group user" instead of "John Smith" as the user when they want to request an AD group membership for John Smith. Depending on how big your user population with request generation privileges is, this may or may not be a problem.
Wednesday, September 1, 2010
OIM 11g: Approval Workflow Orchestration with BPEL
In OIM pre 11g you were basically forced to implement any decently complex approval workflow in Java code as the built-in workflow editor simply didn't support more complex workflows. Coding the approval workflow was not a big problem if you had basic Java programming skills but it did mean that the person doing the approval configuration had to be a programmer.
Implementing in code also meant that the implementation really lacked agility. You need to test code more extensively than configuration and that means that your cycle time will be longer than if you could create approval workflows through configuration.
In OIM 11g approval workflow configuration is done through BPEL. This means that you can get a business analyst with BPEL skills to do most, if perhaps not all, of the approval configuration, and there are many more people with BPEL skills than with OIM approval API skills available on the market.
It is clear that this was a long overdue improvement of OIM that will help customers both to have quicker and less painful implementations as well as improving the maintainability of OIM.
One thought that kind of strikes you is that this may be the first step to move more and more of the workflows in OIM from the mysterious objects in the design console and the Java API code into the world of BPEL. Interesting concept isn't it?
Saturday, August 28, 2010
OIM Howto: Cascading user form changes using triggers
In the post Cascading user form changes with approval I suggested triggering the approval through an entity adapter. I got a very good question about this post.
The conventional approach for cascading user form changes down to process forms is to enter a task name in the Lookup.USR_PROCESS_TRIGGERS lookup table and then create a task in the provisioning process with the same task name. This task then writes the data to the process form. The question was why not simply let this task trigger the request creation and avoid the whole complicated business with the entity adapter.
The answer is that the process trigger approach in many cases works fine. If you only have one resource object, or a small number of resource objects, where you need this functionality the process trigger approach works great. If you on the other hand have many resource objects that need this function, things get a bit more complex if you want to stay with the process trigger approach.
One option is of course to implement the task in each resource object. That works fine but will cost you performance as each task initiation takes considerable time. You will also have more functionality to maintain, although that really isn't a big issue as the logic is contained in the Java methods and you can keep a single copy that is shared between the resource objects. One advantage of this approach is that unless the user has been provisioned with the resource the task will never fire, so you don't have to write logic that checks which resource objects have been provisioned to the user.
The other option is to manage the update of several resource objects from a single task by using the APIs to check whether the user has been provisioned with the other resources or not and take appropriate action. This approach will make your code a bit more complex, and in general it is also not advisable to let a task in one resource object impact other, totally separate resource objects as this tends to be confusing (a sort of hidden code side effect).
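To make the second option a bit more concrete, here is a rough sketch of how the single trigger task could check which resource objects the user actually has before touching any process forms. It assumes the OIM 9.x API (tcUserOperationsIntf) and the usual "Objects.Name" result set column; the resource object names and the helper structure are just illustrations, not code lifted from any particular implementation.

import java.util.HashSet;
import java.util.Set;

import Thor.API.Operations.tcUserOperationsIntf;
import Thor.API.tcResultSet;

public class CascadeHelper {

    // Collect the names of the resource objects already provisioned to the user.
    public static Set<String> getProvisionedObjectNames(
            tcUserOperationsIntf userOps, long userKey) throws Exception {
        Set<String> names = new HashSet<String>();
        tcResultSet objects = userOps.getObjects(userKey);
        for (int i = 0; i < objects.getRowCount(); i++) {
            objects.goToRow(i);
            names.add(objects.getStringValue("Objects.Name"));
        }
        return names;
    }

    // Called from the single process trigger task: only touch the resource
    // objects that the user has actually been provisioned with.
    public static void cascade(tcUserOperationsIntf userOps, long userKey,
            String newValue) throws Exception {
        Set<String> provisioned = getProvisionedObjectNames(userOps, userKey);
        if (provisioned.contains("AD User")) {
            // update the AD process form here, e.g. via tcFormInstanceOperationsIntf
        }
        if (provisioned.contains("Exchange Mailbox")) {
            // update the Exchange process form here
        }
    }
}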
Friday, August 27, 2010
OIM Howto: Support for request based OIM group memberships
The normal OIM group management interface is oriented towards an administrator. In many cases it can be useful to be able to support request based OIM group memberships. To do this you basically follow the same steps as documented in Leverage standard connector group management:
- Create a new RO called "OIM group membership"
- Add an object form that lets the user indicate what OIM group they would like to become a member of.
- Add a process form and data sink or prepop from the object form
- Add approval process (if needed)
- Add a provisioning process that basically calls a task that calls the addMemberUser method in tcGroupOperationsIntf.
There are a couple of different options for the object form in step two and which approach you choose largely depends on your requirements.
One is to use a drop down backed by a lookup table. The lookup table could either be populated manually or, as P.K. suggests in a recent thread on the OTN discussion forum, you could create a scheduled task and use the APIs to auto populate the lookup with the OIM groups. If you go down that path you may want to include logic that excludes certain OIM groups, e.g. system administrators, or just takes a subset of groups, e.g. all OIM groups whose names start with adGroups.
Another option is to use a child form, which would support requests for multiple groups in a single request. If you go for this option you have to add the support on the process form as well and your provisioning logic will be slightly more complex.
The target system net result is identical to the approach in Leverage standard connector group management but you can argue that this is a cleaner approach that better leverages the standard OIM functionality. It also leverages the OIM group admin user interface, which makes it clearer what AD groups a specific user has access to.
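To make the last step in the list above a bit more concrete, here is a rough sketch of the adapter logic behind that provisioning task. It assumes the OIM 9.x API; the result set column names ("Groups.Group Name", "Groups.Key") and the idea that the requested group name comes in from the process form are assumptions you should verify against your own environment.

import java.util.HashMap;
import java.util.Map;

import Thor.API.Operations.tcGroupOperationsIntf;
import Thor.API.tcResultSet;

public class OimGroupMembershipTasks {

    // Returns a response code that the process task can map to Completed/Rejected.
    public String addUserToOimGroup(tcGroupOperationsIntf groupOps,
            long userKey, String groupName) {
        try {
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Group Name", groupName);
            tcResultSet groups = groupOps.findGroups(filter);
            if (groups.getRowCount() != 1) {
                return "GROUP_NOT_FOUND";
            }
            groups.goToRow(0);
            long groupKey = groups.getLongValue("Groups.Key");
            groupOps.addMemberUser(groupKey, userKey);
            return "SUCCESS";
        } catch (Exception e) {
            return "ADD_MEMBER_FAILED";
        }
    }
}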
Thursday, August 26, 2010
OIM 11g: Request management
OIM has always had support for request based provisioning but the OIM request model is strongly connected to resource objects. This works great if you want to request something that natively, out of the box, is a resource object, e.g. an AD account, but works less well if you need to be able to support requests for more granular things like attributes on a process form or target system roles on a child form connected to the process form.
There are a number of ways to work around this problem but none of these approaches is entirely problem free and most require a lot of implementation work:
- Wrap the entity in a custom resource object (example AD group memberships)
- Wrap the entity in a custom resource object and leverage OIM group and Access Policy framework
- Create a custom menu item and do a custom request workflow
- Create a totally custom request interface and connect to OIM using the APIs. Potentially use web services as a communication channel
In 11g there is a new request framework that looks very promising and should hopefully mean that you no longer need to write custom code as soon as you need to support requests for anything outside of the base resource objects. This will make OIM implementations that include decently advanced requirements around requests substantially cheaper and faster.
If you look at the competition it is clearly a weak spot for OIM. IBM TIM has had a framework for handling application roles/groups (they call it "access") since 5.0 so OIM clearly needed to catch up on this feature. The OIM framework looks more flexible so if the feature delivers on its promises it could be a strong advantage for OIM.
Tuesday, August 24, 2010
OIM Howto: Cascading user form changes with approval
The OIM user self service function makes it possible for the user to update attributes on their own user form. This of course makes it possible to trigger provisioning events based on the updates, but in many cases you need an approval workflow to ensure that users do not give themselves inappropriate privileges. How do you do that?
Let's say you have a field called "primary role" on the user form that the user can update. On update of this field you want an approval process to be fired and if the change is approved "something" should happen through a provisioning task.
The first question is how do you detect the change to the user form? This can be done through an entity adapter set on post update on the user. This adapter will be fired on any update to the user form so you need to add an additional, invisible field to the user form called "primary role old". The first thing you do in the adapter is to check if "primary role" and "primary role old" are the same. If so, no change has been made to this field and you do nothing. If not, you fire off the request. At the end you update "primary role old" to be the same as "primary role". This will trigger a second round of the entity adapter check but as the two attributes now are the same nothing will happen.
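A minimal sketch of that adapter logic could look something like the following. The method signature is just an illustration of how the mapped adapter variables could be passed in, and the actual request creation is left as a placeholder since the createRequest details are covered by the OTN thread referenced a couple of paragraphs down.

public class PrimaryRoleChangeAdapter {

    // Wired as a post update entity adapter on the Users entity.
    public String checkPrimaryRoleChange(String primaryRole,
            String primaryRoleOld, String userLogin) {
        if (primaryRole == null || primaryRole.equals(primaryRoleOld)) {
            // Nothing changed, or this is the second firing after
            // "primary role old" has been synced - do nothing.
            return "NO_CHANGE";
        }
        // The field really changed: fire off the request for the custom
        // resource object here, e.g. via tcRequestOperationsIntf (placeholder).
        // createPrimaryRoleRequest(userLogin, primaryRole);

        // Finally make sure "primary role old" gets set to the new value,
        // either through the adapter's return value mapping or through the
        // user APIs, so that the second adapter firing becomes a no-op.
        return primaryRole;
    }
}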
Next you need to create a custom resource object with the associated object form, process form, approval process and provisioning process.
Firing off the request for your shiny new resource object is done using the createRequest API from the tcRequestOperationsIntf class. The details of how to do this can be found in this OTN discussion on creating requests.
Now the update to the user form will create a request that is approved (or denied) by the appropriate parties and your users get provisioned with whatever you want to put in your provisioning process. Elegant isn't it?
If you have more than one field on the user form that needs this treatment I strongly recommend that you use a single entity adapter instead of one adapter per field, as entity adapters are expensive to instantiate and your user updates will get very, very slow if you have 10+ entity adapters with each one looking at a specific attribute.
OIM 11g: X2 is finally here
Back in the summer of 2005 I got trained on a product called Xellerate from a company called Thor Technologies. I really liked some of the features but a lot of the GUI felt distinctly old and Swing really never was a good GUI framework. No worries mate, said the nice company representative, X2 will be here soon and it will come with a brand new GUI. We even got to see some screenshots that looked quite nice.
As you all know Thor got bought by Oracle and the Oracleization process that turned Xellerate into OIM took a couple of years. Funnily enough the screenshots for OIM 11g are actually very similar to the screenshots that were shown to me in that hot conference room in London way back in the summer of 2005.
Historically Oracle has often promised very interesting features that were delivered but didn't really have enough functional depth to be really useful until a couple of versions down the line (ORM integration, SPML support and generic connectors are a few examples). Many of the new features have been talked about for many years so hopefully this won't be the case this time.
The new OIM version looks really good with plenty of really strong features. A good feature overview can be found in A Primer on OIM 11g and you can also visit my, hopefully over time growing, list of in-depth looks at the new features:
Sunday, August 22, 2010
Inappropriate network access is a material weakness?
I recently found a very interesting KPMG audit findings report on FEMA, the Federal Emergency Management Agency. What makes this report interesting is not so much the audit target as the content of the report.
What did KPMG find during their audit?
During our audit engagement, we noted certain matters in the areas of security management, access controls, configuration management, and contingency planning with respect to FEMA’s financial systems information technology (IT) general controls which we believe contribute to a DHS-level significant deficiency that is considered a material weakness in IT controls and financial system functionality. These matters are described in the IT General Control and Financial System Functionality Findings by Audit Area section of this letter.
This clearly sounds interesting. Now let's look in the "IT General Control and Financial System Functionality Findings", in the access control section, and see what it says.
Password, security patch management, and configuration deficiencies were identified during the vulnerability assessment on hosts supporting the key financial applications and general support systems;
Core IFMIS, G&T IFMIS, NEMIS, and PARS application and/or database accounts, network, and remote user accounts were not periodically reviewed for appropriateness, resulting in inappropriate authorizations and excessive user access privileges. For G&T IFMIS, we determined that recertification of user accounts had not been conducted since the application was implemented at FEMA in FY 2007;
Financial application, network, and remote user accounts were not disabled or removed promptly upon personnel termination;
Initial and modified access granted to Core and G&T IFMIS financial application and/or database, network, and remote users was not properly documented and authorized.
What does this really mean? A "material weakness" is something that in the long run could lead to a financial misstatement occurring. If you are a US based public company this would be very bad as, post SOX, that could lead to your CFO having to go to prison. CFOs generally don't like prison, not even Club Fed, so they tend to be motivated to have "material weaknesses" fixed as soon as possible.
Usually the CFO and the auditors will give you some respite if you show signs of material progress towards the goal but if you totally ignore the problem they will not be pleased.
The first thing that struck me when reading the report is that there seems to be a shift from "random data sampling auditing" to "auditing of the process".
Traditionally IT auditing has largely been done the same way as traditional financial auditing. When I was in college I helped out my student union by serving as an "amateur auditor" for the different societies that were run by the student union. These were mostly things like "the society that arranges parties" and "the other slightly different society that also arranges parties". In many cases the societies were better at arranging parties than keeping books, so my job was to try to get the treasurers to at least keep some kind of financial records.
The audit process was quite simple. First you checked that the general ledger exists and that there are transaction records connected to the general ledger (hundreds of receipts and income records in a shoebox does not count). Secondly you picked five to ten transactions at random and checked if the transactions sounded reasonable, e.g. the beer that was bought was reasonably priced and it looked like most of it was sold to the students after a reasonable length of time (drunk by the party association members themselves does not count).
IT auditing has up until now largely followed the same pattern. First the very high level processes are checked for existence (i.e. a process for how to give out accounts to new employees exists) and then a number of provisioning events and termination events are controlled in detail. Even if your process coverage is really bad and you have a lot of transactions that totally bypass your "official" processes, you usually won't be caught because in most cases the auditing is done based on events initiated by the trusted source, i.e. your HR system, and those events therefore followed your official processes.
If you look at the findings it is clear that KPMG looked substantially deeper at the core business processes such as initial provisioning, access level updates and termination. They are also saying that a number of processes, e.g. access recertification, simply are mandatory and must be performed.
In my next postings I will take a little closer look at each issue found by KPMG and talk a bit about how to solve the issues that they point out.
Thursday, August 19, 2010
[OIM vs TIM] Physical deployment architecture
Between 2006 and 2008 I mostly did OIM implementations but in December of 2008 I switched over to IBM TIM as I switched jobs and my new employer is an IBM shop.
Changing from one security stack to another was a little bit like a musician switching from playing guitar to keyboards. Many of the basic concepts are the same but it did take a while to figure out how you actually do things in the new environment. Functionally OIM and TIM are very similar. There are very few business processes that you can support in one product and not in the other. On the other hand there are a number of things where the architecture or the implementation hinders or helps you substantially compared to the other product.
One example where OIM and TIM have different approaches is deployment architecture.
OIM uses a very standard physical deployment architecture with webserver, appserver, application and database. Most of the business logic configuration gets implemented by configuration changes and Java extensions deployed on the application server.
TIM uses the same basic structure but splits the data storage layer between a database and an LDAP server. Most of the business logic implementation is done inside of TIM either by straight configuration or by adding Javascript "scripts" into hooks in the GUI.
There are advantages as well as disadvantages with the TIM approach. The biggest disadvantage is that there is another critical piece of infrastructure that you need to support. As the LDAP server also needs a db you may end up having to support an Oracle DB and a DB2 instance for the Tivoli Directory Server. Not fun.
Aside from the support issue the LDAP server means that you end up with a large number of servers in an enterprise grade, no single point of failure install. As it generally isn't advisable to run a DB2 and an Oracle DB on the same physical host due to memory footprint you suddenly need four data layer hosts. High hardware costs, lots of energy and cooling and many servers to patch.
The problem of server sprawl is further increased by the fact that you really need to make your final testing environment as similar as you can afford to your production environment which means that it also should have full high availability. There you have another four servers just for the storage layer.
The advantage of the LDAP approach is that the users and their base data are easily accessible through LDAP calls, which makes certain business processes such as "warn manager about contractor that is about to expire" very easy to implement. It also means that you can peek into the user information using an LDAP browser instead of a SQL client, which may or may not be nicer depending on your personal preferences.
Sunday, August 15, 2010
In defense of OIM IT resources
In OIM Resource Objects, provisioning processes and connectors and IT resources I discussed the different objects that make up an OIM system. One of the objects that isn't strictly necessary is the IT resource.
So if it isn't necessary why does it exist? The simple answer is that it is a very convenient place to store environment dependent information.
In most OIM projects you have a number of OIM environments. You have at least one dev, one integration and one production environment. In some cases you may have a number of dev environments or even one dev per developer, perhaps a UAT environment and a training environment. At a minimum the dev and test environments have separate target systems and you may even have a separate set of target systems for each environment.
You definitely don't want to risk provisioning to, or even worse deprovisioning from, production target systems when you are doing dev or testing. You also don't want to move dev target system environment configuration into production as part of a code drop. How do you solve this problem?
The most simplistic solution would be to store the target system information in the source code and change it when you promote the code to the next level. Anyone that has ever done that knows that it is a certain path to pain and suffering. In OIM you can actually place environment configuration information in all kinds of interesting and exciting places. It can live in the Java source code, in adapters and in provisioning tasks. There are actually very few limits on what kind of pain a naive and inexperienced OIM developer can inflict on the poor souls that have to maintain the system.
The better option is of course to externalize and centralize the configuration into some kind of configuration repository. The standard OIM repository for environmental information is the IT resource and in many cases it is a good choice. It is easily available for reference and update in the design client. It is quite flexible and accessible by the OIM admins.
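As an illustration, reading the parameters of an IT resource from your provisioning code is only a handful of lines, assuming the OIM 9.x API. The result set column names below are the ones I have seen in 9.x installs, so treat them as assumptions and check them against your own version.

import java.util.HashMap;
import java.util.Map;

import Thor.API.Operations.tcITResourceInstanceOperationsIntf;
import Thor.API.tcResultSet;

public class ItResourceConfig {

    // Returns the IT resource parameters as a simple name/value map.
    public static Map<String, String> getParameters(
            tcITResourceInstanceOperationsIntf itOps, long itResourceKey)
            throws Exception {
        Map<String, String> params = new HashMap<String, String>();
        tcResultSet rs = itOps.getITResourceInstanceParameters(itResourceKey);
        for (int i = 0; i < rs.getRowCount(); i++) {
            rs.goToRow(i);
            params.put(rs.getStringValue("IT Resources Type Parameter.Name"),
                       rs.getStringValue("IT Resources Type Parameter Value.Value"));
        }
        return params;  // e.g. params.get("Server Address"), params.get("Admin Login")
    }
}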
Java does offer you some good alternatives if you prefer physical configuration files. You can of course just use the FileReader class and write your own parser, but I personally prefer the Java Properties framework.
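For completeness, a minimal sketch of the Properties approach; the file path and property keys are of course just examples.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class FileConfig {

    // Load a standard key=value properties file from disk.
    public static Properties load(String path) throws IOException {
        Properties props = new Properties();
        FileInputStream in = new FileInputStream(path);
        try {
            props.load(in);
        } finally {
            in.close();
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties config = load("/opt/oim/conf/ad-connector.properties");
        String host = config.getProperty("ad.host");
        String port = config.getProperty("ad.port", "636");  // with a default value
        System.out.println("Connecting to " + host + ":" + port);
    }
}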
The main disadvantage of configuration files is when you are running in a cluster. I don't know how many times over the years I have spent considerable amounts of time debugging issues just to discover that a setting or a configuration file wasn't updated on one of the cluster member servers.
Wednesday, August 11, 2010
The primary limitation of OIM access policies
In identity and access management there are a few paradigms that you see implemented in most products on the market. One such paradigm is the "rule triggered on user form attribute" -> "group membership" -> "resource/service provisioned". The different pieces are called slightly different things but the general concept is the same.
In OIM this approach is supported in the form of "a rule puts the user in an OIM group, which triggers an access policy, which gives the user a resource object configured in a certain way, and the RO finally gives the user access on the target system". This approach works great as long as the user can only be given zero or one instances of the RO. If the user should be able to be given more than one RO instance, access policies simply don't work. Instead of provisioning multiple ROs, the AP with the lowest (or potentially highest) priority will simply set the content of the process form.
This is of course one of these architectural decisions that you can argue for and against but it does mean that standard OIM access policies are of limited value in many situations.
How do you overcome this limitation? The primary method is to simply write your own access policy framework and base it on entity adapters either based on the user form or on the USG (group) table. The main disadvantage of creating your own AP framework is the classical balance between ease of implementation and ease of configuration.
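Just to illustrate the shape of such a home grown framework, here is a very rough skeleton of an entity adapter based rule evaluation. The provisioning call itself is left as a placeholder; the point is that because you provision the resource objects directly from your own code you are no longer limited to zero or one instances per RO. The attribute names and rules are made up for the example.

public class CustomAccessPolicyAdapter {

    // Wired as an entity adapter on post insert/update of the user.
    public String evaluate(long userKey, String location, String costCenter) {
        // Each rule decides on one RO instance. Because the provisioning is
        // done directly (e.g. by resolving the object key and calling the
        // provisioning APIs) the same RO can be handed out more than once.
        if ("New York".equals(location)) {
            // provision(userKey, "AD group membership", "NY-Office-Users");   // placeholder
        }
        if ("2387".equals(costCenter)) {
            // provision(userKey, "AD group membership", "Finance-Reporting"); // placeholder
        }
        return "EVALUATED";
    }
}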
Monday, August 9, 2010
How I learned to stop worrying and love the sniffer
Once upon a time one used sockets to speak to the network. You basically manually pushed your little bits out on the network and you could almost physically feel them sail away into the void. Today I tend to use high level libs and it is often not trivial to figure out what calling the method createObjectOnRemoteSystem actually results in. Logs are good but sniffing the network is sometimes the best way to figure out what is really going on. On a large number of occasions a good network sniffer has been the difference between being totally stuck and solving a problem.
My favorite sniffer is Wireshark (formerly known as Ethereal). This sniffer is free and has a very good protocol analyzer while still giving you convenient access to the raw bits. The user interface may take some time to get used to so I thought I should write up a quick introduction on how to do some simple tasks.
Let me use an actual incident as an example on how to use a sniffer to find the root cause of a service outage in a complex environment.
The system outage first showed up in an application that basically provides an interface that translates web service calls into LDAP queries against an AD database. The application suddenly started failing, complaining in the logs that it couldn't bind to the AD server.
First step was to check that you could bind to the AD server that was the primary login controller for the application server that hosted the service. That worked fine.
Second step was to look in the logs on the AD server to see if there were any entries about the failed binds. Unfortunately everything looked fine.
Now things looked a bit confusing. Was the problem that the application had gone totally off the rails? I decided to install Wireshark and see if the app was at all communicating with the DC.
Sniffing traffic with Wireshark is easy:
- Start up Wireshark
- Pick Capture-> Interfaces
- Pick the Network Interface Card that you want to listen to and press Start.
- Generate the traffic you want to listen to
- Press Capture -> Stop when you are done
Analyzing can be a bit challenging. This is especially true if you are on a network where there is a lot of traffic, as your traffic will simply drown in the background noise. The trick here is to figure out a good filter that lets you find your signal. Below are a couple of filters that I have found useful over the years.
- "tcp.port==389" gets you all tcp traffic on that port (LDAP in this case)
- "ip.host==192.168.1.30" gets you all ip based traffic to and from that specific host
In the lower part of the screen you can see the protocol layer stack. Depending on what you are doing you may be more interested in the application layer or the network layer, so you can expand or collapse the different layers by clicking on the + signs on the left.
In this specific example I discovered that my app was talking to a completely different AD domain controller. Once the DC was taken down my application rebound to another DC and suddenly started working again.
There are many more nifty tricks you can use in Wireshark but I think this is enough for one posting. Stay tuned for more! (cue "We'll Meet Again")
Saturday, August 7, 2010
Externalized authorization and Xacml
Back in 2001-2003 I was working for a business system vendor and we were looking at how to integrate the technical infrastructure of the business system with the new platform technology that was starting to mature. We started out by creating a Corba based architecture that was later converted to J2EE. On the security side we integrated the authentication engine with LDAP/AD. We looked at having authorization objects externalized but determined that there probably wasn’t any market pressure for that feature at the time.
The years passed and the web access managers became the standard for coarse grained web authorization. Most access managers deal in URL access which is good if you just want to protect an application or a part of the application but if you want to say “users in group A can only update transactions that originated in the central European region and whose total value is less than $50 000” you really are in trouble.
XACML and attribute based access control does offer a promise to give you this ability. It is of course nothing revolutionary as you can implement exactly the same thing in your favorite programming language, but there are situations where having the authorization logic embedded within the business logic may not be so good.
One example from the pharmaceuticals world is that the FDA is putting more and more pressure on companies to deliver data about not only how their drugs behave during trials but also how the drugs behave in the commercial patient population. As competition increases between different drug makers and makers of generic drugs it also becomes increasingly important to have a close relationship with your patients and doctors. The most common solution to this problem is to create a registry that basically is an online electronic health record system where patients and their doctors can record how the therapy is progressing.
One important factor here is of course data privacy. Health information is simply highly sensitive so you don't want to grant inappropriate access to this information. Defining what is appropriate and inappropriate is unfortunately slightly more complex. In most cases the patient and the treating doctor should have full access. In many cases other doctors in the same practice should also have access, along with nurses and other health care professionals. In some cases patients are treated in multiple practices or may switch practices temporarily or permanently.
You could of course implement all of this functionality in code, but if you ever need to prove that only the appropriate users have access to the patient's information the auditors may not be happy with having to look through thousands of lines of code. Also, if you run a global system you may run into requirements where you have to handle people from different jurisdictions differently, as the German Bundesdatenschutzgesetz may require special rules for German citizens.
Xacml clearly offers a very attractive way to externalize and document the authorization logic in a format that is clearly understandable by auditors and other interested parties.
Xacml in itself does not solve the whole problem but it is an important puzzle piece. Next posting will talk more about the other pieces.
Thursday, August 5, 2010
OIM resource objects, provisioning processes, connectors and IT Resources
When you first start working with OIM there are all of these strange new concepts that are really hard to grasp. Even worse is to try to figure out how everything hangs together. Having tried to explain this a number of times to different people over the years I wanted to try to write down a very basic guide to some of the core objects in OIM.
Let's start with the resource object (often called RO). An RO is in its most basic form a virtual representation of an account on a target system. If an OIM user has an account on the target system the user has an RO instance associated with it.
The most basic process that you do with ROs is to provision the account to a target system. The provisioning is handled by a provisioning process. The provisioning process usually consists of a number of provisioning tasks that fire adapters that in turn call code, often Java code, that actually does the provisioning work.
In many cases the provisioning tasks need information about the target system, such as the login and password for the account that is used to run the provisioning process. This information is often kept in an IT resource and is referenced by the Java code. In many cases you keep a reference to the relevant IT resource on the process form or in the attribute section of a scheduled task. This makes it possible to have multiple physical target systems interacted with by a single resource object and also makes it clearer for an admin exactly what physical target system is being managed by a specific resource object instance or scheduled task.
A connector is a set of objects such as resource objects, provisioning processes and IT resources. It also includes the jars that contain the Java code that performs the provisioning process.
The out of the box connectors are often very complex but a simple custom connector may only consist of an RO, a provisioning process, an adapter that links the provisioning process to the provisioning logic and finally some provisioning logic in a jar. The IT Resource is not essential but it is very useful to avoid having to put system specific information straight into the provisioning process (or even the provisioning logic). One common mistake is to think that the IT Resource is much more important than it actually is.
Role based group memberships in OIM
As I recently discussed role driven automatic provisioning of target system roles on the Oracle IDM discussion board I thought it may be interesting to shine a little spotlight on this specific form of target system role management.
My addition to the thread was basically the "Role based group memberships in OIM" section of the AD and LDAP group management through OIM post.
In the discussion Oracle Quest made the excellent suggestion to use a combination of SQL, XSLT and Regex to create a very agile and very fully featured system for rules.
In some cases you might not need this much flexibility and a simple model where the rules are contained in lookups and the only real addition is support for wildcards may be sufficient. Basic implementation can be done through an entity adapter set on post insert on the user form.
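A sketch of what such a lookup driven rule table could look like is shown below, assuming a lookup (here called Lookup.ADGroup.Rules) where the code key is a pattern matched against a user attribute and the decode is the group to hand out. The lookup column names are the usual 9.x ones; verify them against your version, and note that the wildcard handling is deliberately naive.

import java.util.ArrayList;
import java.util.List;

import Thor.API.Operations.tcLookupOperationsIntf;
import Thor.API.tcResultSet;

public class LookupRuleEvaluator {

    private static final String CODE =
        "Lookup Definition.Lookup Code Information.Code Key";
    private static final String DECODE =
        "Lookup Definition.Lookup Code Information.Decode";

    // Returns the groups whose rule pattern matches the given attribute value.
    public static List<String> matchGroups(tcLookupOperationsIntf lookupOps,
            String lookupName, String attributeValue) throws Exception {
        List<String> groups = new ArrayList<String>();
        tcResultSet rules = lookupOps.getLookupValues(lookupName);
        for (int i = 0; i < rules.getRowCount(); i++) {
            rules.goToRow(i);
            String pattern = rules.getStringValue(CODE);
            // Translate a simple "*" wildcard into a regex; this does not
            // escape other regex characters, which is fine for a sketch.
            String regex = pattern.replace("*", ".*");
            if (attributeValue != null && attributeValue.matches(regex)) {
                groups.add(rules.getStringValue(DECODE));
            }
        }
        return groups;  // e.g. code key "Sales*" -> decode "AD-Sales-Staff"
    }
}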
Wednesday, August 4, 2010
Manage AD with JNDI demo tool
Provisioning and managing user objects in AD is one of the most basic functions in most provisioning implementations. Today the standard connectors have gotten quite good but sometimes you still have to implement some "missing" functionality yourself.
I created a small tool that demos how to manipulate AD user objects using JNDI. The tool supports unlocking, setting the description attribute and group memberships. You can run it through a bat script or by integrating it with your favorite IDM system.
AD user object management demo tool
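If you just want to see the underlying JNDI calls, here is a minimal, hedged sketch of the kind of operations the tool performs: unlocking an account by resetting lockoutTime, setting the description, and adding the user to a group. The hostname, bind account and DNs are placeholders.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.ModificationItem;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;

public class AdUserDemo {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "CN=svc-oim,OU=Service Accounts,DC=example,DC=com");
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        LdapContext ctx = new InitialLdapContext(env, null);

        String userDn = "CN=Joe Smith,OU=Users,DC=example,DC=com"; // placeholder

        // Setting lockoutTime to 0 unlocks the account; the description is set in the same call.
        ModificationItem[] userMods = new ModificationItem[] {
            new ModificationItem(DirContext.REPLACE_ATTRIBUTE, new BasicAttribute("lockoutTime", "0")),
            new ModificationItem(DirContext.REPLACE_ATTRIBUTE, new BasicAttribute("description", "Unlocked by helpdesk"))
        };
        ctx.modifyAttributes(userDn, userMods);

        // Group membership lives on the group object, so add the user's DN to the member attribute.
        String groupDn = "CN=Sales,OU=Groups,DC=example,DC=com"; // placeholder
        ModificationItem[] groupMods = new ModificationItem[] {
            new ModificationItem(DirContext.ADD_ATTRIBUTE, new BasicAttribute("member", userDn))
        };
        ctx.modifyAttributes(groupDn, groupMods);

        ctx.close();
    }
}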
OIM Howto: Request based group membership management
In OIM there is often more than one way to implement a certain requirement. Request based target system group membership is an area where there are at least half a dozen different ways to get the job done.
The process basically contains two steps:
- Creation of a custom resource object that facilitates the request and approval workflow
- Creation of a provisioning workflow that sets the group membership
The child table manipulation can be done either through the APIs (see below) or by driving target system group memberships through access policies.
The API that will let you do this is in the tcFormInstanceOperationsIntf class and the methods are called addProcessFormChildData and removeProcessFormChildData.
If, for example, you would like to support request based addition to AD groups and already have one of the later versions of the AD connector installed, you would need to do the following:
- Create a new RO called "AD group membership add"
- Add an object form that lets the user indicate what AD groups they would like to become a member of.
- Add a process form and data sink or prepop from the object form
- Add approval process (if needed)
- Add a provisioning process with a task that uses addProcessFormChildData to add the AD group identifier (the group name, if I remember correctly) to the AD group child table that is attached to the main AD resource object.
There are a couple of downsides to this approach that are worth mentioning. As you are using a single RO, the user's resource view will show a collection of "AD group membership add" resources. The name of the group is available on the process form, but it is not directly visible to an admin without an additional click. Likewise, if you use attestation (access recertification) the attestation events will be less than helpful for the certifier as they only see the resource name. The same goes for certain out of the box reports.
Code example for adding rows to a child table of a process form
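In case the linked code example is unavailable, here is a hedged sketch of the child table manipulation itself. The helper class is hypothetical, the tcFormInstanceOperationsIntf handle is assumed to be obtained by the caller (for example via getUtility in a scheduled task), and the way the child form key is located as well as the UD_ADUSRC_GROUPNAME column name are assumptions to verify against your OIM and AD connector versions.

import java.util.HashMap;
import java.util.Map;
import Thor.API.tcResultSet;
import Thor.API.Operations.tcFormInstanceOperationsIntf;

// Hypothetical helper for adding a row to the AD group child table of a process form.
public class AdGroupChildTableHelper {

    private final tcFormInstanceOperationsIntf formOps;

    public AdGroupChildTableHelper(tcFormInstanceOperationsIntf formOps) {
        this.formOps = formOps;
    }

    // Adds one group row to the AD group child table of the given process instance.
    public void addAdGroupRow(long processInstanceKey, String groupName) throws Exception {
        // Locate the child form (the AD group child table) attached to the process form.
        long parentFormKey = formOps.getProcessFormDefinitionKey(processInstanceKey);
        int parentFormVersion = formOps.getProcessFormVersion(processInstanceKey);
        tcResultSet children = formOps.getChildFormDefinition(parentFormKey, parentFormVersion);
        children.goToRow(0); // assumes the AD group table is the first (or only) child form
        long childFormKey = children.getLongValue("Structure Utility.Child Tables.Child Key");

        // The column name is an assumption; check the child table definition of your AD connector.
        Map<String, String> row = new HashMap<String, String>();
        row.put("UD_ADUSRC_GROUPNAME", groupName);
        formOps.addProcessFormChildData(childFormKey, processInstanceKey, row);
    }
}

Removing a membership works the same way with removeProcessFormChildData, which takes the child form key and the row key of the child record to delete.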
Tuesday, August 3, 2010
AD and LDAP group management through OIM
Provisioning systems are often initially brought in to provision the basic resources such as AD accounts, email and perhaps a basic ERP account. Once that functionality is in place it is common to start looking at handling group memberships in the target application. In some cases you then go on to manage not only the group memberships but also the groups themselves.
A very common example is groups in Active Directory and/or the corporate LDAP. I have written down some thoughts about how to best leverage OIM in this capacity.
Take a look and feel free to comment if you find the document useful:
AD and LDAP group management through OIM
OIM Howto: Add parameters to your scheduled tasks
In many cases you want to externalize configuration parameters from the actual code in order to avoid having to recompile and deploy every time you want to change something. In scheduled tasks the simplest way to do this is to define your parameters in the scheduled task and then read them from the code:
public void init() {
    try {
        // DATABASE_NAME is a String constant holding the name of the attribute
        // as it is defined on the scheduled task.
        String dbName = getAttribute(DATABASE_NAME);
    } catch (Exception e) {
        logger.error("Exception while initializing recon analyzer scheduled task " + e.getMessage());
    }
}
This will get you access to the scheduled task attribute called DATABASE_NAME.
A slightly more advanced variant of the same is to let the attribute be a reference to another OIM object. In many cases IT resources are useful places to store attributes.
// Read the IT resource name from the scheduled task attribute...
String dbName = getAttribute(DATABASE_NAME);
// ...and fetch the parameters of that IT resource as a Hashtable.
Hashtable itResourceAttr = tcUtilXellerateOperations.getITAssetProperties(super.getDataBase(), dbName.trim());
Now you have the parameters of the IT resource in the Hashtable.
You will have to import com.thortech.xl.util.adapters.tcUtilXellerateOperations, which is a very useful class full of nifty static methods.
Monday, August 2, 2010
OIM Howto: Create scheduled tasks
Scheduled tasks in OIM are used to run all kinds of recurring processes. They can also be used as convenient places to store little processes that need to be run on demand.
To create a scheduled task you need to do the following things:
1. Create a Java class that extends SchedulerBaseTask (see example below)
2. Write your business logic
3. Compile and jar
4. Place the jar in the ScheduleTask directory in your OIM install
5. Create a new scheduled task using the OIM developer console
6. Link the new class into your new scheduled task.
7. Done!
Example code for a scheduled task:
import com.thortech.xl.scheduler.tasks.SchedulerBaseTask;

public class ScheduledtaskExample extends SchedulerBaseTask {

    public void init() {
        // this method is run by the scheduler before execute()
    }

    public void execute() {
        // this method is executed by the scheduler
        runMyBusinessLogic();
    }

    private void runMyBusinessLogic() {
        // place your business logic here
    }

    public boolean stop() {
        // place logic that should run on a stop signal here
        return true; // a return value is needed for the class to compile
    }
}
Sunday, August 1, 2010
OIM Howto: Limit admin privileges for helpdesk
Q: I need to give the helpdesk limited admin privileges to perform level one admin tasks such as resetting passwords, unlocking accounts or enabling disabled users, but I don't want to give them the whole user management menu item. How do I do this?
A: The easiest way to implement this requirement is to create a custom menu item in the standard OIM admin web application. In this menu item you implement exactly the functionality that the helpdesk needs to do their job using the standard OIM GUI framework and the OIM APIs.
Implementing a custom menu item does require some knowledge of the web GUI framework that OIM is built upon, but once you master this skill it is fairly easy. A good starting point is the OIM GUI customization guide (for 9.1.0.1).
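To give an idea of the OIM API calls such a menu item would wrap, here is a hedged sketch of a helpdesk helper that unlocks and re-enables a user. The class is hypothetical, the tcUserOperationsIntf handle is assumed to come from the logged in session of the web application, and the exact method names should be verified against your OIM version.

import java.util.HashMap;
import java.util.Map;
import Thor.API.tcResultSet;
import Thor.API.Operations.tcUserOperationsIntf;

// Hypothetical helper behind a custom helpdesk menu item.
public class HelpdeskActions {

    private final tcUserOperationsIntf userOps;

    public HelpdeskActions(tcUserOperationsIntf userOps) {
        this.userOps = userOps;
    }

    // Unlocks and re-enables the user with the given OIM user ID.
    public void unlockAndEnable(String userId) throws Exception {
        Map<String, String> filter = new HashMap<String, String>();
        filter.put("Users.User ID", userId);
        tcResultSet users = userOps.findUsers(filter);
        if (users.getRowCount() != 1) {
            throw new Exception("Expected exactly one user for " + userId);
        }
        users.goToRow(0);
        long userKey = users.getLongValue("Users.Key");
        userOps.unlockUser(userKey);  // verify these method names against your OIM version
        userOps.enableUser(userKey);
    }
}

Keeping the logic in a small helper like this makes it easy to expose exactly the operations the helpdesk needs from the custom menu item and nothing more.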