Saturday, August 28, 2010

OIM Howto: Cascading user form changes using triggers

In the post Cascading user form changes with approval I suggest triggering the approval through an entity adapter. I got a very good question about that post.

The conventional approach for cascading user form changes down to process forms is to enter a task name in the Lookup.USR_PROCESS_TRIGGERS lookup table and then create a task with the same name in the provisioning process. This task then writes the data to the process form. The question was why not simply let this task trigger the request creation and avoid the whole complicated business with the entity adapter.
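For reference, the provisioning task behind such a process trigger typically just copies the changed value onto the process form. Below is a minimal sketch of the adapter method such a task might call; the class, method and form column names (copyUserAttributeToProcessForm, UD_MYRES_PRIMARY_ROLE) are illustrative placeholders, not part of any specific connector, and the tcFormInstanceOperationsIntf handle is assumed to be passed in as an adapter variable.

import java.util.HashMap;
import java.util.Map;

import Thor.API.Operations.tcFormInstanceOperationsIntf;

public class ProcessTriggerTasks {

   // writes the new user attribute value to a field on the process form
   public String copyUserAttributeToProcessForm(tcFormInstanceOperationsIntf formIntf,
                                                long processInstanceKey,
                                                String newValue) {
      try {
         Map<String, String> data = new HashMap<String, String>();
         // "UD_MYRES_PRIMARY_ROLE" is a hypothetical process form column name
         data.put("UD_MYRES_PRIMARY_ROLE", newValue);
         formIntf.setProcessFormData(processInstanceKey, data);
         return "SUCCESS";
      } catch (Exception e) {
         return "ERROR";
      }
   }
}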


The answer is that the process trigger approach works fine in many cases. If you only have one resource object, or a small number of them, that needs this functionality, the process trigger approach works great. If, on the other hand, you have many resource objects that need this function, things get a bit more complex if you want to stay with the process trigger approach.


One option is of course to implement the task in each resource object. That works fine but will cost you performance, as each task initiation takes considerable time. You will also have more functionality to maintain, although that really isn't a big issue as the logic is contained in the Java methods and you can keep a single copy that is shared between the resource objects. One advantage of this approach is that the task will never fire unless the user has been provisioned the resource, so you don't have to write logic that checks which resource objects have been provisioned to the user.


The other option is to manage the update of several resource objects from a single task, using the APIs to check whether the user has been provisioned with the other resources and take appropriate action. This makes your code a bit more complex, and in general it is not advisable to let a task in one resource object affect other, totally separate resource objects, as this tends to be confusing (a variant of a hidden code side effect).

Friday, August 27, 2010

OIM Howto: Support for request based OIM group memberships

The normal OIM group management interface is oriented towards an administrator. In many cases it can be useful to be able to support request based OIM group memberships. To do this you basically follow the same steps as documented in Leverage standard connector group management:
  1. Create a new RO called "OIM group membership" 
  2. Add an object form that lets the user indicate what OIM group they would like to become member of.
  3. Add a process form and data sink or prepop from the object form
  4. Add approval process (if needed)
  5. Add provisioning process that basically calls a task that calls the addMemberUser method in tcGroupOperationsIntf.
This in turn can be leveraged to do request based AD group memberships by attaching access policies to the groups that add rows to the AD group membership child form of the AD User object. This will support multiple groups as the child form rows are added cumulatively.
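The provisioning task in step 5 boils down to a small piece of Java. The sketch below shows roughly what such a method could look like, assuming the group name is passed in from the process form; the result set column labels follow the usual OIM conventions but should be verified against your version.

import java.util.HashMap;
import java.util.Map;

import Thor.API.Operations.tcGroupOperationsIntf;
import Thor.API.tcResultSet;

public class OimGroupMembershipTasks {

   public String addUserToOimGroup(tcGroupOperationsIntf groupIntf,
                                   String groupName,
                                   long userKey) {
      try {
         // find the OIM group that was requested on the object/process form
         Map<String, String> filter = new HashMap<String, String>();
         filter.put("Groups.Group Name", groupName);
         tcResultSet groups = groupIntf.findGroups(filter);
         if (groups.getRowCount() != 1) {
            return "GROUP_NOT_FOUND";
         }
         groups.goToRow(0);
         long groupKey = groups.getLongValue("Groups.Key");

         // add the user as a member of the group
         groupIntf.addMemberUser(groupKey, userKey);
         return "SUCCESS";
      } catch (Exception e) {
         return "ERROR";
      }
   }
}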

There are a couple of different options for the object form in step two and which approach you choose largely depends on your requirements.

One is to use a drop down backed by a lookup table. The lookup table could either be populated manually or, as P.K. suggests in a recent thread on the OTN discussion forum, you could create a scheduled task and use the APIs to auto populate the lookup with the OIM groups. If you go down that path you may want to include logic that excludes certain OIM groups, e.g. system administrators, or only takes a subset of groups, e.g. all OIM groups whose names start with adGroups.
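Here is a rough sketch of what such a scheduled task could look like. The lookup name "Lookup.OIM.Groups" and the adGroups prefix filter are illustrative assumptions, the API handles are assumed to come from the SchedulerBaseTask getUtility helper, and a real task would also need to handle entries that already exist in the lookup.

import java.util.HashMap;

import Thor.API.Operations.tcGroupOperationsIntf;
import Thor.API.Operations.tcLookupOperationsIntf;
import Thor.API.tcResultSet;
import com.thortech.xl.scheduler.tasks.SchedulerBaseTask;

public class PopulateGroupLookupTask extends SchedulerBaseTask {

   public void execute() {
      try {
         tcGroupOperationsIntf groupIntf =
            (tcGroupOperationsIntf) getUtility("Thor.API.Operations.tcGroupOperationsIntf");
         tcLookupOperationsIntf lookupIntf =
            (tcLookupOperationsIntf) getUtility("Thor.API.Operations.tcLookupOperationsIntf");

         // fetch all OIM groups
         tcResultSet groups = groupIntf.findGroups(new HashMap<String, String>());
         for (int i = 0; i < groups.getRowCount(); i++) {
            groups.goToRow(i);
            String groupName = groups.getStringValue("Groups.Group Name");

            // only expose groups intended for requests, skip the rest
            if (groupName.startsWith("adGroups")) {
               lookupIntf.addLookupValue("Lookup.OIM.Groups",
                                         groupName, groupName, "en", "US");
            }
         }
      } catch (Exception e) {
         // log and move on; a failed refresh should not break the scheduler
      }
   }
}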

Another option is to use a child form, which would support requests for multiple groups in a single request. If you go for this option you have to add the corresponding support on the process form as well, and your provisioning logic will be slightly more complex.
The net result on the target system is identical to the approach in Leverage standard connector group management, but you can argue that this is a cleaner approach that makes more use of the standard OIM functionality. It also leverages the OIM group admin user interface, which makes it clearer which AD groups a specific user has access to.

    Thursday, August 26, 2010

    OIM 11g: Request management

OIM has always had support for request based provisioning, but the OIM request model is strongly connected to resource objects. This works great if you want to request something that natively is a resource object out of the box, e.g. an AD account, but less well if you need to support requests for more granular things like attributes on a process form or target system roles on a child form connected to the process form.

There are a number of ways to work around this problem, but none of them is entirely problem free and most require a fair amount of implementation work:
1. Wrap the entity in a custom resource object (example: AD group memberships)
    2. Wrap the entity in a custom resource object and leverage OIM group and Access Policy framework
3. Create a custom menu item and do a custom request workflow
    4. Create a totally custom request interface and connect to OIM using the APIs. Potentially use web services as a communication channel
Options one and two require some OIM knowledge and a bit of Java prowess. Option three requires Java, OIM API skills, Spring and some basic GUI creation skills, and option four requires knowledge of some kind of web interface plus some understanding of the OIM APIs. Nothing extremely complicated, but it definitely requires more skill and time than simple configuration.

In 11g there is a new request framework that looks very promising and should hopefully mean that you no longer need to write custom code as soon as you need to support requests for anything outside of the base resource objects. This will make OIM implementations that include reasonably advanced request requirements substantially cheaper and faster.

If you look at the competition this is clearly a weak spot for OIM. IBM TIM has had a framework for handling application roles/groups (they call it "access") since 5.0, so OIM clearly needed to catch up on this feature. The OIM framework looks more flexible, so if the feature delivers on its promises it could be a strong advantage for OIM.

    Tuesday, August 24, 2010

    OIM Howto: Cascading user form changes with approval

The OIM user self service function makes it possible for users to update attributes on their own user form. This of course makes it possible to trigger provisioning events based on the updates, but in many cases you need an approval workflow to ensure that users do not give themselves inappropriate privileges. How do you do that?

Let's say you have a field called "primary role" on the user form that the user can update. On update of this field you want an approval process to fire, and if the change is approved "something" should happen through a provisioning task.

The first question is how do you detect the change to the user form? This can be done through an entity adapter set on post update on the user. This adapter will fire on any update to the user form, so you need to add an additional, invisible field to the user form called "primary role old". The first thing you do in the adapter is check whether "primary role" and "primary role old" are the same. If so, no change has been made to this field and you do nothing. If not, fire off the request. At the end, update "primary role old" to be the same as "primary role". This triggers a second pass through the entity adapter, but as the two attributes are now the same nothing will happen.
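To make the logic concrete, here is a minimal sketch of the Java method behind such an entity adapter. The field values are mapped in as adapter variables, and the actual request creation is left as a call to a hypothetical helper (see the next paragraphs for the API to use).

public class PrimaryRoleChangeAdapter {

   // primaryRole is the current value of the "primary role" field,
   // primaryRoleOld is the hidden "primary role old" field;
   // the return value is written back into "primary role old"
   public String handlePrimaryRoleChange(String primaryRole, String primaryRoleOld) {
      String current = primaryRole == null ? "" : primaryRole;
      String previous = primaryRoleOld == null ? "" : primaryRoleOld;

      if (current.equals(previous)) {
         // nothing changed (or this is the second pass after we synced the fields)
         return previous;
      }

      // value changed: fire off the request for the custom resource object.
      // createPrimaryRoleRequest is a hypothetical helper that wraps
      // tcRequestOperationsIntf as described in the OTN discussion below.
      // createPrimaryRoleRequest(current);

      // returning the new value updates "primary role old", which triggers
      // one more (harmless) pass through this adapter
      return current;
   }
}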

Next you need to create a custom resource object with the associated object form, process form, approval process and provisioning process.

Firing off the request for your shiny new resource object is done using the createRequest API from the tcRequestOperationsIntf class. The details of how to do this can be found in this OTN discussion on creating requests.

    Now the update to the user form will create a request that is approved (or denied) by the appropriate parties and your users get provisioned with whatever you want to put in your provisioning process. Elegant isn't it?

If you have more than one field on the user form that needs this treatment I strongly recommend that you use a single entity adapter instead of one adapter per field, as entity adapters are expensive to instantiate and your user updates will get very, very slow if you have 10+ entity adapters, each looking at a specific attribute.

    OIM 11g: X2 is finally here

Back in the summer of 2005 I got trained on a product called Xellerate from a company called Thor Technologies. I really liked some features, but a lot of the GUI felt distinctly old and Swing never really was a good GUI framework. No worries mate, said the nice company representative, X2 will be here soon and it will come with a brand new GUI. We even got to see some screenshots that looked quite nice.

As you all know Thor got bought by Oracle and the Oracleization process that turned Xellerate into OIM took a couple of years. Funnily enough the screenshots for OIM 11g are actually very similar to the screenshots that were shown to me in the hot conference room in London way back in the summer of 2005.

Historically Oracle has often promised very interesting features that were delivered but didn't have enough functional depth to be really useful until a couple of versions down the line (ORM integration, SPML support and generic connectors are a few examples). Many of the new features have been talked about for many years, so hopefully that won't be the case this time.

The new OIM version looks really good, with plenty of strong features. A good feature overview can be found in A Primer on OIM 11g, and you can also visit my hopefully growing list of in-depth looks at the new features:

    Sunday, August 22, 2010

    Inappropriate network access is a material weakness?

I recently found a very interesting KPMG audit findings report on FEMA, the Federal Emergency Management Agency. The reason this audit report is interesting is not so much the audit target as the content of the report.

    What did KPMG find during their audit?
    During our audit engagement, we noted certain matters in the areas of security management, access controls, configuration management, and contingency planning with respect to FEMA’s financial systems information technology (IT) general controls which we believe contribute to a DHS-level significant deficiency that is considered a material weakness in IT controls and financial system functionality. These matters are described in the IT General Control and Financial System Functionality Findings by Audit Area section of this letter.

This clearly sounds interesting. Now let's look at the access control section of the "IT General Control and Financial System Functionality Findings" and see what it says.
    Password, security patch management, and configuration deficiencies were identified during the vulnerability assessment on hosts supporting the key financial applications and general support systems; 
    Core IFMIS, G&T IFMIS, NEMIS, and PARS application and/or database accounts, network, and remote user accounts were not periodically reviewed for appropriateness, resulting in inappropriate authorizations and excessive user access privileges. For G&T IFMIS, we determined that recertification of user accounts had not been conducted since the application was implemented at FEMA in FY 2007; 
    Financial application, network, and remote user accounts were not disabled or removed promptly upon personnel termination; 
    Initial and modified access granted to Core and G&T IFMIS financial application and/or database, network, and remote users was not properly documented and authorized;
What does this really mean? A "material weakness" is something that in the long run could lead to a financial misstatement. If you are a US based public company this would be very bad, as post SOX it could lead to your CFO having to go to prison. CFOs generally don't like prison, not even Club Fed, so they tend to be motivated to have "material weaknesses" fixed as soon as possible.

    Usually the CFO and the auditors will give you some respite if you show signs of material progress towards the goal but if you totally ignore the problem they will not be pleased.

    The first thing that struck me when reading the report is that there seems to be a shift from "random data sampling auditing" to "auditing of the process".

Traditionally IT auditing has largely been done the same way as traditional financial auditing. When I was in college I helped out my student union by serving as an "amateur auditor" for the different societies that were run by the student union. These were mostly things like "the society that arranges parties" and "the other slightly different society that also arranges parties". In many cases the societies were better at arranging parties than at keeping books, so my job was to try to get the treasurers to at least keep some kind of financial records.

The audit process was quite simple. First you checked that the general ledger existed and that there were transaction records connected to it (hundreds of receipts and income records in a shoebox does not count). Secondly you picked five to ten transactions at random and checked whether they sounded reasonable, i.e. the beer that was bought was reasonably priced and it looked like most of it was sold to the students within a reasonable length of time (drunk by the party society members themselves does not count).

IT auditing has up until now largely followed the same pattern. First the very high level processes are checked for existence (e.g. a process for how to give out accounts to new employees exists) and then a number of provisioning and termination events are checked in detail. Even if your process coverage is really bad and you have a lot of transactions that totally bypass your "official" processes, you usually won't be caught, because in most cases the auditing is based on events initiated by the trusted source, i.e. your HR system, which therefore followed your official processes.

If you look at the findings it is clear that KPMG looked substantially deeper at the core business processes such as initial provisioning, access level updates and termination. They are also saying that a number of processes, e.g. access recertification, are simply mandatory and must be performed.

    In my next postings I will take a little closer look at each issue found by KPMG and talk a bit about how to solve the issues that they point out.

    Thursday, August 19, 2010

    [OIM vs TIM] Physical deployment architecture

    Between 2006 and 2008 I mostly did OIM implementations but in December of 2008 I switched over to IBM TIM as I switched jobs and my new employer is an IBM shop.

Changing from one security stack to another was a little bit like a musician switching from guitar to keyboards. Many of the basic concepts are the same, but it did take a while to figure out how you actually do things in the new environment. Functionally OIM and TIM are very similar. There are very few business processes that you can support in one product and not in the other. On the other hand there are a number of areas where the architecture or the implementation hinders or helps you substantially compared to the other product.

    One example where OIM and TIM have different approaches is deployment architecture.

OIM uses a very standard physical deployment architecture with webserver, appserver, application and database. Most of the business logic is implemented through configuration changes and Java extensions deployed on the application server.

    TIM uses the same basic structure but splits the data storage layer between a database and an LDAP server. Most of the business logic implementation is done inside of TIM either by straight configuration or by adding Javascript "scripts" into hooks in the GUI.

    There are advantages as well as disadvantages with the TIM approach. The biggest disadvantage is that there is another critical piece of infrastructure that you need to support. As the LDAP server also needs a db you may end up having to support an Oracle DB and a DB2 instance for the Tivoli Directory Server. Not fun.

Aside from the support issue, the LDAP server means that you end up with a large number of servers in an enterprise grade, no single point of failure install. As it generally isn't advisable to run a DB2 and an Oracle DB on the same physical host due to memory footprint, you suddenly need four data layer hosts. High hardware costs, lots of energy and cooling, and many servers to patch.

    The problem of server sprawl is further increased by the fact that you really need to make your final testing environment as similar as you can afford to your production environment which means that it also should have full high availability. There you have another four servers just for the storage layer. 

The advantage of the LDAP approach is that the users and their base data are easily accessible through LDAP calls, which makes certain business processes, such as "warn the manager about a contractor that is about to expire", very easy to implement. It also means that you can peek at the user information using an LDAP browser instead of a SQL client, which may or may not be nicer depending on your personal preferences.
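As an illustration, a query like the one below is all it takes to find soon-to-expire contractors once the user data sits in an LDAP server. Note that the host, base DN and attribute names here are made-up placeholders, not the actual ITIM schema.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class ExpiringContractorQuery {

   public static void main(String[] args) throws Exception {
      Hashtable<String, String> env = new Hashtable<String, String>();
      env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
      env.put(Context.PROVIDER_URL, "ldap://tim-ldap.example.com:389");
      env.put(Context.SECURITY_PRINCIPAL, "cn=ldap reader,o=example");
      env.put(Context.SECURITY_CREDENTIALS, "secret");

      InitialDirContext ctx = new InitialDirContext(env);
      SearchControls controls = new SearchControls();
      controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

      // employeeType and contractEndDate are hypothetical attribute names
      String filter = "(&(employeeType=contractor)(contractEndDate<=20101001000000Z))";
      NamingEnumeration<SearchResult> results =
         ctx.search("ou=people,o=example", filter, controls);

      while (results.hasMore()) {
         SearchResult entry = results.next();
         System.out.println("Expiring soon: " + entry.getNameInNamespace());
      }
      ctx.close();
   }
}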

    Sunday, August 15, 2010

    In defense of OIM IT resources

    In OIM Resource Objects, provisioning processes and connectors and IT resources I discussed the different objects that make up an OIM system. One of the objects that isn't strictly necessary is the IT resource.

    So if it isn't necessary why does it exist? The simple answer is that it is a very convenient place to store environment dependent information.

    In most OIM projects you have a number of OIM environments. You have at least one dev, one integration and one production environment. In some cases you may have a number of dev environments or even one dev per developer, perhaps a UAT environment and a training environment. At a minimum the dev and test environments have separate target systems and you may even have a separate set of target systems for each environment.

You definitely don't want to risk provisioning, or even worse deprovisioning, against production target systems when you are doing dev or testing. You also don't want to move dev target system configuration into production as part of a code drop. How do you solve this problem?

The most simplistic solution would be to store the target system information in the source code and change it when you promote the code to the next level. Anyone who has ever done that knows that it is a certain path to pain and suffering. In OIM you can actually place environment configuration information in all kinds of interesting and exciting places. It can live in the Java source code, in adapters and in provisioning tasks. There are actually very few limits on what kind of pain a naive and inexperienced OIM developer can inflict on the poor souls that have to maintain the system.

The better option is of course to externalize and centralize the configuration in some kind of configuration repository. The standard OIM repository for environment information is the IT resource, and in many cases it is a good choice. It is easily available for reference and update in the design client, it is quite flexible, and it is accessible to the OIM admins.

Java does offer you some good alternatives if you prefer physical configuration files. You can of course just use the FileReader class and write your own parser, but I personally prefer the java.util.Properties framework.
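A minimal sketch of the Properties approach, with a file path and property key of your own choosing:

import java.io.FileInputStream;
import java.util.Properties;

public class ConfigLoader {

   public static String getTargetHost() throws Exception {
      Properties props = new Properties();
      FileInputStream in = new FileInputStream("/opt/oim/conf/connector.properties");
      try {
         // load key=value pairs from the external configuration file
         props.load(in);
      } finally {
         in.close();
      }
      // fall back to a default if the key is missing
      return props.getProperty("target.host", "localhost");
   }
}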

The main disadvantage of configuration files shows up when you are running in a cluster. I don't know how many times over the years I have spent considerable time debugging an issue just to discover that a setting or a configuration file wasn't updated on one of the cluster member servers.

    Wednesday, August 11, 2010

    The primary limitation of OIM access policies

    In identity and access management there are a few paradigms that you see implemented in most products on the market. One such paradigm is the "rule triggered on user form attribute" -> "group membership" -> "resource/service provisioned". The different pieces are called slightly different things but the general concept is the same.

In OIM this approach is supported as "a rule puts the user in an OIM group, which triggers an access policy, which gives the user a resource object configured in a certain way, and the RO finally gives the user access on the target system". This works great as long as the user can only be given zero or one instance of the RO. If the user should be able to be given more than one RO instance, access policies simply don't work. Instead of provisioning multiple ROs, the AP with the lowest (or potentially highest) priority will simply set the content of the process form.

This is of course one of those architectural decisions that you can argue for and against, but it does mean that standard OIM access policies are of limited value in many situations.

How do you overcome this limitation? The primary method is to write your own access policy framework based on entity adapters, triggered either from the user form or from the USG (group membership) table. The main disadvantage of creating your own AP framework is the classic tradeoff between ease of implementation and ease of configuration.

    Monday, August 9, 2010

    How I learned to stop worrying and love the sniffer

Once upon a time you used sockets to speak to the network. You basically pushed your little bits out onto the network manually and could almost physically feel them sail away into the void. Today I tend to use high level libraries, and it is often not trivial to figure out what calling the method createObjectOnRemoteSystem actually results in. Logs are good, but sniffing the network is sometimes the best way to figure out what is really going on. On a large number of occasions a good network sniffer has been the difference between being totally stuck and solving a problem.

    My favorite sniffer is Wireshark (formerly known as Ethereal). This sniffer is free and has a very good protocol analyzer while still giving you convenient access to the raw bits. The user interface may take some time to get used to so I thought I should write up a quick introduction on how to do some simple tasks.
Let me use an actual incident as an example of how to use a sniffer to find the root cause of a service outage in a complex environment.

The outage first showed up in an application that basically provides an interface translating web service calls into LDAP queries against an AD database. The application suddenly started failing, complaining in the logs that it couldn't bind to the AD server.

The first step was to check that you could bind to the AD server that was the primary logon controller for the application server hosting the service. That worked fine.

The second step was to look in the logs on the AD server to see if there were any entries about the failed binds. Unfortunately everything looked fine.

    Now things looked a bit confusing. Was the problem that the application had gone totally off the rails? I decided to install Wireshark and see if the app was at all communicating with the DC.

    Sniffing traffic with Wireshark is easy:
    1. Start up Wireshark
    2. Pick Capture-> Interfaces
    3. Pick the Network Interface Card that you want to listen to and press Start.
    4. Generate the traffic you want to listen to
    5. Press Capture -> Stop when you are done
    You have now sniffed the traffic and next up is analyzing.

Analyzing can be a bit challenging. This is especially true if you are on a network with a lot of traffic, where your packets simply drown in the background noise. The trick is to figure out a good filter that lets you find your signal. Below are a couple of options that I have found useful over the years.

    • "tcp.port==389" gets you all tcp traffic on that port (LDAP in this case)
    • "ip.host==192.168.1.30" gets you all ip based traffic to and from that specific host
Once you have created a good filter, press the Apply button and you should be able to find the traffic. If you can't directly filter for the traffic you want to look at, you can filter on the triggering event and then remove the filter by pressing Clear. Usually you can find your packet of interest just below the triggering event.

In the lower part of the screen you can see the protocol layer stack. Depending on what you are doing you may be more interested in the application layer or the network layer, so you can expand or collapse the different layers by clicking on the + signs on the left.

    In this specific example I discovered that my app was talking to a completely different AD domain controller. Once the DC was taken down my application rebound to another DC and suddenly started working again.

There are many more nifty tricks you can use in Wireshark but I think this is enough for one posting. Stay tuned for more! (cue "We'll Meet Again")

    Saturday, August 7, 2010

    Externalized authorization and Xacml

Back in 2001-2003 I was working for a business system vendor and we were looking at how to integrate the technical infrastructure of the business system with the new platform technology that was starting to mature. We started out by creating a CORBA based architecture that was later converted to J2EE. On the security side we integrated the authentication engine with LDAP/AD. We looked at externalizing the authorization objects but determined that there probably wasn't any market pressure for that feature at the time.

The years passed and web access managers became the standard for coarse grained web authorization. Most access managers deal in URL access, which is fine if you just want to protect an application or part of an application, but if you want to say "users in group A can only update transactions that originated in the central European region and whose total value is less than $50,000" you really are in trouble.

XACML and attribute based access control promise to give you this ability. It is of course nothing revolutionary, as you can implement exactly the same thing in your favorite programming language, but there are situations where having the authorization logic embedded within the business logic may not be so good.

One example from the pharmaceutical world is that the FDA is putting more and more pressure on companies to deliver data not only about how their drugs behave during trials but also about how they behave in the commercial patient population. As competition increases between drug makers and makers of generics, it also becomes increasingly important to have a close relationship with your patients and doctors. The most common solution to this problem is to create a registry, basically an online electronic health record system where patients and their doctors can record how the therapy is progressing.

One important factor here is of course data privacy. Health information is highly sensitive, so you don't want to grant inappropriate access to it. Defining what is appropriate and inappropriate is unfortunately slightly complex. In most cases the patient and the treating doctor should have full access. In many cases other doctors in the same practice should also have access, along with nurses and other health care professionals. In some cases patients are treated in multiple practices or may switch practices temporarily or permanently.

    You could of course implement all of this functionality in code but if you ever need to prove that only the appropriate users have access to the patient’s information the auditors may not be happy with having to look through thousands of lines of code. Also if you run a global system you may run into requirements where you have to handle people from different jurisdictions differently as the German Bundesdatenschutz may require special rules for German citizens.

XACML clearly offers a very attractive way to externalize and document the authorization logic in a format that is understandable by auditors and other interested parties.

XACML by itself does not solve the whole problem, but it is an important puzzle piece. The next posting will talk more about the other pieces.

    Thursday, August 5, 2010

    OIM resource objects, provisioning processes, connectors and IT Resources

    When you first start working with OIM there are all of these strange new concepts that are really hard to grasp. Even worse is to try to figure out how everything hangs together. Having tried to explain this a number of times to different people over the years I wanted to try to write down a very basic guide to some of the core objects in OIM.

Let's start with the resource object (often called RO). An RO is, in its most basic form, a virtual representation of an account on a target system. If an OIM user has an account on the target system, the user has an RO instance associated with it.

The most basic thing you do with ROs is provision the account to a target system. The provisioning is handled by a provisioning process. The provisioning process usually consists of a number of provisioning tasks that fire adapters, which in turn call code, often Java code, that actually does the provisioning work.

In many cases the provisioning tasks need information about the target system, such as the logins and passwords for the accounts used to run the provisioning. This information is often kept in an IT resource and referenced by the Java code. In many cases you keep a reference to the relevant IT resource on the process form or in the attribute section of a scheduled task. This makes it possible for a single resource object to interact with multiple physical target systems, and it also makes it clearer to an admin exactly which physical target system is being managed by a specific resource object instance or scheduled task.

A connector is a set of objects such as resource objects, provisioning processes and IT resources. It also includes the jars that contain the Java code that performs the actual provisioning.

    The out of the box connectors are often very complex but a simple custom connector may only consist of an RO, a provisioning process, an adapter that links the provisioning process to the provisioning logic and finally some provisioning logic in a jar. The IT Resource is not essential but it is very useful to avoid having to put system specific information straight into the provisioning process (or even the provisioning logic). One common mistake is to think that the IT Resource is much more important than it actually is. 

    Role based group memberships in OIM

As I recently discussed role driven automatic provisioning of target system roles on the Oracle IDM discussion board, I thought it might be interesting to shine a little spotlight on this specific form of target system role management.

My addition to the thread was basically the "Role based group memberships in OIM" section of the AD and LDAP group management through OIM document.
In the discussion Oracle Quest made the excellent suggestion to use a combination of SQL, XSLT and regex to create a very agile and fully featured rules system.
In some cases you might not need that much flexibility, and a simple model where the rules are contained in lookups, with support for wildcards as the only real addition, may be sufficient. A basic implementation can be done through an entity adapter set on post insert on the user form.
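A sketch of what the lookup-driven rule evaluation could look like is shown below. The lookup name, the choice of the department attribute and the naive wildcard handling are all illustrative assumptions; the actual group assignment would then go through tcGroupOperationsIntf as discussed elsewhere on this blog.

import Thor.API.Operations.tcLookupOperationsIntf;
import Thor.API.tcResultSet;

public class LookupRuleEvaluator {

   // returns true if value matches a pattern where '*' matches anything
   static boolean wildcardMatch(String pattern, String value) {
      String regex = pattern.replace("*", ".*");
      return value.matches(regex);
   }

   // evaluates the rules in a lookup against the user's department
   public void evaluate(tcLookupOperationsIntf lookupIntf, String department) throws Exception {
      // each row maps a pattern (code key) to an OIM group name (decode)
      tcResultSet rules = lookupIntf.getLookupValues("Lookup.Role.Rules");
      for (int i = 0; i < rules.getRowCount(); i++) {
         rules.goToRow(i);
         String pattern = rules.getStringValue("Lookup Definition.Lookup Code Information.Code Key");
         String group = rules.getStringValue("Lookup Definition.Lookup Code Information.Decode");
         if (wildcardMatch(pattern, department)) {
            // here you would add the user to "group", e.g. via tcGroupOperationsIntf
            System.out.println("Rule matched, group to assign: " + group);
         }
      }
   }
}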

    Wednesday, August 4, 2010

    Manage AD with JNDI demo tool

Provisioning and managing user objects in AD is one of the most basic functions in most provisioning implementations. Today the standard connectors have gotten quite good, but sometimes you still have to implement some "missing" functionality yourself.

I created a small tool that demos how to manipulate AD user objects using JNDI. The tool supports unlocking, setting the description attribute and group memberships. You can run it through a bat script or integrate it with your favorite IDM system.

    AD user object management demo tool
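To give a feel for what the tool does internally, here is a minimal JNDI sketch that sets the description attribute on an AD user object. Host, credentials and DN are placeholders; note that password operations against AD additionally require an SSL (LDAPS) connection, while a simple attribute update like this one does not.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.ModificationItem;

public class AdDescriptionUpdater {

   public static void main(String[] args) throws Exception {
      Hashtable<String, String> env = new Hashtable<String, String>();
      env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
      env.put(Context.PROVIDER_URL, "ldap://dc01.example.com:389");
      env.put(Context.SECURITY_AUTHENTICATION, "simple");
      env.put(Context.SECURITY_PRINCIPAL, "svc_oim@example.com");
      env.put(Context.SECURITY_CREDENTIALS, "secret");

      DirContext ctx = new InitialDirContext(env);

      // replace the description attribute on the user object
      ModificationItem[] mods = new ModificationItem[] {
         new ModificationItem(DirContext.REPLACE_ATTRIBUTE,
                              new BasicAttribute("description", "Updated by IDM"))
      };
      ctx.modifyAttributes("CN=Jane Doe,OU=Users,DC=example,DC=com", mods);
      ctx.close();
   }
}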

    OIM Howto: Request based group membership management

In OIM there is often more than one way to implement a certain requirement. Request based target system group membership is an area where there are at least half a dozen different ways to get the job done.

    The process basically contains two steps:
    1. Creation of a custom resource object that facilitates the request and approval workflow
    2. Creation of a provisioning workflow that sets the group membership
    The most common way to handle step two is to leverage the standard functionality provided by most connectors and manipulate the child table on the process form that contains the target system roles/group memberships. By manipulating the content of this child table you can trigger the provisioning or deprovisioning of groups on the target system.

The child table manipulation can be done either through the APIs (see below) or by assigning target system group memberships through access policies.

    The API that will let you do this is in the tcFormInstanceOperationsIntf class and the methods are called addProcessFormChildData and removeProcessFormChildData.

If, for example, you would like to support request based addition to AD groups and already have one of the later versions of the AD connector installed, you would need to do the following:
    1. Create a new RO called "AD group membership add"
    2. Add an object form that lets the user indicate what AD groups they would like to become member of.
    3. Add a process form and data sink or prepop from the object form
    4. Add approval process (if needed)
5. Add a provisioning process that basically calls a task that uses addProcessFormChildData to add the AD group identifier (the group name, if I remember correctly) to the AD group child table that is attached to the main AD resource object.
    OIM will automatically take care of the rest for you.
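The task in step 5 could look something along these lines. This is only a sketch: the child table and column names (UD_ADUSRC, UD_ADUSRC_GROUPNAME) follow the AD connector's usual naming but should be verified against the connector version you have installed.

import java.util.HashMap;
import java.util.Map;

import Thor.API.Operations.tcFormInstanceOperationsIntf;
import Thor.API.tcResultSet;

public class AdGroupChildTableTasks {

   public String addAdGroupRow(tcFormInstanceOperationsIntf formIntf,
                               long processInstanceKey,
                               String groupName) {
      try {
         // locate the child form definition hanging off the AD User process form
         long parentFormDefKey = formIntf.getProcessFormDefinitionKey(processInstanceKey);
         int parentFormVersion = formIntf.getProcessFormVersion(processInstanceKey);
         tcResultSet children = formIntf.getChildFormDefinition(parentFormDefKey, parentFormVersion);

         long childFormDefKey = -1;
         for (int i = 0; i < children.getRowCount(); i++) {
            children.goToRow(i);
            if ("UD_ADUSRC".equalsIgnoreCase(
                  children.getStringValue("Structure Utility.Table Name"))) {
               childFormDefKey = children.getLongValue("Structure Utility.Child Tables.Child Key");
            }
         }
         if (childFormDefKey == -1) {
            return "CHILD_TABLE_NOT_FOUND";
         }

         // add the group name row; the connector reacts to the insert and provisions the group
         Map<String, String> row = new HashMap<String, String>();
         row.put("UD_ADUSRC_GROUPNAME", groupName);
         formIntf.addProcessFormChildData(childFormDefKey, processInstanceKey, row);
         return "SUCCESS";
      } catch (Exception e) {
         return "ERROR";
      }
   }
}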

There are a couple of downsides to this approach that are worth mentioning. As you are using a single RO, the user's resource view will show a collection of "AD group membership add" resources. The name of the group is available on the process form, but it is not directly visible to an admin without an additional click. Likewise, if you use attestation (access recertification), the attestation events will be less than helpful for the certifier, as they only see the resource name. The same goes for certain out of the box reports.

    Code example for adding rows to a child table of a process form

    Tuesday, August 3, 2010

    AD and LDAP group management through OIM

    Provisioning systems are often initially brought in to provision the basic resources such as AD accounts, email and perhaps a basic ERP account. Once that functionality is in place it is common to start looking at handling group memberships in the target application. In some cases you then go on to manage not only the group memberships but also the groups themselves.

A very common example is groups in Active Directory and/or the corporate LDAP. I have written down some thoughts about how to best leverage OIM in this capacity.

Take a look and feel free to comment if you find the document useful:
    AD and LDAP group management through OIM

    OIM Howto: Add parameters to your scheduled tasks

In many cases you want to externalize configuration parameters from the actual code to avoid having to recompile and redeploy every time you want to change something. In scheduled tasks the simplest way to do this is to define the parameters in the scheduled task definition and then read them from the code.

// the attribute name as defined on the scheduled task (example value)
private static final String DATABASE_NAME = "Database Name";

public void init() {
   try {
      // read the attribute value configured on the scheduled task
      String dbName = getAttribute(DATABASE_NAME);
   } catch (Exception e) {
      logger.error("Exception while initializing recon analyzer scheduled task " + e.getMessage());
   }
}
    

This gives you access to the scheduled task attribute referenced by DATABASE_NAME.

A slightly more advanced variant of the same approach is to let the attribute be a reference to another OIM object. In many cases IT resources are useful places to store attributes:

// the scheduled task attribute holds the name of an IT resource
String dbName = getAttribute(DATABASE_NAME);
// look up all parameters defined on that IT resource
Hashtable itResourceAttr =
   tcUtilXellerateOperations.getITAssetProperties(super.getDataBase(), dbName.trim());
    

Now you have the parameters of this IT resource in the hashtable.

You will have to import com.thortech.xl.util.adapters.tcUtilXellerateOperations, which is a very useful class full of nifty static methods.

    Monday, August 2, 2010

    OIM Howto: Create scheduled tasks

Scheduled tasks in OIM are used to run all kinds of recurring processes. They can also be used as convenient places to keep small processes that need to be run on demand.

To create a scheduled task you need to do the following:
    1. Create a java class that extends the SchedulerBaseTask (see example below)
    2. Write your business logic
    3. Compile and jar
    4. Place the jar in the ScheduleTask directory in your OIM install
    5. Create a new scheduled task using the OIM developer console
6. Link the new class into your new scheduled task.
    7. Done!

    Example code for a scheduled task:

import com.thortech.xl.scheduler.tasks.SchedulerBaseTask;

public class ScheduledtaskExample extends SchedulerBaseTask {

   public void init() {
      // this method is run by the scheduler before execute()
   }

   public void execute() {
      // this method is executed by the scheduler
      runMyBusinessLogic();
   }

   private void runMyBusinessLogic() {
      // place your business logic here
   }

   public boolean stop() {
      // place logic that should run on a stop signal here
      return true;
   }
}
    

    Sunday, August 1, 2010

    OIM Howto: Limit admin privileges for helpdesk

Q: I need to give the helpdesk limited admin privileges to perform level one admin tasks such as resetting passwords, unlocking accounts or enabling disabled users, but I don't want to give them the whole user management menu item. How do I do this?


    A: The easiest way to implement this requirement is to create a custom menu item in the standard OIM admin web application. In this menu item you implement exactly the functionality that the helpdesk needs to do their job using the standard OIM GUI framework and the OIM APIs.
Implementing a custom menu item does require some knowledge of the web GUI framework that OIM is built upon, but once you master this skill it is fairly easy. A good starting point is the OIM GUI customization guide (for 9.1.0.1).

    First post

    Welcome to my blog about identity and access management!