After my high-level overview of Siebel configuration management, I had a few requests to explore the principles of parallel Siebel development. This is where we concurrently run two or more code streams in order to support independent pieces of work. Typically, this allows long-term change development to run in parallel with short-term bug fixing, but it can also allow separate teams to work on disparate pieces of functionality that do not depend on each other. The result is more flexible timescales: functionality is delivered when it is ready, not when all development work has come to an end.
Parallel Development – Challenges
Parallel development in Siebel is a tricky thing to manage, and there are a number of reasons for this:
1. Siebel ‘code’ is stored in many places and modified and managed in many ways
Siebel configuration is a mixture of script, repository definitions, metadata, reference data and other types. For example:
- Repository objects are maintained in the repository tables in the database and modified by Siebel Tools. Changes are deployed via the SRF
- List of values (LOVs) are maintained in customer tables in the database and modified through the Siebel Client. Changes are deployed via ADM or manually
- Workflow Processes are maintained in the repository tables but deployed through the Siebel Client via run time tables. Changes are deployed through the Repository with manual activation
- Run Time Events, Personalization, State Models, Views and Responsibilities, Audit Trail, Workflow Policies, Assignment Rules, and many other object types are maintained and deployed via a mixture of Siebel Tools, reference data, exported XML files, the SRF, the Repository and other mechanisms.
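To make the spread of mechanisms above concrete, here is a minimal sketch, in Python, of the kind of lookup a team might keep in its release note tooling. The entries simply restate the examples above; the structure and function name are my own illustration, not anything Siebel provides:

```python
# Where each Siebel object type lives and how it reaches a target
# environment, restating the examples above. Illustrative only --
# the real mix varies by Siebel version and project convention.
OBJECT_CATALOGUE = {
    "Repository object": {
        "stored_in": "repository tables",
        "modified_via": "Siebel Tools",
        "deployed_via": "SRF",
    },
    "List of Values": {
        "stored_in": "customer tables",
        "modified_via": "Siebel Client",
        "deployed_via": "ADM or manual entry",
    },
    "Workflow Process": {
        "stored_in": "repository tables",
        "modified_via": "Siebel Tools",
        "deployed_via": "Repository plus manual activation",
    },
    "Run Time Event": {
        "stored_in": "customer tables",
        "modified_via": "Siebel Client",
        "deployed_via": "exported XML or ADM",
    },
}


def deployment_route(object_type: str) -> str:
    """Look up how a given object type is deployed."""
    entry = OBJECT_CATALOGUE.get(object_type)
    if entry is None:
        raise KeyError(f"Unknown object type: {object_type}")
    return entry["deployed_via"]
```

A release note that records each changed object against a catalogue like this at least forces the team to state, per object, how the change will be deployed.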
2. Siebel does not inherently support object versioning
Although Siebel Tools offers loose integration with version control software, this does not cover any non-repository objects. Keeping tabs on non-repository items, and maintaining an accurate and useful version history for them, is a distinctly manual process. The Siebel Tools integration, via a batch file that runs each time an object is checked in, is not watertight, so it is more of an archive tool than a trusted configuration management and build tool.
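As an illustration of the "archive tool" point: the check-in hook can do little more than file away each exported object. Here is a minimal sketch of that job, assuming the hook is handed the path of the exported SIF file (the real hook is a batch file configured in Siebel Tools; the directory layout and naming below are my own assumptions):

```python
import shutil
import time
from pathlib import Path


def archive_checkin(sif_file: str, archive_root: str) -> Path:
    """Copy a checked-in SIF export into a timestamped archive folder.

    Sketch of what a Siebel Tools check-in hook might do. The batch
    file Siebel Tools runs on check-in could invoke a script like
    this; paths and naming conventions here are assumptions.
    """
    src = Path(sif_file)
    if not src.is_file():
        raise FileNotFoundError(f"No SIF export found at {src}")
    # One folder per check-in timestamp, e.g. archive/20240101-120000/
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest_dir = Path(archive_root) / stamp
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)
    return dest
```

Note what this does not do: it records nothing about non-repository objects, and it cannot tell you whether the archived version is the one that was actually deployed, which is exactly why it falls short of a trusted build tool.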
3. Siebel does not understand your business logic or requirements
The concept of a ‘merge’ is an important one in parallel development. Parallel code streams are introduced by branching a code stream, which by definition means the streams must converge again at some point. Though Siebel provides a mechanism to ‘merge’ repository objects via SIF files, it cannot be expected to understand, evaluate and correctly resolve conflicting objects so as to deliver the combined functionality. And it provides no functionality, intelligent or otherwise, to merge other object types.
Parallel Development – In Practice
Typically, as fixes are identified and scoped, the solutions are derived and documented in LLD (Low Level Design) form, along with supporting Release Note documentation that defines the non-repository changes and the process for deploying everything to a target environment. Code changes are made and, through the ‘route to Production’ that we discussed in our earlier article, deployed from the emergency fix code stream into the Production environment. In the diagram above, I’ve tried to show that you may put an emergency fix through test and then Pre-Production, or straight to Pre-Production. You may even go straight to Production, depending on the urgency and risk of the issue and fix. Once the fixes have been accepted by the customer, they are subsequently ‘manually merged’ into the change code stream using the LLD documentation produced by the fixer, along with the Release Note developed in parallel. This has to be a manual process because the developer working on the change code stream must implement the supplied fix in the context of the change code. We cannot assume that copying and pasting the fix work into the change code stream will deliver the desired functional effect.
For example, when merging a fix to the BusComp_PreWriteRecord event in the ‘Contact’ BC, the fix must be assessed against any changes already made in the ‘change’ code stream. If something in the ‘change’ code stream supersedes the functionality delivered in the ‘fix’, the fix could potentially be discarded. Alternatively, the developer may need to rework some of the code and configuration changes they have made in order to accommodate the functionality provided by the fix. In short, the developer must merge the code and configuration changes in the context of delivering the correct combined functionality of both streams.
The goal, then, is to ensure that any functional changes made in the ‘fix’ code stream are present in the ‘change’ stream. In essence, we are not really merging the code; we are merging the functionality. By using low-level design, high-level design and release note documentation to manage changes in both streams, the developer can assess the functional merge requirements and, via the LLD, get detailed guidance as to what technical, code-level merge activities, if any, are required to converge the code.
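The triage step described above can at least be sketched mechanically: compare which objects each stream has touched relative to the shared baseline, and flag only the overlaps for a manual, functionality-aware merge. A minimal Python sketch, assuming each stream can be represented as a map of object name to some version marker (in practice this information would come from the LLD and Release Note documentation, not from Siebel itself):

```python
def triage_merge(baseline, fix_stream, change_stream):
    """Classify objects touched across two parallel streams.

    Each argument maps object name -> a version marker. Objects
    changed in both streams relative to the baseline need a manual,
    functional merge -- as the article notes, Siebel cannot resolve
    these conflicts for you.
    """
    def touched(stream):
        # An object is 'touched' if its marker differs from the
        # baseline (including objects new to that stream).
        return {name for name, ver in stream.items()
                if baseline.get(name) != ver}

    fix_touched = touched(fix_stream)
    change_touched = touched(change_stream)
    return {
        # Touched in only one stream: can be carried across as-is.
        "auto_from_fix": sorted(fix_touched - change_touched),
        "auto_from_change": sorted(change_touched - fix_touched),
        # Touched in both: a developer must merge the functionality.
        "manual_functional_merge": sorted(fix_touched & change_touched),
    }
```

For instance, if both streams have modified the ‘Contact’ BC’s BusComp_PreWriteRecord script, it lands in the manual bucket, while an object only the change stream added can simply be kept.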
In general, the manual merge process goes one way – in the scenario above, fixes are merged into the change code stream. Where two streams are involved, we don’t need to merge the other way. Typically, when the change code stream is released and deployed into Production, we simply refresh the entire fix code stream with the released change baseline. Note that this implies using the code baseline that was deployed to and accepted in Production – not necessarily the current code baseline in the development environment, which may well have moved on. In the diagram above I’ve tried to highlight this to show that:
- The ‘merge’ process happens in the standard development manner, with developers checking out, locally testing and checking in merge work
- The ‘refresh’ process is a system process, consisting of the deployment of the new Production baseline code into the emergency fix environment
We can then continue our process of fixing and merging while the next swathe of change work progresses.
Parallel Development – Other Scenarios
Now expand this out to a scenario where four or five teams are working on large swathes of change across multiple code streams. Convergence could occur at any point in the timeline, across any or all of the code streams. Siebel does not provide the tools or functionality to manage this easily, so process and configuration management are key. Many customers and integrators are looking at Siebel Application Deployment Manager (ADM), at third-party deployment tools such as KACE and ANT, or at developing their own custom configuration management and deployment tools. The bottom line is that if you want to develop across a number of code streams, or want to use continuous development methodologies, Siebel does not provide adequate functionality to support them. A lot of hard work will need to go into enforcing process, and potentially into developing bespoke tools to help.
At the end of the day, all but the largest Siebel implementations will find themselves bogged down by the costs and red tape inherent in applying these methodologies.