Parallel Siebel Development

After my high-level overview of Siebel configuration management, I had a few requests to explore the principles of parallel Siebel development. This is where we concurrently run two or more code streams in order to support independent pieces of work. Typically this allows long-term change development to run in parallel with short-term bug fixing, but it can also be used to allow separate teams to work on disparate pieces of functionality that are not dependent on each other. The result is more flexible timescales: functionality is delivered when it is ready, not when all development work has come to an end.

Parallel Development – Challenges

Parallel development in Siebel is a tricky thing to manage, and there are a number of reasons for this:

1. Siebel ‘code’ is stored in many places and modified and managed in many ways

Siebel configuration is a mixture of script, repository definitions, metadata, reference data and other types. For example:

  • Repository objects are maintained in the repository tables in the database and modified by Siebel Tools. Changes are deployed via the SRF
  • List of values (LOVs) are maintained in customer tables in the database and modified through the Siebel Client. Changes are deployed via ADM or manually
  • Workflow Processes are maintained in the repository tables but deployed through the Siebel Client via run time tables. Changes are deployed through the Repository with manual activation
  • Run Time Events, Personalization, State Models, Views and Responsibilities, Audit Trail, Workflow Policies, Assignment Rules, and many other object types, are maintained and deployed via a mixture of Siebel Tools, reference data, exported XML files, SRF, Repository and other mechanisms.

2. Siebel does not inherently support object versioning

Although Siebel Tools offers the ability to integrate, loosely, with version control software, this does not cover any non-repository objects. It is a distinctly manual process to keep tabs on non-repository items and ensure an accurate and useful version history. The Siebel Tools integration, via a batch file that runs each time an object is checked in, is not watertight, so it is more of an archive tool than a trusted configuration management and build tool.
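To make the batch-file mechanism concrete, the hook essentially receives the freshly exported SIF file and stashes it somewhere versioned. The sketch below is a minimal illustration of that archiving step, not Siebel's own implementation; the function name, folder layout and the idea of returning the archived path (rather than committing to VSS or Subversion, which a real hook would do) are all assumptions for illustration:

```python
import shutil
from datetime import datetime
from pathlib import Path

def archive_checked_in_sif(sif_path, archive_root):
    """Copy a freshly exported SIF file into a dated archive folder.

    A real check-in hook would typically also commit the file to a
    version control system (VSS, Subversion, CM Synergy, etc.); that
    step is represented here only by the returned target path.
    """
    sif = Path(sif_path)
    stamp = datetime.now().strftime("%Y%m%d")
    target_dir = Path(archive_root) / stamp
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / sif.name
    shutil.copy2(sif, target)          # preserve the export timestamp
    return target
```

Because the hook fires per check-in with no transactional guarantees, anything it misses (a failed copy, a developer bypassing check-in) silently falls out of the history, which is exactly why it is better treated as an archive than as a build source of record.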

3. Siebel does not understand your business logic or requirements

The concept of ‘merge’ is an important one in parallel development. Branching the code stream is what introduces the parallel streams and, by definition, means that convergence of the streams at some point is inevitable. Though Siebel provides some mechanism to ‘merge’ repository objects via SIF files, it cannot be expected to understand, evaluate and correctly resolve conflicting objects to deliver the combined functionality. It provides no functionality to merge other object types, intelligently or otherwise.
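What *can* be automated is the first pass: since SIF files are XML exports of repository objects, you can at least flag objects that were touched in both streams and therefore need a human merge decision. A sketch, assuming simplified SIF-like XML in which each exported object carries a NAME attribute (real SIF files are considerably richer than this):

```python
import xml.etree.ElementTree as ET

def touched_object_names(sif_xml):
    """Return the set of object names defined in a (simplified) SIF export."""
    root = ET.fromstring(sif_xml)
    return {el.get("NAME") for el in root.iter() if el.get("NAME")}

def potential_conflicts(fix_sif_xml, change_sif_xml):
    """Objects modified in both streams: candidates for a manual merge."""
    return touched_object_names(fix_sif_xml) & touched_object_names(change_sif_xml)

# Example: 'Contact' appears in both exports, so it needs human review.
fix = '<REPOSITORY><BUSINESS_COMPONENT NAME="Contact"/></REPOSITORY>'
change = ('<REPOSITORY><BUSINESS_COMPONENT NAME="Contact"/>'
          '<BUSINESS_COMPONENT NAME="Account"/></REPOSITORY>')
print(potential_conflicts(fix, change))  # {'Contact'}
```

A tool like this tells you *where* to look; deciding what the combined functionality should be remains the developer's job, for the reasons above.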

Parallel Development – In Practice

Typically, as fixes are identified and scoped, the solutions are derived and documented in LLD (Low Level Design) form, along with supporting Release Note documentation that defines non-repository changes and the process to deploy everything to a target environment. Code changes are made and, through the ‘route to Production’ that we discussed in our earlier article, deployed from the emergency fix code stream into the Production environment. In the diagram above, I’ve tried to show that you may put an emergency fix through test and then Pre-Production, or straight to Pre-Production. You may even go straight to Production, depending on the urgency and risk of the issue and fix. Once the fixes have been accepted by the customer, they are subsequently ‘manually merged’ into the change code stream using the LLD documentation produced by the fixer, along with the Release Note developed in parallel. This has to be a manual process because the developer working on the change code stream must implement the supplied fix in the context of the change code. We cannot assume that copying and pasting the fix work into the change code stream will deliver the desired functional effect.

For example, when merging a fix to the BusComp_PreWriteRecord event in the ‘Contact’ BC, the fix must be assessed against any changes that may already have been made in the ‘change’ code stream. If something in the ‘change’ code stream supersedes the functionality delivered in the ‘fix’, the fix could potentially be discarded. Alternatively, the developer may need to rework some of the code and configuration changes they have made in order to accommodate the functionality provided by the fix. In short, the developer must merge the code and configuration changes in the context of delivering the correct functionality that combines both streams.

The goal, then, is to ensure that any functional changes made in the ‘fix’ code stream are present in the ‘change’ stream. In essence, we are not really merging the code; we are merging the functionality. By using low-level design, high-level design and release note documentation to manage changes in both streams, the developer can assess the functional merge requirements as well as get detailed guidance, via the LLD, as to what technical, code-level merge activities, if any, are required to converge the code.
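Because the merge is manual and documentation-driven, it is easy to lose track of which fixes have actually made it across. A trivial sketch of the bookkeeping, assuming fix IDs are recorded per stream in release notes (the ID scheme and record-keeping format are assumptions, not anything Siebel provides):

```python
def outstanding_merges(fixes_released, fixes_merged):
    """Fix IDs deployed from the fix stream but not yet merged into
    the change stream, preserved in release order."""
    merged = set(fixes_merged)
    return [fix_id for fix_id in fixes_released if fix_id not in merged]

# e.g. three fixes accepted in Production, one merged so far:
print(outstanding_merges(["FIX-101", "FIX-102", "FIX-103"], ["FIX-101"]))
# ['FIX-102', 'FIX-103']
```

The logic is deliberately simple; the value is in forcing both streams to record fix IDs consistently so a list like this can be produced at all.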

In general, the manual merge process goes one way – in the scenario above, fixes are merged into the change code stream. Where two streams are involved, we don’t need to merge the other way. Typically, when the change code stream is released and deployed into Production, we simply refresh the entire fix code stream with the released change baseline. Note that this implies using the code baseline that was deployed to and accepted in Production – not necessarily the current code baseline in the development environment, which may well have moved on. In the diagram above I’ve tried to highlight this to show that:

  1. The ‘merge’ process happens in the standard development manner, with developers checking out, locally testing and checking in merge work
  2. The ‘refresh’ process is a system process, consisting of the deployment of the new Production baseline code into the emergency fix environment

We can then continue our process of fixing and merging while the next swathe of change work progresses.

Parallel Development – Other Scenarios

Now expand this out to a scenario where four or five teams are working on large swathes of change across multiple code streams. Convergence could occur at any point in the timeline across any or all of the code streams. Siebel does not provide the tools or functionality to manage this easily, so process and configuration management are key. Many customers and integrators are looking at Siebel Application Deployment Manager (ADM), third-party deployment tools such as KACE and Ant, or developing their own custom configuration management and deployment tools. The bottom line is that if you want to develop across a number of code streams, or want to use continuous development methodologies, Siebel does not provide adequate functionality to really support these. A lot of hard work will need to go into enforcing process and potentially developing bespoke tools to help.

At the end of the day, all but the largest Siebel implementations will find themselves bogged down by the costs and red tape inherent in applying these methodologies.


Siebel Configuration Management

I’ve noticed, on a number of projects now, that Siebel Configuration, Release and Environment Management isn’t always at the top of project managers’ lists. This is especially true if the development work is behind schedule.

All of these areas are critical to the success of a Siebel project.

Configuration Management

The Repository based object control within Siebel is not ideal – not least due to the lack of any object versioning within the Repository tables or Siebel Tools. However, this doesn’t mean it is not possible to apply sensible and useful configuration management processes. For example:

  • Version Control Integration

Siebel Tools does provide loose integration with version control systems via a batch file. I’ve used this, with a degree of success, with VSS, Subversion and CM Synergy to control object-level SIF files. One programme I know of even developed an automated build tool that would compile an SRF from build-specific SIF files, using task-based work packages defined in CM Synergy. Note that configuration management plays a key role in managing the merge process if you elect to maintain multiple code streams; for example, for long-term change and short-term emergency fix.

  • Non Repository Version Control

Using ‘Export as XML’ or ADM will allow you to control and manage a large number of non-repository objects, such as Run Time Events, Personalisation, SmartScripts and so on. Client tools such as TortoiseSVN or CM Synergy allow these to be treated and managed as any other source code would be. Though clunky, I’ve also found manual ‘Release Notes’ to be a very useful tool in managing small releases of non-repository items. The Release Note can, and should, be used to tie the pieces together, whether using version control integration or otherwise. That is, every release should be accompanied by a Release Note which details the steps, objects and versions required to build and deploy the release.
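Much of a Release Note's object-and-version listing can be generated rather than typed. A minimal sketch, assuming each exported non-repository item lives as one XML file in a release folder (that layout, and the choice of a short SHA-1 checksum as the "version", are my assumptions, not a Siebel convention):

```python
import hashlib
from pathlib import Path

def release_note_entries(export_dir):
    """List each exported non-repository item with a short checksum,
    suitable for pasting into a Release Note.  Assumes one XML file
    per exported item (Run Time Event, State Model, etc.)."""
    entries = []
    for path in sorted(Path(export_dir).glob("*.xml")):
        digest = hashlib.sha1(path.read_bytes()).hexdigest()[:8]
        entries.append(f"{path.name}  {digest}")
    return entries
```

The checksum gives the deployer a cheap way to confirm that the file applied to the target environment is byte-for-byte the file that was tested, which is the whole point of tying the Release Note to specific versions.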

Environment Management

Having appropriate systems and environments in place to allow development, testing and release to Production is vital to delivering a project. This also plays a key role in the continued development and maintenance of Siebel systems. Consider at least a development, a system test and a ‘Production-like’ environment, in addition to Production, to allow for adequate unit, integration, UAT / OAT / PEN / Performance testing. For maintenance, you’ll want to consider a secondary development environment to support a parallel ‘emergency fix’ code stream, along with a separate testing environment for this purpose. If you are implementing integration technologies across other systems, you may want to consider equivalent systems across environments for these too. It’s common, however, to share physical resources for the less performance-critical systems in the environment. You will also want to set up automated nightly SRF builds and deployments in at least your main Development environment, while encouraging your developers to perform SRF refreshes and incremental Tools ‘gets’ first thing each morning.
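A nightly SRF build is usually just a scheduled invocation of the Siebel Tools command-line compiler. The sketch below only *constructs* the command line rather than running it; the siebdev.exe switches shown reflect common usage, but treat the exact flags, paths and repository name as assumptions to verify against your installed Siebel Tools version before scheduling anything:

```python
import subprocess

def nightly_srf_build_cmd(tools_bin, cfg, datasource, user, password,
                          repository="Siebel Repository",
                          srf_out=r"C:\builds\siebel.srf"):
    """Assemble a siebdev command line for a full repository compile.

    /bc is commonly used for a batch (full) compile of the named
    repository into the given SRF -- confirm the switches against
    your Siebel Tools documentation before relying on them.
    """
    return [rf"{tools_bin}\siebdev.exe",
            "/c", cfg,
            "/d", datasource,
            "/u", user, "/p", password,
            "/bc", repository, srf_out]

# A scheduler job (Windows Task Scheduler, cron via an agent, etc.)
# would then run something like:
# subprocess.run(nightly_srf_build_cmd(r"C:\Siebel\Tools\BIN",
#                r"C:\Siebel\Tools\BIN\ENU\tools.cfg",
#                "ServerDataSrc", "SADMIN", "********"), check=True)
```

Separating command construction from execution makes the build script easy to dry-run and log, which matters when the compile runs unattended overnight.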

Release Management

Siebel deployments range from simple reference data changes to full releases involving SRF, Repository, DDL, non-repository items, patch sets and infrastructure changes and upgrades. Do not underestimate the effort and risk involved in doing a release. The concepts described above feed directly into this, allowing you to prove your code release, be it a combination of SRF, Repository and Release Note, using the environments that you’ve flagged as your route to Production. Thoroughly unit-tested code will mean nothing if you do not deploy the same code to the Production environment!

I’d be really interested to hear your stories or comments around your experience with managing Siebel code, especially from those who have implemented automated build and deployment toolsets.