With my new customer architect hat on, I’m looking across a broad range of applications and services to help my organisation be more effective and productive. As a large retail organisation, customer data quality is key to our ability to talk to our customers effectively. Imagine sending letters out to customers – in the digital age, that’s a pretty expensive channel but one that’s required nonetheless. If we get the address wrong, or we have duplicate entries for a customer, we end up with a failed communication (really bad for our customer) or duplicate communications (annoying for our customer and bad for our bottom line). Oracle Enterprise Data Quality can help us resolve such problems.
Quickstats Profiler on the S_CONTACT table
I’m running Oracle Customer Hub (the Customer Master Data Management solution formerly known as UCM or Universal Customer Master), which is a Siebel vertical product that orchestrates and governs a process to take data in from sources, cleanse, enrich, match and de-duplicate, before publishing to consuming systems.
EDQ is a critical component of this solution, as it fills the cleanse, enrich and match capabilities of our end-to-end MDM process. It’s also an extremely impressive and feature-rich product, and I’m going to explore some of its features over the coming months. First things first, let’s download and install Oracle EDQ release 12:
- First up, you’ll need a Java runtime to run the front-end components, so download and install a 64-bit JRE 8 from Oracle’s Java home
- Now, let’s get a trial copy of EDQ installed on a VM so we can have a bit of a mess around. Skipping the usual eDelivery route, head straight to the EDQ product page to download the product
- I’ve gone for the standalone Windows installer version, since I’m running a Windows 2012 R2 server VM, but you can sit the newer installation on top of a WebLogic server running on whatever host OS you choose
- Once downloaded, run the installer and simply follow the instructions – it’s really straightforward for an Oracle product
- You’ll find a shortcut to the Launchpad – this is where you’ll find shortcuts to all of the features of the platform
The EDQ Launchpad
- Launch Director, where you’ll set up EDQ to point to your data source and carry out activities. If you’re prompted to open a JNLP file, and your installation does not automatically associate it with Java Web Start, then navigate to your Java installation folder and associate it with “javaws.exe”
- The default username and password is dnadmin/dnadmin – be sure to change the password to something memorable. Log in and you’re ready to go
Simple as that! Next time, we’ll connect EDQ up to our Siebel server database (you know, the one with all the lovely sample DB data in it) and run some profiling jobs.
It has been many, many months since my last blog update. Life has changed considerably for me in 2016 with my two kids growing up, school drop offs, homework and being aeroplanes in the garden taking precedence over writing articles and installing Siebel patches.
In addition to my focus on family life, I’ve also closed down my consultancy (the imaginatively named “Ollerenshaw IT Ltd”), hung up my Siebel gloves and explored the world of full time roles in the great wide world.
It’s been fun, if a little daunting: I’ve worked almost exclusively with Siebel for over 15 years, so immersing myself in other technologies and roles has been difficult. I’ve found that my new roles involve a lot less hands-on technology and I now find myself in the fuzzy world of architecture and strategy. It’s no bad thing, and I enjoy the challenge of taking greater responsibility for overall solution designs, working with businesses to understand what’s really important and how I can help them achieve those goals. To me, it’s an important role, as it’s always been the case that shiny, snazzy Siebel solutions (including some of the “awesome” capabilities of OpenUI) often exist in the world to satiate the appetites of hungry developers – there’s often little to no business or user benefit in a lot of what happens in technically focused IT.
As part of my current role, I get involved in a lot of talk about strategy and planning for the future. It’s interesting to hear, from the coal face as it were, what’s getting the IT industry excited. I work for a retail organisation, and each vertical has different needs and wants and different solutions appeal, but for us there’s something that currently stands out: Big Data.
Of course, we’re very excited here about the prospect of Cloud, including Infrastructure As A Service, Azure and AWS. We’re also excited about Software As A Service, which is now at a level of maturity where we believe we can actually start to map some of our business need onto these types of cloud services. Though not something we’re looking at here, Birst is a great example of SaaS: it’s Cloud BI by two of the guys responsible for OBIEE nee Siebel Analytics.
But Big Data is what it’s all about here. We’re talking Hadoop, MapR – we’re looking at some of the big players and some not so big. We’re also looking at data visualisation platforms – Tableau, PowerBI, Business Objects and many others. With a bit of luck, if the kids allow me the time to do so, I’ll pen some articles on my experiences in this ever growing buzz space.
Until then, wishing everyone in Siebel Land the very best for now and the future!
Oracle XE is a bit of a beast, a pain in the proverbial to install and overkill for the purpose of a local Siebel database. However, it is at least an Oracle database and not a Sybase database, as it was in days gone by.
What this means is that it is possible to use Oracle tools to access and manipulate the data. Gone are the days of dbisql and its more-than-limited functionality. We can now access the local and sample databases using SQL Developer and other Oracle tools, such as import / export and data pump. I used this fact to my advantage, recently, to populate a fresh Siebel 16 server installation with demo data. This lets me use the full gamut of thin client apps, with useful data and users.
Terry Smythe’s Accounts!
Install the Sample DB
First up, we need to install the XE sample using the Tools or Client installers. All straightforward (as of 16.2, anyway) and you’ll be left with an XE Service that’s running and listening on localhost, port 1521.
Reset the System Password
We now need to get access to the system account, so that we can mess around. Bring up a command prompt, start SQL*Plus and enter:
sqlplus /nolog
connect / as sysdba
alter user system identified by system;
This will reset the “system” password. Note that the SADMIN and SIEBEL accounts follow the same convention, with the password matching the username for logging in.
Export the XE Sample Data
You can use Oracle Data Pump (impdp / expdp) to do this, but I found it to be a bit of a hassle – my target is an RDS instance on Amazon, so I’d have to push a load of files up in order for data pump to suck them in. As it is, I used the old-fashioned import / export (imp / exp) commands in XE to do the job.
First up, we don’t want all the data from sample – we really only want “S_” tables and we want to exclude the Repository tables. As such, I selected out the tables I wanted directly from S_TABLE:
SELECT NAME FROM SIEBEL.S_TABLE
WHERE TYPE <> 'Repository'
AND NAME LIKE 'S_%'
ORDER BY NAME ASC
I then saved this list to a text file (tables.par) and modified it to work as a parameter file input into the “exp” command.
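I haven’t reproduced the exact par file here, but the edit essentially wraps the query output in a TABLES=(…) clause that exp understands. A quick sketch of the transformation in Python – the table names and file path are illustrative, not the full sample list:

```python
# Sketch: turn the S_TABLE query output (one table name per line,
# saved from SQL Developer) into an exp parameter file.
# The table list below is a small illustrative subset.
table_names = ["S_ADDR_PER", "S_CONTACT", "S_ORG_EXT"]

def write_parfile(tables, path):
    # exp accepts a parfile containing a TABLES=(name, name, ...) clause
    with open(path, "w") as handle:
        handle.write("TABLES=(\n")
        handle.write(",\n".join(tables))
        handle.write("\n)\n")

write_parfile(table_names, "tables.par")
```

In practice I just did this edit by hand in a text editor – the point is simply that the par file is the table list wrapped in a TABLES clause.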
I then initiated an export from my XE database. Note that the exp command can be found in the Tools installation folder, <TOOLS>\oraclexe\app\oracle\product\11.2.0\server\bin:
exp SIEBEL/SIEBEL@SAMPLE_XE BUFFER=16384 FILE=C:\Oracle\Exp\Sample.dmp GRANTS=N TRIGGERS=N CONSTRAINTS=N STATISTICS=NONE PARFILE=C:\Oracle\Exp\tables.par
This took roughly 15 minutes to complete and I’m left with a “.dmp” file of around 4GB in size.
Import the Sample Data into Oracle Server
You can probably guess the next step – using the “imp” command to import the data into my Server Oracle instance. The process is much the same as for export – the main point of note here is that we want to explicitly ignore errors. Even though we’re pushing sample data into a “clean” Siebel Database, there’s still a load of seed data in there that we don’t want duplicated. By ignoring errors, the indexes will keep duplicates out while allowing the whole process to complete without terminating:
imp SIEBEL/SIEBEL@TARGET FROMUSER=SIEBEL TOUSER=SIEBEL BUFFER=16384 FILE=C:\Oracle\Exp\Sample.dmp IGNORE=Y PARFILE=tables.par
Create our test users
Now that we have all the sample data in the server DB, we must create some users to match the thousand or so S_USER records that have been created for our use across the sample system. Again, I put Oracle’s own tools to good use and created a SQL script in SQL Developer:
SELECT 'GRANT CONNECT TO ' || LOGIN || ' IDENTIFIED BY ' || LOGIN || '; ' || 'GRANT SSE_ROLE TO ' || LOGIN || ';'
FROM SIEBEL.S_USER
WHERE LOGIN NOT IN ('SADMIN', 'SIEBEL', 'GUESTCST', 'GUESTERP', 'LDAPUSER')
Execute the statements this script generates and the users will be created. Be careful to exclude any users you don’t want to affect (SIEBEL and SADMIN being the main ones) as the script will reset their passwords. You can find a list of relevant test users for each Siebel vertical in this Bookshelf Guide.
Login as Casey Cheng!
It’s what you’ve always wanted, right? Fire up Siebel Call Center and login as CCHENG. Ah, memories!
Now sit back and enjoy perusing the sample data from the comfort of your full Siebel installation.
Well, it was only a matter of time before I ended up working with Siebel again!
I’ve been introduced recently to Amazon Web Services – AWS to you and me. So this is what Cloud Computing is all about! I’m utterly gob-smacked by the power of this platform, the stuff you can do and the speed at which you can do it.
I was tasked with building a Siebel 16 installation using AWS and it is amazingly straight forward. Not to mention incredibly cost effective, compared to using physical hardware and on-premise virtualisation.
I thought I’d share with you the process of spinning Siebel up “in the Cloud”!
First up, my architecture involves two AWS instances – an RDS instance for the database and an EC2 instance for the Application and Web Servers. Using RDS, I’m able to spin up an Oracle Enterprise Edition 12c database in less than 10 minutes. It’s literally a few clicks and you’re done – no more slaving away with OUI, downloading files and the installation and configuration process. AWS captures parameters from simple web pages and does the rest – lightning fast. AWS will even keep my Oracle installation up to date, automatically applying minor patch updates during a maintenance window that I can define.
So, I have a database instance up and running. I then spin up a new Windows 2012 R2 server. Again, the process is ludicrously quick. I choose from a number of types that have varying RAM, CPU and bandwidth allocations, allocate C and D drives with 60 and 120GB respectively and click a button. Less than 10 minutes later, I have an RDP file to download to my laptop that allows me to connect to the new server. I then jump on to Oracle eDelivery and download the Siebel installation media – I’m getting speeds of nearly 70MB/s (that’s mega BYTES, not BITS!) and download the full Siebel installation in under 15 minutes. I unzip the media and back up the installation files to Amazon’s storage offering, S3.
I download and install a Java SDK, Oracle 12c 32-bit Client and SQL Developer then set up my TNSNAMES. I then run SNIC to extract everything and prepare my Siebel installation media and kick off my Siebel installation. Within another couple of hours I have a fully functioning Siebel environment.
While this is all pretty much “installing Siebel on a remote server”, the real benefit comes of now saving my Siebel Server installed as an Amazon Machine Image (AMI). This will allow me to VERY quickly spin up new Siebel 16 server environments for whatever I need – training, dev environments, the list is endless.
I’ve always been a bit of a cynic when it comes to “Cloud” computing but AWS has got me converted. This stuff is HUGE and it is AWESOME – with this stuff, server rooms are sure to become a thing of the past.
I’ve been working on orchestrating an ETL load process from text files stored in the Cloud based file management tool, Box. Working with Python for the first time, I’ve found it to be an extremely flexible and straightforward language to use for such a task. There’s even a Box SDK that provides direct access to the bulk of the services needed to manage files in Box.
However, the Box API uses OAuth2.0 for authentication. Part of the process for permitting access to the Box API is to present the user with a web page, requesting permission to access the Box account. This does not fit well with an automated orchestration process.
I’ve written a small Python script that uses the keyring Python module as secure storage for the authorisation and refresh tokens required to connect to Box. Using the Box SDK, the tokens are refreshed automatically when they expire and you’ll see in the code below how I leverage the Box SDK to retain the tokens indefinitely in the secure key store. After “priming” the store with an initial set of tokens, the script will maintain the tokens automatically, provided it runs within the 60-day window before the refresh token expires. Note that you’ll need to install the Box SDK and keyring Python modules, using pip, prior to running the script. You’ll also need to specify the client_id and client_secret values from your Box Developer App in the script below.
"""An example of Box authentication with external store"""
from boxsdk import OAuth2
from boxsdk import Client
CLIENT_ID = 'specify your Box client_id here'
CLIENT_SECRET = 'specify your Box client_secret here'
"""Reads authorisation tokens from keyring"""
# Use keyring to read the tokens
auth_token = keyring.get_password('Box_Auth', 'email@example.com')
refresh_token = keyring.get_password('Box_Refresh', 'firstname.lastname@example.org')
return auth_token, refresh_token
def store_tokens(access_token, refresh_token):
"""Callback function when Box SDK refreshes tokens"""
# Use keyring to store the tokens
keyring.set_password('Box_Auth', 'email@example.com', access_token)
keyring.set_password('Box_Refresh', 'firstname.lastname@example.org', refresh_token)
"""Authentication against Box Example"""
# Retrieve tokens from secure store
access_token, refresh_token = read_tokens()
# Set up authorisation using the tokens we've retrieved
oauth = OAuth2(
# Create the SDK client
client = Client(oauth)
# Get current user details and display
current_user = client.user(user_id='me').get()
print('Box User:', current_user.name)
if __name__ == '__main__':
The key here, if you’ll pardon the pun, is in the store_tokens function. This is invoked automatically by the Box SDK whenever it detects that the access token has expired. The SDK requests new tokens using the active refresh token and passes the two new values to the store_tokens callback registered in the OAuth2 constructor. In the example above, we use the keyring module to store those values securely with the Windows Credential Manager. Before invoking any methods, we read them back and use them in the authentication constructor. Great stuff!
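If it helps to see the pattern stripped of the SDK, here’s a minimal sketch of the same refresh-callback idea, with a plain dict standing in for keyring and a simulated token exchange – everything here is illustrative, not the Box SDK itself:

```python
# Minimal sketch of the refresh-callback pattern: the consumer supplies
# a callback, and the token refresher invokes it with the new values so
# the external store always holds the latest tokens.
store = {}

def store_tokens(access_token, refresh_token):
    # Callback: persist whatever tokens we're handed
    store['access'] = access_token
    store['refresh'] = refresh_token

def refresh_expired_token():
    # Stand-in for the SDK exchanging the refresh token for new ones,
    # then invoking the registered callback
    new_access, new_refresh = 'access-2', 'refresh-2'
    store_tokens(new_access, new_refresh)
    return new_access

store_tokens('access-1', 'refresh-1')  # prime the store once
access = refresh_expired_token()       # expiry triggers the callback
```

The priming step at the bottom mirrors the one-off seeding of keyring with an initial token pair; after that, every refresh flows through the callback and the store never goes stale.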