Data archival is paramount for any growing organization in today's business environment.
As a business grows, so do the challenges of carefully managing and monitoring its ever-increasing data! Keeping tabs on this data is especially hard for businesses that have never put an archiving system in place, which was the case with my client as well.
My client, a telecom giant, had a huge volume of dormant case records. This presented a number of challenges:
- Storage space and cost, since all call records had to be retained throughout their lifecycle
- Dormant records were vulnerable to data security breaches
- Locating old cases was time-consuming because storage was disorganized
- Storing huge volumes of dormant cases was degrading system performance
To solve these challenges, I developed an application to archive dormant cases into a separate database, using Salesforce, MuleSoft, Heroku and PostgreSQL as my technology stack. With Anypoint Platform and DataWeave 2.0, transforming data between formats such as XML, JSON, Java and CSV became a cakewalk!
How I Went About It!
To build this application, I first needed to connect Salesforce and MuleSoft. I used the ready-made, built-in Salesforce connector and ended up saving 67% of the integration time. I then pushed dormant cases from Salesforce to MuleSoft by querying the data through the Salesforce connector, which gave me the data in MuleSoft in JSON format. Next, I transformed the data into the required format using DataWeave 2.0. My final task was to push the data from MuleSoft into a different database. In earlier versions of Mule, dedicated connectors were available for PostgreSQL (the Heroku database) as well; in the latest release, however, these have been abstracted away by MuleSoft. To my great relief, a generic Database connector is available, which solved my quandary. I simply configured that connector, created a table in the database, and pushed the dormant records into it. If I had to do this entirely through hand-written code, I would probably have spent 3x the time!
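As an illustration, the query behind this step could be a SOQL statement along the lines of the one below; the fields and the definition of a dormant case are my own assumptions, so adjust the criteria to your business rules:
SELECT Id, CaseNumber, Subject, Status, ClosedDate
FROM Case
WHERE IsClosed = true AND ClosedDate < LAST_N_DAYS:365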
Step By Step Guide
Let me now show you how to replicate what I did.
Img1 – Design Center Overview
Create Your First Project And Flow
Create your first flow with an HTTP Listener to accept data from Salesforce. Apart from the HTTP Listener, you need a Database Connector to push data into the database hosted on Heroku, a second Database Connector to pull the inserted data back from the database to confirm that the insert succeeded, a DataWeave Transform component to transform data into JSON format, and a Salesforce Connector to delete the recently archived cases from Salesforce.
Img2 – Archive Case Flow
Step 1 HTTP Listener:
For the HTTP Listener, set the Path to /case.
Img3 – HTTP Listener
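The exact shape of the payload arriving at /case depends on how you send the cases across; purely as a hypothetical example, a JSON body for a single case, using the same fields as the rest of this walkthrough, might look like this:
{
  "id": "5003000000AbCdEFGH",
  "caseNumber": "00001026",
  "subject": "Billing discrepancy on call records",
  "status": "Closed",
  "closedDate": "2019-03-15T10:30:00Z"
}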
Step 2 Database Connector:
Map your Salesforce data to the PostgreSQL table you hosted on the Heroku server. I created a table called case_data with a few columns. Given below is the sample SQL for the insert; the original column list is not reproduced here, so the column and parameter names are placeholders that you should replace with your own.
SQL Query Text:
INSERT INTO case_data (
    id, case_number, subject, status, closed_date
) VALUES (
    :id, :caseNumber, :subject, :status, :closedDate
)
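If you want to recreate the table yourself, a minimal definition matching the placeholder columns above might look like the following; the names and types are my assumptions rather than the original schema:
CREATE TABLE case_data (
    id          VARCHAR(18) PRIMARY KEY,  -- Salesforce record ID
    case_number VARCHAR(30),
    subject     TEXT,
    status      VARCHAR(40),
    closed_date TIMESTAMP
);
The :name placeholders in the INSERT are the Database connector's input parameters; you supply their values in the connector's Input Parameters section as a DataWeave map built from the incoming payload.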
Img4 – Database Connector
You have to configure the Database connector to insert data into PostgreSQL. Create a new Database Configuration with the following values:
- Configuration Name: Database
- Connection Type: Generic Connection
- URL: the JDBC URL of your hosted database (an example format is shown below)
- Driver Class Name: org.postgresql.Driver
- User: your username
- Password: your password
Img5 – Database Configuration
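For the URL field, the generic connection expects a JDBC URL. For a PostgreSQL database hosted on Heroku it typically takes the shape below, where host, port and database name come from your Heroku credentials and SSL is usually required:
jdbc:postgresql://<host>:<port>/<database>?sslmode=require
Remember that a generic connection also needs the PostgreSQL JDBC driver added as a dependency of your Mule project, otherwise org.postgresql.Driver cannot be loaded.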
Step 3 Database Connector:
Pull the data from the PostgreSQL table (in my case, case_data) so that the recently archived records can be deleted from Salesforce.
SQL Query Text:
SELECT id FROM case_data
Img6 – Database Connector
Step 4 DataWeave Transform:
Transform the PostgreSQL data and pass the record IDs to Salesforce. Given below is the sample code:
Img7 – Transform Element
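The actual script is shown in the image above; as a rough sketch under my own assumptions (the id column from the query in Step 3, and a plain JSON list of IDs as the output), a DataWeave 2.0 transform along these lines would do the job:
%dw 2.0
output application/json
---
payload map ((row) -> row.id)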
Step 5 Salesforce Connector (Delete):
Use a Salesforce connector to delete the archived records from Salesforce. In the connector's Records to Delete IDs field, pass the list of record IDs produced by the transform above. The sample configuration is shown below:
Img8 – Salesforce Connector
Woohoo! You have successfully created your flow to push data into PostgreSQL. Why don’t you give it a try and tell me all about it?
Written by Bhushan Patil, Senior Systems Executive at Eternus Solutions