June 7, 2019

Data Archival Simplified With MuleSoft!

Data Archival is paramount for any growing organization in today's business landscape. As the business grows, so do the challenges of carefully managing and monitoring the ever-increasing data! Keeping tabs on this data has proved problematic for businesses that have never put an archiving system in place, which was the case with my client as well.

My client, a Telecom giant, had a huge volume of dormant records. This presented a myriad of challenges, listed below:
  1. Storage space and cost, since all call records had to be retained throughout their lifecycle
  2. Dormant records were vulnerable to data security breaches
  3. Locating old cases was time-consuming because the storage was disorganized
  4. Storing large volumes of dormant cases was degrading system performance

To solve these challenges, I developed an application that archives dormant cases into a separate database, using Salesforce, MuleSoft, Heroku and PostgreSQL as my technology stack. With Anypoint Platform and DataWeave 2.0, transforming data between formats such as XML, JSON, Java and CSV became a cakewalk!

How I Went About It!

To build this application, I first needed to connect Salesforce and MuleSoft. I used the ready-made, built-in Salesforce connector and ended up saving 67% of the integration time. I then pushed dormant cases from Salesforce to MuleSoft by querying the data through the Salesforce connector, which gave me the data in MuleSoft in JSON format. Next, I transformed the data into a different format using DataWeave 2.0. My final task was to push the data from MuleSoft into a separate database. In earlier versions of Mule, dedicated connectors were available for PostgreSQL (the Heroku database) as well; in the latest release, however, MuleSoft abstracted these away. To my great relief, a generic Database connector was available, which solved my quandary completely. I simply configured that connector, created a table in the database and pushed the dormant records into it. If I had to do all of this through hand-written code, I would probably have spent 3x the time!
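For the query side, one way to pull dormant cases is the Salesforce connector's Query operation. The sketch below is only a rough illustration: the config name Salesforce_Config and the WHERE clause (treating cases that have been closed and untouched for a year as "dormant") are my own assumptions, so substitute your actual criteria.

<!-- Illustrative sketch: query dormant cases from Salesforce.
     The dormancy criteria and the config name are assumptions. -->
<salesforce:query config-ref="Salesforce_Config" doc:name="Query dormant cases">
    <salesforce:salesforce-query><![CDATA[
        SELECT Id, AccountId
        FROM Case
        WHERE IsClosed = true AND LastModifiedDate < LAST_N_DAYS:365
    ]]></salesforce:salesforce-query>
</salesforce:query>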

Step By Step Guide

Let me now show you how to replicate what I did.

Img1 – Design Center Overview


Create Your First Project And Flow

Create your first flow with an HTTP Listener to accept data from Salesforce. Apart from the HTTP Listener, you need a Database connector to push data into the database hosted on Heroku, a second Database connector to pull the inserted data back and confirm that the insert succeeded, a DataWeave Transform component to transform the data into JSON format, and a Salesforce connector to delete the newly archived cases from Salesforce. A sketch of the assembled flow is given after the image below.

Img2 – Archive Case Flow
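For orientation, here is a minimal sketch of what the assembled flow might look like in Mule XML. Design Center builds the equivalent configuration visually; the config names (HTTP_Listener_config, Database_Config, Salesforce_Config) are placeholders I have chosen, and each step is detailed in the sections below.

<flow name="archive-case-flow">
    <!-- Step 1: accept the case data that Salesforce posts to /case -->
    <http:listener config-ref="HTTP_Listener_config" path="/case"/>
    <!-- Step 2: insert the cases into the case_data table on Heroku PostgreSQL
         (input parameters omitted here; see Step 2 below) -->
    <db:insert config-ref="Database_Config">
        <db:sql><![CDATA[INSERT INTO case_data (id, account) VALUES (:id, :account)]]></db:sql>
    </db:insert>
    <!-- Step 3: read the inserted rows back to confirm the archive succeeded -->
    <db:select config-ref="Database_Config">
        <db:sql><![CDATA[SELECT id FROM case_data]]></db:sql>
    </db:select>
    <!-- Step 4: transform the rows into a JSON array of record IDs -->
    <ee:transform>
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
payload.*id]]></ee:set-payload>
        </ee:message>
    </ee:transform>
    <!-- Step 5: delete the archived cases from Salesforce -->
    <salesforce:delete config-ref="Salesforce_Config"/>
</flow>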



Step 1 HTTP Listener: For the HTTP Listener, set the Path to /case.
Img3 – HTTP Listener
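In XML, the listener and its configuration might look something like this. The host, port and config name are typical defaults rather than values from the post, so adjust them to your environment.

<http:listener-config name="HTTP_Listener_config">
    <!-- host and port are assumed defaults; change them to suit your deployment -->
    <http:listener-connection host="0.0.0.0" port="8081"/>
</http:listener-config>

<!-- The listener itself, as it appears at the start of the flow -->
<http:listener config-ref="HTTP_Listener_config" path="/case" doc:name="Listener"/>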


Step 2 Database Connector: Map your Salesforce data to the PostgreSQL table hosted on the Heroku server. I created a table named case_data with a few columns. Given below is the sample code for the transformation.
Input Parameters:
%dw 2.0
output application/java
---
payload map {
    'id': $.Id,
    'account': $.AccountId
}

SQL Query Text:
INSERT INTO case_data
(
id,
account
) VALUES (
:id,
:account
)

Img4 – Database Connector
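If you prefer to see this step in XML, an insert along these lines could work. One thing to note: the Insert operation's input parameters expect a single map of column values, so when Salesforce sends an array of cases you would typically wrap the insert in a For Each scope (as below) or use the connector's Bulk insert operation instead. This is a sketch under that assumption, not the exact configuration from the post.

<!-- Sketch: one insert per case, assuming the incoming payload is an array of cases -->
<foreach collection="#[payload]">
    <db:insert config-ref="Database_Config" doc:name="Insert case">
        <db:sql><![CDATA[
            INSERT INTO case_data (id, account) VALUES (:id, :account)
        ]]></db:sql>
        <db:input-parameters><![CDATA[#[{
            id: payload.Id,
            account: payload.AccountId
        }]]]></db:input-parameters>
    </db:insert>
</foreach>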


Step 2.1: Configure the Database connector to insert data into PostgreSQL. Create a new Database Configuration with the following values (a sample XML configuration is shown after the image below):
  1. Configuration Name: Database
  2. Connection Type: Generic Connection
  3. URL: URL to your hosted database
  4. Driver Class Name: org.postgresql.Driver
  5. User: your username
  6. Password: your password


Img5 – Database Configuration
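A generic connection for the Heroku-hosted PostgreSQL database might be configured like this in XML. The URL shape, the sslmode=require parameter (Heroku databases normally require SSL) and the property placeholders are assumptions on my part; copy the actual host, database name and credentials from your Heroku database settings.

<db:config name="Database_Config">
    <!-- Replace the placeholders with the values from your Heroku database credentials page -->
    <db:generic-connection
        url="jdbc:postgresql://YOUR-HOST:5432/YOUR-DATABASE?sslmode=require"
        driverClassName="org.postgresql.Driver"
        user="${db.user}"
        password="${db.password}"/>
</db:config>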


Step 3 Database Connector: Pull the data back from the PostgreSQL table (in my case, case_data) so that the recently archived cases can be deleted from Salesforce.
SQL Query Text: SELECT id FROM case_data
Img6 – Database Connector
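The equivalent XML for this Select operation is short; Database_Config is the placeholder name from the earlier sketches. The result is a list of rows, each a map keyed by column name.

<db:select config-ref="Database_Config" doc:name="Select archived ids">
    <db:sql><![CDATA[SELECT id FROM case_data]]></db:sql>
</db:select>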


Step 4 DataWeave Transform: Transform the PostgreSQL result and pass the record IDs to Salesforce. The expression payload.*id collects the id value from every returned row into a single array. Given below is the sample code:
Transformation Script:
%dw 2.0
output application/json
---
payload.*id

Img7 – Transform Element
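In the flow XML, that script sits inside a Transform Message component. The sample input and output in the comments are made-up values, included only to show the shape of the data before and after the transformation.

<!-- Example input from the Select step:  [{"id": "5003000000D8cuI"}, {"id": "5003000000D8cuJ"}] -->
<!-- Example output passed to Salesforce: ["5003000000D8cuI", "5003000000D8cuJ"] -->
<ee:transform doc:name="Transform Message">
    <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
payload.*id]]></ee:set-payload>
    </ee:message>
</ee:transform>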


Step 5 Salesforce Connector (Delete): Use the Salesforce connector's Delete operation to remove the archived records from Salesforce. In the connector properties, set the following:
Records to Delete IDs: payload
Img8 – Salesforce Connector
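The Delete operation itself is a single component. I have not reproduced the exact XML for its ID list because the generated child elements vary with the connector version; conceptually it looks like the sketch below, with the "Records to Delete IDs" field set to the incoming payload.

<!-- Sketch: the "Records to Delete IDs" field is set to the array of IDs
     produced by the Transform step, i.e. the expression #[payload] -->
<salesforce:delete config-ref="Salesforce_Config" doc:name="Delete archived cases"/>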


Woohoo! You have successfully created your flow to push data into PostgreSQL. Why don’t you give it a try and tell me all about it?



Written by Bhushan Patil, Senior Systems Executive at Eternus Solutions
