Helical Insight for ETL Tester (QA)

Let’s take an example: your ETL developer has just written an ETL script that loads data into the Data Warehouse, and a few million records have been inserted.

For QA, it is extremely difficult and time consuming to test all of those millions of records. Moreover, there are other problems as well:

  • Some testers may not be well versed in writing complex SQL queries.
  • Multiple data sources may be involved in building the Data Warehouse.
  • Writing separate queries against each data source and then combining them into one result set is a laborious task.

Now think of Helical Insight as a friend that does most of these tasks for you, letting you focus on critical thinking rather than staring at data all day. Wondering how Helical Insight can help you achieve this?

Here is your answer,

  • Helical Insight can communicate with any of your data sources, such as CSV, Excel, MySQL, Oracle, SQL Server, Postgres, etc.
  • Helical Insight has a drag and drop feature for preparing your test cases, without requiring prior knowledge of SQL. It generates the SQL queries for you, including the joins between multiple entities. Because it understands your data structure, it can generate queries across different data sources, whether a data warehouse or a normalized database.
  • Using this drag and drop facility you can capture each test case as a report. For example, a report that counts the records in a dimension: you can save this report and then compare it against the source system by creating a separate report there. In a migration project the counts should match exactly; if they do not, something is wrong with the ETL script.
  • Once you have saved these reports (test cases), you can re-execute them again and again to verify your data. In short, Helical Insight is a very useful tool with features such as generating SQL queries for any database or data source, executing those queries anywhere, and saving the reports / test cases / SQL queries for future use.
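The count-reconciliation test case described above can be sketched in a few lines of code. This is a minimal illustration only, using in-memory SQLite and made-up table names (`customers`, `dim_customer`); in a real project the connections would point at the actual source system and warehouse.

```python
import sqlite3

# Hypothetical source and warehouse connections; in practice these would
# point at the real systems (e.g. a MySQL source and a Postgres warehouse).
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

# Seed sample data so the sketch runs end to end.
source.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
source.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "A"), (2, "B"), (3, "C")])
target.execute("CREATE TABLE dim_customer (id INTEGER, name TEXT)")
target.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                   [(1, "A"), (2, "B"), (3, "C")])

def count_matches(src_conn, tgt_conn, src_table, tgt_table):
    """Compare row counts between a source table and its warehouse twin."""
    src_count = src_conn.execute(f"SELECT COUNT(*) FROM {src_table}").fetchone()[0]
    tgt_count = tgt_conn.execute(f"SELECT COUNT(*) FROM {tgt_table}").fetchone()[0]
    return src_count == tgt_count, src_count, tgt_count

ok, src_n, tgt_n = count_matches(source, target, "customers", "dim_customer")
print(ok, src_n, tgt_n)
```

Saved as a report in Helical Insight, the equivalent of `count_matches` can simply be re-run after every ETL load.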

Case Study Report on CA Technologies


Customer

CA Technologies

Geographical Location

US, India

Tools Used

Database : Microsoft SQL Server

BI tool : Jaspersoft Enterprise Edition 5.6

Company Overview

CA Technologies, a Fortune 500 company with more than 75,000 employees, provides IT services and products. CA software and solutions help customers drive productivity, deliver a differentiated user experience and create growth opportunities. The portfolio of solutions that CA Technologies offers covers IT portfolio investments, services and asset management, application delivery, performance optimization and security. CA Technologies also delivers on multiple platforms: mobile, private and public cloud, distributed and mainframe.

Problem Statement

CA Technologies develops an IT service management (ITSM) product and had chosen Jaspersoft to provide BI capabilities inside the tool. CA's customers use this application to manage their IT service desks. CA Technologies wanted advanced D3 visualizations to be used inside Jaspersoft and then embedded inside this application. Some Asset Management specific reports also had to be developed.

Solution Developed

With extensive experience in data visualization technologies and in different charting engines including D3, we analyzed the requirement and identified which D3 charts were to be used. We then developed those D3 charts using the client's data and integrated them inside Jaspersoft.

 

The D3 charts developed included:

  • Chord Diagram
  • Force Directed Chart
  • Calendar View Chart
  • Bubble Chart

After developing these charts, we seamlessly integrated these advanced visualizations inside Jaspersoft via web services.

In addition, several asset management reports were developed, such as:

  1. Asset Summary Report
  2. Native Discovery Coverage Report
  3. Asset Reconciliation Report
  4. Missing Asset Report

Database Work

Microsoft SQL Server was used to store the information, which was later used for chart generation.

Solution

A BI stack was developed for the product, including advanced D3 visualizations and their seamless integration. The solution, covering both chart and report generation, was successfully delivered. It helped users analyze complex data and identify the scope for process improvement in their organization; customer satisfaction levels and business both increased.

Business Intelligence in Human Resource


Business Intelligence 

Business Intelligence refers to the ability to use information to gain a competitive edge, collecting business data and turning it into information primarily through querying, reporting, and online analytical processing. It is highly capable of handling large amounts of unstructured data to help identify, develop and create new strategic business opportunities.

Business Intelligence includes data management methods for planning, collecting, storing, and structuring data into data warehouses and data marts for clustering, classification, segmentation, and prediction.

Business intelligence plays a crucial role in achieving a competitive edge in a challenging economy. The data collected includes a copy of analytical data that facilitates the decision support which serves business intelligence.

Human Resource Management

Human resource management is a function in organizations designed to maximize employee performance in service of their employer’s strategic objectives. It is primarily concerned with how people are managed within organizations, focusing on policies and systems. HR departments and units in organizations are typically responsible for a number of activities including employee recruitment, training and development, and rewarding.

Importance of Business Intelligence in Human Resource

Today, analyzing the demographics of a workforce has become an increasingly important part of the HR function. Companies in traditional markets face the problem of an aging workforce, and there is often intense competition for the best new talent.

Human Resource systems globally contain masses of data. An HR system manages information such as employee profiles, appraisals, compensation, benefits, etc. HR must analyze the key skill sets and demographics of the existing workforce, assess whether it is helping meet business targets, and then identify whether things are on track to help the company progress.

Business Intelligence is potentially the missing link that turns those swarms of data into valuable information, informing decisions at all levels within a company. One of the most important features of modern BI is the ability to tailor access to sensitive data.

Moreover, a BI solution contains hundreds of pre-defined metrics (skills by location, function, grade; employee performance by function, demographics, location; costing of HR functions such as recruiting and training; benefits costing; etc.), assembled into meaningful charts, dashboards, scorecards and reports. The solution should let the HR manager access the information they need with little effort. The user should be able to slice and dice the information through the charts and dashboards and easily drill down to the underlying data. A good BI solution makes effective use of Key Performance Indicators (KPIs) and helps users answer important questions, identify emerging trends and predict risks and results.

 

Benefits of using Business Intelligence in HR system
  • Workforce Optimization : Using analysis throughout the company, top and bottom performers can be identified and then moved up or down. This could be used to inform future spending on training and to identify learning needs of employees.
  • Optimize Compensation : BI will help in analyzing the salary trends, group wise salary comparison, salary distribution and skews by grade, performance, etc. It will also help in evaluating benefits plan for maximum value.
  • Manage Recruitment : BI will help in analyzing the time and cost by recruitment method. Also they can analyze the recruitment success rates, applicant statistics, dropout reasons etc.
  • Analyze Workforce Composition : It will help in understanding workforce trends by job, geography, business areas, user-defined categories etc.
  • Staff Requirements Forecasting : This is based on organization goals, turnover prediction, Staff demographics, attrition etc.

Following are the key KPIs for the HR function:

HR functions KPIs
  • % external hire rate
  • Net hire rate
  • % new position recruitment rate
  • new position recruitment ratio
  • Applicant Ratio
  • % newly recruited employees screened
  • $ Average interviewing Cost
  • % Actual cost of hire
  • Average feedback time on candidates
  • % Employee probation reports outstanding
  • % Internal appointments above level
  • % Internal hire rate
  • % Internal placement rate
  • % Cross functional mobility
  • % Employee transfer rate
  • Average interviews per hire
  • % Referral rate
  • Interviewee ratio
  • $ cost per hire
  • $ Average signing bonus expense
  • Average open time of job positions
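
A few of the KPIs above are simple enough to compute directly from recruitment records. The sketch below is illustrative only: the record fields (`source`, `cost`) and sample values are assumptions, not data from any real HR system.

```python
# Hypothetical recruitment records; field names and values are illustrative.
hires = [
    {"candidate": "A", "source": "external", "cost": 1200},
    {"candidate": "B", "source": "internal", "cost": 300},
    {"candidate": "C", "source": "external", "cost": 900},
    {"candidate": "D", "source": "referral", "cost": 150},
]

# "% external hire rate": share of hires sourced externally.
external_hire_rate = sum(h["source"] == "external" for h in hires) / len(hires) * 100

# "$ cost per hire": average recruitment cost per hire.
cost_per_hire = sum(h["cost"] for h in hires) / len(hires)

print(f"% external hire rate: {external_hire_rate:.1f}")
print(f"$ cost per hire: {cost_per_hire:.2f}")
```

In a BI tool these aggregations would be expressed as report measures rather than hand-written code, but the arithmetic is the same.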

Conclusion

HR departments struggle to make sense of disparate and overlapping data sources such as ERP systems, spreadsheets, payroll and benefits data, employee surveys and industry benchmarks. These barriers can be brought down by using business intelligence, which helps HR design effective compensation and benefits and analyze overtime hours and costs by department. It also helps measure employee productivity and performance and correlate that information with employees' skill sets, to identify the skills that contribute to good performance.


 

Case Study Report on Sage Human Capital



Customer :

Sage Human Capital, HR Company

Geographical Location :

San Bruno, California

Tools Used :

DB used : MySQL, Postgres

ETL tool : Talend Open Studio

BI Tool : Jaspersoft Professional

Cloud Service : Amazon Web Services (AWS)

Other software : MaxHire, RingCentral

Company Overview :

Sage Human Capital is a leading US human resource company. They have revolutionized the HR recruiting industry by introducing innovative concepts like “talent as a service” and “SmartSource”, and boast a clientele including Accenture, Jaspersoft, McAfee, Salesgenie, Adidas and Text100, to name a few.

Problem Statement :

Working in the HR domain, Sage receives resource requirements for specific job profiles with specific skill sets. Based on the skill set and other factors like location and experience, Sage searches for suitable candidates. Different teams come into the picture at different stages: the sourcing team is responsible for sourcing CVs from all the platforms, while key account managers (KAMs) are responsible for approaching candidates via calls and other mediums, taking interviews, follow-ups and shortlisting, after which the result is shared with the client for the final process.

The problems faced by Sage were:

  • Lack of transparency of information to client regarding the job progress
  • Lack of trust on the performance data
  • Tracking the candidate progress by clients
  • Checking the performance of the sourcing team and key account managers in terms of actual hiring.

Sage needed a BI platform which could be used by different stakeholders, such as clients and the Sage internal team, with different access privileges. Once these teams log in, they see their specific KPIs related to jobs, hires, interviews, etc.

Solution Developed :

Helical did handholding of the client and understood the requirements thoroughly. We then helped decide which BI suite to go for, choosing the Jaspersoft Professional edition hosted on AWS to develop the solution.

We also suggested creating a separate reporting database using Postgres, which helps clients fetch reports and dashboards with minimal lag and much better performance.

Different kinds of reports were developed with very interactive visualizations. Some of them are:

  • Weekly Calling Report showing the number of calls made by different KAMs for different profiles
  • Candidate Status Report showing profile-wise status: how many candidates approached, how many interviews done, how many declined, etc.
  • Client engagement Report
  • Weekly Summary Report, etc.

Many dashboards were also developed, including:

  • Sourcing Dashboard showing work done by sourcing team in getting CVs
  • Recruitment Manager Dashboard
  • Company Dashboard

Information related to calls made comes from RingCentral, while information related to candidates (their skill set, location, company, experience, etc.) comes from MaxHire.

All of these reports and dashboards were very interactive, with input parameters. Based on those input parameters (company name, date range, relationship manager, etc.) the entire report/dashboard data changes. Various kinds of visualizations were also used, like bar charts, candlesticks, gauges and histograms, to improve readability.

These reports and dashboards also had drill-down ability: as soon as you click on any panel, another panel appears with detailed information, enabling the customer to do a deeper analysis. Via email bursting, many of the reports were sent to different managers' inboxes in PDF/DOC format.

Multi-tenancy and proper user and role management were also implemented. Sage could therefore give each of their customers a separate user id and password: customers could access the hosted HR BI solution but were restricted to seeing only their own data. The entire HR BI solution was hosted on Amazon Web Services.
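The core idea behind this row-level restriction can be sketched as follows. This is a minimal illustration using SQLite, not Jaspersoft's actual multi-tenancy mechanism; the `tenant_id` column and table name are assumptions for the example.

```python
import sqlite3

# Illustrative tenant-scoped data store: every row carries a tenant_id.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE candidate_status (tenant_id TEXT, candidate TEXT, status TEXT)")
db.executemany("INSERT INTO candidate_status VALUES (?, ?, ?)", [
    ("acme",   "A", "interviewed"),
    ("acme",   "B", "declined"),
    ("globex", "C", "hired"),
])

def report_for_tenant(conn, tenant_id):
    """Every report query is constrained to the logged-in customer's rows."""
    return conn.execute(
        "SELECT candidate, status FROM candidate_status WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

acme_rows = report_for_tenant(db, "acme")
print(acme_rows)  # only acme's candidates are visible
```

In the real deployment the tenant filter is applied by the BI server based on the logged-in user's organization, so customers never see each other's data.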

Much functionality not directly available in Jaspersoft was also implemented via custom coding (like saving dashboards, rejection charts, saving input controls, etc.). The entire BI solution was integrated inside their website and white-labeled according to the company's color themes, text fonts, etc.

Helical was also responsible for hosting this solution on AWS and for end to end performance tuning (SQL query optimization, cache memory increases, hardware upgrades, etc.).

ETL and DW Work :

Sage has primarily two data sources: MaxHire (MySQL) and RingCentral (Postgres). MaxHire stores information related to CVs, job profiles, skill sets, company details, candidate information, job postings, etc., whereas the RingCentral database holds information related to calls made by KAMs, responses, durations, etc.

Helical was responsible for writing the ETL and creating a data warehouse. The ETL tool used was Talend Open Studio, an open source tool. Different types of ETL jobs were written, such as Initial Load ETL, Change Data Capture (CDC) ETL and Maintenance ETL. These jobs executed at different frequencies, some daily and some weekly.

ETL Automation Activity Details :

To transfer data from the MaxHire database to Postgres, we ran the ETL process using Talend Open Studio. We performed several types of ETL to achieve the requirement:

  1. Initial Load ETL : We defined properties like the source and destination databases, required tables, data format, etc. The data is then transformed into the required format and finally loaded into the Postgres database.
  2. Change Data Capture (CDC) : Performed on a daily basis to check for data manipulation in the source database; if any has happened, the destination database is updated accordingly.
  3. Maintenance ETL : Performed on a weekly basis, but it checks the data from the start until the current date and updates the destination accordingly.
  4. RingCentral ETL : This process uploads the calling data records, which arrive in Excel format, into the Postgres database. It is performed on a weekly basis.
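
The CDC step above boils down to pulling only rows changed since the last run. A minimal sketch, assuming the source table carries a `last_modified` column (the table, columns and dates here are all illustrative, not the actual MaxHire schema):

```python
import sqlite3

# Illustrative source table with a modification timestamp per row.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, title TEXT, last_modified TEXT)")
src.executemany("INSERT INTO jobs VALUES (?, ?, ?)", [
    (1, "Engineer", "2015-01-01"),
    (2, "Analyst",  "2015-01-05"),
    (3, "Manager",  "2015-01-09"),
])

def capture_changes(conn, watermark):
    """Pull only rows modified since the last successful ETL run."""
    return conn.execute(
        "SELECT id, title FROM jobs WHERE last_modified > ?", (watermark,)
    ).fetchall()

# Suppose the previous run finished on 2015-01-04: only rows 2 and 3 qualify.
changed = capture_changes(src, "2015-01-04")
print(changed)
```

After each run the watermark is advanced, so the daily CDC job touches only the delta instead of reloading the whole table; the weekly Maintenance ETL then re-checks everything from the start as a safety net.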

Activities also include:

  • Job scheduling : Updates from the source database to the destination database are scheduled to run on a weekly basis.

The HR BI solution built by Helical is used vigorously by Sage's internal team as well as its clients, and Sage uses this data analysis capability as one of its USPs to sell more of its services. Not only has the platform helped them serve their clients better, they can also monitor the performance of their team members more efficiently. Sage's client retention has improved drastically, and they have become heavy users of the data analysis capabilities of the BI platform.

Snapshots for BI solution :

Login Page

Here we used features like multi-tenancy and white-labelling.

Some dashboards are:

  • Recruiting Dashboard


  • Client Dashboard


  • Sourcing Dashboard


  • Home Dashboard


  • Export/saving of reports in various formats


Testimonial :

“Team Helical really did an excellent job building a world-class BI solution for us. They helped us choose the right platform, work through our business cases and implement efficiently and effectively. They’re offshore so we were a bit apprehensive; however, they work during US hours and worked quickly. I would highly recommend Nitin and his team to anyone doing BI work.”

–  Paul Grewal, CEO, Sage Human Capital


Case Study Report on Envision Global Leadership (EGL) using Helical Dashboard Insights (HDI)


 

Customer :

Envision Global Leadership (EGL)

Geographical Location :

Los Angeles, California

Tools Used :

DB : MySQL

ETL : Talend Open Studio

BI : Helical Dashboard Insight (HDI)

 

Company Overview :

Envision Global Leadership (EGL) is an institute with expertise in leadership development training. Envision helps organizations seeking leadership development by providing training for mid to upper level executives. They also provide advanced level training and certifications for executive coaching skills. Moreover, Envision provides thoughtful insights and tools for organizations to improve organizational productivity.

Problem Statement :

Envision Global Leadership (EGL) provides training followed by an assessment report for individuals. For this, a test is conducted in the form of surveys using the LimeSurvey platform. The individual answers the questionnaire and is rated between 1 and 5 on different parameters (like transparency, leadership, emotional quotient, etc.). The information is stored in the LimeSurvey database.

Envision needed a platform that could fetch data from LimeSurvey and create reports with data visualization, helping the organization understand strengths and weaknesses and then invest in skill set building.

Following are the problems Envision faced earlier:

  • Envision's requirements were not met by other BI tools.
  • Difficulty in utilizing the data to find the appropriate person for a job.
  • Generated reports did not meet Envision's requirements.
  • The required charts were not feasible to create.
  • Data stored in the LimeSurvey application had to be converted into their own format for reports.

Solution Developed :

Helical did handholding of the client and understood the requirements thoroughly. We then suggested creating a Reporting Database and building on it with Helical Dashboard Insights (HDI), a developer friendly Business Intelligence framework through which the client can view reports with data visualization, along with other features like export to PDF, email scheduling, user and role management, etc.

We developed different kinds of reports with interactive visualizations. Some of them are:

  • Self Assessment Report
  • Goal Setting Report
  • Multi Rater Leadership Report
  • Rollup Report
  • Comparison report, etc

All of the reports are very interactive. For data visualization, HDI allows us to use not only commonly used charts but also advanced statistical and scientific charts. Being a developer friendly platform, HDI lets us not only create any chart existing in the market but also create altogether new charts. We did exactly this for Envision, creating a horizontal candlestick whisker chart with additional information; this chart was introduced for the first time in a report for analysis.

In the above candlestick chart, factors such as Emotional Stability, Intellect and Agreeableness are rated between 1 and 5. Say 100 people contributed to the survey: a candlestick chart is generated separately for each user after login. The chart differs per user in the location of the black dot, while the horizontal stick size, maximum, minimum and average values are common to all 100 contributors.

Here the black dot represents the self-rating given by the individual; the maximum and minimum represent the highest and lowest ratings given for the factor; the centre point of the horizontal box represents the average rating given by individuals; and the left and right endpoints of the brown and green boxes represent -1 SD to +1 SD, where SD is standard deviation. This chart is mainly used for representing large data sets.
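
The statistics feeding such a chart are straightforward to compute. Below is an illustrative sketch with made-up ratings for a single factor; the actual chart is rendered by HDI, and only the derived values (min, max, mean, ±1 SD, self-rating) are shown here.

```python
import statistics

# Illustrative survey ratings (1-5) for one factor across all respondents.
ratings = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]
self_rating = 4  # the logged-in user's own rating (the "black dot")

mean = statistics.mean(ratings)
sd = statistics.pstdev(ratings)  # population standard deviation

chart_inputs = {
    "min": min(ratings),                  # lowest rating given
    "max": max(ratings),                  # highest rating given
    "mean": round(mean, 2),               # centre of the horizontal box
    "minus_1_sd": round(mean - sd, 2),    # left end of the whisker box
    "plus_1_sd": round(mean + sd, 2),     # right end of the whisker box
    "self": self_rating,                  # position of the black dot
}
print(chart_inputs)
```

Each logged-in user gets the same `min`, `max`, `mean` and SD band, with only the `self` value varying, which is exactly why the chart differs per user only in the black dot's position.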

Using HDI, Helical provided user & role management, through which specific reports are viewed only by the appropriate individuals of the organization, as shown in the snapshot below:

HDI login

An individual from organization “A” types the organization name in the organization box, then the user name and password to log in. The individual can then view their self reports. Only specific people from the organization are allowed to view all performance reports, in order to take strategic decisions.

  • HDI also provides multi-tenancy: individuals from different organizations can log in by typing their organization name, username and password.
  • Using the HDI platform, an admin user of the organization can edit content using the editing engine, helping produce reports in a customized manner.
  • The HDI platform lets individual report files be exported, downloaded and e-mail scheduled. When exported via HDI, pixel perfect reports are generated, with page indexing, pagination, etc. taken care of automatically.
  • HDI is responsive to screen size: on whatever device you view the reports and data visualizations, the solution adjusts to the screen size and the end user experience is not hampered.

  • Editing Engine : The HDI platform lets the organization's admin user edit content (updating, deleting, etc.) through the editing engine. An admin user from a non-technical background can easily use it, reducing dependency on technical people; the admin user has to follow the steps given below. Thus the HDI platform helps the user generate reports in a customized manner.

        Step 1:

       Step 2:

ETL and Other database work:

Envision Global Leadership (EGL) has two data sources, the EGL database and the LimeSurvey database (both MySQL). EGL stores information about the candidates trained via their portal, and the LimeSurvey database stores the candidates' survey results.

Helical was responsible for writing the ETL and creating the reporting database. The ETL tool used was Talend, an open source tool, and MySQL was used for the reporting database. Different types of ETL jobs were written, like Initial Load ETL, Change Data Capture (CDC) ETL and Maintenance ETL, executing at different frequencies, some daily and some weekly. Also, one of the data sources was in Excel format; we ran ETL on it and loaded the required tables into the reporting database.

The advantages of reporting database creation are :

  • Excellent performance of the reports and dashboards (report rendering response time under 1 second)
  • Not affecting the transactional system database
  • Reporting database schema designed in such a way that in future it is ready to accommodate any changes, new tables, new surveys, etc.

ETL Automation Activity Details :

On HDI we created a user friendly UI through which a business user can upload data, schedule ETL and define data connections themselves, reducing the dependency on developers.

  • We defined the various properties, like the source and destination databases, required to run the ETL process.
  • The ETL process runs once the valid properties are defined.
  • The ETL job pulls the data from the source database, and in the transformation step the data is manipulated as per the requirement.
  • The data is then loaded into the destination database.

Other activities include:

  • Job scheduling : Updates from the source database to the destination database are performed on a schedule.

Solution :

The solution provided by Helical IT Solutions using Helical Dashboard Insight is used by Envision Global Leadership (EGL), helping the organization provide more accurate results to users. It helps the organization identify the scope of improvement for employees who could become the future leaders of the organization.

From the EGL point of view, HDI helps EGL get the best out of their data with high quality reports. Helical thus successfully fulfilled all of Envision's requirements using the Helical Dashboard Insights (HDI) BI tool.

Snapshots for BI solution using Helical Dashboard Insight

Candle Stick Chart for various factors

Radar Chart

Comparison Chart

Comparison Chart 2 (>2 person comparison)

Ways of saving the report in various formats

Introduction to “Helical Insight”

 

Helical Insight is our own BI tool.

Helical Insight consists of 5 different layers:


1) Templating Layer: The dashboard is defined in this layer. It is the end user interaction layer, and it is related to the JavaScript framework layer.

2) JavaScript Framework Layer: All interaction with the Templating layer is done by this layer. It also communicates with the Data layer and the Visualization layer. The combination of the Templating layer and the JavaScript framework layer forms the front end of a dashboard.

3) Data Layer: The main role of the Data layer is to provide all the data-related information required by the front end.

4) Visualization Layer: This layer generates visualizations and provides them to the front end.

5) Background Services: This layer manages the communication between the front end, the Data layer and the Visualization layer.

Helical Insight uses several different file extensions:

First, create your own folder, which can contain multiple dashboards.

1) .EFW file: This file is required for recognition of a dashboard; it contains metadata about the dashboard. In this file we define the template file (.html/.js) we require in the template tag.

<?xml version="1.0" encoding="UTF-8" ?>
<efw>
  <title>HDI Demo On LocalHost</title>
  <author>Sayali</author>
  <description>Sample Dashboard</description>
  <icon>images/image.ico</icon>
  <template>test.html</template>
  <visible>true</visible>
  <style>clean</style>
</efw>

2) .EFWD file: This file contains the data-related definitions: the data connections (data sources) and the queries.

<EFWD>
  <DataSources>
    <Connection id="1" type="sql.jdbc">
      <Driver>com.mysql.jdbc.Driver</Driver>
      <Url>jdbc:mysql://192.168.2.9:3306/output_db_1216</Url>
      <User>devuser</User>
      <Pass>devuser</Pass>
    </Connection>
  </DataSources>

  <DataMaps>
    <DataMap id="1" connection="1" type="sql">
      <Name>Sql Query on SampleData - Jdbc</Name>
      <Query>
        <![CDATA[select distinct sector as sector, sum(promo_value) as val from Subbrand_Level where promo_value > 0 and sector in (${sector}) group by sector;]]>
      </Query>
      <Parameters>
        <Parameter name="sector" type="collection" default='""'/>
      </Parameters>
    </DataMap>
  </DataMaps>
</EFWD>

 

 

3) .EFWVF file: This file defines the visualization of the dashboard; it contains the charts, tables, etc. It is an .xml file used when writing JavaScript chart components.

<charts>
  <chart id="1">
    <prop>
      <name>Pie chart</name>
      <type>custom</type>
      <datasource>1</datasource>
      <script>
        <![CDATA[console.log(data);]]>
      </script>
    </prop>
  </chart>
</charts>

 

4) Template file (.html): It defines the components used in the dashboard. Component configuration is done in JavaScript: variables are set with Dashboard.setVariable(), components are collected into an array, and Dashboard.init() is called with that array.

var component = {};
var components = [];
Dashboard.init(components);
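Putting the pieces together, a minimal template script might look like the sketch below. The select component, the element ids, and the .efwvf file name are made up for illustration; Dashboard is the object HDI provides on the page, stubbed here only so the snippet can run standalone:

```javascript
// Stand-in for the Dashboard object HDI provides on the page.
// In a real template you would NOT define this yourself.
var Dashboard = {
  variables: {},
  setVariable: function (name, value) { this.variables[name] = value; },
  init: function (components) { this.components = components; }
};

// A hypothetical select component feeding the "sector" parameter.
var sectorSelect = {
  name: "sectorSelect",
  type: "select",
  parameters: ["sector"],
  htmlElementId: "#sectorSelectDiv",
  executeAtStart: true
};

// A hypothetical chart component listening to that parameter.
var chart = {
  name: "chart",
  type: "chart",
  listeners: ["sector"],
  vf: { id: "1", file: "test.efwvf" },
  htmlElementId: "#chartDiv",
  executeAtStart: true
};

Dashboard.setVariable("sector", "''");
var components = [sectorSelect, chart];
Dashboard.init(components);
```

The vf.id here points at the chart id inside the .EFWVF file, and the parameter name matches the <Parameter> defined in the .EFWD file.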

 

 

 

Thanks,

Sayali Mahale

HDI Installation guide

HDI Installation Guide

Here, we are going to discuss the installation process of HDI. It is very simple; you just need to follow the steps below:

STEP-1

Get or download the latest .war file of HDI.

STEP-2

Copy this war file to the {TOMCAT_HOME}/webapps folder. After a few seconds, you will see that Tomcat has created a folder with the same name as the copied war file. For example, after deploying the hdi.war file, an hdi folder will be created at the same location.

STEP-3

Open {TOMCAT_HOME}/webapps/hdi/WEB-INF/classes/project.properties file with any file editor.

i) Find the settingPath parameter and replace its value with the file location of setting.xml.

settingPath ={setting.xml Location}

e.g.

settingPath = C:/EFW/System/Admin/setting.xml

Description:

This parameter indicates location of setting.xml file, which comes under EFW folder. This setting.xml file contains EFW project settings.

ii) Find the schedularPath parameter and replace its value with the file location of scheduling.xml.

 

schedularPath = {scheduling.xml location}

e.g.

schedularPath = C:/EFW/System/scheduling.xml

Description:

scheduling.xml file contains information of scheduled reports and related data.

STEP-4:

Open {TOMCAT_HOME}/webapps/hdi/WEB-INF/classes/log4j.properties file in any file editor:

Find log4j.appender.file.File parameter, and replace with:

log4j.appender.file.File= {Location where you want to generate application logs for debugging}

e.g.

log4j.appender.file.File=C:/EFW/loggingFile.log
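Steps 3 and 4 can also be scripted. The sketch below runs against a scratch directory so it is safe to try; in a real install you would point TOMCAT_HOME at your actual Tomcat home and skip the seeding of placeholder files (the paths and values are the same examples used above):

```shell
# Demo against a scratch copy; point TOMCAT_HOME at your real install instead.
TOMCAT_HOME="${TOMCAT_HOME:-/tmp/hdi-demo}"
CLASSES="$TOMCAT_HOME/webapps/hdi/WEB-INF/classes"
mkdir -p "$CLASSES"

# Seed placeholder files (your real install already has these).
printf 'settingPath = CHANGEME\nschedularPath = CHANGEME\n' > "$CLASSES/project.properties"
printf 'log4j.appender.file.File= CHANGEME\n' > "$CLASSES/log4j.properties"

# Step 3: point settingPath and schedularPath at the EFW system files.
sed -i 's|^settingPath *=.*|settingPath = C:/EFW/System/Admin/setting.xml|' "$CLASSES/project.properties"
sed -i 's|^schedularPath *=.*|schedularPath = C:/EFW/System/scheduling.xml|' "$CLASSES/project.properties"

# Step 4: set the application log file location.
sed -i 's|^log4j.appender.file.File *=.*|log4j.appender.file.File=C:/EFW/loggingFile.log|' "$CLASSES/log4j.properties"
```

Note that sed -i as used here assumes GNU sed (standard on Linux).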

STEP-5:

Open {TOMCAT_HOME}/webapps/hdi/WEB-INF/spring-mvc-servlet.xml file with any file editor.

Configure the database in which the application (HDI) will store information such as user credentials, as shown in the following example, in the specified tag of the XML:

e.g.

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://localhost:3306/hdi"/>
    <property name="username" value="username"/>
    <property name="password" value="password"/>
</bean>

NOTE:

You can also configure database connection pooling (in the same XML as mentioned above), which the application will use for login-related connections.

License Deployment:

  • Get the HDI license file and copy it to {TOMCAT_HOME}/webapps/hdi/hdi.licence

NOTE:

  • Tomcat user should have R/W access on the war file.
  • Tomcat user should have R/W access on the HDI-SOLUTION directory (EFW).
  • Tomcat user should have R/W access on the LICENSE file.

EFW-Project Deployment:

  • Get EFW project from prescribed location.
  • Copy EFW Folder to suitable location.
  • Open setting.xml  with any file editor. You can find this file at location:

C:/EFW/System/Admin/setting.xml

  • Find <efwSolution> xml tag and give location of your EFW solution folder. E.g.

 

<efwSolution>C:\\EFW\\EFW</efwSolution>

 

Now we are done; HDI has been installed successfully. You can access it using

http://<host>:<port>/hdi

e.g.

http://localhost:8080/hdi

 

NOTE: Here, we assume the EFW folder is deployed at location C:/EFW/

 

 

Thanks

Sharad Sinha

Making a Simple Interactive Map in HDI (Helical Dashboard Insights)

 

Creating Interactive and Zoomable Map in HDI (Helical Dashboard Insight)

 
The goal of this blog is to show how to make a responsive, interactive, and zoomable map in HDI (Helical Dashboard Insights).

For creating the map in HDI, we are using D3.js, a JavaScript library.

The data to use:

A special geospatial file called a TopoJSON. Here we are going to use a file that comprises all US counties. Go to this link, copy the contents into a text file, and save it as "us.json" (or anything .json).

Since we have a county map of the USA, we will need some data that is broken down by county. The us.json file we are using only has counties drawn. For this tutorial we are using data from a query, in JSON format.

Whatever data you have, make sure there is a column that associates the information with a naming or ID standard that is also present in your map/TopoJSON.
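That association is worth checking up front: every id in your query result should resolve to a shape in the TopoJSON, or the county will render without data. A small standalone check (with made-up sample rows, not the real us.json) might look like:

```javascript
// Sample rows standing in for the query result, and a set of the
// ids present in the TopoJSON file.
const queryRows = [
  { id: "06037", name: "Los Angeles", votes: 12000 },
  { id: "36061", name: "New York", votes: 9000 },
  { id: "99999", name: "Nowhere", votes: 1 }   // deliberately unmatched
];
const topojsonIds = new Set(["06037", "36061", "17031"]);

// Rows whose id has no matching shape in the map file.
const unmatched = queryRows.filter(row => !topojsonIds.has(row.id));
console.log(unmatched.map(r => r.name)); // rows that would render without data
```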

Integrating Map in HDI:

To start integrating the map we have to change four files in HDI.

1) EFW file: The EFW contains the title, author, description, template name, and visibility of the dashboard.

2) HTML file: The HTML file name should be the same as that specified in the EFW file under the template section.

At the top of the HTML file we specify links to external resources and CSS properties.
Here we are using the 'topojson.js' external library, and it is specified in HDI as below:

<script src="getExternalResource.html?path=Map/topojson.js"></script>

And CSS used to create Map is as follows:

.states {
  fill: none;
  stroke: #fff;
  stroke-linejoin: round;
}
body {
 font-family: Arial, sans-serif;
}
.city {
 font: 10px sans-serif;
 font-weight: bold;
}
.legend {
 font-size: 12px;
}
div.tooltip {
 position: absolute;
 text-align: left;
 width: 150px;
 height: 25px;
 padding: 2px;
 font-size: 10px;
 background: #FFFFE0;
 border: 1px;
 border-radius: 8px;
 pointer-events: none;
}

We have to declare one map component in the "Map.html" file, and in this component we need to provide the link to the file where the map chart properties are defined.

3) EFWD File: The EFWD file contains the data source connection properties, such as the connection id and connection type. It also contains the URL of the database connection, and the username and password for the database.

The DataSource Details used in our demo is shown as below:-

<DataSources>
        <Connection id="1" type="sql.jdbc">
           <Driver>com.mysql.jdbc.Driver</Driver>
           <Url>jdbc:mysql://192.168.2.9:3306/sampledata</Url>
            <User>devuser</User>
            <Pass>devuser</Pass>
        </Connection>
    </DataSources>

A DataMap contains the map id, the connection, and the connection type. The map id is the same as that specified in the EFWVF file.

The query for the DataMap is specified in the <Query> tag, and its parameters in the <Parameters> tag.

<DataMap id="2" connection="1" type="sql" >
       <Name>Query for Tooltip </Name>
		<Query>
			<![CDATA[
					SELECT  cl.ID as id,cl.Name as name,sum(fact.votes) as votes
					FROM
					Voting_Summary as fact,region as r,contest as ct,county_list as cl
					where
					fact.region_id=r.region_id and
					fact.contest_id=ct.contest_id and
                   		        cl.Name = r.county	
			]]>
              </Query>
</DataMap>

4) EFWVF File:-

In the EFWVF file we first set the chart id and then the chart properties. For the map, properties such as the chart name, chart type, and chart data source are set between the <prop> tags.

"Path" refers to lines drawn as instructed by our TopoJSON file (us.json). Notice that .legend and .tooltip refer to objects we'll designate with our JavaScript, but we can still set what they'll look like here in the CSS.

You'll see a lot of "var =", which sets up the variables for the code. The first of these variables affect which values map to which colors, so changing them is an easy way to alter the appearance of the map (as is the CSS).

Colors are coded by RGB hex value. There are multiple ways to scale colors, but this is the one we'll go with here.

In the script we set up the map as below.

Setting the map size, position, and translation:

var width = 960,
 height = 500,
centered; 

Setting up the view:

var projection = d3.geo.albersUsa()
    .scale(1070)   // If scale is specified, this sets the projection’s scale factor to the specified value.
    .translate([width / 2, height / 2]);

Defining the map and legend colors:

var color_domain = [5000, 10000, 15000, 20000, 25000, 30000, 35000, 40000, 45000, 50000, 55000, 60000];
var ext_color_domain = [0, 5000, 10000, 15000, 20000, 25000, 30000, 35000, 40000, 45000, 50000, 55000, 60000];
var legend_labels = ["< 5000", "5000+", "10000+", "15000+", "20000+", "25000+", "30000+", "35000+", "40000+", "45000+", "50000+", "55000+", "60000+"];
var color = d3.scale.threshold()
    .domain(color_domain)
    .range(["#CCE0FF","#B2D1FF","#99C2FF","#80B2FF","#66A3FF","#4D94FF","#3385FF","#1975FF","#005CE6","#0052CC","#0047B2","#003D99","#003380","#002966"]);
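d3.scale.threshold maps a numeric value to the range entry of the first domain threshold it falls below. The lookup can be sketched in plain JavaScript to see which color bucket a value lands in (the helper below is illustrative, not d3's implementation, and uses a shortened domain):

```javascript
// Illustrative re-implementation of a threshold scale lookup.
function thresholdBucket(domain, range, value) {
  // Advance past every threshold the value meets or exceeds; values
  // beyond the last threshold fall into the final range entry.
  let i = 0;
  while (i < domain.length && value >= domain[i]) i++;
  return range[i];
}

const domain = [5000, 10000, 15000];
const range = ["#CCE0FF", "#B2D1FF", "#99C2FF", "#80B2FF"];

console.log(thresholdBucket(domain, range, 4200));   // "#CCE0FF"
console.log(thresholdBucket(domain, range, 12000));  // "#99C2FF"
```

This is why a threshold scale wants one more range entry than it has domain entries: n thresholds split the number line into n + 1 buckets.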

The following portion of code creates a new geographic path generator:


var path = d3.geo.path()
    .projection(projection);

The next block of code sets up our SVG window:

var svg = d3.select("#chart_4").append("svg")
.attr("viewBox", "0 0 " + width + " " + height)
 .style("margin", "10px auto");
var div = d3.select("#chart_4").append("div")
 .attr("class", "tooltip")
 .style("opacity", 0);
svg.append("rect")
    .attr("class", "background")
    .attr("viewBox", "0 0 " + width + " " + height)
    .on("click", clicked);

Our data file contains the JSON data returned from the query, and this data is used to populate the tooltip:

 var pairIdWithId = {};
 var pairNameWithId = {};
var pairVotesWithId = {};
 
data.forEach(function(d) {
 pairIdWithId[d.id] = +d.id;
 pairNameWithId[d.id] = d.name;
 pairVotesWithId[d.id] = d.votes; 
 });

Here d.id, d.name, and d.votes refer to the column headers of our query. Now we'll select the SVG objects we've created but not specified, and map our data onto them:

var g = svg.append("g");
 g.append("g")
 .attr("class", "county")
 .selectAll("path")
 .data(topojson.feature(data1, data1.objects.counties).features)
 .enter().append("path")
 .attr("d", path)
 .on("click", clicked)
 .style ( "fill" , function (d) {
 return color (pairIdWithId[d.id]);
 })
.style("opacity", 0.8)

This will draw each county as an object, each with its own values. Notice that we've named this class of object "county".

If we wanted to change the style of the counties in the CSS at the top, we could just refer to .county and make changes. The ".data" line associates information from our us.json file with the county objects.

Also important is that "color" refers to the function set above in the code. "Color" expects a number as input, but instead of a specific number we give it our container filled with pairs of ID numbers and values, using [d.id] to read in a value for each id number. The rest is what happens when the mouse glances over a county:


.on("mouseover", function(d) {
 d3.select(this).transition().duration(300).style("opacity", 1);
 div.transition().duration(300)
 .style("opacity", 1)
 div.text(pairNameWithId[d.id] + " : " + pairVotesWithId[d.id])
 .style("left", (d3.event.pageX) + "px")
 .style("top", (d3.event.pageY -30) + "px");
 })
 .on("mouseout", function() {
 d3.select(this)
 .transition().duration(300)
 .style("opacity", 0.8);
 div.transition().duration(300)
 .style("opacity", 0);
 })

If you want to change what each label is, make sure to adjust the variable “legend_labels.”


var legend = svg.selectAll("g.legend")
 .data(ext_color_domain)
 .enter().append("g")
 .attr("class", "legend");
 
var ls_w = 20, ls_h = 20;
legend.append("rect")
 .attr("x", 20)
 .attr("y", function(d, i){ return height - (i*ls_h) - 2*ls_h;})
 .attr("width", ls_w)
 .attr("height", ls_h)
 .style("fill", function(d, i) { return color(d); })
 .style("opacity", 0.8);
 
legend.append("text")
 .attr("x", 50)
 .attr("y", function(d, i){ return height - (i*ls_h) - ls_h - 4;})
 .text(function(d, i){ return legend_labels[i]; });

The function below gives zoom functionality to the map:


function clicked(d) {
  var x, y, k;
  if (d && centered !== d) {
    var centroid = path.centroid(d);
    x = centroid[0];
    y = centroid[1];
    k = 4;
    centered = d;
  } else {
    x = width / 2;
    y = height / 2;
    k = 1;
    centered = null;
  }

  g.selectAll("path")
      .classed("active", centered && function(d) { return d === centered; });

  g.transition()
      .duration(750)
      .attr("transform", "translate(" + width / 2 + "," + height / 2 + ")scale(" + k + ")translate(" + -x + "," + -y + ")")
      .style("stroke-width", 1.5 / k + "px");
}
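The transform string carries the whole zoom: translate to the canvas centre, scale by k, then translate so the clicked centroid sits at the origin. The string-building arithmetic can be checked in isolation (sample numbers only, not tied to any particular county):

```javascript
// Build the zoom transform exactly as the clicked() handler does.
function zoomTransform(width, height, x, y, k) {
  return "translate(" + width / 2 + "," + height / 2 + ")" +
         "scale(" + k + ")" +
         "translate(" + -x + "," + -y + ")";
}

// Zooming in (k = 4) on a centroid at (300, 200) on a 960x500 canvas:
console.log(zoomTransform(960, 500, 300, 200, 4));
// -> "translate(480,250)scale(4)translate(-300,-200)"

// Resetting: centroid back at the canvas centre with scale 1.
console.log(zoomTransform(960, 500, 960 / 2, 500 / 2, 1));
```

Dividing the stroke width by k keeps the county borders visually constant while zoomed.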

By following these steps we are able to see the output, which is as follows:
D3 Map

-By
Nitin Uttarwar
Helical It Solution

Business Intelligence Project Lifecycle

Business Intelligence Project Lifecycle

BI&DW_Project_LifeCycle

Project Planning

Project life cycle begins with project planning. Obviously, we must have a basic understanding of the business’s requirements to make appropriate scope decisions. Project planning then turns to resource staffing, coupled with project task identification, assignment, duration, and sequencing. The resulting integrated project plan identifies all tasks associated with the Lifecycle and the responsible parties.

Project Management

Project management ensures that the Project Lifecycle activities remain on track and in sync. Project management activities focus on monitoring project status, issue tracking, and change control to preserve scope boundaries. Ongoing management also includes the development of a comprehensive communication plan that addresses both the business and information technology (IT) constituencies. Continuing communication is critical to managing expectations; managing expectations is critical to achieving your DW/BI goals.

Business Requirement Definition

Business users and their requirements impact nearly every decision made throughout the design and implementation of a DW/BI system. From our perspective, business requirements sit at the centre of the universe, because they are so critical to successful data warehousing. Our understanding of the requirements influences most Lifecycle choices, from establishing the right scope, modeling the right data, picking the right tools, applying the right transformation rules, and building the right analyses, to providing the right deployment support.

Business_Requirement_Definition

Product selection and installation

DW/BI environments require the integration of numerous technologies. Considering the business requirements and technical environment, specific architectural components such as the hardware platform, database management system, extract-transformation-load (ETL) tool, and data access query and reporting tool must be evaluated and selected. Once the products have been selected, they are installed and tested to ensure appropriate end-to-end integration within your DW/BI environment.

Data Modeling

The first parallel set of activities following the product selection is the data track, from the design of the target dimensional model, to the physical instantiation of the model, and finally the “heavy lifting” where source data is extracted, transformed, and loaded into the target models.

Dimensional Modeling

During the gathering of business requirements, the organization's data needs are determined and documented in a preliminary enterprise data warehouse bus matrix representing the organization's key business processes and their associated dimensionality. This matrix serves as a data architecture blueprint to ensure that the DW/BI data can be integrated and extended across the organization over time.

Designing dimensional models to support the business’s reporting and analytic needs requires a different approach than that used for transaction processing design. Following a more detailed data analysis of a single business process matrix row, modelers identify the fact table granularity, associated dimensions and attributes, and numeric facts.

Refer this link for more information on dimensional modeling process:
http://helicaltech.com/dimensional-modeling-process/

Physical Design

Physical database design focuses on defining the physical structures, including setting up the database environment and instituting appropriate security. Although the physical data model in the relational database will be virtually identical to the dimensional model, there are additional issues to address, such as preliminary performance tuning strategies, from indexing to partitioning and aggregations.

ETL Design & Development

Design and development of the extract, transformation, and load (ETL) system remains one of the most vexing challenges confronted by a DW/BI project team; even when all the other tasks have been well planned and executed, 70% of the risk and effort in the DW/BI project comes from this step.

BI Application Track

The next concurrent activity track focuses on the business intelligence (BI) applications.

BI application design

Immediately following the product selection, while some DW/BI team members are working on the dimensional models, others should be working with the business to identify the candidate BI applications, along with appropriate navigation interfaces to address the users’ needs and capabilities. For most business users, parameter driven BI applications are as ad hoc as they want or need. BI applications are the vehicle for delivering business value from the DW/BI solution, rather than just delivering the data.

BI application development

Following BI application specification, application development tasks include configuring the business metadata and tool infrastructure, and then constructing and validating the specified analytic and operational BI applications, along with the navigational portal.

Deployment

The two parallel tracks, focused on data and BI applications, converge at deployment. Extensive planning is required to ensure that these puzzle pieces are tested and fit together properly, in conjunction with the appropriate education and support infrastructure. It is critical that deployment be well orchestrated; deployment should be deferred if all the pieces, such as training, documentation, and validated data, are not ready for prime time release.

Maintenance

Once the DW/BI system is in production, technical operational tasks are necessary to keep the system performing optimally, including usage monitoring, performance tuning, index maintenance, and system backup. We must also continue to focus on the business users with ongoing support, education, and communication.

Growth

If we have done our job well, the DW/BI system is bound to expand and evolve to deliver more value to the business. Prioritization processes must be established to deal with the ongoing business demand. We then go back to the beginning of the Lifecycle, leveraging and building upon the foundation that has already been established, while turning our attention to the new requirements.

– Archana Verma

Adding new chart in Helical Dashboard Insight (HDI)

Adding new chart in Helical Dashboard Insight (HDI)

1) Adding the Pie Chart in HDI:-
HDI uses the D3 (Data-Driven Documents) library. D3 allows you to bind arbitrary data to the Document Object Model (DOM), and then apply data-driven transformations to the document. For example, you can use D3 to generate an HTML table from an array of numbers.
For adding a pie chart in HDI, the following steps should be followed:-
1) EFW file:- The EFW contains the title, author, description, template name, and visibility of the dashboard.

2) EFWD File:- The EFWD file contains the data source connection properties, such as the connection id and connection type. It also contains the URL of the database connection, and the username and password for the database.

The DataSource Details used in our demo is shown as below:-


<DataSources>
        <Connection id="1" type="sql.jdbc">
           <Driver>com.mysql.jdbc.Driver</Driver>
           <Url>jdbc:mysql://192.168.2.9:3306/sampledata</Url>
            <User>devuser</User>
            <Pass>devuser</Pass>
        </Connection>
    </DataSources>

A DataMap contains the map id, the connection, and the connection type. The map id is the same as that specified in the EFWVF file. The query for the DataMap is specified in the <Query> tag, and its parameters in the <Parameters> tag.


<DataMap id="2" connection="1" type="sql" >
       <Name>Query for pie chart component - Order Status</Name>
		<Query>
			<![CDATA[
					select STATUS, count(ORDERNUMBER) as totalorders 
					from ORDERS
					where STATUS in (${order_status})
					group by STATUS;  
			]]>
                </Query>

       <Parameters>
          <Parameter name="order_status" type="Collection" default="'Shipped'"/>
       </Parameters>
</DataMap>

3)EFWVF File :-
In the EFWVF file we first set the chart id and then the chart properties. For a pie chart we set properties such as the chart name, chart type, and chart data source between the <prop> tags. In the chart section we specify the chart script.

In the chart script we set the below variables to customize the pie chart.
Setting Up the Chart


var placeHolder = "#chart_1";
var chartHeader = "Orders By status";
var width = 300,
height = 300;

The query returns 2 columns – ‘STATUS’ and ‘totalorders’.

As in JasperReports, where we set a category expression, value expression, and tooltip for the pie chart, the below variables are set the same way.


var category = "d.data.STATUS";
var values="d.totalorders";
var tooltip = "\"Total orders with status \"+ d.data.STATUS+\" are \"+d.data.totalorders";
var legendValues = "d.STATUS";
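These variables are plain strings; the chart script later evaluates each of them against a datum d with eval(). The mechanism can be seen in isolation with a sample datum (sample data only; the field names match the query above, and the datum is shaped like one slice of d3.layout.pie()'s output):

```javascript
// Expression strings, as configured for the chart.
var category = "d.data.STATUS";
var values = "d.totalorders";
var tooltip = "\"Total orders with status \"+ d.data.STATUS+\" are \"+d.data.totalorders";

// A sample datum: pie() wraps the raw row under .data, while the raw
// row itself (with .totalorders) is what the .value() accessor sees.
var d = { data: { STATUS: "Shipped", totalorders: 303 }, totalorders: 303 };

console.log(eval(category)); // "Shipped"
console.log(eval(values));   // 303
console.log(eval(tooltip));  // "Total orders with status Shipped are 303"
```

Keeping the expressions as strings means the chart script itself never needs editing when the query columns change; only these variables do.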

You may change the below script directly for further customization


function angle(d)
{
var a = (d.startAngle + d.endAngle) * 90 / Math.PI - 90;
return a > 90 ? a - 180 : a;
}
$(placeHolder).addClass('panel').addClass('panel-primary');
var radius = Math.min(width, height) / 2;
var color = d3.scale.ordinal().range(["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd", "#8c564b", "#e377c2", "#7f7f7f", "#bcbd22", "#17becf"]);
var arc = d3.svg.arc().outerRadius(radius - 10).innerRadius(0);
var pie = d3.layout.pie().sort(null).value(function(d) { return eval(values); });
var heading = d3.select(placeHolder).append("h3").attr("class", "panel-heading").style("margin", 0 ).style("clear", "none").text(chartHeader);
Creating The SVG Element
var svg = d3.select(placeHolder).append("div").attr("class", "panel-body").append("svg")
.attr("width", width)
.attr("height", height)
.append("g")
.attr("transform", "translate(" + width / 2 + "," + height / 2 + ")");

Drawing The Pie


var g = svg.selectAll(".arc")
.data(pie(data))
.enter().append("g")
.attr("class", "arc");
g.append("path")
.attr("d", arc)
.style("fill", function(d) { return color(eval(category)); });
g.append("text")
.attr("transform", function(d) {
d.outerRadius = radius -10;
d.innerRadius = (radius -10)/2;
return "translate(" + arc.centroid(d) + ")rotate(" + angle(d) + ")"; })
.attr("dy", ".35em")
.style("text-anchor", "middle")
.style("fill", "White")
.text(function(d) { return eval(category); });

Drawing The Tooltip and Legend


g.append("title")
.text(function(d){ return eval(tooltip)});
var legend = d3.select(placeHolder).select('.panel-body').append("svg")
.attr("class", "legend")
.attr("width", 75 * 2)
.attr("height", 75 * 2)
.selectAll("g")
.data(data)
.enter().append("g")
.attr("transform", function(d, i) { return "translate(50," + (i * 20) + ")"; }); legend.append("rect")
.attr("width", 18)
.attr("height", 18)
.style("fill", function(d){return color(eval(legendValues))});
legend.append("text")
.attr("x", 24)
.attr("y", 9)
.attr("dy", ".35em")
.text (function(d) { return eval(legendValues); });

4) HTML File:-
The HTML file name should be the same as that specified in the EFW file under the template section.
At the top of the HTML file we specify links to external resources, such as:-


<script src="getExternalResource.html?path=UI_Testing/dimple.js"></script>

Below it we create the divisions for proper alignment and the position where the pie chart should be placed.


<div class="row">
    <div class="col-sm-5">
       <div id="supportChartObj4"></div>
    </div>
    <div class="col-sm-5">
       <div id="supportChartObj5"></div>
    </div>
</div>

Below the division section we have a script section, where we specify the parameters and the chart.

Parameter Creation:-


var select =
{
name: "select",
type: "select",
options:{
multiple:true,
value : 'STATUS',
display : 'STATUS'
},
parameters: ["order_status"],
htmlElementId: "#supportChartObj1",
executeAtStart: true,
map:1,
iframe:true
};

Chart Creation:-


var chart = {
name: "chart",
type:"chart",
listeners:["order_status"],
requestParameters :{
order_status :"order_status"
},
vf : {
id: "1",
file: "Test.efwvf"
},
htmlElementId : "#supportChartObj4",
executeAtStart: true
};

All the parameters and charts specified in the HTML file are passed to Dashboard.init.

This way the pie chart is created.

pie

2) Adding the Bar Chart in HDI:-

1) EFW file:- The EFW contains the title, author, description, template name, and visibility of the dashboard.

2) EFWD File:- The EFWD file contains the data source connection properties, such as the connection id and connection type. It also contains the URL of the database connection, and the username and password for the database.

The DataSource Details used in our demo is shown as below:-


<DataSources>
        <Connection id="1" type="sql.jdbc">
           <Driver>com.mysql.jdbc.Driver</Driver>
           <Url>jdbc:mysql://192.168.2.9:3306/sampledata</Url>
            <User>devuser</User>
            <Pass>devuser</Pass>
        </Connection>
    </DataSources>

A DataMap contains the map id, the connection, and the connection type. The map id is the same as that specified in the EFWVF file. The query for the DataMap is specified in the <Query> tag, and its parameters in the <Parameters> tag.
Ex:-


<DataMap id="2" connection="1" type="sql" >
       <Name>Query for pie chart component - Order Status</Name>
		<Query>
			<![CDATA[
					select STATUS, count(ORDERNUMBER) as totalorders 
					from ORDERS
					where STATUS in (${order_status})
					group by STATUS;  
			]]>
                </Query>

       <Parameters>
          <Parameter name="order_status" type="Collection" default="'Shipped'"/>
       </Parameters>
</DataMap>

3)EFWVF File :-
In the EFWVF file we first set the chart id and then the chart properties. For a bar chart we set properties such as the chart name, chart type, and chart data source between the <prop> tags. In the chart section we specify the chart script.

In the chart script we set the below variables to customize the bar chart.

Setting Up the Chart


var placeHolder = "#chart_1";
var chartHeader = "Orders By status";
var margin = {top: 20, right: 30, bottom: 30, left: 70},
width = 500 - margin.left - margin.right,
height = 400 - margin.top - margin.bottom;

The query returns 2 columns – ‘STATUS’ and ‘totalorders’.

As in JasperReports, where we set a category expression, value expression, and tooltip, the below variables are set the same way for the bar chart.


var category = "d.STATUS";
var values="d.totalorders";
var tooltip = "\"Total orders with Status \"+ d.STATUS+\" are\"+d.totalorders";

You may change the below script directly for further customization


$(placeHolder).addClass('panel').addClass('panel-primary');
var x = d3.scale.ordinal().rangeRoundBands([0, width], .1);
var y = d3.scale.linear().range([height, 0]);
var xAxis = d3.svg.axis().scale(x).orient("bottom");
var yAxis = d3.svg.axis().scale(y).orient("left");
var heading = d3.select(placeHolder).append("h3").attr("class", "panel-heading")
.style("margin", 0).style("clear", "none").text(chartHeader);
var chart = d3.select(placeHolder).append("div").attr("class", "panel-body")
.append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");

Drawing The Bar Chart


x.domain(data.map(function(d) { return eval(category); }));
y.domain([0, d3.max(data, function(d) { return eval(values); })]);
        chart.append("g")
        .attr("class", "x axis")
	.attr("transform", "translate(0," + height + ")")
	.call(xAxis);

	chart.append("g")
	.attr("class", "y axis")
	.call(yAxis);

	chart.selectAll(".bar")
	.data(data)
	.enter().append("rect")
	.attr("class", "bar")
	.attr("x", function(d) { return x(eval(category)); })
	.attr("y", function(d) { return y(eval(values)); })
	.attr("height", function(d) { return height - y(eval(values)); })
	.attr("width", x.rangeBand());
	
        chart.selectAll(".bar")
	.append("title")
	.text(function(d){ return eval(tooltip)});

4) HTML File:-
The HTML file name should be the same as that specified in the EFW file under the template section.
At the top of the HTML file we specify links to external resources, such as:-


<script src="getExternalResource.html?path=UI_Testing/dimple.js"></script>

Below it we create the divisions for proper alignment and the position where the bar chart should be placed.


<div class="row">
    <div class="col-sm-5">
       <div id="supportChartObj4"></div>
    </div>
    <div class="col-sm-5">
       <div id="supportChartObj5"></div>
    </div>
</div>

Below the division section we have a script section, where we specify the parameters and the chart.

Parameter Creation:-


var select =
{
name: "select",
type: "select",
options:{
multiple:true,
value : 'STATUS',
display : 'STATUS'
},
parameters: ["order_status"],
htmlElementId: "#supportChartObj1",
executeAtStart: true,
map:1,
iframe:true
};

Chart Creation:-


var chart = {
name: "chart",
type:"chart",
listeners:["order_status"],
requestParameters :{
order_status :"order_status"
},
vf : {
id: "2",
file: "Test.efwvf"
},
htmlElementId : "#supportChartObj4",
executeAtStart: true
};

All the parameters and charts specified in the HTML file are passed to Dashboard.init.

This way the bar chart is created.

Bar Chart