Create a Custom Table Report using Helical Insight (Dynamically Picking the Column Names and Data)

If you already have hands-on experience with the Helical Insight tool (HI), this blog will be helpful.

Creating a report requires four files:
1. EFW
2. HTML
3. EFWD
4. EFWVF

The report layout lives in the HTML file, the SQL queries live in the EFWD file, and the visualization lives in the EFWVF file.
Hence, once the query is fired, the result reaches the visualization file, where our goal is to render it as a table. With the help of the code below, this can be done with ease.

The code below is a template of what our EFWVF file looks like:
<Charts>
<Chart id="1">
<prop>
<name>Table</name>
<type>Custom</type>
<DataSource>1</DataSource>
<script>

//Your Visualization Code Goes Here

</script>
</prop>
</Chart>
</Charts>

The Chart id here is 1, which must be unique.
The type is Custom.
DataSource 1 is the unique id of the DataMap defined in the EFWD file, i.e. <DataMap id="1">.
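For reference, here is a minimal sketch of what the corresponding EFWD file might contain (the connection details, driver and query below are placeholder assumptions; follow the exact format of your existing EFWD files):

<EFWD>
<DataSources>
<Connection id="1" type="sql.jdbc">
<Driver>org.postgresql.Driver</Driver>
<Url>jdbc:postgresql://localhost:5432/yourdb</Url>
<User>postgres</User>
<Pass>password</Pass>
</Connection>
</DataSources>
<DataMaps>
<DataMap id="1" connection="1" type="sql">
<Name>Table Data</Name>
<Query>
<![CDATA[ SELECT * FROM employee; ]]>
</Query>
</DataMap>
</DataMaps>
</EFWD>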
Now, within the <script></script> tags,
we paste the following:

<![CDATA[
// The if block shows a "No Data" message when the query returns no rows
if (data.length == 0) {
  $('#chart_1').html("<div><h4 style='text-align:center;color:black;padding-top:60px;'>No Data Available For Current Selection</h4></div>");
  return;
}
// The else block renders the table when there is data
else {
  // The function tabulate renders the data in tabular form
  function tabulate(elem, data, columns) {
    var table = d3.select(elem).append("table")
          .attr("class", "table display compact")
          .attr("id", "table")
          .style("width", "100%"),
        thead = table.append("thead"),
        tbody = table.append("tbody");

    // Append the header row
    thead.append("tr")
      .selectAll("th")
      .data(columns)
      .enter()
      .append("th")
      .text(function(column) { return column; })
      .attr('class', function(d, i) { return "colH_" + i; })
      .style('background-color', '#ededed')
      .style('color', 'black')
      .style('padding-left', '25px');

    // Create a row for each object in the data
    var rows = tbody.selectAll("tr")
      .data(data)
      .enter()
      .append("tr");

    // Create a cell in each row for each column
    var cells = rows.selectAll("td")
      .data(function(row) {
        return columns.map(function(column) {
          return {column: column, value: row[column]};
        });
      })
      .enter()
      .append("td")
      .text(function(d) { return d.value; })
      .attr('class', function(d, i) { return "col_" + i; })
      .attr('align', 'left');

    return table;
  }

  // Render the table:
  // Object.keys(data[0]) fetches the column headers dynamically,
  // and data holds the rows of the table
  console.log(Object.keys(data[0]));
  var subjectTable = tabulate('#chart_1', data, Object.keys(data[0]));
}
]]>
Save it in the same directory as the rest of the report files.

Run your report in HI and you get a table.

Keep in mind that you can change the query and the table will still pick up the new columns from the new query.

Thanks
Sohail Izebhijie

Beginner’s Guide to E.T.L (Extract, Transform and Load) – A Basic Process

Loading Data from Source to Target

Before we proceed, it's important to identify the tool you will use to accomplish the ETL process. In my case, I will be using Pentaho Data Integration (keep in mind that irrespective of the tool you use, the steps are similar, but the approach might differ).

The following Steps can be followed:

1. Identify your source. It could be one of the following:
a. a C.S.V file
b. a text file
c. a database
d. and so on

In my scenario, it is a C.S.V (comma-separated values) file.

2. Open up Spoon.bat.
Create a new transformation and add an "Input" step, then select the type of input you require:
we have Text File Input, CSV File Input, SAS Input, Table Input and so on. In my case, since I'll be using a C.S.V file as the source, I'll select the CSV File Input step.

3. Set up your connection based on your preferred connection type; in my case, I'll be using PostgreSQL.

[Read my next blog on setting up a connection using connection type: PostgreSQL]

4. Once the connection has been established, you can right-click on the connection and select Share if it is a common connection that all your transformations will use; this shares the connection with your other transformations.

5. Since we will be sending data from a source to a target, we need an "Input" step as the source
and an "Output" step as the target.

6. Input

Download a C.S.V file from the internet,
or even create a TXT/C.S.V input file
as shown below.

Create a source if required:
Text_Source (comma delimited)

Employee_Number,First_Name,Last_Name,Middle_Initial,Age,Gender,Title
101,Ehizogie,Izebhijie,Sohail,24,Male,Developer
102,Fahad,Anjum,WithClause,23,Male,Developer
103,Gayatri,Sharma,A,24,Female,Accountant

Save it as .txt or .csv and this can be your input.

Since our input is from a CSV file,
we open up the CSV File Input step:
Step Name: any name
File Name: browse to the selected path
Delimiter: , (comma)
Enclosure: "

Then select Get Fields and click OK.
Preview your data.

7. Output
Open up the Table Output step.
Select the target schema.
Select the target table.

[Keep in mind that the table must already exist in the DB; a sketch is given below.]

Select OK!
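If the target table does not exist yet, a minimal sketch matching the sample file above might look like this (the table name and column types are assumptions; adjust them to your schema):

CREATE TABLE employee (
    employee_number INTEGER,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    middle_initial VARCHAR(50),
    age INTEGER,
    gender VARCHAR(10),
    title VARCHAR(50)
);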

Right-click on the Table Output step to map the columns from source to target.

This is important in order to get the right data from the source into the right columns in the target.
Then run the transformation.

As a beginner, keep in mind that errors are bound to occur, such as the type of data from the source not matching your target table format, and so on.

Here we can add a small transformation step to convert the data and take care of such format errors.
[In my next blog we can look into handling that.]

Now go to your target database and run SELECT * FROM table_name.
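Using the hypothetical employee table from the sketch above, that check would simply be:

SELECT * FROM employee;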

There you go!

Thanks
Sohail Izebhijie

Helical Insight for ETL Tester (QA)

Let's take an example: your ETL developer has just written an ETL script which loaded data into the Data Warehouse, and a few million records got inserted.

For QA, it is extremely difficult and time-consuming to test all (millions) of those records. Moreover, there are many other problems:

  • Some testers might not be well versed in writing complex SQL queries.
  • There might be multiple data sources involved in generating the Data Warehouse.
  • Writing multiple queries on multiple data sources and then combining them into one single result set would be a back-breaking task.

Now think of Helical Insight as a friend which helps you with most of your tasks, allowing you to focus on critical thinking rather than just staring and testing the whole day. Wondering how Helical Insight can help you achieve this?

Here is your answer,

  • Helical Insight is capable of communicating with any of your data sources, like CSV, Excel, MySQL, Oracle, SQL Server, Postgres, etc.
  • Helical Insight has a drag-and-drop feature to get your test cases ready without prior knowledge of SQL queries. Helical Insight generates the SQL queries, including the joins between multiple entities. Helical Insight understands your data structure, so it generates the query even if you are working with different data sources, a DW or a normalized database.
  • Using this drag-and-drop facility you can generate all your test cases in the form of reports. For example, a report that does a count of records for a dimension: you can save this report and check it against the source system by creating a separate report (a sketch of such a check is given after this list). If it is a migration project, the counts should definitely match; otherwise we know something is wrong with the ETL script.
  • Once you have saved these reports (test cases), you can re-execute them again and again to verify your data. By now you will have seen that Helical Insight is a very useful tool, with features like generating the SQL query for any database or data source, executing queries on any database or data source, and saving the reports / test cases / SQL queries for future use.
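For illustration, a minimal sketch of such a row-count test case, assuming a hypothetical source table src.employee and warehouse table dw.employee:

-- In a migration project, both counts should match exactly
SELECT COUNT(*) AS source_count FROM src.employee;
SELECT COUNT(*) AS target_count FROM dw.employee;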

Is your Business Intelligence tool really Intelligent?

According to a Forrester blog, a typical BI tool costs $150,000 and ETL costs about the same. Services, hardware and implementation generally scale up to 5 times the software cost. It is only after investing so much that most businesses realize the solution they have purchased is not really future-ready. As the business expands, the number of users increases, data grows, databases change, more software is incorporated and new technology is adopted, and it becomes clear that the BI solution cannot keep up with future expectations. Of course, most BI software is more or less generic in nature, with features such as reports, dashboards, adhoc querying, caching, security, etc. What ALL of these tools lack is a FUTURE-READY ARCHITECTURE. This results either in business needs being compromised or dropped, or in the use of best-of-breed solutions, in-house development or outsourcing. The outcome is wasted money and time: the search for a new BI tool, new technical resources, and implementation of the replacement or best-of-breed solution.

If you are able to connect with this situation, you may need to take a second look at your BI tool or BI implementation!

Why should a business user adjust his requirements when ideally, it should be the other way around?

Also, the expectations from a BI tool keep on increasing, and since the tools are generally not able to live up to these growing expectations, they expire. This throws up an important question: are Business Intelligence tools really intelligent?

An ideal case would be a BI tool which is future-ready and developer-friendly, and thus flexible and extensible. With growing and ever-changing business requirements, IT staff would then be able to accommodate advanced requirements by adding features and adopting new technology, justifying the investment made. Such a BI framework would not be bound by any tool or technology limitation and could adapt to any sort of requirement, current or future. It would be a developer's paradise, given the liberty to create anything, and a business user's dream, since whatever they seek can be achieved without compromise and at no incremental cost. It would also remove dependence on the BI vendor for additional functionality or patches/releases based on their product roadmap, since your own team could add functionality.

Below mentioned are some of the instances wherein such a need is felt:

1. Can I add a new data type as a source?

Most BI tools support commonly used data sources, which are limited in number. If a new database type is to be added as a data source, it may not be possible without the database vendor providing the connector. Also, in cases where the data storage technology is different, like Hadoop, the reliance is on the BI vendor to come up with a new patch or version. With BI vendors who have shifted their focus from product innovation to sales, such requests generally take a lot of time to execute. Wouldn't it be great if the developer himself were able to add data sources, APIs, etc., and enhance the tool?

2. There’s a new API in the market. Can I fetch data from there?

BI tools generally come with native connectors to certain popular APIs. But with changing times and requirements, new and more relevant APIs come up. Fetching data from APIs other than the pre-installed ones may be impossible or often very difficult. In such a scenario, one may feel the BI tools available today are not very future-ready.

3. Charting options are few, limiting my advanced analytics usage

After connecting to the database, reports/dashboards are created. Most BI tools come with out-of-the-box charting options, which are limited and may not suffice for the requirement. Though some BI tools allow external integration of charts, they often forgo other functionalities such as exporting, email scheduling, etc.

These limited charting options affect companies/people who are looking for advanced functionality and might be working on predictive and trend analytics, like data scientists and statisticians who need statistical and advanced charting. A BI tool should allow charts to be integrated inside the report, dashboard, adhoc view, etc., with the ability to define inter-panel communication, input filters, etc. Also, even if the integration of charts is external, other functionalities such as emailing, exporting, triggers, etc., should still work like a charm.

4. Reports and dashboards are cliché… but I don't have more options!!

Not only plain vanilla reports and dashboards: a BI tool should be future-ready enough for other visualization options like infographics, what-if analysis, mash-ups, cubes, scorecards or any other type which might come up in the future.

5. BI Software UI looks so very alien!

Often, companies have their own products/software with certain navigation options, icons and colors, adhering to a chosen theme. With BI also being introduced into their solution stack, wouldn't it be wonderful if the BI tool could be customized to match the design template of the existing solution stack, i.e., options to change the navigation, repository access, icons, content menus, colors, text, themes, file extensions, etc.? Such exhaustive white-labelling capabilities can lead to a unified view of all the enterprise applications, leading to ease of branding, usage and viewing.

Currently, what most BI tools offer in the name of white labelling is a change of header and footer design, colors and text, with very limited options.

6. There are so many tools I am compelled to use for a single BI software!

Many BI tools come with a number of separate software/hardware components to be used, like the server, the designer tool, plug-ins, community plug-ins, etc. BI companies release enhancements within these which, at times, lead to compatibility issues. Here's food for thought: what if, using the browser itself, we were able to execute everything, exactly the way the BI solution is accessed? Imagine: no more downloading heavy software, no more compatibility issues, no separate purchase of tools, etc.

7. BI vendor engagement never seems to end – and neither does their billing!

Licensing presents complex issues. Licensing may be core-based, user-based, server-based, data-size-based or mix-matched. There also tend to be separate licenses for separate tools like the designer, server, plug-ins, etc. Sluggish performance of the solution leads to an increase in server cores, and hence in licenses. Maintenance cost, development cost and renewal cost are top-ups. Prices are not benchmarked; in many cases, pricing is not crystal clear and often depends on the salesperson and the bargain being struck.

8. Adhoc capability is not very capable

Adhoc capabilities allow business users to drag, drop and create their own reports and dashboards. Many BI tools are extremely limited here, not allowing or extending features to write custom scripts, add HTML, add visualizations for adhoc reports, add custom calculated columns, etc.

9. Can I extend core functionality altogether?

Almost all BI tools fail in their ability to extend functionality. BI tools are designed with a 'one size fits all' approach, selling only their out-of-the-box features. However, every client has unique requirements. The ability to extend functionality and add features is something that could change the way people view and use BI. Examples of extended functionality could be an Outlook plug-in for BI, offline viewing, directly fetching data from ETL scripts, new exporting options, rule-based systems, custom alerting notifications and triggers, custom business processes, etc. This could lead to a paradigm shift in the entire scope of BI. Frankly, the sky isn't the limit!

10. Sequence of events: Wish I could define the flow!

An integrated workflow inside a BI tool could help in defining business processes and thus enhance capabilities. An example workflow could be: 'run the ETL AND create the report AND mail it to one set of users when the value is between 0 and 50%, AND send it to another set of users when the value is greater than 50%'.

11. So many software products, so many screens.

Companies generally use many software products; thus a client has to navigate through them based on the requirement. Right now, we can only integrate BI charts inside other applications. It would be a real value addition if the BI tool were flexible enough to allow integration of other software inside the tool, to interact with that software, and to let BI directly invoke its functions as well!

12. I just can’t find the BI resource

Skilled resources are one of the hardest-pressed problems in the BI domain. Resources are far too few and the salaries they command far too high, leading to outsourcing of projects. Why should there be a separate set of resources for BI at all? Why can't BI tools be simple enough for an HTML/Java resource to work on them as well?

Do these situations sound familiar to you? Can you associate or connect with the issues raised? Do you agree with the solutions? Having worked in the BI domain for many years, across a number of tools, and having seen the current limitations, Helical IT Solutions has conjured the miracle BI tool! Watch out for the BETA edition, soon to be launched! Write to us for early access, architecture and other documents at Nikhilesh@helicaltech.com or beta@helicalinsight.com. We would like to hear from YOU regarding features, licensing type, costing and other comments.

Importance of Business Intelligence in Travel Industry

The travel industry is highly complex, with multiple players and systems interacting with each other on a real-time basis for the smooth functioning of the business. The various players and systems include Travel Management Companies, Global Distribution System providers, call centers, travel agencies, etc. Due to these complex systems, a huge amount of data is generated continuously. But there are big voids in data collection, and this poses a big challenge for the travel industry. Travel companies are hence finding it very difficult to run targeted campaigns; they are unable to offer personalized products to customers or to utilize predictive analytics. However, the introduction of new technologies is slowly changing the way travel organizations collect and use data.

Business Intelligence and Analytics play a key role in addressing many revenue-impacting and operational inefficiencies. When the data is combined with multiple external sources, such as data from travel companies, online portals, private websites and social media, the intelligence obtained gives significantly greater insight into customer behavior patterns. Such insights help organizations analyze trends and customer preferences: their likes and dislikes and their sentiments. This then acts as an extremely powerful tool for devising business strategies and discovering hidden sales opportunities.

For example, when an airline route that has always been profitable suddenly starts showing negative revenues, Business Analytics is capable of providing insights. Data from travel companies may reveal increased competition in the sector. Online portals like Ibibo and MakeMyTrip provide data in the form of user comments and blogs which, when analyzed, can yield sentiment analysis results. These can reveal the brand equity and the impression that customers have of the organization. If the outcomes are not favorable, organizations can put in extra effort to analyze the reasons behind them and devise an improvement plan. The processed data can also be presented in the form of reporting dashboards showing the factors affecting customer sentiment.

Predictive Analytics in Travel

Suppose a person is travelling on an international vacation to Singapore and booked his tickets using an online portal like MakeMyTrip. Thanks to the power of predictive analytics, the person might receive an exclusive offer from his favorite airline for the ideal route, along with an option to include a hotel and perhaps the best restaurants in Singapore for someone traveling on an expense account.

KPIs for Travel Industry

The following are the most generic and key KPI categories for travel organizations:

  • Spend and Savings: Spend Under Contract, Booking Visibility, Payment Visibility, Realized Negotiated Savings, Contract Competitiveness, Cost of Managed Travel
  • Traveler's Behavior and Policy: Cabin Non-compliance, Lowest Logical Airfare (LLA) Non-compliance, Advance Booking Non-compliance, Online Adoption Rate, Hotel Visibility, Hotel Quality
  • Suppliers: Traveler Satisfaction, Contract Support
  • Process: Re-booking Rate, Reimbursement Days
  • Traveler's Safety: Location Insights, Profile Completion
  • Corporate Social Responsibility (CSR): Carbon Visibility, Rail vs Air
  • Data Quality: Data Quality

Benefits of Using BI in Travel Industry

  • Enhance customer segmentation
  • Increase revenue
  • Targeted offers and promotions
  • Benchmark against industry standards
  • Reduce operational cost
  • Competitor insights
  • Increase inventory utilization
  • Improve customer service

Some of the other areas where BI can be applied in the travel industry are:

  • Capacity Planning
  • Transporters Performance Evaluation
  • Mode-Cost Analysis
  • Supplier Compliance Analysis
  • Routing and Scheduling
  • Driver Performance Analysis

A travel domain company, CTI Travel Ltd, implemented business intelligence in their system. The upgraded system benefited them in various ways:

  • Real-time tracking of suppliers
  • Helped the sales team find new business leads and identify profitable and under-performing clients
  • Improved client experience and customer-centric services
  • Helped in finding gaps using real-time data
  • High-quality reports using real-time data
  • Better financial management
  • Streamlined complicated processes
  • Improved decision-making by management / teams / individuals

With rich experience in various domains including travel (empowering the travel management software of IBNTech), get in touch with us at Helical IT to find out more about how a BI solution can benefit your company. Reach out to us at nikhilesh@helicaltech.com.

Must watch talks from React Europe

If you are a React developer, you must have heard about react-europe. It's a conference where developers from different parts of the world come together and share their experiences and knowledge of using `React.js`. This year, it was held in Paris, and talks were given by the likes of Ryan Florence and Christopher Chedeau, to name a few.

This is my view; yours may differ. It is in no way intended to promote or demote anyone.

Below are a few of the talks (in no particular order) which I believe are a must-watch for a React developer:

Ryan Florence – Don’t Rewrite, React!

In this video, Ryan Florence talks about migrating your existing app to React and also shows a live demo of converting a Backbone todo list to React.

Dan Abramov – Live React: Hot Reloading with Time Travel

In this video, Dan Abramov talks about "hot reloading" React components without losing their state and also demos the ability to move between states, a.k.a. time travel. He also demos his new library called "Redux".

Cheng Lou – The State of Animation in React

In this video, Cheng Lou demonstrates how he has solved the problem of animating components in React.

Evan Morikawa & Ben Gotow – How React & Flux Turn Apps Into Extensible Platforms

In this video, the speakers show a way to make a react app more extensible by making use of plugins.

Michael Jackson – React Router

In this video, Michael Jackson (no, not the king of pop) talks about the problems faced while developing react-router.

Cacti Installation in Windows

Cacti is a complete network graphing solution and a frontend to RRDtool.
The Cacti frontend is completely PHP-driven.
It maintains the graphs as well as handles the data gathering.

Install and configure Cacti :

  1. Install Cacti from the zip distribution into the web root of your choice. You may choose to install it into a "Cacti" subfolder.
  2. RRDTool – install it from the Cacti website into the c:\cacti directory.
  3. PHP 5.x – install it into the c:\php folder. If you choose to install into c:\Program Files\php, you will have to use 8.3 filenames to reference its binaries in Cacti.
  4. Install MySQL.

  • Configure PHP

- Add the following directory to the existing Windows System PATH environment variable: c:\php. The Windows path can be accessed via the Control Panel at: System | Advanced | Environment Variables | System Variables.
- Add the following directory to a new Windows System environment variable called PHPRC: c:\php.
- Add a new Windows System environment variable called MIBDIRS and set it to c:\php\extras\mibs.
- Rename the file c:\php\php.ini.dist to php.ini, and make the following changes to it:

Uncomment the following lines:

extension_dir = c:\php\ext
extension=php_mysql.dll
extension=php_snmp.dll
extension=php_sockets.dll
cgi.force_redirect = 0

  • Configure the Webserver (Apache)

- Make sure you have stopped any IIS web servers before you proceed with the Apache installation, or make sure Apache is configured on an alternate port.
- If using Apache 2.x and PHP 5, add the following lines to the Apache configuration file (httpd.conf):

LoadModule php5_module c:\php\php5apache2.dll
AddType application/x-httpd-php .php
DirectoryIndex index.html index.htm index.php
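To verify that Apache is now executing PHP, a common quick check (a hypothetical test file, not part of the Cacti distribution) is:

<?php phpinfo(); // prints the PHP configuration page if PHP is wired up correctly ?>

Save it as test.php in the web root and open http://localhost/test.php in your browser.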

  • Follow the next steps:
  1. Create the MySQL database:

shell> mysqladmin --user=root create cacti

2. Import the default cacti database:

shell> mysql cacti < cacti.sql

For example:

mysql.exe -u root -p cacti < C:\Apache2\htdocs\cacti\cacti.sql

3. Create a MySQL username and password for Cacti.

shell> mysql --user=root mysql
mysql> GRANT ALL ON cacti.* TO cactiuser@localhost IDENTIFIED BY 'somepassword';

For example:

mysql> GRANT ALL ON cacti.* TO cactiuser@localhost IDENTIFIED BY 'cactipw';

mysql> FLUSH PRIVILEGES;

  • Configure Cacti

1. Edit cacti/include/config.php and specify the database type, name, host, user, password and port for your Cacti configuration:

$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "cactiuser";
$database_password = "cactipw";
$database_port = "3306";

2. Point your web browser to:

http://your-server/cacti/

3. Open Cacti in your browser at localhost/cacti, which will prompt you to install Cacti.

4. Click Next; the installer checks all the dependent paths, and you can adjust any path that is different in your case.

5. Click Finish, after which you will get a login window.
6. Log in using the username and password admin/admin.
7. You will be required to change this password immediately, so set a new password.

Now you can use Cacti to create graphs.

Thanks,
Sayali Mahale.

Connection Time Out in AWS EC2

Possible reasons for a timeout when trying to access an EC2 instance:

  • The most likely one is that the Security Group is not configured to allow SSH access on port 22 from your IP.
  • The local firewall configuration does not allow SSH access to the server.
  • The server has not started properly.
  • No network connectivity.
  • Wrong .pem file / hostname.
  • A spelling mistake.

When it times out or fails, check the following:

Security Group:

Make sure you have an inbound rule for TCP port 22 from either all IPs or your IP. You can find the security group through the EC2 menu, in the instance options.
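If you prefer the AWS CLI over the console, a rough sketch of adding such an inbound rule looks like this (the group ID and source CIDR below are placeholders; substitute your own):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32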

Routing Table:

For a new subnet in a VPC, you need to attach a routing table that points 0.0.0.0/0 to an internet gateway target. When you create a subnet in your VPC, it is assigned the default routing table, which probably does not accept incoming traffic from the internet. You can edit the routing table options in the VPC menu, under Subnets.

Add Route to Routing Table
Destination: 0.0.0.0/0
Target: <Internet Gateway from earlier>
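The same route can also be added via the AWS CLI; a hedged sketch with placeholder IDs:

aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0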

Elastic IP:

For an instance in a VPC, you need to allocate a public Elastic IP address and associate it with the instance. The private IP address can't be accessed from the outside. You can get an Elastic IP in the EC2 menu (not the instance menu).

Username:

Make sure you're using the correct username. It should be one of ec2-user, root or ubuntu. Try them all if necessary.

Private Key:

Make sure you're using the correct private key (the one you downloaded or chose when launching the instance). It seems obvious, but copy-paste got me twice.
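Putting the username, key and address together, a quick connectivity test from a terminal looks like this (the key file name, username and IP are placeholders):

ssh -i my-key.pem ec2-user@203.0.113.10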

Alternatively, build everything back up from scratch. This includes:

  1. Create VPC
  2. Create Internet Gateway
  3. Attach Internet Gateway to VPC
  4. Create Routing Table
  5. Add Route to Routing Table
  6. Create Subnet