Empowering the Business User in your BI and Analytics Environment


There’s one trend on Gartner’s radar that hasn’t changed much over the last few years, and that’s the increasing move toward a self-service BI model. Gone are the days of your IT or analytics department being a report factory. And if those days aren’t gone for you, then it’s time to make some substantive changes to your business intelligence environment. When end-users are forced to rely on another department to deliver the reports they need, the entire concept of being a “data-driven” organization goes right out the window.

So other than giving your users access to ad hoc reporting capabilities, how do you empower the user?

Bi-Modal BI: Balancing Self-Service and Governance


The age-old conflict: IT needs centralization, governance, standards and control; on the other side of the coin, business units need the ability to move fast and try new things. How can we give lines of business access to the data they need for projects, so they can spend their time focused on discovering new insights? Typically, they get stuck in a bottleneck of IT requests or spend 80% of their time doing data integration and preparation. Neither group is particularly excited about that work, and I don’t blame them. For the analyst, it increases the complexity of their tasks and seriously raises the technical knowledge requirements. For IT, it’s a major distraction from their main purpose in life, an extra thing to do. Self-service BI is trying to destroy the backlogged “report factories,” only to replace them with “data stores,” which are sadly even less equipped for the job at hand. Either way, the result is a painfully inefficient process, straining both ends of the value chain in any company that embarks on the data-driven journey.

The Bi-Modal BI Answer?

An organization's ability to effectively extract value from data and analytics while maintaining a well-governed source of truth is the difference between competitive advantage on one hand and sunk costs and missed opportunities on the other. How can we create an environment that provides the agile data access business users need while still maintaining sound data governance? Gartner has referred to this as a bi-modal IT strategy. A big challenge with bi-modal IT is that it pushes IT management to divide their efforts between IT's traditional focus and a more business-focused, agile methodology.

The DBA and Analyst Divide

Another major challenge in data access comes from the separation between DBAs and business users. Although the technical side may have the expertise needed to implement ETL projects, they often lack the business domain knowledge to make the correct assumptions about context and how the data will be used. With so many projects competing for resources, we shouldn’t have to assign a DBA to all of them. On the flip side of the coin, data analysts and scientists want the right data for their tools of choice, and they want it fast. Even though there is a growing set of data integration tools that allow individual business units to create and maintain their own data projects, this typically requires a lot of manual data modeling and can lead to siloed data or inconsistent metrics.

Instead of controlling all of BI, IT can enable the business to develop its own analytics without sacrificing control or governance standards. So how can we get the right data into the hands of the people who understand and need it, in a timely manner?

3 Critical Steps to Evangelize the New in Business Intelligence and Analytics

The rapid evolution in business intelligence and analytics capabilities is both exhilarating and overwhelming. 

How do you protect the stability of the work you’ve already done, while evangelizing experimentation, exploration and progress within your organization?

We’ve got a few tips for you.

Keboola: Data Monetization Series - How data science can help


Having access to the right data in a clean and accessible format is the first step (or series of steps) leading up to actually extracting business value from your data.  As much as 80% of the time spent on data science projects involves data integration and preparation.  Once we get there, the real fun begins.  With the continued focus on big data and analytics to drive competitive advantage, data science has been spending a lot of time in the headlines.  (Can we fit a few more buzzwords into one sentence?)

Let’s take a look at a few of the data science apps available on our platform and how they can help with our data monetization efforts.

Basket analysis

One of the most popular algorithms is market basket analysis. It provides the power behind things like Amazon’s product recommendation engine by identifying that if someone buys product A, they are likely to buy product B. More specifically, it isn’t about spotting products placed next to each other on the site that get bought together, but rather surfacing associations between products that aren’t placed next to each other. This can be useful for improving in-store and on-site customer experience, targeted marketing and even the placement of content items on media sites.
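
To make the mechanics concrete, here is a minimal sketch of the arithmetic behind association rules, using invented basket data; a production system would run a proper library over much larger transaction logs.

```python
from collections import Counter
from itertools import combinations

# Toy transaction log: each basket is the set of products bought together.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"bread", "milk", "cereal"},
    {"butter", "milk"},
]

n = len(transactions)
item_counts = Counter(item for t in transactions for item in t)
pair_counts = Counter(
    pair for t in transactions for pair in combinations(sorted(t), 2)
)

# Classic association-rule metrics for the rule "A -> B":
#   support    = P(A and B)
#   confidence = P(B | A)
#   lift       = P(B | A) / P(B); lift > 1 means buying A makes buying B
#                more likely than chance.
# Pairs are sorted, so each rule is reported in one direction only.
for (a, b), count in pair_counts.items():
    support = count / n
    confidence = count / item_counts[a]
    lift = confidence / (item_counts[b] / n)
    if lift > 1:
        print(f"{a} -> {b}: support={support:.2f}, "
              f"confidence={confidence:.2f}, lift={lift:.2f}")
```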

Anomaly detection

Anomaly detection refers to identifying events that don’t conform to the pattern expected from the data. This could take the form of fraud detection, identifying medical problems or even detecting subtle changes in consumer buying behavior. Looking at the last example, this could help us identify new buying trends early and take advantage of them. Using the example of an eCommerce company, you could identify anomalies in carts created per minute, a high number of cart abandonments, an odd shift in orders per minute or a significant variance in any number of other metrics.
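
As a minimal illustration (not how any particular product implements it), here is a trailing z-score check over a simulated orders-per-minute series; the window, threshold and data are arbitrary, and real deployments would account for seasonality and trend.

```python
import random
import statistics

def zscore_anomalies(series, window=60, threshold=3.0):
    """Flag points that sit more than `threshold` standard deviations
    away from the mean of the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            flagged.append((i, series[i]))
    return flagged

# Simulated orders-per-minute with one injected spike.
random.seed(1)
orders = [random.gauss(40, 3) for _ in range(300)]
orders[200] = 90  # e.g. a flash sale, or a broken checkout retry loop
print(zscore_anomalies(orders))  # the spike at index 200 gets flagged
```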

Using a Data Prep Platform: The Key to Analytic Product Agility


Guest post by Kevin Smith

For a product owner, one of the biggest fears is that the product you're about to launch won't get the necessary adoption to achieve success. This might happen for a variety of reasons— two of the most common are a lack of fit to the customers' needs and confusing design (it's just too hard to use!).

To combat the possibility of failure, many product owners have adopted the "agile" approach to building products: products that have enough functionality to meet minimum needs, but are still lean enough to facilitate easy change.

As a data product builder — someone building customer-facing analytics that will be part of a product — the needs are no different but achieving agility can be a real challenge. Sure, every analytics platform provider you might consider claims that they can connect to any data, anywhere, but this leaves a lot of wiggle room. Can you really connect to anything? How easy is it? How hard is it to change later? What about [insert new technology on the horizon here] that I just heard about? If you want to build an agile data product, you've got a tough road ahead... as I found out.

Recently I started working on the analytics strategy for a small start-up firm focused on providing services to large enterprises. As they delivered their services, they wanted to show the results in an analytics dashboard instead of the traditional PowerPoint presentation. It would be more timely, easier to deliver, and could be an on-going source of revenue after an engagement was completed. As I spoke with the team, a few goals surfaced:

  1. They wanted to buy an analytics platform rather than build from scratch. The team realized that they would be better off developing the methodology that would differentiate them from the competition instead of creating the deep functionality already provided by most analytics platforms.
  2. The system had to be cost-effective both to set up and to operate. As a start-up, there simply wasn't the cashflow available for costly analytics platforms that required extensive professional services to get started.
  3. The product had to be flexible and "configurable" by non-engineers. With little to no budget for an engineering staff, the team wanted a BI platform that could be configured easily as customer needs changed.
  4. Up and running quickly. This company had customers ready to go and needed a solution quickly. It would be essential to get a solution in front of the customers NOW, rather than try to migrate them to a new way of operating once the dashboards were ready. Changes would certainly be needed post-launch, but this was accepted as part of the product strategy.

None of this seemed to be impossible. I've worked on many data products with similar goals and constraints. Product teams always want to have a platform that's cost-effective, doesn't strain the technical capabilities of the organization, is flexible, and is launched sooner rather than later. It was only after a few more conversations that the problem arose: uncertain data sources.

Most data-driven products work like this: you've got a workflow application such as a help desk application or an ordering system that generates data into a database that you control. You know what data is flowing out of the workflow application and, therefore, you understand the data that is available for your analytics. You connect to your database, transform the data into an analytics-ready state, then display the information in analytics on a dashboard.

The situation here was different. As a services company, this business had to operate in a technology environment dictated by the customer. Some customers might use Salesforce, some might use SugarCRM. Still others might use Zoho or one of the myriad other CRM platforms available. Although the team would structure the dashboards and analytics based on their best practices and unique methodology, the data driving the analytics product would differ greatly from customer to customer.

Keboola: Data Monetization Series Pt. 2


As we examined in part 1 of our Data Monetization blog series, the first step to increasing revenue with data is identifying who the analytics will be surfaced to, what their top priorities are, what questions we need to ask and which data sources we need to include.  For this blog, let’s take a look at what tools we will need to bring it all together.  

Fortunately, with our initial example of a VP of Sales dashboard, the secondary data sources (NetProspex, Marketo and HubSpot Signals) all integrate fairly seamlessly with the Salesforce CRM. This should allow for some fairly straightforward analytics built on top of all the data we’ve aggregated. If we pivot over to our CMO dashboard, things get a bit murkier.

Although our Marketo instance easily integrates with Salesforce, the sheer volume of data sources that can provide insight into our marketing activity makes this project a much more daunting ask. What about our social channels, Adobe Omniture, Google Ads, LinkedIn Ads, Facebook Ads, SEO and various spreadsheets? In more and more instances, especially for a team managing multiple brands or channels, the number of sources can easily shoot into the dozens.

Keboola: Data Monetization Series Pt. 1


When a company thinks about monetizing data, the things that come to mind are increasing revenue, identifying operational inefficiencies or creating a new revenue stream. It’s important to keep in mind that these are the results of an effective strategy but can’t be the only goal of the project. In this blog series, we will examine these avenues with a focus on the added value that ultimately leads to monetization. For this blog, let’s look at it from the perspective of creating executive-level dashboards at a B2B software company.

Who will be consuming the data and what do they care about?

Before we jump into the data itself, take a step back and understand who the analytics will be surfaced to and what their challenges are. Make profiles with their top priorities, pain points and the questions they will be asking. One way to get started is to make a persona priority matrix listing the top three to five challenges for each persona (example below).

[Image: example persona priority matrix]

Once the matrix is laid out, you can begin mapping specific questions to each priority.  What answers might help a VP of Sales increase the effectiveness of the sales team and ultimately revenue?

  • What do our highest velocity deals look like (vertical, company size, who’s involved)?

  • What do our largest deals look like?

  • Where do our deals typically get stuck in the sales process?

  • What activities and actions are our best reps performing?

Top 3 challenges of big data projects


The Economist Intelligence Unit report Big data evolution: forging new corporate capabilities for the long term, published earlier this year, provided insight into big data projects from 550 executives across the globe. When asked about their company’s most significant challenges related to big data initiatives, maintaining data quality, collecting and managing vast amounts of data, and ensuring good data governance were three of the top four (data security and privacy was number three). Data availability and extracting value were actually near the bottom. This is a bit surprising, as ensuring good data quality and governance is critical to getting the most value from your data project.

Maintaining data quality

Having the right data, and accurate data, is instrumental to the success of a big data project. Depending on the focus, data doesn’t always have to be 100% accurate to provide business benefit; numbers you are 98% confident in are often enough to give you insight into your business. That being said, with the sheer volume of data and number of sources in a big data project, this is a big challenge. The first issue is ensuring that the original system of record is accurate (the sales rep updated Salesforce correctly, the person filled out the web form accurately, and so forth), as the data needs to be cleaned before integration. I’ve personally worked through CRM data projects; doing cleanup and de-duping can take a lot of resources. Once this is completed, procedures for regularly auditing the data should be put in place. With the ultimate goal of creating a single source of truth, understanding where the data came from and what happened to it along the way is also a top priority. Tracking and understanding data lineage will help identify issues and anomalies within the project.
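
A lot of CRM de-duplication, for instance, comes down to normalizing a matching key before dropping duplicates. Here is a minimal sketch with pandas; the records and column names are made up for illustration.

```python
import pandas as pd

# Toy CRM extract with near-duplicate contacts.
contacts = pd.DataFrame({
    "name":  ["Ann Lee", "ann lee", "Bo Chen", "Bo Chen"],
    "email": ["ann.lee@acme.com", "Ann.Lee@ACME.com ",
              "bo@foo.io", "bo@foo.io"],
    "updated": pd.to_datetime(
        ["2016-01-03", "2016-01-05", "2016-01-02", "2016-01-04"]),
})

# Normalize the matching key first; otherwise case differences and
# stray whitespace hide duplicates.
contacts["email_key"] = contacts["email"].str.strip().str.lower()

# Keep the most recently updated record for each normalized email.
deduped = (contacts.sort_values("updated")
                   .drop_duplicates("email_key", keep="last")
                   .drop(columns="email_key"))
print(deduped)
```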

Collecting and managing vast amounts of data

Before the results of a big data project can be realized, processes and systems need to be put in place to bring these disparate sources together. With data living in databases, cloud sources, spreadsheets and the like, consolidating everything into one database, or trying to fuse incompatible sources, can be complex. Typically, this process consists of using a data warehouse plus an ETL tool, or a custom solution, to cobble everything together. Another option is to create a networked database that pulls in all the data directly, but this route also requires a lot of resources. One of the challenges with these methods is the amount of expertise, development and resourcing required, spanning from database administration to expertise in a particular ETL tool. Unfortunately, it doesn’t end there; this is an ongoing process that will require regular attention.
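
As a toy illustration of the warehouse-plus-ETL pattern (the schema, field names and values are invented, and SQLite stands in for a real warehouse):

```python
import csv
import io
import sqlite3

# Stand-in for the extract step; in practice this CSV would come from
# an API export or a file drop.
RAW_CSV = """id,amount,country
o-1,19.90,us
o-2,240.00,DE
o-3,35.50,us
"""

# Extract
rows = list(csv.DictReader(io.StringIO(RAW_CSV)))

# Transform: coerce types and standardize country codes.
cleaned = [(r["id"], float(r["amount"]), r["country"].strip().upper())
           for r in rows]

# Load into a local "warehouse" table.
conn = sqlite3.connect("warehouse.db")
conn.execute("""CREATE TABLE IF NOT EXISTS orders
                (id TEXT PRIMARY KEY, amount REAL, country TEXT)""")
conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", cleaned)
conn.commit()
```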

Ensuring good data governance

In a nutshell, data governance is the set of policies, procedures and standards an organization applies to its data assets. Ensuring good data governance requires an organization to have cross-functional agreement, documentation and execution. This needs to be a collaborative effort between executives, line-of-business managers and IT. These programs will vary based on their focus, but all involve creating rules, resolving conflicts and providing ongoing services. Verifications should be put in place to confirm the standards are being met across the organization.
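
One concrete form those verifications can take is an automated audit that runs before data is published. A small sketch, reusing the toy warehouse table from the ETL example above; the rulebook here is an example only, while real standards would be agreed cross-functionally.

```python
import sqlite3

# Example rulebook mapping tables to the standards they must meet.
RULES = {
    "orders": {
        "required_columns": {"id", "amount", "country"},
        "not_null": ["id", "amount"],
    },
}

def audit(conn, table, rules):
    # Column inventory via SQLite's catalog; row[1] is the column name.
    cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    missing = rules["required_columns"] - cols
    if missing:
        raise ValueError(f"{table}: missing required columns {missing}")
    for col in rules["not_null"]:
        nulls = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
        if nulls:
            raise ValueError(f"{table}.{col}: {nulls} NULL values")

conn = sqlite3.connect("warehouse.db")
audit(conn, "orders", RULES["orders"])
print("orders passed governance checks")
```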

Conclusion

Having a successful big data project requires a combination of planning, people, collaboration, technology and focus to realize maximum business value. At Keboola, we focus on optimizing data quality and integration in our goal to provide organizations with a platform to truly collaborate on their data assets. If you’re interested in learning more you can check out a few of our customer stories.

KBC as a Data Science Brain Interface

The Keboola Data App Store has a fresh new addition, bringing us to a total of 16 currently available apps, three of which are provided by development partners.

This new one is called “aLook Analytics”, and technically it is a clone of our development project, a “Custom Science” app (not available yet, but coming soon!). It facilitates a connection to the GitHub/Bitbucket repository of a specific data science shop, which you can “hire” via the app, enabling them to work safely on your project.

This first instance is connected to Adam Votava’s company aLook Analytics (check them out at http://www.alookanalytics.com/).

How does it work?

Let’s imagine you want to build something data-science-complex in your project. You get in touch with aLook and agree on what it is you want them to do for you. You exchange some data, their team does some testing on their side and sets up the environment, and once they’re done, they give you a short configuration script that you enter into their app in KBC. Any business agreement regarding their work is made directly between you and aLook; Keboola stays on the sidelines for this one.

When you run the app, your data gets served to aLook’s prepared model, and the scripts saved in aLook’s repository get executed on Keboola servers. All the complex stuff happens, and the resulting data gets returned to your project. The app can be included in your Orchestrations (like any other), which means it can run automatically as part of your regular workflow.

The user of KBC does not have direct access to the script, protecting aLook’s IP (of course, if you agree with them otherwise, we do not put up any barriers).

Very soon we will enable the generic “Custom Science” app mentioned above. That means any data science pro will be able to connect their own GitHub/Bitbucket repository - giving you, our user, the freedom to find the best brain in the world for your job.

Why people and not just machines?

No “Machine Learning Drag&Drop” app provides the same quality as a bit of thought from a seasoned data scientist. We’re talking business analytics here! People can put things in context and be creative, while all machines can do is adjust (sometimes thousands of) parameters and test the results against a training set. That may be awesome for facial recognition or self-driving-car AI, but in any specific business application, a trained brain will beat the machine. Often you don’t even have a large enough test sample, so a bit of abstract thinking is critical and irreplaceable.

How we "hacked" Vizable

Tableau unveiled their new Vizable app on the first full day of the Tableau User Conference 2015 (A.K.A. TC-15) to many oohs and aahs. Vizable is a tablet app that allows users to take data from an .xls or .csv file and easily interact with it right on their tablet. It is unparalleled in its ease of use and intuitiveness, providing an exciting new way to consume data and drive insights. More information here: http://vizable.tableau.com/

As soon as we saw it, the Keboola team thought, “What an exciting way to use data from Keboola Connection - if only we could send data to it immediately to test it!” The app is built to accept .xls and .csv files that are physically present on the iPad it runs on, so at a glance, it is completely and utterly offline. We immediately wondered if Keboola Connection - thanks to its integration with Dropbox and Google Drive - could make Vizable the ultimate on-the-go data visualization app.

(a little bit of frantic testing later)

Yeah! We can easily schedule pushing data to the iPad using our existing integrations. We didn’t have to write a single line of code, and already during the conference we were able to play with the #data15 mentions we’d pulled in through Keboola Connection, with fresh data automatically pushed to the iPad every 30 minutes.
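
We didn’t write any code, but to give a sense of what the plumbing amounts to, here is a rough sketch of the same push implemented with the Dropbox Python SDK. The access token and file paths are placeholders, and the 30-minute schedule would come from cron or an orchestration, not from this script.

```python
import dropbox

# Hypothetical plumbing: drop a fresh CSV extract into a Dropbox folder
# that the iPad syncs, so Vizable can open it.
dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token

with open("data15_mentions.csv", "rb") as f:
    dbx.files_upload(
        f.read(),
        "/vizable/data15_mentions.csv",  # placeholder destination path
        mode=dropbox.files.WriteMode.overwrite,
    )
```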

We eagerly shared our success with the Vizable team and started showing conference attendees and members of the Tableau team just how we’d made it all happen! It was great to receive a string of visits from the whole Vizable crew all the way up to Dave Story, VP of Mobile and Strategic Growth, and Chris Stolte, the Chief Development Officer. What a thrilling way to educate the Tableau folks on all the cool stuff Keboola does with their tool and for their customers.

Get in touch with us if you want to know more!