Keboola: Data Monetization Series Pt. 2


As we examined in part 1 of our Data Monetization blog series, the first step to increasing revenue with data is identifying who the analytics will be surfaced to, what their top priorities are, what questions we need to ask and which data sources we need to include. For this blog, let's look at the tools we will need to bring it all together.

With our initial example of a VP of Sales dashboard, the secondary data sources (NetProspex, Marketo and HubSpot Signals) fortunately all integrate fairly seamlessly with the Salesforce CRM. This should allow for some fairly straightforward analytics built on top of all the data we've aggregated. If we pivot over to our CMO dashboard, things get a bit murkier.

Although our Marketo instance easily integrates with Salesforce, the sheer volume of data sources that can provide insight into our marketing activity makes this project a much more daunting task. What about our social channels, Adobe Omniture, Google Ads, LinkedIn Ads, Facebook Ads, SEO and various spreadsheets? In more and more instances, especially for a team managing multiple brands and channels, this number can easily shoot into the dozens.

Keboola: Data Monetization Series Pt. 1


When a company thinks about monetizing data, the things that come to mind are increasing revenue, identifying operational inefficiencies or creating a new revenue stream. It's important to keep in mind that these are the results of an effective strategy but can't be the only goal of the project. In this blog series, we will examine these avenues with a focus on the added value that ultimately leads to monetization. For this blog, let's look at it from the perspective of creating executive-level dashboards at a B2B software company.

Who will be consuming the data and what do they care about?

Before we jump into the data itself, take a step back and understand who the analytics will be surfaced to and what their challenges are. Make profiles with their top priorities, pain points and the questions they will be asking. One way to get started is to make a persona priority matrix listing the top three to five challenges for each (example below).

[Screenshot: example persona priority matrix]

Once the matrix is laid out, you can begin mapping specific questions to each priority.  What answers might help a VP of Sales increase the effectiveness of the sales team and ultimately revenue?

  • What do our highest velocity deals look like (vertical, company size, who’s involved)?

  • What do our largest deals look like?

  • Where do our deals typically get stuck in the sales process?

  • What activities and actions are our best reps performing?
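The priority-to-question mapping above can also be kept as plain structured data, so each question stays tied to the priority it answers. A minimal Python sketch (the personas, priorities and question wording here are illustrative placeholders, not a prescribed schema):

```python
# Map each persona -> priorities -> the questions that serve them.
# Names and entries are illustrative, not a fixed schema.
persona_matrix = {
    "VP of Sales": {
        "Increase sales team effectiveness": [
            "What do our highest velocity deals look like?",
            "Where do deals typically get stuck in the sales process?",
            "What activities are our best reps performing?",
        ],
    },
    "CMO": {
        "Improve marketing ROI": [
            "Which channels drive the highest-converting leads?",
        ],
    },
}

def questions_for(persona: str) -> list:
    """Flatten all questions mapped to a persona's priorities."""
    return [q for qs in persona_matrix.get(persona, {}).values() for q in qs]
```

Kept this way, every dashboard widget can later be traced back to the specific question and priority it was built to answer.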

Adding Context With Different Types of Data


Data can be vast and overwhelming, so understanding the different types helps clarify what kind of numbers we are looking for. Even with the treasure trove of data most organizations have in-house, there are plenty of additional data sets that can be included in a project to add valuable context and create even deeper insights. It's important to keep in mind what type of data it is, when and where it was created, what else was going on in the world when it was created, and so forth. Using the example of a restaurant, let's look at some different types of data and how they could impact an analytics project.

Numerical data is anything measurable that is always expressed in numerical form: for example, the number of diners attending a particular restaurant over the course of a month, or the number of appetizers sold during a dinner service. It can be segmented into two sub-categories.

Discrete data represent items that can be counted; they take on exact values that can be listed out. The list of possible values may be fixed (also called finite), or it may run 0, 1, 2, ... on to infinity (making it countably infinite). For example:

  • Number of diners that ate at the restaurant on a particular day (you can't have half a diner).

  • Number of beverages sold each week.

  • How many employees were staffed at the restaurant on a given day.

Continuous data represent measurements; their possible values cannot be counted and can only be described using intervals on the real number line.  For example, the exact amount of vodka left in the bottle would be continuous data from 0 mL to 750 mL, represented by the interval [0, 750], inclusive.   Other examples:

  • Pounds of steak sold during dinner service

  • The high temperature in the city on a particular day

  • How many ounces of wine were poured in a given week

You can perform most mathematical operations on numerical data, as well as list it in ascending or descending order and express it as fractions.
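The distinction matters mostly for which operations make sense on the result. A small Python illustration using the restaurant example (all figures are made up):

```python
from statistics import mean

# Discrete data: exact counts (diners per day over one week, invented numbers).
diners_per_day = [42, 38, 55, 61, 47, 88, 73]

# Continuous data: measurements (pounds of steak sold per dinner service).
steak_lbs = [12.4, 9.75, 15.2, 11.05, 14.6, 22.3, 18.9]

total_diners = sum(diners_per_day)                     # counts sum to a whole number
busiest_first = sorted(diners_per_day, reverse=True)   # ordering works on both types
avg_steak = mean(steak_lbs)                            # averaging continuous data is natural

# Note: the mean of discrete data can be fractional even though each
# observation cannot be -- "57.7 diners per day" is a rate, not a count.
avg_diners = mean(diners_per_day)
```

Both kinds support sums, means and sorting; the difference shows up in interpretation, since a fractional average of a count describes a rate rather than a possible observation.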

6 Gift Ideas for the Data Geek in Your Life


It's that time of year again and there are so many gift options to choose from. Be it hover-boards (that may explode), drones or Star Wars' own BB-8 remote control droid, there's been quite a boom in tech gadgets this year. At Keboola we love all things data, so to get you in the holiday spirit, we wanted to share some cool gift ideas that use data to make your life easier (or at least a bit more interesting).

Automatic Adapter

Similar to the gadget seen in the Progressive commercials, the Automatic Adapter is basically a fitness app for your vehicle. Through an app or a web interface, it provides a full report on where you've been and how you drive, and can even tag routes for business travel expenses.


Top 3 challenges of big data projects


The Economist Intelligence Unit report Big data evolution: forging new corporate capabilities for the long term, published earlier this year, provided insight into big data projects from 550 executives across the globe. When asked about their company's most significant challenges related to big data initiatives, maintaining data quality, collecting and managing vast amounts of data, and ensuring good data governance were three of the top four (data security and privacy was number 3). Data availability and extracting value were actually near the bottom. This is a bit surprising, as ensuring good data quality and governance is critical to getting the most value from a data project.

Maintaining data quality

Having the right data, and accurate data, is instrumental to the success of a big data project. Depending on the focus, data doesn't always have to be 100% accurate to provide business benefit; numbers at a 98% confidence level are often enough to give you insight into your business. That being said, with the sheer volume of data and sources available for a big data project, this is a big challenge. The first issue is ensuring that the original system of record is accurate (the sales rep updated Salesforce correctly, the prospect filled out the web form accurately, and so forth), as the data needs to be cleaned before integration. I've personally worked through CRM data projects; doing cleanup and de-duping can take a lot of resources. Once this is completed, procedures for regularly auditing the data should be put in place. With the ultimate goal of creating a single source of truth, understanding where the data came from and what happened to it is also a top priority. Tracking and understanding data lineage will help identify issues or anomalies within the project.
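The de-duping step mentioned above can start very simply: normalize an identifying field and keep one record per key. A stdlib-only Python sketch (the field names and records are hypothetical; real CRM cleanup also involves fuzzy matching and survivorship rules):

```python
def dedupe_contacts(records):
    """Keep the first record seen for each normalized email address.

    Records without an email are dropped; a real pipeline would route
    them to a review queue instead.
    """
    seen = set()
    unique = []
    for rec in records:
        key = rec.get("email", "").strip().lower()  # normalize before comparing
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical CRM export with one duplicate hiding behind casing/whitespace.
contacts = [
    {"name": "Ada Lovelace", "email": "ada@example.com"},
    {"name": "A. Lovelace",  "email": " ADA@example.com"},
    {"name": "Alan Turing",  "email": "alan@example.com"},
]
```

Even this naive version catches the duplicates that differ only in casing or stray whitespace, which in my experience is a surprisingly large share of them.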

Collecting and managing vast amounts of data

Before the results of a big data project can be realized, processes and systems need to be put in place to bring these disparate sources together. With data living in databases, cloud sources, spreadsheets and the like, bringing everything into one database, or trying to fuse incompatible sources, can be complex. Typically, this process consists of using a data warehouse plus an ETL tool, or a custom solution that cobbles everything together. Another option is to create a federated database that pulls in all the data directly, but this route also requires a lot of resources. One of the challenges with these methods is the amount of expertise, development and resources required, spanning from database administration to proficiency with an ETL tool. Unfortunately, it doesn't end there; this is an ongoing process that will require regular attention.
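At its core, the consolidation step is a join on a shared key across sources. A toy Python sketch of the idea (the source names, columns and values are invented; a real pipeline would do this in a warehouse with an ETL tool, as described above):

```python
import csv
import io

# Two "sources" as inline CSV text; in practice these would be API
# extracts or file exports landed by an ETL job. All values are invented.
salesforce_csv = (
    "account_id,account_name,stage\n"
    "A1,Acme,Closed Won\n"
    "A2,Globex,Prospecting\n"
)
marketo_csv = (
    "account_id,campaign,clicks\n"
    "A1,Spring Launch,120\n"
    "A2,Webinar,45\n"
)

def load(text):
    """Parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def join_on(left, right, key):
    """Naive inner join of two row lists on a shared key column."""
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

merged = join_on(load(salesforce_csv), load(marketo_csv), "account_id")
```

The hard part in practice is not the join itself but agreeing on the shared key: CRM and marketing systems rarely identify the same account the same way, which is exactly where the expertise mentioned above gets spent.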

Ensuring good data governance

In a nutshell, data governance is the set of policies, procedures and standards an organization applies to its data assets. Ensuring good data governance requires an organization to have cross-functional agreement, documentation and execution. This needs to be a collaborative effort between executives, line-of-business managers and IT. These programs will vary based on their focus but will all involve creating rules, resolving conflicts and providing ongoing services. Verifications should be put in place to confirm the standards are being met across the organization.

Conclusion

Having a successful big data project requires a combination of planning, people, collaboration, technology and focus to realize maximum business value. At Keboola, we focus on optimizing data quality and integration in our goal to provide organizations with a platform to truly collaborate on their data assets. If you’re interested in learning more you can check out a few of our customer stories.