"Data Monetization" is a term you might have heard a lot lately. But what does it really mean for you and your business? There is gold in your data, but how can you extract it to gain all its benefits without adding resource burdens on your business? We collected the main approaches successful companies are using to give you inspiration and insight into how you can use data you already have to improve efficiencies, create new revenue streams or increase value and hence your wallet share from your current customer base.
Use data to make better decisions
We’re always keeping an eye out for BI and analytics experts to add to our fast-growing network of partners, and we are thrilled to add a long-standing favorite in the Tableau ecosystem! InterWorks, who holds multiple Tableau Partner Awards, is a full-spectrum IT and data consulting firm that leverages its experienced talent and powerful partners to deliver maximum value for its clients. (Original announcement from InterWorks here.) This partnership is focused on enabling consolidated end-to-end data analysis in Tableau.
Whether we’re talking Tableau BI services, data management or infrastructure, InterWorks can deliver everything from quick-strikes (to help get a project going or keep it moving) to longer-term engagements with a focus on enablement and adoption. Their team has a ton of expertise and is also just generally great to work with.
InterWorks will provide professional services to Keboola customers, with the focus on projects using Tableau alongside Keboola Connection, both in North America and in Europe, in collaboration with our respective teams. “We actually first got into Keboola by using it ourselves,” said InterWorks Principal and Data Practice Lead Brian Bickell. “After seeing how easy it was to connect to multiple sources and then integrate that data into Tableau, we knew it had immediate value for our clients.”
What does this mean for Keboola customers?
InterWorks brings world-class Tableau expertise into the Keboola ecosystem. Our clients using Tableau can have a one-stop shop for professional services, leveraging both platforms to fully utilize their respective strengths. InterWorks will also utilize Keboola Connection as the backbone of its white-glove offering: a fully managed BI stack crowned by Tableau.
Whether working on projects with customers or partners, we both believe that aligning people and philosophy is even more critical than the technology behind it. To that end, we’ve found a kindred spirit in InterWorks: we believe in being ourselves and having fun, while ensuring we deliver the best results for our shared clients. The notion of continuous learning and trying new things was one of the driving factors behind the partnership.
Have a project you want to discuss with InterWorks?
It’s been quite an exciting year for us here at Keboola, and the biggest reason for that is our fantastic network of partners and customers -- and of course a huge thanks to our team! In the spirit of the season, we wanted to take a quick stroll down memory lane and give thanks for some of the big things we were able to be a part of and the people who helped us make them happen!
Probably the biggest news from a platform perspective this year came about two years after we first announced support for the “next” data warehouse called Amazon Redshift. At the time, it was a huge step in the right direction. We still use Redshift for some of our projects (typically due to data residency or tool choice), but this year we were thrilled to announce a partnership born in the cloud when we officially made the lightning-fast and flexible Snowflake the database of choice behind our storage API and the primary option for our transformation engine. Not to get too far into the technical weeds (you can read the full post here), but it has helped us deliver a ton of value to our clients: better elasticity and scale, a huge performance improvement for concurrent data flows, better “raw” performance by our platform, more competitive pricing for our customers and, best of all, some great friends! Since our initial announcement, Snowflake joined us in better supporting our European customers by offering a cloud deployment hosted in the EU (Frankfurt!). We’re very excited to see how this relationship will continue to grow over the next year and beyond!
One of our favorite things to do as a team is participate in field events, where we can get out in the data world, learn about the types of projects people work on and the challenges they run into, and find out what’s new and exciting. It’s also a great chance for our team to spend some time together as we span the globe - sometimes Slack and GoToMeeting aren’t enough!
SeaTUG in May
We had the privilege of teaming up with Slalom Consulting to co-host the Seattle Tableau User Group back in May. Anthony Gould was a gracious host, Frank Blau provided some great perspective on IoT data and, of course, Keboola’s own Milan Veverka dazzled the crowd with his demonstration focused on NLP and text analysis. Afterwards, we had the chance to grab a few cocktails, chat with some very interesting people and make a lot of new friends. This event spawned quite a few conversations around analytics projects; one of the coolest came from a group of University of Washington students who analyzed the sentiment of popular music using Keboola + Tableau Public (check it out).
In a recent post, we started scoping our executive-level dashboards and reporting project by mapping out who the primary consumers of the data will be, what their top priorities and challenges are, which data we need and what we are trying to measure. It might seem like we are ready to start evaluating vendors and building out the project, but we still have a few more requirements to gather.
What data can we exclude?
With our initial focus on sales analytics, the secondary data we would want to include (NetProspex, Marketo and ToutApp) all integrates fairly seamlessly with Salesforce, so it won't require as much effort on the data prep side. If we pivot over to our marketing function, however, things get a bit murkier. On the low end, this could mean a dozen or so data sources. But what about our social channels, Google Ads and the various spreadsheets we maintain? In more and more instances, particularly for a team managing multiple brands or channels, the number of potential data sources can easily shoot into the dozens.
Knowing what data we should include is important, but so is asking what data we can exclude. Unlike the data lake philosophy (Forbes: Why Data Lakes Are Evil), when we are creating operational-level reporting, it's important to focus on creating value, not to overcomplicate our project with additional data sources that don't actually yield additional value.
Who's going to manage it?
Just as critical to the project as the what and the how: who’s going to be managing it? What skills do we have at our disposal, and how many hours can we allocate for the initial setup as well as ongoing maintenance and change requests? Will this project be managed by IT, our marketing analytics team, or both? Perhaps IT will manage data warehousing and data integration while the analysts focus on capturing end-user requirements and creating the dashboards and reports. Depending on who's involved, the functionality of the tools and the languages used will vary. As mentioned in a recent CMS Wire post, Buy and Build Your Way to a Modern Business Analytics Platform, it's important to take an analytical inventory of the skills we have, as well as the tools and resources already in place that we may be able to take advantage of.
As we covered in our recent NLP blog, there are a lot of cool use cases for text and sentiment analysis. One recent instance we found really interesting came out of our May presentation at SeaTUG (Seattle Tableau User Group). As part of our presentation and demo, we decided to find out what some of the local Tableau users could do with trial access to Keboola; below we’ll highlight what Hong Zhu and a group of students from the University of Washington were able to accomplish with Keboola + Tableau for a class final project!
What class was this for and why did you want to do this for a final project?
We are a group of students in the University of Washington’s Department of Human Centered Design and Engineering. For our class project for HCDE 511 – Information Visualization, we made an interactive tool to visualize music data from Last.fm. We chose the topic of music because all four of us are music lovers.
Initially, the project was driven by our interest in having an international perspective on the popularity vs. obscurity of artists and tracks. However, after interviewing a number of target users, we learned that most of them were not interested in rankings in other countries. In fact, most of them were not interested in the ranking of artists/tracks at all. Instead, our target users were interested in having more individualized information and robust search functions, in order to quickly find the right music that is tailored to one’s taste, mood, and occasion. Therefore, we re-focused our efforts on parsing out the implicit attributes, such as genre and sentiment, from the 50 most-used tags of each track. That was when Keboola and its NLP plug-in came into play and became instrumental in the success of this project.
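To give a flavor of what tag-based attribute extraction can look like, here is a minimal Python sketch; the tag lists, genre set and sentiment lexicon are hypothetical stand-ins, not the team's actual pipeline or the Keboola NLP plug-in's API.

```python
# Hypothetical sketch: deriving genre and sentiment attributes from a
# track's most-used tags (Last.fm style). Lexicons here are illustrative.
GENRE_TAGS = {"rock", "pop", "jazz", "hip hop", "electronic", "classical"}
SENTIMENT_LEXICON = {
    "happy": 1, "upbeat": 1, "fun": 1, "beautiful": 1,
    "sad": -1, "melancholy": -1, "dark": -1, "angry": -1,
}

def derive_attributes(tags):
    """Map a track's top tags to a genre label and a sentiment score."""
    genres = [t for t in tags if t in GENRE_TAGS]
    # Weight earlier tags more heavily: they were applied by more users.
    score = sum(SENTIMENT_LEXICON.get(t, 0) / (i + 1) for i, t in enumerate(tags))
    return {
        "genre": genres[0] if genres else "unknown",
        "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
    }

print(derive_attributes(["pop", "upbeat", "sad", "dance"]))
# {'genre': 'pop', 'sentiment': 'positive'}  (0.50 - 0.33 > 0)
```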
The age-old conflict: IT needs centralization, governance, standards and control; on the other side of the coin, business units need the ability to move fast and try new things. How can we give lines of business access to the data they need for projects, so they can spend their time focused on discovering new insights? Typically, they get stuck in a bottleneck of IT requests or spend 80% of their time doing data integration and preparation. Neither group seems particularly excited about that work, and I don’t blame them. For the analyst, it increases the complexity of their tasks and seriously raises the technical knowledge requirements. For IT, it’s a major distraction from their main purpose in life, an extra thing to do. Self-serve BI is trying to destroy the backlogged “report factories,” only to replace them with “data stores,” which are sadly even less equipped for the job at hand. Either way, the result is a painfully inefficient process, straining both ends of the value chain in any company that embarks on the data-driven journey.
The Bi-Modal BI Answer?
An organization's ability to effectively extract value from data and analytics while maintaining a well-governed source of truth is the difference between competitive advantage on the one hand and sunk costs and missed opportunities on the other. How can we create an environment that provides the agile data access business users need while still maintaining sound data governance? Gartner has proposed a bi-modal IT strategy as one answer. A big challenge with bi-modal IT is that it pushes IT management to divide their efforts between IT's traditional focus and a more business-focused, agile methodology.
The DBA and Analyst Divide
Another major challenge in data access comes from the separation between DBAs and business users. Although the technical side may have the necessary expertise to implement ETL projects, they often lack the business domain expertise needed to make the correct assumptions about context and how the data is used. With so many projects competing for resources, we shouldn’t have to assign a DBA to all of them. On the flip side of the coin, data analysts and scientists want the right data for their tools of choice, and they want it fast. Even though there is a growing set of data integration tools that allow individual business units to create and maintain their own data projects, this typically requires a lot of manual data modeling and can lead to siloed data or inconsistent metrics.
Instead of controlling all of BI, IT can enable the business to develop their analytics without sacrificing control and governance standards. So how can we get the right data in the hands of people who understand and need it in a timely manner?
Having access to the right data in a clean and accessible format is the first step (or series of steps) leading up to actually extracting business value from your data. As much as 80% of the time spent on data science projects involves data integration and preparation. Once we get there, the real fun begins. With the continued focus on big data and analytics to drive competitive advantage, data science has been spending a lot of time in the headlines. (Can we fit a few more buzzwords into one sentence?)
Let’s take a look at a few data science apps available on our platform and how they can help with our data monetization efforts.
One of the most popular algorithms is market basket analysis. It provides the power behind things like Amazon’s product recommendation engine by identifying that if someone buys product A, they are likely to buy product B. More specifically, it isn’t identifying products placed next to each other on the site that get bought together, but rather products that aren’t placed next to each other. This can be useful in improving in-store and on-site customer experience, targeted marketing and even the placement of content items on media sites.
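To make the mechanics concrete, here is a minimal Python sketch of the counting behind association rules; the baskets and thresholds are made up, and a real project would more likely run a library implementation (such as apriori/association_rules in mlxtend) over actual order data.

```python
# Minimal market basket sketch: score item pairs by support, confidence
# and lift over a toy set of transactions (illustrative data).
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"bread", "butter", "cereal"},
    {"milk", "bread"},
]

n = len(transactions)
item_counts = Counter()
pair_counts = Counter()
for basket in transactions:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

# "If someone buys A, they are likely to buy B": confidence is P(B | A);
# lift compares that against B's baseline popularity.
for (a, b), count in pair_counts.items():
    support = count / n
    confidence = count / item_counts[a]
    lift = confidence / (item_counts[b] / n)
    if lift > 1.2:  # illustrative cutoff for an "interesting" rule
        print(f"{a} -> {b}: support={support:.2f}, "
              f"confidence={confidence:.2f}, lift={lift:.2f}")
```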
Anomaly detection refers to identifying specific events that don’t conform to the pattern expected from the data. This could take the form of fraud detection, identifying medical problems or even detecting subtle changes in consumer buying behavior. Looking at the last example, anomaly detection could help us identify new buying trends early and take advantage of them. For an eCommerce company, you could identify anomalies in carts created per minute, a high number of cart abandonments, an odd shift in orders per minute or a significant variance in any number of other metrics.
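A simple way to flag such shifts is to compare each new observation against a trailing baseline. The Python sketch below does this with a z-score over a rolling window; the cart counts and threshold are fabricated for illustration, and real monitoring would usually use a proper time-series model that accounts for seasonality and trend.

```python
# Minimal anomaly-detection sketch: flag minutes where cart creations
# deviate sharply from the trailing average (illustrative data).
from statistics import mean, stdev

carts_per_minute = [12, 14, 13, 15, 12, 13, 41, 14, 13, 12, 11]

WINDOW = 5     # trailing minutes used as the baseline
THRESHOLD = 3  # z-score beyond which a point counts as an anomaly

for i in range(WINDOW, len(carts_per_minute)):
    baseline = carts_per_minute[i - WINDOW:i]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (carts_per_minute[i] - mu) / sigma if sigma else 0.0
    if abs(z) > THRESHOLD:
        print(f"minute {i}: {carts_per_minute[i]} carts (z={z:.1f}) is anomalous")
```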
Guest post by Kevin Smith
For a product owner, one of the biggest fears is that the product you're about to launch won't get the necessary adoption to achieve success. This might happen for a variety of reasons— two of the most common are a lack of fit to the customers' needs and confusing design (it's just too hard to use!).
To combat the possibility of failure, many product owners have adopted the "agile" approach: building products that have enough functionality to meet minimum needs, but are still lean enough to facilitate easy change.
As a data product builder — someone building customer-facing analytics that will be part of a product — the needs are no different but achieving agility can be a real challenge. Sure, every analytics platform provider you might consider claims that they can connect to any data, anywhere, but this leaves a lot of wiggle room. Can you really connect to anything? How easy is it? How hard is it to change later? What about [insert new technology on the horizon here] that I just heard about? If you want to build an agile data product, you've got a tough road ahead... as I found out.
Recently I started working on the analytics strategy for a small start-up firm focused on providing services to large enterprises. As they delivered their services, they wanted to show the results in an analytics dashboard instead of the traditional PowerPoint presentation. It would be more timely, easier to deliver, and could be an on-going source of revenue after an engagement was completed. As I spoke with the team, a few goals surfaced:
- They wanted to buy an analytics platform rather than build from scratch. The team realized that they would be better off developing the methodology that would differentiate them from the competition instead of creating the deep functionality already provided by most analytics platforms.
- The system had to be cost-effective both to set up and to operate. As a start-up, there simply wasn't the cashflow available for costly analytics platforms that required extensive professional services to get started.
- The product had to be flexible and "configurable" by non-engineers. With little to no budget for an engineering staff, the team wanted a BI platform that could be configured easily as customer needs changed.
- Up and running quickly. This company had customers ready to go and needed a solution quickly. It would be essential to get a solution in front of the customers NOW, rather than try to migrate them to a new way of operating once the dashboards were ready. Changes would certainly be needed post-launch, but this was accepted as part of the product strategy.
None of this seemed to be impossible. I've worked on many data products with similar goals and constraints. Product teams always want to have a platform that's cost-effective, doesn't strain the technical capabilities of the organization, is flexible, and is launched sooner rather than later. It was only after a few more conversations that the problem arose: uncertain data sources.
Most data-driven products work like this: you've got a workflow application such as a help desk application or an ordering system that generates data into a database that you control. You know what data is flowing out of the workflow application and therefore, you understand the data that is available for your analytics. You connect to your database, transform the data into an analytics-ready state, then display the information in analytics on a dashboard. The situation here was different. As a services company, this business had to operate in a technology environment dictated by the customer. Some customers might use Salesforce, some might use Sugar CRM. Still others might use Zoho or one of the myriad other CRM platforms available. Although the team would structure the dashboards and analytics based on their best practices and unique methodology, the data driving the analytics product would differ greatly from customer to customer.
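A common way to tame that variability is a thin adapter layer: one small mapping per source CRM that lands every customer's data in the same analytics-ready schema, so the dashboards themselves never change. The Python sketch below is hypothetical; the field names roughly resemble Salesforce and SugarCRM opportunity fields but aren't meant as exact API references.

```python
# Hypothetical adapter layer: normalize deal records from different CRMs
# into one shared schema that the analytics product is built against.

def from_salesforce(record):
    # Field names approximate a Salesforce Opportunity (assumption).
    return {"deal_id": record["Id"], "amount": record["Amount"],
            "stage": record["StageName"], "close_date": record["CloseDate"]}

def from_sugar(record):
    # Field names approximate a SugarCRM Opportunity (assumption).
    return {"deal_id": record["id"], "amount": record["amount"],
            "stage": record["sales_stage"], "close_date": record["date_closed"]}

ADAPTERS = {"salesforce": from_salesforce, "sugar": from_sugar}

def normalize(source, records):
    """Convert raw CRM rows into the shared analytics-ready schema."""
    return [ADAPTERS[source](r) for r in records]

# Supporting a new customer's CRM then means writing one more adapter,
# not rebuilding the dashboards.
```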
As we examined in part 1 of our Data Monetization blog series, the first step to increasing revenue with data is identifying who the analytics will be surfaced to, what their top priorities are, what questions we need to ask and which data sources we need to include. For this blog, let’s take a look at what tools we will need to bring it all together.
Fortunately, with our initial example of a VP of Sales dashboard, the secondary data sources (NetProspex, Marketo and HubSpot Signals) all integrate fairly seamlessly with the Salesforce CRM. This should allow for some fairly straightforward analytics built on top of all the data we’ve aggregated. If we pivot over to our CMO dashboard, things get a bit murkier.
Although our Marketo instance easily integrates with Salesforce, the sheer volume of data sources that can provide insight into our marketing activity makes this project a much more daunting ask. What about our social channels, Adobe Omniture, Google Ads, LinkedIn Ads, Facebook Ads and SEO, as well as various spreadsheets? In more and more instances, especially for a team managing multiple brands or channels, this number can easily shoot into the dozens.
When a company thinks about monetizing data, the things that come to mind are increasing revenue, identifying operational inefficiencies or creating a new revenue stream. It’s important to keep in mind that these are the results of an effective strategy, but they can't be the only goal of the project. In this blog series, we will examine these avenues with a focus on the added value that ultimately leads to monetization. For this blog, let’s look at it from the perspective of creating executive-level dashboards at a B2B software company.
Who will be consuming the data and what do they care about?
Before we jump into the data itself, take a step back and understand who the analytics will be surfaced to and what their challenges are. Make profiles with their top priorities, pain points and the questions they will be asking. One way to get started is to make a persona priority matrix listing the top three to five challenges for each (example below).
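For instance, such a matrix might look like the following; the personas and priorities here are hypothetical placeholders to show the shape of the exercise.

- VP of Sales: pipeline velocity, rep productivity, forecast accuracy
- CMO: lead quality and conversion, channel ROI, brand awareness
- CFO: revenue predictability, customer acquisition cost, margin trends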
Once the matrix is laid out, you can begin mapping specific questions to each priority. What answers might help a VP of Sales increase the effectiveness of the sales team and ultimately revenue?
- What do our highest velocity deals look like (vertical, company size, who’s involved)?
- What do our largest deals look like?
- Where do our deals typically get stuck in the sales process?
- What activities and actions are our best reps performing?