How can I get more out of my Salesforce data?
Along with being the world’s #1 CRM, Salesforce provides an end-to-end platform for connecting with your customers: Marketing Cloud to personalize experiences across email, mobile, social, and the web; Service Cloud to support customer success; Community Cloud to connect customers, partners, and employees; and Wave Analytics, designed to unlock the data within.
After going through many Salesforce implementations, I’ve found that although companies store their primary customer data there, there is a big opportunity to enrich it further by bringing in related data stored in other systems, such as invoices in an ERP or contracts in a dedicated DMS. For example, I’ve seen clients run into inconsistent data across multiple source systems when a customer changes their billing address. In a nutshell, Salesforce makes it easy to report on the data stored within it, but it can’t provide a complete picture of the customer unless we broaden our view.
There is an increasing number of use cases and data projects for which geolocation data can add a ton of value - e-commerce and retail, supply chain, sales and marketing, etc. Unfortunately, one of the most challenging asks of any data project is relating geographical information to various components of the dataset. On a more positive note, however, KBC’s easy integration with Google apps of all kinds allows users to leverage Google Maps to add geocoding functionality. Since we have so many clients taking advantage of geocoding capabilities, one of our consultants, Pavel Boiko, outlined the process of adding this feature to your KBC environment. Check it out!
There’s one trend on Gartner’s radar that hasn’t changed much over the last few years and that’s the increasing move toward a self-service BI model. Gone are the days of your IT or analytics department being report factories. And if those days aren’t gone for you, then it’s time you make some substantive changes to your business intelligence environment. When end-users are forced to rely on another department to deliver the reports they need, the entire concept of being a “data-driven” organization goes right out the window.
So other than giving your users access to ad hoc reporting capabilities, how do you empower the user?
The age-old conflict: IT needs centralization, governance, standards, and control; on the other side of the coin, business units need the ability to move fast and try new things. How can we give lines of business access to the data they need for projects so they can spend their time focused on discovering new insights? Typically, they get stuck in a bottleneck of IT requests or spend 80% of their time doing data integration and preparation. Neither group seems particularly excited to do it, and I don’t blame them. For the analyst, it increases the complexity of their tasks and seriously raises the technical knowledge requirements. For IT, it’s a major distraction from their main purpose in life, an extra thing to do. Self-serve BI is trying to destroy the backlogged “report factories,” only to replace them with “data stores,” which are sadly even less equipped for the job at hand. Either way, the result is a painfully inefficient process, straining both ends of the value chain in any company that embarks on the data-driven journey.
The Bi-Modal BI Answer?
An organization’s ability to effectively extract value from data and analytics while maintaining a well-governed source of truth is the difference between competitive advantage and sunk costs with missed opportunities. How can we create an environment that provides the agile data access business users need while still maintaining sound data governance? Gartner has proposed a Bi-modal IT strategy. A big challenge with Bi-modal IT is that it pushes IT management to divide their efforts between IT’s traditional focus and a more business-focused, agile methodology.
The DBA and Analyst Divide
Another major challenge in data access comes from the separation between DBAs and business users. Although the technical side may have the necessary expertise to implement ETL projects, they often lack the business domain expertise needed to make the correct assumptions about context and how the data is interpreted. With so many projects competing for resources, we shouldn’t have to assign a DBA to all of them. On the flip side of the coin, data analysts and scientists want the right data for their tools of choice, and they want it fast. Even though there is a growing set of data integration tools that allow individual business units to create and maintain their own data projects, this typically requires a lot of manual data modeling and can lead to siloed data or inconsistent metrics.
Instead of controlling all of BI, IT can enable the business to develop their analytics without sacrificing control and governance standards. So how can we get the right data in the hands of people who understand and need it in a timely manner?
The rapid evolution in business intelligence and analytics capabilities is both exhilarating and overwhelming.
How do you protect the stability of the work you’ve already done, while evangelizing experimentation, exploration and progress within your organization?
We’ve got a few tips for you.
Data can be vast and overwhelming, so understanding the different types helps to simplify what kind of numbers we are looking for. Even with the treasure trove of data most organizations have in-house, there are tons of additional data sets that can be included in a project to add valuable context and create even deeper insights. It’s important to keep in mind what type of data it is, when and where it was created, what else was going on in the world when this data was created, and so forth. Using the example of a restaurant, let’s look at some different types of data and how they could impact an analytics project.
Numerical data is something that is measurable and always expressed in numerical form. For example, the number of diners attending a particular restaurant over the course of a month or the number of appetizers sold during a dinner service. This can be segmented into two sub-categories.
Discrete data represent items that can be counted; they are listed as exact numbers and take on possible values that can be listed out. The list of possible values may be fixed (also called finite), or it may run from 0, 1, 2, on to infinity (making it countably infinite). For example:
Number of diners that ate at the restaurant on a particular day (you can’t have half a diner).
Number of beverages sold each week.
How many employees were staffed at the restaurant on a given day.
Continuous data represent measurements; their possible values cannot be counted and can only be described using intervals on the real number line. For example, the exact amount of vodka left in the bottle would be continuous data from 0 mL to 750 mL, represented by the interval [0, 750], inclusive. Other examples:
Pounds of steak sold during dinner service
The high temperature in the city on a particular day
How many ounces of wine were poured in a given week
You can perform most mathematical operations on numerical data, as well as sort it in ascending or descending order and display it as fractions.
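The distinction above can be sketched in a few lines of code. This is an illustrative example with made-up restaurant numbers: integer counts for discrete data, real-valued measurements for continuous data, plus the kinds of operations (sorting, arithmetic) the text mentions.

```python
# Discrete data: countable values, always whole numbers.
diners_per_day = [42, 55, 38, 61]        # you can't have half a diner

# Continuous data: measurements on an interval of the real line.
vodka_left_ml = [512.5, 301.25, 88.0]    # any value in [0, 750] mL

# Typical operations on numerical data: ordering and arithmetic.
total_diners = sum(diners_per_day)
avg_vodka_ml = sum(vodka_left_ml) / len(vodka_left_ml)

print(sorted(diners_per_day))     # ascending order
print(total_diners)
print(round(avg_vodka_ml, 2))
```

Note that averaging discrete data can still produce a fraction (an average of 49 diners per day is fine as a statistic), but each underlying observation stays a whole number.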
BI at your fingertips
In 2014 it’s already passé to have your dashboard sitting behind two firewalls and two-factor authentication, full of information of varying importance. What our customers need in today’s fast-paced world is to have the most critical information literally at their fingertips - and not even an iPad dashboard can fulfill that promise with the expected level of convenience.
If you don’t have it already … install the Pebble app into your phone (Android, iOS)
Into that, you will install the Keboola Stats app.
Finally, enter the token generated by our “Pebble Writer.”
… Oh, and it does help to have the Pebble watch.
You can answer these questions in a matter of seconds, putting you ahead of the game and making you a rockstar at your next morning meeting. A glance at your ONE number tells you what to do next - be it nothing, or be it looking in detail at what changed the number.
But that’s only one example. Pretty much anything that fits on the screen and is derived from your data can be delivered there. We now have 33 data sources + an API ready to accept any type of data from our clients - we routinely process everything from social data to POS transactions to support tickets.
If you’re as psyched about this as we are, you can thank Tomas Kacur for making it all happen. Oh, and Martin Karasek for snapping out the Pebble Store icons in record time - something like 45 seconds? :)
If you already have your data in our care, tell us the numbers you want to see and we’re pretty much done. We will agree with you on the frequency of updates depending on the context (no point in frequently updating a number that in its nature changes slowly).
If you’re new to our services, let us know and let’s talk about how to get to your data in the most sensible way. GoodData clients have an advantage because we can connect directly to a report within the platform.

P.S. For the tech savvy, the phone Pebble app (JS) and the app for your watch (vanilla C) are published as open source. You can get them from our GitHub (the backend API is on Apiary.io).
You’re certainly using them, you probably like them, and perhaps they even help you save some money. However, you’ll find the real treasure of third party data sources the moment you interconnect them and find the answers to your business questions.
In this edition of the Beginner’s Guide you’ll find out how to use data from external services and databases to better understand your data. You will also begin to recognize the importance of getting to know your data (actually it’s time to become best friends), and how to ask the right questions to get the right results (or buckle up for one bumpy ride!).
What data sources does Keboola use?
The short answer… lots. At Keboola, we are able to connect to most modern systems. We simply need to find the API and it’s ready, set, go. We like to think of APIs as magical translators that allow programs to exchange data and thus make it more meaningful to you.
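To make the “magical translator” idea concrete, here is a minimal sketch of what consuming an API response looks like. The JSON payload and field names below are hypothetical (a real extractor would also handle authentication, paging, and errors); the point is just that the API hands over structured data that a program can reshape into table rows.

```python
import json

# A hypothetical JSON payload, shaped like something an analytics API
# might return. In practice this would arrive over an HTTP request.
payload = """
{
  "sessions": [
    {"date": "2014-05-01", "source": "google", "visits": 120},
    {"date": "2014-05-01", "source": "direct", "visits": 45}
  ]
}
"""

def to_rows(raw):
    """Translate the API's JSON into flat rows, ready for a table."""
    data = json.loads(raw)
    return [(s["date"], s["source"], s["visits"]) for s in data["sessions"]]

for row in to_rows(payload):
    print(row)
```

Every extractor is, at heart, a variation on this translation step: fetch, parse, and flatten into rows that can join the rest of your data.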
These are the 9 nominees for “most used source in a Keboola project” (in no particular order):
- Google Analytics
- Google Drive
- Microsoft Dynamics
Although these are the most common, the potential for new sources is limitless (and that is why we love our dev team).
If it is readable, we can use any kind of data.
Along with connections to services and applications via API, you can send us your data in almost any format. We are able to read data in everything from CSV to JSON to unstructured text in a notepad.
We can even go beyond text data and bring in pictures (bless the magic of OCR) if you so desire. The most important thing to remember when bringing data in is that it needs to be readable.
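As a small illustration of what “readable” buys you, here is a sketch that normalizes the same (made-up) sales data arriving in two different formats, CSV and JSON, into one common set of rows. The file contents are invented for the example.

```python
import csv
import io
import json

# Hypothetical samples: the same kind of sales data in two formats.
csv_text = "date,amount\n2014-05-01,120\n2014-05-02,95\n"
json_text = '[{"date": "2014-05-03", "amount": 80}]'

rows = []

# CSV: parse with the standard-library reader.
for rec in csv.DictReader(io.StringIO(csv_text)):
    rows.append((rec["date"], int(rec["amount"])))

# JSON: parse and normalize to the same row shape.
for rec in json.loads(json_text):
    rows.append((rec["date"], rec["amount"]))

print(rows)
```

Once everything lands in the same shape, the source format stops mattering and the real work - connecting and reporting - can begin.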
Once we have established the readability of your data, we can start building out your project. Our process is generally top secret but usually involves locking ourselves in the office, relying only on food delivery for survival. We think through the logic of the connections, carry out tests, and write documentation. We are then ready to upload your data and start building reports for your viewing pleasure.
Sounds great, except I have no idea where to start and what to do!
Don’t panic. Data can seem overwhelming but it is all about asking a few simple questions and then doing a few simple things.
Start by asking yourself some questions like:
- What exactly do you want to assess?
- How can data help you with that?
- What indicators do you need to watch?
- What information is missing from the tools you already have?
Next, gather the data.
For external sources, begin by investigating how information is communicated; the magical translators known as APIs are a great place to start. For internal sources, just keep doing what you are doing and update the information you already have. If you haven’t started yet, think of ways to capture that internal information and initiate the process.
By doing some strategic thinking and then organizing your data, you are well on your way to creating the right results. This process also helps to explain why more expensive data services are not necessarily better than those that are free. What matters most is the relevance of your data to answering your business questions.
It’s sort of like buying an S-Class Mercedes for a ride through the rough and rocky Rubicon Trail. Arguably, Mercedes makes one heck of a car, but if you don’t ask where you are going, it might be a rather unpleasant ride for you and the car. That’s why it is important to ask questions first and then collect, collect, collect until you are able to cruise through to the right results.
We have to drive off into the sunset for now, but stay tuned as we build on this idea in our next article featuring an interview with Tomáš from Czech Keboola.
- Together you will identify and gather your KPIs - the parameters you want to monitor. Maybe average spending by café and waiter. Or customer loyalty. Or anything else.
- Together you can come up with the reports you wish to follow: what they should look like and what they should compare.
- You can start looking forward to a return on your investment.
Now it is time for the "IT stuff"
We will create a model with a clear structure for your data in the Keboola Connection tool. It is thanks to this model that the whole system will later run quickly, flexibly, and accurately. Using the model, we will be able to find relationships between the data.
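To give a feel for what those modeled relationships buy you, here is a toy sketch with made-up tables: sales records link to waiters through a shared key, so once the relationship is explicit, questions like “revenue per waiter” become a simple join and aggregation.

```python
# Two toy tables with an explicit relationship: sales reference
# waiters through waiter_id, the shared key the model makes explicit.
waiters = {1: "Olivier", 2: "Mary"}
sales = [
    {"waiter_id": 1, "amount": 12.5},
    {"waiter_id": 2, "amount": 30.0},
    {"waiter_id": 2, "amount": 18.0},
]

# With the relationship modeled, aggregating per waiter is a simple
# join (look up the name) plus a running sum.
totals = {}
for sale in sales:
    name = waiters[sale["waiter_id"]]
    totals[name] = totals.get(name, 0) + sale["amount"]

print(totals)
```

The same idea scales up: the model defines which keys relate which tables, so reports can slice the data along those relationships without anyone re-deriving them by hand.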
But the model wants to eat - it wants to be fed data, which will come mostly from these main sources:
- If you run your own database, we will connect to it remotely and process all the necessary data.
- If your data is scattered in multiple systems or locations, we will tell you exactly how to connect the dots with our interface.
- Do you wish to relate your data from cafés sales with your website traffic from Google Analytics data? Or with population using open data from your city hall in each city and neighbourhood? We can do it for you!
Historical data is not a problem either. (Yes, we’re talking about the 10-year-old Excel sheet with sales data.) All you have to do is keep its structure.
A short wait for the first report
Once we have fed the model with data, we will send the processed data to an application called GoodData, after which you almost immediately gain access to your reports. Rest assured, the first contact will feel a bit like magic.
Once you’ve had your first dose of satisfaction, we guarantee you that you will want more: "I do not want this report and I want that report to take weather into account." Ok. Post your requirements and wait ~~for two months~~ for a couple of days, and then you are looking at your new reports.
Or even better - access our know-how in Keboola Academy to learn how to work the system, and then you will be able to modify the reports yourself. After that, no one will ever be able to tear you away from your data.
A boss with GoodData, who is lounging on a beach half a world away, knows more than any boss present at work without it.
Now, if you wish, you can sit under a beach umbrella in Honolulu with a tablet and check every five minutes just how much money you are making.
You will notice that the people who were served by Olivier never came back to your cafe.
You will see that customers in Vancouver are spending roughly twice as much as customers in Quebec, since you just launched an advertising campaign there.
You will observe that when it rains your sales of pour over coffee rise sharply – unless the manager forgets to stock up on the filters.
You will clearly see how the purchasing behaviour of your customers changes over time, so you will spot new trends early and take full advantage.
As you sip your Mai Tai slowly, you’ll then start to write your first email: "Mary, please order extra thin filters for our coffee machines and also tall glasses for Vancouver. It seems like there's a new fad..."
During my tenure at Keboola, and for some time before that, I’ve helped to design successful BI implementations for numerous companies, big and small.
In my role I have taught others and helped them to achieve the same. Together, we build solutions that amaze me daily with their capability, the value they bring to users, and their potential for the future. We process billions of rows of data, tens of millions of text entries of all kinds, millions of deals, and billions of dollars in business transactions. We perform some serious analytics on all of that, helping to draw out business value for our clients every day. We innovate and help to redefine what it means to do BI. Our own company runs on data.
Yet, I would not call myself a Data Scientist.
I rarely code. I suck at stats. I definitely need to freshen up on my math skills. I avoid fancy terms like OLAP cube and linear regression. I prefer simple language. With my resume, I wouldn’t fit the bill for 80% of the data analyst job postings out there.
I don’t hold a PhD.
For me, Big Data is not a category of its own; it is simply something too big to handle with the tools at hand. So you get a bigger hammer and move on.
I’m a user, in all senses of the word. I’m addicted to data. I look for it everywhere, behind every question and problem. I love great business ideas and using data to make them fly. I love to work with people who think the same way.
How do I pull it off? Sometimes I wonder. For the most part, I believe it’s about the right tools - tools that are conducive to this kind of thinking. I mostly use just two of them: Keboola Connection to bring the data together and put it where and how I need it, and GoodData to extract the meanings and answers to business questions.
Petr Olmer, Director of Expert Services at GoodData, once tweeted that the most underused tool in BI is the human brain, and the most underrated method is asking questions. I believe it, and would add that the term “Data Scientist” ranks among the most over- (and mis-) used.
At Keboola we are trying to change that. Consultants at Keboola are people who understand the business and speak its language. They use their brains and ask a lot of questions.
Both Keboola and GoodData have some brilliant people that you could call serious scientists, data or otherwise. But their talents are being applied to making the tools smarter and more useful for us, the common folks. What they do keeps things simple for us. It allows us to focus on the business objective of the task at hand rather than the “how” of it all. Thanks to them, you don’t need to hire a scientist (or be one) to find the wealth in your data.