GoodData XAE: The BI Game-Changer (2nd part)

For previous part (1/3), continue here


An honest look at your data

Moving forward with our previous example: uploading all of the data sources we use internally (from one side of the pond to the other) into an LDM makes each piece of information easily accessible in GoodData - that's 18 datasets and 4 date dimensions.

On top of this model, we can now build dashboards in which we watch how effective we are, compare months with one another, compare people and different kinds of jobs, and look at costs, profits and so on.

As a result, everything in our dashboards suits our needs exactly. No one dictated to us how the program should work... this freedom is crucial for us. Thanks to it, we can build anything we want in GoodData – the only thing that matters for succeeding and satisfying the customer is our own ability.

What's a little bit tricky is that a dashboard like this can be built in almost anything. For now, let's focus on dashboards from KlipFolio. They are good, but they have one substantial "but" – all the visual components are objects that load information out of rigid, predefined datasets. Someone designed these datasets exactly for the needs of the dashboard, and they cannot be tweaked - you can't take two numbers out of two base tables and watch their quotient over time. A month-to-date of that quotient can be forgotten immediately, and don't even think about situations with many-to-many linkages. The great advantage of these BI products (they call themselves BI, but we know the truth) is that they are attractive and pandering. However, one should not assume at the start that he has bought a diamond when in actuality it cannot do much more than his Excel. (Just ask any woman her thoughts on cubic zirconia and you'll see the same result.)
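For contrast, here is roughly what such a metric looks like in GoodData's MAQL (more on the language below). This is only a sketch - the fact names Revenue and Hours are made up, and the running variant assumes the two sums have been saved as metrics named Revenue Sum and Hours Sum:

  The quotient of two facts that live in two different datasets (slice it by month in the report):

  SELECT SUM(Revenue) / SUM(Hours)

  A running, month-to-date flavour of the same quotient:

  SELECT RUNSUM(Revenue Sum) / RUNSUM(Hours Sum)

The logical data model resolves the linkages between the two datasets behind the scenes - exactly what rigid, predefined datasets cannot do.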

Why is the world flooded with products that play in a little playground whose walls are plastered with cool visuals? I don't know. What I do know is that people are riding the "Cool, BigData analytics!" wave, hungry for anything that looks even a little like a report. Themed analytics can be done in a few days – transforming transactions and computing "customer lifetime value" is easy, until everyone starts telling you their individual demands.

No one in the world except GoodData can manage analytics projects that are 100% free at their foundation (the data model) and let people do anything they want in those projects without having to be "low-level" data analysts and/or programmers. Bang!

So how does GoodData manage to do it?

Everyone is used to adding up an "A" column by entering the formula "=SUM(A:A)". In GoodData, you add up the "A" column by entering the formula "SELECT SUM(A)". The language used to write all these formulas in GoodData is called MAQL – Multi-dimensional Analytical Query Language. It sounds terrifying, but everyone has been able to manage it – even Pavel Hacker has a Report Master diploma from the Keboola Academy!

If you look back at the data model from our internal project, you might say that you want the average number of hours from one side of the data model, but filtered by the type of operation, grouped by project description and client name, and showing only the operations that took place on this weekend's afternoons. The metrics will look something like "SELECT AVG(hours_entries) WHERE name(task) = cleaning". The multi-dimensionality of this language lies in the fact that you don't have to deal with questions such as: what dimension is the name of the task in? What relation does it hold to the number of worked hours? And furthermore – what relation does it hold to the name of the client? GoodData (or rather the relations in the logical model that we design for our client) solves all of that for you.
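Spelled out in the same loose notation, the whole request above might look something like this (a sketch only; task, date and the day-of-week attribute are hypothetical names from our LDM):

  SELECT AVG(hours_entries) WHERE name(task) = cleaning AND day_of_week(date) IN (Sat, Sun)

The grouping by project description and client name doesn't live in the metric at all - you simply drag those attributes into the report, and GoodData works out how they relate to the worked hours.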

So, getting straight to the point: if I design a (denormalized) Excel table in which you find everything comfortably put together, no one reading this will have trouble computing over it. If we give you data divided into dimensions (and the dimensions will often come from other data sources – such as the outputs from our Czech and Canadian accounting systems), it becomes much more complicated to process (most likely you will start writing SQL like a mad person). Since the world cannot be described in one table (or maybe it can – as key-value pairs... but you cannot do much work with that), the view across many dimensions is essential. Without it, you are just doing a little home arithmetic ☺.

Do I still have your attention? Now is almost the time to say "wow", because if you like to dig around in data, you are probably over the moon about the situation described above ☺.

And on to the Finale...

Creating a query language is the most complicated task to be solved in BI. GoodData, on the other hand, uses a simple yet effective language to mitigate these "complications" and express the questions you have about your data. Part 3 of our series will dive deeper into this language, known as MAQL, and its ability to easily surface the insights hidden in your data.


For final part (3/3), continue here

GoodData XAE: The BI Game-Changer (1st part)

Putting your data into the right context

At the beginning of last summer, GoodData launched its new analytic engine AQE (Algebraic Query Engine). Its official product name is GoodData XAE. However, since I believe that XAE is Chinese for "underfed chicken", I will stick with AQE ☺. From the first moment I saw it, I have considered it the concept with the biggest added value. When Michael showed me AQE, I immediately fell in love.

However, before we can truly reveal AQE and the benefits that can be derived from it, we need to begin with an understanding of its position in the market - starting from the foundation on which GoodData's platform rests. In this three-part series we'll cover AQE's impact on contextual data, delivering meaningful insights, and finally digging for those hidden gems.

First, a bit more comprehensive introduction...

Any system with ambitions to visualize data needs some kind of mathematical engine. For instance, if I choose sold items with the names of salespeople as my input, and my goal is to find the median of the salespersons' turnover, somewhere in the background a total summation of the sold items per month (and per salesperson) must take place. Only after getting that result can we compute the requested median. Notice the graphic below - the left table is the crude input, while the right table is derived in the course of the process - most of the time we don't even realize that these intermediate outputs keep arising. From the right table, we can quickly calculate the best salesperson of the month, the average salesperson, the median and so on…
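In GoodData's MAQL, which we'll meet properly later in this series, that two-step computation collapses into a single nested metric. A sketch, assuming a fact named Sold Items and attributes named Salesperson and Month:

  The inner SELECT builds the derived table (sums per salesperson and month); the outer MEDIAN is then computed over it:

  SELECT MEDIAN((SELECT SUM(Sold Items) BY Salesperson, Month))

The right-hand table from the graphic is exactly what the inner SELECT ... BY produces.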

And how does this stack up against the competition?

If we don't have a robust analytical backend, we don't have the freedom to do whatever we want. We have to tie our users to some pre-built "vertical analysis" (churn analysis of an e-shop's customers, RFM segmentation, subscription cohorts, etc…). There are many ways to fiddle with data: besides GoodData, you can find tools such as Birst, Domo, Klipfolio, RJMetrics, Jaspersoft, Pentaho and many, many others. They look really cool, and I have worked with some of them before! A lone data analyst can also reach for R, SPSS, RapidMiner, Weka and other tools. However, these are not BI tools.

Most of the aforementioned BI tools do not have a sophisticated mathematical engine. They will simply let you count the data, calculate the frequency of components, and find the maximum, minimum and mean. The promo video of RJMetrics is a great example.

Can I just use a calculator instead?

Systems such as Domo.com or KlipFolio.com solve the problem of the missing mathematical engine in a bit of a bluffing way. They offer their users several mathematical functions – just as Excel does. The difference is that these work over separate tables, not over the whole data model. You might think that it does not matter but, quite the contrary – this is the pillar of everything connected to data analytics. I will try to explain why...

The border of our sandbox lies in the application of the law of conservation of "business energy":

“If we don’t manage to earn our customer more money than our services (and GoodData license) cost him, he won’t collaborate with us.“

Say, for example, that we take the listing of invoices from SAP and draw a growth graph from it - our customer will throw us out of the office. We need a little bit more. We need to put each data dimension into context (dimension = a thematic data package, usually represented by a data table). The dimensions do not have to come with any strictly defined linkages; such a table in our analytics project is called a dataset.

But how is it all connected?

The moment we give each dimension its linkages (parents, children… siblings?), we get a logical data model. A logical data model describes the "business" linkages, and most of the time it is not identical to the technical model in which a given system saves its data. For example, if Mironet has its own e-shop, the e-shop's database is optimized for the needs of the e-shop - not for financial, sales and/or subscription analytics. The more complicated the environment whose data we analyze, the fewer similarities the technical and analytical data models share. Handling the low structural similarity between the source data and the data needed for analytics is what divides the other companies from GoodData.

A good example of this is our internal project. I chose the internal project because it contains only the logical model we need for ourselves. Therefore, it is not artificially extended just because we know "the customer will pay for it anyway".

We upload different kinds of tables into GoodData. These tables are connected through linkages. The linkages define the logical model; the logical model then defines what we can do with the data. Our internal project serves to measure our own activity, and it connects data from the Czech accounting system (Pohoda), the Canadian accounting system (QuickBooks), the cloud application Paymo.biz and some Google Drive documents. In total, our internal project has 18 datasets and 4 date dimensions.

The first image (below) is a general model, select the arrow in the left corner to see what a more detailed model looks like.

In the detailed view (2 of 2), note that the name of the client is marked in red, the name of our analyst in black, and the worked hours in blue. What I want to show here is that each individual piece of information spreads widely throughout the project. Thanks to the linkages, GoodData knows what makes sense together.

UP NEXT

Using business-driven thinking to force your data to comply with your business model (rather than the other way around) will allow you to report on meaningful and actionable insights. Part 2 of this series on AQE (...or, more formally, XAE) will uncover the translation of the Logical Data Model into the GoodData environment.


For next part (2/3), continue here

Why is GoodData special?

Today's world is oversaturated with data. Telling stories through data is becoming so sexy that many people are building their careers on it. A few semi-experts in the Czech Republic have even changed their colours and started talking about BigData (in the worst-case scenario, they also hold conferences on the topic). However, I'll save this topic for a future blog post, in which I'll ground their Hadoop enthusiasm a bit for you.

Data...

People want to know more about the environment in which they operate. It helps them make better decisions, which usually leads to a competitive advantage. Generally, good decision making needs a combination of three things: proper input parameters (information / data), common sense / experience, and a modicum of luck. However, an idiot will still be stupid, and although luck can occasionally be bought in the Czech Republic, there's the threat of being arrested for bribery. That's why information remains the most influenceable component of success. In my playground, correct information serves as the answer to the most penetrating questions you can think of.

I assume that each of you knows how much money you have in your personal bank account. Most of us also know how much money we spend per month. Fewer of you know exactly what it went towards. An even smaller group knows the structure of all the pleasant cups of coffee, ice cream, wine, lunches and so on (we call it the long tail). I would bet that almost no one knows their personal annual trend in the cost structure of that long tail. You'll probably argue that you don't care. But if you're a company that wants to succeed, you can't do without such information. As for one's own personal life, the biggest nutcase, as I see it, is Stephen Wolfram, who has been measuring almost everything about himself since 1990. He has written about almost everything except the lint from his bellybutton (unlike Graham Barker :)

Because there's no executive summary of your accounting, CRM, Google Analytics or social networks on TV after the evening news, you're forced to build the various reports and dashboards yourself.

I'll try to summarize the tools which I know are available; but in the end I'll tell you that it's all just a toy gun, and whoever wants a proper data gun must reach for GoodData. To be fair, I'll do my best to argue the point a bit :)

Excel

Today, Excel is on every corner. It's a good helper, but quite a lot of people have a strange tendency to make Excel Engineers of themselves, which is the most dangerous expertise you can come across. The Excel Engineer's skills often end with a pivot table and the SUMIF() formula. At the same time, the company's data processing becomes tied to him and, perhaps unwittingly, he becomes a brake on progress. The biggest risks of reporting in Excel, in my opinion, are as follows:

  1. The primary data from which reports are made are stored in Excel; someone imported them into Excel at some point - with a poor/expensive path to updating them
  2. Excel sheets tend to travel around corporate Outlooks, leading to different versions. It often comes in handy to tweak YTD% a little, or it may easily happen that another department has the same Excel but with different numbers - this undermines confidence in the reports and easily allows for distortions of reality
  3. Complicated reports must be created by the reporting department (only they know how to update the data - see point 1), where Excel experts provide answers to business questions they do not always understand. Therefore, it often happens that ad-hoc responses to your ad-hoc hypotheses take days to form (submitter burnout occurs)
  4. The combination of manual operations and macros made by the guy who doesn't work here anymore introduces errors into Excel, thanks to which the cosmoverse then collapses!

It's probably obvious that Excel reporting should end at the level of a sole trader. Nothing reliable can be created with it efficiently. You can be sure that the Excels on the Z: drive (a network drive, of course!) contain errors, are not up-to-date and were made by people who were assigned to the task by someone else, so they knew damn all about the nature of the data they fed into the VLOOKUP! Excel Engineers usually don't have data discovery in their genes, and even if they came across something interesting, they probably wouldn't notice. You know best what the correct information is at any given moment (and Excel at the level of VBS macros and dirty hacks isn't really what you should be mastering in 2013)!

Visualization

Today's market is oversaturated with tools that aim to help you visualize some kind of business information. Think of business information as the number of orders today, the net margin for the last hour, the average profit per user, etc. In the majority of cases it works like this: you calculate the metric on your side and send it automatically, via an interface, to a service that presents it. Examples of such services are Mixpanel, KissMetrics, StatHat, GeckoBoard and even KlipFolio. The advantage over Excel lies especially in the fact that the reports and dashboards can be easily automated and then shared. Information sharing is quite underrated! An example of such information could be the number of data transformations executed at our staging layer, in minute granularity:

You can build dashboards from these reports, and for a while you will feel good. The problem comes when you find out that any extension of such a dashboard requires intervention from your programmers, and the more complex your questions are, the more complicated the intervention. If you operate in B2C and have transactional data, you can be sure that the clinical death of this form of reporting will be, for example, a query on the number of customers over time who spent at least 20% more than the average order of the previous quarter, and who at the same time bought an ABC product this month for the first time. If your programmers, by some luck, manage to implement it, they'll shoot their heads off once you add that you want daily numbers for the TOP 10 customers from each city who meet the previous rule. If you have more than a handful of transactions, this will mean reworking your existing DB, which will eventually lead to a 100% collapse. Even if you try to keep it alive at all costs, you can be sure that you won't slay the competition with that zero flexibility - you won't even be able to gently take the analytical helm, because the market will pivot around you.

It's possible you don't have similar questions about your business, and it doesn't bother you. The cruel truth is that your competition is asking them right now, and you will have to respond somehow...

Pseudo BI

Neither Excel nor visualization tools usually have any sophisticated back end, and the same applies to services like Domo or Jolicharts. They look super sexy at first glance, but inside is a disguised set of visualization tools, sometimes coated with a few statistical features that you mostly won't use. The common denominator is the absence of a language with which you could step outside the predefined dashboards and bend such services to your own benefit.

Their only advantage is that they can be implemented quickly. Unfortunately, that's it, and after a short intoxication period, sobriety sets in. If you happen to be even a little bit demanding, you haven't got a chance at a very happy life.

Low Level Approach

There are services that allow you to upload data and run queries over it. As I see it, the hottest one nowadays is Google BigQuery. For us at Keboola, it's a tremendous help with data transformation, denormalization and JOINs of huge tables. It can serve you well - if writing a page of raw SQL by hand just to get back a single table of results seems like a good idea to you.

It's evident that if you don't make a living as an SQL consultant and don't have any ambition to create your own analytical service, you’d better leave this approach to nerds (like us!) and attend to your own business :)

Cloud BI

If you google cloud BI, Google will return names like Birst, GoodData, Indicee, Jaspersoft, Microstrategy, Pentaho, etc. (if you have Zoho Reports among the results, the universe has gone crazy, because that should have stayed in Asia :).

Many trends make it obvious that the Cloud moves today's world. In the Czech Republic, the most common concern about the concept is worry about the data, plus the feeling that "my IT can do it better than the vendor's". If you share these concerns, you should know that when any trouble arises in the Cloud, the best people available on this planet start working on it immediately so that everything runs like clockwork again. Dave Girouard (coincidentally also a board member of GoodData) summed it up nicely in this article.

Except for Microstrategy, which probably discovered the Cloud this morning, the above-mentioned brands are relatively established in the Cloud. However, different surprises hide under the lid. Pentaho requires highly technical knowledge to make the most of it. Jaspersoft is Excel on the web that, in short, failed. Indicee would like to play in the major leagues, but I know at least one large customer from Vancouver who, after trying to implement their solution for a year, moved to GoodData. When I tried Birst, it was all in Flash, and despite my enormous effort I really didn't understand it :(

As I said in the beginning, everything except GoodData sucks. There are several reasons for this:

  1. GoodData has a powerful language for the definition of metrics. With this language, anyone can build reports, no matter how complicated. The fact that these reports are created not only by clicking is more than essential - it gives you the flexibility you'll need in the fight for first place against your competition. If GoodData satisfies Tomáš Čupr (ex-Slevomat, DámeJídlo.cz), you can be sure it will suit you as well. Even constructs that look complex at first glance can be learned quickly at the Keboola Academy (for a taste, see the sketch right after this list).
  2. GoodData, unlike its competitors, has a fundamentally well-designed API that enables companies such as Keboola to bend the whole analytical platform so that it plays first violin in your environment. Seamless integration with other information systems, white-labeling, single sign-on and a framework for data extraction and transformation mean there are no compromises during implementation.
  3. GoodData isn't just reports in a web browser but an entire set of abstractly separated functional layers (from a physical model representing the data up to a logical model representing the business relationships), thanks to which an implementation doesn't require things like a feasibility study or a technical specification. In comparison with the competition, GoodData can be implemented with tremendous speed (no months-long projects).
  4. GoodData has a phantom lab in Brno where R&D takes place, and its output is innovations which I'm not sure I can make public today. Nevertheless, I can honestly say that the others will soon shit their pants because of them. I'll definitely add details here in time!
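To make point 1 tangible: the "clinical death" query from the Visualization section above (customers who spent at least 20% more than the average order of the previous quarter) can at least be sketched in MAQL. All fact, attribute and metric names here are hypothetical, and this is a rough outline rather than a copy-paste solution:

  The average order of the previous quarter, saved as a metric named Avg Order Prev Q:

  SELECT AVG(Order Amount) WHERE Quarter/Year (Date) = THIS - 1

  The number of customers whose spending beats that average by 20% or more:

  SELECT COUNT(Customer) WHERE (SELECT SUM(Order Amount) BY Customer) >= 1.2 * Avg Order Prev Q

No programmers, no reworking of your DB - just two metrics of the kind an analyst can write after a course at the Keboola Academy.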

All in all, the quality of GoodData shows, among other things, in its many connections, such as Zendesk.com (the biggest customer-support service in the world). Such flexibility is, from my point of view, absolutely essential for future success. Any one of you can rent high-performance servers, design a super-cool UI or program specific statistical functions (or perhaps borrow them from Google BigQuery), but in the foreseeable future no one else will come up with a comprehensive concept that makes sense and scales from small dashboards (we have a client who uses GoodData to look at data from Facebook Insights) all the way to gigantic projects with a six-digit $ budget for just the first implementation phase.

GoodData Rocks! 

Howgh!

 


Keboola Stats on Pebble

BI at your fingertips


In 2014 it's already passé to have your dashboard behind two firewalls and two-factor authorization, full of information with various levels of importance. What our customers need in today's fast world is to literally have the most critical information at their fingertips - and not even an iPad dashboard can fulfill this promise with the expected level of convenience.

Why?

We believe that every one of us, regardless of job or interest, has their ONE number. The one number that captures the essence of what you really do. For instance, as a salesperson you may be very easily motivated if you can see what you will make in commissions and how you compare to the rest of the team. The CEO needs one number from the CFO… this immediate feedback loop is crucial for understanding and connecting actions and outcomes. Alternatively, if you're a blogger, the number of followers and/or comments is what gets you out of bed and to the keyboard each day.

In the time it takes you to reach for your iPad, another event in your business has taken place.  … And now the only thing you need to do is look at your watch.


There's a pretty short distance from your fingertips to your wrist, making the smartwatch an obvious choice. We chose the pioneer, Pebble, as our jumping-off point into the "wearables" movement.

Using Pebble, Keboola Stats connects with your data to deliver business insights on the go. With updates as often as once every 10 seconds, we're serious when we say "real-time updates".

How does it work?

Simple. Just three easy steps:
  1. If you don't have it already… install the Pebble app on your phone (Android, iOS)

  2. Inside it, install the Keboola Stats app.

  3. Finally, enter the token generated by our “Pebble Writer.”

… Oh, and it does help to have the Pebble watch.


Ok, show me what I can do ...

Keboola Stats can show you a dashboard with your top 2 numbers and their % changes in real time. So, what if you wanted to see your actual revenue from midnight until now, and the % change compared to the same point in time yesterday? Or how about today's order count and its % change since the beginning of the week?
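Behind each such number sits an ordinary GoodData metric that we read on a schedule and push to the watch. A sketch of the revenue pair, assuming a fact named Amount and a date dimension named Date (the "same point in time yesterday" cut-off is applied when we pull the data; the metrics themselves compare whole days):

  Revenue since midnight:

  SELECT SUM(Amount) WHERE Date (Date) = THIS

  The same number one day back (the % change is then just the ratio of the two, minus 1):

  SELECT SUM(Amount) WHERE Date (Date) = THIS - 1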

You can answer these questions in a matter of seconds, putting you ahead of the game and making you a rockstar in your next morning meeting. A glance at your ONE number tells you what you need to do next - be it nothing, or be it looking in detail at what changed the number.

But that's only one example. Pretty much anything that fits on the screen and can be derived from your data can be delivered there. We now have 33 data sources plus an API ready to accept any type of data from our clients - we routinely process everything from social data to POS transactions to support tickets.

So what’s next?

We have the full stack on the back end - data collection, analysis and an API. Now we're ready to roll it out to LG, Samsung and Motorola :)

If you're as psyched about this as we are, you can thank Tomas Kacur for making it all happen. Oh, and Martin Karasek for knocking out the Pebble Store icons in record time - something like 45 seconds? :)

If you already have your data in our care, tell us the numbers you want to see and we're pretty much done. We will agree with you on the frequency of updates depending on the context (there is no point in frequently updating a number that by its nature changes slowly).

If you’re new to our services, let us know and let’s talk about how to get to your data in the most sensible way. GoodData clients have an advantage because we can connect directly to a report within the platform.

P.S. For the tech savvy, the phone Pebble app (JS) and the watch app (vanilla C) are published as open source. You can get them from our GitHub (the backend API is on Apiary.io).

The Beginner’s Guide To Keboola III: We ♥ Your Third Party Data Sources

You're certainly using them, you probably like them, and perhaps they even help you save some money. However, you'll find the real treasure of third-party data sources the moment you interconnect them and find the answers to your business questions.

In this edition of the Beginner's Guide, you'll find out how to use data from external services and databases to better understand your business. You will also begin to recognize the importance of getting to know your data (actually, it's time to become best friends), and how to ask the right questions to get the right results (or buckle up for one bumpy ride!).

What data sources does Keboola use?

The short answer… lots. At Keboola, we are able to connect to most modern systems. We simply need to find the API and it's ready, set, go. We like to think of APIs as magical translators that allow programs to exchange data and thus make it more meaningful to you.

These are the 9 nominees for “most used source in a Keboola project” (in no particular order):

Although these are the most common, the potential for new sources is limitless (and that is why we love our dev team).

If it is readable, we can use any kind of data.

Along with connecting to services and applications via API, you can send us your data in almost any format. We are able to read everything from CSV to JSON to unstructured text in a notepad.

We can even go beyond text data and bring in pictures (bless the magic of OCR) if you so desire. The most important thing to remember when bringing data in is that it needs to be readable.

Once we have established the readability of your data, we can start building out your project. Our process is generally top secret but usually involves locking ourselves in the office, relying on food delivery trucks alone for survival. We think through the logic of the connections, carry out tests, and write documentation. We are then ready to upload your data and start building reports for your viewing pleasure.

Sounds great, except I have no idea where to start and what to do!

Don’t panic. Data can seem overwhelming but it is all about asking a few simple questions and then doing a few simple things.

Start by asking yourself some questions like:

  • What exactly do you want to assess?
  • How can data help you with that?
  • What indicators do you need to watch?
  • What information is missing from the tools you already have?

Next, gather the data.

For external sources, begin investigating how information is communicated; the magical translators known as APIs are a great place to start. For internal sources, just keep doing what you are doing and update the information you already have. If you haven't started yet, think of ways to capture that internal information and initiate the process.

By doing some strategic thinking and then organizing your data, you are well on your way to getting the right results. This process also helps explain why more expensive data services are not necessarily better than free ones. What matters most is the relevance of your data to answering your business questions.

It's sort of like buying an S-Class Mercedes for a ride through the rough and rocky Rubicon Trail. Arguably, Mercedes makes one heck of a car, but if you don't ask where you are going, it might be a rather unpleasant ride for you and the car. That's why it is important to ask questions first and then collect, collect, collect until you are able to cruise through to the right results.

We have to drive off into the sunset for now, but stay tuned as we build on this idea in our next article, featuring an interview with Tomáš from Czech Keboola.

The Beginner’s Guide To Keboola II: How It Works

You already know that Keboola can process your data in such a way that it makes sense and is of value to you. This time we'll go a little deeper and show you how it is done in practice.

Let’s say you're the owner of a chain of coffee shops. You wish to expand your business and at the same time you would like to figure out where you are losing money. You have lots of data from your POS system and of course your accounting software.

This is where Keboola steps in.
  • Together we will identify and gather your KPIs - the parameters you want to monitor. Maybe the average spending by café and waiter (a sample metric follows below). Or customer loyalty. Or anything else.
  • Together we will come up with the reports you wish to follow: how they should look and what they should compare.
  • You can start looking forward to a return on your investment.
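To demystify that first bullet a bit: an "average spending" KPI later becomes a one-line GoodData metric. A sketch with made-up names Amount and Receipt - slicing it by café or by waiter is then just a matter of adding those attributes to the report:

  SELECT SUM(Amount) / COUNT(Receipt)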

Now it is time for the "IT stuff"

In the Keboola Connection tool, we will create a model with a clear structure for your data. It is thanks to this model that the whole system will later run quickly, flexibly and accurately. Using the model, we will be able to find relationships between the data.

But the model wants to eat – it wants to be fed data, which will come mostly from these four main sources:

  1. If you run your own database, we will connect to it remotely and process all the necessary data.
  2. If your data is scattered across multiple systems or locations, we will tell you exactly how to connect the dots with our interface.
  3. Do you wish to relate your café sales data to your website traffic from Google Analytics? Or to the population of each city and neighbourhood, using open data from your city hall? We can do that for you!
  4. Historical data is not a problem either. (Yeah, we're talking about the 10-year-old Excel sheet with sales data.) All you have to do is keep its structure.

A short wait for the first report

Once we have fed the model with data, we send the processed data into an application called GoodData, after which you almost immediately gain access to your reports. Rest assured that the first contact will feel a bit like magic.

Once you've had your first dose of satisfaction, we guarantee that you will want more: "I do not want this report, and I want that report to take the weather into account." Ok. Post your requirements, wait a couple of days (not the two months you may be used to), and then you are looking at your new reports.

Or even better - access our know-how in the Keboola Academy to learn how to work the system, and then you will be able to modify the reports yourself. After that, no one will ever be able to tear you away from your data.

A boss with GoodData, who is lounging on a beach half a world away, knows more than any boss present at work without it.

Now, if you wish you can sit under a beach umbrella in Honolulu with a tablet and every five minutes you can check just how much money you are making. 

You will notice that the people who were served by Olivier never came back to your cafe.

You will see that customers in Vancouver are spending roughly twice as much as customers in Quebec, since you just launched an advertising campaign there.

You will observe that when it rains your sales of pour over coffee rise sharply – unless the manager forgets to stock up on the filters.

You will clearly see how the purchasing behaviour of your customers changes over time, so you can spot new trends early and take full advantage of them.

As you sip your Mai Tai slowly, you’ll then start to write your first email: "Mary, please order extra thin filters for our coffee machines and also tall glasses for Vancouver. It seems like there's a new fad..."

BI Dashboard Crisis

People were getting lost in data - so they created tools to help them. Since 1958, when Hans Peter Luhn coined the term "Business Intelligence", until the end of the 80s, the whole industry lived by terms such as data warehouses, OLAP cubes, etc. In 1989, Howard Dresner defined BI as a "set of concepts and methods to improve business decision making by using fact-based support systems".

Over the last half century, BI has kept progressing up to today, when it finds itself in a bit of a crisis.

The Dashboard Crisis

We are overwhelmed by data - no longer raw data, but rather the categorized, mathematically processed data represented in what we call "reports".

Imagine that you have a large amount of data. You know that there is a lot of very interesting information in it. So you take tools that pull all that data into one place, clean it, polish it and present it back to you - and you start looking at it (that's what we do with Keboola & GoodData).

Over time, though, one can easily experience the following side effects:

  1. resignation / the "juicer syndrome". If you use the system passively, you see the same information in the data day after day. In the first few weeks, you drill into the data and look at it from all angles. As time goes on, your focus falls away while you continually ingest more and more data you don't need to see again (Avast Antivirus now has more than 200M users; they'll still have more than 200M tomorrow, and no one needs to be reminded of that daily). If you bought a new juicer, you probably drank nothing but fresh juice for a week or two, and since then the appliance has been collecting dust somewhere. Something very similar can easily happen in BI.
  2. drowning in data. If you have a good tool that allows you to drill into your data and you use it, you generate one report after another as you find more and more interesting answers. At some point you'll have so many reports that you get lost.

Once you have hundreds of reports, all sorting, tagging or naming conventions stop working. You'll reach the point where no one can find what they need. Instead of looking for existing reports, people start building the same ones again and again. Your sales director knows that there was a report "Margin estimate for the next 4 weeks based on sales managers' estimates" somewhere, but it is harder to find it than to build it again (which, in the case of GoodData, speaks volumes about its ease of use).

What are the attempted solutions?

  • Use of natural language - Microsoft is trying, in its "Power BI", to understand queries asked in much the same way we ask a search engine. For that, natural language needs to be somehow connected to the semantic model sitting over the data. It looks pretty (see the Power BI link), but my colleague Odin nailed it when he commented after reading one such article:

“I read it and IMHO it's a bit of BS, because articles like that have been showing up regularly since the 50s - saying that the use of natural language is ‘almost here’. The best generic tool for interactive communication with a computer (asking the computer for something) is so far SQL, which was supposed to be so simple that everyone could write a query as easily as a sentence. Time has shown that reality (and therefore also natural language) is so idiotically complex that any language describing it must also be complex, and you need to study for 5 years to master it (the same as a natural language).”

  • Use of a visual interface between the system and a human - you can see that nicely in the example of BIRST. It's a beautifully executed marketing video, but once the data model (a.k.a. the relationships between information) gets sufficiently complex, the interface stops working - either it doesn't understand what we want from it, or controlling it gets so complicated that its advantages are lost.

What are we doing about it?

It is important to take a bit of everything. It will remain critical that everyone has access to the information they feel they need (to validate a hypothesis, support their decisions, etc.). Apart from that, the machines need to help a bit with sifting through the data - so you don't have to generate hundreds of reports trying to find the golden nugget.

At Keboola, we've been working on a system that attempts to solve exactly that since the summer of 2013. Today it is a practically complete set of functions that can recognize the meaning of data (time, ID, number, attribute - we call that piece the "data profiler") and the relationships between data (for example, it can figure out how to connect Google Analytics with CRM data), and afterwards run tests to identify "interesting moments". For example, it can discover seasonality in a particular segment of customers and point to it, without an analyst ever having to come up with the idea to test for it. Our system "guesses" where the data relates to a specific customer and, if it finds something interesting, points it out. Ideally, it creates a report in GoodData by itself, filtered to the given situation.

As an example, for "online transaction" data types we have a set of tests that look for those interesting moments. One of these tests (working title "Wrong Order Test") creates histograms of all combinations of facts (typically monetary values) and attributes (products / locations / months / user types, etc.). Over those, it tests whether the counts of IDs (such as orders) correlate with the values - if some attribute seems outside of "normal" in a particular situation, that's reason enough to bring it up with the business user.

This picture shows how, for a specific time period and product (or user group), the system identified an unexpected drop in profit for a particular payment method - "interesting behaviour". Unless you somehow get the idea to test for precisely this situation and report setting, you have practically NO CHANCE of discovering it. On top of that, the same anomaly may not present itself a week later, so you need continuous detection.

Our goal is to periodically test the various data types sitting in Keboola and inform their owners of these interesting facts in the form of an automated dashboard within their GoodData projects. The last thing we need to do is define how the tests are configured, as the true power lies in the interaction of various tests over the same data. Everything else - the data profiler, the tests themselves, the supporting R functions, the API, the infrastructure and so on - is ready to go.

This way, Keboola will not only help you use data to find answers to your business questions, but also phrase new questions based on the gems hidden in your data.

GoodData Open Analytics Platform - a Category of One

(originally published as a guest post on the GoodData blog)

In the world of big data and analytics, what is the definition of a platform? What belongs in the category and what doesn't? If Tableau is on the list, why not Excel? If Excel, why not Numbers or GoogleSheets? (Hey, that one's even cloud based!) The whole thing is somewhat silly to me. It is trying to compare the incomparable.

Over the years of Keboola's existence and focus on business intelligence, we've been closely monitoring the tools available. We are an independent company, and while we partner with GoodData, our ultimate focus is to do what's best for our customers. There are many tools out there. Some mediocre, some brilliant at what they do. It never ceases to amaze me how solutions built on Cognos, Microstrategy or Business Objects can cost so much while so little value seems to actually get delivered. Similarly, how Domo took a simple dashboarding tool and, with some serious marketing dollars, made it appear almost like a BI product. Conversely, looking at a product like Tableau, its visualizations are unparalleled. And yet, simply put, nothing comes close to fulfilling our vision of a BI platform as well as GoodData does.

If you disagree, start asking questions. Which BI tools have a robust API that can connect to and push data from any data source? Do they allow you to filter data based on the user who is looking at it? Can you automatically build scripts that generate reports relevant to the current situation of your business? Does a BI tool allow you to analyze hundreds of millions of rows of data in seconds? Does it have a front-end interface that anyone who has come near a medium-complexity spreadsheet and knows how to drag and drop can use? Does the platform allow you to build a product that you deploy to hundreds of customers at the touch of a button? And which tool allows you to do ALL of these things? Right.

GoodData is more than a tool, it is a true open platform. For some this comes as news, for us at Keboola, it has always been that way. We have always treated GoodData as a platform.

A true platform gives you tools and space at the same time. The tools allow you to do things, and the space lets you imagine and create new ways of doing them. Your imagination, not the tool, is the limit. Using GoodData itself, we built a training tool to teach people how to use GoodData, called Keboola Academy. We built AI that modifies not only the data in the reports but also the dashboard layout, to pinpoint what is important. We integrated so completely with GoodData that deploying dashboards and analytics on top of our own business data warehouse product is seamless and largely automatic. We built whole data products, deeply embedded into our customers' interfaces, all using an open analytics platform called GoodData.

Keboola is about helping companies make more money using data. Whether it is for internal reporting and analytics or to create new revenue streams by monetizing data-as-a-product, GoodData has given us the freedom to build amazing things and continuously grow our business (so far 200% or more year over year), and that is why I consider it the only true BI platform on the market today. "BI Platforms" is a category of one.

Why I'm not a Data Scientist

During my tenure at Keboola, and for some time before that, I’ve helped to design successful BI implementations for numerous companies, big and small.

In my role I have taught others and helped them achieve the same. Together, we build solutions that amaze me daily with their capability, the value they bring to their users, and their potential for the future. We process billions of rows of data, tens of millions of text entries of all kinds, millions of deals and billions of dollars in business transactions. We perform some serious analytics over all of that, helping to draw out business value for our clients every day. We innovate and help redefine what it means to do BI. Our own company runs on data.

Yet, I would not call myself a Data Scientist.

I rarely code. I suck at stats. I definitely need to freshen up my math skills. I avoid fancy terms like OLAP cube and Linear Regression. I prefer simple language. With my resume, I wouldn't fit the bill for 80% of the data analyst job postings out there.

I don’t hold a PhD.

For me, Big Data is not a category of its own. It is something too big to handle using the tools at hand. So you get a bigger hammer and move on.

I’m a user, in all senses of the word. I’m addicted to data. I look for it everywhere, behind every question and problem. I love great business ideas and using data to make them fly. I love to work with people who think the same way.

How do I pull it off? Sometimes I wonder. For the most part, I believe it's about the right tools. Tools that are conducive to this kind of thinking. I mostly use just two of them - Keboola Connection to bring the data together and put it where and how I need it, and GoodData to extract the meaning and the answers to business questions.

Petr Olmer, Director of Expert Services at GoodData, once tweeted that the most underused tool in BI is the human brain, and the most underrated method is asking questions. I believe it, and would add that the term "Data Scientist" ranks up there with the most over- (and mis-) used.

At Keboola we are trying to change that. Consultants at Keboola are people who understand the business and speak its language. They use their brains and ask a lot of questions.

Both Keboola and GoodData have some brilliant people that you could call serious scientists, data or otherwise. But their talents are being applied to making the tools smarter and more useful for us, the common folks. What they do keeps things simple for us. It allows us to focus on the business objective of the task at hand rather than the "how" of it all. Thanks to them, you don't need to hire a scientist (or be one) to find the wealth in your data.

But you might want to talk to - or become - one of us.

The Beginner’s Guide To Keboola

"The whole thing is a bit complicated…" started Vojta, one of Keboola’s consultants, over an English breakfast in the coffee shop with the best coffee in Prague. He was right. It was complicated. But a few hours (and a pint of coffee) I got pretty good idea what was going on. Here, I will try to relay it to you.

Intro: Companies today often have enough data to get completely lost in it, and putting it into context to extract any useful meaning can feel unfathomable. Even when they manage it, high costs in time and money are involved.

Finding the gold in the data

Keboola does something called data ETL (Extract, Transform, Load). It sounds (just like many other fancy terms from this field) more complicated than it is.

Keboola helps you:

  1. Identify, locate and pull together all the data relevant to your business, from both your own and third-party sources - anything from accounting and ERP systems to related open-data initiatives of the government to comments on your Facebook pages. This is the Extract stage.
  2. Manage the whole load and organize it into a structure that one can meaningfully work with. That's Transform.
  3. Push the data into the system or application selected for final consumption - Load.

The toolset that Keboola uses to perform (amongst other things) the ETL tasks is their own Keboola Connection.

The platform that Keboola uses for the analytics and producing all of those wondrous charts and dashboards is GoodData.

So what is it all good for?

You’ve got data. Lots of it.

To give it meaning, the data needs to be pre-processed, the pieces put in order and into the right context, so that GoodData can give you the results you need. That is what Keboola is for:

  • Helping you find meaning in your data.
  • Continuously processing your data using Keboola Connection.
  • Setting up GoodData so you can find the answers you need. Answers to questions like "how much revenue did we get from customers brought to us by the expensive marketing campaign from last fall?" or "what impact does weather have on our sales people's performance?" Or whatever else comes to mind (a one-line taste follows below).
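To give a taste of what such an answer looks like on the GoodData side: the first question above boils down to a metric like this (a sketch with the hypothetical names Revenue and Campaign):

  SELECT SUM(Revenue) WHERE Campaign = Fall Promo

Keboola's job is to make sure that Revenue and Campaign actually exist in your project, stay up to date, and are correctly linked to each other.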

Keboola can do all of that pretty fast and practically without limitations. But that’s my topic for the next time.

If anything here doesn't make sense to you, please ask! I'll reply and explain it better in the article.