Why Avast Hasn’t Migrated into Full Cloud DWH

How breaking up with Snowflake.net is like breaking up with the girl you love


At the beginning of May, I got a WhatsApp message from Eda Kucera:

“Cheers, how much would it cost to have Peta in Snowflake? Eda”

There are companies that rely only on “slideware” presentations. Other companies are afraid to open the door to the unknown without having their results guaranteed. Avast is not one of them. I am glad I can share with you this authentic description of Avast’s effort to balance a low-level benchmark, a fundamental shift in their employees’ thinking, and the no-nonsense financial aspect of it all.

Let’s get back to May. Just minutes after receiving Eda’s WhatsApp message, almost six months of deep testing began in our own Keboola instance of Snowflake. Avast tested Snowflake with their own data. (At this point I handed it over to them; the rest was entirely in their hands.)

They dumped approximately 1.5TB a day from Apache Kafka into Keboola’s Snowflake environment, and assessed the speed of the whole process, along with its other uses and its costs.


With a heavy heart, I deleted Avast’s environment within our Snowflake on October 13. Eda and Pavel Chocholous then prepared the following “post mortem”:

Pavel’s Feedback on Their Snowflake Testing

“It’s like breaking up with your girl…”

This sentence sums it all up. And our last phone call was not a happy one. It did not work out: Avast will not migrate into Snowflake. We would love it, but it can’t be done at this very moment. But I’m jumping ahead. Let’s go back to the beginning.

The first time we saw Snowflake was probably just before the DataHackathon in Hradec Kralove. It surely looked like a wet dream for anyone managing any BI infrastructure: completely unique features like cloning the whole DWH within minutes, linear scalability while running, “undrop table”, “select … at point in time”, etc.


How well did it work for us? The beginning was not so rosy, but after some time, it was obvious that the problem was on our side. The data was filled with things like “null” as a text value, and as the dataset was fairly “thin”, this had a crushing impact. See my mental notes after the first couple of days:

“Long story short — overpromised. 4 billion are too much for it. The query has been parsing that JSON for hours, saving the data as a copy. I’ve already increased the size twice. I’m wondering if it will ever finish. The goal was to measure how much space it will take while flattened; JSON is several times larger than Avro/Parquet….”
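To make the “flattening” concrete: the sketch below, with entirely made-up field names, shows what turning nested JSON into flat columns means, and why raw JSON text (which repeats every key in every record) ends up several times larger than Avro/Parquet, where the schema is stored once.

```python
import json

# A hypothetical nested event, loosely in the spirit of the Kafka JSON
# described above; all field names are invented for illustration.
event = {
    "device_id": "abc-123",
    "ts": "2016-05-01T12:00:00Z",
    "payload": {"scan": {"engine": "av", "threats_found": 0}},
}

def flatten(obj, prefix=""):
    """Flatten nested dicts into dotted column names - conceptually what
    a 'flatten the JSON into a table' transformation does."""
    out = {}
    for key, value in obj.items():
        name = prefix + key
        if isinstance(value, dict):
            out.update(flatten(value, name + "."))
        else:
            out[name] = value
    return out

row = flatten(event)
print(sorted(row))

# Raw JSON text repeats every key in every record; columnar formats like
# Avro/Parquet store the schema once and encode values in binary, which
# is one reason the JSON dump comes out several times larger.
print(len(json.dumps(event)), "bytes of JSON text for a single record")
```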


Let me add that our bad data was not the only problem. I also didn’t know that scaling while running a query affects only subsequently run queries, not the currently running one. So I was massively overcharging my credit card, keeping a mega-instance of Snowflake ready in the background while my query was still running on the smallest possible node. Well, you learn from your mistakes :). This might be the one thing I expected to “be there”, but I’m a spoiled brat. It really was too good to be true.
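To see why that mistake gets expensive: Snowflake warehouse credits roughly double with each size step (the standard ladder runs X-Small = 1 credit/hour up to X-Large = 16), and a resize only applies to queries started afterwards. The dollar price per credit below is a made-up illustration, not a quote:

```python
# Standard Snowflake credits-per-hour ladder; the price per credit is
# hypothetical, for illustration only.
CREDITS_PER_HOUR = {"XS": 1, "S": 2, "M": 4, "L": 8, "XL": 16}
PRICE_PER_CREDIT = 3.0  # USD, invented

def warehouse_cost(size, hours):
    """Cost of keeping a warehouse of the given size running."""
    return CREDITS_PER_HOUR[size] * hours * PRICE_PER_CREDIT

# The rookie mistake described above: the long query keeps running on the
# small node, while the freshly upsized warehouse sits idle - and billed.
query_hours = 5
paid_for_idle_xl = warehouse_cost("XL", query_hours)
paid_for_actual_xs = warehouse_cost("XS", query_hours)
print(paid_for_idle_xl, paid_for_actual_xs)
```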

Okay, after ironing out all the rookie errors and bugs, here is a comparison of one month of real JSON-format data:

(Data size: Cloudera Parquet files, 3.7 TB (Hadoop) vs. 4.2 TB (Snowflake))
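The two numbers above work out to roughly a 13-14% storage overhead on the Snowflake side:

```python
# Measured sizes from the one-month comparison above.
hadoop_tb = 3.7     # Cloudera Parquet on Hadoop
snowflake_tb = 4.2  # the same data in Snowflake

overhead = (snowflake_tb - hadoop_tb) / hadoop_tb
print(f"Snowflake storage overhead vs. Parquet: {overhead:.1%}")
```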

Tabulated results… You know, it is hard to understand what our datasets look like and how complicated the queries they hold are. The main thing is that benchmarks really suck :). I personally find the overall results much more interesting:
  • Data in Snowflake is roughly the same size as in our Hadoop, yet I would suggest expecting a 10-20% difference.

  • Performance: we didn’t find any blockers.

  • Security/roles/privileges: SNFLK is much more mature than the Hadoop platform, yet it cannot be integrated with on-premise LDAP.

  • Stability: SNFLK is far more stable than Hadoop. We haven’t encountered a single error/warning/outage so far. Working with Snowflake is nearly the opposite of Hive/Impala, where cryptic and misleading error messages are part of the ecosystem culture ;).

  • The concept of caching in SNFLK could not be fully tested, but we have proved that it affects performance in a pleasant yet somewhat unpredictable way.

  • Resource governance in SNFLK is a mature feature: beast-type queries are queued behind the active ones while small ones sneak through, etc.

  • The architecture of separated “computing nodes” can stop inter-team collisions easily. Sounds like marketing bullshit, but yes: not all teams love each other or are willing to share resources.

  • SNFLK can consume data from most cloud and on-premise services (Kafka, RabbitMQ, flat files, ODBC, JDBC; practically any source can be pushed there). Its DWH-as-a-service architecture is unique and compelling (Redshift/Google BigQuery/GreenPlum could possibly reach this state in the near future).

  • Migration of 500+ TB of data? Another story, and one of the points that undermines our willingness to adopt Snowflake.

  • SNFLK provides limited partitioning abilities; partitioning could bring even more performance once enabled at full scale.

  • SNFLK would allow platform abuse with all of its “create database as a copy”, “create warehouse as a copy”, “pay more, perform more” features, and costs can grow through the roof. Hadoop is a bit harder to scale, which somehow guarantees only reasonable upgrades ;).

  • SNFLK can be easily integrated into any scheduler. Its command line client is the best one I’ve seen in the last couple of years.

Notes from Eda

“If we did not have Jumpshot in the house, I would throw everything into Snowflake…”

If I were to build a Hadoop cluster of 100-200 TB from scratch, I would definitely start with Snowflake… Today, however, we would have to pour everything into it, and that is really hard to do while you’re fully on-premise… It would be a huge step forward for us. We would become a full-scale cloud company. That would be amazing!

If I had to pay the people in charge of Hadoop US wages instead of Czech wages, I would get Snowflake right away. That’s a no-brainer #ROI.

Unfortunately, we will not go for it right now. Migrating everything is just too expensive for us at the moment and using Snowflake only partially just doesn’t make sense.


Our decision was also affected by our strong integration with Spark; we’ve been using our Hadoop cluster as compute nodes for it. In SNFLK’s case, this setup would mean pushing our data out of SNFLK into the EC2 instances where the Spark jobs would be running. That would cost an additional 20-30% (the data would stay inside AWS, but the EC2 instances cost something as well). I know Snowflake is currently working on a solution for this setup, but I haven’t found out what it is.
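As a back-of-the-envelope sketch of that 20-30% figure (the monthly baseline below is invented; only the percentage range comes from the text):

```python
# Hypothetical monthly Snowflake bill; only the 20-30% overhead range
# comes from the discussion above.
base_monthly_usd = 10_000.0

# Running Spark on separate EC2 instances means paying for that compute
# on top of the warehouse, even though the data never leaves AWS.
extra_low = base_monthly_usd * 0.20
extra_high = base_monthly_usd * 0.30
print(f"Spark-on-EC2 overhead: {extra_low:.0f}-{extra_high:.0f} USD/month")
```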

In our last phone call with SNFLK, we learned that storage prices were going down again. So I assume that we will meet within a reasonable time frame and reopen our discussion. (In November, Snowflake started privately testing their EU datacenter and will open it publicly in January 2017.) In the meantime, we’ll have an on-demand account for practicing :).

Please hold, your call is important to us

We’ve recently experienced two fairly large system problems that have affected approximately 35% of our clients.

The first issue took 50 minutes to resolve and the other approximately 10 hours. The root cause in both cases was the way we handled the provisioning of ad-hoc sandboxes on top of our SnowflakeDB (a few words about "how we started w/ them").

We managed to find a workaround for the first problem, but the second one was out of our hands. All we could do was file a support ticket with Snowflake and wait. Our communication channels were flooded with questions from our clients and there was nothing we could do. Pretty close to what you would call a worst-case scenario! Fire! Panic in Keboola!

My first thoughts were like: “Sh..t! If we ran the whole system on our own infrastructure, we could do something now. We could try to solve the issue and not have to just wait…”

But, we were forced to just wait and rely on Snowflake. This is the account of what happened since:

New dose of steroids in the Keboola backend

More than two years after we announced support for Amazon Redshift in Keboola Connection, it’s about friggin’ time to bring something new to the table. Something that will propel us further along. Voila, welcome Snowflake.

About 10 months ago we presented Snowflake for the first time, at a meetup hosted at the GoodData office.

Today, we use Snowflake both behind the Storage API (it is now the standard backend for our data storage) and the Transformations Engine (you can utilize the power of Snowflake for your ETL-type processes). Snowflake’s SQL documentation can be found here.

What on Earth is Snowflake?

It’s a new database, built from scratch to run in the cloud. Something different from a legacy vendor taking an old DB and hosting it for you (MSSQL on Azure, Oracle at Rackspace or PostgreSQL on AWS).

KBC as a Data Science Brain Interface

The Keboola Data App Store has a fresh new addition. That brings us to a total of 16 currently available apps, three of which are provided by development partners.

This new one is called “aLook Analytics”, and technically it is a clone of our development project, a “Custom Science” app (not available yet, but coming soon!). It facilitates a connection to the GitHub/Bitbucket repository of a specific data science shop, which you can “hire” via the app and enable to safely work on your project.

This first instance is connected to Adam Votava’s company aLook Analytics (check them out at http://www.alookanalytics.com/).

How does it work?

Let’s imagine you want to build something data-science-complex in your project. You get in touch with aLook and agree on what it is you want them to do for you. You exchange some data, the boys there will do some testing on their side, set up the environment and once they’re done, they’ll give you a short configuration script that you will enter into their app in KBC. Any business agreement regarding their work is to be made directly between you and aLook, Keboola stays on the sidelines for this one.
When you run the app, your data gets served to aLook’s prepared model, and the scripts saved in aLook’s repository get executed on Keboola servers. All the complex stuff happens and the resulting data gets returned into your project. The app can (like any other) be included in your Orchestrations, which means it can run automatically as a part of your regular workflow.

The user of KBC does not have direct access to the script, protecting aLook’s IP (of course, if you agree with them otherwise, we do not put up any barriers).

Very soon we will enable the generic “Custom Science” app mentioned above. That means that any data science pro can connect their GitHub/Bitbucket themselves - that gives you, our user, the freedom to find the best brain in the world for your job.

Why people and not just machines?

No “Machine Learning Drag&Drop” app provides the same quality as a bit of thought by a seasoned data scientist. We’re talking business analytics here! People can put things in context and be creative, while all machines can do is adjust (sometimes thousands of) parameters and test the results against a training set. That may be awesome for facial recognition or self-driving car AI, but in any specific business application, a trained brain will beat the machine. Often you don’t even have a big enough test sample, so a bit of abstract thinking is critical and irreplaceable.

Agency - get rid of pivot tables!

During my midnight-oil hours, rummaging through our internal systems, I came across the ZenDesk tickets that our data analysts are closing for one of our clients - the H1 agency (part of GroupM).

At H1.cz they have created a report in GoodData called “non-active campaigns”. It contains one metric, 5 attributes (date, client, etc.) and 4 filters (time, client’s agency, etc.). It sounds super simple, but let’s take a closer look.



What it does is give you back a table which is the wet dream of any and all agencies out there. You can see “anything” across all of the advertising channels. I mean “anything.” In this particular case, they’ve created a report of non-active campaigns. At the same time, this is a very good example of an output that is very hard or impossible to achieve in tools like Tableau, SAP, Chartio, periscope.io or RJMetrics. Rock&roll of the multi-dimensional BI system! You need to live it to believe it and to actually understand it.

Below is the data model (non-readable on purpose); the yellow ovals are the things you calculate on, and you can see them in the context of the green ones:


Karel Semerák from H1.cz prepared this report. I bet he has no clue what a mega-machine he put to work to actually produce it. Based on the physical data model, the metric definitions and the report context, GoodData generated 460 lines of SQL in the data warehouse which propels the system.

Just imagine a real person trying to do this report by hand (totally ignoring the incomprehensible amount of data). He has to do lots of small tasks (look inside AdWords, find the active clients, count the number of their campaigns, compare to CRM data for paid invoices, create a temporary pivot table, etc.), and every little task could be represented by a rectangle inside this picture:


It all comes to almost 90 totally different tasks, each taking from a minute to 3 days when done by hand. Try to explain this workflow to a Teradata consultant and you will spend a week just explaining what you want; try it with an IBM Cognos expert and… well, you get the picture.
And one more thing: with GoodData you do it yourself and don’t have to wait another month for the expert nor pay a 5-digit sum for one report.

Well played, GoodData & multi-dimensional BI! 

But for a moment let’s forget about this one report. H1.cz has already prepared over 400 such reports. Try to produce that in Excel and you’d better have a horde of MS Excel devotees who work as hard as robots and are as precise as robots. The last time I saw something like this was years back at OMD: a mega office, and all the people inside were producing pivot tables.

Talking about robots: if you are interested in the probability with which “data AI robots” will replace your job, take a look here.


Karel Semerák from H1.cz can stay cool though. He thinks about the data and the context instead of spending time on tasks where robots will always be better, and that is one thing where robots will take some time to improve: cognitive skills and context.

So next time your P/L starts knocking on your door, think about giving your people the chance to creatively use their heads and leave all the heavy lifting to robots. People aren’t the best at copy&paste or sorting through the AdWords report, but they are great at creative thinking, and that is what you need in order to win over your competition.

Another year passed by

Yesterday my account in GoodData turned 5 years old!

It is one in the morning and the delivery service calls my phone; surprise! “We’ve got champagne for you.” It was from my colleagues. I’m alone and sick with the flu in bed, and I almost shed a tear.

Every single day for the last 12 months I have looked forward to going to work, and the main reason is the people at Keboola. Thank you guys; without you I would probably just sit at the cash register at TESCO and… well, whatever.

This seems like a good moment to look back at the last 12 months. This is not in any particular order - and not a complete list, either:

  • We adjusted our positioning. From “just selling and implementing” GoodData, we moved to data enablement and started talking with everyone who might need to analyse data. You have the data; we will help you integrate it and get it “consumption ready”. If anyone wants to use highcharts.com, they are more than welcome - we are here for our clients and act as “the tool” for our clients’ internal analytics guys to use. We also built several non-analytics applications whose purpose is to deliver quality data into other companies’ products and platforms.
  • Our Keboola Connection ecosystem is growing rapidly. We are adding more and more new ways to push your data for analytics and data discovery. Along with GoodData, today we support Tableau and Chartio, and we are planning support for Birst, RJMetrics and Anaplan. I would love to have support for SAS soon as well. If you have a tool that can read data from a DB, from a CSV on your hard drive or from a URL, you can have data from us today.
  • In total silence we launched our “Apps Store”. The main part of it is our still very juvenile app “LuckyGuess”, plus transformation templates which automate many daily routine tasks. Our goal is to support any app that really helps our users/analysts by providing added value: it automatically analyzes data or automates processes with the data. If you can deliver such an app in Docker, we offer the best place to monetise it -- we have the computing power, and clients already have their data with us… Our LuckyGuess app is primarily written in R and does very basic yet fundamental things: it detects relations between tables, tells you the data types, detects dependencies (regressions) between columns (“tell me which expenses bring the most customers”), and can detect purchasing patterns to let you know when to go talk to a particular customer because he is most likely to buy. We are working on other apps, our own as well as partner-driven ones!

  • Marc Raiser is back. A few years ago he received an offer that he couldn’t refuse - working with Fujitsu Mission Critical Systems Ltd. (managing data from large machines and building AI on top of that). At the time we joked that Japan would be just an apprenticeship program for him and that he would be back soon. And voila, he is back working with us again, for now on development of LuckyGuess!
  • The rebuilding of our platform into async mode is nearly finished. It will give us unlimited horizontal scalability.
  • Martin Karásek developed a new design for our UI. No longer bare Bootstrap! While implementing the new design we also redesigned our whole technical approach to the UI: today everything will be an SPA application on top of our APIs. Any partner of ours can skin it as they wish and run it from their own servers if they need to. A sneak peek of the Transformations UI:

  • We organised our first Enterprise Data Hackathon
  • We reorganized the company into two verticals - product and services. Our Keboola Connection team actually doesn’t have any direct clients any more; everything is done through partners. Currently we have 7 partners. Our definition of a partner is any entity which has a data business of its own and uses our tech stack to support that business. In just the last month we were approached by 4 more companies.
  • We now have a third partner in Keboola. So it’s me, Milan and Pavel Doležal. Pavel spends most of his time with our partners making sure they get all the right tools and support they need for their work and is leading the development of our partner network.
  • Vojta Roček left us and went his own “BI” way. Today he is at the new e-commerce holding Rockaway, leading people down the data-driven business development path. Keboola Connection adoption within Rockaway is growing every day.
  • Our extractor-framework - the environment where third parties can write their own extractors - is done and ready to use. Today it takes us ½ a day to connect to a new API.
  • We are finishing an app that can read Apiary Blueprints; with it we shall be able to read data from any API that has its documentation in Apiary.io with minimal development.
  • Working on “schemas” - the possibility to use standardized nomenclature for naming and describing the data. Think of it as a "data ontology". It will allow creation of smarter Apps, as they will be able to understand the meaning of the data.
  • We just launched TAGs - a form of dialogue between you and us about the data. Simply tag a column “location” and we will promptly serve you weather data for every address in the column. If you label a column as “currency”, then right away you have up-to-date currency exchange rates, etc.
  • We are still 25 people and growing without a need to add too many more.
  • Zendesk launched online courses for Zendesk Insights within our own Keboola Academy. We have trained hundreds of people in how to use GoodData.
  • Our “Team Canada” has moved into new offices.
  • We publish many components as open source. If it makes sense, we want to provide them to you for free. Our JSON2CSV convertor is the first sign of this trend. The dream would be to run the most used extractors for free as well.

So that's where we are, what we've been doing, and where we're going. Exciting times!

Now, to take my medicine...

    Gorila Mobil - Data-Driven Business

    In July 2014, O2 announced that it had acquired Gorila Mobil (a virtual mobile operator). Gorila had approached us at Keboola a couple of months before their official launch. They had one goal: “We need to have a data-driven company and we want you to help us set it up.”

    The brain behind Gorila is Roman Novacek, a brain that works a bit differently than yours or mine.

    18 months ago

    It was the morning of 23.4.2013 and I was on my way to one of the largest tech hubs, TechSquare, to see Roman. Back then Roman had started a company called Tarifomat, which offered its customers a way to find the best mobile operator deal and helped them switch. Honestly, Tarifomat was a great idea with a very difficult execution path. Their sales funnel is verrrry long: basically they get paid only half a year after the client switches, and only if he doesn’t cancel beforehand. The path to getting paid is paved with unexpected traps like “the courier delivering the SIM card hasn’t found the address”, etc. A perfect fit for us, if the client could increase their margins by 30x. We rolled up our sleeves and proceeded with the project. It was a success. It had to be; our VP of Propaganda wrote that companies can count on us, and we have to honor that promise.

    Tarifomat got a perfect overview of their whole funnel (up to 1500 requests a day). Roman says that it was the first time that he actually understood what was going on inside the company.

    7 months later I got a call from Roman. He was being as secretive as James Bond, mysteriously speaking about some new virtual operator, but he couldn’t tell me more, only that the plan was to design the company from the ground up as a data-driven company. Once somebody starts sending these kinds of signals, I can’t help it: I lose my ability to focus on anything other than “new data-driven company”. Well, it looked like just a lot of talk, but shortly after that came the walk. Roman sent us the first payment, and right after the brief we began meetings and planning what exactly we would be solving together.

    Gorila was another virtual operator (inside the O2 network) and they tried to be very cool (check out their YouTube channel). But being cool is not the only ingredient for success… you need more…

    With our help, Roman put together daily dashboards which mapped out the full acquisition channel down to each campaign/media/type/position/brand message/product. The O2 team was taken aback when they saw that. The number of activated SIM cards was growing, expenses were falling - everybody was happy, champagne flowing everywhere… Only Roman’s team wasn’t celebrating. People weren’t actually using the SIM cards as the Gorila Mobil team had envisioned. Now what?

    Friday afternoon:  "Let’s dig into the GoodData dashboards and solve this!"

    Sunday evening: Claim "Gorila mobil - the most of internet, FUP you!" changed to "5 Kč/hr for calls and all the data you need".

    A complete switch in brand positioning, based solely on data! I get shivers running up my spine just thinking about it. While we were chilling in our offices playing ping pong, Roman’s team was rocking it! The 5 Kč pricing worked: they got enough data to support their further visions; they were bold and full of energy.

    Roman has a vision that data describes the now as it is and that we should use that knowledge to validate strategic directions and decisions.

    And we can see exactly the same pattern at DameJidlo. Gorila mobil was soaked deep down in data and it worked. “There was no time to hide anything; everybody from the O2 call centre to investors and partners had full access to all data” - daily orders, comparisons, order values, SIM activations, the number of customers and their behaviour, how frequently and where they top up their SIM cards, etc.

    Unfortunately, this great ride lasted only 3 months. The Gorila project was over: it was so successful that Telefonica decided to buy it and incorporate it into the O2 structure.

    Roman moved on to a new project, and Keboola was ready. Anyhow, who else can say they have an iPhone cover made from real cherry wood? We are very interested to see how they will handle tricky things like the lifetime warranty for the bamboo iPad cover and what role data will play in that business.

    I caught Roman during his trip through China, where he was stuffing himself with chicken feet - so here’s a quick Google-Docs-style interview :)

    PS: Roman, what was the hardest thing in the beginning of Gorila?

    RN: Convincing O2 that we had to be agile and had NO time for endless meetings. We wanted to focus 100% of our time and energy on marketing, and we knew we needed the company to be 100% based on data. In the beginning no one inside O2 believed this vision. Today (after the Gorila acquisition), O2 wants to have the same system as we had (using the sales/activation/channel activation data from yesterday). Dusan Simonovic and Jiri Caudr are the stakeholders and I hope they will be successful with that. When you know what is going on inside the company, you don’t have to speculate. That gives you real power to make decisions and work hard to achieve your goals, because you know exactly WHAT you’re doing and WHY you’re doing it. No stumbling in the dark. That’s how you oddělíš zrno od plev (separate the wheat from the chaff)…

    PS: What do you mean by “odděluje zrna od plev” (separating the wheat from the chaff)?

    RN: Well, you can have lots of excuses when you don’t have the data. You can come up with excuses about things that went wrong when you have no data to show. Investors have no way to prove you wrong, at least in the short term. You can argue that the market has changed, that some externalities worked against you, etc. Once most of your decisions are firmly based on data and anybody can see the results on an ongoing basis, you literally put your skin in the game. If you f..ck something up, anybody can see in the data what happened: what the circumstances were before the decision and how it looks right now. I love this. I got addicted to data-driven business and now I can’t do it any other way. My head just works this way :)

    PS: This was your second project with Keboola. How was it for you working with us again?

    RN: Once I convinced my partners to build a “metrics-driven company”, the hardest part was getting all the data sources: network information, info from marketing tables, Google Analytics, distribution and logistics data (post office, couriers), CMS, etc. We got lots of help from Martin Hakl and his company Breezy. They did our web and all the data extractors so the data could flow into Keboola.

    PS: Could you show us something?

    RN: Yep, though I have fudged the axes a bit… Now you will ask what it is and how we worked with it, right? :) What you can see in the report is SIM card activations over time. The dotted lines are linear extrapolations - trends - so you can see the general direction: growth or stagnation. We have this report on everybody’s dashboard and we can filter it according to the channel where the SIM card was activated (for example the large newspaper-stand network, post offices, lottery terminals, etc.). In exactly the same way you can see activations by marketing channel. If you click on any point inside the report, another dashboard opens and you can see how much money we get from that particular set of activations and how those people behave. It’s hard to describe; it is much better to show it in reality :)

    PS: My favourite question is to ask people if they had an “aha moment”: the point at which you realize that everything you’ve done until now was wrong and you have to go in a completely different direction. Did you have a moment like this?

    RN: We were spending around 1.5M / month on advertising. Marketing-wise, our ads performed super well! But when we drilled through the data, we noticed that some campaigns were totally bad and dragged down the overall average. The interesting thing was that you couldn’t see this when you looked at the campaign totals, because we had some campaigns which were super extra good and they masked the deviation. If we didn’t have the data, we couldn’t have discovered this at all. We had some extra-great campaigns and some shitty ones, but the average was OK. We could dig into the details, find the bad performers, turn them off, and start all over again the next day. Every day we sent out over 500 SIM cards, and we knew exactly how much they cost us, how long it would take them to connect to the network, how much they were going to spend and how long they would stay with us. We could sleep peacefully, because we had data :)
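    The effect Roman describes is easy to reproduce with made-up numbers: the blended average looks healthy even though one campaign is terrible, and only a per-campaign drill-down exposes it.

    ```python
    # Invented campaign figures, purely to illustrate the masking effect.
    campaigns = {
        "A": {"spend": 100_000, "activations": 500},  # great performer
        "B": {"spend": 100_000, "activations": 450},  # great performer
        "C": {"spend": 100_000, "activations": 50},   # terrible performer
    }

    total_spend = sum(c["spend"] for c in campaigns.values())
    total_activations = sum(c["activations"] for c in campaigns.values())

    # The blended cost per activation looks acceptable...
    print("overall:", total_spend / total_activations)

    # ...but the drill-down shows campaign C burning money.
    for name, c in campaigns.items():
        print(name, c["spend"] / c["activations"])
    ```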

    PS: So now you’re a data guy forever?

    RN: You got it, buddy! :-) In any company I start, data analytics will be the first thing I take care of. Once we can see inside the data and know what’s going on inside the company, we are better able to take risks and test new things.



    GoodData XAE: The BI Game-Changer (3rd part)

    For previous part (2/3), continue here


    Discovering the value you can’t see.

    Creating a query language is the most complicated task to be solved in BI. It’s not about storing big data, nor about processing it, nor about drawing graphs and making an API that clients can work with well. You cannot buy a query language, nor program one in a month.

    If the query language gets too complicated, the customer won’t manage to work with it. If the query language gets too simplistic, the customer won’t manage to work with it the way he needs to. GoodData has a simple language for expressing arbitrarily complicated questions about the data. At the same time, it has a mechanism that helps it apply that language to any complicated BI project (or logical data model). In GoodData’s case, it is MAQL/AQE that, in my point of view, is irreplaceable. Furthermore, guys from Prague and Brno - Tomáš Janoušek, David Kubečka and Tomáš Jirotka - have extended AQE with a set of mathematical proofs (complicated algebra) that allows quick tests of whether new functions in AQE apply correctly to any type of logical model. That’s how GoodData makes sure that the translations between (MAQL) metrics and the SQL in the underlying databases are correct. AQE then helps a common user overcome the chasm that separates him from low-level scripting.

    UPDATE 17. 11. 2013: MAQL is a query language that is translated by the MAQL interpreter (previously known as QT, the “Query Tree” engine) into a tree of queries on the basis of the logical model (LDM). These queries are actually definitions of “star joins”, from which the “Star Join Generator” (SJG) creates the actual SQL queries in the DB backend according to the physical data model (PDM, which lies below the LDM). The whole thing was originally created by Michal Dovrtěl and Hynek Vychodil. The new implementation of AQE has further helped to put all of this onto the solid mathematical basis of a ROLAP algebra (which is similar to relational algebra).

    After weeks of persuading and, yes, bribes, I managed to obtain lightly censored examples of the queries AQE creates from metrics I wrote for this purpose. I guess this is the first time anyone has actually published them...

    For comparison, I used the data model from the Report Master course in Keboola Academy and built this report from it:

    The right Y-axis of the graph shows how many contracts I have closed in Afghanistan, Albania, Algeria and American Samoa over the last 13 months. On the left Y-axis, the blue line shows how much regular income my salespeople have brought me, and the green line shows the median sale in a given month (the inputs are de facto identical to the table from Part 1 of this series).

    The graph then shows me three metrics (as per the legend below the graph):

    • “# IDs” = SELECT COUNT(ID (Orders)) – counts the number of orders.

    • “Avg Employee” = SELECT AVG(revenue per employee) – takes the mean of the (auxiliary) metric that sums turnover per salesperson.

    • “Median Employee” = SELECT MEDIAN(revenue per employee) – takes the median of the (auxiliary) metric that sums turnover per salesperson.

    and the auxiliary metric:

    • “revenue per employee” = SELECT SUM(totalPrice (Orders)) BY employeeName (Employees) – sums order values (individual sales) at the level of the salesperson.

    For the most part, everything explains itself – except maybe “BY”, which states that the money in “totalPrice (Orders)” is summed per salesperson rather than over everything at once. I dare say anyone who is willing and tries MAQL even a little bit is going to learn it (or, for that matter, we can teach it to you at Keboola Academy any time ☺).
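    For intuition, here is what those four metrics compute, redone in plain Python. The order rows are invented toy data, not the Report Master dataset; the variable names are mine:

```python
from statistics import mean, median

# Toy orders: (order_id, employee_name, total_price). Invented sample rows.
orders = [
    (1, "Alice", 100), (2, "Alice", 200),
    (3, "Bob", 50), (4, "Carol", 250), (5, "Carol", 150),
]

# "# IDs" = SELECT COUNT(ID (Orders))
count_ids = len({order_id for order_id, _, _ in orders})

# "revenue per employee" = SELECT SUM(totalPrice (Orders)) BY employeeName:
# BY fixes the aggregation level, so one summed value arises per salesperson.
revenue_per_employee = {}
for _, name, price in orders:
    revenue_per_employee[name] = revenue_per_employee.get(name, 0) + price

# "Avg Employee" and "Median Employee" then aggregate the per-employee sums.
avg_employee = mean(revenue_per_employee.values())
median_employee = median(revenue_per_employee.values())

print(count_ids, revenue_per_employee, avg_employee, median_employee)
```

    Note that the average and median are taken over the three per-employee sums (300, 50, 400), not over the five raw order rows; that is exactly the difference “BY” makes.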

    And now the most important thing... see below how AQE translates the metrics above into SQL:

    With a little exaggeration we can say that building my report is actually quite a difficult task – but thanks to AQE, none of that difficulty bothers me at all.


    If these three hypotheses are valid:

    1. If GoodData won’t earn me a bunch of money, I won’t use it.
    2. I will earn a bunch of money, but only with a BI project built to suit MY exact needs.
    3. A BI project built to suit MY exact needs is a complex matter that can only be managed with AQE.

    … then the basis of the success of GoodData is AQE.

    A footnote: the aforementioned MAQL metrics are simple examples. Sometimes it is necessary to build metrics so complicated that it is almost impossible to imagine what must happen to the underlying data. Here is an example of a metric from one project whose analytics stands on unstructured texts; the metric counts conversation topics over time, per moderator:

    Lukáš Křečan once blogged (CZ) that people are GoodData’s greatest competitive advantage.

    Translation: “Our biggest competitive advantage is not a unique technology that no one else has. The main thing is people. ”

    People are the foundation. We cannot do this without them; it is they who create the one-and-only atmosphere in which unique things are born. Still, both the people and the atmosphere are replaceable. The biggest competitive advantage of GoodData (as well as its intellectual property) is AQE. Without it, the user would be left clicking reports together in a closed UI, which would take away the essential flexibility. Without AQE, GoodData would rank alongside Tableau, Bime, Birst and the others. It would become basically uninteresting, and it would have to compete hard with firms building their own UI on top of “Redshifts”.

    AQE is an unrepeatable opportunity to get ahead of competitors, who from then on can only lose ground. No one else is able to implement a new function in their product over arbitrary data in arbitrary dimensions while analytically proving and testing the validity of the implementation.

    The line between the false impression that “this cool dashboard is very beneficial for me” and the real potential you can dig out of the data is very thin... its name is customization: an arbitrary model over arbitrary data with arbitrary calculations on top. Call it an extreme, but without the ability to compute, say, the natural logarithm of the ratio of figures from two time periods across many dimensions, you cannot become a star in the world of analytics. AQE is a game changer on the BI field, and it is only thanks to AQE that GoodData gets to redefine the rules of the game. Today a general root, tomorrow K-means... ☺

    Howgh!

    GoodData XAE: The BI Game-Changer (2nd part)

    For previous part (1/3), continue here


    An honest look at your data

    Moving forward with our previous example: uploading all of the data sources we use internally (from one side of the pond to the other) into an LDM makes each piece of information easily accessible in GoodData – that’s 18 datasets and 4 date dimensions.

    Over this model we can now build dashboards in which we watch how effective we are, compare months with one another, compare people and different kinds of jobs, look at costs, profits and so on.

    Therefore, everything in our dashboard suits our needs exactly. No one dictated to us how the program should work... this freedom is crucial for us. Thanks to it, we can build anything we want in GoodData – only our own abilities decide whether we succeed and make the customer satisfied.

    What is a little tricky is that a dashboard like this can be built in almost anything. Take, for example, dashboards from KlipFolio. They are good, but they have one substantial “but”: all the visual components are objects that load information out of rigid, predefined datasets. Someone designed those datasets exactly for the needs of the dashboard, and they cannot be tweaked. Try taking two numbers out of two base tables and watching their quotient in time; a month-to-date of that quotient can be forgotten immediately, and don’t even think about situations with “many to many” linkages. The great advantage of these BI products (they call themselves BI, but we know the truth) is that they are attractive and pandering. One should just not assume that he has bought a diamond when in actuality it cannot do much more than his Excel. (Just ask any woman her thoughts on cubic zirconia and you’ll see the same result.)
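    The “quotient of two base tables over time” that rigid dashboard datasets cannot express becomes a one-join job once a model links the tables. A minimal sketch, with invented monthly revenue and headcount tables:

```python
# Two independent "base tables" keyed by month. The numbers are invented.
revenue = {"2013-01": 1200, "2013-02": 1600, "2013-03": 900}
headcount = {"2013-01": 4, "2013-02": 5, "2013-03": 3}

# The quotient over time requires joining the two tables on the month key,
# which is trivial with a data model but impossible inside one predefined
# dashboard dataset.
revenue_per_head = {m: revenue[m] / headcount[m] for m in revenue if m in headcount}
print(revenue_per_head)
```

    A month-to-date of this quotient would just be one more derivation over the joined result; the hard part is never the arithmetic, it is having the linkage between the tables at all.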

    Why is the world flooded with products that play on a little playground whose walls are plastered with cool visuals? I don’t know. What I do know is that people are riding the “Cool, BigData analytics!” wave and are hungry for anything that looks even a little like a report. Themed analytics can be done in a few days: transforming transactions and computing “customer lifetime value” is easy, right up until everyone starts telling you their individual demands.

    No one in the world except GoodData can manage analytics projects that are 100% free at their foundation (the data model) and let people do anything they want in those projects without having to be “low-level” data analysts and/or programmers. Bang!

    So how does GoodData manage to do it?

    Everyone is used to adding up an “A” column by typing the formula “=SUM(A:A)”. In GoodData you add up the “A” column by typing the formula “SELECT SUM(A)”. The language all these formulas are written in is called MAQL – Multi-dimensional Analytical Query Language. It sounds terrifying, but everyone has managed to learn it – even Pavel Hacker holds a Report Master diploma from Keboola Academy!

    If you look back at the data model of our internal project, you might say you want the average number of hours from one side of the data model, filtered by the type of operation, grouped by project description and client name, and restricted to operations that took place on this weekend’s afternoons. The metric will still look like “SELECT AVG(hours_entries) WHERE name(task) = cleaning”. The multi-dimensionality of the language is hidden in the fact that you do not have to deal with questions such as: which dimension is the task name in? What relation does it hold to the number of worked hours? And, furthermore, to the name of the client? GoodData (through the relations in the logical model we design for our client) solves all of that for you.
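    What the model quietly resolves behind that metric amounts to a join the user never writes. A hand-rolled sketch over invented toy tables (the task names and hours are made up):

```python
from statistics import mean

# Two datasets linked through task_id. Toy data, not our real project.
tasks = {1: "cleaning", 2: "development"}        # task_id -> name(task)
hours_entries = [(1, 2.0), (1, 3.0), (2, 8.0)]   # (task_id, hours)

# The WHERE condition lives in the tasks dimension; the aggregated value
# lives in hours_entries. The logical model supplies this join for you.
cleaning_avg = mean(h for task_id, h in hours_entries if tasks[task_id] == "cleaning")
print(cleaning_avg)  # 2.5
```

    In MAQL you state only the condition and the aggregation; the traversal from hours to task name (and onward to clients or projects) follows from the linkages in the model.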

    To put it plainly: if I design a (denormalized) Excel table in which everything is comfortably put together, no one reading this will have trouble computing over it. If instead we give you data divided into dimensions (and the dimensions will often come from different sources – such as the outputs of our Czech and Canadian accounting systems), it is much harder to process; most likely you would start hammering out SQL like a madman. Since the world cannot be described in one table (or maybe it can – key-value pairs... but you cannot do much with those), the view across many dimensions is essential. Without it, you are just doing a little home arithmetic ☺.

    Do I still have your attention? Now is almost the time to say “wow”, because if you like to dig around in data, you are probably over the moon about the situation described above ☺.

    And to the Finale...

    Creating a query language is the most complicated task to be solved in BI. GoodData, however, uses a simple yet effective language to hide these “complications” and express the questions you have about your data. Part 3 of our series will dive deeper into this language, known as MAQL, and its ability to easily derive the insights hidden in your data.


    For final part (3/3), continue here

    GoodData XAE: The BI Game-Changer (1st part)

    Putting your data into the right context

    At the beginning of last summer, GoodData launched its new analytic engine, AQE (Algebraic Query Engine). Its official product name is GoodData XAE. However, since I believe XAE is Chinese for “underfed chicken”, I will stick with AQE ☺. From the first moment I saw it, I have considered it the concept with the biggest added value. When Michael showed me AQE, I immediately fell in love.

    However, before we can truly reveal AQE and the benefits that can be derived from it, we need to begin with an understanding of its position in the market – starting from the foundation on which GoodData’s platform rests. In this three-part series we’ll cover AQE’s impact on contextual data, on delivering meaningful insights and, finally, on digging for those hidden gems.

    First, a bit more comprehensive introduction...

    Any system with ambitions to visualize data needs some kind of mathematical machinery. For instance, if I take sold items with the names of salespeople as my input, and my goal is to find the median of the salespeople’s turnover, somewhere in the background a summation of the sold items per month (and per salesperson) must take place. Only with that result in hand can we compute the requested median. Notice the graphic below: the left table is the crude input, while the right table is derived along the way – most of the time we do not even realize that these inter-outputs keep arising. From the right table we can quickly read off the best salesperson of the month, the average salesperson, the median and so on...
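    That silently arising intermediate table can be made explicit in a few lines. The sales rows below are invented, but the two stages mirror the left-table/right-table picture: first total per (month, salesperson), then the median per month:

```python
from statistics import median

# Crude input: (month, salesperson, amount). Invented rows.
sales = [
    ("Jan", "Alice", 10), ("Jan", "Alice", 20), ("Jan", "Bob", 40),
    ("Feb", "Alice", 15), ("Feb", "Bob", 5), ("Feb", "Bob", 10),
]

# Inter-output: total per (month, salesperson) - the "right table" that
# must arise before any median can be computed.
totals = {}
for month, person, amount in sales:
    totals[(month, person)] = totals.get((month, person), 0) + amount

# Median of the salespeople's turnover, per month.
months = {m for m, _ in totals}
median_by_month = {
    m: median(v for (mm, _), v in totals.items() if mm == m) for m in months
}
print(median_by_month)
```

    The user only ever asks for “the median turnover per month”; the per-salesperson totals are exactly the inter-output the engine has to materialize on the way there.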

    And how does this stack up against the competition?

    If we don’t have a robust analytic backend, we don’t have the freedom to do whatever we want; we have to tie our users to some pre-built “vertical analysis” (churn analysis of an e-shop’s customers, RFM segmentation, subscription cohorts, etc.). Fiddling with data is possible in many ways. Besides GoodData, you can find tools such as Birst, Domo, Klipfolio, RJMetrics, Jaspersoft, Pentaho and many, many others. They look really cool, and I have worked with some of them before! A lone data analyst can also reach for R, SPSS, RapidMiner, Weka and similar tools. However, these are not BI tools.

    Most of the aforementioned BI tools have no sophisticated mathematical machinery. They will simply let you count the data, calculate the frequency of items, and find the maximum, minimum and mean. The promo video of RJMetrics is a great example.

    Can I just use a calculator instead?

    Systems such as Domo.com or KlipFolio.com solve the absence of mathematical machinery in a somewhat bluffing way. They offer their users a set of mathematical functions – just as Excel does. The difference is that these functions work on separate tables, not on the whole data model. One might think this does not matter but, quite the contrary, it is the pillar of everything connected to data analytics. I will try to explain why...

    The border of our sandbox is set by the law of conservation of “business energy”:

    “If we don’t manage to earn our customer more money than our services (and GoodData license) cost him, he won’t collaborate with us.“

    Say we take the listing of invoices from SAP and draw a growth graph; our customer will throw us out of the office. We need a little bit more. We need to put each data dimension into context (a dimension being a thematic data package, usually represented by a data table). A dimension on its own need not have any strictly defined linkages; in our analytics project such a table is called a dataset.

    But how is it all connected?

    The moment we give each dimension its linkages (parents, children... siblings?), we get a logical data model. A logical data model describes the “business” linkages, and most of the time it is not identical to the technical model in which a source system stores its data. For example, if Mironet runs its own e-shop, the e-shop’s database is optimized for the needs of the e-shop – not for financial, sales and/or subscription analytics. The more complicated the environment whose data we analyze, the fewer similarities the technical and analytical data models share. This low structural similarity between the source data and the data needed for analytics is what divides the other companies from GoodData.

    A good example is our internal project. I chose it because it contains only the logical model we need for ourselves; it is not artificially extended just because we know “the customer will pay for it anyway”.

    We upload various tables into GoodData. These tables are connected through linkages; the linkages define the logical model, and the logical model in turn defines what we can do with the data. Our internal project serves to measure our own activity, and it connects data from the Czech accounting system (Pohoda), the Canadian accounting system (QuickBooks), the cloud application Paymo.biz and some Google Drive documents. In total, it has 18 datasets and 4 date dimensions.

    The first image (below) is a general model, select the arrow in the left corner to see what a more detailed model looks like.

    In the detailed view (2 of 2), note that the name of the client is marked in red, the name of our analyst in black and the worked hours in blue. What I want to show here is that each individual piece of information spreads widely throughout the project. Thanks to the linkages, GoodData knows what belongs together.

    UP NEXT

    Using business-driven thinking to force your data to comply with your business model (rather than the other way around) will allow you to report on meaningful and actionable insights. Part 2 of this series on AQE (...or, more formally, XAE) will uncover the translation of the logical data model into the GoodData environment.


    For next part (2/3), continue here