Please hold, your call is important to us

We’ve recently experienced two fairly large system problems that have affected approximately 35% of our clients.

The first issue took 50 minutes to resolve; the other, approximately 10 hours. The root cause in both cases was the way we handled the provisioning of ad-hoc sandboxes on top of Snowflake (more below on how we started with them).
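For the curious, here is a minimal sketch of the kind of ad-hoc sandbox provisioning involved, leaning on Snowflake's zero-copy cloning. The database and role names are made up for illustration; this is not our actual provisioning code:

    -- spin up a throwaway sandbox as a zero-copy clone (hypothetical names)
    CREATE DATABASE sandbox_4821 CLONE production_db;
    GRANT USAGE ON DATABASE sandbox_4821 TO ROLE analyst_sandbox;
    -- ...ad-hoc experiments run here, isolated from production...
    DROP DATABASE sandbox_4821;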

We managed to find a workaround for the first problem, but the second one was out of our hands. All we could do was file a support ticket with Snowflake and wait. Our communication channels were flooded with questions from our clients and there was nothing we could do. Pretty close to what you would call a worst-case scenario! Fire! Panic in Keboola!

My first thought was: “Sh..t! If we ran the whole system on our own infrastructure, we could do something now. We could try to solve the issue instead of just waiting…”

But we were forced to just wait and rely on Snowflake. This is the account of how we got here:

New dose of steroids in the Keboola backend

More than two years after we announced support for Amazon Redshift in Keboola Connection, it’s about friggin’ time to bring something new to the table. Something that will propel us further along. Voilà: welcome, Snowflake.

About 10 months ago, we presented Snowflake for the first time at a meetup hosted at the GoodData office.

Today, we use Snowflake both behind the Storage API (it is now the standard backend for our data storage) and behind the Transformations Engine (you can utilize the power of Snowflake for your ETL-type processes). Snowflake’s SQL documentation can be found here.
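To give a flavor of a transformation running on the Snowflake backend, here is a minimal sketch in plain Snowflake SQL; the input and output table names are invented for illustration, not our actual schema:

    -- aggregate orders into a monthly revenue table (hypothetical tables)
    CREATE TABLE out_monthly_revenue AS
    SELECT DATE_TRUNC('month', o.created_at) AS month,
           c.country,
           SUM(o.total_price) AS revenue
    FROM in_orders o
    JOIN in_customers c ON c.id = o.customer_id
    GROUP BY 1, 2;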

What on Earth is Snowflake?

It’s a new database, built from scratch to run in the cloud. Something different from a legacy vendor taking an old DB and hosting it for you (MSSQL on Azure, Oracle at Rackspace or PostgreSQL on AWS).

When Salesforce Met Keboola: Why Is This So Great?


How can I get more out of my Salesforce data?

Along with being the world’s #1 CRM, Salesforce provides an end-to-end platform to connect with your customers, including Marketing Cloud to personalize experiences across email, mobile, social and the web, Service Cloud to support customer success, Community Cloud to connect customers, partners and employees, and Wave Analytics designed to unlock the data within.

After going through many Salesforce implementations, I’ve found that although companies store their primary customer data there, there is a big opportunity to enrich it further by bringing in related data stored in other systems, such as invoices in an ERP or contracts in a dedicated DMS. For example, I’ve seen clients run into inconsistent data across multiple source systems when a customer changes their billing address. In a nutshell, Salesforce makes it easy to report on the data stored within it, but it can’t provide a complete picture of the customer unless we broaden our view.
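As a sketch of what that broader view can look like once the CRM and ERP data land side by side in one storage layer – all table and column names here are hypothetical:

    -- enrich Salesforce accounts with invoice history from an ERP
    SELECT a.account_id,
           a.account_name,
           MAX(i.issued_at) AS last_invoice_date,
           SUM(i.amount)    AS lifetime_invoiced
    FROM sfdc_accounts a
    LEFT JOIN erp_invoices i ON i.customer_id = a.erp_customer_id
    GROUP BY a.account_id, a.account_name;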

The value of text (data) and the Geneea NLP app

Just last week, a client let out a sigh: “We have all this text data (mostly customer reviews) and we know there is tremendous value in that set, but aside from reading it all and manually sorting through it, what can we do with it?”

With text becoming a bigger and bigger chunk of a company’s data intake, we hear questions like that more and more often. A few years ago, the “number of followers” was about the only metric people would get from their Twitter accounts. Today, we want to (and can) know much more: What are people talking about? How do we escalate their complaints? What topics are trending across data sources and platforms? Those are just some of the questions we’re asking of the NLP (Natural Language Processing) applications at our disposal.

Besides the more obvious social media use cases, there are many areas where text analytics can play an extremely valuable role: customer support (think of all the ticket descriptions and comments), surveys (most have open-ended questions, and their answers often contain the most valuable insights), e-mail marketing (whether analyzing outbound campaigns to better understand what works and what doesn’t, or compiling inbound e-mails) and lead generation (what do people mention when reaching out to you?), to name a few. From time to time we even come across more obscure requests, like extracting critical information (for example, contract expiration dates) from text descriptions of past deals, or comparing bodies of text to determine “likeness” (for things like product or job descriptions).

Keboola and Slalom Consulting Team up to host Seattle’s Tableau User Group

On Wednesday, May 18th, Keboola’s Portland and BC teams converged on Seattle to host the city’s monthly Tableau User Group together with Slalom Consulting. We worked with SeaTUG’s regular hosts and organizers at Slalom to put together a full evening of discussion on how to solve complex Tableau data problems using KBC. With 70+ people in attendance, Seattle’s Alexis Hotel was buzzing with excitement!

The night began with Slalom’s very own Anthony Gould – consultant, data nerd and SeaTUG host extraordinaire – welcoming the group and getting everyone riled up for the night’s contest: awarding the attendee whose SeaTUG-related tweet got the most retweets! He showed everyone how we used Keboola Connection (KBC) to track that data and let them know the tally would be updated at the end of the night, with prizes distributed!

How we "hacked" Vizable

Tableau unveiled their new Vizable app on the first full day of the Tableau Conference 2015 (a.k.a. TC15) to many oohs and aahs. Vizable is a tablet app that lets users take data from an .xls or .csv file and easily interact with it right on their tablet. It is unparalleled in its ease of use and intuitiveness, providing an exciting new way to consume data and drive insights. More information here: http://vizable.tableau.com/

As soon as we saw it, the Keboola team thought: “What an exciting way to use data from Keboola Connection – if only we could send data to it immediately to test it!” The app is built to accept .xls and .csv files that are physically present on the iPad it runs on, so at first glance it is completely and utterly offline. We immediately wondered whether Keboola Connection – thanks to its integration with Dropbox and Google Drive – could make Vizable the ultimate on-the-go data visualization app.

(a little bit of frantic testing later)

Yes! We can easily schedule pushing data to the iPad using our existing integrations. We didn’t have to write a single line of code, and already during the conference we were playing with #data15 mentions we’d pulled in through Keboola Connection, with fresh data automatically pushed to the iPad every 30 minutes.

We eagerly shared our success with the Vizable team and started showing conference attendees and members of the Tableau team just how we’d made it all happen! It was great to receive a string of visits from the whole Vizable crew all the way up to Dave Story, VP of Mobile and Strategic Growth, and Chris Stolte, the Chief Development Officer. What a thrilling way to educate the Tableau folks on all the cool stuff Keboola does with their tool and for their customers.

Get in touch with us if you want to know more!

Agencies – get rid of pivot tables!

During my midnight-oil hours, rummaging through our internal systems, I came across the Zendesk tickets that our data analysts close for one of our clients – the H1.cz agency (part of GroupM).

At H1.cz they have created a report in GoodData called “non-active campaigns”. It contains one metric, five attributes (date, client, etc.) and four filters (time, client’s agency, etc.). It sounds super simple, but let’s take a closer look.



What it does is give you back a table that is the wet dream of any and all agencies out there. You can see “anything” across all of the advertising channels. I mean “anything.” In this particular case, they’ve created a report of non-active campaigns. Over time, this has become a very good example of an output that is very hard or impossible to achieve in tools like Tableau, SAP, Chartio, Periscope.io or RJMetrics. The rock & roll of a multi-dimensional BI system! You need to live it to believe it and to actually understand it.

Below is the data model (unreadable on purpose); the yellow ovals are the things on top of which you count, and you can see them in the context of the green ones:


Karel Semerák from H1.cz prepared this report. I bet he has no clue what a mega-machine he set in motion to actually produce it. Based on the physical data model, the metric definitions and the report context, GoodData generated 460 lines of SQL in the data warehouse that propels the system.

Just imagine a real person trying to build this report by hand (totally ignoring the incomprehensible amount of data). They would have to do lots of small tasks (look inside AdWords, find the active clients, count the number of their campaigns, compare against CRM data for paid invoices, create a temporary pivot table, etc.), and every little task could be represented by a rectangle in this picture:


It all comes to almost 90 distinct tasks, each taking from a minute to three days when done by hand. Try to explain this workflow to a Teradata consultant and you will spend a week just explaining what you want; try it with an IBM Cognos expert and… well, you get the picture.
And one more thing: with GoodData you do it yourself, so you don’t have to wait another month for the expert, nor pay a five-figure sum for one report.

Well played, GoodData & multi-dimensional BI! 

But for a moment, let’s forget about this one report. H1.cz has already prepared over 400 such reports. Try to produce that in Excel and you had better have a horde of MS Excel devotees who work as hard as robots and are as precise as robots. The last time I saw something like that was years back at OMD: a mega-office where all of the people were producing pivot tables.

Speaking of robots: if you are interested in the probability with which “data AI robots” will replace your job, take a look here.


Karel Semerák from H1.cz can stay cool, though. He can think about the data and the context instead of spending time on tasks where robots will always be better – cognitive skills and context are the one area where robots will take some time to improve.

So next time your P&L starts knocking on your door, think about giving your people the chance to use their heads creatively and leave all the heavy lifting to robots. People aren’t the best at copy-and-paste or sorting through AdWords reports, but they are great at creative thinking, and that is what you need to win over your competition.

GoodData XAE: The BI Game-Changer (3rd part)

For previous part (2/3), continue here


Discovering the value you can’t see.

Creating a query language is the most complicated task to be solved in BI. It’s not about storing big data, nor about processing it, nor about drawing graphs or providing an API for smooth cooperation with clients. You cannot buy a query language, nor program one in a month.

If the query language gets too complicated, the customer won’t manage to work with it. If it gets too simplistic, the customer won’t manage to work with it the way they need to. GoodData has a simple language for expressing arbitrarily complicated questions about the data. At the same time, it has machinery that helps it apply that language to any BI project (or logical data model), however complex. In GoodData’s case it is – as already mentioned – MAQL/AQE that is, in my view, irreplaceable. Furthermore, guys from Prague and Brno – Tomáš Janoušek, David Kubečka and Tomáš Jirotka – have extended AQE with a set of mathematical proofs (complicated algebra) that allows quick tests of whether new functions in AQE hold for any type of logical model. That’s how GoodData makes sure that the translations between (MAQL) metrics and the SQL in the underlying databases are correct. AQE then helps a common user overcome the chasm that separates them from low-level scripting.

UPDATE, November 17, 2013: MAQL is a query language that is translated by the MAQL interpreter (formerly known as the QT, or “Query Tree”, engine) into a tree of queries based on the logical data model (LDM). These queries are in effect definitions of “star joins”, from which the Star Join Generator (SJG) creates the actual SQL queries in the DB backend according to the physical data model (PDM, which lies below the LDM). The whole thing was originally created by Michal Dovrtěl and Hynek Vychodil. The new implementation of AQE then placed all of this on the solid mathematical basis of a ROLAP algebra (similar to relational algebra).

After weeks of persuading and, yes, bribes, I managed to obtain lightly censored examples of the queries AQE creates from metrics I wrote for this purpose. I guess this is the first time anyone has actually published this…

For comparison, I used the data model from the Report Master course in Keboola Academy and built this report from it:

The right Y-axis of the graph shows how many contracts I have closed in Afghanistan, Albania, Algeria and American Samoa in the last 13 months. On the left Y-axis, the blue line shows how much regular income my salespeople have brought in, and the green line indicates the median sales in a given month (the inputs are de facto identical to the table from Part 1 of this series).

The graph then shows me three metrics (as per the legend below the graph):

  • "# IDs” = SELECT COUNT(ID (Orders)) – counts the number of components.

  • “Avg Employee” = SELECT AVG(revenue per employee) – counts the mean of (auxiliary) metrics counting the sum of turnover to salesperson.

  • “Median Employee” = SELECT MEDIAN(revenue per employee) – counts the median of (auxiliary) metrics counting the sum of turnover of salespeople.

and the auxiliary metrics:

  • "revenue per employee” = SELECT SUM(totalPrice (Orders)) BY employeeName (Employees) – counts the values of components (some sales) at the level of salesperson.

For the most part, everything explains itself – except maybe “BY”, which states that the money “totalPrice (Orders)” is counted per salesperson rather than chaotically within itself. I dare say that anyone who is willing and tries MAQL even a little bit is going to learn it (or, for that matter, we can teach it to you at Keboola Academy any time ☺).

And now the most important thing... see below how AQE translates this into SQL:
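To give at least a schematic idea of its shape, here is a simplified illustration with invented table names – not the actual AQE output, which runs to hundreds of lines:

    -- illustrative star-join cascade for the "Avg/Median Employee" metrics:
    -- the inner query materializes the auxiliary "revenue per employee"
    -- metric, the outer one aggregates it per month
    SELECT rev.month,
           AVG(rev.revenue_per_employee)    AS avg_employee,
           MEDIAN(rev.revenue_per_employee) AS median_employee
    FROM (
        SELECT d.month, f.emp_key,
               SUM(f.total_price) AS revenue_per_employee
        FROM f_orders f
        JOIN d_date d ON d.date_key = f.date_key
        GROUP BY d.month, f.emp_key
    ) rev
    GROUP BY rev.month;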

With a little bit of exaggeration, we can say that creating my report is actually quite difficult – but thanks to AQE, it does not bother me at all.


If these three hypotheses are valid:

  1. If GoodData doesn’t earn me a bunch of money, I won’t use it.
  2. I will earn a bunch of money, but only with a BI project built to suit MY exact needs.
  3. BI built to suit MY exact needs is a complex matter that can only be managed with AQE.

… then the basis of the success of GoodData is AQE.

A footnote: the aforementioned MAQL metrics are simple examples. Sometimes it is necessary to build metrics so complicated that it’s almost impossible to imagine what must happen to the underlying data. Here is an example of a metric from one project where the analytics is built on unstructured texts; the metric counts conversation topics currently handled by moderators:

Lukáš Křečan once blogged (in Czech) that people are the greatest competitive advantage of GoodData.

Translation: “Our biggest competitive advantage is not a unique technology that no one else has. The main thing is people. ”

People are the foundation. We cannot do this without them; it is they who create the one-and-only atmosphere in which unique things are born. However, both are replaceable. The biggest competitive advantage of GoodData (as well as its intellectual property) is AQE. If we didn’t have it, the user would have to click reports together in a closed UI, which would take away the essential flexibility. Without AQE, GoodData would rank alongside Tableau, Bime, Birst and the others. It would become basically uninteresting and would have to compete head-on with firms who build their own UI on top of “Redshifts”.

AQE is an unrepeatable opportunity to get ahead of competitors, who from then on can only lose ground. No one else is able to implement new functions in a product over arbitrary data in arbitrary dimensions while formally proving and testing the validity of the implementation.

The line between the false impression that “this cool dashboard is very beneficial for me” and the real potential you can dig out of the data is very thin… its name is customization: an arbitrary model over arbitrary data with arbitrary calculations on top. You could call it an extreme. However, without the ability to compute, say, the natural logarithm of a ratio of figures from two time periods across many dimensions, you cannot become a star in the world of analytics. AQE is a game changer in the field of BI, and thanks to it, GoodData is redefining the rules of the game. Today a general root, tomorrow K-means… ☺

Howgh!

GoodData XAE: The BI Game-Changer (2nd part)

For previous part (1/3), continue here


An honest look at your data

Moving forward with our previous example: uploading all of the data sources we use internally (from one side of the pond to the other) into an LDM makes each piece of information easily accessible in GoodData – that’s 18 datasets and 4 date dimensions.

On top of this model, we can now build dashboards in which we watch how effective we are, compare months with one another, compare people and different kinds of jobs, look at costs, profits and so on.

Everything in our dashboard therefore suits our needs exactly. No one dictated to us how the program should work... this freedom is crucial for us. Thanks to it, we can build anything we want in GoodData – only our own abilities determine whether we succeed and make the customer satisfied.

What’s a little bit tricky is that a dashboard like this can be built in almost anything. For now, let’s focus on dashboards from KlipFolio. They are good; however, they have one substantial “but” – all the visual components are objects that load information out of rigid, predefined datasets. Someone designed those datasets exactly for the needs of the dashboard, and there is no way to tweak them – to take two numbers out of two base tables and watch their quotient over time. A month-to-date of that quotient can be forgotten immediately… and don’t even think about situations with “many-to-many” relationships. The great advantage of these BI products (they call themselves BI, but we know the truth) is that they are attractive and pandering. However, one should not assume at the outset that one has bought a diamond when in actuality it cannot do much more than Excel. (Just ask any woman her thoughts on cubic zirconia and you’ll see the same result.)
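For the record, the kind of thing we mean – a quotient of two numbers living in two different base tables, tracked over time – is a one-liner once a real query layer sits underneath. Sketched here in plain SQL with invented table names:

    -- cost-to-revenue ratio per month from two separate base tables
    SELECT r.month,
           c.total_cost / r.total_revenue AS cost_to_revenue
    FROM (SELECT month, SUM(amount) AS total_revenue
          FROM revenues GROUP BY month) r
    JOIN (SELECT month, SUM(amount) AS total_cost
          FROM costs GROUP BY month) c
      ON c.month = r.month;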

Why is the world flooded with products that play on a little playground whose walls are plastered with cool visuals? I don’t know. What I do know is that people are riding the “Cool, BigData analytics!” wave and are hungry for anything that looks even a little like a report. Theme analytics can be done in a few days – transforming transactions and computing “customer lifetime value” is easy, until everyone starts telling you their individual demands.

No one in the world except GoodData has the ability to run analytics projects that are 100% free in their foundation (the data model) and to let people do anything they want in those projects without having to be “low-level” data analysts and/or programmers. Bang!

So how does GoodData manage to do it?

Everyone is used to adding up column “A” by entering the formula “=SUM(A:A)”. In GoodData, you add up column “A” by entering the formula “SELECT SUM(A)”. The language used to write all these formulas in GoodData is called MAQL – Multi-Dimensional Analytical Query Language. It sounds terrifying, but everyone has managed to learn it – even Pavel Hacker has a Report Master diploma from Keboola Academy!

If you look back at my data model from our internal projects, you might say you want the average number of hours from one side of the data model, filtered by the type of operation, grouped by project description and client name, and only for operations that took place on weekend afternoons. The metric will look something like “SELECT AVG(hours_entries) WHERE name(task) = cleaning”. The multi-dimensionality of this language lies in the fact that you don’t have to deal with questions such as: which dimension is the name of the task in? What relation does it hold to the number of worked hours? And, furthermore, what relation does it hold to the name of the client? GoodData (or rather the relations in the logical model that we design for our client) solves all of that for you.

So, getting straight to the point: if I design a (denormalized) Excel table in which you find everything comfortably put together, no one reading this will have trouble computing over it. If instead we give you data divided into dimensions (and those dimensions will often come from other data sources – just like the outputs from our Czech and Canadian accounting systems), it becomes much more complicated to process (most likely you will start hacking away in SQL like mad). Since the world cannot be described in one table (or maybe it can – key-value pairs… but you cannot work with those very much), the view across many dimensions is essential. Without it, you are just doing a little home arithmetic ☺.

Do I still have your attention? Now is almost the time to say “wow”, because if you like to dig around in data, you are probably over the moon about the situation described above by now ☺.

And to the Finale...

Creating a query language is the most complicated task to be solved in BI. GoodData, on the other hand, uses a simple yet effective language to mitigate these “complications” and express the questions you have about your data. Part 3 of our series will dive deeper into this language, known as MAQL, and its ability to easily derive the insights hidden in your data.


For final part (3/3), continue here