Why Avast Hasn’t Migrated into Full Cloud DWH

How breaking up with Snowflake.net is like breaking up with the girl you love


At the beginning of May, I got a WhatsApp message from Eda Kucera:

“Cheers, how much would it cost to have Peta in Snowflake? Eda”

There are companies that rely only on “slideware” presentations. Other companies are afraid to open the door to the unknown without having their results guaranteed. Avast is neither. I am glad I can share with you this authentic description of Avast’s effort to balance a low-level benchmark, a fundamental shift in their employees’ thinking, and the no-nonsense financial aspect of it all.

Let’s get back to May. Just minutes after I received Eda’s WhatsApp message, almost six months of deep testing began in our own Keboola instance of Snowflake. Avast tested Snowflake with their own data. (At this point I handed it over to them; the rest was entirely in their hands.)

They dumped approximately 1.5TB a day from Apache Kafka into Keboola’s Snowflake environment, and assessed the speed of the whole process, along with its other uses and its costs.
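The post doesn’t go into the mechanics of the load itself (Keboola handled that part), but for the curious, a staged-file load into Snowflake typically boils down to something like this minimal sketch; the table, stage and path names here are made up:

    -- landing table with a single VARIANT column for raw JSON events
    CREATE TABLE IF NOT EXISTS raw_events (payload VARIANT);

    -- pull staged files into the table; stage name and path are hypothetical
    COPY INTO raw_events
      FROM @kafka_dump_stage/2016/05/
      FILE_FORMAT = (TYPE = 'JSON')
      ON_ERROR = 'CONTINUE';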


With a heavy heart, I deleted Avast’s environment within our Snowflake on October 13. Eda and Pavel Chocholous then prepared the following “post mortem”:

Pavel’s Feedback on Their Snowflake Testing

“It’s like breaking up with your girl…”

This sentence sums it all up. Our last phone call was not a happy one. It did not work out. Avast will not migrate to Snowflake. We would love to, but it can’t be done at this very moment. But I’m jumping ahead. Let’s go back to the beginning.

The first time we saw Snowflake was probably just before the DataHackathon in Hradec Kralove. It surely looked like a wet dream for anyone managing any BI infrastructure: completely unique features like cloning the whole DWH within minutes, linear scalability while running, “undrop table”, “select … at point in time”, etc.
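For readers who haven’t seen Snowflake, those features map to ordinary SQL statements. A quick illustration (the object names are invented, not Avast’s):

    CREATE DATABASE dwh_clone CLONE dwh;      -- zero-copy clone of a whole database in minutes
    UNDROP TABLE dwh.public.events;           -- bring back a table dropped by mistake
    SELECT COUNT(*)                           -- point-in-time ("time travel") query
    FROM dwh.public.events
      AT (TIMESTAMP => '2016-10-01 00:00:00'::TIMESTAMP_TZ);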


How well did it work for us? The beginning was not so rosy, but after some time, it was obvious that the problem was on our side. Our data was filled with things like “null” as a text value, and as the dataset was fairly “thin”, that had a crushing impact. See my mental notes after the first couple of days:

“Long story short: overpromised. 4 billion is too much for it. The query has been parsing that JSON for hours, saving the data as a copy. I’ve already increased the size twice. I’m wondering if it will ever finish. The goal was to measure how much space it will take when flattened; JSON is several times larger than Avro/Parquet…”
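To give a sense of what “parsing that JSON” means in Snowflake terms, here is a minimal sketch of the kind of flattening involved; the table and field names are hypothetical, not Avast’s actual schema:

    -- explode a JSON array inside a VARIANT column into relational rows
    CREATE TABLE events_flat AS
    SELECT
      v.payload:device_id::STRING  AS device_id,
      v.payload:ts::TIMESTAMP_NTZ  AS event_time,
      f.value:name::STRING         AS item_name
    FROM raw_events v,
         LATERAL FLATTEN(INPUT => v.payload:items) f;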


Let me add that our data wasn’t the only problem. I also didn’t know that scaling while running a query affects only subsequently run queries, not the currently running one. So I was massively overcharging my credit card by keeping a mega-instance of Snowflake ready in the background while my query was still running on the smallest possible node. Well, you learn from your mistakes :). This might be the one thing I expected to “be there”, but I’m a spoiled brat. It really was too good to be true.
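For reference, resizing a warehouse is a one-liner, and the new size only applies to queries that start after the change; a sketch with an invented warehouse name:

    ALTER WAREHOUSE test_wh SET WAREHOUSE_SIZE = 'XLARGE';  -- helps the next query, not the running one
    -- scale back down (and auto-suspend) once the heavy lifting is done,
    -- so credits aren't burned by an idle mega-instance
    ALTER WAREHOUSE test_wh SET WAREHOUSE_SIZE = 'SMALL' AUTO_SUSPEND = 300;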

Okay, after ironing out all the rookie errors and bugs, here is a comparison of one month of real JSON format data:

(Data size for one month of data: 3.7 TB as Cloudera Parquet files in Hadoop vs. 4.2 TB in Snowflake.)

Tabulated results… You know, it is hard to convey what our datasets look like and how complicated the queries against them are. The main thing is that benchmarks really suck :). I personally find the overall observations much more interesting:
  • Data in Snowflake is roughly the same size as in our Hadoop cluster, though I would suggest expecting a 10-20% difference.

  • Performance: we didn’t find any blockers.

  • Security/roles/privileges: SNFLK is much more mature than the Hadoop platform, yet it cannot be integrated with our on-premise LDAP.

  • Stability: SNFLK is far more stable than Hadoop. We haven’t encountered a single error/warning/outage so far. Working with Snowflake is nearly the opposite of Hive/Impala, where cryptic and misleading error messages are part of the ecosystem culture ;).

  • The concept of caching in SNFLK cannot be fully tested yet, but we have confirmed that it affects performance in a pleasant yet somewhat unpredictable way.

  • Resource governance in SNFLK is a mature feature: beast-type queries are queued behind the active ones while small ones sneak through, etc.

  • The architecture of separate ‘computing nodes’ can easily stop inter-team collisions (see the sketch after this list). It sounds like marketing bullshit, but indeed, not all teams love each other and are willing to share resources.

  • SNFLK can consume data from most cloud and on-premise services (Kafka, RabbitMQ, flat files, ODBC, JDBC; practically any source can be pushed there). Its DWH-as-a-service architecture is unique and compelling (Redshift/Google BigQuery/Greenplum could possibly reach this state in the near future).

  • Migration of 500+ TB of data? That’s another story, and one of the points that undermines our willingness to adopt Snowflake.

  • SNFLK provides limited partitioning abilities; partitioning could bring even more performance once enabled at full scale.

  • SNFLK would allow platform abuse with all of its ‘create database as a copy’, ‘create warehouse as a copy’ and ‘pay more, perform more’ options, and costs can grow through the roof. Hadoop is a bit harder to scale, which somehow guarantees only reasonable upgrades ;).

  • SNFLK can be easily integrated into any scheduler. Its command line client is the best one I’ve seen in the last couple of years.
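The inter-team isolation mentioned above is just a matter of giving each team its own virtual warehouse; a minimal sketch with invented warehouse and role names:

    -- each team gets its own compute, sized and billed independently
    CREATE WAREHOUSE analytics_wh WITH WAREHOUSE_SIZE = 'LARGE'  AUTO_SUSPEND = 300 AUTO_RESUME = TRUE;
    CREATE WAREHOUSE etl_wh       WITH WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE;

    GRANT USAGE ON WAREHOUSE analytics_wh TO ROLE analytics_team;
    GRANT USAGE ON WAREHOUSE etl_wh       TO ROLE etl_team;
    -- a monster query on etl_wh never queues behind (or slows down) the analysts' work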

Notes from Eda

“If we did not have Jumpshot in the house, I would throw everything into Snowflake…”

If I were to build a 100-200 TB Hadoop cluster from scratch, I would definitely start with Snowflake… Today, however, we would have to pour everything into it, and that is really hard to do while you’re fully on-premise… It would be a huge step forward for us. We would become a full-scale cloud company. That would be amazing!

If I had to pay the people in charge of Hadoop US wages instead of Czech wages, I would get Snowflake right away. That’s a no brainer #ROI.

Unfortunately, we will not go for it right now. Migrating everything is just too expensive for us at the moment and using Snowflake only partially just doesn’t make sense.


Our decision was also affected by our strong integration with Spark; we’ve been using our Hadoop cluster as compute nodes for it. In SNFLK’s case, this setup would mean pushing our data out of SNFLK into the EC2 instances where the Spark jobs would be running. That would cost an additional 20-30% (the data would stay inside AWS, but the EC2 instances cost something as well). I know Snowflake is currently working on a solution for this setup, but I haven’t found out what it is.

In our last phone call with SNFLK, we learned that storage prices were going down again. So I assume that we will meet within a reasonable time frame and reopen our discussion. (In November, Snowflake started privately testing their EU datacenter and will open it publicly in January 2017.) In the meantime, we’ll have an on-demand account for practicing :).

Keboola’s Solutions for Agencies

We would like to show you how some of our clients redefined their businesses by routinely using data in their daily activities. Despite the fact that each company’s situation is different, we hope to give you some ideas to explore in your own business.

If you work in a service agency, as a customer care manager or in a similar position, you are all about efficiency. Any idle time spent on non-revenue-generating activities means wasted time and manpower and, more importantly, a net loss for your organization.

To ensure optimal operation, you may be asking yourself questions like these:

  • Is your team correctly prioritizing clients with a higher profit margin?

  • How are individual team members performing compared to each other?

  • Are team members doing the work they are best suited for?

Earthworms

Sometimes the simplest graphs show the most relevant information. The graph below (generally known as a “bullet chart”) has been nicknamed the “earthworm” by our clients. Provided by one of our clients, this particular graph eloquently shows an agent’s overall performance, as well as how it compares to the team average.

As a manager, imagine having one of these for each of your agents. In mere seconds you can distinguish your top performers from your poor ones and take the actions needed to improve their behavior.

Customer Care

Diving deeper into individual performance, you can then examine why each agent is performing the way they are. As you look at the next client example, you will see that this series of earthworms tracks agent performance in different areas.

Keboola’s Marketing Solutions

Even though we understand that every company and each department within it have very different BI needs, we also believe in sharing inspiration from our clients about how they make relevant business decisions using data in their daily routines. You might find this helpful in shaping your own solution.

When planning a new product launch and deciding where to spend your marketing budget, you probably have questions regarding the impact of your campaign:

  • How long will it take to turn marketing leads into faithful customers?

  • Did I target the correct customer group?

  • Do my potential customers respond to the advertisement as expected?

  • What is the return on investment for my campaign, based on different target groups and products?

Check out similar questions our clients have asked. Combine them with an analytical mindset, and create the reports your company needs to make better marketing decisions and generate a higher return on investment.

Roman Novacek from Gorilla Mobile says: “When looking at our marketing model, everything seemed to be going according to plan. But when we looked deeper into what we thought were well-performing campaigns, we found out that while some ads and channels were performing extraordinarily well, others were dragging the overall average down, leading to mediocre results.”


(Figure: sales funnel)


McPen: Built and Run on Data

McPen is a European chain distributor of stationery goods. They are one of the first small to mid-sized retailers who use a data-driven approach to business and enable equal access to data to all of their employees.

Initial situation

Embarking on their data-driven business journey, McPen realized that to excel in the stationery goods space, they would need to create a competitive advantage with a unique operational management system. In order to identify retail solutions specific to their business, they wanted to combine many previously unconnected data sources, and upgrade and speed up their reporting process.

Where Keboola came in

Assisted by the Ascoria team, our partner, McPen’s CEO Milan Petr configured the new system from scratch and without the help of a single developer. McPen began to pull data from sources like their POS, Frames and other retail sources, allowing everybody in the company to use this compiled and easily accessible data to find solutions to their real retail problems.


Focusing on lean operations and adding new features, Milan created a system that benefitted the entire organization. He knew that to effectively manage shifts in business, he had to involve every part of the organization in making decisions based on data. Leading by example, he developed and studied the system in detail to understand its impact on daily operations. He then provided access and support directly to the people on the floor to empower them to make necessary strategic decisions and improve their daily results.



Surprising benefits and results

The examined data showed that in order to maximize profitability, McPen needed to upsell customers. And while their biggest income comes from customers who spend between 200 and 500 CZK (around 8 to 20 USD), it is the 42% of all McPen customers spending up to 50 CZK (around 2 USD) who have the biggest upsell potential.
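For the analytically inclined, the spend-band split behind these numbers can be reproduced with a simple aggregation; a hypothetical sketch (the receipts table and its columns are invented, not McPen’s actual schema):

    SELECT
      CASE
        WHEN total_czk <= 50  THEN 'up to 50 CZK'
        WHEN total_czk <= 200 THEN '50-200 CZK'
        WHEN total_czk <= 500 THEN '200-500 CZK'
        ELSE 'over 500 CZK'
      END                                                  AS spend_band,
      COUNT(*)                                             AS receipts,
      ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (), 1)   AS pct_of_receipts,
      SUM(total_czk)                                       AS revenue_czk
    FROM pos_receipts
    GROUP BY 1
    ORDER BY revenue_czk DESC;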

Please hold, your call is important to us

We’ve recently experienced two fairly large system problems that have affected approximately 35% of our clients.

The first issue took 50 minutes to resolve and the other approximately 10 hours. The root cause in both cases was the way we handled the provisioning of ad-hoc sandboxes on top of our SnowflakeDB (a few words about “how we started w/ them”).
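For context, ad-hoc sandboxes on Snowflake are typically provisioned through zero-copy cloning; this is a generic sketch of the pattern, not necessarily Keboola’s exact mechanism, and the names are invented:

    CREATE DATABASE sandbox_user_42 CLONE production_db;             -- instant, storage-free copy
    GRANT USAGE ON DATABASE sandbox_user_42 TO ROLE sandbox_user_42; -- hand it to the user
    -- ...and tear it down again once the sandbox expires
    DROP DATABASE sandbox_user_42;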

We managed to find a workaround for the first problem, but the second one was out of our hands. All we could do was file a support ticket with Snowflake and wait. Our communication channels were flooded with questions from our clients and there was nothing we could do. Pretty close to what you would call a worst-case scenario! Fire! Panic in Keboola!

My first thoughts were: “Sh..t! If we ran the whole system on our own infrastructure, we could do something now. We could try to solve the issue and not have to just wait…”

But, we were forced to just wait and rely on Snowflake. This is the account of what happened since:

New dose of steroids in the Keboola backend

More than two years after we announced support for Amazon Redshift in Keboola Connection, it’s about friggin’ time to bring something new to the table. Something that will propel us further along. Voila, welcome Snowflake.

About 10 months ago we presented Snowflake at a meetup hosted at the GoodData office for the first time.

Today, we use Snowflake both behind the Storage API (it is now the standard backend for our data storage) and the Transformations Engine (you can utilize the power of Snowflake for your ETL-type processes). Snowflake’s SQL documentation can be found here.
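To make “ETL-type processes” concrete: a transformation in this setup is ultimately just SQL executed inside Snowflake. A trivial, hypothetical example (the input and output table names are made up):

    -- aggregate raw orders into a daily summary table
    CREATE TABLE out_orders_daily AS
    SELECT
      DATE_TRUNC('DAY', created_at) AS order_date,
      COUNT(*)                      AS orders,
      SUM(amount_usd)               AS revenue_usd
    FROM in_orders
    GROUP BY 1;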

What on Earth is Snowflake?

It’s a new database, built from scratch to run in the cloud. Something different from a legacy vendor taking an old DB and hosting it for you (MSSQL on Azure, Oracle in Rackspace or PostgreSQL in AWS).

Guiding project requirements for analytics

In a recent post, we started scoping our executive-level dashboards and reporting project by mapping out who the primary consumers of the data will be, what their top priorities and challenges are, which data we need, and what we are trying to measure. It might seem like we are ready to start evaluating vendors and building out the project, but we still have a few more requirements to gather.

What data can we exclude?

With our initial focus around sales analytics, the secondary data we would want to include (NetProspex, Marketo and ToutApp) all integrates fairly seamlessly with Salesforce, so it won't require as much effort on the data prep side. If we pivot over to our marketing function, however, things get a bit murkier. On the low end this could mean a dozen or so data sources. But what about our social channels, Google Ads, etc., as well as various spreadsheets? In more and more instances, particularly for a team managing multiple brands or channels, the number of potential data sources can easily shoot into the dozens.

Although knowing what data we should include is important, what data can we exclude? Unlike the data lake philosophy (Forbes: Why Data Lakes Are Evil), when we are creating operational-level reporting, it's important to focus on creating value, not to overcomplicate our project with additional data sources that don't actually yield additional value.

Who's going to manage it?

Just as critical to the project as the what and how: who’s going to be managing it? What skills do we have at our disposal and how many hours can we allocate for the initial setup, as well as for ongoing maintenance and change requests? Will this project be managed by IT, our marketing analytics team, or both? Perhaps IT will manage data warehousing and data integration, and the analyst will focus on capturing end-user requirements and creating the dashboards and reports. Depending on who's involved, the functionality of the tools and the languages used will vary. As mentioned in a recent CMS Wire post, Buy and Build Your Way to a Modern Business Analytics Platform, it's important to take an analytical inventory of the skills we have, as well as the tools and resources we already have that we may be able to take advantage of.

                                                    

When Salesforce Met Keboola: Why Is This So Great?


How can I get more out of my Salesforce data?

Along with being the world’s #1 CRM, Salesforce provides an end-to-end platform to connect with your customers, including Marketing Cloud to personalize experiences across email, mobile, social and the web; Service Cloud to support customer success; Community Cloud to connect customers, partners and employees; and Wave Analytics, designed to unlock the data within.

After going through many Salesforce implementations, I’ve found that although companies store their primary customer data there, there is a big opportunity to enrich it further by bringing in related data stored in other systems, such as invoices in an ERP or contracts in a dedicated DMS. For example, I’ve seen clients run into the issue of having inconsistent data in multiple source systems when a customer changes their billing address. In a nutshell, Salesforce makes it easy to report on the data stored within it, but it can’t provide a complete picture of the customer unless we broaden our view.

Recommendation Engine in 27 lines of (SQL) code

In the data preparation space, the focus very frequently lies on BI as the ultimate destination of data. But more and more often, we see how data enrichment can loop straight back into the primary systems and processes.

Take recommendations. Just the basic type (“customers who bought this also bought…”). That, in its simplest form, is an outcome of basket analysis.

We recently had a customer who asked for a basic recommendation as part of a proof of concept to see whether Keboola Connection is the right fit for them. The dataset we got to work with came from a CRM system and contained a few thousand anonymized rows (4,600-ish, actually) of won opportunities, which effectively represented product-customer relations (which customers had purchased which products). So, pretty much ready-to-go data for the Basket Analysis app, which has been waiting for just this opportunity in the Keboola App Store. It sounded like a good challenge: how can we turn this into a basic recommendation engine?
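This isn’t the actual 27-line implementation from the proof of concept, but the co-occurrence idea at its core fits in a few lines of SQL; a minimal sketch with hypothetical table and column names:

    -- "customers who bought this also bought": top 3 co-purchased products per product
    WITH pairs AS (
      SELECT a.product_id                  AS product_a,
             b.product_id                  AS product_b,
             COUNT(DISTINCT a.customer_id) AS common_customers
      FROM won_opportunities a
      JOIN won_opportunities b
        ON  a.customer_id = b.customer_id
        AND a.product_id <> b.product_id
      GROUP BY 1, 2
    )
    SELECT product_a, product_b, common_customers
    FROM pairs
    QUALIFY ROW_NUMBER() OVER (PARTITION BY product_a ORDER BY common_customers DESC) <= 3
    ORDER BY product_a, common_customers DESC;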