CTO Anoop R.’s Q&A at JeffConf, Milan, Italy

Our CTO, Anoop Ramachandran, attended JeffConf’s first serverless event in Milan, Italy, on Friday 29th September. He shared insights about the Google Cloud Platform (GCP) and the solutions it can bring to SMEs like MainAd.

The use of serverless technologies opened up many new paths for us before, during, and after the development of our proprietary tech, Logico. At JeffConf, Anoop answered questions on how we harnessed big data through serverless technologies and the difficulties the tech & development team faced. Read the Q&A below:

  1. Who are MainAd & what do you do?

MainAd is an international, performance-driven ad tech company. Founded in 2007, we are active in 80+ markets and have 8 offices around the globe. Our flagship proprietary technology, ‘Logico’, utilises machine learning (ML) to extend the benefits of bespoke programmatic campaigns to brands.

  2. Why did you choose to use GCP’s Serverless Technologies?

As an ad tech company, we serve millions of ads daily in various formats – static, dynamic, native, video – which are displayed on several thousand different websites. Our success depends on our ability to target the right user, at the right time, with a relevant message. To succeed, we must have an in-depth understanding of our clients, visitors, devices, websites, placements and so on. This can only be accomplished if we can:

a. Store huge amounts of data (big data),
b. Run real-time analysis on it, and
c. Produce an ad response in less than 100 ms.

And this is exactly what GCP’s serverless tools have enabled us to do.
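
To make the sub-100 ms constraint concrete, below is a minimal sketch of a bid-time profile lookup against Bigtable. The project, instance, table and row-key scheme (`user#<id>`) are hypothetical illustrations, not our production setup; only the client calls are standard google-cloud-bigtable usage.

```python
from google.cloud import bigtable

# Hypothetical names, for illustration only.
client = bigtable.Client(project="my-adtech-project")
instance = client.instance("bidding-profiles")
table = instance.table("user_profiles")

def lookup_profile(user_id: str) -> dict:
    """Fetch a user profile in a single low-latency point read."""
    row = table.read_row(f"user#{user_id}".encode("utf-8"))
    if row is None:
        return {}  # Unknown user: fall back to a generic ad response.
    # Flatten the newest cell of each column into a plain dict.
    return {
        f"{family}:{qualifier.decode()}": cells[0].value.decode()
        for family, columns in row.cells.items()
        for qualifier, cells in columns.items()
    }
```

A point read like this typically completes in single-digit milliseconds, leaving most of the 100 ms budget for auction logic and assembling the ad response.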

  3. So, what are the top benefits of using GCP’s Serverless Technologies for MainAd?

For us, the main benefits of GCP’s serverless technologies can be split into 3 categories:

Generic/Marketplace

  • GCP makes top-tier technology available to SMEs like us, which opens new horizons.
  • Building equivalent infrastructure independently would be economically and technically out of reach.
  • Hence, we can now compete with big players who outmatch us in budget, market share, and so on.

Cost

  • Fewer physical resources are needed; our network hardware team shrank by over 20%.
  • Through BigQuery, we can now operate in real time with response times of 100 ms (see the sketch after this list); going solo would have cost up to 4 times more, with no guaranteed results.
  • We can handle huge amounts of data (currently 500k requests per second) through Bigtable, at just a third of the cost of going solo.
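
As a flavour of how we use BigQuery, here is a minimal sketch of an aggregate query over bid logs. The project, dataset, table and column names are hypothetical; only the client calls are standard google-cloud-bigquery usage.

```python
from google.cloud import bigquery

client = bigquery.Client()  # Uses the default project and credentials.

# Hypothetical table: my-adtech-project.adtech.bid_logs(country, won, price_cpm, ts).
QUERY = """
    SELECT country, COUNT(*) AS bids, AVG(price_cpm) AS avg_cpm
    FROM `my-adtech-project.adtech.bid_logs`
    WHERE ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
      AND won = TRUE
    GROUP BY country
    ORDER BY bids DESC
"""

for row in client.query(QUERY).result():  # Blocks until the query finishes.
    print(f"{row.country}: {row.bids} wins, avg CPM {row.avg_cpm:.2f}")
```

Because BigQuery bills by data scanned rather than by provisioned servers, ad-hoc queries like this carry no fixed infrastructure cost, which is where much of the saving over a self-hosted warehouse comes from.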

Performance

  • Servers auto-scale by themselves, so there is less monitoring to do on our side.
  • The ability to delegate tasks to the system, for better-than-human performance, means less hassle for the tech team.
  • Reduced stress of hardware maintenance, so resources stay focused on our core business.
  • An efficient, centralised logging system gives greater control and monitoring of server performance, and hence peace of mind for the tech team (a brief sketch follows this list).
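
As an illustration of that last point, centralised logging on GCP can be driven from a few lines of code using the google-cloud-logging client; the log name, fields and filter below are hypothetical.

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()

# Write a structured entry to a hypothetical application log.
logger = client.logger("bidder-latency")
logger.log_struct({"endpoint": "/bid", "latency_ms": 87, "status": 200})

# Read recent entries back with a filter, e.g. to spot slow responses.
FILTER = 'logName:"bidder-latency" AND jsonPayload.latency_ms > 100'
for entry in client.list_entries(filter_=FILTER, max_results=10):
    print(entry.timestamp, entry.payload)
```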

  4. You said that your bidding servers need to be closer to the region of the AdX server. How do you guarantee the routing of the request?

We use the HTTPS Global Load Balancer, which internally redirects each request to the nearest possible server (zone), with auto-scaling enabled; with an effective reporting setup, you can do almost all of this without a physical presence in each region.

  5. What was your previous technology stack? Was it difficult for you and your team to switch to GCP?

Until two years ago, our servers were strategically placed around the world and we mostly used Microsoft’s technology stack. Switching from traditional databases to Bigtable was not very stressful; we received a lot of support from Google’s team.

  6. What kind of analysis do you perform on your data?

  • We process petabytes of data about users, customers and publishers.
  • We are highly focused on timing, down to the millisecond. We also run analyses at our Data Scientists’ request – BigQuery is perfect for that (see the sketch below).
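
For instance, an ad-hoc latency analysis of the kind our Data Scientists might request could look like the sketch below; the table and columns are hypothetical, and APPROX_QUANTILES is standard BigQuery SQL for fast percentile estimates.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical bid-log table with a latency_ms column and a ts timestamp.
QUERY = """
    SELECT
      exchange,
      APPROX_QUANTILES(latency_ms, 100)[OFFSET(50)] AS p50_ms,
      APPROX_QUANTILES(latency_ms, 100)[OFFSET(99)] AS p99_ms
    FROM `my-adtech-project.adtech.bid_logs`
    WHERE DATE(ts) = CURRENT_DATE()
    GROUP BY exchange
"""

for row in client.query(QUERY).result():
    print(f"{row.exchange}: p50={row.p50_ms} ms, p99={row.p99_ms} ms")
```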

  7. You talked about ML: could you give more details?

Both targeting and messaging are tailored to the specific opportunity, in real time. We developed ML models within Google’s framework to better serve our needs. Having started with classic ML models, we are now experimenting with deep learning.
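
The talk did not name a specific framework, so purely as an illustration, a minimal click-prediction model in TensorFlow (Google’s ML framework) might look like the sketch below; the features and labels are synthetic.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for bid-time features (device, placement, recency, ...).
X = np.random.rand(1000, 8).astype("float32")
y = (X[:, 0] + X[:, 3] > 1.0).astype("float32")  # Toy click labels.

# A small "classic" model: logistic regression as a single dense layer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

print("Predicted click probability:", float(model.predict(X[:1], verbose=0)[0][0]))
```

Swapping the single dense layer for a deeper stack is the natural step towards the deep learning experiments mentioned above.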

  8. Finally, can you share some tips & best practices for the community?

  • Minimise geographic distance as much as possible, e.g. for US-based queries, we use a US-hosted Bigtable instance.
  • Design your key storage so that a greater amount of data is accessible per read, e.g. a history of values.
  • Data partitioning (on Bigtable) helps reduce server costs and query times by optimising how queries are served (see the sketch below).
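
To illustrate the last two tips, below is a minimal sketch of a Bigtable row-key scheme that packs a value history behind prefix-scannable keys; the key layout and names are hypothetical, not a description of our production schema.

```python
import time
from google.cloud import bigtable
from google.cloud.bigtable.row_set import RowSet

client = bigtable.Client(project="my-adtech-project")
table = client.instance("bidding-profiles").table("user_events")

def history_key(user_id: str, ts=None) -> bytes:
    """Prefix = user, suffix = reversed timestamp so the newest rows sort first."""
    ts = ts if ts is not None else time.time()
    reversed_ts = 10**10 - int(ts)
    return f"user#{user_id}#{reversed_ts:010d}".encode()

# One prefix scan returns the most recent events for a single user,
# touching only that user's contiguous key range (cheap and fast).
row_set = RowSet()
row_set.add_row_range_from_keys(start_key=b"user#42#", end_key=b"user#42#\xff")
for row in table.read_rows(row_set=row_set, limit=20):
    print(row.row_key)
```

Because rows sharing a prefix are stored contiguously, one scan retrieves a whole history without scattering reads across the cluster, which keeps both cost and latency down.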