I really like conferences: they give me ideas about what’s next, what to learn, who to know, new whitepapers to read and products to check out. Conferences are as much about the technical talks as they are about the people you can meet. I was very lucky to go to this one thanks to Sighup, and with a friend, which made it even better.

dotConferences are a series of conferences started in 2012 with the goal of creating a TED-like technology event for the developer community. This was my second one - the first being dotGo last November - and I really like the format and the quality so far.

The best feature, in my opinion, is that they’re single track: there is no choosing what to see. This alleviates pressure on the attendees to choose correctly and forces organisers to find the best possible speakers.

This is a list of ideas and topics from dotScale 2018:

David Gageot, from Google, started the morning with a good introduction to Istio and the problems it solves, using Skaffold (which he contributes to) to deploy a microservice application to Kubernetes and watching Istio provide automatic tracing, metrics and general insight into the application’s behavior. For this presentation he wrote his own demo tool, which is super cool and which we all hope he’ll release very soon.

Preeti Vaidya, from Viacom, illustrated how important it is to choose a database based not on how comfortable developers are with a specific tool but on the structure of your data. She gave a 4-step guide to choosing which type of database to use: Operations, Application, Data Model, Data Store. Long story short: it’s important to understand what queries your application is going to perform on the data before choosing a database.

Damien Tournoud, founder of Platform.sh, gave an awesome talk on how monitoring is done inside his company. They built their own monitoring system from scratch, around a simple but unconventional idea: monitoring is about the business logic of the monitored system. They built a model of their infrastructure and systems, and from that model they generate alerts and monitoring metrics. This approach is interesting compared to the usual “just monitor everything”, because the hardest part of monitoring is creating meaningful and actionable alerts.
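To make the model-driven idea concrete, here is a minimal sketch of it in Python. Everything here (the `Service` class, the thresholds, the alert syntax) is invented for illustration and is not Platform.sh’s actual system; the point is only that alert rules are derived from a declarative model instead of being hand-written per host.

```python
# Model-driven monitoring sketch: alerts are generated from a declarative
# model of the infrastructure, not written by hand for each component.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    depends_on: list = field(default_factory=list)
    max_latency_ms: int = 200  # a business-level expectation, not a guess

def generate_alerts(services):
    """Turn the infrastructure model into concrete, actionable alert rules."""
    rules = []
    for svc in services:
        rules.append(f"ALERT {svc.name}_latency IF p99_latency_ms > {svc.max_latency_ms}")
        for dep in svc.depends_on:
            # The model says this service cannot function without its
            # dependency, so an unreachable dependency is alert-worthy.
            rules.append(f"ALERT {svc.name}_dep IF unreachable({dep})")
    return rules

model = [
    Service("api", depends_on=["db"], max_latency_ms=100),
    Service("db"),
]
for rule in generate_alerts(model):
    print(rule)
```

The nice property is that adding a service to the model automatically yields its alerts, so the alerting surface grows with the system instead of drifting away from it.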

After this there were a few lightning talks:

  • Enrico Signoretti, from OpenIO, described a new device in the storage industry called nano-nodes.
  • Dan, from Datadog, argued that FaaS platforms are not magic but rather just an event-driven architecture offered as a service.
  • Mike Freedman, from TimescaleDB, illustrated how most data is actually time-series data and how viewing it this way can help your business.

Willy Tarreau, creator of HAProxy, gave us a few pointers on how to introspect applications in production with HAProxy. HAProxy, like most load balancers, has a different viewpoint on your infrastructure than most other applications, which is why it’s important to have a clear understanding of the metrics behind it. A few things to keep in mind: watch queue lengths, use halog to analyze logs, sample your logs (logging only 5% of requests helps when request volume is very high), expect similar systems to show similar metrics, and remember that checking differences matters more than absolute values.
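The 5% sampling idea can be sketched in a few lines of Python. This is not HAProxy’s implementation (there sampling is a configuration concern); it just shows one reasonable way to do it, hashing a request ID so that a given request is deterministically either always or never logged, which keeps related log entries consistent across systems.

```python
# Deterministic request-log sampling at ~5%: hash the request id instead
# of rolling a random number, so the same request is sampled everywhere.
import zlib

def should_log(request_id: str, rate: float = 0.05) -> bool:
    # crc32 gives a roughly uniform value; keep the lowest `rate` fraction.
    return (zlib.crc32(request_id.encode()) % 10_000) < rate * 10_000

sampled = sum(should_log(f"req-{i}") for i in range(100_000))
print(f"sampled {sampled} of 100000 requests")
```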

Yaroslav Tkachenko, from Activision, explained how they manage data pipelines for Call of Duty games using Kafka, with a custom stream processor in front of it to normalize incoming data and a schema registry to define message formats. This last bit, I think, is the most important part: queues are the best way to decouple producers from consumers in a large infrastructure, but decoupling can get out of hand when consumers don’t know what kind of data is coming in. A schema registry helps because it creates a common definition of the structure of the messages being produced and consumed. Standardization of processes and API definitions is always key to scalability.
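A toy version of the schema-registry idea might look like this. Real deployments typically use something like Confluent’s Schema Registry with Avro; the topic name, schema shape and fields below are all invented, and the sketch only illustrates how a shared schema lets producers and consumers stay decoupled without losing agreement on the data.

```python
# Toy schema registry: producers validate against a schema keyed by
# (topic, version) before publishing, so consumers know what to expect.
import json

REGISTRY = {
    ("player-events", 1): {"required": ["player_id", "event", "ts"]},
}

def validate(topic, version, message):
    """Check a message against the registered schema and serialize it."""
    schema = REGISTRY[(topic, version)]
    missing = [f for f in schema["required"] if f not in message]
    if missing:
        raise ValueError(f"message missing fields: {missing}")
    return json.dumps(message)

payload = validate("player-events", 1,
                   {"player_id": 42, "event": "match_start", "ts": 1528000000})
print(payload)
```

A consumer reading `payload` can trust that the required fields are present, which is exactly the guarantee a bare queue does not give you.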

Next was Marc Shapiro's turn, explaining how sometimes neither strong consistency nor eventual consistency is enough to satisfy our application’s needs. As always in computer science, there must be a third option: just-right consistency. The idea is to have a system that is as available as possible and as consistent as necessary; achieving this requires a few concepts: CRDTs and causal consistency.
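For readers who have never met a CRDT, here is a sketch of one of the simplest: a grow-only counter (G-Counter). Each replica increments only its own slot, and merging takes the per-replica maximum, so replicas converge to the same value regardless of message order or duplication. This is my own illustrative Python, not code from the talk.

```python
# G-Counter CRDT sketch: per-replica counts, merge by element-wise max.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        # A replica may only increment its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Merge is commutative, associative and idempotent: max per slot.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # both replicas converge to 5
```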

Next up was Paul Dix, cofounder and CTO of InfluxData, giving us some insight into the Influx Data Platform 2.0. The talk, called “Everything at scale is hard”, described his and the company's journey through years of changing infrastructure paradigms, from VMs to containerised workloads and from monolith to microservices. The data platform he envisions is based on 3 different storage tiers - memory, local SSDs and object storage - glued together by a unified query language called FLUXLANG. This new “language for data” is a DSL meant to run on distributed systems and is a complete rewrite, moving past earlier SQL-like attempts. The two quotes that struck me most were “data has gravity” and the closing statement, “everything at scale is interesting”.

The last talk was given by Jeremy Edberg, formerly of Reddit and Netflix, describing how infrastructure data should be treated like any other data in a company. How can we derive velocity from movements, exactly like in physics 101? He designed a tool that is essentially an SQL interface over major cloud providers' infrastructure: imagine if your data scientists could query and analyze the changes and versions of your cloud infrastructure the same way they run customer-data queries.
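The “SQL over your infrastructure” idea is easy to sketch: load inventory snapshots into an ordinary SQL table and query them like any dataset. The table layout, instance IDs and dates below are invented for illustration and have nothing to do with Edberg’s actual tool.

```python
# Infrastructure-as-data sketch: cloud inventory snapshots in SQLite,
# queried like any business dataset (e.g. instances added per day).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE instances (
    id TEXT, type TEXT, region TEXT, snapshot_date TEXT)""")
conn.executemany("INSERT INTO instances VALUES (?, ?, ?, ?)", [
    ("i-1", "m5.large",  "us-east-1", "2018-06-01"),
    ("i-2", "m5.large",  "us-east-1", "2018-06-02"),
    ("i-3", "m5.xlarge", "eu-west-1", "2018-06-02"),
])

# A crude "velocity": how many instances appear in each daily snapshot.
for row in conn.execute(
        "SELECT snapshot_date, COUNT(*) FROM instances "
        "GROUP BY snapshot_date ORDER BY snapshot_date"):
    print(row)
```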

Overall the conference was amazing. It’s absolutely important for us to stay up to date and actively participate in, and contribute to, the community. Conferences are an especially good chance to step away from daily work-related problems and focus on higher-level concepts and ideas.

You can follow me on Twitter and ping me there if you have questions or want to quickly discuss any of the topics mentioned.
A huge thanks to Sighup. You should follow them as well, especially if you are interested in distributed systems and Kubernetes. We are hiring! Shoot us an email if you love automating things, challenging yourself with hard problems and are eager to constantly learn.

References: