Logs are everywhere. But they have gone through an interesting evolution over the years:

  • grep: This works well as long as you have a single instance to search. Once you need to SSH into many machines and piece together the results of multiple grep commands, the approach breaks down.
  • Splunk: Centralizing those logs and letting users search them with a piped query language, as Splunk does, is the logical next step. However, the more data you centralize, the slower searches become.
  • ELK: The answer to that slowness is full-text search. Elasticsearch, in combination with Logstash and Kibana (plus Beats), gave log search a major performance boost. But at what cost?
  • Loki: By reducing the scope and going back to a smart data structure combined with grep-style filtering, Loki can cut costs while still providing good performance (a minimal sketch of this idea follows the list).
  • Closing the gap: So what are the tradeoffs between these systems, and are they closing the gap between performance and cost?
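
To make the Loki point concrete, here is a minimal sketch of that "label index plus grep" idea against Loki's HTTP query API, assuming a hypothetical Loki instance on localhost:3100 and an application labelled job="myapp": the {job="myapp"} matcher uses Loki's small label index to select a handful of log streams, and the |= "error" line filter then greps the raw lines inside them.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
        "time"
    )

    func main() {
        // LogQL: the label matcher narrows the search via the label index;
        // the |= filter greps the selected streams line by line.
        // The job label and the Loki address are assumptions for this sketch.
        query := `{job="myapp"} |= "error"`

        params := url.Values{}
        params.Set("query", query)
        params.Set("start", fmt.Sprintf("%d", time.Now().Add(-time.Hour).UnixNano()))
        params.Set("end", fmt.Sprintf("%d", time.Now().UnixNano()))
        params.Set("limit", "100")

        resp, err := http.Get("http://localhost:3100/loki/api/v1/query_range?" + params.Encode())
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body)) // JSON response with the matching log streams
    }

Because only the labels are indexed, storage stays cheap; the brute-force part of the search is confined to the few streams the labels select.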

Join the discussion after the talk to find "the right amount" of features, cost, and speed.


Maschinenhaus
15.06.2021 19:50 – 20:30
Talk
Intermediate

Speakers