A short while ago, we accepted the invitation to host a hands-on infrastructure workshop for Innovation Labs. This is an innovation program dedicated to emerging Romanian entrepreneurs who wish to turn their startup ideas into MVPs.
The key to success in any application development lifecycle is to have as few discrepancies as possible between environments. Luckily, Kubernetes and Docker give you the tools you need to keep environments uniform. Nonetheless, it has always been challenging to create development environments that work on any operating system, be it Linux, Windows, or macOS. This short article will guide you through all the steps needed to create your own development environment with Vagrant and MicroK8s on your laptop or PC.
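As a taste of the setup, a Vagrantfile along these lines can bring up an Ubuntu VM and install MicroK8s inside it. This is a minimal sketch, not the article's exact configuration: the box name, resource sizes, and provisioning commands are assumptions you should adapt.

```ruby
# Sketch: provision an Ubuntu VM and install MicroK8s via snap.
# Box name, memory/CPU sizes, and inline commands are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096   # MicroK8s is more comfortable with a few GB of RAM
    vb.cpus = 2
  end
  config.vm.provision "shell", inline: <<-SHELL
    snap install microk8s --classic
    microk8s status --wait-ready
    usermod -a -G microk8s vagrant
  SHELL
end
```

A single `vagrant up` then gives every team member the same VM, regardless of the host operating system.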
When it comes to giving people from your organization access to your Kubernetes cluster, things can get a little tricky. Kubernetes does not come with a user management mechanism out of the box, so you can end up stuck with an admin certificate you must share with the developers. This, in turn, gives them access to all the resources in the cluster, which can punch holes in your security policy.
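One common way out, sketched below, is to issue each user their own certificate and bind it to a namespaced Role instead of handing out the admin credentials. The user, namespace, and role names here are hypothetical placeholders:

```yaml
# Sketch: give user "jane" read-only access to pods in the "dev" namespace.
# All names are hypothetical; adapt them to your cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: pod-reader-jane
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With a binding like this, a leaked developer credential exposes only one namespace's pods rather than the whole cluster.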
At CloudHero, we face challenging situations each day when helping our customers in their digitalization and automation journey. One such challenge was automating the process of cloning the production database and anonymizing the data for development use. Specifically, maintenance is usually done only on the production database, so the staging one has stale data and drifts further and further from production. Here, we are going to generalize the problem, so you can adapt these methods to your own use case.
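To make the idea concrete, here is a small Python sketch of one way to anonymize cloned records. The deterministic-hashing approach (the same real value always maps to the same fake value, preserving referential integrity across tables) is our assumption about what a typical pipeline needs, not necessarily the exact method used in production.

```python
import hashlib

def anonymize_email(email: str, salt: str = "s3cret") -> str:
    """Replace an email with a deterministic, non-reversible placeholder.

    Hashing with a fixed salt keeps referential integrity: the same
    real address always maps to the same fake one across all tables.
    """
    digest = hashlib.sha256((salt + email).encode()).hexdigest()[:12]
    return f"user_{digest}@example.com"

def anonymize_rows(rows, fields=("email",)):
    """Return copies of the rows with the given fields anonymized."""
    out = []
    for row in rows:
        clean = dict(row)  # never mutate the source rows
        for field in fields:
            if field in clean:
                clean[field] = anonymize_email(clean[field])
        out.append(clean)
    return out
```

In a real pipeline, a function like this would run over the freshly cloned tables before the dump is handed to developers.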
One common use case when sending logs to Elasticsearch is routing different lines of the log file to different indexes based on matching patterns. In this article, we will walk through setting this up with both Fluentd and Logstash, to give you more flexibility and ideas on how to approach the topic.
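As a preview of the Fluentd side, a config sketch like the one below re-tags records via the `rewrite_tag_filter` plugin and matches each tag to its own index. The tag names, field name, and index names are illustrative assumptions:

```
# Re-tag records whose "log" field matches ERROR, then send each tag
# family to its own Elasticsearch index. Names are placeholders.
<match app.**>
  @type rewrite_tag_filter
  <rule>
    key log
    pattern /ERROR/
    tag error.${tag}
  </rule>
  <rule>
    key log
    pattern /.*/
    tag info.${tag}
  </rule>
</match>

<match error.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  index_name app-errors
</match>

<match info.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  index_name app-logs
</match>
```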
In this blog post, we will tell the story of how we implemented Kubernetes autoscaling using Prometheus, and the struggles we faced along the way. The application running on Kubernetes was the Magento eCommerce platform, which is why, as you will see later, we use statistics from Nginx and PHP-FPM.
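To give a flavor of the end result, a HorizontalPodAutoscaler driven by a custom Prometheus metric (exposed to the custom metrics API, e.g. via prometheus-adapter) can look roughly like this. The deployment name, metric name, and targets are hypothetical:

```yaml
# Sketch: scale a Magento deployment on a custom metric served through
# the Kubernetes custom metrics API. All names and values are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: magento
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: magento
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: phpfpm_active_processes   # hypothetical metric name
      target:
        type: AverageValue
        averageValue: "10"
```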
Today, we are going to talk about the EFK stack: Elasticsearch, Fluentd, and Kibana. You will learn about the stack and how to configure it to centralize logging for applications deployed on Kubernetes. We will focus on Fluent Bit, the lightweight version of Fluentd, which is more suitable for Kubernetes. Additionally, we will talk about how we reached the final solution and the hurdles we had to overcome. Last but not least, we'll show you how we handled application logs without installing any third-party clients.
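As a hint of the setup, a minimal Fluent Bit configuration that tails container logs, enriches them with Kubernetes metadata, and ships them to Elasticsearch looks roughly like this sketch. The host name and log path are assumptions for a typical cluster:

```
# Sketch: tail container logs and forward them to Elasticsearch.
# Host, port, and paths are illustrative assumptions.
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Parser            docker
    Tag               kube.*

[FILTER]
    Name              kubernetes
    Match             kube.*

[OUTPUT]
    Name              es
    Match             kube.*
    Host              elasticsearch
    Port              9200
    Logstash_Format   On
```

Because Fluent Bit runs as a DaemonSet and reads the container log files directly, the applications themselves need no logging client at all.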
In the DevOps era, where great emphasis is placed on automation, having reliable, predictable, and fast pipelines is a must. Fortunately, there are many options for you to try, like Jenkins, Buildbot, Drone, Concourse, and so on. If you are trying to run jobs on Kubernetes, there is also the new Jenkins X, which brings major changes to Jenkins, such as running jobs in the cluster. Yet, if you host your code on GitLab, you should consider its built-in CI/CD tool, because it can save you a lot of time and money when done right. In this article, we will showcase the main steps behind running GitLab Runners on Kubernetes.
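As a preview, deploying the runner with the official `gitlab/gitlab-runner` Helm chart comes down to a small values file along these lines. This is a sketch: the URL and token are placeholders you must supply, and the exact keys vary between chart versions, so check the chart's documented values:

```yaml
# values.yaml sketch for the gitlab/gitlab-runner Helm chart.
# URL and token are placeholders; verify key names against your chart version.
gitlabUrl: https://gitlab.example.com/
runnerRegistrationToken: "REPLACE_ME"
rbac:
  create: true        # let the runner create job pods in the cluster
runners:
  privileged: true    # typically needed for Docker-in-Docker builds
```

With these values in place, `helm install` registers the runner against your GitLab instance, and each CI job runs as a short-lived pod in the cluster.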