Centralized Logging with the Elastic Stack

 Fri, 23 Aug 2019 17:25:13 +0200 last edited: Fri, 23 Aug 2019 18:16:09 +0200  

#^Centralized Logging with the Elastic Stack
on media.ccc.de

Decentralized logging becomes ever more laborious as the number of processes to be monitored grows. That is why tools supporting centralized logging have been around for several years. This talk presents the Elastic Stack as one such tool.

In the world of microservices, the number of log-producing processes is very large, easily in the range of 100-1000 processes. Manual log processing is all but unthinkable here. But monolithic services often run decentralized as well, and analyzing their production logs then usually involves a lot of effort, too. Centralized logging gives a much better overview of the overall state of a system: instead of inspecting each log individually, the logs can be aggregated and thus also evaluated automatically with ease. The Elastic Stack offers the ability to store and search large amounts of logs. The ecosystem around the ELK stack helps developers, DevOps engineers, and others prepare the logs quickly and easily so that they are readily analyzable. This talk lays out the advantages and disadvantages of centralized logging and shows how the Elastic Stack can be integrated into existing environments.

#ELK #FrOSCon14 #FrOSCon2019

No Excuse

 Fri, 21 Jun 2019 14:42:53 +0200 
#^SQL is No Excuse to Avoid DevOps - ACM Queue
A friend recently said to me, "We can't do DevOps, we use a SQL database." I nearly fell off my chair. Such a statement is wrong on many levels.
"But you don't understand our situation!" he rebuffed. "DevOps means we'll be deploying new releases of our software more frequently! We can barely handle deployments now and we only do it a few times a year!"
I asked him about his current deployment process.

German work culture

 Fri, 15 Mar 2019 00:15:06 +0100 
#^Does DevOps get along with German work culture? | heise Developer
DevOps principles and German work culture are not easy to reconcile. But taking on the challenge is worth it.

CI/CD tools

 Fri, 11 Jan 2019 17:45:01 +0100 
#^7 CI/CD tools for sysadmins | Opensource.com
An easy guide to the top open source continuous integration, continuous delivery, and continuous deployment tools.

Trunk based development

 Fri, 22 Jun 2018 16:37:23 +0200 
Quite an interesting read, this overview.

#^Trunk Based Development - Game Changers
Since the early 80’s a number of things have pushed best practices towards Trunk-Based Development, or away from it.

The language in use to describe such things has changed over time. Software Configuration Management (SCM) is used less today than Version Control Systems (VCS) is. A simpler still term - “Source Control” - seems to be used more recently, too.

Similarly, ‘trunk’ and ‘branch’, have not always been used as terms for controlled code lines that have a common ancestor, and are eminently (and repeatably) mergeable.

Safe Containers?

 Fri, 25 May 2018 18:34:03 +0200 
#^Safe Containers » ADMIN Magazine
By Martin Loschwitz
Docker containers are a convenient way to run almost any service, but admins need to be aware of the need to address some important security issues.
Container systems like Docker are a powerful tool for system administrators, but Docker poses some security issues you won't face with a conventional virtual machine (VM) environment. For example, containers have direct access to directories such as /proc, /dev, or /sys, which increases the risk of intrusion. This article offers some tips on how you can enhance the security of your Docker environment.


 Thu, 18 Jan 2018 18:52:14 +0100 
I already had a dockerized Selenium Grid, but it was a good idea to replace it with Selenoid. The state of the automation and the video recording feature are really impressive.

Selenoid is a powerful implementation of Selenium hub using Docker containers to launch browsers.

Lightweight and Lightning Fast
Suitable for personal usage and in big clusters:
* Consumes 10 times less memory than Java-based Selenium server under the same load
* Small 7 Mb binary with no external dependencies (no need to install Java)
* Browser consumption API working out of the box
* Ability to send browser logs to centralized log storage (e.g. to the ELK-stack)
* Fully isolated and reproducible environment

#^Scalable Selenium Cluster: Up & Running | Ivan Krutov
by seleniumconf on YouTube

Too careless with credentials

 Thu, 09 Nov 2017 18:27:55 +0100 
#^Study: DevOps teams frequently handle credentials carelessly

In many companies, DevOps departments lack rules for the secure handling of privileged accounts and credentials; in many cases an overarching security strategy is missing entirely, as CyberArk's "Advanced Threat Landscape" report shows.


 Tue, 26 Sep 2017 17:27:08 +0200 
Nice collection of #Jenkins pipeline examples.

pipeline-examples - A collection of examples, tips and tricks and snippets of scripting for the Jenkins Pipeline plugin

Jenkins Shared Libraries

 Fri, 04 Aug 2017 18:53:36 +0200 
Should have used shared libraries much earlier.

#^Jenkins Shared Libraries Workshop
by Julien Pivotto on SlideShare

RDBMS containers

 Fri, 28 Jul 2017 13:04:28 +0200 last edited: Fri, 28 Jul 2017 16:55:45 +0200  
#^RDBMS Containers » ADMIN Magazine
If you spend very much of your time pushing containerized services from server to server, you might be asking yourself: Why not databases, as well? We describe the status quo for RDBMS containers.


 Fri, 07 Jul 2017 23:17:18 +0200 
There will be beta exams for the new LPIC-OT at FrOSCon in August. Looking at the objectives for this new exam, it covers a lot of what I have done recently.

#^DevOps Tools Engineer
DevOps is one of the most in-demand skills in open source today.  In order to meet this need with verified skills LPI, an established authority in Linux Administration, is developing the DevOps Tools Engineer certification.  These additional certified competencies strengthen the portfolio of today’s IT professionals.

As more and more companies introduce DevOps methodologies to their workflows, skills in using tools which support the collaboration model of DevOps become increasingly important. LPIC-OT DevOps Tools Engineers will be able to efficiently implement a workflow and to optimize their daily administration and development tasks.

This certification will be released in autumn 2017 and will test proficiency in the most relevant free and open source tools used to implement the DevOps collaboration model, like for example configuration automation or container virtualization.

The new certification is created according to LPI‘s community-based certification development process. This process relies heavily on involvement by the IT community.

DW: a Senior DevOps Engineer (Development and Operations Engineer)

 Bonn, Germany  Tue, 27 Jun 2017 01:23:11 +0200 
That sounds very interesting indeed, but what is this nonsense about "on the basis of a fixed-term fee framework contract"?

#^A Senior DevOps Engineer (Development and Operations Engineer) - job at Deutsche Welle in Bonn
Current job offer as a Senior DevOps Engineer (Development and Operations Engineer) in Bonn at the company Deutsche Welle

The Application and Systems Operations department runs Deutsche Welle's IT infrastructure at the Bonn location and a large number of business-critical applications in the audio, video, and online environment. The "Online Systems Operations" unit looks after highly available web applications for the IP-based distribution of the DW program in a modern, innovative IT landscape. These are mostly web content management systems based on Java EE and PHP. You will work in a highly motivated and open-minded team.
 DevOps  Bonn

Deep Dive into Capabilities

 Sun, 25 Jun 2017 22:57:10 +0200 
Secure Your Containers with this One Weird Trick
Did you know there is an option to drop Linux capabilities in Docker? Using the docker run --cap-drop option, you can lock down root in a container so that it has limited access within the container. Sadly, almost no one ever tightens the security on a container or anywhere else.
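As a rough sketch of that option (the image and the capability set below are my example, not the article's, and your image may need additional capabilities such as DAC_OVERRIDE): the official nginx image needs only a handful of capabilities, so everything else can be dropped.

```shell
# Drop all capabilities, then add back only what nginx actually needs:
# SETUID/SETGID to switch workers to the unprivileged user, CHOWN for
# its temp directories, NET_BIND_SERVICE to bind port 80 inside the
# container without full root.
docker run -d --name web \
  --cap-drop=ALL \
  --cap-add=CHOWN --cap-add=SETUID --cap-add=SETGID \
  --cap-add=NET_BIND_SERVICE \
  -p 8080:80 nginx

# Verify what is left for PID 1 inside the container:
docker exec web grep Cap /proc/1/status
```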

Multi-Project Pipeline Graphs

 Fri, 23 Jun 2017 18:00:52 +0200 

#^GitLab 9.3 Released with Code Quality and Multi-Project Pipeline Graphs

GitLab 9.3 Released with Code Quality, Multi-Project Pipeline Graphs, Conversational Development Index, Improved Internationalization, Snippet Descriptions, and much more!



 Fri, 09 Dec 2016 19:02:47 +0100 
Nearly all web projects are moved to #Docker containers now. The old infrastructure was mostly based on CentOS 6/7, and the main reason for this step was the annoyance of legacy #PHP projects and their conflicting PHP version requirements. I don't need a cluster or swarm, so I have a single instance with #CentOS based #Project Atomic only. The dockerized projects include:
  • static pages with nginx
  • #TYPO3 7.6
  • #Drupal 8.2
  • #Piwik 2.17
  • #Revive Adserver 4.x
  • #OXID eShop 4.[9|10]

Here are some completely subjective "best practices":
  • I was a bit disappointed by most of the images available on Docker Hub. But do make use of the official mariadb, php, drupal, and nginx images!
  • Put your logic into the Dockerfile, not into massive entrypoint scripts.
  • Don't try to build one base image for all your projects; their requirements are just too different. I found it much easier to build custom images directly from the official PHP images, with only what a project really needs.
  • Think about mail delivery requirements. Does your application require mail(), or can you configure an SMTP server? Use sSMTP if you need a local MTA.
  • Get your persistent volumes right and use the correct #SELinux labels.
  • A local registry makes deployment much easier.
  • Use #Jenkins to build and deploy new images.
  • Don't use --link, use Docker networks instead!
  • jwilder/nginx-proxy still has some bugs, especially with custom nginx configurations, but it is a wonderful tool.
  • With jrcs/letsencrypt-nginx-proxy-companion it was never easier to get certificates.
  • Think about reboots. How do you want your containers to be managed? systemd services have worked quite well so far.
  • Redirect your application logs to the right output. Log management is something I should take a look at again.
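For the reboot point, one way to manage a container via systemctl is a unit file per container; a minimal sketch, where "web01" is a placeholder container name (not one from the list above) and the unit is written to a temp directory here instead of /etc/systemd/system:

```shell
# Wrap an existing container in a systemd unit so it survives reboots.
# "web01" is a placeholder container name.
unit_dir=$(mktemp -d)   # in real life: /etc/systemd/system

cat > "$unit_dir/web01.service" <<'EOF'
[Unit]
Description=web01 container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a web01
ExecStop=/usr/bin/docker stop -t 10 web01

[Install]
WantedBy=multi-user.target
EOF

# Then activate it:
#   systemctl daemon-reload && systemctl enable --now web01.service
```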

Should also get my private projects into containers next.

 Wed, 02 Nov 2016 22:58:16 +0100 
#^Introduction to DevOps: Transforming and Improving Operations
Learn how to transform your organization using the principles and practices of DevOps.

"Introduction to DevOps: Transforming and Improving Operations” aims to help you develop a good working knowledge of the concept of DevOps, covering the foundation, principles, and practices of DevOps. This course will focus on the successful patterns used by high performance organizations over the past 10 years.

IP-based virtual hosts in a container

 Mon, 24 Oct 2016 18:34:46 +0200 last edited: Mon, 24 Oct 2016 18:45:59 +0200  
I have a Docker container with an nginx reverse proxy using name-based virtual hosts and also wanted IP-based virtual hosts. But I always got the default server configuration: even though the logs showed the correct destination IP, the listen statements for ip:port simply had no effect.
It does not seem to work with the default bridge network. Running the container with --net=host solved the problem, and the IP-based vhosts worked as well.
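A quick sketch of the workaround (the image name is a placeholder):

```shell
# With the default bridge network, published ports are DNATed, so inside
# the container the destination address is the container's own bridge IP
# and ip:port listen directives never match. Host networking preserves
# the original destination IP:
docker run -d --net=host my-nginx-proxy   # hypothetical image name

# nginx inside the container can then select IP-based vhosts, e.g.:
#   server { listen 192.0.2.10:80; ... }
#   server { listen 192.0.2.11:80; ... }
```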

Dave Farley: The Rationale for Continuous Delivery

 Wed, 12 Oct 2016 23:17:35 +0200 
#^Dave Farley: The Rationale for Continuous Delivery
In his keynote at Continuous Lifecycle 2015, Dave Farley offers an instructive historical overview of the evolution of Continuous Delivery towards DevOps.

#Continuous Delivery #CD

Tear down docker test containers based on image name

 Thu, 01 Sep 2016 16:49:47 +0200 
Suppose your #CI generates #Docker images from your Git commits and tags them with something like web01-qa:$BUILD_NUMBER. Right now I cannot set a name for the container that gets spun up after every commit, so I needed a way to tear down the old containers, based on the image they were created from, after a new container has started successfully. This is what I came up with:

docker ps --format "{{.ID}}\t{{.Image}}" | awk -F ':' '/web01-qa/{print $NF, $0}' | sort -r -n | tail -n+2 | awk '/web01-qa/{system("docker stop " $2)}'
Get all running containers, sort them by $BUILD_NUMBER for image names containing web01-qa, and stop all matching containers except the one from the newest image.

Or use docker rm -f if you are not interested in the old containers anymore.
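The selection logic can be dry-run without a Docker daemon by feeding the same pipeline a fake docker ps listing (the container IDs and tags below are made up):

```shell
# Fake output of: docker ps --format "{{.ID}}\t{{.Image}}"
ps_output=$(printf 'aaa111\tweb01-qa:12\nbbb222\tweb01-qa:14\nccc333\tother:3\nddd444\tweb01-qa:13\n')

# Same pipeline as above, minus the docker stop call: print the IDs of
# every web01-qa container except the one from the newest build.
echo "$ps_output" \
  | awk -F ':' '/web01-qa/{print $NF, $0}' \
  | sort -r -n \
  | tail -n+2 \
  | awk '{print $2}'
# → ddd444
# → aaa111
```

Build 14 is the newest, so only the containers from builds 13 and 12 are selected.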