How to copy a file or directory from one host to another when they can communicate only through a third host (jump host)

Using --append and --partial allows us to resume the transfer in case rsync is interrupted (a concrete example follows the parameter list below).

rsync --bwlimit=20000 --progress --append --partial -vz -e 'ssh -J <USER>@<JUMP_HOST> -p 22' <USER>@<SOURCE_HOST>:/SOURCE/PATH /LOCAL/PATH

Parameter explanation

  • --bwlimit: limits I/O bandwidth, in KBytes per second; if omitted, rsync uses all available bandwidth
  • --progress: show progress during the transfer
  • --append: append data onto shorter files
  • --partial: keep partially transferred files
  • -vz: verbose output and compress file data during the transfer
  • -e 'ssh -J <USER>@<JUMP_HOST> -p 22': rsync over a jump…
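
As a concrete illustration (the hostnames and paths below are made up for the example), if the transfer drops halfway, simply re-running the exact same command continues the partially transferred file instead of starting over, thanks to --append and --partial:

$ rsync --bwlimit=20000 --progress --append --partial -vz \
    -e 'ssh -J alice@jump.example.com -p 22' \
    alice@source.example.com:/data/backup.tar /home/alice/backup/
# connection drops mid-transfer...
$ rsync --bwlimit=20000 --progress --append --partial -vz \
    -e 'ssh -J alice@jump.example.com -p 22' \
    alice@source.example.com:/data/backup.tar /home/alice/backup/
# the second run resumes from where the first one stopped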

Scenario: we have a swarm of 3 servers and a 4th server that will be our storage; a directory on this host will be mounted over SSH.

Solution:

Install the following plugin on all servers of the swarm

$ docker plugin install --grant-all-permissions vieux/sshfs
latest: Pulling from vieux/sshfs
52d435ada6a4: Download complete
Digest: sha256:1d3c3e42c12138da5ef7873b97f7f32cf99fb6edde75fa4f0bcf9ed277855811
Status: Downloaded newer image for vieux/sshfs:latest
Installed plugin vieux/sshfs

On the storage server create the following directory and file

Note: kpatronas is my home directory, adjust this to your environment

$ mkdir /home/kpatronas/data
$ echo Hello world! > /home/kpatronas/data/message.txt

Now, on the swarm manager, let's create the ssh…
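
The remainder of that step is cut off above, but as a rough sketch (the volume name, storage hostname placeholder, and password-based authentication are assumptions for the example), creating an sshfs-backed volume with the vieux/sshfs plugin usually looks like this:

$ docker volume create --driver vieux/sshfs \
    -o sshcmd=kpatronas@<STORAGE_HOST>:/home/kpatronas/data \
    -o password=<PASSWORD> \
    sshvolume

Any swarm node with the plugin installed can then mount the volume, for example with docker run -it -v sshvolume:/data alpine cat /data/message.txt.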


Docker images take up space on the filesystem; let's do a test to get a better understanding. Create the following Dockerfile:

FROM alpine
RUN echo foo
# each dd instruction writes a 1 GiB file of zeros in its own image layer
RUN dd if=/dev/zero of=1g1.img bs=1G count=1
RUN dd if=/dev/zero of=1g2.img bs=1G count=1
RUN dd if=/dev/zero of=1g3.img bs=1G count=1
CMD /bin/true

Build the image with

$ docker build . -t big_image

This Dockerfile should create an image of around 3GB; let's verify this with the docker system df command

$ docker system df -v
Images space usage:
REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS
big_image latest f72061695ba5 About a minute ago 3.227GB 5.613MB 3.221GB 0
alpine latest 7731472c3f2a 2 days ago 5.613MB 5.613MB 0B 0
busybox latest b97242f89c8a 4 days ago 1.232MB 0B 1.232MB …

If your application requires data to persist after the container has been deleted, Docker offers two options for storing data: volumes and bind mounts.

Bind mounts have limited functionality compared to volumes. With a bind mount, a file or a directory on the host machine is mounted into a container.

Volumes are the preferred way of storing persistent data generated and used by containers; volumes have several advantages over bind mounts (see the example after this list):

  • Volumes are easier to back up or migrate than bind mounts.
  • Volumes can be managed with Docker CLI commands or with the Docker API.
  • Volumes can be shared by multiple containers.
  • Data in volumes can be encrypted by the volume…
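
To make the difference concrete, here is a small illustrative example (the container and volume names are made up): a named volume is created and managed by Docker, while a bind mount simply exposes an existing host path inside the container.

# named volume: Docker decides where the data lives on the host
$ docker volume create appdata
$ docker run -d --name web1 -v appdata:/usr/share/nginx/html nginx

# bind mount: an existing host directory is mounted as-is into the container
$ docker run -d --name web2 -v /home/kpatronas/data:/usr/share/nginx/html nginx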

Storage drivers allow you to write data in the writable layer of the container; files written in this layer do not persist when the container is deleted, and read/write speed is lower than the host's native read/write speed.

A Docker image consists of layers, and each layer corresponds to an instruction in a Dockerfile; every layer is read-only.

FROM ubuntu:20.04              <--- layer 1
COPY . /application            <--- layer 2
CMD python /application/app.py <--- layer 3

When we start a container from an image, all new data or modifications of existing data are written to a new layer; this layer is deleted when the container is deleted, but the image used to create the container remains untouched. …
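
A quick way to see these layers is docker history, which lists one entry per Dockerfile instruction (shown here with the big_image built earlier; the IDs and timestamps will of course differ on your system):

$ docker history big_image
# each RUN dd ... instruction appears as its own layer of roughly 1GB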


Node labels can be a powerful tool; a label is nothing more than metadata attached to a node. Imagine that we have two swarm nodes, one in a data center named east and one in a data center named west; let's label them based on their data center.

On the swarm manager, enter the following for the node in data center east

$ docker node update --label-add DC=east worker_node2

Also add a label for the node in data center west

$ docker node update --label-add DC=west worker_node3

Suppose that the data center named east is our production data center and west is the failover data center, so let's start our services on data center…
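
The rest of that example is cut off above, but as a hedged sketch (the service name is made up), a service can be pinned to the east nodes with a placement constraint on the label we just added:

$ docker service create --name web --constraint 'node.labels.DC==east' nginx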


Docker stacks allow us to manage multi-container applications as a single unit.

Let's create the following file, stack_example.yml, which is actually a Docker Compose file.
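
The contents of the file are not shown here; a minimal sketch that would produce the services deployed below (an nginx web server plus a busybox container echoing some text in a loop; the image tags and the echoed message are assumptions) could look like this:

version: "3.8"
services:
  web:
    image: nginx
  busybox:
    image: busybox
    command: sh -c 'while true; do echo "hello from the stack"; sleep 5; done'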

On the swarm manager enter

$ docker stack deploy -c stack_example.yml simple
Creating network simple_default
Creating service simple_busybox
Creating service simple_web

The simple parameter is the name of the stack; it is used as a prefix on the stack's services so we can identify them.

The command created a network, which the services use to communicate with each other, and two services: one running the nginx web server and the other running a shell that echoes some text in a loop.

Let's try some commands

The docker stack ls command lists the running…


The anxiety of communication!

The words we choose, the tone of our voice and our body posture convey to the person we are talking to what we want to communicate.

Often, however, what the other person perceives may not be what we are communicating, and this can happen for two reasons.

  1. The words we chose, or the tone of our voice, or our body posture do not clearly convey our intention.
  2. The "filters" of the person we are talking to, regardless of how well we communicate, distort the message, and often we are in no position to know it.

While we humans are social beings and like to communicate our needs, our ideas and our feelings, there is the fear of not communicating well, that is, that what we say will not be said the way we want or will not be understood correctly. …


Docker swarm allows you to deploy services distributed across the nodes of the swarm, which provides high availability and high performance for the service. To start a single service, we can enter something as simple as this

Create a simple service and cause a failure on the running node

$ docker service create nginx
spesyzvgu8pkh1eidhpjcti7d
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged

This command started a new service running the nginx container.

To see details about the services we can run

$ docker service list
ID NAME MODE REPLICAS IMAGE PORTS
spesyzvgu8pk nifty_lehmann replicated 1/1 nginx:latest

Note that the number of replicas is 1, which does not equal the number of my worker nodes (3 in my setup). This is because by default Docker swarm creates one replica, but the service still enjoys high availability: in case of a failure of the node that runs this container, the service will move to another node. …
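
If we want more than one replica, the service can be scaled after creation; for example (the service name below is the auto-generated one from the listing above, yours will differ):

$ docker service scale nifty_lehmann=3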


Scenario: we want to repeat the last entered command, which fails, until it returns an exit code of 0; the reason it can exit with a non-zero code is a network problem.

Solution: edit ~/.bashrc and add

alias repeat='while [ $? -ne 0 ]; do $(fc -nl -1); done'

Now we need a tool to test whether it works: a small script that lets us verify the command will be re-executed until the exit code is 0. Create a file named random.sh and enter

#!/bin/bash
# pick a random integer between 1 and 10
RANDOM_INTEGER=$((RANDOM % 10 + 1))
# exit with 0 if the number is even, 1 if it is odd
if [ $(( RANDOM_INTEGER % 2 )) -eq 0 ]
then
  echo "exit code 0"
  exit 0
else
  echo "exit code 1"
  exit 1
fi
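
To try it out (assuming the alias above has been loaded in the current shell), make the script executable, run it once, and if it fails type repeat to re-run it until it succeeds:

$ chmod +x random.sh
$ ./random.sh
$ repeat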

About

Konstantinos Patronas

DevOps engineer, loves Linux, Python, cats and Amiga computers
