Commit 688a6a8545 by Harm Verhagen: Fix docker_clone_volume.sh to preserve hard-links
When using docker_clone_volume.sh, I found that when copying a volume with InfluxDB data,
the cloned volume was bigger than the original.

Cause: the alpine base image ships a busybox version of cp, which does _not_ preserve
hard links, even with the -a option.

This is fixed by using an ubuntu base image instead of alpine (see the sketch after the size comparison below).

Downside: the ubuntu image is larger than the alpine image.

Source volume:

 influxdb_data               380.2 MB

Before:

 influxdb_data.bak           443.5 MB

After:

 influxdb_data.ubuntu.bak    380.2 MB
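
A minimal sketch of the hard-link-preserving clone step, assuming the script copies by mounting both volumes into a throwaway container; the volume names are taken from the comparison above and the actual script takes them as arguments:

```bash
# Illustrative volume names; the real script receives them as arguments.
SRC_VOLUME=influxdb_data
DST_VOLUME=influxdb_data.ubuntu.bak

# Create the destination volume explicitly.
docker volume create "$DST_VOLUME"

# ubuntu's GNU cp preserves hard links with -a; busybox cp in alpine does not,
# which is why the cloned volume grew from 380.2 MB to 443.5 MB before the fix.
docker run --rm \
  -v "$SRC_VOLUME":/from \
  -v "$DST_VOLUME":/to \
  ubuntu bash -c "cd /from && cp -a . /to"
```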

docker-convenience-scripts

This repository contains various convenience scripts for Docker that I have gathered over time.

docker_clone_volume.sh

The purpose of this script is to easily create a clone of an existing Docker data volume under a new name. This allows me, for example, to duplicate a data volume used in the production environment of my blog and take that duplicate over to my development environment, so that I also have the latest production data available during development.
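
As a rough illustration of how it is used (the volume names here are hypothetical, and the exact arguments should be checked against the script itself):

```bash
# Clone the production data volume into a copy used for development.
./docker_clone_volume.sh blog_production_data blog_development_data
```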

You can find more details in my blog post Cloning Docker Data Volumes.

docker_get_data_volume_info.sh

The purpose of this script is to easily get an overview of all data volumes that are currently present. For each data volume, the output shows its current size and a list of stopped or running containers that use it, including the image each container was created from. This makes it easy to spot data volumes on my disk that take up a lot of space but are no longer needed.
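
As a hedged sketch of the same idea built only from standard docker CLI commands (the actual script may gather the information differently):

```bash
#!/bin/bash
# Rough sketch of the idea behind docker_get_data_volume_info.sh.

for volume in $(docker volume ls --quiet); do
  # Determine the size of the volume by mounting it read-only in a
  # throwaway container and running du inside it.
  size=$(docker run --rm -v "$volume":/data:ro ubuntu du -sh /data | cut -f1)

  # List all containers (running or stopped) that mount this volume,
  # together with the image they were created from.
  containers=$(docker ps -a --filter volume="$volume" \
    --format '{{.Names}} ({{.Image}}, {{.Status}})')

  echo "Volume: $volume"
  echo "  Size: $size"
  echo "  Used by:"
  echo "${containers:-(no containers)}" | sed 's/^/    /'
done
```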

You can find more details in my blog post Listing information for all your named/unnamed data volumes.