Let me walk you through the roles of each component involved in your website and how they fit together.
1. Jekyll
Jekyll is a static site generator. It transforms simple markdown files into a static website with HTML, CSS, and JavaScript. You likely use Jekyll to manage your blog, converting your posts into web pages.
- Files related to Jekyll:
  - `_config.yml`: Contains the configuration settings for your Jekyll site.
  - `Gemfile` and `arsscriptum.gemspec`: Define the dependencies (gems) required to build the site, like Jekyll itself. Bundler uses these to manage Ruby dependencies.
  - `Rakefile`: Provides tasks to automate processes, such as building or deploying the site.
Jekyll generates static files that are served via a web server. From the `app.json` file, it seems your site is a blog hosted at the repository https://arsscriptum.github.io/.
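For a sense of what Jekyll does on its own, a plain local build (outside Docker) is just two commands; this sketch assumes Ruby and Bundler are already installed:

```bash
# Install the gems declared in the Gemfile (Jekyll among them)
bundle install

# Build the site; by default Jekyll writes the generated HTML/CSS/JS to ./_site
bundle exec jekyll build
```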
2. Nginx
Nginx is a web server that delivers your static files (e.g., HTML, CSS, JS) to users. It can also act as a reverse proxy to handle incoming requests, route traffic, or load-balance across multiple containers if needed.
- How it fits: Nginx serves the static files generated by Jekyll. In the Docker setup, the Nginx container serves the output of the Jekyll build, making your website available on the internet.
- Configuration File: If Nginx is not configured inside the Docker image directly, it will use a provided configuration (possibly similar to the uploaded `nginx.conf`). This configuration determines how your site responds to requests and handles files; a minimal sketch follows below.
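I haven't seen the actual contents of your `nginx.conf`, but a minimal configuration for serving a static Jekyll build typically looks like the following (the root path and port are assumptions), written here as a shell heredoc:

```bash
# Hypothetical minimal nginx.conf for a static site; root path and port are assumed.
cat > nginx.conf <<'EOF'
server {
    listen 80;
    server_name _;

    # Directory where the Jekyll output is available inside the container
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
EOF
```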
3. Docker and Docker Compose
Docker provides containers, which are isolated environments to run applications. Docker Compose is used to manage multi-container applications. In your case, you have two containers:
- jekyllbuild: This container builds your Jekyll site, transforming markdown into static HTML files.
- nginx: This container serves the built files, making them accessible as your website.
- Files related to Docker:
  - Dockerfile: Describes how to create the Docker image for your application, including what packages to install and commands to run.
  - docker-compose.yml: Coordinates multiple Docker containers (like `jekyllbuild` and `nginx`), specifying how they interact, network configurations, and mounted volumes; an illustrative sketch follows below.
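I don't have your actual `docker-compose.yml`, but a two-service layout along these lines is typical; the service names, volume names, paths, and ports below are assumptions:

```bash
# Illustrative docker-compose.yml (assumed layout, not the real file)
cat > docker-compose.yml <<'EOF'
services:
  jekyllbuild:
    build: .                               # built from the project Dockerfile
    command: bundle exec jekyll build
    volumes:
      - site-output:/usr/src/app/_site     # share the generated site with nginx

  nginx:
    image: nginx:alpine
    depends_on:
      - jekyllbuild
    ports:
      - "8080:80"
    volumes:
      - site-output:/usr/share/nginx/html:ro

volumes:
  site-output:
EOF
```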
4. Gem
RubyGems (used via the `gem` command) is the package manager for Ruby, which Jekyll relies on. The `Gemfile` and the gemspec declare the dependencies needed for Jekyll and other components to run properly.
- `Gemfile`: Specifies which gems (packages) are required, e.g., Jekyll. A minimal example follows below.
- `arsscriptum.gemspec`: A detailed specification for a gem, potentially with custom settings or extensions for your blog.
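A `Gemfile` for a Jekyll blog is usually only a few lines; I haven't seen yours, so the contents below are an assumption about its general shape:

```bash
# Illustrative Gemfile (assumed contents), written out as a heredoc
cat > Gemfile <<'EOF'
source "https://rubygems.org"

gem "jekyll"
gem "rake"   # only if the Rakefile's tasks are used during the build
EOF
```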
5. Firebase
Your `firebase.json` suggests that you have integrated Firebase Hosting, which serves static files from the `/build` directory. Firebase can act as the frontend delivery service, hosting the static site generated by Jekyll.
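If you do publish through Firebase Hosting, the usual flow is the following; this assumes the Firebase CLI is installed and the project has already been initialized against your `firebase.json`:

```bash
# One-time setup (skip if already done)
npm install -g firebase-tools
firebase login

# Upload the static files referenced by firebase.json (the /build directory in your case)
firebase deploy --only hosting
```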
6. Procfile
This file is used by platforms like Heroku to define how to run applications. If you deploy to Heroku, the `Procfile` specifies the commands to execute to start your app.
7. Relationship of Components
- Development: You develop your site using Jekyll, writing posts in markdown.
- Build: Jekyll generates static files from your posts.
- Docker Workflow:
  - The `jekyllbuild` container builds the static files.
  - Nginx serves the generated files via its own container.
- Firebase Hosting: Optionally, you host the built files on Firebase.
- Deployment: If deploying with Heroku, the `Procfile` ensures the correct startup procedure.
This stack provides a smooth workflow:
- Jekyll lets you write and build the blog content.
- Docker ensures consistent environments for building and serving.
- Nginx serves the final site to users.
- Firebase or Heroku may provide additional hosting or deployment options.
When building and deploying your website with the given setup, the command execution typically follows this sequence:
1. Docker Compose Initiation
- When you run `docker-compose up` or `docker-compose up --build`, Docker Compose reads the `docker-compose.yml` file to orchestrate the services (the Jekyll and Nginx containers).
2. Dockerfile Execution for the jekyllbuild Service
During the build process, Docker executes the commands inside the Dockerfile to create the jekyllbuild image. Here is the typical order (a consolidated sketch follows after the list):
- Base Image: Docker pulls the specified base image (e.g., `ruby:latest` if you're building with Ruby).
  `FROM ruby:latest`
- Install Dependencies: Install required system tools and libraries.
  `RUN apt-get update && apt-get install -y build-essential`
- Copy Files into the Container: Docker copies the project files into the container.
  `COPY . /usr/src/app`
- Install Ruby Gems (Including Jekyll): Bundler installs the dependencies defined in the `Gemfile`.
  `RUN bundle install`
- Build Jekyll Site: Jekyll generates the static files into a `_site` or `build` directory.
  `RUN jekyll build`
- Expose Volumes and Commands: You may define volumes in the `docker-compose.yml` file, mapping the build output to be shared between containers (e.g., `jekyllbuild` and `nginx`).
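Assembled into a single file, the steps above correspond roughly to the Dockerfile below; treat it as an illustrative sketch rather than your actual Dockerfile:

```bash
# Illustrative Dockerfile assembling the steps above (assumed, not the real one)
cat > Dockerfile <<'EOF'
FROM ruby:latest

# System build tools needed to compile native gem extensions
RUN apt-get update && apt-get install -y build-essential

WORKDIR /usr/src/app
COPY . /usr/src/app

# Install Jekyll and the other gems declared in the Gemfile
RUN bundle install

# Generate the static site into /usr/src/app/_site
RUN bundle exec jekyll build
EOF
```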
3. Nginx Setup
Once the Jekyll site is built, the Nginx service starts.
- Mount Build Directory: Nginx accesses the static files from the mounted volume (e.g., `/usr/src/app/_site`).
- Load Nginx Configuration: Nginx uses a configuration file (e.g., `nginx.conf`) to determine how to serve the files.
- Expose Port: Nginx starts listening on the specified port (usually 80 or 8080).
4. Execution Workflow During Build
Here's the full order of execution when you run `docker-compose up --build` (the commands to drive and observe it are shown after the list):
- The jekyllbuild container starts:
  - The Dockerfile runs: it copies files, installs dependencies, and builds the site.
  - Jekyll generates static files into the `_site` or `build` directory.
  - The build process completes, and the static files are ready.
- The nginx container starts:
  - Nginx reads the static files via the shared volume.
  - Nginx serves the website on the configured port.
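In practice you trigger and watch this whole sequence with a couple of commands; the service names here are the ones used throughout this setup:

```bash
# Build the images (if needed) and start both services in the background
docker-compose up --build -d

# Follow each service's logs to watch the Jekyll build and the web server start
docker-compose logs -f jekyllbuild
docker-compose logs -f nginx
```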
5. Optional Steps (Firebase/Heroku)
If you're deploying to a platform like Firebase or Heroku:
- Firebase Hosting: After building the site, upload the static files in the `build` directory with:
  `firebase deploy`
- Heroku Deployment: If using Heroku, the `Procfile` defines the command to execute (see the sketch below):
  `web: bundle exec jekyll serve --host 0.0.0.0 --port $PORT`
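For completeness, a minimal Heroku flow might look like this; it assumes the Heroku CLI is installed and a Heroku app has already been created for this repository:

```bash
# Authenticate, push the repository, and let Heroku start the process defined in the Procfile
heroku login
git push heroku main

# Watch the dyno boot and confirm Jekyll is serving on the assigned port
heroku logs --tail
```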
Summary of Command Flow
- `docker-compose up --build` starts:
  - Docker builds the `jekyllbuild` image → runs Jekyll → outputs static files.
  - Docker starts the Nginx container → serves the static files.
- Optional: Deploy the built files to Firebase/Heroku using platform-specific tools.
This flow ensures that your website is properly built, served locally via Nginx, and optionally deployed to the web.
What is a Dangling Image in Docker?
A dangling image is an image that has no tag associated with it. In other words, it is an intermediate or old image layer that was built during the process but is no longer associated with any specific image name or tag.
- Example of a Dangling Image:
  After multiple builds, an older layer might still exist in the Docker cache, but it is no longer referenced by the latest image. This "dangling" layer looks like this when you list images:

```
$ docker images -f dangling=true
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
<none>       <none>    76f2b22ef0c8   2 weeks ago   345MB
```
- How it happens (see the example below):
  - When you rebuild an image with the same tag (e.g., `my-jekyll-site:latest`), Docker reuses or replaces some layers but leaves the unused ones behind.
  - These old, unreferenced layers become dangling images.
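To make this concrete, here is roughly how a dangling image appears during normal work on a site like this; the tag is the one used later in the build script, and the exact output will vary:

```bash
# First build tags the image
docker build -t my-jekyll-site:latest .

# ...edit the Dockerfile or the site sources, then rebuild with the same tag...
docker build -t my-jekyll-site:latest .

# The previously tagged image is now untagged (<none>:<none>) and shows up as dangling
docker images -f dangling=true
```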
What Are Unused Images?
Unused images are any Docker images that are not referenced by:
- Any running or stopped containers.
- Any Dockerfile or Compose builds currently in use.
There are two types of unused images:
- Dangling images: Images with `<none>:<none>` as their name and tag.
- Unused tagged images: These have valid names and tags but no associated containers. For example, you pulled an image or built one, but it's not in use by any container.
How to Identify and Clean Up Dangling and Unused Images
List Dangling Images:
docker images -f dangling=true
Remove All Dangling Images:
docker image prune -f
This removes only the dangling images (those with `<none>:<none>` as name and tag).
List All Tagged Images:
docker images --filter "dangling=false"
This lists every image with a valid name and tag, including ones still used by containers; cross-check with `docker ps -a` to identify the images that are truly unused.
Remove All Unused Images:
docker image prune --all -f
This removes both dangling images and unused images that are not being used by any containers, freeing up disk space.
Why Clean Up Dangling and Unused Images?
- Over time, Docker accumulates many dangling and unused images, consuming disk space (often under `/var/lib/docker`).
- Regular cleanup helps prevent storage issues, especially when frequently building or deploying applications.
The `docker image prune` command shows that images were deleted, but the "Total reclaimed space" is 0B. This can occur for a few reasons:
Reasons Why Reclaimed Space is 0B
- Layers Are Still in Use by Other Images: Docker uses layered storage. If any layers of the deleted images are shared by other images, they are not removed, so no space is reclaimed. Only layers that are unique and unused are deleted.
- Images Were Already Cached: The images you pruned could have been marked as "deleted" logically (i.e., no longer listed under a tag or repository), but their data may still be present as cached layers. These layers won't free space unless they are completely unused by other images or containers.
- Dangling Images Without Large Data: The deleted images might have been small or had minimal content, resulting in no noticeable space reclamation, for example if they were just metadata layers without significant files.
- Active Containers Using the Layers: If a layer is still in use by a running or stopped container, it won't be removed from storage, even if the image is marked as deleted. List all containers (including stopped ones) with:
  `docker ps -a`
- Volume Data Is Not Pruned: Docker stores persistent data (such as application logs or configs) in volumes. If any pruned image used volumes, these volumes remain until you remove them explicitly:
  `docker volume prune -f`
- No Real Data Was Stored in Deleted Images: If the deleted images were lightweight (like intermediate layers or base images), the space they took was negligible; Docker only shows reclaimed space for non-trivial deletions. (A command to inspect shared layer sizes follows below.)
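To see which of these cases applies, the verbose form of `docker system df` breaks disk usage down per image, including how much of each image's size is shared with other images:

```bash
# Per-image breakdown: the SHARED SIZE column shows layers reused by other images,
# which is usually why pruning reports 0B reclaimed.
docker system df -v
```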
How to Verify the Storage Usage
- Check Docker Storage:
docker system df
This command shows how much space images, containers, volumes, and caches are using.
- Remove All Unused Images and Data: If you want to be more aggressive with cleanup:
docker system prune -a -f
This removes all unused images, stopped containers, unused networks, and the build cache (add `--volumes` to also prune unused volumes).
- Clear Build Cache: Docker keeps a build cache for layers to optimize future builds. You can clear the cache to reclaim space:
docker builder prune -f
Summary
The message indicates that while images were logically deleted, the layers they contained might still be in use or cached by other images or containers. You can investigate further using `docker system df` to identify where space is being used, and remove unused containers, volumes, or caches if needed.
To remove all these images efficiently, follow these steps. Docker won’t allow removal if any containers are currently using these images, so we’ll handle that as well.
Step 1: Stop and Remove All Containers (Optional)
Before removing images, ensure no containers are using them.
- List all running and stopped containers:
docker ps -a
- Stop all running containers:
docker stop $(docker ps -q)
- Remove all containers:
docker rm $(docker ps -aq)
Step 2: Remove Images
- Remove All Specific Images Individually: You can delete each image using its IMAGE ID as listed by `docker images`. Example:
docker rmi 5dcfc1a4188b fc6661dd7ccf 1710f33858db 5deddd70bd74 d292a78d3a16 5687fea5c0bc
- Use a Filter to Match and Remove All Related Images: If the image names follow a common naming pattern (e.g., starting with `arsscriptum`), you can remove them like this:
docker images | grep 'arsscriptum' | awk '{print $3}' | xargs docker rmi -f
- Remove All Unused Images Automatically: This command removes all unused images, including those not in use by containers:
docker image prune -a -f
Handling Errors
- If Docker Prevents Image Deletion: If you see messages like `image is being used by stopped container ...`, make sure you've removed all containers as shown in Step 1 (the sketch below shows how to find the containers tied to a specific image).
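One way to track down exactly which containers still reference an image is Docker's `ancestor` filter; the image name below is an assumption, so substitute your own:

```bash
# List every container (running or stopped) created from the image
docker ps -a --filter ancestor=arsscriptum:latest

# Remove those containers, then the image itself
docker rm $(docker ps -aq --filter ancestor=arsscriptum:latest)
docker rmi arsscriptum:latest
```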
Step 3: Verify Image Deletion
After removing the images, confirm that they are no longer listed:
docker images
Summary Commands
If you’re sure you want to remove everything (containers, images, volumes, networks), run:
docker system prune -a --volumes -f
This ensures a complete cleanup, including images, containers, and volumes.
Here’s a Bash script that will use an already built Docker image to incrementally build your site (e.g., a Jekyll site). This script ensures the image is reused, maps the necessary volumes, and only regenerates the changes without starting from scratch.
Script: incremental_build.sh
#!/bin/bash
# ┌────────────────────────────────────────────────────────────────────────────────┐
# │ incremental_build.sh │
# └────────────────────────────────────────────────────────────────────────────────┘
# Colors for logging
YELLOW='\033[0;33m'
RED='\033[0;31m'
CYAN='\033[0;36m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
# Logging functions
log_info() {
echo -e "${CYAN}[INFO] $1${NC}"
}
log_ok() {
echo -e "${GREEN}[SUCCESS] $1${NC}"
}
log_error() {
echo -e "${RED}[ERROR] $1${NC}"
exit 1
}
log_important() {
    # Informational highlight only; unlike log_error, this must not exit the script.
    echo -e "${YELLOW}[IMPORTANT] $1${NC}"
}
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
log_info "Current Script directory: $SCRIPT_DIR"
pushd "$SCRIPT_DIR/.."
ROOT_DIR=`pwd`
log_info "Current Root directory: $ROOT_DIR"
# ────────────────────────────────────────────────────────────────────────────────
# Configuration variables
IMAGE_NAME="arsscriptum"
TAG="latest"
DOCKERFILE_PATH="$ROOT_DIR/Dockerfile" # Adjust this if the Dockerfile is elsewhere
BUILD_CONTEXT="."
OUTPUT_DIR="/home/www/images"
IMAGE_TAR="${OUTPUT_DIR}/${IMAGE_NAME}-${TAG}.tar"
log_info "Building Docker image: ${IMAGE_NAME}:${TAG}"
BUILD_DIR="${SCRIPT_DIR}/_site" # Output directory for incremental build
DOCKER_IMAGE="my-jekyll-site:latest" # Name of the Docker image to use
INTERNAL_JEKYLL_PATH=/srv/jekyll
HOST_DEPLOY_PATH=/home/www/arsscriptum.github.io
INTERNAL_DEPLOY_PATH=/tmp/www-build
# Ensure the output directory exists
if [ ! -d "$BUILD_DIR" ]; then
log_info "Creating build directory: $BUILD_DIR"
mkdir -p "$BUILD_DIR" || log_error "Failed to create build directory."
else
log_info "Build directory exists: $BUILD_DIR"
fi
# Run the Docker container for incremental build
log_info "Running Docker container for incremental site build..."
log_important "Deploy Path : $HOST_DEPLOY_PATH"
log_important "Docker Image : $DOCKER_IMAGE"
log_important "Image Package: $IMAGE_TAR"
# Map the project directory, the deploy path, and the Jekyll cache into the container,
# expose port 4000 (only needed when serving), and run an incremental build.
docker run --rm \
    -v "$ROOT_DIR:$INTERNAL_JEKYLL_PATH" \
    -v "$HOST_DEPLOY_PATH:$INTERNAL_DEPLOY_PATH" \
    -v "${SCRIPT_DIR}/.jekyll-cache:/srv/jekyll/.jekyll-cache" \
    -p 4000:4000 \
    "${DOCKER_IMAGE}" \
    jekyll build --incremental || log_error "Incremental build failed."
log_ok "Incremental build completed successfully."
# Optional: Start a local server to preview the site
log_info "Starting local Jekyll server to preview the site..."
docker run --rm \
-v "${SCRIPT_DIR}:/srv/jekyll" \
-v "${BUILD_DIR}:/srv/jekyll/_site" \
-v "${SCRIPT_DIR}/.jekyll-cache:/srv/jekyll/.jekyll-cache" \
-p 4000:4000 \
"${DOCKER_IMAGE}" \
jekyll serve --incremental --host 0.0.0.0 || log_error "Failed to start Jekyll server."
log_ok "Site is available at http://localhost:4000"
Explanation
- Incremental Build with Docker:
  - `jekyll build --incremental`: Builds only the modified files, speeding up the process.
  - `--rm`: Automatically removes the container after it exits.
- Volume Mapping (`-v`):
  - Maps the project directory to `/srv/jekyll` inside the container.
  - Maps the build output to `_site` so it persists across runs.
  - Maps the Jekyll cache to enable faster incremental builds.
- Running a Local Preview Server:
  - Uses `jekyll serve --incremental` to start a local preview server on port 4000.
  - Exposes the site at http://localhost:4000.
- Logging and Error Handling:
  - The script logs every step and exits with an error message if any command fails.
Usage
- Make the script executable:
chmod +x incremental_build.sh
- Run the script:
./incremental_build.sh
Summary
This script uses an existing Docker image to incrementally build your Jekyll site. It ensures that only changes are recompiled, saving time. Additionally, it provides an option to preview the site locally with a Jekyll server exposed on port 4000.