Containerized Security Toolkit - A Guide
This blog post is a guide to using the Containerized Security Toolkit, a project I maintain. The repository contains a quickstart command to instantly jump into a running instance. This guide, however, describes more advanced workflows and usage of the toolkit.
Quickstart
In this section, I describe a more nuanced quickstart (it includes a couple of settings beyond what the README shows) for the containerized workflow. For a comprehensive and repeatable setup, check out the Example Workflow section further down in this blog post.
First, create a directory for persistence so that it can be mounted for container execution →
mkdir -p $HOME/docker_work/
Next, run the image (Docker will pull it automatically if it is not present locally) with a zsh command that tails /dev/null to keep the container alive →
# Change tag to main_apple to use on ARM64 systems
docker run --name="security_docker" \
-v $HOME/docker_work/:/persist \
--rm -d -it tanq16/sec_docker:main \
zsh -c "tail -f /dev/null"
Then, use docker exec to get a shell into the container →
docker exec -it security_docker /bin/zsh
Finally, run the following after getting into the container (this is particularly useful if you follow my CLI-Productivity-Suite) →
export TERM=xterm-256color && rm -rf /etc/localtime && \
ln -s "/usr/share/zoneinfo/$(curl -s https://ipapi.co/timezone)" /etc/localtime && \
echo $(curl -s https://ipapi.co/timezone) > /etc/timezone
Data that needs to be persisted across container runs should live in the /persist/ directory, which is present on the host at $HOME/docker_work/ (created earlier).
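For example, anything written under /persist inside the container shows up immediately in the host directory; the notes.txt file below is just a hypothetical placeholder →
# inside the container
echo "recon notes" > /persist/notes.txt
# on the host, in another terminal
cat $HOME/docker_work/notes.txt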
To stop the container after working on it, use the following →
docker stop security_docker -t 0
Note → One known issue is that terminal colors may not render correctly in the shell opened by docker exec. This can be fixed by using SSH instead; how to do that, along with other customization, is explained in the Example Workflow section next.
Example Workflow
This is an example of setting up a more robust workflow for day-to-day use. I use it for my own work, and it works fantastically! The idea behind this workflow is as follows →
- Use a single command to launch a container
- SSH into the container without hunting for the password
- Use dynamic port forwarding with SSH to reach services exposed within the container
- Persist a volume as well as the command history to make use of the shell completion features of oh-my-zsh
- Use a single command to stop the container
Implementing all of that is a one-time setup and fairly easy to follow (just looks super detailed). Use the following command to make the necessary directory structure for persistence →
mkdir -p $HOME/docker_work/
After that, add the following shell functions to the respective shell rc file →
# Start Container Function
start_work(){
    # run the container
    docker run --name="sec_docker" --rm -d \
    -v $HOME/docker_work/:/persist -p 50022:22 $@ -it tanq16/sec_docker:main \
    zsh -c "service ssh start; cp /persist/.zsh_history /root/.zsh_history; tail -f /dev/null"
    # create a new password for SSH-ing into the docker image
    new_pass=$(cat /dev/random | head -c 20 | base64 | tr -d '=+/')
    # print the new password and store in a file in the current directory
    echo "Password: $new_pass"
    echo $new_pass > current_docker_password
    # set the new password
    docker exec -e newpp="$new_pass" sec_docker zsh -c 'echo "root:$(printenv newpp)" | chpasswd'
}

# Stop Container Function
stop_work(){
    # copy (save) the command history
    docker cp sec_docker:/root/.zsh_history $HOME/docker_work/.zsh_history
    docker stop sec_docker -t 0
}
Replace the main tag with main_apple for ARM64 machines.
With that done, the two functions above accomplish the following →
start_work() →
- It starts the container in a detached state and keeps it alive with tail -f /dev/null
- It exposes port 22 at the host's port 50022 for SSH
- It accepts any additional Docker arguments (due to $@), like more port mappings or volume mounts
- It copies over the previous .zsh_history to maintain history across container runs
- It sets the root password to something random, prints it out, and puts it in a file
stop_work() →
- It copies over the shell history from the container to the persistence directory
- It then stops the container
Tip → On MacOS, add echo $new_pass | pbcopy to the start_work() function to automatically copy the password to the clipboard.
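A minimal sketch of that change, placed right after the password is printed inside start_work() (only the pbcopy line is new) →
echo "Password: $new_pass"
echo $new_pass > current_docker_password
echo $new_pass | pbcopy # copy the password to the clipboard (MacOS only)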
Next, restart the shell or source the shell rc file, after which the workflow can be used as follows →
# prints the password to SSH into the container
$ start_work
Password: pEBlVd1eBLkw01TlNpNtk4Xm6A
# ssh into the container with dynamic port forwarding
$ ssh root@localhost -p 50022 -D 65500
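The -D flag opens a local SOCKS5 proxy on host port 65500 that tunnels traffic through the container, so services bound inside the container can be reached from the host. A minimal sketch, assuming curl on the host and a hypothetical service listening on port 8080 inside the container →
# 127.0.0.1 here resolves on the container side thanks to socks5h
curl --proxy socks5h://127.0.0.1:65500 http://127.0.0.1:8080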
Then, use this inside the container (especially if you use my CLI-Productivity-Suite) →
export TERM=xterm-256color && rm -rf /etc/localtime && \
ln -s "/usr/share/zoneinfo/$(curl -s https://ipapi.co/timezone)" /etc/localtime && \
echo $(curl -s https://ipapi.co/timezone) > /etc/timezone
Finally, after completing your work, exit the SSH session and stop the container with the stop_work command.
Note → Once you start vim, refer to the Post Installation Steps of my CLI Productivity Suite, as the container uses that suite by default.
While this workflow uses SSH (for reasons I talk about later), similar functions without the SSH requirement (using only docker exec) can be written as follows →
shell_work(){
    docker exec -it sec_docker_direct zsh
}

begin_work(){
    docker run --name="sec_docker_direct" --rm -v $HOME/docker_work/:/persist -d -it tanq16/sec_docker:main
    docker exec sec_docker_direct zsh -c 'cp /persist/.zsh_history /root/.zsh_history'
}

end_work(){
    # copy (save) the command history
    docker cp sec_docker_direct:/root/.zsh_history $HOME/docker_work/.zsh_history
    docker stop sec_docker_direct -t 0
}
For the exec-based workflow, the sequence should be begin_work, then shell_work to get a shell into the container, and finally end_work to stop the container.
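Putting it together, a typical session with the exec-based functions looks like this (the work in between is just a placeholder) →
$ begin_work   # start the container and restore shell history
$ shell_work   # drop into a zsh shell inside the container
# ... do the work, then exit the container shell ...
$ end_work     # save shell history and stop the container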
Conventions
In this section, I talk about the conventions I use and recommend when using this image. The toolkit image is primarily meant to be used as a Linux system for work. The idea is to have a single container with persistent storage across multiple container runs. The .zsh_history file is also automatically copied in and out of the container by the start and stop functions.
The container was generally built with the intention of SSH-ing into it and working in tmux. While it can be exec-ed into as well, using SSH with tmux has the following advantages →
- The container can be deployed on various machines (like cloud VMs) and then SSH-ed into via non-standard ports
- Using SSH allows dynamic port forwarding, which gives easy access to any services exposed within the container
- It fixes certain visual discrepancies that appear when exec-ing into the image (like the shell autocompletion color being overridden by solid white, making it weird to look at)
The password is randomly generated and stored in the current_docker_password file in the current directory to use if you exit the SSH session and connect again.
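If the original terminal is gone, the stored password can be read back before reconnecting; for example →
cat current_docker_password
ssh root@localhost -p 50022 -D 65500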
The start_work function also includes a $@, mainly to add other port publish or volume mount arguments if necessary. One such example is adding a Go-based directory structure, which can be done with the following argument passed to the start_work command →
-v /path/to/host/go_programs/:/root/go/src
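For instance, a start_work invocation with that Go volume mount and an extra port mapping (following the port + 50000 convention; the paths and port 8080 are illustrative) would look like →
start_work -v /path/to/host/go_programs/:/root/go/src -p 58080:8080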
Other general conventions followed for building/running the images are →
- Port mapping follows the convention shared port = port + 50000
- The port for dynamic port forwarding when SSH-ing into the container is 65500
- Volume mount to /persist (helps with persistence across runs or sessions)
- General tools are installed under the /opt directory in the container, while executables are in /opt/executables (respective paths are already added in the container's rc file)
- Optionally, maintain a run.sh file in the persist directory for custom initialization setup to run when the container starts
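As a sketch, a run.sh kept at /persist/run.sh could hold the timezone setup from earlier plus any other one-time initialization (the notes directory here is hypothetical), and be invoked with zsh /persist/run.sh after starting the container →
#!/bin/zsh
# custom initialization to run once the container is up
rm -rf /etc/localtime
ln -s "/usr/share/zoneinfo/$(curl -s https://ipapi.co/timezone)" /etc/localtime
echo $(curl -s https://ipapi.co/timezone) > /etc/timezone
mkdir -p /persist/notes # hypothetical working directory for notes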
Image Builds
The toolkit image is built via CI/CD pipelines and published to DockerHub. As such, the image is ready to pull like so →
docker pull tanq16/sec_docker:main # or use main_apple for ARM64
The builds are pretty large due to some tools like az-cli, so the images are approximately 7-8 GB in size.
The images can also be built locally by cloning my repository and using the docker build command as follows →
git clone --depth 1 https://github.com/tanq16/containerized-security-toolkit
cd containerized-security-toolkit/security_docker_amd # security_docker_arm for ARM64
DOCKER_BUILDKIT=1 docker build -f builder.Dockerfile -t intermediate_builder .
DOCKER_BUILDKIT=1 docker build -t sec_docker .
docker builder prune # cleanup storage
Bonus
This section includes some quick productivity tips!
To use Python-based tools from the container, activate the Python environment in the container with the following command →
source /opt/pyenv/bin/activate
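To confirm the environment is active, check that the interpreter resolves to the environment's path →
which python # should point inside /opt/pyenv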
To SSH into the container without adding its host key to the known_hosts file, use the following command →
ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null" root@localhost -p 50022
You can also add the following alias to your host’s rc file for ease of use →
alias sshide='ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"'
This alias is automatically set up if you use my CLI Productivity Suite on the host machine.
To remove dangling images when refreshing with new builds, use the following docker image rm loop (set up as an alias in my CLI Productivity Suite) →
alias dockernonerm='for i in $(docker images -f dangling=true -q); do docker image rm $i; done'
Conclusion
I hope this blog post gives insight into how to use my containerized toolkit image for cybersecurity-related workflows. I use this container for my day-to-day work as a security professional, and I have found that it significantly improves my productivity across any environment or operating system I choose to work on. Cheers!