[1/3] Cloud-ready Burp Suite on Docker

Last updated on 2020-01-19

Why?

The primary motivation for this initiative was to be able to easily deploy an instance of Burp to a cloud provider in order to examine and potentially script/modify traffic originating from external vulnerability assessment services. These services generally launch vulnerability scans from a public IP and expose some configuration interfaces which aren’t detailed enough to know exactly what’s going on; therefore, having visibility into – and being able to manipulate – the generated traffic is a huge boon.

Soon, additional benefits started to reveal themselves:

  • Even if the service doesn’t support proxy configuration, I can set up the Burp cloud instance with its own domain name, make it the direct target of the service, and tell Burp to re-route all requests to the real target. Invisible proxying may not even be required.
  • Scalability: quickly and consistently deploy additional instances as needed; furthermore, configuring additional listeners in each instance can be done prior to, or after launch. Each listener can direct traffic to a distinct host.
  • Burp logs become available to the myriad of cloud provider tools for log collection, queries, archival, and manipulation.
  • Docker volumes allow pre-loading configuration files and certificates, and enable data sharing between the Burp container and auxiliary containers.

Auxiliary containers

I created three auxiliary containers in order to fully achieve the “cloud Burp” goal: one running a Secure Shell (SSH) server, in order to remotely manage those configuration files and certificates; and two others running noVNC and a VNC server. The VNC containers are only necessary when running Burp with its graphical interface. I chose noVNC mainly because it runs on browsers, which means nobody has to deal with standalone VNC clients.

These auxiliary containers brought their own benefits, in addition to fulfilling certain requirements:

  • SSH access, set up with public keys at container build time, provides a safe mechanism to upload and download configuration files, extensions, certificates, logs, reports, etc.
  • Collaboration: multiple people can easily access the same Burp instance at the same time when using VNC. The VNC server provides password-protected access for view-only or full-control sessions.
  • Separation of duties between containers allows a single group of auxiliary containers to support multiple Burp instances. For example, the same noVNC instance can connect to multiple Burp instances through different VNC servers.

Despite having been built for cloud deployment, these containers can run completely isolated on a local machine. The Burp container is all you need when running it in headless mode (without the GUI), or with the GUI if your machine provides an X server.

One application I’d like to explore is using these containers in an educational setting: not only being able to run instances with predefined settings at any given point in a lecture/workshop, but allowing participants to directly view the presenter’s instance in their own screens as they follow along.

Downsides

To be fair, there are downsides as well:

  • Certificate management is still painful.
  • Burp does not provide proxy authentication, as far as I know. If you don’t lock down access to the proxy listener some other way, you’ll end up with an open proxy on the Internet (a mitigation sketch follows this list).
  • The same goes for noVNC: unless you lock it down using mechanisms from the cloud provider, anyone will be able to access and use your instance as a VNC client.
  • I haven’t yet tried to run this setup with Burp Pro. I’m sure there will be licensing issues to overcome.
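For the open-proxy concern specifically, the simplest fix is the cloud provider’s firewall or security groups. If you’d rather enforce it on the Docker host itself, a minimal sketch using the DOCKER-USER iptables chain might look like the following, where 203.0.113.10 is a placeholder for the scanner’s IP:

# Drop forwarded TCP traffic to the proxy's container port unless it comes
# from the allowed source IP. DOCKER-USER is evaluated before Docker's own
# forwarding rules, so published container ports are still subject to it.
sudo iptables -I DOCKER-USER -p tcp --dport 8080 ! -s 203.0.113.10 -j DROP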

With that, let’s examine the components of the Burp container itself. While auxiliary containers will be examined in future articles, the whole setup is already here on GitHub.

dockerfile

The following is based on this dockerfile version.

# Burp issues compatibility warnings for JRE 12 and newer,
# and JRE 11 seems to work best on Debian Stretch.
FROM openjdk:11-jre-stretch

RUN apt-get update \
 && apt-get upgrade -y

RUN apt-get install --no-install-recommends -y \
    libxext6    \
    libxrender1 \
    libxtst6    \
    wget

RUN apt autoremove -y

Not much going on here:

  • Use OpenJDK 11 for compatibility reasons. Their Docker image is based on Debian Stretch.
  • Update the OS and install packages without additional recommended ones.
  • The packages libxext6, libxrender1, and libxtst6 are the minimum necessary to run Burp in GUI mode with a “local” X11 Unix socket.
    • The socket isn’t actually local, but accessible through a Docker volume mount. More info in the Volumes section.
  • wget to download the latest version of Burp, if one isn’t provided up front.
  • apt autoremove for good measure.

Next, add an unprivileged group and user, both called burp, in order to not run the proxy as root:

RUN groupadd burp && mkdir /home/burp       \
 && useradd -s /bin/bash -g burp burp       \
 && cp /etc/skel/.bashrc /home/burp/.bashrc \
 && echo "cd ~/share" >> /home/burp/.bashrc

And a few niceties:

  • Change the burp user’s shell to bash;
  • Copy a default .bashrc to their home directory; and
  • Set ~/share as the working directory upon login.

Finally…

COPY ./entrypoint.sh   /home/burp/
COPY ./user_configs    /home/burp/user_configs
COPY ./project_configs /home/burp/project_configs
COPY ./prefs.xml       /home/burp/.java/.userPrefs/burp/prefs.xml

EXPOSE 8080/tcp

ENTRYPOINT ["/bin/bash", "/home/burp/entrypoint.sh"]

  • Copy some files and directories to the image.
    • prefs.xml is set up to bypass the EULA prompt.
  • Expose 8080/tcp by default, since that’s Burp’s default listening port.
  • Set entrypoint.sh to be executed when the container runs.

entrypoint.sh

The following is based on this entrypoint.sh version.

First, set some variables and defaults, and set up argument handling:
#! /bin/bash

_shell=""
_home="/home/burp"

headless="-Djava.awt.headless=true"
default_project_json="${_home}/share/project_configs/headless.json"
default_user_json="${_home}/share/user_configs/headless.json"
url="https://portswigger.net/burp/releases/download?product=community&type=jar"

while [[ -n ${1} ]]
do
    arg_name=${1%%=*}
    arg_value=${1#*=}
    if [[ ${arg_name} == "--gui" ]]
    then
        headless="-Djava.awt.headless=false"
        default_project_json="${_home}/share/project_configs/gui.json"
        default_user_json="${_home}/share/user_configs/gui.json"
        # This is the first display attempted by "x11vnc -create",
        # which is what the "novnc_server" container uses.
        export DISPLAY=:20
    fi
    [[ ${arg_name} == "--url" ]] && url=${arg_value}
    [[ ${arg_name} == "--user" && -n ${arg_value} ]] && user_json=${arg_value}
    [[ ${arg_name} == "--proj" && -n ${arg_value} ]] && project_json=${arg_value}
    [[ ${arg_name} == "--shell" ]] && _shell="& exec /bin/bash -i"
    shift
done

[[ -z ${user_json} ]] && user_json=${default_user_json}
[[ -z ${project_json} ]] && project_json=${default_project_json}

One item of note is _shell="& exec /bin/bash -i". This sets up the --shell argument. When present, it will cause the container to execute Burp as a background process and, by means of exec, replace the current process with /bin/bash, dropping the container into an interactive shell.

In other words:

  • With --shell, the container runs /bin/bash as its primary application, and Burp in the background. When Burp exits, the container continues to run. When the primary shell exits, the container stops.
  • Without --shell, the container runs Burp as its primary application. There’s no shell. When Burp exits, the container stops.
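Concretely, here’s roughly what the string handed to bash -c works out to in each case (the jar path and config file paths are abbreviated):

# Without --shell: bash runs java in the foreground and exits when it exits.
java -Djava.awt.headless=true -jar burpsuite.jar --user-config-file=... --config-file=...

# With --shell: java is sent to the background, then exec replaces bash
# with an interactive shell, which becomes the container's primary process.
java -Djava.awt.headless=true -jar burpsuite.jar --user-config-file=... --config-file=... & exec /bin/bash -i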

Next, copy configuration files into the share directory only if they don’t already exist, lest we overwrite files that were previously placed there through a local volume or the SSH container.

mkdir -p ${_home}/share/user_configs
mkdir -p ${_home}/share/project_configs

cp -n ${_home}/user_configs/*    ${_home}/share/user_configs/
cp -n ${_home}/project_configs/* ${_home}/share/project_configs/

The share directory has no significance until mounted in a Docker volume. See the Volumes section.

Now, download Burp’s JAR file if one hasn’t been provided, give the burp user ownership of /home/burp, define java_cmd as the command to execute, and invoke it.
jar_file="${_home}/share/burpsuite.jar"
[[ -f ${jar_file} ]] || wget ${url} -O ${jar_file}

chown -R burp:burp ${_home}

java_cmd="java ${headless} -jar ${jar_file} --user-config-file=${user_json} --config-file=${project_json}"

exec chroot --userspec=burp:burp / env HOME=${_home} /bin/bash -c "${java_cmd} ${_shell}"

Let’s unpack that last line, which causes the container to run as a normal user:

  • exec – replaces the current process with what’s invoked next.
  • chroot --userspec=burp:burp – runs the command as the burp user and group, rather than running everything as root.
  • / – the newroot argument to chroot; in this case we’re not using chroot to change the root directory, only to change the user.
  • env HOME=${_home} – runs the command with the specified environment, i.e., setting /home/burp as the burp user’s home directory.
  • /bin/bash -c "${java_cmd} ${_shell}" – spawns a bash process to execute java_cmd and _shell.

Here’s what that looks like without the --shell argument:

exec
 |__chroot
     |__env (now running as 'burp:burp')
         |__bash
             |__java (becomes parent process)

And with the --shell argument:

exec
 |__chroot
     |__env (now running as 'burp:burp')
         |__bash
             |__java (in background)
             |__exec
                 |__bash (becomes parent process)
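To confirm that layout from the host, docker top is enough, since it doesn’t require ps to be installed inside the container (the container name below is a placeholder):

# "my_burp" is a hypothetical name; use the one shown by "sudo docker ps".
sudo docker top my_burp

Either way, java should appear under the burp user’s UID rather than under root.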

Volumes

Docker volumes allow file sharing between the host and the container as well as among containers, so long as the volume is mounted in all applicable containers.

With a simple setup, we can:

  • Load Burp configuration and JAR files before and after running the container.
  • Download Burp logs, project files, modified options, etc.
  • Run Burp’s GUI on a different X11 Unix socket, such as the host’s or another container’s.

For file sharing

  • Create the volume:
    sudo docker volume create burp_share
  • When issuing docker run, include the following argument:
    --mount src=burp_share,dst=/home/burp/share,ro=false
    (this is why we create ~/share here)

When running the container locally, you can access the volume’s mountpoint directly:

$ sudo docker volume inspect -f {{.Mountpoint}} burp_share
/var/lib/docker/volumes/burp_share/_data

$ sudo tree /var/lib/docker/volumes/burp_share/_data
/var/lib/docker/volumes/burp_share/_data
├── burpsuite.jar
├── project_configs
│   ├── default.json
│   ├── gui.json
│   └── headless.json
└── user_configs
    ├── default.json
    ├── gui.json
    └── headless.json

2 directories, 7 files

When running the container remotely, you’ll also need to mount the volume onto the SSH container. Then, via SSH, you can access files on the Burp container!
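As a rough sketch of that (the SSH image tag, its mount path, and the published port are placeholders; the real SSH container is covered in the next article):

# Hypothetical image tag and port mapping; the essential detail is that the
# SSH container mounts the same burp_share volume as the Burp container.
sudo docker run -d --rm \
  --mount src=burp_share,dst=/home/burp/share,ro=false \
  -p 0.0.0.0:2222:22/tcp \
  ssh_server:latest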

For GUI display

  • Create the volume:
    sudo docker volume create x11_socket
  • When issuing docker run
    • if running the container locally, include the following argument:
      --mount type=bind,src=/tmp/.X11-unix/X0,dst=/tmp/.X11-unix/X20,ro=true
      (The destination is X20 because entrypoint.sh exports DISPLAY=:20 in GUI mode, which facilitates a remote setup; there’s no need to change it for a local setup.)
    • if running the container remotely, include this instead:
      --mount src=x11_socket,dst=/tmp/.X11-unix,ro=true

When running the container remotely, you’ll need another container, like the VNC ones, to provide the display. The Burp container alone has no graphical interface. The volume must be mounted on both containers.
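A similarly hedged sketch for the display side (the image tag and published port are placeholders; the VNC containers get their own articles):

# Hypothetical image tag; the essential detail is that the VNC server mounts
# the same x11_socket volume (read-write, since it creates the X display)
# that the Burp container mounts read-only.
sudo docker run -d --rm \
  --mount src=x11_socket,dst=/tmp/.X11-unix,ro=false \
  -p 0.0.0.0:5900:5900/tcp \
  vnc_server:latest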

Building and running

You can find interactive build and execution scripts for all containers at github.com/elespike/burp_containers!

Building is easy. After ensuring that the current directory is where the dockerfile resides:
$ sudo docker volume create burp_share
$ sudo docker volume create x11_socket
$ sudo docker build -t burp:latest .

Note the trailing ., which just represents the current directory.
-t burp:latest tags the build so we can reference it when issuing docker run. See below.

Headless

  • sudo docker run -d -it --rm – runs the container as a daemon (background process) with an interactive TTY, and removes it from disk when stopped.
  • --mount src=burp_share,dst=/home/burp/share,ro=false – see Volumes, “For file sharing”.
  • -p 0.0.0.0:8080:8080/tcp – listens on all of the host’s interfaces (0.0.0.0) on port 8080 and forwards all TCP traffic to port 8080 in the container.
  • burp:latest – runs the build tagged burp:latest.
  • --url=https://... – URL argument to entrypoint.sh.
  • --user=/home/burp/share/user_configs/headless.json – user configuration file argument to entrypoint.sh.
  • --proj=/home/burp/share/project_configs/headless.json – project configuration file argument to entrypoint.sh.
  • --shell – shell argument to entrypoint.sh.
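Assembled into a single command, that becomes (with the --url value left as a placeholder, as above):

sudo docker run -d -it --rm \
  --mount src=burp_share,dst=/home/burp/share,ro=false \
  -p 0.0.0.0:8080:8080/tcp \
  burp:latest \
  --url=https://... \
  --user=/home/burp/share/user_configs/headless.json \
  --proj=/home/burp/share/project_configs/headless.json \
  --shell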

GUI

  • sudo docker run -d -it --rm – runs the container as a daemon (background process) with an interactive TTY, and removes it from disk when stopped.
  • --mount src=burp_share,dst=/home/burp/share,ro=false – see Volumes, “For file sharing”.
  • --mount src=x11_socket,dst=/tmp/.X11-unix,ro=true – see Volumes, “For GUI display”.
  • -p 0.0.0.0:8080:8080/tcp – listens on all of the host’s interfaces (0.0.0.0) on port 8080 and forwards all TCP traffic to port 8080 in the container.
  • burp:latest – runs the build tagged burp:latest.
  • --gui – GUI argument to entrypoint.sh.
  • --url=https://... – URL argument to entrypoint.sh.
  • --user=/home/burp/share/user_configs/headless.json – user configuration file argument to entrypoint.sh.
  • --proj=/home/burp/share/project_configs/headless.json – project configuration file argument to entrypoint.sh.
  • --shell – shell argument to entrypoint.sh.
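Assembled, with the extra X11 mount and the --gui flag:

sudo docker run -d -it --rm \
  --mount src=burp_share,dst=/home/burp/share,ro=false \
  --mount src=x11_socket,dst=/tmp/.X11-unix,ro=true \
  -p 0.0.0.0:8080:8080/tcp \
  burp:latest \
  --gui \
  --url=https://... \
  --user=/home/burp/share/user_configs/headless.json \
  --proj=/home/burp/share/project_configs/headless.json \
  --shell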

That’s that! Once again, definitions for all containers as well as interactive build and run scripts are present at github.com/elespike/burp_containers.

Next up, let’s take a look at the auxiliary SSH container for remote file access:
