Last updated on 2020-01-19
Why?
The primary motivation for this initiative was to be able to easily deploy an instance of Burp to a cloud provider in order to examine and potentially script/modify traffic originating from external vulnerability assessment services. These services generally launch vulnerability scans from a public IP and expose some configuration interfaces which aren’t detailed enough to know exactly what’s going on; therefore, having visibility into – and being able to manipulate – the generated traffic is a huge boon.
Soon, additional benefits started to reveal themselves:
- Even if the service doesn’t support proxy configuration, I can set up the Burp cloud instance with its own domain name, make it the direct target of the service, and tell Burp to re-route all requests to the real target. Invisible proxying may not even be required.
- Scalability: quickly and consistently deploy additional instances as needed; furthermore, configuring additional listeners in each instance can be done prior to, or after launch. Each listener can direct traffic to a distinct host.
- Burp logs become available to the myriad of cloud provider tools for log collection, queries, archival, and manipulation.
- Docker volumes allow pre-loading configuration files and certificates, and enable data sharing between the Burp container and auxiliary containers.
Auxiliary containers
I created three auxiliary containers in order to fully achieve the “cloud Burp” goal: one running a Secure Shell (SSH) server, in order to remotely manage those configuration files and certificates; and two others running noVNC and a VNC server. The VNC containers are only necessary when running Burp with its graphical interface. I chose noVNC mainly because it runs in the browser, so nobody has to deal with standalone VNC clients.
These auxiliary containers brought their own benefits, in addition to fulfilling certain requirements:
- SSH access, set up with public keys at container build time, provides a safe mechanism to upload and download configuration files, extensions, certificates, logs, reports, etc.
- Collaboration: multiple people can easily access the same Burp instance at the same time when using VNC. The VNC server provides password-protected access for view-only or full-control sessions.
- Separation of duties between containers allows a single group of auxiliary containers to support multiple Burp instances. For example, the same noVNC instance can connect to multiple Burp instances through different VNC servers.
Despite having been built for cloud deployment, these containers can run completely isolated on a local machine. The Burp container is all you need when running it in headless mode (without the GUI), or with the GUI if your machine provides an X server.
One application I’d like to explore is using these containers in an educational setting: not only being able to run instances with predefined settings at any given point in a lecture/workshop, but allowing participants to directly view the presenter’s instance in their own screens as they follow along.
Downsides
To be fair, there are downsides as well:
- Certificate management is still painful.
- Burp does not provide proxy authentication, as far as I know. If you don’t lock down access to the proxy listener some other way, you’ll end up with an open proxy on the Internet.
- The same goes for noVNC: unless you lock it down using mechanisms from the cloud provider, anyone will be able to access and use your instance as a VNC client.
- I haven’t yet tried to run this setup with Burp Pro. I’m sure there will be licensing issues to overcome.
With that, let’s examine the components of the Burp container itself. While auxiliary containers will be examined in future articles, the whole setup is already here on GitHub.
dockerfile
The following is based on this dockerfile version.
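In outline, and with the base-image tag and exact command ordering being assumptions on my part, those first lines boil down to something like this:

# Reconstructed sketch, not the verbatim dockerfile.
# OpenJDK 11 on Debian Stretch, for Burp compatibility.
FROM openjdk:11-jdk-stretch

# Update the OS, then install only the required packages (no recommended extras):
# the three X11 libraries needed for GUI mode, plus wget to fetch Burp
# when a JAR isn't supplied up front.
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends \
        libxext6 libxrender1 libxtst6 wget && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*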
Not much going on here:
- Use OpenJDK 11 for compatibility reasons. Their Docker image is based on Debian Stretch.
- Update the OS and install packages without additional recommended ones.
- The packages libxext6, libxrender1, and libxtst6 are the minimum necessary to run Burp in GUI mode with a “local” X11 Unix socket.
  - The socket isn’t actually local, but accessible through a Docker volume mount. More info in the Volumes section.
- wget to download the latest version of Burp, if one isn’t provided up front.
- apt autoremove for good measure.
Next, create a dedicated user, burp, in order to not run the proxy as root:
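Roughly along these lines; the exact commands are assumptions, but they cover the user creation plus the niceties listed next:

# Reconstructed sketch, not the verbatim dockerfile.
# Unprivileged user so the proxy never runs as root; bash as its login shell.
RUN useradd --create-home --shell /bin/bash burp

# Default .bashrc for that user.
COPY .bashrc /home/burp/.bashrc

# Land in ~/share upon login (see the Volumes section).
WORKDIR /home/burp/share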
And a few niceties:
- Change the burp user’s shell to bash;
- Copy a default .bashrc to their home directory; and
- Set ~/share as the working directory upon login.
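A sketch of the remainder; the copied source and destination paths are assumptions, while the exposed port and entrypoint follow directly from the notes below:

# Reconstructed sketch, not the verbatim dockerfile.
# prefs.xml pre-accepts the EULA so Burp starts without prompting
# (destination path is an assumption); default configuration files
# are staged for the entrypoint to use.
COPY prefs.xml /home/burp/.java/.userPrefs/prefs.xml
COPY default_configs/ /home/burp/default_configs/
COPY entrypoint.sh /usr/local/bin/entrypoint.sh

# Burp's default proxy listener.
EXPOSE 8080/tcp

# All runtime decisions happen in the entrypoint script.
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]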
- Copy some files and directories to the image.
  - prefs.xml is set up to bypass the EULA prompt.
- Expose 8080/tcp by default, since that’s Burp’s default listening port.
- Set entrypoint.sh to be executed when the container runs.
entrypoint.sh
The following is based on this entrypoint.sh version.
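In rough strokes, the first half collects the supported arguments; apart from _shell and _home, the variable names and parsing details here are assumptions:

#!/bin/bash
# Reconstructed sketch, not the verbatim entrypoint.sh.

_home='/home/burp'
_shell=''  # stays empty unless --shell is passed
_gui=''    # headless by default
_url=''    # where to download Burp from, if no JAR was provided
_user=''   # path to a user configuration file
_proj=''   # path to a project configuration file

for _arg in "$@"; do
    case "${_arg}" in
        # Run Burp in the background, then replace the wrapper shell
        # with an interactive bash (discussed below).
        --shell)  _shell="& exec /bin/bash -i" ;;
        # GUI mode; DISPLAY :20 matches the X20 socket mount described
        # in the Volumes section (assumption).
        --gui)    _gui='yes'; export DISPLAY=':20' ;;
        --url=*)  _url="${_arg#--url=}" ;;
        --user=*) _user="${_arg#--user=}" ;;
        --proj=*) _proj="${_arg#--proj=}" ;;
    esac
done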
One item of note is _shell="& exec /bin/bash -i". This sets up the --shell argument. When present, it will cause the container to execute Burp as a background process and, by means of exec, replace the current process with /bin/bash, dropping the container into an interactive shell.
In other words:
- With --shell, the container runs /bin/bash as its primary application, and Burp in the background. When Burp exits, the container continues to run. When the primary shell exits, the container stops.
- Without --shell, the container runs Burp as its primary application. There’s no shell. When Burp exits, the container stops.
Next, the script populates the share directory, copying default files only if they don’t already exist, lest we overwrite files that were previously placed there through a local volume or the SSH container.
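Something along these lines, with default_configs as an assumed staging directory inside the image, and the wget step covering the earlier “download Burp if one isn’t provided” note:

# Reconstructed sketch, not the verbatim entrypoint.sh.
mkdir -p "${_home}/share"

# If no Burp JAR was placed in the share, fetch one from --url.
if [ ! -f "${_home}/share/burpsuite.jar" ] && [ -n "${_url}" ]; then
    wget -q -O "${_home}/share/burpsuite.jar" "${_url}"
fi

# Copy defaults only when the destination doesn't exist yet, so files
# provided via a local volume or the SSH container survive.
for _file in "${_home}"/default_configs/*; do
    _dest="${_home}/share/$(basename "${_file}")"
    [ -e "${_dest}" ] || cp -r "${_file}" "${_dest}"
done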
The share directory has no significance until mounted in a Docker volume. See the Volumes section.
Finally, give the burp user ownership of /home/burp, define java_cmd as the command to execute, and invoke it.
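Approximately as follows; the Burp command-line options are assumptions, while the final line matches the breakdown in the table below:

# Reconstructed sketch, not the verbatim entrypoint.sh.
chown -R burp:burp "${_home}"

# Assemble the java command; the Burp flags shown here are assumptions.
_headless='-Djava.awt.headless=true'
[ -n "${_gui}" ] && _headless=''
java_cmd="java ${_headless} -jar ${_home}/share/burpsuite.jar"
[ -n "${_user}" ] && java_cmd="${java_cmd} --user-config-file=${_user}"
[ -n "${_proj}" ] && java_cmd="${java_cmd} --config-file=${_proj}"

exec chroot --userspec=burp:burp / env HOME=${_home} /bin/bash -c "${java_cmd} ${_shell}"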
Let’s unpack that last line, which causes the container to run as a normal user:
| Fragment | What it does |
| --- | --- |
| exec | Replaces the current process with what’s invoked next. |
| chroot --userspec=burp:burp | Runs a command as the burp user/group rather than running everything as root. |
| / | The newroot argument to chroot. In this case, we’re not using chroot to change the root directory; just to change the user. |
| env HOME=${_home} | Runs a command with the specified environment; i.e., setting /home/burp as the burp user’s home directory. |
| /bin/bash -c "${java_cmd} ${_shell}" | Spawns a bash process to execute java_cmd and _shell. |
Here’s what that looks like without the --shell argument:
exec
|__chroot
|__env (now running as 'burp:burp')
|__bash
|__java (becomes parent process)
And with the --shell argument:
exec
|__chroot
|__env (now running as 'burp:burp')
|__bash
|__java (in background)
|__exec
|__bash (becomes parent process)
Volumes
Docker volumes allow file sharing between the host and the container as well as among containers, so long as the volume is mounted in all applicable containers.
With a simple setup, we can:
- Load Burp configuration and JAR files before and after running the container.
- Download Burp logs, project files, modified options, etc.
- Run Burp’s GUI on a different X11 Unix socket, such as the host’s or another container’s.
For file sharing
- Create the volume: sudo docker volume create burp_share
- When issuing docker run, include the following argument: --mount src=burp_share,dst=/home/burp/share,ro=false (this is why we create ~/share here)
When running the container locally, you can access the volume’s mountpoint directly:
$ sudo docker volume inspect -f {{.Mountpoint}} burp_share
/var/lib/docker/volumes/burp_share/_data
$ sudo tree /var/lib/docker/volumes/burp_share/_data
/var/lib/docker/volumes/burp_share/_data
├── burpsuite.jar
├── project_configs
│ ├── default.json
│ ├── gui.json
│ └── headless.json
└── user_configs
├── default.json
├── gui.json
└── headless.json
2 directories, 7 files
When running the container remotely, you’ll also need to mount the volume onto the SSH container. Then, via SSH, you can access files on the Burp container!
For GUI display
- Create the volume: sudo docker volume create x11_socket
- When issuing docker run…
  - if running the container locally, include the following argument: --mount type=bind,src=/tmp/.X11-unix/X0,dst=/tmp/.X11-unix/X20,ro=true (To facilitate a remote setup, X20 is the destination socket. No need to change it for a local setup.)
  - if running the container remotely, include this instead: --mount src=x11_socket,dst=/tmp/.X11-unix,ro=true
When running the container remotely, you’ll need another container, like the VNC ones, to provide the display. The Burp container alone has no graphical interface. The volume must be mounted on both containers.
Building and running
You can find interactive build and execution scripts for all containers at github.com/elespike/burp_containers!
To build manually, run the following from the directory where the dockerfile resides:
$ sudo docker volume create burp_share
$ sudo docker volume create x11_socket
$ sudo docker build -t burp:latest .
Note the trailing ., which just represents the current directory. -t burp:latest tags the build so we can reference it when issuing docker run. See below.
Headless
| Argument | Purpose |
| --- | --- |
| sudo docker run -d -it --rm | Runs the container as a daemon (background process) with an interactive TTY, and removes it from disk when stopped. |
| --mount src=burp_share,dst=/home/burp/share,ro=false | See volumes for file sharing. |
| -p 0.0.0.0:8080:8080/tcp | Listens on all of the host’s interfaces (0.0.0.0), on port 8080, and forwards all TCP traffic to port 8080 on the container. |
| burp:latest | Runs the build tagged burp:latest. |
| --url=https://... | URL argument to entrypoint.sh. |
| --user=/home/burp/share/user_configs/headless.json | User configuration file argument to entrypoint.sh. |
| --proj=/home/burp/share/project_configs/headless.json | Project configuration file argument to entrypoint.sh. |
| --shell | Shell argument to entrypoint.sh. |
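Assembled into a single command, that looks like this (the download URL is a placeholder):

$ sudo docker run -d -it --rm \
    --mount src=burp_share,dst=/home/burp/share,ro=false \
    -p 0.0.0.0:8080:8080/tcp \
    burp:latest \
    --url=https://example.com/burpsuite.jar \
    --user=/home/burp/share/user_configs/headless.json \
    --proj=/home/burp/share/project_configs/headless.json \
    --shell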
GUI
| Argument | Purpose |
| --- | --- |
| sudo docker run -d -it --rm | Runs the container as a daemon (background process) with an interactive TTY, and removes it from disk when stopped. |
| --mount src=burp_share,dst=/home/burp/share,ro=false | See volumes for file sharing. |
| --mount src=x11_socket,dst=/tmp/.X11-unix,ro=true | See volumes for GUI display. |
| -p 0.0.0.0:8080:8080/tcp | Listens on all of the host’s interfaces (0.0.0.0), on port 8080, and forwards all TCP traffic to port 8080 on the container. |
| burp:latest | Runs the build tagged burp:latest. |
| --gui | GUI argument to entrypoint.sh. |
| --url=https://... | URL argument to entrypoint.sh. |
| --user=/home/burp/share/user_configs/headless.json | User configuration file argument to entrypoint.sh. |
| --proj=/home/burp/share/project_configs/headless.json | Project configuration file argument to entrypoint.sh. |
| --shell | Shell argument to entrypoint.sh. |
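Assembled into a single command (the download URL is a placeholder; the gui.json variants from the share listing could be substituted for the configuration files):

$ sudo docker run -d -it --rm \
    --mount src=burp_share,dst=/home/burp/share,ro=false \
    --mount src=x11_socket,dst=/tmp/.X11-unix,ro=true \
    -p 0.0.0.0:8080:8080/tcp \
    burp:latest \
    --gui \
    --url=https://example.com/burpsuite.jar \
    --user=/home/burp/share/user_configs/headless.json \
    --proj=/home/burp/share/project_configs/headless.json \
    --shell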
That’s that! Once again, definitions for all containers as well as interactive build and run scripts are present at github.com/elespike/burp_containers.
Next up, let’s take a look at the auxiliary SSH container for remote file access.