Improve performance of Docker on macOS
The History
We have been using Docker for development purposes for over a year now, and it’s an amazing tool. It works very well on Linux, but we ran into some big issues with Docker on Mac.
In the last few months, the performance issues began to annoy us more and more. Especially under time pressure, it was hard to wait for a response in the browser: about 8–10 seconds per request, while CLI commands or test runs could take more than 10 minutes to start.
You can find numerous posts on discussion forums that confirm the slowness of Docker on Mac.
Why?
On Linux systems, Docker directly leverages the kernel of the host system, and file system mounts are native. On Windows and Mac it’s different: they do not provide a Linux kernel, so Docker starts a virtual machine with a minimal Linux installation and runs its containers in there.
Docker for Mac therefore still starts a virtual machine. It brings its own hypervisor (HyperKit) and its own shared file system (osxfs). Unfortunately, osxfs is not fast enough. You can read more about this problem in this official post from Docker: https://docs.docker.com/docker-for-mac/osxfs-caching/.
Although newer versions of Docker for Mac bring performance flags for the mount points of Docker volumes (“delegated” and “cached”), they are insufficient for our platform because of the number of files that have to be read in a request lifecycle.
Benchmark (writing random data to a file in a shared directory): dd reported “100000+0 records in / 100000+0 records out” after about 19 seconds of writing. Reading is quite similar. When you develop a medium to large dockerized application, that puts you in a bad spot.
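A benchmark like the one above can be reproduced with a plain dd write from inside the container, in a directory that is bind-mounted from the host (a sketch; the block size and count match the “100000+0 records” output, and the target file name is an example):

```shell
# Write ~100 MB of random data in 1 KB blocks and time it.
# Run this inside the container, in a bind-mounted directory.
time dd if=/dev/urandom of=test.img bs=1k count=100000
rm test.img
```

On a native (non-osxfs) file system the same command finishes far faster, which is the gap the tricks below try to close.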
Tricks we used
Here is the story of the different tips and tricks we used in order to work efficiently with Docker for Mac.
Docker configuration
- Open Docker → Preferences…
- Open the tab “File Sharing”
- Check that there is only one folder containing all your projects, and remove all others like tmp, etc. (so Docker only watches this folder for file changes)
- Open the tab “Disk” → check that the disk image location ends with Docker.raw. If not, you have to reset Docker to factory defaults (this will delete your containers)
- Open the tab “Advanced” → set CPUs to 4 → set Memory to at least 8 GB
- Apply all the changes and verify after the restart that they were set correctly
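One way to verify the CPU and memory settings from the command line after the restart is docker info (a sketch; it assumes Docker for Mac is running and requires a reachable daemon):

```shell
# Show the resources actually available inside the Docker VM
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'
```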
Database
For whatever reason, we had all the PostgreSQL-specific files lying on a “bind mount”. So every SQL statement had to go through I/O requests to the host’s file system.
It was very easy to remove this limitation: we introduced a volume for the database data. That alone already hints at the (possible) performance gain.
The difference between a volume and a bind mount is that a volume is completely managed by Docker, while a bind mount is managed by the host system and can be altered at any time, so Docker for Mac has to keep the files synchronized between host and container.
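In docker-compose terms, moving the database data into a named volume looks roughly like this (a sketch; the service name, image tag, and volume name are assumptions):

```yaml
# docker-compose.yml (excerpt)
services:
  db:
    image: postgres:12
    volumes:
      # named volume managed entirely by Docker — no osxfs round trips
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```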
Unfortunately, the performance gain from that change while developing was not as high as expected: “only 1–2 seconds per request”.
Disk Space
To be fair, we were still not completely happy, because we were bitten by another famous “Docker for Mac” issue: https://github.com/docker/for-mac/issues/371. The Docker VM image file ( ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2 ) grows continuously and can take up to 60 GB of disk space!
When running out of disk space, because of the size of the Docker.qcow2 file, the only reliable solution is to delete this file and start with an empty VM image (stop Docker for Mac, delete the file, then restart Docker for Mac). The drawback is that you lose all your docker images (and your docker volumes), and have to download or create them again.
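Before resorting to deleting the VM image, it is worth checking its size and letting Docker reclaim space itself (a sketch; the Docker.raw path below applies to newer Docker for Mac versions, older ones use the Docker.qcow2 path quoted above, and the prune command needs a running daemon):

```shell
# How big has the VM disk image grown?
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw

# Remove stopped containers, dangling images, unused networks
# and (because of --volumes) unused volumes.
docker system prune --volumes
```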
Source Code
Solution 1: Non-shared Vendors
With this solution, the vendor files are confined to the container and no longer appear on the host. We can’t debug the vendors, and autocompletion isn’t available in the IDE.
By contrast, the advantage of the default shared setup is that you see the vendor files on the host (in your IDE, for instance), but the problem is that you can’t run composer in the container (you can, but the changes won’t be reflected on the host). In the same way, if you manage to change the vendor directory on the host, you will need to build a new image to take these changes into account (using your Dockerfile).
Because these disadvantages led to many issues during development, we decided to drop this solution.
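For reference, the “non-shared vendors” trick is commonly done by masking the vendor directory with an anonymous volume (a sketch; the paths and service name are assumptions for a typical PHP project):

```yaml
# docker-compose.yml (excerpt)
services:
  app:
    build: .
    volumes:
      - .:/var/www/html        # bind mount: project source from the host
      - /var/www/html/vendor   # anonymous volume: vendor/ stays container-only
```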
Solution 2: Use volume cache
And this is the change that really, really improved performance compared with the default setup. But it still feels painfully slow…
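Enabling the cache is a one-word change on the mount definition (a sketch; the paths are examples). With :cached, the host’s view of the mount is authoritative and the container may briefly see stale data, which is usually acceptable for source code:

```yaml
# docker-compose.yml (excerpt)
services:
  app:
    volumes:
      - .:/var/www/html:cached   # relax consistency for faster reads
```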
Solution 3: Docker-sync (recommendation)
The real solution, then, is not to use the shared file system at all. Docker-sync is an alternative to the native volume sharing of Docker for Mac: it synchronizes host files into a native Docker volume in the background.
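A minimal docker-sync setup needs a docker-sync.yml next to the compose file (a sketch; the sync name and strategy are assumptions):

```yaml
# docker-sync.yml
version: "2"
syncs:
  app-sync:                      # becomes a native Docker volume
    src: "./"                    # host directory to synchronize
    sync_strategy: "native_osx"  # default strategy on macOS
```

The sync process is started with `docker-sync start`, and the resulting volume is then mounted in docker-compose like any other external volume.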
Benchmark
What’s next?
minikube vs Docker Desktop on macOS