Dockerhero is a local development tool. Out of the box, it should only take a `make start` to get all your local PHP projects working. Yes, all of them. At the same time.
It has support for Laravel, CodeIgniter, WordPress and other PHP projects.
It has dynamic docroot support for `public`, `public_html` and `html`, depending on which folder is found in your project. Zero setup needed!
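For illustration, three hypothetical project layouts that would all be picked up automatically (the project names are made up):

```
webdev/
├── laravel-app/public/index.php       # served from public
├── legacy-site/public_html/index.php  # served from public_html
└── plain-site/html/index.php          # served from html
```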
The goal is also to keep it customizable. You can easily add your own NGINX configurations and cronjobs, and via phpMyAdmin you can create your own databases.
Dockerhero includes the following software (containers):
- NGINX (latest)
- MySQL (5.7)
- Redis (latest)
- PHP (8.3-fpm by default, or choose a different version)
- Mailpit
- phpMyAdmin
- phpRedisAdmin
- MinIO S3 object storage
- and more to come!
Dockerhero includes the following useful tools:
- phpMyAdmin
- phpRedisAdmin
- Cron
- Mailpit
- Composer
- Xdebug (with remote debugging support)
- NVM
- NPM
- Yarn
- Laravel Artisan autocompletion
- Laravel Dusk support
- laravel-dump-server support
- Wkhtmltopdf
- and more to come!
Localtest.me is used to make everything work without editing your hosts file! Just like magic!
- Installation
- Updating
- Usage
- Databases
- CLI Access
- Custom NGINX configs
- Cronjobs
- Mailpit
- MinIO
- Overriding default settings
- Connecting from PHP to a local project via URL
- Making a local website publicly available
- Connecting to a docker container from your host
- Miscellaneous
- Known issues
- Contributing
- Thank you
- Project links
- Todo
Follow the instructions on the Docker website to install Docker and Docker Compose.
Next, it is essential to make sure Dockerhero is inside the folder containing all the projects you wish to use with Dockerhero. So if you want https://mysuperproject.localtest.me to be accessible, place and run Dockerhero inside the same folder `mysuperproject` is located in. For example, if the path to `mysuperproject` is `/home/john/webdev/mysuperproject`, Dockerhero needs to be located in `/home/john/webdev/dockerhero`.
This is because Dockerhero mounts its parent folder (`./../`) as `/var/www/projects/`, which is the location NGINX will look in when it receives a request on http://*.localtest.me
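Putting that together, a hedged illustration of the expected layout and the resulting URLs (the project names are hypothetical):

```
/home/john/webdev/
├── dockerhero/        -> the Dockerhero checkout itself
├── mysuperproject/    -> /var/www/projects/mysuperproject -> https://mysuperproject.localtest.me
└── another-project/   -> /var/www/projects/another-project -> https://another-project.localtest.me
```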
Remember: anything you do inside the container is deleted upon closing Docker! Only changes to mounted folders (like your projects and databases) are persisted, because those changes are actually made on your system.
By default, PHP 8.2 is active. If you would like to change this to another version, you can do so by overriding the image in the `docker-compose.override.yml`.
For example, if you want to use PHP 8.0, it might look like this:
```yaml
services:
  php:
    build: ./php/8.0
    # build: ./php/8.1
    # build: ./php/8.2
    # build: ./php/8.3
    # build: ./php/8.4
  workspace:
    build: ./workspace/php8.0
    # build: ./workspace/php8.1
    # build: ./workspace/php8.2
    # build: ./workspace/php8.3
    # build: ./workspace/php8.4
```
Available versions are: `8.4`, `8.3`, `8.2`, `8.1` and `8.0`.
If you would like to use an even older PHP version, you can use one of the old images:

```yaml
image: johanvanhelden/dockerhero-php-[VERSION_NUMBER]-fpm:latest
```

Replace `[VERSION_NUMBER]` with one of the following PHP versions: `7.4`, `7.3`, `7.2`, `7.1`, `5.6`, `5.4`.
The same goes for the workspace:

```yaml
image: johanvanhelden/dockerhero-workspace:php[VERSION_NUMBER]
```

Replace `[VERSION_NUMBER]` with one of the following PHP versions: `7.4`, `7.3`, `7.2`, `7.1`.
If you are going to use an image, please comment out the `build:` lines in the `docker-compose.yml` file.
For more information, please see this section: Overriding default settings
Dockerhero has full support for https. This is done with a self-signed certificate. In order to skip the warning in your browser, you can trust the certificate by importing it in the browser or keychain. The certificate can be found here.
If you are on Windows and use Firefox, you should install the certificate in Windows and allow Firefox to use Windows' certificates.
This part is however entirely optional, and you do not have to do this. You can simply ignore the browser warning and continue.
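If you do want to trust it system-wide, here is a minimal sketch for Debian/Ubuntu. The certificate path below is a placeholder; use the actual file linked above:

```bash
# Sketch for Debian/Ubuntu: trust the self-signed certificate system-wide.
# "./dockerhero.crt" is a hypothetical path; use the certificate file linked above.
sudo cp ./dockerhero.crt /usr/local/share/ca-certificates/dockerhero.crt
sudo update-ca-certificates
```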
Dockerhero works great with WSL2 but requires one additional setup step if you want to execute, for example, artisan commands or PHPUnit tests from outside Dockerhero.
Simply add the following block at the top of the Windows hosts file (`C:\Windows\System32\drivers\etc\hosts`):
```
## DOCKERHERO HOSTS BLOCK START ##
0.0.0.0 dockerhero_web
0.0.0.0 dockerhero_php
0.0.0.0 dockerhero_workspace
0.0.0.0 dockerhero_redis
0.0.0.0 dockerhero_db
0.0.0.0 dockerhero_mail
0.0.0.0 dockerhero_minio
## DOCKERHERO HOSTS BLOCK END ##
```
And (after making sure any Docker instance is closed) restart WSL2 using the Windows command prompt: `$ wsl --shutdown`.
Simply download or pull the latest release from GitHub and re-build the images with `make up-build`.
To ensure you have the latest images, you can run `make pull` in the Dockerhero folder.
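In practice, updating usually boils down to something like this (a sketch; it assumes you installed Dockerhero as a git clone):

```bash
cd /path/to/dockerhero   # your Dockerhero folder
git pull                 # grab the latest release (or download it manually)
make pull                # fetch the latest images
make up-build            # re-build and start the containers
```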
There are multiple ways to start Dockerhero. The most common way is to `cd` into the Dockerhero folder on your local machine and execute:
make up
This will start all Docker containers in the background.
If you want real-time log information and see what the containers are doing, simply use:
make up-verbose
If anything fails, or you want to shut it down, you can simply press `ctrl-c` in the CLI and it will shut down gracefully.
If you want to ensure a fresh build of the containers is used, you can use:
make up-build
If you want to start docker using the previous state of the containers, you can use:
make start
To stop the containers, simply stop the `make up-verbose` process using `ctrl-c`.
If you had it running in the background, you can use:
make stop
Or if you would like Docker to remove the containers, networks, volumes and images, use:
make down
This is a good option if you are going to update the images.
If you need to access private Composer packages, you might want to link your local `/home/username/.composer` folder (containing your auth.json file) and `/home/username/.ssh` folder (containing any SSH keys necessary to clone packages) to Dockerhero. You can do so by adding a new volume to the workspace image in your `docker-compose.override.yml` (if you do not have one, please create it) like so:
```yaml
services:
  workspace:
    volumes:
      - /home/username/.composer:/home/dockerhero/.composer
      - /home/username/.ssh:/home/dockerhero/.ssh
```
You will now be able to install and update private packages inside Dockerhero.
The database host and port you would need to use in your projects are:
- MySQL host: `dockerhero_db`
- MySQL port: `3306`
You can use any GUI you prefer. For example DBeaver.
Simply set up a new connection to host `127.0.0.1` on port `3306` with the `root` user using the password `dockerhero`.
If you would prefer not to install any tools, you can also use the phpMyAdmin tool that is bundled with Dockerhero.
Make sure to start Dockerhero with the `--profile phpmyadmin` flag to ensure phpMyAdmin starts (`docker compose --profile phpmyadmin up`).
You can visit phpMyAdmin by going to http://localhost:8026/
If you want to import databases from the file system, place them in `./databases/upload`.
This is what a working configuration would look like:
```env
DB_CONNECTION=mysql
DB_HOST=dockerhero_db
DB_PORT=3306
DB_DATABASE=my_project_db
DB_USERNAME=my_project
DB_PASSWORD=my_project
```
This assumes you have created the database and a user with that password using, for example, phpMyAdmin.
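Alternatively, you can create them from the command line. A minimal sketch, assuming the MySQL root password `dockerhero` used elsewhere in this README and the hypothetical names from the example above:

```bash
# Hypothetical sketch: create the database and user from the env example above.
# Assumes the MySQL root password "dockerhero" documented in this README.
docker exec dockerhero_db mysql -u root -pdockerhero -e "
  CREATE DATABASE IF NOT EXISTS my_project_db;
  CREATE USER IF NOT EXISTS 'my_project'@'%' IDENTIFIED BY 'my_project';
  GRANT ALL PRIVILEGES ON my_project_db.* TO 'my_project'@'%';
  FLUSH PRIVILEGES;
"
```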
If you would like to change the MySQL version, you can do so by editing the `docker-compose.override.yml` (if you do not have one, please create it) like so:
```yaml
services:
  db:
    image: mysql:5.6
```
If you changed the MySQL image to a newer version, it will be necessary to upgrade your current databases.
You can do so by logging into the database container and running the `mysql_upgrade` command, like so:
docker exec -it dockerhero_db bash
Once inside the database container, you need to run the following command:
mysql_upgrade -u root -pdockerhero
After the upgrade is done, please restart Dockerhero.
By default, I've set the same SQL mode as MySQL 5.6 to ensure maximum backwards compatibility. If you would like to set it to the 5.7 default setting, you can do so by editing the `docker-compose.override.yml` (if you do not have one, please create it) like so:
```yaml
services:
  db:
    command: --sql_mode="ONLY_FULL_GROUP_BY"
```
In order to use Redis in your projects, you need to define the following host:
- Redis host: `dockerhero_redis`
- Redis port: `6379`
You can use any GUI you prefer. For example Another Redis Desktop Manager.
Simply set up a new connection to host `127.0.0.1` on port `6379`.
If you would prefer not to install any tools, you can also use the phpRedisAdmin tool that is bundled with Dockerhero.
Make sure to start Dockerhero with the `--profile phpredisadmin` flag to ensure phpRedisAdmin starts (`docker compose --profile phpredisadmin up`).
You can visit phpRedisAdmin by going to http://localhost:8027
This is what a working configuration would look like:
```env
CACHE_DRIVER=redis
-- snip --
QUEUE_CONNECTION=redis
-- snip --
SESSION_DRIVER=redis
-- snip --
REDIS_HOST=dockerhero_redis
REDIS_PASSWORD=null
REDIS_PORT=6379
```
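To quickly verify that Redis is up, you can ping it from inside the Redis container; a small sketch (`redis-cli` ships with the official Redis image):

```bash
# Sketch: ping Redis from inside the Redis container.
docker exec -it dockerhero_redis redis-cli ping
# Expected output: PONG
```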
You can enter the bash environment of the containers by executing:
docker exec -it --user=dockerhero dockerhero_workspace bash
All projects are available in /var/www/projects/
You can replace `dockerhero_workspace` with any container name. The `--user=dockerhero` part is needed to prevent files from being generated with the root user and group. You will need to leave out this argument for other containers.
When you enter the bash environment, you will be starting in `/var/www/projects`.
If you are inside a Laravel folder, you can type `artisan` (instead of `./artisan` or `php artisan`) and press Tab to autocomplete.
Make your life easier and create a function in your ~/.bash_aliases file like so:
```bash
sshDockerhero() {
    docker exec --user=dockerhero -it dockerhero_workspace bash
}
```
Now, in a new terminal, you can simply execute `sshDockerhero` and you will be inside the container.
You can place your own `*.conf` files into the `./nginx/conf` folder. They will be automatically included once the container starts.
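As an illustration, a sketch that drops in a custom server block for a project with a non-standard docroot. The server name, project path, file name and the php-fpm address (`dockerhero_php:9000`) are assumptions, so adjust them to your setup and to Dockerhero's bundled NGINX configuration:

```bash
# Hypothetical example: a dedicated server block for a project whose docroot
# is not one of the auto-detected folders. Names, paths and the php-fpm
# address are assumptions; adjust them to your setup.
cat > ./nginx/conf/mysuperproject.conf <<'EOF'
server {
    listen 80;
    server_name mysuperproject.localtest.me;
    root /var/www/projects/mysuperproject/web;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass dockerhero_php:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
EOF
```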
Create a new file in `./crons/` called `crons`. In this file, define all the cron lines you want. For an example, see the `./crons/crons.sample` file.
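For instance, a hedged sketch that appends a Laravel scheduler entry for a hypothetical project (verify the exact format against `./crons/crons.sample`):

```bash
# Hypothetical: append a Laravel scheduler entry for "mysuperproject" to the crons file.
# Check ./crons/crons.sample for the exact format Dockerhero expects.
echo '* * * * * php /var/www/projects/mysuperproject/artisan schedule:run >> /dev/null 2>&1' >> ./crons/crons
```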
All outgoing mail is caught by default. You do not need to configure anything. To view the email that has been sent, visit the Mailpit GUI.
If, for some reason, this auto catching does not work properly, you can set the `.env` settings for a Laravel project like so:
```env
MAIL_MAILER=smtp
MAIL_HOST=dockerhero_mail
MAIL_PORT=1025
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
```
To view, manage and configure the S3 storage buckets, visit the MinIO GUI.
You can use username `root` and password `dockerhero`.
In order to make it work, you can set the `.env` settings like so:

```env
AWS_ACCESS_KEY_ID=root
AWS_SECRET_ACCESS_KEY=dockerhero
AWS_DEFAULT_REGION=eu-west-1
AWS_BUCKET=YOUR_BUCKET_NAME
AWS_URL=http://minio:9000
AWS_ENDPOINT=http://minio:9000
AWS_USE_PATH_STYLE_ENDPOINT=true
```
Replace `YOUR_BUCKET_NAME` with the actual name of the bucket you have created in the GUI.
You can create a brand new `docker-compose.override.yml` in the root of Dockerhero to override default settings or customize things.
It might look a bit like this:
```yaml
services:
  php:
    extra_hosts:
      - "projectname.localtest.me:172.25.0.12"
  workspace:
    extra_hosts:
      - "projectname.localtest.me:172.25.0.12"
```
Sometimes you might need to spin up more services, for example an SFTP server. You can easily achieve this by adding these services to your `docker-compose.override.yml`, in the `services:` section, like this:
```yaml
services:
  sftp:
    image: atmoz/sftp
    volumes:
      - ./sftp/storage:/home/sftpuser/storage
    ports:
      - "2222:22"
    command: sftpuser:password:1001
```
This snippet adds a lightweight SFTP server to your Dockerhero installation, binds it to your local port 2222, and maps the local folder `./sftp/storage` to the container (don't forget to create this folder). Files in this folder can now be accessed through sftp://sftpuser:password@localhost:2222/storage
Add the following entry to the `docker-compose.override.yml` file in the `php:` section:

```yaml
extra_hosts:
  - "projectname.localtest.me:172.25.0.12"
```
Where `172.25.0.12` is the IP of the `dockerhero_web` container.
Now, if PHP attempts to connect to projectname.localtest.me, it will not connect to its own localhost, but to the NGINX container.
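To check that the host entry is picked up, you could request the URL from inside the workspace container. A small sketch, assuming `curl` is available in the workspace image and that you also added the `extra_hosts` entry to the `workspace:` section as in the override example above:

```bash
# Sketch: verify that projectname.localtest.me resolves to the NGINX container
# from inside the workspace (assumes curl is available and the extra_hosts
# entry was added to the workspace service as well).
docker exec -it --user=dockerhero dockerhero_workspace curl -I http://projectname.localtest.me
```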
If you are developing an API or webhook integration, or if you want to demonstrate something to someone, it can be extremely useful to forward your local website to the public internet.
In order to do this:
- Download ngrok from: https://ngrok.com/
- Extract the zip file
- Run the following command from the command line:
./ngrok http 127.0.0.1:80 -host-header=project.localtest.me
Where the host-header flag contains the URL of the project you would like to forward.
Ngrok will now present you with a unique ngrok URL. This is the URL you can give out to clients or use in the API/webhook settings.
If you want to connect to a Docker container from your host, for example to connect to the `dockerhero_db` container using a local MySQL application, you can add all the Docker containers to your hosts file. Simply paste the following container -> IP mapping (the IPs are hardcoded and should never change):
```
172.25.0.12 dockerhero_web
172.25.0.11 dockerhero_php
172.25.0.13 dockerhero_db
172.25.0.10 dockerhero_workspace
172.25.0.15 dockerhero_redis
172.25.0.14 dockerhero_mail
172.25.0.18 dockerhero_minio
```
Now, on your host, `dockerhero_db` should also point to the database container.
Pro tip: this also makes it possible to execute a test suite on your host system using the same environment file, because your host now knows how to resolve all the container names.
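For example, a quick sketch of connecting from your host with the MySQL CLI client (assuming you have a local `mysql` client installed and use the default root password mentioned above):

```bash
# Sketch: connect to the database container by name from the host
# (requires the hosts-file mapping above and a local mysql client).
mysql -h dockerhero_db -P 3306 -u root -pdockerhero
```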
In order to make Laravel Dusk work, you need to add your Laravel project URL to the "extra_hosts" section of the `docker-compose.yml` workspace section, as explained in the "Connecting from PHP to a local project via URL" section.
laravel-dump-server is a great package that allows you to capture dump contents so that they do not interfere with HTTP/API responses.
In order to make it work with Dockerhero, simply override the config and point it to the workspace container, like so:
'host' => 'tcp://dockerhero_workspace:9912',
Next, SSH into the workspace container and simply run `artisan dump-server`, then start dumping to your heart's content.
Xdebug is disabled by default to speed up PHP. If you want to start remote debugging, you have to enable Xdebug first. To do this, execute the `./scripts/xdebug/start.sh` script. This enables Xdebug in the PHP and workspace containers.
Make your life easier and create these functions in your ~/.bash_aliases file like so:
```bash
xdebugStatus() {
    $projectPath/dockerhero/scripts/xdebug/status.sh
}
xdebugStop() {
    $projectPath/dockerhero/scripts/xdebug/stop.sh
}
xdebugStart() {
    $projectPath/dockerhero/scripts/xdebug/start.sh
}
```
Now you can simply run `xdebugStart` and `xdebugStop`.
It is possible to remotely debug your code using an IDE.
You would have to set up your IDE to use port 9005.
And you would have to properly map your local path to Dockerhero (the project root is always `/var/www/projects` in Dockerhero).
This is a working config for VSCode for a Laravel project (but has also been tested for CodeIgniter projects):
```json
{
    "name": "Listen for XDebug",
    "type": "php",
    "request": "launch",
    "port": 9005,
    "pathMappings": {
        "/var/www/projects/${workspaceFolderBasename}": "${workspaceFolder}"
    },
    "ignore": [
        "**/vendor/**/*.php"
    ]
},
```
There might be an error when you run `./scripts/xdebug/stop.sh`, and Xdebug will still be enabled in the workspace container. Please run the command again and Xdebug will be stopped.
Feel free to send in pull requests! Either to the image repos or the Dockerhero repo itself. Do keep the following in mind:
- Everything needs to be as generic as possible, so do not add something so specific to your own use case that no one else will use it.
- Everyone needs to be able to use it out of the box, without additional configuration. However, it is fine if a feature is disabled until it is configured, as long as users can still just clone the project and "go".
- If something needs documentation, add it to the readme.md.
- Test, test and test your changes before you create the PR.
- Always target your PRs at the `develop` branch.
To test changes to Dockerhero images, you can either follow the instructions from the README of the image or, if you want to test those changes in the Dockerhero system itself, add the override to the `docker-compose.override.yml` for the container you want to test. For example:
```yaml
services:
  php:
    build: ../folder-with-the-dockerfile
```
Next, start Dockerhero using the following command: `make up-build`.
Once everything is tested and works properly, you can revert the changes to the `docker-compose.yml` and create the PR.
Don't forget to stop and start Dockerhero again after reverting the `docker-compose.yml` file, otherwise you keep using the local forked image. The first time after reverting the changes, I recommend rebuilding without the Docker build cache to ensure everything is fresh again, for example as sketched below.
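A possible sequence after reverting (a sketch; `make down` and `make up` are the Makefile targets documented above):

```bash
# Sketch: after reverting docker-compose.yml, rebuild everything from scratch.
make down                        # remove the containers that still use the forked image
docker compose build --no-cache  # rebuild the images without any cached layers
make up                          # start Dockerhero again with the fresh images
```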
- localtest.me - a big thank you goes out to localtest.me for providing a domain that points to 127.0.0.1. You can visit their website here
- LaraDock - also a huge shout out to LaraDock for providing me with a lot of sample code and inspiration. You can visit their GitHub page here.
- Dockerhero - Workspace GitHub
- Dockerhero - NGINX GitHub
- Dockerhero - PHP 8.2-fpm GitHub
- Dockerhero - PHP 8.1-fpm GitHub
- Dockerhero - PHP 8.0-fpm GitHub
- Dockerhero - PHP 7.4-fpm GitHub
- Dockerhero - PHP 7.3-fpm GitHub
- Dockerhero - PHP 7.2-fpm GitHub
- Dockerhero - PHP 7.1-fpm GitHub
- Dockerhero - PHP 5.6-fpm GitHub
- Dockerhero - PHP 5.4-fpm GitHub