We are using docker and docker-compose to provide as consistent a local development environment as possible, in accordance with 12factor development principles.
First, check that you have all the requirements on your system. For Linux users, these are either preinstalled or available through your distribution's package manager. A quick script to check for them follows the list below.
git
make - instructions for installing make vary; on MacOS, xcode-select --install might work
docker
docker-compose - this should be installed along with docker on OSX and Windows
envsubst - this should be pre-installed on most Linux distributions. On MacOS, envsubst is installed as part of gettext. Install it like this:
brew install gettext
brew link --force gettext
unzip
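If you want a quick way to confirm everything is present, a minimal sketch (assuming a POSIX shell) is:
# Report any required tool that is missing from the PATH
for cmd in git make docker docker-compose envsubst unzip; do
  command -v "$cmd" >/dev/null 2>&1 || echo "Missing: $cmd"
done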
Install basic system dependencies:
sudo apt install -y git make unzip apt-transport-https ca-certificates curl gnupg-agent software-properties-common
Install Docker and docker-compose. We prefer to install them from upstream, to have the latest version:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
Add your user to the docker group to avoid using sudo on every command:
sudo usermod -aG docker ${USER}
Refresh your group membership by running:
su - ${USER}
To verify that everything works, you can run the hello-world docker container:
docker run hello-world
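If the container prints its greeting, Docker is working. You can additionally check the client and daemon versions (the exact output depends on your release) with:
docker --version
docker version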
In order to run the Planet4 development environment in Windows you'll need to enable Windows Subsystem for Linux (WSL). WSL allows you to run a Linux environment within Windows. You'll need to enable WSL and install Ubuntu. The current version (WSL 2) comes with a lot of enhancements and better disk performance. You can follow the installation instructions here.
Here is a post with some more detail about setting up WSL 2 (and many other tips for Windows devs!)
Note: this guide was created using the Ubuntu 18.04 image.
Verify WSL 2 and the Ubuntu image are installed
From a Powershell window, run this command to see the installed distros:
wsl -l -v
You should see the distro you installed in the list, with the WSL version: Ubuntu 18.04 - 2
See here for more details. You can set the WSL version for a specific distribution with:
wsl --set-version Ubuntu-18.04 2
(Optional) If there is a previous docker installation you want to remove first, you can run:
sudo apt remove docker docker-engine docker.io containerd runc
Download the Docker GPG key and add it to the APT keyring:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add the Docker repository to the APT sources:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
Update the APT registry:
sudo apt update
Ensure you'll download Docker from the official Docker repository instead of the default Ubuntu one by running:
apt-cache policy docker-ce
You should get something like:
docker-ce:
  Installed: (none)
  Candidate: 18.03.1~ce~3-0~ubuntu
  Version table:
     18.03.1~ce~3-0~ubuntu 500
        500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
Install docker-ce:
sudo apt install docker-ce
Ensure it has been installed with:
sudo service docker status
Add your user to the docker group to avoid using sudo on every command:
sudo usermod -aG docker ${USER}
Refresh your group membership by running:
su - ${USER}
You can verify this by running:
id -nG
The output should be something like:
youruser sudo docker
Look for the latest docker-compose release and run the following, substituting the version number:
sudo curl -L "https://github.com/docker/compose/releases/download/[DOCKER_COMPOSE_VERSION]/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
For example:
sudo curl -L "https://github.com/docker/compose/releases/download/1.27.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
This will automatically download the specified version for your architecture.
Add execution permissions for docker-compose:
sudo chmod +x /usr/local/bin/docker-compose
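You can then check that the binary works (output will vary with the version you downloaded):
docker-compose --version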
Install make and unzip:
sudo apt install -y make unzip
In case the WSL version for your distro is 1, you can update it using:
wsl --set-version Ubuntu-18.04 2
WSL 2 Networking Issues: https://github.com/microsoft/WSL/issues/5336
Docker Issues:
https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly
https://stackoverflow.com/questions/63497928/ubuntu-wsl-with-docker-could-not-be-found
https://github.com/docker/compose/issues/2738
If you get the following error:
The command 'docker-compose' could not be found in this WSL 2 distro.
We recommend to activate the WSL integration in Docker Desktop settings.
See https://docs.docker.com/docker-for-windows/wsl/ for details.
you can either activate the WSL integration in Docker Desktop settings as suggested, or install docker-compose inside the distro:
sudo pip3 install -IUq docker-compose
If you hit an error like:
Makefile:212: recipe for target 'start' failed
Segmentation fault
see https://github.com/microsoft/WSL/issues/4694#issuecomment-556095344, which suggests adding the following to %userprofile%\.wslconfig:
[wsl2]
kernelCommandLine = vsyscall=emulate
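After editing .wslconfig you'll likely need to restart the WSL VM for the setting to take effect, for example from a Powershell window:
wsl --shutdown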
The following dependencies are required only if you want to contribute to the planet4-docker-compose repository:
shellcheck
yamllint
circleci
The first time, you'll need to follow the steps below in order to clone this repo and build the containers.
# Clone the repository
git clone https://github.com/greenpeace/planet4-docker-compose

# Navigate to new directory
cd planet4-docker-compose

# Build containers, start and configure the application
make dev
See Fixing make dev errors if you have any issues with this command.
If you want the application repositories to be cloned using the ssh protocol instead of https, you can use a variable:
GIT_PROTO="ssh" make dev
or, for a more permanent solution, add this to a Makefile.include file:
GIT_PROTO := 'ssh'
If you want to run docker-compose commands directly:
# View status of containers
docker-compose ps

# View log output
docker-compose logs -f
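A couple of other direct commands you may find useful (the php-fpm service name is the one used elsewhere in this setup; adjust if your compose file differs):
# Open a shell inside the PHP container
docker-compose exec php-fpm sh

# Restart a single service
docker-compose restart php-fpm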
On first launch, the container bootstraps the installation with composer; after a few minutes all services will be ready and responding to requests. When the process is finished and you see the line 'ready', navigate to www.planet4.test.
It's not necessary to re-run make dev each time you wish to start the local development environment. To start the containers on subsequent runs, use:
make run
In order to keep the environment light, the default setup skips some containers that are useful for debugging and testing. Namely: PhpMyAdmin, ElasticHQ and Selenium. If you need them, you can use the full environment config by setting an environment variable:
COMPOSE_FILE="docker-compose.full.yml" make run
For a more permanent solution, edit the .env file and change the variable there:
COMPOSE_FILE="docker-compose.full.yml"
To view the output of running containers:
docker-compose logs
If at any point the install process fails with Composer showing a message such as file could not be downloaded (HTTP/1.1 404 Not Found), this is a transient network error and re-running the install should fix the issue.
If, when running make dev, you get the following error:
ERROR: for traefik  Cannot start service traefik: driver failed programming external connectivity on endpoint planet4dockercompose_traefik_1 (f7c7a3eded69b5451a6e2e45d13ab312c2a2e809ce5cd69994119368294ec478): Bind for 0.0.0.0:8080 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
make[1]: *** [up] Error 1
make: *** [run] Error 2
This error means that there is a process already registered to use port 8080. It is most likely a running docker container that is using this port, but to check, run this command:
lsof -nP -iTCP -sTCP:LISTEN | grep 8080
If the result is something like this:
com.docke 5086 <USERNAME> 84u IPv6 0xdc100c215fbb6b93 0t0 TCP *:8080 (LISTEN)
That's a docker container. (If it is a different process owning the port, you could run kill -9 <PID>.)
To check which container is using this port you can run:
$ docker container ls | grep 8080
<CONTAINER_ID>  containers.xxx.com/my-container:1.1  "/entrypoint.sh /usr…"  2 months ago  Up 10 minutes  0.0.0.0:8080->8080/tcp  my-container_1
To stop the container, run:
docker kill <CONTAINER_ID>
Then re-run make dev and it should be fine. If it still doesn't work, raise an issue.
To stop all the containers just run:
make stop
To update all containers, run:
make run
Other commands are listed under:
make help
By default, the Wordpress application is bind-mounted at:
./persistence/app/
All planet4 code will be under the Wordpress content folder:
./persistence/app/public/wp-content/
Backend administrator login is available at www.planet4.test/wp-admin/.
The login username is admin and the password is admin.
phpMyAdmin login: pma.www.planet4.test
ElasticHQ: access at localhost:5000/
You can also use this setup to work on an NRO site.
First, create/edit Makefile.include to contain:
NRO_REPO := https://github.com/greenpeace/planet4-netherlands.git
NRO_THEME := planet4-child-theme-netherlands

# optionally specify a branch, will default to "main" otherwise
#NRO_BRANCH := my-other-branch

# by default it will test against your local docker-compose setup version
# but you can optionally specify these variables to run the tests against
# a deployed environment
#NRO_APP_HOSTNAME := k8s.p4.greenpeace.org
#NRO_APP_HOSTPATH := nl
Then enable the NRO:
make nro-enable
And, run the tests:
make nro-test-codeception
The tests work a bit differently from the main ones; see the Testing section for more info.
To run production containers locally, it's necessary to define two environment variables and then run make appdata. This tells docker-compose which containers to use, and then copies the contents of the /app/source directory to the local persistence folder.
Example:
# Change these variables to the container images you wish to run
export APP_IMAGE=gcr.io/planet-4-151612/planet4-flibble-app:develop
export OPENRESTY_IMAGE=gcr.io/planet-4-151612/planet4-flibble-openresty:develop

# Copy contents of container /app/source into local persistence folder
make appdata

# Bring up container suite
make run
From here, you can download a database export from GCS (for example) and visit phpMyAdmin to perform the import.
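As a rough sketch of that download step, assuming you have gsutil configured and access to a suitable bucket (the bucket and file names below are placeholders, not real ones):
# Download a database dump from a GCS bucket (names are hypothetical)
gsutil cp gs://my-planet4-exports/planet4-db-export.sql.gz .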
The default content is imported automatically for you.
Troubleshooting
If you want to revert back to the default content database you can delete the database container and volume and recreate:
make revertdb
# ... wait for a bit ...
make config flush
To completely clear redis of the full page cache, as well as object and transient caches:
make flush
Alternatively, to only clear the object cache: log in to the Wordpress admin and click Flush Object Cache on the Dashboard page. To only clear the full page cache: click Purge Cache from the top menu.
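If you prefer the command line, a minimal sketch using WP-CLI via the php-fpm container (the same exec pattern used for the Timber command further below) is:
# Flush the WordPress object cache from the CLI
docker-compose exec php-fpm sh -c 'wp cache flush'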
The Wordpress plugin nginx-helper is installed to enable FastCGI cache purges. Log in to the backend as above, navigate to Settings > Nginx Helper and click:
Enable Purge
Redis Cache
Enter redis in the Hostname field
Tick all checkboxes under 'Purging Conditions'
Timber / Twig caches
The templates cache should be disabled in development mode. It can be cleared by running a wp command:
docker-compose exec php-fpm sh -c 'wp timber clear_caches'
This command will return a warning if the Timber or Twig caches were already empty.
If you want to use Google Cloud Storage, you'll have to configure WP-Stateless. The plugin is installed and activated, however images will be stored locally until remote GCS storage is enabled in the administrator backend. Log in with details gathered from here and navigate to Media > Stateless Setup.
You will need a Google account with access to GCS buckets to continue.
Once logged in:
Click 'Get Started Now'
Authenticate
Choose 'Planet-4' project (or a custom project from your private account)
Choose or create a Google Cloud Storage bucket - it's recommended to use a bucket name unique to your own circumstances, e.g. 'mynamehere-test-planet4-wordpress'
Choose a region close to your work environment
Skip creating a billing account (if using Greenpeace Planet 4 project)
Click continue, and wait a little while for all the necessary permissions and objects to be created.
Congratulations, you're now serving media files directly from GCS buckets!
The Elasticsearch host is configured during the initial build, but if you want to confirm that the setting is right, navigate to Settings > ElasticPress > Settings. The Host should be: http://elasticsearch:9200.
Anytime you want to re-index Elasticsearch you can just run: make elastic.
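If you want to double-check that Elasticsearch itself is responding, one hedged way to do so is the sketch below; it assumes curl is available inside the elasticsearch container, which is the case for many official images but is not guaranteed here:
# Should return a small JSON document with cluster information
docker-compose exec elasticsearch curl -s http://localhost:9200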
This docker environment relies on the mysql official image as well as on the planet4-base application image.
Both images provide environment variables which adjust aspects of the runtime configuration. For this environment to run, only the database parameters (hostname, database name, database users and passwords) are required. The initial values for these environment variables are dummies, but they are good to go for development purposes. They can be changed in the provided app.env and db.env files, or directly in the docker-compose.yml file itself.
See openresty-php-exim.
NEWRELIC_LICENSE - set to the license key in your NewRelic dashboard to automatically receive server and application metrics
PHP_MEMORY_LIMIT - maximum memory each PHP process can consume before being terminated and restarted by the scheduler
PHP_XDEBUG_REMOTE_HOST - in development mode, enables remote XDebug debugging, tracing and profiling
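For example, these could be set in app.env; the values below are illustrative placeholders (in particular, the license key and the remote host are assumptions, not values from this repo):
# Illustrative values only
NEWRELIC_LICENSE=your-newrelic-license-key
PHP_MEMORY_LIMIT=256M
PHP_XDEBUG_REMOTE_HOST=host.docker.internal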
@todo: Document some of the useful builtin configuration options available in upstream docker images for debugging, including:
XDebug remote debugging
Smarthost email delivery and interception
exec function limits
Memory and performance tweaks
Install XDebug on the PHP container by running:
make dev-install-xdebug
Switch the XDebug mode with the xdebug-mode command and an environment variable:
XDEBUG_MODE=debug,profile make xdebug-mode
IDE specific configuration
Installation
Install the PHP Debug extension: https://marketplace.visualstudio.com/items?itemName=felixfbecker.php-debug (source: https://github.com/xdebug/vscode-php-debug)
Go to the debugger tab (ctrl+shift+D)
Click on "Create a launch.json file", select PHP
The file should be configured to use port 9000 by default
Add a pathMappings option in the first configuration, according to the path of the project you opened. For example:
# Root of your project is planet4-docker-compose/persistence/app:
"pathMappings": {
    "/app/source/public": "${workspaceFolder}/public"
}

# Root of your project is planet4-docker-compose:
"pathMappings": {
    "/app/source/public": "${workspaceFolder}/persistence/app/public"
}
A complete launch.json file looks like this:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000,
            "pathMappings": {
                "/app/source/public": "${workspaceFolder}/public"
            },
            "log": true
        },
        {
            "name": "Launch currently open script",
            "type": "php",
            "request": "launch",
            "program": "${file}",
            "cwd": "${fileDirname}",
            "port": 9000
        }
    ]
}
The "log": true
option allows you to see the communication between XDebug and your IDE.
Using XDebug
Start a debugging session in the Run and Debug tab by selecting Listen for XDebug and hitting F5
Add some breakpoints in your source code by clicking on the left side of the line numbers in the editor. You can also check all Notices/Warnings/Errors in the Breakpoints section of the sidebar, to trigger a pause on each of those events
Run your code by navigating to planet4.test, using wp commands, or performing any other action that executes PHP scripts
Stop your session by clicking on the Stop icon or using Shift+F5
If you are running any other services on your local device which respond on port 80, you may experience errors attempting to start the environment. Traefik is configured to respond on port 80 in this application, but you can change it by editing the docker-compose.yml file as below:
traefik:ports:- "8000:80"
The first number is the port number on your host; the second is mapped to port 80 on the openresty service container. You can now access the site at www.planet4.test:8000 instead.
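If you first want to find out which process is already listening on port 80 (the same approach used above for port 8080), a quick check is:
sudo lsof -nP -iTCP:80 -sTCP:LISTEN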
A more robust solution for hosting multiple services on port 80 is to use a reverse proxy such as Traefik or jwilder/openresty-proxy in a separate project, and use Docker named networking features to isolate virtual networks.
Traefik comes with a simple admin interface accessible at www.planet4.test:8080.