Overview of a Full Stack Setup with Docker Compose
In this chapter you will walk through how a typical full stack web application can be assembled with Docker Compose. You already know what Docker Compose is and how a docker-compose.yml file is structured, so here the focus is on combining a frontend, a backend API, and a database into a single, reproducible environment.
A full stack app in this context means three main parts. The frontend is usually a JavaScript single page application or a server rendered web interface that runs in the browser. The backend is an HTTP API, often built on Node.js or with Python frameworks such as Django or Flask. The database is a service such as PostgreSQL, MySQL, or MongoDB that persists data. With Docker Compose, each part becomes a separate service, but they are started together and wired into a shared network so they behave like one system.
The goal is to have a single command that brings up the entire stack, with services talking to each other by service name instead of manual IP configuration, and with all dependencies packaged into images that behave the same on every developer machine.
Designing the Services
Before writing any Compose configuration, it is helpful to describe the services in words. For a simple example, imagine a React frontend served by a Node.js development server, a Node.js or Python backend API, and a PostgreSQL database.
The backend needs to connect to the database with a host, port, user, password, and database name. In a Compose network, the host is simply the database service name from the Compose file. The frontend needs to know how to send API requests to the backend. Inside the Docker network, this is also done by service name. When users access the app in the browser, they usually hit the frontend through a mapped port on the host, and the frontend in turn calls the backend by its internal network name.
The design phase is where you decide which services build from local Dockerfiles and which use official images. It is common to build custom images for the frontend and backend, and to use an official database image. You also decide which data must persist between restarts. Typically, database data is stored in a named volume, while frontend and backend code can be mounted as bind mounts during development and baked into images for production.
A Typical `docker-compose.yml` for a Full Stack App
A typical Compose file for this scenario defines at least three services and one or more named volumes. The services share a default network that Compose creates automatically. Each service has its image or build context, ports, environment variables, and dependency relationships.
For the database, the Compose configuration usually sets environment variables for database credentials, stores data in a named volume, and does not expose the database port to the outside world unless there is a specific reason. The backend configuration sets environment variables that match the database settings and uses the database service name as the host. The frontend configuration exposes a port on the host so you can open the app in a browser, and if it runs a development server it may bind mount the source code so changes are reflected quickly.
In a minimal example, the database might be a postgres image with a volume such as db-data, the backend might be built from ./backend with port 8000 mapped to the host, and the frontend might be built from ./frontend with port 3000 mapped. Each service includes only what it needs, which keeps the Compose file readable and easier to maintain.
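The minimal example above can be sketched as a Compose file. The service names (`db`, `backend`, `frontend`), image tags, ports, and paths are illustrative assumptions, not fixed conventions:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password   # placeholder; see the note on secrets below
      POSTGRES_DB: appdb
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume: data survives recreation

  backend:
    build: ./backend
    environment:
      DATABASE_URL: postgres://user:password@db:5432/appdb   # "db" is the service name
    ports:
      - "8000:8000"
    depends_on:
      - db

  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend

volumes:
  db-data:
```

Note that the database service maps no host port at all; only the backend and frontend are reachable from outside the Compose network.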
Always keep secrets such as database passwords out of the Compose file in real projects. Use environment files or secret management features, and never commit secrets to version control.
Connecting Frontend, Backend, and Database
The power of Docker Compose for full stack development comes from built in networking. All services in the same Compose project share a network, and each service can be reached by its service name. You do not need to discover container IP addresses or configure DNS manually.
The backend connects to the database using a connection string where the host is the database service name specified in docker-compose.yml. For example, a connection string might look like postgres://user:password@db:5432/appdb, where db is the service name. When the backend container resolves db, Compose networking takes care of routing traffic to the right container.
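As a minimal sketch, a backend might assemble that connection string from environment variables set in the Compose file. The variable names (`DB_HOST`, `DB_USER`, and so on) and their defaults are assumptions for illustration:

```python
import os

def database_url() -> str:
    """Build a PostgreSQL connection string from environment variables.

    Inside the Compose network, DB_HOST is simply the database
    service name (e.g. "db"), not an IP address.
    """
    host = os.environ.get("DB_HOST", "db")
    port = os.environ.get("DB_PORT", "5432")
    user = os.environ.get("DB_USER", "user")
    password = os.environ.get("DB_PASSWORD", "password")
    name = os.environ.get("DB_NAME", "appdb")
    return f"postgres://{user}:{password}@{host}:{port}/{name}"

print(database_url())
```

Keeping the connection details in environment variables means the same image works unchanged in development and production; only the Compose (or deployment) configuration differs.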
The frontend can call the backend API using the backend service name and port within the network. However, because the frontend code executes in the browser on the user’s machine, you must consider how API URLs are resolved. During development, the browser sees the host machine, not the internal Docker network. The frontend typically calls the backend using the host’s URL, such as http://localhost:8000, while inside the Docker network, the backend still talks to the database by service name. This split between internal and external addressing is a common point of confusion and needs to be handled with configuration in the frontend, often through environment variables at build time.
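One common way to handle this split is to pass the external API URL into the frontend image at build time. A sketch using a hypothetical build argument named `API_URL` (the argument name and value are assumptions; the frontend's Dockerfile would declare a matching `ARG API_URL` and bake it into the bundle):

```yaml
services:
  frontend:
    build:
      context: ./frontend
      args:
        # URL the browser will use: the host-mapped backend port,
        # not the internal service name
        API_URL: http://localhost:8000
    ports:
      - "3000:3000"
```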
Using Volumes for Persistent and Shared Data
In a full stack Compose setup, volumes usually appear in two roles. The first is persistence for the database. A named volume is attached to the database container so that data survives container removal or recreation. This makes it safe to run docker compose down without losing your development database content, as long as you do not remove volumes explicitly.
The second role is convenience for development of the frontend and backend. Bind mounts can map local source code into the containers, so any code changes are instantly reflected without rebuilding images. For example, you can mount ./backend into /app inside the backend container and rely on your framework’s auto reload. This pattern is common in local development but often removed or changed in production, where you rely more on built images than on live code mounts.
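Both roles can appear side by side in the same Compose file. A sketch, with illustrative paths and names:

```yaml
services:
  backend:
    build: ./backend
    volumes:
      - ./backend:/app   # bind mount: local code, picked up by auto reload

  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume: persists across recreation

volumes:
  db-data:
```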
When designing volumes, keep a clear separation between what must persist and what is safe to re-create. Application logs, temporary files, or build artifacts can often live inside the container file system, while database data and user uploads should be stored in volumes.
Orchestrating the Development Workflow
With a full stack Compose setup, you can standardize how developers on the team start and use the application. The most typical command for day to day work is docker compose up (or the older docker-compose up), which builds images if needed and starts all services together.
During development, you often use the --build flag when starting, to force rebuilding images after code changes that affect the build context. Once running, you can stop the stack with docker compose down and restart it just as easily. The benefit is that developers no longer need to install and configure the database or other dependencies directly on their machines. The entire stack is encapsulated in the Compose file and the Dockerfiles for the custom services.
It is also helpful to define default environment values or .env files that Compose automatically reads. This allows you to configure ports, database credentials, or API base URLs without modifying the Compose file itself. Consistent commands, configuration, and behavior across the team significantly reduce the “works on my machine” issues.
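For example, with a `.env` file next to the Compose file containing a line such as `BACKEND_PORT=8000`, the Compose file can reference that value through variable substitution. A sketch (the variable name is an assumption):

```yaml
services:
  backend:
    ports:
      - "${BACKEND_PORT:-8000}:8000"   # falls back to 8000 if BACKEND_PORT is unset
```

Changing the port then requires editing only the `.env` file, never the Compose file itself.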
Preparing for Production from a Full Stack Compose Setup
While Docker Compose is primarily a tool for local development, a full stack Compose project can be a useful stepping stone toward production deployment. The same Compose file or a variant of it can document how services are connected and which configuration values are needed. For production, you may create a separate Compose file that replaces development specific parts, such as bind mounts or debug ports, with production grade settings.
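One common pattern is an override file applied with `docker compose -f docker-compose.yml -f docker-compose.prod.yml up`, where the second file is merged on top of the first. A sketch of what such a production file might change (the settings and the `production` stage name are illustrative assumptions):

```yaml
# docker-compose.prod.yml: merged on top of the development file
services:
  backend:
    environment:
      DEBUG: "0"       # disable debug mode in production
    restart: always

  frontend:
    build:
      context: ./frontend
      target: production   # assumes a multi stage Dockerfile with this stage
    restart: always
```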
For example, instead of using a frontend development server, you might build static assets in a multi stage Dockerfile and serve them from a lightweight web server. The backend might run in a container that uses environment variables provided by the production environment rather than a local .env file. The database in production might be a managed service instead of a container, but the connection settings in the backend still follow the same pattern laid out in the development Compose setup.
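A multi stage frontend Dockerfile along those lines might look as follows; the base images, build commands, and output directory assume a typical Node.js toolchain:

```dockerfile
# Stage 1: build static assets
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve them from a lightweight web server
FROM nginx:alpine AS production
COPY --from=build /app/dist /usr/share/nginx/html
```

The final image contains only the built assets and the web server, not the Node.js toolchain used to produce them.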
Do not assume a Compose file that works in development is safe for production without changes. Development conveniences such as exposed database ports, default passwords, and auto reload features are not acceptable in a secure production environment.
By treating your full stack Docker Compose project as both a working environment and living documentation, you create a clear path from local development to more advanced deployment solutions, while keeping the configuration of frontend, backend, and database reliable and repeatable.