Before Docker existed, developers had a hard time trying to package, ship and deploy their applications consistently across various environments. By enabling the much-needed containerization, Docker gives you the superpower that ensures your application behaves the same way regardless of the environment. However, with this great power comes the great responsibility of monitoring and troubleshooting your applications effectively.
In this roundup, we’ll introduce you to some robust logging mechanisms you can use with your Docker container logs to give your application the reliability and performance it needs.
We’ll dive deep into Docker container logs, explore some common Docker container logs commands, and understand how you can access and manage them.
What Are Docker Container Logs?
When you run applications within containers, Docker container logs capture the standard output and error messages your application generates. Think of these as application logs, but specifically for your container.
Typically, Docker container logs give you information about any exceptions or errors your application throws, status messages, events your application emits, relevant performance metrics, and so on. These logs are extremely valuable, providing insights for troubleshooting and understanding your application’s behavior.
Accessing Docker Container Logs
To access your Docker container logs, you can run the following command using Docker CLI:
docker logs my-container
You can also stream these logs in real time by adding the -f (follow) flag:
docker logs -f my-container
Alternatively, you can view logs for a specific service using Docker Compose:
docker-compose logs web-backend
The above command uses Docker Compose to show you logs for your service called “web-backend”.
Log Drivers and Why They’re Important
Log drivers in Docker determine where and how your container logs are stored and managed. The default is the json-file driver, which stores all your container logs as JSON on the host.
Other options for log drivers include syslog, which logs your Docker container logs to the syslog daemon, journald, which integrates with systemd journal, or gelf, fluentd, awslogs for centralized logging.
Each log driver comes with its own trade-offs in storage persistence, scalability, and ease of use.
Configuring Log Drivers
Docker CLI lets you configure the log driver for your Docker container logs at the time of creating your container:
docker run --log-driver syslog --log-opt syslog-address=udp://192.168.0.42:514 my-container
In the above command, the --log-driver flag specifies the log driver we’re using, which in this case is syslog. The --log-opt flag then provides options specific to the selected driver: here, syslog-address tells Docker the address and port of the syslog server. Finally, the container is started from the my-container image, and its logs are routed to the syslog server at the specified address.
Alternatively, you can set default log driver options for the daemon in the /etc/docker/daemon.json file. Modify its contents as shown below:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
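Changes to daemon.json only take effect after the Docker daemon is restarted, and only for containers created afterward. Here is a minimal sketch of applying the change safely, assuming a systemd-based Linux host:

```shell
# Validate the edited file first: a syntax error in daemon.json
# will prevent the Docker daemon from starting at all.
python3 -m json.tool /etc/docker/daemon.json

# Restart the daemon so the new defaults take effect.
# Note: already-running containers keep their existing log configuration;
# only newly created containers pick up the new defaults.
sudo systemctl restart docker
```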
Managing and Viewing Docker Logs
Developers often burn entire afternoons chasing down what turned out to be straightforward configuration errors, all because they were drowning in log noise. When you can effectively manage and view your logs, you can narrow down the ones that can cause critical problems.
Moreover, staying on top of your container logs can help you with deeper log analysis and avoid problems such as your disk space suddenly running out and causing your entire system to halt.
Viewing and Filtering Logs
Docker provides several ways to view and filter container logs, helping you focus on the specific entries you need during debugging and troubleshooting.
For instance, the below command will show you all the logs from the last 30 minutes:
docker logs --since 30m my-container
But you’re not limited to a relative duration; you can also pass a precise timestamp:
docker logs --since "2024-01-15T10:30:00" my-container
The above command shows all the logs generated from the specified timestamp onward. Similarly, you can view logs up until a specific time:
docker logs --until "2024-01-15T11:00:00" my-container
The --until flag returns all the logs up to that moment in time. You can combine it with the --since flag to get a window of time and view all logs within that window, as shown below:
docker logs --since "1h30m" --until "30m" my-container
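Absolute timestamps can also be generated relative to “now” and fed into the same flags. This is a sketch assuming GNU date, as found on most Linux hosts (BSD/macOS date uses -v-90M instead of -d):

```shell
# Build RFC 3339-style timestamps for a 90-to-30-minutes-ago window.
SINCE=$(date -u -d '90 minutes ago' '+%Y-%m-%dT%H:%M:%S')
UNTIL=$(date -u -d '30 minutes ago' '+%Y-%m-%dT%H:%M:%S')

# View only the logs emitted inside that window.
docker logs --since "$SINCE" --until "$UNTIL" my-container
```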
You can also follow logs in real-time with timestamps:
docker logs -f -t my-container
The above will show you logs alongside their timestamps. To reduce noise, you can also limit the stream to just the last 50 lines:
docker logs -f --tail 50 my-container
The -f flag is a lifesaver when debugging during development: watching your application’s behavior in real time makes it easier to spot issues as they happen.
But how do you see logs for a specific pattern? Using advanced log filtering, you can specify a pattern using grep:
docker logs my-container 2>&1 | grep "ERROR"
The above command filters the output down to lines containing the word “ERROR”; the 2>&1 redirects stderr into stdout so that grep sees both streams. Let’s say you have a web application whose container logs look like this:
2024-01-15 10:30:15 INFO Starting application
2024-01-15 10:30:16 INFO Database connected
2024-01-15 10:30:20 ERROR Failed to connect to Redis
2024-01-15 10:30:25 INFO Request processed successfully
2024-01-15 10:30:30 ERROR Timeout connecting to external API
As you can see, the container logs show that your application failed to connect to Redis and timed out connecting to an external API. To dig deeper, you can retrieve these relevant logs using the above command.
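As a sketch of the same filtering applied locally, here is the log output above saved to a hypothetical sample file, with grep counting errors and pulling in the line before each match for context:

```shell
# Hypothetical sample file mirroring the container output above.
cat > /tmp/app.log <<'EOF'
2024-01-15 10:30:15 INFO Starting application
2024-01-15 10:30:16 INFO Database connected
2024-01-15 10:30:20 ERROR Failed to connect to Redis
2024-01-15 10:30:25 INFO Request processed successfully
2024-01-15 10:30:30 ERROR Timeout connecting to external API
EOF

# Count the ERROR lines.
grep -c "ERROR" /tmp/app.log   # → 2

# Show each ERROR with the line that preceded it, for context.
grep -B1 "ERROR" /tmp/app.log
```

The same -B/-A context flags work unchanged when grep is fed by `docker logs my-container 2>&1`.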
Log Size Management and Rotation
Uncontrolled logs can easily get out of hand and consume more disk space than needed, eventually bringing your entire system down. To prevent this, set limits on your logs when creating a container:
docker run -d \
  --name my-app \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  nginx:latest
Using max-size=10m, you specify the maximum size a single log file can reach. Similarly, max-file=5 declares the maximum number of log files kept for the container. Together, these cap log storage at 50MB: when the current log file reaches 10MB, Docker rotates to a new one, and the oldest file is removed once the limit is exceeded.
If you’re facing storage bloat, learn how to free up unused containers, volumes, and images in our Docker Cleanup Guide.
Log File Manipulation
There are cases where you might need to access and manipulate the log files directly, for example when you need advanced text processing or better performance with very large log files.
For instance, the command below loads the entire log into memory, which slows things down when logs run into gigabytes:
docker logs huge-container | grep "ERROR"
You can directly access the file instead, which would be much faster:
grep "ERROR" $(docker inspect --format='{{.LogPath}}' huge-container)
Here’s another example: suppose you want 500 lines starting at line 1000 (lines 1000-1499), which is hard to do with the Docker CLI alone. Instead, you can work on the file itself and retrieve those lines using standard file manipulation:
tail -n +1000 $(docker inspect --format='{{.LogPath}}' my-container) | head -n 500
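The slicing technique itself is easy to verify on a generated sample file (hypothetical /tmp/sample.log standing in for the container’s log):

```shell
# Generate 2000 numbered sample lines.
seq 1 2000 | sed 's/^/log line /' > /tmp/sample.log

# Start at line 1000, then keep the next 500 lines (lines 1000-1499).
tail -n +1000 /tmp/sample.log | head -n 500 > /tmp/slice.log

head -n 1 /tmp/slice.log   # → log line 1000
tail -n 1 /tmp/slice.log   # → log line 1499
```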
You can use the Docker CLI to find the exact location of a container’s log file:
docker inspect --format='{{.LogPath}}' my-container
Best Practices for Efficient Log Management
Here are some quick best practices you should keep in mind to get an efficient log management system for your Docker container logs:
- Rotate logs regularly: Disk overflow is a common consequence of not rotating Docker container logs regularly. Configuring the daemon with rotation settings can ensure that you always have enough disk space available.
- Set log size limits: Use flags such as max-size and max-file, as we explored in the previous section, to keep file sizes in check. Without limits, an unpredictable surge in traffic on your application could quickly consume large amounts of disk space.
- Follow structured logging: When streaming Docker container logs, parsing them, or running custom analytics, use structured formats like JSON. This keeps log processing straightforward and gives you more flexibility when building on top of your logs.
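To see why structured formats pay off, here is a minimal sketch using hypothetical JSON-formatted log lines: filtering on a field becomes a simple, reliable string match instead of fragile free-text parsing.

```shell
# Hypothetical JSON-lines output from an app that logs in structured form.
cat > /tmp/structured.log <<'EOF'
{"time":"2024-01-15T10:30:20Z","level":"error","msg":"Failed to connect to Redis"}
{"time":"2024-01-15T10:30:25Z","level":"info","msg":"Request processed"}
{"time":"2024-01-15T10:30:30Z","level":"error","msg":"Timeout connecting to external API"}
EOF

# With one JSON object per line, selecting by severity is a plain string match;
# tools like jq can take this further with full field-level queries.
grep -c '"level":"error"' /tmp/structured.log   # → 2
grep '"level":"error"' /tmp/structured.log
```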
Learn how to structure logs using formats like JSON for better readability and machine processing. See Log Formatting Best Practices.
Centralized Logging and Routing Logs
Modern applications have complex architectures, interfaces and communicate extensively with external APIs. Practically, it’s best to have all your Docker container logs centralized in one place for viewing, accessing, managing and processing. Not only does centralized logging simplify management across multiple containers and environments, but using dedicated logging tools can give you benefits such as advanced filtering and real-time analytics.
The command below shows how you can route your Docker container logs to Middleware, a popular monitoring platform:
docker run --log-driver fluentd --log-opt fluentd-address=middleware.io:24224 my-app
Best Practices for Docker Logging
As we’re close to a wrap, let’s also understand some best practices that can help us utilize Docker container logs effectively. We’ve already explored how structured logging formats like JSON can be helpful for readability and parsing logs for further analysis.
Security is paramount for every organization and their customers. A common security consideration often missed is logging sensitive data. You should never log any kind of sensitive data like passwords, auth tokens, payment information, etc.
You can save countless hours of log analysis and management by using a reliable log management tool like Middleware, which does all the heavy lifting and offers valuable features like log aggregation, automated notifications, and alerting for critical log events.
Finally, ensure you regularly audit and archive your container logs to meet any compliance standards necessary. Log management solutions automate audit trails and simplify regulatory compliance out of the box, so you can use them directly to meet compliance standards conveniently.
Conclusion
By combining the power of your Docker container logs with effective log management and dedicated log management tools like Middleware, you can ensure reliability for your application. Following the best practices is just as crucial, so make sure you’re not missing them.
FAQs
How do I check Docker container logs?
To check the Docker container logs, you can use the Docker CLI command docker logs my-container.
How can I follow Docker container logs in real-time?
When you use the command docker logs -f my-container, your Docker container logs are streamed in real-time.
What are the best practices for managing Docker container logs?
Some best practices for managing Docker container logs include regularly rotating logs, setting size limits using flags like max-size and max-file, using structured log formats like JSON, avoiding sensitive information in logs, and implementing centralized log management tools such as Middleware.
How do I filter Docker container logs for specific errors?
The command docker logs my-container 2>&1 | grep "ERROR" can be used to filter Docker container logs for specific errors.
What is structured logging, and why is it important?
When you format your Docker container logs into a structured format, such as JSON, it becomes easier to parse, analyse, and automate monitoring.