Disaster Recovery Strategies for PostgreSQL Deployment on Kubernetes

For a production environment, it is important to design a disaster recovery strategy. The traditional approach is to take regular database backups and store them; however, restoring a large database can take hours or even days. Cloud computing has made disaster recovery simpler and more efficient: applications and databases can be deployed across multiple availability zones or regions to achieve high availability and disaster recovery. Today, many companies have migrated their applications to a microservices architecture and deploy their application systems, including databases, on Kubernetes. But how do you implement disaster recovery for databases deployed on Kubernetes? In this post, I'd like to describe disaster recovery strategies for PostgreSQL deployments on Kubernetes. Because a database is a stateful service, it is hard to manage and scale on Kubernetes; PostgreSQL Operators (e.g. Crunchy PostgreSQL Operator) help to manage PostgreSQL clusters on Kubernetes.

Pgpool-II Logging and Debugging

Logging and debugging help to monitor and identify issues occurring in your program. Sometimes we need to log debug information to figure out problems during software development and testing. However, if debug is enabled, a large number of debug messages are generated and the output becomes hard to read, so a proper logging and debugging configuration is important. There are a number of ways to retrieve debug information from Pgpool-II. In this post, I will describe the various ways of logging and debugging Pgpool-II.

Logging

Before Pgpool-II 4.1, an external log processing tool (e.g. rsyslog) was required to store Pgpool-II logs and rotate them. For example, below are the relevant configuration parameters for logging to syslog:

log_destination = 'syslog'
syslog_facility = 'LOCAL1'
syslog_ident = 'pgpool'

Since Pgpool-II 4.2, a logging collector process has been implemented. The logging collector process collects log messages sent to stderr and redirects them into log files.
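As a sketch of the 4.2-style setup, the logging collector can be enabled in pgpool.conf along these lines (the directory, filename pattern, and rotation values here are illustrative assumptions, not required settings):

```
# pgpool.conf -- send logs to stderr and let the logging collector write files
log_destination = 'stderr'
logging_collector = on
log_directory = '/var/log/pgpool_log'        # where log files are written (assumed path)
log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'  # timestamped file names
log_rotation_age = 1d                        # start a new file daily...
log_rotation_size = 10MB                     # ...or when the current file exceeds 10 MB
```

With this configuration, Pgpool-II itself handles log file creation and rotation, so no external tool such as rsyslog is needed.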

Logging of Pgpool-II on Kubernetes

Logging is an important topic and is particularly useful for troubleshooting, debugging, and monitoring. Many applications have their own built-in logging mechanism, and Pgpool-II's logging mechanism is similar to PostgreSQL's: it supports two log destinations (stderr and syslog), and the logging collector process collects the logs sent to stderr and redirects them into log files. But how can we manage the logging system on Kubernetes? In this blog, I will describe how to manage Pgpool-II container logs on Kubernetes.

Logging on Kubernetes

Kubernetes lets you view container logs with the "kubectl logs" command:

kubectl logs <pod name>

However, logging on Kubernetes is complicated because the logs are deleted when containers are terminated or recreated. Fluentd helps you collect container logs and send them to desired destinations such as Amazon S3, MySQL, MongoDB, etc.

How to use Fluentd
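As an illustrative sketch (the file paths, tag, and S3 bucket name below are assumptions for the example, not values from the post), a Fluentd configuration that tails container log files on a node and ships them to Amazon S3 via the fluent-plugin-s3 output could look like this:

```
# fluent.conf -- tail Pgpool-II container logs and forward them to S3 (illustrative)
<source>
  @type tail
  path /var/log/containers/pgpool-*.log   # node-level container log files (assumed pattern)
  pos_file /var/log/fluentd-pgpool.pos    # remembers the read position across restarts
  tag pgpool.log
  <parse>
    @type none                            # forward raw lines without parsing
  </parse>
</source>

<match pgpool.log>
  @type s3
  s3_bucket my-log-bucket                 # hypothetical bucket name
  s3_region us-east-1
  path logs/pgpool/
  <buffer>
    @type file
    path /var/log/fluentd-buffer/s3       # on-disk buffer so logs survive restarts
    timekey 3600                          # flush to S3 hourly
  </buffer>
</match>
```

Because the log data is buffered on disk and then stored outside the cluster, it survives pod termination and recreation, which addresses the limitation of "kubectl logs" described above.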