Migrating Email Marketing Software to a New Server: A Learn-by-Doing Experience
Introduction
Having recently completed the Linux Fundamentals Learning Path, I was eager to challenge myself and apply my new skills. An opportunity arose last week when I received an email from my free SSL provider indicating that the certificate for my email marketing software (mail.sumamazaeem.com) had expired. This blog post details my journey of migrating this software from AWS Lightsail to an AWS EC2 server, highlighting the challenges faced and how I overcame them.
Initial Problem and Background
Earlier this year, I deployed an open-source email marketing tool on AWS Lightsail, following an article. I opted for this solution to avoid the costs associated with Mailchimp, and I ruled out Mautic because of its complex migration process between PHP versions. Instead, I exported all my email lists and imported them into a fresh installation of Listmonk on AWS Lightsail.
At the time of the initial deployment, my understanding of NGINX was rudimentary. The configuration was basic, and it took some troubleshooting to get everything working. Fast-forward to last week: I needed to renew the SSL certificate, and I saw this as an opportunity to migrate the software to my existing AWS EC2 server. Thanks to AWS credits, this server already hosted my social media automation tool (mixpost.app), which integrates with six social media platforms to schedule posts consistently.
The Decision to Migrate
The motivation for this migration was twofold: cost-saving and learning. Consolidating my services on a single server would reduce costs, and the process would deepen my understanding of Docker and NGINX configurations. Thus, my learning adventure began!
The Migration Process
I started the migration process in the evening, around 7 PM, and spent approximately 12 hours completing it. This timeframe included extensive research and development (R&D) to understand Docker container migration and find the optimal way to achieve it.
Key Steps and Challenges
Learning and Using Docker Commands:
- Docker Save and Load: Successfully used the `docker save` command to create a tar archive of the container images and the `docker load` command to restore them on the new server.
- Container Backup: Utilized a temporary Ubuntu image to back up the persistent data and create an image.
Secure File Transfer:
- SSH Agent Forwarding: To avoid the direct placement of private keys on the server, I employed SSH agent forwarding.
- SCP Command: This command facilitated secure file transfers between servers, particularly for large files.
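As a sketch of the agent-forwarding setup (the key path and server address are placeholders): the key stays on the local machine, and the old server can authenticate to the new one through the forwarded agent.

```shell
# On the local workstation: start the agent and load the key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Connect with agent forwarding enabled (-A); from this session,
# scp/rsync to the other server works without copying the private key anywhere
ssh -A ubuntu@<old_server_ip>
```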
Initial Failures and Successes:
- Rsync and SCP: Faced permission issues with rsync, which were resolved by escalating privileges to the root user on the target server. I found SCP to be more user-friendly for my purposes.
- Database Backup: Explored `pg_dump` and `pg_restore`, as well as `pg_basebackup`, for PostgreSQL database backup and restoration. Direct file transfer failed due to mounting issues.
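As a sketch of the `pg_dump`/`pg_restore` route (container, user, and database names below are placeholders, not the actual Listmonk setup): dump inside the old Postgres container, copy the file across, and restore on the new server.

```shell
# On the old server: custom-format dump (-Fc) from inside the container
docker exec <pg_container> pg_dump -U <db_user> -Fc <db_name> > listmonk.dump

# Transfer the dump to the new server
scp listmonk.dump user@<new_server_ip>:/tmp/

# On the new server: restore into the new container's database
docker exec -i <pg_container> pg_restore -U <db_user> -d <db_name> --clean < /tmp/listmonk.dump
```

The custom format (`-Fc`) is compressed and lets `pg_restore` selectively restore objects, which makes it more forgiving than a plain SQL dump.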
Persistent Data and Volume Mapping:
- This was one of the trickiest challenges. Proper mapping of persistent volumes between Docker containers required a thorough understanding of Dockerβs nuanced workings. Ensuring the correct configurations and permissions finally resolved the issue.
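For illustration, a mapping of the kind described might look like this in a Compose file. This is a hypothetical sketch, not the actual Listmonk configuration; the service names, host paths, and the 9001 host port (matching the NGINX proxy target shown later) are assumptions.

```yaml
services:
  app:
    image: listmonk/listmonk:latest
    ports:
      - "9001:9000"                   # host port 9001 -> container port 9000
    volumes:
      - ./uploads:/listmonk/uploads   # bind mount: host path -> container path
  db:
    image: postgres:13
    volumes:
      - listmonk-data:/var/lib/postgresql/data  # named volume for DB files

volumes:
  listmonk-data:
```

The distinction matters during migration: bind mounts can be copied as ordinary directories, while named volumes live under Docker's own storage and need to be exported explicitly.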
NGINX Configuration:
- SSL Certificate Renewal: Configured SSL certificates on the new setup using Certbot.
- Reverse Proxy Setup: Configured NGINX as a reverse proxy to efficiently manage traffic between the frontend and backend containers.
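The Certbot step can be as simple as the following (assuming Certbot and its NGINX plugin are installed; both flags are standard Certbot options):

```shell
# Obtain/renew a certificate and let Certbot edit the NGINX config
sudo certbot --nginx -d mail.sumamazaeem.com

# Or fetch the certificate only and keep the NGINX config manual
sudo certbot certonly --nginx -d mail.sumamazaeem.com
```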
The Method Used
Docker and Transfer Commands
- Used the following Docker commands to create, transfer, and restore container image tar files:

```shell
docker save -o <container_backup>.tar <image_name>
scp <container_backup>.tar user@server:/path/to/destination
docker load -i <container_backup>.tar
```
Temporary Image for Data Backup
- Used a temporary Ubuntu image to create backups of persistent data.
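A common pattern for this (the volume name and archive name here are illustrative): mount the named volume and a host directory into a throwaway Ubuntu container, and tar the data into the host directory.

```shell
# Back up a named volume into backup .tar.gz in the current directory
docker run --rm \
  -v listmonk-data:/source:ro \
  -v "$(pwd)":/backup \
  ubuntu tar czf /backup/listmonk-data.tar.gz -C /source .

# On the target server: restore into a new, empty volume
docker run --rm \
  -v listmonk-data:/target \
  -v "$(pwd)":/backup \
  ubuntu tar xzf /backup/listmonk-data.tar.gz -C /target
```

The `--rm` flag discards the helper container afterwards; only the tarball and the volume remain.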
Securely Transferring Files
- SSH Agent Forwarding: Leveraged SSH agent forwarding to avoid placing private keys directly on the server.
- SCP Command: For secure copying of files between servers:
```shell
scp -i path/to/private_key <source_file> user@server_ip:/path/to/destination/
```

- Rsync: An alternative option.
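The rsync permission issue mentioned earlier can typically be worked around by escalating only the remote side to root (paths are placeholders; `--rsync-path` is a standard rsync option):

```shell
# -a preserves permissions and ownership; --rsync-path runs the remote rsync via sudo
rsync -avz -e "ssh -i path/to/private_key" \
  --rsync-path="sudo rsync" \
  <source_dir>/ user@server_ip:/path/to/destination/
```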
NGINX Configuration:
```nginx
# Redirect all HTTP requests to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name mail.sumamazaeem.com;

    # Redirect all HTTP requests to the same URL but with HTTPS
    return 301 https://$server_name$request_uri;
}

# Serve the site through HTTPS
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name mail.sumamazaeem.com;

    # SSL configuration
    ssl_certificate /etc/letsencrypt/live/mail.sumamazaeem.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mail.sumamazaeem.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    # Proxy the requests to the actual application
    location / {
        proxy_pass http://localhost:9001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```