Breaking The Bad: Transforming Monolithic Applications to Containers

Mohammad R. Tayyebi
4 min read · Jul 19, 2023


A couple of months ago, I joined a project that had seen no major changes in the past five years. The team had made a great effort building a lovely service with tens of millions of visits per year, but when they left the organization, the project was blocked.

There were thousands of papers, documents, diagrams, and media files that were supposed to describe the details, but none of them were organized. So whenever a new team accepted the challenge, they immediately decided to rebuild everything from scratch.

The legacy Laravel code-base was hosted on a WHM & cPanel stack. That caused many side effects, such as IP bans, unwanted cron jobs and updates, and heavy load due to misconfigurations.

The legacy structure

It is not acceptable to use a set of tools in production just for their benefits, without being aware of their consequences, mechanisms, resource usage, and side effects. That’s why I call myself a ‘purist’: it takes more than one particular ‘feature’ to convince me to use anything that is not known as a ‘single-purpose utility tool’.

Before launching any change campaign, we have to define KPIs and metrics to measure the impact. In this case, I used Cloudflare’s DNS analytics and Google Search Console to keep track of changes. Downtime was not acceptable, and any 500 error would be a mess.

The new architecture is inspired by Domain-Driven Design (DDD) microservices. To ‘repair the ship at sea’, and because the spaghetti code had been patched many times, I created exact copies of the same application in directories named after the business domains, to avoid refactoring the code base prematurely. We temporarily accept this redundancy to take advantage of isolation; refactoring will be much easier later!

“How do you eat an elephant? One bite at a time.”
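To make that first bite concrete, here is a minimal sketch of the per-domain copies. The domain names here are hypothetical, not the project’s real ones:

# Hypothetical business domains; each gets a full copy of the legacy app
for domain in billing catalog orders; do
  cp -r legacy-app "services/$domain"
done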

Splitting the application this way neutralizes the default Laravel and Apache .htaccess routing. Instead, we can take advantage of Nginx as a reverse proxy to handle routing to sub-directories and even sub-domains. Later, this also gives us leverage for scaling the application with load balancing. Nginx Lua scripts also let us fix some SEO mistakes and mixed-content issues at the edge.
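As a rough sketch of that routing, written as a shell step that drops in an Nginx config (the domain, ports, and paths are placeholders, not the project’s real values):

# Minimal reverse-proxy sketch; domain, ports, and paths are placeholders
cat > /etc/nginx/conf.d/app.conf <<'EOF'
server {
    listen 80;
    server_name example.com;

    # Route each business domain's sub-directory to its own container
    location /billing/ { proxy_pass http://127.0.0.1:8081/; }
    location /orders/  { proxy_pass http://127.0.0.1:8082/; }

    # Everything else still reaches the legacy application
    location / { proxy_pass http://127.0.0.1:8080/; }
}
EOF
nginx -s reload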

Micro-service inspired architecture.

First, I moved the database into a container on the new server, pinned to the same version as the original, and redirected the legacy server’s connection string to the new one.
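A minimal sketch of such a container; the MySQL version, credentials, and data path below are assumptions for illustration:

# Pin the image to the same version as the legacy database server
docker run -d --name new_mysql_server_container \
  -e MYSQL_ROOT_PASSWORD=change_me \
  -v /data/mysql:/var/lib/mysql \
  -p 3306:3306 \
  mysql:5.7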

When stable bandwidth is available, a single-line command can migrate a database between servers:

# Simple method: dump locally, load directly into db2 on the remote server
mysqldump db1 | mysql -h server2 db2

# Containerized target: pipe the dump over SSH into the MySQL container
# (-i keeps stdin open so the dump can stream into mysql)
mysqldump db1 | \
  ssh server2 docker container exec -i new_mysql_server_container \
  mysql -u username_here -ppassword_here db2

Then I migrated the services one by one to the new server, with a policy of one directory per container, each sharing the same folder structure, so that mapping and sharing volumes would not get confusing. I could also move the uploads/ directory to another hard disk and serve it with a simple web server with its own caching policy, which shrinks backup sizes and optimizes load and I/O.
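For example, serving uploads/ from the second disk could look like this; the layout, paths, port, and container name are illustrative assumptions:

# Identical layout per service keeps volume mapping predictable, e.g.:
# /srv/services/<service>/{docker-compose.yml, app/, uploads/}

# Serve uploads/ from a second disk with a lightweight static web server
docker run -d --name uploads \
  -v /mnt/disk2/uploads:/usr/share/nginx/html:ro \
  -p 8090:80 \
  nginx:alpine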

Then, to confirm a successful migration, I changed my local DNS by editing the /etc/hosts file. With this simple trick, we can browse the website on the new server using the same domain, without editing any zone on the public DNS servers.
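For instance (the IP address and domain below are placeholders):

# Point the domain at the new server for this machine only
echo "203.0.113.10 example.com www.example.com" | sudo tee -a /etc/hosts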

Hosting different technologies and configurations on the same server, without any conflicts between services

Thanks to Docker, moving services between servers has never been easier. A container, with all of its dependencies and data, can be zipped and moved with a simple sftp command:

# Fetch the archived container and its data from the old host
sftp -oPort=22 host
sftp> get container_and_data.zip

A few more simple commands, and the container is up and running!

# Unpack the archive and start the stack in the background
unzip container_and_data.zip
cd container_and_data_directory/
docker compose up -d

I changed the DNS records on the CDN, and the new server was online!

Nginx log analysis with GoAccess
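A report like the one above can be generated with a one-liner, assuming the standard log path and Nginx’s default combined log format:

# Build an HTML report from the Nginx access log
goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html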

Now we can remove the replicated code, refactor heavy database queries, merge some files, and retire other parts entirely.

But why do some businesses fail to manage change in IT?

It’s not all about engineering; it’s also about the human side. Not every company board can wait long enough to rebuild everything from scratch; even if the legacy code-base is garbage, something is better than nothing. Some businesses are very fragile when it comes to audience trust: they must keep their service accessible at all times, and data loss is not acceptable.

Communication is key. The tech team should not imprison themselves in their lab and cut off conversations with stakeholders. People have to accept change, believe in it, and contribute. Without that contribution, even a technically successful transition fails as a change, and will soon be rolled back.
