
Rastin Mehr


Scaling Up Anahita

People have asked us how to scale up Anahita to hundreds of thousands of users or more, and also about a microservices version of Anahita. No cloud application scales to that many users by default. Traditionally, Software-as-a-Service (SaaS) projects achieved this by throwing hardware and computing power at their production servers; today they rely on a DevOps and release engineering infrastructure.

Reimagining Anahita as a microservices architecture

Two common technologies that we can use today are Docker and Kubernetes. Docker allows us to run Anahita, the MySQL database, and the other parts of a setup in individual containers. A group of Docker containers that communicate with each other forms a cluster. At the moment, Anahita is a monolithic application, and we want to move towards a microservices architecture. In this article, I want to outline the first few changes that could make this leap happen.
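
As a small illustration of running parts of the stack in containers, here is a sketch that uses the official MySQL and Redis images from Docker Hub; the container names, network name, and password are placeholders, not part of any existing Anahita setup:

    # Start a MySQL container for the Anahita database (placeholder password)
    docker run -d --name anahita-mysql \
      -e MYSQL_ROOT_PASSWORD=change-me \
      -e MYSQL_DATABASE=anahita \
      mysql:8.0

    # Start a Redis container for sessions and caching
    docker run -d --name anahita-redis redis:7

    # Put both containers on a shared network so they can talk to each other
    docker network create anahita-net
    docker network connect anahita-net anahita-mysql
    docker network connect anahita-net anahita-redis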

A Client-Server Architecture

Moving towards a client-server architecture is happening as we speak; the work on the Anahita React app is part of this goal. A client-server architecture isn't particularly a microservices concept, but it will remove some significant barriers. The idea is to remove all the code in Anahita that is responsible for constructing and rendering user interfaces and only provide RESTful APIs. Right now, some of the processing power on the server side goes into reconstructing user interface elements and template layouts on every request. Our current codebase is about 11MB; by reducing it to a RESTful API, we can probably cut that size roughly in half and increase the speed as well.

We can then build client applications using technologies such as React JS, Vue JS, Electron JS, and the iOS/Android SDKs. This way, we can have web, mobile, and desktop apps that all communicate with the same back-end.
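
To make the split concrete, a client app would talk to the back-end purely over HTTP and JSON. The endpoint and fields below are hypothetical, sketched only to show the shape of such an exchange rather than Anahita's actual API:

    # Hypothetical request from a React or mobile client to the RESTful back-end
    curl -H "Accept: application/json" \
         -H "Authorization: Bearer <access-token>" \
         https://example.com/api/stories?limit=20

    # Hypothetical response: plain data, no HTML markup or template layouts
    {
      "data": [
        { "id": 1021, "author": "rastin", "body": "Hello Anahita!" }
      ],
      "pagination": { "limit": 20, "offset": 0 }
    }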

Storing sessions in Redis

Right now, Anahita reads and writes sessions in a table in the MySQL database. That works well for an Anahita installation with tens of thousands of users. We can, however, add an adapter that handles session management with a Redis database instead.

Redis is a NoSQL database that stores data as key-value pairs and runs in RAM, which makes it quite fast and efficient. Anahita reads and writes sessions on every request, so if we could read and write them in Redis instead, it would increase the performance and efficiency of the system for hundreds of thousands of users, or perhaps more.
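
While that adapter is being developed, one low-level way to get a similar effect would be PHP's own Redis session handler provided by the phpredis extension. A minimal php.ini sketch, where the host, port, and database number are assumptions about a particular setup:

    ; Store PHP sessions in Redis instead of the default handler
    ; (assumes the phpredis extension is installed)
    session.save_handler = redis
    session.save_path = "tcp://127.0.0.1:6379?database=0"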

Building a notification worker

Anahita creates a lot of notifications and sends out a lot of email notifications. It doesn't have to. In many scenarios, users may not appreciate receiving constant email notifications. We also have the option of sending out notifications via different channels, such as mobile and browser push notifications.

A microservices approach would be to develop a worker application with RESTful APIs that receives a notification from Anahita along with the list of recipients, composes all the notification messages, and sends them out via notification services for email, browser, or mobile.
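
For example, Anahita could hand a notification off to the worker with a single API call. The endpoint and payload below are purely hypothetical, a sketch of what such an interface might look like:

    POST /notifications HTTP/1.1
    Host: worker.example.com
    Content-Type: application/json

    {
      "subject": "New comment on your photo",
      "body": "Rastin commented: \"Great shot!\"",
      "channels": ["email", "browser", "mobile"],
      "recipients": [
        { "id": 42, "email": "person@example.com" }
      ]
    }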

Building a Docker and Kubernetes cluster

Now that we have Anahita, MySQL, Redis, and the notification worker, we can build Docker images for them. Fortunately, there are already official Docker images for MySQL and Redis on Docker Hub; all we need are Docker images for Anahita and the notification worker. Then we can use a cluster technology such as Swarm from Docker or Kubernetes (originally developed at Google) to orchestrate these containers in a cloud environment. Most cloud service providers support Docker and Kubernetes (K8s), so this makes it quite simple to deploy an Anahita cluster on those cloud services.
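
A sketch of what such a setup could look like with Docker Compose follows; the Anahita and worker image names are hypothetical (those images would have to be built first), while the MySQL and Redis images are the official ones from Docker Hub:

    # docker-compose.yml (sketch; the anahita/* image names are assumptions)
    version: "3.8"
    services:
      anahita:
        image: anahita/anahita:latest              # hypothetical Anahita API image
        depends_on: [mysql, redis]
        ports: ["80:80"]
      notification-worker:
        image: anahita/notification-worker:latest  # hypothetical worker image
        depends_on: [redis]
      mysql:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: change-me
          MYSQL_DATABASE: anahita
        volumes: ["db-data:/var/lib/mysql"]
      redis:
        image: redis:7
    volumes:
      db-data: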

So how do we scale up using a cluster technology such as Kubernetes? We can run one or multiple instances of Anahita, MySQL, Redis, or the notification worker in this cluster.
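
In Kubernetes terms, each of these pieces becomes a Deployment whose replica count controls how many instances run. Here is a minimal sketch for the Anahita API containers, where the image name is again an assumption:

    # anahita-deployment.yaml (sketch)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: anahita-api
    spec:
      replicas: 3                              # run three instances of the API
      selector:
        matchLabels: { app: anahita-api }
      template:
        metadata:
          labels: { app: anahita-api }
        spec:
          containers:
            - name: anahita-api
              image: anahita/anahita:latest    # hypothetical image
              ports:
                - containerPort: 80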

We can even configure the Anahita Kubernetes cluster to create additional pods from these containers whenever traffic goes up and remove them when traffic slows down. Cloud providers such as AWS provide APIs that let you monitor the operating cost of your cluster, so you can configure it to keep the number of pods within your price range.
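
Kubernetes handles this kind of scaling with a HorizontalPodAutoscaler, which watches resource usage and adjusts the number of pods between a floor and a ceiling you choose; the ceiling is also how you keep the pod count, and therefore the bill, predictable. A sketch that targets the Deployment above:

    # anahita-hpa.yaml (sketch; targets the hypothetical Deployment above)
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: anahita-api
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: anahita-api
      minReplicas: 2                  # floor when traffic is slow
      maxReplicas: 10                 # ceiling keeps the cost predictable
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70  # add pods when average CPU goes above 70%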

Building a more granular microservices architecture

Anahita can be broken down further into individual containers that perform specialized tasks: for example, a Docker container for the story feed alone, one for the notifications feed, and one for identity management and authentication. Each of these developments requires time and funding, so you need to be mindful of whether your project requires this level of scalability.

Making Anahita available as a Docker/Kubernetes cluster is on our road map, and we want to make it happen this year. In fact, once the Anahita React app is ready and the Anahita back-end is reduced to APIs only, we want to package Anahita as a Docker/Kubernetes deployment. Using a cluster technology would also make it easier for you to develop Anahita apps and deploy them to the cloud by building a Continuous Integration (CI) and Continuous Deployment (CD) workflow.
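
Such a CI/CD workflow could be as simple as building and pushing the Docker image on every push to the main branch and then updating the cluster. The workflow below is a hypothetical GitHub Actions sketch; the registry name, credentials, and deployment name are placeholders, and it assumes kubectl is already configured with access to the cluster:

    # .github/workflows/deploy.yml (sketch)
    name: build-and-deploy
    on:
      push:
        branches: [master]
    jobs:
      build-and-deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build the Docker image
            run: docker build -t registry.example.com/anahita:${{ github.sha }} .
          - name: Push the image to the registry
            env:
              REGISTRY_USER: ${{ secrets.REGISTRY_USER }}
              REGISTRY_PASS: ${{ secrets.REGISTRY_PASS }}
            run: |
              docker login registry.example.com -u "$REGISTRY_USER" -p "$REGISTRY_PASS"
              docker push registry.example.com/anahita:${{ github.sha }}
          - name: Roll the new image out to the cluster
            run: kubectl set image deployment/anahita-api anahita-api=registry.example.com/anahita:${{ github.sha }}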

Resources:

  1. Docker: https://www.docker.com
  2. DockerHub: https://hub.docker.com
  3. Kubernetes: https://kubernetes.io
  4. Redis: https://redis.io
  5. Microservices Architecture: https://en.wikipedia.org/wiki/Microservices

#Anahita #Docker #Kubernetes #MicroServices #ReactJS #ClientServerArchitecture #CloudArchitecture #DevOps #ReleaseEngineering #AWS #RedisIO

Photo by Snapwire from Pexels
