Architecture Description and Deployment Pattern
Client requests first hit CloudFront, which serves them at low latency from edge locations. The requests are then filtered by AWS WAF and passed to Amazon Cognito for both authentication (AuthN) and authorization (AuthZ). Valid requests are routed by API Gateway to the respective endpoint functions hosted on AWS Lambda. Most GET requests are fulfilled by the cache at API Gateway; the remainder are handled by an independent GET Lambda invocation per request. These GET Lambdas fetch results from read replicas, eliminating a huge load on the master RDS instance. All write requests from the PUT/POST/DELETE Lambdas are pushed into Kinesis Data Firehose to avoid failures at RDS and to reduce write pressure on the master. Because Kinesis has a maximum retention period (7 days at the time of writing of this article) and we want to avoid any loss of data in case RDS goes down, we keep a copy of the requests flowing through Kinesis Firehose in an S3 bucket. At the very end sit our master RDS instance and its replicas (slave RDS instances), which host and maintain the user data.
1. Amazon CloudFront
We are using CloudFront for low latency, reliability, availability, and scalability. It also leverages the highly resilient Amazon backbone network for superior performance and availability for our end users.
2. AWS WAF
AWS WAF (Web Application Firewall) helps us block common attack patterns, such as SQL injection and cross-site scripting, and lets us layer custom security rules on top. Its seamless integration with CloudFront also makes it a no-brainer for our use case.
3. Amazon Cognito
Amazon Cognito helps us add and sync our users' profile information, providing an uninterrupted and pleasant experience regardless of the device they use. It lets us focus on coding our application while handling user identity and app data synchronization via User Pools or Identity Pools.
4. Amazon API Gateway
Amazon API Gateway routes our user requests. We'll also be able to use API Gateway's caching feature (costs $1/hr for a 58.2 GB memory cache at the time of writing of this article) for our GET calls to reduce the strain on our RDS instance. Additionally, we can use API Gateway throttling to restrict the number of requests (any of GET, PUT, POST, or DELETE) and further reduce our calls to RDS.
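The throttling and caching described above can be applied at the stage level through the API Gateway API. A minimal sketch with boto3, assuming an existing REST API and stage (the IDs, limits, and cache size here are hypothetical placeholders):

```python
def throttle_and_cache_patch_ops(rate_limit, burst_limit, cache_size_gb):
    """Build the patch operations that enable stage-wide throttling and caching.

    The `/*/*/...` paths apply the throttle to every resource and method.
    """
    return [
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": str(rate_limit)},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": str(burst_limit)},
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": str(cache_size_gb)},
    ]


def apply_stage_settings(rest_api_id, stage_name):
    # boto3 is imported lazily so the module loads even where the AWS SDK
    # is unavailable; assumes AWS credentials are configured in the environment.
    import boto3

    client = boto3.client("apigateway")
    client.update_stage(
        restApiId=rest_api_id,       # hypothetical: your REST API id
        stageName=stage_name,        # hypothetical: e.g. "prod"
        patchOperations=throttle_and_cache_patch_ops(100, 200, "0.5"),
    )
```

Note that the cache cluster sizes come in fixed tiers (0.5 GB up to the 58.2 GB mentioned above), so `cache_size_gb` must match one of the supported values.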
5. GET Lambda
This is the usual GET function you'd normally define, except that it is serverless. We thus save on the hosting costs of servers, whether in the cloud or, worse, on-prem. It reads from the read replicas of our RDS instance, eliminating a huge number of calls to our master RDS server. We should also limit the number of Lambda instances that can be spun up at a time to avoid overwhelming our RDS server(s). Since the API Gateway cache absorbs most of our GET requests, this shouldn't be a big problem anyway. If you are still concerned about it, you can always introduce a Redis cache between the GET Lambda and RDS.
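A minimal sketch of such a GET Lambda, assuming hypothetical replica hostnames and a `userId` path parameter (the actual SQL query is omitted to keep the sketch self-contained):

```python
import hashlib
import json

# Assumption: hypothetical read-replica endpoints; replace with your own
# RDS replica hostnames. The master endpoint is deliberately absent.
READ_REPLICAS = [
    "myapp-replica-1.abc123.us-east-1.rds.amazonaws.com",
    "myapp-replica-2.abc123.us-east-1.rds.amazonaws.com",
]


def pick_replica(key):
    """Deterministically spread reads across replicas so the master is never queried."""
    digest = int(hashlib.sha256(key.encode("utf-8")).hexdigest(), 16)
    return READ_REPLICAS[digest % len(READ_REPLICAS)]


def handler(event, context):
    user_id = event["pathParameters"]["userId"]
    endpoint = pick_replica(user_id)
    # In a real function you would open a connection to `endpoint` here
    # (e.g. with pymysql) and run the SELECT; omitted for brevity.
    return {"statusCode": 200, "body": json.dumps({"servedBy": endpoint})}
```

Hashing on the user id keeps repeat lookups for the same user on the same replica, which plays nicely with any per-replica query cache; plain round-robin would work just as well.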
6. PUT/POST/DELETE Lambda
These functions perform their usual operations; the only difference is that they, too, are hosted on serverless Lambda. As with the GET Lambdas, we should limit the number of instances that can be spun up simultaneously.
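A sketch of how a write Lambda can hand its request off to Firehose instead of writing to RDS directly; the stream name is a hypothetical placeholder:

```python
import json


def encode_write_record(method, payload):
    """Serialize one write request as a newline-delimited JSON record for Firehose."""
    return (json.dumps({"method": method, "payload": payload}) + "\n").encode("utf-8")


def handler(event, context):
    # boto3 imported lazily so the module loads even where the AWS SDK is
    # unavailable; assumes Lambda execution-role permissions for Firehose.
    import boto3

    firehose = boto3.client("firehose")
    record = encode_write_record(event["httpMethod"], json.loads(event["body"]))
    firehose.put_record(
        DeliveryStreamName="user-writes-stream",  # hypothetical stream name
        Record={"Data": record},
    )
    # 202 Accepted: the write is queued, not yet applied to RDS.
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
```

The trailing newline matters: Firehose concatenates records when delivering to S3, so newline-delimited JSON keeps the backup objects easy to split back into individual requests.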
7. Amazon Kinesis Data Firehose
To smooth out concurrent write requests on RDS and reduce the number of direct calls to it, we buffer all writes through Kinesis Data Firehose.
8. Amazon S3
As Kinesis has a maximum retention period (7 days at the time of writing of this article) and we want to avoid any loss of data in case RDS goes down, we keep a copy of the requests flowing through Kinesis Firehose in an S3 bucket.
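If RDS does go down, the S3 copy lets us replay the lost writes once it is back. A sketch under the assumption that Firehose delivered the newline-delimited JSON records described earlier (bucket name and prefix are hypothetical):

```python
import json


def parse_backup_object(body_bytes):
    """Split a Firehose-delivered S3 object back into individual write requests."""
    return [json.loads(line) for line in body_bytes.decode("utf-8").splitlines() if line]


def replay_from_s3(bucket, prefix):
    # boto3 imported lazily; assumes credentials with s3:ListBucket/s3:GetObject.
    import boto3

    s3 = boto3.client("s3")
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            for request in parse_backup_object(body):
                # Re-apply the write against RDS here, e.g. via the same code
                # path your PUT/POST/DELETE Lambdas use; omitted for brevity.
                print(request["method"], request["payload"])
```

Firehose prefixes delivered objects with the delivery timestamp, so iterating keys in the default lexicographic order replays the writes roughly in arrival order.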
9. Amazon RDS
We are using Amazon RDS because it integrates easily into our architecture and lets us keep and maintain records on the AWS cloud with easy replication. By restricting GET calls to read only from the replicas, we eliminate a huge number of calls to the master.