As a TL;DR:
- AWS CDK
- AWS DynamoDB
- AWS Lambda
- AWS CloudWatch
- AWS IAM
- AWS Cognito
- AWS Elastic Beanstalk
- AWS SNS
- AWS Budgets
Below I go into further detail on each of these tools one by one and how I use them. At the scale of my usage, they are either completely free (ex: IAM) or cost barely a few cents when "heavily used", but where something does have a cost I will try to mention it. For more details, I made a mini-site that accompanies this post: https://lannonbr-aws-pricing.netlify.app/
As well, the top perk I feel with using any Infrastructure as Code tool is the ability to ramp down resources and know that everything is cleaned up when you want to stop a project. Other cloud providers have the concept of a project / resource group (GCP Projects, Azure Resource Groups). Amazon does not require this structure, but using CDK & CloudFormation brings some structure to managing multiple resources under one roof.
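As a rough sketch of what that structure looks like: a CDK app groups resources into stacks, and tearing a stack down removes everything it created (the stack and app names below are hypothetical):

```typescript
import * as cdk from "aws-cdk-lib";

// Every resource defined inside a stack is created, updated, and
// destroyed together as one CloudFormation stack.
class EsmCheckerStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // DynamoDB tables, Lambda functions, etc. get defined here
  }
}

const app = new cdk.App();
new EsmCheckerStack(app, "EsmCheckerStack");
```

`cdk deploy` synthesizes and applies the stack, and `cdk destroy EsmCheckerStack` cleans up every resource the stack owns, which is exactly the "ramp down and know it's gone" workflow.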
AWS DynamoDB is a managed NoSQL database solution. At its core there is no strict schema compared to a SQL-based structure, but to use Dynamo effectively, knowing the access patterns for how you grab data from your tables is still extremely important. For instance, with my ESM Checker project, the datapoints displayed are stored with a partition key of the month (ex: 2022-08) and a sort key of the full date (ex: 2022-08-04). Due to how Dynamo partitions data, a query of "go get me all entries within August 2022" will fetch at most 31 rows, so as I continue to collect more and more data, my queries aren't getting slower over time.
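That key layout can be sketched in plain TypeScript; the helper and table name below are hypothetical, but the query shape matches how a DynamoDB `Query` with a `KeyConditionExpression` on the partition key behaves:

```typescript
// Derive the keys ESM Checker-style: the partition key is the month,
// the sort key is the full date, so one month's rows share a partition.
function keysForDate(isoDate: string): { pk: string; sk: string } {
  return { pk: isoDate.slice(0, 7), sk: isoDate };
}

// Params for "get me all entries within August 2022" — at most 31 rows,
// no matter how many months of data the table accumulates.
const queryParams = {
  TableName: "esm-checker-data", // hypothetical table name
  KeyConditionExpression: "pk = :month",
  ExpressionAttributeValues: { ":month": "2022-08" },
};
```

Because every query is pinned to one month's partition, the cost of the access pattern stays constant as the table grows.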
For some reading on how to efficiently use Dynamo, I highly recommend Alex DeBrie's The DynamoDB Book.
Lambda is Amazon's main Functions as a Service tool, where I don't have to manage where my code runs but rather can focus on what runs and how it is triggered. My main use of Lambda is triggering executions on a cron schedule using CloudWatch Events. I mainly write my functions in Node.js as it is my primary language, but Lambda natively supports Node, Python, Ruby, Java, Go, C#, and PowerShell. On top of that, you can add custom runtimes to deploy code in any other language not listed above.
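With CDK, wiring a function to a cron schedule is only a few lines. This is a sketch using the events rule construct; the function name, asset path, and schedule are hypothetical:

```typescript
import { Rule, Schedule } from "aws-cdk-lib/aws-events";
import { LambdaFunction } from "aws-cdk-lib/aws-events-targets";
import * as lambda from "aws-cdk-lib/aws-lambda";

// Inside a Stack constructor:
const checkerFn = new lambda.Function(this, "CheckerFn", {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("lambda"), // folder containing index.js
});

// Fire the function once a day at 12:00 UTC
new Rule(this, "DailySchedule", {
  schedule: Schedule.cron({ minute: "0", hour: "12" }),
  targets: [new LambdaFunction(checkerFn)],
});
```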
All of the logs from my various resources on AWS are automatically pooled into CloudWatch. From here, I can investigate whenever my tools break. CloudWatch also supports creating dashboards & alerting when certain events occur, but my projects have not scaled to the point where I need extremely detailed error reporting, and the console errors from Lambda functions are good enough for me.
At the core of managing access control on AWS is IAM. With it, I mainly create roles for various portions of my apps to get access to the resources they need. As I talked about earlier, AWS CDK gives me a workflow for making roles with the least privilege they need to function, which I wrote about in a post titled Least-privilege access for AWS Resources using AWS CDK. Also, if I need my GitHub Actions CI runs to access AWS, either to fetch data from things like DynamoDB or to deploy resources, GitHub Actions has OpenID Connect support with which you can create short-lived tokens. I wrote about this here: Setting up OpenID Connect authentication for GitHub Actions with AWS CDK.
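Much of that least-privilege workflow falls out of CDK's `grant*` methods, which generate scoped IAM policies for you. A sketch, with hypothetical table and function names:

```typescript
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import * as lambda from "aws-cdk-lib/aws-lambda";

// Inside a Stack constructor:
const table = new dynamodb.Table(this, "DataTable", {
  partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
  sortKey: { name: "sk", type: dynamodb.AttributeType.STRING },
});

const readerFn = new lambda.Function(this, "ReaderFn", {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("lambda"),
});

// Generates an IAM policy scoped to read-only actions on this one table,
// rather than a broad dynamodb:* permission across the account.
table.grantReadData(readerFn);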
For applications where I need auth, Cognito has worked well. So far I have only gone down the route of setting up user pools with a username/password auth flow, but in the future I am curious to try out more powerful auth flows (social logins, email-based magic links, passwordless auth with WebAuthn).
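A username/password user pool in CDK looks roughly like this; the construct names and policy values are hypothetical:

```typescript
import * as cognito from "aws-cdk-lib/aws-cognito";

// Inside a Stack constructor:
const userPool = new cognito.UserPool(this, "AppUsers", {
  selfSignUpEnabled: true,
  signInAliases: { username: true, email: true },
  passwordPolicy: { minLength: 12 },
});

// App client the frontend authenticates against, using the SRP flow
userPool.addClient("WebClient", {
  authFlows: { userSrp: true },
});
```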
In my Node apps, I use the @aws-sdk/credential-providers npm package to manage the auth flow of taking an identity token and getting back temporary access to other AWS resources via identity pools.
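A sketch of that flow with the SDK v3 provider, assuming an identity pool already exists; the region, pool IDs, and token variable are placeholders:

```typescript
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

// idToken comes back from a successful user-pool sign-in
declare const idToken: string;

// Exchanges the Cognito identity token for temporary AWS credentials,
// then every request this client makes uses those scoped credentials.
const client = new DynamoDBClient({
  region: "us-east-1",
  credentials: fromCognitoIdentityPool({
    clientConfig: { region: "us-east-1" },
    identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000",
    logins: {
      "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": idToken,
    },
  }),
});
```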
For services that I can't run completely serverless, Elastic Beanstalk is a layer on top of AWS EC2 for managing applications that require a server. My main use case for this is running Discord bots, which require a long-running WebSocket connection to function and can't really be done straightforwardly in something like Lambda. Elastic Beanstalk itself doesn't cost any money, but since it is a layer on top of EC2, you have to pay for a VM. If you are only running a single-process Node app like I am, the base t4g.nano (2 vCPUs, 0.5 GB RAM) will work and ends up costing around $4.00 a month.
I've been exploring AWS SNS as a messaging bus for creating a pub/sub microservice architecture. Amazon does provide a service called Amazon MQ on which you can run a RabbitMQ broker, but for smaller-scale projects it isn't financially viable for me: the base VM it runs on costs $20 a month when running 24/7, while Amazon SNS costs much less for the functionality I want.
With SNS, I can create topics that I send messages to and then create AWS Lambda functions that listen on those topics. Rather than needing to tell the producer of a message which services will be grabbing its messages, I can just send the messages up and let each subscriber process them however it sees fit.
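Wiring a topic to a Lambda subscriber in CDK looks roughly like this; the topic and function names are hypothetical:

```typescript
import { Topic } from "aws-cdk-lib/aws-sns";
import { LambdaSubscription } from "aws-cdk-lib/aws-sns-subscriptions";
import * as lambda from "aws-cdk-lib/aws-lambda";

// Inside a Stack constructor:
const eventsTopic = new Topic(this, "EventsTopic");

const consumerFn = new lambda.Function(this, "ConsumerFn", {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("consumer"),
});

// The producer only ever publishes to the topic's ARN; it never needs
// to know this subscriber (or any future ones) exists.
eventsTopic.addSubscription(new LambdaSubscription(consumerFn));
```

Adding a second consumer later is just another `addSubscription` call, with no changes to the producer.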
On the subject of keeping my AWS bill from spiralling out of control, I've set up a budget of $50 across my entire AWS account using the AWS Budgets tool. Although I currently spend on average only $5 a month, I have a safety net so I am informed if my costs unexpectedly get out of hand. As well, while I haven't personally set this up, you can configure actions for AWS to take on your behalf if you hit certain thresholds on your budget, as described in their docs under Configuring AWS Budgets actions: actions such as changing IAM permissions or disabling AWS Lambda functions, among others.
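The budget itself can live in the same CDK stack via the low-level `CfnBudget` construct. A sketch mirroring the $50 monthly budget above; the notification email and 80% threshold are placeholder values:

```typescript
import { CfnBudget } from "aws-cdk-lib/aws-budgets";

// Inside a Stack constructor:
new CfnBudget(this, "MonthlyBudget", {
  budget: {
    budgetType: "COST",
    timeUnit: "MONTHLY",
    budgetLimit: { amount: 50, unit: "USD" },
  },
  notificationsWithSubscribers: [
    {
      // Email me once actual spend passes 80% of the $50 budget
      notification: {
        notificationType: "ACTUAL",
        comparisonOperator: "GREATER_THAN",
        threshold: 80,
      },
      subscribers: [{ subscriptionType: "EMAIL", address: "me@example.com" }],
    },
  ],
});
```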