FeedMyFurBabies – Using Custom Resources in AWS CDK to create AWS IoT Core Keys and Certificates

· 10 min read
Chiwai Chan
Tinkerer

In a previous blog I talked about switching from CloudFormation templates to AWS CDK as my preference for infrastructure as code when provisioning my AWS IoT Core resources; I mentioned at the time that working in AWS CDK would improve my productivity and let me focus on iterating and building. Although I switched to CDK for the reasons described in that blog, there are some CloudFormation limitations that cannot be addressed by switching to CDK alone.

In this blog I will talk about CloudFormation Custom Resources:

  • What are CloudFormation Custom Resources?
  • What is the problem I am trying to solve?
  • How will I solve it?
  • How am I using Custom Resources with AWS CDK?

CloudFormation Custom Resources allow you to write custom logic in AWS Lambda functions to provision resources, whether those resources live in AWS (you might ask why not just use CloudFormation or CDK: keep reading), on-premises or in other public clouds. These Custom Resource Lambda functions are configured within a CloudFormation template and are hooked into a CloudFormation Stack's lifecycle during the create, update and delete phases - to support these lifecycle stages, the logic for each of them must be implemented in the Lambda function's code.

What is the problem I am trying to solve?

My AWS IoT Core reference architecture relies on two sets of certificates and private keys; they are used to authenticate each Thing device connecting to AWS IoT Core - this ensures that only trusted devices can establish a connection.

In the CloudFormation template version of my reference architecture, the deployment instructions required you to manually create 2 Certificates in the AWS Console for the IoT Core service, because CloudFormation doesn't directly support creating certificates for AWS IoT Core; as shown in the screenshot below.

CloudFormation Stacks

There is nothing wrong with creating the certificates manually in the AWS Console when you are trying out my example for the purpose of learning, but it would be best to be able to deploy the entire set of resources using infrastructure as code, so we can achieve consistent, repeatable deployments with as little effort as possible. If you are completely new to AWS, coding and IoT, my deployment instructions would be very overwhelming and the chances of you successfully deploying a fully functional example would be very low.

How will I solve it?

If you got this far and actually read what was written up to this point, you probably would have guessed that the solution is Custom Resources: so let's talk about how the problem described above was solved.

We know Custom Resources are part of the solution, but one important thing to understand is that, even though there is no way to create the certificates directly with CloudFormation, there is support for creating them using the AWS SDK - the Boto3 Python library's create_keys_and_certificate call.

create_keys_and_certificate
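To get a feel for what that API returns, here is a minimal sketch of calling it directly with Boto3 (the variable names are mine; the response fields are the ones the Custom Resource below relies on):

import boto3

iot = boto3.client("iot")

# Create an active certificate along with its public/private key pair
response = iot.create_keys_and_certificate(setAsActive=True)

certificate_arn = response["certificateArn"]      # needed later to revoke/delete the certificate
certificate_pem = response["certificatePem"]      # the device certificate
private_key = response["keyPair"]["PrivateKey"]   # only returned once - store it somewhere safe

print(certificate_arn)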

So essentially, we are able to create the AWS IoT Core certificates using CloudFormation (in an indirect way), but it requires the help of a Custom Resource (a Lambda function) and the AWS Boto3 Python SDK.

The Python code below is what I have in the Custom Resource Lambda function; it demonstrates the use of the Boto3 SDK to create the AWS IoT Core Certificates. As a bonus, I am leveraging the Lambda function to save the Certificates into the AWS Systems Manager Parameter Store, which makes things much simpler by centralising the Certificates in a single location - without the engineer deploying this reference architecture having to manually copy, paste and manage the Certificates, as I forced readers to do in the original version of this reference architecture. The code below also manages the lifecycle of the Certificates when the CloudFormation Stack is deleted, by deleting the Certificates it created during the create phase.

The overall flow to create the certificates is: Create a CloudFormation Stack --> Invoke the Custom Resource --> invoke the Boto3 IoT "create_keys_and_certificate" API --> save the certificates in Systems Manager Parameter Store

import os
import sys
import json
import logging as logger
import requests
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

import time

logger.getLogger().setLevel(logger.INFO)


def get_aws_client(name):
    return boto3.client(
        name,
        config=Config(retries={"max_attempts": 10, "mode": "standard"}),
    )


def create_resources(thing_name: str, stack_name: str, encryption_algo: str):
    c_iot = get_aws_client("iot")
    c_ssm = get_aws_client("ssm")

    result = {}

    # Download the Amazon Root CA file and save it to Systems Manager Parameter Store
    url = "https://www.amazontrust.com/repository/AmazonRootCA1.pem"
    response = requests.get(url)

    if response.status_code == 200:
        amazon_root_ca = response.text
    else:
        logger.error(f"Failed to download Amazon Root CA file. Status code: {response.status_code}")
        amazon_root_ca = ""  # fall back to an empty value so the parameter write below still succeeds

    try:
        # Create the keys and certificate for a thing and save them each as a Systems Manager Parameter Store value later
        response = c_iot.create_keys_and_certificate(setAsActive=True)
        certificate_pem = response["certificatePem"]
        private_key = response["keyPair"]["PrivateKey"]
        result["CertificateArn"] = response["certificateArn"]
    except ClientError as e:
        logger.error(f"Error creating certificate, {e}")
        sys.exit(1)

    # store certificate and private key in SSM param store
    try:
        parameter_private_key = f"/{stack_name}/{thing_name}/private_key"
        parameter_certificate_pem = f"/{stack_name}/{thing_name}/certificate_pem"
        parameter_amazon_root_ca = f"/{stack_name}/{thing_name}/amazon_root_ca"

        # Saving the private key in Systems Manager Parameter Store
        response = c_ssm.put_parameter(
            Name=parameter_private_key,
            Description=f"Certificate private key for IoT thing {thing_name}",
            Value=private_key,
            Type="SecureString",
            Tier="Advanced",
            Overwrite=True
        )
        result["PrivateKeySecretParameter"] = parameter_private_key

        # Saving the certificate pem in Systems Manager Parameter Store
        response = c_ssm.put_parameter(
            Name=parameter_certificate_pem,
            Description=f"Certificate PEM for IoT thing {thing_name}",
            Value=certificate_pem,
            Type="String",
            Tier="Advanced",
            Overwrite=True
        )
        result["CertificatePemParameter"] = parameter_certificate_pem

        # Saving the Amazon Root CA in Systems Manager Parameter Store.
        # Although this file is publicly available to download, it is stored here to provide a complete set of files so this working example can be tried out with as much ease as possible
        response = c_ssm.put_parameter(
            Name=parameter_amazon_root_ca,
            Description=f"Amazon Root CA for IoT thing {thing_name}",
            Value=amazon_root_ca,
            Type="String",
            Tier="Advanced",
            Overwrite=True
        )
        result["AmazonRootCAParameter"] = parameter_amazon_root_ca
    except ClientError as e:
        logger.error(f"Error creating secure string parameters, {e}")
        sys.exit(1)

    try:
        response = c_iot.describe_endpoint(endpointType="iot:Data-ATS")
        result["DataAtsEndpointAddress"] = response["endpointAddress"]
    except ClientError as e:
        logger.error(f"Could not obtain iot:Data-ATS endpoint, {e}")
        result["DataAtsEndpointAddress"] = "stack_error: see log files"

    return result

# Delete the resources created for a thing when the CloudFormation Stack is deleted
def delete_resources(thing_name: str, certificate_arn: str, stack_name: str):
    c_iot = get_aws_client("iot")
    c_ssm = get_aws_client("ssm")

    try:
        # Delete all the Systems Manager Parameter Store values created to store a thing's certificate files
        parameter_private_key = f"/{stack_name}/{thing_name}/private_key"
        parameter_certificate_pem = f"/{stack_name}/{thing_name}/certificate_pem"
        parameter_amazon_root_ca = f"/{stack_name}/{thing_name}/amazon_root_ca"
        c_ssm.delete_parameters(Names=[parameter_private_key, parameter_certificate_pem, parameter_amazon_root_ca])
    except ClientError as e:
        logger.error(f"Unable to delete parameter store values, {e}")

    try:
        # Clean up the certificate by firstly revoking it then followed by deleting it
        c_iot.update_certificate(certificateId=certificate_arn.split("/")[-1], newStatus="REVOKED")
        c_iot.delete_certificate(certificateId=certificate_arn.split("/")[-1])
    except ClientError as e:
        logger.error(f"Unable to delete certificate {certificate_arn}, {e}")


def handler(event, context):
    props = event["ResourceProperties"]
    physical_resource_id = ""
    response_data = {}  # defaulted so unexpected request types still send a response

    try:
        # Check if this is a Create and we're failing Creates
        if event["RequestType"] == "Create" and event["ResourceProperties"].get(
            "FailCreate", False
        ):
            raise RuntimeError("Create failure requested, logging")
        elif event["RequestType"] == "Create":
            logger.info("Request CREATE")

            resp_lambda = create_resources(
                thing_name=props["CatFeederThingLambdaCertName"],
                stack_name=props["StackName"],
                encryption_algo=props["EncryptionAlgorithm"]
            )

            resp_controller = create_resources(
                thing_name=props["CatFeederThingControllerCertName"],
                stack_name=props["StackName"],
                encryption_algo=props["EncryptionAlgorithm"]
            )

            # The values in response_data can be used in the CDK code, for example as Outputs for the deployed CloudFormation Stack
            response_data = {
                "CertificateArnLambda": resp_lambda["CertificateArn"],
                "PrivateKeySecretParameterLambda": resp_lambda["PrivateKeySecretParameter"],
                "CertificatePemParameterLambda": resp_lambda["CertificatePemParameter"],
                "AmazonRootCAParameterLambda": resp_lambda["AmazonRootCAParameter"],
                "CertificateArnController": resp_controller["CertificateArn"],
                "PrivateKeySecretParameterController": resp_controller["PrivateKeySecretParameter"],
                "CertificatePemParameterController": resp_controller["CertificatePemParameter"],
                "AmazonRootCAParameterController": resp_controller["AmazonRootCAParameter"],
                "DataAtsEndpointAddress": resp_lambda["DataAtsEndpointAddress"],
            }

            # Use the ARNs of the pair of certificates created as the PhysicalResourceId of the Custom Resource
            physical_resource_id = response_data["CertificateArnLambda"] + "," + response_data["CertificateArnController"]
        elif event["RequestType"] == "Update":
            logger.info("Request UPDATE")
            physical_resource_id = event["PhysicalResourceId"]
        elif event["RequestType"] == "Delete":
            logger.info("Request DELETE")

            certificate_arns = event["PhysicalResourceId"]
            certificate_arns_array = certificate_arns.split(",")

            delete_resources(
                thing_name=props["CatFeederThingLambdaCertName"],
                certificate_arn=certificate_arns_array[0],
                stack_name=props["StackName"],
            )

            delete_resources(
                thing_name=props["CatFeederThingControllerCertName"],
                certificate_arn=certificate_arns_array[1],
                stack_name=props["StackName"],
            )
            physical_resource_id = certificate_arns
        else:
            logger.info("Should not get here in normal cases - could be REPLACE")

        send_cfn_response(event, context, "SUCCESS", response_data, physical_resource_id)
    except Exception as e:
        logger.exception(e)
        sys.exit(1)


def send_cfn_response(event, context, response_status, response_data, physical_resource_id):
    response_body = json.dumps({
        "Status": response_status,
        "Reason": "See the details in CloudWatch Log Stream: " + context.log_stream_name,
        "PhysicalResourceId": physical_resource_id,
        "StackId": event['StackId'],
        "RequestId": event['RequestId'],
        "LogicalResourceId": event['LogicalResourceId'],
        "Data": response_data
    })

    headers = {
        'content-type': '',
        'content-length': str(len(response_body))
    }

    requests.put(event['ResponseURL'], data=response_body, headers=headers)

How am I using Custom Resources with AWS CDK?

What I am about to describe in this section can also be applied to a regular CloudFormation template; as a matter of fact, CDK generates a CloudFormation template behind the scenes during the synth phase. The latest version of my IoT Core reference architecture is implemented using AWS CDK: https://chiwaichan.co.nz/blog/2024/02/02/feedmyfurbabies-i-am-switching-to-aws-cdk/

If you want to get straight into deploying the CDK version of the reference architecture, go here: https://github.com/chiwaichan/feedmyfurbabies-cdk-iot

In my CDK code, I provision the Custom Resource Lambda function and the associated IAM Roles and Policies using the Python code below. The line of code "code=lambda_.Code.from_asset("lambdas/custom-resources/iot")" loads the Custom Resource Lambda function code shown earlier.

# IAM Role for Lambda Function
custom_resource_lambda_role = iam.Role(
    self, "CustomResourceExecutionRole",
    assumed_by=iam.ServicePrincipal("lambda.amazonaws.com")
)

# IAM Policies
iot_policy = iam.PolicyStatement(
    actions=[
        "iot:CreateCertificateFromCsr",
        "iot:CreateKeysAndCertificate",
        "iot:DescribeEndpoint",
        "iot:AttachPolicy",
        "iot:DetachPolicy",
        "iot:UpdateCertificate",
        "iot:DeleteCertificate"
    ],
    resources=["*"]  # Modify this to restrict to specific resources
)

# IAM Policies
ssm_policy = iam.PolicyStatement(
    actions=[
        "ssm:PutParameter",
        "ssm:DeleteParameters"
    ],
    resources=[f"arn:aws:ssm:{self.region}:{self.account}:parameter/*"]  # Modify this to restrict to specific parameters
)

logging_policy = iam.PolicyStatement(
    actions=[
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
    ],
    resources=["arn:aws:logs:*:*:*"]
)

custom_resource_lambda_role.add_to_policy(iot_policy)
custom_resource_lambda_role.add_to_policy(ssm_policy)
custom_resource_lambda_role.add_to_policy(logging_policy)

# Define the Lambda function
custom_lambda = lambda_.Function(
    self, 'CustomResourceLambdaIoT',
    runtime=lambda_.Runtime.PYTHON_3_8,
    handler="app.handler",
    code=lambda_.Code.from_asset("lambdas/custom-resources/iot"),
    timeout=Duration.seconds(60),
    role=custom_resource_lambda_role
)


# Properties to pass to the custom resource
custom_resource_props = {
    "EncryptionAlgorithm": "ECC",
    "CatFeederThingLambdaCertName": f"{cat_feeder_thing_lambda_name.value_as_string}",
    "CatFeederThingControllerCertName": f"{cat_feeder_thing_controller_name.value_as_string}",
    "StackName": f"{construct_id}",
}

# Create the Custom Resource
custom_resource = CustomResource(
    self, 'CustomResourceIoT',
    service_token=custom_lambda.function_arn,
    properties=custom_resource_props
)
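The data returned by the Custom Resource Lambda can then be surfaced by the stack. A minimal sketch (assuming CfnOutput is imported from aws_cdk) of exposing the IoT data endpoint and one of the certificate ARNs as stack outputs:

# Read attributes from the "Data" returned by the Custom Resource Lambda's response
endpoint_address = custom_resource.get_att_string("DataAtsEndpointAddress")
certificate_arn_lambda = custom_resource.get_att_string("CertificateArnLambda")

# Publish them as CloudFormation Stack Outputs
CfnOutput(self, "IotDataAtsEndpoint", value=endpoint_address)
CfnOutput(self, "CertificateArnLambda", value=certificate_arn_lambda)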

When you execute a "cdk deploy" using the CLI on the CDK reference architecture, CDK will synthesize from the Python CDK code, a CloudFormation template, and then create a CloudFormation Stack using the synthesized CloudFormation template for you.

For more details on the CDK AWS IoT reference architecture and deployment instructions, please visit my blog: https://chiwaichan.co.nz/blog/2024/02/02/feedmyfurbabies-i-am-switching-to-aws-cdk/

FeedMyFurBabies – I am switching to AWS CDK

· 7 min read
Chiwai Chan
Tinkerer

I have been a bit slack on this Cat Feeder IoT project for the last 12 months or so; there have been many challenges I've faced during that time that prevented me from materialising the ideas I had - many of them sounded a little crazy if you've had a conversation with me in passing, but they are not crazy to me in my crazy mind as I know what I ramble about is technically doable.

Examples of the technical related challenges I had were:

  • CloudFormation: the initial version of this project was implemented using CloudFormation for the IaC; here is the repository containing both the code and deployment instructions. If you read the deployment instructions, you will notice there are a lot of manual steps required - e.g. creating 2 sets of certificates for AWS IoT Core in the AWS Console, and copying and pasting values to and from the CloudFormation Parameters and Outputs - even though at the time I made my best effort to minimise the manual work while writing them. It was not a good example to get up and running, especially if you are new to AWS, Arduino or IoT; I myself struggled at times to deploy my own example.

  • Terraform: I ported the CloudFormation IaC code to Terraform some time last year; you can find it here. Nothing is wrong with Terraform itself; I just keep forgetting to save, or misplacing, my Terraform state files every time I resume this project. In reality I might leverage both Terraform and CDK for the projects/micro-services I create in the future, but it all really depends on what I am trying to achieve at the end of the day.

Deploying the AWS CDK version of this Cat Feeder IoT project

The commands below are the deployment instructions taken from the AWS CDK version of this project; you can find it here: https://github.com/chiwaichan/feedmyfurbabies-cdk-iot

git clone git@github.com:chiwaichan/feedmyfurbabies-cdk-iot.git
cd feedmyfurbabies-cdk-iot
cdk deploy

git remote rm origin
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/feedmyfurbabies-cdk-iot-FeedMyFurBabiesCodeCommitRepo
git push --set-upstream origin main

The commands above are all you need to execute in order to deploy the Cat Feeder project with CDK - assuming you have the AWS CDK and your AWS credentials configured on the machine you are running these commands on. The first group of commands checks out the CDK code and deploys an AWS CodeCommit repository and a CodePipeline pipeline - creating the 1st CloudFormation Stack from a CloudFormation template; the second group of commands pushes the CDK code into the newly created CodeCommit repository, which in turn triggers a CodePipeline execution, and the pipeline deploys the resources for this Cat Feeder IoT project - creating the 2nd CloudFormation Stack from a different CloudFormation template.

The two groups of commands create the 2 CloudFormation Stacks shown in the screenshot below: the stack "feedmyfurbabies-cdk-iot" provisions the CodeCommit repository and CodePipeline - using the 1st CloudFormation template - and the stack "Deploy-feedmyfurbabies-cdk-iot-deployed-service" provisions the resources for this Cat Feeder IoT project - using the 2nd CloudFormation template.

CloudFormation Stacks

FYI, I did not come up with the pattern I just described - one Stack for the pipeline and the other for the AWS resources of this Cat Feeder IoT project; I came across it in one of the AWS online workshops I was using to learn CDK, found it useful, and decided to adopt it for my projects going forward.
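For context, here is a rough sketch of that pattern in CDK Python - this is not the exact code from my repository; the stage class and synth commands are illustrative, while the repository name matches the one in the deployment instructions above:

from aws_cdk import Stack, Stage, pipelines, aws_codecommit as codecommit
from constructs import Construct

class DeployedServiceStage(Stage):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # The Stack declaring the IoT Things, certificates, etc. would be instantiated here, e.g.
        # ServiceStack(self, "feedmyfurbabies-cdk-iot-deployed-service")

class PipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The repository the 2nd group of commands pushes the CDK code into
        repo = codecommit.Repository(
            self, "Repo",
            repository_name="feedmyfurbabies-cdk-iot-FeedMyFurBabiesCodeCommitRepo",
        )

        # A self-mutating pipeline that synthesizes the CDK app on every push to main
        pipeline = pipelines.CodePipeline(
            self, "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.code_commit(repo, "main"),
                commands=["pip install -r requirements.txt", "npx cdk synth"],
            ),
        )

        # The stage that deploys the 2nd Stack containing the Cat Feeder IoT resources
        pipeline.add_stage(DeployedServiceStage(self, "Deploy"))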

Test out the deployed solution

The resources that are relevant to the architecture of this AWS IoT solution are shown in the diagram below.

Deployed resources

There are 2 sets of certificates and 2 sets of AWS IoT Things and policies deployed by the "Deploy-feedmyfurbabies-cdk-iot-deployed-service" Stack:

IoT Certificates

The 1st set of certificates and IoT Thing is hooked up to the AWS Lambda function (Lambda Thing) shown in the diagram. This Lambda function acts as an AWS IoT Thing (it uses the certificates saved in Systems Manager Parameters prefixed with "/feedmyfurbabies-cdk-iot-deployed-service/CatFeederThingLambda") and is fully configured as one, with all the necessary certificates and permissions to send an MQTT message to the "cat-feeder/action" topic in AWS IoT Core; this is a very convenient way to see how one could send MQTT messages to AWS IoT Core using Python, as well as a good way to confirm the deployment was successful by testing it out!
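To give a rough idea of what such a Lambda Thing does under the hood, here is a hedged sketch - not the exact code from the repository - assuming a paho-mqtt based client; the parameter prefix and topic come from this post, while the endpoint, payload and file names are illustrative:

import json
import ssl
import boto3
import paho.mqtt.client as mqtt

PREFIX = "/feedmyfurbabies-cdk-iot-deployed-service/CatFeederThingLambda"

ssm = boto3.client("ssm")

def get_param(name: str, secure: bool = False) -> str:
    return ssm.get_parameter(Name=name, WithDecryption=secure)["Parameter"]["Value"]

def handler(event, context):
    # Write the certificate material retrieved from Parameter Store to /tmp so the MQTT client can read it
    for filename, param, secure in [
        ("certificate.pem", f"{PREFIX}/certificate_pem", False),
        ("private.key", f"{PREFIX}/private_key", True),
        ("root_ca.pem", f"{PREFIX}/amazon_root_ca", False),
    ]:
        with open(f"/tmp/{filename}", "w") as f:
            f.write(get_param(param, secure))

    # paho-mqtt 1.x style constructor; newer versions also take a callback_api_version argument
    client = mqtt.Client()
    client.tls_set(
        ca_certs="/tmp/root_ca.pem",
        certfile="/tmp/certificate.pem",
        keyfile="/tmp/private.key",
        tls_version=ssl.PROTOCOL_TLSv1_2,
    )
    client.connect("your-endpoint-ats.iot.us-east-1.amazonaws.com", 8883)  # your iot:Data-ATS endpoint

    client.loop_start()
    info = client.publish("cat-feeder/action", json.dumps({"action": "feed"}), qos=1)  # illustrative payload
    info.wait_for_publish()
    client.loop_stop()
    client.disconnect()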

Before we invoke the Lambda Thing/function, we need to subscribe to the "cat-feeder/action" topic so that we can see the incoming messages sent by the Lambda function.

Subscribe to IoT Topic

Then we invoke the Lambda function in the AWS Console:

Lambda Result

Make sure you get a green box confirming the MQTT message was sent.

The code in the Lambda is written in Python and it sends a JSON payload (the dictionary variable shown in the code below) to the IoT Topic "cat-feeder/action"

Lambda Code

Now let's go back to AWS IoT Core to confirm we have received the message:

AWS IoT Core MQTT received

We can see that the message received in IoT Core is the dictionary object sent by the Lambda code.

Conclusion

Using CDK does not eliminate all the issues you might encounter when using CloudFormation - I have a future blog lined up on creating and using CloudFormation Custom Resources - because at the end of the day CDK just generates a CloudFormation template and handles the deployment of the CloudFormation Stack for you, without you having to manage the CloudFormation Stacks or templates yourself. The intent of this blog is to demonstrate how little effort is required to deploy an AWS IoT solution using CDK, compared with the same architecture I shared in my GitHub repo 2 years ago, where the instructions used a CloudFormation template deployment that was long and tedious with manual steps.

The ultimate aim of this change in IaC is to just focus on building and iterating!

I do often talk too much in my blogs, but in this instance the instructions to deploy this solution for yourself to try out are very minimal, with the majority of the content focused on the resources deployed: what each resource is for and how they interact with each other.

Extra

You may have noticed that there are 2 sets of certificates deployed in IoT Core and 2 IoT Things in this reference architecture; this is because the 2nd set of certificates (prefixed with "/feedmyfurbabies-cdk-iot-deployed-service/CatFeederThingESP32") and Thing are provisioned purely for you to send MQTT messages to AWS IoT Core from your own IoT hardware devices / micro-controllers.

Your own Thing

If you want to try it out, you will need to use the IoT Core Endpoint specific to your AWS Account and Region; you can either find it in the AWS IoT Core Console, or copy it from the CloudFormation Stack's Output:

IoT Core Endpoint

The Lambda Thing we tested above can be used to send MQTT messages to your own IoT device/micro-controller, as the 2nd set of certificates is configured with the necessary IoT Core Policies to receive the MQTT messages sent to the Topic "cat-feeder/action"; the certificates are also configured with the policies to send MQTT messages to a second IoT Topic called "cat-feeder/states".

Your own Thing Architecture

I have a future blog that will demonstrate how to do this using MicroPython and a Seeed Studio XIAO ESP32C3 - so watch this space.

FeedMyFurBabies – Event-Sourcing using Amazon EventBridge

· 9 min read
Chiwai Chan
Tinkerer

In my previous AWS IoT Cat Feeder project I used a Lambda function as the event handler each time the Seeed Studio AWS IoT 1-Click button was pressed; the Lambda function in turn published an MQTT message to AWS IoT Core, which is received by the Cat Feeder (via a Seeed Studio XIAO ESP32C3 micro-controller) to dispense food into either one of the cat bowls or both (depending on the type of press performed on the IoT button). The long term goal is to integrate the AWS IoT Cat Feeder with the Feed My Fur Babies project.

In this Part 2 of the Feed My Fur Babies blog series, I will be introducing the Event-Sourcing pattern to the https://www.feedmyfurbabies.com architecture; I will describe the benefits of designing an architecture around Event-Sourcing, along with an example implemented using Terraform. I recently learnt Terraform and I now prefer it over the native IaC.

Current state architecture

Here is the current state of the Cat Feeder architecture and the IoT related resources previously deployed in AWS using CloudFormation:

Current State Architecture

The responsibilities of each of the resources deployed in the diagram prior to the introduction of the Event-Sourcing pattern into the architecture are:

  • AWS IoT 1-Click Button: This is an IoT button I physically press to emit an event to dispense food into one or both of the cat bowls; this button can be used anywhere there is a WiFi connection
  • AWS IoT Core Certificates: Certificates are associated with resources and devices that interact with the AWS IoT Core Service, either publishing an MQTT message to an AWS IoT Topic, or receiving an MQTT message from a Topic
  • AWS Lambda - IoT 1-Click Event Handler & sends an MQTT message to an IoT topic: This Lambda function is responsible for handling incoming events created by the AWS IoT 1-Click Button, as well as translating each event into an MQTT message before sending it to an AWS IoT Core Topic. This is the component in the architecture that is the main focus of this blog post; we will describe how this component will be re-architected and decomposed to work in conjunction with the introduction of the Event-Sourcing pattern.
  • AWS IoT Core: This is the IoT service that manages the IoT Topics and Subscriptions to said Topics
  • Seeed Studio XIAO ESP32C3: a micro-controller subscribed to the IoT Topic (the one the Lambda sent MQTT messages to) that will dispense food into 1 or 2 cat bowls when it receives an MQTT message from the Topic

For further details on what role this architecture plays in the Smart IoT Cat Feeder, visit Part 2 of the Smart Cat Feeder Blog Series.

What is Event-Sourcing?

The idea of Event-Sourcing is to capture all events that occur within a system during its lifetime; these events are stored in an immutable ledger in the sequence in which they occurred.

One of the biggest benefits of capturing all the events of a system is that we are able to replay every single event that has ever occurred within the system (partially or as a whole) at a later time - let's say 5 years later - and selectively replay those 5 years worth of events to one or more specific downstream event bus targets. An event bus target could be a new application that was deployed into your production environment 5 years after the first event was created; what this means is that we could hydrate this new application's datastore with 5 years worth of data, as if it had existed when the first event occurred. Also, imagine being able to re-create entire datastores with full history for 100s of applications (where each application has its own datastore) within your system landscape: these datastores could be hydrated with the full history of events stored in the immutable Event-Sourcing ledger, or with the events from the very first event up to a specific event at a given point in time (e.g. half of the entire ledger) - effectively giving you the ability to create any datastore, in any datastore engine, with its data in a state for any given point in time.
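Amazon EventBridge has an archive-and-replay feature that maps closely to this idea. A hedged boto3 sketch of archiving a bus and later replaying a window of events - the bus name comes from later in this post, while the account, region, archive name and replay name are illustrative:

import boto3
from datetime import datetime, timezone

events = boto3.client("events")

bus_arn = "arn:aws:events:us-east-1:123456789012:event-bus/feedmyfurbabies"  # illustrative ARN

# Capture every event sent to the bus into an archive (the "immutable ledger")
events.create_archive(
    ArchiveName="feedmyfurbabies-ledger",
    EventSourceArn=bus_arn,
    RetentionDays=0,  # 0 = retain events indefinitely
)

# Later: replay a window of archived events back onto the same bus (or a filtered set of rules)
events.start_replay(
    ReplayName="hydrate-new-service",
    EventSourceArn="arn:aws:events:us-east-1:123456789012:archive/feedmyfurbabies-ledger",
    EventStartTime=datetime(2023, 1, 1, tzinfo=timezone.utc),
    EventEndTime=datetime.now(timezone.utc),
    Destination={"Arn": bus_arn},
)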

How do we introduce Event-Sourcing into the architecture?

Step 1

We start off with the AWS Lambda function shown in the current state architecture, whose responsibilities are to handle the events received from the AWS IoT 1-Click Button each time it is pressed, as well as to send an MQTT message to an AWS IoT Core Topic in response to each incoming event; essentially it has 2 distinct responsibilities.


Step 2

Next, we decompose the single Lambda function into 2 separate, distinct Lambda functions based on its 2 responsibilities, then chain the 2 Lambda functions together to preserve the original functionality - what we have effectively achieved by doing this is decoupling the 2 responsibilities into 2 separate units of work, resulting in 2 separate compute resources.

The benefits of a decoupled architecture are:

  • Each of the Lambda functions can be implemented in different languages - e.g. one in Python and the other can be in Java
  • Independent release cycles for each of the Lambda functions
  • Changes to either one of the 2 responsibilities can be made independently of each other
  • Each Lambda function can be scaled independently of another

Step 3

In this step we use Amazon EventBridge as the Event-Sourcing store - the immutable ledger we described earlier; we will also leverage EventBridge as a serverless event bus to help us receive, filter, transform, route and deliver events to downstream services (event bus targets). In this instance we will slip EventBridge in between the 2 Lambda functions, and we will store every single IoT event sent by the IoT Button into the immutable ledger.

Benefits of adding EventBridge to the architecture:

  • The IoT 1-Click Lambda handler no longer directly calls the downstream Lambda function - so it is unaware of the downstream targets
  • The IoT events are stored in an immutable ledger in the sequence in which they occurred
  • Prepare the system landscape with the ability to more easily develop micro-services in an Event-Driven architecture using the orchestration pattern

Target State Architecture

Target State Architecture

This is the end result of introducing Event-Sourcing to the architecture; it may not look like much benefit has been gained from adding Amazon EventBridge - in fact one might think we've added more components and in effect created more moving parts and complexity. But I have decided to introduce this very early into the architecture as an investment, so that I am in a position to rapidly build out my micro-service architecture - reaping the rewards from the get go.

Try it out for yourself

I have created a GitHub Repository to deploy a complete working example of the resources shown in the Target State Architecture using Terraform.

I suggest you deploy this to have a play for yourself:

  1. Clone the repository: "git clone git@github.com:chiwaichan/feedmyfurbabies-202303-eventsourcing-using-eventbridge.git"
  2. Setup your Terraform environment
  3. Run: "terraform init && terraform apply"

Also, check out each individual resource deployed by this Terraform code.

Create a test IoT 1-Click event to pass the event end-to-end through all the deployed resources

This is the IoT 1-Click Lambda function handler shown in the AWS Console

1-Click Handler Lambda

Create a test event so we can invoke the Lambda function to simulate an event as if a physical IoT Button is pressed

1-Click Handler Lambda - Test Event

Here we can view the logs for this Lambda function Test invocation

1-Click Handler Lambda - Test Event

The IoT 1-Click Lambda function handler sends an Event to the Custom EventBridge Event Bus named "feedmyfurbabies"

EventBridge Event Bus
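A hedged sketch of what that handler's call to EventBridge might look like - the bus name and source value are the ones used in this post; the detail payload is illustrative:

import json
import boto3

events = boto3.client("events")

def handler(event, context):
    # Translate the IoT 1-Click button event into a domain event on the custom bus
    events.put_events(
        Entries=[
            {
                "EventBusName": "feedmyfurbabies",
                "Source": "com.feedmyfurbabies",
                "DetailType": "feed",
                "Detail": json.dumps({"action": "feed-both-bowls"}),  # illustrative payload
            }
        ]
    )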

The event sent to the Custom Event Bus is matched on the "source" attribute, with a value of "com.feedmyfurbabies", by the Custom Event Bus Rule named "feeds-rule"

EventBridge Event Bus Rule

This Lambda function is the downstream target of the Custom Event Bus Rule that was matched by the event; it is responsible for interpreting the event message and translating it into an MQTT message, which it then sends to the AWS IoT Core Topic "cat-feeder/action" that you can subscribe to using a micro-controller, e.g. a Seeed Studio XIAO ESP32C3.
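For illustration, a minimal sketch of what that downstream target could look like - note this uses the AWS IoT data plane API rather than certificate-based MQTT, so treat it as a simplification rather than the repository's actual implementation; the payload keys are illustrative:

import json
import boto3

# For production use you would typically pass endpoint_url=<your iot:Data-ATS endpoint>
iot_data = boto3.client("iot-data")

def handler(event, context):
    # The EventBridge rule delivers the matched event directly to this Lambda;
    # the fields published earlier arrive under "source" and "detail"
    detail = event.get("detail", {})

    # Translate the domain event into an MQTT message for the feeder
    iot_data.publish(
        topic="cat-feeder/action",
        qos=1,
        payload=json.dumps({"action": detail.get("action", "feed")}),
    )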

Send MQTT Message Lambda

Send MQTT Message Lambda - Monitoring

Here we can see the logs of the event received by the EventBridge Custom Bus Rule

Send MQTT Message Lambda - Logs

In the AWS Console for the AWS IoT Core Service, we can subscribe to Topics to receive an MQTT message right at the end of the downstream services - this is useful if you don't use a micro-controller.

IoT Core - MQTT Client Subscribe Topic

Future State Architecture

Future State Architecture

We end up with an architecture that will enable us to easily add targets to consume events managed by the EventBridge Custom Event Bus, doing so in a way where the IoT 1-Click Lambda function has no knowledge of any newly created subscribers of the Custom Event Bus.

In a future blog I will demonstrate this.

4×4 fun with a bit of Iot, vlogging and Machine Learning – Part 1

· 7 min read
Chiwai Chan
Tinkerer

Jimny

Months prior to the very first lockdown I had gotten myself on the waitlist for a 4x4 Jimny, so I could take it to the beach without worrying about getting beached like I likely would in a regular front wheel drive hatchback; or take it to the bush to climb some hills and see how far I would get without flipping it (badly). Knowing I wouldn't be able to drive it for an indefinitely long time, I decided to cancel the order back then; in some ways I was sad, but in many ways I am happy now that I have had a fair amount of time to have a good think about what else I could do with the Jimny whilst taking it on these adventures.

The time spent mulling has led to another new blog series; this will take a similar build approach to the one I took while building my IoT Cat Feeder, but this time on a larger scale in terms of the number of moving parts and components; also, I get to enjoy myself this time instead of the cats. For those who are unfamiliar with the approach I took in my prior build, I will start the blog series by proposing an idea I have in mind, with a certainty of about 70% of achieving a functional prototype - this is mainly due to not having the background nor experience in most of the skills required to build out this idea.

Generally, I will create a new Part for the Blog Series as I achieve a milestone during the build, where I talk about what was achieved in the milestone and provide the details on how I got there; where possible I will include a public GitHub repository for any code written for the build.

So enough of my rambling.

What is it that I am wanting to build?

As you may have already predicted from the image above, yes, this build will involve a 4x4 - I have a Jimny on the way - and some cloud buzzwords like IoT and Machine Learning.

The goals of this build are to:

  • Develop a solution to capture video recordings of my 4x4 adventures for the entire journey, with 5+ viewpoints around the vehicle in 4K resolution; realistically I might only be able to capture full HD videos, as explained further down this blog.
  • Capture and store the vehicle's telemetries at regular intervals as the vehicle is driven using the CAN Bus protocol, e.g. speed, RPM of the engine and any other states the car is in.
  • Capture other useful data not monitored by the vehicle's CAN Bus, such as GPS co-ordinates and the environment where the vehicle is at during the time - e.g. temperature, humidity, luminosity and many more using hand picked sensors.
  • Ingest in real time all the videos, CAN Bus and sensor data captured into an AWS Datalake

4x4

If I were able to achieve all the goals in the list above, then I would like to also achieve these goals:

  • Create a Digital Twin using AWS TwinMaker of the Jimny and associate all the sensors and devices captured with it
  • Train Machine Learning models using the data ingested into the AWS Datalake
  • Do something with the AWS DeepLens that has been sitting in my drawer for the past year, using the ML models created above - perhaps have it warn me when I am about to do something that will cause the Jimny to land on its roof like last time, by making predictions against an ML model.
  • Have some sort of cloud solution that spits out a video for each of my trips so I can use it to upload to YouTube, with the video displaying some of the telemetries and sensor data captured.

AWS

At the end of the blog series I will conclude whether I was able to build something that was functional, and whether or not I was able to achieve all the goals I have stated in the 2 lists above.

Where am I in terms of progress for this build?

It has been a bit of a challenge to source certain types of electronic components at the moment as some may already know, so I've only managed to source the majority of components required at this point in time.

So far I have sourced the following components:

Starlink RV

I had been wanting one of these for a long time, so when I saw it on special I jumped on it straight away. This is the RV version, which means it can be taken anywhere with me, so I will mount it on a roof rack - one more reason I do not want to put the Jimny on its roof is that it would not be fun to be somewhere with no internet for a long period of time.

Randy

The ideal location to place the Starlink is in a spot with no obstruction and as far away from everything as possible; however, when I tested it out in my tiny back yard - with it sitting in the center surrounded by 2 houses (both 2 stories) and a high fence - I got the following results:

Starlink RV SpeedTest

Although the speed is as fast as you get on one of the slowest fibre plans available in New Zealand, the upload speed is the ultimate factor that determines how many live feeds we can ingest into the Datalake; a single 4K video feed is around 20 Mbps, so 5 feeds would need roughly 100 Mbps of uplink, which does not leave much bandwidth for all of the other data types. Results may be better depending on where I am at the time, and unless Starlink offers symmetrical upload speeds we are forced to use full HD feeds - FYI, download speeds can be as high as 500 Mbps in some parts of the world. One option is to store the data onto a NAS drive via the Home Assistant installed on the LinkStar - a device similar to a Raspberry Pi - then upload the videos into the Datalake after I get home; I would like to avoid this as it is too much admin.

Router / Wifi

Got a few lying around the house doing nothing.

Cameras

I also have some spare cameras to use; the feeds on these can be served using the RTSP protocol. I also have a few ESP32-CAMs I recently purchased, so this build will use a combination of the 2 camera types. Most webcams can be used for this.

Seeed Studio XIAO ESP32C3

I have a bunch of these as they are my go-tos when I build projects using micro-controllers; they are around $5 USD: the Seeed Studio XIAO ESP32C3 is one of, if not the, smallest ESP32s I've come across, and it is more reliable than other ones I've used previously.

Seeed Studio XIAO ESP32C3

I also have various sensors for use that measure:

  • Distance from objects
  • Temperature
  • Sound
  • Humidity
  • Luminosity
  • CO2

Seeed Studio LinkStar with Home Assistant

Seeed Studio LinkStar

I'll be using this to pull the feeds from the cameras, as well as saving the videos onto a NAS if we go down that route.

What is left to source?

  • Seeed Studio WIO ESP32 CAN - this is a kit I'll be using to interface with the CAN Bus to retrieve telemetries from the Jimny.
  • Jimny

Next blog

In the next blog in this series I will take all the components I currently have, link them all up, and detail what I did and how I got there.

Feed My Fur Babies – AWS Amplify and Route53

· 4 min read
Chiwai Chan
Tinkerer

I'm starting a new blog series where I will be documenting my build of a full-stack Web and Mobile application using AWS Amplify to implement both the frontend and the backend, whilst developing dependent downstream Services outside of Amplify using AWS Serverless components to implement a Micro-Service architecture following an Event-Driven design pattern - where we break the application up into smaller consumable chunks that work well together.

Since we are creating a completely new application from scratch, I will also incorporate a vital pattern that will reduce complexity throughout the lifetime of the application: the Event-Sourcing pattern - this pattern ensures every Event ever published within a system is stored in an immutable ledger; this ledger will enable new Data Stores of any Data Store Engine to be created at any given time, by replaying the Events in the ledger from a start date and time to an end date and time.

CQRS is a pattern I will write about in great detail in a blog in the near future; CQRS will enable the ability to create multiple Data Stores with identical data, each Data Store using a unique Data Store Engine instance.

What is AWS Amplify?

Amplify is an AWS Service that provides frontend web and mobile developers with no cloud expertise the ability to build and host full-stack applications on AWS. As a frontend developer, you can leverage it to build and integrate AWS Services and components into your frontend without having to deal with the underlying AWS Services; all Services the frontend is built on top of are managed by AWS Amplify - e.g. no need to manage CloudFormation Stacks, S3 Storage or AWS Cognito.

What will I be doing with Amplify?

My background from a while ago is full-stack application development - I worked in that role for over 10 years, using various frontend/backend frameworks, components and patterns.

I will be building a website called Feed My Fur Babies, where I will provide live video streams of my cats from web cams placed in various spots around my house; the website will also provide users with the ability to feed my cats using internet enabled devices like the IoT Cat Feeders I recently put together, and watch them hoon on their favorite treats. Although I am experienced in building websites from the ground up using AWS Services, I am aiming to build Feed My Fur Babies while leveraging as little of that experience as possible - this is so I am building the website as close as possible to the skillset of Amplify's target demographic, i.e. a developer with only frontend experience.

Current Architecture State

Architecture

Update

Let's talk about what was done to get to the current architecture state.

The first thing I did was buy the domain feedmyfurbabies.com using AWS Route53.

Route53 Feed My Fur Babies

Next, I created a new Amplify App called "Feedme".

Amplify - App

Within the App I created two Hosted Environments: one to host the production environment, the other to host a development environment. Each Hosted Environment is configured to be built and deployed from a specific branch in the shared CodeCommit Repository used to version control the frontend source code.

Amplify - Hosted Environments

Amplify - Hosted Environment - Main

Amplify - Hosted Environment - Dev

AWS DeepRacer

· 9 min read
Chiwai Chan
Tinkerer

This blog is to detail my first experiences with AWS DeepRacer as somebody who knows very little about how AI works under the hood, and at first didn't fully understand the difference between Supervised Learning vs Unsupervised Learning vs Reinforcement Learning when I was writing my first Python code for the "reward_function".

AWS DeepRacer Training

DeepRacer is a Reinforcement Learning based AWS Machine Learning Service that provides a quick and fun way to get into ML by letting you build and train an ML model that can be used to drive around on a virtual, as well as a physical race track.

I'm a racing fan in many ways, whether it is watching Formula 1, racing my mates in go karting or having a hoon on the scooter, so once I found out about the AWS DeepRacer service I had always wanted to dip my toes into it. More than 2 years later I found an excuse to play with it during my preparations for the AWS Certified Machine Learning Specialty exam; I am expecting a question or two on DeepRacer in the exam, so what better way to learn about DeepRacer than to try it out by cutting some Python code.

Goal

Have a realistic one this time and keep it simple: produce an ML model that can drive a car around a few different virtual tracks for a few laps without going off the track.

My Machine Learning background and relevant experience

  • Statistics, Calculus and Physics: was pretty good at these during high school and did ok in Statistics and Calculus during uni.
  • Python: have been writing some Python code in the past couple of years on and off, mainly in AWS Lambda functions.
  • Machine Learning: none
  • Writing code for mathematics: had a job that involved writing complex mathematical equations and tree based algorithms in Ruby and Java for about 7 years

Approach

Code a Python Reward Function that returns a Reinforcement Learning reward value based on the state of the DeepRacer vehicle - the reward can be positive for good behaviour, and negative to discourage the agent (vehicle) from a state that is not going to give us a good race pace. The state of the vehicle is a set of key/values, shown below, that is available to the Python Reward Function at runtime for us to use to calculate a reward value.

    # "all_wheels_on_track": Boolean,        # flag to indicate if the agent is on the track
# "x": float, # agent's x-coordinate in meters
# "y": float, # agent's y-coordinate in meters
# "closest_objects": [int, int], # zero-based indices of the two closest objects to the agent's current position of (x, y).
# "closest_waypoints": [int, int], # indices of the two nearest waypoints.
# "distance_from_center": float, # distance in meters from the track center
# "is_crashed": Boolean, # Boolean flag to indicate whether the agent has crashed.
# "is_left_of_center": Boolean, # Flag to indicate if the agent is on the left side to the track center or not.
# "is_offtrack": Boolean, # Boolean flag to indicate whether the agent has gone off track.
# "is_reversed": Boolean, # flag to indicate if the agent is driving clockwise (True) or counter clockwise (False).
# "heading": float, # agent's yaw in degrees
# "objects_distance": [float, ], # list of the objects' distances in meters between 0 and track_length in relation to the starting line.
# "objects_heading": [float, ], # list of the objects' headings in degrees between -180 and 180.
# "objects_left_of_center": [Boolean, ], # list of Boolean flags indicating whether elements' objects are left of the center (True) or not (False).
# "objects_location": [(float, float),], # list of object locations [(x,y), ...].
# "objects_speed": [float, ], # list of the objects' speeds in meters per second.
# "progress": float, # percentage of track completed
# "speed": float, # agent's speed in meters per second (m/s)
# "steering_angle": float, # agent's steering angle in degrees
# "steps": int, # number steps completed
# "track_length": float, # track length in meters.
# "track_width": float, # width of the track
# "waypoints": [(float, float), ] # list of (x,y) as milestones along the track center

Based on this set of key/values we can get a pretty good idea of the state of the vehicle/agent and what it was getting up to on the track.

So using these key/values we calculate and return a value for the Reward Function in Python. For example, if the value for "is_offtrack" is "true" then this indicates the vehicle has come off the track so we can return a negative value for the Reward Function; also, we might want to amplify the negative reward if the vehicle was doing something else it should not be doing - like steering right into a left turn (steering_angle).

Conversely, we return a positive reward value for good behaviour such as steering straight on a long stretch of the track going as fast as possible within the center of the track.

My approach to coding the Reward Functions was pretty simple: calculate the reward value based on how I myself would physically drive on a go kart track; factor as much into the calculations as possible such as how the vehicle is hitting the apex, and is it hitting it from the outside of the track or the inside; is the vehicle in a good position to take the next turn or two. For each iteration of the code, I train a new model in AWS DeepRacer with it; I normally watch the video of the simulation to pay attention to what could be improved in the next iteration; then we do the whole process all over again.

Within the Reward Function I work out a bunch of sub-rewards such as:

  • steering straight on a long stretch of the track as fast as possible within the center of the track
  • is the vehicle in a good position to take the next turn or two
  • is the vehicle doing something it should not be doing, like steering right into a left turn

These are just some examples of sub-rewards I work out - the list grows as I iterate and improve (or make it worse) with each version of the reward function. At the end of each function I calculate the net reward value as the sum of the weighted sub-rewards; each sub-reward can carry a higher importance than another, so I've taken a weighted approach to the calculation to allow a sub-reward to amplify the effect it has on the net reward value.

Code

Here is the very first version of the Reward Function I coded:

MAX_SPEED = 4.0

def reward_function(params):
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    steering_angle = params['steering_angle']
    speed = params['speed']

    weighted_sub_rewards = []

    half_track_width = track_width / 2.0

    within_percentage_of_center_weight = 1.0
    steering_angle_weight = 1.0
    speed_weight = 0.5
    steering_angle_and_speed_weight = 1.0

    add_weighted_sub_reward(weighted_sub_rewards, "within_percentage_of_center_weight", within_percentage_of_center_weight, get_sub_reward_within_percentage_of_center(distance_from_center, track_width))
    add_weighted_sub_reward(weighted_sub_rewards, "steering_angle_weight", steering_angle_weight, get_sub_reward_steering_angle(steering_angle))
    add_weighted_sub_reward(weighted_sub_rewards, "speed_weight", speed_weight, get_sub_reward_speed(speed))
    add_weighted_sub_reward(weighted_sub_rewards, "steering_angle_and_speed_weight", steering_angle_and_speed_weight, get_sub_reward_steering_angle_and_speed_weight(steering_angle, speed))

    print(weighted_sub_rewards)

    weight_total = 0.0
    numerator = 0.0

    for weighted_sub_reward in weighted_sub_rewards:
        sub_reward = weighted_sub_reward["sub_reward"]
        weight = weighted_sub_reward["weight"]

        weight_total += weight
        numerator += sub_reward * weight

        print("sub numerator", weighted_sub_reward["sub_reward_name"], (sub_reward * weight))

    print(numerator)
    print(weight_total)
    print(numerator / weight_total)

    return numerator / weight_total



def add_weighted_sub_reward(weighted_sub_rewards, sub_reward_name, weight, sub_reward):
    weighted_sub_rewards.append({"sub_reward_name": sub_reward_name, "sub_reward": sub_reward, "weight": weight})

def get_sub_reward_within_percentage_of_center(distance_from_center, track_width):
    half_track_width = track_width / 2.0
    percentage_from_center = (distance_from_center / half_track_width * 100.0)

    if percentage_from_center <= 10.0:
        return 1.0
    elif percentage_from_center <= 20.0:
        return 0.8
    elif percentage_from_center <= 40.0:
        return 0.5
    elif percentage_from_center <= 50.0:
        return 0.4
    elif percentage_from_center <= 70.0:
        return 0.15
    else:
        return 1e-3

# The reward is better if going straight
# steering_angle of -30.0 is max right
def get_sub_reward_steering_angle(steering_angle):
    is_left_turn = True if steering_angle > 0.0 else False
    abs_steering_angle = abs(steering_angle)

    print("abs_steering_angle", abs_steering_angle)

    if abs_steering_angle <= 3.0:
        return 1.0
    elif abs_steering_angle <= 5.0:
        return 0.9
    elif abs_steering_angle <= 8.0:
        return 0.75
    elif abs_steering_angle <= 10.0:
        return 0.7
    elif abs_steering_angle <= 15.0:
        return 0.5
    elif abs_steering_angle <= 23.0:
        return 0.35
    elif abs_steering_angle <= 27.0:
        return 0.2
    else:
        return 1e-3


def get_sub_reward_speed(speed):
    percentage_of_max_speed = speed / MAX_SPEED * 100.0

    print("percentage_of_max_speed", percentage_of_max_speed)

    if percentage_of_max_speed >= 90.0:
        return 0.7
    elif percentage_of_max_speed >= 65.0:
        return 0.8
    elif percentage_of_max_speed >= 50.0:
        return 0.9
    else:
        return 1.0


def get_sub_reward_steering_angle_and_speed_weight(steering_angle, speed):
    abs_steering_angle = abs(steering_angle)
    percentage_of_max_speed = speed / MAX_SPEED * 100.0

    steering_angle_weight = 1.0
    speed_weight = 1.0

    steering_angle_reward = get_sub_reward_steering_angle(steering_angle)
    speed_reward = get_sub_reward_speed(speed)

    return (((steering_angle_reward * steering_angle_weight) + (speed_reward * speed_weight)) / (steering_angle_weight + speed_weight))

Here is a video of one of the simulation runs:

Here is a link to my GitHub repository where I have all of the versions of the reward functions I created: https://github.com/chiwaichan/aws-deepracer/tree/main/models

Conclusion

After a few weeks of training and doing about 20 runs, each run using a different reward function, I did not meet the goal I set out to achieve - getting the agent/vehicle to do 3 laps without coming off the track on a few different tracks. On average each model was only able to race the virtual car around each track for a little over a lap without crashing. At times it felt like I hit a bit of a wall and could not improve the results, and in some instances the model got worse. I need to take a break from this to think of a better approach; the way I am doing it now is improving areas without measuring the progress made in each area.

Next steps

  • Learn how to train a DeepRacer model in SageMaker Studio (outside of the DeepRacer Service) using a Jupyter notebook so I can have more control over how models are trained
  • Learn and perform HyperParameter Optimizations using some of the SageMaker features and services
  • Take a Data and Visualisation driven approach to derive insights into where improvements can be made to the next model iteration
  • Learn to optimise the training, e.g. stop the training early when the model is not performing well
  • Sit the AWS Certified Machine Learning Specialty exam

Breaking Down Monolithic Subnets

· 6 min read
Chiwai Chan
Tinkerer

As my knowledge and experience of Cloud networking has grown from designing network architectures over time, and more recently from reviewing client network architectures, I've come to realise and appreciate the need to design a proper network architecture that includes long-term considerations as early as possible - especially before a project begins, and definitely before any resources are deployed into any VPCs.

In the past, when I first started in IT, I didn't have much of an interest in how a network was configured in an office, or how routing to a publicly accessible on-premise hosted application was set up; this was mainly due to understanding very little about networking, and also because the networking was looked after by somebody else. It was only when I started using Cloud services that I was able to learn networking much more easily, perhaps because I could design, build and play around with my own dedicated isolated network in minutes without worrying about breaking things.

In this blog we will illustrate what a Monolithic Subnet looks like and the problems that come along with it, and illustrate one way to break a Monolithic Subnet down into multiple smaller Subnets - showing how solutions can benefit from designing workloads to leverage dedicated individual Subnets, where each Subnet is for a set of resources of a common type or grouping. VPCs are also susceptible to becoming monoliths, so I will write a separate blog about that in the future. Workloads should always be deployed across multiple AZs for high-availability, but to make this blog more digestible we will talk about a one-AZ architecture.

We often see systems evolve over time whether they are applications or databases, that get to a point where they are too big to run, maintain or work with: these systems are commonly known as Monolithic Applications or Monolithic Databases.

Networks and the constructs of networking can also be susceptible to becoming a monolith. Early signs and symptoms could be: 1) CIDR block based rules in Security Groups or NACLs that encompass IP addresses of resources that should not be opened up to - a cause of this may be the number of different groups of resources within a Subnet, where the IP addresses of each resource are non-deterministic, making it difficult to design a minimum set of CIDR block values for rules that satisfy the least privilege principle; 2) conversely, CIDR block rules in Security Groups or NACLs with too many granular rules may also be a sign of a Monolithic Subnet - the common symptom is rule quotas being reached too often.

Let’s take the example of 4 groups of compute resources, each group has a different network traffic usage behaviour than the other groups – group #1 communicates with resources in VPC X and VPC Y, while group #3 communicates with resources in VPC Y and VPC Z.

1

This is an example of how the groups of resources could be represented in a Subnet ordered by their Private IP address:

2

Often CIDR block rules that are too broad are used, which opens access to resources that should not be included. The following rules also allow resources in Groups #2 and #4 to communicate with resources in VPCs X, Y and Z, when they are not expected to interact with resources in any of those VPCs.

3

Conversely, implementing granular rules to follow the best practice of least privilege may lead to the quotas of Security Groups and NACLs being reached; in any case, least privilege should be followed.

[Figure 4]

Solution

The solution is to break the groups of resources down into a Subnet for each Group. There is no hard rule that states a VPC must contain X number of tiers of Subnets - Subnets are used to group similar resources with similar network traffic patterns, so if there are many groupings of resources then it is perfectly fine to create as many tiers of Subnets as you need – one Subnet for each Grouping.

[Figure 5]

As a result, rules are more specific and targeted, which makes it straightforward to implement the least privilege principle.

[Figure 6]
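To make the idea concrete, below is a minimal AWS CDK (Python) sketch of a single-AZ VPC that carves out one dedicated Subnet per resource group; the VPC CIDR, group names and subnet sizes are purely illustrative and not taken from any real deployment.

```python
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct


class GroupedSubnetsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # One VPC, one AZ (to match this blog), and one dedicated Subnet per resource group.
        ec2.Vpc(
            self,
            "WorkloadVpc",
            ip_addresses=ec2.IpAddresses.cidr("10.0.0.0/16"),  # illustrative CIDR
            max_azs=1,
            subnet_configuration=[
                ec2.SubnetConfiguration(name="group-1", subnet_type=ec2.SubnetType.PRIVATE_ISOLATED, cidr_mask=24),
                ec2.SubnetConfiguration(name="group-2", subnet_type=ec2.SubnetType.PRIVATE_ISOLATED, cidr_mask=24),
                ec2.SubnetConfiguration(name="group-3", subnet_type=ec2.SubnetType.PRIVATE_ISOLATED, cidr_mask=24),
                ec2.SubnetConfiguration(name="group-4", subnet_type=ec2.SubnetType.PRIVATE_ISOLATED, cidr_mask=24),
            ],
        )


app = App()
GroupedSubnetsStack(app, "GroupedSubnetsStack")
app.synth()
```

Because each group now lives in its own Subnet with its own route table and NACL, the broad CIDR rules from the earlier diagrams can be replaced with rules scoped to a single /24.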

When groups of resources are broken down from a monolithic Subnet into multiple Subnets, other benefits are created as a by-product:

  • With each resource group deployed in a separate dedicated Subnet early on, you will likely reduce or eliminate (a good solution is to not have the problem to begin with) future re-work that combats increased architecture complexity, which may often require re-deploying resources into new Subnets - to me this is unnecessary effort if we can avoid it, especially for resources that require a lot of manual effort to deploy
  • NACL rules are broken down and grouped by their respective resources and Subnet, which leads to fewer rules in each NACL – reducing the possibility of reaching the quotas
  • When all resources are deployed within one Subnet, only Security Groups can be leveraged to implement firewall rules; when resources are broken down into multiple Subnets, NACLs can be leveraged as well
  • Security posture is improved, because certain traffic never enters a Subnet from adjacent Subnets if NACL rules are implemented appropriately
  • Depending on how granular you break down your monolithic Subnets - if it is a very fine break down - you are setting up your network architecture to be in a position to implement tighter controls, gearing towards a micro-segmentation network architecture

This solution complements the use of networking solutions in other blogs I have written:

Leveraging AWS Prefix Lists

· 8 min read
Chiwai Chan
Tinkerer

AWS VPC Prefix List is a feature of AWS Networking that has been around for a short while; however, I have yet to see it leveraged to its full potential, and more often than not I have not seen it used at all.

There are 2 types of Prefix Lists:

  • AWS-managed Prefix Lists: as the name indicates these lists are managed by AWS, and they are used to maintain a set of IP address ranges for AWS services, e.g. S3, DynamoDB and CloudFront.
  • Customer-managed Prefix Lists: these are created and maintained by anyone who has access to the AWS Console, AWS APIs or AWS SDKs. This is what we will be focusing on.

In this blog we will go into:

  • What Customer-managed Prefix Lists are
  • How they can be leveraged by AWS Security Groups
  • How they can be leveraged by AWS Subnet Route Tables
  • How they can be leveraged by AWS Transit Gateway Route Tables
  • Considerations

An AWS VPC Customer-managed Prefix List is a great tool to have available as it provides the ability to track and maintain a list of CIDR block values, which can then be referenced by other AWS Networking components in their rules or route tables. Each Prefix List supports either IPv4 or IPv6 based addresses, and an expected Max Entries value must be defined for the list; the number of entries in the list cannot exceed the Max Entries.

You can use a Prefix List to maintain a list of CIDR blocks of Subnets or VPCs, or track a list of similar IP addresses based on a grouping of your choice, e.g. EC2 instances with a certain function - you can even track CIDR values of Subnets, VPCs and EC2 instances within the same list.

I have a blog on how to automatically maintain a list of EC2 instances' Private IP addresses based on a Tag set against an EC2 instance: Maintain a Prefix List of EC2 Private IP Addresses using EventBridge

Let's create a Prefix List in the AWS Console

[Figure 1]

[Figure 2]

[Figure 3]
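If you prefer scripting this instead of clicking through the Console, a Customer-managed Prefix List can also be created with the Boto3 SDK's create_managed_prefix_list call; this is a minimal sketch, and the list name, Max Entries and CIDR entries below are illustrative only.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an IPv4 customer-managed Prefix List with room for up to 10 entries.
response = ec2.create_managed_prefix_list(
    PrefixListName="admin-machines",  # illustrative name
    AddressFamily="IPv4",
    MaxEntries=10,
    Entries=[
        {"Cidr": "10.1.0.0/24", "Description": "Admin machine A"},
        {"Cidr": "10.2.0.0/24", "Description": "Admin machine B"},
    ],
)

# The returned Prefix List ID (pl-...) is what Security Groups and route tables reference.
print(response["PrefixList"]["PrefixListId"])
```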

Prefix List – Security Group Reference

A Customer-managed Prefix List is a great option for centrally managing and tracking a list of CIDR blocks allowed to ingress an ENI, by referencing the Prefix List in Security Groups; a single Prefix List instance can be referenced by one or many Security Groups within the same account or cross-account.

Let's take a look at an example

This is especially useful in scenarios where you have a fleet of EC2 instances for which you would like to allow the same network traffic sources to ingress on Port 22 to perform administration tasks; this fleet of EC2 instances could be scattered across multiple VPCs, and may even be scattered across multiple AWS accounts.

[Figure 4]

Often, we add a new Source CIDR to all Security Groups when we allow a new machine to perform administration tasks on the same fleet of EC2 instances, or remove (or not, when we forget) a CIDR Source when a machine is retired. In the past we would have modified each and every one of these Security Groups.

Here is how we can leverage Customer-managed Prefix Lists with Security Groups:

[Figure 5]

Here, with the same Security Group rules outcome, we externalise the CIDR values into a Prefix List and reference the list in all 3 Security Groups; where the Security Groups span multiple AWS accounts, the Prefix List can be shared with those accounts using Resource Access Manager (RAM). Now we can allow a new machine to perform administration tasks across the entire fleet of EC2 instances by adding a new CIDR Source to a single location, and conversely we can remove a machine by deleting a CIDR Source. There is also the added benefit of reduced effort: because the CIDR values are maintained in a single location, there is no need to identify which Security Groups have a rule for an IP address when removing access across the entire fleet.
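For completeness, here is a hedged Boto3 sketch of what referencing a Prefix List from a Security Group ingress rule looks like; the Security Group and Prefix List IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH from every CIDR currently maintained in the Prefix List,
# instead of hard-coding individual CIDR sources in the Security Group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder Security Group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "PrefixListIds": [
                {"PrefixListId": "pl-0123456789abcdef0", "Description": "Admin machines"}
            ],
        }
    ],
)
```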

Prefix List – Subnet Route Table Reference

Another way to use Prefix Lists is to centrally manage and track a list of CIDR block destinations that route traffic out of a Subnet's Route Table to the same Target; a Prefix List can be referenced by one or many Subnet Route Tables within the same account, or cross-account using RAM.

Let's take a look at an example

Below, we have a scenario with 3 different Route Tables across two VPCs, with each Route Table having the same Transit Gateway Target for the same set of Destinations, as well as the same Destinations routed to the respective Egress Only Internet Gateway (EIGW) for their VPC.

[Figure 6]

Here is how we can leverage Customer-managed Prefix Lists with Subnet Route Tables:

[Figure 7]

We have externalised the Destination CIDR values of the 3 Route Tables into 2 separate Prefix Lists: the 1st Prefix List contains the CIDR block values of Destinations routed to the EIGW in their respective VPC; the 2nd Prefix List contains the CIDR block values of Destinations routed to the same Transit Gateway instance that all the VPCs are attached to.
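Below is a minimal Boto3 sketch of adding a route that uses a Prefix List as its destination and a Transit Gateway as its target; all resource IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Route every CIDR maintained in the Prefix List to the Transit Gateway,
# rather than maintaining one static route per Destination CIDR.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",            # placeholder Subnet Route Table ID
    DestinationPrefixListId="pl-0123456789abcdef0",  # placeholder Prefix List ID
    TransitGatewayId="tgw-0123456789abcdef0",        # placeholder Transit Gateway ID
)
```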

Prefix List – Transit Gateway Route Table Reference

Lastly, in a Transit Gateway Route Table you have the option either to define static routes or to have routes dynamically propagated from a Transit Gateway attachment. You also have the option to use a Prefix List for routing.

Here is how we can leverage Customer-managed Prefix Lists with Transit Gateway Route Tables:

[Figure 8]

To reference a Prefix List in a Transit Gateway Route Table, you add it under the "Prefix list references" section:

[Figure 9]
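And here is a minimal Boto3 sketch of creating the same Prefix List reference programmatically; all resource IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Send every CIDR in the Prefix List to a specific Transit Gateway attachment.
ec2.create_transit_gateway_prefix_list_reference(
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",      # placeholder TGW route table ID
    PrefixListId="pl-0123456789abcdef0",                         # placeholder Prefix List ID
    TransitGatewayAttachmentId="tgw-attach-0123456789abcdef0",   # placeholder attachment ID
)
```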

Considerations

  • The aggregated total Max Entries of all Prefix Lists referenced by a resource (e.g. a Security Group) is counted towards the resource's quota - not the aggregated total of actual entries across those Prefix Lists. Be conscious of the Prefix Lists you reference in a resource: does the resource referencing the Prefix List require all the CIDR values offered in the list? If not, you are not using Prefix Lists economically.
  • If the same Prefix List instance is referenced by multiple AWS resources then consistency is enforced - operational effort is reduced because there are fewer changes to make, since values no longer need to be changed in multiple locations.
  • Before you add or remove a CIDR value from a Prefix List, consider the flow-on impact it may have on the downstream resources that reference the list, as you may inadvertently terminate some traffic flow, or worse, open up traffic to sources you don't intend to.

Conclusion

One of the things I have noticed during my short time in consulting so far is that organising Cloud resources (in particular Networking) and structuring them correctly and consistently across multiple environments sets up a solid foundation for organisations in the long term; however, it is often an area that is overlooked and is only paid attention to once the rate of innovation has slowed down due to complexities and inconsistencies. Prefix Lists are a great option to have to improve consistency and operational efficiency.

Here I have only detailed the basic use of Customer-managed Prefix Lists, but in my other blog I have a more advanced use case leveraging Prefix Lists: Work-around for cross-account Transit Gateway Security Group Reference

This solution complements the use of networking solutions in other blogs I have written: