13 posts tagged with "IoT"
Let's build an Iron Man Suit with AWS IoT
AWS IoT Core – Iron Man – Part 1
I bought the Iron Man Mark 3 3D-printable model designs by Walsh3D from https://www.wf3d.shop/products/ironman-mark-3-suit-3d-printable-model
Printed these so far:
FeedMyFurBabies – Storing Historical AWS IoT Core MQTT State data in Amazon Timestream
In the code examples I have shared in the past, when I sent and received IoT messages and states to and from AWS IoT Core Topics, I only implemented subscribers that performed some functionality when an MQTT message was received on a Topic. That was useful for feeding my FurBaby whenever the Cat Feeder was triggered to drop Temptations into the bowls; however, we never kept a record of the feeds or the state of the Cat Feeder in any form of data store over time - this meant we did not track when or how many times food was dropped into a bowl.
In this blog, I will demonstrate how to store the data from the MQTT messages sent to AWS IoT Core by ingesting it into an Amazon Timestream database; Timestream is a fully managed, serverless time-series database, so we can leverage it without worrying about maintaining the database infrastructure.
Architecture
In this architecture we have two AWS IoT Core Topics, where each IoT Topic has an IoT Rule associated with it that sends the data from every MQTT message received on that Topic - there is the ability to filter the messages, but we are not using it here - and that data is ingested into a corresponding Amazon Timestream table.
Deploying the reference architecture
git clone git@github.com:chiwaichan/feedmyfurbabies-cdk-iot-timestream.git
cd feedmyfurbabies-cdk-iot-timestream
cdk deploy
git remote rm origin
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/feedmyfurbabies-cdk-iot-timestream-FeedMyFurBabiesCodeCommitRepo
git push --set-upstream origin main
Here is a link to my GitHub repository where this reference architecture is hosted: https://github.com/chiwaichan/feedmyfurbabies-cdk-iot-timestream
Simulate an IoT Thing to Publish MQTT Messages to IoT Core Topic
In the root directory of the repository is a script that simulates an IoT Thing by constantly publishing MQTT messages to the "cat-feeder/states" Topic. The script relies on the AWS CLI being installed on your machine with a default profile, so ensure the Access Keys used by the default profile have permission to call "iot:Publish".
It sends a random number for the "food_capacity" property, ranging from 0 to 100, to represent the percentage of food remaining in a cat feeder, and a value for the "device_location" property, since we are scaling out the number of cat feeders placed around the house. Be sure to send the same JSON structure in your MQTT messages if you decide not to use the provided script to send messages to the Topic.
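If you would rather publish a test message yourself instead of running the provided script, here is a minimal sketch using boto3 (not the script's actual code) - the Topic name and JSON structure match what is described above, while the "device_location" value is a made-up example:

import boto3
import json
import random

# Publish a single state message to the same Topic the simulator script uses
iot_data = boto3.client("iot-data")
iot_data.publish(
    topic="cat-feeder/states",
    payload=json.dumps({
        "food_capacity": random.randint(0, 100),  # percentage of food remaining
        "device_location": "lounge",  # hypothetical location value
    }),
)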
Query the data stored in the Amazon Timestream Database/Table
Now let's jump into the AWS Console, go to the Timestream Service, and open the "catFeedersStates" Table; then click on "Actions" and select the "Query table" option to go to the Query editor.
The Query editor will show a default query statement; click "Run" and you will see in the Query results the data from the MQTT messages generated by the script, ingested from the IoT Topic "cat-feeder/states".
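You can also run the same kind of query programmatically; below is a minimal sketch using boto3, where the database name is an assumption - substitute the names created in your account by the CDK Stack:

import boto3

# Fetch the 10 most recent cat feeder state records (database name is assumed)
timestream_query = boto3.client("timestream-query")
response = timestream_query.query(
    QueryString='SELECT * FROM "catFeeders"."catFeedersStates" ORDER BY time DESC LIMIT 10'
)
for row in response["Rows"]:
    print(row)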
FeedMyFurBabies – Send and Receive MQTT messages between AWS IoT Core and your micro-controller – I am switching from Arduino CPP to MicroPython
Recently I switched my Cat Feeder project's IaC to AWS CDK, in favour of increasing my focus and productivity on building and iterating, rather than constantly mucking around with infrastructure every time I resume my project after a break - which is a rare occurrence these days.
Just as with coding IoT microcontrollers such as the ESP32s, I want to get straight back into building at every opportunity I get; so I am also switching away from Arduino-based microcontroller development written in C++. I don't have a background in C++, and to be honest this is the aspect I struggled with the most, because I tend to forget things after not touching them for 6 months or so.
So I am switching to MicroPython to develop the logic for all my IoT devices going forward. This means I get to use Python - a programming language I work with frequently, so there is less chance of me being forgetful as long as I use it at least once a month. MicroPython is a lean and efficient implementation of the Python 3 programming language that includes a subset of the Python standard library and is optimized to run on microcontrollers and in constrained environments - a good fit for IoT devices such as the ESP32!
What about all the Arduino hardware and components I already invested in?
Good news: MicroPython is supported on all ESP32 devices - based on the ones I myself have purchased; all I need to do to each ESP32 device is flash it with new firmware - if you are impatient, you can skip down to the flashing-the-firmware section below. When I first started with Arduino, MicroPython was already available, but that was 2 years ago and there was not as much good blog and tutorial content out there as there is today; at the time I couldn't work out how to control components such as sensors, servos and motors as well as I could with C++-based coding using Arduino. Nowadays there is far more content to learn from, and I've learnt enough (by PoCing individual components) to switch to MicroPython. As far as I understand it, any components you have for Arduino can be used in MicroPython, provided there is a library out there that supports them - and if there isn't, you can always write your own!
What's covered in this blog?
By the end of this blog, you will be able to send and receive MQTT messages from AWS IoT Core using MicroPython; I will also cover the steps involved in flashing a MicroPython firmware image onto an ESP32C3. Although this blog focuses on an ESP32 example, it can be applied to micro-controllers of any brand or flavour, provided the micro-controller you are using supports MicroPython.
Flashing the MicroPython firmware onto a Seeed Studio XIAO ESP32C3
The following instructions work for any generic ESP32C3 device!
Download the latest firmware from micropython.org
https://micropython.org/download/ESP32_GENERIC_C3/
Next, I connected my ESP32C3 to my Mac and ran the following command to find the name of the device port (on Linux, the device usually shows up as something like /dev/ttyUSB0 instead):
ls /dev/tty.*
My ESP32C3 is named "/dev/tty.usbmodem142401", the name for your ESP32C3 may be different.
Next, install esptool onto your computer (e.g. with "pip install esptool"), then run the following commands to flash the MicroPython firmware onto the ESP32C3, using the bin file you've just downloaded.
esptool.py --chip esp32c3 --port /dev/tty.usbmodem142401 erase_flash
esptool.py --chip esp32c3 --port /dev/tty.usbmodem142401 --baud 460800 write_flash -z 0x0 ESP32_GENERIC_C3-20240105-v1.22.1.bin
It should look something like this when you run the commands.
Install Thonny and run it. Then go to Tools -> Options, to configure the ESP32C3 device in Thonny to match the settings shown in the screenshot below.
If everything went well, you should see these 2 sections in Thonny: "MicroPython Device" and "Shell", if not then try clicking on the Stop button in the top menu.
AWS IoT Core Certificates and Keys
In order to send MQTT messages to an AWS IoT Core Topic, or to receive messages from a Topic in the other direction, you will need a certificate and private key pair for your micro-controller, as well as the AWS IoT Endpoint specific to your AWS Account and Region.
It's great if you have those with you - you can skip to the next section. If not, don't worry, I've got you covered: in a past blog I shared a reference architecture, accompanied by a GitHub repository, showing how to deploy resources for an AWS IoT Core solution using AWS CDK. Follow that blog to the end and you will have a certificate and key to use for this MicroPython example; the CDK Stack deploys all the necessary resources and policies in AWS IoT Core to enable you to both send and receive MQTT messages on two separate IoT Topics.
Reference AWS IoT Core Architecture: https://chiwaichan.co.nz/blog/2024/02/02/feedmyfurbabies-i-am-switching-to-aws-cdk/
Upload the MicroPython Code to your device
Now let's upload the MicroPython code to your micro-controller, and prepare the IoT Certificate and Key used to authenticate the micro-controller, so it can send and receive MQTT messages to and from IoT Core.
Clone my GitHub repository that contains the MicroPython example code to publish and receive MQTT message with AWS IoT Core: https://github.com/chiwaichan/feedmyfurbabies-micropython-iot
It should look something like this.
Copy your Certificate and Key into the respective files shown in the above screenshot; otherwise, if you are using the Certificate and Key from my reference architecture, you should use the 2 Systems Manager Parameter Store values created by the CDK Stack.
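If you deployed my reference architecture, a sketch like the one below can pull those values out of Parameter Store; the parameter names follow the /{stack_name}/{thing_name}/... convention used by the CDK Stack's Custom Resource, and the stack and thing names shown are placeholders for your own:

import boto3

ssm = boto3.client("ssm")
# Placeholder stack and thing names - use the ones from your deployment
prefix = "/FeedMyFurBabiesStack/CatFeederThingController"
certificate_pem = ssm.get_parameter(Name=f"{prefix}/certificate_pem")["Parameter"]["Value"]
# The private key is stored as a SecureString, so it must be decrypted on read
private_key = ssm.get_parameter(Name=f"{prefix}/private_key", WithDecryption=True)["Parameter"]["Value"]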
Next, we convert the Certificate and Key to DER format - converting the files to DER turns them into a binary format and makes the files more compact, which is especially necessary when we use them on small devices like the ESP32s.
In a terminal go to the certs directory and run the following commands to convert the certificate.pem and private.key files into DER format.
openssl rsa -in private.key -out key.der -outform DER
openssl x509 -in certificate.pem -out cert.der -outform DER
You should see two new files with the DER extension appear in the directory if all goes well; if not, you probably need to install openssl.
In Thonny, in the Files explorer, navigate to the GitHub repository's root directory and open the main.py file. Fill in the values for the variables shown in the screenshot below to match your environment; if you are using my AWS CDK IoT reference architecture, you are only required to fill in the WiFi details and the AWS IoT Endpoint specific to your AWS Account and Region.
Select both the certs folder and main.py in the Files explorer, then right click and select "Upload to /" to upload the code to your micro-controller; the files will appear in the "MicroPython Device" file explorer.
This is the moment we've been waiting for: let's run the main.py Python script by clicking on the green Play icon.
If all goes well you should see some output in the Shell section of Thonny.
The code in the main.py file generates a random number for the food_capacity percentage property in the MQTT message; you can customise the message to fit your use case.
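Conceptually, the publishing logic looks something like this simplified sketch - it is not the exact code from the repository, and it assumes mqtt_client is an already-connected umqtt.simple.MQTTClient:

import json
import random

def publish_state(mqtt_client):
    # Build a state message with a random food capacity percentage
    message = {
        "food_capacity": random.randint(0, 100),
        "device_location": "lounge",  # hypothetical location value
    }
    mqtt_client.publish("cat-feeder/states", json.dumps(message))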
But let's verify it is actually received by AWS IoT Core.
Alright, let's go the other way and see if we can receive MQTT messages from AWS IoT Core using the other Topic, "cat-feeder/action", which we subscribed to in the MicroPython code.
Let's go back to the AWS Console and use the MQTT test client to publish a message.
In the Thonny Shell we can see the message "Hello from AWS IoT console" sent from the AWS IoT Core side being received by the micro-controller.
FeedMyFurBabies – Using Custom Resources in AWS CDK to create AWS IoT Core Keys and Certificates
In a previous blog I talked about switching from CloudFormation templates to AWS CDK as my preference for infrastructure as code for provisioning my AWS IoT Core resources; I mentioned at the time that provisioning resources using AWS CDK would improve my productivity by letting me focus on iterating and building. Although I switched to CDK for the reasons I described in that blog, there are some CloudFormation limitations that cannot be addressed by switching to CDK alone.
In this blog I will talk about CloudFormation Custom Resources:
- What are CloudFormation Custom Resources?
- What is the problem I am trying to solve?
- How will I solve it?
- How am I using Custom Resources with AWS CDK?
CloudFormation Custom Resources allow you to write custom logic, using AWS Lambda functions, to provision resources - whether those resources live in AWS (you might ask why not just use CloudFormation or CDK: keep reading), on-premises or in other public clouds. These Custom Resource Lambda functions are configured within a CloudFormation template and are hooked into a CloudFormation Stack's lifecycle during the create, update and delete phases - to allow these lifecycle stages to happen, the logic must be implemented in the Lambda function's code.
What is the problem I am trying to solve?
My AWS IoT Core reference architecture relies on two sets of certificates and private keys; they are used to authenticate each Thing device connecting to AWS IoT Core - this ensures that only trusted devices can establish a connection.
In the CloudFormation template version of my reference architecture, the deployment instructions had you manually create 2 Certificates in the AWS Console for the IoT Core service, because CloudFormation doesn't directly support the creation of certificates for AWS IoT Core; as shown in the screenshot below.
There is nothing wrong with creating the certificates manually in the AWS Console when you are trying out my example for the purpose of learning, but it would be best to be able to deploy the entire set of resources using infrastructure as code, so we can achieve consistent, repeatable deployments with as minimal effort as possible. If you are someone completely new to AWS, coding and IoT, my deployment instructions would be very overwhelming, and the chances of you successfully deploying a fully functional example would be very low.
How will I solve it?
If you got this far and actually read what was written up to this point, you have probably guessed that the solution is Custom Resources: so let's talk about how the problem described above was solved.
So we know Custom Resources are part of the solution, but one important thing to understand is that, even though there isn't the ability to create the certificates directly using CloudFormation, there is support for creating them using the AWS SDK Boto3 Python library: create_keys_and_certificate.
So essentially, we are able to create the AWS IoT Core certificates using CloudFormation (in an indirect way), but it requires the help of Custom Resources (a Lambda function) and the AWS Boto3 Python SDK.
The Python code below is what I have in the Custom Resource Lambda function; it demonstrates the use of the Boto3 SDK to create the AWS IoT Core Certificates. As a bonus, I am leveraging the Lambda function to save the Certificates into the AWS Systems Manager Parameter Store, which makes things much simpler by centralising the Certificates in a single location, without the engineer deploying this reference architecture having to manually copy/paste/manage the Certificates - as I forced readers to do in the original version of this reference architecture deployment. The code below also manages the lifecycle of the Certificates when the CloudFormation Stack is deleted, by deleting the Certificates it created during the create phase of the lifecycle.
The overall flow to create the certificates is: create a CloudFormation Stack --> invoke the Custom Resource --> invoke the Boto3 IoT "create_keys_and_certificate" API --> save the certificates in Systems Manager Parameter Store.
import os
import sys
import json
import logging as logger
import requests
import boto3
from botocore.config import Config
from botocore.exceptions import ClientError
import time
logger.getLogger().setLevel(logger.INFO)
def get_aws_client(name):
return boto3.client(
name,
config=Config(retries={"max_attempts": 10, "mode": "standard"}),
)
def create_resources(thing_name: str, stack_name: str, encryption_algo: str):
c_iot = get_aws_client("iot")
c_ssm = get_aws_client("ssm")
result = {}
# Download the Amazon Root CA file and save it to Systems Manager Parameter Store
url = "https://www.amazontrust.com/repository/AmazonRootCA1.pem"
response = requests.get(url)
    if response.status_code == 200:
        amazon_root_ca = response.text
    else:
        # Without the Root CA we cannot provide a complete set of files, so fail early
        logger.error(f"Failed to download Amazon Root CA file. Status code: {response.status_code}")
        sys.exit(1)
try:
# Create the keys and certificate for a thing and save them each as Systems Manager Parameter Store value later
response = c_iot.create_keys_and_certificate(setAsActive=True)
certificate_pem = response["certificatePem"]
private_key = response["keyPair"]["PrivateKey"]
result["CertificateArn"] = response["certificateArn"]
except ClientError as e:
logger.error(f"Error creating certificate, {e}")
sys.exit(1)
# store certificate and private key in SSM param store
try:
parameter_private_key = f"/{stack_name}/{thing_name}/private_key"
parameter_certificate_pem = f"/{stack_name}/{thing_name}/certificate_pem"
parameter_amazon_root_ca = f"/{stack_name}/{thing_name}/amazon_root_ca"
# Saving the private key in Systems Manager Parameter Store
response = c_ssm.put_parameter(
Name=parameter_private_key,
Description=f"Certificate private key for IoT thing {thing_name}",
Value=private_key,
Type="SecureString",
Tier="Advanced",
Overwrite=True
)
result["PrivateKeySecretParameter"] = parameter_private_key
# Saving the certificate pem in Systems Manager Parameter Store
response = c_ssm.put_parameter(
Name=parameter_certificate_pem,
Description=f"Certificate PEM for IoT thing {thing_name}",
Value=certificate_pem,
Type="String",
Tier="Advanced",
Overwrite=True
)
result["CertificatePemParameter"] = parameter_certificate_pem
# Saving the Amazon Root CA in Systems Manager Parameter Store,
# Although this file is publically available to download, it is intended to provide a complete set of files to try out this working example with as much ease as possible
response = c_ssm.put_parameter(
Name=parameter_amazon_root_ca,
Description=f"Amazon Root CA for IoT thing {thing_name}",
Value=amazon_root_ca,
Type="String",
Tier="Advanced",
Overwrite=True
)
result["AmazonRootCAParameter"] = parameter_amazon_root_ca
except ClientError as e:
logger.error(f"Error creating secure string parameters, {e}")
sys.exit(1)
try:
response = c_iot.describe_endpoint(endpointType="iot:Data-ATS")
result["DataAtsEndpointAddress"] = response["endpointAddress"]
except ClientError as e:
logger.error(f"Could not obtain iot:Data-ATS endpoint, {e}")
result["DataAtsEndpointAddress"] = "stack_error: see log files"
return result
# Delete the resources created for a thing when the CloudFormation Stack is deleted
def delete_resources(thing_name: str, certificate_arn: str, stack_name: str):
c_iot = get_aws_client("iot")
c_ssm = get_aws_client("ssm")
try:
# Delete all the Systems Manager Parameter Store values created to store a thing's certificate files
parameter_private_key = f"/{stack_name}/{thing_name}/private_key"
parameter_certificate_pem = f"/{stack_name}/{thing_name}/certificate_pem"
parameter_amazon_root_ca = f"/{stack_name}/{thing_name}/amazon_root_ca"
c_ssm.delete_parameters(Names=[parameter_private_key, parameter_certificate_pem, parameter_amazon_root_ca])
except ClientError as e:
logger.error(f"Unable to delete parameter store values, {e}")
try:
# Clean up the certificate by firstly revoking it then followed by deleting it
c_iot.update_certificate(certificateId=certificate_arn.split("/")[-1], newStatus="REVOKED")
c_iot.delete_certificate(certificateId=certificate_arn.split("/")[-1])
except ClientError as e:
logger.error(f"Unable to delete certificate {certificate_arn}, {e}")
def handler(event, context):
props = event["ResourceProperties"]
physical_resource_id = ""
try:
# Check if this is a Create and we're failing Creates
if event["RequestType"] == "Create" and event["ResourceProperties"].get(
"FailCreate", False
):
raise RuntimeError("Create failure requested, logging")
elif event["RequestType"] == "Create":
logger.info("Request CREATE")
resp_lambda = create_resources(
thing_name=props["CatFeederThingLambdaCertName"],
stack_name=props["StackName"],
encryption_algo=props["EncryptionAlgorithm"]
)
resp_controller = create_resources(
thing_name=props["CatFeederThingControllerCertName"],
stack_name=props["StackName"],
encryption_algo=props["EncryptionAlgorithm"]
)
# The values in the response_data could be used in the CDK code, for example used as Outputs for the CloudFormation Stack deployed
response_data = {
"CertificateArnLambda": resp_lambda["CertificateArn"],
"PrivateKeySecretParameterLambda": resp_lambda["PrivateKeySecretParameter"],
"CertificatePemParameterLambda": resp_lambda["CertificatePemParameter"],
"AmazonRootCAParameterLambda": resp_lambda["AmazonRootCAParameter"],
"CertificateArnController": resp_controller["CertificateArn"],
"PrivateKeySecretParameterController": resp_controller["PrivateKeySecretParameter"],
"CertificatePemParameterController": resp_controller["CertificatePemParameter"],
"AmazonRootCAParameterController": resp_controller["AmazonRootCAParameter"],
"DataAtsEndpointAddress": resp_lambda[
"DataAtsEndpointAddress"
],
}
# Using the ARNs of the pairs of certificates created as the PhysicalResourceId used by Custom Resource
physical_resource_id = response_data["CertificateArnLambda"] + "," + response_data["CertificateArnController"]
elif event["RequestType"] == "Update":
logger.info("Request UPDATE")
response_data = {}
physical_resource_id = event["PhysicalResourceId"]
elif event["RequestType"] == "Delete":
logger.info("Request DELETE")
certificate_arns = event["PhysicalResourceId"]
certificate_arns_array = certificate_arns.split(",")
resp_lambda = delete_resources(
thing_name=props["CatFeederThingLambdaCertName"],
certificate_arn=certificate_arns_array[0],
stack_name=props["StackName"],
)
resp_controller = delete_resources(
thing_name=props["CatFeederThingControllerCertName"],
certificate_arn=certificate_arns_array[1],
stack_name=props["StackName"],
)
response_data = {}
physical_resource_id = certificate_arns
        else:
            logger.info("Should not get here in normal cases - could be REPLACE")
            # Ensure a response can still be sent for unexpected request types
            response_data = {}
            physical_resource_id = event.get("PhysicalResourceId", "")
send_cfn_response(event, context, "SUCCESS", response_data, physical_resource_id)
    except Exception as e:
        logger.exception(e)
        # Signal failure to CloudFormation so the Stack is not left waiting for a response
        send_cfn_response(event, context, "FAILED", {}, physical_resource_id)
        sys.exit(1)
def send_cfn_response(event, context, response_status, response_data, physical_resource_id):
response_body = json.dumps({
"Status": response_status,
"Reason": "See the details in CloudWatch Log Stream: " + context.log_stream_name,
"PhysicalResourceId": physical_resource_id,
"StackId": event['StackId'],
"RequestId": event['RequestId'],
"LogicalResourceId": event['LogicalResourceId'],
"Data": response_data
})
headers = {
'content-type': '',
'content-length': str(len(response_body))
}
requests.put(event['ResponseURL'], data=response_body, headers=headers)
How am I using Custom Resources with AWS CDK?
What I am about to describe in this section can also be applied to a regular CloudFormation template; as a matter of fact, CDK generates a CloudFormation template behind the scenes during the synth phase of the CDK code in the latest version of my IoT Core reference architecture implemented using AWS CDK: https://chiwaichan.co.nz/blog/2024/02/02/feedmyfurbabies-i-am-switching-to-aws-cdk/
If you want to get straight into deploying the CDK version of the reference architecture, go here: https://github.com/chiwaichan/feedmyfurbabies-cdk-iot
In my CDK code, I provision the Custom Resource Lambda function and the associated IAM Roles and Policies using the Python code below. The line of code "code=lambda_.Code.from_asset("lambdas/custom-resources/iot")" loads the Custom Resource Lambda function code shown earlier.
# Assumed imports for this snippet (matching the names used below)
from aws_cdk import CustomResource, Duration, aws_iam as iam, aws_lambda as lambda_

# IAM Role for Lambda Function
custom_resource_lambda_role = iam.Role(
self, "CustomResourceExecutionRole",
assumed_by=iam.ServicePrincipal("lambda.amazonaws.com")
)
# IAM Policies
iot_policy = iam.PolicyStatement(
actions=[
"iot:CreateCertificateFromCsr",
"iot:CreateKeysAndCertificate",
"iot:DescribeEndpoint",
"iot:AttachPolicy",
"iot:DetachPolicy",
"iot:UpdateCertificate",
"iot:DeleteCertificate"
],
resources=["*"] # Modify this to restrict to specific secrets
)
# IAM Policies
ssm_policy = iam.PolicyStatement(
actions=[
"ssm:PutParameter",
"ssm:DeleteParameters"
],
resources=[f"arn:aws:ssm:{self.region}:{self.account}:parameter/*"] # Modify this to restrict to specific secrets
)
logging_policy = iam.PolicyStatement(
actions=[
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
resources=["arn:aws:logs:*:*:*"]
)
custom_resource_lambda_role.add_to_policy(iot_policy)
custom_resource_lambda_role.add_to_policy(ssm_policy)
custom_resource_lambda_role.add_to_policy(logging_policy)
# Define the Lambda function
custom_lambda = lambda_.Function(
self, 'CustomResourceLambdaIoT',
runtime=lambda_.Runtime.PYTHON_3_8,
handler="app.handler",
code=lambda_.Code.from_asset("lambdas/custom-resources/iot"),
timeout=Duration.seconds(60),
role=custom_resource_lambda_role
)
# Properties to pass to the custom resource
custom_resource_props = {
"EncryptionAlgorithm": "ECC",
"CatFeederThingLambdaCertName": f"{cat_feeder_thing_lambda_name.value_as_string}",
"CatFeederThingControllerCertName": f"{cat_feeder_thing_controller_name.value_as_string}",
"StackName": f"{construct_id}",
}
# Create the Custom Resource
custom_resource = CustomResource(
self, 'CustomResourceIoT',
service_token=custom_lambda.function_arn,
properties=custom_resource_props
)
When you execute "cdk deploy" using the CLI on the CDK reference architecture, CDK will synthesize a CloudFormation template from the Python CDK code, and then create a CloudFormation Stack for you using the synthesized template.
For more details on the CDK AWS IoT reference architecture and deployment instructions, please visit my blog: https://chiwaichan.co.nz/blog/2024/02/02/feedmyfurbabies-i-am-switching-to-aws-cdk/
FeedMyFurBabies – Event-Sourcing using Amazon EventBridge
In my previous AWS IoT Cat Feeder project, I used a Lambda function as the event handler each time the Seeed Studio AWS IoT 1-Click button was pressed; the Lambda function in turn published an MQTT message to AWS IoT Core, which was received by the Cat Feeder (via a Seeed Studio XIAO ESP32C3 micro-controller) to dispense food into one or both of the cat bowls (depending on the type of press performed on the IoT button). The long term goal is to integrate the AWS IoT Cat Feeder with the Feed My Fur Babies project.
In this Part 2 of the Feed My Fur Babies blog series, I will be introducing the Event-Sourcing pattern to the https://www.feedmyfurbabies.com architecture, describing the benefits of designing an architecture around Event-Sourcing, with an example implemented using Terraform. I recently learnt Terraform and I now prefer it over the native IaC tools.
Current state architecture
Here is the current state of the Cat Feeder architecture and the IoT related resources previously deployed in AWS using CloudFormation:
The responsibilities of each of the resources in the diagram, prior to the introduction of the Event-Sourcing pattern into the architecture, are:
- AWS IoT 1-Click Button: This is an IoT button I physically press to emit an event to dispense food into one or both of the cat bowls; this button can be used anywhere there is a WiFi connection
- AWS IoT Core Certificates: Certificates are associated with resources and devices that interact with the AWS IoT Core Service, whether publishing an MQTT message to an AWS IoT Topic or receiving an MQTT message from a Topic
- AWS Lambda - IoT 1-Click Event Handler that sends an MQTT message to an IoT Topic: This Lambda function is responsible for handling incoming events created by the AWS IoT 1-Click Button, as well as translating each event into an MQTT message before sending it to an AWS IoT Core Topic. This is the component in the architecture that is the main focus of this blog post; we will describe how this component will be re-architected and decomposed to work in conjunction with the introduction of the Event-Sourcing pattern.
- AWS IoT Core: This is the IoT service that manages the IoT Topics and Subscriptions to said Topics
- Seeed Studio XIAO ESP32C3: a micro-controller subscribed to the IoT Topic (the one the Lambda sends MQTT messages to) that dispenses food into 1 or 2 cat bowls when it receives an MQTT message from the Topic
For further details on what role this architecture plays in the Smart IoT Cat Feeder, visit Part 2 of the Smart Cat Feeder Blog Series.
What is Event-Sourcing?
The idea of Event-Sourcing is to capture all events that occur within a system during its lifetime; these events are stored in an immutable ledger in the sequence in which they occurred.
One of the biggest benefits of capturing all the events of a system is that we are able to replay every single event that has ever occurred within the system (partially or as a whole) at a later time - let's say 5 years later - and selectively replay those 5 years' worth of events to one or more specific downstream event bus targets. An event bus target could be a new application that was deployed into your production environment 5 years after the first event was created; this means we could hydrate the new application's datastore with 5 years' worth of data, as if the application had existed when the first event occurred. Also, imagine being able to re-create entire datastores, with full history, for 100s of applications (where each application has its own datastore) within your system landscape: these datastores could be hydrated with the full history of events stored in the immutable Event-Sourcing ledger, or by replaying the events from the very first event up to a specific event at a given point in time (e.g. half of the entire ledger) - effectively providing you with the ability to create any datastore, in any datastore engine, with its data in the state of any given point in time.
How do we introduce Event-Sourcing into the architecture?
We start off with the AWS Lambda function shown in the current state architecture, whose responsibilities are to handle the events received from the AWS IoT 1-Click Button each time it is pressed, as well as to send an MQTT message to an AWS IoT Core Topic in response to each incoming event; essentially, it has 2 distinct responsibilities.
Next, we decompose the single Lambda function into 2 separate, distinct Lambda functions based on those 2 responsibilities, then chain the 2 Lambda functions together to preserve the original functionality - what we have effectively achieved is decoupling the 2 responsibilities into 2 separate units of work, resulting in 2 separate compute resources.
The benefits of a decoupled architecture are:
- Each of the Lambda functions can be implemented in different languages - e.g. one in Python and the other can be in Java
- Independent release cycles for each of the Lambda functions
- Changes to either one of the 2 responsibilities can be made independently of each other
- Each Lambda function can be scaled independently of the other
In this step we use Amazon EventBridge as the Event-Sourcing store - the immutable ledger we described earlier; we will also leverage EventBridge as a serverless event bus to help us receive, filter, transform, route and deliver events to downstream services (event bus targets). In this instance we will slip EventBridge in between the 2 Lambda functions, and we will store every single IoT event sent by the IoT Button in the immutable ledger.
Benefits of adding EventBridge to the architecture:
- The IoT 1-Click Lambda handler no longer directly calls the downstream Lambda function - so it is unaware of the downstream targets
- The IoT events are stored in an immutable ledger in the sequence in which they occurred
- Prepare the system landscape with the ability to more easily develop micro-services in an Event-Driven architecture using the orchestration pattern
Target State Architecture
This is the end result of introducing Event-Sourcing to the architecture; it may not look like much benefit has been gained from adding Amazon EventBridge - in fact, one might think we've added more components, and in effect created more moving parts and complexity. But I decided to introduce this very early into the architecture as an investment, so that I am in a position to rapidly build out my micro-service architecture - reaping the rewards from the get-go.
Try it out for yourself
I have created a GitHub Repository to deploy a complete working example of the resources shown in the Target State Architecture using Terraform.
I suggest you deploy this to have a play for yourself:
- Clone the repository: "git clone git@github.com:chiwaichan/feedmyfurbabies-202303-eventsourcing-using-eventbridge.git"
- Setup your Terraform environment
- Run: "terraform init && terraform apply"
Also, check out each individual resource deployed by this Terraform code.
Create a test IoT 1-Click event to pass the event end-to-end through all the deployed resources
This is the IoT 1-Click Lambda function handler shown in the AWS Console
Create a test event so we can invoke the Lambda function to simulate an event as if a physical IoT Button is pressed
Here we can view the logs for this Lambda function Test invocation
The IoT 1-Click Lambda function handler sends an Event to the Custom EventBridge Event Bus named "feedmyfurbabies"
The event sent to the Custom Event Bus matches the Custom Event Bus Rule named "feeds-rule" on the "source" attribute, with a value of "com.feedmyfurbabies".
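For reference, here is a minimal sketch of how an event like this could be published to the Custom Event Bus using boto3 - the bus name and source value are the ones mentioned above, while the DetailType and the payload shape are illustrative assumptions:

import boto3
import json

events = boto3.client("events")
# Publish a feed event to the Custom Event Bus; the "feeds-rule" Rule matches on the Source attribute
events.put_events(
    Entries=[{
        "EventBusName": "feedmyfurbabies",
        "Source": "com.feedmyfurbabies",
        "DetailType": "feed-request",  # hypothetical detail type
        "Detail": json.dumps({"event": "SINGLE"}),  # hypothetical payload shape
    }]
)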
This Lambda function is the downstream target of the Custom Event Bus Rule matched by the event; it is responsible for interpreting the event message and translating it into an MQTT message, which it in turn sends to the AWS IoT Core Topic "cat-feeder/action" - a Topic you can subscribe to using a micro-controller, e.g. a Seeed Studio XIAO ESP32C3.
Here we can see the logs of the event received by the EventBridge Custom Bus Rule
In the AWS Console for the AWS IoT Core Service, we can subscribe to Topics to receive MQTT messages right at the end of the downstream services - this is useful if you don't use a micro-controller.
Future State Architecture
We end up with an architecture that will enable us to easily add targets to consume events managed by the EventBridge Custom Event Bus, doing so in a way where the IoT 1-Click Lambda function has no knowledge of any newly created subscribers of the Custom Event Bus.
In a future blog I will demonstrate this.
4×4 fun with a bit of IoT, vlogging and Machine Learning – Part 1
Months prior to the very first lockdown, I had gotten myself on the waitlist for a 4x4 Jimny, so I could take it to the beach without worrying about getting beached like I likely would in a regular front-wheel-drive hatchback; or take it into the bush to climb some hills and see how far I would get without flipping it (badly). Knowing I wouldn't be able to drive it for an indefinitely long amount of time, I decided to cancel the order back then; in some ways I was sad at the time, but in many ways I am happy now, because I have had a fair amount of time to think about what else I could do with the Jimny while taking it on these adventures.
The time spent mulling has led to another new blog series; it will take a similar build approach to the one I took while building my IoT Cat Feeder, but this time on a larger scale in terms of the number of moving parts and components; also, this time I get to enjoy the result myself, instead of the cats. For those unfamiliar with the approach I took in my prior build: I will start the blog series by proposing an idea I have in mind, with a certainty of about 70% of achieving a functional prototype - mainly because I don't have the background or experience in most of the skills required to build out this idea.
Generally, I will create a new Part for the Blog Series as I achieve a milestone during the build, where I talk about what was achieved in that milestone and provide details on how I got there; where possible I will include a public GitHub repository for any code written for the build.
So enough of my rambling.
What is it that I am wanting to build?
As you may have already predicted what is involved in this build from the image above: yes, it will involve a 4x4 - I have a Jimny on the way - and some cloud buzzwords like IoT and Machine Learning.
The goals of this build are to:
- Develop a solution to capture video recordings of my 4x4 adventures, covering the entire journey with 5+ viewpoints around the vehicle in 4K resolution; realistically I might only be able to capture full HD video, as explained further down this blog.
- Capture and store the vehicle's telemetry at regular intervals as the vehicle is driven, using the CAN Bus protocol - e.g. speed, engine RPM and any other states the car is in.
- Capture other useful data not monitored by the vehicle's CAN Bus, such as GPS co-ordinates and the environment around the vehicle at the time - e.g. temperature, humidity, luminosity and many more, using hand-picked sensors.
- Ingest in real time all the videos, CAN Bus and sensor data captured into an AWS Datalake
If I were able to achieve all the goals in the list above, then I would like to also achieve these goals:
- Create a Digital Twin using AWS TwinMaker of the Jimny and associate all the sensors and devices captured with it
- Train Machine Learning models using the data ingested into the AWS Datalake
- Do something with the AWS DeepLens that has been sitting in my drawer for the past year, together with the ML models created above - perhaps have it warn me when I am about to do something that could cause the Jimny to land on its roof like last time, by making predictions with an ML model.
- Have some sort of cloud solution that spits out a video for each of my trips, so I can upload it to YouTube, with the video displaying some of the telemetry and sensor data captured.
At the end of the blog series I will conclude whether I was able to build something functional, and whether or not I achieved all the goals stated in the 2 lists above.
Where am I in terms of progress for this build?
It has been a bit of a challenge to source certain types of electronic components at the moment, as some may already know, so I've only managed to source the majority of the components required at this point in time.
So far I have sourced the following components:
Starlink RV version
I had been wanting one of these for a long time, so when I saw it on special I jumped on it straight away. This is the RV version, which means it can be taken anywhere with me, so I will mount it on a roof rack - another reason why I do not want to end up with the Jimny on its roof: it would not be fun to be somewhere with no internet for a long period of time.
The ideal location to place the Starlink is a spot with no obstructions, as far away from everything as possible; however, when I tested it out in my tiny back yard, with it sitting in the center surrounded by 2 houses (both 2 storeys) and a high fence, I got the following results:
Although the speed is as fast as one of the slowest fibre plans available in New Zealand, the upload speed is the ultimate factor that determines how many live feeds we can ingest into the Datalake; a 4K video stream is 20Mbps, so five 4K feeds alone would need around 100Mbps of sustained upload, which does not leave much bandwidth for all of the other data types. Results may be better depending on where I am at the time; but unless Starlink offers symmetrical upload speeds, we are forced to use full HD feeds - FYI, download speeds can be as high as 500Mbps in some parts of the world. One option is to store the data onto a NAS drive via the Home Assistant installed on the LinkStar - a device similar to a Raspberry Pi - and then upload the videos into the Datalake after I get home; I would like to avoid this, as it is too much admin.
Router / Wifi
I've got a few lying around the house doing nothing.
Cameras
I also have some spare cameras to use; their feeds can be served using the RTSP protocol. I also have a few ESP32-CAMs I recently purchased, so this build will use a combination of the 2 camera types. Most webcams can be used for this.
Seeed Studio XIAO ESP32C3
I have a bunch of these, as they are my go-tos when I build projects using micro-controllers; they are around $5 USD: Seeed Studio XIAO ESP32C3 - one of, if not the, smallest ESP32s I've come across, and more reliable than other ones I've used previously.
I also have various sensors available that measure:
- Distance from objects
- Temperature
- Sound
- Humidity
- Luminosity
- CO2
Seeed Studio LinkStar with Home Assistant
I'll be using this to pull the feeds from the cameras, as well as to save the videos onto a NAS if we go down that route.
What is left to source?
- Seeed Studio WIO ESP32 CAN - this is a kit I'll be using to interface with the CAN Bus to retrieve telemetry from the Jimny.
- Jimny
Next blog
In the next blog in this series, I will take all the components I currently have, link them all up, and detail what I did and how I got there.
Feed My Fur Babies – AWS Amplify and Route53
I'm starting a new blog series where I will be documenting my build of a full-stack Web and Mobile application, using AWS Amplify to implement both the frontend and the backend, whilst developing dependent downstream Services outside of Amplify using AWS Serverless components - implementing a Micro-Service architecture with an Event-Driven design pattern, where we break the application up into smaller consumable chunks that work well together.
Since we are creating a completely new application from scratch, I will also incorporate a vital pattern that will reduce complexity throughout the lifetime of the application: the Event-Sourcing pattern. This pattern ensures every Event ever published within a system is stored in an immutable ledger; this ledger enables new Data Stores, of any Data Store Engine, to be created at any given time by replaying the Events in the ledger, from a start date and time to an end date and time.
CQRS is a pattern I will write about in great detail in a blog in the near future; CQRS enables the ability to create multiple Data Stores with identical data, each Data Store using a unique Data Store Engine instance.
What is AWS Amplify?
Amplify is an AWS Service that provides frontend web and mobile developers with no cloud expertise the ability to build and host full-stack applications on AWS. As a frontend developer, you can leverage it to build and integrate AWS Services and components into your frontend without having to deal with the underlying AWS Services; all the Services the frontend is built on top of are managed by AWS Amplify - e.g. there is no need to manage CloudFormation Stacks, S3 Storage or AWS Cognito.
What will I be doing with Amplify?
My background is in full-stack application development - I worked in that role for over 10 years, using various frontend/backend frameworks, components and patterns.
I will be building a website called Feed My Fur Babies, where I will provide video streams showing live feeds of my cats from web cams placed in various spots around my house; the website will also provide users with the ability to feed my cats using internet-enabled devices - like the IoT Cat Feeders I recently put together - and watch them hoon on their favorite treats. Although I am experienced in building websites from the ground up using AWS Services, I am aiming to build Feed My Fur Babies while leveraging as little of that experience as possible - this is so I build the website as close as possible to the skillset of Amplify's target demographic, i.e. as a developer with only frontend experience.
Current Architecture State
Update
Let's talk about what was done to get to the current architecture state.
The first thing I did was buy the domain feedmyfurbabies.com using AWS Route53.
Next, I created a new Amplify App called "Feedme".
Within the App I created two Hosted Environments: one to host the production environment, the other to host a development environment. Each Hosted Environment is configured to be built and deployed from a specific Branch in the shared CodeCommit Repository used to version-control the frontend source code.
Smart Cat Feeder – Part 4
This is Part 4, the final blog of the series where I detail my journey in learning to build an IoT solution.
Please have a read of my previous blogs to get the full context leading up to this point before continuing.
- Part 1: I talked about setting up a Seeed AWS IoT Button
- Part 2: I talked about publishing events to an Arduino Micro-controller from AWS
- Part 3: I talked about my experience of using a 3D Printer for the first time to print a Cat Feeder
Why am I building this Feeder?
I've always wanted to dip my toes into building IoT solutions, beyond doing what a typical tutorial teaches in only turning on LEDs - I wanted to build something that would be used every day. Plus, I often forget to feed the cats while I am away from home (for the day), so it would be nice to come home to a non-grumpy cat, having fed them remotely at any time and from anywhere in the world over the internet.
What was used to build this Feeder?
- A 3D Printer using PLA as the filament material.
- An Arduino based micro-controller - in this case a Seeed Studio XIAO ESP32C3
- A couple of motors and controllers
- AWS Services
- Seeed AWS IoT Button
- Some code
- and some cat food
So how does it work and how is it put together?
To describe simply what was built: the Feeder uses an IoT button click to trigger events over the internet that instruct the feeder to dispense food into one or both food bowls.
Here are some diagrams describing the architecture of the solution - the technical things that happen in between the IoT button and the Cat Feeder.
When the Feeder receives an MQTT message from the AWS IoT Core Service, it runs a motor for 10 seconds to dispense food into one of the food bowls; if the message contains an event value to dispense food into both bowls, we run both motors concurrently using the L298N controller.
Here's a video of some timelapse pictures captured during the 3 weeks it took to 3D print the feeder.
The Feeder is made up of a small handful of basic hardware components; below is a breadboard diagram depicting the components used and how they are all wired up together. A regular 12V 2A DC power adapter is used to power all the components.
The code to start and stop a motor is about 10 lines, as shown below. This is the completed version of the Arduino Sketch shown in Part 2 of this blog series, which was only partially written at the time.
#include "secrets.h"
#include <WiFiClientSecure.h>
#include <MQTTClient.h>
#include <ArduinoJson.h>
#include "WiFi.h"
// The MQTT topics that this device should publish/subscribe
#define AWS_IOT_PUBLISH_TOPIC "cat-feeder/states"
#define AWS_IOT_SUBSCRIBE_TOPIC "cat-feeder/action"
WiFiClientSecure net = WiFiClientSecure();
MQTTClient client = MQTTClient(256);
int motor1pin1 = 32;
int motor1pin2 = 33;
int motor2pin1 = 16;
int motor2pin2 = 17;
void connectAWS()
{
WiFi.mode(WIFI_STA);
WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
Serial.println("Connecting to Wi-Fi");
Serial.println(AWS_IOT_ENDPOINT);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
// Configure WiFiClientSecure to use the AWS IoT device credentials
net.setCACert(AWS_CERT_CA);
net.setCertificate(AWS_CERT_CRT);
net.setPrivateKey(AWS_CERT_PRIVATE);
// Connect to the MQTT broker on the AWS endpoint we defined earlier
client.begin(AWS_IOT_ENDPOINT, 8883, net);
// Create a message handler
client.onMessage(messageHandler);
Serial.println("Connecting to AWS IOT");
Serial.println(THINGNAME);
while (!client.connect(THINGNAME)) {
Serial.print(".");
delay(100);
}
if (!client.connected()) {
Serial.println("AWS IoT Timeout!");
return;
}
Serial.println("About to subscribe");
// Subscribe to a topic
client.subscribe(AWS_IOT_SUBSCRIBE_TOPIC);
Serial.println("AWS IoT Connected!");
}
void publishMessage()
{
StaticJsonDocument<200> doc;
doc["time"] = millis();
doc["state_1"] = millis();
doc["state_2"] = 2 * millis();
char jsonBuffer[512];
  serializeJson(doc, jsonBuffer); // serialize the JSON document into the char buffer
client.publish(AWS_IOT_PUBLISH_TOPIC, jsonBuffer);
Serial.println("publishMessage states to AWS IoT" );
}
void messageHandler(String &topic, String &payload) {
Serial.println("incoming: " + topic + " - " + payload);
StaticJsonDocument<200> doc;
deserializeJson(doc, payload);
const char* event = doc["event"];
Serial.println(event);
feedMe(event);
}
void setup() {
Serial.begin(9600);
connectAWS();
pinMode(motor1pin1, OUTPUT);
pinMode(motor1pin2, OUTPUT);
pinMode(motor2pin1, OUTPUT);
pinMode(motor2pin2, OUTPUT);
}
void feedMe(String event) {
Serial.println(event);
bool feedLeft = false;
bool feedRight = false;
if (event == "SINGLE") {
feedLeft = true;
}
if (event == "DOUBLE") {
feedRight = true;
}
if (event == "LONG") {
feedLeft = true;
feedRight = true;
}
if (feedLeft) {
Serial.println("run left");
digitalWrite(motor1pin1, HIGH);
digitalWrite(motor1pin2, LOW);
}
if (feedRight) {
Serial.println("run right");
digitalWrite(motor2pin1, HIGH);
digitalWrite(motor2pin2, LOW);
}
delay(10000);
digitalWrite(motor1pin1, LOW);
digitalWrite(motor1pin2, LOW);
digitalWrite(motor2pin1, LOW);
digitalWrite(motor2pin2, LOW);
delay(2000);
Serial.println("fed");
}
void loop() {
publishMessage();
client.loop();
delay(3000);
}
Demo Time
The Seeed AWS IoT Button is able to detect 3 different types of click events: Long, Single and Double, and we are able to propagate this all the way to the feeder, so it performs certain actions based on the click event type.
The video below demonstrates the following scenarios:
- Long Click: this will dispense food into both cat bowls
- Single Click: this will dispense food into Ebok's cat bowl
- Double Click: this will dispense food into Queenie's cat bowl
What's next?
Build the nervous system of the ultimate nerd project I have in mind: voice-controlled actions driving servos, LEDs and audio outputs, using a mesh of Seeed XIAO BLE Sense micro-controllers and TinyML Machine Learning.