3 posts tagged with "Sign Language"

Real-Time Voice to Sign Language Translation - Part 3: Edge AI Agent with Strands Agents on NVIDIA Jetson

· 13 min read
Chiwai Chan
Tinkerer

This is Part 3 of a 3-part series covering a real-time voice-to-sign-language translation system. In Part 1, I covered the React frontend that captures speech, processes it with Amazon Nova 2 Sonic, and publishes cleaned sentence text via MQTT. In Part 2, I covered the AWS CDK stack that routes IoT Core messages through Lambda to AppSync for real-time GraphQL subscriptions.

NVIDIA Jetson AGX Thor Developer Kit

This post covers the final piece — the edge AI agent that actually makes the physical hand move. It is a Strands Agent running on an NVIDIA Jetson that subscribes to MQTT commands from the frontend, uses Amazon Nova 2 Lite to invoke the fingerspell tool, drives the Pollen Robotics Amazing Hand's Feetech SCS0009 servos for ASL fingerspelling letter by letter, records video of the hand in action, uploads it to S3, and publishes hand state back to IoT Core — which Part 2's infrastructure routes through to the frontend via AppSync.

The three repositories in the series:

  1. Part 1 - Frontend and Voice Processing (amplify-react-nova-sonic-voice-chat-amazing-hand) — React web app that captures speech, streams to Nova 2 Sonic, publishes cleaned sentence text via MQTT
  2. Part 2 - Cloud Infrastructure (cdk-iot-amazing-hand-streaming) — AWS CDK stack that routes IoT Core messages through Lambda to AppSync
  3. This post (Part 3) - Edge AI Agent (strands-agents-amazing-hands) — Strands Agent powered by Amazon Nova 2 Lite on NVIDIA Jetson that translates sentence text to ASL servo commands, drives the Amazing Hand, and publishes state back

Goals

  • Receive MQTT commands from the React frontend (plain text or JSON with sentence field) and drive the Amazing Hand servos for ASL fingerspelling
  • Use the Strands Agents framework with Amazon Nova 2 Lite (us.amazon.nova-2-lite-v1:0) to invoke the fingerspell tool — the LLM passes the incoming text verbatim to the tool for letter-by-letter ASL spelling
  • Fingerspell text using the 26-letter ASL alphabet (A-Z), with each letter held for 0.8 seconds and spaces adding a 0.4-second pause
  • Control 8 Feetech SCS0009 servos (4 fingers x 2 joints) on the Pollen Robotics Amazing Hand via serial bus at 1M baud using the rustypot library
  • Record video of the hand via OpenCV during each fingerspelling sequence, encode to H.264 MP4 via imageio-ffmpeg, upload to S3, and include a presigned URL in the state message
  • Publish real-time hand state (servo angles, letter, video URL) to IoT Core over MQTT — which Part 2's CDK stack routes to AppSync for the frontend to consume
  • Authenticate to AWS IoT Core using mTLS with X.509 device certificates
  • Create a fresh agent instance per MQTT message to prevent conversation history accumulation and unbounded token growth
  • Handle graceful shutdown with servo torque disable on SIGINT/SIGTERM

The Overall System

This diagram shows the complete end-to-end system. Part 3 is the edge device highlighted on the right — the NVIDIA Jetson running the Strands Agent that controls the Amazing Hand.

Overall System with Part 3 Highlighted

How Part 3 fits in:

  • Part 1 (Frontend) publishes cleaned sentence text to the-project/robotic-hand/{deviceName}/action via MQTT
  • Part 3 (This agent) subscribes to the /action topic, processes the command through the Strands Agent, drives the servos, records video, and publishes state back to /state
  • Part 2 (Infrastructure) picks up the /state messages and routes them through Lambda to AppSync, where the frontend receives them via GraphQL subscriptions

Architecture

The agent is a Python application built on the Strands Agents framework. It runs as a long-lived MQTT listener on the NVIDIA Jetson, creating a fresh agent instance for each incoming message to keep memory bounded.

Agent Architecture

Agent Architecture

Components:

  • MQTT Listener (agent.py) — Subscribes to the action topic, parses incoming messages (plain text or JSON), and submits each action to a single-threaded agent executor to keep the AWS CRT MQTT event loop free
  • Strands Agent — A fresh Agent instance created per message with Amazon Nova 2 Lite as the model, the fingerspell tool as the available action, and a MaxToolCallsHook (limit 3) to prevent runaway tool-call loops
  • Fingerspell Tool (hand_control.py) — A @tool decorated function that the LLM invokes to spell text letter-by-letter using the 26-letter ASL alphabet
  • Servo Controller — Uses rustypot.Scs0009PyController to communicate with 8 Feetech SCS0009 servos over serial at 1M baud. Each finger has two servos controlled by dedicated move functions (Move_Index, Move_Middle, Move_Ring, Move_Thumb)
  • Video Recorder (video_recorder.py) — Background daemon thread captures frames via OpenCV, encodes to H.264 MP4 via imageio-ffmpeg, uploads to S3, and returns a presigned URL (1-hour expiry)
  • State Publisher — Non-blocking MQTT publisher on a separate thread that sends hand state (finger angles, letter, video URL) to the /state topic with QoS 1

Data Flow

Interactive Sequence Diagram

Edge Agent: MQTT Command to Servo Control Flow

From MQTT command to ASL fingerspelling with video capture

  1. 0 ms: IoT Core delivers the MQTT message { "sentence": "hello world" } (QoS 1)
  2. 1 ms: MQTT Listener parses the JSON and extracts the sentence field
  3. 2 ms: start_recording() launches the camera daemon thread
  4. 3 ms: a fresh Agent instance is created and submitted to the executor (no history from prior messages)
  5. 5 ms: Converse API call to Nova 2 Lite with the system prompt, action text, and fingerspell tool definition
  6. 200 ms: tool selection: fingerspell(text="hello world")
  7. 210 ms: fingerspell moves H-E-L-L-O (0.8 s per letter) over the serial bus at 1M baud
  8. 300 ms: state published per letter, e.g. { letter: "H", fingers: {...} }, on a non-blocking thread
  9. 4500 ms: fingerspell moves W-O-R-L-D (0.8 s per letter)
  10. 4600 ms: state published per letter, e.g. { letter: "W", fingers: {...} }
  11. 8800 ms: stop_recording_and_upload() encodes H.264 and uploads to S3
  12. 9000 ms: presigned URL returned (1-hour expiry)
  13. 9001 ms: last state re-published with video_url appended

Total: 13 steps across 6 components (IoT Core, MQTT Listener, Strands Agent, Nova 2 Lite, Servo Controller, S3 + Video)
MQTT command → ASL fingerspelling + video in roughly 9 seconds

How it works

MQTT Command Reception

The agent subscribes to an MQTT action topic (e.g. the-project/robotic-hand/XIAOAmazingHandRight/action) using mTLS authentication with X.509 device certificates. The first connection uses clean_session=True to flush any stale session state, then reconnects with clean_session=False for normal operation.
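A minimal sketch of that connection dance using the AWS IoT Device SDK v2 for Python (awsiotsdk). The endpoint, client ID, and certificate paths below are placeholders, and the real agent's wiring may differ in detail:

from awscrt import mqtt
from awsiot import mqtt_connection_builder

# Placeholders; the real values come from the agent's CLI arguments.
ENDPOINT = "your-iot-endpoint.iot.us-east-1.amazonaws.com"
CLIENT_ID = "XIAOAmazingHandRight"
ACTION_TOPIC = "the-project/robotic-hand/XIAOAmazingHandRight/action"

def _connect(clean_session: bool) -> mqtt.Connection:
    # mTLS with the X.509 device certificate, private key, and Amazon root CA.
    connection = mqtt_connection_builder.mtls_from_path(
        endpoint=ENDPOINT,
        cert_filepath="certs/device.pem.crt",
        pri_key_filepath="certs/device.pem.key",
        ca_filepath="certs/AmazonRootCA1.pem",
        client_id=CLIENT_ID,
        clean_session=clean_session,
        keep_alive_secs=30,
    )
    connection.connect().result()
    return connection

def connect_and_subscribe(on_message) -> mqtt.Connection:
    # First connection flushes any stale session state...
    _connect(clean_session=True).disconnect().result()
    # ...then reconnect with a persistent session for normal operation.
    conn = _connect(clean_session=False)
    subscribe_future, _ = conn.subscribe(
        topic=ACTION_TOPIC, qos=mqtt.QoS.AT_LEAST_ONCE, callback=on_message)
    subscribe_future.result()
    return conn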

When a message arrives, the handler tries to parse it as JSON and extract the sentence field. If JSON parsing fails, it treats the entire payload as plain text. The action is then submitted to a single-threaded executor (agent_executor) to keep the AWS CRT MQTT event loop free:

def on_message(topic, payload, dup, qos, retain, **kwargs):
    payload_str = payload.decode("utf-8")
    try:
        # JSON payloads carry the text in a "sentence" field
        data = json.loads(payload_str)
        action = data.get("sentence", payload_str)
    except json.JSONDecodeError:
        # Fall back to treating the raw payload as plain text
        action = payload_str
    # Hand off to the single-threaded executor so the MQTT event loop stays free
    agent_executor.submit(_process_action, action)

Strands Agent and Amazon Nova 2 Lite

The Strands Agents framework provides the core AI reasoning loop. A fresh agent instance is created for every MQTT message — this is deliberate to prevent conversation history from accumulating across messages, which would cause unbounded token growth over time.

The agent uses Amazon Nova 2 Lite (us.amazon.nova-2-lite-v1:0) via the Bedrock Converse API. Nova 2 Lite was chosen for its low-latency tool-use responses, which is critical for real-time servo control. The agent is configured with a MaxToolCallsHook that cancels tool calls beyond 3 to prevent infinite LLM tool-call loops.

The agent runs in fingerspell-only mode — only the fingerspell tool is available. The system prompt instructs the LLM to pass the entire message verbatim to the fingerspell tool without shortening or modifying it. State messages include a letter field identifying the current ASL letter being signed.
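To make the per-message lifecycle concrete, here is a minimal sketch of how such an agent might be constructed with the Strands SDK. The system prompt is a simplified stand-in, and fingerspell and MaxToolCallsHook refer to the tool and hook described elsewhere in this post; exact constructor arguments may differ from the repository:

from strands import Agent

FINGERSPELL_SYSTEM_PROMPT = (
    "You control a robotic hand. Pass the user's message verbatim to the "
    "fingerspell tool. Do not shorten or modify the text."
)

def _process_action(action: str) -> None:
    # A fresh Agent per MQTT message: no conversation history carries over,
    # so token usage stays bounded across hundreds of commands.
    agent = Agent(
        model="us.amazon.nova-2-lite-v1:0",
        system_prompt=FINGERSPELL_SYSTEM_PROMPT,
        tools=[fingerspell],          # the @tool function from hand_control.py
        hooks=[MaxToolCallsHook()],   # custom hook capping tool calls at 3
    )
    agent(action)
    # Token usage logging after each invocation is omitted from this sketch.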

Servo Hardware and Control

Pollen Robotics Amazing Hand

The Amazing Hand — an open-source robotic hand designed by Pollen Robotics and manufactured by Seeed Studio — has 4 fingers (index, middle, ring, thumb — no pinky) with 2 Feetech SCS0009 servos per finger (8 servos total) connected via a Waveshare driver board over serial USB at 1,000,000 baud.

Each servo has an angle range of -90 to +90 degrees. Per-servo calibration offsets (MiddlePos) are applied during move operations to account for physical alignment:

MiddlePos = [-17, 8, -16, -4, -12, 10, -9, 9]

The control sequence for each finger:

  1. Set goal speed for both servos (write_goal_speed) with a 0.2ms sleep between each speed write for serial bus timing
  2. Convert angle to radians with calibration offset: np.deg2rad(MiddlePos[i] + angle)
  3. Set goal position for both servos (write_goal_position)
  4. 5ms sleep after positions are set before the next finger's commands
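Putting the sequence together, here is a rough sketch of what one finger's move function might look like. The rustypot constructor arguments, call signatures, and servo IDs shown here are assumptions based on the description above, not the exact library API:

import time
import numpy as np
from rustypot import Scs0009PyController

# Per-servo calibration offsets in degrees (values from this post).
MiddlePos = [-17, 8, -16, -4, -12, 10, -9, 9]

# Constructor arguments are assumptions about the rustypot Python bindings.
controller = Scs0009PyController(serial_port="/dev/amazing-hand-right", baudrate=1_000_000, timeout=0.05)

def Move_Index(angle_1: float, angle_2: float, speed: float = 1.0) -> None:
    # Servo IDs 1 and 2 for the index finger are assumed for this sketch.
    ids = (1, 2)
    for servo_id in ids:
        controller.write_goal_speed(servo_id, speed)
        time.sleep(0.0002)  # 0.2 ms between speed writes for serial bus timing
    # Apply the calibration offset, then convert degrees to radians.
    controller.write_goal_position(ids[0], np.deg2rad(MiddlePos[0] + angle_1))
    controller.write_goal_position(ids[1], np.deg2rad(MiddlePos[1] + angle_2))
    time.sleep(0.005)  # 5 ms before the next finger's commands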

ASL Fingerspelling Tool

The fingerspell(text) tool is decorated with @tool from the Strands framework, making it callable by the LLM during inference. It spells text letter-by-letter using the ASL alphabet. Each of the 26 letters (A-Z) is mapped to servo angle tuples for all 4 fingers. Each letter is held for 0.8 seconds, spaces add a 0.4-second pause, and non-letter characters are skipped. A state message with the current letter field is published after each letter.

Since the Amazing Hand has no pinky finger, ASL letters that require a pinky use the ring finger instead.
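A condensed sketch of what the fingerspell tool could look like. The angle values in the partial ASL_ALPHABET table are illustrative placeholders rather than the real calibrated poses, and Move_Middle, Move_Ring, Move_Thumb, and publish_state are the sibling helpers described in this post (Move_Index is sketched above, publish_state under State Publishing):

import time
from strands import tool

# letter -> (index, middle, ring, thumb), each finger as (angle_1, angle_2).
# Placeholder values; the real table maps all 26 letters A-Z.
ASL_ALPHABET = {
    "A": ((45, -45), (45, -45), (45, -45), (0, 0)),
    "B": ((-20, 20), (-20, 20), (-20, 20), (60, -60)),
    # ... C through Z
}

@tool
def fingerspell(text: str) -> str:
    """Spell the given text letter by letter using the ASL alphabet."""
    for char in text.upper():
        if char == " ":
            time.sleep(0.4)   # spaces add a 0.4 s pause
            continue
        pose = ASL_ALPHABET.get(char)
        if pose is None:
            continue          # non-letter characters are skipped
        index, middle, ring, thumb = pose
        Move_Index(*index)
        Move_Middle(*middle)
        Move_Ring(*ring)
        Move_Thumb(*thumb)
        # Non-blocking state publish for the current letter.
        publish_state(letter=char, fingers={
            name: {"angle_1": a1, "angle_2": a2}
            for name, (a1, a2) in zip(("index", "middle", "ring", "thumb"), pose)
        })
        time.sleep(0.8)       # each letter is held for 0.8 s
    return f"Fingerspelled: {text}"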

Video Recording Pipeline

Video is recorded concurrently with each fingerspelling sequence:

  1. Start recording — Before the agent is invoked, start_recording() launches a background daemon thread (video-capture) that captures frames from OpenCV VideoCapture(0) at the camera's native FPS (typically 30)
  2. Stop and encode — After the agent completes, stop_recording_and_upload() stops the capture thread, converts frames from BGR (OpenCV) to RGB, and encodes to H.264 MP4 using imageio.v3 with the libx264 codec. The temp file name uses a hand_YYYYMMDD_HHMMSS timestamp
  3. Upload to S3 — The MP4 is uploaded to the configured S3 bucket (default: cc-amazing-video) with key videos/hand_YYYYMMDD_HHMMSS.mp4
  4. Presigned URL — A presigned URL is generated with 1-hour expiry and appended to the last state message, which is re-published to the /state topic
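A simplified sketch of the stop-and-upload step (items 2 to 4 above). The imageio keyword arguments are assumptions about the imageio-ffmpeg backend; the boto3 upload and presigned URL calls are standard:

import datetime
import boto3
import numpy as np
import imageio.v3 as iio

def stop_recording_and_upload(frames_bgr, fps: int, bucket: str = "cc-amazing-video") -> str:
    # OpenCV captures BGR; flip the channel order to RGB before encoding.
    frames_rgb = [frame[:, :, ::-1] for frame in frames_bgr]
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    local_path = f"/tmp/hand_{stamp}.mp4"
    key = f"videos/hand_{stamp}.mp4"

    # Encode an H.264 MP4 (keyword arguments assumed for the ffmpeg backend).
    iio.imwrite(local_path, np.stack(frames_rgb), fps=fps, codec="libx264")

    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)

    # Presigned URL with a 1-hour expiry, appended to the last state message.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=3600,
    )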

State Publishing

After each servo movement, the tool publishes a state message to the MQTT /state topic (e.g. the-project/robotic-hand/XIAOAmazingHandRight/state) with QoS 1. Publishing is non-blocking — it submits to a dedicated _publish_executor thread to avoid blocking the servo tool.

The state payload:

{
  "gesture": "fingerspell",
  "letter": "E",
  "ts": 1770550850,
  "fingers": {
    "index": { "angle_1": 45, "angle_2": -45 },
    "middle": { "angle_1": 45, "angle_2": -45 },
    "ring": { "angle_1": 45, "angle_2": -45 },
    "thumb": { "angle_1": 60, "angle_2": -60 }
  },
  "video_url": "https://cc-amazing-video.s3.amazonaws.com/videos/hand_20260228.mp4?..."
}

The last published state is cached so that publish_state_with_video_url() can re-publish it with the presigned URL appended after video upload completes — without needing to re-read servo angles.
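A sketch of the non-blocking publish and the cached re-publish, assuming the MQTT connection object (conn) from the earlier connection sketch; the helper names mirror the ones described in this post:

import json
import time
from concurrent.futures import ThreadPoolExecutor
from awscrt import mqtt

_publish_executor = ThreadPoolExecutor(max_workers=1)
_last_state: dict = {}
STATE_TOPIC = "the-project/robotic-hand/XIAOAmazingHandRight/state"

def publish_state(letter: str, fingers: dict, gesture: str = "fingerspell") -> None:
    # Cache the latest state so the video URL can be appended later, then hand
    # the publish off to the dedicated single-worker thread (QoS 1, non-blocking).
    global _last_state
    _last_state = {"gesture": gesture, "letter": letter, "ts": int(time.time()), "fingers": fingers}
    _publish_executor.submit(
        conn.publish,
        topic=STATE_TOPIC,
        payload=json.dumps(_last_state),
        qos=mqtt.QoS.AT_LEAST_ONCE,
    )

def publish_state_with_video_url(video_url: str) -> None:
    # Re-publish the last cached state with the presigned URL appended,
    # without re-reading servo angles.
    _publish_executor.submit(
        conn.publish,
        topic=STATE_TOPIC,
        payload=json.dumps({**_last_state, "video_url": video_url}),
        qos=mqtt.QoS.AT_LEAST_ONCE,
    )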

This state payload is what Part 2's CDK stack picks up via the IoT Rule, flattens in Lambda, and pushes into AppSync for the frontend to consume.

Threading Model

The agent uses two thread pools and a daemon thread to keep operations non-blocking:

  • agent_executor — ThreadPoolExecutor, 1 worker: runs the Strands agent off the AWS CRT MQTT event loop
  • _publish_executor — ThreadPoolExecutor, 1 worker: publishes state messages without blocking
  • video-capture — daemon thread: background camera frame capture

Graceful Shutdown

On SIGINT or SIGTERM, the agent:

  1. Sets a stop event to exit the main loop
  2. Disables servo torque (write_torque_enable(1, 2)) to release the servos and prevent power draw
  3. Disconnects from MQTT
  4. Logs completion
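A minimal sketch of that shutdown path using Python's signal module. The torque-disable loop mirrors the write_torque_enable call mentioned above, but the servo IDs and argument order are assumptions:

import signal
import threading

stop_event = threading.Event()

def _handle_shutdown(signum, frame):
    stop_event.set()  # the main loop watches this event and exits

signal.signal(signal.SIGINT, _handle_shutdown)
signal.signal(signal.SIGTERM, _handle_shutdown)

# Main loop: block until a shutdown signal arrives.
stop_event.wait()

# Release the servos so they stop drawing holding torque (IDs assumed).
for servo_id in range(1, 9):
    controller.write_torque_enable(servo_id, 0)

conn.disconnect().result()
print("Shutdown complete")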

Technical Challenges & Solutions

Challenge 1: Conversation History Bloat

Problem: Strands Agents maintain conversation history by default. Over time, as hundreds of MQTT messages are processed, the token count grows unboundedly, increasing latency and cost.

Solution: A fresh Agent instance is created for every MQTT message. This discards all prior conversation history, keeping each invocation lightweight. Token usage (input, output, total) is logged after each invocation for monitoring.

Challenge 2: Runaway Tool-Call Loops

Problem: The LLM might enter a loop of calling tools repeatedly — for example, calling fingerspell then deciding to call it again with modified text, then again.

Solution: A custom MaxToolCallsHook implementing the Strands HookProvider interface. It counts tool calls per agent invocation and cancels any tool call beyond the limit of 3. This is injected into the agent via hooks=[MaxToolCallsHook()].

Challenge 3: No Pinky Finger on the Amazing Hand

Problem: The Pollen Robotics Amazing Hand has only 4 fingers (index, middle, ring, thumb) — no pinky. Several ASL letters require specific pinky positions (e.g. I, J, Y).

Solution: ASL letters that require a pinky use the ring finger instead. The 26-letter ASL alphabet is manually mapped to 4-finger servo angle tuples, approximating the correct hand shape with the available fingers.

Challenge 4: Serial Bus Timing

Problem: Sending servo commands too quickly over the serial bus causes missed commands or erratic movement. The Feetech SCS0009 protocol requires time between operations.

Solution: A 0.2ms sleep is inserted between speed writes, and a 5ms sleep is added after both goal positions are set, giving the serial bus time to process each command before the next finger's sequence begins.

Getting Started

GitHub Repository: https://github.com/chiwaichan/strands-agents-amazing-hands

Prerequisites

  • NVIDIA Jetson (AGX Thor or Orin Nano Super) with Python 3.10+
  • Pollen Robotics Amazing Hand connected via USB serial (Waveshare driver board)
  • AWS IoT Core device certificates (certificate, private key, root CA)
  • Amazon Bedrock access enabled for Nova 2 Lite in us-east-1
  • USB camera connected to the Jetson
  • S3 bucket for video storage (default: cc-amazing-video)

Installation

git clone https://github.com/chiwaichan/strands-agents-amazing-hands.git
cd strands-agents-amazing-hands
pip install -e .

Running the Agent

amazing-hand-agent \
--endpoint your-iot-endpoint.iot.us-east-1.amazonaws.com \
--cert certs/device.pem.crt \
--key certs/device.pem.key \
--ca certs/AmazonRootCA1.pem \
--topic the-project/robotic-hand/XIAOAmazingHandRight/action \
--serial-port /dev/amazing-hand-right \
--s3-bucket cc-amazing-video

The agent will connect to IoT Core, subscribe to the action topic, and wait for commands. When a message arrives, it will process it through the Strands Agent, drive the servos, record video, and publish state back.

Summary

This post covered the edge AI agent — the final piece of the voice-to-sign-language translation system:

  • Strands Agents framework with Amazon Nova 2 Lite for tool-use — a fresh agent per MQTT message prevents history bloat, with MaxToolCallsHook limiting calls to 3
  • ASL fingerspelling with the 26-letter alphabet (A-Z), each letter held for 0.8 seconds — the fingerspell tool is decorated with @tool for LLM invocation
  • 8 Feetech SCS0009 servos on 4 fingers controlled via rustypot over serial at 1M baud, with per-servo calibration offsets
  • Video pipeline captures via OpenCV in a background daemon thread, encodes to H.264 MP4 via imageio-ffmpeg, uploads to S3, and includes a 1-hour presigned URL in the final state message
  • Non-blocking threading with 2 thread pools (agent executor off MQTT event loop, state publisher) and a daemon thread for video capture
  • Real-time state publishing to IoT Core after every servo movement — which Part 2's CDK stack routes through Lambda to AppSync, completing the feedback loop to the React frontend in Part 1
  • Graceful shutdown disables servo torque on SIGINT/SIGTERM to release the servos and prevent power draw

Real-Time Voice to Sign Language Translation - Part 2: Cloud Infrastructure with AWS CDK, IoT Core, and AppSync

· 11 min read
Chiwai Chan
Tinkerer

This is Part 2 of a 3-part series covering a real-time voice-to-sign-language translation system. In Part 1, I covered the React frontend that captures speech, processes it with Amazon Nova 2 Sonic, and publishes cleaned sentence text via MQTT. But there is a missing piece — how does the frontend know what the physical hand is actually doing?

The answer is this repository: a small but critical AWS CDK stack that acts as the bridge between the edge device and the React frontend. It routes real-time hand state data from IoT Core to AppSync, enabling the frontend to receive live updates via GraphQL subscriptions — so the 3D hand animation stays synchronised with the physical Amazing Hand — an open-source robotic hand designed by Pollen Robotics and manufactured by Seeed Studio.

The three repositories in the series:

  1. Part 1 - Frontend and Voice Processing (amplify-react-nova-sonic-voice-chat-amazing-hand) — React web app that captures speech, streams to Nova 2 Sonic, publishes cleaned sentence text via MQTT
  2. This post (Part 2) - Cloud Infrastructure (cdk-iot-amazing-hand-streaming) — AWS CDK stack that routes IoT Core messages through Lambda to AppSync for real-time GraphQL subscriptions
  3. Part 3 - Edge AI Agent (strands-agents-amazing-hands) — Strands Agent powered by Amazon Nova 2 Lite on NVIDIA Jetson that translates sentence text to ASL servo commands, drives the Amazing Hand, and publishes state back

Goals

  • Route real-time hand state data from IoT Core MQTT to AppSync using an IoT Rules Engine SQL query and Lambda
  • Flatten nested MQTT finger angle payloads into a flat GraphQL schema for the createHandState mutation
  • Enable the React frontend to receive live hand state updates via AppSync onCreateHandState GraphQL subscriptions
  • Extract the device name dynamically from the MQTT topic path using topic(3) in the IoT Rule SQL
  • Define all infrastructure as code using AWS CDK in TypeScript
  • Integrate with the existing Amplify Gen 2 managed AppSync API and DynamoDB table from Part 1

The Overall System

This diagram shows the complete end-to-end system. Part 2 is the infrastructure highlighted in the middle — the IoT Rule, Lambda, and AppSync connection that enables real-time state feedback from the edge device back to the frontend.

Overall System with Part 2 Highlighted

How Part 2 fits in:

  • Part 1 (Frontend) publishes cleaned sentence text to the-project/robotic-hand/{deviceName}/action and subscribes to AppSync onCreateHandState for live updates
  • Part 3 (Edge Device) receives sentence text, translates it to ASL servo commands via the Strands Agent powered by Amazon Nova 2 Lite, drives the Amazing Hand, and publishes state back to the-project/robotic-hand/{deviceName}/state
  • Part 2 (This stack) listens on the /state topic, transforms the payload, and pushes it into AppSync — completing the real-time feedback loop

Architecture

The stack is intentionally small — a single IoT Rule, a single Lambda function, and the IAM glue to connect them. The AppSync API and DynamoDB table are managed by the Amplify Gen 2 backend in Part 1, so this stack only needs to call the existing createHandState mutation.

Infrastructure Overview

CDK Stack Architecture

Resources created by this CDK stack:

  • IoT Topic Rule (AmazingHandStateStreamingRule) — Matches MQTT messages on the-project/robotic-hand/+/state using SQL SELECT gesture, letter, ts, fingers, video_url, topic(3) AS device_name, then invokes the Lambda function
  • Lambda Function (AmazingHandToAppSyncFunction) — Node.js 18 function that receives the IoT event, flattens the nested fingers object into individual angle fields, and calls the AppSync createHandState GraphQL mutation using the Amplify v6 SDK with API Key authentication
  • Lambda IAM Role — Service role with AWSLambdaBasicExecutionRole for CloudWatch Logs and an inline policy granting appsync:GraphQL on the AppSync API
  • Lambda Permission — Allows the IoT service (iot.amazonaws.com) to invoke the Lambda function

Resources managed externally (by Amplify Gen 2 in Part 1):

  • AppSync API — GraphQL API with HandState model, createHandState mutation, and onCreateHandState subscription
  • DynamoDB Table — HandState table with auto-generated resolvers from the @model directive

Data Flow

Interactive Sequence Diagram

IoT Core to AppSync Data Flow

From edge device MQTT publish to React real-time subscription update

  1. 0 ms: the NVIDIA Jetson publishes to the-project/robotic-hand/XIAOAmazingHandRight/state (nested fingers payload)
  2. 1 ms: the topic matches the-project/robotic-hand/+/state
  3. 2 ms: SQL: SELECT gesture, letter, ts, fingers, video_url, topic(3) AS device_name (extracts device_name from the topic)
  4. 5 ms: Lambda invoked with the enriched event (device_name added)
  5. 6 ms: validate that device_name exists
  6. 7 ms: flatten fingers.index.angle_1 → indexAngle1 (x8 fields), defaulting to 0 for angles and null for optional fields
  7. 10 ms: createHandState mutation (API Key auth, Amplify v6 SDK)
  8. 15 ms: persist to DynamoDB (HandState table)
  9. 20 ms: onCreateHandState subscription push (real-time WebSocket)
  10. 21 ms: React app updates the 3D hand animation, letter history, and video feed

Total: 10 steps across 6 components (NVIDIA Jetson, IoT Core, IoT Rules Engine, Lambda, AppSync, React App)
Edge device state → React UI update in ~20 ms

How it works

IoT Rules Engine SQL Query

The IoT Rule is the entry point. It listens on the MQTT topic pattern the-project/robotic-hand/+/state where + is a single-level wildcard matching any device name (e.g. XIAOAmazingHandRight).

The SQL query (using AWS IoT SQL version 2016-03-23) selects specific fields from the MQTT payload and enriches them with metadata extracted from the topic path:

SELECT gesture, letter, ts, fingers, video_url, topic(3) AS device_name
FROM 'the-project/robotic-hand/+/state'
  • gesture — The type of sign being performed (e.g. "fingerspell")
  • letter — The current letter being signed (e.g. "E")
  • ts — Unix timestamp in seconds
  • fingers — Nested JSON object containing servo angles for all four fingers, each with two joint angles
  • video_url — Optional pre-signed S3 URL for video of the hand in action
  • topic(3) AS device_name — Extracts the 3rd segment of the MQTT topic path as the device name, so the Lambda does not need to parse the topic itself

MQTT Payload Format

The edge device publishes hand state messages in this format:

{
  "gesture": "fingerspell",
  "letter": "E",
  "ts": 1770550850,
  "fingers": {
    "index": { "angle_1": 45, "angle_2": -45 },
    "middle": { "angle_1": 45, "angle_2": -45 },
    "ring": { "angle_1": 45, "angle_2": -45 },
    "thumb": { "angle_1": 60, "angle_2": -60 }
  },
  "video_url": "https://cc-amazing-video.s3.amazonaws.com/videos/hand_20260228.mp4?..."
}

The fingers object uses a nested structure with angle_1 and angle_2 per finger — representing the two joints of each finger on the Amazing Hand. This nested format is natural for the edge device to produce but needs to be flattened for the GraphQL schema.

Lambda Function — Payload Transformation

The Lambda function (AmazingHandToAppSyncFunction) receives the IoT event with the enriched fields from the SQL query. Its job is to:

  1. Validate that the device_name field exists (required for the GraphQL mutation)
  2. Flatten the nested fingers object into individual angle fields:
     • fingers.index.angle_1 → indexAngle1
     • fingers.index.angle_2 → indexAngle2
     • fingers.middle.angle_1 → middleAngle1
     • fingers.middle.angle_2 → middleAngle2
     • fingers.ring.angle_1 → ringAngle1
     • fingers.ring.angle_2 → ringAngle2
     • fingers.thumb.angle_1 → thumbAngle1
     • fingers.thumb.angle_2 → thumbAngle2
  3. Default missing angle values to 0, missing gesture/letter/videoUrl to null, and missing timestamp to Math.floor(Date.now() / 1000)
  4. Call the AppSync createHandState mutation using the Amplify v6 SDK configured with API Key authentication
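The deployed function is Node.js 18, but the transformation itself is small. Purely to illustrate the mapping, here is the same flattening logic as a short Python sketch, with field names taken from the table above and the default behaviour described in step 3:

import time

def flatten_hand_state(event: dict) -> dict:
    # The IoT Rule delivers the selected payload fields plus device_name from topic(3).
    fingers = event.get("fingers", {})

    def angle(finger: str, joint: int) -> float:
        return fingers.get(finger, {}).get(f"angle_{joint}", 0)  # missing angles default to 0

    return {
        "deviceName": event["device_name"],        # required; validated before anything else
        "gesture": event.get("gesture"),           # null when missing
        "letter": event.get("letter"),
        "videoUrl": event.get("video_url"),
        "timestamp": event.get("ts", int(time.time())),
        "indexAngle1": angle("index", 1),   "indexAngle2": angle("index", 2),
        "middleAngle1": angle("middle", 1), "middleAngle2": angle("middle", 2),
        "ringAngle1": angle("ring", 1),     "ringAngle2": angle("ring", 2),
        "thumbAngle1": angle("thumb", 1),   "thumbAngle2": angle("thumb", 2),
    }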

The Lambda uses the Amplify v6 SDK (aws-amplify@^6.0.0) to call AppSync, configured via environment variables:

Amplify.configure({
  API: {
    GraphQL: {
      endpoint: process.env.APPSYNC_API_URL,
      region: process.env.AWS_REGION,
      defaultAuthMode: 'apiKey',
      apiKey: process.env.APPSYNC_API_KEY
    }
  }
});

GraphQL Mutation

The Lambda calls this mutation to persist the hand state and trigger the real-time subscription:

mutation CreateHandState($input: CreateHandStateInput!) {
  createHandState(input: $input) {
    id
    deviceName
    gesture
    letter
    indexAngle1
    indexAngle2
    middleAngle1
    middleAngle2
    ringAngle1
    ringAngle2
    thumbAngle1
    thumbAngle2
    timestamp
    videoUrl
    createdAt
  }
}

When AppSync receives this mutation, two things happen:

  1. The hand state record is persisted to DynamoDB via the auto-generated @model resolver
  2. The onCreateHandState subscription is triggered, pushing the new record to all subscribed clients — including the React frontend from Part 1, which uses this data to update the 3D hand animation, signed letter history, and video feed in real-time

CDK Stack Definition

The entire stack is defined in approximately 74 lines of TypeScript. The stack accepts the AppSync API URL, API key, and API ID as props, which are injected via environment variables during deployment:

interface IoTStreamingStackProps extends cdk.StackProps {
  appSyncApiUrl: string;
  appSyncApiKey: string;
  appSyncApiId: string;
}

The stack creates the Lambda function with the AppSync connection details as environment variables, grants it appsync:GraphQL permissions scoped to the specific API, creates the IoT Topic Rule with the SQL query, and grants IoT permission to invoke the Lambda.

Two stack outputs are exported for reference:

  • AmazingHandIoTRuleArn — The IoT Rule ARN
  • AmazingHandLambdaFunctionArn — The Lambda function ARN

CI/CD Pipeline

The project includes a GitHub Actions workflow (.github/workflows/aws-cdk-deploy.yml) that automates deployment:

  1. Triggers on pushes to main and dev branches
  2. Authenticates using OIDC (no static AWS credentials stored in GitHub)
  3. Automatically discovers the AppSync configuration by:
    • Reading the Amplify App ID from SSM Parameter Store (/iot/amplify/amazinghand)
    • Finding the Amplify data CloudFormation stack
    • Extracting the AppSync API ID from CloudFormation stack resources, then querying the AppSync API directly for the URL and API key
  4. Runs cdk deploy with the discovered values

This means the stack automatically stays connected to the correct AppSync API without manual configuration.
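The workflow itself runs AWS CLI commands in GitHub Actions, but the discovery steps map onto boto3 calls one-to-one. The sketch below is illustrative only; the stack-name filter and the ARN handling are assumptions about Amplify Gen 2 naming, while the parameter name comes from the list above:

import boto3

ssm = boto3.client("ssm")
cfn = boto3.client("cloudformation")
appsync = boto3.client("appsync")

# 1. Amplify App ID from SSM Parameter Store.
app_id = ssm.get_parameter(Name="/iot/amplify/amazinghand")["Parameter"]["Value"]

# 2. Find the Amplify data stack and extract the AppSync API ID from its resources.
#    (The "data" stack-name filter is an assumption about Amplify Gen 2 naming.)
stacks = cfn.describe_stacks()["Stacks"]
data_stack = next(s["StackName"] for s in stacks
                  if app_id in s["StackName"] and "data" in s["StackName"])
resources = cfn.describe_stack_resources(StackName=data_stack)["StackResources"]
physical_id = next(r["PhysicalResourceId"] for r in resources
                   if r["ResourceType"] == "AWS::AppSync::GraphQLApi")
api_id = physical_id.split("/")[-1]  # trailing segment works whether this is the ARN or the apiId

# 3. Query AppSync directly for the GraphQL URL and an API key.
api = appsync.get_graphql_api(apiId=api_id)["graphqlApi"]
api_url = api["uris"]["GRAPHQL"]
api_key = appsync.list_api_keys(apiId=api_id)["apiKeys"][0]["id"]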

Technical Challenges & Solutions

Challenge 1: Flattening Nested IoT Payloads for GraphQL

Problem: The edge device publishes finger angles in a nested JSON structure (fingers.index.angle_1), but the AppSync GraphQL schema uses flat fields (indexAngle1). The IoT Rules Engine SQL can select nested objects but cannot rename nested fields into flat ones.

Solution: The Lambda function handles the transformation. It receives the nested fingers object from the IoT Rule and manually flattens each field with safe defaults (0 for missing angles, null for optional fields). This keeps the IoT Rule SQL simple and the edge device payload natural.

Challenge 2: Connecting to Amplify-Managed AppSync

Problem: The AppSync API is managed by Amplify Gen 2 in Part 1's repository, not by this CDK stack. The API URL, API key, and API ID change between environments and deployments.

Solution: The CI/CD pipeline automatically discovers the AppSync configuration at deploy time by reading from SSM Parameter Store and CloudFormation stack outputs. For local development, the values are passed via environment variables in deploy.sh. The CDK stack accepts them as typed props, keeping the infrastructure code clean.

Challenge 3: Extracting Device Name from MQTT Topic

Problem: The device name is part of the MQTT topic path (the-project/robotic-hand/XIAOAmazingHandRight/state), not the message payload. The Lambda needs it to set the deviceName field in the GraphQL mutation.

Solution: The IoT Rules Engine SQL function topic(3) extracts the 3rd segment of the topic path and aliases it as device_name. This is passed to the Lambda as part of the event, so the Lambda does not need to parse the topic itself. The wildcard + in the topic filter means this works for any device name without configuration changes.

Getting Started

GitHub Repository: https://github.com/chiwaichan/cdk-iot-amazing-hand-streaming

Prerequisites

  • Node.js 18+
  • AWS CDK CLI installed (npm install -g aws-cdk)
  • AWS CLI configured with credentials
  • An existing AppSync API with the HandState schema (deployed via Part 1's Amplify Gen 2 backend)

Deployment Steps

  1. Get AppSync configuration from Part 1's Amplify deployment:

export APPSYNC_API_ID=your_api_id
export APPSYNC_API_URL=https://your-api-id.appsync-api.us-east-1.amazonaws.com/graphql
export APPSYNC_API_KEY=your_api_key

  2. Clone and Install:

git clone https://github.com/chiwaichan/cdk-iot-amazing-hand-streaming.git
cd cdk-iot-amazing-hand-streaming
npm install
cd lambda/amazing-hand-to-appsync && npm install && cd ../..

  3. Deploy:

./deploy.sh

This bootstraps CDK (if needed) and deploys the stack with the AppSync configuration.

What's Next

In Part 3, I will cover the edge AI agent (strands-agents-amazing-hands) — a Strands Agent powered by Amazon Nova 2 Lite running on an NVIDIA Jetson that subscribes to the MQTT sentence text published by the frontend in Part 1, translates it into physical servo movements on the Pollen Robotics Amazing Hand for ASL fingerspelling, records video, and publishes hand state back to IoT Core — which this Part 2 stack routes through to AppSync for the frontend to consume.

Summary

This post covered the cloud infrastructure layer of the voice-to-sign-language translation system:

  • IoT Rules Engine listens on the-project/robotic-hand/+/state and extracts device name from the topic path using topic(3)
  • Lambda function flattens nested finger angle payloads (fingers.index.angle_1 → indexAngle1) and calls the AppSync createHandState GraphQL mutation
  • AppSync persists to DynamoDB and broadcasts onCreateHandState subscriptions to connected React clients in real-time
  • CDK stack is intentionally small (~74 lines) — it creates only the IoT Rule, Lambda, and IAM glue, relying on the Amplify-managed AppSync API from Part 1
  • CI/CD pipeline automatically discovers AppSync configuration from SSM Parameter Store, CloudFormation stack resources, and direct AppSync API calls — no manual configuration needed
  • The stack completes the real-time feedback loop: edge device publishes state → IoT Core → Lambda → AppSync → React frontend updates 3D hand animation

Real-Time Voice to Sign Language Translation with Amazon Nova 2 Sonic and Pollen Robotics Amazing Hand - Part 1: Frontend and Voice Processing

· 16 min read
Chiwai Chan
Tinkerer

Pollen Robotics Amazing Hand

This is Part 1 of a 3-part series covering a real-time voice-to-sign-language translation system. The complete solution spans three separate repositories, each responsible for a distinct layer of the system:

  1. This post (Part 1) - Frontend and Voice Processing — The React web app that captures speech, streams it to Amazon Nova 2 Sonic on Bedrock, publishes cleaned sentence text via MQTT, and renders a real-time 3D hand visualisation
  2. Part 2 - Cloud Infrastructure (cdk-iot-amazing-hand-streaming) — The AWS CDK stack that routes IoT Core messages through Lambda to AppSync, enabling real-time GraphQL subscriptions between the edge device and the frontend
  3. Part 3 - Edge AI Agent (strands-agents-amazing-hands) — The Strands Agent powered by Amazon Nova 2 Lite running on an NVIDIA Jetson that receives MQTT sentence text, translates it to ASL servo commands, drives the Pollen Robotics Amazing Hand for fingerspelling, and streams video and state back

In this post, I focus on how speech enters the system, how Amazon Nova 2 Sonic processes and cleans up the spoken input, and how the frontend publishes cleaned sentence text over MQTT — setting the stage for Parts 2 and 3.

The key idea is that Nova 2 Sonic is not used as a chatbot here — it is configured as a dumb speech-to-text relay pipe that cleans up grammar, removes filler words like "um" and "uh", translates non-English speech to English, and forwards the cleaned text via a forced tool invocation (send_text) on every single utterance. The frontend then publishes the cleaned sentence text to AWS IoT Core over MQTT for the edge device to translate into ASL servo commands.

Goals

  • Capture speech in the browser and stream it to Amazon Nova 2 Sonic via bidirectional streaming — no backend servers required
  • Use Nova 2 Sonic's forced tool use (send_text) with toolChoice: { any: {} } to relay cleaned text on every utterance, not as a conversational chatbot
  • Publish cleaned sentence text to AWS IoT Core over MQTT for the edge device to translate into ASL servo commands
  • Subscribe to real-time hand state updates via GraphQL (AppSync) and synchronise a 3D Three.js hand animation with the physical hand
  • Use AWS Amplify Gen 2 for infrastructure-as-code backend definition in TypeScript (Cognito, AppSync, IAM policies)
  • Display a 3-column UI with signed letter history, 3D hand animation with video feed, and live transcript with microphone controls

The Overall System

The end-to-end system takes spoken words from a browser microphone all the way through to physical ASL fingerspelling on an Amazing Hand — an open-source robotic hand designed by Pollen Robotics and manufactured by Seeed Studio — passing through cloud AI, IoT messaging, and an edge AI agent along the way.

Overall System Architecture

System Components:

  1. React Frontend (this post) - Captures speech, streams to Bedrock, publishes cleaned sentence text to MQTT, renders 3D hand animation synchronised with the physical hand via GraphQL subscriptions
  2. Cloud Infrastructure (Part 2) - AWS CDK stack with IoT Core rules that route MQTT messages through Lambda to AppSync, enabling real-time GraphQL subscriptions between the edge device and the frontend
  3. Edge AI Agent (Part 3) - Strands Agent powered by Amazon Nova 2 Lite on an NVIDIA Jetson that receives MQTT sentence text, translates it to ASL servo commands, drives the Amazing Hand for fingerspelling letter by letter, records video, and publishes hand state back via IoT Core

Interactive Sequence Diagram

End-to-End Voice to Sign Language Flow

From user speech to ASL fingerspelling on the Amazing Hand

  1. 0.0 s: the user clicks the microphone and speaks naturally
  2. 0.1 s: AudioWorklet: capture → resample → PCM → Base64
  3. 0.2 s: InvokeModelWithBidirectionalStream streams audio chunks to Bedrock in real time
  4. 2.0 s: send_text tool invocation returns the cleaned English text (filler words removed, translated to English)
  5. 2.1 s: MQTT publish: { id, sentence, ts } (plain text, no servo data)
  6. 2.2 s: the Jetson receives the sentence text via its MQTT subscription
  7. 2.5 s: Strands Agent + Nova 2 Lite map the sentence to ASL letters (LLM tool use + ASL_ALPHABET table)
  8. 3.0 s: servos driven for ASL fingerspelling, letter by letter via serial
  9. 4.0 s: hand state published (servo angles + video)
  10. 4.1 s: GraphQL subscription delivers the hand state update via AppSync
  11. 4.2 s: 3D hand animation, video feed, and letter history updated
  12. 4.5 s: audioOutput voice confirmation ("Sent")
  13. 4.6 s: the user hears the confirmation and sees the 3D hand and video

Total: 13 steps across 6 components (3 repos): User, Browser (React App), Amazon Bedrock, AWS IoT Core, NVIDIA Jetson, Amazing Hand
Speech → Sentence → Edge AI → ASL Fingerspelling

Architecture

The frontend is built with React 19, Vite 7, and TypeScript 5.9. The application is structured around a main VoiceChat.tsx component that orchestrates four custom hooks, three utility modules, and a Three.js-based hand animation component.

Application UI

React Hooks Architecture

React Hooks Architecture

Components:

  • VoiceChat.tsx - Main UI component with a 3-column responsive layout. Coordinates all hooks, renders the transcript feed, microphone controls, signed letter history, hand state data grid, video feed, and 3D animation. Collapses to a single column on screens under 1100px
  • useNovaSonic - Core hook managing the Bedrock bidirectional stream with InvokeModelWithBidirectionalStreamCommand. Handles authentication via Cognito, the Nova 2 Sonic event protocol (session/prompt/content lifecycle), the async generator input stream with backpressure, and send_text tool use responses. The tool is configured with toolChoice: { any: {} } to force tool invocation on every utterance
  • useAudioRecorder - Captures microphone input using an inline AudioWorklet running in a separate thread. Accumulates 2048 samples per buffer, resamples from the device sample rate (typically 48kHz) to 16kHz, converts Float32 to PCM16, and Base64 encodes for transmission
  • useAudioPlayer - Provides audio playback capability (FIFO queue of AudioBuffers at 24kHz). In the current implementation, Nova 2 Sonic's audio output is intentionally discarded since only the cleaned text via tool use is needed — the hook is available but not actively fed audio data
  • useHandStream - Subscribes to AppSync GraphQL onCreateHandState subscription filtered by device name. Fetches the last 20 hand states on mount and maintains a real-time list of 8 servo angles (thumb, index, middle, ring — each with two joint angles), letters, and video URLs
  • iotPublisher.ts - Publishes MQTT messages to the topic the-project/robotic-hand/XIAOAmazingHandRight/action. Publishes cleaned sentence text as { id, sentence, ts } payloads and handles IoT policy attachment to the Cognito identity
  • HandAnimation.tsx - Procedurally generated 3D robotic hand using Three.js with no external 3D models. The palm is built with LatheGeometry (curved cup shape), and each finger has a dual-joint rig (proximal + distal) with synchronised linkage. Uses WebGL rendering with PCFSoftShadowMap shadows, OrbitControls, and industrial-style materials with metalness/roughness

Authentication Flow

The frontend needs temporary AWS credentials to call both Bedrock (for Nova 2 Sonic streaming) and IoT Core (for MQTT publishing). No long-term credentials are stored in the browser.

Authentication Flow

Authentication Layers:

  • Cognito User Pool - Handles user registration and login with email/password. Configured via Amplify Gen 2 defineAuth with preferredUsername as an optional attribute
  • Cognito Identity Pool - Exchanges JWT tokens from the User Pool for temporary AWS credentials (access key, secret key, session token). Credentials are automatically refreshed by the Amplify SDK before expiration
  • IAM Role - The authenticated user role grants two sets of permissions: bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream scoped to amazon.nova-2-sonic-v1:0 in us-east-1, and iot:Publish, iot:Connect, iot:DescribeEndpoint, and iot:AttachPolicy for IoT Core MQTT access. An IoT Core policy (RoboticHandPolicy) is also attached to the Cognito identity at runtime to authorise MQTT publishing to the the-project/robotic-hand/* topic pattern

How it works

Audio Capture and Processing

The browser captures audio from the microphone using the Web Audio API and an AudioWorklet running in a separate thread. The AudioWorklet avoids main-thread blocking and processes audio in real-time with echo cancellation and noise suppression enabled.

Audio Processing Pipeline

Input Processing (Recording):

  1. Microphone - Browser calls getUserMedia() to capture audio at the device's native sample rate (typically 48kHz) with mono channel, echo cancellation, and noise suppression enabled
  2. AudioWorklet - An inline AudioCaptureProcessor (loaded as a Blob URL to avoid CORS issues) runs in a separate thread. It accumulates samples in a buffer and posts a Float32Array message to the main thread every 2048 samples
  3. Resample - Linear interpolation resampling converts from 48kHz to 16kHz (Nova 2 Sonic's required input rate). The ratio is calculated dynamically from the actual device sample rate
  4. Float32 to PCM16 - Floating point samples in the range [-1, 1] are converted to 16-bit signed integers. Negative values are multiplied by 0x8000 and positive values by 0x7FFF
  5. Base64 Encode - The binary PCM data is encoded to Base64 text for JSON transmission to Bedrock via a custom uint8ArrayToBase64() utility that iterates bytes into a binary string and then calls btoa()
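The browser implementation lives in the TypeScript AudioWorklet, but the resampling and PCM16 math it performs can be sketched in a few lines; this Python/numpy version is purely illustrative of steps 3 and 4 above:

import numpy as np

def resample_and_encode(samples: np.ndarray, device_rate: int = 48000, target_rate: int = 16000) -> bytes:
    # Linear-interpolation resample from the device rate down to Nova 2 Sonic's 16 kHz.
    ratio = device_rate / target_rate
    out_len = int(len(samples) / ratio)
    positions = np.arange(out_len) * ratio
    resampled = np.interp(positions, np.arange(len(samples)), samples)

    # Float32 in [-1, 1] -> 16-bit signed integers: negatives scale by 0x8000, positives by 0x7FFF.
    pcm16 = np.where(resampled < 0, resampled * 0x8000, resampled * 0x7FFF).astype(np.int16)
    return pcm16.tobytes()  # this byte string is what gets Base64-encoded for the audioInput event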

Amazon Nova 2 Sonic Bidirectional Streaming

The heart of the system is the bidirectional stream to Amazon Nova 2 Sonic using InvokeModelWithBidirectionalStreamCommand. Nova 2 Sonic is configured not as a chatbot, but as a speech relay that cleans up input and forwards it via forced tool use.

Bidirectional Streaming Protocol

Input Events (sent to Bedrock):

  • sessionStart - Initialises the session with inference configuration: maxTokens: 1024, topP: 0.9, temperature: 0.7
  • promptStart - Configures audio output format: audio/lpcm at 24kHz, 16-bit, mono, voice matthew, Base64 encoding. Also defines the send_text tool with toolChoice: { any: {} } to force tool invocation on every utterance
  • contentStart (TEXT) - Sends the system prompt that instructs Nova 2 Sonic to act as a "dumb speech-to-text relay pipe" — clean up grammar, remove filler words, translate non-English to English, call send_text with the cleaned text, then respond with only "Sent"
  • contentStart (AUDIO) - Marks the beginning of audio input content
  • audioInput - Streams Base64-encoded 16kHz PCM audio chunks in real-time as the user speaks
  • contentEnd / promptEnd / sessionEnd - Lifecycle events to terminate content blocks, prompts, and sessions

Output Events (received from Bedrock):

  • textOutput - Returns transcribed user speech and the generated AI response text ("Sent")
  • toolUse - The send_text tool invocation containing the cleaned text in { sentence: "..." } format. This is the primary output — the frontend publishes the sentence to MQTT for the edge device to translate into ASL servo commands
  • audioOutput - Synthesised voice response as Base64-encoded 24kHz PCM. In the current implementation, audio output is intentionally discarded since only the cleaned text via tool use is needed

Tool Use — send_text:

  • The tool is defined with toolChoice: { any: {} }, which forces Nova 2 Sonic to call it on every single utterance without exception
  • The tool accepts a single sentence parameter — the cleaned-up, well-formed sentence
  • When the tool invocation arrives, the frontend extracts the sentence and publishes it as { id, sentence, ts } to IoT Core via MQTT using publishSentence(). The edge device then translates the sentence into ASL servo commands
  • A JSON tool result ({ "status": "success", "sentence": "..." }) is sent back to Nova 2 Sonic to complete the tool use cycle

Publishing Sentences via MQTT

Once the cleaned sentence is extracted from the send_text tool invocation, iotPublisher.ts publishes it to the MQTT topic the-project/robotic-hand/XIAOAmazingHandRight/action via AWS IoT Core.

MQTT Command Flow

The payload is a simple JSON object containing:

  • id - A UUID for the message
  • sentence - The cleaned sentence text from Nova 2 Sonic
  • ts - Unix timestamp in seconds

The edge device (covered in Part 3) receives this sentence and is responsible for translating it into ASL servo commands and driving the physical hand.

Interactive Sequence Diagram

MQTT Command Pipeline

From Nova 2 Sonic text output to IoT Core sentence publish

  1. 0 ms: send_text tool invocation arrives with { sentence: "hello w..." } (cleaned text from speech)
  2. 1 ms: onToolUse callback fires with the tool event
  3. 2 ms: parse the event.content JSON and extract the sentence
  4. 3 ms: publishSentence(sentence)
  5. 4 ms: build the payload { id: uuid, sentence, ts } (plain text, no servo data)
  6. 5 ms: MQTT publish to the .../action topic
  7. 6 ms: tool result { "status": "success", "sentence": "..." } sent back to Nova 2 Sonic

Total: 7 steps across 5 components (Nova 2 Sonic, useNovaSonic, VoiceChat.tsx, iotPublisher.ts, AWS IoT Core)
Sentence publish pipeline: Speech → Nova 2 Sonic cleanup → send_text tool use → publishSentence → MQTT to edge device

The browser console logs the performance breakdown for each utterance through the voice-to-IoT pipeline. In this example, the end-to-end time from speech detection to IoT publish is approximately 2.9 seconds — with the majority spent on Speech-to-Text (2228ms) as Nova 2 Sonic processes the audio, followed by Text-to-Tool extraction (423ms) and IoT Publish (243ms):

Voice-to-IoT Pipeline Performance

Real-Time Hand State via GraphQL Subscription

The frontend subscribes to AppSync's onCreateHandState GraphQL subscription to receive real-time updates from the edge device. Each update includes the device name, current letter being signed, all 8 servo angles (thumb, index, middle, ring — each with two joint angles), a timestamp, and an optional video URL.

On mount, the hook fetches the last 20 hand states to populate the UI immediately. New states arrive in real-time as the edge device publishes them back through IoT Core → Lambda → AppSync. The data is displayed in both the signed letter history panel and the raw hand state data grid.

3D Hand Visualisation

The HandAnimation.tsx component renders a procedurally generated 3D robotic hand using Three.js — no external 3D models are loaded. The entire hand is built from code:

  • The palm uses LatheGeometry to create a curved cup shape that tapers from a narrow wrist (radius 0.18) to wide knuckles (radius 0.56)
  • Each finger has a dual-joint rig with proximal and distal segments, knuckle joints, linkage bars, and fingertips. The thumb is mounted on the side of the palm and rotates on the Z-axis, while the index, middle, and ring fingers are mounted on the front rim and rotate on the X-axis
  • The distal joint automatically follows the proximal joint at 50% of its angle, simulating a synchronised linkage mechanism
  • Materials use industrial-style metalness/roughness: dark gray frame (0x2a2a2a), light gray joints (0x888888), and darker gray tips (0x555555)
  • The scene includes PCFSoftShadowMap shadows, ambient lighting (0.8), directional light (1.0), and a fill light (0.4), with OrbitControls for interactive zoom and rotation

Servo angle updates from the GraphQL subscription drive the finger rotations in real-time, keeping the 3D animation synchronised with the physical Amazing Hand.

Audio Playback

The useAudioPlayer hook provides a FIFO queue-based audio playback capability for Web Audio AudioBuffer objects at 24kHz. However, in the current implementation, Nova 2 Sonic's audio output is intentionally discarded — the onAudioOutput callback is set to a no-op since only the cleaned text via the send_text tool use is needed to drive the MQTT pipeline. The hook remains available for future use if audio feedback is desired.

Technical Challenges & Solutions

Challenge 1: AudioWorklet CORS Issues

Problem: Loading an AudioWorklet processor from an external JavaScript file fails with CORS errors on some deployments, particularly when using Amplify Hosting.

Solution: Inline the AudioWorklet code as a Blob URL. The processor code is defined as a string, converted to a Blob with type application/javascript, and loaded via URL.createObjectURL(). The object URL is revoked after the module is added:

const blob = new Blob([audioWorkletCode], { type: 'application/javascript' });
const workletUrl = URL.createObjectURL(blob);
await audioContext.audioWorklet.addModule(workletUrl);
URL.revokeObjectURL(workletUrl);

Challenge 2: Forcing Tool Use on Every Utterance

Problem: Nova 2 Sonic is a conversational model by default — it wants to chat and respond naturally. But in this system, it needs to act as a pure relay, forwarding every single utterance as cleaned text without adding commentary or refusing any messages.

Solution: A combination of system prompt engineering and forced tool use. The system prompt explicitly instructs Nova 2 Sonic to act as a "dumb speech-to-text relay pipe" and never add commentary. The send_text tool is configured with toolChoice: { any: {} }, which forces the model to invoke a tool on every response. After calling the tool, it is instructed to only respond with "Sent".

Challenge 3: Keeping MQTT Payloads Simple

Problem: The system needs to transmit the user's intent from the frontend to the edge device reliably via IoT Core MQTT.

Solution: Rather than translating text to servo commands on the frontend (which would require large payloads with many servo poses), the frontend publishes only the cleaned sentence text as a compact { id, sentence, ts } JSON payload. The edge device is responsible for translating the sentence into ASL servo commands, keeping the MQTT messages small and the frontend simple.

Getting Started

GitHub Repository: https://github.com/chiwaichan/amplify-react-nova-sonic-voice-chat-amazing-hand

Prerequisites

  • Node.js 18+
  • AWS Account with Amazon Nova 2 Sonic model access enabled in Bedrock (us-east-1 region)
  • AWS CLI configured with credentials

Deployment Steps

  1. Enable Nova 2 Sonic in Bedrock Console (us-east-1 region)

  2. Clone and Install:

git clone https://github.com/chiwaichan/amplify-react-nova-sonic-voice-chat-amazing-hand.git
cd amplify-react-nova-sonic-voice-chat-amazing-hand
npm install
  3. Start Amplify Sandbox:

npx ampx sandbox

  4. Run Development Server:

npm run dev

  5. Open Application: Navigate to http://localhost:5173, create an account, and start talking. Note that the full system requires Parts 2 and 3 to be deployed for the physical hand to respond — but the frontend will still capture speech, process it through Nova 2 Sonic, and display the 3D hand animation independently.

What's Next

In Part 2, I will cover the cloud infrastructure layer — the AWS CDK stack (cdk-iot-amazing-hand-streaming) that routes IoT Core MQTT messages through Lambda to AppSync. This is the bridge that enables real-time GraphQL subscriptions, allowing the frontend to receive hand state updates from the edge device as they happen.

In Part 3, I will cover the edge AI agent (strands-agents-amazing-hands) — a Strands Agent powered by Amazon Nova 2 Lite running on an NVIDIA Jetson that subscribes to the MQTT sentence text published by this frontend, translates it into physical servo movements on the Pollen Robotics Amazing Hand for ASL fingerspelling, records video of the hand in action, and publishes state back through IoT Core.

Summary

This post covered the frontend and voice processing layer of a real-time voice-to-sign-language translation system:

  • Amazon Nova 2 Sonic is used not as a chatbot but as a speech relay — configured via system prompt and toolChoice: { any: {} } forced send_text tool use to clean up grammar, remove filler words, translate to English, and forward every utterance as text
  • Audio pipeline captures at 48kHz via AudioWorklet, resamples to 16kHz, converts to PCM16 Base64 for Bedrock input. Nova 2 Sonic's audio output is intentionally discarded since only the cleaned text is needed
  • MQTT publishing sends cleaned sentence text as { id, sentence, ts } to AWS IoT Core for the edge device to translate into ASL servo commands
  • Real-time feedback via GraphQL subscriptions keeps the 3D Three.js hand animation synchronised with the physical Amazing Hand using 8 servo angles (thumb, index, middle, ring — each with two joints)
  • Fully serverless frontend using AWS Amplify Gen 2 with Cognito authentication, no backend servers — direct browser-to-Bedrock and browser-to-IoT Core communication