
5 posts tagged with "Bedrock"


Real-Time Voice to Sign Language Translation - Part 3: Edge AI Agent with Strands Agents on NVIDIA Jetson

· 13 min read
Chiwai Chan
Tinkerer

This is Part 3 of a 3-part series covering a real-time voice-to-sign-language translation system. In Part 1, I covered the React frontend that captures speech, processes it with Amazon Nova 2 Sonic, and publishes cleaned sentence text via MQTT. In Part 2, I covered the AWS CDK stack that routes IoT Core messages through Lambda to AppSync for real-time GraphQL subscriptions.

NVIDIA Jetson AGX Thor Developer Kit

This post covers the final piece — the edge AI agent that actually makes the physical hand move. It is a Strands Agent running on an NVIDIA Jetson that subscribes to MQTT commands from the frontend, uses Amazon Nova 2 Lite to invoke the fingerspell tool, drives the Pollen Robotics Amazing Hand's Feetech SCS0009 servos for ASL fingerspelling letter by letter, records video of the hand in action, uploads it to S3, and publishes hand state back to IoT Core — which Part 2's infrastructure routes through to the frontend via AppSync.

The three repositories in the series:

  1. Part 1 - Frontend and Voice Processing (amplify-react-nova-sonic-voice-chat-amazing-hand) — React web app that captures speech, streams to Nova 2 Sonic, publishes cleaned sentence text via MQTT
  2. Part 2 - Cloud Infrastructure (cdk-iot-amazing-hand-streaming) — AWS CDK stack that routes IoT Core messages through Lambda to AppSync
  3. This post (Part 3) - Edge AI Agent (strands-agents-amazing-hands) — Strands Agent powered by Amazon Nova 2 Lite on NVIDIA Jetson that translates sentence text to ASL servo commands, drives the Amazing Hand, and publishes state back

Goals

  • Receive MQTT commands from the React frontend (plain text or JSON with sentence field) and drive the Amazing Hand servos for ASL fingerspelling
  • Use the Strands Agents framework with Amazon Nova 2 Lite (us.amazon.nova-2-lite-v1:0) to invoke the fingerspell tool — the LLM passes the incoming text verbatim to the tool for letter-by-letter ASL spelling
  • Fingerspell text using the 26-letter ASL alphabet (A-Z), with each letter held for 0.8 seconds and spaces adding a 0.4-second pause
  • Control 8 Feetech SCS0009 servos (4 fingers x 2 joints) on the Pollen Robotics Amazing Hand via serial bus at 1M baud using the rustypot library
  • Record video of the hand via OpenCV during each fingerspelling sequence, encode to H.264 MP4 via imageio-ffmpeg, upload to S3, and include a presigned URL in the state message
  • Publish real-time hand state (servo angles, letter, video URL) to IoT Core over MQTT — which Part 2's CDK stack routes to AppSync for the frontend to consume
  • Authenticate to AWS IoT Core using mTLS with X.509 device certificates
  • Create a fresh agent instance per MQTT message to prevent conversation history accumulation and unbounded token growth
  • Handle graceful shutdown with servo torque disable on SIGINT/SIGTERM

The Overall System

This diagram shows the complete end-to-end system. Part 3 is the edge device highlighted on the right — the NVIDIA Jetson running the Strands Agent that controls the Amazing Hand.

Overall System with Part 3 Highlighted

How Part 3 fits in:

  • Part 1 (Frontend) publishes cleaned sentence text to the-project/robotic-hand/{deviceName}/action via MQTT
  • Part 3 (This agent) subscribes to the /action topic, processes the command through the Strands Agent, drives the servos, records video, and publishes state back to /state
  • Part 2 (Infrastructure) picks up the /state messages and routes them through Lambda to AppSync, where the frontend receives them via GraphQL subscriptions

Architecture

The agent is a Python application built on the Strands Agents framework. It runs as a long-lived MQTT listener on the NVIDIA Jetson, creating a fresh agent instance for each incoming message to keep memory bounded.

Agent Architecture

Components:

  • MQTT Listener (agent.py) — Subscribes to the action topic, parses incoming messages (plain text or JSON), and submits each action to a single-threaded agent executor to keep the AWS CRT MQTT event loop free
  • Strands Agent — A fresh Agent instance created per message with Amazon Nova 2 Lite as the model, the fingerspell tool as the available action, and a MaxToolCallsHook (limit 3) to prevent runaway tool-call loops
  • Fingerspell Tool (hand_control.py) — A @tool decorated function that the LLM invokes to spell text letter-by-letter using the 26-letter ASL alphabet
  • Servo Controller — Uses rustypot.Scs0009PyController to communicate with 8 Feetech SCS0009 servos over serial at 1M baud. Each finger has two servos controlled by dedicated move functions (Move_Index, Move_Middle, Move_Ring, Move_Thumb)
  • Video Recorder (video_recorder.py) — Background daemon thread captures frames via OpenCV, encodes to H.264 MP4 via imageio-ffmpeg, uploads to S3, and returns a presigned URL (1-hour expiry)
  • State Publisher — Non-blocking MQTT publisher on a separate thread that sends hand state (finger angles, letter, video URL) to the /state topic with QoS 1

Data Flow

Interactive Sequence Diagram

Edge Agent: MQTT Command to Servo Control Flow

From MQTT command to ASL fingerspelling with video capture: 13 steps across 6 components (IoT Core, MQTT Listener, Strands Agent, Nova 2 Lite, Servo Controller, S3 + Video), completing in roughly 9 seconds from the incoming MQTT command to fingerspelled output and uploaded video.

How it works

MQTT Command Reception

The agent subscribes to an MQTT action topic (e.g. the-project/robotic-hand/XIAOAmazingHandRight/action) using mTLS authentication with X.509 device certificates. The first connection uses clean_session=True to flush any stale session state, then reconnects with clean_session=False for normal operation.

When a message arrives, the handler tries to parse it as JSON and extract the sentence field. If JSON parsing fails, it treats the entire payload as plain text. The action is then submitted to a single-threaded executor (agent_executor) to keep the AWS CRT MQTT event loop free:

def on_message(topic, payload, dup, qos, retain, **kwargs):
    payload_str = payload.decode("utf-8")
    try:
        data = json.loads(payload_str)
        action = data.get("sentence", payload_str)
    except json.JSONDecodeError:
        action = payload_str
    agent_executor.submit(_process_action, action)

Strands Agent and Amazon Nova 2 Lite

The Strands Agents framework provides the core AI reasoning loop. A fresh agent instance is created for every MQTT message — this is deliberate to prevent conversation history from accumulating across messages, which would cause unbounded token growth over time.

The agent uses Amazon Nova 2 Lite (us.amazon.nova-2-lite-v1:0) via the Bedrock Converse API. Nova 2 Lite was chosen for its low-latency tool-use responses, which is critical for real-time servo control. The agent is configured with a MaxToolCallsHook that cancels tool calls beyond 3 to prevent infinite LLM tool-call loops.

The agent runs in fingerspell-only mode — only the fingerspell tool is available. The system prompt instructs the LLM to pass the entire message verbatim to the fingerspell tool without shortening or modifying it. State messages include a letter field identifying the current ASL letter being signed.
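
A minimal sketch of the per-message pattern, assuming the Strands Agents BedrockModel wrapper; the system prompt wording and the fingerspell import path are placeholders rather than the repository's exact code, and the real agent also passes hooks=[MaxToolCallsHook()]:

from strands import Agent
from strands.models import BedrockModel
from hand_control import fingerspell   # the @tool-decorated function (assumed import path)

def _process_action(action: str) -> None:
    # A fresh Agent per message: no conversation history carries over,
    # so token usage stays bounded across hundreds of MQTT commands.
    agent = Agent(
        model=BedrockModel(model_id="us.amazon.nova-2-lite-v1:0"),
        tools=[fingerspell],           # fingerspell-only mode
        system_prompt="Pass the entire message verbatim to the fingerspell tool. "
                      "Do not shorten or modify it.",
    )
    result = agent(action)             # one Converse-API round trip plus tool calls
    print(result)                      # token usage is logged per invocation in the real agent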

Servo Hardware and Control

Pollen Robotics Amazing Hand

The Amazing Hand — an open-source robotic hand designed by Pollen Robotics and manufactured by Seeed Studio — has 4 fingers (index, middle, ring, thumb — no pinky) with 2 Feetech SCS0009 servos per finger (8 servos total) connected via a Waveshare driver board over serial USB at 1,000,000 baud.

Each servo has an angle range of -90 to +90 degrees. Per-servo calibration offsets (MiddlePos) are applied during move operations to account for physical alignment:

MiddlePos = [-17, 8, -16, -4, -12, 10, -9, 9]

The control sequence for each finger (sketched in code after this list):

  1. Set goal speed for both servos (write_goal_speed) with a 0.2ms sleep between each speed write for serial bus timing
  2. Convert angle to radians with calibration offset: np.deg2rad(MiddlePos[i] + angle)
  3. Set goal position for both servos (write_goal_position)
  4. 5ms sleep after positions are set before the next finger's commands
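
A sketch of that sequence for a single finger, using the calibration offsets above; the rustypot constructor arguments, per-call signatures, servo-ID numbering, and speed value are assumptions:

import time
import numpy as np
from rustypot import Scs0009PyController

MiddlePos = [-17, 8, -16, -4, -12, 10, -9, 9]

controller = Scs0009PyController(serial_port="/dev/amazing-hand-right", baudrate=1_000_000)

def move_finger(servo_ids, angles_deg, speed=700):
    # 1. Set goal speed for both servos, with a short gap for serial bus timing
    for servo_id in servo_ids:
        controller.write_goal_speed(servo_id, speed)
        time.sleep(0.0002)                                 # 0.2 ms between speed writes
    # 2 + 3. Convert degrees to radians with the per-servo calibration offset,
    #        then set the goal position for both servos (assuming IDs 1-8)
    for servo_id, angle in zip(servo_ids, angles_deg):
        controller.write_goal_position(servo_id, np.deg2rad(MiddlePos[servo_id - 1] + angle))
    # 4. Give the bus time to settle before the next finger's commands
    time.sleep(0.005)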

ASL Fingerspelling Tool

The fingerspell(text) tool is decorated with @tool from the Strands framework, making it callable by the LLM during inference. It spells text letter-by-letter using the ASL alphabet. Each of the 26 letters (A-Z) is mapped to servo angle tuples for all 4 fingers. Each letter is held for 0.8 seconds, spaces add a 0.4-second pause, and non-letter characters are skipped. A state message with the current letter field is published after each letter.

Since the Amazing Hand has no pinky finger, ASL letters that require a pinky use the ring finger instead.
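
A condensed sketch of the tool, assuming a hand-written ASL_ALPHABET table of per-finger angle tuples (only two letters shown, with illustrative angles; move_finger_by_name and publish_state stand in for the repository's per-finger move functions and state publisher):

import time
from strands import tool

# (angle_1, angle_2) per finger: index, middle, ring, thumb - illustrative values only
ASL_ALPHABET = {
    "A": {"index": (45, -45), "middle": (45, -45), "ring": (45, -45), "thumb": (0, 0)},
    "B": {"index": (0, 0), "middle": (0, 0), "ring": (0, 0), "thumb": (60, -60)},
    # ... C-Z
}

@tool
def fingerspell(text: str) -> str:
    """Spell the given text letter by letter using the ASL alphabet."""
    for ch in text.upper():
        if ch == " ":
            time.sleep(0.4)                                  # pause between words
            continue
        pose = ASL_ALPHABET.get(ch)
        if pose is None:
            continue                                         # skip non-letter characters
        for finger, (a1, a2) in pose.items():
            move_finger_by_name(finger, a1, a2)              # Move_Index / Move_Middle / ...
        publish_state({"gesture": "fingerspell", "letter": ch, "fingers": pose})
        time.sleep(0.8)                                      # hold each letter
    return f"Fingerspelled {len(text)} characters"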

Video Recording Pipeline

Video is recorded concurrently with each fingerspelling sequence (a code sketch follows the list):

  1. Start recording — Before the agent is invoked, start_recording() launches a background daemon thread (video-capture) that captures frames from OpenCV VideoCapture(0) at the camera's native FPS (typically 30)
  2. Stop and encode — After the agent completes, stop_recording_and_upload() stops the capture thread, converts frames from BGR (OpenCV) to RGB, and encodes to H.264 MP4 using imageio.v3 with the libx264 codec. The temp file is named hand_YYYYMMDD_HHMMSS_
  3. Upload to S3 — The MP4 is uploaded to the configured S3 bucket (default: cc-amazing-video) with key videos/hand_YYYYMMDD_HHMMSS.mp4
  4. Presigned URL — A presigned URL is generated with 1-hour expiry and appended to the last state message, which is re-published to the /state topic
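
A sketch of the stop-and-upload step, assuming imageio's FFMPEG plugin accepts fps and codec keyword arguments and that the capture thread has collected a list of frames; the bucket and key names follow the defaults above:

import cv2
import boto3
import numpy as np
import imageio.v3 as iio
from datetime import datetime

s3 = boto3.client("s3")

def stop_recording_and_upload(frames, fps=30, bucket="cc-amazing-video"):
    # Convert OpenCV's BGR frames to RGB before encoding
    rgb_frames = [cv2.cvtColor(f, cv2.COLOR_BGR2RGB) for f in frames]
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    local_path = f"/tmp/hand_{stamp}.mp4"
    key = f"videos/hand_{stamp}.mp4"

    # Encode to H.264 MP4 via imageio-ffmpeg
    iio.imwrite(local_path, np.asarray(rgb_frames), plugin="FFMPEG", fps=fps, codec="libx264")

    # Upload and return a presigned URL with a 1-hour expiry
    s3.upload_file(local_path, bucket, key)
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=3600,
    )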

State Publishing

After each servo movement, the tool publishes a state message to the MQTT /state topic (e.g. the-project/robotic-hand/XIAOAmazingHandRight/state) with QoS 1. Publishing is non-blocking — it submits to a dedicated _publish_executor thread to avoid blocking the servo tool.

The state payload:

{
  "gesture": "fingerspell",
  "letter": "E",
  "ts": 1770550850,
  "fingers": {
    "index": { "angle_1": 45, "angle_2": -45 },
    "middle": { "angle_1": 45, "angle_2": -45 },
    "ring": { "angle_1": 45, "angle_2": -45 },
    "thumb": { "angle_1": 60, "angle_2": -60 }
  },
  "video_url": "https://cc-amazing-video.s3.amazonaws.com/videos/hand_20260228.mp4?..."
}

The last published state is cached so that publish_state_with_video_url() can re-publish it with the presigned URL appended after video upload completes — without needing to re-read servo angles.
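
A sketch of the non-blocking publisher and the cached re-publish, assuming an AWS IoT Device SDK v2 mqtt_connection and a STATE_TOPIC defined elsewhere in the module:

import json
import time
from concurrent.futures import ThreadPoolExecutor
from awscrt import mqtt

_publish_executor = ThreadPoolExecutor(max_workers=1)
_last_state = None

def publish_state(state: dict) -> None:
    global _last_state
    _last_state = state                                      # cache for the video re-publish
    payload = json.dumps({**state, "ts": int(time.time())})
    # Submit to a dedicated thread so the servo tool never blocks on MQTT I/O
    _publish_executor.submit(
        mqtt_connection.publish,
        topic=STATE_TOPIC,
        payload=payload,
        qos=mqtt.QoS.AT_LEAST_ONCE,
    )

def publish_state_with_video_url(video_url: str) -> None:
    # Re-publish the last state with the presigned URL appended,
    # without re-reading servo angles
    if _last_state is not None:
        publish_state({**_last_state, "video_url": video_url})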

This state payload is what Part 2's CDK stack picks up via the IoT Rule, flattens in Lambda, and pushes into AppSync for the frontend to consume.

Threading Model

The agent uses two thread pools and a daemon thread to keep operations non-blocking:

  • agent_executor - ThreadPoolExecutor, 1 worker - Runs the Strands agent off the AWS CRT MQTT event loop
  • _publish_executor - ThreadPoolExecutor, 1 worker - Publishes state messages without blocking
  • video-capture - Daemon thread - Background camera frame capture

Graceful Shutdown

On SIGINT or SIGTERM, the agent does the following (sketched in code after the list):

  1. Sets a stop event to exit the main loop
  2. Disables servo torque (write_torque_enable(1, 2)) to release the servos and prevent power draw
  3. Disconnects from MQTT
  4. Logs completion
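
A sketch of the shutdown path, assuming the controller and MQTT connection objects from the agent module; the torque-disable arguments are an assumption based on the method named above:

import signal
import threading

stop_event = threading.Event()

def _handle_shutdown(signum, frame):
    stop_event.set()                                    # 1. exit the main loop
    for servo_id in range(1, 9):
        controller.write_torque_enable(servo_id, 0)     # 2. release all 8 servos
    mqtt_connection.disconnect().result()               # 3. disconnect from IoT Core
    print("Shutdown complete")                          # 4. log completion

signal.signal(signal.SIGINT, _handle_shutdown)
signal.signal(signal.SIGTERM, _handle_shutdown)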

Technical Challenges & Solutions

Challenge 1: Conversation History Bloat

Problem: Strands Agents maintain conversation history by default. Over time, as hundreds of MQTT messages are processed, the token count grows unboundedly, increasing latency and cost.

Solution: A fresh Agent instance is created for every MQTT message. This discards all prior conversation history, keeping each invocation lightweight. Token usage (input, output, total) is logged after each invocation for monitoring.

Challenge 2: Runaway Tool-Call Loops

Problem: The LLM might enter a loop of calling tools repeatedly — for example, calling fingerspell then deciding to call it again with modified text, then again.

Solution: A custom MaxToolCallsHook implementing the Strands HookProvider interface. It counts tool calls per agent invocation and cancels any tool call beyond the limit of 3. This is injected into the agent via hooks=[MaxToolCallsHook()].

Challenge 3: No Pinky Finger on the Amazing Hand

Problem: The Pollen Robotics Amazing Hand has only 4 fingers (index, middle, ring, thumb) — no pinky. Several ASL letters require specific pinky positions (e.g. I, J, Y).

Solution: ASL letters that require a pinky use the ring finger instead. The 26-letter ASL alphabet is manually mapped to 4-finger servo angle tuples, approximating the correct hand shape with the available fingers.

Challenge 4: Serial Bus Timing

Problem: Sending servo commands too quickly over the serial bus causes missed commands or erratic movement. The Feetech SCS0009 protocol requires time between operations.

Solution: A 0.2ms sleep is inserted between speed writes, and a 5ms sleep is added after both goal positions are set, giving the serial bus time to process each command before the next finger's sequence begins.

Getting Started

GitHub Repository: https://github.com/chiwaichan/strands-agents-amazing-hands

Prerequisites

  • NVIDIA Jetson (AGX Thor or Orin Nano Super) with Python 3.10+
  • Pollen Robotics Amazing Hand connected via USB serial (Waveshare driver board)
  • AWS IoT Core device certificates (certificate, private key, root CA)
  • Amazon Bedrock access enabled for Nova 2 Lite in us-east-1
  • USB camera connected to the Jetson
  • S3 bucket for video storage (default: cc-amazing-video)

Installation

git clone https://github.com/chiwaichan/strands-agents-amazing-hands.git
cd strands-agents-amazing-hands
pip install -e .

Running the Agent

amazing-hand-agent \
--endpoint your-iot-endpoint.iot.us-east-1.amazonaws.com \
--cert certs/device.pem.crt \
--key certs/device.pem.key \
--ca certs/AmazonRootCA1.pem \
--topic the-project/robotic-hand/XIAOAmazingHandRight/action \
--serial-port /dev/amazing-hand-right \
--s3-bucket cc-amazing-video

The agent will connect to IoT Core, subscribe to the action topic, and wait for commands. When a message arrives, it will process it through the Strands Agent, drive the servos, record video, and publish state back.

Summary

This post covered the edge AI agent — the final piece of the voice-to-sign-language translation system:

  • Strands Agents framework with Amazon Nova 2 Lite for tool-use — a fresh agent per MQTT message prevents history bloat, with MaxToolCallsHook limiting calls to 3
  • ASL fingerspelling with the 26-letter alphabet (A-Z), each letter held for 0.8 seconds — the fingerspell tool is decorated with @tool for LLM invocation
  • 8 Feetech SCS0009 servos on 4 fingers controlled via rustypot over serial at 1M baud, with per-servo calibration offsets
  • Video pipeline captures via OpenCV in a background daemon thread, encodes to H.264 MP4 via imageio-ffmpeg, uploads to S3, and includes a 1-hour presigned URL in the final state message
  • Non-blocking threading with 2 thread pools (agent executor off MQTT event loop, state publisher) and a daemon thread for video capture
  • Real-time state publishing to IoT Core after every servo movement — which Part 2's CDK stack routes through Lambda to AppSync, completing the feedback loop to the React frontend in Part 1
  • Graceful shutdown disables servo torque on SIGINT/SIGTERM to release the servos and prevent power draw

Real-Time Voice to Sign Language Translation with Amazon Nova 2 Sonic and Pollen Robotics Amazing Hand - Part 1: Frontend and Voice Processing

· 16 min read
Chiwai Chan
Tinkerer

Pollen Robotics Amazing Hand

This is Part 1 of a 3-part series covering a real-time voice-to-sign-language translation system. The complete solution spans three separate repositories, each responsible for a distinct layer of the system:

  1. This post (Part 1) - Frontend and Voice Processing — The React web app that captures speech, streams it to Amazon Nova 2 Sonic on Bedrock, publishes cleaned sentence text via MQTT, and renders a real-time 3D hand visualisation
  2. Part 2 - Cloud Infrastructure (cdk-iot-amazing-hand-streaming) — The AWS CDK stack that routes IoT Core messages through Lambda to AppSync, enabling real-time GraphQL subscriptions between the edge device and the frontend
  3. Part 3 - Edge AI Agent (strands-agents-amazing-hands) — The Strands Agent powered by Amazon Nova 2 Lite running on an NVIDIA Jetson that receives MQTT sentence text, translates it to ASL servo commands, drives the Pollen Robotics Amazing Hand for fingerspelling, and streams video and state back

In this post, I focus on how speech enters the system, how Amazon Nova 2 Sonic processes and cleans up the spoken input, and how the frontend publishes cleaned sentence text over MQTT — setting the stage for Parts 2 and 3.

The key idea is that Nova 2 Sonic is not used as a chatbot here — it is configured as a dumb speech-to-text relay pipe that cleans up grammar, removes filler words like "um" and "uh", translates non-English speech to English, and forwards the cleaned text via a forced tool invocation (send_text) on every single utterance. The frontend then publishes the cleaned sentence text to AWS IoT Core over MQTT for the edge device to translate into ASL servo commands.

Goals

  • Capture speech in the browser and stream it to Amazon Nova 2 Sonic via bidirectional streaming — no backend servers required
  • Use Nova 2 Sonic's forced tool use (send_text) with toolChoice: { any: {} } to relay cleaned text on every utterance, not as a conversational chatbot
  • Publish cleaned sentence text to AWS IoT Core over MQTT for the edge device to translate into ASL servo commands
  • Subscribe to real-time hand state updates via GraphQL (AppSync) and synchronise a 3D Three.js hand animation with the physical hand
  • Use AWS Amplify Gen 2 for infrastructure-as-code backend definition in TypeScript (Cognito, AppSync, IAM policies)
  • Display a 3-column UI with signed letter history, 3D hand animation with video feed, and live transcript with microphone controls

The Overall System

The end-to-end system takes spoken words from a browser microphone all the way through to physical ASL fingerspelling on an Amazing Hand — an open-source robotic hand designed by Pollen Robotics and manufactured by Seeed Studio — passing through cloud AI, IoT messaging, and an edge AI agent along the way.

Overall System Architecture

System Components:

  1. React Frontend (this post) - Captures speech, streams to Bedrock, publishes cleaned sentence text to MQTT, renders 3D hand animation synchronised with the physical hand via GraphQL subscriptions
  2. Cloud Infrastructure (Part 2) - AWS CDK stack with IoT Core rules that route MQTT messages through Lambda to AppSync, enabling real-time GraphQL subscriptions between the edge device and the frontend
  3. Edge AI Agent (Part 3) - Strands Agent powered by Amazon Nova 2 Lite on an NVIDIA Jetson that receives MQTT sentence text, translates it to ASL servo commands, drives the Amazing Hand for fingerspelling letter by letter, records video, and publishes hand state back via IoT Core

Interactive Sequence Diagram

End-to-End Voice to Sign Language Flow

From user speech to ASL fingerspelling on the Amazing Hand: 13 steps across 6 components spanning 3 repos (User, Browser, Bedrock, IoT Core, Jetson, Hand), following the path speech → sentence → edge AI → ASL fingerspelling.

Architecture

The frontend is built with React 19, Vite 7, and TypeScript 5.9. The application is structured around a main VoiceChat.tsx component that orchestrates four custom hooks, three utility modules, and a Three.js-based hand animation component.

Application UI

React Hooks Architecture

Components:

  • VoiceChat.tsx - Main UI component with a 3-column responsive layout. Coordinates all hooks, renders the transcript feed, microphone controls, signed letter history, hand state data grid, video feed, and 3D animation. Collapses to a single column on screens under 1100px
  • useNovaSonic - Core hook managing the Bedrock bidirectional stream with InvokeModelWithBidirectionalStreamCommand. Handles authentication via Cognito, the Nova 2 Sonic event protocol (session/prompt/content lifecycle), the async generator input stream with backpressure, and send_text tool use responses. The tool is configured with toolChoice: { any: {} } to force tool invocation on every utterance
  • useAudioRecorder - Captures microphone input using an inline AudioWorklet running in a separate thread. Accumulates 2048 samples per buffer, resamples from the device sample rate (typically 48kHz) to 16kHz, converts Float32 to PCM16, and Base64 encodes for transmission
  • useAudioPlayer - Provides audio playback capability (FIFO queue of AudioBuffers at 24kHz). In the current implementation, Nova 2 Sonic's audio output is intentionally discarded since only the cleaned text via tool use is needed — the hook is available but not actively fed audio data
  • useHandStream - Subscribes to AppSync GraphQL onCreateHandState subscription filtered by device name. Fetches the last 20 hand states on mount and maintains a real-time list of 8 servo angles (thumb, index, middle, ring — each with two joint angles), letters, and video URLs
  • iotPublisher.ts - Publishes MQTT messages to the topic the-project/robotic-hand/XIAOAmazingHandRight/action. Publishes cleaned sentence text as { id, sentence, ts } payloads and handles IoT policy attachment to the Cognito identity
  • HandAnimation.tsx - Procedurally generated 3D robotic hand using Three.js with no external 3D models. The palm is built with LatheGeometry (curved cup shape), and each finger has a dual-joint rig (proximal + distal) with synchronised linkage. Uses WebGL rendering with PCFSoftShadowMap shadows, OrbitControls, and industrial-style materials with metalness/roughness

Authentication Flow

The frontend needs temporary AWS credentials to call both Bedrock (for Nova 2 Sonic streaming) and IoT Core (for MQTT publishing). No long-term credentials are stored in the browser.

Authentication Flow

Authentication Layers:

  • Cognito User Pool - Handles user registration and login with email/password. Configured via Amplify Gen 2 defineAuth with preferredUsername as an optional attribute
  • Cognito Identity Pool - Exchanges JWT tokens from the User Pool for temporary AWS credentials (access key, secret key, session token). Credentials are automatically refreshed by the Amplify SDK before expiration
  • IAM Role - The authenticated user role grants two sets of permissions: bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream scoped to amazon.nova-2-sonic-v1:0 in us-east-1, and iot:Publish, iot:Connect, iot:DescribeEndpoint, and iot:AttachPolicy for IoT Core MQTT access. An IoT Core policy (RoboticHandPolicy) is also attached to the Cognito identity at runtime to authorise MQTT publishing to the the-project/robotic-hand/* topic pattern

How it works

Audio Capture and Processing

The browser captures audio from the microphone using the Web Audio API and an AudioWorklet running in a separate thread. The AudioWorklet avoids main-thread blocking and processes audio in real-time with echo cancellation and noise suppression enabled.

Audio Processing Pipeline

Input Processing (Recording), with the conversion steps sketched in code after this list:

  1. Microphone - Browser calls getUserMedia() to capture audio at the device's native sample rate (typically 48kHz) with mono channel, echo cancellation, and noise suppression enabled
  2. AudioWorklet - An inline AudioCaptureProcessor (loaded as a Blob URL to avoid CORS issues) runs in a separate thread. It accumulates samples in a buffer and posts a Float32Array message to the main thread every 2048 samples
  3. Resample - Linear interpolation resampling converts from 48kHz to 16kHz (Nova 2 Sonic's required input rate). The ratio is calculated dynamically from the actual device sample rate
  4. Float32 to PCM16 - Floating point samples in the range [-1, 1] are converted to 16-bit signed integers. Negative values are multiplied by 0x8000 and positive values by 0x7FFF
  5. Base64 Encode - The binary PCM data is encoded to Base64 text for JSON transmission to Bedrock via a custom uint8ArrayToBase64() utility that iterates bytes into a binary string and then calls btoa()
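
The browser implementation is TypeScript, but the resampling and PCM16 conversion math is language-agnostic; here is an illustrative sketch of steps 3-5 in Python:

import base64
import numpy as np

def resample_linear(samples: np.ndarray, src_rate: int, dst_rate: int = 16000) -> np.ndarray:
    # Linear-interpolation resample, e.g. 48 kHz -> 16 kHz
    dst_len = int(len(samples) * dst_rate / src_rate)
    src_idx = np.linspace(0, len(samples) - 1, dst_len)
    return np.interp(src_idx, np.arange(len(samples)), samples)

def float32_to_pcm16(samples: np.ndarray) -> np.ndarray:
    # Negative values scale by 0x8000, positive values by 0x7FFF
    clipped = np.clip(samples, -1.0, 1.0)
    return np.where(clipped < 0, clipped * 0x8000, clipped * 0x7FFF).astype(np.int16)

def to_base64_chunk(samples: np.ndarray, src_rate: int = 48000) -> str:
    pcm16 = float32_to_pcm16(resample_linear(samples, src_rate))
    return base64.b64encode(pcm16.tobytes()).decode("ascii")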

Amazon Nova 2 Sonic Bidirectional Streaming

The heart of the system is the bidirectional stream to Amazon Nova 2 Sonic using InvokeModelWithBidirectionalStreamCommand. Nova 2 Sonic is configured not as a chatbot, but as a speech relay that cleans up input and forwards it via forced tool use.

Bidirectional Streaming Protocol

Input Events (sent to Bedrock):

  • sessionStart - Initialises the session with inference configuration: maxTokens: 1024, topP: 0.9, temperature: 0.7
  • promptStart - Configures audio output format: audio/lpcm at 24kHz, 16-bit, mono, voice matthew, Base64 encoding. Also defines the send_text tool with toolChoice: { any: {} } to force tool invocation on every utterance
  • contentStart (TEXT) - Sends the system prompt that instructs Nova 2 Sonic to act as a "dumb speech-to-text relay pipe" — clean up grammar, remove filler words, translate non-English to English, call send_text with the cleaned text, then respond with only "Sent"
  • contentStart (AUDIO) - Marks the beginning of audio input content
  • audioInput - Streams Base64-encoded 16kHz PCM audio chunks in real-time as the user speaks
  • contentEnd / promptEnd / sessionEnd - Lifecycle events to terminate content blocks, prompts, and sessions

Output Events (received from Bedrock):

  • textOutput - Returns transcribed user speech and the generated AI response text ("Sent")
  • toolUse - The send_text tool invocation containing the cleaned text in { sentence: "..." } format. This is the primary output — the frontend publishes the sentence to MQTT for the edge device to translate into ASL servo commands
  • audioOutput - Synthesised voice response as Base64-encoded 24kHz PCM. In the current implementation, audio output is intentionally discarded since only the cleaned text via tool use is needed

Tool Use — send_text (the tool definition is sketched in code after this list):

  • The tool is defined with toolChoice: { any: {} }, which forces Nova 2 Sonic to call it on every single utterance without exception
  • The tool accepts a single sentence parameter — the cleaned-up, well-formed sentence
  • When the tool invocation arrives, the frontend extracts the sentence and publishes it as { id, sentence, ts } to IoT Core via MQTT using publishSentence(). The edge device then translates the sentence into ASL servo commands
  • A JSON tool result ({ "status": "success", "sentence": "..." }) is sent back to Nova 2 Sonic to complete the tool use cycle
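
The tool definition that goes into the promptStart event looks roughly like this (shown as a Python dict for readability; the field shapes follow Bedrock's tool-configuration format and are an approximation, not the repository's exact payload):

import json

send_text_tool_config = {
    "toolChoice": {"any": {}},                 # force a tool call on every utterance
    "tools": [{
        "toolSpec": {
            "name": "send_text",
            "description": "Relay the cleaned-up sentence spoken by the user.",
            "inputSchema": {"json": json.dumps({
                "type": "object",
                "properties": {
                    "sentence": {"type": "string", "description": "The cleaned, well-formed sentence"}
                },
                "required": ["sentence"],
            })},
        }
    }],
}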

Publishing Sentences via MQTT

Once the cleaned sentence is extracted from the send_text tool invocation, iotPublisher.ts publishes it to the MQTT topic the-project/robotic-hand/XIAOAmazingHandRight/action via AWS IoT Core.

MQTT Command Flow

The payload is a simple JSON object containing:

  • id - A UUID for the message
  • sentence - The cleaned sentence text from Nova 2 Sonic
  • ts - Unix timestamp in seconds

The edge device (covered in Part 3) receives this sentence and is responsible for translating it into ASL servo commands and driving the physical hand.

Interactive Sequence Diagram

MQTT Command Pipeline

From Nova 2 Sonic text output to IoT Core sentence publish: 7 steps across 5 components (Nova 2 Sonic, useNovaSonic, VoiceChat.tsx, iotPublisher.ts, IoT Core), following the path speech → Nova 2 Sonic cleanup → send_text tool use → publishSentence → MQTT to edge device.

The browser console logs the performance breakdown for each utterance through the voice-to-IoT pipeline. In this example, the end-to-end time from speech detection to IoT publish is approximately 2.9 seconds — with the majority spent on Speech-to-Text (2228ms) as Nova 2 Sonic processes the audio, followed by Text-to-Tool extraction (423ms) and IoT Publish (243ms):

Voice-to-IoT Pipeline Performance

Real-Time Hand State via GraphQL Subscription

The frontend subscribes to AppSync's onCreateHandState GraphQL subscription to receive real-time updates from the edge device. Each update includes the device name, current letter being signed, all 8 servo angles (thumb, index, middle, ring — each with two joint angles), a timestamp, and an optional video URL.

On mount, the hook fetches the last 20 hand states to populate the UI immediately. New states arrive in real-time as the edge device publishes them back through IoT Core → Lambda → AppSync. The data is displayed in both the signed letter history panel and the raw hand state data grid.

3D Hand Visualisation

The HandAnimation.tsx component renders a procedurally generated 3D robotic hand using Three.js — no external 3D models are loaded. The entire hand is built from code:

  • The palm uses LatheGeometry to create a curved cup shape that tapers from a narrow wrist (radius 0.18) to wide knuckles (radius 0.56)
  • Each finger has a dual-joint rig with proximal and distal segments, knuckle joints, linkage bars, and fingertips. The thumb is mounted on the side of the palm and rotates on the Z-axis, while the index, middle, and ring fingers are mounted on the front rim and rotate on the X-axis
  • The distal joint automatically follows the proximal joint at 50% of its angle, simulating a synchronised linkage mechanism
  • Materials use industrial-style metalness/roughness: dark gray frame (0x2a2a2a), light gray joints (0x888888), and darker gray tips (0x555555)
  • The scene includes PCFSoftShadowMap shadows, ambient lighting (0.8), directional light (1.0), and a fill light (0.4), with OrbitControls for interactive zoom and rotation

Servo angle updates from the GraphQL subscription drive the finger rotations in real-time, keeping the 3D animation synchronised with the physical Amazing Hand.

Audio Playback

The useAudioPlayer hook provides a FIFO queue-based audio playback capability for Web Audio AudioBuffer objects at 24kHz. However, in the current implementation, Nova 2 Sonic's audio output is intentionally discarded — the onAudioOutput callback is set to a no-op since only the cleaned text via the send_text tool use is needed to drive the MQTT pipeline. The hook remains available for future use if audio feedback is desired.

Technical Challenges & Solutions

Challenge 1: AudioWorklet CORS Issues

Problem: Loading an AudioWorklet processor from an external JavaScript file fails with CORS errors on some deployments, particularly when using Amplify Hosting.

Solution: Inline the AudioWorklet code as a Blob URL. The processor code is defined as a string, converted to a Blob with type application/javascript, and loaded via URL.createObjectURL(). The object URL is revoked after the module is added:

const blob = new Blob([audioWorkletCode], { type: 'application/javascript' });
const workletUrl = URL.createObjectURL(blob);
await audioContext.audioWorklet.addModule(workletUrl);
URL.revokeObjectURL(workletUrl);

Challenge 2: Forcing Tool Use on Every Utterance

Problem: Nova 2 Sonic is a conversational model by default — it wants to chat and respond naturally. But in this system, it needs to act as a pure relay, forwarding every single utterance as cleaned text without adding commentary or refusing any messages.

Solution: A combination of system prompt engineering and forced tool use. The system prompt explicitly instructs Nova 2 Sonic to act as a "dumb speech-to-text relay pipe" and never add commentary. The send_text tool is configured with toolChoice: { any: {} }, which forces the model to invoke a tool on every response. After calling the tool, it is instructed to only respond with "Sent".

Challenge 3: Keeping MQTT Payloads Simple

Problem: The system needs to transmit the user's intent from the frontend to the edge device reliably via IoT Core MQTT.

Solution: Rather than translating text to servo commands on the frontend (which would require large payloads with many servo poses), the frontend publishes only the cleaned sentence text as a compact { id, sentence, ts } JSON payload. The edge device is responsible for translating the sentence into ASL servo commands, keeping the MQTT messages small and the frontend simple.

Getting Started

GitHub Repository: https://github.com/chiwaichan/amplify-react-nova-sonic-voice-chat-amazing-hand

Prerequisites

  • Node.js 18+
  • AWS Account with Amazon Nova 2 Sonic model access enabled in Bedrock (us-east-1 region)
  • AWS CLI configured with credentials

Deployment Steps

  1. Enable Nova 2 Sonic in Bedrock Console (us-east-1 region)

  2. Clone and Install:

git clone https://github.com/chiwaichan/amplify-react-nova-sonic-voice-chat-amazing-hand.git
cd amplify-react-nova-sonic-voice-chat-amazing-hand
npm install
  3. Start Amplify Sandbox:
npx ampx sandbox
  4. Run Development Server:
npm run dev
  5. Open Application: Navigate to http://localhost:5173, create an account, and start talking. Note that the full system requires Parts 2 and 3 to be deployed for the physical hand to respond — but the frontend will still capture speech, process it through Nova 2 Sonic, and display the 3D hand animation independently.

What's Next

In Part 2, I will cover the cloud infrastructure layer — the AWS CDK stack (cdk-iot-amazing-hand-streaming) that routes IoT Core MQTT messages through Lambda to AppSync. This is the bridge that enables real-time GraphQL subscriptions, allowing the frontend to receive hand state updates from the edge device as they happen.

In Part 3, I will cover the edge AI agent (strands-agents-amazing-hands) — a Strands Agent powered by Amazon Nova 2 Lite running on an NVIDIA Jetson that subscribes to the MQTT sentence text published by this frontend, translates it into physical servo movements on the Pollen Robotics Amazing Hand for ASL fingerspelling, records video of the hand in action, and publishes state back through IoT Core.

Summary

This post covered the frontend and voice processing layer of a real-time voice-to-sign-language translation system:

  • Amazon Nova 2 Sonic is used not as a chatbot but as a speech relay — configured via system prompt and toolChoice: { any: {} } forced send_text tool use to clean up grammar, remove filler words, translate to English, and forward every utterance as text
  • Audio pipeline captures at 48kHz via AudioWorklet, resamples to 16kHz, converts to PCM16 Base64 for Bedrock input. Nova 2 Sonic's audio output is intentionally discarded since only the cleaned text is needed
  • MQTT publishing sends cleaned sentence text as { id, sentence, ts } to AWS IoT Core for the edge device to translate into ASL servo commands
  • Real-time feedback via GraphQL subscriptions keeps the 3D Three.js hand animation synchronised with the physical Amazing Hand using 8 servo angles (thumb, index, middle, ring — each with two joints)
  • Fully serverless frontend using AWS Amplify Gen 2 with Cognito authentication, no backend servers — direct browser-to-Bedrock and browser-to-IoT Core communication

Agentic based Over-The-Air Firmware Management of Seeed Studio XIAO ESP32S3 IoT Device Firmware using Amazon AgentCore and Strands Agents

· 8 min read
Chiwai Chan
Tinkerer

I want to be able to manage the firmware of all my IoT devices using a prompt - whether that is upgrading a device to the latest version or performing a rollback - at any scope: the entire fleet (every device across all 20+ solution types), all the devices within a single solution type, or an individual device.

Goals

  • To be able to over-the-air flash a new firmware version using a prompt
  • To have an agent do all the work - give it a prompt and it takes care of the rest
  • To scale with the number of IoT devices as well as with the number of new IoT solution types, with no extra effort required - implement once and forget
  • Have the ability to rollback to any firmware version specified in the prompt
  • This same solution can be interfaced with using the Model Context Protocol (MCP): whether via Kiro CLI or Claude Code
  • This same solution can be interfaced with using a chatbot
  • Must be authenticated to interface with this solution
  • Must be a completely serverless-solution
  • Firmware integrity verification using SHA256 checksums before flashing to ensure firmware hasn't been corrupted during download
  • Safe rollout with rate limiting and automatic abort thresholds to prevent fleet-wide failures
  • Device firmware version tracking via device shadows to enable version-based targeting for updates
  • Configuration-gated deployments to enable or disable OTA updates per device type for controlled rollouts

Architecture

End-to-End OTA Firmware Update Flow

This diagram illustrates the complete flow from a user's natural language prompt to firmware being flashed on Seeed Studio XIAO ESP32S3 devices.

End-to-End OTA Firmware Update Flow

Flow Steps (steps 4-6 are sketched in code after the list):

  1. User Prompt - Developer/Operator provides a natural language command (e.g., "Update all vision_ai_face_detector devices to v2.0.0")
  2. AgentCore Runtime - Amazon Bedrock AgentCore receives and processes the request
  3. Strands Agent - The agent with firmware_updater tool reasons about the task
  4. Config Check - Agent queries DynamoDB to verify the device type is enabled for OTA updates
  5. Firmware Metadata - Agent retrieves firmware binary, SHA256 checksum, and metadata from S3
  6. Create IoT Job - Agent creates a continuous IoT Job targeting the specified device group
  7. MQTT Notification - AWS IoT Core notifies devices via MQTT topic $aws/things/+/jobs/notify
  8. Firmware Download - Each XIAO ESP32S3 Vision AI Face Detector downloads the firmware directly from S3
  9. Version Reporting - Devices report their new firmware version to their Device Shadow
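
Steps 4-6, sketched with boto3; the table name, bucket name, key schema, and job-document layout are illustrative placeholders rather than the repository's exact values:

import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")
iot = boto3.client("iot")

def push_firmware_update(device_type: str, version: str, targets: list[str]) -> str:
    # 4. Config check: is OTA enabled for this device type?
    table = dynamodb.Table("ota-device-type-config")
    item = table.get_item(Key={"device_type": device_type}).get("Item", {})
    if not item.get("enabled"):
        return f"OTA updates are disabled for {device_type}"

    # 5. Validate the firmware artifacts exist in S3 (head_object raises if missing)
    prefix = f"firmwares/{version}/{device_type}"
    for name in ("firmware.bin", "firmware.sha256", "metadata.json"):
        s3.head_object(Bucket="firmware-bucket", Key=f"{prefix}/{name}")

    # 6. Create a continuous IoT Job targeting the device group(s)
    job = iot.create_job(
        jobId=f"ota-{device_type}-{version}",
        targets=targets,                      # Thing Group ARNs
        documentSource=f"https://firmware-bucket.s3.amazonaws.com/{prefix}/metadata.json",
        targetSelection="CONTINUOUS",
    )
    return job["jobId"]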

Interactive Sequence Diagram

End-to-End OTA Firmware Update Flow

From natural language prompt to firmware flashed on the Seeed Studio XIAO ESP32: 15 message exchanges across 7 participants (Developer/Operator, Bedrock AgentCore, Strands Agent, DynamoDB, S3, IoT Core, XIAO ESP32), taking roughly 12 seconds end-to-end from prompt to flashed firmware.

Strands Agent Architecture on Amazon Bedrock AgentCore

This diagram details the internal architecture of the Strands Agent running on Amazon Bedrock AgentCore, showing how the LLM reasons about prompts and orchestrates tool execution.

Strands Agent Architecture

Components:

  • Amazon Bedrock AgentCore - Managed runtime that hosts and scales the agent
  • ECR Container - Docker image (Python 3.12) containing the Strands Agent code
  • Amazon Nova 2 Lite - The LLM that provides reasoning capabilities
  • Agent Loop - The core execution cycle: parse prompt → select tool → execute → respond
  • firmware_updater Tools:
    • push_firmware_update() - Main orchestrator that coordinates the entire OTA process
    • validate_files_exist() - Validates firmware.bin, firmware.sha256, and metadata.json exist in S3
    • create_dynamic_thing_group() - Creates Fleet Indexing queries to target devices by firmware version

Scalability Architecture

This diagram demonstrates how the solution scales effortlessly across multiple device types and large device fleets - implement once and forget.

Scalability Architecture

Key Scalability Features:

  • Single Agent, Multiple Device Types - One Strands Agent manages all 26+ device groups without code changes
  • S3 Folder Convention - Adding a new device type is as simple as creating a new folder (e.g., firmwares/v1.0.0/new_device_type/)
  • Auto-Discovery Mapping - Folder names automatically map to Thing Groups (e.g., vision_ai_face_detector maps to VisionAIFaceDetectorAWSDevice)
  • Fleet Indexing Queries - Dynamically target devices based on current firmware version, no hardcoded device lists
  • Horizontal Scaling - Add unlimited devices to any group; IoT Jobs handles distribution automatically

Firmware Rollback Architecture

This diagram shows how the solution enables rollback to any previous firmware version using a simple prompt, leveraging the dual-partition architecture of the Seeed Studio XIAO ESP32.

Firmware Rollback Architecture

Key Rollback Features:

  • Version History in S3 - All firmware versions are retained (v1.0.0, v2.0.0, v3.0.0, etc.) enabling rollback to any point
  • Dual-Partition Flash Layout - XIAO ESP32 uses APP0/APP1 partitions for safe ping-pong updates
  • Persistent Storage - NVS (WiFi, config) and SPIFFS (certificates) survive firmware updates
  • SHA256 Validation - Firmware integrity verified before committing to new partition
  • Automatic Rollback - If new firmware fails to boot and connect to MQTT, device automatically reverts to previous partition

Interactive Sequence Diagram

Firmware Rollback Sequence

Rollback to any previous firmware version with dual-partition safety: APP0 is preserved while the v1.0.0 image is written to and booted from APP1, taking roughly 13 seconds to roll back, with auto-recovery on failure.

Multi-Interface Access Architecture

This diagram demonstrates how the Strands Agent can be accessed through multiple interfaces with different authentication methods - enabling developers to use their preferred tools while operators can use a web-based chatbot.

Multi-Interface Access Architecture

Interface Options:

  • MCP Clients (Developer Tools) - Claude Code and Kiro CLI connect via Model Context Protocol to a Streaming AgentCore Runtime using JWT/Cognito authentication
  • Chatbot (Web UI) - AWS Amplify React app with FirmwareAssistant component connects via Lambda proxy to an IAM AgentCore Runtime using SigV4 authentication for service-to-service communication
  • Two Runtimes, Same Agent Logic - Both runtimes run the same Strands Agent code but are deployed separately with different authentication methods suited to their use cases

Firmware AI Assistant Chatbot

The chatbot interface in an Amplify React App provides a conversational way to manage firmware updates. In this example, the assistant lists all available firmware versions across device groups, and then creates an OTA job to update the pet_feeder device group to the latest firmware version.

Firmware AI Assistant Chatbot

Authentication Architecture

This diagram illustrates the multi-layer security model ensuring that all access to the firmware management system is properly authenticated. Each interface uses a different authentication method suited to its use case.

Authentication Architecture

Authentication Layers:

  • Cognito JWT (MCP Path) - Developers using Claude Code and Kiro CLI authenticate via Amazon Cognito User Pool and receive JWT tokens, connecting to the Streaming AgentCore Runtime
  • IAM SigV4 (Chatbot Path) - The Lambda proxy authenticates using AWS IAM roles with SigV4 request signing for service-to-service communication with the IAM AgentCore Runtime
  • X.509 Certificates (Device Path) - XIAO ESP32 devices authenticate to AWS IoT Core using TLS 1.2 mutual authentication with per-device certificates
  • Certificate Chain - Amazon Root CA validates device certificates stored in SPIFFS (survives firmware updates)

Serverless Architecture Overview

This diagram provides a comprehensive view of all AWS services used in the solution - every component is fully serverless with no EC2 instances to manage.

Serverless Architecture Overview

Serverless Components:

  • Frontend - AWS Amplify Hosting, AppSync GraphQL, Cognito User Pool
  • Compute - Amazon Bedrock AgentCore, Lambda Functions, EventBridge Rules
  • Storage - S3 Firmware Bucket, DynamoDB Config Table
  • IoT - IoT Core, IoT Jobs, Device Shadows, Fleet Indexing
  • Monitoring - CloudWatch Logs & Alarms, SNS Notifications
  • CI/CD - CodeBuild (ARM64), ECR Container Registry

Firmware Integrity Verification (SHA256)

This diagram shows the firmware integrity verification process that ensures firmware hasn't been corrupted during download before flashing to the device.

Firmware Integrity Verification

Verification Flow (sketched in code after the list):

  1. Download - XIAO ESP32 streams firmware.bin from S3 in chunks
  2. Calculate - SHA256 hash is calculated progressively during download (streaming hash)
  3. Compare - Calculated hash is compared against expected hash from firmware.sha256 file
  4. Flash Decision - Match: proceed to flash APP1 partition | Mismatch: abort OTA and report failure
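
On the device this runs inside the ESP32 firmware, but the streaming-hash idea is straightforward to illustrate in Python: update the hash chunk by chunk while writing, then compare once at the end (write_chunk here is a hypothetical callback):

import hashlib

def verify_streaming(chunks, expected_sha256: str, write_chunk) -> bool:
    # Hash while downloading/writing - no need to buffer the whole firmware image
    digest = hashlib.sha256()
    for chunk in chunks:
        digest.update(chunk)
        write_chunk(chunk)                 # e.g. write to the inactive APP1 partition
    return digest.hexdigest() == expected_sha256.strip().lower()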

Benefits:

  • Detects corruption during download (network issues, incomplete transfers)
  • Prevents flashing of tampered firmware
  • Memory-efficient streaming verification (no need to store entire firmware before hashing)

Interactive Sequence Diagram

SHA256 Integrity Verification Sequence

Streaming hash verification during firmware download: the hash is calculated during the download rather than after, keeping memory use low, and the download, verify, and commit steps complete in roughly 5.6 seconds.

Safe Rollout with Rate Limiting & Abort Thresholds

This diagram illustrates the safety mechanisms that prevent fleet-wide failures during OTA updates by controlling rollout speed and automatically aborting when issues are detected.

Safe Rollout with Rate Limiting

Safety Mechanisms (the corresponding IoT Jobs configuration is sketched after this list):

  • Rate Limiting - Updates are deployed to a maximum of 10 devices concurrently, preventing network congestion and allowing monitoring
  • Abort Thresholds - Job automatically cancels if failure rate exceeds 5% or more than 10 absolute failures occur
  • Batch Processing - Fleet of 100 devices is updated in batches, with completed, in-progress, and pending states tracked
  • Failure Monitoring - Real-time tracking of success/failure status feeds into abort decision logic
  • Auto-Cancel - When threshold is exceeded, all pending device updates are automatically cancelled
  • SNS Alerts - Operators are immediately notified when an OTA rollout is aborted
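
These safeguards map onto the IoT Jobs configuration at job-creation time. A hedged sketch: the concurrency cap in the diagram is approximated here with jobExecutionsRolloutConfig's per-minute rate limit, the abort thresholds mirror the values above, and the target ARN and document URL are placeholders:

import boto3

iot = boto3.client("iot")

iot.create_job(
    jobId="ota-vision-ai-v2.0.0",
    targets=["arn:aws:iot:us-east-1:123456789012:thinggroup/VisionAIFaceDetectorAWSDevice"],
    documentSource="https://firmware-bucket.s3.amazonaws.com/firmwares/v2.0.0/vision_ai_face_detector/metadata.json",
    targetSelection="CONTINUOUS",
    jobExecutionsRolloutConfig={
        "maximumPerMinute": 10,            # throttle how fast devices pick up the job
    },
    abortConfig={
        "criteriaList": [{
            "failureType": "FAILED",
            "action": "CANCEL",            # auto-cancel pending executions
            "thresholdPercentage": 5.0,    # abort if more than 5% of executions fail
            "minNumberOfExecutedThings": 10,
        }]
    },
)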

Interactive Sequence Diagram

Safe Rollout with Abort Threshold

Rate-limited deployment with automatic abort on failure threshold: in the example run, 3 failures in the second batch push the failure rate past the 5% threshold, the remaining jobs are cancelled, operators are alerted via SNS, and 84 additional devices are prevented from receiving bad firmware.

Source Code

The source code for this project is available on GitHub:

note

This repository is not yet open sourced. It will be made public in a future update.

Real-Time Voice Chat with Amazon Nova Sonic using React and AWS Amplify Gen 2

· 8 min read
Chiwai Chan
Tinkerer

These days I often create small, generic, re-usable building blocks that I can potentially use across new or existing projects. In this blog I talk about the architecture for an LLM-based voice chatbot in a web browser, built entirely as a serverless solution.

The key component of this solution is Amazon Nova 2 Sonic, a speech-to-speech foundation model that can understand spoken audio directly and generate voice responses - all through a single bidirectional stream from the browser directly to Amazon Bedrock, with no backend servers required - no EC2 instances and no containers.

Goals

  • Enable real-time voice-to-voice conversations with AI using Amazon Nova 2 Sonic
  • Direct browser-to-Bedrock communication using bidirectional streaming - no Lambda functions or API Gateway required
  • Use AWS Amplify Gen 2 for infrastructure-as-code backend definition in TypeScript
  • Implement secure authentication using Cognito User Pool and Identity Pool for temporary AWS credentials
  • Handle real-time audio capture, processing, and playback entirely in the browser
  • Must be a completely serverless solution with automatic scaling
  • Support click-to-talk interaction model for intuitive user experience
  • Display live transcripts of both user speech and AI responses

Architecture

End-to-End Voice Chat Flow

This diagram illustrates the complete flow from a user speaking into their microphone to hearing the AI assistant's voice response.

End-to-End Voice Chat Flow

Flow Steps:

  1. User Speaks - User clicks the microphone button and speaks naturally
  2. Audio Capture - Browser captures audio via Web Audio API at 48kHz
  3. Authentication - React app authenticates with Cognito User Pool
  4. Token Exchange - JWT tokens exchanged for Identity Pool credentials
  5. AWS Credentials - Temporary AWS credentials (access key, secret, session token) returned
  6. Bidirectional Stream - Audio streamed to Bedrock via InvokeModelWithBidirectionalStream
  7. Voice Response - Nova Sonic processes speech and returns synthesized voice response
  8. Audio Playback - Response audio decoded and played through speakers
  9. User Hears - User hears the AI assistant's natural voice response

Interactive Sequence Diagram

Voice Chat Sequence Flow

From user speech to AI voice response via Amazon Nova 2 Sonic: 21 steps across 5 components (User, Browser, Cognito, Bedrock, Nova 2 Sonic), with roughly 4 seconds of end-to-end latency.

React Hooks Architecture

This diagram details the internal architecture of the React application, showing how custom hooks orchestrate audio capture, Bedrock communication, and playback.

React Hooks Architecture

Components:

  • VoiceChat.tsx - Main UI component that coordinates all hooks and renders the interface
  • useNovaSonic - Core hook managing Bedrock bidirectional stream, authentication, and event protocol
  • useAudioRecorder - Captures microphone input using AudioWorklet in a separate thread
  • useAudioPlayer - Manages audio playback queue and Web Audio API buffer sources
  • audioUtils.ts - Low-level utilities for PCM conversion, resampling, and Base64 encoding

Data Flow:

  1. Microphone audio captured by useAudioRecorder via MediaStream
  2. AudioWorklet processes samples in real-time (separate thread)
  3. Audio resampled from 48kHz to 16kHz, converted to PCM16, then Base64
  4. useNovaSonic streams audio chunks to Bedrock
  5. Response audio received as Base64, decoded to PCM, converted to Float32
  6. useAudioPlayer queues AudioBuffers and plays through AudioContext

Authentication Flow

This diagram shows the multi-layer authentication flow that enables secure browser-to-Bedrock communication without exposing long-term credentials.

Authentication Flow

Authentication Layers:

  • Cognito User Pool - Handles user registration and login with email/password
  • Cognito Identity Pool - Exchanges JWT tokens for temporary AWS credentials
  • IAM Role - Defines permissions for authenticated users (Bedrock invoke access)
  • SigV4 Signing - AWS SDK automatically signs all Bedrock requests

Key Security Features:

  • No AWS credentials stored in browser - only temporary session credentials
  • Credentials automatically refreshed by Amplify SDK before expiration
  • IAM policy scoped to specific Bedrock model (amazon.nova-2-sonic-v1:0)
  • All communication over HTTPS with TLS 1.2+

Audio Processing Pipeline

This diagram shows the real-time audio processing that converts browser audio to Bedrock's required format and vice versa.

Audio Processing Pipeline

Input Processing (Recording):

  1. Microphone - Browser captures audio at native sample rate (typically 48kHz)
  2. AudioWorklet - Processes audio in separate thread, accumulates 2048 samples
  3. Resample - Linear interpolation converts 48kHz → 16kHz (Nova Sonic requirement)
  4. Float32 → PCM16 - Converts floating point [-1,1] to 16-bit signed integers
  5. Base64 Encode - Binary PCM encoded for JSON transmission
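
A minimal sketch of steps 4 and 5, using the helper names from audioUtils.ts described above (exact signatures are my assumption):

// Convert Float32 samples in [-1, 1] to 16-bit signed PCM bytes
function float32ToPcm16(samples: Float32Array): Uint8Array {
  const pcm = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return new Uint8Array(pcm.buffer);
}

// Encode the PCM bytes as Base64 for JSON transmission
function uint8ArrayToBase64(bytes: Uint8Array): string {
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}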

Output Processing (Playback):

  1. Base64 Decode - Received audio converted from Base64 to binary
  2. PCM16 → Float32 - 16-bit integers converted to floating point
  3. AudioBuffer - Web Audio API buffer created at 24kHz (Nova Sonic output rate)
  4. Queue & Play - Buffers queued and played sequentially through speakers
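
And the reverse direction, sketched with the helper names used in the hooks (signatures are assumptions):

// Convert 16-bit signed PCM back to Float32 samples in [-1, 1]
function pcm16ToFloat32(pcm: Int16Array): Float32Array {
  const out = new Float32Array(pcm.length);
  for (let i = 0; i < pcm.length; i++) out[i] = pcm[i] / 0x8000;
  return out;
}

// Wrap the samples in a mono 24kHz AudioBuffer (Nova Sonic's output rate)
function createAudioBufferFromPcm(ctx: AudioContext, bytes: Uint8Array): AudioBuffer {
  const pcm = new Int16Array(bytes.buffer, bytes.byteOffset, bytes.byteLength / 2);
  const samples = pcm16ToFloat32(pcm);
  const buffer = ctx.createBuffer(1, samples.length, 24000);
  buffer.copyToChannel(samples, 0);
  return buffer;
}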

Interactive Sequence Diagram

Audio Processing Pipeline

Real-time audio capture, format conversion, and playback

The interactive diagram traces 16 steps from microphone to speakers across the AudioWorklet, audioUtils.ts, useNovaSonic, Bedrock, and useAudioPlayer. Audio format conversions: input is 48kHz Float32 → 16kHz PCM16 → Base64; output is Base64 → PCM16 → Float32 → 24kHz AudioBuffer.

Bidirectional Streaming Protocol

This diagram illustrates how the useNovaSonic hook manages the complex bidirectional streaming protocol with Amazon Bedrock.

Bidirectional Streaming Protocol

Event Protocol: Nova Sonic uses an event-based protocol where each interaction consists of named sessions, prompts, and content blocks.

Input Events (sent to Bedrock):

  • sessionStart - Initializes session with inference parameters (maxTokens: 1024, topP: 0.9, temperature: 0.7)
  • promptStart - Defines output audio format (24kHz, LPCM, voice "matthew")
  • contentStart - Marks beginning of content blocks (TEXT for system prompt, AUDIO for user speech)
  • textInput - Sends system prompt text content
  • audioInput - Streams user audio chunks as Base64-encoded 16kHz PCM
  • contentEnd - Marks end of content block
  • promptEnd / sessionEnd - Terminates prompt and session
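
For example, the sessionStart event with the inference parameters above looks roughly like this before it goes onto the stream (field names are my best reading of the protocol, not copied from the repo):

const sessionStart = {
  event: {
    sessionStart: {
      inferenceConfiguration: { maxTokens: 1024, topP: 0.9, temperature: 0.7 },
    },
  },
};
// Each event is serialized to JSON and sent as raw bytes on the stream
const bytes = new TextEncoder().encode(JSON.stringify(sessionStart));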

Output Events (received from Bedrock):

  • contentStart - Marks role transitions (USER for ASR, ASSISTANT for response)
  • textOutput - Returns transcribed user speech and generated AI response text
  • audioOutput - Returns synthesized voice response as Base64-encoded 24kHz PCM
  • contentEnd - Marks end of response content

Async Generator Pattern: The SDK requires input as AsyncIterable<Uint8Array>. The hook implements this using:

  • Event Queue - Pre-queued initialization events before stream starts
  • Promise Resolver - Backpressure control for yielding events on demand
  • pushEvent() - Adds new events during conversation (audio chunks)
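
pushEvent is the half that feeds the generator; a sketch (the controller shape is an assumption, matching the createInputStream code shown later):

interface StreamController {
  eventQueue: (Uint8Array | null)[];
  resolver: ((event: Uint8Array | null) => void) | null;
  closed: boolean;
}

// Pass null to signal stream shutdown
function pushEvent(ctrl: StreamController, event: Uint8Array | null) {
  if (ctrl.resolver) {
    // The generator is suspended waiting: hand the event straight to it
    const resolve = ctrl.resolver;
    ctrl.resolver = null;
    resolve(event);
  } else {
    // Otherwise queue it for the generator's next drain pass
    ctrl.eventQueue.push(event);
  }
}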

Serverless Architecture Overview

This diagram provides a comprehensive view of all components - the entire solution is serverless with no EC2 instances or containers to manage.

Serverless Architecture Overview

Frontend Stack:

  • React - Component-based UI framework
  • Vite - Build tool and dev server
  • TypeScript - Type-safe development
  • AWS Amplify Hosting - Static web hosting with global CDN

Backend Stack (Amplify Gen 2):

  • amplify/backend.ts - Infrastructure defined in TypeScript
  • Cognito User Pool - Email-based authentication
  • Cognito Identity Pool - AWS credential vending
  • IAM Policy - Grants bedrock:InvokeModel permission for bidirectional streaming
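
In Amplify Gen 2 that grant can be expressed directly in backend.ts; a sketch, with the role accessor and resource ARN as assumptions based on the stack described here:

import { defineBackend } from '@aws-amplify/backend';
import { PolicyStatement } from 'aws-cdk-lib/aws-iam';
import { auth } from './auth/resource';

const backend = defineBackend({ auth });

// Let authenticated users invoke the Nova 2 Sonic model over the bidirectional stream
backend.auth.resources.authenticatedUserIamRole.addToPrincipalPolicy(
  new PolicyStatement({
    actions: ['bedrock:InvokeModel', 'bedrock:InvokeModelWithBidirectionalStream'],
    resources: [
      'arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-2-sonic-v1:0',
    ],
  })
);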

AI Service:

  • Amazon Bedrock - Managed foundation model inference
  • Nova 2 Sonic - Speech-to-speech model (us-east-1)
  • Bidirectional Streaming - Real-time duplex communication

Technical Challenges & Solutions

Challenge 1: AudioWorklet CORS Issues

Problem: Loading AudioWorklet from external file fails with CORS errors on some deployments.

Solution: Inline the AudioWorklet code as a Blob URL:

// Inline the worklet source so no cross-origin fetch is needed
const blob = new Blob([audioWorkletCode], { type: 'application/javascript' });
const workletUrl = URL.createObjectURL(blob);
await audioContext.audioWorklet.addModule(workletUrl);
URL.revokeObjectURL(workletUrl); // safe once the module is loaded
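
Revoking the Blob URL immediately is safe here because addModule has already resolved, so the worklet module has been fetched and compiled before the URL goes away.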

Challenge 2: Sample Rate Mismatch

Problem: Browsers capture audio at 48kHz, but Nova Sonic requires 16kHz input.

Solution: Linear interpolation resampling in real-time:

const resampleAudio = (audioData: Float32Array, sourceSampleRate: number, targetSampleRate: number) => {
  const ratio = sourceSampleRate / targetSampleRate;
  const newLength = Math.floor(audioData.length / ratio);
  const result = new Float32Array(newLength);
  for (let i = 0; i < newLength; i++) {
    const srcIndex = i * ratio;
    const floor = Math.floor(srcIndex);
    const ceil = Math.min(floor + 1, audioData.length - 1);
    const t = srcIndex - floor;
    result[i] = audioData[floor] * (1 - t) + audioData[ceil] * t;
  }
  return result;
};

Challenge 3: SDK Bidirectional Stream Input

Problem: AWS SDK requires input as AsyncIterable<Uint8Array>, but events need to be pushed dynamically during the conversation.

Solution: Async generator with event queue and promise-based backpressure:

async function* createInputStream() {
  while (isActiveRef.current && !ctrl.closed) {
    // Drain any events queued while we were yielding
    while (ctrl.eventQueue.length > 0) {
      yield ctrl.eventQueue.shift();
    }
    // Suspend until pushEvent() resolves with the next event
    const nextEvent = await new Promise(resolve => {
      ctrl.resolver = resolve;
    });
    if (nextEvent === null) break; // null signals stream shutdown
    yield nextEvent;
  }
}

Getting Started

GitHub Repository: https://github.com/chiwaichan/amplify-react-amazon-nova-2-sonic-voice-chat

Prerequisites

  • Node.js 18+
  • AWS Account with Bedrock access enabled
  • AWS CLI configured with credentials

Deployment Steps

  1. Enable Nova 2 Sonic in Bedrock Console (us-east-1 region)

  2. Clone and Install:

git clone https://github.com/chiwaichan/amplify-react-amazon-nova-2-sonic-voice-chat.git
cd amplify-react-amazon-nova-2-sonic-voice-chat
npm install

  3. Start Amplify Sandbox:

npx ampx sandbox

  4. Run Development Server:

npm run dev

  5. Open Application: Navigate to http://localhost:5173, create an account, and start talking!

Summary

This architecture provides a reusable building block for voice-enabled AI applications:

  • Zero backend servers - Direct browser-to-Bedrock communication
  • Real-time streaming - HTTP/2 bidirectional streaming for low latency
  • Secure authentication - Cognito User Pool + Identity Pool + IAM policies
  • Audio processing pipeline - Web Audio API, AudioWorklet, PCM conversion
  • Infrastructure as code - AWS Amplify Gen 2 with TypeScript backend definition

The entire interaction happens in real-time: speak naturally, and hear the AI respond within seconds.

Rockit Apple Payslip Analyzer with GenAI Chatbot using Bedrock and Streamlit

· 5 min read
Chiwai Chan
Tinkerer

It's that time of year when I normally have to start doing taxes, not for myself but for my parents. Mum works at various fruit picking and packing places in Hawkes Bay throughout the year, so there are all sorts of Payslips from different employers for the last financial year. Occasionally mum asks me about specific details in her weekly payslips, and that usually means: download a PDF from an email -> open up the PDF -> find what she's asking for -> look at the PDF -> can't find it, so ask what mum meant -> find the answer -> explain it to her.

Solution & Goal

The usual format, a challenge: create a Generative AI conversational chat that enables mum to ask, in her natural language, for specific details of her Payslips.

And the goal: outsource the work to AI = more time to play. :-)

Success Criteria

  • Automatically extract details from Payslips - I've only tested it on Payslips from Rockit Apple.
  • Enable end-user to ask in Cantonese details of a Payslip
  • Retrieve data from an Athena Table where the extracted Payslip details are stored
  • Create a Chatbot that receives questions in Cantonese about the user's Payslips stored in the Athena Table, and generates a response back to the user in Cantonese

So what's the Architecture?

Architecture

Note

I've only tried it for Payslips generated by this employer: Rockit Apple

Deploy it for yourself to try out

Prerequisites

  • Python 3.12 installed - the only version I've validated
  • Pip installed
  • Node.js and npm Installed
  • CDK installed - using npm install -g aws-cdk
  • AWS CLI Profile configured

Deployment CLI Commands

  • Open up a terminal
  • And run the following commands
git clone git@github.com:chiwaichan/rockitapple-payslip-analyzer-with-genai-chatbot-using-bedrock-streamlit.git 
cd rockitapple-payslip-analyzer-with-genai-chatbot-using-bedrock-streamlit
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cdk deploy

If all goes well

You should see this as a result of calling the cdk deploy command

CDK Deploy

Check that the CloudFormation Stack is being created in the AWS Console

CloudFormation Create

Click on it to see the Events, Resources and Output for the Stack

CloudFormation Create Events

Find the S3 Bucket to upload Payslip PDFs into: in the Stack's Resources, find the S3 Bucket with a Logical ID that starts with "sourcepayslips" and click on its Physical ID link.

S3 Buckets

Upload your PDF Payslips into here S3 Source Payslip PDFs

Find the S3 Bucket where the extracted data for the Athena Table will be stored: in the Stack's Resources, find the S3 Bucket with a Logical ID that starts with "PayslipAthenaDataBucket" and click on its Physical ID link.

CloudFormation S3 Buckets

There you will find a JSON file; it should take a few minutes to appear after you upload the PDF.

Athena Table JSON file in S3 Bucket

It was created by the Lambda shown in the architecture diagram we saw earlier. The Lambda uses Amazon Textract to extract the data from each Payslip via OCR, using the Queries feature, which lets us write queries in natural language to configure what to extract from a PDF. Find the "app.py" file shown in the folder structure in the screenshot below; you can modify the wording of the Questions the Lambda function uses to suit the wording of your own Payslips. The result of each Question is saved to the Athena table under the column name shown next to the Question.

Textract Queries
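
The repo's Lambda is Python, but the shape of the Textract Queries call is the same across SDKs; here is a minimal sketch with the JavaScript SDK v3 (bucket, key, and query wording are placeholders):

import {
  TextractClient,
  StartDocumentAnalysisCommand,
  GetDocumentAnalysisCommand,
} from '@aws-sdk/client-textract';

const textract = new TextractClient({});

// Ask Textract natural-language questions about the payslip PDF
const { JobId } = await textract.send(
  new StartDocumentAnalysisCommand({
    DocumentLocation: { S3Object: { Bucket: 'source-payslips-bucket', Name: 'payslip.pdf' } },
    FeatureTypes: ['QUERIES'],
    QueriesConfig: {
      Queries: [
        { Text: 'What is the employee name?', Alias: 'employee_name' },
        { Text: 'What is the net pay for this period?', Alias: 'net_pay' },
      ],
    },
  })
);

// In practice, poll until JobStatus is SUCCEEDED;
// QUERY_RESULT blocks carry the extracted answers
const analysis = await textract.send(new GetDocumentAnalysisCommand({ JobId }));
const answers = (analysis.Blocks ?? []).filter((b) => b.BlockType === 'QUERY_RESULT');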

What it looks like in action

Go to the CloudFormation Stack's Outputs to get the URL to open the Streamlit Application's frontend.

Click the value for the Key "StreamlitFargateServiceServiceURL"

Streamlit URL

That will take you to a Streamlit App hosted in the Fargate Container shown in the architecture diagram

Streamlit App

Let's try out some examples

Example 1 Example 2 Example 3 Example 4 1 payslip

Things don't always go well

Error

You can tweak the Athena Queries generated by the LLM by providing specific examples tailored to your Athena Table and its column names and values, a technique known as Few-Shot Learning. Modify this file to tweak the Queries fed into the Few-Shot examples used by Bedrock and the Streamlit app; a hypothetical example pair is sketched below.

Few Shot Examples
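
As an illustration, a few-shot pair maps a natural-language question to the SQL you want the LLM to imitate (the table and column names below are hypothetical, not the repo's actual schema):

Question: How much was the net pay for the week ending 12 March 2023?
SQL: SELECT net_pay FROM payslips WHERE week_ending = '2023-03-12';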

Thanks to this repo

I was able to learn and build my first GenAI app: AWS Samples - genai-quickstart-pocs

I based my app on the Athena example: I wrapped the Streamlit app in a Fargate Container and added Textract to extract Payslip details from PDFs, and this app was the output of that.