Design Patterns

Common design patterns and architectural approaches for building robust AppBlocks applications. These patterns represent best practices learned from real-world deployments.

State Machine Pattern

Problem

Devices with multiple operating states and transitions become hard to manage with ad-hoc conditional logic.

Solution

Implement an explicit state machine:

States: IDLE, HEATING, COOLING, ERROR

IDLE:
- If temp < setpoint - 2: → HEATING
- If temp > setpoint + 2: → COOLING

HEATING:
- If temp >= setpoint: → IDLE
- If sensor error: → ERROR

COOLING:
- If temp <= setpoint: → IDLE
- If sensor error: → ERROR

ERROR:
- If error cleared: → IDLE

Implementation

Variable: current_state = "IDLE"

On Variable Changed (temperature):
    If current_state == "IDLE":
        If temperature < setpoint - 2:
            Digital Line Set: heater = ON
            Variable Set: current_state = "HEATING"
        Else If temperature > setpoint + 2:
            Digital Line Set: cooler = ON
            Variable Set: current_state = "COOLING"

    Else If current_state == "HEATING":
        If temperature >= setpoint:
            Digital Line Set: heater = OFF
            Variable Set: current_state = "IDLE"

    // COOLING and ERROR follow the same structure as the transition table above
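
For reference, here is a minimal Python sketch of the same machine; the set_heater and set_cooler methods are hypothetical stand-ins for the Digital Line Set actions:

HYSTERESIS = 2  # deadband around the setpoint, matching the +/- 2 above

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.state = "IDLE"

    def set_heater(self, on):  # hypothetical hardware hook
        pass

    def set_cooler(self, on):  # hypothetical hardware hook
        pass

    def on_temperature(self, temp, sensor_ok=True):
        if self.state == "ERROR":
            if sensor_ok:                  # error cleared
                self.state = "IDLE"
            return
        if not sensor_ok:
            if self.state in ("HEATING", "COOLING"):
                self.state = "ERROR"
            return
        if self.state == "IDLE":
            if temp < self.setpoint - HYSTERESIS:
                self.set_heater(True)
                self.state = "HEATING"
            elif temp > self.setpoint + HYSTERESIS:
                self.set_cooler(True)
                self.state = "COOLING"
        elif self.state == "HEATING":
            if temp >= self.setpoint:
                self.set_heater(False)
                self.state = "IDLE"
        elif self.state == "COOLING":
            if temp <= self.setpoint:
                self.set_cooler(False)
                self.state = "IDLE"

Because every transition lives in one method, the machine can be unit-tested by feeding it a sequence of temperatures and asserting on the resulting state.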

Benefits

  • Clear behavior definition
  • Easy to test and debug
  • Prevents invalid transitions
  • Self-documenting

Command Queue Pattern

Problem

Need to execute commands sequentially, handle failures, and retry.

Solution

Implement a command queue with status tracking:

Queue: command_queue (data table)
Fields: id, command, status, retry_count, timestamp

Process:
1. Add command to queue (status: pending)
2. Process queue one command at a time
3. Update status: processing, success, failed
4. Retry failed commands
5. Log results

Implementation

On New Command:
    Table Insert: command_queue
    - command: command_type
    - status: "pending"
    - retry_count: 0
    - timestamp: now()

On Scheduled Event (every 5 seconds):
    Query: Get oldest pending command

    If command exists:
        Update status: "processing"
        Execute command

        If success:
            Update status: "success"
        Else:
            Increment retry_count
            If retry_count < 3:
                Update status: "pending"
            Else:
                Update status: "failed"
                Send alert
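
An in-memory Python sketch of the same flow; the list stands in for the command_queue data table, and execute is any callable that returns True on success:

import time

MAX_RETRIES = 3
queue = []  # stand-in for the command_queue data table

def enqueue(command):
    queue.append({"command": command, "status": "pending",
                  "retry_count": 0, "timestamp": time.time()})

def process_one(execute):
    # Pick the oldest pending command, if any
    pending = [c for c in queue if c["status"] == "pending"]
    if not pending:
        return
    cmd = min(pending, key=lambda c: c["timestamp"])
    cmd["status"] = "processing"
    if execute(cmd["command"]):
        cmd["status"] = "success"
    else:
        cmd["retry_count"] += 1
        if cmd["retry_count"] < MAX_RETRIES:
            cmd["status"] = "pending"   # retried on a later tick
        else:
            cmd["status"] = "failed"    # give up; alert here

Calling process_one on a schedule (the 5-second tick above) gives sequential execution with automatic retries, and the queue itself doubles as the audit trail.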

Benefits

  • Reliable command execution
  • Automatic retry logic
  • Audit trail
  • Graceful handling of offline devices

Debounce Pattern

Problem

Sensor readings fluctuate, causing unnecessary actions.

Solution

Wait for the value to stabilize before acting:

Variables:
- last_value
- stable_value
- stability_count
- STABILITY_THRESHOLD = 3

On Variable Changed (sensor):
    If sensor == last_value:
        stability_count++
        If stability_count >= STABILITY_THRESHOLD:
            stable_value = sensor
            Trigger stable value action
    Else:
        last_value = sensor
        stability_count = 0

Note: this assumes the sensor variable is written on every sample, even when the value repeats; otherwise the equality branch never fires.
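
The same logic as a small Python class; on_stable is a hypothetical callback for the stable-value action:

STABILITY_THRESHOLD = 3  # consecutive identical samples required

class Debouncer:
    def __init__(self, on_stable):
        self.on_stable = on_stable
        self.last_value = None
        self.count = 0

    def sample(self, value):
        if value == self.last_value:
            self.count += 1
            if self.count == STABILITY_THRESHOLD:
                self.on_stable(value)   # fire once per stable period
        else:
            self.last_value = value
            self.count = 0

Comparing the count with == rather than >= fires the action once per stable period instead of on every subsequent sample.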

Benefits

  • Reduces false triggers
  • Filters noise
  • More reliable automation
  • Lower processing overhead

Heartbeat Pattern

Problem

Need to detect device failures and communication loss.

Solution

Regular heartbeat with timeout detection:

Device Side:
    Every 30 seconds:
        MQTT Publish: devices/001/heartbeat
        Payload: {"timestamp": now()}

Server/Cloud Side:
    On MQTT Message (devices/+/heartbeat):
        Update last_heartbeat_time

    On Scheduled Event (every minute):
        For each device:
            If now() - last_heartbeat_time > 90 seconds:
                Mark device as offline
                Send alert
                Update dashboard
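
The server-side bookkeeping, sketched in Python with the MQTT transport omitted; mark_offline is a hypothetical alert hook:

import time

TIMEOUT = 90  # seconds without a heartbeat before a device counts as offline
last_heartbeat = {}  # device_id -> time of last heartbeat

def on_heartbeat(device_id):
    last_heartbeat[device_id] = time.monotonic()

def check_timeouts(mark_offline):
    now = time.monotonic()
    for device_id, seen in last_heartbeat.items():
        if now - seen > TIMEOUT:
            mark_offline(device_id)  # alert, dashboard update, etc.

Using a monotonic clock keeps the check immune to wall-clock adjustments, and the timeout (90 s) is deliberately longer than the publish interval (30 s) so a single lost message does not mark a device offline.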

Benefits

  • Early failure detection
  • Automatic status tracking
  • Network issue detection
  • Monitoring integration

Circuit Breaker Pattern

Problem

External service failures cascade and overwhelm the system.

Solution

Implement a circuit breaker to fail fast:

States: CLOSED, OPEN, HALF_OPEN

CLOSED (Normal):
- Allow requests
- Count failures
- If failures > threshold: → OPEN

OPEN (Blocked):
- Reject requests immediately
- Wait timeout period
- After timeout: → HALF_OPEN

HALF_OPEN (Testing):
- Allow one test request
- If success: → CLOSED
- If failure: → OPEN

Implementation

Variables:
- circuit_state = "CLOSED"
- failure_count = 0
- last_failure_time = 0

On API Request:
    If circuit_state == "OPEN":
        If now() - last_failure_time > 60000: // 60 seconds
            circuit_state = "HALF_OPEN"
        Else:
            Return error "Circuit Open"

    Try API call:
        If success:
            failure_count = 0
            circuit_state = "CLOSED"
        Else:
            failure_count++
            last_failure_time = now()

            If failure_count >= 5:
                circuit_state = "OPEN"
                Log "Circuit breaker opened"

Note: a failed HALF_OPEN probe reopens the circuit immediately, because failure_count is still at or above the threshold.

Benefits

  • Prevents cascading failures
  • Reduces load on failing services
  • Automatic recovery
  • Faster failure detection

Batch Processing Pattern

Problem

Sending data individually is inefficient and costly.

Solution

Batch multiple items before sending:

Variables:
- batch_buffer (array)
- batch_count = 0
- BATCH_SIZE = 10
- last_send_time = 0

On New Data:
    Add to batch_buffer
    batch_count++

    If batch_count >= BATCH_SIZE:
        Send batch
        Clear buffer
        batch_count = 0
        last_send_time = now()

On Scheduled Event (every 5 minutes):
    If batch_count > 0 AND (now() - last_send_time > 300000):
        Send batch
        Clear buffer
        batch_count = 0
        last_send_time = now()
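
Both triggers in one small Python class; send is any callable that transmits a list of items:

import time

BATCH_SIZE = 10
MAX_AGE = 300.0  # seconds before a partial batch is flushed anyway

class Batcher:
    def __init__(self, send):
        self.send = send
        self.buffer = []
        self.last_send = time.monotonic()

    def add(self, item):
        self.buffer.append(item)
        if len(self.buffer) >= BATCH_SIZE:
            self.flush()              # size-based trigger

    def tick(self):                   # call on a schedule
        if self.buffer and time.monotonic() - self.last_send > MAX_AGE:
            self.flush()              # time-based trigger

    def flush(self):
        self.send(self.buffer)
        self.buffer = []
        self.last_send = time.monotonic()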

Benefits

  • Reduced network usage
  • Lower costs
  • More efficient transmission
  • Time- and size-based flush triggers

Retry with Exponential Backoff Pattern

Problem

Network requests fail intermittently; a smart retry strategy is needed.

Solution

Increase wait time between retries:

retry_count = 0
base_delay = 1000 // 1 second

While retry_count < MAX_RETRIES:
    Try request:
        If success:
            Return result

    retry_count++
    delay = base_delay * (2 ^ retry_count) + random(0, 1000)
    // Retry 1: ~2s, Retry 2: ~4s, Retry 3: ~8s, plus up to 1s of jitter

    Wait delay milliseconds

Return error
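
The same strategy in Python; request is a zero-argument callable that raises on failure:

import random
import time

MAX_RETRIES = 3
BASE_DELAY = 1.0  # seconds

def request_with_backoff(request):
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return request()
        except Exception:
            if attempt == MAX_RETRIES:
                raise                  # out of retries: surface the error
            # Exponential growth plus random jitter to desynchronize clients
            time.sleep(BASE_DELAY * (2 ** attempt) + random.uniform(0, 1))

The jitter matters: without it, many devices that failed together retry together, recreating the thundering herd the backoff is meant to prevent.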

Benefits

  • Handles temporary failures
  • Reduces server load
  • Better success rate
  • Prevents thundering herd

Observer Pattern

Problem

Multiple parts of the application need to react to the same event.

Solution

Use variable changes as event notifications:

Central Event Variable: event_type

Observer 1:
    On Variable Changed (event_type):
        If event_type == "alarm_triggered":
            Send notification

Observer 2:
    On Variable Changed (event_type):
        If event_type == "alarm_triggered":
            Log to database

Observer 3:
    On Variable Changed (event_type):
        If event_type == "alarm_triggered":
            Activate siren
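
A generic Python rendering of the pattern; the print calls stand in for the real actions:

observers = []  # list of callables, one per observer

def subscribe(handler):
    observers.append(handler)

def publish(event_type):
    for handler in observers:   # every observer sees every event
        handler(event_type)

subscribe(lambda e: print("send notification") if e == "alarm_triggered" else None)
subscribe(lambda e: print("log to database") if e == "alarm_triggered" else None)
subscribe(lambda e: print("activate siren") if e == "alarm_triggered" else None)

publish("alarm_triggered")

New observers can be added without touching the publisher or the other observers.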

Benefits

  • Decoupled components
  • Easy to add observers
  • Flexible event handling
  • Clear event flow

Watchdog Pattern

Problem

The application may hang or enter an invalid state.

Solution

Implement a watchdog timer:

Variables:
- watchdog_fed = true
- WATCHDOG_INTERVAL = 5 seconds

Main Application Loop:
    // Do work

    // Feed watchdog
    Variable Set: watchdog_fed = true

Watchdog Timer (every WATCHDOG_INTERVAL):
    If watchdog_fed == false:
        // Watchdog timeout: the main loop missed a whole interval
        Log error
        System reboot
    Else:
        Variable Set: watchdog_fed = false

// If the main loop stops feeding, a reboot occurs within two intervals
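
A software-only Python sketch (real deployments usually back this with a hardware watchdog; os._exit stands in for the reboot):

import os
import threading
import time

CHECK_INTERVAL = 5.0  # seconds between watchdog checks
watchdog_fed = True

def feed():               # call from the main loop on every iteration
    global watchdog_fed
    watchdog_fed = True

def watchdog_loop():
    global watchdog_fed
    while True:
        time.sleep(CHECK_INTERVAL)
        if not watchdog_fed:
            print("watchdog timeout")
            os._exit(1)   # hard exit; a supervisor or reboot restarts the app
        watchdog_fed = False  # must be fed again before the next check

threading.Thread(target=watchdog_loop, daemon=True).start()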

Benefits

  • Automatic recovery
  • Prevents hung states
  • Improves reliability
  • Simple implementation

Factory Reset Pattern

Problem

Need a safe way to restore factory defaults.

Solution

Multi-step confirmation process:

Variable: reset_stage = 0

On Button Press (reset):
    If reset_stage == 0:
        reset_stage = 1
        Start timer (10 seconds)   // confirmation window
        Display "Press again to confirm"
    Else If reset_stage == 1:
        reset_stage = 2
        Display "Hold for 5 seconds"
        Start timer (5 seconds)

On Button Release:
    If reset_stage == 2:           // hold aborted before 5 seconds elapsed
        reset_stage = 0
        Cancel timers

On Timer Complete:
    If reset_stage == 2:
        Perform factory reset
        Reboot device
    Else:
        reset_stage = 0            // confirmation window expired
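
The staged logic as a Python class; factory_reset, start_timer, and cancel_timer are hypothetical hooks into the device runtime:

class ResetStager:
    def __init__(self, factory_reset, start_timer, cancel_timer):
        self.stage = 0
        self.factory_reset = factory_reset
        self.start_timer = start_timer
        self.cancel_timer = cancel_timer

    def on_press(self):
        if self.stage == 0:
            self.stage = 1
            self.start_timer(10)     # confirmation window
        elif self.stage == 1:
            self.stage = 2
            self.start_timer(5)      # required hold duration

    def on_release(self):
        if self.stage == 2:          # hold aborted early
            self.stage = 0
            self.cancel_timer()

    def on_timer(self):
        if self.stage == 2:
            self.factory_reset()     # held long enough: reset and reboot
        else:
            self.stage = 0           # window expired without confirmation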

Benefits

  • Prevents accidental reset
  • User-friendly
  • Safe operation
  • Clear feedback

Telemetry Aggregation Pattern

Problem

Too much raw sensor data to send continuously.

Solution

Aggregate data locally, send summaries:

Every 1 second:
    Read sensor value
    Update running statistics:
    - min_value
    - max_value
    - sum_value
    - count

Every 5 minutes:
    Calculate:
    - average = sum_value / count
    - range = max_value - min_value

    Send telemetry:
    - timestamp
    - average
    - min
    - max
    - count

    Reset statistics
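
The running statistics as a Python class; summary would feed whatever telemetry call the application uses:

import time

class Aggregator:
    def __init__(self):
        self.reset()

    def reset(self):
        self.min_value = float("inf")
        self.max_value = float("-inf")
        self.sum_value = 0.0
        self.count = 0

    def sample(self, value):          # call once per reading
        self.min_value = min(self.min_value, value)
        self.max_value = max(self.max_value, value)
        self.sum_value += value
        self.count += 1

    def summary(self):                # call once per reporting interval
        if self.count == 0:
            return None               # nothing sampled this interval
        report = {
            "timestamp": time.time(),
            "average": self.sum_value / self.count,
            "min": self.min_value,
            "max": self.max_value,
            "count": self.count,
        }
        self.reset()
        return report

At one sample per second and one report per five minutes, this reduces 300 raw values to a single summary record.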

Benefits

  • Reduced data volume
  • Lower costs
  • Useful statistics
  • Trend analysis

Graceful Degradation Pattern

Problem

System should remain functional when components fail.

Solution

Implement fallback behaviors (a mode-selection sketch follows the list):

Normal Mode:
- Cloud connectivity
- Full features
- Remote control

Degraded Mode (No Cloud):
- Local operation only
- Cache data locally
- Basic features
- Sync when restored

Emergency Mode (Sensor Failure):
- Use defaults
- Manual control only
- Alert operator
- Safe state
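
Mode selection reduces to a small decision function; a Python sketch, where cloud_ok and sensors_ok are hypothetical health checks:

def select_mode(cloud_ok, sensors_ok):
    # Prefer the richest mode the current component health allows
    if not sensors_ok:
        return "EMERGENCY"   # defaults, manual control, safe state
    if not cloud_ok:
        return "DEGRADED"    # local operation, cache locally, sync later
    return "NORMAL"          # full features, remote control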

Benefits

  • Improved reliability
  • Continued operation
  • User confidence
  • Clear failure modes

See Also