WUKS AI by PIXAM
Architecture

Four layers.
One decision.

WUKS AI is not a single model — it is a layered architecture where each stage refines raw data into a cleaner, more trusted signal before the next layer acts on it. No layer makes a decision it has not earned.

WUKS AI signal processing pipeline
1

Signal intelligence

Every satellite observation is evaluated before a position fix is computed. Layer 1 produces a per-satellite quality score that all downstream layers inherit — if the signal cannot be trusted, nothing built on it will be either.

  • LOS / NLOS classification: Each incoming satellite signal is classified as line-of-sight or non-line-of-sight using carrier-to-noise density ratios and elevation-angle models trained on PIXAM outdoor datasets across multiple terrain types.
  • Multipath detection: Multipath signatures are extracted from pseudorange and carrier-phase divergence. Per-satellite reliability scores are computed before the navigation filter sees the observations.
  • RF interference monitoring: Spectral analysis runs continuously to flag jamming patterns and spoofing anomalies. Detection triggers a confidence floor reduction, alerting Layer 3 before positioning degrades enough to affect mission behaviour.
  • Fix confidence score (0–100): A single integer aggregating multi-constellation health across all active satellites. This score is the primary handoff to every other layer and every external system that WUKS AI serves.
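The aggregation described above can be sketched as follows. The field names, the C/N0 normalisation range and the simple mean are illustrative assumptions for this sketch, not the production scoring model:

```python
from dataclasses import dataclass

@dataclass
class SatObservation:
    """Per-satellite quality inputs from Layer 1 (illustrative fields)."""
    cn0_dbhz: float         # carrier-to-noise density ratio, dB-Hz
    los_probability: float  # LOS/NLOS classifier output, 0..1
    multipath_score: float  # 0 = clean, 1 = severe multipath

def satellite_reliability(obs: SatObservation) -> float:
    """Blend signal strength, LOS probability and multipath into 0..1."""
    # Normalise C/N0 into 0..1 over an assumed 25-50 dB-Hz working range.
    cn0 = min(max((obs.cn0_dbhz - 25.0) / 25.0, 0.0), 1.0)
    return cn0 * obs.los_probability * (1.0 - obs.multipath_score)

def fix_confidence(observations: list[SatObservation]) -> int:
    """Aggregate per-satellite reliabilities into a single 0-100 integer."""
    if not observations:
        return 0
    mean = sum(satellite_reliability(o) for o in observations) / len(observations)
    return round(100 * mean)
```

A clean, high-elevation satellite contributes close to its full weight, while a low-C/N0, likely-NLOS satellite is discounted towards zero before the navigation filter ever sees it.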
GNSS base station feeding Layer 1
GPS · Galileo · GLONASS · BeiDou
RTK + PPP input · RTCM 3.x · NTRIP compatible

2

Sensor fusion

When Layer 1 signals degradation, Layer 2 steps up — blending satellite data with onboard sensor streams to maintain a continuous, trusted position estimate without mission interruption.

IMU

Inertial measurement

6-axis IMU data is tightly coupled with GNSS observables through an extended Kalman filter. Bridging periods of up to 30 seconds under full GNSS outage have been validated on PIXAM field platforms.

EKF tight coupling · 30 s outage bridge
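The outage-bridging behaviour can be illustrated with a deliberately simplified dead-reckoning sketch. This stands in for only the predict step of a tightly coupled EKF; function names and the flat 2D state are assumptions, and the 30 s limit is the validated window quoted above:

```python
def imu_predict(pos, vel, accel, dt):
    """One dead-reckoning step; pos, vel, accel are (x, y) tuples."""
    new_pos = tuple(p + v * dt + 0.5 * a * dt ** 2
                    for p, v, a in zip(pos, vel, accel))
    new_vel = tuple(v + a * dt for v, a in zip(vel, accel))
    return new_pos, new_vel

def bridge_outage(pos, vel, imu_samples, dt, max_bridge_s=30.0):
    """Dead-reckon through a GNSS outage, refusing to extrapolate
    past the validated 30-second bridging window."""
    elapsed = 0.0
    for accel in imu_samples:
        if elapsed + dt > max_bridge_s:
            raise RuntimeError("outage exceeds validated bridging window")
        pos, vel = imu_predict(pos, vel, accel, dt)
        elapsed += dt
    return pos, vel
```

In the full system, GNSS observables re-entering the filter would correct the drift accumulated during the bridged interval.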
OPT

Optical flow & visual odometry

Downward-facing optical sensors and stereo depth inputs contribute velocity and heading estimates. Each input is weighted dynamically against the current GNSS confidence score from Layer 1.

Dynamic weighting · Velocity + heading
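The dynamic weighting against the Layer 1 confidence score can be sketched like this. The linear blend is a hypothetical simplification; the production fusion runs inside the EKF rather than as a direct weighted average:

```python
def fuse_velocity(gnss_vel, optical_vel, fix_confidence):
    """Blend GNSS-derived and optical-flow velocity estimates.

    fix_confidence is the Layer 1 score (0-100). At full confidence
    the GNSS estimate dominates; as confidence falls, weight shifts
    to the optical input. Tuples are (vx, vy) in m/s.
    """
    w = max(0.0, min(1.0, fix_confidence / 100.0))
    return tuple(w * g + (1.0 - w) * o
                 for g, o in zip(gnss_vel, optical_vel))
```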
ENV

Environmental context

Barometric altitude, terrain classification and canopy-density estimates inform the NLOS model — so the system anticipates signal blockage before it arrives, not after.

Predictive NLOS
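One way to picture the predictive NLOS idea: discount the current LOS probability by the canopy density expected along the upcoming trajectory, so confidence starts falling before the blockage is reached. The linear discount and function name here are illustrative assumptions only:

```python
def anticipated_los(current_los_prob, canopy_density_ahead):
    """Discount LOS probability by expected canopy density (0..1)
    along the planned path, anticipating blockage before it arrives."""
    density = max(0.0, min(1.0, canopy_density_ahead))
    return current_los_prob * (1.0 - density)
```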
Onboard robot camera in WUKS AI sensor fusion
GNSS rover unit with fusion sensors
Drone sensor fusion output display

3

Autonomy control loop

Layer 3 consumes the fused position and confidence score to drive real-time decisions. It is not a path planner — it is the layer that decides whether the path planner should be trusted at all.

  • Confidence-gated motion commands: Velocity and heading commands are scaled proportionally to live positioning confidence. Robots and drones slow, hold position or return to home automatically as fix quality degrades — no human intervention required.
  • Real-time path replanning: If GNSS quality degrades along a planned route, Layer 3 replans around known degradation zones using a continuously updated signal-quality map built during the mission itself.
  • Hardened abort logic: Safe-state triggers — hold, land, return — activate when confidence falls below operator-configurable thresholds. This logic runs independently of the mission planner and cannot be overridden by it.
  • MAVLink & ROS output: Motion commands, confidence scores and abort events are published over MAVLink v2 and ROS 2 topics simultaneously, with sub-10 ms latency from sensor input to command output on tested edge hardware.
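The confidence-gated behaviour can be sketched as a small decision function. The threshold defaults here are invented for illustration; the text says only that they are operator-configurable:

```python
def gate_motion(confidence, hold_threshold=40, rth_threshold=20):
    """Map live fix confidence (0-100) to a motion decision.

    Returns (action, velocity_scale). Thresholds are hypothetical
    defaults; in the real system they are operator-configurable.
    """
    if confidence < rth_threshold:
        return "return_to_home", 0.0
    if confidence < hold_threshold:
        return "hold", 0.0
    # Above the gate, commanded velocity scales with remaining margin.
    scale = (confidence - hold_threshold) / (100 - hold_threshold)
    return "proceed", scale
```

For example, at full confidence the vehicle proceeds at commanded speed; as the score decays it slows, then holds, then returns home, with no input from the mission planner.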
WUKI robot executing Layer 3 control commands
MAVLink v2 · ROS / ROS 2
Edge deployable · ARM · Jetson

4

Logic, tools & integration

Layer 4 is the interface between WUKS AI and the wider PIXAM and Diginto.tech ecosystem — exporting intelligence to visual programming tools, dashboards and third-party systems.

LG

Logik integration

WUKS AI confidence scores and signal events are available as live data nodes inside Logik's visual block programming environment. Engineers and students can build conditional logic flows that respond to real GNSS intelligence — no code required.

Live data nodes · Conditional blocks · No-code logic
DB

Diginto.tech dashboards

Real-time signal health metrics, fusion weights and degradation events stream directly into the Diginto.tech live GNSS dashboards via MQTT — giving operators, students and researchers a live window into what WUKS AI is seeing and deciding.

MQTT stream · Live dashboard · Diginto.tech
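The shape of such a dashboard message might look like the sketch below. The field names and topic structure are assumptions, not the actual Diginto.tech schema; publishing the resulting JSON string would be done with any standard MQTT client:

```python
import json
import time

def telemetry_payload(fix_confidence, fusion_weights, events):
    """Assemble a JSON telemetry message of the kind a dashboard
    might consume over MQTT. All field names are hypothetical."""
    message = {
        "ts": time.time(),              # publish timestamp, Unix seconds
        "fix_confidence": fix_confidence,  # Layer 1 score, 0-100
        "fusion_weights": fusion_weights,  # e.g. {"gnss": 0.8, "optical": 0.2}
        "events": events,               # degradation / abort event names
    }
    return json.dumps(message)
```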
DR

Aerial drone intelligence

In-flight GNSS quality assessment feeds directly into waypoint controller decisions. WUKS AI can redirect, hold or abort a drone mission mid-flight based on live signal geometry — without pilot input and without relying on a ground station.

In-flight QA · Adaptive waypoints
3P

Third-party compatibility

All Layer 4 outputs — confidence scores, health telemetry, abort events — are available over open protocols. Integrating WUKS AI into an existing stack requires no proprietary SDK, only a standard MAVLink, ROS 2 or MQTT client.

No proprietary SDK · Open protocols · MAVLink · ROS 2 · MQTT
Logik visual programming with WUKS AI data nodes
Diginto.tech live GNSS signal dashboard
Drone control interface with WUKS AI integration

Not synthetic. Not simulated.

Every WUKS AI model is trained on structured logging runs captured during actual PIXAM operations — outdoor robots, multi-constellation base stations and aerial missions, in working conditions.

DS

PIXAM field datasets

Structured runs capturing raw GNSS observables, RTCM streams, IMU readings and environmental metadata — collected across multiple platforms, terrains and weather conditions.

LB

Engineering-quality labels

Ground-truth labels for LOS/NLOS, multipath severity, fix type and confidence are generated using post-processed RTK references and manually reviewed by PIXAM engineers.

CV

Continuous field validation

New deployment data from live PIXAM systems feeds back into the training pipeline. Models are updated against real-world distribution shift, not frozen at initial release.

GNSS dataset signal monitoring
Mobile platform data collection run
PIXAM base station dataset logging