
From brain signal to actionable data: solving EEG noise in the field

EEG is the most versatile tool we have for measuring brain activity in real time. It is non-invasive, highly portable, and provides excellent temporal resolution, capturing neural shifts within milliseconds. This makes it the primary engine behind the next generation of brain-computer interfaces (BCIs).

However, its greatest strength is also its fundamental weakness. Because the sensors record microvolt-scale voltages through the scalp and skull, the signal is incredibly faint and the EEG data is extremely "noisy." In a real-world setting, a simple eye blink or a tensed jaw muscle creates an "electrical storm" that drowns out the neural activity researchers aim to capture.

As we move from controlled lab environments to unpredictable real-world scenarios like workload monitoring for pilots or training neuroadaptive AI, solving this signal-to-noise challenge is the key to making neurotech viable. Overcoming this challenge requires a sophisticated algorithmic approach to separate and identify the underlying signal.

Zypher EEG patch

Decomposing the EEG signal

It starts with Independent Component Analysis (ICA): the algorithmic "filter" that untangles the mess.

To understand ICA, imagine a crowded cocktail party. You have two people speaking and two microphones placed between them. Each microphone captures a messy mixture of both voices. To make sense of the conversation, you need to "unmix" the recordings to isolate the two original, independent sources.

In EEG, your scalp electrodes are the microphones. They don't just record "focus" or "stress"; they record a jumble of brain waves, heartbeats, eye movements, and more. ICA is the mathematical tool that allows us to reverse-engineer this mixture, separating the "brain voice" from the "noise voice."
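The cocktail-party idea can be sketched in a few lines. The example below is illustrative, not our production pipeline: it mixes two synthetic sources (a 10 Hz alpha-like rhythm and a square-wave "blink" artifact) into two simulated electrode recordings, then uses scikit-learn's FastICA to recover the independent sources.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two independent "sources": a neural-like rhythm and a spiky artifact
s1 = np.sin(2 * np.pi * 10 * t)          # 10 Hz alpha-like oscillation
s2 = np.sign(np.sin(2 * np.pi * 1 * t))  # square-wave "blink" artifact
S = np.c_[s1, s2]

# Each "electrode" records a different weighted mixture of both sources
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T  # shape (2000, 2): the messy recordings

# ICA recovers statistically independent components
# (only up to sign, scale, and ordering -- labels come later)
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
```

Note that ICA only separates the sources; it cannot tell you which recovered component is brain activity and which is an artifact. That is the classification problem described next.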

Recognizing the pattern

The next step is smarter signal classification: identifying what each component represents and labeling it as a particular noise type or brain-signal type. Modern automated tools (such as the popular ICLabel algorithm) are trained on thousands of examples using features that include:

  1. Autocorrelation: analyzing how the signal resembles itself over time.
  2. Power spectral density: measuring how much power each frequency contributes to the signal.
  3. Topography: mapping where the signal originates across the scalp.

The first two features depend on the time series, while topography requires information about electrode positions across the whole head. When working with our Zypher EEG patch, full topographical information isn't available because the patch doesn't cover the entire scalp.
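The two time-series features are straightforward to compute. The sketch below, on a synthetic noisy 10 Hz component (the sampling rate and signal are assumptions for illustration), derives a normalized autocorrelation with NumPy and a power spectral density with SciPy's Welch estimator:

```python
import numpy as np
from scipy.signal import welch

fs = 250  # assumed sampling rate in Hz
rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / fs)

# Illustrative ICA component: 10 Hz rhythm buried in noise
comp = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def autocorrelation(x, max_lag):
    """Normalized autocorrelation for lags 0..max_lag."""
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[x.size - 1:]
    return full[: max_lag + 1] / full[0]

# Feature 1: how the signal resembles itself over time
ac = autocorrelation(comp, max_lag=fs // 2)

# Feature 2: how much power each frequency carries (Welch's method)
freqs, psd = welch(comp, fs=fs, nperseg=fs)

# A rhythmic brain component shows a clear spectral peak;
# blink or muscle artifacts have very different profiles
peak_freq = freqs[np.argmax(psd)]
```

Crucially, neither feature needs to know where on the head the electrode sits, which is what makes them usable without full scalp coverage.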

Therefore, at Zander Labs, we are building our solutions on models that deliver high performance without relying on topographical maps. By focusing on how brain signals behave over time rather than where they come from spatially, we ensure that data captured with the Zypher patch is robust enough for artifact labeling. As a result, the data can be reliably used in high-stakes industries and physical AI systems, allowing it to become truly scalable for the real world.
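To make the idea concrete, here is a minimal sketch of topography-free component labeling. It is a toy stand-in for the real models, not Zander Labs' actual classifier: it generates synthetic "brain" and "blink" components, extracts two purely temporal features (alpha-band power ratio and lag-1 autocorrelation), and fits a logistic regression to tell them apart.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

fs = 250  # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)

def make_component(kind):
    """Synthetic ICA component: alpha-band 'brain' vs slow 'blink'."""
    if kind == "brain":
        sig = np.sin(2 * np.pi * rng.uniform(8, 12) * t)
    else:
        sig = np.sign(np.sin(2 * np.pi * rng.uniform(0.2, 0.5) * t))
    return sig + 0.5 * rng.standard_normal(t.size)

def features(x):
    """Two time-series features; no electrode positions required."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    alpha_ratio = psd[(freqs >= 8) & (freqs <= 12)].sum() / psd.sum()
    x0 = x - x.mean()
    lag1 = np.dot(x0[:-1], x0[1:]) / np.dot(x0, x0)
    return [alpha_ratio, lag1]

# Build a small labeled set: 1 = brain component, 0 = blink artifact
X = np.array([features(make_component(k)) for k in ["brain", "blink"] * 20])
y = np.array([1, 0] * 20)

clf = LogisticRegression().fit(X, y)
```

Real components and artifact classes are far messier than this, but the structure is the same: features derived from how the signal behaves over time stand in for the topographical map a full-scalp montage would provide.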

Curious about how mental state data could work within your AI systems? We'd love to explore it with you.
