The program allows recording from the Galea neurotechnology headset while the user is in an immersive VR environment. These recordings can then be played back, annotated, analyzed, and used to train machine learning algorithms that predict cognitive states and task performance.
The following demo shows the OpenBCI Research Tool (right half of the screen) with a test subject performing in a driving simulator.
Estimated cognitive state metrics (Stress, Alertness, Cognitive Load) are shown on the far right.
Demonstration of closed-loop neurostimulation using feedback from the stress metric
Stimulation is provided to the ear via Transcutaneous Auricular Neurostimulation (tAN) in response to high levels of stress
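The control logic for a loop like this can be as simple as a thresholded trigger with hysteresis. The sketch below is illustrative only: the `StressSource` and `TanStimulator` types and the threshold values are hypothetical stand-ins, not the program's actual API.

```cpp
// Minimal sketch of a threshold-based closed-loop stimulation trigger.
// StressSource, TanStimulator, and the thresholds are hypothetical names,
// not the real program's interfaces.
#include <chrono>
#include <thread>

struct StressSource {            // hypothetical: yields the latest stress estimate in [0, 1]
    double latest() const { return 0.0; /* placeholder */ }
};

struct TanStimulator {           // hypothetical tAN driver interface
    void start() {}
    void stop() {}
};

int main() {
    StressSource stress;
    TanStimulator stim;
    const double kOnThreshold  = 0.7;   // start stimulation above this stress level
    const double kOffThreshold = 0.5;   // stop below this (hysteresis avoids rapid toggling)
    bool stimulating = false;

    while (true) {
        const double s = stress.latest();
        if (!stimulating && s > kOnThreshold) {
            stim.start();
            stimulating = true;
        } else if (stimulating && s < kOffThreshold) {
            stim.stop();
            stimulating = false;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}
```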
Example of real-time visualization tools for a simple preprocessing and feature extraction pipeline. In this case, we're looking at filtering and finding peaks from photoplethysmography data.
Here, the two raw PPG channels (pink and purple) are the digitized light-transmittance data from the sensor. A weighted average of the two channels is taken and outliers are removed, then the signal is bandpass filtered between 0.8 and 12 Hz to isolate the pulse-wave features. A peak-finding algorithm then locates the peaks, which correspond to peak blood movement in the local capillaries where the measurements are taken. From the intervals between these peaks we estimate the pulse rate (an approximation of heart rate); a simplified sketch of this pipeline is shown below.
More details about PPG signal processing can be found here
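For readers curious how such a pipeline might look in code, here is a minimal, self-contained C++ sketch of the steps described above: weighted channel average, outlier rejection, ~0.8-12 Hz band-pass, peak finding, and a pulse-rate estimate. It is not the program's actual implementation; the moving-average filters, the synthetic input signal, and all parameter values are illustrative assumptions.

```cpp
// Hedged sketch of the described PPG pipeline. Simple moving-average filters
// stand in for the real program's filtering; all constants are illustrative.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Moving-average low-pass over a window of `win` samples.
static std::vector<double> movingAverage(const std::vector<double>& x, std::size_t win) {
    std::vector<double> y(x.size(), 0.0);
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        sum += x[i];
        if (i >= win) sum -= x[i - win];
        y[i] = sum / static_cast<double>(std::min(i + 1, win));
    }
    return y;
}

int main() {
    const double kPi = 3.14159265358979323846;
    const double fs = 50.0;                       // assumed PPG sample rate (Hz)

    // Two raw channels would come from the sensor; synthesized here for illustration.
    std::vector<double> ch1(1000), ch2(1000);
    for (std::size_t i = 0; i < ch1.size(); ++i) {
        double t = i / fs;
        ch1[i] = 1000.0 + 50.0 * std::sin(2.0 * kPi * 1.2 * t);   // ~72 bpm pulse wave
        ch2[i] = 980.0 + 45.0 * std::sin(2.0 * kPi * 1.2 * t);
    }

    // 1. Weighted average of the two channels (weights are illustrative).
    std::vector<double> avg(ch1.size());
    for (std::size_t i = 0; i < avg.size(); ++i) avg[i] = 0.6 * ch1[i] + 0.4 * ch2[i];

    // 2. Crude outlier removal: clamp samples far from a running baseline.
    std::vector<double> baseline = movingAverage(avg, static_cast<std::size_t>(2.0 * fs));
    for (std::size_t i = 0; i < avg.size(); ++i) {
        if (std::abs(avg[i] - baseline[i]) > 200.0) avg[i] = baseline[i];  // threshold is illustrative
    }

    // 3. Crude ~0.8-12 Hz band-pass: short-window low-pass minus long-window trend.
    std::vector<double> lp    = movingAverage(avg, static_cast<std::size_t>(fs / 12.0));
    std::vector<double> trend = movingAverage(avg, static_cast<std::size_t>(fs / 0.8));
    std::vector<double> band(avg.size());
    for (std::size_t i = 0; i < band.size(); ++i) band[i] = lp[i] - trend[i];

    // 4. Peak detection: local maxima above zero, with a ~0.4 s refractory period.
    std::vector<std::size_t> peaks;
    const std::size_t refractory = static_cast<std::size_t>(0.4 * fs);
    for (std::size_t i = 1; i + 1 < band.size(); ++i) {
        bool isMax = band[i] > 0.0 && band[i] >= band[i - 1] && band[i] > band[i + 1];
        if (isMax && (peaks.empty() || i - peaks.back() >= refractory)) peaks.push_back(i);
    }

    // 5. Pulse rate from the mean inter-peak interval.
    if (peaks.size() >= 2) {
        double meanIntervalSec = (peaks.back() - peaks.front()) /
                                 (static_cast<double>(peaks.size() - 1) * fs);
        std::cout << "Estimated pulse rate: " << 60.0 / meanIntervalSec << " bpm\n";
    }
    return 0;
}
```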
This program was written from scratch in C++ and uses minimal dependencies. A full list of dependencies is shared below.
Of the ~20k lines of code in the codebase (not including dependencies), I wrote over 95% and architected the main program.
The code is proprietary and cannot be shared.