[2022-1] Text-spelling brain-computer interface using split-eye stimulation of the optic nerve
Using VR, we stimulate each eye separately (top); we integrate a fully conformal system onto the subject (bottom)
Video demo of a test subject using the stimulus-driven text speller in response to a prompt
Project Description: A text-speller human-machine interface for individuals with severe motor disability, enabled by a novel split-eye asynchronous stimulation paradigm delivered through a VR headset.
Results:
Using a VR headset, we generate complex steady-state visual stimuli, with a separate stimulation frequency for each eye.
Such an interface supports many target classes at high throughput (>240 bits/min); the sketch below shows how class count, accuracy, and selection time combine into that figure.
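Throughput figures like this are conventionally reported as the Wolpaw information transfer rate (ITR), which depends on the number of selectable classes, the selection accuracy, and the time per selection. The Python sketch below computes that formula; the parameter values at the bottom are hypothetical placeholders (not the study's measured results), chosen only to illustrate how a large class count can push ITR past 240 bits/min.

```python
import math

def itr_bits_per_min(n_classes: int, accuracy: float, selection_time_s: float) -> float:
    """Wolpaw information transfer rate (bits/min) for an N-class BCI."""
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance level, no information is transferred
    bits = math.log2(n_classes) + accuracy * math.log2(accuracy)
    if accuracy < 1.0:  # the (1 - P) term vanishes when accuracy is perfect
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    return bits * 60.0 / selection_time_s

# Hypothetical illustration only -- not the project's measured parameters:
# 40 targets, 90% selection accuracy, 1.0 s per selection.
print(f"{itr_bits_per_min(40, 0.90, 1.0):.1f} bits/min")
```

With those placeholder numbers the formula gives roughly 260 bits/min, showing why a stimulation scheme that supports many simultaneous classes is the key lever for throughput.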
Role:
Conceptualized, planned, and managed all aspects of the project: designed custom circuits, firmware, software, data processing, and machine learning pipelines, and completed systems integration
Devised and studied a new BCI paradigm: split-eye asynchronous stimulation using a VR headset
Skills/Tools used:
Project Conception, System Design & Integration
AutoCAD (2D design for microfabrication)
Electronic design: EEG analog front end, Bluetooth LE
Firmware development: nRF52832 microcontroller
Serial protocols: SPI, I2C
Unreal Engine 4.26: development of stimulus arrays, virtual environments, the text speller, and a video-game interface
MATLAB, Python: data analysis, visualization, and preprocessing methods (see the sketch below)
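As a rough illustration of the preprocessing and classification side (not the project's actual pipeline), the Python sketch below band-pass filters EEG and scores each candidate target with canonical correlation analysis (CCA) against sinusoidal references. The sampling rate, channel count, frequency table, and the assumption that each target is encoded as a (left-eye, right-eye) frequency pair are all hypothetical choices for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250  # assumed EEG sampling rate (Hz)

def bandpass(eeg, lo=5.0, hi=45.0, fs=FS, order=4):
    """Zero-phase band-pass filter; eeg has shape (n_samples, n_channels)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=0)

def references(freqs, n_samples, fs=FS, n_harmonics=2):
    """Sin/cos reference matrix for a set of stimulation frequencies."""
    t = np.arange(n_samples) / fs
    cols = []
    for f in freqs:
        for h in range(1, n_harmonics + 1):
            cols.append(np.sin(2 * np.pi * h * f * t))
            cols.append(np.cos(2 * np.pi * h * f * t))
    return np.column_stack(cols)

def classify(eeg, targets, fs=FS):
    """Pick the target whose (left, right) frequency pair best matches the EEG via CCA."""
    eeg = bandpass(eeg, fs=fs)
    scores = []
    for f_left, f_right in targets:
        Y = references([f_left, f_right], eeg.shape[0], fs)
        x_c, y_c = CCA(n_components=1).fit_transform(eeg, Y)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return int(np.argmax(scores))

# Hypothetical target table: each speller key is a (left-eye Hz, right-eye Hz) pair.
targets = [(8.0, 13.0), (8.0, 14.0), (9.0, 13.0), (9.0, 14.0)]
eeg = np.random.randn(2 * FS, 8)  # 2 s of 8-channel noise as a stand-in for recorded EEG
print("predicted target index:", classify(eeg, targets))
```

Encoding each target as a pair of per-eye frequencies is what lets a modest set of base frequencies span a large number of classes (every left/right combination is a distinct target), which is the intuition behind the split-eye paradigm's class count.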