ADHD neurofeedback training aims to present, in real time, impulsivity and hyperactivity features measured in a child's brain activity, so that a child suffering from Attention Deficit Hyperactivity Disorder (ADHD) can learn to self-regulate them. As discussed in previous posts, this technique has turned out to be a good complement to medication, inducing enduring effects. Obviously, one of the key points of neurofeedback training is the calculation of robust, reliable attention and impulsivity features for the child to train. The other important aspect is the game itself, which is nothing less than the means by which the child interacts with the neurofeedback application.
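As an illustration of what such a feature could look like: the theta/beta power ratio is one commonly reported attention marker in the ADHD neurofeedback literature. The post does not specify which features are used, so the sketch below (function name, bands, and synthetic signal are all my own assumptions) just shows the general idea of extracting a band-power ratio from a single EEG channel.

```python
import numpy as np
from scipy.signal import welch

def theta_beta_ratio(eeg, fs):
    """Theta/beta power ratio from one EEG channel.

    eeg : 1-D array of samples; fs : sampling rate in Hz.
    A higher ratio is often associated with reduced attention.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    theta = psd[(freqs >= 4) & (freqs < 8)].mean()    # theta band: 4-8 Hz
    beta = psd[(freqs >= 13) & (freqs < 30)].mean()   # beta band: 13-30 Hz
    return theta / beta

# Synthetic check: a strong 6 Hz (theta) oscillation plus mild noise
# should yield a ratio well above 1.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(len(t))
print(theta_beta_ratio(signal, fs) > 1.0)  # -> True
```

In a real-time system this computation would run on short sliding windows, and the resulting value would drive the feedback presented in the game.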
Neuroscience, tCS and EEG blog
So you’ve got tons of megabytes of EEG data from a big recording campaign with several volunteers. You persuaded them to wear your EEG recording device for 40 minutes, performing several mental tasks and receiving all kinds of external stimuli. You are now ready to process the data and confirm your breakthrough hypothesis.
Well, yes, but did you invest effort in having an appropriate synchronization system for your recordings? If the answer is no, you might end up with a bunch of disorganized EEG data in which it is impossible to know when the subjects were performing each mental task or when the stimuli were presented to them. Your breakthrough would then have to wait until the next recording campaign.
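To make the point concrete: once event timestamps and the EEG stream share a common clock, each stimulus can be mapped to a sample index in the recording. The helper below is a hypothetical sketch (names and values are my own, not from any particular synchronization system) assuming the two clocks are already aligned.

```python
def events_to_sample_indices(event_times, recording_start, fs):
    """Map wall-clock event timestamps (in seconds, on the same clock
    as the EEG stream) to sample indices in the recording."""
    return [round((t - recording_start) * fs) for t in event_times]

fs = 500                          # EEG sampling rate in Hz
recording_start = 100.0           # shared-clock time of the first EEG sample
stimuli = [100.0, 102.5, 110.2]   # when each stimulus was presented
print(events_to_sample_indices(stimuli, recording_start, fs))  # -> [0, 1250, 5100]
```

Without this shared clock (or hardware trigger lines serving the same purpose), there is simply no way to recover which EEG samples correspond to which task or stimulus after the fact.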
Neurokai's services offer different solutions for industrial and academic researchers. Our Experience Lab service focuses on providing a custom solution for emotion recognition based on a multimodal system (EEG, ECG, GSR and facial analysis). To develop our vision-based emotion recognition system, integrated within the multimodal one, we analysed several libraries for face detection. The main objective of our research at that point was to find a robust computer vision library to integrate into our multimodal emotion recognition system, which includes vision as well as electrophysiological modalities. In this post we will share which computer vision libraries we found in the community and, where possible, how their methods performed when run on our own video recordings. It is worth mentioning that the people in our videos were recorded wearing the Enobio cap and under low lighting conditions, which made the face detection task slightly more difficult.
As I explained in one of my previous posts, a Brain Computer Interface (BCI) is a direct communication pathway connecting the brain to a computer or other external device. BCIs do not depend on the brain’s normal action pathways through peripheral nerves and muscles. This makes them the ideal technology for developing systems that assist or repair human cognitive or sensory-motor function. There are a variety of BCI modalities, although the most commonly used are motor imagery, P300 and steady state visual evoked potentials. In this post I’d like to focus on SSVEPs, describing SSVEPs themselves and SSVEP-based BCI systems, and presenting the most commonly used SSVEP detection methodology. SSVEP-based BCIs offer two main advantages over other BCIs: they have a larger information transfer rate, and they require a shorter calibration time.
Let’s start by introducing what SSVEPs are. Steady state visual evoked potentials are a resonance phenomenon occurring in the brain that can be measured in the EEG. An SSVEP occurs when a person focuses his/her visual attention on a flickering light source. When an SSVEP is elicited, oscillatory components can be observed in the EEG that match the stimulation frequency and its harmonics. This effect is mainly observed in the visual cortex, the part of the cerebral cortex responsible for processing visual information, shown in the next figure.
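The detection methodology most often reported for SSVEP-based BCIs is canonical correlation analysis (CCA): for each candidate stimulation frequency, the multichannel EEG is correlated against a set of sine/cosine references at that frequency and its harmonics, and the frequency with the highest canonical correlation wins. The post itself does not name the method, so the following is a minimal numpy sketch under that CCA assumption, with all function names and the synthetic signal invented for illustration.

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    computed via QR decompositions and an SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_detect(eeg, fs, candidate_freqs, n_harmonics=2):
    """Return the candidate stimulation frequency whose sin/cos reference
    set correlates best with the EEG segment (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        refs = []
        for h in range(1, n_harmonics + 1):  # fundamental plus harmonics
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        scores.append(max_canon_corr(eeg, np.column_stack(refs)))
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic check: 2-channel "EEG" dominated by a 12 Hz oscillation.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.column_stack([
    np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(len(t)),
    np.cos(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(len(t)),
])
print(ssvep_detect(eeg, fs, [8, 10, 12, 15]))  # -> 12
```

In an actual SSVEP BCI, each candidate frequency corresponds to one flickering target on screen, so detecting the dominant frequency amounts to decoding which target the user is attending to.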
Some would say that my mom is a bit eccentric … When we were little, she would have us play a fun game ... well, I remember this happening at least once, probably around 1975. The game was … telepathy. My siblings and I would sit around for a while trying to transmit thoughts to each other. Could we transmit words to each other? Concentrate hard and …
While heading to Petronas Technology University, where I will give a course on the basics of transcranial current stimulation (tCS), I summarized the fundamentals of the technology, and particularly of Starstim, the device we envisioned and started to develop within the HIVE project. tCS devices allow the controlled injection of low-amplitude electrical currents into the cerebral cortex through electrodes placed non-invasively on the scalp. In this sense they play the opposite role to EEG, i.e. not monitoring brain activity but modifying it. There has been some advancement in the tCS field, but the technology and its effects on the brain are still not fully understood. Well, the effects are showing up more and more as studies and clinical trials accumulate. As a matter of fact, publications on tCS have multiplied by a factor of 4 in the last 4 years, and the clinical trials involving it by a factor of 10. But what is still not really understood is what causes these effects, from both an electrophysiological and a therapeutic point of view. I would like to comment on the electrophysiological effects, giving some quick hints on what makes the applied electrical current affect brain activity at the neuronal level. The state of the art is still far from understanding the effects at the neuronal-population level and at the global brain level, i.e. connectivity, which have been much less studied.
How many times would you have liked to know what the person in front of you was feeling? Perhaps you were criticizing a specific brand in an interview, and the guy just in front of you was a devoted user of that brand. Or the other way around: the interviewer was interested in knowing what your feelings were regarding a specific issue during the interview. How useful would an emotion recognition system have been in those situations?
We all know that we are extremely social animals, but just how far does this go? Many believe that human intelligence evolved to solve our complex social problems rather than ecological problems, as was previously assumed. Robin Dunbar lays this out in his 1998 paper “The Social Brain Hypothesis”. You may have heard of Dunbar’s Number, which refers to the number of people with whom we can reasonably maintain relationships, and it is directly related to our social processing power. In fact, the correlation between the number of individuals in various primates’ social groups and the size of their respective neocortices is part of what drove this theory.