— Title (0:25) Hi, we're Anne and Dominic, and we're here to tell you something about brain-computer interfaces. I'm a mathematician and he's a cognitive scientist. *First, we'll tell you why we're excited about this topic. Then, we'll dive into the technical details: *how data is represented inside the brain, *how to read it out with a computer, and *how to build your own BCI. And finally, we'll have a quick discussion and answer some questions.

---------- Part 1 ----------

— Why are we doing this? (0:24) Let's talk about communication. Language is the most natural way to communicate with our fellow humans: it's the most intuitive way of expressing your thoughts, and also the most common way to write computer code. Obviously, words are very powerful.

— Language (1:22) But which devices can we use to put the words from our minds into a computer? *Of course, there's the keyboard. Compared to talking, it's quite slow and quite hard to learn, but extremely accurate by design. *There are all kinds of motion-tracking tools: the mouse, touchpad, touch screen, graphics tablet. Because we want text input, we need to use handwriting: higher information density, easier to use than a keyboard, but very slow at actually transferring a message. *Then we have voice input. Everybody knows that dictating is much faster than typing. And for a few years now, even our computers understand what we say. This could become very interesting in the long run, because talking raises the transfer speed limit considerably. Have you thought about the information transfer that's really happening here? *Because the actual chain of information looks more like this: the brain sends information to a body part, which interacts with a device, which sends information to a computer. And both ends of the communication can deal with much higher data rates than a measly 15 bytes per second. So, what would happen if we cut out the middleman and connected a device directly to the brain?

— Language (0:26) Spoilers: it would be surprisingly underwhelming. Thinking words isn't that much faster than speaking them. When you read the words out of your brain, the data quality is horrible, so there would be a lot of misunderstanding. And you would have to teach the software where the words are in your brain, so you'd need a couple of hours before you could even get started. But of course, with such an interface, you could also use something other than words as input.

— Visual art (0:41) For example, let's look at art! When I imagine a picture and want to show it to other people, *I can just pick up my canvas and paintbrush and start painting. *I could also use a graphics tablet to paint with a virtual brush, *or a specialized mouse to put together 3D objects and render the scene. But each of those strategies is going to cost me several hours. And even if I actually had some skill in painting, the end result would never look exactly like my imagination. *Here's where a brain-computer interface starts to shine, because it can access the image in your mind entirely and immediately.

— The interface for everything (0:37) There's not just *art and *language, there's also *gaming, there's *music: interfaces everywhere. Interfaces that are hard to learn and even harder to master. Interfaces that put a limit on our creativity. Having a really good brain-computer interface would make most of these devices redundant. As soon as a computer can guess what you want to do, a lot of frustration starts to disappear.
*And since there is more to the brain than just hearing and seeing, maybe it makes sense to transfer data from other senses as well. Unfortunately, this is still very much a vision of the future. But we already have - today - some serious ways of controlling a computer with our minds. It's not even that expensive. But we'll come back to that in the last part of our talk.

---------- Part 2 ----------

— How to think of a red ball (0:34) Before we can build a system that can read from our brain, we need to know how the data is represented in there. For that, we can take a look at a very basic map. *We see and imagine with the back of the brain. *Hearing and talking happen in the lower side areas. *The top of the brain is responsible for skin, muscles and movement in general. *What the front part does is the most difficult to understand, even for scientists. Let's just say it does the actual thinking.

— The creation of a red ball (0:40) Let's say we imagine a red ball. *First, we look up the meaning of a red ball in the meaning library down here. *Then, we send this object code to the appearance library at the front. This area is a library of visual objects. *It resolves the object code into its color and shape codes, *which are immediately sent downwards. *The color and shape centers then convert these property codes into their visual meaning *and build up a link to the correct space in the visual buffer. *As soon as this connection is established, we can see the image of a red ball.

— Storing a red ball (1:03) This red ball is stored somewhere in the brain too, along with millions of other data sets. All the information is stored in the connections between neurons. Neurons never work alone, because they're too weak - I'll explain that in a minute. To create a meaningful signal, they have to work together, and they do that in groups of a hundred neurons, called minicolumns. Each minicolumn stores one unit of information. Together with 100 other minicolumns, it forms one memory unit, called a hypercolumn. Each one of those hexagonal memory units is incredibly flexible. It can remember everything about a red ball - how it looks, how it feels, how it sounds, and even your emotions towards red balls. But because we only have two million hypercolumns, it would be a waste to use a separate one for every ball. That's why your memory unit for red balls is the same unit as for green, black and blue balls as well.

— Data format (0:55) What happens when a hypercolumn of neurons recognizes THE SPECIAL red ball from YOUR childhood? It activates the minicolumn that stores the emotions of your childhood. Before, all the neurons in this minicolumn were just idling: their activity was slow, random, and completely desynchronized. But as soon as the activation signal arrives, they generate waves of activity. Now, each one of those yellow dots is the activity of a single neuron. Because a single neuron can only generate a short flash of activity, they have to work together. These waves are the typical data format inside the brain. Waves can encode more information by changing their frequency and phase; the amplitude, however, is always the same. And in this case, this wave is part of the memory of THE red ball of YOUR childhood.

— Communication in the cortex And, in case you were wondering, this is how hypercolumns communicate with each other, in slow motion. You see that there's some structure, and some clear loops. But it's all flowing, shifting and never repeating.
This is how these electrical waves travel through the brain.

---------- Part 3 ----------

— How to find a ball in the brain (0:08) What we now have to do is find the origin of these electrical waves. - wait 2-3 seconds - So how do we do this without physical access to the brain?

— Electroencephalography (EEG) (0:20) The easiest (and already established) way is to use an EEG. This is a device that measures electric fields with the help of some electrodes. There are EEGs used for research or medical purposes, and also smaller ones for consumers. They differ mostly in price and in the number of sensors. You have probably seen one in the media already. Dominic will explain later how EEGs work. For now it is sufficient to know that an EEG measures electric fields. The big question now is: how does all of this work?

— Electric fields in the brain (0:22) The electric field from each neuronal cluster is strong enough to be measured outside of the head. The measured voltage is usually in a range between 200 and 500 µV. The electrodes of an EEG are placed all over the head. We have to place the electrodes directly onto the head surface, because the conductivity of air is very low. Then we combine all the electrode voltages, so that we know the voltage at each point on the head surface.

— Magnetic fields in the brain (0:40) Neuronal clusters also produce a MAGNETIC field at the same time. This could also be measured from a small distance. But the devices for measuring very weak magnetic fields are far too big to carry around, and very expensive. They also have to be shielded from other, "strong" fields: even a regular watch creates magnetic fields that are MAGNITUDES stronger than the whole brain. Research on portable devices is already starting, but currently, EEGs are the only handy devices for home use.

— Tissue composition (0:50) Since the electric field is generated somewhere inside the brain, we have to look at what happens on its way out. Our head consists of different tissues, which all put up some amount of resistance to electric fields. So, instead of travelling STRAIGHT through the head, the electric field gets distorted. You can picture it like this: when jogging, you run at different speeds on different types of surfaces. On asphalt, you are quicker than on sand. It is the same with the electric fields in our head. For example, our bones have a very high resistance, and the liquid around our brain is very conductive. And ONE brain tissue - the white matter - even has different conductivities in different directions.

— Preparing the model (0:12) We are now done with the electrophysical setup: active neurons create an electric field inside the head. We know the conductivities of the different tissue layers, and the voltage on the head surface. Without any other knowledge, we are stuck here.

— Mathematical modeling (0:50) As on many occasions when solving problems, we now need the mighty toolbox of mathematics to find a solution. What we want to do is translate our setting (and our problem) into the language of mathematics, solve the problem there, and translate the mathematical solution back into reality. And to warn you beforehand: within the mathematical world, we will need yet another translation, into something a computer can solve. The first step, translating the problem into mathematical language, is also called modelling the problem. Building a mathematical model uses principles, laws and concepts from other fields of science.
In our case, we use a modified version of the famous Maxwell equations, which describe the relationship between magnetic and electric fields (and forces). They were first written down by the physicist J. C. Maxwell in 1865. These equations can be simplified and adapted to our application.

— EEG forward equation (1:03) The equation we get as a result is written here. First, we look at the left-hand side. The x describes any point in the head, and u(x) is the electric potential at position x. Its gradient gives you the electric field at this point; you multiply that by the electric conductivity to get the current, and then you compute the divergence. That is what happens mathematically. What you can picture happening on the left-hand side is the difference between the incoming and the outgoing flow of electric current, at every point x. The right-hand side is all about neuronal activity. What is written there is a so-called "current dipole". This means that the whole equation is zero everywhere, except at the position of the activity. Taken together, this means: at every point in the head, the same amount of current flows in as flows out, with the exception of the active neurons.

— Solving the equation (0:21) Let's talk about solving this equation. The usual way to solve it is to assume that the head conductivity and the location of the activity are already known, and to start calculating the currents in the head. This is called the forward problem. But of course, we don't know the correct location yet, and we don't even have any head currents yet.

— Solving the equation (0:31) We want to calculate the location of the activity from the conductivity and the measured surface voltage. For this, the equation needs to be turned around. This is called the inverse problem, and it is surprisingly difficult.

— Solving the equation (0:34) We encounter the next problem when we look at the things we measure. We want to know where the activity inside the brain is located, so we measure the electric field on the head surface. But we cannot measure INSIDE the head. A medical EEG has around 120 sensors, so the measured electric field only has a resolution of 120 points. This gives us much less information than we actually need. We can deal with this lack of information if we make some assumptions about the location of the activity. Usually we limit ourselves to the places in the brain with actual neurons, namely the cortex. But of course, the resolution of our estimate gets better the more sensors we have.

— Requirements for the computation (0:27) Now that we have faced these problems, we can convert the equation into a computer-friendly format. A computer cannot deal with equations that describe something for EVERY point in the head. That would be infinitely many points, and infinite memory is still quite expensive nowadays. Fortunately, some clever mathematicians have developed algorithms for solving the problem that work well on a computer.

— Building a head model (0:48) So we do not compute every point in the head. Instead, we select only some points (for example, one million) and compute results only for them. We interpolate the areas between our selected points when we need a value there. Mathematically speaking, this selection process is called "discretization". The continuous structures of the head are divided into small cells of simple geometry - usually tetrahedra or hexahedra, or a mixture of both. This works well, and it can be shown that the error from the discretization is small for certain settings.
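Since the slides only describe the equation in words, here is a sketch of how the forward equation and its discretized counterpart are commonly written in the EEG literature; the symbols follow the usual convention and are not taken from the slides.

```latex
% EEG forward problem, in the usual notation:
%   u(x)      electric potential at a point x in the head volume \Omega
%   \sigma(x) tissue conductivity (a tensor, so it may depend on direction)
%   j^p(x)    "primary" current dipole of the active neurons
\[
  \nabla \cdot \bigl(\sigma(x)\,\nabla u(x)\bigr) = \nabla \cdot j^{p}(x)
  \quad \text{in } \Omega,
  \qquad
  \sigma\,\nabla u \cdot n = 0 \quad \text{on } \partial\Omega .
\]
% The boundary condition says that no current escapes through the scalp into
% the air. After discretizing the head into tetrahedra or hexahedra (finite
% elements with basis functions \varphi_i), this becomes one large, sparse
% linear system:
\[
  K_{ij} = \int_{\Omega} \nabla\varphi_i \cdot \sigma\,\nabla\varphi_j \,\mathrm{d}x,
  \qquad
  K\,u = b ,
\]
% where K encodes the geometry and the conductivities, u holds the potential
% at the selected points, and b encodes the dipole source.
```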
For this approach to work well, we need a good 3D model of the head. We haven't talked about the conductivity yet - that's the last part that is missing from our model. When building the 3D model, we need it to represent the conductivity well enough.

— Getting good images (0:23) The scientific way of getting conductivity values is to put the head of choice into an MRI scanner. That is a big machine which can take high-resolution images of the head. We then take such a good picture of the head and convert it into those little geometric cells, with exactly one conductivity value for each cell.

— Getting good images (0:23) If you want to be very exact, you have to use a different head model for every person, because every head is unique. And because the brain can move around inside the head, you should always measure your person in the same position. But when you don't need to be that precise, there's a simple rule: ANY head model is better than NO head model.

— Preprocessing (0:54) Now we are ready to estimate the location of neuronal activity. We take the conductivity model of the head, the locations of the sensors, and the locations of all cortex points, and put all of that into this machine. The machine combines all of this information into one head model. It makes sure that all the little cells, the sensor positions, and the cortex share the same coordinate system. The last dataset makes sure that we only estimate locations of activity inside the cortex. When putting up this restriction, you have to be sure of two things:
- that the real location of the activity is actually in the cortex,
- and that there is NO activity outside of the cortex.

— Building the transfer matrix (0:18) From the head model, the machine then computes an intermediate step called the "transfer matrix" or "lead field matrix". The transfer matrix describes what the sensors would measure for every possible electric activity in the cortex. Building this matrix is a very time- and memory-consuming process.

— Doing most of the work in advance (0:28) The nice thing about this whole first part is that you can prepare it "offline". After the first huge calculation, only the last step requires actual brain activity. Since this last step can happen in a few milliseconds, this is perfect for real-time applications. And as long as you always wear your EEG in the same position on your head and your cortex model has enough points, there's no reason to ever change the transfer matrix.

— The final step (0:27) The last step takes this transfer matrix and the measured electric field. A good solver for the inverse problem then decides which activity location most likely caused these sensor voltages. There are different solvers, like MNE or sLORETA, which give reasonable solutions. But these algorithms are still quite time- and memory-consuming, so running them in reasonable online time remains hard.

— Summary of the theory (0:25) In the end, you get an activity map that shows where the active clusters are. To conclude: yes, we CAN estimate where our neurons are active. Our estimate would be much better if we had MILLIONS of sensors (instead of 120), and it would be much better if the sensors were inside the brain instead of outside. And we still need a ton of computational power and memory. But it IS possible.
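To make this pipeline concrete, here is a minimal, hedged sketch of how the localization step could look with the open-source MNE-Python package - not one of the tools discussed later in this talk. The file names are placeholders, and the forward model (our "transfer matrix") is assumed to have been prepared offline.

```python
# Hypothetical example: estimate an activity map from an EEG recording with
# MNE-Python, assuming the forward model was already computed offline.
import mne

raw = mne.io.read_raw_fif("my_recording_raw.fif", preload=True)  # EEG data
raw.filter(1.0, 40.0)                              # keep the typical EEG band

fwd = mne.read_forward_solution("my-fwd.fif")      # head model + sensor positions
cov = mne.compute_raw_covariance(raw)              # estimate of the sensor noise

# Build the inverse operator and apply sLORETA to the continuous signal.
inv = mne.minimum_norm.make_inverse_operator(raw.info, fwd, cov)
stc = mne.minimum_norm.apply_inverse_raw(raw, inv, lambda2=1.0 / 9.0,
                                         method="sLORETA")

print(stc.data.shape)  # (number of cortex points, number of time samples)
```

The resulting `stc` is the kind of activity map described above: one estimated activity value per cortex point per time sample.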
---------- Part 4 ----------

— The real world: DIY BCI (0:20) We're finished with the theory, and we're coming back to the real world. Don't worry, it's easier than you think. What do you need to build your own BCI? Two pieces of hardware, and two pieces of software. I'll start with the hardware.

— EEG principles An EEG consists of two core parts: an amplifier and an ADC (analog-to-digital converter). We need both of these parts for every channel. Since these chips are relatively expensive, more channels automatically cost more money. We want a signal-to-noise ratio of AT LEAST 40 dB for our amplifier, and the same is true for the ADC. The absolute minimum sampling frequency is 100 Hz, and you should go for 200 Hz as a reasonable minimum. State-of-the-art devices sample at 1000 to 2500 Hz - not because the brain actually generates these frequencies, but to get better precision for the frequency phases. Finally: since we're dealing with microvolts at the first stage, it's very important to take care of proper shielding.

— Consumer-grade EEG headsets (0:51) This is the current spectrum of EEG devices in a sensible price range. In our community, we have the big OpenEEG project, with schematics and different implementations readily available. We start at 40€ for the cheapest (and worst) version of an OpenEEG. It is so cheap because it uses the ADC in the sound chip of your computer. It only has two channels, and these channels will be quite bad. The better alternative is the Olimex OpenEEG. It has a reference electrode, which means you won't get channel drift or saturation problems, and its ADC is much better than the one for your microphone. So this is the cheapest device I'd recommend for beginners. The best solution depends on your budget, of course. What you pay for is the number of channels and a low amount of noise. The best device you can get is the OpenBCI with 16 channels, but 440€ will only get you passive electrodes. Passive electrodes are cheaper, but they send microvolts through very long wires. So they're very noisy, and you need conductive gel to make contact with the head surface. Active electrodes have the amplifier and the ADC directly next to the sensors. This package is more expensive, because you need separate electronics for each channel. But the noise levels are much lower, and not having to put gel in your hair all the time is nice too. The Emotiv EPOC is the most compact package and it transmits wirelessly by default, but it is completely proprietary. Its big weakness is the sampling frequency: it can only do 125 Hz.

— Popular real-time software (1:14) The software needs to do four different things. First, it needs to acquire signals from the EEG and put them into data packets. Second, it needs to estimate the locations of brain activity. Third, the estimated activity needs to be processed at the signal level - I'm talking about frequency filters, Fourier transforms and so on. Fourth, some BCI paradigms: these are small programs that actually know what the brain activity means. Let's start on the left: FieldTrip is based on Matlab and is the most feature-complete. It's very versatile, but its real-time support is quite weak, so you have to hand-code a lot of things. NeuroFEM is a plugin for FieldTrip. It's the only software that can deal with the highest quality of head models, so it's mostly used in cutting-edge science. The most user-friendly solution is probably OpenViBE. It doesn't do localization, but it has a GUI, supports many types of EEG and is very well documented. And then there's Mushu: this is the alternative running on Python, mainly for people who like Python or dislike Matlab.
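To make those four stages a bit more tangible, here is a rough, hedged sketch of the acquisition and signal-processing part in plain Python (none of the packages above). The `read_packet()` function is a stand-in for whatever driver your EEG ships with, and the localization stage is skipped, since many hobby setups work directly on the channel data.

```python
# Hypothetical skeleton of a real-time BCI loop: acquire packets, keep a
# rolling window, band-pass filter it, and hand it to a paradigm.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250              # sampling rate in Hz (depends on your device)
WINDOW = 2 * FS       # analyze the last two seconds of one channel
buffer = np.zeros(WINDOW)
sos = butter(4, [1.0, 40.0], btype="band", fs=FS, output="sos")

def read_packet(n=25):
    """Placeholder for the EEG driver: here we just produce noise."""
    return np.random.randn(n) * 10e-6      # ~10 microvolt random signal

def on_new_packet(samples):
    """Stage 1 + 3: append new samples, then band-pass filter the window."""
    global buffer
    buffer = np.roll(buffer, -len(samples))
    buffer[-len(samples):] = samples
    return sosfiltfilt(sos, buffer)         # cleaned-up signal for the paradigm

for _ in range(100):                        # in real life: an endless loop
    filtered = on_new_packet(read_packet())
    # Stage 4 goes here: hand `filtered` to a BCI paradigm (next slides).
```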
— Examples of BCI paradigms (0:22) When you have decided which software package to use, it's time to choose a BCI paradigm. I'll give a short overview of the most popular paradigms today. Maybe that's a good starting point for you to get into something more advanced.

— Steady-state evoked response (1:26) Steady-state evoked responses are a brute-force approach to the brain. You put in a certain frequency and hope that at least some neurons pick it up. The classic pattern is a flickering checkerboard. Before I show it to you, here's an epilepsy warning: everyone with a history of epilepsy, please look away for five seconds. OK? Go! (click) (wait 5 seconds) (click) Both checkerboards were flickering at a very comfortable frequency for most neurons in the brain. These frequencies come in at the back of the brain, along with everything else you see. When you select the back of the brain in your EEG software, you can find this exact frequency again in your spectrogram. Now, the cool thing is that you can pay attention to one checkerboard and ignore the other one. In your measurement, the frequency band of the attended checkerboard becomes stronger, and the other one becomes weaker. With this trick, your software can detect whether your attention is on the one checkerboard or the other. In practice, this paradigm works like mental buttons - you press a button by looking at it. They don't have to be big, and you can run up to a dozen checkerboards at the same time.

— P300 / oddball paradigm (0:37) The basic idea behind the oddball paradigm is that your brain always creates the same pattern under certain conditions. The P300 signal is generated when we see something unfamiliar, or something that doesn't belong in the group. The signal is strongest at about 300 ms after seeing the oddball, and the strongest peak is positive, which is what the P stands for. Usually, the oddball paradigm works with images or sounds, but it could be done with any other sense as well.

— P300 / oddball analysis (0:41) Detecting the P300 in a continuous signal isn't completely straightforward. The incoming signal will be contaminated by other brain activity, so if you use a simple amplitude threshold, it will trigger all the time. A good strategy is therefore to use a reference signal. You take the last 500 ms of your measured signal and correlate it with the reference. When the correlation is above 50% or so, you know that the correct shape was actually present in your signal. With this tool, you can even type on a keyboard with your mind - just look up the P300 speller.

— Event-related desynchronization (0:57) Event-related desynchronization is really easy to set up and very reliable. You imagine moving one of your limbs, for example wiggling your right foot. This imagination causes certain neuronal clusters to synchronize or desynchronize. In the overall spectrum, this desynchronization creates a gap in the 10-13 Hz frequency range. The analysis for this is very simple: select a region at the top right side, and calculate the Fourier transform of its data stream. Select a few frequencies between 8 and 13 Hz and look for a change in magnitude. Now your software triggers whenever you think of a moving foot or a waving hand. Oh, and when you look at two regions at the same time, your software can even detect which hand you're waving.
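Here is a hedged sketch of the analysis just described: the 8-13 Hz band and the idea of watching for a drop in magnitude come from the slide, while the one-second window, the baseline calibration and the 20% threshold are placeholders you would tune per user.

```python
# Toy event-related desynchronization detector: compare the current 8-13 Hz
# band power of one channel (or region) against a resting baseline.
import numpy as np

FS = 250  # sampling rate in Hz (device-dependent)

def band_power(window, low=8.0, high=13.0, fs=FS):
    """Mean magnitude of the Fourier spectrum between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(window))
    return spectrum[(freqs >= low) & (freqs <= high)].mean()

def calibrate_baseline(rest_recording, fs=FS):
    """Average band power over a relaxed recording, in one-second chunks."""
    chunks = [rest_recording[i:i + fs]
              for i in range(0, len(rest_recording) - fs + 1, fs)]
    return np.mean([band_power(c, fs=fs) for c in chunks])

def detect_erd(window, baseline, drop=0.2):
    """Trigger when band power falls at least `drop` (20%) below baseline."""
    return band_power(window) < (1.0 - drop) * baseline
```

During use, you would call `detect_erd` on the last second of the filtered signal from the selected region, once per incoming packet; with two regions, two such detectors can tell left from right.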
— The workflow (0:54) So, in summary: buy an EEG headset, choose a BCI software, and get started! We're nearly finished, I'll just do a quick recap: certain clusters of neurons always do the same thing. Memories are stored in the connections between neurons. Neurons communicate with electric waves at certain frequencies. We want to find the clusters that do what we want, or we already know where to search - for example from my talk or from the literature. Either way, we select an area on the activity map, put the signal into our BCI paradigm, and enter the eternal feedback loop with the user. Have a lot of fun - that's it for today!

---------- The End ----------

— Popular questions