"We want to create an artificial brain that can communicate with a real brain," says José Nuñez-Yañez of Bristol's Centre for Communications Research. His team has come up with a method to model neural activity with enough detail and speed for living cells to talk to synthetic neurons. Previous efforts to model neural activity have relied on supercomputers built from general-purpose processors, which are poorly suited to this kind of parallel workload; the models are so complex that simulating one second of activity can take 30 days. Field-programmable gate arrays (FPGAs), by contrast, can host on the order of 1,000 floating-point processors running in parallel, which makes them a promising answer to the problem. Floating-point representation also produces smaller rounding errors and is therefore more precise, and that precision is important for modeling neurons.
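To make the precision point concrete, here is a toy sketch (my own illustration, not the Bristol group's model): a simple leaky integrate-and-fire neuron stepped with Euler integration, once in ordinary floating point and once with its membrane potential rounded to a hypothetical 10-bit fixed-point grid after every step. All parameter values here are made up for illustration.

```python
def simulate(dt=0.0001, steps=200_000, fixed_bits=None):
    """Return spike times (seconds) for a toy leaky integrate-and-fire
    neuron. If fixed_bits is set, the membrane potential is rounded to a
    fixed-point grid after every step, mimicking fixed-point hardware."""
    tau, v_rest, v_thresh, v_reset, i_in = 0.02, 0.0, 1.0, 0.0, 1.2
    scale = 2 ** fixed_bits if fixed_bits is not None else None
    v, spikes = v_rest, []
    for n in range(steps):
        v += dt / tau * (v_rest - v + i_in)   # Euler step of the membrane equation
        if scale is not None:
            v = round(v * scale) / scale      # quantize state: fixed-point rounding
        if v >= v_thresh:                     # threshold crossing -> spike and reset
            spikes.append(n * dt)
            v = v_reset
    return spikes

float_spikes = simulate()                 # floating-point state
fixed_spikes = simulate(fixed_bits=10)    # coarsely quantized state
# The per-step rounding in the fixed-point run accumulates into a
# shift of the spike times relative to the floating-point run:
drift = abs(float_spikes[-1] - fixed_spikes[-1])
```

With a coarse enough grid, the quantized run reaches threshold on different steps, so its spike train drifts in time relative to the floating-point run, which is the timing-error effect described below.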
Neurons communicate by exchanging spikes of voltage, which last a few milliseconds and may peak at about 70 millivolts. The timing of these spikes is important: in a fixed-point model, rounding errors accumulate and can significantly shift the timing of the spikes, so a floating-point model improves the simulation.

The researchers at Bristol are optimistic enough about this idea that their next step is to have a biological neuron talking to an artificial one. The plan is to take slices of mouse brain and incubate them on top of a sensor; after about two weeks of growth, the neurons will begin communicating as they would in the brain. An analog-to-digital converter will measure the voltage these neurons produce and send it to the FPGAs, where different aspects of the neural activity will be analyzed. I am just rephrasing here an idea I read in an article published in IEEE Spectrum, and I am really looking forward to the results of the next step.