I did some math on the recent brain simulation news, with the goal of estimating when we can expect real-time brain simulation capability on a single processor.

There are some underlying assumptions.

- The processors used are close to state-of-the-art
- Moore’s law will continue for the foreseeable future
- Coming closer to real time doesn’t unexpectedly increase computational complexity
- Building more advanced supercomputers doesn’t add more than slight overhead, and all related fields will advance at a Moore’s-law rate
- The simulation model used in the experiment (see link above) is biologically accurate
- The synapse count used in the experiment is biologically accurate

With that out of the way, how does it look? Let’s look at the numbers:

The K Computer used about 83,000 processors (SPARC64 VIIIfx), each capable of about 128 GFLOPS. For a regular processor that’s quite good: an Intel Core i7 980 XE manages about 100 GFLOPS, and GPUs can do a lot better, around 500 GFLOPS. This means we can assume that at least equivalent processors are available to consumers now.

Simulating one second of activity for 1% of the brain’s neurons took 40 minutes. A real-time simulation of the whole brain would therefore take a single processor 40 * 60 * 83 000 * 100 ≈ 19.9 billion times more computing power: the slowdown factor, times the processor count, times the scale-up from 1% to 100% of the neurons.
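As a sanity check, the multiplication can be reproduced in a few lines of Python (all numbers taken from the figures above):

```python
slowdown = 40 * 60       # 40 minutes of wall time per simulated second = 2400x
processors = 83_000      # approximate K Computer processor count
brain_scale = 100        # only 1% of the neurons were simulated, so x100 for the whole brain

factor = slowdown * processors * brain_scale
print(factor)            # 19920000000, i.e. roughly 19.9 billion
```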

This sounds like a lot, but is it?

Moore’s law suggests that computational power roughly doubles every two years. How many times would our current processing power have to double for it to be 19.9 billion times more powerful than it is now?

2^x = 19 920 000 000, solve for x and we get about 34, so roughly **68 years**. That sounds pretty gloomy, but remember: that’s how long until we can run the entire simulation *on a single consumer-grade processor*.
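Solving for x is just a base-2 logarithm, which is easy to check in Python:

```python
import math

factor = 19_920_000_000                 # required increase in computing power
doublings = math.log2(factor)           # x in 2^x = factor
years = 2 * doublings                   # one doubling every two years (Moore's law)
print(round(doublings, 1), round(years))  # 34.2 68
```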

How long until we can do that on a supercomputer?

The supercomputer already supplies the 83,000 processors, so only the slowdown and the scale-up to the full brain remain: 40 * 60 * 100 = 240 000, and solving 2^x = 240 000 gives us about 18, and thus **36 years**.
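The same check as before, with the processor count dropped out of the factor:

```python
import math

factor = 40 * 60 * 100            # 240000: slowdown x whole-brain scale-up
doublings = math.log2(factor)     # about 17.9 doublings needed
print(round(2 * doublings))       # 36 (years, at one doubling per two years)
```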

Provided that research in neurobiology and cognition keeps pace, we can expect some pretty interesting times around 2050. The timeframe could shrink further given significant improvements in algorithms and computational architecture.