With scientists mapping our neurons in ever greater detail, and companies like Google claiming they’re close to creating human-level artificial intelligence, the gap between brain and machine seems to be shrinking — throwing the question of consciousness, one of the great philosophical mysteries, back into the heart of scientific debate. Will the human mind — that ineffable tangle of private, first-person experiences — soon be shown to have a purely physical explanation? The neuroscientist Steven Novella certainly thinks so: ‘The evidence for the brain as the sole cause of the mind is, in my opinion, overwhelming.’

Elon Musk agrees: ‘Consciousness is a physical phenomenon, in my view.’ Google’s Ray Kurzweil puts it even more bluntly: ‘A person is a mind file. A person is a software program.’

If these thinkers are correct, the ramifications are huge. Not only would it resolve, in a snap, a conundrum that’s troubled mankind for millennia; it would also pave the way for an entirely new episode in human history, one in which minds could be uploaded to computers. And then there are the ethical implications. If consciousness arises naturally in physical systems, might even today’s artificial neural networks already be, as OpenAI’s Ilya Sutskever has speculated, ‘slightly conscious’? What, then, are our moral obligations towards them?

Materialists aren’t making these claims against a neutral backdrop. The challenges facing a purely physical explanation of consciousness are legion and well-rehearsed. How could our capacity for abstract thought — mathematical and metaphysical reasoning — have evolved by blind physical processes? Why, indeed, did we need to evolve consciousness at all, when a biological automaton with no internal experiences could have flourished just as well? Why, if the mind is the result of billions of discrete physical processes, do our experiences seem so unified? Most importantly, how do brain signals, those purely physical sparks inside this walnut-shaped sponge, magically puff into the rich, qualitative feeling of sounds and smells and sensations? None of these challenges is necessarily insurmountable, but meeting them certainly requires something more substantial than what the philosopher David Chalmers calls ‘don’t-have-a-clue-materialism’ — the blind assumption that, even if we don’t yet understand how, the mysterious phenomenon of private, first-person experience must, in the end, just be reducible to physical facts. So what’s the evidence?

The last few years have seen a number of remarkable neuroscientific breakthroughs. In one study, scientists managed to communicate with a paralysed patient simply by asking him to imagine handwriting his thoughts. When he did so, brain implants recorded the electrical signals in his motor cortex, which artificial intelligence subsequently decoded with 94 per cent accuracy. In another, scientists tracked the ‘progress of a thought through the brain’: participants were asked to think of an antonym of a particular word, and electrodes placed on the cortex revealed how each step of the process — stimulus perception, word selection, and response — was ‘passed around’ to different parts of the brain. And in one landmark study, scientists claimed finally to have located the three specific areas of the brain — those linked to ‘arousal’ and ‘awareness’ — involved in the formation of consciousness.