Energy consumption of the human brain: thinking vs not thinking

I'm wondering about the human brain's energy consumption at different "uses".

If you use a muscle, it uses more energy than if you hold it still. Is the brain like this, or is it "always on" like a kidney or heart? Does it use more energy when you're actively learning or solving a problem? Do intelligent people have more energy-hungry brains?

If I were to sum it up in a single question: is the brain's energy consumption largely static (for an individual and across the population), or does it vary?


This question covers only the sleeping aspect, and the top answer doesn't source its conclusion of static energy use. I'm still largely interested in differences while learning or problem-solving, and in differences between intelligent and average people.


The Human Brain vs. Computers

Stephen Hawking has said, “The development of full AI could spell the end of the human race.” Elon Musk has tweeted that AI is a greater threat to humans than nuclear weapons. When extremely intelligent people are concerned about the threat of AI, one can’t help but wonder what’s in store for humanity.

As of 2017, brains still have a leg up on AI. By some comparisons, human brains can process far more information than the fastest computers. In fact, in the 2000s, the complexity of the entire Internet was compared to a single human brain. This might surprise you. After all, computers are better at activities that we equate with smarts, like beating Garry Kasparov in chess or calculating square roots. Brains, however, are great at parallel processing and sorting information. They are so good at some activities that we take their strengths for granted, like being able to recognize a cat, tell a joke, or make a jump shot. Brains are also about 100,000 times more energy-efficient than computers, but that will change as technology advances. Estimates are that computers will surpass the capability of human brains around the year 2040, plus or minus a few decades. Whenever computers reach “human capacity,” they may just keep right on improving. They are not burdened by the constraints that hold back brains. Neurons, for example, are the brain’s building blocks and can only fire about 200 times per second, or 200 hertz. Computer processors are measured in gigahertz: billions of cycles per second. Signals on neurons travel at about one-millionth of the speed of fiber optic cables. And don’t forget, brains have to be small enough to fit inside skulls, and they inconveniently tire, forget, and die.

When it comes to storing information, however, biology once again shows that technology has a long way to go. This might surprise you, as well. After all, a computer hooked up to the Internet can beat human Jeopardy champions, and computers are great at memorizing things like phone books. But consider DNA as memory storage. Each of your six trillion cells contains all of the information to make your whole body. DNA can hold more data in a smaller space than any of today’s digital memories. According to one estimate, all of the information on every computer in 2015 coded onto DNA could “fit in the back of an SUV.” In fact, DNA can already be used to store non-biological information. In 2015, the works of Shakespeare were encoded into DNA.

The essence of memory, of course, lies in its durability. DVDs and hard drives degrade after 20 or 30 years. However, scientists have sequenced 30,000-year-old Neanderthal DNA. (The Neanderthal who left us her personal data may have paid with her life, but unless she sends us a bill, the data storage was free!) Intact DNA has been found that is close to a million years old. DNA can also be used to store non-biological information. Who would have imagined that in 2015 I could bring my son a Bitcoin encoded on a fragment of DNA as a birthday present?

Brains and DNA show us that our methods of storing and processing digital information still have a lot of runway to keep getting better. This potential will be realized by new approaches, such as quantum computing or 3D neural processing.

Computer scientists like Ray Kurzweil contend that Artificial Intelligence (AI) will breeze past human intelligence — and keep on learning. AI and humans will work side by side to turbocharge the speed of invention. Kurzweil and others call this the “singularity,” a term used to describe phenomena that approach infinity in some way. The singularity is a self-stoking cycle of machines using their own AI to make even smarter machines. There is plenty of speculation about what the singularity will look like, when it will arrive, or whether it will even occur. The notion of the singularity might seem pretty abstract, but super-smart AI might represent a real danger, as cited by Stephen Hawking and Elon Musk. In 2015, Hawking and Musk joined Apple co-founder Steve Wozniak and about 1,000 other robotics and AI researchers in an open letter warning of “a military artificial intelligence arms race.”

It is hard to know whether or not to lie awake at night worrying about AI’s threat to humanity, but the idea that machines can get much smarter is important to all of us. Learning machines are fundamentally different from other technologies. Steamships can’t make themselves into better steamships, but smart machines can make themselves smarter.

In many ways, machine learning is already a reality, though many people might not realize it. Any interaction you have with Siri, Google, Netflix, or Amazon is influenced by machines that make themselves better. At Starwood, we used machine learning to improve our targeted special offers and hotel revenue management systems. Machine learning today is helping companies interpret data, learn from missed forecasts, and find new correlations. Though the analytics may be sophisticated, so far the interactions with people are nowhere near “human.” Siri can access a lot of information, but she is still pretty robotic.

Digital technology is the ultimate story of an accelerating trend line. Computers used to be rare, expensive, and hard to use. Now, smart machines are cheap and ubiquitous. Soon, we will be online all the time, along with most of our appliances, tools, and vehicles. Sensors that monitor our health and alert us to potential dangers will be everywhere. Thinking machines and the Internet will connect seamlessly with our lives and become a natural component to how we make sense of the world.

The trend line is clear, but the “headlines” are hard to see coming. Back in 2000, for example, we had a decent estimate of the advances in computing power over the next several years. Even with that information, though, no one could have identified “headline” disruptors like Facebook or the App Store. In the mid-1990s, my wife brought home a book about the future of technology that was so advanced it came with its own CD-ROM. Although the book shared insights about potential new uses of computers, it was later criticized for having said little about the World Wide Web. The author was very smart and quite familiar with technology businesses. His name was Bill Gates.

We all (as individuals, companies, and countries) have to get ready for disruptors that we cannot foresee. Technology lies behind global development and alters how people live. It affects every job, every human activity. It makes services, health care, and information available in ways that were unimaginable just a few decades ago. Along the way, it upsets social norms, disrupts industries, and dislocates workers. The pace of ever-improving technology shows no signs of letting up. Advancing AI can seem scary, but it also poses great opportunity. Every business will have to think about what it means for them. What will the next couple of decades bring?

This post is an excerpt from The Disruptors’ Feast by Frits van Paasschen, released January 16, 2017.


Power of a Human Brain

The brain makes up about 2% of a person's weight. Despite this, even at rest, it consumes 20% of the body's energy, using energy at roughly 10 times the rate of the rest of the body per gram of tissue. The average power consumption of a typical adult is about 100 watts, and the brain consumes 20% of this, putting the power of the brain at roughly 20 W.

Based on a 2400 calorie diet (Adapted from Yang)

2400 "food calorie" = 2400 kcal
2400 kcal/24 hr = 100 kcal/hr = 27.8 cal/sec = 116.38 J/s = 116 W
20% x 116 W = 23.3 W
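As a quick check of the arithmetic above, here is a minimal Python sketch of the same estimate. The constants (a 2,400 kcal/day diet, a 20% brain share, and 4,184 J per food calorie) come from the figures above; the function name is just for illustration.

```python
# Minimal sketch of the brain-power estimate, assuming a 2,400 kcal/day diet
# and that the brain accounts for roughly 20% of resting energy use.

KCAL_TO_JOULES = 4184            # 1 food calorie (kcal) is about 4,184 joules
SECONDS_PER_DAY = 24 * 60 * 60

def brain_power_watts(daily_kcal: float = 2400, brain_share: float = 0.20) -> float:
    """Average brain power draw in watts for a given daily intake."""
    total_watts = daily_kcal * KCAL_TO_JOULES / SECONDS_PER_DAY
    return brain_share * total_watts

total = 2400 * KCAL_TO_JOULES / SECONDS_PER_DAY
print(f"Whole body: {total:.0f} W")                # ~116 W
print(f"Brain:      {brain_power_watts():.1f} W")  # ~23 W
```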

Glucose is the main energy source for the brain. As the size and complexity of the brain increases, energy requirements increase.

The human brain is one of the most energy-hungry organs in the body, which makes it especially vulnerable. If the energy supply is cut off for 10 minutes, permanent brain damage results. No other organ is nearly as sensitive to changes in its energy supply.

In 1955, Albert Einstein's brain was preserved for research, and three scientific papers have since been published examining its features. Einstein's brain differed from a typical man's brain in that it had more glial cells per neuron, which might indicate that its neurons had an increased "metabolic need": they needed and used more energy. Einstein's brain weighed only 1,230 grams, less than the average adult male brain (about 1,400 grams), and his cerebral cortex was thinner. However, the density of neurons was greater; in other words, Einstein was able to pack more neurons into a given area of cortex.

The most recent study of Einstein's brain was published in the British medical journal The Lancet on June 19, 1999. The researchers found that a portion of the brain that governs mathematical ability and spatial reasoning, two key ingredients of the sort of thinking Einstein did best, was 15% wider than average, allowing better connections between its cells, which could have let them work together more efficiently.


Evolutionary Arguments

Another line of evidence against the 10 percent myth comes from evolution. The adult human brain constitutes only 2 percent of body mass, yet it consumes over 20 percent of the body's energy. In comparison, the adult brains of many vertebrate species, including some fish, reptiles, birds, and mammals, consume 2 to 8 percent of their body's energy. The brain has been shaped by millions of years of natural selection, which passes down favorable traits that increase the likelihood of survival. It is unlikely that the body would dedicate so much of its energy to keeping an entire brain functioning if only 10 percent of it were used.


Thinking Process

Thinking brings information together, linking its various parts into something comprehensible; cognition refers to this thought process. The American College of Radiology and the Radiological Society of North America describe functional MRI as a diagnostic procedure that can precisely locate thought processes in the brain. A positron emission tomography (PET) scan can also capture images of the brain during a range of thought processes. These new technologies promise fresh insights into the thinking process.


Why your brain is not a computer

We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity.

We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind.

Every day, we hear about new discoveries that shed light on how brains work, along with the promise – or threat – of new technology that will enable us to do such far-fetched things as read minds, or detect criminals, or even be uploaded into a computer. Books are repeatedly produced that each claim to explain the brain in different ways.

And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.

In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.

“Only 20 to 30 years ago, neuroanatomical and neurophysiological information was relatively scarce, while understanding mind-related processes seemed within reach,” Frégnac wrote. “Nowadays, we are drowning in a flood of information. Paradoxically, all sense of global understanding is in acute danger of getting washed away. Each overcoming of technological barriers opens a Pandora’s box by revealing hidden variables, mechanisms and nonlinearities, adding new levels of complexity.”

The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: “Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application of experimental technologies, substantial advances in data analysis methods and intense application of theoretic concepts and models.”

There are indeed theoretical approaches to brain function, including to the most mysterious thing the human brain can do – produce consciousness. But none of these frameworks are widely accepted, for none has yet passed the decisive test of experimental investigation. It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)

As observed by Francis Crick, the co-discoverer of the DNA double helix, the brain is an integrated, evolved structure with different bits of it appearing at different moments in evolution and adapted to solve different problems. Our current comprehension of how it all works is extremely partial – for example, most neuroscience sensory research has been focused on sight, not smell; smell is conceptually and technically more challenging. But the ways that olfaction and vision work are different, both computationally and structurally. By focusing on vision, we have developed a very limited understanding of what the brain does and how it does it.

The nature of the brain – simultaneously integrated and composite – may mean that our future understanding will inevitably be fragmented and composed of different explanations for different parts. Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”

For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.

“Descartes was impressed by the hydraulic figures in the royal gardens, and developed a hydraulic theory of the action of the brain,” Lashley wrote. “We have since had telephone theories, electrical field theories and now theories based on computing machines and automatic rudders. I suggest we are more likely to find out about how the brain works by studying the brain itself, and the phenomena of behaviour, than by indulging in far-fetched physical analogies.”

This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. Since its inception in the 1920s, the idea of a neural code has come to dominate neuroscientific thinking – more than 11,000 papers on the topic have been published in the past 10 years. Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.

The unstated implication in most descriptions of neural coding is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as “downstream structures” that have access to the optimal way to decode the signals. But the ways in which such structures actually process those signals is unknown, and is rarely explicitly hypothesised, even in simple models of neural network function.
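To make the distinction concrete, here is a toy, purely illustrative sketch of a neural "code" in the technical sense: a stimulus is mapped to firing rates through invented Gaussian tuning curves, and a hypothetical "downstream reader" inverts the mapping with a population-weighted average. Nothing in it is claimed to reflect how real downstream circuits actually process signals; the parameters are made up.

```python
# Toy "code": stimulus -> firing rates (encode), rates -> stimulus estimate (decode).
import numpy as np

preferred = np.linspace(0, 180, 8)   # preferred orientations of 8 model neurons (degrees)

def encode(stimulus_deg: float) -> np.ndarray:
    """Firing rates from invented Gaussian tuning curves (spikes/s)."""
    return 50 * np.exp(-0.5 * ((stimulus_deg - preferred) / 20) ** 2)

def decode(rates: np.ndarray) -> float:
    """A naive 'downstream reader': population-weighted average of preferred stimuli."""
    return float(np.sum(rates * preferred) / np.sum(rates))

rates = encode(75.0)
print(decode(rates))   # close to 75 -- but nothing says real downstream structures decode this way
```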


The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.

By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.

The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.

Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.

One sign that our metaphors may be losing their explanatory power is the widespread assumption that much of what nervous systems do, from simple systems right up to the appearance of consciousness in humans, can only be explained as emergent properties – things that you cannot predict from an analysis of the components, but which emerge as the system functions.

In 1981, the British psychologist Richard Gregory argued that the reliance on emergence as a way of explaining brain function indicated a problem with the theoretical framework: “The appearance of ‘emergence’ may well be a sign that a more general (or at least different) conceptual scheme is needed … It is the role of good theories to remove the appearance of emergence. (So explanations in terms of emergence are bogus.)”

This overlooks the fact that there are different kinds of emergence: weak and strong. Weak emergent features, such as the movement of a shoal of tiny fish in response to a shark, can be understood in terms of the rules that govern the behaviour of their component parts. In such cases, apparently mysterious group behaviours are based on the behaviour of individuals, each of which is responding to factors such as the movement of a neighbour, or external stimuli such as the approach of a predator.
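A minimal sketch of this kind of weak emergence, loosely inspired by the shoal example above: each simulated fish follows only local rules (align with nearby neighbours, flee a close predator), yet coordinated group movement appears without any group-level rule. The Fish class, rules, and parameters are invented purely for illustration.

```python
# Weak emergence from local rules: alignment with neighbours plus predator avoidance.
import random

class Fish:
    def __init__(self):
        self.x, self.y = random.uniform(0, 10), random.uniform(0, 10)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(school, predator=(5.0, 5.0), neighbour_radius=2.0):
    for f in school:
        # Rule 1: align with the average heading of nearby neighbours.
        near = [o for o in school if o is not f
                and (o.x - f.x) ** 2 + (o.y - f.y) ** 2 < neighbour_radius ** 2]
        if near:
            f.vx += 0.1 * (sum(o.vx for o in near) / len(near) - f.vx)
            f.vy += 0.1 * (sum(o.vy for o in near) / len(near) - f.vy)
        # Rule 2: swim away from the predator if it comes close.
        dx, dy = f.x - predator[0], f.y - predator[1]
        if dx * dx + dy * dy < 9.0:          # within 3 units of the predator
            f.vx += 0.5 * dx
            f.vy += 0.5 * dy
        f.x += f.vx
        f.y += f.vy

school = [Fish() for _ in range(50)]
for _ in range(100):
    step(school)   # group-level "shoaling" emerges from individual behaviour
```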

This kind of weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components. You and the page you are reading this on are both made of atoms, but your ability to read and understand comes from features that emerge through atoms in your body forming higher-level structures, such as neurons and their patterns of firing – not simply from atoms interacting.

Strong emergence has recently been criticised by some neuroscientists as risking “metaphysical implausibility”, because there is no evident causal mechanism, nor any single explanation, of how emergence occurs. Like Gregory, these critics claim that the reliance on emergence to explain complex phenomena suggests that neuroscience is at a key historical juncture, similar to that which saw the slow transformation of alchemy into chemistry. But faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.

Interestingly, while some neuroscientists are discombobulated by the metaphysics of emergence, researchers in artificial intelligence revel in the idea, believing that the sheer complexity of modern computers, or of their interconnectedness through the internet, will lead to what is dramatically known as the singularity. Machines will become conscious.

There are plenty of fictional explorations of this possibility (in which things often end badly for all concerned), and the subject certainly excites the public’s imagination, but there is no reason, beyond our ignorance of how consciousness works, to suppose that it will happen in the near future. In principle, it must be possible, because the working hypothesis is that mind is a product of matter, which we should therefore be able to mimic in a device. But the scale of complexity of even the simplest brains dwarfs any machine we can currently envisage. For decades – centuries – to come, the singularity will be the stuff of science fiction, not science.

A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.

The materialist working hypothesis is that brains and minds, in humans and maggots and everything else, are identical. Neurons and the processes they support – including consciousness – are the same thing. In a computer, software and hardware are separate; however, our brains and our minds consist of what can best be described as wetware, in which what is happening and where it is happening are completely intertwined.

Imagining that we can repurpose our nervous system to run different programmes, or upload our mind to a server, might sound scientific, but lurking behind this idea is a non-materialist view going back to Descartes and beyond. It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind. It would be possible to give this idea a veneer of scientific respectability by posing it in terms of reading the state of a set of neurons and writing that to a new substrate, organic or artificial.

But to even begin to imagine how that might work in practice, we would need both an understanding of neuronal function that is far beyond anything we can currently envisage, and would require unimaginably vast computational power and a simulation that precisely mimicked the structure of the brain in question. For this to be possible even in principle, we would first need to be able to fully model the activity of a nervous system capable of holding a single state, never mind a thought. We are so far away from taking this first step that the possibility of uploading your mind can be dismissed as a fantasy, at least until the far future.

For the moment, the brain-as-computer metaphor retains its dominance, although there is disagreement about how strong a metaphor it is. In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die. Less dramatically, but drawing similar conclusions, two decades earlier the historian S Ryan Johansson argued that “endlessly debating the truth or falsity of a metaphor like ‘the brain is a computer’ is a waste of time. The relationship proposed is metaphorical, and it is ordering us to do something, not trying to tell us the truth.”

On the other hand, the US expert in artificial intelligence, Gary Marcus, has made a robust defence of the computer metaphor: “Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that. The real question isn’t whether the brain is an information processor, per se, but rather how do brains store and encode information, and what operations do they perform over that information, once it is encoded.”

Marcus went on to argue that the task of neuroscience is to “reverse engineer” the brain, much as one might study a computer, examining its components and their interconnections to decipher how it works. This suggestion has been around for some time. In 1989, Crick recognised its attractiveness, but felt it would fail, because of the brain’s complex and messy evolutionary history – he dramatically claimed it would be like trying to reverse engineer a piece of “alien technology”. Attempts to find an overall explanation of how the brain works that flow logically from its structure would be doomed to failure, he argued, because the starting point is almost certainly wrong – there is no overall logic.

Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected.

The duo – Eric Jonas and Konrad Paul Kording – employed the very techniques they normally used to analyse the brain and applied them to the MOS 6507 processor found in computers from the late 70s and early 80s that enabled those machines to run video games such as Donkey Kong and Space Invaders.

First, they obtained the connectome of the chip by scanning the 3510 enhancement-mode transistors it contained and simulating the device on a modern computer (including running the games programmes for 10 seconds). They then used the full range of neuroscientific techniques, such as “lesions” (removing transistors from the simulation), analysing the “spiking” activity of the virtual transistors and studying their connectivity, observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games.
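In code, the overall shape of such a "lesion" analysis might look like the sketch below. This is a schematic illustration, not Jonas and Kording's actual pipeline: simulate_chip is a hypothetical stand-in for their chip simulator, and the loop simply records which games fail to run when each transistor is disabled.

```python
# Schematic "lesion" study on a simulated chip: disable one transistor at a time,
# re-run the simulation, and note which games no longer work.

def lesion_study(num_transistors, simulate_chip, games=("Donkey Kong", "Space Invaders")):
    """Map each lesioned transistor to the games that fail without it."""
    critical = {}
    for t in range(num_transistors):
        broken = [g for g in games if not simulate_chip(game=g, disabled_transistor=t)]
        if broken:
            critical[t] = broken
    return critical

# Usage with a stand-in simulator that pretends transistor 42 is needed by every game:
fake_sim = lambda game, disabled_transistor: disabled_transistor != 42
print(lesion_study(100, fake_sim))   # {42: ['Donkey Kong', 'Space Invaders']}
```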

Despite deploying this powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works (it has “ground truth”, in technospeak), the study failed to detect the hierarchy of information processing that occurs inside the chip. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”

This sobering outcome suggests that, despite the attractiveness of the computer metaphor and the fact that brains do indeed process information and somehow represent the external world, we still need to make significant theoretical breakthroughs in order to make progress. Even if our brains were designed along logical lines, which they are not, our present conceptual and analytical tools would be completely inadequate for the task of explaining them. This does not mean that simulation projects are pointless – by modelling (or simulating) we can test hypotheses and, by linking the model with well-established systems that can be precisely manipulated, we can gain insight into how real brains function. This is an extremely powerful tool, but a degree of modesty is required when it comes to the claims that are made for such studies, and realism is needed with regard to the difficulties of drawing parallels between brains and artificial systems.

Current ‘reverse engineering’ techniques cannot deliver a proper understanding of an Atari console chip, let alone of a human brain.

Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices. Although it is often argued that particular functions are tightly localised in the brain, as they are in a machine, this certainty has been repeatedly challenged by new neuroanatomical discoveries of unsuspected connections between brain regions, or amazing examples of plasticity, in which people can function normally without bits of the brain that are supposedly devoted to particular behaviours.

In reality, the very structures of a brain and a computer are completely different. In 2006, Larry Abbott wrote an essay titled “Where are the switches on this thing?”, in which he explored the potential biophysical bases of that most elementary component of an electronic device – a switch. Although inhibitory synapses can change the flow of activity by rendering a downstream neuron unresponsive, such interactions are relatively rare in the brain.

A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation. The nervous system alters its working by changes in the patterns of activation in networks of cells composed of large numbers of units; it is these networks that channel, shift and shunt activity. Unlike any device we have yet envisaged, the nodes of these networks are not stable points like transistors or valves, but sets of neurons – hundreds, thousands, tens of thousands strong – that can respond consistently as a network over time, even if the component cells show inconsistent behaviour.
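The contrast can be sketched in a few lines: a switch gives a hard 0-or-1 output, while a textbook-style rate model responds in a graded, analogue way. The sigmoid function and its parameters below are illustrative simplifications, not claims about real neurons.

```python
# Binary switch vs graded, analogue-style response (toy illustration only).
import math

def transistor_switch(voltage, threshold=0.7):
    """A switch is simply off (0) or on (1)."""
    return 1 if voltage >= threshold else 0

def neuron_rate(input_current, gain=2.0, max_rate=100.0):
    """A model neuron's firing rate varies continuously with its input (spikes/s)."""
    return max_rate / (1.0 + math.exp(-gain * input_current))

for x in (-2, -1, 0, 1, 2):
    print(x, transistor_switch(x), round(neuron_rate(x), 1))
# The switch jumps abruptly from 0 to 1; the firing rate changes smoothly.
```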

Understanding even the simplest of such networks is currently beyond our grasp. Eve Marder, a neuroscientist at Brandeis University, has spent much of her career trying to understand how a few dozen neurons in the lobster’s stomach produce a rhythmic grinding. Despite vast amounts of effort and ingenuity, we still cannot predict the effect of changing one component in this tiny network that is not even a simple brain.

This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level. Finding the link between these two levels of analysis will be a challenge for much of the rest of the century, I suspect. And the prospect of properly understanding what is happening in cases of mental illness is even further away.

Not all neuroscientists are pessimistic – some confidently claim that the application of new mathematical methods will enable us to understand the myriad interconnections in the human brain. Others – like myself – favour studying animals at the other end of the scale, focusing our attention on the tiny brains of worms or maggots and employing the well-established approach of seeking to understand how a simple system works and then applying those lessons to more complex cases. Many neuroscientists, if they think about the problem at all, simply consider that progress will inevitably be piecemeal and slow, because there is no grand unified theory of the brain lurking around the corner.

There are many alternative scenarios about how the future of our understanding of the brain could play out: perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us. Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations. Or by focusing on simple neural network principles we will understand higher-level organisation. Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on. Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains. Or unimagined new technology will change all our views by providing a radical new metaphor for the brain. Or our computer systems will provide us with alarming new insight by becoming conscious. Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics and semiotics. Or we will accept that there is no theory to be found because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that. Or –

This is an edited extract from The Idea of the Brain by Matthew Cobb, which will be published in the UK by Profile on 12 March, and in the US by Basic Books on 21 April, and is available at guardianbookshop.com



In the movie What The Bleep Do We Know!? Dr. Joe Dispenza talks about how our brain works. He talks about the tiny nerve cells in the brain called neurons and how they have tiny branches reaching out to other neurons forming neuron-networks.

Each place where they connect is integrated into a thought or a memory.

He goes on to say that the brain does not know the difference between what it sees in its environment and what it remembers, because the same neuron networks are firing, or being triggered, in both cases.

This means that we can make or break a habit simply by repeating something over and over in our mind, or by actually doing it. If we want to learn something new and create a new habit, the brain will create a new network that reinforces the habit.

The first time you tried to ride a bike, you probably failed. Why? Because the neuron network was still being established.

You were creating it, and by trying over and over again the connections in the network get stronger and stronger, until all of a sudden you ride the bike without thinking about it. You do it automatically. You have created a habit.
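A minimal, hypothetical sketch of this "repetition strengthens the connection" idea, using a simple Hebbian-style update in which a weight grows a little each time the connected units are active together. The learning rate and repetition counts are arbitrary and only meant to illustrate the trend.

```python
# Hebbian-style sketch: practice (co-activation) gradually strengthens a connection.

def practice(weight, repetitions, learning_rate=0.1):
    """Strengthen a connection a little on each co-activation ('fire together, wire together')."""
    for _ in range(repetitions):
        pre, post = 1.0, 1.0                  # both neurons are active during practice
        weight += learning_rate * pre * post  # simple Hebbian-style update
    return weight

strength = 0.05                               # weak connection on the first wobbly attempt
for day in range(1, 8):
    strength = practice(strength, repetitions=10)
    print(f"day {day}: connection strength {strength:.2f}")
# After enough repetition the connection is strong and the action feels automatic.
```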

Learn about making or breaking habits, neuron networks, the power of thought and more by getting the Make A Ripple Make A Difference e-book

"You are what you repeatedly do. Excellence is not an event - it is a habit" - Aristotle

Watching this clip from the movie What The Bleep Do We Know!? will explain more about neuron networks and habits.

You can start to break out of old habits by forming new ones. The power of thought, when used to improve your self-image, is extraordinary.

If you don't break out of your current belief system, it will stick with you for the rest of your life. You need to find a way to change it. And you can.

You CAN change your self-image through the power of thought and positive affirmations, like Louise L. Hay did. You CAN change the way you look at yourself by seeking new truths, and then you will be able to accomplish anything you put your mind to.

We all have tremendous power inside; we just need to realize it. The true POWER IS WITHIN. The Power of Thought.

Nothing can stop the man with the right mental attitude from achieving his goal; nothing on earth can help the man with the wrong mental attitude.
-Thomas Jefferson

REMEMBER: YOU CAN BE OR HAVE WHATEVER YOU WANT IN LIFE

We have all heard incredible stories about people doing the unthinkable.

A homeless person with no money becoming a multimillionaire - like Bill Bartmann who went from being homeless, broke, an alcoholic and paralyzed at the age of 17 to become the 25th richest person in the USA.

Someone becoming a legend and a historical person against "all odds" - like Mahatma Gandhi who freed India from The British Empire.

People recovering miraculously from a terrible disease or accident - like Norman Cousins who was diagnosed with a disease with little chance of surviving, but went on to live many years longer than his doctors predicted.

WHAT DO THESE INCREDIBLE PEOPLE HAVE IN COMMON?

They believe in themselves and they take action. They have realized the power of thought.

They have conditioned themselves to act, behave, and think in a certain way, and when the results start showing up, this reinforces their belief in themselves, which in turn produces even better results. It becomes a positive spiral in which they keep on accomplishing more and more.

They have changed their way of thinking and they are taking action. You too can awaken the giant inside, because you are unique, just like Tony Robbins talks about.

In this interview with John Reese and Frank Kern, Anthony Robbins talks about the holy grail of taking action: certainty (approximately 13:45 into the video).

It's a good video explaining why some people are successful in life and why others keep struggling. It all boils down to our belief systems.

Successful people use their minds and their thoughts to achieve their goals - they take specific action based on successful thinking. They have realized the power of thought.

Successful people have a constructive state of mind - a success attitude. They tell themselves: I CAN

Success comes to those who become SUCCESS CONSCIOUS - Napoleon Hill

The mind is the limit. As long as the mind can envision the fact that you can do something, you can do it, as long as you really believe one hundred percent.  - Arnold Schwarzenegger

If you can envision it in your mind you can do it - AND you need to take some ACTION to do it

Everyone who has ever taken a shower has had an idea. It's the person who gets out of the shower, dries off, and does something about it that makes a difference. - Nolan Bushnell

Mind-work is not an easy ride - it requires practice - just like exercising a muscle.

You need to practice how to be positive - how to believe in yourself - how to believe that you can do anything you put your mind to. The power of thought will help you when you apply it in a positive way.

But how do you apply the Power of Thought?

By surrounding yourself with supporting people and by using affirmations and creative visualization.

When you really want to achieve something, it is important to be around friends and people who support you. This will inspire you and spur you on, and when at times the goal seems far away, your supportive friends can get you back on track.

This will be like an affirmation - a statement that you can do the unthinkable - that you are capable of anything you set your mind to.


Similarities Between Chimpanzee Brain and Human Brain

  • Chimpanzee brain and human brain are two structures of the central nervous system.
  • Both occur in the head region protected by the skull.
  • Both brains are involved in high-level cognitive, communicative, and emotional functions due to their structure.
  • Also, the main components of both brains are cerebrum, cerebellum, and brain stem.
  • Furthermore, both brains are generally large in proportion to body size.
  • This enlargement of the brain is due to the massive expansion of the cerebral cortex.
  • Besides, both have a similar molecular structure in their cerebellum.
  • Their prefrontal cortex and parts of the cortex involved in vision are large.
  • Additionally, both brains are more gyrified than the brains of other primate species.
  • Moreover, Broca’s and Wernicke’s areas form similar landmarks in both brains.
  • And, both brains have a dense distribution of von Economo neurons in the anterior cingulate and frontoinsular cortex.
  • Both brains also show a more complex level of connectivity and function within the arcuate fasciculus and mirror neuron systems.

Thinking with your stomach? The brain may have evolved to regulate digestion

Many life forms use light as an important biological signal, including animals with visual and non-visual systems. But now, researchers from Japan have found that neuronal cells may have initially evolved to regulate digestion according to light information.

In a study published this month in BMC Biology, researchers from the University of Tsukuba have revealed that sea urchins use light to regulate the opening and closing of the pylorus, which is an important component of the digestive tract.

Light-dependent systems often rely on the activity of proteins in the Opsin family, and these are found across the animal kingdom, including in organisms with visual and non-visual systems. Understanding the function of Opsins in animals from different taxonomic groups may provide important clues regarding how visual/non-visual systems evolved in different creatures to use light as an external signal. The function of Opsins in the Ambulacraria group of animals, which includes sea urchins, had not been characterized, a gap the researchers aimed to address.

"The functions of eyes and visual systems have been well-characterized," says senior author of the study Professor Shunsuke Yaguchi. "However, the way in which light dependent systems were acquired and diversified throughout evolution is unclear especially in deuterostomes because of the lack of data regarding the signaling pathway in the Ambulacraria group."

To address this, the researchers tested whether light exposure caused changes in digestive tract activity in sea urchins. They then conducted micro-surgical and genetic knockdown experiments to test whether Opsin cells in the sea urchin digestive system mediated the effect of light.

"The results provided new information about the role of Opsins in sea urchins," explains Professor Yaguchi. "Specifically, we found that stimulation of sea urchin larvae via light caused changes in digestive system function, even in the absence of food stimuli."

Furthermore, the researchers identified brain serotonergic neurons near the Opsin-expressing cells that were essential for mediating the light-stimulated release of nitric oxide, which acts as a neurotransmitter.

"Our results have important implications for understanding the process of evolution, specifically, that of light-dependent systems controlled via neurotransmitters," says Professor Yaguchi.

The data indicate that an early function of brain neurons may have been the regulation of the digestive tract in our evolutionary ancestors. Because food consumption and nutrient absorption are critical to survival, the development of a sophisticated brain-gut regulatory system may have been a major step in animal evolution.


How Stress and Negative Thinking Changes Cortisol

Stress from negative thinking creates changes in the brain that may affect your likelihood of mental disorders such as anxiety, depression, ADHD, schizophrenia and mood disorders.

People who have post-traumatic stress disorder (PTSD) have been shown to have abnormalities in their brains, specifically in the ratio of grey matter to white matter. The difference is that grey matter is where information is processed by neurons, whereas white matter is the fibrous network that connects the neurons. Chronic stress produces more white-matter connections but fewer neurons.


The balance of grey matter and white matter is important for the timing of communication in the brain. It is believed that the disruption in connections affects both your mood and your memories of the associations with that mood.

The problem is that our brains are good at learning from bad experiences but bad at learning from good experiences. Dr. Rick Hanson, creator of The Taking in The Good Course, a brain-training program for using your mind to improve your happiness, says that people who completed a program of training themselves to replace negative thoughts with positive ones “experienced significantly less anxiety and depression, and significantly greater self-control, savoring, compassion, love, contentment, joy, gratitude, self-esteem, self-compassion, satisfaction with life, and overall happiness.”

Improving our brains by eliminating negative thinking is possible. Replacing negative thinking with positive thinking is like training a dog: you give a dog a reward for good behavior, and your brain is similar in that positive thoughts create pleasure, which acts as a reward. Once we feel pleasure, we want more of it, so give your brain positive thoughts and keep it on a steady diet of self-rewarding pleasure.


Watch the video: Talk: We Are Our Brains (January 2023).