SCN astrocytes are important in circadian rhythm

From Washington University in St. Louis:

Clock stars: Astrocytes keep time for brain, behavior

Star-shaped cells around neurons prove to be surprisingly important players in body’s clock

Until recently, work on biological clocks that dictate daily fluctuations in most body functions, including core body temperature and alertness, focused on neurons, those electrically excitable cells that are the divas of the central nervous system.

Asked to define the body’s master clock, biologists would say it is two small spheres — the suprachiasmatic nuclei, or SCN — in the brain that consist of 20,000 neurons. They likely wouldn’t even mention the 6,000 astroglia mixed in with the neurons, said Erik Herzog, a neuroscientist in Arts & Sciences at Washington University in St. Louis. In a March 23 advance online publication from Current Biology, Herzog and his collaborators show that the astroglia help to set the pace of the SCN to schedule a mouse’s day.

The astroglia, or astrocytes, were passed over in silence partly because they weren’t considered to be important. Often called “support cells,” they were supposed to be gap fillers or place holders. Their Latin name, after all, means “starry glue.”

Then two things happened. Scientists discovered that almost all the cells in the body keep time, with a few exceptions such as stem cells. And they also began to realize that the astrocytes do a lot more than they had thought. Among other things, they secrete and slurp neurotransmitters and help neurons form strengthened synapses to consolidate what we’ve learned. In fact, scientists began to speak of the tripartite synapse, emphasizing the role of an astrocyte in the communication between two neurons.

So for a neuroscientist like Herzog, the obvious question was: What were the astrocytes doing in the SCN? Were they keeping time? And if they were keeping time, how did the astrocyte clocks interact with the neuron clocks?

Herzog answered the first question in 2005 — yes, astrocytes have daily clocks — but then the research got stuck. To figure out what the astrocytes were doing in living networks of cells and in living animals, the scientists had to be able to manipulate them independently of the neurons with which they are entwined. The tools to do this simply didn’t exist.

Now, Herzog’s graduate student Matt Tso, the first author on the paper, has solved the problem. The tools he devised allow astrocytes in the SCN to be independently controlled. Using his toolkit, the lab ran two experiments, altering the astrocyte clocks and monitoring the highly ritualized, daily behavior of wheel-running in mice.

The scientists were surprised by the results, to be published in the April 7 print issue of Current Biology. In both experiments, tweaks to the astrocyte clocks reliably slowed the mouse’s sense of time. “We had no idea they would be that influential,” Tso said.

The scientists are already planning follow-up experiments.

Figuring out how and where these clocks function in the brain and body is important because their influence is ubiquitous. For his part, Herzog is already looking at the connections between circadian rhythm and brain cancer, pre-term birth, manic depression and other diseases.

Astrocytes clock in

A biological clock is a series of interlocking reactions that act somewhat like a biochemical hourglass. An accumulating protein eventually shuts down its own production, much as the sand eventually drains from the top half of the hourglass. But then —through the magic of feedback loops — the biochemical hourglass, in effect, turns itself over and starts again.
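The hourglass metaphor can be sketched as a toy simulation: a protein that represses its own production after a delay is enough to produce sustained oscillation. This is only an illustrative model with invented rate constants, not the actual molecular clockwork of the SCN; in real cells, the specific rates tune the period to roughly 24 hours.

```python
# Toy delayed negative-feedback loop: a protein shuts down its own
# production after a delay, so the "hourglass" repeatedly drains and
# refills. All rate constants here are illustrative, not measured values.

def simulate_clock(hours=96.0, dt=0.01, delay=6.0,
                   k_prod=1.0, k_deg=0.2, hill=8):
    steps = round(hours / dt)
    lag = round(delay / dt)
    history = [0.1] * (lag + 1)      # buffer holding past protein levels
    trace = []
    for step in range(steps):
        delayed = history[0]         # protein level `delay` hours ago
        current = history[-1]
        # production is repressed by the delayed protein level
        dp = k_prod / (1.0 + delayed ** hill) - k_deg * current
        history.append(current + dp * dt)
        history.pop(0)
        trace.append((step * dt, current))
    return trace

trace = simulate_clock()
```

With a long enough delay and steep repression, the protein level rises, overshoots, collapses, and rises again rather than settling to a fixed point, which is the qualitative behavior the article describes.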

At first, scientists were aware only of the clock in the SCN. If it is destroyed in an animal such as a rat, the rat will sleep for the same amount of time but in fits and starts instead of for long periods.

In 2005, Herzog demonstrated that astrocytes, like neurons, have internal clocks. His test subjects made the cover of an issue of the Journal of Neuroscience that year.

But then the genes that make up the biological clock began to be found in many different kinds of cells: lung, heart, liver, and sperm. Hair cells, by the way, prefer to grow in the evening.

So Herzog began to wonder about astrocytes in the SCN. Were they, too, keeping time?

To find out, he coupled a bioluminescent protein to a clock gene and then isolated astrocytes in a glass dish. He found that the astrocytes brightened and dimmed rhythmically, proof that they were keeping time.

The obvious next step was to look at the astrocytes not only in a glass dish but also in SCN slices and in living animals. But that turned out to be easier said than done. “We burned through two postdocs trying to get these experiments to work,” Herzog said.

So it is a technical triumph that Tso was able to make the astrocytes light up when they were expressing clock genes and to add or delete clock genes in the astrocytes while leaving the neurons intact, Herzog said.

To manipulate the astrocytes in the SCN independently of neurons, the scientists needed a way to target the astrocytes alone. The key turned out to be a structural protein that helps to give astrocytes their branching structure, here linked to a protein that fluoresces green. Credit: LPDWiki.

As a first step, collaborator Michihiro Mieda from Kanazawa University created a “conditional reporter” that switched on a firefly luciferase whenever a clock gene was being expressed in a cell of interest. Tso delivered the tiny switch to the astrocytes inside a virus.

In slices of a mouse SCN with this reporter in place, the scientists could see that the star-shaped cells were expressing the clock gene in a rhythmic pattern. This proved that astrocytes keep time in living tissue where they are interacting with one another and with neurons, as well as when they are isolated in a dish.

Next, the scientists used the new gene-editing tool CRISPR-Cas9 to delete a clock gene in only the astrocytes of the SCN of living mice. They then monitored the mice for changes in the time they started running on a wheel each day.

Running is an easily measured behavior that provides a reliable indication of the state of the underlying body clock. A mouse in constant darkness will start running on a wheel approximately every 23.7 hours, typically deviating by less than 10 minutes from this schedule.
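As a rough arithmetic sketch using the 23.7-hour figure above, a mouse free-running in constant darkness starts its activity about 18 minutes earlier each day relative to a 24-hour wall clock, so onsets drift steadily around the clock face:

```python
# Activity-onset drift for a free-running mouse: an intrinsic period of
# 23.7 h means each onset comes ~0.3 h (18 min) earlier per day relative
# to a 24 h wall clock.
period_h = 23.7
daily_shift_h = period_h - 24.0              # about -0.3 h per day
daily_shift_min = daily_shift_h * 60.0       # about -18 minutes per day

# Onset time (hours, wall-clock) over ten days in the dark, assuming the
# first onset falls at hour 0 for simplicity.
onsets = [(day * period_h) % 24.0 for day in range(10)]
```

This steady, predictable drift is what makes wheel-running onset such a sensitive readout of the underlying clock's period.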

In this SCN slice, cells expressing an astrocyte-specific structural protein that had been stained red (top right panel) matched up well with cells that had been equipped to fluoresce green when they were expressing a clock gene (middle right panel), demonstrating that the scientists could watch astrocytes tick in the biological clock. Credit: Herzog lab.

“When we deleted the gene in the astrocytes, we had good reason to predict the rhythm would remain unchanged,” Tso said. “When people deleted this clock gene in neurons, the animals completely lost rhythm, which suggests that the neurons are necessary to sustain a daily rhythm.”

Instead, when the astrocyte clock was deleted, the SCN clock ran slower. The mice climbed into their wheels one hour later than usual every day.

“This was quite a surprise,” Tso said.

The results of the next experiment were even more exciting for them. The scientists began with a mouse that has a mutation making its clocks run fast and then “rescued” this mutation in astrocytes but not in neurons. This meant that the astrocyte clocks were running at the normal pace but the neuron clocks were still fast.

“We expected the SCN to follow the neurons’ pace. There are 10 times more neurons in the SCN than astrocytes. Why would the behavior follow the astrocytes’?” Tso said.

But that is exactly what they did. The mice with the restored astrocyte clocks climbed into their wheels two hours later than mice whose astrocytes and neurons were both fast-paced.

Read more.

“Mini-brains” in peripheral nervous system may analyze and interpret sensations

From University of Leeds Health News:

Discovery of ‘mini-brains’ could change understanding of pain medication


The body’s peripheral nervous system could be capable of interpreting its environment and modulating pain, neuroscientists have established, after studying how rodents reacted to stimulation.

Until now, accepted scientific theory has held that only the central nervous system – the brain and spinal cord – could actually interpret and analyse sensations such as pain or heat.

The peripheral system that runs throughout the body was seen to be a mainly wiring network, relaying information to and from the central nervous system by delivering messages to the ‘control centre’ (the brain), which then tells the body how to react.

In recent years there has been some evidence of a more complex role for the peripheral nervous system, but this study by Hebei Medical University in China and the University of Leeds highlights a crucial new role for the ganglia, a collection of ‘nodules’.


Previously these were believed to act only as an energy source for messages being carried through the nervous system. In addition, researchers now believe they also have the ability to act as ‘mini-brains’, modifying how much information is sent to the central nervous system.

The five year study found that nerve cells within the ganglia can exchange information between each other with the help of a signalling molecule called GABA, a process that was previously believed to be restricted to the central nervous system.

The findings are published today in the Journal of Clinical Investigation and have potential future implications for the development of new painkillers, including drugs to target backache and arthritis pain.

Pain relief drugs

Current pain relief drugs are targeted at the central nervous system and often have side effects that can include addiction and tolerance issues.

The new research opens up the possibility of developing non-addictive and non-drowsy drugs targeted at the peripheral nervous system. Safe therapeutic doses of such drugs could also be much higher, potentially resulting in greater efficacy.

Whilst the study showed a rodent’s peripheral nervous system was able to interpret the type of stimulation it was sensing, further research is still needed to understand how sensations are interpreted and whether these results apply to humans.

In addition, the theory would need to be adopted by drug development companies and extensively tested before laboratory and clinical trials of a drug could be carried out. Should the findings be adopted, a timescale of at least 15-20 years might be required to produce a working drug.

Nerve arrangements

Neuroscientist Professor Nikita Gamper, who led the research at both universities, said: “We found the peripheral nervous system has the ability to alter the information sent to the brain, rather than blindly passing everything on to the central nervous system.

“We don’t yet know how the system works, but the machinery is definitely in place to allow the peripheral system to interpret and modify the tactile information perceived by the brain in terms of interpreting pain, warmth or the solidity of objects.

“Further research is needed to understand exactly how it operates, but we have no reason to believe that the same nerve arrangements would not exist in humans.

“When our research team looked more closely at the peripheral system, we found the machinery for neuronal communication did exist in the peripheral nervous system’s structure. It is as if each sensory nerve has its own ‘mini-brain’, which to an extent, can interpret incoming information.”

[…]

Professor Gamper believes the findings may present a challenge to the accepted ‘Gate Control Theory of Pain’. The theory holds that a primary ‘gate’ exists between the peripheral and central nervous systems, controlling what information is sent to the central system.

The study now suggests the transmission of information to the central nervous system must go through another set of gates, or more accurately a process similar to a volume control, where the flow of information can be controlled by the peripheral nervous system.

Read more.

Dendrites more electrically active than soma of neurons; perform digital and analog computations

From UCLA Newsroom:

Brain is 10 times more active than previously measured, UCLA researchers find

Dan Gordon | March 09, 2017

Dendrites (shown here in green) are not just passive conduits for electrical currents between neurons, UCLA scientists discovered. Credit: Shelley Halpain/UC San Diego.

A new UCLA study could change scientists’ understanding of how the brain works — and could lead to new approaches for treating neurological disorders and for developing computers that “think” more like humans.

The research focused on the structure and function of dendrites, which are components of neurons, the nerve cells in the brain. Neurons are large, tree-like structures made up of a body, the soma, with numerous branches called dendrites extending outward. Somas generate brief electrical pulses called “spikes” in order to connect and communicate with each other. Scientists had generally believed that dendrites were passive conduits, simply relaying the currents they receive at synapses to the soma, which generates the spikes; this process is thought to be the basis for how memories are formed and stored, but the dendrites’ role in it had never been directly tested.

But the UCLA team discovered that dendrites are not just passive conduits. Their research showed that dendrites are electrically active in animals that are moving around freely, generating nearly 10 times more spikes than somas. The finding challenges the long-held belief that spikes in the soma are the primary way in which perception, learning and memory formation occur.

“Dendrites make up more than 90 percent of neural tissue,” said UCLA neurophysicist Mayank Mehta, the study’s senior author. “Knowing they are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information. It may pave the way for understanding and treating neurological disorders, and for developing brain-like computers.”

The research is reported in the March 9 issue of the journal Science.

Scientists have generally believed that dendrites meekly sent currents they received from the cell’s synapse (the junction between two neurons) to the soma, which in turn generated an electrical impulse. Those short electrical bursts, known as somatic spikes, were thought to be at the heart of neural computation and learning. But the new study demonstrated that dendrites generate their own spikes 10 times more often than the somas.

Video: Animation of a neuron firing electrical spikes

The researchers also found that dendrites generate more than spikes. The somas generated only all-or-nothing spikes, much like digital computers do. The dendrites produced similar binary spikes, but in addition they generated large, slowly varying voltages that were even bigger than the spikes, which suggests that the dendrites also perform analog computation.

“We found that dendrites are hybrids that do both analog and digital computations, which are therefore fundamentally different from purely digital computers, but somewhat similar to quantum computers that are analog,” said Mehta, a UCLA professor of physics and astronomy, of neurology and of neurobiology. “A fundamental belief in neuroscience has been that neurons are digital devices. They either generate a spike or not. These results show that the dendrites do not behave purely like a digital device. Dendrites do generate digital, all-or-none spikes, but they also show large analog fluctuations that are not all or none. This is a major departure from what neuroscientists have believed for about 60 years.”
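A toy signal makes the digital/analog distinction concrete (this is not the study's data or model; all amplitudes and timings are invented for illustration): brief all-or-none events ride on top of a slow, graded voltage, and a simple threshold separates the two components.

```python
import math

# Toy dendrite-like trace: a slow graded ("analog") component plus brief
# all-or-none ("digital") spike events. Units are arbitrary, mV-like.
def dendrite_signal(t):
    analog = 5.0 * math.sin(2.0 * math.pi * t / 100.0)  # slow fluctuation
    spike = 40.0 if t % 25 == 0 else 0.0                # all-or-none event
    return analog + spike

trace = [dendrite_signal(t) for t in range(300)]

# A threshold detector recovers the digital events; everything below the
# threshold is the graded analog signal.
threshold = 20.0
spike_times = [t for t, v in enumerate(trace) if v > threshold]
analog_part = [v for v in trace if v <= threshold]
```

The point of the sketch is that the binary events and the graded voltage carry information independently, which is the hybrid behavior Mehta describes.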

Because the dendrites are nearly 100 times larger in volume than the neuronal centers, Mehta said, the large number of dendritic spikes taking place could mean that the brain has more than 100 times the computational capacity previously thought.

Recent studies in brain slices showed that dendrites can generate spikes, but it was not clear whether this happens during natural behavior, or how often. Measuring dendrites’ electrical activity during natural behavior has long been a challenge because they’re so delicate: in studies with laboratory rats, scientists found that placing electrodes in the dendrites themselves while the animals were moving actually killed those cells. But the UCLA team developed a new technique that involves placing the electrodes near, rather than in, the dendrites.

Using that approach, the scientists measured dendrites’ activity for up to four days in rats that were allowed to move freely within a large maze. Taking measurements from the posterior parietal cortex, the part of the brain that plays a key role in movement planning, the researchers found far more activity in the dendrites than in the somas — approximately five times as many spikes while the rats were sleeping, and up to 10 times as many when they were exploring.

“Many prior models assume that learning occurs when the cell bodies of two neurons are active at the same time,” said Jason Moore, a UCLA postdoctoral researcher and the study’s first author. “Our findings indicate that learning may take place when the input neuron is active at the same time that a dendrite is active — and it could be that different parts of dendrites will be active at different times, which would suggest a lot more flexibility in how learning can occur within a single neuron.”

Read more.

Phasic inhibition in the hippocampal CA1 region may be crucial to memory consolidation

From IST Austria:

The rhythm that makes memories permanent

Scientists at IST Austria identify mechanism that regulates rhythmic brain waves • Inhibition at synapses is the key to make memories permanent

Every time we learn something new, the memory does not only need to be acquired, it also needs to be stabilized in a process called memory consolidation. Brain waves are considered to play an important role in this process, but the underlying mechanism that dictates their shape and rhythm was still unknown. A study now published in Neuron shows that one of the brain waves important for consolidating memory is dominated by synaptic inhibition.

So-called sharp wave ripples (SWRs) are one of three major brain waves coming from the hippocampus. The new study, a cooperation between the research groups of Professors Peter Jonas and Jozsef Csicsvari at the Institute of Science and Technology Austria (IST Austria), found the mechanism that generates this oscillation of neuronal activity in mice. “Our results shed light on the mechanisms underlying this high-frequency network oscillation. As our experiments provide information both about the phase and the location of the underlying conductance, we were able to show that precisely timed synaptic inhibition is the current generator for sharp wave ripples,” explains author Professor Peter Jonas.

When neurons oscillate in synchrony, their electrical activity adds together so that measurements of field potential can pick them up. SWRs are one of the most synchronous oscillations in the brain. Their name derives from their characteristic trace when measuring local field potential: the slow sharp waves have a triangular shape with ripples, or fast field oscillations, added on. SWRs have been suggested to play a key role in making memories permanent. In this study, the researchers wanted to identify whether ripples are caused by a temporal modulation of excitation or of inhibition at the synapse, the connection between neurons. For Professor Jozsef Csicsvari, a pooling of expertise was crucial in answering this question: “SWRs play an important role in the brain, but the mechanism generating them has not been identified so far – probably partly because of technical limitations in the experiments. We combined the Jonas group’s experience in recording under voltage-clamp conditions with my group’s expertise in analyzing electrical signals while animals are behaving. This collaborative effort made unprecedented measurements possible and we could achieve the first high resolution recordings of synaptic currents during SWR in behaving mice.”

The neuroscientists found that the frequency of both excitatory and inhibitory events at the synapse increased during SWRs. But quantitatively, synaptic inhibition dominated over excitation during the generation of SWRs. Furthermore, the magnitude of inhibitory events positively correlated with SWR amplitude, indicating that the inhibitory events are the driver of the oscillation. Inhibitory events were phase locked to individual cycles of ripple oscillations. Finally, the researchers showed that so-called PV+ interneurons – neurons that provide inhibitory output onto other neurons – are mainly responsible for generating SWRs.
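Phase locking of synaptic events to an oscillation is commonly quantified with the mean resultant length (also called vector strength): events clustered at one ripple phase score near 1, while events spread evenly over the cycle score near 0. A minimal sketch with synthetic phases (not the study's data or its specific analysis):

```python
import cmath, math

def vector_strength(phases_rad):
    """Mean resultant length of event phases: ~1 = tight locking, ~0 = none."""
    resultant = sum(cmath.exp(1j * p) for p in phases_rad) / len(phases_rad)
    return abs(resultant)

# Synthetic examples: events clustered near phase 0 versus events spread
# uniformly over one full cycle.
locked = [0.1 * math.sin(i) for i in range(100)]
uniform = [2.0 * math.pi * i / 100.0 for i in range(100)]
```

Summing unit vectors at each event's phase and taking the length of the average vector is the standard circular-statistics way to score this kind of locking.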

The authors propose a model involving two specific regions in the hippocampus, CA1 and CA3. In their model SWRs are generated by a combination of tonic excitation from the CA3 region and phasic inhibition within the CA1 region. Jian Gan, first author and postdoc in the group of Peter Jonas, explains the implications for temporal coding of information in the CA1 region: “In our ripple model, inhibition ensures the precise timing of neuronal firing. This could be critically important for preplay or replay of neuronal activity sequences, and the consolidation of memory. Inhibition may be the crucial player to make memories permanent.”

Read more.

Certain gut bacteria may contribute to misfolded proteins and inflammation in neurodegenerative diseases

From U of L School of Medicine News:

Study demonstrates role of gut bacteria in neurodegenerative diseases

Research at UofL funded by The Michael J. Fox Foundation shows proteins produced by gut bacteria may cause misfolding of brain proteins and cerebral inflammation

Robert P. Friedland, M.D.

Alzheimer’s disease (AD), Parkinson’s disease (PD) and Amyotrophic Lateral Sclerosis (ALS) are all characterized by clumped, misfolded proteins and inflammation in the brain. In more than 90 percent of cases, physicians and scientists do not know what causes these processes to occur.

Robert P. Friedland, M.D., the Mason C. and Mary D. Rudd Endowed Chair and Professor of Neurology at the University of Louisville School of Medicine, and a team of researchers have discovered that these processes may be triggered by proteins made by our gut bacteria (the microbiota). Their research has revealed that exposure to bacterial proteins called amyloid that have structural similarity to brain proteins leads to an increase in clumping of the protein alpha-synuclein in the brain. Aggregates, or clumps, of misfolded alpha-synuclein and related amyloid proteins are seen in the brains of patients with the neurodegenerative diseases AD, PD and ALS.

Alpha-synuclein (AS) is a protein normally produced by neurons in the brain. In both PD and AD, alpha-synuclein is aggregated in a clumped form called amyloid, causing damage to neurons. Friedland has hypothesized that similarly clumped proteins produced by bacteria in the gut cause brain proteins to misfold via a mechanism called cross-seeding, leading to the deposition of aggregated brain proteins. He also proposed that amyloid proteins produced by the microbiota cause priming of immune cells in the gut, resulting in enhanced inflammation in the brain.

The research, which was supported by The Michael J. Fox Foundation, involved the administration of bacterial strains of E. coli that produce the bacterial amyloid protein curli to rats. Control animals were given identical bacteria that lacked the ability to make the bacterial amyloid protein. The rats fed the curli-producing organisms showed increased levels of AS in the intestines and the brain and increased cerebral AS aggregation, compared with rats who were exposed to E. coli that did not produce the bacterial amyloid protein. The curli-exposed rats also showed enhanced cerebral inflammation.

Similar findings were noted in a related experiment in which nematodes (Caenorhabditis elegans) that were fed curli-producing E. coli also showed increased levels of AS aggregates, compared with nematodes not exposed to the bacterial amyloid. A research group led by neuroscientist Shu G. Chen, Ph.D., of Case Western Reserve University, performed this collaborative study.

This new understanding of the potential role of gut bacteria in neurodegeneration could bring researchers closer to uncovering the factors responsible for initiating these diseases and ultimately developing preventive and therapeutic measures.

“These new studies in two different animals show that proteins made by bacteria harbored in the gut may be an initiating factor in the disease process of Alzheimer’s disease, Parkinson’s disease and ALS,” Friedland said. “This is important because most cases of these diseases are not caused by genes, and the gut is our most important environmental exposure. In addition, we have many potential therapeutic options to influence the bacterial populations in the nose, mouth and gut.”

Read more.

Researchers use RNA sequences to map projections from specific brain regions

From Cold Spring Harbor Laboratory News:

Revolutionary method to map the brain at single-neuron resolution is successfully demonstrated

Friday, 19 August 2016 07:00

MAPseq uses RNA sequencing to rapidly and inexpensively find the diverse destinations of thousands of neurons in a single experiment in a single animal

Cold Spring Harbor, NY — Neuroscientists today publish in Neuron details of a revolutionary new way of mapping the brain at the resolution of individual neurons, which they have successfully demonstrated in the mouse brain.

The new method, called MAPseq (Multiplexed Analysis of Projections by Sequencing), makes it possible in a single experiment to trace the long-range projections of large numbers of individual neurons from a specific region or regions to wherever they lead in the brain—in experiments that are many times less expensive, labor-intensive and time-consuming than current mapping technologies allow.

Although a number of important brain-mapping projects are now under way, all of these efforts to obtain “connectomes,” or wiring maps, rely upon microscopes and related optical equipment to trace the myriad thread-like projections that link neurons to other neurons, near and far. For the first time ever, MAPseq “converts the task of brain mapping into one of RNA sequencing,” says its inventor, Anthony Zador, M.D., Ph.D., professor at Cold Spring Harbor Laboratory.

“The RNA sequences, or ‘barcodes,’ that we deliver to individual neurons are unmistakably unique,” Zador explains, “and this enables us to determine if individual neurons, as opposed to entire regions, are tailored to specific targets.”

RNA sequences

An injection into a “source” region of the brain contains a viral library encoding a diverse collection of barcode sequences, which are hitched to an engineered protein that is designed to carry the barcode along axonal pathways. The barcode RNA is expressed at high levels in the neurons of the source region where the injection is made and is transported into their axon terminals. In each neuron, it travels to the point where the axon forms a synapse with a projection from another neuron.

MAPseq approach

“Bulk” labeling methods now widely in use to map brain connections are able to determine that neurons in the “source” region (left side) project to three green-shaded regions (right side), but are not able to distinguish the specific destinations of individual neurons in the source region. MAPseq enables such distinction — in this example, showing that neurons bearing specific “barcodes” (vastly reduced in complexity here for demonstration purposes) carry those barcodes to some of the 3 “destinations” but not necessarily all of them, or the same ones as other neurons in the source region.


MAPseq differs from so-called “bulk tracing” methods now in common use, in which a marker—typically a fluorescent protein—is expressed by neurons and carried along their axons. Such markers are good at determining all of the regions where neurons in the source region project to, but they cannot tell scientists that any two neurons in the source region project to the same region, to different regions, or to some of the same regions, and some different ones. That inability to resolve a neuron’s axonal destinations, cell by cell in a given region, is what motivated Zador to come up with a new technique.

One way of explaining the advantage of MAPseq over bulk tracing methods is to imagine being at an international airport, with the intention of getting on a flight to, say, Germany. “If you go to the international terminal, you see a long line of ticket counters,” Zador explains. “If you want to go to Germany, it’s not enough to take any airline at the international terminal. If you stand in line at the counter for Air Chile, you’re probably not going to be able to buy a ticket for Germany.”

“Those many airlines whose counters are adjacent serve many destinations, some of which overlap, some of which are unique. You can print out a map showing all of the foreign countries that all of the airlines serve from your airport, but that doesn’t tell you anything at all about individual airlines and where they go. This is the difference between current labeling methods and MAPseq. The ‘individual airlines’ in my example are adjacent neurons in a part of the brain whose ‘routes’ we want to trace.”

Zador and his team, including Justus Kebschull, a graduate student in his lab who is first author on the Neuron paper introducing the new method, have spent several years working out a technology that enables them to assign unique barcode-like identifiers to large numbers of individual neurons via a single injection in any brain region of interest. Each injection consists of a deactivated virus that has been engineered to contain massive pools of individually unique RNA molecules, each of whose sequence—consisting of 30 “letters,” or nucleotides—is taken up by single neurons. Thirty letters yields many, many times more barcode sequences (10^18) than there are neurons in either the mouse or human brain, so this method is especially well suited to the massive complexity problem that brain mapping presents.
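The combinatorics behind that claim is simple arithmetic: a 30-nucleotide sequence over the four-letter alphabet {A, C, G, T} allows 4^30 distinct barcodes, and a birthday-problem estimate shows collisions stay rare even for hundreds of millions of labeled neurons. The neuron counts below are rough order-of-magnitude figures for scale, not numbers from the paper.

```python
# 30 nucleotides over a 4-letter alphabet: 4**30 possible barcodes.
barcode_space = 4 ** 30                  # = 2**60, about 1.15e18

mouse_neurons = 7.5e7                    # rough order-of-magnitude figures,
human_neurons = 8.6e10                   # not values from the study

# Birthday-problem estimate of the chance that any two of n labeled
# neurons draw the same barcode: roughly n*(n-1) / (2 * barcode_space).
def collision_prob(n, space=barcode_space):
    return n * (n - 1) / (2 * space)

p_mouse = collision_prob(mouse_neurons)  # well under one percent
```

Because the barcode space dwarfs any brain's neuron count, nearly every labeled neuron ends up with a unique identifier.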

Tests show that the technology works—the barcodes travel reliably and evenly throughout the brain, along the “trunklines” that are the axons, and out to the “branch points” where synapses form.

About two days after one or more injections are made in a region of interest, the brain is dissected and RNA is collected and sequenced. RNA barcodes in the “source” area are now matched with the same barcodes collected in distant parts of the brain.
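The matching step amounts to a lookup: each barcode sequenced in the source region is checked against the barcode sets recovered from the dissected target areas, and a barcode found in both links one source neuron to that projection target. A toy sketch with invented, shortened barcodes (real MAPseq barcodes are 30 nucleotides):

```python
# Hypothetical toy data: barcodes recovered from the injected source
# region and from three dissected target areas. All sequences and area
# names are invented for illustration.
source = {"AACG", "GGTA", "CTTC"}
targets = {
    "area_1": {"AACG", "GGTA"},
    "area_2": {"GGTA"},
    "area_3": {"CTTC", "TTTT"},   # "TTTT" was never injected: noise
}

# Each source barcode's projection pattern: the areas where it turns up.
projections = {
    bc: [area for area, found in targets.items() if bc in found]
    for bc in source
}
```

In this sketch one neuron ("GGTA") projects to two areas while the others each target one, which is exactly the per-neuron distinction that bulk tracing cannot make.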

RNA sequences

To demonstrate MAPseq’s capabilities, Zador’s team injected a part of the mouse brain called the locus coeruleus (LC), located in the brain stem. After nearly two days, the cortex was divided into 22 slices, dissected and sequenced for RNA barcodes. The sequence readouts were matched with barcodes of cells in the region of the initial injection, establishing the specific paths of individual LC neurons.

“Sequencing the RNA is a highly efficient, automated process, which makes MAPseq such a potentially radical tool,” Kebschull says. “In addition to the speed and economy of RNA sequencing, it has the great advantage of making it possible for researchers to distinguish between individual neurons within the same region that project to different parts of the brain.”

To demonstrate MAPseq’s capabilities, Zador’s team injected a part of the mouse brain called the locus coeruleus (LC), located in the brain stem. It is the cortex’s sole source of noradrenaline, a neuromodulator that signals surprise. Zador’s team used MAPseq to address an old question: does the “surprise” signal get broadcast everywhere in the cortex, or only to particular places, where, perhaps, it is most needed or relevant?

In their demonstration experiment, only RNA that ended up in the cortex or olfactory bulb was sequenced, along with that of the source region in the LC where the barcodes were originally injected. The team divided the cortex into 22 slices, each about 300 microns thick, and dissected each slice for sequencing. The results excited the team.

“We found that neurons in the LC have a variety of idiosyncratic projection patterns,” Zador says. “Some neurons project almost exclusively to a single preferred target in the cortex or olfactory bulb. Other neurons project more broadly, although weakly.”
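The distinction Zador draws can be made concrete with a toy classifier: call a neuron “dedicated” if most of its barcode reads land in one target, and “broad” otherwise. The threshold, region names, and read counts here are invented for illustration, not drawn from the paper.

```python
# Toy classifier for the projection patterns described above: a neuron
# counts as "dedicated" if a large enough fraction of its barcode reads
# falls in a single target region.

def classify(counts, threshold=0.8):
    """Label a neuron's projection pattern from its per-region read counts."""
    total = sum(counts.values())
    top = max(counts.values())
    return "dedicated" if top / total >= threshold else "broad"

# Hypothetical read counts for two neurons:
neuron_a = {"auditory_cortex": 95, "visual_cortex": 3, "olfactory_bulb": 2}
neuron_b = {"auditory_cortex": 12, "visual_cortex": 10, "olfactory_bulb": 9}

print(classify(neuron_a))  # dedicated: 95% of reads in one target
print(classify(neuron_b))  # broad: reads spread roughly evenly
```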

These results, he adds, “are consistent with, and reconcile, previous seemingly contradictory results about LC projections.” The surprise signal can reach most parts of the brain, but there are very specific parts of the brain where the signal is especially focused.

The team showed that results could be obtained from a single injection in the LC, as well as from two injections on opposite sides of the brain. Experiments are already in progress in which the entire cortex is being “tiled” with injections; the team hopes this will yield the first connectome of the entire cortex at single-neuron resolution.

“Once we automate the process of using many injections, we think this kind of experiment can be completed by a single person in just a week or two, and at a cost of only a few thousand dollars,” Zador says. “We are very keen on being able to do these kinds of studies in a single animal, which will eliminate the past problem of injecting multiple animals to trace multiple neurons, a method that requires one to build a single map from many brains, each of which is somewhat different.”

Zador’s next goal with MAPseq is to map the brains of animals that model various neurodevelopmental and neuropsychiatric illnesses, to see how gene mutations strongly implicated in causing those disorders alter the structure of brain circuits, and thus, presumably, brain function.

Read more.

 

Scientists find area in brain that is prewired for reading

From MIT News:

Study finds brain connections key to reading

Pathways that exist before kids learn to read may determine development of brain’s word recognition area.

Anne Trafton | MIT News Office
August 8, 2016

A new study from MIT reveals that a brain region dedicated to reading has connections for that skill even before children learn to read.

By scanning the brains of children before and after they learned to read, the researchers found that they could predict the precise location where each child’s visual word form area (VWFA) would develop, based on the connections of that region to other parts of the brain.

Neuroscientists have long wondered why the brain has a region exclusively dedicated to reading — a skill that is unique to humans and only developed about 5,400 years ago, which is not enough time for evolution to have reshaped the brain for that specific task. The new study suggests that the VWFA, located in an area that receives visual input, has pre-existing connections to brain regions associated with language processing, making it ideally suited to become devoted to reading.

“Long-range connections that allow this region to talk to other areas of the brain seem to drive function,” says Zeynep Saygin, a postdoc at MIT’s McGovern Institute for Brain Research. “As far as we can tell, within this larger fusiform region of the brain, only the reading area has these particular sets of connections, and that’s how it’s distinguished from adjacent cortex.”

Saygin is the lead author of the study, which appears in the Aug. 8 issue of Nature Neuroscience. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Specialized for reading

The brain’s cortex, where most cognitive functions occur, has areas specialized for reading as well as face recognition, language comprehension, and many other tasks. Neuroscientists have hypothesized that the locations of these functions may be determined by prewired connections to other parts of the brain, but they have had few good opportunities to test this hypothesis.

Reading presents a unique opportunity to study this question because it is not learned right away, giving scientists a chance to examine the brain region that will become the VWFA before children know how to read. This region, located in the fusiform gyrus, at the base of the brain, is responsible for recognizing strings of letters.

Children participating in the study were scanned twice — at 5 years of age, before learning to read, and at 8 years, after they learned to read. In the scans at age 8, the researchers precisely defined the VWFA for each child by using functional magnetic resonance imaging (fMRI) to measure brain activity as the children read. They also used a technique called diffusion-weighted imaging to trace the connections between the VWFA and other parts of the brain.

The researchers saw no indication from fMRI scans that the VWFA was responding to words at age 5. However, the region that would become the VWFA was already different from adjacent cortex in its connectivity patterns. These patterns were so distinctive that they could be used to accurately predict the precise location where each child’s VWFA would later develop.

Although the area that will become the VWFA does not respond preferentially to letters at age 5, Saygin says it is likely that the region is involved in some kind of high-level object recognition before it gets taken over for word recognition as a child learns to read. Still unknown is how and why the brain forms those connections early in life.

Pre-existing connections

Kanwisher and Saygin have found that the VWFA is connected to language regions of the brain in adults. The new findings in children offer strong evidence that those connections exist before reading is learned and are not the result of learning to read, according to Stanislas Dehaene, a professor and the chair of experimental cognitive psychology at the Collège de France, who wrote a commentary on the paper for Nature Neuroscience.

“To genuinely test the hypothesis that the VWFA owes its specialization to a pre-existing connectivity pattern, it was necessary to measure brain connectivity in children before they learned to read,” wrote Dehaene, who was not involved in the study. “Although many children, at the age of 5, did not have a VWFA yet, the connections that were already in place could be used to anticipate where the VWFA would appear once they learned to read.”

The MIT team now plans to study whether this kind of brain imaging could help identify children who are at risk of developing dyslexia and other reading difficulties.

Read more.