In addition to the featured articles posted in the blog, links to other research news articles are posted on the Brain Science News page.
From The Scientist:
Lighting Up Monkey Brains
Optogenetic and chemogenetic tools illuminate brain and behavior connections in nonhuman primates.
November 1, 2017
NEURON, 95:51-62, 2017
Since optogenetics burst onto the scene in the early 2000s, brain researchers have embraced the technique to study functions ranging from sleep and hunger to voluntary movements and sensory input. The vast majority of these studies have been conducted in rodents, and much has been learned, but extrapolating to humans from a species so different from us poses a challenge.
Brain research in nonhuman primates precedes optogenetics by decades. Attempts to understand the links between brain function and behavior have relied on techniques such as inserting an electrode into the brain to activate or interrupt neural signals, and creating lesions to disrupt pathways. But these approaches only reveal whether the altered brain regions are involved in the functions being studied, with little detail about the types of cells or networks involved.
Controlling neurons with light (optogenetics) or chemicals (chemogenetics) offers researchers a much more precise way to study brain function. Optogenetics utilizes a microbial protein known as channelrhodopsin (ChR), a light-activated ion channel. When inserted into animal cells under the control of a cell type–specific promoter, the protein is expressed in subsets of neurons, and a beam of light can be used to trigger its activity, spurring those neurons to action. Chemogenetics deploys chemicals rather than light. Cells are engineered to carry DREADD (designer receptors exclusively activated by designer drugs) proteins, which are then activated by a drug that doesn’t otherwise affect animal metabolism.
Rodents are often genetically engineered to encode ChR, DREADDs, or other controlling elements. But so far, genetically modifying primates has proven more difficult and expensive, limiting researchers to using viral vectors to deliver the genes for these proteins to the brain. These vectors are generally derived from adeno-associated viruses, says Jessica Raper of the Yerkes National Primate Research Center. “Just like humans, nonhuman primates can have neutralizing antibodies for these viruses, so any method must prescreen for antibodies specific to the serotype being used,” she explains.
The larger primate brain also requires larger amounts of vector to be injected directly into the brain, sometimes in multiple doses that may damage tissue. Furthermore, delivering light deep into the brain requires inserting an optical fiber, and chemicals designed to activate inserted genetic sequences must be able to cross the blood-brain barrier. (See “Getting Drugs Past the Blood-Brain Barrier”) That means much more trial and error than in mouse studies. “There’s no universal solution for primates as there is with the host of genetically modified rodents,” says William Stauffer of the University of Pittsburgh.
Nonetheless, several recent studies have managed to probe the function of specific brain regions or cell types in rhesus monkeys, marmosets, and other primates using optogenetic and chemogenetic tools. Here, The Scientist profiles some of these recent efforts.
From Illinois News Bureau:
Ringing in ears keeps brain more at attention, less at rest, study finds
From Salk News Releases:
New kinds of brain cells revealed
Salk and UC San Diego scientists analyzed methylation patterns of neurons to find new subtypes
LA JOLLA—Under a microscope, it can be hard to tell the difference between any two neurons, the brain cells that store and process information. So scientists have turned to molecular methods to try to identify groups of neurons with different functions.
Now, Salk Institute and University of California San Diego scientists have, for the first time, profiled chemical modifications of DNA molecules in individual neurons, giving the most detailed information yet on what makes one brain cell different from its neighbor. This is a critical step in beginning to identify how many types of neurons exist, which has eluded neuroscientists but could lead to a dramatically better understanding about brain development and dysfunction. Each cell’s methylome—the pattern of chemical markers made up of methyl groups that stud its DNA—gave a distinct readout that helped the Salk team sort neurons into subtypes. The work appears in the journal Science on August 10, 2017.
“We think it’s pretty striking that we can tease apart a brain into individual cells, sequence their methylomes, and identify many new cell types along with their gene regulatory elements, the genetic switches that make these neurons distinct from each other,” says co-senior author Joseph Ecker, professor and director of Salk’s Genomic Analysis Laboratory and an investigator of the Howard Hughes Medical Institute.
In the past, to identify what sets different types of neurons apart from each other, researchers have studied levels of RNA molecules inside individual brain cells. But levels of RNA can rapidly change when a cell is exposed to new conditions, or even throughout the day. So the Salk team turned instead to the cells’ methylomes, which are generally stable throughout adulthood.
“Our research shows that we can clearly define neuronal types based on their methylomes,” says Margarita Behrens, a Salk senior staff-scientist and co-senior author of the new paper. “This opens up the possibility of understanding what makes two neurons—that sit in the same brain region and otherwise look similar—behave differently.”
The team began their work on both mouse and human brains by focusing on the frontal cortex, the area of the brain responsible for complex thinking, personality, social behaviors and decision making, among other things. They isolated 3,377 neurons from the frontal cortex of mice and 2,784 neurons from the frontal cortex of a deceased 25-year-old human.
The researchers then used a new method they recently developed called snmC-seq to sequence the methylomes of each cell. Unlike other cells in the body, neurons have two types of methylation, so the approach mapped both types—called CG methylation (for DNA sequence containing the nucleotides cytosine and guanine) and non-CG methylation.
Neurons from the mouse frontal cortex, they found, clustered into 16 subtypes based on methylation patterns, while neurons from the human frontal cortex were more diverse and formed 21 subtypes. Inhibitory neurons—those that provide stop signals for messages in the brain—showed more conserved methylation patterns between mice and humans compared to excitatory neurons. The study also identified unique human neuron subtypes that had never been defined before. These results open the door to a deeper understanding of what sets human brains apart from those of other animals.
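To give a feel for this kind of analysis, here is a minimal, self-contained sketch of sorting cells into subtypes by their methylation profiles. It is not the authors' snmC-seq pipeline: the cell vectors and bin values below are invented, and a simple k-means routine stands in for the study's actual clustering methods.

```python
def kmeans(cells, k, iters=20):
    """Minimal k-means over methylation profiles: each cell is a list of
    methylation fractions (0..1), one per genomic bin, and cells are
    grouped into k clusters by Euclidean distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Deterministic farthest-point initialization of the k centers.
    centers = [list(cells[0])]
    while len(centers) < k:
        far = max(cells, key=lambda c: min(d2(c, ctr) for ctr in centers))
        centers.append(list(far))

    for _ in range(iters):
        # Assign each cell to its nearest center ...
        labels = [min(range(k), key=lambda j: d2(c, centers[j])) for c in cells]
        # ... then move each center to the mean of its assigned cells.
        for j in range(k):
            members = [c for c, lab in zip(cells, labels) if lab == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Two synthetic "subtypes": high methylation in the first two bins versus
# the last two (toy values, not real snmC-seq measurements).
type_a = [[0.9, 0.8, 0.1, 0.2]] * 5
type_b = [[0.1, 0.2, 0.9, 0.8]] * 5
print(kmeans(type_a + type_b, k=2))  # → [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

With real data, each vector would hold CG and non-CG methylation levels across many genomic bins, and the number of subtypes would be determined from the data rather than fixed in advance.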
“This study opens a new window into the incredible diversity of brain cells,” says Eran Mukamel of the UC San Diego Department of Cognitive Science, a co-senior author of the work.
Next, the researchers plan to expand their methylome study to look at more parts of the brain, and more brains.
From NINDS Press Release:
Brain “relay” also key to holding thoughts in the mind
Wednesday, May 3, 2017
Thalamus eyed as potential treatment target for schizophrenia’s working memory deficits
Long assumed to be a mere “relay,” an often-overlooked egg-like structure in the middle of the brain turns out to also play a pivotal role in tuning up thinking circuitry. A trio of studies in mice funded by the National Institutes of Health revealed that the thalamus sustains the ability to distinguish categories and hold thoughts in mind.
By manipulating activity of thalamus neurons, scientists were able to control an animal’s ability to remember how to find a reward. In the future, the thalamus might even become a target for interventions to reduce cognitive deficits in psychiatric disorders such as schizophrenia, researchers say.
“If the brain works like an orchestra, our results suggest the thalamus may be its conductor,” explained Michael Halassa, M.D., Ph.D., of New York University (NYU) Langone Medical Center, a BRAINS Award grantee of the NIH’s National Institute of Mental Health (NIMH), and also a grantee of the National Institute of Neurological Disorders and Stroke (NINDS). “It helps ensembles play in-sync by boosting their functional connectivity.”
Three independent teams report on the newfound role for the thalamus online May 3, 2017, in the journals Nature and Nature Neuroscience. The teams were led by Halassa; by Joshua Gordon, M.D., Ph.D., formerly of Columbia University, New York City, and now NIMH director, in collaboration with Christoph Kellendonk, Ph.D., of Columbia; and by Karel Svoboda, Ph.D., of the Howard Hughes Medical Institute Janelia Research Campus, Ashburn, Virginia, in collaboration with Charles Gerfen, Ph.D., of the NIMH Intramural Research Program.
The prevailing notion of the thalamus as a relay was based on its connections with parts of the brain that process inputs from the senses. But the thalamus has many connections with other parts of the brain that have yet to be explored, say the researchers.
Two of the groups investigated a circuit that connects the mid/upper (mediodorsal) thalamus with the prefrontal cortex (PFC), the brain’s thinking and decision making center. Brain imaging studies have detected decreased connectivity in this circuit in patients with schizophrenia, who often experience working memory problems.
Halassa and colleagues found that neurons in the thalamus and PFC appear to talk back and forth with each other. They monitored neural activity in mice performing a task that required them to hold in mind information about categories, so that they could act on cues indicating which of two doors hid a milk reward.
Optogenetically suppressing neuronal activity in the thalamus blocked the mice’s ability to choose the correct door, while optogenetically stimulating thalamus neural activity improved the animals’ performance on the working memory task. This confirmed a previously known role for the structure, extending it to the specialized tasks Halassa and colleagues used and demonstrating for the first time a specific role in the maintenance of information in working memory.
What kind of information was the thalamus helping to maintain? The researchers found sets of neurons in the PFC that held in memory the specific category of information required in order to choose the correct door. They determined that the thalamus did not (at least in this case) relay such specific category information, but instead broadly provided amplification that was crucial in sustaining memory of the category in the PFC. It accomplished this by boosting the synchronous activity, or functional connectivity, of these sets of PFC neurons.
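The notion of functional connectivity as synchronized firing can be illustrated with a toy calculation. Everything below is invented for illustration: the “activity” series are random numbers, and the shared “drive” is only a stand-in for the kind of amplification the thalamus was found to provide.

```python
import random

def correlation(a, b):
    """Pearson correlation of two equal-length activity series -- a simple
    stand-in for the 'functional connectivity' of a pair of neurons."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

rng = random.Random(1)
# Invented per-time-bin activity for two hypothetical PFC neurons:
# independent noise alone, then the same noise plus a shared drive.
drive = [rng.random() for _ in range(200)]
n1 = [rng.random() for _ in range(200)]
n2 = [rng.random() for _ in range(200)]
weak = correlation(n1, n2)
strong = correlation([a + 3 * d for a, d in zip(n1, drive)],
                     [b + 3 * d for b, d in zip(n2, drive)])
print(round(weak, 2), round(strong, 2))  # the shared drive synchronizes the pair
```

The point of the sketch is simply that a common input makes two otherwise independent signals fire in sync, which is the sense in which the thalamus “boosts functional connectivity” among PFC neurons.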
“Our study may have uncovered the key circuit elements underlying how the brain represents categories,” suggested Halassa.
Gordon and colleagues saw similar results when they tested how the same circuit controlled a mouse’s ability to find milk in a maze. Across a brief delay, the animals had to remember whether they had turned left or right to get their previous reward – and then do the opposite. Also using optogenetics, the study teased apart differing roles for subgroups of PFC neurons and their interactions with the brain’s memory hub, the hippocampus.
Thalamus inputs to the PFC sustained the maintenance of working memory by stabilizing activity there during the delay. “Top-down” signals from the PFC back to the thalamus supported memory retrieval and taking action. Consistent with previous findings, inputs from the hippocampus were required to encode in PFC neurons the location of the reward – analogous to the correct door in the Halassa experiment.
“Strikingly, we found two separate populations of neurons in the PFC. One encoded for spatial location and required hippocampal input; the other was active during memory maintenance and required thalamic input,” noted Gordon. “Our findings should have translational relevance, particularly to schizophrenia. Further study of how this circuit might go awry and cause working memory deficits holds promise for improved diagnosis and more targeted therapeutic approaches.”
In their study, the Janelia team and Gerfen similarly showed that the thalamus plays a crucial role in sustaining short-term memory by cooperating with the cortex through bidirectional interactions. Mice needed to remember where to move, after a delay of seconds, to gather a reward. In this case, the thalamus was found to be in conversation with a part of the motor cortex during planning of those movements. Electrical recordings revealed activity in both structures, indicating that together they sustain information in the cortex predicting which direction the animal would subsequently move. Optogenetic probing showed that the conversation was bidirectional, with cortex activity dependent on the thalamus and vice versa.
“Our results show that cortex circuits alone can’t sustain the neural activity required to prepare for movement,” explained Gerfen. “It also requires reciprocal participation across multiple brain areas, including the thalamus as a critical hub in the circuit.”
From NINDS Press Release:
NIH scientists try to crack the brain’s memory codes
Thursday, June 1, 2017
Studies of epilepsy patients uncover clues to how the brain remembers
The studies were led by Kareem Zaghloul, M.D., Ph.D., a neurosurgeon-researcher at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS). The participants were people with drug-resistant epilepsy enrolled at the NIH’s Clinical Center in protocols studying surgical resection of their seizure focus. To help locate the source of the seizures, Dr. Zaghloul’s team surgically implanted a grid of electrodes into the patients’ brains and monitored electrical activity for several days.
“The primary goal of these recordings is to understand how to stop the seizures. However, it’s also a powerful opportunity to learn how the brain works,” said Dr. Zaghloul.
For both studies, the researchers monitored brain electrical activity while testing the patients’ memories. The patients were shown hundreds of pairs of words, like “pencil and bishop” or “orange and navy,” and later were shown one of the words and asked to remember its pair.
In one study, published in the Journal of Neuroscience, the patients correctly remembered 38 percent of the word pairs they were shown. Electrical recordings showed that the brain waves the patients experienced when they correctly stored and remembered a word pair often occurred in the temporal lobe and prefrontal cortex regions. Notably, the researchers found that the waves that appeared when recalling the words were faster than the waves present when the words were initially stored as memories.
“Our results suggest the brain replays memories on fast forward,” said Dr. Zaghloul.
In the second study, published in Current Biology, the researchers used a new type of grid, called a high-density microelectrode array, to monitor the activity of dozens of individual neurons during the memory tests. The arrays were implanted into the middle temporal gyrus, a part of the brain thought to control word, face and distance recognition.
In this study, the patients correctly remembered 23 percent of the word pairs. When the researchers looked at the electrical recordings, they found that the pattern of neurons that fired when the patients correctly recalled a word pair appeared to be similar to the pattern of neurons that fired when they first learned the pair. Moreover, the results showed that the overall activity of the neurons was specific to each individual word pair and was quietest when the patients correctly remembered a pair, suggesting that the brain only uses a small proportion of neurons to represent each memory.
“These results support the idea that each memory is encoded by a unique firing pattern of individual neurons in the brain,” concluded Dr. Zaghloul.
From Washington University in St. Louis:
Until recently, work on biological clocks that dictate daily fluctuations in most body functions, including core body temperature and alertness, focused on neurons, those electrically excitable cells that are the divas of the central nervous system.
Asked to define the body’s master clock, biologists would say it is two small spheres — the suprachiasmatic nuclei, or SCN — in the brain that consist of 20,000 neurons. They likely wouldn’t even mention the 6,000 astroglia mixed in with the neurons, said Erik Herzog, a neuroscientist in Arts & Sciences at Washington University in St. Louis. In a March 23 advance online publication from Current Biology, Herzog and his collaborators show that the astroglia help to set the pace of the SCN to schedule a mouse’s day.
The astroglia, or astrocytes, were passed over in silence partly because they weren’t considered to be important. Often called “support cells,” they were supposed to be gap fillers or place holders. Their Greek name, after all, means “starry glue.”
Then two things happened. Scientists discovered that almost all the cells in the body keep time, with a few exceptions such as stem cells. And they also began to realize that the astrocytes do a lot more than they had thought. Among other things, they secrete and slurp neurotransmitters and help neurons form strengthened synapses to consolidate what we’ve learned. In fact, scientists began to speak of the tripartite synapse, emphasizing the role of an astrocyte in the communication between two neurons.
So for a neuroscientist like Herzog, the obvious question was: What were the astrocytes doing in the SCN? Were they keeping time? And if they were keeping time, how did the astrocyte clocks interact with the neuron clocks?
Herzog answered the first question in 2005 — yes, astrocytes have daily clocks — but then the research got stuck. To figure out what the astrocytes were doing in living networks of cells and in living animals, the scientists had to be able to manipulate them independently of the neurons with which they are entwined. The tools to do this simply didn’t exist.
Now, Herzog’s graduate student Matt Tso, the first author on the paper, has solved the problem. The tools he devised allow astrocytes in the SCN to be independently controlled. Using his toolkit, the lab ran two experiments, altering the astrocyte clocks and monitoring the highly ritualized, daily behavior of wheel-running in mice.
The scientists were surprised by the results, to be published in the April 7 print issue of Current Biology. In both experiments, tweaks to the astrocyte clocks reliably slowed the mouse’s sense of time. “We had no idea they would be that influential,” Tso said.
The scientists are already planning follow-up experiments.
Figuring out how and where these clocks function in the brain and body is important because their influence is ubiquitous. For his part, Herzog is already looking at the connections between circadian rhythm and brain cancer, pre-term birth, manic depression and other diseases.
Astrocytes clock in
A biological clock is a series of interlocking reactions that act somewhat like a biochemical hourglass. An accumulating protein eventually shuts down its own production, much as the sand eventually drains from the top half of the hourglass. But then —through the magic of feedback loops — the biochemical hourglass, in effect, turns itself over and starts again.
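This feedback-loop picture can be sketched numerically. The model below is purely illustrative: a generic Goodwin-style negative-feedback loop with made-up rate constants, not the actual molecular clock. A downstream product (z) accumulates and eventually shuts off production of a protein (x); as z decays, production resumes, and the hourglass turns itself over.

```python
def clock(total_time=200.0, dt=0.01):
    """Toy negative-feedback 'hourglass': the end product z represses
    production of x; x feeds y, and y feeds z (a Goodwin-style loop).
    All rate constants here are made up for illustration."""
    x = y = z = 0.0           # e.g. mRNA, protein, nuclear repressor
    n, K, d = 10, 1.0, 0.3    # Hill steepness, repression threshold, decay
    trace = []
    for _ in range(int(total_time / dt)):
        production = 1.0 / (1.0 + (z / K) ** n)  # shuts off as z builds up
        x += dt * (production - d * x)
        y += dt * (x - d * y)
        z += dt * (y - d * z)
        trace.append(x)
    return trace

trace = clock()
# Count prominent peaks of x: each one is a turn of the hourglass.
peaks = [i for i in range(1, len(trace) - 1)
         if trace[i - 1] < trace[i] > trace[i + 1]
         and trace[i] > 0.3 * max(trace)]
print(len(peaks))  # more than one peak means the loop cycles on its own
```

The design point is that no single reaction oscillates on its own; the rhythm emerges from the delayed feedback, which is why clock genes can tick in so many different cell types.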
At first, scientists were aware only of the clock in the SCN. If it is destroyed in an animal such as a rat, the rat will sleep for the same amount of time but in fits and starts instead of for long periods.
But then the genes that make up the biological clock began to be found in many different kinds of cells: lung, heart, liver, and sperm. Hair cells, by the way, prefer to grow in the evening.
So Herzog began to wonder about astrocytes in the SCN. Were they, too, keeping time?
To find out, he coupled a bioluminescent protein to a clock gene and then isolated astrocytes in a glass dish. He found that the astrocytes brightened and dimmed rhythmically, proof that they were keeping time.
The obvious next step was to look at the astrocytes not only in a glass dish but also in SCN slices and in living animals. But that turned out to be easier said than done. “We burned through two postdocs trying to get these experiments to work,” Herzog said.
So it is a technical triumph that Tso was able to make the astrocytes light up when they were expressing clock genes and to add or delete clock genes in the astrocytes while leaving the neurons intact, Herzog said.
As a first step, collaborator Michihiro Mieda from Kanazawa University created a “conditional reporter” that switched on a firefly luciferase whenever a clock gene was being expressed in a cell of interest. Tso delivered the tiny switch to the astrocytes inside a virus.
In slices of a mouse SCN with this reporter in place, the scientists could see that the star-shaped cells were expressing the clock gene in a rhythmic pattern. This proved that astrocytes keep time in living tissue where they are interacting with one another and with neurons, as well as when they are isolated in a dish.
Next, the scientists used the new gene-editing tool CRISPR-Cas9 to delete a clock gene in only the astrocytes of the SCN of living mice. They then monitored the mice for changes in the time they started running on a wheel each day.
Running is an easily measured behavior that provides a reliable indication of the state of the underlying body clock. A mouse in constant darkness will start running on a wheel approximately every 23.7 hours, typically deviating by less than 10 minutes from this schedule.
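As a hedged illustration of how such a period might be read off a running-wheel record, consider a log of daily activity-onset times. The timestamps below are invented, chosen to drift by roughly the 23.7-hour period described in the article:

```python
# Hypothetical activity-onset timestamps (hours since recording start) for
# a mouse in constant darkness, with small made-up day-to-day jitter.
onsets = [0.0, 23.8, 47.3, 71.2, 94.7, 118.6, 142.1]

# The free-running period is the mean interval between successive onsets.
diffs = [b - a for a, b in zip(onsets, onsets[1:])]
period = sum(diffs) / len(diffs)
print(round(period, 2))  # → 23.68
```

A clock running an hour slow, as in the astrocyte-deletion experiment, would show up directly as a correspondingly larger mean interval.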
“When we deleted the gene in the astrocytes, we had good reason to predict the rhythm would remain unchanged,” Tso said. “When people deleted this clock gene in neurons, the animals completely lost rhythm, which suggests that the neurons are necessary to sustain a daily rhythm.”
Instead, when the astrocyte clock was deleted, the SCN clock ran slower. The mice climbed into their wheels one hour later than usual every day.
“This was quite a surprise,” Tso said.
The results of the next experiment were even more exciting for them. The scientists began with a mouse that has a mutation making its clocks run fast and then “rescued” this mutation in astrocytes but not in neurons. This meant that the astrocyte clocks were running at the normal pace but the neuron clocks were still fast.
“We expected the SCN to follow the neurons’ pace. There are 10 times more neurons in the SCN than astrocytes. Why would the behavior follow the astrocytes’? ” Tso said.
But that is exactly what they did. The mice with the restored astrocyte clocks climbed into their wheels two hours later than mice whose astrocytes and neurons were both fast-paced.
From University of Leeds Health News:
Discovery of ‘mini-brains’ could change understanding of pain medication
The body’s peripheral nervous system could be capable of interpreting its environment and modulating pain, neuroscientists have established, after studying how rodents reacted to stimulation.
Until now, accepted scientific theory has held that only the central nervous system – the brain and spinal cord – could actually interpret and analyse sensations such as pain or heat.
The peripheral system that runs throughout the body was seen as mainly a wiring network, relaying information to and from the central nervous system by delivering messages to the ‘control centre’ (the brain), which then tells the body how to react.
In recent years there has been some evidence of a more complex role for the peripheral nervous system, but this study by Hebei Medical University in China and the University of Leeds highlights a crucial new role for the ganglia, a collection of ‘nodules’.
Previously, these were believed to act only as an energy source for messages carried through the nervous system. Researchers now believe they also act as ‘mini-brains’, modifying how much information is sent to the central nervous system.
The five year study found that nerve cells within the ganglia can exchange information between each other with the help of a signalling molecule called GABA, a process that was previously believed to be restricted to the central nervous system.
The findings are published today in the Journal of Clinical Investigation and have potential future implications for the development of new painkillers, including drugs to target backache and arthritis pain.
Pain relief drugs
Current pain relief drugs are targeted at the central nervous system and often have side effects that can include addiction and tolerance issues.
The new research opens up the possibility of developing non-addictive, non-drowsy drugs that target the peripheral nervous system. Such drugs might also be safe at much higher therapeutic doses, potentially making them more effective.
Whilst the study showed a rodent’s peripheral nervous system was able to interpret the type of stimulation it was sensing, further research is still needed to understand how sensations are interpreted and whether these results apply to humans.
In addition, the theory would need to be adopted by drug development companies and extensively tested before laboratory and clinical trials of a drug could be carried out. Should the findings be adopted, a timescale of at least 15-20 years might be required to produce a working drug.
Neuroscientist Professor Nikita Gamper, who led the research at both universities, said: “We found the peripheral nervous system has the ability to alter the information sent to the brain, rather than blindly passing everything on to the central nervous system.
“We don’t yet know how the system works, but the machinery is definitely in place to allow the peripheral system to interpret and modify the tactile information perceived by the brain in terms of interpreting pain, warmth or the solidity of objects.
“Further research is needed to understand exactly how it operates, but we have no reason to believe that the same nerve arrangements would not exist in humans.
“When our research team looked more closely at the peripheral system, we found the machinery for neuronal communication did exist in the peripheral nervous system’s structure. It is as if each sensory nerve has its own ‘mini-brain’, which to an extent, can interpret incoming information.”
Professor Gamper believes the findings may present a challenge to the accepted ‘Gate Control Theory of Pain’. The theory holds that a primary ‘gate’ exists between the peripheral and central nervous systems, controlling what information is sent to the central system.
The study now suggests the transmission of information to the central nervous system must go through another set of gates, or more accurately a process similar to a volume control, where the flow of information can be controlled by the peripheral nervous system.