March 19, 2013

Scientists discover previously unknown cleansing system in brain

New imaging technique reveals “glymphatic system”; may hold key to preventing Alzheimer’s disease

Glymphatic system (credit: Jeffrey J. Iliff et al./Science Translational Medicine)

A previously unrecognized system that drains waste from the brain at a rapid clip has been discovered by neuroscientists at the University of Rochester Medical Center.

The highly organized system acts like a series of pipes that piggyback on the brain’s blood vessels, sort of a shadow plumbing system that seems to serve much the same function in the brain as the lymph system does in the rest of the body — to drain away waste products.

“Waste clearance is of central importance to every organ, and there have been long-standing questions about how the brain gets rid of its waste,” said Maiken Nedergaard, M.D., D.M.Sc., senior author of the paper and co-director of the University’s Center for Translational Neuromedicine.

“This work shows that the brain is cleansing itself in a more organized way and on a much larger scale than has been realized previously.

“We’re hopeful that these findings have implications for many conditions that involve the brain, such as traumatic brain injury, Alzheimer’s disease, stroke, and Parkinson’s disease,” she added.

The glymphatic system

Schematic of two-photon imaging of para-arterial CSF flux into the mouse cortex. Imaging was conducted between 0 and 240 µm below the cortical surface at 1-min intervals. (Credit: Jeffrey J. Iliff et al./Science Translational Medicine)

Nedergaard’s team has dubbed the new system “the glymphatic system,” since it acts much like the lymphatic system but is managed by brain cells known as glial cells. The team made the findings in mice, whose brains are remarkably similar to the human brain.

Scientists have known that cerebrospinal fluid, or CSF, plays an important role in cleansing brain tissue, carrying away waste products and carrying nutrients to brain tissue through a process known as diffusion. The newly discovered system circulates CSF to every corner of the brain much more efficiently, through what scientists call bulk flow, or convection.

“It’s as if the brain has two garbage haulers — a slow one that we’ve known about, and a fast one that we’ve just met,” said Nedergaard. “Given the high rate of metabolism in the brain, and its exquisite sensitivity, it’s not surprising that its mechanisms to rid itself of waste are more specialized and extensive than previously realized.”

While the previously discovered system works more like a trickle, percolating CSF through brain tissue, the new system is under pressure, pushing large volumes of CSF through the brain each day to carry waste away more forcefully.

The glymphatic system is like a layer of piping that surrounds the brain’s existing blood vessels. The team found that glial cells called astrocytes use projections known as “end feet” to form a network of conduits around the outsides of arteries and veins inside the brain — similar to the way a canopy of tree branches along a well-wooded street might create a sort of channel above the roadway.

Those end feet are filled with structures known as water channels or aquaporins, which move CSF through the brain. The team found that CSF is pumped into the brain along the channels that surround arteries, then washes through brain tissue before collecting in channels around veins and draining from the brain.

How has this system eluded the notice of scientists up to now?

The scientists say the system operates only in the intact, living brain, which made it very difficult to study: earlier researchers could not directly visualize CSF flow in a live animal and often had to work with sections of brain tissue that were no longer living. To study the living, whole brain, the team used a technology known as two-photon microscopy, which allows scientists to watch the flow of blood, CSF, and other substances in the brain of a living animal.

While a few scientists two or three decades ago hypothesized that CSF flow in the brain is more extensive than was then appreciated, they were unable to prove it because the technology to observe the system in a living animal did not exist at the time.

“It’s a hydraulic system,” said Nedergaard. “Once you open it, you break the connections, and it cannot be studied. We are lucky enough to have technology now that allows us to study the system intact, to see it in operation.”

Clearing amyloid beta more efficiently

Left: in red, smooth muscle cells within the arterial wall. The green/yellow is CSF on the outside of that artery. The blue visible just at the very edge of the red arterial walls, especially at the arterial branch point, marks the water channels, the “aquaporins” discussed in the paper and in the press release, which actually move the CSF and are critical to this process. Right: much the same, but with the red stripped out, so the focus is on the CSF; the aquaporins are still visible. (Credit: Jeffrey Iliff/University of Rochester Medical Center)

First author Jeffrey Iliff, Ph.D., a research assistant professor in the Nedergaard lab, took an in-depth look at amyloid beta, the protein that accumulates in the brains of patients with Alzheimer’s disease. He found that more than half the amyloid removed from the brain of a mouse under normal conditions is removed via the glymphatic system.

“Understanding how the brain copes with waste is critical. In every organ, waste clearance is as basic an issue as how nutrients are delivered. In the brain, it’s an especially interesting subject, because in essentially all neurodegenerative diseases, including Alzheimer’s disease, protein waste accumulates and eventually suffocates and kills the neuronal network of the brain,” said Iliff.

“If the glymphatic system fails to cleanse the brain as it is meant to, either as a consequence of normal aging, or in response to brain injury, waste may begin to accumulate in the brain. This may be what is happening with amyloid deposits in Alzheimer’s disease,” said Iliff. “Perhaps increasing the activity of the glymphatic system might help prevent amyloid deposition from building up or could offer a new way to clean out buildups of the material in established Alzheimer’s disease,” he added.

The work was funded by the National Institutes of Health (grant numbers R01NS078304 and R01NS078167), the U.S. Department of Defense, and the Harold and Leila Y. Mathers Charitable Foundation.

How noise in brain-cell signals affects neuron response time and thinking

New model of background noise in the nervous system could help better understand neuronal signaling delay in response to a stimulus

Biomedical engineer Muhammet Uzuntarla of Bulent Ecevit University, Turkey, and colleagues have developed a biologically accurate model of how noise in the nervous system induces a delay in the response of neurons to external stimuli.

A new spike-latency noise model

Information encoding based on spike timing has attracted increasing attention due to the growing evidence for the relation between synchronization in neural networks and higher brain functions, such as memory, attention and cognition. And it has been shown that first-spike latency (arrival time of the first spike associated with information) carries a considerable amount of information, possibly more than other spikes.

The researchers analyzed the presence of noise in the nervous system, detected by changes in first-spike latency (the time it takes for brain cells to first respond to an external stimulus) and jitter (variation in spike timing). The noise is generated by the synaptic bombardment of each neuron by a large number of incoming excitatory and inhibitory spike inputs, and by the inherent unreliability of chemical synaptic transmission.

Previous attempts at noise modeling used a generic bell-shaped signal, referred to as a Gaussian approximation. The new noise model, published in European Physical Journal B, is closer to biological reality, the engineers suggest.

They showed there is a relation between the noise and delays in spike signal transmission, and identified two factors that could be tuned, thus influencing the noise: the incoming excitatory and inhibitory input signaling regime and the coupling strength between inhibitory and excitatory synapses. Modulating these factors could help neurons encode information more accurately, they found.
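
The published model is biophysically detailed, but the quantities involved are easy to illustrate. Below is a minimal sketch, assuming a generic leaky integrate-and-fire neuron driven by a constant stimulus plus Poisson excitatory and inhibitory synaptic bombardment; all parameter values, rates, and function names are illustrative assumptions rather than values from the study. It estimates first-spike latency and its jitter across repeated trials.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_spike_latency(rate_exc=800.0, rate_inh=200.0, w_exc=0.12, w_inh=-0.3,
                        stim=1.6, dt=1e-4, t_max=0.2,
                        tau_m=0.02, v_rest=0.0, v_thresh=1.0):
    """One trial of a leaky integrate-and-fire neuron driven by a constant
    stimulus plus Poisson excitatory/inhibitory synaptic bombardment.
    Returns the latency of the first spike, or None if it never fires."""
    v = v_rest
    for i in range(int(t_max / dt)):
        # Poisson counts of excitatory and inhibitory inputs in this time step
        n_e = rng.poisson(rate_exc * dt)
        n_i = rng.poisson(rate_inh * dt)
        syn = w_exc * n_e + w_inh * n_i               # net synaptic kick
        # Leaky integration of the stimulus plus synaptic noise
        v += dt / tau_m * (-(v - v_rest) + stim) + syn
        if v >= v_thresh:
            return (i + 1) * dt                       # time of first spike
    return None

# Repeat trials to estimate mean first-spike latency and its jitter (std dev)
latencies = [t for t in (first_spike_latency() for _ in range(500)) if t is not None]
print(f"mean latency: {np.mean(latencies) * 1e3:.1f} ms, "
      f"jitter (std): {np.std(latencies) * 1e3:.1f} ms")
```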

Banana has the best anti-cancer effects over other fruits

Originally posted by news.lk on August 7, 2012

Japanese scientists have found that a fully ripe banana produces a substance called TNF which has the ability to combat abnormal cells and enhance immunity against cancer.

They have pointed out that as the banana ripens it develops dark spots or patches on its skin, and the more patches it has, the higher its immunity-enhancing quality.

According to the Japanese scientists who carried out this research, the banana contains TNF, which has anti-cancer properties. They say that the degree of anti-cancer effect corresponds to the degree of ripeness of the fruit.

In an animal experiment carried out at Tokyo University comparing the health benefits of different fruits (banana, grape, apple, watermelon, pineapple, and pear), they found that banana gave the best results: it produces an anti-cancer substance, increases the number of white blood cells, and enhances the body’s immunity.

Sources: https://myscienceacademy.org/2012/08/20/banana-has-the-best-anti-cancer-effects-over-other-fruits/

https://www.news.lk/news/world/2769-banana-has-the-best-anti-cancer-effects-over-other-fruits

 

New storage nanoparticle could make hydrogen a practical fuel

University of New South Wales researchers have demonstrated that hydrogen can be released and reabsorbed from sodium borohydride, a promising storage material, overcoming a major hurdle to its use as an alternative fuel source.

A diagram of the nanoparticle, with sodium borohydride encased in nickel, and a TEM image of the particles (credit: University of New South Wales)

Considered a major fuel of the future, hydrogen could be used to power buildings, portable electronics and vehicles — but this application hinges on practical storage technology.

The researchers synthesized nanoparticles of sodium borohydride and encased these inside nickel shells.

Their unique “core-shell” nanostructure demonstrated remarkable hydrogen storage properties, including the release of energy at much lower temperatures than previously observed.

“No one has ever tried to synthesize these particles at the nanoscale because they thought it was too difficult, and couldn’t be done. We’re the first to do so, and demonstrate that energy in the form of hydrogen can be stored with sodium borohydride at practical temperatures and pressures,” says Dr Kondo-Francois Aguey-Zinsou from the School of Chemical Engineering at UNSW.

Lightweight compounds known as borohydrides (including lithium and sodium compounds) are known to be effective storage materials, but it was believed that once the energy was released it could not be reabsorbed — a critical limitation. This perceived “irreversibility” means there has been little focus on sodium borohydride.

“By controlling the size and architecture of these structures we can tune their properties and make them reversible — this means they can release and reabsorb hydrogen,” says Aguey-Zinsou. “We now have a way to tap into all these borohydride materials, which are particularly exciting for application on vehicles because of their high hydrogen storage capacity.”
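
For a sense of why borohydrides are attractive for vehicles, here is a quick back-of-the-envelope figure, computed from standard atomic masses rather than quoted from the article: sodium borohydride is roughly 10.7 percent hydrogen by weight.

```python
# Back-of-the-envelope: theoretical gravimetric hydrogen content of NaBH4
# (standard atomic masses; an illustrative figure, not quoted from the article).
ATOMIC_MASS = {"Na": 22.990, "B": 10.811, "H": 1.008}   # g/mol

m_h = 4 * ATOMIC_MASS["H"]                               # hydrogen per formula unit
m_total = ATOMIC_MASS["Na"] + ATOMIC_MASS["B"] + m_h     # molar mass of NaBH4
print(f"NaBH4 hydrogen content: {100 * m_h / m_total:.1f} wt%")   # ~10.7 wt%
```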

In its bulk form, sodium borohydride requires temperatures above 550 degrees Celsius just to release hydrogen. However, with the core-shell nanostructure, the researchers saw initial energy release happening at just 50 °C, and significant release at 350 °C.

“The new materials that could be generated by this exciting strategy could provide practical solutions to meet many of the energy targets set by the U.S. Department of Energy,” says Aguey-Zinsou.

Myriad Genetics BRCA1 and BRCA2 patents upheld in court

Originally posted by bbc.co.uk on August 17, 2012

A court in the US has again backed a biotech company’s right to patent genes which have been isolated from the human body.

Myriad Genetics has patents on the BRCA1 and BRCA2 genes, which are strongly linked to breast and ovarian cancer.

Patents on genes have been repeatedly contested in the courts.

The latest decision by the Federal Circuit Court of Appeals sided in favour of the company.

The patents are valuable as they give the owners exclusive rights to diagnostic tests for the genes. One of the questions in the case was whether isolating a gene makes it different to one still in the body.

Circuit Judge Alan Lourie said: “Everything and everyone comes from nature, following its laws, but the compositions here are not natural products.

“They are the products of man, albeit following, as all materials do, laws of nature.”

The decision was welcomed in a statement from the president of Myriad Genetics, Peter Meldrum: “We are very pleased with the favourable decision the court rendered today which again confirmed that isolated DNA is patentable.

“Importantly, the court agreed with Myriad that isolated DNA is a new chemical matter with important utilities which can only exist as the product of human ingenuity.”

However, the American Civil Liberties Union, which contested the patents, argued: “Human DNA is a natural entity like air or water. It does not belong to any one company.

“This ruling prevents doctors and scientists from exchanging their ideas and research freely.”

Structure of the BRCA1 protein (credit: emw/Creative Commons)

Original source: https://www.bbc.co.uk/news/health-19294050

Every Black Hole Contains Another Universe?

Originally posted by truebook.wordpress.com on August 17, 2012

Author: agapesatori

A supermassive black hole sits inside the galaxy Centaurus A, as seen in a composite picture.

Image courtesy NASA/CXC/CfA/R.Kraft et al., MPIfR/ESO/APEX/A.Weiss et al. and ESO/WFI

Like part of a cosmic Russian doll, our universe may be nested inside a black hole that is itself part of a larger universe.

In turn, all the black holes found so far in our universe—from the microscopic to the supermassive—may be doorways into alternate realities.

According to a mind-bending new theory, a black hole is actually a tunnel between universes—a type of wormhole. The matter the black hole attracts doesn’t collapse into a single point, as has been predicted, but rather gushes out a “white hole” at the other end of the black one, the theory goes.

(Related: “New Proof Unknown ‘Structures’ Tug at Our Universe.”)

In a recent paper published in the journal Physics Letters B, Indiana University physicist Nikodem Poplawski presents new mathematical models of the spiraling motion of matter falling into a black hole. His equations suggest such wormholes are viable alternatives to the “space-time singularities” that Albert Einstein predicted to be at the centers of black holes.

According to Einstein’s equations for general relativity, singularities are created whenever matter in a given region gets too dense, as would happen at the ultradense heart of a black hole.

Einstein’s theory suggests singularities take up no space, are infinitely dense, and are infinitely hot—a concept supported by numerous lines of indirect evidence but still so outlandish that many scientists find it hard to accept.

If Poplawski is correct, they may no longer have to.

According to the new equations, the matter black holes absorb and seemingly destroy is actually expelled and becomes the building blocks for galaxies, stars, and planets in another reality.

(Related: “Dark Energy’s Demise? New Theory Doesn’t Use the Force.”)

Wormholes Solve Big Bang Mystery?

The notion of black holes as wormholes could explain certain mysteries in modern cosmology, Poplawski said.

For example, the big bang theory says the universe started as a singularity. But scientists have no satisfying explanation for how such a singularity might have formed in the first place.

If our universe was birthed by a white hole instead of a singularity, Poplawski said, “it would solve this problem of black hole singularities and also the big bang singularity.”

Wormholes might also explain gamma ray bursts, the second most powerful explosions in the universe after the big bang.

Gamma ray bursts occur at the fringes of the known universe. They appear to be associated with supernovae, or star explosions, in faraway galaxies, but their exact sources are a mystery. (Related: “Gamma-Ray Burst Caused Mass Extinction?”)

Poplawski proposes that the bursts may be discharges of matter from alternate universes. The matter, he says, might be escaping into our universe through supermassive black holes—wormholes—at the hearts of those galaxies, though it’s not clear how that would be possible.

“It’s kind of a crazy idea, but who knows?” he said. (Related: “Are Wormholes Tunnels for Time Travel?”)

There is at least one way to test Poplawski’s theory: Some of our universe’s black holes rotate, and if our universe was born inside a similarly revolving black hole, then our universe should have inherited the parent object’s rotation.

If future experiments reveal that our universe appears to rotate in a preferred direction, it would be indirect evidence supporting his wormhole theory, Poplawski said.

Wormholes Are “Exotic Matter” Makers?

The wormhole theory may also help explain why certain features of our universe deviate from what theory predicts, according to physicists.

Based on the standard model of physics, after the big bang the curvature of the universe should have increased over time so that now—13.7 billion years later—we should seem to be sitting on the surface of a closed, spherical universe.

But observations show the universe appears flat in all directions.

What’s more, data on light from the very early universe show that everything just after the big bang was a fairly uniform temperature.

That would mean that the farthest objects we see on opposite horizons of the universe were once close enough to interact and come to equilibrium, like molecules of gas in a sealed chamber.

Again, observations don’t match predictions, because the objects farthest from each other in the known universe are so far apart that the time it would take to travel between them at the speed of light exceeds the age of the universe.

To explain the discrepancies, astronomers devised the concept of inflation.

Inflation states that shortly after the universe was created, it experienced a rapid growth spurt during which space itself expanded at faster-than-light speeds. The expansion stretched the universe from a size smaller than an atom to astronomical proportions in a fraction of a second.

The universe therefore appears flat, because the sphere we’re sitting on is extremely large from our viewpoint—just as the sphere of Earth seems flat to someone standing in a field.

Inflation also explains how objects so far away from each other might have once been close enough to interact.

But—assuming inflation is real—astronomers have always been at pains to explain what caused it. That’s where the new wormhole theory comes in.

According to Poplawski, some theories of inflation say the event was caused by “exotic matter,” a theoretical substance that differs from normal matter, in part because it is repelled rather than attracted by gravity.

Based on his equations, Poplawski thinks such exotic matter might have been created when some of the first massive stars collapsed and became wormholes.

“There may be some relationship between the exotic matter that forms wormholes and the exotic matter that triggered inflation,” he said.

(Related: “Before the Big Bang: Light Shed on ‘Previous Universe.’”)

Wormhole Equations an “Actual Solution”

The new model isn’t the first to propose that other universes exist inside black holes. Damien Easson, a theoretical physicist at Arizona State University, has made the speculation in previous studies.

“What is new here is an actual wormhole solution in general relativity that acts as the passage from the exterior black hole to the new interior universe,” said Easson, who was not involved in the new study.

“In our paper, we just speculated that such a solution could exist, but Poplawski has found an actual solution,” said Easson, referring to Poplawski’s equations.

(Related: “Universe 20 Million Years Older Than Thought.”)

Nevertheless, the idea is still very speculative, Easson said in an email.

“Is the idea possible? Yes. Is the scenario likely? I have no idea. But it is certainly an interesting possibility.”

Future work in quantum gravity—the study of gravity at the subatomic level—could refine the equations and potentially support or disprove Poplawski’s theory, Easson said.

Wormhole Theory No Breakthrough

Overall, the wormhole theory is interesting, but not a breakthrough in explaining the origins of our universe, said Andreas Albrecht, a physicist at the University of California, Davis, who was also not involved in the new study.

By saying our universe was created by a gush of matter from a parent universe, the theory simply shifts the original creation event into an alternate reality.

In other words, it doesn’t explain how the parent universe came to be or why it has the properties it has—properties our universe presumably inherited.

“There’re really some pressing problems we’re trying to solve, and it’s not clear that any of this is offering a way forward with that,” he said.

Still, Albrecht doesn’t find the idea of universe-bridging wormholes any stranger than the idea of black hole singularities, and he cautions against dismissing the new theory just because it sounds a little out there.

“Everything people ask in this business is pretty weird,” he said. “You can’t say the less weird [idea] is going to win, because that’s not the way it’s been, by any means.”

 

Reference: https://truebook1.wordpress.com/2012/08/17/every-black-hole-contains-another-universe/

Nearly half of US corn devoted to fuel production

Originally posted by share.banoosh.com on August 19, 2012

Author: Mr.H

 

The U.S. policy requiring that gasoline contain ethanol is leading the country to devote 40 percent of its corn harvest to fuel production.

The policy, ostensibly aimed at reducing the country’s dependence on foreign oil and at improving the environment, has been a bonanza for farmers.

Land planted with corn soared by a fourth after Congress passed the Energy Independence and Security Act of 2007, which required that gasoline producers blend 15 billion gallons of ethanol into the nation’s gasoline supply by 2015.

With this year’s crop expected to be the smallest in six years, corn prices have jumped 60 percent since June. The ethanol requirements are aggravating the rise in food costs and spreading it to the price of gasoline, which is up almost 40 cents a gallon since the start of July.

Researchers at Texas A&M University have estimated that diverting corn to make ethanol forces Americans to pay $40 billion a year in higher food prices. On top of that, it costs taxpayers $1.78 in subsidies for each gallon of gasoline that corn-based ethanol replaces, according to the Congressional Budget Office.

More than 150 House members and 25 U.S. senators, as well as the director general of the United Nations Food and Agriculture Organization, have asked Obama to temporarily suspend the ethanol mandate in order to check the rise in food prices. He should listen to them, and Congress should permanently roll back the ethanol requirements.

This isn’t to say ethanol doesn’t have a place in the U.S. energy mix. Gasoline needs to be combined with agents that carry oxygen to help cars and trucks run more efficiently. Ethanol fits the bill. But the government should let the demand for ethanol obey the laws of the market, rather than the desires of the agricultural lobby. Huffington Post

FACTS & FIGURES

Corn stalks are being disked under as the extensive drought in the corn belt causes concern over the United States government’s ethanol mandate. brainerddispatch.com

This year gasoline refiners will use some 13.2 billion gallons of ethanol, which will consume some 40 percent of the corn crop. CNBC

According to a Financial Times opinion piece published on August 10, the U.N. Food and Agriculture Organization’s director general had said Washington should shelve a mandate siphoning 40 percent of the U.S. corn crop for ethanol and use the corn for food and feed. UPI

The U.S. Environmental Protection Agency mandate, known as the Renewable Fuel Standard, requires 13.2 billion gallons of biofuel to be blended into gasoline by 2012 to cut greenhouse-gas emissions and U.S. foreign-oil dependence. UPI

 

Source: https://share.banoosh.com/2012/08/19/nearly-half-of-us-corn-devoted-to-fuel-production/

First-ever computer model of a complete living organism

In what can only be described as a milestone in biological and genetic engineering, scientists at Stanford University have, for the first time ever, simulated a complete bacterium. With the organism entirely in virtual form, the scientists can perform any kind of modification on its genome and observe extremely quickly what changes would occur in the organism. This means that in the future, current lab research that takes an extremely long time to perform or is hazardous in nature (dealing with lethal strains of viruses, for instance) could be moved almost exclusively to a computer.

The researchers chose the pathogen Mycoplasma genitalium as their modeling target for practical reasons. The bacterium is implicated in a number of urethral and vaginal infections, as its name might imply, but that is of little importance here. What distinguishes it is having the smallest genome of any free-living organism, with just 525 genes. In comparison, the ever-popular lab organism E. coli has 4,288 genes.

Don’t be fooled, however. Even though this bacterium has the smallest amount of genetic data that we know of, it still required a tremendous amount of research work on the part of the team. For one, data from more than 900 scientific papers and 1,900 experiments concerning the pathogen’s behavior, genetics, molecular interactions and so on were incorporated into the software simulation. Then, the 525 genes were described by 28 algorithms, each governing the behaviour of a software module modelling a different biological process.

“These modules then communicated with each other after every time step, making for a unified whole that closely matched M. genitalium‘s real-world behaviour,” claims the Stanford team in a statement.

Thus, even for an organism of this size, it takes that much information to account for every interaction it will undergo in its lifespan. The simulation was run on a 128-node computing cluster, and even so, a single cell division takes about 10 hours to simulate and generates half a gigabyte of data. Adding more computing power can shorten the process, but it’s pretty clear that far more resources would be required for more complex organisms.
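
As a rough illustration of the modular, time-stepped architecture described above, here is a minimal sketch of such a simulation loop. The module names, state variables, and rates below are hypothetical stand-ins for illustration only; they are not the actual 28 Stanford modules.

```python
# Minimal sketch of a modular, time-stepped whole-cell simulation loop.
# Module names, state variables, and rates are hypothetical illustrations,
# not the actual Stanford software.
from dataclasses import dataclass, field

@dataclass
class CellState:
    """Shared state that every process module reads and updates each step."""
    time_s: float = 0.0
    metabolites: dict = field(default_factory=lambda: {"ATP": 1.0e6})
    proteins: dict = field(default_factory=dict)
    dna_replicated: float = 0.0          # fraction of the chromosome copied

class Metabolism:
    def step(self, s: CellState, dt: float):
        s.metabolites["ATP"] += 500.0 * dt                 # toy ATP production

class Transcription:
    def step(self, s: CellState, dt: float):
        if s.metabolites["ATP"] > 0:
            s.proteins["transcript"] = s.proteins.get("transcript", 0) + 1
            s.metabolites["ATP"] -= 100.0 * dt             # toy ATP cost

class Replication:
    def step(self, s: CellState, dt: float):
        s.dna_replicated = min(1.0, s.dna_replicated + 1e-4 * dt)

def simulate(modules, state, dt=1.0, t_end=3600.0):
    """Advance every module by one time step; modules 'communicate' only
    through the shared CellState, exactly once per step."""
    while state.time_s < t_end and state.dna_replicated < 1.0:
        for m in modules:
            m.step(state, dt)
        state.time_s += dt
    return state

final = simulate([Metabolism(), Transcription(), Replication()], CellState())
print(f"stopped at t={final.time_s:.0f} s, DNA replicated: {final.dna_replicated:.2f}")
```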

“You don’t really understand how something works until you can reproduce it yourself,” says graduate student and team member Jayodita Sanghvi.

BIG LEAP FORWARD FOR GENETIC ENGINEERING AND CAD

Emulating a living organism for the first time is fantastic in itself, and is sure to lay the groundwork for the development of bio-CAD (computer-aided design). CAD is primarily used in engineering, be it aeronautical, civil, mechanical, electrical and so on, and over the years it has become indispensable, not only in the design process but, more importantly, in the innovation process. For instance, by replacing the insulating material for a boiler in CAD, the software will immediately tell the engineer how this will affect its performance, all without having to actually build and test it. Similarly, scientists hope to achieve the same kind of control with bio-CAD. The problem is that biological organisms need to be fully described in the software for bio-CAD to become practical and accurate.

“If you use a model to guide your experiments, you’re going to discover things faster. We’ve shown that time and time again,” said team leader and Stanford professor Markus Covert.

We’d love to see this research taken further, which most likely will happen, but we’re still a long way from modeling a human – about 20,000 genes short.

The findings were presented in the journal Cell.

Sources:

https://www.zmescience.com/medicine/genetic/computer-model-simulation-bacteria-31243/

https://www.newscientist.com/blogs/onepercent/2012/07/first-organism-fully-modelled.html

https://www.cell.com/abstract/S0092-8674%2812%2900776-3

https://en.wikipedia.org/wiki/E_coli

https://en.wikipedia.org/wiki/Mycoplasma_genitalium

Transforming cancer treatment

A Harvard researcher studying the evolution of drug resistance in cancer says that, in a few decades, “many, many cancers could be manageable”

Predicted probability distribution of times from when treatment starts until resistance mutations become observable in circulating DNA (credit: Luis A. Diaz Jr/Nature)

“Many people are dying needlessly of cancer, and this research may offer a new strategy in that battle,” said Martin Nowak, a professor of mathematics and of biology and director of the Program for Evolutionary Dynamics.

“One hundred years ago, many people died of bacterial infections. Now, we have treatment for such infections — those people don’t have to die. I believe we are approaching a similar point with cancer.”

Nowak is one of several co-authors of a paper, published in Nature on June 28, that details how resistance to targeted drug therapy emerges in colorectal cancers and describes a multidrug approach to treatment that could make many cancers manageable, if not curable.

The key, Nowak’s research suggests, is to change the way clinicians battle the disease.

Physicians and researchers in recent years have increasingly turned to “targeted therapies” — drugs that combat cancer by interrupting its ability to grow and spread — rather than traditional chemotherapy, but such treatment is far from perfect. Most targeted therapies are effective for only a few months before the cancer evolves resistance to the drugs.

The culprit in the colon cancer treatment examined in the Nature paper is the KRAS gene, which is responsible for producing a protein to regulate cell division. When activated, the gene helps cancer cells develop resistance to targeted-therapy drugs, effectively making the treatment useless.

To better understand what role the KRAS gene plays in drug resistance, a team of researchers led by Bert Vogelstein, the Clayton Professor of Oncology and Pathology at the Johns Hopkins Kimmel Cancer Center, launched a study that began by testing patients to determine if the KRAS gene was activated in their tumors. Patients without an activated KRAS gene underwent a normal round of targeted therapy treatment, and the initial results — as expected — were successful. Tests performed after the treatment broke down, however, showed a surprising result: The KRAS gene had been activated.

As part of the research, Vogelstein’s team analyzed a handful of mutations that can lead to the activation of the KRAS gene. To help interpret those results, they turned to Nowak’s team, including mathematicians Benjamin Allen, a postdoctoral fellow in mathematical biology, and Ivana Bozic, a postdoctoral fellow in mathematics.

Analyzing the clinical results, Allen and Bozic were able to mathematically describe the exponential growth of the cancer and determine whether the mutation that led to drug resistance was pre-existing, or whether it occurred after treatment began. Their model was able to predict, with surprising accuracy, the window of time from when the drug is first administered to when resistance arises and the drug begins to fail.

“By looking at their results mathematically, we were able to determine conclusively that the resistance was already there, so the therapy was doomed from the start,” Allen said. “That had been an unresolved question before this study. Clinicians were finding that these kinds of therapies typically don’t work for longer than six months, and our finding provides an explanation for why that failure occurs.”

Put simply, Nowak said, the findings suggest that, of the billions of cancer cells that exist in a patient, only a tiny percentage — about one in a million — are resistant to drugs used in targeted therapy. When treatment starts, the nonresistant cells are wiped out. The few resistant cells, however, quickly repopulate the cancer, causing the treatment to fail.
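
A toy calculation makes these dynamics concrete. The sketch below uses made-up rates, not the fitted values from the Nature study: it starts with a billion cells, of which one in a million are resistant, and shows the sensitive population collapsing under the drug while the pre-existing resistant clone regrows, producing a relapse within months.

```python
# Toy illustration of why a pre-existing resistant subclone dooms single-drug
# targeted therapy. Parameter values are made up for illustration; they are
# not the fitted rates from the Nature study.
import math

N0 = 1e9            # total cancer cells at the start of treatment
f_res = 1e-6        # about one in a million cells already resistant
decay = 0.10        # per-day death rate of sensitive cells under the drug
growth = 0.05       # per-day net growth rate of the resistant cells

def tumor_size(t_days: float) -> float:
    sensitive = N0 * (1 - f_res) * math.exp(-decay * t_days)
    resistant = N0 * f_res * math.exp(growth * t_days)
    return sensitive + resistant

for t in (0, 30, 90, 180, 270):
    print(f"day {t:3d}: {tumor_size(t):.2e} cells")
# The tumor shrinks at first, then relapses once the resistant clone,
# present from day 0, grows back to a detectable size after a few months.
```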

“Whether you have resistance prior to the start of treatment was one of the large, outstanding questions associated with this type of treatment,” Bozic said. “Our study offers a quantitative understanding of how resistance evolves, and shows that, because resistance is there at the start, the single-drug therapy won’t work.”

The answer, Nowak said, is simple: Rather than the one drug used in targeted therapy, treatments must involve at least two drugs.

Nowak isn’t new to such strategies. In 1995 he participated in a study, also published in Nature, that focused on the rapid evolution of drug resistance in HIV. The result of that study, he said, was the development of the drug “cocktail” many HIV-positive patients use to help manage the disease.

Such a plan, however, isn’t without challenges.

The treatment must be tailored to the patient, and must be based on the genetic makeup of the patient’s cancer. Perhaps even more importantly, Nowak said, the two drugs used simultaneously must not overlap: If a single mutation allows the cancer to become resistant to both drugs, the treatment will fail just as the single-drug therapy does.
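
The same kind of back-of-the-envelope arithmetic shows why two drugs with non-overlapping resistance mutations change the picture, assuming for illustration that the two resistance mutations arise independently.

```python
# Rough illustration of why two non-overlapping drugs can succeed where one
# fails. The numbers reuse the "one in a million" resistance figure quoted in
# the article; independence of the two mutations is an assumption.
N_cells = 1e9        # tumor burden at the start of treatment
p_single = 1e-6      # chance a given cell resists any one targeted drug

print(f"expected cells resistant to drug A alone: {N_cells * p_single:.0f}")
print(f"expected cells resistant to both A and B: {N_cells * p_single**2:.0e}")
# About 1,000 cells already resist a single drug, so monotherapy relapses.
# Only ~0.001 cells are expected to resist both, so a doubly resistant clone
# is unlikely to pre-exist, unless one mutation confers resistance to both drugs.
```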

Nowak estimated that hundreds of drugs might be needed to address all the possible treatment variations. The challenge in the near term, he said, is to develop those drugs.

“This will be the main avenue for research into cancer treatment, I think, for the next decade and beyond,” Nowak said. “As more and more drugs are developed for targeted therapy, I think we will see a revolution in the treatment of cancer.”

Sources:

https://www.kurzweilai.net/transforming-cancer-treatment

The molecular evolution of acquired resistance to targeted EGFR blockade in colorectal cancers, Nature, 2012, DOI: 10.1038/nature11219

Glial cells supply axon nerve fibers with energy, researchers find

Max Planck Institute of Experimental Medicine researchers have discovered a possible mechanism by which glial cells in the brain support axons and keep them alive.

Electron microscope cross-section image of the nerve fibers (axons) of the optic nerve. Axons are surrounded by special glial cells, the oligodendrocytes, wrapping themselves around the axons in several layers. Between the axons there are extensions of astrocytes, another type of glial cell. (Credit: U. Funfschilling et al./Nature)

Oligodendrocytes are a group of highly specialized glial cells in the central nervous system. They form the fat-rich myelin sheath that surrounds the nerve fibers as an insulating layer, which increases the transmission speed of the axons and also reduces their ongoing energy consumption.

The extreme importance of myelin for a functioning nervous system is shown by the diseases that arise from a defective insulating layer, such as multiple sclerosis.

In a new study, the researchers showed that glial cells are also involved in providing glucose to replenish energy in the nerve fibers.

Hypothetical model of metabolic coupling between oligodendrocytes and myelinated axons (credit: U. Funfschilling et al./Nature)

This metabolic coupling between glial cells and axons could explain, among other things, why in many myelin diseases, such as multiple sclerosis, the affected demyelinated axons often suffer irreversible damage.

Ref.: Ursula Fünfschilling et al., Glycolytic oligodendrocytes maintain myelin and long-term axonal integrity, Nature, 2012, DOI: 10.1038/nature11007