Small-scale nuclear fusion may be a new energy source

Fusion energy may soon be used in small-scale power stations. This means producing environmentally friendly heating and electricity at low cost from fuel found in water. Both heating generators and generators for electricity could be developed within a few years, according to research primarily conducted at the University of Gothenburg.

Nuclear fusion is a process whereby atomic nuclei fuse together and release energy. Because light nuclei are less tightly bound than heavier ones, energy is released when two small nuclei combine into a heavier one. Researchers at the University of Gothenburg and the University of Iceland have collaborated to study a new type of nuclear fusion process. It produces almost no neutrons but instead fast, heavy electrons (muons), since it is based on nuclear reactions in ultra-dense heavy hydrogen (deuterium).
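
For readers who want to see where the released energy comes from, here is a quick back-of-the-envelope Python sketch computing the energy yield (Q-value) of the two conventional deuterium-deuterium fusion branches from standard atomic masses. It illustrates the general principle only; it does not model the ultra-dense deuterium process studied here.

```python
# Q-values of the two conventional D-D fusion branches, from atomic masses.
# Illustrates that fusing two light nuclei into heavier products releases
# energy; it does NOT model the ultra-dense deuterium process in the study.
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

masses = {            # atomic masses in u (standard tabulated values)
    "D":   2.014102,  # deuterium (2H)
    "T":   3.016049,  # tritium (3H)
    "He3": 3.016029,
    "n":   1.008665,  # neutron
    "H":   1.007825,  # protium (1H)
}

def q_value(reactants, products):
    """Energy released (MeV): mass of reactants minus mass of products."""
    dm = sum(masses[x] for x in reactants) - sum(masses[x] for x in products)
    return dm * U_TO_MEV

print(f"D + D -> He3 + n : Q = {q_value(['D', 'D'], ['He3', 'n']):.2f} MeV")
print(f"D + D -> T  + H  : Q = {q_value(['D', 'D'], ['T', 'H']):.2f} MeV")
```

Running this gives the familiar 3.27 MeV and 4.03 MeV for the two branches.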

“This is a considerable advantage compared to other nuclear fusion processes which are under development at other research facilities, since the neutrons produced by such processes can cause dangerous flash burns,” says Leif Holmlid, Professor Emeritus at the University of Gothenburg.

No radiation

The new fusion process can take place in relatively small, laser-fired fusion reactors fueled by heavy hydrogen (deuterium), and it has already been shown to produce more energy than is needed to start it. Heavy hydrogen is found in large quantities in ordinary water and is easy to extract. It therefore avoids the dangerous handling of radioactive heavy hydrogen (tritium), which would most likely be needed to operate future large-scale fusion reactors with magnetic confinement.

Rendering of an atom. Nuclear fusion is a process whereby atomic nuclei fuse together and release energy.
Credit: © Sergey Nivens / Fotolia

“A considerable advantage of the fast heavy electrons produced by the new process is that these are charged and can therefore produce electrical energy instantly. The energy in the neutrons, which accumulate in large quantities in other types of nuclear fusion, is difficult to handle because the neutrons are not charged. These neutrons are high-energy and very damaging to living organisms, whereas the fast, heavy electrons are considerably less dangerous.”

Neutrons are difficult to slow down or stop and require reactor enclosures that are several meters thick. Muons — fast, heavy electrons — decay very quickly into ordinary electrons and similar particles.

Research shows that far smaller and simpler fusion reactors can be built. The next step is to create a generator that produces instant electrical energy.

The research done in this area has been supported by GU Ventures AB, the holding company linked to the University of Gothenburg. The results have recently been published in three international scientific journals.


Story Source:

The above post is reprinted from materials provided by University of Gothenburg. The original item was written by Carina Eliasson. Note: Materials may be edited for content and length.


Journal References:

  1. Leif Holmlid, Sveinn Olafsson. Spontaneous ejection of high-energy particles from ultra-dense deuterium D(0). International Journal of Hydrogen Energy, 2015; 40 (33): 10559 DOI: 10.1016/j.ijhydene.2015.06.116
  2. Leif Holmlid, Sveinn Olafsson. Muon detection studied by pulse-height energy analysis: Novel converter arrangements. Review of Scientific Instruments, 2015; 86 (8): 083306 DOI: 10.1063/1.4928109
  3. Leif Holmlid. Heat generation above break-even from laser-induced fusion in ultra-dense deuterium. AIP Advances, 2015; 5 (8): 087129 DOI: 10.1063/1.4928572

Feeling anxious? Check your orbitofrontal cortex, cultivate your optimism

Glass half full or half empty? What you see may depend in part on the size of your orbitofrontal cortex. Optimistic people also tend to be less anxious, research finds.
Credit: Graphic by Julie McMahon

A new study links anxiety, a brain structure called the orbitofrontal cortex, and optimism, finding that healthy adults who have larger OFCs tend to be more optimistic and less anxious.

The new analysis, reported in the journal Social Cognitive and Affective Neuroscience, offers the first evidence that optimism plays a mediating role in the relationship between the size of the OFC and anxiety.

Anxiety disorders afflict roughly 44 million people in the U.S. These disorders disrupt lives and cost an estimated $42 billion to $47 billion annually, scientists report.

The orbitofrontal cortex, a brain region located just behind the eyes, is known to play a role in anxiety. The OFC integrates intellectual and emotional information and is essential to behavioral regulation. Previous studies have found links between the size of a person’s OFC and his or her susceptibility to anxiety. For example, in a well-known study of young adults whose brains were imaged before and after the colossal 2011 earthquake and tsunami in Japan, researchers discovered that the OFC actually shrank in some study subjects within four months of the disaster. Those with more OFC shrinkage were likely to also be diagnosed with post-traumatic stress disorder, the researchers found.

Other studies have shown that more optimistic people tend to be less anxious, and that optimistic thoughts increase OFC activity.

The team on the new study hypothesized that a larger OFC might act as a buffer against anxiety in part by boosting optimism.

Most studies of anxiety focus on those who have been diagnosed with anxiety disorders, said University of Illinois researcher Sanda Dolcos, who led the research with graduate student Yifan Hu and psychology professor Florin Dolcos. “We wanted to go in the opposite direction,” she said. “If there can be shrinkage of the orbitofrontal cortex and that shrinkage is associated with anxiety disorders, what does it mean in healthy populations that have larger OFCs? Could that have a protective role?”

The researchers also wanted to know whether optimism was part of the mechanism linking larger OFC brain volumes to lesser anxiety.

The team collected MRIs of 61 healthy young adults and analyzed the structure of a number of regions in their brains, including the OFC. The researchers calculated the volume of gray matter in each brain region relative to the overall volume of the brain. The study subjects also completed tests that assessed their optimism and anxiety, depression symptoms, and positive (enthusiastic, interested) and negative (irritable, upset) affect.

A statistical analysis and modeling revealed that a thicker orbitofrontal cortex on the left side of the brain corresponded to higher optimism and less anxiety. The model also suggested that optimism played a mediating role in reducing anxiety in those with larger OFCs. Further analyses ruled out the role of other positive traits in reducing anxiety, and no other brain structures appeared to be involved in reducing anxiety by boosting optimism.
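
As a rough illustration of what such a mediation analysis looks like in practice, the sketch below runs the standard two-regression (product-of-coefficients) approach on simulated data. The variable names, effect sizes, and simple OLS setup are assumptions for illustration, not the authors' actual analysis.

```python
# Minimal mediation sketch: does optimism carry part of the association
# between OFC gray matter volume and anxiety? All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 61                                    # sample size matching the study
ofc = rng.normal(size=n)                  # relative OFC volume (z-scored)
optimism = 0.5 * ofc + rng.normal(size=n)
anxiety = -0.4 * optimism - 0.1 * ofc + rng.normal(size=n)

# Path a: predictor -> mediator
a = sm.OLS(optimism, sm.add_constant(ofc)).fit().params[1]
# Path b: mediator -> outcome, controlling for the predictor
X = sm.add_constant(np.column_stack([optimism, ofc]))
fit = sm.OLS(anxiety, X).fit()
b, direct = fit.params[1], fit.params[2]

print(f"indirect (mediated) effect a*b = {a * b:.3f}")
print(f"direct effect of OFC on anxiety = {direct:.3f}")
```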

“You can say, ‘OK, there is a relationship between the orbitofrontal cortex and anxiety. What do I do to reduce anxiety?'” Sanda Dolcos said. “And our model is saying, this is working partially through optimism. So optimism is one of the factors that can be targeted.”

“Optimism has been investigated in social psychology for years. But somehow only recently did we start to look at functional and structural associations of this trait in the brain,” Hu said. “We wanted to know: If we are consistently optimistic about life, would that leave a mark in the brain?”

Florin Dolcos said future studies should test whether optimism can be increased and anxiety reduced by training people in tasks that engage the orbitofrontal cortex, or by finding ways to boost optimism directly.

“If you can train people’s responses, the theory is that over longer periods, their ability to control their responses on a moment-by-moment basis will eventually be embedded in their brain structure,” he said.


Story Source:

The above post is reprinted from materials provided by University of Illinois at Urbana-Champaign. The original item was written by Diana Yates. Note: Materials may be edited for content and length.


Journal References:

  1. Sanda Dolcos et al. Optimism and the Brain: Trait Optimism Mediates the Protective Role of the Orbitofrontal Cortex Gray Matter Volume against Anxiety. Social Cognitive and Affective Neuroscience, 2015; DOI: 10.1093/scan/nsv106
  2. A Sekiguchi, M Sugiura, Y Taki, Y Kotozaki, R Nouchi, H Takeuchi, T Araki, S Hanawa, S Nakagawa, C M Miyauchi, A Sakuma, R Kawashima. Brain structural changes as vulnerability factors and acquired signs of post-earthquake stress. Molecular Psychiatry, 2012; 18 (5): 618 DOI: 10.1038/mp.2012.51

How your brain decides blame and punishment

Juries in criminal cases typically decide whether someone is guilty; a judge then determines a suitable level of punishment. New research confirms that these two separate assessments of guilt and punishment — though related — are calculated in different parts of the brain. In fact, researchers found that they can disrupt and change one decision without affecting the other.

New work by researchers at Vanderbilt University and Harvard University confirms that a specific area of the brain, the dorsolateral prefrontal cortex, is crucial to punishment decisions. Researchers predicted and found that by altering brain activity in this brain area, they could change how subjects punished hypothetical defendants without changing the amount of blame placed on the defendants.

“We were able to significantly change the chain of decision-making and reduce punishment for crimes without affecting blameworthiness,” said René Marois, professor and chair of psychology at Vanderbilt and co-principal author of the study. “This strengthens evidence that the dorsolateral prefrontal cortex integrates information from other parts of the brain to determine punishment and shows a clear neural dissociation between punishment decisions and moral responsibility judgements.”

The research, titled “From Blame to Punishment: Disrupting Prefrontal Cortex Activity Reveals Norm Enforcement Mechanisms,” was published on Sept. 17 in the journal Neuron.

The Experiment

The researchers used repetitive transcranial magnetic stimulation (rTMS) on a specific area of the dorsolateral prefrontal cortex to briefly alter activity in this brain region and consequently change the amount of punishment a person doled out.

“Many studies show the integrative function of the dorsolateral prefrontal cortex in relatively simple cognitive tasks, and we believe that this relatively basic process forms the foundation for far more complex forms of behavior and decision-making, such as norm enforcement,” said lead author Joshua Buckholtz, now an assistant professor of psychology at Harvard.

The researchers conducted experiments with 66 volunteer men and women. Participants were asked to make punishment and blameworthiness decisions in a series of scenarios in which a suspect committed a crime. The scenarios varied by the harm caused (ranging from property loss to grievous injury and death) and by how culpable the suspect was for the act (fully responsible, or not fully responsible due to mitigating circumstances). Half of the subjects received active rTMS while the other half received a sham, or placebo, version of rTMS.

Level of Harm

Across all participants and all trials, both culpability and level of harm were significant predictors of the amount of punishment the subjects deemed appropriate. But subjects receiving active rTMS chose significantly lower punishments for fully culpable suspects than did those subjects receiving sham rTMS, particularly in scenarios that resulted in low to moderate harm. Additional analyses suggested that the effect was due to impaired integration of signals for harm and culpability.
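
To make that analysis pattern concrete, here is a minimal sketch of how one might model punishment ratings as a function of harm, culpability, and stimulation condition, with interaction terms. The data are simulated and the formula is an assumption; this is not the authors' analysis code.

```python
# Toy regression mirroring the reported pattern: active rTMS lowers
# punishment for culpable suspects, most strongly when harm is low.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 66 * 24                                    # 66 subjects x 24 assumed scenarios
df = pd.DataFrame({
    "harm": rng.integers(1, 5, size=n),        # 1 = property loss ... 4 = death
    "culpable": rng.integers(0, 2, size=n),    # 1 = fully responsible
    "active_rtms": rng.integers(0, 2, size=n), # 1 = active, 0 = sham
})
df["punishment"] = (
    2.0 * df.harm * df.culpable
    - 0.8 * df.active_rtms * df.culpable * (5 - df.harm)  # built-in effect
    + rng.normal(scale=1.0, size=n)
)

fit = smf.ols("punishment ~ harm * culpable * active_rtms", data=df).fit()
print(fit.params.round(2))
```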

“Temporarily disrupting dorsolateral prefrontal cortex function appears to alter how people use information about harm and culpability to render these decisions. In other words, punishment requires that people balance these two influences, and the rTMS manipulation interfered with this balance, especially under conditions in which these factors are dissonant, such as when the intent is clear but the harm outcome is mild,” said Buckholtz.

Implications

The research team’s main goal in this work is to expand the knowledge of how the brain assesses and then integrates information relevant to guilt and punishment decisions. It will also advance the burgeoning interdisciplinary study of law and neuroscience.

“This research gives us deeper insights into how people make decisions relevant to law, and particularly how different parts of the brain contribute to decisions about crime and punishment. We hope that these insights will help to build a foundation for better understanding, and perhaps one day better combatting, decision-making biases in the legal system,” said co-author Owen Jones, professor of law and biological sciences at Vanderbilt and director of the MacArthur Foundation Research Network on Law and Neuroscience.


Story Source:

The above post is reprinted from materials provided by Vanderbilt University. The original item was written by Amy Wolf. Note: Materials may be edited for content and length.


Journal Reference:

  1. Joshua W. Buckholtz, Justin W. Martin, Michael T. Treadway, Katherine Jan, David H. Zald, Owen Jones, René Marois. From Blame to Punishment: Disrupting Prefrontal Cortex Activity Reveals Norm Enforcement Mechanisms. Neuron, 2015; DOI: 10.1016/j.neuron.2015.08.023

Change your walking style, change your mood

Man walking (stock image). Subjects in this study who were prompted to walk in a more depressed style, with less arm movement and their shoulders rolled forward, experienced worse moods than those who were induced to walk in a happier style.
Credit: © connel_design / Fotolia

Our mood can affect how we walk — slump-shouldered if we’re sad, bouncing along if we’re happy. Now researchers have shown it works the other way too — making people imitate a happy or sad way of walking actually affects their mood.

Subjects who were prompted to walk in a more depressed style, with less arm movement and their shoulders rolled forward, experienced worse moods than those who were induced to walk in a happier style, according to the study published in the Journal of Behavior Therapy and Experimental Psychiatry.

CIFAR Senior Fellow Nikolaus Troje (Queen’s University), a co-author on the paper, has shown in past research that depressed people move very differently than happy people.

“It is not surprising that our mood, the way we feel, affects how we walk, but we want to see whether the way we move also affects how we feel,” Troje says.

He and his colleagues showed subjects a list of positive and negative words, such as “pretty,” “afraid” and “anxious,” and then asked them to walk on a treadmill while their gait and posture were measured. A screen showed the subjects a gauge that moved left or right depending on whether their walking style was more depressed or happier, but the subjects didn’t know what the gauge was measuring. Researchers told some subjects to try to move the gauge left, while others were told to move it right.
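
A toy version of such a biofeedback gauge might look like the sketch below: a handful of gait features are projected onto a single happy-versus-depressed axis that drives the needle. The feature names and weights are invented for illustration; the study derived its axis from actual motion-capture data.

```python
# Hypothetical gauge: positive output -> "happier" gait, negative ->
# more "depressed" gait. Features and weights are illustrative only.
def gauge_position(arm_swing, shoulder_slump, vertical_bounce):
    """Project normalized gait features onto one happy/depressed axis."""
    return 0.5 * arm_swing - 0.7 * shoulder_slump + 0.3 * vertical_bounce

print(gauge_position(arm_swing=0.9, shoulder_slump=0.1, vertical_bounce=0.8))
print(gauge_position(arm_swing=0.2, shoulder_slump=0.9, vertical_bounce=0.1))
```

The first call (big arm swing, upright posture) moves the needle one way; the second (slumped, little movement) moves it the other, which is all the feedback subjects needed to learn the target walking style.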

“They would learn very quickly to walk the way we wanted them to walk,” Troje says.

Afterward, the subjects had to write down as many words as they could remember from the earlier list of positive and negative words. Those who had been walking in a depressed style remembered many more negative words. The difference in recall suggests that the depressed walking style actually created a more depressed mood.

The study builds on our understanding of how mood can affect memory. Clinically depressed patients are known to remember negative events, particularly those about themselves, much more than positive life events, Troje says. And remembering the bad makes them feel even worse.

“If you can break that self-perpetuating cycle, you might have a strong therapeutic tool to work with depressive patients.”

The study also contributes to the questions asked in CIFAR’s Neural Computation & Adaptive Perception program, which aims to unlock the mystery of how our brains convert sensory stimuli into information and to recreate human-style learning in computers.

“As social animals we spend so much time watching other people, and we are experts at retrieving information about other people from all sorts of different sources,” Troje says. Those sources include facial expression, posture and body movement. Developing a better understanding of the biological algorithms in our brains that process stimuli — including information from our own movements — can help researchers develop better artificial intelligence, while learning more about ourselves in the process.


Story Source:

The above post is reprinted from materials provided by Canadian Institute for Advanced Research. Note: Materials may be edited for content and length.


Journal Reference:

  1. Johannes Michalak, Katharina Rohde, Nikolaus F. Troje. How we walk affects what we remember: Gait modifications through biofeedback change negative affective memory bias. Journal of Behavior Therapy and Experimental Psychiatry, 2015; 46: 121 DOI: 10.1016/j.jbtep.2014.09.004

Pressure of Constant Social Media Causes Anxiety

Dr Cleland Woods explained: “Adolescence can be a period of increased vulnerability for the onset of depression and anxiety, and poor sleep quality may contribute to this. It is important that we understand how social media use relates to these. Evidence is increasingly supporting a link between social media use and wellbeing, particularly during adolescence, but the causes of this are unclear.”
Credit: © sanyasm / Fotolia

The need to be constantly available and to respond 24/7 on social media accounts can cause depression and anxiety and can reduce sleep quality for teenagers, says a study presented on September 11, 2015, at a British Psychological Society conference in Manchester.

The researchers, Dr Heather Cleland Woods and Holly Scott of the University of Glasgow, provided questionnaires for 467 teenagers regarding their overall and night-time specific social media use. A further set of tests measured sleep quality, self-esteem, anxiety, depression and emotional investment in social media, which relates to the pressure felt to be available 24/7 and the anxiety around, for example, not responding immediately to texts or posts.

Dr Cleland Woods explained: “Adolescence can be a period of increased vulnerability for the onset of depression and anxiety, and poor sleep quality may contribute to this. It is important that we understand how social media use relates to these. Evidence is increasingly supporting a link between social media use and wellbeing, particularly during adolescence, but the causes of this are unclear.”

Analysis showed that overall and night-time specific social media use along with emotional investment were related to poorer sleep quality, lower self-esteem as well as higher anxiety and depression levels.

Lead researcher Dr Cleland Woods said: “While overall social media use impacts on sleep quality, those who log on at night appear to be particularly affected. This may be mostly true of individuals who are highly emotionally invested. This means we have to think about how our kids use social media, in relation to time for switching off.”

The study is being presented at the BPS Developmental and Social Psychology Section annual conference, taking place from 9 to 11 September at The Palace Hotel in Manchester.


Story Source:

The above post is reprinted from materials provided by British Psychological Society. Note: Materials may be edited for content and length.

Bigger, older cousin to Earth discovered

This artist’s concept compares Earth (left) to the new planet, called Kepler-452b, which is about 60 percent larger in diameter.
Credit: NASA/JPL-Caltech/T. Pyle

NASA’s Kepler mission has confirmed the first near-Earth-size planet in the “habitable zone” around a sun-like star. This discovery and the introduction of 11 other new small habitable zone candidate planets mark another milestone in the journey to finding another “Earth.”

The newly discovered Kepler-452b is the smallest planet discovered to date orbiting in the habitable zone — the area around a star where liquid water could pool on the surface of an orbiting planet — of a G2-type star, like our sun. The confirmation of Kepler-452b brings the total number of confirmed planets to 1,030.

“In the 20th anniversary year of the discovery that proved other suns host planets, the Kepler exoplanet explorer has discovered a planet and star which most closely resemble the Earth and our Sun,” said John Grunsfeld, associate administrator of NASA’s Science Mission Directorate at the agency’s headquarters in Washington. “This exciting result brings us one step closer to finding an Earth 2.0.”

Kepler-452b is 60 percent larger in diameter than Earth and is considered a super-Earth-size planet. While its mass and composition are not yet determined, previous research suggests that planets the size of Kepler-452b have a good chance of being rocky.

While Kepler-452b is larger than Earth, its 385-day orbit is only 5 percent longer, and the planet is 5 percent farther from its parent star, Kepler-452, than Earth is from the Sun. Kepler-452 is 6 billion years old, 1.5 billion years older than our sun; it has the same temperature, is 20 percent brighter, and has a diameter 10 percent larger.
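
These orbital figures are easy to sanity-check with Kepler's third law. In the sketch below, the stellar mass of roughly 1.04 solar masses is an assumption (a G2 star slightly larger and brighter than the Sun), not a figure quoted in this article.

```python
# Kepler's third law: a = (P^2 * M)^(1/3), with P in years, M in solar
# masses, and a in astronomical units (AU).
P_years = 385 / 365.25   # Kepler-452b's orbital period
M_star = 1.04            # assumed mass of Kepler-452 in solar masses

a_au = (P_years**2 * M_star) ** (1.0 / 3.0)
print(f"semi-major axis ≈ {a_au:.3f} AU "
      f"({100 * (a_au - 1):.0f}% farther out than Earth)")
# -> roughly 1.05 AU, matching the quoted "5 percent farther" figure
```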

“We can think of Kepler-452b as an older, bigger cousin to Earth, providing an opportunity to understand and reflect upon Earth’s evolving environment,” said Jon Jenkins, Kepler data analysis lead at NASA’s Ames Research Center in Moffett Field, California, who led the team that discovered Kepler-452b. “It’s awe-inspiring to consider that this planet has spent 6 billion years in the habitable zone of its star; longer than Earth. That’s substantial opportunity for life to arise, should all the necessary ingredients and conditions for life exist on this planet.”

To help confirm the finding and better determine the properties of the Kepler-452 system, the team conducted ground-based observations at the University of Texas at Austin’s McDonald Observatory, the Fred Lawrence Whipple Observatory on Mt. Hopkins, Arizona, and the W. M. Keck Observatory atop Mauna Kea in Hawaii. These measurements were key for the researchers to confirm the planetary nature of Kepler-452b, to refine the size and brightness of its host star and to better pin down the size of the planet and its orbit.

The Kepler-452 system is located 1,400 light-years away in the constellation Cygnus. The research paper reporting this finding has been accepted for publication in The Astronomical Journal.

In addition to confirming Kepler-452b, the Kepler team has increased the number of new exoplanet candidates by 521 from their analysis of observations conducted from May 2009 to May 2013, raising the number of planet candidates detected by the Kepler mission to 4,696. Candidates require follow-up observations and analysis to verify they are actual planets.

Twelve of the new planet candidates have diameters between one and two times that of Earth, and orbit in their star’s habitable zone. Of these, nine orbit stars that are similar to our sun in size and temperature.

“We’ve been able to fully automate our process of identifying planet candidates, which means we can finally assess every transit signal in the entire Kepler dataset quickly and uniformly,” said Jeff Coughlin, Kepler scientist at the SETI Institute in Mountain View, California, who led the analysis of a new candidate catalog. “This gives astronomers a statistically sound population of planet candidates to accurately determine the number of small, possibly rocky planets like Earth in our Milky Way galaxy.”

These findings, presented in the seventh Kepler Candidate Catalog, will be submitted for publication in the Astrophysical Journal.

Scientists now are producing the last catalog based on the original Kepler mission’s four-year data set. The final analysis will be conducted using sophisticated software that is increasingly sensitive to the tiny telltale signatures of Earth-size planets.


Story Source:

The above post is reprinted from materials provided by NASA. Note: Materials may be edited for content and length.

Untangling the mechanics of knots

The researchers carried out experiments to test how much force is required to tighten knots with an increasing number of twists. They then compared their observations with their theoretical predictions, and found that the theory accurately predicted the force needed to close a knot, given its topology and the diameter and stiffness of the underlying strand.
Credit: © highwaystarz / Fotolia

Got rope? Then try this experiment: Cross both ends, left over right, then bring the left end under and out, as if tying a pair of shoelaces. If you repeat this sequence, you get what’s called a “granny” knot. If, instead, you cross both ends again, this time right over left, you’ve created a sturdier “reef” knot.

The configuration, or “topology,” of a knot determines its stiffness. For example, a granny knot is much easier to undo, as its configuration of twists creates weaker forces within the knot, compared with a reef knot. For centuries, sailors have observed such distinctions, choosing certain knots over others to secure vessels — largely by intuition and tradition.

Now researchers at MIT and Pierre et Marie Curie University in Paris have analyzed the mechanical forces underpinning simple knots, and come up with a theory that describes how a knot’s topology determines its mechanical forces.

The researchers carried out experiments to test how much force is required to tighten knots with an increasing number of twists. They then compared their observations with their theoretical predictions, and found that the theory accurately predicted the force needed to close a knot, given its topology and the diameter and stiffness of the underlying strand.

“This is the first time, to the best of our knowledge, that precision model experiments and theory have been tied together to untangle the influence of topology on the mechanics of knots,” the researchers write in a paper appearing in the journal Physical Review Letters.

Pedro Reis, the Gilbert W. Winslow Career Development Associate Professor in Civil Engineering and Mechanical Engineering, says the new knot theory may provide guidelines for choosing certain knot configurations for a given load-bearing application, such as braided steel cables, or surgical stitching patterns.

“Surgeons, of course, have a great deal of experience, and they know this knot is better for this stitching procedure than this knot,” Reis says. “But can we further inform the process? While maybe these knots are used, we might show that some other knots, done in a certain way, may be preferable.”

A twisted theory

Reis’ colleague, French theoretician Basile Audoly, originally took on the problem of relating a knot’s topology and mechanical forces. In previous work, Audoly, with his own colleague Sébastien Neukirch, had developed a theory based on observations of tightening a very simple, overhand knot, comprising only one twist. They then verified the theory with a slightly more complex knot with two twists. The theory, they concluded, should predict the forces required to tighten even more complex knots.

However, when Reis, together with his students Khalid Jawed and Peter Dieleman, performed similar experiments with knots of more than two twists, they found that the previous theory failed to predict the force needed to close the knots. Reis and Audoly teamed up to develop a more accurate theory for describing the topology and mechanics of a wider range of knots.

The researchers created knots from nitinol, a hyper-elastic wire that, even when bent at dramatic angles, will return to its original shape. Nitinol’s elasticity and stiffness are well known.

To generate various topologies, the researchers tied knots with multiple overhand twists, creating increasingly longer braids. They then clamped one end of each braid to a table, used a mechanical arm to simultaneously pull the knot tight, and measured the force applied. From these experiments, they observed that a knot with 10 twists requires about 1,000 times more force to close than a knot with just one.
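
That jump, roughly 1,000 times more force for 10 times as many twists, is consistent with a simple cubic power law. The sketch below fits such a law to hypothetical measurements; the data points and exponent are illustrative assumptions, not values from the paper.

```python
# Fit a power law F ~ n^k to toy tightening-force data on a log-log scale.
import numpy as np

n = np.array([1, 2, 4, 6, 8, 10])   # number of twists in the braid
# hypothetical forces following a cubic law with a little noise
F = n**3.0 * (1 + 0.05 * np.random.default_rng(1).normal(size=n.size))

slope, intercept = np.polyfit(np.log(n), np.log(F), 1)
print(f"fitted exponent ≈ {slope:.2f}  (F grows ~ n^{slope:.1f})")
print(f"predicted F(10)/F(1) ≈ {np.exp(slope * np.log(10)):.0f}x")
```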

“When Pedro Reis showed me his experiments on knots with as much as 10 twists, and told me that they could resist such a high force, this first appeared to me to be far beyond what simple equations can capture,” Audoly says. “Then, I thought it was a nice challenge.”

From shoelaces to surgery

To come up with a theory to predict the forces observed, Reis and Audoly went through multiple iterations between the experiments and theory to identify the ingredients that mattered the most and simplify the model. Eventually, they divided the problem in two parts, first characterizing the knot’s loop, then its braid. For the first part, the researchers quantified the aspect ratio, or shape of a loop, given the number of twists in a braid: The more twists in a braid, the more elliptical the loop.

The team then studied the forces within the braid. As a braid, or twist, is symmetric, the researchers simplified the problem by only considering one strand of the braid.

“Then we write an energy for the system that includes bending, tension, and friction for that one helical strand, and we are able to determine the shape,” Audoly says. “Once we have the shape, we can match it to this loop, and ultimately we get the overall force displacement response of the system.”

To test the theory, Reis plugged the experiments’ measurements into the theory to generate predictions of force.

“When we put the data through the machinery of the theory, the predictions and the dataset all collapse onto this master curve,” Reis says. “Once we have this master curve, you can give me a bending stiffness and diameter of a strand, and the number of turns in the knot, and I can tell you what force is required to close it. Also, we now understand how the knot locks itself up when more turns are added.”

Reis envisions multiple applications for the group’s theory, both significant and mundane.

“This theory helps us predict the mechanical response of knots of different topologies,” Reis says. “We’re describing the force it requires to close a loop, which is an indicator of the stiffness of the knot. This might help us to understand something as simple as how your headphones get tangled, and how to better tie your shoes, to how the configuration of knots can help in surgical procedures.”


Story Source:

The above post is reprinted from materials provided by Massachusetts Institute of Technology. The original item was written by Jennifer Chu. Note: Materials may be edited for content and length.

Earth Has More Than 3 Trillion Trees

Using a combination of satellite imagery, forest inventories, and supercomputer technologies, the researchers were able to produce a global map of tree density at the square-kilometer pixel scale.
Credit: Image courtesy of Yale School of Forestry & Environmental Studies

A new Yale-led study estimates that there are more than 3 trillion trees on Earth, about seven and a half times more than some previous estimates. But the total number of trees has plummeted by roughly 46 percent since the start of human civilization, the study estimates.

Using a combination of satellite imagery, forest inventories, and supercomputer technologies, the international team of researchers was able to map tree populations worldwide at the square-kilometer level.

Their results, published in the journal Nature, provide the most comprehensive assessment of tree populations ever produced and offer new insights into a class of organism that helps shape most terrestrial biomes.

The new insights can improve the modeling of many large-scale systems, from carbon cycling and climate change models to the distribution of animal and plant species, say the researchers.

“Trees are among the most prominent and critical organisms on Earth, yet we are only recently beginning to comprehend their global extent and distribution,” said Thomas Crowther, a postdoctoral fellow at the Yale School of Forestry & Environmental Studies (F&ES) and lead author of the study.

“They store huge amounts of carbon, are essential for the cycling of nutrients, for water and air quality, and for countless human services,” he added. “Yet you ask people to estimate, within an order of magnitude, how many trees there are and they don’t know where to begin. I don’t know what I would have guessed, but I was certainly surprised to find that we were talking about trillions.”

The study was inspired by a request by Plant for the Planet, a global youth initiative that leads the United Nations Environment Programme’s “Billion Tree Campaign.” Two years ago the group approached Crowther asking for baseline estimates of tree numbers at regional and global scales so they could better evaluate the contribution of their efforts and set targets for future tree-planting initiatives.

At the time, the only global estimate was just over 400 billion trees worldwide, or about 61 trees for every person on Earth. That prediction was generated using satellite imagery and estimates of forest area, but did not incorporate any information from the ground.

The new study used a combination of approaches to reveal that there are 3.04 trillion trees — roughly 422 trees per person.

Crowther and his colleagues collected tree density information from more than 400,000 forest plots around the world. This included information from several national forest inventories and peer-reviewed studies, each of which included tree counts that had been verified at the ground level. Using satellite imagery, they were then able to assess how the number of trees in each of those plots is related to local characteristics such as climate, topography, vegetation, soil condition, and human impacts.

“The diverse array of data available today allowed us to build predictive models to estimate the number of trees at each location around the globe,” said Yale postdoctoral student Henry Glick, second author of the study.
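
In outline, that kind of pipeline can be sketched in a few lines: fit a model relating ground-truth plot counts to environmental covariates, then apply it to every map pixel and sum. Everything below (the covariates, model choice, and data) is a simplified assumption, not the authors' actual workflow.

```python
# Toy tree-density pipeline: learn density from plot data, predict per
# pixel, sum over pixel areas. All data here are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_plots = 1000
# assumed covariates per plot, e.g. precipitation, temperature, human impact
X = rng.normal(size=(n_plots, 3))
log_density = (5.0 + 0.8 * X[:, 0] - 0.3 * X[:, 1] - 0.6 * X[:, 2]
               + rng.normal(scale=0.5, size=n_plots))

model = LinearRegression().fit(X, log_density)

# predict trees/km^2 for unmeasured pixels, then sum (1 km^2 per pixel)
X_pixels = rng.normal(size=(10_000, 3))
total_trees = np.exp(model.predict(X_pixels)).sum()
print(f"estimated trees in this toy region: {total_trees:.3e}")
```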

The resulting map has the potential to inform scientists about the structure of forest ecosystems in different regions, and it can be used to improve predictions about carbon storage and biodiversity around the world.

“Most global environmental data is thematically coarse,” said Matthew Hansen, a global forestry expert from the University of Maryland who was not involved in the study. “The study of Crowther et al. moves us towards a needed direct quantification of tree distributions, information ready to be used by a host of downstream science investigations.”

The highest densities of trees were found in the boreal forests in the sub-arctic regions of Russia, Scandinavia, and North America. But the largest forest areas, by far, are in the tropics, which are home to about 43 percent of the world’s trees. (Only 24 percent are in the dense boreal regions, while another 22 percent exist in temperate zones.)

The results illustrate how tree density changes within forest types. Researchers found that climate can help predict tree density in most biomes. In wetter areas, for instance, more trees are able to grow. However, the positive effects of moisture were reversed in some regions because humans typically prefer the moist, productive areas for agriculture.

In fact, human activity is the largest driver of tree numbers worldwide, said Crowther. While the negative impact of human activity on natural ecosystems is clearly visible in small areas, the study provides a new measure of the scale of anthropogenic effects, highlighting how historical land use decisions have shaped natural ecosystems on a global scale. In short, tree densities usually plummet as the human population increases. Deforestation, land-use change, and forest management are responsible for a gross loss of over 15 billion trees each year.

“We’ve nearly halved the number of trees on the planet, and we’ve seen the impacts on climate and human health as a result,” Crowther said. “This study highlights how much more effort is needed if we are to restore healthy forests worldwide.”

Researchers from 15 countries collaborated on the study.


Story Source:

The above post is reprinted from materials provided by Yale School of Forestry & Environmental Studies. Note: Materials may be edited for content and length.


Journal Reference:

  1. T. W. Crowther, H. B. Glick, K. R. Covey, C. Bettigole, D. S. Maynard, S. M. Thomas, J. R. Smith, G. Hintler, M. C. Duguid, G. Amatulli, M.-N. Tuanmu, W. Jetz, C. Salas, C. Stam, D. Piotto, R. Tavani, S. Green, G. Bruce, S. J. Williams, S. K. Wiser, M. O. Huber, G. M. Hengeveld, G.-J. Nabuurs, E. Tikhonova, P. Borchardt, C.-F. Li, L. W. Powrie, M. Fischer, A. Hemp, J. Homeier, P. Cho, A. C. Vibrans, P. M. Umunay, S. L. Piao, C. W. Rowe, M. S. Ashton, P. R. Crane, M. A. Bradford. Mapping tree density at a global scale. Nature, 2015; DOI: 10.1038/nature14967

Milestone in Hybrid Artificial Photosynthesis

Artificial photosynthesis used to produce renewable molecular hydrogen for synthesizing carbon dioxide into methane.
Credit: Berkeley Lab

A team of researchers at the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) developing a bioinorganic hybrid approach to artificial photosynthesis has achieved another milestone. Having generated quite a buzz with their hybrid system of semiconducting nanowires and bacteria, which used electrons to synthesize carbon dioxide into acetate, the team has now developed a hybrid system that produces renewable molecular hydrogen and uses it to synthesize carbon dioxide into methane, the primary constituent of natural gas.

“This study represents another key breakthrough in solar-to-chemical energy conversion efficiency and artificial photosynthesis,” says Peidong Yang, a chemist with Berkeley Lab’s Materials Sciences Division and one of the leaders of this study. “By generating renewable hydrogen and feeding it to microbes for the production of methane, we can now expect an electrical-to-chemical efficiency of better than 50 percent and a solar-to-chemical energy conversion efficiency of 10 percent if our system is coupled with a state-of-the-art solar panel and electrolyzer.”
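
The arithmetic behind those figures is straightforward to check: chaining an assumed 20 percent efficient solar-panel-plus-electrolyzer front end with the better-than-50-percent electrical-to-chemical step yields the quoted 10 percent solar-to-chemical figure. A one-liner makes the dependence explicit (the 20 percent front-end value is an assumption for illustration):

```python
# Overall efficiency is the product of the stages in the chain.
solar_to_hydrogen = 0.20      # assumed panel + electrolyzer front end
electrical_to_chemical = 0.50 # quoted: "better than 50 percent"
print(f"solar-to-chemical ≈ {solar_to_hydrogen * electrical_to_chemical:.0%}")
# -> 10%
```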

Yang, who also holds appointments with UC Berkeley and the Kavli Energy NanoScience Institute (Kavli-ENSI) at Berkeley, is one of three corresponding authors of a paper describing this research in the Proceedings of the National Academy of Sciences (PNAS). The paper is titled “A hybrid bioinorganic approach to solar-to-chemical conversion.” The other corresponding authors are Michelle Chang and Christopher Chang. Both also hold joint appointments with Berkeley Lab and UC Berkeley. In addition, Chris Chang is a Howard Hughes Medical Institute (HHMI) investigator.

Photosynthesis is the process by which nature harvests the energy in sunlight and uses it to synthesize carbohydrates from carbon dioxide and water. Carbohydrates are biomolecules that store the chemical energy used by living cells. In the original hybrid artificial photosynthesis system developed by the Berkeley Lab team, an array of silicon and titanium oxide nanowires collected solar energy and delivered electrons to microbes, which used them to reduce carbon dioxide into a variety of value-added chemical products. In the new system, solar energy is used to split the water molecule into molecular oxygen and hydrogen. The hydrogen is then transported to microbes that use it to reduce carbon dioxide into one specific chemical product, methane.

“In our latest work, we’ve demonstrated two key advances,” says Chris Chang. “First, our use of renewable hydrogen for carbon dioxide fixation opens up the possibility of using hydrogen that comes from any sustainable energy source, including wind, hydrothermal and nuclear. Second, having demonstrated one promising organism for using renewable hydrogen, we can now, through synthetic biology, expand to other organisms and other value-added chemical products.”

The concept in the two studies is essentially the same — a membrane of semiconductor nanowires that can harness solar energy is populated with bacteria that can feed off this energy and use it to produce a targeted carbon-based chemical. In the new study, the membrane consisted of indium phosphide photocathodes and titanium dioxide photoanodes. Whereas in the first study the team worked with Sporomusa ovata, an anaerobic bacterium that readily accepts electrons from the surrounding environment to reduce carbon dioxide, in the new study the team populated the membrane with Methanosarcina barkeri, an anaerobic archaeon that reduces carbon dioxide using hydrogen rather than electrons.

“Using hydrogen as the energy carrier rather than electrons makes for a much more efficient process as molecular hydrogen, through its chemical bonds, has a much higher density for storing and transporting energy,” says Michelle Chang.

In the newest membrane reported by the Berkeley team, solar energy is absorbed and used to generate hydrogen from water via the hydrogen evolution reaction (HER). The HER is catalyzed by earth-abundant nickel sulfide nanoparticles that operate effectively under biologically compatible conditions. Hydrogen produced in the HER is directly utilized by the Methanosarcina barkeri archaeons in the membrane to produce methane.
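
The methane-producing step carried out by Methanosarcina barkeri is the classic hydrogenotrophic methanogenesis reaction, CO2 + 4 H2 → CH4 + 2 H2O. A quick element balance confirms the stoichiometry:

```python
# Element balance for CO2 + 4 H2 -> CH4 + 2 H2O.
reactants = {"C": 1, "O": 2, "H": 8}       # CO2 plus 4 H2
products = {"C": 1, "H": 4 + 4, "O": 2}    # CH4 plus 2 H2O
assert reactants == products, "reaction is not balanced"
print("CO2 + 4 H2 -> CH4 + 2 H2O balances")
```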

“We selected methane as an initial target owing to the ease of product separation, the potential for integration into existing infrastructures for the delivery and use of natural gas, and the fact that direct conversion of carbon dioxide to methane with synthetic catalysts has proven to be a formidable challenge,” says Chris Chang. “Since we still get the majority of our methane from natural gas, a fossil fuel, often from fracking, the ability to generate methane from a renewable hydrogen source is another important advance.”

Adds Yang, “While we were inspired by the process of natural photosynthesis and continue to learn from it, by adding nanotechnology to help improve the efficiency of natural systems we are showing that sometimes we can do even better than nature.”

In addition to the corresponding authors, other co-authors of the PNAS paper describing this research were Eva Nichols, Joseph Gallagher, Chong Liu, Yude Su, Joaquin Resasco, Yi Yu and Yujie Sung.

This research was primarily funded by the DOE Office of Science.


Story Source:

The above post is reprinted from materials provided by Lawrence Berkeley National Laboratory. Note: Materials may be edited for content and length.


Journal Reference:

  1. Eva M. Nichols et al. Hybrid bioinorganic approach to solar-to-chemical conversion. PNAS, 2015 DOI: 10.1073/pnas.1508075112

July 2015 was warmest month ever recorded for the globe

Land and ocean temperature percentiles July 2015
Credit: NOAA

The July average temperature across global land and ocean surfaces was 1.46°F (0.81°C) above the 20th century average. As July is climatologically the warmest month of the year, this was also the all-time highest monthly temperature in the 1880-2015 record, at 61.86°F (16.61°C), surpassing the previous record set in 1998 by 0.14°F (0.08°C) and making July 2015 the warmest month ever recorded.

Separately, the July globally-averaged land surface temperature was 1.73°F (0.96°C) above the 20th century average. This was the sixth highest for July in the 1880-2015 record.

The July globally-averaged sea surface temperature was 1.35°F (0.75°C) above the 20th century average. This was the highest temperature for any month in the 1880-2015 record, surpassing the previous record set in July 2014 by 0.13°F (0.07°C). The global value was driven by record warmth across large expanses of the Pacific and Indian Oceans.
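
Note that temperature anomalies (departures from an average) convert between Celsius and Fahrenheit with the 9/5 factor alone, since the +32 offset cancels when the baseline is subtracted. A quick check of the figures above:

```python
# Anomaly conversion: delta_F = delta_C * 9/5 (no +32 offset needed).
for anomaly_c in (0.81, 0.96, 0.75):
    print(f"{anomaly_c:.2f}°C above average = "
          f"{anomaly_c * 9 / 5:.2f}°F above average")
# -> 1.46, 1.73 and 1.35°F, matching the quoted values
```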

The average Arctic sea ice extent for July was 350,000 square miles (9.5 percent) below the 1981-2010 average. This was the eighth smallest July extent since records began in 1979 and the largest since 2009, according to an analysis by the National Snow and Ice Data Center using data from NOAA and NASA.

Antarctic sea ice during July was 240,000 square miles (3.8 percent) above the 1981-2010 average. This was the fourth largest July Antarctic sea ice extent on record and 140,000 square miles smaller than the record-large July extent of 2014.

Global highlights: Year-to-date (January-July 2015)

  • The year-to-date temperature combined across global land and ocean surfaces was 1.53°F (0.85°C) above the 20th century average. This was the highest for January-July in the 1880-2015 record, surpassing the previous record set in 2010 by 0.16°F (0.09°C).
  • The year-to-date globally-averaged land surface temperature was 2.41°F (1.34°C) above the 20th century average. This was the highest for January-July in the 1880-2015 record, surpassing the previous record of 2007 by 0.27°F (0.15°C).
  • The year-to-date globally-averaged sea surface temperature was 1.21°F (0.67°C) above the 20th century average. This was also the highest for January-July in the 1880-2015 record, surpassing the previous record of 2010 by 0.11°F (0.06°C). Every major ocean basin observed record warmth in some areas.

Story Source:

The above post is reprinted from materials provided by National Oceanic and Atmospheric Administration. Note: Materials may be edited for content and length.