Latest Science Earth and Planets, Medicine and Physics, Animals Science, Researchers and Discoveries, Psychology, Journal Science
Protected areas in the United States, representing 14 percent of the land mass, provide places for respite, recreation, and natural resource conservation. However, noise pollution poses novel threats to these protected areas, according to a first-of-its-kind study from scientists at Colorado State University and the U.S. National Park Service.
Researchers found that anthropogenic noise doubled background sound levels in a majority of U.S. protected areas, and raised background levels ten-fold or more in 21 percent of them.
The often-overlooked impacts of noise, driven by expansion of human activities and transportation networks, are encroaching into the furthest reaches of remote areas, according to the study. The research findings highlight the pervasiveness and identify the primary drivers of noise in protected areas.
Rachel Buxton, lead author and post-doctoral researcher in the Department of Fish, Wildlife, and Conservation Biology in the Warner College of Natural Resources, said the team was surprised by how prevalent noise pollution was in protected areas.
"The noise levels we found can be harmful to visitor experiences in these areas, and can be harmful to human health, and to wildlife," she said. "However, we were also encouraged to see that many large wilderness areas have sound levels that are close to natural levels. Protecting these important natural acoustic resources as development and land conversion progresses is critical if we want to preserve the character of protected areas."
Anthropogenic, or human-caused, noise is an unwanted or inappropriate sound created by humans, such as sounds emanating from aircraft, highways, industrial, or residential sources. Noise pollution is noise that interferes with normal activities, such as sleeping and conversation. It can also diminish a person's quality of life.
This is an acoustic recording station at the iconic tourist attraction Alcatraz Island in San Francisco Bay, part of Golden Gate National Recreation Area, California.
Credit: United States National Park Service

Measuring noise pollution is a challenging task: sound diffuses through the landscape and cannot be monitored remotely over large spatial scales by satellite or other visual observations. Instead, for this study, the team analyzed millions of hours of sound measurements from 492 sites around the continental U.S. The results combined predictions of existing sound levels, estimates of natural sound levels, and the amount by which anthropogenic noise raises levels above natural ones, which is considered noise pollution.
How prevalent is noise pollution in protected areas? The research team found anthropogenic noise doubled background sound levels in 63 percent of U.S. protected areas, and caused a ten-fold or greater increase in background levels in 21 percent of protected areas.
In other words, noise shrank the area over which natural sounds can be heard by 50 to 90 percent; what could be heard 100 feet away could only be heard from 10 to 50 feet.
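These reductions follow from basic decibel arithmetic. As a rough sketch (assuming free-field inverse-square spreading, where sound level falls 6 dB per doubling of distance, and ignoring atmospheric absorption and terrain), a doubling of sound energy is a +3 dB rise in background level and a ten-fold increase is +10 dB:

```python
def listening_reduction(delta_db):
    """Given a rise in background level (dB), return the fractional
    reduction in audible distance and in listening area, assuming
    free-field inverse-square spreading (level falls 6 dB per
    doubling of distance)."""
    distance_factor = 10 ** (-delta_db / 20)  # audible radius shrinks by this factor
    area_factor = distance_factor ** 2        # listening area scales with radius squared
    return 1 - distance_factor, 1 - area_factor

# A doubling of sound energy is +3 dB; a ten-fold increase is +10 dB.
for delta in (3, 10):
    d_loss, a_loss = listening_reduction(delta)
    print(f"+{delta} dB: distance -{d_loss:.0%}, listening area -{a_loss:.0%}")
```

The +3 dB and +10 dB cases reproduce the study's 50 and 90 percent listening-area reductions.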
This reduced capacity to hear natural sound reduces the restorative properties of spending time in nature, such as mood enhancement and stress reduction, interfering with the enjoyment typically experienced by park visitors. Noise pollution also negatively impacts wildlife by distracting or scaring animals, and can result in changes in species composition.
High levels of noise pollution were also found in critical habitat for endangered species, namely in endangered plant and insect habitats. "Although plants can't hear, many animals that disperse seeds or pollinate flowers can hear, and are known to be affected by noise, resulting in indirect impacts on plants," said Buxton.
The study also revealed that high noise pollution levels were concentrated in specific locations within protected areas, where noise reduction techniques may best be targeted. The biggest culprits were roads, aircraft, human development, and resource extraction.
Some protected areas have introduced effective techniques to reduce noise, launching shuttle services to cut back on traffic, implementing quiet zones where visitors are encouraged to quietly enjoy protected area surroundings, and creating noise corridors, aligning flight patterns over roads.
"Numerous noise mitigation strategies have been successfully developed and implemented, so we already have the knowledge needed to address noise issues," said George Wittemyer, an associate professor at Colorado State University and the senior author of the study. "Our work provides information to facilitate such efforts in respect to protected areas where natural sounds are integral."
Researchers said that many people don't really think of noise pollution as pollution. But the team is hopeful that more people will consider sound as a component of the natural environment.
"Next time you go for a walk in the woods, pay attention to the sounds you hear -- the flow of a river, wind through the trees, singing birds, bugling elk. These acoustic resources are just as magnificent as visual ones, and deserve our protection" said Buxton.

Story Source:
Materials provided by Colorado State University. Note: Content may be edited for style and length.

Journal Reference:
  1. Rachel T. Buxton, Megan F. McKenna, Daniel Mennitt, Kurt Fristrup, Kevin Crooks, Lisa Angeloni, George Wittemyer. Noise pollution is pervasive in U.S. protected areas. Science, 2017; 356 (6337): 531. DOI: 10.1126/science.aah4783

In January 2016, the EU imposed a maximum limit on inorganic arsenic in rice products in a bid to mitigate the associated health risks. Researchers at the Institute for Global Food Security at Queen's have found that little has changed since the law was passed, and that 50 per cent of baby rice food products still contain an illegal level of inorganic arsenic.
Professor Meharg, lead author of the study and Professor of Plant and Soil Sciences at Queen's, said: "This research has shown direct evidence that babies are exposed to illegal levels of arsenic despite the EU regulation to specifically address this health challenge. Babies are particularly vulnerable to the damaging effects of arsenic that can prevent the healthy development of a baby's growth, IQ and immune system to name but a few."
Rice typically contains ten times more inorganic arsenic than other foods, and chronic exposure can cause a range of health problems, including developmental problems, heart disease, diabetes and nervous system damage.
As babies are rapidly growing, they are at a sensitive stage of development and are known to be more susceptible to the damaging effects of arsenic, which can inhibit their development and cause long-term health problems. Babies and young children under the age of five also eat around three times more food on a body weight basis than adults, which means that, relatively, they have three times greater exposure to inorganic arsenic from the same food item.
50 per cent of baby rice food products still contain an illegal level of inorganic arsenic, say British researchers.
Credit: © jdjuanci / Fotolia

The research findings, published in the PLOS ONE journal today, compared the level of arsenic in urine samples among infants who were breast-fed or formula-fed before and after weaning. A higher concentration of arsenic was found in formula-fed infants, particularly among those fed non-dairy formulas, which include the rice-fortified formulas favoured for infants with dietary requirements such as wheat or dairy intolerance. Weaning further increased infants' exposure: babies were five times more exposed to arsenic after the weaning process, highlighting the clear link between rice-based baby products and arsenic exposure.
In this new study, researchers at Queen's also compared baby food products containing rice before and after the law was passed and discovered that higher levels of arsenic were in fact found in the products since the new regulations were implemented. Nearly 75 per cent of the rice-based products specifically marketed for infants and young children contained more than the standard level of arsenic stipulated by the EU law.
Rice and rice-based products are a popular choice for parents, widely used during weaning and to feed young children, due to their availability, nutritional value and relatively low allergenic potential.
Professor Meharg explained: "Products such as rice-cakes and rice cereals are common in babies' diets. This study found that almost three-quarters of baby crackers specifically marketed for children exceeded the maximum amount of arsenic."
Previous research led by Professor Meharg highlighted how a simple process of percolating rice could remove up to 85 per cent of arsenic. Professor Meharg adds: "Simple measures can be taken to dramatically reduce the arsenic in these products so there is no excuse for manufacturers to be selling baby food products with such harmful levels of this carcinogenic substance.
"Manufacturers should be held accountable for selling products that are not meeting the required EU standard. Companies should publish the levels of arsenic in their products to prevent those with illegal amounts from being sold. This will enable consumers to make an informed decision, aware of any risks associated before consuming products containing arsenic."

Story Source:
Materials provided by Queen's University Belfast. Note: Content may be edited for style and length.

Journal Reference:
  1. Antonio J. Signes-Pastor, Jayne V. Woodside, Paul McMullan, Karen Mullan, Manus Carey, Margaret R. Karagas, Andrew A. Meharg. Levels of infants' urinary arsenic metabolites related to formula feeding and weaning with rice products exceeding the EU inorganic arsenic standard. PLOS ONE, 2017; 12 (5): e0176923. DOI: 10.1371/journal.pone.0176923

Cellphones and other devices could soon be controlled with touchless gestures and charge themselves using ambient light, thanks to new LED arrays that can both emit and detect light.
Made of tiny nanorods arrayed in a thin film, the LEDs could enable new interactive functions and multitasking devices. Researchers at the University of Illinois at Urbana-Champaign and Dow Electronic Materials in Marlborough, Massachusetts, report the advance in the Feb. 10 issue of the journal Science.
"These LEDs are the beginning of enabling displays to do something completely different, moving well beyond just displaying information to be much more interactive devices," said Moonsub Shim, a professor of materials science and engineering at the U. of I. and the leader of the study. "That can become the basis for new and interesting designs for a lot of electronics."
The tiny nanorods, each measuring less than 5 nanometers in diameter, are made of three types of semiconductor material. One type emits and absorbs visible light. The other two semiconductors control how charge flows through the first material. The combination is what allows the LEDs to emit, sense and respond to light.
The nanorod LEDs are able to perform both functions by quickly switching back and forth from emitting to detecting. They switch so fast that, to the human eye, the display appears to stay on continuously -- in fact, it's three orders of magnitude faster than standard display refresh rates. Yet the LEDs are also near-continuously detecting and absorbing light, and a display made of the LEDs can be programmed to respond to light signals in a number of ways.
A laser stylus writes on a small array of multifunction pixels made of dual-function LEDs that can both emit and respond to light.
Credit: Image courtesy of Moonsub Shim, University of Illinois

For example, a display could automatically adjust brightness in response to ambient light conditions -- on a pixel-by-pixel basis.
"You can imagine sitting outside with your tablet, reading. Your tablet will detect the brightness and adjust it for individual pixels," Shim said. "Where there's a shadow falling across the screen it will be dimmer, and where it's in the sun it will be brighter, so you can maintain steady contrast."
The researchers demonstrated pixels that automatically adjust brightness, as well as pixels that respond to an approaching finger, which could be integrated into interactive displays that respond to touchless gestures or recognize objects.
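The emit-detect multiplexing and per-pixel brightness control described above can be sketched in a toy simulation. The class, numbers, and interface here are illustrative assumptions, not the actual device driver:

```python
class DualFunctionLED:
    """Toy model of a nanorod LED pixel that alternates between emitting
    and detecting within each display frame (names and numbers are
    illustrative, not the real device interface)."""

    def __init__(self, ambient=0.0):
        self.ambient = ambient   # incident light seen during the detect phase
        self.brightness = 1.0    # commanded emission level

    def frame(self, detect_fraction=0.1):
        # Emit for most of the frame, detect for the remainder. Because the
        # switching is ~1000x faster than a ~60 Hz refresh, the eye sees a
        # continuously lit pixel.
        emitted = self.brightness * (1 - detect_fraction)
        sensed = self.ambient * detect_fraction
        return emitted, sensed

    def auto_adjust(self, target=0.5):
        # Per-pixel brightness control: brighten where ambient light is
        # strong, dim where it is weak, to hold contrast steady.
        _, sensed = self.frame()
        self.brightness = min(1.0, target + sensed)

pixel = DualFunctionLED(ambient=0.8)  # e.g. a pixel in direct sunlight
pixel.auto_adjust()
print(f"brightness after sensing: {pixel.brightness:.2f}")
```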
They also demonstrated arrays that respond to a laser stylus, which could be the basis of smart whiteboards, tablets or other surfaces for writing or drawing with light. And the researchers found that the LEDs not only respond to light, but can convert it to electricity as well.
"The way it responds to light is like a solar cell. So not only can we enhance interaction between users and devices or displays, now we can actually use the displays to harvest light," Shim said. "So imagine your cellphone just sitting there collecting the ambient light and charging. That's a possibility without having to integrate separate solar cells. We still have a lot of development to do before a display can be completely self-powered, but we think that we can boost the power-harvesting properties without compromising LED performance, so that a significant amount of the display's power is coming from the array itself."
In addition to interacting with users and their environment, nanorod LED displays can interact with each other as large parallel communication arrays. It would be slower than device-to-device technologies like Bluetooth, Shim said, but those technologies are serial -- they can only send one bit at a time. Two LED arrays facing each other could communicate with as many bits as there are pixels in the screen.
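The bandwidth argument can be made concrete with back-of-the-envelope arithmetic. The array size and refresh rate below are illustrative assumptions, not figures from the study:

```python
def parallel_link_bitrate(width, height, frames_per_s):
    """Raw bit rate of two LED arrays facing each other, at one bit per
    pixel per frame (idealized: no coding overhead, perfect alignment)."""
    return width * height * frames_per_s

# Even a small 100x100 array refreshed at 60 Hz moves 600,000 bits each
# second in parallel, one bit per pixel per frame.
rate = parallel_link_bitrate(100, 100, 60)
print(f"{rate:,} bits/s")
```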
"We primarily interface with our electronic devices through their displays, and a display's appeal resides in the user's experience of viewing and manipulating information," said study coauthor Peter Trefonas, a corporate fellow in Electronic Materials at the Dow Chemical Company. "The bidirectional capability of these new LED materials could enable devices to respond intelligently to external stimuli in new ways. The potential for touchless gesture control alone is intriguing, and we're only scratching the surface of what could be possible."
The researchers did all their demonstrations with arrays of red LEDs. They are now working on methods to pattern three-color displays with red, blue and green pixels, as well as working on ways to boost the light-harvesting capabilities by adjusting the composition of the nanorods.

Story Source:
Materials provided by University of Illinois at Urbana-Champaign. Note: Content may be edited for style and length.

Journal Reference:
  1. Nuri Oh et al. Double-heterojunction nanorod light-responsive LEDs for display applications. Science, 2017. DOI: 10.1126/science.aal2038

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new flow battery that stores energy in organic molecules dissolved in neutral pH water. This new chemistry allows for a non-toxic, non-corrosive battery with an exceptionally long lifetime and offers the potential to significantly decrease the costs of production.
The research, published in ACS Energy Letters, was led by Michael Aziz, the Gene and Tracy Sykes Professor of Materials and Energy Technologies and Roy Gordon, the Thomas Dudley Cabot Professor of Chemistry and Professor of Materials Science.
Flow batteries store energy in liquid solutions held in external tanks -- the bigger the tanks, the more energy they store. They are a promising storage solution for renewable, intermittent energy like wind and solar, but today's flow batteries often suffer degraded energy storage capacity after many charge-discharge cycles, requiring periodic maintenance of the electrolyte to restore the capacity.
By modifying the structures of molecules used in the positive and negative electrolyte solutions, and making them water soluble, the Harvard team was able to engineer a battery that loses only one percent of its capacity per 1000 cycles.
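That fade rate compounds gently over a battery's life. A quick sketch of what one percent capacity loss per 1000 cycles implies (modeling the fade as a constant per-cycle retention factor, which is our assumption, not the paper's):

```python
def capacity_after(cycles, loss_per_1000=0.01):
    """Remaining capacity fraction, treating the reported fade (one
    percent per 1000 cycles) as a constant per-cycle retention factor."""
    per_cycle = (1 - loss_per_1000) ** (1 / 1000)
    return per_cycle ** cycles

for n in (1000, 5000, 10000):
    print(f"after {n:>5} cycles: {capacity_after(n):.1%} capacity")
```

On this model, the battery would still hold about 90 percent of its capacity after 10,000 full cycles.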
Flow batteries are a promising storage solution for renewable energy sources like wind and solar.
Credit: © Soonthorn / Fotolia

"Lithium ion batteries don't even survive 1000 complete charge/discharge cycles," said Aziz.
"Because we were able to dissolve the electrolytes in neutral water, this is a long-lasting battery that you could put in your basement," said Gordon. "If it spilled on the floor, it wouldn't eat the concrete and since the medium is noncorrosive, you can use cheaper materials to build the components of the batteries, like the tanks and pumps."
This cost reduction is important. The Department of Energy (DOE) has set a goal of building a battery that can store energy for less than $100 per kilowatt-hour, which would make stored wind and solar energy competitive with energy produced by traditional power plants.
"If you can get anywhere near this cost target then you change the world," said Aziz. "It becomes cost effective to put batteries in so many places. This research puts us one step closer to reaching that target."
"This work on aqueous soluble organic electrolytes is of high significance in pointing the way towards future batteries with vastly improved cycle life and considerably lower cost," said Imre Gyuk, Director of Energy Storage Research at the Office of Electricity of the DOE. "I expect that efficient, long duration flow batteries will become standard as part of the infrastructure of the electric grid."
The key to designing the battery was to first figure out why previous molecules were degrading so quickly in neutral solutions, said Eugene Beh, a postdoctoral fellow and first author of the paper. By first identifying how the molecule viologen in the negative electrolyte was decomposing, Beh was able to modify its molecular structure to make it more resilient.
Next, the team turned to ferrocene, a molecule well known for its electrochemical properties, for the positive electrolyte.
"Ferrocene is great for storing charge but is completely insoluble in water," said Beh. "It has been used in other batteries with organic solvents, which are flammable and expensive."
But by functionalizing ferrocene molecules in the same way as with the viologen, the team was able to turn an insoluble molecule into a highly soluble one that could also be cycled stably.
"Aqueous soluble ferrocenes represent a whole new class of molecules for flow batteries," said Aziz.
The neutral pH should be especially helpful in lowering the cost of the ion-selective membrane that separates the two sides of the battery. Most flow batteries today use expensive polymers that can withstand the aggressive chemistry inside the battery. They can account for up to one third of the total cost of the device. With essentially salt water on both sides of the membrane, expensive polymers can be replaced by cheap hydrocarbons.
This research was coauthored by Diana De Porcellinis, Rebecca Gracia, and Kay Xia. It was supported by the Office of Electricity Delivery and Energy Reliability of the DOE and by the DOE's Advanced Research Projects Agency-Energy.
With assistance from Harvard's Office of Technology Development (OTD), the researchers are working with several companies to scale up the technology for industrial applications and to optimize the interactions between the membrane and the electrolyte. Harvard OTD has filed a portfolio of pending patents on innovations in flow battery technology.

Story Source:
Materials provided by Harvard John A. Paulson School of Engineering and Applied Sciences. Original written by Leah Burrows. Note: Content may be edited for style and length.

Journal Reference:
  1. Eugene S. Beh, Diana De Porcellinis, Rebecca Gracia, Kay Xia, Roy G. Gordon, Michael J. Aziz. A Neutral pH Aqueous Organic/Organometallic Redox Flow Battery with Extremely High Capacity Retention. ACS Energy Letters, 2017. DOI: 10.1021/acsenergylett.7b00019

About 4.6 billion years ago, an enormous cloud of hydrogen gas and dust collapsed under its own weight, eventually flattening into a disk called the solar nebula. Most of this interstellar material contracted at the disk's center to form the sun, and part of the solar nebula's remaining gas and dust condensed to form the planets and the rest of our solar system.
Now scientists from MIT and their colleagues have estimated the lifetime of the solar nebula -- a key stage during which much of the solar system evolution took shape.
This new estimate suggests that the gas giants Jupiter and Saturn must have formed within the first 4 million years of the solar system's formation. Furthermore, they must have completed gas-driven migration of their orbital positions by this time.
"So much happens right at the beginning of the solar system's history," says Benjamin Weiss, professor of earth, atmospheric, and planetary sciences at MIT. "Of course the planets evolve after that, but the large-scale structure of the solar system was essentially established in the first 4 million years."
Weiss and MIT postdoc Huapei Wang, the first author of this study, report their results today in the journal Science. Their co-authors are Brynna Downey, Clement Suavet, and Roger Fu from MIT; Xue-Ning Bai of the Harvard-Smithsonian Center for Astrophysics; Jun Wang and Jiajun Wang of Brookhaven National Laboratory; and Maria Zucolotto of the National Museum in Rio de Janeiro.
Spectacular recorders
By studying the magnetic orientations in pristine samples of ancient meteorites that formed 4.563 billion years ago, the team determined that the solar nebula lasted around 3 to 4 million years. This is a more precise figure than previous estimates, which placed the solar nebula's lifetime at somewhere between 1 and 10 million years.
Artist's concept of a planet in a nearby star's dusty, planet-forming disc.
Credit: NASA/JPL-Caltech

The team came to its conclusion after carefully analyzing angrites, which are some of the oldest and most pristine of planetary rocks. Angrites are igneous rocks, many of which are thought to have erupted onto the surface of asteroids very early in the solar system's history and then quickly cooled, freezing their original properties -- including their composition and paleomagnetic signals -- in place.
Scientists view angrites as exceptional recorders of the early solar system, particularly because the rocks also contain high amounts of uranium, which can be used to precisely determine their age.
"Angrites are really spectacular," Weiss says. "Many of them look like what might be erupting on Hawaii, but they cooled on a very early planetesimal."
Weiss and his colleagues analyzed four angrites that fell to Earth at different places and times.
"One fell in Argentina, and was discovered when a farm worker was tilling his field," Weiss says. "It looked like an Indian artifact or bowl, and the landowner kept it by his house for about 20 years, until he finally decided to have it analyzed, and it turned out to be a really rare meteorite."
The other three meteorites were discovered in Brazil, Antarctica, and the Sahara Desert. All four meteorites were remarkably well-preserved, having undergone no additional heating or major compositional changes since they originally formed.
Measuring tiny compasses
The team obtained samples from all four meteorites. By measuring the ratio of uranium to lead in each sample, previous studies had determined that the three oldest formed around 4.563 billion years ago. The researchers then measured the rocks' remnant magnetization using a precision magnetometer in the MIT Paleomagnetism Laboratory.
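The uranium-to-lead dating step rests on a standard radioactive decay relation. Here is a textbook sketch using the uranium-238 decay constant; the actual angrite ages come from more involved isotope systematics than this single-ratio calculation:

```python
import math

LAMBDA_U238 = 1.55125e-10  # decay constant of uranium-238, per year

def u_pb_age(pb206_u238_ratio):
    """Age in years from a radiogenic 206Pb/238U ratio, via the standard
    decay relation t = ln(1 + Pb/U) / lambda. A textbook sketch, not the
    full isochron method used for angrites."""
    return math.log(1 + pb206_u238_ratio) / LAMBDA_U238

# A radiogenic Pb/U ratio of ~1.03 corresponds to roughly 4.56 billion years.
age = u_pb_age(1.03)
print(f"{age / 1e9:.3f} billion years")
```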
"Electrons are little compass needles, and if you align a bunch of them in a rock, the rock becomes magnetized," Weiss explains. "Once they're aligned, which can happen when a rock cools in the presence of a magnetic field, then they stay that way. That's what we use as records of ancient magnetic fields."
When they placed the angrites in the magnetometer, the researchers observed very little remnant magnetization, indicating there was very little magnetic field present when the angrites formed.
The team went a step further and tried to reconstruct the magnetic field that would have produced the rocks' alignments, or lack thereof. To do so, they heated the samples up, then cooled them down again in a laboratory-controlled magnetic field.
"We can keep lowering the lab field and can reproduce what's in the sample," Weiss says. "We find only very weak lab fields are allowed, given how little remnant magnetization is in these three angrites."
Specifically, the team found that the angrites' remnant magnetization could have been produced by an extremely weak magnetic field of no more than 0.6 microteslas 4.563 billion years ago, or about 4 million years after the start of the solar system.
In 2014, Weiss' group analyzed other ancient meteorites that formed within the solar system's first 2 to 3 million years, and found evidence of a magnetic field that was about 10 to 100 times stronger -- about 5 to 50 microteslas.
"It's predicted that once the magnetic field drops by a factor of 10-100 in the inner solar system, which we've now shown, the solar nebula goes away really quickly, within 100,000 years," Weiss says. "So even if the solar nebula hadn't disappeared by 4 million years, it was basically on its way out."
The planets align
The researchers' new estimate is much more precise than previous estimates, which were based on observations of faraway stars.
"What's more, the angrites' paleomagnetism constrains the lifetime of our own solar nebula, while astronomical observations obviously measure other faraway solar systems," Wang adds. "Since the solar nebula lifetime critically affects the final positions of Jupiter and Saturn, it also affects the later formation of the Earth, our home, as well as the formation of other terrestrial planets."
Now that the scientists have a better idea of how long the solar nebula persisted, they can also narrow in on how giant planets such as Jupiter and Saturn formed. Giant planets are mostly made of gas and ice, and there are two prevailing hypotheses for how all this material came together as a planet. One suggests that giant planets formed from the gravitational collapse of condensing gas, like the sun did. The other suggests they arose in a two-stage process called core accretion, in which bits of material smashed and fused together to form bigger rocky, icy bodies. Once these bodies were massive enough, they could have created a gravitational force that attracted huge amounts of gas to ultimately form a giant planet.
According to previous predictions, giant planets that form through gravitational collapse of gas should complete their general formation within 100,000 years. Core accretion, in contrast, is typically thought to take much longer, on the order of 1 to several million years. Weiss says that if the solar nebula was around in the first 4 million years of solar system formation, this would give support to the core accretion scenario, which is generally favored among scientists.
"The gas giants must have formed by 4 million years after the formation of the solar system," Weiss says. "Planets were moving all over the place, in and out over large distances, and all this motion is thought to have been driven by gravitational forces from the gas. We're saying all this happened in the first 4 million years."
This research was supported, in part, by NASA and a generous gift from Thomas J. Peterson, Jr.

Story Source:
Materials provided by Massachusetts Institute of Technology. Original written by Jennifer Chu. Note: Content may be edited for style and length.

Journal Reference:
  1. Huapei Wang, Benjamin P. Weiss, Xue-Ning Bai, Brynna G. Downey, Jun Wang, Jiajun Wang, Clément Suavet, Roger R. Fu, Maria E. Zucolotto. Lifetime of the Solar Nebula Constrained by Meteorite Paleomagnetism. Science, 2017. DOI: 10.1126/science.aaf5043

Astronomy experiments could soon test an idea developed by Albert Einstein almost exactly a century ago, scientists say.
Tests using advanced technology could resolve a longstanding puzzle over what is driving the accelerated expansion of the Universe.
Researchers have long sought to determine what drives the Universe's accelerated expansion. Calculations in a new study could help to explain whether dark energy -- as required by Einstein's theory of general relativity -- or a revised theory of gravity is responsible.
Einstein's theory, which describes gravity as distortions of space and time, included a mathematical element known as a Cosmological Constant. Einstein originally introduced it to explain a static universe, but discarded his mathematical factor as a blunder after it was discovered that our Universe is expanding.
Research carried out two decades ago, however, showed that this expansion is accelerating, which suggests that Einstein's Constant may still have a part to play in accounting for dark energy. Without dark energy, the acceleration implies a failure of Einstein's theory of gravity across the largest distances in our Universe.
Andromeda Galaxy (stock image).
Credit: © passmil198216 / Fotolia
Scientists from the University of Edinburgh have discovered that the puzzle could be resolved by determining the speed of gravity in the cosmos from a study of gravitational waves -- space-time ripples propagating through the Universe.
The researchers' calculations show that if gravitational waves are found to travel at the speed of light, this would rule out alternative gravity theories with no dark energy, in support of Einstein's Cosmological Constant. If, however, their speed differs from that of light, then Einstein's theory must be revised.
Such an experiment could be carried out by the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US, whose twin detectors, 2000 miles apart, directly detected gravitational waves for the first time in 2015.
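The scale of such a timing test is easy to sketch. The baseline and speed deviation below are illustrative assumptions; the study's actual constraint combines gravitational-wave speed with large-scale structure data:

```python
C = 299_792_458.0  # speed of light, m/s

def arrival_delay(distance_m, gw_speed_fraction):
    """Difference between gravitational-wave and light travel time over a
    baseline, for a wave moving at the given fraction of c (illustrative)."""
    return distance_m / (gw_speed_fraction * C) - distance_m / C

# Over the ~2000-mile (~3200 km) baseline between LIGO's twin detectors,
# light takes about 10.7 ms; a wave traveling 1% slower than light would
# arrive about 0.1 ms later.
baseline = 3.2e6  # metres
print(f"light travel time: {baseline / C * 1e3:.1f} ms")
print(f"extra delay at 0.99c: {arrival_delay(baseline, 0.99) * 1e3:.3f} ms")
```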
Experiments at the facilities planned for this year could resolve the question in time for the 100th anniversary of Einstein's Constant.
The study, published in Physics Letters B, was supported by the UK Science and Technology Facilities Council, the Swiss National Science Foundation, and the Portuguese Foundation for Science and Technology.
Dr Lucas Lombriser, of the University of Edinburgh's School of Physics and Astronomy, said: "Recent direct gravitational wave detection has opened up a new observational window to our Universe. Our results give an impression of how this will guide us in solving one of the most fundamental problems in physics."

Story Source:
Materials provided by University of Edinburgh. Note: Content may be edited for style and length.

Journal Reference:
  1. Lucas Lombriser, Nelson A. Lima. Challenges to self-acceleration in modified gravity from gravitational waves and large-scale structure. Physics Letters B, 2017; 765: 382. DOI: 10.1016/j.physletb.2016.12.048