Climate change and air pollution will combine to curb food supplies

Many studies have shown the potential for global climate change to cut food supplies. But these studies have, for the most part, ignored the interactions between increasing temperature and air pollution — specifically ozone pollution, which is known to damage crops.

A new study involving researchers at MIT shows that these interactions can be quite significant, suggesting that policymakers need to take both warming and air pollution into account in addressing food security.

The study looked in detail at global production of four leading food crops — rice, wheat, corn, and soy — that account for more than half the calories humans consume worldwide. It predicts that effects will vary considerably from region to region, and that some of the crops are much more strongly affected by one or the other of the factors: For example, wheat is very sensitive to ozone exposure, while corn is much more adversely affected by heat.

The research was carried out by Colette Heald, an associate professor of civil and environmental engineering (CEE) at MIT, former CEE postdoc Amos Tai, and Maria van Martin at Colorado State University. Their work is described this week in the journal Nature Climate Change.

Heald explains that while it’s known that both higher temperatures and ozone pollution can damage plants and reduce crop yields, “nobody has looked at these together.” And while rising temperatures are widely discussed, the impact of air quality on crops is less recognized.

The effects are likely to vary widely by region, the study predicts. In the United States, tougher air-quality regulations are expected to lead to a sharp decline in ozone pollution, mitigating its impact on crops. But in other regions, the outcome “will depend on domestic air-pollution policies,” Heald says. “An air-quality cleanup would improve crop yields.”

Overall, with all other factors being equal, warming may reduce crop yields globally by about 10 percent by 2050, the study found. But the effects of ozone pollution are more complex — some crops are more strongly affected by it than others — which suggests that pollution-control measures could play a major role in determining outcomes.

Ozone pollution can also be tricky to identify, Heald says, because its damage can resemble other plant illnesses, producing flecks on leaves and discoloration.

Potential reductions in crop yields are worrisome: The world is expected to need about 50 percent more food by 2050, the authors say, due to population growth and changing dietary trends in the developing world. So any yield reductions come against a backdrop of an overall need to increase production significantly through improved crop selections and farming methods, as well as expansion of farmland.

While heat and ozone can each damage plants independently, the factors also interact. For example, warmer temperatures significantly increase production of ozone from the reactions, in sunlight, of volatile organic compounds and nitrogen oxides. …
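One simple way to see why the two stressors need to be considered jointly: if heat and ozone each knock a fraction off yields, the combined loss is not simply additive. The sketch below is purely illustrative (the ozone loss fraction is an assumed number, not a figure from the paper) and treats the two effects as independent multiplicative factors, which the study itself shows is not always the case, precisely because the stressors interact.

```python
# Illustrative only: combining two independent fractional yield losses.
# The ozone loss value below is assumed for demonstration, not from the study.

heat_loss = 0.10   # ~10% global yield reduction from warming by 2050 (from the article)
ozone_loss = 0.05  # hypothetical ozone-driven loss for some crop/region

# If the effects were independent, surviving yield fractions would multiply:
combined = 1 - (1 - heat_loss) * (1 - ozone_loss)
print(f"Combined loss if independent: {combined:.1%}")  # 14.5%, not 15%

# An interaction term (e.g., heat boosting ozone formation) would push the
# combined loss above this independent baseline.
```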

Read more

A more potent greenhouse gas than carbon dioxide, methane emissions will leap as Earth warms

While carbon dioxide is typically painted as the bad boy of greenhouse gases, methane is roughly 30 times more potent as a heat-trapping gas. New research in the journal Nature indicates that for each degree that Earth’s temperature rises, the amount of methane entering the atmosphere from microorganisms dwelling in lake sediment and freshwater wetlands — the primary sources of the gas — will increase several times. As temperatures rise, the relative increase of methane emissions will outpace that of carbon dioxide from these sources, the researchers report.

The findings condense the complex and varied process by which methane — currently the third most prevalent greenhouse gas after carbon dioxide and water vapor — enters the atmosphere into a measurement scientists can use, explained co-author Cristian Gudasz, a visiting postdoctoral research associate in Princeton’s Department of Ecology and Evolutionary Biology. In freshwater systems, methane is produced as microorganisms digest organic matter, a process known as “methanogenesis.” This process hinges on a slew of temperature, chemical, physical and ecological factors that can bedevil scientists working to model how Earth’s systems will contribute, and respond, to a hotter future.

The researchers’ findings suggest that methane emissions from freshwater systems will likely rise with the global temperature, Gudasz said. But not knowing the extent of the methane contribution from such a widely dispersed group of ecosystems — including lakes, swamps, marshes and rice paddies — leaves a glaring hole in climate projections.

“The freshwater systems we talk about in our paper are an important component to the climate system,” Gudasz said. “There is more and more evidence that they have a contribution to the methane emissions. Methane produced from natural or humanmade freshwater systems will increase with temperature.”

To provide a simple and accurate way for climate modelers to account for methanogenesis, Gudasz and his co-authors analyzed nearly 1,600 measurements of temperature and methane emissions from 127 freshwater ecosystems across the globe. The researchers found that a common effect emerged from those studies: freshwater methane generation very much thrives on high temperatures. Methane emissions would be 57 times higher at 30 degrees Celsius than at 0 degrees Celsius, the researchers report. For those inclined to model it, the researchers’ results translated to a temperature dependence of 0.96 electron volts (eV), an indication of the temperature sensitivity of the methane-emitting ecosystems.

“We all want to make predictions about greenhouse gas emissions and their impact on global warming,” Gudasz said. “Looking across these scales and constraining them as we have in this paper will allow us to make better predictions.”

Story Source: The above story is based on materials provided by Princeton University. …
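The reported temperature dependence of 0.96 eV is a Boltzmann (Arrhenius-type) activation energy, and it is enough to reproduce the 57-fold figure quoted above. A minimal check, assuming only the standard Boltzmann scaling exp(−E/kT) between the two temperatures:

```python
import math

k = 8.617e-5     # Boltzmann constant in eV/K
E = 0.96         # reported temperature dependence, eV
T_cold = 273.15  # 0 degrees Celsius, in kelvin
T_warm = 303.15  # 30 degrees Celsius, in kelvin

# Ratio of Boltzmann factors exp(-E/kT) between the two temperatures:
ratio = math.exp(E / k * (1 / T_cold - 1 / T_warm))
print(f"Emission ratio, 30 C vs 0 C: {ratio:.0f}x")  # ~57x
```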

Read more

Public smoking bans linked with rapid fall in preterm births, child hospital visits for asthma

The introduction of laws banning smoking in public places and workplaces in North America and Europe has been quickly followed by large drops in rates of preterm births and children attending hospital for asthma, according to the first systematic review and meta-analysis examining the effect of smoke-free legislation on child health, published in The Lancet.

The analysis of 11 studies done in North America and Europe, involving more than 2.5 million births and nearly 250,000 asthma exacerbations, showed that rates of both preterm births and hospital attendance for asthma were reduced by 10% within a year of smoke-free laws coming into effect.

Currently only 16% of the world’s population is covered by comprehensive smoke-free laws, and 40% of children worldwide are regularly exposed to second-hand smoke. To date, most studies have looked at the impact of smoking bans on adult outcomes, but children account for more than a quarter of all deaths and over half of all healthy years of life lost due to exposure to second-hand smoke.

After searching systematically for both published and unpublished studies over 38 years (1975-2013) reporting on the impact of public smoking restrictions on health outcomes in children aged 12 years or younger, Dr Jasper Been from the Maastricht University Medical Centre, in the Netherlands, and colleagues identified 11 suitable studies — five North American studies describing local bans and six European studies looking at national bans.

“Our research found significant reductions in preterm birth and severe asthma attacks in childhood, as well as a 5% decline in children being born very small for gestational age after the introduction of smoke-free laws,” says Dr Been. “Together with the known health benefits in adults, our study provides clear evidence that smoking bans have considerable public health benefits for perinatal and child health, and provides strong support for WHO recommendations to create smoke-free public environments on a national level.”

“This research has demonstrated the very considerable potential that smoke-free legislation offers to reduce preterm births and childhood asthma attacks,” says study co-author Professor Aziz Sheikh, of Brigham and Women’s Hospital, USA, and the University of Edinburgh, UK. “The many countries that are yet to enforce smoke-free legislation should in the light of these findings reconsider their positions on this important health policy question.”

Writing in a linked Comment, Sara Kalkhoran and Stanton Glantz from the University of California San Francisco in the USA point out that, “Medical expenses for asthma exceeded US$50 billion in the USA in 2007, and US$20 billion in Europe in 2006. If asthma emergency department visits and admissions to hospital decreased by even 10%, the savings in the USA and Europe together would be US$7 billion annually.”

They conclude, “The cigarette companies, their allies, and the groups they sponsor have long used claims of economic harm, particularly to restaurants, bars, and casinos, to oppose smoke-free laws despite consistent evidence to the contrary. By contrast, the rapid economic benefits that smoke-free laws and other tobacco control policies bring in terms of reduced medical costs are real. Rarely can such a simple intervention improve health and reduce medical costs so swiftly and substantially.”

Story Source: The above story is based on materials provided by The Lancet. Note: Materials may be edited for content and length.

Read more

Cereal flake size influences calorie intake

People eat more breakfast cereal, by weight, when flake size is reduced, according to Penn State researchers, who showed that when flakes are reduced by crushing, people pour a smaller volume of cereal into their bowls, but still take a greater amount by weight and calories.

“People have a really hard time judging appropriate portions,” said Barbara Rolls, professor of nutritional sciences and Helen A. Guthrie Chair in Nutrition. “On top of that you have these huge variations in volume that are due to the physical characteristics of foods, such as the size of individual pieces, aeration and how things pile up in a bowl. That adds another dimension to the difficulty of knowing how much to take and eat.”

According to Rolls, national dietary guidelines define recommended amounts of most food groups in terms of measures of volume such as cups.

“This can be a problem because, for most foods, the recommended amounts have not been adjusted for variations in physical properties that affect volume, such as aeration, cooking, and the size and shape of individual pieces,” Rolls said. “The food weight and energy required to fill a given volume can vary, and this variation in the energy content of recommended amounts could be a challenge to the maintenance of energy balance.”

The researchers tested the influence of food volume on calorie intake by systematically reducing the flake size of a breakfast cereal with a rolling pin so that the cereal was more compact and the same weight filled a smaller volume. In a crossover design, the team recruited 41 adults to eat cereal for breakfast once a week for four weeks. The cereal was either standard wheat flakes or the same cereal crushed to reduce the volume to 80 percent, 60 percent or 40 percent of the standard. The researchers provided a constant weight of cereal in an opaque container and participants poured the amount they wanted into a bowl, added fat-free milk and non-calorie sweetener as desired and consumed as much as they wanted.

The researchers reported their results in the current issue of the Journal of the Academy of Nutrition and Dietetics.

The research showed that as flake size was reduced, subjects poured a smaller volume of cereal, but still took a significantly greater amount by weight and energy content. Despite these differences, subjects estimated that they had taken a similar number of calories of all versions of the cereal. They ate most of the cereal they took, so as flake size was reduced, breakfast energy intake increased.

“When faced with decreasing volumes of cereal, the people took less cereal,” Rolls said. …
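The mechanism is easy to put in numbers: crushing raises the cereal’s bulk density, so the same poured volume weighs more and carries more calories. The sketch below uses assumed values (a hypothetical 30 g standard portion, a partial-compensation pour, and a typical energy density of roughly 380 kcal per 100 g, none taken from the study) just to show the direction and rough size of the effect.

```python
# Hypothetical illustration of the bulk-density effect; values are assumed,
# not taken from the Penn State study.

KCAL_PER_GRAM = 3.8          # typical wheat-flake energy density (~380 kcal/100 g)
standard_bowl_grams = 30.0   # assumed weight of a standard-flake portion

# Crushing to 60% of the original volume means the same volume holds
# 1/0.6 times the weight. Suppose people compensate only partially,
# pouring, say, 80% of their usual volume:
volume_fraction = 0.60
poured_volume_ratio = 0.80

grams_taken = standard_bowl_grams * poured_volume_ratio / volume_fraction
print(f"Weight taken: {grams_taken:.0f} g vs {standard_bowl_grams:.0f} g standard")
print(f"Calories: {grams_taken * KCAL_PER_GRAM:.0f} vs "
      f"{standard_bowl_grams * KCAL_PER_GRAM:.0f} kcal")
# Less volume poured, yet ~33% more weight and calories taken.
```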

Read more

Guns Loom Large in Childhood Death Statistics

You can’t go more than a couple of months without seeing another news headline about a school shooting, or a shooting incident involving a child. While these stories are shocking, school shootings account for only a small number of the gun-related injuries and fatalities that children suffer every year as a result of gunshots. In fact, most gun injuries happen in the home and at the hands of other children who had no intention of hurting anybody.Children and Gun DeathsAccording to a recent study presented to a conference of the American Academy of Pediatrics, over 500 children die every year from gunshot wounds. That number represents a 60 percent increase in a single decade. Handguns, by far, account for the most injuries and deaths. Over 80 percent of all children who are injured by firearms suffer injuries inflicted by handguns.The study looked at data compiled between 1997 and 2009. In 1997, 4,270 children under the age of 20 suffered a gunshot injury. By 2009, that number increase to 7,730, a jump of about 55 percent. Further, 317 children died of gunshot injuries in 1997, while 503 died of such injuries in 2009.Disproportionate DangerOther studies have shown that gunshots pose a disproportionately high fatality risk to children. Even though gunshot wounds account for only 1% of the total number of injuries children suffer each year, they account for 21% of deaths that result from childhood injury.When a child is shot, that child has a 32% chance of requiring major surgery. …
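The percentage figures follow directly from the counts quoted above, and are worth checking; a quick computation using only those numbers:

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(f"Injuries 1997->2009: {pct_change(4270, 7730):.0f}%")  # ~81% increase
print(f"Deaths   1997->2009: {pct_change(317, 503):.0f}%")    # ~59%, i.e. ~60%
```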

Read more

Web Tool Successfully Measures Farms’ Water Footprint

A new University of Florida web-based tool worked well during its trial run to measure water consumption at farms in four Southern states, according to a study published this month.

The system measures the so-called “water footprint” of a farm. In the broader sense, water footprints account for the amount of water used to grow or create almost everything we eat, drink, wear or otherwise use.

Researchers at UF’s Institute of Food and Agricultural Sciences introduced their WaterFootprint tool in the March issue of the journal Agricultural Systems, after using it to calculate water consumption at farms in Florida, Georgia, Alabama and Texas. The WaterFootprint is part of the AgroClimate system, developed by Clyde Fraisse, a UF associate professor of agricultural and biological engineering. AgroClimate is a web resource, aimed primarily at agricultural producers, that includes interactive tools and data for reducing agricultural risks.

WaterFootprint, developed primarily by Daniel Dourte, a research associate in agricultural and biological engineering, estimates water use in crop production across the U.S. WaterFootprint looks at a farm in a specific year or growing season and gives you its water footprint, Dourte said. With UF’s WaterFootprint system, users provide their location by ZIP code, the crop, planting and harvesting dates, yield, soil type, tillage and water management.

The tool also retrieves historical weather data and uses it to estimate the blue and green water footprints of crop production, Dourte said. Water footprints separate water use into green, which is rainfall; blue, from a freshwater resource; and gray, an accounting of water quality, after it’s been polluted.

Water footprints can be viewed at the farm level or globally. For instance, if irrigation water is used to grow crops, it is essentially exported, Dourte said. Once products are shipped overseas, the water used to grow the commodity goes with it, and it may not return for a long time — if ever, Dourte said. That’s a problem if the crop is grown in a region where water is scarce, he said.

But there’s often a tradeoff, he said. Global food trade saves billions of gallons of water each year, as food is exported from humid, temperate places to drier locales that would have used much more water to grow crops, Dourte said.

“The U.S. …
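To make the green/blue/gray split concrete, here is a minimal sketch of the kind of per-crop record such a tool might report; the components follow the definitions above, but the numbers and units are invented for illustration and are not outputs of UF’s WaterFootprint tool.

```python
# Hypothetical farm-level water footprint record; all numbers invented.
footprint_m3_per_tonne = {
    "green": 700,  # rainwater consumed by the crop
    "blue": 350,   # irrigation drawn from a freshwater resource
    "gray": 150,   # freshwater needed to assimilate pollution from production
}

total = sum(footprint_m3_per_tonne.values())
for component, volume in footprint_m3_per_tonne.items():
    print(f"{component:>5}: {volume:>4} m3/t ({volume / total:.0%})")
print(f"total: {total} m3/t")
```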

Read more

Global food trade can alleviate water scarcity

International trade of food crops led to freshwater savings worth 2.4 billion US dollars in 2005 and had a major impact on local water stress. This is shown in a new study by the Potsdam Institute for Climate Impact Research. Trading food involves the trade of virtually embedded water used for production, and the amount of that water depends heavily on the climatic conditions in the production region: It takes, for instance, 2,700 liters of water to produce 1 kilo of cereals in Morocco, while the same kilo produced in Germany uses up only 520 liters. Analyzing the impact of trade on local water scarcity, the scientists found that it is not the amount of water used that counts most, but the origin of the water. While parts of India or the Middle East alleviate their water scarcity through importing crops, some countries in Southern Europe export agricultural goods from water-scarce sites, thus increasing local water stress.

“Agriculture accounts for 70 percent of our global freshwater consumption and therefore has a huge potential to affect local water scarcity,” lead author Anne Biewald says. The amount of water used in the production of agricultural export goods is referred to as virtual water trade. So far, however, the concept of virtual water could not identify the regional water source, but used national or even global averages instead. “Our analysis shows that it is not the amount of water that matters, but whether global food trade leads to conserving or depleting water reserves in water-scarce regions,” Biewald says.

Combining biophysical simulations of the virtual water content of crop production with agro-economic land-use and water-use simulations, the scientists were able for the first time to determine the positive and negative impacts on water scarcity through international trade of crops, livestock and feed. The effects were analyzed with high resolution on a subnational level to account for large countries like India or the US, whose different climatic zones mean widely varying local conditions of water availability and water productivity. Previously, these countries could only be evaluated through national average water productivity. …
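The Morocco/Germany figures above already show how the accounting works: shipping a tonne of German cereals to Morocco displaces production that would have consumed far more local water. A quick calculation with just those two numbers (the trade volume is an assumed round figure):

```python
# Virtual water saved by trading cereals from a water-efficient producer
# to a water-stressed one, using the per-kilo figures quoted above.
water_morocco = 2700   # liters per kg of cereals grown in Morocco
water_germany = 520    # liters per kg of cereals grown in Germany

tonnes_traded = 1000   # assumed shipment size for illustration
kg = tonnes_traded * 1000

saved_liters = (water_morocco - water_germany) * kg
print(f"Virtual water saved: {saved_liters / 1e6:.0f} million liters")  # 2180
# The study's point: the saving only eases water stress if the displaced
# production would have drawn on water-scarce sources.
```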

Read more

Genes play key role in parenting: Children also shape parents’ behavior

Scientists have presented the most conclusive evidence yet that genes play a significant role in parenting. A study by two Michigan State University psychologists refutes the popular theory that how adults parent their children is strictly a function of the way they were themselves parented when they were children.

While environmental factors do play a role in parenting, so do a person’s genes, said S. Alexandra Burt, associate professor of psychology and co-author of a study led by doctoral student Ashlea M. Klahr.

“The way we parent is not solely a function of the way we were parented as children,” Burt said. “There also appears to be genetic influences on parenting.”

Klahr and Burt conducted a statistical analysis of 56 scientific studies from around the world on the origins of parenting behavior, including some of their own. The comprehensive analysis, involving more than 20,000 families from Australia to Japan to the United States, found that genetic influences in the parents account for 23 percent to 40 percent of parental warmth, control and negativity towards their children.

“What’s still not clear, however, is whether genes directly influence parenting or do so indirectly, through parent personality for example,” Klahr said.

The study sheds light on another misconception: that parenting is solely a top-down process from parent to child. While parents certainly seem to shape child behavior, parenting also is influenced by the child’s behavior — in other words, parenting is both a cause and a consequence of child behavior.

“One of the most consistent and striking findings to emerge from this study was the important role that children’s characteristics play in shaping all aspects of parenting,” the authors write.

Ultimately, parenting styles stem from many factors. “Parents have their own experiences when they were children, their own personalities, their own genes. On top of that, they are also responding to their child’s behaviors and stage of development,” Burt said. “Basically, there are a lot of influences happening simultaneously. Long story short, though, we need to be sensitive to the fact that this is a two-way process between parent and child that is both environmental and genetic.”

The study is published in Psychological Bulletin, a research journal of the American Psychological Association.

Story Source: The above story is based on materials provided by Michigan State University. Note: Materials may be edited for content and length.

Read more

Climate change will reduce crop yields sooner than thought

A study led by the University of Leeds has shown that global warming of only 2°C will be detrimental to crops in temperate and tropical regions, with reduced yields from the 2030s onwards.

Professor Andy Challinor, from the School of Earth and Environment at the University of Leeds and lead author of the study, said: “Our research shows that crop yields will be negatively affected by climate change much earlier than expected. Furthermore, the impact of climate change on crops will vary both from year-to-year and from place-to-place — with the variability becoming greater as the weather becomes increasingly erratic.”

The study, published today by the journal Nature Climate Change, feeds directly into the Working Group II report of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report, which is due to be published at the end of March 2014.

In the study, the researchers created a new data set by combining and comparing results from 1,700 published assessments of the response that climate change will have on the yields of rice, maize and wheat. Due to increased interest in climate change research, the new study was able to create the largest dataset to date on crop responses, with more than double the number of studies that were available for researchers to analyze for the IPCC Fourth Assessment Report in 2007.

In the Fourth Assessment Report, scientists had reported that regions of the world with temperate climates, such as Europe and most of North America, could withstand a couple of degrees of warming without a noticeable effect on harvests, or possibly even benefit from a bumper crop.

“As more data have become available, we’ve seen a shift in consensus, telling us that the impacts of climate change in temperate regions will happen sooner rather than later,” said Professor Challinor.

The researchers state that we will see, on average, an increasingly negative impact of climate change on crop yields from the 2030s onwards. The impact will be greatest in the second half of the century, when decreases of over 25% will become increasingly common. These statistics already account for minor adaptation techniques employed by farmers to mitigate the effects of climate change, such as small adjustments in the crop variety and planting date. Later in the century, greater agricultural transformations and innovations will be needed in order to safeguard crop yields for future generations.

“Climate change means a less predictable harvest, with different countries winning and losing in different years. The overall picture remains negative, and we are now starting to see how research can support adaptation by avoiding the worst impacts,” concludes Professor Challinor.

Story Source: The above story is based on materials provided by University of Leeds. Note: Materials may be edited for content and length.

Read more

Boosting self-esteem prevents health problems for seniors

The importance of boosting self-esteem is normally associated with the trials and tribulations of adolescence. But new research from Concordia University shows that it’s even more important for older adults to maintain and improve upon those confidence levels as they enter their twilight years. That’s because boosting self-esteem can help buffer potential health threats typically associated with the transition into older adulthood.

A new study published in the journal Psychoneuroendocrinology, led by psychology researchers Sarah Liu and Carsten Wrosch from Concordia University’s Centre for Research in Human Development, found that boosting self-esteem can buffer potential health threats in seniors.

While previous research focused on self-esteem levels, Liu and Wrosch examined changes to self-esteem within each individual over time. They found that if an individual’s self-esteem decreased, the stress hormone cortisol increased — and vice versa. This association was particularly strong for participants who already had a history of stress or depression.

The research team met with 147 adults aged 60 and over to measure their cortisol levels, self-esteem, stress, and symptoms of depression every 24 months over four years. Self-esteem was measured through standard questions, such as whether the participant felt worthless. The study also took into account personal and health factors like economic status, whether the participant was married or single, and mortality risk.

Results showed that maintaining or even improving self-esteem could help prevent health problems. “Because self-esteem is associated with psychological wellbeing and physical health, raising self-esteem would be an ideal way to help prevent health problems later in life,” says Liu.

Telling an older adult to go out and make more friends, or simply to enhance their feelings of self-worth, is easier said than done, Liu acknowledges, but from a practical standpoint such steps do improve self-esteem. “Improving self-esteem provides real health benefits in seniors,” says Liu. “The ultimate solution may be to prevent self-esteem from declining.”

While this study looked at cortisol levels, Liu says future research could examine immune function to further illuminate how increases in self-esteem can contribute to patterns of healthy aging.

Story Source: The above story is based on materials provided by Concordia University. Note: Materials may be edited for content and length.

Read more

Controlling zebra chip disease from the inside out

Zebra chip disease in potatoes is currently being managed by controlling the potato psyllid with insecticides. But one Texas A&M AgriLife Extension Service specialist is trying to manage the disease symptoms with alternative methods and chemistries.

The disease is caused by a bacterium, Candidatus Liberibacter solanacearum, which is transmitted by the psyllid, said Dr. Ron French, AgriLife Extension plant pathologist in Amarillo.

“Biological control methods can target psyllid populations in a field, but it takes a while for them to be effective, and by then the insect has already transmitted the bacterium into the plant, especially if that psyllid flew into the field. It only takes a few hours for a psyllid to acquire and transmit the bacterium from plant to plant,” French said.

French is conducting his studies using alternative controls as a part of the U.S. Department of Agriculture-National Institute of Food and Agriculture-sponsored Zebra Chip Specialty Crop Research Initiative.

“We are looking at three different approaches: bactericides, plant defense response and plant nutrients,” he said. “We are trying to alleviate the disease symptoms on tubers and throughout the plant, and improve plant health so that any negative impacts the psyllid, bacterium, disease or pesticide use are having on the plant can translate into improved yields.”

His efforts to control the pathogen using foliar applications of a bactericide have had good results for two years when psyllid populations in the field and the instances of zebra chip were significant, French said. A significant yield increase of 30 percent was recorded.

But French said the problem is the next step — getting bactericides labeled for use on potatoes. “Bactericides for potatoes are labeled only for seed treatments, although foliar applications in the field are allowed on some tree fruit crops. If we can include bactericides in a program that can minimize insecticide use, then this could be part of an integrated disease management approach,” he said.

In his approach to the plant defense response, French said he is trying to produce something like a systemic acquired resistance or induced systemic resistance response from the potato against the pathogen. “To do that, we hope to use several compounds to see if the plant can actually trigger a mechanism to defend itself from the pathogen and the psyllid as well,” he said.

“We hope to be able to do laboratory studies to determine if these systemic acquired resistance compounds work, and if so, why are they working?” French said. “Year after year there are differences in the field as far as climate, disease pressure, insect pressure — so sometimes we have to go to the lab to figure out why it works one time and not another.”

The third and last approach he is studying is using plant nutrients to offset the damage caused by the psyllid or the pathogen and any nutrient imbalances that result, or any phytotoxicity that might occur after applying pesticides, French said. “We are adding micro- and macro-nutrients and other fertilizers,” he said. A macro-nutrient is something the plant readily needs like nitrogen and phosphorus, and a micro-nutrient is something the plant needs in small amounts, like zinc or boron, for plant functions.

“In the past two years we actually had very good results with a combination of micro- and macro-nutrients that were applied bi-weekly after flowering on the potato,” French said. …

Read more

U.S. cocaine use cut by half, while marijuana consumption jumps

The use of cocaine dropped sharply across the United States from 2006 to 2010, while the amount of marijuana consumed increased significantly during the same period, according to a new report.

Studying illegal drug use nationally from 2000 to 2010, researchers found the amount of marijuana consumed by Americans increased by more than 30 percent from 2006 to 2010, while cocaine consumption fell by about half. Meanwhile, heroin use was fairly stable throughout the decade. Methamphetamine consumption dramatically increased during the first half of the decade and then declined, but researchers did not have enough information to make a credible estimate of the drug’s use from 2008 to 2010.

The findings come from a report compiled for the White House Office of National Drug Control Policy by researchers affiliated with the RAND Drug Policy Research Center.

“Having credible estimates of the number of heavy drug users and how much they spend is critical for evaluating policies, making decisions about treatment funding and understanding the drug revenues going to criminal organizations,” said Beau Kilmer, the study’s lead author and co-director of the RAND Drug Policy Research Center. “This work synthesizes information from many sources to present the best estimates to date for illicit drug consumption and spending in the United States.”

Because the project only generated estimates through 2010, researchers say the report does not address the recent reported spike in heroin use or the consequences of marijuana legalization in Colorado and Washington. The report also does not try to explain the causes behind changes in drug use or evaluate the effectiveness of drug control strategies.

The study, published on the website of the Office of National Drug Control Policy, provides estimates of the amount of cocaine, heroin, marijuana and methamphetamine used each year from 2000 to 2010. The study includes estimates of retail spending on illicit drugs and the number of chronic users, who account for a majority of drug consumption.

Researchers say that drug users in the United States spent on the order of $100 billion annually on cocaine, heroin, marijuana and methamphetamine throughout the decade. While the amount remained stable from 2000 to 2010, the spending shifted: much more was spent on cocaine than on marijuana in 2000, but the opposite was true by 2010.

“Our analysis shows that Americans likely spent more than one trillion dollars on cocaine, heroin, marijuana and methamphetamine between 2000 and 2010,” Kilmer said.

The surge in marijuana use appears to be related to an increase in the number of people who reported using the drug on a daily or near-daily basis.

The estimates for marijuana are rooted in the National Survey on Drug Use and Health, which surveys nearly 70,000 individuals each year. Estimates for cocaine, heroin and methamphetamine are largely based on information from the Arrestee Drug Abuse Monitoring Program, or ADAM. The final estimates also incorporated information from other data sources. However, since the federal government recently halted funding for ADAM, researchers say it will be considerably harder to track the abuse of cocaine, heroin, and methamphetamine in the future.

“The ADAM program provided unique insights about those who abused hard drugs and how much they spent on these substances,” said Jonathan Caulkins, a study co-author and the Stever Professor of Operations Research and Public Policy at Carnegie Mellon University. “It’s a tragedy that 2013 was the last year for ADAM. …

Read more

Large mammals were the architects in prehistoric ecosystems

Researchers from Denmark demonstrate in a study that the large grazers and browsers of the past created a mosaic of varied landscapes consisting of closed and semi-closed forests and parkland. The study is published March 3, 2014 in the Proceedings of the National Academy of Sciences.

Dung beetles recount the nature of the past

The biologists behind the new research findings synthesized decades of studies on fossil beetles, focusing on beetles associated with the dung of large animals in the past or with woodlands and trees. Their findings reveal that dung beetles were much more frequent in the previous interglacial period (from 132,000 to 110,000 years ago) compared with the early Holocene (the present interglacial period, before agriculture, from 10,000 to 5,000 years ago).

“One of the surprising results is that woodland beetles were much less dominant in the previous interglacial period than in the early Holocene, which shows that temperate ecosystems consisted not just of dense forest as often assumed, but rather a mosaic of forest and parkland,” says postdoctoral fellow Chris Sandom.

“Large animals in high numbers were an integral part of nature in prehistoric times. The composition of the beetles in the fossil sites tells us that the proportion and number of the wild large animals declined after the appearance of modern man. As a result of this, the countryside developed into predominantly dense forest that was first cleared when humans began to use the land for agriculture,” explains Professor Jens-Christian Svenning.

Bring back the large animals to Europe

If people want to restore self-managing varied landscapes, they can draw on the knowledge provided by the new study about the composition of natural ecosystems in the past.

“An important way to create more self-managing ecosystems with a high level of biodiversity is to make room for large herbivores in the European landscape — and possibly reintroduce animals such as wild cattle, bison and even elephants. They would create and maintain a varied vegetation in temperate ecosystems, and thereby ensure the basis for a high level of biodiversity,” says senior scientist Rasmus Ejrnæs.

The study received financial support from the 15 June Foundation and a grant from the European Research Council. To a large extent, it supports the idea that the rewilding-based approach to nature management should be incorporated to a far greater degree in nature policy in Europe — especially in the case of national parks and other large natural areas.

Story Source: The above story is based on materials provided by Aarhus University. Note: Materials may be edited for content and length.

Read more

Passive smoking causes irreversible damage to children’s arteries

Exposure to passive smoking in childhood causes irreversible damage to the structure of children’s arteries, according to a study published online today in the European Heart Journal. The thickening of the arteries’ walls associated with exposure to parents’ smoke means that these children will be at greater risk of heart attacks and strokes in later life. The researchers from Tasmania, Australia and Finland say that exposure to both parents smoking in childhood adds an extra 3.3 years to the age of blood vessels when the children reach adulthood.

The study is the first to follow children through to adulthood in order to examine the association between exposure to parental smoking and increased carotid intima-media thickness (IMT) — a measurement of the thickness of the innermost two layers of the arterial wall — in adulthood. It adds further strength to the arguments for banning smoking in areas where children may be present, such as cars.

The study was made up of 2,401 participants in the Cardiovascular Risk in Young Finns Study, which started in 1980, and 1,375 participants in the Childhood Determinants of Adult Health study, which started in 1985 in Australia. The children were aged between three and 18 at the start of the studies. The researchers asked questions about the parents’ smoking habits, and they used ultrasound to measure the thickness of the children’s artery walls once they had reached adulthood.

The researchers found that carotid IMT in adulthood was 0.015 mm thicker in those exposed to both parents smoking than in those whose parents did not smoke, increasing from an average of 0.637 mm to 0.652 mm.

“Our study shows that exposure to passive smoke in childhood causes a direct and irreversible damage to the structure of the arteries. Parents, or even those thinking about becoming parents, should quit smoking. This will not only restore their own health but also protect the health of their children into the future,” said Dr Seana Gall, a research fellow in cardiovascular epidemiology at the Menzies Research Institute Tasmania and the University of Tasmania.

“While the differences in artery thickness are modest, it is important to consider that they represent the independent effect of a single measure of exposure — that is, whether or not the parents smoked at the start of the studies — some 20 years earlier in a group already at greater risk of heart disease. For example, those with both parents smoking were more likely, as adults, to be smokers or overweight than those whose parents didn’t smoke.”

The results took account of other factors that could explain the association, such as education, the children’s smoking habits, physical activity, body mass index, alcohol consumption and biological cardiovascular risk factors such as blood pressure and cholesterol levels in adulthood.

Interestingly, the study did not show an effect if only one parent smoked. “We think that the effect was only apparent with both parents smoking because of the greater overall dose of smoke these children were exposed to,” said Dr Gall. “We can speculate that the smoking behaviour of someone in a house with a single adult smoking is different. …
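The “3.3 years of extra vascular age” can be reproduced from the numbers in the article if one assumes the authors divided the 0.015 mm IMT difference by a typical adult IMT progression rate; the rate below is my assumption, chosen to match and in line with commonly cited progression rates, not a figure stated in the article.

```python
# Back-of-envelope reconstruction of the "vascular age" figure.
# The progression rate is assumed, not stated in the article.
imt_difference_mm = 0.652 - 0.637         # 0.015 mm, from the study
assumed_progression_mm_per_year = 0.0045  # typical adult IMT progression (assumption)

extra_vascular_years = imt_difference_mm / assumed_progression_mm_per_year
print(f"Extra vascular age: {extra_vascular_years:.1f} years")  # ~3.3
```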

Read more

New fast and furious black hole found

A team of Australian and American astronomers have been studying nearby galaxy M83 and have found a new superpowered small black hole, named MQ1, the first object of its kind to be studied in this much detail. Astronomers have found a few compact objects that are as powerful as MQ1, but have not been able to work out the size of the black hole contained within them until now.

The team observed the MQ1 system with multiple telescopes and discovered that it is a standard-sized small black hole, rather than a slightly bigger version that was theorised to account for all its power.

Curtin University senior research fellow Dr Roberto Soria, who is part of the International Centre for Radio Astronomy Research (ICRAR) and led the team investigating MQ1, said it was important to understand how stars were formed, how they evolved and how they died, within a spiral shaped galaxy like M83.

“MQ1 is classed as a microquasar — a black hole surrounded by a bubble of hot gas, which is heated by two jets just outside the black hole, powerfully shooting out energy in opposite directions, acting like cosmic sandblasters pushing out on the surrounding gas,” Dr Soria said. “The significance of the huge jet power measured for MQ1 goes beyond this particular galaxy: it helps astronomers understand and quantify the strong effect that black hole jets have on the surrounding gas, which gets heated and swept away. This must have been a significant factor in the early stages of galaxy evolution, 12 billion years ago, because we have evidence that powerful black holes like MQ1, which are rare today, were much more common at the time. By studying microquasars such as MQ1, we get a glimpse of how the early universe evolved, how fast quasars grew and how much energy black holes provided to their environment.”

As a comparison, the most powerful microquasar in our galaxy, known as SS433, is about 10 times less powerful than MQ1.

Although the black hole in MQ1 is only about 100 kilometres wide, the MQ1 structure — as identified by the Hubble Space Telescope — is much bigger than our Solar System, as the jets around it extend about 20 light years from either side of the black hole.

Black holes vary in size and are classed as either stellar mass (less than about 70 times the mass of our Sun) or supermassive (millions of times the mass of our Sun, like the giant black hole that is located in the middle of the Milky Way). MQ1 is a stellar mass black hole and was likely formed when a star died, collapsing to leave behind a compact mass.

The discovery of MQ1 and its characteristics is just one of the results of the comprehensive study of galaxy M83, a collection of millions of stars located 15 million light years away from Earth. M83, the iconic Southern-sky galaxy, is being mapped with the Hubble Space and Magellan telescopes (detecting visible light), the Chandra X-ray Observatory (detecting light in X-ray frequencies), the Australia Telescope Compact Array and the Very Large Array (detecting radio waves).

ICRAR is a joint venture between Curtin University and The University of Western Australia which receives funding from the State Government of Western Australia.
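For scale, the quoted “100 kilometres wide” matches the event-horizon diameter of a black hole of a few tens of solar masses. A quick check using the Schwarzschild radius r_s = 2GM/c² (reading “wide” as the horizon diameter is my assumption, not something the article states):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_diameter_km(mass_solar):
    """Event-horizon diameter of a non-rotating black hole."""
    r_s = 2 * G * (mass_solar * M_SUN) / c**2
    return 2 * r_s / 1000

for m in (10, 17, 30, 70):
    print(f"{m:>3} solar masses -> {schwarzschild_diameter_km(m):5.0f} km across")
# ~17 solar masses gives a horizon about 100 km across, squarely in the
# stellar-mass range described in the article.
```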

Read more

Discovery may help to explain mystery of ‘missing’ genetic risk, susceptibility to common diseases

A new study could help to answer an important riddle in our understanding of genetics: why research to look for the genetic causes of common diseases has failed to explain more than a fraction of the heritable risk of developing them.

Susceptibility to common diseases is believed to arise through a combination of many common genetic variants that individually slightly increase the risk of disease, plus a smaller number of rare mutations that often carry far greater risk. However, even when their effects are added together, the genetic variants so far linked to common diseases account for only a relatively small proportion of the risk we know is conveyed by genetics through studies of family history.

But the major new study, published in the journal PLOS Genetics, shows for the first time in cancer that some common genetic variants could actually be indicators of the presence of much more influential rare mutations that have yet to be found.

Scientists at The Institute of Cancer Research, London, led an international consortium made up of more than 25 leading academic institutions on the study, which was funded by the European Union.

The research, involving 20,440 men with prostate cancer and 21,469 without the disease, identified a cluster of four common genetic variants on chromosome 17 that appeared to give rise to a small increase in prostate cancer risk, using the standard statistical techniques for this type of study. But the study found an alternative explanation for the risk signal — a small proportion of the men with these common variants were in fact carriers of a rare mutation in the nearby HOXB13 gene, which is known to be linked to prostate cancer. Under this ‘synthetic association’, the number of people carrying a cancer risk variant was much lower than had been assumed, but those people who did inherit a variant had a much higher risk of prostate cancer than had been realised.

The discovery shows that the prevailing genetic theory — that common cancers are predominantly caused by the combined action of many common genetic variants, each with only a very small effect — could potentially underestimate the impact of rare, as yet undiscovered mutations.

The results are important because they show that there is a need for renewed effort by geneticists to find the causal variants, whether common or rare, behind the many common cancer-associated variants identified in recent years. Identifying any underlying rare mutations with a big effect on disease risk could improve the genetic screening and clinical management of individuals at greater risk of developing cancer, as well as other diseases.

Study co-leader Dr Zsofia Kote-Jarai, Senior Staff Scientist at The Institute of Cancer Research (ICR), said: “As far as we are aware, this is the first known example of a ‘synthetic association’ in cancer genetics. It was exciting to find evidence for this theory, which predicts that common genetic variants that appear to increase risk of disease by only a modest amount may indeed sometimes be detected purely due to their correlation with a rarer variant which confers a greater risk.

“Our study does not imply how widespread this phenomenon may be, but it holds some important lessons for geneticists in cancer, and other common diseases. It demonstrates the importance of identifying the causal genetic changes behind the many common variants that have already been shown to influence risk of disease.

“Our study also demonstrates that standard methods to identify potential causal variants when fine-mapping genetic associations with disease may be inadequate to assess the contribution of rare variants. Large sequencing studies may be necessary to answer these questions unequivocally.”

Study co-leader Professor Ros Eeles, Professor of Oncogenetics at The Institute of Cancer Research and Honorary Clinical Consultant at The Royal Marsden NHS Foundation Trust, said: “One important unanswered question in cancer genetics — and in genetics of common disease more generally — is why the genetic mutations we’ve discovered so far each seem to have such a small effect, when studies of families have shown that our genetic make-up has a very large influence on our risk of cancer.

“Our study is an important step forward in our understanding of where we might find this ‘missing’ genetic risk in cancer. At least in part, it might lie in rarer mutations which current research tools have struggled to find, because individually each does not affect a large number of people.”
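A ‘synthetic association’ is easy to demonstrate with simulated genotypes: give a rare, high-risk mutation a tendency to occur on haplotypes that also carry a common, otherwise neutral variant, and the common variant picks up a weak apparent risk. The sketch below is a toy model with invented frequencies and relative risks, not the study’s data or methods.

```python
import random

random.seed(1)
N = 200_000

# Toy parameters (invented for illustration):
common_freq = 0.20  # carrier frequency of a common, intrinsically neutral variant
rare_freq = 0.01    # carrier frequency of a rarer, truly causal mutation
base_risk = 0.05    # baseline disease probability
rare_rr = 5.0       # relative risk conferred by the rare mutation

cases = {True: 0, False: 0}   # disease counts by common-variant carrier status
totals = {True: 0, False: 0}

for _ in range(N):
    common = random.random() < common_freq
    # Key ingredient: the rare mutation arose on the common-variant haplotype,
    # so it is found almost exclusively in common-variant carriers.
    rare = common and (random.random() < rare_freq / common_freq)
    risk = base_risk * (rare_rr if rare else 1.0)
    totals[common] += 1
    cases[common] += random.random() < risk

rr_common = (cases[True] / totals[True]) / (cases[False] / totals[False])
print(f"Apparent relative risk of the neutral common variant: {rr_common:.2f}")
# Prints a value modestly above 1 (~1.2 in expectation): the common variant
# "carries" the signal of the rare mutation it tags, despite doing nothing itself.
```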

Read more

Study on flu evolution may change textbooks, history books

A new study reconstructing the evolutionary tree of flu viruses challenges conventional wisdom and solves some of the mysteries surrounding flu outbreaks of historical significance.

The study, published in the journal Nature, provides the most comprehensive analysis to date of the evolutionary relationships of influenza virus across different host species over time. In addition to dissecting how the virus evolves at different rates in different host species, the study challenges several tenets of conventional wisdom, for example the notion that the virus moves largely unidirectionally from wild birds to domestic birds rather than with spillover in the other direction. It also helps resolve the origin of the virus that caused the unprecedentedly severe influenza pandemic of 1918.

The new research is likely to change how scientists and health experts look at the history of influenza virus, how it has changed genetically over time and how it has jumped between different host species. The findings may have implications ranging from the assessment of health risks for populations to developing vaccines.

“We now have a really clear family tree of these viruses in all those hosts — including birds, humans, horses, pigs — and once you have that, it changes the picture of how this virus evolved,” said Michael Worobey, a professor of ecology and evolutionary biology at the University of Arizona, who co-led the study with Andrew Rambaut, a professor at the Institute of Evolutionary Biology at the University of Edinburgh. “The approach we developed works much better at resolving the true evolution and history than anything that has previously been used.”

Worobey explained that “if you don’t account for the fact that the virus evolves at different rates in each host species, you can get nonsense — nonsensical results about when and from where pandemic viruses emerged.”

“Once you resolve the evolutionary trees for these viruses correctly, everything snaps into place and makes much more sense,” Worobey said, adding that the study originated at his kitchen table. “I had a bunch of those evolutionary trees printed out on paper in front of me and started measuring the lengths of the branches with my daughter’s plastic ruler that happened to be on the table. Just like branches on a real tree, you can see that the branches on the evolutionary tree grow at different rates in humans versus horses versus birds. And I had a glimmer of an idea that this would be important for our public health inferences about where these viruses come from and how they evolve.

“My longtime collaborator Andrew Rambaut implemented in the computer what I had been doing with a plastic ruler. We developed software that allows the clock to tick at different rates in different host species. Once we had that, it produces these very clear and clean results.”

The team analyzed a dataset with more than 80,000 gene sequences representing the global diversity of the influenza A virus and analyzed them with their newly developed approach. The influenza A virus is subdivided into 17 so-called HA subtypes — H1 through H17 — and 10 subtypes of NA, N1-N10. …
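The “clock that ticks at different rates” idea can be sketched in a few lines: the same calendar time produces different amounts of sequence divergence in different hosts, so converting branch length back to time requires a per-host rate. A minimal illustration with made-up rates (the real analysis infers such rates jointly with the tree, in purpose-built phylogenetic software):

```python
# Toy host-specific molecular clock; rates are invented for illustration.
rates = {  # substitutions per site per year (made-up values)
    "human": 4.0e-3,
    "swine": 3.0e-3,
    "avian": 1.5e-3,
}

def branch_length(host, years):
    """Expected divergence accumulated along a branch in a given host."""
    return rates[host] * years

def elapsed_years(host, divergence):
    """Invert the clock: time implied by an observed branch length."""
    return divergence / rates[host]

# The same 20 years of evolution leaves very different molecular traces:
for host in rates:
    print(f"20 years in {host:>5}: {branch_length(host, 20):.3f} subs/site")

# Misreading an avian branch with the human rate underestimates its age:
obs = branch_length("avian", 40)
print(f"True age 40 yr; human-rate estimate: {elapsed_years('human', obs):.0f} yr")
```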

Read more

America’s natural gas system is leaking methane and in need of a fix

The first thorough comparison of evidence for natural gas system leaks confirms that organizations including the Environmental Protection Agency (EPA) have underestimated U.S. methane emissions generally, as well as those from the natural gas industry specifically.

Natural gas consists predominantly of methane. Even small leaks from the natural gas system are important because methane is a potent greenhouse gas — about 30 times more potent than carbon dioxide. A study, “Methane Leakage from North American Natural Gas Systems,” published in the Feb. 14 issue of the journal Science, synthesizes diverse findings from more than 200 studies ranging in scope from local gas processing plants to total emissions from the United States and Canada.

“People who go out and actually measure methane pretty consistently find more emissions than we expect,” said the lead author of the new analysis, Adam Brandt, an assistant professor of energy resources engineering at Stanford University. “Atmospheric tests covering the entire country indicate emissions around 50 percent more than EPA estimates. And that’s a moderate estimate.”

The standard approach to estimating total methane emissions is to multiply the amount of methane thought to be emitted by a particular kind of source, such as leaks at natural gas processing plants or belching cattle, by the number of that source type in a region or country. The products are then totaled to estimate all emissions. The EPA does not include natural methane sources, like wetlands and geologic seeps.

The national natural gas infrastructure has a combination of intentional leaks, often for safety purposes, and unintentional emissions, like faulty valves and cracks in pipelines. In the United States, the emission rates of particular gas industry components — from wells to burner tips — were established by the EPA in the 1990s. Since then, many studies have tested gas industry components to determine whether the EPA’s emission rates are accurate, and a majority of these have found the EPA’s rates too low. …
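The “standard approach” described above is a bottom-up inventory: emissions equal an emission factor times an activity count, summed over source types. A minimal sketch (the source names, factors, and counts are placeholders, not EPA figures):

```python
# Bottom-up emissions inventory, as described in the article.
# Emission factors and counts are placeholders, not EPA values.
sources = [
    # (source type, emission factor in tonnes CH4/unit/yr, number of units)
    ("processing plant", 120.0, 600),
    ("well site",          2.5, 500_000),
    ("cattle (enteric)",   0.07, 90_000_000),
]

total = sum(factor * count for _, factor, count in sources)
print(f"Bottom-up total: {total / 1e6:.1f} Mt CH4/yr")

# Top-down atmospheric measurements constrain the true total directly;
# the Science synthesis found them running ~50% above inventories built
# this way, suggesting factors and/or counts like these are too low or
# that source types are missing.
```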

Read more

Children living close to fast food outlets more likely to be overweight

Children living in areas surrounded by fast food outlets are more likely to be overweight or obese, according to new research from the University of East Anglia (UEA) and the Centre for Diet and Activity Research (CEDAR).

New research published today looked at weight data from more than a million children and compared it with the availability of unhealthy food from outlets including fish and chip shops, burger bars, pizza places, and sweet shops. They found that older children in particular are more likely to be overweight when living in close proximity to a high density of unhealthy eating outlets. It is hoped that the findings will help shape planning policy to help tackle childhood obesity.

Prof Andy Jones, from UEA’s Norwich Medical School, led the research. He said: “We found that the more unhealthy food outlets there are in a neighborhood, the greater the number of overweight and obese children. The results were more pronounced in secondary school children who have more spending power to choose their own food. But the association was reversed in areas with more healthy food options available.

“This is important because there is an epidemic of obesity among children in the UK and other industrialized countries. It can lead to childhood diabetes, low self-esteem, and orthopedic and cardiovascular problems. It is also a big problem because around 70 per cent of obese children and teenagers also go on to have weight problems in later life.”

Study co-author Andreea Cetateanu, from UEA’s School of Environmental Sciences, said: “We know that fast food is more common in deprived areas of the UK and that overweight children are more likely to come from socio-economically deprived populations. But associations between children’s weight and the availability of junk food have not been shown before at a national scale. If we can use these findings to influence planning decisions and help create a more healthy food environment, we may be able to help reverse this trend for future generations.

“Public health policies to reduce obesity in children should incorporate strategies to prevent high concentrations of fast food and other unhealthy food outlets. But there is no quick fix — and any interventions for tackling childhood obesity and creating environments that are more supportive for both physical activity and better dietary choices must be part of the bigger picture looking at the whole obesity system.”

The research team used data from the National Child Measurement Programme, which records the height and weight of one million children at the majority of state schools in England annually. They took into account factors such as people living in rural locations having to travel further to buy food, and other variables such as the proportion of children living in low income households and measurements of green space, which have both been associated with exercise in children.

Story Source: The above story is based on materials provided by University of East Anglia. Note: Materials may be edited for content and length.

Read more

Many stroke patients on ‘clot-busting’ tPA may not need long stays in ICU

A Johns Hopkins study of patients with ischemic stroke suggests that many of those who receive prompt hospital treatment with “clot-busting” tissue plasminogen activator (tPA) therapy can avoid lengthy, restrictive monitoring in an intensive care unit (ICU). The study challenges the long-standing protocol that calls for intensive monitoring, mostly done in ICUs, for the first 24 hours after tPA infusion to catch bleeding in the brain, a side effect seen in 6 percent of patients treated with the medication.

Results show that a relatively simple measure of stroke severity can accurately single out which patients need ICU monitoring and which can be managed outside of a critical care setting in the hospital.

“What we saw in this preliminary study was that, after the initial hour-long infusion of tPA, if an intensive care need had not developed, the chance of needing ICU monitoring — including a symptomatic ‘bleed’ — was extremely low for a large majority of patients, namely those with milder strokes,” says Victor Urrutia, M.D., medical director of the Comprehensive Stroke Center at The Johns Hopkins Hospital and head of the research team.

Ischemic stroke, caused by a clot in a blood vessel that cuts off blood flow to the brain, is the most common form of stroke and the second leading cause of death for those over 60. In the United States, an estimated 795,000 people suffer a stroke each year. So far, tPA is the only FDA-approved treatment for acute stroke.

In a report on the study published online in the journal PLOS ONE, the Johns Hopkins team analyzed data from 153 stroke patients admitted to the emergency departments of The Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center between 2010 and 2013. After taking into account differences in age, sex, race, hypertension, diabetes, atrial fibrillation, kidney function, blood clotting status, use of statin drugs and other health factors, the team says that what emerged as the best predictor of the need for intensive care was a patient’s score on the National Institutes of Health (NIH) Stroke Scale, a trusted measure of stroke severity. The scale is a proven tool administered at the bedside involving 15 measures and observations, including level of consciousness, language ability, eye movements, vision strength, coordination and sensory loss. Scores range from zero to 42, with mild strokes typically registering 10 or lower. The average score for the Johns Hopkins patient group was 9.8.

“What we learned is that the majority of our patients with mild strokes required no critical care, and that we are using scarce, specialized resources for intensive monitoring rather than for intensive care,” says Urrutia, an assistant professor of neurology. “If our upcoming, prospective study verifies what we’ve found about those who don’t need to be in the ICU, our patients will benefit, and we will also reduce costs of care.”

Urrutia emphasized that critical care is clearly needed for tPA-linked bleeding, stroke-related brain swelling and critical abnormalities in blood pressure or blood sugar. But, he says, “For patients with an NIH Stroke Scale score of less than 10 without a need for transfer to the ICU after the first hour, the risk of a problem occurring later that needed ICU attention was only about 1 percent.”

In the follow-up study, which is scheduled to begin this spring, consenting patients with a low stroke scale score and no other apparent need for intensive care will enter a stroke unit with a less rigorous monitoring schedule and increased family visiting time. Patients in the non-ICU setting will be less physically restricted and subjected to fewer sleep interruptions, lowering the risk of ICU-associated delirium and psychological distress. “We expect benefits to extend to the hospital as well, freeing up the ICU staff and beds for sicker patients,” says Urrutia.

The financial benefits of the change in protocol could be significant, Urrutia adds. “Present monitoring for patients with tPA is very costly,” he says.
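The decision rule the article describes reduces to two inputs: the NIH Stroke Scale score and whether any ICU-level need appeared during the first hour of tPA infusion. A hedged sketch of that rule (the threshold and the two inputs come straight from the article; the function itself is an illustration, not a clinical tool):

```python
def needs_icu_monitoring(nihss_score: int, icu_need_in_first_hour: bool) -> bool:
    """Triage rule as described in the article: patients with milder strokes
    (NIHSS < 10) and no ICU-level need after the initial hour-long tPA
    infusion had only ~1% risk of later needing ICU attention.
    Illustration only; not a clinical decision tool."""
    MILD_STROKE_CUTOFF = 10
    return icu_need_in_first_hour or nihss_score >= MILD_STROKE_CUTOFF

print(needs_icu_monitoring(nihss_score=6, icu_need_in_first_hour=False))   # False
print(needs_icu_monitoring(nihss_score=14, icu_need_in_first_hour=False))  # True
```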

Read more
