‘3-D’ test could reduce reliance on animals for testing asthma and allergy medications

To determine whether new medicines are safe and effective for humans, researchers must first test them in animals, which is costly and time-consuming, as well as ethically challenging. In a study published in ACS’ journal Molecular Pharmaceutics, scientists report that they’ve developed a simple, “3D” laboratory method for testing asthma and allergy medications that mimics what happens in the body, which could help reduce the need for animal testing.

Amir Ghaemmaghami and colleagues note that respiratory conditions, such as asthma and allergies, are becoming more common. These conditions affect the lungs and the airways leading to the lungs, making it difficult to breathe. Every year, respiratory symptoms lead to expensive hospital visits, as well as absences from work and school. Better drugs could provide relief, but before giving new medicines to people, researchers must first test them in animals, a costly and laborious process. Sometimes researchers instead use “2D” tests, applying the drug to a layer of human cells in a lab dish, but this isn’t an adequate way to tell how a medicine will work in a whole animal or a whole person. So Ghaemmaghami’s team developed a new, 3D alternative.

Their test includes three types of human cells that are typically found in a person’s airway. In the body, these cells are close together and are involved in the development of respiratory conditions. The 3D “model” reacted just like a real person’s airway when the researchers exposed it to allergens and bacterial extract. They say that the model has the potential to reduce the need for some animal testing of new drugs for respiratory conditions.

Story Source: The above story is based on materials provided by American Chemical Society. …

Read more

Computer maps 21 distinct emotional expressions — even ‘happily disgusted’

Researchers at The Ohio State University have found a way for computers to recognize 21 distinct facial expressions — even expressions for complex or seemingly contradictory emotions such as “happily disgusted” or “sadly angry.”

In the current issue of the Proceedings of the National Academy of Sciences, they report that they were able to more than triple the number of documented facial expressions that researchers can now use for cognitive analysis.

“We’ve gone beyond facial expressions for simple emotions like ‘happy’ or ‘sad.’ We found a strong consistency in how people move their facial muscles to express 21 categories of emotions,” said Aleix Martinez, a cognitive scientist and associate professor of electrical and computer engineering at Ohio State. “That is simply stunning. That tells us that these 21 emotions are expressed in the same way by nearly everyone, at least in our culture.”

The resulting computational model will help map emotion in the brain with greater precision than ever before, and perhaps even aid the diagnosis and treatment of mental conditions such as autism and post-traumatic stress disorder (PTSD).

Since at least the time of Aristotle, scholars have tried to understand how and why our faces betray our feelings — from happy to sad, and the whole range of emotions beyond. Today, the question has been taken up by cognitive scientists who want to link facial expressions to emotions in order to track the genes, chemicals, and neural pathways that govern emotion in the brain.

Until now, cognitive scientists have confined their studies to six basic emotions — happy, sad, fearful, angry, surprised and disgusted — mostly because the facial expressions for them were thought to be self-evident, Martinez explained.

But deciphering a person’s brain functioning with only six categories is like painting a portrait with only primary colors, Martinez said: it can provide an abstracted image of the person, but not a true-to-life one.

What Martinez and his team have done is more than triple the color palette — with a suite of emotional categories that can be measured by the proposed computational model and applied in rigorous scientific study.

“In cognitive science, we have this basic assumption that the brain is a computer. So we want to find the algorithm implemented in our brain that allows us to recognize emotion in facial expressions,” he said. “In the past, when we were trying to decode that algorithm using only those six basic emotion categories, we were having tremendous difficulty. Hopefully with the addition of more categories, we’ll now have a better way of decoding and analyzing the algorithm in the brain.”

They photographed 230 volunteers — 130 female, 100 male, and mostly college students — making faces in response to verbal cues such as “you just got some great unexpected news” (“happily surprised”) or “you smell a bad odor” (“disgusted”). In the resulting 5,000 images, they painstakingly tagged prominent landmarks for facial muscles, such as the corners of the mouth or the outer edge of the eyebrow.

They used the same method as psychologist Paul Ekman, the scientific consultant for the television show “Lie to Me.” Ekman’s Facial Action Coding System, or FACS, is a standard tool in body language analysis.

They searched the FACS data for similarities and differences in the expressions, and found 21 emotions — the six basic emotions, as well as emotions that exist as combinations of those emotions, such as “happily surprised” or “sadly angry.” The researchers referred to these combinations as “compound emotions.” While “happily surprised” can be thought of as an expression for receiving unexpected good news, “sadly angry” could be the face we make when someone we care about makes us angry.

The model was able to determine the degree to which the basic emotions and compound emotions were characterized by a particular expression. For example, the expression for happy is nearly universal: 99 percent of the time, study participants expressed happiness by drawing up the cheeks and stretching the mouth in a smile. Surprise was also easily detected: 92 percent of the time, surprised participants opened their eyes wide and dropped their mouth open.

“Happily surprised” turned out to be a compound of the expressions for “happy” and “surprised.” About 93 percent of the time, the participants expressed it the same way: with the wide-open eyes of surprise and the raised cheeks of happiness — and a mouth that was a hybrid of the two — both open and stretched into a smile.

The computer model also gives researchers a tool to understand seemingly contradictory emotions. …
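To make the FACS-based approach concrete, here is a minimal sketch of how compound categories can be scored as combinations of facial action units (AUs). The AU prototypes are standard textbook associations, and the Jaccard matcher is this writeup's illustration, not the fitted model from the PNAS paper.

```python
# Illustrative sketch: expressions as sets of FACS action units (AUs).
# AU 6 = cheek raiser, AU 12 = lip corner puller (happy);
# AUs 1, 2 = brow raisers, AU 5 = upper lid raiser, AU 26 = jaw drop (surprise).

HAPPY = {6, 12}
SURPRISED = {1, 2, 5, 26}

PROTOTYPES = {
    "happy": HAPPY,
    "surprised": SURPRISED,
    # a compound category reuses the AUs of its component emotions
    "happily surprised": HAPPY | SURPRISED,
}

def jaccard(a: set, b: set) -> float:
    """Similarity between two AU sets (intersection over union)."""
    return len(a & b) / len(a | b)

def classify(observed_aus: set) -> str:
    """Return the prototype whose AU set best matches the observation."""
    return max(PROTOTYPES, key=lambda name: jaccard(observed_aus, PROTOTYPES[name]))

print(classify({6, 12}))                # happy
print(classify({1, 2, 5, 6, 12, 26}))   # happily surprised
```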

Read more

Good bacteria that protect against HIV identified

By growing vaginal skin cells outside the body and studying the way they interact with “good” and “bad” bacteria, researchers at the University of Texas Medical Branch at Galveston think they may be able to better identify the good bacteria that protect women from HIV infection and other sexually transmitted infections.

The health of the human vagina depends on a symbiotic, mutually beneficial relationship with “good” bacteria that live on its surface, feeding on products produced by vaginal skin cells. These good bacteria, in turn, create a physical and chemical barrier to bad bacteria and viruses, including HIV.

A publication released today from a team of scientists representing multiple disciplines at UTMB and the Oak Crest Institute of Science in Pasadena, Calif., reports a new method for studying the relationship between the skin cells and the “good” bacteria.

The researchers are the first to grow human vaginal skin cells in a dish in a manner that creates surfaces that support colonization by the complex good and bad communities of bacteria collected from women during routine gynecological exams. These bacterial communities have never before been successfully grown outside a human.

The research group, led by Richard Pyles at UTMB, reports in the journal PLOS One that by using this model of the human vagina, they discovered that certain bacterial communities alter the way HIV infects and replicates. Their laboratory model will allow careful and controlled evaluation of the complex community of bacteria to ultimately identify those species that weaken the defenses against HIV. Pyles also indicated that this model “will provide the opportunity to study the way that these mixed-species bacterial communities change the activity of vaginal applications, including over-the-counter products like douches and prescription medications and contraceptives. These types of studies are very difficult or even impossible to complete in women who are participating in clinical trials.”

In fact, the team’s report documented the potential for their system to better evaluate current and future antimicrobial drugs in terms of how they interact with “good” and “bad” bacteria. In their current studies, a bacterial community associated with a symptomatic condition called bacterial vaginosis substantially reduced the antiviral activity of one of the leading anti-HIV medicines. Conversely, vaginal surfaces occupied by healthy bacteria and treated with the antiviral produced significantly less HIV than vaginal surfaces without bacteria treated with the same antiviral.

Dr. Marc Baum, the lead scientist at Oak Crest and co-author of the work, stated: “This model is unique as it faithfully recreates the vaginal environment ex vivo, both in terms of the host cellular physiology and the associated complex vaginal microbiomes that could not previously be cultured. I believe it will be of immense value in the study of sexually transmitted infections.”

Story Source: The above story is based on materials provided by University of Texas Medical Branch at Galveston. …

Read more

Animals losing migratory routes? Devastating consequences of scarcity of ‘knowledgeable elders’

Small changes in a population may lead to dramatic consequences, like the disappearance of the migratory route of a species. A study carried out in collaboration with SISSA has created a model of the behaviour of a group of individuals on the move (like a school of fish, a herd of sheep or a flock of birds) which, by changing a few simple parameters, reproduces the collective behaviour patterns observed in the wild. The model shows that small quantitative changes in the number of knowledgeable individuals and the availability of food can lead to radical qualitative changes in the group’s behaviour.

Until the ’50s, bluefin tuna fishing was a thriving industry in Norway, second only to sardine fishing. Every year, bluefin tuna used to migrate from the eastern Mediterranean up to the Norwegian coasts. Suddenly, however, over no more than 4-5 years, the tuna stopped going back to Norway. In an attempt to understand this phenomenon, Giancarlo De Luca from SISSA (the International School for Advanced Studies of Trieste), together with an international team of researchers (from the Centre for Theoretical Physics — ICTP — of Trieste and the Technical University of Denmark), started to devise a model based on an “adaptive stochastic network.” The physicists wanted to simulate, in simplified form, the collective behaviour of animal groups. Their findings, published in the journal Interface, show that the number of “informed individuals” in a group, sociality and the strength of the informed individuals’ decision are “critical” variables, such that even minimal fluctuations in these variables can result in catastrophic changes to the system.

“We started out by taking inspiration from the phenomenon that affected the bluefin tuna, but in actual fact we then developed a general model that can be applied to many situations of groups ‘on the move’,” explains De Luca.

The collective behaviour of a group can be treated as an “emerging property,” that is, the result of the self-organization of each individual’s behaviour. “The majority of individuals in a group may not possess adequate knowledge, for example, about where to find rich feeding grounds,” explains De Luca. “However, for the group to function, it is enough that only a minority of individuals possess that information. The others, the ones who don’t, will obey simple social rules, for example by following their neighbours.”

The tendency to comply with the norm, the number of knowledgeable individuals and the determination with which they follow their preferred route (which the researchers interpreted as being directly related to the appeal, or abundance, of the resource) are the critical variables. …
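The paper itself formulates an adaptive stochastic network, which is not reproduced here; the minimal Python sketch below only illustrates the qualitative point that a small informed minority can steer an otherwise imitating group. The parameters informed_frac and w stand in for the number of knowledgeable individuals and the strength of their decision.

```python
import numpy as np

# Minimal sketch (not the authors' adaptive-network model): a few informed
# agents pull toward a preferred migration heading; everyone else follows
# the crowd, i.e. adopts the group's mean heading plus noise.

def simulate(n=200, informed_frac=0.05, w=0.5, steps=300, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, n)   # individual headings
    informed = rng.random(n) < informed_frac
    target = 0.0                            # informed agents' preferred heading
    for _ in range(steps):
        # social rule: align with the group's mean heading, with noise
        mean = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
        theta = mean + 0.3 * rng.normal(size=n)
        # informed agents blend their heading toward the target with strength w
        theta[informed] = (1 - w) * theta[informed] + w * target
    # projection of the group's heading on the migration target:
    # +1 means the group migrates exactly along the preferred route
    return np.cos(theta).mean()

for f in (0.0, 0.01, 0.05, 0.20):
    runs = [simulate(informed_frac=f, seed=s) for s in range(10)]
    print(f"informed fraction {f:.2f}: mean projection {np.mean(runs):+.2f}")
```

With no informed agents the group still coheres, but its direction is arbitrary (the projection averages near zero across runs); above a small critical informed fraction, the group reliably tracks the migratory target.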

Read more

Virtual bees help to unravel complex causes of colony decline

Scientists have created an ingenious computer model that simulates a honey bee colony over the course of several years. The BEEHAVE model, published today in the Journal of Applied Ecology, was created to investigate the losses of honeybee colonies that have been reported in recent years and to identify the best course of action for improving honeybee health.

A team of scientists, led by Professor Juliet Osborne from the Environment and Sustainability Institute, University of Exeter (and previously at Rothamsted Research), developed BEEHAVE, which simulates the life of a colony, including the queen’s egg laying, brood care by nurse bees and foragers collecting nectar and pollen in a realistic landscape.

Professor Juliet Osborne said: “It is a real challenge to understand which factors are most important in affecting bee colony growth and survival. This is the first opportunity to simulate the effects of several factors together, such as food availability, mite infestation and disease, over realistic time scales.”

The model allows researchers, beekeepers and anyone interested in bees to predict colony development and honey production under different environmental conditions and beekeeping practices. To build the simulation, the scientists brought together existing honeybee research and data to develop a new model that integrated processes occurring inside and outside the hive.

The first results of the model show that colonies infested with a common parasitic mite (varroa) can be much more vulnerable to food shortages. Effects within the first year can be subtle and might be missed by beekeepers during routine management. But the model shows that these effects build up over subsequent years, leading to eventual failure of the colony if it is not given an effective varroa treatment.

BEEHAVE can also be used to investigate potential consequences of pesticide applications. For example, the BEEHAVE model can simulate the impact of increased loss of foragers. The results show that colonies may be more resilient to this forager loss in the short term than previously thought, but effects may accumulate over years, especially when colonies are also limited by food supply.

BEEHAVE simulations show that good food sources close to the hive will make a real difference to the colony, and that lack of forage over extended periods leaves colonies vulnerable to other environmental factors. Addressing forage availability is critical to maintaining healthy hives and colonies over the long term.

Professor Osborne added: “The use of this model by a variety of stakeholders could stimulate the development of new approaches to bee management, pesticide risk assessment and landscape management. The advantage is that each of these factors can be tested in a virtual environment in different combinations, before testing in the field. …
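BEEHAVE itself is a detailed, published model; the toy annual loop below (all rates are invented placeholders) merely illustrates the kind of interaction the article describes, in which an untreated varroa load erodes a colony across years even though the first year looks unremarkable.

```python
# Toy multi-year colony loop in the spirit of BEEHAVE (all rates are
# invented placeholders, not the published model's parameters).

def simulate(years=8, forage_quality=0.8, varroa=0.1, treat_mites=False):
    bees = 20_000
    for year in range(1, years + 1):
        # spring/summer growth: brood reared per bee, cut by poor forage and mite damage
        growth = 1 + 3.0 * forage_quality * (1 - varroa)
        bees = min(int(bees * growth), 60_000)      # hive carrying capacity
        winter_loss = 0.3 + 0.5 * varroa            # mites raise winter mortality
        bees = int(bees * (1 - winter_loss))
        # untreated mite loads build up from year to year
        varroa = 0.05 if treat_mites else min(1.0, varroa * 1.5 + 0.01)
        print(f"year {year}: {bees:6,d} bees, varroa load {varroa:.2f}")
        if bees < 4_000:
            print("colony fails")
            break

simulate()
```

In this toy run the first-year dip is modest, but the colony collapses around year seven; setting treat_mites=True or raising forage_quality keeps the same colony stable, mirroring the article's point that subtle early effects compound over years.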

Read more

Previous rapid thinning of Pine Island Glacier sheds light on future Antarctic ice loss

New research, published this week in Science, suggests that the largest single contributor to global sea level rise, a glacier of the West Antarctic Ice Sheet, may continue thinning for decades to come. Geologists from the UK, USA and Germany found that Pine Island Glacier (PIG), which is rapidly accelerating, thinning and retreating, has thinned rapidly before. The team say their findings demonstrate the potential for current ice loss to continue for several decades yet.

Their findings reveal that 8000 years ago the glacier thinned as fast as it has in recent decades, providing an important model for its future behaviour. The glacier is currently experiencing significant acceleration, thinning and retreat that is thought to be caused by ‘ocean-driven’ melting: an increase in warm ocean water finding its way under the ice shelf.

After two decades of rapid ice loss, concerns are arising over how much more ice will be lost to the ocean in the future. Model projections of the future of PIG contain large uncertainties, leaving questions about the rate, timing and persistence of future sea level rise. Rocks exposed by retreating or thinning glaciers provide evidence of past ice sheet change, which helps scientists to predict possible future change. The geologists used highly sensitive dating techniques, pioneered by one of the team, to track the thinning of PIG through time, and to show that the past thinning lasted for several decades.

Lead author Joanne Johnson from the British Antarctic Survey (BAS) said: “Our geological data show us the history of Pine Island Glacier in greater detail than ever before. The fact that it thinned so rapidly in the past demonstrates how sensitive it is to environmental change; small changes can produce dramatic and long-lasting results. Based on what we know, we can expect the rapid ice loss to continue for a long time yet, especially if ocean-driven melting of the ice shelf in front of Pine Island Glacier continues at current rates.”

Professor Mike Bentley, a co-leader of the project based at Durham University, said: “This paper is part of a wide range of international scientific efforts to understand the behaviour of this important glacier. The results we’re publishing are the product of long days spent sampling rocks from mountains in Antarctica, coupled with some exceptionally precise and time-consuming laboratory analyses. …

Read more

Nautical Spring Styles: let the countdown begin!

OMGOSH I CAN’T WAIT FOR SPRING. No, seriously. This winter has been brutal, right?!?!?! 27 DAYS TILL SPRING, but who’s counting? We’re usually looking at dead grass, which isn’t pretty, but this winter we haven’t seen the grass at all. Not since before Christmas! We’re just buried over here, and when you have two toddlers at home…. phew. That’s all I have to say. To get me through this cold weather, I’ve been thinking about spring… planning what I want to do outside once the weather is finally nice enough, looking at old photos of summer fun, working out to get in better shape, and–of course–shopping for new clothes! I’ve been scoping out some spring styles. Remember those huge catalogs your mom would get when you were a …

Read more

Revolutionary new view on heritability in plants: Complex heritable traits not only determined by changes in DNA sequence

Complex heritable traits are not only determined by changes in the DNA sequence. Scientists from the University of Groningen Bioinformatics Centre, together with their French colleagues, have shown that epigenetic marks can affect traits such as flowering time and architecture in plants. Furthermore, these marks are passed on for many generations in a stable manner. Their results were published in Science on the 6th of February 2014. It seems that a revision of genetics textbooks is now in order.

We’ve all been taught that DNA is the physical foundation of heredity. Our genes are spelled out in the four famous letters A, T, C and G, which together form the genetic code. A single letter change in this code can lead to a gene ceasing to function or failing to work properly.

The fact that the functioning of our genes is also affected by epigenetic marks has been known for decades. For example, the nucleotide cytosine (the C in the genetic code) can be changed into a methylcytosine. This cytosine methylation, which is one type of epigenetic mark, is typically associated with repression of gene activity.

Epigenetic inheritance

‘While in mammals epigenetic marks are typically reset every generation, in plants no such dramatic resetting takes place. This opens the door to epigenetic inheritance in plants: epigenetic changes that are acquired in one generation tend to be stably passed on to the next generation’, explains Frank Johannes, assistant professor at the GBIC and co-lead scientist for the Science study.

Johannes’s French colleagues have produced inbred strains of the model plant Arabidopsis, in which the epigenetic marks vary between strains although the DNA sequence is almost identical. …

Read more

Massive neutrinos solve a cosmological conundrum

Scientists have solved a major problem with the current standard model of cosmology by combining results from the Planck spacecraft with measurements of gravitational lensing to deduce the mass of ghostly sub-atomic particles called neutrinos.

The team, from the universities of Manchester and Nottingham, used observations of the Big Bang and the curvature of space-time to accurately measure the mass of these elementary particles for the first time.

The recent Planck spacecraft observations of the Cosmic Microwave Background (CMB) — the fading glow of the Big Bang — highlighted a discrepancy between these cosmological results and the predictions from other types of observations. The CMB is the oldest light in the Universe, and its study has allowed scientists to accurately measure cosmological parameters, such as the amount of matter in the Universe and its age. But an inconsistency arises when large-scale structures of the Universe, such as the distribution of galaxies, are observed.

Professor Richard Battye, from The University of Manchester School of Physics and Astronomy, said: “We observe fewer galaxy clusters than we would expect from the Planck results and there is a weaker signal from gravitational lensing of galaxies than the CMB would suggest.

“A possible way of resolving this discrepancy is for neutrinos to have mass. The effect of these massive neutrinos would be to suppress the growth of dense structures that lead to the formation of clusters of galaxies.”

Neutrinos interact very weakly with matter and so are extremely hard to study. They were originally thought to be massless, but particle physics experiments have shown that neutrinos do indeed have mass and that there are several types, known as flavours by particle physicists. The sum of the masses of these different types has previously been suggested to lie above 0.06 eV (much less than a billionth of the mass of a proton).

In this paper, Professor Battye and co-author Dr Adam Moss, from the University of Nottingham, have combined the data from Planck with gravitational lensing observations in which images of galaxies are warped by the curvature of space-time. They conclude that the current discrepancies can be resolved if massive neutrinos are included in the standard cosmological model. They estimate that the sum of the masses of neutrinos is 0.320 +/- 0.081 eV (assuming three active neutrino flavours).

Dr Moss said: “If this result is borne out by further analysis, it not only adds significantly to our understanding of the sub-atomic world studied by particle physicists, but it would also be an important extension to the standard model of cosmology which has been developed over the last decade.”

The paper is published in Physical Review Letters and has been selected as an Editor’s choice.

Story Source: The above story is based on materials provided by University of Manchester. Note: Materials may be edited for content and length.
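For scale, two standard textbook relations (not taken from the paper) connect the quoted mass sum to the suppression of clustering; here h ≈ 0.67 and Ω_m ≈ 0.31 are illustrative Planck-era parameter values, not numbers from the study itself:

```latex
\Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93.14\ \mathrm{eV}}
             = \frac{0.320}{93.14} \approx 3.4\times 10^{-3},
\qquad
f_\nu \equiv \frac{\Omega_\nu}{\Omega_m}
      \approx \frac{0.0034/0.67^{2}}{0.31} \approx 0.025,
\qquad
\frac{\Delta P}{P} \approx -8\,f_\nu \approx -20\%
```

In words: a mass sum of 0.320 eV corresponds to a few percent of the matter density residing in free-streaming neutrinos, enough to suppress small-scale matter clustering by roughly a fifth, which is the direction needed to reconcile the cluster counts and lensing signal with Planck.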

Read more

New microchip demonstrates how metastasis takes place

Nearly 70 percent of patients with advanced breast cancer experience skeletal metastasis, in which cancer cells migrate from a primary tumor into bone — a painful development that can cause fractures and spinal compression. While scientists are attempting to better understand metastasis in general, not much is known about how and why certain cancers spread to specific organs, such as bone, liver, and lungs.

Now researchers from MIT, Italy, and South Korea have developed a three-dimensional microfluidic platform that mimics the spread of breast cancer cells into a bonelike environment. The microchip — slightly larger than a dime — contains several channels in which the researchers grew endothelial cells and bone cells to mimic a blood vessel and bone side-by-side. They then injected a highly metastatic line of breast cancer cells into the fabricated blood vessel.

Twenty-four hours later, the team observed that twice as many cancer cells had made their way through the vessel wall and into the bonelike environment as had migrated into a simple collagen-gel matrix. Moreover, the cells that made it through the vessel lining and into the bonelike setting formed microclusters of up to 60 cancer cells by the experiment’s fifth day.

“You can see how rapidly they are growing,” says Jessie Jeon, a graduate student in mechanical engineering. “We only waited until day five, but if we had gone longer, [the size of the clusters] would have been overwhelming.”

The team also identified two molecules that appear to encourage cancer cells to metastasize: CXCL5, a protein ligand secreted by bone cells, and CXCR2, a receptor protein on cancer cells that binds to the ligand. The preliminary results suggest that these molecules may be potential targets to reduce the spread of cancer.

Jeon says the experiments demonstrate that the microchip may be used in the future to test drugs that might stem metastasis, and also as a platform for studying cancer’s spread to other organs. She and her colleagues, including Roger Kamm, the Cecil and Ida Green Distinguished Professor of Mechanical and Biological Engineering at MIT, have outlined the results of their experiments in the journal Biomaterials.

“Currently, we don’t understand why certain cancers preferentially metastasize to specific organs,” Kamm says. “An example is that breast cancer will form metastatic tumors in bone, but not, for example, muscle. Why is this, and what factors determine it? We can use our model system both to understand this selectivity, and also to screen for drugs that might prevent it.”

Through a wall and into bone

The process by which cancer cells form secondary tumors requires the cells to first survive a journey through the circulatory system. These migrating cells attach to a blood vessel’s inner lining, and ultimately squeeze through to the surrounding tissue — a process called extravasation, which Kamm’s research group modeled last fall using a novel microfluidic platform. Now the group is looking to the next step in metastasis: the stage at which a cancer cell invades a specific organ. …

Read more

Large-scale deep re-sequencing reveals cucumber’s evolutionary enigma

Oct. 20, 2013 — In a collaborative study published online today in Nature Genetics, researchers from the Genome Centre of the Chinese Academy of Agricultural Sciences (CAAS), BGI, and other institutes present a cucumber genomic variation map that includes about 3.6 million variants revealed by deep resequencing of 115 cucumbers worldwide. This work provides new insights for understanding the genetic basis of domestication and diversity of this important crop, and provides guidance for breeders to harness genetic variation for crop improvement.

Cucumber is a major vegetable crop consumed worldwide, as well as a model system for sex determination and plant vascular biology. In 2009, cucumber became the seventh plant to have its genome sequence published, following the well-studied model plant Arabidopsis thaliana, the poplar tree, grapevine, papaya, and the crops rice and sorghum. More efforts have been put into cucumber genomics research since then.

As part of these efforts, researchers from CAAS and BGI re-sequenced 115 cucumber lines sampled from 3,342 accessions worldwide, and also conducted de novo sequencing on a wild cucumber. In total, they detected more than 3.3 million SNPs, over 0.33 million small insertions and deletions (indels), and 594 presence-absence variations (PAVs), and then constructed a comprehensive variation map of cucumber.

Furthermore, the researchers performed a suite of model-based analyses of population structure and phylogenetic reconstruction. The results indicated that the three cultivated groups (Eurasian, East Asian, and Xishuangbanna) are each monophyletic and genetically quite homogeneous, but the Indian group shows clear evidence of substructure and genetic heterogeneity. Their further analysis also provides evidence for the ancestral status of the Indian group, which holds great potential for introducing new alleles into the cultivated gene pool.

To understand the population bottlenecks during domestication, the researchers compared vegetable and grain food crops. The comparison indicated that the three vegetable crops (cucumber, watermelon, and tomato) probably underwent narrower bottleneck events during domestication than the grain food crops (rice, maize, and soybean). In addition, they also identified 112 putative domestication sweeps in the cucumber genome. …

Read more

Brown algae reveal antioxidant production secrets

Sep. 5, 2013 — Brown algae contain phlorotannins, aromatic (phenolic) compounds that are unique in the plant kingdom. As natural antioxidants, phlorotannins are of great interest for the treatment and prevention of cancer and inflammatory, cardiovascular and neurodegenerative diseases.

Researchers at the Végétaux marins et biomolécules (CNRS/UPMC) laboratory at the Station biologique de Roscoff, in collaboration with two colleagues at the Laboratoire des sciences de l’Environnement MARin (Laboratory of Marine Environment Sciences) in Brest (CNRS/UBO/IFREMER/IRD), have recently elucidated the key step in the production of these compounds in Ectocarpus siliculosus, a small brown alga model species. The study also revealed the specific mechanism of an enzyme that synthesizes phenolic compounds with commercial applications. These findings have been patented and should make it easier to produce the phlorotannins presently used as natural extracts in the pharmaceutical and cosmetic industries. The results have also been published online on the site of the journal The Plant Cell.

Until now, extracting phlorotannins from brown algae for use in industry was a complex process, and the biosynthesis pathways of these compounds were unknown. By studying the first genome sequenced from a brown alga, the team in Roscoff identified several genes homologous to those involved in phenolic compound biosynthesis in terrestrial plants. Among these genes, the researchers found that at least one was directly involved in the synthesis of phlorotannins in brown algae. They then inserted these genes into a bacterium, which thus produced a large quantity of the enzymes that could synthesize the desired phenolic compounds. One of these enzymes, a type III polyketide synthase (PKS III), was studied in detail and revealed how it produces phenolic compounds. …

Read more

Clinical tool classifies spots on lung scans of smokers

Sep. 4, 2013 — A Terry Fox Research Institute (TFRI)-led study has developed new clinical risk calculator software that accurately classifies, nine out of ten times, which spots or lesions (nodules) seen on an initial lung computed tomography (CT) scan are benign and which are malignant among individuals at high risk for lung cancer.

The findings are expected to have immediate clinical impact worldwide among health professionals who currently diagnose and treat individuals at risk for or diagnosed with lung cancer, and they provide new evidence for developing and improving lung-cancer screening programs. A total of 12,029 lung nodules observed on CTs of 2,961 current and former smokers were examined in the population-based study.

The results, to be published in the Sept. 5th issue of the New England Journal of Medicine (NEJM), will have an immediate impact on clinical practice, says co-principal investigator Dr. Stephen Lam, chair of BC’s Provincial Lung Tumour Group at the BC Cancer Agency and a professor of medicine at the University of British Columbia.

“We already know that CT screening saves lives. Now, we have evidence that our model and risk calculator can accurately predict which abnormalities that show up on a first CT require further follow-up, such as a repeat CT scan, a biopsy, or surgery, and which ones do not. This is extremely good news for everyone — from the people who are at high risk for developing lung cancer to the radiologists, respirologists and thoracic surgeons who detect and treat it. Currently, there are no Canadian guidelines for us to use in clinical practice.”

In countries where guidelines do exist, they largely relate to nodule size. The pan-Canadian team’s prediction model, developed by Brock University epidemiologist Dr. Martin Tammemägi, includes a risk calculator that considers several factors in addition to size: older age, female sex, family history of lung cancer, emphysema, location of the nodule in the upper lobe, part-solid nodule type, lower nodule count and spiculation (presence of sharp or needle-like points). …
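Risk calculators of this kind are typically logistic regression models. The sketch below wires the listed predictors into that form; the coefficients are invented placeholders for illustration, not the published pan-Canadian values.

```python
import math

# Illustrative logistic risk calculator using the predictors named above.
# All coefficients are made-up placeholders, NOT the published model.

COEFFS = {
    "intercept": -6.5,
    "age_per_year": 0.03,
    "female": 0.6,
    "family_history": 0.3,
    "emphysema": 0.3,
    "upper_lobe": 0.6,
    "part_solid": 0.4,
    "log_nodule_count": -0.8,   # fewer nodules -> higher per-nodule risk
    "spiculation": 0.7,
    "log_size_mm": 1.5,
}

def malignancy_risk(age, female, family_history, emphysema, upper_lobe,
                    part_solid, nodule_count, spiculation, size_mm):
    """Probability that a screen-detected nodule is malignant (illustrative)."""
    z = (COEFFS["intercept"]
         + COEFFS["age_per_year"] * age
         + COEFFS["female"] * female
         + COEFFS["family_history"] * family_history
         + COEFFS["emphysema"] * emphysema
         + COEFFS["upper_lobe"] * upper_lobe
         + COEFFS["part_solid"] * part_solid
         + COEFFS["log_nodule_count"] * math.log(nodule_count)
         + COEFFS["spiculation"] * spiculation
         + COEFFS["log_size_mm"] * math.log(size_mm))
    return 1 / (1 + math.exp(-z))   # logistic link

# a 9 mm spiculated upper-lobe solid nodule in a 65-year-old woman with emphysema
print(f"{malignancy_risk(65, 1, 0, 1, 1, 0, 1, 1, 9.0):.1%}")
```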

Read more

New Cassini data from Saturn’s largest moon Titan indicate a rigid, weathered ice shell

Aug. 28, 2013 — An analysis of gravity and topography data from Saturn’s largest moon, Titan, has revealed unexpected features of the moon’s outer ice shell. The best explanation for the findings, the authors said, is that Titan’s ice shell is rigid and that relatively small topographic features on the surface are associated with large roots extending into the underlying ocean. The study is published in the August 29 issue of the journal Nature.

Led by planetary scientists Douglas Hemingway and Francis Nimmo at the University of California, Santa Cruz, the study used new data from NASA’s Cassini spacecraft. The researchers were surprised to find a negative correlation between the gravity and topography signals on Titan.

“Normally, if you fly over a mountain, you expect to see an increase in gravity due to the extra mass of the mountain. On Titan, when you fly over a mountain the gravity gets lower. That’s a very odd observation,” said Nimmo, a professor of Earth and planetary sciences at UC Santa Cruz.

To explain that observation, the researchers developed a model in which each bump in the topography on the surface of Titan is offset by a deeper “root” big enough to overwhelm the gravitational effect of the bump on the surface. The root is like an iceberg extending below the ice shell into the ocean underneath it. “Because ice is lower density than water, you get less gravity when you have a big chunk of ice there than when you have water,” Nimmo explained.

An iceberg floating in water is in equilibrium, its buoyancy balancing out its weight. In this model of Titan, however, the roots extending below the ice sheet are so much bigger than the bumps on the surface that their buoyancy is pushing them up against the ice sheet. …
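The iceberg analogy can be made quantitative with Airy-style flotation (round illustrative densities, not the paper's fitted values): a fully compensated bump of height h on an ice shell floating on water needs a root roughly eleven times deeper, and the negative gravity anomalies imply roots even larger than this equilibrium value, held down by a rigid shell.

```latex
\rho_{\mathrm{ice}}\,(h + r) = \rho_{\mathrm{w}}\,r
\;\Longrightarrow\;
r = \frac{\rho_{\mathrm{ice}}}{\rho_{\mathrm{w}} - \rho_{\mathrm{ice}}}\,h
  \approx \frac{920}{1000 - 920}\,h \approx 11.5\,h
```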

Read more

Study relies on twins and their parents to understand height-IQ connection

Aug. 27, 2013 — The fact that taller people also tend to be slightly smarter is due in roughly equal parts to two phenomena — the same genes affect both traits, and taller people are more likely than average to mate with smarter people and vice versa — according to a study led by the University of Colorado Boulder.

The study did not find that environmental factors contributed to the connection between being taller and being smarter, both traits that people tend to find attractive.

The modest correlation between height and IQ has been documented in multiple studies stretching back to the 1970s. But the reasons for the relationship between the two traits have not been well understood. The technique developed by the researchers at CU-Boulder to tease out those reasons may open the door for scientists to better understand why other sexually selected traits — characteristics that individuals find desirable in mates — tend to be linked. People who are attractive because of one trait tend to have other attractive traits as well.

“Not just in humans but also in animals, you see that traits that are sexually attractive tend to be correlated,” said Matthew Keller, assistant professor of psychology and neuroscience at CU-Boulder and lead author of the study appearing in the journal PLOS Genetics. “So if you have animals that are high on one sexually selected trait they are often high on other ones, too. And the question has always been, ‘What’s the cause of that?’ And it has always been very difficult to tease apart the two potential genetic reasons that those could be related.”

The key to the technique developed by Keller, also a fellow at CU-Boulder’s Institute for Behavioral Genetics, and his colleagues is using data collected about fraternal twins, identical twins and, importantly, their parents.

It has been common in the past to use information about identical twins and fraternal twins to determine whether a particular trait is inherited, caused by environmental factors or affected by some combination of both. This kind of twin study assumes that each twin grows up with the same environmental factors as his or her sibling. If a trait that’s present in one twin is just as often present in the other — regardless of whether the twins are fraternal or identical — then the trait is likely caused by environmental conditions. On the other hand, if a trait is generally found in both identical twins but only in one of a set of fraternal twins, it’s likely that the trait is inherited, since identical twins share all of their genetic material but fraternal twins do not.

Similar studies also can be done for linked traits, such as height and IQ. But while scientists could determine that a pair of traits is passed down genetically, they could not further resolve whether inherited traits were linked due to the same genes influencing both traits, called “pleiotropy,” or because people who have those traits are more likely to mate with each other, known as “assortative mating.”

The new CU-Boulder study solves this problem by including the parents of twins in its analysis. While this has occasionally been done in the past for single traits, information on parents has not previously been used to shed light on why two traits are genetically correlated. …
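The classical twin logic described above is often summarized by Falconer's formulas (a standard textbook heuristic; the study's parent-augmented model goes beyond it), which estimate heritability h² and shared-environment influence c² for a single trait from the MZ (identical) and DZ (fraternal) twin correlations:

```latex
h^2 \approx 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}),
\qquad
c^2 \approx 2\,r_{\mathrm{DZ}} - r_{\mathrm{MZ}}
```

Such formulas say nothing about why two inherited traits correlate with each other, which is exactly the gap (pleiotropy versus assortative mating) that adding parental data is meant to close.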

Read more

New risk model sheds light on arsenic risk in China’s groundwater

Aug. 22, 2013 — Arsenic-laden groundwater used for cooking and drinking could pose a risk to the health of almost 20 million people across China. This is shown by a study carried out by Eawag scientists in collaboration with Chinese colleagues and published today in Science. The estimates are based on a risk model incorporating geological and hydrological data, as well as measurements of arsenic in wells. The study is being adopted by the authorities in the national groundwater monitoring programme.

Since the 1960s, it has been known that groundwater resources in certain provinces of China are contaminated with arsenic. Estimates of the numbers of affected people have risen year by year. In the most recent survey — conducted by the Chinese Ministry of Health between 2001 and 2005 — more than 20,000 (5%) of the 445,000 wells tested showed arsenic concentrations higher than 50 µg/L. According to official estimates, almost 6 million people consume drinking water with an arsenic content of more than 50 µg/L and almost 15 million are exposed to concentrations exceeding 10 µg/L (the guideline value recommended by the WHO).

Given the sheer size of China and the time and expense involved in testing for arsenic contamination, several more decades would probably be required to screen all of the millions of groundwater wells. Accordingly, a group of researchers from Eawag and the China Medical University in Shenyang developed a statistical risk model making use of existing data on geology, soil characteristics and topographic features. This model was calibrated using available arsenic measurements. …
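Once such a hazard model is calibrated, population-level exposure estimates come from overlaying the modelled probabilities on population data. A minimal sketch with invented numbers (not the study's data or cell definitions):

```python
# Sketch: turn a calibrated hazard model into an exposure estimate.
# For each map cell, multiply the modelled probability that wells exceed
# the 10 ug/L WHO guideline by the residents relying on groundwater there.
# The cells and numbers below are illustrative placeholders.

cells = [
    # (P(arsenic > 10 ug/L), residents using untreated groundwater)
    (0.70, 120_000),   # e.g. a Holocene alluvial basin
    (0.25, 300_000),   # e.g. a river delta margin
    (0.03, 900_000),   # e.g. upland terrain
]

at_risk = sum(p * pop for p, pop in cells)
print(f"expected population exposed: {at_risk:,.0f}")  # 186,000 in this toy map
```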

Read more

‘Zombie vortices’ may be key step in star formation

Aug. 20, 2013 — A new theory by fluid dynamics experts at the University of California, Berkeley, shows how “zombie vortices” help lead to the birth of a new star. Reporting Aug. 20 in the journal Physical Review Letters, a team led by computational physicist Philip Marcus shows how variations in gas density lead to instability, which then generates the whirlpool-like vortices needed for stars to form.

Astronomers accept that in the first steps of a new star’s birth, dense clouds of gas collapse into clumps that, with the aid of angular momentum, spin into one or more Frisbee-like disks where a protostar starts to form. But for the protostar to grow bigger, the spinning disk needs to lose some of its angular momentum so that the gas can slow down and spiral inward onto the protostar. Once the protostar gains enough mass, it can kick off nuclear fusion.

“After this last step, a star is born,” said Marcus, a professor in the Department of Mechanical Engineering.

What has been hazy is exactly how the cloud disk sheds its angular momentum so mass can feed into the protostar.

Destabilizing forces

The leading theory in astronomy relies on magnetic fields as the destabilizing force that slows down the disks. One problem in the theory has been that gas needs to be ionized, or charged with a free electron, in order to interact with a magnetic field. However, there are regions in a protoplanetary disk that are too cold for ionization to occur.

“Current models show that because the gas in the disk is too cool to interact with magnetic fields, the disk is very stable,” said Marcus. “Many regions are so stable that astronomers call them dead zones — so it has been unclear how disk matter destabilizes and collapses onto the star.”

The researchers said current models also fail to account for changes in a protoplanetary disk’s gas density based upon its height. “This change in density creates the opening for violent instability,” said study co-author Pedram Hassanzadeh, who did this work as a UC Berkeley Ph.D. student in mechanical engineering. When they accounted for density change in their computer models, 3-D vortices emerged in the protoplanetary disk, and those vortices spawned more vortices, leading to the eventual disruption of the protoplanetary disk’s angular momentum.

“Because the vortices arise from these dead zones, and because new generations of giant vortices march across these dead zones, we affectionately refer to them as ‘zombie vortices,’” said Marcus. …
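The vertical density variation the researchers point to is conventionally quantified by the Brunt-Väisälä (buoyancy) frequency; the definition below is the standard one, and the paper's stability analysis goes well beyond it:

```latex
% N^2 > 0: stable stratification that can support the internal dynamics
% from which the instability is seeded; g is gravity, rho the gas density.
N^2 = -\frac{g}{\rho}\,\frac{\partial \rho}{\partial z}
```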

Read more

Altruism or manipulated helping? Altruism may have origins in manipulation

Aug. 19, 2013 — Manipulation is often thought of as morally repugnant, but it might be responsible for the evolutionary origins of some helpful or altruistic behavior, according to a new study.

In evolutionary biology, manipulation occurs when an individual, the manipulator, alters the behavior of another individual in ways that are beneficial to the manipulator but may be detrimental to the manipulated individual. Manipulation occurs not only in humans and animals but also at the cellular level, such as among cells in a multicellular organism, or in parasites, which can alter the behavior of their hosts.

Consider the case of the parasitic roundworm Myrmeconema neotropicum, which, once ingested by the tropical ant Cephalotes atratus in Central and South America, causes the ant to grow a bright red abdomen, mimicking berries. This bright abdomen constitutes a phenotype manipulated by the roundworm. Birds eat the “berries,” or infected ants, and then spread the parasite in their droppings, which are subsequently collected by foraging Cephalotes atratus and fed to their larvae, and the cycle of manipulated behavior begins anew.

In the study published this week in the journal American Naturalist, the researchers developed a mathematical model for the evolution of manipulated behavior and applied it to maternal manipulation in eusocial organisms, such as ants, wasps, and bees, which form colonies with reproductive queens and sterile workers. In the model, mothers produce two broods, and they manipulate the first-brood offspring to stay in the maternal site and help raise the second brood. Mothers can do this by disrupting the offspring’s development in some way, for example through poor feeding or aggressive behavior. Manipulated offspring of the first brood stay and help to raise the second brood. Alternatively, first-brood offspring can resist manipulation and leave.

The researchers show that an offspring’s resistance to manipulation may often fail to evolve if the costs of resistance are high. In a sense, then, helping or altruistic behavior is coerced through manipulation.

“The evidence in so-called primitive eusociality, where helping is often coerced through aggression or differential feeding, appears consistent with these results,” said lead author Mauricio Gonzalez-Forero, who conducted the study while a graduate research assistant at the National Institute for Mathematical and Biological Synthesis.
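In Hamilton's-rule terms (a heuristic reading of this result, not the paper's actual model), voluntary helping is favored when rb − c > 0, where r is relatedness to the second brood, b the benefit conferred by staying, and c the cost to the helper. Manipulated helping can persist even when rb − c < 0, provided the cost of resisting, k, exceeds what resistance would recover:

```latex
% Heuristic condition, not the published model: with rb - c < 0,
% helping is coerced yet stable whenever resistance costs too much.
k \;>\; c - r\,b
```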

Read more

Answering critical questions to respond to anthrax attack

Aug. 15, 2013 — If terrorists targeted the United States with an anthrax attack, health care providers and policy makers would need key information — such as knowing the likelihood of an individual becoming infected, how many cases to expect and in what pattern, and how long to give antibiotics — to protect people from the deadly bacteria.

Those questions gained urgency when anthrax-laced letters killed five people and infected 17 others in the wake of the terror attacks of September 2001. Now, using information from prior animal studies and data from a deadly anthrax exposure accident in Russia in the late 1970s, University of Utah and George E. Wahlen Department of Veterans Affairs Medical Center researchers have developed a mathematical model to help answer critical questions and guide the response to a large-scale anthrax exposure.

In an Aug. 15, 2013, study in PLOS Pathogens online, the researchers use their model to estimate that for an individual to have a 50 percent chance of becoming infected with anthrax (known as ID50), he or she would have to inhale 11,000 spores of the bacteria. A 10 percent chance of being infected would require inhaling 1,700 spores and a 1 percent chance of infection would occur by inhaling 160 spores. The researchers also found that at ID50, the median time for anthrax symptoms to appear is 9.9 days and that the optimal time to take antibiotics is 60 days.

“Anthrax is a well-studied disease and experimental animal data exist, but there is no real good information on dose response for the disease in humans,” says Adi V. Gundlapalli, M.D., Ph.D., an infectious diseases specialist and epidemiologist, associate professor of internal medicine at the U of U School of Medicine and staff physician at the Salt Lake City George E. Wahlen Department of Veterans Affairs Medical Center. “We don’t want to be overly fearful, but we need to be prepared in the event of a bioterrorism attack with anthrax.”

Although studies with animals at other institutions have looked at anthrax, the data are limited and usually involved vaccine testing and not exposure amounts for infection. …
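Conveniently, the three quoted dose-response points are mutually consistent with a simple exponential ("single-hit") model, a standard family for inhalational pathogens; the check below derives the per-spore rate from the quoted ID50. The model choice here is this writeup's illustration, not necessarily the authors' full model.

```python
import math

# Exponential ("single-hit") dose-response: P(d) = 1 - exp(-r * d),
# with r fixed by the ID50 quoted in the article.

ID50 = 11_000                  # spores for a 50% chance of infection
r = math.log(2) / ID50         # per-spore rate implied by the ID50

def p_infect(dose_spores: float) -> float:
    return 1 - math.exp(-r * dose_spores)

for dose in (160, 1_700, 11_000):
    print(f"{dose:>6} spores -> {p_infect(dose):5.1%}")
# 160 -> 1.0%, 1700 -> 10.2%, 11000 -> 50.0%, matching the quoted figures
```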

Read more

Computer model predicts red blood cell flow

Aug. 13, 2013 — Adjacent to the walls of our arterioles, capillaries, and venules — the blood vessels that make up our microcirculation — there exists a peculiar thin layer of clear plasma, devoid of red blood cells. Although it is just a few millionths of a meter thick, that layer is vital. It controls, for example, the speed with which platelets can reach the site of a cut and start the clotting process.

“If you destroy this layer, your bleeding time can go way up, by 60 percent or more, which is a real issue in trauma,” said Eric Shaqfeh, the Lester Levi Carter Professor and a professor of chemical engineering and mechanical engineering at Stanford University. Along with his colleagues, Shaqfeh has now created the first simplified computer model of the process that forms that layer — a model that could help to improve the design of artificial platelets and medical treatments for trauma injuries and for blood disorders such as sickle cell anemia and malaria. The model is described in a paper appearing in the journal Physics of Fluids.

The thin plasma layer, known as the Fåhræus-Lindqvist layer, is created naturally when blood flows through small vessels. In the microcirculation, the layer forms because red blood cells tend to naturally deform and lift away from the vessel walls. “The reason they don’t just continually move away from the wall and go far away is that, as they move away, they also collide with other red blood cells, which force them back,” Shaqfeh explained. “So the Fåhræus-Lindqvist layer represents a balance between this lift force and collisional forces that exist in the blood.”

Because the deformation of red blood cells is a key factor in the Fåhræus-Lindqvist layer, its properties are altered in diseases, such as sickle cell anemia, that affect the shape and rigidity of those cells. The new model, which is a scaled-down version of an earlier numerical model by Shaqfeh and colleagues that provided the first large-scale, quantitative explanation of the formation of the layer, can predict how blood cells with varying shapes, sizes, and properties — including the crescent-shaped cells that are the hallmark of sickle cell anemia — will influence blood flow.

The model can also help predict the outcome of — and perfect — treatments for trauma-related injuries. One common thing to do during treatment for trauma injuries is to inject saline, which among other things reduces the hematocrit, the blood fraction of red blood cells. …
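Shaqfeh's description of the layer as a lift-collision balance is often written as a steady flux balance; the version below is a scaling sketch under standard assumptions (shear-induced collisional diffusion), not the paper's exact equations. Here φ is the red-cell volume fraction, γ̇ the local shear rate, a the cell size, and y the distance from the wall:

```latex
% Steady state: deformation-induced lift flux away from the wall balances
% the collisional (shear-induced diffusive) flux back toward it.
v_{\mathrm{lift}}\,\phi \;=\; D(\phi)\,\frac{\partial \phi}{\partial y},
\qquad
D(\phi) \sim \dot{\gamma}\,a^{2}\,\phi
```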

Read more
