Preoperative PET cuts unnecessary lung surgeries in half

New quantitative data suggests that 30 percent of the surgeries performed on non-small cell lung cancer patients in a community-wide clinical study were unnecessary, and that positron emission tomography (PET) reduced unnecessary surgeries by 50 percent, according to research published in the March issue of the Journal of Nuclear Medicine.

PET imaging prior to surgery helps stage a patient’s disease by providing functional images of tumors throughout the body, especially areas where the cancer has spread, otherwise known as metastasis. Few studies have been able to pin down exactly what impact preoperative PET has on clinical decision-making and the resulting treatment. A preliminary review of the data from this long-term, observational study of an entire community of veterans was inconclusive about the utility of PET, but after a more thorough statistical analysis accounting for selection bias and other confounding factors, the researchers were able to conclude that PET imaging eliminated approximately half of unnecessary surgeries.

“It has become standard of care for lung cancer patients to receive preoperative PET imaging,” said Steven Zeliadt, PhD, lead author of the study, conducted at VA Puget Sound Health Care System, and associate professor at the University of Washington in Seattle, Wash. “The prevailing evidence reinforces the general understanding within the medical community that PET is very useful for identifying occult metastasis and that it helps get the right people to surgery while avoiding unnecessary surgeries for those who would not benefit.”

For this study, researchers reviewed newly diagnosed non-small cell lung cancer patients who received preoperative PET to assess its real-life effectiveness as a safeguard against unnecessarily invasive treatment across a community of patients. A total of 2,977 veterans who underwent PET during disease staging from 1997 to 2009 were included in the study. Of these, 976 patients underwent surgery to resect their lung cancer. During surgery or within 12 months of surgery, 30 percent of these patients were found to have advanced-stage metastatic disease, indicating that the surgery had been unnecessary.

Interestingly, the use of PET increased during the study period from 9 percent to 91 percent. Conventional multivariate analyses were followed by instrumental variable analyses to account for unobserved confounding, such as patients not undergoing PET when it would have been clinically recommended. This new data has the potential to change policy and recommendations regarding the use of oncologic PET for more accurate tumor staging.

“We will likely build more quality measures around this research so that preoperative PET is more strongly recommended to improve the management of care for these patients,” added Zeliadt.

Story Source: The above story is based on materials provided by the Society of Nuclear Medicine. …
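
The article names the statistical machinery (conventional regression followed by an instrumental variable analysis) without spelling it out. One standard way to implement an instrumental variable analysis is two-stage least squares; the sketch below is a minimal, hypothetical illustration of that technique on simulated data, not the study’s actual model, variables, or estimates.

    import numpy as np

    # Hypothetical setup: z is an instrument (e.g., how routine PET use was
    # at a patient's facility), x is whether the patient received PET, and
    # y is whether the surgery was later deemed unnecessary. All simulated.
    rng = np.random.default_rng(0)
    n = 2977
    z = rng.uniform(0.09, 0.91, n)                 # instrument
    x = (rng.uniform(size=n) < z).astype(float)    # PET use, driven by z
    y = 0.30 - 0.15 * x + rng.normal(0.0, 0.1, n)  # outcome, true effect -0.15

    # Stage 1: regress treatment on the instrument (with an intercept).
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

    # Stage 2: regress the outcome on the *predicted* treatment.
    X = np.column_stack([np.ones(n), x_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"IV estimate of PET's effect: {beta[1]:+.3f}")  # close to -0.15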

Read more

Audio Fest NOW and Best Buy!

~Written in partnership with Best Buy and their Audio Fest. All opinions are my own.

Did you know Audio Fest is happening right now at Best Buy stores across the country? It runs from March 2nd to April 4th, 2014, and is filled with specials, deals, and events for all things audio, making Best Buy the place to be! I just went this weekend. We’re big music fans, especially since having kids, because there’s nothing better to cure a bad day or a sad mood than a DANCE PARTY!

Maybe you have a home entertainment area? Ours is in our basement, and our 4-year-old calls it the movie theater. It makes family movie nights extra special! Best Buy can enhance your experience and upgrade your …

Read more

Human and dog brains both have dedicated ‘voice areas’

The first study to compare brain function between humans and any nonprimate animal shows that dogs have dedicated voice areas in their brains, just as people do. Dog brains, like those of people, are also sensitive to acoustic cues of emotion, according to a study published in the Cell Press journal Current Biology on February 20.

The findings suggest that voice areas evolved at least 100 million years ago, the age of the last common ancestor of humans and dogs, the researchers say. The study also offers new insight into humans’ unique connection with our best friends in the animal kingdom and helps to explain the behavioral and neural mechanisms that have made this alliance so effective for tens of thousands of years.

“Dogs and humans share a similar social environment,” says Attila Andics of the MTA-ELTE Comparative Ethology Research Group in Hungary. “Our findings suggest that they also use similar brain mechanisms to process social information. This may support the success of vocal communication between the two species.”

Andics and his colleagues trained 11 dogs to lie motionless in an fMRI brain scanner. That made it possible to run the same neuroimaging experiment on both dog and human participants — something that had never been done before. They captured both dogs’ and humans’ brain activity while the subjects listened to nearly 200 dog and human sounds, ranging from whining or crying to playful barking or laughing.

The images show that dog and human brains include voice areas in similar locations. Not surprisingly, the voice area of dogs responds more strongly to other dogs, while that of humans responds more strongly to other humans.

The researchers also noted striking similarities in the ways the dog and human brains process emotionally loaded sounds. In both species, an area near the primary auditory cortex lit up more with happy sounds than unhappy ones. Andics says the researchers were most struck by this common response to emotion across species.

There were some differences, too: in dogs, 48% of all sound-sensitive brain regions respond more strongly to sounds other than voices. …

Read more

To hear without being heard: First nonreciprocal acoustic circulator created

A team of researchers at The University of Texas at Austin’s Cockrell School of Engineering has built the first-ever circulator for sound. The team’s experiments successfully prove that the fundamental symmetry with which acoustic waves travel through air between two points in space (“if you can hear, you can also be heard”) can be broken by a compact and simple device.

“Using the proposed concept, we were able to create one-way communication for sound traveling through air,” said Andrea Alù, who led the project and is an associate professor and David & Doris Lybarger Endowed Faculty Fellow in the Cockrell School’s Department of Electrical and Computer Engineering. “Imagine being able to listen without having to worry about being heard in return.”

This successful experiment is described in “Sound Isolation and Giant Linear Nonreciprocity in a Compact Acoustic Circulator,” which will be featured on the cover of Science in the Jan. 31 issue.

An electronic circulator, typically used in communication devices and radars, is a nonreciprocal three-port device in which microwaves or radio signals are transmitted from one port to the next in a sequential way. When one of the ports is not used, the circulator acts as an isolator, allowing signals to flow from one port to the other, but not back. The UT Austin team realized that the same functionality could be achieved for sound waves traveling in air, which led the team to build a first-of-its-kind three-port acoustic circulator.

Romain Fleury, the paper’s first author and a Ph.D. student in Alù’s group, said the circulator “is basically a one-way road for sound. The circulator can transmit acoustic waves in one direction but block them in the other, in a linear and distortion-free way.”

The scientific knowledge gained from successfully building a nonreciprocal sound circulator may lead to advances in noise control, new acoustic equipment for sonars and sound communication systems, and improved compact components for acoustic imaging and sensing.

“More broadly, our paper proves a new physical mechanism to break time-reversal symmetry and subsequently induce nonreciprocal transmission of waves, opening important possibilities beyond applications in acoustics,” Alù said. “Using the same concept, it may actually be possible to construct simpler, smaller and cheaper electronic circulators and other electronic components for wireless devices, as well as to create one-way communication channels for light.”

This research may eventually allow for an “acoustical version of one-way glass,” said Preston Wilson, acoustics expert and associate professor in the Department of Mechanical Engineering. “It also opens up avenues for very efficient sound isolation and interesting new concepts for active control of sound isolators.”

At the core of the team’s sound circulator is a resonant ring cavity loaded with three small computer fans that circulate the airflow at a specific velocity. …
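
For readers who want the idea in concrete terms: an ideal three-port circulator passes a signal from port 1 to port 2, port 2 to port 3, and port 3 to port 1, so its scattering matrix is asymmetric, which is the mathematical signature of broken reciprocity. The following toy snippet (my illustration, not a model of the UT Austin device) makes that one-way routing explicit.

    import numpy as np

    # Ideal, lossless 3-port circulator: whatever enters port k exits
    # port k+1 (indices mod 3). Note S is not equal to its transpose,
    # i.e., transmission is nonreciprocal.
    S = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

    into_port_1 = np.array([1.0, 0.0, 0.0])
    print(S @ into_port_1)  # [0. 1. 0.] -> exits port 2

    into_port_2 = np.array([0.0, 1.0, 0.0])
    print(S @ into_port_2)  # [0. 0. 1.] -> exits port 3, never back to 1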

Read more

Low-voiced men love ’em and leave ’em, yet still attract more women

Oct. 16, 2013 — Men with low-pitched voices have an advantage in attracting women, even though women know they’re not likely to stick around for long.

Researchers at McMaster University have found that women were more attracted to men with masculine voices, at least for short-term relationships. Those men were also seen as more likely to cheat and as unsuitable for a longer relationship, such as marriage. The study, published online in the journal Personality and Individual Differences, offers insight into the evolution of the human voice and how we choose our mates.

“The sound of someone’s voice can affect how we think of them,” explains Jillian O’Connor, a postdoctoral fellow in the Department of Psychology, Neuroscience & Behaviour and lead author of the study. “Until now, it’s been unclear why women would like the voices of men who might cheat. But we found that the more women thought these men would cheat, the more they were attracted to them for a brief relationship, when they are less worried about fidelity.”

For the study, 87 women listened to men’s voices that had been manipulated electronically to sound higher or lower, and then chose who they thought was more likely to cheat on their romantic partner. Researchers also asked the participants to choose the voice they thought was more attractive for a long-term versus a short-term relationship.

“From an evolutionary perspective, these perceptions of future sexual infidelity may be adaptive,” explains David Feinberg, an assistant professor in the Department of Psychology, Neuroscience & Behaviour. “The consequences of infidelity are very high, whether it is emotional or financial, and this research suggests that humans have evolved these perceptions as a protection mechanism to avoid long-term partners who may cheat,” he says.

Read more

Hate the sound of your voice? Not really

Sep. 12, 2013 — It turns out we really do like the sound of our own voice. We just may not realize it.

A new study by Albright College has found that people unknowingly assessed their own recorded voices as sounding more attractive than others rated them, which is considered a form of unconscious self-enhancement.

“People generally tend to have an enhanced sense about themselves,” says Susan Hughes, associate professor of psychology. “Often people will think they are more attractive or possess better qualities than they actually do. This is sometimes used as a mechanism to build self-esteem or fight against depression.”

The findings are included in a new article, “I Like My Voice Better: Self-Enhancement Bias in Perceptions of Voice Attractiveness,” to be published later this month in the scholarly journal Perception. The study is co-authored by Marissa Harrison, Ph.D., assistant professor of psychology at Penn State University’s Harrisburg campus.

For the study, 80 men and women assessed the voice attractiveness of an array of different voice recordings of people counting from one to 10. Unbeknownst to participants, researchers included three different samples of each participant’s own voice in the set. Researchers believe that most participants did not recognize or realize their own voices were included, yet they rated their own voices as sounding more attractive than other raters judged them. Participants also rated their own voices more favorably than they rated the voices of other people.

“Given this age of heightened narcissism, this study provides further evidence that individuals seem to inflate their opinions of themselves by thinking the sound of their own voices is more attractive,” says Hughes.

The article suggests that participants may also have preferred their own voices due to a mere exposure effect and the tendency to like the familiar. This effect may have been a factor even if participants were not overtly aware they were hearing their own voices, according to the study.

Hughes, an expert in evolutionary psychology and voice perception, was surprised by the results, especially since many people report not liking the sound of their recorded voice. …

Read more

Look at what I’m saying: Engineers show brain depends on vision to hear

Sep. 4, 2013 — University of Utah bioengineers discovered our understanding of language may depend more heavily on vision than previously thought: under the right conditions, what you see can override what you hear. These findings suggest artificial hearing devices and speech-recognition software could benefit from a camera, not just a microphone.

“For the first time, we were able to link the auditory signal in the brain to what a person said they heard when what they actually heard was something different. We found vision is influencing the hearing part of the brain to change your perception of reality — and you can’t turn off the illusion,” says the new study’s first author, Elliot Smith, a bioengineering and neuroscience graduate student at the University of Utah. “People think there is this tight coupling between physical phenomena in the world around us and what we experience subjectively, and that is not the case.”

The brain considers both sight and sound when processing speech. However, if the two are slightly different, visual cues dominate sound. This phenomenon is named the McGurk effect for Scottish cognitive psychologist Harry McGurk, who pioneered studies on the link between hearing and vision in speech perception in the 1970s. The McGurk effect has been observed for decades; however, its origin has been elusive.

In the new study, which appears today in the journal PLOS ONE, the University of Utah team pinpointed the source of the McGurk effect by recording and analyzing brain signals in the temporal cortex, the region of the brain that typically processes sound.

Working with University of Utah bioengineer Bradley Greger and neurosurgeon Paul House, Smith recorded electrical signals from the brain surfaces of four severely epileptic adults (two male, two female) from Utah and Idaho. House placed three button-sized electrodes on the left, right or both brain hemispheres of each test subject, depending on where each patient’s seizures were thought to originate. …

Read more

Echolocation for humans: Playing it by ear

Aug. 29, 2013 — Biologists at Ludwig-Maximilians-Universitaet (LMU) in Munich have demonstrated that people can acquire the capacity for echolocation, although it does take time and work.

As blind people can testify, we humans can hear more than one might think. The blind learn to navigate using as guides the echoes of sounds they themselves make. This enables them to sense the locations of walls and corners, for instance: by tapping the ground with a stick or making clicking sounds with the tongue, and analyzing the echoes reflected from nearby surfaces, a blind person can map the relative positions of objects in the vicinity. LMU biologists led by Professor Lutz Wiegrebe of the Department of Neurobiology (Faculty of Biology) have now shown that sighted people can also learn to echolocate objects in space, as they report in the biology journal Proceedings of the Royal Society B.

Wiegrebe and his team have developed a method for training people in the art of echolocation. With the help of a headset consisting of a microphone and a pair of earphones, experimental subjects can generate patterns of echoes that simulate acoustic reflections in a virtual space: the participants emit vocal clicks, which are picked up by the microphone and passed to a processor that calculates the echoes of a virtual space within milliseconds. The resulting echoes are then played back through the earphones. The trick is that the transformation applied to the input depends on the subject’s position in virtual space, so the subject can learn to associate the artificial “echoes” with the distribution of sound-reflecting surfaces in the simulated space.

A dormant skill

“After several weeks of training, the participants in the experiment were able to locate the sources of echoes pretty well. This shows that anyone can learn to analyze the echoes of acoustic signals to obtain information about the space around him. …
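
The simulation principle is simple enough to sketch: a first-order echo is the emitted click, delayed by the round-trip time 2d/c and attenuated with distance. The toy function below is my own single-wall illustration with an invented attenuation law, not the LMU team’s real-time processor.

    import numpy as np

    def add_echo(click, distance_m, fs=44100, c=343.0, reflectivity=0.8):
        # One reflecting surface at distance_m: the echo arrives after the
        # round-trip delay 2*d/c and comes back weaker than the direct click.
        delay = int(round(2 * distance_m / c * fs))
        out = np.zeros(len(click) + delay)
        out[:len(click)] += click                       # direct sound
        gain = reflectivity / (1.0 + distance_m)        # invented falloff
        out[delay:delay + len(click)] += gain * click   # echo
        return out

    fs = 44100
    t = np.linspace(0.0, 0.002, int(0.002 * fs))        # 2 ms tongue-click stand-in
    click = np.random.randn(t.size) * np.exp(-2000.0 * t)
    echoed = add_echo(click, distance_m=1.7)            # echo lands ~10 ms later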

Read more

Cellular channels vital for hearing identified

July 18, 2013 — Ending a 30-year search by scientists, researchers at Boston Children’s Hospital have identified two proteins in the inner ear that are critical for hearing and which, when damaged by genetic mutations, cause a form of delayed, progressive hearing loss. Findings were published online July 18 by the journal Neuron.

The mutations, affecting genes known as TMC1 and TMC2, were reported in 2011 by the laboratory of Jeffrey Holt, PhD, in the Department of Otolaryngology at Boston Children’s. Until now, however, it wasn’t clear what the genes do. In the new study, Holt and colleagues at the National Institute on Deafness and Other Communication Disorders (NIDCD) show that the proteins encoded by the genes form channels that turn mechanical sound waves into electrical signals that talk to the brain. A tiny point mutation — a change in one base or “letter” in the genetic sequence — is enough to cause deafness.

Corresponding channels for each of the other senses were identified years ago, but the sensory transduction channel for both hearing and the sense of balance had remained a mystery, says Holt.

The study involved so-called Beethoven mice, which carry mutations in TMC1 and become deaf by their second month of life. Each mutation has a human counterpart that causes a prominent form of genetic deafness, in which children become completely deaf by the age of 10 to 15 years.

Studies of sensory hair cells from the cochleas of the mice, which sense sound vibrations and signal the brain, showed that the TMC1 and TMC2 proteins are necessary to get calcium into the cells. The researchers showed that when TMC1 was mutated, the calcium influx was reduced and the resulting electrical current in response to sound was weaker. “This is the smoking gun we’ve been looking for,” says Holt.

The study also provided evidence that:

- The TMC1 and TMC2 proteins act as backups for each other, explaining why hearing loss is gradual and not immediate. “TMC2 can compensate for loss of function of TMC1, but not completely,” Holt says.
- The two proteins can create channel structures either singly or combined in groups, suggesting they may help make different hair cells sensitive to different pitch ranges. …

Read more

Inner speech speaks volumes about the brain

July 16, 2013 — Whether you’re reading the paper or thinking through your schedule for the day, chances are that you’re hearing yourself speak even if you’re not saying words out loud. This internal speech — the monologue you “hear” inside your head — is a ubiquitous but largely unexamined phenomenon. A new study looks at a possible brain mechanism that could explain how we hear this inner voice in the absence of actual sound.

In two experiments, researcher Mark Scott of the University of British Columbia found evidence that a brain signal called corollary discharge — a signal that helps us distinguish the sensory experiences we produce ourselves from those produced by external stimuli — plays an important role in our experiences of internal speech. The findings from the two experiments are published in Psychological Science, a journal of the Association for Psychological Science.

Corollary discharge is a kind of predictive signal generated by the brain that helps to explain, for example, why other people can tickle us but we can’t tickle ourselves. The signal predicts our own movements and effectively cancels out the tickle sensation. And the same mechanism plays a role in how our auditory system processes speech: when we speak, an internal copy of the sound of our voice is generated in parallel with the external sound we hear.

“We spend a lot of time speaking and that can swamp our auditory system, making it difficult for us to hear other sounds when we are speaking,” Scott explains. “By attenuating the impact our own voice has on our hearing — using the ‘corollary discharge’ prediction — our hearing can remain sensitive to other sounds.”

Scott speculated that the internal copy of our voice produced by corollary discharge can be generated even when there isn’t any external sound, meaning that the sound we hear when we talk inside our heads is actually the internal prediction of the sound of our own voice. If corollary discharge does in fact underlie our experiences of inner speech, he hypothesized, then the sensory information coming from the outside world should be cancelled out by the internal copy produced by our brains if the two sets of information match, just like when we try to tickle ourselves.

And this is precisely what the data showed. The impact of an external sound was significantly reduced when participants said a syllable in their heads that matched the external sound. Their performance was not significantly affected, however, when the syllable they said in their head didn’t match the one they heard.

These findings provide evidence that internal speech makes use of a system that is primarily involved in processing external speech, and may help shed light on certain pathological conditions. “This work is important because this theory of internal speech is closely related to theories of the auditory hallucinations associated with schizophrenia,” Scott concludes.

This research was supported by grants from the Natural Sciences and Engineering Research Council of Canada to Bryan Gick, Janet F. Werker and Eric Vatikiotis-Bateson.

Read more

The sounds of science: Melting of iceberg creates surprising ocean din

July 10, 2013 — There is growing concern about how much noise humans generate in marine environments through shipping, oil exploration and other developments, but a new study has found that naturally occurring phenomena can also generate significant underwater noise that could affect some ocean dwellers.

Nowhere is this concern greater than in the polar regions, where the effects of global warming often first manifest themselves. The breakup of ice sheets and the calving and grounding of icebergs can create enormous sound energy, scientists say. Now a new study has found that the mere drifting of an iceberg from near Antarctica to warmer ocean waters produces startling levels of noise. Results of the study are being published this month in Oceanography.

A team led by Oregon State University researchers used an array of hydrophones to track the sound produced by an iceberg through its life cycle, from its origin in the Weddell Sea to its eventual demise in the open ocean. The goal of the project was to measure baseline levels of this kind of naturally occurring sound in the ocean, so it can be compared to anthropogenic noises.

“During one hour-long period, we documented that the sound energy released by the iceberg disintegrating was equivalent to the sound that would be created by a few hundred supertankers over the same period,” said Robert Dziak, a marine geologist at OSU’s Hatfield Marine Science Center in Newport, Ore., and lead author on the study. “This wasn’t from the iceberg scraping the bottom,” he added. “It was from its rapid disintegration as the berg melted and broke apart. We call the sounds ‘icequakes’ because the process and ensuing sounds are much like those produced by earthquakes.”

Dziak is a scientist with the Cooperative Institute for Marine Resources Studies (CIMRS), a collaborative program between Oregon State University and NOAA based at OSU’s Hatfield center. He is also on the faculty of OSU’s College of Earth, Ocean, and Atmospheric Sciences.

When scientists first followed the iceberg, it encountered a 124-meter-deep shoal, causing it to rotate and grind across the seafloor; it then generated semi-continuous harmonic tremors for the next six days. The iceberg then entered Bransfield Strait and became fixed over a 265-meter-deep shoal, where it began to pinwheel, and the harmonic tremors became shorter and less pronounced. It wasn’t until the iceberg broke loose and drifted into the warmer waters of the Scotia Sea that the real action began. …

Read more

Moths talk about sex in many ways

July 8, 2013 — Moths are nocturnal, and they have one major enemy: the bat. As a defense, many moths developed ears sensitive to the bat’s echolocation cries, and they have also developed different behaviors to avoid bats. Now it turns out that many moths are able to use both their hearing and their avoidance behavior for an entirely different purpose: to communicate about sex. According to a Danish/Japanese research team, the various moth species probably talk about sex in a great number of different ways. This sheds new light on the evolution of sound communication and behavior.

Moths probably developed ears for the sole purpose of hearing whether their worst enemy, the bat, is near. It has long been thought that moths were mute, but many of them actually produce sounds — just so softly that bats cannot hear them. The moths use the sounds to communicate sexually. Scientists have known this for a few years, and new research now reveals that moths have developed different ways to use not only their sense of hearing but also the avoidance behavior that was originally developed as a defense against bats.

“We have examined two different moths and seen that they use their ears and behavior quite differently when they communicate sexually. There is no reason to believe that other moths do not do it in their own way, too. The variation in how to use these skills must be huge,” says sensory physiology researcher Annemarie Surlykke from the Department of Biological Sciences, University of Southern Denmark (SDU).

She and her Japanese colleagues from the University of Tokyo have studied two species, the Asian corn borer moth (Ostrinia furnacalis) and the Japanese lichen moth (Eilema japonica). …

Read more

Hawkmoths use ultrasound to combat bats

July 4, 2013 — For years, pilots flying into combat have jammed enemy radar to get the drop on their opponents. It turns out that moths can do it, too.

A new study co-authored by a University of Florida researcher shows that hawkmoths respond to the high-frequency sounds of echolocating bats with ultrasonic pulses from their genitals, possibly as a self-defense mechanism to jam their predators’ echolocation. Echolocation research may be used to better understand or improve ultrasound as a vital tool in medicine, used for observing prenatal development, measuring blood flow and diagnosing tumors, among other things. The study appears online today in the journal Biology Letters.

Study co-author Akito Kawahara, assistant curator of Lepidoptera at the Florida Museum of Natural History on the UF campus, said ultrasound production has only been demonstrated in one other moth group. “This is just the first step toward understanding a really interesting system,” Kawahara said. “Echolocation research has been focused on porpoises, whales and dolphins. We know some insects produce the sounds, but this discovery of an unrelated animal making ultrasound, potentially to jam the echolocation of bats, is exciting.”

Hawkmoths are major pollinators, and some are agricultural pests. Researchers use the insects as model organisms for genetic research due to their large size.

Previous research shows tiger moths use ultrasound as a defense mechanism. While they produce the sound using tymbals, vibrating membranes located on the thorax, hawkmoths use a system located in the genitals. Scientists found that at least three hawkmoth species produce ultrasonic sound, including females. Researchers believe hawkmoths may produce the sound as a physical defense, to warn others, or to jam the bats’ echolocation, which confuses the predators so they may fail to identify an object or interpret where it is located, Kawahara said.

The study was conducted in Malaysia, which has the highest diversity of hawkmoths worldwide, and was funded by a National Science Foundation grant of about $500,000. Kawahara also conducted research in the jungles of Borneo and the lower Amazon.

“So much work has been focused on animals that are active during the day, but there are a lot of really interesting things happening at night, and we just don’t know a lot about what is actually going on — because we can’t hear or see it,” Kawahara said. …

Read more

Listening to blood cells: Simple test could use sound waves for diagnosing blood-related diseases

July 2, 2013 — New research reveals that when red blood cells are hit with laser light, they produce high-frequency sound waves that contain a great deal of information. Much as one can hear the voices of different people and identify who they are, investigators reporting in the July 2 issue of Biophysical Journal, published by Cell Press, could analyze the sound waves produced by red blood cells and recognize their shape and size. The information may aid in the development of simple tests for blood-related diseases.

“We plan to make specialized devices that will allow the detection of individual red blood cells and analyze the photoacoustic signals they produce to rapidly diagnose red blood cell pathologies,” says senior author Dr. Michael Kolios, of Ryerson University, Toronto.

Deviations from the regular biconcave shape of a red blood cell are a significant indicator of blood-related disease, whether they result from genetic abnormalities, from infectious agents, or simply from a chemical imbalance. For example, malaria patients’ red blood cells are irregularly swollen, while those of patients with sickle cell anemia take on a rigid, sickle shape.

Using a special photoacoustic microscope that detects sound, the investigators were able to differentiate healthy red blood cells from irregularly shaped ones with high confidence, using a sample size of just 21 cells. Because each measurement takes only a fraction of a second, the method could eventually be incorporated into an automated device for rapid characterization of red blood cells from a single drop of blood obtained in the clinic.

“We are currently developing a microfluidic device, which integrates the laser and probes and flows single cells through the target area. This would enable measuring thousands of cells in a very short period of time with minimal user involvement,” says first author Eric Strohm, a graduate student in Dr. Kolios’ laboratory. The investigators are applying the method to other types of cells as well, including white blood cells, and they are also using it to detect changes in photoacoustic signals that occur when blood cells clump together to form dangerous blood clots.

Story Source: The above story is reprinted from materials provided by Cell Press, via EurekAlert!, a service of AAAS. …

Read more

Practical new approach to holographic video could also enable 2-D displays with higher resolution and lower power consumption

June 19, 2013 — Today in the journal Nature, researchers at MIT’s Media Lab report a new approach to generating holograms that could lead to color holographic-video displays that are much cheaper to manufacture than today’s experimental, monochromatic displays. The same technique could also increase the resolution of conventional 2-D displays.

Using the new technique, Daniel Smalley, a graduate student in the Media Lab and first author on the new paper, is building a prototype color holographic-video display whose resolution is roughly that of a standard-definition TV and which can update video images 30 times a second, fast enough to produce the illusion of motion. The heart of the display is an optical chip, resembling a microscope slide, that Smalley built, using only MIT facilities, for about $10.

“Everything else in there costs more than the chip,” says Smalley’s thesis advisor, Michael Bove, a principal research scientist at the Media Lab and head of its Object-Based Media Group. “The power supplies in there cost more than the chip. The plastic costs more than the chip.”

Joining Bove and Smalley on the Nature paper are two other graduate students in Bove’s group, James Barabas and Sundeep Jolly, and Quinn Smithwick, who was a postdoc at MIT at the time but is now a research scientist at Disney Research.

When light strikes an object with an irregular surface, it bounces off at a huge variety of angles, so that different aspects of the object are disclosed when it’s viewed from different perspectives. In a hologram, a beam of light passes through a so-called diffraction fringe, which bends the light so that it, too, emerges at a host of different angles. One way to produce holographic video is to create diffraction fringes from patterns displayed on an otherwise transparent screen. The problem with that approach, Bove explains, is that the pixels of the diffraction pattern have to be as small as the wavelength of the light they’re bending, and “most display technologies don’t happily shrink down that much.”

Sound footing

Stephen Benton, a Media Lab professor who died in 2003, created one of the first holographic-video displays by adopting a different technique, called acousto-optic modulation, in which precisely engineered sound waves are sent through a piece of transparent material. “The waves basically squeeze and stretch the material, and they change its index of refraction,” Bove says. “So if you shine a laser through it, [the waves] diffract it.”

Benton’s most sophisticated display — the Mark-II, which was built with the help of Bove’s group — applied acousto-optic modulation to a crystal of an expensive material called tellurium dioxide. “That was the biggest piece of tellurium dioxide crystal that had ever been grown,” Bove says. …
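
Why must the fringe pixels be wavelength-sized? The grating equation, sin(theta) = lambda / d, ties the deflection angle theta to the fringe pitch d; only pitches approaching the wavelength lambda give the wide angles a hologram needs. A quick back-of-the-envelope check (numbers mine, not from the article):

    import math

    wavelength_nm = 532.0  # a typical green laser line (assumed)
    for pitch_nm in (10000.0, 2000.0, 1000.0, 600.0):
        # First-order grating equation: sin(theta) = wavelength / pitch.
        angle = math.degrees(math.asin(wavelength_nm / pitch_nm))
        print(f"fringe pitch {pitch_nm:7.0f} nm -> deflects {angle:5.1f} deg")
    # A 10-micron "pixel" bends the light only ~3 degrees, while a
    # 600 nm pitch reaches ~62 degrees: hence wavelength-scale features.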

Read more

Sound waves precisely position nanowires

June 19, 2013 — The smaller components become, the more difficult it is to create patterns in an economical and reproducible way, according to an interdisciplinary team of Penn State researchers who, using sound waves, can place nanowires in repeatable patterns for potential use in a variety of sensors, optoelectronics and nanoscale circuits.

“There are ways to create these devices with lithography, but it is very hard to create patterns below 50 nanometers using lithography,” said Tony Jun Huang, associate professor of engineering science and mechanics, Penn State. “It is rather simple now to make metal nanomaterials using synthetic chemistry. Our process allows pattern transfer of arrays of these nanomaterials onto substrates that might not be compatible with conventional lithography. For example, we could make networks of wires and then pattern them onto arrays of living cells.”

The researchers looked at the placement of metallic nanowires in solution on a piezoelectric substrate. Piezoelectric materials move when an electric voltage is applied to them and create an electric voltage when compressed. In this case, the researchers applied an alternating current to the substrate so that the material’s movement created a standing surface acoustic wave in the solution. A standing wave has node locations that do not move, so the nanowires migrate to these nodes and remain there.

If the researchers apply only one current, the nanowires form a one-dimensional array with the nanowires lined up head to tail in parallel rows. If perpendicular currents are used, a two-dimensional grid of standing waves forms, and the nanowires move to those grid-point nodes to form a three-dimensional spark-like pattern.

“Because the pitch of both the one-dimensional and two-dimensional structures is sensitive to the frequency of the standing surface acoustic wave field, this technique allows for the patterning of nanowires with tunable spacing and density,” the researchers report in a recent issue of ACS Nano.

The nanowires in solution settle in place onto the substrate when the solution evaporates, preserving the pattern. The researchers note that the patterned nanowires could then be transferred to organic polymer substrates with good accuracy by placing the polymer onto the top of the nanowires and, with slight pressure, transferring the nanowires. They suggest that the nanowires could then be transferred from the organic polymer to rigid or flexible substrates using well-developed microcontact-printing techniques.

“We really think our technique can be extremely powerful,” said Huang. “We can tune the pattern to the configuration we want and then transfer the nanowires using a polymer stamp.” The spacing of the nodes where nanowires deposit can be adjusted on the fly by changing the frequency and the interaction between the two electric fields. “This would save a lot of time compared to lithography or other static fabrication methods,” said Huang.

The researchers are currently investigating more complex designs. The National Institutes of Health, the National Science Foundation and the Penn State Center for Nanoscale Science supported this research.
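
The tunable pitch follows directly from standing-wave geometry: pressure nodes sit half a wavelength apart, so the row spacing is v/(2f). A minimal sketch, assuming a surface-acoustic-wave velocity of about 3990 m/s (roughly that of a common lithium niobate cut; the real value depends on the substrate the team used):

    SAW_VELOCITY = 3990.0  # m/s, assumed lithium-niobate-like substrate

    def node_spacing_um(freq_hz, v=SAW_VELOCITY):
        # Adjacent nodes of a standing wave are half a wavelength apart.
        return v / (2.0 * freq_hz) * 1e6

    for f_mhz in (10, 20, 40):  # illustrative drive frequencies
        print(f"{f_mhz} MHz -> rows every {node_spacing_um(f_mhz * 1e6):.0f} um")
    # Doubling the frequency halves the pitch; crossing two such waves at
    # right angles yields a 2-D grid of trapping sites.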

Read more

Mapping a room in a snap: Four microphones and a computer algorithm are enough to produce a 3-D model of a simple, convex room

June 17, 2013 — Blind people sometimes develop the amazing ability to perceive the contours of the room they’re in based only on auditory information. Bats and dolphins use the same echolocation technique for navigating in their environment.

At EPFL, a team from the Audiovisual Communications Laboratory (LCAV), under the direction of Professor Martin Vetterli, has developed a computer algorithm that can accomplish this from a sound picked up by four microphones. Their experiment is being published this week in the Proceedings of the National Academy of Sciences (PNAS). “Our software can build a 3D map of a simple, convex room with a precision of a few millimeters,” explains PhD student Ivan Dokmanić.

Randomly placed microphones

As incredible as it may seem, the microphones don’t need to be carefully placed. “Each microphone picks up the direct sound from the source, as well as the echoes arriving from various walls,” Dokmanić continues. “The algorithm then compares the signal from each microphone. The infinitesimal lags that appear in the signals are used to calculate not only the distance between the microphones, but also the distance from each microphone to the walls and the sound source.”

This ability to “sort out” the various echoes picked up by the microphones is in itself a first. By analyzing each echo’s signal using “Euclidean distance matrices,” the system can tell whether the echo is rebounding for the first or second time, and determine the unique “signature” of each of the walls.

The researchers tested the algorithm at EPFL using a “clean” sound source in an empty room in which they changed the position of a movable wall. Their results confirmed the validity of the approach. A second experiment, carried out in a much more complex environment — an alcove in the Lausanne Cathedral — gave good partial results. …
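
The elementary step underneath all of this is turning an echo’s arrival lag into a distance; the full method, with its Euclidean distance matrices and echo sorting, does far more. A bare-bones illustration with invented numbers:

    SPEED_OF_SOUND = 343.0  # m/s in air

    def echo_path_length(delay_s, c=SPEED_OF_SOUND):
        # Total distance traveled by an echo, given its arrival delay.
        return c * delay_s

    # Hypothetical: with source and microphone side by side, an echo
    # arriving 8.2 ms after emission puts the wall at half the path.
    path = echo_path_length(0.0082)
    print(f"wall roughly {path / 2.0:.2f} m away")  # ~1.41 m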

Read more

Key mechanism boosts the signaling function of neurons in brain

June 14, 2013 — Locating a car that’s blowing its horn in heavy traffic, channel-hopping between football and a thriller on TV without losing the plot, and not forgetting the start of a sentence by the time we have read to the end — we consider all of these to be normal everyday functions. They enable us to react to fast-changing circumstances and to carry out even complex activities correctly. For this to work, the neuron circuits in our brain have to be very flexible.

Scientists working under the leadership of neurobiologists Nils Brose and Erwin Neher at the Max Planck Institutes of Experimental Medicine and Biophysical Chemistry in Göttingen have now discovered an important molecular mechanism that turns neurons into true masters of adaptation.

Neurons communicate with each other by means of specialised cell-to-cell contacts called synapses. First, an emitting neuron is excited and discharges chemical messengers known as neurotransmitters. These signal molecules then reach the receiving cell and influence its activation state. The transmitter discharge process is highly complex and strongly regulated. Its protagonists are synaptic vesicles, small blisters surrounded by a membrane, which are loaded with neurotransmitters and release them by fusing with the cell membrane. In order to be able to respond to stimulation at any time by releasing transmitters, a neuron must have a certain number of vesicles ready to go at each of its synapses. Brose has been studying the molecular foundations of this stockpiling for years.

The problem is not merely academic. …

Read more

Tiger moths: Mother Nature’s fortune tellers

June 3, 2013 — When it comes to saving its own hide, the tiger moth can predict the future. A new study by researchers at Wake Forest University shows Bertholdia trigona, a species of tiger moth found in the Arizona desert, can tell if an echolocating bat is going to attack it well before the predator swoops in for the kill — making the intuitive, tiny-winged insect a master of self-preservation.

Predators in the night

A bat uses sonar to hunt at night. The small mammal emits a series of ultrasonic cries and listens carefully to the echoes that return. By determining how long it takes the sound to bounce back, the bat can figure out how far away its prey is.

Aaron Corcoran and William Conner of Wake Forest previously discovered that Bertholdia trigona defends itself by jamming its predators’ sonar. Conner, a professor of biology, said the tiger moth has a blister of cuticle on either side of its thorax called a tymbal, which it flexes to create a high-pitched, clicking sound. The moth emits more than 4,500 clicks per second right when the bat would normally attack, jamming its sonar. “It is the only animal in the world we know of that can jam its predator’s sonar,” Conner said. “Bats and tiger moths are in the midst of an evolutionary arms race.”

The new study, published May 6 in the journal PLOS ONE, shows that tiger moths can tell when it is time to start clicking by listening for a telltale change in the repetition rate of the bat’s cries and an increase in sound intensity. The combination of these two factors tells the moth that it has been targeted.

Conner’s team used high-speed infrared cameras to create 3D maps of the flight paths of bats attacking tiger moths. They then used an ultrasonic microphone to measure the rate of bat cries and moth clicks. Normally, a bat attack starts with relatively intermittent cries; as it gets closer to the moth, the bat increases the rate at which it produces cries — painting a clearer picture of the moth’s location. Conner’s team found that soon after the bats detected and targeted their prey, moths increased their rate of clicking dramatically, causing the predators to veer off course. The sonar jamming works 93 percent of the time. …
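
The moth’s trigger, as described, rests on two cues: the cry repetition rate climbing as the bat closes in, and the received level rising. A toy decision rule capturing that logic follows; the threshold values are invented for illustration and are not the measured ones from the study.

    def moth_should_jam(cry_times_s, cry_levels_db,
                        min_rate_hz=50.0, min_level_rise_db=6.0):
        # Toy rule: start clicking only when the bat's cries are both
        # rapid (high repetition rate) and getting louder (approach).
        if len(cry_times_s) < 2:
            return False
        rate = (len(cry_times_s) - 1) / (cry_times_s[-1] - cry_times_s[0])
        level_rise = cry_levels_db[-1] - cry_levels_db[0]
        return rate >= min_rate_hz and level_rise >= min_level_rise_db

    # An approaching bat: cries speed up and grow louder -> jam.
    print(moth_should_jam([0.000, 0.012, 0.022, 0.030], [60, 64, 68, 72]))  # True
    # A distant, searching bat: slow, steady cries -> stay quiet.
    print(moth_should_jam([0.00, 0.10, 0.20, 0.30], [60, 60, 61, 60]))      # False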

Read more

Acidifying oceans could spell trouble for squid

June 1, 2013 — Acidifying oceans could dramatically impact the world’s squid species, according to a new study led by Woods Hole Oceanographic Institution (WHOI) researchers, soon to be published in the journal PLOS ONE. Because squid are both ecologically and commercially important, that impact may have far-reaching effects on the ocean environment and coastal economies, the researchers report.

“Squid are at the center of the ocean ecosystem — nearly all animals are eating or eaten by squid,” says WHOI biologist T. Aran Mooney, a co-author of the study. “So if anything happens to these guys, it has repercussions down the food chain and up the food chain.”

Research suggests that ocean acidification and its repercussions are the new norm. The world’s oceans have been steadily acidifying for the past hundred and fifty years, fueled by rising levels of carbon dioxide (CO2) in the atmosphere. Seawater absorbs some of this CO2, turning it into carbonic acid and other chemical byproducts that lower the pH of the water and make it more acidic. As CO2 levels continue to rise, the ocean’s acidity is projected to rise too, potentially affecting ocean-dwelling species in ways that researchers are still working to understand.

Mooney and his colleagues — lead author Max Kaplan, then an undergraduate student at the University of St. Andrews in the U.K. and now a WHOI graduate student, and WHOI scientists Daniel McCorkle and Anne Cohen — decided to study the impact of acidifying seawater on squid. Over the summer of 2011, Mooney and Kaplan gathered male and female Atlantic longfin squid (Loligo pealeii) from the waters of Vineyard Sound and transported them to a holding tank in the WHOI Environmental Systems Laboratory. …

Read more
