Thursday, December 14, 2006

Robotic hand has a built-in 'slip sense'

An artificial hand built in the UK has fingertip sensors that let it grasp delicate objects without crushing or dropping them.

A previous prototype has proved itself capable of grappling with door keys and twisting the lid off a jar (see New robot hand is even more human). The latest incarnation not only moves more like a real hand but also has an improved sense of touch.

"We've added new arrays of sensors that allow it to sense temperature, grip-force and whether an object is slipping," says Neil White, an electronic engineer at Southampton University who developed the hand with colleagues Paul Chappell, Andy Cranny and Darryl Cotton.

Its developers hope that the robotic hand could eventually give amputees greater dexterity and deftness of touch via a prosthetic limb. Like some existing mechanical prosthetics, it could be controlled by connecting its motors to nerves in an amputee's arm, shoulder or chest.
Slip sense

Pressure sensors in each fingertip connect to a control system that maintains the hand's grip. "If a hand without them held a polystyrene cup it would just crush it," White explains. By contrast, the new hand uses feedback from its sensors to prevent each finger from closing further, once an object is gripped.

Gripping an object too lightly can be a problem with existing artificial hands. "The slip sensors prevent that by detecting the vibration as an object slips through the fingers," says White.

Other slip-detectors use microphones to pick up the sound caused when an object starts slipping, he explains: "Using vibration is more robust because there can be no interference in noisy environments. Some hands that use sound will close just when you whistle at them."
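The vibration-based approach amounts to a simple feedback rule: tighten the grip only while the fingertip's vibration signal exceeds a slip threshold. The sketch below is illustrative only — the sensor values, threshold and force step are invented, not figures from the Southampton design.

```python
import math

def moving_rms(signal, window=8):
    """RMS amplitude of the most recent `window` samples of a vibration signal."""
    tail = signal[-window:]
    return math.sqrt(sum(x * x for x in tail) / len(tail))

def grip_controller(vibration, grip_force, slip_threshold=0.5, step=0.1):
    """If fingertip vibration exceeds the slip threshold, tighten the grip;
    otherwise hold the current force so a delicate object is not crushed."""
    if moving_rms(vibration) > slip_threshold:
        return grip_force + step   # object is slipping: squeeze harder
    return grip_force              # stable grip: do not close further

# A quiet signal leaves the grip unchanged; a strong oscillation tightens it.
steady = [0.01, -0.02, 0.01, 0.0, -0.01, 0.02, -0.01, 0.01]
slipping = [0.9, -1.1, 1.0, -0.8, 1.2, -0.9, 1.1, -1.0]
print(grip_controller(steady, 1.0))    # 1.0
print(grip_controller(slipping, 1.0))  # 1.1
```

Because the controller reacts to mechanical vibration rather than sound, an acoustic disturbance like a whistle never enters the loop.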

The hand's sensors consist of patches of piezoelectric crystals surrounded by circuitry, all screen printed directly onto each fingertip using a technique called "thick-film fabrication". The piezoelectric crystals create voltages when their shape changes, and can detect changes in temperature, vibration and strain.

Thick-film fabrication is cheaper than using conventional silicon, says White. This could be important for prosthetic devices, he adds, as they will only be manufactured in small numbers, preventing the development of an economy of scale.

Giving prosthetic hands the ability to "feel" objects is important, says Göran Lundborg at Lund University in Sweden. "If people are to use them in place of real hands they need to have similar abilities," he told New Scientist.

Lundborg adds that the ultimate goal is to find a way to let a person's brain control the feedback loop between an artificial hand's sensors and motors. In future, this might be achieved by connecting the sensor output directly to a patient's brain or nerves, he suggests.

But, in the meantime, there may be simpler ways to do it. "We have experimented with feeding the output from small microphones in a glove into earphones," Lundborg says.

With training, subjects involved in the experiment were able to distinguish between the sounds produced by grasping different types of objects with the glove. MRI scans also revealed that they processed information from the earphones using the area of the brain that normally deals with touch.

Original Article

Handheld device sees more colours than humans

A handheld device sensitive to changes in colour not detectable by the human eye could be used to spot objects hidden by camouflage or foliage.

The Image Replication Imaging Spectrometer (IRIS) system was developed by Andrew Harvey and colleagues at Heriot-Watt University in the UK.

The cells in the human retina that detect coloured light are sensitive to only certain parts of the spectrum – red, green or blue. All perceived colours are a mixture of this basic palette of colours. Digital cameras work in a similar way, also using separate red, green and blue filters or sensors.

By contrast, the IRIS system has a greater basic palette, of 32 or more "colours" – bands of the light spectrum. It works by dividing an image into 32 separate snapshots, each containing only the light from one of its 32 spectral bands. This allows it to pick out features that blend into one for a human observer. "In a single snapshot we can capture subtle differences in colour that the eye can't," Harvey told New Scientist.
Colour palette

The 32 snapshots are projected onto a detector side by side, allowing the device to analyse them all simultaneously. "Until now this kind of imaging was achieved by looking at the different spectral bands sequentially in time," says Harvey. "This method is much faster." What IRIS sees can be translated into false colour images to allow a human to make use of its abilities.
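The idea of mapping narrow spectral bands to display channels can be sketched in a few lines. The band indices and the toy 2×2 "scene" below are invented for illustration; a real IRIS data cube would come from the instrument's 32 simultaneous snapshots.

```python
import numpy as np

def false_colour(cube, bands=(5, 16, 27)):
    """Map a 32-band spectral image cube (H, W, 32) to an RGB image by
    assigning three chosen narrow bands to the red, green and blue
    channels. The band choices here are arbitrary."""
    rgb = cube[:, :, list(bands)].astype(float)
    lo, hi = rgb.min(), rgb.max()
    return (rgb - lo) / (hi - lo + 1e-12)  # normalise to [0, 1] for display

# Two materials that look identical in broad red/green/blue can still
# differ sharply in individual narrow bands.
cube = np.zeros((2, 2, 32))
cube[0, 0, 5] = 1.0   # "foliage" pixel: bright only in band 5
cube[1, 1, 27] = 1.0  # "artificial object" pixel: bright only in band 27
img = false_colour(cube)
print(img.shape)  # (2, 2, 3)
```

In the resulting false colour image the two pixels render as pure red and pure blue, even though a three-channel camera might have shown them as the same colour.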

Two British defence firms, QinetiQ and Selex, are working on handheld versions of the device similar in size to a video camera, Harvey says: "It should be useful in, for example, a situation where they need to know if there are any artificial objects like mines or vehicles hidden in foliage."

IRIS could help reveal what is hidden, "or let soldiers know what needs further investigation", he adds.

The device is also being tested as a medical tool, in collaboration with Andy McNaught at Cheltenham General Hospital in the UK. He is using it to diagnose eye disease by looking at blood flow within the retina. This is because IRIS is sensitive enough to tell the difference between oxygenated and deoxygenated blood.

Such images can be used to look for problems with retinal blood flow, such as diabetic retinopathy – a complication of diabetes that can lead to blindness.

Original Article

Monday, December 11, 2006

Language of Surgery

Data Collected From Robotic Medical Tools Could Improve Operating Room Skills

Borrowing ideas from speech recognition research, Johns Hopkins computer scientists are building mathematical models to represent the safest and most effective ways to perform surgery, including tasks such as suturing, dissecting and joining tissue.

The team's long-term goal is to develop an objective way of evaluating a surgeon's work and to help doctors improve their operating room skills. Ultimately, the research also could enable robotic surgical tools to perform with greater precision.

The project, supported by a three-year National Science Foundation grant, has yielded promising early results in modeling suturing work. The researchers performed the suturing with the help of a robotic surgical device, which recorded the movements and made them available for computer analysis.

"Surgery is a skilled activity, and it has a structure that can be taught and acquired," said Gregory D. Hager, a professor of computer science in the university's Whiting School of Engineering and principal investigator on the project. "We can think of that structure as 'the language of surgery.' To develop mathematical models for this language, we're borrowing techniques from speech recognition technology and applying them to motion recognition and skills assessment."

'Language of surgery' researchers collect data from this da Vinci robotic surgical system operated by David Yuh, a cardiac surgeon at The Johns Hopkins Hospital. Standing are team members Gregory Hager, Izhak Shafran, Henry Lin and Sanjeev Khudanpur.
Photo by Will Kirk
Complicated surgical tasks, Hager said, unfold in a series of steps that resemble the way that words, sentences and paragraphs are used to convey language. "In speech recognition research, we break these down to their most basic sounds, called phonemes," he said. "Following that example, our team wants to break surgical procedures down to simple gestures that can be represented mathematically by computer software."

With that information in hand, the computer scientists hope to be able to recognize when a surgical task is being performed well and also to identify which movements can lead to operating room problems. Just as a speech recognition program might call attention to poor pronunciation or improper syntax, the system being developed by Hager's team might identify surgical movements that are imprecise or too time-consuming.
But to get to that point, computers first must become fluent in the "language" of surgery. This will require computers to absorb data concerning the best ways to complete surgical tasks.

As a first step, the researchers have begun collecting data recorded by Intuitive Surgical's da Vinci Surgical Systems. These systems allow a surgeon, seated at a computer workstation, to guide robotic tools to perform minimally invasive procedures involving the heart, the prostate and other organs. Although only a tiny fraction of hospital operations involve the da Vinci, the device's value to Hager's team is that all of the robot's surgical movements can be digitally recorded and processed.

In a paper presented at the Medical Image Computing and Computer-Assisted Intervention Conference in October 2005, Hager's team announced that it had developed a way to use data from the da Vinci to mathematically model surgical tasks such as suturing, a key first step in deciphering the language of surgery. The lead author, Johns Hopkins graduate student Henry C. Lin, received the conference award for best student paper.
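The phoneme analogy can be sketched as follows: continuous tool motion is quantised into a small alphabet of gesture symbols, and the transitions between symbols become training data for a statistical model of a task. The gesture prototypes and motion samples below are invented placeholders, not the team's actual vocabulary or method.

```python
import numpy as np

# Hypothetical gesture "phoneme" prototypes: mean (dx, dy, dz) tool velocities.
PROTOTYPES = {
    "reach":  np.array([1.0, 0.0, 0.0]),
    "insert": np.array([0.0, 0.0, -1.0]),
    "pull":   np.array([-1.0, 0.5, 0.0]),
}

def label_motion(samples):
    """Assign each motion sample to the nearest gesture prototype,
    much as speech recognisers map acoustic frames to phonemes."""
    names = list(PROTOTYPES)
    return [min(names, key=lambda n: np.linalg.norm(s - PROTOTYPES[n]))
            for s in samples]

def transition_counts(labels):
    """Count gesture-to-gesture transitions; a full system would use such
    counts to train a hidden-Markov-style model of a surgical task."""
    counts = {}
    for a, b in zip(labels, labels[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

trace = [np.array([0.9, 0.1, 0.0]), np.array([0.1, 0.0, -0.9]),
         np.array([-0.8, 0.4, 0.1])]
labels = label_motion(trace)
print(labels)                     # ['reach', 'insert', 'pull']
print(transition_counts(labels))  # {('reach', 'insert'): 1, ('insert', 'pull'): 1}
```

An unusually rare transition in a trainee's recording would then flag a movement worth reviewing, just as an improbable phoneme sequence flags a mispronunciation.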

When a surgeon operates the controls of a da Vinci robotic system, the device records these hand movements. Computer scientists are analyzing this data in their effort to understand the 'language of surgery.'
Photo by Will Kirk
"Now, we're acquiring enough data to go from 'words' to 'sentences,'" said Hager, who is deputy director of the National Science Foundation Engineering Research Center for Computer-Integrated Surgical Systems and Technology, based at Johns Hopkins. "One of our goals for the next few years is to develop a large vocabulary that we can use to represent the motions in surgical tasks."

The team also hopes to incorporate video data from the da Vinci and possibly from minimally invasive procedures performed directly by surgeons. In such operations, surgeons insert instruments and a tiny camera into small incisions to complete a medical procedure. The video data from the camera could contribute data to the team's efforts to identify effective surgical methods.

Hager's Johns Hopkins collaborators include David D. Yuh, a cardiac surgeon from the School of Medicine. "It is fascinating to break down the surgical skills we take for granted into their fundamental components," Yuh said. "Hopefully, a better understanding of how we learn to operate will help more efficiently train future surgeons. With the significantly reduced number of hours surgical residents are permitted to be in the hospital, surgical training programs need to streamline their training methods now more than ever. This research work represents a strong effort toward this."

Cardiac surgeon David Yuh controls the da Vinci robotic surgical system as computer scientists Izhak Shafran and Gregory Hager observe.
Photo by Will Kirk
Hager's other collaborators include Sanjeev Khudanpur, a Johns Hopkins assistant professor of electrical and computer engineering, and Izhak Shafran, who was a postdoctoral fellow affiliated with the university's Center for Language and Speech Processing and who is now an assistant professor at the Oregon Graduate Institute.

Hager cautioned that the project is not intended to produce a "Big Brother" system that would critique a surgeon's every move. "We're trying to find ways to help them become better at what they do," he said. "It's not a new idea. In sports and dance, people are studying the mechanics of movement to see what produces the best possible performance. By understanding the underlying structures, we can become better at what we do. I think surgery's no different."

Original Article

Engineered yeast improves ethanol production

Anne Trafton, News Office
December 7, 2006

MIT scientists have engineered yeast that can improve the speed and efficiency of ethanol production, a key component to making biofuels a significant part of the U.S. energy supply.

Currently used as a fuel additive to improve gasoline combustibility, ethanol is often touted as a potential solution to the growing oil-driven energy crisis. But there are significant obstacles to producing ethanol: One is that high ethanol levels are toxic to the yeast that ferments corn and other plant material into ethanol.

By manipulating the yeast genome, the researchers have engineered a new strain of yeast that can tolerate elevated levels of both ethanol and glucose, while producing ethanol faster than un-engineered yeast. The work is reported in the Dec. 8 issue of Science.

Fuels such as E85, which is 85 percent ethanol, are becoming common in states where corn is plentiful; however, their use is mainly confined to the Midwest because corn supplies are limited and ethanol production technology is not yet efficient enough.

Boosting efficiency has been an elusive goal, but the MIT researchers, led by Hal Alper, a postdoctoral associate in the laboratories of Professor Gregory Stephanopoulos of chemical engineering and Professor Gerald Fink of the Whitehead Institute, took a new approach.

The key to the MIT strategy is manipulating the genes encoding proteins responsible for regulating gene transcription and, in turn, controlling the repertoire of genes expressed in a particular cell. These types of transcription factors bind to DNA and turn genes on or off, essentially controlling what traits a cell expresses.

The traditional way to genetically alter a trait, or phenotype, of an organism is to alter the expression of genes that affect the phenotype. But for traits influenced by many genes, it is difficult to change the phenotype by altering each of those genes, one at a time.

Targeting the transcription factors instead can be a more efficient way to produce desirable traits. "It is the makeup of the transcripts that determines how a cell is going to behave and this is controlled by the transcription factors in the cell," according to Stephanopoulos, a co-author on the paper.

The MIT researchers are the first to use this new approach, which is akin to altering the central processor of a computer (transcription factors) rather than individual software applications (genes), says Fink, an MIT professor of biology and a co-author on the paper.

In this case, the researchers targeted two different transcription factors. They got their best results with a factor known as a TATA-binding protein, which when altered in three specific locations caused the over-expression of at least a dozen genes, all of which were found to be necessary to elicit an improved ethanol tolerance, thus allowing that strain of yeast to survive high ethanol concentrations.

Because so many genes are involved, engineering high ethanol tolerance by the traditional method of overexpressing individual genes would have been impossible, says Alper. Furthermore, the identification of the complete set of such genes would have been a very difficult task, Stephanopoulos adds.

The high-ethanol-tolerance yeast also proved to be a more rapid fermenter: the new strain produced 50 percent more ethanol during a 21-hour period than normal yeast.

The prospect of using this approach to engineer similar tolerance traits in industrial yeast could dramatically impact industrial ethanol production, a multi-step process in which yeast plays a crucial role. First, cornstarch or another polymer of glucose is broken down into single sugar (glucose) molecules by enzymes, then yeast ferments the glucose into ethanol and carbon dioxide.
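The fermentation step has a fixed theoretical ceiling set by stoichiometry — each glucose molecule yields at most two molecules of ethanol and two of carbon dioxide — which a quick calculation makes concrete:

```python
# Glucose fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2
M_GLUCOSE = 180.16  # molar mass of glucose, g/mol
M_ETHANOL = 46.07   # molar mass of ethanol, g/mol

def max_ethanol_kg(glucose_kg):
    """Theoretical maximum ethanol mass obtainable from a given mass of
    glucose; real yields fall short of this stoichiometric ceiling."""
    return glucose_kg * (2 * M_ETHANOL) / M_GLUCOSE

print(round(max_ethanol_kg(1.0), 3))  # 0.511 kg ethanol per kg glucose
```

Engineering the yeast cannot raise this ceiling; it raises how fast and how close to it fermentation runs before ethanol toxicity stops the cells.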

Last year, four billion gallons of ethanol were produced from 1.43 billion bushels of corn grain (including kernels, stalks, leaves, cobs, husks) in the United States, according to the Department of Energy. In comparison, the United States consumed about 140 billion gallons of gasoline.

Other co-authors on the Science paper are Joel Moxley, an MIT graduate student in chemical engineering, and Elke Nevoigt of the Berlin University of Technology.

The research was funded by the DuPont-MIT Alliance, the Singapore-MIT Alliance, the National Institutes of Health and the U.S. Department of Energy.

Original Article

Growing heart muscle

ANN ARBOR, Mich. — It looks, contracts and responds almost like natural heart muscle – even though it was grown in the lab. And it brings scientists another step closer to the goal of creating replacement parts for damaged human hearts, or eventually growing an entirely new heart from just a spoonful of loose heart cells.

This week, University of Michigan researchers are reporting significant progress in growing bioengineered heart muscle, or BEHM, with organized cells, capable of generating pulsating forces and reacting to stimulation more like real muscle than ever before.

The three-dimensional tissue was grown using an innovative technique that is faster than others that have been tried in recent years, but still yields tissue with significantly better properties. The approach uses a fibrin gel to support rat cardiac cells temporarily, before the fibrin breaks down as the cells organize into tissue.

The U-M team details its achievement in a new paper published online in the Journal of Biomedical Materials Research Part A.

And while BEHM is still years away from use as a human heart treatment, or as a testing ground for new cardiovascular drugs, the U-M researchers say their results should help accelerate progress toward those goals. U-M is applying for patent protection on the development and is actively looking for a corporate partner to help bring the technology to market.

Ravi K. Birla, Ph.D., of the Artificial Heart Laboratory in U-M's Section of Cardiac Surgery and the U-M Cardiovascular Center, led the research team.

"Many different approaches to growing heart muscle tissue from cells are being tried around the world, and we're pursuing several avenues in our laboratory," says Birla. "But from these results we can say that utilizing a fibrin hydrogel yields a product that is ready within a few days, that spontaneously organizes and begins to contract with a significant and measurable force, and that responds appropriately to external factors such as calcium."

The new paper actually compares two different ways of using fibrin gel as a basis for creating BEHM: layering on top of the gel, and embedding within it. In the end, the layering approach produced a more cohesive tissue that contracted with more force – a key finding because embedding has been seen as the more promising technique.

The ability to measure the forces generated by the BEHM as it contracts is crucial, Birla explains. It's made possible by a precise instrument called an optical force transducer, which gives more precise readings than the instruments used by other teams.

The measurement showed that the BEHM that had formed in just four days after a million cells were layered on fibrin gel could contract with an active force of more than 800 micro-Newtons. That's still only about half the force generated within the tissue of an actual beating heart, but it's much higher than the forces created by engineered heart tissue samples grown and reported by other researchers. Birla says the team expects to see greater forces created by BEHM in future experiments that will bathe the cells in an environment that's even more similar to the body's internal conditions.

In the new paper, the team reports that contraction forces increased when the BEHM tissues were bathed in a solution that included additional calcium and a drug that acts on beta-adrenergic receptors. Both are important to the signaling required to produce cohesive action by cells in tissue.

The U-M team also assessed the BEHM's structure and function at different stages in its development. First author and postdoctoral fellow Yen-Chih Huang, Ph.D., of the U-M Division of Biomedical Engineering, led the creation of the modeling system. Co-author and research associate Luda Khait examined the tissue using special stains that revealed the presence and concentration of the fibrin gel, and of collagen generated by the cells as they organized into tissue.

Over the course of several days, the fibrin broke down as intended, after fulfilling its role as a temporary support for the cells. This may be a key achievement for future use of BEHM as a treatment option, because the tissue could be grown and implanted relatively quickly.

The U-M Artificial Heart Laboratory is part of the U-M Section of Cardiac Surgery, and draws its strength from the fact that it includes bioengineers, cell biologists and heart surgeons – a multidisciplinary group that can tackle both the technical and clinical hurdles in the field of engineering heart muscle. Its focus is to evaluate different platforms for engineering cardiovascular structures in the laboratory. Active programs include tissue engineering models for cardiac muscle, tri-leaflet valves, cell-based cardiac pumps and vascular grafts. In addition, the laboratory has expertise in several different tissue engineering platforms: self-organization strategies, biodegradable hydrogels such as fibrin, and polymeric scaffolds.

Each approach may turn out to have its own applications, says Birla, and the ability to conduct side-by-side comparisons is important. Other researchers have focused on one approach or another, but the U-M team can use its lab to test multiple approaches at once.

"Fundamentally, we're interested in creating models of the different components of the heart one by one," says Birla.

"It's like building a house – you need to build the separate pieces first. And once we understand how these models can be built in the lab, then we can work toward building a bioengineered heart." He notes that while many other labs focus on growing one heart component, only U-M is working on growing all the different heart components.

Already, the U-M team has begun experiments to transplant BEHM into the hearts of rats that have suffered heart attacks, and see if the new tissue can heal the damage. This work is being conducted by Francesco Migneco, M.D., a research fellow with the Artificial Heart Laboratory. Further studies will implement "bioreactors" that will expose the BEHM tissue to more of the nutrients and other conditions that are present in the body.

Wednesday, December 06, 2006

Unprecedented Efficiency In Producing Hydrogen From Water

Scientists are reporting a major advance in technology for water photooxidation --using sunlight to produce clean-burning hydrogen fuel from ordinary water.

Michael Gratzel and colleagues in Switzerland note that nature found this Holy Grail of modern energy independence 3 billion years ago, with the evolution of blue-green algae that use photosynthesis to split water into its components, hydrogen and oxygen.

Gratzel is namesake for the Gratzel Cell, a more-efficient solar cell that his group developed years ago. Solar cells produce electricity directly from sunlight. Their new research, scheduled for publication in the Dec. 13 issue of the weekly Journal of the American Chemical Society, reports development of a device that sets a new benchmark for efficiency in splitting water into hydrogen and oxygen using visible light, which is ordinary sunlight.

Previously, the best water photooxidation technology had an external quantum efficiency of about 37 percent. The new technology's efficiency is 42 percent, which the researchers term "unprecedented." The efficiency is due to an improved positive electrode and other innovations in the water-splitting device, researchers said.

Original Article

Spintronic RAM and permanent storage

Scientists have created novel ‘spintronic’ devices that could point the way for the next generation of more powerful and permanent data storage chips in computers. Physicists at the Universities of Bath, Bristol and Leeds have discovered a way to precisely control the pattern of magnetic fields in thin magnetic films, which can be used to store information.

The discovery has important consequences for the IT industry, as current memory-storage technology has limited scope for further development. The density with which information can be stored magnetically in permanent memory - hard drives - is reaching a natural limit related to the size of the magnetic particles used. The much faster silicon-chip based random access memory - RAM - in computers loses the information stored when the power is switched off.

The key advance of the recent research has been in developing ways to use high energy beams of gallium ions to artificially control the direction of the magnetic field in regions of cobalt films just a few atoms thick.

The direction of the field can be used to store information: in this case “up” or “down” correspond to the “1” or “0” that form the basis of binary information storage in computers.

Further, the physicists have demonstrated that the direction of these magnetic areas can be “read” by measuring their electrical resistance. This can be done much faster than the system for reading information on current hard drives. They propose that the magnetic state can be switched from “up” to “down” with a short pulse of electrical current, thereby fulfilling all the requirements for a fast magnetic memory cell.
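Taken together, those three ingredients — a bit stored as magnetisation direction, a resistance readout and a current-pulse write — can be modelled as a toy memory cell. The resistance values and pulse threshold below are invented for illustration; they are not figures from the Bath work.

```python
class MagneticCell:
    """Toy model of the proposed memory cell: the magnetisation direction
    stores one bit, read out as a resistance difference and written by a
    short current pulse. All numbers here are illustrative only."""
    R_UP, R_DOWN = 100.0, 120.0  # hypothetical resistances (ohms)

    def __init__(self):
        self.up = True           # magnetisation "up" encodes 1

    def write(self, bit, pulse_ma=5.0):
        # A current pulse below the switching threshold leaves the state alone.
        if pulse_ma < 1.0:
            raise ValueError("pulse too weak to switch magnetisation")
        self.up = bool(bit)

    def read(self):
        # Read by measuring resistance; the state survives power loss.
        resistance = self.R_UP if self.up else self.R_DOWN
        return 1 if resistance == self.R_UP else 0

cell = MagneticCell()
cell.write(0)
print(cell.read())  # 0
cell.write(1)
print(cell.read())  # 1
```

Because the bit lives in the magnetisation rather than in stored charge, cutting the power changes nothing: the next read returns the same value.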

Using the new technology, computers will never lose memory even during a power cut – as soon as the power is restored, the data will reappear.

Professor Simon Bending, of the University of Bath's Department of Physics, said: “The results are important as they suggest a new route for developing high density magnetic memory chips which will not lose information when the power is switched off. For the first time data will be written and read very fast using only electrical currents.”

“We’re particularly pleased as we were told in the beginning that our approach probably would not work, but we persevered and now it has definitely paid off.”

Professor Bending worked with Dr Simon Crampin, Atif Aziz and Hywel Roberts in Bath, Dr Peter Heard of the University of Bristol and Dr Chris Marrows of the University of Leeds.

Another approach to overcoming the problem of storing data permanently with rapid retrieval times is that of magnetic random access memory chips (MRAMs); prototypes of these have already been developed by several companies. However, MRAM uses the stray magnetic fields generated by wires that carry a high electrical current to switch the data state from “up” to “down”, which greatly limits the density of information storage.

In contrast, if the approach at Bath is developed commercially, this would allow the manufacture of magnetic memory chips with much higher packing densities, which can operate many times faster.

A paper written by the researchers appeared recently in the journal Physical Review Letters, entitled "Angular Dependence of Domain Wall Resistivity in Artificial Magnetic Domain Structures".

Original Article

Tuesday, December 05, 2006

Timetable for Moon colony

NASA plans to permanently occupy an outpost at one of the Moon's poles, officials announced on Monday.

The first four astronauts will land for a short visit in 2020, but it will take until at least 2024 to prepare for "a fully functional presence with rotating crews", said Scott Horowitz, associate administrator for the exploration systems mission directorate.

Original Article

Monday, December 04, 2006

Library on a disc

Blu-Ray Disc Association and industrial leaders in computer and other media recently commercially introduced Blu-Ray Disc technology that allows for storage of 25 gigabytes (GB) on a single layer of a disc and 50 GB on two layers. It has been referred to as the next generation of optical disc format, and it offers high-definition quality.

Belfield's technique allows for storing on multiple layers with the capacity of at least 1,000 GB and high-definition quality.

Original Article

Friday, December 01, 2006

Ghost in the machine

KAIST's Robot Intelligence Technology, or RIT, lab is most famous as the home of the Federation of International Robot-soccer Association, FIRA, the robotic soccer league. But beyond the easy crowd appeal of robotic sport, the researchers here are far more enthusiastic about a different creation -- one that lives in the wires and silicon woven throughout the walls of this building: a "software robot" they call Rity.

Rity is the ghost in the machine: an autonomous agent that can transfer itself into desktop computers, PDAs, servers and robotic avatars, and adapt and evolve like a genetic organism. As researchers go from place to place, they are captured and recognized by a network of cameras in the building, allowing Rity to follow them from computer to computer.

The "sobot" can upload itself into a mobile robot -- a simpler cousin of HanSaRam called MyBot -- and follow Kuppuswamy from room to room on its servo-controlled wheels, fetching objects for the researcher with its mechanical arms. If it sees Kuppuswamy sit in front of his office PC, Rity can abandon MyBot like a husk and slip into the desktop machine, to better put itself at its human master's disposal.

That's the theory, at least. The researchers here have set themselves a high task: creating a world in which robotic software minds and hardware bodies blend into the environment of daily life.

In a hospital setting, for example, sobots will serve as personal assistants to doctors, moving through a legion of bot bodies, some that check in on patients, others that track doctors through the hospital corridors. "Within 10 years robots will be in hospitals providing (triage)," says researcher Park In-Won.

In reality, Rity can't do much yet. On this day the scientists have a hard time just getting him to appear. They're gathered around a big-screen TV that sits like a living room centerpiece along one wall of the lab. A grad student is mugging for a mounted camera, which is supposed to recognize his face and summon his Rity. But nothing is happening.

Other students scramble around the lab -- a geek's paradise littered with cardboard boxes, caseless computers and inscrutable machined parts -- picking up the occasional tool and speaking in Korean to one another. Finally, the virtual genie materializes on the giant monitor, where it takes the form of a cute, cartoonish dog.

Original Article