Thursday, December 14, 2006

Robotic hand has a built-in 'slip sense'


An artificial hand built in the UK has fingertip sensors that let it grasp delicate objects without crushing or dropping them.

A previous prototype has proved itself capable of grappling with door keys and twisting the lid off a jar (see New robot hand is even more human). The latest incarnation not only moves more like a real hand but also has an improved sense of touch.

"We've added new arrays of sensors that allow it to sense temperature, grip-force and whether an object is slipping," says Neil White, an electronic engineer at Southampton University who developed the hand with colleagues Paul Chappell, Andy Cranny and Darryl Cotton.

Its developers hope that the robotic hand could eventually give amputees greater dexterity and deftness of touch via a prosthetic limb. Like some existing mechanical prosthetics, it could be controlled by connecting its motors to nerves in an amputee's arm, shoulder or chest.
Slip sense

Pressure sensors in each fingertip connect to a control system that maintains the hand's grip. "If a hand without them held a polystyrene cup it would just crush it," White explains. By contrast, the new hand uses feedback from its sensors to prevent each finger from closing further, once an object is gripped.

Gripping an object too lightly can be a problem with existing artificial hands. "The slip sensors prevent that by detecting the vibration as an object slips through the fingers," says White.

Other slip-detectors use microphones to pick up the sound caused when an object starts slipping, he explains: "Using vibration is more robust because there can be no interference in noisy environments. Some hands that use sound will close just when you whistle at them."
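The feedback loop described above can be sketched in a few lines of Python. The thresholds, step size and sensor readings below are invented for illustration, not taken from the Southampton hand.

```python
# Hypothetical sketch (not the Southampton controller) of the grip
# feedback loop: a finger closes until its pressure sensor registers a
# firm grip, then holds steady; a burst of vibration on the slip sensor
# triggers a small increase in grip force instead of crushing the object.

GRIP_THRESHOLD = 0.5   # pressure reading that counts as "gripped" (made up)
SLIP_VIBRATION = 0.2   # vibration amplitude that counts as slipping (made up)
FORCE_STEP = 0.05      # how much to tighten per control cycle (made up)

def control_step(pressure, vibration, motor_force):
    """One control cycle for a single finger."""
    if pressure < GRIP_THRESHOLD:
        return motor_force + FORCE_STEP   # not yet held firmly: keep closing
    if vibration > SLIP_VIBRATION:
        return motor_force + FORCE_STEP   # slipping: tighten slightly
    return motor_force                    # gripped and stable: hold steady

force = 0.0
# Simulated readings: approaching, gripped, a slip event, then stable again.
for pressure, vibration in [(0.1, 0.0), (0.6, 0.0), (0.6, 0.3), (0.6, 0.0)]:
    force = control_step(pressure, vibration, force)
```

The key point is the third branch: once the object is held and nothing is slipping, the force stops increasing, which is what keeps a polystyrene cup intact.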

The hand's sensors consist of patches of piezoelectric crystals surrounded by circuitry, all screen-printed directly onto each fingertip using a technique called "thick-film fabrication". The piezoelectric crystals create voltages when their shape changes, and can detect changes in temperature, vibration and strain.
Touchy-feely

Thick-film fabrication is cheaper than using conventional silicon, says White. This could be important for prosthetic devices, he adds, as they will only be manufactured in small numbers, preventing the development of an economy of scale.

Giving prosthetic hands the ability to "feel" objects is important, says Göran Lundborg at Lund University in Sweden. "If people are to use them in place of real hands they need to have similar abilities," he told New Scientist.

Lundborg adds that the ultimate goal is to find a way to let a person's brain control the feedback loop between an artificial hand's sensors and motors. In future, this might be achieved by connecting the sensor output directly to a patient's brain or nerves, he suggests.

But, in the meantime, there may be simpler ways to do it. "We have experimented with feeding the output from small microphones in a glove into earphones," Lundborg says.

With training, subjects involved in the experiment were able to distinguish between the sounds produced by grasping different types of objects with the glove. MRI scans also revealed that they processed information from the earphones using the area of the brain that normally deals with touch.

Original Article

Handheld device sees more colours than humans


A handheld device sensitive to changes in colour not detectable by the human eye could be used to spot objects hidden by camouflage or foliage.

The Image Replication Imaging Spectrometer (IRIS) system was developed by Andrew Harvey and colleagues at Heriot-Watt University in the UK.

The cells in the human retina that detect coloured light are sensitive only to certain parts of the spectrum – red, green or blue. All perceived colours are mixtures of this basic palette. Digital cameras work in a similar way, also using separate red, green and blue filters or sensors.

By contrast, the IRIS system has a greater basic palette, of 32 or more "colours" – bands of the light spectrum. It works by dividing an image into 32 separate snapshots, each containing only the light from one of its 32 spectral bands. This allows it to pick out features that blend into one for a human observer. "In a single snapshot we can capture subtle differences in colour that the eye can't," Harvey told New Scientist.
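A toy numerical example (invented numbers, not IRIS data or optics) shows why a 32-band palette can separate what RGB cannot: two spectra can collapse to identical broad red, green and blue totals yet differ in individual narrow bands.

```python
# Two invented 32-band spectra that look identical to an eye or RGB
# camera but differ band by band, as a 32-band sensor would reveal.

BANDS = 32
foliage = [0.5 + 0.01 * i for i in range(BANDS)]
camouflage = list(foliage)
camouflage[5] += 0.2   # brighter in one narrow band...
camouflage[6] -= 0.2   # ...dimmer in the next, so the broad total matches

def to_rgb(spectrum):
    # Collapse 32 bands into 3 broad ones (both tweaked bands fall in the
    # first, "red" third, so its sum is unchanged).
    n = len(spectrum) // 3
    return (sum(spectrum[:n]), sum(spectrum[n:2 * n]), sum(spectrum[2 * n:]))

rgb_same = all(abs(a - b) < 1e-9
               for a, b in zip(to_rgb(foliage), to_rgb(camouflage)))
bands_differ = foliage != camouflage   # the fine-grained sensor tells them apart
```

Here `rgb_same` is true while `bands_differ` is also true: the two materials blend into one for a three-colour observer but not for a 32-band one.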
Colour palette

The 32 snapshots are projected onto a detector side by side, allowing the device to analyse them all simultaneously. "Until now this kind of imaging was achieved by looking at the different spectral bands sequentially in time," says Harvey. "This method is much faster." What IRIS sees can be translated into false-colour images to allow a human to make use of its abilities.

Two British defence firms, QinetiQ and Selex, are working on handheld versions of the device, Harvey says, which are similar in size to a video camera: "It should be useful in, for example, a situation where they need to know if there are any artificial objects like mines or vehicles hidden in foliage."

IRIS could help reveal what is hidden, "or let soldiers know what needs further investigation", he adds.

The device is also being tested as a medical tool, in collaboration with Andy McNaught at Cheltenham General Hospital in the UK. He is using it to diagnose eye disease by looking at blood flow within the retina. This is because IRIS is sensitive enough to tell the difference between oxygenated and deoxygenated blood.

Images produced by the device can be used to look for problems with retinal blood flow, such as diabetic retinopathy – a complication of diabetes that can lead to blindness.

Original Article

Monday, December 11, 2006

Language of Surgery


Data Collected From Robotic Medical Tools Could Improve Operating Room Skills

Borrowing ideas from speech recognition research, Johns Hopkins computer scientists are building mathematical models to represent the safest and most effective ways to perform surgery, including tasks such as suturing, dissecting and joining tissue.

The team's long-term goal is to develop an objective way of evaluating a surgeon's work and to help doctors improve their operating room skills. Ultimately, the research also could enable robotic surgical tools to perform with greater precision.

The project, supported by a three-year National Science Foundation grant, has yielded promising early results in modeling suturing work. The researchers performed the suturing with the help of a robotic surgical device, which recorded the movements and made them available for computer analysis.

"Surgery is a skilled activity, and it has a structure that can be taught and acquired," said Gregory D. Hager, a professor of computer science in the university's Whiting School of Engineering and principal investigator on the project. "We can think of that structure as the 'language of surgery.' To develop mathematical models for this language, we're borrowing techniques from speech recognition technology and applying them to motion recognition and skills assessment."

'Language of surgery' researchers collect data from this da Vinci robotic surgical system operated by David Yuh, a cardiac surgeon at The Johns Hopkins Hospital. Standing are team members Gregory Hager, Izhak Shafran, Henry Lin and Sanjeev Khudanpur.
Photo by Will Kirk
Complicated surgical tasks, Hager said, unfold in a series of steps that resemble the way that words, sentences and paragraphs are used to convey language. "In speech recognition research, we break these down to their most basic sounds, called phonemes," he said. "Following that example, our team wants to break surgical procedures down to simple gestures that can be represented mathematically by computer software."

With that information in hand, the computer scientists hope to be able to recognize when a surgical task is being performed well and also to identify which movements can lead to operating room problems. Just as a speech recognition program might call attention to poor pronunciation or improper syntax, the system being developed by Hager's team might identify surgical movements that are imprecise or too time-consuming.
But to get to that point, computers first must become fluent in the "language" of surgery. This will require computers to absorb data concerning the best ways to complete surgical tasks.

As a first step, the researchers have begun collecting data recorded by Intuitive Surgical's da Vinci Surgical Systems. These systems allow a surgeon, seated at a computer workstation, to guide robotic tools to perform minimally invasive procedures involving the heart, the prostate and other organs. Although only a tiny fraction of hospital operations involve the da Vinci, the device's value to Hager's team is that all of the robot's surgical movements can be digitally recorded and processed.

In a paper presented at the Medical Image Computing and Computer-Assisted Intervention Conference in October 2005, Hager's team announced that it had developed a way to use data from the da Vinci to mathematically model surgical tasks such as suturing, a key first step in deciphering the language of surgery. The lead author, Johns Hopkins graduate student Henry C. Lin, received the conference award for best student paper.

When a surgeon operates the controls of a da Vinci robotic system, the device records these hand movements. Computer scientists are analyzing this data in their effort to understand the 'language of surgery.'
Photo by Will Kirk
"Now, we're acquiring enough data to go from 'words' to 'sentences,'" said Hager, who is deputy director of the National Science Foundation Engineering Research Center for Computer-Integrated Surgical Systems and Technology, based at Johns Hopkins. "One of our goals for the next few years is to develop a large vocabulary that we can use to represent the motions in surgical tasks."

The team also hopes to incorporate video data from the da Vinci and possibly from minimally invasive procedures performed directly by surgeons. In such operations, surgeons insert instruments and a tiny camera into small incisions to complete a medical procedure. The video data from the camera could contribute data to the team's efforts to identify effective surgical methods.

Hager's Johns Hopkins collaborators include David D. Yuh, a cardiac surgeon from the School of Medicine. "It is fascinating to break down the surgical skills we take for granted into their fundamental components," Yuh said. "Hopefully, a better understanding of how we learn to operate will help more efficiently train future surgeons. With the significantly reduced number of hours surgical residents are permitted to be in the hospital, surgical training programs need to streamline their training methods now more than ever. This research work represents a strong effort toward this."

Cardiac surgeon David Yuh controls the da Vinci robotic surgical system as computer scientists Izhak Shafran and Gregory Hager observe.
Photo by Will Kirk
Hager's other collaborators include Sanjeev Khudanpur, a Johns Hopkins assistant professor of electrical and computer engineering, and Izhak Shafran, who was a postdoctoral fellow affiliated with the university's Center for Language and Speech Processing and who is now an assistant professor at the Oregon Graduate Institute.

Hager cautioned that the project is not intended to produce a "Big Brother" system that would critique a surgeon's every move. "We're trying to find ways to help them become better at what they do," he said. "It's not a new idea. In sports and dance, people are studying the mechanics of movement to see what produces the best possible performance. By understanding the underlying structures, we can become better at what we do. I think surgery's no different."

Original Article

Engineered yeast improves ethanol production


Anne Trafton, News Office
December 7, 2006

MIT scientists have engineered yeast that can improve the speed and efficiency of ethanol production, a key step toward making biofuels a significant part of the U.S. energy supply.

Currently used as a fuel additive to improve gasoline combustibility, ethanol is often touted as a potential solution to the growing oil-driven energy crisis. But there are significant obstacles to producing ethanol: One is that high ethanol levels are toxic to the yeast that ferments corn and other plant material into ethanol.

By manipulating the yeast genome, the researchers have engineered a new strain of yeast that can tolerate elevated levels of both ethanol and glucose, while producing ethanol faster than un-engineered yeast. The work is reported in the Dec. 8 issue of Science.

Fuels such as E85, which is 85 percent ethanol, are becoming common in states where corn is plentiful; however, their use is mainly confined to the Midwest because corn supplies are limited and ethanol production technology is not yet efficient enough.

Boosting efficiency has been an elusive goal, but the MIT researchers, led by Hal Alper, a postdoctoral associate in the laboratories of Professor Gregory Stephanopoulos of chemical engineering and Professor Gerald Fink of the Whitehead Institute, took a new approach.

The key to the MIT strategy is manipulating the genes encoding proteins responsible for regulating gene transcription and, in turn, controlling the repertoire of genes expressed in a particular cell. These types of transcription factors bind to DNA and turn genes on or off, essentially controlling what traits a cell expresses.

The traditional way to genetically alter a trait, or phenotype, of an organism is to alter the expression of genes that affect the phenotype. But for traits influenced by many genes, it is difficult to change the phenotype by altering each of those genes, one at a time.

Targeting the transcription factors instead can be a more efficient way to produce desirable traits. "It is the makeup of the transcripts that determines how a cell is going to behave and this is controlled by the transcription factors in the cell," according to Stephanopoulos, a co-author on the paper.

The MIT researchers are the first to use this new approach, which is akin to altering the central processor of a computer (transcription factors) rather than individual software applications (genes), says Fink, an MIT professor of biology and a co-author on the paper.

In this case, the researchers targeted two different transcription factors. They got their best results with a factor known as a TATA-binding protein, which when altered in three specific locations caused the over-expression of at least a dozen genes, all of which were found to be necessary to elicit an improved ethanol tolerance, thus allowing that strain of yeast to survive high ethanol concentrations.

Because so many genes are involved, engineering high ethanol tolerance by the traditional method of overexpressing individual genes would have been impossible, says Alper. Furthermore, the identification of the complete set of such genes would have been a very difficult task, Stephanopoulos adds.

The high-ethanol-tolerance yeast also proved to be a faster fermenter: The new strain produced 50 percent more ethanol during a 21-hour period than normal yeast.

The prospect of using this approach to engineer similar tolerance traits in industrial yeast could dramatically impact industrial ethanol production, a multi-step process in which yeast plays a crucial role. First, cornstarch or another polymer of glucose is broken down into single sugar (glucose) molecules by enzymes, then yeast ferments the glucose into ethanol and carbon dioxide.
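The fermentation step described above can be checked with simple stoichiometry: one glucose molecule splits into two ethanol and two carbon dioxide molecules, which caps the theoretical mass yield of ethanol from sugar.

```python
# Back-of-the-envelope check on the fermentation chemistry:
#   C6H12O6 -> 2 C2H5OH + 2 CO2
# using standard atomic masses (g/mol).
C, H, O = 12.011, 1.008, 15.999

glucose = 6 * C + 12 * H + 6 * O   # ~180.16 g/mol
ethanol = 2 * C + 6 * H + 1 * O    # ~46.07 g/mol
co2 = 1 * C + 2 * O                # ~44.01 g/mol

# One mole of glucose yields two moles each of ethanol and CO2.
mass_yield = 2 * ethanol / glucose       # ~0.51: at most ~51% of the sugar mass
mass_balance = 2 * ethanol + 2 * co2     # must equal the glucose mass
```

So even a perfect fermenter converts only about half the sugar mass into ethanol; the rest leaves as carbon dioxide, which is why fermentation speed and tolerance, rather than yield ceiling, are the levers the MIT work targets.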

Last year, four billion gallons of ethanol were produced from 1.43 billion bushels of corn grain (including kernels, stalks, leaves, cobs, husks) in the United States, according to the Department of Energy. In comparison, the United States consumed about 140 billion gallons of gasoline.

Other co-authors on the Science paper are Joel Moxley, an MIT graduate student in chemical engineering, and Elke Nevoigt of the Berlin University of Technology.

The research was funded by the DuPont-MIT Alliance, the Singapore-MIT Alliance, the National Institutes of Health and the U.S. Department of Energy.

Original Article

Growing heart muscle


ANN ARBOR, Mich. — It looks, contracts and responds almost like natural heart muscle – even though it was grown in the lab. And it brings scientists another step closer to the goal of creating replacement parts for damaged human hearts, or eventually growing an entirely new heart from just a spoonful of loose heart cells.

This week, University of Michigan researchers are reporting significant progress in growing bioengineered heart muscle, or BEHM, with organized cells, capable of generating pulsating forces and reacting to stimulation more like real muscle than ever before.

The three-dimensional tissue was grown using an innovative technique that is faster than others that have been tried in recent years, but still yields tissue with significantly better properties. The approach uses a fibrin gel to support rat cardiac cells temporarily, before the fibrin breaks down as the cells organize into tissue.

The U-M team details its achievement in a new paper published online in the Journal of Biomedical Materials Research Part A.

And while BEHM is still years away from use as a human heart treatment, or as a testing ground for new cardiovascular drugs, the U-M researchers say their results should help accelerate progress toward those goals. U-M is applying for patent protection on the development and is actively looking for a corporate partner to help bring the technology to market.

Ravi K. Birla, Ph.D., of the Artificial Heart Laboratory in U-M's Section of Cardiac Surgery and the U-M Cardiovascular Center, led the research team.

"Many different approaches to growing heart muscle tissue from cells are being tried around the world, and we're pursuing several avenues in our laboratory," says Birla. "But from these results we can say that utilizing a fibrin hydrogel yields a product that is ready within a few days, that spontaneously organizes and begins to contract with a significant and measurable force, and that responds appropriately to external factors such as calcium."

The new paper actually compares two different ways of using fibrin gel as a basis for creating BEHM: layering on top of the gel, and embedding within it. In the end, the layering approach produced a more cohesive tissue that contracted with more force – a key finding because embedding has been seen as the more promising technique.

The ability to measure the forces generated by the BEHM as it contracts is crucial, Birla explains. It's made possible by a precise instrument called an optical force transducer, which gives more precise readings than those used by other teams.

The measurement showed that the BEHM that had formed in just four days after a million cells were layered on fibrin gel could contract with an active force of more than 800 micro-Newtons. That's still only about half the force generated within the tissue of an actual beating heart, but it's much higher than the forces created by engineered heart tissue samples grown and reported by other researchers. Birla says the team expects to see greater forces created by BEHM in future experiments that will bathe the cells in an environment that's even more similar to the body's internal conditions.

In the new paper, the team reports that contraction forces increased when the BEHM tissues were bathed in a solution that included additional calcium and a drug that acts on beta-adrenergic receptors. Both are important to the signaling required to produce cohesive action by cells in tissue.

The U-M team also assessed the BEHM's structure and function at different stages in its development. First author and postdoctoral fellow Yen-Chih Huang, Ph.D., of the U-M Division of Biomedical Engineering, led the creation of the modeling system. Co-author and research associate Luda Khait examined the tissue using special stains that revealed the presence and concentration of the fibrin gel, and of collagen generated by the cells as they organized into tissue.

Over the course of several days, the fibrin broke down as intended, after fulfilling its role as a temporary support for the cells. This may be a key achievement for future use of BEHM as a treatment option, because the tissue could be grown and implanted relatively quickly.

The U-M Artificial Heart Laboratory (www.sitemaker.umich.edu/ahl) is part of the U-M Section of Cardiac Surgery, and draws its strength from the fact that it includes bioengineers, cell biologists and heart surgeons – a multidisciplinary group that can tackle both the technical and clinical hurdles in the field of engineering heart muscle. Its focus is to evaluate different platforms for engineering cardiovascular structures in the laboratory. Active programs include tissue engineering models for cardiac muscle, tri-leaflet valves, cell-based cardiac pumps and vascular grafts. In addition, the laboratory has expertise in several different tissue engineering platforms: self-organization strategies, biodegradable hydrogels such as fibrin, and polymeric scaffolds.

Each approach may turn out to have its own applications, says Birla, and the ability to conduct side-by-side comparisons is important. Other researchers have focused on one approach or another, but the U-M team can use its lab to test multiple approaches at once.

"Fundamentally, we're interested in creating models of the different components of the heart one by one," says Birla.

"It's like building a house – you need to build the separate pieces first. And once we understand how these models can be built in the lab, then we can work toward building a bioengineered heart." He notes that while many other labs focus on growing one heart component, only U-M is working on growing all the different heart components.

Already, the U-M team has begun experiments to transplant BEHM into the hearts of rats that have suffered heart attacks, and see if the new tissue can heal the damage. This work is being conducted by Francesco Migneco, M.D., a research fellow with the Artificial Heart Laboratory. Further studies will implement "bioreactors" that will expose the BEHM tissue to more of the nutrients and other conditions that are present in the body.

Wednesday, December 06, 2006

Unprecedented Efficiency In Producing Hydrogen From Water


Scientists are reporting a major advance in technology for water photooxidation --using sunlight to produce clean-burning hydrogen fuel from ordinary water.

Michael Gratzel and colleagues in Switzerland note that nature found this Holy Grail of modern energy independence 3 billion years ago, with the evolution of blue-green algae that use photosynthesis to split water into its components, hydrogen and oxygen.

Gratzel is the namesake of the Gratzel cell, a more efficient solar cell that his group developed years ago. Solar cells produce electricity directly from sunlight. The team's new research, scheduled for publication in the Dec. 13 issue of the weekly Journal of the American Chemical Society, reports the development of a device that sets a new benchmark for efficiency in splitting water into hydrogen and oxygen using visible light, which is ordinary sunlight.

Previously, the best water photooxidation technology had an external quantum efficiency of about 37 percent. The new technology's efficiency is 42 percent, which the researchers term "unprecedented." The efficiency is due to an improved positive electrode and other innovations in the water-splitting device, researchers said.

Original Article

Spintronic RAM and permanent storage


Scientists have created novel ‘spintronic’ devices that could point the way for the next generation of more powerful and permanent data-storage chips in computers. Physicists at the Universities of Bath, Bristol and Leeds have discovered a way to precisely control the pattern of magnetic fields in thin magnetic films, which can be used to store information.

The discovery has important consequences for the IT industry, as current memory-storage technology has limited scope for further development. The density with which information can be stored magnetically in permanent memory - hard drives - is reaching a natural limit related to the size of the magnetic particles used. The much faster silicon-chip-based random access memory - RAM - in computers loses the information stored when the power is switched off.

The key advance of the recent research has been in developing ways to use high energy beams of gallium ions to artificially control the direction of the magnetic field in regions of cobalt films just a few atoms thick.

The direction of the field can be used to store information: in this case “up” or “down” correspond to the “1” or “0” that form the basis of binary information storage in computers.

Further, the physicists have demonstrated that the direction of these magnetic areas can be “read” by measuring their electrical resistance. This can be done much faster than the system for reading information on current hard drives. They propose that the magnetic state can be switched from “up” to “down” with a short pulse of electrical current, thereby fulfilling all the requirements for a fast magnetic memory cell.
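Taken together, those properties can be caricatured in a few lines of code; the class and resistance values below are illustrative, not the Bath group's device physics.

```python
# Illustrative model of the memory cell described above: the bit lives in
# a magnetization direction, a read infers it from electrical resistance
# without disturbing the state, and a current pulse flips it. The
# resistance values are invented.

class MagneticCell:
    R_UP, R_DOWN = 100.0, 120.0   # hypothetical resistances (ohms) per state

    def __init__(self):
        self.magnetization = "up"  # "up" stores 1, "down" stores 0

    def read(self):
        """Non-destructive read: recover the bit from resistance alone."""
        resistance = self.R_UP if self.magnetization == "up" else self.R_DOWN
        return 1 if resistance == self.R_UP else 0

    def write(self, bit):
        """A short current pulse sets the magnetization direction."""
        self.magnetization = "up" if bit else "down"

cell = MagneticCell()
cell.write(0)
stored = cell.read()   # the state is the magnetization itself, so it
                       # persists with no power applied
```

Because the stored value is a physical magnetization rather than a charge that leaks away, nothing in this model needs refreshing, which is the non-volatility the researchers describe.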

Using the new technology, computers will never lose memory even during a power cut – as soon as the power is restored, the data will reappear.

Professor Simon Bending, of the University of Bath's Department of Physics, said: “The results are important as they suggest a new route for developing high density magnetic memory chips which will not lose information when the power is switched off. For the first time data will be written and read very fast using only electrical currents.”

“We’re particularly pleased as we were told in the beginning that our approach probably would not work, but we persevered and now it has definitely paid off.”

Professor Bending worked with Dr Simon Crampin, Atif Aziz and Hywel Roberts in Bath, Dr Peter Heard of the University of Bristol and Dr Chris Marrows of the University of Leeds.

Another approach to overcoming the problem of storing data permanently with rapid retrieval times is that of magnetic random access memory chips (MRAMs); prototypes of these have already been developed by several companies. However, MRAM uses the stray magnetic fields generated by wires that carry a high electrical current to switch the data state from “up” to “down”, which greatly limits the density of information storage.

In contrast, if the approach at Bath is developed commercially, this would allow the manufacture of magnetic memory chips with much higher packing densities, which can operate many times faster.

A paper written by the researchers appeared recently in the journal Physical Review Letters, entitled: Angular Dependence of Domain Wall Resistivity in Artificial Magnetic Domain Structures.

Original Article

Tuesday, December 05, 2006

Timetable for Moon colony


NASA plans to permanently occupy an outpost at one of the Moon's poles, officials announced on Monday.

The first four astronauts will land for a short visit in 2020, but it will take until at least 2024 to prepare for "a fully functional presence with rotating crews", said Scott Horowitz, associate administrator for the exploration systems mission directorate.

Original Article

Monday, December 04, 2006

Library on a disc

The Blu-Ray Disc Association and industry leaders in computing and other media recently introduced Blu-Ray Disc technology commercially, which allows for storage of 25 gigabytes (GB) on a single layer of a disc and 50 GB on two layers. It has been referred to as the next generation of optical disc format, and it offers high-definition quality.

Belfield's technique allows for storing on multiple layers, with a capacity of at least 1,000 GB and high-definition quality.
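As a quick sanity check on those figures, assuming (hypothetically) the same 25 GB-per-layer density as a single-layer Blu-Ray disc:

```python
# How many 25 GB layers would a 1,000 GB disc need, and does the
# dual-layer figure line up? Simple arithmetic on the quoted capacities.
PER_LAYER_GB = 25

layers_needed = 1000 // PER_LAYER_GB   # layers to reach 1,000 GB
dual_layer_gb = 2 * PER_LAYER_GB       # capacity of a two-layer disc
```

At Blu-Ray's density, reaching 1,000 GB would take 40 stacked layers, which is why a practical multi-layer technique is the interesting part of the claim.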

Original Article
<http://www.physorg.com/printnews.php?newsid=84454118>

Friday, December 01, 2006

Ghost in the machine


KAIST's Robot Intelligence Technology, or RIT, lab is most famous as the home of the Federation of International Robot-soccer Association, FIRA, the robotic soccer league. But beyond the easy crowd appeal of robotic sport, the researchers here are far more enthusiastic about a different creation -- one that lives in the wires and silicon woven throughout the walls of this building: a "software robot" they call Rity.

Rity is the ghost in the machine: an autonomous agent that can transfer itself into desktop computers, PDAs, servers and robotic avatars, and adapt and evolve like a genetic organism. As researchers go from place to place, they are captured and recognized by a network of cameras in the building, allowing Rity to follow them from computer to computer.

The "sobot" can upload itself into a mobile robot -- a simpler cousin of HanSaRam called MyBot -- and follow Kuppuswamy from room to room on its servo-controlled wheels, fetching objects for the researcher with its mechanical arms. If it sees Kuppuswamy sit in front of his office PC, Rity can abandon MyBot like a husk and slip into the desktop machine, to better put itself at its human master's disposal.

That's the theory, at least. The researchers here have set themselves an ambitious task: creating a world in which robotic software minds and hardware bodies blend into the environment of daily life.

In a hospital setting, for example, sobots will serve as personal assistants to doctors, moving through a legion of bot bodies, some that check in on patients, others that track doctors through the hospital corridors. "Within 10 years robots will be in hospitals providing (triage)," says researcher Park In-Won.

In reality, Rity can't do much yet. On this day the scientists have a hard time just getting him to appear. They're gathered around a big-screen TV that sits like a living room centerpiece along one wall of the lab. A grad student is mugging for a mounted camera, which is supposed to recognize his face and summon his Rity. But nothing is happening.

Other students scramble around the lab -- a geek's paradise littered with cardboard boxes, caseless computers and inscrutable machined parts -- picking up the occasional tool and speaking in Korean to one another. Finally, the virtual genie materializes on the giant monitor, where it takes the form of a cute, cartoonish dog.

Original Article

Thursday, November 30, 2006

The first androids


South Korean scientists are working on a new-generation robot resembling a human which will be able to walk the walk as well as talk the talk, one of the team said Thursday.

The first walking "android" will make its debut within two to three years, said So Byung-Rok, one of the team of researchers at the Korea Institute of Industrial Technology.

Androids present particular technological challenges in cramming complicated modules, motors and actuators into a life-size body.

The team has already developed two android prototypes designed to look like a Korean woman in her early 20s, which can hold a conversation, make eye contact, and express anger, sorrow and joy.

The latest version, named EveR2-Muse, was unveiled last month at a robot exhibition in Seoul.

She made her debut billed as "the world's first android entertainer" singing a new Korean ballad "I Will Close My Eyes For You."

"Standing like a human, she can sing a song and move her arms, hips and knees to the rhythm although she cannot lift her feet or walk yet," So told AFP in reference to EveR2-Muse.

"We are now working to improve the motion and upgrade intelligence so that next-generation androids can walk like a human, engage in more sophisticated conversations and have a wider range of facial expressions."

Original Article

Sunday, November 26, 2006

Growing Intelligence

The Indonesian volcano Talang on the island of Sumatra had been dormant for centuries when, in April 2005, it suddenly rumbled to life. A plume of smoke rose 1000 meters high and nearby villages were covered in ash. Fearing a major eruption, local authorities began evacuating 40,000 people. UN officials, meanwhile, issued a call for help: Volcanologists should begin monitoring Talang at once.

Little did they know, high above Earth, a small satellite was already watching the volcano. No one told it to. EO-1 (short for "Earth Observing 1") noticed the warning signs and started monitoring Talang on its own.

Indeed, by the time many volcanologists were reading their emails from the UN, "EO-1 already had data," says Steve Chien, leader of JPL's Artificial Intelligence Group.

EO-1 is a new breed of satellite that can think for itself. "We programmed it to notice things that change (like the plume of a volcano) and take appropriate action," Chien explains. EO-1 can re-organize its own priorities to study volcanic eruptions, flash-floods, forest fires, disintegrating sea-ice—in short, anything unexpected.

Is this real intelligence? "Absolutely," he says. EO-1 passes the basic test: "If you put the system in a box and look at it from the outside, without knowing how the decisions are made, would you say the system is intelligent?" Chien thinks so.

And now the intelligence is growing. "We're teaching EO-1 to use sensors on other satellites." Examples: Terra and Aqua, two NASA satellites which fly over every part of Earth twice a day. Each has a sensor onboard named MODIS. It's an infrared spectrometer able to sense heat from forest fires and volcanoes—just the sort of thing EO-1 likes to study. "We make MODIS data available to EO-1," says Chien, "so when Terra or Aqua see something interesting, EO-1 can respond."
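The cross-satellite retasking Chien describes can be sketched as a simple priority merge: when another sensor reports an anomaly, it jumps the routine observation queue. Everything below (the function name, priority scores, targets, and queue size) is illustrative, not JPL's actual software:

```python
# Toy sketch of sensorweb retasking: alerts from other sensors (e.g. MODIS)
# are merged into the satellite's observation plan, highest priority first.

def retask(queue, alerts, capacity=3):
    """Merge incoming alerts into the observation queue by priority."""
    merged = sorted(queue + alerts, key=lambda obs: obs["priority"], reverse=True)
    return merged[:capacity]

routine = [{"target": "crop survey", "priority": 2},
           {"target": "coastline map", "priority": 1}]
alerts = [{"target": "Talang volcano plume", "priority": 9}]

plan = retask(routine, alerts)
print([obs["target"] for obs in plan])
```

The volcano alert displaces nothing here because the queue has room, but with a full queue the lowest-priority routine observation would simply be dropped, which is the behavior the article attributes to EO-1.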

EO-1 also taps into sensors on Earth's surface, such as "the USGS volcano observatories in Hawaii, Washington and Antarctica." Together, the ground stations and satellites form a web of sensors, or a "sensorweb," with EO-1 at the center, gathering data and taking action. It's a powerful new way to study Earth.

Chien predicts that sensorwebs are going to come in handy on other planets, too. Take Mars, for example: "We have four satellites orbiting Mars and two rovers on the ground. They could work together." Suppose one satellite notices a dust storm brewing. It could direct others to monitor the storm when they fly over the area and alert rovers or astronauts—"hunker down, a storm is coming!"

On the Moon, Chien envisions swarms of rovers prospecting the lunar surface—"another good application," he says. What if one rover finds a promising deposit of ore? Others could be called to assist, bringing drills and other specialized tools to the area. With the autonomy of artificial intelligence, these rovers would need little oversight from their human masters.

Yet another example: the Sun. There are more than half a dozen spacecraft 'out there' capable of monitoring solar activity—SOHO, ACE, GOES-12 and 13, Solar-B, TRACE, STEREO and others. Future missions will inflate the numbers even more. "If these spacecraft could be organized as a sensorweb, they could coordinate their actions to study solar storms and provide better warnings to astronauts on the Moon and Mars," he points out.

For now, the intelligence is confined to Earth. The rest of the Solar System awaits.

Original Article

Saturday, November 25, 2006

Robot Realism

David Hanson's robots can creep people out. Their heads are so lifelike, their skin so textured and realistic, that Candy Sidner, a competing roboticist, called his Albert Einstein robot "spookily cool ... a giant step forward."

Hanson, who started his career as an artist and spent time working in Disney's Imagineering Lab, said he flirts with being too realistic for comfort. His work, he said, "poses an identity challenge to the human being."

"If you make it perfectly realistic, you trigger this body-snatcher fear in some people," he said. "Making realistic robots is going to polarize the market, if you will. You will have some people who love it and some people who will really be disturbed."

Hanson's robotics company in Dallas is the flip side of an industry focused on making robots more human on the inside. Hanson makes "conversational character robots." They are mostly human-looking heads using a skin-like material that he invented called Frubber. They are battery-powered and expressive, and they can walk, but from the neck down they don't look human at all.

The issue of being too human-looking is called "uncanny valley" syndrome, and Hanson embraces it with the passion and line-crossing of an avant-garde artist, which he also is.

Hanson made a robot head modeled on his own, but it wasn't for use as a robot. It was part of an art show where he made his self-portrait robot a "large homeless robot figure in a box." The idea was to go out of the "comfort zone" of science, he said.

But Hanson is also a businessman who is designing entertainment robots for the home. He hopes to have two-foot robots - with human-looking heads that are more cartoonish than uncannily accurate - that can dance, make eye contact, talk and recognize your face. The idea is to price them at $3,000 and get them on the market in about a year.

"It would be very much like Astro Boy in the old TV series," Hanson said.

Original Article

Wednesday, November 22, 2006

Ultra-intense laser blast creates true 'black metal'


"Black gold" is not just an expression anymore. Scientists at the University of Rochester have created a way to change the properties of almost any metal to render it, literally, black. The process, using an incredibly intense burst of laser light, holds the promise of making everything from fuel cells to a space telescope's detectors more efficient--not to mention turning your car into the blackest black around.

"We've been surprised by the number of possible applications for this," says Chunlei Guo, assistant professor of optics at the University of Rochester. "We wanted to see what would happen to a metal's properties under different laser conditions and we stumbled on this way to completely alter the reflective properties of metals."

The key to creating black metal is an ultra-brief, ultra-intense beam of light called a femtosecond laser pulse. The laser burst lasts only a few quadrillionths of a second. To get a grasp of that kind of speed: a femtosecond is to a second what a second is to about 32 million years.
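The analogy checks out with plain arithmetic:

```python
# A femtosecond is 1e-15 s, so one second contains 1e15 femtoseconds.
# How many years is 1e15 seconds?
SECONDS_PER_YEAR = 3600 * 24 * 365.25   # Julian year
years = 1e15 / SECONDS_PER_YEAR
print(f"{years / 1e6:.1f} million years")   # roughly 32 million, as the article says
```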

During its brief burst, Guo's laser unleashes as much power as the entire grid of North America onto a spot the size of a needle point. That intense blast forces the surface of the metal to form nanostructures--pits, globules, and strands--that both dramatically increase the surface area and capture radiation. Some larger structures also form in subsequent blasts.

Guo's research team has tested the absorption capabilities of the black metal and confirmed that it can absorb virtually all the light that falls on it, making it pitch black.

Other similar attempts have turned silicon black, but those use a gas to produce chemically etched microstructures. Regular silicon already absorbs most of the visible light that falls on it, so the etching technique only offers about a 30 percent improvement, whereas regular metals absorb only a few percent of visible light before Guo hits them with the laser.
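A rough comparison of the relative gains makes the point; the absorption figures below are assumed round numbers for illustration, not measurements from the paper:

```python
# Silicon already absorbs most visible light, so etching buys little;
# ordinary metals absorb only a few percent, so blackening them buys a lot.
si_before, si_after = 0.70, 0.70 * 1.3        # ~30 percent improvement
metal_before, metal_after = 0.05, 0.99        # a few percent -> near-total

si_gain = si_after / si_before
metal_gain = metal_after / metal_before
print(f"silicon gain: {si_gain:.1f}x, metal gain: {metal_gain:.0f}x")
```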

The huge increase in light absorption enabled by Guo's femtosecond laser processing means nearly any metal becomes extremely useful anytime radiation gathering is needed. For instance, detectors of all kinds, from space probes to light meters, could capture far more data than an ordinary metal-based detector could.

And turning a metal black without paint, scoring, or burning could easily lead to everyday uses such as replacing black paint on automobile trim, or presenting your spouse with a jet-black engagement ring.

Guo is also quick to point out that the nanostructures' remarkable increase in a metal's surface area is a perfect way to catalyze chemical reactions. Along with one of his research group members, postdoctoral student Anatoliy Vorobyev, he hopes to learn how the metal can help derive more energy from fuel cell reactions. The new process has worked on every metal Guo has tried, and since it's a property of the metal itself, there's no worry of the black wearing off.

Currently, the process is slow. Altering a strip of metal the size of your little finger takes 30 minutes or more, but Guo is looking at how different burst lengths, wavelengths, and intensities affect a metal's properties. Fortunately, despite the incredible intensity involved, the femtosecond laser can be powered by a simple wall outlet, meaning that when the process is refined, implementing it should be relatively simple.

Despite the "wall outlet" ease of use and the stay-cool metal, don't expect to see home-blackening kits anytime soon. "If you got your hand in the way of the focused laser beam, even though it's only firing for a few femtoseconds, it would drill a hole through your skin," says Guo. "I wouldn't recommend trying that."

Source: University of Rochester

Original Article

Robot receptionists, teaching assistants


Japanese schools or businesses looking for a helper with a will of steel now have another number they can call -- robot receptionists ready to work for hourly wages.

The blue and white robots, which have cat-like ears and a large video camera lens for an eye, made their debut last month as hospital workers and are now being put up for rent to take additional jobs.

The "Ubiko" robots can answer simple inquiries and hand out information, meaning they could be used as receptionists in companies or as guides in airports or train stations.

The 113-centimeter (three-foot-eight) tall robots can also help out in the classroom, said Ubiquitous Exchange Co Ltd, which is marketing Ubiko with robot maker Tmsuk Co Ltd.

"By putting these robots in schools, the robots can check out the atmosphere in the classroom, and by giving some comfort to students hopefully can prevent bullying among students," Ubiquitous Exchange spokeswoman Akiko Sakurai said.

The robot can record footage and pass it to school officials and parents to detect bullying, a problem which is causing growing concern in Japanese schools.

But the robot's wage comes to 52,500 yen (445 dollars) an hour, hardly competitive against human helpers even in a country with a shrinking population.

The company insisted that Ubiko was not overpriced when considering the advantages of putting robots in service.

"If we look at these robots as advertising and public relations businesses, the price is quite cheap, actually," Sakurai said.

Twenty companies are already on the waiting-list to receive Ubiko, she said.

Two robot assistants produced by Tmsuk made their debut last month at Aizu Central Hospital in central Japan, welcoming visitors at the entrance and answering spoken inquiries.

They can also carry luggage and escort visitors and patients to their destinations.

© 2006 AFP

Original Article

Tuesday, November 21, 2006

'Evanescent coupling' could power gadgets wirelessly


11:25 15 November 2006 | NewScientist.com news service | Celeste Biever

A phenomenon called "evanescent coupling" could allow electronic gadgets to start charging themselves as soon as their owner walks into their home or office.

Researchers have been looking for a way to make a wireless charger for some time. One idea is to use electromagnetic induction – passing an electric current through a coil to create a magnetic field that induces a current in a neighbouring coil.

This is the way devices like electric toothbrushes are charged, and has been proposed as the basis of a universal recharger pad before (see One charger pad could power up all gadgets).

The snag as far as mobile devices are concerned is that the charger and device must be in close contact with each other for it to work. Alternative schemes - for instance, transmitting electromagnetic waves in all directions to reach any device in a room - would be hugely wasteful.
Trapped at source

Instead, Marin Soljacic at the Massachusetts Institute of Technology wants to use evanescent coupling, which allows electromagnetic energy "trapped" in a charging device to be tapped by a "drain" mobile device if the two have the same resonant frequency.

"The energy is trapped at source, until I bring a device that has the same resonant frequency close to it. Only then can the energy 'tunnel through'," says Soljacic. Crucially, the "charger" only starts powering another device when a compatible gadget comes within range.

Soljacic and colleagues Aristeidis Karalis and John Joannopoulos have carried out numerous computer simulations to see if the idea will work. They discovered that a small circuit, consisting of an inductor loop and a capacitor, could be made to resonate at a frequency of 3 to 4 megahertz, allowing it to trap electromagnetic energy without emitting radio waves to its surroundings.
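The circuit the simulations describe resonates at the standard LC frequency, f = 1/(2π√(LC)). A quick sketch shows component values of roughly the right scale; the 2 µH and 1 nF figures here are assumptions for illustration, not numbers from the MIT design:

```python
import math

# Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C)).
L = 2.0e-6   # inductance in henries (2 microhenries, assumed)
C = 1.0e-9   # capacitance in farads (1 nanofarad, assumed)

f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"resonant frequency: {f / 1e6:.2f} MHz")   # lands in the 3-4 MHz band
```

A receiving gadget would need its own loop and capacitor tuned to the same frequency for the energy to "tunnel through."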
Inductor loop

In the wireless charger design, alternating current from the mains is converted to this resonant frequency and sent into the circuit. The current travels round the circuit, generating a magnetic field as it passes through the inductor loop and an electric field as it passes through the capacitor. This pulsing magnetic field extends up to 5 metres around the device.

The magnetic field created by the wireless charger is relatively weak, meaning it consumes little power. However, if a mobile gadget fitted with a similar circuit, with the same resonant frequency, is brought into the room, the charger's magnetic field induces an electric current in the gadget's inductor loop.

This current travels round the mobile device's circuit, constantly switching between electrical and magnetic states, just as in the charger's circuit. As a result, the two circuits start to "resonate" together. This increases the transmission of electromagnetic energy via induction and that energy is used to charge up the gadget.

Placing one of these wireless chargers in each room of a home or office could provide coverage throughout the building. Soljacic presented the results at the American Institute of Physics Industrial Physics Forum in San Francisco on 14 November. The team is now trying to develop a prototype device.

Original Article

RNA Activation


The latest twist on the Nobel prizewinning method of RNA interference, or RNAi, could prove to be a real turn-on. Whereas standard RNAi silences a target gene, switching protein production off, the new technique boosts gene activity, providing a genetic "on" switch.

RNAi can silence genes in two ways. It can block the messenger RNA that is the intermediate between gene and protein and it can also interfere with "promoter" sequences that boost a gene's activity. It was while investigating this second phenomenon that Long-Cheng Li of the University of California, San Francisco, and his colleagues stumbled on the new method, dubbed RNA activation.

Li tried to silence several genes in human cells using short pieces of double-stranded RNA, 21 bases long. But to his surprise, he found that they had precisely the opposite effect (Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.0607015103).

Although the exact mechanism remains unclear, Li's team has already found that it requires a protein called Ago2, which is also involved in the standard RNAi process. Li believes RNA activation could find widespread use, for example in treating cancer by boosting the activity of tumour suppressor genes.

Original Article

Efficiency Jump for White OLEDs


Microscale lenses and better materials move OLEDs closer to lighting our world.
By Neil Savage

In an advance that could hasten the day when energy-efficient glowing plastic sheets replace traditional lightbulbs, a method for printing microscopic lenses nearly doubles the number of photons coming out of the materials, called organic light-emitting diodes, or OLEDs.

Stephen Forrest, an electrical engineer and vice president of research at the University of Michigan, says his technology increases the light output of the thin, flexible OLEDs by 70 percent. "They just create local curvature that allows light to pass through," he explains.

This means that OLEDs, which are currently used for superbright color displays in a number of applications, are getting closer to being competitive as white-light sources too. "It's a significant benefit, because the one big challenge in OLEDs is coming up with ways to get light out of them," says Vladimir Bulovic, head of MIT's Laboratory of Organic Optics and Electronics. "There's a lot of light in the OLED that never makes it out."

The benefits could be substantial. Sandia National Laboratory projects that if half of all lighting is solid-state by 2025--that is, made up of OLEDs and their technological cousin, LEDs made from inorganic semiconducting materials--it will cut worldwide power use by 120 gigawatts. That would save $100 billion a year and reduce the carbon dioxide emitted by electrical plants by 350 megatons a year. And OLEDs would offer more variety in lighting design, since they would take the form of flexible sheets.
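Sandia's dollar figure lines up with its power figure at a typical electricity price; a quick sanity check (the 9.5-cent/kWh price is an assumption, not from the report):

```python
# 120 GW saved continuously for a year, priced at a typical retail rate.
gw_saved = 120
hours_per_year = 8766                      # average year, including leap years
kwh = gw_saved * 1e6 * hours_per_year      # GW -> kW, times hours = kWh/year

dollars = kwh * 0.095                      # assumed ~9.5 cents per kWh
print(f"{dollars / 1e9:.0f} billion dollars per year")
```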

But while LEDs are taking over a number of applications, from traffic lights to high-end architectural applications, getting enough light out of OLEDs to make them practical remains tricky. When electricity runs through the thin layers of organic polymers that make up the OLEDs, it causes the material to emit photons. The problem is that only about half of the photons ever reach the surface of the device, and the majority of those that do make it that far get turned back at the last instant. That's because the glass or plastic substrate on which the layers of the OLED are deposited has a high index of refraction, but the open air into which the photons are traveling has a low index. When they hit the glass/air interface, about three-fifths of the photons get scattered to the edges of the glass and never reach an observer's eye.
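The loss at the glass/air interface is ordinary total internal reflection: photons hitting the surface outside a narrow "escape cone" are reflected back. A back-of-the-envelope estimate with typical refractive indices (assumed here, not taken from the paper) shows why most photons stay trapped:

```python
import math

# Snell's law: beyond the critical angle, light is totally internally reflected.
n_glass, n_air = 1.5, 1.0                  # typical indices, assumed
theta_c = math.asin(n_air / n_glass)       # critical angle of the escape cone

# For isotropic emission into the hemisphere facing the surface, the fraction
# of photons inside the escape cone is the cone's share of the solid angle.
fraction = 1.0 - math.cos(theta_c)

print(f"critical angle: {math.degrees(theta_c):.1f} deg")
print(f"escape-cone fraction: {fraction:.0%}")
```

Only about a quarter of the photons reaching the surface fall inside the cone in this crude estimate, the same order as the article's "three-fifths turned back."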

Researchers have tried several methods to send those photons in the desired direction, including inscribing gratings into the OLED and coating the surface with a silica gel that has a low index of refraction. Unfortunately, most of those methods caused a blurring effect or changed the color of the light when viewed at different angles. Researchers also tried larger lenses, but that required aligning the lenses with the OLED, a step that adds to the cost and complexity of manufacture.

Instead, Forrest uses microlenses, tiny hemispheres of polymer a few micrometers in diameter that direct the light forward from the OLED. He uses imprint lithography, essentially stamping a hexagonal array of lenses into a liquid polymer. Once it has hardened, the polymer layers making up the OLED can be deposited on top of the lenses. The ones he has made aren't perfect, Forrest says, but can be improved by a company that decides to optimize the manufacturing process.


With the lenses, described last month in a paper in the Journal of Applied Physics, Forrest is getting OLEDs to an external quantum efficiency--the percent of photons generated within the OLED that actually make it all the way out--of about 32 percent, up from previous highs of around 18 percent. The more important challenge, he says, is increasing the internal quantum efficiency--the percent of electrons that are turned into photons--so that there are more photons to get out. Right now that's at about 60 to 70 percent, but there's no theoretical reason why it can't make it to 100 percent.
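The two efficiencies combine multiplicatively (EQE = internal quantum efficiency × light-extraction efficiency), so the article's figures imply how much generated light now escapes. The 65% below is just the midpoint of Forrest's stated 60-70% range:

```python
# External quantum efficiency factors as EQE = IQE * extraction efficiency.
eqe = 0.32    # reported external quantum efficiency
iqe = 0.65    # midpoint of the reported 60-70% internal efficiency

extraction = eqe / iqe
print(f"implied light-extraction efficiency: {extraction:.0%}")   # about half
```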

Forrest says OLEDs could reach a light output of 100 lumens per watt within a couple of years, which would be far better than the 50 to 75 lumens per watt of fluorescent bulbs. (OLEDs have already far surpassed the 15 lumens per watt of incandescents.) The Department of Energy, which funds research into new forms of lighting, has a goal of 150 lumens per watt in 10 to 15 years. Even at that efficiency, though, OLEDs will have to become a lot cheaper to compete with existing lightbulbs.

Janice Mahon, vice president of technology commercialization at Universal Display Corp., which licenses Forrest's technology, says it's possible there will be some "entry-level" white-lighting OLEDs on the market in the next two years or so. Those might be small-area OLEDs used as architectural accents or in emergency signs. OLEDs for general illumination--large wall panels to light up a room, say--won't likely be available for more than five years, and probably for more than ten, she says. "It's anybody's guess."

Forrest isn't only working on the substrates. He recently improved the materials that make up the OLED layers. Typically, OLEDs have used a mix of phosphorescent materials that shine red, green, or blue, with the colors combining to make white light. But because of the differences in their wavelengths, a blue photon contains a lot more energy than a red one does, and thus takes more energy to create, with the result that the blue phosphor isn't as efficient as the others. The blue phosphor also breaks down more quickly, leading the color of the light to grow more yellow as the OLED ages. Changing power levels can also affect the color of the light.

So Forrest replaced the blue phosphor with a material that produces blue photons through fluorescence, a process that requires higher-energy electrons than phosphorescence. Forrest designed the layers so the fluorescent material, which is more efficient and more stable than the blue-phosphor material, was nearest the cathode and could capture higher-energy photons, then pass lower-energy ones to the other layers, where they'd create green and red light. Not only does his design make more-efficient use of power, but it also maintains its color when the power levels are decreased, leading to an OLED with adjustable brightness but stable color.
Original Article
Copyright Technology Review 2006.

Monday, November 20, 2006

Tesla Motors


You've heard about Tesla Motors already--the Silicon Valley startup that's making the next generation in electric cars, a roadster that can go from 0 to 60 in 4 seconds, and that looks like a million bucks. (It'll also cost nearly $100K, but that's cheap compared to a Lamborghini, and you just might beat it off the line.) There have been plenty of great articles on the company and the car, from Wired, The Guardian, and others. As it happens, I know several of their engineers, so I was able to get a tour of the company a few weeks ago. This wasn't an official interview with company managers or spokespeople, so I didn't get answers to all of my questions, but my friends Drew and Colin knew most of the answers, and I was also able to squeeze in a couple of minutes with JB Straubel, the company's Chief Technical Officer. Here's a summary of their answers to some technical questions that other news sources haven't written about. (The insider geek's view of Tesla Motors, if you will.)

Their building is practically hidden in San Carlos, an unobtrusive light-industrial space sitting off the beaten path amongst warehouses and more blue-collar industry than most of Silicon Valley's sprawling office parks contain. But once you step inside, the cover is blown, and you can tell there's something exciting happening. You can also tell they're a high-tech company, not a normal car company. They're small, and dominated by engineers. In fact, they're effectively just an engineering company, since both the aesthetic design and the manufacturing are outsourced to Lotus. (That's why the Tesla looks like an Elise.)

Aerodynamically the car is good, but its body is designed for looks, first and foremost. It has a drag coefficient of around .3, as compared to the EV1's .19. So although the Tesla gets 250 miles on a charge--which is excellent, more than double that of most EVs--the range could have been far longer. Some EV-geek chatter at GasSavers.org has suggested, by back-of-the-envelope calculation, that the power train of the Tesla in the body of an EV1 or an Opel Eco Speedster would have a 400-mile range. Conversely, if they had designed the car with the EV1's drag coefficient and kept a 250-mile range, they could have eliminated a lot of the battery bank, and thus a fair chunk of the cost of the vehicle. However, I don't begrudge them their decision to prioritize looks over efficiency. The Tesla, being electric, is already far more efficient than a combustion-engine car. It's a big step in the right direction, and a 250-mile range at a $100K price tag for the first run of an amazingly hot all-electric roadster is great. Too often we let the perfect be the enemy of the good.
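The GasSavers estimate is easy to reproduce under the crude assumption that highway range scales inversely with drag coefficient (ignoring rolling resistance and differences in frontal area):

```python
# If highway energy use were dominated by aerodynamic drag, halving Cd
# would roughly double range: range ~ 1 / Cd.
cd_tesla, cd_ev1 = 0.30, 0.19
range_tesla_miles = 250

estimated = range_tesla_miles * cd_tesla / cd_ev1
print(f"estimated range in an EV1-shaped body: {estimated:.0f} miles")
```

The result comes out just shy of 400 miles, matching the forum's figure, though the simplification flatters the estimate.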

The hardest part of an electric vehicle is not the motor, it's the controller--the power electronics which drive the motor, handle regenerative braking, and manage the power from the batteries. Tesla chose AC Propulsion's controller because of their expertise and reputation as the best in the business. Their founder and lead engineer, Alan Cocconi, is apparently so smart that instead of having CAD-refined schematics and models of the circuitry, he had the whole layout in his head. Before designing the controller for AC Propulsion, he designed the controller for the GM Impact (which became the EV1).

Tesla's motor is a high-performance induction motor; not revolutionary new technology, but top-of-the-line. In fact, it's the same one the EV1 and AC Propulsion's tZero prototype cars used. Tesla's innovation is in the way it is manufactured, keeping performance quality high but reducing costs. I also asked whether they thought about using in-wheel motors, since putting a small motor in every wheel instead of having one big motor with a drivetrain connecting it to the four wheels can greatly reduce mechanical complexity and weight, as well as improve reliability. (This is one thing EVs make possible which simply can't be done feasibly with combustion engines.) Interestingly, they did consider it, but JB said it would have made safety certification extremely difficult. It's perfectly safe, but the certification regulations are written assuming you have one motor and a drivetrain, so there are some certifications (such as the one for Anti-Lock Braking) you can't pass in a car with no drivetrain. These rules would need to be re-written to allow vehicles with in-wheel motors to be certified, which is obviously not going to happen without significant money and time spent lobbying--not a fight a small startup company should take on if it can avoid it.

The company has been operating for several years in "stealth mode" as many startups do, but started making a splash in the press this year despite the fact that their cars won't actually be available for another year. Why come out of stealth mode now? Since my friends are engineers, not the business folks, their answers were speculative, but the investors are Silicon Valley people, and the way things are done out there is generally to start creating the buzz before the product is out. Begin in stealth mode when you're not really sure whether it will work or how long it will take, but once you're reasonably assured of your technology and its prospects for success, let the whole world know so you can build anticipation. Tesla certainly knows that their technology and design work--they have had working cars for a long time now, and are just going through the industry-standard process of testing (a multi-step, multi-year process, from the sound of it) that works out the bugs in manufacturing and such. It will be exciting to see what kind of splash their cars make when they officially release them next year. It will also be interesting to see what their sedan will be like--they don't have any built (or even fully designed) yet, but after their roadster is a big success on the market they plan to expand down the food chain to vehicles for the rest of us. The sedans still won't be cheap, but they won't have Ferrari price tags.

My last question was, how will Tesla succeed where other EV companies have failed? The last two decades are littered with the wreckage of failed ventures in electric vehicles: Corbin Motors' Sparrow, Solectria's Force, GM's EV1, Ford's Th!nk, to name a few. Some of these companies had engineering problems, some of them were trying so hard to make their vehicles affordable that they couldn't make a profit, and for GM, in addition to technical hurdles, the company executives apparently felt that it wasn't in their interests to succeed. Tesla will be different for a few reasons. They've worked with the realities of the market--EVs require expensive technology, so you might as well accept it and make a car that people are happy to pay lots of money for because it's amazing, rather than making lots of compromises and requiring your customers to compromise as well. This means they have the money to hire top-notch engineers and work out all the bugs properly. Being expensive but high-quality in the beginning is also a smart road to future affordability, because reducing costs is easiest to do on a known problem, an existing product; it's much harder to make something both cheap and good right out of the starting gate. (Also, as they pointed out in their Forbes interview, it is much easier to move down in a market than move up in one.) Tesla is also not trying to do too much themselves, with the experts at Lotus able to handle many of the problems that would hamstring an auto startup doing their own manufacturing. (Though they have hired away many Lotus engineers for themselves, too.) They understand the march of advancing technology, while the big American car companies don't. And, unlike existing car companies, Tesla is unencumbered by chains of vested interests; they have the will to succeed.

Original article

Serial Hybrids Are Here!


Just six months after Tesla Motors announced the return of a 100% battery car, the Tesla Roadster, we have another great leap forward.  As reported in the Los Angeles Times in a story entitled “GM To Present A Modified Electric Car” (courant.com) on November 10th, General Motors has announced a serial hybrid car.  Early next year they will present a prototype of the vehicle.

If you are wondering just exactly what “serial hybrid” car means, you’re not alone.  But this is a car that will take the market by storm.  A serial hybrid means that the car has two engines, hence it is a hybrid, but only one engine is connected directly to the drive train, hence it is a “serial” hybrid.  By this logic, your Prius is a parallel hybrid, or just a hybrid.  For a detailed explanation of a serial hybrid car, including diagrams and energy conversion charts, read our feature from October 2005 entitled “The Case for the Serial Hybrid Car.”

The advantages of a serial hybrid car are huge.  They are far, far less complex than conventional hybrid cars, because only the electric motor, with its huge range of usable RPM, is connected to the drivetrain.  Another huge advantage is that serial hybrid cars have their second motor, a small, ultra-low-emission gas or diesel engine, connected to a generator to recharge the battery pack while the car is being driven.  By doing this, a cheaper and more reliable battery pack can be used, and there is no need for the complex heat management system still required by lithium-ion batteries.
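The drive architecture described above can be sketched as a toy model: the electric motor always draws from the battery, and the generator engine runs only when the pack gets low. Every number here (pack size, consumption, recharge rate, speed) is illustrative, not from GM's announcement:

```python
# Toy serial-hybrid simulation: wheels are always electric; the small
# generator engine only tops up the battery when it runs low.

def drive(miles, battery_kwh, pack_kwh=10.0, kwh_per_mile=0.3, low=0.2):
    generator_hours = 0.0
    for _ in range(miles):
        battery_kwh -= kwh_per_mile           # motor draws from the pack
        if battery_kwh < low * pack_kwh:      # pack low: run the generator
            battery_kwh += 1.5                # recharge per mile (assumed)
            generator_hours += 1 / 30         # at an assumed 30 mph
    return battery_kwh, generator_hours

final, hours = drive(miles=60, battery_kwh=10.0)
print(f"battery left: {final:.1f} kWh, generator ran {hours * 60:.0f} minutes")
```

Even in this crude model the engine runs only intermittently on a trip longer than the battery-only range, which is the source of the architecture's efficiency claim.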

It is well and good to criticize General Motors for discontinuing the EV-1.  But they are back with another green car which is as ahead of its time as the EV-1 once was, and this car is going to attract a much bigger market.

Critics may claim the EV-1 was a zero-emission vehicle, while a serial hybrid car has a small, ultra-low emission onboard motor, and therefore it isn’t as green as the EV-1, or the Tesla Roadster, or any 100% battery powered car.  Someday, when all electricity generated everywhere is done so with no combustion or other form of environmental degradation, this concern may be valid, but until that time, this is pure poppycock.

A detail of some interest regarding GM’s bold and groundbreaking new green car initiative is the set of performance specifications reported in the Los Angeles Times story.  “The new car, if developed as a production model, would be recharged daily by owners and probably would deliver sufficient power from the batteries to cover the typical daily commute of 20 to 30 miles before depleting the battery charge and switching to electricity generated onboard.”

If these figures are accurate, GM’s planned serial hybrid car could be dirt cheap.  Theoretically, a car like this could run on lead-acid batteries.  Remember, a serial hybrid would need at most a two-speed transmission for the electric motor that provides traction, and no transmission at all for the small gas (or diesel) engine that powers the onboard generator.  Maintenance would be negligible.  If GM used a nickel-metal hydride battery pack, their serial hybrid would likely go much further than “20 to 30 miles” on previously stored battery power alone, and the onboard generator engine could be smaller.  The people’s car is here.

However General Motors designs their serial hybrid car, it will be carefully calibrated to create a car with so much value for money that we all want to buy one, and don’t be surprised if they call it the EV-2.  Redemption is a sweet thing.

Original article

Friday, November 17, 2006

Robot Adapts to Injury


Instead of giving the robot a rigid set of instructions, the Cornell University researchers let it discover its own nature and work out how to control itself, a process that seems to resemble the way human and animal babies discover and manipulate their bodies. The ability to build this "self-model" is what makes it able to adapt to injury.

"Most robots have a fixed model laboriously designed by human engineers," Lipson explained. "We showed, for the first time, how the model can emerge within the robot. It makes robots adaptive at a new level, because they can be given a task without requiring a model. It opens the door to a new level of machine cognition and sheds light on the age-old question of machine consciousness, which is all about internal models."

The robot, which looks like a four-armed starfish, starts out knowing only what its parts are, not how they are arranged or how to use them to fulfill its prime directive to move forward. To find out, it applies what amounts to the scientific method: theory followed by experiment followed by refined theory.

It begins by building a series of computer models of how its parts might be arranged, at first just putting them together in random arrangements. Then it develops commands it might send to its motors to test the models. A key step, the researchers said, is that it selects the commands most likely to produce different results depending on which model is correct. It executes the commands and revises its models based on the results. It repeats this cycle a number of times, then attempts to move forward.
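That theory-experiment-refinement loop can be sketched in a few lines of code. The sketch below is our own drastic simplification for illustration, not the researchers' actual algorithm: the robot's "body" is reduced to an unknown arrangement of four joints, every possible arrangement is a competing self-model, and on each cycle the robot executes the test command on which its surviving models disagree most.

```python
import itertools
import random

random.seed(0)

# The true (hidden) body: each joint is mounted "up" (+1) or "down" (-1).
true_layout = (1, -1, 1, 1)

def sensor_reading(layout, command):
    """What the robot would measure for a given body layout and command."""
    return sum(j * c for j, c in zip(layout, command))

# Theory: start with every candidate self-model in play.
candidates = list(itertools.product([1, -1], repeat=4))

for _ in range(8):  # theory -> experiment -> refined theory
    # Pick the test command whose predicted outcomes differ most
    # across the surviving candidate models.
    trial_commands = [tuple(random.choice([1, -1]) for _ in range(4))
                      for _ in range(20)]
    def disagreement(cmd):
        predictions = [sensor_reading(m, cmd) for m in candidates]
        return max(predictions) - min(predictions)
    best_cmd = max(trial_commands, key=disagreement)

    # Experiment: execute the command, then discard every model that
    # fails to explain what was actually observed.
    observed = sensor_reading(true_layout, best_cmd)
    candidates = [m for m in candidates
                  if sensor_reading(m, best_cmd) == observed]

print(true_layout in candidates)  # prints True: the real body always survives
```

The key idea carried over from the article is the command-selection step: rather than testing at random, the robot favors experiments whose outcome will discriminate between its competing self-models.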

"The machine does not have a single model of itself -- it has many, simultaneous, competing, different, candidate models. The models compete over which can best explain the past experiences of the robot," Lipson said.

The result is usually an ungainly but functional gait; the most effective so far is a sort of inchworm motion in which the robot alternately moves its legs and body forward.

Once the robot reaches that point, the experimenters remove part of one leg. When the robot can't move forward, it again builds and tests simulations to develop a new gait.

The underlying algorithm, the researchers said, could be applied to much more complex machines and also could allow robots to adapt to changes in environment and repair themselves by replacing parts. The work also could have other applications in computing and could lead to better understanding of animal cognition. In a way, Josh Bongard said, the robot is "conscious" on a primitive level, because it thinks to itself, "What would happen if I do this?"

"Whether humans or animals are conscious in a similar way -- do we also think in terms of a self-image, and rehearse actions in our head before trying them out -- is still an open question," he said.

Monday, November 13, 2006

Artificial neurons


New implantable biomedical devices that can act as artificial nerve cells, control severe pain, or allow otherwise paralyzed muscles to be moved might one day be possible thanks to developments in materials science. Writing today in Advanced Materials, Nicholas Kotov of the University of Michigan, USA, and colleagues describe how they have used hollow, submicroscopic strands of carbon, carbon nanotubes, to connect an integrated circuit to nerve cells. The new technology offers the possibility of building an interface between biology and electronics.

Kotov and colleagues at Oklahoma State University and the University of Texas Medical Branch have explored the properties of single-walled nanotubes (SWNTs) with a view to developing these materials as biologically compatible components of medical devices, sensors, and prosthetics. SWNTs are formed from carbon atoms by various deposition techniques and resemble a rolled-up sheet of chicken wire, but on a tiny scale. They are usually just a few nanometers across and up to several micrometers in length.

The researchers built up layers of their SWNTs to produce a film that is electrically conducting even at a thickness of just a few nanometers. They next grew neuron precursor cells on this film, and these successfully differentiated into highly branched neurons. A voltage could then be applied laterally across the SWNT film, and a so-called whole-cell patch clamp used to measure any electrical effect on the nerve cells. When a lateral voltage is applied, a relatively large current is carried along the surface, but only a very small current, in the region of billionths of an amp, passes across the film to the nerve cells. The net effect is a kind of reverse amplification of the applied voltage that stimulates the nerve cells without damaging them.

Kotov and his colleagues report that such devices might find use in pain management, for instance, where the activity of nerve cells involved in the pain response might be damped. An analogous device might conversely be used to stimulate failed motor neurons, the nerve cells that control muscle contraction. The researchers also suggest that similar stimulation could be applied to cardiac muscle cells to pace the heart.

They caution that a great deal of work is yet to be carried out before such devices become available to the medical profession.

Author: Nicholas A. Kotov, University of Michigan (USA), http://www.engin.umich.edu/dept/che/research/kotov/

Title: Stimulation of Neural Cells by Lateral Currents in Conductive Layer-by-Layer Films of Single-Walled Carbon Nanotubes

Advanced Materials 2006, 18, No. 22, doi: 10.1002/adma.200600878

Comprehensive model is first to map protein folding at atomic level


Scientists at Harvard University have developed a computer model that, for the first time, can fully map and predict how small proteins fold into three-dimensional, biologically active shapes. The work could help researchers better understand the abnormal protein aggregation underlying some devastating diseases, as well as how natural proteins evolved and how proteins recognize correct biochemical partners within living cells.

The technique, which can track protein folding for some 10 microseconds -- about as long as some proteins take to assume their biologically stable configuration, and at least a thousand times longer than previous methods -- is described this week in the Proceedings of the National Academy of Sciences.

"For years, a sizable army of scientists has been working toward better understanding how proteins fold," says co-author Eugene I. Shakhnovich, professor of chemistry and chemical biology in Harvard's Faculty of Arts and Sciences. "One of the great problems in science has been deciphering how amino acid sequence -- a protein's primary structure -- also determines its three-dimensional structure, and through that its biological function. Our paper provides a first solution to the folding problem, for small proteins, at an atomic level of detail."

Fiendishly intricate, protein folding is crucial to the chemistry of life. Each of the body's 20 amino acids, the building blocks of proteins, is attracted or repulsed by water; it's largely these affinities that drive the contorting of proteins into distinctive three-dimensional shapes within the watery confines of a cell. The split-second folding of gangly protein chains into tight three-dimensional shapes has broad implications for the growing number of disorders believed to result from misfolded proteins or parts of proteins, most notably neurodegenerative disorders such as Alzheimer's and Parkinson's diseases.

The model developed by Shakhnovich and colleagues faithfully describes and catalogs countless interactions between the individual atoms that comprise proteins. In so doing, it essentially predicts, given a string of amino acids, how the resulting protein will fold -- the first computer model to fully replicate folding of a protein as happens in nature. In more than 4,000 simulations conducted by the researchers, the computer model consistently predicted folded structures nearly identical to those that have been observed experimentally.

"This work should open new vistas in protein engineering, allowing rational control of not only protein folding, but also the design of pathways that lead to these folds," says Shakhnovich, who has studied protein folding for nearly two decades. "We are also using these techniques to better understand two fundamental biological questions: How have natural proteins evolved, and how do proteins interact in living cells to recognize correct partners versus promiscuous ones?"

Source: Harvard University

Tuesday, November 07, 2006

Engineers develop revolutionary nanotech water desalination membrane


UCLA Engineering's Eric Hoek holds nanoparticles and a piece of his new RO water desalination membrane. Credit: UCLA Engineering/Don Liebig
Researchers at the UCLA Henry Samueli School of Engineering and Applied Science today announced they have developed a new reverse osmosis (RO) membrane that promises to reduce the cost of seawater desalination and wastewater reclamation.

Reverse osmosis desalination uses extremely high pressure to force saline or polluted waters through the pores of a semi-permeable membrane. Water molecules under pressure pass through these pores, but salt ions and other impurities cannot, resulting in highly purified water.
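To get a feel for why "extremely high pressure" is needed, a back-of-envelope estimate of seawater's osmotic pressure -- the minimum the pump must overcome before any fresh water flows -- can be made with the van 't Hoff relation. This illustration is ours, not the article's; the round-number concentration and temperature are assumptions.

```python
# Van 't Hoff estimate of seawater osmotic pressure: pi = i * M * R * T.
# All figures below are textbook round numbers, not from the article.

i = 2          # NaCl dissociates into two ions per formula unit
M = 0.6        # mol/L -- roughly 35 g of NaCl per litre of seawater
R = 0.08314    # L*bar/(mol*K), gas constant in convenient units
T = 298        # K, about room temperature

pi_bar = i * M * R * T
print(round(pi_bar, 1))  # prints 29.7 -- about 30 bar of osmotic pressure
```

Seawater RO plants typically apply on the order of 55 to 80 bar, well above this ~30 bar floor, to drive water through the membrane at a useful rate -- which is why pumping energy dominates the cost and why a more permeable membrane matters.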

The new membrane, developed by civil and environmental engineering assistant professor Eric Hoek and his research team, uses a uniquely cross-linked matrix of polymers and engineered nanoparticles designed to draw in water molecules but repel nearly all contaminants. These new membranes are structured at the nanoscale to create molecular tunnels through which water flows more easily than contaminants.

Unlike the current class of commercial RO membranes, which simply filter water through a dense polymer film, Hoek's membrane contains specially synthesized nanoparticles dispersed throughout the polymer -- known as a nanocomposite material.

"The nanoparticles are designed to attract water and are highly porous, soaking up water like a sponge, while repelling dissolved salts and other impurities," Hoek said. "The water-loving nanoparticles embedded in our membrane also repel organics and bacteria, which tend to clog up conventional membranes over time."

With these improvements, less energy is needed to pump water through the membranes. Because they repel particles that might ordinarily stick to the surface, the new membranes foul more slowly than conventional ones. The result is a water purification process that is just as effective as current methods but more energy efficient and potentially much less expensive. Initial tests suggest the new membranes have up to twice the productivity -- or consume 50 percent less energy -- reducing the total expense of desalinated water by as much as 25 percent.
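The arithmetic behind that 25 percent figure can be sanity-checked with a toy cost model. The assumption that energy accounts for roughly half the cost of desalinated water is ours, used purely for illustration, not a figure from the article.

```python
# Toy cost model: if energy is about half the total cost of desalinated
# water (our assumption) and energy use is halved, how much cheaper is
# the water overall?

energy_share = 0.5            # assumed fraction of total cost
other_share = 1 - energy_share

old_cost = energy_share + other_share          # normalized to 1.0
new_cost = 0.5 * energy_share + other_share    # energy use halved

saving = 1 - new_cost / old_cost
print(saving)  # prints 0.25 -- a 25 percent reduction in total cost
```

Under that assumption, "50 percent less energy" and "25 percent cheaper water" are the same claim stated two ways.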

"The need for a sustainable, affordable supply of clean water is a key priority for our nation's future and especially for that of California -- the fifth largest economy in the world," Hoek said. "It is essential that we reduce the overall cost of desalination -- including energy demand and environmental issues -- before a major draught occurs and we lack the ability to efficiently and effectively increase our water supply."

A critical limitation of current RO membranes is that they are easily fouled -- bacteria and other particles build up on the surface and clog it. This fouling results in higher energy demands on the pumping system and leads to costly cleanup and replacement of membranes. Viable alternative desalination technologies are few, though population growth, over-consumption and pollution of the available fresh water supply make desalination and water reuse ever more attractive alternatives.

With his new membrane, Hoek hopes to address the key challenges that limit more widespread use of RO membrane technology by making the process more robust and efficient.

"I think the biggest mistake we can make in the field of water treatment is to assume that reverse osmosis technology is mature and that there is nothing more to be gained from fundamental research," Hoek said. "We still have a long way to go to fully explore and develop this technology, especially with the exciting new materials that can be created through nanotechnology.

Hoek is working with NanoH2O, LLP, an early-stage partnership, to develop his patent-pending nanocomposite membrane technology into a new class of low-energy, fouling-resistant membranes for desalination and water reuse. He anticipates the new membranes will be commercially available within the next year or two.

"We as a nation thought we had enough water, so a decision was made in the 1970s to stop funding desalination research," Hoek said. "Now, 30 years later, there is renewed interest because we realize that not only are we running out of fresh water, but the current technology is limited, we lack implementation experience and we are running out of time. I hope the discovery of new nanotechnologies like our membrane will continue to generate interest in desalination research at both fundamental and applied levels."

Source: University of California - Los Angeles

Monday, November 06, 2006

Cheap, Transparent, and Flexible Displays


New high-performance transistors could lead to windows and helmet visors that double as high-quality displays.
By Kevin Bullis

The transparent film over this penny has 70 transistors on it. They are made of invisible materials developed at Northwestern University. (Credit: Lian Wang and Myung-Han Yoon, Northwestern University)

By developing a low-cost method for making high-performance transparent transistors, researchers at Northwestern University have taken an important step toward creating sharp, bright displays that could be laminated to windshields, computer monitors, and televisions but would blend into the background when not in use.
For years, researchers have attempted to make flexible electronics based on electrically conducting plastics that can be manufactured inexpensively. There has been some success in making ones that are nearly transparent. But these organic materials have produced transistors with disappointing performance, falling well short of the capabilities of transistors made with inorganic materials such as silicon. The Northwestern researchers, led by chemistry and materials-science professor Tobin Marks, combined the best of both worlds by making hybrid organic-inorganic devices that have high performance but could be manufactured inexpensively. The transistors are transparent, so they could be used in see-through displays.
Most of the transistor is composed of indium oxide, an inorganic semiconductor that can be produced at low cost because it can be deposited over large areas at room temperature. The process Marks employs to make them is a standard technique that uses ion beams to control the crystallization and adhesion of the oxide as it is deposited onto a surface. The method can also be used to adjust the conductivity of the final material, which makes it possible to use indium oxide as a semiconductor in one part of the device and as a conductor in other parts.
The organic material in the device is made of molecules that, once applied to a surface, self-assemble into a well-ordered structure that gives it superior insulating properties. Combined with the indium-oxide semiconductor, it makes possible transistors that perform better than the amorphous silicon transistors often used in LCD screens today. Indeed, the transistors are nearly as good as the much more expensive polysilicon transistors used in high-end displays. Marks says this high performance includes low operating voltages and good switching behavior that should make the transistors easy to integrate into devices, and could lead to energy-saving, crisp-looking displays.
Since both the thin films of indium oxide and the self-assembling organic material are transparent and can be assembled on glass, as demonstrated in an article appearing online in the journal Nature Materials, they could be embedded without a trace in windows. And because the processes used are low temperature, the electronics could be deposited on a plastic substrate, allowing flexible, transparent displays.

"There are a lot of interesting things you could do if you had truly transparent electronics," Marks says. "You could almost envision a display floating in space." The displays could also be applied to glasses or helmet visors. "You could imagine an assembly-line worker, a race-car driver, or some military application where you might want a map or something like that on your visor."
The new transistors' ability to challenge silicon in performance suggests that they could be used not just as pixel switches, but also as transparent processors and memory--all of which could be incorporated into a thin, flexible sheet, saving manufacturing costs and introducing a new form of electronics. Such applications are still a long way off and require improving the performance of the transistors. But prototype displays based on the new transistors could be ready in as little as 12 to 18 months, Marks says. Polyera, a startup in Evanston, IL, has been founded to help bring the novel materials to market.

The Northwestern researchers are not the first to combine organic and inorganic materials into transistors: as early as 1999, IBM researchers produced such devices (see "Flexible Transistors"). However, these were not transparent and did not perform as well as the Northwestern transistors. Others are now working toward transparent electronics using materials such as zinc oxide and carbon nanotubes, says John Rogers, professor of materials science and engineering at the University of Illinois, Urbana-Champaign. While carbon nanotubes could theoretically lead to significantly better-performing devices, manufacturing arrays of nanotube-based devices will be much more difficult than making devices with the Northwestern techniques. The new work is also distinct from some prototype flexible displays, which still rely on visible wires.
"This is some very nice work from one of the leading groups in low-temperature materials for electronics," Rogers says. "The performance that they achieve is very impressive. This paper represents a valuable contribution to the emerging field of transparent electronics."
Copyright Technology Review 2006.

Friday, October 20, 2006

Silicon retina mimics biology for a clearer view


20 October 2006
NewScientist.com news service
Tom Simonite

A silicon chip that faithfully mimics the neural circuitry of a real retina could lead to better bionic eyes for those with vision loss, researchers claim.

About 700,000 people in the developed world are diagnosed with age-related macular degeneration each year, and 1.5 million people worldwide suffer from a disease called retinitis pigmentosa. In both of these diseases, retinal cells, which convert light into nerve impulses at the back of the eye, gradually die.

Most artificial retinas connect an external camera to an implant behind the eye via a computer (see 'Bionic' eye may help reverse blindness). The new silicon chip created by Kareem Zaghloul at the University of Pennsylvania, US, and colleague Kwabena Boahen at Stanford University, also in the US, could remove the need for a camera and external computer altogether.

The circuit was built with the mammalian retina as its blueprint. The chip contains light sensors and circuitry that functions in much the same way as nerves in a real retina – they automatically filter the mass of visual data collected by the eye to leave only what the brain uses to build a picture of the world.
Fully implanted

"It has potential as a neuroprosthetic that can be fully implanted," Zaghloul told New Scientist. The chip could be embedded directly into the eye and connected to the nerves that carry signals to the brain's visual cortex.

To make the chip, the team first created a model of how light-sensitive neurons and other nerve cells in the retina connect to process light. They made a silicon version using manufacturing techniques already employed in the computer chip industry.

Their chip measures 3.5 x 3.3 millimetres and contains 5760 silicon phototransistors, which take the place of light-sensitive neurons in a living retina. These are connected to 3600 transistors, which mimic the nerve cells that process light information and pass it on to the brain for higher processing. There are 13 different types of transistor, each with slightly different performance, mimicking different types of real nerve cell.

"It does a good job with some of the functions a real retina performs," says Zaghloul. For example, the retina chip is able to automatically adjust to variations in light intensity and contrast. More impressively, says Patrick Deganeer, a neurobionics expert at Imperial College London, UK, it also deals with movement in the same way as a living retina.
Changing scene

The mammalian brain only receives new information from the eyes when something in a scene changes. This cuts down on the volume of information sent to the brain but is enough for it to work out what is happening in the world.

The retina chip performs in the same way. In the researchers' demonstration, this allows it to extract useful data from a moving face.
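A digital caricature of this change-driven encoding -- ours, far simpler than the chip's analog circuitry -- is frame differencing with a threshold: only pixels whose brightness changes by more than some amount generate an output event, so a static scene produces no traffic at all.

```python
# Simplified change-driven encoder (illustration only): compare two
# frames pixel by pixel and emit an event for each pixel whose
# brightness changed by more than `threshold`.

def encode_changes(prev_frame, frame, threshold=10):
    """Return (x, y, new_value) events for pixels that changed."""
    events = []
    for y, (prev_row, row) in enumerate(zip(prev_frame, frame)):
        for x, (p, v) in enumerate(zip(prev_row, row)):
            if abs(v - p) > threshold:
                events.append((x, y, v))
    return events

static = [[100, 100], [100, 100]]
moved  = [[100, 180], [100, 100]]   # one pixel brightened

print(encode_changes(static, static))  # prints [] -- nothing to report
print(encode_changes(static, moved))   # prints [(1, 0, 180)]
```

Only the changed pixel is transmitted, which is the bandwidth saving the article describes: the brain, or a downstream processor, receives data only when the scene changes.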

As well as having the potential to help humans with damaged vision, future versions of the retina chip could help robots too, adds Degenaar. "If you can perform more processing in hardware at the front end you reduce demand on your main processor, and could cut power consumption a lot," he explains.

Zaghloul and Boahen are currently concentrating on reducing the size and power consumption of the retina chip before considering clinical trials.

Journal reference: Journal of Neural Engineering (vol 3, p 257)