The Photosynthetic French Sea Pods From 2050

Friday, October 24, 2014

The water is going to redefine our future. By 2050, sea levels are expected to be sixteen centimeters higher than today, and Asia is projected to be among the regions most affected. Conceived for initial installation in the Indian Ocean, Bloom is intended as the first link in a chain of structures responding to rising waters caused by global warming. The concept behind this large-scale project is to cultivate phytoplankton, which absorb excess CO2 and produce O2.

Bloom is a semi-submersible center that can also issue tsunami warnings. Moored to the seabed by a system of cables, it monitors water level and quality in the dead zones of oceans, rivers, and coastal areas. Its goal is to restore their O2 levels by injecting photosynthesizing phytoplankton.

The project offers a living environment both for a permanent staff of scientists and for phytoplankton kept in large aquariums. It is thus the first structure designed for these algae: a catalyst for their growth and, in a sense, a floating pocket of oxygen production on Earth that can also desalinate seawater.

Bloom aims to be a sustainable answer, reducing our carbon footprint while teaching us to live in harmony with our oceans. Every CO2-producing factory would have its own Bloom as a visible commitment to a better environment.

By taking responsibility for our CO2 emissions, we would become aware of our ecological impact on the planet. After all, water is the mother of our civilizations, and we will always need it.

Project facts

  • Name of the project: Bloom – An Aquatic farm for phytoplankton culture
  • Location: Indian Ocean
  • Gross floor area: 2 070 sqm
  • Height: 45m / 5 levels
  • Structure: aluminum + methacrylate
  • Designer: Sitbon Architectes
  • Credits: Sitbon Architectes
  • Website:

Written by David K. Originally appeared on PlusMood
All images courtesy Sitbon Architectes

Answer To Traffic Problems. A Sexy City-Pod

The single-axle Honda Type E concept uses physical principles similar to a Segway's for maximum maneuverability in the urban jungle. The ultra-small vehicle focuses on minimum space consumption, maximum driver visibility, easy access, high efficiency, and, most importantly, driving dynamics that let it turn 360 degrees on the spot. It's got a far-out aesthetic, but it's really not that far from what we'll probably see in the not-so-distant future.


  • no drive chain (lower weight, lower frictional resistance)
  • no steering gear (lower weight)
  • turns on a dime
  • high efficiency start/stop-drive with regenerative braking
  • standing phase without loss of energy (traffic)
  • minimum use of parking space
  • zero emissions

Designer: Michael Brandt

A 10-Year-Old Virtual Girl Puts A Sex Offender In Jail

Wednesday, October 22, 2014

A sting operation involving a virtual ten-year-old girl named Sweetie has led to the conviction of a registered sex offender in Australia, marking what is believed to be the first conviction since the avatar was created last year by a Dutch rights group.

Scott Robert Hansen, 37, pleaded guilty to three counts in a Brisbane court this week, admitting to possessing images of child sexual abuse and sending lewd pictures of himself to Sweetie.

Workers from the charity Terre des Hommes went undercover in chat rooms for ten weeks last year, posing as Filipina girls and using Sweetie to lure potential predators via webcam.

The group says it never actively approached predators, waiting instead for them to contact Sweetie, and that its researchers stopped all conversations once a user offered the young girl money to perform sexual acts. The goal, they say, was to shed light on a growing form of webcam-based child exploitation, which has snared thousands of victims in the Philippines alone.

More than 20,000 users from 71 countries approached Sweetie with requests for obscene performances during the ten-week operation, of which 1,000 were identified.

Terre des Hommes passed their names on to authorities in the UK, US, and other countries, though Interpol has raised concerns over non-governmental organizations spearheading criminal investigations.

The operation led to the arrest of 46 people in Australia, and other investigations are ongoing. BBC News has obtained a transcript of Hansen's conversation with Sweetie, whom he took to be nine years old.

The chat logs show that the man asked Sweetie if she'd ever seen a naked man before, before turning the camera on himself and performing a sexual act.

An anonymous operator tells BBC News that Hansen was "probably not the most serious, not even amongst the most serious" of predators he encountered during the sting. "Some of the men we interacted with literally give me nightmares," the operator added.

This Hoverboard Can Save Future Buildings From Earthquakes

This is the Hendo, the namesake of an inventor named Greg Henderson, and it’s really more of a technology demo than something that’s going to get you to work in the morning.

Right now it’s effectively a parlor trick, and it apparently only works in parlors lined with one of a small set of metals. But Henderson, who co-founded the hoverboard’s parent company Arx Pax with his wife Jill, imagines the technology inside it could become a solution for keeping buildings from being destroyed in floods and earthquakes by simply lifting them up.

They also say that it could serve as a replacement for the systems that currently levitate maglev trains.

Those ambitions are the opposite of humility, but Arx Pax seems like a humble company, situated in a nondescript office park in Los Gatos, California.

Also humble: the small square white box that floats just a few centimeters above metal surfaces, designed as a technology demo that will be made available to Kickstarter backers. It’s like an air hockey table in reverse: instead of air holding up a puck, the large object itself floats just above the surface, adrift.

But there’s no air, just a barreling thrum of whatever is going on inside the "white box." Inside it are a group of what Henderson refers to as hover engines, and the oversimplified explanation of how they work involves a little electromagnetism and Lenz’s law.

Scale this up a bit and you get the hoverboard I’m on. Go even bigger and you can hold up cars, trains, and even buildings. Or at least that’s the idea.

"A magnet has an electromagnetic field. It is equal in all areas. It has a north and a south pole," Henderson explains. "What if you were able to take that magnet, and organize the magnetic field so that it was only on one side? And then you combine that with other magnetic fields in a way that amplifies and focuses their strength? That’s magnetic field architecture."

When used on a material like the copper floor that I’m standing on, the entire unit floats a few centimeters off the ground. Goodbye friction, and hello hoverboard.
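Henderson's description of a one-sided field matches the behavior of a Halbach array, in which neighboring magnets are rotated 90 degrees so their fields reinforce on one side and largely cancel on the other. The toy 2D dipole model below is not Arx Pax's actual design; the dipole count, spacing, and geometry are arbitrary, chosen only to show the one-sidedness numerically:

```python
import math

MU0_OVER_2PI = 2e-7  # mu0 / (2*pi), in T*m/A

def dipole_field_2d(m, src, at):
    """Field of a 2D point dipole with moment m at position src, evaluated at `at`."""
    rx, ry = at[0] - src[0], at[1] - src[1]
    r2 = rx * rx + ry * ry
    r = math.sqrt(r2)
    ux, uy = rx / r, ry / r
    dot = m[0] * ux + m[1] * uy
    scale = MU0_OVER_2PI / r2
    return (scale * (2 * dot * ux - m[0]), scale * (2 * dot * uy - m[1]))

def halbach_field(n, spacing, point):
    """Total field of n dipoles whose moments rotate 90 degrees per step (Halbach pattern)."""
    bx = by = 0.0
    for k in range(n):
        theta = k * math.pi / 2          # rotate each successive magnet by 90 degrees
        m = (math.cos(theta), math.sin(theta))
        fx, fy = dipole_field_2d(m, (k * spacing, 0.0), point)
        bx += fx
        by += fy
    return math.hypot(bx, by)

n, d, h = 8, 1.0, 1.0                    # 8 magnets, unit spacing, probe 1 unit away
cx = (n - 1) * d / 2                     # probe above/below the array's center
above = halbach_field(n, d, (cx, +h))
below = halbach_field(n, d, (cx, -h))
print(above, below)                      # one side comes out several times stronger
```

For an infinite array the cancellation on the weak side would be exact; even with eight dipoles, the strong side dominates, which is what lets all the flux be aimed at the conductive floor.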

How that works with a human on top of it is fun, but not elegant. I used to skateboard quite a bit, but hopping on Hendo’s hoverboard is something else. The easiest way to describe it is like getting on a snowboard that’s just been pulled out of an oven.

He envisions it as something that could be useful for lifting a building off its foundation. In fact, that was the premise of the company before it was even talking hoverboards. A patent Henderson filed for the company last March envisions a three-part system that would put the hover engines in the very foundation of a building, lifting it up and out of the way of danger.

When I ask how you could handle 10 feet of water when this small white box and hoverboard lift up just a few centimeters, Henderson says the scale can go way up, and lift things even higher.

The tricky part is keeping them from going out of control, which the company is still working on. That could keep the hoverboard from going bananas when you shift your weight the wrong way, and hopefully scale up to keep taller objects from toppling over.

Preparing For Our Life On Mars, On A Hawaiian Volcano

On the way to Mars, Neil Scheibelhut stopped by Walmart for mouthwash and dental floss. "We're picking up some last-minute things," he said via cellphone last Wednesday afternoon from the store.

Scheibelhut is not actually an astronaut leaving the earth. But three hours later, he and five other people stepped into a dome-shaped building on a Hawaiian volcano where they will live for the next eight months, mimicking a stay on the surface of Mars.

This is part of a Nasa-financed study, the Hawaii Space Exploration Analog and Simulation, or Hi-Seas for short. The goal is to examine how well a small group of people, isolated from civilization, can get along and work together.

When astronauts finally head toward Mars years from now — Nasa has penciled in the 2030s — it will be a long and lonely journey: about six months to Mars, 500 days on the planet, and then another six months home.

"Right now, the psychological risks are still not completely understood and not completely corrected for," said Kimberly Binsted, a professor of information and computer sciences at the University of Hawaii at Manoa and the principal investigator for the project. (She is not in the dome.) "Nasa is not going to go until we solve this."

Isolation can lead to depression. Personality conflicts can spin out of control over the months.

"How do you select and support astronauts for a mission that will last two to three years in a way that will keep them healthy and performing well?" Binsted said. Or as Scheibelhut put it:

"I'm so interested to see how I react. 'I don't know' is the short answer. I think it could go a lot of different ways."

Several mock Mars missions have been conducted in recent years. A simulation in Russia in 2010 and 2011 stretched 520 days, most of the duration of an actual mission. Four of the six volunteers developed sleep disorders and became less productive as the experiment progressed. The Mars Society, a nonprofit group that promotes human spaceflight, has run short simulations in the Utah desert since 2001 and is looking to do a one-year simulation in the Canadian Arctic beginning in 2015.

Hi-Seas has already conducted two four-month missions, and next year, six more people will reside for one year inside the dome, a two-story building 36 feet in diameter with about 1,500 square feet of space. It sits in an abandoned quarry at an altitude of 8,000 feet on Mauna Loa.

To simulate the operational challenges, the crew members in the Hi-Seas dome are largely cut off. Their communications to the world outside the dome are limited to email, and each message is delayed by 20 minutes before being sent, simulating the lag for communications to travel from Mars to Earth and vice versa.

Every time one of the would-be astronauts or mission control sends a message, at least 40 minutes will elapse before a reply arrives. Real-time conversation is impossible.
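The 20-minute figure is easy to sanity-check: the one-way delay is simply the Earth-Mars distance divided by the speed of light. A quick sketch, using approximate distances:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def one_way_delay_minutes(distance_km):
    """Light-travel time from Earth to Mars (or back), in minutes."""
    return distance_km / C_KM_S / 60

# Approximate Earth-Mars distances
closest = 54.6e6    # km, at a close opposition
farthest = 401e6    # km, near superior conjunction

print(round(one_way_delay_minutes(closest), 1))   # about 3 minutes
print(round(one_way_delay_minutes(farthest), 1))  # about 22 minutes
```

The simulation's 20-minute delay thus sits near the worst case, and doubling it for a reply gives the 40-minute round trip mentioned above.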

On a real mission, the lag time would often be considerably shorter as Mars and Earth moved closer together, but Binsted said, "We went with the worst case because we're trying to solve the worst-case situation."

The crew members are granted some exceptions. They can check a few websites, like their bank accounts, to ensure that their earth lives do not fall apart while they are away. There is also a cellphone for emergency communications; if a hurricane (a distinctly un-Martian weather pattern) were to threaten the dome, as almost occurred over the weekend when Hurricane Ana veered south of Hawaii, mission control would not delay telling the crew to evacuate.

Some 150 people applied to participate. Binsted said the three men and three women of this Hi-Seas crew were chosen to have a similar mix of experience and backgrounds as real Nasa astronauts, and many indeed aspire to go to space.

The commander is Martha Lenio, 34, an entrepreneur looking to start a renewable-energy consulting company. Other crew members are Jocelyn Dunn, 27, a Purdue University graduate student; Sophie Milam, 26, a graduate student at the University of Idaho; Allen Mirkadyrov, 35, a Nasa aerospace engineer; and Zak Wilson, 28, a mechanical engineer who worked on military drone aircraft at General Atomics in San Diego.

"I dream about being an astronaut, and this might be the closest I ever get," Dunn said.

Wilson had previously done a two-week stay at the Mars habitat in Utah. Scheibelhut had worked on the first Hi-Seas mission as part of the ground support crew. "I thought it would be really cool to be part of what's going on inside," he said.

For their time, each is receiving round-trip airfare to Hawaii, an $11,500 stipend, food and, of course, lodging.

At the outset, the six appear to get along fine. "This is a fantastic group of people," Scheibelhut said. "Right now, everything is wonderful."

He said he recognized that there would be unpleasant patches. "Eight months — you're going to have real conflicts you're going to have to work out," he said. "Scientifically speaking, it's going to be really interesting to see what happens."

But Scheibelhut, 38, an Army veteran who served a year in Iraq in 2004, said, "I've been through worse."

On this mission, at least, no one will be trying to kill him. "I hope," he added.

The goal is to maintain cohesion among the crew members, but that too can lead to problems.

"They become more independent when they are more cohesive," Binsted said, and an independent-minded crew could start sparring with mission control.

The researchers will also be looking for signs of "third-quarter syndrome." At the beginning of the mission, the experience is new and exciting. Then, in the second quarter of the mission, people fall into routines. Near the end, people can look forward to getting out and returning to the real world.

In the middle, there can be a stretch when routines turn into tedium without end. "That third quarter can be a bit of a bummer," Binsted said.

Like real astronauts, the Hi-Seas crew will be busy performing various scientific work, including excursions outside the dome in spacesuits.

"If you're going to keep people in a can for eight months, you want to get as much science out of them as possible," Binsted said. "It also means Nasa gets a lot of bang for their buck."

Part of the science includes data Dunn will collect for her doctoral thesis. "Not a lot of people get to shut out the world for eight months and work on their research," she said.

But first, there was the stop at Walmart.

Dunn bought a pair of slippers. "The ground level stays pretty cool," she said.

Wilson picked up super glue and workout shorts. Binsted bought some supplemental food supplies — hot sauce, powdered coconut milk and spinach wraps.

Elsewhere, Lenio, the commander, was shopping for a ukulele.

"We'll start a band," said Scheibelhut, who had brought his guitar.


Here Is A Physical USB 'Key' To Your Google Account

Opting in to Google’s latest security upgrade requires a spot on your keychain for a device known as a security key.

The small USB stick provides added protection for a Google account. Once a key is associated with your account, you’ll be prompted to insert the device into a computer each time you enter a password to log in—or, if you prefer, once a month on computers you use frequently. Touching a button on the security key triggers a cryptographic exchange with Google’s login systems that verifies the key’s identity. Security keys can be bought from several security hardware companies partnered with Google, for a little less than $20.
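The article doesn't detail the exchange, but U2F-style keys work on a challenge-response pattern: the server sends a fresh random challenge, the key cryptographically answers it bound to the site's origin, and the server checks the answer. The sketch below is a toy version that substitutes a shared-secret HMAC for the real protocol's per-site public/private key pairs and ECDSA signatures; all names are illustrative:

```python
import hmac
import hashlib
import secrets

def register_key():
    """At registration, server and key end up sharing material tied to this key.
    (Real U2F instead registers the key's public key; this is a simplification.)"""
    return secrets.token_bytes(32)

def server_challenge():
    """Server issues a fresh random challenge for each login attempt."""
    return secrets.token_bytes(16)

def key_respond(key_secret, challenge, origin):
    """The key answers the challenge bound to the site's origin, so a phishing
    site with a different origin cannot reuse the response."""
    return hmac.new(key_secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(key_secret, challenge, origin, response):
    expected = hmac.new(key_secret, challenge + origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = register_key()
ch = server_challenge()
resp = key_respond(secret, ch, "https://accounts.google.com")
print(server_verify(secret, ch, "https://accounts.google.com", resp))  # True
print(server_verify(secret, ch, "https://evil.example", resp))         # False
```

Binding the origin into the response is the detail that makes hardware keys resistant to phishing: a valid answer for one site is useless to any other.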

The new approach is primarily aimed at the security-conscious. But the technology involved lays the groundwork for physical devices that displace passwords altogether, says Mayank Upadhyay, a security engineer at Google. Google has been working on ways to replace passwords for some time, because stolen or guessed passwords are often used to take over accounts.

“This is a great first step that solves a problem today but also helps move the ecosystem toward that Holy Grail,” says Upadhyay. He has led work at Google to test whether other physical devices, like smartphones or even a piece of jewelry, could replace passwords (see “Google Experiments with Ring as Password”). This summer, Google announced that it will make it possible to have a Chromebook automatically unlock and log you in to a Google account when your Android smartphone is nearby.

A security key provides a more secure version of two-factor authentication, an approach already offered by some Web companies and many banks that involves logging in with both a password and a temporary code tied to something physically in your possession. Usually a two-factor code comes via a phone app, a text message, or a key fob.

That approach is designed to prevent an attacker from logging into your account remotely. If Apple had offered two-factor authentication for its iCloud backup service, for example, people using it would have been protected against the methods used by hackers to steal the celebrity photos leaked this summer. (Apple has since rolled out the technology.)

However, sophisticated attackers are capable of breaking two-factor authentication. They can steal or spoof codes by intercepting text messages, hacking a person’s smartphone, or breaking into the centralized database used to generate the codes. There is evidence an attack like that on RSA’s SecureID authentication system in 2011 enabled security breaches at defense contractor Lockheed Martin. Google has highly targeted users who may not be safe using existing two-factor authentication systems, says Upadhyay. “We’ve seen all kinds of attacks,” he says.

A security key, such as Google’s, is resistant to remote attacks, because the information needed to copy a key can be obtained only by physically attacking a security chip inside that key. Two-factor authentication is already widely used on corporate networks. Starting early next year, companies that pay Google for e-mail and office software will be able to have their employees use security keys to access these services.

Lorrie Cranor, director of the CyLab Usable Privacy and Security Laboratory at Carnegie Mellon University (see “Why Privacy Is Hard to Get”), says that a security key is unlikely to broaden the appeal of two-factor authentication beyond those who already use it. But the technology might gain wider use if promoted and packaged in the right way, she says. “Maybe it will make sense to some people who don’t know much about computer security but can relate to the idea of using a physical key to lock their account,” she says.

A security key bought today could be used with services other than Google’s, if other companies choose to adopt the technology. The device is built on an open standard called U2F, being developed by the FIDO Alliance, a consortium established to reduce reliance on passwords (see “PayPal, Lenovo Launch New Campaign to Kill the Password”).

Stina Ehrensvärd, CEO of Yubico, a startup that sells security keys, says the consortium’s technology creates the right incentives for widespread adoption. “It’s great for Google to go out and show that this works, and I expect many to follow because it’s easy and FIDO allows competition,” she says.

Future versions of the security key will also work with mobile devices, says Ehrensvärd, because the final U2F standard will specify that a key can include a contactless near-field communications chip that most new smartphones can read wirelessly.

By Tom Simonite
We found it on Technology Review

Concept: A Lamp That Just Needs Water To Illuminate

Monday, October 20, 2014

Meet WAT, a lamp powered by water. Wait, what?! A few drops of water kick-start the process: the water combines with a hydroelectric battery (composed of a carbon stick coated with magnesium powder) to generate an electrochemical reaction that creates power. No idea how efficient it is, but color me intrigued.

Materials: sanded blown glass, steel switch, bioplastic, warm white LED strips

Designer: Manon Leblanc

Phones Ring, As We All Know, But Soon They May Smell To Alert You

Our mobile devices already ring and vibrate to get our attention, but a prototype device created at Le Laboratoire in Paris, the oPhone, suggests that soon they might also emit odors.

Le Laboratoire is run by Harvard professor David Edwards, who splits his time between Boston and Paris; it's a retail, R&D, and exhibition space a few blocks from the Louvre, and a domestic version called Le Lab Cambridge is set to open in Kendall Square next year.

I stopped by last month to have a look at both the oPhone (the "o" stands for olfactory), and the space itself.

The current oPhone, part of an exhibit at Le Laboratoire called "Virtual Coffee," is a separate handheld device that is linked to a smartphone via Bluetooth. (Obviously, the long-term vision is to have it integrated into phones, or perhaps designed to be a protective case for the phone.)

When triggered by an incoming e-mail, it can emit one of four aromas: espresso, hazelnut, latte, and mocha. "There's a small cartridge inside that has the ability to deliver these micro-odors when it is heated," Edwards says.

As a demo, he sends a whiff of espresso to a student on the far side of his office. (In the picture is Edwards with Rachel Field, a Harvard student who helped create the oPhone. The demo was still a bit flukey.) "We're working now on a way to mix the oPhone's aromas," Edwards says. He talks about the possibility of discovering odiferous building blocks, the equivalent of DNA's nucleotides, that could be blended to create just about any smell.

In terms of the oPhone's future potential, there are obvious applications like using smell to persuade you to book a spa treatment, or stop by a bakery to grab a fresh baguette. But Edwards has other ideas, too: "You might go to see a movie, and you'd get a cartridge that's synchronized with the movie, and integrates with the drama. It could be relevant in gaming, a scent track you could design for a game or any audio or video program." Edwards is also interested in what you might call therapeutic smells: a unique aroma that helps you fall asleep at night, or makes you less hungry.

A new oPhone prototype will debut in London this October, Edwards says, at a conference put on by Wired Magazine. "People will design a Virtual Coffee Mocha as a 'symphony' [by] mixing different coffees, chocolates, caramels, and nuts in four movements of 30 seconds total," he writes via e-mail. It will be capable of delivering over 100 different aromas, Edwards says. And next June, when Le Lab Cambridge opens, he says that he expects to have an oPhone focused on "culinary applications."

Much of Edwards' career has been focused on new ways to deliver food, beverages, and pharmaceuticals. (He was a co-founder of a company now known as Civitas Therapeutics, which is developing an inhalable prescription drug for Parkinson's disease.) It sounds like Le Lab Cambridge will showcase those interests, blending in art and performance, as the Paris version does.

In Cambridge's academic and entrepreneurial community, Edwards says, "We are not good at showing what we do. So Le Lab Cambridge will ask the question, how do we share it?" That's a laudable mission.

Below is a video demo from May showing the oPhone in action, followed by a few photos from Le Laboratoire Paris:

Making Computers From a Living Cell

Sunday, October 19, 2014

If biologists could put computational controls inside living cells, they could program them to sense and report on the presence of cancer, create drugs on site as they’re needed, or dynamically adjust their activities in fermentation tanks used to make drugs and other chemicals.

Now researchers at Stanford University have developed a way to make genetic parts that can perform the logic calculations that might someday control such activities.

The Stanford researchers’ genetic logic gate can be used to perform the full complement of digital logic tasks and it can store information too. It works by making changes to the cell’s genome, creating a kind of transcript of the cell’s activities that can be read out later with a DNA sequencer. The researchers call their invention a “transcriptor” for its resemblance to the transistor in electronics.

“We want to make tools to put computers inside any living cell—a little bit of data storage, a way to communicate, and logic,” says Drew Endy, the bioengineering professor at Stanford who led the work.

Timothy Lu, who leads the Synthetic Biology Group at MIT, is working on similar cellular logic tools. “You can’t deliver a silicon chip into cells inside the body, so you have to build circuits out of DNA and proteins,” Lu says. “The goal is not to replace computers, but to open up biological applications that conventional computing simply cannot address.”

Biologists can give cells new functions through traditional genetic engineering, but Endy, Lu, and others working in the field of synthetic biology want to make modular parts that can be combined to build complex systems from the ground up. The cellular logic gates, Endy hopes, will be one key tool to enable this kind of engineering.

Cells genetically programmed with a biological “AND” gate might, for instance, be used to detect and treat cancer, says Endy. If protein A and protein B are present—where those proteins are characteristic of, say, breast cancer—then this could trigger the cell to produce protein C, a drug.

In the cancer example, says Endy, you’d want the cell to respond to low levels of cancer markers (the signal) by producing a large amount of the drug. The case is the same for biological cells designed to detect pollutants in the water supply—ideally, they’d generate a very large signal (for example, quantities of bright fluorescent proteins) when they detect a small amount of a pollutant.

The transcriptor triggers the production of enzymes that cause alterations in the cell’s genome. When the production of those enzymes is triggered by the signal—a protein of interest, for example—these enzymes will delete or invert a particular stretch of DNA in the genome. Researchers can code the transcriptor to respond to one, or multiple, different such signals. The signal can be amplified because one change in the cell’s DNA can lead the cell to produce a large amount of the output protein over time.

Depending on how the transcriptor is designed, it can act as a different kind of logic gate—an “AND” gate that turns on only in the presence of two proteins, an “OR” gate that’s turned on by one signal or another, and so on. Endy says these gates could be combined into more complex circuits by making the output of one the input for the next. This work is described today in the journal Science.
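The AND-gate behavior described above can be mimicked in a few lines of code: each input signal permanently flips one stretch of DNA, and the output gene is expressed only once both stretches have flipped. This is only a toy state-machine model of the idea, not the actual biochemistry:

```python
class TranscriptorAndGate:
    """Toy model of a recombinase-based AND gate: each input triggers an
    enzyme that inverts one DNA segment, and the inversion persists, so
    the gate doubles as memory."""

    def __init__(self):
        self.segment_a_flipped = False
        self.segment_b_flipped = False

    def signal_a(self):
        """E.g. protein A detected: enzyme inverts segment A (permanently)."""
        self.segment_a_flipped = True

    def signal_b(self):
        """E.g. protein B detected: enzyme inverts segment B (permanently)."""
        self.segment_b_flipped = True

    @property
    def output_expressed(self):
        """Output (a drug, a fluorescent protein) is produced only when
        both segments point the right way."""
        return self.segment_a_flipped and self.segment_b_flipped

gate = TranscriptorAndGate()
print(gate.output_expressed)   # False: no inputs yet
gate.signal_a()
print(gate.output_expressed)   # False: one input alone is not enough
gate.signal_b()
print(gate.output_expressed)   # True: both inputs seen, output on
# Memory: the DNA edit persists even if the input proteins disappear.
```

Swapping the `and` for an `or` gives the "OR" gate from the same paragraph, and chaining one gate's output into another's input sketches the more complex circuits Endy describes.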

MIT’s Lu says cellular circuits like his and Endy’s, which use enzymes to alter DNA, are admittedly slow. From input to output, it can take a few hours for a cell to respond and change its activity. Other researchers have made faster cellular logic systems that use other kinds of biomolecules—regulatory proteins or RNA, for example. But Lu says these faster systems lack signal amplification and memory. Future cellular circuits are likely to use some combination of different types of gates, Lu says.

Christopher Voigt, a biological engineer at MIT, says the next step is to combine genetic logic gates to make integrated circuits capable of more complex functions. “We want to make cells that can do real computation,” he says.

Written by Katherine Bourzac. Originally appeared on Mashable
Image via iStock, palau83

A Leap Forward in Brain-Controlled Computing

Stanford researchers have designed the fastest, most accurate algorithm yet for brain-implantable prosthetic systems that can help disabled people maneuver computer cursors with their thoughts. The algorithm’s speed, accuracy and natural movement approach those of a real arm, doubling performance of existing algorithms.

When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement still activate as if trying to make the immobile limb work again. Despite neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional.

In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons, and after passing those signals through a mathematical decode algorithm, can use them to control computer cursors with thoughts. The work is part of a field known as neural prosthetics.

Vikash Gilja, Krishna Shenoy and Paul Nuyujukian (left to right) discuss results of their new algorithm, which greatly improves performance of a computer cursor controlled by thoughts conveyed through a sensor implanted in the brain. The new algorithm approaches the speed, accuracy and natural motion of a real arm. Trials in paralyzed humans have been approved by the FDA. Photo: Joel Simon.

A team of Stanford researchers have now developed an algorithm, known as ReFIT, that vastly improves the speed and accuracy of neural prosthetics that control computer cursors.  The results are to be published November 18 in the journal Nature Neuroscience in a paper by Krishna Shenoy, a professor of electrical engineering, bioengineering and neurobiology at Stanford, and a team led by research associate Dr. Vikash Gilja and bioengineering doctoral candidate Paul Nuyujukian.

In side-by-side demonstrations with rhesus monkeys, cursors controlled by the ReFIT algorithm doubled the performance of existing systems and approached performance of the real arm. Better yet, more than four years after implantation, the new system is still going strong, while previous systems have seen a steady decline in performance over time.

“These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford,” said Shenoy.


The system relies on a silicon chip implanted into the brain, which records “action potentials” in neural activity from an array of electrode sensors and sends data to a computer. The frequency with which action potentials are generated provides the computer with key information about the direction and speed of the user’s intended movement.

The ReFIT algorithm that decodes these signals represents a departure from earlier models. In most neural prosthetics research, scientists have recorded brain activity while the subject moves or imagines moving an arm, analyzing the data after the fact. “Quite a bit of the work in neural prosthetics has focused on this sort of offline reconstruction,” said Gilja, the first author of the paper.

The Stanford team wanted to understand how the system worked “online,” under closed-loop control conditions in which the computer analyzes and implements visual feedback gathered in real time as the monkey neurally controls the cursor toward an onscreen target.

The system is able to make adjustments on the fly while guiding the cursor to a target, just as a hand and eye work in tandem to move a mouse cursor onto an icon on a computer desktop. If the cursor strays too far to the left, for instance, the user adjusts their imagined movements to redirect the cursor to the right. The team designed the system to learn from the user’s corrective movements, allowing the cursor to move more precisely than it could in earlier prosthetics.

To test the new system, the team gave monkeys the task of mentally directing a cursor to a target — an onscreen dot — and holding the cursor there for half a second. ReFIT performed vastly better than previous technology in terms of both speed and accuracy. The path of the cursor from the starting point to the target was straighter and it reached the target twice as quickly as earlier systems, achieving 75 to 85 percent of the speed of real arms.

"This paper reports very exciting innovations in closed-loop decoding for brain-machine interfaces. These innovations should lead to a significant boost in the control of neuroprosthetic devices and increase the clinical viability of this technology,” said Jose Carmena, associate professor of electrical engineering and neuroscience at the University of California Berkeley.


Critical to ReFIT’s time-to-target improvement was its superior ability to stop the cursor. While the old model’s cursor reached the target almost as fast as ReFIT, it often overshot the destination, requiring additional time and multiple passes to hold the target.

The key to this efficiency was in the step-by-step calculation that transforms electrical signals from the brain into movements of the cursor onscreen. The team had a unique way of “training” the algorithm about movement. When the monkey used his real arm to move the cursor, the computer used signals from the implant to match the arm movements with neural activity. Next, the monkey simply thought about moving the cursor, and the computer translated that neural activity into onscreen movement of the cursor. The team then used the monkey’s brain activity to refine their algorithm, increasing its accuracy.
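The retraining step described above can be sketched in a few lines. The trick reported for ReFIT is to assume the user always intends to move straight at the target, so each observed cursor velocity is re-aimed at the target (keeping its speed) before refitting the decoder. The function below is a hypothetical sketch of that re-aiming step, not the published implementation.

```python
import math

def reaim_velocity(pos, target, observed_vel):
    """ReFIT-style intention estimate: keep the observed cursor speed,
    but redirect it straight at the target, since the user is assumed
    to intend a direct movement. (Illustrative sketch only.)"""
    speed = math.hypot(*observed_vel)
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # on target: the intended velocity is to hold still
    return (speed * dx / dist, speed * dy / dist)

# Cursor drifting up-left while the target sits directly to the right:
print(reaim_velocity(pos=(0, 0), target=(10, 0), observed_vel=(-3.0, 4.0)))
# -> (5.0, 0.0): same speed of 5, redirected toward the target
```

Refitting the decoder against these re-aimed velocities, instead of the raw observed ones, is what lets the algorithm learn from the user’s corrective movements.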

The team introduced a second innovation in the way ReFIT encodes information about the position and velocity of the cursor. Gilja said that previous algorithms could interpret neural signals about either the cursor’s position or its velocity, but not both at once. ReFIT can do both, resulting in faster, cleaner movements of the cursor.


Early research in neural prosthetics had the goal of understanding the brain and its systems more thoroughly, Gilja said, but he and his team wanted to build on this approach by taking a more pragmatic engineering perspective. “The core engineering goal is to achieve the highest possible performance and robustness for a potential clinical device,” he said.

To create such a responsive system, the team decided to abandon one of the traditional methods in neural prosthetics. Much of the existing research in this field has focused on differentiating among individual neurons in the brain.

Importantly, that fine-grained approach has allowed neuroscientists to build a detailed understanding of the individual neurons that control arm movement.

The individual neuron approach has its drawbacks, Gilja said. “From an engineering perspective, the process of isolating single neurons is difficult, due to minute physical movements between the electrode and nearby neurons, making it error-prone,” he said. ReFIT focuses on small groups of neurons instead of single neurons.

By abandoning the single-neuron approach, the team also reaped a surprising benefit: performance longevity. Neural implant systems that are fine-tuned to specific neurons degrade over time. It is a common belief in the field that after six months to a year, they can no longer accurately interpret the brain’s intended movement. Gilja said the Stanford system is working very well more than four years later.

"Despite great progress in brain-computer interfaces to control the movement of devices such as prosthetic limbs, we’ve been left so far with halting, jerky, Etch-a-Sketch-like movements. Dr. Shenoy's study is a big step toward clinically useful brain-machine technology that enables faster, smoother, more natural movements," said James Gnadt, PhD, a program director in Systems and Cognitive Neuroscience at the National Institute of Neurological Disorders and Stroke, part of the National Institutes of Health.

For the time being, the team has been focused on improving cursor movement rather than the creation of robotic limbs, but that is not out of the question, Gilja said. In the near term, precise, accurate control of a cursor is a simplified task with enormous value for paralyzed people.

“We think we have a good chance of giving them something very useful,” he said. The team is now translating these innovations to paralyzed people as part of a clinical trial.

This research was funded by the Christopher and Dana Reeve Paralysis Foundation; NSF, NDSEG, and SGF Graduate Fellowships; DARPA ("Revolutionizing Prosthetics" and "REPAIR"); and NIH (NINDS-CRCNS and Director's Pioneer Award).

Other contributing researchers include Cynthia Chestek, John Cunningham, and Byron Yu, Joline Fan, Mark Churchland, Matthew Kaufman, Jonathan Kao, and Stephen Ryu.

Kelly Servick is a science-writing intern at the Stanford University School of Engineering.

Going Too Far? Motorola Unveils Vitamin-Based Edible Password Pill

Saturday, October 18, 2014

Taking a daily vitamin could do more than give you an extra kick of vitamin D or C in the morning. Soon, it could also boost your online security, becoming an authentication token you could never lose.

At the D11 conference in California today, Motorola unveiled a "vitamin authentication" tablet powered by the acid in your stomach that turns you into a human authentication token.

Regina Dugan, Motorola's senior vice president for advanced technology and products and a former director of DARPA, described the little pill as "my first super power," according to Wired UK.

"Authentication is irritating," she said. "After 40 years of advances in computation, we're still authenticating basically the same way we did years ago."

The FDA-approved tablet, made by Proteus Digital Health, contains a small chip that can be switched on and off by your stomach acid, creating an 18-bit ECG-like signal that would let you authenticate your identity just by touching your phone, your computer or your car.
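A quick back-of-envelope note on that 18-bit figure: it fixes the size of the code space, which turns out to be smaller than a 6-digit PIN's. The comparison is my own, not Motorola's.

```python
# The article gives only the bit width; the PIN comparison is an
# illustrative addition.
n_codes = 2 ** 18          # 18-bit signal -> number of distinct codes
six_digit_pin = 10 ** 6    # for comparison: a 6-digit numeric PIN
print(n_codes, n_codes < six_digit_pin)  # -> 262144 True
```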

Motorola has successfully completed a demo of the tablet authenticating a phone, but CEO Dennis Woodside said it wouldn't be shipping anytime soon.

Smart pills have previously been developed as a way to transmit health information straight from your body to your doctor, and to remind patients with chronic diseases to take their medication. And now maybe they can make not getting hacked a little more convenient, too.

PHOTO The Verge

Face Recognition, With Probably Its Most Positive Use

Face recognition can be considered one of the most controversial technologies we have developed in recent years. Social networks and governments might be using it to spy on us. But here is a positive concept which we may see embedded in many gadgets of the near future.

Extending this concept to telephones, the Elderly E Phone uses image recognition rather than a keypad for entering phone numbers. It is aimed at the silver generation, who have a hard time dialing or remembering numbers.

The designers explain, “Elderly e Phone concept uses simple image recognition for the making of calls. First, the phone must be pre-set with the numbers that correspond to particular facial photographs of one’s contacts. When a call is to be made to one of these people, the user simply needs to hold the phone’s image finder above the photograph and press ‘OK’. The phone will recall the appropriate number and the call will be placed directly.”
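As a rough illustration of the workflow the designers describe — match the captured image against pre-set contact photos, then recall the stored number — here is a hypothetical sketch. The tiny pixel lists, the mean-absolute-difference metric, and the names and numbers are all invented for the example; a real phone would use proper image recognition.

```python
# Hypothetical sketch of photo-to-number dialing. "Images" are tiny
# grayscale pixel lists; matching is nearest neighbor by mean
# absolute difference. None of this is the actual concept's design.

def mean_abs_diff(a, b):
    """Average per-pixel difference between two equal-length images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def dial_by_photo(captured, contacts):
    """Return the stored number whose pre-set photo best matches."""
    best = min(contacts, key=lambda c: mean_abs_diff(captured, c["photo"]))
    return best["number"]

contacts = [
    {"number": "555-0101", "photo": [10, 200, 30, 40]},
    {"number": "555-0102", "photo": [90, 90, 90, 90]},
]
print(dial_by_photo([12, 198, 33, 41], contacts))  # -> 555-0101
```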

Elderly E Phone is a 2013 red dot award: design concept winning entry.

Designers: Prof. Dai Yunting, Lu Junshi, Liu Fei, Jiang Ying & Zhu Yunpeng

Now, A Flashlight That's Powered By Heat From Your Hand

Thursday, October 16, 2014

A 15-year-old girl in Canada has invented a flashlight that produces light just by using the warmth of your hand.

Ann Makosinski, from British Columbia, invented the thermoelectric 'Hollow Flashlight' that works via the thermoelectric effect.

The thermoelectric effect is the direct conversion of temperature differences to electric voltage and vice-versa.

"I'm sure we've all had that annoying experience when we desperately need a flashlight, we find one, and the batteries are out," Makosinski told NBC News.

"Imagine how much money we would save and the amount of toxins leached into the soil etc reduced if we didn't use any batteries in flashlights!" she said.

To create the flashlight, Makosinski measured how much electricity could be generated from the heat of a palm – about 57 milliwatts – and how much she needed to light the LED – about half a milliwatt.
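Those two measurements already settle the feasibility question: a minimal sketch of the power budget, using only the figures quoted above.

```python
# Power budget from the article's own numbers: ~57 mW available from
# palm heat vs ~0.5 mW needed to light the LED.
available_mw = 57.0
required_mw = 0.5
print(available_mw / required_mw)  # -> 114.0: about 114x headroom
```

That generous margin is what absorbs the conversion losses in the circuitry and still leaves enough to light the LED.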

Next, she got several Peltier tiles which when warm on one side and cool on the other could generate electricity, and a few other bits necessary to make the current usable by a normal LED.

Finally, she mounted the tiles and circuitry onto a hollow aluminum tube; air inside the tube would cool the Peltier tiles, while the warmth of a hand would heat the other side.

With a little tweaking of voltages and other components, the invention worked. The light generated is modest, but enough to find your keys or light the page of a book.

It worked for around half an hour in her tests at an ambient temperature of about 10 degrees Celsius, though how long it lasts depends on the temperature difference.

"The flashlight I have made is more of a prototype than a final product, but the components in my device are quite strong," Makosinski said.

"Of course, if it was to be used and manufactured, I would try to seal off the electronic components in some sort of casing so that it wouldn't get heavily exposed to the elements (example water), and therefore last longer," she said.

Makosinski has submitted her invention for the Google Science Fair and will be in California to visit Google headquarters in September for the final judging event.

Best Chefs Of The World Are In Your Future Kitchen

The Global Chef is a clever appliance that uses technology to its advantage. Imagine having a cook-off with Gordon Ramsey in your own Hell’s Kitchen or inviting Jamie Oliver to teach you British delights. Global Chef has the ability to bring people together from all across the world by using laser hologram technology. Discover new cuisines, join cooking classes or have dinner parties with friends and family straight from the home kitchen!


  • Global Chef transfers smell, reduces kitchen noise while in use.
  • Suggests recipes using the available ingredients placed in the smart bowl that senses the food.
  • The only physical button is the “on button”, once the user presses it a hologram interface appears.
  • The UI is controlled by kinetic movements.
  • With the help of the appliance you can cook with your loved ones or take cooking lessons from top chefs.
  • The appliance analyzes food, has a motion detection camera and can project holograms 360° around itself.
  • Global Chef is a 2013 Electrolux Design Lab Semi-finalist.

Designer: Dawid Dawod

Brain Implants Give Hope Of Restoring Our Lost Memories

The hippocampus, the part of the brain with a major role in forming long-term memories.

A very promising project to restore lost human memories will implant a memory device inside the brains of a small number of human volunteers to confirm the results. The device might become available to anyone within five to ten years. A maverick neuroscientist, Theodore Berger, believes he has deciphered the code by which the brain forms long-term memories.

Theodore Berger, a biomedical engineer and neuroscientist at the University of Southern California in Los Angeles, envisions a day in the not too distant future when a patient with severe memory loss can get help from an electronic implant.

In people whose brains have suffered damage from Alzheimer’s, stroke, or injury, disrupted neuronal networks often prevent long-term memories from forming. For more than two decades, Berger has designed silicon chips to mimic the signal processing that those neurons do when they’re functioning properly—the work that allows us to recall experiences and knowledge for more than a minute. Ultimately, Berger wants to restore the ability to create long-term memories by implanting chips like these in the brain.

As reported by CNN and MIT Technology Review, the researchers have already experimented on rat and monkey brains, proving that brain messages can be replicated by electrical signals from a silicon chip.

“We’re not putting individual memories back into the brain. We’re putting in the capacity to generate memories,” Berger said, clarifying the basics of the research. He also said, “I never thought I’d see this go into humans, and now our discussions are about when and how. I never thought I’d live to see the day,” adding, “I might not benefit from it myself but my kids will.”

Berger and his colleagues are planning human studies. He is collaborating with clinicians at his university who are testing the use of electrodes implanted on each side of the hippocampus to detect and prevent seizures in patients with severe epilepsy. If the project moves forward as envisioned, Berger’s group will piggyback on the trial to look for memory codes in those patients’ brains.

More details about the research and its background on CNN and MIT Technology Review.

New Nano Structure Is The Thinnest Light-Absorber Ever

Wednesday, October 15, 2014

The thinnest light absorbers yet, consisting of billions of gold nanodots. Courtesy of Mark Shwartz

Nanosize light-absorbers break records for size and efficiency and could lead to better solar cells. Scientists at Stanford University have managed to build light-absorbers that are thousands of times thinner than a sheet of paper. The nanosize structures are capable of absorbing close to 100 percent of visible light at specific wavelengths. The material could be used to make cheaper, more efficient solar cells, among other things, the researchers say.

"Much like a guitar string, which has a resonance frequency that changes when you tune it, metal particles have a resonance frequency that can be fine-tuned to absorb a particular wavelength of light," Carl Hagglund, lead author of the study said in a statement. "We tuned the optical properties of our system to maximize the light absorption."

In order to construct the light-absorbers, ultrathin wafers were coated with trillions of round gold nanodots, essentially small spherical magnets. The wafers contain about 520 billion nanodots per square inch. The wafers were topped with an additional layer of film, whose thickness determines the specific light frequency the absorber is designed to capture. In testing phases, the prototypes proved capable of absorbing 99 percent of light at a wavelength of 600 nanometers.
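A quick unit conversion from the quoted density shows why the structure behaves as a tuned absorber: the dots sit far closer together than the 600 nm wavelength they capture. The calculation below is my own back-of-envelope, using only the figure in the article.

```python
import math

# ~520 billion gold nanodots per square inch -> average
# center-to-center spacing, assuming a roughly uniform layout.
dots_per_in2 = 520e9
in2_to_nm2 = (2.54e7) ** 2           # 1 inch = 2.54e7 nm
area_per_dot_nm2 = in2_to_nm2 / dots_per_in2
spacing_nm = math.sqrt(area_per_dot_nm2)
print(round(spacing_nm, 1))  # ~35 nm: deeply sub-wavelength for 600 nm light
```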

The previous leading light-absorber technology required a foundational layer that was three times thicker to absorb the same amount of light.

The team believes the light-absorbers have the potential to significantly enhance the efficiency of solar cells. The small size of the structure as a whole forces energy charge carriers closer to one another, meaning it won’t take as long for the charge carriers to be collected and stimulate electrical current production. As the structures require less material, they could also make solar cell technology more affordable.

"We are now looking at building structures using ultrathin semiconductor materials that can absorb sunlight," Stacey Bent, co-director of the Stanford Center on Nanostructuring for Efficient Energy Conversion (CNEEC) said in the same statement. "These prototypes will then be tested to see how efficiently we can achieve solar energy conversion."

Written by Lacey Henry. Originally appeared on POPSCI

A Space Laser Designed To Vaporize Dangerous Asteroids

DE-STAR is designed to vaporize or divert asteroids that threaten Earth. This isn’t science fiction—I build things that have to work in practice. DE-STAR stands for Directed Energy Solar Targeting of Asteroids and exploRation. It looks like an open matchbook with lasers on one flap and a photovoltaic panel for power from sunlight on the other. By synchronizing the laser beams, we can create a phased array, which produces a steerable 70-gigawatt beam. An onboard system receives orders on what to target.

Our laser beam would then produce a spot about 100 feet in diameter on an asteroid that’s as far away from the satellite as we are from the sun. The laser would raise an asteroid’s surface temperature to thousands of degrees Celsius—hot enough that all known substances evaporate. In less than an hour, DE-STAR could have completely vaporized the asteroid that broke up over Russia this winter, if we had seen it coming. Plus, as the material evaporates, it creates a thrust in the opposite direction, comparable to the space shuttle’s rocket booster. That means you could divert the asteroid by changing its orbit with a shorter laser blast.
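To get a feel for those numbers, here is a rough flux estimate from the quoted power and spot size; the comparison with the solar constant is my own addition for scale.

```python
import math

# 70 GW beam focused to a 100-foot-diameter spot (figures from the
# article); solar-constant comparison (~1361 W/m^2) added for scale.
power_w = 70e9
radius_m = (100 * 0.3048) / 2        # 100 ft diameter -> metres
spot_area_m2 = math.pi * radius_m ** 2
flux = power_w / spot_area_m2        # ~9.6e7 W/m^2 on the asteroid
print(round(flux / 1361))  # roughly 70,000x the sunlight hitting Earth
```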

DE-STAR could also power things on Earth or in space. You could send the electrical power it produces—not via laser beam but via microwaves. Or you could use the laser to directly propel spacecraft. But here’s the thing: For full-blown asteroid vaporization, each flap of the matchbook would have to be six miles long. We’ve never built a structure this size in space, but if there were the worldwide will, I could see building this within 30 to 50 years. But since it’s completely modular, we propose starting smaller. We could begin with a version that’s three feet per side right now. With that, you could cook your dinner from 600 miles away.

—Philip Lubin is a physicist at UC Santa Barbara and co-inventor of DE-STAR with statistician Gary Hughes, of California Polytechnic State University.

This article originally appeared in the July 2013 issue of Popular Science. See the rest of the magazine here.

Here Is An Idea: Making Solar Roads To Power All Of America

A US company is seeking $1 million in crowd-sourced funds for its ambitious plan to turn American roads into giant energy farms that could power the entire country.

The company Solar Roadways is raising funds on crowd-funding website Indiegogo to cover highways in the US in thick LED-lit glass solar panels.

The modular paving system of Solar Road Panels can withstand the heaviest of trucks, according to the company's Indiegogo page. These panels can be installed on roads, parking lots, driveways, sidewalks, bike paths, playgrounds and literally any surface under the Sun, it said.

The roads pay for themselves primarily through the generation of electricity, which can power homes and businesses connected via driveways and parking lots.

According to the company, a nationwide system could produce more clean renewable energy than the country uses as a whole.

The panels have many other features, including heating elements to stay snow- and ice-free, LEDs to form road lines and signage, and an attached Cable Corridor to store and treat storm-water and provide a "home" for power and data cables.

Electric vehicles (EVs) will be able to charge with energy from the Sun (instead of fossil fuels) from parking lots and driveways and, after a roadway system is in place, mutual induction technology will allow for charging while driving, the company said on its Indiegogo page.
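The headline claim can be sanity-checked with a back-of-envelope estimate. The paved-surface figure (~31,250 square miles) is one Solar Roadways itself has cited in coverage; the insolation, panel efficiency, and US electricity-demand numbers are my own rough assumptions, so treat the result as an order-of-magnitude check only.

```python
# Rough sanity check of "more energy than the country uses".
# Assumptions (mine, not the company's): 4 kWh/m^2/day insolation,
# 15% panel efficiency under glass, ~3,900 TWh/yr US electricity use.
area_m2 = 31_250 * 2.59e6                      # square miles -> m^2
annual_twh = area_m2 * 4 * 0.15 * 365 / 1e9    # kWh/yr -> TWh/yr
print(round(annual_twh), round(annual_twh / 3_900, 1))
# ~17,700 TWh/yr, several times the assumed US demand
```

Under these assumptions the arithmetic does come out ahead of demand, though it says nothing about cost, durability, or whether glass roads are practical at all.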

Documentary: Aftermath, Population Zero

Imagine if one minute from now, every single person on Earth disappeared. All 6.6 billion of us. What would happen to the world without humans?

How long would it be before our nuclear power plants erupted, skyscrapers crumbled and satellites dropped from the sky?

What would become of the household pets and farm animals? And could an ecosystem plagued with years of pollution ever recover?

Similar to the History Channel's special Life After People (recommended), Aftermath features what scientists and others speculate the earth, animal life, and plant life might be like if humanity no longer existed, as well as the effect that humanity's disappearance would have on the artifacts of civilization.

Watch the full documentary now

Does The Future Of Journalism Rely On Virtual Reality?

Tuesday, October 14, 2014

Formally trained in print and documentary, journalist Nonny de la Peña has a history of telling human stories through a multitude of communication tools and media.

The award-winning filmmaker and journalist has twenty years of reporting experience, but within the last decade, she's found a powerful storytelling method in an unexpected place: eschewing print, television, and even the Internet, de la Peña believes virtual reality is the most powerful storytelling method at modern media's disposal.

De la Peña has been described as a pioneer of "Immersive Journalism," and has since become a Senior Research Fellow in the new field at USC's Annenberg School of Journalism and Communications.

Her craft specifically focuses on creating innovative VR projects using custom-built, motion-capture setups that allow users to walk through 3D generated recreations of non-fictional events. She told The Creators Project, "I believe VR will fundamentally change the landscape of how we experience many stories."

As an example, de la Peña and her team used footage of a man collapsing while waiting in line at a food bank to digitally reconstruct the event—the resulting virtual reality story, entitled Hunger in Los Angeles, gives viewers the firsthand experience of the tension and intensity of the actual event, far beyond the emotional stimuli of any news broadcast.

"Most people don’t have a lot of VR experience," says Paisley Smith, an assistant to de la Peña. "It's a disconnect from the world you know, and you're immersed somewhere else. In this way you can be more invested in the experience. It’s a very powerful, distraction-free storytelling technique."

Hunger rapidly became one of the most talked about pieces at the 2012 Sundance Film Festival, and de la Peña has been devoted to non-fiction virtual reality storytelling ever since. Other virtual reality stories tackle important human rights issues of today: the first, entitled Project Syria, recounts of the bombing of Aleppo, Syria, followed by an inside look at a Syrian refugee camp. In an in-depth profile of the project on Motherboard, writer Christopher Malmo describes the endeavor as "a perfect example of what's possible when new technologies are applied to reporting.

Using VR renders the project immersive, going beyond two-dimensional print or video coverage to physically place the viewer into the story. In doing so, they stop being a mere viewer, and much more of a witness."

The second VR project, Use of Force, bears witness to police brutality on the US/Mexico border. These stories often get covered on broadcast, print, and digital news outlets, but de la Peña believes that virtual reality makes these experiences more personal—a new way for people to experience news outside information-saturated media markets.

Curious about the future of immersive journalism, we spoke to Nonny de la Peña about technology, non-fiction storytelling, and the art of communicating in VR:

The Creators Project: When did you first realize that virtual reality technology was mature enough for journalism and non-fiction storytelling?

Nonny de la Peña: Eight years ago, after building a virtual Guantanamo Bay Prison in Second Life with artist Peggy Weil, I began to think of how virtual reality could be applied to other important news stories. Soon after, we were lucky to collaborate with Mel Slater and Maria Sanchez Vives at the Event Lab at the University of Barcelona on a piece that put people “in the body” of a detainee in a stress position, in order to offer a visceral report using FOIA-released documents on the way we tortured prisoners.

While at their lab, I saw another powerful piece they had created in order to study the bystander effect, which put you in the middle of a bar fight. That was when I realized the power of creating pieces that put audiences on scene using VR goggles and full tracking. Since then, I have focused exclusively on using virtual reality for important narratives.

When developing such rich virtual reality experiences, where do you begin? How do you choose which topics best lend themselves to the medium?

My work in virtual reality is unusual in that I tell linear narratives which can’t necessarily be altered by audience interaction, so the first question I need to address is how a narrative can unfold AROUND a participant, as if they are standing on a theater stage.

I only have limited cues to direct attention—audio, abrupt movements, gathering of people in the scene—as fully immersive virtual reality allows the audience to look, walk or even run anywhere they choose, so the topic needs to allow the type of robust design that can be experienced from any angle. While these pieces can be created for a variety of stories, I have always been driven by the intersection of investigative reporting and human rights and my work to date reflects that interest.

How is your process as a storyteller different in immersive VR, as opposed to print or documentary?

In my many years working in print and documentary I have enjoyed extensive editorial control. I can very specifically edit a story so that the “cuts” read or view well, without worrying what might happen to someone’s body if they are “experiencing” the story.

Virtual reality can definitely cause what’s known as sim sickness, a feeling like motion sickness. So when you design a piece that makes your audience feel as if they are actually present on scene, you have to respect that you have brought their entire body along for the ride.

This type of spatial narrative requires very specific considerations for design, and the key is to imagine truly standing in the middle of the story.

The other crucial element that always needs to be considered is the speed of the refresh rate on the goggles. It has been so great working with some of the top technologies committed to making imagery on the screen track any viewer movement without the lag that can cause sim sickness.

I have had incredible support from the motion tracking camera company Phasespace and the whole team at USC’s ICT MxR Lab headed by VR veteran Mark Bolas. Working with Palmer Luckey and Oculus Rift has also been a game changer–we are now able to offer large audiences access to these pieces.

However, there is a key similarity between traditional cinema, television and news reporting and the type of immersive virtual reality experiences I make: If the audio is bad, the piece will be problematic no matter how good the visuals are.

Can you walk me through the process of translating research into a digital world?

When I begin these pieces, I completely rely on my journalistic training and background to gather the necessary elements.

I use traditional methods of researching important stories and then begin collecting the images and audio that act as the fundamental scaffolding upon which I build. For example, in Project Syria, I was shown a video of a young girl singing on a street in Aleppo when a mortar shell hit.

We then had to gather a dozen mobile phone videos taken before the explosion and during the aftermath, as well as photographic material and Google Earth images to anchor the street where the event occurred. I then sent a team into a refugee camp on the border of Syria to collect material about children living in the camp in order to inform the second half of the piece.

Throughout, I had to imagine what it would be like to be standing there when events transpire—will the material I want to duplicate digitally offer a deeper understanding of an event? Once I feel I can hit that note, the modeling in Maya and 3DMax begins. We start designing the motion capture session and ultimately translate all of these digital elements into the game engine to make the experience “feel real.”

How does immersive VR affect people differently, compared to stories that are read or watched? What kind of reactions are you looking for from your audiences?

One person told me that even two weeks after experiencing Use of Force she still felt the memory of the story “in her body.” I think that’s the key to the difference—it’s a very visceral feeling to go through a well-crafted, fully immersive piece. When I build these, I set out to make people understand what happened in a more profound way by allowing them to become witnesses to an event.

I have now put thousands of people through my pieces and it is amazing to watch people gasp when they put the goggles on and they suddenly have been transported to another location—they know they are here but they feel like they are there too. Or else they try to interact with the virtual environment as if it is real.

These kinds of reactions tell me that the piece is working. I have even had folks pull their mobile phones from their pockets in a knee-jerk reaction when the seizure victim collapses in Hunger in Los Angeles. Of course they put it back immediately when they catch themselves, but the reality becomes that strong.

Do you think the introduction of VR into journalism can change the way people think about news?

I don’t know if it will change the way people think about the news, but it will definitely change the way they receive their news. Just as with the introduction of radio or television, these new news delivery systems changed our feelings about the world we live in. I believe VR will fundamentally change the landscape of how we experience many stories.

What technological advancements do you hope will improve your practice?

We are already seeing the advent of many tools that allow quick photo real models of environments inside and outside buildings and along streets.

When we can do that with people, including rapidly capturing their motions in a natural way, the way 360 cameras are beginning to do today, my life will be so much easier! Currently, 360 cameras don’t yet allow people to move through the experience and see the environment and people from any angle or direction.

Viewers are still fixed to a chair. However, I expect photo real and real time to merge in the very near future.

How was your approach different in creating Project Syria, versus Hunger in Los Angeles?

When I made Hunger in Los Angeles, no one had ever used VR for telling a nonfiction story in this way. I had no budget, funding or backer and it took two years for me to beg the favors necessary to get it built.

I spent about $700 of my own money and had amazing people helping me out. When it premiered at Sundance, I think I was as surprised as everyone that it really, truly worked.

The World Economic Forum commissioned Project Syria. Elizabeth Daley, Dean of the USC School of Cinematic Arts, brought the head of WEF, Klaus Schwab, to experience Hunger. He took off the goggles and commissioned Project Syria on the spot.

But I really ended up with only about six weeks to build from scratch in order to make the January Davos deadline and it was all done at a level of intensity that I hope will be a rare occurrence.

Another crucial difference is I knew I wanted to do a story that exemplified the hunger crisis in America and the strain on food banks. That gave me time to record many hours of material until I captured the right scene to build with.

Not only did I not have the luxury of time with Project Syria but the furor of the events also caused tremendous problems. For example, I reached out to a photographer to potentially hire him or utilize his existing archive and twenty minutes after I sent the email, word came across twitter he had been kidnapped.

All of this was happening while the aid agencies were calling Syria the worst crisis of our life time—and I needed to make the piece convey that urgency to the level of world leaders who attend WEF. I can tell you, the pressure was intense but once again, we succeeded beyond my wildest imagination.

De la Peña's films will be shown at the Future of Storytelling Summit in New York City on Oct. 1-2, where she will also be giving a talk.

By Beckett Mufson
We found it on The Creators Project.

NASA Is Training A 13-Year-Old To Send To Mars By 2034

NASA is really serious about sending humans to Mars. Currently NASA is training a 13-year-old girl from Louisiana in the US, Alyssa Carson, to be the first to set foot on the Red Planet in 2033-34.

NASA spokesman Paul Foreman has recently been quoted in a BBC interview saying, "NASA takes people like Alyssa very seriously. She is of the perfect age to one day become an astronaut to eventually travel to Mars. She is doing the right things, taking all of the right steps to actually become an astronaut."

Her father, Bert, embraces her passion and gives her his full encouragement. He is confident that his daughter will fulfil her dreams in 2033. He says he has discussed the risks of space missions with her. "But if this is the only way to achieve her dreams, she is willing to take the risk," Bert told the BBC.

To accomplish her Martian ambitions she has attended NASA space camps in different countries over the past nine years and has learnt Spanish, French and Chinese.

"I have thought about possibly being other things, but being an astronaut was always first on my list. I do not want one obstacle in the way to stop me from going to Mars. Failure is not an option," she told her BBC interviewer.

"I want to go to Mars because it is a place where no one has ever been. I want to be the first to take this step," she added. Seeing her commitment, various international space organizations have invited the 13-year-old space enthusiast to give presentations, among them Mars One, a private Netherlands-based outfit planning a one-way manned mission to the Red Planet.

The Bionic Eye Allows Blind Man To ‘See’ After 33 Years

A revolutionary new bionic eye implanted into a 66-year-old blind man in the US has allowed him to 'see' for the first time in 33 years.

Larry Hester was diagnosed with retinitis pigmentosa when he was in his early 30s. At the time, the degenerative disease that would rob his sight was poorly understood, and there were no known treatments, researchers said.

On October 1, 2014, Hester became only the seventh person in the US to have a so-called bionic eye — an Argus II Retinal Prosthesis Device — activated as a visual aid that sends light signals to his brain. The device incorporates technology initially developed by researchers at the Duke Eye Center; its sophisticated features were further enhanced and marketed by a company called Second Sight Medical Products.

A sensor implanted in the eye uses wireless technology to pick up light signals sent from a camera mounted on special eyeglasses. Paul Hahn, a retinal surgeon at Duke, implanted the sensor on September 10 and activated the device three weeks later — to the sheer delight of Hester and his family.

Hahn cautioned that the device will not restore eyesight, but it provides a visual aid that could help Hester distinguish a door from a wall, or a crosswalk painted on a roadway. Hester describes seeing flashes of light that are more intense when he aims the camera at lights or light-coloured objects.

During a clinic visit, Hester described "seeing" sights he had long believed were past memories — a white duck swimming in a pond, the harvest moon, his wife's yellow chrysanthemums.

Hester's wife, Jerry, said her most cherished moment came while they were watching a football game. She was sitting in a dark chair, and her skin was enough of a contrast that Hester could see flashes. He reached out and touched her face.

French Bank Groupe BPCE Allows You To Send Money Via Twitter

Groupe BPCE, the second-largest bank in France by number of customers, has announced a new service that will let French customers send money over Twitter, according to Reuters.

It's unclear exactly how the service will work — whether it will use Twitter's limited direct messaging service or replies, for example. But Reuters points to a statement Groupe BPCE made last month saying that the new service, which will fall under the umbrella of the bank's existing mobile payments system S-Money, will allow person-to-person payments no matter which banks the sender and recipient use.

The news comes after Twitter's introduction of a "buy" button in early September, which allows businesses to let customers purchase items directly within a tweet.

Third-party services including Dwolla also already support money transfer options through Twitter and other social networks. Twitter's recent moves into e-commerce are somewhat ironic, given the fact that former Twitter cofounder Jack Dorsey went on to found his own mobile payments startup, Square, back in 2009.

"From the Twitter point of view, there is a limit to their appetite for getting involved in payments processing itself," said Andrew Copeman, a payments analyst with financial services research firm AITE Group, who is based in Edinburgh, Scotland.

"At the moment, banks are probably viewing Twitter and other social media networks as marketing channels to reach a wider set of their customers and to extend the bank's existing mobile banking initiatives," he said.

Twitter's success in developing additional services on its platform, as Facebook has done, will be key to its future profitability. Rakuten Bank (4755.T) in Japan offers a similar "Transfer by Facebook" service that lets users of its mobile banking app send money to anyone in their Facebook friends list.

Investors have been worried about Twitter's slowing user growth, sending the shares down about 17 percent this year, while rival Facebook's have climbed 35 percent.

Thomas Husson, a marketing strategy analyst with Forrester Research, said Twitter was likely to multiply efforts to explore new ways to generate revenue with banks and credit card firms.

"Twitter wants to more explicitly demonstrate the overall value of its network as an advertising platform," he said.

A New Algorithm Analyzes Shadows To Spot Fake Photos

A new algorithm can spot fake photos by looking for inconsistent shadows that are not always obvious to the naked eye.

The technique, which will be published in the journal ACM Transactions on Graphics in September, is the latest tool in the increasingly sophisticated arms race between digital forensics experts and those who manipulate photos or create fake tableaus for deceptive purposes.

National security agencies, the media, scientific journals and others use digital forensic techniques to differentiate between authentic images and computerized forgeries.

James O'Brien, a computer scientist at the University of California, Berkeley, along with Hany Farid and Eric Kee of Dartmouth College, developed an algorithm that interprets the various shadows in an image to determine whether they are physically consistent with a single light source.

In the real world, O'Brien explained, if you drew a line from a shadow to the object that cast the shadow and kept extending the line, it would eventually hit the light source. Sometimes, however, it isn't possible to pair each portion of a shadow to its exact match on an object.

"So instead we draw a wedge from the shadow where the wedge includes the whole object. We know that the line would have to be in that wedge somewhere. We then keep drawing wedges, extending them beyond the edges of the image," said O'Brien.

If the photo is authentic, then all of the wedges will have a common intersection region where the light source is. If they don't intersect, "the image is a phony," O'Brien said.
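The wedge test described above can be sketched as a small 2D feasibility check. This is not the researchers' published implementation, which works on real images and full shadow geometry; the apex points, angle ranges, and brute-force grid search below are illustrative assumptions only. Each wedge is given as a vertex (where the shadow is) plus an angular range that must contain the direction to the light; the photo is "consistent" if some candidate light position lies inside every wedge.

```python
import math

def in_wedge(apex, lo_deg, hi_deg, point):
    """True if `point` lies inside the wedge with vertex `apex`
    spanning directions [lo_deg, hi_deg] (counter-clockwise, degrees)."""
    ang = math.degrees(math.atan2(point[1] - apex[1],
                                  point[0] - apex[0])) % 360
    lo, hi = lo_deg % 360, hi_deg % 360
    if lo <= hi:
        return lo <= ang <= hi
    return ang >= lo or ang <= hi  # wedge straddles the 0-degree axis

def wedges_consistent(wedges, search=range(-200, 201, 5)):
    """Brute force: does any candidate light position on a coarse grid
    (extending beyond the 'image') fall inside every wedge?"""
    return any(
        all(in_wedge(apex, lo, hi, (x, y)) for apex, lo, hi in wedges)
        for x in search for y in search
    )

# Two shadows whose wedges both admit a light near (100, 100):
consistent = [((0, 0), 30, 60), ((50, 0), 45, 90)]
# Add a shadow implying a light in the opposite direction:
forged = consistent + [((0, 50), 200, 250)]

print(wedges_consistent(consistent))  # True
print(wedges_consistent(forged))      # False
```

A real system would replace the grid search with an exact intersection test (each wedge is the intersection of two half-planes, so joint feasibility is a small linear program), but the sampling version keeps the idea visible: authentic shadows leave a common region for the light, while a forged one empties it.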

A Growing Toolbox

The new technique does have limits, though. For instance, it was designed for use with images in which there is a single dominant light source, not situations with lots of little lights or a wide, diffuse light.

One could also imagine a clever forger anticipating the use of the shadow detection software and making sure they created shadows that would pass the test. The researchers call this just one technique in a toolbox of methods that are being developed to catch forgers.

O'Brien says one motivation for developing the algorithm was to reduce reliance on subjective evaluation by human experts, who can easily mistake forged photos for authentic ones and authentic photos for forgeries.

Take for example the iconic 1969 photo of NASA astronaut Buzz Aldrin posing on the surface of the moon.

"The shadows go in all kinds of different directions and the lighting's very strange ... but if you do the analysis [with our software], it all checks out," O'Brien said.

Our Trouble With Shadows

It's unclear why humans are so bad at detecting inconsistent shadows, especially since our visual systems are so attuned to other cues, such as color, size and shape, said UC-Berkeley vision researcher Marty Banks.

One idea, Banks said, is that shadows are a relatively unimportant visual cue when it comes to helping organisms survive.

"It's important to get the color right because that might be a sign that the fruit or meat you're going to eat is spoiled, and it's important to get size and position right so you can interact with things," said Banks, who did not participate in the research. "And then there are things where it just doesn't really matter. One of them is shadows, we believe."

After all, before the advent of photography, people were unlikely to encounter scenes in which shadows pointed in the wrong direction.

Analyzing shadows could also simply be a more mentally taxing task, said Shree Nayar, a computer vision researcher at New York's Columbia University, who was also not involved in the research.

"This is a more complex, second-order effect," Nayar said, "and it's something we have a much harder time perceiving."

Man-Machine Collaboration

For now, at least, the team's method still requires some human assistance — matching shadows to the objects that cast them.

"This is something that in many images is unambiguous and people are pretty good at it," O'Brien explained.

Once that is done, the software takes over and figures out if the shadows could have been created by a common light source.

In this way, the scientists say, their method lets humans do what computers can't (interpret the high-level content in images) and lets computers do what humans are poor at (test for inconsistencies).

"I think for the foreseeable future, the best approaches are going to be this hybrid of humans and machines working together," O'Brien said.

Columbia's Nayar said he could envision a day when computers won't need human assistance to perform such tasks, thanks to increasingly sophisticated models and machine learning algorithms.

Because their software requires relatively simple human assistance, O'Brien and his team say it could one day be useful not only to experts, but the general public as well.

"So you could imagine a plug-in for Photoshop or an interactive app in your web browser where you can do that, and it would flag any inconsistencies," O'Brien said.

This article originally appeared at LiveScience
Image: Jo Christian Oterhals/Flickr