December 27, 2012

US Teen Invents Advanced Cancer Test Using Google

Originally posted on bbc.co.uk, August 20, 2012

Fifteen-year-old high school student Jack Andraka likes to kayak and watch the US television show Glee.

And when time permits, he also likes to do advanced research in one of the most respected cancer laboratories in the world.

Jack Andraka has created a pancreatic cancer test that is 168 times faster and considerably cheaper than the gold standard in the field. He has applied for a patent for his test and is now carrying out further research at Johns Hopkins University in the US city of Baltimore.

And he did it by using Google.

The Maryland native, who won $75,000 at the Intel International Science and Engineering Fair in May for his creation, cites search engines and free online science papers as the tools that allowed him to create the test.

The BBC’s Matt Danzico sat down with the teenager, who said the idea came to him when he was “chilling out in biology class”.

Source: https://www.bbc.co.uk/news/magazine-19291258

Scientists Implant ‘World’s First’ Bionic Eye

Originally posted by Agence France-Presse on RawStory.com


Australian scientists said Thursday they had successfully implanted a “world first” bionic eye prototype, describing it as a major breakthrough for the visually impaired.

Bionic Vision Australia (BVA), a government-funded science consortium, said it had surgically installed an “early prototype” robotic eye in a woman with hereditary sight loss caused by degenerative retinitis pigmentosa.

Described as a “pre-bionic eye”, the tiny device is attached to Dianne Ashworth’s retina and contains 24 electrodes which send electrical impulses to stimulate her eye’s nerve cells.

Researchers switched on the device in their laboratory last month, after Ashworth had fully recovered from surgery, and she said it was an incredible experience.

“I didn’t know what to expect, but all of a sudden, I could see a little flash — it was amazing,” she said in a statement.

“Every time there was stimulation there was a different shape that appeared in front of my eye.”

Penny Allen, the surgeon who implanted the device, described it as a “world first”.

Ashworth’s device only works when it is connected inside the lab, and BVA chairman David Penington said it would be used to explore how images were “built” by the brain and eye.

Feedback from the device will be fed into a “vision processor” allowing doctors to determine exactly what Ashworth sees when her retina is subjected to various levels of stimulation.

“The team is looking for consistency of shapes, brightness, size and location of flashes to determine how the brain interprets this information,” explained Rob Shepherd, director of the Bionics Institute, which was also involved in the breakthrough.

The team is working towards a “wide-view” 98-electrode device that will provide users with the ability to perceive large objects such as buildings and cars, and a “high-acuity” 1,024-electrode device.

Patients with the high-acuity device are expected to be able to recognise faces and read large print, and BVA said it would be suitable for people with retinitis pigmentosa and age-related macular degeneration.

Penington said the early results from Ashworth had “fulfilled our best expectations, giving us confidence that with further development we can achieve useful vision”.

“The next big step will be when we commence implants of the full devices,” he said.

Source: https://www.rawstory.com/rs/2012/08/30/scientists-implant-world-first-bionic-eye/

Apple Rejects App That Tracks U.S. Drone Strikes

Originally posted by Christina Bonnington and Spencer Ackerman on Wired.com, August 30, 2012

It seemed like a simple enough idea for an iPhone app: Send users a pop-up notice whenever a flying robot kills someone in one of America’s many undeclared wars. But Apple keeps blocking the Drones+ program from its App Store — and therefore, from iPhones everywhere. The Cupertino company says the content is “objectionable and crude,” according to Apple’s latest rejection letter.

A mockup of developer Josh Begley’s drone-strike app for iOS. Wired.com

It’s the third time in a month that Apple has turned Drones+ away, says Josh Begley, the program’s New York-based developer. The company’s reasons for keeping the program out of the App Store keep shifting. First, Apple called the bare-bones application that aggregates news of U.S. drone strikes in Pakistan, Yemen and Somalia “not useful.” Then there was an issue with hiding a corporate logo. And now, there’s this crude content problem.

Begley is confused. Drones+ doesn’t present grisly images of corpses left in the aftermath of the strikes. It just tells users when a strike has occurred, going off a publicly available database of strikes compiled by the U.K.’s Bureau of Investigative Journalism, which compiles media accounts of the strikes.

iOS developers have a strict set of guidelines that must be adhered to in order to gain acceptance into the App Store. Apps are judged on technical, content and design criteria. As Apple does not comment on the app reviews process, it can be difficult to ascertain exactly why an app got rejected. But Apple’s team of reviewers is small, sifts through up to 10,000 apps a week, and necessarily errs on the side of caution when it comes to potentially questionable apps.

Apple’s original objections to Drones+ regarded the functionality in Begley’s app, not its content. Now he’s wondering if it’s worth redesigning and submitting it a fourth time.

“If the content is found to be objectionable, and it’s literally just an aggregation of news, I don’t know how to change that,” Begley says.

Begley’s app is unlikely to be the next Angry Birds or Draw Something. It’s deliberately threadbare. When a drone strike occurs, Drones+ catalogs it and presents a map of the area where the strike took place, marked by a pushpin. You can click through to media reports of a given strike that the Bureau of Investigative Journalism compiles, as well as some basic facts about whom the media thinks the strike targeted. As Begley’s demo video shows, that’s about it.
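To make that behavior concrete, here is a minimal Python sketch of the kind of poll-and-notify loop the article describes: check a strike database, record any strike that has not been seen before, and raise an alert with its location and basic facts. The feed URL and the notification function are placeholders for illustration, not Begley's actual code or the Bureau's real API.

```python
import json
import time
import urllib.request

# Hypothetical feed URL -- the real Drones+ app pulls from the Bureau of
# Investigative Journalism's public database; this placeholder stands in for it.
FEED_URL = "https://example.org/drone-strikes.json"

seen_ids = set()  # IDs of strikes we have already notified about


def fetch_strikes():
    """Download the current list of strike records (a list of dicts)."""
    with urllib.request.urlopen(FEED_URL) as resp:
        return json.loads(resp.read().decode("utf-8"))


def notify(strike):
    """Stand-in for an iOS push notification: just print the alert text."""
    print(f"Drone strike reported near {strike['location']} "
          f"({strike['lat']}, {strike['lon']}): {strike['summary']}")


def poll_forever(interval_seconds=600):
    """Poll the feed and raise a notification for each previously unseen strike."""
    while True:
        for strike in fetch_strikes():
            if strike["id"] not in seen_ids:
                seen_ids.add(strike["id"])
                notify(strike)
        time.sleep(interval_seconds)
```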

It works best, Begley thinks, when users enable push notifications for Drones+. “I wanted to play with this idea of push notifications and push button technology — essentially asking a question about what we choose to get notified about in real time,” he says. “I thought reaching into the pockets of U.S. smartphone users and annoying them into drone-consciousness could be an interesting way to surface the conversation a bit more.”

But that conversation may not end up occurring. Begley, a student at Clay Shirky’s lab at NYU’s Interactive Telecommunications Program, submitted a threadbare version of Drones+ to Apple in July. About two weeks later, on July 23, Apple told him it was just too blah. “The features and/or content of your app were not useful or entertaining enough,” read an e-mail from Apple that Begley shared with Wired, “or your app did not appeal to a broad enough audience.”

Finally, on Aug. 27, Apple gave him yet another thumbs down. But this time the company’s reasons were different from the fairly clear-cut functionality concerns it previously cited. “We found that your app contains content that many audiences would find objectionable, which is not in compliance with the App Store Review Guidelines,” the company e-mailed him.

It was the first time the App Store told him that his content was the real problem, even though the content hadn’t changed much from Begley’s initial July submission. It’s a curious choice: The App Store carries remote-control apps for a drone quadricopter, although not one actually being used in a war zone. And of course, the App Store houses innumerable applications for news publications and aggregators that deliver much of the same content provided by Begley’s app.

Wired reached out to Apple on the perplexing rejection of the app, but Apple was unable to comment.

Begley is just about at his wit’s end over the iOS version of Drones+. “I’m kind of back at the drawing board about what exactly I’m supposed to do,” Begley said. The basic idea was to see if he could get App Store denizens a bit more interested in the U.S.’ secretive, robotic wars, with information on those wars popping up on their phones the same way an Instagram comment or retweet might. Instead, Begley’s thinking about whether he’d have a better shot making the same point in the Android Market.

Drones+ iPhone App from Josh Begley on Vimeo.

Source: https://www.wired.com/dangerroom/2012/08/drone-app/

Harvard Creates Cyborg Flesh That’s Half Man, Half Machine

Originally posted by Sebastian Anthony on ExtremeTech.com, August 29, 2012

Bioengineers at Harvard University have created the first examples of cyborg tissue: Neurons, heart cells, muscle, and blood vessels that are interwoven by nanowires and transistors.

These cyborg tissues are half living cells, half electronics. As far as the cells are concerned, they’re just normal cells that behave normally — but the electronic side actually acts as a sensor network, allowing a computer to interface directly with the cells. In the case of cyborg heart tissue, the researchers have already used the embedded nanowires to measure the contractions (heart rate) of the cells.
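That kind of readout is easy to picture with a short, purely illustrative sketch: given a sampled voltage trace from an embedded sensor, count threshold crossings to estimate a beat rate. The synthetic signal and the simple peak-counting below are assumptions for illustration, not the Harvard group's actual analysis pipeline.

```python
import numpy as np

# Synthetic stand-in for a voltage trace read from an embedded nanowire sensor:
# sharp periodic "beats" at 1.5 Hz plus measurement noise, sampled at 1 kHz.
fs = 1000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1.0 / fs)               # 10 seconds of data
trace = np.sin(2 * np.pi * 1.5 * t) ** 21    # odd power -> narrow peaks
trace += 0.05 * np.random.randn(t.size)      # sensor noise


def contraction_rate(signal, fs, threshold=0.5, smooth_samples=25):
    """Estimate contractions per minute by counting upward threshold crossings."""
    kernel = np.ones(smooth_samples) / smooth_samples
    smooth = np.convolve(signal, kernel, mode="same")   # suppress noise
    above = smooth > threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:])    # rising edges only
    minutes = signal.size / fs / 60.0
    return len(onsets) / minutes


print(f"Estimated rate: {contraction_rate(trace, fs):.0f} contractions per minute")
```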

To create cyborg flesh, you start with a three-dimensional scaffold that encourages cells to grow around it. These scaffolds are generally made of collagen, which makes up the connective tissue in almost every animal. The Harvard engineers basically took normal collagen and wove nanowires and transistors into the matrix to create nanoelectric scaffolds (nanoES). The neurons, heart cells, muscle, and blood vessels were then grown as normal, creating cyborg tissue with a built-in sensor network.

Cardiac cells, with a nanoelectronic electrode highlighted. Extremetech.com

So far the Harvard team has mostly grown rat tissues, but they have also succeeded in growing a 1.5-centimeter (0.6in) cyborg human blood vessel. They’ve also only used the nanoelectric scaffolds to read data from the cells — but according to lead researcher Charles Lieber, the next step is to find a way of talking to the individual cells, to “wire up tissue and communicate with it in the same way a biological system does.”

Suffice it to say, if you can use a digital computer to read and write data to your body’s cells, there are some awesome applications. If you need a quick jolt of adrenaline, you would simply tap a button on your smartphone, which is directly connected to your sympathetic nervous system. You could augment your existing physiology with patches — a patch of nanoelectric heart cells, for example, that integrates with your heart and reports back if you experience any problems. When we eventually put nanobots into our bloodstream, small pulses of electricity emitted by the cells could be used to guide them to damaged areas. In the case of blood vessels and other organs, the nanoelectric sensor network could detect inflammation, blockages, or tumors.

A computer chip, containing a sample of nanoES tissue. Extremetech.com

Realistically, though, we’re a long way away from such applications. In the short term, these cyborg tissues could be used to create very accurate organs-on-a-chip — lab-grown human organs that are encased within computer chips and then used to test drugs or substance toxicity, without harming a single bunny or bonobo.

Source: https://www.extremetech.com/extreme/135207-harvard-creates-cyborg-flesh-thats-half-man-half-machine

Metatronic Chip Replaces Electricity With Light, Swaps Resistors With Silicon Nitride Nanorods

Originally posted by Sebastian Anthony on ExtremeTech.com, February 24, 2012

Image from www.element14.com

Optical engineers at the University of Pennsylvania have created the first computer circuit where logic is performed with light instead of electricity. Dubbed “metatronics,” this light-based logic could enable smaller, faster, and more energy efficient computer chips.

The team, led by Nader Engheta, demonstrated that it’s possible to make resistors, inductors, and capacitors that act on light. By creating a chip that has a comb-like array of nanorods — tiny pillars of silicon nitride (pictured below) — the flow of light can be controlled in such a way that the “voltage” and “current” of the optical signal can be altered. By changing the height and width of the nanorods, and by altering their arrangement, different effects can be achieved. For example, if light has to pass by a short rod and then a tall rod, it might create a resistor-like effect — but a square of four short rods might act as an optical capacitor. The metatronic name comes from the fact that these nanorods are a metamaterial; a material that has properties that can’t be found in nature.

Because Engheta and co. are working with light instead of electricity, their metatronic chip has some very odd properties. For example, light’s polarization — whether the light wave undulates left/right or up/down — affects how it moves through the nanorods. When the light is aligned with the nanorods (pictured above), the circuit behaves as if its elements were wired in parallel; but when the light is perpendicular, they behave as if wired in series. In effect, one set of nanorods can act as two different circuits, which Engheta calls “stereo-circuitry.”
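The series-versus-parallel distinction borrows its arithmetic from ordinary lumped circuits, and that part is easy to illustrate. In the Python sketch below, the same two elements, a resistor-like response and a capacitor-like response, give very different overall impedances depending on whether they are combined in series or in parallel. The element values and the optical frequency are invented, and this is only the circuit analogy, not a simulation of the nanorods themselves.

```python
import numpy as np

# Illustrative lumped-element analogy (all values are made up): the same two
# elements combined in series or in parallel respond very differently, which
# is the sense in which one nanorod array can act as two circuits depending
# on the light's polarization.

R = 50.0          # ohms, resistor-like nanorod response (assumed value)
C = 2e-15         # farads, capacitor-like nanorod response (assumed value)
freq = 193e12     # Hz, roughly a 1550 nm optical carrier


def z_capacitor(c, f):
    """Complex impedance of an ideal capacitor at frequency f."""
    return 1.0 / (1j * 2 * np.pi * f * c)


z_r, z_c = R, z_capacitor(C, freq)

# Per the article: perpendicular polarization -> series behavior,
# polarization aligned with the rods -> parallel behavior.
series = z_r + z_c
parallel = 1.0 / (1.0 / z_r + 1.0 / z_c)

print(f"Series impedance:   {series:.2f} ohms")
print(f"Parallel impedance: {parallel:.2f} ohms")
```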


Furthermore, if you rotate the circuit itself through 45 degrees, the light wave would hit the nanorods obliquely, creating a circuit that is neither series nor parallel — a setup that doesn’t occur in regular electronics. Eventually — and be careful, this might make your brain explode — you could even build 3D arrays of nanorods, where a single arrangement could act as dozens of different circuits.

To put this into perspective, imagine a low-power, ultra-high-speed CPU that turns into a GPU when you change the input signal — that’s the kind of functionality that metatronic circuits might one day enable. Before that happens, though, work needs to be done on optical interconnects — and, as yet, the closest we’ve come to creating an optical transistor is MIT’s optical diode. In the short term it is much more likely that optoelectronic chips — chips that mix electronic logic with optical interconnects, and which can be built using standard semiconductor processes — will be used commercially.

Sources:

https://www.extremetech.com/extreme/119759-metatronic-chip-replaces-electricity-with-light-swaps-resistors-with-nanorods

University of Pennsylvania

Hover Bike: Star Wars Technology Brought To Life

Originally posted on EndTheLie.com

Image from www.aerofex.com

California-based firm Aerofex has created an aerial vehicle with two ducted rotors instead of wheels, based on a design abandoned in the 1960s because of stability and rollover problems.

The aerospace firm managed to fix the stability issue by creating a mechanical system — controlled by two control bars at knee-level — that allows the vehicle to respond to a human pilot’s leaning movements and natural sense of balance, InnovationNewsDaily reports.

“Think of it as lowering the threshold of flight, down to the domain of ATV’s [all-terrain vehicles],” said Mark De Roche, an aerospace engineer and founder of Aerofex.

The hover bike does not require special training and could become a useful tool in agriculture, border control and search-and-rescue operations.

“Imagine personal flight as intuitive as riding a bike,” reads the firm’s website. “Or transporting a small fleet of first-responder craft in the belly of a passenger transport. Think of the advantages of patrolling borders without first constructing roads.”

Aerofex does not initially plan to develop and sell a manned version of the hover vehicle; instead, it plans to use the craft as a test platform for unmanned drones.

Source: https://EndtheLie.com/2012/08/22/hover-bike-star-wars-technology-brought-to-life-video

Precrime creeps closer to reality, with predictive smartphone location tracking

Originally posted by Sebastian Anthony on ExtremeTech.com, August 21, 2012

 

As seen in “Minority Report”

A British research group has developed software that can predict, within 20 meters, where you will be 24 hours from now.

It’s actually surprisingly easy to predict your general, routine movements — home, car, office, lunch, car, home — but it has always been nigh impossible to predict breaks in routine, such as a trip to the cinema or a holiday abroad.

The researchers, Mirco Musolesi, Manlio De Domenico, and Antonio Lima of the University of Birmingham, cracked this problem by factoring in the location of your friends and your social interactions with those friends (phone calls, meet-ups, etc.). By simply analyzing how many calls you make to a friend, and by correlating your movement patterns, the researchers can predict your movements over the next 24 hours — even if you deviate dramatically from routine.
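To give a flavor of the idea, the toy Python sketch below blends a person's own historical position for a given hour with the expected positions of the friends they call most often, weighted by call frequency. This is not the Birmingham group's published algorithm, and every coordinate, call count, and weight in it is invented.

```python
import numpy as np

# Toy illustration: predict where a person will be at a given hour tomorrow by
# blending their own historical position for that hour with the expected
# positions of the friends they call most often. All numbers are invented.

# Historical average position (x, y in km) of the target at 8 pm.
own_history = np.array([2.0, 3.5])

# Friends' expected positions at 8 pm tomorrow, and how often the target calls them.
friend_positions = {
    "alice": np.array([10.0, 1.0]),
    "bob":   np.array([2.5, 3.0]),
}
call_counts = {"alice": 2, "bob": 30}


def predict_position(own, friends, calls, social_weight=0.4):
    """Weighted blend of personal routine and socially implied location."""
    total_calls = sum(calls.values())
    # Friends who are called more often pull the prediction harder.
    social = sum(calls[name] / total_calls * pos for name, pos in friends.items())
    return (1 - social_weight) * own + social_weight * social


print("Predicted position at 8 pm tomorrow:",
      predict_position(own_history, friend_positions, call_counts))
```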

The repercussions of such an algorithm are immense, with possible applications that range from awesome to terrifying. On the relatively benign side of things, you can imagine a version of Google Now that knows where you will be tomorrow, and offers up suggestions for which clothes to wear, which other friends will be in the area, and where you should eat (plus a coupon, if the restaurant is one of Google’s partners). This application of the algorithm would be opt-in — if you want to enjoy the services that Google can provide by knowing your predicted location, then that’s your choice.

On the nefarious end of the spectrum, though, this algorithm could be the cornerstone of a Precrime Police Division, a la Minority Report. Precrime would track the location of known criminals via their smartphones, and put a tap on their calls to correlate their movements with their friends/known associates. Very quickly, the Precrime Police could create a map of where every criminal will be in the next 24 hours. It would probably be difficult to predict actual crimes, but at least you’d know where to station your cops.

It’s worth noting that some police departments are already doing something similar, but on a much broader scale: They’re collating all of the reports and arrests in their database, and then plotting them on a map to see where crime is most likely to occur on any given day. In regions where police forces are being downsized, technology will become increasingly important as a force amplifier — and eventually, I wouldn’t be surprised if a real, per-criminal precrime system is deployed.

 

Source: https://www.extremetech.com/computing/134422-precrime-creeps-closer-to-reality-with-predictive-smartphone-location-tracking

200 page book converted into DNA by researchers

Originally posted by Jed E. Robinson on RoundNews.com, August 17, 2012

Scientists from Harvard University wanted to prove that DNA, the molecule that carries genetic information, can be a viable storage medium. They took a book of more than 200 pages, totaling close to 53,000 words.

The book also had 11 images and a short JavaScript program added to its contents.

The scope of Harvard’s research was to see whether DNA molecules can be used to store a large amount of data. DNA can last for thousands of years, as opposed to the average hard drive, whose lifespan is close to five years of active use. If DNA is trapped in amber, it can last for millions of years.

In order to convert the digital version of the book to DNA, the following process was followed:

- Researchers first took the binary code of the book.

- The resulting binary string was analyzed bit by bit, and a nucleobase was assigned to every bit value (a short encoding sketch follows after this list).

- The resulting 5.27-million-base DNA sequence was synthesized in segments of 96 bases at a time.

- The synthesized DNA now contains the entire book. Its weight is one million times less than the weight of a grain of salt.
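As a rough illustration of the encoding step, the Python sketch below maps each bit to a nucleobase and back again. The published Harvard scheme wrote one bit per base, with 0 as A or C and 1 as G or T to help avoid long runs of a single base; the simple alternation rule used here only approximates that idea.

```python
# Minimal sketch of the bit-to-base mapping: one base per bit, with 0 drawn
# from {A, C} and 1 drawn from {G, T}. The alternation rule below is a
# simplification for illustration, not the exact published encoding.

ZERO_BASES = "AC"   # bases that stand for a 0 bit
ONE_BASES = "GT"    # bases that stand for a 1 bit


def encode(data: bytes) -> str:
    """Convert raw bytes to a DNA sequence, one base per bit."""
    bases = []
    for i, byte in enumerate(data):
        for j in range(7, -1, -1):                 # most significant bit first
            bit = (byte >> j) & 1
            pool = ONE_BASES if bit else ZERO_BASES
            bases.append(pool[(i * 8 + (7 - j)) % 2])  # alternate within the pool
    return "".join(bases)


def decode(sequence: str) -> bytes:
    """Recover the original bytes from the DNA sequence."""
    bits = ["1" if base in ONE_BASES else "0" for base in sequence]
    return bytes(int("".join(bits[k:k + 8]), 2) for k in range(0, len(bits), 8))


text = b"It was the best of times"        # stand-in for the book's contents
assert decode(encode(text)) == text
print(encode(text)[:48], "...")
```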

After the book was converted to DNA, Harvard scientists went ahead and tried to read the content back in order to determine how reliable DNA is as a storage medium. Only 10 bits out of the total of 5.27 million were read incorrectly. Current technology offers an easy way to read DNA, and there are many commercially available solutions on the market.

DNA is the basis of life. Using it to store data in the near future is not that far-fetched.

Source: https://www.roundnews.com/science/beyond-science/449-200-page-book-converted-into-dna-by-researchers.html

Darpa Looks to Make Cyberwar Routine With Secret ‘Plan X’

Col. Todd Wood (right), commander of 1st Stryker Brigade Combat Team, 25th Infantry Division, briefs National Security Agency director Gen. Keith Alexander at Forward Operating Base Masum Ghar in Kandahar Province, Afghanistan. Photo: Sgt. Michael Blalack/U.S. Army

Originally posted by Noah Shachtman on Wired.com, August 21, 2012

The Pentagon’s top research arm is unveiling a new, classified cyberwarfare project. But it’s not about building the next Stuxnet, Darpa swears. Instead, the just-introduced “Plan X” is designed to make online strikes a more routine part of U.S. military operations. That will make the son of Stuxnet easier to pull off — to, as Darpa puts it, “dominate the cyber battlespace.”

Darpa spent years backing research that could shore up the nation’s cyberdefenses. “Plan X” is part of a growing and fairly recent push into offensive online operations by the Pentagon agency largely responsible for the internet’s creation. In recent months, everyone from the director of Darpa on down has pushed the need to improve — and normalize — America’s ability to unleash cyberattacks against its foes.

That means building tools to help warplanners assemble and launch online strikes in a hurry. It means, under Plan X, figuring out ways to assess the damage caused by a new piece of friendly military malware before it’s unleashed. And it means putting together a sort of digital battlefield map that allows the generals to watch the fighting unfold, as former Darpa acting director Ken Gabriel told the Washington Post: “a rapid, high-order look of what the Internet looks like — of what the cyberspace looks like at any one point in time.”

It’s not quite the same as building the weapons themselves, as Darpa notes in its introduction to the five-year, $100 million effort, issued on Monday: “The Plan X program is explicitly not funding research and development efforts in vulnerability analysis or cyberweapon generation.” (Emphasis in the original.)

But it is certainly a complementary campaign. A classified kick-off meeting for interested researchers is scheduled for Sept. 20.

The American defense and intelligence establishment has been reluctant at times to authorize network attacks, for fear that their effects could spread far beyond the target computers. On the eve of the Iraq invasion of 2003, for instance, the Bush administration made plans for a massive online strike on Baghdad’s financial system before discarding the idea out of collateral damage concerns.

It’s not the only factor holding back such operations. U.S. military chiefs like National Security Agency director Gen. Keith Alexander have publicly expressed concern that America may not be able to properly respond to a national-level attack unless they’re given pre-defined battle plans and “standing rules of engagement” that would allow them to launch a counterstrike “at net speed.” Waiting more than a few moments might hurt the American ability to respond at all, these officers say.

“Plan X” aims to solve both problems simultaneously, by automatically constructing mission plans that are as easy to execute as “the auto-pilot function in modern aircraft,” but contain “formal methods to provably quantify the potential battle damage from each synthesized mission plan.”

Then, once the plan is launched, Darpa would like to have machines running on operating systems that can withstand the rigors of a full-blown online conflict: “hardened ‘battle units’ that can perform cyberwarfare functions such as battle damage monitoring, communication relay, weapon deployment, and adaptive defense.”

The ability to operate in dangerous areas, pull potential missions off-the-shelf, and assess the impact of attacks — these are all commonplace for air, sea, and land forces today. The goal of Plan X is to give network-warfare troops the same tools. “To get it to the point where it’s a part of routine military operations,” explains Jim Lewis, a long-time analyst of online operations at the Center for Strategic and International Studies.

Of course, many critics of U.S. policy believe the deployment of cyberweapons is already too routine. America’s online espionage campaign against Iran has been deeply controversial, both at home and abroad. The Russian government and its allies believe that cyberweapons ought to be banned by international treaty. Here in the U.S., there’s a fear that, by unleashing Stuxnet and other military-grade malware, the Obama administration legitimized such attacks as a tool of statecraft — and invited other nations to strike our fragile infrastructure.

The Darpa effort is being led, fittingly, by a former hacker and defense contractor. Daniel Roelker helped start the intrusion detection company Sourcefire and the DC Black Ops unit of Raytheon SI Government Solutions. In a November 2011 presentation (.pdf), Roelker decried the current, “hacker vs. hacker” approach to online combat. It doesn’t scale well — there are only so many technically skilled people — and it’s limited in how fast it can be executed. “We don’t win wars by out-hiring an adversary, we win through technology,” he added.

Instead, Roelker continued, the U.S. needs a suite of tools to analyze the network, automate the execution of cyberattacks, and be sure of the results. At the time, he called these the “Pillars of Foundational Cyberwarfare.” Now, it’s simply known as Plan X.

Source: https://www.wired.com/dangerroom/2012/08/plan-x

How noise in brain-cell signals affects neuron response time and thinking

New model of background noise in the nervous system could help better understand neuronal signaling delay in response to a stimulus

Biomedical engineer Muhammet Uzuntarla from Bulent Ecevit University, Turkey, and colleagues have developed a biologically accurate model of how noise in the nervous system induces a delay in the response of neurons to external stimuli.

A new spike-latency noise model

Information encoding based on spike timing has attracted increasing attention due to the growing evidence for the relation between synchronization in neural networks and higher brain functions, such as memory, attention and cognition. And it has been shown that first-spike latency (arrival time of the first spike associated with information) carries a considerable amount of information, possibly more than other spikes.

The researchers analyzed the presence of noise in the nervous system, detected by changes in first-spike latency (the time it takes for brain cells to first respond to an external stimulus) and jitter (variation in spike timing). The noise is generated by the synaptic bombardment of each neuron with a large number of incoming excitatory and inhibitory spike inputs, and by the fact that chemical synaptic signaling does not always succeed.

Previous attempts at noise modeling used a generic bell-shaped signal, referred to as a Gaussian approximation. The new noise model, published in European Physical Journal B, is closer to biological reality, the engineers suggest.

They showed there is a relation between the noise and delays in spike signal transmission, and identified two factors that could be tuned, thus influencing the noise: the incoming excitatory and inhibitory input signaling regime and the coupling strength between inhibitory and excitatory synapses. Modulating these factors could help neurons encode information more accurately, they found.
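The notions of first-spike latency and jitter are easy to reproduce in a toy simulation. The Python sketch below is not the Uzuntarla model: it drives a simple leaky integrate-and-fire neuron with a constant stimulus plus Gaussian background current, the generic bell-shaped approximation mentioned above, and measures how widely the time of the first spike varies across trials. All parameters are invented.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: constant stimulus plus Gaussian
# background current. We record the first-spike latency on each trial; the
# spread of latencies across trials is the jitter. Parameters are illustrative.

rng = np.random.default_rng(0)
dt, t_max = 0.1, 100.0            # time step and trial length, ms
tau, v_rest, v_thresh = 10.0, 0.0, 1.0
stimulus, noise_std = 0.12, 0.05


def first_spike_latency():
    """Simulate one trial and return the time of the first spike, in ms."""
    v, t = v_rest, 0.0
    while t < t_max:
        noise = noise_std * rng.standard_normal() * np.sqrt(dt)
        v += dt * (-(v - v_rest) / tau + stimulus) + noise
        t += dt
        if v >= v_thresh:
            return t
    return np.nan                  # no spike within the window


latencies = np.array([first_spike_latency() for _ in range(500)])
latencies = latencies[~np.isnan(latencies)]
print(f"mean first-spike latency: {latencies.mean():.1f} ms, "
      f"jitter (std): {latencies.std():.1f} ms")
```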