Showing posts with label Ray Kurzweil. Show all posts

Thursday, 22 May 2014

Wireless Microchip Implant Set For Human Trials

Nicholas West
Activist Post

Once again, it seems that yesterday's conspiracy theory is today's news. 

However, the signposts have been there all along. Microchip implants to track pets, livestock, and the elderly are now widely available, while microchipping kids is not far off. Extensive animal testing has been conducted on monkeys to enable them to control devices via brain-computer interfaces. Edible "smart pill" microchips have been embraced as a way to accurately monitor patient dosages and vital signs.

In the name of health and security - always the dynamic duo for introducing the next level of science fiction into everyday reality - a new wirelessly powered implant a fraction the size of a penny, as seen above, promises to offer a whole new ease of medical monitoring and drug delivery.

Futurist and Google director of engineering Ray Kurzweil has discussed at length the imminent Human Body 2.0, which will incorporate medical nanobots that can deliver drugs to specific cells and identify certain genetic markers using fluorescent labeling. Once these nanobots have entered the body, Kurzweil indicates, they could then connect our brains directly to Cloud computing systems. Most significantly, Kurzweil states:
It will be an incremental process, one already well under way. Although version 2.0 is a grand project, ultimately resulting in the radical upgrading of all our physical and mental systems, we will implement it one benign step at a time. Based on our current knowledge, we can already touch and feel the means for accomplishing each aspect of this vision. (emphasis added) [Source]
Read more

Wednesday, 21 May 2014

Cloaked DNA nanodevices

WChild Blog via KurzweilAI

Scientists at Harvard’s Wyss Institute for Biologically Inspired Engineering have built the first DNA nanodevices that survive the body’s immune defenses.

The results pave the way for smart DNA nanorobots that could use logic to diagnose cancer earlier and more accurately than doctors can today, target drugs to tumors, or even manufacture drugs on the spot to cripple cancer, the researchers report in the April 22 online issue of ACS Nano.

“We’re mimicking virus functionality to eventually build therapeutics that specifically target cells,” said Wyss Institute Core Faculty member William Shih, Ph.D., the paper’s senior author. Shih is also an Associate Professor of Biological Chemistry and Molecular Pharmacology at Harvard Medical School and Associate Professor of Cancer Biology at the Dana-Farber Cancer Institute.

The same cloaking strategy could also be used to make artificial microscopic containers called protocells that could act as biosensors to detect pathogens in food or toxic chemicals in drinking water.

DNA is well known for carrying genetic information, but Shih and other bioengineers are using it instead as a building material. To do this, they use DNA origami — a method Shih helped extend from 2D to 3D. In this method, scientists take a long strand of DNA and program it to fold into specific shapes, much as a single sheet of paper is folded to create various shapes in the traditional Japanese art.

Shih’s team assembles these shapes to build DNA nanoscale devices that might one day be as complex as the molecular machinery found in cells. For example, they are developing methods to build DNA into tiny robots that sense their environment, calculate how to respond, then carry out a useful task, such as performing a chemical reaction or generating mechanical force or movement.

In 2012 Wyss Institute researchers reported in Science that they had built a nanorobot that uses logic to detect a target cell, then reveals an antibody that activates a “suicide switch” in leukemia or lymphoma cells.
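The "logic" in that 2012 nanorobot works like an AND gate: the device's aptamer locks open, exposing the antibody payload, only when every lock finds its target antigen on the cell surface. The sketch below is purely illustrative (the antigen names and function are invented for this example, not taken from the researchers' work):

```python
# Illustrative sketch of AND-gate targeting: the payload is exposed only
# when every aptamer lock recognises its antigen on the cell surface.

def nanorobot_payload_exposed(surface_antigens, required_keys):
    """Return True only if every required antigen is present on the cell."""
    return all(key in surface_antigens for key in required_keys)

# Hypothetical antigen names, for illustration only.
leukemia_cell = {"antigen_A", "antigen_B"}
healthy_cell = {"antigen_A"}
required = {"antigen_A", "antigen_B"}

print(nanorobot_payload_exposed(leukemia_cell, required))  # True
print(nanorobot_payload_exposed(healthy_cell, required))   # False
```

Requiring two or more simultaneous markers is what lets the device discriminate a cancer cell from a healthy one that shares a single marker.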

Read more


Sunday, 23 February 2014

Robots will be smarter than us all by 2029, warns AI expert Ray Kurzweil



The Independent 

One of the world’s leading futurologists and artificial intelligence (AI) developers, 66-year-old Kurzweil has previous form in making accurate predictions about the way technology is heading.

In 1990 he said a computer would be capable of beating a chess champion by 1998 – a feat managed by IBM’s Deep Blue, against Garry Kasparov, in 1997.

When the internet was still a tiny network used by a small collection of academics, Kurzweil anticipated it would soon make it possible to link up the whole world.

Now, Kurzweil says that within 15 years robots will have overtaken us, having fulfilled the so-called Turing test, in which a computer exhibits intelligent behaviour equal to that of a human.

Speaking in an interview with the Observer, he said that his prediction was foreshadowed by recent high-profile AI developments, and Hollywood films like Her, starring Joaquin Phoenix.

“Today, I’m pretty much at the median of what AI experts think and the public is kind of with them,” he said.

“The public has seen things like Siri (Apple’s voice recognition software), where you talk to a computer. They’ve seen the Google self-driving cars. My views are not radical any more.”

Though credited with inventing the world’s first flat-bed scanners and text-to-speech synthesisers, Kurzweil is perhaps most famous for his theory of “the singularity” – a point in the future where humans and machines will apparently “converge”.

His decision to work for Google came after the company acquired a host of other AI developers, from the BigDog creators Boston Dynamics to the British startup DeepMind.

And the search engine giant’s co-founder Larry Page was able to convince Kurzweil to take on “his first actual job” by promising him “Google-scale resources”.

With the company’s unprecedented billions to spend, and some of humanity’s greatest minds already on board, it is clearly only a matter of time before we reach that point when robots can joke, learn and yes, even flirt.


Tuesday, 4 February 2014

What does Google want with DeepMind? Here are three clues


Orwell Was Right

Google's seemingly inexorable drive to control every aspect of our lives continues unabated. The Conversation takes a look at their move into the field of artificial intelligence.

All eyes turned to London this week, as Google announced its latest acquisition in the form of DeepMind, a company that specialises in artificial intelligence technologies. The £400m price tag paid by Google and the reported battle with Facebook to win the company over indicate that this is a firm well worth backing.
Although solid information is thin on the ground, you can get an idea of what the purchase might be leading to, if you know where to look.
Clue 1: what does Google already know?
Google has always been active in artificial intelligence and relies on the process for many of its projects. Just consider the “driver” behind its driverless cars, the speech recognition system in Google Glass, or the way its search engine predicts what we might search for after just a couple of keystrokes. Even the page-rank algorithm that started it all falls under the banner of AI.
Acquiring a company such as DeepMind therefore seems like a natural step. The big question is whether Google is motivated by a desire to help develop technologies we already know about or whether it is moving into the development of new technologies.
Given its track record, I’m betting on the latter. Google has the money and the drive to tackle the biggest questions in science, and developing computers that think like humans has, for a long time, been one of the biggest of them all.
Clue 2: what’s in the research?
The headlines this week have described DeepMind as a “secretive start-up”, but clues about what it gets up to at its London base can be gleaned from some of the research publications produced by the company’s co-founder, Demis Hassabis.
Hassabis' three most recent publications all focus on the brain activity of human participants as they undergo particular tasks. He has looked into how we take advantage of our habitat, how we identify and predict the behaviour of other people and how we remember the past and imagine the future.
As humans, we collect information through sensory input and process it many times over using abstraction. We extract features and categorise objects to focus our attention on the information that is relevant to us. When we enter a room we quickly build up a mental image of the room, interpret the objects in the room, and use this information to assess the situation in front of us.
The people at Google have, until now, generally focused on the lower-level stages of this information processing. They have developed systems to look for features and concepts in online photos and street scenes to provide users with relevant content, systems to translate one language to another to enable us to communicate, and speech recognition systems, making voice control on your phone or device a reality.
The processes Hassabis investigates require these types of information processing as prerequisites. Only once you have identified the relevant features in a scene and categorised objects in your habitat can you begin to take advantage of your habitat. Only once you have identified the features of someone’s face and recognised them as someone you know can you start to predict their behaviour. And only once you have built up vivid images of the past can you extrapolate a future.
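The staged pipeline described here (raw input to features, features to object categories, categories to higher-level prediction) can be caricatured in a few lines of code. Everything below is a toy illustration with invented names; it is not Google's or DeepMind's code:

```python
# Toy sketch of staged processing: low-level feature extraction feeds
# categorisation, which feeds higher-level prediction.

def extract_features(scene):
    # Stage 1: reduce raw input to features (here, just word tokens).
    return set(scene.lower().split())

def categorise(features, known_objects):
    # Stage 2: map features onto known object categories.
    return features & known_objects

def predict(objects, expectations):
    # Stage 3: use recognised objects to anticipate what happens next.
    return [expectations[obj] for obj in sorted(objects) if obj in expectations]

known = {"door", "chair", "face"}
expect = {"door": "someone may enter", "face": "greet the person"}

objects = categorise(extract_features("a face at the door"), known)
print(predict(objects, expect))  # ['someone may enter', 'greet the person']
```

The point of the sketch is the dependency ordering: the prediction stage cannot run until categorisation has done its work, just as the article argues Hassabis' high-level questions presuppose Google's existing low-level machinery.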
Clue 3: what else is on the shopping list?
Other recent moves by Google provide further pieces of the puzzle. It has recently appointed futurist Ray Kurzweil, who believes in search engines with human intelligence and being able to upload our minds onto computers, as its director of engineering. And the purchase of Boston Dynamics, a company developing groundbreaking robotics technology, gives a hint of its ambition.
Google is also getting into smart homes in the hope of more deeply interweaving its technologies into our everyday lives. DeepMind could provide the know-how to enable such systems to exhibit a level of intelligence never seen before in computers.
Combining the machinery Google already uses for processing sensory input with the ideas under investigation at DeepMind about how the brain uses this sensory input to complete high-level tasks is an exciting prospect. It has the potential to produce the closest thing yet to a computer with human qualities.
Building computers that think like humans has been the goal of AI ever since the time of Alan Turing. Progress has been slow, with science fiction often creating false hope in people’s minds. But these past two decades have seen unimaginable leaps in information processing and our understanding of the brain. Now that one of the most powerful companies in the world has identified where it wants to go next, we can expect big things. Just as physics had its heyday in the 20th century, this century is truly the golden age of AI.

Sunday, 26 January 2014

Robots to Breed with Each Other and Humans by 2045

Nicholas West
Activist Post

Cybernetics experts say it's possible for robots to breed with each other, and with humans, by 2045.


The magical transhumanist date of 2045 holds many predictions for how man will attain his final merger with computer systems and usher in an age of "spiritual" machines. Ray Kurzweil has issued a bevy of likely scenarios in his book The Singularity is Near, and continues to suggest that many of those predictions could arrive much sooner. Others have pointed strictly to the economic impact and have marked 2045 as the date when humans could be completely outsourced to robotic workers.

Now cybernetic experts are pointing to the trends in robotics, artificial intelligence, and 3D printing to suggest that the "merger" could go beyond the establishment of an era of cyborgs and into a very literal one: sex with robots.


There has been an ongoing move to create humanoid robots that can more than simply mimic human ability and behavior. Attention is being paid to the social aspect as well. But what is now being proposed has even more serious ethical and existential implications, and very well could bring about the concept of a true "master race."


Read more
 

Friday, 15 November 2013

Google Funds Creation of Secretive Avatar-Style Virtual Reality

Old Thinker News via infowars.com

Google is funding a secretive project that will use millions of linked computers to create an Avatar-like virtual reality world in which people could live, interact, and even have sex.


The idea sounds like a rudimentary version of the 1999 science fiction thriller The Thirteenth Floor, in which supercomputers create a simulated reality populated by human characters who don’t know that they are living in an artificially generated world.


Entitled High Fidelity, the project envisions a virtual reality “world extending visibly to vanishing points like our world does today, enabling you to see your house, your neighborhood, distant mountains, and other planets in the sky,” and will rely on “millions of people to contribute their devices and share them to simulate the virtual world.”


Second Life founder Philip Rosedale, the program’s architect, says the idea is to “create a virtual place with the kind of richness and communication and interaction that we find in the real world, and then get us all in there.” Rosedale boldly predicts that within six years High Fidelity will allow people to immerse themselves in virtual landscapes that resemble cutting edge CGI environments seen in movies like Avatar and Star Trek.


The slogan for the project states, “If it doesn’t hurt to think about it, we’re not going to try it.”

According to a report by Singularity Hub’s Jason Dorrier, the project will utilize a second or third generation version of Oculus Rift, the virtual reality headset, in addition to an array of body sensing technology in order to create a tactile environment with virtually instantaneous communication between physical movement and the behavior of the individual’s avatar within the virtual reality world.


“High Fidelity’s other big idea will power the world they live in,” writes Dorrier. “In exchange for virtual money, virtual citizens will assign their computer’s unused processing power—when they’re sleeping, for example—to construct High Fidelity’s world in exquisite detail.” The program could use anything up to a billion linked computers to sculpt and maintain its artificial landscape.
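The barter Dorrier describes, idle compute donated in exchange for virtual currency, amounts to simple ledger accounting. The sketch below is a minimal illustration; the exchange rate, names, and function are invented for this example and are not part of High Fidelity:

```python
# Minimal sketch of compute-for-currency accounting: contributors donate
# idle CPU-hours and accrue virtual-currency credits at a fixed rate.

CREDITS_PER_CPU_HOUR = 10  # hypothetical exchange rate

def settle(contributions):
    """Map each contributor's donated CPU-hours to virtual-currency credits."""
    return {user: hours * CREDITS_PER_CPU_HOUR
            for user, hours in contributions.items()}

ledger = settle({"alice": 8.0, "bob": 2.5})
print(ledger)  # {'alice': 80.0, 'bob': 25.0}
```

A real system would of course need to verify that the donated work was actually performed, which is the hard part of any volunteer-computing scheme.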


It’s also envisaged that people will have relationships, get married, and even enjoy virtual reality sex in this artificial landscape, mirroring the predictions of futurist Ray Kurzweil, whose 1999 book The Age of Spiritual Machines features a character called Molly who ditches her husband in favor of an artificially intelligent computer with which she merges and then electronically copulates.


This brings up an interesting moral conundrum – does having sex with someone else’s avatar in virtual reality constitute cheating?


The idea of creating intricate artificially generated environments brings us back to a mind-boggling question that we’ve asked before.


If in 2013 we’re now starting to talk about using computers to create incredibly complex and sophisticated virtual reality worlds in which humans interact with each other, how do we know that our own world is not merely an even more high-tech virtual reality simulation created by our future selves?

