
Showing posts with label Cybernetics. Show all posts

Tuesday, 14 May 2019

Moral Wisdom in the Age of Artificial Intelligence: Cybernetics Pioneer Norbert Wiener’s Prophetic Admonition About Technology and Ethics

Web Investigator KK.org
Maria Popova

Norbert Wiener: “The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.”

“Intelligence supposes goodwill,” Simone de Beauvoir wrote in the middle of the twentieth century. In the decades since, as we have entered a new era of technology risen from our minds yet not always consonant with our values, this question of goodwill has faded dangerously from the set of considerations around artificial intelligence and the alarming cult of increasingly advanced algorithms, shiny with technical triumph but dull with moral insensibility.

In De Beauvoir’s day, long before the birth of the Internet and the golden age of algorithms, the visionary mathematician, philosopher, and cybernetics pioneer Norbert Wiener (November 26, 1894–March 18, 1964) addressed these questions with astounding prescience in his 1954 book The Human Use of Human Beings, the ideas in which influenced the digital pioneers who shaped our present technological reality and have recently been rediscovered by a new generation of thinkers eager to reinstate the neglected moral dimension into the conversation about artificial intelligence and the future of technology. 

Read more

Thursday, 25 April 2019

Cornell scientists create ‘living’ machines that eat, grow, and evolve

The Next Web

The field of robotics is going through a renaissance thanks to advances in machine learning and sensor technology. Each generation of robot is engineered with greater mechanical complexity and smarter operating software than the last. But what if, instead of painstakingly designing and engineering a robot, you could just tear open a packet of primordial soup, toss it in the microwave on high for two minutes, and then grow your own ‘lifelike’ robot?

If you’re a Cornell research team, you’d grow a bunch and make them race.

Scientists from Cornell University have successfully constructed DNA-based machines with incredibly life-like capabilities. These human-engineered organic machines are capable of locomotion, consuming resources for energy, growing and decaying, and evolving. Eventually they die.

That sure sounds a lot like life, but Dan Luo, professor of biological and environmental engineering in the College of Agriculture and Life Sciences at Cornell, who worked on the research, says otherwise. He told the Cornell Chronicle:
We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before.
Just how lifelike? 

Read more

Thursday, 8 December 2016

The Rise of the Machines: Millions Of American Jobs Will Be Wiped Out In The Next Five Years

Mac Slavo
shtfplan.com

There is a paradigm shift coming and it is about to rewrite everything we know about economics, human labor, and government dependence.

Earlier this week Amazon launched its first Amazon Go store, which allows a customer to walk in, grab the items they want, and simply walk out. Everything is tracked utilizing RFID chips, so the second you step out of the store Amazon knows exactly what you’ve purchased and automatically charges your account:
Amazon Go is a system that marries physical stores with advanced algorithms and sensors to eliminate the need for a typical store checkout. Instead of packing all the things you need into a basket or cart and then dragging it through a tedious checkout process, you just grab whatever you need and walk out of the store.

It sounds like a shaky concept at first, but only until you see just how advanced the technology really is. When you first walk into the store, you use your smartphone to open your virtual shopping cart. As you make your way around the store, a vast system of sensors tracks where you are, what you pick up, and what you take with you. The system even knows if you pick something up and then put it back, and will only charge you for the things you actually intend to buy.
Amazon’s latest move is simply the next evolution designed to make human labor obsolete.
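The “virtual cart” behaviour the quoted passage describes (a cart opens when you walk in, items are added on pick-up, removed on put-back, and only what you leave with is charged) can be sketched in a few lines. This is purely an illustration of the concept, with invented item names and prices; Amazon has not published how its system actually works.

```python
# Minimal sketch of the "virtual cart" idea -- an illustration of the
# concept only, not Amazon's actual implementation.
from collections import Counter

class VirtualCart:
    def __init__(self, shopper_id):
        self.shopper_id = shopper_id
        self.items = Counter()        # item -> quantity currently held

    def pick_up(self, item):
        self.items[item] += 1         # sensors saw the shopper take it

    def put_back(self, item):
        if self.items[item] > 0:      # only remove something actually held
            self.items[item] -= 1

    def check_out(self, prices):
        """Called when the shopper walks out: charge only what they kept."""
        return sum(prices[item] * qty for item, qty in self.items.items())

cart = VirtualCart("shopper-42")
cart.pick_up("milk")
cart.pick_up("bread")
cart.put_back("bread")                # changed their mind -- never charged
print(cart.check_out({"milk": 1.99, "bread": 2.49}))   # 1.99
```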

As Mike Shedlock recently pointed out at Mish Talk, the transition to automated systems like Amazon Go, as well as technologies like self-driving cars and long-haul trucks, has been fast-tracked.
We’re no longer talking decades, but rather, a few years before we start to see the direct effects on the labor market:

Read more

-----------------------------


Comment: For more on this subject and the wider implications of technology in psychopaths' hands, read my series:

Technocracy I

See more HERE.



Friday, 8 July 2016

Robots to replace soldiers in future, says Russian military's tech chief

RT

Future warfare will see sophisticated combat robots fighting on land, in the air, at sea and in outer space, the head of Russia's military hi-tech body has said, adding that the days of conventional soldiers on the battlefield are numbered.  

"I see a greater robotization [of war], in fact, future warfare will involve operators and machines, not soldiers shooting at each other on the battlefield," Lieutenant General Andrey Grigoriev, head of the Advanced Research Foundation (ARF) - viewed as Russia's analogue of DARPA - told RIA Novosti in an interview on Wednesday.

He noted that future warfare will be determined by unmanned combat systems: "It would be powerful robot units fighting on land, in the air, at sea as well as underwater and in outer space."

"They would be integrated into large comprehensive reconnaissance-strike systems," Grigoriev added.

"The soldier would gradually turn into an operator and be removed from the battlefield," he stressed. 


 Read more (with videos)

Wednesday, 25 May 2016

Cyborgs 2030: The Military’s Vision of Remote Controlled Soldiers

C.K. Golden
Disinfo

(A Call to Actions)  The following is a portion of an article written by Bobby Vaughn Jr. about the US Military’s plans to develop “cyborg humans” for use in warfare.  To view the entire article please click here.

“Be All You Can Be” – the motto of Earth’s most aggressive and persistently violent team of death adders: The United States ARMY.

“The ARMY states this in a catchy way- convincing many that joining is the best you can do for yourself by improving yourself, upgrading yourself. Being all that you can be once you have raped sacred lands of all human dignity and enforced a terror state amongst the children and women?  Is that the best the United States Citizen can be? Of course not, the military wants more & more! More, of course, in a techno-integro sense.  What they want are cyborgs: Remotely-controlled man/machine trans-humans that have stripped away all dignity and will to the hands of war.”
 


Department of Defense Budget report for Fiscal Year 2015. Nanotechnology is the future of warfare. Take the boots off the ground, put them in your living room. Video games of the future might be the means by which cyborg soldiers are controlled; perhaps they already are.


Saturday, 19 March 2016

'Body Hacking' Movement Rises Ahead Of Moral Answers


NPR

A curious crowd lingered around Amal Graafstra as he carefully unpacked a pair of gloves, a small sterile blanket and a huge needle. A long line of people were waiting to get tiny computer chips implanted into their hands.

Graafstra had set up shop in a booth in the middle of an exhibit hall at the Austin Convention Center in Texas' capital, where he gathered last month with several hundred others who call themselves "body hackers" — people who push the boundaries of implantable technology to improve the human body.

The movement evokes visceral reactions, brings up safety and ethical concerns and quickly veers into sci-fi questions about the line between human and cyborg.

Graafstra is a pioneer in the space. Using his own body to experiment, he designed bio-safe magnets and the microchip he was about to implant into the hand of A.J. Butt, who was sporting a tall, blue mohawk.

The implantable RFID chips hold encrypted information, and their unique ID numbers can be used to open doors or unlock the owner's smartphone, which is what Butt wanted to do.
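As a purely hypothetical sketch of how such a chip's unique ID might be used to gate a lock: the reader checks the scanned ID against an allow-list. The UID, allow-list and logic below are invented for illustration and are not Graafstra's actual product or protocol.

```python
# Hypothetical door controller: unlock only if the scanned chip's unique
# ID is on an allow-list. Illustration only; the UID value is made up.
AUTHORIZED_UIDS = {"04:a3:1b:92:7f:80:01"}

def handle_scan(uid: str) -> bool:
    """Return True (unlock) if the scanned chip is authorised."""
    if uid in AUTHORIZED_UIDS:
        print("unlocking door for", uid)
        return True
    print("access denied for", uid)
    return False

handle_scan("04:a3:1b:92:7f:80:01")   # unlock
handle_scan("00:00:00:00:00:00:00")   # denied
```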

Butt took a deep breath. The needle plunged into his skin at the base of his thumb, and a chip bigger than a grain of rice slipped just below the surface.

Across the way, Sasha Rose, who was working a meditation booth at the convention, watched the people line up to be "chipped."

She shook her head: This was the craziest thing she had seen. She wondered about Graafstra's credentials. She thought this was a medical procedure, so should he be performing it? Did his clients know the potential consequences of carrying personal information on a device inside their skin?

"More than the crazy concept, it's actually people's willingness to accept it. That's why it's crazy to me," Rose said. "People are just willing to just line up and go, 'Yeah, stick that in me.' "

Read more

Saturday, 10 May 2014

'Killer robots' and their use to be debated at United Nations

The Independent 

Killer robots and their use will be debated during a meeting of experts at the United Nations in Geneva, amid fears that once created they could pose a “threat to humanity”.

Prof Ronald Arkin and Prof Noel Sharkey will debate the need for so-called killer robots during the UN Convention on Certain Conventional Weapons (CCW), marking the first time the issue of killer robots has been discussed within the CCW.

Killer robots are autonomous machines able to identify and kill targets without human input.

Fully autonomous weapons have not yet been developed, but technological advances are bringing them closer to existence.

Prof Sharkey, a member and co-founder of the Campaign to Stop Killer Robots and chairman of the International Committee for Robot Arms Control, spoke ahead of the conference and warned that autonomous weapons systems cannot be guaranteed to "predictably comply with international law."

He told the BBC: "Nations aren't talking to each other about this, which poses a big risk to humanity."
Read more

Wednesday, 30 April 2014

5 Reasons Why Google Wants to Gain Control of Your Car

Susanne Posel

Google is very proud of their self-driving cars.

So much so that their self-driving car (SDC) project has “entered a new stage” with the development of new technology that can:

• Drive an estimated 700,000 “accident-free” miles
• Utilize lasers, radar and cameras to analyze urban driving conditions
• “Read” road signs using artificial intelligence
• Navigate obstacles in roadways
• Handle lane merges and bad weather conditions
• Draw on detailed mapping of the country

Chris Urmson, SDC project manager for Google, said: “We’re growing more optimistic that we’re heading toward an achievable goal — a vehicle that operates fully without human intervention.”

Urmson is part of Google X Lab (GXL), which is a technology-based initiative to bring advanced robotics to the public within the next 3 years.

It was explained that, with “jaywalking pedestrians [and] cars lurching out of hidden driveways, double-parked delivery trucks blocking your lane and your view” in the megacities of the future, the goal is to free residents from the “congestion from cars circling for parking and have fewer intersections made dangerous by distracted drivers. That’s why over the last year we’ve shifted the focus of the Google self-driving car project onto mastering city street driving.”

First on GXL’s list is to get SDCs on the road to replace human drivers.

Read more


Saturday, 26 April 2014

Mating strategies: Robocopulation

The Economist

“HOW do robots have sex?” sounds like the set-up line for a bad joke. Yet for Stefan Elfwing, a researcher in the Neural Computation Unit of Japan’s Okinawa Institute of Science and Technology (OIST), it is at the heart of discovering how and why multiple (or polymorphic) mating strategies evolve within the same population of a species. Because observing any species over hundreds of generations is impractical, Dr Elfwing and other scientists are increasingly using a combination of robots and computer simulation to model evolution. And the answer to that opening question? By swapping software “genotypes” via infrared communications, ideally when facing each other 30cm apart. Not exactly a salty punchline.

Charles Darwin was intrigued by polymorphism in general and it still fascinates evolutionary biologists. The idea that more than one mating strategy can coexist in the same population of a species seems to contradict natural selection. This predicts that the optimum phenotype (any trait caused by a mix of genetic and environmental factors) will cause less successful phenotypes to become extinct.

Yet in nature there are many examples of polymorphic mating strategies within single populations of the same species, resulting in phenomena such as persistent colour and size variation within that population. Male tree lizards, for instance, use three different mating strategies correlated with throat colour and body size, and devotees of each manage to procreate.

Simulations alone can unintentionally overlook constraints found in the physical world, such as how far a critter looking for a mate can see. So the OIST team based their simulations on the actual behaviour of small, custom-made “cyber-rodent” robots (illustrated above). This established their physical limitations, such as how they must align with each other to mate and the extent of their limited field of view.
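As a toy illustration of that kind of constraint-aware simulation, the sketch below only lets two agents exchange genotype material when they are within 30 cm and roughly facing one another. The 30 cm range comes from the article; the field-of-view tolerance, genome length and crossover rule are assumptions for illustration, not OIST's actual model.

```python
# Toy evolutionary step: agents "mate" by swapping software genotypes
# only when physical constraints (distance, mutual facing) are met.
import math, random

MATING_RANGE = 0.30                    # metres, per the article's 30 cm
FACING_TOLERANCE = math.radians(30)    # assumed field-of-view limit

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 2), random.uniform(0, 2)
        self.heading = random.uniform(-math.pi, math.pi)
        self.genotype = [random.random() for _ in range(8)]

def facing(a, b):
    """True if agent a is pointed roughly at agent b."""
    angle_to_b = math.atan2(b.y - a.y, b.x - a.x)
    diff = (angle_to_b - a.heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) < FACING_TOLERANCE

def try_mate(a, b):
    """Return an offspring genotype if the physical constraints are met."""
    close = math.hypot(a.x - b.x, a.y - b.y) <= MATING_RANGE
    if close and facing(a, b) and facing(b, a):
        # uniform crossover: each gene comes from one parent at random
        return [random.choice(pair) for pair in zip(a.genotype, b.genotype)]
    return None

population = [Agent() for _ in range(20)]
offspring = [g for a in population for b in population
             if a is not b and (g := try_mate(a, b)) is not None]
print(len(offspring), "successful matings this generation")
```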

Read more

Friday, 14 March 2014

The Internet of Bodies Is Coming, and You Could Get Hacked


Motherboard/Vice

The 25th anniversary of the World Wide Web came and went yesterday, along with the requisite retrospectives and predictions for the next quarter-century of innovation. But few reports forecasting the future of the web pointed out that in the future, there may not be a web. At least not as we know it now, as a place you “go” or “visit,” because the next generation of the internet could be people themselves.

The "Internet of X" is a buzzphrase we're starting to hear a lot: Beyond the much-discussed Internet of Things, there's now the Internet of Pets, the Internet of Plants, and, most interestingly, the nascent Internet of Bodies.

In other words, 25 years from now gadgets like smartphones, smartwatches, augmented glasses, virtual reality headgear, and the myriad other devices merging humans and the internet may be laughably antiquated. Computers will become so tiny they can be embedded under the skin, implanted inside the body, or integrated into a contact lens and stuck on top of your eyeball.

Naturally, those machines will be wifi-enabled, so it’s feasible that anything you can do with your phone now you could do with your gaze or gestures in a few decades. And maybe even more, as augmented reality and virtual reality come out of infancy and proliferate beyond awkward and cumbersome devices like Google Glass or the Oculus Rift.

Imagine an implantable sensor in your arm that can display a person's contact information when you shake their hand, or an augmented contact lens that projects a map in front of your eyes as you walk around. It's not that crazy; smart and augmented contacts are already in development, people are getting digital tattoos, biohackers are sticking computer chips under their skin, and there are several startups selling technology to annotate the world.

Read more


Sunday, 23 February 2014

Robots will be smarter than us all by 2029, warns AI expert Ray Kurzweil



The Independent 

One of the world’s leading futurologists and artificial intelligence (AI) developers, 66-year-old Kurzweil has previous form in making accurate predictions about the way technology is heading.

In 1990 he said a computer would be capable of beating a chess champion by 1998 – a feat managed by IBM’s Deep Blue, against Garry Kasparov, in 1997.

When the internet was still a tiny network used by a small collection of academics, Kurzweil anticipated it would soon make it possible to link up the whole world.

Now, Kurzweil says that within 15 years robots will have overtaken us, having fulfilled the so-called Turing test, whereby computers can exhibit intelligent behaviour equal to that of a human.

Speaking in an interview with the Observer, he said that his prediction was foreshadowed by recent high-profile AI developments, and Hollywood films like Her, starring Joaquin Phoenix.

“Today, I’m pretty much at the median of what AI experts think and the public is kind of with them,” he said.

“The public has seen things like Siri (Apple’s voice recognition software), where you talk to a computer. They’ve seen the Google self-driving cars. My views are not radical any more.”

Though credited with inventing the world’s first flat-bed scanners and text-to-speech synthesisers, Kurzweil is perhaps most famous for his theory of “the singularity” – a point in the future where humans and machines will apparently “converge”.

His decision to work for Google came after the company acquired a host of other AI developers, from the BigDog creators Boston Dynamics to the British startup DeepMind.

And the search engine giant’s co-founder Larry Page was able to convince Kurzweil to take on “his first actual job” by promising him “Google-scale resources”.

With the company’s unprecedented billions to spend, and some of humanity’s greatest minds already on board, it is clearly only a matter of time before we reach that point when robots can joke, learn and yes, even flirt.


Tuesday, 4 February 2014

What does Google want with DeepMind? Here are three clues


Orwell Was Right

Google's seemingly inexorable drive to control every aspect of our lives continues unabated. The Conversation takes a look at their move into the field of artificial intelligence.

All eyes turned to London this week, as Google announced its latest acquisition in the form of DeepMind, a company that specialises in artificial intelligence technologies. The £400m price tag paid by Google and the reported battle with Facebook to win the company over indicate that this is a firm well worth backing.
Although solid information is thin on the ground, you can get an idea of what the purchase might be leading to, if you know where to look.
Clue 1: what does Google already know?
Google has always been active in artificial intelligence and relies on the process for many of its projects. Just consider the “driver” behind its driverless cars, the speech recognition system in Google Glass, or the way its search engine predicts what we might search for after just a couple of keystrokes. Even the page-rank algorithm that started it all falls under the banner of AI.
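As a reminder of how compact the core of that early algorithm is, here is a toy power-iteration sketch of PageRank on an invented four-page link graph. It is illustrative only; Google's production ranking is of course far more elaborate.

```python
# Toy PageRank by power iteration on a made-up four-page web.
DAMPING = 0.85
links = {            # page -> pages it links to (invented graph)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):                                   # iterate until ranks settle
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = DAMPING * rank[page] / len(outgoing)  # spread rank over out-links
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

print({p: round(r, 3) for p, r in rank.items()})
```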
Acquiring a company such as DeepMind therefore seems like a natural step. The big question is whether Google is motivated by a desire to help develop technologies we already know about or whether it is moving into the development of new technologies.
Given its track record, I’m betting on the latter. Google has the money and the drive to tackle the biggest questions in science, and developing computers that think like humans has, for a long time, been one of the biggest of them all.
Clue 2: what’s in the research?
The headlines this week have described DeepMind as a “secretive start-up”, but clues about what it gets up to at its London base can be gleaned from some of the research publications produced by the company’s co-founder, Demis Hassabis.
Hassabis' three most recent publications all focus on the brain activity of human participants as they undergo particular tasks. He has looked into how we take advantage of our habitat, how we identify and predict the behaviour of other people and how we remember the past and imagine the future.
As humans, we collect information through sensory input and process it many times over using abstraction. We extract features and categorise objects to focus our attention on the information that is relevant to us. When we enter a room we quickly build up a mental image of the room, interpret the objects in the room, and use this information to assess the situation in front of us.
The people at Google have, until now, generally focused on the lower-level stages of this information processing. They have developed systems to look for features and concepts in online photos and street scenes to provide users with relevant content, systems to translate one language to another to enable us to communicate, and speech recognition systems, making voice control on your phone or device a reality.
The processes Hassabis investigates require these types of information processing as prerequisites. Only once you have identified the relevant features in a scene and categorised objects in your habitat can you begin to take advantage of your habitat. Only once you have identified the features of someone’s face and recognised them as a someone you know can you start to predict their behaviour. And only once you have built up vivid images of the past can you extrapolate a future.
Clue 3: what else is on the shopping list?
Other recent acquisitions by Google provide further pieces to the puzzle. It has recently appointed futurist Ray Kurzweil, who believes in search engines with human intelligence and being able to upload our minds onto computers, as its director of engineering. And the purchase of Boston Dynamics, a company developing groundbreaking robotics technology, gives a hint of its ambition.
Google is also getting into smart homes in the hope of more deeply interweaving its technologies into our everyday lives. DeepMind could provide the know-how to enable such systems to exhibit a level of intelligence never seen before in computers.
Combining the machinery Google already uses for processing sensory input with the ideas under investigation at DeepMind about how the brain uses this sensory input to complete high-level tasks is an exciting prospect. It has the potential to produce the closest thing yet to a computer with human qualities.
Building computers that think like humans has been the goal of AI ever since the time of Alan Turing. Progress has been slow, with science fiction often creating false hope in people’s minds. But these past two decades have seen unimaginable leaps in information processing and our understanding of the brain. Now that one of the most powerful companies in the world has identified where it wants to go next, we can expect big things. Just as physics had its heyday in the 20th century, this century is truly the golden age of AI.

Sunday, 26 January 2014

Robots to Breed with Each Other and Humans by 2045

Nicholas West
Activist Post

Cybernetics experts say it's possible for robots to breed with each other, and with humans, by 2045.


The magical transhumanist date of 2045 holds many predictions for how man will attain his final merger with computer systems and usher in an age of "spiritual" machines. Ray Kurzweil has issued a bevy of likely scenarios in his book The Singularity is Near, and continues to suggest that many of those predictions could arrive much sooner. Others have pointed strictly to the economic impact and have marked 2045 as the date when humans could be completely outsourced to robotic workers.

Now cybernetic experts are pointing to the trends in robotics, artificial intelligence, and 3D printing to suggest that the "merger" could go beyond the establishment of an era of cyborgs and into a very literal one: sex with robots.


There has been an ongoing move to create humanoid robots that can do more than simply mimic human ability and behavior. Attention is being paid to the social aspect as well. But what is now being proposed has even more serious ethical and existential implications, and very well could bring about the concept of a true "master race."


Read more
 

Sunday, 22 December 2013

New robotic 'muscle' thousand times stronger

Zee News
Dec. 20, 2013

Scientists have developed a new robotic 'muscle', a thousand times more powerful than a human muscle, which can catapult objects 50 times heavier than itself - faster than the blink of an eye.

Researchers with the Lawrence Berkeley National Laboratory in US demonstrated a micro-sized robotic torsional muscle/motor made from vanadium dioxide that is able to catapult very heavy objects over a distance five times its length within 60 milliseconds.

"We've created a micro-bimorph dual coil that functions as a powerful torsional muscle, driven thermally or electro-thermally by the phase transition of vanadium dioxide," said study leader, Junqiao Wu.

"Using a simple design and inorganic materials, we achieve superior performance in power density and speed over the motors and actuators now used in integrated micro-systems," Wu said.

What makes vanadium dioxide highly coveted by the electronics industry is that it is one of the few known materials that is an insulator at low temperatures but abruptly becomes a conductor at 67 degrees Celsius.

This temperature-driven phase transition from insulator-to-metal is expected to one day yield faster, more energy efficient electronic and optical devices.

However, vanadium dioxide crystals also undergo a temperature-driven structural phase transition whereby when warmed they rapidly contract along one dimension while expanding along the other two.

This makes vanadium dioxide an ideal candidate material for creating miniaturised, multi-functional motors and artificial muscles.

Wu and his colleagues fabricated their micro-muscle on a silicon substrate from a long "V-shaped" bimorph ribbon comprised of chromium and vanadium dioxide.

When the V-shaped ribbon is released from the substrate it forms a helix consisting of a dual coil that is connected at either end to chromium electrode pads.

Heating the dual coil actuates it, turning it into either a micro-catapult, in which an object held in the coil is hurled when the coil is actuated, or a proximity sensor, in which the remote sensing of an object causes a "micro-explosion," a rapid change in the micro-muscle's resistance and shape that pushes the object away.

Friday, 20 December 2013

There's no resting place for the human race

New Scientist
19 December 2013

This year, many fixed points in the human condition have begun to look distinctly movable
 
THE Chinese robot now roving the moon's surface – Yutu, the Jade Rabbit – is humanity's first emissary there for nearly 40 years. It is named after the companion of the moon goddess Chang'e. More such explorers will follow over the next decade, but few, if any, will bear names from the Greek and Russian traditions associated with the first space race.

Rather, they are beginning a new tradition. As one scientist puts it, the moon is "the eighth continent", a chunk torn off Earth aeons ago. That's not just a hypothesis for our satellite's origins: the latest space racers want to build a lunar staging post for exploration, and settlement, of the solar system (see "China lands on moon, kicks off next lunar space race").

This is not a new dream, but the urge to leave Earth is getting easier to envisage in a world of limited resources and ambitious billionaires. But do we have the right to spread to other worlds before putting our own in order? How will humanity define and locate itself when it lives on more than one planet?

This is just one of many developments challenging how we see our place in the universe. It is customary at this time of year to reflect on the past 12 months. In that spirit, let's consider a couple more advances and see if we can get a sense of where we might soon find ourselves.

Consider humanity itself. Sequencing the genome of the Denisovans, a mysterious group of prehistoric hominins, suggests that interbreeding between Neanderthals, Denisovans and humans seems to have been common, rather than the rarity previously assumed – which further drives home the idea that we are the sole survivors of a precarious evolutionary process, rather than the end of a neat line of descent. That deals yet another blow to the hubristic assumption that humanity's place is at the apex of the natural world.

Intriguingly, here at our end of the human timeline, we are beginning to engineer our own genome. Next year is likely to see the conception of a child with DNA from three parents (see "2014 Preview: Three-parent babies close to conception"). The modification is life-changing – it will prevent certain inherited diseases – but minor. Nonetheless, the introduction of genetic material that could be passed down the generations represents a watershed. How far should we go in compensating for shortcomings in our genetic inheritance? What conditions would merit such tinkering?

We need not journey into space, or history, to find ourselves rethinking our place in the world. 

Smartphones provide us with unprecedented context about our surroundings, allowing us to seek out everything from a cold drink to a hot date. They also let us detach ourselves from those around us, as we look at screens rather than our neighbours' faces.

So the line between physical here and online there is becoming increasingly blurred. Devices such as Glass, Google's augmented reality spectacles, and advances in robotics and remote control will only accelerate that trend (see "Mind-reading light helps you stay in the zone" and "The mystery behind Google's sudden robotics splurge"). Our experiences of the world, and our sense of our place in it, may soon be very different – not just from how we perceive it now, but also from how our neighbours do. Will sharing our thoughts and experiences bring us together? Or will we end up living in worlds of our own?

These examples are just the tip of the iceberg. Many fields, from climatology to neuroscience, raise questions about who we are and where we fit in. But that's nothing new.

Science and technology have always posed challenges, and we have always assimilated them – eventually. That drive has got us where we are today. Perhaps that's the only real constant here: to be human is to continually seek out a new place in the world.

This article appeared in print under the headline "Perpetual motion"


Friday, 6 December 2013

This crime-predicting robot aims to patrol our streets by 2015

 


 

CNet

California-based Knightscope has designed a 5-foot-tall, 300-pound automaton called the K5 to combat crime and provide for public safety. Oh yeah, and it'll work for just $6.25 an hour.

A scene in the 2004 film “I, Robot” involves an army of rogue NS-5 humanoids establishing a curfew and imprisoning the citizens of Chicago, circa 2035, inside their homes. That’s not how Knightscope envisions the coming day of deputized bots.

In its far less frightful future, friendly R2-D2 lookalikes patrol our streets, school hallways, and company campuses to keep us safe and put real-time data to good use. Instead of the Asimov-inspired NS-5, Knightscope, a Silicon Valley-based robotics company, is developing the K5.

Officially dubbed the K5 Autonomous Data Machine, the 300-pound, 5-foot-tall mobile robot will be equipped with nighttime video cameras, thermal imaging capabilities, and license plate recognition skills. It will be able to function autonomously for select operations, but more significantly, its software will provide crime prediction that’s reminiscent, the company claims, of the “precog” plot point of “Minority Report.”

“It can see, hear, feel, and smell and it will roam around autonomously 24/7,” said CEO William Santana Li, a former Ford Motor executive, in an interview with CNET.

At the moment, the K5 is only a prototype, and Knightscope next year will launch a beta program with select partners. But the company is shooting to have the K5 fully deployed by 2015 on a machine-as-a-service business model, meaning clients would pay by the hour for a monthly bill, based on 40-hour weeks, of $1,000. The hourly rate of $6.25 means the cost of the K5 would be competitive with the wages of many a low-wage human security guard.

Servicing and monitoring of the bots will depend on client needs, Li said, with either Knightscope or the customer employing someone to manage the bots full-time.

Crime prediction is one of the more eye-popping features of the K5, but the bot is also packed to the gills with cutting-edge surveillance technology. It has LIDAR mapping — a technique using lasers to analyze reflected light — to aid its autonomous movement. “It takes in data from a 3D real-time map that it creates and combines that with differential GPS and some proximity sensors and does a probabilistic analysis to figure out exactly where it should be going on its own,” Li explained.
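A minimal sketch of the kind of "probabilistic analysis" Li describes is inverse-variance fusion of two independent position estimates, say a tight LIDAR-map fix and a looser differential-GPS fix. The one-dimensional set-up and the numbers below are illustrative assumptions, not Knightscope's actual pipeline.

```python
# Fuse two noisy position estimates by weighting each with the inverse
# of its variance. Values are invented for illustration.
def fuse(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a, w_b = 1 / var_a, 1 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    return fused, 1 / (w_a + w_b)

lidar_fix, lidar_var = 12.40, 0.05 ** 2   # metres along a corridor (tight)
gps_fix, gps_var = 12.90, 0.50 ** 2       # differential GPS (looser)

position, variance = fuse(lidar_fix, lidar_var, gps_fix, gps_var)
print(f"fused position: {position:.2f} m, variance {variance:.4f}")
```

The tighter estimate dominates the result, which is why a good local map matters more than GPS for exact positioning.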

It also has behavioral analysis capabilities and enough camera, audio, and other sensor technology to pump out 90 terabytes of data a year per unit. Down the line, the K5 will be equipped with facial recognition and even the ability to sniff out emanations from chemical and biological weapons, as well as airborne pathogens. It will be able to travel up to 18 mph, and later models will include the ability to maneuver curbs and other terrain.
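A quick check of the arithmetic behind the figures quoted in this piece; the four-week billing month is an assumption, since the article only says the $1,000 monthly bill is based on 40-hour weeks.

```python
# Sanity-check the quoted $6.25/hour rate and the 90 TB/year data figure.
monthly_fee = 1000              # dollars per unit per month
hours_billed = 40 * 4           # 40-hour weeks, assumed 4 weeks per month
print(monthly_fee / hours_billed)          # 6.25 dollars per hour

data_per_year_bytes = 90e12     # 90 terabytes
seconds_per_year = 365 * 24 * 3600
print(round(data_per_year_bytes / seconds_per_year / 1e6, 2))  # ~2.85 MB per second
```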

Read more

Sunday, 17 November 2013

U.S. military may have 10 robots per soldier by 2023

Computerworld

American soldiers patrolling dangerous streets will soon be accompanied by autonomous robots programmed to scan the area with thermal imaging and send live images back to the command center.
 
Likewise, squads of infantrymen hiking through mountains will be helped by a wagon train of robots carrying extra water, ammo and protective gear.

Such scenarios are but a few years down the road, according to robotic researchers and U.S. military officials.

"Robots allow [soldiers] to be more lethal and engaged in their surroundings," said Lt. Col. Willie Smith, chief of Unmanned Ground Vehicles at Fort Benning, Ga. "I think there's more work to be done but I'm expecting we'll get there.

Army leaders last month evaluated autonomous robots that move through water, sand and up rocky hills. Robots shown during a week-long demonstration at Fort Benning were designed to carry 1,000 pounds of gear, follow foot soldiers on long treks, scan for land mines and carry wounded soldiers to safety. 

5D Robotics, Northrop Grumman Corp., QinetiQ, HDT Robotics and other companies showed off autonomous robots during the event.

Part of the program focused on weaponized robots while other demonstrations showed how robots can help and protect U.S. soldiers in the field.

"Ten years from now, there will probably be one soldier for every 10 robots," Scott Hartley, a senior research engineer and co-founder of 5D Robotics, told Computerworld. "Each soldier could have one or five robots flanking him, looking for enemies, scanning for landmines. Robots can save lives."

Read more



Saturday, 16 November 2013

People-safe robot is first non-human to close NASDAQ

Comment: Machines meeting those who think like machines. Cause for celebration? If you are non-human - flesh or silicon - very probably.

------------

New Scientist

Hillary Clinton, Richard Branson and Michael Jackson have all done it. Today a multi-jointed, people-friendly robot arm, called UR5, became the first non-human to ring the bell at the NASDAQ stock exchange, a twice daily act that marks the opening and closing of the market in Times Square, New York.

NASDAQ offers its bell-ringing up to companies, heads of state and community leaders as a means to make announcements or celebrate important milestones – Mark Zuckerberg rang the bell the day Facebook went public, for example.

UR5 was chosen to mark the launch of the first robot-specific stock market index, ROBO-STOX, which allows investors to track the value of the robotics industry as a whole.

"It is a very exciting step for us, and an exciting step for the robot industry as a whole," says Esben Østergaard of Universal Robots in Odense, Denmark, which makes UR5. "The world needs robots, and we are very happy to be chosen to represent this important event in the history of robot technology."



Thursday, 5 January 2012

Good-bye, wheelchair, hello exoskeleton



kurzweilai.net

Early this year Ekso Bionics (formerly known as Berkeley Bionics) will begin selling its Ekso exoskeleton walking suit to rehab clinics in the United States and Europe.

It will allow patients with spinal cord injuries to train with the device under a doctor’s supervision. By the middle of 2012, the company plans to have a model for at-home physical therapy.

Your job is to balance your upper body, shifting your weight as you plant a walking stick on the right; your physical therapist will then use a remote control to signal the left leg to step forward. In a later model, the walking sticks will have motion sensors that communicate with the legs, allowing the user to take complete control.
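A toy sketch of that trigger logic is below; the event names are invented for illustration, and the Ekso's real control interface is not described in the article.

```python
# Map a trigger event (therapist's remote in the clinic model, or a
# sensed walking-stick plant in the later model) to the leg that steps.
def next_step(event):
    triggers = {
        "remote:left": "left leg steps",
        "remote:right": "right leg steps",
        "stick_planted:right": "left leg steps",   # plant right stick -> left leg swings
        "stick_planted:left": "right leg steps",
    }
    return triggers.get(event, "hold position")

for event in ["stick_planted:right", "remote:left", "unknown"]:
    print(event, "->", next_step(event))
```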




