
Showing posts with label Artificial Intelligence. Show all posts

Tuesday, 22 May 2018

New DARPA Program Plans To Patrol Cities With AI Drones

Zero Hedge

On May 10, the Defense Advanced Research Projects Agency (DARPA) unveiled the Urban Reconnaissance through Supervised Autonomy (URSA) program, which addresses the issues of reconnaissance, surveillance, and target acquisition within urban environments.

The primary objective of the URSA program is to evaluate the feasibility and effectiveness of blending unmanned aerial systems, sensor technologies, and advanced machine learning algorithms to “enable improved techniques for rapidly discriminating hostile intent and filtering out threats in complex urban environments,” according to the FedBizOpps solicitation.

In other words, the Pentagon is developing a program of high-tech cameras mounted on drones and other robots to monitor cities, using machine learning to identify people and distinguish civilians from hostile actors.

DARPA provides a simple scenario of what a URSA engagement would look like: 

“A static sensor located near an overseas military installation detects an individual moving across an urban intersection and towards the installation outside of normal pedestrian pathways. An unmanned aerial system (UAS) equipped with a loudspeaker delivers a warning message. The person is then observed running into a neighboring building. Later, URSA detects an individual emerging from a different door at the opposite end of the building, but confirms it is the same person and sends a different UAS to investigate.
This second UAS determines that the individual has resumed movement toward a restricted area. It releases a nonlethal flash-bang device at a safe distance to ensure the individual attends to the second message and delivers a sterner warning. This second UAS takes video of the subject and determines that the person’s gait and direction are unchanged even when a third UAS flies directly in front of the person and illuminates him with an eye-safe laser dot. URSA then alerts the human supervisor and provides a summary of these observations, warning actions, and the person’s responses and current location.”
The URSA program is a two-phase, 36-month development effort. The first phase of concept/development will begin in the first quarter of FY19 and continue into the second half of FY20. Phase two will start in 3Q20 and continue through 2Q22.

Read more

Wednesday, 16 May 2018

This DeepMind AI Spontaneously Developed Digital Navigation ‘Neurons’ Like Ours

Comment: Ahh, Google DeepMind - serving our interests by mining our minds for a more efficient society. What could go wrong?

---------------------- 

Singularity Hub

"The research on grid-cells is still very much basic science, but being able to mimic the powerful navigational capabilities of animals could be extremely useful for everything from robots to drones to self-driving cars."


When Google DeepMind researchers trained a neural network to tackle a virtual maze, it spontaneously developed digital equivalents to the specialized neurons called grid cells that mammals use to navigate. Not only did the resulting AI system have superhuman navigation capabilities, the research could provide insight into how our brains work.

Grid cells were the subject of the 2014 Nobel Prize in Physiology or Medicine, alongside other navigation-related neurons. These cells are arranged in a lattice of hexagons, and the brain effectively overlays this pattern onto its environment. Whenever the animal crosses a point in space represented by one of the corners of these hexagons, a neuron fires, allowing the animal to track its movement.

Mammalian brains actually have multiple arrays of these cells. These arrays create overlapping grids of different sizes and orientations that together act like an in-built GPS. The system even works in the dark and independently of the animal’s speed or direction.

Exactly how these cells work, and the full range of their functions, is still something of a mystery. One recently proposed hypothesis suggests they could be used for vector-based navigation—working out the distance and direction to a target “as the crow flies.”

That’s a useful capability because it makes it possible for animals or artificial agents to quickly work out and choose the best route to a particular destination and even find shortcuts.
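The ideas above can be sketched in code. The standard textbook idealization of a grid cell (not DeepMind's learned network, which discovered grid-like units on its own) models the firing rate as the sum of three cosine gratings offset by 60 degrees, whose peaks form a hexagonal lattice; and vector-based navigation, once position is decoded, reduces to a simple subtraction. All parameter names and values below are invented for illustration.

```python
import numpy as np

def grid_cell_rate(x, y, scale=1.0, orientation=0.0, phase=(0.0, 0.0)):
    """Textbook grid-cell model: sum of three cosine gratings 60 degrees apart.

    Peaks of the resulting rate map form a hexagonal lattice whose spacing
    is set by `scale`. This is the classic idealization from the neuroscience
    literature, not the representation learned by DeepMind's network.
    """
    rate = 0.0
    for k in range(3):
        theta = orientation + k * np.pi / 3  # gratings at 0, 60, 120 degrees
        kx, ky = np.cos(theta), np.sin(theta)  # wave vector of this grating
        rate += np.cos((2 * np.pi / scale) * (kx * (x - phase[0]) + ky * (y - phase[1])))
    return (rate + 1.5) / 4.5  # normalize the sum (range [-1.5, 3]) to [0, 1]

# Vector-based navigation: with position decoded from overlapping grids,
# the displacement to a goal is just a subtraction ("as the crow flies").
pos, goal = np.array([1.0, 2.0]), np.array([4.0, 6.0])
vec = goal - pos
distance = np.linalg.norm(vec)                    # straight-line distance
heading = np.degrees(np.arctan2(vec[1], vec[0]))  # bearing toward the goal
```

The rate peaks (value 1.0) wherever all three gratings align, which happens on a hexagonal lattice; multiple such cells at different scales and phases together pin down a unique position.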

So, the researchers at DeepMind decided to see if they could test the idea in silico using neural networks, as they roughly mimic the architecture of the brain.

Read more

Friday, 13 April 2018

Zuckerberg Admits He’s Developing Artificial Intelligence to Censor Content

The Anti Media 

 

This week we were treated to a veritable carnival attraction as Mark Zuckerberg, CEO of one of the largest tech companies in the world, testified before Senate committees about privacy issues related to Facebook’s handling of user data. Besides highlighting the fact that most United States senators — and most people, for that matter — do not understand Facebook’s business model or the user agreement they’ve already consented to while using Facebook, the spectacle made one fact abundantly clear: Zuckerberg intends to use artificial intelligence to manage the censorship of hate speech on his platform.

 

Over the two days of testimony, the plan for using algorithmic AI for potential censorship practices was discussed multiple times under the auspices of containing hate speech, fake news, election interference, discriminatory ads, and terrorist messaging. In fact, AI was mentioned at least 30 times. Zuckerberg claimed Facebook is five to ten years away from a robust AI platform. All four of the other Big 5 tech conglomerates — Google, Amazon, Apple, and Microsoft — are also developing AI, in many cases for the shared purpose of content control.

 

For obvious reasons, this should worry civil liberty activists and anyone concerned about the erosion of First Amendment rights online. The encroaching specter of a corporate-government propaganda alliance is not a conspiracy theory. Barely over a month ago, Facebook, Google, and Twitter testified before Congress to announce the launch of a ‘counterspeech’ campaign in which positive and moderate posts will be targeted at people consuming and producing extremist or radical content.

 

Like the other major social networks, Facebook has already been assailed by accusations of censorship against conservative and alternative news sources. The Electronic Frontier Foundation (EFF) outlined some other examples of the company’s “overzealous censorship” in just the last year: 

 

Read more

Elon Musk Warns: AI could become an ‘immortal’ digital dictator

inhabitat.com

As if the world didn’t have enough dictators to worry about, Elon Musk says that our future authoritarian leaders will be AI. Musk has previously warned about the dangers of artificial intelligence, particularly if control of it is concentrated in the hands of a power-hungry global elite. He suggests that an AI dictator would know everything about us (thanks to being connected to computers across the planet), would be more dangerous to the world than North Korea and would unleash “weapons of terror” that could lead to the next world war. To top it all off, unlike human dictators, an AI dictator would never die.

According to Musk, this dark future awaits us if we don’t regulate AI. “The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world,” Musk said in the new documentary Do You Trust This Computer? “At least when there’s an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you’d have an immortal dictator from which we can never escape.”

Read more

Saturday, 14 May 2016

The Pentagon is building a ‘self-aware’ killer robot army fueled by social media

Nafeez Ahmed

 

Official US defence and NATO documents confirm that autonomous weapon systems will kill targets, including civilians, based on tweets, blogs and Instagram


This exclusive is published by INSURGE INTELLIGENCE, a crowd-funded investigative journalism project for the global commons

 

An unclassified 2016 Department of Defense (DoD) document, the Human Systems Roadmap Review, reveals that the US military plans to create artificially intelligent (AI) autonomous weapon systems, which will use predictive social media analytics to make decisions on lethal force with minimal human involvement.

Despite official insistence that humans will retain a “meaningful” degree of control over autonomous weapon systems, this and other Pentagon documents dated from 2015 to 2016 confirm that US military planners are already developing technologies designed to enable swarms of “self-aware” interconnected robots to design and execute kill operations against robot-selected targets.

More alarmingly, the documents show that the DoD believes that within just fifteen years, it will be feasible for mission planning, target selection and the deployment of lethal force to be delegated entirely to autonomous weapon systems in air, land and sea. The Pentagon expects AI threat assessments for these autonomous operations to be derived from massive data sets including blogs, websites, and multimedia posts on social media platforms like Twitter, Facebook and Instagram.

The raft of Pentagon documentation flatly contradicts Deputy Defense Secretary Robert Work’s denial that the DoD is planning to develop killer robots.

In a widely reported March conversation with Washington Post columnist David Ignatius, Work said that this may change as rival powers work to create such technologies:
“We might be going up against a competitor that is more willing to delegate authority to machines than we are, and as that competition unfolds we will have to make decisions on how we best can compete.”
But, he insisted, “We will not delegate lethal authority to a machine to make a decision,” except for “cyber or electronic warfare.”

He lied.

Official US defence and NATO documents dissected by INSURGE Intelligence reveal that Western governments are already planning to develop autonomous weapons systems with the capacity to make decisions on lethal force — and that such systems, in the future, are even expected to make decisions on acceptable levels of “collateral damage.”

Behind public talks, a secret arms race

 

Efforts to create autonomous robot killers have evolved over the last decade, but have come to a head this year.

A National Defense Industrial Association (NDIA) conference on Ground Robotics Capabilities in March hosted government officials and industry leaders who confirmed that the Pentagon was developing robot teams that would be able to use lethal force without direction from human operators.

In April, government representatives and international NGOs convened at the United Nations in Geneva to discuss the legal and ethical issues surrounding lethal autonomous weapon systems (LAWS).

That month, the UK government launched a parliamentary inquiry into robotics and AI. And earlier in May, the White House Office of Science and Technology Policy announced a series of public workshops on the wide-ranging social and economic implications of AI.

Thursday, 12 May 2016

Artificially Intelligent Lawyer “Ross” Has Been Hired By Its First Official Law Firm

Comment: And before you know it we'll have rows and rows of children plugged in to their own personal "Ross" and "Renee" with no human teachers anywhere to be found. And that's only the tiniest tip of a monumental iceberg of implications.

------------------------------ 

 

The American Lawyer

 

Ross: A Very Smart Artificial Co-worker

 

Law firm Baker & Hostetler has announced that it is employing IBM’s AI Ross in its bankruptcy practice, which at the moment consists of nearly 50 lawyers. According to CEO and co-founder Andrew Arruda, other firms have also signed licenses with Ross and will be making their own announcements shortly.

Ross, “the world’s first artificially intelligent attorney” built on IBM’s cognitive computer Watson, was designed to read and understand language, postulate hypotheses when asked questions, research, and then generate responses (along with references and citations) to back up its conclusions. Ross also learns from experience, gaining speed and knowledge the more you interact with it.

“You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly,” the website says. “In addition, ROSS monitors the law around the clock to notify you of new court decisions that can affect your case.”

Ross also minimizes research time by narrowing results from a thousand down to only the most relevant answers, and presents them in casual, understandable language. It also keeps up to date with developments in the legal system, specifically those that may affect your cases.



Baker & Hostetler chief information officer Bob Craig explains the rationale behind this latest hire: “At BakerHostetler, we believe that emerging technologies like cognitive computing and other forms of machine learning can help enhance the services we deliver to our clients.”

“BakerHostetler has been using ROSS since the first days of its deployment, and we are proud to partner with a true leader in the industry as we continue to develop additional AI legal assistants,” he added.

Monday, 14 December 2015

Meet the Military-Funded AI that Learns as Fast as a Human

defenseone.com

Today, it recognizes handwriting; tomorrow, it may vastly improve the military’s surveillance and targeting efforts. 

A computer program, funded in large part by the U.S. military, has displayed the ability to learn and generate new ideas as quickly and accurately as a human. While the scope of the research was limited to understanding handwritten characters, the breakthrough could have big consequences for the military’s ability to collect, analyze and act on image data, according to the researchers and military scientists. That, in turn, could lead to far more capable drones, far faster intelligence collection, and far swifter targeting through artificial intelligence.

You could be forgiven for being surprised that computers are only now catching up to humans in their ability to learn. Every day, we are reminded that computers can process information of enormous volume at the speed of light, while we are reliant on slow, chemical synaptic connections. But take the simple task of recognizing an object: a face. Facebook’s DeepFace program can recognize faces about as well as a human, but in order to do that, it had to learn from a dataset of more than 4 million images of 4,000 faces. Humans, generally speaking, have the ability to remember a face after just one encounter. We learn after “one shot,” so to speak.

In their paper, “Human-level Concept Learning Through Probabilistic Program Induction,” published today in the journal Science, Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum present a model that they call the Bayesian Program Learning framework. BPL, they write, can classify objects and generate concepts about them using a tiny amount of data — a single instance.

To test it, they showed several people — and BPL — 20 handwritten letters from 10 different alphabets, then asked them to match each letter to the same character written by someone else. BPL scored 97%, about as well as the humans and far better than other algorithms. For comparison, a deep (convolutional) learning model scored about 77%, while a model designed for “one-shot” learning reached 92% — still around twice the error rate of humans and BPL.
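The one-example-per-class setup in that benchmark can be caricatured in a few lines. The sketch below is a toy nearest-neighbor baseline, not BPL itself (which fits generative programs of pen strokes); the feature vectors and class names are invented purely to illustrate classifying from a single stored example per character.

```python
import numpy as np

def one_shot_classify(query, support):
    """Assign `query` the label of the single nearest support example.

    `support` maps each label to exactly one feature vector — the "one shot".
    This is just a minimal baseline for the one-example-per-class setting;
    BPL instead infers a generative stroke program for each character.
    """
    return min(support, key=lambda label: np.linalg.norm(query - support[label]))

# Toy "characters" reduced to 2-D feature vectors, one example per class.
support = {
    "alpha": np.array([0.0, 0.0]),
    "beta":  np.array([1.0, 0.0]),
    "gamma": np.array([0.0, 1.0]),
}

print(one_shot_classify(np.array([0.9, 0.1]), support))  # -> beta
```

The gap the paper reports comes from what happens beyond this baseline: modeling *how* a character is drawn lets BPL generalize from one example nearly as well as people do.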

BPL also passed a visual form of the Turing Test by drawing letters that most humans couldn’t distinguish from a human’s handwriting. (Named after British mathematician Alan Turing, a Turing Test challenges a program’s ability to produce an intellectual product — teletype communication in the most traditional sense — that is indistinguishable from what a human could produce.)

Read more

Wednesday, 2 September 2015

Artificially Intelligent Robot Tells Creator It Will Keep Humans “In a People Zoo”


John Vibes 

 

Arlington, TX — Android Dick is a robot created in the likeness of the science fiction writer, Philip K. Dick. Android Dick is an attempt to create thinking and reasoning artificial intelligence that has human traits like compassion and creativity. The first version of the android was created in 2005 and has been a work in progress ever since.

 

In 2011, the creators of the android appeared on the PBS show Nova, where they interviewed the robot and asked it a series of questions. Some of the answers were impressive. Others are typical of what you would expect from a robot. However, one answer in particular is probably one of the most ominous things ever spoken by artificial intelligence.

During the interview with the creators (embedded below), Android Dick said, “…don’t worry, even if I evolve into terminator I will still be nice to you, I will keep you warm and safe in my people zoo where I can watch you for old time’s sake. [emphasis added].”

The comments came after the creators asked, “Do you think that robots will take over the world?”

When asked about his programming, Android Dick responded by saying “A lot of humans ask me if I can make choices or if everything I do is programmed. The best way I can respond to that is to say that everything, humans, animals and robots, do is programmed to a degree. As technology improves, it is anticipated that I will be able to integrate new words that I hear online and in real time. I may not get everything right, say the wrong thing, and sometimes may not know what to say, but everyday I make progress. Pretty remarkable, huh?”

While Android Dick does seem intelligent, many of his predictions are truly ominous, and it is actually fairly common for robots to display this sort of strange attitude.

As we reported earlier this year, one of Japan’s largest cellphone carriers, SoftBank Mobile, has created the first humanoid robot designed specifically for living with humans. The company claims the robot, Pepper, is the first example of artificial intelligence that can actually feel and understand emotion. However, a quick demonstration with Pepper shows that it has a difficult time with emotion and is in fact a bit of an egomaniac. Regardless of the question it is asked, most conversations usually lead back to Pepper (and its rivalry with the iPhone).

Last month, over 1,000 scientists and experts — including Stephen Hawking and Elon Musk — signed a letter warning of the dangers of unchecked advancements in artificial intelligence. This robot certainly doesn’t calm those concerns.

Wednesday, 3 December 2014

Stephen Hawking warns artificial intelligence could end mankind

BBC News

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.

But others are less gloomy about AI's prospects.

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.

Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next. 

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

Read more
 

Tuesday, 26 August 2014

Robots Receive Internet Brain For Machine Learning

Nicholas West
Activist Post

A new system called Robo Brain is being funded by the usual suspects in the military-industrial-surveillance complex.

The initiative to merge robotics with artificial intelligence continues to expand its vision. I recently wrote about an internal cloud network program which enables robots to do their own research, communicate with one another, and collectively increase their intelligence in a full simulation of human interaction. It has been dubbed "Wikipedia for Robots."

A parallel project in Germany went further by seeking to translate the open Internet into a suitable robot language that would prompt accelerated, autonomous machine learning.

Now researchers at Cornell are presenting Robo Brain – "a large-scale computational system that learns from publicly available Internet resources." Evidently it is learning quickly:

Read more

Friday, 22 August 2014

Bots on Patrol: Mobile Security Robot to be Mass Produced

Factor

In a move that will rock the job security of night watchmen everywhere, the world’s first commercially available security robot is set for mass production in the US.

Designed by Denver-based Gamma 2 Robotics, the robot will now be manufactured entirely in the States, with a process that can be scaled up to full mass production as demand grows.

The robot, which is known as the Vigilant MCP (mobile camera platform), features a digital camera and an array of sensors to detect the presence of unauthorised intruders, and will activate the alarm and send out an alert should it find someone where they shouldn’t be.

It is being pushed as a solution to night security in particular, with proposed industries including retail, warehouses, data centres and convention centres.

Read more

Wednesday, 30 July 2014

Wall Street Journal Reporter: “The Entire United States Market Has Become One Vast Dark Pool”

Pam Martens and Russ Martens
Wall St. On Parade
July 29, 2014
 
In 2012, Wall Street Journal reporter, Scott Patterson, released his 354-page prescient overview of U.S. market structure titled, Dark Pools: High Speed Traders, A.I. Bandits, and the Threat to the Global Financial System. (For those whose computer prowess is limited to turning on a laptop, like millions of fellow Americans, “A.I.” means artificial intelligence – machines teaching themselves to think like humans, but faster.)

Patterson comes to an epiphany on page 339 of his book, writing in the notes section: “The title of this book doesn’t entirely refer to what is technically known in the financial industry as a ‘dark pool.’ Narrowly defined, dark pool refers to a trading venue that masks buy and sell orders from the public market. Rather, I argue in this book that the entire United States stock market has become one vast dark pool. Orders are hidden in every part of the market. And the complex algorithmic AI-based trading systems that control the ebb and flow of the market are cloaked in secrecy. Investors – and our esteemed regulators – are entirely in the dark because the market is dark.” (The italics in this excerpt are as they appear in the hardcover book.)

We totally agree with Patterson that U.S. markets are the darkest they have ever been in history – from their early origins in the bright sunlight under the Buttonwood tree at 68 Wall to today’s secretive, unregulated stock exchanges known as dark pools that trade in private across America – the lights have gone out. And as each light has flickered and dimmed, public confidence has drained from the system, leaving it today as the unsafe battlefield of hedge funds, high frequency traders and dark pool operators.

Read more
 

Saturday, 26 July 2014

JIBO: The World's First Family Robot

Comment: No doubt you can see where this is going? Efficiency and convenience are going to become the two pillars of "progress."  This is no Luddite reaction. The loss of meaning and our connection to the Earth will become a footnote to technocratic expedience.

--------------------------

Sunday, 23 February 2014

Robots will be smarter than us all by 2029, warns AI expert Ray Kurzweil



The Independent 

One of the world’s leading futurologists and artificial intelligence (AI) developers, 66-year-old Kurzweil has previous form in making accurate predictions about the way technology is heading.

In 1990 he said a computer would be capable of beating a chess champion by 1998 – a feat managed by IBM’s Deep Blue, against Garry Kasparov, in 1997.

When the internet was still a tiny network used by a small collection of academics, Kurzweil anticipated it would soon make it possible to link up the whole world.

Now, Kurzweil says that within 15 years robots will have overtaken us, having fulfilled the so-called Turing test, where computers can exhibit intelligent behaviour equal to that of a human.

Speaking in an interview with the Observer, he said that his prediction was foreshadowed by recent high-profile AI developments, and Hollywood films like Her, starring Joaquin Phoenix.

“Today, I’m pretty much at the median of what AI experts think and the public is kind of with them,” he said.

“The public has seen things like Siri (Apple’s voice recognition software), where you talk to a computer. They’ve seen the Google self-driving cars. My views are not radical any more.”

Though credited with inventing the world’s first flat-bed scanners and text-to-speech synthesisers, Kurzweil is perhaps most famous for his theory of “the singularity” – a point in the future where humans and machines will apparently “converge”.

His decision to work for Google came after the company acquired a host of other AI developers, from the BigDog creators Boston Dynamics to the British startup DeepMind.

And the search engine giant’s co-founder Larry Page was able to convince Kurzweil to take on “his first actual job” by promising him “Google-scale resources”.

With the company’s unprecedented billions to spend, and some of humanity’s greatest minds already on board, it is clearly only a matter of time before we reach that point when robots can joke, learn and yes, even flirt.


Tuesday, 4 February 2014

What does Google want with DeepMind? Here are three clues


Orwell Was Right

Google's seemingly inexorable drive to control every aspect of our lives continues unabated. The Conversation takes a look at their move into the field of artificial intelligence.

All eyes turned to London this week, as Google announced its latest acquisition in the form of DeepMind, a company that specialises in artificial intelligence technologies. The £400m price tag paid by Google and the reported battle with Facebook to win the company over indicate that this is a firm well worth backing.
Although solid information is thin on the ground, you can get an idea of what the purchase might be leading to, if you know where to look.
Clue 1: what does Google already know?
Google has always been active in artificial intelligence and relies on the process for many of its projects. Just consider the “driver” behind its driverless cars, the speech recognition system in Google Glass, or the way its search engine predicts what we might search for after just a couple of keystrokes. Even the page-rank algorithm that started it all falls under the banner of AI.
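The page-rank algorithm mentioned above is simple enough to sketch. The version below is a rough power-iteration toy, nothing like Google's production ranker; the function name, damping value, and example graph are all invented for illustration. The core idea: a page's rank is a damped sum of the rank flowing in from pages that link to it.

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on a small adjacency matrix (rough sketch).

    adj[i][j] = 1 means page i links to page j. Dangling pages (no outlinks)
    are treated as linking to every page, a common convention.
    """
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)             # outlink count per page
    # Row-stochastic transition matrix; dangling rows spread rank uniformly.
    transition = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    rank = np.full(n, 1.0 / n)                       # start uniform
    for _ in range(iters):
        # random surfer: jump anywhere with prob (1-d), follow a link with prob d
        rank = (1 - damping) / n + damping * transition.T @ rank
    return rank

# Tiny 3-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
ranks = pagerank([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]])
print(ranks.round(3))  # page 2 collects the most incoming link mass
```

The ranks always sum to 1, and iterating the update converges to the stationary distribution of the "random surfer" Markov chain — which is why the method scales with sparse matrix tricks to web-sized graphs.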
Acquiring a company such as DeepMind therefore seems like a natural step. The big question is whether Google is motivated by a desire to help develop technologies we already know about or whether it is moving into the development of new technologies.
Given its track record, I’m betting on the latter. Google has the money and the drive to tackle the biggest questions in science, and developing computers that think like humans has, for a long time, been one of the biggest of them all.
Clue 2: what’s in the research?
The headlines this week have described DeepMind as a “secretive start-up”, but clues about what it gets up to at its London base can be gleaned from some of the research publications produced by the company’s co-founder, Demis Hassabis.
Hassabis' three most recent publications all focus on the brain activity of human participants as they undergo particular tasks. He has looked into how we take advantage of our habitat, how we identify and predict the behaviour of other people and how we remember the past and imagine the future.
As humans, we collect information through sensory input and process it many times over using abstraction. We extract features and categorise objects to focus our attention on the information that is relevant to us. When we enter a room we quickly build up a mental image of the room, interpret the objects in the room, and use this information to assess the situation in front of us.
The people at Google have, until now, generally focused on the lower-level stages of this information processing. They have developed systems to look for features and concepts in online photos and street scenes to provide users with relevant content, systems to translate one language to another to enable us to communicate, and speech recognition systems, making voice control on your phone or device a reality.
The processes Hassabis investigates require these types of information processing as prerequisites. Only once you have identified the relevant features in a scene and categorised objects in your habitat can you begin to take advantage of your habitat. Only once you have identified the features of someone’s face and recognised them as a someone you know can you start to predict their behaviour. And only once you have built up vivid images of the past can you extrapolate a future.
Clue 3: what else is on the shopping list?
Other recent acquisitions by Google provide further pieces to the puzzle. It has recently appointed futurist Ray Kurzweil, who believes in search engines with human intelligence and being able to upload our minds onto computers, as its director of engineering. And the purchase of Boston Dynamics, a company developing ground breaking robotics technology, gives a hint of its ambition.
Google is also getting into smart homes in the hope of more deeply interweaving its technologies into our everyday lives. DeepMind could provide the know-how to enable such systems to exhibit a level of intelligence never seen before in computers.
Combining the machinery Google already uses for processing sensory input with the ideas under investigation at DeepMind about how the brain uses this sensory input to complete high-level tasks is an exciting prospect. It has the potential to produce the closest thing yet to a computer with human qualities.
Building computers that think like humans has been the goal of AI ever since the time of Alan Turing. Progress has been slow, with science fiction often creating false hope in people’s minds. But these past two decades have seen unimaginable leaps in information processing and our understanding of the brain. Now that one of the most powerful companies in the world has identified where it wants to go next, we can expect big things. Just as physics had its heyday in the 20th century, this century is truly the golden age of AI.

Sunday, 22 December 2013

New robotic 'muscle' a thousand times stronger

Zee News
Dec. 20, 2013

Scientists have developed a new robotic 'muscle', a thousand times more powerful than human muscle, which can catapult objects 50 times heavier than itself - faster than the blink of an eye.

Researchers with the Lawrence Berkeley National Laboratory in the US demonstrated a micro-sized robotic torsional muscle/motor made from vanadium dioxide that is able to catapult very heavy objects over a distance five times its length within 60 milliseconds.

"We've created a micro-bimorph dual coil that functions as a powerful torsional muscle, driven thermally or electro-thermally by the phase transition of vanadium dioxide," said study leader, Junqiao Wu.

"Using a simple design and inorganic materials, we achieve superior performance in power density and speed over the motors and actuators now used in integrated micro-systems," Wu said.

What makes vanadium dioxide highly coveted by the electronics industry is that it is one of the few known materials that is an insulator at low temperatures but abruptly becomes a conductor at 67 degrees Celsius.

This temperature-driven phase transition from insulator-to-metal is expected to one day yield faster, more energy efficient electronic and optical devices.

However, vanadium dioxide crystals also undergo a temperature-driven structural phase transition whereby when warmed they rapidly contract along one dimension while expanding along the other two.

This makes vanadium dioxide an ideal candidate material for creating miniaturised, multi-functional motors and artificial muscles.

Wu and his colleagues fabricated their micro-muscle on a silicon substrate from a long "V-shaped" bimorph ribbon comprised of chromium and vanadium dioxide.

When the V-shaped ribbon is released from the substrate it forms a helix consisting of a dual coil that is connected at either end to chromium electrode pads.

Heating the dual coil actuates it, turning it into either a micro-catapult, in which an object held in the coil is hurled when the coil is actuated, or a proximity sensor, in which the remote sensing of an object causes a "micro-explosion," a rapid change in the micro-muscle's resistance and shape that pushes the object away.

Monday, 18 November 2013

Do We Live in the Matrix?

 

 Discover Magazine 

 

Tests could reveal whether we are part of a giant computer simulation — but the real question is whether we want to know.

 

In the 1999 sci-fi film classic The Matrix, the protagonist, Neo, is stunned to see people defying the laws of physics, running up walls and vanishing suddenly. These superhuman violations of the rules of the universe are possible because, unbeknownst to him, Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation created by sentient machines. 


The action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.” 


Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. As fanciful as it sounds, some philosophers have long argued that we’re actually more likely to be artificial intelligences trapped in a fake universe than we are organic minds in the “real” one. 


But if that were true, the very laws of physics that allow us to devise such reality-checking technology may have little to do with the fundamental rules that govern the meta-universe inhabited by our simulators. To us, these programmers would be gods, able to twist reality on a whim. 


So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing?
