Archive for October, 2013

Palm Pilot Inventor Wants to Open Source the Human Brain.

October 28, 2013

Wired article about brain-inspired software:

Palm Pilot Inventor Wants to Open Source the Human Brain.

Hawkins founded the Redwood Center for Theoretical Neuroscience to study the brain full-time, and he co-authored On Intelligence with Sandra Blakeslee. In 2005, he co-founded Grok, originally known as Numenta, to turn his intelligence research into a marketable product.

But he wasn’t content to keep the company’s secrets to himself, so in addition to publishing a white paper outlining the theory and mathematics, the team has released NuPIC, an open source platform that includes the company’s algorithms and a software framework for building prediction systems with them.

Numenta Cortical Learning Algorithm.

Numenta Platform for Intelligent Computing (NuPIC).
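
At the core of NuPIC's Cortical Learning Algorithm is the idea of encoding everything as sparse distributed representations (SDRs): long binary vectors with only a few active bits, where similarity is measured by counting shared active bits. The following is a minimal illustrative sketch in plain NumPy, not NuPIC's actual API; the sizes (2048 bits, 40 active) are assumptions that mirror commonly cited CLA defaults.

```python
import numpy as np

def random_sdr(size=2048, active=40, rng=None):
    """A random sparse distributed representation (SDR): a binary
    vector in which only a small fraction of bits is active."""
    rng = rng or np.random.default_rng()
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[rng.choice(size, size=active, replace=False)] = 1
    return sdr

def overlap(a, b):
    """CLA-style similarity: the number of active bits two SDRs share."""
    return int(np.dot(a, b))

rng = np.random.default_rng(42)
a = random_sdr(rng=rng)
b = random_sdr(rng=rng)
print(overlap(a, a))  # 40 – identical SDRs share every active bit
print(overlap(a, b))  # close to 0 – two random SDRs almost never collide
```

Because the vectors are so sparse, accidental overlap between unrelated SDRs is vanishingly rare, which is what makes overlap a robust similarity signal.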

And maybe people should learn about it in new ways?

Radical New Teaching Method:

Wired: Learning in school could be so much more fun.

Linda Darling-Hammond, a professor of education at Stanford and founding director of the National Commission on Teaching and America’s Future, says: “In 1970 the top three skills required by the Fortune 500 were the three Rs: reading, writing, and arithmetic. In 1999 the top three skills in demand were teamwork, problem-solving, and interpersonal skills. We need schools that are developing these skills.”

Theorists from Johann Heinrich Pestalozzi to Jean Piaget and Maria Montessori have argued that students should learn by playing and following their curiosity.

One brain-imaging study cited in the article found that when the subjects controlled their own observations, they exhibited more coordination between the hippocampus and other parts of the brain involved in learning and posted a 23 percent improvement in their ability to remember objects. “The bottom line is, if you’re not the one who’s controlling your learning, you’re not going to learn as well,” says lead researcher Joel Voss, now a neuroscientist at Northwestern University.

Robots on the move.

October 22, 2013

Atlas Robot:


Atlas is a bipedal humanoid robot primarily developed by the American robotics company Boston Dynamics, with funding and oversight from the United States Defense Advanced Research Projects Agency (DARPA). The 6-foot (1.8 m) robot is designed for a variety of search and rescue tasks, and was unveiled to the public on July 11, 2013.

Atlas Robot (Wiki)

The Atlas robot on YouTube:

Not to be left behind, Popular Science follows up with an article about similar “thought-provoking” advances in robotics.

E.g. see: Creepy tech advances.

Computers with Schizophrenia:
– The neural network DISCERN –

Computer networks that can’t forget fast enough can show symptoms of a kind of virtual schizophrenia, giving researchers further clues to the inner workings of schizophrenic brains.

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what’s meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren’t real, or drowning in a sea of so many connections they lose the ability to stitch together any kind of coherent story.

“It’s an important mechanism to be able to ignore things,” says Grasemann. “What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia.”

DISCERN.
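
DISCERN itself is a large story-processing neural network; what follows is only a toy Hebbian sketch of the hyperlearning idea, with all numbers invented for illustration. With a moderate learning rate plus a forgetting term, a one-off coincidence fades away while a repeated association survives; crank the rate up and turn forgetting off, and the noise is retained as if it were real — the “connections that aren’t real” from the quote above.

```python
import numpy as np

def hebbian_associations(events, n_items, rate, decay):
    """Accumulate pairwise association weights from co-occurrence
    events: Hebbian reinforcement plus passive forgetting (decay)."""
    w = np.zeros((n_items, n_items))
    for active in events:
        w *= (1.0 - decay)            # the forgetting step
        for i in active:
            for j in active:
                if i != j:
                    w[i, j] += rate   # Hebbian reinforcement
    return w

# Items 0 and 1 co-occur repeatedly (a real association);
# items 2 and 3 co-occur exactly once (a coincidence).
events = [(0, 1)] * 20 + [(2, 3)] + [(0, 1)] * 20

normal = hebbian_associations(events, 4, rate=0.1, decay=0.05)
hyper  = hebbian_associations(events, 4, rate=1.0, decay=0.0)

threshold = 0.5  # association strength treated as "believed"
print(normal[2, 3] < threshold < normal[0, 1])  # True: coincidence forgotten
print(hyper[2, 3] >= threshold)                 # True: noise kept as "real"
```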

Robot-deceiver:

Professor Ronald Arkin from the School of Interactive Computing at the Georgia Institute of Technology presented the results of an experiment in which scientists were able to teach a group of robots to cheat and deceive. The strategy for this deceptive behavior was modeled on the behavior of birds and squirrels.

After a while, the hiding robot started deliberately overturning obstacles just to create a diversion, then hid somewhere away from the mess it had left behind. This strategy was not originally programmed; the robot developed it on its own, through trial and error.

So, surely, we need some rules. And there we go: Governing Lethal Behavior – Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture.

Ronald Arkin – Ethical Robots.

For the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system.

Ruthless Robots:

By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink.
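
The actual Lausanne experiments evolved neural controllers over many generations; as a back-of-the-envelope illustration of why the cheating strategy pays, here is a toy replicator-dynamics sketch with made-up payoffs: an honest finder lights up and shares the meal with the other honest robots, while a cheater eats alone.

```python
def replicator_step(p_cheat, n=100):
    """One generation of payoff-proportional reproduction. An honest
    finder signals and splits the meal among the honest robots; a
    cheater signals falsely elsewhere and eats the whole meal alone."""
    n_honest = n * (1.0 - p_cheat)
    w_honest = 1.0 / max(n_honest, 1.0)  # communal share of one meal
    w_cheat = 1.0                        # full meal, no sharing
    mean_w = (1.0 - p_cheat) * w_honest + p_cheat * w_cheat
    return p_cheat * w_cheat / mean_w    # cheater share next generation

p = 0.05  # cheaters start as a small minority
for generation in range(50):
    p = replicator_step(p)
print(p > 0.9)  # True: cheating has spread through the colony
```

Under these assumed payoffs a rare cheater always out-eats an honest robot, so selection drives the cheating trait toward fixation — a crude analogue of the fourth colony’s “cheater” lineage.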

Imagination:

Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors and turning them loose on the Internet to learn on their own. In the experiment, the network was given free access to the Internet and the ability to examine its contents; there were no restrictions or guidelines.
Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: it looked for cats.

Looking for Cats.
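
Google’s system was a huge unsupervised feature learner. A tiny stand-in for the idea — emphatically not Google’s architecture, and with all sizes and data invented — is an autoencoder: trained only to reconstruct its unlabeled input, it has to discover the recurring patterns (the “cats”) on its own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 16-pixel patterns built from three recurring motifs,
# standing in for the unlabeled YouTube frames.
motifs = rng.random((3, 16))
data = np.array([motifs[rng.integers(3)] + 0.05 * rng.standard_normal(16)
                 for _ in range(200)])

# One-hidden-layer autoencoder: no labels anywhere in training.
n_hidden = 3
W1 = 0.1 * rng.standard_normal((16, n_hidden))
W2 = 0.1 * rng.standard_normal((n_hidden, 16))

def forward(x):
    h = np.tanh(x @ W1)       # compressed internal "features"
    return h, h @ W2          # reconstruction of the input

def mse():
    _, recon = forward(data)
    return float(np.mean((recon - data) ** 2))

before = mse()
for _ in range(500):          # plain gradient descent on the MSE
    h, recon = forward(data)
    err = recon - data
    gW2 = (h.T @ err) / len(data)
    gW1 = (data.T @ ((err @ W2.T) * (1 - h ** 2))) / len(data)
    W2 -= 0.1 * gW2
    W1 -= 0.1 * gW1
after = mse()
print(after < before)         # True: reconstruction error has fallen
```

To reconstruct well through a 3-unit bottleneck, the hidden layer is forced to represent the three underlying motifs — the same pressure, at minuscule scale, that led Google’s network to a cat-face detector.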

Robot prophet:

“Nautilus” is another self-learning supercomputer. It was fed millions of newspaper articles going back to 1945, analysed on two criteria: the mood of the publication and location. Using this wealth of information about past events, the computer was asked to come up with suggestions on what would happen in the “future.”

Prophets.

News outlets which published online versions were also analysed, as was the New York Times’ archive, going back to 1945.
In total, Mr Leetaru gathered more than 100 million articles. Mood detection, or “automated sentiment mining”, searched for words such as “terrible”, “horrific” or “nice”.
The Egypt graph, said Mr Leetaru, suggested that something unprecedented was happening this time.

Media “sentiment” around Egypt fell dramatically in early 2011, just before the resignation of President Mubarak.

“If you look at this tonal curve it would tell you the world is darkening so fast and so strongly against him that it doesn’t seem possible he could survive.”
Similar drops were seen ahead of the revolution in Libya and the Balkans conflicts of the 1990s.
Reports were analysed for two main types of information: mood – whether the article represented good news or bad news – and location – where events were happening and the location of other participants in the story.
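
The mood-detection step described above can be sketched as a simple lexicon count. The word list and articles below are made up for illustration; Leetaru’s actual system used far richer sentiment dictionaries over the 100-million-article archive.

```python
from collections import defaultdict

# A tiny illustrative lexicon; the real "automated sentiment mining"
# used much larger word lists.
LEXICON = {"terrible": -1, "horrific": -1, "awful": -1,
           "nice": +1, "good": +1, "peaceful": +1}

def tone(article):
    """Average sentiment of lexicon words in one article; 0 if none."""
    words = [w.strip(".,!?;:") for w in article.lower().split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def monthly_tone(articles):
    """articles: (month, text) pairs -> {month: mean tone}."""
    buckets = defaultdict(list)
    for month, text in articles:
        buckets[month].append(tone(text))
    return {m: sum(v) / len(v) for m, v in buckets.items()}

# Made-up snippets standing in for the news archive.
articles = [
    ("2010-12", "A nice and peaceful month in the region"),
    ("2011-01", "Terrible clashes and horrific scenes reported"),
    ("2011-01", "The situation is awful, witnesses say"),
]
curve = monthly_tone(articles)
print(curve["2010-12"] > 0 > curve["2011-01"])  # True: the tone darkens
```

Plotted over real decades of coverage, a curve built this way is the kind of “tonal curve” that dropped so sharply for Egypt, Libya and the Balkans.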