Archive for the ‘mobile computing’ Category

Kurzweil is joining forces with Google

December 17, 2012 Leave a comment

On December 14th 2012 there was a super-interesting post announcing that Kurzweil is joining forces with Google.

Ray Kurzweil confirmed today that he will be joining Google to work on new projects involving machine learning and language processing…

Ambitions certainly run high:

I’m thrilled to be teaming up with Google to work on some of the hardest problems in computer science, so we can turn the next decade’s “unrealistic” visions into reality.

As you read on, you begin to wonder if this is really the start of Arthur C. Clarke’s 2025 Braincap vision.
See my Braincap article.

It is certainly all very intriguing:
On page 156 of Kurzweil’s “How to Create a Mind” one reads that Kurzweil has started a new company called Patterns:

…which intends to develop hierarchical self-organizing neocortical models that utilize HHMMs (Hierarchical Hidden Markov Models) and related techniques for the purpose of understanding natural language. An important emphasis will be on the ability for the system to design its own hierarchies in a manner similar to a biological neocortex.

Our envisioned system will continually read a wide range of material, such as Wikipedia and other knowledge resources, as well as listen to everything you say and watch everything you write (if you let it).
The goal is for it to become a helpful friend answering your questions –
before you even formulate them
– and giving you useful information and tips as you go through the day.
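The HHMM machinery mentioned in the quote builds on plain Hidden Markov Models. As a toy illustration only – the states, words and probabilities below are invented, and this is the flat forward algorithm rather than Kurzweil’s hierarchical system – here is how an HMM assigns a probability to a word sequence:

```python
# Toy forward algorithm for a two-state Hidden Markov Model - a sketch of
# the flat building block behind hierarchical HMMs. All numbers below are
# made up for illustration; nothing here is from Kurzweil's actual system.

states = ["noun", "verb"]                      # hypothetical hidden word classes
start = {"noun": 0.6, "verb": 0.4}             # P(first hidden state)
trans = {"noun": {"noun": 0.3, "verb": 0.7},   # P(next state | current state)
         "verb": {"noun": 0.8, "verb": 0.2}}
emit = {"noun": {"dog": 0.5, "runs": 0.1, "cat": 0.4},   # P(word | state)
        "verb": {"dog": 0.1, "runs": 0.8, "cat": 0.1}}

def forward(observations):
    """Return P(observations) by summing over all hidden state paths."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(round(forward(["dog", "runs"]), 4))  # -> 0.1866
```

A hierarchical HMM stacks such models, so that one state at a higher level expands into a whole sub-HMM at the level below – which is what lets the system learn its own layered structure.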

So, finally Gordon Bell’s full MyLifeBits is under way …

Ten years later, Futuropolis 2058 visions – Wireless Sensor Nets making automatic digital diaries and putting them directly out on the internet for you, and what have you – seem almost commonplace.

Obviously, IBM’s Watson was only the start.
In Jeopardy a question is posed, and Watson’s machinery goes to work. Its UIMA (Unstructured Information Management Architecture) deploys hundreds of subsystems, all of which attempt to come up with a response to the Jeopardy query: more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses. Finally, Watson acts as an expert system that combines the results of the subsystems, helping to figure out how much confidence it has in the answers the subsystems come up with.
Not only can Watson understand the Jeopardy queries, it can also search its 200 million pages of knowledge (Wikipedia and other sources) and come up with the correct answer faster than any human expert…
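That final merge-and-rank step can be pictured as combining per-scorer confidences into a single score per candidate answer. The sketch below is purely illustrative – the scorer names, weights and numbers are invented, and the real DeepQA pipeline uses trained models over hundreds of evidence features rather than a fixed weighted average:

```python
# Hypothetical sketch of Watson-style evidence merging: several independent
# scorers each rate each candidate answer in [0, 1], and a final stage
# combines the scores into one confidence per candidate. All names and
# numbers are invented for illustration.

evidence = {
    "Toronto": {"type_match": 0.2, "passage_support": 0.4, "popularity": 0.7},
    "Chicago": {"type_match": 0.9, "passage_support": 0.8, "popularity": 0.6},
}
weights = {"type_match": 0.5, "passage_support": 0.3, "popularity": 0.2}

def confidence(scores):
    """Weighted average of the individual evidence scores."""
    return sum(weights[k] * v for k, v in scores.items())

# Rank candidates by combined confidence, best first.
ranked = sorted(evidence, key=lambda c: confidence(evidence[c]), reverse=True)
best = ranked[0]
print(best, round(confidence(evidence[best]), 2))  # -> Chicago 0.81
```

The point of the design is that no single scorer has to be right: a candidate wins only when many weak signals agree, and the combined confidence also tells the system when it should not buzz in at all.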

And that is just 2012 stuff. Kurzweil obviously won’t let us stop there.
On page 169 of “How to Create a Mind” one reads that a better Watson should not only be able to answer a question, but also understand – pick out themes in documents and novels:

Coming up with such themes on its own from just reading the book, and not essentially copying the thoughts (even without the words) of other thinkers, is another matter.
Doing so would constitute a higher-level task than Watson is capable of today – it is what I call a Turing test-level task. (That being said, I will point out that most humans do not come up with their own original thoughts either, but copy the ideas of their peers and opinion leaders.)
At any rate this is 2012, not 2029, so I would not expect Turing test-level intelligence yet.

Intriguing indeed.

Arthur C. Clarke’s vision is surely well under way …

But even he didn’t anticipate a future where our memories belonged to the Cloud, Google or similar.
A cloud that will design its own cognitive hierarchies in a manner similar to a biological neocortex, based on our memories, and feed the result right back to us, shaping and directing our lives, as our most trusted friend.


Simon Laub

Posted on UseNet, Dec. 17th 2012:
Newsgroups: rec.arts.sf.written
From: Simon Laub
Date: Mon, 17 Dec 2012 23:43:03 +0100
Local: Mon, Dec 17 2012 11:43 pm
Subject: Kurzweil is joining forces with Google


The sarcastic Mars rover is on Twitter ….

August 17, 2012 Leave a comment

Remember to check out Curiosity’s Twitter messages:

And YES indeed – Curiosity has landed… absolutely mind-blowing, amazing stuff!
Make sure to watch the landing video (nasajpl)!!
Certainly, some of us still find it rather hard to understand that all of this actually worked…

And here is where it gets hilarious. The poor lonely robot, 248 million km away, turns out to be a rather sarcastic little fellow. And a poet as well….
At least that’s the impression you get when you read his Twitter ramblings: SarcasticRover.

01001000 01100101 01101100 01101100 01101111 00101100 00100000 01001110 01100101 01110010 01100100 01110011 00101110

They’re checking out my systems and software… got nothing to do but look at rocks and ponder death.

No that’s cool JPL… I’ll walk for over a month to go to some unpronounceable shithole so you can look at a rock or whatever. NBD.

Ready to start driving around in a pointless f–king circle looking for some kind of magical dirt or whatever. GO SCIENCE!

Now the Olympics are over, I assume the media will go back to ignoring athletes and celebrating intellectuals, right? LOL SAD.

Hey does that Mayan calendar bullshit apply to Mars, or can I just watch you all die from here? MAYA2012!!!

Hilarious stuff!
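And in case you didn’t catch the binary tweet above: it is just 8-bit ASCII, one character per group. A quick Python snippet decodes it:

```python
# Decode the rover's binary tweet: each space-separated group is one
# 8-bit ASCII character, so parse each group as base-2 and map to chr().
bits = ("01001000 01100101 01101100 01101100 01101111 00101100 00100000 "
        "01001110 01100101 01110010 01100100 01110011 00101110")
message = "".join(chr(int(b, 2)) for b in bits.split())
print(message)  # -> Hello, Nerds.
```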

Read more about the sarcastic rover here.

See stunning pictures taken by Curiosity here.

And watch a 360 degree panorama from Mars taken by Curiosity here (sol 2 on Mars).

A Robotic Best Friend

July 2, 2012 Leave a comment

I-SODOG, Your New, Robotic Best Friend:

Takara Tomy I-SODOG (Tokyo Toy Show 2012):

Touch sensing skin for the iCub

April 28, 2012 Leave a comment

Skin has been one of the big missing technologies for humanoid robots, according to Italian roboticist Giorgio Metta.

It is only possible to interact closely with people if the robot is exactly aware of what its limbs are interacting with.

See more:

Google plans to launch glasses with a heads-up display by the end of 2012

April 5, 2012 Leave a comment

The glasses, which were previously rumored to have a front-facing camera with flash and a voice input interface, will be Android based.

They will include a display, mere inches from the wearer’s eye, streaming real-time info about your surroundings, similar to the various augmented reality applications we’ve seen on smartphones.

The data will be fetched through a 3G/4G data connection, and the glasses will retrieve information about their surroundings through GPS and several sensors.

The glasses will cost “around the price of current smartphones,” sources say. While definitely not very precise – current smartphones cost anywhere from $150 to $600 – this price range shows that Google intends the glasses as a product for the mass market.

Sort of the same functionality as on the LG Optimus?

1. Google Maps Overlay: Looking for friends, stores, etc.
2. Social Connections.
3. Video: Recording images/video.
4. Access to Local Reviews – on whatever you look at.

Google expects:
1. The Google HUD might finally give the search giant a chance to compete everywhere for ad space.
2. Local Paid Search – combining keyword search for position and the HUD.
3. Real-time Bidding/Related Offers: Looking at an offer – what are the competitors selling in the local area?
4. Utilizing the saved search history to guide the person around in the urban landscape.



Sources: searchenginewatch