Thursday, June 30, 2011

Natural User System - What android@home should have been



Natural User System Project

Timeline: 2 weeks
Cost: $150
Lines of code: 50,000+

Info:
The platform we built allows for an ecosystem of automated devices that can communicate over a standard
powerline system or a wireless system, and it can be integrated into existing networks.

The system was built using a combination of Ruby and C++. We looked to the elegance and simplicity of Ruby on Rails to give developers the freedom to write complex applications for our system in literally seconds. The lights application, as seen in the video, was written in under a minute.
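To give a feel for that claim, here is a minimal sketch of what a lights app might look like, assuming a Rails-inspired DSL. The class name, `META_TAGS`, and `handle` method are illustrative assumptions, not the actual NUS API:

```ruby
# Hypothetical NUS "lights" app sketch (names are illustrative).
class LightsApp
  # Meta tags let the dispatcher match spoken commands to this app.
  META_TAGS = %w[light lights lamp].freeze

  # Called by the server with the recognized command text.
  def handle(command)
    case command
    when /turn .*on/  then switch(:on)
    when /turn .*off/ then switch(:off)
    else "unrecognized"
    end
  end

  private

  def switch(state)
    # In the real system this would message the light device over
    # powerline or wireless; here we just report the action.
    "light #{state}"
  end
end
```

A handful of lines like these, matching a phrase and flipping a device, is the scale of app the paragraph above describes.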

The server has three abstraction layers: devices, generators, and apps.

Devices are things like microphones and speakers. Developers can easily come in and create new devices, say a washing machine, and we have created some standard devices which automatically integrate into the basic home automation system. For example, let's say you develop a microphone for our system. The driver you write simply states that it's capable of being a microphone. The server then handles all the noise canceling, auto-correlation, etc., and listens in on the microphone.
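The capability-declaration idea above can be sketched like this. `DeviceRegistry`, `capabilities`, and `WirelessMicDriver` are assumed names for illustration only, not the shipped driver API:

```ruby
# Illustrative sketch: a driver only declares what it is capable of;
# the server discovers it through a registry and takes over from there.
class DeviceRegistry
  @devices = []

  class << self
    attr_reader :devices

    def register(driver)
      @devices << driver
    end

    # The server asks the registry for every device with a capability.
    def capable_of(capability)
      @devices.select { |d| d.capabilities.include?(capability) }
    end
  end
end

# A hypothetical wireless microphone driver: it states its capability
# and how to read raw audio, nothing more.
class WirelessMicDriver
  def capabilities
    [:microphone]
  end

  def read_frame
    # A real driver would return raw audio samples from the radio link.
    []
  end
end

DeviceRegistry.register(WirelessMicDriver.new)
```

The point of the design is that noise canceling and listening live in the server, so the driver stays tiny.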

Generators are "listeners" on the devices. Once the server sees a microphone, it will automatically listen in. A standard speech-to-text command generator is included in the "basic" system; it listens in on all microphones and can then spawn apps.
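A generator, then, is little more than a bridge from recognized speech to the dispatcher. This sketch assumes a dispatcher with a search-style interface; both class names and the method shapes are hypothetical:

```ruby
# Hypothetical command generator: receives text recognized from any
# microphone in the house and pushes it to the dispatcher as a search.
class CommandGenerator
  def initialize(dispatcher)
    @dispatcher = dispatcher
  end

  def on_speech(text, room)
    @dispatcher.search(text, room)
  end
end

# Stand-in dispatcher for illustration: records the queries it receives.
class RecordingDispatcher
  attr_reader :queries

  def initialize
    @queries = []
  end

  def search(text, room)
    @queries << [text, room]
  end
end
```

Passing the room along with the text is what later lets the system answer in the same room the command came from.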

Apps register themselves with "meta tags". Much like a search engine, generators push a search to a dispatcher, which in turn tries to find the correct app to run. Once an application has control, it can either rely on simple built-in functions like "talk ____" or "listen", or it can call devices directly. The system is so elegant that it knows the location of devices, meaning that a "talk" command will only make the computer talk in the same room as the person who issued the initial conversation.
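Here is one way the meta-tag matching and room-aware "talk" described above could fit together. All names here are assumptions for illustration; the real dispatcher's ranking is not public:

```ruby
# Hypothetical dispatcher: ranks registered apps by how many of their
# meta tags appear in the spoken command, like a tiny search engine.
class Dispatcher
  def initialize
    @apps = []
  end

  def register(app)
    @apps << app
  end

  def search(command)
    words = command.downcase.split
    best = @apps.max_by { |app| (app.meta_tags & words).size }
    best if best && !(best.meta_tags & words).empty?
  end
end

class LightControlApp
  def meta_tags
    %w[light lights lamp]
  end
end

class MusicApp
  def meta_tags
    %w[music song play]
  end
end

# Built-in "talk" sketch: only the speaker in the room where the
# command originated answers.
def talk(text, room:, speakers:)
  target = speakers.find { |s| s[:room] == room }
  target && "#{target[:room]} speaker says: #{text}"
end
```

Under this sketch, "computer turn the kitchen light on" routes to the light app because "light" overlaps its tags, and the reply plays only on the kitchen speaker.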

Take note that this system is not limited to voice recognition: gesture recognition technology will also be integrated into the "basic" system.

What we are working on now is the next level of automation systems. We use the terminology "NUS" (natural user system) instead of "NUI" (natural user interface) because we mean an intelligent system.

We don't want to disclose too much information about how we are integrating intelligence into the system, but it will truly be awe-inspiring once completed. We are working towards the most accurate, fastest, smartest, and most groundbreaking automation system to date. We are not looking to expand on current technologies; we are developing a new technology.


Android@Home pre-demo

So it's another late night at the office, but it should be well worth it tomorrow!
Wondering what we will be demonstrating? It should go something like this:

1) Human speaks into the wireless microphone: "Computer, turn the kitchen light on"
a) The human speaks into the wireless microphone
b) The microphone sends the human's voice to the Voice Recognition Server at CD quality
c) The Voice Recognition Server analyzes the speech and understands the human's intent
d) The server sends a command to the kitchen light to turn on

2) The Light turns ON!
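The demo steps above can be strung together as a toy pipeline. Everything here is a stand-in: recognition is faked with a lookup table, whereas the real system streams CD-quality audio to the Voice Recognition Server:

```ruby
# Toy end-to-end sketch of the demo: audio -> recognized text ->
# intent -> device command. All components are illustrative stubs.
RECOGNIZED = {
  "audio:kitchen-light-on" => "computer turn the kitchen light on"
}.freeze

# Stand-in for the Voice Recognition Server.
def recognize(audio)
  RECOGNIZED.fetch(audio, "")
end

# Stand-in intent extraction: pull the target device and action.
def intent(text)
  if (m = text.match(/turn the (\w+) light (on|off)/))
    { device: "#{m[1]} light", action: m[2].to_sym }
  end
end

def demo(audio)
  cmd = intent(recognize(audio))
  cmd ? "#{cmd[:device]} turned #{cmd[:action]}" : "no action"
end
```

The real pipeline adds encryption and networking between each stage, but the shape, recognize then interpret then command, is the same.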

...more in a bit

Sunday, June 19, 2011

It's ALIVE!!

Ladies and gentlemen, mark your calendars: today marks the first real demonstration of an android@home / home automation device that uses voice recognition. Earlier today, during a Skype meeting, Bud Townsend from VA Tech demonstrated how he was able to stream voice recognition from point A to point B while staying secure. The details of this project are not yet fully public, but what we do know is that Bud was able to input voice commands remotely while staying encrypted. And guess what else? Voice recognition over the network ran at CD-quality audio, which, with a little research, you will find is significantly higher than what Google Voice Recognition usually uses. We are excited to see what comes next... :)

Monday, June 13, 2011

Hello World

The goal of the Android@Home project is to build a home automation system that is inherently more natural to use, while providing features that have not yet been realistically implemented in previous home automation systems. Examples of these features include voice recognition, advanced artificial intelligence, and voice feedback. It is also a goal of the Android@Home project to encourage manufacturers to design products that are “smart” and will integrate with Google’s Android@Home standard.

Overall, the point of the Android@Home project is to prove the concept that home automation, or a “smart” home, is possible and affordable.

To encourage future implementation and to provide the best possible system, the majority of the Android@Home project will be Open Source.