So I just got back from a business trip to Miami, and I am completely exhausted. I did, however, want to update you on a few things real quick.
1st - I want to welcome Neil Dufva, CEO of Crunchy Logistics, back from his month-long journey in Europe!
2nd - I want to thank Joe Britt for responding to my e-mail; it shows a lot of character when someone that busy and that important takes time out of his day.
Now that Neil and the rest of the Crunchy team are back from Europe, and Bud Townsend has had some time to work on his other projects (obligations), we should be getting our Natural User System project going once again. Also, I like the style of having a Skype meeting/show, so you will most likely see more of that.
Natural User System - The future of home automation
Hey guys/gals!
So who would have ever thought we would get so much interest in our Home Automation side project! Well, the people have spoken, and our Natural User System is a go. In this video we explain why we are transitioning our blog, and why our system is named Natural User System (not android@home). We also ask Joe Britt (head honcho of android@home) for a favor :p
We also briefly discuss what our next version of this prototype (v1.2) will incorporate.
Check out the vlog update, and make sure to follow us on Twitter (@crunchylogistic) for the latest news.
It's official: the Natural User System project is a go! After a successful demonstration to the Crunchy Logistics team, we have been given the green light to proceed with development.
Timeline: 2 weeks
Cost: $150
Lines of code: 50,000+
Info: The platform we built allows for an ecosystem of automated devices that can work over a standard powerline system or a wireless system, and it can be integrated into existing networks.
The system was built using a combination of Ruby and C++. We looked to the elegance and simplicity of Ruby on Rails to give developers the freedom to write complex applications for our system in literally seconds. The lights application, as seen in the video, was written in under a minute.
The server has three abstraction layers: devices, generators, and apps.
Devices are things like microphones and speakers. While developers can easily create new devices (say, a washing machine), we have created some standard devices that automatically integrate into the basic home automation system. For example, let's say you develop a microphone for our system. The driver you write simply states that it's capable of being a microphone; the server then handles all the noise canceling, auto-correlation, etc. and listens in on the microphone.
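The actual driver API hasn't been published on this blog, but purely as an illustration, a driver that declares itself a microphone might look something like this minimal Ruby sketch (every class, constant, and method name below is our own stand-in, not the real interface):

```ruby
# Hypothetical sketch -- these names are ours, not the project's API.
class KitchenMicrophone
  CAPABILITY = :microphone  # the driver simply states what it can be

  attr_reader :room

  def initialize(room)
    @room = room
  end

  def read_samples
    []  # a real driver would pull audio frames from the hardware here
  end
end

# The server would discover the driver, see CAPABILITY, then handle the
# noise canceling, auto-correlation, etc. and listen in on its own.
mic = KitchenMicrophone.new("kitchen")
puts "Registered #{KitchenMicrophone::CAPABILITY} in #{mic.room}"
```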
Generators are "listeners" on the devices. Once the server sees a microphone, it will automatically listen in. A standard speech-to-text command generator is included in the "basic" system; it listens in on all microphones and can then spawn apps.
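Again as a rough, hypothetical sketch (the real generator interface isn't shown here), a command generator that listens to microphones and hands recognized text off to a dispatcher might be shaped like this:

```ruby
# Hypothetical sketch -- all names below are our own illustration.
# Minimal dispatcher stub so the sketch runs on its own.
class DispatcherStub
  def search(text, room:)
    puts "searching for an app matching #{text.inspect} (room: #{room})"
  end
end

# A generator "listens" to devices and turns raw input into searches.
class SpeechCommandGenerator
  def initialize(dispatcher)
    @dispatcher = dispatcher
  end

  # Called by the server whenever a microphone it listens to hears audio.
  def on_audio(room, samples)
    text = recognize(samples)            # speech -> text
    @dispatcher.search(text, room: room) # ask the dispatcher to find an app
  end

  private

  def recognize(_samples)
    "turn on the lights"  # stand-in for a real recognition engine
  end
end

SpeechCommandGenerator.new(DispatcherStub.new).on_audio("kitchen", [])
```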
Apps register themselves with "meta tags". Much like a search engine, generators push a search to a dispatcher, which in turn tries to find the correct app to run. Once an application has control, it can either rely on simple built-in functions like "talk ____" or "listen", or it can call devices directly. The system also knows the location of devices, meaning a "talk" command will only make the computer talk in the same room as the person who started the conversation.
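To tie the pieces together, here is a purely illustrative Ruby sketch of how the lights application from the video might register its meta tags and use a built-in like "talk"; all names are our own guesses, not the actual API:

```ruby
# Hypothetical sketch of an app -- every name here is our own stand-in.
class LightsApp
  # "Meta tags" the dispatcher searches against, like search-engine keywords.
  META_TAGS = %w[lights light lamp on off].freeze

  # The dispatcher hands the app control along with the room the command
  # came from, so built-ins like `talk` stay in that room.
  def run(command, room:)
    if command.include?("on")
      talk("Turning the lights on.", room: room)
    elsif command.include?("off")
      talk("Turning the lights off.", room: room)
    end
  end

  private

  def talk(text, room:)
    # a real built-in would route this to a speaker in the same room
    puts "[#{room} speaker] #{text}"
  end
end

LightsApp.new.run("turn on the lights", room: "kitchen")
```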
Take note that this system is not only voice-recognition capable; gesture recognition technology will also be integrated into the "basic" system.
What we are working on now is the next level of automation systems. When we use the terminology "NUS" (natural user system) instead of "NUI" (natural user interface), we mean an intelligent system.
We don't want to disclose too much information about how we are integrating intelligence into the system, but it will truly be awe-inspiring once completed. We are working towards the most accurate, fastest, smartest, and most groundbreaking automation system to date. We are not looking to expand on current technologies; we are developing a new technology.
Ladies and gentlemen, mark your calendars: today marks the first real demonstration of an android@home / home automation device that uses voice recognition. Earlier today, during a Skype meeting, Bud Townsend from VA Tech demonstrated how he was able to stream voice recognition from point A to point B while staying secure. The details of this project are not yet fully public, but what we do know is that Bud was able to input voice commands remotely over an encrypted connection. And guess what else? Voice recognition over the network ran at CD-quality audio, which, with a little research, you will find is significantly higher than what Google's voice recognition usually uses. We are excited to see what comes next... :)
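To put the "CD quality" claim in perspective, here is a quick back-of-the-envelope comparison, assuming uncompressed 16-bit mono PCM on both sides and the 16 kHz sample rate commonly used for speech recognition:

```ruby
# Rough uncompressed bitrates (16-bit samples, mono) -- our own estimate.
cd_quality  = 44_100 * 16  # CD sample rate: 705,600 bits/s (~706 kbps)
typical_asr = 16_000 * 16  # common speech-recognition rate: 256,000 bits/s

puts "CD-quality mono: #{cd_quality} bits/s"
puts "16 kHz ASR mono: #{typical_asr} bits/s"
puts "ratio: #{(cd_quality.to_f / typical_asr).round(2)}x"  # ~2.76x
```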
The goal of the Android@Home project is to build a home automation system that is inherently more natural to use, while providing features that have not yet been realistically implemented in previous home automation systems. Examples of these features include voice recognition, advanced artificial intelligence, and voice feedback. It is also a goal of the Android@Home project to encourage manufacturers to design products that are “smart” and will integrate with Google’s Android@Home standard.
Overall, the point of the Android@Home project is to prove a concept: that home automation, or a “smart” home, is possible and affordable.
To encourage future implementation and to provide the best possible system, the majority of the Android@Home project will be Open Source.