Natural User System Project
Timeline: 2 weeks
Cost: $150
Lines of code: 50,000+
Info:
The platform we built supports an ecosystem of automated devices that can run over a standard powerline system or a wireless system, and it can be integrated into existing networks.
The system was built using a combination of Ruby and C++. We looked to the elegance and simplicity of Ruby on Rails to give developers the freedom to write complex applications for our system in a matter of seconds. The lights application, as seen in the video, was written in under a minute.
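To give a feel for that, here is a rough sketch of what such a one-minute "lights" app could look like. The DSL verbs (app, on, talk) are our assumptions for illustration, not the actual API; tiny stubs are included so the snippet runs as plain Ruby.

    # Hypothetical sketch of a one-minute "lights" app in a Rails-style DSL.
    # The verbs below are illustrative assumptions, stubbed so this runs.
    def talk(text)
      puts "(speaking) #{text}"      # real system: speak in the caller's room
    end

    def on(phrase, &handler)
      handler.call                   # toy stub: fire immediately instead of registering
    end

    def app(name, tags:, &body)
      body.call                      # real system: register the app under its meta tags
    end

    app "lights", tags: %w[lights lamp lighting] do
      on("turn on the lights") { talk "Lights on." }
    end
    # prints: (speaking) Lights on.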
The server has three abstraction layers: devices, generators, and apps.
Devices are things like microphones and speakers. Developers can easily create new devices, say a washing machine, and we have created some standard devices that automatically integrate into the basic home automation system. For example, let's say you develop a microphone for our system. The driver you write simply declares that it is capable of being a microphone; the server then handles the noise canceling, autocorrelation, and so on, and listens in on the microphone.
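As a minimal sketch of how thin such a driver can be (the class and method names here are illustrative assumptions, not the real interface):

    # Hypothetical driver: it only declares what it can be and hands raw
    # audio to the server. Everything else happens server-side.
    class UsbMicrophone
      CAPABILITIES = [:microphone].freeze   # "I am capable of being a microphone"

      def read_audio_frame
        # A real driver would pull samples from hardware; the server does
        # the noise canceling, autocorrelation, etc.
        Array.new(256) { rand(-32_768..32_767) }   # fake PCM frame for the sketch
      end
    end

    puts UsbMicrophone::CAPABILITIES.inspect   # => [:microphone]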
Generators are "listeners" attached to the devices. Once the server sees a microphone, it will automatically listen in. A standard speech-to-text command generator is included in the "basic" system: it listens in on all microphones and can spawn apps from the commands it recognizes.
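A sketch of what such a generator might look like, under our assumptions about the interface (on_device_added, recognize, and the fake microphone are all illustrative, with recognition stubbed so the snippet runs):

    # Hypothetical generator: attaches to every microphone the server
    # discovers and turns audio into command text for the dispatcher
    # (described below).
    class SpeechCommandGenerator
      def initialize(&emit)
        @emit = emit                    # where recognized commands go
      end

      def on_device_added(device)
        return unless device.capabilities.include?(:microphone)
        text = recognize(device.read_audio_frame)   # speech -> text
        @emit.call(text, device.room) if text
      end

      private

      def recognize(_frame)
        "turn on the lights"            # stubbed recognition result for the sketch
      end
    end

    FakeMic = Struct.new(:capabilities, :room) do
      def read_audio_frame; []; end
    end

    gen = SpeechCommandGenerator.new { |text, room| puts "#{room}: #{text}" }
    gen.on_device_added(FakeMic.new([:microphone], "kitchen"))
    # => kitchen: turn on the lights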
Apps register themselves with "meta tags". Much like a search engine, generators push a search to a dispatcher, which in turn tries to find the correct app to run. Once an application has control, it can either rely on simple built-in functions like "talk ____" or "listen", or it can call devices directly. The system also knows the location of every device, so a "talk" command will only make the computer speak in the same room as the person who started the conversation.
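A minimal runnable sketch of that meta-tag dispatch, assuming a design like the one described (the tag-overlap scoring rule and all names are our assumptions):

    # Apps register tags; generators push a query; the dispatcher scores
    # apps by tag overlap with the query and runs the best match.
    class Dispatcher
      def initialize
        @apps = {}   # app => array of meta tags
      end

      def register(app, tags)
        @apps[app] = tags
      end

      def search(query, room: nil)
        words = query.downcase.split
        app, tags = @apps.max_by { |_a, t| (t & words).size }
        app.call(query, room) if tags && (tags & words).any?
      end
    end

    dispatcher = Dispatcher.new
    lights = ->(query, room) { puts "lights app handling #{query.inspect} in #{room}" }
    dispatcher.register(lights, %w[lights lamp lighting])
    dispatcher.search("turn on the lights", room: "kitchen")
    # => lights app handling "turn on the lights" in kitchen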
Note that this system is not only voice-recognition capable; gesture recognition will also be integrated into the "basic" system.
What we are working on now is the next level of automation. We use the term "NUS" (natural user system) rather than "NUI" (natural user interface) because we mean an intelligent system, not just an interface.
We don't want to disclose too much about how we are integrating intelligence into the system, but it will truly be awe-inspiring once completed. We are working toward the most accurate, fastest, smartest, and most groundbreaking automation system to date. We are not looking to expand on current technologies; we are developing a new one.