Microsoft Kinect as a sensor

Saw this posted on a few blogs today…the Kinect could be the “smart sensor” we need integrated with Vera.

Anyone interested in putting this together? Here’s some info…

http://www.electronichouse.com/article/hacked_kinect_makes_perfect_home_automation/#When:15:27:05Z
http://nitrogen.posterous.com/home-automation-and-lighting-control-with-kin

In fact I was just working on this 2 weeks ago. I’ve got the freenect library compiled for the Vera, but haven’t written any interesting code yet:

http://openkinect.org/wiki/Main_Page

I’ll post instructions and an opkg file when I get things cleaned up a bit. In the meantime we can start brainstorming software ideas.

The library above basically provides access to the motor (tilt control), the camera (RGB color values), and the depth sensor. The challenge is to implement algorithms that produce interesting results while working within the limitations of the Vera. The demos that have been making the rounds lately do all their processing on a real computer.
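To give a flavor of the API, here’s roughly what a minimal libfreenect program looks like in C. The depth callback body and the tilt angle are just placeholders and error handling is trimmed, so treat it as a sketch rather than the actual plugin code:

[code]
#include <stdio.h>
#include <stdint.h>
#include "libfreenect.h"

/* Called by libfreenect for every new 640x480 depth frame.
 * Each pixel is an 11-bit raw depth value packed in a uint16_t. */
static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
{
    uint16_t *frame = (uint16_t *)depth;
    /* placeholder: presence/gesture logic would go here */
    printf("depth frame %u, first pixel = %u\n",
           (unsigned)timestamp, (unsigned)frame[0]);
}

int main(void)
{
    freenect_context *ctx;
    freenect_device  *dev;

    if (freenect_init(&ctx, NULL) < 0 || freenect_open_device(ctx, &dev, 0) < 0) {
        fprintf(stderr, "no Kinect found\n");
        return 1;
    }

    freenect_set_tilt_degs(dev, 0);              /* motor: level the head */
    freenect_set_depth_callback(dev, depth_cb);
    /* newer library versions want the mode set explicitly;
       older ones used freenect_set_depth_format() instead */
    freenect_set_depth_mode(dev,
        freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_start_depth(dev);

    /* pump USB events; depth_cb fires from inside this loop */
    while (freenect_process_events(ctx) >= 0)
        ;

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}
[/code]

On the Vera you’d cross-compile this against the library and probably skip most of the 30 frames per second rather than trying to process them all.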

On the other hand, we don’t need complicated real-time image processing, only something that checks for the presence of bodies in the room a couple of times per second.
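Along those lines, something as dumb as differencing each depth frame against a reference frame captured while the room is empty might be enough. A rough sketch (the thresholds are invented and not tuned on real data; raw value 2047 is the sensor’s “no reading” code):

[code]
#include <stdint.h>

#define W 640
#define H 480
#define DEPTH_JUMP   60    /* raw-depth change that counts as "something there" (guess) */
#define MIN_PIXELS 2500    /* roughly 0.8% of the frame must change before we call it a body (guess) */

/* Compare the latest depth frame against a reference frame grabbed
 * while the room was empty.  Returns 1 if enough pixels moved closer
 * to the sensor, which we take to mean someone is present. */
int body_present(const uint16_t *frame, const uint16_t *background)
{
    int changed = 0;
    for (int i = 0; i < W * H; i++) {
        if (frame[i] == 2047 || background[i] == 2047)
            continue;                               /* skip invalid readings */
        if (background[i] - frame[i] > DEPTH_JUMP)  /* closer = smaller raw value */
            changed++;
    }
    return changed >= MIN_PIXELS;
}
[/code]

Run that on, say, every 15th frame from the depth callback and you’ve basically got an occupancy sensor without any heavy image processing.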

The Kinect would be one expensive motion sensor, and I’d have to assume this device would have to connect directly to Vera, right? Or use some sort of USB-to-Ethernet device so you could place the sensor far away from Vera.

Without a PC to handle the image processing, the Kinect’s capability would be pretty much limited to that of a motion sensor, yes? Would it be possible for the Kinect to differentiate between various hand gestures, at the least? Although, I see the TI Chronos watch plugin coming a long way… lol, they’ll probably have hand gesture movements down a week from now. That’s just insane. lol.

If a Kinect connected directly to Vera, without a PC involved, could differentiate between various hand gestures, then that could get interesting. You could then assign different scenes and/or devices to specific hand gestures. I just don’t see how it could be implemented in a practical sense, though. What I mean is, you don’t want it to have a whole view of your living room and then trigger numerous devices and/or scenes just because you walked by moving your hands. lol.

Depth sensor: it would be cool to be able to specify that if certain hand movements are picked up within a certain vicinity, X action is performed.
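Riffing on that: since every pixel comes with a distance, the trigger could be gated on a depth window, e.g. only count activity between roughly 0.6 m and 1.2 m from the sensor. A sketch, using the approximate raw-to-meters formula posted on the OpenKinect wiki (the coefficients vary a bit from unit to unit):

[code]
#include <stdint.h>

/* Rough raw-depth -> meters conversion from the OpenKinect wiki
 * (approximate; every Kinect is calibrated slightly differently). */
static double raw_to_meters(uint16_t raw)
{
    if (raw >= 2047)
        return 0.0;                       /* no reading */
    return 1.0 / (raw * -0.0030711016 + 3.3309495161);
}

/* Count how many pixels in the frame fall inside a distance band,
 * e.g. a hand held 0.6-1.2 m in front of the sensor.  The caller can
 * treat "enough pixels in the band" as the arming condition before
 * looking for any gesture at all. */
int pixels_in_zone(const uint16_t *frame, int npixels,
                   double near_m, double far_m)
{
    int count = 0;
    for (int i = 0; i < npixels; i++) {
        double m = raw_to_meters(frame[i]);
        if (m > near_m && m < far_m)
            count++;
    }
    return count;
}
[/code]

Combine that with the presence check above and you could require the hand to be held in a specific zone before anything fires, which should cut down on the walk-by false triggers mentioned earlier.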

Anyone else have any other ideas?

Any updates on this? I have a spare Kinect sensor lying around and I want to put it to use!

Any news on this, guys?