Does a Zwave speech module exist?

Is there such a thing as a zwave module that can be programmed so that, on different zwave commands, it will play different audio files on the unit? The audio files would be uploaded by the user on initial setup from a computer (by USB).

This would allow for audio confirmation of zwave commands and scenes.

I am from the UK, so it would be even better if something existed here.

Failing that, does anybody have any ideas how to do this without having a computer on all of the time?

This has been discussed a few times in the forum. I believe the solutions people were using were via either Sonos or a Logitech Squeezebox. There are no z-wave speech type modules available that I know of. If you do a search for Sonos, you can find a few topics discussing what you are looking for.

  • Garrett

Hi,

Thanks for the response.

Because of the lack of speech modules I have been looking at the PC options, using MCE Controller and now node.js (which is a lot more powerful).

Buying a Squeezebox just for audio notifications seems a bit OTT.

Hi Verauser,

I’m curious how you expected to make sounds without some sort of audio processor/amplifier being involved?

I have been using my Sonos to play sound file confirmations and it works quite well. But for you, maybe staying with a PC-based zwave setup, a USB controller and software like EventGhost with HAL etc. might be what you’re after?

I have a similar dream for more audio/speech, so I would be interested to know how you get on.

I think people are using Karotz to do this as well. There are two plugins available.

Given the lack of speech modules, I now use a bit of Lua code to call a TCP server running on my PC. The TCP server is coded using node.js (see nodejs.org).

So, for example, in a scene triggered by my front door PIR, I have some Lua code being triggered like this:

local socket = require("socket")
local host = "192.168.0.3"   -- IP address of the PC running the TCP server
local tcp = socket.tcp()
if tcp ~= nil then
	tcp:settimeout(0.5)      -- give up quickly if the PC is off
	tcp:connect(host, 5150)
	tcp:send("@frontdoor\r")
	tcp:close()
end

(where 192.168.0.3 is the IP address of my PC and 5150 the TCP port)

and a node.js script like this on the PC:

var net = require('net');
var cp = require('child_process');
/*
 * Callback method executed when a new TCP socket is opened.
 */
function newSocket(socket) {
	socket.on('data', function(data) {
		receiveData(socket, data);
	})
}

/*
 * Cleans the input of carriage return, newline
 */
function cleanInput(data) {
	return data.toString().replace(/(\r\n|\n|\r)/gm,"");
}

/*
 * Method executed when data is received from a socket
 */
function receiveData(socket, data) 
{
	var cleanData = cleanInput(data);
	
	if(cleanData == "@frontdoor") 
	{
		cp.exec('saystatic.exe someone at the front door', datacallback);
	}
	
}
 
function datacallback(e, stdout, stderr) {
	if (!e) {
		console.log(stdout);
		console.log(stderr);
	}
}

// Create a new server and provide a callback for when a connection occurs
var server = net.createServer(newSocket);
 
// Listen on port 5150
server.listen(5150, '192.168.0.3');

This calls a command line program called saystatic.exe (I added its directory to the PATH statement). Available here: http://krolik.net/post/Say-exe-a-simple-command-line-text-to-speech-program-for-Windows.aspx
which is simply a front end to the Windows speech functionality.
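As more triggers get added, each one would otherwise need another if branch in receiveData. One way to keep the server tidy is a lookup table mapping commands to phrases — a minimal sketch (the command names and phrases beyond @frontdoor are made-up examples, not part of the setup above):

```javascript
// Map incoming command strings to phrases for the TTS program.
// '@backdoor' and its phrase are invented here for illustration.
var phrases = {
	'@frontdoor': 'someone at the front door',
	'@backdoor': 'someone at the back door'
};

// Look up the phrase for a cleaned command; null means "ignore it".
function phraseFor(command) {
	return phrases.hasOwnProperty(command) ? phrases[command] : null;
}
```

receiveData could then call phraseFor(cleanData) and, when the result is not null, pass it to saystatic.exe as before, silently ignoring unknown commands.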

If the PC is not on, the Lua script times out after half a second, so there are no problems.

Now of course, if you installed node.js onto a Raspberry Pi (a Debian package is available) and plugged in a simple speaker, you could have a speech module for about 40 USD.

Hi VeraUser

Interesting stuff !

Am I right in saying that you have to have both a ‘server’ and a PC running (as well as Vera) for this speech functionality to work?

I like the idea too of doing something with the Raspberry Pi; maybe that could be the speech/audio translator device you need, running with a speaker (or maybe connected via a line-in to an existing stereo etc.).

I appreciate the goal after all is to try and remove the need for too many extra things being on all the time.

Text to Speech & Speech Recognition are something I’m keen on seeing used/developed with Vera more and more, and it’s a growing market for products too, e.g. List of applications/Other - ArchWiki

(I just wish I had the skills to develop/code this sort of stuff myself) :wink:

Hi Parkerc

No, I just have Vera talking to a single PC, on which a TCP server - a bit of code - is running.

With the Raspberry Pi, if you didn’t want to add a text-to-speech translator onto it, you could just get it to play prerecorded audio files instead.

For the playing of prerecorded files, I already have that set up on Vera and running through my Sonos, which works pretty well and is helped by being multiroom too (well, three rooms :wink: ).

Being small, the Raspberry Pi with text to speech could connect via a line-in source on my Sonos to play any sound out, so things that need to be said dynamically or on the fly would be best suited to that method, rather than having to anticipate and prerecord everything.

So what’s your plan VeraUser?

Well, I ordered a Raspberry Pi on Monday - the delivery date is not for another 11 weeks.

But I have also ordered an Everspring doorbell, and I see in this forum there are problems with that, so I may be tinkering with that for a while.

I’ve not long had my Raspberry Pi, so I might have a play and put some Text to Speech software on it, to create a Vera Speech Module.

But how ‘text’ is sent from Vera to the Pi is something I will have to rely on others for :wink:
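One possibility is to reuse the TCP protocol from earlier in the thread, but let Vera send arbitrary text behind a prefix instead of a fixed command. The "@say " prefix here is an invented convention, not something from the original setup:

```javascript
// Parse a cleaned message of the invented form "@say some free text".
// Returns the text to speak, or null if the message is not a @say command
// (so fixed commands like "@frontdoor" can still be handled separately).
function parseSay(message) {
	var prefix = '@say ';
	if (message.indexOf(prefix) === 0) {
		return message.slice(prefix.length);
	}
	return null;
}
```

The Lua side would then just send, e.g., "@say the washing machine has finished\r", and the Pi's server would hand whatever follows the prefix to its TTS program.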

Festival from the Centre for Speech Technology Research (CSTR) looks like a good choice, as there are APIs out there to help (links of interest below).

http://www.cstr.ed.ac.uk/projects/festival/
https://wiki.archlinux.org/index.php/Festival
http://revision3.com/haktip/festival