Send text for Alexa to speak - e.g. via a scene?

Hi

I’ve used a number of TTS apps/assistants with my Vera over the years, so I was wondering if it is (or will be) possible for us to create some “text” for Alexa to speak, generated from information held on Vera? Does anyone know?

I currently have a number of scripts that I can run on Vera which will be announced via Sonos - e.g. I can run a scene that checks if any doors/windows are open and, if so, it tells me exactly which one(s).

Sorry if this has been asked/answered already.

I, too, am interested in this. I don’t have a solution yet, but I have some thoughts. Maybe someone else can build on these or has better ideas. And I’m sure I don’t have all the correct information.

First, as far as I know, Alexa only supports the Smart Home Skill API, which doesn’t yet include speaking arbitrary text. It has some queries for thermostats, locks and such (see https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/smart-home-skill-api-reference), but that looks limited. And those are only queries, not something that you can trigger.

Next I was wondering about brute force with Bluetooth: have a Raspberry Pi do the text-to-speech, connect to the Echo via Bluetooth, play the message, and disconnect. Whatever music was playing prior should then continue on afterwards. I’ve been searching but can’t find much to indicate this is possible, so my hopes aren’t high.
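In case it helps, here is roughly what I was picturing, as a purely hypothetical Python sketch - it assumes the Echo has already been paired as a Bluetooth speaker from the Pi, that BlueZ and bluez-alsa are set up so ALSA can reach it, and that the MAC address and file path are placeholders:

```python
# Hypothetical sketch: push a pre-rendered announcement to an Echo over Bluetooth.
# Assumes the Echo is already paired/trusted and bluez-alsa exposes it as an ALSA device.
import subprocess
import time

ECHO_MAC = "XX:XX:XX:XX:XX:XX"            # placeholder: the Echo's Bluetooth address
ANNOUNCEMENT = "/home/pi/announce.wav"    # WAV produced earlier by any TTS engine

def bt(command: str) -> None:
    # bluetoothctl accepts one-shot commands as arguments in recent BlueZ releases.
    subprocess.run(["bluetoothctl", *command.split()], check=False)

bt(f"connect {ECHO_MAC}")
time.sleep(3)                             # give the A2DP link a moment to come up
subprocess.run(
    ["aplay", "-D", f"bluealsa:DEV={ECHO_MAC},PROFILE=a2dp", ANNOUNCEMENT],
    check=False,
)
bt(f"disconnect {ECHO_MAC}")              # drop the link so the Echo can resume
```

Whether the Echo actually goes back to whatever it was playing once the Bluetooth source disconnects is exactly the part I haven’t been able to confirm.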

You can create your own Alexa skill that can access arbitrary text. So as long as you can get the text to a location accessible by the host of your skill, then you can likely have Alexa play it back for you. The harder part is getting a secure location for this, but it could be done. One key piece is missing, however: this is only a query, not a push. Vera could likely get the text to a location and Alexa could play it back, but only if you asked her to do so.
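To make that concrete, here’s a minimal Python sketch of the sort of Lambda handler that could back such a skill - the URL and the intent name GetAnnouncementIntent are purely my assumptions, and Vera would need to have pushed the text to that location beforehand:

```python
# Minimal sketch of a custom Alexa skill backend (AWS Lambda), assuming the
# announcement text has been dropped at a URL the skill host can read.
import urllib.request

ANNOUNCEMENT_URL = "https://example.com/vera/announcement.txt"  # hypothetical drop location

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest" and \
            request["intent"]["name"] == "GetAnnouncementIntent":
        with urllib.request.urlopen(ANNOUNCEMENT_URL) as resp:
            text = resp.read().decode("utf-8").strip() or "There is no announcement."
    else:
        text = "Ask me for the announcement."

    # Standard Alexa response envelope: plain-text speech, then end the session.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```

Again, though, it only speaks when you ask it to.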

So then we get to the true Rube Goldberg idea: create the skill, keep the RPi, and have the RPi play audio over a speaker near the Alexa to invoke the skill and get the announcement - “Alexa, ask my skill for the announcement” (the Pi side of that is sketched below). There has to be a better way…
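For completeness, the Pi side of that hack would be almost trivial - a minimal sketch, assuming the espeak package is installed and a speaker is attached:

```python
# Rube Goldberg sketch: have the Pi literally speak the invocation phrase at a
# nearby Echo whenever Vera has dropped a new announcement for the skill to read.
import subprocess

subprocess.run(
    ["espeak", "-s", "130", "Alexa, ask my skill for the announcement"],
    check=False,
)
```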

I have done this with Vera Concierge for Google Home … via the Vera Alerts plugin, you can send text to be spoken to a Google Home device and/or any Google Cast device. The Vera Concierge handles the TTS conversion.

The Amazon Polly & Sonos setup is the best I have used with Vera so far (http://forum.micasaverde.com/index.php/topic,49618.msg326146.html#msg326146).
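Not the exact approach from that thread, but to give a flavour of how Polly and Sonos can be glued together, here’s a rough Python sketch - it assumes boto3 credentials are configured, the soco library is installed, the MP3 is written somewhere the Sonos can fetch over HTTP, and the zone name and server address are placeholders:

```python
# Rough sketch: synthesise an announcement with Amazon Polly and push it to a Sonos zone.
import boto3
import soco

polly = boto3.client("polly")
speech = polly.synthesize_speech(
    Text="The back door is open.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# Save the MP3 somewhere a local web server exposes to the Sonos (assumed path).
with open("/var/www/html/announce.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

zone = soco.discovery.by_name("Kitchen")            # placeholder zone name
zone.play_uri("http://192.168.1.50/announce.mp3")   # placeholder server address
```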

Ironically, with the above I’m using an Amazon speech engine product outside of Alexa - the bonus of the Sonos tie-in is that all announcements can be made across multiple rooms (using my existing setup).

However, there’s obviously no voice recognition capability in that setup - even if, as I understand it, Sonos are working on something of their own as well as doing tie-ins with Alexa and maybe others too.

If the Vera → Alexa API can support text for tailored responses, that would be awesome!

There is some development on this front and I think this would be the best solution: Alexa’s opt-in notifications go live, allowing skills to alert you with lights and chimes | TechCrunch

Quote from the article.

... once enabled, the skill using notifications can opt to change the color of the device’s LED light, and use an audio chime to capture the user’s attention

I can imagine this could get a little congested (and confusing) if you don’t limit the skills you enable notifications for on your setup.

I agree. You need to think about what to enable.

I’m mulling over the Sonos One for this very reason - Sonos TTS with Alexa built in. I’ve read that it isn’t 100% the same as an Echo/Dot/Tap etc., but it might be close enough to work for my purposes: play music on command, control devices through Alexa, and allow for TTS messages and return to the music after the TTS ends.

The Sonos One is an interesting proposition, and the choice comes down to your desire for combined/integrated solutions vs. separates.

While I’ve not looked at the new Sonos One in detail, thanks to the release of the Sonos Skill I can now control aspects of my existing Sonos setup via my existing Dots. (Nice! :slight_smile: )

Over the years I’ve chosen to move away from a single system/protocol (e.g. Z-Wave) for my entire setup and instead looked to embrace separate open solutions much more - as this gives me greater flexibility of choice/opportunities and a greater return on my investment.

While Alexa is my home digital assistant of choice (for now) and Siri for work, I know my tastes will change and I would like my nodes/devices to be flexible enough to move with them.

[i]As a side note - with Alexa being my current assistant of choice, to embrace it more I’ve just started to look into building some homebrew Alexa devices (using a couple of existing Raspberry Pis I have) - have a look here → https://pimylifeup.com/raspberry-pi-alexa/[/i]

I use an RPi with AWS/Amazon Polly. I have scenes/PLEGs that send a command to my RPi with a USB speaker (DIY Sonos).
This is what I’m using for voice notifications. I have them in the same room as, or near, my Echo Dots, so you can automate Polly to say something to Alexa when you walk into a room, for example; then Alexa will tell you the weather or whatever.
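For anyone wanting to try the same thing, a stripped-down sketch of that kind of listener is below - not my exact script, just the general idea. It assumes boto3 credentials are configured and mpg123 is installed; the port, voice and paths are placeholders. A Vera scene or PLEG action then only has to request http://<pi-address>:8111/?text=the+back+door+is+open to trigger the announcement.

```python
# Stripped-down "DIY Sonos" listener: Vera calls this Pi over HTTP with ?text=...,
# Polly turns the text into speech, and the MP3 plays through the USB speaker.
import subprocess
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

import boto3

polly = boto3.client("polly")

class SpeakHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        text = urllib.parse.parse_qs(query).get("text", ["Nothing to say"])[0]

        audio = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId="Joanna")
        with open("/tmp/say.mp3", "wb") as f:
            f.write(audio["AudioStream"].read())

        subprocess.run(["mpg123", "-q", "/tmp/say.mp3"], check=False)
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8111), SpeakHandler).serve_forever()
```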

Can’t see a post of success yet. Has anyone managed to do anything TTS-like yet?

This has been proven in a different context, openHAB, so there’s no reason it couldn’t be done for Vera as well. Some people report that it works very well, but others have trouble keeping the login to Amazon current. It allows the controller to command the Echo to do anything possible on the alexa.amazon.com web page, which covers quite a bit beyond TTS.

https://github.com/openhab/openhab2-addons/tree/master/addons/binding/org.openhab.binding.amazonechocontrol

Just a small matter of coding to port it over to Vera…

Wow, that looks and sounds great, as far as my more limited technical ability can tell.

All we need is someone to turn it into a Vera plugin now. Beyond me, but … Any takers?