Blog post about Ezlo VOI - text chat with Alexa and Google

We’ve just released into Beta an exciting new feature of Ezlo VOI™: text chat with Alexa and Google.
You can read more about it in this blog post.

Excellent feature!!!
Now I can text Alexa :slight_smile:

I’m highly confused by the exciting feature, the innovation, or the home automation in this…
For Google Home you can long-press the middle button and just send commands to it; you’re proposing that I wait for the Vera app to load and give it a command there?
And for Alexa, you just need to open the Alexa app… it’s right there:

I’m probably missing something?

Can you send a “text” command to Alexa using that button (not voice)?

Can you do something like “Alexa, turn tantan on” (in text…)?

A picture is worth a thousand words…


Translate: why would anybody want to do that when it can be done with a voice command?
Even better, I can also do it with two taps in my mobile app.

A lot of focus is given to peripheral and marginally useful (to be polite) features, and not much is being done about the core functions of the hub. Mobile app and cloud integration development are probably the last things you should be doing right now. Where are the local processing/local UI, the wireless stack implementation, and the API/app-layer development?


No idea about that, but what’s the proposed use case for texting Alexa as opposed to talking to her? Just curious.


I feel that ‘texting’ Alexa may be useful, but if I’m going to pull my phone out, why not just control the device directly through the app at that point?

Edit: Not trying to talk down any progress; it was just an honest observation.


What Alexa can do is not limited to controlling a few devices; there are many other online services. Not every environment is suitable for voice commands (watching TV, in a movie theater, in a meeting, etc.), where you may still need an answer from Alexa but speaking out loud is not appropriate.

Google seems to offer that (a keyboard); I’ve googled, and Alexa doesn’t, or at least I couldn’t find any reference.
But in almost 3 years I’ve never used the keyboard for Google either.
Maybe you’ve found a niche… although I don’t see it. Hopefully this feature didn’t take more than a day to implement, as the app really has a lot of issues.
Have you personally ever used the Vera Android app, Melih?

Hi @rafale77,

Local mode for the VeraMobile apps and Ezlo controllers is already available to beta users. You can try this feature with the Ezlo Atom v2 or Ezlo PlugHub v2, for which we have opened the Beta enrolment.

As for the APIs, we’ve started sharing pre-alpha versions of the hub API and LUA API with you, and we’ll continue with updates.
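For context, on classic Vera firmware a Lua call against the hub looks like the minimal sketch below; the device number is a placeholder, and the new pre-alpha APIs may differ in shape:

```lua
-- Classic Vera (luup) Lua: switch a binary device on.
-- The device number (42) is a placeholder for illustration;
-- the new pre-alpha hub/LUA APIs may look different.
luup.call_action(
  "urn:upnp-org:serviceId:SwitchPower1", -- standard UPnP switch service
  "SetTarget",                           -- action: set the on/off target
  { newTargetValue = "1" },              -- "1" = on, "0" = off
  42                                     -- device number (placeholder)
)
```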

I’m a fan of VOI simply because it allows me to:
a. silently send Alexa commands without speaking;
b. do so at scheduled times;
c. to whichever Alexa device I choose;
d. as part of a Scene (or, in the future, a Reactor routine; see the sketch below);
e. read (as opposed to hear) Alexa’s response;
f. control devices outside of the Vera realm without setting up IFTTT applets in advance;
g. start/stop timers, skills, music or utterances on any Alexa device;
h. sidestep dedicated Vera plug-ins (e.g. GCal3) for adding events to any other linked repository (e.g. Google Calendar, shopping lists, etc.).

And that’s just the stuff I’ve thought of and tested so far!
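To illustrate item (d): a scene’s Lua fragment could drive VOI with something along these lines. To be clear, this is only a sketch; the service ID, action name, and parameter names below are my own guesses, not the documented VOI interface:

```lua
-- HYPOTHETICAL sketch: the service ID, action name, and parameter
-- names here are guesses for illustration, not the real VOI API.
luup.call_action(
  "urn:ezlo-com:serviceId:VOI1",        -- hypothetical VOI service ID
  "SendTextCommand",                    -- hypothetical action name
  { Text = "turn on the porch light",   -- the command, as text
    Endpoint = "Kitchen Echo" },        -- hypothetical target Alexa device
  123                                   -- hypothetical VOI device number
)
```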

  • Libra

P.S. The moment I learn that VOI may not be backported to future Vera firmware revisions is the day this company becomes dead to me. }:->


Yes.
I mainly use the iOS app. I am not a big Android user, but I have one Android device I test on. I usually use the dev versions of the app to give the developers feedback on the apps and firmware. So yes, I am a very involved user who tests and gives constant feedback.

I have been using Local mode on my Atom v2 for some time. I identified a few issues with certain access points, and we are in the middle of troubleshooting that with engineering, etc.

Today I will be playing with Zigbee onboarding, which just became available in a beta, on my own edge hardware (not yet out to the beta community) and our brand-new Linux firmware… I have many Zigbee devices I am eager to get going.

I agree with you on the scenes part, I will give you that, although for me the privacy issue is a showstopper (Google has access to a lot of my private data, and I’m not sharing that).

I was mostly referring to this new feature, the chat. But maybe it’s just not for me; hopefully others find it interesting.

@melih
I now understand what’s going on: you’re only testing on Ezlo devices.
The thing is that over the past months the Android app in particular has become unusable (I don’t need any beta anymore, thanks), and I wasn’t sure how you guys were missing that.
Can I assume that any Vera development or fixes are gone?

Correct.
I only use the new hardware (for example, I have an Atom v2 and a PlugHub v2 operating in local mode at home) and our own RTOS firmware or our own Linux firmware that we developed. These are the future of the company, and they are both maturing nicely. Of course there are many challenges, but we have an amazing team working hard on these. The key is to make sure we have a good, stable firmware platform, a good, stable hardware platform, and a good, stable cloud platform… then start bridging old Vera firmware users over to the new platforms and continue building from there…

Probably you turn the kitchen lights on and off manually via your app?

I do it all, including very heavy scenes with over 30 devices that are interlaced with VOI commands. It is important to test how scenes perform when interlaced with VOI commands that control Alexa devices, etc.
I also use Siri commands to launch scenes. (I am testing multicast in the Z-Wave protocol on all these devices, with some interesting results.)
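For context, one common way a phone shortcut launches a scene is a single HTTP request against the controller’s classic local API on port 3480; the IP address and scene number below are placeholders:

```
http://192.168.1.10:3480/data_request?id=action&serviceId=urn:micasaverde-com:serviceId:HomeAutomationGateway1&action=RunScene&SceneNum=5
```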
Taking the phone out to press a button to turn something on is not practical or useful in my view. Home automation should be “predictive”, way beyond “smart”…
Is that what you do, @slelieveld, use your app to turn things on or off?

Aside from the beta testing I’m currently doing with the Vera Mobile app (mostly to try VOI through a Linux-firmware Vera Edge bought for this purpose alone), I basically never use it. I prefer using the Web UI of my VeraPlus almost exclusively, especially for tweaking plug-ins like Reactor.

I do wish VeraMobile could also act directly as a plug-in/extension for Tasker on Android, but for that purpose I still use the AutoVera app, hoping its connection to my VeraPlus controller never breaks, despite it being an unmaintained app for a couple of years now.
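(In the meantime, Tasker’s HTTP Request action can hit the same classic local API on port 3480 directly, e.g. to flip a switch; the IP address and device number below are placeholders:)

```
http://192.168.1.10:3480/data_request?id=action&serviceId=urn:upnp-org:serviceId:SwitchPower1&action=SetTarget&newTargetValue=1&DeviceNum=66
```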


I never use the app. Well, only to exclude and include devices, but I always end up using my Windows tablet for that…


Hi @loana,

Really sorry, I did not mean to derail this thread off topic. I think I have expressed elsewhere what it would take to really start testing these, and I will keep it there. Basically, we are nowhere close. I would suggest you try without your phone and without an internet connection to see how it goes. The API is a good step forward, necessary but not sufficient. I am alluding to the necessity of a web UI, as @LibraSun mentioned above and as many more of us have discussed before.
And I agree with @melih on the vision that we should not be using phones for home automation. It should be automation: predictive. So for me the mobile app is one step back, and adding texting for control is two steps back. All we need is a fully localized system relying on a web UI (or even a program running on a PC connected to the API) for setup. No, the mobile app is not a valid setup tool. Sorry.
We live in an age of mobile app clutter. I have been deleting apps from my phones faster than they can be added. Consolidation and simplicity are the paradigm.


I agree with @rafale77!
However, the line between “mobile” and “non-mobile” and between screen sizes is blurring. Now there are big tablets that run on what we call mobile OSes…
The main interaction point is “scene/rule creation” when it comes to why people want a web UI.
So the question becomes how you develop this “scene/rule creation” capability so that it can run on the web.

We have a department that’s been building something interesting using NativeScript (which runs on all smartphone OSes as well as the web). We are building a “Dashboard Designer”: you can design tiles of any size/type, then choose what functionality to map to each tile, its color, its type, etc. Of course, as always, the very first version will be limited in what it can really do, but it’s a starting point. (We may get to play with it in 3-4 sprints; I don’t have exact details.)

This dashboard designer will initially be a standalone app. Once it is up and running properly, we have ideas about putting full “rule/scene” creation capabilities on top of this technology (this is just an idea at the moment). That way we would achieve rule/scene capability across web and mobile platforms.
If we choose not to do that, then we will most definitely have a web-only version of the platform for all this. (Now you know why I had to talk about the dashboard designer to explain how we may achieve the web UI.)

I wanted to share the context and give a peek into what the development teams are working on.