Why would anybody want to do that when it can be done with a voice command?
Even better, I can also do it with two taps in my mobile app.
A lot of focus is being given to peripheral and marginally useful (to be polite) features, and not much is being done about the core functions of the hub. Mobile app and cloud integration development are probably the last things you should be doing right now. Where are the local processing / local UI, wireless stack implementation, and API/app-layer development?
What Alexa can do is not limited to controlling a few devices; there are many other online services. And not every environment is suitable for voice commands (watching TV, in a movie theater, in a meeting, etc.), where you may still need an answer from Alexa but speaking aloud is not appropriate.
Google seems to offer that (keyboard input); I've googled it and Alexa doesn't, or at least I couldn't find any reference.
But in almost three years I've never used the keyboard with Google either.
Maybe you’ve found a niche… although I don’t see it. Hopefully this feature didn’t take more than one day to implement, as the app really has a lot of issues.
Have you personally ever used the vera android app, Melih?
Local mode for the VeraMobile apps and Ezlo controllers has already been developed for beta users. You can try this feature with the Ezlo Atom v2 or Ezlo PlugHub v2, for which we have opened beta enrolment.
As for the APIs, we’ve started to share with you the pre-alpha versions of the hub API and LUA API. We’ll continue with updates.
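For anyone curious what talking to a hub API of this kind looks like in practice, here is a minimal sketch of the JSON-RPC-style request framing that many hub APIs use over a local websocket or HTTP endpoint. This is an illustrative assumption, not the documented pre-alpha API: the method name `hub.devices.list` and the field layout are placeholders.

```python
import json

def make_hub_request(method, params, req_id="1"):
    """Build a JSON-RPC 2.0 style request body, the framing many
    home-automation hub APIs use over a local websocket.
    Method names here are illustrative, not official docs."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# Hypothetical call asking the hub for its device list.
request_body = make_hub_request("hub.devices.list", {})
```

The point of the sketch is only the shape of the exchange: a small JSON envelope sent to the hub, with the response carrying the same `id` so requests can be matched to replies.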
I’m a fan of VOI simply because it allows me to:
a. silently send Alexa commands without speaking;
b. do so at scheduled times;
c. to whichever Alexa device I choose;
d. as part of a Scene (or, in the future, Reactor routine);
e. read (as opposed to hear) Alexa’s response;
f. control devices outside of the Vera realm without setting up IFTTT applets in advance;
g. start/stop timers, skills, music or utterances on any Alexa device;
h. sidestep dedicated Vera plug-ins (e.g. GCal3) for adding events to any other linked repository (e.g. Google Calendar, shopping lists, etc.).
And that’s just the stuff I’ve thought of and tested so far!
P.S. The moment I learn that VOI may not be carried forward to future Vera firmware revisions is the day this company becomes dead to me. }:->
I mainly use the iOS app. I am not a big Android user, but I have one Android device I test on. I usually use the dev versions of the apps to give the developers feedback on the apps and firmware. So yes, I am a very involved user who tests and gives constant feedback.
I have been using Local mode on my Atom v2 for some time. I identified a few issues with certain access points, and we are in the middle of troubleshooting them with engineering.
Today I will be playing with Zigbee onboarding, which just became available in a beta, on my own edge hardware (not yet out to the beta community) and our brand-new Linux firmware… I have many Zigbee devices I am eager to get going.
I agree with you on the scenes part, I will give you that, although for me the privacy issue is a show-stopper (Google has access to a lot of my private data, and I'm not sharing that).
I was mostly referring to this new feature, the chat. But maybe it's just not for me; hopefully others will find it interesting.
I now understand what's going on - you're only testing on Ezlo devices.
The thing is that over the past few months the Android app in particular has become unusable (I don't need any beta any more, thanks), and I was not sure how you guys were missing that.
Can I assume that any Vera development or fixes are gone?
I only use the new hardware (for example, I have an Atom v2 and a PlugHub v2 operating in local mode at home) with our own RTOS firmware and our own Linux firmware that we developed. These are the future of the company, and they are both maturing nicely. Of course there are many challenges, but we have an amazing team working hard on them. The key is to make sure we have a good, stable firmware platform… a good, stable hardware platform… a good, stable cloud platform… then start bridging old Vera firmware users over to the new platforms and continue building from there…
I do it all… including very heavy scenes with over 30 devices interlaced with VOI commands. It is important to test how scenes perform when interlaced with VOI commands that control Alexa devices, etc.
I also use Siri commands to launch scenes. (I am testing multicasting in the Z-Wave protocol on all these devices, with some interesting results.)
Taking the phone out to press a button to turn something on is not practical or useful in my view. Home automation should be "predictive"… way beyond "smart"…
Is that what you do, @slelieveld: use your app to turn things on or off?
Aside from the beta testing I’m currently doing with the Vera Mobile app – mostly to try VOI through a Linux-fw Vera Edge bought for this purpose alone – I basically never use it. I prefer using the Web UI of my VeraPlus almost exclusively, especially for tweaking plug-ins like Reactor.
I do wish VeraMobile could also act as a direct plug-in/extension for Tasker on Android, but for that purpose I still use the AutoVera app. I'm hoping its connection to my VeraPlus controller never breaks, despite the app having been unmaintained for a couple of years now.
Really sorry, I did not mean to derail this thread off topic. I think I expressed what it would take to really start testing these elsewhere and will keep it there. Basically we are nowhere close. I would suggest you try without your phone and without internet connection to see how it goes. The API is a good step forward, necessary, but not sufficient. Alluding to the necessity of a webUI as @LibraSun mentioned it above and many more of us have discussed before.
And I agree with @melih on the vision that we should not be using phones for home automation. It should be automation - predictive. So for me the mobile app is one step back, and adding texting for control is two steps back. All we need is a fully localized system relying on a webUI (or even a program running on a PC connected to the API) for setup. No, the mobile app is not a valid setup tool. Sorry.
We live in an age of mobile-app clutter. I have been deleting apps from my phones faster than they can be added. Consolidation and simplicity are the paradigm.
I agree with @rafale77 !
However, the line between "mobile" and "non-mobile" devices and screen sizes is blurring. There are now big tablets that run what we call mobile OSes…
The main interaction point, when it comes to why people want a web UI, is scene/rule creation.
So the question becomes how you develop this scene/rule-creation capability so that it can run on the web.
We have a department that's been building something interesting using NativeScript (which runs on all smartphone OSes as well as the web). We are building a "Dashboard Designer": you can design tiles of any size/type, then choose what functionality, color, etc. to map to each tile. Of course, as always, the very first version will be limited in what it can really do, but it's a starting point. (We may get to play with it in 3-4 sprints; I don't have exact details.)
This dashboard designer will initially be a standalone app. Once it is up and running properly, we have ideas about putting full rule/scene-creation capabilities on top of this technology (this is just an idea at the moment). That way we would achieve rule/scene capability across web and mobile platforms.
If we choose not to do that, then we will most definitely have a web-only version of the platform for all of this. (Now you know why I had to talk about the dashboard designer to explain how we may achieve the web UI.)
I wanted to share the context and give a peek into what the development teams are working on.
Here's the thing… I use tablets extensively myself, and I use webUIs on them to set up everything. Tablets are extending more and more into laptop replacements. Thanks to the big screen, we can do a lot more with webUIs and avoid using plugins. By webUI, I mean a local web server which can run without any cloud interaction; this is the key.
Well that’s the thing, I don’t want to. I actually have no interest in having anything run on the web and be internet dependent. I think we discussed this before.
Don't get me wrong. I think a mobile app is a good thing, but I see it as a limited extension of the API, and even of the webUI on which it should be built, because it will be internet dependent and because of the limited screen real estate. It is mere optional gadgetry, the cherry on the cake for a solid local UI. Your approach reminds me of what Nest/Google or ecobee have done: mobile app first, then a cloud-hosted webUI. Not only is that not very innovative (ST is already doing it), it goes against the localized-processing vision you expressed. I got rid of every single such device from my home after having been burnt by one too many of them. Think about it: say I have a router breakdown or an internet outage at my house, or say I don't have any mobile device available. You mean nothing gets processed, or, in the case of the mobile app, I can't even change the configuration of my automation?
A webUI is accessible from mobile devices and desktops/laptops. Not so for a mobile app… It is by design limited in capability and usability. So the dashboard etc. would all be great if it could be done on a web server with a mobile web-page version… and no mobile app! No separate Android and iOS versions, for example, and fully locally hosted. Just a thought… And oh yeah, that's already how my setup works, but it takes a lot of expertise to set up. If Ezlo could make it easy and accessible to the general public, you would have a breakthrough in the market.
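To make the "fully localized webUI" idea concrete: here is a toy sketch, using only Python's standard library, of a dashboard page served entirely from a LAN host. Nothing here is anything Vera/Ezlo actually ships; the names and page content are invented for illustration. Any phone, tablet, or laptop browser on the network could reach it, with no cloud round-trip anywhere.

```python
import http.server
import threading

PAGE = b"<html><body><h1>Local dashboard</h1></body></html>"

class DashboardHandler(http.server.BaseHTTPRequestHandler):
    """Serve a single static page from the local host --
    no internet dependency at all."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_local_ui(port=0):
    """Start the server on a background thread. Port 0 lets the
    OS pick a free port; returns (server, actual_port)."""
    server = http.server.HTTPServer(("127.0.0.1", port), DashboardHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

A real local UI would of course serve the controller's actual state rather than a static page, but the architecture is the same: the hub (or a PC beside it) is the web server, and every browser on the LAN is a client.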
Not the scene running on the web (which we call the cloud); the scene creator running as a web app. The discussion is about creating a "web UI"; it's not about running scenes in the cloud.
As mentioned before, we are already running scenes locally on Atoms… (despite you saying it couldn't be done… remember?) as well as in the cloud; we can do both! You are misreading what I have written! You are conflating our creating a "web UI" with running scenes in the cloud (we don't refer to running scenes as running on the web) and rehashing old arguments.
Also, for a techie like you it should be easy enough to run a network scan to see that we already have a web server running in the Atom! (Again, despite you saying the Atom can't do it… remember?) So you already have full local access to the Atom.
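That network-scan suggestion is easy to reproduce without installing anything. A short sketch: check whether a TCP port answers on the hub's address, which is all it takes to confirm that something like a local web server is listening. The IP address and port in the example are placeholders for your own network, not a documented Atom address.

```python
import socket

def service_listening(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds.
    A successful connect means *something* is listening there,
    e.g. a hub's local web server; no cloud involved."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical: probe an Atom at 192.168.1.50 for a web server.
# service_listening("192.168.1.50", 80)
```

A full scanner like nmap gives the same answer with more detail, but a plain TCP connect is enough to settle the "is there a local server" question.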
Everything gets processed… locally… the whole scene is stored and executed inside the Ezlo hardware. I really don't know what you are talking about… There is a HUGE disconnect between your perception of our hardware and the reality of how it works. I'm happy to send you a few samples of the Atom v2, PlugHub, etc. so that you can play with them.
If an app runs locally on a smartphone, that doesn't mean it requires the internet. A dashboard running in your app could easily run locally, making a local connection to the Ezlo hardware without any internet at all. Today our app connects directly to the Ezlo hardware (as the Ezlo hardware has a web server running inside). And as I mentioned before, our choice of NativeScript as a technology will allow us to run it on three different platforms (web, iOS, Android).
At this moment the app lacks nearly all functionality other than adding a device, exercising a tiny bit of control over it (most device functions are not available in the Android app), and creating scenes. All the other functions the Vera web GUI delivers that are key to a mature home-automation setup are missing, and even the web GUI lacks quite a bit (zero troubleshooting at the network or device level, for example). It will be a huge effort to add all those functions to the mobile app, and most of them I would not want to use without a keyboard.
So I agree the integration efforts are nice, but you are still missing the most basic functionality for a proper controller. Right now these hubs are good for the sunny days when everything works right out of the box and nothing ever goes faulty. With our Z-Wave experience we know that is not how it really works, and that is not just because of the current state of the Vera controllers. I fear you will be in this pre-alpha, alpha, beta stage for a long time. The real world will be back to normal sooner, I expect.