We’re sorry about this issue. Unfortunately, there might be an issue with your particular setup causing the storage environment to crash. Your Ezlo controller should be able to handle more than 43 devices. We’d like to replicate this and let you know what else we can find. Please open a ticket with us indicating your controller’s serial number and whether you give us permission to create support credentials.
We’re sorry about the delay. Rest assured that we’re looking into this closely. We have asked the developers to shed more light on why the storage API does not expose the lmdb environment, which would otherwise allow setting an appropriate map size. We’re waiting on their input.
Thank you for bringing up this topic. We reviewed the issue you described. We plan to create a new FW build with a fix for DB size by the end of the week. We can provide you with this early beta update if you want to try the fix.
Thanks for the update! We’re looking forward to more storage. Three-ish quick questions:
What do I actually have to do to take up this update in our controllers?
Was there a straight-up bug being fixed, or what’s changed and what do I have to do?
Part of our original request was for more insight into what this storage subsystem is doing, and how. I still don’t feel like I’ve got that. Can you say more here, or in the Storage - Ezlo API Documentation, about how storage capacity is managed in the current implementation, to address some of the questions raised upthread?
Today the new firmware version was pushed live. It may take approximately 24 hours for the update to reach all controllers. You can force an update by power-cycling the controller. I’d suggest first checking which firmware version your controllers currently have.
As for your next two questions, I’ll discuss them with our devs and come back with comments.
Please find below additional info related to the issue discussed in this thread.
We had a 1 MB limit on storage (both main and temporary). This caused issues when serving approximately 50 Z-Wave devices; for test_plugin devices, the limit is reached at around 700.
We also limit each plugin to 1 MB of storage, so a single plugin could consume all of the available space.
The root cause is a discrepancy in the lmdb library: the documentation states the default memory-map size is 10 MB, but in practice it is 1 MB.
The fix is to increase the map size so that more data can be stored on a controller. We applied the following logic:
For main storage, allocate 30% of the flash drive, clamped to no less than 10 MB and no more than 1 GB.
For temporary storage, allocate 10% of the RAM, clamped to no less than 10 MB and no more than 1 GB.
With this change, storage.bin and temp.storage.bin can grow up to these limits and are able to store all needed data.
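If it helps to see the sizing rule concretely, here is a small sketch of the clamping described above. This is an illustration in Python with hypothetical function names, not the firmware’s actual code:

```python
FLOOR = 10 * 1024**2   # 10 MB lower bound
CEIL = 1024**3         # 1 GB upper bound

def clamp(size_bytes):
    """Clamp an allocation to the [10 MB, 1 GB] range."""
    return max(FLOOR, min(size_bytes, CEIL))

def main_storage_limit(flash_bytes):
    # Main storage: 30% of the flash drive, clamped.
    return clamp(int(flash_bytes * 0.30))

def temp_storage_limit(ram_bytes):
    # Temporary storage: 10% of RAM, clamped.
    return clamp(int(ram_bytes * 0.10))

# Example: a controller with 4 GB of flash and 512 MB of RAM.
print(main_storage_limit(4 * 1024**3))   # 30% of 4 GB exceeds 1 GB, so it clamps to 1 GB
print(temp_storage_limit(512 * 1024**2)) # 10% of 512 MB is ~51 MB, within bounds
```

The clamping means even very small controllers get at least 10 MB, while large flash drives don’t let the database grow without bound.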
@Max Thanks for that update. We’ll do some destructive testing to understand what that means for us.
Another angle has come up. I’m happy to address it elsewhere if that’s better…
Only one of our four Ezlo Plus has taken up the version update to firmware 2.0.30 automatically. Per ezlogic.mios.com, the other 3 are still on 2.0.29. I’d happily check also by logging into the controller, but while I’ve looked at many files in /etc with various version numbers, none hold the version number of the firmware. Maybe you could point me in the right direction and also suggest what might be going on with the boxes not updating. I really don’t want to fix this symptomatically, but to get at the root cause!
While we see an increase in capacity, it’s strangely short of what we would expect, and we are now getting a different error message.
We would really appreciate documentation and tools to form deeper insights about what’s going on.
After updating to your newest firmware, testing shows we’re able to store 3.5-7x more items of data than before. The file /home/data/storage.bin, which I think is where the data is stored, is now about 3.5 MB.
We are primarily storing a single large Lua array that can now hold about 710 items before failing. A maximal entry is around 900 bytes, based on inspection of storage.bin, so we can account for only about 650 KB of the 3.5 MB footprint of storage.bin.
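For what it’s worth, the back-of-envelope math behind that 650 KB figure (just a quick Python check, nothing more):

```python
entries = 710
max_entry_bytes = 900          # upper bound per entry, from inspecting storage.bin
accounted = entries * max_entry_bytes
print(accounted)               # 639000 bytes, i.e. roughly 650 KB

file_size = 3.5 * 1024**2      # observed size of storage.bin
print(accounted / file_size)   # only ~17% of the file is our payload
```

So over 80% of the file’s footprint is unaccounted for, which is part of why we’d like more visibility into the storage internals.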
We no longer see MDB_MAP_FULL but instead:
Fatal error: DbLuaBinding: storage.setTable: fail to set a table value
When this happens, we remove the oldest event in the array and try saving again. The total size currently hovers around 710 entries.
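Our retry logic is roughly the following. This is a simplified Python sketch with a fake store; the real code is Lua against the storage API, and `save_table`, `ToyStore`, and the event names are all hypothetical:

```python
def save_with_eviction(store, key, events):
    """Try to persist the event list; on failure, drop the oldest
    event and retry until the save succeeds or the list is empty."""
    while events:
        try:
            store.save_table(key, events)   # analogous to storage.setTable
            return True
        except RuntimeError:                # "fail to set a table value"
            events.pop(0)                   # evict the oldest event
    return False

# Toy store that only accepts lists of up to 3 entries,
# to show the loop converging on a size that fits.
class ToyStore:
    def __init__(self, capacity=3):
        self.capacity, self.data = capacity, {}
    def save_table(self, key, value):
        if len(value) > self.capacity:
            raise RuntimeError("fail to set a table value")
        self.data[key] = list(value)

store = ToyStore()
events = ["e1", "e2", "e3", "e4", "e5"]
print(save_with_eviction(store, "events", events))  # True
print(store.data["events"])                         # ['e3', 'e4', 'e5']
```

The effect is that the array size self-regulates at whatever the storage layer will accept, which is why it hovers around 710 entries.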
Can you help us understand what’s going on?
storage.bin is held open by 154 simultaneous file descriptors. I don’t even know whether that’s a smell, but I thought I’d mention it.
/tmp/log/firmware# lsof | grep /home/data/storage.bin$ | wc
154 1602 16786