We are running into a problem when writing a table to storage using storage.set_table(). The error is: “MDB_MAP_FULL: Environment mapsize limit reached”. We hit this limit when the table gets to be about 62,000 bytes.
MDB_MAP_FULL is an LMDB error meaning the maximum size in bytes of the DB has been reached, but 62,000 seems like a very small value.
Has anyone else run into this issue? Is there a way to increase the map size?
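For reference, a minimal sketch of the kind of write that triggers this for us (the key/value signature of storage.set_table() shown here is illustrative, not exact):

```lua
-- Build a table whose serialized form is well over 62,000 bytes.
local big = {}
for i = 1, 2000 do
  big["key_" .. i] = string.rep("x", 32)
end

-- Persisting it fails once the serialized size nears the limit:
-- "MDB_MAP_FULL: Environment mapsize limit reached"
storage.set_table("my_table", big)
```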
We are sorry to hear you’re running into this issue. As you mentioned, the default LMDB map size is in fact 64 KB, which matches the table size at which you’re encountering this error. The map size can be adjusted in your environment setup when using the Lua wrapper for LMDB.
According to the LMDB documentation, an environment should be initialized as follows:
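For illustration, here is what such an initialization looks like with the open-source lightningmdb Lua binding (whether Ezlo’s wrapper exposes these exact calls is an assumption):

```lua
local lightningmdb = require("lightningmdb")

-- Create the environment handle, then size it before opening.
local env = lightningmdb.env_create()

-- map_size must be set before env:open(); 100 MB is an arbitrary example.
env:set_mapsize(100 * 1024 * 1024)

-- path must be an existing directory; 0 = no flags, 420 = octal 0644 mode.
env:open(path, 0, 420)
```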
where path is the directory within the file system where the database is to be allocated, and map_size is the maximum size in bytes that the database is allowed to grow to. By default this parameter is set deliberately low to encourage an early crash, so that you can figure out a good size to work with.
To offer you further assistance, can you please provide a code snippet of the script that’s causing this error?
We’re sorry about this. Unfortunately, there might be an issue with your particular setup causing the storage environment to crash. Your Ezlo controller should be able to handle more than 43 devices. We’d like to replicate this and let you know what else we find. Please open a ticket with us indicating your controller’s serial number and whether you give us permission to create support credentials.
We’re sorry about the delay. Rest assured that we’re looking into this closely. We have asked the developers to shed more light on why the storage API does not expose the LMDB environment for setting an appropriate map size, and we’re waiting on their input.
Thank you for bringing up this topic. We reviewed the issue you described and plan to release a new FW build with a fix for the DB size by the end of the week. We can provide you with this early beta update if you want to try the fix.
Thanks for the update! We’re looking forward to more storage. Three-ish quick questions:
1. What do I actually have to do to pick up this update on our controllers?
2. Was there a straight-up bug being fixed? What exactly has changed, and what do I have to do?
3. Part of our original request was for more insight into what this storage subsystem is doing, and how. I still don’t feel like I’ve got that. Can you say more here, or at Storage - Ezlo API Documentation, about how storage capacity is managed in the current implementation, to address some of the questions raised upthread?
Today the new version of the FW was pushed live. It might take approximately 24 hours for the new version to roll out to all controllers. You can force an update by switching the controller off and on. I’d suggest first checking which FW version you currently have on your controllers.
As for your next two questions, I’ll discuss them with our devs and come back with comments.
Please find below additional info related to the issue discussed in this thread.
We had a 1 MB limit on the amount of storage (both main and temporary). It caused issues when serving approximately 50 Z-Wave devices; for test_plugin devices, this number is around 700.
We also have a 1 MB storage limit for every plugin, so all of that space may end up allocated to a single plugin.
The reason for this behavior is a bug in the lmdb library: the default memory map size stated in the docs is 10 MB, but in fact it is 1 MB.
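In terms of the binding sketched earlier (again assuming lightningmdb-style calls, with a placeholder path), the fix amounts to never relying on the default:

```lua
local lightningmdb = require("lightningmdb")

local env = lightningmdb.env_create()

-- Relying on the default would give 1 MB in practice despite the
-- documented 10 MB, so the map size is set explicitly instead.
env:set_mapsize(10 * 1024 * 1024)

env:open("/opt/storage", 0, 420)
```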
The solution is to increase the map size so that more data can be saved on a controller. We applied the following logic (sketched in code below):
Main storage: allocate 30% of the flash drive, but not less than 10 MB and not more than 1 GB.
Temporary storage: allocate 10% of the RAM, but not less than 10 MB and not more than 1 GB.
With this change, storage.bin and temp.storage.bin can grow up to these limits and are able to store all the needed data.
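A small sketch of that sizing rule in Lua (the function and variable names are illustrative, not the actual firmware code):

```lua
local MB = 1024 * 1024
local GB = 1024 * MB

-- Clamp a value into the range [lo, hi].
local function clamp(value, lo, hi)
  return math.max(lo, math.min(value, hi))
end

-- Main storage (storage.bin): 30% of the flash drive,
-- clamped to [10 MB, 1 GB].
local function main_map_size(flash_bytes)
  return clamp(math.floor(0.30 * flash_bytes), 10 * MB, 1 * GB)
end

-- Temporary storage (temp.storage.bin): 10% of the RAM,
-- clamped to [10 MB, 1 GB].
local function temp_map_size(ram_bytes)
  return clamp(math.floor(0.10 * ram_bytes), 10 * MB, 1 * GB)
end

-- Example: 4 GB of flash caps main storage at the 1 GB upper bound,
-- while 512 MB of RAM yields roughly 51 MB of temporary storage.
print(main_map_size(4 * GB) / MB)   -- ~1024
print(temp_map_size(512 * MB) / MB) -- ~51
```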