MDB_MAP_FULL error writing table to storage

We are running into a problem when writing a table to storage using storage.set_table(). The error is: “MDB_MAP_FULL: Environment mapsize limit reached”. We hit this limit when the table gets to be about 62,000 bytes.

MDB_MAP_FULL is an LMDB error meaning the maximum size in bytes of the DB has been reached, but 62,000 seems a very small value.

Has anyone else run into this issue? Is there a way to increase the map size?

Hello Richard,

We are sorry to hear you're running into this issue. As you suspected, the default LMDB map size in this setup is in fact 64KB, which is consistent with the table size at which you're encountering this error. The map size can be adjusted in your environment setup when using a Lua wrapper for LMDB.
According to the LMDB documentation, an environment is initialized as follows:

lmdb.Environment(path, map_size=10485760, subdir=True, readonly=False, metasync=True, sync=True, map_async=False, mode=493, create=True, readahead=True, writemap=False, meminit=True, max_readers=126, max_dbs=0, max_spare_txns=1, lock=True)

where path is the directory within the file system where the database is to be allocated, and map_size is the maximum size in bytes that the database is allowed to grow to. By default this parameter is deliberately set low so that you hit the limit early and are forced to figure out a good size to work with.
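
For example, with a Lua binding such as shmul/lightningmdb, the map size is set on the environment before it is opened. The following is only a sketch under that assumption: the method names follow that particular wrapper, and the path and the 10MB figure are illustrative, not values from your controller.

-- Sketch only: raise the LMDB map size before opening the environment.
-- Assumes the shmul/lightningmdb Lua binding; the path below is hypothetical
-- and the directory must already exist.
local lightningmdb_lib = require("lightningmdb")
local lightningmdb = _VERSION >= "Lua 5.2" and lightningmdb_lib or lightningmdb

local env = lightningmdb.env_create()
env:set_mapsize(10 * 1024 * 1024)    -- allow the DB to grow to 10MB
env:open("/tmp/example-db", 0, 420)  -- 420 decimal == 0644 octal permissions

-- Regular usage afterwards: open a transaction, write, commit.
local txn = env:txn_begin(nil, 0)
local dbi = txn:dbi_open(nil, 0)
txn:put(dbi, "some_key", "some_value", 0)
txn:commit()
env:close()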

To offer you further assistance, can you please provide a code snippet of the script that’s causing this error?

Regards,

Santiago Gutierrez
Customer Care

The code generating this error is basically:

local storage = require "storage"
storage.set_table(storage_key, datatable)

How would the plugin get access to the lmdb library? require "lmdb" doesn't work, and the storage API doesn't expose the lmdb environment or the path to the database.
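
For what it's worth, the only way we can probe the limit from inside a plugin is brute force along these lines. This is a sketch only: storage.set_table is the real API call, but the key name, the payload sizes, and the assumption that the failure surfaces as a catchable Lua error are all ours.

-- Sketch: grow a table 1 KB at a time until set_table fails, to see
-- roughly where MDB_MAP_FULL kicks in. Key name and sizes are illustrative.
local storage = require "storage"

local chunk = string.rep("x", 1024)  -- 1 KB of payload per entry
local datatable = {}
for i = 1, 100 do
  datatable[i] = chunk
  local ok, err = pcall(storage.set_table, "probe_key", datatable)
  if not ok then
    print(("set_table failed at roughly %d KB: %s"):format(i, tostring(err)))
    break
  end
end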

Santiago,

Are you saying the Ezlo storage API exposes the LMDB environment of its implementation so we can replace it? Or are you proposing we hook into a Lua global variable to change it? Or that we should bypass the Ezlo storage API entirely?

Which Lua wrapper are you using? Is it shmul/lightningmdb on GitHub (a thin wrapper around the OpenLDAP Lightning Memory-Mapped Database), or something else?

All the best,

Lee

@Thiago can you please put a little more flesh on the bones of your Dec 2021 reply? We have been unable to figure out how to apply your suggested change.

@Thiago And a follow-up: it looks like we’re getting the MDB_MAP_FULL error after writing progressively smaller amounts of data, as if there’s some garbage accumulating.

Is there a way for us to enumerate the entire content of the database, either through Lua or an external tool?

We’re not seeking to set up an additional lmdb database, of course, but to enlarge the one backing your storage API. It might be good to know where that is.
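
If the path to the backing database were exposed, LMDB's own command-line tools (mdb_stat and mdb_dump) could report its size and dump its contents, and something like the cursor walk below would let us enumerate it from Lua. This is a sketch only: it assumes the shmul/lightningmdb binding, read access to the file, and a hypothetical path.

-- Sketch: enumerate every key/value pair in an LMDB environment.
-- Assumes shmul/lightningmdb; the path is hypothetical, since the Ezlo
-- storage API does not currently tell us where its database lives.
local lightningmdb_lib = require("lightningmdb")
local lightningmdb = _VERSION >= "Lua 5.2" and lightningmdb_lib or lightningmdb

local env = lightningmdb.env_create()
env:open("/path/to/storage/db", lightningmdb.MDB_RDONLY, 420)

local txn = env:txn_begin(nil, lightningmdb.MDB_RDONLY)
local dbi = txn:dbi_open(nil, 0)
local cursor = txn:cursor_open(dbi)

local k, v
repeat
  k, v = cursor:get(k, lightningmdb.MDB_NEXT)
  if k then print(k, #tostring(v) .. " bytes") end
until not k

cursor:close()
txn:abort()
env:close()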

@Leonardo_Soto Maybe you could take a look at this thread where we haven’t had any response in months?

MDB_MAP_FULL is playing a key role in my being unable to add more than 43 devices to my Ezlo Plus, as it puts my Z-Wave chip into a perpetual restart loop:

2022-07-01 07:20:52.745522 INFO : Zwave reset is completed

2022-07-01 07:20:52.748770 ERROR: Exception ‘N4lmdb5ErrorE’ was caught:MDB_MAP_FULL: Environment mapsize limit reached

terminate called after throwing an instance of ‘HubPlatform::Services::TaskLoop::TaskException’

what(): TaskLoop: uncaught exceptions

Aborted (core dumped)

**** zwaved Restarted at Fri Jul 1 07:20:55 PDT 2022

2022-07-01 07:20:55 INFO : Logs folder was changed: //var/log/firmware

2022-07-01 07:20:55 INFO : addon.zwave: at-release/1978 [2022-05-05T11:14:51+0000]

2022-07-01 07:20:55 INFO : addon.zwave: Spread: connected to “4803” with private group “#addon.zwav#localhost”

Cannot set programming mode for ZW chip!

Z-Wave stick based on EFR32ZG14 is supported

Zwave hard reset is requested

2022-07-01 07:20:58.233546 INFO : zwaveModuleResetPinAssert: Assert reset on zwave chip

2022-07-01 07:20:58.234552 INFO : zwaveModuleResetPinRelease: Deassert reset on zwave chip

2022-07-01 07:20:58.735915 INFO : Zwave reset is completed

This is not even a 2000 sq ft house - how can anyone call this a serious system if it can't even handle a smaller-than-average home?

Hello Dan,

We're sorry about this. There might be a problem with your particular setup that is causing the storage environment to crash. Your Ezlo controller should be able to handle more than 43 devices. We'd like to replicate this and will let you know what else we find. Please open a ticket with us indicating your controller's serial number and whether you give us permission to create support credentials.

Hi Lee,

We're sorry about the delay. Rest assured that we're looking into this closely. We have asked the developers to shed more light on why the storage API does not expose the lmdb environment for setting an appropriate map size, and we're waiting on their input.

We apologize for the inconvenience.

Hello @Lee @Dan-n-Randy
Thank you for bringing up this topic. We reviewed the issue you described and plan to create a new FW build with a fix for the DB size limit by the end of the week. We can provide you with this early beta update if you want to try the fix.

Thanks for this heads-up. I’m juggling a number of issues and won’t be able to try out a beta. I’d be happy to review docs for API enhancements, and so on.

@Max @Thiago Any update on this issue? I’ve looked through recent release announcements and not seen anything.

Hello @Lee
That fix was in the release candidate of our Linux FW. I'm checking that version's current status and will get back to you with the info.

Hello @Lee
The new FW version has been released to beta. We are planning to release this version live in one week.

Hello all,
Hello @Lee

Today we released a new version of Linux FW that contains a fix for the issue reported in this group.

@Max

Thanks for the update! We’re looking forward to more storage. Three-ish quick questions:

  • What do I actually have to do to get this update onto our controllers?
  • Was this a straightforward bug fix, or has something else changed that requires action on my part?
  • Part of our original request was for more insight into what this storage subsystem is doing, and how. I still don't feel like I've got that. Can you say more here, or at Storage - Ezlo API Documentation, about how storage capacity is managed in the current implementation, to address some of the questions raised upthread?

All the best,

Lee

Today the new version of the FW has been pushed live. It might take approximately 24 hours for the new version to roll out to all controllers. You can force an update by switching the controller off and on. I'd suggest first checking the current FW version on your controllers.
As for your other two questions, I'll discuss them with our devs and come back with comments.

@Max Thanks. I confirmed they weren’t updated yet before asking. I’ll watch the normal process proceed over the next day before taking action.

Hello @Lee
Please find below additional info related to the issue discussed in this thread.

Problem
We had a limit of 1MB on the total amount of storage (both main and temporary). That caused issues when serving approximately 50 Z-Wave devices; for test_plugin devices, the equivalent number is around 700.
We also have a limit of 1MB of storage for every plugin, so all of that space may be consumed by a single plugin.
The reason for this behavior is a bug in the lmdb library: the default memory map size is documented as 10MB, but in fact it is 1MB.

Solution
The solution is to increase the amount of storage so that more data can be saved on a controller. We applied the following logic:
Allocate 30% of the flash drive for main storage, but not less than 10MB and not more than 1GB.
Allocate 10% of the RAM for temporary storage, but not less than 10MB and not more than 1GB.
With that change, storage.bin and temp.storage.bin can grow up to these limits and are able to store all the needed data.
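
In code terms, the sizing rule above amounts to clamping a percentage of the underlying medium between the two bounds. The sketch below only illustrates that rule; the helper names and the example flash/RAM capacities are not from the actual firmware.

-- Sketch of the sizing rule described above: take a share of the medium
-- and clamp it to the [10 MB, 1 GB] range. Example capacities are made up.
local MB = 1024 * 1024
local GB = 1024 * MB

local function clamp(value, lo, hi)
  return math.max(lo, math.min(value, hi))
end

-- storage.bin (main storage): 30% of flash, clamped to [10 MB, 1 GB]
local function main_storage_size(flash_bytes)
  return clamp(math.floor(flash_bytes * 0.30), 10 * MB, 1 * GB)
end

-- temp.storage.bin (temporary storage): 10% of RAM, same clamp
local function temp_storage_size(ram_bytes)
  return clamp(math.floor(ram_bytes * 0.10), 10 * MB, 1 * GB)
end

print(main_storage_size(4 * GB))    -- 4 GB flash  -> capped at 1 GB
print(temp_storage_size(512 * MB))  -- 512 MB RAM  -> about 51 MB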