openLuup: Asynchronous I/O

I have successfully been testing a version of the asynchronous I/O, in general, and specifically for VeraBridge. As hoped for, the latency is much lower, and there is somewhat less polling activity. I’ve tested common error conditions, including Luup reloads and Vera reboots.

The latest development release 2019.03.14 on GitHub is available for download. By default, the asynchronous VeraBridge is not enabled, but there is a device variable AsyncPoll which, if set to “true”, will enable it after a reload.

At the moment there are some additional log messages to check the behaviour, but overall, the only difference you should see is improved response times to Vera variable changes, and an additional socket named VERA async-tcp on the Console > Scheduler > Sockets page.

The bridge uses a new form of the http.request() which provides an asynchronous callback to the user when the response is received. In theory, this could be available to openLuup plugin developers to handle things like SOAP / AJAX calls, but that’s yet to be tested*.

If you care to try it, I’d welcome feedback.

* as discussed here: [url=,111505.msg426879.html#msg426879]Using Soap onvif to listen for camera motion detection[/url]

Ok - had a go at this, on a Pi board, by updating my existing very well-functioning setup (thanks for that):

openLuup & bridge now both show as version 19.13.14 as expected.
AsyncPoll set to “true” and restarted openLuup
http://ip_address:3480/console?page=sockets makes no mention of “VERA async-tcp”

console log shows:

2019-03-15 14:26:29.566 luup_log:5: VeraBridge ASYNC callback status code: ?, data length: 4
2019-03-15 14:26:29.567 openLuup.context_switch:: ERROR: ./openLuup/http.lua:547: attempt to call method 'sendrequestline' (a nil value)
2019-03-15 14:26:29.567 openLuup.scheduler:: job aborted : ./openLuup/http.lua:547: attempt to call method 'sendrequestline' (a nil value)

Suspect I have the wrong socket library, as sendrequestline is not found anywhere in openLuup on GitHub?? If so, what to do?

I set AsyncPoll back to “false” and restarted openLuup in the hope that this would revert to the method previously used.

a-lurker see,48982.msg321683.html#msg321683 Maybe keep both functions…

@Buxton yes - that’s the problem. I modified http.lua as per the file here:,48982.msg321683.html#msg321683

It no longer had sendrequestline() and also had a modified sendheaders(). So something fixed two years ago caused the problem described above. So I rehashed that code to resurrect these two functions.

So setting AsyncPoll to “true” and restarting openLuup now results in “VERA async-tcp” being shown at http://ip_address:3480/console?page=sockets

The log file now shows lots of these - so looks like it’s working:

VeraBridge ASYNC callback status code: 200, data length: xyz

Some initial confusion in that there are now two http.lua files. One is in the socket lib and one in openLuup. Does the latter also need to be called http.lua? Also, can the functions sendrequestline() & sendheaders() be duplicated in the openLuup http.lua file, or is that too hard? I don’t quite understand the interactions between the two http.lua files.

Have attached my updated http.lua socket version for reference.

@a-lurker, great to see a response from you. Thanks for trying it and finding a problem (albeit, of your own making, in the end?!)

@Buxton, great detective work. I had not seen that post previously.

Karma to you both for that.

It’s been my philosophy never to alter a ‘system’ library, for just these reasons, except through the usual GitHub pull request mechanism, and even then you’ll run into installed version incompatibilities. For this reason, I’ve been putting off (for years) delving into the LuaSocket library to see how I could do asynchronous I/O. However, it turns out that there is a legitimate, although undocumented, way of doing this, and the low-level library itself provides almost all the tools you need.

Regarding the confusion of file naming, it isn’t really a problem because one is [tt]socket.http[/tt] and the other [tt]openLuup.http[/tt] and, in truth, you shouldn’t need to touch either. However, I have been through this quandary before: the file was originally called [tt]openLuup.http[/tt], was renamed to [tt]openLuup.server[/tt], and then changed back (per the comment in its header) when the server functionality became more general. I’ll consider another name change. Ideas?

@amg0 has asked me whether this code will run on Vera, and I believe that with one small utility function to replace an openLuup scheduler call, then it could do just that. The technique is not limited to HTTP, since all the magic is actually happening at the TCP/socket level, but the code itself would need to be generalised a bit.

I just want this to get fully shaken down in VeraBridge before I try to widen its application.

Thanks both for your invaluable help.


PS: I’ve split this into a separate topic, since I think there may just be further discussion.

The asynchronous HTTP request routine is accessible for anyone who wants to try this in a plugin.

The call parameters are exactly the same as http.request(), as documented here: LuaSocket http.request() but with an additional one for the callback routine:

-- "simple" GET request
local ok, err = luup.openLuup.async_request ("simple URL", myCallback)

-- "simple" POST request
local ok, err = luup.openLuup.async_request ("simple URL", "body of POST", myCallback)

-- "generic" request
local ok, err = luup.openLuup.async_request ({table with url, headers, sources, sinks, etc...}, myCallback)

Return code is non-nil for success when sending the request, nil followed by an error string if failure (including “closed” and “timeout”). In the event of failure, the callback routine will never be called. If successful, the callback receives parameters which are exactly the same as the returns defined for the normal http.request() routine:

function myCallback (response, code, headers, statusline)

If the request was with a string URL, then the response is a string. If it was of the generic form, then the response is 1 and actually ends up wherever you specified the ltn12 sink.
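For instance, a generic-form call might look like this minimal sketch, based on the API described above (the URL, variable names, and log text are illustrative, not part of the actual release):

```lua
local ltn12 = require "ltn12"

local chunks = {}   -- the ltn12 table sink accumulates response chunks here

local function myCallback (response, code, headers, statusline)
  -- generic form: response is 1; the body has already arrived via our sink
  local body = table.concat (chunks)
  luup.log ("got " .. #body .. " bytes, status " .. (code or '?'))
end

local ok, err = luup.openLuup.async_request ({
    url  = "http://example.com/status",      -- illustrative URL
    sink = ltn12.sink.table (chunks),
  }, myCallback)
```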

Note that the callback is specified as a normal function, not as a global string name, as in standard luup requests. So if you wanted to ignore the response entirely, you could write:

local ok, err = luup.openLuup.async_request ("url goes here", function () end)

giving you a non-blocking send, although that could hang for a while if the socket open or send request times out.

This asynchronous request could be useful for getting images, or other large data, from cameras and suchlike, and certainly works well in VeraBridge for status data requests.

A somewhat unexpected side-effect of switching my main openLuup system to using asynchronous I/O for the currently two bridged Veras that it oversees was a dramatic drop in CPU usage.

Well, I say dramatic, it did halve, but since it’s dropped from just 4% to 2% it may hardly matter. What does matter, though, is the reduced latency in response times.

I’m left with wondering, once again, just where all the CPU goes in a Vera? The current trend in HA (at least, as read here) is the move to very capable machines, but really the majority of HA automation actually needs nothing like that level of computing power.

I think that is what Melih was saying last fall.

The misleading part of looking at CPU utilization is focusing on the average or the idle rate rather than on the spikes. The spikes are when you actually see the CPU constrained and potentially crash. Yeah sure, most of the time it will be at 2-5%, but if the CPU processing queue is overwhelmed… then we run into potential issues, even if it is only for a very short time.

Undoubtedly true, and these plotted values are already averages, so the peaks are likely to hit 100% at times.

This is when you actually see the CPU constrained and potentially crash.

However, a well-written system should degrade gracefully, and certainly not crash. Clearly you could bring it to its knees with multiple video streams, or such like, but I think that’s just inappropriate use of an HA controller.

Completely agree. I was thinking more of the Vera UI7 rather than openLuup here… I have yet to get openLuup to crash due to CPU usage.

@akbooer Is there anything I need to check/validate/modify if I want to enable it, or can I just do it and it will decrease the latency at some point? Right now openLuup is handling everything and the Vera only the Z-Wave communication.

If you use the latest development version, it has some rather conservative, but user adjustable parameters (hidden, but reachable.) The only thing you need to enable it is the true flag in the AsyncPoll variable and then restart.

Would be very interested to know how you get on, or, indeed, whether you notice any difference at all!


Will enable it for sure back @ home later today!

The latest openLuup development release has a new file in the openLuup/ folder: http_async.lua.

This now implements asynchronous HTTP and HTTPS requests (depending on which scheme is provided in the requested URL.) If you want to use it, you simply require it in your code like this:

local async = require "http_async"

It exports a single function, request, which works exactly as described earlier in this thread: openLuup: Asynchronous I/O - #6 by akbooer - openLuup - Ezlo Community (except for the change of name.)

For example:

local ok, err = async.request ("simple URL", "body of POST", myCallback)

I will be withdrawing the original openLuup syntax to call this function (luup.openLuup.async_request) in order to make any plugin which uses this transportable to Vera, since this new module can be copied to Vera and used in exactly the same way there.

I have done limited testing in Vera (quite amusing in the test window, because the function returns before the response arrives, so your callback routine needs to write to the log if you want to see the result!)

Example test code:

local async = require "http_async"
local ltn12 = require "ltn12"
local response_table = {}

local function request_callback (response, code, headers, statusline)
  luup.log ("CALLBACK status code: " .. (code or '?'))
  luup.log ("CALLBACK output length: " .. #table.concat (response_table))
end

local ok, err = async.request ({
    url = "",
    sink = ltn12.sink.table (response_table),
    protocol = "tlsv1_2",
  }, request_callback)

print (ok, err or "all OK")

Giving this log output (you have to rush to the AltUI Misc > OsCommand page and tail the log)

50 05/11/19 12:56:53.317 luup_log:0: ALTUI: Evaluation of lua code returned: nil <0x7445d520>
50 05/11/19 12:56:54.107 luup_log:0: CALLBACK status code: 200 <0x7565d520>
50 05/11/19 12:56:54.107 luup_log:0: CALLBACK output length: 7421 <0x7565d520>

Huge thanks to @amg0 for suggesting that this module could be adapted to be used on Vera.


Really cool, although I would call this “asynchronous HTTP requests”, as “I/O” is a bit broad. When I think of async I/O, I’m usually thinking of lower-level stuff (like async sockets, which Vera desperately needs).

One interesting thing to note about your example… the callback has to use response_table as an upvalue to get the response text. That means a lot of people will be tempted to use a global (the ultimate upvalue), and that will be fine in a lot of cases, but if this approach is used in a scenario where multiple simultaneous async requests are made/processed, the sinks will collide. If you wrap the async.request calls in a function or closure, however, and make response_table local to that closure, it solves that issue.

Certainly true. All the code is there for asynchronous sockets, it’s just that it’s wrapped in an HTTP request function at the moment. I could unwrap it … ?

Quite true. This was only a test example. The whole thing should be part of a closure. As Lua Test code, it already is.

Just had a conversation with @amg0 on exactly this topic.

Here’s the function wrapped for your own use, with a separate response_table for each invoked request…

local function async_request (request_table)
  local response_table = {}
  request_table.sink = ltn12.sink.table (response_table)
  local function callback (response, code, headers, statusline)
    -- do whatever you need with the response here
  end
  return async.request (request_table, callback)
end

It is great; however, I am not sure the closure trick can be used every time.

It would still be useful to be able to pass in a context parameter that could be anything (scalar, table) and is just returned back, as is, as part of the callback. In it we could store some info, data, indexes, anything.

For instance, if we have a loop iterating over a table and calling async_request() for each entry in that table, it would be useful for the callback to know for which entry it is being called back, so it could, for instance, find in the context parameter the index of that entry.

Wrap it in a closure rather than a named function; the loop wraps the closure.
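A minimal sketch of that idea, assuming the async module described above (the device table, URLs, and log text are hypothetical): each pass of the loop creates a fresh callback closure which captures the loop variables as upvalues, so no explicit context parameter is needed.

```lua
local async = require "http_async"

-- hypothetical table of devices to poll, each with its own URL
local devices = {
  {name = "upstairs",   url = "http://device1/status"},
  {name = "downstairs", url = "http://device2/status"},
}

for index, device in ipairs (devices) do
  -- a new closure per iteration: index and device are its upvalues,
  -- so the callback knows exactly which entry it belongs to
  local function callback (response, code, headers, statusline)
    luup.log (table.concat {"entry #", index, " (", device.name, ") status: ", code or '?'})
  end
  local ok, err = async.request (device.url, callback)
  if not ok then luup.log ("request failed: " .. (err or '?')) end
end
```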