OpenLuup - very slow reload times from ALTUI Homepage

First off, I can’t express how awesome openLuup is! :slight_smile: Everything is working great on a beaglebone black, except the fact that I have extremely long load times whenever I click refresh and go back to the main homepage.

I tracked it down with Chrome developer tools and also in the openLuup logs. It seems like it’s the compressed .js files, which (I assume) shouldn’t be changing, but they are getting sent down as 200s instead of 304s (Not Modified) on every single request.

They aren’t big files, and I’m sitting right next to my beaglebone with my laptop, so it is strange that they take this long to download when they probably shouldn’t even be re-downloaded at all, right?

Am I doing something wrong (that’s probably the likely cause :), or is there a way I can help contribute back and get these static files returning 304s? Or maybe the load time is so long because of other issues with the beaglebone black? CPU, memory, and disk I/O all look fine, I’ve upped Ubuntu’s open file limit, and I’m not sure where else to check.

Log below from openLuup - you can see one of the requests completes more than 16 seconds after it was initiated.

Thanks again for all your hard work with this project - it is amazingly impressive.

Jim

2016-09-07 20:58:17.042 openLuup.server:: GET /J_ALTUI_b_blocks_compressed.js HTTP/1.1 tcp{client}: 0x7f6e28
2016-09-07 20:58:17.044 openLuup.server:: GET /J_ALTUI_b_blockly_compressed.js HTTP/1.1 tcp{client}: 0x975700
2016-09-07 20:58:17.046 openLuup.server:: GET /J_ALTUI_multibox.js HTTP/1.1 tcp{client}: 0x88c0f0
2016-09-07 20:58:17.759 openLuup.server:: request completed (67287 bytes, 5 chunks, 717 ms) tcp{client}: 0x7f6e28
2016-09-07 20:58:17.809 openLuup.server:: request completed (36947 bytes, 3 chunks, 763 ms) tcp{client}: 0x88c0f0
2016-09-07 20:58:33.700 openLuup.server:: request completed (591636 bytes, 37 chunks, 16656 ms) tcp{client}: 0x975700

Seems something is wrong here. The server is quite naive, so it always resends (must fix that some time), but one reason I’ve never bothered is that it takes very little time, even for the big .js files.
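For what it’s worth, a minimal conditional-GET check would look something along these lines - just a sketch of the idea, not the actual server code, and the headers table, path argument, and use of lfs.attributes for the file’s modification time are all assumptions:

-- sketch: answer 304 Not Modified when the client's cached copy is still current
local lfs = require "lfs"    -- LuaFileSystem, for the file's modification time

local function respond (client, path, headers)
  local mtime = lfs.attributes (path, "modification")
  local last_modified = os.date ("!%a, %d %b %Y %H:%M:%S GMT", mtime)
  -- if the browser sent If-Modified-Since and the file hasn't changed, skip the body
  if headers["if-modified-since"] == last_modified then
    client:send "HTTP/1.1 304 Not Modified\r\n\r\n"
    return
  end
  -- otherwise send the whole file, with Last-Modified so the next request can be conditional
  local f = io.open (path, "rb")
  local body = f:read "*a"
  f:close ()
  client:send (table.concat {
    "HTTP/1.1 200 OK\r\n",
    "Last-Modified: ", last_modified, "\r\n",
    "Content-Length: ", #body, "\r\n\r\n",
    body})
end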

I have a BBB and did much of my early development on it. But I’m away from that system for the next couple of weeks, so can’t easily check.

Hi,

Have you tried with Firefox? With the latest ALTUI version I have trouble using Chrome and Exploder (won’t talk about Edge, not even sure that talks HTML).

Cheers Rene

Thanks for the quick response! I hope you are away from your system because you are enjoying a vacation :slight_smile:

I’ve started using Firefox and the UI in general is more responsive, but I still have 20-second reload times because of those .js files.

I was logging in Chrome and saw a “Waiting for available socket” message. I thought this was a client-side (Chrome) issue, but after some research it looks like it might be the server requiring a large number of sockets to be opened? Maybe one for each file, so it maxes out the per-subdomain socket limit in Chrome? I’ll get under the hood of Chrome, increase that limit, and report back. For now here is some debug info.

Hopefully this is a problem someone else has had or will have, so with your help we can stop them from running into it in the future, and it’s not just a Jim issue :slight_smile:

Ubuntu 16 headless on beaglebone black
beaglebone cpu-freq set to 1000 MHz

Fresh install of openLuup - no Veras bridged yet. Only the Netatmo plugin installed.

ulimit settings -

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 3704
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 9000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 3704
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

No CPU or Memory or IO issues on the BBB.

Screenshots from the load-time issue in the Firefox debugger are below, with detail on one of the long-loading blockly files. They show a wait time of 21 seconds and a download time of 1 ms, so I’m thinking this has to be an issue of maxing out the available open sockets on either the browser or the server. I’m using a fresh install of Mac OS X with the latest Chrome and Firefox and have never seen this before, so my guess would be that maybe the server is trying to use a new socket for every file and it’s hitting the max sockets on the browser? (I think it’s 6 for Chrome and 12 for Firefox, or maybe the other way round - I don’t remember.)

Forgot to add the detailed view showing the HTTP request wait time vs. the actual download time.

No, the server reuses sockets, keeping the connection open, if the client allows. I typically see five or six parallel requests being serviced at once and then another request on the same sockets.
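The decision about whether to keep a connection open is essentially standard HTTP keep-alive handling; very roughly it amounts to this (a sketch only, not a quote from server.lua):

-- sketch: keep the TCP connection open unless the client asks otherwise
local function keep_alive (http_version, headers)
  local connection = (headers["connection"] or ""):lower ()
  if http_version == "HTTP/1.0" then
    return connection == "keep-alive"   -- HTTP/1.0 closes unless the client asks to persist
  else
    return connection ~= "close"        -- HTTP/1.1 persists unless the client asks to close
  end
end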

Everything is working great on a beaglebone black, except the fact that I have extremely long load times whenever I click refresh and go back to the main homepage.

…a question I should have asked at the start: “Why would you want to do that?”

Good question. I’m firing openLuup back up on an Odroid to see if it was just a beaglebone Ubuntu issue.

Re: your question, I am in the habit of hitting refresh to update the UI and devices, etc. I’ve successfully worked myself out of a habit of accidentally using the back button, but I can’t find an easy way to go from the device tree to edit a device, and then go back to the device tree. I’m probably completely missing something and this is a stupid/off topic question - so apologies in advance.

I’ll report back tonight on how everything works on Ubuntu 16 with the Odroid.

Thanks
Jim

Just fired a new install up on an Odroid XU4 and everything is working great and very quick.

The beaglebone black should have more than enough horsepower to run openLuup, so I must have had some strange OS config setting wrong.

For anyone who happens to stumble upon this issue themselves, hopefully I can save a few hours of your life -

I did an eMMC flash of Ubuntu 16.04 on a beaglebone black (specifically this latest image - BBB-eMMC-flasher-ubuntu-16.04-console-armhf-2016-06-09-2gb.img) and was having the above issues. Please note that in no way am I suggesting this image doesn’t run openLuup well on the beaglebone - I might have updated a library or done something else to cause the above issues. But if anyone else does happen to have this issue and searches the forum, hopefully they will know it’s not just their setup and can save themselves some time.

Thanks again for the fast responses. I’ll be reaching out over PM soon - @akbooer I am a software developer and would like to discuss ways I can contribute to the open source work you are doing - I’m excited to help out where I can.

thanks
jim

Thanks for the feedback… always nice to know that things are working well in the end.

GitHub is there to be used, so contributions are welcomed if I can understand them! You’ll find that others’ contributions have been acknowledged in various modules.

The major challenge has been in discovering what Vera actually does, rather than reverse engineering it. But I would be the first to admit that there are things which bear improvement.

@tornadoslims, I had the same problem from time to time, and also noticed long waits for sockets in the Chrome network tab. Then I noticed that an F5 refresh of AltUI eventually ends up producing around 90 requests, most of them firing in parallel. I increased the backlog value to 256 in openluup/server.lua, and the problem went away; all browser F5 refreshes are now fast:

local server, msg = socket.bind ('*', port, backlog or 256)

To test my theory, I reduced the backlog to 32 and immediately saw the stuck refresh. It seems that parallel AJAX requests are quite common in the modern browser UI world. The small backlog values once popular on Linux servers with memory constraints are not reasonable these days. Here is another example of the backlog limit problem: [url=https://derrickpetzold.com/#!/p/somaxconn/]https://derrickpetzold.com/#!/p/somaxconn/[/url]
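For context, here is where that backlog argument sits in a bare-bones LuaSocket accept loop - an illustrative sketch with assumed values (port 3480, a trivial response), not the openLuup server itself. Note that on Linux the effective backlog is also capped by net.core.somaxconn, which is what the link above is about:

local socket = require "socket"

-- the third argument to bind is passed through to listen() as the backlog:
-- the queue of connections the OS holds while the server is busy with others.
-- If ~90 parallel requests arrive and the queue is smaller, the extras stall.
local server = assert (socket.bind ('*', 3480, 256))
server:settimeout (0.1)                  -- don't block forever waiting to accept

while true do
  local client = server:accept ()        -- pick up the next queued connection
  if client then
    local request = client:receive "*l"  -- request line, e.g. "GET /... HTTP/1.1"
    client:send "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"
    client:close ()                      -- free the file descriptor straight away
  end
end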

@akbooer, I think the backlog could be increased even further in anticipation of more complex UIs or larger automation setups with more TCP clients. I have several Sonos units and they are quite chatty, always talking back to the Sonos plugin devices. I see increasing popularity of IP-based devices in home automation. Is there any reason to keep that backlog limit low?

Yes, possibly. From the LuaSocket documentation:

Note: It is important to close all used sockets once they are not needed, since, in many systems, each socket uses a file descriptor, which are limited system resources. Garbage-collected objects are automatically closed before destruction, though.

…so we might hit a system limit with openLuup. Also, I don’t know if there is a Lua compilation parameter which may affect this.
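As a small illustration of that point, here is a hypothetical client-side loop (not openLuup code): closing each socket explicitly releases its file descriptor immediately, whereas an unclosed socket holds it until the garbage collector gets round to it.

local socket = require "socket"

-- without the explicit close, each iteration would leave a descriptor open
-- until garbage collection runs, and a tight loop could exhaust the
-- process's open-file limit (the ulimit -n value listed earlier).
for i = 1, 100 do
  local c = assert (socket.connect ("127.0.0.1", 3480))
  c:send "GET / HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n"
  print (c:receive "*l")   -- status line, e.g. "HTTP/1.1 200 OK"
  c:close ()               -- release the file descriptor right away
end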

However, it’s a great observation and I’ll certainly up the limit. Strangely enough, I think I ran into this long socket wait problem yesterday when playing with an RPi.

I’ve updated the GitHub development branch to make this change to the backlog parameter (also the ‘/port_3480’ removal in the server, thanks @explorer.)