I think most folks using the micro platforms are using them as Runtime only. In this mode, they're mounting the remote FS and editing on their conventional machines.
On the topic of remote editing and updating, Dropbox is another option:
I recently started on openHAB and have been testing on a full server. It's almost production-ready now, so I moved it to a Raspberry Pi 2.
The RPi 2 is usable, but too slow for my taste; rules which took milliseconds to process on the full server now take 2-3 seconds.
This is probably because our environment is fairly large. We have about 70 devices total with approximately 40 rules running a large variety of functions and bindings (Pushover, Nest, DSC, Plex, Astro, System Monitor, some custom stuff). Persistence is being handled by mysql on an external HDD, but I've also tested against RRD4J, which was even worse in terms of performance.
I feel thereās probably room for optimization, but Iām going to also experiment with some slightly better hardware (Odroid XU4) to see what the difference might be.
@silencery,
I'd be interested in knowing more details about your config, especially the performance elements (e.g. [tt]vmstat 5[/tt], or similar).
I've not run on an RPi2 (yet), but I've seen a number of people have issues with the IOPS achievable via the SD card interface on those devices.
That said, I have ~1000 items in my setup, most with RRD-based persistence. The bulk of these are coming/proxied from my MiOS/UI5 system (Alarm, Z-Wave, etc.), but ~150 are coming from @watou's native Nest Binding, and there are ~100 coming from the native Harmony, Astro, Weather and MQTT Bindings. Most of the time, this system is idle… very idle.
I see bursts on my ODroid C1 system when its RRDs are written, as well as when some of my custom Rules (for energy calcs) kick in, but they're not impacting Scene timing (I have DEBUG-level logging enabled, and I write timing data for each Rule executed).
If you go the ODroid route, it would be worthwhile picking up their eMMC module, and the battery for the on-board RTC. I replaced the U1 MicroSD card with one of these, and the difference is noticeable (the U1 MicroSD card is itself noticeably faster than a regular Class 10 MicroSD card).
That's impressively huge AND you have debug logging on? Your IO must be taking a decent hit. That's very interesting info to share; it sounds like I do indeed have a lot of room for optimization.
Yes, I've been concerned about storage bottlenecking the RPi 2, so I moved the root FS off SD onto a conventional HDD. Results are still the same, so it seems there's something else causing the performance hiccups. I've also taken care to watch the openHAB output and fix anything that may have been causing errors or warnings.
Here's a paste of the vmstat output. Everything seems to be OK. It's not even hitting swap:
I should say the slow triggers are not consistent; sometimes things happen quickly, sometimes they take forever. I thought it was just a matter of the objects not being cached in memory yet, but the slowness happens even after the server has been running for a while. I'll need to find some time to trace it.
The sluggishness comes up in various areas, but one example is a rule I set up to dim the lights when a movie is playing on the Plex client. The associated actions are pretty simple: evaluate whether it's day or night, send a notification to Plex via JSON, and dim the lights if a movie is playing during the evening. When I fire the trigger (start or resume a movie), this action can take anywhere from near-instantaneous to 5-6 seconds to complete. Odd.
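For context, a rule along those lines might look something like the sketch below in the Rules DSL. All item names here (PlexState, EveningMode, LivingRoomDimmer) are placeholders, not from the actual setup:
[code]
// Hypothetical sketch only: PlexState, EveningMode and LivingRoomDimmer
// are placeholder item names, not from the actual configuration
rule "Dim lights when a movie plays"
when
    Item PlexState changed to "PLAYING"
then
    if (EveningMode.state == ON) {
        sendCommand(LivingRoomDimmer, 10)  // dim to 10%
    }
end
[/code]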
By the way, your suggestion in the other thread about adding locks on shared objects was awesome. It improved consistency quite a bit. Now I just gotta focus on performance.
Of course, during the busy times it looks a lot different. The largish figures under Blocks Out are the RRD sync, which is set to write every minute, in addition to on each change. Looking at the Wait stat, it only has a small impact on the system during that time.
For my logging, I have DEBUG enabled with both the local FS and SYSLOG as outputs. The latter is another RPi where I centralize all my device logs, and it's on an attached USB drive where I offline/archive logs for months (Router, openHAB, Mac, etc.). In the worst case, I could turn off openHAB's DEBUG logging and I'd still have a copy on the other machine.
Your #'s aren't anything to worry about, so I'd expect the blockage to be somewhere else. It would be worthwhile double-checking the locks you've added to ensure they're tightly scoped around "just" the bit that needs the consistency lock. I'd also look at those Rules and create distinct locks for the bits that need them, just in case there's any unneeded sharing going on.
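As a sketch of what "tightly scoped" means (item and variable names here are illustrative, not from the actual setup): keep any slow work outside the lock, and guard only the shared update itself:
[code]
import java.util.concurrent.locks.ReentrantLock

// one lock per independent piece of shared state
val ReentrantLock sceneLock = new ReentrantLock

rule "Update shared scene state"
when
    Item SceneTrigger received command
then
    // ... slow work (HTTP calls, calculations) stays outside the lock ...
    sceneLock.lock
    try {
        // only the bit that needs consistency is guarded
        SceneState.postUpdate(ON)
    } finally {
        sceneLock.unlock
    }
end
[/code]
The try/finally is important: if the Rule body throws while holding the lock without it, every later execution will block forever on [tt]sceneLock.lock[/tt].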
Slow Rule executions will occur the first time each Rule is executed after startup (or after a Rule has been edited, causing a reload). These timings are often large, even in fast-CPU environments, and don't seem to be avoidable, but they're [largely] a one-time cost - unless you're hot-editing Rules frequently.
It may also be that your vmstat #'s are quite different during Scene execution, so you may want to create a script to gather the output over a longer period so it can be looked at in more detail.
I've also taken care to watch the openHAB output to fix anything that may have been causing errors or warnings.
Definitely good practice... especially right after you've written any new Rules ;)
I put timing hooks into quite a few of my Scenes, which allows me to go back and see if there are long-running things "after the fact".
The typical form of these is:
[code]
rule "…"
when
    …
then
    var t = now
    ...
    var long x = now.getMillis - t.getMillis
    logInfo("eagle", "PERF Pull-Data-from-Eagle elapsed: " + String::valueOf(x) + "ms")
end
[/code]
These have come in VERY handy for all sorts of issues.
The other thing I changed a lot of is my use of "[tt]Item … received update …[/tt]" in Rules. I've converted almost all of these to use "[tt]Item … changed[/tt]" or "[tt]Item … changed to …[/tt]".
I used to put all the logic in the Rule itself, and going this route has helped a bunch (as the system now does the comparisons, instead of the Rule itself).
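In other words, the conversion looks roughly like this (using a made-up Thermostat_Mode item as an example):
[code]
// Before: fires on every update, even when the value is unchanged,
// so the Rule body has to do the comparison itself
rule "Mode updated"
when
    Item Thermostat_Mode received update
then
    if (Thermostat_Mode.state.toString == "HEAT") {
        // ... act on the heat mode ...
    }
end

// After: the system does the comparison and only fires the Rule
// when the value actually changes to the state of interest
rule "Mode changed to heat"
when
    Item Thermostat_Mode changed to "HEAT"
then
    // ... act on the heat mode ...
end
[/code]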
2x 120 GB Kingston SSD drives (I know, but it's a prototype for now)
zwave bindings: Vera Lite, 2x Vera Edge, Aeon USB stick (all Z-Wave devices have a direct connection to a Vera, so everything is instant, and large commands are pretty fast)
Software:
Proxmox 4.3
zfs mirror on ssd drives
one container for each of: mysql, mqtt, apache+php+nodejs for custom interfaces, nagios, email forwarder, nodered, openhab, and others
Usage at idle: 1.4 GB of RAM + 0.4 GB for Proxmox, 1% IO delay
Once I get the hang of openHAB, I will expand this into a high-availability 3-node Proxmox cluster, with only one node required to stay up for automation and remote control to work.
He's also running mysql, mqtt, apache+php+nodejs for custom interfaces, nagios, an email forwarder and nodered apart from OH, which goes a long way towards explaining the memory usage numbers.
Yeah, I saw that, but I was interested in which component was hogging the memory. There are a bunch of folks using OH and MQTT together on small-footprint HW… so I'm guessing Apache, but a breakdown would be handy.
I am running it on my iMac with a 3.5 GHz CPU, SSD drive and 32 GB RAM :D. I just installed the Hue plugin. It may take me a few weeks to set everything up.
I'm running it on my i7 Mac mini Server with Sierra. I've only just started, so I haven't got to grips with it yet.
My Mini Server was already on to run various things such as CCTV software and the HomeKit and Alexa bridge software, so I decided I'd give openHAB 2 a go.