Timed scenes triggering more than once?

I just renamed all my trigger names and reloaded openLuup, and it looks like all triggers are now at the right time, not a few seconds before the right time as they were before…

  5  2017-05-04 10:02:00   0  Wait  job#34 :timer 'Everyday @ 10:02' for scene [60] Reset RulesEngine ON
  6  2017-05-04 18:55:00   0  Wait  job#31 :timer 'Sunday to Thursday @ 18:55' for scene [43] Reminder - Kids Sleeping - 18h55
  7  2017-05-04 19:47:00   0  Wait  job#35 :timer 'Everyday @ 19:47' for scene [61] Smartswitch - Evening mode *V
  8  2017-05-04 19:50:00   0  Wait  job#32 :timer 'Sunday to Thursday @ 19:50' for scene [44] Reminder - Kids Sleeping - 19h50
  9  2017-05-04 20:12:00   0  Wait  job#25 :timer 'Sunday to Thursday @ 20:12' for scene [1] Pre-Sleeping *V
 10  2017-05-05 03:00:00   0  Wait  job#27 :timer 'inNight Sleeping Everyday @ 03:00' for scene [14] Post-Sleeping
 11  2017-05-05 05:00:00   0  Wait  job#28 :timer 'Daily Backup Everyday @ 05:00' for scene [20] Summer Mode
 12  2017-05-05 05:00:00   0  Wait  job#29 :timer 'Smartswitch Everyday @ 05:00' for scene [28] SmartSwitch - Reset ALL
 13  2017-05-05 05:00:00   0  Wait  job#30 :timer 'Daily Backup Everyday @ 05:00' for scene [41] Daily backup
 14  2017-05-05 05:15:00   0  Wait  job#37 :timer 'Monday to Friday @ 05:15' for scene [69] Waking Up - Seb working days
 15  2017-05-05 08:00:00   0  Wait  job#33 :timer 'Winnie Reminder' for scene [46] Reminder - Winnie
 16  2017-05-05 21:12:00   0  Wait  job#26 :timer 'Friday to Saturday @ 21:12' for scene [1] Pre-Sleeping *V
 17  2017-05-06 15:00:00   0  Wait  job#36 :timer 'Saturday to Sunday @ 15:00' for scene [63] Reminder - Snack time - 15h00

Yes, I am wondering if there is some clock ‘drift’ issue. Not necessarily with the system itself, but with the ways that delays are calculated. This gives me something to look for…

[ul][li]Does the multiple scene problem get worse with time?[/li]
[li]Does the system clock drift and/or get resynched every so often?[/li][/ul]

Not sure what the RPi default behaviour regarding NIST syncs and the clock might be - I’m no Unix expert! Any observations on the above welcomed.

AK,

I haven’t been running openLuup on a Raspberry Pi for a while now. It’s running on a small form factor i7!

AFAIK it’s not getting worse with time…

I think this may be the vital clue which I have been slow to pick up on…

…is it always the case for multiple triggering that only the last one is at or after the scheduled time?

Well, this is two days after the last log, and it’s earlier again by an extra 1.5 secs. Maybe rounding errors building up. Give me another couple of hours and I’ll post the next log.

2017-05-03 17:54:55.468 openLuup.server:: request completed (177 bytes, 1 chunks, 6 ms) tcp{client}: 0x198ab28
2017-05-03 17:54:58.497 luup_log:0: My Lua ver 0.50 debug: executing scene 9: "It is 5.55 PM" in room: "Watchers"
2017-05-03 17:55:00.161 luup_log:0: My Lua ver 0.50 debug: rest of scene 9 was executed
2017-05-03 17:55:00.161 luup.scenes:: scene 9, It is 5.55 PM, initiated by It is 5.55 PM
2017-05-03 17:55:00.173 openLuup.server:: request completed (4130 bytes, 1 chunks, 5073 ms) tcp{client}: 0x197afb8
2017-05-03 17:55:00.224 openLuup.server:: GET /data_request?id=user_data&output_format=json&DataVersion=497072759&_=1493447211658 HTTP/1.1 tcp{client}: 0x197afb8
2017-05-03 17:55:00.226 luup_log:0: My Lua ver 0.50 debug: executing scene 9: "It is 5.55 PM" in room: "Watchers"
2017-05-03 17:55:01.651 luup_log:0: My Lua ver 0.50 debug: rest of scene 9 was executed
2017-05-03 17:55:01.651 luup.scenes:: scene 9, It is 5.55 PM, initiated by It is 5.55 PM
2017-05-03 17:55:02.512 openLuup.server:: request completed (1159333 bytes, 73 chunks, 2288 ms) tcp{client}: 0x197afb8

This is what the job queue says:

2017-05-05 17:54:58         0      Wait  job#33 :timer 'It is 5.55 PM' for scene [9] It is 5.55 PM

I am convinced that the scheduler and timer modules themselves are not the cause…

Here are some specific lines extracted from a log of one of my systems, which has been running for 64 days. The openLuup plugin synchronises to clock time on startup and then starts a repeating timer which should run on the minute, every two minutes.

2017-05-05 04:02:00.356   :: openLuup LOG ROTATION :: (runtime 64.4 days) 
2017-05-05 04:04:00.044   luup_log:0: 3 Mb, cpu 2.3%, 64.38 days
2017-05-05 04:06:00.350   luup_log:0: 4 Mb, cpu 2.3%, 64.38 days
2017-05-05 04:08:00.153   luup_log:0: 4 Mb, cpu 2.3%, 64.38 days
2017-05-05 04:10:00.467   luup_log:0: 5 Mb, cpu 2.3%, 64.38 days
2017-05-05 04:12:00.291   luup_log:0: 5 Mb, cpu 2.3%, 64.38 days
2017-05-05 04:14:00.040   luup_log:0: 4 Mb, cpu 2.3%, 64.39 days
2017-05-05 04:16:00.405   luup_log:0: 4 Mb, cpu 2.3%, 64.39 days
2017-05-05 04:18:00.185   luup_log:0: 5 Mb, cpu 2.3%, 64.39 days
2017-05-05 04:20:00.455   luup_log:0: 5 Mb, cpu 2.3%, 64.39 days
2017-05-05 04:22:00.330   luup_log:0: 5 Mb, cpu 2.3%, 64.39 days
2017-05-05 04:24:00.060   luup_log:0: 3 Mb, cpu 2.3%, 64.39 days
2017-05-05 04:26:00.381   luup_log:0: 4 Mb, cpu 2.3%, 64.39 days
2017-05-05 04:28:00.115   luup_log:0: 5 Mb, cpu 2.2%, 64.39 days
2017-05-05 04:30:00.218   luup_log:0: 5 Mb, cpu 2.2%, 64.4 days

I would expect some variation, since the scheduler may be busy with other jobs, but as you see there is no drift even after 64 days.
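That drift-free behaviour is consistent with scheduling each run relative to clock time rather than "now plus interval". Here is a hedged sketch of that idea in Lua (an illustration, not necessarily the actual openLuup code):

```lua
-- Sketch of drift-free repeat scheduling: align the next firing to an
-- exact multiple of the interval on the clock, so small delays in the
-- scheduler never accumulate from one run to the next.
local interval = 120                          -- two minutes, in seconds

local function next_on_the_clock (now)
  return now - (now % interval) + interval    -- next exact multiple of interval
end

local now = os.time ()
local next_time = next_on_the_clock (now)
print (os.date ("%Y-%m-%d %H:%M:%S", next_time))
```

The naive alternative, `next_time = now + interval`, inherits any lateness in `now`, and that error compounds on every repeat.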

The problem, then, must be specifically with the scene handling.

As of a few minutes ago:

2017-05-05 17:54:58.844 openLuup.server:: GET /port_3480/data_request?_dc=1493970900534&id=lr_dmCtrl&control=status&last=1493970853 HTTP/1.1 tcp{client}: 0x199f708
2017-05-05 17:54:58.846 luup_log:0: My Lua ver 0.50 debug: executing scene 9: "It is 5.55 PM" in room: "Watchers"
2017-05-05 17:55:01.103 luup_log:0: My Lua ver 0.50 debug: rest of scene 9 was executed
2017-05-05 17:55:01.104 luup.scenes:: scene 9, It is 5.55 PM, initiated by It is 5.55 PM
2017-05-05 17:55:01.115 openLuup.server:: request completed (4130 bytes, 1 chunks, 5297 ms) tcp{client}: 0x192adb0
2017-05-05 17:55:01.116 openLuup.server:: request completed (268 bytes, 1 chunks, 2271 ms) tcp{client}: 0x199f708
2017-05-05 17:55:01.250 luup_log:12: Paradox_IP150_wps debug: zoneStatus: 1
2017-05-05 17:55:01.250 luup_log:12: Paradox_IP150_wps debug: Zone_011: Entry Hallway PI is now open
2017-05-05 17:55:01.250 luup.variable_set:12: 12.urn:a-lurker-com:serviceId:Paradox_IP150_wps1.Zone_011 was: 0 now: 1 #hooks:0
2017-05-05 17:55:01.251 openLuup.server:: GET /data_request?id=user_data&output_format=json&DataVersion=497074416&_=1493447224535 HTTP/1.1 tcp{client}: 0x192adb0
2017-05-05 17:55:01.253 luup_log:0: My Lua ver 0.50 debug: executing scene 9: "It is 5.55 PM" in room: "Watchers"
2017-05-05 17:55:02.905 luup_log:0: My Lua ver 0.50 debug: rest of scene 9 was executed
2017-05-05 17:55:02.906 luup.scenes:: scene 9, It is 5.55 PM, initiated by It is 5.55 PM
2017-05-05 17:55:03.779 openLuup.server:: request completed (1159513 bytes, 73 chunks, 2526 ms) tcp{client}: 0x192adb0
2017-05-05 17:55:03.807 luup_log:12: Paradox_IP150_wps debug: zoneStatus: 0

And the next scheduled trigger:

  15  2017-05-06 17:54:58         0      Wait  job#33 :timer 'It is 5.55 PM' for scene [9] It is 5.55 PM 

[quote=“akbooer, post:26, topic:196202”]Here’s some specific lines extracted from a log of one of my systems which has been running for 64 days. The openLuup plugin synchronises to clock time on startup and then starts a repeating timer which should run on the minute, every two minutes.
I would expect some variation, since the scheduler may be busy with other jobs, but as you see there is no drift even after 64 days.[/quote]Is it possible you would see a difference with a single long term timer (say 24 hours or more) as opposed to a timer running every 2 minutes? Just a thought.

Good thought, but in fact I have several daily and weekly timers also running (like clockwork!). This one, for example:

2017-05-05 16:00:00.265   openLuup.scenes:: Advent ON is currently paused

is scheduled for 16:00 daily, fires on time (although it’s actually paused, it still gets rescheduled), and has been doing so for 64 days.

I’m currently following the line of thought that it’s a problem in scenes which have potentially time-consuming actions; in other words, ones which finish a significant time after they start. I’m just running some tests on a fix at the moment, and it needs some further work, but it seems promising. This is related to your cumulative error suggestion.

Is it the case that these repeated scenes have significant processing associated with them? It may not be many lines of code, but if it involves, for example, HTTP requests or other I/O, then that would count.
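To illustrate why such actions count: scene Lua runs in a single-threaded scheduler, so a blocking call holds everything up until it returns. A hypothetical scene fragment (the sleep here just stands in for a slow wget or shell script):

```lua
-- Hypothetical scene Lua, assuming a POSIX shell is available.
-- The blocking external call delays scene completion by seconds,
-- which is exactly the kind of I/O under suspicion here.
local t0 = os.time ()
os.execute "sleep 2"          -- stand-in for a slow wget / shell script
local elapsed = os.time () - t0
print ("scene action took about " .. elapsed .. " seconds")
```

If the scheduler derives the next firing time from when a scene *finishes* rather than when it was *due*, a delay like this would shift every subsequent run.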

AK,

Maybe this can help. I still need to validate it, but I’m pretty sure that all my scenes that run more than once contain either a wget call and/or an external os.execute…

That would be very helpful. I think this is the nub of the problem.

either a wget call and/or an external os.execute ...

os.execute() ? Yes, that could do it. I always try to avoid these. If it’s for file-related operations, you are much better off using the luafilesystem (lfs) module. Of course, there are times when you have to spawn an external process.
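For example, where one might shell out with os.execute to create a directory and check it exists, lfs does the same thing in-process with no shell involved at all (this sketch assumes luafilesystem is installed; the path is just for illustration):

```lua
local lfs = require "lfs"                 -- luafilesystem module

-- in-process equivalents of common shell file operations
lfs.mkdir "/tmp/example_dir"              -- instead of os.execute "mkdir /tmp/example_dir"
local attrs = lfs.attributes "/tmp/example_dir"
print (attrs and attrs.mode)              -- "directory" if it exists
```

Staying in-process avoids the fork/exec cost and, more importantly here, avoids blocking the scheduler while an external command runs.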

I have updated the GitHub testing branch with a new version (2017.05.23) which has a number of fixes including an attempt to nail this timing error.

So type testing into the Update box and then click the Update button to give it a whirl.

Note: this IS on the testing branch, not development, because it’s been hard for me to duplicate some of the reported problems with scene timing, and I do need your help in trying this out. Everything else should work just fine…

(This also includes luup.job.set() and luup.job.setting() implementation)

AK,

My os.execute is there to run a shell script…

AK,

Not sure if it’s related to the “testing” branch… but in Startup jobs I have:

 23  2017-05-23 08:12:52        67     Abort  job#23 :plugin openWeather [string "[67] I_openWeather.xml"]:176: parameter #2 should be type number but is string 

Ah, that’s too stringent type checking. I blame the person who suggested this in the first place :wink:

…now fixed in the testing branch, 2017.05.23b

Working now!

Will let you know if I’m still getting scenes running more than once!

You might care to investigate io.popen(); see the Lua 5.1 Reference Manual.

…not only can it spawn a process, but you could read intermediate results from it if you so chose.

I’m not saying it solves the problem, but just another way of doing this.
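A minimal sketch (assuming a POSIX shell; the echo command is just a placeholder for a real script):

```lua
-- io.popen spawns a process AND hands back a file handle for its output,
-- unlike os.execute, which only returns an exit status.
local f = io.popen "echo hello from the shell"
local output = f:read "*a"          -- read everything the command wrote
f:close ()
print (output)
```

The handle behaves like any other Lua file object, so the output can be read line by line with `f:read "*l"` while the child process is still running.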

I can try that too! I don’t really need to read the result or output, just need to run it!

I’m a bit suspicious about the number being considered a string. GitHub shows this call for line 176:

luup.call_delay ("Weather_delay_callback", WU["Period"])

And line 53 shows: Period = 1800, -- data refresh interval in seconds. So it’s a number.

So not sure why the error complains about a string?
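One possibility (purely a guess): the value may have been read back from somewhere that stores everything as strings, such as a device variable or persisted user_data. A sketch of that failure mode, using a simulated strict check in place of the real luup.call_delay:

```lua
-- Simulated strict parameter check, standing in for the one the error
-- message describes; this is NOT the actual openLuup implementation.
local function call_delay (callback, seconds)
  assert (type (seconds) == "number",
    "parameter #2 should be type number but is " .. type (seconds))
end

local Period = "1800"        -- the literal 1800, but arriving as a string
local ok = pcall (call_delay, "Weather_delay_callback", Period)
print (ok)                   -- false: rejected by the strict check

-- explicit coercion makes the call safe either way
ok = pcall (call_delay, "Weather_delay_callback", tonumber (Period) or 1800)
print (ok)                   -- true
```

Lua’s automatic string/number coercion normally hides this, which is why code that worked for years can start failing the moment a type check becomes strict.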