can't reproduce documented overflow behavior of _delay_ms()


can't reproduce documented overflow behavior of _delay_ms()

Britton Kerin
On a 16 MHz Arduino Mega 2560, this code:

  __builtin_avr_delay_cycles (100.0);   // Compiles, so we apparently have it
  debug_led_on ();
  _delay_ms (290000.0);
  debug_led_off ();
  assert (0);

leaves the debug LED on for about 269 s.  Same for larger arguments
to _delay_ms (e.g. 350000.0).  This is different from what this
documentation implies will happen:

  If the avr-gcc toolchain has __builtin_avr_delay_cycles() support,
  maximal possible delay is 4294967.295 ms/ F_CPU in MHz. For values
  greater than the maximal possible delay, overflows results in no
  delay i.e., 0ms.

It looks like the result is now the maximum delay, rather than 0ms.
Perhaps __builtin_avr_delay_cycles() has changed?
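
For reference, the ~269 s I see is consistent with a uint32_t cycle
count saturating: 4294967295 cycles / 16 MHz ≈ 268.4 s.  A quick
host-side check of that arithmetic (my own throwaway code, not from
the library):

  #include <stdint.h>
  #include <stdio.h>

  int main (void)
  {
    const double f_cpu = 16e6;                     /* 16 MHz, as on the Mega */
    const double max_cycles = (double)UINT32_MAX;  /* 4294967295 cycles */
    printf ("max delay = %.1f s\n", max_cycles / f_cpu);  /* ~268.4 s */
    return 0;
  }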

The documentation for _delay_us() also contains this:

  If the user requests a delay greater than the maximal possible one,
  _delay_us() will automatically call _delay_ms() instead. The user
  will not be informed about this case.

  If the avr-gcc toolchain has __builtin_avr_delay_cycles() support,
  maximal possible delay is 4294967.295 us/ F_CPU in MHz. For values
  greater than the maximal possible delay, overflow results in no
  delay i.e., 0us.

This looks suspicious, because if _delay_us() calls _delay_ms() for
longer delays, one would expect the same maximum low-resolution delay
as for _delay_ms().  And in fact this code behaves as expected (leaves
the LED on for 10 seconds):

  __builtin_avr_delay_cycles (100.0);   // Compiles, so we apparently have it
  debug_led_on ();
  _delay_us (10000000.0);   // 10 s in us
  debug_led_off ();
  assert (0);

but this code delays for only about 269 s:

  __builtin_avr_delay_cycles (100.0);   // Compiles, so we apparently have it
  debug_led_on ();
  _delay_us (290000000.0);   // 290 s in us
  debug_led_off ();
  assert (0);

So it looks like, with recent avr-gcc at least, my suspicion is correct.
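
For the record, my mental model of the fallback, as a simplified
sketch of what the documentation describes (this is not the actual
<util/delay.h> source, and MAX_US_DELAY is a made-up name for
whatever the real limit is):

  static inline void my_delay_us (double us)
  {
    if (us > MAX_US_DELAY)       /* too long for the microsecond path */
      _delay_ms (us / 1000.0);   /* fall back: inherits _delay_ms ceiling */
    else
      __builtin_avr_delay_cycles ((uint32_t)(us * (F_CPU / 1e6)));
  }

That would explain why both functions hit the same ~269 s ceiling at
16 MHz.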

If it would be applied, I could prepare a documentation patch for
these issues, but I don't know when the __builtin_avr_delay_cycles()
behavior changed (assuming that's the cause of the above problem).

Britton


Re: can't reproduce documented overflow behavior of _delay_ms()

Georg-Johann Lay-2
On 23.01.20 at 23:57, Britton Kerin wrote:

> On a 16 MHz Arduino Mega 2560, this code:
>
>    __builtin_avr_delay_cycles (100.0);   // Compiles, so we apparently have it
>    debug_led_on ();
>    _delay_ms (290000.0);
>    debug_led_off ();
>    assert (0);
>
> leaves the debug LED on for about 269 s.  Same for larger arguments
> to _delay_ms (e.g. 350000.0).  This is different from what this
> documentation implies will happen:
>
>    If the avr-gcc toolchain has __builtin_avr_delay_cycles() support,
>    maximal possible delay is 4294967.295 ms/ F_CPU in MHz. For values
>    greater than the maximal possible delay, overflows results in no
>    delay i.e., 0ms.
>
> It looks like the result is now the maximum delay, rather than 0ms.
> Perhaps __builtin_avr_delay_cycles() has changed?

No, the code is still from the initial commit

https://gcc.gnu.org/r172416

The code is still the same today:

https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/config/avr/avr.md?annotate=280000
https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/config/avr/avr.c?annotate=280000

That's from just prior to the SVN->Git transition.  (I found no way
to get a similarly neat view with gitweb.  The transition was 3 weeks
ago, and since then that code has not changed either.)

https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;a=tree;f=gcc/config/avr;hb=HEAD

Johann





Re: can't reproduce documented overflow behavior of _delay_ms()

Joerg Wunsch
In reply to this post by Britton Kerin
As Britton Kerin wrote:

>   If the avr-gcc toolchain has __builtin_avr_delay_cycles() support,
>   maximal possible delay is 4294967.295 ms/ F_CPU in MHz. For values
>   greater than the maximal possible delay, overflows results in no
>   delay i.e., 0ms.
>
> It looks like the result is now the maximum delay, rather than 0ms.
> Perhaps __builtin_avr_delay_cycles() has changed?

No, obviously, the documentation is wrong, and the delay functions
clip the delay value at a __builtin_avr_delay_cycles() argument of
UINT32_MAX rather than setting it to 0.

However, I just revisited the C standard on this.  All of this is
simply undefined behaviour: the internal calculation is performed with
a "double" argument type, which is eventually converted to uint32_t,
thereby overflowing the uint32_t domain.

The C standard says:

6.3.1.4 Real floating and integer

1 When a finite value of real floating type is converted to an integer
  type other than _Bool, the fractional part is discarded (i.e., the
  value is truncated toward zero). If the value of the integral part
  cannot be represented by the integer type, the behavior is
  undefined.

So this explains why the documentation claims 0 while we now see an
argument of UINT32_MAX: both are possible outcomes of the same
undefined conversion.
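
A minimal host-side illustration of the conversion in question (my
own example, not library code):

  #include <stdint.h>
  #include <stdio.h>

  int main (void)
  {
    /* 290000 ms at 16 MHz = 290000 * 16000 cycles = 4.64e9 > UINT32_MAX */
    double cycles = 290000.0 * 16000.0;
    uint32_t t = (uint32_t)cycles;       /* undefined behaviour, C 6.3.1.4 */
    printf ("%lu\n", (unsigned long)t);  /* whatever the compiler produced */
    return 0;
  }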

Feel free to file a documentation bug for that.  I don't think
anybody will use such long delays in any practical application, but
the documentation ought to be correct, mentioning that overflowing the
respective integer domain yields undefined behaviour.

--
cheers, Joerg               .-.-.   --... ...--   -.. .  DL8DTL

http://www.sax.de/~joerg/
Never trust an operating system you don't have sources for. ;-)


Re: can't reproduce documented overflow behavior of _delay_ms()

Georg-Johann Lay-2
Joerg Wunsch wrote:
> No, obviously, the documentation is wrong, and the delay functions
> clip the delay value at a __builtin_avr_delay_cycles() argument of
> UINT32_MAX rather than setting it to 0.
>
> However, I just revisited the C standard on this.  All of this is
> simply undefined behaviour: the internal calculation is performed
> with a "double" argument type, which is eventually converted to
> uint32_t, thereby overflowing the uint32_t domain.

hmmm.  So there is more work to do for double support?  We definitely
do not want double here but float instead...

Johann


Re: can't reproduce documented overflow behavior of _delay_ms()

Joerg Wunsch
As Georg-Johann Lay wrote:

> hmmm.  So there is more work to do for double support?  We definitely do not
> want double here but float instead...

I disagree.

This is all just happening in temporary variables inside of "static
inline" functions, and it is completely optimized away by the
compiler, so the actual argument type doesn't really matter.

With optimizations turned off, all bets are off for the various
_delay* functions anyway.
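
(To illustrate: the delay functions boil down to something like the
following sketch, not the real header.  With a constant argument and
optimization enabled, the whole double expression is folded at compile
time, so no floating-point code ends up in the binary:)

  static inline void wait_ms (double ms)
  {
    /* __builtin_avr_delay_cycles() requires a compile-time constant
       argument anyway, so the double math never survives to run time */
    __builtin_avr_delay_cycles ((uint32_t)(ms * (F_CPU / 1000.0)));
  }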

The artifact we are trapped by here is that overflowing a conversion
from float/double (no matter which) to an integral type causes
undefined behaviour.  Apparently, in the past this yielded a result of
0 (which seems to have been determined empirically at the time, and
then recorded as a matter of fact in our documentation), whereas it
now seems to result in the largest possible value for the target type
(about 269 s delay at 16 MHz).  Being undefined behaviour, the
compiler would have been free to just use 42 as well ...
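
If we ever wanted defined behaviour there instead, clamping before
the conversion would be enough (a sketch only):

  double ticks = ms * (F_CPU / 1000.0);
  if (ticks > 4294967295.0)   /* saturate at UINT32_MAX so that ... */
    ticks = 4294967295.0;
  uint32_t t = (uint32_t)ticks;  /* ... this conversion is always defined */

That would turn today's accidental clipping into guaranteed clipping,
which the documentation could then state as a fact.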

--
cheers, Joerg               .-.-.   --... ...--   -.. .  DL8DTL

http://www.sax.de/~joerg/
Never trust an operating system you don't have sources for. ;-)


Re: can't reproduce documented overflow behavior of _delay_ms()

Georg-Johann Lay-2
On 04.02.20 at 23:05, Joerg Wunsch wrote:

> As Georg-Johann Lay wrote:
>
>> hmmm.  So there is more work to do for double support?  We definitely do not
>> want double here but float instead...
>
> I disagree.
>
> This is all just happening in temporary variables inside of "static
> inline" functions, and it is completely optimized away by the
> compiler, so the actual argument type doesn't really matter.

Ah yes, of course.

> With optimizations turned off, all bets are off for the various
> _delay* functions anyway.

Johann