lindsay.wilson.88
Joined: 11 Sep 2024 Posts: 40
|
Confused about atol() and unsigned values |
Posted: Wed Oct 09, 2024 7:19 pm |
|
|
Suppose I have the following:
Code: |
char uart_text[10];
unsigned int16 v;
get_string(uart_text,10);
v=atol(uart_text);
printf("You entered: %lu\r\n",v); |
This compiles and runs fine. I can enter any value in the range 0-65535 over the uart, and it converts it to an unsigned 16-bit integer, then prints it back.
However, I don't understand why this works! atol() is supposed to return a signed int16, and looking at the internal code for atol() in stdlib.h, it definitely does all its calculations with signed int16. So how come it will happily convert numbers in the range 0-65535, when a signed int16 can only go up to 32767?
I know there's strtoul() as well, which I probably should be using instead, but I'd like to understand the behaviour with atol(). |
|
|
Ttelmah
Joined: 11 Mar 2010 Posts: 19495
|
|
Posted: Thu Oct 10, 2024 2:27 am |
|
|
This is an oddity about the way that overflows happen and are handled.
If you overflow signed int16 arithmetic, the resulting 16bit word holds
the same bit pattern as if you had done the maths unsigned.
So 5*10000 done as an unsigned 16bit value gives 50000. Do the same with
the variables declared as signed 16bit values and the stored result reads
as -15536, but print that same word as an unsigned value and you get
50000!...
-15536, as a signed int32, is FFFFC350. 50000 is C350, so effectively you
have the same low 16bits; only the interpretation of the top bit differs,
and the sign is what has migrated off the end. Print these low 16bits as
an unsigned value, and it is as if the overflow has not happened. |
|
|
jeremiah
Joined: 20 Jul 2010 Posts: 1345
|
Re: Confused about atol() and unsigned values |
Posted: Thu Oct 10, 2024 9:30 am |
|
|
lindsay.wilson.88 wrote: |
However, I don't understand why this works! atol() is supposed to return a signed int16, and looking at the internal code for atol() in stdlib.h, it definitely does all its calculations with signed int16. So how come it will happily convert numbers in the range 0-65535, when a signed int16 can only go up to 32767?
|
I think you are overthinking the difference between signed and unsigned numbers. Most of the time, signed vs unsigned is about how you interpret the data. Because of how signed and unsigned numbers are related mathematically (two's complement), operations on both tend to produce the same bit patterns; the results are just interpreted differently.
They do matter for things like comparisons (EX: number < some_threshold), but most of the time you can use them somewhat interchangeably. |
|
|
lindsay.wilson.88
Joined: 11 Sep 2024 Posts: 40
|
|
Posted: Thu Oct 10, 2024 7:25 pm |
|
|
My brain hurts ;-) I think I get what you're describing!
Turns out Windows Calculator does signed integers (2's complement). If I put in 10000 and multiply by 5, it says -15536, so that overflowed. The hex for that is C350, and looked at as an unsigned integer, that's the correct 50000.
Simpler still, with single-byte arithmetic: if I add 100 plus 100, it says -56, C8 in hex, which is 200 as unsigned. |
|
|
Ttelmah
Joined: 11 Mar 2010 Posts: 19495
|
|
Posted: Fri Oct 11, 2024 2:36 am |
|
|
The key though (of course), as Jeremiah says, is that you are then displaying the signed number as an unsigned result.
It is designed to make your brain hurt if you overthink it. Just accept that this
is what happens.
|
|
|
lindsay.wilson.88
Joined: 11 Sep 2024 Posts: 40
|
|
Posted: Fri Oct 11, 2024 5:16 am |
|
|
Couple more things ;-)
Am I correct in thinking that if v is declared as an unsigned int16, then it'll definitely hold the correct value (as in, 0 to 65535) when I do v=atol(uart_text)? Looking at the assembly, it does look like it's simply assigning the two bytes produced by atol to the variable (which is what I want).
Why doesn't it attempt to automatically convert from signed to unsigned? I.e. the effect of doing v=(unsigned)atol(uart_text) instead. Which produces even stranger results, effectively the result modulo 256 ;-)
I don't need to in this application, but say I wanted to convert a longer number (an int32) from an input string. The compiler manual shows there's an atol32() function, but when I try to use this, the compiler just spits out "Undefined identifier atol32". Is there something else I need to include, or another function I can use?
Edit: Just realised there is atoi32(), which does work fine. I don't know why the manual mentions atol32() since it definitely isn't defined anywhere in stdlib.h. |
|
|
Ttelmah
Joined: 11 Mar 2010 Posts: 19495
|
|
Posted: Fri Oct 11, 2024 7:12 am |
|
|
Your cast returns an unsigned int8. Remember the default integer size
is int8, not int16.
atol32 should be in stdlib.h. However it is just a #define to atoi32, for
compatibility. Maybe your compiler version has this missing. |
|
|
lindsay.wilson.88
Joined: 11 Sep 2024 Posts: 40
|
|
Posted: Fri Oct 11, 2024 10:11 am |
|
|
Ahhh makes sense. (unsigned int16) does the trick.
I'm definitely using the most recent version (5.118) and there's no sign of atol32 in stdlib.h, only atoi32. I guess it's redundant anyway, since atoi32 handles a 32-bit integer? Just odd that the user manual lists atoi32 and atol32 separately. |
|
|
|