RKnapp
Joined: 23 Feb 2004 Posts: 51
IEEE to Microchip 32-bit Float Conversion; vice-versa: How? |
Posted: Tue Feb 24, 2004 1:09 pm |
Friends,
I need to convert IEEE floats (generated on a Pentium) into the four-byte format the 18F uses, so I can move them across RS-232, recreate them in the PIC18F8720, and use them.
I also have the reverse problem: converting PIC18 floats into IEEE bytes so I can send them back up the line.
I have to confess that I'm fine with C but know no assembly. Could anyone post a few CCS C routines that can do this kind of splitting into bytes and recomposition? (Between Pentiums I used to use unions to break the bytes apart.) It's not the RS-232 that's a problem for me; it's the byte-splitting and reassembly (conversion) on each side.
I would really appreciate this. Thanks,
Robert |
PCM programmer
Joined: 06 Sep 2003 Posts: 21708
Posted: Tue Feb 24, 2004 7:44 pm |
Quote: | I need to convert IEEE floats (generated on a Pentium) into the four-byte
format the 18F uses, so I can move them across RS-232, recreate them in the
PIC18F8720, and use them. |
Here is a program to do the conversion, and the source code for it.
http://www.piclist.com/techref/microchip/math/fpconvert.htm |
plehman
Joined: 18 Feb 2004 Posts: 12
Here's how I did it |
Posted: Wed Feb 25, 2004 9:21 pm |
In case you don't know, the only real difference between the IEEE-754 and Microchip 32-bit float formats is where the sign bit resides (IEEE-754 keeps it in bit 31; the Microchip format keeps the exponent in the top byte and puts the sign in bit 23). Here is some code I wrote to do the conversion; it assumes a union like the one shown below:
Code: | union float_temp
{
   float fValue;      // the float itself
   int   hValue[4];   // the same four bytes, individually (CCS int is 8 bits)
} A, B;
|
A and B are variables of this type, which allow byte operations through .hValue[]. Note that .hValue[0] will be the MSB and .hValue[3] the LSB. Here are the conversion routines:
Code: | void convert_ieee_to_microchip(int *Value)
{
   int1 Temp;

   Temp = shift_left(&Value[1], 1, 0);      // catch the exponent bit that crosses the byte boundary
   Temp = shift_left(&Value[0], 1, Temp);   // slide it into the top byte; Temp is now the old sign bit
   shift_right(&Value[1], 1, Temp);         // drop the sign back in as the new bit 23
}

void convert_microchip_to_ieee(int *Value)
{
   int1 Temp;

   Temp = shift_left(&Value[1], 1, 0);      // pull the Microchip sign bit out of bit 23
   Temp = shift_right(&Value[0], 1, Temp);  // push it up to bit 31; Temp is now the exponent LSB
   shift_right(&Value[1], 1, Temp);         // put the exponent LSB back into bit 23's place
}
|
They can be called with
Code: | convert_ieee_to_microchip(&A.hValue[0]); // IEEE-754->Microchip
convert_microchip_to_ieee(&A.hValue[0]); // Microchip->IEEE-754
|
I can fill the union variables with a small loop like this one:
Code: | for (i = 0; i < 4; i++)
{
A.hValue[i] = read_eeprom(i);
}
|
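If you would rather see the same thing done on the whole 32-bit value at once, something like this should be equivalent. It is only a sketch (I have not compiled it), and it assumes the four bytes have already been assembled into an unsigned int32 with the most significant byte on top, i.e. the usual IEEE bit layout with the sign in bit 31:
Code: | unsigned int32 ieee_to_microchip(unsigned int32 v)
{
   unsigned int32 sign = (v >> 31) & 0x1;       // IEEE sign, bit 31
   unsigned int32 exp  = (v >> 23) & 0xFF;      // biased exponent, bits 30..23
   unsigned int32 frac =  v        & 0x7FFFFF;  // mantissa, bits 22..0

   return (exp << 24) | (sign << 23) | frac;    // Microchip: exponent, sign, mantissa
}

unsigned int32 microchip_to_ieee(unsigned int32 v)
{
   unsigned int32 exp  = (v >> 24) & 0xFF;      // exponent, bits 31..24
   unsigned int32 sign = (v >> 23) & 0x1;       // sign, bit 23
   unsigned int32 frac =  v        & 0x7FFFFF;  // mantissa, bits 22..0

   return (sign << 31) | (exp << 23) | frac;    // IEEE-754: sign, exponent, mantissa
}
|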
Hope that helps |
RKnapp
Joined: 23 Feb 2004 Posts: 51
Posted: Wed Feb 25, 2004 10:21 pm |
It helps a LOT! Thank you!
I really appreciate both your response and PCM's. I am no newbie to embedded control, but I am definitely a newbie to these little chips, and I am in awe of people who know so much in this arena. Thanks again for your help!
Robert |
RKnapp
Joined: 23 Feb 2004 Posts: 51
More on this topic for MPLAB users |
Posted: Wed Mar 05, 2008 9:50 pm |
Friends,
This is an old topic, I admit, but I had to re-solve this problem when working with Microchip's MPLAB compiler, and some of you *might* be interested in my solution. I am well aware that this is the CCS forum, but I have been helped many times by kind people here and would like to offer my own little bit of insight, so please don't be offended. You might also want to avoid calling shift_left()/shift_right() for reasons of your own. If my solution is incorrect, please let me know.
The situation: a) the MPLAB compiler doesn't have CCS's convenient shift_left()/shift_right() functions, and b) it lacks a one-bit type (CCS's int1), so my solution has to use masks and brute force.
I don't offer complete routines, but instead the listing of my Borland C++ "test" program, which contains the converters. Microchip's AN575 describes the Microchip floating point format, Wikipedia describes IEEE 754-1985, and the CCS help file describes the CCS floating point format. However, the byte numbering caused me some confusion, because the diagram appears to show the number as big-endian when in fact (as the test program shows) it isn't. Byte numbering is as we usually think of it:
(high address ->) [ Byte 3 | Byte 2 | Byte 1 | Byte 0 ] (<- low address in the union)
Now, plehman's generously offered CCS solution appears to operate on the "wrong" bytes to me, but I think it works because the shift_left()/shift_right() routines use the same convention as the CCS help file. I am happy to be contradicted on this if I have misunderstood what is happening. Anyway, I hope the code below helps someone.
Robert
Code: |
/* ===========================================================================
   Function Name: test_1
   ======================================================================== */
#define uchar unsigned char
#include <stdio.h>
#include <conio.h>

int main (void)
{
   // Local data:
   union
   {
      float IEEE_Float;
      uchar uBytes[4];
   } uF;
   char  c = 0;
   float infloat;
   uchar tempchar_sbit;

   // Procedure:
   printf("\n test_1: Test IEEE to Microchip Float conversion: \n");

   while ( ( c != 'E' ) && ( c != 'e' ) )
   {
      printf ("\n Enter a float >>");
      scanf ("%f", &infloat);
      uF.IEEE_Float = infloat;

      printf ("\nFloat %f IEEE-754 bytes [3..0] are %X %X %X %X ", infloat,
              uF.uBytes[3], uF.uBytes[2], uF.uBytes[1], uF.uBytes[0] );

      // CONVERT IEEE-754 to Microchip:
      tempchar_sbit = uF.uBytes[3] & 0x80;   // get IEEE sign bit - resides in bit 31
      uF.uBytes[3] *= 2;                     // one shift left (drags in a zero on the right)
      if ( uF.uBytes[2] & 0x80 )             // if bit 23 is set,
      {
         uF.uBytes[3] |= 0x01;               // set bit 24 (else it will remain cleared)
         uF.uBytes[2] &= 0x7F;               // clear bit 23
      }
      if ( tempchar_sbit )                   // if IEEE sign bit was set,
         uF.uBytes[2] |= 0x80;               // set bit 23 (else, leave it cleared)

      printf ("\nFloat %f Microchip bytes [3..0] are %X %X %X %X ", infloat,
              uF.uBytes[3], uF.uBytes[2], uF.uBytes[1], uF.uBytes[0] );

      // CONVERT Microchip to IEEE-754:
      tempchar_sbit = uF.uBytes[2] & 0x80;   // get Microchip sign bit - resides in bit 23
      uF.uBytes[2] &= 0x7F;                  // clear bit 23
      if ( uF.uBytes[3] & 0x01 )             // if bit 24 is set,
         uF.uBytes[2] |= 0x80;               // set bit 23 (else, leave it cleared)
      uF.uBytes[3] /= 2;                     // one shift right (drags in a zero on the left)
      if ( tempchar_sbit )                   // if Microchip sign bit was set,
         uF.uBytes[3] |= 0x80;               // set bit 31 (else, leave it cleared)

      printf ("\nFloat %f Microchip bytes [3..0] --> %X %X %X %X (converted BACK to IEEE-754)", infloat,
              uF.uBytes[3], uF.uBytes[2], uF.uBytes[1], uF.uBytes[0] );

      printf("\n ( Enter 'E' to exit, or C/R to perform tests again... )");
      c = getch();
   }

   // Exiting:
   printf("\n Exiting: bye");
   return ( 0 );
}
//==============================================================================
|
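As a quick sanity check (worked out by hand from AN575 and the IEEE-754 layout, so please double-check me), entering 1.0 and then -2.5 should print byte patterns along these lines:
Code: |
Float 1.000000 IEEE-754 bytes [3..0] are 3F 80 0 0
Float 1.000000 Microchip bytes [3..0] are 7F 0 0 0
Float -2.500000 IEEE-754 bytes [3..0] are C0 20 0 0
Float -2.500000 Microchip bytes [3..0] are 80 A0 0 0
|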
RKnapp
Joined: 23 Feb 2004 Posts: 51
Oops -- Floating Point question is moot if using the C30 |
Posted: Thu Mar 13, 2008 1:16 pm |
Friends,
Please disregard my last answer. It's moot.
I'm programming a dsPIC under MPLAB C30 (forced into it because the CCS dsPIC compiler was too late in coming out). If you look at page 174 of the PDF entitled C30_Users_Guide_51284e.pdf, you'll see that note C.2 Note 1 talks about the use of "Microchip" formatted floats under MPLAB C18.
Fine. But the next note, C.2 Note 2, says that MPLAB C30 uses the IEEE-754 format. So "dumb serial devices" can send the device IEEE-754 floats and no special manipulation will be necessary.
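Since the original question was about moving floats across RS-232, under C30 the receive side reduces to reassembling the four IEEE-754 bytes. A rough sketch only (get_rx_byte() is just a stand-in for whatever routine returns the next received byte on your side, and it assumes the PC sends the least significant byte first):
Code: |
union
{
   float         f;
   unsigned char b[4];
} rx;
unsigned char i;

for (i = 0; i < 4; i++)
   rx.b[i] = get_rx_byte();   /* b[0] = LSB ... b[3] = MSB (hypothetical helper) */

/* rx.f is now usable directly, since C30 already stores floats in IEEE-754 format */
|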
Sorry about that.
Robert |
Douglas Kennedy
Joined: 07 Sep 2003 Posts: 755 Location: Florida
Posted: Fri Mar 14, 2008 2:09 am |
With respect to conversions between PC floats and CCS floats, I might not be remembering this correctly, but isn't there more to it than just the position of the sign bit? Floating point employs a normalized format for the mantissa: it is basically .1xxxxxxxxxxx (binary), and in all notations the point is implied. If 23 bits are used, then a bit is available for the sign. Now, some notations take advantage of the fact that the mantissa is always .1xxxxxx, so the notation implies not only the point but also the leading 1. This allows an extra bit of precision: 24 bits instead of 23. Such a notation indicates the sign by replacing the leading 1 with a sign bit. A conversion from a notation using the implied 1 (24-bit precision) to one without it (23-bit) is therefore lossy. I remember my Softee PC using the implied leading 1 notation. |
Guest
As far as I can tell, only the sign bit moves |
Posted: Fri Mar 14, 2008 1:04 pm |
Douglas,
I'm no expert in this, but it appears to me that both the 754 spec and the page I cited show a format that does exploit, just as you say, the idea that the mantissa is of the form 1.mmmmm, and thus gets 23 bits' worth of those m bits (mantissa powers of two). In every case I've seen, the 1. is implicit on both sides of the conversion.
This is the same between the two compilers. I used to think it was a function of the hardware, but no: there are no floating point units on these chips, so the C18 format (adhered to by CCS as well) seems to be purely a function of the compiler and the convenience of the math-routine library writers.
The C18 format actually makes more sense to me. It's hard to imagine what the 754 people were thinking when they put the sign bit out on the far left, forcing the biased 8-bit exponent to shift one bit to the right and causing a ton of fussiness. But maybe they reasoned that since we have to "unbias" that exponent to use it anyway, we might as well fuss around slightly more.
I dunno.
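Just to convince myself of that implied leading 1, I pulled a float apart in a little PC-side test. This is only a sketch (it assumes a 32-bit unsigned long and the usual union trick on a little-endian machine), but it shows the 1. being put back by hand; 6.5 is 1.101 binary times 2^2:
Code: |
#include <stdio.h>
#include <math.h>

int main(void)
{
   union { float f; unsigned long u; } x;
   unsigned long sign, biased_exp, frac;

   x.f = 6.5f;                            /* 6.5 = 1.101b * 2^2                 */
   sign       = (x.u >> 31) & 0x1;        /* bit 31                             */
   biased_exp = (x.u >> 23) & 0xFF;       /* bits 30..23, bias of 127           */
   frac       =  x.u        & 0x7FFFFF;   /* bits 22..0, WITHOUT the leading 1  */

   printf("sign=%lu  exp=%lu  frac=0x%06lX\n", sign, biased_exp, frac);

   /* value = (-1)^sign * (1 + frac / 2^23) * 2^(exp - 127) */
   printf("reconstructed = %f\n",
          (sign ? -1.0 : 1.0) *
          ldexp(1.0 + frac / 8388608.0, (int) biased_exp - 127));
   return 0;
}
|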
But I have just this morning learned another tidbit which you *might* want to know: under C30 you can't just load up a float with hex bytes; it fails for a reason I don't understand. E.g.,
Code: | float Fgibberish = 0x40F6EEF0; // loads nonsense |
That loads up gibberish. What you CAN do is this:
Code: |
union
{
   float          Floatout;
   unsigned long  UI32in;
   unsigned char  ubytes[4];
} uconv;

uconv.UI32in = 0x40F6EEF0;
// ... now I can use uconv.Floatout with no problems,
// and I can examine uconv.ubytes[n] to verify lsb/msb ordering, etc.
|
I don't know why this is, and I haven't tried the test on CCS.
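For what it's worth, another way to do the same reinterpretation without a union is memcpy(). Again only a sketch, and I have not tried it under C30:
Code: |
#include <string.h>

/* ... inside some function ... */
unsigned long bits = 0x40F6EEF0;
float f;

memcpy(&f, &bits, sizeof f);   /* copy the raw 4 bytes into the float */
|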
Have a nice day,
Robert |
Douglas Kennedy
Joined: 07 Sep 2003 Posts: 755 Location: Florida
Posted: Sat Mar 15, 2008 1:30 am |
Code: | float Fgibberish = 0x40F6EEF0; |
Probably the compiler didn't see 0x40F6EEF0 as float notation (a bit pattern) but instead saw it as hex notation for an integer, whose numerical value it then converted to float notation. Any floating point notation leaves choices open: where the most significant byte goes, and where the sign bit is encoded. Often the goal is to make the notation efficient for testing ==0, <=0, and >=0 on the targeted processor. As long as the notational representation of a number (e.g., a 4-byte float format) never leaves the domain of the compiler that understands it, life is good. And if numbers are converted to other well-accepted notations (e.g., ASCII decimal digits or hex ASCII characters) before transfer between compiler domains, life is still good. Float has evolved into a number of notational formats, and the choice of which one to use is left to the compiler writer. |
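In other words (a hand-worked example, so treat the exact digits with care): as an initializer, 0x40F6EEF0 is simply the integer 1,089,924,848 and gets numerically converted, whereas reinterpreting the same 32 bits as an IEEE-754 pattern gives roughly 7.7166.
Code: |
/* ... inside some function ... */
unsigned long bits = 0x40F6EEF0;
float as_value;
union { unsigned long u; float f; } pun;

as_value = 0x40F6EEF0;   /* approx 1.0899248e9 : the integer VALUE converted to float        */
pun.u    = bits;         /* pun.f is approx 7.7166 : the same bits reinterpreted as IEEE-754 */
|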