s_mack
Joined: 04 Jun 2009 Posts: 107
|
Keeping a relatively large running average with arrays? |
Posted: Tue Sep 15, 2009 6:45 pm |
|
|
Each time my code runs, I get a value for x. I want to keep track of a history of x. I'm assuming this is best done in an array.
So each time the code runs, I pop the value into that array. But... I'm going to eventually have to start kicking old values out and replacing them with new ones. I can't imagine this is very efficient:
Code: |
for (i = 1; i <= 100; ++i)
   myarray[i-1] = myarray[i];
myarray[100] = x; |
So my questions regarding arrays in CCS:
Is there a function like PHP's pop(), shift() or whatever to kick a value from the beginning?
Similarly, is there a function like PHP's push() to add a new element to the end of the array?
Is there a function to sum (or average) the elements of an array?
Thanks gurus! |
|
|
PCM programmer
Joined: 06 Sep 2003 Posts: 21708
|
|
Posted: Tue Sep 15, 2009 7:25 pm |
|
|
You don't have to physically shift the contents of the array each time.
Just increment an index. See this post for examples:
http://www.ccsinfo.com/forum/viewtopic.php?t=3462&start=1
These two routines implement a running median filter and a
running average filter. |
|
|
asmboy
Joined: 20 Nov 2007 Posts: 2128 Location: albany ny
|
|
Posted: Tue Sep 15, 2009 7:37 pm |
|
|
An int8 circular buffer is very easy and never needs a push-down shift - BUT you must ensure that you have loaded at least 100 data items before taking the first average
Code: |
// circular array does running aves nicely
// have at least 100 vals loaded before first average
// else there will be undefined vals in the array
// nuval is your new data being added to the array
#define SIZE 100
int8 myarray[SIZE];
int8 next = 0;
void addnewVal(int8 nuval){   // loop around forever
   myarray[next] = nuval;
   next = (next + 1) % SIZE;
}
int8 runtotAve (void){        // assume you have called addnewVal 100+ times
   int16 total = 0;           // must start at zero on every call
   int8 i;                    // returns your average
   for (i = 0; i < SIZE; i++){ total += myarray[i]; }
   return ( (int8) (total / SIZE) );
}
|
After your first 100 entries you will commence overwriting the oldest entry, and so on |
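Since the sum is recomputed over all 100 slots on every call above, a further refinement is to keep a running total: subtract the slot being overwritten, add the new value, and the average becomes O(1). A minimal sketch in portable C (CCS would use its own `int8`/`int16` types; the zero-initialized buffer means early averages are low until 100 real samples are in, matching the caveat above):

```c
#include <assert.h>
#include <stdint.h>

#define SIZE 100

static uint8_t  buf[SIZE];   /* circular buffer, zeroed at startup */
static uint8_t  next  = 0;   /* index of the slot to overwrite     */
static uint16_t total = 0;   /* running sum of everything in buf   */

/* O(1) update: subtract the value being overwritten, add the new one */
void add_val(uint8_t nuval) {
    total -= buf[next];
    total += nuval;
    buf[next] = nuval;
    next = (next + 1) % SIZE;
}

uint8_t ave(void) {
    return (uint8_t)(total / SIZE);
}
```

The worst-case sum (100 × 255 = 25500) still fits in the 16-bit total.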
|
|
s_mack
Joined: 04 Jun 2009 Posts: 107
|
|
Posted: Tue Sep 15, 2009 7:45 pm |
|
|
I guess there's no magical way of avoiding the RAM hit, is there?
My RAM usage just jumped from 48-60% to 70-82%, and I need the space!
Ouch. |
|
|
asmboy
Joined: 20 Nov 2007 Posts: 2128 Location: albany ny
|
|
Posted: Tue Sep 15, 2009 8:11 pm |
|
|
You're not using 18F parts then, right?
You never did say.
If you're in 16F 28/40-pin land, there is almost surely an 18F
equivalent part with lots more RAM.
$$ will fix the problem |
|
|
s_mack
Joined: 04 Jun 2009 Posts: 107
|
|
Posted: Tue Sep 15, 2009 9:10 pm |
|
|
18F4480 |
|
|
asmboy
Joined: 20 Nov 2007 Posts: 2128 Location: albany ny
|
|
Posted: Tue Sep 15, 2009 10:19 pm |
|
|
18F4580
Double your code space, double your user RAM,
same hardware.
Close enough? |
|
|
s_mack
Joined: 04 Jun 2009 Posts: 107
|
|
Posted: Tue Sep 15, 2009 10:37 pm |
|
|
Yes, it would have been. But that's not what is in our devices.
Your advice is good, but it's a "woulda, shoulda, coulda... didn't" kind of situation. I'm trying to make our device do more than it was originally designed to do with software upgrades alone. If I can, then great. If I can't, it still does what it was designed to do very well.
Thanks for your help. It fits as-is and works. I just have 2 or 3 more features to add that are a higher priority and I'm pretty sure they won't all fit.
Can I use the EEPROM instead of an array I wonder? Then all I need is a temp variable and a sum variable. Hmm.... |
|
|
s_mack
Joined: 04 Jun 2009 Posts: 107
|
|
Posted: Tue Sep 15, 2009 11:34 pm |
|
|
Ok, someone stop me here before I waste too much time on this.
How fast is an EEPROM read? Given the nature of the program, and the fact that there is an interrupt for writing, I suppose I don't care how long a write takes (within reason), but I need some assurance that a read happens reliably fast.
Is there a "lifespan" to EEPROM? Am I just going to burn out the chip by reading and writing to it constantly?
If not... I figure I can use 100 locations of EEPROM, each storing one of the 100 aforementioned elements. Once the 100th spot is taken, the first is replaced, and a "ticker" variable increments, thereby keeping track of the new "first" spot. My sum variable just needs to know the value being kicked out (1 EEPROM read) and the new value that replaces it (1 EEPROM write) to keep the average current.
If this can be done practically, then instead of 100 int8 variables taking up RAM I just need 3 int8s (ticker, addvar, subvar) in addition to the 1 int16 (running total) I would need anyway. That seems to be a whole lot of RAM savings, but at the obvious (and perhaps ludicrous) expense of (much?) slower EEPROM reads/writes.
And if there is a finite number of times the EEPROM can be accessed, then this obviously isn't the way to go. |
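The bookkeeping described above can be sketched as follows. To keep the sketch self-contained, the EEPROM is simulated with a RAM array here; `read_slot`/`write_slot` are hypothetical stand-ins for whatever EEPROM access routines the compiler provides (CCS has `read_eeprom()`/`write_eeprom()` built-ins for the internal data EEPROM). Only the ticker, two temporaries, and the 16-bit running total live in RAM:

```c
#include <assert.h>
#include <stdint.h>

#define SLOTS 100

/* Simulated EEPROM for the sketch; on the PIC, replace these two
   helpers with the compiler's real EEPROM read/write routines.  */
static uint8_t eeprom_sim[SLOTS];
static uint8_t read_slot(uint8_t a)             { return eeprom_sim[a]; }
static void    write_slot(uint8_t a, uint8_t v) { eeprom_sim[a] = v; }

static uint8_t  ticker = 0;   /* index of the oldest stored value */
static uint16_t total  = 0;   /* running sum of all SLOTS values  */

/* One EEPROM read + one write per new sample */
void add_sample(uint8_t nuval) {
    uint8_t subvar = read_slot(ticker);  /* value being kicked out */
    total -= subvar;
    total += nuval;
    write_slot(ticker, nuval);           /* new value takes its place */
    ticker = (ticker + 1) % SLOTS;
}

uint8_t average(void) {
    return (uint8_t)(total / SLOTS);
}
```

As the thread notes below, the endurance limit on EEPROM write cycles is the real problem with this scheme, not the logic.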
|
|
asmboy
Joined: 20 Nov 2007 Posts: 2128 Location: albany ny
|
|
Posted: Wed Sep 16, 2009 11:00 am |
|
|
Yes, writing constantly to EEPROM will eventually FAIL,
and it will slow down execution too - whether internal or external.
You could make the running-average array smaller as one way to save RAM, though making the code that handles it smaller is not so easy, since there is not much of it to start with.
An array of 50 values would return 50 bytes of RAM for other stuff, but that's about it, short of changing PICs OR a rigorous analysis (profiling) of the
.LST file you have now to see where all the weight is.
If you can dispense with FLOATS - that usually unbloats code - and being parsimonious with printf and other such code-hog calls is another way to cut down program size.
Check your .LST to see what can be done |
|
|
s_mack
Joined: 04 Jun 2009 Posts: 107
|
|
Posted: Wed Sep 16, 2009 11:08 am |
|
|
Thanks.
Getting rid of floats is what brought me here.
I replaced all my float code with fixed-point, at the inevitable expense of precision. That loss of precision has led to an oscillation that is undesirable. We determined that taking a running average of the last 100 data points eliminates the oscillation without introducing too much lag. Significantly fewer than 100 doesn't do much.
But using 100 is slightly worse for the code size than using floats was, so in the end I'm no better off.
I'll try some other low-cost (code, not $$) methods for reducing the oscillation, and if that's a no-go then I'll just go back to floats and eliminate the extra features we were hoping to introduce.
Thanks for the suggestion and help, and for confirming that using EEPROM wasn't a good way to go. It worked, by the way. It wasn't as slow as I thought it might be. But yeah, I don't want to be burning out chips doing this. Plus it wasn't 100% reliable - about 0.5% of the data reads were lost. Thanks to the averaging that isn't really an issue. Interesting exercise anyway! |
|
|
Ttelmah Guest
|
|
Posted: Wed Sep 16, 2009 2:29 pm |
|
|
The alternative that might be worth considering is external FRAM. Basically instantaneous reads/writes (timing is limited only by the speed at which you can transfer the data to/from the chip, up to SPI rates as high as 25MHz), and effectively unlimited write endurance. Using the SPI versions, on a 40MHz PIC, you can transfer a byte in under 1uSec....
Best Wishes |
|
|
Ken Johnson
Joined: 23 Mar 2006 Posts: 197 Location: Lewisburg, WV
|
|
|
RLScott
Joined: 10 Jul 2007 Posts: 465
|
Re: Keeping a relatively large running average with arrays? |
Posted: Fri Sep 18, 2009 8:12 pm |
|
|
s_mack,
The kind of running average you are describing is sometimes called a "box car average" because the graph of which points are included looks like a box car. There is another kind of running average that takes much less RAM, and that is an exponential decay average. That is like what you would get if you filtered your values with a very long time-constant RC filter. This can be implemented by the recursive formula:
Code: | FilteredVal = FilteredVal * alpha + NewVal * beta |
where alpha + beta = 1.
The difference in behavior is that the influence of a value drops out suddenly and completely when it falls off the end of the queue in the box car average. But in the exponential decay average, the influence of a value starts out at a maximum, and then it gradually decays over time, until after several time-constants its influence (weighting) is inconsequential. If you can use this kind of filtering behavior, then maybe exponential decay filtering is for you.
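Given the no-floats constraint mentioned earlier in the thread, a common trick is to pick alpha = 1 - 1/2^k, so the recursive formula reduces to a shift and two adds, with no multiplies or floats at all. A minimal sketch under the assumption of 8-bit samples and a 16-bit accumulator (the state is kept scaled by 2^k so the fractional part isn't lost):

```c
#include <assert.h>
#include <stdint.h>

#define K 4   /* alpha = 1 - 1/16, beta = 1/16 */

/* State is FilteredVal scaled by 2^K, preserving fractional precision. */
static uint16_t state;

void ema_init(uint8_t first) { state = (uint16_t)first << K; }

/* FilteredVal = FilteredVal*(1 - 1/2^K) + NewVal*(1/2^K) */
uint8_t ema_update(uint8_t nuval) {
    state = state - (state >> K) + nuval;
    return (uint8_t)(state >> K);
}
```

Larger K gives a longer time-constant (more smoothing, more lag), and the whole filter costs two bytes of RAM instead of one hundred.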
After posting this, I see that it is essentially covered in Ken Johnson's posting. _________________ Robert Scott
Real-Time Specialties
Embedded Systems Consulting |
|
|
|