Archive

Archive for the ‘filtering’ Category

Demonstration of fractional delay function on real data

November 13, 2013 1 comment

Figure 1

The meteorological station at Armagh Observatory, Northern Ireland, closed in 2000[1]. A lot of the data has been put online via scanned logbooks and some digitised data, paid for primarily by lottery funds.

An unpublished version of the data is used as part of a fractional delay demonstration.

An earlier article providing a template and instructions is here:
https://daedalearth.wordpress.com/2013/11/12/fractional-dataset-delay-subsample-resolution-in-a-spreadsheet/

Read more…

Fractional dataset delay (subsample resolution) in a spreadsheet

November 12, 2013 3 comments

Figure 1


Working code is provided[1] for copying and use, no macros.

Fractional delay means one timeseries can be delayed or advanced in time relative to another by any amount, including any fraction of one sample period. This is achieved by a short digital filter (5 taps) which the spreadsheet “designs” to the user’s demand.
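
The spreadsheet formulas are in the linked file rather than reproduced here. For anyone who prefers to read the idea as code, a minimal sketch of one standard way to build such a 5-tap fractional delay filter (Lagrange interpolation; the spreadsheet may well use a different design) is:

```python
import numpy as np

def lagrange_frac_delay(delay, ntaps=5):
    """FIR taps whose group delay is `delay` samples (Lagrange interpolation).
    With 5 taps the useful range is roughly the filter centre +/- half a sample."""
    h = np.ones(ntaps)
    for k in range(ntaps):
        for m in range(ntaps):
            if m != k:
                h[k] *= (delay - m) / (k - m)
    return h

# Shift a test series by 0.3 of a sample: ask for the 2-sample centre delay
# plus the 0.3 fraction, then let mode="same" absorb the whole-sample part.
x = np.sin(2 * np.pi * 0.05 * np.arange(50))
y = np.convolve(x, lagrange_frac_delay(2.3), mode="same")
```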

A demonstration of usage on real data is the next DaedalEarth article. Link to demonstration.

Read more…

Categories: filtering, software

Arctic sea ice June 2011

June 5, 2011 Leave a comment

Since I’ve not shown anything to do with sea ice recently, here is an update on Arctic sea ice extent.

Daily data from IJIS/JAXA.

Sea ice extent

I’ve modelled the annual cycle and subtracted it from the data.
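
The cycle model itself is not shown here; as a rough stand-in, one common way to do the same thing is a least-squares fit of an annual sinusoid (plus a harmonic) which is then subtracted. A sketch, with variable names that are mine rather than from the original workfile:

```python
import numpy as np

def remove_annual_cycle(t_days, x, period=365.25):
    """Fit mean + annual sine/cosine + first harmonic by least squares and
    return the residual, i.e. the data with the seasonal cycle removed."""
    w = 2 * np.pi / period
    A = np.column_stack([
        np.ones_like(t_days),
        np.sin(w * t_days), np.cos(w * t_days),          # annual term
        np.sin(2 * w * t_days), np.cos(2 * w * t_days),  # first harmonic
    ])
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return x - A @ coeffs
```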

There is still no obvious change, nothing making it clear where the Arctic ice is going to meander next.

Data

www.ijis.iarc.uaf.edu/en/home/seaice_extent.htm

Categories: analysis, Datasets, filtering

Is sea level really an issue?

Normalised sea level and temperature

This is likely to be controversial and dismissed by some as invalid, which is their problem.

The RSS data is monthly.

Topex/Jason data is sampled roughly every 10 days, processed into monthly values and then normalised to the RSS data.
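
For anyone wanting to repeat this, the two steps are simple enough to sketch. The binning and the meaning of “normalised” below are my reading (match mean and spread to RSS), not a description of the actual workfile:

```python
import numpy as np
import pandas as pd

def to_monthly(dates, values):
    """Average the irregular ~10-day altimeter samples into calendar months."""
    return pd.Series(values, index=pd.DatetimeIndex(dates)).resample("MS").mean()

def normalise_to(x, ref):
    """Rescale series x so its mean and standard deviation match those of ref."""
    return (x - x.mean()) / x.std() * ref.std() + ref.mean()
```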

Both monthly datasets were low-pass filtered at 10 years, with end correction. This is likely to be dismissed as impossible; look at the plot, it is self-evidently about right.

All four are plotted above.

Other work has suggested there is a correlation between temperature and sea level, sea level lagging perhaps 4 years. (I doubt many people think that sea level causes temperature)

The actual sea level rises being talked about are extremely small relative to the size of the planet.

What if the top few x metres of sea water are warmed by 0.2K?
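
For a rough feel of the scale involved (my illustrative numbers, not from the post): warm sea water expands by roughly 2 × 10⁻⁴ of its volume per kelvin, so warming a 100 m surface layer by 0.2 K raises sea level by only a few millimetres.

```python
# Back-of-envelope thermal expansion; depth and coefficient are assumptions.
alpha = 2e-4    # thermal expansion coefficient of warm sea water, per kelvin
depth = 100.0   # metres of surface layer assumed to warm
dT = 0.2        # warming in kelvin
print(f"rise ~ {alpha * depth * dT * 1000:.1f} mm")   # ~4 mm
```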

What if the Vostok core is like polar ice?

With the last post I hope I demonstrated how a very simple regular function, a planet orbit, causes a more complex modulation of sea ice, something which does not seem to be generally understood.

Look back a couple of posts and you will see Vostok ice core plots.

What I have done now is flip the temperature proxy data upside down. This ought to roughly represent ice and puts the data the same way around as the earth sea ice plots. More ice is upward, melt is downwards.

I then used a very crude approximation to some kind of orbital signal; it actually locks in at 103 ky.

This is known to be wrong in relation to orbits (more on that in a moment) but it is food for thought.

Do you now see why the widespread notion that sharp melts cannot come from a simple stimulus is a highly questionable assumption? It should be considered feasible, with no magic or particular non-linearity needed.

Assistance

I would appreciate assistance with very long orbital period calculations. I have the capability to carry out some novel experimentation using accurate orbital data, but I do not have the data itself.

One of the very interesting features is apparently the variation in the eccentricity of the earth orbit on these very long timescales, i.e. it varies with at least two periods. I hope the reason why this is so interesting is not lost on the reader.
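
To spell out why two periods matter: two regular terms with nearby periods simply added together beat against each other, producing a slow rise and fall in amplitude at the difference frequency with no non-linearity at all. A toy sketch, using round illustrative numbers rather than real orbital terms:

```python
import numpy as np

t = np.arange(0.0, 1000.0, 1.0)        # time in thousands of years (ky)
e1 = np.sin(2 * np.pi * t / 95.0)      # one eccentricity-like term
e2 = np.sin(2 * np.pi * t / 125.0)     # a second, nearby period
combined = e1 + e2                     # envelope period ~ 1/(1/95 - 1/125) ~ 400 ky
```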

How polar ice is modulated by the sun

April 30, 2011 9 comments

What follows here is a demonstration of how earth orbit shapes Arctic ice and in a later post I intend to show how this may well relate to palaeoclimatology shown in ice cores.

You will have seen the plots of Arctic sea ice. I am going to use one dataset here; which one is unimportant, others give the same answer.

Arctic sea ice extent, monthly data
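
One quantitative hook, as a minimal sketch rather than the calculation behind the plots: insolation follows the inverse square of the Earth-Sun distance, so even the present small eccentricity swings top-of-atmosphere insolation by several percent between perihelion and aphelion, and that annual asymmetry is one of the handles the orbit has on the seasonal ice cycle.

```python
import numpy as np

ecc = 0.0167                         # present-day orbital eccentricity
S0 = 1361.0                          # W/m^2 at the mean Earth-Sun distance
r = np.array([1 - ecc, 1 + ecc])     # perihelion, aphelion (units of the mean)
print(S0 / r**2)                     # ~1407 and ~1317 W/m^2, about +/-3%
```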

Read more…

Categories: analysis, filtering, sea ice, solar

Epica Vostok resampled composite

April 27, 2011 Comments off

I won’t say much here now, busy.

This seems to confirm a huge date mismatch between the two sets of ice core data.

This is a deliberately large plot. Contact me if you need data or help.

Composite of signal processing resampled data from two ice cores
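
For anyone trying to reproduce the comparison: once both cores have been resampled onto the same uniform age grid, the apparent dating offset can be estimated by sliding one against the other and watching the correlation. A minimal sketch, not the tool actually used:

```python
import numpy as np

def date_offset(a, b, step_years, max_shift):
    """Slide core b against core a (both on the same uniform age grid) and
    return the shift, in years, giving the highest correlation.
    max_shift is in grid steps and must be at least 1."""
    best_s, best_r = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        r = np.corrcoef(a[max_shift:-max_shift],
                        np.roll(b, s)[max_shift:-max_shift])[0, 1]
        if r > best_r:
            best_s, best_r = s, r
    return best_s * step_years
```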

The simplest way to provide the data is an export of the work file to XLS format, warts and all. It contains the usable resampled data and the originals.

Added later: the scruffy XLS work file is here:

epica-1

Categories: analysis, filtering, Ice core

Vostok ice core, part 2

April 26, 2011 Leave a comment

See part 1 if you haven’t.

It seemed a good test to see whether I could reproduce the temperature vs. CO2 lead-lag result, but using signal processing and data resampling. It turns out the CO2 data is even worse than the isotope-ratio temperature data: fewer data points, sampled at different dates.

Easy. I applied identical processing to both datasets and then figured out how to time shift one of them. To my surprise there is a very high correlation, r² = 0.82, at least given the preprocessing used. The quick and dirty way to do the time shift was to apply an offset at the decimate stage, which simply picks off the data at a different point. (This is valid.)

If I have done this right it is about 1,500 years for the best fit of rise and fall. I then aligned the datasets and plotted them for an eyeball comparison (the CO2 Y axis rescaled and offset by hand so the data roughly matches on one scale).
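
The offset-at-decimation trick is easy to reproduce once both series are on the same dense uniform grid: shifting the decimation start index by n moves one series by n grid steps relative to the other, and r² over a range of offsets shows where the best fit sits. A sketch with made-up grid numbers:

```python
import numpy as np

def decimate_with_offset(x, factor, offset):
    """Keep every `factor`-th sample starting at `offset`; the offset is the
    quick-and-dirty time shift (offset * grid_step years)."""
    return x[offset::factor]

def r_squared(a, b):
    n = min(len(a), len(b))
    return np.corrcoef(a[:n], b[:n])[0, 1] ** 2

# e.g. on a 100-year grid, comparing temp at offset 0 with CO2 at offset 15
# tests a 1,500-year shift:
# r2 = r_squared(decimate_with_offset(temp, 10, 0),
#                decimate_with_offset(co2, 10, 15))
```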

Time co-incident plot of temperature and CO2

There is obviously a lot going on but there it is visibly on one plot.

A net dig turns up work by Jo Nova (I know the name, no idea who she is):

http://joannenova.com.au/global-warming/ice-core-graph/

That says 800 years and seems to cite others.

Ref

http://www.ncdc.noaa.gov/paleo/metadata/noaa-icecore-2453.html

ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/vostok/co2nat.txt

Categories: analysis, filtering, Ice core

Vostok ice core, part 1

April 26, 2011 Leave a comment

An ongoing development is better handling of irregularly sampled data. This is a very hard problem with no pure solution.

After a lot of investigation and experimentation I have concluded that the NDFT is of little use: it solves nothing and kicks the problem straight back to the input data needing to be good. It usually involves approximation and other heuristics anyway.

Looks hard. Run away.

Raw data overlaid with a resampled dataset

For what I am doing a good solution is to fix up the dataset using signal processing, which is kind of trivial, although it will seem like black magic to outsiders. (Why no blue, pink, white, transparent magic?)

A key is keeping the human brain in the loop; each case is likely to be different, with no one size fits all.

I’ve coded up the hard part for a human as an extension of one software package.

It seems to work nicely on the test dataset, as above. The original Vostok data sampling interval ranges from 60 through 600 years.

An XLS with the original data and the resampled dataset is here: vostoke-temperature-a
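
The extension itself is not published, but the basic move is easy to sketch: put the irregular samples onto a uniform age grid and smooth from there. Plain linear interpolation as below is a crude stand-in for what the tool does, with a grid step chosen purely for illustration:

```python
import numpy as np

def resample_uniform(age, value, step=100.0):
    """Interpolate irregularly sampled core data onto a uniform age grid.
    The real processing keeps a human in the loop and does more than this."""
    order = np.argsort(age)
    grid = np.arange(age.min(), age.max() + step, step)
    return grid, np.interp(grid, age[order], value[order])
```
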
Reference

http://www.ncdc.noaa.gov/paleo/metadata/noaa-icecore-2453.html

ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/vostok/deutnat.txt

The dataset is now clean and trivial for normal tools.

And for those who like first difference…

Hiding anything? Nope. The 1st diff does of course have high frequency noise, but it is surprisingly small. The only clear term in the part not shown is ~4044 y. No idea what that is, if anything.

More to be done, always is.

Categories: analysis, filtering, Ice core

Dataset filtering with end correction

October 17, 2010 Leave a comment

Some time ago I became annoyed at the suggestions of can’t, won’t, and so on about frequency filtering short datasets. Was it really such an impossible problem?

Part of my background is in professional audio, including digital equipment design, so I am very familiar with signal processing and PCM (pulse code modulation). Just about all climatic data is actually coded PCM, so of course all the laws surrounding that apply. Not that you would know this from the way that most in science behave. Wrong math. The two huge ones are Nyquist and Shannon; it is not rocket science.

Handling, for example, audio data is one thing; doing the same for climatic data is a whole different pit of vipers. One of the massive problems is short data: it has two ends, something which is avoided like the plague in audio, often by hiding it (mute the sound).

It didn’t take me long to figure out one and then another method of end correcting short data. There is no theoretically correct solution, but I am pragmatic, engineer style: it works well enough. Classic question, do you want a good answer, or none?

In this case I dubbed it first order end correction. All it does is disconnect the Y (up and down) offset from zero.
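
In code terms, my reading of that description (an interpretation, not the program’s actual source) is: take the offset out before convolving, so the implicit zero padding beyond the two ends no longer looks like a giant step to the filter, then put it back afterwards.

```python
import numpy as np

def filter_first_order_corrected(x, taps):
    """Remove the mean, convolve, then restore the offset. By linearity this
    is the same as padding beyond the ends with the mean level instead of
    with zeros, which is what tames the end transients."""
    offset = x.mean()
    y = np.convolve(x - offset, taps, mode="same")
    return y + offset * taps.sum()
```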

This was added to a filtering package in C I was writing. (partly about getting back into programming after a 10 year or so break)

The filtering used direct convolution, a practical matter of it being fast enough for the size of data and filters commonly used (in most cases immediate). This lent itself perfectly to one of the end correction methods.

Roll forwards a couple of years and for various reasons further progress really needed a switch to FFT based convolution, generally done segmented. In contrast to direct convolution this is errm… somewhat complicated. The actual FFT is not a problem; I researched and use an excellent Japanese-authored library which has permissive licensing, in line with my intent to one day release software under a similar scheme. (Talking here BSD/MIT style.)
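
The C implementation is not shown here, but segmented FFT convolution is available off the shelf in Python (overlap-add below; overlap-save is the other common segmented scheme) if anyone wants to check that the direct and FFT routes agree:

```python
import numpy as np
from scipy.signal import oaconvolve   # overlap-add, i.e. segmented FFT convolution

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)              # a long data series
h = np.sinc(np.arange(-500, 501) / 50.0)      # a 1001-tap low pass, for comparison only
h *= np.hanning(len(h))

y_fft = oaconvolve(x, h, mode="same")         # FFT-based, segmented
y_direct = np.convolve(x, h, mode="same")     # direct convolution, much slower
assert np.allclose(y_fft, y_direct)
```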

Implementing the new convolution method was fun, of the hair-tearing-out variety. Obviously this is just a case of discovering how it is done, then writing it. The snag, as usual, is that no-one actually describes it accurately.

That worked, and on test I could go past a million tap filter, perfect. I like to find the limits before anything bites me. At 10 million taps I run out of disk spool and RAM; the program exits gracefully with a rude message, no problem. That amount of complex data etc. is awfully large; if that limit seems low, this ain’t bytes.

The next can of trouble is end correction. How for goodness sake can that be done?

This took weeks of hair loss. A simple answer turned out to be to preprocess using direct convolution, then use the new fancy stuff. This worked more or less immediately once I guessed some bizarre properties, such as what to do with complex data. Counter-intuitive, it seems, everywhere.

At that point I had what I thought cracked the whole problem. It seemed to pass basic tests.

Over a month or two of using the program it dawned on me there were subtle problems. Some parts were wrong.

I’ve just had a go at finally cracking it; it took several periods of detailed work.

The answer was to figure out a way of detecting what kind of filter is being requested and then, as necessary, use spectral inversion on the end correction. (Spectral inversion is a trivial maths process which reverses the frequency sense of a filter, such as transforming a low pass into a high pass, or the reverse.)
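
Since spectral inversion is so short, here it is as a sketch for the curious (the standard trick for odd-length, symmetric FIR filters; the program’s own code will differ in detail):

```python
import numpy as np

def spectral_invert(taps):
    """Negate every tap and add 1 at the centre: a low pass becomes the
    complementary high pass, and vice versa. Assumes an odd-length,
    linear-phase (symmetric) filter."""
    inv = -np.asarray(taps, dtype=float)
    inv[len(inv) // 2] += 1.0
    return inv
```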

No end correction

First order end corrected

The exact filter characteristic is unimportant here; it is just a quick hack for testing.

The data is a straight diagonal line. Depending on the filter it will turn into either the same line or a horizontal line.

The correction is not perfect, and it is quite involved.

Those of you who have come this far might be in for a surprise. There is one of those dreadful “everyone-knows” beliefs in science about the length of a transversal filter (or, to use another name, FIR) and incantations that the length of the filter must be cut off the output.

That lore is totally wrong. Above is hard proof: the filter is about 3x longer than the data.

As far as I can tell the truth is about the impulse response of the filter and the Gibbs phenomenon. The case could be that the long filter exactly mimics a much shorter filter; see the point? It is not about the length at all.

I add that the ghastly moving average filter is the worst possible filter, not even a good filter other than for specialist usage. The only redeeming feature for climatic work is that it is the shortest possible. Perhaps the most serious problem is the severe artefacts it creates, which are not in the data; these have led some people astray, thinking the artefacts are actual data.

I could post a demo of the dire effect.
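
In lieu of that demo, a sketch of what it would show: the frequency response of a moving average has big, slowly decaying sidelobes which flip sign from lobe to lobe, so plenty of supposedly removed signal leaks through with its phase inverted. The 12-point length below is just an example.

```python
import numpy as np
from scipy.signal import freqz

N = 12                           # e.g. a 12-month moving average
h_ma = np.ones(N) / N
w, H = freqz(h_ma, worN=512)     # frequency response of the moving average
# |H| has nulls at multiples of fs/N and sidelobes only ~13 dB down at first,
# flipping sign between lobes: these are the artefacts that are not in the data.
```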

So, there is still more to do. I have not even looked at how arbitrary filters behave. Hilbert is also still a problem. May as well try; it looks as though a sane arbitrary filter definition is on disk. And yay, it worketh! 🙂

Really go mad: a million points. Uh huh, 12 seconds and disk activity, needs more RAM. And with correction, 8 s, it already had the space. I haven’t optimised anything, nor space usage. No point in showing anything; it looks almost the same as the above plots. (Different filter characteristic, actually a pair of sharp bandpass filters.)

Real test time! What shall I choose? Okay, a work in progress: gravitational force computed from ephemeris, daily data from 1950, and try a bandpass extraction of a 60 year term.

Grin. Time for a beer.

That is actually a very interesting result which deserves its own post.

EDIT

Further investigation throws up problems I do not understand about getting the same result from other Solex 100 runs, suggesting I made a mistake. After fiddling for some time trying to discover what I had done wrong, I decided to leave the problem for the time being and come back to it when the time seems right. Is the filtering working? I have not used it a great deal, but no problems have appeared.

Categories: analysis, Datasets, filtering