
Re: CTI OSC5A2B02 OXCO module high precision frequency reference project


Daniel Marks
 

I worked in the field of compressed sensing and I taught a course that included this material. I also wrote several papers on compressed sensing instruments.

I am very familiar with the work of Donoho, Candes, and Tao, and with the various measures of sparsity, including mutual coherence, the restricted isometry property, and the L0 and L1 measures.
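As a rough illustration of one of those measures, here is a minimal sketch with an assumed random Gaussian sensing matrix (not taken from the attached lecture): the mutual coherence of a matrix is the largest absolute inner product between distinct unit-normalized columns, and small coherence is one sufficient condition under which L1 minimization recovers sparse vectors.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct unit-norm columns of A."""
    cols = A / np.linalg.norm(A, axis=0, keepdims=True)  # normalize each column
    G = np.abs(cols.T @ cols)                            # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)                             # ignore self inner products
    return G.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))                # assumed 64 measurements, 256-entry unknown
print("mutual coherence:", mutual_coherence(A))   # well below 1 for a random matrix this size
```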

I also build instruments. I have built spectroscopic instruments and inverse-scattering radar systems (which are better conditioned, being elliptic rather than parabolic systems as in evanescent surface waves) that used compressed sensing for data inversion.

I also worked on trying to infer the parameters of the distributions of time series generated by long-memory, long-tailed diffusion processes from measurements of those distributions.

And there's one thing I know: no magic fairy dust turns bad data into good data. You cannot wave a sparsity magic wand over data and miraculously get usable data out of noise. It doesn't matter whether the government spends $100 billion to improve a radar signature or an oil company spends $10 billion to find an oil well.

These are the kinds of visions sold by people who want grant money and promise that they can miraculously tease out data that is somehow latent and overlooked. Such cases are so extraordinarily rare as to be practically unknown.

In the end, you have a physical model for a process. You have possible measurements of that process. You have some inference method, for example maximum a posteriori. Your estimator can only be as good as your model. If your model is well enough behaved, you can get an idea of the accuracy of the estimator using Fisher information or minimum-variance estimation.
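A minimal sketch of that last point, with a linear aging model, noise level, and measurement schedule that are purely illustrative assumptions: for y_k = a + b*t_k plus i.i.d. Gaussian noise, the Fisher information matrix is J = X^T X / sigma^2, and its inverse gives the Cramér-Rao floor on the variance of any unbiased estimate of the drift rate b.

```python
import numpy as np

sigma = 1e-11                               # assumed measurement noise (fractional frequency)
t = np.arange(0.0, 90.0)                    # assumed 90 daily measurements
X = np.column_stack([np.ones_like(t), t])   # design matrix for y = a + b*t + noise

J = (X.T @ X) / sigma**2                    # Fisher information for i.i.d. Gaussian noise
crb = np.linalg.inv(J)                      # Cramér-Rao covariance lower bound
print("standard-deviation floor on drift rate b:", np.sqrt(crb[1, 1]))
```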

I have spent a career solving inverse problems and have been quite successful at it. And I do not promise what I do not think I can deliver. And I would not promise that any compressed sensing or estimation scheme would reliably provide an answer, unless there were some reason to believe that the problem was guaranteed by the physical situation to actually satisfy the sparsity constraint.

In reality, most people just assume the sparsity constraint, get an answer, and don't bother to compare it to reality or to do any sort of cross-validation of the results.
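For what that missing check might look like, here is a minimal hold-out sketch on synthetic data (the problem size, noise level, and split are all assumptions): fit on part of the measurements and see whether the result predicts the withheld part near the noise floor.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 80, 200, 5                               # assumed problem size and sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)     # synthetic noisy measurements

train = rng.random(m) < 0.8                        # random 80/20 train / hold-out split
support = np.flatnonzero(x_true)                   # stand-in for whatever support a sparse
                                                   # solver would return on the training data
coef, *_ = np.linalg.lstsq(A[train][:, support], y[train], rcond=None)
resid = y[~train] - A[~train][:, support] @ coef
print("hold-out RMS error:", np.sqrt(np.mean(resid**2)))  # should sit near the 0.01 noise level
```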

I attach a copy of a lecture for a course that briefly summarizes some basic results in compressed sensing theory as of the time the lecture was written.


On Tue, Aug 8, 2023 at 3:40 PM Reginald Beardsley via <pulaskite=[email protected]> wrote:

In general I strongly advise against jumping ahead of me to bring up some point, such as the problem being ill-conditioned, unless you have read and understood Foucart & Rauhut and "Random Data" by Bendat and Piersol. I am at the preliminary design stage of the basic DC supply.

If you would like to skip forward to data analysis, please provide data for analysis. For OXCOs, a minimum of 3 devices measured contemporaneously over a span of 3 months or longer. If you have multi-year records, send me the first half of each time series so we can compare predictions to reality.

A few mathematical details of major significance:

1) Any power fluctuations will be correlated across N OXCOs and, as a consequence, trivial to remove. The same applies to environmental variations. There are more stringent requirements on data collection to counter EMI: suppressing EMI in processing requires simultaneous measurements so it can be removed via DSP.
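A minimal sketch of that common-mode removal, using an assumed number of units, assumed noise levels, and a synthetic shared disturbance: anything that couples equally into every simultaneously sampled channel shows up in the cross-unit mean and can be subtracted out.

```python
import numpy as np

rng = np.random.default_rng(2)
n_units, n_samples = 4, 1000
individual = 1e-12 * rng.standard_normal((n_units, n_samples))     # per-unit noise (assumed)
common = 5e-12 * np.sin(2 * np.pi * np.arange(n_samples) / 250.0)  # shared supply/thermal drift (assumed)
data = individual + common                                         # simultaneous measurements

common_mode = data.mean(axis=0)   # the shared disturbance dominates the cross-unit mean
cleaned = data - common_mode      # subtract it from every channel
print("rms before:", data.std(), "after:", cleaned.std())
```

Note that subtracting the cross-unit mean also removes 1/N of each unit's individual signal, which is one reason more simultaneously measured units give a cleaner separation.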

2) Prior to Candes & Donoho's work in 2004, such problems could not be solved. That is what "NP-Hard" denotes. I am quite amazed that Donoho's 2004 proof remained unknown to me for 9 years. Prior to 2013-2016, a significant concern any time I was handed a programming assignment was "Is it NP-Hard?". I've encountered such requests more than once. In the seismic field the most common example is line intersections as those are of critical importance. Naively finding all the points where two or more of N line segments intersect requires comparing N! combinations. In practice you must *never* attempt such a problem, so identifying such things is critical to completing the work in a timely manner. Fortunately, in many instances one can avoid using an NP-Hard algorithm by exploiting various properties of the problem. Candes found the method and Donoho provided the proof for a broad class of problems in which the solution space was "sparse".

To the best of my knowledge it is the *only* published solution to an NP-Hard problem in polynomial time. That is a huge breakthrough which ranks with Norbert Wiener's 1940 report titled "The Extrapolation, Interpolation and Smoothing of Stationary Time Series". And given the mathematical symmetries with regular polytopes and convex hulls in N-dimensional space and many other areas of mathematics, I think it will come to be recognized as eclipsing Wiener, which is no mean feat.
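As a sketch of what that relaxation looks like in practice (synthetic data; the sizes and the use of a generic LP solver are assumptions, not anything from this project): basis pursuit replaces the combinatorial L0 search with an L1 minimization, which is an ordinary linear program.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n, k = 40, 120, 4                     # assumed: 40 measurements, 4-sparse unknown of length 120
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                           # noiseless measurements

# Basis pursuit:  min ||x||_1  subject to  A x = y.
# Split x = u - v with u, v >= 0, which turns the problem into a linear program.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.abs(x_hat - x_true).max())
```

The solver never enumerates supports; the sparsity of the answer falls out of the geometry of the L1 ball.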

As I suspect most reading this don't know what NP-Hard means, I shall give a brief precis.

If a problem has the form of a sum of M functions drawn from a collection of N functions, the L0 solution prior to September 2004 required evaluating all N factorial combinations. For N = 10, that's 3.6 million sums that must then be subtracted from the data to be fit and the absolute error summed.

For N = 20 it's 2e18 permutations, and for N = 30 it's >2e32. My calculator is unable to evaluate 100!, and an attempt to compute 50,000! would probably require more computer memory than the sum of all the memory of any form that has ever been produced just to hold all the digits of the resulting integer.
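The first few counts above can be checked directly with exact integer arithmetic (Python integers are arbitrary precision, so 100! itself is easy to evaluate):

```python
from math import factorial

for n in (10, 20, 30):
    print(f"{n}! = {factorial(n):.2e}")                      # about 3.6e6, 2.4e18, 2.7e32
print("decimal digits in 100!:", len(str(factorial(100))))   # 158 digits
```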

For now, let's stick to properly feeding an LM399 and an AD8429 so that everything above 0 Hz is at -120 dB or lower, with LM317s if possible and something better if not.

Have Fun!
Reg


On Monday, August 7, 2023 at 09:16:06 PM CDT, Daniel Marks <profdc9@...> wrote:


You might need to consider some kind of chopper-type amplifier to stabilize the voltage at those very low input offsets and microvolt ranges over long time scales. Otherwise drift is going to be problematic over a minutes-to-hours time scale, as even small input offsets vary. Even an LM399 has its own aging effects, given that it's constructed by diffusion, ion implantation, and other deposition processes that experience relaxation over time. These are likely to be significant over a year.

Also, trying to fit the various exponential-type aging processes, which could vary over orders of magnitude of time, to a sum of exponentials is going to be a poorly conditioned problem. This is the kind of thing that NIST standards were created for. I suppose it's worth a try, but I think it would need to be compared to something like a rubidium clock.
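A minimal sketch of that conditioning problem, with hypothetical aging time constants and an assumed sampling schedule: the columns exp(-t/tau) for nearby time constants are nearly parallel, so the least-squares design matrix is nearly rank-deficient and small data errors are amplified roughly by its condition number in the fitted amplitudes.

```python
import numpy as np

t = np.linspace(0.0, 365.0, 366)      # assumed: one year of daily samples
taus = [30.0, 60.0, 90.0, 180.0]      # assumed aging time constants, in days
X = np.column_stack([np.exp(-t / tau) for tau in taus])

# Nearly parallel columns make the normal equations nearly singular; the condition
# number is roughly the factor by which measurement errors are amplified in the
# fitted exponential amplitudes.
print("condition number of the design matrix:", np.linalg.cond(X))
```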




