François,
It would be helpful if you described the specific frequency span you are
using, the number of segments, and the frequencies where the "errors"
occur. If there is some sort of bug in the software algorithm, that
information would help pinpoint where it originates.
But it is also true that the NanoVNA hardware has its own limitations and
boundaries, and it is likely that the "error" points you are seeing in your
calibration are in fact true measurements, caused by a condition in the
hardware at that particular frequency. Such an "error" should therefore be
left in the calibration file, so that it compensates for the hardware
measurement anomaly. An example of this is around 300 MHz, where the
hardware (actually the firmware) switches from using the fundamental
frequency of the synthesizer to its harmonics.
So I would not be quick to "correct" my calibration file, particularly if
you had the averaging turned on. If the "error" survived the averaging
function, it is indeed present in the measurement (unless there is a
software bug).
As shown in the code sample in the quoted message below, the nanovna-saver
(N,M) averaging function takes the N samples collected for each frequency
point, discards the M samples furthest from the average of those samples,
and then averages what remains to produce the value for that frequency
point.
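Put differently, the effect per frequency point is roughly this (a minimal
numpy sketch, not the actual nanovna-saver code; the function name and the
(N, points) complex array layout are my own assumptions):

import numpy as np

def average_sweeps(sweeps: np.ndarray, discard: int) -> np.ndarray:
    """Average N sweeps per frequency point, first dropping the `discard`
    samples furthest from the per-point mean (illustrative sketch)."""
    mean = sweeps.mean(axis=0)                  # per-frequency mean
    distance = np.abs(sweeps - mean)            # each sample's deviation
    order = np.argsort(distance, axis=0)        # closest samples first
    keep = order[: sweeps.shape[0] - discard]   # drop the `discard` worst
    kept = np.take_along_axis(sweeps, keep, axis=0)
    return kept.mean(axis=0)                    # final per-point average

# e.g. a 25/6 setting: 25 complex samples per point, 6 discarded
# averaged = average_sweeps(samples, discard=6)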
On Fri, Jul 29, 2022 at 9:46 AM Jim Lux <jimlux@...> wrote:
On 7/28/22 11:32 PM, F1AMM wrote:
Hello
I don't understand the meaning of:
- Number of measurements to average
The NanoVNA makes multiple sweeps, and NanoVNA-Saver averages them.
One thing to remember is that if the sweep is >101 points, it's actually
multiple sweeps strung together.
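For example, with fixed 101-point segments the sub-sweep boundaries fall at
predictable sample indices (a trivial illustrative helper; the real
segmentation in nanovna-saver may differ):

def segment_starts(total_points: int, segment_size: int = 101) -> list:
    """0-based sample indices where each hardware sub-sweep begins."""
    return list(range(0, total_points, segment_size))

print(segment_starts(404))  # [0, 101, 202, 303]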
- Number to discard
I do not know. Maybe it's how many that are farthest from the mean?
(i.e. throwing out outliers)
Looking at SweepWorker.py, that's what it looks like. See below for the
code.
- Common values are 3/0, 5/5, 9/4 and 25/6
1/ Could you explain what you understand about this subject?
I saw that these parameters are also used during calibration.
During the calibration sequence (short, open, load), some measurements are
marred by a gross error (a click). Despite 25/6 filtering, these errors
remain present in the produced .cal file. There is nothing more unpleasant
than then finding these errors, always in the same place, in the normal
measurements made with this .cal file.
Are these at boundaries of sweeps (i.e. at sample 101, 201, etc.)?
I have to go back, almost by hand, into the raw .cal file to correct its
errors. I detect the errors with Excel and correct them by linear
interpolation between the previous and the next value. The result is
satisfactory, but the process is not very efficient.
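That per-point repair could be scripted instead of done in Excel. A
hypothetical sketch (the helper name and outlier threshold are my own
assumptions, and parsing of the .cal file is omitted):

import numpy as np

def repair_outliers(values: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    """Replace isolated gross errors with the mean of their neighbours,
    i.e. linear interpolation on an evenly spaced frequency grid."""
    out = values.copy()
    # Typical point-to-point step, used as a robust scale for "gross".
    step = np.median(np.abs(np.diff(values))) or 1e-12
    for i in range(1, len(values) - 1):
        neighbour_avg = (values[i - 1] + values[i + 1]) / 2
        if abs(values[i] - neighbour_avg) > threshold * step:
            out[i] = neighbour_avg
    return out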
My questions
-------------
2/ What is the nature of the current filtering algorithm, that it leaves so
many errors?
I don't know that there is a smoother or filter across the band. My
impression is that it is all point by point.
3/ Could this algorithm be improved?
Probably - there's always room for improvement.
It's tricky doing outlier rejection or filtering, though. Consider
measuring a narrow-band filter with measurement points far enough apart
that the sequence goes "out of band", "in band", "out of band" - is the
radically different value of the middle sample an outlier, or the true
value?
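A concrete version of that trap, with made-up numbers:

# A narrow filter sampled too coarsely: out-of-band, in-band, out-of-band.
s21_db = [-60.0, -1.0, -58.0]
# Any repair that pulls a point toward its neighbours would "correct" the
# -1.0 dB sample to roughly -59 dB, destroying the one real passband reading.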
4/ Has anyone ever come up with a script to run on the raw .cal file to
correct these gross errors?
73
This is what removes outliers:
import logging
from typing import List, Tuple

import numpy as np

logger = logging.getLogger(__name__)


def truncate(values: List[List[Tuple]], count: int) -> List[List[Tuple]]:
    """truncate drops extrema from data list if averaging is active"""
    keep = len(values) - count
    logger.debug("Truncating from %d values to %d", len(values), keep)
    if count < 1 or keep < 1:
        logger.info("Not doing illegal truncate")
        return values
    truncated = []
    # Regroup the data per frequency point instead of per sweep.
    for valueset in np.swapaxes(values, 0, 1).tolist():
        # Mean of the (re, im) samples for this frequency point.
        avg = complex(*np.average(valueset, 0))
        # Keep the `keep` samples closest to the mean, discarding extrema.
        truncated.append(
            sorted(valueset,
                   key=lambda v, a=avg: abs(a - complex(*v)))[:keep])
    # Swap back to the per-sweep layout expected by the caller.
    return np.swapaxes(truncated, 0, 1).tolist()
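For instance, with made-up (re, im) samples, the glitched reading is
dropped before the final averaging:

# Three samples of two frequency points, as (re, im) pairs;
# the third sample of the first point is a gross outlier.
values = [
    [(0.10, 0.01), (0.50, 0.02)],
    [(0.11, 0.01), (0.51, 0.02)],
    [(0.90, 0.70), (0.52, 0.02)],
]
kept = truncate(values, count=1)  # keep the 2 samples nearest the mean
# The (0.90, 0.70) sample is gone from the first point, so averaging
# over `kept` no longer sees the glitch.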