
nanovna-saver : Sweep setting


F1AMM
 

Hello

I didn't understand the meaning of:

- Number of measurements to average
- Number to discard
- Common values are 3/0, 5/5, 9/4 and 25/6

1/ Could you explain to me what you understand about this subject?

I saw that these parameters were also used during the calibration. During the calibration sequence (short, open, load), some measurements are marred by a gross error (a "click"). These errors, despite a 25/6 filtering, remain present in the produced .cal file. There is nothing more unpleasant than then finding these errors, always in the same place, in the normal measurements that use this .cal file.

I have to go back, almost by hand, to the raw .cal file to correct its errors. I detect the errors with Excel and correct them by linear interpolation between the previous value and the next value. The result is satisfactory but not very efficient.

My questions
-------------

2/ What is the nature of the current filtering algorithm, that it leaves so many errors?
3/ Could this algorithm be improved?
4/ Has anyone ever come up with a small tool to run on the raw .cal file to correct these gross errors?

73
--
F1AMM (François)


 

I also sometimes see these 'clicks' appearing at the same frequency in the
measured results. But after a recalibration they are most of the time
gone. Yet they do happen, and why, I don't understand. I don't know if
editing the .cal file is the way forward (it is also cumbersome). So I don't have
a solution, but I recognise these 'clicks'/'bumps'/'ticks'.

All the best,

Victor


On Fri, 29 Jul 2022 at 08:32, F1AMM <18471@...> wrote:


 

Which nano and which firmware?

Do different nano-types or firmware versions make a difference?

On my old nano (H3.2) I had a bump around 6 MHz, but it was a known error.

Arie PA3A


 

I have a NanoVNA-H version 3.5, software 1.0.64, kernel 4.0.0.
I need to check where the bump is. It does not always happen... When I see
it again, I will report.

All the best,


Victor


On Fri, 29 Jul 2022 at 13:10, Arie Kleingeld PA3A <pa3a@...> wrote:


 

On 7/28/22 11:32 PM, F1AMM wrote:
Hello
I didn't understand the meaning of:
- Number of measurements to average

The NanoVNA makes multiple sweeps, and NanoVNA-Saver averages them.

One thing to remember is that if the sweep is >101 points, it's actually multiple sweeps strung together.


- Number to discard
I do not know. Maybe it's how many that are farthest from the mean? (i.e. throwing out outliers)

Looking at SweepWorker.py, that's what it looks like. See below for the code.


- Common values are 3/0, 5/5, 9/4 and 25/6
1/ Could you explain to me what you understand about this subject?
I saw that these parameters were also used during the calibration. During the calibration sequence (short open load), some measurements are marred by a gross error (click). These errors, despite a 25/6 filtering, remain present in the produced .cal file. There is nothing more unpleasant than to find, then, always in the same place, these errors in the normal measurements using this .cal file

Are these at boundaries of sweeps (i.e. at sample 101, 201, etc.)?


I have to go back, almost by hand, to the raw .cal file to correct its errors. I detect errors with Excel and I correct them by doing a linear extrapolation with the previous value and the next value. The result is satisfying but not very effective.
My questions
-------------
2/ What is the nature of the current filtering algorithm that leaves so much error
I don't know that there is a smoother or filter across the band. My impression is that it is all point by point.


3/ Could this algorithm be improved?
Probably - there's always room for improvement.

It's tricky doing outlier rejection or filtering, though. Consider measuring a narrow-band filter with measurement points far enough apart that the sequence goes "out of band", "in band", "out of band" - is the radically different number of the middle sample an outlier, or the true value?



4/ Has anyone ever come up with a spinner, to run on the raw .cal file; to correct these gross errors
73


This is what removes outliers:


# (imports and logger as set up at the top of SweepWorker.py)
import logging
from typing import List, Tuple

import numpy as np

logger = logging.getLogger(__name__)

def truncate(values: List[List[Tuple]], count: int) -> List[List[Tuple]]:
    """truncate drops extrema from data list if averaging is active"""
    keep = len(values) - count
    logger.debug("Truncating from %d values to %d", len(values), keep)
    if count < 1 or keep < 1:
        logger.info("Not doing illegal truncate")
        return values
    truncated = []
    for valueset in np.swapaxes(values, 0, 1).tolist():
        avg = complex(*np.average(valueset, 0))
        truncated.append(
            sorted(valueset,
                   key=lambda v, a=avg:
                   abs(a - complex(*v)))[:keep])
    return np.swapaxes(truncated, 0, 1).tolist()
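
For anyone who wants to see the effect, here is a small self-contained demo of that discard logic. The function body is condensed from the excerpt above (logging removed); the sample data is invented:

```python
from typing import List, Tuple
import numpy as np

def truncate(values: List[List[Tuple]], count: int) -> List[List[Tuple]]:
    """Per frequency point, drop the `count` samples furthest from the mean
    (condensed from the SweepWorker.py excerpt, logging removed)."""
    keep = len(values) - count
    if count < 1 or keep < 1:
        return values
    truncated = []
    for valueset in np.swapaxes(values, 0, 1).tolist():
        avg = complex(*np.average(valueset, 0))
        truncated.append(
            sorted(valueset, key=lambda v, a=avg: abs(a - complex(*v)))[:keep])
    return np.swapaxes(truncated, 0, 1).tolist()

# Five sweeps of a single frequency point, stored as (re, im) pairs;
# the fourth sweep contains a gross "click"
sweeps = [[(0.50, 0.10)],
          [(0.51, 0.11)],
          [(0.49, 0.09)],
          [(0.90, 0.80)],   # the outlier
          [(0.50, 0.10)]]

kept = truncate(sweeps, count=2)   # a "5/2" setting: keep the best 3 of 5
print(len(kept))                   # 3 sweeps remain
print(max(v[0][0] for v in kept))  # the 0.90 click has been discarded
```

Note that the distance test is done on each frequency point independently, which matches Jim's point that there is no smoothing across the band.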


 

François,

It would be helpful if you described the specific frequency span you are
using, the number of segments, and the frequencies where the "errors"
occur. If there is some sort of bug in the software algorithm, that would
help to see where it originates.

But it is also true that the NanoVNA hardware has its own limitations and
boundaries, and it is likely that the "error" points you are seeing in your
calibration are in fact true measurements, due to a condition in the
hardware at the frequency in question. Such an "error" should then be
left in the calibration file, to compensate for the hardware
measurement anomaly. An example of this is around 300 MHz, where the
hardware (actually firmware) switches from using fundamental frequencies of
the synthesizer to harmonic frequencies.

So I would not be quick to "correct" my calibration file, particularly if
you had the averaging turned on. If the "error" survived the averaging
function, it is indeed present in the measurement (unless there is a
software bug).

As shown in the code sample above, the nanovna-saver (N,M) averaging
function looks at the N samples from each frequency point, then discards
the M samples that are furthest away from the average of those points,
before doing the final averaging for the value at that frequency point.

On Fri, Jul 29, 2022 at 9:46 AM Jim Lux <jimlux@...> wrote:

 

What Stan said.

It appears that NanoVNA-Saver is doing a "trimmed mean". In statistics, the N samples are examined to find the largest and smallest (or outliers by some other measure), and those values are excluded before the averaging is performed, because extreme samples can unduly skew the average in an undesirable way for many applications. There are, of course, other averaging algorithms better suited to other applications, including a straight average over all sample values, extremes included.
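
To illustrate the difference, here is a small sketch (invented numbers) comparing a straight mean with a mean that first discards the samples furthest from the average, in the spirit of NanoVNA-Saver's 9/4 setting:

```python
import numpy as np

# One frequency point measured 9 times; one reading contains a "click"
samples = np.array([0.50, 0.51, 0.49, 0.50, 0.52, 0.95, 0.48, 0.50, 0.51])

plain_mean = samples.mean()

# Outlier-rejecting mean in the style of the 9/4 setting:
# drop the 4 samples furthest from the plain mean, then average the rest
discard = 4
order = np.argsort(np.abs(samples - plain_mean))
robust_mean = samples[order[:samples.size - discard]].mean()

print(plain_mean)   # pulled upward by the 0.95 reading
print(robust_mean)  # stays close to the 0.50 cluster
```

With a single click in nine sweeps, the straight mean is visibly biased while the trimmed version is not; with two clicks on opposite sides of the true value, the straight mean can even look fine by accident.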

Stephen W9SK

F1AMM
 

It would be helpful if you described the specific frequency span you are using, the number of segments,
and the frequencies where the "errors" occur. If there is some sort of bug in the software algorithm,
that would help to see where it originates.
** Hello
I am posting an example. The calibration file (.cal) corresponds to the end of a 50 m coaxial cable (50 Ω). The frequency sweep range is 1 to 20 MHz in 20 segments.

Here is a description of each sheet in the .xls file:

.cal
---
Raw data from the .cal file converted through a .csv file

Garden 1-20 MHz 20 sec avg 25-6
-------------------------------------
Data from the .cal sheet

Algo
----
* Columns A to G: the input data. I highlighted in yellow, by hand, the cells identified as "in error".

* Columns I to N: a first search for errors, with formulas of the kind
=IF(ABS((D19+D21)/2-D20)>$J1;1;"")
In cell J1 is the maximum difference allowed before a value is declared abnormal. In my example 0.005 (0.5%) works fine.

* Columns P to U: precise identification of the cell in error, with a formula like
=IF(AND(K21=1;K20=1;K22=1);1;"")

* Columns W to AB: correction of the values coming from columns A to G, with formulas like
=IF(R13=1;(D12+D14)/2;D13)

* CSV-final
Formatting of the corrected data. The first three lines are handled specially, because the algorithm does not know how to process the first three lines of the original file.
This sheet is saved as .csv in order to produce the corrected .cal; but do not forget to save the workbook beforehand if you want to keep a record of the calculations. I added to this sheet a graph of one of the shortR columns, to show that the values evolve in a sinusoidal form.
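
For reference, the spreadsheet logic above can be sketched in a few lines of Python. The hypothetical repair_spikes helper below is not from the thread; the formulas and the 0.005 threshold are François's, the sample data is invented:

```python
import numpy as np

def repair_spikes(col, tol=0.005):
    """Two-stage spike repair mirroring the spreadsheet formulas.

    Stage 1 (columns I to N): flag row i when its value deviates from
    the average of its two neighbours by more than tol:
        =IF(ABS((D19+D21)/2-D20)>$J1;1;"")
    Stage 2 (columns P to U): a spike at row i trips the stage-1 test
    at i-1, i and i+1, so a row is only confirmed when three
    consecutive flags are set:
        =IF(AND(K21=1;K20=1;K22=1);1;"")
    Confirmed rows are replaced by the neighbour average
    (columns W to AB): =IF(R13=1;(D12+D14)/2;D13)
    """
    col = np.asarray(col, dtype=float)
    mid = np.zeros_like(col)
    mid[1:-1] = (col[:-2] + col[2:]) / 2        # neighbour averages
    flag = np.zeros(col.size, dtype=bool)
    flag[1:-1] = np.abs(mid[1:-1] - col[1:-1]) > tol
    spike = np.zeros(col.size, dtype=bool)
    spike[1:-1] = flag[:-2] & flag[1:-1] & flag[2:]
    fixed = col.copy()
    fixed[spike] = mid[spike]                    # linear interpolation of spikes
    return fixed

trace = np.array([0.100, 0.101, 0.102, 0.250, 0.104, 0.105])  # 0.250 is a click
print(repair_spikes(trace))  # the click becomes (0.102 + 0.104) / 2 = 0.103
```

The three-consecutive-flags test in stage 2 is what prevents the neighbours of a spike, which also fail the stage-1 test, from being "corrected" by mistake.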

But it is also true that the nanoVNA hardware also has limitations and boundaries,
and it is likely that the "error" points you are seeing in your calibration is in fact a true
measurement, due to a condition in the hardware that happens at the subject frequency. And so such an "error"
** It is very unlikely, since in this case it is a calibration file (.cal) resulting from the calibration operation (short, open, load). I don't have access to the built-in averaging function in nanovna-saver. I am just seeing the obvious errors, easily identified in the produced .cal file. The errors that my algorithm locates are gross errors that produce straight line segments in the graphs, with the worst effect. There may be other, finer errors that I cannot locate.

--
F1AMM (François)


 

Hello Francois,

What if you slightly change the frequency range and/or number of segments?
Are the errors still at the same frequency?
Good work you are doing by trying to make it clear!

All the best,

Victor


On Sat, 30 Jul 2022 at 05:37, F1AMM <18471@...> wrote:


F1AMM
 

What if you slightly change the frequency range and/or number of segments?
Are the errors still at the same frequency?
** No, the "clicks" are never in the same place. To simplify, let's say it is interference that causes this. We find the same thing in the .s1p files, but there it is less troublesome. If I want data without clicks, I do the same: I correct them.
When we observe the curves on the screen of the unit there are also clicks, and there it is impossible to correct them.

I forgot to mention:

The sweep had been set to 25/6
The nanovna-saver version is 0.3.10-Win7
The unit is a NanoVNA-F v0.1.4

Bye
--
F1AMM (François)

-----Original Message-----
From: Victor Reijs
Saturday 30 July 2022, 07:48


F1AMM
 

Hello

Here is the test I did

Measurements on a loop antenna at a distance of 50 m (zero reactance around 7.100 MHz) between 1 and 20 MHz (1010 steps, i.e. 10 segments), made with a nanoVNA-F and nanovna-saver. In order to get, it seems to me, as close as possible to the raw measurements, I applied no "filtering": I set 1/0.

I launched the measurement 5 times and saved each packet of 1010 measurements in .s1p, for a total of 5050 elementary measurements.
You will find in the attachment a summary of the 5 groups of identifiable errors. Values are rounded for ease of reading. The method used to find these errors does not matter; there are at least these, but there may be others not identified.

Sheet: "Synthèse"
Column a: reference number
Column b: frequency
Columns c–g: measurement of the real part of S11
Columns i–m: measurement of the imaginary part of S11

The columns are to be read in pairs; for example, c2 and i2 are the a+jb of the same measurement. Values assumed to be in error are highlighted in yellow.

To analyse
No 1, 2, 5: a single false measurement (simple case)
No 3: two wrong measurements
No 4: the most curious one. Why this?

Try to imagine a method to identify these errors and, above all, to correct them. One solution may be to delete these error lines. We can see the risk of substituting an average value, because this is not measurement noise.

In the sheet "Ligne 4" one can examine the exact values. It is very disturbing that the two pairs of values, measurement no. 1 and measurement no. 2, are strictly identical; to me, that hides something.
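
One standard technique for locating and correcting isolated gross errors like these, not mentioned so far in the thread, is a median-based "Hampel" filter. A minimal sketch with invented sample data:

```python
import numpy as np

def hampel(x, half_window=3, k=3.0):
    """Replace gross outliers by the local median (Hampel filter).

    A point is an outlier when it deviates from the median of its
    window by more than k times the scaled median absolute deviation.
    Unlike a plain moving average, ordinary points are left untouched,
    so the filter does not smear the real curve.
    """
    y = x.astype(float).copy()
    n = len(x)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = x[lo:hi]
        med = np.median(window)
        mad = 1.4826 * np.median(np.abs(window - med))  # ~sigma for Gaussian data
        if mad > 0 and abs(x[i] - med) > k * mad:
            y[i] = med
    return y

s11_re = np.array([0.30, 0.31, 0.32, 0.31, 0.90, 0.30, 0.29, 0.30])  # 0.90 is a click
print(hampel(s11_re))  # the 0.90 click is replaced by the local median 0.31
```

Because the correction uses the median rather than the mean, it sidesteps the risk François mentions of pulling the corrected value toward an average contaminated by the click itself; the real and imaginary columns of each S-parameter would be filtered separately.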

To your meditation
73
--
François


 

I seldom calibrate the VNA with nanosaver, so I really cannot tell how that works out for me.
Nor is my VNA (now an H4.3) the same as F1AMM's, so I cannot compare. (I read somewhere that François uses an 'F'?)

An idea: could there be a fault in the communication between the nano and the PC? Some of the calibration values in F1AMM's Excel file are way off, and some 'faulty' values occur several times.

Arie PA3A


F1AMM
 

In my Excel file it is not about calibration but about measurements. The calibration had already been done and corrected:
/g/nanovna-users/message/29103

The calibration file contains values that follow each other in a sinusoidal fashion, so we can attempt an interpolation. For real measurement files, the curves are more difficult to repair by interpolation.

In any case, I perceived the difficulty of filtering measurements; it is not a very simple matter.

It is indeed a nanoVNA-F that I use, and almost never without nanovna-saver, because on the screen of the unit I can see nothing, unless I take the measurements at night.
--
F1AMM (François)

-----Original Message-----
From: Arie Kleingeld PA3A
Tuesday 2 August 2022, 10:12