#qmx #ssb More on SSB calibration...
Hi all

Just wanted to explain a few things about SSB calibration.

Firstly, please understand that calibration is what makes the performance excellent. Even without calibration the performance is quite respectable. Calibration is not a temporary thing concerned only with the beta firmware. It optimizes the transmission of SSB to your particular hardware setup (component tolerances, how you built the filters, etc.).

There are two kinds of calibration.

The first is the phase pre-distortion calibration, which measures the phase error in the QMX PA so that it can build a calibration curve to be used for compensation (pre-distortion). Phase error depends on amplitude. So if the system needs at a given instant to transmit, for example, 0.5 amplitude and 25 degrees of phase shift, then it looks at the stored calibration curve to see what the measured phase error was at 0.5 amplitude. Say 10 degrees. Then it can subtract 10 degrees from the desired 25, because it knows there will be 10 degrees of error, and command the '5351 to produce a 15-degree phase shift. That's the aim.
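To make that concrete, here is a minimal sketch of the idea (illustrative only, not the actual QMX firmware; the table size, the names, and the linear interpolation are my assumptions):

/* Hypothetical pre-distortion sketch: look up the measured phase error
 * at the commanded amplitude and subtract it from the desired phase,
 * as in the 0.5-amplitude / 25-degree example above. */
#define CAL_POINTS 32

/* Measured phase error (degrees) at CAL_POINTS amplitudes spaced
 * evenly from 0.0 to 1.0, filled in by the calibration run. */
static float phase_err_deg[CAL_POINTS];

static float lookup_phase_err(float amplitude)    /* linear interpolation */
{
    if (amplitude <= 0.0f) return phase_err_deg[0];
    if (amplitude >= 1.0f) return phase_err_deg[CAL_POINTS - 1];
    float pos  = amplitude * (CAL_POINTS - 1);
    int   i    = (int)pos;
    float frac = pos - (float)i;
    return phase_err_deg[i] + frac * (phase_err_deg[i + 1] - phase_err_deg[i]);
}

/* Desired 25 deg at amplitude 0.5 with a measured 10 deg error
 * yields a commanded 15 deg, as described above. */
float predistort_phase(float amplitude, float desired_deg)
{
    return desired_deg - lookup_phase_err(amplitude);
}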
The second calibration is of the necessary synchronization delay between phase and amplitude modulation. Amplitude modulation takes effect shortly after the new amplitude is commanded to the DAC, but phase modulation requires a frequency update to the '5351, which takes some time to communicate over the I2C serial bus; so it is necessary to delay the amplitude relative to the phase. There is also some fixed delay in the phase compensation, which measures relative error, not absolute error (since we don't know exactly what the phase delay is through the receiver circuits). So the sync optimization tests to find the optimum delay of amplitude relative to phase modulation. Sync calibration is run separately for the USB and LSB sidebands.

Both of these types of calibration require operating the transmitter and receiver at the SAME time. The phase calibration increases the amplitude in steps, raising the PA voltage up to 12V one step at a time. Each step transmits a steady carrier (CW) for just over one second. Even at full power this lasts only a second, so it should stress the transistors much less than an FT8 over would (12.8 seconds). The synchronization calibration transmits a two-tone signal (700 + 1900 Hz) at full PEP during the test, but SSB has a much lower average current than CW anyway, so it should not be stressing the PA transistors. Therefore if people have been using FT8 normally, yet report that PA transistors blew during calibration, which operates the transmitter either at lower power or for shorter durations, it is a mystery how that could happen.

One possible explanation is that the PA amplitude modulator (transistor Q507 and its four support transistors) is not working properly. If the amplitude were always at maximum because Q507 is not modulating it, then indeed this would stress the transistors somewhat. This is why in the firmware updates last week I added a check before the calibration starts: the PA voltage is set to roughly halfway (5V, actually) and then measured, and if it is not near 5V we know there is a problem, you are warned, and the calibration does not proceed. There's also the new "PA mod. test" screen in the Hardware tests menu, which steps the amplitude modulation up and measures the result; this should always show a straight line from near zero to near the supply voltage, like the attached AM.png.

If it does not, if it shows a straight line across the top of the screen, then you will know that your PA amplitude modulator isn't doing its job and you need to fix that. The radio may appear to work normally in CW and Digi modes and you'd never know, but it actually wouldn't be doing envelope shaping. And SSB can't work without the amplitude modulation.

Regarding the calibration itself: as I mentioned, the QMX operates its transmitter and receiver at the same time. It transmits (CW for phase error, two-tone SSB for synchronization optimization) and then uses the receiver to measure the transmitted signal, using the information to derive phase error and IMD self-measurements.

The problem here is that the transmitter is (nominally) 5W: a 45Vpp signal into a dummy load at the output of the LPFs. Of course the receiver's Tx/Rx switch is connected directly to the PA output, before the LPF, so the voltage isn't a clean sinewave and could be even more than 45Vpp. The poor Tx/Rx switch does its best, but it is just a single series BS170. Maybe the BS170 switch has an OFF-isolation somewhere in the range of 30-40dB (a guess based on some experience; it will vary from device to device, on different bands, etc.). 5W is +37dBm (assuming a sinewave into a 50-ohm load, which is of course NOT the case at the PA output). 30dB of attenuation would still leave us at +7dBm, and even if the attenuation is 40dB, that is still -3dBm, far above the observed maximum signal the receiver can handle without severe distortion, about -10dBm.
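For anyone who wants to verify those figures, a small stand-alone check (a sketch assuming the idealized sinewave into 50 ohms mentioned above, which does not really hold at the PA output):

#include <math.h>
#include <stdio.h>

/* Back-of-envelope check: 45Vpp across 50 ohms. */
int main(void)
{
    const double vpp = 45.0, r_load = 50.0;
    double p_watts = (vpp * vpp) / (8.0 * r_load);    /* ~5.06 W for a sinewave */
    double p_dbm   = 10.0 * log10(p_watts / 0.001);   /* ~+37 dBm, ref 1 mW */
    printf("P = %.2f W = %+.1f dBm\n", p_watts, p_dbm);
    printf("After 30 dB isolation: %+.1f dBm\n", p_dbm - 30.0);  /* ~ +7 dBm */
    printf("After 40 dB isolation: %+.1f dBm\n", p_dbm - 40.0);  /* ~ -3 dBm */
    return 0;  /* both far above the ~-10 dBm clean-reception limit */
}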
Basically, the Transmit/Receive switch was designed to protect the receiver circuits from damage, NOT to allow clean reception of the transmitter connected directly to the receiver for self-calibration. And the NUMBER ONE fundamental rule of the entire SSB firmware project is that it must work on all 9,312 QMX/QMX+ transceivers sold already, not requiring any hardware modifications, and there will not be any future PCB revisions which would invalidate earlier QMX/QMX+ transceivers for SSB use. RULE #1.

So we have a problem with too strong a signal being fed to the receiver. It's a difficult challenge, the receiver being used to monitor the transmitter, all in the same radio. Some tricks are necessary. Some or all of the following are employed to reduce the signal intensity hitting the receiver:

The first two of these were in place even since the first SSB beta. I have made quite a lot of improvements since then. Some improvements concerned the way the zero-crossing detection in the phase error measurement is done: to avoid accidentally triggering on the noise that sometimes occurs, and, in certain cases of overload, to be able to ignore the overload artefacts, giving another 6dB of range. The other thing I did was make use of "wrong" BPF selection for the phase error measurement, which was not previously the case.

The attached "PhaseErr.png" illustrates this. The first action when calibrating a band is to check whether the ADC output has been driven into clipping or the zero crossings could not be correctly measured. If the measurement was OK, we use the "correct" BPF for the band and just get on with the calibration. If there were measurement problems, the signal is too strong, and we cycle through the Band Pass Filters looking for a better option; the system checks each of the bandpass filters in the radio (4 on a QMX, 8 on a QMX+), as sketched below.
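In pseudo-C, the selection logic looks something like this (a sketch only; the names, the result fields, and the scoring details are illustrative, not the actual firmware):

#include <stdbool.h>

#define NUM_BPF 8                  /* 4 on a QMX, 8 on a QMX+ */

typedef struct {
    bool clipped;                  /* ADC driven into clipping? */
    bool zero_cross_ok;            /* zero crossings measurable? */
    int  score;                    /* composite measurement-quality score */
} bpf_result_t;

/* Stand-in for transmitting, receiving through the given BPF, and
 * grading the result; the real firmware does the measurements here. */
static bpf_result_t measure_with_bpf(int bpf)
{
    (void)bpf;
    bpf_result_t r = { false, true, 0 };   /* dummy values for the sketch */
    return r;
}

int choose_bpf(int correct_bpf)    /* "correct" = from Band Configuration */
{
    bpf_result_t r = measure_with_bpf(correct_bpf);
    if (!r.clipped && r.zero_cross_ok)
        return correct_bpf;        /* measurement OK: just get on with it */

    /* Signal too strong: try every BPF and keep the highest score. */
    int best = correct_bpf, best_score = -1;
    for (int b = 0; b < NUM_BPF; b++) {
        r = measure_with_bpf(b);
        if (r.score > best_score) {
            best_score = r.score;
            best = b;
        }
    }
    return best;                   /* marked "Use" on the PhaseErr screen */
}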
The tests are run at about 75% of the supply voltage, which is why people have reported DAC values around 2,000 during this part of the calibration process. On each BPF the system checks several aspects and assigns a score:

At the end a "score" is calculated from the measurements for each BPF, and the one with the highest score is chosen for the measurement. The green text printed on the screen (attached PhaseErr.png) is this testing process for each BPF. The "score" is the rightmost column. The asterisk next to the BPF number marks the "correct" BPF for the band (from the Band Configuration table), and the "Use" label indicates which BPF is actually being used for the calibration. This improvement has been significant for phase error calibration.

Now I will make a similar improvement to the intermodulation distortion measurement in the synchronization optimization step. IMD measurement works by moving the reception frequency 2.7 kHz above the transmission frequency. The IMD3, IMD5 and IMD7 products then all fall within the receiver passband, while the twin peaks (700 and 1900 Hz) are suppressed because they are in the opposite sideband. The energy in the IMD3, 5 and 7 tones is then measured. By trying this at multiple synchronization offsets, the best one can be found. You can see that if the receiver is driven into overload there will be problems measuring the wanted IMD3, 5 and 7 tones. I believe this is the main reason why some bands show a very shallow dip, or in some cases people have reported a flat line across the top of the screen.
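The synchronization search itself can be pictured like this (again a sketch; the delay range, the step size, and the names are illustrative assumptions, not the real code):

/* Sweep candidate amplitude-vs-phase delays; for each, transmit the
 * two-tone signal, receive 2.7 kHz above the TX frequency so that only
 * the IMD3/5/7 products fall in the passband, and keep the delay that
 * minimizes the measured IMD energy. */
static double measure_imd_energy(int delay_us)
{
    (void)delay_us;
    return 0.0;                    /* dummy value for the sketch */
}

int find_best_sync_delay(void)
{
    int    best_delay = 0;
    double best_imd   = 1e30;
    for (int d = 0; d <= 500; d += 10) {   /* assumed 0..500 us sweep */
        double imd = measure_imd_energy(d);
        if (imd < best_imd) {
            best_imd   = imd;
            best_delay = d;
        }
    }
    return best_delay;             /* run separately for USB and LSB */
}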
73 Hans G0UPL


Hi Hans,
Given that the receiver is exposed to the transmit signal directly, as you describe, would you expect an actual failure risk to the Mux ICs during calibration?
Or, alternatively, are there other component failures or issues that could put the receiver components at risk, which we should check for before running the calibration?
I ask because I had IC402 inexplicably die as a seemingly isolated failure after running the calibration with the first version of the SSB firmware, and, given your further explanation, I'm hesitant to try the calibration again after the repair without confirming that the rest of the receiver circuitry is in good working order.
Thanks,
Greg
N1TR
Hello Greg

No. There is no risk at all to the MUX chip, or any other receiver components, during calibration. Nothing during calibration differs from the conditions experienced during normal operation. On transmit the TX/RX switch is OFF; that prevents any damage to the receiver. The same is true during calibration. Receiver overload, in the sense of serious distortion, can occur, but not damage.

73 Hans G0UPL
Hans,
The calibration procedure you describe is amazing stuff, very innovative.
I understand the need for no hardware mods; ideally the rig can be calibrated without opening up the box. However, if the calibration score is low, perhaps there could be an optional procedure for the brave to desensitize the receiver temporarily.
Would a 1 ohm resistor soldered across L503 during calibration give better results?
Jerry, KE7ER
On Tue, Apr 29, 2025 at 01:24 AM, Hans Summers wrote:
And the NUMBER ONE fundamental rule of the entire SSB firmware project is that it must work on all 9,312 QMX/QMX+ transceivers sold already, not requiring any hardware modifications, ...
Hi Jerry
I don't know yet. As long as enough of the mentioned tricks can be applied well enough to keep the receiver out of a gross overload condition, the results are not going to be improved by attenuating the signal further. Actually, in my calibration a score is awarded to the various options, with high weighting given to scenarios where the signal is very strong, just below gross distortion. If the signal is too weak, more noise degrades the measurement; if the signal is so strong that it overloads the receiver, the measurements become meaningless. A rough illustration of that weighting is sketched below.

I just released 1_01_008, where I made further improvements to the sync calibration part, to apply further attenuation there.
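Roughly speaking, the weighting behaves like this toy function (an illustration of the idea only, not the real scoring; the -10dBm ceiling is the clean-reception figure mentioned earlier in the thread):

#include <stdbool.h>

/* Toy score: zero if overloaded (measurements meaningless), highest just
 * below the clean-reception ceiling, falling off as the signal weakens
 * and noise degrades the measurement. */
double score_signal(double level_dbm, bool overloaded)
{
    const double ceiling_dbm = -10.0;         /* approx. clean-reception limit */
    if (overloaded) return 0.0;
    double margin = ceiling_dbm - level_dbm;  /* dB below the ceiling */
    if (margin < 0.0) return 0.0;             /* at or above ceiling: distorted */
    return 100.0 / (1.0 + margin);            /* smaller margin -> higher score */
}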
73 Hans G0UPL