QTR, calibration
Dear all,

I am a beginner and I have a question regarding the determination of the ink densities to be entered under the 'Ink Setup' tab when using multiple gray inks. O.k., Paul Roark calls it a 'judgement call', and with my very limited experience I fully agree. But regardless of that, I would like to better understand the procedure I found in several documents. According to them, the ink density to be entered under 'Ink Setup' should be the product of the fractions, where each fraction is the amount of the respective next darker ink that yields the same density (or Lab L) as the given ink at 100%.

I measured the Lab L values from a print of the calibration image for Hahnemühle Photo Rag 308, the GCVT ink set of Paul Roark, resolution 2880-super and a 50% global ink limit. The results for each ink are shown as circles in the image below. All data can be well represented by exponentially decaying functions (lines). If I follow the above-mentioned standard procedure, the fractions are indicated by the labelled plusses, and the densities to be entered under 'Ink Setup' are represented by the K values below the full diamonds (they do not lie on the K curve, because the ink curves are not linear).

Am I correct that these densities, i.e. the products of the fractions, should be used? I am asking because the products of the fractions, although they certainly carry relevant information, have no obvious meaning that would help to visually understand the resulting ink curves.

Hendrik

PS: Judging from the curves in my ink separation plot, is the global ink limit I selected (50%) adequate? The lighter inks still have a significant slope at 100%.
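For illustration, here is a minimal sketch (Python, with made-up L* values rather than the actual measurements) of how such a matching fraction for one light ink can be read off the measured curves by interpolation:

```python
# Hypothetical sketch (not from the thread): given measured Lab L values for the
# K and LK channels at the calibration-image steps, estimate the step of the
# K curve whose luminosity matches LK printed at 100% (one of the "fractions").
import numpy as np

steps = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])  # step wedge in %
L_K  = np.array([96, 78, 64, 53, 45, 38, 33, 29, 26, 24, 22])   # illustrative L* values
L_LK = np.array([96, 89, 83, 78, 73, 69, 66, 63, 61, 59, 58])   # illustrative L* values

def step_matching_L(target_L, steps, L_curve):
    # L decreases with the step, so reverse both arrays for np.interp
    return np.interp(target_L, L_curve[::-1], steps[::-1])

fraction_LK = step_matching_L(L_LK[-1], steps, L_K) / 100.0
print(f"LK at 100% is as dark as K at {fraction_LK:.0%} -> candidate Ink Setup density")
```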
Hello Hendrik,
I hope this helps.
Dear Shilesh,

Thank you very much for your reply. Very helpful, because it clarifies the procedure. What you explain is what I was thinking of originally, but I was misled by the descriptions in the QTR User Guide and in the Calibration Guide (from the QTR website). The Calibration Guide states on page 4: "But it's necessary to have all the relative densities to a common value". So far so good; this is logical and in line with what you explain. But on the same page it is written: "Since the LLK will be transitioning to the LK ink in the profiles the comparison is most accurate by comparing the LLK ink to the LK ink not to the K ink." Thereafter, a calculation is made in which the relative density of LLK to LK is multiplied by the relative density of LK to K, suggesting that the result of this product would yield a more accurate relative density of LLK to K.

Concerning the global ink limit, you are correct. I was thinking of using the Black Boost, because the slope of the L(K) curve for the K channel is already very small at K=100% (with a global ink limit of 50%) and L(K=100%) is already below 20 for the black ink. You are also correct regarding the toner. For the toner curve (open blue squares on the dashed blue line in my original plot) I have not calculated a relative density, because it requires a separate curve in the Ink Setup to neutralize the print for all gray values.

Thank you very much again!
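For concreteness, the guide's chained calculation reads as follows, with made-up numbers: if LLK at 100% prints as dark as LK at 45%, and LK at 100% prints as dark as K at 55%, then the suggested Ink Setup density for LLK is 0.45 x 0.55 ~ 0.25, i.e. about 25%. Method 1 would instead use the K step whose luminosity matches LLK at 100% directly, which in general is a different number.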
On 04.02.2025 at 04:35, shileshjani via groups.io wrote:
I did not know about that guidance -- that LK is a fraction of K, and LLK is a fraction of LK (but referred back to K). I have made curves from scratch and have not noticed any strange effects of LK and LLK being fractions of K. Mind you, most of the time I just modify existing curves. If the input->output relationship is linear, it is mathematically the same to use either method. But we know the relationship is not linear. I have to think about this. I may putz around and see what difference (visual smoothness of the ramp, and the amount of linearization needed) the two methods yield.
On Tue, Feb 4, 2025 at 10:59 AM shileshjani via <shileshjani=[email protected]> wrote:
Be careful to distinguish between MK and PK. They are different due to the different types of coating on matte paper vs. "photo" papers (glossy, pearl, semigloss, and satin). I use PK dilutions in my 9800. They give differing, reduced-slope curves. That said, the system is flexible enough to deal with the differences, including when the final curve is "linearized" (and beware that the toe of the curve is the least linear part). This is the ink set I have used for both "glossy" and matte papers (including Arches, which requires 2 MK positions to reach a good dmax):

Paul
Hello Paul,
I was only referring to K universally as the darkest ink, whether it be MK or PK, and to LK and LLK as the next lighter shades. In your write-up, you also refer to the lighter-density inks in relation to K (MK or PK) if only one K is used. This accords with my understanding of how to set the ink density of successively lighter inks.
OK, so I did my experiment:
[Curve 1 and Curve 2 comparison charts (ink curves and density curves) were attached here.]
This seems worth some comments.
First -- experimenting is the best! Nothing is obvious without trying it, so experimenting and trying things out is always recommended.

I'm not really sure where the numbers come from for the Curve 1 & 2 K, LK, LLK values, so it's hard to be sure. (My initial feeling is that Curve 2 has the LLK too low -- like you used the .55 factor twice; but I'm not sure.) Looking at the ink curves, you can see that LLK for Curve 2 uses a lot more ink -- you said it was a lot lighter ink (.14 vs .25).

Then looking at the density curves -- what should you care about? Curve 2 is closer to "linear" than Curve 1, but these curves are before linearization, so that's not the end result. My take would be that Curve 1 is smoother and might actually be the better one once you do the linearization. The little wiggles in Curve 2 are harder to repair. Linearize and then do the two graphs -- things will be a lot closer, but the differences may tell you more.

Roy

On Tue, Feb 4, 2025 at 08:40 PM, shileshjani wrote:
Hello Roy,

Thank you for your comments and thoughts. Amen to experimenting. The Curve 1 and Curve 2 renditions come from the respective *.quad file data (of course pre-linearization), imported into Excel for the graphs. Indeed, the density for Curve 2 LLK was lower than for Curve 1: the LLK density in Curve 1 is 25% and 14% in Curve 2. I arrived at the 14% LLK density for Curve 2 as 0.55 (55% density of LK relative to K) multiplied by 0.25 (25% density of LLK relative to K) = ~0.14, i.e. 14%. For clarity:

I will linearize the 51-step readings for the two curves later tonight and show the results. You bring up a good point that Curve 2 is closer to ideal, but with more "wiggles", and it is entirely possible that linearization will throw up an error. If that happens, I will linearize using density values rather than luminosity.
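Restating that arithmetic compactly (numbers taken from the post above):

    Curve 1: LLK density = 25%                    (LLK referred directly to K)
    Curve 2: LLK density = 0.55 x 0.25 ~ 0.14 = 14%  (chained fractions)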
I think in his example Shilesh demonstrated the difference between the two approaches we are discussing.

Curve 1 illustrates method 1: here the printed luminosities of all lighter inks are referred to K (the common reference curve). To that end he determines the densities (for the Ink Setup) as the steps of the calibration plot at which the K curve reaches the luminosities at which LK and LLK reach their respective minima. In his example:

    Lmin of LK corresponds to the 55% step of K
    Lmin of LLK corresponds to the 25% step of K

This method 1 corresponds to using the open square for the LLK density in my graph.

Curve 2 illustrates method 2: in this case one multiplies the two fractions 0.55 and 0.25, which gives 0.25 * 0.55 ~ 0.14, i.e. 14%, for the sought density of LLK. This method 2 corresponds to using the solid diamond for the LLK density in my graph.

I think in the Calibration Guide (p. 4 of calibration.pdf) these two methods are getting mixed. Here are the sentences which are contradictory:

Both methods yield the same result for the required density of LLK only if all ink luminosities satisfy L(K) = A - b_ink*K (with A = L(K=0) the common luminosity of the paper and a different value of b_ink for each ink). This linear relationship holds true for very small values of K, but for larger K all L(K) curves saturate.

I hope it is clearer now.

Best,
Hendrik
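The difference between the two methods can also be checked numerically. Below is a minimal sketch (Python, with invented exponential-curve parameters rather than the measured data) that computes the LLK density both ways; for nonlinear curves the two numbers differ:

```python
# Sketch with assumed exponential luminosity curves (parameters invented for
# illustration) comparing the two methods for the LLK density entry.
import numpy as np
from scipy.optimize import brentq

def L(K, L_inf, s, L_paper=96.0):
    # exponential decay from paper white toward the ink's saturation luminosity
    return L_inf + (L_paper - L_inf) * np.exp(-s * K)

L_K   = lambda K: L(K, L_inf=18.0, s=3.0)   # darkest ink
L_LK  = lambda K: L(K, L_inf=45.0, s=2.5)   # light ink
L_LLK = lambda K: L(K, L_inf=65.0, s=2.0)   # lightest ink

solve = lambda f: brentq(f, 1e-6, 1.0)       # step in (0, 1] where f changes sign
K1 = solve(lambda K: L_K(K)  - L_LK(1.0))    # K step as dark as LK at 100%
K2 = solve(lambda K: L_K(K)  - L_LLK(1.0))   # method 1: LLK referred directly to K
K3 = solve(lambda K: L_LK(K) - L_LLK(1.0))   # LLK referred to LK

print(f"method 1 (open square):   K2    = {K2:.3f}")
print(f"method 2 (solid diamond): K1*K3 = {K1 * K3:.3f}")  # differs for nonlinear curves
```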
Your description of method 2 makes no sense.
All the values are simple ratios -- darkness of one ink vs darkness of another ink. So K vs LK and LK vs LLK -- then calculate K vs LLK. It's just simple algebra ->> (a/b)*(b/c) = (a/c). But as you can see from Shilesh's experiment: once you linearize, it fixes most anything.

On Wed, Feb 5, 2025 at 08:41 AM, Hendrik Kuhlmann wrote:
Roy,

Linearization does correct everything when doing the process carefully, as you say, but I suspect the resultant component curves could be very different. Basically, at any given density, one will lead to a different blend of inks than the other, and one will have a different ink load than the other, while still accomplishing a similar linearization. That could be important for DTP printing, and it could also be important when focusing on the image "grittiness" that may occur when printing on papers with high resolution potential (glossy or potentially baryta). In extreme cases, I think these differences could be significant. Maybe this falls into the "advanced concepts" category, but I can't help but want to consider and test for this in my linearization steps process.

---Michael

On Thu, Feb 6, 2025 at 11:09 AM Roy Harrington via <roy=[email protected]> wrote:
Your description of method 2 makes no sense.
Agree completely with Michael. And for folks like me who use QTR for selective toning of regular prints (not DTP or negatives), this tonal transition can be profoundly impacted. I often use "copy curve from LK" in curve development, and I can imagine this would play a measurable role. But I am just playing, not a professional.
On Thu, Feb 6, 2025 at 11:24 AM, shileshjani wrote:
The trouble is that he's got it wrong. He shows how he interpreted it, and it's just not correct and not what the PDF says. Please stop saying it's an "alternate" interpretation.

Roy -- the designer and author of all this stuff
The ink transitions could be very different. Although the density can be corrected in linearization, the transition (pre-linearization) defines the ink amounts and the "grittiness" or "roughness" of the prints. It is worth personal tests.

I use an Excel sheet in which I wrote all the formulas to do the transition job. In the sheet I can change the amount limit of every ink based on the measured data and immediately see the ink distribution visually.

The result is usually a good, smooth density curve ready for linearization.
--
Kang-Wei Hsu
On 06.02.2025 at 20:09, Roy Harrington wrote:
Your description of method 2 makes no sense. All the values are simple ratios -- darkness of one ink vs darkness of another ink. So K vs LK and LK vs LLK -- then calculate K vs LLK. It's just simple algebra ->> (a/b)*(b/c) = (a/c). The trouble is that he's got it wrong. He shows how he interpreted it and it's just not correct and not what the PDF says. Please stop saying it's an "alternate" interpretation. Roy -- the designer and author of all this stuff

Dear Roy,

I am sorry to say that the math you suggest cannot be applied in the present case. The problem in your equation above is that the 'b' in the denominator of the first factor is not the same as the 'b' in the numerator of the second factor. You cannot simply replace a, b and c by K, LK and LLK in this equation, because K, LK and LLK are names of ink channels and not numbers! But you need numbers to insert into your equation.

Here is the explanation. It is a bit longer than you may want, but I feel I have to do the explanation in very small steps for clarity, and clarity requires precision of expression. I hope to convince you.

When dealing with the calibration plot, we need to understand the printed step wedges as luminosity functions L(K) of the step variable K. Since we have three luminosity functions, one for each ink channel, I introduce the three functions L_K(K), L_LK(K) and L_LLK(K).
Please note that the subscript K denotes the ink channel K and not the step variable K, which is the argument in parentheses. The independent variable K runs from 0 to 1 (100% = 1). We want to find the two values K1 and K2 of the step variable K at which the luminosity function L_LK evaluated at K=1 equals the luminosity function L_K at some unknown K1, and at which the luminosity function L_LLK at K=1 equals the luminosity of L_K at some unknown K2. Mathematically, this means

    L_LK(1)  = L_K(K1)    (1)
    L_LLK(1) = L_K(K2)    (2)

I have numbered the equations. These two equations relate the two luminosities L_LK and L_LLK at K=1 (=100%) on the left-hand sides to the luminosity L_K on the right-hand sides, which serves as the common reference function (as intended). So we have to solve the two equations for the arguments K1 and K2. This can be done with Newton's iteration (if one has explicit expressions for the continuous functions -- I determined them by fitting an exponential), but it is easier to do it graphically. To that end one draws the graphs of the two functions on both sides of equation (1) and finds their intersection point. At the intersection point the equation is satisfied, and the abscissa of the intersection point yields the value K1. The same can be done for equation (2). Well, here we are done.

Here is the mistake: you claim that one arrives at the same value K2 when multiplying K1 by K3, where K3 is the step K at which the luminosity of L_LLK at K=1 equals the luminosity of L_LK. To find K3 we need to solve the equation

    L_LLK(1) = L_LK(K3)    (3)
This equation can be solved graphically in the same way as above. However, for general nonlinear functions L_K(K), L_LK(K) and L_LLK(K) the claimed relation (multiplication of the fractions K1 and K3),

    K2 = K1 * K3    (4)

is not generally satisfied! This is my claim. For a proof I have given an example in my post /g/QuadToneRIP/message/19394. Shilesh has given another proof in his post /g/QuadToneRIP/message/19398.

It is interesting, however, that equation (4) is satisfied if the three luminosity functions were linear in K. (Note that I am talking here about the hypothetical case that one measures a linear behavior of the luminosity functions in the calibration plot. It should not be mixed up with the linearization process in QTR.) Linearity of L_K(K), L_LK(K) and L_LLK(K) is a very special and hypothetical case -- a physicist would call it a 'model'. It is unrealistic for inkjet printing (but it might hold approximately for some type of matrix printer in which each pixel is made of many non-overlapping black squares which ultimately cover the whole pixel completely). Here I give the proof of equation (4) for this model.
For luminosity functions which are linear in K,

    L_K(K) = a - b1*K,   L_LK(K) = a - b2*K,   L_LLK(K) = a - b3*K,    (5)

where 'a' is the luminosity of the paper white and b1, b2 and b3 set the constant negative slopes of the (now linear) functions L_K(K), L_LK(K) and L_LLK(K), equations (1) to (3) become

    a - b2 = a - b1*K1    (6)
    a - b3 = a - b1*K2    (7)
    a - b3 = a - b2*K3    (8)

From (6) we get K1 = b2/b1, from (7) K2 = b3/b1, and from (8) K3 = b3/b2. Multiplying the first and the last of these fractions yields

    K1 * K3 = (b2/b1)*(b3/b2) = b3/b1 = K2.    (9)

This proves that the multiplication of fractions is equivalent to directly determining K2 if all luminosity functions are linear in K. But if the luminosity functions are exponential (as seems to be the case),

    L_ink(K) = L_ink(K -> infinity) + [a - L_ink(K -> infinity)] * exp(-s_ink * K),    (10)
where L_ink(K -> infinity) is the asymptotic luminosity for large K (saturation) and s_ink is the decay rate of the luminosity (which determines the slope of L_ink(K) at K=0), then (9) does not hold in general. The fact that (9) holds for linear functions can also be derived geometrically using the intercept theorem.

I hope I did not commit any typos. I am sorry to have found this misconception. I find it not only on page 4 of the Calibration Guide, but also in the user guide by Tom Moore, where one finds on page 15: "For QuadTone inks, this process is repeated for each lighter ink, comparing it to the next darker ink, calculating its density relative to that ink and then converting it to a density relative to black."

Presumably, the misconception has not been discovered or discussed previously because QTR is quite forgiving with respect to the selection of densities for multiple gray inks, so most practitioners may not care and simply follow their intuition. But you will understand that I had to disprove your statement "The trouble is that he's got it wrong".
Hendrik -- a beginner in QTR but not a beginner in math ;-) :-)
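As a quick numerical cross-check of the linear-case proof above, here is a minimal sketch (arbitrary slope values, not measured data):

```python
# Linear model L_ink(K) = a - b_ink*K: the product of fractions reproduces K2 exactly.
a, b1, b2, b3 = 96.0, 75.0, 50.0, 30.0   # paper white and slope constants for K, LK, LLK

K1 = b2 / b1      # from eq. (6): a - b2 = a - b1*K1
K2 = b3 / b1      # from eq. (7): a - b3 = a - b1*K2
K3 = b3 / b2      # from eq. (8): a - b3 = a - b2*K3

assert abs(K1 * K3 - K2) < 1e-12          # eq. (9): K1*K3 == K2 for linear curves
print(K1, K3, K2, K1 * K3)                # 0.666..., 0.6, 0.4, 0.4
```

Replacing the linear functions with the exponential form (10) breaks this identity, which is the content of the claim above.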