23:02 | Bertl | yes, the streaks are definitely part of the FPN
| |
23:02 | Bertl | and yes, looks good for a first try \o/
| |
23:11 | Bertl | off to bed now ... have a good one everyone!
| |
02:09 | intracube | joined the channel | |
03:07 | Lunastyx | left the channel | |
03:10 | Lunastyx | joined the channel | |
03:14 | Lunastyx | left the channel | |
03:25 | Lunastyx | joined the channel | |
03:36 | Lunastyx | left the channel | |
03:40 | Lunastyx | joined the channel | |
03:45 | Lunastyx | left the channel | |
03:58 | Lunastyx | joined the channel | |
04:07 | Lunastyx | left the channel | |
04:15 | Lunastyx | joined the channel | |
04:20 | Lunastyx | left the channel | |
04:20 | Lunastyx | joined the channel | |
04:26 | Lunastyx | left the channel | |
04:33 | Lunastyx | joined the channel | |
04:38 | Lunastyx | left the channel | |
05:07 | Lunastyx | joined the channel | |
05:12 | Lunastyx | left the channel | |
05:40 | Lunastyx | joined the channel | |
05:45 | Lunastyx | left the channel | |
05:45 | danieel | troy_s / Bertl ... are you sure that from linearization of PLR you get more than 16 bits of data? the pixel on the sensor has some well capacity and cannot hold more electrons, so even if you count them precisely, the counts won't be so high that you need 32 bits
| |
05:47 | danieel | remember that gain is applied AFTER photo-electron conversion, at the e->V stage. you cannot control the photon conversion rate, and the DR / latitude won't get over the equivalent of the full well capacity
| |
05:49 | danieel | float is not required to hold the exact slope of the linear piece - there is much more noise than the rounding error to int (1 LSB) would introduce
| |
06:06 | troy_s | danieel: You don't get more data.
| |
06:07 | troy_s | danieel: But you require a deeper bit depth to maintain scene referred values.
| |
06:07 | troy_s | danieel: Make sense?
| |
06:07 | troy_s | danieel: Any given display / output referred model is always a warped version of linearized data. For example, sRGB can only stow approximately 2.5 stops of latitude up from a middle grey value.
| |
06:08 | troy_s | danieel: Middle grey slots in at around 0.2 display linear; taken up a stop that is 0.4, up another it is 0.8, and a bit more.
| |
06:08 | troy_s | danieel: Hence 2.5ish stops of latitude upwards before you hit the diffuse white cap.
| |
06:09 | troy_s | danieel: In a scene referred model, you must be set to hold many more stops of data. In such a model, 1.0 has no meaning.
| |
06:09 | troy_s | danieel: So to maintain the full latitude, with discrete granularity, it results in the need for larger numbers at your upper end.
| |
06:11 | troy_s | danieel: So if you are compositing say, a mere 9 stops upward from say, a middle grey set at an arbitrary 0.3 perhaps, you'd need 0.3, 0.6, 1.2, 2.4, 4.8, 9.6, 19.2, etc.
| |
06:11 | troy_s | danieel: And that would require enough discrete intervals between that result from image manipulations or the like.
| |
06:11 | troy_s | danieel: And that doesn't even begin to cover an HDR pass or IBL.
| |
06:13 | troy_s | danieel: So you are in the end, quite correct in part; you can indeed store a large dynamic range within the default 16 bits of data. However, storing them as scene referred values is virtually impossible given 16 bits of integer data, and certainly insufficient for any degree of post work.
| |
06:14 | troy_s | (Hopefully that makes sense, and largely explains why inter-pipeline formats are almost exclusively EXR. Interchange formats generally err toward DPX due to the inclusion of a TRC, where OpenEXR is exclusively linear, and requires additional information to properly communicate aesthetic choices such as transfer curves and such.)
| |
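troy_s's doubling argument fits in one short sketch (the 0.3 middle grey and nine stops are just the arbitrary figures from the discussion above, not anyone's pipeline convention):

```python
import math

def stops_up(middle_grey, n_stops):
    """Scene-referred values reached by doubling middle grey n times."""
    return [middle_grey * 2 ** k for k in range(n_stops + 1)]

# Nine stops above a middle grey pinned at 0.3 already needs values
# far past the 1.0 ceiling of a display-referred encoding:
print(stops_up(0.3, 9))  # 0.3, 0.6, 1.2, ... up to 153.6

# Compare the sRGB display-referred case: middle grey near 0.2 leaves
# only about 2.3 stops of headroom before diffuse white at 1.0.
print(math.log2(1.0 / 0.2))
```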
06:28 | danieel | troy_s: if the sensor can differentiate between 14-15k values, you do not need anything more than 14 bits - for a complete linear representation
| |
06:30 | danieel | Full well charge 13500 e-
| |
06:30 | danieel | you cannot get more latitude than 13.7 stops with that, in a single exposure
| |
06:31 | Lunastyx | joined the channel | |
06:31 | danieel | assuming the sensor does reset the charge when the PLR is applied, you get 3 repetitions, so that is 3x larger linear range - 15.3 stops
| |
06:32 | danieel | (which does not need to be true, depends on implementation of PLR)
| |
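danieel's stop figures are easy to verify (the 13500 e- full well and the 3x linear range from three PLR repetitions are taken straight from the lines above):

```python
import math

FULL_WELL = 13500  # electrons, the full well charge quoted above

# Single exposure: latitude is limited by the well capacity.
print(round(math.log2(FULL_WELL), 1))      # 13.7 stops

# Assuming the charge resets per slope, three repetitions triple the
# linear range:
print(round(math.log2(3 * FULL_WELL), 1))  # 15.3 stops
```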
06:34 | troy_s | danieel: Did you read what I typed?
| |
06:34 | troy_s | danieel: The bit depth conversion to 32 bit float OpenEXR scene referred values is required for post.
| |
06:35 | troy_s | danieel: Hence why I was asking how trivial it is to undo the Piecewise Linear knees.
| |
06:35 | Lunastyx | left the channel | |
06:36 | danieel | you can't get real scene referred values unless you factor in iris settings and, if filters are used, their transparency
| |
06:36 | danieel | undoing the PLR is simple with a LUT, 10/12 -> 16 bit (at most)
| |
06:37 | troy_s | danieel: So again, while you can capture a decent degree of dynamic range with a decent degree of granularity in-camera, it is effectively useless in post.
| |
06:37 | troy_s | danieel: And requires the inversion of whatever TRC.
| |
06:37 | troy_s | danieel: It is always simple given you can accurately figure out what is applied, which was the crux of the question posed. ;)
| |
06:38 | danieel | troy_s: if you used DNGs a little bit more: there is a LUT-based interpretation of stored values - the inverted TRC you want - specified in each file. I think that is worth the extra 8kB :)
| |
06:39 | troy_s | danieel: The question was effectively asking "Just how easy and how accurate would it be?" to which Herb made it clear that it is quite easy on both fronts.
| |
06:39 | troy_s | danieel: DNGs are fscking useless to the question.
| |
06:39 | troy_s | Egads.
| |
06:39 | troy_s | And yes, you can get reasonable scene referred values. Reasonable as in they will composite quite well with no luminance errors
| |
06:40 | danieel | i would not care much about precision for composition. even big productions suck - i notice on first look something is bad in the scene
| |
06:40 | troy_s | Huh?
| |
06:40 | troy_s | Oh god.
| |
06:41 | troy_s | Have you EVER done image manipulation man?
| |
06:41 | troy_s | If you have, linearization is just a wee tad important. And figuring out the transfer is rather important. And that is why I asked the question
| |
06:42 | troy_s | Of course the MUST MASH KEYBOARD BEFORE I UNDERSTAND WHAT HE IS SAYING types
| |
06:42 | troy_s | feel compelled to wax lyrical about rubbish that is completely irrelevant
| |
06:42 | danieel | look, PLR is just what it is. I totally wonder why do you have such a trouble with understanding how to undo-it
| |
06:42 | troy_s | I am sure it is enlightening to some, but maybe... just MAYBE... not everyone in the room is a complete imbecile?
| |
06:43 | danieel | three slopes, you can undo it by code with branches, or by LUTs
| |
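A minimal sketch of that branch-based undo for a hypothetical 3-slope response (the code ranges and per-segment steps below are invented for illustration; a real camera would derive them from the sensor's PLR register settings):

```python
# Hypothetical 3-slope encoding: (code_lo, code_hi, linear_step), i.e.
# each code in a segment stands for `linear_step` linear counts.
SEGMENTS = [(0, 2048, 1), (2048, 3072, 2), (3072, 4096, 4)]

def undo_plr(code):
    """Map a 12-bit piecewise-linear code back to a linear count."""
    linear = 0
    for lo, hi, step in SEGMENTS:
        if code >= hi:
            linear += (hi - lo) * step    # whole segment traversed
        else:
            linear += (code - lo) * step  # partial segment
            break
    return linear

# Or precompute it once as a LUT, as danieel suggests:
LUT = [undo_plr(c) for c in range(4096)]

# The 12-bit input expands to a 13-bit linear range -- still comfortably
# inside 16-bit integers, which is danieel's point.
print(max(LUT), max(LUT).bit_length())
```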
06:43 | troy_s | You don't say?
| |
06:43 | danieel | but what matters is that you never get more than 16 bits from such an undo operation, because of the (crappy) sensor
| |
06:43 | troy_s | Where in my question did I suggest you could not?
| |
06:44 | danieel | no matter how much you want a 32bit float
| |
06:44 | troy_s | Ok look. Have you ever manipulated a fscking image?
| |
06:44 | danieel | 32bit lin or float is unnecessary for the camera!
| |
06:44 | troy_s | God.
| |
06:44 | troy_s | Read.
| |
06:44 | danieel | we are making a camera, not a compositing software here
| |
06:44 | troy_s | The.
| |
06:44 | troy_s | Question
| |
06:45 | troy_s | Where did _I_ suggest to use 32 bit in the camera?
| |
06:45 | troy_s | Seriously.
| |
06:45 | troy_s | You keyboard mashers drive me batshit.
| |
06:45 | troy_s | Go read it.
| |
06:46 | troy_s | Good.
| |
06:46 | danieel | "The format would inevitably be an OpenEXR at 32 bit float."
| |
06:46 | troy_s | Yes. Now read the damn context.
| |
06:46 | danieel | but undoing the PLR is necessary, e.g. for viewfinders
| |
06:46 | troy_s | Fffffuuuuuuuu
| |
06:47 | troy_s | You are infuriating for someone that appears an otherwise intelligent human being sir.
| |
06:49 | danieel | i have a practical question
| |
06:50 | danieel | if you use a ND, where in the pipeline is the different slope/fraction applied to what the sensor produces until you get a correct/absolute exr file
| |
06:50 | danieel | (e.g on a F65)
| |
06:51 | troy_s | I have to ask a question back: What is a "correct / absolute exr file"?
| |
06:51 | troy_s | There is never a slope in an EXR. (Well I suppose you could say there is a linear diagonal slope.)
| |
06:52 | danieel | the exr does or does not have the same value referring to the same illumination?
| |
06:52 | troy_s | What?
| |
06:52 | troy_s | What do you mean by "Same value?"
| |
06:52 | danieel | like 1.0 will always refer to a spot with defined brightness
| |
06:53 | troy_s | Erm. 1.0 has no meaning in a scene referred image.
| |
06:53 | danieel | so it has no knowledge of how much the scene is lit
| |
06:55 | troy_s | Putting words in my buffer again. ;)
| |
06:55 | danieel | again
| |
06:55 | danieel | so exr is great in representing high dynamic range, correct?
| |
06:55 | troy_s | An EXR if scene referred, and not display linear
| |
06:55 | troy_s | 1.0 has no meaning.
| |
06:55 | danieel | i am talking about scene referred here
| |
06:55 | troy_s | Ok.
| |
06:56 | troy_s | Good.
| |
06:56 | troy_s | Perhaps I misunderstood you then
| |
06:56 | troy_s | What did you mean by "like 1.0 will..."?
| |
06:56 | danieel | so you get very high dynamic range when using exr... but does the user/software know where that DR is placed?
| |
06:56 | troy_s | (Perhaps 1.0 was an arbitrary choice.)
| |
06:57 | troy_s | danieel: What does that mean?
| |
06:57 | troy_s | Your dynamic range is your data.
| |
06:57 | danieel | that values... 1 - 1000 would be e.g. in shadows / night
| |
06:57 | troy_s | So if you go from 0.0000000212 to 45822.2
| |
06:57 | danieel | values 1000 - 1M would be during day
| |
06:57 | danieel | or you get 1-1000 at both in night and day for the same scene?
| |
06:57 | troy_s | Or your day values are 0.0234422 to 0.554322
| |
06:58 | troy_s | Think data.
| |
06:58 | troy_s | It doesn't matter. At all.
| |
06:58 | troy_s | It is like me asking you to look around your room and tell me where white is.
| |
06:58 | troy_s | Doesn't exist. Just values.
| |
06:58 | danieel | it does. how are you going to compose 2 sources then? you need to manually set a "replay gain" to have them matching?
| |
06:58 | troy_s | Sense?
| |
06:59 | troy_s | Erm not really. You generally slug them in according to whatever reference base you set.
| |
06:59 | danieel | and that reference base - is in EXR ?
| |
06:59 | troy_s | If we are HDR stacking, then we need ancillary data to cluster the images.
| |
06:59 | troy_s | Nope.
| |
06:59 | danieel | or set afterwards, when the file is used
| |
07:00 | troy_s | Up to the artist or pipeline.
| |
07:01 | troy_s | All that really matters is that your relative intensities are representative. Rather easy math in scene linear too, as it is a basic multiply.
| |
07:02 | troy_s | I believe some pipes have conventions (like SPI setting a middle grey exposure at 0.18 to start)
| |
07:02 | danieel | okay, so EXR is not absolute
| |
07:02 | troy_s | But again, that is purely convention.
| |
07:02 | troy_s | Not sure what absolute means.
| |
07:02 | Lunastyx | joined the channel | |
07:02 | troy_s | If you mean "XXX nits maps to value YYY" then no.
| |
07:02 | danieel | would that convention you showed be part of the exr spec?
| |
07:02 | danieel | yes that
| |
07:03 | troy_s | Because that would screwbar your imaging.
| |
07:04 | danieel | then I see no point / no benefit / in using EXR over 16bit integer linear - on contents from our cameras
| |
07:04 | troy_s | Uh.
| |
07:04 | troy_s | Again.
| |
07:04 | troy_s | Do you mean _in_ camera?
| |
07:05 | troy_s | If so, again, I never ever for a moment implied nor stated that.
| |
07:05 | danieel | not in
| |
07:05 | troy_s | If you mean in imaging, then you don't understand the problem
| |
07:05 | danieel | for storage, out of camera
| |
07:05 | troy_s | :)
| |
07:05 | troy_s | LOL.
| |
07:05 | troy_s | Ok
| |
07:05 | troy_s | Hrm... let's say we are using values something like... 50-60 for a given set of pixels.
| |
07:06 | troy_s | And we wanted to blur that range.
| |
07:06 | danieel | wait!
| |
07:06 | danieel | there is no blur on camera content
| |
07:06 | danieel | if you blur it - you are PROCESSING the data, not storing from camera
| |
07:07 | troy_s | Again, this entire discourse is predicated on processing as I made clear.
| |
07:07 | troy_s | That is why you need to accurately undo the TRC.
| |
07:07 | danieel | but the source data from camera for your processing pipeline does not need to be EXR/32bit
| |
07:08 | troy_s | So your raw frame always remains untouched up to the point of needing touching.
| |
07:08 | danieel | yes
| |
07:08 | troy_s | Again, where did I suggest it did?
| |
07:08 | troy_s | I am quite sure that if you use your due diligence and read precisely what I wrote and understand the context, that I made no such claim.
| |
07:09 | danieel | i would advise to not mention exr anymore, as it is absolutely not related to the camera :)
| |
07:09 | troy_s | Don't. Be. A. Donkey.
| |
07:10 | troy_s | You have to _get_ to an EXR. And that has to do with TRCs and inversion.
| |
07:10 | danieel | why EXR????
| |
07:10 | troy_s | So for the love of God, don't donkey out on me.
| |
07:10 | danieel | you can do TRC and inversion to a 16bit integer
| |
07:10 | troy_s | Ugh.
| |
07:11 | troy_s | Good night. You can carry on educating us idiots tomorrow perhaps.
| |
07:46 | Lunastyx | left the channel | |
07:52 | Lunastyx | joined the channel | |
08:01 | Bertl | morning everyone!
| |
08:02 | Bertl | danieel: the problem is that the _original_ bitdepth will not work to represent nonlinear data
| |
08:03 | Bertl | just think applying a gamma curve to linear data (same bitdepth) and then undoing it later
| |
08:03 | Bertl | you will inevitably lose data for gamma values != 1
| |
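Bertl's gamma round-trip point is easy to demonstrate (8 bits and gamma 2.2 are illustrative choices here, not the camera's numbers):

```python
GAMMA = 2.2  # illustrative transfer curve

def encode(v):
    """Linear -> gamma, quantized to the SAME 8-bit depth."""
    return round(255 * (v / 255) ** (1 / GAMMA))

def decode(v):
    """Gamma -> linear, again quantized to 8 bits."""
    return round(255 * (v / 255) ** GAMMA)

# Round-trip every 8-bit linear code through the curve and back:
survivors = {decode(encode(v)) for v in range(256)}
lost = 256 - len(survivors)
print(lost)  # > 0: codes are inevitably collapsed at equal bit depth
```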
08:04 | danieel | slowly please
| |
08:04 | Bertl | the PLR data is very similar, in that it contains different (but piecewise linear) 'gain values' for each range
| |
08:04 | danieel | it is the same principle as sLog
| |
08:05 | danieel | value != intensity
| |
08:05 | Bertl | let's say we have 12bit
| |
08:05 | danieel | storage size?
| |
08:05 | Bertl | sensor data
| |
08:05 | danieel | adc output then?
| |
08:05 | Bertl | the values we get sent over LVDS
| |
08:06 | danieel | so storage / transmission, okay
| |
08:06 | Bertl | we don't know if a 10bit, 12bit or 24bit ADC is there
| |
08:06 | Bertl | now, in a purely linear acquisition, let's say every bit is valid
| |
08:06 | danieel | yes
| |
08:07 | Bertl | i.e. we get full 12bit, and every bit is precisely so many electrons
| |
08:07 | danieel | of course
| |
08:07 | Bertl | now we split the range in half, with two times 11bit data
| |
08:08 | Bertl | the lower half is 1:1 and the upper half we make 1:2
| |
08:08 | Bertl | i.e. we reduce the 'gain' for the upper half
| |
08:08 | danieel | so the quantization step in the 1st half is 1
| |
08:08 | danieel | and second half is 2
| |
08:09 | danieel | the DR range is 1.5x more than in linear mode now
| |
08:09 | danieel | we do understand each other
| |
08:09 | danieel | i think :)
| |
08:09 | Bertl | so it will take longer in the second half, twice as long, yes
| |
08:10 | Bertl | now, if we want to undo the TRC here, i.e. get a linear range, what do we need to do?
| |
08:10 | Bertl | we need to multiply the second half by 2
| |
08:10 | danieel | make a LUT, and you end up with 13bit output
| |
08:10 | Bertl | precisely, we need an additional bit
| |
08:11 | Bertl | now currently we receive 12bit and we store 12bit
| |
08:11 | Bertl | assuming that all 12bit are true values, we would lose data using a LUT
| |
08:11 | danieel | lose?
| |
08:11 | Bertl | we could decide where we want to lose 1/3rd of the data
| |
08:11 | Bertl | i.e. on the lower or on the upper end
| |
08:12 | danieel | you won't lose, the sensor will drop the precision
| |
08:12 | danieel | sort of lossy compression
| |
08:12 | Bertl | yes, but if we _undo_ the non linear data, then we _lose_
| |
08:12 | Bertl | (unless we make it 13bit)
| |
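Bertl's two-slope example in code (12-bit codes, lower half 1:1, upper half 1:2, exactly as described above):

```python
def undo_two_slope(code):
    """Undo Bertl's example TRC: 12-bit input, upper half at half gain."""
    if code < 2048:
        return code                  # 1:1 segment
    return 2048 + (code - 2048) * 2  # 1:2 segment, multiplied back

top = undo_two_slope(4095)
print(top, top.bit_length())  # 6142 13 -- one extra bit, or data is lost

# Note the upper half only produces every second linear value; squeezing
# the result back into 12 bits would have to drop that information
# somewhere, on the lower or the upper end.
```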
08:13 | danieel | that is true, but you won't need to go to float here
| |
08:13 | Bertl | and that's all I said in my email reply :)
| |
08:13 | Bertl | i.e. you have to increase the bitdepth _or_ switch to float
| |
08:13 | danieel | do you know how the ADC works and PLR is done in practice?
| |
08:13 | Bertl | yes
| |
08:13 | niemand | joined the channel | |
08:14 | danieel | so that brings the 16bit upper limit
| |
08:14 | danieel | still no need for float
| |
08:14 | Bertl | of course, you can always address the issue with fixed point integers
| |
08:15 | danieel | for storage I would prefer to store the "compressed" raw data as the sensor gives it, and then attach a LUT/TRC per frame (perfect for a DNG workflow)
| |
08:16 | Bertl | yes, probably the best choice, we kind of do that already with the register dump at the end of a frame :)
| |
08:16 | danieel | i would maybe care about applying that unpacking - for a better looking preview on viewfinders (PLR as-is is not easy to understand if the scene does not contain fades between ranges)
| |
08:17 | danieel | btw, sLog is that PLR with high detail in BOTH the bottom and top, losing precision in the middle
| |
08:17 | danieel | i know I can do that on my sensors - but the feature is rather poorly documented, as it was experimental
| |
08:18 | danieel | there is no reset of charge when changing slopes, that is the same at your end - yes?
| |
08:18 | danieel | (would interfere with exposure of the frame)
| |
08:18 | Bertl | I think the sigmoid transfer functions are introduced because they match the profile of analog film
| |
08:19 | danieel | my thinking is rather that it is needed when you do EV correction in post (get back the shadows / highlights )
| |
08:19 | Bertl | no there is no reset, but the sensels are kept from saturation during the HDR process
| |
08:19 | danieel | so it wont show quantization artifacts (posterization)
| |
08:21 | Bertl | it would be better to completely reset the sensel, but this is definitely not done in the CMV12k, mainly because it would take a lot of time
| |
08:21 | danieel | it is funny that it is always hard to talk with some people and easy with others :)
| |
08:22 | danieel | you can emulate that sort of plr - basically it is a multi-exposure HDR (different time and gains per frame)
| |
08:22 | danieel | with high fps sensors we can do lot of magic in that direction
| |
08:22 | Bertl | every person approaches things differently, you need to try to understand the other perspective ... then it suddenly gets easy
| |
08:23 | Bertl | yes, the advantage of the built in HDR mode is that it doesn't require an image transfer
| |
08:23 | danieel | i try to keep myself in practice, near hardware :) others might be good at theory, but I see no point in pushing that - when it cannot be used
| |
08:23 | Bertl | i.e. it happens in the sensor, which is a big advantage from the timing PoV
| |
08:24 | danieel | you mean it does two frame hdr and merges the data on chip?
| |
08:24 | Bertl | danieel: pardon me asking, how old are you?
| |
08:24 | danieel | '82
| |
08:24 | rainer_vie | joined the channel | |
08:25 | Bertl | it doesn't do two frames, it does one frame with kind of different gains
| |
08:25 | danieel | those per row/col different gains you mean?
| |
08:26 | Bertl | but you only have to transfer the data once
| |
08:26 | Bertl | even in the PLR mode it does that
| |
08:26 | Bertl | currently (32lanes, 300MHz) it takes about 14ms to transfer the 4k image
| |
08:27 | Bertl | the sensor can easily do a snap in 1-3ms or less
| |
08:27 | Bertl | but transferring two full 4k frames would take almost 30ms, three almost 45ms ... see the problem?
| |
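Bertl's transfer times can be ballparked (frame size, lane count and clock come from the discussion; 1 bit per lane per clock and zero protocol overhead are assumptions, so this only lands in the neighbourhood of the quoted ~14 ms):

```python
LANES = 32
LANE_RATE = 300e6     # bits/s per lane (assumed 1 bit per clock)
PIXELS = 4096 * 3072  # 4k frame

for bpp in (10, 12):  # plausible bits per pixel on the link
    t_ms = PIXELS * bpp / (LANES * LANE_RATE) * 1e3
    print(f"{bpp} bit/pixel: {t_ms:.1f} ms")
# -> roughly 13-16 ms per frame, so two or three full frames per
#    exposure quickly eat the time budget, as Bertl says.
```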
08:28 | danieel | but how are you then getting into 100 fps?
| |
08:29 | Bertl | at this setup, not at all
| |
08:29 | Bertl | first, we are only using half the LVDS lanes
| |
08:30 | danieel | i would not expect to do that on the current setup
| |
08:30 | Bertl | so we could halve the time/double the frame rate there
| |
08:30 | danieel | still doable if you run at 60 now
| |
08:30 | Bertl | second, newer CMV12k sensors (v2) have 600MHz lvds
| |
08:30 | danieel | have you tried to overclock the current ones ? :)
| |
08:31 | Bertl | not yet, but I'm pretty sure they will have some additional range
| |
08:33 | Bertl | so .. off for now ... bbl
| |
10:04 | se6astian | joined the channel | |
10:05 | se6astian | good day
| |
10:05 | Bertl | & a good 1 2 u 2!
| |
10:13 | se6astian | :)
| |
10:13 | se6astian | I see a certain shift back to 1337 type :D
| |
10:13 | se6astian | 0h th3 good 0ld t1mes ;)
| |
10:19 | Bertl | you started it 536A571AN :)
| |
10:20 | se6astian | I did some thinking and came up with math (don't laugh) that I think can link the PLR parameter settings to the actual f-stop value extensions that these settings will achieve
| |
10:20 | se6astian | I ll test my theory by adapting the PLR GUI
| |
10:21 | Bertl | sounds good ...
| |
10:55 | rainer_vie | sebastian, do you think you can get one or two more examples on PLR in the video? seeing the huge dynamic range is so awesome
| |
11:08 | se6astian | patience my friend :)
| |
11:13 | rainer_vie | :) cu later
| |
11:13 | rainer_vie | left the channel | |
11:34 | se6astian | so far so good: https://cloud.gerade.org/public.php?service=files&t=88644302ffde4f6e5659627a3c8bbf32&path=/Axiom/alpha/PLR02.jpg
| |
11:35 | se6astian | 50% brighter part of the image gets half the exposure time -> +1 f-stop, do you agree with the model so far?
| |
11:35 | se6astian | +1 fstop = factor 2
| |
11:40 | Bertl | the mapping is not correct :)
| |
11:41 | Bertl | i.e. the reasoning is fine for the given graph, but the light to voltage diagram probably looks a little different
| |
11:42 | Bertl | I'm translocating shortly, but I'll check the math later
| |
11:43 | se6astian | because of nonlinearity ?
| |
11:44 | Bertl | no, simply because putting the vtfl2 on 50% and having half the exposure time will not result in this curve :)
| |
11:44 | Bertl | i.e. it might match for 30ms/10ms
| |
11:45 | Bertl | (handwaving)
| |
11:49 | se6astian | ok, let me revisit the concept
| |
11:56 | niemand | left the channel | |
12:11 | se6astian | think I fixed it: https://cloud.gerade.org/public.php?service=files&t=88644302ffde4f6e5659627a3c8bbf32&path=/Axiom/alpha/PLR03.jpg
| |
12:28 | intracube | left the channel | |
12:35 | niemand | joined the channel | |
13:08 | niemand | left the channel | |
13:15 | intracube | joined the channel | |
15:03 | troy_s | se6astian: I think the shaper and matrix can be implemented in camera quite easily.
| |
15:05 | troy_s | se6astian: Now that the mangled data is gone, the general progression was lower DE as we marched up exposure. Mean error was 5.9 or something at 34ms.
| |
15:12 | Bertl | judging from the images you converted, we are still not completely in the linear range
| |
15:12 | Bertl | otherwise the FPN streaks would not show
| |
15:13 | troy_s | Bertl: Yes something funky there. I suppose we could map the curves again.
| |
15:14 | troy_s | Bertl: On the flip side, the streaks might simply be extreme errors in the noise.
| |
15:15 | troy_s | Bertl: And the transformed colors are way out of whack. Swatch 00 on the IT8 would imply the data in the red channel is struggling a little
| |
15:15 | troy_s | Bertl: (Or conversely data in the b and g.)
| |
15:17 | troy_s | danieel: Different exposure times bring with them anomalies on the motion blur front. Interesting to explore, but I wonder about the impact over footage (See HDRX mode)
| |
15:18 | troy_s | danieel: PS: :P
| |
15:29 | troy_s | Bertl: Easiest way to see how badly our data is behaving is via matrix only. When the matrix DE is low, the paths are behaving linearly.
| |
15:31 | Bertl | yep
| |
15:36 | niemand | joined the channel | |
15:49 | troy_s | danieel: "others might be good at theory" was clearly directed at me. All I said was you were wrong. You were. No idea how that is theory. ;)
| |
15:56 | se6astian | sounds good, so can we extract 3x3 or 4x4 matrix values from these measurements already?
| |
16:03 | se6astian | https://cloud.gerade.org/index.php/apps/gallery/ajax/image.php?file=88644302ffde4f6e5659627a3c8bbf32%2FAxiom%2Falpha%2FPLR04.jpg
| |
16:04 | se6astian | Dynamic range seems to be extendable up to 8 f-stops in theory
| |
16:04 | se6astian | but the more you push it the more noise you get
| |
16:04 | se6astian | and it seems the colors get desaturated
| |
16:05 | troy_s | se6astian: I guess a log-ish curve is needed.
| |
16:05 | troy_s | (Via PLR)
| |
16:06 | troy_s | se6astian: The curves are pretty simple, and all color space transforms in tricolor to XYZ are 3x3, so I doubt there is an issue there.
| |
16:06 | troy_s | se6astian: What do you mean "it seems the colors get desaturated"?
| |
16:07 | troy_s | se6astian: Wider gamuts when dumped to smaller look desaturated.
| |
16:07 | se6astian | that a sky you pull down by 6 f-stops is more grey than blue
| |
16:07 | troy_s | (Same goes for wider latitude when viewed on LDR)
| |
16:08 | troy_s | se6astian: Hrm. That may be sensels filled up.
| |
16:08 | se6astian | well you don't really extend the dynamic range with this mode, you can just pull down the highlights into a useable range
| |
16:08 | se6astian | but inside the sensor already, so it's not a post processing step
| |
16:08 | Bertl | not true :)
| |
16:09 | troy_s | I suspect perhaps the color volume is crunched near the peak?
| |
16:09 | se6astian | basically the lighter sensels have a shorter exposure time
| |
16:09 | Bertl | se6astian: what are your settings?
| |
16:10 | troy_s | se6astian: Hard to guess from here. The only thing that can ultimately evaluate saturation however, is a post analysis of a chart. Until then, it is just random data to an extent.
| |
16:11 | se6astian | I will try to gather some chart data with PLR on soon
| |
16:11 | se6astian | Bertl: what settings do you mean?
| |
16:11 | Bertl | the kneepoing settings for the blue sky for example
| |
16:12 | se6astian | I can't recall exactly; it was just a general impression that the image seems to lose saturation the more I increase the PLR values
| |
16:12 | Bertl | well, let's do some fantasy values then
| |
16:12 | se6astian | I will capture actual footage to analyse soon
| |
16:13 | Bertl | vtl = 50%, exp 2 10% ?
| |
16:13 | se6astian | these parameters are visible in the GUI
| |
16:14 | se6astian | VTFL = Level Kneepoint
| |
16:14 | Bertl | yeah, I know, give me some values to work with
| |
16:14 | se6astian | where the range is 0..63
| |
16:17 | se6astian | you can try the settings exactly as in the screenshot if you want a good starting point
| |
16:17 | Bertl | and what are the settings?
| |
16:18 | se6astian | do you want me to read them to you? :)
| |
16:18 | Bertl | yes please, no browser here
| |
16:18 | se6astian | ah!
| |
16:18 | se6astian | exp1 = 15ms
| |
16:19 | se6astian | Exp_kp1 = 0.4ms
| |
16:19 | se6astian | Exp_kp2 = 1.5ms
| |
16:19 | se6astian | Vtfl2 = 21
| |
16:19 | Bertl | I guess you are using only two slopes yes?
| |
16:19 | se6astian | Vtfl3 = 42
| |
16:19 | Bertl | otherwise the kp2 > kp1 wouldn't work
| |
16:19 | se6astian | slopes = 3
| |
16:20 | se6astian | it took me some time to realize that the kneepoints are swapped in 3 slope mode
| |
16:20 | se6astian | datasheet page 34
| |
16:20 | Bertl | isn't kp1 the first kneepoint, so shouldn't it happen before kp2?
| |
16:20 | se6astian | kneepoint 1 is the top one in the response curve
| |
16:21 | se6astian | kneepoint 2 is the first one
| |
16:21 | Bertl | okay, so be it
| |
16:21 | Bertl | vtfl2 is for 0 to exp1-kp2
| |
16:22 | Bertl | no that must be vtfl3 then
| |
16:22 | Bertl | and vtfl2 is for exp1-kp2 to exp1-kp1 yes?
| |
16:22 | se6astian | yes, it confused the hell out of me to always swap everything while I was programming this gui
| |
16:22 | Bertl | yeah, well, cmosis is not that good at naming their registers
| |
16:23 | se6astian | :)
| |
16:23 | Bertl | so what does that mean for a color like bright blue
| |
16:23 | Bertl | let's assume R,G = 80%, B = 100%
| |
16:24 | Bertl | further let's assume that the blue is maxing out in your shot, yes?
| |
16:24 | se6astian | vtfl3 is the first holding voltage, vtfl2 is the last holding voltage
| |
16:25 | Bertl | in a 'normal' linear exposure, you would end up with 0.8 0.8 1.0
| |
16:25 | se6astian | yes
| |
16:26 | Bertl | now with kp1 at 0.4ms and vtfl2 = 21 that means that the last 1/3rd of the exposure happens in 2.6% of the entire exposure time
| |
16:27 | Bertl | i.e. R,G, and B will max out at 2/3rd anyway
| |
16:27 | Bertl | and we will get the actual exposure in the last 0.4ms
| |
16:27 | Bertl | which means that the 20% difference gets reduced to 1/3rd
| |
16:28 | Bertl | i.e. R,G will reach 93.3%, while B gets the 100%
| |
16:29 | Bertl | now, is 93%, 93%, 100% less saturated than 80%, 80%, 100%?
| |
16:30 | troy_s | Interject here?
| |
16:30 | Bertl | this is a rhetorical question, of course the answer is yes :)
| |
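The desaturation Bertl derives can be put in a toy model (the assumption: any channel bright enough to hit the clamp spends the limited phases pinned at 2/3 of range, and only the free tail resolves the channel differences into the last third):

```python
CLAMP = 2.0 / 3.0  # signal level the sensels are held at until the tail

def plr_clipped_value(linear):
    """Toy model for channels that hit the clamp: the remaining third of
    the range is shared out in proportion to the linear intensity."""
    return min(1.0, CLAMP + linear * (1.0 - CLAMP))

r = g = plr_clipped_value(0.8)   # bright blue example: R,G = 80%
b = plr_clipped_value(1.0)       # B maxes out at 100%
print(round(r, 3), round(b, 3))  # 0.933 1.0 -- vs the true 0.8 vs 1.0
# The 20% channel difference shrinks to ~6.7%, i.e. visibly desaturated.
```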
16:30 | troy_s | What does the vtfl register represent?
| |
16:30 | Bertl | troy_s: go ahead, I'm done
| |
16:30 | Bertl | the voltage level the sensel is kept at
| |
16:31 | Bertl | this HDR mode works by holding the sensels on a predefined voltage for some time
| |
16:31 | troy_s | So it does a stepped capture?
| |
16:31 | troy_s | Constantly cycling through that progression?
| |
16:31 | Bertl | in this specific config (given by se6astian) the following happens:
| |
16:32 | troy_s | (My mind is washed out with what this does to motion blur)
| |
16:32 | Bertl | for 13.5ms, exposure is 'limited' at 2/3rd of the possible range
| |
16:33 | Bertl | after that, for 1.1ms it is limited to 1/3rd
| |
16:33 | Bertl | and after that, it is unlimited for 0.4ms
| |
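Those three phases follow directly from the register values se6astian listed (exp1 = 15 ms, Exp_kp1 = 0.4 ms, Exp_kp2 = 1.5 ms), remembering that the kp times count back from the end of the exposure:

```python
def plr_phase_durations(exp_ms, kp1_ms, kp2_ms):
    """Durations of the three PLR phases; kp times are measured from
    the END of the exposure (CMV12000 convention per the discussion)."""
    return (exp_ms - kp2_ms,   # limited at the first voltage level
            kp2_ms - kp1_ms,   # limited at the second voltage level
            kp1_ms)            # unlimited tail

first, second, tail = plr_phase_durations(15.0, 0.4, 1.5)
print(first, second, tail)  # 13.5 1.1 0.4, matching Bertl's breakdown
```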
16:33 | troy_s | Total exposure is 15ms?
| |
16:34 | Bertl | note that the exposure starts at 100% and the voltage is reduced by exposing
| |
16:34 | Bertl | yes
| |
16:34 | troy_s | So the kp registers indicate how long they hold voltage at?
| |
16:34 | se6astian | time to start cooking dinner
| |
16:34 | troy_s | se6astian: Will do the IT8s today
| |
16:34 | Bertl | yes, but they are a little confusing, as both count from the end of the exposure
| |
16:35 | troy_s | se6astian: The more you and Herb hammer on the noise, the better. Once we get the data under control, better things follow.
| |
16:35 | Bertl | i.e. 1.5ms kp1 time means, 13.5ms limit and 1.5ms unlimited
| |
16:35 | troy_s | Ah
| |
16:36 | troy_s | So a high point in the curve if we see it "traditionally"
| |
16:36 | troy_s | Bertl: What is the second value then?
| |
16:36 | Bertl | the vtfl?
| |
16:36 | troy_s | Yes.
| |
16:36 | Bertl | it is the voltage level used to limit the exposure
| |
16:37 | troy_s | Gotcha. So a hardware value.
| |
16:37 | Bertl | with a range of 0 - 63 where we assume 0 = 0% and 63 is 100%
| |
16:37 | troy_s | Bertl: And how many of these knees are possible?
| |
16:37 | Bertl | two
| |
16:37 | troy_s | Derp.
| |
16:39 | troy_s | Bertl: So speculating, is there a way to balance the data granularity across the knees?
| |
16:40 | troy_s | (thinking along the lines of the log to lin discussion)
| |
16:40 | Bertl | well, se6astian obviously tried with 2/3rd and 1/3rd for the voltage levels
| |
16:40 | Bertl | (which is a sane assumption for a first test)
| |
16:41 | troy_s | Not optimal is it?
| |
16:41 | Bertl | and he also figured that the knees need to happen at the end of the exposure, not at evenly spaced intervals :)
| |
16:41 | troy_s | Just speculating. Feels like the more aggressively sloped regions require a proportionately wider chunk of data, no?
| |
16:42 | troy_s | Hrm. Interesting dilemma.
| |
16:42 | Bertl | the question is where you want your focus
| |
16:42 | Bertl | i.e. if you are not interested in the dark ranges, you can 'trade' bits for the light areas
| |
16:42 | troy_s | Yes. I suppose a typical Cineon styled log is prudent (or close as can be had)
| |
16:42 | Bertl | you get 4096 values to break up into three ranges
| |
16:43 | troy_s | Right.
| |
16:43 | troy_s | And given the bent nature of perception
| |
16:43 | troy_s | Prudence might suggest we slam the most granular data into the mid grey linear region
| |
16:44 | troy_s | (although _this_ control is extremely interesting if there is a means to expose it in a well designed wrapper)
| |
16:44 | Bertl | thinking that perception is logarithmic, I'd go for an approximation of this curve :)
| |
16:44 | troy_s | (where midish is probably in that 20% region)
| |
16:45 | troy_s | So another question
| |
16:45 | troy_s | Bertl: Given that you already have clipbits working
| |
16:46 | troy_s | Bertl: As in you are clipping and stretching at around 75% sensor capacity (we should look at the chart data from that first batch to hone in on an ideal value)
| |
16:46 | troy_s | Bertl: Are we able to repurpose the bit depth left over from the clip, or does the existing scale already do that?
| |
16:47 | troy_s | (IE Is there a way to get the data granular and use the whole bit depth wisely as opposed to a comb from a scale?)
| |
16:48 | troy_s | Bertl: ^^?
| |
16:51 | Bertl | well, as the 'exposure' of the sensor stops at less than 100%, those ranges are not really useable
| |
16:51 | Bertl | we can't do anything about that in the sensor
| |
16:52 | Bertl | but I'm still not convinced this is how the sensor is supposed to work, i.e. I presume some (maybe undocumented) registers are not configured properly
| |
16:52 | troy_s | Ah.
| |
16:52 | Bertl | the clipping/stretching is IMHO a workaround
| |
16:53 | troy_s | It does seem a wee tad odd.
| |
16:53 | Bertl | there is no point in producing a sensor with 12bit output which can only cover an 11 bit range :)
| |
16:53 | troy_s | I wonder what is going on there.
| |
16:54 | Bertl | there are a number of issues related with this specific sensor, I do not plan to work on this more than necessary
| |
16:54 | Bertl | i.e. it is not really relevant or important atm and again, time is on our side here
| |
16:54 | troy_s | Has anyone done the "Uh WTF" mail to Cmosis?
| |
16:55 | Bertl | we send them on a regular basis
| |
16:55 | Bertl | usually they result in a new errata/addendum being released shortly after :)
| |
16:55 | troy_s | Frustrating that this early on the sensor is already suffering bit depth degradation.
| |
16:55 | niemand | left the channel | |
16:56 | troy_s | And are they aware that their sensor goes nonlinear quite badly at the 70% threshold?
| |
16:57 | Bertl | I'd say yes, but I don't think we have a definitive answer in this regard
| |
16:58 | Bertl | anyway, not a real problem right now, a workaround exists, everything else will be solved later or, if it is unresolvable and problematic, we simply drop the sensor
| |
16:58 | troy_s | Yep
| |
16:58 | troy_s | Anyways... sort out that streaking noise so we can get decent images.
| |
16:59 | troy_s | :P
| |
16:59 | Bertl | hehe, se6astian just needs to play with the linearization values
| |
16:59 | troy_s | I will roll the IT8s.
| |
16:59 | troy_s | Bertl: ?
| |
16:59 | Bertl | i.e. use 1.5 -0.1 for example instead of 1.2 -0.1 or 1.3 -0.1
| |
16:59 | troy_s | Bertl: The streaking is purely that?
| |
17:00 | Bertl | all is configurable via shell scripts
| |
17:00 | Bertl | either that or a missing/bad rcn adjustment
| |
17:00 | troy_s | It seems there is a bit in the middle values as well.
| |
17:00 | troy_s | RCN = Royal Canadian Nuts?
| |
17:01 | troy_s | (And you give _me_ grief about acronyms.)
| |
17:01 | Bertl | that would hint towards RCN (Row Column Noise :)
| |
17:01 | troy_s | Gotcha.
| |
17:01 | Bertl | see how well acronyms work :)
| |
17:01 | troy_s | They work fine when one side isn't a complete idiot as in this case.
| 17:01 | troy_s | nuned too smert.
|
17:02 | se6astian | back, the lasagne is in the oven
| |
17:02 | se6astian | ah linearization values, I did play with them a bit already
| |
17:03 | se6astian | I found out for example that the dark vertical streaks in overexposed areas below non-overexposed values are mostly gone with linearization parameters of 1.07 0
| |
17:03 | troy_s | Is there a better method to get to values than random trial and error?
| |
17:03 | troy_s | (some script or test?)
| |
17:04 | Bertl | I wouldn't use anything below 1.2 from what we've seen so far
| |
17:05 | se6astian | btw Bertl did you check the latest cmosis AN already, it deals with non-linearity :)
| |
17:05 | Bertl | so 1.3-1.5 as factor, and probably -0.1 to -0.2 for the offset
| |
17:05 | Bertl | yes, I read and I think I also understood it
| |
17:05 | Bertl | my comment on that is to adjust the registers as suggested
| |
17:06 | Bertl | there is no way to compensate the described effects without processing the entire frame
| |
17:06 | se6astian | can't we correct the non-linear areas back to being linear? 1.3 - 1.5 means rather large banding gaps in the histogram already
| |
17:06 | se6astian | I see
| |
17:06 | Bertl | can't be that large if we have 1.5 (the maximum suggested)
| |
17:07 | Bertl | we skip one value (of 4096) every second step
| |
17:07 | Bertl | so 0, 1, 3, 4, 6 ...
| |
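Bertl's skipped-code pattern for a 1.5x linearization gain is easy to verify numerically. A minimal sketch, not the actual linearization pipeline:

```python
# Why a linearization gain of 1.5 leaves 'missing slots' in a 12-bit
# histogram: scaling integer codes by 1.5 and truncating maps inputs
# onto codes 0, 1, 3, 4, 6, ... so every third output code is never hit.

gain = 1.5
outputs = sorted({int(code * gain) for code in range(4096)})
print(outputs[:6])   # [0, 1, 3, 4, 6, 7] -- matches Bertl's sequence
missing = [c for c in range(10) if c not in set(outputs)]
print(missing)       # [2, 5, 8] -- the empty histogram slots
```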
17:09 | Bertl | shouldn't even be visible in a displayed histogram because you do not have 4096 vertical pixels :)
| |
17:09 | Bertl | *horizontal I mean
| |
17:09 | se6astian | then what I recently saw was another effect, will investigate
| |
17:10 | Bertl | so if your histogram is 1920 pixels wide (full HD screen) then you won't be able to see it
| |
17:11 | Bertl | note that you will get 'missing' slots
| |
17:11 | se6astian | maybe I just was lucky and "tuned" into those missing slots
| |
17:11 | Bertl | this is probably what confuses your graphing solution
| |
17:12 | troy_s | se6astian: By the way, the 34ms in the Hutch was heading to best DE. Didn't look too hard at the data, but going further might be good in your brackets.
| |
17:12 | Bertl | you can try to apply a weak smoothing (like 3-4 slots wide) before graphing
| |
17:22 | se6astian | I just have the chopped histogram effect again, but if you are currently unable to view images/websites I guess it makes no sense to upload
| |
17:23 | Bertl | how do you generate the histogram?
| |
17:23 | Bertl | you feed the cmv_hist3 output directly to a graphing software I presume?
| |
17:24 | se6astian | yes but I only take every 16th value
| |
17:24 | se6astian | to get a 256 wide histogram
| |
17:24 | Bertl | well, that explains it then
| |
17:25 | Bertl | I suggest to do the following:
| |
17:25 | Bertl | if you encounter a 0, just use the previous value
| |
17:26 | Bertl | and with previous I mean the one right before not 16 values before
| |
17:26 | Bertl | this is easy to implement and will eliminate any aliasing effects
| |
17:26 | Bertl | (up to a factor of 2.0 :)
| |
17:26 | se6astian | noted
| |
17:27 | Bertl | a much better approach would be to average the 16 values
| |
17:28 | Bertl | (but also requires more computational resources)
| |
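The three strategies discussed above (naive decimation, carry-the-previous-value-on-zero, and averaging each group of 16) can be sketched in plain Python. Function names are mine, and the real cmv_hist3 output format may differ:

```python
# Folding a 4096-bin histogram down to 256 bins. Taking every 16th
# value aliases against the 'missing slot' pattern; the other two
# approaches (Bertl's suggestions) do not.

def decimate(hist, step=16):
    """Naive: every 16th bin -- aliases badly against missing slots."""
    return hist[::step]

def fill_zeros(hist):
    """Cheap fix: replace a 0 with the value right before it."""
    out, prev = [], 0
    for v in hist:
        prev = v if v else prev
        out.append(prev)
    return out

def downsample_avg(hist, step=16):
    """Better: average each 16-bin group (costs a bit more compute)."""
    return [sum(hist[i:i + step]) / step
            for i in range(0, len(hist), step)]
```

Applying `fill_zeros` before decimating, or using `downsample_avg` directly, removes the aliasing (up to the factor-of-2 caveat Bertl mentions).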
17:57 | Bertl | off for a nap .. bbl
| |
18:04 | norpole | joined the channel | |
18:16 | troy_s | Bertl: Just as an FYI, the red and green data channels still have odd wow curves in them.
| |
18:16 | troy_s | Bertl: Near the top.
| |
18:16 | troy_s | At about 90% now. Blue's data is fine from what I can see.
| |
18:21 | troy_s | (Oddly, blue's data looks pretty uniform.)
| |
18:22 | niemand | joined the channel | |
18:49 | intracube | left the channel | |
18:51 | troy_s | left the channel | |
18:51 | troy_s | joined the channel | |
19:02 | rexbron_ | left the channel | |
19:02 | rexbron | joined the channel | |
19:02 | rexbron | left the channel | |
19:02 | rexbron | joined the channel | |
19:08 | troy_s | Bertl / se6astian - Ran all the IT8s. Similar DE2000s.
| |
19:08 | troy_s | Progress.
| |
19:08 | troy_s | Shapers and matrices hold up well on the lower exposed images as well.
| |
19:08 | troy_s | Net sum positive.
| |
19:09 | troy_s | Streaking is a nightmare, and likely hurting us quite a bit from what I can see.
| |
19:09 | troy_s | And we also need to truncate the clip region a little more.
| |
19:09 | troy_s | Other than that, much improved from DEs of 30+.
| |
19:10 | troy_s | (Average DE on the IT8s in the 34ms is about 3.8)
| |
19:10 | se6astian | interesting, I thought the streaking was already gone with ./linear_conf.sh 1.07 0, at 1.3 I did not expect anything to still remain in the image
| |
19:10 | se6astian | great
| |
19:10 | troy_s | (With maximum being the rather ugly 11.5, which is again our linearity and streaking issue.)
| |
19:10 | troy_s | se6astian: The streaking is exacerbated on the proper transform.
| |
19:10 | troy_s | Let me push an sRGB version of the raws.
| |
19:10 | troy_s | Hold.
| |
19:10 | se6astian | so can you output a matrix from your software already
| |
19:11 | troy_s | http://www.pasteall.org/pic/show.php?id=68375
| |
19:11 | troy_s | Always could.
| |
19:11 | troy_s | The issue isn't the matrix.
| |
19:11 | troy_s | The issue is the data.
| |
19:12 | troy_s | (As I tried to make clear - the thing killing us is the fact that no matter how you massage the data, the linear transforms don't fit - which tells us the data is corrupted in areas)
| |
19:12 | troy_s | (Of course we could _selectively_ dial in a single bit of data and make it appear 'decent' in 709, but that achieves nothing for us.)
| |
19:12 | sb0_ | left the channel | |
19:13 | troy_s | se6astian: The transform will require a shaper 1D LUT per channel to get the data values more closely linear (due to lower level hardware blah I suspect) and then a matrix to transform the camera's unique colorspace into XYZ.
| |
19:13 | troy_s | (3x3)
| |
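The shaper-LUT-then-matrix transform troy_s outlines might look like the sketch below. The LUTs and matrix here are identity placeholders, since the real ones come out of the IT8 profiling; the function names are illustrative only:

```python
import numpy as np

# Sketch of the transform described above: a per-channel 1D shaper LUT
# to linearize each channel, then a single 3x3 matrix from the camera's
# native space to XYZ.

def apply_shaper(channel, lut):
    """Linearize one channel (values in 0..1) via a 1D LUT."""
    idx = np.linspace(0.0, 1.0, len(lut))
    return np.interp(channel, idx, lut)

def camera_to_xyz(rgb, luts, matrix):
    """rgb: (..., 3) array; luts: three 1D shaper LUTs; matrix: 3x3."""
    shaped = np.stack([apply_shaper(rgb[..., c], luts[c])
                       for c in range(3)], axis=-1)
    return shaped @ matrix.T

# Identity placeholders -- the profiled LUTs/matrix would go here
luts = [np.linspace(0.0, 1.0, 1024)] * 3
M = np.eye(3)
```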
19:13 | troy_s | se6astian: See the streaking?
| |
19:14 | troy_s | se6astian: The variation in the streaking is, if our experiments tell us anything, a byproduct of the non-linearity / sensor weirdness at around 90% now, in particular in the G and R data pits.
| |
19:15 | se6astian | I see it
| |
19:15 | troy_s | se6astian: But the streaking is in fact still your fault. :P
| |
19:15 | troy_s | LOL
| |
19:15 | se6astian | very nasty
| |
19:15 | troy_s | se6astian: In the end, the screwed up sensor data is causing it to be seen more nastily than in the less obvious data dump to sRGB views you probably look at it in.
| |
19:16 | se6astian | part of it could be gone with proper RCN correction calibration
| |
19:16 | troy_s | se6astian: And I'm quite sure that if we clipscale the data a little more, the streaking will be toned down due to our shaper / matrix being more accurate.
| |
19:16 | troy_s | se6astian: Yep.
| |
19:16 | troy_s | se6astian: My point is that we should work on tuning that out.
| |
19:16 | troy_s | se6astian: As it is win win - we dial it out, the profiling will be more accurate _and_ the images will look better by default.
| |
19:16 | troy_s | se6astian: Which isn't bad for an alpha alpha.
| |
19:16 | se6astian | ok, what should I do next?
| |
19:16 | se6astian | higher linearization values?
| |
19:17 | troy_s | se6astian: Contextual goal. My personal opinion is that most folks interested in the Axiom want to see images, and so I'd lean toward two things:
| |
19:17 | troy_s | 1) Refine the clip and scale region. I _think_ we should clip and scale to about 90% of where we currently are.
| |
19:17 | troy_s | (Can we clip and scale individual channels?)
| |
19:17 | troy_s | (Because I'd love to leave the blue as it is, and clip scale the weirdness off the green and red)
| |
19:18 | troy_s | 2) Reduce the RCN.
| |
19:18 | troy_s | I'm sure Bertl has some ideas on how to dial in the RCN values and adjust the clip scale.
| |
19:19 | troy_s | se6astian: But I'd say the data is now probably 80%-85% of the way to a useful 709 shaper / matrix.
| |
19:19 | se6astian | sounds good already
| |
19:19 | troy_s | se6astian: Has it alleviated your concerns about the approach?
| |
19:20 | troy_s | se6astian: I know you appeared somewhat worried about the transforms. Hopefully you can see how the data has been / still is a little pooched.
| |
19:20 | troy_s | (and as a result is sort of hindering our ability to get to a pleasant 709 (or even 2020 if you want me to craft a set))
| |
19:20 | se6astian | yes its great to see we identified the problem now
| |
19:21 | troy_s | se6astian: The beauty of the charts is that it tells us a good deal about how that sensor is behaving (in this case misbehaving)
| |
19:21 | troy_s | I'd love to see Cmosis explain or fix the blasted 12th bit errors.
| |
19:21 | troy_s | (or 10.5 bits even)
| |
19:21 | se6astian | we should write an article about it ;)
| |
19:22 | troy_s | Had I had more time I'd have finished the Apertus lab with VNG.
| |
19:22 | se6astian | after all we are in the unique position that we can review our own product
| |
19:22 | troy_s | But alas, too many balls juggling. I have zooming and scrobbling in place. And the raw file loading. I've been manually rewriting TingChen's VNG though.
| |
19:23 | troy_s | se6astian: I think if we can fix the RCN and tweak the clipscale a little further, you can probably get to decent videos.
| |
19:23 | troy_s | se6astian: Also note that I have _no_ idea how your x264 or whatever motion footage is being scaled.
| |
19:23 | troy_s | se6astian: Be careful on YouTube pushes. The broadcast scales can _greatly_ screw your imaging.
| |
19:23 | troy_s | se6astian: Are you familiar with the broadcast scale issues?
| |
19:25 | se6astian | you mean 0..255 vs 16..235 ranges?
| |
19:25 | troy_s | Yes.
| |
19:25 | troy_s | And it almost _always_ gets pooched by folks not looking for it.
| |
19:25 | troy_s | You end up with four nasty cases.
| |
19:26 | troy_s | Luma needs scaling to 16-235, and chroma rests at 16-240
| |
19:26 | troy_s | So there are two pinch points -
| |
19:26 | troy_s | when encoding, if you encode full range data (1-254 in 'by the book' reference 'full range')
| |
19:26 | troy_s | and the player doesn't read it as such
| |
19:26 | troy_s | it will take the 16 value and map it to 0
| |
19:27 | troy_s | and vice versa for the 235/240 range, which ultimately stretches your image into a greater contrasty image.
| |
19:27 | se6astian | The files I uploaded were written in DNxHD with RGB Levels (0-255), do you suspect youtube treats that wrongly?
| |
19:27 | troy_s | Alternatively, if you encode to 16-235/240 (by the book) and the player doesn't read it correctly
| |
19:27 | troy_s | Yep!
| |
19:27 | troy_s | The player may play back at native, which lifts the black from where it should be (0) to 16
| |
19:28 | troy_s | and crimps the white at 235/240
| |
19:28 | troy_s | (greyish)
| |
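The two pinch points can be written out as the standard 'by the book' 8-bit luma scalings (16..235 for luma); the prints only illustrate the failure modes described above:

```python
# Broadcast ('limited') range scaling for 8-bit luma: full range
# 0..255 maps to 16..235. Skipping or double-applying either
# direction produces exactly the contrast damage described above.

def full_to_limited(y):
    """Encode: full-range 0..255 -> limited-range 16..235."""
    return 16 + y * (235 - 16) / 255

def limited_to_full(y):
    """Decode: limited-range 16..235 -> full-range 0..255."""
    return (y - 16) * 255 / (235 - 16)

# A player that skips the decode leaves black lifted to 16 and
# white crimped at 235 (the greyish, low-contrast look):
print(full_to_limited(0), full_to_limited(255))   # 16.0 235.0
# A player that decodes full-range data as limited pushes code 16
# to 0 and stretches the image into extra contrast; values below 16
# go below black entirely:
print(limited_to_full(0))   # negative, i.e. clipped
```

Chroma uses the slightly different 16..240 range, which is why the two can be pooched independently.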
19:28 | troy_s | Needless to say, the net sum is that "If you don't watch for it, your contrast will be hooped, and contrast impacts perceptual sharpness"
| |
19:28 | troy_s | As in you can end up with people seeing "It doesn't look sharp!" types of things.
| |
19:28 | troy_s | When in actuality it is perceptual contrast.
| |
19:29 | troy_s | Let me dig up a good article for you if you haven't seen it... an extensive study by Zeiss.
| |
19:29 | troy_s | se6astian: Mandatory reading http://www.zeisscamera.com/doc_ResContrast.shtml
| |
19:29 | se6astian | I guess I can benchmark it by uploading some test videos to youtube and then downloading their mp4s again
| |
19:29 | troy_s | se6astian: And fascinating from an imaging standpoint.
| |
19:30 | troy_s | se6astian: Should be very easy to see...
| |
19:30 | troy_s | se6astian: My best suggestion is to put a bloop in
| |
19:30 | troy_s | se6astian: At the head or tail
| |
19:30 | troy_s | se6astian: And freeze on it in YouTube.
| |
19:30 | troy_s | (Bella Nuit's is most useful here as it has the full range)
| |
19:30 | troy_s | http://www.belle-nuit.com/test-chart
| |
19:30 | troy_s | Your blacks SHOULD be black
| |
19:31 | se6astian | the bigger problem I have is that even if I upload in DNxHD at the highest quality settings, the video on youtube has ugly compression artifacts all over the dark areas; I have seen youtube clips that looked super crisp, do they pay extra :)
| |
19:31 | troy_s | but another word of warning that not many folks pay much attention to...
| |
19:31 | troy_s | Yep.
| |
19:31 | troy_s | There's a secret there too.
| |
19:31 | troy_s | But let me get the other warning out
| |
19:31 | troy_s | (And this one will drive you nuts)
| |
19:31 | troy_s | At youtube, if you run non-full frame you can get different scaling.
| |
19:31 | troy_s | (and Vimeo is worse)
| |
19:32 | troy_s | Largely because the full screen version using Flash or some browser accel
| |
19:32 | troy_s | may or may not use the GFx card.
| |
19:32 | troy_s | and yep... the GFx card may or may not make assumptions about the YCbCr data.
| |
19:32 | troy_s | Fun eh?
| |
19:32 | troy_s | :)
| |
19:32 | troy_s | (rexbron discovered this very bug in full screen to non full screen at Vimeo IIRC)
| |
19:33 | troy_s | Anyways... put a test chart bloop in for the viewers
| |
19:33 | troy_s | So they can at least see if their end is behaving.
| |
19:33 | se6astian | I started considering hosting our own video player on the website, we would have the space and bandwidth
| |
19:33 | se6astian | at least we would get around the recompression that way
| |
19:33 | troy_s | se6astian: Probably not entirely worth it at this point. The trick is to find the secret sweet spot at YouTube so it doesn't reencode.
| |
19:33 | troy_s | (and yes, that is in fact possible)
| |
19:34 | troy_s | se6astian: Even without the compression issues, people's displays tend to be all over the damn place.
| |
19:34 | troy_s | se6astian: Remember not a month and a half ago people were actually trying to say "Gee the red is not quite red" as though the red channel in the raw data was even red.
| |
19:35 | se6astian | yes, but we can at least provide a good baseline
| |
19:35 | troy_s | se6astian: (Hard to stress that the data coming into those RGB values is _arbitrary_ to people. The issue is that the colors they see are KIND of close to a reddish, so they simply assume that there is some absolute version of an RGB model.)
| |
19:35 | troy_s | Yep.
| |
19:35 | troy_s | I prefer education myself.
| |
19:35 | troy_s | Camera data == Arbitrary. Period. Full stop. It is meaningless. If you are viewing the raw data using a viewer, what you are seeing is not color.
| |
19:36 | troy_s | Until you transform the data into a meaningful and defined color space such as sRGB/709, and view accordingly, you aren't quite seeing anything of use.
| |
19:36 | troy_s | If you snap on test charts, you can also tell almost immediately a few things:
| |
19:36 | troy_s | 1) Did the player or encoder get the color transform correct (probably not.)
| |
19:37 | troy_s | 2) What kind of chroma scaling is it doing (see the chroma patterns in the middle region. If you see blurry lines on a pattern, you can tell what version it is doing - likely 420)
| |
19:37 | troy_s | 3) Is the player / encoder properly scaling the broadcast values?
| |
19:37 | troy_s | 4) Is there odd cropping happening?
| |
19:37 | troy_s | Etc. etc.
| |
19:38 | troy_s | A bloop chart for a frame is a welcome thing. :)
| |
19:38 | norpole | left the channel | |
19:39 | troy_s | se6astian: If you want to see what I mean... http://www.youtube.com/watch?v=R6MlUcmOul8
| |
19:40 | norpole | joined the channel | |
19:41 | se6astian | got it
| |
19:42 | se6astian | gotta go afk for a bit
| |
19:42 | troy_s | se6astian: Okie. Chin up. We will get there on the color front.
| |
19:42 | troy_s | se6astian: We are about 1000000000x closer than a while ago.
| |
19:42 | se6astian | troy_s: yes, thanks for your time and efforts over such a long period
| |
19:43 | troy_s | se6astian: All good. I hope you can get the RCN and the clipscale in place sooner.
| |
19:43 | se6astian | I really appreciate all the help/input, I don't think I mentioned it yet though ;)
| |
19:43 | troy_s | se6astian: Then it will open up some other things (like creating an OCIO LUT set)
| |
19:45 | troy_s | se6astian: Hopefully you didn't read my aversion to the colorist approach as me poo pooing it. I wasn't.
| |
19:46 | troy_s | I have the utmost respect for colorists. It's just that their job is different than the task in front of us.
| |
19:46 | troy_s | The worst part is that a colorist can take just about any image and make it look good. Sadly that isn't our need at the moment. We have to smooth that data out so that we get consistent results that a static shaper LUT and matrix can transform reliably in all situations.
| |
19:46 | troy_s | Anyways... off for a while.
| |
19:49 | niemand | left the channel | |
20:18 | troy_s | left the channel | |
20:22 | troy_s | joined the channel | |
21:43 | troy_s | left the channel | |
21:43 | norpole | left the channel | |
21:43 | rexbron | left the channel | |
21:43 | Lunastyx | left the channel | |
21:44 | rexbron | joined the channel | |
21:44 | Lunastyx | joined the channel | |
21:56 | troy_s | joined the channel | |
22:10 | se6astian | time for bed
| |
22:10 | se6astian | good night
| |
22:10 | se6astian | left the channel |