22:00 | [1]Sasha | Bertl, I'm trying to send you a document with the compilation of Andrey's comments re:AXIOM
| |
22:01 | [1]Sasha | I'm at work at the moment, and am unable to copy and paste all of this into a Google Doc.
| |
22:01 | [1]Sasha | I'll do this later today
| |
22:02 | [1]Sasha | Please confirm when you have received the PDF document
| |
22:14 | [1]Sasha | Bertl, I'm using HYDRA irc (due to windows 7 computers at work) and the file negotiation (for sending to you) keeps failing. I'll email you what I've compiled so far
| |
22:19 | Bertl | yes, please email
| |
00:43 | troy_s | joined the channel | |
00:55 | Bertl | wb troy_s!
| |
01:24 | troy_s | Bertl: Damn Quassel.
| |
01:25 | troy_s | Bertl: First answer, a Bradford weighted 3x3 transform works for WB.
| |
01:26 | troy_s | Bertl: Second question, any TRC is used to take things into the perceptual or display / device referred domain. All color ops should be on real world linear as a general rule.
| |
01:27 | Bertl | so the gamma correction happens after that?
| |
01:27 | troy_s | Bertl: If you are familiar with the classic painting fringes in an app like Photoshop, you can see why. Blending etc needs to happen on a luminance accurate model, and any TRC converts it to luma (Y vs Y')
| |
01:27 | troy_s | Yes.
| |
01:28 | Bertl | okay, good
| |
01:29 | troy_s | Bertl: See http://ninedegreesbelow.com/photography/linear-gamma-blur-normal-blend.html for the blending issue
| |
01:29 | troy_s | (the issue is that by bending the colors for display / perception, you are cheating the luminance levels and the mixes are mathematically incorrect)
| |
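[Editor's note: the blending issue troy_s links to can be sketched in a few lines. This is an illustrative Python sketch, not from the discussion; it uses the standard sRGB two-part transfer function to show why averaging display-encoded values gives the wrong mix.]

```python
def srgb_encode(x):
    # sRGB two-part transfer function (linear toe + power segment)
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(v):
    # Inverse of srgb_encode
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

black, white = 0.0, 1.0  # linear-light values

# Incorrect: average the display-encoded values directly
naive = (srgb_encode(black) + srgb_encode(white)) / 2   # 0.5 encoded

# Correct: blend in linear light first, then encode for display
linear = srgb_encode((black + white) / 2)               # ~0.735 encoded
```

The two results differ noticeably: blending the encoded values "cheats" the luminance levels exactly as described above.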
01:30 | troy_s | Bertl: So I believe all Bradford etc. manipulations (including all transform between RGB color spaces) must be on linearized data.
| |
01:31 | rexbron | Bertl: The ADC data should be linear no?
| |
01:31 | Bertl | yes, at least for the CMV12K sensor it should and seems to be
| |
01:32 | troy_s | Bertl: And the vendor confirmed. :)
| |
01:32 | Bertl | that was the 'should' part :)
| |
01:33 | troy_s | Bertl: General order of events: 1) Invert TRC. 2) Transform to XYZ. 3) Align white points (if different) 4) Transform. 5) Transform to RGB. 6) Apply TRC.
| |
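[Editor's note: a minimal sketch of that order of events. The gamma-2.2 TRCs and the sRGB/D65 matrices below are placeholder assumptions purely for illustration; a real pipeline would substitute the profiled sensor matrices and curves.]

```python
import numpy as np

# Placeholder matrices: sRGB (D65) <-> XYZ, for illustration only
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

def decode_trc(rgb):            # 1) invert the TRC (simple gamma assumed)
    return rgb ** 2.2

def encode_trc(rgb):            # 6) re-apply a TRC for the destination
    return rgb ** (1 / 2.2)

def convert(rgb, adapt=np.eye(3)):
    lin = decode_trc(rgb)                  # 1) to linear light
    xyz = RGB_TO_XYZ @ lin                 # 2) to XYZ
    xyz = adapt @ xyz                      # 3+4) white point adaptation
    out = XYZ_TO_RGB @ xyz                 # 5) to destination RGB
    return encode_trc(out.clip(0, 1))      # 6) apply destination TRC
```

With identical source and destination spaces and an identity adaptation, the conversion round-trips, which is a quick sanity check on the ordering.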
01:33 | troy_s | gotcha
| |
01:34 | troy_s | And I believe all of the transforms for white and different primaries can be distilled into one 3x3 transform
| |
01:35 | troy_s | but setting white point should be a simple 3x3 Bradford with the primaries of the sensor known and blended into that matrix.
| |
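[Editor's note: the Bradford adaptation mentioned throughout boils down to a scaling in cone (LMS) space. A sketch, with the usual published Bradford matrix and approximate white point values:]

```python
import numpy as np

# Bradford cone response matrix (standard published values)
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def bradford_adaptation(src_white_xyz, dst_white_xyz):
    """3x3 matrix adapting XYZ values from src white to dst white."""
    lms_src = BRADFORD @ src_white_xyz       # whites into cone space
    lms_dst = BRADFORD @ dst_white_xyz
    scale = np.diag(lms_dst / lms_src)       # per-cone von Kries scaling
    return np.linalg.inv(BRADFORD) @ scale @ BRADFORD

# Example: D50 -> D65 (approximate white points, Y normalized to 1)
D50 = np.array([0.9642, 1.0000, 0.8249])
D65 = np.array([0.9504, 1.0000, 1.0888])
M = bradford_adaptation(D50, D65)            # maps D50 white onto D65 white
```

By construction the source white lands exactly on the destination white; everything else is pulled along in cone space, which tracks the eye's Long/Medium/Short response better than scaling raw XYZ.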
01:35 | Bertl | so the 'invert TRC' part is a 1:1 mapping in the CMV12K case?
| |
01:36 | troy_s | Bertl: Currently there is no TRC, so that would be something you might add for grading purposes - starting at a Cineon Log for example. Or Josh Pines... a sort of base starting point for those graders who prefer a curved start.
| |
01:37 | troy_s | it would be whatever was chosen to bake into the raw DPXs etc.
| |
01:37 | Bertl | DPX?
| |
01:38 | troy_s | But on camera, some basic WB functionality wouldn't need to consider that. It could even likely be merely stored in metadata and interpreted by software side.
| |
01:38 | troy_s | (little like Canon does it)
| |
01:38 | rexbron | troy_s: no dpx please ;)
| |
01:38 | troy_s | rexbron: Sorry but DNG stinks.
| |
01:38 | rexbron | troy_s: explain why you think that
| |
01:38 | troy_s | rexbron: Simply zero uptake and not as versatile.
| |
01:38 | troy_s | fair?
| |
01:39 | Bertl | so let's adjust the order of events to 'our' case
| |
01:39 | rexbron | troy_s: nope
| |
01:39 | Bertl | i.e. we start with the 'linear (really) raw data' from the sensor
| |
01:39 | rexbron | troy_s: dpx is not designed around linear raw data, it's designed for printing density from film scans
| |
01:39 | troy_s | rexbron: And you also need a way to specify a TRC and DPX does that well. Exactly my point.
| |
01:39 | dmj_nova | Yeah, I'd suggest doing the in camera color as metadata
| |
01:41 | rexbron | troy_s: also cDNG has really good uptake. Resolve, Premiere Pro, After Effects and Nuke I believe
| |
01:41 | rexbron | troy_s: please de-acronym TRC
| |
01:41 | troy_s | rexbron: My primary gripe is really implementation. The lack of penetration and a go-to library doesn't help.
| |
01:41 | troy_s | rexbron: Tone. Response. Curve.
| |
01:41 | rexbron | troy_s: on the foss side? It's tiff
| |
01:41 | rexbron | it's tiff plus extra metadata
| |
01:41 | troy_s | Sometimes gamma, sometimes a two part algo like 709 or sRGB
| |
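[Editor's note: the "two part" curves mentioned here splice a linear toe onto a power segment. The Rec.709 OETF, with constants from BT.709, as a one-function sketch:]

```python
def rec709_oetf(l):
    # BT.709: linear segment below 0.018, power segment above
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099
```

The two segments meet near V = 0.081 at L = 0.018, and the curve maps linear 1.0 to encoded 1.0.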
01:42 | rexbron | troy_s: thanks for the clarity
| |
01:42 | rexbron | troy_s: as for a go-to library, dcraw supports it iirc
| |
01:42 | Bertl | okay, ping me when we are back to our case :)
| |
01:42 | troy_s | rexbron: As an aside... I am not exactly opposed to DNG.
| |
01:43 | rexbron | Bertl: pardon us while we wage a crusade
| |
01:43 | troy_s | Bertl: Probably metadata if we can find a de facto example existing.
| |
01:43 | Bertl | yeah, no problem ... go ahead, have fun :)
| |
01:44 | troy_s | rexbron: You aware of any app that interprets the DNG metadata for WB and applies a transform?
| |
01:44 | troy_s | rexbron: Resolve perhaps?
| |
01:45 | rexbron | troy_s: Resolve can definitely do that. I'm pretty certain any app that uses DCraw supports it as well
| |
01:45 | rexbron | the cinema part of the dng file spec is timecode and reel numbers
| |
01:46 | rexbron | it's why you can use adobe lightroom with BMCC footage
| |
01:46 | troy_s | Bertl: You are considering a reference decoder for the files yes?
| |
01:46 | troy_s | rexbron: God that sounds joyful - NOT. LOL.
| |
01:46 | rexbron | DCraw may assume an sRGB display-referred space however
| |
01:47 | Bertl | no, I'm currently interested in processing the sensor data and generating HDMI output :)
| |
01:47 | rexbron | Bertl: ah, ok
| |
01:47 | troy_s | rexbron: Not many are scene referred. Hell not Resolve methinks as it doesn't really support CDL aside from ingestion right?
| |
01:47 | troy_s | Bertl: Stuff HDMI.
| |
01:47 | rexbron | HD-SDI ftw lol
| |
01:47 | rexbron | troy_s: Let me check
| |
01:47 | troy_s | Bertl: Was more speaking of a reference bit of kit to deal with the files.
| |
01:48 | troy_s | (likely glued together OIIO / OCIO. Not too much needed really)
| |
01:48 | Bertl | well, HDMI with a weird 4:2:2 RGB mode is all we have on the axiom alpha prototype
| |
01:48 | rexbron | troy_s: Resolve can import and export ASC CDL
| |
01:48 | troy_s | Bertl: Fair. After all any Bayer sensor is 4:2:2 native.
| |
01:48 | Bertl | I'm well aware that this is not what we want in the camera, but this is what we got ATM
| |
01:49 | troy_s | rexbron: No interface for CDL. Was always LGG, no?
| |
01:49 | troy_s | Bertl: I am not that obsessed.
| |
01:49 | troy_s | Bertl: The sensor is 4:2:2
| |
01:49 | rexbron | troy_s: CDL was only ever meant as an interchange format, not as grading tools. The spec explicitly mentions so
| |
01:49 | troy_s | Obviously for post processing it is meh, but I would think that is all after a dum
| |
01:49 | troy_s | dump even
| |
01:50 | rexbron | troy_s: CDL also assumes display-referred rec709
| |
01:50 | troy_s | rexbron: I seem to recall that CDL was designed to be agnostic - either DR or SR.
| |
01:50 | troy_s | But I may need to read the spec again.
| |
01:50 | rexbron | nope, the SOP operation specifies rec709 luma coefficients
| |
01:50 | rexbron | err
| |
01:51 | rexbron | SAT rather
| |
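[Editor's note: the saturation operator being discussed is a simple lerp toward luma; a sketch using the Rec.709 luma weights rexbron mentions the spec names:]

```python
REC709_LUMA = (0.2126, 0.7152, 0.0722)

def cdl_sat(rgb, sat):
    # CDL-style saturation: move each channel toward/away from luma
    luma = sum(w * c for w, c in zip(REC709_LUMA, rgb))
    return tuple(luma + sat * (c - luma) for c in rgb)
```

sat = 0 collapses everything to the (709-weighted) luma gray, sat = 1 is the identity, and values above 1 push channels away from gray.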
01:51 | troy_s | How odd.
| |
01:51 | rexbron | at least in the last version of the spec I read
| |
01:51 | troy_s | Although I think that makes sense actually
| |
01:51 | rexbron | it's kinda been superseded by ACES
| |
01:51 | troy_s | Because you sort of need a white calc for sat.
| |
01:51 | rexbron | yup
| |
01:52 | troy_s | Bertl: How goes engineering?
| |
01:52 | rexbron | Bertl: you can get away with really, really bad debayer algos for monitoring
| |
01:52 | Bertl | good but slow for my taste, but it is rather complicated
| |
01:52 | troy_s | Bertl: I read your point about doing the calculations on latitude and gamut sooner than later and I totally agree... wiser to test the sensor earlier and see if it meets the needs.
| |
01:53 | Bertl | so my idea of post processing atm looks like this:
| |
01:53 | Bertl | (in prototype that is)
| |
01:53 | rexbron | Bertl, troy_s: keep in mind that Aaton was bankrupted and Blackmagic Design is pulling its hair out over differences in the chips when moving to production scale fabs
| |
01:54 | troy_s | rexbron: Are there specs in place for raw Bayered images in DNG?
| |
01:54 | Bertl | run the r,g,b data from the sensor through a generic matrix (if required) and through a lookup table per channel
| |
01:54 | troy_s | Bertl: Hrm... why?
| |
01:54 | rexbron | troy_s: that is what DNG is. It's TIFF for bayer raw images
| |
01:55 | troy_s | (serious question - totally needed for an onboard output of course)
| |
01:55 | Bertl | because HDMI is probably not happy with the linear color data
| |
01:55 | rexbron | Bertl: you could throw it down the pipe, it'd look like shit
| |
01:55 | troy_s | Bertl: Gotcha. I see the point. Should be a simple matrix (single) with white point transform via Bradford (two matrices boiled into one)
| |
01:56 | Bertl | rexbron: yeah, but that's probably not what we want for demo footage :)
| |
01:56 | troy_s | LOL
| |
01:56 | troy_s | (3x3 matrix)
| |
01:56 | Bertl | we probably need a lookup for gamma correction as well
| |
01:57 | Bertl | the matrix cannot do non linear stuff
| |
01:57 | troy_s | Bertl: It would look like this:
| |
01:57 | troy_s | (assuming we profile the sensor to a simple quickie matrix)
| |
01:57 | troy_s | 1) RGB to XYZ via custom matrix
| |
01:58 | troy_s | 2) White point shift
| |
01:58 | troy_s | 3) XYZ to 709
| |
01:58 | troy_s | 4) 1D LUT for 709 TRC
| |
01:58 | troy_s | 5) HDMI
| |
01:58 | rexbron | yup, sounds right
| |
01:59 | Bertl | okay, the LUT is what I meant, and do we need it per channel or just a single one for all channels (R,G,B)?
| |
01:59 | Bertl | the 1,2,3 points should fold into a single matrix, I guess
| |
01:59 | rexbron | TRC would apply to all channels
| |
02:00 | troy_s | (well 1 and 2 are actually a single warp)
| |
02:00 | rexbron | Bertl: from what I understand, those matrices are created under CIE illuminant A and CIE D65 and anything in between has been interpolated
| |
02:00 | troy_s | Bertl: You can collapse all of it into one matrix IIRC, minus the TRC
| |
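[Editor's note: folding steps 1-3 into one matrix is a one-time multiply. The sensor matrix below is invented for illustration (a real one comes from profiling) and the Bradford adaptation is left as identity:]

```python
import numpy as np

# Hypothetical sensor->XYZ matrix, for illustration only
SENSOR_TO_XYZ = np.array([[0.45, 0.35, 0.18],
                          [0.22, 0.72, 0.06],
                          [0.02, 0.12, 0.95]])
ADAPT = np.eye(3)                       # Bradford adaptation (identity here)
XYZ_TO_709 = np.array([[ 3.2405, -1.5371, -0.4985],
                       [-0.9693,  1.8760,  0.0416],
                       [ 0.0556, -0.2040,  1.0572]])

# Steps 1-3 fold into one 3x3 computed once; only the TRC stays a LUT
COMBINED = XYZ_TO_709 @ ADAPT @ SENSOR_TO_XYZ

rgb = np.array([0.2, 0.5, 0.8])
step_by_step = XYZ_TO_709 @ (ADAPT @ (SENSOR_TO_XYZ @ rgb))
```

Applying the pre-multiplied matrix gives the same result as the three separate steps, which is why the hardware only ever needs one 3x3 plus the per-channel LUT.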
02:00 | Bertl | and I think the best way to implement the matrix part is a 9-way lookup, mul and add
| |
02:01 | troy_s | rexbron: ICCs are D50 PCS. But sort of moot here.
| |
02:01 | troy_s | Bertl: That is the sound of higher thinking than mine shooting right over my head.
| |
02:02 | troy_s | Bertl: You _could_ use a 3D LUT but the only way to have a fully flexible white point setting is via a Bradford
| |
02:02 | Bertl | well, look at it like this:
| |
02:03 | Bertl | [[ a0 a1 a2 ] [ b0 b1 b2 ] [ c0 c1 c2 ]] * [x y z]
| |
02:04 | Bertl | this results in a0*x + a1*y + a2*z, b0*x + b1*y + b2*z ...
| |
02:04 | Bertl | now the a0*x can also be written as LUT_a0(x)
| |
02:04 | Bertl | similar for a1*y ... b0*x ...
| |
02:05 | Bertl | which basically reduces the matrix operation to nine LUTs and a three way add of the LUT results
| |
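[Editor's note: Bertl's nine-LUT decomposition, sketched with an invented 3x3 matrix and an 8-bit input range:]

```python
import numpy as np

M = np.array([[0.9,  0.1,  0.0],
              [0.05, 0.9,  0.05],
              [0.0,  0.1,  0.9]])   # hypothetical 3x3 color matrix

# One LUT per coefficient: luts[i][j][x] == M[i][j] * x for 8-bit x
luts = [[M[i, j] * np.arange(256) for j in range(3)] for i in range(3)]

def apply_matrix_via_luts(r, g, b):
    # Nine lookups plus a three-way add per pixel; no multiplies at runtime
    return tuple(luts[i][0][r] + luts[i][1][g] + luts[i][2][b]
                 for i in range(3))
```

Each output channel is recovered exactly as in the direct matrix multiply, and since the LUT contents are just precomputed products, userspace can refill them at any time for a different matrix.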
02:05 | troy_s | gotcha
| |
02:06 | Bertl | that's something we can do relatively simply
| |
02:06 | troy_s | Bertl: Bradford transform should be simple for you to figure out... it is sort of a transform to yet another space actually
| |
02:06 | troy_s | LMS
| |
02:06 | Bertl | and the LUT values can be calculated from linux userspace
| |
02:07 | troy_s | which is designed to make the values more closely align to our Long, Medium, and Short wavelength cone structures (and why it is more accurate than a simple XYZ scale)
| |
02:07 | Bertl | (which allows for some flexibility)
| |
02:12 | troy_s | Interesting
| |
02:13 | troy_s | Bertl: So what is your current engineering roadblock?
| |
02:13 | rexbron | Bertl: so the FPGA is running a linux kernel? huh
| |
02:13 | Bertl | yes, the zynq features two hardened arm cores
| |
02:14 | rexbron | the Kiniraw also runs a linux kernel but getting a Chinese manufacturer to comply with the GPL might be hard
| |
02:14 | Bertl | we have our own kernel and userspace running there :)
| |
02:14 | rexbron | also, the BMCC uses a ton of foss. The Readme shipped with the firmware has links to the source. It runs FreeRTOS
| |
02:15 | troy_s | Was there a pissing match between Axiom and Elphel?
| |
02:15 | Bertl | troy_s: well, I'm currently (re)arranging the memory read to work around xilinx bugs :)
| |
02:15 | rexbron | fun...
| |
02:15 | Bertl | not that'd I know of :)
| |
02:15 | Bertl | (the pissing contest :)
| |
02:16 | troy_s | How long until I can slap a PL lens on it and shoot something? ;)
| |
02:17 | Bertl | PL is the acronym I use for the FPGA part (contrasting PS for the ARM cores :)
| |
02:17 | Bertl | well, we can do single snapshots ATM, that works quite well and reliably
| |
02:18 | Bertl | we could actually stream full 4k at 25 or 30 frames into memory already
| |
02:18 | Bertl | that works fine as well, just getting it out of the prototype is the problem
| |
02:19 | troy_s | Wow. Impressive.
| |
02:20 | troy_s | What mount did Axiom arrive at for lenses?
| |
02:20 | Bertl | we currently use the nikon-f lens mount adapter
| |
02:21 | Bertl | https://www.apertus.org/sites/default/files/alpha-lensmount01.jpg
| |
02:22 | Bertl | https://www.apertus.org/sites/default/files/alpha-lensmount02.jpg
| |
02:34 | [1]Sasha | troy_s: I believe the plan for lenses will eventually involve using the P+S Technik Interchangeable Mount System: http://www.pstechnik.de/en/optics-ims.php
| |
02:41 | FergusL | I too think this solution was kept, yes
| |
02:43 | troy_s | Seems silly
| |
02:43 | troy_s | Reference standard is PL, and anyone aspiring to create high quality images will be borrowing PL mounted lenses.
| |
02:43 | troy_s | (example - borrowing a set of Master Primes or Cookes. )
| |
02:45 | Bertl | well, as soon as there is an open PL lens mount, it can be used
| |
02:46 | Bertl | (don't see a real problem there, for now it is the nikon f-mount because the adapter for that one was already done)
| |
02:51 | troy_s | Bertl: I only said that it seemed silly because it lacked the context of who this whole project is aimed at.
| |
02:52 | troy_s | Bertl: It is all well and good to support standards, but the bottom line is that it is about creation, and that means the context of existing tools / devices / hardware cannot be overlooked.
| |
02:53 | troy_s | That means that for lower end, the Nikon makes perfectly good sense. As soon as you are talking low budget though, you are also likely talking pulling favors including borrowing lenses, which is very common.
| |
02:53 | troy_s | And every lens you will borrow will be PL.
| |
02:54 | Bertl | well, I haven't been around when the nikon f-mount was created, and I guess you haven't either
| |
02:54 | Bertl | it is available right now, even as physical prototype, so it is used :)
| |
02:55 | Bertl | as I said, if you want to see the PL-mount for the prototype, you just need to find somebody to do the 3D design for it (and figure out how to get the mechanical part working)
| |
02:55 | Bertl | the milling itself is not that complicated and not very expensive (at least not for the F-mount)
| |
02:56 | Bertl | don't forget, we are working on the prototype for free
| |
03:08 | troy_s | mechanical?
| |
03:08 | troy_s | It is a pressure slip
| |
03:09 | troy_s | There must be existing mount designs for PL. Would need to look.
| |
03:09 | Bertl | needs some kind of pressure, usually applied by a spring :)
| |
03:10 | Bertl | you can even use the 3D model for the F-mount and simply adapt it
| |
03:10 | Bertl | i.e. the tube and bottom part should be quite similar, maybe the tube length needs adjusting but that's probably it
| |
03:11 | troy_s | http://www.cinematography.com/index.php?showtopic=53178
| |
03:11 | troy_s | I think it is dovetail mechanics
| |
03:11 | troy_s | (as in wedge)
| |
03:11 | troy_s | very low tech
| |
03:11 | Bertl | nevertheless, needs to come from somewhere
| |
03:12 | Bertl | for example, we use the mechanical locking part from a camera body (for the prototype)
| |
03:14 | troy_s | tricky business
| |
03:14 | troy_s | shimming will be a nightmare
| |
03:17 | Bertl | shimming is adding a spacer to change the focal point?
| |
03:17 | troy_s | Yes. To get backfocus to align infinity.
| |
03:17 | troy_s | It is in that forum post.
| |
03:17 | Bertl | well, that's actually trivial, as we can cut the spacer with a laser cutter
| |
03:18 | Bertl | (and put it between the F-mount bayonet part and the actual lens mount tube
| |
03:18 | Bertl | )
| |
03:18 | Bertl | if there is a real need for that with the prototype, that is
| |
06:01 | [1]Sasha | left the channel | |
06:13 | Bertl | off to bed now ... have a good one everyone!
| |
07:21 | se6astian | joined the channel | |
07:44 | se6astian | https://www.apertus.org/apertus-now-accepts-bitcoins released and posted
| |
11:28 | Sasha_C | joined the channel | |
11:47 | Sasha_C | left the channel | |
11:59 | Bertl | morning everyone!
| |
11:59 | dmj_nova | left the channel | |
12:04 | se6astian | hello!
| |
12:33 | [1]se6astian | joined the channel | |
12:36 | se6astian | left the channel | |
12:36 | [1]se6astian | changed nick to: se6astian
| |
15:39 | se6astian | left the channel | |
17:33 | dmj_nova | joined the channel | |
17:34 | Bertl | wb dmj_nova!
| |
17:36 | dmj_nova | hi Bertl
| |
17:36 | dmj_nova | I miss anything?
| |
17:43 | Bertl | probably not
| |
18:50 | troy_s | left the channel | |
18:50 | rexbron | left the channel | |
18:50 | rexbron_ | joined the channel | |
18:50 | troy_s_ | joined the channel | |
19:37 | se6astian | joined the channel | |
19:38 | se6astian | evening
| |
19:38 | Bertl | evening se6astian!
| |
19:38 | mars_ | hi
| |
20:01 | Bertl | hey mars_! how's going?
| |
20:06 | mars_ | good, good
| |
20:08 | mars_ | but it was a long day and so far away from home
| |
20:08 | mars_ | ;)
| |
20:09 | Bertl | oww :)
| |
20:10 | mars_ | I'll be back on Sunday
| |
20:37 | mikkael | joined the channel | |
20:37 | Bertl | wb mikkael!
| |
20:48 | se6astian | oh hello, so many new people recently
| |
21:16 | mikkael | Hello
| |
21:43 | se6astian | sorry gotta go, time for bed
| |
21:43 | se6astian | will be online most of tomorrow
| |
21:43 | se6astian | time to catch up then
| |
21:43 | se6astian | bye!
| |
21:43 | se6astian | left the channel | |
21:54 | mikkael | left the channel |