01:12 | jerknextdoor | left the channel | |
03:27 | gwelkind | joined the channel | |
04:00 | jucar1 | left the channel | |
04:06 | jucar | joined the channel | |
04:17 | gwelkind | left the channel | |
05:36 | gwelkind | joined the channel | |
06:03 | gwelkind | left the channel | |
06:22 | sasha_c | joined the channel | |
06:58 | Bertl | morning folks!
| |
06:58 | sasha_c | Morning Bertl :)
| |
07:11 | troy_s | left the channel | |
07:11 | sasha_c | left the channel | |
08:00 | troy_s | joined the channel | |
08:12 | troy_s | left the channel | |
08:12 | troy_s | joined the channel | |
09:23 | S3bastian | joined the channel | |
09:29 | sasha_c | joined the channel | |
09:31 | S3bastian | left the channel | |
09:35 | se6astian | joined the channel | |
10:01 | ramsch | joined the channel | |
10:01 | ramsch | hoi
| |
10:01 | ramsch | http://www.hdmi.org/manufacturer/specification.aspx
| |
10:03 | ramsch | left the channel | |
10:42 | Bertl | off for now ... bbl
| |
10:56 | rexbron | left the channel | |
10:56 | rexbron | joined the channel | |
10:56 | rexbron | left the channel | |
10:56 | rexbron | joined the channel | |
11:18 | se6astian | left the channel | |
11:20 | se6astian | joined the channel | |
12:37 | sasha_c | left the channel | |
12:38 | intracube | joined the channel | |
14:47 | danieel | se6astian: who was the DP for that shot?
| |
14:47 | danieel | would prefer a longer exposure time in some scenes, and a more closed aperture in others
| |
14:48 | danieel | and the flashing LEDs are funny :)
| |
14:50 | danieel | oh, and a better LUT - the shadows lack detail
| |
14:55 | Bertl | I think both matrix and LUTs are set to default
| |
14:55 | Bertl | i.e. no change to the sensor input per se
| |
15:03 | se6astian | time to go home
| |
15:03 | se6astian | see you
| |
15:03 | se6astian | left the channel | |
15:04 | danieel | that would explain the look
| |
15:18 | ApertusWeb7 | joined the channel | |
15:19 | ApertusWeb7 | hi - anyone here?
| |
15:21 | ApertusWeb7 | Hi, I am Edmund Ronald. I am an imaging consultant, and would like to work a bit with the Apertus proto with a view to doing still images
| |
15:21 | Bertl | welcome ApertusWeb7!
| |
15:21 | ApertusWeb7 | hello Bertl!
| |
15:22 | ApertusWeb7 | I posted on the Apertus forum. With some friends we are thinking of doing some open source still stuff.
| |
15:22 | Bertl | there are currently two units of the Axiom Alpha prototype in the entire world :)
| |
15:22 | ApertusWeb7 | I can have things fabricated :)
| |
15:23 | ApertusWeb7 | Is it just a zedboard add-on?
| |
15:23 | Bertl | okay, then it shouldn't be a problem
| |
15:23 | Bertl | yes, it currently consists of a zedboard, a sensor frontend, the sensor and a lens mount
| |
15:24 | Bertl | plus two custom made debug modules which you won't need to get it working
| |
15:25 | ApertusWeb7 | the cmosis chip talks directly to the zedboard, or there is some glue?
| |
15:25 | Bertl | it is connected with 32 of the 64 LVDS pairs and some control lines, yes
| |
15:26 | ApertusWeb7 | how hard is it to get up and running, acquire single images? Is the zedboard running Linux on one core?
| |
15:27 | ApertusWeb7 | by "images" I mean whatever data the sensor outputs, of course
| |
15:29 | Bertl | the zedboard is running Linux on both cores
| |
15:30 | Bertl | the image is easy to get when the hardware is working and the software installed
| |
15:30 | Bertl | i.e. you run a small program, or change a few registers and the image 'appears' in memory
| |
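Bertl leaves the raw frame format open here (the image just "appears" in memory). As a purely illustrative sketch — the packing below is an assumption for 12-bit sensors in general, not the documented Alpha format, and `unpack_12bit` is an invented helper — reading such a dump could look like:

```python
def unpack_12bit(raw: bytes) -> list[int]:
    """Unpack 12-bit packed pixels: every 3 bytes hold 2 pixels.
    Layout assumed: p0 = 8 high bits + 4 bits, p1 = 4 bits + 8 low bits."""
    pixels = []
    for i in range(0, len(raw) - 2, 3):
        b0, b1, b2 = raw[i], raw[i + 1], raw[i + 2]
        pixels.append((b0 << 4) | (b1 >> 4))    # first pixel
        pixels.append(((b1 & 0x0F) << 8) | b2)  # second pixel
    return pixels
```

Once unpacked, the values land in the 0..4095 range discussed later in the log.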
15:31 | ApertusWeb7 | yes. and i assume i can then pipe it to a remote computer
| |
15:31 | ApertusWeb7 | via ethernet
| |
15:31 | Bertl | yes, but be aware that the ethernet bandwidth is rather limited
| |
15:31 | ApertusWeb7 | as it is then linux to xxx.
| |
15:31 | ApertusWeb7 | i am not intrested in speed. stills only.
| |
15:32 | Bertl | yes, that works perfectly fine, I'm doing it with ssh :)
| |
15:32 | ApertusWeb7 | exactly.
| |
15:32 | troy_s | Bertl: Any progress?
| |
15:32 | ApertusWeb7 | if I and friends worked with this we would be open source.
| |
15:33 | Bertl | troy_s: lots of progress, but not what you're referring to ... i.e. had no time yet to tackle this
| |
15:33 | ApertusWeb7 | you would get some raw conversion and color tools out of it presumably.
| |
15:33 | Bertl | go ahead
| |
15:33 | troy_s | Bertl: What have you been pounding away on?
| |
15:34 | troy_s | ApertusWeb7: What the heck does that meam?
| |
15:34 | troy_s | mean even.
| |
15:34 | ApertusWeb7 | my name is Edmund.
| |
15:34 | Bertl | troy_s: mostly fighting with the atomos converter and visiting folks in germany
| |
15:35 | Bertl | troy_s: it is the web client
| |
15:35 | Bertl | ApertusWeb7: try /nick edmund
| |
15:35 | ApertusWeb7 | I posted on the forum. I'm thinking of doing an open source still camera and starting with your stuff
| |
15:35 | ApertusWeb7 | changed nick to: edmund
| |
15:35 | troy_s | Bertl: The current means you are dumping to disk?
| |
15:36 | edmund | changed nick to: eadmund
| |
15:36 | Bertl | troy_s: hmm?
| |
15:37 | troy_s | Bertl: The Atomos converter, is it the HDMI related converter?
| |
15:37 | Bertl | yes, HDMI -> HD-SDI
| |
15:37 | Bertl | it is attached to a samurai on the SDI side
| |
15:37 | troy_s | Bertl: Goal to get to SDI I take it?
| |
15:38 | Bertl | the goal is to capture the HDMI output
| |
15:38 | troy_s | Nothing but trouble dumping to the converters, eh? Many specifications listed in the features of most of them, but few working, it sounds like?
| |
15:38 | Bertl | yeah, and they are picky as hell
| |
15:38 | troy_s | Fussy units.
| |
15:39 | eadmund | question: if one wants a proto sensor board, can one order it or does one need to have it fabricated oneself?
| |
15:39 | danieel | Bertl: found a new issue - sec 28, the lamp on the sky background got darker lines below
| |
15:41 | Bertl | yeah I guess that is because the sensor is saturated there
| |
15:41 | danieel | that should not happen with CMOS... overflow was an issue with CCDs
| |
15:41 | Bertl | see the vertical streaks on the clouds in the background?
| |
15:42 | danieel | from the lamp down, and in the skies i would say i see just fpn
| |
15:42 | Bertl | IMHO the cmosis sensor registers are still not properly configure
| |
15:42 | Bertl | *configured
| |
15:43 | danieel | looks like some timing issue maybe
| |
15:43 | danieel | like a guard interval not being long enough
| |
15:43 | Bertl | somebody will have to look into that anyway ... not my current focus
| |
15:43 | eadmund | antiblooming?
| |
15:44 | eadmund | are there registers to configure antiblooming?
| |
15:44 | troy_s | Bertl: By the way, I did a cLUT
| |
15:44 | troy_s | Bertl: And the images look pretty solid.
| |
15:44 | Bertl | eadmund: so, you can order the PCB from OSHpark, but you have to assemble it yourself (for now)
| |
15:44 | troy_s | Some strange vertical lines are revealed when we apply the cLUT
| |
15:45 | Bertl | that's fine, FPN correction is implemented but not configured
| |
15:45 | troy_s | Bertl: The cLUT looks solid though. DE 2k of 1.4 :)
| |
15:45 | troy_s | (Still complete anomalies in the >0.5 values as we have discussed)
| |
15:46 | Bertl | ah, that's good to hear
| |
15:46 | troy_s | The chart looks pretty solid.
| |
15:46 | Bertl | can you factor out the LUT and leave a matrix for the cross color dependencies?
| |
15:46 | Bertl | (with the tools available I mean)
| |
15:46 | troy_s | Aside from the background in seb's shot, which I have _no_ damn clue what it is... a sheer? A blind?
| |
15:46 | troy_s | Blown out all to hell and likely averse to the profiling.
| |
15:47 | Bertl | hehe
| |
15:47 | troy_s | The wood however, looks deadly spot on (being in the neutral region)
| |
15:47 | Bertl | can you upload a corrected image?
| |
15:47 | troy_s | The matrix is hopeless until you do your magic.
| |
15:48 | Bertl | that's why I'm asking, because adding the per channel LUTs before the matrix is rather trivial
| |
15:48 | troy_s | (We are going to get those wonk skews on the matrix due to the fact that the data simply cannot be fit into a decent transform)
| |
15:48 | troy_s | Hrm. You mean to correct the linearity issues...
| |
15:48 | troy_s | Linear interp on the LUT?
| |
15:48 | troy_s | Or something a little better?
| |
15:48 | troy_s | (Bilinear?)
| |
15:49 | Bertl | no need for interpolation, we can do 1:1 with 12bit
| |
15:49 | troy_s | Good point.
| |
15:49 | Bertl | we can even do better actually
| |
15:49 | troy_s | Huh?
| |
15:49 | Bertl | i.e. we can do 12->16 bit then have the matrix crunch 16bit
| |
15:49 | troy_s | I hate the idea of tossing data
| |
15:50 | troy_s | but in some respects, perhaps you are quite right on nuking an upper bit etc.
| |
15:50 | Bertl | and after that go back to 12 bit just before the gamma lut
| |
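Bertl's proposed pipeline (expand 12-bit samples to 16 bits, crunch the matrix at 16 bits, drop back to 12 bits just before the gamma LUT) can be sketched numerically. The clamping and rounding choices here are assumptions, not the FPGA implementation:

```python
def expand_12_to_16(v: int) -> int:
    # 12 -> 16 bit by bit replication; maps 0 -> 0 and 4095 -> 65535 exactly
    return (v << 4) | (v >> 8)

def reduce_16_to_12(v: int) -> int:
    # back to 12 bit just before the gamma LUT
    return v >> 4

def apply_matrix(rgb16, m):
    # 3x3 colour matrix applied on 16-bit values, clamped to the 16-bit range
    return [max(0, min(65535, int(round(sum(c * x for c, x in zip(row, rgb16))))))
            for row in m]
```

The round trip `reduce_16_to_12(expand_12_to_16(v))` returns `v` unchanged, so the extra headroom costs nothing when the matrix is the identity.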
15:50 | troy_s | Or alternatively, nuke the upper bits etc for profiling, generate a reliable matrix, then correct via a LUT on the error prone data acq
| |
15:53 | Bertl | there is no point in keeping saturated/clipped values around
| |
15:54 | Bertl | they do not contribute any information
| |
15:54 | troy_s | Not the clipped
| |
15:54 | troy_s | (Obviously that's a nightmare)
| |
15:55 | troy_s | The stuff that whacks the image.
| |
15:55 | eadmund | do you really want to do all the color stuff yourselves? I have acquaintances who already have all this code as open source, with calibration tools and it is *fast*
| |
15:56 | troy_s | Bertl: http://www.pasteall.org/pic/show.php?id=67813
| |
15:56 | troy_s | Bertl: Decent looking chart. You can see the typical stuff we discussed in the 70% RGB range
| |
15:56 | Bertl | eadmund: we are fine with whoever is doing the 'color stuff' :)
| |
15:56 | troy_s | eadmund: You are speculating.
| |
15:56 | Bertl | eadmund: it is open source/open hardware after all
| |
15:57 | Bertl | troy_s: yep, looks nice
| |
15:58 | troy_s | eadmund: Calibration and profiling tools are not the issue, and I'm quite sure that the limited domain knowledge of 95% of folks who think they understand color is actually a narrow slice of color knowledge: that of the graphic design industry. This camera's primary audience is, as I understand it, cinematic / moving image.
| |
15:58 | troy_s | Bertl: The sheer is something else. I have laughed out loud to myself when I think of what Seb's place looks like.
| |
15:58 | troy_s | Bertl: It makes me think he lives in a Boogie Nights apartment.
| |
15:59 | troy_s | eadmund: And, as someone that has watched the 'magic' of open source / Libre projects since about 1994, I'll tell you that the actual "Work" that gets done is often lone soldiers like Bertl slugging away quietly by themselves.
| |
15:59 | Bertl | if that makes him happy, so be it :)
| |
16:00 | eadmund | absolutely, and 95% of people end up on this channel :)
| |
16:00 | troy_s | (With the exception being massive projects that have inertia, and likely attract a reasonable share of nutbars and freakshows (hello Linux kernel))
| |
16:01 | troy_s | eadmund: At any rate, if the folks you know can tell me the difference between a scene referred image and a display referred image, they are welcome to come and chit chat.
| |
16:01 | Bertl | actually they are very welcome anyways :)
| |
16:01 | troy_s | Bertl: Pretty decent chart though I'd say... so the cLUT can correct for it (That's an XYZ by the way)
| |
16:01 | troy_s | Bertl: What's the vertical streaking?
| |
16:02 | troy_s | Bertl: Seems homogeneous.
| |
16:02 | eadmund | yeah, I am a bit surprised about the horizontal and vertical streaks and zones. Sensor pattern noise?
| |
16:02 | troy_s | Bertl: Is that the noise?
| |
16:02 | Bertl | it is the uncorrected FPN noise
| |
16:03 | troy_s | Note also that that is a pure sRGB dump to JPEG, so there's no real way to see it decent without assigning it an sRGB profile.
| |
16:03 | troy_s | (and assuming you are all viewing on profiled displays with a profile in the pipe.)
| |
16:03 | troy_s | (Likely a large assumption. :))
| |
16:03 | eadmund | my friends make libraw. they do raw camera color for a living.
| |
16:03 | troy_s | eadmund: LibRaw is a dupe of dcraw.
| |
16:04 | troy_s | eadmund: Nothing magic there. Worse, it is all based around graphics arts, which is not applicable to the needs of a motion picture camera regarding color.
| |
16:04 | Bertl | eadmund: sounds good ...
| |
16:04 | troy_s | (Dare I say it tacks on muddling cruft that isn't terribly valuable to imaging pipelines.)
| |
16:05 | Bertl | doesn't mean that they do not have a good understanding of the software involved, does it?
| |
16:05 | eadmund | troy_s why would characterising the sensor be a different issue, whatever conventions you choose afterwards?
| |
16:06 | troy_s | eadmund: Getting to a useful matrix / LUT is one thing, but making it useful beyond ICC is another.
| |
16:06 | eadmund | linearity, primaries, crosstalk, spectral sensitivity - where the hell is this graphics arts?
| |
16:06 | Bertl | troy_s: so what does the cLUT look like?
| |
16:07 | troy_s | eadmund: As a general rule, there's a huge chunk of graphics arts specifics stuff that tags along with the tools used to generate, for example, a profile.
| |
16:07 | troy_s | (D50 as a case in point.)
| |
16:07 | troy_s | Bertl: You want me to dump it?
| |
16:07 | Bertl | yes, please
| |
16:07 | troy_s | erf.
| |
16:07 | troy_s | Hold.
| |
16:09 | eadmund | anyway - I find all of this interesting - I would prefer maybe getting a copy of your sensor board rather than having to design one; and I think you would benefit from doing the HARD stuff, which is pumping data at speed
| |
16:09 | Bertl | as I said, for now, you have to assemble it yourself (no need to design anything)
| |
16:09 | eadmund | ok; tell me where to order
| |
16:10 | eadmund | what about sensor mount and connectors? chip one orders from CMOSIS?
| |
16:11 | Bertl | I'd advise to take the latest board design, you can grab it here: http://vserver.13thfloor.at/Stuff/AXIOM/cmv12k-adapter-v1.1.tar.xz
| |
16:12 | Bertl | and just submit the cmv12k-adapter-v1.1.brd via OSHpark
| |
16:12 | Bertl | there is a version already there, so you could also order that one, but we improved a few things in v1.1
| |
16:13 | Bertl | https://github.com/apertus-open-source-cinema/alpha-hardware/tree/master/SFE-PCB
| |
16:14 | Bertl | I see, we even have the v1.2 version online
| |
16:14 | eadmund | thank you! what is the expense involved? I am working with Cruse (scanners) and I guess they will order
| |
16:14 | Bertl | which isn't actually tested yet, but might improve some things
| |
16:14 | eadmund | working *with* not *for*
| |
16:15 | Bertl | the PCBs cost about 85USD (you need to order 3pcs)
| |
16:15 | gwelkind | joined the channel | |
16:15 | troy_s | Bertl: pastelink.me/dl/e65221
| |
16:15 | troy_s | Bertl: There's a header that describes the LUT.
| |
16:15 | Bertl | eadmund: the parts are way below 100USD, except for the socket and the cmosis sensor
| |
16:15 | troy_s | Also note that it is a perceptual mapping, not saturation. Sort of torn on the most ideal here. My gut wants to say absolute colorimetric... but... anyways... silly details.
| |
16:16 | troy_s | eadmund: There are a few other peeps here that lurk that are working toward building a stills camera if it is of interest to you.
| |
16:16 | troy_s | (Not certain they want to be known, but they may ping you privately.)
| |
16:17 | eadmund | troy_s; ah, interesting.
| |
16:17 | troy_s | eadmund: And by "working toward" I believe they actually have a board and sensor working etc. Not just vapourware.
| |
16:17 | troy_s | Bertl: Think you can put the cLUT to use?
| |
16:18 | troy_s | That _should_ be the XYZ LUT, unless I pooped the bed.
| |
16:20 | Bertl | hmm, so that is an input mapping per channel, a 3D LUT (interpolated) and an output mapping per channel, correct?
| |
16:21 | Bertl | I wonder if 'the tools' could do the same with a matrix instead of the 3D lut?
| |
16:21 | troy_s | ?
| |
16:21 | Bertl | i.e. with an input and an output LUT around the matrix
| |
16:22 | troy_s | Why do you see the need for a LUT after the matrix if you feed it a 3D LUT to correct?
| |
16:23 | troy_s | (Not speaking of a TRC here on the tail end, strictly colorimetry)
| |
16:23 | Bertl | I do not see any need, but that's what your data contains, no?
| |
16:23 | Bertl | or do I misinterpret that?
| |
16:23 | troy_s | I believe there's two tables in there.
| |
16:23 | Bertl | CLUT resolution = 45
| |
16:23 | Bertl | Input Table entries = 2048
| |
16:23 | Bertl | Output Table entries = 2048
| |
16:23 | eadmund | i will ask other people to come here and chat some time - what are good times? (GMT?)
| |
16:23 | troy_s | The first being the primary workhorse of the 3D to XYZ.
| |
16:24 | Bertl | eadmund: folks are from all over the world, and my timezone is not really fixed
| |
16:24 | troy_s | Bertl: The second is a straight 1D. Which I am not entirely sure why it is there (given that I've only glanced at it.)
| |
16:24 | troy_s | Bertl: I'd suspect it has to do with that graphics arts cruft
| |
16:25 | troy_s | Bertl: And the lovely mapping of white / black crap.
| |
16:25 | troy_s | (fricking mapping)
| |
16:25 | Bertl | let's call the output table a gamma LUT :)
| |
16:25 | Bertl | anyway, the question bugging me is the following:
| |
16:25 | troy_s | Bertl: Very close to it methinks, but not quite. Traditionally they stretch the values to make for graphic design 'compliant' ICCs.
| |
16:25 | Bertl | we had a version with just a matrix
| |
16:25 | troy_s | Go.
| |
16:26 | Bertl | we have a version with a LUT + 3D LUT
| |
16:26 | troy_s | Yes. With a busted matrix thanks to the crufty values we are getting off the sensor.
| |
16:26 | Bertl | (ignoring the gamma LUT at the end)
| |
16:26 | troy_s | Any 1D LUT is worthless.
| |
16:26 | Bertl | now, can we do/try with a LUT + matrix ?
| |
16:27 | troy_s | Don't quite understand the goal.
| |
16:27 | troy_s | As in correct the image, generate a matrix from the corrected image?
| |
16:27 | troy_s | (Little losty here.)
| |
16:27 | Bertl | no, correct the image with a LUT + matrix
| |
16:27 | Bertl | instead of a LUT + 3D Lut
| |
16:27 | troy_s | Oh. You want a pure 1D LUT correction for each channel
| |
16:28 | troy_s | Then use the matrix to reach final.
| |
16:28 | troy_s | Yes?
| |
16:28 | Bertl | bingo!
| |
16:28 | troy_s | Hrm.
| |
16:28 | troy_s | Good question and not entirely certain how to reach that goal.
| |
16:28 | troy_s | It's tricky.
| |
16:28 | troy_s | Because to get to the 'correct' values, the 3D LUT is mandatory thanks to what the sensor is barfing out.
| |
16:28 | eadmund | do you have sensor linearity issues?
| |
16:28 | troy_s | (IE Adjusting the saturation intents is absolutely mandatory here, as the values aren't linearly skewed)
| |
16:29 | Bertl | eadmund: doesn't look like, but we have saturation/clipping
| |
16:29 | troy_s | Bertl: I haven't seen any clipping that causes any issues.
| |
16:29 | troy_s | Bertl: So I'm unsure what you mean. Our charts go wonk at 70%RGB.
| |
16:30 | troy_s | (no channels clip)
| |
16:30 | eadmund | are you sure the sensor is linear in the last stop or so from saturation?
| |
16:30 | troy_s | eadmund: It won't be.
| |
16:30 | troy_s | eadmund: Nor near the bottom. Bertl has some plans there.
| |
16:31 | Bertl | the problem seems to be that with the sensor default, black saturates for each channel around 250/4096
| |
16:31 | Bertl | and white saturates around 2600/4096 (that is with the default gain)
| |
16:31 | eadmund | yes, black levels are always above 0, so you can filter.
| |
16:32 | Bertl | depending on illumination/light temperature, they diverge on dark and bright areas
| |
16:32 | Bertl | (i.e. clip off in a statistical way)
| |
16:32 | Bertl | this seems to confuse the calibration tools
| |
16:32 | troy_s | Bertl: So to be clear, you are wondering if there is a 1D method to correct the channels?
| |
16:33 | Bertl | to correct them in such a way, that a 'simple' matrix can convert them properly
| |
16:33 | troy_s | Yes. I think I understand.
| |
16:33 | eadmund | I will try and get my friend Iliah Borg to join the channel and discuss this with you. He seems to be offline.
| |
16:35 | eadmund | topic looks interesting academically to me, but to get off the ground he has the tools ready.
| |
16:35 | Bertl | eadmund: btw, are you specifically interested in the CMV12k as sensor?
| |
16:36 | eadmund | no. I wanted to go for the 50MP Sony sensor, but it looks a pain to get to test, and faster to evaluate with CMOSIS for feasibility, maybe use the CMV20K after.
| |
16:36 | Bertl | okay, because I was going to point out the higher resolution devices for stills
| |
16:37 | eadmund | which ones ? The 70MP seems to be analog ...
| |
16:38 | Bertl | the CMV20k yes
| |
16:39 | Bertl | do you have detailed information about the 50MP sony one?
| |
16:40 | eadmund | that is an interesting question.
| |
16:41 | intracube | left the channel | |
16:42 | eadmund | I wonder whether it is better to do a prototype with something that's available and well documented.
| |
16:42 | Bertl | well documented is always relative :)
| |
16:45 | troy_s | Bertl: How do you want the curves?
| |
16:45 | troy_s | (not sure on their direction)
| |
16:45 | troy_s | Bertl: Separate files?
| |
16:46 | eadmund | I'm off to get magic hour light :) later ... nice meeting you, hope we will have an interesting interaction in the future!
| |
16:47 | troy_s | Bertl: ?
| |
16:47 | Bertl | doesn't matter, gawk will handle that
| |
16:47 | eadmund | left the channel | |
16:47 | troy_s | Bertl: You are way too big brained.
| |
16:48 | troy_s | Bertl: In terms of files, you want me to dump them into separates or one big un?
| |
16:48 | Bertl | whatever is easier for you
| |
16:48 | Bertl | I can upload index/value pairs (separated with a space) directly
| |
16:49 | Bertl | if the index has 12bit (i.e. from 0 to 4095)
| |
16:50 | troy_s | Bertl: The input tables are only 8 bit.
| |
16:50 | troy_s | Not sure how to stretch the bit depth there.
| |
16:50 | Bertl | interesting ... strange choice for 12bit data, no?
| |
16:51 | Bertl | the tables on the file you uploaded were 11bit
| |
16:51 | troy_s | Bertl: Not my fault!
| |
16:51 | troy_s | lol
| |
16:51 | troy_s | Yes the 3D LUTs.
| |
16:51 | troy_s | I suspect the shapers can be lower resolution
| |
16:51 | Bertl | no, the linear luts
| |
16:51 | troy_s | Oh.
| |
16:52 | Bertl | the 3D lut was actually below 6bit
| |
16:53 | Bertl | i.e. 45x45x45 (which is roughly 5.5 bit per channel)
| |
16:53 | troy_s | Interp.
| |
16:53 | troy_s | Joy.
| |
16:53 | Bertl | yes, well, it's hard to do a full 12bit 3D lut
| |
16:53 | troy_s | What is that a 65x65?
| |
16:54 | Bertl | that would be 2^36 entries
| |
16:54 | Bertl | or 68 gigawords to look up :)
| |
16:54 | troy_s | (largest 3D LUT dispcalgui will create is 65x65)
| |
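The sizes being thrown around can be checked directly: a full 12-bit 3D LUT would have 4096³ = 2³⁶ entries (Bertl's "68 gigawords"), while a 45³ LUT has 91,125 entries, about 5.5 bits of resolution per channel before interpolation. A quick arithmetic check:

```python
import math

def lut3d_entries(nodes: int) -> int:
    """Total entries in an N x N x N 3D LUT."""
    return nodes ** 3

def bits_per_channel(nodes: int) -> float:
    """Effective per-channel resolution before interpolation."""
    return math.log2(nodes)

# lut3d_entries(4096) is 2**36 entries; lut3d_entries(45) is 91125,
# i.e. roughly 5.5 bits per channel, matching the figures in the log.
```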
16:55 | Bertl | the thing is, I see two options to get the mapping done properly
| |
16:56 | Bertl | the first one is the 'simple' one, which breaks the mapping down to a 1D LUT per channel (3 or 4)
| |
16:56 | Bertl | and a 3x3 Matrix (actually 4x4) to handle the cross color dependencies
| |
16:56 | Bertl | the second one is what I originally proposed (before we switched to a matrix :)
| |
16:57 | Bertl | i.e. have a 1D LUT per channel per channel
| |
16:57 | Bertl | i.e. use 3/4 LUTs per channel we want converted
| |
16:58 | troy_s | Bertl: Ok... this is new territory for me so here they come
| |
16:58 | troy_s | Might be a nightmare.
| |
16:58 | Bertl | the second one is more powerful than the first one, but also a lot harder to calculate
| |
16:58 | troy_s | Matrix is sort of an ideal, but if the sensor is misbehaving, we need to do whatever we need to get useful data.
| |
16:58 | Bertl | (at least from a 3D lut/matrix)
| |
16:58 | troy_s | And just because you are smarter than everyone else here, don't be a "told-you-so" jerk.
| |
16:58 | Bertl | the matrix is not the ideal, the matrix is a linear dependency
| |
16:58 | troy_s | :P
| |
16:59 | troy_s | Well the matrix is "ideal" from a colorimetric standpoint.
| |
16:59 | Bertl | hehe, you can take it I guess :)
| |
16:59 | troy_s | That's what I meant.
| |
16:59 | Bertl | yes, agreed, if everything is linear, the matrix is the simplest and most elegant solution
| |
17:00 | Bertl | the LUT + matrix can compensate for non linearities in the channels
| |
17:00 | Bertl | but it cannot compensate for non linear behaviour across channels
| |
17:01 | Bertl | the 3D lut can compensate for everything, but requires interpolation and thus creates sub-optimal matchings
| |
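Bertl's first option — a 1D LUT per channel to fix per-channel non-linearities, then a matrix for the cross-channel terms — reduces to a few lines. The LUT contents and matrix coefficients below are placeholders, not calibration data:

```python
def correct_pixel(rgb, luts, matrix):
    """Option 1 from the discussion: linearize each channel with its own
    1D LUT, then remove cross-channel dependencies with a 3x3 matrix."""
    linear = [luts[c][v] for c, v in enumerate(rgb)]
    return [sum(m * x for m, x in zip(row, linear)) for row in matrix]
```

With identity LUTs and an identity matrix this is a no-op; the real question in the log is whether the sensor's behaviour is separable enough for this form, or whether the full 3D LUT is unavoidable.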
17:01 | troy_s | Bertl: http://www.pasteall.org/50015
| |
17:01 | troy_s | (NO assurances on that data)
| |
17:01 | troy_s | DE 4.5 which is not super duper
| |
17:02 | troy_s | but alas...
| |
17:02 | troy_s | that is RGB order
| |
17:02 | troy_s | so the matrix is R, G, B and the LUTs are R, G, B
| |
17:02 | troy_s | (in the former, to XYZ should be.)
| |
17:02 | Bertl | okay, so that is 1D lut per channel atm?
| |
17:02 | Bertl | and the matrix used after that is?
| |
17:02 | troy_s | Looks like.
| |
17:03 | troy_s | Should be.
| |
17:03 | troy_s | Bertl: Don't quote me on that though, haven't ever done a 1D LUT to matrix transform
| |
17:05 | gwelkind | left the channel | |
17:05 | troy_s | (Well not like this. Done it plenty going from a non-linear space to another obviously)
| |
17:10 | Bertl | http://vserver.13thfloor.at/Stuff/AXIOM/curves.svg
| |
17:10 | troy_s | Bertl: Your graph-fu freaks me out.
| |
17:11 | Bertl | so that looks like my theory is pretty much spot-on
| |
17:11 | troy_s | Bertl: Here's another based off the 16ms hold tight...
| |
17:11 | Bertl | i.e. see the 'linear' range between 50 and 200?
| |
17:12 | troy_s | Bertl: http://www.pasteall.org/50016
| |
17:12 | troy_s | Bertl: That's a better DE, not entirely sure why given the saturations will be lower in the lower exposed image. But useful for data.
| |
17:12 | troy_s | so Bertl, if I understand you correctly
| |
17:12 | troy_s | Bertl: To generate our 'idealized' matrix
| |
17:12 | troy_s | we'd use that chunk between 50 and 185-ish
| |
17:12 | troy_s | Yes?
| |
17:13 | troy_s | Then things go loopyville above and below.
| |
17:13 | troy_s | (Even below 50... probably 45ish.)
| |
17:14 | troy_s | Bertl: Can you plot both the 16 and the 28 on the same chart?
| |
17:14 | Bertl | http://vserver.13thfloor.at/Stuff/AXIOM/curves2.svg
| |
17:14 | Bertl | sure here is the second one alone
| |
17:16 | troy_s | Bertl: So useful data?
| |
17:16 | Bertl | http://vserver.13thfloor.at/Stuff/AXIOM/curves12.svg
| |
17:16 | troy_s | Egads... those curves look radically different. Erf.
| |
17:17 | Bertl | yes, but only in the area where it is wild guessing
| |
17:17 | Bertl | i.e. if you look at the 'linear' range, they just have different multipliers and offsets
| |
17:17 | troy_s | I suppose with enough bits of data, we should be able to find a reliable shape?
| |
17:17 | Bertl | i.e. there is more red in the second one
| |
17:18 | troy_s | Bertl: Because how can we figure out how to get to reliable LUT curves if it is going to be all over the map?
| |
17:18 | troy_s | BLUE looks pretty consistent, different offset
| |
17:18 | troy_s | red looks... whack
| |
17:18 | Bertl | I don't think the values over 200 have any meaning
| |
17:18 | se6astian | joined the channel | |
17:18 | troy_s | green is definitely whack
| |
17:18 | troy_s | Oh really?
| |
17:18 | troy_s | Want me to get the other data packs for proof?
| |
17:18 | Bertl | yes, they are the result of saturation, clipping and FPN
| |
17:18 | troy_s | How is it clipping if the data value is well below clip?
| |
17:19 | troy_s | Has to be FPN and other oddness?
| |
17:19 | Bertl | because the sensor is strange
| |
17:19 | Bertl | let me try to explain what I think I've been seeing
| |
17:19 | Bertl | you know we talked about the bell shaped gauss curve?
| |
17:20 | troy_s | Yes
| |
17:20 | Bertl | if we take a gray area, uniformly lit and capture that (somewhere in the middle range of the sensor)
| |
17:21 | Bertl | then we get three very huge peaks (bell shaped) for each color
| |
17:21 | troy_s | Yep
| |
17:21 | troy_s | (that's the primary filtration range I'd suspect yes?)
| |
17:21 | Bertl | they will not be at the same location in the histogram
| |
17:22 | troy_s | (Primary per channel)
| |
17:22 | Bertl | the base width of the curve depends on many things
| |
17:22 | Bertl | the illumination itself, the filters, the FPN, etc
| |
17:22 | Bertl | but basically we have three peaks in the middle somewhere
| |
17:22 | troy_s | Well the sensel filters themselves are not narrow band as well, which I suspect complexificates everything.
| |
17:22 | Bertl | all at slighly different locations
| |
17:22 | troy_s | Yes
| |
17:23 | Bertl | now if we take a black paper
| |
17:23 | Bertl | then we surprisingly get three identical peaks at 200/4096
| |
17:23 | Bertl | so okay, one might say, black is black, so they have to be at the same location
| |
17:24 | Bertl | fair enough, but when we do the same on the white end
| |
17:24 | Bertl | we also get three peaks at the very same location
| |
17:24 | Bertl | around 2600/4096
| |
17:24 | troy_s | Which is telling you what?
| |
17:24 | Bertl | which is telling me that the range below 250 or so, and abov 2500 is just bogus
| |
17:24 | Bertl | *above
| |
17:25 | troy_s | Gotcha.
| |
17:25 | Bertl | i.e. it doesn't contain any relevant information
| |
17:25 | troy_s | Yes. Just random firing effectively.
| |
17:25 | troy_s | That has the seductive look of data.
| |
17:25 | troy_s | Correct?
| |
17:25 | troy_s | So in practical terms, if we chop that off, how much latitude do we lose?
| |
17:25 | troy_s | Not much, is my guess
| |
17:26 | Bertl | we lose roughly a bit
| |
17:26 | troy_s | As it seems the values above 70% RGB is where it happens.
| |
17:26 | Bertl | reducing the 4096 range to 2048
| |
17:26 | troy_s | Which depending on the latitude below 70%
| |
17:26 | troy_s | That's just bit depth though... we should probably try to calculate the latitude.
| |
17:26 | Bertl | you can try the following if you feel like it
| |
17:27 | troy_s | That'd be a full stop assuming the sensels capture precisely equivalent levels of light.
| |
17:27 | Bertl | search for a snap which has a histogram within the 300-2400 range
| |
17:27 | troy_s | What's our darkest value in percent of useful RGB data you think?
| |
17:27 | troy_s | 300?
| |
17:27 | Bertl | i.e. completely within that range
| |
17:27 | troy_s | Yes.
| |
17:27 | Bertl | then stretch the contrast to full range
| |
17:28 | Bertl | so that 300 becomes 0 and 2400 becomes 4095
| |
17:28 | Bertl | then use the resulting image for profiling with a simple matrix
| |
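Bertl's recipe — clip to the usable 300..2400 window, then stretch so 300 becomes 0 and 2400 becomes 4095 — as a sketch; the integer rounding is an arbitrary choice here:

```python
def stretch(v, lo=300, hi=2400, out_max=4095):
    """Clip to the usable sensor range and stretch to full 12-bit scale."""
    v = max(lo, min(hi, v))
    return (v - lo) * out_max // (hi - lo)
```

The usable window is 2100 codes, log2(2100) ≈ 11.04 bits, which matches Bertl's earlier estimate that "we lose roughly a bit".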
17:28 | troy_s | Am I a dimwit for only seeing rather reduced latitude in that? I mean if the sensor's response is (mostly) linear, 300 - 2400 seems to only be about 3 stops
| |
17:29 | Bertl | the reduction depends on the sensor gain
| |
17:29 | Bertl | and yes, for the very same image, it will be a reduction
| |
17:30 | Bertl | but if the sensor gain was set higher, it would give the same image just with larger values
| |
17:30 | Bertl | (and clipping of course)
| |
17:31 | Bertl | but I'm pretty confident, if you can find such a raw in the images captured, you can correct it with a simple matrix
| |
17:31 | troy_s | Bertl: 18ms http://www.pasteall.org/50018
| |
17:31 | troy_s | Bertl: 20ms http://www.pasteall.org/50019
| |
17:32 | troy_s | Bertl: 24ms http://www.pasteall.org/50020
| |
17:32 | troy_s | Bertl: 26ms http://www.pasteall.org/50021
| |
17:32 | troy_s | (and you have the 16 and 28 I believe now too)
| |
17:38 | troy_s | Bertl: Any luck?
| |
17:43 | Bertl | gawk -F: '/[0-9]:/ { if (NF==2) H[$1,C[$1]++]=$2; } END { for (n in C) { printf ("%d\t",n); for (m=0; m<C[n]; m++) printf("%f%c", H[n,m], ((m+1)==C[n])?10:9); }}'
| |
17:44 | troy_s | Ok.
| |
17:44 | troy_s | I totally understand.
| |
17:44 | Bertl | this will convert the curves to a table
| |
17:44 | Bertl | you can pipe it through sort -n
| |
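For readers who don't parse gawk at a glance, a rough Python re-reading of the one-liner above (assuming input lines of the form `index:value`, as the `-F:` separator suggests; the gawk version leaves rows unsorted, hence the suggested `| sort -n`):

```python
from collections import defaultdict

def curves_to_table(lines):
    """Collect 'index:value' lines into one tab-separated row per index."""
    table = defaultdict(list)
    for line in lines:
        parts = line.strip().split(":")
        if len(parts) == 2 and parts[0].strip().isdigit():
            try:
                table[int(parts[0])].append(float(parts[1]))
            except ValueError:
                continue                      # skip malformed values
    # one row per index, numerically sorted (what `sort -n` does for gawk)
    return ["\t".join([str(n)] + [f"{v:f}" for v in vals])
            for n, vals in sorted(table.items())]
```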
17:44 | troy_s | Bertl: How do the curves look visually in SVG?
| |
17:44 | troy_s | Support your theory?
| |
17:44 | Bertl | http://vserver.13thfloor.at/Stuff/AXIOM/curves.gplot
| |
17:45 | Bertl | this will plot a curve (from the data generated by the gawk script) in curves.data
| |
17:48 | Bertl | and yes, the curves all look very similar
| |
17:48 | troy_s | Bertl: So random at higher end?
| |
17:48 | Bertl | the 20ms seems to have the best linearity so far and very near R/G/B values
| |
17:49 | troy_s | Bertl: So anything gleaned for how to move forward?
| |
17:49 | Bertl | well, to be honest, not much new, but it was a nice confirmation
| |
17:50 | troy_s | Bertl: So what is your plan of attack? Clip the bits?
| |
17:50 | Bertl | i.e. we probably do not even need to do fancy correction on the input channels, a normal range clipping and stretching should do
| |
17:50 | Bertl | we will nevertheless use a lut for that for improved flexibility
| |
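The "LUT for flexibility" idea amounts to tabulating the per-sample correction once, so applying it is a single lookup; a sketch (the 300/2400 endpoints are the ones from earlier in the discussion, purely illustrative):

```python
def build_lut(transform, size=4096):
    """Tabulate a per-sample correction as a 12-bit lookup table, so the
    hardware-side work is a single memory read per sample."""
    return [transform(v) for v in range(size)]

# example: the clip-and-stretch correction (300..2400 -> 0..4095)
lut = build_lut(lambda v: min(4095, max(0, round((v - 300) * 4095 / 2100))))
```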
17:51 | troy_s | Yes and iteration is more flexible.
| |
17:52 | Bertl | as I said, the LUT is not hard to add, but I want to address the triple buffering first
| |
17:53 | jucar | left the channel | |
17:55 | troy_s | Bertl: To prevent frame drops?
| |
17:55 | Bertl | kind of, yes, currently we cannot start a new sample before the buffer is flipped (double buffering)
| |
17:55 | Bertl | which limits us to half the output framerate and shorter exposure times than necessary
| |
17:57 | danieel | where is that issue?
| |
17:58 | Bertl | hmm?
| |
17:58 | danieel | i do not see why any kind of buffering would limit exposure to <180 deg
| |
17:59 | djp | left the channel | |
17:59 | Bertl | correct, that is what I mean, you lose half the framerate
| |
18:00 | djp | joined the channel | |
18:00 | Bertl | let me give an example
| |
18:00 | Bertl | you start a capture to buffer 0 at 0ms
| |
18:01 | Bertl | let's say it takes 10ms to capture and 5ms to transfer
| |
18:01 | Bertl | that would in theory allow for 66FPS
| |
18:01 | Bertl | let's also say the HDMI output runs at 60FPS
| |
18:02 | Bertl | now naturally that would limit us to 60FPS capture/output
| |
18:02 | Bertl | or we would have to drop a frame every now and then
| |
18:03 | Bertl | so, to get undisturbed output, we have to display buffer 1 during capture
| |
18:03 | FergusL | congrats se6astian & team on the first images, I'm sure we'll see more interest suddenly, since we can show people things we can shoot, even if it isn't the best the camera can do for now
| |
18:03 | Bertl | i.e. from 0 to 16.6ms, we see buffer 1
| |
18:04 | Bertl | at 16.6ms we can switch to buffer 0, given that buffer 0 completed
| |
18:04 | Bertl | which is true, as the snap is shorter than the 16.6ms (15ms)
| |
18:04 | Bertl | we also can start a new capture right after the buffer flip
| |
18:05 | Bertl | (going to buffer 1 this time)
| |
18:05 | troy_s | Oldschool video game buffering
| |
18:06 | Bertl | if the exposure/transfer is taking slightly longer, we have to wait a complete frame to start a new acquisition
| |
18:07 | Bertl | because we cannot send data to the buffer being displayed, and we do not want to capture over the current buffer
| |
18:08 | Bertl | so synchronization currently has to wait for the capture to complete, and then for the frame to complete (output) before the cycle can restart
| |
18:08 | Bertl | with triple (or more) buffering we can have the capture 'follow' the display
| |
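The buffering argument can be turned into a toy model (not the actual FPGA logic; the 15 ms snap and 16.6 ms frame period are the figures from Bertl's example):

```python
import math

def frames_shown(snap_ms, frame_ms, slots, buffers):
    """Toy model: how many distinct frames reach the display in `slots`
    output periods.  With double buffering a new capture can only start
    after the buffer flip, so a snap longer than one frame period costs
    a whole extra output frame; with 3+ buffers the capture simply
    'follows' the display and only the capture time itself limits us."""
    if buffers >= 3:
        period = max(snap_ms, frame_ms)
    else:
        period = math.ceil(snap_ms / frame_ms) * frame_ms
    return math.floor(slots * frame_ms / period)

# a 15 ms snap fits the 16.6 ms frame either way; an 18 ms snap halves
# the rate with double buffering but barely matters with triple
```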
18:09 | Bertl | danieel: everything clear?
| |
18:10 | danieel | will read back later, have to go
| |
18:10 | Bertl | k, no problem, cya
| |
18:10 | troy_s | LOL
| |
18:13 | jucar | joined the channel | |
18:37 | philippej | joined the channel | |
18:37 | Bertl | evening philippej!
| |
18:38 | philippej | Evening Bertl, long time no see, I was on holidays
| |
18:38 | philippej | Big congratulations for the recent achievements !
| |
18:39 | philippej | also congrats to troy_s for the first decent-looking colour corrected chart :-)
| |
18:39 | Bertl | it is interesting for me to see ...
| |
18:40 | Bertl | we have shown live images on the photonik (a week ago), but folks get crazy about the first recording :)
| |
18:42 | philippej | it's because we can get an idea of the images
| |
18:42 | philippej | it becomes real once recorded
| |
18:42 | Bertl | sure, it was a major achievement to figure out what that proprietary piece of fine hardware called atomos converter really wants :)
| |
18:43 | philippej | it's quite crazy the pickiness of this tool
| |
18:44 | Bertl | yes, a friend of mine told me that those converters do not accept any deviation from the formats they are designed/built for
| |
18:45 | Bertl | which gave the necessary clue to get it working in the first place, but it is a shame to call it HDMI converter, when it actually only converts a really small subset of specific HDMI modes
| |
18:45 | Bertl | and claims 'No Input' on everything else :)
| |
18:46 | troy_s | philippej: LOL. I do nothing.
| |
18:47 | philippej | well, at least congrats to both of you for making such interesting reads, this is becoming a saga
| |
18:48 | philippej | Bertl, I think those converters are made for very specific purposes like dslr provided hdmi -> sdi. With most probably the hdmi on those dslr also broken in various ways
| |
18:51 | troy_s | philippej: I suspect it is pairings that rarely test the actual stuff.
| |
18:55 | Bertl | good news people! I didn't break everything with the 'triple' buffering :)
| |
18:58 | philippej | likes the time between a feature is discussed and implemented :-)
| |
18:59 | Bertl | implementation is easy, testing takes time
| |
19:03 | philippej | Another quote for later :-)
| |
19:13 | ApertusWeb0 | joined the channel | |
19:13 | ApertusWeb0 | changed nick to: eadmund
| |
19:13 | Bertl | wb eadmund!
| |
19:16 | eadmund | Yo Bertl! Are you in Vienna?
| |
19:16 | Bertl | nope, but in Austria
| |
19:17 | eadmund | Me, I'm in Paris :) I just sent a query to CMOSIS, and will see tomorrow about ordering a zedboard.
| |
19:18 | eadmund | I used to be an individual member of the ICC but I'm not a color scientist.
| |
19:21 | eadmund | Bertl: Should I also order one of these breadboards? http://zedboard.org/accessories/protofmc
| |
19:23 | Bertl | looks interesting, but no point in ordering for the axiom alpha
| |
19:24 | philippej | left the channel | |
19:25 | troy_s | Bertl: In theory if I dump an exr and clamp the regions of color, I could get a pretty decent matrix for OCIO eh?
| |
19:27 | troy_s | (from the 18ms)
| |
19:27 | troy_s | (flare notwithstanding, not that there seems to be a serious degree that is impacting the curves)
| |
19:27 | Bertl | if the 18ms is within the linear range, then yes
| |
19:28 | Bertl | (I doubt it is)
| |
19:28 | Bertl | i.e. you can still clamp and use it for profiling, but it will give you missing data
| |
19:28 | Bertl | i.e. some swatches will be just white or black
| |
19:29 | Bertl | (in one component)
| |
19:30 | troy_s | Seems difficult to get a chart shot that matches that exposure range.
| |
19:30 | troy_s | Some chunks of the chart will have to dip into the noise floor, or ram them up into the random zone.
| |
19:30 | intracube | joined the channel | |
19:31 | troy_s | Not sure how to negotiate that when 70% RGB is the ceiling.
| |
19:32 | Bertl | I think it would be a good start for se6astian to adjust the offset (cmv register) so that black is really black
| |
19:32 | Bertl | and then simply try to expose the chart in such a way, that white is somewhere in the light gray area
| |
19:33 | Bertl | i.e. kind of underexposed
| |
19:33 | troy_s | Bertl: Right. Go against traditional theory as we are dealing with native sensor.
| |
19:34 | troy_s | Bertl: You must be able to see almost exactly where the random data begins by cross indexing those LUTs no?
| |
19:34 | troy_s | Bertl: Would it be prudent to simply craft a scaling function that takes that point X and maps it to the high value?
| |
19:34 | Bertl | well, you see around 180 or so it goes crazy
| |
19:35 | Bertl | as the chart plots 256 values, it is 180/256
| |
19:35 | Bertl | multiply that with 4096 and you have the 12bit range
| |
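The arithmetic above, spelled out (values quoted in the chat):

```python
chart_entries = 256          # the plotted curves have 256 entries
breakpoint_entry = 180       # where the curves "go crazy"

# scale the chart position into the sensor's 12-bit range
clip_12bit = breakpoint_entry / chart_entries * 4096
print(clip_12bit)            # 2880.0 -- useful data ends near 70% of full scale
```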
19:35 | Bertl | note that adjusting the offsets first will shift that by exactly the offset adjustments
| |
19:36 | Bertl | I really wonder why CMOSIS has so many gain values, if the gain settings do not make much sense in most cases
| |
19:37 | Bertl | i.e. a gain of 1 doesn't fill the range, still they allow to go down to 0.3
| |
19:37 | troy_s | E.g.?
| |
19:37 | troy_s | Bertl: I wonder why develop a sensor where you need to toss a bit's worth of random junk.
| |
19:38 | Bertl | well, if a gain of 1 fills 70% a gain of 0.3 will fill 21%
| |
19:38 | troy_s | Bertl: Unless they foresee a future where they can dial that in?
| |
19:38 | Bertl | there are many undocumented bits and registers in the sensor
| |
19:38 | troy_s | Bertl: Ideally the 70%-ish range is scaled to 100%
| |
19:38 | Bertl | so maybe yes, there is some way to improve/adjust that
| |
19:38 | troy_s | Seems that, from all of the charting I have seen
| |
19:38 | troy_s | above 70% is pure junk
| |
19:39 | Bertl | yep, that's what I mean
| |
19:39 | troy_s | You could maybe normalize that entire range down to ...
| |
19:39 | troy_s | well instead of say 30%, down to _maybe_ 5%
| |
19:39 | Bertl | so if you put white below that 70% and black right above the 10%
| |
19:39 | troy_s | I'm sure that, normalized, the 'information' that is there is useful if you crunch it down small enough.
| |
19:40 | troy_s | (as in it isn't _entirely_ random, but damn close enough to be useless)
| |
19:40 | troy_s | Force flatten those upper curves.
| |
19:40 | troy_s | Bertl: How long would it take to generate a gnuplot of all of the curves into one image?
| |
19:41 | Bertl | a few seconds?
| |
19:42 | troy_s | I am still shocked when people say "Impressed by the quality" on the mailing list.
| |
19:42 | troy_s | It is a little strange.
| |
19:43 | Bertl | so you want me to plot all on one chart?
| |
19:43 | troy_s | Bertl: If you think you can do it without too much effort, I'd love to see it. I'm not terribly swift at the GNUplot stuffs.
| |
19:45 | Bertl | do you remember which one was the first and second one?
| |
19:45 | Bertl | i.e. was the first one 18ms?
| |
19:45 | troy_s | first was 16, then I gave you 28. Then I filled in with 18, 20, 24, 26 IIRC
| |
19:45 | gwelkind | joined the channel | |
19:51 | Bertl | http://vserver.13thfloor.at/Stuff/AXIOM/curves_all.svg
| |
19:51 | troy_s | Did I mention your curve-fu freaks me out
| |
19:51 | troy_s | Bertl: Hrm... that's interesting don't you think?
| |
19:52 | eadmund | Bertl: Off the top of my head, I'd say that color will be solvable by existing Raw calibration methods, but the various stripes/pattern noise are a serious problem.
| |
19:52 | troy_s | LOL
| |
19:52 | Bertl | eadmund: the FPN is a problem we already solved (code wise)
| |
19:52 | troy_s | Bertl: That curve set is interesting as hell.
| |
19:53 | Bertl | i.e. somebody has to figure the correct values for the row/col corrections and simply upload it to the appropriate LUTs
| |
19:53 | eadmund | then can I see a clean image with no color-shading and fpn?
| |
19:53 | troy_s | Bertl: Something looks amiss on the set with the errant Green (I wonder if that errant red is part of it)
| |
19:53 | Bertl | eadmund: yes, if you provide the proper offsets for each row/col, then se6astian can do this easily
| |
19:54 | Bertl | we did it as an example for a dark frame, IIRC
| |
19:55 | troy_s | Bertl: Blue looks remarkably uniform.
| |
19:55 | Bertl | (should be somewhere on the web pages)
| |
19:55 | eadmund | yes, but that is one end of the histo only - d'you have code that does this for every line/column, and is it thermally stable?
| |
19:56 | troy_s | In fact, even for the set the erroneous green is off, the blues all look sound within reason.
| |
19:56 | eadmund | I know this is LOL and obvious to troy, but actually I find cameras drift.
| |
19:56 | Bertl | eadmund: the FPN should be stable, yes, and we have an offset for each row/col
| |
19:57 | troy_s | eadmund: It's not LOL. I was just LOLing at the idea that the color stuff will be solvable. Bertl is ridiculously on point on most of this, and there's nothing to suggest that color isn't solved.
| |
19:57 | Bertl | there is a gain related factor as well, this will be done per pixel
| |
19:59 | troy_s | Bertl: If you look at those curves, they look pretty correctable no?
| |
19:59 | eadmund | Image engineering have some very nice devices to determine sensor spectral sensitivity. From this you can get color correction.
| |
19:59 | troy_s | (even in the previous 'junk' range)
| |
19:59 | troy_s | eadmund: It isn't that simple.
| |
19:59 | troy_s | eadmund: Again, the filters are non-narrow
| |
19:59 | troy_s | eadmund: So you can't quite simply map a sensel filter to an XYZ matrix.
| |
20:00 | troy_s | eadmund: But I'm also curious as to why you believe that what is happening now isn't quite sufficient?
| |
20:01 | Bertl | troy_s: those curves are the result of trying to fit the actual data
| |
20:01 | eadmund | I don't have any belief whatsoever. Color is the basic quantity (sensation) which you try to achieve with a camera; so the problem is never permanently solved.
| |
20:01 | Bertl | i.e. beyond the linear range they do not contain any useable information
| |
20:01 | troy_s | Bertl: Yes, but the uniformity near the top looks interesting. The fact that the curves are predictably out might be worthwhile.
| |
20:01 | troy_s | eadmund: Ok. Great. I have no idea what you mean there.
| |
20:02 | Bertl | eadmund: you can get the quantum efficiency vs wavelength from the sensor datasheet
| |
20:02 | troy_s | Bertl: (I am curious as to what might have caused that one errant green curve to whack out (need to check the data again) and what sets go with it. But judging from the others, they too could very well be LUTted to give meaningful data, perhaps.)
| |
20:03 | Bertl | don't spend time on that, it isn't worth the effort
| |
20:03 | Bertl | let me explain why
| |
20:03 | troy_s | Bertl: Well that blue channel tells me that there _is_ useful data in that channel.
| |
20:03 | troy_s | Bertl: And certainly the red too (in 4.5 of the 6)
| |
20:03 | Bertl | let's assume you roll the dice
| |
20:04 | Bertl | you can roll 1-6, yes?
| |
20:04 | troy_s | Sure.
| |
20:04 | Bertl | now let's assume, you ignore everything above 3 and return 3
| |
20:04 | Bertl | now you create a curve (the mapping you did) which maps the acquired results to the 1-6 range
| |
20:05 | Bertl | for 3,4,5 and 6 you will only have 3s in your data
| |
20:05 | Bertl | but the algorithm which is trying to map this, will know that it has to go from 1 to 6
| |
20:05 | Bertl | that's what the curves do on the plot
| |
20:06 | Bertl | i.e. they show reasonable data in the linear range and do some guesswork in the other areas
| |
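Bertl's dice analogy, reproduced in a few lines (a sketch: values above 3 simply never occur in the clipped data, so any curve fitted over the full 1-6 range is guessing there):

```python
import random

random.seed(1)                                   # deterministic for the example
rolls = [random.randint(1, 6) for _ in range(10_000)]
clipped = [min(r, 3) for r in rolls]             # everything above 3 reads back as 3

# 4, 5 and 6 are simply absent from the clipped data; a curve fitted
# to cover the full 1..6 range has to invent the missing upper half
assert set(clipped) == {1, 2, 3}
```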
20:06 | gwelkind | left the channel | |
20:07 | Bertl | the data in the upper range is just not there
| |
20:07 | troy_s | Bertl: So those resultant curves don't hint that there's a degree of information versus random noise in that blue? (It would make sense given that the sensitivity in the blue sensels is so much lower than the R / G)
| |
20:09 | Bertl | you see that all the curves have different offsets and gradients
| |
20:10 | Bertl | still at the very same point (around 200) they go straight up
| |
20:10 | Bertl | this is where the useful data has ended
| |
20:10 | Bertl | the rest above is weird interpolation
| |
20:11 | eadmund | well the good news is there are no zedboards in stock at the moment.
| |
20:11 | Bertl | you might ask on the mailing list, maybe somebody has bought one but doesn't want to use it after all (for whatever reason)
| |
20:12 | eadmund | Bertl: Thanks. Nice idea.
| |
20:12 | troy_s | Awesome. IMDB scammers. That's a first.
| |
20:12 | troy_s | Hilarious.
| |
20:12 | eadmund | Bertl maybe I'll get a different sensor quicker :)
| |
20:13 | Bertl | eadmund: so the sensor is unavailable as well?
| |
20:14 | eadmund | Bertl: I got no precise info from CMOSIS yet. I am also approaching other vendors.
| |
20:16 | Bertl | okay, keep us updated about the results
| |
20:17 | eadmund | Bertl: What interests me is the idea of allowing people to do "stuff" easily with the camera, like turn a filter wheel, do XY stitching, sync external equipment.
| |
20:19 | troy_s | mozjpeg. Erf.
| |
20:20 | eadmund | Bertl, my idea is not the same as your project which is a *camera*; mine is something that can be shoehorned/remounted and used in various contexts.
| |
20:22 | Bertl | sounds good! looking forward to see the results
| |
20:23 | eadmund | Bertl by the way, those curves, what are the X/Y values, and what parametrizes the families? how was the measurement made?
| |
20:24 | eadmund | the guys I'm talking to about doing this open source do scanners, not cameras. http://www.crusescanner.com/
| |
20:24 | eadmund | different issues. reproducibility, accuracy etc.
| |
20:26 | eadmund | static subjects, external controlled illumination. Stuff which is completely useless, easy to do and irrelevant to the cinema community :)
| |
20:30 | troy_s | eadmund: Not all of it. It's very much the same basic stuff at the core - get a reference and shoot it under controlled situations.
| |
20:30 | troy_s | eadmund: In the end, that's all of color as it is psychophysical.
| |
20:30 | Bertl | @curves troy_s can explain them to you
| |
20:34 | eadmund | so I guess you are playing with the sensor gain parameter: set a gain, take multiple readings, average them, get those curves?
| |
20:37 | troy_s | eadmund: Set an exposure via gain, take a pop, and then process. The rest is basically warping the data to match an XYZ matrix.
| |
20:38 | troy_s | Plenty of learning though. Namely that I've never seen real raw data before, and I'm not sure many folks have.
| |
20:38 | eadmund | so this shows warped data ? http://vserver.13thfloor.at/Stuff/AXIOM/curves_all.svg
| |
20:38 | troy_s | Most of the knowledge comes sort of 'canned' from expectations of cameras etc. And some of that expectation is moot.
| |
20:39 | troy_s | eadmund: That's a stack of the shaper curves that the software generates to 'best fit' the data to the XYZ
| |
20:39 | troy_s | eadmund: It's nothing fancy beyond an aggregation of six profile dumps.
| |
20:39 | eadmund | I've never seen sensor-raw data. Only "firmware cooked".
| |
20:39 | troy_s | eadmund: Exactly!
| |
20:40 | troy_s | eadmund: So learning that sensors are completely whack and that what you and I are used to seeing is in fact not quite ... uh ... raw.
| |
20:40 | troy_s | :)
| |
20:40 | troy_s | Big curve.
| |
20:40 | troy_s | I've never seen anything like that above that 200 mark, and as Bertl wisely has said, I've also never actually seen 'raw' data.
| |
20:40 | troy_s | I've seen "Vendor approved raw"
| |
20:44 | eadmund | The Aptina guys told me one can tune the top shoulder on some sensors. But I have no idea whether one can recover usable color after the non linear effects. People are very sensitive to hue changes in high-L.
| |
20:44 | troy_s | eadmund: I think that depends on the sensor. In this case, Bertl is handcuffed to what the vendor discloses for twiddling.
| |
20:45 | troy_s | eadmund: And yes, then you have all the nifty stuff after that. Like 'does xxx tweak provide data that can still be fit to an XYZ transform?"
| |
20:45 | troy_s | Getting to stuff like the scene referred values is sort of a key thing in motion picture work, not the least of which is for VFx etc.
| |
20:45 | troy_s | Getting to a 'great looking image' is the job of a colorist.
| |
20:46 | troy_s | The latter can almost always succeed given enough energy, the former can be challenging.
| |
20:46 | troy_s | And I'd argue that if you can get the former sorted out, the latter's work is made a degree easier.
| |
20:47 | troy_s | (and of course, that isn't for a second to dismiss the privileged term 'great looking image', which is of course arbitrary to context etc. Just general ball-park rubbish-speak here.)
| |
20:52 | ApertusWeb8 | joined the channel | |
21:12 | ApertusWeb8 | left the channel | |
21:14 | intracube | left the channel | |
21:16 | intracube | joined the channel | |
21:26 | eadmund | left the channel | |
21:41 | Bertl | okay, buffering seems to work as expected, just needs some tuning to actually benefit from it
| |
21:41 | Bertl | off to bed now ... have a good one everyone!
| |
21:45 | danieel | Bertl: clear on buffering now. I have no such issue, I am streaming right from ADC, line buffer at most
| |
21:52 | gwelkind | joined the channel | |
22:03 | intracube | left the channel | |
22:10 | se6astian | ah its late already, lost track of time :)
| |
22:10 | se6astian | good night
| |
22:10 | se6astian | left the channel | |
22:37 | jerknextdoor | joined the channel | |
22:56 | gwelkind | left the channel |