22:02 | se6astian | good night!
| |
22:02 | se6astian | left the channel | |
00:08 | FergusL | trying to understand VNG
| |
00:09 | FergusL | http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/tingchen/main.htm
| |
01:09 | Bertl | I found this article quite informative http://www.stark-labs.com/craig/articles/assets/Debayering_API.pdf
| |
01:26 | FergusL | yes it is
| |
01:29 | FergusL | I'm trying to look into VNG interpolation as troy_s suggested
| |
01:29 | FergusL | the explanation might be enough to write the code for it
| |
01:36 | FergusL | Bertl: do you dump the whole sensed area ?
| |
02:02 | Bertl | you mean if I dump the data at full resolution?
| |
02:03 | Bertl | if so, then yes, the sensor has 4096x3072 pixels (monochrome or color doesn't matter)
| |
02:04 | dmj_nova | can we call them photosites?
| |
02:04 | dmj_nova | so as to distinguish from RGB pixels
| |
02:05 | Bertl | well, we can call them what we want, the manufacturer calls them pixels
| |
02:06 | Bertl | (and obviously most sensor manufacturers do, otherwise they wouldn't claim so-and-so many megapixels :)
| |
02:06 | Bertl | but I'm fine with photosites as well, no problem there
| |
02:08 | Bertl | I've even read sensel somewhere, which I consider funny :)
| |
02:08 | FergusL | yes
| |
02:08 | dmj_nova | sebastian and I started using that term after I got confused as to the amount of photosites/pixels in the sensor initially
| |
02:09 | FergusL | well, it's easy with Bayer pattern sensors: photosite count == pixel count
| |
02:09 | FergusL | not the same for other technologies like X3, for instance
| |
02:09 | Bertl | the really interesting point is, what do we call the elements which make up our HDMI output?
| |
02:09 | FergusL | bits
| |
02:09 | FergusL | or potatoes, your preference
| |
02:09 | Bertl | well, we do 4:2:2 RGB there :)
| |
02:10 | Bertl | so yes, potatoes is probably a good term :)
| |
02:10 | dmj_nova | I usually consider a "pixel" to be a single full-color dot
| |
02:10 | Bertl | which we do not really have, no?
| |
02:11 | dmj_nova | each color in the HDMI output is a sub-pixel
| |
02:11 | Bertl | so photosites are sub-pixels for you then?
| |
02:12 | dmj_nova | I suppose one could also call each single-color dot on the sensor a sub-pixel, but photosite seems a better fit
| |
02:12 | dmj_nova | (for the sensor that is)
| |
02:12 | FergusL | sensels are SENSor ELements, pixels are PICture ELements
| |
02:13 | dmj_nova | especially since algorithms use multiple adjacent photosites to interpolate the debayered pixels, not just blocks of 4
| |
02:13 | FergusL | blocks of 5x5 !
| |
02:13 | Bertl | well, a gaussian blur also uses many potatoes, and we still call them potatoes :)
| |
02:13 | FergusL | that's the root of my question about the area
| |
02:14 | Bertl | please don't mix multi pixel/photosite/potato algorithms with the properties of pixels
| |
02:14 | FergusL | I was wondering how the algos accommodate this
| |
02:15 | Bertl | the simplest debayer algo doesn't care about any interpixel stuff
| |
02:16 | Bertl | it just picks red 'pixels', blue 'pixels' and 'green' pixels and combines them to an image, no interpolation
| |
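A minimal sketch of that simplest, no-interpolation debayer, assuming a numpy array named raw holding the photosite values in an RGGB layout (the array name, the layout, and the choice of Python/numpy are illustrative, not the camera's actual format):

    import numpy as np

    def debayer_nearest(raw):
        # Every 2x2 RGGB cell becomes one output pixel at half resolution,
        # so nothing is interpolated - values are just picked and combined.
        r = raw[0::2, 0::2]            # red photosites
        g = raw[0::2, 1::2]            # one of the two greens per cell
        b = raw[1::2, 1::2]            # blue photosites
        return np.dstack([r, g, b])    # (H/2, W/2, 3) RGB image
| |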
02:16 | Bertl | the next step is putting them into perspective by considering their relative locations
| |
02:16 | Bertl | one step further, you get algorithms looking way beyond one pixel to 'guess' the missing information
| |
02:17 | FergusL | yes, that's what I'm trying to implement
| |
02:17 | FergusL | http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/tingchen/algodep/vargra.html
| |
02:17 | Bertl | but that is very much the same as sharpen tries to 'guess' the missing information
| |
02:18 | FergusL | really ?
| |
02:18 | FergusL | as in, sharpening enhancements on pictures ?
| |
02:18 | Bertl | yes, it is all guesswork and a little voodoo :)
| |
02:18 | FergusL | I thought it was in the frequency domain
| |
02:18 | FergusL | (well it is, but not by method)
| |
02:19 | Bertl | sure, thing is, you cannot recover lost information
| |
02:19 | Bertl | i.e. if you take a sharp picture and run a blur over it, you throw away high frequency bands
| |
02:19 | Bertl | you can try to guess this information, but you can never reconstruct it precisely
| |
02:20 | Bertl | that's why there are so many image sharpening algos out there, one works better for this case, the other better for the next
| |
02:21 | Bertl | it is quite similar for debayering
| |
02:21 | Bertl | let me give you a simple but intuitive example:
| |
02:22 | FergusL | go ahead, I brought cereal, for lack of potatoes
| |
02:22 | Bertl | consider an image (artificial test image), which has a red point on every blue photosite, a blue point on every green photosite and a green point at every red photosite
| |
02:23 | Bertl | the captured image will be black in all 3(4) channels
| |
02:23 | Bertl | but the image itself will look white to the observer, and it definitely is neither
| |
02:24 | Bertl | but most importantly, there is no way to reconstruct the original image from the captured data :)
| |
02:25 | FergusL | I think I get the idea of this example
| |
02:26 | Bertl | luckily most images we find are more predictable and thus can be relatively easily reconstructed (guessed :)
| |
02:28 | Bertl | and now I'm off to bed ... have fun!
| |
02:29 | FergusL | good night ! thanks for the lesson
| |
03:38 | troy_s | sensel
| |
03:39 | troy_s | They are not bloody pixels, no matter how much damn R3D propaganda or bad math is applied.
| |
03:40 | troy_s | FergusL: dcraw has a VNG algo you can try.
| |
03:40 | FergusL | yes, i looked at the code
| |
03:40 | FergusL | but tbh a clean explanation like the ~tingchen page provides seems better
| |
03:41 | troy_s | FergusL: I still believe cubic b with prefilter to scale the chroma will result in a better (greater perceptual sharpness and possibly greater data) image.
| |
03:41 | FergusL | I trust you, I've seen your tests with this in Blender
| |
03:41 | FergusL | is it """easy""" to implement ?
| |
03:42 | FergusL | there are packages for oiio in ubuntu now
| |
03:42 | FergusL | but without py bindings...
| |
03:42 | troy_s | FergusL: The scaling algo is (it is a frequency domain scale) but there are some nuances
| |
03:42 | troy_s | namely how the sensor gathers the image.
| |
03:42 | troy_s | the image isn't a pure set of "identicals" but rather three sub-images with different citing
| |
03:43 | troy_s | So to maximize it, the cubic b with pre would need some fractional adjustments based on the lower level sensel positions
| |
03:44 | FergusL | ha
| |
03:44 | troy_s | (technically they all resolve to subpixel positions, so a "perfect" chroma scale would be slightly shifted to perfectly align the scaled B and R planes to match with the G)
| |
03:44 | FergusL | if I can land basic dumping from the raw files and basic output with OIIO, and leave debayering and processing open, that could still be a starting point
| |
03:45 | troy_s | (where G could be considered to be a half res perfect citing)
| |
03:45 | troy_s | (well, half X, full Y)
| |
03:46 | troy_s | You would want to get your debayer in pre-oiio
| |
03:46 | troy_s | OIIO is for pixels
| |
03:46 | troy_s | or lower level raws, at which point you could merely dump the channels as a sort of half debayer
| |
03:46 | troy_s | Make sense?
| |
03:47 | FergusL | of course, pre-OIIO; as I see it it's: read from the raw binary file, debayer, and write to OIIO pixels
| |
03:49 | FergusL | I had two questions: how about the borders of the picture? simply drop them and keep an image that is slightly smaller than max resolution? or interpolate differently?
| |
03:49 | FergusL | also, how should I scale the 0-4095 values from the RAW when writing to OIIO pixels? are these floats or int8/16?
| |
03:50 | troy_s | explain?
| |
03:50 | troy_s | What do you mean borders?
| |
03:50 | FergusL | well, if I'm at pixel 0,0
| |
03:51 | FergusL | to interpolate I would need pixel(-2,-2) which I don't have
| |
03:51 | troy_s | And regarding the second, a correct conversion to float would likely suffice. And I would do that pre-debayer as your data will be deeper.
| |
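A sketch of the float conversion troy_s suggests doing before debayering; the 12-bit 0-4095 range comes from FergusL's question, and normalising to 0.0-1.0 is just one reasonable choice:

    import numpy as np

    def raw12_to_float(raw12):
        # Promote the 12-bit integer values to float so the debayer and any
        # later interpolation work on deeper data; normalisation is optional.
        return raw12.astype(np.float32) / 4095.0
| |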
03:51 | troy_s | Huh? Why do you need negative pixels?
| |
03:51 | FergusL | it's an array, an x,y array
| |
03:52 | troy_s | Anything less than zero has zero emission. And I can't remember how VNG deals with that.
| |
03:52 | troy_s | I know for some scales you offset and assume black
| |
03:52 | FergusL | for every pixel, debayering needs pixels in each direction (how many varies depending on the algo)
| |
03:52 | troy_s | or do linear interps (as in assume identical pixels near borders)
| |
03:52 | FergusL | http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/tingchen/algodep/vargra.html
| |
03:52 | FergusL | ok
| |
03:53 | troy_s | So "if i < 0 color= image[0,Y]
| |
03:53 | FergusL | that's what I will be doing I guess, clamp the X and Y values to the valid range so that it "repeats" the borders
| |
03:53 | troy_s | etc
| |
03:53 | troy_s | I think that eliminates dark fringes in the shadows actually - just assume min where < min, and max where > max
| |
03:54 | troy_s | Yes
| |
03:54 | troy_s | that properly weights the edges
| |
03:54 | troy_s | otherwise they fringe dark on some interps
| |
03:54 | FergusL | okay, got it
| |
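A sketch of the border handling just agreed on: clamp out-of-range coordinates so the edge photosites are repeated (assuming a numpy array raw of shape (H, W); the corner refinement troy_s mentions next is not included here):

    def sample(raw, y, x):
        # Replicate the borders: y < 0 -> 0, y > H-1 -> H-1, same for x.
        # Corners simply repeat the corner photosite with this approach.
        h, w = raw.shape
        y = min(max(y, 0), h - 1)
        x = min(max(x, 0), w - 1)
        return raw[y, x]
| |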
03:55 | troy_s | only issue is the corner diagonals
| |
03:55 | troy_s | as in > max Y AND > max X, e.g.
| |
03:55 | FergusL | yes
| |
03:55 | troy_s | at which point an unsophisticated lerp should suffice
| |
03:56 | troy_s | so the pure diagonal is the corner pixel, and the rest is simple lerp
| |
03:56 | troy_s | (effectively lerp stretching the corner pixel with the adjacent row or column pixel)
| |
03:57 | troy_s | but honestly... try cubic b with pre
| |
03:57 | troy_s | that has yet to be tried anywhere
| |
03:57 | troy_s | and I would love to see it against VNG
| |
03:57 | FergusL | I'll consider this
| |
03:57 | troy_s | my gut tells me (the artistic side that is in love with cbp) that it will beat VNG handily
| |
03:58 | FergusL | the cbp ?
| |
03:58 | troy_s | There is no way that cubic b w pre can be so mathematically pure _and_ suck for a chroma scale
| |
03:58 | FergusL | yes, it seems obvious
| |
03:58 | troy_s | cubic b with prefilter = CBP acronym because it sucks typing it every damn time. :)
| |
03:59 | FergusL | haha, yes
| |
03:59 | troy_s | it truly is the most accurate scale I have seen available, and I have seen more than I will admit to.
| |
03:59 | troy_s | (where interpolated accuracy is desirable)
| |
04:00 | troy_s | (because needs govern scale selection obviously)
| |
04:03 | FergusL | can you elaborate on implementing cbp for debayering?
| |
04:04 | FergusL | you're mentioning "chroma" though I take it the goal is still pixel values for r g b
| |
04:06 | troy_s | FergusL: Very simple
| |
04:06 | troy_s | Where VNG uses the surrounding pixels to estimate the result
| |
04:06 | troy_s | Take each plane of R, G, and B, convert to float, and scale accordingly.
| |
04:07 | troy_s | So for G, take full Y, and scale to 2X for a full size starting plane.
| |
04:07 | troy_s | Scale R to 2X and 2Y
| |
04:07 | troy_s | and B to 2X and 2Y
| |
04:07 | troy_s | then merge for a "phase one"
| |
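A sketch of that "phase one", using scipy.ndimage.zoom with order=3 and prefilter=True as a stand-in for the cubic B-spline with prefilter, and assuming an RGGB layout; both choices are assumptions, not something troy_s specified:

    import numpy as np
    from scipy.ndimage import zoom

    def cbp_debayer_phase_one(raw):
        raw = raw.astype(np.float32)
        h, w = raw.shape
        # Green is present in every row (full Y) but only every other column (half X).
        g = np.empty((h, w // 2), dtype=np.float32)
        g[0::2, :] = raw[0::2, 1::2]    # greens on the R rows
        g[1::2, :] = raw[1::2, 0::2]    # greens on the B rows
        r = raw[0::2, 0::2]             # half X, half Y
        b = raw[1::2, 1::2]             # half X, half Y
        g_full = zoom(g, (1, 2), order=3, prefilter=True)   # scale 2X only
        r_full = zoom(r, (2, 2), order=3, prefilter=True)   # scale 2X and 2Y
        b_full = zoom(b, (2, 2), order=3, prefilter=True)   # scale 2X and 2Y
        return np.dstack([r_full, g_full, b_full])          # no "citing" yet
| |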
04:07 | troy_s | Phase two would entail providing correct citing
| |
04:08 | troy_s | Which may be tricky... as assume we are looking at the uppermost Bayer row of, say, RGRGRG
| |
04:09 | troy_s | If we are using the G as the baseline full sized starting point.
| |
04:09 | troy_s | the first R and G actually have different parts of the image - ever so slightly offset (different citing)
| |
04:10 | FergusL | yes
| |
04:10 | troy_s | To perfectly align the red channel, we actually want the interpolated value between the first R and the second R.
| |
04:10 | troy_s | Sense?
| |
04:10 | troy_s | And it gets equally ugly going down
| |
04:10 | FergusL | hm... not sure
| |
04:10 | troy_s | Well think of it this way
| |
04:11 | FergusL | I see the G offset on the first row
| |
04:11 | troy_s | if we took a full sensor as a mono (no filters)
| |
04:11 | FergusL | (which isn't in the second row that is GBGBGBGB)
| |
04:11 | troy_s | we have a perfect 4k image or whatever
| |
04:11 | FergusL | yes
| |
04:11 | troy_s | each sensel represents a perfect greyscale value
| |
04:11 | troy_s | The Bayer pattern would be similar... but with filters
| |
04:12 | troy_s | so RGRGRGRGRG then GBGBGBGBGB etc.
| |
04:12 | FergusL | yes
| |
04:12 | troy_s | the first R and G represent _different_ parts of the image
| |
04:12 | FergusL | understood
| |
04:12 | troy_s | very close together, but different parts
| |
04:13 | troy_s | So peeling apart the planes into channels is actually three different images
| |
04:13 | troy_s | not just three different planes of RGB
| |
04:13 | troy_s | they are different spatially
| |
04:13 | troy_s | (and in a semi-complex checkerboard fashion)
| |
04:13 | troy_s | Even our green has citing issues
| |
04:13 | FergusL | ok, this time I get it
| |
04:13 | FergusL | yes
| |
04:14 | troy_s | so to marry the RGB, you have to be acutely aware of the sensel layout on the sensor
| |
04:14 | troy_s | so you can anchor the interpolation correctly
| |
04:14 | troy_s | (hence citing)
| |
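A sketch of what that anchoring might look like on top of the phase-one planes, using scipy.ndimage.shift for the fractional offsets; the half-sensel amounts and their signs are illustrative guesses for an RGGB layout, not values troy_s gave, since the correct offsets depend on how the upscale anchors its samples:

    from scipy.ndimage import shift

    def align_planes(r_full, g_full, b_full):
        # Nudge R and B by a fraction of a pixel so their sample positions
        # line up with G; treat the 0.5 sensel offsets as placeholders.
        r_aligned = shift(r_full, (+0.5, +0.5), order=3, prefilter=True)
        b_aligned = shift(b_full, (-0.5, -0.5), order=3, prefilter=True)
        return r_aligned, g_full, b_aligned
| |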
04:15 | FergusL | in more precise terms than just the Bayer pattern (RGGB, BGGR etc...) ?
| |
04:15 | troy_s | We would actually want to stretch the green a sensel to the left on the top row
| |
04:15 | FergusL | yes
| |
04:15 | troy_s | to perfectly align with the green on the second row
| |
04:15 | FergusL | and one to the right on the second
| |
04:16 | troy_s | but we don't have the data for it!
| |
04:18 | troy_s | Quite sure there is a clever trick there perhaps - like rotating the pixels 45 degrees and interpolating the missing edge pixels from that for example
| |
04:18 | troy_s | (because you could then at least reconstruct 50% of the edge pixel based on the diagonal of greens for example)
| |
04:19 | troy_s | (still 50% better than we had :))
| |
04:19 | FergusL | yes
| |
04:19 | troy_s | cbp interpolates both directions
| |
04:20 | troy_s | so it could very well already have a complex solution buried in it
| |
04:20 | troy_s | (you generate the coefficients by running along x then y)
| |
04:21 | troy_s | (and given that it is in the frequency domain, who knows the result. for certain the x positions should ideally be shifted a sensel over on alternates... or generate two proper aspect images and scale each then merge into a single channel)
| |
04:22 | troy_s | I have no idea. All I know is the CBP has all sorts of magic cooked into it. A stair interpolation done a dozen times is 1:1 with a single pass interpolation for example.
| |
04:22 | troy_s | Which boggles my mind.
| |
04:23 | troy_s | (unlike the sharpeners such as Sinc / Lanczos or blurries such as cubicb etc.)
| |
04:23 | troy_s | Anyways... night.
| |
04:24 | FergusL | same here soon, thanks for the details
| |
09:51 | mars_ | FergusL: yeah, I'm interested in working on that code
| |
10:29 | se6astian | joined the channel | |
10:32 | se6astian | good morning
| |
11:21 | se6astian | left the channel | |
12:47 | dmj_nova1 | joined the channel | |
12:48 | FergusL | Hi here
| |
12:49 | dmj_nova | left the channel | |
13:21 | troy_s | FergusL: https://twitter.com/dfelinto/status/396131780430151680
| |
13:25 | FergusL | oh
| |
13:33 | troy_s | FergusL: Finish the debayer attempt yet? ;)
| |
13:34 | FergusL | all I did was sleep since we last talked about it
| |
14:26 | Bertl | morning everyone!
| |
14:28 | Bertl | and to add confusion, the sensor can do color binning, which will give overlapping bins per color channel :)
| |
16:18 | FergusL | hi Bertl
| |
16:19 | Bertl | hey
| |
16:31 | FergusL | Gradient N = |G8 - G18| + |R3 - R13| + |B7 - B17| / 2 + |B9 - B19| / 2 + |G2 - G12| / 2 + |G4 - G14| / 2 <- the | are absolute value, right ?
| |
16:31 | FergusL | not some shady math unknown operator ?
| |
16:32 | FergusL | (http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/tingchen/algodep/vargra.html)
| |
16:33 | Bertl | looks like, as there is nothing else specified, in 'normal' math those are simply absolute values
| |
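For reference, the north gradient written out in code, taking | | as plain absolute values; w is assumed to be a 5x5 window indexed w[row][col], with the page's 1..25 photosite numbering read row-major (so photosite 8 is w[1][2] and the centre, 13, is w[2][2]):

    def gradient_north(w):
        return (abs(w[1][2] - w[3][2])        # |G8 - G18|
              + abs(w[0][2] - w[2][2])        # |R3 - R13|
              + abs(w[1][1] - w[3][1]) / 2    # |B7 - B17| / 2
              + abs(w[1][3] - w[3][3]) / 2    # |B9 - B19| / 2
              + abs(w[0][1] - w[2][1]) / 2    # |G2 - G12| / 2
              + abs(w[0][3] - w[2][3]) / 2)   # |G4 - G14| / 2
| |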
16:49 | se6astian | joined the channel | |
16:49 | se6astian | good evening
| |
16:49 | se6astian | http://nofilmschool.com/2013/11/first-images-apertus-super-35mm-camera-prototype-axiom-alpha/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter
| |
16:49 | Bertl | evening!
| |
16:52 | Bertl | we probably need some folks to address questions and answer comments on such sites ...
| |
16:53 | Bertl | they do not have to know everything about the project and the technical details, they just have to know whom to ask for details
| |
16:55 | se6astian | what questions in particular?
| |
16:56 | Bertl | questions and statements in general, otherwise there will be a lot of speculation and drawing (mostly wrong) conclusions
| |
16:57 | se6astian | sure, when I have time I read the article and comments
| |
16:57 | se6astian | but so far there is nothing to comment on
| |
17:00 | Bertl | if you say so ...
| |
17:02 | [1]se6astian | joined the channel | |
17:03 | FergusL | WHAT THE...
| |
17:03 | FergusL | "they’ve got images in 4K. However, just like me first thing in the morning before I put on my makeup, they’re looking a little rough. "
| |
17:03 | FergusL | I... just...
| |
17:04 | se6astian | left the channel | |
17:04 | [1]se6astian | changed nick to: se6astian
| |
17:06 | se6astian | don't you wear makeup every day? :P
| |
17:08 | Bertl | V Renée obviously does ...
| |
17:10 | FergusL | aah... the good ol' times when Koo was writing every single article himself
| |
17:21 | dmj_nova1 | left the channel | |
17:37 | se6astian | Bertl, I just checked the lens mount files Konstantin sent me and they are indeed intact NURBS curves when opened in Rhino3D, and Rhino can make edits and save as STEP file again
| |
17:37 | Bertl | excellent!
| |
17:37 | se6astian | I just never used Rhino before so I have no idea how to edit the curves yet, but everything is there to make these changes
| |
17:44 | se6astian | do we need any HDMI functional testing, protocol-level debugging, or protocol compliance testing help/advice?
| |
17:45 | se6astian | I am just writing back to John Burt
| |
17:46 | Bertl | we will, at some point, if we plan to support HDMI out
| |
17:46 | Bertl | ATM, there is not much to be tested/helped with, i.e. the analog devices chip works as expected and is what we will use for the prototype
| |
17:47 | Bertl | but maybe he can suggest good hdmi interface chips
| |
17:48 | se6astian | actually we never wanted to have HDMI output but it was what the zedboard had already ;)
| |
17:49 | Bertl | yeah, but maybe hdmi is a good choice for a viewfinder or similar at some point
| |
17:50 | se6astian | at some point maybe....
| |
17:50 | se6astian | I will reply to him
| |
17:54 | se6astian | the reply-to options now work by default in my gmail as well btw
| |
17:54 | Bertl | ah, good!
| |
18:09 | dmj_nova | joined the channel | |
18:53 | se6astian | left the channel | |
20:12 | se6astian | joined the channel | |
20:17 | se6astian | 800 unique visitors already today on the website
| |
20:17 | se6astian | we won't hit our record from the open modules concept
| |
20:17 | se6astian | but also we had just 2 articles so far
| |
20:18 | se6astian | more coming on monday when office hours continue most likely
| |
20:18 | Bertl | it will be more dispersed over the days I guess
| |
20:20 | se6astian | yes, this time we didn't give the writers any head start
| |
20:37 | se6astian | interesting datasheet specs of the sensor rumored to be in the BMCC btw: http://www.fairchildimaging.com/files/data_sheet_cis_2521f.pdf
| |
20:38 | se6astian | lower resolution and smaller sensor diameter than the cmv12000
| |
20:38 | se6astian | but low light, noise and dynamic range values are incredible
| |
20:40 | Bertl | LCC carrier might be a little problematic for testing
| |
20:41 | Bertl | and the global shutter is not pipelined
| |
20:41 | Bertl | (but at least it has the global shutter option :)
| |
20:42 | se6astian | what is an lcc carrier?
| |
20:42 | se6astian | the package the sensor sits on?
| |
20:42 | Bertl | leadless chip carrier
| |
20:45 | Bertl | less than half the number of photosites, 1/3rd the max framerate
| |
20:48 | Bertl | but yes, the sensitivity and noise values look impressive
| |
20:48 | Bertl | maybe contact BAE and get a detailed data sheet?
| |
20:51 | se6astian | we can try, and pricing information, just out of curiosity
| |
20:51 | Bertl | yup
| |
20:51 | se6astian | do you know how dark noise compares exactly?
| |
20:52 | se6astian | cmv12000: Dark noise 13 e- (RMS)
| |
20:52 | se6astian | CIS: <1.5 e- RMS Readout Noise
| |
20:52 | se6astian | CIS: <30 e-/pixel/sec dark current @ 20°C
| |
20:52 | se6astian | is this an apples and oranges comparison?
| |
20:53 | Bertl | no the dark noise and the dark current can be compared
| |
20:53 | se6astian | cmv12000: Dark current 125 e-/s (25 degC)
| |
20:54 | Bertl | so at roughly equivalent temperatures (CMV12k needs to be cooled for that)
| |
20:54 | se6astian | is read-out-noise == dark noise?
| |
20:54 | Bertl | the cmv12k has roughly 4 times the pixel noise of the CIS
| |
20:55 | Bertl | the read-out noise is something different
| |
20:55 | se6astian | I see
| |
20:56 | Bertl | but it is hard to tell for both what precisely is meant
| |
20:57 | Bertl | the 13 e- seem to be the temporal noise
| |
20:57 | Bertl | (in the analog domain)
| |
20:57 | Bertl | so maybe that indeed correlates to the readout noise of the CIS
| |
21:00 | Bertl | the range is interesting, with 30ke- full well charge and 86dB
| |
21:58 | se6astian | I will see if we can get more information, likely with an NDA
| |
21:58 | se6astian | but for now, time for bed ;)
| |
21:58 | se6astian | good night
| |
21:58 | se6astian | left the channel |