
#apertus IRC Channel Logs

2013/11/02

Timezone: UTC


23:02
se6astian
good night!
23:02
se6astian
left the channel
01:08
FergusL
trying to understand VNG
01:09
FergusL
http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/tingchen/main.htm
02:09
Bertl
I found this article quite informative http://www.stark-labs.com/craig/articles/assets/Debayering_API.pdf
02:26
FergusL
yes it is
02:29
FergusL
I'm trying to look into VNG interpolation as troy_s suggested
02:29
FergusL
the explanation might be enough to write the code for it
02:36
FergusL
Bertl: do you dump the whole sensed area ?
03:02
Bertl
you mean if I dump the data at full resolution?
03:03
Bertl
if so, then yes, the sensor has 4096x3072 pixels (monochrome or color doesn't matter)
03:04
dmj_nova
can we call them photosites?
03:04
dmj_nova
so as to distinguish from RGB pixels
03:05
Bertl
well, we can call them what we want, the manufacturer calls them pixels
03:06
Bertl
(and obviously most sensor manufacturers do, otherwise they wouldn't claim so-and-so many megapixels :)
03:06
Bertl
but I'm fine with photosites as well, no problem there
03:08
Bertl
I've even read sensel somewhere, which I consider funny :)
03:08
FergusL
yes
03:08
dmj_nova
sebastian and I started using that term after I initially got confused about the number of photosites/pixels in the sensor
03:09
FergusL
well, it's easy with bayer pattern sensors: photosite count == pixel count
03:09
FergusL
not the same for other technologies like X3 or else
03:09
Bertl
the really interesting point is, what do we call the elements which make up our HDMI output?
03:09
FergusL
bits
03:09
FergusL
or potatoes, your preference
03:09
Bertl
well, we do 4:2:2 RGB there :)
03:10
Bertl
so yes, potatoes is probably a good term :)
03:10
dmj_nova
I usually consider a "pixel" to be a single full-color dot
03:10
Bertl
which we do not really have, no?
03:11
dmj_nova
each color in the HDMI output is a sub-pixel
03:11
Bertl
so photosites are sub-pixels for you then?
03:12
dmj_nova
I suppose one could also call each single-color dot on the sensor a sub-pixel, but photosite seems a better fit
03:12
dmj_nova
(for the sensor that is)
03:12
FergusL
sensels are SENSor ELements, pixels are PIX, picture ELements
03:13
dmj_nova
especially since algorithms use multiple adjacent photosites to interpolate the debayered pixels, not just blocks of 4
03:13
FergusL
blocks of 5x5 !
03:13
Bertl
well, a gaussian blur also uses many potatoes, and we still call them potatoes :)
03:13
FergusL
that's the root of my question about the area
03:14
Bertl
please don't mix multi pixel/photosite/potato algorithms with the properties of pixels
03:14
FergusL
I was wondering how the algos accommodate this
03:15
Bertl
the simplest debayer algo doesn't care about any interpixel stuff
03:16
Bertl
it just picks red 'pixels', blue 'pixels' and green 'pixels' and combines them to an image, no interpolation
03:16
Bertl
the next step is putting them into perspective by considering their relative locations
03:16
Bertl
one step further, you get algorithms looking way beyond one pixel to 'guess' the missing information
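Bertl's progression (pick, then weight by relative location, then look wider) can be sketched in code. Below is a minimal bilinear debayer, roughly his second step, assuming an RGGB layout; this is an illustrative sketch, not the project's code:

```python
import numpy as np

def debayer_bilinear(mosaic):
    """Fill each colour plane by averaging same-colour samples in a
    3x3 neighbourhood (edge-repeated); real samples are kept as-is.
    Assumes an RGGB Bayer layout."""
    h, w = mosaic.shape
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    g = ~(r | b)
    out = np.empty((h, w, 3), np.float32)
    for ch, mask in enumerate((r, g, b)):
        # Values are zeroed where the colour was not sampled; dividing
        # the summed values by the summed weights averages only the
        # photosites that actually carry this colour.
        vals = np.pad(np.where(mask, mosaic, 0).astype(np.float32), 1, mode='edge')
        wts = np.pad(mask.astype(np.float32), 1, mode='edge')
        num = sum(vals[i:i+h, j:j+w] for i in range(3) for j in range(3))
        den = sum(wts[i:i+h, j:j+w] for i in range(3) for j in range(3))
        out[..., ch] = np.where(mask, mosaic, num / np.maximum(den, 1e-6))
    return out
```

The third step (VNG, CBP and friends) differs only in how far, and how cleverly, that neighbourhood lookup reaches.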
03:17
FergusL
yes, that's what I'm trying to implement
03:17
FergusL
http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/tingchen/algodep/vargra.html
03:17
Bertl
but that is very much the same as sharpen tries to 'guess' the missing information
03:18
FergusL
really ?
03:18
FergusL
as in, sharpening enhancements on pictures ?
03:18
Bertl
yes, it is all guesswork and a little voodoo :)
03:18
FergusL
I thought it was in the frequency domain
03:18
FergusL
(well it is, but not by method)
03:19
Bertl
sure, thing is, you cannot recover lost information
03:19
Bertl
i.e. if you take a sharp picture and run a blur over it, you throw away high frequency bands
03:19
Bertl
you can try to guess this information, but you can never reconstruct it precisely
03:20
Bertl
that's why there are so many image sharpening algos out there, one works better for this case, the other better for the next
03:21
Bertl
it is quite similar for debayering
03:21
Bertl
let me give you a simple but intuitive example:
03:22
FergusL
go ahead, I brought cereal, for lack of potatoes
03:22
Bertl
consider an image (artificial test image), which has a red point on every blue photosite, a blue point on every green photosite and a green point at every red photosite
03:23
Bertl
the captured image will be black in all 3(4) channels
03:23
Bertl
but the image itself will look white to the observer, and it definitely is neither
03:24
Bertl
but most importantly, there is no way to reconstruct the original image from the captured data :)
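Bertl's adversarial test image is easy to construct; a quick sketch, assuming an RGGB layout (hypothetical code, purely to illustrate the point):

```python
import numpy as np

# Paint red on the blue photosites, blue on the green photosites and
# green on the red photosites. Every photosite then receives only a
# colour its filter blocks, so the captured mosaic is entirely black
# even though the scene is full of saturated colour.
h, w = 4, 4
r_sites = np.zeros((h, w), bool); r_sites[0::2, 0::2] = True
b_sites = np.zeros((h, w), bool); b_sites[1::2, 1::2] = True
g_sites = ~(r_sites | b_sites)

scene = np.zeros((h, w, 3), np.float32)
scene[b_sites] = (1, 0, 0)   # red light on blue photosites
scene[g_sites] = (0, 0, 1)   # blue light on green photosites
scene[r_sites] = (0, 1, 0)   # green light on red photosites

# Each photosite records only its own filter's channel.
captured = (scene[..., 0] * r_sites
            + scene[..., 1] * g_sites
            + scene[..., 2] * b_sites)
print(captured.max())  # 0.0 -- no algorithm can recover this scene
```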
03:25
FergusL
I think I get the idea of this example
03:26
Bertl
luckily most images we find are more predictable and thus can be relatively easily reconstructed (guessed :)
03:28
Bertl
and now I'm off to bed ... have fun!
03:29
FergusL
good night ! thanks for the lesson
04:38
troy_s
sensel
04:39
troy_s
They are not bloody pixels, no matter how much damn R3D propaganda or bad math is applied.
04:40
troy_s
FergusL: dcraw has a VNG algo you can try.
04:40
FergusL
yes, i looked at the code
04:40
FergusL
but tbh a clean explanation like the ~tingchen page provides seems better
04:41
troy_s
FergusL: I still believe cubic b with prefilter to scale the chroma will result in a better (greater perceptual sharpness and possibly greater data) image.
04:41
FergusL
I trust you, I've seen your tests with this in Blender
04:41
FergusL
is it """easy""" to implement ?
04:42
FergusL
there are packages for oiio in ubuntu now
04:42
FergusL
but without py bindings...
04:42
troy_s
FergusL: The scaling algo is easy (it is a frequency domain scale), but there are some nuances
04:42
troy_s
namely how the sensor gathers the image.
04:42
troy_s
the image isn't a pure set of "identicals" but rather three sub images with different siting
04:43
troy_s
So to maximize it, the cubic b with pre would need some fractional adjustments based on the lower level sensel positions
04:44
FergusL
ha
04:44
troy_s
(technically they all resolve to subpixel positions, so a "perfect" chroma scale would be slightly shifted to perfectly align the scaled B and R planes to match with the G)
04:44
FergusL
if I can land basic dumping from the raw files, basic output with OIIO and leaving debayering and processing open, that could still be a starting point
04:45
troy_s
(where G could be considered to be a half res perfect siting: half X, full Y)
04:46
troy_s
You would want to get your debayer in pre-oiio
04:46
troy_s
OIIO is for pixels
04:46
troy_s
or lower level raws, at which point you could merely dump the channels as a sort of half debayer
04:46
troy_s
Make sense?
04:47
FergusL
of course; pre-oiio, as I see it, is: read from the raw binary file, debayer, and write to oiio pixels
04:49
FergusL
I had two questions: how about the borders of the picture? simply drop them and keep an image that is slightly smaller than max resolution? or interpolate differently?
04:49
FergusL
also, how should I scale the 0-4095 values from the RAW when writing to OIIO pixels? are these floats or int8/16?
04:50
troy_s
explain?
04:50
troy_s
What do you mean borders?
04:50
FergusL
well, if I'm at pixel 0,0
04:51
FergusL
to interpolate I would need pixel(-2,-2) which I don't have
04:51
troy_s
And regarding the second, a correct conversion to float would likely suffice. And I would do that pre-debayer, as your data will be deeper.
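A minimal sketch of that float conversion, with illustrative names: normalise the 12-bit raw values (0..4095) into 0.0..1.0 floats before debayering, so later interpolation keeps the full depth.

```python
import numpy as np

# 12-bit raw samples as they come off the sensor (values made up here).
raw = np.array([0, 2048, 4095], dtype=np.uint16)

# Divide by the 12-bit maximum to land in the 0.0..1.0 float range.
as_float = raw.astype(np.float32) / 4095.0
print(as_float[0], as_float[-1])  # 0.0 1.0
```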
04:51
troy_s
Huh? Why do you need negative pixels?
04:51
FergusL
it's an array, an x,y array
04:52
troy_s
Anything less than zero has zero emission. And I can't remember how VNG deals with that.
04:52
troy_s
I know for some scales you offset and assume black
04:52
FergusL
for every pixel, debayering needs one pixel in each direction (depends, varies depending on the algo)
04:52
troy_s
or do linear interps (as in assume identical pixels near borders)
04:52
FergusL
http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/tingchen/algodep/vargra.html
04:52
FergusL
ok
04:53
troy_s
So: if x < 0, color = image[0, y]
04:53
FergusL
that's what I will be doing I guess, clamp the X and Y values so that it "repeats" the borders
04:53
troy_s
etc
04:53
troy_s
I think that actually eliminates dark fringes in the shadows - just assume where < min or > max, use min or max
04:54
troy_s
Yes
04:54
troy_s
that properly weights the edges
04:54
troy_s
otherwise they fringe dark on some interps
04:54
FergusL
okay, got it
04:55
troy_s
only issue is the corner diagonals
04:55
troy_s
as in > max y AND > max x eg
04:55
FergusL
yes
04:55
troy_s
at which point an unsophisticated lerp should suffice
04:56
troy_s
so the pure diagonal is the corner pixel, and the rest is simple lerp
04:56
troy_s
(effectively lerp stretching the corner pixel with the adjacent row or column pixel)
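The border strategy discussed above (repeat the nearest edge pixel) can be sketched as a clamped sampler; note that plain index clamping also covers the corner diagonals. A toy version, not dcraw's actual border handling:

```python
import numpy as np

def sample_clamped(img, y, x):
    """Fetch a pixel with edge-repeat semantics: out-of-range indices
    are clamped to the nearest valid row/column, so the borders (and
    the corner diagonals, where both y and x overflow) simply repeat."""
    h, w = img.shape[:2]
    return img[min(max(y, 0), h - 1), min(max(x, 0), w - 1)]

img = np.arange(9.0).reshape(3, 3)
print(sample_clamped(img, -2, -2))  # 0.0 -> repeats the top-left corner
print(sample_clamped(img, 5, 1))    # 7.0 -> clamps to the bottom row
```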
04:57
troy_s
but honestly... try cubic b with pre
04:57
troy_s
that has yet to be tried anywhere
04:57
troy_s
and I would love to see it against VNG
04:57
FergusL
I'll consider this
04:57
troy_s
my gut tells me (the artistic side that is in love with cbp) that it will beat VNG handily
04:58
FergusL
the cbp ?
04:58
troy_s
There is no way that cubic b w pre can be so mathematically pure _and_ suck for a chroma scale
04:58
FergusL
yes, it seems obvious
04:58
troy_s
cubic b with prefilter = CBP acronym because it sucks typing it every damn time. :)
04:59
FergusL
haha, yes
04:59
troy_s
it truly is the most accurate scale I have seen available, and I have seen more than I will admit to.
04:59
troy_s
(where interpolated accuracy is desirable)
05:00
troy_s
(because needs govern scale selection obviously)
05:03
FergusL
can you elaborate on implementing cbp for debayering?
05:04
FergusL
you're mentioning "chroma" though I take it the goal is still pixel values for r g b
05:06
troy_s
FergusL: Very simple
05:06
troy_s
Where VNG uses the surrounding pixels to estimate the result
05:06
troy_s
Take each plane of R, G, and B, convert to float, and scale accordingly.
05:07
troy_s
So for G, take full Y, and scale to 2X for a full size starting plane.
05:07
troy_s
Scale R to 2X and 2Y
05:07
troy_s
and B to 2X and 2Y
05:07
troy_s
then merge for a "phase one"
05:07
troy_s
Phase two would entail providing correct siting
05:08
troy_s
Which may be tricky... assume we are looking at the uppermost Bayer row of, say, RGRGRG
05:09
troy_s
If we are using the G as the baseline full sized starting point.
05:09
troy_s
the first R and G actually see different parts of the image - ever so slightly offset (different siting)
05:10
FergusL
yes
05:10
troy_s
To perfectly align the red channel, we actually want the interpolated value between the first R and the second R.
05:10
troy_s
Sense?
05:10
troy_s
And it gets equally ugly going down
05:10
FergusL
hm... not sure
05:10
troy_s
Well think of it this way
05:11
FergusL
I see the G offset on the first row
05:11
troy_s
if we took a full sensor as a mono (no filters)
05:11
FergusL
(which isn't in the second row that is GBGBGBGB)
05:11
troy_s
we have a perfect 4k image or whatever
05:11
FergusL
yes
05:11
troy_s
each sensel represents a perfect greyscale value
05:11
troy_s
The Bayer pattern would be similar... but with filters
05:12
troy_s
so RGRGRGRGRG then GBGBGBGBGB etc.
05:12
FergusL
yes
05:12
troy_s
the first R and G represent _different_ parts of the image
05:12
FergusL
understood
05:12
troy_s
very close together, but different parts
05:13
troy_s
So peeling apart the planes into channels is actually three different images
05:13
troy_s
not just three different planes of RGB
05:13
troy_s
they are different spatially
05:13
troy_s
(and in a semi complex checkerboard fashion)
05:13
troy_s
Even our green has siting issues
05:13
FergusL
ok, this time I get it
05:13
FergusL
yes
05:14
troy_s
so to marry the RGB, you have to be acutely aware of the sensel layout on the sensor
05:14
troy_s
so you can anchor the interpolation correctly
05:14
troy_s
(hence siting)
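The siting correction for one row reduces to a simple 0.5 lerp between neighbouring samples of the same colour; a toy illustration with made-up values:

```python
import numpy as np

# The red samples sit one sensel to the side of the green baseline,
# so a red plane aligned to the greens wants the value halfway between
# each pair of horizontally adjacent red samples (a 0.5 lerp).
reds = np.array([10.0, 20.0, 30.0, 40.0])   # one mosaic row's R samples
aligned = 0.5 * (reds[:-1] + reds[1:])      # values halfway between them
print(aligned)  # [15. 25. 35.]
```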
05:15
FergusL
in more precise terms than just the Bayer pattern (RGGB, BGGR etc...) ?
05:15
troy_s
We would actually want to stretch the green a sensel to the left on the top row
05:15
FergusL
yes
05:15
troy_s
to perfectly align with the green on the second row
05:15
FergusL
and one to the right on the second
05:16
troy_s
but we don't have the data for it!
05:18
troy_s
Quite sure there is a clever trick there perhaps - like rotating the pixels 45 degrees and interpolating the missing edge pixels from that for example
05:18
troy_s
(because you could then at least reconstruct 50% of the edge pixel based on the diagonal of greens for example)
05:19
troy_s
(still 50% better than we had :))
05:19
FergusL
yes
05:19
troy_s
cbp interpolates both directions
05:20
troy_s
so it could very well already have a complex solution buried in it
05:20
troy_s
(you generate the coefficients by running along x then y)
05:21
troy_s
(and given that it is in the frequency domain, who knows the result. for certain the x positions should ideally be shifted a sensel over on alternates... or generate two proper aspect images and scale each then merge into a single channel)
05:22
troy_s
I have no idea. All I know is the CBP has all sorts of magic cooked into it. A stair interpolation done a dozen times is 1:1 with a single pass interpolation for example.
05:22
troy_s
Which boggles my mind.
05:23
troy_s
(unlike the sharpeners such as Sinc / Lanczos or blurries such as cubicb etc.)
05:23
troy_s
Anyways... night.
05:24
FergusL
same here soon, thanks for the details
10:51
mars_
FergusL: yeah, i'm interested in working on that code
11:29
se6astian
joined the channel
11:32
se6astian
good morning
12:21
se6astian
left the channel
13:47
dmj_nova1
joined the channel
13:48
FergusL
Hi here
13:49
dmj_nova
left the channel
14:21
troy_s
FergusL: https://twitter.com/dfelinto/status/396131780430151680
14:25
FergusL
oh
14:33
troy_s
FergusL: Finish the debayer attempt yet? ;)
14:34
FergusL
all I did was sleep since we last talked about it
15:26
Bertl
morning everyone!
15:28
Bertl
and to add confusion, the sensor can do color binning, which will give overlapping bins per color channel :)
17:18
FergusL
hi Bertl
17:19
Bertl
hey
17:31
FergusL
Gradient N = |G8 - G18| + |R3 - R13| + |B7 - B17| / 2 + |B9 - B19| / 2 + |G2 - G12| / 2 + |G4 - G14| / 2 <- the | are absolute value, right ?
17:31
FergusL
not some shady math unknown operator ?
17:32
FergusL
(http://scien.stanford.edu/pages/labsite/1999/psych221/projects/99/tingchen/algodep/vargra.html)
17:33
Bertl
looks like it - as nothing else is specified, in 'normal' math those are simply absolute values
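Reading the bars as plain absolute values, the north gradient quoted above becomes the following; parameter names follow the ~tingchen page's 5x5 neighbourhood numbering:

```python
def gradient_n(G2, G4, G8, G12, G14, G18, R3, R13, B7, B9, B17, B19):
    """North gradient of the VNG algorithm, with the |...| bars written
    out as explicit abs() calls."""
    return (abs(G8 - G18) + abs(R3 - R13)
            + abs(B7 - B17) / 2 + abs(B9 - B19) / 2
            + abs(G2 - G12) / 2 + abs(G4 - G14) / 2)
```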
17:49
se6astian
joined the channel
17:49
se6astian
good evening
17:49
se6astian
http://nofilmschool.com/2013/11/first-images-apertus-super-35mm-camera-prototype-axiom-alpha/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter
17:49
Bertl
evening!
17:52
Bertl
we probably need some folks to address questions and answer comments on such sites ...
17:53
Bertl
they do not have to know everything about the project and the technical details, they just have to know whom to ask for details
17:55
se6astian
what questions in particular?
17:56
Bertl
questions and statements in general, otherwise there will be a lot of speculation and drawing (mostly wrong) conclusions
17:57
se6astian
sure, when I have time I'll read the article and comments
17:57
se6astian
but so far there is nothing to comment on
18:00
Bertl
if you say so ...
18:02
[1]se6astian
joined the channel
18:03
FergusL
WHAT THE...
18:03
FergusL
"they’ve got images in 4K. However, just like me first thing in the morning before I put on my makeup, they’re looking a little rough. "
18:03
FergusL
I... just...
18:04
se6astian
left the channel
18:04
[1]se6astian
changed nick to: se6astian
18:06
se6astian
don't you wear makeup every day? :P
18:08
Bertl
V Renée obviously does ...
18:10
FergusL
aah... the good ol' days when Koo was writing every single article himself
18:21
dmj_nova1
left the channel
18:37
se6astian
Bertl, I just checked the lens mount files konstantin sent me and they are indeed intact NURBS curves when opened in Rhino3D, and Rhino can make edits and save as a STEP file again
18:37
Bertl
excellent!
18:37
se6astian
I just never used Rhino before so I have no idea how to edit the curves yet, but everything is there to make these changes
18:44
se6astian
do we need any HDMI functional testing, protocol-level debugging, or protocol compliance testing help/advice?
18:45
se6astian
I am just writing back to John Burt
18:46
Bertl
we will, at some point, if we plan to support HDMI out
18:46
Bertl
ATM, there is not much to be tested/helped with, i.e. the analog devices chip works as expected and is what we will use for the prototype
18:47
Bertl
but maybe he can suggest good hdmi interface chips
18:48
se6astian
actually we never wanted to have HDMI output but it was what the zedboard had already ;)
18:49
Bertl
yeah, but maybe hdmi is a good choice for a viewfinder or similar at some point
18:50
se6astian
at some point maybe....
18:50
se6astian
I will reply to him
18:54
se6astian
the reply-to options now work by default in my gmail as well btw
18:54
Bertl
ah, good!
19:09
dmj_nova
joined the channel
19:53
se6astian
left the channel
21:12
se6astian
joined the channel
21:17
se6astian
800 unique visitors already today on the website
21:17
se6astian
we won't hit our record from the open modules concept
21:17
se6astian
but also we had just 2 articles so far
21:18
se6astian
more coming on monday when office hours continue most likely
21:18
Bertl
it will be more dispersed over the days I guess
21:20
se6astian
yes, this time we didn't give the writers any head start
21:37
se6astian
interesting datasheet specs of the sensor rumored to be in the BMCC btw: http://www.fairchildimaging.com/files/data_sheet_cis_2521f.pdf
21:38
se6astian
lower resolution and smaller sensor diameter than the cmv12000
21:38
se6astian
but low light, noise and dynamic range values are incredible
21:40
Bertl
LCC carrier might be a little problematic for testing
21:41
Bertl
and the global shutter is not pipelined
21:41
Bertl
(but at least it has the global shutter option :)
21:42
se6astian
what is an lcc carrier?
21:42
se6astian
the package the sensor sits on?
21:42
Bertl
leadless chip carrier
21:45
Bertl
less than half the number of photosites, 1/3rd the max framerate
21:48
Bertl
but yes, the sensitivity and noise values look impressive
21:48
Bertl
maybe contact BAE and get a detailed data sheet?
21:51
se6astian
we can try, and pricing information, just out of curiosity
21:51
Bertl
yup
21:51
se6astian
do you know how dark noise compares exactly?
21:52
se6astian
cmv12000: Dark noise 13 e- (RMS)
21:52
se6astian
CIS: <1.5 e- RMS Readout Noise
21:52
se6astian
CIS: <30 e-/pixel/sec dark current @ 20°C
21:52
se6astian
is this an apples and oranges comparison?
21:53
Bertl
no, the dark noise and the dark current can be compared
21:53
se6astian
cmv12000: Dark current 125 e-/s (25 degC)
21:54
Bertl
so at roughly equivalent temperatures (CMV12k needs to be cooled for that)
21:54
se6astian
is read-out-noise == dark noise?
21:54
Bertl
the cmv12k has roughly 4 times the pixel noise of the CIS
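For reference, the "roughly 4 times" figure is consistent with the dark current numbers quoted above (measured at slightly different temperatures, so only roughly comparable):

```python
# Dark current figures as quoted from the two datasheets.
cmv12000_dark = 125.0   # e-/s at 25 degC (CMV12000)
cis2521_dark = 30.0     # e-/s at 20 degC, upper bound (CIS2521F)
print(cmv12000_dark / cis2521_dark)  # ~4.17
```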
21:55
Bertl
the read-out noise is something different
21:55
se6astian
I see
21:56
Bertl
but it is hard to tell for both what precisely is meant
21:57
Bertl
the 13 e- seem to be the temporal noise
21:57
Bertl
(in the analog domain)
21:57
Bertl
so maybe that indeed correlates to the readout noise of the CIS
22:00
Bertl
the range is interesting, with 30ke- full well charge and 86dB
22:58
se6astian
I will see if we can get more information, likely with an NDA
22:58
se6astian
but for now, time for bed ;)
22:58
se6astian
good night
22:58
se6astian
left the channel