#apertus IRC Channel Logs

2022/02/13

Timezone: UTC


23:04
vup
yeah, 20ms seems to have worked
23:04
vup
so I am not sure what the problem with longer exposures is then
23:08
Bertl_oO
maybe because the HDMI frame count advances while the buffer id stays the same?
23:09
vup
hmm
23:10
Bertl_oO
has no clue how the recorder works, but from se6astian's description this could be a reason
23:10
vup
currently it counts any two frames as a valid frame pair if they have consecutive frame counts, equal wrsel (in the first corner), and one of them has 0xAA as marker while the other has 0x55
23:11
vup
and it only ever considers frames that occurred one after another on the v4l2 input
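A minimal sketch of that pairing rule in Python, assuming each frame is a dict with hypothetical frame_count, wrsel, and marker fields decoded from the frame corners (illustrative names, not the recorder's actual API):

    def is_valid_pair(a, b):
        """Check whether two consecutive v4l2 frames form a valid pair."""
        # frame counts must be consecutive
        if b["frame_count"] != a["frame_count"] + 1:
            return False
        # write select (wrsel) in the first corner must match
        if a["wrsel"] != b["wrsel"]:
            return False
        # one frame must carry the 0xAA marker, the other 0x55
        return {a["marker"], b["marker"]} == {0xAA, 0x55}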
23:12
Bertl_oO
well, in that case it should be fine
23:12
vup
well actually it doesn't even "really" care about the markers
23:12
vup
yeah, so not sure what's the problem there
23:12
vup
especially if 20ms already works
23:12
Bertl_oO
you will get a bunch of 'fake' exposures though
23:13
Bertl_oO
i.e. duplicates from the same exposure
23:13
Bertl_oO
(unless you wait for a change in the read selection)
23:17
vup
yeah
23:17
vup
adding that would be an easy improvement
23:17
vup
but it seemingly doesn't work at all currently, which is confusing
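The improvement mentioned above, waiting for a change in the read selection before accepting another pair, would be a small extension of the earlier check; a sketch with the same hypothetical fields, dropping duplicate pairs from a single exposure:

    def pair_frames(frames):
        """Yield valid frame pairs, skipping duplicates of the same exposure."""
        last_wrsel = None
        for a, b in zip(frames, frames[1:]):
            if not is_valid_pair(a, b):
                continue
            # only accept a pair once wrsel has changed, so repeated
            # readouts of the same exposure are not counted again
            if a["wrsel"] == last_wrsel:
                continue
            last_wrsel = a["wrsel"]
            yield a, b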
23:20
vup
Bertl_oO: do you know if I totally fucked up my calculation with >100 seconds of internal exposure being possible, or does that seem reasonable?
23:37
aombk3
left the channel
23:38
Bertl_oO
100 seconds sounds doable
23:38
Bertl_oO
but it should be easy to verify, as we have time calculations in both directions in the snap
23:39
Bertl_oO
i.e. from time to register settings and from registers to the exposure time
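That two-way conversion suggests a simple round-trip sanity check; time_to_regs and regs_to_time below are hypothetical stand-ins for the snap tool's two calculation directions:

    def check_roundtrip(time_to_regs, regs_to_time, exposure_s, tol=1e-3):
        """Verify that time -> register settings -> time reproduces the input."""
        regs = time_to_regs(exposure_s)    # forward: time to register settings
        recovered = regs_to_time(regs)     # reverse: registers to exposure time
        assert abs(recovered - exposure_s) <= tol * exposure_s, (
            f"round trip drifted: {exposure_s} s -> {recovered} s")

    # e.g. probing the claimed >100 s internal exposure:
    # check_roundtrip(time_to_regs, regs_to_time, 100.0)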
23:42
aombk
joined the channel
23:51
Bertl_oO
off to bed now ... have a good one everyone!
23:51
Bertl_oO
changed nick to: Bertl_zZ
00:06
vup
Bertl_zZ: yeah, but I was too lazy to look up how the relevant registers get set up
00:54
intrac
left the channel
00:56
intrac
joined the channel
07:36
se6astian
Good day
07:38
se6astian
Maybe my issue with the longer exposure times was simply that the 480-frame crash occurs much sooner when a lot of duplicate frames pile up while aiming for the same 256 frames as with the shorter exposures; will test further
07:40
se6astian
Vup, the temperature relationship would be very much worth researching further. Your chart shows that noise level dip at 5ms, which puzzled me at first, but then I realized I captured those exposure times first, when I assume the sensor was still cooler
07:42
se6astian
But I wrote a timestamp into the capture directory name, so it should be easy to verify
10:02
Bertl_zZ
changed nick to: Bertl_oO
13:39
se6astian
vup, would be interesting to also chart the sensor temperature into that image: https://files.niemo.de/darkframe_deviation_hist.png
13:42
vup
yeah
13:42
vup
I probably can do that later
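One way such a chart could look, assuming a per-capture sensor temperature has been recorded alongside each deviation value (the data layout here is made up for illustration):

    import matplotlib.pyplot as plt

    def plot_deviation_with_temperature(exposures_ms, deviations, temps):
        """Overlay the sensor temperature on the darkframe deviation chart."""
        fig, ax1 = plt.subplots()
        ax1.plot(exposures_ms, deviations, "o-", color="tab:blue")
        ax1.set_xlabel("exposure time [ms]")
        ax1.set_ylabel("darkframe deviation")
        ax2 = ax1.twinx()  # second y-axis for the temperature readings
        ax2.plot(exposures_ms, temps, "--", color="tab:red")
        ax2.set_ylabel("sensor temperature [relative]")
        fig.savefig("darkframe_deviation_temp.png")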
13:43
se6astian
IIRC the on-chip sensor temperature of the cmv12000 is in degrees Celsius but relative, without an absolute reference, correct?
13:43
se6astian
I could collect temperatures from the other sensors in the camera as well with the capture script
13:44
se6astian
there are a few on the power board and the Zynq IIRC
13:44
se6astian
or we just assume the first/lowest cmv12000 value is room temperature
14:33
Bertl_oO
yes, you want to do a one-point, or better, two-point calibration for the sensor temperature readings (per sensor)
14:34
Bertl_oO
but other than that, they are rather precise and stable
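A two-point calibration of that kind is just a linear map pinned by two reference measurements; a sketch (the example raw readings and reference temperatures below are placeholders):

    def two_point_calibration(raw_lo, temp_lo, raw_hi, temp_hi):
        """Build a raw-reading-to-degrees-Celsius mapping from two
        known reference points (done per sensor)."""
        scale = (temp_hi - temp_lo) / (raw_hi - raw_lo)
        return lambda raw: temp_lo + (raw - raw_lo) * scale

    # e.g. with placeholder reference points at 0 degC and 40 degC:
    # to_celsius = two_point_calibration(1890, 0.0, 2140, 40.0)
    # to_celsius(2000)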
14:52
vup
Well, for now just comparing them relative to each other should be enough, right?
15:01
Bertl_oO
for the same sensor, that's fine
15:19
vup
yeah, nice
16:02
se6astian
vup: what's a good place to store the recorder and beta capture scripts for the currently ongoing 2048-width tests?
16:02
se6astian
https://github.com/rroohhh/cmv12k_color_science ?
16:02
se6astian
or wiki?
16:04
vup
let me transfer the repo to the apertus org
16:04
vup
then there I think
16:06
vup
se6astian: done https://github.com/apertus-open-source-cinema/cmv12k_color_science
16:06
vup
can you give me access again? :)
16:08
se6astian
done
16:08
se6astian
good, will push there
16:39
anuejn[m]
joined the channel
17:13
troy_s[m]
joined the channel
17:13
se6astian
welcome! :D
17:13
se6astian
meet vup and anuejn!
17:14
se6astian
off for dinner now, bbs
17:16
troy_s[m]
That was as painless as sliding down a razor-blade-laden kiddie slide.
17:40
se6astian
good to have you back! :D
17:41
troy_s[m]
There is no “back” for things that were never quite “there”. Sad to hear you still have the CMOSIS sensor, but alas, here we are.
17:42
se6astian
yes :)
17:51
se6astian
troy_s[m]: I started capturing darkframes and flatfields recently and vup started analyzing and charting them: http://irc.apertus.org/index.php?day=12&month=02&year=2022#28
17:51
se6astian
though I would say we are still at the beginning
18:00
anuejn
troy_s[m]: what's wrong with the CMOSIS sensor?
18:00
anuejn
(genuine question)
18:02
troy_s[m]
Hrm. That’s a pretty challenging question. In the end… it comes down to what I’ve sort of wrapped up as “image formation”. That complexity starts, as you know, at the camera sensor and what the sensor sees. It seemed like a challenging sensor when I last visited the subject.
18:03
troy_s[m]
Specifically, on the colour front, we have to ask “What is the goal?”
18:07
se6astian
there are definitely challenges ahead, but I am not sure if any other sensor would pose no challenges - probably just different ones
18:07
troy_s[m]
Oh they will all have challenges.
18:07
troy_s[m]
Just that that specific sensor is known to have some rather unfortunate ones.
18:07
troy_s[m]
In the end, it is what it is.
18:07
se6astian
agreed
18:08
troy_s[m]
Did anyone ever get the spectral response of the sensor?
18:08
se6astian
and the goal for me: a balance between a pleasing and natural default color look
18:08
troy_s[m]
That’s uh… about 100 years of research that few retread over.
18:09
troy_s[m]
“Natural” and “pleasing” are going to be horrifically overfit terms. Net sum of meaningless, sadly.
18:10
troy_s[m]
I’m sure this channel has seen the ridiculous rubbish that happens when someone chases electronics in search of meaning.
18:10
se6astian
let's go for eyesore colors in 1 year then :P
18:12
troy_s[m]
It’s worth breaking out a series of data states, and trying to draw lines to meaning.
18:12
troy_s[m]
Which is where the whole “colour” thing intersects.
18:13
troy_s[m]
Folks really struggle with colour here because there are various states of “colour”, and various stages where facets of that “colour” hold different and important considerations.
18:14
troy_s[m]
(By “here” I mean in the larger sense, not “this channel”. To think of it another way, we went from a device that formed images with creative emulsion negative / positive film photography, to a device that captures stimulus.)
18:15
troy_s[m]
(Huge gap, and then layer on dozens of more confusing things around the broader ideas of “colour”. Sadly, with the gap, folks also simply forgot or lost the actual research in this domain from the sort of companies like Fuji and Kodak. Hundreds of important papers, totally forgotten for reasons that elude me.)
18:17
troy_s[m]
/msg NickServ does this still work
18:17
troy_s[m]
Nope.
18:56
anuejn
troy_s[m]: you can talk with the matrix libera-irc bridge
18:57
anuejn
there are some docs over here: https://libera.chat/guides/faq#can-i-connect-with-matrix
18:57
troy_s[m]
Yeah it’s a time vortex. I think I established a direct message with Nickserv.
18:58
Guest13
joined the channel
18:58
Guest13
changed nick to: troy_s1
18:59
troy_s1
anuejn[m]: Does this work for notify?
18:59
anuejn
this does highlight me
19:00
anuejn
though my handle is only the anuejn part, but my IRC client also matches anuejn[m]
19:00
troy_s1
Weird.
19:00
troy_s1
What is your background anuejn?
19:01
anuejn
difficult question... I have been hacking on various apertus projects for a few years
19:01
anuejn
with vup I have built the https://wiki.apertus.org/index.php/AXIOM_Micro
19:01
troy_s1
I guess it sounds like you and vup are the two folks falling into the rabbit hole of "colour"? I'd start with a rather canonized piece of what I'd consider absolutely mandatory reading.
19:02
troy_s1
Are either of you two familiar with image / colour subjects?
19:02
anuejn
ah well, currently we are still quite a way from doing anything that has much to do with colour
19:03
intrac
left the channel
19:03
intrac
joined the channel
19:03
vup
troy_s1: the extent of what I have done is some basic darkframe and flatfield calibration for astro purposes
19:03
vup
nothing with color yet really
19:03
anuejn
we (mostly vup at the moment) are currently at the stage of figuring out how to get "clean" (that is, noise-free + linear) images out of the cmv
19:04
vup
and not even much with general response curves
19:04
troy_s1
Oh... well... that's probably good.
19:04
vup
(for the astro stuff the ccd was assumed to be pretty much linear)
19:04
troy_s1
That's a reasonable assumption.
19:04
troy_s1
https://www.semanticscholar.org/paper/Quality-of-Color-Reproduction-Macadam/5d8cc031f104292b53c9b49b68058aebd5eee0d2. That's a really important paper in my estimation. Plenty of it will seem confusing, but the first several pages are incredibly insightful.
19:05
troy_s1
Given you may not recognize the name, David MacAdam was one of the Kodak researchers.
19:05
anuejn
also I had a fun time reading the opencolorio docs, but we didn't really implement anything in that direction yet
19:05
troy_s1
You will find that in the annals of history, most of the significant contributions to colour science and image engineering perhaps unsurprisingly came from Kodak and Fuji. Although we suffer a bit of a language gap, and probably some racism, with respect to the latter.
19:06
troy_s1
As a _really_ generalized high level lens that is potentially a decent entry point for resolving confusion, it's worth considering the camera as an "observer".
19:07
troy_s1
Note that we should desperately avoid conflating terms. For example, don't call the capture at a sensor an "image".
19:07
troy_s1
(If you find that paper I linked to via uh... maybe there's an online service that is a hub for something like that... you'll at least see why calling stimulus, or in the case of a camera, tristimulus, an image is absolutely foolish.)
19:10
troy_s1
1. A camera observer senses the incoming visible electromagnetic radiation (hopefully only visible... but another topic) through a series of three monochromatic spectral filtration points.
19:10
troy_s1
2. Those capture sites are designed typically "generally" toward a human standard observer (more often than not, CIE 1931's standard observer) but are *not* cone response pigmentation / filtration, for good reason.
19:10
troy_s1
3. The first goal is to derive a standard observer transformation of the camera observer capture.
19:10
troy_s1
Let me know if any of this is confusing as shit.
19:10
troy_s1
I'll try to link to related documents that might help flesh it out.
19:11
troy_s1
Stage 3. is essentially trying to take the meaninglessness of the camera sensor, and derive a series of standard observer tristimulus values. Note how I'm avoiding the word "colour" here. Again, for good reason. Colour *does not exist* outside of the psychophysical uptake meatspace magic machine.
19:12
vup
(also I maybe should have mentiond, but I am studying physics, so thats my main background knowledge)
19:12
troy_s1
What we require as a first stage toward getting to a much more complex idea of "image" is to "align" the camera observer values with our ground truth, which we consider for now to be the CIE 1931 standard observer. There is a pretty reasonable approach to achieving this.
19:13
troy_s1
We can fit known values to unknown values, and derive some magical transformation.
19:14
troy_s1
This is typically achieved through two mechanisms:
19:14
troy_s1
1. Use tristimulus to tristimulus comparison fits. That is, take a capture made under fixed conditions and treat the RGB as camera observer sensed linear values, and fit them to the known tristimulus values of the standard observer model in question.
19:14
troy_s1
2. Use the spectral distributions of the camera observer to synthetically "see" the spectral distributions of known values, and fit them to the resultant known tristimulus values of the resultant spectral distributions of knowns.
19:15
troy_s1
Typically folks will jump down the rabbit hole of lookup tables and whatnot, but a critical thing to try to maintain is the notion of exposure invariance. The TL;DR is that most lookup approaches will fail this test, hence why camera observer to standard observer tristimulus formation matrices are purely 3x3.
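Mechanism 1. reduces to solving for a single 3x3 matrix; a sketch with numpy, where camera_rgb and observer_xyz are assumed to be matched patch measurements (e.g. from a chart captured under fixed conditions):

    import numpy as np

    def fit_camera_matrix(camera_rgb, observer_xyz):
        """Least-squares fit of a 3x3 matrix M so that XYZ ~= RGB @ M.
        camera_rgb:   (N, 3) linear camera observer values
        observer_xyz: (N, 3) known standard observer tristimulus values"""
        M, *_ = np.linalg.lstsq(camera_rgb, observer_xyz, rcond=None)
        return M

    # exposure invariance falls out of linearity: (k * rgb) @ M == k * (rgb @ M),
    # which a general 3D lookup table does not guarantee.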
19:17
troy_s1
So that's your first goal.
19:26
vup
I'll read this when I am back home and will probably have a bunch of questions then.
19:33
troy_s1
left the channel