00:04 | vup | yeah, 20ms seems to have worked
| |
00:04 | vup | so I am not sure, what the problem with longer exposures is then
| |
00:08 | Bertl_oO | maybe because the HDMI frame count advances while the buffer id stays the same?
| |
00:09 | vup | hmm
| |
00:10 | Bertl_oO | has no clue how the recorder works, but from se6astian's description this could be a reason
| |
00:10 | vup | currently it counts any two frames that have consecutive frame counts, equal wrsel (in the first corner), and one carrying 0xAA and the other 0x55 as marker, as a valid frame pair
| |
00:12 | vup | and it only ever considers frames that occurred one after another on the v4l2 input
| |
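A minimal sketch of the pairing rule just described, assuming hypothetical field names (frame_count, wrsel, marker) for the metadata decoded from the frame corners; this is illustrative, not the recorder's actual code:

```python
# Illustrative sketch of the pairing rule described above. The field names
# (frame_count, wrsel, marker) are hypothetical stand-ins for the metadata
# decoded from the frame corners, not the recorder's actual API.
from dataclasses import dataclass

@dataclass
class Frame:
    frame_count: int  # HDMI frame counter
    wrsel: int        # write selection read from the first corner
    marker: int       # 0xAA for one frame of a pair, 0x55 for the other
    data: bytes

def is_valid_pair(a: Frame, b: Frame) -> bool:
    """a and b must have arrived one after another on the v4l2 input."""
    return (
        b.frame_count == a.frame_count + 1        # consecutive frame counts
        and a.wrsel == b.wrsel                    # equal wrsel
        and {a.marker, b.marker} == {0xAA, 0x55}  # one of each marker
    )
```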
00:12 | Bertl_oO | well, in that case it should be fine
| |
00:12 | vup | well actually it doesn't even "really" care about the markers
| |
00:12 | vup | yeah, so not sure what the problem is there
| |
00:12 | vup | especially if 20ms already works
| |
00:12 | Bertl_oO | you will get a bunch of 'fake' exposures though
| |
00:13 | Bertl_oO | i.e. duplicates from the same exposure
| |
00:13 | Bertl_oO | (unless you wait for a change in the read selection)
| |
00:17 | vup | yeah
| |
00:17 | vup | adding that would be an easy improvement
| |
00:17 | vup | but it seemingly doesn't work at all currently, which is confusing
| |
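A rough sketch of the improvement Bertl_oO suggests, reusing Frame and is_valid_pair from the sketch above: only accept a pair once the buffer selection has changed since the last accepted pair, so duplicates of the same exposure are dropped. Purely illustrative.

```python
class PairFilter:
    """Drop duplicate exposures: accept a valid pair only when the buffer
    selection differs from that of the previously accepted pair."""

    def __init__(self):
        self.last_wrsel = None

    def accept(self, a: Frame, b: Frame) -> bool:
        if not is_valid_pair(a, b):
            return False
        if a.wrsel == self.last_wrsel:  # same exposure repeated -> skip it
            return False
        self.last_wrsel = a.wrsel
        return True
```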
00:20 | vup | Bertl_oO: do you know if I totally fucked up my calculation with >100 seconds of internal exposure being possible, or does that seem reasonable?
| |
00:37 | aombk3 | left the channel | |
00:38 | Bertl_oO | 100 seconds sounds doable
| |
00:38 | Bertl_oO | but it should be easy to verify, as we have time calculations in both directions in the snap
| |
00:39 | Bertl_oO | i.e. from time to register settings and from registers to the exposure time
| |
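The round-trip verification Bertl_oO describes could look roughly like this; to_registers and to_exposure_time are placeholders for the snap's actual conversions, which are not reproduced here.

```python
# Round-trip consistency check: convert a target exposure time to register
# settings and back, and compare. The two conversion functions are
# placeholders for the snap's real formulas.
def check_roundtrip(target_s, to_registers, to_exposure_time, tol_s=1e-3):
    regs = to_registers(target_s)
    recovered_s = to_exposure_time(regs)
    return abs(recovered_s - target_s) <= tol_s

# e.g. check_roundtrip(100.0, to_registers, to_exposure_time) would show
# whether >100 s of internal exposure survives the conversion both ways.
```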
00:42 | aombk | joined the channel | |
00:51 | Bertl_oO | off to bed now ... have a good one everyone!
| |
00:51 | Bertl_oO | changed nick to: Bertl_zZ
| |
01:06 | vup | Bertl_zZ: yeah, but I was too lazy to look up how the relevant registers get set up
| |
01:54 | intrac | left the channel | |
01:56 | intrac | joined the channel | |
08:36 | se6astian | Good day
| |
08:38 | se6astian | Maybe my issue with the longer exposure times was purely that the 480-frame crash occurs much sooner when there are a lot of duplicate frames while aiming for the same 256 frames as with the shorter exposures, will test further
| |
08:40 | se6astian | Vup, the temperature relationship would be very much worth researching further; your chart shows that noise level dip for 5ms, which first puzzled me, but then I realized I did these exposure times first, when the sensor was presumably still cooler
| |
08:42 | se6astian | But I wrote a timestamp into the capture directory name so should be easy to verify
| |
11:02 | Bertl_zZ | changed nick to: Bertl_oO
| |
14:39 | se6astian | vup, would be interesting to also chart the sensor temperature into that image: https://files.niemo.de/darkframe_deviation_hist.png
| |
14:42 | vup | yeah
| |
14:42 | vup | I probably can do that later
| |
14:43 | se6astian | IIRC the on-chip sensor temperature of the cmv12000 is in degrees Celsius, but relative, without an absolute reference, correct?
| |
14:43 | se6astian | I could collect temperatures from the other sensors in the camera as well with the capture script
| |
14:44 | se6astian | there are a few on the power board and the Zynq IIRC
| |
14:44 | se6astian | or we just assume the first/lowest cmv12000 value is room temperature
| |
15:33 | Bertl_oO | yes, you want to do a one-point, or better a two-point, calibration for the sensor temperature readings (per sensor)
| |
15:34 | Bertl_oO | but other than that, they are rather precise and stable
| |
15:52 | vup | Well for now just comparing them relative to each other should be enough right?
| |
16:01 | Bertl_oO | for the same sensor, that's fine
| |
16:19 | vup | yeah, nice
| |
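For reference, a two-point calibration as suggested above is just a straight-line fit through two (raw reading, known temperature) pairs per sensor; the numbers below are made-up placeholders, not measurements.

```python
# Two-point calibration sketch for the on-chip temperature readout:
# fit a line through two (raw_reading, known_temperature) pairs, e.g. one
# taken at room temperature and one at an elevated, known temperature.
def make_temp_calibration(raw_lo, temp_lo, raw_hi, temp_hi):
    scale = (temp_hi - temp_lo) / (raw_hi - raw_lo)
    offset = temp_lo - scale * raw_lo
    return lambda raw: scale * raw + offset

# per-sensor calibration; the values here are placeholders
to_celsius = make_temp_calibration(raw_lo=1000, temp_lo=22.0,
                                   raw_hi=1400, temp_hi=60.0)
```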
17:02 | se6astian | vup: what's a good place to store the recorder and beta capture scripts for the 2048 width tests currently ongoing?
| |
17:02 | se6astian | https://github.com/rroohhh/cmv12k_color_science ?
| |
17:02 | se6astian | or wiki?
| |
17:04 | vup | let me transfer the repo to the apertus org
| |
17:04 | vup | then there I think
| |
17:06 | vup | se6astian: done https://github.com/apertus-open-source-cinema/cmv12k_color_science
| |
17:06 | vup | can you give me access again? :)
| |
17:08 | se6astian | done
| |
17:08 | se6astian | good, will push there
| |
17:39 | anuejn[m] | joined the channel | |
18:13 | troy_s[m] | joined the channel | |
18:13 | se6astian | welcome! :D
| |
18:13 | se6astian | meet vup and anuejn!
| |
18:14 | se6astian | off for dinner now, bbs
| |
18:16 | troy_s[m] | That was as painless as sliding down a razor blade laden kiddie slide.
| |
18:40 | se6astian | good to have you back! :D
| |
18:41 | troy_s[m] | There is no “back” for things that were never quite “there”. Sad to hear you still have the CMOSIS sensor, but alas, here we are.
| |
18:42 | se6astian | yes :)
| |
18:51 | se6astian | troy_s[m]: I started capturing darkframes and flatfields recently and vup started analyzing and charting them: http://irc.apertus.org/index.php?day=12&month=02&year=2022#28
| |
18:51 | se6astian | though I would say we are still at the beginning
| |
19:00 | anuejn | troy_s[m]: whats wrong with the CMOSIS sensor?
| |
19:00 | anuejn | (genuine question)
| |
19:02 | troy_s[m] | Hrm. That’s a pretty challenging question. In the end… it comes down to what I’ve sort of wrapped up as “image formation”. That complexity starts at the camera sensor as you know, and what the sensor sees. It seemed like a challenging sensor when I last visited the subject.
| |
19:03 | troy_s[m] | Specifically, on the colour front, we have to ask “What is the goal?”
| |
19:07 | se6astian | there are definitely challenges ahead, but I am not sure if any other sensor would pose no challenges - probably just different ones
| |
19:07 | troy_s[m] | Oh they will all have challenges.
| |
19:07 | troy_s[m] | Just that that specific sensor is known to have some rather unfortunate ones.
| |
19:07 | troy_s[m] | In the end, it is what it is.
| |
19:07 | se6astian | agreed
| |
19:08 | troy_s[m] | Did anyone ever get the spectral response of the sensor?
| |
19:08 | se6astian | and the goal for me: a balance between a pleasing and natural default color look
| |
19:08 | troy_s[m] | That’s uh… about 100 years of research that few retread over.
| |
19:09 | troy_s[m] | “Natural” and “pleasing” are going to be horrifically overfit terms. Net sum of meaningless, sadly.
| |
19:10 | troy_s[m] | I’m sure this channel has seen the ridiculous rubbish that happens when someone chases electronics in search of meaning.
| |
19:10 | se6astian | let's go for eyesore colors in 1 year then :P
| |
19:12 | troy_s[m] | It’s worth breaking out a series of data states, and trying to draw lines to meaning.
| |
19:12 | troy_s[m] | Which is where the whole “colour” thing intersects.
| |
19:13 | troy_s[m] | Folks really struggle with colour here because there are various states of “colour”, and various stages where facets of that “colour” hold different and important considerations.
| |
19:14 | troy_s[m] | (By “here” I mean in the larger sense, not “this channel”. To think of it another way, we went from a device that formed images with creative emulsion negative / positive film photography, to a device that captures stimulus.)
| |
19:15 | troy_s[m] | (Huge gap, and then layer on dozens of more confusing things around the broader ideas of “colour”. Sadly, with the gap, folks also simply forgot or lost the actual research in this domain from the sort of companies like Fuji and Kodak. Hundreds of important papers, totally forgotten for reasons that elude me.)
| |
19:17 | troy_s[m] | /msg NickServ does this still work
| |
19:17 | troy_s[m] | Nope.
| |
19:56 | anuejn | troy_s[m]: you can talk with the matrix libera-irc bridge
| |
19:57 | anuejn | there are some docs over here: https://libera.chat/guides/faq#can-i-connect-with-matrix
| |
19:57 | troy_s[m] | Yeah it’s a time vortex. I think I established a direct message with Nickserv.
| |
19:58 | Guest13 | joined the channel | |
19:58 | Guest13 | changed nick to: troy_s1
| |
19:59 | troy_s1 | anuejn[m]: Does this work for notify?
| |
19:59 | anuejn | this does highlight me
| |
20:00 | anuejn | though my handle is only the anuejn part, my IRC client also matches anuejn[m]
| |
20:00 | troy_s1 | Weird.
| |
20:00 | troy_s1 | What is your background anuejn?
| |
20:01 | anuejn | difficult question... I have been hacking on various apertus projects for a few years
| |
20:01 | anuejn | with vup I have built the https://wiki.apertus.org/index.php/AXIOM_Micro
| |
20:01 | troy_s1 | I guess it sounds like yourself and vup are the two folks falling into the rabbit hole of "colour"? I'd start with a rather canonized piece of what I'd consider absolutely mandatory reading.
| |
20:02 | troy_s1 | Are either of you two familiar with image / colour subjects?
| |
20:02 | anuejn | ah well, currently we are still quite a way from doing anything that has much to do with colour
| |
20:03 | intrac | left the channel | |
20:03 | intrac | joined the channel | |
20:03 | vup | troy_s1: the extent of what I have done is some basic darkframe and flatfield calibration for astro purposes
| |
20:03 | vup | nothing with color yet really
| |
20:03 | anuejn | we (mostly vup, who is currently working on it) are at the stage of figuring out how to get "clean" (that is, noise-free + linear) images out of the cmv
| |
20:04 | vup | and not even much with general response curves
| |
20:04 | troy_s1 | Oh... well... that's probably good.
| |
20:04 | vup | (for the astro stuff the ccd was assumed to be pretty much linear)
| |
20:04 | troy_s1 | That's a reasonable assumption.
| |
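For context, the basic darkframe/flatfield correction mentioned above is commonly done along these lines; a sketch under the usual assumptions (darks taken at matching settings, roughly linear sensor), not the project's actual analysis script.

```python
import numpy as np

def calibrate(raw, master_dark, master_flat):
    """Standard dark/flat correction: subtract the dark, then divide by the
    dark-subtracted flat normalized to unit mean gain."""
    flat = master_flat - master_dark
    flat = flat / np.mean(flat)
    return (raw - master_dark) / flat

# master_dark / master_flat would be per-pixel averages over many dark and
# flatfield exposures captured at matching settings; raw is a single capture.
```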
20:04 | troy_s1 | https://www.semanticscholar.org/paper/Quality-of-Color-Reproduction-Macadam/5d8cc031f104292b53c9b49b68058aebd5eee0d2. That's a really important paper in my estimation. Plenty of it will seem confusing, but the first several pages are incredibly insightful.
| |
20:05 | troy_s1 | Given you may not recognize the name, David MacAdam was one of the Kodak researchers.
| |
20:05 | anuejn | also I had a fun time reading the opencolorio docs, but we didn't really implement anything in that direction yet
| |
20:05 | troy_s1 | You will find that in the annals of history, most of the significant contributions to colour science and image engineering perhaps unsurprisingly came from Kodak and Fuji. Although we suffer a bit of a language gap, and probably some racism, with respect to the latter.
| |
20:06 | troy_s1 | As a _really_ generalized high level lens that is potentially a decent entry point for resolving confusion, it's worth considering the camera as an "observer".
| |
20:07 | troy_s1 | Note that we should desperately avoid conflating terms. For example, don't call the capture at a sensor an "image".
| |
20:07 | troy_s1 | (If you find that paper I linked to via uh... maybe there's an online service that is a hub for something like that... you'll at least see why calling stimulus, or in the case of a camera, tristimulus, an image is absolutely foolish.)
| |
20:10 | troy_s1 | 1. A camera observer senses the incoming visible electromagnetic radiation (hopefully only visible... but another topic) through a series of three monochromatic spectral filtration points.
| |
20:10 | troy_s1 | 2. Those capture sites are designed typically "generally" toward a human standard observer (more often than not, CIE 1931's standard observer) but are *not* cone response pigmentation / filtration, for good reason.
| |
20:10 | troy_s1 | 3. The first goal is to derive a standard observer transformation of the camera observer capture.
| |
20:10 | troy_s1 | Let me know if any of this is confusing as shit.
| |
20:10 | troy_s1 | I'll try to link to related documents that might help flesh it out.
| |
20:11 | troy_s1 | Stage 3. is essentially trying to take the meaninglessness of the camera sensor, and derive a series of standard observer tristimulus values. Note how I'm avoiding the word "colour" here. Again, for good reason. Colour *does not exist* outside of the psychophysical uptake meatspace magic machine.
| |
20:12 | vup | (also I maybe should have mentioned, but I am studying physics, so that's my main background knowledge)
| |
20:12 | troy_s1 | What we require as a first stage toward getting to a much more complex idea of "image" is to "align" the camera observer values with our ground truth, which we consider for now to be the CIE 1931 standard observer. There is a pretty reasonable approach to achieving this.
| |
20:13 | troy_s1 | We can fit known values to unknown values, and derive some magical transformation.
| |
20:14 | troy_s1 | This is typically achieved through two mechanisms:
| |
20:14 | troy_s1 | 1. Use tristimulus to tristimulus comparison fits. That is, take a capture under fixed conditions, treat the RGB as camera observer sensed linear values, and fit them to the known tristimulus values of the standard observer model in question.
| |
20:14 | troy_s1 | 2. Use the spectral distributions of the camera observer to synthetically "see" the spectral distributions of known values, and fit them to the known tristimulus values that result from those spectral distributions.
| |
20:15 | troy_s1 | Typically folks will jump down the rabbit hole of lookup tables and whatnot, but a critical thing to try to maintain is the notion of exposure invariance. The TL;DR is that most lookup approaches will fail this test, hence why camera observer to standard observer tristimulus formation matrices are purely 3x3.
| |
20:17 | troy_s1 | So that's your first goal.
| |
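As a concrete illustration of mechanism 1 above, a pure 3x3 camera-RGB to standard-observer-XYZ matrix can be fit with a least-squares solve over a set of patches with known tristimulus values; the chart file names below are hypothetical.

```python
import numpy as np

def fit_camera_to_xyz(camera_rgb, reference_xyz):
    """Least-squares fit of a 3x3 matrix M such that xyz ~= M @ rgb.

    camera_rgb:    (N, 3) linear camera observer values for N known patches
    reference_xyz: (N, 3) standard observer (e.g. CIE 1931) tristimulus values

    A pure 3x3 matrix with no offset keeps the mapping exposure invariant:
    scaling the input RGB by k scales the output XYZ by the same k.
    """
    M, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
    return M.T

# Usage sketch with hypothetical chart data files:
# rgb = np.loadtxt("chart_camera_rgb.csv", delimiter=",")
# xyz = np.loadtxt("chart_reference_xyz.csv", delimiter=",")
# M = fit_camera_to_xyz(rgb, xyz)
```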
20:26 | vup | I'll read this when I am back home and probably have a bunch of questions then.
| |
20:33 | troy_s1 | left the channel |