00:23 | mumptai | left the channel | |
04:34 | Bertl | off to bed now ... have a good one everyone!
| |
04:34 | Bertl | changed nick to: Bertl_zZ
| |
06:50 | BAndiT1983|away | changed nick to: BAndiT1983
| |
06:54 | illwieckz | left the channel | |
06:55 | illwieckz | joined the channel | |
08:19 | se6ast1an | good day
| |
08:48 | mumptai | joined the channel | |
11:01 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
11:04 | BAndiT1983|away | changed nick to: BAndiT1983
| |
11:42 | mumptai | left the channel | |
11:55 | berto_bxl | joined the channel | |
11:56 | berto_bxl54 | joined the channel | |
12:00 | berto_bxl | left the channel | |
12:06 | Bertl_zZ | changed nick to: Bertl
| |
12:06 | Bertl | morning folks!
| |
12:09 | BAndiT1983 | hi
| |
12:48 | EmilJ | Hi all. I'll flash the xilinx-provided image and export the devicetree and kernel boot args. However since I'm not even getting into the u-boot shell I don't have a lot of faith that the issue starts only at kernel boot time
| |
12:49 | vup2 | EmilJ: can you upload those somewhere?
| |
12:49 | EmilJ | I will
| |
13:50 | RexOrCine | left the channel | |
13:53 | RexOrCine | joined the channel | |
14:19 | eppisai | joined the channel | |
14:53 | eppisai | left the channel | |
15:14 | eppisai | joined the channel | |
15:31 | eppisai | left the channel | |
16:06 | EmilJ | vup2: https://gitlab.com/-/snippets/2060514
| |
16:41 | ZNC_ | left the channel | |
16:42 | ZNC | joined the channel | |
16:42 | ZNC | changed nick to: Guest8416
| |
16:43 | BAndiT1983 | left the channel | |
16:46 | BAndiT1983 | joined the channel | |
17:05 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
17:39 | berto_bxl54 | left the channel | |
18:24 | BAndiT1983|away | changed nick to: BAndiT1983
| |
18:42 | Bertl | off for now ... bbl
| |
18:43 | Bertl | changed nick to: Bertl_oO
| |
18:49 | RexOrCine | left the channel | |
18:49 | RexOrCine | joined the channel | |
19:13 | anuejn | Bertl_oO: do you know how different the gains are for the beta greens?
| |
19:15 | anuejn | and se6ast1an, do you have a link to dngs from the micro for me?
| |
19:15 | anuejn | I cannot find any on my computer
| |
19:15 | anuejn | looks innocent
| |
19:18 | se6ast1an | you mean the files you recorded?
| |
19:20 | anuejn | yup
| |
19:20 | se6ast1an | uploading to https://cloud.apertus.org/index.php/s/KjJ8NLqpnPA5xjQ
| |
19:20 | anuejn | I only need one
| |
19:21 | anuejn | thanks :)
| |
19:21 | anuejn | how is software able to debayer the dngs from the beta correctly?
| |
19:21 | anuejn | it seems the green pixels in the even / odd rows have different gains
| |
19:23 | se6ast1an | do you mean which software?
| |
19:23 | se6ast1an | or how the software does it?
| |
19:26 | anuejn | how software generally does this
| |
19:26 | anuejn | eg. darktable
| |
19:28 | se6ast1an | there are several standard algorithms: bilinear, AHD, VNG, etc.
| |
19:28 | se6ast1an | ufraw is quite handy for choosing them individually
| |
19:28 | se6ast1an | the different green gains can lead to maze/grid patterns
| |
19:28 | se6ast1an | see examples here: https://sites.google.com/site/cornerfix/using-cornerfix/maze-patterns-1
| |
19:29 | mumptai | joined the channel | |
19:29 | se6ast1an | it can easily be corrected by averaging the two green values in one 2x2 pixel block for example
| |
19:30 | se6ast1an | more sophisticated methods surely exist as well
| |
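To make the 2x2 averaging fix concrete, here is a minimal numpy sketch; it assumes an RGGB Bayer layout (G1 on even rows, G2 on odd rows), which may differ from the actual sensor readout:

    import numpy as np

    def average_greens(bayer):
        """Replace both greens of every 2x2 RGGB block with their mean (sketch)."""
        out = bayer.astype(np.float32).copy()
        g1 = out[0::2, 1::2]      # greens on even rows (assumed position)
        g2 = out[1::2, 0::2]      # greens on odd rows (assumed position)
        mean = (g1 + g2) / 2.0
        out[0::2, 1::2] = mean    # write the block average back to both greens
        out[1::2, 0::2] = mean
        return out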
19:30 | anuejn | but doesn't that give a bad resolution loss?
| |
19:30 | anuejn | isn't the different gain a theoretically known factor that could be corrected for?
| |
19:31 | se6ast1an | yes
| |
19:31 | se6ast1an | the "VNG4" algorithm is known to be resistant to green-crosstalk
| |
19:31 | se6ast1an | it's one option in ufraw
| |
19:32 | se6ast1an | the FPN and PRNU calibration should cancel it out
| |
19:32 | se6ast1an | the green differences I mean
| |
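For illustration, a generic dark/flat calibration of the kind mentioned above (a hedged sketch, not the actual AXIOM calibration pipeline): subtract a per-pixel offset taken from averaged dark frames and divide by a per-pixel gain taken from averaged flat-field frames, which would also flatten any per-row green gain difference:

    import numpy as np

    def calibrate(raw, dark, flat):
        # dark: averaged dark frames  -> fixed pattern noise offsets (FPN)
        # flat: averaged flat fields  -> per-pixel response map (PRNU)
        corrected = raw.astype(np.float32) - dark   # remove per-pixel offsets
        gain = flat.astype(np.float32) - dark       # per-pixel gain map
        gain /= gain.mean()                         # normalize around 1.0
        return corrected / gain                     # divide out gain differences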
19:33 | anuejn | ah I guess I am just dumb and interpreted the dng file I had in the wrong way :(
| |
19:33 | anuejn | I thought the green channels were different ones than they really are
| |
19:34 | se6ast1an | ah :)
| |
19:43 | anuejn | now it looks good
| |
19:43 | anuejn | and there is no "gain difference"
| |
19:43 | anuejn | sorry for the hassle
| |
19:46 | se6ast1an | great
| |
19:46 | se6ast1an | images to share?
| |
19:47 | anuejn | I am currently not writing any dngs
| |
19:47 | anuejn | but I can try :)
| |
19:53 | se6ast1an | you are working on the recorder software?
| |
19:56 | anuejn | no, on the compression
| |
19:57 | anuejn | and I tested whether it brings any improvement if we interpret the two green planes as one big plane
| |
19:57 | anuejn | spoiler: it doesn't
| |
19:57 | anuejn | (at least that's what I concluded)
| |
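A sketch of that experiment, under the assumption of an RGGB mosaic and a simple vertical stacking of the two green planes (the exact layout actually tried is not stated above):

    import numpy as np

    def split_planes(bayer):
        # split an RGGB mosaic into its four half-resolution planes
        r  = bayer[0::2, 0::2]
        g1 = bayer[0::2, 1::2]
        g2 = bayer[1::2, 0::2]
        b  = bayer[1::2, 1::2]
        return r, g1, g2, b

    def greens_as_one_plane(g1, g2):
        # treat the two green planes as one double-height plane before compression
        return np.concatenate([g1, g2], axis=0)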
20:09 | se6ast1an | interesting
| |
20:19 | se6ast1an | There is the classic approach of chroma subsampling, with modes like YCbCr 4:2:2
| |
20:20 | se6ast1an | but I assume that does not translate directly to a raw colorspace
| |
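As a generic illustration of that classic approach (explicitly not a raw/Bayer scheme), 4:2:2 keeps luma at full resolution and halves the chroma horizontally; a small numpy sketch using the usual JPEG-style conversion for RGB values in [0, 1]:

    import numpy as np

    def rgb_to_ycbcr(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 0.5
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 0.5
        return y, cb, cr

    def subsample_422(rgb):
        y, cb, cr = rgb_to_ycbcr(rgb.astype(np.float32))
        return y, cb[:, ::2], cr[:, ::2]   # full-res luma, half-res chroma columns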
20:25 | se6ast1an | should I explain?
| |
20:26 | anuejn | I do know how chroma subsampling works
| |
20:26 | anuejn | and red does something similar in their patent with the raw data
| |
20:26 | anuejn | but I have to say that I do not really understand why the approach they are describing is good for non black and white images
| |
20:27 | se6ast1an | interesting!
| |
20:30 | se6ast1an | are you still encoding cineform or is this already outside their standard?
| |
20:31 | anuejn | the encoding scheme is from the cineform standard (with a few minor differences)
| |
20:31 | anuejn | but the data layout is different
| |
20:32 | anuejn | also we are currently skipping some parts of the encoding process (some non-linear coding of hf coefficients)
| |
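To illustrate the general idea only (this is not the actual cineform wavelet or its non-linear coefficient coding), a single-level 2D Haar split with uniform quantization of the high-frequency subbands could look like this:

    import numpy as np

    def haar_level(img, q=16):
        img = img.astype(np.float32)
        a = img[0::2, 0::2]; b = img[0::2, 1::2]
        c = img[1::2, 0::2]; d = img[1::2, 1::2]
        ll = (a + b + c + d) / 4           # low-pass band, kept at full precision
        lh = (a + b - c - d) / 4           # horizontal detail
        hl = (a - b + c - d) / 4           # vertical detail
        hh = (a - b - c + d) / 4           # diagonal detail
        quantize = lambda x: np.round(x / q).astype(np.int32)  # coarse HF quantization
        return ll, quantize(lh), quantize(hl), quantize(hh)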
20:33 | se6ast1an | but do you plan to produce files in the end that can be decoded by the cineform raw software?
| |
20:33 | anuejn | probably not
| |
20:33 | anuejn | since there doesn't seem to be a lot of benefit and cineform is a huge and really dirty codebase
| |
20:34 | anuejn | building something compatible with that doesn't seem like something I want to do
| |
20:34 | se6ast1an | well it means the footage will not be editable with any established software then...
| |
20:35 | anuejn | can anyone really open cineform?
| |
20:35 | anuejn | but I really liked the idea of having a frameserver
| |
20:36 | anuejn | and then doing something like cinemaDNG
| |
20:36 | se6ast1an | let me research what software can directly open cineform raw currently
| |
20:56 | BAndiT1983 | didn't we research MLV in 2018 during GSoC?
| |
20:57 | BAndiT1983 | nice, remembered GEDI (green edge directed interpolation) and our link is the first one, even before the PDF we used for it: https://wiki.apertus.org/index.php/OpenCine.Green_Edge_Directed_Interpolation
| |
21:08 | se6ast1an | ok, research complete
| |
21:08 | se6ast1an | gopro has killed cineform raw
| |
21:08 | se6ast1an | I couldn't get it to open anywhere
| |
21:09 | se6ast1an | resolve works with the non raw cineform files out of the box
| |
21:09 | se6ast1an | but not the raw one
| |
21:09 | se6ast1an | adobe and vlc can't import it either
| |
21:09 | se6ast1an | after installing a legacy gopro studio software
| |
21:10 | se6ast1an | I can open the cineform raw files in the adobe suite, but then it crashes immediately
| |
21:10 | se6ast1an | the gopro studio software cannot open cineform raw files...
| |
21:10 | se6ast1an | so the frame server sounds like a good approach :D
| |
21:12 | anuejn | :D
| |
21:12 | anuejn | fun
| |
21:12 | anuejn | after all, raw files don't seem like gopro's main market segment
| |
21:13 | anuejn | tho native cineform raw support would be nice because it allows lower res preview debayering for free
| |
21:14 | anuejn | maybe one day we have to add support for our file format in some software ;)
| |
21:19 | se6ast1an | well the frameserver would work around exactly that no?
| |
21:22 | anuejn | hm... yeah but that wouldn't be as integrated
| |
21:22 | anuejn | for example one could not just transparently use different resolutions for playback and rendering
| |
21:25 | BAndiT1983 | some programs can reload data when they notice changes, so theoretically you can change the header or wherever the resolution is stored, when the user selects different options in the frameserver
| |
21:26 | BAndiT1983 | or the user also has to reload the clips in the editor, e.g. blender VSE
| |
21:29 | anuejn | I tried the blender vse but somehow it was really slow on my machine
| |
21:29 | anuejn | I wasn't able to do fhd playback with color correction at 30fps
| |
21:39 | BAndiT1983 | seems to be a general problem there -> https://github.com/oormicreations/VSRender
| |
21:40 | BAndiT1983 | I didn't have many problems in 2012 with FHD and bluescreen, but maybe my memory betrays me
| |
21:41 | BAndiT1983 | VSE uses multicore, but is also not optimized, at least that's what a quick google search is telling me, and that's how I landed on github where someone suggested that addon
| |
21:43 | BAndiT1983 | for the frameserver (gsoc2019) I was considering opencl, or similar, to process the frames faster, plus downscaling like after effects does it: the user can select /2, /4 or /8 resolution to lower CPU usage. additionally, quick debayering would help to see the preview, but when high-res wide color is required, the frameserver could be set to a quality mode for final processing
| |
21:43 | BAndiT1983 | those were just ideas, as the reality also depends on the video software and its capabilities
| |
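A rough sketch of the quick preview debayering idea above: collapse each RGGB 2x2 block into one RGB pixel (half resolution) and decimate further for the /4 or /8 previews; the Bayer layout is again an assumption:

    import numpy as np

    def quick_preview(bayer, extra_decimation=1):
        r = bayer[0::2, 0::2]
        g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
        b = bayer[1::2, 1::2]
        rgb = np.dstack([r, g, b])          # half-resolution "superpixel" debayer
        return rgb[::extra_decimation, ::extra_decimation]   # 2 -> /4 preview, 4 -> /8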
22:03 | anuejn | se6ast1an: https://files.niemo.de/axiomlabs-roundtripped.dng
| |
22:03 | anuejn | vs https://files.niemo.de/axiomlabs-roundtripped.dng
| |
22:04 | anuejn | roundtripped is compressed circa 1:4 - 1.5
| |
22:04 | BAndiT1983 | the first one shows additional info, like resolution and color info, in rawtherapee, any difference in file format?
| |
22:05 | anuejn | nope, this is exactly the same dng with exactly the same metadata
| |
22:05 | anuejn | (a different one than the original frame, but dng writing/reading in python is a pain because there are no good libs)
| |
22:05 | BAndiT1983 | interesting
| |
22:06 | BAndiT1983 | just a moment
| |
22:07 | BAndiT1983 | the links are the same
| |
22:07 | BAndiT1983 | but why the hell is rawtherapee showing more info for one of the files? have to do binary diff
| |
22:07 | anuejn | oh I see I pasted the wrong link
| |
22:08 | anuejn | second should be https://files.niemo.de/axiomlabs-original.dng
| |
22:10 | BAndiT1983 | maybe a dumb question, as I wasn't following the latest discussion very closely, but what is the difference? size is around 12MB, and from a quick glance while zooming in they're not that different
| |
22:11 | anuejn | the images were converted to a compressed intermediate format and then back to dng
| |
22:11 | anuejn | so the size is exactly the same
| |
22:11 | anuejn | this is a test to show how visually lossy the compression scheme is
| |
22:12 | anuejn | more files in https://files.niemo.de/compare_dng/
| |
22:12 | anuejn | with compression ratios in https://files.niemo.de/compare_dng/ratios.txt
| |
22:13 | anuejn | this is for some random quantization values which are not optimized for any purpose
| |
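One way to put a number on the visual loss besides eyeballing it in rawtherapee, assuming rawpy can decode these DNGs, is the PSNR between the original and the roundtripped mosaic (file names taken from the links above):

    import numpy as np
    import rawpy

    def raw_mosaic(path):
        # read the undemosaiced sensor data from a DNG
        with rawpy.imread(path) as f:
            return f.raw_image.astype(np.float64)

    orig = raw_mosaic("axiomlabs-original.dng")
    rt   = raw_mosaic("axiomlabs-roundtripped.dng")
    mse  = np.mean((orig - rt) ** 2)
    print("PSNR: %.2f dB" % (10 * np.log10(orig.max() ** 2 / mse)))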
22:16 | BAndiT1983 | thanks for the explanation, will check how to check diff visually
| |
22:17 | BAndiT1983 | ehm, bad english indeed
| |
22:17 | anuejn | hm... generally the images have too many high frequency parts after compression (looking at the eye of the portrait with the glasses)
| |
22:18 | anuejn | this might be a thing where we generally want to try to remove hf when in doubt, as it looks better ;)
| |
22:18 | anuejn | BAndiT1983: you can load the folder in rawTherapee. it will keep your current zoom while switching between images
| |
22:18 | BAndiT1983 | haven't found any useful info recently on what Bertl mentioned, removing noise and adding it back after processing
| |
22:19 | BAndiT1983 | ah, right, this would also help, google also suggested digikam as it syncs panning and zooming
| |
22:20 | anuejn | also AMaZE seems to interact badly with the compression
| |
22:21 | anuejn | with VNG4 I see a lot less artifacts
| |
22:21 | BAndiT1983 | is it some lib which provides them?
| |
22:21 | Bertl_oO | hmm, different gains for the greens sounds odd ...
| |
22:22 | BAndiT1983 | confirmed, if you select the images and choose "place on light table", then you can drag and drop into left and right pane, then zoom and pan, which is in sync
| |
22:24 | anuejn | Bertl_oO: it was just me being stupid decoding some dng, nvm
| |
22:24 | danieel | left the channel | |
22:25 | Bertl_oO | anuejn: ah, okay, happens ...
| |
22:25 | anuejn | BAndiT1983: you can select them in a dropdown menu in RawTherapee
| |
22:25 | danieel | joined the channel | |
22:26 | BAndiT1983 | ah, my bad, forgot that DNGs don't contain the processed data
| |
22:29 | se6ast1an | very nice!
| |
22:30 | se6ast1an | heading off for today
| |
22:30 | se6ast1an | good night
| |
22:31 | BAndiT1983 | good night
| |
22:32 | BAndiT1983 | indeed, AMaZE shows some very pronounced maze-like structures (probably where the name comes from), also AMaZE+VNG4 does not improve much there, but VNG4 itself is not that bad
| |
22:33 | BAndiT1983 | hm, the structures also appear with other methods, so it probably comes from the compression
| |
22:34 | anuejn | yup, at least the debayering doesn't interact too well with it
| |
22:34 | BAndiT1983 | hm, looked at leaf_original and I see the same structures there
| |
22:39 | anuejn | yup those images are not noise free
| |
22:39 | BAndiT1983 | also looked at the portrait, the question would be whether the artifacts are noticeable when viewing the whole image and not zoomed in
| |
22:39 | anuejn | and the debayering does not like that
| |
22:39 | BAndiT1983 | some years ago, a cameraman told us that they usually want a more realistic look that isn't razor-sharp
| |
22:40 | BAndiT1983 | the one who developed the SHOODAK algo
| |
22:40 | BAndiT1983 | https://www.apertus.org/what-is-debayering-article-october-2015
| |
22:40 | BAndiT1983 | https://wiki.apertus.org/index.php/Shoodak
| |
22:40 | BAndiT1983 | off for today, good night
| |
22:40 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
22:41 | anuejn | that basically falls into the category "add noise in post" ;)
| |
22:41 | anuejn | but definitely a valid approach
| |
22:47 | mumptai | left the channel |