00:00 | troy_s | I wonder how the BMCC is dealing with the junk area
| |
00:00 | troy_s | Would be interesting to compare the latitudes with knees. I suspect they are using knees too, otherwise a limit of 7-odd stops seems... Underwhelming.
| |
00:00 | troy_s | Clip with knees.
| |
00:01 | troy_s | Bertl: Was the data at all reliable with a knee in that region?
| |
00:01 | Bertl | IMHO the HDR setting can be easily calculated from a proper grey map image
| |
00:01 | troy_s | Yep. Just a custom LUT for linearization.
| |
00:01 | intracube | morning
| |
00:01 | troy_s | Bertl: The profiles you and I hammered on had the LUTs.
| |
00:02 | troy_s | (The Hutch's chart has a decent gradient section)
| |
00:03 | Bertl | what I mean is, if you want to know what a camera/sensor uses, you can shoot a very clean and evenly lit set of let's say 128 grey gradients and calculate the applied curves from that, even when they were 'corrected' with a LUT :)
| |
00:03 | intracube | quick question about the ungraded test mxf footage; the blacks seemed noticeably crushed in some shots (like the clips of the insects)
| |
00:04 | intracube | I'd sort of expect to see noise right down to the darkest areas of the image, but this wasn't the case
| |
00:04 | Bertl | with crushed you mean clipped I presume?
| |
00:04 | intracube | anyone know why this is the case?
| |
00:05 | intracube | Bertl: clipped normally refers to lost highlights, crushed is the same but for shadows
| |
00:05 | intracube | maybe I'm getting terminology mixed up somewhat
| |
00:06 | troy_s | intracube: Bad encode chance?
| |
00:06 | troy_s | intracube: As in a double up scale?
| |
00:06 | Bertl | please do not forget, I'm not a reference for 'movie and film terminology', for me every operation which maps values outside a given range to the edges of the range is clipping
| |
00:07 | Bertl | so if it is common to say they are crushed, then so be it
| |
00:07 | intracube | Bertl: honestly, I don't know 100% either :)
| |
00:07 | Bertl | I'm pretty sure the data was clipped on both ends, because of the limited range of the recording device
| |
00:08 | intracube | either way, seems like values in darkest areas of the frame are lost somewhere
| |
00:08 | Bertl | i.e. 12 bit after processing -> 8 bit recording
| |
00:08 | intracube | troy_s: double up scale?
| |
00:08 | intracube | Bertl: so the 12 bit range wasn't remapped down to 8bit?
| |
00:08 | troy_s | intracube: As in a crap encode that did a decode scale for 16-235/240, then another on the 0-255
| |
00:09 | Bertl | intracube: it was, but I'm pretty sure it was clipped during the process
| |
00:09 | intracube | troy_s: oh
| |
00:09 | intracube | Bertl: oh x2 :)
| |
00:09 | troy_s | intracube: But Bertl would know. (Yes. How many times have we seen that? Dozens?)
| |
00:09 | troy_s | intracube: Usually the other way, but both are possible thanks to encoding / decoding.
| |
00:10 | Bertl | that's one reason we are working on a proper interface for the beta to record the raw data, so that we do not need to consider all these unwanted modifications
| |
00:10 | intracube | hang on, I'll upload an example
| |
00:10 | Bertl | clipping btw, can be easily detected when you look at the distribution (i.e. zoom into the histogram)
| |
00:11 | Bertl | the distribution around a given value should be gaussian
| |
00:11 | Bertl | if it is cut on one side, then there has been clipping
| |
00:12 | Bertl | (requires localized histogram)
| |
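Bertl's test above can be turned into a quick numeric check. A minimal sketch in Python (the function name and thresholds are hypothetical; it assumes a nominally uniform grey patch, where sensor noise should be roughly Gaussian and therefore symmetric around the mean):

```python
import numpy as np

def looks_clipped(patch, edge_frac=0.05):
    """Heuristic clipping check for a nominally uniform patch:
    if the noise distribution is cut on one side (samples piled
    up at the minimum or maximum), the patch was likely clipped."""
    vals = patch.ravel().astype(float)
    at_min = np.mean(vals == vals.min())   # pile-up at the low edge
    at_max = np.mean(vals == vals.max())   # pile-up at the high edge
    # skewness is ~0 for symmetric (Gaussian-like) noise
    skew = np.mean(((vals - vals.mean()) / (vals.std() + 1e-9)) ** 3)
    return bool(at_min > edge_frac or at_max > edge_frac or abs(skew) > 1.0)

rng = np.random.default_rng(0)
clean = rng.normal(128, 10, (64, 64))                     # pure Gaussian noise
clipped = np.clip(rng.normal(250, 10, (64, 64)), 0, 255)  # hard-clipped at 255
print(looks_clipped(clean), looks_clipped(clipped))       # False True
```

As Bertl notes, this only works on a localized histogram (a patch around one level), not on the whole frame.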
00:12 | intracube | https://lh3.googleusercontent.com/-ttSXJJmBCxw/VBjRok_uiBI/AAAAAAAAA5c/W_MyrnkjK3k/w854-h480-no/insect_shadows.png
| |
00:13 | intracube | mxf loaded into blender's compositor and RGB curves to lift
| |
00:13 | Bertl | looks like clipped on both ends, and as you can see, the FPN is still there as well
| |
00:14 | intracube | I've massively increased the exposure, so highlights will be clipped way more than in the original
| |
00:15 | anton__ | Bertl: I feel like I may be onto something with HDR. I'd like to write it up into a coherent text and post a link here tomorrow or in the next few days. Hope you will still be interested then :)
| |
00:16 | Bertl | sure, no problem, I'm just curious what kind of information you expect the second snap to provide, which couldn't be concluded/derived better from the first one?
| |
00:16 | intracube | original: https://lh5.googleusercontent.com/-AG33Gp3Qxt8/VBjSk0rFHSI/AAAAAAAAA5s/1gNSPuR40KY/w854-h480-no/insect_original.png
| |
00:17 | Bertl | that's at least not clipped in the highlights :)
| |
00:17 | anton__ | Bertl: I want to know if we actually hit Vtfl3 before Exp_kp2
| |
00:19 | anton__ | Bertl: PLR HDR is equivalent to superimposing 3 images with different exposures; and highlights in the first 2 may have been clipped; the info that is missing in the resulting superimposition is whether these first 2 were actually clipped or not
| |
00:19 | Bertl | unless you have a very weird setting of voltages exposures, this can be calculated from the snap
| |
00:19 | Bertl | voltages and exposures
| |
00:19 | anton__ | Bertl: suppose a bright spark is flying through the scene
| |
00:20 | anton__ | Bertl: suppose this spark enters in the last very moment to a pixel which was not clipped
| |
00:20 | anton__ | Bertl: it's like this spark is absent in first 2 images but present on the 3rd
| |
00:20 | Bertl | correct
| |
00:21 | anton__ | Bertl: suppose this resulted in this pixel being 95% of max brightness (numerically)
| |
00:21 | anton__ | Bertl: we can not tell if this 95% resulted from a less bright spot which was there all the time and caused the first 2 images to clip
| |
00:21 | anton__ | Bertl: or from the spark entering the pixel only in the 3rd image
| |
00:22 | Bertl | yes, I agree
| |
00:22 | anton__ | Bertl: that is the info I wanted to recover: was there clipping or not?
| |
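The ambiguity anton__ describes can be shown with a toy model (the numbers and the simple summing model are illustrative; per his earlier point, PLR HDR behaves like superimposing three exposures whose first two may clip):

```python
# Toy model: three sub-exposures are merged in-sensor and only the
# sum survives; the first two clip at an assumed level of 100.
CLIP = 100

def merged(e1, e2, e3):
    return min(e1, CLIP) + min(e2, CLIP) + e3

# a steady bright spot that clips the first two sub-exposures ...
steady = merged(150, 150, 40)
# ... and a dark pixel hit by a spark only during the third one
spark = merged(5, 5, 230)
print(steady, spark)  # 240 240 -- indistinguishable after the merge
```

Both scenes produce the same merged value, which is exactly the information the extra short-exposure snap is meant to recover.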
00:22 | Bertl | but if you take another snap after the HDR snap/readout, that fast moving bright spark will end up in another pixel
| |
00:23 | Bertl | and unless you do motion tracking, the information will be useless, no?
| |
00:30 | anton__ | Bertl: my thinking was - make the extra frame with very fast shutter, say 1/10'000; make an assumption that spark is not moving too fast; then on the new frame it will be approximately at the same location as during the last moments of the main exposure
| |
00:31 | anton__ | Bertl: then if the spark is very bright in the extra frame - it means there was no clipping
| |
00:31 | anton__ | Bertl: but if the spark on the extra frame is relatively dark - then there was clipping
| |
00:32 | anton__ | Bertl: then looking at the main exposure we can find the trace of the spark - it will be the area that got clipped
| |
00:32 | anton__ | Bertl: and immediately adjacent to one end of this clipped area there is the spark
| |
00:33 | Bertl | I don't think that you will get useful data under the assumption that the object is moving that fast
| |
00:33 | Bertl | it would be easier/simpler to run 3 exposures instead of the HDR one and combine the data afterwards
| |
00:33 | anton__ | Bertl: well, I don't really want to capture a bullet
| |
00:33 | anton__ | Bertl: that was my earlier idea: do 3 exposures
| |
00:34 | anton__ | Bertl: if we do 3 exposures we have all the info we need - we know precisely what got clipped and what not
| |
00:34 | anton__ | Bertl: with that info I hope post processing has got a much better chance of guessing how to correct artifacts
| |
00:35 | anton__ | but won't 3 exposures add too much noise?
| |
00:36 | Bertl | maybe, maybe not
| |
00:38 | Bertl | guess that's something which needs to be tested :)
| |
00:45 | anton__ | Bertl: another crazy idea. Suppose we're happy with 1 knee point. Suppose we make 3 exposures however. Suppose exposure A is 1/5000, exposure B is 1/100, exposure C is 1/5000
| |
00:45 | anton__ | Bertl: then we add A + B + C for the final image (in post processing)
| |
00:45 | anton__ | Bertl: then we make a mask; the mask allows modification only where B was clipped
| |
00:46 | anton__ | Bertl: then within that mask we somehow smudge from A to B
| |
00:46 | Bertl | regardless of the processing, I'd suggest to reduce that to A and B
| |
00:46 | anton__ | Bertl: sorry from A to C
| |
00:46 | Bertl | because A1,B1,A2,B2,A3,B3 will allow the same with less data and better exposures
| |
00:46 | Bertl | A=A1, B=B1, C=A2
| |
00:47 | anton__ | Bertl: yeah, or maybe that
| |
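The A/B/C scheme sketched above, as rough Python (the clip level, exposure ratio, and the simple average-based fill are all assumptions for illustration; real data would need black-level and FPN handling first):

```python
import numpy as np

CLIP = 4095          # assumed 12-bit clip level
RATIO = 5000 / 100   # B (1/100) collects ~50x the light of A or C (1/5000)

def merge_abc(a, b, c):
    """Sum A+B+C; wherever B clipped, rebuild B's contribution from
    the exposure-scaled average of the two short exposures that
    bracket it in time."""
    out = a + b + c
    bad = b >= CLIP                   # mask: modify only where B clipped
    est_b = ((a + c) / 2.0) * RATIO   # stand-in for the clipped B values
    out[bad] = a[bad] + est_b[bad] + c[bad]
    return out

a = np.array([10.0, 10.0])
b = np.array([500.0, 4095.0])   # second pixel clipped in B
c = np.array([10.0, 10.0])
print(merge_abc(a, b, c))
```

Bertl's A1,B1,A2,B2,... ordering achieves the same thing with less data, since each long exposure is bracketed by the short ones around it.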
00:47 | anton__ | Bertl: anyway I'm happy that you're aware of this idea - instead of doing PLR HDR in-sensor emulate the same in camera
| |
00:48 | anton__ | Bertl: maybe that's nothing new to you - I'm happy anyway
| |
00:48 | Bertl | yeah, it has two problems though
| |
00:49 | anton__ | Bertl: the main idea here is that when you do it in camera you end up with more info for post processing to make its guesses; what are the probs?
| |
00:49 | Bertl | first, the readout introduces some noise (with a global pipelined shutter)
| |
00:50 | Bertl | and secondly the readout time might be a limiting factor for very short exposures
| |
00:50 | anton__ | Bertl: the doc said that FOT (frame overhead time) is about 70 microseconds?
| |
00:50 | anton__ | so the two exposures need to be 70 microseconds apart
| |
00:51 | Bertl | yes, but you have to calculate the readout time as well
| |
00:51 | anton__ | Bertl: if my reading of the PDF is correct that's not a problem?
| |
00:51 | anton__ | Bertl: it seems that next exposure can start when we're still reading out?
| |
00:51 | Bertl | yes, but that creates noise
| |
00:52 | Bertl | we saw that in our tests
| |
00:52 | anton__ | Bertl: aha - so reading out during exposure creates extra noise
| |
00:52 | anton__ | Bertl: this may be a show stopper
| |
00:52 | Bertl | or the other way round, i.e. exposing creates extra noise on the readout :)
| |
00:53 | anton__ | I see
| |
00:53 | Bertl | we didn't investigate it further, but it might complicate things
| |
00:54 | anton__ | is readout a long procedure?
| |
00:54 | Bertl | depends on the data and what you consider long :)
| |
00:55 | Bertl | the 300 FPS limit of the sensor originates from the readout time
| |
00:55 | anton__ | I see. Around 1/300 is definitely long for that kind of magic
| |
00:56 | anton__ | this noise - is it just lower bits?
| |
00:56 | anton__ | or can any bit be wrong?
| |
00:57 | Bertl | as far as we checked, it consists of an offset and some rather low noise
| |
00:57 | Bertl | but it might be perfectly compensatable with proper settings
| |
00:59 | anton__ | well.. then I'd dare bring up my other idea again - you do PLR HDR in-sensor and make an extra frame with very short exposure, say immediately before the main exposure; you don't use that other frame directly - so a bit of noise during its readout doesn't matter; you use that other frame to help you make decisions on how to deal with motion artifacts
| |
00:59 | anton__ | it lets you guess if there was clipping or not during main exposure
| |
01:00 | Bertl | looking forward to the results
| |
01:01 | anton__ | Thank you for spending lots of your valuable time discussing my clueless ideas :) G'night
| |
01:02 | Bertl | ideas are good and appreciated
| |
01:02 | Bertl | and it is quite interesting to hear what folks plan to do with the Beta, so I'm really looking forward to the results
| |
01:02 | derWalter | me 2!!!
| |
01:03 | derWalter | btw, i pick up that gn8 saying and leave my self :)
| |
01:03 | derWalter | cu tomorrow, sleep tight!
| |
01:03 | Bertl | cya
| |
01:05 | anton__ | well I mainly want Axiom to succeed; and DR is an important factor; it wouldn't be good to be losing on DR to BMPCC; so squeezing whatever DR possible out of the sensor makes perfect sense to me
| |
01:06 | anton__ | perhaps you've seen my chat with Alex - there I suggested simply doing 4 or 6 frames in succession and adding them together; this is expected to have some positive effect on DR - though not as much as I naively hoped for
| |
01:07 | anton__ | currently it seems to me that doing 4x frames at 4 times shorter exposures will halve the ISO while expanding the DR by 2 stops - not sure if I'm right on this though..
| |
01:08 | Bertl | why half the iso?
| |
01:10 | anton__ | I may be completely wrong on it... the way I understood Alex is - once we add 4 frames together the standard deviation of noise gets sqrt(4) times bigger - i.e. the noise is twice as big... that is under the assumption that noise does not depend on shutter speed
| |
01:10 | anton__ | so to get noise back to where it was we can make the picture 2 times darker - hence half the ISO
| |
01:13 | anton__ | so perhaps with 4 frames we gain only 1 extra DR stop? not sure
| |
01:13 | Bertl | hmm ... I'd presume you scale the gain (if possible) to 4 times the range
| |
01:13 | Bertl | for each exposure, which will indeed make the noise sqrt(4) bigger
| |
01:13 | anton__ | well I'm pretty lost on this - but that's another idea that I wanted to make sure you're aware of :)
| |
01:14 | anton__ | shoot 4x or 6x more frames at correspondingly shorter shutter speed in quick succession and then add up in-camera
| |
01:14 | Bertl | but you are also getting four times the range when simply adding up
| |
01:15 | anton__ | as I said I'm a bit lost now about the range - e.g. what the difference is between the darkest object we can distinguish from pitch black and the brightest object that is not yet clipped
| |
01:15 | Bertl | so I'd rather think that you would be able to reduce the noise this way
| |
01:15 | Bertl | not considering read-out noise and other effects of bumping the gain
| |
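The sqrt(4) argument above can be sanity-checked with a toy simulation (hypothetical numbers; only signal-independent read noise is modelled, no photon shot noise or FPN): summing four quarter-length exposures preserves the total signal, but the stacked read noise grows by sqrt(4) = 2.

```python
import numpy as np

rng = np.random.default_rng(1)
signal, read_noise, n, samples = 100.0, 5.0, 4, 100_000

# one long exposure vs. the sum of n exposures of signal/n each,
# where every readout adds its own independent read noise
single = signal + rng.normal(0, read_noise, samples)
stacked = sum(signal / n + rng.normal(0, read_noise, samples)
              for _ in range(n))

print(single.std())   # ~5
print(stacked.std())  # ~10  (= 5 * sqrt(4))
```

So on the read-noise floor, stacking costs sqrt(n) in noise while the same total light is collected, which is where the "half the ISO for 2 stops of range" trade-off estimate comes from.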
01:15 | anton__ | well I've circulated the idea to you - that is all I can do anyway :)
| |
01:16 | Bertl | how so?
| |
01:16 | anton__ | Bertl?
| |
01:16 | Bertl | don't you plan to test your ideas on a Beta?
| |
01:16 | anton__ | ha ha :) possibly so
| |
01:16 | anton__ | I mean for now
| |
01:17 | Bertl | so, I think the main advantage AXIOM has over all the other cameras out there is that everybody can test new approaches
| |
01:17 | anton__ | indeed
| |
01:17 | Bertl | which makes it possible to test a huge amount of different ideas in parallell
| |
01:18 | Bertl | *parallel even, spent too much time with parallella boards :)
| |
01:21 | anton__ | you get all the cool toys! :)
| |
01:26 | aombk | this interleaved hdr, the binning could be done inside the camera or later in post?
| |
01:27 | Bertl | both, yes
| |
01:28 | aombk | nice
| |
01:33 | aombk | and in camera, only the "fast" algorithms would work in realtime?
| |
01:35 | Bertl | well, depends on the algorithm :)
| |
01:36 | anton__ | left the channel | |
01:39 | aombk | also this high framerate hdr could be done inside the camera to output an hdr video of half the framerate to the hdmi?
| |
01:47 | Bertl | probably
| |
01:48 | mithro | left the channel | |
01:48 | cfelton | left the channel | |
01:54 | mithro | joined the channel | |
01:54 | cfelton | joined the channel | |
02:07 | derWalter | left the channel | |
02:14 | aombk | the analog gain is like iso? double gain is like double iso is like 1 stop more?
| |
02:25 | Bertl | yes, at least with linear ISO values
| |
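The relation above in one line (the base ISO of 200 is just an assumed example): stops gained = log2(gain), which is also why linear ISO numbers double per stop.

```python
import math

# doubling the analog gain adds one stop; with linear ISO numbering,
# the ISO value doubles per stop as well (base ISO 200 assumed here)
for gain, iso in [(1, 200), (2, 400), (4, 800), (8, 1600)]:
    print(f"gain {gain}x -> +{math.log2(gain):.0f} stops, ISO {iso}")
```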
02:52 | Bertl | off to bed now ... have a good one everyone!
| |
02:52 | Bertl | changed nick to: Bertl_zZ
| |
05:37 | TheUberKevlar | left the channel | |
07:39 | danieeel | left the channel | |
07:42 | se6astian|away | changed nick to: se6astian
| |
07:42 | se6astian | good morning!
| |
08:03 | Juicyfruit | good morning
| |
08:45 | danieel | joined the channel | |
10:33 | derWalter | joined the channel | |
10:33 | derWalter | goood morning
| |
11:03 | Bertl_zZ | changed nick to: Bertl
| |
11:03 | Bertl | morning folks!
| |
12:05 | se6astian | bbs
| |
12:05 | se6astian | changed nick to: se6astian|away
| |
12:42 | philippej | joined the channel | |
12:52 | se6astian|away | changed nick to: se6astian
| |
12:55 | derWalter | sers bertl :)
| |
12:57 | Bertl | how's it going?
| |
12:58 | philippej | hi everyone !
| |
13:02 | aombk | hi
| |
13:09 | jucar | left the channel | |
13:10 | aombk | there has to be something that can be done for the campaign to reach more people
| |
13:10 | jucar | joined the channel | |
13:11 | aombk | i dont believe that everybody who should know about it does
| |
13:12 | aombk | will there be apertus at photokina?
| |
13:12 | Bertl | I'm pretty sure there are many channels we haven't reached yet, but for obvious reasons, it seems to be hard to reach them :)
| |
13:27 | derWalter | wellllll i stayed up late, slept late and while waking up, i realized i ate a lot of unhealthy stuff lately, so i will fast for a day and maybe drink some purification teas :) digging through all the papers i wrote the last days during my appointments, generating todo lists from them and doing em :) and i ve to clean my house, as i ve had no time and energy the last three or four days to do it... and i should mow the lawn and do some
| |
13:28 | derWalter | well i am also thinking a lot about what channels to reach out for... i wrote local newspapers and such, but hey, just remembered something!
| |
13:35 | philippej2 | joined the channel | |
13:35 | alexML_ | ML forums should be also helpful (we had 1 million downloads in two months), so you may consider being a little more active here: http://www.magiclantern.fm/forum/index.php?topic=11787
| |
13:36 | aombk | only very few cinema/dslr blogs/news sites have mentioned apertus beta. on the contrary digital bolex kickstarter campaign was mentioned to almost all of them many times, not just a post
| |
13:36 | philippej | left the channel | |
13:38 | aombk | yes that would help
| |
13:39 | aombk | hope you get the images you want soon so you can make a post too alexML_
| |
13:40 | alexML_ | yeah; se6astian, do you need any more info about the test images?
| |
13:42 | aombk | where can i find raw alpha footage?
| |
13:45 | aombk | i think the AXIOM Alpha Colorgraded Sample Footage is not the best it can be. actually the last scenes are a bit disappointing in terms of DR
| |
13:46 | aombk | i would like to try some color grading
| |
13:53 | aombk | also i believe you could present the goals of the project better
| |
13:54 | dmj726 | left the channel | |
13:55 | se6astian | alexML_: will hopefully have time to do them tomorrow
| |
13:56 | aombk | from what i understand, when the camera is out, with an external hdmi recorder we will be able to record 1080p 60fps 15 stops (with interleaved hdr and nice binning to 1080p, no problem!) global shutter
| |
13:56 | aombk | thats a very very cheap alexa
| |
13:57 | aombk | and that could be made very clear in the campaign page
| |
14:01 | aombk | and the (not so distant) future goal is to find a way to unlock the whole potential of the sensor optimized slope hdr way to record 4k 300fps to external etc etc
| |
14:01 | derWalter | mhh as far as i understood you will be able to record 4k over hdmi 2.0
| |
14:01 | aombk | sensor aided stabilization...
| |
14:01 | aombk | many trick will come with time
| |
14:02 | derWalter | there are two options for 4k hdmi output, a hdmi 1.3 shield with 4 links, or one hdmi 2.0 link
| |
14:02 | philippej2 | I guess we are a bit hesitant to promise too much, although I'm sure we'll offer much more than people expect. At least we want to be perfectly honest.
| |
14:02 | derWalter | once they got the shields up and running
| |
14:02 | aombk | that should be also made very clear in the campaign
| |
14:02 | derWalter | and there is no solution for raw recording yet, but a lot of space for development :)
| |
14:02 | philippej2 | have you seen the update of the campaign text?
| |
14:02 | aombk | will there be hdmi 2?
| |
14:03 | derWalter | well never forget, this is for developers, not end users!!!
| |
14:03 | philippej2 | you might need to reload the page completely
| |
14:03 | derWalter | se6astian was talking about it and i guess i talked with bertl about it as well
| |
14:03 | aombk | but dont promise too much
| |
14:03 | aombk | promise a cheap alexa to begin with
| |
14:03 | derWalter | i might be totally mistaken as well, but afaik its either 4 hdmi 1.3 links or one hdmi 2.0 link on the shield
| |
14:04 | derWalter | i wouldnt promise anything :D
| |
14:04 | aombk | "a really affordable alexa with lots of room for experimentation"
| |
14:04 | derWalter | did you ever think of how small that piece of hardware is we will get?
| |
14:05 | derWalter | its TINY
| |
14:05 | aombk | yes
| |
14:05 | derWalter | its a fetus after all :D
| |
14:06 | derWalter | but it will grow and by growing (which means making A LOT OF MISTAKES!!) it will clear the path for a successful version GAMMA
| |
14:06 | philippej2 | you can always put it in a bigger enclosure, the contrary being impossible :-)
| |
14:06 | derWalter | right
| |
14:06 | derWalter | if you start big, you can only shrink
| |
14:06 | derWalter | start little, than you can grow :)
| |
14:07 | derWalter | s/than /then
| |
14:07 | aombk | maybe i read it the wrong way. but i dont think the campaign is going very well
| |
14:07 | aombk | hope im wrong
| |
14:08 | derWalter | i am also worried to be honest
| |
14:08 | derWalter | maybe it should be announced as camera and not cinema/movie camera
| |
14:09 | dmj726 | joined the channel | |
14:09 | aombk | there's still time to correct some things and make some things clear
| |
14:09 | derWalter | yep, for stuff like this i would connect to media first and then launch it like a big boom :)
| |
14:10 | aombk | the main thing is what people will have after they give you 2700 euro and what they will be able to do with it
| |
14:13 | theuberkevlar | joined the channel | |
14:14 | aombk | the update you did seems to be in the right direction
| |
14:16 | aombk | what is this Look-Around?
| |
14:17 | aombk | the area outside the 4k safe area?
| |
14:20 | troy_s | Overscan I believe.
| |
14:22 | Bertl | the look around is basically: if you record less than the sensor resolution, then you're still able to see the area outside (where microphones come into the picture, etc.) on the monitor
| |
14:22 | aombk | where can i find some older footage you had? i remember an underground one and one inside an elevator that was going up. where can i find those?
| |
14:23 | Bertl | se6astian probably knows where all the raw data is
| |
14:23 | Bertl | I only have the very early AXIOM Alpha RAWs on my server
| |
14:24 | Bertl | regarding HDMI 2.0: while that is possible, it is not really planned; HDMI 1.4 already allows 4k at 30 FPS, which should suffice
| |
14:26 | Bertl | HDMI 2.0 requires 6Gbit per TMDS lane and I do not know a chip which can do that ATM, so it would require a second FPGA or a 7015 Zynq
| |
14:26 | derWalter | so it will be three hdmi 1.4 links?
| |
14:26 | derWalter | thx for clearing that up bertl!!
| |
14:27 | Bertl | either 3x FullHD (or slightly above) or one 1.4 with 4k
| |
14:27 | Bertl | (depending on the shield used)
| |
14:28 | Bertl | but as we have not seen any real world HDMI 2.0 (or 2.1 FWIW) recorder either, I do not think that it is a big deal
| |
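Bertl's lane-rate figures check out with back-of-envelope arithmetic. A rough sketch (blanking intervals are ignored, so real TMDS clocks run somewhat higher, e.g. 4k30 actually uses a 297 MHz pixel clock; 3 data lanes and 8b/10b encoding assumed):

```python
def tmds_gbps_per_lane(w, h, fps, bits_per_px):
    # raw pixel payload spread over 3 TMDS data lanes,
    # with 10 bits on the wire for every 8 bits of data
    return w * h * fps * bits_per_px * (10 / 8) / 3 / 1e9

# 4k30 4:4:4 8-bit: ~2.5 Gbps/lane, within HDMI 1.4 (3.4 Gbps/lane max)
print(round(tmds_gbps_per_lane(3840, 2160, 30, 24), 2))
# 4k60 4:4:4 8-bit: ~5 Gbps/lane, needs HDMI 2.0 (6 Gbps/lane)
print(round(tmds_gbps_per_lane(3840, 2160, 60, 24), 2))
# 4k60 4:2:0 (12 bpp average) fits in ~2.5 Gbps/lane again
print(round(tmds_gbps_per_lane(3840, 2160, 60, 12), 2))
```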
14:31 | Bertl | I'm also confident that we will find a number of smart ways to capture the data during Beta development or at least when developing with/for the Beta
| |
14:31 | Bertl | but as we already said, we do not like to promise features we are not 100% sure we can deliver
| |
14:33 | aombk | alexa specs are the least you can promise
| |
14:36 | Bertl | philippej2: please disable/remove the test contact form :)
| |
14:36 | mithro | left the channel | |
14:37 | Bertl | aombk: I don't know the alexa specs and I don't think we want to compare the AXIOM to proprietary devices out there
| |
14:38 | aombk | i dont mean you should use the productname alexa
| |
14:39 | aombk | but the least beta can do is very close to what alexa does
| |
14:39 | anton__ | joined the channel | |
14:39 | aombk | and alexa is considered by many the best choice available
| |
14:39 | aombk | you could use it
| |
14:40 | seku_ | joined the channel | |
14:41 | anton__ | hi, i contacted cheesycam.com and the guy very politely responded that he likes the project very much but does not run stories on kickstarter projects as a matter of policy
| |
14:41 | seku_ | i dont think you should call it alexa-like ... first, the dynamic range will (and this is to be tested) only be attained through HDR, and there's much other stuff to the alexa
| |
14:41 | danieel | Bertl: hdmi 2.0 is not just 6G
| |
14:42 | anton__ | i thought - hmm maybe if there was a page on the net with instructions on how to build the shoe box camera - maybe then he could run it
| |
14:42 | danieel | it is 3G by 4:2:0 also
| |
14:42 | anton__ | and perhaps that kind of an article could draw attention in other types of media
| |
14:43 | anton__ | i mean on the net; the page could have all footage available and a modest link to the campaign at the bottom
| |
14:44 | anton__ | its very interesting to watch other people work :)
| |
14:46 | mithro | joined the channel | |
14:48 | Bertl | anton__: go ahead, write one, all the data is available on either apertus.org or github
| |
14:50 | Bertl | we know, because we have pictures of a working version somebody built :)
| |
14:50 | Bertl | off for a nap now ... bbl
| |
14:50 | Bertl | changed nick to: Bertl_zZ
| |
14:53 | dakorl | joined the channel | |
14:54 | philippej2 | Hello dakorl
| |
14:58 | seku_ | left the channel | |
14:59 | dakorl | hello, maybe supporters are interested in axiom t-shirts and give another 50€ or so...
| |
15:00 | dakorl | ... 50 EUR ...
| |
15:15 | dakorl | left the channel | |
15:23 | anton__ | left the channel | |
15:30 | derWalter | mhhh is 4k 4:4:4 possible over hdmi 1.4? dont think about recorders for it ^^
| |
15:42 | philippej2 | left the channel | |
15:46 | derWalter | the overscan area could come in handy, when you have dead pixels or so :)!
| |
15:57 | derWalter | i mean if it would be theoretically possible for the beta
| |
16:01 | philippej | joined the channel | |
16:08 | philippej | left the channel | |
16:17 | troy_s | derWalter: Overscans are very common. Not really used for dead pixels.
| |
16:29 | aombk2 | joined the channel | |
16:32 | aombk | left the channel | |
16:33 | aombk2 | changed nick to: aombk
| |
16:53 | intracube | I was going to ask if there was a mention on the campaign at cinematography.com
| |
16:53 | intracube | there is... but it's buried in a sub-forum: http://www.cinematography.com/index.php?showtopic=54929&page=2
| |
16:55 | intracube | maybe when there's an update to the campaign, post it in the main 'general discussion' forum - lots more page views there
| |
17:00 | danieel | left the channel | |
17:00 | danieel | joined the channel | |
17:00 | aombk | left the channel | |
17:00 | _TiN_ | left the channel | |
17:01 | intracube | left the channel | |
17:02 | aombk | joined the channel | |
17:02 | _TiN_ | joined the channel | |
17:02 | derWalter | troy_s: i know :D :D :D but i just thought of that use as my d800 got a dead pixel quite close to the border
| |
17:02 | ctag | left the channel | |
17:03 | ctag | joined the channel | |
17:03 | derWalter | a function to define which part of the sensor matrix should be read for achieving the 1080p signal is very desirable as well; the nikon d4 has a similar function, where one can read "true" 1080p from the center of the sensor, which gives a lot more sharpness and effectively makes the lens longer (crop factor)
| |
17:08 | troy_s | derWalter: Easier than that. Flag the pixel and fill it.
| |
17:09 | troy_s | derWalter: Most imaging apps provide for hot / dead pixel maps.
| |
17:09 | derWalter | you and your practical approach :D
| |
17:09 | danieel | even some hidden firmware functions offer you that :)
| |
17:09 | troy_s | derWalter: We wouldn't be in this forum if Bertl_zZ and se6astian weren't practical about goals.
| |
17:09 | derWalter | where is the fun of dreaming, if the story ends after sentence one :D
| |
17:10 | derWalter | well... not for the d800 afaik
| |
17:10 | troy_s | derWalter: The problem can appear complex if you speculate. In the end though, it is missing data
| |
17:10 | derWalter | every achieved goal started out as a well dreamed vision
| |
17:10 | troy_s | derWalter: Only a few ways to "solve" that.
| |
17:10 | derWalter | wellllllllll
| |
17:11 | derWalter | like the first published statement to the crowdfunding ended with: fingers crossed
| |
17:27 | Bertl_zZ | changed nick to: Bertl
| |
17:27 | Bertl | back now ...
| |
17:49 | aombk | bertl, do you know of a way i can stream the prores movs from your server to check them before downloading?
| |
17:49 | aombk | vlc doesnt work for me
| |
17:50 | aombk | actually, prores are streamable, right?
| |
18:03 | Bertl | I doubt I have any prores movs on my server :)
| |
18:04 | Bertl | what URL are we talking about?
| |
18:09 | aombk | http://footage.apertus.org
| |
18:11 | Bertl | okay, that is the apertus server, no idea about prores, but you can get the file with curl or wget (in a stream like fashion) and pipe it into whatever can interpret it
| |
18:11 | Bertl | IIRC, we reordered a few MOVs to have the index information at the beginning
| |
18:13 | se6astian | new update posted in campaign: https://www.indiegogo.com/projects/axiom-beta-the-first-open-digital-cinema-camera/x/5022798#activity
| |
18:25 | aombk | nice
| |
18:25 | aombk | bertl, do you remember the filename of one of those movs so i can test?
| |
18:26 | Bertl | nope, it was an automated conversion, but IIRC, we left the old version there as well, so those where a .mov.<something> is present too should have been converted
| |
18:30 | intracube | joined the channel | |
18:40 | troy_s | se6astian: Might be worth mentioning that the triple HDMIs will offer independent LUT support on each.
| |
18:40 | Bertl | will they?
| |
18:41 | troy_s | Bertl: You said they would!
| |
18:41 | troy_s | LOL
| |
18:41 | Bertl | did I?
| |
18:41 | troy_s | Kind of hard to have outputs without LUTs anyways.
| |
18:41 | Bertl | fact is, we do not know where the HDMI ports will be placed in the pipeline yet
| |
18:42 | troy_s | Well at some level, that raw data is going to need conversion, and somewhere in that chain the shaper and transform will be in place.
| |
18:42 | Bertl | so it sounds reasonable that we put one on the raw data, another on processed data and the third after combining the processed data with an overlay
| |
18:42 | Bertl | (or alternatively combining the raw data with an overlay)
| |
18:43 | Bertl | but assuming that the FPGA has enough resources, three completely independent pipelines would be conceivable as well
| |
18:44 | troy_s | Sure. At the very least, the idiot view would need LUT support I suppose.
| |
18:45 | troy_s | (As in the one that folks are using to look at the footage. I would doubt anyone would want to shoot without a basic LUT of some sort anyways, nor would a 1st even use it.)
| |
18:46 | intracube | Bertl: what would the overlay be for in this context? safe areas/shutter angle/framerate displays and such?
| |
18:48 | Bertl | yes, for example
| |
18:48 | intracube | gotcha
| |
18:49 | troy_s | Bertl: Did you ever come up with a plan for the framelines?
| |
18:50 | Bertl | hmm, a search for 'framelines' on the wiki yields nothing, care to remind me what the problem/idea was?
| |
18:51 | troy_s | The frame lines. The ability to pass custom frame lines for the EVF etc.
| |
18:55 | derWalter | does the beta board support hardware h264 encoding?
| |
18:57 | aombk | Bertl, so it is unlikely you will continue the automated conversion?
| |
18:57 | troy_s | derWalter: I don't believe so.
| |
18:57 | troy_s | derWalter: Not exactly desirable.
| |
18:57 | Bertl | derWalter: define hardware h264 encoding
| |
18:58 | Bertl | aombk: is there something (new) to convert?
| |
18:58 | aombk | yes. almost everything
| |
18:58 | Bertl | troy_s: ah, well, either we manage that with the overlay or we do special line stuff like with the pong game if you remember
| |
18:59 | troy_s | I remember the pong. I remember you talking about making a configurable line set.
| |
18:59 | intracube | troy_s: we could go old-school for the frame lines on the beta
| |
18:59 | troy_s | Not super easy of course, as the matte regions would need to be adjustable etc. Hard to define in any sane way in a format.
| |
18:59 | intracube | ...gaffa tape :P
| |
18:59 | intracube | *gaffer
| |
18:59 | aombk | bertl, only one dir is converted. and one or two from other dirs
| |
19:00 | Bertl | troy_s: I'm pretty sure we figure out some way when we get there, but it would probably help to put the actual requirements on the wiki *hint*
| |
19:00 | troy_s | I swear I had them there.
| |
19:01 | Bertl | maybe I just searched for the wrong term
| |
19:01 | troy_s | Bertl: https://wiki.apertus.org/index.php?title=Frame_Lines
| |
19:01 | troy_s | Jerk.
| |
19:01 | Bertl | anyway, there are a lot of folks here who would love to join in this and similar efforts I guess
| |
19:01 | troy_s | :P
| |
19:02 | Bertl | okay, still could use some examples, maybe images of existing solutions or links to them
| |
19:04 | Bertl | the overlay vs. generated objects on the view is more a bandwidth question
| |
19:04 | derWalter | a dedicated chip, not to use the fpga
| |
19:05 | Bertl | so it really depends on the actual data, format, etc
| |
19:05 | aombk | Bertl, if you find some spare time please convert the files in all the dirs in AXIOM Alpha footage/
| |
19:05 | Bertl | derWalter: no proprietary chip for h264 on the AXIOM Beta
| |
19:05 | Bertl | derWalter: but you probably could put one on a shield
| |
19:06 | derWalter | i was just thinking of saving some 720p proxy files along with the (however realized) 4k output
| |
19:06 | aombk | i want x264 shield
| |
19:07 | aombk | or x265 even better
| |
19:07 | Bertl | build one :)
| |
19:07 | troy_s | Lulz.
| |
19:08 | aombk | nah
| |
19:08 | derWalter | i take two!!
| |
19:08 | aombk | you build three and give one to me
| |
19:08 | derWalter | nice rime!
| |
19:08 | troy_s | Editorial stuffs. Dump it when you land the disk.
| |
19:08 | troy_s | Erf.
| |
19:09 | derWalter | god dmmm.. rhyme i meant :D
| |
19:12 | intracube | "<Bertl> okay, still could use some examples, maybe images of existing solutions or links to them"
| |
19:12 | tyrone_ | joined the channel | |
19:12 | aombk | yeah thanks
| |
19:12 | tyrone_ | aombk: the webm vp8 codec is available as vhdl....
| |
19:12 | intracube | Bertl: do you mean just examples of multiple frame lines?
| |
19:13 | intracube | this would be for TV work rather than film, but: http://www.videocopilot.net/blog/wp-content/uploads/2010/01/cut.jpg
| |
19:13 | intracube | (HD 16x9 with 4x3 safe markings)
| |
19:18 | Bertl | the important part is what kind of markings are to be expected
| |
19:19 | Bertl | i.e. just various horizontal and vertical lines or areas (alpha) and if so, what kind of areas, what alpha values, etc
| |
19:19 | Bertl | so that we can think about keeping it pretty generic, but also efficient
| |
19:20 | troy_s | left the channel | |
19:21 | intracube | IMO, as a minimum; black, white, transparent pixels
| |
19:21 | troy_s | joined the channel | |
19:22 | aombk | ha! 2 betas 3d setup and android phone with google cardboard vr would be great!
| |
19:22 | intracube | semi-transparent would be nice, but I guess would be significantly more complex
| |
19:23 | Bertl | not a big problem per se
| |
19:24 | troy_s | Normally the mattes are adjustable
| |
19:24 | troy_s | (or rather typically, and for good reason.)
| |
19:24 | troy_s | Which is why I liked the idea of a TIFF or other such format so that you could deliver the lines and custom alpha regions.
| |
19:24 | troy_s | But I get it that the processing can be an issue.
| |
19:25 | intracube | troy_s: yes, tiff, png, gif for pre-prepared lines
| |
19:25 | troy_s | It's effectively two bitmasks really.
| |
19:25 | intracube | troy_s: but are you also suggesting on-the-fly generation in camera?
| |
19:25 | troy_s | intracube: No. But Bertl was originally concerned about processing overhead for an image.
| |
19:26 | intracube | so the camera would render its own lines based on options in the menu?
| |
19:26 | intracube | oh ok
| |
19:26 | troy_s | intracube: I think so. Or some derivation thereof.
| |
19:26 | troy_s | Keep the processing down.
| |
19:26 | troy_s | (as opposed to doing a full over op every frame or whatever)
| |
19:26 | intracube | not sure how that would necessarily be any less compute intensive
| |
19:26 | Bertl | let me explain it once again then :)
| |
19:27 | intracube | I can understand pixel blending operations would be more difficult
| |
19:27 | Bertl | we basically have two options to generate such lines (as far as I know)
| |
19:27 | Bertl | we can generate them as part of the image pipeline (as we did for the entire graphics in the pong game)
| |
19:28 | Bertl | and we can fetch an 'overlay' image every frame from memory and combine it with the actual camera image
| |
19:28 | Bertl | the first approach requires careful design and limits the shapes and number of lines you can do but doesn't require much memory to actually describe the lines
| |
19:29 | Bertl | the second approach is more generic, allows arbitrary shapes but also requires twice the memory bandwidth to generate
| |
19:29 | Bertl | because the overlay needs to be fetched pixel by pixel on every frame
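[The overlay approach Bertl describes can be sketched in software terms. A minimal Python sketch (assuming 8-bit frames as NumPy arrays; the function name and layout are illustrative, not actual AXIOM firmware) of fetching a stored overlay with per-pixel alpha and compositing it over the camera frame each frame:]

```python
import numpy as np

def composite_overlay(frame, overlay, overlay_alpha):
    """Blend a stored overlay image over a camera frame ("over" operation).

    frame:         (H, W) uint8 camera image (grayscale for simplicity)
    overlay:       (H, W) uint8 overlay pixel values
    overlay_alpha: (H, W) uint8 alpha, 0 = transparent, 255 = opaque
    """
    a = overlay_alpha.astype(np.uint16)
    # out = fg*a + bg*(1-a), in 8-bit fixed point
    out = (overlay.astype(np.uint16) * a +
           frame.astype(np.uint16) * (255 - a)) // 255
    return out.astype(np.uint8)
```

[Every output pixel needs one extra fetch for the overlay, which is exactly the doubled memory bandwidth mentioned above; the multiply-add per pixel is what the DSP blocks would handle in the FPGA.]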
| |
19:34 | intracube | Bertl: could the first method do 50% alpha transparency, for example?
| |
19:35 | intracube | troy_s: being able to adjust framing lines in camera menus on location would be nice I guess...
| |
19:35 | Bertl | yes, why not, it would require 3 dsp blocks to do alpha blending
| |
19:35 | troy_s | intracube: Well you have to remember that some folks want to adjust the marks to upper thirds too etc. Too many variations.
| |
19:35 | troy_s | intracube: So some flexibility is required.
| |
19:36 | intracube | but a production would (should) decide well beforehand what their shooting requirements are and could design safe area images beforehand
| |
19:36 | aombk | does anyone of you guys live in paris?
| |
19:36 | troy_s | intracube: Yep.
| |
19:38 | Bertl | aombk: there were only two files which could be converted
| |
19:39 | Bertl | no, actually three files
| |
19:39 | aombk | which files were these?
| |
19:39 | intracube | Bertl: might something like this be possible with the first method: http://provideocoalition.com/images/uploads/sa00_hero_640.jpg
| |
19:39 | aombk | the ones in the first folder?
| |
19:39 | aombk | the camcat ones?
| |
19:39 | intracube | diagonals and complex geometry aren't required
| |
19:40 | intracube | ...although some people like rounded corners to the boxes...
| |
19:49 | Bertl | intracube: the shape or the strangely smeared black/grey overlap?
| |
19:55 | Bertl | aombk: T047 in garden flowers, the crowd funding video and the beta pitch
| |
20:00 | intracube | Bertl: the smearing is just down to the downscaling of the image. It looks like the lines were first drawn in black, then on top in white - with a slight offset (left and up)
| |
20:04 | Bertl | and that helps?
| |
20:04 | intracube | Bertl: ?
| |
20:04 | Bertl | but yes, would be possible
| |
20:05 | intracube | you mean the double overlay (white over black)?
| |
20:05 | Bertl | yes
| |
20:05 | intracube | it means the lines will always be visible - like if the scene you're shooting is mostly white, you will see the black lines
| |
20:05 | intracube | if you're shooting at night, then the white lines will be visible, etc
| |
20:06 | Bertl | IMHO it would be smarter to simply adjust the color based on the background
| |
20:07 | intracube | yep. I think I've seen some overlays that invert whatever pixel colour is behind them
| |
20:07 | intracube | might need to be a bit more complex than that
| |
20:07 | intracube | otherwise grey scene will end up with grey lines
| |
20:07 | Bertl | yeah
| |
20:08 | Bertl | even ant paths would be rather simple to do in the FPGA
| |
20:09 | Bertl | but I've also considered doing async memory fetches and/or using FPGA memory for compressed overlays
| |
20:10 | Bertl | for example a simple mask can easily be RLE encoded and so could fit into a very small memory block inside the FPGA, as long as it can be simply expanded during the frame construction, it wouldn't need any additional memory bandwidth
| |
20:11 | Bertl | that would fall somewhere in between the two options
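[A software sketch of the RLE mask idea, assuming a simple binary mask encoded row by row; in the FPGA the expansion would be a run counter per scanline rather than a list, so no extra memory bandwidth is needed during frame construction. Function names are illustrative:]

```python
def rle_encode_row(row):
    """Encode one row of a binary mask as (value, run_length) pairs."""
    runs = []
    prev, count = row[0], 0
    for px in row:
        if px == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = px, 1
    runs.append((prev, count))
    return runs

def rle_expand_row(runs):
    """Expand runs back into pixels, as done during scan-out."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out
```

[A mostly-empty frame-line mask compresses to a handful of runs per row, which is why it can fit in a small FPGA memory block.]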
| |
20:11 | intracube | wouldn't inverting the overlay based on the content be compute intensive?
| |
20:12 | intracube | pseudocode for the blending might be: if(bg_pixel <= 0.7) {fg_pixel = 1.0} else {fg_pixel = 0.0};
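[intracube's pseudocode as a runnable sketch, assuming pixel values normalized to 0.0..1.0; the 0.7 threshold is the suggested value from the chat, not a measured one:]

```python
def line_pixel(bg_pixel):
    """Pick a line color that contrasts with the background pixel.

    Dark-to-mid backgrounds (<= 0.7) get a white line, bright
    backgrounds get a black line.
    """
    return 1.0 if bg_pixel <= 0.7 else 0.0
```

[As noted below, this is a cheap per-pixel compare, so it maps well to FPGA logic; in hardware one would compare raw integer pixel values directly instead of normalized floats.]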
| |
20:12 | jucar | left the channel | |
20:13 | Bertl | not really very complicated in the FPGA
| |
20:14 | jucar | joined the channel | |
20:15 | Bertl | of course, you want to make the numbers simpler to compare, but that is details
| |
20:17 | intracube | interesting
| |
20:34 | derWalter | good night everyone
| |
20:41 | jucar | left the channel | |
20:41 | Bertl | night
| |
20:42 | jucar | joined the channel | |
20:44 | tyrone_ | left the channel | |
20:53 | theuberkevlar | left the channel | |
20:55 | se6astian | time to leave as well :)
| |
20:55 | se6astian | good night
| |
20:56 | se6astian | changed nick to: se6astian|away
| |
20:56 | jucar | left the channel | |
20:59 | jucar | joined the channel | |
21:06 | derWalter | left the channel | |
21:13 | theuberkevlar | joined the channel | |
23:00 | theuberkevlar | left the channel | |
23:34 | anton__ | joined the channel | |
23:36 | anton__ | Guys, where is the core Apertus team based?
| |
23:37 | Bertl | you mean where we are located?
| |
23:38 | Bertl | se6astian and me, we are in Austria (nearby Vienna)
| |
23:39 | Bertl | ah, here, found the map, that's simpler :)
| |
23:39 | Bertl | https://apertus.org/team
| |
23:58 | anton__ | Bertl: thx
|