23:02 | Sasha_C | How are you going tonight dmj_nova?
| |
23:03 | dmj_nova | pretty good
| |
23:03 | dmj_nova | mulling over cinema camera forms
| |
23:17 | Sasha_C | Can you provide a link to any examples?
| |
23:25 | dmj_nova | not yet
| |
23:25 | dmj_nova | rexbron: has been pointing out examples of what he likes
| |
23:28 | dmj_nova | been thinking about how to incorporate those thoughts into an AXIOM camera
| |
23:28 | dmj_nova | Sasha_C: What cameras have you worked with?
| |
23:31 | Sasha_C | I've worked with Canon and Pentax DSLRs, the H16 Bolex (with reflex viewfinder), and the Panasonic HVX200
| |
23:32 | Sasha_C | However, since having purchased a Mamiya RB67 Pro SD, I find that I'm now spending more and more time in photography
| |
23:34 | dmj_nova | Okay
| |
23:34 | dmj_nova | I've mostly been a Canon DSLR user myself
| |
23:36 | dmj_nova | It seems that there's a bit of a divide between "box" (DSLR) camera ergonomics and "balance" (shoulder) camera ergonomics.
| |
23:44 | dmj_nova | The desire, it seems, is for a shoulder-mount camera with good balance and very little fiddling and assembling of separate pieces on set
| |
23:45 | dmj_nova | this: http://www.aaton.com/products/film/delta/index.php
| |
23:45 | dmj_nova | not this: http://imgur.com/Rp7Z9bt
| |
00:00 | Sasha_C | In that last one, the rig is probably more expensive than the camera?
| |
00:03 | Sasha_C | From memory, one of the guys from yolk contacted us (could've been last year, or very early this year) and said they were interested in working together with their concept here: http://www.yolk.org/y2.html
| |
01:04 | dmj_nova | Sasha_C: yes it is
| |
01:05 | dmj_nova | interesting thing with the yolk bit
| |
01:07 | dmj_nova | I think I should come up with a concept focused on ergonomics and simplicity of use
| |
01:14 | tonsofpcs | left the channel | |
01:45 | aombk | joined the channel | |
01:45 | aombk | left the channel | |
01:45 | aombk | joined the channel | |
04:00 | dmj_nova | rexbron: troy_s: what is the situation for left-handed camera operators?
| |
04:01 | dmj_nova | do they generally operate cameras right-handed?
| |
04:01 | dmj_nova | I assume that's the case
| |
04:16 | dmj_nova | rexbron: thoughts on this camera: http://www.ikonoskop.com/begood/image_db.php?id=224&w=700&ne=1
| |
04:16 | Bertl | yes, but there are special viewfinders
| |
04:16 | Bertl | (for left 'handed' operators)
| |
04:18 | Bertl | http://irc.13thfloor.at/ALOG/2013-11/LOG_2013-11-12.txt 1384290780
| |
04:22 | dmj_nova | Bertl: thanks
| |
04:22 | Bertl | np
| |
04:22 | dmj_nova | So it seems okay to design a control scheme focused around right-handed operation
| |
04:23 | dmj_nova | it seems southpaws would have learned to work that way anyway
| |
04:23 | Bertl | I was told that in professional environments, the cameraperson doesn't do anything but aim the camera and control the focus
| |
04:23 | Bertl | everything else is basically done by the camera assistant
| |
04:24 | dmj_nova | that everything else being exposure, exchanging the magazine, and what else?
| |
04:25 | Bertl | same log, @1384290720
| |
04:26 | Bertl | I think almost everything which can be controlled on the camera is usually done by the assistant
| |
04:26 | dmj_nova | hmm
| |
04:27 | dmj_nova | Well, I suppose a shoulder-mount system can have two methods of interface:
| |
04:27 | dmj_nova | 1) assistant with LCD panel on left side
| |
04:27 | Bertl | I'm pretty sure that is where indie differs from pro
| |
04:28 | dmj_nova | 2) feel-controls with display in viewfinder
| |
04:29 | Bertl | I don't think that 'pro' camerawomen want to bother with anything in the viewfinder except sharpness and content (i.e. target)
| |
04:29 | dmj_nova | the latter would allow the operator to control the system without taking it off the shoulder or taking their eye from the viewfinder
| |
04:31 | dmj_nova | The same viewfinder hardware would be able to accommodate either HUD+content or simply content though
| |
04:31 | Bertl | sure, not saying that we need/want to focus on only one
| |
04:32 | dmj_nova | What we do need is specific modes of operation with sane defaults
| |
04:32 | dmj_nova | 1) "indie" mode which is made for single-person operation
| |
04:32 | dmj_nova | 2) "pro" mode which is tailored to 2-3 person teams
| |
04:34 | dmj_nova | and we sure as hell don't want to copy the Blackmagic camera
| |
04:35 | Bertl | in what regard?
| |
04:35 | dmj_nova | ergonomics on that thing are...I can tell they're bad just by looking at it
| |
04:36 | dmj_nova | They couldn't even be bothered to conform the grip to one's hand
| |
04:36 | dmj_nova | it's a slab with a touchscreen
| |
04:37 | Bertl | maybe they had just a person with unusual hands :)
| |
04:37 | dmj_nova | you mean a person with square hands?
| |
04:38 | Bertl | yeah, maybe a stone golem :)
| |
04:38 | dmj_nova | now compare the BMCC to any Canon DSLR.
| |
04:39 | dmj_nova | Which one do you want to hold?
| |
04:39 | dmj_nova | (I'm comparing to a DSLR, since it's clearly designed to be "like a DSLR but with RAW video")
| |
04:39 | Bertl | the DSLR, because it seems to weigh a lot less?
| |
04:40 | dmj_nova | Are you sure?
| |
04:40 | Bertl | well, probably depends on the lens system
| |
04:40 | Bertl | but it looks heavy on pictures
| |
04:41 | dmj_nova | ah, yeah it is heavier than most DSLRs
| |
04:41 | troy_s | Bertl / dmj_nova Most left folks (left-eyed etc.) simply train for right.
| |
04:42 | troy_s | It makes sense in the case of eyepieces because some cameras don't give you an option, so you learn right.
| |
04:42 | Bertl | yes, that is how I interpreted the original comment
| |
04:42 | troy_s | Further, if operating right, the idea is to be able to open your left eye to see, so again, left-eyed operators train right as convention.
| |
04:43 | troy_s | And in professional work, the first pulls focus, not the operator.
| |
04:44 | Bertl | btw, what is nowadays the politically correct name? cameraperson, or camerawoman/-man, or still cameraman?
| |
04:44 | troy_s | There is little difference on an indie; you still have someone operating (DP or an op) and a 1st to yank focus.
| |
04:45 | dmj_nova | troy_s: as far as pulling focus, how critical is latency?
| |
04:45 | troy_s | Bertl: In North America "camera operator" is pretty typical.
| |
04:45 | Bertl | ah, nice, tx
| |
04:45 | troy_s | Cameraman used to refer to the DP... now is a bit of an anachronism.
| |
04:46 | troy_s | dmj_nova: There is zero latency. If there is latency, the system is generally avoided like the plague.
| |
04:46 | dmj_nova | troy_s: thanks!
| |
04:46 | Bertl | zero noticeable latency that is
| |
04:46 | troy_s | I can think of a few instances where I have seen a system with about a two frame lag, and it is effectively impossible to operate.
| |
04:46 | dmj_nova | I assumed that it would be a big deal but had some implementation thoughts
| |
04:47 | troy_s | Even the most experienced struggle with it because you can anticipate but the disconnect causes all sorts of issues.
| |
04:47 | troy_s | General rule - there simply is zero latency.
| |
04:47 | troy_s | (on all gear including remote heads)
| |
04:47 | dmj_nova | that means you need the focus puller physically tethered and viewing a direct feed from a physical cable
| |
04:47 | Bertl | we are, to some degree, technical people, so let's settle for zero noticeable :)
| |
04:48 | troy_s | But it is a de facto norm for even the lowest-budget productions to have a 1st to pull focus.
| |
04:48 | Bertl | the thing is, whatever you do, you will never get zero delay
| |
04:48 | dmj_nova | yes, we'll always be reading the previous frame
| |
04:48 | troy_s | dmj_nova: Either physical via a follow focus or ff and whip or a FIZ unit (remote Focus, Iris, Zoom)
| |
04:48 | dmj_nova | because we can't get sensor data before it has been collected and read
| |
04:49 | troy_s | (and aperture if needed for a stop yank)
| |
04:49 | troy_s | dmj_nova: The latency is about zero. Even a frame can screw you up.
| |
04:49 | Bertl | yes, and in many cases, the image acquisition will happen way faster than the output can deliver
| |
04:50 | Bertl | troy_s: it is technically impossible to get below one frame in a global shutter setup
| |
04:50 | dmj_nova | troy_s: You see what I mean about seeing the frame that was just captured rather than the frame which is being captured though, right?
| |
04:51 | troy_s | Bertl: Just saying that the systems in use are effectively perfect frame sync
| |
04:51 | dmj_nova | well, the only way to get below one frame is to use a beam splitter or two lenses
| |
04:51 | Bertl | perfect one-off frame sync systems, yes :)
| |
04:51 | troy_s | Bertl: Beyond a shadow of a doubt. Long crusades go on for remoting the live viewfinder and having zero latency.
| |
04:52 | dmj_nova | two lenses has obvious implications for framing and focus
| |
04:52 | dmj_nova | beam splitter removes half the light from the camera
| |
04:52 | troy_s | Bertl: Not sure how the Alexa or the Sonys deal with it, but it is certainly not a frame out. Likely due to 180 shutter
| |
04:53 | troy_s | At 24 fps, a frame is easily noticeable.
| |
04:53 | dmj_nova | well, technically I suppose one could shoot 48fps
| |
04:53 | troy_s | And they are not out. Operators would be completely pissed.
| |
04:53 | dmj_nova | and toss half the frames
| |
04:53 | Bertl | doesn't help if your viewfinder cannot work at high framerates as well
| |
04:53 | troy_s | No. That would bugger your motion blur.
| |
04:54 | dmj_nova | but 180 shutter
| |
04:54 | troy_s | Aesthetic dictates that 24/25 is the learned baseline.
| |
04:54 | dmj_nova | so you capture 360
| |
04:54 | troy_s | 180 shutter at 24.
| |
04:54 | troy_s | No cheats.
| |
04:54 | Bertl | the funny thing is, human vision cannot 'react' faster than 150ms
| |
04:54 | troy_s | 1/48th
| |
04:54 | dmj_nova | 180 shutter means you capture for 1/2 of each 1/24 second period
| |
04:55 | dmj_nova | each frame is a 1/48th exposure
| |
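The shutter-angle arithmetic being discussed, as a minimal Python sketch (values are illustrative, not from the log):

```python
# Shutter angle -> exposure time, as dmj_nova describes above.
def exposure_time(fps: float, shutter_angle: float) -> float:
    """Seconds of light collection per frame for a given shutter angle."""
    return (shutter_angle / 360.0) / fps

# 180-degree shutter at 24 fps: each frame is a 1/48 s exposure,
# leaving the other 1/48 s of every frame period with the shutter closed.
assert abs(exposure_time(24, 180) - 1 / 48) < 1e-12
```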
04:55 | troy_s | You realize I know this very well right?
| |
04:55 | Bertl | so everything below that value will be considered instant as long as there is no disruption in continuity
| |
04:55 | dmj_nova | during that "dark" time, you could sneak a frame in that you don't actually keep but allow the operator to see for focus
| |
04:55 | troy_s | Just saying commercial cameras capture 24 at 48th. Easily. No cheats. And not a frame out.
| |
04:56 | Bertl | which actually means that at 25FPS, we still can skip 2 frames and 'get away' with it :)
| |
04:56 | dmj_nova | (it was a question from me!)
| |
04:56 | troy_s | dmj_nova: That is sort of what I am suggesting. That is in fact precisely how older mirror systems worked.
| |
04:56 | troy_s | Bertl: You might like to think that
| |
04:57 | dmj_nova | Bertl: troy_s has a really good idea here
| |
04:57 | troy_s | Bertl: But I have been on sets with a single frame of latency and the operators go bat shit.
| |
04:57 | Bertl | a single _additional_ frame, yes I buy that
| |
04:57 | troy_s | Bertl: I can assure you that two frames is completely a nightmare to operate.
| |
04:57 | dmj_nova | 180 degree shutter means that the sensor is only being used for 1/48th of a second to capture a frame every 1/24th of a second
| |
04:57 | troy_s | If your playback is a frame behind your remote wheels, it will drive you nuts.
| |
04:58 | troy_s | dmj_nova: Only issue is variable shutter - where you may be at a wider angle.
| |
04:58 | troy_s | Like a 270
| |
04:58 | dmj_nova | that gives us about 1/48th of a second to capture *another* frame in between the ones we actually keep
| |
04:58 | Bertl | for what purpose?
| |
04:58 | troy_s | So it isn't much of a solution.
| |
04:59 | troy_s | Bertl: Aesthetic or technical
| |
04:59 | dmj_nova | doubling our framerate for focus and halving our latency until display
| |
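A timing sketch of the "dark time" viewfinder idea (assumes a global shutter and instantaneous readout, both idealized):

```python
# How much shutter-closed time is available for a discarded viewfinder frame.
FPS = 24
FRAME_PERIOD = 1 / FPS                 # ~41.7 ms between stored frames
EXPOSURE = FRAME_PERIOD / 2            # 180-degree shutter: 1/48 s (~20.8 ms)
DARK_TIME = FRAME_PERIOD - EXPOSURE    # shutter-closed gap, also 1/48 s

# A throwaway exposure of up to DARK_TIME fits in the gap, doubling the
# viewfinder update rate from 24 Hz to 48 Hz and halving update latency.
print(f"dark window per frame: {DARK_TIME * 1e3:.1f} ms")
```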
04:59 | Bertl | how is the second frame better or worse than the first one?
| |
04:59 | troy_s | Extremely common
| |
04:59 | troy_s | Huh?
| |
04:59 | Bertl | i.e. what is the point in taking a second frame?
| |
04:59 | dmj_nova | troy_s: wider angle...that's not a problem unless noise and dynamic range bother the focus puller
| |
05:00 | dmj_nova | the 180 degree shutter can actually be very important for 2 reasons
| |
05:00 | dmj_nova | 1) it produces the expected motion blur common in cinema
| |
05:00 | troy_s | Wider is an issue if you hope to alternate for playback.
| |
05:00 | dmj_nova | 2) it may be necessary to sync with flickering lights
| |
05:01 | troy_s | That is the technical reason. You may need a 144 etc.
| |
05:01 | dmj_nova | troy_s: well, I'm kinda assuming you can do some clever exposure compensation on the early frames
| |
05:01 | dmj_nova | *the between frames
| |
05:01 | troy_s | No clue how the viewfinder would work. But zero frame latency at 24 is important.
| |
05:02 | troy_s | Regardless of shutter.
| |
05:02 | dmj_nova | assuming linear response you can just multiply by the shutter ratio
| |
05:02 | dmj_nova | this will *not* work with 360 degree shutter
| |
05:03 | dmj_nova | there you're just stuck with what you've got
| |
05:03 | troy_s | I would think the image might suck if at a 270 shutter and gained :)
| |
05:03 | dmj_nova | troy_s: yeah, there's a limit to what you can push it to
| |
05:03 | troy_s | That is a whole stop of light
| |
05:03 | dmj_nova | we can't be magic
| |
05:04 | troy_s | Anyways... I am sure Bertl can figure out some magic.
| |
05:04 | troy_s | the key is likely refresh Hz.
| |
05:04 | dmj_nova | aside from the clever dark-time second exposure, we can't really do more magic
| |
05:04 | Bertl | I won't use my magic unless you explain why a second frame would help with the latency? :)
| |
05:04 | troy_s | As long as the refresh Hz is faster than the dumping, all is fine.
| |
05:05 | troy_s | I never said "second frame"
| |
05:05 | troy_s | I only suggested that a frame of latency would be a nightmare to operate from
| |
05:05 | dmj_nova | Bertl: So at 24fps capture, you see what's happening 1/24th of a second after it happens
| |
05:05 | Bertl | not necessarily
| |
05:06 | dmj_nova | if you have 180 degree shutter, your camera actually doesn't capture for half the time
| |
05:06 | Bertl | entirely depends on the exposure and transfer times
| |
05:06 | dmj_nova | oh, wait, I'm dumb
| |
05:06 | troy_s | Must sleep. Ciao friends.
| |
05:06 | dmj_nova | troy_s: with 180 shutter we actually could get that frame 1/48th of a second after capture
| |
05:07 | Bertl | with 0 transfertime, yes
| |
05:07 | dmj_nova | (well almost, electronics aren't magic)
| |
05:08 | dmj_nova | that extra frame would help with latency *between* frames, but not between exposure and display
| |
05:08 | dmj_nova | that's a whole different kind of responsiveness
| |
05:08 | dmj_nova | which may be helpful but certainly at a cost in battery life
| |
05:08 | Bertl | whatever you do, the transfer won't start before the exposure ends
| |
05:09 | dmj_nova | yes, exactly
| |
05:09 | Bertl | assuming that you have the same FPS on the viewfinder (which is unlikely)
| |
05:09 | dmj_nova | never claimed that :)
| |
05:09 | dmj_nova | viewfinder likely has a higher refresh rate if anything
| |
05:09 | Bertl | you could at least theoretically start displaying the frame immediately after exposure ends
| |
05:09 | Bertl | which gives you exactly one frame delay :)
| |
05:10 | Bertl | you cannot possibly get better than that
| |
05:10 | Bertl | (at least not for the entire frame :)
| |
05:10 | dmj_nova | Bertl: you're right, but you're using the wrong terms
| |
05:10 | dmj_nova | one *exposure* delay
| |
05:10 | dmj_nova | not one *frame* delay
| |
05:10 | Bertl | usually you will also have the delay from transfer and digitization
| |
05:11 | dmj_nova | exposure duration and frame duration usually aren't synonymous
| |
05:11 | Bertl | yes, but it doesn't help if you lower the exposure time, because you need to increase the display framerate for that, which in turn means that you have to display the same frame twice
| |
05:12 | Bertl | unless you actually increase the frame rate, at which point you're back to 1 frame delay :)
| |
05:12 | dmj_nova | Bertl: most LCD screens refresh at much more than 24 Hz
| |
05:12 | dmj_nova | 60, 72, or 120 are common rates
| |
05:13 | dmj_nova | okay, let's have 3 terms:
| |
05:13 | Bertl | yup, so if your exposure time is 1/60th second
| |
05:13 | Bertl | and your display time is also 1/60th of a second
| |
05:13 | Bertl | then you can get away with 1/60th of a second delay (at best)
| |
05:13 | Bertl | usually you will have 1/30th of a second delay
| |
05:14 | Bertl | which is still higher than the 24FPS you might be recording
| |
05:15 | dmj_nova | 1) exposure duration = time the global shutter collects light for each frame (e.g. 1/48 s)
| |
05:15 | dmj_nova | 2) frame rate = number of frames per second that are captured and permanently stored (e.g. 24 fps, one frame per 1/24 s)
| |
05:15 | dmj_nova | 3) monitor refresh rate = framerate of the display used to monitor footage (e.g. 60 Hz, one refresh per 1/60 s)
| |
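A worked example combining the three terms just defined (idealized: zero transfer and digitization time, which Bertl notes is not realistic):

```python
# Best-case light-to-display delay under the three rates above.
EXPOSURE = 1 / 48        # (1) exposure duration, 180-degree shutter
FRAME_PERIOD = 1 / 24    # (2) one stored frame per 1/24 s
REFRESH_PERIOD = 1 / 60  # (3) 60 Hz monitoring display

# Display can start no earlier than the end of the exposure, and takes
# one refresh period to scan the image out.
best_case = EXPOSURE + REFRESH_PERIOD
print(f"best case: {best_case * 1e3:.1f} ms")  # ~37.5 ms, under one 24 fps frame
```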
05:15 | dmj_nova | Bertl: yes
| |
05:16 | Bertl | you can add a fourth one to that, the rate the viewfinder image is transmitted with
| |
05:16 | dmj_nova | also: you can totally manipulate such a system to get you closer to 1/60th of a second
| |
05:17 | Bertl | because even if the viewfinder has 120Hz, it doesn't help if the data stream to the recording medium is used, for example
| |
05:17 | dmj_nova | Bertl: true
| |
05:17 | dmj_nova | I was simplifying and assuming we weren't deliberately being dumb :)
| |
05:18 | Bertl | has nothing to do with dumb actually
| |
05:18 | dmj_nova | so we would be matching the two
| |
05:18 | dmj_nova | didn't mean to insult :)
| |
05:18 | Bertl | I've heard from a number of folks that sometimes you want to see the data being recorded and not some idealized view
| |
05:18 | dmj_nova | but yes, "display output rate" might have been better
| |
05:19 | Bertl | don't forget that motion and motion blur work completely differently at higher rates
| |
05:19 | dmj_nova | Bertl: that's only with motion tweening
| |
05:20 | dmj_nova | Not with simply refreshing the same image three times
| |
05:21 | dmj_nova | which incidentally is done already with modern screens and projectors to compensate for flicker headaches even with 24 fps
| |
05:21 | dmj_nova | so in the theatre they show you each 1/24th of a second frame twice
| |
05:22 | dmj_nova | it doesn't change the motion blur properties, but it makes it so you can't "feel" the flicker
| |
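The double-flash arithmetic in plain numbers (illustrative; double- and triple-bladed film projector shutters work this way):

```python
# Repeating each frame raises the flash rate without changing motion blur.
fps = 24
for repeats in (2, 3):
    print(f"{fps} fps x {repeats} flashes = {fps * repeats} Hz")
# 48 Hz and 72 Hz are high enough that the flicker is no longer felt.
```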
05:22 | dmj_nova | and I may have just explained to myself why 180 degree shutter is prized for realistic motion blur
| |
05:23 | dmj_nova | or that could be a coincidence
| |
05:23 | dmj_nova | Bertl: or are you talking about something else?
| |
05:24 | Bertl | well, modern screens/projectors interpolate the missing frames
| |
05:24 | Bertl | because the information is already available in the codecs
| |
05:24 | Bertl | i.e. motion estimation and similar stuff
| |
05:25 | Bertl | fact is, if your exposure time is shorter than the frame period, you will get 'missing' gaps in motion blur
| |
05:25 | Bertl | you can compensate for those by e.g. taking two frames and combining them
| |
05:26 | dmj_nova | Bertl: yes, but that's a terrible horrible thing that looks awful and needs to stop
| |
05:26 | Bertl | which would allow you to display each frame in a view finder 'earlier' than the combined frame
| |
05:26 | Bertl | well, it won't be different from a longer exposure time :)
| |
05:26 | dmj_nova | Bertl: it will be noisier
| |
05:27 | Bertl | actually it will have less noise
| |
05:27 | dmj_nova | not necessarily by a lot, but it will be
| |
05:27 | dmj_nova | isn't there noise associated with each readout?
| |
05:27 | Bertl | okay, yes, if you consider the bad quality of sensors, then yeah
| |
05:28 | Bertl | the readout is probably the noisiest part, but it will not increase
| |
05:28 | dmj_nova | well, you have double readout noise
| |
05:28 | dmj_nova | some of that will cancel, but not all
| |
05:28 | Bertl | i.e. if the readout causes a 5% noise, the combination will have 10% noise of the original value, but twice the intensity
| |
05:29 | Bertl | so summing and dividing by 2 (aka averaging) will give 5% noise
| |
05:29 | dmj_nova | no, not twice the intensity
| |
05:29 | Bertl | that is if the noise is not a gaussian noise
| |
05:29 | Bertl | in which case, the noise will actually get less by averaging
| |
05:29 | dmj_nova | half the exposure => half the light collected
| |
05:31 | Bertl | well, how does the readout noise figure into that then?
| |
05:31 | Bertl | will it be the same or less or more?
| |
05:32 | Bertl | it is hard to quantify and generalize
| |
05:32 | dmj_nova | you have two sets of readout noise
| |
05:32 | Bertl | I'm pretty sure in certain cases the noise will be better and in other cases worse, around the typical cases it will be about the same
| |
05:32 | dmj_nova | 2*noise + 1/2*light + 1/2*light
| |
05:33 | dmj_nova | = 2*noise + light
| |
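For what it's worth, the readout-noise sum can be checked numerically: independent zero-mean readout noise adds in quadrature, so two reads put sqrt(2)·σ on the same total signal, not 2·σ. A minimal numpy sketch with made-up numbers (photon shot noise ignored):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
signal, sigma_r = 1000.0, 5.0  # electrons; illustrative values

single = signal + rng.normal(0, sigma_r, N)      # one full exposure, one read
half1 = signal / 2 + rng.normal(0, sigma_r, N)   # two half exposures,
half2 = signal / 2 + rng.normal(0, sigma_r, N)   # each with its own read
double = half1 + half2                           # summed: same total signal

print(np.std(single))  # ~5.0
print(np.std(double))  # ~7.07 == 5 * sqrt(2): more noise, but not doubled
```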
05:33 | Bertl | so you want to only use half the ADC range for 1/2 the light?
| |
05:33 | Bertl | i.e. sample a 'darker' picture?
| |
05:34 | Bertl | usually you will adjust the ADC gain to give a full range image, no?
| |
05:34 | dmj_nova | whereas with one exposure you get: noise + light
| |
05:34 | dmj_nova | yes, but increasing gain means increasing noise in that capture
| |
05:35 | Bertl | and on the other hand, the thermal noise gets less with reduced exposure
| |
05:35 | dmj_nova | so the averaging will negate the increased noise from the gain, but not from the sampling process itself
| |
05:35 | dmj_nova | Bertl: what about the thermal noise?
| |
05:36 | Bertl | it will get larger with longer exposure times
| |
05:36 | dmj_nova | but your two exposures is still exposing for the same amount of time
| |
05:37 | Bertl | but with resets inbetween
| |
05:37 | dmj_nova | very brief ones
| |
05:37 | Bertl | i.e. the photosites will drift away from the optimal discharge path
| |
05:38 | Bertl | if you reset more often, you get less noise
| |
05:38 | jucar1 | left the channel | |
05:38 | dmj_nova | Bertl: did not know that
| |
05:38 | dmj_nova | Bertl: you could test this
| |
05:38 | Bertl | we will probably test this and more
| |
05:38 | dmj_nova | how many exposures can you capture now?
| |
05:39 | Bertl | depends on the area
| |
05:39 | jucar | joined the channel | |
05:39 | Bertl | for a small area (i.e. number of lines) we can probably do about 200 or more
| |
05:39 | dmj_nova | try your trick with as many consecutive exposures as possible and an equivalent single exposure
| |
05:40 | dmj_nova | see what relative noise you get
| |
05:40 | Bertl | as I said, a lot will be tested, currently the focus is on getting the alpha prototype done
| |
05:40 | dmj_nova | fair enough
| |
05:41 | dmj_nova | I've also done some work on temporal denoising
| |
05:42 | dmj_nova | have some blender compositing setups that are pretty good
| |
05:43 | dmj_nova | also: dark noise always adds, never subtracts
| |
05:44 | dmj_nova | stray electrons never lead to values darker than pure black
| |
05:45 | Bertl | interesting that you say that, do you know how a CMOS sensor works?
| |
05:45 | dmj_nova | the basics
| |
05:46 | Bertl | the photosites are precharged to a certain voltage, and light 'discharges' them over time
| |
05:46 | Bertl | now it really depends on where those stray electrons are
| |
05:47 | dmj_nova | you mean wrt the band gap?
| |
05:47 | Bertl | http://www-isl.stanford.edu/~abbas/group/papers_and_pub/hui_thesis.pdf
| |
05:48 | Bertl | here is a nice read if you're interested
| |
05:50 | dmj_nova | oh, this is fascinating
| |
05:55 | dmj_nova | hmm...it occurs to me that I'd actually never looked up what a full CMOS pixel looked like
| |
05:55 | dmj_nova | just dealt with photodiode operation
| |
06:09 | dmj_nova | Bertl: okay, this is a lot to digest
| |
06:10 | Bertl | have fun digesting :)
| |
06:10 | Bertl | I'm off to bed now ... have a good one everyone!
| |
06:22 | Sasha_C | left the channel | |
07:18 | Sasha_C | joined the channel | |
08:09 | PhilippeJ | joined the channel | |
08:53 | se6astian | joined the channel | |
09:20 | PhilippeJ | Hello !
| |
09:25 | PhilippeJ | Bertl, troy_s, dmj_nova: just read the backlog about the delay in the pipeline. There *is* some delay between reality and what is shown on the display, in any camera, although this delay is kept as small as possible. It is hard to say how much is too much; less than a frame means less than 41 milliseconds. I guess we can achieve much better, like a 10 ms delay, which I'd translate as humanly not perceivable.
| |
09:29 | PhilippeJ | (not counting exposure time btw)
| |
09:30 | PhilippeJ | Interesting summary : http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5540888&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5540888
| |
09:53 | PhilippeJ | and another one, focused on video games, an area where latency is studied quite well : http://enterthesingularity.blogspot.be/2010/04/latency-human-response-time-in-current.html
| |
09:53 | PhilippeJ | experimental results: 50-60 ms is the absolute limit of delay detection (small object tracking, mouse cursors)
| |
09:54 | PhilippeJ | which is our famous 41 ms + 10 ms processing :-)
| |
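The arithmetic spelled out (illustrative):

```python
frame_ms = 1000 / 24             # one frame period at 24 fps ~= 41.7 ms
processing_ms = 10               # PhilippeJ's estimated pipeline budget
print(frame_ms + processing_ms)  # ~51.7 ms, inside the 50-60 ms
                                 # detection limit from the gaming studies
```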
10:41 | se6astian | Can you add that to the wiki?
| |
10:41 | se6astian | I fear more and more that most of the information generated in here is just lost because it's not recorded anywhere
| |
10:44 | PhilippeJ | yep
| |
10:45 | PhilippeJ | I can't edit the wiki homepage with my philippejadin account?
| |
10:45 | se6astian | hmm, interesting
| |
10:45 | se6astian | did you fill the captcha?
| |
10:46 | PhilippeJ | I didn't have any captcha to fill
| |
10:46 | PhilippeJ | there is no edit tab even when I'm logged
| |
10:46 | se6astian | ok, let me take a look
| |
10:47 | PhilippeJ | only on the homepage btw
| |
10:47 | PhilippeJ | I'm member of Autoconfirmed users, Users
| |
10:49 | se6astian | try now please
| |
10:50 | PhilippeJ | still no edit tab :-(
| |
10:50 | PhilippeJ | I have a 'view source' instead
| |
10:50 | se6astian | can you edit the sandbox?
| |
10:50 | se6astian | https://wiki.apertus.org/index.php?title=Sandbox
| |
10:50 | se6astian | I think the main page is protected
| |
10:50 | se6astian | let me make you admin
| |
10:51 | PhilippeJ | well in fact
| |
10:51 | PhilippeJ | I can edit the alpha prototype page
| |
10:51 | PhilippeJ | but it's getting long, and the info I'm going to put is more generic
| |
10:51 | se6astian | unprotected the main page
| |
10:52 | PhilippeJ | works
| |
10:52 | se6astian | great
| |
10:52 | se6astian | lets see if spam increases as well now
| |
10:53 | se6astian | the wiki is only mollom protected now
| |
10:53 | se6astian | even anonymous can edit
| |
10:53 | PhilippeJ | ah well, if you want you can allow only registered users, who knows
| |
10:53 | se6astian | I read up on it
| |
10:53 | se6astian | and they recommend it that way
| |
10:54 | se6astian | because bots are used to registering user names
| |
10:54 | se6astian | but no bot tries to edit the page as anonymous :)
| |
11:00 | PhilippeJ | posted here : https://wiki.apertus.org/index.php?title=Image_latency
| |
11:00 | PhilippeJ | I didn't summarize the whole discussion since I wasn't present; if anyone has time, feel free
| |
11:00 | se6astian | great, thanks
| |
11:01 | se6astian | there was also some discussion about ergonomics yesterday which I would love to have summarized
| |
11:02 | se6astian | when you say less than 41ms
| |
11:02 | se6astian | that refers to 24 FPS I assume?
| |
11:02 | se6astian | 41.666ms ?
| |
11:05 | se6astian | ah yes its noted further down
| |
11:29 | intracube | joined the channel | |
11:42 | PhilippeJ | I referred to the expected IRC log URL, although it's currently empty :-)
| |
11:53 | se6astian | should be filled soon :)
| |
12:01 | dmj_nova1 | joined the channel | |
12:02 | PhilippeJ | hello dmj_nova1, feel free to fill this with more info: https://wiki.apertus.org/index.php?title=Image_latency
| |
12:02 | PhilippeJ | is heading home, see you soon!
| |
12:03 | dmj_nova | left the channel | |
12:04 | PhilippeJ | left the channel | |
12:12 | FergusL | left the channel | |
12:13 | FergusL | joined the channel | |
12:21 | leo_m | joined the channel | |
13:01 | philippej | joined the channel | |
13:12 | aombk | left the channel | |
13:19 | Bertl | morning everyone!
| |
13:24 | se6astian | hello Bertl!
| |
13:29 | philippej | heya !
| |
13:55 | Sasha_C | left the channel | |
14:00 | philippej | se6astian, anything we might look at ?
| |
14:02 | se6astian | look at?
| |
14:04 | philippej | work on :-)
| |
14:05 | se6astian | the website page explaining the working conditions to new members, maybe?
| |
14:05 | philippej | yep
| |
14:06 | se6astian | maybe here: https://www.apertus.org/node/120
| |
14:06 | se6astian | that page is linked to from: https://www.apertus.org/contribute
| |
14:07 | se6astian | but we might need to make it more prominent
| |
14:07 | philippej | yep
| |
14:08 | philippej | we should add your now-standard phrase of "join us on IRC, introduce yourself and ask for work :-)"
| |
14:08 | philippej | a little Google doc?
| |
14:08 | se6astian | good idea
| |
14:08 | se6astian | yes
| |
14:10 | philippej | let me create it
| |
14:11 | philippej | https://docs.google.com/document/d/13H9ghagelbw41WHH4J6dIyP862TC3VriXyNsE0QH3gE/edit?usp=sharing
| |
14:14 | se6astian | I can't edit yet
| |
14:15 | philippej | no, you can :-)
| |
14:15 | philippej | (now)
| |
14:18 | Bertl | philippej: you want input?
| |
14:18 | philippej | Bertl, always welcome, either here or directly on the doc
| |
14:18 | Bertl | no write access either
| |
14:19 | philippej | it should be fixed, maybe reload the page
| |
14:23 | [1]leo_m | joined the channel | |
14:25 | leo_m | left the channel | |
14:25 | [1]leo_m | changed nick to: leo_m
| |
14:36 | Sasha_C | joined the channel | |
14:36 | Sasha_C | left the channel | |
14:51 | troy_s | Hello all.
| |
14:52 | Bertl | hello troy_s! how's going?
| |
14:55 | troy_s | dmj_nova1: “realistic motion blur” = misnomer. All aesthetics are emergent phenomena. Even a basic photograph is a convention. Net sum is that 24fps is merely the learned aesthetic baseline.
| |
14:57 | troy_s | philippej: I can't offer any insight other than the fact that at 24fps, the display in viewfinder, onboard, or remote must be frame accurate. A single frame behind will cause fits and bring much derision. :)
| |
14:58 | philippej | troy_s, I was thinking that to measure the delay between reality and what's on the viewfinder, we could film existing cameras (at a fairly high framerate) with both a clap and the viewfinder in frame, to guesstimate the delay they have.
| |
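A sketch of that measurement (function name and numbers are illustrative, not an established procedure): film the scene and the viewfinder together at high speed, then count the frame offset between the clap happening and the clap appearing on the viewfinder.

```python
def viewfinder_delay_ms(frame_offset: int, capture_fps: float) -> float:
    """Delay implied by the clap appearing `frame_offset` frames late."""
    return frame_offset * 1000 / capture_fps

print(viewfinder_delay_ms(12, 240))  # e.g. 12 frames at 240 fps -> 50.0 ms
```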
14:58 | troy_s | Bertl: Good thanks. You sir?
| |
14:59 | troy_s | philippej: I am sure that documentation must be online somewhere.
| |
14:59 | Bertl | everything fine here as well ...
| |
14:59 | troy_s | philippej: Certainly for the Alexa or F65 etc.
| |
15:00 | philippej | troy_s, let me dig this
| |
15:00 | troy_s | philippej: I suspect it is in your estimated ballpark. 50ms or so?
| |
15:01 | philippej | for the alexa, they say, less than one frame, but it's for the viewfinder
| |
15:01 | se6astian | ok, google doc looks fine
| |
15:01 | troy_s | philippej: I do know for certain that gobs of money is spent in remotely transmitting HD signals with extremely low latency for remote head applications and operation.
| |
15:01 | se6astian | I will ask sasha to look over it and then edit the current page content to replace it with the new
| |
15:02 | philippej | troy_s, definitely, maybe I misread your conversation, I thought you were speaking in the latency between reality and the viewfinder
| |
15:02 | troy_s | philippej: EG http://www.teradek.com/pages/bolt
| |
15:02 | troy_s | philippej: That too
| |
15:02 | philippej | I think we met those guys at ibc
| |
15:03 | troy_s | philippej: I only illustrated the latency concern with remote views. Viewfinder is equally important, and is likely as-good-as-it-gets.
| |
15:04 | philippej | as I see it, in AXIOM, all the processing from sensor to SDI (for example) will be done in the FPGA, so it will be as good as it gets
| |
15:05 | philippej | I remember when we visited intopix stand at ibc (they do compression), Guerric told me that their compression scheme added "only a few lines" of latency
| |
15:05 | troy_s | philippej: I only voiced concern when I was reading "only a frame of latency" which in camera work, is unacceptable.
| |
15:06 | philippej | each processing step adds some delay, depending on the complexity of the algorithm involved, or the required data (for example, if you need 10 lines for a debayer algorithm, you obviously introduce at least those 10 lines of delay)
| |
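An order-of-magnitude check of the "10 lines of delay" remark, assuming 1080 active lines at 25 fps (blanking ignored; numbers illustrative):

```python
FPS, LINES = 25, 1080
line_time_us = 1e6 / (FPS * LINES)    # ~37 us per line
print(f"{10 * line_time_us:.0f} us")  # ~370 us for a 10-line debayer window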
15:06 | philippej | I guess Bertl has some idea of the expected latency between reality and SDI out
| |
15:08 | troy_s | Onboard could likely be sensel skip for debayer.
| |
15:08 | troy_s | The SDI out "raw" (debayered) would likely need to be higher quality, and "zero latency" goals are less important.
| |
15:09 | troy_s | (roll it through a cheap GPU shader onboard even?)
| |
15:09 | Bertl | well, in the optimal case we have the exposure time + FOT + readout time
| |
15:09 | troy_s | FOT?
| |
15:10 | troy_s | I wonder if gcolburn made way with the profiling.
| |
15:10 | Bertl | frame overhead time
| |
15:10 | Bertl | i.e. the time it takes to copy and digitize the samples
| |
15:15 | troy_s | Gotcha
| |
15:17 | Bertl | which will be at least 1, probably more like 2 'exposure times' as we decided to call it
| |
15:17 | Bertl | (for short exposures it might be quite some more)
| |
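Bertl's optimal-case budget as a formula, with his 1-2 "exposure times" estimate plugged in (the split between FOT and readout is an illustrative assumption):

```python
def optimal_latency(t_exp: float, t_fot: float, t_readout: float) -> float:
    """Minimum sensor-to-output latency: exposure + FOT + readout."""
    return t_exp + t_fot + t_readout

t = 1 / 48  # 180-degree shutter at 24 fps
# FOT + readout together taking roughly one more exposure time:
print(optimal_latency(t, t / 4, 3 * t / 4) * 1e3)  # ~41.7 ms, i.e. 2 exposure times
```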
15:24 | se6astian | time to leave the office
| |
15:24 | se6astian | see you later
| |
15:24 | se6astian | left the channel | |
15:48 | philippej | left the channel | |
15:49 | philippej | joined the channel | |
16:40 | se6astian | joined the channel | |
17:12 | [1]leo_m | joined the channel | |
17:12 | leo_m | left the channel | |
17:12 | [1]leo_m | changed nick to: leo_m
| |
17:22 | philippej | left the channel | |
17:22 | se6astian | more progress with IRC channel logs: https://www.apertus.org/irc/index.php
| |
18:37 | intracube | changed nick to: intracube_afk
| |
19:30 | intracube_afk | left the channel | |
21:19 | se6astian | bedtime :)
| |
21:19 | se6astian | nighty
| |
21:20 | se6astian | left the channel | |
21:20 | jucar | left the channel | |
21:26 | jucar | joined the channel | |
21:36 | rexbron | left the channel | |
21:36 | rexbron | joined the channel | |
21:36 | rexbron | left the channel | |
21:36 | rexbron | joined the channel | |
21:38 | intracube | joined the channel | |
22:28 | jucar | left the channel |