
#apertus IRC Channel Logs

2014/09/16

Timezone: UTC


00:25
intracube
joined the channel
01:20
intracube
left the channel
01:30
intracube
joined the channel
01:43
TiN
changed nick to: _TiN_
02:14
intracube
changed nick to: intracube_afk
02:58
Umibuta
joined the channel
03:00
Umibuta
left the channel
03:37
Morethink
joined the channel
03:40
Morethink
Hi Bertl.
03:43
Morethink
about the SD card driver: the mmc_rescan() function will be modified for 2 physical SD cards
03:46
Morethink
Next step, I will write some code. A test platform is needed for software debugging. Have you completed the SD card PCB? Can this PCB be used with the ZedBoard? I only have a ZedBoard.
03:59
Morethink
if possible, I'd like to make a platform like the Apertus Alpha first. Which camera model is the bayonet taken from? Is one from a Nikon D90 SLR OK?
03:59
wescotte
joined the channel
03:59
wescotte
left the channel
04:02
Morethink
there is a socket for the CMV12000 on the FMC card. What is the part number?
04:18
Bertl
yes, the dualSD PMOD should be complete and it works on the ZedBoard (i.e. I tested it there)
04:19
Bertl
regarding the bayonet, we got the last one from a Nikon F65 body, IIRC
04:20
Bertl
the socket we used for the CMV12k is the 10-30-07-237-400T4 from Andon
04:21
Bertl
http://vserver.13thfloor.at/Stuff/AXIOM/ALPHA/cmv12k-adapter-v1.1.bom
04:21
Bertl
I'm off to bed now ... have a good one everyone!
04:21
Bertl
changed nick to: Bertl_zZ
04:47
Morethink
left the channel
05:00
Morethink
joined the channel
05:07
Morethink
left the channel
05:28
Morethink
joined the channel
05:28
jucar
left the channel
05:30
jucar
joined the channel
05:31
jucar
left the channel
05:33
jucar
joined the channel
07:25
derWalter
joined the channel
07:46
Morethink
left the channel
08:09
se6astian
goooood morning
08:09
se6astian
I am back from amsterdam!
08:19
alexML_
welcome back, how did the workshop go?
08:19
se6astian
workshop?
08:19
alexML_
you were at some workshop with Apertus, right?
08:21
alexML_
"Meet us in Amsterdam at IBC 2014" <- this one
08:25
se6astian
It was just meeting
08:25
se6astian
no workshop
08:26
alexML_
ah, okay
08:27
alexML_
btw, do you mind taking a few test images with the axiom alpha, so I can check the SNR curves and the dynamic range?
08:27
alexML_
for a measurement I need two images of the same static scene, HDR (with clipped highlights and deep shadows) and out of focus (to have a smooth gradient)
08:28
alexML_
so, just point the camera at some light bulb, make sure it's really out of focus and also make sure it has something really dark in the frame
08:34
se6astian
will do!
08:34
alexML_
I'd like such an image pair at each gain setting (1x, 2x, 3x, 4x), and also with HDR mode enabled if possible (I guess you only have it calibrated for one gain)
08:34
alexML_
the theory behind this is at http://www.magiclantern.fm/forum/index.php?topic=10111.msg117955#msg117955
08:35
alexML_
thanks!
08:36
se6astian
HDR mode has many parameters so it might not be straightforward to find the right ones for this scene
08:37
alexML_
they are scene-dependent? I guess you can use the parameters you have used in your test video
08:37
se6astian
PLR mode takes two knee points, so that's 2 saturation values and 2 additional exposure times
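As a rough illustration of what a two-knee PLR curve does, here is a sketch in Python; the knee positions and slopes are made-up placeholder values, not actual CMV12000 register settings:

```python
# Sketch of a piecewise-linear (PLR) HDR response with two knee points.
# knee1/knee2/slope1/slope2 are illustrative, not real sensor settings.

def plr_response(x, knee1=0.5, knee2=0.8, slope1=0.25, slope2=0.0625):
    """Map linear scene exposure x to sensor output in [0, ~1)."""
    if x <= knee1:
        return x                                 # normal linear response
    x2 = knee1 + (knee2 - knee1) / slope1        # where segment 2 starts
    if x <= x2:
        return knee1 + (x - knee1) * slope1      # first compressed segment
    return knee2 + (x - x2) * slope2             # second compressed segment

print(plr_response(0.4))   # below the first knee: unchanged
print(plr_response(2.0))   # 2x over full scale, still below saturation
```

Each segment above a knee gets a progressively flatter slope, which is how the mode trades highlight precision for extra range; a LUT built from the same knee points recovers linear data in post.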
08:38
alexML_
right, so you have a lot of room for finding the optimal settings
08:38
derWalter
left the channel
08:39
se6astian
exactly, and the "optimals" are scene dependent
08:40
Morethink
joined the channel
08:41
alexML_
right... the method is promising though, I expect this mode to go beyond 15 stops (might require an external timer, not sure yet)
08:42
alexML_
anyway, the most interesting part is what you can get in regular mode, since that's what tells the low-light ability
08:42
alexML_
the HDR will capture more highlights, but I don't expect it to change the shadow noise
08:44
dmj7261
hi se6astian
08:45
dmj7261
how was IBC?
08:47
se6astian
alexML_: yes, in theory you can push DR to the lowest possible exposure time value the sensor can go to
08:47
se6astian
Hi dmj7261, great!
08:47
se6astian
very exhausting, but great :D
08:48
se6astian
alexML_: I mean combining exposures of 1/FPS plus the lowest possible exposure time the sensor supports
08:48
alexML_
se6astian: correct; on Canons, the lowest exposure I could get was about 1/50000 (that was the timer resolution)
08:50
dmj7261
se6astian: any cool details?
08:52
se6astian
yes, the best moment was when we asked a random person just walking by if he could take a picture of us
08:52
se6astian
he said "sure, ah that prototype you are holding, thats the open source camera right?"
08:53
dmj7261
wow, that's awesome!
08:53
dmj7261
sounds like you generated a bit of buzz
09:42
HR_
joined the channel
09:58
se6astian
changed nick to: se6astian|away
10:39
Morethink
left the channel
10:39
Morethink
joined the channel
10:41
Morethink
left the channel
12:12
philippej
joined the channel
12:24
Bertl_zZ
changed nick to: Bertl
12:24
Bertl
morning folks!
12:28
_TiN_
changed nick to: TiN
12:59
Umibuta
joined the channel
13:00
Umibuta
Hi Bertl
13:00
Bertl
hey Umibuta!
13:01
Umibuta
I have a few questions. I understand that Gamma will be modular. But will Beta be able to fit Gamma's modules?
13:02
seku
joined the channel
13:02
seku
hi all :)
13:03
seku
Hi umibuta ... errr... Seapig? :D
13:04
Umibuta
Will the upgrade path include changing the Beta's form factor to fit Axiom Gamma?
13:05
Umibuta
Hi Seku :) Yes seapig. Or Dolphin :)
13:05
seku
i'll probably let Bertl elaborate on this, but afaik modules of the beta won't fit on the gamma, but for backers there will be a preferential upgrade path from beta to gamma.
13:05
seku
with reuse of the sensor (the most expensive part)
13:05
seku
heeh, i learned something new. Dolphin
13:06
seku
i thought they were called Iruka
13:08
Umibuta
I am not Japanese, but the Japanese characters for umibuta mean Dolphin. And as you already know, read as "Iruka". But the characters, when read individually, can also be read as Umibuta, i.e. Sea pig
13:10
seku
mh, interesting. (my nick is short for sekuhara ... a word i guess you know) ... felt i needed to change it when a friend of mine who works at the embassy introduced me to the japanese ambassador: "Konno kata ha ... sekuhara desu"
13:10
Umibuta
Sea pig just seems more affable? :)
13:12
Umibuta
Hahahaha. Sexual harassment? You are not, by the way, in Japan, are you?
13:12
seku
no, unfortunately not. i do visit the country often for holidays tho
13:12
Bertl
to be honest, we have no clue at the moment _how_ the upgrade path will look like
13:13
Bertl
as seku already said, the shields of Beta are different from the Gamma modules
13:14
Umibuta
No worries, Bertl :) My other question is: should I include shipping cost of 3 Euro to 350 Euro?
13:15
seku
another fellow backer?
13:16
Umibuta
Yes. Regardless of Gamma. Beta as it is is already enticing. Even for a soccer dad like me....
13:18
seku
vacation non-dad here. as a lot of people here i guess, i come from ML RAW
13:19
Umibuta
I know I read somewhere that Beta is meant more for people in the industry. And you would benefit more from their feedback. But still I want to try my luck.
13:20
Umibuta
ML RAW?
13:20
seku
Magic Lantern raw video hack for canon DSLR
13:21
intracube_afk
left the channel
13:22
Bertl
the main purpose of the Beta is for development
13:22
Bertl
both, for developers and for early adopters, so that we can get the required feedback and real life testing
13:23
Umibuta
Yes. That was what I read a while back, before the funding was launched.
13:23
Bertl
while it certainly might be appealing for the industry, it is not designed for that sector; nevertheless, we are happy about feedback from the industry as well
13:26
designbybeck
joined the channel
13:26
designbybeck__
joined the channel
13:28
Bertl
I have an appointment with my dentist now, so I'm off, but I'll be back in a few hours to answer questions ...
13:29
Bertl
changed nick to: Bertl_oO
13:34
aombk
haha i just came back from my dentist
13:34
Umibuta
Seku, Are you one of the Apertus Team?
13:35
aombk
no
13:35
seku
not at all, just an enthusiast
13:35
seku
Bertl, se6astian|away and philippej are
13:36
aombk
yes i am a very enthusiast
13:36
seku
we all are :)
13:36
Umibuta
I am a procrastinating enthusiast.....
13:36
Umibuta
:)
13:37
aombk
but i just returned from the dentist, so maybe i can answer your questions
13:38
Umibuta
Sure. I might have missed some stuff.
13:52
seku
left the channel
13:52
seku
joined the channel
13:52
seku
left the channel
14:10
dmj7261
left the channel
14:12
TiN
changed nick to: _TiN_
14:13
_TiN_
changed nick to: TiN
14:14
TiN
changed nick to: _TiN_
14:15
se6astian|away
changed nick to: se6astian
14:17
se6astian
back
14:19
_TiN_
changed nick to: TiN
14:20
TiN
changed nick to: _TiN_
14:22
aombk
hey se6astian tell us
14:26
dmj726
joined the channel
14:38
jucar
left the channel
14:40
jucar
joined the channel
14:43
se6astian
hey aombk
14:43
se6astian
tell you about what? :)
14:45
aombk
about ibc
14:46
aombk
people knew about the camera? were they enthusiastic about it? did you meet other people or companies interested in helping?
14:46
aombk
were people coming to your booth (or whatever you had there) to ask questions, or were they ignoring you?
14:47
aombk
did you have any interaction with the big players?
14:47
aombk
were blogs/news sites there?
14:47
intracube
joined the channel
15:05
se6astian
we will post a news update about it all soon
15:05
se6astian
writing it as we speak actually
15:11
philippej
left the channel
15:36
troy_s
alexML_: I am unsure there is a clean way around the temporal problems of HDR.
15:36
troy_s
alexML_: PLR knees to a logish curve is probably the only real way to make the latitude useful.
15:39
Umibuta
left the channel
16:06
anton__
joined the channel
16:06
anton__
Hi guys, would somebody be kind enough to give a link to a document which describes how HDR is done on the CMV12000?
16:07
anton__
all the gory details :)
16:10
alexML_
anton__: https://github.com/apertus-open-source-cinema/alpha-hardware/blob/master/Datasheets/datasheet_CMV12000%20v1.12.pdf from page 32
16:13
anton__
alexML: mucho gracias :)
16:24
alexML_
de nada
16:25
alexML_
troy_s: I think it may be possible to minimize the artifacts in the third HDR mode, with heavy postprocessing
16:25
alexML_
but let's first see how bad they really are
16:40
anton__
left the channel
16:49
aombk
alexML_, what is considered the third hdr mode?
17:23
alexML_
https://apertus.org/axiom_imagesensor - there are 3 modes described
17:24
alexML_
similar to ML HDR video, but hopefully with precise timing control, so the two exposures might not have any delay between them
17:24
HR_
left the channel
17:26
anton__
joined the channel
17:28
anton__
okay, so PLR HDR is like taking 3 shots sequentially with different shutter speeds and superimposing them
17:30
anton__
is there any benefit over what the RED cam does - taking these shots sequentially and storing them separately?
17:31
anton__
if my reading of the spec is correct, we can start exposing the next frame after as little as 70 microsec (FOT)
17:31
anton__
70microsec is nothing?
17:31
anton__
it will be like starting next frame immediately?
17:32
anton__
and then on your PC add these 2 shots together multiplying first one by a factor
17:33
anton__
wouldn't it be like one continuous shot with extra DR?
17:33
anton__
yes we need more storage
17:34
anton__
but if one wants to economise, he can increase the storage format's bit depth by several bits and do this addition between frames before storing to the SSD?
17:35
anton__
is there any value at all in this in-sensor HDR when we can read out at 150 fps, the time between frames can be as short as 70 microsec, and we only want to shoot at 30fps?
17:38
anton__
sorry about typos I'm typing on a small phone..
17:40
anton__
btw if you shoot at 1/4 of your desired shutter speed you get 2 extra stops with no clipping
17:40
anton__
1/4 of the desired shutter speed but 4x framerate
17:41
anton__
2 stops is no little gain :)
17:43
anton__
that is when you use the same shutter speed for all 3 frames, which you later combine into 1
17:44
anton__
if you use 3 different shutter speeds for the 3 frames that you combine into 1, like for sensor-based HDR, you get clipping, but still your software has a lot more info to play with. wouldn't it then be easier to process away the artifacts?
17:52
jucar
left the channel
17:53
anton__
need to put it into a blog post - a bit hard for you to read here.. but if you shoot at 144fps 10bit and then combine each 6 frames into 1 in camera, resulting in 24fps 13bit - you've added 2.5 stops of dr! and 0 artifacts
18:00
alexML_
anton__: how did you get 2.5 stops? log2(sqrt(6)) = 1.3 stops of DR
18:01
alexML_
theory: http://www.statlect.com/normal_distribution_linear_combinations.htm
18:02
anton__
or if you shoot in triplets of frames say 1/100, 1/1000, 1/10000 and store all that info on ssd then software on a pc can combine each triplet into 1 frame similar to in-sensor hdr but the software has much more info to play with to remove artifacts
18:03
alexML_
yes - the PLR mode combines the frames in hardware, that's its main advantage
18:03
alexML_
in post, you only need a LUT to get linear data
18:04
anton__
if that is too much wasted space (3x) then we can combine these 3 frames into 1 in camera but we can still store some extra bits per pixel - like which pixels were clipped in 1/100 and which in 1/1000
18:04
alexML_
but if you have the processing power to do the merging in camera, that would be cool
18:06
anton__
Alex: isn't 4x linear space equal to 2 stops?
18:07
alexML_
in highlights yes, but in noise, no
18:09
alexML_
if you average two images with noise stdev = X, the result (a+b)/2 will have a stdev = X * sqrt(2) / 2
18:10
alexML_
if you average 4 images, the stdev of (a+b+c+d)/4 will be 1/2 * stdev(a) - assuming all 4 frames have the same amount of noise
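The stdev figures above can be checked numerically, assuming independent Gaussian noise in each frame (a sketch with synthetic data, not real sensor measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels = 4, 1_000_000
sigma = 10.0  # per-frame noise stdev, identical for all frames

# synthetic "flat field" frames: mean level 100, Gaussian read noise
frames = rng.normal(100.0, sigma, size=(n_frames, n_pixels))

# averaging 2 frames: stdev drops to sigma * sqrt(2) / 2 ~ 7.07
print(np.std(frames[:2].mean(axis=0)))

# averaging 4 frames: stdev drops to sigma / 2 = 5.0
print(np.std(frames.mean(axis=0)))
```

The general rule is stdev / sqrt(N) for N averaged frames, which is where the 1-stop-per-4-frames figure mentioned later comes from.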
18:10
anton__
left the channel
18:15
anton__
joined the channel
18:17
anton__
Ah, thx. So I get more highlights but also more noise, I see. The other idea may still have some value though :)
18:17
alexML_
yeah, if you average 4 images taken at identical exposure, you will get 1 stop less noise
18:19
alexML_
the other idea - shooting 3 exposures and storing a single one - is pretty much exactly what PLR does
18:19
alexML_
you still need to make the choice in the camera
18:21
anton__
the camera could emulate PLR HDR but on top of it it could 1) increase bit depth 2) give you 2 extra bits of info per pixel: was this pixel clipped in 1/100 take? was it clipped in 1/1000 take?
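anton__'s proposal could be sketched roughly like this; the function, the 10x exposure ratios, and the 12-bit saturation level are all hypothetical illustrations of the idea, not anything the camera implements:

```python
import numpy as np

def merge_with_clip_bits(exp_short, exp_mid, exp_long, sat=4095):
    """Hypothetical sketch: merge three takes (e.g. 1/10000, 1/1000,
    1/100) into one linear HDR frame plus 2 extra bits per pixel that
    record which of the longer takes clipped. The 10x exposure ratios
    and 12-bit saturation level are assumptions, not camera specs."""
    clip_long = exp_long >= sat   # bit 0: pixel clipped in the long take
    clip_mid = exp_mid >= sat     # bit 1: pixel clipped in the mid take
    # pick the longest unclipped take, scaled to the long take's scale
    hdr = np.where(clip_long,
                   np.where(clip_mid, exp_short * 100.0, exp_mid * 10.0),
                   exp_long.astype(float))
    clip_bits = clip_long.astype(np.uint8) | (clip_mid.astype(np.uint8) << 1)
    return hdr, clip_bits

hdr, bits = merge_with_clip_bits(np.array([50, 1]),     # short take
                                 np.array([4095, 10]),  # mid take
                                 np.array([4095, 100])) # long take
print(hdr)   # pixel 0 rebuilt from the short take (scaled to 5000.0)
print(bits)  # pixel 0 has both clip bits set; pixel 1 has none
```

The clip bits are exactly the mask anton__ asks about below: post software knows precisely where the merged value came from a shorter take and so where motion artifacts could exist.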
18:21
se6astian
quick question: is "look-around" the official name of the feature that shows you an image area around the area that's actually being recorded?
18:22
anton__
don't you want this extra bit depth and more importantly these 2 extra bits of info in the software which removes artifacts?
18:28
anton__
Alex: suppose you had a bright lamp fly very fast across the scene. If it didn't clip any pixels in the 1/100 take, you don't need any artifact removal at all; if it clipped a path across the 1/100 frame, you know that motion blur needs to be applied within that area - but nowhere else! don't you want this info for post processing?
18:34
anton__
you get a mask where the effect needs to be applied
18:35
alexML_
se6astian: I think it's "action-safe area" - http://en.wikipedia.org/wiki/Safe_area_%28television%29
18:38
anton__
alexML_: you don't need much power; if you can shoot at 300fps and have space to store 3 frames in RAM, you should have enough power to add 3 frames up... me thinks, being naive :)
18:40
alexML_
anton__: I'm used to memcpy not being fast enough for realtime 1080p at 24fps (the 5D3 CPU)
18:41
alexML_
an FPGA should be a little faster though
18:44
se6astian
alexML_: the "action-safe area" is actually part of the image, while the look-around is only visible to the DOP and is not recorded
18:47
alexML_
got it... RED says "Surround View™, which is an additional look around area, visible outside of the recorded image."
18:47
anton__
alexML_: also re averaging 6 frames - it may be that noise is related to exposure time, so if you shoot 6 frames instead of 1 at a 6 times shorter shutter speed and then add the values together - add, not average - you may get more noise compared to one long take - or maybe not, or maybe only a little more... I believe only practice will tell. so perhaps, just perhaps, this can be a viable way to get more dr in highlights with 0 artifacts
18:48
alexML_
anton__: multiplying by a constant does not change DR (exception: dividing with a very large number)
18:49
se6astian
*but
18:50
derWalter
joined the channel
18:51
derWalter
good evening everyone :) welcome back se6astian
18:54
anton__
AlexML_: well, you can shoot a 6x brighter object without clipping. If your noise went up just a little compared to one long shot, then you got more dr
18:57
alexML_
compared to the long exposure, your noise would be log2(sqrt(N)) EV higher, assuming noise does not depend on exposure time; for N=6, that means 1.3 EV
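The log2(sqrt(N)) figure is easy to verify; a trivial sketch of the formula above:

```python
import math

def noise_penalty_ev(n):
    """EV of extra noise from summing n short frames instead of one long
    exposure, assuming noise is independent of exposure time (the
    assumption stated above): log2(sqrt(n))."""
    return math.log2(math.sqrt(n))

print(noise_penalty_ev(6))  # ~1.29 EV, the 1.3 EV figure quoted above
print(noise_penalty_ev(4))  # exactly 1.0 EV
```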
18:59
alexML_
but yes, you do get more DR without motion artifacts
18:59
se6astian
Hi, derWalter thanks, I have seen in the backlog that you have been quite busy while I was gone :)
19:00
derWalter
weeeeeell... it's quite a tempting procrastination topic ^^
19:00
derWalter
especially as i wanted to get in close touch with you for months, but never had the time for it!!!
19:01
anton__
does noise not depend on exposure time? that's something to test, isn't it? i heard extra long (seconds) shots on digital are more problematic than on film, because the longer the exposure, the more noise on the sensor..
19:01
derWalter
and as i am reading the actual irc log from today: regarding the different shield types of the beta and the gamma: would it be possible to create an adapter shield so one can use the gamma modules on the beta?
19:03
alexML_
anton__: on Canons at least, at short exposure times (say 1/10s and faster) it does not
19:07
anton__
alexML_: got it; well I hope you also find the idea with hdr motion artifacts removal mask interesting :)
19:14
Bertl_oO
changed nick to: Bertl
19:14
Bertl
back now ...
19:29
Dan_
joined the channel
19:30
Dan_
changed nick to: Guest6447
19:30
Guest6447
left the channel
19:30
derWalter
wb bertl
19:31
derWalter
Bertl se6astian regarding different shield types of the beta and the gamma: is it technically possible to create an adapter shield so one can use the gamma modules on the beta?
19:32
Bertl
that would in theory be possible, but it wouldn't really make much sense
19:33
Bertl
the shields for the Beta are tiny and they will also be rather cheap
19:33
Bertl
I presume that an adapter shield would cost twice as much as the same 'module' for the Gamma
19:34
jucar
joined the channel
19:35
Bertl
and you would have all the problems of reduced throughput, interference, etc from multiple connectors and the solution would use up twice as much space, have twice the weight, etc
19:35
Bertl
s/adapter shield/adapter module/
19:41
derWalter
well, i was thinking of ONE adapter for the beta, to attach only gamma modules behind it. my problem of understanding is the lack of information about the gamma. i would be eager to read the 100+ pages you sent in to the eu (for more reasons)
19:43
Bertl
ah, you want to attach gamma modules to the beta?
19:43
derWalter
yes
19:44
derWalter
or to be honest
19:44
derWalter
i would love to contribute to the gamma
19:44
derWalter
with beta work
19:44
derWalter
i dont know the reason why the beta has its own modules
19:44
Bertl
the whole idea of the Beta is to figure stuff out for the Gamma
19:44
derWalter
i just dont see the reason for different modules in the end
19:45
Bertl
the Shields are a primitive version to test out a few ideas on the Beta
19:45
Bertl
the Gamma modules will be much more advanced than the Beta Shields
19:46
derWalter
in what way?
19:46
Bertl
they will most likely incorporate a bus system to communicate between modules and between sensor/FPGA and module
19:47
derWalter
and the beta ones wont o.O?
19:47
Bertl
they will also feature some kind of power rail system to supply all modules with power
19:47
anton__
left the channel
19:47
Bertl
no, the Beta Shields are only one stack depth and basically a direct connection between FPGA and Shield
19:47
Bertl
or PIC32 and Shield on the other side
19:50
derWalter
aaaaaaaha...
19:51
derWalter
thats real NEWS to me :)
19:51
Bertl
haven't you seen the illustrations for the Beta shields?
19:51
Bertl
http://vserver.13thfloor.at/Stuff/AXIOM/BETA/io_shield_l.png
19:52
derWalter
yes i did
19:52
Bertl
http://vserver.13thfloor.at/Stuff/AXIOM/BETA/io-shield-hdmi-v0.5.png
19:52
Bertl
here the HDMI shield
19:52
Bertl
(or to be precise, the first design for the shield)
19:53
derWalter
but all i could read from the graphic were the screws, the connectors and a strange shape where i didnt know how it was related to the design of the beta :D
19:53
derWalter
HA
19:53
derWalter
now i realized it
19:53
derWalter
thx to the hdmi one
19:53
Bertl
well, and it didn't come to mind to simply ask? :)
19:53
Bertl
probably because we are such an unfriendly community :)
19:53
derWalter
you plug them directely in, or open the case, connect it to the board and close the case again? :D
19:54
derWalter
i was always thinking in the gamma style way, as connecting to the back
19:54
derWalter
not like an actual add IN
19:54
derWalter
more like an add ON
19:54
Bertl
you plug them into the front of the Beta, basically right beside the lens mount (in the current design)
19:54
derWalter
okay, thats very clever!
19:54
derWalter
wow, nice.
19:54
Bertl
and they are not designed to be switched out every shoot
19:55
derWalter
yes yes, well, well, well... mhhh
19:55
Bertl
it is more to allow to test different interfaces and customization according to preference
19:55
derWalter
there is a hand full of things missing for me, i was thinking about, battery, sound, internal storage and controlls/viewfinder
19:56
Bertl
http://vserver.13thfloor.at/Stuff/AXIOM/BETA/io-shield-breakout-v0.1.png
19:56
derWalter
and i thought of modules to add those to the beta, or even a case where you plug in the beta
19:56
Bertl
here is a breakout shield (we actually built that already) which allows to connect it to a breadboard or similar
19:57
derWalter
so you have a case, shaped like a normal camcorder/camera with all the control knobs, xlr connectors and so forth, and you just drop the beta in the whole thing
19:57
derWalter
and through the connector it gets connection to the audio, storage, power and controls
19:58
derWalter
what kind of connection are the add-ins using to communicate with the "main" board?
19:59
Bertl
on the Beta or what we plan for the Gamma?
20:00
derWalter
sry, the beta
20:01
Bertl
The connectors we have chosen for now are high speed MezzoStak connectors with 40 pins each from FCI
20:02
Bertl
on the right side, they provide 10 differential pairs to the FPGA each
20:02
Bertl
as well as JTAG and I2C plus power of course
20:03
derWalter
aa, now i know what connectors you have been talking about some days ago
20:04
derWalter
would it be possible to design a board which brings synced audio data in and saves it to an ssd along with the video data?
20:04
Bertl
many things are possible :)
20:05
Bertl
but to be honest, I'd be surprised if somebody would pull that of in a beta shield
20:05
Bertl
*off
20:05
derWalter
;)
20:06
Bertl
not because of the technical challenge but mostly because of the space
20:06
derWalter
how much are you already working on the gamma?
20:06
derWalter
aa, okay :)
20:06
Bertl
Gamma is currently strictly collecting ideas
20:06
anton__
joined the channel
20:06
derWalter
well one could hack the case :)
20:07
Bertl
yeah, of course, if you add a PCB which is as large as the Beta itself and connect it to the Shield connectors, it becomes much more realistic
20:08
ctag
left the channel
20:08
derWalter
what was the reason for using metal for the case and making it so small? (i just realized HOW small the case is when i started to calculate and sketch around the battery case!)
20:08
anton___
joined the channel
20:09
anton___
changed nick to: Guest71263
20:09
Bertl
the current state is plastic for the housing, and for the size: because we wanted to make it small and powerful
20:10
Bertl
we had a shoebox sized Alpha, so this time it should be rather small and elegant
20:10
Guest71263
AlexML_: if noise does not decrease with shutter speed, then my idea of emulating sensor PLR HDR in camera may bring bad noise
20:10
anton__
left the channel
20:11
Bertl
Guest71263: there are many different aspects of noise in the sensor
20:12
Bertl
reducing the shutter speed only affects the classical thermal noise
20:12
Guest71263
AlexML_: I still want high DR and I am still worried that correcting motion artifacts created by sensor PLR HDR in post may be imperfect or not fully automated etc
20:16
Kevlar
joined the channel
20:16
Kevlar
changed nick to: Guest57438
20:16
Guest57438
nick Kevlar
20:17
Guest57438
how do I change it back?
20:17
Guest71263
left the channel
20:17
Bertl
/nick (instead of nick)
20:17
Guest57438
thanks
20:17
Bertl
np
20:20
derWalter
aa, cause it looks like a bare metal case :D well, the shoebox size is not what i expected either :D but like this its just tiny :D how much cost would it add to put a small display with some knobs on the back?
20:21
derWalter
i guess u thought that through in many ways :D
20:25
Bertl
yes, it is something which was already requested several times, so we will find a solution for that
20:28
Guest57438
Maybe it could look like and sound like this: https://www.youtube.com/watch?v=6lgHK0GQhrM
20:28
Bertl
yeah, that looks like the perfect design for this purpose *G*
20:29
Guest57438
haha
20:30
Bertl
but I think we should go with a more traditional interface like this one: http://www.bbcamerica.com/anglophenia/files/2012/03/tardis1.jpg
20:31
Bertl
anyway, I'm off for a nap, probably back later ...
20:31
Guest57438
It is definitely more eye catching.
20:32
Bertl
changed nick to: Bertl_zZ
20:32
Guest57438
Why did my chat say you failed to identify in time for the nickname Kevlar? Does that mean someone else has the username?
20:32
Bertl_zZ
very likely
20:32
derWalter
i like the selfmade lightworks controllers a lot :D http://smg.photobucket.com/user/Greg_E/media/pics/controller.jpg.html
20:32
Guest57438
:)
20:33
Guest57438
changed nick to: TheUberKevlar
20:33
derWalter
:D :D
20:33
TheUberKevlar
haha
20:33
TheUberKevlar
Kevlar is always taken
20:33
TheUberKevlar
Everywhere I go
20:33
derWalter
http://tf2console.wikia.com/wiki/%C3%9Cbercharge
20:34
derWalter
i had to think about this instantly :D
20:34
derWalter
maaaybe i am going to play a round of tf2..... must be years by now :D
20:35
ctag
joined the channel
20:35
TheUberKevlar
Haha TF2. Fun stuff.
20:36
derWalter
gooooooooosh... i cant post on nikonhacker, as it says my ip is blacklisted at spamhaus.org, cause a pc on my ip seems to be infected with the conficker worm...
20:37
derWalter
but no program can find something...
20:37
derWalter
i hate windows :(
20:37
TheUberKevlar
Oh, no! I hate viruses! And false alarms.
20:44
derWalter
seems like i've to restart to install kaspersky :)
20:45
TheUberKevlar
I'm really excited about this project! I've already donated. I wish that I knew code and could contribute that way, but I don't know code other than HTML and CSS. Are there other ways that I can help?
20:48
derWalter_
joined the channel
20:48
derWalter_
reeeeeeeeeeee
20:49
derWalter
left the channel
20:49
derWalter_
back to the future :O
20:49
derWalter_
changed nick to: derWalter
20:50
derWalter
uh oh... seems there is more work to be done
20:55
derWalter
left the channel
20:59
derWalter
joined the channel
21:28
anton__
joined the channel
21:28
seku
joined the channel
21:28
seku
evenings
21:29
seku
lots of reading up to do tonight :)
21:31
seku
btw, i noticed the beta display discussion tonight. how about going WiFi to a tablet or phone for starters?
21:32
se6astian
hello seku
21:32
se6astian
wifi with the live image?
21:33
seku
i guess that would need apps .. and have latency issues. but it might help with initial framing.
21:33
seku
just an idea :)
21:34
seku
kinda what gopro somehow manages to do.
21:35
se6astian
I dont think we want to get into H264 encoding or anything of that flvour
21:35
se6astian
*flavour
21:36
seku
ah true. haven't checked if the arm chips got any h.264 encoding implementation
21:38
seku
i just somewhat liked the idea, as most of us have phones anyway: they have ok resolution, are touchscreens, and any UI can adapt quickly to changing firmware
21:39
seku
but i guess most will go Atomos Samurai or Odyssey 7Q for recording at first, so monitoring might be a non-issue
21:42
anton__
hmm, re the camera body.. wouldn't a lens wobble?
21:42
anton__
with a plastic body?
21:43
anton__
we want fully manual Nikon lenses...
21:43
anton__
those are probably the old ones... in metal bodies... heavy
21:49
seku
interesting question. depends, i guess, on whether the sensor plate / lens mount will be metal.
21:49
seku
i guess
21:52
se6astian
time for bed
21:52
se6astian
good night!
21:52
seku
nächtle
21:52
se6astian
thx :)
21:53
se6astian
changed nick to: se6astian|away
21:58
derWalter
depends on the plastic i guess :D
22:16
alexML_
anton__: just did some noise tests on 5D3 at long exposures: http://pastebin.com/KDq8bNeW
22:17
alexML_
oddly enough, long exposures seem to have a little less noise (but the difference is minimal)
22:18
troy_s
alexML_: What are you testing exactly?
22:19
troy_s
alexML_: I take it the ML means from Magic Lantern? Can you code?
22:20
seku
oooh yes he can
22:21
troy_s
se6astian|away: Agree with you on h264. It is strictly an editorial / offline format. The only codecs on the map should likely be the ones with potential viability as an (albeit crappy) intermediate (aka DNxHD and ProRes) and those likely come with licensing issues. Little egregious with current trajectory really.
22:22
alexML_
troy_s: http://www.magiclantern.fm/forum/index.php?topic=10111.msg117955#msg117955 and https://bitbucket.org/hudson/magic-lantern/src/iso-research/modules/raw_diag/raw_diag.c#cl-412
22:22
troy_s
(Although bringing FFv1 or HuffYUV (with raw Bayer) to the masses is an interesting thought project.)
22:22
troy_s
alexML_: Is that a link telling me I am an idiot? I already know this. :)
22:22
troy_s
My idiocy knows no bounds.
22:23
troy_s
alexML_: Can you give me a TL; DR?
22:24
alexML_
I'm counting the electrons
22:24
troy_s
alexML_: All sensors do that I believe.
22:24
seku
(just in case .. hope im not being obnoxious here ... alex pioneered DualIso RAW on ML, and that includes the necessary postprocessing implementation) i follow all of his posts on ML.
22:24
troy_s
In a non-narrow band linear fashion of course.
22:25
troy_s
Very cool.
22:25
troy_s
Is that a temporal DualISO HDR?
22:25
derWalter
i was always dreaming of a sensor which is capable of counting photons and their wavelength
22:25
troy_s
derWalter: Probably not useful?
22:25
seku
alternating ISO lines in same exposure,. but alex can explain that better :)
22:26
seku
eg. even lines iso 100, uneven iso 800
22:26
derWalter
something like the foveon sensor is doing
22:26
troy_s
alexML_: Does the technique exhibit temporal nasties like all the rest I take it?
22:26
seku
nope.
22:26
troy_s
seku: How is the data merged then?
22:26
seku
http://www.magiclantern.fm/forum/?topic=7139.0
22:27
troy_s
I ask because I think it is viable to do an OFlow prediction to leverage the extra range against the “motion image” as it were.
22:27
derWalter
well, if there were such a thing, it would be something like an ultra sensitive ccd sensor, but u would get out a stream of data saying when and what kind of photon arrived, and you could render your desired fps in post if you want :)
22:28
derWalter
bla, bla,bla english skills already went to bed :D
22:28
troy_s
Very cool.
22:28
alexML_
troy_s: dual iso has zero temporal nasties, but may have aliasing in extreme highlights and shadows
22:28
troy_s
alexML_: What is the TL;DR on the merge technique? (Looking at your PDF.)
22:29
troy_s
I cannot wrap my head around how it wouldn't with a basic blend merge. Or are you relying on one line as the reference and one as augment?
22:30
alexML_
match the exposures, then interpolate in highlights and shadows; in midtones you have full resolution
22:30
alexML_
if you split the exposures and use enfuse for example, you end up with half resolution and slightly misaligned images
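[Editor's note: the merge recipe alexML_ describes (match the exposures, keep the full-resolution data where both line sets are usable, fall back to the other line set only where one is clipped) can be sketched as below. This is a single-channel illustration, not cr2hdr's actual code; the function name and parameters are assumptions.]

```python
import numpy as np

def merge_dual_iso(dark, bright, gain, white_level):
    """Toy merge of exposure-matched dual-ISO data (single channel):
    bring the dark (low-gain) samples to the bright samples' exposure,
    then take the bright data everywhere it is not clipped and fall
    back to the scaled dark data in the clipped highlights."""
    dark = np.asarray(dark, dtype=np.float64) * gain  # exposure matching
    bright = np.asarray(bright, dtype=np.float64)
    clipped = bright >= white_level  # highlights lost at the higher gain
    return np.where(clipped, dark, bright)
```

In the midtones both line sets survive, which is why the real algorithm keeps full resolution there and only interpolates at the extremes.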
22:31
troy_s
alexML_: Yes.
22:31
derWalter
stunning@dualiso@alex
22:31
troy_s
That was my worry...
22:31
troy_s
alexML_: Looks like a VNG variant you are using for merge?
22:31
alexML_
I get a linear full-res DNG, that looks just like a normal image ISO 100, but you can push the shadows to crazy levels
22:32
alexML_
the output is still Bayer data
22:32
troy_s
alexML_: Betting I can improve your interp with a cubic B-spline prefilter instead of an average (lerp)
22:32
troy_s
So alternating lines is RG or BG?
22:32
alexML_
you are welcome to give it a try
22:32
troy_s
Or 50%?
22:32
alexML_
currently I'm using AMaZE for interpolation
22:32
alexML_
there are 2 lines at one ISO, then 2 lines at another ISO, and so on
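[Editor's note: the 2-line pattern alexML_ describes can be expressed as a row mask: pairs of Bayer rows alternate between the two ISO settings, so each pair still spans a full RGGB period. A minimal sketch with illustrative names, not ML code:]

```python
import numpy as np

def dual_iso_row_mask(height, width):
    """Boolean map of a dual-ISO Bayer frame: rows come in pairs
    (2 rows at ISO A, then 2 rows at ISO B, repeating), so a pair
    always covers a complete RGGB period."""
    rows = (np.arange(height) // 2) % 2 == 1  # False = ISO A, True = ISO B
    return np.repeat(rows[:, None], width, axis=1)
```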
22:33
troy_s
Hrm.
22:33
troy_s
There are two paths that are worth checking
22:34
troy_s
1) VNG can result in non-color values, or values that are at irregular positions given the gamut of a profiled sensor. There is a way to leverage the result against the gamut and always ensure the value is "in gamut" (i.e. more likely closer to the true value at a given position)
22:35
troy_s
(Glen Chan hints at something like this in his “Toward Better Chroma Subsampling” paper. I think the math would be easy with a profiled sensor.)
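[Editor's note: one simple reading of troy_s's "always in gamut" constraint is to pull an out-of-range interpolated value toward its own luma (desaturation) just far enough that every channel is back in range. A hypothetical sketch, not anything from Chan's paper or a profiled-sensor pipeline; the Rec.709 luma weights are an assumption:]

```python
import numpy as np

def gamut_clamp(rgb):
    """Desaturate an out-of-range RGB triple toward its own luma until
    every channel lands back in [0, 1]; in-range input is unchanged."""
    rgb = np.asarray(rgb, dtype=np.float64)
    luma = float(np.dot(rgb, [0.2126, 0.7152, 0.0722]))  # Rec.709 weights
    luma = min(max(luma, 0.0), 1.0)
    # smallest t in [0, 1] with luma + t*(c - luma) inside [0, 1] for all c
    t = 1.0
    for c in rgb:
        d = c - luma
        if c > 1.0:
            t = min(t, (1.0 - luma) / d)
        elif c < 0.0:
            t = min(t, (0.0 - luma) / d)
    return luma + t * (rgb - luma)
```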
22:36
troy_s
2) The sampling I have seen has yet to use a cubic B-spline prefilter, which is an anomaly in the scaling world currently, despite it being of stunning quality.
22:37
troy_s
(Frequency domain scale that more accurately lands cubic interpolations at “more correct” positions than a standard cubic with no prefilter. More local contrast akin to a LERP, but without the jaggie aliasing of a LERP)
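[Editor's note: the "cubic with a prefilter" troy_s refers to is commonly implemented as Unser/Thévenaz-style recursive prefiltering, which converts the samples into B-spline coefficients so that the spline interpolates the data instead of merely smoothing it. A 1-D sketch using mirror-boundary initialization (an assumption; other boundary handling exists):]

```python
import numpy as np

def bspline3_prefilter(s):
    """Compute cubic B-spline coefficients c from samples s via the
    standard pair of causal/anticausal recursive filters, so that
    (c[i-1] + 4*c[i] + c[i+1]) / 6 reproduces s[i] at interior points."""
    z = np.sqrt(3.0) - 2.0                # pole of the cubic B-spline filter
    c = np.asarray(s, dtype=np.float64) * 6.0  # overall gain lambda = 6
    n = len(c)
    # causal initialization (mirror boundary, truncated sum)
    c[0] = sum(c[k] * z ** k for k in range(n))
    for k in range(1, n):                 # causal recursion
        c[k] += z * c[k - 1]
    # anticausal initialization (mirror boundary)
    c[-1] = (z / (z * z - 1.0)) * (c[-1] + z * c[-2])
    for k in range(n - 2, -1, -1):        # anticausal recursion
        c[k] = z * (c[k + 1] - c[k])
    return c
```

Evaluating the cubic B-spline on these coefficients between samples gives the sharper, less smoothed interpolation troy_s is describing.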
22:38
troy_s
alexML_: Very cool stuff Alex.
22:39
troy_s
alexML_: Is there a way to invert the transfer curve and get to a linear set of values currently via a LUT?
22:41
alexML_
yeah, the DNG spec can use a LUT for this (although the values are limited to 16 bits)
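[Editor's note: applying a DNG-style LinearizationTable is just a per-pixel table lookup; the 16-bit limit alexML_ mentions is why the table below is uint16. The function name is illustrative:]

```python
import numpy as np

def apply_linearization_lut(raw, lut):
    """Map each raw code value through a linearization table
    (DNG LinearizationTable-style; entries capped at 16 bits)."""
    lut = np.asarray(lut, dtype=np.uint16)
    return lut[np.asarray(raw, dtype=np.intp)]
```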
22:41
alexML_
but what transfer curve are you talking about?
22:41
troy_s
alexML_: To get from the merged work to a linear representation
22:42
troy_s
(Needed for compositing and overs / blends)
22:42
alexML_
from dual iso? the merged work is already linear
22:42
troy_s
alexML_: great.
22:42
troy_s
So are the samples in the PDF just random TRCs for demo?
22:44
alexML_
yeah, the PDF describes the very first implementation
22:45
alexML_
meanwhile I've got a ton of test pictures with tricky cases, so the algorithm got a bit more complex
22:45
troy_s
So great having you around here.
22:45
troy_s
alexML_: Github?
22:46
alexML_
https://bitbucket.org/hudson/magic-lantern/commits/all?search=cr2hdr
22:47
alexML_
this is the dual iso postprocessing tool
22:47
derWalter
Bertl: is it likely that the Beta will achieve an HDMI 2.0 out with 4K 30p (maybe even 60p)?
22:47
dmj726
left the channel
22:47
troy_s
alexML_: are you using a RMSE evaluation?
22:47
alexML_
no, just pixel peeping
22:48
alexML_
I do have some samples with ground truth, so I could use it
22:48
alexML_
(from a 2-image bracket you can create a fake dual iso image)
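[Editor's note: the fake-dual-ISO trick alexML_ mentions (interleave rows from a real 2-image bracket so the full-resolution originals serve as ground truth) might look like this; the names and the 2-row pattern follow the earlier description, everything else is an assumption:]

```python
import numpy as np

def fake_dual_iso(low_exposure, high_exposure):
    """Build a synthetic dual-ISO test frame from a 2-image bracket by
    copying rows in pairs, alternating which real exposure they come
    from; the bracket frames then act as full-resolution ground truth."""
    out = np.empty_like(low_exposure)
    pair = (np.arange(out.shape[0]) // 2) % 2 == 0
    out[pair] = low_exposure[pair]
    out[~pair] = high_exposure[~pair]
    return out
```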
22:50
alexML_
https://www.dropbox.com/sh/xfkizhu4lpoiuc8/AAC5GkEpZVacRUXcfrv1ikwVa?dl=0 - this is the test suite
22:51
troy_s
alexML_: I was going to suggest a synthetic image of worst case
22:51
alexML_
the graphs show the exposure matching - this part was tricky
22:51
troy_s
(Probably 75% red / blue stripes etc. Similar to chroma sampling patterns.)
22:51
troy_s
Yes, I can imagine the ISO profile shifts the entire response curve, correct?
22:52
troy_s
I have to think that Fourier transforms would work here magically, but they are beyond my grasp sadly.
22:52
troy_s
Frequency domain fitting I suppose.
22:54
troy_s
alexML_: Have you tried least mean squares for fitting?
22:59
seku
bedtime for me, work awaits ...
23:02
dmj726
joined the channel
23:07
alexML_
least mean squares... nope, can you point me to a link showing how to use it for linear regression?
23:08
alexML_
I assume it's different from simple least squares (which didn't work)
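[Editor's note: one reason a plain least-squares exposure fit can fail here is that clipped or very noisy samples act as gross outliers. A standard remedy (not necessarily what either party means by "least mean squares") is iteratively reweighted least squares; all names below are illustrative:]

```python
import numpy as np

def match_exposures_robust(x, y, iters=20):
    """Robustly fit y ~ a*x + b (the gain and black-level offset between
    the two ISO line sets) by iteratively reweighted least squares:
    refit, then down-weight samples with large residuals."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    A = np.stack([x, np.ones_like(x)], axis=1)
    w = np.ones_like(x)
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]
        (a, b), *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
        r = np.abs(y - (a * x + b))
        s = np.median(r) + 1e-12                   # robust residual scale
        w = 1.0 / (1.0 + (r / (1.345 * s)) ** 2)   # Cauchy-style weights
    return a, b
```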
23:09
troy_s
alexML_: I think Argyll has an implementation for their fitting of color cards.
23:23
anton__
alexML: thx for measuring noise on Canon. My latest idea is this: use PLR HDR on the CMV12000, then immediately after shooting this HDR frame, shoot one more frame with a very short shutter speed, and record that second frame roughly (say 2 bits per pixel); use the second frame to make masks for applying motion blur to remove artifacts produced by HDR
23:23
anton__
in post processing
23:24
anton__
suggesting that because it seems emulating HDR in camera may end up being a lot noisier than using the sensor's PLR HDR
23:26
anton__
at least I (with zero experience in the area) cannot think of how I would implement HDR artifact removal without these masks
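[Editor's note: the core of anton__'s mask idea reduces to thresholding the difference between the exposure-matched HDR frame and the quick low-bit-depth guide frame shot right after it. A minimal sketch; names and the threshold are assumptions:]

```python
import numpy as np

def motion_mask(hdr_frame, guide_frame, threshold):
    """Flag pixels whose (exposure-matched) HDR value differs from the
    quick guide frame by more than a threshold; those pixels likely
    moved during the HDR capture and can be repaired/blurred in post."""
    diff = np.abs(hdr_frame.astype(np.float64) - guide_frame.astype(np.float64))
    return diff > threshold
```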
23:38
Bertl_zZ
changed nick to: Bertl
23:38
Bertl
back now ...
23:40
troy_s
anton__: Optical flow is viable.
23:41
troy_s
But alexML_ has the best direction
23:45
derWalter
welcome back
23:45
anton__
if artifact-free seamless HDR is viable, then the Axiom Beta looks very promising; w/o seamless HDR, it looks like the CMV12000 doesn't provide that many stops...
23:45
derWalter
say Bertl, will you go for an HDMI 2.0 connector?
23:47
anton__
with seamless HDR one could even say Axiom may have an edge
23:49
Bertl
okay, I've read up on the discussion and I have one comment and two questions :)
23:50
Bertl
alexML_: the white out-of-focus wall is to prevent wrong accounting caused by clipping and/or offsets
23:51
troy_s
anton__: Bear in mind that little has been done regarding the PLR knees (aka log curve approximations)
23:51
Bertl
i.e. if you have areas which are clipped (too high or too low), then the difference will not show the noise you're looking for
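[Editor's note: Bertl's point, in code: when estimating temporal noise from the difference of two frames of a static scene, clipped pixels must be excluded or the measured noise comes out biased low. A sketch with illustrative names:]

```python
import numpy as np

def difference_noise(frame_a, frame_b, black_level, white_level):
    """Estimate per-frame temporal noise from the per-pixel difference
    of two frames of a static scene, excluding pixels clipped at either
    end. For independent frames, std(a - b) = sqrt(2) * per-frame noise,
    hence the final division."""
    a = frame_a.astype(np.float64)
    b = frame_b.astype(np.float64)
    valid = (a > black_level) & (a < white_level) \
          & (b > black_level) & (b < white_level)
    return np.std(a[valid] - b[valid]) / np.sqrt(2.0)
```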
23:52
Bertl
now for the questions: how does the dual ISO in different rows work on those sensors? Can the analog gain be set independently for even/odd rows?
23:52
troy_s
anton__: Solving the temporal issue with HDR adjacent frame / different temporal sensel values is not an easy problem, and will almost always have some case where the approximations fail.
23:53
troy_s
Bertl: Did you read the PDF?
23:53
Bertl
and the second question is for anton__: how should a snap after the HDR snap help with identifying critical parts?
23:54
Bertl
troy_s: no, but I'm about to read it ...
23:55
troy_s
Bertl: It covers most of the stuff you would likely find interesting.
23:55
troy_s
Bertl: Temporal problems are ridiculous to solve. Think about a tree with leaves. If you have ever done an HDR merge of one, you know that is close to the worst case.
23:56
troy_s
Hot sun on leaves and they _all_ have shifted.
23:56
troy_s
OFlow might be viable, but some degree of texture synthesis is probably required to identify exactly how to apply the values to the moved sensel positions.
23:57
troy_s
To me, it is probably the most worthless of attempts.
23:57
troy_s
Bertl: Did you ever implement the log PLR knees?
23:57
troy_s
IIRC we could place two knee points, right?
23:58
Bertl
correct, that was what se6astian tested with
23:58
troy_s
(And did Cmosis get back to you regarding the need to toss an entire bit and a half of data?)
23:58
Bertl
you even see it on the promotion video :)
23:58
troy_s
Bertl: Oh, I didn't realize. Harder to profile with knees. Need to linearize first.
23:58
Bertl
we haven't had much time to poke cmosis, but we will start poking again soon :)
23:59
troy_s
I cannot stand the thought of tossing over a bit of data.
23:59
Bertl
yeah, got it :)