
#apertus IRC Channel Logs

2018/04/25

Timezone: UTC


00:27
supragya
joined the channel
01:55
supragya
left the channel
02:06
Bertl_oO
off to bed now ... night everyone!
02:06
Bertl_oO
changed nick to: Bertl_zZ
02:43
supragya
joined the channel
02:59
RexOrCine
changed nick to: RexOrCine|away
03:33
supragya
left the channel
04:54
supragya
joined the channel
04:55
supragya
Hi, is there any meeting scheduled this week? GSoC related or otherwise.... and anything else that I should know of before 29th?
04:57
supragya
Also, would it be early to ask for setting up IRC bouncers for mentors and students for GSoC?
05:01
supragya
left the channel
05:33
ArunM
joined the channel
06:26
ymc98
left the channel
06:41
ArunM
left the channel
07:43
ArunM
joined the channel
08:05
sebix
joined the channel
08:05
sebix
left the channel
08:05
sebix
joined the channel
08:08
ArunM
left the channel
08:34
ArunM
joined the channel
08:43
ArunM
Same queries !!
08:54
Bertl_zZ
changed nick to: Bertl
08:54
Bertl
morning folks!
08:56
Bertl
supragya: ArunM: mentors should be around, so it is the perfect time to discuss the GSoC project details ... IRC bouncers are 'in the works' and should be available shortly
08:58
Bertl
ArunM: we can talk/brainstorm about T728 anytime you like
08:59
ArunM
Okay
08:59
ArunM
we can do it right now
08:59
Bertl
great!
09:00
Bertl
did you have a look at the AXIOM Beta stackup and the CMV12k datasheet yet?
09:02
ArunM
read the datasheet thoroughly
09:03
ArunM
had a question regarding exposure and similar operations
09:03
Bertl
okay, go ahead
09:04
ArunM
we have to emulate the functional model, so does it include manipulating the data
09:05
ArunM
or just the timings
09:05
ArunM
and sending data
09:06
Bertl
no doubt, it would be a nice touch if the sensor data would reflect the settings (like exposure, gain, etc)
09:06
Bertl
but I would consider this a bonus we can add if there is still time
09:07
ArunM
okay it means i have to store raw bitstream and apply necessary mathematical operations
09:07
ArunM
ok
09:07
Bertl
first let's see what the typical cases for the sensor simulation/emulation will be
09:08
Bertl
https://wiki.apertus.org/index.php/AXIOM_Beta/Camera_Structure
09:09
Bertl
if you look at the stackup, you see that there are three potential places where a sensor IP could go
09:09
Bertl
it could happen directly inside the Zynq (Main FPGA)
09:10
Bertl
basically eliminating the connection on the Main Board, the entire Interface Board and the Sensor Board
09:11
Bertl
the second obvious option is to replace the Sensor Board with a custom FPGA solution (e.g. Artix/Kintex) which runs the Sensor IP and simulates the sensor
09:12
ArunM
yes
09:12
Bertl
but there is a third option, which seems quite appealing to us ...
09:13
Bertl
the Interface Board is currently a dummy, which only passes half of the sensor lanes to the Zynq, but in the future, we want to replace that with an FPGA solution which works as gearwork to 'connect' the full sensor data bus
09:14
Bertl
given that the FPGA planned there should be able to handle the full bandwidth, it might be a nice and simple option to run the Sensor IP there (including the gearwork)
09:16
ArunM
Sensor ip should be designed such that it would be implemented in any case?
09:16
ArunM
could*
09:16
Bertl
that would be the idea, although we obviously target Zynq integration first, as the 'other' hardware is not available yet
09:17
Bertl
note that it is very likely that the Interface Board will be using an Artix or Kintex as well for several reasons, so the differences should be minimal
09:19
Bertl
what should be kept in mind is that the Sensor IP has to be modular enough to scale from very simple to complex and can be connected/optimized at all layers
09:19
ArunM
okay
09:20
ArunM
Only thing that will be changed in any implementation is the interface to feed fake data
09:20
se6astian|away
changed nick to: se6astian
09:20
Bertl
for example, the layer which does the LVDS timing for the Sensor might not be required for sensor simulation inside the Zynq
09:20
Bertl
but for sure it will be required in the custom Sensor Frontend case
09:21
Bertl
and it might be combined/optimized with the gearwork in the Interface Board case
09:21
Bertl
for the fake data, I also see different 'levels of quality'
09:22
Bertl
first, there is the sensor internal test pattern (which is quite trivial)
09:23
Bertl
then I would suggest to create a number of 'good' test patterns for our simulation, which are still 'computed' (need no storage) but show a little bit more contrast and color
09:24
ArunM
isn't the pattern fed like fake data?
09:24
Bertl
that is basically the final quality option here, but it might be 'too expensive' in certain cases
09:24
ArunM
ex requires lot of memorty
09:25
ArunM
memory*
09:25
Bertl
yes, the data needs to come from somewhere
09:25
Bertl
that's the reason why a generator (computed image) is very interesting for high bandwidth simulation
09:26
se6astian
changed nick to: se6astian|away
09:26
Bertl
but given that the Sensor Simulation IP is modular enough, this shouldn't pose a problem
09:26
Bertl
the 'only' thing which needs to be changed is the data source
09:27
se6astian|away
changed nick to: se6astian
09:27
ArunM
for video stream where to keep frames, is secondary memory available or it is fed every time?
09:27
ArunM
is it*
09:28
Bertl
I think there are a number of options we have here (i.e. we had a few ideas for this)
09:29
Bertl
one idea is to add sufficient memory (with the necessary bandwidth) to the hardware
09:30
Bertl
another idea is to use a live HDMI/USB/etc feed to supply the 'raw' data on the fly
09:30
Bertl
and yet another idea is to use the Zynq DDR memory for low bandwidth simulation
09:31
Bertl
the memory in the first option could be dedicated hardware (e.g. on a sensor simulation board)
09:32
Bertl
but it could also be a second MicroZed connected to the Sensor or Interface Board
09:35
Bertl
(note that this is the big picture ... not all of that is part of T728 but it should be considered)
09:35
ArunM
okay
09:36
ArunM
can i go through every case and tell my view on it after some time?
09:37
Bertl
sure
09:37
se6astian
changed nick to: se6astian|away
09:37
Bertl
also, if you have any new ideas, do not hesitate ...
09:38
ArunM
okay
09:58
rton
joined the channel
10:12
se6astian|away
changed nick to: se6astian
10:20
ArunM
left the channel
10:21
ArunM
joined the channel
10:25
_florent_
left the channel
10:25
_florent_
joined the channel
11:12
Bertl
off for now ... bbl
11:12
Bertl
changed nick to: Bertl_oO
11:25
aleb
left the channel
11:26
aleb
joined the channel
11:36
ArunM
From what I understood here I wrote an abstract of possible scenarios, correct me if I am wrong anywhere :-)
11:36
ArunM
Taking the 30 fps video at 4096x3072 you mentioned as the baseline for maximum data bandwidth, in the case of uncompressed data
11:37
ArunM
It comes to 18 MB per frame and 540 MB per second at 30 fps
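Those figures can be sanity-checked with a quick back-of-the-envelope calculation (assuming 12-bit raw samples packed without padding, which is an assumption; actual transports may pad each sample to 16 bits):

```python
# Back-of-the-envelope bandwidth check for uncompressed CMV12000-class video.
# Assumes 12-bit raw samples packed without padding (an assumption; actual
# transports may pad each sample to 16 bits, raising the figures by a third).
WIDTH, HEIGHT, BITS, FPS = 4096, 3072, 12, 30

bytes_per_frame = WIDTH * HEIGHT * BITS // 8   # 18,874,368 bytes
mb_per_frame = bytes_per_frame / 2**20         # 18.0 MiB per frame
mb_per_second = mb_per_frame * FPS             # 540.0 MiB/s at 30 fps

print(f"{mb_per_frame:.1f} MB/frame, {mb_per_second:.0f} MB/s")
```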
11:37
ArunM
Say we use HDMI or USB; then to meet timing constraints (delay caused by Ext. exposure time,
11:37
ArunM
frame overhead time etc )
11:37
ArunM
we cannot randomly start streaming data after delay directly from HDMI, and in case of USB 3.0 if we keep
11:37
ArunM
requesting frames directly after delay, then latency of USB is not that low to meet timing constraints.
11:37
ArunM
So with that in mind, the next option is to keep at least 1 uncompressed frame in DDR memory and then use either HDMI or USB to update
11:37
ArunM
it.
11:38
ArunM
or
11:38
ArunM
Using a dedicated hardware simplifies things like requesting frames on the go, and also DDR memory will not be required to hold frames. But it introduces a lot of work on the design side
11:38
ArunM
Also 1 question: if HDMI is used as input, at the back end is it connected to some sort of camera? or if not, how to signal the back
11:38
ArunM
end device to start sending frames?
11:40
se6astian
AFAIK several frames are buffered in memory
11:40
se6astian
at least 4 I think
11:47
ArunM
left the channel
11:49
ArunM
joined the channel
11:49
ArunM
i think you are talking about IP that intefaces with sensor board
11:50
ArunM
interfaces*
11:53
ArunM1
joined the channel
11:53
supragya
joined the channel
12:02
se6astian
possibly, best wait for Bertl_oO to return, he knows for sure
12:23
aombk2
left the channel
12:23
TD-Linux
left the channel
12:25
LordVan
left the channel
12:26
aombk2
joined the channel
12:26
TD-Linux
joined the channel
12:26
ArunM
left the channel
12:42
supragya
left the channel
13:38
RexOrCine|away
changed nick to: RexOrCine
14:00
supragya
joined the channel
14:01
supragya
Hello RexOrCine, se6astian!
14:01
RexOrCine
Hey supragya. What's the cricket situation there?
14:01
supragya
se6astian: I have given some thought over your advice, and I think I would be able to take forth the frameserver
14:02
supragya
along with the GSoC project... although it may stretch beyond GSoC period and I am fine with that
14:02
supragya
RexOrCine: I only follow international matches. Don't really like IPL
14:03
supragya
However seems like Chennai is going for the wins here (CSK)
14:04
Bertl_oO
changed nick to: Bertl
14:06
Bertl
ArunM1: don't waste too much time or thought on the live image feed ... in any case we will need some buffering there (we even need that for a generator) because the data rates are too high for 'requesting' the data
14:07
Bertl
the most likely scenario for this case will be two AXIOM Beta connected front to front so that one camera can 'play back' a sequence from memory and the other one acts as receiver with the simulated sensor
14:08
Bertl
the 'working horse' setup will be with artificial live data created by a generator or even just static test images
14:19
znca
joined the channel
14:33
ArunM
joined the channel
14:38
ArunM
So what will be the interface between both AXIOM Betas, or is it going to be part of the design?
14:41
supragya
left the channel
14:44
Bertl
Most likely we will do a direct connection with or without an FPGA
14:45
Bertl
think of it like removing the Sensor Board and connecting the cameras at the Interface Board
14:45
ArunM
okay
14:45
Bertl
we might use a dedicated FPGA board for this in the future, but for now this is the most realistic setup
14:46
Bertl
we might even get something like this working in the next few months
14:47
Bertl
what is important to consider for the SenSim IP is that we need a side channel for configuration
14:47
Bertl
i.e. some way to configure the generator image for example
14:48
ArunM
and finally for fake data?
14:49
ArunM
to stream/transfer fake data?
14:55
ArunM1
okay got it
14:55
ArunM1
read first message again
14:55
ArunM1
:-)
14:57
Bertl
okay :)
14:57
Bertl
so first step, sensor interface (LVDS, SPI, etc)
14:57
Bertl
then fake data via generator (trivial, simple, sophisticated)
14:58
Bertl
then live data from other AXIOM (kind of DDR based generator)
14:58
ArunM
left the channel
14:58
ArunM
joined the channel
15:00
znca
left the channel
15:01
aombk2
left the channel
15:01
TD-Linux
left the channel
15:01
derWalter
left the channel
15:01
illwieckz
left the channel
15:01
ArunM1
left the channel
15:01
anuejn2
left the channel
15:01
Kjetil
left the channel
15:01
alexML
left the channel
15:02
ArunM1
joined the channel
15:02
anuejn2
joined the channel
15:02
Kjetil
joined the channel
15:02
alexML
joined the channel
15:03
derWalter
joined the channel
15:03
aombk2
joined the channel
15:03
TD-Linux
joined the channel
15:03
BAndiT1983|away
changed nick to: BAndiT1983
15:04
illwieckz
joined the channel
15:04
derWalter
left the channel
15:04
davidak[m]
left the channel
15:04
anuejn
left the channel
15:04
vup[m]
left the channel
15:04
MilkManzJourDadd
left the channel
15:05
parasew[m]
left the channel
15:05
XD[m]
left the channel
15:05
elkos
left the channel
15:05
flesk_
left the channel
15:05
hof[m]
left the channel
15:08
ArunM1
Okay I'll create and explain my architectural map for sensor interface and after getting a "go" from you, I'll start coding!
15:08
Bertl
sounds good ...
15:15
ZNC
joined the channel
15:16
supragya
joined the channel
15:16
supragya
hello BAndiT1983
15:16
BAndiT1983
hi supragya
15:17
BAndiT1983
btw. you can DM g3gg0 in lab
15:30
mithro
left the channel
15:31
mithro
joined the channel
15:35
supragya
hi alexML, are you available?
15:50
supragya
Bertl, I would like to know the specifics of how the video (and the associated metadata, and what all is in the metadata) is provided by the camera to the final port like HDMI
15:50
supragya
In specific terms, the metadata (like date, time, exposure etc): is it video-specific, frame-specific, etc.?
15:51
Bertl
basically we currently use the following image pipeline:
15:51
Bertl
https://wiki.apertus.org/index.php/AXIOM_Beta/Manual#Image_Acquisition_Pipeline
15:51
supragya
What does the stream look like?
15:51
Bertl
metadata is not part of this at the moment, so it has to be recorded somewhere else
15:51
BAndiT1983
like raw12 file
15:51
danieel
Bertl: have you made some BW tests or just coded the hdl that it shall fit ? would be curious how much the zynq can really provide
15:52
Bertl
provide as in memory bandwidth?
15:52
danieel
from the numbers, it seems that multiple HP ports shall be used
15:52
danieel
yes
15:52
danieel
the PS controller, to PL client
15:53
supragya
[ so it has to be recorded somewhere else] -> what are the current mechanisms to get hold of this data, apart from raw pixel data/
15:53
supragya
*?
15:56
BAndiT1983
https://github.com/apertus-open-source-cinema/misc-tools-utilities/tree/master/raw2dng
15:56
Bertl
danieel: the total throughput is about 32Gigabit/s on the DDR2 memory via HP ports
15:57
danieel
it has ddr2? thought it was ddr3
15:57
BAndiT1983
supragya, cmv_snap3 captures and then raw2dng is used, as basic pipeline
15:57
Bertl
but given that the CPU has some memory access as well, it usually tops out at 28Gigabit
15:58
Bertl
I meant DDR3 (i.e. what we have on the MicroZed)
15:58
danieel
32G is over one 128bit HP port?
15:59
Bertl
a single port tops out around 10-14Gigabit with our current setup
15:59
ArunM
left the channel
15:59
danieel
thats good to know, thanks
15:59
Bertl
np
16:00
Bertl
supragya: there are no mechanisms in place at the moment to record it
16:00
nmdis1999
joined the channel
16:00
nmdis1999
Hi Bertl!
16:01
danieel
i am trying to figure out the best architecture for a GS sensor, where one has to subtract the reset frame.. so I thought that going by the hw controller would be the best idea (power consumption and performance wise, since PS can handle DDR3-1066 vs PL DDR3-800 at artix-based low-cost devices)
16:01
Bertl
supragya: depending on the actual data stream it might be feasible to encode the data in the stream (USB or even HDMI), or have a separate stream e.g. via ethernet
16:01
Bertl
hey nmdis1999!
16:02
nmdis1999
I had a doubt, are you my primary mentor or sebastian?
16:02
BAndiT1983
Bertl, is there some unique data per frame, like WB?
16:02
supragya
so in layman's terms, if the aperture is changed on the fly while recording video, it is not possible to gather the aperture values from the camera? It would have to be decoded somehow, maybe from changes in pixel intensities in the RAW12?
16:02
supragya
+1 to BAndiT1983's question?
16:02
Bertl
nmdis1999: we really couldn't decide on that (not that it matters that much :) so you probably can decide yourself
16:03
rahul__
joined the channel
16:03
nmdis1999
I don't really mind, you both are cool :) Just wanted to know.
16:03
xfxf_
joined the channel
16:03
BAndiT1983
nmdis1999, take Bertl, as he is like a vampire lord, around all night, seems like he using indian time zone for his life ;)
16:03
BAndiT1983
*he is using
16:03
Bertl
unique per frame metadata is possible, e.g. we can change exposure per frame for example
16:04
nmdis1999
lol, that would be great for me xD
16:04
supragya
I wondered why he said good morning when it was morning here
16:04
nmdis1999
^Same
16:04
BAndiT1983
qed!
16:04
Bertl
supragya: I'm under cover :)
16:04
nmdis1999
Although, according to IST he wakes up around 4-5 am which is scary.
16:05
supragya
unique per frame is possible, but no way to retrieve it? Am I missing something here?
16:05
supragya
nup nmdis1999, he wakes at 11AM at our end... it is about 6-7 there maybe
16:05
Bertl
we also have our solder on area with IMU ready, so as soon as we get the FPGA packet protocol working, there might also be IMU data for each frame
16:05
BAndiT1983
nmdis1999, it's the way of old people to stand up early, my neighbour, an elderly lady, was already up at 6am when I left for work :D
16:06
nmdis1999
Bertl, I did code the tool for histogram (and indented it as you asked) https://github.com/nmdis1999/Histogram
16:06
danieel
somebody shall explain why IST is :30 min off ... could not decide on summer/winter time so chose the average? :)
16:06
Bertl
supragya: well, metadata has not been recorded yet (except for snapshots)
16:06
supragya
hmm, but it is TBD right?
16:06
nmdis1999
Should I proceed and work on that or start studying cmv_hist3
16:07
BAndiT1983
yup, without it it's pointless to make videos
16:07
BAndiT1983
my sentence was meant for supragya
16:07
Bertl
nmdis1999: you sure about that? :)
16:07
supragya
:)
16:07
supragya
got it BAndiT1983 :)
16:07
nmdis1999
about the code? I wanted you to check it once :)
16:08
Bertl
because at the first look I see some inconsistencies with the indentation there ...
16:08
supragya
danieel: :30 is kind of strange, but makes some mathematical sense (this is the best I can tell you)
16:09
supragya
As the story goes, while India was to be unified, so was to be unified the time, so they found one central location
16:10
supragya
Calculated the time there; it was in the middle of two zones, so the whole of India got +5:30... best explanation I can provide
16:10
rahul_
left the channel
16:10
xfxf
left the channel
16:10
rahul__
changed nick to: rahul_
16:10
xfxf_
changed nick to: xfxf
16:10
nmdis1999
I'll give it a look! Do you recommend I use a formatter, Bertl?
16:11
Bertl
that is probably the best way until you get used to doing proper indentation while you are coding
16:11
nmdis1999
Sounds right.
16:11
derWalter
joined the channel
16:11
BAndiT1983
https://foxnomad.com/2017/11/07/indias-time-zone-30-minutes-off-rest-world/
16:12
Bertl
I also commented that you don't want to process the data in separate storage arrays ... mainly because the data is huge and you are moving it around over and over
16:12
BAndiT1983
also here i would suggest pool allocator (or block allocator as it's called sometimes)
16:13
Bertl
so a proper algorithm has to work on the data in one pass, basically looking at each sensel data only once, ideally accessing them in memory order
16:13
BAndiT1983
as the color channels are of the same size
16:13
Bertl
otherwise the performance will be really bad ...
16:13
BAndiT1983
Bertl, do we need every pixel right away, or is it possible to skip?
16:14
Bertl
sure, skipping and cropping helps if done properly
16:14
Bertl
for example, there is almost no point in skipping every second sensel
16:14
supragya
I guess it will help as it is purely memory pointer maths and then we can have maybe a rough histogram
16:14
Bertl
because it will be fetched from memory anyway, but it makes sense to skip every second row for example
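The one-pass, memory-order approach described above could be sketched like this (assuming a flat list of sensel values and a hypothetical `row_skip` parameter; the real tool works on packed raw12 data):

```python
# One-pass histogram in memory order: each sensel is visited at most once,
# and decimation skips whole rows (cheap) rather than alternate sensels
# within a row, which would be fetched from memory anyway.
def histogram_one_pass(sensels, width, bits=12, row_skip=1):
    hist = [0] * (1 << bits)
    for row_start in range(0, len(sensels), width * row_skip):
        for v in sensels[row_start:row_start + width]:
            hist[v] += 1
    return hist
```

With `row_skip=2` every second row is skipped entirely, halving the memory traffic while keeping a representative histogram.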
16:15
BAndiT1983
has zynq any special instructions? like SIMD, SSE or AVX?
16:15
nmdis1999
Okay, noted. I'll work on that.
16:15
Bertl
there is NEON which is SIMD
16:16
Bertl
but the main bottleneck will be memory bandwidth
16:16
supragya
SIMD should help in histogram
16:17
Bertl
nmdis1999: we will setup a system for you to make some performance tests during the next week (probably)
16:17
parasew[m]
joined the channel
16:17
nmdis1999
Thank you :) I really intend to do much work during community bonding as I'll have exams in the early week when the coding period begins
16:18
nmdis1999
and I don't wish my timeline to get disturbed
16:18
Bertl
sounds like a plan.
16:19
nmdis1999
Off for now, will be back soon :)
16:19
Bertl
cya
16:19
nmdis1999
left the channel
16:20
supragya
nmdis1999: copies Bertl's lines :)
16:25
MilkManzJourDadd
joined the channel
16:25
davidak[m]
joined the channel
16:25
elkos
joined the channel
16:25
XD[m]
joined the channel
16:25
hof[m]
joined the channel
16:27
BAndiT1983
changed nick to: BAndiT1983|away
16:30
BAndiT1983|away
changed nick to: BAndiT1983
16:41
sebix
left the channel
16:45
g3gg0
joined the channel
16:46
supragya
good evening g3gg0
16:50
flesk_
joined the channel
16:50
anuejn
joined the channel
16:50
vup[m]
joined the channel
16:56
supragya
BAndiT1983, g3gg0: why are there no ordering restrictions of blocks in MLV?
16:56
se6astian
changed nick to: se6astian|away
17:05
supragya
left the channel
17:05
supragya
joined the channel
17:14
Guest27507
left the channel
17:14
rahul_
left the channel
17:14
Guest27507
joined the channel
17:14
rahul_
joined the channel
17:15
supragya
left the channel
17:33
se6astian|away
changed nick to: se6astian
18:27
g3gg0
hiho
18:27
Bertl
hiho :)
18:27
g3gg0
> why are there no ordering restrictions of blocks in MLV?
18:28
g3gg0
@supragya is gone, but the explanation is: the devices that process MLV files, have enough computing power to do random access
18:28
g3gg0
a writing device, like a uP in a camera, has various restrictions and limitations
18:30
g3gg0
we found that a buffering mechanism which allows writing frames in that buffer in a way, so that the IO device has maximum write rates, requires that the blocks can appear out of order
18:30
g3gg0
(assuming the buffer memory is non-contiguous, as we had on canon cameras)
18:30
BAndiT1983
supragy reads logs very often, so don't worry
18:30
BAndiT1983
*supragya
18:31
alexML
hola
18:31
g3gg0
but even on a standard ringbuffer you could gain write speed if you can write the "longest" block of frames
18:31
g3gg0
hi
18:31
BAndiT1983
so audio frames are written, when the processing is finished, so to say in between?
18:31
g3gg0
inbetween
18:31
BAndiT1983
hi g3gg0
18:32
BAndiT1983
i've suggested, that he should look into MLV first, as it is quite suitable for the task
18:33
g3gg0
if you have a single buffer with e.g. 100 frames of space, and you fill the buffer in a linear way from frame 0 to 99, and your write device writes asynchronously.
18:33
RexOrCine
(16:12:18) supragya: Seems like I missed my primary mentor yesterday. He came (g3gg0) and I wasn't available
18:33
RexOrCine
(16:12:37) supragya: Do you know how can I reach him?
18:33
RexOrCine
(16:12:57) RexOrCine: Have you tried DMs through here?
18:33
RexOrCine
(16:13:26) RexOrCine: As, if he's unable to catch-up on previous comms he'll need to set up a bip account.
18:33
RexOrCine
(16:14:05) supragya: seems g3gg0 isn't on IRC for most part... BAndiT is my second mentor... seen him only once or twice here... that's why i requested a bouncer
18:33
RexOrCine
(16:14:17) supragya: how's it going with bouncer, do you know/
18:33
RexOrCine
(16:14:21) supragya: know?
18:33
RexOrCine
(16:15:40) RexOrCine: I find my bip account temperamental, but it works for the most part. I'll bring this up and see what can be done.
18:33
RexOrCine
(16:16:54) supragya: Let's hope it gets okay by the weekend... I have my last exam tomorrow and then leaving for home at night tomorrow... Will be available on maybe the 29th
18:33
RexOrCine
(16:17:20) supragya: Packing etc was all that I did last couple days
18:33
RexOrCine
(16:17:46) supragya: That's why I asked about if some meetings were in place, so that I don't miss them
18:33
RexOrCine
(16:19:19) RexOrCine: You moving house are you? Going home for the holidays or something?
18:34
Bertl
no need to paste parts of the irc log here, just paste an link to the log
18:34
RexOrCine
I guided him RE sending DMs through the lab so presumably you should get something through there from him.
18:34
BAndiT1983
supragya has created a meeting room in lab, there we can gather imporant stuff and discuss the gsoc related things
18:34
BAndiT1983
*important
18:35
RexOrCine
Bertl - Those were DMs.
18:36
Bertl
well, in this case you probably should keep them private instead of dumping them into the public logs :)
18:36
BAndiT1983
maybe we should use dedicated gsoc task channels for that
18:36
g3gg0
(con't) assuming you want to write blocks in an optimized way, combining several frames, it will happen that you are only left with a smaller chunk with only a few frames. on canon cameras this reduced write speed so much that it wasn't stable enough. as a solution we decided to allow the write buffers to contain unordered frames, and the write task can pick the largest block which promises the highest write speed
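The 'pick the largest block' idea can be illustrated with a toy model (hypothetical; the real write-task logic also deals with non-contiguous buffer groups and device-specific optimal write sizes):

```python
# Toy model of the write-buffer selection: slots mark frames waiting to be
# written; the write task drains the longest contiguous run of filled slots
# (the block promising the highest write rate) instead of strict frame order.
def pick_write_block(slots):
    best = (0, 0)          # (start index, run length)
    start = None
    for i, filled in enumerate(list(slots) + [False]):   # sentinel to flush last run
        if filled and start is None:
            start = i
        elif not filled and start is not None:
            if i - start > best[1]:
                best = (start, i - start)
            start = None
    return best
```

Writing frames out in this order is exactly why MLV blocks may appear out of sequence on disk.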
18:37
Bertl
BAndiT1983: I think one channel is fine ... this way everybody knows who's working on what ...
18:37
Bertl
in case there are two intense discussions going on, we can always split one off into a separate room
18:37
g3gg0
as said, it was complicated because our buffers weren't contiguous and the IO device suffered a lot when the write sizes weren't optimal
18:38
g3gg0
that said, the best solution was to allow random frame order
18:38
BAndiT1983
g3gg0, maybe i've missed something: besides the blocks being in different order, are the frames stored sequentially? implemented an mlv reader long ago and don't remember my research much
18:38
g3gg0
alexML put a lot of effort in the write optimization algorithm
18:38
g3gg0
they are not stored in sequential order in MLV files
18:39
g3gg0
before processing MLV file frames, you have to create an index
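Such an index pass could look roughly like this (a sketch assuming the common block header layout of 4-byte type, uint32 total size, uint64 timestamp; the leading MLVI file header carries a version string instead of a timestamp and is treated specially):

```python
import struct

def index_mlv_blocks(path):
    """Build an index of (type, offset, size, timestamp) for one MLV chunk,
    so out-of-order VIDF/AUDF blocks can later be processed by timestamp.
    Assumed header layout: 4-byte block type, uint32 total block size,
    uint64 timestamp; the leading MLVI header has no timestamp field."""
    index = []
    with open(path, "rb") as f:
        offset = 0
        while True:
            hdr = f.read(16)
            if len(hdr) < 16:
                break
            btype = hdr[:4]
            size = struct.unpack("<I", hdr[4:8])[0]
            if btype != b"MLVI":
                ts = struct.unpack("<Q", hdr[8:16])[0]
                index.append((btype.decode(), offset, size, ts))
            f.seek(offset + size)   # blockSize includes the header itself
            offset += size
    return sorted(index, key=lambda entry: entry[3])
```

For a recording split across chunks (.MLV, .M00, ...), the same scan would run per chunk and the per-chunk indexes be merged by timestamp.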
18:39
BAndiT1983
if the order is random, would it make sense to order them in post processing and store as new MLV file?
18:39
g3gg0
possible, but not required
18:39
g3gg0
iirc the more restricting part of PP is the image processing, not the IO device
18:39
BAndiT1983
just thinking about performance optimizations a bit
18:40
BAndiT1983
streaming for example would be easier if ordered
18:40
g3gg0
yep
18:41
BAndiT1983
there was just a discussion about the raw container for axiom and how to process/play the file, seen in ML forums that the guys use fuse, so supragy has created very basic frame server structure, maybe it can be applied later
18:41
g3gg0
if you process a MLV file with mlv_dump it will order automatically when writing
18:41
BAndiT1983
*supragya (missing the last key constantly)
18:41
g3gg0
(hmmm... or did i change that just experimentally and never checked in?)
18:42
BAndiT1983
but this is ML related currently, right?
18:42
g3gg0
yep
18:42
g3gg0
but could also apply to apertus
18:42
BAndiT1983
of course, sounds good
18:42
g3gg0
when it comes to audio / video sync
18:42
g3gg0
you can write sequentially if you wish to
18:43
BAndiT1983
do you know why the ffmpeg version hasn't been maintained since last year? tried to verify my results in OC with it, but MLV was greenish
18:43
g3gg0
every block has an identifier, size and a timestamp
18:43
g3gg0
ffmpeg could read MLV, but debayering wasn't implemented iirc
18:43
BAndiT1983
also considered to offer the implementation from OC at some point, when the sequences are finally loading and not just frames
18:44
BAndiT1983
MLV is rather good, had a lot of fun implementing the reader, it was much more straightforward than something like AVI
18:44
g3gg0
cool, good to hear that :)
18:45
BAndiT1983
what about custom tags there?
18:45
g3gg0
no problem, its allowed by design
18:45
g3gg0
every reader has to ignore unknown types
18:45
BAndiT1983
supragya had some questions and one was why we don'T use AVI as it's standard
18:45
g3gg0
there are some fundamental block types that must be supported
18:46
g3gg0
MLVI - thats the header containing the video file GUID, the content type information, frame rate and (optionally) the frame count
18:47
g3gg0
every video recording can be split into several files, like you know from RAR, R01, .... here it is called .MLV, .M00, .M01 etc
18:48
BAndiT1983
have missed the bit back then, but sounds very good, hope to find maybe some examples in the ML forum
18:48
g3gg0
all files have their header with a random "GUID" which is the same for all "chunks" (MLV, M01...)
18:49
g3gg0
https://www.magiclantern.fm/forum/index.php?topic=7122.0
18:49
g3gg0
there you see a lot of blocks
18:49
BAndiT1983
i know this page too good ;) looked a lot through it for implementation
18:49
g3gg0
ah okay, then for supragya :)
18:50
BAndiT1983
but real sample files are not that often
18:50
BAndiT1983
i think i have spammed him also with the link at least 5 times :D
18:50
g3gg0
there was a collection, let me check
18:50
BAndiT1983
this is one of the threads -> https://www.magiclantern.fm/forum/index.php?topic=11899.50
18:51
g3gg0
exactly
18:51
BAndiT1983
last files are nice to see, because of 10, 12 and 14bit data
18:51
BAndiT1983
but some multi-part sample would be cool
18:51
g3gg0
got also a collection locally of all kinds of mlv versions
18:51
g3gg0
its just... 300 GiB
18:52
BAndiT1983
ouch
18:52
BAndiT1983
pity that my camera is not supported yet, otherwise would have shot clips myself for testing
18:53
g3gg0
which one do you have?
18:53
BAndiT1983
eos760d, seen a lot of people wanting to test firmware pieces which were posted, but seldom would someone offer to port ML
18:54
BAndiT1983
what is mlv_lite, by the way?
18:54
g3gg0
@supragya: lets get in touch via mail etc, as i have a daytime job without access to IRC, i can either chat there at nighttime (9 PM CET and later) . will share you my business mail, there i am available all the time
18:55
BAndiT1983
g3gg0, have you seen the chat room in lab yet?
18:55
g3gg0
mlv_rec was the first approach with full-blown support for writing GPS coordinates, LENS infos, level meter infos etc which caused the write rate to be rather bad
18:55
g3gg0
alexML started from scratch with a more lean design, focusing on write rate
18:56
BAndiT1983
just tried to playback some samples in vlc, but they are shot in mlv_lite
18:56
g3gg0
right now mlv_lite is "the" raw recording module
18:57
g3gg0
mlv_rec is dead (was a good start but never could fix performance issues) and not meant for use
18:57
g3gg0
t.b.h., no, don't know that one
18:58
g3gg0
do now :)
18:58
BAndiT1983
is mlv_rec2 still the current one?
18:58
g3gg0
dont know that one :D
18:59
g3gg0
there's just raw_rec (first prototype, .RAW files), then mlv_rec (also called Magic Lantern (RAW) Video format v2.0) and now mlv_lite
18:59
BAndiT1983
ok, then ver mlv_rec 2.0 ;)
19:00
g3gg0
mlv_rec supports audio, mlv_lite not out of the box (just a prototype which has some issues)
19:00
BAndiT1983
ok, so we should stick with mlv_rec for now
19:00
g3gg0
nah, don't focus on mlv_rec too much
19:00
g3gg0
that wouldn't help imho
19:01
BAndiT1983
ok, but how would you approach the task? my idea was to support MLV format in axiom, as starting point at least
19:01
g3gg0
well yes, check what the MLV stuff is all about and how writing is done. maybe mlv_dump, the commandline tool, is the best.
19:01
BAndiT1983
afterwards we would see if it has all the bells and whistles we need or extend it with custom packets
19:02
ArunM1
left the channel
19:03
BAndiT1983
posted a link to the source code in lab chat, this one -> https://bitbucket.org/g3gg0/magic-lantern-mlv_rec
19:03
BAndiT1983
at least something to start with
19:03
g3gg0
yep
19:04
BAndiT1983
why is the code mixed, there is C and also python there?
19:04
g3gg0
maybe we just get too much speed right now :)
19:05
g3gg0
one good pointer is: https://bitbucket.org/hudson/magic-lantern/src/02e5918a6ed5f4e21f2e50d84744f5adddcc0771/modules/mlv_rec/mlv.h?at=crop_rec_4k_mlv_lite_snd&fileviewer=file-view-default
19:06
g3gg0
together with the already posted forum entry
19:06
g3gg0
that will explain the anatomy of a MLV file
19:07
g3gg0
the first steps to make a valid MLV is:
19:07
g3gg0
MLVI (header), RAWI (resolution etc), VIDF (video frame)
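A toy writer for exactly those three blocks might look like this (the field layouts are simplified placeholders, not the full mlv.h structs, which carry many more fields such as the file GUID, frame rate and the ML-internal raw_info):

```python
import struct

def write_minimal_mlv(path, width, height, frame_bytes):
    """Write an illustrative minimal MLV: MLVI header, a RAWI block with
    the resolution, and one VIDF carrying a raw frame payload. The field
    layouts are simplified placeholders, not the full mlv.h structs."""
    with open(path, "wb") as f:
        mlvi = b"v2.0\x00\x00\x00\x00"                    # version string only
        f.write(b"MLVI" + struct.pack("<I", 8 + len(mlvi)) + mlvi)
        rawi = struct.pack("<QHH", 0, width, height)      # timestamp, xRes, yRes
        f.write(b"RAWI" + struct.pack("<I", 8 + len(rawi)) + rawi)
        vidf = struct.pack("<Q", 1) + frame_bytes         # timestamp + payload
        f.write(b"VIDF" + struct.pack("<I", 8 + len(vidf)) + vidf)
```

Each block's size field covers the whole block including its header, which is what lets a reader skip from block to block when indexing.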
19:09
g3gg0
RAWI is unfortunately a bit complex as it contains a structure which is ml internal
19:09
g3gg0
then the issue with raw frame bayer encoding is to be solved
19:09
BAndiT1983
been there, done that :) -> https://github.com/apertus-open-source-cinema/opencine/blob/master/Source/OCcore/Image/MLVLoader.cpp
19:10
BAndiT1983
which encoding?
19:10
g3gg0
there are two options we have in camera: a) raw bayer as it is in camera memory or b) LJ92 encoded pixels
19:11
g3gg0
https://bitbucket.org/hudson/magic-lantern/src/02e5918a6ed5f4e21f2e50d84744f5adddcc0771/modules/mlv_rec/lj92.c?at=crop_rec_4k_mlv_lite_snd&fileviewer=file-view-default
19:11
g3gg0
lj92 is being done in-camera
19:11
BAndiT1983
where is lj92 coming from?
19:12
BAndiT1983
ah, jpeg92 it seems, google hasn't found meaningful info on lj92 first
19:12
g3gg0
it is lossless jpeg
19:12
BAndiT1983
https://thndl.com/how-dng-compresses-raw-data-with-lossless-jpeg92.html
19:12
g3gg0
exactly
19:12
BAndiT1983
placing links for supragya here and in lab
19:12
g3gg0
reduces raw size a lot
19:13
BAndiT1983
is it used because of speed?
19:13
g3gg0
if i remember correctly it's compressing to ~55% of the original frame size
19:13
BAndiT1983
not bad
19:13
g3gg0
this allows reducing the write load a lot
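For scale, a rough write-load estimate using the CMV12000's full resolution (4096×3072, 12-bit raw) at an assumed 24 fps, with the ~55% figure quoted above:

```python
# Back-of-the-envelope write load for packed 12-bit raw bayer frames.
# Resolution matches the CMV12000; the 24 fps and 0.55 ratio are the
# assumptions from the discussion, not measured values.
width, height, bits_per_pixel, fps = 4096, 3072, 12, 24

frame_bytes = width * height * bits_per_pixel // 8   # one packed raw frame
raw_rate_mib = frame_bytes * fps / 2**20             # uncompressed MiB/s
lj92_rate_mib = raw_rate_mib * 0.55                  # after lossless jpeg, ~55%

print(round(raw_rate_mib), round(lj92_rate_mib))     # prints: 432 238
```

So the compression takes the sustained write requirement from roughly 432 MiB/s down to about 238 MiB/s, which is the "write load" reduction meant here.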
19:13
BAndiT1983
is there no newer algorithm? just curious
19:13
BAndiT1983
what about processing power?
19:14
g3gg0
good question. canon does this ;)
19:14
alexML
the LJ92 algorithm is implemented in some sort of DSP (with unknown architecture); we have no control over it other than calling it
19:15
g3gg0
we are not doing lj92 in software on our own, its in DIGIC somewhere
19:15
BAndiT1983
ah, we have to take that into account
19:15
BAndiT1983
maybe Bertl can reply if we could do something like that in FPGA?
19:15
g3gg0
could be a valid option to also compress the frames using lj92
19:16
g3gg0
depends on the required logic blocks for this compression
19:16
g3gg0
could be useful for writing DNG images too
19:17
BAndiT1983
if i search for jpeg92, then apertus irc logs pop up in google :D
19:17
BAndiT1983
http://irc.apertus.org/index.php?day=27&month=10&year=2014
19:17
g3gg0
inception
19:19
BAndiT1983
as we don't have the compression on FPGA yet, we should treat it as second step, just to make things simpler for a moment
19:26
g3gg0
one thing i want to mention - and why i think we are a bit too fast: the raw video container job is not "make MLV work in apertus!"
19:26
g3gg0
even if i'd love to see that ;)
19:27
g3gg0
its about "find out the requirements and the restrictions for the container format. find viable solutions and make a prototype"
19:27
BAndiT1983
i know, just need some starting point
19:28
BAndiT1983
by supporting MLV natively, it would help spread it more, as support out there is not that big
19:28
BAndiT1983
but if we can output the data in some stream-like format with additional data, then it would also be fine to consolidate and convert it on PC afterwards
19:29
g3gg0
yep absolutely, but i dont want to bias the result of his analysis
19:29
danieel
so what are the real options for containers? mkv/qt ? why are you reinventing the wheel?
19:29
aombk
joined the channel
19:29
g3gg0
exactly these are the questions to ask
19:29
BAndiT1983
it's about internal processing in camera first
19:29
danieel
i assumed mlv came from limitations of a canon fw hackup
19:29
BAndiT1983
it's not a hack ;)
19:29
danieel
which are not present here
19:29
g3gg0
i do not 100% agree to that statement
19:30
g3gg0
we had some specifics, yes
19:30
BAndiT1983
isn't qt/mov license restrictive?
19:30
g3gg0
those would have been a bit more effort to solve with "common" container formats
19:31
g3gg0
but i as you: what benefit would we have with mov? no tool could work with our files
19:31
g3gg0
*ask
19:31
danieel
BAndiT1983: there is ISO standard for mp4 which is same container as mov
19:31
danieel
i believe there cant be ISO for proprietary stuff
19:32
aombk2
left the channel
19:32
BAndiT1983
codecs are proprietary mostly
19:32
g3gg0
we spit out some RGGB raw video stuff with custom tags for raw properties, custom LENS tags, custom exposure info tags etc
19:32
danieel
with mov - lot of tools can work with that, the frames/seeking, multiplexing is clearly given by the qt/mov format
19:32
danieel
the question is in codec. your tool will support yours. if it catches up one day you can dragndrop beta movs to davinci resolve :)
19:33
g3gg0
and metadata?
19:33
BAndiT1983
main question is, if the camera can record mov on the fly
19:33
danieel
i worked so far with static metadata only (camera sn/shot info)
19:33
g3gg0
stuff like DARK frames
19:33
danieel
i think arri has a proprietary stream track for dynamic metadata
19:34
danieel
dark frame as 1 dark shot before the sequence?
19:34
g3gg0
and now we are getting into some real issues when our "big pro" for mp4 was that there is a lot of stuff available, but we suddenly have to hack together a lot of special extensions
19:35
g3gg0
where we just wanted a simple, lean container format that can be read without compiling several C++ libraries together
19:35
BAndiT1983
have no real problem with outputting stream with frame markers and other metadata separately, then merge on pc in post
19:35
danieel
well, there has to be a clear separation of a codec/metadata and the container
19:36
g3gg0
yet you would not win any interoperability
19:36
BAndiT1983
what are arri, red and blackmagic camera outputting?
19:36
BAndiT1983
*cameras
19:36
g3gg0
some of them store CinemaDNG?
19:37
danieel
you make 1) muxing with audio straightforward 2) splicing to parts simpler
19:37
g3gg0
not sure
19:37
danieel
arri does mov/prores
19:37
danieel
bm does mov/prores and cdng
19:37
BAndiT1983
in the camera?
19:38
danieel
dngs is easy to customize, with the todays state of things that nobody is taking a central authority.. just find a free tiff tag ID and use it for your own purpose :)
19:38
danieel
blackmagic broke supported codecs with their 3:1/4:1 codec, so yes.. things went really wild :)
19:38
BAndiT1983
cinemadng should already have most fitting tags
19:39
BAndiT1983
what about arriraw, it's also not a common format
19:39
danieel
drawback is split-to-files, which is bad if your os has some overhead with open()/close()
19:40
g3gg0
on a PC you probably have no problem there...
19:41
g3gg0
on embedded devices, i'd definitely circumvent fs metadata updating
19:41
danieel
well, some users dont like the lot of files.. and i would not care, but midnight commander tends to copy the files shuffled, which is meh... why?
19:41
danieel
(probably readdir() returns shuffled data because of FS having trees?)
19:41
g3gg0
mc is odd with too many files
19:42
g3gg0
many files for one video take make things complex
19:43
BAndiT1983
are you people sticking with midnight commander because of nostalgia? many colleagues do also, but my days with norton commander are long gone and i'm not looking back
19:43
g3gg0
i do
19:43
g3gg0
i miss it :]
19:43
danieel
well.. when you use remote ssh lot, then mc is fine, you cant easily access X
19:44
g3gg0
and NDD, norton disk doctor
19:44
BAndiT1983
g3gg0, dosbox is your friend ;)
19:44
g3gg0
right now i prefer the inverse - WSL
19:45
danieel
arriraw is file per frame, header + blob... sort of dump the struct{} to file :)
19:46
Bertl
g3gg0: World Surf League?
19:46
g3gg0
lol, no. Windows Subsystem for Linux
19:47
g3gg0
you can see it as /wine or ~wine
19:47
BAndiT1983
g3gg0, bad topic for Bertl
19:47
danieel
wine >glass :P
19:48
BAndiT1983
i know that alcohol helps a lot when developing software, but the discussion is wandering a bit off ;)
19:48
g3gg0
MS publishes a linux compatibility layer with latest windows 10's
19:48
BAndiT1983
windows is a very bad topic for Bertl :D
19:48
BAndiT1983
it has UI!
19:48
g3gg0
allows you to run unmodified ELF on windows
19:48
danieel
so.. unless you can do a compatible output, use at least a compatible/standard container :)
19:49
Bertl
BAndiT1983: it 'has a really bad UI' is probably what you meant :)
19:49
danieel
per file: cdng, single file: mp4/mov/qt vs mkv (that's easy with licensing) and a custom codec. doing both custom codec and custom container does not make sense..
19:49
BAndiT1983
tried vi at work, as there was just QNX and telnet, was no fun when backspace is not working, so i'm sticking to UI
19:49
g3gg0
on modern computers you need an UI. how else would you arrange several shell windows next to each other? screen does not support multihead
19:51
BAndiT1983
mkv would be an alternative, but performance tests have to be done first
19:51
g3gg0
@danieel: if my main effort is keeping things small and controllable, i do prefer bottom-up over just reusing foreign code i do not know
19:51
BAndiT1983
zynq is not that capable and we have a lot of other stuff ongoing while recording
19:52
BAndiT1983
my vote is still for MLV here, but feel free to suggest some option which is suitable for embedded system
19:52
danieel
what is the benefit of mlv over qt/mov/mp4?
19:52
BAndiT1983
Bertl, can we stream the data and add markers after every frame?
19:53
g3gg0
guys, we are losing focus. supragya's task is to do exactly that.
19:53
Bertl
stream how, mark what?
19:54
BAndiT1983
qt and mov are proprietary and need license, so we are left with mkv and mp4
19:54
danieel
qt=mov=mp4, structure wise
19:54
BAndiT1983
when the data is pouring in from the sensor, as image sequence, can we mark separate frames?
19:55
g3gg0
i've lost too much time in debugging other people's "libSomething" code and modifying it to fit my corner case needs. thats why ML doesnt write mp4
19:55
g3gg0
the file structure is an easy task
19:56
Bertl
BAndiT1983: it is not 'pouring' in, it is currently stored 'per frame' in memory
19:56
BAndiT1983
ah ok, even better
19:56
Bertl
but that doesn't help much, as you do not have the processing power to handle the data
19:56
Bertl
(at least not with moving pictures)
19:57
BAndiT1983
my suggestion was to write it straight to some file, like stream, but with frame marker in between, so we can distinguish them
19:57
BAndiT1983
and some other file would get the metadata
19:58
BAndiT1983
merge would be done on PC and conversion to some standard format, just an idea because of zynq shortcomings
19:58
BAndiT1983
not really shortcomings, but less performance
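A minimal sketch of that stream-plus-markers idea: frames go into one file with a marker before each, metadata goes elsewhere, and the PC side splits the stream back into frames. The `FRAM` marker and header layout here are invented for illustration, not any agreed format:

```python
# Toy frame-marker stream: each frame is prefixed with a 4-byte marker and
# a uint32 payload length, so a PC-side tool can re-split the stream.
import io
import struct

FRAME_MAGIC = b"FRAM"  # hypothetical per-frame marker

def write_stream(fh, frames):
    for frame in frames:
        fh.write(FRAME_MAGIC + struct.pack("<I", len(frame)))
        fh.write(frame)

def read_stream(fh):
    frames = []
    while True:
        head = fh.read(8)
        if len(head) < 8:
            break
        magic, size = head[:4], struct.unpack("<I", head[4:])[0]
        assert magic == FRAME_MAGIC, "stream out of sync"
        frames.append(fh.read(size))
    return frames

buf = io.BytesIO()
write_stream(buf, [b"frame-one", b"frame-two"])
buf.seek(0)
assert read_stream(buf) == [b"frame-one", b"frame-two"]
```

The appeal for the zynq is that the in-camera path is a plain sequential write; all container work moves to post.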
19:58
danieel
i did not use libsmth, have my own mp4 lib.. do not like to depend on other people's broken libs (had a lot of fun with libtiff making dngs... i'd rather be in full control)
19:59
Bertl
BAndiT1983: sounds nice, but 'how' do you imagine to get the data from memory to the 'pc'?
19:59
danieel
BAndiT1983: merge external is bad. Kinefinity did broke some teeth on that.. mux in camera if possible.
19:59
danieel
(if you have storage, of course)
19:59
danieel
Bertl: maybe over the usb3 which is in works?
19:59
BAndiT1983
there we have 2 showstoppers at the moment, which need to be solved
20:00
BAndiT1983
what about the sdcard array?
20:00
Bertl
danieel: yes, but that does not depend on the 'in memory' store
20:00
Bertl
BAndiT1983: also doesn't depend on that, i.e. we can 'design' the format of that as well
20:00
danieel
BAndiT1983: can sdcard controller do UHS-I ? (thats licensed / closed spec as well)
20:01
Bertl
danieel: UHS-II would be required to make sense
20:01
BAndiT1983
just asking, as there were some discussions a couple of months ago
20:01
danieel
he said RAID :)
20:02
danieel
uhsII is easiest with usb3.. but thats rather ZU tech, not Z
20:02
Bertl
yes, it is a long term project which got popularized too early
20:09
TofuLynx
joined the channel
20:09
TofuLynx
Hello everyone! :)
20:10
BAndiT1983
hi TofuLynx
20:10
TofuLynx
how are you?
20:11
BAndiT1983
fine, and you?
20:12
BAndiT1983
have you reflected a bit on the stuff from yesterday?
20:12
TofuLynx
I'm fine too
20:12
TofuLynx
not yet
20:12
g3gg0
hi TofuLynx
20:13
TofuLynx
hey g3gg0
20:15
TofuLynx
You are one of my mentors
20:15
TofuLynx
But I don't really know you!
20:15
g3gg0
i dont know you either :)
20:16
g3gg0
hehe that will change with time :)
20:16
TofuLynx
:P
20:16
TofuLynx
So, what's your name?
20:17
g3gg0
well. i am even called g3gg0 by my family. so i guess thats my name
20:18
comradekingu
left the channel
20:18
TofuLynx
O.o wow
20:19
g3gg0
my realname is Georg
20:19
TofuLynx
ah! xD
20:19
g3gg0
whois g3gg0.de
20:20
TofuLynx
hmm
20:20
g3gg0
;)
20:20
TofuLynx
so, what are your roles on apertus?
20:20
BAndiT1983
ask better about his role at ML ;)
20:21
TofuLynx
ah xD
20:21
TofuLynx
wait
20:21
TofuLynx
oooh
20:22
BAndiT1983
haven't you read todays logs yet?
20:22
TofuLynx
oh
20:22
BAndiT1983
supragya is always up to date, still wondering how he can be that patient to look through this stuff
20:22
TofuLynx
Will check it
20:22
g3gg0
someone told me that i do a lot of things. many things. nothing really good, but many.
20:23
TofuLynx
holy
20:23
TofuLynx
50Kb today
20:23
g3gg0
¯\_(ツ)_/¯
20:23
BAndiT1983
g3gg0, i know that from somewhere ;)
20:24
BAndiT1983
ah, i see that you are also interested in SDR, just got my sdr-rtl stick recently
20:24
g3gg0
some time ago yeah
20:24
g3gg0
made a GSM decoder in C#
20:25
g3gg0
think that was in 2009 and later
20:25
BAndiT1983
looks very interesting
20:26
g3gg0
released the sources in 2013
20:26
g3gg0
also my kraken version (GSM cipher cracker)
20:26
danieel
but you got the specs/standard text ?
20:27
g3gg0
were free
20:27
TofuLynx
hmm
20:27
danieel
great..
20:27
g3gg0
horrible ETSI specs written in word
20:27
TofuLynx
Raj created a private room?
20:27
danieel
well, looking to 750 pages of h264.. cant say its much easier today
20:27
BAndiT1983
hope it doesn't interfere with german laws, it's already difficult enough living in frankfurt to avoid airport frequencies
20:28
g3gg0
aeons ago i hacked nokia phone firmwares. back then i thought SMS weren't ciphered
20:28
BAndiT1983
h264 is off the table, license issues
20:28
g3gg0
the reason why i did all that GSM stack stuff
20:28
g3gg0
then i realized that SDCCHs are DCCHs and thus encrypted too
20:29
g3gg0
well... was for the fun anyway
20:29
danieel
BAndiT1983: just matter of documentation, compared to etsi/gsm
20:29
comradekingu
joined the channel
20:29
BAndiT1983
TofuLynx, yes, supragya has created some room in lab to discuss his task with us
20:29
BAndiT1983
it's good for gathering links and logs
20:29
BAndiT1983
not IRC ones
20:30
TofuLynx
didn't know we could create rooms in lab
20:30
TofuLynx
I think it would be great for us too
20:30
BAndiT1983
just click on the speech bubbles in the header
20:31
TofuLynx
how do I create one room?
20:32
g3gg0
here some old GSM video (2010) https://youtu.be/TbWLVLUguJw
20:32
BAndiT1983
click on the small plus in the left panel
20:32
TofuLynx
found it
20:32
g3gg0
these channels are private, right?
20:32
TofuLynx
Can I add you to the room?
20:32
TofuLynx
and also you, g3gg0?
20:33
g3gg0
sure
20:33
TofuLynx
Yes, they are private
20:33
BAndiT1983
g3gg0, looks like rocket science
20:33
TofuLynx
we can choose the visibility to others
20:33
BAndiT1983
mode of the rooms can be defined
20:34
TofuLynx
ok it's done!
20:34
BAndiT1983
i like the question on youtube about rtl2832 sticks
20:34
TofuLynx
Also, I want to tell something
20:34
TofuLynx
this weekend is my birthday (29th) and I will probably be occupied
20:35
BAndiT1983
TofuLynx, no problem, we are still in timeframe
20:35
TofuLynx
Ok! :)
20:35
davidak
joined the channel
20:35
BAndiT1983
some adjustments from yesterday are not hard to do, so don't worry and enjoy your day
20:36
TofuLynx
Ok!
20:36
TofuLynx
I have to do some sort of document that keeps track of what I did and have to do
20:36
BAndiT1983
have to take a look what is falling into your task area, so i don't interfere while doing stuff in OC
20:36
TofuLynx
Ok! :)
20:36
BAndiT1983
maybe gdocs and excel
20:36
BAndiT1983
don't remember how it is called there
20:36
TofuLynx
we could share a gdoc between us
20:36
TofuLynx
spreadsheets :)
20:37
TofuLynx
what do you think?
20:37
g3gg0
at work i cannot access gdocs and the like
20:37
BAndiT1983
g3gg0, you are doing the stuff, which i'm still hoping to do in future when i have some space for it, like CNC and 3d printing
20:37
BAndiT1983
what are the alternatives to keep track?
20:37
g3gg0
just sleep 4 hours less and you have enough time :)
20:38
g3gg0
just saying, can access during nighttime though
20:38
BAndiT1983
it's not about time, more like room and no neighbours ;)
20:38
TofuLynx
there's trello
20:38
TofuLynx
which is also cool
20:38
TofuLynx
and also github, I guess
20:38
TofuLynx
which as a similar system to trello
20:38
TofuLynx
has*
20:38
BAndiT1983
what the hell, why were you booting linux on EOS? :D
20:39
g3gg0
april 1st.
20:39
TofuLynx
ahahahha
20:39
TofuLynx
linux on EOS
20:39
BAndiT1983
https://www.youtube.com/watch?v=IcBEG-g5cJg
20:39
g3gg0
that is a ML tradition, things that sound impossible and publish them on april 1st
20:39
g3gg0
alex first booted DOS
20:40
BAndiT1983
pfff, impossible, you can boot linux on a carrot nowadays ;)
20:40
BAndiT1983
gameboy emu on EOS, that could be the next thing
20:41
g3gg0
not that easy. the digic has no mmu
20:41
TofuLynx
So, Andrej what do you think about trello or github?
20:41
g3gg0
and nommu linux suck... eeh is quite painful
20:42
BAndiT1983
don't know trello
20:42
BAndiT1983
g3gg0, what would you prefer to track the progress?
20:43
g3gg0
its my first time doing this kind of mentoring via internet, so dont have any preference or experience
20:43
TofuLynx
Hope I give you a good first experience xD
20:43
g3gg0
hehe :)
20:44
BAndiT1983
this year it's totally different, believe me
20:44
TofuLynx
I will probably do a plain simple google docs
20:44
BAndiT1983
maybe a table, so we can adjust it easier and set priorities
20:45
danieel
i would choose google docs as well
20:45
danieel
lately i tried to use it to dump my head there :) helps a lot
20:45
TofuLynx
we can add tables to the doc
20:45
TofuLynx
that's what I want, danieel! :)
20:45
g3gg0
im totally fine with that
20:46
BAndiT1983
still don't understand the logic of our IT guys at work, gmail works, gdrive works, google calendar is blocked by firewall
20:46
TofuLynx
wow
20:46
BAndiT1983
maybe they smoke too much ;)
20:47
TofuLynx
I heard that calendar is going to be embedded into gmail
20:47
BAndiT1983
yep, switched to new layout for testing
20:47
TofuLynx
Nice! :P
20:49
BAndiT1983
g3gg0, we still don't have a green light on the raw container, by the way
20:49
se6astian
changed nick to: se6astian|away
20:49
BAndiT1983
supragya needs some guidance, so it's mandatory to make first decisions
20:50
TofuLynx
sent gdoc link via the room
20:50
BAndiT1983
and while hardware is not ready for the task yet, it should be simulated on PC
20:50
BAndiT1983
TofuLynx, you can place links from yesterday there
20:51
TofuLynx
Roger!
20:51
RexOrCine
changed nick to: RexOrCine|away
20:51
BAndiT1983
like the github repo for pool allocator and so on
20:51
g3gg0
https://lab.apertus.org/T951
20:51
g3gg0
1. Current status analysis and requirement definition
20:52
g3gg0
Before any decision or implementation can happen, it is important to depict the current state of how video data is being recorded and written to disk.
20:52
BAndiT1983
according to Bertl: which disk?
20:52
BAndiT1983
;)
20:52
g3gg0
RAW12
20:52
g3gg0
the format you store on computers
20:52
g3gg0
i am aware that is no direct disk interface yet for the camera
20:53
g3gg0
- technical backgrounds of the current file format (i.e. "why is it as it is?")
20:53
g3gg0
- examining the technical backgrounds of the signal processing path within the camera (i.e. "how does it work?")
20:53
g3gg0
- technical possibilites and requirements of the signal processing path in terms of video container format (i.e. "what could we do with it and where are limits?")
20:53
g3gg0
- defining requirements/recommendations for a container format
20:53
BAndiT1983
so he should interview Bertl a lot
20:53
g3gg0
before any decision can be made for the future, it must be clear what the restrictions are
20:54
BAndiT1983
some things can be looked up in apertus repos, like utils and beta-software
20:54
g3gg0
yep
20:54
BAndiT1983
my question was not targetting final decision, just general path, which you have provided
20:55
danieel
you miss one important point: future compatibility considerations
20:55
g3gg0
in this work i expect to see some theory why we would make decisions how we did
20:55
g3gg0
>technical possibilites
20:56
g3gg0
(i.e. "what could we do with it and where are limits?")
20:56
g3gg0
expected this to happen there
20:56
g3gg0
or maybe i misunderstood you?
20:57
TofuLynx
BAndiT1983: in regard to the debayer class, I was thinking if it wouldnt be better to use a single debayer class with a number flag that is used to choose the desired debayering algorithm?
20:57
BAndiT1983
danieel, which compatibility is meant?
20:57
BAndiT1983
TofuLynx, this would blow up the class, it's better to use needed class through dependency injection
20:58
TofuLynx
hmm
20:58
TofuLynx
but what if
20:58
danieel
in family: change resolution / out of family: change of sensor
20:58
BAndiT1983
for linear/bilinear there could be a single class, switchable between these two, but if you add amaze, vng, green-directed etc., then the size and maintenance hell would just explode
20:58
TofuLynx
you are changing the debayering method via the interface, we have to delete the class and allocate the new one?
20:59
TofuLynx
I see your point
20:59
BAndiT1983
you can also instantiate the class once, but this would be problematic for multi-threaded processing
20:59
BAndiT1983
i would suggest strategy pattern and dependency injection for now
21:00
TofuLynx
okk!
21:00
BAndiT1983
as reference: https://dzone.com/articles/java-the-strategy-pattern
21:01
BAndiT1983
have i pointed you to sourcemaking.com already?
21:01
BAndiT1983
https://sourcemaking.com/design_patterns/strategy
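The strategy pattern plus dependency injection suggested here could look roughly like this; the class and method names are illustrative, not the actual OCcore API:

```python
# Strategy pattern: each debayer algorithm is its own class behind a common
# interface, and the pipeline gets one injected instead of branching on a
# numeric flag. Names are made up for the sketch.
from abc import ABC, abstractmethod

class Debayer(ABC):
    @abstractmethod
    def process(self, raw):
        ...

class NearestNeighborDebayer(Debayer):
    def process(self, raw):
        return f"nearest({raw})"

class BilinearDebayer(Debayer):
    def process(self, raw):
        return f"bilinear({raw})"

class Pipeline:
    def __init__(self, debayer: Debayer):  # dependency injection
        self.debayer = debayer

    def run(self, raw):
        return self.debayer.process(raw)

assert Pipeline(BilinearDebayer()).run("img") == "bilinear(img)"
```

Adding amaze, vng, etc. then means adding one new class each, instead of growing a switch inside a single debayer class.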
21:02
TofuLynx
no you havent!
21:04
TofuLynx
when processing each channel
21:05
TofuLynx
I'm thinking about creating auxiliary arrays that will then be returned as the final channel arrays. what do you think?
21:06
TofuLynx
and by returned I mean, placed into the existing channels
21:06
BAndiT1983
have to see the result first, have no clue how it will behave
21:06
TofuLynx
I mean
21:06
TofuLynx
how do you change the existing channel without any auxiliary array=
21:06
TofuLynx
?
21:07
BAndiT1983
we can create these in the pool
21:08
BAndiT1983
as the data size should be constant while working on one sequence
21:08
TofuLynx
My plan was this: create auxiliary arrays that remain unchanged and then store it on the pool
21:09
BAndiT1983
the pool should allocate the space for you, this is the idea of the pool, no need to allocate manually because of that
21:09
TofuLynx
hmm, but how do you keep track of where is the raw channel array located?
21:10
TofuLynx
with that pointer / integer?
21:10
BAndiT1983
pool allocator gives you the position
21:10
TofuLynx
ok :)
21:10
BAndiT1983
haven't looked into the lib from yesterday yet, but usually you get offset or some other location marker
21:11
TofuLynx
yeah probably
21:11
TofuLynx
what do you think of the gdoc right now?
21:12
BAndiT1983
looks ok at first glance
21:13
TofuLynx
Ok!
21:13
BAndiT1983
i hope that you mean that patternoffsets will be adjusted according to yesterdays chat ;)
21:13
BAndiT1983
also the algorithm simplified, similar to the downscaler loops
21:14
TofuLynx
with that R = 0, G1 = 1, G2 = width and B = width + 1?
21:14
BAndiT1983
yep
21:14
TofuLynx
yeah :P
21:14
BAndiT1983
this was for RGGB, others are similar
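Those per-pattern offsets can be pre-set from the pattern string and the row stride, since the four positions inside one 2×2 tile are fixed once the width is known. A toy sketch, not OCcore code:

```python
# Per-pattern offsets into a 2x2 bayer tile: position 0 is top-left,
# 1 top-right, width bottom-left, width+1 bottom-right. The two greens
# are numbered G1/G2 in reading order.
def pattern_offsets(pattern: str, width: int) -> dict:
    offsets = {}
    green = 0
    for i, color in enumerate(pattern):
        pos = (0, 1, width, width + 1)[i]
        if color == "G":
            green += 1
            offsets[f"G{green}"] = pos
        else:
            offsets[color] = pos
    return offsets

assert pattern_offsets("RGGB", 8) == {"R": 0, "G1": 1, "G2": 8, "B": 9}
```

So for RGGB this reproduces R = 0, G1 = 1, G2 = width, B = width + 1, and the other orders just permute which color gets which position.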
21:14
TofuLynx
yeah
21:15
TofuLynx
also I changed from colorOffsets to PatternOffsets
21:15
BAndiT1983
would do it myself, but really don't want to interfere with your gsoc task
21:15
TofuLynx
I think it's more intuitive
21:15
TofuLynx
Ok! No problem :P
21:15
BAndiT1983
you don't need a method there, just supply an enum value
21:16
BAndiT1983
as the offsets in the pattern are always the same, they can be pre-set
21:16
TofuLynx
so we have to create an enum ?
21:17
BAndiT1983
extraction of R, G and B can be separated from debayer classes
21:17
BAndiT1983
an enum is there
21:17
TofuLynx
hmm I see
21:17
TofuLynx
we could do other thing
21:17
BAndiT1983
have no IDE at the moment, but its called BayerPattern
21:17
TofuLynx
add a method to OCimage class that returns the pattern offsets
21:18
BAndiT1983
isn't it already there?
21:18
TofuLynx
let me check
21:18
BAndiT1983
https://github.com/apertus-open-source-cinema/opencine/blob/master/Source/OCcore/Image/OCImage.h
21:19
TofuLynx
enum class BayerPattern
21:19
TofuLynx
{
21:19
BAndiT1983
i would create a new class for RGB extraction, so it can do the processing
21:19
TofuLynx
RGGB,
21:19
TofuLynx
BGGR,
21:19
TofuLynx
GRBG,
21:19
TofuLynx
GBRG
21:19
TofuLynx
}
21:19
TofuLynx
this thing here
21:19
BAndiT1983
yep, and there are setter and getter at the bottom of the file
21:20
TofuLynx
hmm isnt the Downscaler class an extractor?
21:20
BAndiT1983
OCImage does not need to know about offsets, it should be in the extractor
21:21
BAndiT1983
downscaler is 2 in 1, as it's getting pixels, but if you want to do debayering, then you need separate steps
21:21
BAndiT1983
maybe i'm wrong here, but just a gut feeling
21:21
TofuLynx
why do you think OCimage doesnt need to know about offsets?
21:21
BAndiT1983
it should provide the pattern, but no calculation for offsets
21:22
TofuLynx
why not?
21:22
BAndiT1983
why should it? it's just a general container for image data
21:22
BAndiT1983
single responsibility per class, if possible
21:23
BAndiT1983
image loader -> rgb extractor -> debayer -> and so on
21:23
BAndiT1983
this would be the pipeline
21:23
XDjackieXD
left the channel
21:23
TofuLynx
and where in the pipeline do you think the pattern offsets calculator should be?
21:24
BAndiT1983
rgb extractor, as it gets the image metadata, like width and height, also pattern
21:25
BAndiT1983
a lot of stuff from OCImage will be removed, like memmove, which was added as a hack for data storage, but with pool allocator it won't be necessary
21:26
TofuLynx
and then you would pass the pattern from the extractor to the next debayer class?
21:26
BAndiT1983
pattern offsets are always repeating, as you have same order top-left, top-right, bottom-left and bottom-right pixels
21:26
BAndiT1983
pattern is stored in OCImage, so the next class which works with it can get it from there
21:27
BAndiT1983
pattern offsets: you just have to assign the right arrays to write to, but the offsets stay the same for a given image resolution
21:27
XDjackieXD
joined the channel
21:28
TofuLynx
yeah
21:29
BAndiT1983
about your question regarding why calculations shouldn't be placed in OCImage
21:29
TofuLynx
initially, I won't implement the pattern system, probably later
21:30
BAndiT1983
my work colleague called me yesterday because of an exception in our application. looked at his logs and pointed him straight to the error, which was known: the frontend system has no connection to the database, just the server, but someone committed methods with calculations, so the system tried to convert stuff for sending and crashed
21:30
BAndiT1983
pattern system is mostly there, needs just some fixes/simplifications
21:31
TofuLynx
wow
21:32
BAndiT1983
ocimage should store just the minimum required stuff
21:32
TofuLynx
hmm but the thing is, I am creating the debayer class from the ground up
21:32
BAndiT1983
and?
21:32
TofuLynx
so I won't create any pattern system initially, as it's just a task of replacing the numbers with the pattern offsets
21:33
BAndiT1983
you can assume RGGB for now
21:33
TofuLynx
yeah
21:33
TofuLynx
that's my plan
21:33
TofuLynx
what do you think?
21:34
BAndiT1983
show me some code, then i can tell you more
21:34
BAndiT1983
which algorithm is on the list first?
21:35
TofuLynx
Ok!
21:35
TofuLynx
The first algorithm will be Linear Interpolation
21:36
TofuLynx
aka nearest neighbour
21:36
TofuLynx
and then it will be bilinear interpolation
21:36
TofuLynx
We have to discuss how to implement the flag system
21:36
TofuLynx
to choose between the two, as you suggested
21:38
BAndiT1983
take the existing class as the base and adjust the extraction there, afterwards we can move it to another class
21:38
BAndiT1983
as it would be much simpler for processing, without current overhead on code there
21:38
TofuLynx
Basically, replacing the processing methods with better ones?
21:39
TofuLynx
hmm Ok!
21:39
BAndiT1983
first with simpler ones, as it's doing a lot of stuff in one step, but it's more complicated to maintain
21:40
BAndiT1983
that's why i say that the extraction should be done separately for now
21:40
TofuLynx
wait
21:40
TofuLynx
to clear things up
21:40
TofuLynx
what do you mean by extraction?
21:40
BAndiT1983
separation of R, G and B pixels
21:41
TofuLynx
so, what the Downscaler class currently does?
21:41
BAndiT1983
bayerframepreprocessor does it at the moment
21:41
BAndiT1983
downscaler just gets known pixels
21:42
BAndiT1983
will upload code changes these days, where 12to16 and 14to16bit were moved to another place
21:43
BAndiT1983
just have to find the problem with downscaler image
21:43
TofuLynx
what do you mean by "just gets known pixels"?
21:43
BAndiT1983
pixels which were captured by sensor
21:43
TofuLynx
what does the preprocessor do?
21:44
BAndiT1983
the black ones between them are not known, downscaler avoids them and gets known ones, which results in slight shift
21:44
BAndiT1983
preprocessor is the extractor in our case, just forgot about it
21:44
TofuLynx
ah! wait
21:44
BAndiT1983
loops there can be also simplified
21:44
TofuLynx
preprocessor basically pads the known pixels with unknown pixels?
21:44
BAndiT1983
???
21:45
TofuLynx
I'm not understanding what's the difference between the two
21:45
BAndiT1983
remember how bayer sensor data looks like -> https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/Bayer_pattern_on_sensor_profile.svg/350px-Bayer_pattern_on_sensor_profile.svg.png
21:45
BAndiT1983
sorry, double link -> https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/Bayer_pattern_on_sensor_profile.svg/350px-Bayer_pattern_on_sensor_profile.svg.png
21:45
TofuLynx
yes?
21:45
BAndiT1983
preprocessor splits the RGGB to RGB arrays
21:46
BAndiT1983
debayer interpolates
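That split (preprocessor separates, debayer interpolates) can be shown on a toy RGGB frame: the interleaved bayer data becomes three per-color planes, with the not-captured positions left at zero as "unknown" for the debayer step to fill in. A pure-python toy, not the OCcore implementation:

```python
# Split an interleaved RGGB bayer frame into R, G, B planes. Positions the
# sensor did not capture for a given color stay 0 ("unknown"); interpolating
# them is the debayer step, not done here.
def extract_rggb(raw, width, height):
    r = [0] * (width * height)
    g = [0] * (width * height)
    b = [0] * (width * height)
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            i = y * width + x
            r[i] = raw[i]                          # R  at top-left of the tile
            g[i + 1] = raw[i + 1]                  # G1 at top-right
            g[i + width] = raw[i + width]          # G2 at bottom-left
            b[i + width + 1] = raw[i + width + 1]  # B  at bottom-right
    return r, g, b

raw = list(range(1, 17))               # 4x4 test frame, values 1..16
r, g, b = extract_rggb(raw, 4, 4)
assert r[0] == 1 and r[1] == 0         # R kept at (0,0), unknown at (1,0)
```

The downscaler, by contrast, copies only the known pixels into a smaller image and skips the unknown positions entirely, which is where the slight shift comes from.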
21:46
TofuLynx
and what does downscaler do?
21:46
BAndiT1983
you wrote it, you should know ;)
21:47
TofuLynx
I think it does what you said preprocessor does xD
21:47
BAndiT1983
basically, the downscaler just gets filled pixels, and avoids the white ones (see image), in reality they have no data for certain color
21:47
TofuLynx
ah!
21:47
TofuLynx
so that's the difference... ok!
21:48
TofuLynx
isn't it easy to implement into the downscaler?
21:48
BAndiT1983
?
21:48
BAndiT1983
downscaler has other purpose than debayer
21:48
BAndiT1983
https://web.stanford.edu/class/cs231m/lectures/lecture-11-camera-isp.pdf
21:49
TofuLynx
i'm not comparing it with the downscaler
21:49
TofuLynx
but with the preprocessor
21:50
BAndiT1983
somehow i've lost the thread, can you explain more?
21:50
TofuLynx
ok
21:51
TofuLynx
preprocessor extracts the RGGB into three arrays, RGB, that contain known and unknown pixels, right?
21:51
BAndiT1983
yes
21:51
TofuLynx
downscaler extracts the RGGB into three arrays, that only contain known pixels
21:51
BAndiT1983
yep
21:52
TofuLynx
my question is: Why shouldn't the downscaler also include the unknown pixels?
21:53
BAndiT1983
i think you are asking the question a bit wrong
21:53
TofuLynx
can you clear my mind?
21:53
TofuLynx
I'm really confused xD
21:53
BAndiT1983
downscaler was designed for fast pixel extraction, without the need for further processing, theoretically, as we still need gamma correction and so on
21:53
TofuLynx
oh
21:53
TofuLynx
I see it now
21:54
TofuLynx
so basically the downscaler is for an entirely different pipeline?
21:54
BAndiT1983
and you question should be: can we merge downscaler and preprocessor, so the skip level can be set
21:54
TofuLynx
yeah I guess
21:55
TofuLynx
Ok, now I understand it!
21:55
BAndiT1983
maybe this should be the first task, to adjust the preprocessor, maybe also some benchmarks to check if current implementation struggling with threads
21:55
TofuLynx
ok!
21:56
BAndiT1983
also benchmarks with loops like downscaler uses
21:58
TofuLynx
Ok the first task: benchmark the preprocessor loops
21:58
TofuLynx
change the loops to a single loop
21:58
TofuLynx
and finally benchmark it again and compare
21:58
TofuLynx
?
21:59
BAndiT1983
yep
21:59
BAndiT1983
you can consider it as downscaler and preprocessor merge
21:59
TofuLynx
do you want to add the skip pixels too?
21:59
BAndiT1983
but the real merge will happen later, we should evaluate first
22:00
BAndiT1983
no, leave the skip option out for now, I have to reflect on that a bit, to be sure that we have a flexible solution
22:00
TofuLynx
ok!
22:00
RexOrCine|away
changed nick to: RexOrCine
22:01
BAndiT1983
maybe the default value would be 0 for skip, but if the value is higher, then OC should avoid debayering
22:01
TofuLynx
Do you want the preprocessor to call the debayer class?
22:01
BAndiT1983
have added an enum with half, quarter, eighth and sixteenth options for it
22:01
TofuLynx
or will it still be a job for the presenter?
22:01
BAndiT1983
but on my local machine for now
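The enum mentioned above only exists on BAndiT1983's local machine, so the following is a purely hypothetical sketch of what a skip-level setting with half, quarter, eighth and sixteenth options could look like; all names are guesses:

```cpp
// Hypothetical skip-level enum (names are assumptions, not OpenCine code).
// Each level halves both image dimensions one more time.
enum class ExtractionLevel
{
    Full,      // skip 0: all pixels, debayering still required
    Half,      // one value per 2x2 block, known pixels only
    Quarter,   // one value per 4x4 block
    Eighth,    // one value per 8x8 block
    Sixteenth  // one value per 16x16 block
};

// Per-axis divisor implied by a level, e.g. Quarter -> 4.
inline int DivisorFor(ExtractionLevel level)
{
    return 1 << static_cast<int>(level);
}
```

This matches the idea from the discussion that a skip value above the default should make OC bypass debayering entirely.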
22:02
BAndiT1983
let the presenter do it, then the pipeline is more visible, also image loader should be simplified later, but first things first
22:03
TofuLynx
ok :)
22:03
TofuLynx
anything you want to add to the gdoc about the first task?
22:03
TofuLynx
or correct
22:04
BAndiT1983
before/after benchmarks
22:04
BAndiT1983
such a benchmark should execute the loop many times, like 100 or 1000, to get a median value
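The benchmark approach described here (repeat the measured loop many times and take the median, which is less sensitive to outliers than a single run or the mean) could be sketched like this; the helper name and signature are illustrative, not the imageprovider timing code referenced later:

```cpp
#include <algorithm>
#include <chrono>
#include <functional>
#include <vector>

// Run `work` `iterations` times (e.g. 100 or 1000) and return the
// median wall-clock duration in milliseconds.
double MedianRuntimeMs(const std::function<void()>& work, int iterations)
{
    std::vector<double> samples;
    samples.reserve(iterations);
    for (int i = 0; i < iterations; ++i)
    {
        auto start = std::chrono::steady_clock::now();
        work();
        auto end = std::chrono::steady_clock::now();
        samples.push_back(
            std::chrono::duration<double, std::milli>(end - start).count());
    }
    // nth_element is enough to locate the median without a full sort
    std::nth_element(samples.begin(),
                     samples.begin() + samples.size() / 2, samples.end());
    return samples[samples.size() / 2];
}
```

For the before/after comparison discussed above, one would run this once against the current multi-loop preprocessor and once against the single-loop version, and compare the two medians.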
22:04
TofuLynx
Ok!
22:05
BAndiT1983
without skipping pixels !for now!
22:05
TofuLynx
That's what is written xD
22:05
BAndiT1983
you can take a look at imageprovider for timing methods
22:05
TofuLynx
oh wait
22:06
TofuLynx
hmm
22:06
TofuLynx
timing?
22:06
BAndiT1983
line 55 and 63
22:06
TofuLynx
ah, for benchmark?
22:06
BAndiT1983
for benchmark you need to profile it somehow
22:07
TofuLynx
yeah
22:09
danieel
left the channel
22:09
danieel
joined the channel
22:11
BAndiT1983
so, off for today, TofuLynx, as usual write here, in the lab or per email
22:11
BAndiT1983
see you
22:11
g3gg0
cu
22:11
BAndiT1983
changed nick to: BAndiT1983|away
22:12
TofuLynx
see you! :)
22:37
TofuLynx
Good night everyone!
22:37
TofuLynx
Nice to meet you g3gg0!
22:39
TofuLynx
left the channel
22:44
g3gg0
same, gn8 :)
23:02
g3gg0
left the channel
23:59
rton
left the channel