#apertus IRC Channel Logs

2018/04/25

Timezone: UTC


01:27
supragya
joined the channel
02:55
supragya
left the channel
03:06
Bertl_oO
off to bed now ... night everyone!
03:06
Bertl_oO
changed nick to: Bertl_zZ
03:43
supragya
joined the channel
03:59
RexOrCine
changed nick to: RexOrCine|away
04:33
supragya
left the channel
05:54
supragya
joined the channel
05:55
supragya
Hi, is there any meeting scheduled this week? GSoC related or otherwise... and is there anything else I should know of before the 29th?
05:57
supragya
Also, would it be too early to ask about setting up IRC bouncers for mentors and students for GSoC?
06:01
supragya
left the channel
06:33
ArunM
joined the channel
07:26
ymc98
left the channel
07:41
ArunM
left the channel
08:43
ArunM
joined the channel
09:05
sebix
joined the channel
09:05
sebix
left the channel
09:05
sebix
joined the channel
09:08
ArunM
left the channel
09:34
ArunM
joined the channel
09:43
ArunM
Same queries !!
09:54
Bertl_zZ
changed nick to: Bertl
09:54
Bertl
morning folks!
09:56
Bertl
supragya: ArunM: mentors should be around, so it is the perfect time to discuss the GSoC project details ... IRC bouncers are 'in the works' and should be available shortly
09:58
Bertl
ArunM: we can talk/brainstorm about T728 anytime you like
09:59
ArunM
Okay
09:59
ArunM
we can do it right now
09:59
Bertl
great!
10:00
Bertl
did you have a look at the AXIOM Beta stackup and the CMV12k datasheet yet?
10:02
ArunM
read the datasheet thoroughly
10:03
ArunM
had question regarding exposure and similar operations
10:03
Bertl
okay, go ahead
10:04
ArunM
we have to emulate the functional model, so does it include manipulating the data
10:05
ArunM
or just the timings
10:05
ArunM
and sending data
10:06
Bertl
no doubt, it would be a nice touch if the sensor data would reflect the settings (like exposure, gain, etc)
10:06
Bertl
but I would consider this a bonus we can add if there is still time
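A sketch of what that bonus could look like: scaling a generated sensel value by the configured settings before it leaves the simulated sensor. This is purely illustrative; the function name, the linear exposure/gain model and the 12-bit clipping are assumptions, not the T728 design.

```python
def apply_settings(sensel, exposure_scale=1.0, gain=1.0):
    """Illustrative only: scale a generated 12-bit sensel value by
    hypothetical linear exposure/gain factors, clipped to 12 bits."""
    value = int(sensel * exposure_scale * gain)
    return max(0, min(value, 4095))
```

Whether a linear model is adequate depends on the sensor behaviour being emulated.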
10:07
ArunM
okay, it means I have to store the raw bitstream and apply the necessary mathematical operations
10:07
ArunM
ok
10:07
Bertl
first let's see what the typical cases for the sensor simulation/emulation will be
10:08
Bertl
https://wiki.apertus.org/index.php/AXIOM_Beta/Camera_Structure
10:09
Bertl
if you look at the stackup, you see that there are three potential places where a sensor IP could go
10:09
Bertl
it could happen directly inside the Zynq (Main FPGA)
10:10
Bertl
basically eliminating the connection on the Main Board, the entire Interface Board and the Sensor Board
10:11
Bertl
the second obvious option is to replace the Sensor Board with a custom FPGA solution (e.g. Artix/Kintex) which runs the Sensor IP and simulates the sensor
10:12
ArunM
yes
10:12
Bertl
but there is a third option, which seems quite appealing to us ...
10:13
Bertl
the Interface Board is currently a dummy, which only passes half of the sensor lanes to the Zynq, but in the future, we want to replace that with an FPGA solution which works as gearwork to 'connect' the full sensor data bus
10:14
Bertl
given that the FPGA planned there should be able to handle the full bandwidth, it might be a nice and simple option to run the Sensor IP there (including the gearwork)
10:16
ArunM
Sensor IP should be designed such that it could be implemented in any case?
10:16
Bertl
that would be the idea, although we obviously target Zynq integration first, as the 'other' hardware is not available yet
10:17
Bertl
note that it is very likely that the Interface Board will be using an Artix or Kintex as well for several reasons, so the differences should be minimal
10:19
Bertl
what should be kept in mind is that the Sensor IP has to be modular enough to scale from very simple to complex and can be connected/optimized at all layers
10:19
ArunM
okay
10:20
ArunM
The only thing that will change in any implementation is the interface to feed fake data
10:20
se6astian|away
changed nick to: se6astian
10:20
Bertl
for example, the layer which does the LVDS timing for the Sensor might not be required for sensor simulation inside the Zynq
10:20
Bertl
but for sure it will be required in the custom Sensor Frontend case
10:21
Bertl
and it might be combined/optimized with the gearwork in the Interface Board case
10:21
Bertl
for the fake data, I also see different 'levels of quality'
10:22
Bertl
first, there is the sensor internal test pattern (which is quite trivial)
10:23
Bertl
then I would suggest to create a number of 'good' test patterns for our simulation, which are still 'computed' (need no storage) but show a little bit more contrast and color
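A minimal example of such a 'computed' pattern: no frame storage, just arithmetic on the pixel coordinates. The formula here is an arbitrary placeholder (a moving diagonal gradient); the real patterns are yet to be designed.

```python
def computed_pattern(x, y, frame=0):
    """Compute a 12-bit test value directly from pixel coordinates
    and frame number -- no storage needed, so it scales to any
    bandwidth the generator logic can sustain."""
    return ((x + y + frame) * 16) % 4096
```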
10:24
ArunM
wouldn't the pattern be fed like fake data?
10:24
Bertl
that is basically the final quality option here, but it might be 'too expensive' in certain cases
10:24
ArunM
e.g. requires a lot of memory
10:25
Bertl
yes, the data needs to come from somewhere
10:25
Bertl
that's the reason why a generator (computed image) is very interesting for high bandwidth simulation
10:26
se6astian
changed nick to: se6astian|away
10:26
Bertl
but given that the Sensor Simulation IP is modular enough, this shouldn't pose a problem
10:26
Bertl
the 'only' thing which needs to be changed is the data source
10:27
se6astian|away
changed nick to: se6astian
10:27
ArunM
for a video stream, where do we keep the frames? Is secondary memory available or is it fed every time?
10:28
Bertl
I think there are a number of options we have here (i.e. we had a few ideas for this)
10:29
Bertl
one idea is to add sufficient memory (with the necessary bandwidth) to the hardware
10:30
Bertl
another idea is to use a live HDMI/USB/etc feed to supply the 'raw' data on the fly
10:30
Bertl
and yet another idea is to use the Zynq DDR memory for low bandwidth simulation
10:31
Bertl
the memory in the first option could be dedicated hardware (e.g. on a sensor simulation board)
10:32
Bertl
but it could also be a second MicroZed connected to the Sensor or Interface Board
10:35
Bertl
(note that this is the big picture ... not all of that is part of T728 but it should be considered)
10:35
ArunM
okay
10:36
ArunM
can i go through every case and tell my view on it after some time?
10:37
Bertl
sure
10:37
se6astian
changed nick to: se6astian|away
10:37
Bertl
also, if you have any new ideas, do not hesitate ...
10:38
ArunM
okay
10:58
rton
joined the channel
11:12
se6astian|away
changed nick to: se6astian
11:20
ArunM
left the channel
11:21
ArunM
joined the channel
11:25
_florent_
left the channel
11:25
_florent_
joined the channel
12:12
Bertl
off for now ... bbl
12:12
Bertl
changed nick to: Bertl_oO
12:25
aleb
left the channel
12:26
aleb
joined the channel
12:36
ArunM
From what I understood here I wrote an abstract of possible scenarios, correct me if I am wrong anywhere :-)
12:36
ArunM
Considering the 30 fps video data at 4096x3072 you mentioned as the baseline for max data bandwidth, in the case of uncompressed data
12:37
ArunM
It comes to 18 MB per frame and 540 MB per second at 30 fps
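These figures check out, assuming 12 bits per sensel (the bit depth is an assumption based on the RAW12 format used elsewhere in the project):

```python
width, height, fps, bits_per_sensel = 4096, 3072, 30, 12

# one uncompressed frame, packed 12-bit samples
frame_bytes = width * height * bits_per_sensel // 8
per_second = frame_bytes * fps

print(frame_bytes / 2**20, "MiB per frame")   # 18.0
print(per_second / 2**20, "MiB per second")   # 540.0
```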
12:37
ArunM
Say we use HDMI or USB; then to meet timing constraints (delay caused by ext. exposure time,
12:37
ArunM
frame overhead time, etc.)
12:37
ArunM
we cannot just start streaming data after the delay directly from HDMI, and in the case of USB 3.0, if we keep
12:37
ArunM
requesting frames directly after the delay, the latency of USB is not low enough to meet the timing constraints.
12:37
ArunM
So with that in mind, the next option is to keep at least 1 uncompressed frame in DDR memory and then use either HDMI or USB to update
12:37
ArunM
it.
12:38
ArunM
or
12:38
ArunM
Using dedicated hardware simplifies things like requesting frames on the go, and DDR memory will not be required to hold frames. But it introduces a lot of work on the design side
12:38
ArunM
Also one question: if HDMI is used as input, is it connected at the back end to some sort of camera? If not, how do we signal the back
12:38
ArunM
end device to start sending frames?
12:40
se6astian
AFAIK several frames are buffered in memory
12:40
se6astian
at least 4 I think
12:47
ArunM
left the channel
12:49
ArunM
joined the channel
12:49
ArunM
i think you are talking about the IP that interfaces with the sensor board
12:53
ArunM1
joined the channel
12:53
supragya
joined the channel
13:02
se6astian
possibly, best wait for Bertl_oO to return, he knows for sure
13:23
aombk2
left the channel
13:23
TD-Linux
left the channel
13:25
LordVan
left the channel
13:26
aombk2
joined the channel
13:26
TD-Linux
joined the channel
13:26
ArunM
left the channel
13:42
supragya
left the channel
14:38
RexOrCine|away
changed nick to: RexOrCine
15:00
supragya
joined the channel
15:01
supragya
Hello RexOrCine, se6astian!
15:01
RexOrCine
Hey supragya. What's the cricket situation there?
15:01
supragya
se6astian: I have given some thought to your advice, and I think I would be able to take forward the frameserver
15:02
supragya
along with the GSoC project... although it may stretch beyond GSoC period and I am fine with that
15:02
supragya
RexOrCine: I only follow international matches. Don't really like IPL
15:03
supragya
However seems like Chennai is going for the wins here (CSK)
15:04
Bertl_oO
changed nick to: Bertl
15:06
Bertl
ArunM1: don't waste too much time or thought on the live image feed ... in any case we will need some buffering there (we even need that for a generator) because the data rates are too high for 'requesting' the data
15:07
Bertl
the most likely scenario for this case will be two AXIOM Beta connected front to front so that one camera can 'play back' a sequence from memory and the other one acts as receiver with the simulated sensor
15:08
Bertl
the 'working horse' setup will be with artificial live data created by a generator or even just static test images
15:19
znca
joined the channel
15:33
ArunM
joined the channel
15:38
ArunM
So what will be the interface between the two AXIOM Betas, or is it going to be part of the design?
15:41
supragya
left the channel
15:44
Bertl
Most likely we will do a direct connection with or without an FPGA
15:45
Bertl
think of it like removing the Sensor Board and connecting the cameras at the Interface Board
15:45
ArunM
okay
15:45
Bertl
we might use a dedicated FPGA board for this in the future, but for now this is the most realistic setup
15:46
Bertl
we might even get something like this working in the next few months
15:47
Bertl
what is important to consider for the SenSim IP is that we need a side channel for configuration
15:47
Bertl
i.e. some way to configure the generator image for example
15:48
ArunM
and finally for fake data?
15:49
ArunM
to stream/transfer fake data?
15:55
ArunM1
okay got it
15:55
ArunM1
read first message again
15:55
ArunM1
:-)
15:57
Bertl
okay :)
15:57
Bertl
so first step, sensor interface (LVDS, SPI, etc)
15:57
Bertl
then fake data via generator (trivial, simple, sophisticated)
15:58
Bertl
then live data from other AXIOM (kind of DDR based generator)
15:58
ArunM
left the channel
15:58
ArunM
joined the channel
16:00
znca
left the channel
16:01
aombk2
left the channel
16:01
TD-Linux
left the channel
16:01
derWalter
left the channel
16:01
illwieckz
left the channel
16:01
ArunM1
left the channel
16:01
anuejn2
left the channel
16:01
Kjetil
left the channel
16:01
alexML
left the channel
16:02
ArunM1
joined the channel
16:02
anuejn2
joined the channel
16:02
Kjetil
joined the channel
16:02
alexML
joined the channel
16:03
derWalter
joined the channel
16:03
aombk2
joined the channel
16:03
TD-Linux
joined the channel
16:03
BAndiT1983|away
changed nick to: BAndiT1983
16:04
illwieckz
joined the channel
16:04
derWalter
left the channel
16:04
davidak[m]
left the channel
16:04
anuejn
left the channel
16:04
vup[m]
left the channel
16:04
MilkManzJourDadd
left the channel
16:05
parasew[m]
left the channel
16:05
XD[m]
left the channel
16:05
elkos
left the channel
16:05
flesk_
left the channel
16:05
hof[m]
left the channel
16:08
ArunM1
Okay, I'll create and explain my architectural map for the sensor interface and, after getting a "go" from you, I'll start coding!
16:08
Bertl
sounds good ...
16:15
ZNC
joined the channel
16:16
supragya
joined the channel
16:16
supragya
hello BAndiT1983
16:16
BAndiT1983
hi supragya
16:17
BAndiT1983
btw. you can DM g3gg0 in lab
16:30
mithro
left the channel
16:31
mithro
joined the channel
16:35
supragya
hi alexML, are you available?
16:50
supragya
Bertl, I would like to know the specifics of how the video (and the associated metadata, and what all is in the metadata) are provided by the camera to the final port like HDMI
16:50
supragya
In specific terms, is the metadata (like date, time, exposure, etc.) video specific, frame specific, etc.?
16:51
Bertl
basically we currently use the following image pipeline:
16:51
Bertl
https://wiki.apertus.org/index.php/AXIOM_Beta/Manual#Image_Acquisition_Pipeline
16:51
supragya
What does the stream look like?
16:51
Bertl
metadata is not part of this at the moment, so it has to be recorded somewhere else
16:51
BAndiT1983
like raw12 file
16:51
danieel
Bertl: have you made some BW tests or just coded the HDL so that it shall fit? would be curious how much the Zynq can really provide
16:52
Bertl
provide as in memory bandwidth?
16:52
danieel
from the numbers, it seems that multiple HP ports shall be used
16:52
danieel
yes
16:52
danieel
the PS controller, to PL client
16:53
supragya
[so it has to be recorded somewhere else] -> what are the current mechanisms to get hold of this data, apart from the raw pixel data?
16:56
BAndiT1983
https://github.com/apertus-open-source-cinema/misc-tools-utilities/tree/master/raw2dng
16:56
Bertl
danieel: the total throughput is about 32Gigabit/s on the DDR2 memory via HP ports
16:57
danieel
it has DDR2? thought it was DDR3
16:57
BAndiT1983
supragya, cmv_snap3 captures and then raw2dng is used, as basic pipeline
16:57
Bertl
but given that the CPU has some memory access as well, it usually tops out at 28Gigabit
16:58
Bertl
I meant DDR3 (i.e. what we have on the MicroZed)
16:58
danieel
32G is over one 128bit HP port?
16:59
Bertl
a single port tops out around 10-14Gigabit with our current setup
16:59
ArunM
left the channel
16:59
danieel
thats good to know, thanks
16:59
Bertl
np
17:00
Bertl
supragya: there are no mechanisms in place at the moment to record it
17:00
nmdis1999
joined the channel
17:00
nmdis1999
Hi Bertl!
17:01
danieel
i am trying to figure out the best architecture for a GS sensor, where one has to subtract the reset frame ... so I thought that going through the hw controller would be the best idea (power consumption and performance wise, since the PS can handle DDR3-1066 vs PL DDR3-800 on Artix-based low-cost devices)
17:01
Bertl
supragya: depending on the actual data stream it might be feasible to encode the data in the stream (USB or even HDMI), or have a separate stream e.g. via ethernet
17:01
Bertl
hey nmdis1999!
17:02
nmdis1999
I had a doubt, are you my primary mentor or sebastian?
17:02
BAndiT1983
Bertl, is there some unique data per frame, like WB?
17:02
supragya
so in layman's terms, if the aperture is changed on the fly while recording video, it is not possible to gather the aperture values from the camera? It has to be decoded in some way, maybe by a change in pixel intensities in the RAW12?
17:02
supragya
+1 to BAndiT1983's question?
17:02
Bertl
nmdis1999: we really couldn't decide on that (not that it matters that much :) so you probably can decide yourself
17:03
rahul__
joined the channel
17:03
nmdis1999
I don't really mind, you both are cool :) Just wanted to know.
17:03
xfxf_
joined the channel
17:03
BAndiT1983
nmdis1999, take Bertl, as he is like a vampire lord, around all night; seems like he is using the Indian time zone for his life ;)
17:03
Bertl
unique per frame metadata is possible, e.g. we can change exposure per frame for example
17:04
nmdis1999
lol, that would be great for me xD
17:04
supragya
I wondered why he said good morning when it was morning here
17:04
nmdis1999
^Same
17:04
BAndiT1983
qed!
17:04
Bertl
supragya: I'm under cover :)
17:04
nmdis1999
Although, according to IST he wakes up around 4-5 am which is scary.
17:05
supragya
unique per frame is possible, but no way to retrieve it? Am I missing something here?
17:05
supragya
nup nmdis1999, he wakes at 11AM at our end... it is about 6-7 there maybe
17:05
Bertl
we also have our solder on area with IMU ready, so as soon as we get the FPGA packet protocol working, there might also be IMU data for each frame
17:05
BAndiT1983
nmdis1999, it's the way of old people to get up early; my neighbour, an elderly lady, was already up at 6am when I left for work :D
17:06
nmdis1999
Bertl, I did code the tool for the histogram (and indented it as you asked) https://github.com/nmdis1999/Histogram
17:06
danieel
somebody shall explain why IST is :30 min off ... could not decide on summer/winter time so chose the average? :)
17:06
Bertl
supragya: well, metadata has not been recorded yet (except for snapshots)
17:06
supragya
hmm, but it is TBD right?
17:06
nmdis1999
Should I proceed and work on that or start studying cmv_hist3
17:07
BAndiT1983
yup, without it it's pointless to make videos
17:07
BAndiT1983
my sentence was meant for supragya
17:07
Bertl
nmdis1999: you sure about that? :)
17:07
supragya
:)
17:07
supragya
got it BAndiT1983 :)
17:07
nmdis1999
about the code? I wanted you to check it once :)
17:08
Bertl
because at the first look I see some inconsistencies with the indentation there ...
17:08
supragya
danieel: :30 is kind of strange, but makes some mathematical sense (this is the best I can tell you)
17:09
supragya
As the story goes, when India was to be unified, the time was to be unified as well, so they found one central location
17:10
supragya
calculated the time there; it was in the middle of two zones, so the whole of India got +5:30 ... best explanation I can provide
17:10
rahul_
left the channel
17:10
xfxf
left the channel
17:10
rahul__
changed nick to: rahul_
17:10
xfxf_
changed nick to: xfxf
17:10
nmdis1999
I'll give it a look! Do you recommend that I use a formatter, Bertl?
17:11
Bertl
that is probably the best way until you get used to doing proper indentation while you are coding
17:11
nmdis1999
Sounds right.
17:11
derWalter
joined the channel
17:11
BAndiT1983
https://foxnomad.com/2017/11/07/indias-time-zone-30-minutes-off-rest-world/
17:12
Bertl
I also commented that you don't want to process the data in separate storage arrays ... mainly because the data is huge and you are moving it around over and over
17:12
BAndiT1983
also here i would suggest pool allocator (or block allocator as it's called sometimes)
17:13
Bertl
so a proper algorithm has to work on the data in one pass, basically looking at each sensel data only once, ideally accessing them in memory order
17:13
BAndiT1983
as the color channels are of the same size
17:13
Bertl
otherwise the performance will be really bad ...
17:13
BAndiT1983
Bertl, do we need every pixel right away, or is it possible to skip?
17:14
Bertl
sure, skipping and cropping helps if done properly
17:14
Bertl
for example, there is almost no point in skipping every second sensel
17:14
supragya
I guess it will help as it is purely memory pointer maths and then we can have maybe a rough histogram
17:14
Bertl
because it will be fetched from memory anyway, but it makes sense to skip every second row for example
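The access pattern Bertl describes, one pass, each sensel visited at most once, rows walked in memory order, with optional row skipping, might look like this sketch (it assumes the frame has already been unpacked into a flat list of 12-bit values; function name and signature are invented for illustration):

```python
def histogram_one_pass(sensels, width, height, row_skip=1):
    """Single-pass 12-bit histogram: walk rows in memory order,
    touching each selected sensel exactly once.  row_skip=2
    processes every second row, as suggested above."""
    bins = [0] * 4096
    for row in range(0, height, row_skip):
        base = row * width
        for value in sensels[base:base + width]:
            bins[value] += 1
    return bins
```

Skipping whole rows saves memory traffic; skipping every second sensel within a row would not, since the cache line is fetched anyway.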
17:15
BAndiT1983
does the Zynq have any special instructions? like SIMD, SSE or AVX?
17:15
nmdis1999
Okay, noted. I'll work on that.
17:15
Bertl
there is NEON which is SIMD
17:16
Bertl
but the main bottleneck will be memory bandwidth
17:16
supragya
SIMD should help in histogram
17:17
Bertl
nmdis1999: we will setup a system for you to make some performance tests during the next week (probably)
17:17
parasew[m]
joined the channel
17:17
nmdis1999
Thank you :) I do really intend to do much work during community bonding as I'll have exams in the early weeks when the coding period begins
17:18
nmdis1999
and I don't wish my timeline to get disturbed
17:18
Bertl
sounds like a plan.
17:19
nmdis1999
Off for now, will be back soon :)
17:19
Bertl
cya
17:19
nmdis1999
left the channel
17:20
supragya
nmdis1999: copies Bertl's lines :)
17:25
MilkManzJourDadd
joined the channel
17:25
davidak[m]
joined the channel
17:25
elkos
joined the channel
17:25
XD[m]
joined the channel
17:25
hof[m]
joined the channel
17:27
BAndiT1983
changed nick to: BAndiT1983|away
17:30
BAndiT1983|away
changed nick to: BAndiT1983
17:41
sebix
left the channel
17:45
g3gg0
joined the channel
17:46
supragya
good evening g3gg0
17:50
flesk_
joined the channel
17:50
anuejn
joined the channel
17:50
vup[m]
joined the channel
17:56
supragya
BAndiT1983, g3gg0: why are there no ordering restrictions on blocks in MLV?
17:56
se6astian
changed nick to: se6astian|away
18:05
supragya
left the channel
18:05
supragya
joined the channel
18:14
Guest27507
left the channel
18:14
rahul_
left the channel
18:14
Guest27507
joined the channel
18:14
rahul_
joined the channel
18:15
supragya
left the channel
18:33
se6astian|away
changed nick to: se6astian
19:27
g3gg0
hiho
19:27
Bertl
hiho :)
19:27
g3gg0
> why are there no ordering restrictions on blocks in MLV?
19:28
g3gg0
@supragya is gone, but the explanation is: the devices that process MLV files have enough computing power to do random access
19:28
g3gg0
a writing device, like a uP in a camera, has various restrictions and limitations
19:30
g3gg0
we found that a buffering mechanism which allows writing frames into that buffer in a way that gives the IO device maximum write rates requires that the blocks can appear out of order
19:30
g3gg0
(assuming the buffer memory is non-contiguous, as we had on canon cameras)
19:30
BAndiT1983
supragya reads logs very often, so don't worry
19:31
alexML
hola
19:31
g3gg0
but even on a standard ringbuffer you could gain write speed if you can write the "longest" block of frames
19:31
g3gg0
hi
19:31
BAndiT1983
so audio frames are written, when the processing is finished, so to say in between?
19:31
g3gg0
inbetween
19:31
BAndiT1983
hi g3gg0
19:32
BAndiT1983
i've suggested, that he should look into MLV first, as it is quite suitable for the task
19:33
g3gg0
if you have a single buffer with e.g. 100 frames of space and you fill the buffer in a linear way from frame 0 to 99, and your write device writes asynchronously ...
19:33
RexOrCine
(16:12:18) supragya: Seems like I missed my primary mentor yesterday. He came (g3gg0) and I wasn't available
19:33
RexOrCine
(16:12:37) supragya: Do you know how can I reach him?
19:33
RexOrCine
(16:12:57) RexOrCine: Have you tried DMs through here?
19:33
RexOrCine
(16:13:26) RexOrCine: As, if he's unable to catch-up on previous comms he'll need to set up a bip account.
19:33
RexOrCine
(16:14:05) supragya: seems g3gg0 isn't on IRC for most part... BAndiT is my second mentor... seen him only once or twice here... that's why i requested a bouncer
19:33
RexOrCine
(16:14:17) supragya: how's it going with bouncer, do you know/
19:33
RexOrCine
(16:14:21) supragya: know?
19:33
RexOrCine
(16:15:40) RexOrCine: I find my bip account temperamental, but it works for the most part. I'll bring this up and see what can be done.
19:33
RexOrCine
(16:16:54) supragya: Let's hope it gets okay by the weekend... I have my last exam tomorrow and then leaving for home at night tomorrow... Will be available on maybe the 29th
19:33
RexOrCine
(16:17:20) supragya: Packing etc was all that I did last couple days
19:33
RexOrCine
(16:17:46) supragya: That's why I asked about if some meetings were in place, so that I don't miss them
19:33
RexOrCine
(16:19:19) RexOrCine: You moving house are you? Going home for the holidays or something?
19:34
Bertl
no need to paste parts of the irc log here, just paste a link to the log
19:34
RexOrCine
I guided him RE sending DMs through the lab so presumably you should get something through there from him.
19:34
BAndiT1983
supragya has created a meeting room in the lab; there we can gather important stuff and discuss the GSoC related things
19:35
RexOrCine
Bertl - Those were DMs.
19:36
Bertl
well, in this case you probably should keep them private instead of dumping them into the public logs :)
19:36
BAndiT1983
maybe we should use dedicated gsoc task channels for that
19:36
g3gg0
(cont'd) assuming you want to write blocks in an optimized way, combining several frames, it will happen that you are left with only a smaller chunk containing a few frames. on canon cameras this reduced write speed so much that it wasn't stable enough. as a solution we decided to allow the write buffers to contain unordered frames, and the write task can pick the largest block, which promises the highest write speed
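A toy model of that write strategy: pick whichever queued chunk is largest, since one big write beats many small ones. The chunk representation here is invented for illustration; the real algorithm (by alexML) also copes with non-contiguous buffer memory.

```python
def pick_write_block(chunks):
    """Pick the queued chunk with the most frame bytes: issuing the
    largest possible write keeps the IO device at its best rate,
    which is why blocks may end up out of order in the file."""
    return max(chunks, key=lambda chunk: chunk["queued_bytes"])
```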
19:37
Bertl
BAndiT1983: I think one channel is fine ... this way everybody knows who's working on what ...
19:37
Bertl
in case there are two intense discussions going on, we can always split one off into a separate room
19:37
g3gg0
as said, it was complicated because our buffers weren't contiguous and the IO device suffered a lot when the write sizes weren't optimal
19:38
g3gg0
that said, the best solution was to allow random frame order
19:38
BAndiT1983
g3gg0, maybe i've missed something; besides the blocks being in a different order, are the frames stored sequentially? implemented an mlv reader long ago and don't remember my research much
19:38
g3gg0
alexML put a lot of effort in the write optimization algorithm
19:38
g3gg0
they are not stored in sequential order in MLV files
19:39
g3gg0
before processing MLV file frames, you have to create an index
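A simplified sketch of that indexing step: scan the block headers and sort by timestamp. It assumes every block starts with a 4-byte type, a uint32 size and a uint64 timestamp, matching the generic shape g3gg0 describes (identifier, size, timestamp); the real structures, including the differing MLVI file header, are defined in mlv.h.

```python
import struct

def build_index(data):
    """Walk an MLV byte stream and return (type, offset, timestamp)
    tuples sorted by timestamp.  Simplified block layout assumed:
    4-byte type, uint32 blockSize (covering the whole block),
    uint64 timestamp -- see mlv.h for the real definitions."""
    index, offset = [], 0
    while offset + 16 <= len(data):
        block_type = data[offset:offset + 4].decode("ascii")
        size, timestamp = struct.unpack_from("<IQ", data, offset + 4)
        if size < 16:
            break  # malformed block; a real indexer would report this
        index.append((block_type, offset, timestamp))
        offset += size
    return sorted(index, key=lambda entry: entry[2])
```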
19:39
BAndiT1983
if the order is random, would it make sense to order them in post processing and store as new MLV file?
19:39
g3gg0
possible, but not required
19:39
g3gg0
iirc the more restricting part of PP is the image processing, not the IO device
19:39
BAndiT1983
just thinking about performance optimizations a bit
19:40
BAndiT1983
streaming for example would be easier if ordered
19:40
g3gg0
yep
19:41
BAndiT1983
there was just a discussion about the raw container for axiom and how to process/play the file; seen in the ML forums that the guys use fuse, so supragy has created a very basic frame server structure, maybe it can be applied later
19:41
g3gg0
if you process a MLV file with mlv_dump it will order automatically when writing
19:41
BAndiT1983
*supragya (missing the last key constantly)
19:41
g3gg0
(hmmm... or did i change that just experimentally and never checked in?)
19:42
BAndiT1983
but this is ML related currently, right?
19:42
g3gg0
yep
19:42
g3gg0
but could also apply to apertus
19:42
BAndiT1983
of course, sounds good
19:42
g3gg0
when it comes to audio / video sync
19:42
g3gg0
you can write sequentially if you wish to
19:43
BAndiT1983
do you know why the ffmpeg version wasn't maintained since last year? tried to verify my results in OC with it, but MLV was greenish
19:43
g3gg0
every block has an identifier, a size and a timestamp
19:43
g3gg0
ffmpeg could read MLV, but debayering wasn't implemented iirc
19:43
BAndiT1983
also considered to offer the implementation from OC at some point, when the sequences are finally loading and not just frames
19:44
BAndiT1983
MLV is rather good; had a lot of fun implementing the reader, it was much more straightforward than something like AVI
19:44
g3gg0
cool, good to hear that :)
19:45
BAndiT1983
what about custom tags there?
19:45
g3gg0
no problem, its allowed by design
19:45
g3gg0
every reader has to ignore unknown types
19:45
BAndiT1983
supragya had some questions, and one was why we don't use AVI as it's the standard
19:45
g3gg0
there are some fundamental block types that must be supported
19:46
g3gg0
MLVI - thats the header containing the video file GUID, the content type information, frame rate and (optionally) the frame count
19:47
g3gg0
every video recording can be split into several files, like you know from RAR, R01, .... here it is called .MLV, .M00, .M01 etc
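The chunk names can be enumerated mechanically (a sketch; the function name is invented, and a real reader would additionally verify the per-chunk GUID):

```python
def chunk_names(base, file_count):
    """File names of a recording split into `file_count` chunks:
    base.MLV, then base.M00, base.M01, ... (RAR-style numbering)."""
    return [base + ".MLV"] + [f"{base}.M{i:02d}" for i in range(file_count - 1)]
```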
19:48
BAndiT1983
have missed that bit back then, but sounds very good; hope to find maybe some examples in the ML forum
19:48
g3gg0
all files have their header with a random "GUID" which is the same for all "chunks" (MLV, M01...)
19:49
g3gg0
https://www.magiclantern.fm/forum/index.php?topic=7122.0
19:49
g3gg0
there you see a lot of blocks
19:49
BAndiT1983
i know this page too well ;) looked a lot through it for the implementation
19:49
g3gg0
ah okay, then for supragya :)
19:50
BAndiT1983
but real sample files are not that common
19:50
BAndiT1983
i think i have spammed him also with the link at least 5 times :D
19:50
g3gg0
there was a collection, let me check
19:50
BAndiT1983
this is one of the threads -> https://www.magiclantern.fm/forum/index.php?topic=11899.50
19:51
g3gg0
exactly
19:51
BAndiT1983
last files are nice to see, because of 10, 12 and 14bit data
19:51
BAndiT1983
but some multi-part sample would be cool
19:51
g3gg0
got also a collection locally of all kinds of mlv versions
19:51
g3gg0
its just... 300 GiB
19:52
BAndiT1983
ouch
19:52
BAndiT1983
pity that my camera is not supported yet, otherwise would have shot clips myself for testing
19:53
g3gg0
which one do you have?
19:53
BAndiT1983
eos760d, seen a lot of people wanting to test firmware pieces which were posted, but seldom would someone offer to port ML
19:54
BAndiT1983
what is mlv_lite, by the way?
19:54
g3gg0
@supragya: let's get in touch via mail etc; as i have a daytime job without access to IRC, i can chat there at nighttime (9 PM CET and later). will share my business mail with you, there i am available all the time
19:55
BAndiT1983
g3gg0, have you seen the chat room in lab yet?
19:55
g3gg0
mlv_rec was the first approach with full-blown support for writing GPS coordinates, LENS infos, level meter infos etc which caused the write rate to be rather bad
19:55
g3gg0
alexML started from scratch with a more lean design, focusing on write rate
19:56
BAndiT1983
just tried to playback some samples in vlc, but they are shot in mlv_lite
19:56
g3gg0
right now mlv_lite is "the" raw recording module
19:57
g3gg0
mlv_rec is dead (was a good start but never could fix performance issues) and not meant for use
19:57
g3gg0
t.b.h, no dont know that one
19:58
g3gg0
do now :)
19:58
BAndiT1983
is mlv_rec2 still the actual one?
19:58
g3gg0
dont know that one :D
19:59
g3gg0
theres just raw_rec (first prototype, .RAW files) then mlv_rec (also called Magic Lantern (RAW) Video format v2.0) and now mlv_lite
19:59
BAndiT1983
ok, then ver mlv_rec 2.0 ;)
20:00
g3gg0
mlv_rec supports audio, mlv_lite not out of the box (just a prototype which has some issues)
20:00
BAndiT1983
ok, so we should stick with mlv_rec for now
20:00
g3gg0
nah dont focus on mlv_rec too much
20:00
g3gg0
that wouldnt help imho
20:01
BAndiT1983
ok, but how would you approach the task? my idea was to support the MLV format in axiom, as a starting point at least
20:01
g3gg0
well yes, check what the MLV stuff is all about and how writing is done. maybe mlv_dump, the commandline tool, is the best place to start.
20:01
BAndiT1983
afterwards we would see if it has all the bells and whistles we need or extend it with custom packets
20:02
ArunM1
left the channel
20:03
BAndiT1983
posted a link to the source code in lab chat, this one -> https://bitbucket.org/g3gg0/magic-lantern-mlv_rec
20:03
BAndiT1983
at least something to start with
20:03
g3gg0
yep
20:04
BAndiT1983
why is the code mixed? there is C and also python there
20:04
g3gg0
maybe we're just picking up too much speed right now :)
20:05
g3gg0
one good pointer is: https://bitbucket.org/hudson/magic-lantern/src/02e5918a6ed5f4e21f2e50d84744f5adddcc0771/modules/mlv_rec/mlv.h?at=crop_rec_4k_mlv_lite_snd&fileviewer=file-view-default
20:06
g3gg0
together with the already posted forum entry
20:06
g3gg0
that will explain the anatomy of a MLV file
20:07
g3gg0
the first steps to make a valid MLV is:
20:07
g3gg0
MLVI (header), RAWI (resolution etc), VIDF (video frame)
20:09
g3gg0
RAWI is unfortunately a bit complex as it contains a structure which is ml internal
20:09
g3gg0
then the issue with raw frame bayer encoding is to be solved
20:09
BAndiT1983
been there, done that :) -> https://github.com/apertus-open-source-cinema/opencine/blob/master/Source/OCcore/Image/MLVLoader.cpp
20:10
BAndiT1983
which encoding?
20:10
g3gg0
there are two options we have in camera: a) raw bayer as it is in camera memory or b) LJ92 encoded pixels
20:11
g3gg0
https://bitbucket.org/hudson/magic-lantern/src/02e5918a6ed5f4e21f2e50d84744f5adddcc0771/modules/mlv_rec/lj92.c?at=crop_rec_4k_mlv_lite_snd&fileviewer=file-view-default
20:11
g3gg0
lj92 is being done in-camera
20:11
BAndiT1983
where is lj92 coming from?
20:12
BAndiT1983
ah, jpeg92 it seems, google didn't find meaningful info on lj92 at first
20:12
g3gg0
it is lossless jpeg
20:12
BAndiT1983
https://thndl.com/how-dng-compresses-raw-data-with-lossless-jpeg92.html
20:12
g3gg0
exactly
20:12
BAndiT1983
placing links for supragya here and in lab
20:12
g3gg0
reduces raw size a lot
20:13
BAndiT1983
is it used because of speed?
20:13
g3gg0
if i remember correctly it compresses to ~55% of the original frame size
20:13
BAndiT1983
not bad
20:13
g3gg0
this allows reducing the write load a lot
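To put rough numbers on that write-load reduction: assuming a CMV12000-class full frame (4096×3072 at 12 bits) at 25 fps — illustrative parameters, not a measured figure — the ~55% ratio quoted above works out as follows.

```python
# CMV12000-class full frame; all numbers illustrative only
width, height, bits, fps = 4096, 3072, 12, 25

bytes_per_frame = width * height * bits // 8       # packed 12-bit raw
raw_mib_s = bytes_per_frame * fps / 2**20          # uncompressed write load
lj92_mib_s = raw_mib_s * 0.55                      # ~55% ratio quoted above

print(round(raw_mib_s, 1), round(lj92_mib_s, 1))   # 450.0 247.5
```

So the compression shaves roughly 200 MiB/s off the sustained write rate in this example, which matters a lot for card-speed-limited recording.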
20:13
BAndiT1983
is there no newer algorithm? just curious
20:13
BAndiT1983
what about processing power?
20:14
g3gg0
good question. canon does this ;)
20:14
alexML
the LJ92 algorithm is implemented on some sort of DSP (with unknown architecture); we have no control over it other than calling it
20:15
g3gg0
we are not doing lj92 in software on our own, its in DIGIC somewhere
20:15
BAndiT1983
ah, we have to take that into account
20:15
BAndiT1983
maybe Bertl can reply if we could do something like that in FPGA?
20:15
g3gg0
could be a valid option to also compress the frames using lj92
20:16
g3gg0
depends on the required logic blocks for this compression
20:16
g3gg0
could be useful for writing DNG images too
20:17
BAndiT1983
if i search for jpeg92, then apertus irc logs pop up in google :D
20:17
BAndiT1983
http://irc.apertus.org/index.php?day=27&month=10&year=2014
20:17
g3gg0
inception
20:19
BAndiT1983
as we don't have the compression on FPGA yet, we should treat it as second step, just to make things simpler for a moment
20:26
g3gg0
one thing i want to mention - and why i think we are a bit too fast: the raw video container job is not "make MLV work in apertus!"
20:26
g3gg0
even if i'd love to see that ;)
20:27
g3gg0
its about "find out the requirements and the restrictions for the container format. find viable solutions and make a prototype"
20:27
BAndiT1983
i know, just need some starting point
20:28
BAndiT1983
supporting MLV natively would help spread it more, as support out there is not that big
20:28
BAndiT1983
but if we can output the data in some stream-like format with additional data, then it would also be fine to consolidate and convert it on PC afterwards
20:29
g3gg0
yep absolutely, but i dont want to bias the result of his analysis
20:29
danieel
so what are the real options for containers? mkv/qt ? why are you reinventing the wheel?
20:29
aombk
joined the channel
20:29
g3gg0
exactly these are the questions to ask
20:29
BAndiT1983
it's about internal processing in camera first
20:29
danieel
i assumed mlv came from limitations of a canon fw hackup
20:29
BAndiT1983
it's not a hack ;)
20:29
danieel
which are not present here
20:29
g3gg0
i do not 100% agree to that statement
20:30
g3gg0
we had some specifics, yes
20:30
BAndiT1983
isn't qt/mov license restrictive?
20:30
g3gg0
those would have been a bit more effort to solve with "common" container formats
20:31
g3gg0
but i as you: what benefit would we have with mov? no tool could work with our files
20:31
g3gg0
*ask
20:31
danieel
BAndiT1983: there is an ISO standard for mp4, which is the same container as mov
20:31
danieel
i believe there cant be ISO for proprietary stuff
20:32
aombk2
left the channel
20:32
BAndiT1983
codecs are proprietary mostly
20:32
g3gg0
we spit out some RGGB raw video stuff with custom tags for raw properties, custom LENS tags, custom exposure info tags etc
20:32
danieel
with mov - lot of tools can work with that, the frames/seeking, multiplexing is clearly given by the qt/mov format
20:32
danieel
the question is in codec. your tool will support yours. if it catches up one day you can dragndrop beta movs to davinci resolve :)
20:33
g3gg0
and metadata?
20:33
BAndiT1983
main question is, if the camera can record mov on the fly
20:33
danieel
i worked so far with static metadata only (camera sn/shot info)
20:33
g3gg0
stuff like DARK frames
20:33
danieel
i think arri has a proprietary stream track for dynamic metadata
20:34
danieel
dark frame as 1 dark shot before the sequence?
20:34
g3gg0
and now we are getting into some real issues when our "big pro" for mp4 was that there is a lot of stuff available, but we suddenly have to hack together a lot of special extensions
20:35
g3gg0
where we just wanted a simple, lean container format that can be read without compiling several C++ libraries together
20:35
BAndiT1983
have no real problem with outputting stream with frame markers and other metadata separately, then merge on pc in post
20:35
danieel
well, there has to be a clear separation of a codec/metadata and the container
20:36
g3gg0
yet you would not gain any interoperability
20:36
BAndiT1983
what are arri, red and blackmagic camera outputting?
20:36
BAndiT1983
*cameras
20:36
g3gg0
some of them store CinemaDNG?
20:37
danieel
you make 1) muxing with audio straightforward 2) splicing to parts simpler
20:37
g3gg0
not sure
20:37
danieel
arri does mov/prores
20:37
danieel
bm does mov/prores and cdng
20:37
BAndiT1983
in the camera?
20:38
danieel
dng is easy to customize; with today's state of things nobody acts as a central authority.. just find a free tiff tag ID and use it for your own purpose :)
20:38
danieel
Blackmagic broke supported codecs with their 3:1/4:1 codec, so yes.. things went really wild :)
20:38
BAndiT1983
cinemadng should already have most fitting tags
20:39
BAndiT1983
what about arriraw, it's also not a common format
20:39
danieel
drawback is split-to-files, which is bad if your os has some overhead with open()/close()
20:40
g3gg0
on a PC you probably have no problem there...
20:41
g3gg0
on embedded devices, i'd definitely circumvent fs metadata updating
20:41
danieel
well, some users don't like having lots of files.. and i would not care, but midnight commander tends to copy the files shuffled, which is meh... why?
20:41
danieel
(probably readdir() returns shuffled data because of FS having trees?)
20:41
g3gg0
mc is odd with too many files
20:42
g3gg0
many files for one video take make things complex
20:43
BAndiT1983
are you people sticking with midnight commander out of nostalgia? many colleagues do too, but my days with norton commander are long gone and i'm not looking back
20:43
g3gg0
i do
20:43
g3gg0
i miss it :]
20:43
danieel
well.. when you use remote ssh a lot, then mc is fine, you can't easily access X
20:44
g3gg0
and NDD, norton disk doctor
20:44
BAndiT1983
g3gg0, dosbox is your friend ;)
20:44
g3gg0
right now i prefer the inverse - WSL
20:45
danieel
arriraw is file per frame, header + blob... sort of dump the struct{} to file :)
20:46
Bertl
g3gg0: World Surf League?
20:46
g3gg0
lol, no. Windows Subsystem for Linux
20:47
g3gg0
you can see it as /wine or ~wine
20:47
BAndiT1983
g3gg0, bad topic for Bertl
20:47
danieel
wine >glass :P
20:48
BAndiT1983
i know that alcohol helps a lot when developing software, but the discussion is wandering a bit off ;)
20:48
g3gg0
MS publishes a linux compatibility layer with latest windows 10's
20:48
BAndiT1983
windows is a very bad topic for Bertl :D
20:48
BAndiT1983
it has UI!
20:48
g3gg0
allows you to run unmodified ELF on windows
20:48
danieel
so.. unless you can do a compatible output, use at least a compatible/standard container :)
20:49
Bertl
BAndiT1983: it 'has a really bad UI' is probably what you meant :)
20:49
danieel
per file: cdng, single file: mp4/mov/qt vs mkv (that's easy license-wise) and a custom codec. doing both a custom codec and a custom container does not make sense..
20:49
BAndiT1983
tried vi at work, as there was just QNX and telnet, was no fun when backspace is not working, so i'm sticking to a UI
20:49
g3gg0
on modern computers you need a UI. how else would you arrange several shell windows next to each other? screen does not support multihead
20:51
BAndiT1983
mkv would be an alternative, but performance tests have to be done first
20:51
g3gg0
@danieel: if my main effort is keeping things small and controllable, i do prefer bottom-up over just reusing foreign code i do not know
20:51
BAndiT1983
zynq is not that capable and we have a lot of other stuff ongoing while recording
20:52
BAndiT1983
my vote is still for MLV here, but feel free to suggest some option which is suitable for embedded system
20:52
danieel
what is the benefit of mlv over qt/mov/mp4?
20:52
BAndiT1983
Bertl, can we stream the data and add markers after every frame?
20:53
g3gg0
guys, we are losing focus. supragya's task is to do exactly that.
20:53
Bertl
stream how, mark what?
20:54
BAndiT1983
qt and mov are proprietary and need license, so we are left with mkv and mp4
20:54
danieel
qt=mov=mp4, structure wise
20:54
BAndiT1983
when the data is pouring in from the sensor, as image sequence, can we mark separate frames?
20:55
g3gg0
i've lost too much time in debugging other people's "libSomething" code and modifying it to fit my corner case needs. thats why ML doesnt write mp4
20:55
g3gg0
the file structure is an easy task
20:56
Bertl
BAndiT1983: it is not 'pouring' in, it is currently stored 'per frame' in memory
20:56
BAndiT1983
ah ok, even better
20:56
Bertl
but that doesn't help much, as you do not have the processing power to handle the data
20:56
Bertl
(at least not with moving pictures)
20:57
BAndiT1983
my suggestion was to write it straight to some file, like stream, but with frame marker in between, so we can distinguish them
20:57
BAndiT1983
and some other file would get the metadata
20:58
BAndiT1983
merge would be done on PC and conversion to some standard format, just an idea because of zynq shortcomings
20:58
BAndiT1983
not really shortcomings, but less performance
20:58
danieel
i did not use libsmth, have my own mp4 lib.. don't like to depend on other people's broken libs (had a lot of fun with libtiff making dngs... i'd rather be in full control)
20:59
Bertl
BAndiT1983: sounds nice, but 'how' do you imagine to get the data from memory to the 'pc'?
20:59
danieel
BAndiT1983: merging externally is bad. Kinefinity broke some teeth on that.. mux in camera if possible.
20:59
danieel
(if you have storage, of course)
20:59
danieel
Bertl: maybe over the usb3 which is in the works?
20:59
BAndiT1983
there we have 2 showstoppers at the moment, which need to be solved
21:00
BAndiT1983
what about the sdcard array?
21:00
Bertl
danieel: yes, but that does not depend on the 'in memory' store
21:00
Bertl
BAndiT1983: also doesn't depend on that, i.e. we can 'design' the format of that as well
21:00
danieel
BAndiT1983: can sdcard controller do UHS-I ? (thats licensed / closed spec as well)
21:01
Bertl
danieel: UHS-II would be required to make sense
21:01
BAndiT1983
just asking, as there were some discussions a couple of months ago
21:01
danieel
he said RAID :)
21:02
danieel
uhsII is easiest with usb3.. but thats rather ZU tech, not Z
21:02
Bertl
yes, it is a long term project which got popularized too early
21:09
TofuLynx
joined the channel
21:09
TofuLynx
Hello everyone! :)
21:10
BAndiT1983
hi TofuLynx
21:10
TofuLynx
how are you?
21:11
BAndiT1983
fine, and you?
21:12
BAndiT1983
have you reflected a bit on the stuff from yesterday?
21:12
TofuLynx
I'm fine too
21:12
TofuLynx
not yet
21:12
g3gg0
hi TofuLynx
21:13
TofuLynx
hey g3gg0
21:15
TofuLynx
You are one of my mentors
21:15
TofuLynx
But I don't really know you!
21:15
g3gg0
i dont know you either :)
21:16
g3gg0
hehe that will change with time :)
21:16
TofuLynx
:P
21:16
TofuLynx
So, what's your name?
21:17
g3gg0
well. i am even called g3gg0 by my family. so i guess thats my name
21:18
comradekingu
left the channel
21:18
TofuLynx
O.o wow
21:19
g3gg0
my realname is Georg
21:19
TofuLynx
ah! xD
21:19
g3gg0
whois g3gg0.de
21:20
TofuLynx
hmm
21:20
g3gg0
;)
21:20
TofuLynx
so, what are your roles on apertus?
21:20
BAndiT1983
ask better about his role at ML ;)
21:21
TofuLynx
ah xD
21:21
TofuLynx
wait
21:21
TofuLynx
oooh
21:22
BAndiT1983
haven't you read todays logs yet?
21:22
TofuLynx
oh
21:22
BAndiT1983
supragya is always up to date, still wondering how he can be that patient to look through this stuff
21:22
TofuLynx
Will check it
21:22
g3gg0
someone told me that i do a lot of things. many things. nothing really good, but many.
21:23
TofuLynx
holy
21:23
TofuLynx
50Kb today
21:23
g3gg0
¯\_(ツ)_/¯
21:23
BAndiT1983
g3gg0, i know that from somewhere ;)
21:24
BAndiT1983
ah, i see that you are also interested in SDR, just got my rtl-sdr stick recently
21:24
g3gg0
some time ago yeah
21:24
g3gg0
made a GSM decoder in C#
21:25
g3gg0
think that was in 2009 and later
21:25
BAndiT1983
looks very interesting
21:26
g3gg0
released the sources in 2013
21:26
g3gg0
also my kraken version (GSM cipher cracker)
21:26
danieel
but you got the specs/standard text ?
21:27
g3gg0
were free
21:27
TofuLynx
hmm
21:27
danieel
great..
21:27
g3gg0
horrible ETSI specs written in word
21:27
TofuLynx
Raj created a private room?
21:27
danieel
well, looking to 750 pages of h264.. cant say its much easier today
21:27
BAndiT1983
hope it doesn't conflict with german laws, it's already difficult enough living in frankfurt to avoid airport frequencies
21:28
g3gg0
aeons ago i hacked nokia phone firmwares. back then i thought SMS weren't ciphered
21:28
BAndiT1983
h264 is off the table, license issues
21:28
g3gg0
the reason why i did all that GSM stack stuff
21:28
g3gg0
then i realized that SDCCHs are DCCHs and thus encrypted too
21:29
g3gg0
well... was for the fun anyway
21:29
danieel
BAndiT1983: just matter of documentation, compared to etsi/gsm
21:29
comradekingu
joined the channel
21:29
BAndiT1983
TofuLynx, yes, supragya has created some room in lab to discuss his task with us
21:29
BAndiT1983
it's good for gathering links and logs
21:29
BAndiT1983
not IRC ones
21:30
TofuLynx
didn't know we could create rooms in lab
21:30
TofuLynx
I think it would be great for us too
21:30
BAndiT1983
just click on the speech bubbles in the header
21:31
TofuLynx
how do I create one room?
21:32
g3gg0
here some old GSM video (2010) https://youtu.be/TbWLVLUguJw
21:32
BAndiT1983
click on the small plus in the left panel
21:32
TofuLynx
found it
21:32
g3gg0
these channels are private, right?
21:32
TofuLynx
Can I add you to the room?
21:32
TofuLynx
and also you, g3gg0?
21:33
g3gg0
sure
21:33
TofuLynx
Yes, they are private
21:33
BAndiT1983
g3gg0, looks like rocket science
21:33
TofuLynx
we can choose the visibility to others
21:33
BAndiT1983
mode of the rooms can be defined
21:34
TofuLynx
ok it's done!
21:34
BAndiT1983
i like the question on youtube about rtl2832 sticks
21:34
TofuLynx
Also, I want to tell something
21:34
TofuLynx
this weekend is my birthday (29th) and I will probably be occupied
21:35
BAndiT1983
TofuLynx, no problem, we are still in timeframe
21:35
TofuLynx
Ok! :)
21:35
davidak
joined the channel
21:35
BAndiT1983
some adjustments from yesterday are not hard to do, so don't worry and enjoy your day
21:36
TofuLynx
Ok!
21:36
TofuLynx
I have to do some sort of document that keeps track of what I did and have to do
21:36
BAndiT1983
have to take a look what is falling into your task area, so i don't interfere while doing stuff in OC
21:36
TofuLynx
Ok! :)
21:36
BAndiT1983
maybe gdocs and excel
21:36
BAndiT1983
don't remember how it is called there
21:36
TofuLynx
we could share a gdoc between us
21:36
TofuLynx
spreadsheets :)
21:37
TofuLynx
what do you think?
21:37
g3gg0
at work i cannot access things like that
21:37
BAndiT1983
g3gg0, you are doing the stuff, which i'm still hoping to do in future when i have some space for it, like CNC and 3d printing
21:37
BAndiT1983
what are the alternatives to keep track?
21:37
g3gg0
just sleep 4 hours less and you have enough time :)
21:38
g3gg0
just saying, can access during nighttime though
21:38
BAndiT1983
it's not about time, more like room and no neighbours ;)
21:38
TofuLynx
there's trello
21:38
TofuLynx
which is also cool
21:38
TofuLynx
and also github, I guess
21:38
TofuLynx
which as a similar system to trello
21:38
TofuLynx
has*
21:38
BAndiT1983
what the hell, why were you booting linux on EOS? :D
21:39
g3gg0
april 1st.
21:39
TofuLynx
ahahahha
21:39
TofuLynx
linux on EOS
21:39
BAndiT1983
https://www.youtube.com/watch?v=IcBEG-g5cJg
21:39
g3gg0
that is a ML tradition: do things that sound impossible and publish them on april 1st
21:39
g3gg0
alex first booted DOS
21:40
BAndiT1983
pfff, impossible, you can boot linux on a carrot nowadays ;)
21:40
BAndiT1983
gameboy emu on EOS, that could be the next thing
21:41
g3gg0
not that easy. the digic has no mmu
21:41
TofuLynx
So, Andrej what do you think about trello or github?
21:41
g3gg0
and nommu linux suck... eeh is quite painful
21:42
BAndiT1983
don't know trello
21:42
BAndiT1983
g3gg0, what would you prefer to track the progress?
21:43
g3gg0
its my first time doing this kind of mentoring via internet, so dont have any preference or experience
21:43
TofuLynx
Hope I give you a good first experience xD
21:43
g3gg0
hehe :)
21:44
BAndiT1983
this year it's totally different, believe me
21:44
TofuLynx
I will probably do a plain simple google docs
21:44
BAndiT1983
maybe a table, so we can adjust it easier and set priorities
21:45
danieel
i would choose google docs as well
21:45
danieel
lately i tried to use it to dump my head there :) helps a lot
21:45
TofuLynx
we can add tables to the doc
21:45
TofuLynx
that's what I want, danieel! :)
21:45
g3gg0
im totally fine with that
21:46
BAndiT1983
still don't understand the logic of our IT guys at work, gmail works, gdrive works, google calendar is blocked by the firewall
21:46
TofuLynx
wow
21:46
BAndiT1983
maybe they smoke too much ;)
21:47
TofuLynx
I heard that calendar is going to be embedded into gmail
21:47
BAndiT1983
yep, switched to new layout for testing
21:47
TofuLynx
Nice! :P
21:49
BAndiT1983
g3gg0, we still don't have a green light on the raw container, by the way
21:49
se6astian
changed nick to: se6astian|away
21:49
BAndiT1983
supragya needs some guidance, so it's mandatory to make first decisions
21:50
TofuLynx
sent gdoc link via the room
21:50
BAndiT1983
and while hardware is not ready for the task yet, it should be simulated on PC
21:50
BAndiT1983
TofuLynx, you can place links from yesterday there
21:51
TofuLynx
Roger!
21:51
RexOrCine
changed nick to: RexOrCine|away
21:51
BAndiT1983
like the github repo for pool allocator and so on
21:51
g3gg0
https://lab.apertus.org/T951
21:51
g3gg0
1. Current status analysis and requirement definition
21:52
g3gg0
Before any decision or implementation can happen, it is important to depict the current state of how video data is being recorded and written to disk.
21:52
BAndiT1983
according to Bertl: which disk?
21:52
BAndiT1983
;)
21:52
g3gg0
RAW12
21:52
g3gg0
the format you store on computers
21:52
g3gg0
i am aware that is no direct disk interface yet for the camera
21:53
g3gg0
- technical backgrounds of the current file format (i.e. "why is it as it is?")
21:53
g3gg0
- examining the technical backgrounds of the signal processing path within the camera (i.e. "how does it work?")
21:53
g3gg0
- technical possibilities and requirements of the signal processing path in terms of video container format (i.e. "what could we do with it and where are the limits?")
21:53
g3gg0
- defining requirements/recommendations for a container format
21:53
BAndiT1983
so he should interview Bertl a lot
21:53
g3gg0
before any decision can be made for the future, it must be clear what the restrictions are
21:54
BAndiT1983
some things can be looked up in apertus repos, like utils and beta-software
21:54
g3gg0
yep
21:54
BAndiT1983
my question was not targeting a final decision, just the general path, which you have provided
21:55
danieel
you miss one important point: future compatibility considerations
21:55
g3gg0
in this work i expect to see some theory on why we made the decisions the way we did
21:55
g3gg0
>technical possibilites
21:56
g3gg0
(i.e. "what could we do with it and where are limits?")
21:56
g3gg0
expected this to happen there
21:56
g3gg0
or maybe i misunderstood you?
21:57
TofuLynx
BAndiT1983: in regard to the debayer class, I was thinking: wouldn't it be better to use a single debayer class with a numeric flag to choose the desired debayering algorithm?
21:57
BAndiT1983
danieel, which compatibility is meant?
21:57
BAndiT1983
TofuLynx, this would blow up the class, it's better to supply the needed class through dependency injection
21:58
TofuLynx
hmm
21:58
TofuLynx
but what if
21:58
danieel
in family: change resolution / out of family: change of sensor
21:58
BAndiT1983
for linear/bilinear there could be a single class, switchable between these two, but if you add amaze, vng, green-directed etc., then the size and maintenance hell would just explode
21:58
TofuLynx
if you change the debayering method via the interface, do we have to delete the class and allocate the new one?
21:59
TofuLynx
I see your point
21:59
BAndiT1983
you can also instantiate the class once, but this would be problematic for multi-threaded processing
21:59
BAndiT1983
i would suggest strategy pattern and dependency injection for now
22:00
TofuLynx
okk!
22:00
BAndiT1983
as reference: https://dzone.com/articles/java-the-strategy-pattern
22:01
BAndiT1983
have i pointed you to sourcemaking.com already?
22:01
BAndiT1983
https://sourcemaking.com/design_patterns/strategy
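The strategy-pattern-plus-injection idea from the links above can be sketched briefly. All names here (IDebayerStrategy, ImageProcessor, etc.) are hypothetical and not OpenCine's actual interfaces; the point is only how injecting the algorithm avoids a growing flag switch inside one class.

```python
from abc import ABC, abstractmethod

# Hypothetical names; OpenCine's actual classes and signatures may differ.
class IDebayerStrategy(ABC):
    @abstractmethod
    def process(self, frame: str) -> str: ...

class NearestNeighbourDebayer(IDebayerStrategy):
    def process(self, frame):
        return f"nearest({frame})"

class BilinearDebayer(IDebayerStrategy):
    def process(self, frame):
        return f"bilinear({frame})"

class ImageProcessor:
    # The concrete strategy is injected, so adding AMaZE or VNG later
    # means one new class instead of another branch in a flag switch.
    def __init__(self, debayer: IDebayerStrategy):
        self._debayer = debayer

    def run(self, frame):
        return self._debayer.process(frame)

print(ImageProcessor(BilinearDebayer()).run("frame0"))  # bilinear(frame0)
```

Switching algorithms is then just constructing the processor with a different strategy, which also keeps each debayer class small enough to test on its own.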
22:02
TofuLynx
no you havent!
22:04
TofuLynx
when processing each channel
22:05
TofuLynx
I'm thinking about creating auxiliary arrays that will then be returned as the final channel arrays. what do you think?
22:06
TofuLynx
and by returned I mean, placed into the existing channels
22:06
BAndiT1983
have to see the result first, have no clue how it will behave
22:06
TofuLynx
I mean
22:06
TofuLynx
how do you change the existing channel without any auxiliary array
22:06
TofuLynx
?
22:07
BAndiT1983
we can create these ones in the pool
22:08
BAndiT1983
as the data size should be constant while working on one sequence
22:08
TofuLynx
My plan was this: create auxiliary arrays that remain unchanged and then store it on the pool
22:09
BAndiT1983
the pool should allocate the space for you, this is the idea of the pool, no need to allocate manually because of that
22:09
TofuLynx
hmm, but how do you keep track of where the raw channel array is located?
22:10
TofuLynx
with that pointer / integer?
22:10
BAndiT1983
pool allocator gives you the position
22:10
TofuLynx
ok :)
22:10
BAndiT1983
haven't looked into the lib from yesterday yet, but usually you get offset or some other location marker
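The "pool hands you a location marker" idea can be shown with a toy fixed-size block pool. This is a sketch of the general technique only, not the API of the library mentioned above; FramePool and its methods are made-up names.

```python
class FramePool:
    """Toy fixed-size block pool: allocate() hands out an offset into one
    big buffer (the 'location marker'), free() recycles it. Illustrative
    only; the real allocator's API may differ."""

    def __init__(self, block_size, block_count):
        self.block_size = block_size
        self.buffer = bytearray(block_size * block_count)
        self.free_offsets = [i * block_size for i in range(block_count)]

    def allocate(self):
        # O(1), no per-frame malloc: just pop a recycled offset
        return self.free_offsets.pop()

    def free(self, offset):
        self.free_offsets.append(offset)

pool = FramePool(block_size=4096 * 3072 * 2, block_count=4)
a = pool.allocate()
b = pool.allocate()
pool.free(a)
print(pool.allocate() == a)  # True: freed blocks are reused
```

Since the data size stays constant while working on one sequence, every channel array fits the same block size and allocation never touches the OS heap after startup.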
22:11
TofuLynx
yeah probably
22:11
TofuLynx
what do you think of the gdoc right now?
22:12
BAndiT1983
looks ok at first glance
22:13
TofuLynx
Ok!
22:13
BAndiT1983
i hope that you mean that patternoffsets will be adjusted according to yesterdays chat ;)
22:13
BAndiT1983
also the algorithm simplified, similar to the downscaler loops
22:14
TofuLynx
with that R = 0, G1 = 1, G2 = width and B = width + 1?
22:14
BAndiT1983
yep
22:14
TofuLynx
yeah :P
22:14
BAndiT1983
this was for RGGB, others are similar
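The offsets just agreed on (R = 0, G1 = 1, G2 = width, B = width + 1 for RGGB) can be written down once per pattern; a rough sketch of both the offset table and the preprocessor-style channel split, with made-up function names:

```python
def pattern_offsets(pattern, width):
    # Offsets of the four pixels inside one 2x2 Bayer tile, relative to
    # the tile's top-left index: top-left=0, top-right=1,
    # bottom-left=width, bottom-right=width+1.
    order = {"RGGB": ("R", "G1", "G2", "B"),
             "BGGR": ("B", "G1", "G2", "R"),
             "GRBG": ("G1", "R", "B", "G2"),
             "GBRG": ("G1", "B", "R", "G2")}[pattern]
    return dict(zip(order, (0, 1, width, width + 1)))

def extract_channels(data, width, height, pattern="RGGB"):
    # Split a flat sensor array into R/G/B lists of known pixels,
    # roughly what the preprocessor step does.
    offs = pattern_offsets(pattern, width)
    channels = {"R": [], "G": [], "B": []}
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            base = y * width + x
            for name, off in offs.items():
                channels[name[0]].append(data[base + off])
    return channels

print(pattern_offsets("RGGB", 4096))  # {'R': 0, 'G1': 1, 'G2': 4096, 'B': 4097}
```

Since the offsets depend only on the pattern and the image width, they can be pre-set once per clip instead of being recomputed per pixel.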
22:14
TofuLynx
yeah
22:15
TofuLynx
also I changed from colorOffsets to PatternOffsets
22:15
BAndiT1983
would do it myself, but really don't want to interfere with your gsoc task
22:15
TofuLynx
I think it's more intuitive
22:15
TofuLynx
Ok! No problem :P
22:15
BAndiT1983
you don't need a method there, just supply an enum value
22:16
BAndiT1983
as the offsets in the pattern are always the same, they can be pre-set
22:16
TofuLynx
so we have to create an enum ?
22:17
BAndiT1983
extraction of R, G and B can be separated from debayer classes
22:17
BAndiT1983
an enum is there
22:17
TofuLynx
hmm I see
22:17
TofuLynx
we could do other thing
22:17
BAndiT1983
have no IDE at the moment, but its called BayerPattern
22:17
TofuLynx
add a method to OCimage class that returns the pattern offsets
22:18
BAndiT1983
isn't it already there?
22:18
TofuLynx
let me check
22:18
BAndiT1983
https://github.com/apertus-open-source-cinema/opencine/blob/master/Source/OCcore/Image/OCImage.h
22:19
TofuLynx
enum class BayerPattern
22:19
TofuLynx
{
22:19
BAndiT1983
i would create a new class for RGB extraction, so it can do the processing
22:19
TofuLynx
RGGB,
22:19
TofuLynx
BGGR,
22:19
TofuLynx
GRBG,
22:19
TofuLynx
GBRG
22:19
TofuLynx
}
22:19
TofuLynx
this thing here
22:19
BAndiT1983
yep, and there are setter and getter at the bottom of the file
22:20
TofuLynx
hmm isnt the Downscaler class an extractor?
22:20
BAndiT1983
OCImage does not need to know about offsets, it should be in the extractor
22:21
BAndiT1983
downscaler is 2 in 1, as it's getting pixels, but if you want to do debayering, then you need separate steps
22:21
BAndiT1983
maybe i'm wrong here, but just a gut feeling
22:21
TofuLynx
why do you think OCimage doesnt need to know about offsets?
22:21
BAndiT1983
it should provide the pattern, but no calculation of offsets
22:22
TofuLynx
why not?
22:22
BAndiT1983
why should it? it's just a general container for image data
22:22
BAndiT1983
single responsibility per class, if possible
22:23
BAndiT1983
image loader -> rgb extractor -> debayer -> and so on
22:23
BAndiT1983
this would be the pipeline
22:23
XDjackieXD
left the channel
22:23
TofuLynx
and where in the pipeline do you think the pattern offsets calculator should be?
22:24
BAndiT1983
rgb extractor, as it gets the image metadata, like width and height, also pattern
22:25
BAndiT1983
a lot of stuff from OCImage will be removed, like memmove, which was added as a hack for data storage, but with pool allocator it won't be necessary
22:26
TofuLynx
and then you would pass the pattern from the extractor to the next debayer class?
22:26
BAndiT1983
pattern offsets are always repeating, as you have same order top-left, top-right, bottom-left and bottom-right pixels
22:26
BAndiT1983
pattern is stored in OCImage, so the next class which works with it can get it from there
22:27
BAndiT1983
pattern offsets: you just have to assign the right arrays to write to, but the offsets stay the same for a given image resolution
22:27
XDjackieXD
joined the channel
22:28
TofuLynx
yeah
22:29
BAndiT1983
about your question regarding why calculations shouldn't be placed in OCImage
22:29
TofuLynx
initially, I won't implement the pattern system, probably later
22:30
BAndiT1983
my work colleague called me yesterday because of an exception in our application; i looked at his logs and pointed him straight to the error, which was known: the frontend system has no connection to the database, just the server, but someone committed methods with calculations, so the system tried to convert stuff for sending and crashed
22:30
BAndiT1983
pattern system is mostly there, needs just some fixes/simplifications
22:31
TofuLynx
wow
22:32
BAndiT1983
ocimage should store just the minimum required stuff
22:32
TofuLynx
hmm but the thing is, I am creating the debayer class from the ground up
22:32
BAndiT1983
and?
22:32
TofuLynx
so I won't create any pattern system initially, as it's just a task of replacing the numbers with the pattern offsets
22:33
BAndiT1983
you can assume RGGB for now
22:33
TofuLynx
yeah
22:33
TofuLynx
that's my plan
22:33
TofuLynx
what do you think?
22:34
BAndiT1983
show me some code, then i can tell you more
22:34
BAndiT1983
which algorithm is on the list first?
22:35
TofuLynx
Ok!
22:35
TofuLynx
The first algorithm will be Linear Interpolation
22:36
TofuLynx
aka nearest neighbour
22:36
TofuLynx
and then it will be bilinear interpolation
22:36
TofuLynx
We have to discuss how to implement the flag system
22:36
TofuLynx
to choose between the two, as you suggested
22:38
BAndiT1983
take the existing class as the base and adjust the extraction there, afterwards we can move it to another class
22:38
BAndiT1983
as it would be much simpler for processing, without current overhead on code there
22:38
TofuLynx
Basically, replacing the processing methods with better ones?
22:39
TofuLynx
hmm Ok!
22:39
BAndiT1983
first with simpler ones, as it's doing a lot of stuff in one step, but it's more complicated to maintain
22:40
BAndiT1983
that's why i say that the extraction should be done separately for now
22:40
TofuLynx
wait
22:40
TofuLynx
to clear things up
22:40
TofuLynx
what do you mean by extraction?
22:40
BAndiT1983
separation of R, G and B pixels
22:41
TofuLynx
so, what the Downscaler class currently does?
22:41
BAndiT1983
bayerframepreprocessor does it at the moment
22:41
BAndiT1983
downscaler just gets known pixels
22:42
BAndiT1983
will upload code changes these days, where 12to16 and 14to16bit were moved to another place
22:43
BAndiT1983
just have to find the problem with downscaler image
22:43
TofuLynx
what do you mean by "just gets known pixels"?
22:43
BAndiT1983
pixels which were captured by sensor
22:43
TofuLynx
what does the preprocessor do?
22:44
BAndiT1983
the black ones between them are not known, downscaler avoids them and gets known ones, which results in slight shift
22:44
BAndiT1983
preprocessor is the extractor in our case, just forgot about it
22:44
TofuLynx
ah! wait
22:44
BAndiT1983
loops there can be also simplified
22:44
TofuLynx
preprocessor basically pads the known pixels with unknown pixels?
22:44
BAndiT1983
???
22:45
TofuLynx
I'm not understanding what's the difference between the two
22:45
BAndiT1983
remember how bayer sensor data looks like -> https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/Bayer_pattern_on_sensor_profile.svg/350px-Bayer_pattern_on_sensor_profile.svg.png
22:45
BAndiT1983
sorry, double link -> https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/Bayer_pattern_on_sensor_profile.svg/350px-Bayer_pattern_on_sensor_profile.svg.png
22:45
TofuLynx
yes?
22:45
BAndiT1983
preprocessor splits the RGGB to RGB arrays
22:46
BAndiT1983
debayer interpolates
22:46
TofuLynx
and what does downscaler do?
22:46
BAndiT1983
you wrote it, you should know ;)
22:47
TofuLynx
I think it does what you said preprocessor does xD
22:47
BAndiT1983
basically, the downscaler just gets filled pixels, and avoids the white ones (see image), in reality they have no data for certain color
22:47
TofuLynx
ah!
22:47
TofuLynx
so that's the difference... ok!
22:48
TofuLynx
isn't it easy to implement into the downscaler?
22:48
BAndiT1983
?
22:48
BAndiT1983
downscaler has other purpose than debayer
22:48
BAndiT1983
https://web.stanford.edu/class/cs231m/lectures/lecture-11-camera-isp.pdf
22:49
TofuLynx
i'm not comparing it with the downscaler
22:49
TofuLynx
but with the preprocessor
22:50
BAndiT1983
somehow i've lost the thread, can you explain more?
22:50
TofuLynx
ok
22:51
TofuLynx
preprocessor extracts the RGGB into three arrays, RGB, that contain known and unknown pixels, right?
22:51
BAndiT1983
yes
22:51
TofuLynx
downscaler extracts the RGGB into three arrays, that only contain known pixels
22:51
BAndiT1983
yep
22:52
TofuLynx
my question is: Why shouldn't the downscaler also include the unknown pixels?
22:53
BAndiT1983
i think you are asking the question a bit wrong
22:53
TofuLynx
can you clear my mind?
22:53
TofuLynx
I'm really confused xD
22:53
BAndiT1983
downscaler was designed for fast pixel extraction, without the need for further processing, theoretically, as we still need gamma correction and so on
22:53
TofuLynx
oh
22:53
TofuLynx
I see it now
22:54
TofuLynx
so basically the downscaler is for an entirely different pipeline?
22:54
BAndiT1983
and your question should be: can we merge downscaler and preprocessor, so the skip level can be set
22:54
TofuLynx
yeah I guess
22:55
TofuLynx
Ok, now I understand it!
22:55
BAndiT1983
maybe this should be the first task, to adjust the preprocessor, maybe also some benchmarks to check if the current implementation is struggling with threads
22:55
TofuLynx
ok!
22:56
BAndiT1983
also benchmarks with loops like downscaler uses
22:58
TofuLynx
Ok the first task: benchmark the preprocessor loops
22:58
TofuLynx
change the loops to a single loop
22:58
TofuLynx
and finally benchmark it again and compare
22:58
TofuLynx
?
22:59
BAndiT1983
yep
22:59
BAndiT1983
you can consider it as downscaler and preprocessor merge
22:59
TofuLynx
do you want to add the skip pixels too?
22:59
BAndiT1983
but real merge will happen later, we should evaluate first
23:00
BAndiT1983
no, leave the skip option out for now, have to reflect on that a bit, to be sure that we have a flexible solution
23:00
TofuLynx
ok!
23:00
RexOrCine|away
changed nick to: RexOrCine
23:01
BAndiT1983
maybe the default value would be 0 for skip, but if the value is higher, then OC should avoid debayering
23:01
TofuLynx
Do you want the preprocessor to call the debayer class?
23:01
BAndiT1983
have added an enum with half, quarter, eighth and sixteenth options for it
23:01
TofuLynx
or will it still be a job for the presenter?
23:01
BAndiT1983
but on my local machine for now
23:02
BAndiT1983
let the presenter do it, then the pipeline is more visible, also image loader should be simplified later, but first things first
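The skip-level idea mentioned above (default 0, with higher values letting OC avoid debayering) could look roughly like this. The enum and function names here are assumptions for illustration, not the actual enum BAndiT1983 added locally:

```cpp
#include <cassert>

// Hypothetical skip-level option: None keeps full resolution,
// the other levels take only known Bayer pixels at increasing strides.
enum class SkipLevel
{
    None = 0,  // full resolution -> debayering still required
    Half,      // take every 2nd Bayer cell
    Quarter,
    Eighth,
    Sixteenth
};

// With any level above None, only known pixels are extracted,
// so the presenter can skip the debayer stage entirely.
bool RequiresDebayer(SkipLevel level)
{
    return level == SkipLevel::None;
}
```

The presenter would query this flag to decide whether to run the debayer pass, keeping the pipeline decision in one place.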
23:03
TofuLynx
ok :)
23:03
TofuLynx
anything you want to add to the gdoc about the first task?
23:03
TofuLynx
or correct
23:04
BAndiT1983
before/after benchmarks
23:04
BAndiT1983
such a benchmark should execute the loop many times, like 100 or 1000 to get a median value
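A benchmark of that shape (run the loop many times, take the median) can be sketched with std::chrono; this is a generic timing helper under assumed names, not the imageprovider timing code referenced later:

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <functional>
#include <vector>

// Run a task N times and return the median runtime in milliseconds.
// The median is more robust than the mean against outliers such as
// scheduler hiccups or cache warm-up on the first iteration.
double MedianRuntimeMs(const std::function<void()>& task, int runs = 100)
{
    std::vector<double> samples(runs);
    for (int i = 0; i < runs; ++i)
    {
        auto start = std::chrono::steady_clock::now();
        task();
        auto end = std::chrono::steady_clock::now();
        samples[i] =
            std::chrono::duration<double, std::milli>(end - start).count();
    }
    // partial sort is enough to place the median element
    std::nth_element(samples.begin(), samples.begin() + runs / 2,
                     samples.end());
    return samples[runs / 2];
}
```

Timing the current multi-loop preprocessor and the merged single-loop version with the same helper gives the before/after comparison discussed here.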
23:04
TofuLynx
Ok!
23:05
BAndiT1983
without skipping pixels !for now!
23:05
TofuLynx
That's what is written xD
23:05
BAndiT1983
you can take a look at imageprovider for timing methods
23:05
TofuLynx
oh wait
23:06
TofuLynx
hmm
23:06
TofuLynx
timing?
23:06
BAndiT1983
line 55 and 63
23:06
TofuLynx
ah, for benchmark?
23:06
BAndiT1983
for a benchmark you need to profile it somehow
23:07
TofuLynx
yeah
23:09
danieel
left the channel
23:09
danieel
joined the channel
23:11
BAndiT1983
so, off for today, TofuLynx, as usual write here, in the lab, or via email
23:11
BAndiT1983
see you
23:11
g3gg0
cu
23:11
BAndiT1983
changed nick to: BAndiT1983|away
23:12
TofuLynx
see you! :)
23:37
TofuLynx
Good night everyone!
23:37
TofuLynx
Nice to meet you g3gg0!
23:39
TofuLynx
left the channel
23:44
g3gg0
same, gn8 :)
00:02
g3gg0
left the channel
00:59
rton
left the channel