
#apertus IRC Channel Logs

2018/05/06

Timezone: UTC


01:22
XD[m]
left the channel
01:41
vup[m]
left the channel
02:21
rton93
left the channel
02:38
BAndiT1983
changed nick to: BAndiT1983|away
05:48
Bertl_zZ
changed nick to: Bertl
05:48
Bertl
morning folks!
06:00
supragya
joined the channel
06:01
supragya
hi Bertl
06:01
supragya
I have a few queries regarding the RAW data pipeline...
06:04
supragya
In a gist... I would like to know the highest outflow (speed/bandwidth, maybe in MB/s) I could expect out of the USB3 out port... if it were only RAW12 data
06:05
supragya
also, what is the separation of tasks between the Zynq and the MachXO2 in the USB3 path
06:05
supragya
(I read the logs often, so you could reply even if I am not on IRC ) :)
06:07
supragya
XO2 and Zynq models in use?
06:08
supragya
ZE/HC/HE?
06:18
supragya
got it ... 640HC
06:20
Bertl
the USB3 out is limited by the maximum bandwidth of the FT60x from FTDI (see datasheet)
06:20
Bertl
the Zynq is the 7020 model used in the MicroZed
06:20
Bertl
and the Lattice is typically the MachXO2 1200HC
06:21
Bertl
although it is kind of pin compatible with the 640 and 2000
06:21
Bertl
MachXO2 is used as routing fabric and controls the GPIOs as well as the I2C interface to the plugin modules
06:22
Bertl
it doesn't transport any high speed data
06:22
Bertl
the USB3 module also contains a MachXO2 which receives the high speed data from the Zynq and converts it to the proper output for the FTDI chip
06:23
Bertl
(see USB3 gearwork task)
06:23
Bertl
off to the faire now ... bbl
06:23
Bertl
changed nick to: Bertl_oO
06:40
seaman
joined the channel
07:14
supragya
left the channel
07:25
RexOrCine|away
changed nick to: RexOrCine
07:38
lexano
left the channel
07:43
seaman
left the channel
07:53
lexano
joined the channel
09:03
rton
joined the channel
09:36
BAndiT1983|away
changed nick to: BAndiT1983
13:07
TofuLynx
joined the channel
13:17
TofuLynx
left the channel
13:23
RexOrCine
changed nick to: RexOrCine|away
15:02
TofuLynx
joined the channel
15:11
BAndiT1983
hi TofuLynx, long time no see
15:14
TofuLynx
left the channel
15:15
TofuLynx
joined the channel
15:20
TofuLynx
left the channel
15:20
TofuLynx
joined the channel
15:32
supragya
joined the channel
15:32
supragya
Good evening!
15:32
BAndiT1983
hi supragya
15:32
supragya
(oops, quite early for that :) )
15:34
supragya
anyhow, BAndiT1983 : Bertl told me in the morning that the limiting factor of USB3 would be FTDI FT60x
15:34
BAndiT1983
read the logs
15:34
supragya
that is 5Gbps
15:34
supragya
about 33.333 fps on 18MB per frame
15:35
supragya
Are we looking at that low of recording fps?
15:35
BAndiT1983
currently looking for any sort of stream, so we can proceed with practical side of it
15:36
supragya
hmm, I was thinking, why don't we just piggyback the metadump from the camera with every frame?
15:36
supragya
*metadata dump
15:36
supragya
and stream it out...
15:37
supragya
BAndiT1983: do you have knowledge of how the internal system is laid out?
15:39
supragya
[piggyback] -> leaving the recorder to encode it to whichever format it likes
15:39
BAndiT1983
HDMI?
15:39
TofuLynx
left the channel
15:39
supragya
USB3
15:40
TofuLynx
joined the channel
15:41
BAndiT1983
and which recorder is meant here?
15:41
supragya
a Recording computer (PC)
15:42
supragya
The very simplistic schematic to refer is always this: https://trello.com/c/aX7KJihl/24-the-planned-usb3-recording-pathway
15:42
supragya
However, as Bertl_oO told me, the USB3 chain is as follows:
15:42
BAndiT1983
recording software should be open and the format known, otherwise you will lose information or get unnecessary data
15:45
supragya
Sensor (serial data) -> FPGA (small FIFO buffer) -> AXI Memory Writer (Serial to Memory) -> DDR3 Memory Buffer -> AXI Memory Reader (Memory Block to FIFO) -> (Video Processing Stuff) -> TMDS (HDMI) Encoder -> Serialization to D0-D3 -> Zynq (via Serializer) -> MachXO2 (another FPGA) -> FTDI FT60x -> USB3 output
15:45
supragya
[TMDS (HDMI) Encoder] -> this I don't know if it comes into picture or not
15:46
BAndiT1983
HDMI encoder would be an obstacle for clear format
15:47
supragya
? this is the pathway Bertl explained to me, we are not looking at HDMI
15:48
supragya
what I suggest is that we stop the serialisation D0-D3 for a few moments between frames and add the metadata that the camera captures (as Bertl says, a simple register dump) behind the frame before serialisation, all could be done on one channel
15:49
BAndiT1983
meta data consists of multiple things, not only image sensor data
15:49
BAndiT1983
if you are sure, that the output would be fast enough, then go on
15:50
supragya
My calculations are as follows, (need a hardware expert on this however)
15:51
TofuLynx
left the channel
15:52
supragya
Keep it 30fps, 30*18*8 = 4320 Mbps, leaves out nearly 680 Mb per second = 85 MB -> 2.8333 MB/frame
15:52
supragya
for meta
15:52
supragya
if we operate at full 5Gbps
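The budget sketched above can be checked with a quick back-of-the-envelope script. This assumes the full nominal 5 Gbit/s line rate (the net FT60x throughput will be lower) and the 18 MB/frame figure from the discussion:

```python
# Back-of-the-envelope USB3 budget, per the numbers in the chat.
# Assumption: full 5 Gbit/s line rate; real net throughput is lower.
LINK_MBPS = 5000          # nominal USB3 / FT60x line rate, Mbit/s
FRAME_MB = 18             # one RAW12 frame, MB
FPS = 30

video_mbps = FPS * FRAME_MB * 8          # pixel data: 4320 Mbit/s
spare_mbps = LINK_MBPS - video_mbps      # 680 Mbit/s left over
spare_mb_per_s = spare_mbps / 8          # 85 MB/s
meta_mb_per_frame = spare_mb_per_s / FPS # ~2.833 MB of metadata per frame

print(video_mbps, spare_mbps, round(meta_mb_per_frame, 4))
```

This reproduces the 4320 Mbps / 85 MB/s / 2.8333 MB-per-frame figures quoted in the log.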
15:53
supragya
still, I need someone like g3gg0 or Bertl on this, as to how feasible it is and where we would induce such piggybacking (in the Zynq maybe)
15:53
BAndiT1983
collecting meta data is also important, as this factor could slow the whole stuff down
15:54
supragya
Reasoning: As each frame has its own meta, the conversion to a DNG sequence is easy
15:54
supragya
[collecting meta data is also important] -> yes, but here is what to look at:
15:54
supragya
currently axiom cameras can have the following meta only:
15:55
supragya
Sensor(Reg dump), Lens(has to be discussed), Attached modules(maybe)
15:55
supragya
anything else?
15:57
supragya
[could slow the whole stuff] -> can do, but that is to be discussed, as this has to be done anyhow; even if meta does not translate to every frame in the stream we take out of the camera, it sure has to be retrieved and processed in the camera
15:59
BAndiT1983
i still doubt we can stream out DNG or any other format that needs bigger processing effort
15:59
supragya
not dng
16:00
supragya
** I struggle to say we can get DNG out **
16:00
supragya
it will be a RAW12 stream piggybacked with meta
16:01
supragya
the recording computer is to take this RAW12 and meta and convert it to DNG (or any other format for that matter) in its leisure time
16:01
supragya
or on the fly if it has enough strength
16:01
BAndiT1983
how will the raw12 data be separated? how will the metadata be recorded along with it?
16:02
supragya
let me draw a schematic
16:03
BAndiT1983
can we get away with normal SSD or do we need some faster system for recording without losing data? or is the data buffered first, after receiving it over USB?
16:03
supragya
yes, buffered in primary memory, flushed as we get time
16:04
supragya
then this flushed data is taken up (along with appropriate flushed meta) to convert it to suitable format
16:04
supragya
MLV's non-sequential thing is great because they write directly to an SD, but we stream the data out
16:06
BAndiT1983
the stream also needs preparation
16:07
BAndiT1983
and don't forget about future extensions of meta data
16:07
BAndiT1983
which format will meta data have?
16:11
supragya
schematic: https://docs.google.com/document/d/1gxJGt74VogWa_CyxGyHXaE6gpYT1UJDx67uYQ9NmMqQ/edit?usp=sharing
16:11
supragya
[ don't forget about future extensions of meta data] -> 2 points on this
16:12
supragya
2.8333 MB/frame is a good amount of metadata that I don't think we will ever be able to utilise
16:13
supragya
Even if we don't get 5Gbps, we pass the RAW12 with a "Sequence ID"
16:13
supragya
this uniquely identifies the frame
16:13
supragya
On another channel (maybe GbE), we can stream the metadata (that we would have piggybacked)
16:14
supragya
with the Sequence IDs
16:14
supragya
The PC encoder has to just match the meta with frames
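The Sequence-ID matching the PC encoder would do can be sketched as follows. This is purely illustrative (the function and record shapes are invented, not an apertus API): frames arrive on USB3 tagged with an ID, metadata records arrive on a separate channel with the same IDs, and the encoder joins them.

```python
# Hypothetical sketch of Sequence-ID matching on the recording PC.
# Record shapes and names are invented for illustration.

def match_meta(frames, metas):
    """Pair each (seq_id, frame) with the metadata carrying the same ID."""
    meta_by_id = {seq_id: m for seq_id, m in metas}
    return [(seq_id, raw, meta_by_id.get(seq_id)) for seq_id, raw in frames]

frames = [(0, b"raw12-frame-0"), (1, b"raw12-frame-1")]
metas = [(1, {"gain": 2}), (0, {"gain": 1})]  # may arrive out of order

paired = match_meta(frames, metas)
```

Out-of-order arrival on the metadata channel is harmless here, since the join is by ID rather than position.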
16:16
BAndiT1983
register dump will not be sufficient, as this covers only the image sensor, other data will be received from different modules in the system
16:16
supragya
those will be piggybacked behind sensor dump then...
16:16
BAndiT1983
maybe JSON or BSON or flatbuffers, but XML would be overkill
16:17
supragya
the format is not the worry here, it is the piggybacking technique
16:17
BAndiT1983
XMP has a lot of overhead, as far as I've seen the specification for it
16:17
supragya
then we won't use it
16:17
supragya
Ah, I now understand your concern
16:19
supragya
https://lab.apertus.org/T951 -> my mail was to satisfy the point 2 (the final format), I never tried to put forth that being the way we would stream the data out of the camera itself
16:19
supragya
I apologize if I did, that was not my intention
16:19
BAndiT1983
supragya, no problem here, just trying to get a clear picture of the process
16:24
supragya
have I missed to tell you something? Is it clear..?
16:29
BAndiT1983
no, everything ok
16:30
supragya
feedbacks?
16:30
supragya
what do you think about this from your perspective
16:30
supragya
?
16:33
BAndiT1983
everything looks clear for now, but the practical side of the thing is totally unknown yet, so can't say much
16:34
BAndiT1983
meeting in 30 minutes?
16:34
supragya
Yes
16:35
g3gg0
joined the channel
16:35
supragya
for g3gg0!
16:35
supragya
Good evening g3gg0 :)
16:35
g3gg0
hi
16:36
g3gg0
need some shower. cu in 15 min
16:36
supragya
sure
16:38
BAndiT1983
supragya, discussion in main channel? or is there something we should do in separate one?
16:38
supragya
your call BAndiT1983.
16:38
BAndiT1983
my idea of gsoc is to do most stuff in the main one, as long as we don't have to discuss critical/private stuff of the thing
16:38
supragya
private? I don't have that on plate today
16:39
BAndiT1983
just saying, sometimes we have to discuss some credentials or so, then a separate channel is the way, but as it's an OSS org, as much open discussion as possible
16:40
supragya
anyways, me too for shower, will be back soon. BAndiT1983: I am fine any way as long as we could communicate :)
16:40
BAndiT1983
ok, just give me a ping, when you are back
16:42
supragya
%n
16:57
g3gg0
re
16:57
supragya
re
16:58
supragya
g3gg0: http://irc.apertus.org/index.php?day=06&month=05&year=2018
16:59
supragya
Important: 16, 50, 53, 67, 71, 84, 88, 76
17:00
supragya
113, 105, 116,
17:00
supragya
g3gg0: https://docs.google.com/document/d/1gxJGt74VogWa_CyxGyHXaE6gpYT1UJDx67uYQ9NmMqQ/edit
17:00
supragya
sorry: https://docs.google.com/document/d/1gxJGt74VogWa_CyxGyHXaE6gpYT1UJDx67uYQ9NmMqQ/edit?usp=sharing
17:02
g3gg0
as far as i understood se6astian, we would have two USB3 chips with up to 3.2 gbits/s each
17:03
g3gg0
a single FTDI can achieve data rates up to 5gbits/s, but the net transfer rate will of course be lower
17:03
supragya
"Sequence ID" then? this could help?
17:03
supragya
to split the stream in two
17:04
supragya
http://irc.apertus.org/index.php?day=06&month=05&year=2018#105
17:04
supragya
BTW, when did se6astian tell you this g3gg0 ?
17:04
g3gg0
> the recording computer is to take this RAW12 and meta and convert it to DNG (or any other format for that matter) in it's leisure time
17:05
g3gg0
don't underestimate DNG writing!
17:05
supragya
Here is how it works, let me get a few links
17:05
g3gg0
even my desktop machine can't handle converting a MLV into DNG in real time
17:06
supragya
that's why "leisure", we don't need realtime as long as we can store the stream somewhere fast enough
17:06
supragya
(schematic)
17:07
g3gg0
hehe you are redesigning what MLV was meant for
17:08
supragya
kindof with a different perspective
17:08
supragya
to not let camera do much
17:08
supragya
if you look at [http://www.cmosis.com/products/product_detail/cmv12000]
17:08
supragya
"Datasheet", page 62
17:09
supragya
Bertl says, we store the meta as a register dump
17:09
g3gg0
yep
17:09
BAndiT1983
but we need also additional data, like time
17:09
supragya
If we just piggyback this dump, or better yet, pass it through a GbE with "Sequence ID"
17:09
g3gg0
and a timestamp
17:10
supragya
and add other meta like time/ lens info etc... piggybacked
17:10
g3gg0
yep
17:10
supragya
then all encoding is to be done by encoder...
17:10
supragya
off the camera...
17:10
BAndiT1983
some sort of header also has to be provided with info on the meta data sizes, if there are multiple chunks between frames
17:11
supragya
We can have KLV format for that (key, length, value)
17:11
g3gg0
...
17:11
supragya
keys maybe as (0-frame, 1-lensMeta, 2-sensorMeta)
17:11
BAndiT1983
same as tags in tiff etc.
17:11
supragya
the difference I feel g3gg0 is this:
17:12
supragya
MLV is supposed to be structured in camera, but here, we do not structure that in camera
17:12
supragya
we just throw out what we have, as fast we can
17:12
supragya
the only thing to make sure is to have the encoder version closely match the hardware version
17:13
BAndiT1983
it's more like firmware version
17:13
supragya
yes
17:14
supragya
piggyback is of course only a solution on a single channel
17:14
supragya
however, as this may not be feasible, the SequenceID approach is taken
17:14
supragya
[sID|RAW12 frame][sID|RAW12 frame][sID|RAW12 frame][sID|RAW12 frame][sID|RAW12 frame]
17:15
supragya
in case the sensors change, so do the keys in the KLV format, the encoder remains unchanged
17:15
supragya
more like (2-CMV12ksensorMeta, 3-someOtherSensorMeta)
17:16
g3gg0
okay now let me play a bit: if we use as KLV format a 32 bit key, 32 bit length and add an accurate microsecond timestamp to get even frame rate jitter recorded - you would end up with what is MLV like ;)
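The KLV layout g3gg0 sketches (32-bit key, 32-bit length, value, with the timestamp as just another tagged record) could look like the following. The key numbers are made up for illustration, and the struct layout is an assumption, not the actual MLV block format:

```python
import struct

# Minimal KLV-style packer/parser: 32-bit key, 32-bit length, then the
# value bytes. A microsecond timestamp is simply another tagged record.
# Key values below are invented for illustration.
KEY_FRAME, KEY_SENSOR_META, KEY_TIMESTAMP = 0, 2, 3

def klv_pack(key, value):
    return struct.pack("<II", key, len(value)) + value

def klv_unpack(buf):
    """Yield (key, value) records from a concatenated KLV buffer."""
    off = 0
    while off < len(buf):
        key, length = struct.unpack_from("<II", buf, off)
        off += 8
        yield key, buf[off:off + length]
        off += length

stream = klv_pack(KEY_TIMESTAMP, struct.pack("<Q", 1234567)) \
       + klv_pack(KEY_FRAME, b"\x0f" * 16)
records = list(klv_unpack(stream))
```

Because every record carries its own length, a reader can skip keys it does not understand, which is what makes the format extensible for future metadata.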
17:16
BAndiT1983
to prevent that, consider something like json with generic keys for sensor data
17:16
BAndiT1983
as we don't need the whole dump
17:16
g3gg0
save the effort encoding binary data into JSON but use binary tags and voila
17:17
g3gg0
the only thing that might prevent re-using existing toolchains might be the fact that the raw bitstream is encoded differently
17:17
BAndiT1983
there is also BSON for binary json
17:18
supragya
g3gg0: seems fitting, my issue is whether the Zynq board would be fast enough to encode such structs at such a high rate, picking relevant pieces from register dumps while also handling other features of the camera
17:18
BAndiT1983
we can re-use same memory, as meta data will be the same length for every frame
17:18
g3gg0
come on, you recommend encoding binary data into json, but writing structures is slowe?
17:18
g3gg0
*slower
17:18
BAndiT1983
just write new values to it, one has to ensure that the memory is zeroed beforehand if the data is shorter, like a text string
17:19
supragya
I never went for JSON ;)
17:19
BAndiT1983
g3gg0, nope, it was just about format
17:19
g3gg0
ah that came from you :)
17:20
g3gg0
no matter if JSON or BSON, it's a dynamic, structured format
17:20
BAndiT1983
for performance a fixed one would fit better
17:21
BAndiT1983
fixed structure for tags, gathered in one memory chunk and data replaced there
17:21
supragya
the only reason I described regdump to be streamed is performance... as once it has been read... nothing more than streaming has to be done...
17:21
supragya
however if struct writing can be done in time, why not :)
17:21
BAndiT1983
memset should be fast, if you pre-allocate
17:24
supragya
so how are we to proceed now?
17:25
supragya
[memset]-> wouldn't we need to call it multiple time
17:25
supragya
*times... that's struct writing ain't it?
17:25
g3gg0
no need for memset
17:25
supragya
*isn't it
17:25
g3gg0
either you just memcpy or set the elements manually
17:26
supragya
exactly
17:26
BAndiT1983
you can also memcpy, my point was that we should re-use a pre-allocated chunk of memory for meta data
17:27
g3gg0
yeah, prevent malloc if possible
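The advice here (allocate the metadata chunk once, then copy new values in, zeroing the tail when a dump is shorter, with no per-frame malloc) can be mimicked in Python with a reusable bytearray. The buffer size is an arbitrary illustration value:

```python
# Sketch of the pre-allocate-and-reuse pattern from the chat.
# META_SIZE is arbitrary; in C this would be memcpy + memset on a
# statically allocated struct.
META_SIZE = 64
meta_buf = bytearray(META_SIZE)  # allocated once, reused every frame

def update_meta(buf, regdump):
    """Overwrite the buffer in place; zero the tail if the dump is short."""
    buf[:len(regdump)] = regdump
    for i in range(len(regdump), len(buf)):
        buf[i] = 0

update_meta(meta_buf, b"\xaa" * 10)
update_meta(meta_buf, b"\xbb" * 4)   # shorter dump: stale tail is zeroed
```

Zeroing the tail matters precisely for the case raised above: a shorter value must not leave stale bytes from the previous frame behind.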
17:27
supragya
while setting manually can reduce the size of the regdump, it imposes new restrictions -> for every new sensor, a new struct setter from the regdump has to be written (anyway we would have had to, but then it would have resided at the encoder only)
17:28
supragya
now it has to reside on the Zynq
17:28
supragya
result: unneeded load on the Zynq, GbE is not utilised fully
17:28
supragya
benefit: MLV libraries
17:28
g3gg0
so from what i've read, i'd split the writing part into two processes
17:29
g3gg0
a) USB3 -> intermediate file
17:29
g3gg0
b) intermediate file -> CDNG
17:29
g3gg0
right?
17:29
supragya
yes
17:29
g3gg0
so i will recommend: pick MLV as intermediate file and you have b) for free
17:30
g3gg0
you then have to:
17:30
g3gg0
a) receive video frames -> write as e.g. JPEG92 or MLV formatted raw
17:31
g3gg0
b) extract interesting information from metadata and write them - if they changed since the last metadata
17:31
BAndiT1983
as we don't have the jpeg92 part in fpga, it's not an option currently
17:31
supragya
a fitting point to note here:
17:31
supragya
with sID based system
17:31
g3gg0
wasnt it on the "small PC" side ?
17:31
supragya
if a single stream is capable of 25fps
17:31
supragya
we can have two streams and then we have 50fps
17:31
supragya
as meta is on GbE
17:32
supragya
MLV is single stream, am i wrong?
17:32
g3gg0
no, just write two files
17:32
g3gg0
(called chunks)
17:33
supragya
then rearrange on timestamp?
17:33
g3gg0
yes
17:33
g3gg0
but back to writing. do we agree we are on an external barebone PC which is connected to the AXIOM using two USB3?
17:33
g3gg0
and goal was to write CDNG ?
17:33
supragya
yes, we are.... but there sits encoder
17:33
supragya
struct writing is in camera
17:34
supragya
on the Zynq board
17:34
g3gg0
axiom side is restricted to META/RAW12/...
17:34
BAndiT1983
forget the encoder for a moment, it would be ok just to get the stream without dropped data first
17:34
g3gg0
right?
17:34
supragya
BAndiT1983: in that case, to get a stream out, as soon as USB3 gearwork is done, we will get a stream of RAW12 anyways
17:35
supragya
(that is the problem statement)
17:35
supragya
we skip some clocks to enter some data... MLV things in stream
17:35
supragya
and then we get the final stream
17:36
g3gg0
https://app.sketchtogether.com/s/sketch/qBIfw.1.1/
17:36
g3gg0
can you see this?
17:36
BAndiT1983
yep
17:39
g3gg0
sketched correctly?
17:39
supragya
let me add
17:39
g3gg0
zynq is meant to be within the axiom camera
17:40
supragya
yes it is
17:40
g3gg0
errh i mean that was obvious :)
17:40
g3gg0
AXIOM abstracts this
17:41
g3gg0
all we see to the outer world is the CMOS and the two USB3
17:41
g3gg0
call it black box
17:41
g3gg0
light in, USB3 out
17:42
g3gg0
nah remove MLV for now
17:42
g3gg0
we are talking about how it would look one level above
17:43
g3gg0
not the technical implementation
17:43
g3gg0
"meta" is more or less the register dump of the CMOS and some other stuff
17:44
supragya
yes, but then, if everything passes the Zynq, skipping clocks for RAW writing to accommodate meta and writing structs at high rates would put pressure on the Zynq... is it capable of that?
17:44
supragya
otherwise, what g3gg0 says is correct
17:44
supragya
In such a case
17:45
g3gg0
so okay thats what exists (left side) and what should get implemented (right side)
17:46
supragya
a few changes on the left -> code for the Zynq to serialise/write the stream into MLV
17:46
supragya
chunks etc
17:46
g3gg0
wait, no MLV yet
17:46
supragya
okay
17:46
g3gg0
just the top level view
17:46
g3gg0
we all agree?
17:46
supragya
then to
17:46
supragya
yes g3gg0
17:46
g3gg0
BAndiT1983 ?
17:47
supragya
then a code for the Zynq.... to serialise the RAW and meta in the stream
17:47
supragya
BAndiT1983: is busy drawing g3gg0 :)
17:47
g3gg0
maybe not, lets do the next steps
17:48
g3gg0
on the barebone side we might not have enough power to save CDNG directly, right?
17:48
supragya
on the right,
17:48
supragya
yes exactly
17:48
g3gg0
so the "magic here" has to buffer the RAW12 stream locally
17:48
g3gg0
let me draw
17:49
xfxf
left the channel
17:49
xfxf
joined the channel
17:50
supragya
then it is a two way step:
17:50
supragya
1. to flush the stream to SSDs directly (I would go for a RAID maybe)
17:50
supragya
2. to initiate a process that can access this flushed data to convert it to CDNG or whatever we feel fit, and then flush to disk back
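The two-step plan above (flush the incoming stream to fast storage first, convert later) can be sketched minimally. The chunk size and in-memory stand-ins for the stream and SSD are placeholder assumptions:

```python
import io

# Step 1 of the two-step plan: copy the raw stream to the disk cache
# with large sequential writes, no conversion in the hot path.
# Step 2 (cache -> CDNG) runs later, offline.

def flush_stream(stream, sink, chunk_size=1 << 20):
    """Copy `stream` to `sink` in 1 MB chunks; return bytes written."""
    total = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return total
        sink.write(chunk)
        total += len(chunk)

src = io.BytesIO(b"\x00" * (3 * 1024 * 1024 + 5))  # fake RAW12 stream
dst = io.BytesIO()                                  # stands in for the SSD
written = flush_stream(src, dst)
```

Keeping step 1 dumb (pure copying, large writes) is what makes lossless capture plausible; all expensive work is deferred to step 2.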
17:50
BAndiT1983
nope, just watching the discussion and the image
17:50
BAndiT1983
yes, seems legit for now
17:50
BAndiT1983
i will again bring up the example of ARRIRAW: our data stream is stored on the PC first, or any other device providing storage, then converted afterwards
17:50
BAndiT1983
focus should be on lossless data
17:50
supragya
so ARRIRAW does this? :)
17:50
supragya
never knew...
17:51
supragya
but figured it would be used at least somewhere
17:51
BAndiT1983
they store their primary data as proprietary ARRIRAW format, afterwards everything is converted to the needs of the projects/films
17:51
xfxf
left the channel
17:51
xfxf
joined the channel
17:51
g3gg0
yep
17:52
BAndiT1983
also saw somewhere a frame server example for arriraw, maybe in their ads
17:52
supragya
why would you need a frameserver here?
17:52
g3gg0
not the worst solution to me
17:52
g3gg0
lets keep focused
17:53
BAndiT1983
they do it, to provide preview
17:53
BAndiT1983
and to explore recorded meta data and so on
17:53
supragya
A few things have come up now:
17:53
supragya
1. to understand how the Zynq side is to be done
17:54
supragya
2. to finally set the stream format
17:54
supragya
3. how to cache on SSDs
17:54
supragya
4. A system to convert to CDNG (Encoder_
17:54
supragya
*)
17:54
g3gg0
nothing to do on zynq side. should have been some other work
17:55
g3gg0
you can assume "you will get the RAW12/metadata stream somehow"
17:55
supragya
that helps
17:55
g3gg0
ok then lets define that.
17:55
supragya
meta on same stream as frames
17:55
supragya
?
17:55
supragya
or different Gbe
17:55
g3gg0
might be a file stream, a pipe or, as a simplification, a local file
17:57
supragya
how should it look then? meta/RAW piggyback? seqID|Frame and seqID|meta on different channels? or MLV?
17:57
g3gg0
if you now used MLV as the "cache" format, the CDNG conversion could be done using existing software
17:57
g3gg0
the "from-zynq"-format?
17:57
g3gg0
i guess a series of RAW12
17:57
supragya
and meta?
17:58
g3gg0
probably, yeah
17:58
g3gg0
-> ask and make an assumption
17:58
supragya
? didn't get you
17:58
g3gg0
@bertl_oO / @se6astian
17:58
g3gg0
thats what i tell my students very often :D
17:59
g3gg0
if you dont know, ask people who could now and with all the information make an assumption on how it would look like
17:59
g3gg0
*who could know
17:59
supragya
okay... so this needs further investigation... and that I should ask Bertl_oO or se6astian, choose a fitting format and go on..
17:59
supragya
in case the "from-zynq" format is a RAW12 series, then a question is raised
18:00
supragya
are we then doing RAW12 series -> MLV -> CDNG?
18:00
g3gg0
yep
18:01
supragya
O.o 2 conversions, so that we don't have to make efforts on MLV-> CDNG
18:01
supragya
so two encoders... kindoff?
18:01
g3gg0
noooo, MLV is not conversion
18:01
BAndiT1983
why?
18:01
supragya
chained in series
18:01
supragya
g3gg0: please elaborate the chain
18:02
g3gg0
1 min
18:03
g3gg0
re
18:03
g3gg0
if you want to write DNG, which pixel format will you have to use?
18:04
supragya
raw bayer pattern?
18:04
g3gg0
the same encoding you get out of the AXIOM?
18:04
supragya
yes
18:05
supragya
in fact the RAW could be memcpy'ed into the pixel data of the DNG
18:05
g3gg0
oh okay, thought the AXIOM bitstream is a bit odd and cannot be written directly into DNG?
18:05
BAndiT1983
it's just raw12, fitting DNG rather well
18:05
g3gg0
(had in mind that the bitstream was kind of interlaced)
18:06
BAndiT1983
don't remember if byte swapping has to be done or not
18:06
supragya
interlaced ?
18:06
g3gg0
ok maybe i mixed something then
18:08
g3gg0
how i'd start then, is writing the RAW12 data into MLV's VIDF frames
18:08
supragyaraj
joined the channel
18:08
supragyaraj
[ byte swapping] -> can refer OCcore
18:09
g3gg0
and modify e.g. MLVFS to accept that RAW12 bitstream for DNGs
18:09
BAndiT1983
byte swapping: if required to accomplish that, then right format should be sent by the camera
18:09
g3gg0
as cinema dng is a lot of work to do it right, i'd reuse existing tools
18:10
supragyaraj
how about meta? do you save that for every frame in MLV
18:10
supragyaraj
?
18:10
supragya
left the channel
18:10
supragyaraj
if I understand right... a meta is (mode changer) for further frames
18:10
g3gg0
i'd try to only write updated LENS/EXPO blocks if they really changed
18:10
supragyaraj
so if the meta does not change between frames, it should not be written
18:10
g3gg0
yep
18:11
g3gg0
https://www.magiclantern.fm/forum/index.php?topic=13152.0
18:11
supragyaraj
but the current system says we should write it all the time (unless you would like to strcmp both metas' serial somehow, which would be expensive)
18:11
supragyaraj
current system as in ... dumping quickly into cache
18:12
g3gg0
then as a first step, just write it
18:12
supragyaraj
serial -> you convert the struct/object to string (serialise to JSON maybe) and compare
18:13
supragyaraj
sure g3gg0!
18:13
g3gg0
o.O
18:13
g3gg0
do not even think about strings
18:13
g3gg0
neither in C nor in java
18:13
BAndiT1983
string compare is the slowest thing
18:13
g3gg0
the only string in raw video processing you would ever see is maybe the "author name"
18:14
g3gg0
or some "scene 32, take 19" strings
18:14
Kjetil
strncmp > memcmp :P
18:14
g3gg0
you do binary compares
18:15
g3gg0
you have binary data and you simply can compare fields in them
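Combined with g3gg0's earlier point about only writing LENS/EXPO blocks when they changed, the binary compare could look like this. In C this would be a memcmp of the register-dump structs; the byte strings below are placeholder data:

```python
# Binary compare instead of string compare: only emit a metadata block
# when the raw bytes differ from the previous frame's dump.
# (Python's bytes equality has memcmp semantics.)

def dedupe_meta(dumps):
    """Return the list of (frame_index, dump) actually worth writing."""
    written, prev = [], None
    for i, dump in enumerate(dumps):
        if dump != prev:
            written.append((i, dump))
            prev = dump
    return written

dumps = [b"\x01\x02", b"\x01\x02", b"\x01\x03", b"\x01\x03"]
blocks = dedupe_meta(dumps)
```

No serialisation to JSON or strings is needed at any point; the comparison happens directly on the binary fields.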
18:15
supragyaraj
is it just me or Kjetil just comes, does some magic and poof... he goes? :)
18:15
BAndiT1983
yep, he is a fairy
18:15
g3gg0
hehe
18:15
g3gg0
depends on how ">" operator was defined in this context :)
18:16
Kjetil
maybe I shouldn't joke about this. Someone might take it seriously
18:16
supragyaraj
let's not go to other world :)
18:16
supragyaraj
an emulation environment could look as follows:
18:17
supragyaraj
a file with data in a format (to be discussed) that would make it look like an AXIOM stream
18:17
supragyaraj
that could be piped to a program which saves it to MLV as fast as it could on disk
18:18
supragyaraj
other system... reads the cache (concurrently or after the MLV writer is done) and then writes DNG sequence
18:18
supragyaraj
(the last one is in place)
18:18
BAndiT1983
concurrently is more trouble than required
18:18
supragyaraj
And finally time them and optimise them
18:18
g3gg0
that can for the first implementation also be done in post
18:18
g3gg0
see -> BAndiT1983
18:19
g3gg0
as soon the recording stops, the CDNG conversion could be started
18:19
supragyaraj
that is one plan for sure
18:19
g3gg0
either like the FUSE things work "in-place" e,g, via samba
18:19
g3gg0
or copying the CDNG to the disk
18:19
BAndiT1983
just image the on-set situation: things are filmed, then backed up; if someone would convert live and lose data, then he/she will be a dead man/woman
18:20
BAndiT1983
*imagine
18:20
g3gg0
definitely!
18:20
supragyaraj
so two subparts: Stream to MLV on the fly
18:20
supragyaraj
and the other: MLV to CDNG another time (maybe next year)
18:20
g3gg0
and one hint -> you would most probably never implement CDNG writing on your own
18:21
supragyaraj
seems like it
18:21
g3gg0
https://www.magiclantern.fm/forum/index.php?topic=5618.0
18:21
g3gg0
theres a 55 page discussion about cinemadng
18:21
g3gg0
Questions: (*1) Raw-Data - Subpixel-Chunks, MSB or LSB first? LittleEndian, so LSB first, right? (*2) is it worth that? Maybe another Fileformat would be more usable? DPX? EXR? TIF?
18:21
g3gg0
> thx for coming chmee, save us from Adobe hell.
18:21
g3gg0
> As you can see stills standard DNG EXIF is very different to Cinema DNG.
18:22
g3gg0
there are many comments that will give you a picture of how complex CDNG is
18:22
supragyaraj
[stills standard DNG EXIF is very different to Cinema DNG] -> what?
18:22
supragyaraj
never knew this
18:22
supragyaraj
:)
18:22
g3gg0
> no. cinema dng and "simple" dng differs much - in terms of mandatory tags. i use a cdng-template, i merged myself from looking into another cdng-frames. just to clarify some things:
18:23
g3gg0
thats why i recommend to *not* reinvent the CDNG wheel
18:23
supragyaraj
aye aye, I am afraid now
18:24
g3gg0
better go the open source way in this case and use smth existing and make it fit
18:24
supragyaraj
so should I begin emulation -> stream to MLV flush to disk system?
18:25
g3gg0
some things should first get checked:
18:25
g3gg0
would you write the RAW12 bitstream unmodified into VIDF frames and then modify the CDNG converter (maybe MLVFS) so it can work with it? (i'd prefer that way)
18:25
supragyaraj
[ side note: 5 users on Gdoc schematic diagram :), who are the other 2? ]
18:25
g3gg0
or:
18:26
g3gg0
would you convert the RAW12 into the canon used bit encoding so you dont have to touch the CDNG encoder
18:26
BAndiT1983
Kjetil is probably trying to do his magic on the document ;)
18:27
supragyaraj
the latter for sure but if it hampers the stream speed... then the following:
18:27
g3gg0
the first option is probably a bit more work (modifying two things the same time plus extending MLV's header specification for non-linear stuff?)
18:28
supragyaraj
stream -> save somehow (S). when get time S -> MLV (canon format) -> CDNG
18:28
g3gg0
the latter is probably the quickest win
18:28
g3gg0
hmm
18:28
g3gg0
when thinking about it, i am not sure
18:28
supragyaraj
why
18:28
g3gg0
especially when the cache should not contain post-processing (like gamma) the latter one is probably worse
18:29
g3gg0
but still a quick-win prototype
18:30
supragyaraj
don't know how canon bit encoding is different?
18:30
BAndiT1983
guys, don't forget that it will never satisfy demands, as every project uses a different format, like EXR, DNG etc., so people will convert to the required format in post, just focus on the way to that first
18:30
BAndiT1983
prores also
18:30
g3gg0
yeah
18:31
BAndiT1983
endianness depends on the MCU/CPU in the camera, some are big-endian, others little-endian
18:31
supragyaraj
I am losing track here... are we talking about endianness when we are talking about bit encodings?
18:31
BAndiT1983
it can be read out by using markers in the beginning, see tiff header
18:31
supragyaraj
or bigger forces are in play here?
18:32
BAndiT1983
you are losing track, because you are trying to do the details first
18:32
BAndiT1983
stay at the big picture for now, so we can finalize the pipeline, at least in general terms
18:32
g3gg0
my point was the gamma curve in axiom was special, right?
18:32
g3gg0
multi part
18:32
g3gg0
dont remember the term
18:32
g3gg0
multiple slope?
18:33
BAndiT1983
can't find info about that in wiki
18:33
BAndiT1983
this one? -> https://wiki.apertus.org/index.php/CMV12000
18:33
BAndiT1983
adc slope?
18:34
g3gg0
hmmm not sure
18:34
BAndiT1983
will look in the datasheet, maybe there is something
18:35
BAndiT1983
page 36 in datasheet
18:35
g3gg0
anyway. the latter option would involve converting the pixel values on the barebone when writing MLV
18:35
BAndiT1983
that would be preferable, as it allows more adjustments in post
18:35
supragyaraj
so how should I go about it? 1 (change MLVFS) or 2 (additional step)
18:36
g3gg0
this means you have to apply some multi slope correction so you will have linear data.
18:36
supragyaraj
seems doable
18:36
supragyaraj
however, not sure if doable on the fly
18:36
g3gg0
can CDNG tools handle this multi slope raw?
18:36
g3gg0
(i suspect not!)
18:37
g3gg0
(i bet they just implement the minimum they need to for their favorite cameras)
18:37
BAndiT1983
many things are calculated afterwards usually
18:37
BAndiT1983
with existing image data
18:38
BAndiT1983
have also seen just generic values in C/DNG files, so it's mostly "faked"
18:38
g3gg0
so: i'd start converting the RAW12 to a normal linear raw if the multi slope feature is used
18:39
g3gg0
ergo: latter option is sufficient for now
18:39
sups
joined the channel
18:40
ArunM
joined the channel
18:40
BAndiT1983
ah, read the specs again, there was a linearization table
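Linearizing multi-slope (PLR) sensor data with such a table before DNG writing could be sketched as below. The 2-segment curve is invented for illustration; the real CMV12000 kneepoints come from the datasheet and register settings:

```python
# Toy linearization of a 2-slope (PLR-style) transfer curve via LUT.
# Knee position and slope are made-up values, not CMV12000 figures.

def build_lut(knee=2048, slope_after=4, max_code=4096):
    """Map 12-bit PLR codes back to linear values (toy 2-slope curve)."""
    lut = []
    for code in range(max_code):
        if code < knee:
            lut.append(code)                        # 1:1 below the knee
        else:
            lut.append(knee + (code - knee) * slope_after)
    return lut

lut = build_lut()
linear = [lut[c] for c in (100, 2048, 2050)]
```

With the table built once from the sensor settings, per-pixel linearization is a single lookup, which is why it is cheap enough to run while writing the cache.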
18:40
g3gg0
if you implement this and you get the feeling it might be too slow for a barebone, then you can go for the "extend CDNG-tool" variant
18:40
sups
I am losing packets on IRC here
18:40
sups
again :)
18:40
BAndiT1983
sups, which irc client?
18:40
g3gg0
oh ok
18:40
supragyaraj
left the channel
18:41
sups
isn't multiple slope an option to achieve HDR using PLR?
18:41
sups
HexChat BAndiT1983 !
18:41
g3gg0
@sups: thats what i meant
18:41
BAndiT1983
sups, same here, no problems so far
18:41
BAndiT1983
ok guys, have to go for a walk, will read logs later
18:41
BAndiT1983
see you
18:41
BAndiT1983
changed nick to: BAndiT1983|away
18:41
g3gg0
and i am quite sure even if CDNG could handle that, the tools wont
18:41
sups
see you too BAndiT1983|away
18:41
g3gg0
so for the first prototype: just convert it to a normal linear raw
18:42
g3gg0
i'd start writing the cache already in "canon-style" raw
18:42
g3gg0
then you can check your cache-MLV with common tools if the data is fine
18:42
sups
"canon-style" documentation? reference?
18:43
g3gg0
e.g. mlv_dump
18:43
g3gg0
thats the MLV VIDF format
18:43
g3gg0
raw.h
18:44
sups
[thats the MLV VIDF format] -> reference?
18:44
g3gg0
https://bitbucket.org/hudson/magic-lantern/src/0e6493e8ac5e118976b94237b5048c436f379d98/src/raw.h?at=unified&fileviewer=file-view-default
18:45
sups
this is canon style?
18:45
g3gg0
yep
18:45
sups
will look into it. Thank you
18:45
g3gg0
https://bitbucket.org/hudson/magic-lantern/src/0e6493e8ac5e118976b94237b5048c436f379d98/modules/mlv_rec/mlv_dump.c?at=unified&fileviewer=file-view-default
18:45
g3gg0
look for bitextract
18:46
g3gg0
thats how the mlv tools read/write pixels from/to a VIDF stream
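A tiny "bitextract"-style helper of the kind mlv_dump uses for VIDF pixel data might look like this. It assumes a simple big-endian 12-bit packing (two samples per 3 bytes), which may differ from the actual AXIOM/Canon bit order:

```python
# Unpack 12-bit samples from a packed byte stream (bitextract-style).
# Assumption: big-endian 12-bit packing, two samples in every 3 bytes;
# the real AXIOM/Canon bit order may differ.

def unpack12(data):
    """Return the 12-bit samples stored in `data` (len must be % 3 == 0)."""
    out = []
    for i in range(0, len(data), 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        out.append((b0 << 4) | (b1 >> 4))          # first sample
        out.append(((b1 & 0x0F) << 8) | b2)        # second sample
    return out

samples = unpack12(bytes([0xAB, 0xCD, 0xEF]))      # 0xABC and 0xDEF
```

The inverse (packing) mirrors the same shifts, which is essentially what the MLV tools' read/write path does per pixel.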
18:46
g3gg0
as said, first steps
18:46
g3gg0
if you realize it will get too slow, put that effort into the CDNG writer
18:47
g3gg0
okay will leave now too
18:47
g3gg0
lets continue via mail as usual
18:47
sups
okay... g3gg0
18:47
g3gg0
cu
18:47
sups
thanks for your time
18:47
sups
see you
18:48
sups
leaving for now :)
18:49
g3gg0
left the channel
18:53
sups
left the channel
19:00
ArunM
left the channel
19:51
TofuLynx
joined the channel
20:01
TofuLynx
left the channel
20:42
illwieckz
left the channel
20:42
illwieckz
joined the channel
21:30
seaman
joined the channel
22:19
danieel
left the channel
22:19
BAndiT1983|away
left the channel
22:19
BAndiT1983|away
joined the channel
22:19
BAndiT1983|away
changed nick to: BAndiT1983
22:48
TofuLynx
joined the channel
22:53
seaman
left the channel
22:53
lexano
left the channel
22:53
madonius
left the channel
22:54
seaman
joined the channel
22:54
lexano
joined the channel
22:54
madonius
joined the channel
23:00
seaman
left the channel
00:33
Bertl_oO
off to bed now ... have a good one everyone!
00:33
Bertl_oO
changed nick to: Bertl_zZ