
#apertus IRC Channel Logs

2018/05/06

Timezone: UTC


00:22
XD[m]
left the channel
00:41
vup[m]
left the channel
01:21
rton93
left the channel
01:38
BAndiT1983
changed nick to: BAndiT1983|away
04:48
Bertl_zZ
changed nick to: Bertl
04:48
Bertl
morning folks!
05:00
supragya
joined the channel
05:01
supragya
hi Bertl
05:01
supragya
I have a few queries regarding the RAW data pipeline...
05:04
supragya
In a gist... I would like to know the highest outflow (speed/bandwidth, maybe in MBps) I could expect out of the USB3 port... if it were only RAW12 data
05:05
supragya
also, what is the separation of tasks between the Zynq and the MachXO2 in the USB3 path
05:05
supragya
(I read the logs often, so you could reply even if I am not on IRC ) :)
05:07
supragya
XO2 and Zynq models in use?
05:08
supragya
ZE/HC/HE?
05:18
supragya
got it ... 640HC
05:20
Bertl
the USB3 out is limited by the maximum bandwidth of the FT60x from FTDI (see datasheet)
05:20
Bertl
the Zynq is the 7020 model used in the MicroZed
05:20
Bertl
and the Lattice is typically the MachXO2 1200HC
05:21
Bertl
although it is kind of pin compatible with the 640 and 2000
05:21
Bertl
MachXO2 is used as routing fabric and controls the GPIOs as well as the I2C interface to the plugin modules
05:22
Bertl
it doesn't transport any high speed data
05:22
Bertl
the USB3 module also contains a MachXO2 which receives the high speed data from the Zynq and converts it to the proper output for the FTDI chip
05:23
Bertl
(see USB3 gearwork task)
05:23
Bertl
off to the faire now ... bbl
05:23
Bertl
changed nick to: Bertl_oO
05:40
seaman
joined the channel
06:14
supragya
left the channel
06:25
RexOrCine|away
changed nick to: RexOrCine
06:38
lexano
left the channel
06:43
seaman
left the channel
06:53
lexano
joined the channel
08:03
rton
joined the channel
08:36
BAndiT1983|away
changed nick to: BAndiT1983
12:07
TofuLynx
joined the channel
12:17
TofuLynx
left the channel
12:23
RexOrCine
changed nick to: RexOrCine|away
14:02
TofuLynx
joined the channel
14:11
BAndiT1983
hi TofuLynx, long time no see
14:14
TofuLynx
left the channel
14:15
TofuLynx
joined the channel
14:20
TofuLynx
left the channel
14:20
TofuLynx
joined the channel
14:32
supragya
joined the channel
14:32
supragya
Good evening!
14:32
BAndiT1983
hi supragya
14:32
supragya
(oops, quite early for that :) )
14:34
supragya
anyhow, BAndiT1983 : Bertl told me in the morning that the limiting factor of USB3 would be FTDI FT60x
14:34
BAndiT1983
read the logs
14:34
supragya
that is 5Gbps
14:34
supragya
about 33.333 fps on 18MB per frame
14:35
supragya
Are we looking at that low of recording fps?
14:35
BAndiT1983
currently looking for any sort of stream, so we can proceed with practical side of it
14:36
supragya
hmm, I was thinking, why not we just piggyback the metadump from the camera with every frame?
14:36
supragya
*metadata dump
14:36
supragya
and stream it out...
14:37
supragya
BAndiT1983: do you have knowledge of how the internal system is laid out?
14:39
supragya
[piggyback] -> leaving the recorder to encode it to whichever format it likes
14:39
BAndiT1983
HDMI?
14:39
TofuLynx
left the channel
14:39
supragya
USB3
14:40
TofuLynx
joined the channel
14:41
BAndiT1983
and which recorder is meant here?
14:41
supragya
a Recording computer (PC)
14:42
supragya
The very simplistic schematic to refer is always this: https://trello.com/c/aX7KJihl/24-the-planned-usb3-recording-pathway
14:42
supragya
However, as Bertl_oO told me, the USB3 chain is as follows:
14:42
BAndiT1983
recording software should be open and the format known, otherwise you will lose information or get unnecessary data
14:45
supragya
Sensor (serial data) -> FPGA (small FIFO buffer) -> AXI Memory Writer (Serial to Memory) -> DDR3 Memory Buffer -> AXI Memory Reader (Memory Block to FIFO) -> (Video Processing Stuff) -> TMDS (HDMI) Encoder -> Serialization to D0-D3 -> Zynq (via Serializer) -> MachXO2 (another FPGA) -> FTDI FT60x -> USB3 output
14:45
supragya
[TMDS (HDMI) Encoder] -> this I don't know if it comes into picture or not
14:46
BAndiT1983
HDMI encoder would be an obstacle for clear format
14:47
supragya
? this is a pathway Bertl explained to me, we are not looking at HDMI
14:48
supragya
what I suggest is that we stop the serialisation D0-D3 for a few moments between frames and add the metadata that the camera captures (as Bertl says, a simple register dump) behind the frame before serialisation; all could be done on one channel
14:49
BAndiT1983
meta data consists of multiple things, not only image sensor data
14:49
BAndiT1983
if you are sure, that the output would be fast enough, then go on
14:50
supragya
My calculations are as follows, (need a hardware expert on this however)
14:51
TofuLynx
left the channel
14:52
supragya
Keep it 30fps, 30*18*8 = 4320 Mbps, leaves out nearly 680 Mb per second = 85 MB -> 2.8333 MB/frame
14:52
supragya
for meta
14:52
supragya
if we operate at full 5Gbps
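The budget arithmetic above can be checked with a short script; it assumes the full 5 Gbit/s line rate of the FT60x and 18 MB RAW12 frames, as stated in the chat.

```python
# Back-of-envelope check of the per-frame metadata budget discussed above.
# Assumes the full 5 Gbit/s line rate and 18 MB RAW12 frames at 30 fps.
LINK_MBPS = 5000          # 5 Gbit/s
FRAME_MB = 18             # one RAW12 frame
FPS = 30

video_mbps = FPS * FRAME_MB * 8        # 4320 Mbit/s for video
spare_mbps = LINK_MBPS - video_mbps    # 680 Mbit/s left over
spare_mb_per_s = spare_mbps / 8        # 85 MB/s
meta_mb_per_frame = spare_mb_per_s / FPS

print(video_mbps, spare_mb_per_s, round(meta_mb_per_frame, 4))
# 4320 85.0 2.8333
```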
14:53
supragya
still, I need someone like g3gg0, or Bertl on this, as to how it is feasible, where do we induce such piggybacking (in Zync maybe)
14:53
BAndiT1983
collecting meta data is also important, as this factor could slow the whole stuff down
14:54
supragya
Reasoning: As each frame has its own meta, the conversion to DNG sequence is easy
14:54
supragya
[collecting meta data is also important] -> yes, but here is what to look at:
14:54
supragya
currently axiom cameras can have the following meta only:
14:55
supragya
Sensor(Reg dump), Lens(has to be discussed), Attached modules(maybe)
14:55
supragya
anything else?
14:57
supragya
[could slow the whole stuff] -> can do, but that is to be discussed, as this has to be done anyhow; even if meta does not translate to every frame in the stream we take out of the camera, it sure has to be retrieved and processed in the camera
14:59
BAndiT1983
i still doubt we can stream out DNG or any other format that needs bigger processing effort
14:59
supragya
not dng
15:00
supragya
** I struggle to say we can get DNG out **
15:00
supragya
it will be a RAW12 stream piggybacked with meta
15:01
supragya
the recording computer is to take this RAW12 and meta and convert it to DNG (or any other format for that matter) at its leisure
15:01
supragya
or on the fly if it has enough strength
15:01
BAndiT1983
how will the raw12 data be separated? how will the metadata be recorded along with it?
15:02
supragya
let me draw a schematic
15:03
BAndiT1983
can we get away with normal SSD or do we need some faster system for recording without losing data? or is the data buffered first, after receiving it over USB?
15:03
supragya
yes, buffered in primary memory, flushed as we get time
15:04
supragya
then this flushed data is taken up (along with appropriate flushed meta) to convert it to suitable format
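The buffering idea just described could be sketched as a small producer/consumer setup; names and sizes here are illustrative, not the real pipeline.

```python
# Minimal sketch of the buffering idea: frames land in an in-memory queue
# as they arrive over USB3 and a writer thread flushes them to disk when
# it gets time. Frame size and the BytesIO sink are made up for this demo.
import io, queue, threading

buf = queue.Queue(maxsize=64)          # primary-memory buffer
sink = io.BytesIO()                    # stands in for the SSD file

def writer():
    while True:
        frame = buf.get()
        if frame is None:              # sentinel: capture finished
            break
        sink.write(frame)              # flushed as we get time
        buf.task_done()

t = threading.Thread(target=writer)
t.start()
for i in range(3):
    buf.put(bytes([i]) * 4)            # pretend 4-byte "frames"
buf.put(None)
t.join()
print(len(sink.getvalue()))            # 12
```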
15:04
supragya
MLV's non-sequential thing is great because they write directly over an SD, but we stream out the data
15:06
BAndiT1983
the stream needs also preparation
15:07
BAndiT1983
and don't forget about future extensions of meta data
15:07
BAndiT1983
which format will meta data have?
15:11
supragya
schematic: https://docs.google.com/document/d/1gxJGt74VogWa_CyxGyHXaE6gpYT1UJDx67uYQ9NmMqQ/edit?usp=sharing
15:11
supragya
[ don't forget about future extensions of meta data] -> 2 points on this
15:12
supragya
2.8333 MB/frame is a good amount of metadata that I don't think we will ever be able to utilise
15:13
supragya
Even if we don't get 5Gbps, we pass the RAW12 with a "Sequence ID"
15:13
supragya
this uniquely identifies the frame
15:13
supragya
On another channel (maybe GbE), we can stream the metadata (that we would have piggybacked)
15:14
supragya
with the Sequence IDs
15:14
supragya
The PC encoder has to just match the meta with frames
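The matching step on the PC side could look like the following sketch; the record layout and field names are assumptions for illustration only.

```python
# Sketch of the sequence-ID matching: frames arrive on one channel,
# metadata records on another (e.g. GbE), and the encoder pairs them by ID.
frames = {1: b"frame-1", 2: b"frame-2", 3: b"frame-3"}   # sID -> RAW12 data
meta   = {1: {"exposure": 10}, 3: {"exposure": 12}}      # sID -> meta dump

paired = {sid: (frames[sid], meta.get(sid)) for sid in sorted(frames)}
# Frames whose meta has not arrived yet keep None until the GbE side catches up.
print([sid for sid, (_, m) in paired.items() if m is None])   # [2]
```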
15:16
BAndiT1983
register dump will be not sufficient, as this covers only image sensor, other data will be received from different modules in system
15:16
supragya
those will be piggybacked behind sensor dump then...
15:16
BAndiT1983
maybe JSON or BSON or flatbuffers, but XML would be overkill
15:17
supragya
the format is not the worry here, it is the piggybacking technique
15:17
BAndiT1983
XMP has a lot of overhead, as far as I've seen the specification for it
15:17
supragya
then we won't use it
15:17
supragya
Ah, I now understand your concern
15:19
supragya
https://lab.apertus.org/T951 -> my mail was to satisfy the point 2 (the final format), I never tried to put forth that being the way we would stream the data out of the camera itself
15:19
supragya
I apologize if I did, that was not my intention
15:19
BAndiT1983
supragya, no problem here, just trying to get a clear picture of the process
15:24
supragya
have I missed to tell you something? Is it clear..?
15:29
BAndiT1983
no, everything ok
15:30
supragya
feedbacks?
15:30
supragya
what do you think about this from your perspective
15:30
supragya
?
15:33
BAndiT1983
everything looks clear for now, but the practical side of the thing is totally unknown yet, so can't say much
15:34
BAndiT1983
meeting in 30 minutes?
15:34
supragya
Yes
15:35
g3gg0
joined the channel
15:35
supragya
for g3gg0!
15:35
supragya
Good evening g3gg0 :)
15:35
g3gg0
hi
15:36
g3gg0
need some shower. cu in 15 min
15:36
supragya
sure
15:38
BAndiT1983
supragya, discussion in main channel? or is there something we should do in separate one?
15:38
supragya
your call BAndiT1983.
15:38
BAndiT1983
my idea of gsoc is, to do most stuff in main one, as long as we don't have to discuss critical/private stuff of the thing
15:38
supragya
private? I don't have that on plate today
15:39
BAndiT1983
just saying, sometimes we have to discuss some credentials or so, then separate channel is the way, but as it's an OSS org, so as much open discussions as possible
15:40
supragya
anyways, me too for shower, will be back soon. BAndiT1983: I am fine any way as long as we could communicate :)
15:40
BAndiT1983
ok, just give me a ping, when you are back
15:42
supragya
%n
15:57
g3gg0
re
15:57
supragya
re
15:58
supragya
g3gg0: http://irc.apertus.org/index.php?day=06&month=05&year=2018
15:59
supragya
Important: 16, 50, 53, 67, 71, 84, 88, 76
16:00
supragya
113, 105, 116,
16:00
supragya
g3gg0: https://docs.google.com/document/d/1gxJGt74VogWa_CyxGyHXaE6gpYT1UJDx67uYQ9NmMqQ/edit
16:00
supragya
sorry: https://docs.google.com/document/d/1gxJGt74VogWa_CyxGyHXaE6gpYT1UJDx67uYQ9NmMqQ/edit?usp=sharing
16:02
g3gg0
as far as i understood se6astian, we would have two USB3 chips with up to 3.2 gbits/s each
16:03
g3gg0
a single FTDI can achieve data rates up to 5gbits/s, but the net transfer rate will of course be lower
16:03
supragya
"Sequence ID" then? this could help?
16:03
supragya
to split the stream in two
16:04
supragya
http://irc.apertus.org/index.php?day=06&month=05&year=2018#105
16:04
supragya
BTW, when did se6astian tell you this g3gg0 ?
16:04
g3gg0
> the recording computer is to take this RAW12 and meta and convert it to DNG (or any other format for that matter) in it's leisure time
16:05
g3gg0
don't underestimate DNG writing!
16:05
supragya
Here is how it works, let me get a few links
16:05
g3gg0
even my desktop machine can't handle converting a MLV into DNG in real time
16:06
supragya
that's why "leisure", we don't need realtime as long as we can store the stream somewhere fast enough
16:06
supragya
(schematic)
16:07
g3gg0
hehe you are redesigning what MLV was meant for
16:08
supragya
kindof with a different perspective
16:08
supragya
to not let camera do much
16:08
supragya
if you look at [http://www.cmosis.com/products/product_detail/cmv12000]
16:08
supragya
"Datasheet", page 62
16:09
supragya
Bertl says we store the meta as a register dump
16:09
g3gg0
yep
16:09
BAndiT1983
but we need also additional data, like time
16:09
supragya
If we just piggyback this dump, or better yet, pass it through a GbE with "Sequence ID"
16:09
g3gg0
and a timestamp
16:10
supragya
and add other meta like time/ lens info etc... piggybacked
16:10
g3gg0
yep
16:10
supragya
then all encoding is to be done by encoder...
16:10
supragya
off the camera...
16:10
BAndiT1983
some sort of header has also to be provided with info of meta data sizes, if there are multiple chunks between frames
16:11
supragya
We can have KLV format for that (key, length, value)
16:11
g3gg0
...
16:11
supragya
keys maybe as (0-frame, 1-lensMeta, 2-sensorMeta)
16:11
BAndiT1983
same as tags in tiff etc.
16:11
supragya
the difference I feel g3gg0 is this:
16:12
supragya
MLV is supposed to be structured in camera, but here, we do not structure that in camera
16:12
supragya
we just throw out what we have, as fast we can
16:12
supragya
the only thing to make sure is to have the encoder version closely match the hardware version
16:13
BAndiT1983
it's more like firmware version
16:13
supragya
yes
16:14
supragya
piggyback is of course only a solution on a single channel
16:14
supragya
however, as this may not be feasible, the SequenceID approach is taken
16:14
supragya
[sID|RAW12 frame][sID|RAW12 frame][sID|RAW12 frame][sID|RAW12 frame][sID|RAW12 frame]
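One way the [sID|RAW12 frame] records above could be laid out on the wire is a fixed header followed by the payload; the 32-bit big-endian ID plus length field here is an assumption for illustration, not a settled spec.

```python
# Hypothetical framing for the [sID|RAW12 frame] stream: 32-bit sequence ID,
# 32-bit payload length, then the RAW12 bytes.
import struct

def pack_record(sid: int, payload: bytes) -> bytes:
    return struct.pack(">II", sid, len(payload)) + payload

def unpack_record(blob: bytes):
    sid, length = struct.unpack_from(">II", blob)
    return sid, blob[8:8 + length]

rec = pack_record(42, b"\x0f" * 6)
assert unpack_record(rec) == (42, b"\x0f" * 6)
```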
16:15
supragya
in case the sensors change, so does the Keys in the KLV format, encoder remains unchanged
16:15
supragya
more like (2-CMV12ksensorMeta, 3-someOtherSensorMeta)
16:16
g3gg0
okay now let me play a bit: if we use as KLV format a 32 bit key, 32 bit length and add an accurate microsecond timestamp to get even frame rate jitter recorded - you would end up with what is MLV like ;)
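g3gg0's "play" above, written out as a sketch: 32-bit key, 32-bit length, 64-bit microsecond timestamp, then the value. The key numbers are the hypothetical ones from earlier in the chat, and the exact field order is an assumption.

```python
# KLV-with-timestamp block as suggested: key, length, microsecond timestamp,
# value -- which is indeed very close in shape to an MLV block header.
import struct

KEY_FRAME, KEY_LENS_META, KEY_SENSOR_META = 0, 1, 2   # hypothetical keys from the chat

def klv(key: int, value: bytes, ts_us: int) -> bytes:
    return struct.pack("<IIQ", key, len(value), ts_us) + value

blk = klv(KEY_SENSOR_META, b"\x00" * 16, 123456)
key, length, ts = struct.unpack_from("<IIQ", blk)
assert (key, length, ts) == (2, 16, 123456)
```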
16:16
BAndiT1983
to prevent that, consider something like json with generic keys for sensor data
16:16
BAndiT1983
as we don't need the whole dump
16:16
g3gg0
save the effort encoding binary data into JSON but use binary tags and voila
16:17
g3gg0
the only thing that might prevent re-using existing toolchains might be the fact that the raw bitstream is encoded differently
16:17
BAndiT1983
there is also BSON for binary json
16:18
supragya
g3gg0: seems fitting, my issue is whether the Zynq board would be fast enough to encode such structs at such a high rate, picking relevant pieces from register dumps while also handling other features of the camera
16:18
BAndiT1983
we can re-use same memory, as meta data will be the same length for every frame
16:18
g3gg0
come on, you recommend encoding binary data into json, but writing structures is slowe?
16:18
g3gg0
*slower
16:18
BAndiT1983
just write new values to it; one has to ensure that the memory is zeroed beforehand if the data is shorter, like a text string
16:19
supragya
I never went for JSON ;)
16:19
BAndiT1983
g3gg0, nope, it was just about format
16:19
g3gg0
ah that came from you :)
16:20
g3gg0
no matter if JSON or BSON. its a dynamic, structured format
16:20
BAndiT1983
for performance a fixed one would fit better
16:21
BAndiT1983
fixed structure for tags, gathered in one memory chunk and data replaced there
16:21
supragya
the only reason I described regdump to be streamed is performance... as once it has been read... nothing more than streaming has to be done...
16:21
supragya
however if struct writing can be done in time, why not :)
16:21
BAndiT1983
memset should be fast, if you pre-allocate
16:24
supragya
so how are we to proceed now?
16:25
supragya
[memset]-> wouldn't we need to call it multiple time
16:25
supragya
*times... that's struct writing ain't it?
16:25
g3gg0
no need for memset
16:25
supragya
*isn't it
16:25
g3gg0
either you just memcpy or set the elements manually
16:26
supragya
exactly
16:26
BAndiT1983
you can also memcopy, my point was, that we should re-use pre-allocated chunk of memory for meta data
16:27
g3gg0
yeah, prevent malloc if possible
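The re-use-one-pre-allocated-chunk point can be sketched with a bytearray standing in for the fixed metadata buffer; the buffer size is an arbitrary assumption.

```python
# Allocate once, overwrite in place per frame, zero only the tail if the
# new data is shorter (the "memset" concern from the discussion).
META_SIZE = 32
meta_buf = bytearray(META_SIZE)        # allocated once, reused every frame

def store_meta(data: bytes):
    assert len(data) <= META_SIZE
    meta_buf[:len(data)] = data        # memcpy-style overwrite
    for i in range(len(data), META_SIZE):
        meta_buf[i] = 0                # zero the remainder

store_meta(b"ABCDEFGH")
assert bytes(meta_buf[:8]) == b"ABCDEFGH" and not any(meta_buf[8:])
```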
16:27
supragya
while setting manually can reduce size of the regdump, it imposes new restrictions -> for every new sensor, a new struct setter from regdump has to be written (anyways we would have to, but then it would have resided at encoder only)
16:28
supragya
now it has to reside on the Zynq
16:28
supragya
result: unneeded load on Zynq, GbE is not utilised fully
16:28
supragya
benefit: MLV libraries
16:28
g3gg0
so from what i've read, i'd split the writing part into two processes
16:29
g3gg0
a) USB3 -> intermediate file
16:29
g3gg0
b) intermediate file -> CDNG
16:29
g3gg0
right?
16:29
supragya
yes
16:29
g3gg0
so i will recommend: pick MLV as intermediate file and you have b) for free
16:30
g3gg0
you then have to:
16:30
g3gg0
a) receive video frames -> write as e.g. JPEG92 or MLV formatted raw
16:31
g3gg0
b) extract interesting information from metadata and write them - if they changed since the last metadata
16:31
BAndiT1983
as we don't have the jpeg92 part in fpga, it's not an option currently
16:31
supragya
a fitting point to note here:
16:31
supragya
with sID based system
16:31
g3gg0
wasnt it on the "small PC" side ?
16:31
supragya
if a single stream is capable of 25fps
16:31
supragya
we can have two streams and then we have 50fps
16:31
supragya
as meta is on GbE
16:32
supragya
MLV is single stream, am i wrong?
16:32
g3gg0
no, just write two files
16:32
g3gg0
(called chunks)
16:33
supragya
then rearrange on timestamp?
16:33
g3gg0
yes
16:33
g3gg0
but back to writing. do we agree we are on an external barebone PC which is connected to axiom using two USB3 ?`
16:33
g3gg0
and goal was to write CDNG ?
16:33
supragya
yes, we are.... but there sits encoder
16:33
supragya
struct writing is in camera
16:34
supragya
on zync board
16:34
g3gg0
axiom side is restricted to META/RAW12/...
16:34
BAndiT1983
forget the encoder for a moment, it would be ok just to get the stream without dropped data first
16:34
g3gg0
right?
16:34
supragya
BAndiT1983: in that case, to get a stream out, as soon as USB3 gearwork is done, we will get a stream of RAW12 anyways
16:35
supragya
(that is the problem statement)
16:35
supragya
we skip some clocks to enter some data... MLV things in stream
16:35
supragya
and then we get the final stream
16:36
g3gg0
https://app.sketchtogether.com/s/sketch/qBIfw.1.1/
16:36
g3gg0
can you see this?
16:36
BAndiT1983
yep
16:39
g3gg0
sketched correctly?
16:39
supragya
let me add
16:39
g3gg0
zynq is meant to be within the axiom camera
16:40
supragya
yes it is
16:40
g3gg0
errh i mean that was obvious :)
16:40
g3gg0
AXIOM abstracts this
16:41
g3gg0
all we see to the outer world is the CMOS and the two USB3
16:41
g3gg0
call it black box
16:41
g3gg0
light in, USB3 out
16:42
g3gg0
nah remove MLV for now
16:42
g3gg0
we are talking about how it one level above would look like
16:43
g3gg0
not the technical implementation
16:43
g3gg0
"meta" is more or less the register dump of the CMOS and some other stuff
16:44
supragya
yes, but then, if everything passes the Zynq, skipping clocks for RAW writing to accommodate meta and to write structs at high rates would put pressure on the Zynq... is it capable of that?
16:44
supragya
otherwise, what g3gg0 says is correct
16:44
supragya
In such a case
16:45
g3gg0
so okay thats what exists (left side) and what should get implemented (right side)
16:46
supragya
a few changes on left -> code for the Zynq to serialise/write the stream into MLV
16:46
supragya
chunks etc
16:46
g3gg0
wait, no MLV yet
16:46
supragya
okay
16:46
g3gg0
just the top level view
16:46
g3gg0
we all agree?
16:46
supragya
then to
16:46
supragya
yes g3gg0
16:46
g3gg0
BAndiT1983 ?
16:47
supragya
then code for the Zynq... to serialise the RAW and meta in the stream
16:47
supragya
BAndiT1983: is busy drawing g3gg0 :)
16:47
g3gg0
maybe not, lets do the next steps
16:48
g3gg0
on the barebone side we might have not enough power to save CDNG directly, right?
16:48
supragya
on the right,
16:48
supragya
yes exactly
16:48
g3gg0
so the "magic here" has to buffer the RAW12 stream locally
16:48
g3gg0
let me draw
16:49
xfxf
left the channel
16:49
xfxf
joined the channel
16:50
supragya
then it is a two way step:
16:50
supragya
1. to flush the stream to SSDs directly (I would go for a RAID maybe)
16:50
supragya
2. to initiate a process that can access this flushed data to convert it to CDNG or whatever we feel fit, and then flush to disk back
16:50
BAndiT1983
nope, just watching the discussion and the image
16:50
BAndiT1983
yes, seems legit for now
16:50
BAndiT1983
i will bring again the example of ARRIRAW: our data stream is stored on the PC first or any other device providing storage, then converted afterwards
16:50
BAndiT1983
focus should be on lossless data
16:50
supragya
so ARRIRAW does this? :)
16:50
supragya
never knew...
16:51
supragya
but figured it would be used at least somewhere
16:51
BAndiT1983
they store their primary data as proprietary ARRIRAW format, afterwards everything is converted to the needs of the projects/films
16:51
xfxf
left the channel
16:51
xfxf
joined the channel
16:51
g3gg0
yep
16:52
BAndiT1983
also saw somewhere a frame server example for arriraw, maybe in their ads
16:52
supragya
why would you need a frameserver here?
16:52
g3gg0
not the worst solution to me
16:52
g3gg0
lets keep focused
16:53
BAndiT1983
they do it, to provide preview
16:53
BAndiT1983
and to explore recorded meta data and so on
16:53
supragya
A few things have come up now:
16:53
supragya
1. to understand how the Zynq side is to be done
16:54
supragya
2. to finally set the stream format
16:54
supragya
3. how to cache on SSDs
16:54
supragya
4. A system to convert to CDNG (Encoder_
16:54
supragya
*)
16:54
g3gg0
nothing to do on zynq side. should have been some other work
16:55
g3gg0
you can assume "you will get the RAW12/metadata stream somehow"
16:55
supragya
that helps
16:55
g3gg0
ok then lets define that.
16:55
supragya
meta on same stream as frames
16:55
supragya
?
16:55
supragya
or a different GbE
16:55
g3gg0
might be as a file stream, a pipe or as an simplification, a local file
16:57
supragya
how should it look then? meta/RAW piggyback? seqID|Frame and seqID|meta on different channels? or MLV?
16:57
g3gg0
if you now used MLV as the "cache" format, the CDNG conversion could be done using existing software
16:57
g3gg0
the "from-zynq"-format?
16:57
g3gg0
i guess a series of RAW12
16:57
supragya
and meta?
16:58
g3gg0
probably, yeah
16:58
g3gg0
-> ask and make an assumption
16:58
supragya
? didn't get you
16:58
g3gg0
@bertl_oO / @se6astian
16:58
g3gg0
thats what i tell my students very often :D
16:59
g3gg0
if you dont know, ask people who could now and with all the information make an assumption on how it would look like
16:59
g3gg0
*who could know
16:59
supragya
okay... so this needs further investigation... and that I should ask Bertl_oO or se6astian, choose a fitting format and go on..
16:59
supragya
in case the "from-zynq" format is a RAW12 series, then a question is raised
17:00
supragya
are we then doing RAW12 series -> MLV -> CDNG?
17:00
g3gg0
yep
17:01
supragya
O.o 2 conversions, so that we don't have to make efforts on MLV-> CDNG
17:01
supragya
so two encoders... kind of?
17:01
g3gg0
noooo, MLV is not conversion
17:01
BAndiT1983
why?
17:01
supragya
chained in series
17:01
supragya
g3gg0: please elaborate the chain
17:02
g3gg0
1 min
17:03
g3gg0
re
17:03
g3gg0
if you want to write DNG, which pixel format will you have to use?
17:04
supragya
raw bayer pattern?
17:04
g3gg0
the same encoding you get out of the AXIOM?
17:04
supragya
yes
17:05
supragya
in fact the RAW could be memcpy'ed into the pixel data of DNG
17:05
g3gg0
oh okay, thought the AXIOM bitstream is a bit odd and cannot be written directly into DNG?
17:05
BAndiT1983
it's just raw12, fitting DNG rather well
17:05
g3gg0
(had in mind that the bitstream was kind of interlaced)
17:06
BAndiT1983
don't remember if byte swapping has to be done or not
17:06
supragya
interlaced ?
17:06
g3gg0
ok maybe i mixed something then
17:08
g3gg0
how i'd start then, is writing the RAW12 data into MLV's VIDF frames
17:08
supragyaraj
joined the channel
17:08
supragyaraj
[ byte swapping] -> can refer to OCcore
17:09
g3gg0
and modify e.g. MLVFS to accept that RAW12 bitstream for DNGs
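How writing RAW12 payloads into MLV VIDF frames could look, heavily simplified. The real VIDF header (see the headers in the magic-lantern tree) carries more fields (crop/pan position, frame space); this sketch keeps only blockType, blockSize, timestamp and frameNumber to show the shape of the idea, so treat the exact layout as an assumption.

```python
# Simplified VIDF-style block: 4-byte magic, 32-bit total size,
# 64-bit microsecond timestamp, 32-bit frame number, then the RAW12 bytes.
import struct

def vidf_block(frame_no: int, ts_us: int, raw12: bytes) -> bytes:
    header = struct.pack("<4sIQI", b"VIDF", 20 + len(raw12), ts_us, frame_no)
    return header + raw12

blk = vidf_block(0, 1000, b"\x12" * 9)   # 9 bytes = six 12-bit samples
magic, size, ts, n = struct.unpack_from("<4sIQI", blk)
assert magic == b"VIDF" and size == 29
```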
17:09
BAndiT1983
byte swapping: if required to accomplish that, then right format should be sent by the camera
17:09
g3gg0
as cinema dng is a lot of work to do it right, i'd reuse existing tools
17:10
supragyaraj
how about meta? do you save that for every frame in MLV
17:10
supragyaraj
?
17:10
supragya
left the channel
17:10
supragyaraj
if I understand right... a meta is (mode changer) for further frames
17:10
g3gg0
i'd try to only write updated LENS/EXPO blocks if they really changed
17:10
supragyaraj
so if the meta does not changes between frames, it should not be written
17:10
g3gg0
yep
17:11
g3gg0
https://www.magiclantern.fm/forum/index.php?topic=13152.0
17:11
supragyaraj
but the current system says we should write it all the time (unless you would like to strcmp both metas' serialisations somehow, which would be expensive)
17:11
supragyaraj
current system as in ... dumping quickly into cache
17:12
g3gg0
then as a first step, just write it
17:12
supragyaraj
serial -> you convert the struct/object to string (serialise to JSON maybe) and compare
17:13
supragyaraj
sure g3gg0!
17:13
g3gg0
o.O
17:13
g3gg0
do not even think about strings
17:13
g3gg0
neither in C nor in java
17:13
BAndiT1983
string compare is the slowest thing
17:13
g3gg0
the only string in raw video processing you would ever see is maybe the "author name"
17:14
g3gg0
or some "scene 32, take 19" strings
17:14
Kjetil
strncmp > memcmp :P
17:14
g3gg0
you do binary compares
17:15
g3gg0
you have binary data and you simply can compare fields in them
17:15
supragyaraj
is it just me or Kjetil just comes, does some magic and poof... he goes? :)
17:15
BAndiT1983
yep, he is a fairy
17:15
g3gg0
hehe
17:15
g3gg0
depends on how ">" operator was defined in this context :)
17:16
Kjetil
maybe I shouldn't joke about this. Someone might take it seriously
17:16
supragyaraj
let's not go to other world :)
17:16
supragyaraj
an emulation environment could look as follows:
17:17
supragyaraj
a file with data in a format (to be discussed) that would make it look like an AXIOM stream
17:17
supragyaraj
that could be piped to a program which saves it to MLV as fast as it could on disk
17:18
supragyaraj
other system... reads the cache (concurrently or after the MLV writer is done) and then writes DNG sequence
17:18
supragyaraj
(the last one is in place)
17:18
BAndiT1983
concurrently is more trouble than required
17:18
supragyaraj
And finally time them and optimise them
17:18
g3gg0
that can for the first implementation also be done in post
17:18
g3gg0
see -> BAndiT1983
17:19
g3gg0
as soon the recording stops, the CDNG conversion could be started
17:19
supragyaraj
that is one plan for sure
17:19
g3gg0
either like the FUSE things work "in-place" e,g, via samba
17:19
g3gg0
or copying the CDNG to the disk
17:19
BAndiT1983
just image the on-set situation: things are filmed, then backed up; if someone would convert live and lose data, then he/she will be a dead man/woman
17:20
BAndiT1983
*imagine
17:20
g3gg0
definitely!
17:20
supragyaraj
so two subparts: Stream to MLV on the fly
17:20
supragyaraj
and the other: MLV to CDNG another time (maybe next year)
17:20
g3gg0
and one hint -> you would most probably never implement CDNG writing on your own
17:21
supragyaraj
seems like it
17:21
g3gg0
https://www.magiclantern.fm/forum/index.php?topic=5618.0
17:21
g3gg0
theres a 55 page discussion about cinemadng
17:21
g3gg0
Questions: (*1) Raw-Data - Subpixel-Chunks, MSB or LSB first? LittleEndian, so LSB first, right? (*2) is it worth that? Maybe another Fileformat would be more usable? DPX? EXR? TIF?
17:21
g3gg0
> thx for coming chmee, save us from Adobe hell.
17:21
g3gg0
> As you can see stills standard DNG EXIF is very different to Cinema DNG.
17:22
g3gg0
there are many comments that will give you a picture of how complex CDNG is
17:22
supragyaraj
[stills standard DNG EXIF is very different to Cinema DNG] -> what?
17:22
supragyaraj
never knew this
17:22
supragyaraj
:)
17:22
g3gg0
> no. cinema dng and "simple" dng differs much - in terms of mandatory tags. i use a cdng-template, i merged myself from looking into another cdng-frames. just to clarify some things:
17:23
g3gg0
thats why i recommend to *not* reinvent the CDNG wheel
17:23
supragyaraj
aye aye, I am afraid now
17:24
g3gg0
better go the open source way in this case and use smth existing and make it fit
17:24
supragyaraj
so should I begin emulation -> stream to MLV flush to disk system?
17:25
g3gg0
some things should first get checked:
17:25
g3gg0
would you write the RAW12 bitstream unmodified into VIDF frames and then modify the CDNG converter (maybe MLVFS) so it can work with it? (i'd prefer that way)
17:25
supragyaraj
[ side note: 5 users on Gdoc schematic diagram :), who are the other 2? ]
17:25
g3gg0
or:
17:26
g3gg0
would you convert the RAW12 into the canon used bit encoding so you dont have to touch the CDNG encoder
17:26
BAndiT1983
Kjetil is probably trying to do his magic on the document ;)
17:27
supragyaraj
the latter for sure but if it hampers the stream speed... then the following:
17:27
g3gg0
the first option is probably a bit more work (modifying two things the same time plus extending MLV's header specification for non-linear stuff?)
17:28
supragyaraj
stream -> save somehow (S). when get time S -> MLV (canon format) -> CDNG
17:28
g3gg0
the latter is probably the quickest win
17:28
g3gg0
hmm
17:28
g3gg0
when thinking about it, i am not sure
17:28
supragyaraj
why
17:28
g3gg0
especially when the cache should not contain post-processing (like gamma) the latter one is probably worse
17:29
g3gg0
but still a quick-win prototype
17:30
supragyaraj
don't know how canon bit encoding is different?
17:30
BAndiT1983
guys, don't forget that it will never satisfy all demands, as every project uses a different format, like EXR, DNG etc., so people will convert to the required format in post; just focus on the way to that first
17:30
BAndiT1983
prores also
17:30
g3gg0
yeah
17:31
BAndiT1983
endianess depends on the MCU/CPU in the camera, some are big, other small
17:31
supragyaraj
I am losing track here... are we talking about endianness when we are talking about bit encodings?
17:31
BAndiT1983
it can be read out by using markers in the beginning, see tiff header
17:31
supragyaraj
or bigger forces are in play here?
17:32
BAndiT1983
you are losing track, because you are trying to do the details first
17:32
BAndiT1983
stay at the big picture for now, so we can finalize the pipeline, at least in general terms
17:32
g3gg0
my point was the gamma curve in axiom was special, right?
17:32
g3gg0
multi part
17:32
g3gg0
dont remember the term
17:32
g3gg0
multiple slope?
17:33
BAndiT1983
can't find info about that in wiki
17:33
BAndiT1983
this one? -> https://wiki.apertus.org/index.php/CMV12000
17:33
BAndiT1983
adc slope?
17:34
g3gg0
hmmm not sure
17:34
BAndiT1983
will look in the datasheet, maybe there is something
17:35
BAndiT1983
page 36 in datasheet
17:35
g3gg0
anyway. latter option would involve that you convert the pixel values in the barebone when writing MLV
17:35
BAndiT1983
that would be preferable, as it allows more adjustments in post
17:35
supragyaraj
so how should I go about? 1 (change MLVFS) or 2 (additional step)
17:36
g3gg0
this means you have to apply some multi slope correction so you will have linear data.
17:36
supragyaraj
seems doable
17:36
supragyaraj
however, not sure if doable on the fly
17:36
g3gg0
can CDNG tools handle this multi slope raw?
17:36
g3gg0
(i suspect not!)
17:37
g3gg0
(i bet they just implement the minimum they need to for their favorite cameras)
17:37
BAndiT1983
many things are calculated afterwards usually
17:37
BAndiT1983
with existing image data
17:38
BAndiT1983
I've also seen just generic values in C/DNG files, so it's mostly "faked"
17:38
g3gg0
so: i'd start converting the RAW12 to a normal linear raw if the multi slope feature is used
17:39
g3gg0
ergo: latter option is sufficient for now
17:39
sups
joined the channel
17:40
ArunM
joined the channel
17:40
BAndiT1983
ah, read the specs again, there was a linearization table
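Undoing a multi-slope (PLR) response with a linearization table, as discussed for the CMV12000, could be sketched like this: a lookup table maps each 12-bit code back to a linear value before the data goes into DNG. The two-slope curve below is fabricated; the real table comes from the sensor datasheet / calibration.

```python
# LUT-based linearization of a hypothetical two-slope PLR curve.
knee, gain_after_knee = 2048, 4          # made-up knee point and slope

lut = [v if v < knee else knee + (v - knee) * gain_after_knee
       for v in range(4096)]             # 12-bit code -> linear value

def linearize(samples):
    return [lut[s] for s in samples]

assert linearize([100, 2048, 2050]) == [100, 2048, 2056]
```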
17:40
g3gg0
if you implement this and you get the feeling it might be too slow for a barebone, then you can go for the "extend CDNG-tool" variant
17:40
sups
I am losing packets on IRC here
17:40
sups
again :)
17:40
BAndiT1983
sups, which irc client?
17:40
g3gg0
oh ok
17:40
supragyaraj
left the channel
17:41
sups
isn't multiple slope an option to achieve HDR using PLR?
17:41
sups
HexChat BAndiT1983 !
17:41
g3gg0
@sups: thats what i meant
17:41
BAndiT1983
sups, same here, no problems so far
17:41
BAndiT1983
ok guys, have to go for a walk, will read logs later
17:41
BAndiT1983
see you
17:41
BAndiT1983
changed nick to: BAndiT1983|away
17:41
g3gg0
and i am quite sure even if CDNG could handle that, the tools wont
17:41
sups
see you too BAndiT1983|away
17:41
g3gg0
so for the first prototype: just convert it to a normal linear raw
17:42
g3gg0
i'd start writing the cache already in "canon-style" raw
17:42
g3gg0
then you can check your cache-MLV with common tools if the data is fine
17:42
sups
"canon-style" documentation? reference?
17:43
g3gg0
e.g. mlv_dump
17:43
g3gg0
thats the MLV VIDF format
17:43
g3gg0
raw.h
17:44
sups
[thats the MLV VIDF format] -> reference?
17:44
g3gg0
https://bitbucket.org/hudson/magic-lantern/src/0e6493e8ac5e118976b94237b5048c436f379d98/src/raw.h?at=unified&fileviewer=file-view-default
17:45
sups
this is canon style?
17:45
g3gg0
yep
17:45
sups
will look into it. Thank you
17:45
g3gg0
https://bitbucket.org/hudson/magic-lantern/src/0e6493e8ac5e118976b94237b5048c436f379d98/modules/mlv_rec/mlv_dump.c?at=unified&fileviewer=file-view-default
17:45
g3gg0
look for bitextract
17:46
g3gg0
thats how the mlv tools read/write pixels from/to a VIDF stream
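The bitextract idea in miniature: packing 12-bit samples into a byte stream and pulling them back out. The bit order here (big-endian within each pair) is an assumption; the real RAW12 order must be checked against the camera output and magic-lantern's bitextract.

```python
# Pack pairs of 12-bit samples into 3 bytes and unpack them again.
def pack12(samples):
    out = bytearray()
    for a, b in zip(samples[0::2], samples[1::2]):
        out += bytes([a >> 4, ((a & 0xF) << 4) | (b >> 8), b & 0xFF])
    return bytes(out)

def unpack12(data):
    out = []
    for i in range(0, len(data), 3):
        b0, b1, b2 = data[i:i + 3]
        out += [(b0 << 4) | (b1 >> 4), ((b1 & 0xF) << 8) | b2]
    return out

vals = [0xABC, 0x123, 0xFFF, 0x000]
assert unpack12(pack12(vals)) == vals
```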
17:46
g3gg0
as said, first steps
17:46
g3gg0
if you realize it will get too slow, put that effort into the CDNG writer
17:47
g3gg0
okay will leave now too
17:47
g3gg0
lets continue via mail as usual
17:47
sups
okay... g3gg0
17:47
g3gg0
cu
17:47
sups
thanks for your time
17:47
sups
see you
17:48
sups
leaving for now :)
17:49
g3gg0
left the channel
17:53
sups
left the channel
18:00
ArunM
left the channel
18:51
TofuLynx
joined the channel
19:01
TofuLynx
left the channel
19:42
illwieckz
left the channel
19:42
illwieckz
joined the channel
20:30
seaman
joined the channel
21:19
danieel
left the channel
21:19
BAndiT1983|away
left the channel
21:19
BAndiT1983|away
joined the channel
21:19
BAndiT1983|away
changed nick to: BAndiT1983
21:48
TofuLynx
joined the channel
21:53
seaman
left the channel
21:53
lexano
left the channel
21:53
madonius
left the channel
21:54
seaman
joined the channel
21:54
lexano
joined the channel
21:54
madonius
joined the channel
22:00
seaman
left the channel
23:33
Bertl_oO
off to bed now ... have a good one everyone!
23:33
Bertl_oO
changed nick to: Bertl_zZ