02:26 | Bertl | off to bed now ... have fun!
| |
02:26 | Bertl | changed nick to: Bertl_zZ
| |
03:08 | xfxf_ | joined the channel | |
03:10 | flesk__ | joined the channel | |
03:12 | xfxf | left the channel | |
03:13 | flesk___ | left the channel | |
03:13 | xfxf_ | changed nick to: xfxf
| |
03:16 | hof[m] | left the channel | |
03:16 | parasew[m] | left the channel | |
03:16 | XD[m] | left the channel | |
03:16 | derWalter | left the channel | |
03:16 | davidak[m] | left the channel | |
03:16 | flesk_ | left the channel | |
03:16 | elkos | left the channel | |
03:17 | anuejn | left the channel | |
03:17 | vup[m] | left the channel | |
03:17 | MilkManzJourDadd | left the channel | |
04:32 | davidak | left the channel | |
04:33 | davidak | joined the channel | |
04:48 | RexOrCine | changed nick to: RexOrCine|away
| |
05:59 | davidak | left the channel | |
06:27 | g3gg0 | joined the channel | |
06:35 | g3gg0 | left the channel | |
06:40 | ArunM1 | joined the channel | |
07:37 | ArunM | joined the channel | |
07:56 | ArunM | left the channel | |
07:59 | ArunM | joined the channel | |
08:24 | ArunM | left the channel | |
08:58 | ArunM | joined the channel | |
09:44 | rahul_ | Morning everyone
| |
09:45 | rahul_ | I am working on the focus peaking project
| |
09:45 | rahul_ | I have a question, as mentioned in the image processing pipeline of the AXIOM..
| |
09:46 | derWalter | joined the channel | |
09:46 | rahul_ | the incoming pixels are buffered into the DRAM (which can be accessed by both the ARM as well as the FPGA fabric) as frames..
| |
09:47 | rahul_ | and then the processing (ARM/FPGA) is performed...
| |
09:48 | ArunM | left the channel | |
09:49 | rahul_ | As mentioned in my proposal... I had implemented the kernel using HLS tools, but I process the pixels in-line without buffering (VDMA)...
| |
09:50 | parasew[m] | joined the channel | |
09:50 | rahul_ | Now, my question is that
| |
09:51 | Bertl_zZ | changed nick to: Bertl
| |
09:51 | Bertl | morning folks!
| |
09:51 | rahul_ | can I place the kernel before the VDMA core in the pipeline or should I follow the standard design flow of the AXIOM image processing pipeline and place the kernel after the VDMA core..
| |
09:54 | Bertl | first, there is no classical VDMA in the AXIOM Beta pipeline
| |
09:54 | Bertl | we use high performance memory writers (a kind of buffered VDMA)
| |
09:54 | Bertl | naturally the focus peaking has to go into the output pipeline
| |
09:56 | Bertl | i.e. we want to augment the output with the peaking information only on certain output devices (preview not recording :)
| |
09:57 | Bertl | if you want to test with HLS (which is fine) you can use a streaming interface with some local buffering (line buffer)
| |
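Bertl's suggestion of a streaming interface with local line buffering can be sketched in plain C++ (a hypothetical, simplified kernel: the gradient measure, threshold and function name are illustrative assumptions, not the actual AXIOM pipeline):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Streaming focus-peaking sketch: one pass in raster order, keeping only the
// previous line in a small buffer (no full-frame buffering).
// A pixel is flagged (255) when the gradient against its left and upper
// neighbours exceeds `threshold`; otherwise it is passed through unchanged.
std::vector<uint8_t> focusPeakStream(const std::vector<uint8_t>& in,
                                     int width, int height, int threshold) {
    std::vector<uint8_t> out(in.size());
    std::vector<uint8_t> lineBuf(width, 0);  // previous line, as an HLS line buffer would hold
    for (int y = 0; y < height; ++y) {
        uint8_t left = 0;
        for (int x = 0; x < width; ++x) {
            uint8_t p = in[y * width + x];
            int gx = (x > 0) ? std::abs(p - left) : 0;        // horizontal gradient
            int gy = (y > 0) ? std::abs(p - lineBuf[x]) : 0;  // vertical gradient
            out[y * width + x] = (gx + gy > threshold) ? 255 : p;
            left = p;
            lineBuf[x] = p;  // becomes the "previous line" for the next row
        }
    }
    return out;
}
```

In an HLS port, `lineBuf` would map to BRAM and the inner loop would be pipelined; the point is that only one line of history is kept, never a whole frame.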
09:57 | MilkManzJourDadd | joined the channel | |
09:57 | elkos | joined the channel | |
09:57 | davidak[m] | joined the channel | |
09:57 | XD[m] | joined the channel | |
09:57 | hof[m] | joined the channel | |
09:59 | danieel | then there is the question whether to do the peaking in raw or demosaiced domain
| |
09:59 | danieel | (and fullres or downscaled one on the preview output)
| |
09:59 | rahul_ | So, in my proposal i mentioned two different kernels to be placed in the pipeline
| |
10:00 | rahul_ | one for debayering and one for focus peaking
| |
10:01 | rahul_ | but later I found a paper (I posted it on the apertus focus peaking task page) where debayering and edge detection can be done in the same kernel
| |
10:05 | Bertl | yes, that's an option ... currently the 'debayering' is done by simply combining 4 sensels from the 4K input into one pixel of the FullHD preview
| |
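That 4-sensels-to-1-pixel scheme can be sketched as follows (assuming an RGGB pattern; the names and the green-averaging choice are illustrative assumptions, not the FPGA implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct RGB { uint16_t r, g, b; };

// 2x2 "debayering" sketch: each RGGB sensel quad of the raw input becomes one
// output pixel, halving width and height (e.g. 4K raw -> FullHD preview).
// Assumes an RGGB layout: R G / G B inside every 2x2 block.
std::vector<RGB> binRGGB(const std::vector<uint16_t>& raw, int width, int height) {
    std::vector<RGB> out;
    out.reserve((width / 2) * (height / 2));
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            uint16_t r  = raw[y * width + x];
            uint16_t g1 = raw[y * width + x + 1];
            uint16_t g2 = raw[(y + 1) * width + x];
            uint16_t b  = raw[(y + 1) * width + x + 1];
            out.push_back({r, static_cast<uint16_t>((g1 + g2) / 2), b});  // average the two greens
        }
    }
    return out;
}
```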
10:09 | sebix | joined the channel | |
10:09 | sebix | left the channel | |
10:09 | sebix | joined the channel | |
10:23 | flesk_ | joined the channel | |
10:23 | anuejn | joined the channel | |
10:23 | vup[m] | joined the channel | |
10:30 | Bertl | rahul_: if there are any questions, do not hesitate to ask ...
| |
10:39 | parasew[m] | left the channel | |
10:39 | MilkManzJourDadd | left the channel | |
10:39 | vup[m] | left the channel | |
10:39 | hof[m] | left the channel | |
10:39 | elkos | left the channel | |
10:39 | XD[m] | left the channel | |
10:39 | flesk_ | left the channel | |
10:39 | davidak[m] | left the channel | |
10:40 | anuejn | left the channel | |
10:40 | derWalter | left the channel | |
11:16 | derWalter | joined the channel | |
11:20 | parasew[m] | joined the channel | |
11:27 | MilkManzJourDadd | joined the channel | |
11:27 | elkos | joined the channel | |
11:27 | davidak[m] | joined the channel | |
11:27 | hof[m] | joined the channel | |
11:27 | XD[m] | joined the channel | |
11:45 | hof[m]1 | joined the channel | |
11:47 | hof[m] | left the channel | |
11:55 | flesk_ | joined the channel | |
11:55 | anuejn | joined the channel | |
11:55 | vup[m] | joined the channel | |
12:12 | ymc98 | joined the channel | |
12:20 | se6astian|away | changed nick to: se6astian
| |
12:29 | Bertl | off for now ... bbl
| |
12:29 | Bertl | changed nick to: Bertl_oO
| |
13:20 | rton | joined the channel | |
13:33 | ArunM | joined the channel | |
13:43 | ymc98 | left the channel | |
13:44 | ymc98 | joined the channel | |
14:09 | se6astian | changed nick to: se6astian|away
| |
14:13 | Mahesh_ | joined the channel | |
14:16 | ymc98 | left the channel | |
14:26 | RexOrCine|away | changed nick to: RexOrCine
| |
14:27 | ArunM | left the channel | |
15:35 | supragya | joined the channel | |
15:38 | supragya | Bertl_oO: available?
| |
15:39 | supragya | need a more detailed overview of image pipeline than [https://wiki.apertus.org/index.php/AXIOM_Beta/Manual#Image_Acquisition_Pipeline]
| |
15:40 | Bertl_oO | sure, what information do you need?
| |
15:42 | supragya | Here are a few things: se6astian said that at least 4 frames are buffered before flushing to disk... what is the current way you do this? Are videos recorded at this moment in time and how are they done? (implementation / code may be fine to look at)
| |
15:42 | Bertl_oO | note: nothing is 'flushed' to disk at the moment
| |
15:42 | Bertl_oO | what happens is the following:
| |
15:43 | Bertl_oO | data is retrieved from the sensor after exposure and written into a DDR memory buffer
| |
15:43 | Bertl_oO | at the same time (i.e. in parallel) a different frame is retrieved from memory and encoded as HDMI (for example)
| |
15:44 | Bertl_oO | there are currently 4 buffers in DDR, which get used one after the other
| |
15:44 | Bertl_oO | this allows locking one buffer (i.e. disabling it) during live preview/recording and creating a raw snapshot
| |
15:45 | Bertl_oO | once the snapshot is done, the buffer is returned and reused
| |
15:45 | Bertl_oO | both input and output happen in the FPGA without any intervention from the CPU
| |
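The buffer rotation Bertl describes might look roughly like this (a minimal single-threaded sketch; the real logic lives in the FPGA, and these names are invented for illustration):

```cpp
#include <array>
#include <cassert>

// 4 DDR frame buffers used one after the other; a slot can be locked
// (disabled) while a raw snapshot reads it, and the writer skips it.
struct FrameRing {
    static constexpr int kSlots = 4;
    std::array<bool, kSlots> locked{};  // slot disabled while a snapshot reads it
    int next = 0;                       // next candidate slot for the writer

    // Pick the next unlocked slot, cycling 0..3 (assumes at least one is free).
    int acquireForWrite() {
        while (locked[next]) next = (next + 1) % kSlots;
        int slot = next;
        next = (next + 1) % kSlots;
        return slot;
    }
    void lockForSnapshot(int slot) { locked[slot] = true;  }
    void releaseSnapshot(int slot) { locked[slot] = false; }
};
```

Locking slot 1 for a snapshot makes the writer cycle 0, 2, 3, 0, ... until the slot is returned and reused.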
15:46 | Bertl_oO | recording on the AXIOM Beta currently happens with an external recorder connected via HDMI or SDI
| |
15:46 | supragya | ddr as in [https://www.edaboard.com/showthread.php?t=79556] ?
| |
15:47 | Bertl_oO | no, DDR as DDR3 memory attached to the Zynq on the MicroZed
| |
15:48 | supragya | using ddr buffers one after the other... does that mean a perfect correlation of frame and buffer -> [F1,buf1];[F2,buf2];[F3,buf3];[F4,buf1]... and so on
| |
15:49 | Bertl_oO | yes, but only for buffers currently active
| |
15:50 | supragya | Is serialisation automatic for HDMI?
| |
15:51 | supragya | As far as I can tell... the frames for videos are in these buffers and cannot be taken out.. am I wrong?
| |
15:51 | supragya | (as till date)
| |
15:51 | Bertl_oO | not sure what you mean by 'taken out' :)
| |
15:51 | danieel | out where? :)
| |
15:52 | supragya | :) wait
| |
16:04 | BAndiT1983|away | changed nick to: BAndiT1983
| |
16:24 | supragya | Good evening BAndiT1983
| |
16:25 | BAndiT1983 | hi supragya, early for an evening, usually in germany "good evening" starts from 6pm ;)
| |
16:25 | supragya | Well, I am at airport, about to board and it's 9 here
| |
16:26 | BAndiT1983 | ah, right, you are going to the conference
| |
16:28 | BAndiT1983 | any news on raw container?
| |
16:29 | supragya | 26/04 22:00->00:00 Chennai -> Mumbai ---6 hrs layover--- 06:00->07:30 Mumbai -> Vadodara (Home)... 11:00 Avengers... 19:00 Vadodara -> Indore (conference)... on 28/04 20:00-> 04:00(next day) Indore -> Vadodara
| |
16:29 | supragya | then home on 29
| |
16:29 | supragya | raw container... yes
| |
16:30 | BAndiT1983 | wow, you have full timeframe for next days
| |
16:30 | supragya | :), see... could not help it...
| |
16:31 | supragya | could you find me on trello
| |
16:31 | BAndiT1983 | sometimes it's required, got a lot of impressions and ideas at oop2014, around that time i've decided to try to become a software architect
| |
16:32 | BAndiT1983 | ehm, who was writing comments the whole time? ;)
| |
16:37 | supragyaraj | joined the channel | |
16:37 | supragyaraj | <supragya> shared recent convo with Bertl... there
| |
16:37 | supragyaraj | <supragya> XD flight got delayed.. 1 hr.. so layover is now 5 hrs... big deal :)
| |
16:37 | BAndiT1983 | so you can proceed with gsoc ;)
| |
16:37 | supragya | left the channel | |
16:37 | BAndiT1983 | read it, very interesting
| |
16:39 | BAndiT1983 | still missing some puzzle pieces here and there, at least to get some sort of emulation of camera pipeline
| |
16:39 | supragyaraj | yes..
| |
16:49 | Bertl_oO | why do you want to emulate the camera pipeline?
| |
16:50 | supragyaraj | It is more of an emulation to see if the container formats that we use are apt
| |
16:50 | supragyaraj | for AXIOM camera
| |
16:51 | BAndiT1983 | it's not about big emulation, just what supragyaraj says
| |
16:52 | supragyaraj | However, if the format (serial packets / their format / the order / the markers etc) is known beforehand... we are good to proceed
| |
16:52 | supragyaraj | Also a trivial thing I have not asked... what is up with audio, do we have it?
| |
16:53 | BAndiT1983 | separate recording usually
| |
16:53 | supragyaraj | good for us :)
| |
16:54 | BAndiT1983 | other metadata is much more of a concern, like wb, aperture and so on
| |
17:01 | sebix | left the channel | |
17:02 | Mahesh_ | left the channel | |
17:06 | Mahesh_ | joined the channel | |
17:36 | Mahesh_ | left the channel | |
17:51 | nmdis1999 | joined the channel | |
17:54 | nmdis1999 | left the channel | |
18:03 | supragyaraj | left the channel | |
18:31 | g3gg0 | joined the channel | |
18:44 | BAndiT1983 | hi g3gg0, have you joined trello yet? it can be done through google account
| |
19:17 | TofuLynx | joined the channel | |
19:17 | TofuLynx | Hello everyone!
| |
19:21 | TofuLynx | BAndiT1983: I am going to start changing the preprocessor loops
| |
19:30 | g3gg0 | hi
| |
19:41 | TofuLynx | Hello g3gg0 :)
| |
19:44 | TofuLynx | BAndiT1983: I have implemented the new loops
| |
19:44 | TofuLynx | it's on average 3ms faster :)
| |
19:50 | BAndiT1983 | hi TofuLynx, sounds interesting, could you do a pull request?
| |
19:53 | TofuLynx | Yeah wait a moment, I am just finishing the benchmark
| |
19:55 | TofuLynx | Ok finished
| |
19:55 | TofuLynx | I will do the PR
| |
20:03 | TofuLynx | PR done
| |
20:03 | TofuLynx | you can see it :)
| |
20:05 | BAndiT1983 | alright, travis ci has started, will look into the code a bit later, just have to go to the shop quickly to get some stuff to prepare dinner
| |
20:05 | TofuLynx | Ok! :) I have to go to dinner soon too
| |
20:10 | BAndiT1983 | TofuLynx, what happens if the extraction happens in single loop?
| |
20:11 | TofuLynx | Didn't try it, but I guess it would be slower, no? As it is in a single thread
| |
20:11 | TofuLynx | I will try it now
| |
20:11 | BAndiT1983 | try it please, as having more threads is not automatically faster
| |
20:12 | TofuLynx | alright
| |
20:12 | BAndiT1983 | it depends on how big the RAM areas are, which get locked while reading/writing
| |
20:12 | BAndiT1983 | first step is to simplify before going crazy with optimization
| |
20:13 | TofuLynx | hmm
| |
20:13 | TofuLynx | dataUL[index] = _outputData[index];
| |
20:13 | TofuLynx | dataUR[index + 1] = _outputData[index + 1];
| |
20:13 | TofuLynx | dataLL[index + _width] = _outputData[index + _width];
| |
20:13 | TofuLynx | dataLR[index + _width + 1] = _outputData[index + _width + 1];
| |
20:13 | TofuLynx | do you think I should store index + _width in a variable?
| |
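On the question above: hoisting `index + _width` into a local is harmless and arguably clearer, though an optimizing compiler will usually perform this common-subexpression elimination itself. A sketch mirroring the pasted lines (the loop bounds and the free-function wrapper are assumptions):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Split a Bayer-ordered frame into its four 2x2-quadrant channels,
// keeping the original sparse indexing of the pasted snippet.
void splitChannels(const std::vector<uint16_t>& out, size_t width, size_t height,
                   std::vector<uint16_t>& UL, std::vector<uint16_t>& UR,
                   std::vector<uint16_t>& LL, std::vector<uint16_t>& LR) {
    for (size_t row = 0; row < height; row += 2) {
        for (size_t col = 0; col < width; col += 2) {
            const size_t index      = row * width + col;
            const size_t lowerIndex = index + width;  // hoisted once per 2x2 block
            UL[index]          = out[index];
            UR[index + 1]      = out[index + 1];
            LL[lowerIndex]     = out[lowerIndex];
            LR[lowerIndex + 1] = out[lowerIndex + 1];
        }
    }
}
```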
20:15 | BAndiT1983 | have to go now, but will be back shortly, then i will reflect on it
| |
20:15 | TofuLynx | See you!
| |
20:16 | g3gg0 | cu
| |
20:24 | TofuLynx | Benchmarked the unique single loop
| |
20:24 | TofuLynx | it's slower
| |
20:29 | TofuLynx | so, the old loops took 42ms, the split loops (the ones I did the PR for) took 38ms and, finally, the complete extraction in a single loop took 48ms
| |
20:33 | BAndiT1983 | back
| |
20:34 | BAndiT1983 | could you pastebin the single loop code?
| |
20:37 | TofuLynx | sure
| |
20:39 | TofuLynx | https://pastebin.com/zR5BaPzk
| |
20:39 | TofuLynx | Here you go, Andrej :)
| |
20:39 | se6astian|away | changed nick to: se6astian
| |
20:44 | se6astian | changed nick to: se6astian|away
| |
20:44 | BAndiT1983 | is the output still correct?
| |
20:45 | BAndiT1983 | will add a unit test as an example, which uses an 8x8 data block, so we can verify every time without manual intervention
| |
20:46 | TofuLynx | yeah the output is still correct
| |
20:46 | TofuLynx | I will be right back, going to dinner
| |
20:49 | BAndiT1983 | have a nice meal
| |
21:00 | TofuLynx | left the channel | |
21:01 | TofuLynx | joined the channel | |
21:03 | supragyaraj | joined the channel | |
21:04 | supragyaraj | Good evening BAndiT1983, g3gg0 !
| |
21:04 | BAndiT1983 | hi supragyaraj
| |
21:04 | supragyaraj | This time from Mumbai !
| |
21:04 | g3gg0 | hi supragyaraj
| |
21:04 | g3gg0 | :)
| |
21:05 | BAndiT1983 | nice, at the arabian sea
| |
21:05 | supragyaraj | Very close indeed
| |
21:05 | BAndiT1983 | never looked closely at the map of india, but now i see that chennai is on the other side, at the bay of bengal
| |
21:06 | supragyaraj | Very down...
| |
21:08 | supragyaraj | So, my question is... what more is needed in "How to stream data out from memory - Ask Bertl" ?
| |
21:08 | BAndiT1983 | this is an ongoing task, so it will remain open for some time
| |
21:09 | BAndiT1983 | when new infos are available, then it can be extended
| |
21:09 | BAndiT1983 | you could inspect the docs of RAW video formats, like RED and ARRI, to see which metadata or frame data is usually required, like WB, aperture etc.
| |
21:10 | g3gg0 | lens info
| |
21:10 | g3gg0 | (although there is nothing yet)
| |
21:10 | g3gg0 | filmmakers will need:
| |
21:10 | g3gg0 | custom marks (button presses)
| |
21:10 | BAndiT1983 | which buttons?
| |
21:10 | g3gg0 | exposure info (exposure time, ISO etc)
| |
21:11 | supragyaraj | shutter speed etc too
| |
21:11 | g3gg0 | lens infos (lens name/type, aperture, focal length)
| |
21:11 | g3gg0 | exposure time = shutter speed
| |
21:11 | BAndiT1983 | lens info is at the moment not possible, as there is no active part for it in axiom beta yet
| |
21:11 | supragyaraj | pg61/datasheet would not provide lens info.. am I wrong?
| |
21:12 | BAndiT1983 | if i remember correctly, then the lens could be read out if we would have it
| |
21:12 | g3gg0 | if supported: infos like rolling shutter percentage
| |
21:12 | BAndiT1983 | no, lenses usually have firmware themselves
| |
21:12 | g3gg0 | prepare for the future
| |
21:13 | BAndiT1983 | g3gg0, good point with rolling shutter, there is a global one in the image sensor, but haven't looked up what configs are there for it
| |
21:14 | g3gg0 | button press -> in some situations, you would want to place cut marks
| |
21:14 | TofuLynx | left the channel | |
21:14 | BAndiT1983 | ah, that ones
| |
21:14 | supragyaraj | didn't really understand you, g3gg0
| |
21:14 | supragyaraj | some sort of custom markers?
| |
21:14 | BAndiT1983 | yep
| |
21:14 | g3gg0 | yes
| |
21:15 | BAndiT1983 | so the editor knows in and out positions
| |
21:15 | g3gg0 | also:
| |
21:15 | BAndiT1983 | not a film maker, but i suppose that the camera also starts before the main scene, so markers are helpful to get the required material
| |
21:15 | g3gg0 | date/time of the shot (please do not rely on file time)
| |
21:16 | g3gg0 | custom tags like scene or take number
| |
21:16 | supragyaraj | I have added these for reference to the trello board. Can you verify and add some?
| |
21:18 | g3gg0 | any meta information could be useful in post, maybe even the sensor temperature. be at least prepared how to handle such information :)
| |
21:18 | g3gg0 | firmware version, fpga bitstream version
| |
21:18 | supragyaraj | BAndiT1983: are we able to read the lens settings now?
| |
21:19 | supragyaraj | from axiom
| |
21:19 | BAndiT1983 | read my previous comments, but to say it again, no
| |
21:19 | BAndiT1983 | lens data is usually stored in the lens itself, otherwise it would be tedious to input it when changing lenses
| |
21:20 | BAndiT1983 | http://www.dyxum.com/dforum/emount-electronic-protocol-reverse-engineering_topic119522.html
| |
21:21 | BAndiT1983 | just as info
| |
21:21 | supragyaraj | how are custom tags like scene number and take number added on cameras?
| |
21:21 | BAndiT1983 | usually through a menu, but we could do it through web remote
| |
21:21 | g3gg0 | does not matter right now
| |
21:22 | g3gg0 | current phase: find out what a future file format might be able to handle
| |
21:22 | g3gg0 | not: how can this data be retrieved from the camera
| |
21:22 | g3gg0 | :)
| |
21:22 | BAndiT1983 | custom tags should be supported
| |
21:22 | BAndiT1983 | i mean in the format
| |
21:22 | g3gg0 | also: find out which kind of data would have to get stored and which implications this has
| |
21:22 | g3gg0 | yep
| |
21:22 | g3gg0 | extensible
| |
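The "extensible" point is the key design idea behind tagged-block containers like MLV: readers skip unknown blocks by size. A toy illustration (this is not the actual MLV layout; the block names and helpers are invented):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// One tagged block: 4-byte type, 4-byte payload size, then the payload.
static void writeBlock(std::vector<uint8_t>& out, const char type[4],
                       const std::vector<uint8_t>& payload) {
    out.insert(out.end(), type, type + 4);
    uint32_t size = static_cast<uint32_t>(payload.size());
    const uint8_t* p = reinterpret_cast<const uint8_t*>(&size);
    out.insert(out.end(), p, p + 4);
    out.insert(out.end(), payload.begin(), payload.end());
}

// Return the payload of the first block of the given type; blocks of
// unknown type are skipped entirely via their size field.
static std::vector<uint8_t> findBlock(const std::vector<uint8_t>& buf,
                                      const std::string& type) {
    size_t pos = 0;
    while (pos + 8 <= buf.size()) {
        std::string t(buf.begin() + pos, buf.begin() + pos + 4);
        uint32_t size;
        std::memcpy(&size, buf.data() + pos + 4, 4);
        if (t == type)
            return std::vector<uint8_t>(buf.begin() + pos + 8,
                                        buf.begin() + pos + 8 + size);
        pos += 8 + size;  // not the block we want: skip over it
    }
    return {};
}
```

Because old readers skip blocks they don't know, new metadata (sensor temperature, custom tags, ...) can be added later without breaking anything.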
21:23 | supragyaraj | seems like everything is pointing towards MLV ;)
| |
21:23 | g3gg0 | you could also use mp4 format for that
| |
21:23 | BAndiT1983 | MLV is a bit easier to use, but i'm biased, as i have already tried it
| |
21:23 | g3gg0 | me too, as i defined the MLV format
| |
21:24 | BAndiT1983 | ;)
| |
21:24 | g3gg0 | but supragyaraj is not biased yet
| |
21:24 | g3gg0 | and he should from a neutral pov be able to determine which format to choose
| |
21:24 | supragyaraj | I try not to be... but it seems like everything (in fact even the requirements) is discussed with that bias in mind :)
| |
21:24 | BAndiT1983 | https://stackoverflow.com/questions/29565068/mp4-file-format-specification
| |
21:25 | BAndiT1983 | MP4 reference
| |
21:25 | g3gg0 | good pointer, thanks
| |
21:25 | BAndiT1983 | you can also use multi-part TIFF, but it's missing some important tags, so you should look into CDNG, if it can store many frames in one file
| |
21:26 | RexOrCine | changed nick to: RexOrCine|away
| |
21:26 | g3gg0 | even if you come to the conclusion that mp4 is far better because block handling and audio stream sync tasks will be perfectly handled by libraries, while maybe some licensing issues are a hurdle - the advantages should be documented
| |
21:26 | BAndiT1983 | reference from apple is not that bad
| |
21:27 | supragyaraj | one thing I really like about MLV is the non-linear storage of frames... I would really like a discussion on what kind of issues you faced with Canon... maybe AXIOM may run into them sooner or later...
| |
21:28 | supragyaraj | then MLV is clearly one of the better formats for that... but let's reserve this sentence for later
| |
21:28 | Kjetil | MXF *hides*
| |
21:28 | BAndiT1983 | mts was the format for mpeg streams
| |
21:28 | g3gg0 | > but let's reserve this sentence for later
| |
21:28 | g3gg0 | exactly
| |
21:28 | BAndiT1983 | this was a format also on first JVC fullhd cameras, but the quality was terrible
| |
21:29 | supragyaraj | Kjetil: we have MXF standing for consideration. See bit.do/RVCF
| |
21:30 | BAndiT1983 | guys, don't forget that we are talking about a format which the axiom should deliver in the end, optimistically speaking
| |
21:30 | g3gg0 | yep
| |
21:30 | BAndiT1983 | so we can't just spit out a full-blown format
| |
21:30 | g3gg0 | performance-wise probably not
| |
21:31 | g3gg0 | -> pro/cons with assumptions and guesswork
| |
21:31 | supragyaraj | O.o, seems like the axiom camera's capabilities need to be analysed
| |
21:32 | BAndiT1983 | https://en.wikipedia.org/wiki/MPEG_transport_stream
| |
21:32 | g3gg0 | https://lab.apertus.org/T951
| |
21:32 | g3gg0 | 1. Current status analysis and requirement definition
| |
21:32 | g3gg0 | examining the technical backgrounds of the signal processing path within the camera (i.e. "how does it work?")
| |
21:32 | g3gg0 | technical possibilites and requirements of the signal processing path in terms of video container format (i.e. "what could we do with it and where are limits?")
| |
21:32 | g3gg0 | defining requirements/recommendations for a container format
| |
21:32 | g3gg0 | ;)
| |
21:32 | BAndiT1983 | supragyaraj, we can do benchmarks at some point, but first we need to define requirements
| |
21:33 | g3gg0 | and yes, it might be guesswork at some point
| |
21:33 | g3gg0 | you cannot measure the CDNG-with-XML-metadata writing speed on cards we do not have yet
| |
21:34 | g3gg0 | but you can talk to experts about the possibilities of the hardware
| |
21:34 | supragyaraj | BAndiT1983: I meant capabilities the way g3gg0 put it
| |
21:34 | TofuLynx | joined the channel | |
21:34 | g3gg0 | and possible future development - maybe switching the zynq is on roadmap, maybe not etc
| |
21:35 | TofuLynx | Hello! I'm back!
| |
21:35 | danieel | none of the formats is limited by what the hardware can do (computation wise)
| |
21:35 | g3gg0 | hi
| |
21:35 | supragyaraj | hi TofuLynx
| |
21:35 | danieel | and all are limited by what in-camera storage you have
| |
21:35 | g3gg0 | if that is your conclusion, then document it and how you came to that conclusion
| |
21:35 | g3gg0 | maybe the formats limit what you can do in camera
| |
21:36 | g3gg0 | > out-of-order frame numbering
| |
21:37 | danieel | whats that ?
| |
21:37 | g3gg0 | > one thing I really like about MLV is the non linear storage of frames...
| |
21:37 | supragyaraj | Guess what happens when a dual sensor system is set up writing to, let's say, a single file... everything may break (hypothetically)
| |
21:37 | g3gg0 | exactly
| |
21:37 | danieel | with dual sensors you get 2 files?
| |
21:38 | BAndiT1983 | you can write both frames into the same file, mlv also splits without losing data
| |
21:38 | supragyaraj | What if you need one... maybe sync things inherently
| |
21:38 | g3gg0 | made some proposals for MLV how to handle that
| |
21:38 | danieel | most of containers can take multiple video tracks
| |
21:38 | TofuLynx | hey supragyaraj
| |
21:39 | TofuLynx | long time no talk!
| |
21:39 | TofuLynx | how are you?
| |
21:39 | g3gg0 | so it supports subchannels
| |
21:39 | supragyaraj | AVI can take multiple streams but... it needs to know the frame count before recording
| |
21:39 | danieel | you can update that after stopping, that is not uncommon
| |
21:39 | BAndiT1983 | just wondering how big the RAM has to be, so we can store enough 4k data without losing it, till it's fully written out to USB or disk
| |
21:40 | danieel | BAndiT1983: 2 frames
| |
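For a rough sense of scale, assuming a 4096x3072 sensor at 12 bits per sensel (CMV12000-class numbers; treat them as assumptions, not camera specs):

```cpp
#include <cassert>
#include <cstdint>

// Back-of-the-envelope raw frame size and data rate.
constexpr uint64_t kWidth         = 4096;
constexpr uint64_t kHeight        = 3072;
constexpr uint64_t kBitsPerSensel = 12;

constexpr uint64_t frameBytes = kWidth * kHeight * kBitsPerSensel / 8;  // 18 MiB per frame
constexpr uint64_t bytesPerSecondAt30fps = frameBytes * 30;             // ~540 MiB/s
```

At ~540 MiB/s for raw 4K at 30 fps, only a handful of DDR frame buffers can be kept, which fits with recording going out to an external HDMI/SDI recorder rather than to local storage.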
21:40 | supragyaraj | TofuLynx: great :) how about you
| |
21:40 | BAndiT1983 | supragyaraj, you cannot know it beforehand
| |
21:40 | supragyaraj | currently at airport... layover :)
| |
21:40 | supragyaraj | BAndiT1983: exactly my point
| |
21:41 | BAndiT1983 | but avi supports streaming, or not?
| |
21:41 | supragyaraj | it does... very easily
| |
21:41 | danieel | so decide - are you streaming or writing to a media, damn
| |
21:41 | supragyaraj | but for multiple streams... offset is to be known
| |
21:42 | BAndiT1983 | danieel, it's a hybrid thing
| |
21:42 | supragyaraj | ? it's just a discussion that sometimes formats can be a limited thing too....
| |
21:42 | danieel | you can update the index continuously or after stopping, some makers do very clever things (I've seen cluster table modification to append at the front of a file)
| |
21:43 | danieel | if you hope to do #include <container.h> then yes, that is limiting you.. not the actual format
| |
21:43 | g3gg0 | possible, but then your file format has requirements for the file system
| |
21:45 | g3gg0 | (probably not the best thing to do btw)
| |
21:45 | Kjetil | #include <ffmpeg.h>
| |
21:47 | BAndiT1983 | ffmpeg on zynq? this bloated piece of a library? really?
| |
21:47 | danieel | haven't they dropped 32bit support.. so.. nope :P
| |
21:48 | Kjetil | Heh. It was more of a joke since it does a bit more than it should
| |
21:50 | supragyaraj | One of Kjetil's other jokes: http://irc.apertus.org/index.php?day=26&month=03&year=2018#41 ;)
| |
21:51 | Kjetil | :)
| |
21:51 | TofuLynx | supragyaraj: I am great too :p
| |
21:51 | TofuLynx | BAndiT1983: Did you see the loops?
| |
21:51 | supragyaraj | TofuLynx: What jokes have you cracked?
| |
21:56 | TofuLynx | huh? Did I miss something? xD
| |
21:56 | TofuLynx | ah, on apertus chat?
| |
21:56 | supragyaraj | TofuLynx: nope, I didn't understand
| |
21:57 | TofuLynx | i'm confused
| |
21:58 | supragyaraj | now okay?
| |
22:00 | Kjetil | But on the topic: I'm not sure that MPEG2TS is that suitable. You get synchronization, but it is kind of hard to extract frames afterwards without parsing the entire stream
| |
22:00 | g3gg0 | left the channel | |
22:01 | g3gg0 | joined the channel | |
22:03 | danieel | it's made primarily for streaming, you are reading the file while playing it; that you can find a .ts file was not the aim
| |
22:04 | TofuLynx | BAndiT1983: you afk?
| |
22:05 | danieel | compare that to a TAR or a compressed file - you can't seek in that unless you read it fully through
| |
22:10 | BAndiT1983 | TofuLynx, yep, sort of, dinner and big bang theory
| |
22:12 | TofuLynx | ah xD
| |
22:12 | TofuLynx | BAndiT1983: I saw that you merged the PR I made. Any comments you want to make?
| |
22:13 | BAndiT1983 | tomorrow i can say more, as i have to merge locally first, as there are several changes on my side, which will be committed soon
| |
22:17 | supragyaraj | left the channel | |
22:18 | BAndiT1983 | supragyaraj, now i'm just waiting for Kjetil to come out as flat-earther ;)
| |
22:18 | TofuLynx | Ok! :)
| |
22:19 | TofuLynx | I think I will advance to my next task
| |
22:19 | TofuLynx | the debayer class
| |
22:19 | BAndiT1983 | ok, and i will inspect your code tomorrow, after home office
| |
22:21 | TofuLynx | Ok!
| |
22:30 | TofuLynx | So is your girlfriend loving TBBT, BAndiT1983? xD
| |
22:32 | BAndiT1983 | yep, the IT Crowd is still her favourite, but TBBT is also good
| |
22:33 | BAndiT1983 | don't ask, but she also loved little britain ;)
| |
22:34 | TofuLynx | I don't know little britain
| |
22:34 | TofuLynx | also comedy?
| |
22:35 | BAndiT1983 | yep, british humour, not for everyone
| |
22:36 | RexOrCine|away | changed nick to: RexOrCine
| |
22:36 | TofuLynx | I do like it :P
| |
22:38 | BAndiT1983 | just look at YT, there are a lot of clips from there
| |
22:45 | BAndiT1983 | so, off for today, see you
| |
22:45 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
22:46 | TofuLynx | See you!
| |
22:55 | TofuLynx | this may be seen as a dumb question
| |
22:55 | TofuLynx | xD
| |
22:55 | TofuLynx | but I can't find how to create a new file in QtCreator
| |
22:56 | TofuLynx | nevermind
| |
22:56 | TofuLynx | found it :)
| |
23:03 | TofuLynx | Gtg now :)
| |
23:03 | TofuLynx | See you tomorrow!
| |
23:04 | TofuLynx | Good Night!
| |
23:04 | TofuLynx | left the channel |