
#apertus IRC Channel Logs

2018/04/26

Timezone: UTC


01:26
Bertl
off to bed now ... have fun!
01:26
Bertl
changed nick to: Bertl_zZ
02:08
xfxf_
joined the channel
02:10
flesk__
joined the channel
02:12
xfxf
left the channel
02:13
flesk___
left the channel
02:13
xfxf_
changed nick to: xfxf
02:16
hof[m]
left the channel
02:16
parasew[m]
left the channel
02:16
XD[m]
left the channel
02:16
derWalter
left the channel
02:16
davidak[m]
left the channel
02:16
flesk_
left the channel
02:16
elkos
left the channel
02:17
anuejn
left the channel
02:17
vup[m]
left the channel
02:17
MilkManzJourDadd
left the channel
03:32
davidak
left the channel
03:33
davidak
joined the channel
03:48
RexOrCine
changed nick to: RexOrCine|away
04:59
davidak
left the channel
05:27
g3gg0
joined the channel
05:35
g3gg0
left the channel
05:40
ArunM1
joined the channel
06:37
ArunM
joined the channel
06:56
ArunM
left the channel
06:59
ArunM
joined the channel
07:24
ArunM
left the channel
07:58
ArunM
joined the channel
08:44
rahul_
Morning everyone
08:45
rahul_
I am working towards focus peaking project
08:45
rahul_
I have a question, as mentioned in the image processing pipeline of the AXIOM..
08:46
derWalter
joined the channel
08:46
rahul_
the incoming pixels are buffered into the DRAM (which can be accessed by both the ARM as well as the FPGA fabric) as frames..
08:47
rahul_
and then the processing (ARM/FPGA) is performed...
08:48
ArunM
left the channel
08:49
rahul_
As mentioned in my proposal...I had implemented the kernel using HLS tools but I process the pixels in line without buffering (VDMA)...
08:50
parasew[m]
joined the channel
08:50
rahul_
Now, my question is that
08:51
Bertl_zZ
changed nick to: Bertl
08:51
Bertl
morning folks!
08:51
rahul_
can I place the kernel before the VDMA core in the pipeline or should I follow the standard design flow of the AXIOM image processing pipeline and place the kernel after the VDMA core..
08:54
Bertl
first, there is no classical VDMA in the AXIOM Beta pipeline
08:54
Bertl
we use high performance memory writers (kind of buffered VDMA)
08:54
Bertl
naturally the focus peaking has to go into the output pipeline
08:56
Bertl
i.e. we want to augment the output with the peaking information only on certain output devices (preview not recording :)
08:57
Bertl
if you want to test with HLS (which is fine) you can use a streaming interface with some local buffering (line buffer)
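What Bertl describes could be sketched in plain C++ (not actual HLS or AXIOM pipeline code; the gradient choice, threshold, and function names are illustrative assumptions — an HLS version would keep the neighbouring lines in a BRAM line buffer instead of random-accessing the frame):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Hedged sketch: streaming focus peaking over a grayscale frame. Pixels
// whose local gradient exceeds a threshold are overlaid as "in focus".
std::vector<uint8_t> focusPeak(const std::vector<uint8_t>& img,
                               int width, int height, int threshold)
{
    std::vector<uint8_t> out(img); // pass input through, overlay peaking
    for (int y = 1; y + 1 < height; ++y) {
        for (int x = 1; x + 1 < width; ++x) {
            // simple gradient estimate from the 4-neighbourhood; in HLS
            // the previous two lines would come from a local line buffer
            int gx = img[y * width + x + 1] - img[y * width + x - 1];
            int gy = img[(y + 1) * width + x] - img[(y - 1) * width + x];
            if (std::abs(gx) + std::abs(gy) > threshold)
                out[y * width + x] = 255; // mark pixel as sharp
        }
    }
    return out;
}
```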
08:57
MilkManzJourDadd
joined the channel
08:57
elkos
joined the channel
08:57
davidak[m]
joined the channel
08:57
XD[m]
joined the channel
08:57
hof[m]
joined the channel
08:59
danieel
then there is the question whether to do the peaking in raw or demosaiced domain
08:59
danieel
(and fullres or downscaled one on the preview output)
08:59
rahul_
So, in my proposal i mentioned two different kernels to be placed in the pipeline
09:00
rahul_
one for debayering and one for focus peaking
09:01
rahul_
but later I found a paper (I posted it on the focus peaking task page of the apertus wiki) where debayering and edge detection can be done in the same kernel
09:05
Bertl
yes, that's an option ... currently the 'debayering' is done by simply combining 4 sensel from the 4k input into one pixel of the FullHD preview
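A rough sketch of that 4-sensels-to-1-pixel binning, assuming an RGGB cell layout (the real sensor layout and pipeline implementation may differ):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hedged sketch of the "4 sensels -> 1 pixel" preview debayering: each
// 2x2 Bayer cell (assumed R G / G B) becomes one RGB pixel at half
// resolution in each dimension, e.g. 4K raw -> FullHD preview.
struct RGB { uint16_t r, g, b; };

std::vector<RGB> binDebayer(const std::vector<uint16_t>& raw,
                            int width, int height)
{
    std::vector<RGB> out;
    out.reserve((width / 2) * (height / 2));
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            uint16_t r  = raw[y * width + x];           // R sensel
            uint16_t g1 = raw[y * width + x + 1];       // G sensel (row 0)
            uint16_t g2 = raw[(y + 1) * width + x];     // G sensel (row 1)
            uint16_t b  = raw[(y + 1) * width + x + 1]; // B sensel
            out.push_back({r, uint16_t((g1 + g2) / 2), b});
        }
    }
    return out;
}
```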
09:09
sebix
joined the channel
09:09
sebix
left the channel
09:09
sebix
joined the channel
09:23
flesk_
joined the channel
09:23
anuejn
joined the channel
09:23
vup[m]
joined the channel
09:30
Bertl
rahul_: if there are any questions, do not hesitate to ask ...
09:39
parasew[m]
left the channel
09:39
MilkManzJourDadd
left the channel
09:39
vup[m]
left the channel
09:39
hof[m]
left the channel
09:39
elkos
left the channel
09:39
XD[m]
left the channel
09:39
flesk_
left the channel
09:39
davidak[m]
left the channel
09:40
anuejn
left the channel
09:40
derWalter
left the channel
10:16
derWalter
joined the channel
10:20
parasew[m]
joined the channel
10:27
MilkManzJourDadd
joined the channel
10:27
elkos
joined the channel
10:27
davidak[m]
joined the channel
10:27
hof[m]
joined the channel
10:27
XD[m]
joined the channel
10:45
hof[m]1
joined the channel
10:47
hof[m]
left the channel
10:55
flesk_
joined the channel
10:55
anuejn
joined the channel
10:55
vup[m]
joined the channel
11:12
ymc98
joined the channel
11:20
se6astian|away
changed nick to: se6astian
11:29
Bertl
off for now ... bbl
11:29
Bertl
changed nick to: Bertl_oO
12:20
rton
joined the channel
12:33
ArunM
joined the channel
12:43
ymc98
left the channel
12:44
ymc98
joined the channel
13:09
se6astian
changed nick to: se6astian|away
13:13
Mahesh_
joined the channel
13:16
ymc98
left the channel
13:26
RexOrCine|away
changed nick to: RexOrCine
13:27
ArunM
left the channel
14:35
supragya
joined the channel
14:38
supragya
Bertl_oO: available?
14:39
supragya
need a more detailed overview of image pipeline than [https://wiki.apertus.org/index.php/AXIOM_Beta/Manual#Image_Acquisition_Pipeline]
14:40
Bertl_oO
sure, what information do you need?
14:42
supragya
Here are a few things: se6astian said that at least 4 frames are buffered before flushing to disk... what is the current way you do this? Are videos recorded at this moment in time and how are they done? (implementation / code may be fine to look at)
14:42
Bertl_oO
note: nothing is 'flushed' to disk at the moment
14:42
Bertl_oO
what happens is the following:
14:43
Bertl_oO
data is retrieved from the sensor after exposure and written into a DDR memory buffer
14:43
Bertl_oO
at the same time (i.e. in parallel) a different frame is retrieved from memory and encoded as HDMI (for example)
14:44
Bertl_oO
there are currently 4 buffers in DDR, which get used one after the other
14:44
Bertl_oO
this allows us to lock one buffer (i.e. disable it) during live preview/recording and create a raw snapshot
14:45
Bertl_oO
once the snapshot is done, the buffer is returned and reused
14:45
Bertl_oO
both input and output happen in the FPGA without any intervention from the CPU
14:46
Bertl_oO
recording on the AXIOM Beta currently happens with an external recorder connected via HDMI or SDI
14:46
supragya
ddr as in [https://www.edaboard.com/showthread.php?t=79556] ?
14:47
Bertl_oO
no, DDR as DDR3 memory attached to the Zynq on the MicroZed
14:48
supragya
using ddr buffers one after other... does that mean a perfect correlation of frame and buffer -> [F1,buf1];[F2,buf2];[F3,buf3],[F4,buf1]... and so on
14:49
Bertl_oO
yes, but only for buffers currently active
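The rotation Bertl outlines might look like this in software terms (a sketch only — the real logic lives in the FPGA, and the class and method names here are made up):

```cpp
#include <array>
#include <cassert>

// Hedged sketch of the 4-buffer rotation: frames land in DDR buffers
// round-robin; locking one buffer for a raw snapshot removes it from
// rotation until it is returned, as described above.
class BufferRing {
public:
    int nextBuffer() {                 // buffer for the next incoming frame
        do {
            current = (current + 1) % 4;
        } while (locked[current]);     // skip buffers held for a snapshot
        return current;
    }
    void lock(int i)   { locked[i] = true;  }  // take buffer out of rotation
    void unlock(int i) { locked[i] = false; }  // return buffer after snapshot
private:
    int current = 3;                   // so the first frame lands in buffer 0
    std::array<bool, 4> locked{};
};
```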
14:50
supragya
Is serialisation automatic for HDMI?
14:51
supragya
As far as I can tell... the frames for videos are in these buffers and cannot be taken out.. am I wrong?
14:51
supragya
(as till date)
14:51
Bertl_oO
not sure what you mean by 'taken out' :)
14:51
danieel
out where? :)
14:52
supragya
:) wait
15:04
BAndiT1983|away
changed nick to: BAndiT1983
15:24
supragya
Good evening BAndiT1983
15:25
BAndiT1983
hi supragya, early for an evening, usually in germany "good evening" starts from 6pm ;)
15:25
supragya
Well, I am at airport, about to board and it's 9 here
15:26
BAndiT1983
ah, right, you are going to the conference
15:28
BAndiT1983
any news on raw container?
15:29
supragya
26/04 22:00->00:00 Chennai -> Mumbai ---6 hrs layover--- 06:00->07:30 Mumbai -> Vadodara (Home)... 11:00 Avengers... 19:00 Vadodara -> Indore (conference)... on 28/04 20:00-> 04:00(next day) Indore -> Vadodara
15:29
supragya
then home on 29
15:29
supragya
raw container... yes
15:30
BAndiT1983
wow, you have full timeframe for next days
15:30
supragya
:), see... could not help it...
15:31
supragya
could you find me on trello
15:31
BAndiT1983
sometimes it's required, got a lot of impressions and ideas at oop2014, around that time i've decided to try to become a software architect
15:32
BAndiT1983
ehm, who was writing comments the whole time? ;)
15:37
supragyaraj
joined the channel
15:37
supragyaraj
<supragya> shared recent convo with Bertl... there
15:37
supragyaraj
<supragya> XD flight got delayed.. 1 hr.. so layover is now 5 hrs... big deal :)
15:37
BAndiT1983
so you can proceed with gsoc ;)
15:37
supragya
left the channel
15:37
BAndiT1983
read it, very interesting
15:39
BAndiT1983
still missing some puzzle pieces here and there, at least to get some sort of emulation of camera pipeline
15:39
supragyaraj
yes..
15:49
Bertl_oO
why do you want to emulate the camera pipeline?
15:50
supragyaraj
It is more of an emulation to see if the container format that we use is apt
15:50
supragyaraj
for AXIOM camera
15:51
BAndiT1983
it's not about big emulation, just what supragyaraj says
15:52
supragyaraj
However, if format (serial packets / their format / the order / the markers etc) are known beforehand... we are good to proceed
15:52
supragyaraj
Also a trivial thing I have not asked... what is up with audio, do we have it?
15:53
BAndiT1983
separate recording usually
15:53
supragyaraj
good for us :)
15:54
BAndiT1983
other metadata is much more of concern, like wb, aperture and so on
16:01
sebix
left the channel
16:02
Mahesh_
left the channel
16:06
Mahesh_
joined the channel
16:36
Mahesh_
left the channel
16:51
nmdis1999
joined the channel
16:54
nmdis1999
left the channel
17:03
supragyaraj
left the channel
17:31
g3gg0
joined the channel
17:44
BAndiT1983
hi g3gg0, have you joined trello yet? it can be done through google account
18:17
TofuLynx
joined the channel
18:17
TofuLynx
Hello everyone!
18:21
TofuLynx
BAndiT1983: I am going to start changing the preprocessor loops
18:30
g3gg0
hi
18:41
TofuLynx
Hello g3gg0 :)
18:44
TofuLynx
BAndiT1983: I have implemented the new loops
18:44
TofuLynx
it's on average 3ms faster :)
18:50
BAndiT1983
hi TofuLynx, sounds interesting, could you do a pull request?
18:53
TofuLynx
Yeah wait a moment, I am just finishing the benchmark
18:55
TofuLynx
Ok finished
18:55
TofuLynx
I will do the PR
19:03
TofuLynx
PR done
19:03
TofuLynx
you can see it :)
19:05
BAndiT1983
alright travis ci has started, will look into the code a bit later, just have to go to the shop quick to get some stuff to prepare dinner
19:05
TofuLynx
Ok! :) I have to go to dinner soon too
19:10
BAndiT1983
TofuLynx, what happens if the extraction happens in single loop?
19:11
TofuLynx
Didn't try it, but I guess it would be slower, no? As it is in a single thread
19:11
TofuLynx
I will try it now
19:11
BAndiT1983
try it please, as having more threads is not automatically faster
19:12
TofuLynx
alright
19:12
BAndiT1983
it depends on how big the RAM areas are, which get locked while reading/writing
19:12
BAndiT1983
first step is to simplify before going crazy with optimization
19:13
TofuLynx
hmm
19:13
TofuLynx
dataUL[index] = _outputData[index];
19:13
TofuLynx
dataUR[index + 1] = _outputData[index + 1];
19:13
TofuLynx
dataLL[index + _width] = _outputData[index + _width];
19:13
TofuLynx
dataLR[index + _width + 1] = _outputData[index + _width + 1];
19:13
TofuLynx
do you think I should store index + _width in a variable?
19:15
BAndiT1983
have to go now, but will be back shortly, then i will reflect on it
19:15
TofuLynx
See you!
19:16
g3gg0
cu
19:24
TofuLynx
Benchmark the unique single loop
19:24
TofuLynx
it's slower
19:24
TofuLynx
benchmarked*
19:29
TofuLynx
so, the old loops took 42ms, the single loops (the ones in my PR) took 38ms and, finally, the complete extraction in a single loop took 48ms
19:33
BAndiT1983
back
19:34
BAndiT1983
could you pastebin the single loop code?
19:37
TofuLynx
sure
19:39
TofuLynx
https://pastebin.com/zR5BaPzk
19:39
TofuLynx
Here you go, Andrej :)
19:39
se6astian|away
changed nick to: se6astian
19:44
se6astian
changed nick to: se6astian|away
19:44
BAndiT1983
is the output still correct?
19:45
BAndiT1983
will add a unit test as example, which uses 8x8 data block, so we can verify every time without manual intervention
19:46
TofuLynx
yeah the output is still correct
19:46
TofuLynx
I will be right back, going to dinner
19:49
BAndiT1983
have a nice meal
20:00
TofuLynx
left the channel
20:01
TofuLynx
joined the channel
20:03
supragyaraj
joined the channel
20:04
supragyaraj
Good evening BAndiT1983, g3gg0 !
20:04
BAndiT1983
hi supragyaraj
20:04
supragyaraj
This time from Mumbai !
20:04
g3gg0
hi supragyaraj
20:04
g3gg0
:)
20:05
BAndiT1983
nice, at the arabian sea
20:05
supragyaraj
Very close indeed
20:05
BAndiT1983
never looked close at the map of india, but now i see that chennai is on the other side, at bengal gulf
20:06
supragyaraj
Very down...
20:08
supragyaraj
So, my question is... what more is needed in "How to stream data out from memory - Ask Bertl" ?
20:08
BAndiT1983
this is an ongoing task, so it will remain open for some time
20:09
BAndiT1983
when new infos are available, then it can be extended
20:09
BAndiT1983
you could inspect the docs of RAW video formats, like RED and ARRI, to see which metadata or frame data is usually required, like WB, aperture etc.
20:10
g3gg0
lens info
20:10
g3gg0
(although there is nothing yet)
20:10
g3gg0
filmmakers will need:
20:10
g3gg0
custom marks (button presses)
20:10
BAndiT1983
which buttons?
20:10
g3gg0
exposure info (exposure time, ISO etc)
20:11
supragyaraj
shutter speed etc too
20:11
g3gg0
lens infos (lens name/type, aperture, focal length)
20:11
g3gg0
exposure time = shutter speed
20:11
BAndiT1983
lens info is at the moment not possible, as there is no active part for it in axiom beta yet
20:11
supragyaraj
pg61/datasheet would not provide lens info.. am I wrong?
20:12
BAndiT1983
if i remember correctly, then the lens could be read out if we would have it
20:12
g3gg0
if supported infos like rolling shutter percentage
20:12
BAndiT1983
no, lenses usually have firmaware themselves
20:12
BAndiT1983
*firmware
20:12
g3gg0
prepare for the future
20:13
BAndiT1983
g3gg0, good point with rolling shutter, there is a global one in the image sensor, but haven't looked up what configs are there for it
20:14
g3gg0
button press -> in some situations, you would want to place cropmarks
20:14
g3gg0
*cut
20:14
TofuLynx
left the channel
20:14
BAndiT1983
ah, that ones
20:14
supragyaraj
not really understood you g3gg0
20:14
supragyaraj
some sort of custom markers?
20:14
BAndiT1983
yep
20:14
g3gg0
yes
20:15
BAndiT1983
so the editor knows in and out positions
20:15
g3gg0
also:
20:15
BAndiT1983
not a film maker, but i suppose that the camera starts also before the main scene, so markers are helpful to get required material
20:15
g3gg0
date/time of the shot (please do not rely on file time)
20:16
g3gg0
custom tags like scene or take number
20:16
supragyaraj
have added these for reference in trello board. Can you verify and add some
20:18
g3gg0
any meta information could be useful in post, maybe even the sensor temperature. be at least prepared how to handle such information :)
20:18
g3gg0
firmware version, fpga bitstream version
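The metadata items named so far could be collected into one structure (purely illustrative field names and types for discussion — not an existing apertus API, and lens data is not readable on the Beta yet):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hedged sketch gathering the per-clip metadata from this discussion.
struct ClipMetadata {
    // exposure
    uint32_t exposureTimeUs;   // exposure time == shutter speed
    uint32_t iso;
    // lens (reserved for the future; no active lens mount part yet)
    std::string lensName;
    float aperture;
    float focalLengthMm;
    // production
    uint64_t shotTimeUnix;     // date/time of shot, not file time
    std::string scene, take;   // custom tags
    // camera state
    float sensorTempC;
    std::string firmwareVersion, fpgaBitstreamVersion;
};
```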
20:18
supragyaraj
BAndiT1983: are we able to read the lens settings now?
20:19
supragyaraj
from axiom
20:19
BAndiT1983
read my previous comments, but to say it again, no
20:19
BAndiT1983
lens data is usually stored in the lens itself, otherwise it would be tedious to input it when changing lenses
20:20
BAndiT1983
http://www.dyxum.com/dforum/emount-electronic-protocol-reverse-engineering_topic119522.html
20:21
BAndiT1983
just as info
20:21
supragyaraj
how are custom tags like scene number and take number added on cameras?
20:21
BAndiT1983
usually through a menu, but we could do it through web remote
20:21
g3gg0
does not matter right now
20:22
g3gg0
current phase: find out what a future file format might be able to handle
20:22
g3gg0
not: how can this data be retrieved from the camera
20:22
g3gg0
:)
20:22
BAndiT1983
custom tags should be supported
20:22
BAndiT1983
i mean in the format
20:22
g3gg0
also: find out which kind of data would have to get stored and which implications this has
20:22
g3gg0
yep
20:22
g3gg0
extensible
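The extensible tagged-block idea could be sketched like this (MLV works roughly along these lines, but the field names and types here are illustrative, not the actual MLV definitions):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hedged sketch: every block carries a 4-byte type tag and its own size,
// so a reader can skip block types it does not understand -> extensible.
struct BlockHeader {
    char     type[4];   // e.g. "EXPO", "LENS", "VIDF" (illustrative tags)
    uint32_t size;      // total block size incl. header -> skippable
};

std::vector<uint8_t> makeBlock(const char type[4],
                               const std::vector<uint8_t>& payload)
{
    BlockHeader h;
    std::memcpy(h.type, type, 4);
    h.size = uint32_t(sizeof(BlockHeader) + payload.size());
    std::vector<uint8_t> out(sizeof(BlockHeader) + payload.size());
    std::memcpy(out.data(), &h, sizeof(h));
    std::memcpy(out.data() + sizeof(h), payload.data(), payload.size());
    return out;
}
```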
20:23
supragyaraj
seems like everything is pointing towards MLV ;)
20:23
g3gg0
you could also use mp4 format for that
20:23
BAndiT1983
MLV is a bit easier to use, but i'm biased, as i have already tried it
20:23
g3gg0
me too, as i defined the MLV format
20:24
BAndiT1983
;)
20:24
g3gg0
but supragyaraj is not biased yet
20:24
g3gg0
and he should from a neutral pov be able to determine which format to choose
20:24
supragyaraj
I try not to be... but seems like everything (in fact even the requirements) is discussed with that bias in mind :)
20:24
BAndiT1983
https://stackoverflow.com/questions/29565068/mp4-file-format-specification
20:25
BAndiT1983
MP4 reference
20:25
g3gg0
good pointer, thanks
20:25
BAndiT1983
you can also use multi-part TIFF, but it's missing some important tags, so you should look into CDNG, if it can store many frames in one file
20:26
RexOrCine
changed nick to: RexOrCine|away
20:26
g3gg0
even if you come to the conclusion that mp4 is far better because block handling and audio stream sync tasks will be perfectly handled by libraries, but maybe some licensing issues are a hurdle - the advantages should be documented
20:26
BAndiT1983
reference from apple is not that bad
20:27
supragyaraj
one thing I really like about MLV is the non linear storage of frames... I would really like a discussion on what kind of issues you faced with Canon... maybe AXIOM may run into it sooner or later...
20:28
supragyaraj
then MLV is clearly one of the better formats for that... but let's reserve this sentence for later
20:28
Kjetil
MXF *hides*
20:28
BAndiT1983
mts format was it for mpeg stream
20:28
g3gg0
> but let's reserve this sentence for later
20:28
g3gg0
exactly
20:28
BAndiT1983
this was a format also on first JVC fullhd cameras, but the quality was terrible
20:29
supragyaraj
Kjetil: we have MXF standing for consideration. See bit.do/RVCF
20:30
BAndiT1983
guys don't forget, that we are talking about some format which axiom should deliver at the end, optimistic way
20:30
g3gg0
yep
20:30
BAndiT1983
so we can't just spit out full-blown format
20:30
g3gg0
performance-wise probably not
20:31
g3gg0
-> pro/cons with assumptions and guesswork
20:31
supragyaraj
O.o, seems like the axiom camera's capabilities need to be analysed
20:32
BAndiT1983
https://en.wikipedia.org/wiki/MPEG_transport_stream
20:32
g3gg0
https://lab.apertus.org/T951
20:32
g3gg0
1. Current status analysis and requirement definition
20:32
g3gg0
examining the technical backgrounds of the signal processing path within the camera (i.e. "how does it work?")
20:32
g3gg0
technical possibilities and requirements of the signal processing path in terms of video container format (i.e. "what could we do with it and where are the limits?")
20:32
g3gg0
defining requirements/recommendations for a container format
20:32
g3gg0
;)
20:32
BAndiT1983
supragyaraj, we can do benchmarks at some point, but first we need to define requirements
20:33
g3gg0
and yes, it might be guesswork at some point
20:33
g3gg0
you cannot measure the CDNG with XML metadata writing speed on cards we do not have yet
20:34
g3gg0
but you can talk to experts about the possibilities of the hardware
20:34
supragyaraj
BAndiT1983: I meant capabilities the way g3gg0 put it
20:34
TofuLynx
joined the channel
20:34
g3gg0
and possible future development - maybe switching the zynq is on roadmap, maybe not etc
20:35
TofuLynx
Hello! I'm back!
20:35
danieel
none of the formats is limited by what the hardware can do (computation wise)
20:35
g3gg0
hi
20:35
supragyaraj
hi TofuLynx
20:35
danieel
and all are limited by what in-camera storage you have
20:35
g3gg0
if that is your conclusion, then document it and how you came to that conclusion
20:35
g3gg0
maybe the formats limit what you can do in camera
20:36
g3gg0
> out-of-order frame numbering
20:37
danieel
whats that ?
20:37
g3gg0
> one thing I really like about MLV is the non linear storage of frames...
20:37
supragyaraj
Guess what happens when a dual sensor system is set up on, let's say, a single file... everything may break (hypothetically)
20:37
g3gg0
exactly
20:37
danieel
with dual sensors you get 2 files?
20:38
BAndiT1983
you can write both frames into the same file, mlv also splits without losing data
20:38
supragyaraj
What if you need one... maybe sync things inherently
20:38
g3gg0
made some proposals for MLV how to handle that
20:38
danieel
most of containers can take multiple video tracks
20:38
TofuLynx
hey supragyaraj
20:39
TofuLynx
long time no talk!
20:39
TofuLynx
how are you?
20:39
g3gg0
so it supports subchannels
20:39
supragyaraj
AVI can take multiple streams but... it needs to know the frame count before recording
20:39
danieel
you can update that after stopping, that is not uncommon
20:39
BAndiT1983
just wondering how big the RAM has to be, so we can store enough 4k data without losing it, till it's fully written out to USB or disk
20:40
danieel
BAndiT1983: 2 frames
20:40
supragyaraj
TofuLynx: great :) how about you
20:40
BAndiT1983
supragyaraj, you cannot know it beforehand
20:40
supragyaraj
currently at airport... layover :)
20:40
supragyaraj
BAndiT1983: exactly my point
20:41
BAndiT1983
but avi supports streaming, or not?
20:41
supragyaraj
it does... very easily
20:41
danieel
so decide - are you streaming or writing to a media, damn
20:41
supragyaraj
but for multiple streams... offset is to be known
20:42
BAndiT1983
danieel, it's a hybrid thing
20:42
supragyaraj
? it's just a discussion that sometimes formats can be a limited thing too....
20:42
danieel
you can update the index continuously or after stopping, some makers do very clever things (seen cluster table modification, to append in front of file)
20:43
danieel
if you hope to do #include <container.h> then yes, that is limiting you.. not the actual format
20:43
g3gg0
possible, then your file format has requirements for the file system
20:45
g3gg0
(probably not the best thing to do btw)
20:45
Kjetil
#include <ffpmeg.h>
20:46
Kjetil
ffmpeg.h* ffs
20:47
BAndiT1983
ffmpeg on zynq? this bloated piece of a library? really?
20:47
danieel
havent they dropped 32bit support.. soo.. nope :P
20:48
Kjetil
Heh. I was more of a joke since it does a bit more than it should
20:50
supragyaraj
One of Kjetil's other jokes: http://irc.apertus.org/index.php?day=26&month=03&year=2018#41 ;)
20:51
Kjetil
:)
20:51
TofuLynx
supragyaraj: I am great too :p
20:51
TofuLynx
BAndiT1983: Did you see the loops?
20:51
supragyaraj
TofuLynx: What jokes have you cracked?
20:56
TofuLynx
huh? Did I miss something? xD
20:56
TofuLynx
ah, on apertus chat?
20:56
supragyaraj
TofuLynx: nope, I didn't understand
20:57
TofuLynx
i'm confused
20:58
supragyaraj
now okay?
21:00
Kjetil
But on the topic, I'm not sure that MPEG2TS is that suitable. You get synchronization, but it is kind of hard to extract frames afterwards without parsing the entire stream
21:00
g3gg0
left the channel
21:01
g3gg0
joined the channel
21:03
danieel
its made for streaming primarily, you are reading the file while playing it, that you can find a .ts file was not the aim
21:04
TofuLynx
BAndiT1983: you afk?
21:05
danieel
compare that to a TAR or a compressed file - you cant seek in that unless you read it fully through
21:10
BAndiT1983
TofuLynx, yep, sort of, dinner and big bang theory
21:12
TofuLynx
ah xD
21:12
TofuLynx
BAndiT1983: I saw that you merged the PR i made. Any considerations you want to say?
21:13
BAndiT1983
tomorrow i can say more, as i have to merge locally first, as there are several changes on my side, which will be committed soon
21:17
supragyaraj
left the channel
21:18
BAndiT1983
supragyaraj, now i'm just waiting for Kjetil to come out as flat-earther ;)
21:18
TofuLynx
Ok! :)
21:19
TofuLynx
I think I will advance to my next task
21:19
TofuLynx
the debayer class
21:19
BAndiT1983
ok, and i will inspect your code tomorrow, after home office
21:21
TofuLynx
Ok!
21:30
TofuLynx
So is your girlfriend loving TBBT, BAndiT1983? xD
21:32
BAndiT1983
yep, it crowd is still her favourite, but tbbt is also good
21:33
BAndiT1983
don't ask, but she also loved little britain ;)
21:34
TofuLynx
I don't know little britain
21:34
TofuLynx
also comedy?
21:35
BAndiT1983
yep, british humour, not for everyone
21:36
RexOrCine|away
changed nick to: RexOrCine
21:36
TofuLynx
I do like it :P
21:38
BAndiT1983
just look at YT, there are a lot of clips from there
21:45
BAndiT1983
so, off for today, see you
21:45
BAndiT1983
changed nick to: BAndiT1983|away
21:46
TofuLynx
See you!
21:55
TofuLynx
this may be seen as a dumb question
21:55
TofuLynx
xD
21:55
TofuLynx
but I cant find how do I create a new file in QtCreator
21:56
TofuLynx
nevermind
21:56
TofuLynx
fount it :)
21:56
TofuLynx
found*
22:03
TofuLynx
Gtg now :)
22:03
TofuLynx
See you tomorrow!
22:04
TofuLynx
Good Night!
22:04
TofuLynx
left the channel