00:48 | TofuLynx | left the channel | |
01:09 | TofuLynx | joined the channel | |
02:20 | rton | left the channel | |
03:16 | futarisIRCcloud | joined the channel | |
05:03 | mohit | joined the channel | |
05:26 | mohit | left the channel | |
05:39 | mohit | joined the channel | |
07:31 | mohit | left the channel | |
07:43 | supragya | joined the channel | |
07:43 | supragya | Hi TofuLynx
| |
08:16 | supragya | left the channel | |
08:21 | BAndiT1983|away | changed nick to: BAndiT1983
| |
09:13 | Bertl_zZ | changed nick to: Bertl
| |
09:13 | Bertl | morning folks!
| |
09:25 | Kjetil | Good morning
| |
09:29 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
09:40 | BAndiT1983|away | changed nick to: BAndiT1983
| |
10:02 | rton | joined the channel | |
10:36 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
10:36 | BAndiT1983|away | changed nick to: BAndiT1983
| |
11:10 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
11:10 | BAndiT1983|away | changed nick to: BAndiT1983
| |
11:15 | TofuLynx | Hey folks!
| |
11:16 | TofuLynx | BAndiT1983: I ran my code successfully on the beta firmware :)
| |
11:17 | BAndiT1983 | very good, have you succeeded in extracting the output back to your host machine?
| |
11:18 | TofuLynx | Nope. The tftp server just rejects the put or curl actions
| |
11:18 | BAndiT1983 | what is the error message?
| |
11:21 | BAndiT1983 | have you looked at the docs? -> https://wiki.archlinux.org/index.php/TFTP
| |
11:23 | Bertl | I wouldn't expect the QEMU tftp server to support put requests
| |
11:23 | Bertl | after all, it is designed to boot systems not to exchange data
| |
11:24 | Bertl | looking at the code, this is easily confirmed:
| |
11:24 | Bertl | * tftp.c - a simple, read-only tftp server for qemu
| |
11:26 | BAndiT1983 | if tftp is working, then the network must be working too, or not?
| |
11:26 | Bertl | yes, to some degree
| |
11:27 | Bertl | i.e. the connection between the virtual network card and the qemu network stack has to work for this
| |
11:28 | Bertl | but was the qemu TFTP confirmed working already? i.e. can you access a TFTP served file with the tftp get request?
| |
11:35 | supragya | joined the channel | |
11:35 | supragya | Hi Bandit1983, BertloO
| |
11:35 | RexO | joined the channel | |
11:36 | BAndiT1983 | hi
| |
11:36 | TofuLynx | Yes I can do get requests
| |
11:36 | TofuLynx | Hey supragya !
| |
11:36 | Rex0r | left the channel | |
11:36 | supragya | Hi all
| |
11:37 | supragya | Seems like TofuLynx too had problems installing beta software
| |
11:37 | supragya | did any conclusion was reached?
| |
11:37 | supragya | (rip my grammar)
| |
11:37 | TofuLynx | Bertl: so if I modify the tftp.c I would make a tftp server that accepts put requests?
| |
11:38 | TofuLynx | Yeah, install the old version, with the old Kernel, the newer Kernel still has issues
| |
11:38 | supragya | were you successful in installing old kernel?
| |
11:38 | TofuLynx | Yeah
| |
11:39 | supragya | Hmm, have to look into it... link perhaps?
| |
11:39 | TofuLynx | Without problems at all
| |
11:39 | TofuLynx | Let me check
| |
11:40 | supragya | BandiT, my C++ challenge was completed a week ago... or maybe before that... I was trying to get the beta software to work... but it was ready...
| |
11:40 | supragya | Bertl gave a few suggestions then
| |
11:40 | supragya | worked on it
| |
11:41 | supragya | should i push the software to git... and then can you give feedback?
| |
11:41 | BAndiT1983 | i know, Bertl sent me the link, have to add it to the c++ challenge task, so we can review it
| |
11:41 | TofuLynx | https://github.com/apertus-open-source-cinema/axiom-beta-qemu/blob/4334f25641b66013b49ab199e0ce75252bdd49aa/README.md
| |
11:41 | TofuLynx | Here it is, supragya :)
| |
11:41 | supragya | Thank you TofuLynx
| |
11:42 | BAndiT1983 | you can push it to git, review will be done as soon as someone has time, have to finish some personal stuff before that
| |
11:43 | supragya | Bandit, sure... but if there is no hurry, i would like to work on different types of debayering before then, get a makefile created and tests made (Bertl suggested these would be better)
| |
11:43 | TofuLynx | BAndiT1983: how do you know I ran it on the beta firmware?
| |
11:43 | BAndiT1983 | ?
| |
11:44 | supragya | Tofulynx, can you provide me with commit number
| |
11:45 | TofuLynx | What do you mean?
| |
11:46 | supragya | nvm, got it
| |
11:47 | supragya | Bandit, I ran OC, it is surprisingly blank, am i going wrong somewhere?
| |
11:48 | BAndiT1983 | blank? there are 3-4 applications, partially with features (see screenshots in wiki or lab)
| |
11:48 | TofuLynx | I will have to check OC too, I am interested in the different debayering methods task too
| |
11:49 | supragya | What are you planning to work on Tofulynx?
| |
11:49 | supragya | I was trying to work on OC debayering
| |
11:49 | TofuLynx | The same as you plan, I think xD
| |
11:49 | TofuLynx | The different raw debayering methods for OpenCine
| |
11:53 | TofuLynx | Were you successful with the old Kernel?
| |
11:55 | TofuLynx | left the channel | |
11:55 | TofuLynx | joined the channel | |
12:01 | Bertl | TofuLynx: there is no real point in modifying the tftp server in QEMU, there are easier ways to exchange data with the host
| |
12:01 | supragya | Sorry was away
| |
12:01 | TofuLynx | for example, by connecting the qemu to the internet?
| |
12:01 | supragya | Bertl, what is this discussion about tftp server? axiom-beta-software related?
| |
12:02 | Bertl | TofuLynx: well yes, local network, virtual harddisk, serial port, etc
| |
12:02 | TofuLynx | it is about the axiom beta firmware, to test the code on the firmware you can use a tftp server to get files from the Host
| |
12:02 | TofuLynx | but not to put files on the Host, unfortunately.
| |
12:03 | supragya | This beta firmware, the new one or old one ?
| |
12:03 | TofuLynx | I would assume both, but I only know about the old
| |
12:04 | supragya | were you successful in running it on old firmware?
| |
12:04 | TofuLynx | yeah
| |
12:05 | supragya | that cpp one?
| |
12:05 | TofuLynx | what do you mean?
| |
12:06 | supragya | I am asking, were you successful in running the debayering program on old firmware?
| |
12:06 | TofuLynx | yeah I was
| |
12:07 | supragya | k
| |
12:07 | supragya | so you are looking at multiple proposals here at [apertus] or debayering in OpenCine?
| |
12:08 | TofuLynx | I am mainly interested in T722 OpenCine: Raw Image Debayering Methods
| |
12:09 | supragya | nice!
| |
12:09 | TofuLynx | And you?
| |
12:11 | supragya | Well, to be frank I am not sure
| |
12:11 | supragya | I would like to work on OpenCine more than any hardware related things but really need to look
| |
12:12 | TofuLynx | I understand
| |
12:12 | TofuLynx | Best of luck to you!
| |
12:14 | supragya | ty, same to you
| |
12:17 | TofuLynx | left the channel | |
12:17 | TofuLynx | joined the channel | |
12:20 | BAndiT1983 | most stuff for the beta i'm testing under manjaro, as it is an archlinux derivative
| |
12:21 | supragya | i too have manjaro
| |
12:21 | supragya | Bandit, after building OC
| |
12:21 | supragya | i get only a single exec... ProcessingTest
| |
12:21 | BAndiT1983 | there should be 2 more, at least
| |
12:21 | BAndiT1983 | OCBackup and OCLauncher
| |
12:22 | supragya | well, no
| |
12:22 | supragya | i dont have them
| |
12:22 | supragya | used Qt gui and cli both
| |
12:23 | BAndiT1983 | in qtcreator you have to select the other projects, so it can build them
| |
12:23 | TofuLynx | left the channel | |
12:23 | BAndiT1983 | it's in the bottom left area, above the hammer or similar
| |
12:23 | TofuLynx | joined the channel | |
12:25 | supragya | I am doing as given in Build from command line at https://wiki.apertus.org/index.php/OpenCine.Build_Instructions, still the same
| |
12:25 | supragya | not using qtcreator
| |
12:25 | BAndiT1983 | hm, then i'm wondering why it just gives you processingtest
| |
12:26 | BAndiT1983 | will try it later on a fresh linuxmint vm, to ensure it still works, but was adjusting OC some time ago, had no problems
| |
12:26 | supragya | these were what built in cli:
| |
12:26 | supragya | [ 6%] Built target MemoryPool
| |
12:26 | supragya | [ 6%] Built target xxhash
| |
12:26 | supragya | [ 8%] Built target OCcore_autogen
| |
12:26 | supragya | [ 54%] Built target OCcore
| |
12:26 | supragya | [ 56%] Built target OCui_autogen
| |
12:26 | supragya | [ 82%] Built target OCui
| |
12:26 | supragya | [ 84%] Built target ProcessingTest_autogen
| |
12:26 | BAndiT1983 | check my last commits, hope i haven't deactivated stuff by mistake
| |
12:26 | supragya | [100%] Built target ProcessingTest
| |
12:27 | supragya | lemme do that quick
| |
12:27 | Bertl | supragya: no, the tftp was just QEMU related
| |
12:28 | supragya | BAndiT1983: i guess there is some problem (maybe) on commit https://github.com/apertus-open-source-cinema/opencine/commit/c85b9d118579e3f725910387aa972affe2b2947d
| |
12:29 | supragya | much is commented out, which looks like the culprit
| |
12:30 | BAndiT1983 | ah, so i've uploaded my test version without reverting these 2 lines
| |
12:30 | BAndiT1983 | my bad, sorry, was improving processingtest for gsoc a bit and then forgot about this
| |
12:32 | supragya | i tried uncommenting them in the cmakelist, no success... something bigger going on
| |
12:32 | supragya | can you do this so i can try tinkering with OC code?
| |
12:36 | BAndiT1983 | you have to re-run cmake
| |
12:36 | BAndiT1983 | nothing bigger, these 2 lines are the problem
| |
12:36 | BAndiT1983 | they are including the projects into build
| |
12:37 | BAndiT1983 | and if you run cmake again, then it should find it, then make and you should be good to go
| |
12:37 | supragya | there are some serious issues with OCui
| |
12:39 | BAndiT1983 | which ones? i don't have time at the moment to take a deeper look, have to fix stuff elsewhere
| |
12:39 | supragya | sorry, my bad
| |
12:39 | supragya | it was a cache problem (feel stupid now)
| |
13:01 | supragya | BAndiT1983: I ran OCBackup, OC to me feels like a bunch of placeholders, what all can be done on it right now?
| |
13:02 | BAndiT1983 | it can transfer data from a drive to a defined place, thumbnails must be broken, because i've deactivated FFMPEG for now
| |
13:02 | BAndiT1983 | don't know the state at the moment, have to take a look later
| |
13:03 | BAndiT1983 | FFMPEG was deactivated by me, because there were build problems on windows
| |
13:04 | TofuLynx | Can you explain what is FFMPEG?
| |
13:04 | supragya | FFMPEG is a library
| |
13:05 | TofuLynx | What does it do?
| |
13:05 | supragya | it decodes, encodes and processes video and audio formats...
| |
13:05 | supragya | like LodePNG for pictures
| |
13:05 | BAndiT1983 | used it to get preview for mov files
| |
13:06 | supragya | i can add destination folders but no source, no removable device is seen, no folder... how do i add them?
| |
13:07 | supragya | the software seems to be in a very early phase... am i wrong?
| |
13:08 | BAndiT1983 | ??
| |
13:08 | BAndiT1983 | you have to insert an sd-card or similar, either virtually or for real
| |
13:09 | BAndiT1983 | the software works, it's not polished and doesn't have that many bells and whistles, but it works
| |
13:09 | BAndiT1983 | or where did the screenshots in wiki and lab come from??
| |
13:09 | supragya | does it only work with sd card?
| |
13:10 | BAndiT1983 | it works with removable drives, at least that was the filter for it
| |
13:10 | BAndiT1983 | you can also create a virtual cd and mount it, my linux did it natively
| |
13:10 | BAndiT1983 | but i've used linuxmint then
| |
13:11 | BAndiT1983 | i have gathered examples from apertus blog and "burned" them to an iso file, then mounted it by double-clicking
| |
13:11 | BAndiT1983 | OCBackup saw the changes and showed them in the list in the top-left corner
| |
13:13 | supragya | OCBackup is not the main application, is it? How can i access the editor
| |
13:15 | BAndiT1983 | which editor? ocbackup is a module, the biggest one in terms of implemented stuff
| |
13:18 | supragya | I am talking about the images on https://www.apertus.org/opencine
| |
13:18 | supragya | does not look like OCBackup
| |
13:19 | BAndiT1983 | these are mock-ups, which were created long before OC was started for real
| |
13:20 | supragya | this was what i was asking :)
| |
13:20 | BAndiT1983 | and i joined the project when i saw them and wanted to implement this software
| |
13:20 | supragya | not implemented as of now right?
| |
13:20 | BAndiT1983 | no, we have focused on the backup feature first, but processingtest was meant as a predecessor to the grading software
| |
13:20 | BAndiT1983 | a playground for visual stuff, if you want to call it that
| |
13:22 | supragya | TY for info, can you elaborate on https://lab.apertus.org/T763
| |
13:23 | BAndiT1983 | vapoursynth is a successor to avisynth; these are frame servers, where you use scripts to process video clips, decode, encode etc.
| |
13:24 | BAndiT1983 | for software which does not support raw videos or for example MagicLantern, we wanted to provide a plugin for vapoursynth, so we can access OC as data handler
| |
13:24 | TofuLynx | So like a video editor?
| |
13:24 | BAndiT1983 | this would allow to load avi files in an editor, without it knowing that the data is coming from OC
| |
13:24 | BAndiT1983 | it's more a small IDE with CLI, you use mainly scripts
| |
13:25 | TofuLynx | Interesting
| |
13:26 | TofuLynx | Can you elaborate an example of what I would use VapourSynth for?
| |
13:28 | BAndiT1983 | sorry, no time to do it at the moment, please google for it, have to finish important stuff, just looking in IRC a bit
| |
13:30 | BAndiT1983 | for avisynth, the predecessor -> http://avisynth.nl/index.php/Script_examples
| |
13:32 | supragya | TofuLynx: seems to me that it works as follows:
| |
13:32 | supragya | just like you used lodepng's encode to encode png
| |
13:33 | supragya | from a vector
| |
13:33 | supragya | think of a video editor software using vapoursynth
| |
13:34 | TofuLynx | Wow
| |
13:34 | TofuLynx | It seems awesome
| |
13:34 | TofuLynx | Using "code" to edit
| |
13:34 | supragya | it acts as a library/plugin for that software... the trick is to add to the vapoursynth thingy, OC's capability
| |
13:34 | supragya | so that the editor uses vapoursynth, but vapoursynth for some functions uses OC
| |
13:34 | supragya | BAndiT1983: correct me if wrong
| |
13:36 | TofuLynx | Thanks for the explanation supragya
| |
13:36 | BAndiT1983 | no, it isn't working as a plugin
| |
13:38 | BAndiT1983 | e.g. you create an AVI file with vapoursynth, it's a container for the data
| |
13:38 | BAndiT1983 | the application, which is not working with RAW data usually, opens that file, and vapoursynth reacts to the requests for frames and so on
| |
13:39 | BAndiT1983 | it's seamless for the application, and you can process the data in between, e.g. in OC or vapoursynth, before pushing it to the final editor
| |
13:39 | BAndiT1983 | so it would be: OCcore -> VP plugin -> VP frame server -> video editor
| |
13:41 | supragya | so vapoursynth is something like live connection that exists in Adobe CC?
| |
13:41 | BAndiT1983 | can be, i don't know adobe products that well
| |
13:41 | supragya | if two files are open in Adobe AE and PR and one updates, the changes are reflected in the other?
| |
13:41 | BAndiT1983 | no
| |
13:41 | supragya | :(
| |
13:42 | BAndiT1983 | vapoursynth provides a file, which looks like simple video file
| |
13:43 | BAndiT1983 | you load it in the application, it also thinks it's a normal video file
| |
13:43 | BAndiT1983 | when the application requests data from it, frames, audio data etc., VP provides it, from different sources, in our case it should be the OCcore lib, which will be accessed through a VP plugin
| |
13:43 | supragya | are you talking of it being mainly a frameserver?
| |
13:43 | BAndiT1983 | yes
| |
13:44 | BAndiT1983 | i've mentioned it already above
| |
13:44 | supragya | TofuLynx: http://avisynth.nl/index.php/Frameserver
| |
13:44 | supragya | that is its main job
| |
13:44 | supragya | ?
| |
13:44 | TofuLynx | I'm understanding it now!
| |
13:45 | TofuLynx | But doesn't it take too much computational power?
| |
13:46 | BAndiT1983 | don't think so, but tests have to be done
| |
13:46 | BAndiT1983 | we can also send just partial data, you remember our talk about half size or even quarter, similar to after effects preview?
| |
13:46 | TofuLynx | Ok, seems like a memory optimization problem?
| |
13:46 | supragya | i guess it should take more computational power than it being a simple file in the first place
| |
13:46 | TofuLynx | Yeah I do
| |
13:47 | BAndiT1983 | in theory, the data is already loaded through OCcore, just the transfer is expensive, in my opinion
| |
13:47 | TofuLynx | Expensive to main memory or disk?
| |
13:47 | BAndiT1983 | we can accelerate stuff through opencl, cuda, shaders or openmp, so the ball is in our court on how we do it
| |
13:47 | BAndiT1983 | it's more of a RAM story
| |
13:48 | TofuLynx | Ok :)
| |
13:48 | supragya | https://helpx.adobe.com/premiere-pro/using/dynamic-link.html, similar to this BAndiT1983?
| |
13:48 | BAndiT1983 | this is a different story, it's about how to gather different compositions in AE together, as they can be heavy on memory and CPU
| |
13:48 | BAndiT1983 | we are talking about simple frame server first
| |
13:49 | BAndiT1983 | so we don't have to write our own one, we would use VP
| |
13:49 | BAndiT1983 | idea came from MagicLantern forum, but their solution was a bit too cumbersome
| |
13:49 | TofuLynx | Makes sense
| |
13:49 | TofuLynx | And VP is open-source too, right?
| |
13:50 | supragya | yeah
| |
13:50 | BAndiT1983 | yes
| |
13:50 | BAndiT1983 | https://github.com/vapoursynth/vapoursynth
| |
13:50 | TofuLynx | Cool!
| |
13:51 | BAndiT1983 | but we don't have to modify it, just write a plugin; it's not very difficult, tried it before as preparation for that task
| |
13:51 | supragya | TofuLynx: http://www.vapoursynth.com/doc/api/vapoursynth.h.html#writing-plugins
| |
13:51 | TofuLynx | Ok!
| |
13:51 | BAndiT1983 | yes, these are the docs
| |
13:52 | TofuLynx | supragya: thanks!
| |
13:52 | BAndiT1983 | you can take a look at that plugin, so you know how to start ->
| |
13:52 | BAndiT1983 | https://github.com/dubhater/vapoursynth-damb
| |
13:52 | TofuLynx | Seems well documented too
| |
13:52 | BAndiT1983 | but the structure is a bit questionable there
| |
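For orientation, here is a minimal sketch of what the entry point of such an OpenCine VapourSynth plugin could look like, loosely following the API 3 examples that ship with vapoursynth.h (the damb plugin linked above uses the same pattern). The plugin identifier, the namespace, the "Source" function and its stubbed create callback are hypothetical placeholders, and the exact signatures should be double-checked against the current VapourSynth headers:

    #include <VapourSynth.h>

    // Hypothetical filter-creation callback: a real OC source filter would call
    // vsapi->createFilter() here and serve frames produced by the OCcore library.
    static void VS_CC ocSourceCreate(const VSMap *in, VSMap *out, void *userData,
                                     VSCore *core, const VSAPI *vsapi) {
        vsapi->setError(out, "OCSource: not implemented yet");
    }

    // Plugin entry point: registers the plugin and its functions with the frame server.
    VS_EXTERNAL_API(void) VapourSynthPluginInit(VSConfigPlugin configFunc,
                                                VSRegisterFunction registerFunc,
                                                VSPlugin *plugin) {
        configFunc("org.apertus.opencine.source", "ocs", "OpenCine source (sketch)",
                   VAPOURSYNTH_API_VERSION, 1, plugin);
        registerFunc("Source", "path:data;", ocSourceCreate, nullptr, plugin);
    }

A script would then call something like ocs.Source(path=...) and hand the resulting clip to any application that talks to VapourSynth, which is exactly the OCcore -> VP plugin -> VP frame server -> editor chain described above.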
13:54 | supragya | In our case, where could we find OC decoding in codebase? pointer?
| |
13:55 | TofuLynx | BAndiT1983: this probably isn't your area, but axiom beta camera can do the processing in the fpga right? Like a phone
| |
13:55 | BAndiT1983 | it does high-frequency stuff in fpga
| |
13:55 | BAndiT1983 | Bertl is the expert
| |
13:56 | BAndiT1983 | processingtest shows the methods, which are used for loading and decoding
| |
13:56 | TofuLynx | Hmm ok! :)
| |
13:56 | BAndiT1983 | also linear debayering
| |
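As a rough illustration of what the linear debayering mentioned here boils down to, a small sketch for the green channel of an RGGB Bayer mosaic; ProcessingTest's actual implementation differs, and the buffer layout, function name and border handling below are just assumptions for the example:

    #include <cstdint>
    #include <vector>

    // Bilinear reconstruction of the green channel from an RGGB mosaic (sketch).
    // Border pixels are skipped for brevity; red and blue work analogously,
    // each with their own neighbour pattern.
    std::vector<uint16_t> DebayerGreenBilinear(const std::vector<uint16_t>& raw,
                                               int width, int height) {
        std::vector<uint16_t> green(raw.size(), 0);
        for (int y = 1; y < height - 1; ++y) {
            for (int x = 1; x < width - 1; ++x) {
                const size_t i = static_cast<size_t>(y) * width + x;
                const bool greenSite = ((x + y) % 2) == 1; // G photosites in an RGGB layout
                if (greenSite) {
                    green[i] = raw[i]; // green is measured directly at this photosite
                } else {
                    // average the four green neighbours (left, right, above, below)
                    const uint32_t sum = raw[i - 1] + raw[i + 1] + raw[i - width] + raw[i + width];
                    green[i] = static_cast<uint16_t>(sum / 4);
                }
            }
        }
        return green;
    }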
14:00 | supragya | Bye! and thanks BAndit1983 TofuLynx, gtg
| |
14:00 | BAndiT1983 | no problem
| |
14:00 | se6astian|away | changed nick to: se6astian
| |
14:01 | supragya | left the channel | |
14:06 | illwieckz_ | joined the channel | |
14:10 | illwieckz | left the channel | |
14:23 | Bertl | off for now ... bbl
| |
14:23 | Bertl | changed nick to: Bertl_oO
| |
14:26 | illwieckz_ | changed nick to: illwieckz
| |
14:48 | TofuLynx | left the channel | |
14:50 | TofuLynx | joined the channel | |
14:51 | sr6033_ | joined the channel | |
14:53 | sr6033_ | Hello. I am Shubham Rath. I am interested in contributing to this organisation. I have knowledge of computer vision and I work with opencv and C++. I went through the ideas list. Can anyone guide me?
| |
14:57 | BAndiT1983 | hi
| |
14:57 | BAndiT1983 | and welcome
| |
14:58 | BAndiT1983 | are you planning to enter gsoc or is it about contributing in general?
| |
15:01 | TofuLynx | Welcome!
| |
15:09 | sr6033_ | left the channel | |
16:03 | sr6033_ | joined the channel | |
16:04 | sr6033_ | I will be contributing in general and also trying for gsoc.
| |
16:04 | BAndiT1983 | what are you interested in?
| |
16:05 | sr6033_ | I like to work with C/C++, python and JS. And I am currently working in the area of computer vision using opencv.
| |
16:09 | BAndiT1983 | we have several areas where you can contribute, e.g. OpenCine, a toolsuite for RAW data backup and processing
| |
16:09 | BAndiT1983 | relatively new is an automatic PCB inspection, there is a prototype which uses node.js for general data processing and python for opencv
| |
16:10 | sr6033_ | Do you use github or some other version control?
| |
16:11 | BAndiT1983 | yes, there is apertus area on github
| |
16:11 | BAndiT1983 | https://github.com/apertus-open-source-cinema
| |
16:16 | BAndiT1983 | PCB inspection is not uploaded yet, have to do some cleanup, as it consists of several parts which are from different prototypes
| |
16:18 | sr6033_ | okay. There are 2 ideas posted under opencine. I have gone through them.
| |
16:21 | BAndiT1983 | i think you mean gsoc, as there is a separate section for opencine and it has more than 2 ;)
| |
16:21 | BAndiT1983 | https://lab.apertus.org/project/view/19/
| |
16:22 | sr6033_ | okay :D
| |
16:22 | sr6033_ | But if i am willing to apply for gsoc, then I need to work on those 2 only, right?
| |
16:28 | se6astian | the idea page just contains some "suggested tasks"
| |
16:28 | se6astian | you can also propose a totally different idea
| |
16:28 | se6astian | or an idea that is mentioned on the lab
| |
16:29 | se6astian | the idea page should just give students some guidance on what areas we are currently working on, but it's by no means a "limit" of the GSoC scope
| |
16:29 | sr6033_ | okay. Thank you. And I need to complete the qualification task first, right?
| |
16:29 | se6astian | correct
| |
16:33 | BAndiT1983 | and you have to do the C++ challenge before applying
| |
16:33 | BAndiT1983 | at least for OC tasks
| |
16:34 | sr6033_ | okay. Thank you :)
| |
16:34 | TofuLynx | Bandit
| |
16:34 | BAndiT1983 | yes?
| |
16:34 | TofuLynx | Does OC have some multithreading capabilities yet?
| |
16:36 | BAndiT1983 | have implemented it partially, there is still a lot to optimize, but don't focus on that first
| |
16:37 | BAndiT1983 | premature optimization is root of all evil ;)
| |
16:37 | BAndiT1983 | Donald E. Knuth
| |
16:38 | TofuLynx | Ah xD
| |
16:39 | TofuLynx | Does Axiom Beta
| |
16:39 | TofuLynx | in general
| |
16:40 | TofuLynx | wait
| |
16:40 | TofuLynx | Is Axiom Camera, at the end, expected to be used by simple users?
| |
16:40 | TofuLynx | like amateur users
| |
16:41 | BAndiT1983 | yes
| |
16:41 | TofuLynx | ok!
| |
16:41 | BAndiT1983 | we are working on a web-based UI for it, so you can control it from smartphone, tablet or laptop
| |
16:41 | TofuLynx | that seems great! and what about the output of it?
| |
16:41 | BAndiT1983 | currently the camera is controlled through console, but "normal" users are not that keen to do that
| |
16:42 | BAndiT1983 | which output?
| |
16:42 | TofuLynx | video and photo
| |
16:42 | BAndiT1983 | it does both, HDMI is there and so on, but se6astian has more details
| |
16:42 | BAndiT1983 | off for now, back later, probably
| |
16:42 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
16:43 | TofuLynx | se6astian: Is axiom expected at the end to output simple png photos?
| |
16:44 | TofuLynx | and simple .h265 videos?
| |
16:46 | Bertl_oO | png maybe, h265 probably not
| |
16:52 | se6astian | currently we write still images in RAW12 format in the AXIOM Beta: https://wiki.apertus.org/index.php/RAW12
| |
16:53 | se6astian | with some additional metadata RAW12 can be converted to DNG for example, or TIFF/PNG
| |
16:57 | TofuLynx | ok!
| |
16:58 | TofuLynx | I was thinking, maybe for the basic users, in a far future, the use of an NPU on the FPGA to process images for the user in an "automatic mode"
| |
16:58 | TofuLynx | not sure how far-fetched that idea is, maybe it doesn't really have a point
| |
16:59 | TofuLynx | ah, NPU I mean neural processing unit
| |
17:00 | Bertl_oO | you certainly can do that, but what would the benefit of an NPU be for the typical cinema movie maker?
| |
17:00 | TofuLynx | yeah there's really no benefit at all
| |
17:01 | Bertl_oO | note that the Beta is flexible enough to be adapted for almost any purpose as long as the hardware permits it
| |
17:01 | Bertl_oO | (and even if not, you can usually extend or adapt the hardware :)
| |
17:02 | TofuLynx | Yeah, if I had the knowledge of FPGAs and NPUs and hardware, I would try something like that, an AI powered camera, but there's really no point in that and i am from informatics engineering xD
| |
17:03 | Bertl_oO | just find a cheap AI on a chip and make yourself a plugin or shield *G*
| |
17:04 | TofuLynx | wow xD
| |
17:04 | TofuLynx | yeah, that's a big strength of the axiom beta.
| |
17:04 | TofuLynx | The modularity
| |
17:09 | sr6033_ | @TofuLynx are you working on opencine?
| |
17:10 | sr6033_ | left the channel | |
17:10 | sr6033 | joined the channel | |
17:11 | TofuLynx | @sr6033 No! I am a student interested in contributing and participating in GSoC. OpenCine is my target!
| |
17:11 | sr6033 | Okay. Cool.
| |
17:11 | TofuLynx | namely the task T722
| |
17:28 | supragya | joined the channel | |
17:29 | supragya | Hi! sr6033, welcome
| |
17:31 | se6astian | you can read up on past conversations in this channel here btw: http://irc.apertus.org/
| |
17:31 | supragya | me?
| |
17:32 | supragya | *just friendly hand waving*, se6astian dont worry ;)
| |
17:32 | supragya | not upto any real convo
| |
17:33 | supragya | Btw, se6astian, what area of apertus do you work on, can I ask?
| |
17:36 | sr6033 | Hello @supragya :)
| |
17:39 | supragya | left the channel | |
17:42 | supragya | joined the channel | |
17:43 | sr6033_ | joined the channel | |
17:46 | sr6033 | left the channel | |
17:56 | sr6033 | joined the channel | |
17:56 | sr6033_ | left the channel | |
18:38 | se6astian | (6:31:55 PM) supragya: me? <- anyone
| |
18:38 | se6astian | me -> https://www.apertus.org/user/7
| |
18:43 | supragya | :) just wanted to know if you work on OC, or hardware so i could ask meaningful questions
| |
18:49 | Bertl_oO | in general the best approach is to just ask a question here
| |
18:49 | Bertl_oO | anybody who might be able to answer it will then reply
| |
18:50 | Bertl_oO | (often with an indication of the quality of the answer or a hint who might know more)
| |
18:53 | supragya | Okay... So i have been through OC... ran it and am getting to understand the system, however is there any real reason why OCBackup is being made first? What about the main editor...? Any design documents?
| |
18:54 | Bertl_oO | that is something only BAndiT1983 is able to answer
| |
18:58 | supragya | He was not online... So was asking
| |
19:09 | se6astian | BAndiT1983|away: is working on OC
| |
19:09 | se6astian | he will be back soon
| |
19:11 | sr6033 | left the channel | |
19:31 | se6astian | the backup module was prioritized as it was a small scope project that could make OC a handy tool for endusers already
| |
19:36 | supragya | So does OC decode anything RN? The OCcore is pretty empty for https://lab.apertus.org/T763... Cannot find any RAW
| |
19:36 | supragya | decoders there
| |
19:38 | se6astian | RN?
| |
19:38 | supragya | right now?
| |
19:40 | arpu | joined the channel | |
19:40 | supragya | Can see DNGdecoder... Is that what is talked about in the link?
| |
19:49 | DrLuke | joined the channel | |
19:49 | DrLuke | Hello! Took some beautyshots of the axiom beta: https://drluke.space/public/apertus/
| |
19:49 | DrLuke | Feel free to use, let me know if you need a license to go with it
| |
19:50 | DrLuke | Excuse the crude lighting, the setup was rather ghetto :P
| |
19:51 | se6astian | very nice :)
| |
19:52 | se6astian | supragya: yes I think DNG is the only thing that can be decoded currently
| |
19:52 | se6astian | BAndiT1983|away: will be able to confirm
| |
19:54 | felix_ | the photos DrLuke made are going to be used on the poster for the presentation at the german ministry of science and education
| |
19:54 | DrLuke | That sounds important
| |
19:55 | Bertl_oO | nice!
| |
19:56 | Bertl_oO | felix_: did you test the reworked SDI module?
| |
19:56 | felix_ | no, only tested the module i used for development
| |
19:57 | Bertl_oO | would be nice to know if it now works as expected (when you find the time, not urgent)
| |
19:57 | felix_ | what was broken on module b?
| |
19:57 | felix_ | i'll see if i have time for that tomorrow
| |
19:57 | Bertl_oO | I just got reminded because I saw the pictures of it :)
| |
19:59 | Bertl_oO | IMHO the only reason for intermittent working (as described) could be the razor beam connectors
| |
19:59 | felix_ | ok
| |
19:59 | Bertl_oO | so I simply re-did all of them
| |
20:04 | Rex0r | joined the channel | |
20:05 | RexO | left the channel | |
20:27 | felix_ | in case you wondered: DrLuke is one of the 5 persons i share an office with
| |
20:29 | supragya | left the channel | |
20:45 | attila_turgut__ | left the channel | |
21:14 | Bertl_oO | felix_: good to know!
| |
21:18 | BAndiT1983|away | changed nick to: BAndiT1983
| |
21:21 | TofuLynx | these photos look amazing, how were they shot?
| |
21:23 | TofuLynx | BAndiT1983, do you have any suggestions of anything I could learn? Maybe related to OpenCine
| |
21:25 | felix_ | 2 led desk lamps with a sheet of paper as diffusor each, 1 mobile phone with another piece of paper as diffusor, 2 pieces of paper as background and a good camera with tripod. and then a bit of post-processing with gimp. iirc DrLuke also made a photo of that setup
| |
21:25 | BAndiT1983 | TofuLynx, what is your favorite area in development?
| |
21:26 | TofuLynx | Thanks felix!
| |
21:26 | DrLuke | TofuLynx: https://my.mixtape.moe/qoqtbr.jpg
| |
21:26 | DrLuke | Thanks, the setup was a bit ghetto, but I tried my best with the stuff I had available :P
| |
21:29 | TofuLynx | BAndiT1983 not sure how to explain what I like, it's sort of something related to handling fairly low level things, such as, for example, multithreading, and I am highly interested in parallelization, something I have never learned about yet.
| |
21:29 | TofuLynx | Maybe I will give a shot at OpenCL cores?
| |
21:29 | TofuLynx | Wow DrLuke
| |
21:30 | TofuLynx | That setup is really improvised! Awesome!
| |
21:30 | BAndiT1983 | debayering is a good area to do such stuff, you could take a look at processingtest first
| |
21:30 | TofuLynx | Ok! Will check it :D
| |
21:31 | BAndiT1983 | opencl was also planned, but had no time to approach it, so you could try it
| |
21:31 | TofuLynx | You give preference to OpenCL, right?
| |
21:31 | TofuLynx | better to use OpenCL initially than CUDA cores?
| |
21:32 | TofuLynx | DrLuke, did you use an axiom beta kit to capture the photo or was it another machine?
| |
21:32 | DrLuke | TofuLynx: Canon 600D
| |
21:32 | DrLuke | with kit lens
| |
21:32 | DrLuke | would've looked much neater with better glass
| |
21:33 | TofuLynx | what is kit lens?
| |
21:33 | BAndiT1983 | opencl is portable and cuda is just for nvidia, although i prefer geforce usually
| |
21:33 | BAndiT1983 | lens supplied with the camera
| |
21:33 | TofuLynx | Yeah well I prefer OpenCL just for the sake of being open source xD
| |
21:33 | DrLuke | TofuLynx: the lens that comes with the camera when you buy it
| |
21:34 | TofuLynx | Ah, they are usually low quality?
| |
21:34 | DrLuke | they are ok quality
| |
21:34 | TofuLynx | so simply there is just better?
| |
21:34 | DrLuke | but the high-end lens produces much sharper images, you could zoom in two or three times more before it gets blurry
| |
21:34 | TofuLynx | makes sense!
| |
21:35 | DrLuke | I always thought it's just placebo, but I've once tried a 3000€ lens on my camera and holy crap it's like a whole different camera
| |
21:35 | TofuLynx | wow xD
| |
21:35 | TofuLynx | 3000€ tho
| |
21:35 | DrLuke | lol yeah
| |
21:36 | DrLuke | TofuLynx: https://www.amazon.de/Canon-EF-70-200-USM-Objektiv/dp/B0033567D8/ref=sr_1_8?ie=UTF8&qid=1519508161&sr=8-8&keywords=canon+L
| |
21:36 | DrLuke | heh, they're cheaper now
| |
21:36 | TofuLynx | yeah
| |
21:36 | TofuLynx | it has its own tripod mount?
| |
21:36 | DrLuke | That's the kind of glass you'd buy if you made your living with photography
| |
21:37 | DrLuke | TofuLynx: yeah, the lens is much heavier than the body
| |
21:37 | TofuLynx | Understood!
| |
21:37 | TofuLynx | If I was rich... xD
| |
21:37 | DrLuke | Heh yeah... if only :P
| |
21:39 | BAndiT1983 | TofuLynx, if you like, then you could research what a fast conversion of 12bit to 16bit could look like, this time 16bit, so we can expand the range
| |
21:40 | BAndiT1983 | but also how to load just half the size or a quarter and how the performance is affected
| |
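A small sketch of what such a 12bit-to-16bit expansion could look like, assuming the packed layout of two 12-bit samples in three bytes described on the RAW12 wiki page; the exact byte/nibble order and the scaling choice (a plain shift versus replicating the top bits into the low ones to reach the full range) would have to be verified against real files:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Unpack pairs of 12-bit samples (3 bytes per pair) and expand them to 16 bit.
    // Assumption: sample A = byte0 plus the high nibble of byte1,
    //             sample B = the low nibble of byte1 plus byte2.
    std::vector<uint16_t> UnpackRaw12To16(const std::vector<uint8_t>& packed) {
        std::vector<uint16_t> out;
        out.reserve((packed.size() / 3) * 2);
        for (size_t i = 0; i + 2 < packed.size(); i += 3) {
            const uint16_t a = static_cast<uint16_t>((packed[i] << 4) | (packed[i + 1] >> 4));
            const uint16_t b = static_cast<uint16_t>(((packed[i + 1] & 0x0F) << 8) | packed[i + 2]);
            // shift into the upper 12 bits and replicate the top 4 bits into the
            // lowest 4, so 0xFFF maps to 0xFFFF instead of 0xFFF0
            out.push_back(static_cast<uint16_t>((a << 4) | (a >> 8)));
            out.push_back(static_cast<uint16_t>((b << 4) | (b >> 8)));
        }
        return out;
    }

Loading only half or a quarter of the resolution could then be sketched on top of this by skipping Bayer cells while unpacking, trading detail for less memory traffic.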
21:40 | TofuLynx | hmm
| |
21:40 | TofuLynx | what is your definition of fast?
| |
21:41 | supragya | joined the channel | |
21:42 | TofuLynx | also, do tell me one thing
| |
21:42 | TofuLynx | the linearization process is unique by machine or is based on the sensor average?
| |
21:43 | supragya | BAndiT1983, isn't MagicLantern a firmware while OC is an app?
| |
21:43 | supragya | How do you plan on making vapoursynth connect the two of them?
| |
21:44 | supragya | left the channel | |
21:44 | supragya | joined the channel | |
21:47 | BAndiT1983 | linearization is usually based on the additional data from sensor, if i remember correctly
| |
21:47 | TofuLynx | that extra data appended from the sensor?
| |
21:47 | BAndiT1983 | ML is an extension for canon firmware, they have their own data format, i have implemented a decoder for it in OC
| |
21:48 | BAndiT1983 | the image sensor has several settings, so these are also important when trying to get the full range of data
| |
21:48 | BAndiT1983 | for example one could use a look-up table for a performance increase
| |
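A sketch of the look-up-table idea: a 12-bit sample can only take 4096 different values, so any per-pixel curve (linearization, scaling to 16 bit, gamma) can be precomputed once and applied with a single array access per pixel. The linear placeholder curve below is only an assumption; a real table would be derived from the sensor calibration data:

    #include <array>
    #include <cstdint>
    #include <vector>

    // Build the 4096-entry table once; the curve here is a placeholder
    // (plain 12 -> 16 bit scaling).
    std::array<uint16_t, 4096> BuildLinearizationLut() {
        std::array<uint16_t, 4096> lut{};
        for (uint32_t code = 0; code < 4096; ++code) {
            lut[code] = static_cast<uint16_t>((code * 65535u) / 4095u);
        }
        return lut;
    }

    // Apply the table to a frame of unpacked 12-bit samples.
    void ApplyLut(std::vector<uint16_t>& samples, const std::array<uint16_t, 4096>& lut) {
        for (uint16_t& s : samples) {
            s = lut[s & 0x0FFF]; // the mask keeps the index in range even for dirty input
        }
    }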
21:48 | TofuLynx | such as aperture and stuff?
| |
21:51 | BAndiT1983 | yes, but aperture comes from the lens, iirc
| |
21:51 | TofuLynx | ok! :)
| |
21:52 | supragya | So how would the workflow be thought of? Obv, ML will be a frameClient through VS using the OC plugin, which will decode .... ? And where will this run? In camera..? Sorry, didn't actually get the full picture
| |
21:52 | BAndiT1983 | ML? frame server?
| |
21:53 | BAndiT1983 | ML has it's own video format
| |
21:53 | BAndiT1983 | at the moment OC supports basic ML and DNG stuff
| |
21:53 | BAndiT1983 | OC is for the PC, it's a suite to process data from the camera
| |
21:53 | DrLuke | BAndiT1983: Yes, aperture is a characteristic of the lens
| |
21:56 | supragya | Bandit, you told me that the objective of the occore plugin was to bring to VS something which will have OC as a backend to process the RAW etc. Something like ML. I am just trying to figure out what you meant.
| |
21:57 | supragya | ML would benefit... you said...
| |
21:58 | supragya | Quote-"for software which does not support raw videos or for example MagicLantern, we wanted to provide a plugin for vapoursynth, so we can access OC as data handler"
| |
21:59 | se6astian | off to bed
| |
21:59 | se6astian | good night
| |
21:59 | se6astian | changed nick to: se6astian|away
| |
22:08 | anuejn | BAndiT1983: TofuLynx : I think opengl would be a better choice for debayering, because problems that map nicely to the graphics domain are generally better solved with opengl than opencl
| |
22:09 | anuejn | BAndiT1983: So can i imagine the vapoursynth frameserver as something like a fuse fs for video?
| |
22:09 | TofuLynx | anuejn, are you sure?
| |
22:09 | TofuLynx | OpenCL is supported by AMD graphics cards for parallelization
| |
22:10 | DrLuke | is the bayerfilter a pixel-perfect thing?
| |
22:10 | DrLuke | (or: could someone show me what an image before debayering looks like?)
| |
22:10 | anuejn | se6astian: You committed some file with a space in the path :P. this breaks my setup :rofl:
| |
22:11 | TofuLynx | anuejn I think you're right
| |
22:11 | anuejn | moreover, wouldn't it be smart to delete all the php stuff from the github repo and replace it with the stuff you put into the dir with the evil path?
| |
22:11 | TofuLynx | "The OpenGL implementations invariably run faster even after hardcore OpenCL kernel optimization."
| |
22:12 | anuejn | DrLuke: yes it has to be done per pixel
| |
22:12 | supragya | DrLuke, it is possible to see an image before debayering... But any viewer will eventually debayer it before showing it on screen
| |
22:12 | anuejn | DrLuke: https://anuejn.github.io/batic/
| |
22:12 | DrLuke | I know opengl, so I could help with that
| |
22:14 | supragyaraj | joined the channel | |
22:14 | DrLuke | anuejn: cool, I will play with that tomorrow
| |
22:14 | TofuLynx | anuejn
| |
22:15 | TofuLynx | I have been researching and it seems that openGL is not suited for debayering
| |
22:15 | TofuLynx | but excellent for displaying the result
| |
22:15 | anuejn | DrLuke: there are some example debayering shaders in here: https://github.com/anuejn/batic/tree/master/examples/shaders just copy and paste them
| |
22:15 | DrLuke | TofuLynx: Why do you think it's unsuitable?
| |
22:16 | TofuLynx | I didn't do any profound research but i found this
| |
22:16 | TofuLynx | "OpenGL may place your video within the editing interface and make it play, but when you throw color correction onto it, CUDA or OpenCL will do the calculations to alter each pixel of the video properly."
| |
22:17 | TofuLynx | I think OpenCL is more suited as it is a Computing API, instead of a Graphics API such as OpenGL
| |
22:17 | supragya | left the channel | |
22:17 | TofuLynx | and OpenCL is highly used for parallelization
| |
22:18 | DrLuke | opengl is optimized for pushing around pixels
| |
22:18 | anuejn | which is what we are doing
| |
22:18 | DrLuke | this is pretty classical opengl territory if you ask me
| |
22:19 | DrLuke | load raw image as texture -> sample texture multiple times per pixel to calculate final color -> render to framebuffer -> write framebuffer back to cpu
| |
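To make that pipeline a bit more concrete, here is a sketch of the kind of fragment shader this involves, written as a GLSL string the way it would sit in C++ host code. It only does naive per-pixel reconstruction of an RGGB mosaic stored in a single-channel texture, ignores image borders, and the uniform and variable names are made up for the example (the shaders in the batic repository linked above are the real reference):

    // Hypothetical GLSL 3.30 fragment shader for naive RGGB debayering.
    // "rawTex" holds the mosaic as a single-channel texture, "rawSize" its size in pixels.
    static const char* kDebayerFragmentShader = R"glsl(
    #version 330 core
    uniform sampler2D rawTex;
    uniform vec2 rawSize;
    in vec2 texCoord;
    out vec4 fragColor;

    void main() {
        ivec2 p = ivec2(texCoord * rawSize);
        float c  = texelFetch(rawTex, p, 0).r;               // this photosite
        float cx = texelFetch(rawTex, p + ivec2(1, 0), 0).r; // right neighbour
        float cy = texelFetch(rawTex, p + ivec2(0, 1), 0).r; // lower neighbour
        float cd = texelFetch(rawTex, p + ivec2(1, 1), 0).r; // diagonal neighbour

        bool evenX = (p.x % 2) == 0;
        bool evenY = (p.y % 2) == 0;
        vec3 rgb;
        if (evenX && evenY)        rgb = vec3(c, 0.5 * (cx + cy), cd); // red site
        else if (!evenX && !evenY) rgb = vec3(cd, 0.5 * (cx + cy), c); // blue site
        else if (!evenX && evenY)  rgb = vec3(cx, c, cy);              // green on a red row
        else                       rgb = vec3(cy, c, cx);              // green on a blue row
        fragColor = vec4(rgb, 1.0);
    }
    )glsl";

Rendering a full-screen quad with this shader and reading the framebuffer back (or displaying it directly) covers the remaining steps listed above.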
22:19 | TofuLynx | and "pushing around pixels" includes the debayering process and linearization and stuff?
| |
22:19 | anuejn | jup think so, too
| |
22:19 | TofuLynx | makes sense
| |
22:20 | anuejn | DrLuke: thats exactly, what i thought / implemented
| |
22:20 | DrLuke | TofuLynx: This is no different to any of the post processing shaders that are present in any modern game engine
| |
22:20 | DrLuke | technology-wise
| |
22:21 | TofuLynx | yeah, I understand
| |
22:21 | TofuLynx | and we have the same purpose as the game engine
| |
22:21 | TofuLynx | to display it to the user as fast as possible
| |
22:21 | DrLuke | plus with this you could even debayer video in realtime if you can push the raw data into the gpu fast enough
| |
22:21 | TofuLynx | That's OpenCine's goal
| |
22:22 | DrLuke | If you need help developing that, I know some opengl :)
| |
22:22 | TofuLynx | Ok! Thanks!
| |
22:23 | TofuLynx | I will have to propose in my GSoC proposal the changing of openCL/CUDA cores to OpenGL
| |
22:23 | TofuLynx | I think it makes sense, will have to get a complete explanation ready
| |
22:24 | TofuLynx | DrLuke, what is your "area" on apertus?
| |
22:24 | DrLuke | TofuLynx: None
| |
22:24 | DrLuke | Would love to get involved though
| |
22:24 | TofuLynx | Hmm
| |
22:24 | TofuLynx | What contributions do you usually do?
| |
22:24 | DrLuke | anuejn: btw: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); will make the preview look much better when zooming out
| |
22:25 | DrLuke | TofuLynx: On my own time I work on my VJ software written in opengl, and I am studying electrical engineering on the side
| |
22:26 | TofuLynx | VJ?
| |
22:26 | DrLuke | VideoJockey, basically like a DJ but for real time generated video
| |
22:26 | anuejn | DrLuke: the zooming is just the browser transforming a canvas object, which i cannot influence
| |
22:27 | DrLuke | anuejn: Oh, nevermind then
| |
22:27 | anuejn | was a quick hack at the cost of aliasing
| |
22:27 | TofuLynx | wow Never heard about that before
| |
22:27 | TofuLynx | that's great!
| |
22:27 | DrLuke | TofuLynx: warning, loud: https://www.youtube.com/watch?v=xwAj3wwsCI0&list=PLuAUfqrFxS6a_1_5fM2CK0Ez6o-fqLGhE&index=1
| |
22:27 | DrLuke | Jup, it's fun :)
| |
22:28 | TofuLynx | :D
| |
22:29 | anuejn | DrLuke: so you are into demoscene stuff?
| |
22:29 | DrLuke | anuejn: I haven't made my entry into the demoscene yet
| |
22:30 | DrLuke | I planned to release my first demo this year at revision, but time got ahead of me
| |
22:30 | DrLuke | Maybe next year
| |
22:30 | DrLuke | But yeah, I admire everything they do
| |
22:30 | TofuLynx | revision?
| |
22:30 | DrLuke | TofuLynx: one of the biggest demoparties
| |
22:30 | anuejn | nice! me too :)
| |
22:30 | DrLuke | basically you go there to finish and release your demos
| |
22:31 | DrLuke | anuejn: Neat, maybe we'll see each other's demo at revision 2019 then? :P
| |
22:31 | supragyaraj | BAndiT1983: available? Ping!!
| |
22:31 | TofuLynx | sounds awesome!
| |
22:31 | anuejn | maybe :)
| |
22:32 | DrLuke | anuejn: Do you know the GPN?
| |
22:33 | Bertl_oO | supragyaraj: IRC bridges space _and_ time ... just go ahead and ask what you want to know, you'll get your answer sooner or later
| |
22:33 | anuejn | jup was there last year :) was quite cool
| |
22:34 | DrLuke | anuejn: Oh neat, I'll be VJing there aswell
| |
22:34 | DrLuke | if you wanna do a live-shadering session hit me up, I'm still looking for talented people :)
| |
22:34 | supragyaraj | left the channel | |
22:36 | anuejn | looking forward to it! (i'm not that talented 🙈)
| |
22:37 | DrLuke | Don't tell anyone, but neither am I! :P
| |
22:38 | DrLuke | I just always feel bad that I'm the only one who gets to play with the crazy setup we have, others should get a chance as well
| |
22:39 | DrLuke | Anyway, time for sleep, good night!
| |
22:39 | anuejn | bye!
| |
22:39 | anuejn | see you
| |
22:39 | TofuLynx | Good night!
| |
22:39 | TofuLynx | Great to meet you!
| |
22:39 | DrLuke | Same :)
| |
23:13 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
23:45 | seku | left the channel |