01:58 | illwieckz | left the channel | |
05:01 | Bertl_oO | off to bed now ... have a good one everyone!
| |
05:01 | Bertl_oO | changed nick to: Bertl_zZ
| |
05:55 | felix_ | left the channel | |
05:55 | se6astian | left the channel | |
05:55 | felix_ | joined the channel | |
05:55 | se6astian | joined the channel | |
07:53 | mithro | left the channel | |
07:54 | mithro | joined the channel | |
08:14 | se6astian | good day
| |
09:28 | mithro | left the channel | |
09:30 | mithro | joined the channel | |
11:38 | Bertl_zZ | changed nick to: Bertl
| |
11:38 | Bertl | morning folks!
| |
16:53 | Bertl | off for now ... bbl
| |
16:53 | Bertl | hmm, disregard, meeting coming up ;)
| |
16:59 | se6astian | yes please :)
| |
17:00 | se6astian | MEETING TIME! who is here?
17:00 | Bertl | is here ...
| |
17:01 | vup | is here
| |
17:01 | se6astian | vup, any news to report?
| |
17:02 | vup | a little
| |
17:02 | vup | anuejn and I mostly worked on some small fixes and dependency updates for narui (the ui framework for the recorder) and published the 0.1 version: https://crates.io/crates/narui
17:03 | anuejn | is here
| |
17:03 | vup | next we will probably start to finally merge the integration with the recorder: https://github.com/apertus-open-source-cinema/axiom-recorder/pull/7
| |
17:05 | vup | (thats it)
| |
17:05 | Bertl | nice
| |
17:06 | se6astian | great, anything to add anuejn?
| |
17:06 | anuejn | no, I didnt get much done
| |
17:06 | se6astian | ok
| |
17:06 | se6astian | anyone else who wants to report?
| |
17:07 | se6astian | ilia3101 from the magic lantern forum will join us at 19:15
| |
17:07 | se6astian | and is interested in helping us with raw processing: https://www.magiclantern.fm/forum/index.php?topic=26299.0
| |
17:07 | se6astian | I doubt we will still be meeting by then, but please return in 1h for a warm welcome
| |
17:08 | se6astian | quick updates from me:
| |
17:08 | se6astian | I tested a lattepanda alpha 864s as potential recorder
| |
17:08 | se6astian | and it works very well!
| |
17:08 | se6astian | also an Intel NUC, also works well
| |
17:08 | se6astian | but the lattepanda is slightly smaller
| |
17:08 | se6astian | but provides the same features/throughput
| |
17:09 | se6astian | even has onboard flash memory which the NUC doesnt
| |
17:09 | se6astian | raw12 playback also works well with onboard GPU
| |
17:09 | se6astian | so that seems like a very good candidate
| |
17:09 | se6astian | rockpi4 eventually went to the drawer
| |
17:10 | se6astian | and the pny CS3030 NVME SSD eventually turned out to actually be fast enough for continuous recording
| |
17:10 | se6astian | I did a few recorder gui additions here and there
| |
17:10 | se6astian | our bgr-info script
| |
17:10 | se6astian | https://github.com/apertus-open-source-cinema/misc-tools-utilities/tree/master/raw-via-hdmi/bgr-info
| |
17:10 | se6astian | to identify skipped/double frames after capture
| |
17:11 | se6astian | as I still suspect the ffmpeg way is not 100% optimal bertl is working on a magewell SDK inspired capture tool
| |
17:11 | se6astian | that showed much better performance on his magewell PCIE capture cards IIRC
| |
17:11 | se6astian | I need your feedback or ideas regarding the processing step to combine 2 bgr frames into 1 raw12
| |
17:12 | Bertl | more reliable indeed, not sure how that maps to USB though, as they use different mechanisms there
| |
17:12 | se6astian | currently we use montage (imagemagick) and place the two image side by side, then reinterpret the size
| |
17:12 | se6astian | effectively alternating lines from the two files
| |
17:12 | se6astian | do you think with a different tool/approach this could be sped up
| |
17:13 | vup | I think anuejn and I wanted to support it in the recorder (just did not get to it yet)
| |
17:13 | se6astian | maybe our own C tool?
| |
17:13 | Bertl | it should be a two liner in C actually :)
| |
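The montage step described above (placing the two BGR frames side by side and reinterpreting the size, so that lines from the two files alternate) reduces to a simple row interleave in C, along the lines Bertl suggests. A minimal sketch, assuming equally sized 8-bit frames already loaded into memory; the function name and parameters are illustrative, not an existing tool:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Interleave the rows of two equally sized frames: A0 B0 A1 B1 ...
 * bpp = bytes per pixel (3 for 8-bit BGR). `out` must hold
 * 2 * width * height * bpp bytes. */
int interleave_frames(const unsigned char *a, const unsigned char *b,
                      unsigned char *out, size_t width, size_t height,
                      size_t bpp)
{
    size_t stride = width * bpp;
    for (size_t row = 0; row < height; row++) {
        memcpy(out + (2 * row)     * stride, a + row * stride, stride);
        memcpy(out + (2 * row + 1) * stride, b + row * stride, stride);
    }
    return 0;
}
```

Wrapped with two fread() calls and one fwrite(), this is close to the "two-liner" mentioned; the actual A/B ordering still has to come from frame identification.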
17:13 | vup | (the recorder would then probably also have the capture integrated)
| |
17:14 | Bertl | in the future, there are likely two different 'recording' options for raw data
| |
17:15 | Bertl | one is by using alternating frames on the HDMI output, the other is by using two HDMI outputs, one for each frame type (A/B)
| |
17:15 | Bertl | in the first case, we need to know which frame (A or B) we are currently on
| |
17:15 | Bertl | in the second case we need to synchronize the frames so that we process the correct A and B frame
| |
17:16 | se6astian | 2-liner sounds good :)
| |
17:16 | Bertl | for the first case, I was thinking we could utilize an interlace mode if that works
| |
17:16 | Bertl | i.e. effectively sending the two frames after each other as a single image
| |
17:17 | Bertl | s/after each other/one after the other/
| |
17:17 | Bertl | note that this requires some changes in the gateware, so something for future transports
| |
17:17 | se6astian | sounds good
| |
17:18 | se6astian | with the corner markers we could always identify the matching pairs easily as well
| |
17:18 | Bertl | yes, but that wouldn't even be necessary on the interlace version
| |
17:18 | se6astian | even better
| |
17:19 | se6astian | regarding the grant application with the technical university, I reviewed and updated our text over the weekend
| |
17:19 | se6astian | and today got the university's parts of the text
| |
17:19 | se6astian | manfred has until wednesday to review/request changes
| |
17:19 | se6astian | then we combine texts
| |
17:19 | se6astian | and apply
| |
17:20 | se6astian | I will play around with binning/skipping a bit in the near future
| |
17:20 | se6astian | had a chat with bertl about it already today
| |
17:20 | se6astian | also about increasing FPS for smaller resolutions
| |
17:20 | se6astian | magewell USB3 device can go up to 120 FPS
| |
17:21 | se6astian | 1280x720 at 120 fps
| |
17:21 | vup | we could even do more if we just combine the data into bigger frames on the beta probably
| |
17:21 | se6astian | would be in the same throughput range as 1920x1080@50fps
| |
17:21 | se6astian | yes but that requires a more complex gateware IIRC
| |
17:21 | se6astian | buffering frames
| |
17:21 | vup | well we buffer frames already
| |
17:22 | se6astian | you are the experts :)
| |
17:22 | vup | well Bertl is the current gatewares expert
| |
17:22 | Bertl | it mainly requires that we get rid of the sequencer ;)
| |
17:22 | se6astian | I already partially updated the raw documentation in the google doc with FW2.0 commands
| |
17:22 | se6astian | but as next step plan to bring it to the wiki
| |
17:22 | se6astian | but focusing on FW2.0 there
| |
17:22 | Bertl | but that is planned for some time now, so it probably will happen sooner or later
| |
17:23 | se6astian | probably leave out the FW1.0 raw hdmi commands altogether
| |
17:23 | se6astian | with the current length todo list bertl has probably "later" :)
| |
17:24 | Bertl | yeah, no promises there ;)
| |
17:24 | se6astian | if someone wants to merge the snap_neon with the actual snap that would also help check one thing off Bertl's todo list
| |
17:24 | se6astian | https://github.com/apertus-open-source-cinema/axiom-firmware/issues/192
| |
17:24 | Bertl | arrr!
| |
17:25 | se6astian | Bertl: any other news from your side?
| |
17:25 | Bertl | nothing really this week, I'm unfortunately quite busy with unrelated stuff
| |
17:25 | se6astian | understood, fingers crossed for next weeks
| |
17:26 | se6astian | ah just remembered: we registered for a 4096 block of mac addresses with openmoko yesterday
| |
17:26 | se6astian | they are handing out the addresses they bought but since the project was discontinued dont need anymore
| |
17:27 | se6astian | github PR for entry was submitted yesterday
| |
17:27 | se6astian | lets hope they work through it soon
| |
17:27 | Bertl | the idea here is to have universal addresses for each beta :)
| |
17:27 | se6astian | yes, currently the image has the mac address hard coded
| |
17:27 | se6astian | so all betas have the same
| |
17:28 | se6astian | not good
| |
17:28 | se6astian | ok, anyone else with news/topics to share/discuss?
| |
17:31 | se6astian | then lets conclude the meeting and please join us again here in 45 minutes to meet ilia
| |
17:31 | se6astian | MEETING CONCLUDED
| |
17:31 | Bertl | thanks
| |
17:32 | Bertl | changed nick to: Bertl_oO
| |
17:35 | anuejn | sorry, cant be here in 45 minutes but maybe someone can advertise axiom recorder?
| |
17:35 | anuejn | and integrating new image processing there
| |
17:35 | vup | not sure ill be here, but if I am sure :P
| |
17:36 | se6astian | so regarding the c tool for merging bgr files, will fopen() and fwrite() provide the speed up I expect?
| |
17:37 | vup | you probably want to benchmark fopen/fread against mmap?
| |
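The fread-vs-mmap comparison suggested above could be benchmarked with two small read paths like these; the function names and the timing helper are illustrative, and real numbers depend heavily on the page cache and file size:

```c
#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

/* Monotonic wall-clock time in seconds, for timing the two paths. */
double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Read up to `len` bytes of a file with stdio (buffered fread). */
size_t read_with_stdio(const char *path, unsigned char *buf, size_t len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    size_t n = fread(buf, 1, len, f);
    fclose(f);
    return n;
}

/* Read the same bytes via mmap(); the kernel pages them in on access. */
size_t read_with_mmap(const char *path, unsigned char *buf, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return 0;
    void *map = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (map == MAP_FAILED) return 0;
    memcpy(buf, map, len);
    munmap(map, len);
    return len;
}
```

To benchmark, wrap each call in now_sec() deltas over many repetitions on files of realistic (multi-GB) size; with small files the page cache dominates and both paths look identical.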
17:40 | se6astian | thanks, will check
| |
18:13 | Ilia3101 | joined the channel | |
18:14 | se6astian | Hi Ilia3101!
| |
18:14 | se6astian | welcome to our channel
| |
18:14 | Ilia3101 | hello!
| |
18:15 | vup | hi
| |
18:15 | Ilia3101 | I'm using the web irc, a bit confused how this workkshello!
| |
18:15 | Ilia3101 | Can't see what I'm tuyuping
| |
18:15 | vup | oh interesting
| |
18:15 | se6astian | oh, not good :)
| |
18:16 | se6astian | well you can choose to install a dedicated client
| |
18:16 | vup | it sounds like the webirc is a bit broken
| |
18:16 | Ilia3101 | I'll tryn again
| |
18:16 | Bertl_oO | changed nick to: Bertl
| |
18:16 | Ilia3101 | Yeah I need a client
| |
18:16 | vup | if you have matrix already, you can also join via that
| |
18:16 | Ilia3101 | sorry about this
| |
18:16 | se6astian | no worries
| |
18:17 | se6astian | alternatives: hexchat, pidgin, etc.
| |
18:17 | se6astian | chatzilla
| |
18:17 | Ilia3101 | thank you
| |
18:19 | se6astian | shall we wait until you return with a client?
| |
18:20 | ilia | joined the channel | |
18:20 | ilia | ?
| |
18:20 | ilia | Hello
| |
18:20 | vup | hi
| |
18:20 | ilia | I am on hexchat
| |
18:20 | vup | nice
| |
18:20 | Ilia3101 | left the channel | |
18:20 | vup | does that work better?
| |
18:20 | ilia | It is working better.
| |
18:20 | Bertl | yay!
| |
18:20 | vup | awesome
| |
18:20 | ilia | How was the meeting?
| |
18:21 | se6astian | good thanks :)
| |
18:21 | vup | (note though that you will only receive messages while connected, if you ever miss something we keep logs here: http://irc.apertus.org/)
| |
18:21 | se6astian | you can read the log here: http://irc.apertus.org/index.php?day=22&month=11&year=2021#18
| |
18:21 | ilia | Ah thank you
| |
18:22 | se6astian | its updated in realtime
| |
18:22 | se6astian | so regarding the raw processing:
| |
18:22 | ilia | Nice
| |
18:22 | se6astian | many thanks that you are here :)
| |
18:22 | ilia | Yes the raw processing :)
| |
18:22 | se6astian | can you tell us a bit about yourself?
| |
18:22 | se6astian | or your background
| |
18:24 | se6astian | then it might be easier to see what first steps of a collaboration could be
| |
18:24 | se6astian | the tools alex created are pretty extensive already so I am not sure myself where to best continue
| |
18:26 | ilia | I'm currently a computer science student, but I've been interested in raw image processing for a very long time. I created the MLV App project four years ago, an all-in-one raw video processing program for MLV files. And I'm currently digging deeper in to colour science and camera colour.
| |
18:26 | ilia | I definitely need to take a look at what Alex has done.
| |
18:27 | vup | oh awesome, now I know where I have seen your name before :o
| |
18:27 | se6astian | https://github.com/ilia3101/MLV-App
| |
18:27 | se6astian | cool!
| |
18:27 | vup | s/:o/:)/
| |
18:28 | ilia | So are you planning to develop image processing tools for users of the camera?
| |
18:29 | se6astian | well we do not want to reinvent the wheel here but there are some steps we want to do with our camera raw data before we feed it into a raw development software
| |
18:29 | se6astian | currently our format is called raw12 https://wiki.apertus.org/index.php/RAW12
| |
18:30 | se6astian | its basically just raw 12 bit data from the image sensor
| |
18:30 | ilia | I see.
| |
18:30 | se6astian | to convert this raw12 file to a DNG (our second format of choice) alex created raw2dng
| |
18:30 | se6astian | https://github.com/apertus-open-source-cinema/misc-tools-utilities/tree/master/raw2dng
| |
18:31 | ilia | Nice to see
| |
18:31 | se6astian | https://wiki.apertus.org/index.php/Raw2dng
| |
18:31 | se6astian | it can alter white point, black point etc
| |
18:31 | se6astian | but also do flat field correction
| |
18:31 | ilia | Nice, all very useful
| |
18:32 | ilia | How much do you know about the colour of the sensor?
| |
18:32 | se6astian | alex also added the creation of darkframes/dcnuframes, etc directly into raw2dng
| |
18:33 | se6astian | and drafted some steps of the calibration routine already
| |
18:33 | se6astian | https://wiki.apertus.org/index.php/Factory_Calibration_(firmware_2.0)
| |
18:33 | se6astian | again raw2dng can validate the effects the flatfield correction has
| |
18:33 | ilia | This is great.
| |
18:34 | se6astian | but not all evaluations are integrated yet
| |
18:34 | se6astian | alex created a range of octave scripts to analyze some aspects of the results
| |
18:34 | ilia | So what type of things still need to be done?
| |
18:34 | se6astian | though this is where our understanding of what he created ends
| |
18:35 | ilia | So we don't understand everything he created?
| |
18:35 | se6astian | the camera can capture stil images internally and write raw12 files (with metadata containing sensor information)
| |
18:35 | se6astian | raw2dng requires some of this metadata to do certain things
| |
18:36 | se6astian | capturing these still images is rather slow though (think of it like a photo camera application)
| |
18:36 | se6astian | now recently we eventually made progress with capturing raw moving images over HDMI
| |
18:36 | se6astian | and recovering the raw12 data (again 12 bit) from the sensor with such image sequences
| |
18:37 | se6astian | 24/25/30 fps
| |
18:37 | ilia | That's awesome
| |
18:37 | se6astian | in 3840x2160 or even 4096x2160
| |
18:37 | se6astian | the problem now is that we cannot transfer metadata over this hdmi stream
| |
18:37 | se6astian | also raw2dng originally expected full resolution frames with 4096x3072
| |
18:38 | se6astian | so thats where we currently are
| |
18:38 | ilia | I see. Is there any possibility to make the camera do this processing?
| |
18:38 | ilia | (that raw2dng currently does)
| |
18:38 | se6astian | I managed to create darkframes from raw images captured via hdmi but the results were not as good as expected
| |
18:39 | ilia | What was the issue with those darkframes?
| |
18:39 | se6astian | DCNU frames would also help but for these the missing metadata prevents the creation
| |
18:39 | vup | se6astian: hmm so is it actually impossible to transfer the metadata, because of the processing the hdmi capture card does?
| |
18:40 | se6astian | the darkframes did not entirely remove the fixed pattern noise, but we just applied it and looked at the results
| |
18:40 | vup | or is it "just" a gateware problem and we could just append a row of pixels that contains the metadata?
| |
18:40 | se6astian | so it would require a more structured analysis actually
| |
18:40 | vup | (my understanding is, that it is the latter)
| |
18:40 | ilia | Would DCNU frames remove the noise significantly more than darkframes?
| |
18:40 | se6astian | I guess we could repack it into pixels somehow indeed
| |
18:40 | se6astian | its 128x16 bit
| |
18:40 | vup | yeah I mean thats a trivial amount of data
| |
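If the 128x16-bit register block were appended as an extra row of pixels, the packing and its recovery on the capture side could look like this sketch (the byte layout and function names are assumptions for illustration, not the actual gateware format):

```c
#include <stdint.h>

#define NUM_REGS 128  /* CMV12000 register dump: 128 x 16 bit */

/* Pack the sensor register dump into the start of one extra 8-bit
 * frame row, 2 bytes per register, little-endian.
 * `row` must hold at least 2 * NUM_REGS bytes. */
void pack_registers(const uint16_t regs[NUM_REGS], uint8_t *row)
{
    for (int i = 0; i < NUM_REGS; i++) {
        row[2 * i]     = (uint8_t)(regs[i] & 0xff);
        row[2 * i + 1] = (uint8_t)(regs[i] >> 8);
    }
}

/* Recover the register values from a captured frame's metadata row. */
void unpack_registers(const uint8_t *row, uint16_t regs[NUM_REGS])
{
    for (int i = 0; i < NUM_REGS; i++)
        regs[i] = (uint16_t)row[2 * i] | ((uint16_t)row[2 * i + 1] << 8);
}
```

256 bytes per frame is negligible next to the video payload, which is why sending it with every frame costs essentially nothing; a lossy capture path would additionally want redundancy or a checksum.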
18:41 | se6astian | indeed, probably more tricky to handle them in the hdmi sender gateware though
| |
18:41 | se6astian | but also up for discussion if it makes sense to send it with every frame
| |
18:41 | se6astian | as the content is pretty much identical for all frames
| |
18:42 | se6astian | unless you change exposure time during recording
| |
18:42 | se6astian | which can happen of course
| |
18:42 | vup | I don't think sending them every frame does a lot of harm
| |
18:42 | vup | its such a small amount of data
| |
18:42 | se6astian | agreed
| |
18:42 | vup | the bigger problem is acquiring the data probably
| |
18:42 | se6astian | ilia: Would DCNU frames remove the noise significantly more than darkframes? <- that is what we need to find out :)
| |
18:42 | vup | the easiest solution is probably just providing some generic buffer for data that gets appended to every frame
| |
18:42 | vup | and make the userspace tools keep that up to date
| |
18:43 | se6astian | so in general the goal is to improve the image quality any way we can for footage that comes out of the camera
| |
18:43 | Bertl | actually, long term we want to put any metadata into HDMI data islands
| |
18:43 | vup | when we have the control daemon that would probably be relatively easy
| |
18:43 | vup | Bertl: do we? can we actually receive those properly with common capture cards?
| |
18:43 | Bertl | if we do it right, then yes
| |
18:44 | se6astian | currently I plan to have the recorder ssh to the camera, create a register dump and download it to where the captured footage is
| |
18:44 | Bertl | data islands are used to provide video information but also to send audio
| |
18:44 | se6astian | it wont handle changes during recording
| |
18:44 | se6astian | but an easy quickfix
| |
18:44 | se6astian | registers can simply be appended to all raw12 files
| |
18:44 | Bertl | capture cards might not support the video aspects but they usually support the audio stream(s)
| |
18:44 | vup | Bertl: oh so you want to for example use the audio stuff to just transport arbitrary data?
| |
18:45 | Bertl | precisely
| |
18:45 | vup | sure that works
| |
18:45 | se6astian | ah, very clever!
| |
18:45 | vup | But I don't see how that is a lot better than just packing it into the pixel data
| |
18:45 | Bertl | it deals with 'lossy' capture solutions
| |
18:45 | vup | true
| |
18:46 | vup | but with 128x16 bit we could also do ECC or something like that
| |
18:46 | Bertl | and it is otherwise unused bandwidth
| |
18:46 | vup | of course, it's nicer, but it's also quite a lot of work
| |
18:46 | se6astian | <ilia> How much do you know about the colour of the sensor? <- so thats the next topic we have barely touched actually (also alex did not really touch the colour topic yet)
| |
18:46 | Bertl | and of course, we could also add fake pixels as well
| |
18:46 | vup | the other option for acquiring the data would be reading out the sensor registers every frame, is that feasible Bertl
| |
18:46 | vup | ?
| |
18:46 | se6astian | so also an area where help would be much appreciated
| |
18:47 | vup | (the sensor on the micro for example is nice in that regard, because it sends the register contents as part of each frame)
| |
18:47 | Bertl | vup: the main question there is, whether we need to actually read the data from the sensor
| |
18:47 | se6astian | there is a default matrix in the raw2dng code I think
| |
18:47 | vup | Bertl: as opposed to having the userspace keep the data up to date?
| |
18:48 | vup | yeah thats what I suggested as the first option
| |
18:48 | Bertl | or let's rephrase it like this: what sensor data will change without userspace changing the sensor registers?
| |
18:48 | vup | yeah probably not a lot :P
| |
18:48 | se6astian | the sensor temperature :)
| |
18:48 | Bertl | probably only the temperature ;)
| |
18:48 | vup | maye some frame counter or so
| |
18:49 | vup | (not sure the cmv12k has one)
| |
18:49 | Bertl | so in the CMV12k case we can simply get away with buffering the sensor registers
| |
18:49 | se6astian | dont think so
| |
18:49 | ilia | I am very interested in the colour properties of the sensor. Does the manufacturer provide spectral response data or any colour information?
| |
18:49 | se6astian | yes, its in the datasheet
| |
18:49 | Bertl | ilia: yes, there is quite some information in the datasheet
| |
18:49 | se6astian | let me dig it out
| |
18:50 | Bertl | a recent version is available at AMS iirc
| |
18:50 | se6astian | https://ams.com/documents/20143/36005/CMV12000_DS000603_4-00.pdf
| |
18:50 | ilia | Thank you!
| |
18:51 | se6astian | page 17
| |
18:51 | vup | ok I unfortunately need to head out now, see you!
| |
18:51 | se6astian | thanks vup!
| |
18:51 | Bertl | vup: cya
| |
18:52 | ilia | So this DCNU data is only 128x16bit in size?
| |
18:52 | se6astian | no
| |
18:53 | se6astian | 128x16bit are the sensor configuration metadata
| |
18:53 | ilia | Ah right
| |
18:53 | se6astian | exposure time
| |
18:53 | se6astian | gain
| |
18:53 | se6astian | etc.
| |
18:53 | ilia | DCNU is a whole frame then?
| |
18:53 | se6astian | yes
| |
18:53 | ilia | And you don't have a good way of sending it over HDMI?
| |
18:54 | se6astian | DCNU frames are calculated from calibration data
| |
18:54 | Bertl | it certainly could be sent over HDMI as well
| |
18:54 | se6astian | we can send these images over hdmi
| |
18:54 | se6astian | problem is that the metadata is missing
| |
18:54 | se6astian | and raw2dng tool expects it
| |
18:55 | se6astian | so 2 ideas/approaches:
| |
18:55 | se6astian | 1. capture calibration data "the traditional way" with single images from inside the camera
| |
18:55 | se6astian | and figure out how to crop out the center area so it matches the HDMI stream
| |
18:56 | se6astian | full sensor resolution: 4096x3072
| |
18:56 | se6astian | vs HDMI stream 3840x2160
| |
18:56 | se6astian | or 4096x2160
| |
18:56 | se6astian | cropping should be trivial
| |
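The center crop from full sensor resolution (4096x3072) down to the HDMI geometry (3840x2160 or 4096x2160) could be sketched as follows, assuming the raw12 packing of 2 pixels per 3 bytes documented on the wiki; the function name and the even-offset checks (which keep the Bayer pattern and the byte packing aligned) are illustrative:

```c
#include <stdint.h>
#include <string.h>

/* Crop the center region of a raw12 frame (12-bit packed, 2 pixels
 * per 3 bytes). Widths and the resulting offsets must be even so the
 * Bayer pattern and the 3-byte packing stay aligned.
 * 4096x3072 -> 3840x2160 gives x0 = 128, y0 = 456 (both even). */
int crop_raw12_center(const uint8_t *src, size_t src_w, size_t src_h,
                      uint8_t *dst, size_t dst_w, size_t dst_h)
{
    if (dst_w > src_w || dst_h > src_h) return -1;
    size_t x0 = (src_w - dst_w) / 2;
    size_t y0 = (src_h - dst_h) / 2;
    if ((x0 | y0 | dst_w) & 1) return -1;  /* would break Bayer/packing */
    size_t src_stride = src_w * 3 / 2;     /* bytes per packed row */
    size_t dst_stride = dst_w * 3 / 2;
    size_t x_bytes = x0 * 3 / 2;
    for (size_t row = 0; row < dst_h; row++)
        memcpy(dst + row * dst_stride,
               src + (y0 + row) * src_stride + x_bytes,
               dst_stride);
    return 0;
}
```

Whether the camera actually centers its HDMI readout on the sensor (rather than using some other window) is exactly the kind of thing that needs verifying against real captures before trusting stills-derived calibration frames.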
18:57 | se6astian | but we need to figure out if correction data created for stills really works 1:1 for HDMI data
| |
18:57 | se6astian | it should but we need to verify
| |
18:57 | ilia | So calibration data would have to be combined with the captured footage later?
| |
18:57 | ilia | After shooting
| |
18:58 | Bertl | yes, it doesn't make much sense to do that in real-time in our case
| |
18:58 | Bertl | we had some approaches for that but they were only half-hearted for a quick preview
| |
18:58 | se6astian | https://docs.google.com/drawings/d/1OX83QFdwphGZRrPg773UaYnnY0QYonqOttPOoBtNEwI/edit?usp=sharing
| |
19:00 | se6astian | alex created a pretty extensive document that gives a good overview: https://docs.google.com/document/d/12gZG4KFiWW4eV_ha-AM2hsw86fjNI4ILNo-CiHn80Z4/edit?usp=sharing
| |
19:02 | ilia | Interesting document
| |
19:03 | se6astian | sorry if we flood you with information currently :)
| |
19:05 | ilia | No worries, it's very informative
| |
19:06 | se6astian | good :)
| |
19:07 | ilia | I'm still not sure exactly what I could help with though
| |
19:09 | se6astian | well, currently we have nobody who has the skills/time to dive into these image processing topics
| |
19:10 | se6astian | like: how can we generate DCNU frames for HDMI output
| |
19:10 | ilia | Ah well I do have time right now :) And I would very much like help when possible.
| |
19:11 | ilia | So would DCNU frame generation have to happen on the camera?
| |
19:11 | se6astian | how can we evaluate how good the DCNU frames help reduce the actual fixed pattern noise
| |
19:11 | se6astian | no, its done on a connected PC most likely
| |
19:12 | se6astian | same for darkframes
| |
19:12 | se6astian | gainframes, etc.
| |
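As a starting point for the PC-side darkframe generation mentioned above, per-pixel averaging and subtraction might look like this sketch (a simplification that ignores DCNU and gain correction; names are illustrative and the real pipeline lives in raw2dng):

```c
#include <stddef.h>
#include <stdint.h>

/* Average `n_frames` dark captures per pixel to estimate the dark
 * (fixed pattern) frame. `frames` points to unpacked 16-bit pixel
 * buffers of `n_pixels` each. */
void build_darkframe(const uint16_t *const *frames, size_t n_frames,
                     size_t n_pixels, uint16_t *dark)
{
    for (size_t p = 0; p < n_pixels; p++) {
        uint32_t sum = 0;
        for (size_t f = 0; f < n_frames; f++)
            sum += frames[f][p];
        dark[p] = (uint16_t)(sum / n_frames);
    }
}

/* Subtract the darkframe from a capture, clamping at zero. */
void apply_darkframe(uint16_t *frame, const uint16_t *dark, size_t n_pixels)
{
    for (size_t p = 0; p < n_pixels; p++)
        frame[p] = frame[p] > dark[p] ? frame[p] - dark[p] : 0;
}
```

An evaluation tool would then measure residual row/column fixed pattern noise (e.g. variance of per-column means) on a corrected dark capture, which is what "did the calibration have the desired effect" boils down to.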
19:12 | ilia | understood
| |
19:13 | ilia | So this requires some sample data to begin
| |
19:13 | se6astian | we want to document the process so everyone can conduct the calibration
| |
19:14 | se6astian | but also need tools that measure if the calibration had the desired effects, etc.
| |
19:14 | ilia | I see
| |
19:14 | se6astian | I can help by capturing sample data (dark frames, or color charts, exposure sequences, etc.)
| |
19:14 | ilia | That would definitely be very helpful
| |
19:14 | se6astian | but I can also setup remote access to a PC and AXIOM Beta if it makes sense
| |
19:15 | ilia | Is the data steam over HDMI entirely lossless?
| |
19:15 | se6astian | yes
| |
19:15 | ilia | That's great
| |
19:15 | ilia | So when you mentioned disappointing results with darkframe over HDMI you weren't saying it was worse because of HDMI, but that dark frame is not enough by itself without DCNU
| |
19:17 | se6astian | yes, or another possibility is that the created darkframe is not calculated correctly because of the missing metadata
| |
19:17 | se6astian | but we can easily verify that by comparing HDMI captured footage/darkframes and internally captured still footage/darkframes
| |
19:18 | ilia | Yes
| |
19:19 | ilia | I think I might be able to dive in to things once I have some sample data
| |
19:19 | se6astian | sounds good!
| |
19:20 | se6astian | so next steps I would propose: you look at the raw2dng documentation/reference/factory calibration routine
| |
19:20 | se6astian | and create a list for me what sample data I should capture for you
| |
19:21 | ilia | I will take a look at that.
| |
19:21 | ilia | And think about what I might need
| |
19:21 | se6astian | I will then do that and upload
| |
19:21 | ilia | A lot of this (DCNU etc) is very new to me so we'll see how it goes.
| |
19:21 | ilia | That's great
| |
19:21 | se6astian | and we take it from there
| |
19:21 | se6astian | one step at a time
| |
19:21 | ilia | Yep
| |
19:23 | se6astian | what is the best way to reach you?
| |
19:23 | se6astian | email/messenger/irc/etc.?
| |
19:24 | ilia | I'd say email or Discord. I've also been using Discord a lot lately, haven't fully understood IRC yet.
| |
19:25 | ilia | What platforms are best for reaching you?
| |
19:25 | ilia | I'll send you my email in Magic Lantern PM
| |
19:26 | se6astian | thanks
| |
19:26 | se6astian | I will see if I can message you on discord
| |
19:26 | se6astian | I am on the magic lantern server but think I need to get authorized somehow
| |
19:26 | se6astian | to send direct messages
| |
19:27 | se6astian | I am also here on IRC most of the time
| |
19:28 | se6astian | which is our main communication platform for realtime interaction
| |
19:28 | ilia | Ok that's good to know, I'll start going on here more
| |
19:28 | se6astian | great
| |
19:28 | se6astian | many thanks!
| |
19:28 | se6astian | heading off to dinner now
| |
19:28 | ilia | Me too, it was great to chat.
| |
19:28 | Bertl | pleasure was ours!
| |
19:29 | Bertl | off for now as well ... bbl
| |
19:29 | Bertl | changed nick to: Bertl_oO
| |
19:34 | ilia | left the channel | |
19:34 | ilia | joined the channel | |
19:48 | ilia | left the channel | |
20:06 | ilia | joined the channel | |
20:16 | ilia | left the channel | |
20:25 | ilia | joined the channel | |
20:27 | ilia | left the channel | |
23:04 | vup | I think there are some decently well working discord to irc bridges
| |
23:04 | vup | so in theory we could setup a bridged apertus discord room
| |
23:11 | anuejn | sounds like a plan
| |
23:15 | illwieckz | joined the channel | |
23:26 | illwieckz | left the channel | |
23:46 | illwieckz | joined the channel |