00:21 | Spirit532 | left the channel | |
00:21 | Spirit532 | joined the channel | |
00:23 | futarisIRCcloud | joined the channel | |
01:19 | shivamgoyal | joined the channel | |
02:15 | araml | joined the channel | |
02:25 | shivamgoyal | left the channel | |
02:48 | krisss | joined the channel | |
02:48 | krisss | hello sir
| |
02:50 | krisss | I went through the project details and I am very interested in this project. I would be very thankful if you allowed me to work on it.
| |
02:55 | krisss | hello sir
| |
02:57 | Bertl_oO | hey krisss!
| |
02:57 | Bertl_oO | did you choose a task and start working on one of the challenges?
| |
02:58 | Bertl_oO | check out https://wiki.apertus.org/index.php/GSoC_Overview if not
| |
03:00 | humbe_coder | joined the channel | |
03:01 | krisss | live histogram, waveform, vectorscope - this interests me
| |
03:03 | humbe_coder | Hi. I am doing the C++ challenge, can you please clarify if the input image is RGGB or GBRG?
| |
03:04 | Bertl_oO | hey humbe_coder!
| |
03:04 | krisss | sir i went through the link you shared
| |
03:05 | Bertl_oO | humbe_coder: let's think about that a moment, what will happen when you interpret an RG/GB as GB/RG or the other way round
| |
03:05 | Bertl_oO | the green will end up on the red and blue channel ... you can also assume that the 'two greens' will be reasonably similar
| |
03:06 | Bertl_oO | so how will the picture look?
| |
03:07 | humbe_coder | @Bertl_oO I think that even if we interpret it the opposite way the output image will stay the same? Is that correct?
| |
03:07 | Bertl_oO | krisss: so finish the challenge task then and let me know when to check your code
| |
03:08 | Bertl_oO | humbe_coder: let's assume this is true, what conditions would the image need to satisfy?
| |
03:11 | humble_coder | joined the channel | |
03:11 | humbe_coder | left the channel | |
03:13 | krisss | sir, these seem a little bit hard for me, what should I do?
| |
03:13 | Bertl_oO | well, in that case you might want to look for easier projects to work on
| |
03:14 | krisss | ok thank you sir
| |
03:14 | Bertl_oO | no problem
| |
03:18 | krisss | left the channel | |
03:18 | humble_coder_ | joined the channel | |
03:20 | humble_coder | left the channel | |
03:26 | humble_coder_ | @Bertl On second thought I think that the image will not stay the same, because if green is interpreted as red and blue then maybe the intensity of unknown pixels remains unaffected after demosaicing, but the colours of the output image will change. Also, since there is no header in RAW12 images, how do I find out if the image given is RGGB or GBRG?
| |
03:27 | Bertl_oO | okay, so if it would stay the same, we would have R==G and G==B which also means - assuming that the greens are similar - R==B
| |
03:27 | Bertl_oO | so the image would be a pure gray image
| |
03:27 | Bertl_oO | this is unlikely but easy to check once you separated the channels
| |
03:28 | Bertl_oO | now assuming the image is not a gray image, then when you have the channel order wrong, you will still end up with basically R==B because the greens are similar
| |
03:29 | Bertl_oO | and the green channels will differ a lot (because they are actually red and blue)
| |
03:29 | Bertl_oO | your result image, if you put the channels together, will be all purple - green
| |
03:30 | Bertl_oO | (which is also easy to verify)
| |
03:31 | humble_coder_ | but that could only be verified with visual inspection of the image, right? everything will seem okay in code.
| |
03:32 | Bertl_oO | correct
| |
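The check Bertl walks through above (similar greens imply the right interpretation; "greens" that differ a lot are really red and blue) can be automated rather than left to visual inspection. A minimal sketch with NumPy, assuming the raw mosaic is already decoded into a 2D array; `guess_bayer_order` is an illustrative name, not part of the challenge API:

```python
import numpy as np

def guess_bayer_order(mosaic: np.ndarray) -> str:
    """Heuristic channel-order check: in the correct interpretation
    the two green sub-planes of the 2x2 Bayer tile are nearly
    identical, while in the swapped interpretation the 'greens'
    are actually red and blue and differ a lot."""
    c00 = mosaic[0::2, 0::2].mean()  # top-left of each 2x2 tile
    c01 = mosaic[0::2, 1::2].mean()  # top-right
    c10 = mosaic[1::2, 0::2].mean()  # bottom-left
    c11 = mosaic[1::2, 1::2].mean()  # bottom-right

    # RGGB puts green at (0,1)/(1,0); GBRG puts it at (0,0)/(1,1)
    rggb_green_gap = abs(c01 - c10)
    gbrg_green_gap = abs(c00 - c11)
    return "RGGB" if rggb_green_gap <= gbrg_green_gap else "GBRG"
```

As noted in the discussion, this heuristic cannot work on a pure gray image, where all four sub-planes agree regardless of ordering.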
03:36 | humble_coder_ | left the channel | |
03:40 | humble_coder | joined the channel | |
03:42 | humble_coder | So I might be testing my code on an RGGB image and someone else might test it on a GBRG image and get a purplish output. So how could I figure out which type the input image is in the code itself?
| |
03:43 | Bertl_oO | no way to do that reliably, but you can assume that we test with images in the same channel order as the example
| |
03:43 | Bertl_oO | if you want to add a feature, add an option/switch to select the order
| |
03:45 | humble_coder | okay, so I take input from the console regarding which type the image is?
| |
03:45 | Bertl_oO | you can do that, yes
| |
03:46 | humble_coder | I think there is an image given in the problem statement of the c++ challenge. Will we be evaluated on that image? if so which type is it?
| |
03:47 | Bertl_oO | I don't know, you have to figure that out yourself
| |
03:49 | humble_coder | how do I check that? Since there is no header, the simplest way seems to be to open it in Photoshop and keep zooming in on the top-left corner.
| |
03:49 | Bertl_oO | sounds good to me
| |
03:56 | humble_coder | Also, in the first point of the problem statement it's mentioned to output the intensity values of the first 5x5 tile. So do we have to print it to the console?
| |
03:56 | Bertl_oO | that would be my interpretation of 'output'
| |
03:57 | Bertl_oO | of course, you can also 'output' it to a file instead, but that's usually easy to do via redirection
| |
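The 'output to console, redirect to a file if needed' interpretation above is easy to sketch. A hypothetical helper, assuming the intensity values have already been decoded into a 2D sequence of integers:

```python
def print_tile(values, size=5):
    """Print the intensity values of the top-left size x size tile,
    one row per line, so the output can be redirected to a file
    (e.g. `./debayer input.raw12 > tile.txt`)."""
    for row in values[:size]:
        print(" ".join(f"{v:4d}" for v in row[:size]))
```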
04:01 | Jamie | joined the channel | |
04:02 | Jamie | left the channel | |
04:02 | humble_coder | so before debayering there will be 9 red pixels among the 25, so do I print the intensity values of those 9 pixels, or the red intensity values of all 25 pixels after debayering? Also, I think we will get intensity values in 12 bit, so do I print the 12-bit values or convert them to 8 bits first?
| |
04:03 | Bertl_oO | questions over questions ... make some sane assumptions and when you cannot decide, just have a chat with BAndiT1983|away when he's around
| |
04:05 | humble_coder | ok.. thanks a lot for helping me out.
| |
04:05 | Bertl_oO | no problem, have fun!
| |
04:10 | humble_coder | left the channel | |
05:04 | Alan_ | joined the channel | |
05:08 | Alan_ | left the channel | |
05:10 | Alan_ | joined the channel | |
05:10 | Alan_ | Hi, Bertl
| |
05:10 | Alan_ | read the logs, makes sense
| |
05:10 | Bertl_oO | hey
| |
05:11 | Alan_ | I have one more doubt, what do perf.c and train.c do?
| |
05:12 | Alan_ | Snap is for clicking a pic.
| |
05:12 | Bertl_oO | s/doubt/question/
| |
05:12 | Alan_ | Question ; )
| |
05:12 | Bertl_oO | perf tries to figure out the current frame rate
| |
05:12 | Alan_ | I tried to find on wiki but didn't get much
| |
05:12 | Bertl_oO | on input (sensor side) as well as output (hdmi side)
| |
05:13 | Bertl_oO | and train does train the LVDS channels (coming from the sensor)
| |
05:13 | Bertl_oO | i.e. it adjusts the delay elements so that a test pattern is properly received
| |
05:18 | Alan_ | Ok, and in this section 7.1 https://wiki.apertus.org/index.php/AXIOM_Beta/Manual#raw_still_image_capture
| |
05:19 | Bertl_oO | yes?
| |
05:19 | Alan_ | In the fabric part, what needs to change from 12 bit to 18?
| |
05:20 | Bertl_oO | actually the 18 might be a typo there, let's check with the source :)
| |
05:20 | Alan_ | And we work directly on the contents we get in memory ,right?
| |
05:21 | Bertl_oO | no, it is actually 18bit there (lookup table) as can be seen here:
| |
05:22 | Bertl_oO | https://github.com/apertus-open-source-cinema/axiom-beta-firmware/blob/master/peripherals/soc_main/top.vhd (line 715-728)
| |
05:23 | Bertl_oO | and I remember now why we used 18bit there, because it perfectly matches the DSP inputs
| |
05:24 | Bertl_oO | and yes, T734 works on the data written to DDR memory
| |
05:28 | Alan_ | Ok, I think I probably need to research this topic more.
| |
05:28 | Amaterasu | joined the channel | |
05:28 | Alan_ | Beta gets input from hdmi output?
| |
05:29 | Bertl_oO | the input comes from the sensor, the output goes to the HDMI (typically)
| |
05:31 | Alan_ | Ok, so the HDMI output is used to display stuff on the camera screen?
| |
05:31 | Amaterasu | Can we use ODDR and PLLE2_BASE in the serial to parallel converter in T871 ?
| |
05:31 | Bertl_oO | Alan_: yes as well as output the actual image (or to be precise, a 1080p version of it)
| |
05:32 | Bertl_oO | Amaterasu: what does the task description say?
| |
05:35 | Amaterasu | I mean the task 1 of T871
| |
05:35 | Alan_ | left the channel | |
05:36 | Bertl_oO | where it says: You can use existing hardened units like DDR or SERDES to simplify the task.
| |
05:39 | Bertl_oO | just changed that to: You may use existing hardened units like PLL, DDR or SERDES to simplify the task.
| |
05:39 | Bertl_oO | (to make it more obvious :)
| |
05:40 | Amaterasu | left the channel | |
05:46 | Alan_ | joined the channel | |
05:49 | BAndiT1983|away | changed nick to: BAndiT1983
| |
05:50 | Alan__ | joined the channel | |
05:50 | Alan__ | Okay Bertl, thanks, I will read more and come back if I have any doubt.
| |
05:51 | Alan_ | left the channel | |
05:51 | Bertl_oO | s/doubt/question/ :)
| |
05:55 | Alan__ | left the channel | |
05:55 | Bertl_oO | off to bed now ... have a good one everyone!
| |
05:55 | Bertl_oO | changed nick to: Bertl_zZ
| |
06:31 | Alan_ | joined the channel | |
06:32 | Alan_ | "cmv_hist3 is not meant to be fully real time capable as it is enough in most cases to calculate a new histogram with every couple of frames captured "
| |
06:33 | Alan_ | As mentioned in T734, why doesn't it have to be real-time capable every time?
| |
06:39 | shivamgoyal | joined the channel | |
06:40 | Alan_ | left the channel | |
06:40 | aSobhy | left the channel | |
06:48 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
06:52 | shivamgoyal | left the channel | |
08:54 | comradekingu | left the channel | |
09:02 | aSobhy | joined the channel | |
10:32 | Bertl_zZ | changed nick to: Bertl
| |
10:32 | Bertl | morning folks!
| |
10:34 | Bertl | Alan_: if you manage to _make_ it fast enough to provide a full histogram for every frame that would be awesome!
| |
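For scale: a full histogram of 12-bit sensor data is a single counting pass per frame. The real cmv_hist3 lives in FPGA fabric, so this NumPy sketch is only an illustration of what 'a new histogram every couple of frames' computes:

```python
import numpy as np

def histogram12(frame: np.ndarray) -> np.ndarray:
    """Count occurrences of each possible 12-bit intensity value,
    yielding 4096 bins regardless of which values appear."""
    return np.bincount(frame.ravel(), minlength=4096)
```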
12:22 | ahmed | joined the channel | |
12:23 | ahmed | changed nick to: Guest73675
| |
12:23 | Guest73675 | hi,i am interested in this project very much
| |
12:24 | Guest73675 | left the channel | |
12:24 | Bertl | ahmed: great! best get a proper IRC client then!
| |
13:00 | shivamgoyal | joined the channel | |
13:02 | futarisIRCcloud | left the channel | |
14:04 | shivamgoyal | left the channel | |
14:40 | Hfuy | joined the channel | |
14:40 | Hfuy | Hello.
| |
14:40 | Bertl | Hey Hfuy!
| |
14:40 | Hfuy | Silly question. Can I actually get one of these cameras yet?
| |
14:41 | Bertl | that is not a silly question at all and the answer is yes :)
| |
14:41 | Hfuy | I notice things like dead pixel correction and colour calibration aren't marked complete yet, though.
| |
14:41 | Hfuy | So it presumably isn't usable.
| |
14:42 | Bertl | depends on what you want to do ... it is still a developer kit for now, so you have to be prepared for diving in and taking control yourself :)
| |
14:43 | Hfuy | When is it expected to be finished?
| |
14:43 | Bertl | never
| |
14:43 | Bertl | that is part of the idea that it will never become obsolete
| |
14:43 | Hfuy | Well, OK, but it has to hit a 1.0 at some point
| |
14:44 | Bertl | i.e. there will always be new development and improvements over time, so waiting for a 'finished' product is a bad idea here ...
| |
14:44 | Hfuy | Well, sure, but at some point it has to be possible to take it out on set and have it be usable and reliable.
| |
14:44 | Bertl | we hope that we get it 'end user ready' with the AXIOM Beta Compact
| |
14:44 | Hfuy | How long has it been in progress?
| |
14:45 | Bertl | which will probably take another year or so
| |
14:45 | Bertl | the basic development has taken less than a year for the dev kit
| |
14:46 | Bertl | but the project is running for way longer than that (see history)
| |
14:46 | Bertl | you can also check the development status on the web site
| |
14:46 | Bertl | https://www.apertus.org/axiom-beta-status
| |
14:46 | Hfuy | I was looking on the website for some history info.
| |
14:46 | Hfuy | I recall it being talked about years ago.
| |
14:47 | Bertl | https://www.apertus.org/axiom-saga-history-article-november-2014
| |
14:47 | Hfuy | So in essence it's been in progress four years and there isn't really a usable finished camera.
| |
14:49 | Bertl | what exactly are you looking for?
| |
14:54 | Hfuy | Well, something I can take out and shoot with.
| |
14:54 | Bertl | we definitely did that with the AXIOM Beta dev kit (as you can see with the footage)
| |
14:54 | Hfuy | But without dead pixel correction?
| |
14:55 | Bertl | something you can easily do in post ... actually it is better to be done in post as you have more control over 'how' to correct dead pixels if there are any
| |
14:58 | Hfuy | I don't think people would consider it very complete without those things.
| |
14:58 | Bertl | well, that's completely up to people (you) to decide for yourself
| |
14:59 | Hfuy | The thing is, if it's a CMV12000, why wouldn't I just buy an Ursa Mini 4K.
| |
14:59 | Bertl | there are many folks out there who do not like the 'papering over' basically all proprietary cameras do before they generate 'raw' data
| |
15:00 | Hfuy | What do you mean by papering over
| |
15:00 | Bertl | modifying the data to make it 'nicer', hiding the imperfections
| |
15:00 | Hfuy | Nothing wrong with that, for some value of nicer.
| |
15:01 | Bertl | if you do not see any advantage of a completely FOSS/OH camera for you, then by all means, please by a cheap proprietary camera
| |
15:01 | Bertl | *buy
| |
15:01 | Hfuy | I'm trying to understand what the advantage would be for anyone.
| |
15:01 | Bertl | that you have access to things like 'raw data' and you can actually modify how it processes the data
| |
15:02 | Bertl | you can also adapt it to your needs and extend it where needed
| |
15:02 | Hfuy | I want a hand crank mode.
| |
15:02 | Hfuy | Like Kinetta was supposed to have.
| |
15:02 | Bertl | you can make that happen quite easily with the AXIOM
| |
15:03 | Bertl | just attach a rotary encoder to it and have it control the shutter
| |
15:03 | Hfuy | Ha, for some value of "easily." You'd need to be a software engineer with experience of embedded systems development on whatever system you're using, and knowledge of the existing code.
| |
15:03 | Bertl | not really, for this you are probably fine with a little python
| |
15:04 | Bertl | but you can always hire somebody to implement that for you and it won't cost you an arm and a leg
| |
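The hand-crank idea Bertl sketches really could be a small Python loop: poll the encoder, convert crank speed into a frame rate, apply it. Everything below (`read_encoder`, `set_frame_rate`, the encoder resolution) is hypothetical glue, not an existing AXIOM API:

```python
import time

def crank_to_fps(delta_counts, dt, counts_per_rev=24, frames_per_rev=8):
    """Convert encoder counts observed over dt seconds into a frame
    rate; one crank revolution yields frames_per_rev frames, as on
    a classic hand-cranked film camera."""
    revs_per_sec = delta_counts / counts_per_rev / dt
    return max(0.0, revs_per_sec * frames_per_rev)

def hand_crank_loop(read_encoder, set_frame_rate, interval=0.1, steps=10):
    """Poll a (hypothetical) rotary encoder and drive the shutter rate."""
    last = read_encoder()
    for _ in range(steps):
        time.sleep(interval)
        now = read_encoder()
        set_frame_rate(crank_to_fps(now - last, interval))
        last = now
```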
15:04 | Hfuy | Did you look at the Fairchild LTN4625
| |
15:05 | Bertl | no, do you have a comprehensive datasheet for it?
| |
15:05 | Hfuy | I think the bigger problem may simply be obtaining one.
| |
15:05 | Hfuy | I would assume (and it is an assumption) they've probably signed an exclusive with their main cinematography client.
| |
15:06 | Hfuy | But it's very capable.
| |
15:06 | Bertl | quite possible
| |
15:09 | Hfuy | Holy hell. CMV50000.
| |
15:09 | Hfuy | Do quite want.
| |
15:09 | Hfuy | That'd put you on the map :D
| |
15:11 | Bertl | it is already EOL
| |
15:12 | Hfuy | Shame.
| |
15:12 | Bertl | but it wouldn't be hard to design an AXIOM sensor frontend for it
| |
15:12 | Hfuy | Ha. E2V. I live about a mile from that place.
| |
15:13 | Hfuy | I'm not sure they really have cinematography sensors, though.
| |
15:13 | Bertl | the CMV5k is still available if you are willing to spend the 5-6k
| |
15:13 | Hfuy | Low yeilds on something like that.
| |
15:15 | intrac_ | joined the channel | |
15:15 | Bertl | yep, always a big problem with large pieces of silicon
| |
15:15 | intrac | left the channel | |
15:15 | Hfuy | I wonder if you're ever going to get a really modern sensor.
| |
15:15 | Hfuy | Some camera manufacturer will always snipe it as an exclusive.
| |
15:16 | Bertl | possible
| |
15:18 | Hfuy | Oh, well. I've shot Cion. I've seen it all :)
| |
15:23 | Bertl | lucky you :)
| |
15:31 | Hfuy | It's a painful memory. I don't like to talk about it.
| |
15:31 | Hfuy | twitches
| |
15:31 | Hfuy | I'm not quite sure how it's possible to get such little dynamic range out of that sensor, but they managed it.
| |
15:42 | se6astian|away | changed nick to: se6astian
| |
15:53 | intrac_ | Hfuy: one specific example of where an open source camera has benefits; one of the BlackMagic cameras at launch would only record 16:9 even though the sensor was 4:3 (so the top and bottom of the frame couldn't be captured)
| |
15:53 | intrac_ | there were requests in the BM forums for this feature, so that people could use anamorphic lenses or shoot open matte
| |
15:54 | intrac_ | I *think* this may have been added sometime later? but don't quote me on that.
| |
15:54 | Hfuy | Which one would that have been?
| |
15:54 | Hfuy | The 2.5K possibly.
| |
15:54 | intrac_ | the camera? not sure, it was a few years back now.
| |
15:54 | Hfuy | The thing is, I like the idea of all this open source stuff.
| |
15:55 | Hfuy | The problem is none of it has any meaning to anyone unless you are a software engineer. And in this situation, quite a competent software engineer with a lot of knowledge of the systems you're working on.
| |
15:55 | Hfuy | Since most people aren't, to most people, open source is pretty much meaningless.
| |
15:55 | Hfuy | I don't object to it but you have to admit it doesn't really make much practical difference.
| |
15:56 | intrac_ | not necessarily, if other people can be encouraged to make a modification then everyone can benefit
| |
15:56 | intrac_ | it only takes one skilled individual to make the change
| |
15:56 | Hfuy | My experience is that FOSS coders work on what interests them, not what interests users.
| |
15:57 | Hfuy | And that's fine, people are working for free, they can do what they like.
| |
15:57 | Hfuy | But the results for users can be... eh...
| |
15:57 | intrac_ | in some cases, but some teams have crowd funding drives to add specific features
| |
15:58 | intrac_ | there's nothing to stop a few people creating a group separate from Apertus, crowdfund, add specific features
| |
15:58 | Hfuy | In theory, sure. In reality my experience is that it just doesn't work very well.
| |
15:59 | Hfuy | My go-to example is always Blender. Huge amounts of time spent on it, fantastic features, but the UX is a complete war crime and I think a lot of the effort is wasted.
| |
15:59 | Hfuy | I'm not anti open source but I think it is often very poorly managed.
| |
16:00 | intrac_ | won't be drawn into Blender UI discussion :)
| |
16:00 | Hfuy | I don't blame you.
| |
16:00 | intrac_ | but I've certainly seen worse from commercial programs
| |
16:00 | Hfuy | I think I'd be reasonably comfortable with the idea that Blender's UI is the worst I've ever seen on any piece of software.
| |
16:01 | Hfuy | By quite some considerable margin.
| |
16:01 | Bertl | most proprietary GUI tools are completely useless
| |
16:01 | Hfuy | And I used Real 3D on the Amiga.
| |
16:01 | intrac_ | I'd go so far as to say open source projects really need a blend of commercial and volunteer input to succeed.
| |
16:01 | Bertl | mainly because companies do not spend money on GUI development and testing unless it is essential to the tool
| |
16:02 | Hfuy | intrac_: I think they need management. And people who are not being paid do not like being told what to do.
| |
16:02 | Hfuy | It's a big problem.
| |
16:02 | intrac_ | Bertl: hardware isn't covered in the same way as software with GPL, right?
| |
16:03 | Bertl | it is Cern OHL in our case
| |
16:03 | intrac_ | eg, it'd be great if Mavrx released their hardware designs for the sensor rotation module
| |
16:04 | Bertl | Hfuy: the solution is simple there, no? just pay the developers :)
| |
16:04 | intrac_ | since there was quite a bit of interest in having the camera body vertically oriented
| |
16:04 | intrac_ | reference: https://wiki.apertus.org/index.php/AXIOM_Beta/Case_Studies#Mavrx
| |
16:05 | Hfuy | Anyway, I'm preparing an article on interesting cameras that never seemed to make it.
| |
16:05 | Hfuy | It was suggested that I include Apertus
| |
16:06 | intrac_ | that would make attaching the camera head-end into a 8mm or Super16 style camera body more practical
| |
16:06 | intrac_ | Apertus is still here
| |
16:06 | Hfuy | It looks like development hell from where I'm sitting.
| |
16:06 | intrac_ | Hfuy: did you include that 8K camera (from Italy, iirc?)
| |
16:07 | Hfuy | Fran? Spain.
| |
16:07 | Hfuy | I had one here. I reviewed it. Days before the company went out of business.
| |
16:07 | intrac_ | yes, Fran
| |
16:07 | Hfuy | It was... not... very good...
| |
16:07 | intrac_ | good candidate there. Philip Bloom also did a review.
| |
16:07 | Hfuy | tries to be polite
| |
16:08 | Hfuy | I tried not to slag them off too much.
| |
16:08 | Hfuy | But really the thing was barely functional.
| |
16:09 | Hfuy | So far I have Kinetta (going back a ways,) Fran, Ikonoskop and Digital Bolex. And possibly Cion.
| |
16:09 | Hfuy | Some of those did ship and then died.
| |
16:09 | intrac_ | no idea why they even released it at that stage for review.
| |
16:09 | Hfuy | Cion was stillborn.
| |
16:09 | intrac_ | no sense there at all.
| |
16:09 | Hfuy | I wondered that. Presumably they were desperate for money.
| |
16:11 | Hfuy | I have some of their monitors, ex demo.
| |
16:11 | Hfuy | They're mainly fine.
| |
16:11 | se6astian | changed nick to: se6astian|away
| |
16:11 | Hfuy | Of course, that's just them going to a Chinese manufacturer with a spec list. But they work OK.
| |
16:11 | Hfuy | I would point out that Apertus has a better web presence than Cinemartin ever did, and Apertus isn't even a commercial organisation!
| |
16:21 | Hfuy | Ha. Bloom got it in the same shabby box I did. I wondered why they wanted it back.
| |
16:22 | Hfuy | And yes, the red-branded port cap!
| |
16:51 | se6astian|away | changed nick to: se6astian
| |
16:51 | BAndiT1983|away | changed nick to: BAndiT1983
| |
17:40 | aSobhy | left the channel | |
18:54 | sebix | left the channel | |
19:08 | Raghu | joined the channel | |
19:09 | Raghu | Hi there
| |
19:09 | Raghu | left the channel | |
19:10 | Bertl | hey Raghu!
| |
19:18 | Hfuy | I guess I could write about the axiom camera for one of the places I write for.
| |
19:19 | Hfuy | But I'd need one on demo. And I'm not sure if you really have anything finished you could send out?
| |
19:21 | intrac_ | changed nick to: intrac
| |
19:32 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
19:35 | Bertl | We do not 'send out' devices for reviews, but there might be an AXIOM Beta Dev Kit owner in your area you could contact ...
| |
19:40 | BAndiT1983|away | changed nick to: BAndiT1983
| |
19:40 | Hfuy | It might not be a bad idea to put together a demonstration kit that people could use for a while.
| |
19:40 | Hfuy | It's tough to write about these things without being hands on.
| |
19:41 | se6astian | you are welcome to visit us in Vienna
| |
19:42 | Hfuy | Next time I'm in Vienna, I'll give you a shout :)
| |
19:42 | se6astian | where are you based?
| |
19:43 | Hfuy | London
| |
19:43 | Hfuy | Well. Half an hour out of London, but you know.
| |
19:43 | se6astian | ah, thats not too far then
| |
19:50 | Hfuy | All I know about Vienna is that you can't get a cup of coffee without a glass of water.
| |
19:51 | se6astian | sounds like a more thorough Vienna experience is long overdue
| |
19:52 | Hfuy | I know someone who was born in Vienna, grew up in the USA and London, and now lives in New York because she says Vienna is boring.
| |
19:52 | Hfuy | I have no opinion, but Edith doesn't like it :)
| |
19:53 | BAndiT1983 | aha, good to know that such people live vienna, so other people can enjoy the city
| |
19:53 | BAndiT1983 | *leave
| |
19:53 | Hfuy | To be fair, I don't think she really lived in Vienna other than a year or two when she was a baby.
| |
19:53 | Hfuy | She does like to speak Martian on the phone to her family.
| |
19:54 | Hfuy | (Claims it's not German. Sounds like German to me.)
| |
19:55 | BAndiT1983 | eh, you probably mean Austrian, as the guys in Austria do not like to be called germans
| |
19:55 | Hfuy | The language does sound quite a lot like German.
| |
19:57 | Hfuy | I'm probably wrong. I do apologise. It's nothing like German :)
| |
20:24 | se6astian | this video teaches you the only word you really need to speak Austrian: https://www.youtube.com/watch?v=iuXR53ex4iI
| |
20:26 | BAndiT1983 | funny to see that a non-austrian is explaining :D
| |
20:26 | BAndiT1983 | was already suspicious because of eastern accent
| |
20:30 | Hfuy | Edith gets very upset if I claim her language sounds like German.
| |
20:30 | Hfuy | I think it's just German with a dozen extra words, all of which are different types of apple strudel.
| |
21:29 | humble_coder | joined the channel | |
21:30 | humble_coder | Hi guys, anyone around?
| |
21:31 | Bertl | yup, what's up?
| |
21:33 | humble_coder | I am debayering the CFA using a bilinear approach; the red and blue channels seem fine, but the green channel has dark pixels in it, and all three channels have significant artifacts
| |
21:34 | BAndiT1983 | humble_coder: have you considered endianness?
| |
21:36 | humble_coder | No I have not, the format is little endian I think. I am reading 8 bits at a time, three times and then performing bitwise operations on them to get intensity values
| |
21:37 | BAndiT1983 | this is where the artifacts are coming from, you have to do byte swap
| |
21:39 | humble_coder | so.. should I convert the little endian input to big endian then perform binary operations on it?
| |
21:39 | BAndiT1983 | yes
| |
21:40 | BAndiT1983 | maybe it's also big-endian
| |
21:42 | humble_coder | can I mail the link of my code to you?
| |
21:42 | BAndiT1983 | yes
| |
21:42 | humble_coder | email id - *email address removed* ?
| |
21:43 | BAndiT1983 | yes
| |
22:21 | se6astian | off to bed, good night
| |
22:21 | se6astian | changed nick to: se6astian|away
| |
22:24 | Bertl | nn
| |
22:27 | humble_coder | BAndiT1983: mail sent, please check
| |
22:28 | BAndiT1983 | humble_coder: i'm not a fan of archives in mails, because it's not safe, could you please add it to github
| |
22:31 | BAndiT1983 | about your problems, the usual way for this data is to do a byte swap, then you can convert each 3 bytes to 2 sensels, as we have 12-bit data stored there
| |
22:32 | humble_coder | BAndiT1983: sending another mail with github link
| |
22:32 | BAndiT1983 | if the byte swap is done and the data correctly extracted, then the image shouldn't be corrupted; if it is, then check the processing path again
| |
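The unpacking BAndiT1983 describes (after any byte swap, every 3 bytes hold two 12-bit sensels) can be sketched as below. The exact bit layout and whether a swap is needed depend on how the file was written, so treat both as assumptions to verify against the sample image:

```python
def unpack_raw12(data: bytes, swap: bool = False):
    """Unpack packed 12-bit data, two sensels per 3 bytes.
    Assumed layout: b0 = high 8 bits of p0, b1 = low 4 bits of p0
    followed by high 4 bits of p1, b2 = low 8 bits of p1."""
    if swap:
        # optional byte swap within each 3-byte group
        data = b"".join(data[i:i + 3][::-1]
                        for i in range(0, len(data), 3))
    out = []
    for i in range(0, len(data) - 2, 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        out.append((b0 << 4) | (b1 >> 4))          # first sensel
        out.append(((b1 & 0x0F) << 8) | b2)        # second sensel
    return out
```

If the extracted values look like noise or show the column-wise artifacts described above, toggling `swap` is the first thing to try.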
22:33 | BAndiT1983 | will check tomorrow, after work, it's late here and i have to get up early for it
| |
22:33 | BAndiT1983 | see you
| |
22:33 | BAndiT1983 | changed nick to: BAndiT1983|away
| |