03:29 | Bertl_oO | off to bed now ... have a good one everyone!
| |
03:29 | Bertl_oO | changed nick to: Bertl_zZ
| |
05:50 | BAndiT1983|away | changed nick to: BAndiT1983
| |
06:35 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
06:35 | BAndiT1983|away | changed nick to: BAndiT1983
| |
07:36 | se6astian|away | changed nick to: se6astian
| |
07:52 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
09:35 | madonius | joined the channel | |
10:59 | Nira|away | changed nick to: Nira
| |
12:59 | Bertl_zZ | changed nick to: Bertl
| |
12:59 | Bertl | morning folks!
| |
13:15 | Nira | changed nick to: Nira|away
| |
15:24 | Bertl | off for now ... bbl
| |
15:25 | Bertl | changed nick to: Bertl_oO
| |
16:13 | apurvanandan[m] | Hello Bertl, I tried what you suggested.
| |
16:14 | aSobhy | ah sorry, I didn't notice that we had a meeting yesterday; I didn't remember that we had set a time for it :/
| |
16:15 | apurvanandan[m] | When I transmit two words alternately, they are received correctly, but erroneous words are received during the transition from one word to the other
| |
16:17 | apurvanandan[m] | And when I increase the frequency of change, the erroneous words make up a significant part of the transmitted data, and soon all data is erroneous.
| |
16:20 | se6astian | changed nick to: se6astian|away
| |
16:55 | dev__ | joined the channel | |
17:01 | dev__ | Hello BAndiT1983|away. Could you check the last commit for the VideoClip class? Also, I have started working on coupling FUSE to OC.
| |
17:04 | dev__ | also I had some doubts related to the event bus: https://trello.com/c/k5lQz4zY/20-event-bus
| |
17:04 | dev__ | present in OC
| |
17:07 | Dev | joined the channel | |
17:07 | Dev | changed nick to: Guest44074
| |
17:09 | se6astian|away | changed nick to: se6astian
| |
17:10 | dev__ | left the channel | |
17:10 | Guest44074 | left the channel | |
17:25 | supragyaraj | joined the channel | |
17:26 | supragyaraj | Hi dev
| |
17:26 | dev__ | joined the channel | |
17:27 | dev__ | Good evening, supragyaraj
| |
17:29 | BAndiT1983|away | changed nick to: BAndiT1983
| |
17:30 | dev__ | Hello BAndiT1983
| |
17:31 | dev__ | I am working on coupling FUSE to OC right now, as you suggested
| |
17:31 | dev__ | Could you check the VideoClip class in the last commit?
| |
17:32 | BAndiT1983 | hi
| |
17:33 | BAndiT1983 | dev__, when do you plan to address my comments on the static allocator?
| |
17:33 | supragyaraj | left the channel | |
17:34 | dev__ | After including the VideoClip class, the unnecessary methods were removed from the allocators
| |
17:34 | dev__ | There are fewer methods now
| |
17:35 | BAndiT1983 | you have added the video clip class to the image loader, but this introduces another coupling
| |
17:35 | BAndiT1983 | Why does the image provider need to know about the video clip?
| |
17:36 | dev__ | The ImageLoader method Load will extract related information from the file, like the frame count
| |
17:36 | BAndiT1983 | and why does the allocator need to know about the video clip??
| |
17:37 | dev__ | that's why I thought I could pass it to the Load function
| |
17:37 | BAndiT1983 | this adds a lot of new tight couplings, which I would like to avoid
| |
17:37 | dev__ | VideoClip contains information about which frames are loaded and which are not; that's why it needs it
| |
17:38 | BAndiT1983 | ?
| |
17:38 | BAndiT1983 | sorry, but this does not make any sense, please explain why
| |
17:39 | dev__ | it has a list of frames which contains information regarding the index where the frame is loaded in the buffer
| |
17:40 | BAndiT1983 | why does the allocator need video clip class?
| |
17:40 | dev__ | so the allocators know where to place the next frames
| |
17:41 | BAndiT1983 | and how can i store audio data?
| |
17:41 | dev__ | I was unaware that it was promoting tight coupling
| |
17:42 | BAndiT1983 | what was promoting tight coupling?
| |
17:42 | dev__ | you mean video data?
| |
17:42 | dev__ | passing videoclip to allocators
| |
17:42 | dev__ | as you have pointed out
| |
17:43 | BAndiT1983 | have you read up on tight and loose coupling in software development?
| |
17:44 | BAndiT1983 | http://www.dotnet-stuff.com/tutorials/c-sharp/understanding-loose-coupling-and-tight-coupling
| |
17:44 | dev__ | I read about it, but I could use some practical overview
| |
17:44 | BAndiT1983 | https://en.wikipedia.org/wiki/Coupling_(computer_programming)
| |
17:45 | dev__ | I will go through these links
| |
17:45 | BAndiT1983 | if you read about it, then why do you still place the video clip class in the allocator? The first principle is to avoid tight coupling by using interfaces, which I would have understood the reason for
| |
17:45 | BAndiT1983 | second thing is, the allocator does not need to know about it at all
| |
17:46 | dev__ | Okay, I will change it.
| |
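A minimal sketch of the loose coupling being asked for here, assuming hypothetical names (IFrameTracker, StaticAllocator): the allocator depends on a narrow interface instead of the concrete VideoClip class, so a video clip, an audio clip, or anything else can implement the same interface.

```cpp
// Hypothetical sketch: the allocator depends on a narrow interface,
// not on the concrete VideoClip class (all names are illustrative).
#include <cstddef>
#include <cstdint>

class IFrameTracker
{
public:
    virtual ~IFrameTracker() = default;
    virtual std::size_t GetNextFreeBucket() const = 0;
    virtual void MarkLoaded(std::size_t bucketIndex) = 0;
};

class StaticAllocator
{
public:
    explicit StaticAllocator(IFrameTracker& tracker) : _tracker(tracker) {}

    uint8_t* AllocateFrame(uint8_t* pool, std::size_t frameSize)
    {
        // The allocator asks the interface where to place the frame;
        // it never needs to know that a VideoClip sits behind it.
        std::size_t bucket = _tracker.GetNextFreeBucket();
        _tracker.MarkLoaded(bucket);
        return pool + bucket * frameSize;
    }

private:
    IFrameTracker& _tracker;
};

// VideoClip (or an audio clip) can implement IFrameTracker, so the
// allocator stays decoupled from any video-specific logic.
```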
17:48 | dev__ | Can you please also answer these queries: https://trello.com/c/k5lQz4zY/20-event-bus
| |
17:50 | Fares | joined the channel | |
17:50 | BAndiT1983 | dev__, you don't have to care about the eventMap, this is internal event bus stuff, please focus on using it
| |
17:51 | BAndiT1983 | https://github.com/kakashi-Of-Saringan/opencine/blob/dev/Source/OCBackup/Presenters/BackupPresenter.cpp
| |
17:52 | BAndiT1983 | there are examples of how to use it, have you checked them out?
| |
17:52 | dev__ | Nope, thanks for pointing that out.
| |
17:52 | BAndiT1983 | ???
| |
17:53 | BAndiT1983 | I pointed you to OCBackup a long time ago; why haven't you checked it?
| |
17:53 | dev__ | I have been working on FUSE for the last two days
| |
17:53 | dev__ | I will check it out now
| |
17:54 | dev__ | sorry
| |
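For illustration, a generic publish/subscribe sketch of how an event bus is typically consumed. The API below (EventBus, Subscribe, Publish) is hypothetical; the real OpenCine event bus used in BackupPresenter.cpp may differ, but the principle is the same: callers only subscribe and publish, while the eventMap stays internal.

```cpp
// Generic publish/subscribe sketch (hypothetical API, for illustration
// only -- see BackupPresenter.cpp for the real OpenCine usage).
#include <functional>
#include <map>
#include <string>
#include <vector>

class EventBus
{
public:
    using Handler = std::function<void(const std::string& payload)>;

    void Subscribe(const std::string& event, Handler handler)
    {
        _eventMap[event].push_back(std::move(handler));
    }

    void Publish(const std::string& event, const std::string& payload)
    {
        for (auto& handler : _eventMap[event])
            handler(payload);
    }

private:
    // The eventMap is internal event bus stuff; users never touch it.
    std::map<std::string, std::vector<Handler>> _eventMap;
};

// Usage: a presenter subscribes, another module publishes.
// bus.Subscribe("ClipLoaded", [](const std::string& p) { /* update UI */ });
// bus.Publish("ClipLoaded", "clip0001.mlv");
```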
17:56 | dev__ | left the channel | |
17:56 | dev__ | joined the channel | |
17:57 | dev__ | So this is how the frame server UI will interact with the other modules; I will try to understand it
| |
17:57 | supraraj | joined the channel | |
17:58 | BAndiT1983 | there are only 3 modules involved: frame server UI, OCcore and OCui
| |
17:58 | supraraj | Indian mobile network is not European mobile network :/ ping BAndiT1983 ...
| |
17:58 | supraraj | :)
| |
17:58 | BAndiT1983 | everything else is just the usual 3rd-party libs OC uses, plus FUSE
| |
17:58 | BAndiT1983 | hi supraraj
| |
17:58 | BAndiT1983 | don't know about the EU-wide mobile situation, but the German one is meh
| |
17:58 | supraraj | I may vanish without any notice.. forgive me for that
| |
17:59 | BAndiT1983 | no problem
| |
17:59 | supraraj | How's the project going dev
| |
17:59 | dev__ | Slowly, but I am doing it
| |
17:59 | supraraj | Judging from the logs, we are still on which class goes where
| |
18:00 | supraraj | Hmm... Yes... Slow puts it right
| |
18:00 | dev__ | I am sorry
| |
18:00 | supraraj | How much time do you think the prototype will still take.. dev
| |
18:01 | supraraj | And I hope you remember the first eval comments... I am yet to see you acting on those fronts :/
| |
18:02 | dev__ | Once we have proper Fuse coupling, It won't take too much time , i guess
| |
18:02 | supraraj | I don't see a big problem there - fuse coupling
| |
18:02 | supraraj | The problem is somewhere else...
| |
18:03 | dev__ | Where
| |
18:03 | supraraj | Hmm... Well.. the architecture of what goes where is still not clear
| |
18:04 | supraraj | That's a red flag... Generally that part comes in the first leg...
| |
18:05 | supraraj | And I don't really feel "updated"... I don't know why, but with this project some info is always hiding
| |
18:06 | dev__ | I update the Trello whenever I feel that I have completed something
| |
18:06 | supraraj | The code front... Well, there are many changes that you have done there... But to me it seems like you are writing something of your own... Without consulting... Without telling why certain things were done the way they were done
| |
18:07 | supraraj | For instance, do you remember my objection from the last meeting, which you felt no need to reply to?
| |
18:08 | supraraj | Why do I see your commits on GitHub... Isn't your dev env set up properly?
| |
18:09 | dev__ | Yes, you pointed out that the diffs are not clear
| |
18:10 | supraraj | And you edit on GitHub.com... What kind of software development setup is that?
| |
18:11 | supraraj | Also, since the event bus was discussed the meeting before last... I am still disappointed that you haven't reviewed OCBackup
| |
18:11 | supraraj | I would need an explanation of why you are jumping to FUSE when your mentors are explicit about you focusing on something else at the moment
| |
18:12 | supraraj | Namely... Finding a solution to tight coupling
| |
18:14 | supraraj | Hint: Silence may not be the best thing here :)
| |
18:14 | dev__ | I was told so many things to improve, like there being too many methods in the static allocator, it having tight coupling, etc.
| |
18:15 | dev__ | I worked on the VideoClip class after that (which I started the meeting before last)
| |
18:15 | dev__ | It removed some methods from the static allocators
| |
18:16 | dev__ | today BAndiT1983 pointed out the tight coupling; I will be solving that too
| |
18:16 | dev__ | But after that, I also inspected the event bus
| |
18:16 | dev__ | but I had some doubts regarding that (so I updated Trello two days back)
| |
18:17 | supraraj | Tight coupling discussion... That's not new
| |
18:17 | dev__ | So I thought that until you reply, I can at least work on FUSE
| |
18:18 | supraraj | Plus the project is quite far from where it needs to be... Especially since quite a lot of work was done already last year...
| |
18:18 | dev__ | Yes, I will work on it, as I said
| |
18:19 | dev__ | Yes, last year's work really helped, but I am new to this field
| |
18:20 | dev__ | I have to understand things, maybe slowly
| |
18:21 | supraraj | :/
| |
18:21 | supraraj | BAndiT1983 ... Anything from your end
| |
18:21 | supraraj | left the channel | |
18:21 | Fares | left the channel | |
18:21 | Fares | joined the channel | |
18:24 | BAndiT1983 | don't have anything at the moment, as my comments on Trello regarding the static allocator have not been processed yet, which is a bit disappointing, as they focus on development standards, and it's my daily job to watch over such things
| |
18:27 | dev__ | https://trello.com/c/TWF4s8ql/2-implementation-of-static-allocator , regarding the last comments: I am trying to use a 512 MB size (for now), which can be reused again and again when we have more frames than that
| |
18:28 | BAndiT1983 | where does it allocate 512MB?
| |
18:28 | BAndiT1983 | dev__, I know the theory, no need to remind me again and again; I need a more technical explanation of the implementation
| |
18:29 | dev__ | It could be less, if the total size is less than 512 MB
| |
18:30 | dev__ | The size of the buffer is always a multiple of frameSize
| |
18:30 | BAndiT1983 | I would have expected a constructor with the possibility to set the size of a page
| |
18:32 | dev__ | Okay, but right now we have MLV files with few frames, so that's why I had used that logic
| |
18:33 | BAndiT1983 | you could have initialized the allocator after inspecting the first frame; there are many ways to create halfway smart application logic
| |
18:34 | dev__ | We can allocate a smaller size than a page, and that would be sufficient
| |
18:34 | BAndiT1983 | what do you mean by a smaller size than a page?
| |
18:35 | Bertl_oO | changed nick to: Bertl
| |
18:35 | Bertl | sorry for the delay, we had a network outage
| |
18:36 | dev__ | I mean we will be using a size according to the frame size and the upper limit (512 MB)
| |
18:36 | BAndiT1983 | but haven't we agreed that you take 512 MB for the first tests and dissect it into buckets of frame size?
| |
18:37 | dev__ | If we have a small file, we can just allocate that much space
| |
18:37 | Kjetil | left the channel | |
18:37 | BAndiT1983 | Bertl, i suppose you want to do your meeting now, we will finish for today
| |
18:37 | Bertl | reading up at the moment ...
| |
18:38 | dev__ | Yes, it can take 512 MB, but if our file is small, we can use a smaller size, can't we?
| |
18:38 | dev__ | for now
| |
18:40 | BAndiT1983 | dev__, yes, but it's easier just to use the next bucket until the last one is reached, then start at the beginning
| |
18:40 | BAndiT1983 | the file size is not important, but the frame size is; in fact, it's the channel size
| |
18:41 | dev__ | Yes, BAndiT1983 , it is doing that
| |
18:41 | BAndiT1983 | will check it, as i have finished my tasks at work and can focus on apertus again
| |
18:41 | dev__ | If I end up at the last bucket, I will start from the first again
| |
18:42 | dev__ | Okayy
| |
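A sketch of the bucket scheme agreed on above, with assumed names: a fixed pool (e.g. 512 MB) is dissected into frame-sized buckets, and allocation simply wraps back to the first bucket after the last one is handed out.

```cpp
// Minimal sketch of the discussed bucket scheme (illustrative names):
// a fixed pool is split into frame-sized buckets, reused in a ring.
#include <cstddef>
#include <cstdint>
#include <vector>

class BucketRing
{
public:
    BucketRing(std::size_t poolSize, std::size_t frameSize)
        : _frameSize(frameSize),
          _bucketCount(poolSize / frameSize),      // e.g. 512 MB / frame size
          _pool(_bucketCount * frameSize),
          _next(0)
    {
    }

    uint8_t* NextBucket()
    {
        uint8_t* bucket = _pool.data() + _next * _frameSize;
        _next = (_next + 1) % _bucketCount;        // wrap to the first bucket
        return bucket;
    }

private:
    std::size_t _frameSize;
    std::size_t _bucketCount;
    std::vector<uint8_t> _pool;
    std::size_t _next;
};

// Usage: BucketRing ring(512ull << 20, frameSize); auto* b = ring.NextBucket();
```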
18:43 | dev__ | I will be inspecting OCBackup and the event bus, checking how we can reduce tight coupling
| |
18:43 | dev__ | Thanks for your time.
| |
18:44 | BAndiT1983 | no problem, it's my job
| |
18:45 | se6astian | Fares: I think we can start the presentation now
| |
18:45 | se6astian | how do you want to do it exactly?
| |
18:47 | Fares | great! I would do it here with some tables and graphs
| |
18:47 | se6astian | great
| |
18:47 | se6astian | you have our attention :)
| |
18:47 | Fares | okay :)
| |
18:48 | Fares | so, as we know, images are composed of pixels, with a constant number of bits per pixel
| |
18:48 | Fares | for a 12-bit sensor, for example, every pixel takes exactly 12 bits
| |
18:49 | Fares | LJ92 is a standard for compressing and decompressing images
| |
18:49 | Fares | it utilizes the observation that neighboring pixels are close in value to each other
| |
18:50 | Fares | so for any pixel, if we can predict its value from neighboring pixels, we can subtract the predicted value from the actual value and encode only the difference
| |
18:51 | Fares | hopefully, most of the differences will be small enough to be encoded in fewer bits than the original number of bits
| |
18:51 | se6astian | for raw images you take neighbouring same-color values, I assume?
| |
18:52 | Fares | yes, there are options in the encoder to assume "multiple components", and that allows it to predict using the same color component
| |
18:53 | se6astian | right
| |
18:53 | Fares | we will stop at this point; please refer to: https://github.com/FaresMehanna/MLV-File-Format/blob/master/LJPEG-1992/README.md
| |
18:53 | Fares | in "Predictor Functions" section, you will see there is pixel X and pixels A,B and C
| |
18:54 | Fares | pixel X is the pixel we want to predict, and we will do that using A, B and C
| |
18:55 | Fares | functions 1 to 7 are different ways to predict the pixel value; I chose 1. It can be changed in the future in a single module, but it was simple enough and its results are good
| |
18:56 | Fares | so if everything is clear so far, I have described half the process: predict the pixel, then subtract to get the value which will be encoded
| |
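A small worked example of the half of the process described so far, using predictor 1 (Px = Ra) and made-up pixel values:

```cpp
// Worked sketch: predictor 1 (Px = Ra) and the difference that actually
// gets encoded. Pixel values are invented for illustration.
#include <cstdint>
#include <cstdio>

int main()
{
    // A row of 12-bit pixel values (same color component).
    uint16_t row[] = {2040, 2043, 2041, 2050, 2048};

    for (int i = 1; i < 5; i++)
    {
        int predicted = row[i - 1];        // predictor 1: Px = Ra
        int diff = row[i] - predicted;     // only this gets encoded
        printf("pixel %d: actual=%d predicted=%d diff=%d\n",
               i, row[i], predicted, diff);
    }
    // The diffs (3, -2, 9, -2) are tiny compared to the 12-bit values,
    // so they fit in far fewer bits -- that is the whole compression gain.
}
```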
18:56 | Bertl | but you will need a buffer of at least one line for anything involving B or C
| |
18:57 | Fares | yes, correct; that is why I chose Px = Ra, it is simple and effective
| |
18:57 | Bertl | one question regarding color vs monochrome
| |
18:58 | dev__ | left the channel | |
18:58 | Bertl | what is required to switch from color (Bayer pattern) encoding (probably 4 channels) to monochrome?
| |
18:59 | Fares | okay, in the standard you can choose 1, 2, 3 or 4 interleaved components; 1 component means all the pixels are the same color (monochrome)
| |
19:00 | Fares | for 2, 3 or 4, it means that when encoding or decoding, it will treat the image as 2, 3 or 4 interleaved channels
| |
19:00 | Fares | since RAW12 is stored in memory as RGGB, 4 components were picked
| |
19:01 | Bertl | what if we change the memory layout in the future to have separate buffers for each channel?
| |
19:02 | Fares | if the same configuration (4 components) were used to encode monochrome, it would still work, but it would produce a slightly lower compression ratio, since every pixel would be predicted using the pixel 4 positions behind
| |
19:02 | BAndiT1983 | as LJ92 is coming from MLV, and they use 14 bit a lot, how does that affect the process?
| |
19:04 | Fares | Bertl: we may need to combine them when encoding to produce a 4-component image, or encode every component alone to produce 4 images
| |
19:04 | Bertl | okay
| |
19:04 | Bertl | please continue
| |
19:05 | Fares | BAndiT1983: the prediction process is the same for all bit depths, but for higher bit depths or higher ISO, the difference between every pixel and its neighbors will be higher, and that will result in a lower compression ratio
| |
19:06 | Fares | okay, after getting the difference value, it will belong to one of the 17 SSSS classes
| |
19:06 | BAndiT1983 | what is the performance hit between 12 and 14bit?
| |
19:07 | se6astian | maybe we can do the detailed questions after the main presentation?
| |
19:07 | Fares | BAndiT1983: I really didn't try it
| |
19:07 | BAndiT1983 | ah, sorry
| |
19:08 | BAndiT1983 | please continue and don't let my questions distract you
| |
19:08 | Fares | okay thank you
| |
19:08 | Fares | So every one of the 17 SSSS classes will be assigned a Huffman code.
| |
19:09 | Fares | and the final pixel will be [ssss_code][subtracted_value]
| |
19:09 | Fares | in the same previous link there is a tab called "SSSS Values"
| |
19:10 | Fares | the table in it explains which difference (subtracted) value is assigned to which SSSS class
| |
19:10 | Fares | for SSSS class 0, only the code will be emitted, with 0 additional bits
| |
19:10 | Fares | for SSSS class 1, the code will be emitted plus 1 additional bit
| |
19:11 | Fares | so the SSSS class number indicates how many bits are needed for the difference value
| |
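A sketch of the SSSS classification just described; the helper below is illustrative, but the mapping (class = bit length of the difference magnitude, class 0 for a zero difference) follows the table linked above.

```cpp
// Sketch of SSSS classification: the class number is how many bits the
// difference magnitude needs, and the encoded pixel is
// [huffman code for class][class-many extra bits].
#include <cstdio>
#include <cstdlib>

// SSSS class: 0 for diff == 0, otherwise bit length of |diff|.
static int SsssClass(int diff)
{
    int magnitude = std::abs(diff);
    int ssss = 0;
    while (magnitude > 0)
    {
        ssss++;
        magnitude >>= 1;
    }
    return ssss;
}

int main()
{
    int diffs[] = {0, 1, -2, 9, 100};
    for (int diff : diffs)
    {
        int ssss = SsssClass(diff);
        // Total cost = length of the Huffman code for this class plus
        // ssss extra bits carrying the difference value itself.
        printf("diff=%4d -> SSSS class %d (%d extra bits)\n",
               diff, ssss, ssss);
    }
}
```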
19:12 | Fares | if you refer to this link: https://github.com/FaresMehanna/JPEG-1992-lossless-encoder-core/tree/master/test_files
| |
19:13 | Fares | the graphs in the README file show how many difference values fall into each SSSS class
| |
19:13 | Fares | for a normal image, they show a lot of difference values in the small SSSS classes
| |
19:14 | Fares | so that info can be used to determine the best Huffman codes for a given bit depth
| |
19:15 | Fares | to give a complete example, please refer to: https://drive.google.com/file/d/1oIb3nR4NGYF_vBy9Z45RaaHmugD6AuIq/view?usp=sharing
| |
19:15 | Fares | in the top left there is an example of the pixels in the image
| |
19:16 | Fares | in the top right there is a Huffman code for every SSSS class; as you notice, the SSSS classes with the most pixels are given short codes
| |
19:16 | Fares | and at the bottom there is a simple pipeline for encoding a single pixel
| |
19:17 | Fares | first you predict it using function 1, then subtract, then normalize it and determine which SSSS class it belongs to
| |
19:17 | Fares | then encode it using the Huffman code and the normalized difference value
| |
19:18 | Fares | in decoding, the process is reversed to get the exact same pixel value
| |
19:19 | Fares | so that was a simple overview of how LJ92 is used to encode a single pixel
| |
19:19 | Fares | if there are any questions so far, please ask
| |
19:20 | Bertl | nothing from my side at the moment :)
| |
19:21 | Fares | okay I will continue :)
| |
19:22 | Fares | so the challenge in the encoding part is that every pixel may be encoded in 1 to 31 bits
| |
19:23 | Fares | then if 4 pixels are encoded in the same cycle, that will be anywhere from 4 to 124 bits, and that can take a lot of resources
| |
19:23 | Fares | so please refer to: https://github.com/FaresMehanna/JPEG-1992-lossless-encoder-core/blob/master/README.md
| |
19:24 | Fares | the first graph explains the basic pipeline for LJ92, but it may now encode multiple pixels per cycle, and the last stage is the "Merger", which combines all the encoded values
| |
19:25 | Fares | the problem after that is how to output those values in 16 or 32 bits with no variability
| |
19:26 | Fares | so I worked on the "V-bits to C-bits" module, which takes a variable number of bits as input and outputs a constant number of bits, using a buffer ((buffer << new_count) | new_output)
| |
19:27 | Fares | and the output is read from the same buffer; it is crucial that there are no (or minimal) stall cycles
| |
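A software-only behavioral sketch of the "V-bits to C-bits" idea, using the (buffer << new_count) | new_output scheme described above; the real module is FPGA logic, and the names here are assumptions.

```cpp
// Behavioral sketch: variable-length codes are shifted into a bit
// accumulator, and fixed 16-bit words are emitted whenever enough bits
// are buffered (software model only -- the real module is FPGA logic).
#include <cstdint>
#include <cstdio>

struct VToC
{
    uint64_t buffer = 0;   // bit accumulator
    int count = 0;         // bits currently buffered

    void Push(uint32_t code, int bits)
    {
        buffer = (buffer << bits) | code;   // (buffer << new_count) | new_output
        count += bits;

        while (count >= 16)                 // emit constant 16-bit words
        {
            count -= 16;
            // The uint16_t cast masks off bits that were already emitted.
            uint16_t word = uint16_t(buffer >> count);
            printf("out: 0x%04X\n", word);
        }
    }
};

int main()
{
    VToC packer;
    packer.Push(0b101, 3);      // variable-length inputs...
    packer.Push(0b0110, 4);
    packer.Push(0x1FF, 9);      // ...produce constant-width output words
    packer.Push(0xABC, 12);
}
```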
19:27 | Fares | so to optimize it a little I wrote a converter module after the "LJ92 pipeline" and before the "V-bits to C-bits"
| |
19:29 | Fares | since we know that most of the encoded values will be within some range, this divides only the big chunks of data into multiple smaller ones, and the small ones are not affected
| |
19:30 | Fares | in our first example I said that if 4 pixels are encoded, the values will be anywhere between 4 and 124 bits
| |
19:30 | Fares | with the converter module we can set the upper limit to only 60 bits, or lower
| |
19:31 | Fares | after the "V-bits to C-bits", there is a module to detect any 0xFF byte and append a 0x00 byte after it; this is crucial in the LJ92 standard
| |
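A minimal sketch of that byte-stuffing stage (the 0xFF/0x00 rule comes from the JPEG marker syntax; the function name is illustrative):

```cpp
// Every 0xFF in the entropy-coded stream is followed by a stuffed 0x00
// so a decoder cannot mistake encoded data for a JPEG marker.
#include <cstdint>
#include <vector>

std::vector<uint8_t> StuffBytes(const std::vector<uint8_t>& in)
{
    std::vector<uint8_t> out;
    out.reserve(in.size());
    for (uint8_t byte : in)
    {
        out.push_back(byte);
        if (byte == 0xFF)
            out.push_back(0x00);   // stuffing byte required by the standard
    }
    return out;
}
```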
19:32 | Fares | and after that there is a module that appends a starting/ending marker to every frame
| |
19:32 | Fares | that was a high-level overview of how the core is actually implemented in the FPGA
| |
19:33 | Fares | last thing is the performance and timing
| |
19:34 | Fares | since this is lossless, there is no guaranteed upper limit on the compressed size
| |
19:34 | Fares | also, the performance varies with ISO and lighting conditions
| |
19:35 | Fares | there are also performance issues related to the implementation itself
| |
19:36 | Fares | firstly, the converter module will add some cycles when dividing big chunks; it may be less than 1-4%, but I don't think that will introduce a performance loss
| |
19:37 | Fares | the actual problem arises when one part of the image has low detail and another part has high detail
| |
19:37 | Fares | the low-detail part will under-saturate the output, not making full use of the USB 3.0 module
| |
19:38 | Fares | and the high-detail part will over-saturate the output, resulting in stall cycles in the input module
| |
19:39 | Fares | so to eliminate both effects, I think the core should be clocked as high as possible
| |
19:39 | Fares | that will allow more data to be outputted when the encoded values are small
| |
19:39 | Fares | and will allow more data to be processed when encoded values are big
| |
19:40 | Bertl | hmm?
| |
19:40 | Fares | I think that covers everything I wanted to present
| |
19:40 | Bertl | we can't really process more data than we have :)
| |
19:40 | se6astian | many thanks!
| |
19:40 | se6astian | now the questions :)
| |
19:41 | Bertl | well, my main question is: how are overflows handled?
| |
19:41 | Fares | I mean that because of the buffer in the USB 3.0 module, more data can be processed and sent there
| |
19:41 | Bertl | i.e. what happens when the compressed image gets 'too large'
| |
19:42 | Fares | "overflows" of what exactly?
| |
19:42 | Fares | ahaa
| |
19:43 | Fares | so the core works until an end signal is set, then the core is reset to accept the next frame. I'm currently writing a small module that works as a counter; when it hits a predefined number, it forces an end using a special ending marker
| |
19:44 | Fares | so it can be set to 3 million, and it keeps increasing with every cycle; if it hits 3 million, it force-ends the current frame and the end signal is set
| |
19:44 | Fares | as I mentioned, a special ending marker will be used to indicate to the receiver that this frame is not complete
| |
19:45 | Bertl | okay, so the 'lower' part of the image will be missing in this case, yes?
| |
19:46 | Fares | correct, and the received data can still be decoded successfully
| |
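A tiny behavioral model of the overflow counter described above (software sketch with assumed names; the real module is FPGA logic):

```cpp
// Behavioral model of the frame overflow guard: a free-running cycle
// counter that force-ends the current frame with a special marker once
// a predefined limit is hit, so the compressed frame cannot grow unbounded.
#include <cstdint>
#include <cstdio>

struct FrameGuard
{
    uint64_t limit;        // e.g. 3'000'000 cycles per frame
    uint64_t cycles = 0;

    bool Tick() { return ++cycles >= limit; }  // true -> force-end frame
    void Reset() { cycles = 0; }               // core reset for next frame
};

int main()
{
    FrameGuard guard{3'000'000};
    while (!guard.Tick())
    {
        // ... encode pixels of the current frame ...
    }
    // Limit hit: emit the special ending marker and assert the end
    // signal; the receiver sees the marker and knows the frame is cut.
    printf("frame force-ended after %llu cycles\n",
           (unsigned long long)guard.cycles);
    guard.Reset();
}
```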
19:47 | Bertl | okay, that's fine, thanks!
| |
19:48 | Fares | thank you all for listening; if there are any other questions, please ask
| |
19:48 | Bertl | last question from my side: how is the integration going (i.e. real hardware, running on the Beta, etc)?
| |
19:48 | se6astian | what is the state of implementation currently?
| |
19:51 | Fares | I have been testing it regularly on the hardware and it is working fine; I haven't stress tested it yet. I primarily use the Xilinx DMA. I recently managed to get the axihp reader and writer working and read all the data successfully, but there are still problems in the last segment of data.
| |
19:52 | Bertl | okay? please elaborate
| |
19:52 | Fares | I have implemented all the modules mentioned in the presentation, so for now you can get a fully encoded image with no header
| |
19:53 | Fares | the last segment of data is repeated, and when I reload the bit file again it doesn't work; that bug is in the DMA I'm trying to build, not in the actual core
| |
19:54 | Bertl | okay
| |
19:55 | Fares | so for now I assume that the receiver will add the header to the frame
| |
19:55 | Fares | is that reasonable?
| |
19:55 | Bertl | yes, I think that is fine
| |
19:55 | Bertl | thanks a lot for the nice presentation
| |
19:56 | se6astian | very nice work, thanks, even though I only understand parts of it fully :)
| |
19:56 | Fares | thank you :)
| |
19:57 | Bertl | keep up the good work!
| |
19:57 | Fares | Thank you!
| |
19:57 | Fares | now, if possible, I have a few questions regarding how to continue
| |
19:57 | se6astian | sure
| |
19:57 | Bertl | please go ahead
| |
19:59 | Fares | firstly, the header part is tricky, since the header contains info about the number of components and the Huffman table used
| |
20:00 | Fares | so how should the core communicate that information to the receiver?
| |
20:00 | Fares | or should the receiver send that info to the camera before shooting the video?
| |
20:01 | Bertl | are the huffman tables changing during encoding?
| |
20:04 | Fares | In theory you can change them from one frame to the next, but generally they are constant for the camera, or at least for the video
| |
20:04 | Fares | the algorithm itself does not change them; it only reads those values when encoding the frame
| |
20:05 | Bertl | what I meant is, are there any mechanisms in the encoder which would change the tables?
| |
20:05 | Bertl | and as the answer is no, it doesn't make sense to transmit any changes
| |
20:05 | Fares | only the AXI-Lite interface has access to change them; the encoder itself does not
| |
20:05 | Bertl | if at some point we have adaptive tables, it would make sense to transmit them either inline or on a side channel
| |
20:06 | Bertl | for now it's fine to assume that the receiver and the camera 'know' the table
| |
20:06 | Bertl | same goes for components, image resolution and bit depth
| |
20:08 | Bertl | any other questions?
| |
20:09 | Fares | yes, one last one: do you know if I can use/modify the Adobe DNG SDK?
| |
20:09 | Fares | since I wanted to work on the decoder in software, and there is already a good one in the Adobe DNG SDK
| |
20:10 | Bertl | good question, check with alexML, I've never used DNG
| |
20:10 | BAndiT1983 | Fares, I would keep my fingers off the Adobe stuff
| |
20:10 | Fares | okay great, thank you for your time Bertl
| |
20:10 | Bertl | other than that, it's probably checking the licenses and what they permit, etc
| |
20:10 | alexML | well, there are many other apps (besides Adobe) that can read LJ92 DNGs
| |
20:11 | Fares | BAndiT1983: because of license?
| |
20:11 | Bertl | Fares: my pleasure!
| |
20:11 | Fares | I have seen some of them using adobe sdk as well
| |
20:12 | BAndiT1983 | Fares, the first reason is the license, the second reason is their policies, and the third reason is that it partially looks like a mess
| |
20:12 | Bertl | okay, I'm off .. got some work to do ...
| |
20:12 | Bertl | changed nick to: Bertl_oO
| |
20:12 | BAndiT1983 | You can get away without the SDK, as DNG is a TIFF extension; OC can also read it without any SDK
| |
20:12 | alexML | Adobe SDK is pretty large iirc
| |
20:13 | Fares | I was only going to use the LJ92 decoder part from it
| |
20:13 | alexML | dcraw also has a LJ92 decoder, mlv_dump also has one
| |
20:13 | BAndiT1983 | I would not try to use any parts from some company's code, as it can result in nasty legal problems
| |
20:15 | alexML | we use this decoder: https://bitbucket.org/hudson/magic-lantern/src/crop_rec_4k/modules/mlv_rec/lj92.c
| |
20:15 | Fares | okay I will check dcraw, mlv_dump and other open source solutions, thank you.
| |
20:16 | Fares | alexML: the only problem with that one is that it only supports a single component
| |
20:16 | Fares | but I think I can write the part to decode several components
| |
20:18 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
20:18 | alexML | ah, you use components to encode different Bayer channels?
| |
20:19 | alexML | if that's the case, I guess Canon encoder (which we use in MLV) squeezes everything into one component
| |
20:19 | Fares | yes, I used 4 components with the first predictor function
| |
20:19 | Fares | alexML: well, that is a little tricky
| |
20:20 | Fares | I read a lot about your work and how it is all squeezed into a single component
| |
20:20 | Fares | but I also read about the Canon raw format and how it uses 2, 3 or 4 components
| |
20:20 | Fares | with predictor function 1
| |
20:21 | alexML | yeah, iirc they reinterpret the image as double-width / half-height, to align the color channels in order to be able to predict them
| |
20:21 | alexML | baldand (the author of lj92.c) describes the internals here: https://www.thndl.com/how-dng-compresses-raw-data-with-lossless-jpeg92.html
| |
20:22 | alexML | I didn't try to figure out the math behind LJ92; I only figured out how to call the existing encoder and how to change the image size :D
| |
20:23 | alexML | (in the original firmware, LJ92 was used to compress only still photos, i.e. CR2; we reused that encoder for video, but it wasn't designed by Canon to work like that)
| |
20:24 | Fares | yes and that was brilliant! but when looking here: http://lclevy.free.fr/cr2/#lossless
| |
20:24 | Fares | you will find that the encoder is used in those cameras in different configurations
| |
20:24 | alexML | yes
| |
20:25 | alexML | the image is sliced, for some reason
| |
20:25 | Fares | but also the encoder is different
| |
20:25 | Fares | the number of components is not one
| |
20:26 | alexML | hm, it starts to make sense
| |
20:26 | alexML | "data
| |
20:26 | alexML | "the Full Raw Jpeg is encoded with 14 bits, in 2 colors (prior to the 50D and the Digic IV), then using 4 colors with the 50D up to 1100D. Since 1D X and up to 6D, Canon is back to 2 components."
| |
20:27 | alexML | we only have lossless encoding implemented on DIGIC 5, i.e. the last config
| |
20:27 | alexML | and, indeed, the encoder expects two slices of data
| |
20:27 | alexML | (I feed only one of them, as the decoder appears to handle it just fine, but can't really tell why it works)
| |
20:29 | Fares | I think you use a special configuration, because not only do you not use any slices, but you also use a single component with a different predictor function
| |
20:29 | Fares | none of the configurations described in the encoder itself is the same as the one you are using
| |
20:30 | Fares | slices are not the same as components
| |
20:33 | alexML | well, 'all' I know is that I use pretty much the same configuration as in a Canon DIGIC 5 CR2 (whatever that is); I've only changed the image size and "bypassed" the slicing mechanism by feeding the entire image data into the encoder, all at once
| |
20:34 | alexML | (Canon code was feeding first the left half of the image, and then the right half - these were the two slices)
| |
20:35 | alexML | so, by feeding the entire image, the encoder will consider the top half of the image as the first slice (as it only looks at the amount of data; it doesn't really care about its size), and the bottom half will be the second slice
| |
20:36 | alexML | the LJ92 decoder used in DNG happens to work fine with this trick, but I don't really understand why it just works
| |
20:41 | Fares | I kinda understand that. I decoded MLV files from my EOS M; as far as I can see, it is a valid LJ92 image with no difference between the first half and the last half. My observation was that this is completely different from what was described in the CR2 document.
| |
20:42 | Fares | slices are not part of the LJ92 standard, but the number of components and the predictor function are, and both are different. There is something called small raw as well; maybe just an area for improvement in the future.
| |
20:42 | alexML | yeah, the video frames in MLV contain plain dumps of what Canon encoder outputs
| |
20:44 | alexML | small raw is a different beast; iirc it contains debayered data
| |
20:47 | Fares | yes, but full raw is np (number of components) = 2 or 4,
| |
20:48 | Fares | and the predictor function is documented to always be 1
| |
20:48 | Fares | neither of these is met in MLV files from Canon cameras
| |
20:48 | Fares | they use a single component and a different function
| |
20:49 | Fares | so they may be programmable or something.
| |
20:49 | alexML | interesting, and a plain CR2 from the EOS M matches the docs from lclevy?
| |
20:50 | alexML | (I expect to see the same configuration, as I don't even know how to change it)
| |
20:51 | Fares | I didn't really look into CR2 as I was interested in MLV only, but I will check it
| |
20:52 | alexML | maybe I should run some experiments, comparing the output of CR2 encoder (as called by Canon) vs the output of the LJ92 encoder (i.e. same thing, but called by our code) on the same input image
| |
20:52 | alexML | who knows what that will reveal :)
| |
20:55 | Fares | I will check it too since I only looked into MLV files
| |
20:56 | Fares | left the channel | |
20:56 | Fares | joined the channel | |
21:02 | alexML | ha, I see you actually fixed some bugs in lj92.c some time ago: https://github.com/FaresMehanna/MLV-App/commit/a04b571f01e5ea8b0338ec5858c3bd2650d11bc6
| |
21:03 | alexML | do you have some test files that would trigger these bugs? or some additional info?
| |
21:06 | Fares | that was the test file: https://github.com/FaresMehanna/MLV-File-Format/blob/master/LJPEG-1992/Andrew_Baldwin_Implementation/test.c
| |
21:06 | alexML | one of them seems to be an overflow in "linearize" (linearization table? doesn't seem to be used in mlv_dump)
| |
21:07 | alexML | ah, feeding random data, cool
| |
21:07 | Fares | one of them happens when a random 16-bit image is encoded
| |
21:07 | Fares | thanks
| |
21:08 | Fares | the other overwrites an entry in the encoding table; per the standard, the bug goes unnoticed, since the decoder makes the exact same mistake, but it makes the image slightly bigger
| |
21:09 | alexML | interesting one
| |
21:12 | alexML | there's also something that adjusts some bits; that's yet another bug? also triggered by test.c?
| |
21:14 | Fares | no, that was the missing step to remove the overwritten entry
| |
21:15 | Fares | I didn't come up with it; it is documented in the LJ92 standard.
| |
21:15 | alexML | ok, I need to play with this stuff a bit
| |
21:17 | alexML | btw, regarding integration - do you have some kind of proof of concept code that would call the encoder from the main processor? or some kind of MMIO interface? or some notes on that?
| |
21:18 | Fares | all those steps are part of generating the Huffman table codes. Also, another optimization in the encoder part that I am going to commit later: there are Huffman tables that are better but would never get generated the standard way, so those may be hard-coded to be tested against the standard-generated ones.
| |
21:18 | Fares | the FPGA core integration I'm working on?
| |
21:19 | alexML | yeah
| |
21:20 | Fares | yes, I have some code I use for testing; this is a video from when I was testing it: https://drive.google.com/open?id=1sAmZwe_Ou0qeftP1TQx8OpmKoI1UpfNg
| |
21:22 | Fares | but for now I use the Xilinx DMA core; hopefully later I will replace it with a DMA using axihp_reader|writer from the Beta
| |
21:22 | Fares | the code with axihp_reader|writer is already working, but has bugs.
| |
21:25 | alexML | cool, I'd like to look at this code, is it on the repo somewhere?
| |
21:26 | Fares | yes, all the code is here: https://github.com/FaresMehanna/JPEG-1992-lossless-encoder-core, but the Verilog is not updated, so you would need to run the generate_verilog.py script first if Verilog files are needed
| |
21:29 | alexML | "We couldn’t find any code matching 'core_test32' in FaresMehanna/JPEG-1992-lossless-encoder-core"
| |
21:29 | alexML | I must be blind...
| |
21:31 | Fares | ah sorry, I didn't commit the software side yet, I will clean and commit the code in the following days
| |
21:33 | alexML | got it, no worries
| |
21:34 | alexML | nice job, it sounds like we'll soon be able to encode LJ92 streams directly on the beta; that's something I'd like to test
| |
21:38 | Fares | Thank you :) I'm working on it :)
| |
21:39 | alexML | cool :)
| |
22:35 | Fares | left the channel | |
22:47 | se6astian | off to bed
| |
22:47 | se6astian | good night
| |
22:47 | se6astian | changed nick to: se6astian|away
| |
00:51 | danieel | left the channel | |
00:51 | danieel | joined the channel |