06:10 | BAndiT1983|away | changed nick to: BAndiT1983
| |
06:25 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
06:38 | Bertl_oO | off to bed now ... have a good one everyone!
| |
06:38 | Bertl_oO | changed nick to: Bertl_zZ
| |
06:55 | LordVan | joined the channel | |
07:11 | alexML | joined the channel | |
07:15 | comradekingu | left the channel | |
07:15 | alexML_ | left the channel | |
07:18 | aPinky | left the channel | |
07:19 | Topic | apertus° - open source cinema | www.apertus.org | join the apertus° Lab: http://lab.apertus.org/ | IRC Logs available at: http://irc.apertus.org
| |
07:19 | se6astian | has set the topic | |
07:22 | se6astian|away | changed nick to: se6astian
| |
07:32 | se6astian | changed nick to: se6astian|away
| |
08:12 | sebix | joined the channel | |
08:29 | se6astian|away | changed nick to: se6astian
| |
08:56 | se6astian | changed nick to: se6astian|away
| |
08:57 | se6astian|away | changed nick to: se6astian
| |
08:57 | se6astian | left the channel | |
08:58 | se6astian|away | joined the channel | |
08:58 | se6astian|away | changed nick to: se6astian
| |
08:58 | se6astian | left the channel | |
08:58 | philippej | left the channel | |
08:58 | RexOrCine|away | left the channel | |
08:58 | Nira|away | left the channel | |
08:58 | BAndiT1983|away | left the channel | |
09:45 | vup | left the channel | |
09:46 | vup | joined the channel | |
10:45 | se6astian|away | joined the channel | |
10:45 | se6astian|away | changed nick to: se6astian
| |
10:45 | BAndiT1983|away | joined the channel | |
10:45 | philippej|away | joined the channel | |
10:45 | BAndiT1983|away | changed nick to: BAndiT1983
| |
10:45 | philippej|away | changed nick to: philippej
| |
10:45 | Nira|away | joined the channel | |
10:45 | RexOrCine|away | joined the channel | |
13:17 | Y_G | joined the channel | |
13:21 | siddhantsahu | left the channel | |
13:41 | tjstyle | left the channel | |
13:51 | RexOrCine|away | changed nick to: RexOrCine
| |
15:10 | Bertl_zZ | changed nick to: Bertl
| |
15:10 | Bertl | morning folks!
| |
15:11 | apurvanandan[m] | Good morning Bertl
| |
15:54 | sebix | left the channel | |
16:51 | Fares | joined the channel | |
16:51 | Nira|away | changed nick to: Nira
| |
17:00 | se6astian | meeting time
| |
17:00 | se6astian | hello everyone!
| |
17:01 | se6astian | as usual, please message me now if you want to report
| |
17:01 | apurvanandan[m] | HI
| |
17:04 | dev__ | joined the channel | |
17:06 | se6astian | please fares go ahead
| |
17:06 | Fares | Hi everyone!
| |
17:06 | Fares | I worked earlier this week on a different encoding approach trying to optimize the core but it turned out to be worse.
| |
17:07 | Fares | so I continued the development of the core, completed the 0xFF fixer module and optimized two modules, then I put everything together and tested it on the FPGA.
| |
17:08 | Bertl | sounds good
| |
17:09 | Fares | I did some basic testing with several images, and it worked correctly, but timing and efficiency are still something to work on.
| |
17:11 | se6astian | very nice!
| |
17:11 | Fares | that would be all, and I would like to get samples of noisy images out of the sensor please
| |
17:12 | Bertl | there are some .raw12 (and maybe even .raw16) images available
| |
17:12 | Fares | I found this, https://files.apertus.org/AXIOM-Beta/snapshots/BetaRawTests/2.2.2016-PropsRawSnaps/
| |
17:14 | Fares | does the sensor produce pictures noisier than the props01 image in this folder?
| |
17:14 | se6astian | there are tons of pictures here
| |
17:14 | se6astian | https://files.apertus.org/AXIOM-Beta/snapshots/
| |
17:14 | se6astian | but it's hard to say which ones are the most noisy...
| |
17:17 | se6astian | if you look at https://files.apertus.org/AXIOM-Beta/snapshots/greyframes/
| |
17:17 | se6astian | there are gainx4 raw12 files
| |
17:17 | se6astian | higher gain -> higher noise
| |
17:18 | Fares | great! I will test them out, the idea is that the noisier the image the bigger its size, which means it will take more time to compress and go through USB, so the worst-case frame will determine the maximum fps
| |
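A quick back-of-the-envelope sketch (Python) of the point above: with a fixed USB link rate, the worst-case compressed frame size is what bounds the sustainable frame rate. The link rate and compression ratios below are illustrative assumptions, not measured values.

    # Worst-case compressed frame size bounds the sustainable fps.
    # All numbers are illustrative assumptions, not measurements.
    WIDTH, HEIGHT = 4096, 3072        # CMV12000 full frame
    BITS_PER_PIXEL = 12               # raw12
    LINK_BYTES_PER_S = 300e6          # assumed usable USB 3.0 payload rate

    def max_fps(worst_case_ratio):
        """fps limit if every frame were as large as the worst case."""
        raw_bytes = WIDTH * HEIGHT * BITS_PER_PIXEL / 8
        return LINK_BYTES_PER_S / (raw_bytes * worst_case_ratio)

    for ratio in (0.56, 0.67, 0.85, 1.0):
        print(f"frames at {ratio:.0%} of raw size -> at most {max_fps(ratio):.1f} fps")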
17:19 | se6astian | great
| |
17:20 | anuejn | what will you do if an image compresses worse than that empirically determined "maximum"?
| |
17:20 | anuejn | e.g. what part of the data will be dropped?
| |
17:20 | anuejn | will the compression be lossy or will frames be dropped?
| |
17:22 | Fares | the compression is not lossy, but the frame will take more time to be processed and more time to go through the USB, so a future frame may be dropped; a safety margin will be needed
| |
17:25 | anuejn | hm... is there any "easy" way to make the compression lossy in that case
| |
17:26 | Bertl | not sure we want that ...
| |
17:26 | anuejn | not sure either, we want to just drop frames...
| |
17:26 | Bertl | but we definitely want some upper limit on the frame size
| |
17:28 | se6astian | let's discuss the details afterwards, if that is ok for you?
| |
17:28 | Fares | the compression depends on the pixels being close in value to each other; to make it lossy we would smooth some pixels before compression
| |
17:29 | Fares | sure, I will be here to discuss it further
| |
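A minimal illustration (Python/NumPy) of the smoothing idea Fares describes: quantizing pixel values before a left-neighbour difference pass lowers the entropy of the residuals, which is what a Huffman stage can then exploit. This is a standalone sketch, not the FPGA core; the noise level and quantization step are arbitrary.

    import numpy as np

    def residuals(row):
        """Left-neighbour prediction differences, LJ92-style."""
        return np.diff(row.astype(np.int32), prepend=row[:1].astype(np.int32))

    def entropy_bits(values):
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    ramp = np.linspace(0, 4095, 4096)                       # smooth scene content
    row = np.clip(ramp + rng.normal(0, 20, 4096), 0, 4095).astype(np.uint16)

    lossless = residuals(row)
    smoothed = (row >> 4) << 4                              # drop 4 LSBs = lossy smoothing
    lossy = residuals(smoothed)

    print("residual entropy, lossless:", round(entropy_bits(lossless), 2), "bits/px")
    print("residual entropy, smoothed:", round(entropy_bits(lossy), 2), "bits/px")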
17:29 | apurvanandan[m] | Can I start?
| |
17:30 | se6astian | anything else fares or can apurvanandan[m] start?
| |
17:30 | Fares | That would be all se6astian, thank you!
| |
17:31 | se6astian | great, thanks
| |
17:31 | apurvanandan[m] | Hi everybody :)
| |
17:32 | apurvanandan[m] | So here is the code that I wrote this week: https://github.com/apurvanandan1997/usb-plug-mod/tree/master/Virtex_5
| |
17:33 | apurvanandan[m] | So this week I completed a major part of the Zynq side of the gearwork, i.e. the BER testing model for the USB 3.0 module
| |
17:33 | apurvanandan[m] | I will be using my Virtex-5 FPGA at first and will later shift to Zynq-7020
| |
17:35 | apurvanandan[m] | I have successfully been able to set up 6 LVDS lanes between the USB module and the Virtex-5 FPGA, and tested them by sending 6-bit words to the host PC through the USB module :)
| |
17:37 | apurvanandan[m] | Also I formatted and cleaned up the code, along with Makefiles for the evaluation. Currently I am ahead of the timeline :).
| |
17:37 | apurvanandan[m] | I have already started working on the MachXO2 side of the gearwork, finding the right primitives and techniques for capturing the DDR stream correctly
| |
17:38 | apurvanandan[m] | After that I will implement the ECC and CRC.
| |
17:38 | apurvanandan[m] | This is all from my side.
| |
17:39 | se6astian | great, many thanks!
| |
17:39 | apurvanandan[m] | Any remarks?
| |
17:39 | Bertl | was any BER testing done so far?
| |
17:41 | apurvanandan[m] | No, because the gearwork is complete on the Zynq side only, but it will be done this week. A Fibonacci PRNG is currently being used for this, together with 8b/10b and the OSERDES at a 500 MHz serial clock.
| |
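For reference, a small software model (Python) of a Fibonacci LFSR of the kind apurvanandan mentions for the BER pattern; the 16-bit width and the x^16 + x^14 + x^13 + x^11 + 1 taps are just a common maximal-length choice, not necessarily what the HDL uses.

    def fibonacci_lfsr(seed=0xACE1):
        """16-bit Fibonacci LFSR, taps x^16 + x^14 + x^13 + x^11 + 1 (maximal length)."""
        state = seed & 0xFFFF
        while True:
            bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
            state = (state >> 1) | (bit << 15)
            yield bit

    def count_bit_errors(received_bits, seed=0xACE1):
        """Compare a captured bit stream against the reference sequence."""
        ref = fibonacci_lfsr(seed)
        return sum(int(rx) != next(ref) for rx in received_bits)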
17:43 | Bertl | I see ...
| |
17:43 | se6astian | dev__: you are next up please
| |
17:43 | dev__ | left the channel | |
17:44 | dev__ | joined the channel | |
17:44 | dev__ | sorry Bad Internet
| |
17:44 | dev__ | Hello Everyone
| |
17:45 | dev__ | This week, I have added static allocators for playback and they are working fine with MLV files.
| |
17:45 | dev__ | I tried to cover cases like random access (random requests of frames using frame number) which will be the case while seeking. --> https://github.com/kakashi-Of-Saringan/opencine/commits/dev
| |
17:45 | dev__ | This week, I will be working on bringing sliders into playback and will also discuss and work on the video clip class.
| |
17:46 | dev__ | that's it
| |
17:46 | se6astian | thanks!
| |
17:46 | se6astian | BAndiT1983: any comments/questions?
| |
17:47 | se6astian | otherwise Y_G please go ahead
| |
17:48 | Y_G | Hi all,
| |
17:48 | dev__ | I have updated everything on Trello, so he can check
| |
17:49 | Y_G | This week I worked on and tested the I2CAdapter both with and without the SMBus commands dependency.
| |
17:49 | Y_G | Time was also spent on researching which one is better. Couldn't find anything significant to discard either approach.
| |
17:49 | Y_G | Tested the current `set register` and `get register` methods. There might be some problems with the set method which I'll discuss with BAndiT1983.
| |
17:50 | Y_G | Implementation of "gamma_conf.sh" in the daemon was delayed this week due to the reworking of the I2CAdapter. Will try to make up for it this week.
| |
17:50 | Y_G | I also went through the image sensor parameters which will be implemented this week.
| |
17:50 | Y_G | Thanks, that would be all from my side.
| |
17:52 | se6astian | thanks!
| |
17:53 | Bertl | so I2C register values can now be retrieved and set via daemon?
| |
17:53 | Y_G | yes they can be
| |
17:54 | Bertl | would be nice to see some of the I2C related scripts adapted to this
| |
17:55 | Y_G | https://github.com/kiquance21/axiom-control-daemon/blob/master/Modules/I2CTestModule.cpp equivalent of pac1720info.sh
| |
17:56 | Bertl | well, that certainly wouldn't work too well in a script :)
| |
17:57 | se6astian | maybe you can discuss the requirements for that after the meeting in detail?
| |
17:58 | BAndiT1983 | GetPac1720Info() could have a loop to avoid manual +=
| |
17:58 | se6astian | otherwise aSobhy would be up next
| |
17:58 | Bertl | sure
| |
17:59 | Y_G | @BAndiT1983 Yes, but again the response is too big, so it would be better to not have this at all.
| |
18:00 | BAndiT1983 | Y_G, this is another side of the story
| |
18:02 | Y_G | yup, will see what can be done about it.
| |
18:02 | se6astian | aSobhy: please go ahead
| |
18:02 | Bertl | Y_G: for me the main question is: how to do a 'i2c1_set 0x38 0x01' from a shell or python script via the daemon, but as I said, fine to discuss that later
| |
18:03 | aSobhy | Hi everyone
| |
18:03 | aSobhy | this week I changed some modules for the link training,
| |
18:03 | aSobhy | I had chosen the hardest way to generate the PRNG seed before Bertl told me the easy way
| |
18:03 | aSobhy | tested (in simulation) some modules to ensure their correctness
| |
18:03 | aSobhy | here is what I have done: https://github.com/aabdosobhy/Bi-Direction-packet-protocol
| |
18:03 | aSobhy | I'm writing the documentation for the code I've written
| |
18:05 | aSobhy | and I got stuck on the deserializer on the MachXO2 for a long time; I used an OSERDESE2 serializer in SDR mode in my design and didn't find a primitive to receive it
| |
18:06 | aSobhy | I'm aiming to finish the link training within the next 2 days, as it is taking a long time :/
| |
18:06 | aSobhy | (the OSERDESE2 is on the Zynq side)
| |
18:07 | Bertl | http://www.latticesemi.com/-/media/LatticeSemi/Documents/ApplicationNotes/IK/ImplementingHigh-SpeedInterfaceswithMachXO2Devices.ashx?document_id=39084
| |
18:09 | aSobhy | okay, I should have asked earlier
| |
18:10 | Bertl | if I remember correctly, I linked this document even before the project started
| |
18:11 | Bertl | ah yes, it is even part of the 'useful links' section of the qualification task :)
| |
18:11 | Bertl | https://lab.apertus.org/T731
| |
18:11 | dev__ | left the channel | |
18:11 | dev__ | joined the channel | |
18:12 | Bertl | ah, not qualification but task description
| |
18:13 | se6astian | felix_: wanted to also report about photonsdi development
| |
18:13 | se6astian | so again aSobhy & Bertl may I ask you to discuss the details after the reports?
| |
18:13 | aSobhy | after opening the document: yes, you linked it before
| |
18:13 | Bertl | sure
| |
18:13 | aSobhy | ok
| |
18:14 | felix_ | hi! i've written the video frame timing generator for photonsdi (that thing was way more convoluted than i expected) and fixed quite a few bugs when doing some build tests of the modules i had already written. no tests on real hardware and no a/b tests in simulation yet though
| |
18:15 | Bertl | so we still don't know if the hardware is working or not, yes?
| |
18:16 | felix_ | yeah
| |
18:18 | dev__ | left the channel | |
18:18 | se6astian | thanks for the update felix_ - I hope you can continue to work on this now
| |
18:18 | se6astian | Nira: you wanted to share a bit as well
| |
18:19 | dev__ | joined the channel | |
18:23 | Nira | hello everyone, this week I have solved the problems which I had last week and finished my first little bit of coding on the PIC16
| |
18:23 | Nira | https://github.com/niratubertc/PIC16_blink
| |
18:24 | se6astian | Great!
| |
18:25 | Nira | I should improve the explanation, but you can have a first look
| |
18:25 | Bertl | yes, please explain the process to program the PIC16s on the Remote
| |
18:29 | Nira | I use pic32prog to upload icsp_ser.hex to the PIC32, so that I am able to program the PIC16
| |
18:30 | Nira | and then we use ser_icsp6_prog_e to upload the desired code (remote_e_b.hex) to the PIC16
| |
18:31 | Nira | (KME in this case)
| |
18:31 | Bertl | k, tx
| |
18:33 | se6astian | no big updates from my side this time, we went over to the equipmentcafe today - it's not actually a cafe, it's an equipment rental facility for cinema gear
| |
18:33 | se6astian | and tested the axiom beta compact enclosure
| |
18:33 | se6astian | with lenses and base plates
| |
18:33 | se6astian | https://twitter.com/AXIOM_Community/status/1143175236407500802
| |
18:34 | se6astian | Bertl: do you want to provide the closing words as usual
| |
18:34 | se6astian | today's meeting already stretched a bit beyond the 60 minutes we normally try to stay under
| |
18:34 | Bertl | nah, we can skip the closing words and dive into individual discussions :)
| |
18:34 | Bertl | not much happened on my side this week, except for more rework
| |
18:37 | se6astian | right!
| |
18:37 | se6astian | official meeting concluded
| |
18:38 | se6astian | please discuss the task details and next steps
| |
18:40 | Bertl | please make sure that we all have links to your current code/status (best add them to the lab)
| |
18:41 | Bertl | also make sure that you are available during the evaluation period
| |
18:41 | Y_G | @Bertl: "how to do a 'i2c1_set 0x38 0x01' from a shell or python script via the daemon" - if this functionality is required, it can be added to DaemonCLI.
| |
18:42 | Bertl | well, I'd say it is the main functionality we are looking for, i.e. a way to modify all the scripts to use the daemon
| |
18:45 | dev__ | left the channel | |
18:47 | Y_G | Hmm, we may do this with a command like "./DaemonCLI i2c_module set i2c1_reg 0x38 0x01 b"
| |
18:48 | BAndiT1983 | this is too many parameters
| |
18:49 | BAndiT1983 | better add separate i2c modules, like i2c1_module or so, although you can also omit _module
| |
18:52 | Y_G | We require at least 3 parameters (chipAddress, dataAddress, value {for set}) even if we have different modules like i2c*module
| |
18:53 | BAndiT1983 | do we need to control it from command line?
| |
18:53 | BAndiT1983 | or can we give addresses meaningful names?
| |
18:53 | BAndiT1983 | also chip address can be replaced by module, or not?
| |
18:54 | Bertl | feel free to add all kind of aliases, simplifications, etc but make sure that the generic, low level access always works
| |
18:54 | aSobhy | Bertl: I'm on the document now, any advice?!
| |
18:55 | Bertl | BAndiT1983: this way if somebody wants to change a bit in a register, they don't have to modify and rebuild the daemon :)
| |
18:55 | Y_G | busAddress can be replaced with different modules. the command works like this `i2cset busAdd chipAdd dataAdd value`
| |
18:55 | BAndiT1983 | ok, then we should think of extending parameter number
| |
18:56 | Bertl | aSobhy: there is no SERDES in the MachXO2s, but there is a quite capable DDR mechanism
| |
18:56 | BAndiT1983 | but we need proper evaluation of it, as it should also be crash-safe
| |
18:56 | Bertl | note that a read-modify-write operation might be nice to have though
| |
18:57 | Bertl | (but of course can be done on the shell/script level as well)
| |
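A sketch (Python) of the script-level read-modify-write Bertl mentions, built loosely on the DaemonCLI syntax Y_G proposed above; the exact argument order is still being discussed, and the existence of a 'get' verb is an assumption, so treat everything here as a placeholder rather than the daemon's actual interface.

    import subprocess

    def daemon_i2c(*args):
        # hypothetical wrapper around the CLI syntax proposed in this meeting
        result = subprocess.run(["./DaemonCLI", "i2c_module", *args],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    def set_register_bit(chip_addr, data_addr, bit):
        """Read a register via the daemon, set one bit, write it back."""
        current = int(daemon_i2c("get", chip_addr, data_addr), 0)   # 'get' verb assumed
        daemon_i2c("set", chip_addr, data_addr, hex(current | (1 << bit)), "b")

    # e.g. set_register_bit("0x38", "0x01", 0)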
19:00 | aSobhy | It is not necessary to change the Zynq side to DDR, but the clock frequency on the MachXO2 will be 1/2 the frequency of the Zynq, right?
| |
19:01 | Bertl | depends on your setup on the MachXO2 side
| |
19:01 | Bertl | you can get up to 8:1 gearing on the MachXO2
| |
19:01 | Bertl | just requires careful design and use of the PLL
| |
19:02 | aSobhy | can that happen without using the same design on both sides?
| |
19:02 | Bertl | you won't be able to use the same design, as there are different mechanisms for serialization and deserialization on both FPGAs
| |
19:03 | aSobhy | okay
| |
19:03 | aSobhy | I'm working on it now
| |
19:08 | dev__ | joined the channel | |
19:10 | dev__ | Good evening BAndiT1983, I have tried to make the code clean and am also using log/logger.h to see the playback logs in the terminal.
| |
19:12 | dev__ | please check it!
| |
19:12 | Fares | @Bertl, regarding compression variability: in my tests I noticed that timing changes with the image complexity and compressed size; with the few examples I tried, it changed from 64 ms to 67.8 ms @ 100 MHz for a 4096*3072 image, with compressed size from 56% to 67%
| |
19:12 | dev__ | I wanted to confirm: can I go ahead with adding sliders to playback?
| |
19:13 | BAndiT1983 | hi dev__
| |
19:13 | Bertl | Fares: yes, it is expected to see different compression sizes
| |
19:13 | Bertl | the main question here is how to address them
| |
19:13 | BAndiT1983 | dev__, will check it when I have some time, but not today, as I had a pretty long day at work and a lot of coding was done
| |
19:13 | BAndiT1983 | if you feel that playback is stable then try to add sliders, but it has to be done in a proper way, with focus on MVP
| |
19:14 | Bertl | Fares: the simplest option is to have fixed frame slots and just pad smaller frames to the maximum
| |
19:14 | Bertl | then have some special marker for frames which are cropped because they are too large
| |
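A small sketch (Python) of the fixed-slot scheme Bertl describes: every compressed frame is padded up to a fixed slot, and a frame that exceeds the slot is truncated and flagged so the receiver can identify it. The slot size and the header layout are illustrative only.

    RAW12_FRAME = 4096 * 3072 * 12 // 8        # uncompressed frame size in bytes
    SLOT_BYTES = int(RAW12_FRAME * 0.75)       # example budget per frame
    FLAG_OK, FLAG_CROPPED = 0x00, 0xFF

    def to_slot(compressed: bytes) -> bytes:
        """Pack one compressed frame into a fixed-size slot with a 5-byte header."""
        if len(compressed) <= SLOT_BYTES:
            flag, payload = FLAG_OK, compressed + b"\x00" * (SLOT_BYTES - len(compressed))
        else:
            flag, payload = FLAG_CROPPED, compressed[:SLOT_BYTES]   # marked as unusable
        return bytes([flag]) + len(compressed).to_bytes(4, "big") + payload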
19:15 | dev__ | Okay, till then I will try to make an action plan for how to do it, and read about MVP and other things
| |
19:16 | BAndiT1983 | I thought that I gave you quite a number of MVP links months ago
| |
19:16 | dev__ | yes,
| |
19:17 | dev__ | I remember
| |
19:18 | dev__ | I will update u with the approach soon
| |
19:18 | BAndiT1983 | if you need an example then take a look at OCBackup and how data and events are passed around, event bus, signals etc.
| |
19:18 | BAndiT1983 | ok, looking forward to it
| |
19:19 | dev__ | Nice, that would be helpful
| |
19:19 | dev__ | Thanks
| |
19:21 | Fares | @Bertl: that assumes 100% utilization of the USB 3.0 bandwidth, correct? I think that is not doable, since even if the compressed image is small we are still bounded by the input pixels/clk, so there will be gaps in the output stream; for example, at two pixels/clk the minimum time to input all pixels to the core @ 100 MHz for 4096*3072 is 31.45728 ms
| |
19:22 | Nira | changed nick to: Nira|away
| |
19:23 | dev__ | left the channel | |
19:24 | Fares | but as you said, the simplest option will be to set a safe margin on the maximum fps - it will still be higher than uncompressed - and just enjoy the lower bandwidth
| |
19:25 | Bertl | no, there is no point in planning (size-wise) for the worst case
| |
19:26 | Bertl | but after some evaluation, we can, for example, settle on 75% of the uncompressed size per frame
| |
19:27 | Bertl | and just make sure that frames which cross that amount can be identified
| |
19:28 | Fares | yes sure, the absolute worst case is terrible, but the question is: will it be practical in the future to add a mechanism to identify the dropped frames and store them on the SD card / transfer them via Ethernet?
| |
19:30 | Bertl | the sky is the limit, but you always end up with a 'worst case' which cannot be handled unless you have the full raw bandwidth or allow for lossy compression
| |
19:33 | Fares | yes that's true :)
| |
19:57 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
19:57 | BAndiT1983|away | changed nick to: BAndiT1983
| |
19:58 | anuejn | how difficult is it to apply lossy compression?
| |
19:58 | alexML | hi Fares
| |
19:59 | Fares | Hi alex
| |
20:00 | alexML | in ML, we solved the variable frame size by allocating fixed-size frames at first (85% of uncompressed size iirc), and resizing them on the fly in order to fill all the available memory
| |
20:00 | Fares | anuejn: LJ92 is not designed to be lossy, although it can be done if we smooth the pixel values first
| |
20:00 | alexML | a bit complicated, but seemed to work; we can probably reuse the approach here
| |
20:00 | Bertl | alexML: except that we do not have 'memory' to fill
| |
20:01 | Fares | The output stream is transferred to usb module then to the reciever
| |
20:01 | anuejn | ah tx, just read the wikipedia article
| |
20:02 | Bertl | it would be an option if the encoder is placed in the input pipeline and the data is already compressed when it hits the DDR memory
| |
20:03 | se6astian | changed nick to: se6astian|away
| |
20:05 | alexML | btw, the 85% we settled for was enough for "typical" cases, but there are still users who managed to exceed this limit (with highly detailed scenes at high ISO)
| |
20:06 | alexML | (and, 56% to 67% are typical numbers for us, too)
| |
20:08 | Fares | that is for 14bits, correct?
| |
20:08 | alexML | yes
| |
20:10 | alexML | or close to 50% for a bit depth that matches the sensor noise
| |
20:11 | alexML | http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html - fig. 18
| |
20:12 | aombk2 | left the channel | |
20:12 | alexML | if the bit depth is too low, you get posterization; if the bit depth is too high, it no longer makes a big difference
| |
20:12 | alexML | for matching the bit depth with the "gradient 3-bit tonality" from that figure, I've got compression ratios close to 50%
| |
20:13 | alexML | (that was at 11 bits at ISO 100 on 5D Mark III's sensor)
| |
20:17 | Fares | I read your article about the "optimal LUT" and it really shows a difference, especially if you can tune the Huffman tree used for encoding to give shorter codes to small differences; I think it can do more than 50%
| |
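To make the "shorter codes for small differences" point concrete, here is a standalone Huffman sketch (Python) over a made-up residual histogram; the frequencies are invented for illustration and this is not the FPGA encoder.

    import heapq
    from collections import Counter

    def huffman_code_lengths(histogram):
        """Return {symbol: code length in bits} for a frequency histogram."""
        heap = [(count, i, (sym,)) for i, (sym, count) in enumerate(histogram.items())]
        heapq.heapify(heap)
        lengths = dict.fromkeys(histogram, 0)
        next_id = len(heap)
        while len(heap) > 1:
            c1, _, syms1 = heapq.heappop(heap)
            c2, _, syms2 = heapq.heappop(heap)
            for s in syms1 + syms2:
                lengths[s] += 1              # every merge adds one bit to these symbols
            heapq.heappush(heap, (c1 + c2, next_id, syms1 + syms2))
            next_id += 1
        return lengths

    # residuals of a clean image cluster around zero, so they end up with the
    # shortest codes while rare large differences get long ones
    residuals = Counter({0: 5000, 1: 2500, -1: 2500, 2: 1200, -2: 1200,
                         5: 300, -5: 300, 40: 10, -40: 10})
    for sym, bits in sorted(huffman_code_lengths(residuals).items(), key=lambda kv: kv[1]):
        print(f"residual {sym:+d}: {bits}-bit code")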
20:17 | Fares | but how can this be applied to the beta?
| |
20:18 | Fares | I mean practically, should we use a different bit depth for every gain?
| |
20:20 | Y_G | left the channel | |
20:22 | alexML | hm, I'm not even sure there is a good reason to use the higher gain settings on the CMV sensor...
| |
20:23 | Bertl | how about: shorter exposure time in low light situations?
| |
20:24 | alexML | yes, but the benefit was pretty small iirc
| |
20:24 | Bertl | compared to what?
| |
20:26 | alexML | the higher gain didn't result in much cleaner shadows, compared to the lower gain, iirc (need to look up the numbers)
| |
20:27 | Bertl | so you are saying that a 2x vs 1x analog gain doesn't provide any advantage over a digital gain of 2x, yes?
| |
20:28 | alexML | my old measurements show 13.6e at gain 1, 11.02 at gain 2, 10.7 at gain 3 and 10.15 at gain 4
| |
20:29 | alexML | so, if you change the gain from 1 to 4, you get 0.4 stops of less noise in shadows, at the expense of 2 stops of highlights
| |
20:30 | alexML | from 1 to 2, you get 0.3 stops in shadows at the expense of 1 stop in highlights
| |
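The stop figures above follow directly from those read-noise numbers; a short worked check (Python), using the measurements alexML quotes:

    from math import log2

    read_noise_e = {1: 13.6, 2: 11.02, 3: 10.7, 4: 10.15}    # electrons, per gain setting

    for gain in (2, 4):
        shadows = log2(read_noise_e[1] / read_noise_e[gain])  # noise improvement in stops
        highlights = log2(gain)                               # highlight headroom lost
        print(f"gain {gain}x: {shadows:.1f} stops cleaner shadows, "
              f"{highlights:.0f} stops less highlight headroom")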
20:30 | Bertl | well, that might be right, but if you can run at 60FPS instead of 15FPS it might be beneficial, no?
| |
20:31 | alexML | ?!
| |
20:32 | Bertl | well, if there isn't enough light for a gain 1x to fill the 12bits
| |
20:32 | Bertl | or do you suggest just to shift in two zero bits?
| |
20:34 | alexML | between gain 1 and 2, the difference may be noticeable, but between 2 and 4... not so much
| |
20:35 | Bertl | I'm pretty sure you'll notice a missing bit :)
| |
20:36 | alexML | of course, but that bit will have mostly random noise :)
| |
20:36 | Bertl | note that even if the noise floor is above the lsb, it is better to fill this bit with noise
| |
20:36 | alexML | yes
| |
20:36 | alexML | anyway, the original question was whether it's worth the trouble of optimizing the LJ92 encoder for the higher gains, right?
| |
20:37 | Bertl | also, did you evaluate the 'gigabytes of data' we collected with the different sensor frontend configurations?
| |
20:37 | alexML | or whether it's enough to focus on the lower one(s)
| |
20:37 | Bertl | because I'm pretty sure we can still improve on the noise side
| |
20:41 | alexML | I think I've uploaded some numbers to the FTP server, back then, just can't remember where
| |
20:42 | alexML | (DRAM refresh cycle needed)
| |
20:42 | alexML | afk a bit
| |
20:42 | Bertl | k, cya
| |
20:43 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
20:43 | BAndiT1983|away | changed nick to: BAndiT1983
| |
20:43 | Fares | yes, the idea is to optimize it as much as possible with the noisy images that the end user may generate
| |
20:46 | Fares | Bertl: I have a question regarding overclocking and timing analysis; I do timing analysis in Vivado with the zynq7010-1 part on the synthesized design, is that similar to the Beta configuration?
| |
20:47 | Fares | Also, after implementing it with a low clock, I overclock it and observe no problems; is that reliable on an FPGA or should/must it be avoided in production?
| |
20:48 | RexOrCine | changed nick to: RexOrCine|away
| |
20:49 | Bertl | the MicroZed (and thus the Beta) uses a Zynq 7020 1C grade, so yes, that is similar
| |
20:50 | Bertl | @overclocking: not sure how you do that, but it is likely the wrong approach
| |
20:51 | Bertl | if you want to verify a design for two clock speeds, you need to add proper timing constraints (for both speeds), then place and route it successfully
| |
20:52 | Bertl | this will guarantee that the design can work at both speeds under 'normal' circumstances
| |
20:59 | Fares | okay, good notes! I will be more involved with timing analysis in the future when working on the hp_reader&writer, since the Xilinx DMA is limited
| |
20:59 | illwieckz | left the channel | |
20:59 | Bertl | okay, let me know if you need any information there or help with testing, etc
| |
21:00 | Fares | okay, thank you!
| |
21:01 | illwieckz | joined the channel | |
21:05 | Bertl | np
| |
21:09 | se6astian|away | changed nick to: se6astian
| |
21:51 | BAndiT1983 | changed nick to: BAndiT1983|away
| |
21:52 | Fares | left the channel | |
22:44 | se6astian | off to bed, good night
| |
22:44 | se6astian | changed nick to: se6astian|away
| |
22:48 | Bertl | nn
| |
23:43 | aombk | joined the channel |