| 01:18 | pani | left the channel |
| 05:25 | lambamansha | joined the channel |
| 05:34 | pani | joined the channel |
| 05:38 | pani | left the channel |
| 05:38 | pani | joined the channel |
| 05:41 | BAndiT1983 | changed nick to: BAndiT1983|away
|
| 06:03 | pani | left the channel |
| 06:04 | pani | joined the channel |
| 06:13 | pani | left the channel |
| 06:13 | pani | joined the channel |
| 07:00 | mumptai | joined the channel |
| 07:07 | lambamansha | left the channel |
| 07:18 | pani | left the channel |
| 07:44 | lambamansha | joined the channel |
| 08:18 | Bertl | off to bed now ... have a good one everyone!
|
| 08:18 | Bertl | changed nick to: Bertl_zZ
|
| 08:28 | BAndiT1983|away | changed nick to: BAndiT1983
|
| 08:30 | lambamansha | left the channel |
| 10:14 | pani | joined the channel |
| 10:18 | pani | left the channel |
| 10:18 | pani | joined the channel |
| 10:29 | lambamansha | joined the channel |
| 10:49 | pani | left the channel |
| 10:49 | pani | joined the channel |
| 10:53 | pani | left the channel |
| 10:53 | pani | joined the channel |
| 11:04 | BAndiT1983 | changed nick to: BAndiT1983|away
|
| 11:44 | pani | left the channel |
| 11:46 | BAndiT1983|away | changed nick to: BAndiT1983
|
| 12:23 | intrac | left the channel |
| 12:26 | markusengsner | joined the channel |
| 12:36 | intrac_ | joined the channel |
| 12:49 | mumptai | left the channel |
| 13:45 | mumptai | joined the channel |
| 13:53 | se6ast1an | good day
|
| 13:53 | BAndiT1983 | hu
|
| 13:53 | BAndiT1983 | *hi
|
| 14:10 | markusengsner | left the channel |
| 14:24 | lambamansha | left the channel |
| 14:29 | pani | joined the channel |
| 14:33 | pani | left the channel |
| 14:33 | pani | joined the channel |
| 14:46 | markusengsner | joined the channel |
| 15:35 | tpw_rules | vup: do you have some time to talk in about an hour?
|
| 15:35 | vup | sure
|
| 15:53 | draciel | joined the channel |
| 16:02 | lambamansha | joined the channel |
| 16:02 | draciel | Hi
|
| 16:03 | draciel | left the channel |
| 16:03 | pani | left the channel |
| 16:04 | pani | joined the channel |
| 16:08 | draciel | joined the channel |
| 16:08 | pani | left the channel |
| 16:08 | pani | joined the channel |
| 16:09 | draciel | left the channel |
| 16:10 | mumptai | left the channel |
| 16:27 | lambamansha | left the channel |
| 16:34 | Bertl_zZ | changed nick to: Bertl
|
| 16:34 | Bertl | morning folks!
|
| 16:34 | BAndiT1983 | hi
|
| 16:46 | tpw_rules | vup: so like i said i was interested in the T1220 nmigen gateware for the axiom beta task
|
| 16:46 | vup | yes
|
| 16:47 | tpw_rules | and i had a pile of questions about the goals to help me understand before i write a proposal
|
| 16:47 | vup | sure, I'll try to answer them
|
| 16:50 | vup | tpw_rules: so what questions do you have?
|
| 16:50 | tpw_rules | do you have an existing PHY in VHDL for the sensor? i'm a little unsure of what you mean by the bit and word alignment; is it that the PHY will sync to the datastream appropriately?
|
| 16:51 | vup | yes, there is an existing phy in VHDL
|
| 16:51 | vup | the existing VHDL stuff can be found here: https://github.com/apertus-open-source-cinema/axiom-firmware/tree/45cca29/peripherals/soc_main
|
| 16:52 | tpw_rules | the sensor datasheet mentions the synchronization training to correct for skew, i would have to implement that?
|
| 16:52 | vup | the existing phy is mainly cmv_pll.vhd + cmv_serdes.vhd combined in top.vhd
|
| 16:53 | vup | tpw_rules: yes, so the synchronization training is basically what is meant by the bit and word alignment
|
| 16:53 | vup | bit alignment means figuring out the correct delay taps for the delay element of the fpga to sample the data at the right point
|
| 16:54 | vup | and word alignment is then just figuring out the first bit in a word in the bit serial stream
|
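For illustration, a minimal host-side sketch of the word alignment vup describes: slip the serial stream one bit at a time until the parallel word matches the known training pattern. The `phy` register interface and the training word value are hypothetical placeholders, not the actual AXIOM register map.

```python
TRAINING_WORD = 0x055  # placeholder; use whatever pattern the sensor is configured to emit

def word_align(phy, channel, word_width=12):
    """Slip the SERDES output one bit at a time until the parallel word
    matches the training pattern ("word alignment")."""
    for _ in range(word_width):
        if phy.read_word(channel) == TRAINING_WORD:
            return True
        phy.bitslip(channel)  # shift the word boundary by one bit
    return False  # no offset matched; bit alignment probably failed first
```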
| 16:55 | tpw_rules | ok it looks like all that logic is in top.vhd
|
| 16:56 | tpw_rules | i don't see much in those other files except vendor module instantiations
|
| 16:56 | vup | yeah
|
| 16:56 | vup | the bit and word alignment is currently driven by a program running under linux on the arm cores connected to the fpga
|
| 16:56 | vup | (the zynq fpga we are using is a combination of an fpga and two arm cores, connected over axi)
|
| 16:56 | tpw_rules | and you'd like to move that down to gateware?
|
| 16:57 | vup | this is the code driving the bit and word alignment: https://github.com/apertus-open-source-cinema/axiom-firmware/tree/2abfb84/software/sensor_tools/train
|
| 16:58 | vup | tpw_rules: not necessarily, it's fine if it's cpu driven at first as well; I think moving it to the gateware only makes sense if there is enough time in the end
|
| 16:58 | tpw_rules | so i guess "with bit and word alignment" means more "with the appropriate registers for that algorithm"?
|
| 16:59 | vup | well, it's more a hint that at this data rate it will be necessary to do that (and the serdes ip won't do it on its own); the implementation does not have to be done in gateware, but it has to be done
|
| 17:00 | vup | you can of course try and reuse the existing training software, but you can also write your own
|
| 17:00 | BAndiT1983 | changed nick to: BAndiT1983|away
|
| 17:01 | tpw_rules | as someone who doesn't know much about high speed serdes, how often does training need to occur? the delay is a physical thing, so it should be more or less fixed per camera right? maybe it changes if you unplug and reinsert the sensor
|
| 17:02 | vup | currently we do it once at camera boot
|
| 17:02 | tpw_rules | ok. i actually just saw a section in the datasheet that says it is in fact temperature dependent. but it suggests also it's designed to be done while the sensor is operating
|
| 17:02 | vup | while the bit alignment is probably constant for a given camera, it can vary for example with temperature; also, the word alignment has to be done every time
|
| 17:03 | vup | tpw_rules: yeah, long term one goal is to continuously monitor the link (as it also transmits the training pattern during normal readout), and retrain as needed
|
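A hedged sketch of what that continuous monitoring could look like on the host side; the per-channel mismatch counters and the threshold are invented for illustration, and `bit_align` / `word_align` refer to the alignment sketches above and below.

```python
RETRAIN_THRESHOLD = 16  # arbitrary illustration value

def monitor_link(phy, channels=32):
    """Poll per-channel counters of training words that failed to match
    (hypothetical registers) and rerun alignment when the link degrades."""
    mismatches = sum(phy.training_mismatches(ch) for ch in range(channels))
    if mismatches > RETRAIN_THRESHOLD:
        for ch in range(channels):
            bit_align(phy, ch)   # re-center the delay taps
            word_align(phy, ch)  # re-find the word boundary
```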
| 17:03 | lambamansha | joined the channel |
| 17:05 | tpw_rules | okay. the datasheet also says the maximum total skew is around 4560ps. that's several clock cycles, can the serdes delay taps accommodate that, or have you compensated with routing and/or flip flop delays somehow?
|
| 17:05 | vup | we currently have a framework to embed python snippets as part of the gateware that can read / write registers on the fpga and then be run on the arm cores. there is an example, for another link, of how link training is implemented: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/a81b281c4dec08750d11a0aa4cbcee0ceb9ac45d/naps/cores/plugin_module_streamer/rx.py#L151
|
| 17:07 | tpw_rules | ah, that's cool. if a bit mysterious :)
|
| 17:09 | vup | tpw_rules: anything bigger than a bit clock cycle can be compensated using flip-flops or the bit-slip functionality of the serdes blocks (preferably the bit-slip functionality, as that does not cost extra luts / ffs)
|
| 17:10 | vup | the general idea is to use the delay element to sample in the middle of the eye and then align the bits relative to each other using the bit-slip
|
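The "middle of the eye" search is the classic delay-tap sweep. A minimal sketch, again assuming a hypothetical `phy` interface and that the sensor is emitting its training pattern while we scan:

```python
def bit_align(phy, channel, num_taps=32):
    """Sweep the input delay taps, record which taps sample the training
    pattern without errors, and park in the middle of the longest stable
    window (the middle of the eye)."""
    good = []
    for tap in range(num_taps):
        phy.set_delay_tap(channel, tap)
        good.append(phy.pattern_is_stable(channel))  # e.g. n error-free reads

    # find the longest run of error-free taps
    best_start, best_len, run_len = 0, 0, 0
    for tap, ok in enumerate(good):
        run_len = run_len + 1 if ok else 0
        if run_len > best_len:
            best_start, best_len = tap - run_len + 1, run_len

    phy.set_delay_tap(channel, best_start + best_len // 2)
    return best_len  # eye width in taps; 0 means no valid sampling point
```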
| 17:13 | tpw_rules | okay. i guess it doesn't matter to the bit slip block if the eye is sampled at 1% into the clock cycle time vs 99%?
|
| 17:14 | vup | correct, the bit slip block works using the bit clock you provide to the serdes ip block
|
| 17:15 | vup | (just to be clear, we are not doing clock recovery here, this is a sink synchronous protocol, we are providing the clock to the sensor and the sensor sends us the data clocked by the clock we provide)
|
| 17:16 | tpw_rules | yeah, that makes sense
|
| 17:17 | tpw_rules | it looks like the pixel remapper should be a relatively simple task on top of that
|
| 17:18 | tpw_rules | do you use 32 or 64 channels from the sensor?
|
| 17:18 | vup | 32 currently
|
| 17:18 | vup | the remapper will mostly be a translation of the existing VHDL one, with hopefully some simplification using the expanded metaprogramming possibilities of python / nmigen
|
| 17:19 | tpw_rules | okay, that's what i figured
|
| 17:20 | vup | a general plus would be if the core could in theory work with 2 / 4 / 8 / 16 / 32 / 64 channels
|
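As a toy model of that parameterization only: assume each of the N channels carries a contiguous horizontal slice of every line (the real CMV12000 burst ordering is more involved, so this is purely illustrative):

```python
def channel_to_pixel(channel, index_in_channel, line_width=4096, channels=32):
    """Toy mapping: with `channels` outputs, each channel carries
    line_width // channels pixels of every line. Only illustrates the
    2/4/.../64 parameterization; the actual sensor burst order differs."""
    assert channels in (2, 4, 8, 16, 32, 64)
    slice_width = line_width // channels
    return channel * slice_width + index_in_channel
```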
| 17:21 | tpw_rules | it looks at some point like you end up reading two lines at once. can the core handle that?
|
| 17:21 | vup | what do you mean by that?
|
| 17:22 | tpw_rules | the datasheet says that in 64 channel mode even lines appear on the top outputs and odd lines appear on the bottom.
|
| 17:22 | vup | yes
|
| 17:23 | tpw_rules | but it looks from the comments in the remapper like it only expects to have one line in memory at a time?
|
| 17:24 | vup | two remappers get used
|
| 17:24 | Bertl | there are always two lines which get processed at the same time
|
| 17:26 | tpw_rules | then what is the need for the remapper itself to handle 64 channels? an 8192 pixel wide sensor?
|
| 17:28 | Bertl | currently the 'remapper' (i.e. the two remappers) only handle 32 channels
|
| 17:28 | vup | sorry that was unclear, I meant the whole core as in the combination of PHYs + remapper cores
|
| 17:30 | tpw_rules | oh okay.
|
| 17:31 | Bertl | note that it makes sense to have fewer than 64 channels for data transfer and that the sensor can do any 2^n down to 2 IIRC
|
| 17:31 | Bertl | the frame size is independent of this; it is just a reduction of the bandwidth between sensor and FPGA
|
| 17:33 | tpw_rules | yeah i got that. we can nail down the exact details once the project starts
|
| 17:33 | tpw_rules | so the next deal is this memory-mapped SPI driver? it looks like there is SPI logic on the sensor for configuration. where is it attached? presumably as GPIO to the ARM if the bitbang driver will work
|
| 17:34 | vup | tpw_rules: it's gpio to the fpga
|
| 17:34 | vup | but you can for example pass that through using the mmio gpio driver
|
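For illustration, what a software bitbang over such memory-mapped pins could look like. The `gpio` pin attribute names are hypothetical, and SPI mode 0 (data sampled on the rising SCK edge, MSB first) is assumed:

```python
def spi_transfer_byte(gpio, tx):
    """Bitbang one byte over memory-mapped GPIO pins (hypothetical
    attribute names), shifting out MSB first and sampling MISO on the
    rising SCK edge (assumed SPI mode 0)."""
    rx = 0
    for i in range(8):
        gpio.mosi = (tx >> (7 - i)) & 1
        gpio.sck = 1
        rx = (rx << 1) | gpio.miso
        gpio.sck = 0
    return rx
```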
| 17:35 | tpw_rules | ok. are there any particular performance requirements or do you just need something that works? it seems like it would be better to just memory map the pins to economize fpga space
|
| 17:36 | vup | we are not that strapped on fpga space, this is only control registers, so performance requirements are quite low
|
| 17:36 | Bertl | there is an SPI peripheral in the Zynq SoC, so that could be used here and would probably reduce the FPGA footprint to a few wires
|
| 17:36 | vup | Bertl: does that have similar problems to the i2c peripheral, or does that one actually work well?
|
| 17:37 | vup | (this is an example for a bitbanging driver for i2c: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66/naps/cores/peripherals/bitbang_i2c.py)
|
| 17:37 | Bertl | what problems do you see with I2C?
|
| 17:37 | vup | It doesn't have glitch filters, so you need to implement those yourself
|
| 17:38 | vup | otherwise it tends to lock up
|
| 17:38 | vup | at least that was my experience the last time I used it
|
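A minimal nmigen sketch of such a glitch filter: the output only follows the input after it has been stable for a few consecutive clock cycles. The depth of 3 is arbitrary; this is a generic debounce, not code from the repo:

```python
from nmigen import Cat, Elaboratable, Module, Repl, Signal

class GlitchFilter(Elaboratable):
    """Debounce: the output changes only after the input has held the
    new level for `depth` consecutive clock cycles."""
    def __init__(self, depth=3):
        self.depth = depth
        self.input = Signal()
        self.output = Signal()

    def elaborate(self, platform):
        m = Module()
        history = Signal(self.depth)
        # shift the sampled input into a small history window
        m.d.sync += history.eq(Cat(self.input, history[:-1]))
        with m.If(history == Repl(1, self.depth)):   # stable high
            m.d.sync += self.output.eq(1)
        with m.Elif(history == 0):                   # stable low
            m.d.sync += self.output.eq(0)
        return m
```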
| 17:38 | tpw_rules | out of curiosity, how does the linux code access the i2c peripheral then?
|
| 17:38 | vup | tpw_rules: with the code I linked?
|
| 17:38 | tpw_rules | yeah
|
| 17:38 | Bertl | vup: hmm, we are using the I2C without any filters and it works just fine ...
|
| 17:39 | Bertl | do not remember that we had any issues or lockups there
|
| 17:39 | vup | tpw_rules: that creates a proper i2c device on the linux side, so just accessing `/dev/i2c-n`
|
| 17:39 | vup | Bertl: hmm so maybe that's another case of the ar0330 i2c being a bit wonky
|
| 17:39 | tpw_rules | oh, yeah ok
|
| 17:39 | Bertl | vup: yes, that might be possible, what pullups do you use?
|
| 17:40 | Bertl | might also be a problem of noise introduced by unfortunate routing :)
|
| 17:41 | vup | Bertl: I think 100k, as 10k did not work (if you recall, the sensor was not able to pull scl / sda down to GND)
|
| 17:41 | Bertl | ah, yes, well, with 100k the noise on the wires must be terrible
|
| 17:42 | Bertl | so yeah, I can imagine that this will result in all kind of issues including glitches
|
| 17:43 | vup | well the bitbanging driver has been working quite well so far
|
| 17:43 | Bertl | yes, no doubt the issues can be mitigated
|
| 17:43 | vup | tpw_rules: also I don't want to interrupt you, so if you have more questions, just ask
|
| 17:43 | pani | left the channel |
| 17:44 | tpw_rules | oh, i'm not feeling interrupted. just thinking
|
| 17:44 | tpw_rules | but i did remember that a GSoC student last year worked on some sort of improved pixel remapper. did their work ever get used?
|
| 17:44 | vup | I think Bertl can answer that one best
|
| 17:45 | tpw_rules | or i guess a more relevant question: do you want me to use any part of that or just focus on the VHDL one you already have? i did see they built a sensor data simulator which might be useful
|
| 17:47 | Bertl | well, the thing is, the student last year was tasked to improve and generalize the existing pixel remapper
|
| 17:48 | Bertl | and after a rather lengthy period of 'understanding' the pixel remapper, creating a test framework and trying to improve the existing solution, it was concluded that the current solution is pretty much optimal
|
| 17:48 | pani | joined the channel |
| 17:49 | Bertl | there were some tests on how to improve throughput by using more than two remappers, but that's about it
|
| 17:49 | tpw_rules | i'm a bit confused by how you wrote that. do you think they did a good job on their work and correctly concluded there's not much improvement to make?
|
| 17:50 | Bertl | that said, both the student and we learned a lot from that
|
| 17:50 | Bertl | yes, it was a successful GSoC project
|
| 17:51 | Bertl | it's not always about replacing existing stuff or adding more stuff, for us GSoC projects are often a way to evaluate things or try a few new ideas together with the students
|
| 17:52 | tpw_rules | okay cool. just the way you wrote it sounded to me a bit shaky
|
| 17:53 | vnksnkr | joined the channel |
| 17:53 | Bertl | probably a language problem, not a native english speaker ;)
|
| 17:54 | tpw_rules | so next was the debayering. the readme on the nmigen repo says you have a debayering example but i couldn't find it. you want to read the full 4K image data from the sensor and combine each group of 4 bayer pixels into one output pixel it seems?
|
| 17:54 | tpw_rules | rather than interpolate to four color pixels
|
| 17:55 | tpw_rules | do you guys use RGB internally or YUV etc?
|
| 17:56 | Bertl | the current setup is RGB based and the 4K to FullHD debayering is what we do for the preview
|
| 17:56 | Bertl | as the current HDMI outputs cannot handle much more than 1080p
|
| 17:56 | vup | tpw_rules: these are the debayering cores currently existing in the nmigen gateware: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66e4e36f21a11899ac6226d089db989c94e/naps/cores/video/debayer.py
|
| 17:57 | tpw_rules | and those don't change the resolution?
|
| 17:57 | vup | nope
|
| 17:58 | vup | The debayering + HDMI step is meant more as a first real-world test of the core than as a sophisticated part of the camera processing, though it may also be used as a FullHD preview
|
| 17:59 | tpw_rules | does an imagestream know its own resolution?
|
| 18:00 | vup | tpw_rules: In a way: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66e4e36f21a11899ac6226d089db989c94e/naps/cores/video/image_stream.py
|
| 18:00 | vup | it has a line_last and a frame_last, which (obviously) mark the end of a line / row and the end of a frame
|
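A small simulation-side sketch of how a consumer could recover the resolution from those flags; the tuple layout is invented for the example, only the line_last / frame_last semantics follow the description above:

```python
def measure_resolution(samples):
    """samples: an iterable of (line_last, frame_last) flag pairs, one per
    transferred pixel. Returns (width, height) of the first complete frame."""
    width = height = x = 0
    for line_last, frame_last in samples:
        x += 1
        if line_last:
            width = max(width, x)
            height += 1
            x = 0
        if frame_last:
            break
    return width, height
```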
| 18:01 | tpw_rules | if it's just a test, i guess you don't expect a fancy filter? to me "decimation" implies some sort of low pass filter
|
| 18:02 | vup | well yes, it's a low pass filter in the sense that one simply combines the neighbouring pixels of the bayer pattern into one
|
| 18:02 | vup | but maybe decimation is a bit misleading here
|
| 18:03 | tpw_rules | or do you just need something like (R, G, B) = (p[0, 0], (p[0, 1] + p[1, 0])/2, p[1, 1])
|
| 18:03 | vup | yep exactly that
|
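The same formula as a small numpy sketch over a full mosaic, assuming an RGGB pattern layout (the actual CFA order depends on the sensor configuration):

```python
import numpy as np

def decimating_debayer(raw):
    """raw: (H, W) bayer mosaic with even H and W, RGGB order assumed.
    Returns an (H/2, W/2, 3) RGB image: each 2x2 bayer cell collapses
    into one output pixel, with the two greens averaged."""
    r  = raw[0::2, 0::2].astype(np.uint16)
    g1 = raw[0::2, 1::2].astype(np.uint16)
    g2 = raw[1::2, 0::2].astype(np.uint16)
    b  = raw[1::2, 1::2].astype(np.uint16)
    return np.stack([r, (g1 + g2) // 2, b], axis=-1)
```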
| 18:04 | tpw_rules | ok cool
|
| 18:06 | tpw_rules | what do you guys have available for testing this stuff in simulation? how much access could i get to the real camera? does it have a sensor?
|
| 18:07 | vup | here is an example of how these blocks can be connected: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66/applets/camera.py; as you can see, currently the modeline of the hdmi output is not automatically determined from the input image stream, but needs to be manually specified. I think some stream negotiation stuff could be interesting, but that's later down the line.
|
| 18:07 | vup | tpw_rules: we can provide remote access to some cameras (with sensor), so you can try everything on real hardware
|
| 18:07 | tpw_rules | ok, so i would just be reimplementing that with my new blocks more or less?
|
| 18:08 | vup | tpw_rules: well not all new blocks, but yes, replacing some of the blocks with yours
|
| 18:08 | vup | for simulation, the existing vhdl gateware unfortunately does not have much in terms of simulation, but for the nmigen gateware there are a lot of smaller and bigger simulation tests
|
| 18:09 | vup | for the cores you write, it would be very nice to have similar tests as well
|
| 18:10 | vup | this for example is a test of a hispi input core (some other protocol used by some image sensors): https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66e4e36f21a11899ac6226d089db989c94e/naps/cores/hispi/hispi_rx_test.py
|
| 18:11 | tpw_rules | that lzma file is some recordings from the real sensor?
|
| 18:11 | vup | exactly
|
| 18:12 | tpw_rules | do you have any for the cmv12000 yet? is there a good way to test the synchronization system?
|
| 18:12 | tpw_rules | presumably it's hard because the adjustments are done in the xilinx block
|
| 18:12 | vup | the delay tap part will be hard to test
|
| 18:12 | vup | yeah
|
| 18:12 | vup | not sure we have some recordings for the cmv12000, maybe Bertl knows?
|
| 18:14 | Bertl | hmm, what kind of recording?
|
| 18:14 | vup | just the raw serial bitstream
|
| 18:14 | vup | without word alignment
|
| 18:15 | vup | Anyway, I think tests for the remapper or the debayering part will be much more interesting, especially if someone someday wants to try and improve / rework the remapper again
|
| 18:15 | Bertl | I do not think we ever did that
|
| 18:15 | tpw_rules | the previous student wrote some testing? they mentioned a sensor data simulator
|
| 18:16 | tpw_rules | what is the metric of optimality for the remapper by the way? logic use?
|
| 18:17 | tpw_rules | ok, there is an existing nmigen remapper. does it work?
|
| 18:18 | vup | tpw_rules: mostly bram use + logic use, while being able to run at the required clock speed
|
| 18:19 | vup | tpw_rules: the "existing" one is not much more than an empty skeleton
|
| 18:21 | Bertl | the metric depends on the use case, in general it will be FPGA footprint and performance (i.e. throughput)
|
| 18:25 | tpw_rules | how do you test that?
|
| 18:25 | tpw_rules | or is the test just to make sure improving it doesn't break it
|
| 18:29 | vup | the pnr tool (vivado in this case) tells you if you meet the timing constraints
|
| 18:30 | BAndiT1983|away | changed nick to: BAndiT1983
|
| 18:30 | tpw_rules | idk in my somewhat limited experience, fmax isn't really a proxy for your design quality
|
| 18:31 | vup | Ah, I thought you asked how you test that it has the required performance
|
| 18:31 | vup | As for the footprint, just comparing it to the existing remapper core should work, no?
|
| 18:33 | tpw_rules | is there a way to do that automatically? it's certainly not anything nmigen knows about
|
| 18:33 | vup | well not yet, but one could certainly parse the output of vivado; anuejn did something similar before, parsing the output of nextpnr
|
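A rough sketch of that kind of parsing; the row names and table layout are assumptions about the Vivado utilization report and vary between tool versions. A CI job could then fail the build when a commit pushes these numbers past a budget.

```python
import re

def parse_utilization(report_text):
    """Pull resource counts out of a Vivado utilization report.
    The row names below are assumptions, not a stable interface."""
    rows = {"Slice LUTs": "luts", "Slice Registers": "ffs",
            "Block RAM Tile": "brams"}
    usage = {}
    for row, key in rows.items():
        m = re.search(r"\|\s*" + re.escape(row) + r"\s*\|\s*(\d+)", report_text)
        if m:
            usage[key] = int(m.group(1))
    return usage
```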
| 18:35 | tpw_rules | hmm
|
| 18:39 | vup | we have a CI setup with vivado, so this is something we could simply check in CI on each commit
|
| 18:45 | markusengsner | left the channel |
| 19:00 | markusengsner | joined the channel |
| 19:01 | tpw_rules | so how exactly does the GSoC submission stuff work in detail? I see i have to have an application submitted by april 13
|
| 19:02 | tpw_rules | i've been reading some of the other proposals to get an idea of what should be included. do you guys review it before i submit? there's this stuff about how it's best to work with your mentor on it
|
| 19:07 | Bertl | it is advisable to have the mentors review any application, but it is not a requirement
|
| 19:08 | Bertl | after the application deadline, we will review them and request slots accordingly
|
| 19:09 | tpw_rules | slots?
|
| 19:09 | Bertl | some time later we get a certain number of slots assigned from google which we will then fill with students
|
| 19:11 | lambamansha | left the channel |
| 19:14 | vup | tpw_rules: google selects a certain number of students they sponsor for an organization, the number of "slots"
|
| 19:18 | pani | left the channel |
| 19:20 | futarisIRCcloud | joined the channel |
| 19:24 | Bertl | off for now ... bbl
|
| 19:24 | Bertl | changed nick to: Bertl_oO
|
| 19:52 | markusengsner | left the channel |
| 20:34 | pani | joined the channel |
| 20:38 | pani | left the channel |
| 20:38 | pani | joined the channel |
| 20:44 | vnksnkr | left the channel |
| 21:37 | markusengsner | joined the channel |
| 22:09 | pani | left the channel |
| 22:54 | pani | joined the channel |
| 22:58 | pani | left the channel |
| 22:58 | pani | joined the channel |
| 23:10 | markusengsner | left the channel |
| 23:14 | se6ast1an | off for today, good night
|
| 23:43 | pani | left the channel |
| 23:44 | pani | joined the channel |
| 23:48 | pani | left the channel |
| 23:48 | pani | joined the channel |
| 23:59 | BAndiT1983 | changed nick to: BAndiT1983|away
|
| 00:03 | pani | left the channel |
| 00:04 | pani | joined the channel |
| 00:08 | pani | left the channel |
| 00:08 | pani | joined the channel |
| 00:12 | lexano | left the channel |
| 00:24 | lexano | joined the channel |