#apertus IRC Channel Logs

2021/04/03

Timezone: UTC


00:18
pani
left the channel
04:25
lambamansha
joined the channel
04:34
pani
joined the channel
04:38
pani
left the channel
04:38
pani
joined the channel
04:41
BAndiT1983
changed nick to: BAndiT1983|away
05:03
pani
left the channel
05:04
pani
joined the channel
05:13
pani
left the channel
05:13
pani
joined the channel
06:00
mumptai
joined the channel
06:07
lambamansha
left the channel
06:18
pani
left the channel
06:44
lambamansha
joined the channel
07:18
Bertl
off to bed now ... have a good one everyone!
07:18
Bertl
changed nick to: Bertl_zZ
07:28
BAndiT1983|away
changed nick to: BAndiT1983
07:30
lambamansha
left the channel
09:14
pani
joined the channel
09:18
pani
left the channel
09:18
pani
joined the channel
09:29
lambamansha
joined the channel
09:49
pani
left the channel
09:49
pani
joined the channel
09:53
pani
left the channel
09:53
pani
joined the channel
10:04
BAndiT1983
changed nick to: BAndiT1983|away
10:44
pani
left the channel
10:46
BAndiT1983|away
changed nick to: BAndiT1983
11:23
intrac
left the channel
11:26
markusengsner
joined the channel
11:36
intrac_
joined the channel
11:49
mumptai
left the channel
12:45
mumptai
joined the channel
12:53
se6ast1an
good day
12:53
BAndiT1983
hu
12:53
BAndiT1983
*hi
13:10
markusengsner
left the channel
13:24
lambamansha
left the channel
13:29
pani
joined the channel
13:33
pani
left the channel
13:33
pani
joined the channel
13:46
markusengsner
joined the channel
14:35
tpw_rules
vup: do you have some time to talk in about an hour?
14:35
vup
sure
14:53
draciel
joined the channel
15:02
lambamansha
joined the channel
15:02
draciel
Hi
15:03
draciel
left the channel
15:03
pani
left the channel
15:04
pani
joined the channel
15:08
draciel
joined the channel
15:08
pani
left the channel
15:08
pani
joined the channel
15:09
draciel
left the channel
15:10
mumptai
left the channel
15:27
lambamansha
left the channel
15:34
Bertl_zZ
changed nick to: Bertl
15:34
Bertl
morning folks!
15:34
BAndiT1983
hi
15:46
tpw_rules
vup: so like i said i was interested in the T1220 nmigen gateware for the axiom beta task
15:46
vup
yes
15:47
tpw_rules
and i had a pile of questions about the goals to help me understand before i write a proposal
15:47
vup
sure, I'll try to answer them
15:50
vup
tpw_rules: so what questions do you have?
15:50
tpw_rules
do you have an existing PHY in VHDL for the sensor? i'm a little unsure of what you mean by the bit and word alignment; is that that the PHY will sync to the datastream appropriately?
15:51
vup
yes, there is an existing phy in VHDL
15:51
vup
the existing VHDL stuff can be found here: https://github.com/apertus-open-source-cinema/axiom-firmware/tree/45cca29/peripherals/soc_main
15:52
tpw_rules
the sensor datasheet mentions the synchronization training to correct for skew, i would have to implement that?
15:52
vup
the existing phy is mainly cmv_pll.vhd + cmv_serdes.vhd combined in top.vhd
15:53
vup
tpw_rules: yes, so the synchronization training is basically what is meant by the bit and word alignment
15:53
vup
bit alignment means, figuring out the correct delay taps for the delay element of the fpga to sample the data at the right point
15:54
vup
and word alignment is then just figuring out the first bit in a word in the bit serial stream
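[Editor's note: a minimal sketch of the word alignment vup describes — rotating ("bit-slipping") the deserialized stream until a known training word appears. The 12-bit width and the training value `0x2AB` are hypothetical; the real CMV12000 training word is register-controlled and not taken from the log.]

```python
# Editor's sketch (not project code): word alignment by bit-slip search.
# WORD_BITS and TRAIN are assumptions for illustration only.

WORD_BITS = 12
TRAIN = 0x2AB  # hypothetical training word

def deserialize(bits, offset, word_bits=WORD_BITS):
    """Group a serial bit list into words, starting at `offset`."""
    words = []
    for i in range(offset, len(bits) - word_bits + 1, word_bits):
        w = 0
        for b in bits[i:i + word_bits]:
            w = (w << 1) | b
        words.append(w)
    return words

def find_word_alignment(bits):
    """Try every bit-slip position until the stream decodes to TRAIN."""
    for slip in range(WORD_BITS):
        words = deserialize(bits, slip)
        if words and all(w == TRAIN for w in words):
            return slip
    return None

# serialize TRAIN a few times, prepend 5 junk bits to emulate misalignment
train_bits = [(TRAIN >> (WORD_BITS - 1 - i)) & 1 for i in range(WORD_BITS)]
stream = [1, 0, 1, 1, 0] + train_bits * 4
print(find_word_alignment(stream))
```

In hardware the same search is done with the serdes block's bit-slip port instead of software rotation, but the control loop is the same shape.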
15:55
tpw_rules
ok it looks like all that logic is in top.vhd
15:56
tpw_rules
i don't see much in those other files except vendor module instantiations
15:56
vup
yeah
15:56
vup
the bit and word alignment is currently driven by a program running under linux on the arm cores connected to the fpga
15:56
vup
(the zynq fpga we are using is a combination of an fpga and two arm cores, connected over axi)
15:56
tpw_rules
and you'd like to move that down to gateware?
15:57
vup
this is the code driving the bit and word alignment: https://github.com/apertus-open-source-cinema/axiom-firmware/tree/2abfb84/software/sensor_tools/train
15:58
vup
tpw_rules: not necessarily, it's fine if it's CPU-driven at first as well; I think moving it to the gateware only makes sense if there is enough time in the end
15:58
tpw_rules
so i guess that more "with bit and word alignment" is "with the appropriate registers for that algorithm"?
15:59
vup
well, it's more a hint that at this data rate it will be necessary to do that (and the serdes IP won't do it on its own); the implementation does not have to be done in gateware, but it has to be done
16:00
vup
you can of course try and reuse the existing training software, but you can also write your own
16:00
BAndiT1983
changed nick to: BAndiT1983|away
16:01
tpw_rules
as someone who doesn't know much about high speed serdes, how often does training need to occur? the delay is a physical thing, so it should be more or less fixed per camera right? maybe it changes if you unplug and reinsert the sensor
16:02
vup
currently we do it once at camera boot
16:02
tpw_rules
ok. i actually just saw a section in the datasheet that says it is in fact temperature dependent. but it suggests also it's designed to be done while the sensor is operating
16:02
vup
while the bit alignment is probably constant for a given camera, it can vary for example with temperature; also the word alignment has to be done every time
16:03
vup
tpw_rules: yeah, long term one goal is to continuously monitor the link (as it also transmits the training pattern during normal readout), and retrain as needed
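[Editor's note: a sketch of the continuous-monitoring policy vup mentions — watch the training pattern that the sensor keeps transmitting during normal readout and retrain after repeated mismatches. The threshold and the training value are assumptions, not project specifics.]

```python
# Editor's sketch: retrain policy for a monitored link. Purely illustrative;
# TRAIN and BAD_FRAMES_LIMIT are assumed values.

TRAIN = 0x2AB          # hypothetical expected training word
BAD_FRAMES_LIMIT = 3   # consecutive bad frames before retraining

class LinkMonitor:
    def __init__(self):
        self.bad_streak = 0
        self.retrains = 0

    def observe_frame(self, control_word):
        """Feed the training word observed on the control channel each frame."""
        if control_word == TRAIN:
            self.bad_streak = 0
        else:
            self.bad_streak += 1
            if self.bad_streak >= BAD_FRAMES_LIMIT:
                self.retrains += 1   # would re-run bit/word alignment here
                self.bad_streak = 0

mon = LinkMonitor()
for w in [TRAIN, TRAIN, 0x123, 0x123, TRAIN, 0x123, 0x123, 0x123]:
    mon.observe_frame(w)
print(mon.retrains)
```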
16:03
lambamansha
joined the channel
16:05
tpw_rules
okay. the datasheet also says the maximum total skew is around 4560ps. that's several clock cycles, can the serdes delay taps accommodate that, or have you compensated with routing and/or flip flop delays somehow?
16:05
vup
we currently have a framework to embed python snippets as part of the gateware that can read / write registers on the fpga and then be run on the arm cores, there is an example for another link, as to how link training is implemented: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/a81b281c4dec08750d11a0aa4cbcee0ceb9ac45d/naps/cores/plugin_module_streamer/rx.py#L151
16:07
tpw_rules
ah, that's cool. if a bit mysterious :)
16:09
vup
tpw_rules: anything bigger than a bit clock cycle can be compensated using flip-flops or the bit-slip functionality of the serdes blocks (preferably the bit-slip functionality, as that does not cost extra luts / ffs)
16:10
vup
the general idea is to use the delay element to sample in the middle of the eye and then align the bits relative to each other using the bit-slip
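[Editor's note: the "sample in the middle of the eye" idea above can be sketched as a tap scan — try each delay tap, record whether the training pattern decodes cleanly, then pick the centre of the widest stable window. The tap count here is hypothetical.]

```python
# Editor's sketch: choosing a delay tap by scanning for the widest stable
# window ("eye") and picking its middle. stability[i] = True means tap i
# sampled the training pattern without errors; 32 taps is an assumption.

def best_delay_tap(stability):
    """Return the centre tap of the longest run of stable taps."""
    best_start, best_len = 0, 0
    run_start, run_len = 0, 0
    for i, ok in enumerate(stability):
        if ok:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    if best_len == 0:
        return None
    return best_start + best_len // 2

# 32 hypothetical taps; a stable window from tap 10 to tap 21
scan = [False] * 10 + [True] * 12 + [False] * 10
print(best_delay_tap(scan))
```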
16:13
tpw_rules
okay. i guess it doesn't matter to the bit slip block if the eye is sampled at 1% into the clock cycle time vs 99%?
16:14
vup
correct, the bit slip block works using the bit clock you provide to the serdes ip block,
16:15
vup
(just to be clear, we are not doing clock recovery here, this is a sink synchronous protocol, we are providing the clock to the sensor and the sensor sends us the data clocked by the clock we provide)
16:16
tpw_rules
yeah, that makes sense
16:17
tpw_rules
it looks like the pixel remapper should be a relatively simple task on top of that
16:18
tpw_rules
do you use 32 or 64 channels from the sensor?
16:18
vup
32 currently
16:18
vup
the remapper will mostly be a translation of the existing VHDL one, with hopefully some simplification using the expanded meta programming possibilities of python / nmigen
16:19
tpw_rules
okay, that's what i figured
16:20
vup
a general plus would be if the core would in theory work with 2 / 4 / 8 / 16 / 32 / 64 channels
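[Editor's note: to illustrate the 2/4/8/16/32/64-channel parameterisation vup asks for, here is a toy channel-to-column map. It assumes a plain stripe layout, which is NOT the CMV12000's actual burst order (that is what the real remapper handles); only the parameterisation idea is the point.]

```python
# Editor's sketch: parametric channel-to-column map for an N-channel readout.
# Stripe layout is an assumption made for illustration.

SENSOR_WIDTH = 4096

def column_of(channel, index, n_channels):
    """Column carried by `channel` at per-channel pixel `index`."""
    assert n_channels in (2, 4, 8, 16, 32, 64)
    stripe = SENSOR_WIDTH // n_channels
    assert 0 <= index < stripe
    return channel * stripe + index

# every column is produced exactly once, for any supported channel count
for n in (2, 4, 8, 16, 32, 64):
    cols = {column_of(c, i, n) for c in range(n) for i in range(SENSOR_WIDTH // n)}
    assert cols == set(range(SENSOR_WIDTH))
print("ok")
```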
16:21
tpw_rules
it looks at some point like you end up reading two lines at once. can the core handle that?
16:21
vup
what do you mean by that?
16:22
tpw_rules
the datasheet says that in 64 channel mode even lines appear on the top outputs and odd lines appear on the bottom.
16:22
vup
yes
16:23
tpw_rules
but it looks like in the comments in the remapper that it only expects to have one line in memory at a time?
16:24
vup
two remappers get used
16:24
Bertl
there are always two lines which get processed at the same time
16:26
tpw_rules
then what is the need for the remapper itself to handle 64 channels? an 8192 pixel wide sensor?
16:28
Bertl
currently the 'remapper' (i.e. the two remappers) only handle 32 channels
16:28
vup
sorry that was unclear, I meant the whole core as in the combination of PHYs + remapper cores
16:30
tpw_rules
oh okay.
16:31
Bertl
note that it makes sense to have less than 64 channels for data transfer and that the sensor can do any 2^n down to 2 IIRC
16:31
Bertl
the frame size is independent from this, it is just a reduction of bandwidth between sensor and FPGA
16:33
tpw_rules
yeah i got that. we can nail down the exact details once the project starts
16:33
tpw_rules
so the next deal is this memory mapping SPI driver? it looks like there is SPI logic on the sensor for configuration. where is it attached? presumably as GPIO to the ARM if the bitbang driver will work
16:34
vup
tpw_rules: its gpio to the fpga
16:34
vup
but you can for example pass that through using the mmio gpio driver
16:35
tpw_rules
ok. are there any particular performance requirements or do you just need something that works? it seems like it would be better to just memory map the pins to economize fpga space
16:36
vup
we are not that strapped on fpga space, this is only control registers, so performance requirements are quite low
16:36
Bertl
there is an SPI peripheral in the Zynq SoC, so that could be used here and would probably reduce the FPGA footprint to a few wires
16:36
vup
Bertl: does that have similar problems like the i2c peripheral, or does that one actually work well?
16:37
vup
(this is an example for a bitbanging driver for i2c: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66/naps/cores/peripherals/bitbang_i2c.py)
16:37
Bertl
what problems do you see with I2C?
16:37
vup
It doesn't have glitch filters, so you need to implement those yourself
16:38
vup
otherwise it tends to lock up
16:38
vup
at least that was my experience, last I used it
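[Editor's note: the glitch filter vup says the Zynq i2c peripheral lacks is essentially a persistence filter — the filtered line only changes state after several consecutive identical raw samples. A behavioural sketch, with the depth chosen arbitrarily:]

```python
# Editor's sketch: a persistence-style glitch filter for an i2c line.
# DEPTH = 3 is an assumption; a gateware version would be a small FSM
# sampling the pad each clock.

DEPTH = 3

def glitch_filter(samples, depth=DEPTH):
    """Return the filtered level for each raw sample of an input line."""
    level = samples[0] if samples else 1
    streak = 0
    out = []
    for s in samples:
        if s == level:
            streak = 0
        else:
            streak += 1
            if streak >= depth:
                level = s
                streak = 0
        out.append(level)
    return out

# a 1-sample spike (glitch) is suppressed; a real 3-sample edge passes
raw = [1, 1, 0, 1, 1, 0, 0, 0, 0]
print(glitch_filter(raw))
```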
16:38
tpw_rules
out of curiosity, how does the linux code access the i2c peripheral then?
16:38
vup
tpw_rules: with the code I linked?
16:38
tpw_rules
yeah
16:38
Bertl
vup: hmm, we are using the I2C without any filters and it works just fine ...
16:39
Bertl
do not remember that we had any issues or lockups there
16:39
vup
tpw_rules: that creates a proper i2c device on the linux side, so just accessing `/dev/i2c-n`
16:39
vup
Bertl: hmm so maybe thats another case of the ar0330 i2c being a bit wonky
16:39
tpw_rules
oh, yeah ok
16:39
Bertl
vup: yes, that might be possible, what pullups do you use?
16:40
Bertl
might also be a problem of noise introduced by unfortunate routing :)
16:41
vup
Bertl: I think 100k, as 10k did not work (if you recall, the sensor was not able to pull scl / sda down to GND)
16:41
Bertl
ah, yes, well, with 100k the noise on the wires must be terrible
16:42
Bertl
so yeah, I can imagine that this will result in all kind of issues including glitches
16:43
vup
well the bitbanging driver has been working quite well so far
16:43
Bertl
yes, no doubt the issues can be mitigated
16:43
vup
tpw_rules: also I don't want to interrupt you, so if you have more questions, just ask
16:43
pani
left the channel
16:44
tpw_rules
oh, i'm not feeling interrupted. just thinking
16:44
tpw_rules
but i did remember that a GSoC student last year worked on some sort of improved pixel remapper. did their work ever get used?
16:44
vup
I think Bertl can answer that one best
16:45
tpw_rules
or i guess a more relevant question: do you want me to use any part of that or just focus on the VHDL one you already have? i did see they built a sensor data simulator which might be useful
16:47
Bertl
well, the thing is, the student last year was tasked to improve and generalize the existing pixel remapper
16:48
Bertl
and after a rather lengthy period of 'understanding' the pixel remapper, creating a test framework and trying to improve the existing solution, it was concluded that the current solution is pretty much optimal
16:48
pani
joined the channel
16:49
Bertl
there were some tests on how to improve throughput, by using more than two remappers but that's about it
16:49
tpw_rules
i'm a bit confused by how you wrote that. do you think they did a good job on their work and correctly concluded there's not much improvement to make?
16:50
Bertl
that said, both the student and we learned a lot from that
16:50
Bertl
yes, it was a successful GSoC project
16:51
Bertl
it's not always about replacing existing stuff or adding more stuff, for us GSoC projects are often a way to evaluate things or try a few new ideas together with the students
16:52
tpw_rules
okay cool. just the way you wrote it sounded to me a bit shaky
16:53
vnksnkr
joined the channel
16:53
Bertl
probably a language problem, not a native english speaker ;)
16:54
tpw_rules
so next was the debayering. the readme on the nmigen repo says you have a debayering example but i couldn't find it. you want to read the full 4K image data from the sensor and combine each 4 bayer pixels into one output pixel it seems?
16:54
tpw_rules
rather than interpolate to four color pixels
16:55
tpw_rules
do you guys use RGB internally or YUV etc?
16:56
Bertl
the current setup is RGB based and the 4K to FullHD debayering is what we do for the preview
16:56
Bertl
as the current HDMI outputs cannot handle much more than 1080p
16:56
vup
tpw_rules: these are the debayering cores currently existing in the nmigen gateware: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66e4e36f21a11899ac6226d089db989c94e/naps/cores/video/debayer.py
16:57
tpw_rules
and those don't change the resolution?
16:57
vup
nope
16:58
vup
The debayering + HDMI step is more meant as a first real-world test of the core than as a sophisticated part of the camera processing, and maybe being used as a fullHD preview
16:59
tpw_rules
does an imagestream know its own resolution?
17:00
vup
tpw_rules: In a way: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66e4e36f21a11899ac6226d089db989c94e/naps/cores/video/image_stream.py
17:00
vup
it has a line_last and a frame_last, which (obviously) mark the end of a line / row and the end of a frame
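[Editor's note: vup's description implies resolution can be recovered from the stream markers alone. A sketch of that, modelling pixels as `(value, line_last, frame_last)` tuples — the tuple form is an assumption, not the ImageStream API:]

```python
# Editor's sketch: recovering width/height from line_last / frame_last
# markers, as described for the ImageStream interface above.

def measure_resolution(pixels):
    """Return (width, height) of the first complete frame in the stream."""
    width = None
    height = 0
    count = 0
    for value, line_last, frame_last in pixels:
        count += 1
        if line_last:
            width = count if width is None else width
            height += 1
            count = 0
        if frame_last:
            return width, height
    return None

# a tiny hypothetical 4x3 frame
frame = []
for row in range(3):
    for col in range(4):
        frame.append((0, col == 3, col == 3 and row == 2))
print(measure_resolution(frame))
```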
17:01
tpw_rules
if it's just a test, i guess you don't expect a fancy filter? to me "decimation" implies some sort of low pass filter
17:02
vup
well yes, it's a low pass filter in the sense that one simply combines the neighbouring pixels of the bayer pattern into one
17:02
vup
but maybe decimation is a bit misleading here
17:03
tpw_rules
or do you just need something like (R, G, B) = (p[0, 0], (p[0, 1] + p[1, 0])/2, p[1, 1])
17:03
vup
yep exactly that
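[Editor's note: the formula agreed above — (R, G, B) = (p[0,0], (p[0,1] + p[1,0])/2, p[1,1]) — as a behavioural sketch. It assumes an RGGB cell orientation, which the log does not specify:]

```python
# Editor's sketch of the decimating debayer: each 2x2 Bayer cell becomes
# one RGB pixel, halving resolution. RGGB orientation is an assumption.

def debayer_decimate(img):
    """img: 2D list of raw values with even dimensions; returns half-res RGB."""
    out = []
    for y in range(0, len(img), 2):
        row = []
        for x in range(0, len(img[0]), 2):
            r = img[y][x]
            g = (img[y][x + 1] + img[y + 1][x]) / 2
            b = img[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

raw = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
]
print(debayer_decimate(raw))
```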
17:04
tpw_rules
ok cool
17:06
tpw_rules
what do you guys have available for testing this stuff in simulation? how much access could i get to the real camera? does it have a sensor?
17:07
vup
here is an example of how these blocks can be connected: https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66/applets/camera.py, you see, currently for example the modeline of the hdmi output is not automatically determined from the input image stream, but needs to be manually specified. I think some stream negotiation stuff could be interesting, but thats later down the line.
17:07
vup
tpw_rules: we can provide remote access to some cameras (with sensor), so you can try everything on real hardware
17:07
tpw_rules
ok, so i would just be reimplementing that with my new blocks more or less?
17:08
vup
tpw_rules: well not all new blocks, but yes, replacing some of the blocks with yours
17:08
vup
for simulation, the existing vhdl gateware unfortunately does not have much in terms of simulation, but for the nmigen gateware there are a lot of smaller and bigger simulation tests
17:09
vup
for the cores you write, it would be very nice to have similar tests as well
17:10
vup
this for example is a test of a hispi input core (some other protocol used by some image sensors): https://github.com/apertus-open-source-cinema/nmigen-gateware/blob/bd60e66e4e36f21a11899ac6226d089db989c94e/naps/cores/hispi/hispi_rx_test.py,
17:11
tpw_rules
that lzma file is some recordings from the real sensor?
17:11
vup
exactly
17:12
tpw_rules
do you have any for the cmv12000 yet? is there a good way to test the synchronization system?
17:12
tpw_rules
presumably it's hard because the adjustments are done in the xilinx block
17:12
vup
the delay tap part will be hard to test
17:12
vup
yeah
17:12
vup
not sure we have some recordings for the cmv12000, maybe Bertl knows?
17:14
Bertl
hmm, what kind of recording?
17:14
vup
just the raw serial bitstream
17:14
vup
without word alignment
17:15
vup
Anyways, I think tests for the remapper or the debayering part will be much more interesting, especially if someone someday wants to try and improve / rework the remapper again
17:15
Bertl
I do not think we ever did that
17:15
tpw_rules
the previous student wrote some testing? they mentioned a sensor data simulator
17:16
tpw_rules
what is the metric of optimality for the remapper by the way? logic use?
17:17
tpw_rules
ok there does exist an existing nmigen remapper. does it work?
17:18
vup
tpw_rules: mostly bram use + logic use, while being able to run at the required clock speed
17:19
vup
tpw_rules: the "existing" one is not much more than an empty skeleton
17:21
Bertl
the metric depends on the use case, in general it will be FPGA footprint and performance (i.e. throughput)
17:25
tpw_rules
how do you test that?
17:25
tpw_rules
or is the test just to make sure improving it doesn't break it
17:29
vup
the pnr tool (vivado in this case) tells you if you meet the timing constraints
17:30
BAndiT1983|away
changed nick to: BAndiT1983
17:30
tpw_rules
idk in my somewhat limited experience, fmax isn't really a proxy for your design quality
17:31
vup
Ah, I thought you asked how you test that it has the required performance
17:31
vup
As for the footprint, just comparing it to the existing remapper core should work, no?
17:33
tpw_rules
is there a way to do that automatically? it's certainly not anything nmigen knows about
17:33
vup
well not yet, but one could certainly parse the output of vivado; anuejn did something similar before, parsing the output of nextpnr
17:35
tpw_rules
hmm
17:39
vup
we have a CI setup with vivado, so this is something we could simply check in the CI on each commit
17:45
markusengsner
left the channel
18:00
markusengsner
joined the channel
18:01
tpw_rules
so how exactly does the GSoC submission stuff work in detail? I see i have to have an application submitted by april 13
18:02
tpw_rules
i've been reading some of the other proposals to get an idea of what should be included. do you guys review it before i submit? there's this stuff about how it's best to work with your mentor on it
18:07
Bertl
it is advisable to have the mentors review any application, but it is not a requirement
18:08
Bertl
after the application deadline, we will review them and request slots accordingly
18:09
tpw_rules
slots?
18:09
Bertl
some time later we get a certain number of slots assigned from google which we will then fill with students
18:11
lambamansha
left the channel
18:14
vup
tpw_rules: google selects a certain number of students they sponsor for an organization, the number of "slots"
18:18
pani
left the channel
18:20
futarisIRCcloud
joined the channel
18:24
Bertl
off for now ... bbl
18:24
Bertl
changed nick to: Bertl_oO
18:52
markusengsner
left the channel
19:34
pani
joined the channel
19:38
pani
left the channel
19:38
pani
joined the channel
19:44
vnksnkr
left the channel
20:37
markusengsner
joined the channel
21:09
pani
left the channel
21:54
pani
joined the channel
21:58
pani
left the channel
21:58
pani
joined the channel
22:10
markusengsner
left the channel
22:14
se6ast1an
off for today, good night
22:43
pani
left the channel
22:44
pani
joined the channel
22:48
pani
left the channel
22:48
pani
joined the channel
22:59
BAndiT1983
changed nick to: BAndiT1983|away
23:03
pani
left the channel
23:04
pani
joined the channel
23:08
pani
left the channel
23:08
pani
joined the channel
23:12
lexano
left the channel
23:24
lexano
joined the channel