#apertus IRC Channel Logs

2015/01/19

Timezone: UTC


00:00
Bertl
i.e. read the environment from sd and execute a specific 'axiomboot' or similar script/command?
00:02
Bertl
note that uboot brings a default environment which should be cleaned up anyway (i.e. only contain relevant data)
00:10
troy_s
danieel: Which display?
00:11
danieel
not sure which display the panel is, its lm300wq5
00:11
danieel
the one i like the most was in the first apple cinema display, lm300wq1
00:11
troy_s
danieel: I think that 30" was poop
00:11
danieel
and the HP LP3065 is also fine
00:12
troy_s
danieel: The 27" panel was the best in that series, which was the same in the Dell U2711 and the venerable HP DreamColor 27 from that same era.
00:12
troy_s
(Of course the HP outclasses everything in terms of control hence why it took up residence in every post house everywhere almost exclusively on the generic workstations)
00:14
danieel
the HP has 10/30 bit panel, the apple has just a 8/24bit one
00:16
danieel
we also got a U2412M .. but I don't like the LED backlight much.. it is quite cold in color rendition
00:21
troy_s
danieel: That isn't your choice.
00:21
troy_s
:)
00:21
troy_s
danieel: The backlight in sRGB mode must be fixed at D65.
00:22
troy_s
And of course is relative to your adapted white point in your vision field.
00:24
intracube
left the channel
00:28
intracube
joined the channel
00:30
skinkie
Bertl: I have no problem with 'defaulting' anything to what we want
00:30
skinkie
the sad thing is that the 'default' is obviously flashed by Xilinx
00:31
Bertl
huh?
00:31
skinkie
hence if we want the default to work 'out of the box' that means that the empty ramdisk is probably the most 'stable'
00:31
skinkie
I mean: in case we don't want to flash uboot ourselves, and use the uboot that is shipped
00:32
Bertl
so is our uboot executed or not?
00:32
skinkie
then the shipped uboot probably also has the ramdisk "dependency"
00:32
skinkie
on my board our uboot is executed with our configuration
00:32
Bertl
and on a default board not?
00:33
skinkie
on a default board the 'stock' xilinx is installed
00:33
Bertl
i.e. sdboot doesn't read the uboot from the sd card?
00:33
skinkie
i can't verify at this moment, but I can ask Sven, if i can check what happens on a completely vanilla device
00:34
skinkie
if it has exactly the same config as on github, it is basically not copying uenv.txt thus a ramdisk is required
00:34
Bertl
well, if I remove the sd card, uboot should not work anymore, right?
00:34
skinkie
if you remove the card, it will detect mmcinfo is not there, thus it will go for qspi
00:35
skinkie
so it is a very fundamental question, do we want to plug in a microSD card to any zynq and have it booted
00:35
Bertl
well, actually it does nothing on my board
00:35
Bertl
i.e. it just sits there without any console output
00:36
skinkie
you mean you never see a linux kernel at all?
00:36
Bertl
without sd card, no
00:37
skinkie
ah.. then you did flash embedded storage
00:37
Bertl
as I said, boot configuration is (jumper wise) boot from sd
00:37
skinkie
for me jumpers don't influence anything
00:37
Bertl
so I would presume the zynq lvl 0 bootloader tries to load from there
00:37
skinkie
it always boots from mmc if it is present
00:37
Bertl
you know you need to power cycle to detect jumper settings?
00:38
skinkie
i always pull the power prior to changing
00:38
Bertl
okay, so make sure sd boot is configured, then pull the card and cycle
00:39
Bertl
if something boots, something serious is wrong IMHO
00:40
Bertl
also maybe "mark" your uboot with a recognizable message/version number
00:40
Bertl
the one reporting when the SD card is in is U-Boot 2014.01 (Jul 04 2014 - 19:30:04)
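The boot order skinkie describes earlier (boot from mmc when `mmcinfo` succeeds, otherwise fall back to QSPI) could be captured in a u-boot environment along these lines. This is a hypothetical sketch only: the script names, load addresses, and image filenames are invented, not the shipped Xilinx or AXIOM configuration.

```shell
# hypothetical uEnv.txt sketch -- script names and addresses are illustrative
bootcmd=if mmcinfo; then run sdboot; else run qspiboot; fi
# load kernel + devicetree from the first FAT partition of the SD card
sdboot=fatload mmc 0 0x3000000 uImage && fatload mmc 0 0x2a00000 devicetree.dtb && bootm 0x3000000 - 0x2a00000
# fall back to an image stored in QSPI flash
qspiboot=sf probe 0 && sf read 0x3000000 0x100000 0x500000 && bootm 0x3000000
```

`mmcinfo`, `fatload`, `sf probe`/`sf read`, and `bootm` are standard u-boot commands; everything else here is an assumption.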
00:42
skinkie
you were right
00:42
skinkie
if you have it on "SD" mode
00:43
skinkie
it will only boot with a card and has a red light
00:44
troy_s
left the channel
00:44
Bertl
so problem solved?
00:44
Bertl
I think the default board has a bootloader in QSPI, but as that was not accessible on your kernel, I couldn't verify (yet)
00:47
skinkie
I think you are right, that if we boot from SD
00:47
skinkie
that we can control the initial uboot configuration and we should make our own in the first place
00:48
skinkie
(and prune the unreadable mess)
00:49
Bertl
great! please also add the missing hardware drivers where possible
00:50
Bertl
(most importantly xadc and qspi, but I also suspect missing second port sd/i2c/spi)
01:10
skinkie
xadc i already wrote down
01:10
skinkie
i am still 'confused' why a stock configuration doesn't include them
01:19
Bertl
no idea
01:32
fsteinel_
joined the channel
01:35
fsteinel
left the channel
01:50
troy_s
joined the channel
02:00
davidak1
left the channel
02:28
intracube
left the channel
03:57
troy_s
left the channel
05:08
Bertl
off to bed now ... have a good one everyone!
05:08
Bertl
changed nick to: Bertl_zZ
08:26
niemand
joined the channel
08:33
niemand
left the channel
09:55
se6astian|away
changed nick to: se6astian
09:56
se6astian
good morning
10:20
Bertl_zZ
changed nick to: Bertl
10:20
Bertl
morning folks!
11:37
davidak1
joined the channel
12:12
davidak1
left the channel
12:52
Bertl
comradekingu: received your buttons! thanks again! will forward them to se6astian for testing
13:07
se6astian
\o/
13:13
lab-bot
left the channel
13:16
se6astian
left the channel
13:16
philippej|away
left the channel
13:18
comradekingu
left the channel
13:18
Bertl
off for now ... bbl
13:18
Bertl
changed nick to: Bertl_oO
13:21
lab-bot
joined the channel
13:22
se6astian
joined the channel
13:23
philippej|away
joined the channel
13:23
philippej|away
changed nick to: philippej
13:37
lab-bot
left the channel
13:40
se6astian
left the channel
13:41
philippej
left the channel
13:42
lab-bot
joined the channel
13:43
se6astian
joined the channel
13:44
philippej|away
joined the channel
13:44
philippej|away
changed nick to: philippej
13:56
lab-bot
left the channel
13:59
se6astian
left the channel
13:59
philippej
left the channel
14:00
lab-bot
joined the channel
14:02
se6astian
joined the channel
14:03
philippej|away
joined the channel
14:03
philippej|away
changed nick to: philippej
14:17
lab-bot
left the channel
14:19
lab-bot
joined the channel
14:33
comradekingu
joined the channel
14:34
comradekingu
:)
14:47
se6astian
short server outage
14:48
se6astian
all back to normal now it seems
15:03
se6astian
gotta go
15:05
se6astian
changed nick to: se6astian|away
15:12
lab-bot
left the channel
15:12
lab-bot
joined the channel
15:13
intracube
joined the channel
15:14
fsteinel_
changed nick to: fsteinel
15:15
lab-bot
left the channel
15:16
lab-bot
joined the channel
16:28
dmjnova
left the channel
16:32
lab-bot
left the channel
16:35
lab-bot
joined the channel
16:37
dmjnova
joined the channel
16:39
comradekingu
left the channel
17:21
aombk2
joined the channel
17:24
Bertl_oO
changed nick to: Bertl
17:24
aombk
left the channel
17:24
Bertl
back now ...
18:06
davidak1
joined the channel
18:14
Bertl
off for a nap ... bbl
18:14
Bertl
changed nick to: Bertl_zZ
18:37
se6astian|away
changed nick to: se6astian
18:40
se6astian
back
19:25
niemand
joined the channel
19:33
fsteinel
left the channel
19:47
dmjnova
left the channel
19:49
troy_s
joined the channel
20:53
comradekingu
joined the channel
21:44
troy_s1
joined the channel
21:45
troy_s1
Greets all.
21:50
dmjnova
joined the channel
21:51
fsteinel
joined the channel
21:53
comradekingu
troy_s heia :)
21:53
troy_s1
comradekingu: How you doing?
21:53
comradekingu
im doing good, you?
21:53
comradekingu
just had some freshwater fish caught ice-fishing
21:56
comradekingu
https://www.kickstarter.com/projects/chillance/treadgaming-exercise-while-playing-games/posts/1110114 is what ive been up to over the weekend
21:57
troy_s
left the channel
21:57
troy_s1
changed nick to: troy_s
22:03
se6astian
hey troy_s I am drafting a highlight recovery concept illustration currently, would you be so kind to review it once finished?
22:03
troy_s
se6astian: Of course.
22:04
troy_s
se6astian: It's pretty sludgey territory though in my opinion. I'm sure there are some more heuristic based algorithms (looking at nearby values for example that match the fingerprint of the missing channel) that might get a closer estimate.
22:07
se6astian
I am just illustrating the obvious "take the other channels' values" concept for now
22:07
troy_s
Gotcha
22:07
se6astian
we can build more sophisticated algorithms on top once we understand what shortcomings this basic approach has
22:07
niemand
left the channel
22:23
Bertl_zZ
changed nick to: Bertl
22:23
Bertl
back now ...
22:37
se6astian
http://lab.apertus.org/file/data/r3ewkv2ydwmxruln2q4u/PHID-FILE-wykpl2bfjleutomrzeh6/Highlight-recovery-Theory-concept-01.jpg
22:44
se6astian
troy_s & Bertl please let me know if you have any feedback
22:44
troy_s
se6astian: Looking now.
22:44
se6astian
great
22:45
troy_s
se6astian: Out of the gate, a little bit confusing for me.
22:46
se6astian
ok :)
22:46
troy_s
se6astian: And a few English typos, not a big deal. (appart for example)
22:47
se6astian
ah thanks
22:48
troy_s
se6astian: It's quite an interesting thing, as I have recently been discussing the idea of scene referred transfer / tone curving to the display referred domain. It is interesting because of what is expected (an aesthetic emergent phenomenon)
22:49
troy_s
I'd make a _strong_ case that the whole "desaturation toward an achromatic" is largely learned from about 100 years of photographic imaging, where singular hues of colors would burn through the layers of emulsion. That is, no matter what the color, everything converges toward white. RGB displays end up having this facet by default thanks to them being tri-colored display referred devices and you can only increase intensity so far on a single channel before
22:49
se6astian
updated: http://lab.apertus.org/T244#3845
22:49
troy_s
se6astian: From _our_ vantage though, where scene referred idealized data is the key, the problem is more significant.
22:50
troy_s
That is, when missing a channel, there is _no_ way to identify what that triplet of data was; the scene data is forever lost.
22:50
troy_s
I think this becomes a bit of an issue for post production work.
22:50
troy_s
se6astian: Also, "white balanced" isn't quite an accurate image there.
22:51
troy_s
Because RGB channels are arbitrary.
22:51
troy_s
And R = G = B does not assert achromatic.
22:51
troy_s
(In some uh... 'well behaved' colorspaces it does, sRGB for example, but that is the nature of the primaries chosen.)
22:52
troy_s
Sense?
22:52
Bertl
personally I'm still not sure what I'm seeing there :)
22:52
comradekingu
How about being able to locate and size the cube yourself?
22:52
troy_s
(Agree, It took me a while to grasp the imaging.)
22:52
troy_s
(Largely because my mental models seem firmly planted in wells filling up)
22:52
troy_s
comradekingu: Explain?
22:52
Bertl
the first picture shows everything within the raw sensor range
22:52
se6astian
you mean each channel has its own coefficient, instead of just R = B it's R = Xb x B for recovery right?
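The per-channel coefficient idea ("R = Xb x B") can be sketched as follows. A minimal illustrative Python sketch, not the camera's actual pipeline: the clip level, the coefficient vector `k`, and the choice of reference channel are all assumptions.

```python
CLIP = 1.0  # normalized sensor full scale (assumed)

def recover_pixel(r, g, b, k=(1.0, 1.0, 1.0)):
    """Naive highlight recovery: a clipped channel is re-estimated from
    the brightest surviving channel scaled by a per-channel coefficient
    (the 'R = Xb x B' idea, generalized). Coefficients are illustrative."""
    ch = [r, g, b]
    clipped = [i for i, v in enumerate(ch) if v >= CLIP]
    intact = [i for i in range(3) if i not in clipped]
    if not clipped or not intact:
        return tuple(ch)  # nothing to recover, or everything is clipped
    ref = max(intact, key=lambda i: ch[i])  # brightest unclipped channel
    for i in clipped:
        ch[i] = max(ch[i], k[i] * ch[ref])  # never push below the clip value
    return tuple(ch)
```

As the discussion below notes, this is a guess, not recovered scene data; the sketch only makes the arithmetic of the proposal concrete.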
22:52
troy_s
comradekingu: And what cube?
22:53
troy_s
se6astian: In terms of display referred white?
22:53
comradekingu
to me it looked like setting white balance based on the most dynamic range peak
22:53
troy_s
comradekingu: It's not quite that easy.
22:53
se6astian
colorspace related yes
22:53
comradekingu
But there is a lot of computing involved to make sure the square is where its supposed to be
22:54
Bertl
the missing data seems to happen after the white balance, no?
22:54
troy_s
comradekingu: If you think of a colorspace as being weights in luminance, they sum to a unity in the display referred domain. So sRGB for example, has primaries of 0.2126729 0.7151522 0.0721750 luminance at D65 "neutral"
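The weighting troy_s quotes can be checked numerically. A small sketch using exactly those coefficients; it assumes linear-light sRGB values.

```python
# sRGB/Rec.709 luminance weights at D65, as quoted above
WEIGHTS = (0.2126729, 0.7151522, 0.0721750)

def luminance(r, g, b):
    """Display-referred relative luminance of a linear sRGB triplet:
    the channel weights sum to unity, so R = G = B = 1 gives Y = 1."""
    wr, wg, wb = WEIGHTS
    return wr * r + wg * g + wb * b
```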
22:54
comradekingu
What if i could place and size it myself. It would also help understanding, if i could see the livefeed as i did it
22:54
Bertl
so it is something introduced by the image pipeline
22:54
comradekingu
ooh
22:55
se6astian
Bertl: yes, currently in Alpha we simply do not white balance
22:55
troy_s
comradekingu: There is a _clear_ division between the scene referred domain (of which Bertl is speaking currently) and the display referred.
22:55
se6astian
the output is always raw
22:55
troy_s
There is no black nor white in the scene referred domain, and the camera is attempting to "model" that domain via a display referred sensor. (it has a min and max)
22:55
Bertl
so why would we 'throw away' sensor data?
22:55
se6astian
thats the beauty: we dont
22:56
Bertl
well, if we output the 'raw' then we don't either and nothing needs to be done. period.
22:56
troy_s
comradekingu: So in essence, there is no idea of white or black that the camera sees - that's actually a transform _after_ the collection that converts it into the display referred domain. (just as you and I could use a spot meter to measure our outside world, but our eyes will have a limit on the high and low. The spot meter is just 'data' and the rest is our transform)
22:56
se6astian
true, then it should happen in post processing
22:56
troy_s
Yep.
22:56
troy_s
And it is a challenge for certain in post.
22:56
se6astian
currently it doesnt because what we save is not really understood as "RAW"?
22:57
troy_s
Because that data isn't data.
22:57
troy_s
(as in its missing a third or two thirds of the signal)
22:57
Bertl
we don't output raw yet (on the alpha)
22:57
se6astian
not by definition
22:57
Bertl
only if you capture a still image
22:57
se6astian
but by virtue of the missing white balancing we do
22:58
Bertl
no white balance is white balance as well :)
22:58
Bertl
it's just an ugly balance :)
22:58
se6astian
true
22:58
troy_s
I think even for the 709 there should be a 3D "toward unity" sort of filmish LUT.
22:58
troy_s
For the onboard viewer.
22:58
comradekingu
left the channel
22:58
troy_s
Otherwise there are going to be some nasty color casts.
22:58
comradekingu
joined the channel
22:59
se6astian
but I suspect that post processing software does not recover highlights when it re-balances the whites but just crops off the data
22:59
Bertl
I think for preview a 3D lut is doable
22:59
Bertl
se6astian: then the post processing software should be adapted
22:59
se6astian
true, but we cant do that for most tools I am afraid :)
22:59
Bertl
the problem is this, when _we_ do this on the raw data, we are actually making it worse
23:00
se6astian
so I had the idea for a workaround :)
23:00
troy_s
Bertl: That would allow colors to converge like film, and eliminate casts. So when green is say, 95%, the 3D LUT yanks up the R and the B channel to converge more gracefully at unity for the display referred transform.
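A rough Python sketch of the transfer such a "toward unity" 3D LUT would encode: once the brightest channel passes a knee, the other channels are pulled toward it so all three converge at white instead of clipping with a cast. The knee position and the linear blend are illustrative choices, not a proposal for the actual LUT.

```python
def rolloff_toward_white(r, g, b, knee=0.8):
    """Illustrative highlight rolloff: above the knee, blend the lower
    channels toward the maximum so the triplet converges at white.
    knee=0.8 and the linear ramp are assumptions, not a real LUT design."""
    mx = max(r, g, b)
    if mx <= knee:
        return (r, g, b)  # below the knee: leave the pixel untouched
    t = min((mx - knee) / (1.0 - knee), 1.0)  # 0 at knee, 1 at full scale
    return tuple(c + t * (mx - c) for c in (r, g, b))
```

In a real pipeline this curve would be baked into a 3D LUT for the onboard preview rather than evaluated per pixel.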
23:00
Bertl
i.e. we throw away perfectly good data and replace it by false guesstimations
23:00
troy_s
Agree.
23:00
Bertl
we even reduce the bit depth in the process
23:01
Bertl
not something I would really want :)
23:02
Bertl
nevertheless, it is perfectly doable, but will give very strange side effects in the way it is described
23:02
troy_s
And in post, any transform should be in the scene referred domain... for example, if R is 0.8, B is 0.7, and G is pinned, then the estimate should come from nearby statistical Greens.
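The "nearby statistical greens" idea could look something like this sketch: estimate a clipped green from the G/R ratio of surrounding pixels whose green survived. The image representation, window radius, and median statistic are all assumptions for illustration.

```python
def estimate_clipped_green(img, x, y, clip=1.0, radius=2):
    """Estimate a clipped G value at (x, y) from the median G/R ratio of
    nearby pixels whose green is unclipped. img is a 2D list of (r, g, b)
    tuples; everything about this scheme is illustrative."""
    r0, g0, b0 = img[y][x]
    if g0 < clip:
        return g0  # green survived, nothing to recover
    ratios = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                r, g, b = img[ny][nx]
                if g < clip and r > 0:
                    ratios.append(g / r)
    if not ratios:
        return g0  # no usable neighbors: give up
    ratios.sort()
    return ratios[len(ratios) // 2] * r0  # median ratio scales this pixel's R
```

This is still a guess, as the surrounding discussion stresses; it just constrains the guess with local statistics instead of a global coefficient.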
23:02
Bertl
i.e. objects will change contrast and brightness depending on the rest of the scene
23:02
troy_s
Yep.
23:02
se6astian
the experiment is we write a tool/script that converts our unbalanced 1080p YCbCr 4:2:2 image to a UHD raw (DNG) image
23:02
troy_s
And totally fubar any and all colorspace transforms off the data.
23:03
Bertl
se6astian: work on the snapshots for now, those are raw and can be transformed in the way you describe it
23:03
se6astian
each pixel in 1080p contains R, G and B so we can fill a 2x2 bayer block by just taking the channel values
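That 1080p-to-UHD expansion can be sketched in a few lines. Purely illustrative: it assumes an RGGB pattern, duplicates the single G sample into both green sites, and writes no DNG metadata at all.

```python
def frame_to_mosaic(frame):
    """Expand an HxW list of (r, g, b) pixels into a 2Hx2W RGGB mosaic,
    i.e. each full-color pixel fills one 2x2 Bayer block. The RGGB layout
    and the duplicated G sample are assumptions of this sketch."""
    h, w = len(frame), len(frame[0])
    mosaic = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            r, g, b = frame[y][x]
            mosaic[2 * y][2 * x] = r          # R site
            mosaic[2 * y][2 * x + 1] = g      # G site
            mosaic[2 * y + 1][2 * x] = g      # G site (duplicated)
            mosaic[2 * y + 1][2 * x + 1] = b  # B site
    return mosaic
```

As Bertl points out right after, feeding this from the 4:2:2 feed is comparable to generating a "raw" from a jpeg; the sketch only shows the mechanical block fill.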
23:03
Bertl
do not use the 1080p 422 feed, as it already misses most of the data
23:04
se6astian
I think RAW development software does account for the highlight recovery process
23:04
Bertl
note that this is comparable to using a jpeg and generating a "raw" from it
23:04
Bertl
yes, you can do it, but it will be just wrong :)
23:04
se6astian
agreed, but the "wrong" stuff might be less visible than the positive effects
23:05
se6astian
its an experiment
23:08
comradekingu
i dont see the pure win, but its certainly interesting since its possible to go there
23:10
comradekingu
but wont it look unnatural when you try to represent it in a limited colourspace?
23:13
se6astian
we will see :)
23:13
troy_s
Well the larger issue is that there _is no data there_
23:13
troy_s
It's going to be a guess.
23:14
troy_s
And that causes all sorts of hell in trying to leverage the data in say, compositing.
23:14
troy_s
You simply _do not have any idea_ what say, the green channel was / should be.
23:14
troy_s
And as such, the chance of a guess being _way_ off is quite high.
23:14
troy_s
So you have color casts in every instance, and worse
23:15
troy_s
if you do the sort of blind and ignorant "desaturate and reduce" now you are screwing your scene referred values for compositing.
23:15
troy_s
They can never be properly slotted into the scene linear data slots.
23:16
troy_s
I personally believe there is a vast amount of more research to be done on optimizing the log curves for capture.
23:16
troy_s
Because that's where the most amount of data is going to be captured. Somehow getting a more versatile PLE mode out of CMOSIS or whatever for example.
23:18
comradekingu
is there a way to couple many axioms together to only use the middle part of the lens
23:18
troy_s
?
23:18
comradekingu
so where you would otherwise be on the curved side bit, you instead shoot that on another camera
23:19
Bertl
you'll get a different perspective unless you use a beam splitter
23:20
Bertl
but yes, you can synchronize several AXIOMs
23:20
comradekingu
how do you know the chromatic aberration is uniform?
23:20
dmjnova
left the channel
23:20
troy_s
It's rather common to shoot several cameras pointing in different directions and merge them using unwarps and such for plate work.
23:20
troy_s
comradekingu: It won't be.
23:21
troy_s
You'd have to peel it apart into planes and non-uniformly scale them if you have significant amounts.
23:21
troy_s
(although in most instances it is quite uniform in my experience.)
23:22
Bertl
IMHO it would be simpler to use a bigger lens
23:23
comradekingu
yes, but that isnt as cool
23:23
troy_s
comradekingu: It is _extremely_ common to use say, three to nine cameras for plate generation these days.
23:24
comradekingu
if you can also do 3d and increase the resolution a bit it beats the bigger lens
23:24
Bertl
troy_s: plate work is?
23:25
troy_s
Bertl: Mega car chase with a robot car that smashes another real-world car and is dissipated with a laser. Where that real-world car was and gets dissipated needs to have background plates so that it can be removed.
23:26
comradekingu
and you could do sweetspot focus on each camera
23:26
troy_s
Bertl: Or Mega car comes down and lands and drives along a road. That road needs to be shot at large density so that it can be mapped to 3D and used to generate camera motion and 3D etc.
23:26
Bertl
ah, okay, got it
23:28
comradekingu
i imagine that gets easier to do if you can have it preconfigured to shoot like that, where the cameras communicate and notify you if its off
23:30
Bertl
I think a lot will be possible, as the software is easily accessible and thus can be modified
23:30
troy_s
left the channel
23:34
troy_s
joined the channel
23:39
dmjnova
joined the channel
23:42
troy_s
left the channel
23:43
troy_s
joined the channel
23:48
se6astian
time for bed
23:49
se6astian
changed nick to: se6astian|away
23:50
dmjnova
left the channel