| 23:00 | Bertl | i.e. read the environment from sd and execute a specific 'axiomboot' or similar script/command?
|
| 23:02 | Bertl | note that uboot brings a default environment which should be cleaned up anyway (i.e. only contain relevant data)
|
| 23:10 | troy_s | danieel: Which display?
|
| 23:11 | danieel | not sure which display the panel is, it's an lm300wq5
|
| 23:11 | danieel | the one i like the most was in the first apple cinema display, lm300wq1
|
| 23:11 | troy_s | danieel: I think that 30" was poop
|
| 23:11 | danieel | and the HP LP3065 is also fine
|
| 23:12 | troy_s | danieel: The 27" panel was the best in that series, which was the same in the Dell U2711 and the venerable HP DreamColor 27 from that same era.
|
| 23:12 | troy_s | (Of course the HP outclasses everything in terms of control hence why it took up residence in every post house everywhere almost exclusively on the generic workstations)
|
| 23:14 | danieel | the HP has a 10/30-bit panel, the apple has just an 8/24-bit one
|
| 23:16 | danieel | we also got a U2412M .. but I don't like the LED backlight too much.. it is quite cold in color rendition
|
| 23:21 | troy_s | danieel: That isn't your choice.
|
| 23:21 | troy_s | :)
|
| 23:21 | troy_s | danieel: The backlight in sRGB mode must be fixed at D65.
|
| 23:22 | troy_s | And of course it is relative to your adapted white point in your vision field.
|
| 23:24 | intracube | left the channel |
| 23:28 | intracube | joined the channel |
| 23:30 | skinkie | Bertl: I have no problem with 'defaulting' anything to what we want
|
| 23:30 | skinkie | the sad thing is that the 'default' is obviously flashed by Xilinx
|
| 23:31 | Bertl | huh?
|
| 23:31 | skinkie | hence if we want the default to work 'out of the box' that means that the empty ramdisk is probably the most 'stable'
|
| 23:31 | skinkie | I mean: in case we don't want to flash uboot ourselves, and use the uboot that is shipped
|
| 23:32 | Bertl | so is our uboot executed or not?
|
| 23:32 | skinkie | then the shipped uboot probably also has the ramdisk "dependency"
|
| 23:32 | skinkie | on my board our uboot is executed with our configuration
|
| 23:32 | Bertl | and on a default board not?
|
| 23:33 | skinkie | on a default board the 'stock' xilinx is installed
|
| 23:33 | Bertl | i.e. sdboot doesn't read the uboot from the sd card?
|
| 23:33 | skinkie | i can't verify at this moment, but I can ask Sven, if i can check what happens on a completely vanilla device
|
| 23:34 | skinkie | if it has exactly the same config as on github, it is basically not copying uenv.txt thus a ramdisk is required
|
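A minimal sketch of what a cleaned-up uEnv.txt with an 'axiomboot'-style command could look like. The script name, file names and load addresses below are assumptions for illustration only, not the shipped Xilinx or apertus configuration:

    # hypothetical uEnv.txt sketch - every name and address here is an assumption
    bootargs=console=ttyPS0,115200 root=/dev/mmcblk0p2 rw rootwait
    loadkernel=fatload mmc 0 0x3000000 uImage
    loaddtb=fatload mmc 0 0x2A00000 devicetree.dtb
    axiomboot=run loadkernel && run loaddtb && bootm 0x3000000 - 0x2A00000
    uenvcmd=run axiomboot

The stock Zynq U-Boot environment typically looks for uEnv.txt on the first FAT partition of the SD card and runs uenvcmd if it is defined, which is the hook an 'axiomboot' command like the one sketched above would rely on.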
| 23:34 | Bertl | well, if I remove the sd card, uboot should not work anymore, right?
|
| 23:34 | skinkie | if you remove the card, it will detect mmcinfo is not there, thus it will go for qspi
|
| 23:35 | skinkie | so it is a very fundamental question: do we want to plug a microSD card into any Zynq and have it boot
|
| 23:35 | Bertl | well, actually it does nothing on my board
|
| 23:35 | Bertl | i.e. it just sits there without any console output
|
| 23:36 | skinkie | you mean you never see a linux kernel at all?
|
| 23:36 | Bertl | without sd card, no
|
| 23:37 | skinkie | ah.. then you did flash embedded storage
|
| 23:37 | Bertl | as I said, boot configuration is (jumper wise) boot from sd
|
| 23:37 | skinkie | for me jumpers don't influence anything
|
| 23:37 | Bertl | so I would presume the zynq lvl 0 bootloader tries to load from there
|
| 23:37 | skinkie | it always boots from mmc if it is present
|
| 23:37 | Bertl | you know you need to power cycle to detect jumper settings?
|
| 23:38 | skinkie | i always pull the power prior to changing
|
| 23:38 | Bertl | okay, so make sure sd boot is configured, then pull the card and cycle
|
| 23:39 | Bertl | if something boots, something serious is wrong IMHO
|
| 23:40 | Bertl | also maybe "mark" your uboot with a recognizable message/version number
|
| 23:40 | Bertl | the one reporting when the SD card is in is U-Boot 2014.01 (Jul 04 2014 - 19:30:04)
|
| 23:42 | skinkie | you were right
|
| 23:42 | skinkie | if you have it on "SD" mode
|
| 23:43 | skinkie | it will only boot with a card and has a red light
|
| 23:44 | troy_s | left the channel |
| 23:44 | Bertl | so problem solved?
|
| 23:44 | Bertl | I think the default board has a bootloader in QSPI, but as that was not accessible on your kernel, I couldn't verify (yet)
|
| 23:47 | skinkie | I think you are right, that if we boot from SD
|
| 23:47 | skinkie | that we can control the initial uboot configuration and we should make our own in the first place
|
| 23:48 | skinkie | (and prune the unreadable mess)
|
| 23:49 | Bertl | great! please also add the missing hardware drivers where possible
|
| 23:50 | Bertl | (most importantly xadc and qspi, but I also suspect missing second port sd/i2c/spi)
|
| 00:10 | skinkie | xadc i already wrote down
|
| 00:10 | skinkie | i am still 'confused' why a stock configuration doesn't include them
|
| 00:19 | Bertl | no idea
|
| 00:32 | fsteinel_ | joined the channel |
| 00:35 | fsteinel | left the channel |
| 00:50 | troy_s | joined the channel |
| 01:00 | davidak1 | left the channel |
| 01:28 | intracube | left the channel |
| 02:57 | troy_s | left the channel |
| 04:08 | Bertl | off to bed now ... have a good one everyone!
|
| 04:08 | Bertl | changed nick to: Bertl_zZ
|
| 07:26 | niemand | joined the channel |
| 07:33 | niemand | left the channel |
| 08:55 | se6astian|away | changed nick to: se6astian
|
| 08:56 | se6astian | good morning
|
| 09:20 | Bertl_zZ | changed nick to: Bertl
|
| 09:20 | Bertl | morning folks!
|
| 10:37 | davidak1 | joined the channel |
| 11:12 | davidak1 | left the channel |
| 11:52 | Bertl | comradekingu: received your buttons! thanks again! will forward them to se6astian for testing
|
| 12:07 | se6astian | \o/
|
| 12:13 | lab-bot | left the channel |
| 12:16 | se6astian | left the channel |
| 12:16 | philippej|away | left the channel |
| 12:18 | comradekingu | left the channel |
| 12:18 | Bertl | off for now ... bbl
|
| 12:18 | Bertl | changed nick to: Bertl_oO
|
| 12:21 | lab-bot | joined the channel |
| 12:22 | se6astian | joined the channel |
| 12:23 | philippej|away | joined the channel |
| 12:23 | philippej|away | changed nick to: philippej
|
| 12:37 | lab-bot | left the channel |
| 12:40 | se6astian | left the channel |
| 12:41 | philippej | left the channel |
| 12:42 | lab-bot | joined the channel |
| 12:43 | se6astian | joined the channel |
| 12:44 | philippej|away | joined the channel |
| 12:44 | philippej|away | changed nick to: philippej
|
| 12:56 | lab-bot | left the channel |
| 12:59 | se6astian | left the channel |
| 12:59 | philippej | left the channel |
| 13:00 | lab-bot | joined the channel |
| 13:02 | se6astian | joined the channel |
| 13:03 | philippej|away | joined the channel |
| 13:03 | philippej|away | changed nick to: philippej
|
| 13:17 | lab-bot | left the channel |
| 13:19 | lab-bot | joined the channel |
| 13:33 | comradekingu | joined the channel |
| 13:34 | comradekingu | :)
|
| 13:47 | se6astian | short server outage
|
| 13:48 | se6astian | all back to normal now it seems
|
| 14:03 | se6astian | gotta go
|
| 14:05 | se6astian | changed nick to: se6astian|away
|
| 14:12 | lab-bot | left the channel |
| 14:12 | lab-bot | joined the channel |
| 14:13 | intracube | joined the channel |
| 14:14 | fsteinel_ | changed nick to: fsteinel
|
| 14:15 | lab-bot | left the channel |
| 14:16 | lab-bot | joined the channel |
| 15:28 | dmjnova | left the channel |
| 15:32 | lab-bot | left the channel |
| 15:35 | lab-bot | joined the channel |
| 15:37 | dmjnova | joined the channel |
| 15:39 | comradekingu | left the channel |
| 16:21 | aombk2 | joined the channel |
| 16:24 | Bertl_oO | changed nick to: Bertl
|
| 16:24 | aombk | left the channel |
| 16:24 | Bertl | back now ...
|
| 17:06 | davidak1 | joined the channel |
| 17:14 | Bertl | off for a nap ... bbl
|
| 17:14 | Bertl | changed nick to: Bertl_zZ
|
| 17:37 | se6astian|away | changed nick to: se6astian
|
| 17:40 | se6astian | back
|
| 18:25 | niemand | joined the channel |
| 18:33 | fsteinel | left the channel |
| 18:47 | dmjnova | left the channel |
| 18:49 | troy_s | joined the channel |
| 19:53 | comradekingu | joined the channel |
| 20:44 | troy_s1 | joined the channel |
| 20:45 | troy_s1 | Greets all.
|
| 20:50 | dmjnova | joined the channel |
| 20:51 | fsteinel | joined the channel |
| 20:53 | comradekingu | troy_s hey :)
|
| 20:53 | troy_s1 | comradekingu: How you doing?
|
| 20:53 | comradekingu | I'm doing good, you?
|
| 20:53 | comradekingu | just had some freshwater fish caught ice-fishing
|
| 20:56 | comradekingu | https://www.kickstarter.com/projects/chillance/treadgaming-exercise-while-playing-games/posts/1110114 is what I've been up to over the weekend
|
| 20:57 | troy_s | left the channel |
| 20:57 | troy_s1 | changed nick to: troy_s
|
| 21:03 | se6astian | hey troy_s I am currently drafting a highlight recovery concept illustration, would you be so kind as to review it once finished?
|
| 21:03 | troy_s | se6astian: Of course.
|
| 21:04 | troy_s | se6astian: It's pretty sludgy territory though in my opinion. I'm sure there are some more heuristic-based algorithms (looking at nearby values, for example, that match the fingerprint of the missing channel) that might get a closer estimate.
|
| 21:07 | se6astian | I am just illustrating the obvious "take the other channels' value" concept for now
|
| 21:07 | troy_s | Gotcha
|
| 21:07 | se6astian | we can build more sophisticated algorithms on top once we understand what shortcomings this basic approach has
|
| 21:07 | niemand | left the channel |
| 21:23 | Bertl_zZ | changed nick to: Bertl
|
| 21:23 | Bertl | back now ...
|
| 21:37 | se6astian | http://lab.apertus.org/file/data/r3ewkv2ydwmxruln2q4u/PHID-FILE-wykpl2bfjleutomrzeh6/Highlight-recovery-Theory-concept-01.jpg
|
| 21:44 | se6astian | troy_s & Bertl please let me know if you have any feedback
|
| 21:44 | troy_s | se6astian: Looking now.
|
| 21:44 | se6astian | great
|
| 21:45 | troy_s | se6astian: Out of the gate, a little bit confusing for me.
|
| 21:46 | se6astian | ok :)
|
| 21:46 | troy_s | se6astian: And a few English typos, not a big deal. (appart for example)
|
| 21:47 | se6astian | ah thanks
|
| 21:48 | troy_s | se6astian: It's quite an interesting thing, as I have recently been discussing the idea of scene referred transfer / tone curving to the display referred domain. It is interesting because of what is expected (an aesthetic emergent phenomenon)
|
| 21:49 | troy_s | I'd make a _strong_ case that the whole "desaturation toward an achromatic" is largely learned from about 100 years of photographic imaging, where singular hues of colors would burn through the layers of emulsion. That is, no matter what the color, everything converges toward white. RGB displays end up having this facet by default thanks to them being tri-colored display referred devices and you can only increase intensity so far on a single channel before
|
| 21:49 | se6astian | updated: http://lab.apertus.org/T244#3845
|
| 21:49 | troy_s | se6astian: From _our_ vantage though, where scene referred idealized data is the key, the problem is more significant.
|
| 21:50 | troy_s | That is, when missing a channel, there is _no_ way to identify what that triplet of data was; the scene data is forever lost.
|
| 21:50 | troy_s | I think this becomes a bit of an issue for post production work.
|
| 21:50 | troy_s | se6astian: Also, "white balanced" isn't quite an accurate image there.
|
| 21:51 | troy_s | Because RGB channels are arbitrary.
|
| 21:51 | troy_s | And R = G = B does not assert achromatic.
|
| 21:51 | troy_s | (In some uh... 'well behaved' colorspaces it does, sRGB for example, but that is the nature of the primaries chosen.)
|
| 21:52 | troy_s | Sense?
|
| 21:52 | Bertl | personally I'm still not sure what I'm seeing there :)
|
| 21:52 | comradekingu | How about being able to locate and size the cube yourself?
|
| 21:52 | troy_s | (Agree, It took me a while to grasp the imaging.)
|
| 21:52 | troy_s | (Largely because my mental models seem firmly planted in wells filling up)
|
| 21:52 | troy_s | comradekingu: Explain?
|
| 21:52 | Bertl | the first picture shows everything within the raw sensor range
|
| 21:52 | se6astian | you mean each channel has its own coefficient, instead of just R = B it's R = Xb x B for recovery, right?
|
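For reference, a rough Python sketch of that per-pixel "borrow from the other channels" idea: a clipped channel is rebuilt from the unclipped ones via the white-balance coefficients. The function name, the averaging choice and the [0..clip] scaling are assumptions for illustration, not the camera's actual pipeline:

    import numpy as np

    def recover_clipped(rgb, wb, clip=1.0):
        # rgb: HxWx3 float array of raw channel values, scaled so the sensor maximum is `clip`
        # wb:  per-channel white-balance gains (wb_r, wb_g, wb_b)
        wb = np.asarray(wb, dtype=float)
        out = rgb.copy()
        clipped = rgb >= clip                    # which samples hit the sensor ceiling
        balanced = rgb * wb                      # in this space a neutral grey has R = G = B
        for c in range(3):
            others = [i for i in range(3) if i != c]
            valid = ~clipped[..., others[0]] & ~clipped[..., others[1]]
            estimate = balanced[..., others].mean(axis=-1)   # naive guess: average of the others
            fix = clipped[..., c] & valid
            out[..., c][fix] = np.maximum(rgb[..., c][fix], estimate[fix] / wb[c])
        return out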
| 21:52 | troy_s | comradekingu: And what cube?
|
| 21:53 | troy_s | se6astian: In terms of display referred white?
|
| 21:53 | comradekingu | to me it looked like setting white balance based on the most dynamic range peak
|
| 21:53 | troy_s | comradekingu: It's not quite that easy.
|
| 21:53 | se6astian | colorspace related yes
|
| 21:53 | comradekingu | But there is a lot of computing involved to make sure the square is where it's supposed to be
|
| 21:54 | Bertl | the missing data seems to happen after the white balance, no?
|
| 21:54 | troy_s | comradekingu: If you think of a colorspace as being weights in luminance, they sum to a unity in the display referred domain. So sRGB for example, has primaries of 0.2126729 0.7151522 0.0721750 luminance at D65 "neutral"
|
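As a quick check of those weights, the relative luminance of a linear Rec.709/sRGB triplet works out as below; the function name is just for illustration:

    # relative luminance from linear Rec.709/sRGB components, using the weights quoted above
    def rec709_luminance(r, g, b):
        return 0.2126729 * r + 0.7151522 * g + 0.0721750 * b

    assert abs(rec709_luminance(1.0, 1.0, 1.0) - 1.0) < 1e-6   # the weights sum to unity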
| 21:54 | comradekingu | What if I could place and size it myself? It would also help understanding if I could see the live feed as I did it
|
| 21:54 | Bertl | so it is something introduced by the image pipeline
|
| 21:54 | comradekingu | ooh
|
| 21:55 | se6astian | Bertl: yes, currently in Alpha we simply do not white balance
|
| 21:55 | troy_s | comradekingu: There is a _clear_ division between the scene referred domain (of which Bertl is speaking currently) and the display referred.
|
| 21:55 | se6astian | the output is always raw
|
| 21:55 | troy_s | There is no black nor white in the scene referred domain, and the camera is attempting to "model" that domain via a display referred sensor. (it has a min and max)
|
| 21:55 | Bertl | so why would we 'throw away' sensor data?
|
| 21:55 | se6astian | that's the beauty: we don't
|
| 21:56 | Bertl | well, if we output the 'raw' then we don't either and nothing needs to be done. period.
|
| 21:56 | troy_s | comradekingu: So in essence, there is no idea of white or black that the camera sees - that's actually a transform _after_ the collection that converts it into the display referred domain. (just as you and I could use a spot meter to measure our outside world, but our eyes will have a limit on the high and low. The spot meter is just 'data' and the rest is our transform)
|
| 21:56 | se6astian | true, then it should happen in post processing
|
| 21:56 | troy_s | Yep.
|
| 21:56 | troy_s | And it is a challenge for certain in post.
|
| 21:56 | se6astian | currently it doesn't because what we save is not really understood as "RAW"?
|
| 21:57 | troy_s | Because that data isn't data.
|
| 21:57 | troy_s | (as in it's missing a third or two thirds of the signal)
|
| 21:57 | Bertl | we don't output raw yet (on the alpha)
|
| 21:57 | se6astian | not by definition
|
| 21:57 | Bertl | only if you capture a still image
|
| 21:57 | se6astian | but due to the missing white balancing we do
|
| 21:58 | Bertl | no white balance is white balance as well :)
|
| 21:58 | Bertl | it's just an ugly balance :)
|
| 21:58 | se6astian | true
|
| 21:58 | troy_s | I think even for the 709 there should be a 3D "toward unity" sort of filmish LUT.
|
| 21:58 | troy_s | For the onboard viewer.
|
| 21:58 | comradekingu | left the channel |
| 21:58 | troy_s | Otherwise there are going to be some nasty color casts.
|
| 21:58 | comradekingu | joined the channel |
| 21:59 | se6astian | but I suspect that post processing software does not recover highlights when it re-balances the whites but just crops off the data
|
| 21:59 | Bertl | I think for preview a 3D lut is doable
|
| 21:59 | Bertl | se6astian: then the post processing software should be adapted
|
| 21:59 | se6astian | true, but we can't do that for most tools I am afraid :)
|
| 21:59 | Bertl | the problem is this: when _we_ do this on the raw data, we are actually making it worse
|
| 22:00 | se6astian | so I had the idea for a workaround :)
|
| 22:00 | troy_s | Bertl: That would allow colors to converge like film, and eliminate casts. So when green is say, 95%, the 3D LUT yanks up the R and the B channel to converge more gracefully at unity for the display referred transform.
|
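A toy Python sketch of that behaviour: once the strongest channel passes a threshold, blend the pixel toward achromatic so highlights roll off to white instead of leaving a hue cast. The threshold and the smoothstep curve are arbitrary assumptions, not the project's actual preview LUT:

    import numpy as np

    def highlight_convergence(rgb, threshold=0.90):
        rgb = np.asarray(rgb, dtype=float)
        peak = rgb.max(axis=-1, keepdims=True)               # strongest channel per pixel
        t = np.clip((peak - threshold) / (1.0 - threshold), 0.0, 1.0)
        t = t * t * (3.0 - 2.0 * t)                          # smoothstep for a gentle shoulder
        return rgb * (1.0 - t) + peak * t                    # blend toward the achromatic peak

In practice such a per-pixel function could be baked into a 3D LUT by evaluating it on a regular grid of input RGB values.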
| 22:00 | Bertl | i.e. we throw away perfectly good data and replace it with false guesstimates
|
| 22:00 | troy_s | Agree.
|
| 22:00 | Bertl | we even reduce the bit depth in the process
|
| 22:01 | Bertl | not something I would really want :)
|
| 22:02 | Bertl | nevertheless, it is perfectly doable, but will give very strange side effects in the way it is described
|
| 22:02 | troy_s | And in post, any transform should be in the scene referred domain... for example, if R is 0.8, B is 0.7, and G is pinned, then the estimate should come from nearby statistical Greens.
|
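A rough sketch of that "nearby statistical greens" idea: estimate a pinned green sample from the green-to-(red+blue) ratio of surrounding pixels whose green did not clip. The window size and the ratio model are assumptions for illustration only:

    import numpy as np

    def estimate_green(rgb, y, x, clip=1.0, radius=8):
        # rgb: HxWx3 float array; (y, x) addresses a pixel whose green channel is pinned
        r0, b0 = rgb[y, x, 0], rgb[y, x, 2]
        ys = slice(max(y - radius, 0), y + radius + 1)
        xs = slice(max(x - radius, 0), x + radius + 1)
        patch = rgb[ys, xs].reshape(-1, 3)
        ok = patch[:, 1] < clip                  # neighbours whose green did not clip
        if not ok.any():
            return rgb[y, x, 1]                  # nothing better than the clipped value
        ratio = patch[ok, 1] / np.maximum(patch[ok, 0] + patch[ok, 2], 1e-6)
        return float(np.median(ratio) * (r0 + b0))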
| 22:02 | Bertl | i.e. objects will change contrast and brightness depending on the rest of the scene
|
| 22:02 | troy_s | Yep.
|
| 22:02 | se6astian | the experiment is: we write a tool/script that converts our unbalanced 1080p YCbCr 4:2:2 image to a UHD 4:2:2 raw (DNG) image
|
| 22:02 | troy_s | And totally fubar any and all colorspace transforms off the data.
|
| 22:03 | Bertl | se6astian: work on the snapshots for now, those are raw and can be transformed in the way you describe it
|
| 22:03 | se6astian | each pixel of the 1080p image contains R, G and B, so we can fill a 2x2 Bayer block by just taking the channel values
|
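A sketch of the conversion se6astian describes: expand each RGB pixel of the 1080p frame into a 2x2 RGGB Bayer block, producing a UHD-sized (3840x2160) mosaic that a raw converter can then be pointed at. The RGGB ordering is an assumption, and writing the actual DNG container is left out:

    import numpy as np

    def rgb_to_bayer_rggb(rgb):
        # rgb: HxWx3 array (e.g. 1080x1920x3) -> 2Hx2W Bayer mosaic
        h, w, _ = rgb.shape
        bayer = np.zeros((2 * h, 2 * w), dtype=rgb.dtype)
        bayer[0::2, 0::2] = rgb[:, :, 0]   # R
        bayer[0::2, 1::2] = rgb[:, :, 1]   # G
        bayer[1::2, 0::2] = rgb[:, :, 1]   # G (duplicated)
        bayer[1::2, 1::2] = rgb[:, :, 2]   # B
        return bayer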
| 22:03 | Bertl | do not use the 1080p 422 feed, as it already misses most of the data
|
| 22:04 | se6astian | I think RAW development software does account for the highlight recovery process
|
| 22:04 | Bertl | note that this is comparable to using a JPEG and generating a "raw" from it
|
| 22:04 | Bertl | yes, you can do it, but it will be just wrong :)
|
| 22:04 | se6astian | agreed, but the "wrong" stuff might be less visible than the positive effects
|
| 22:05 | se6astian | it's an experiment
|
| 22:08 | comradekingu | I don't see the pure win, but it's certainly interesting since it's possible to go there
|
| 22:10 | comradekingu | but won't it look unnatural when you try to represent it in a limited colourspace?
|
| 22:13 | se6astian | we will see :)
|
| 22:13 | troy_s | Well the larger issue is that there _is no data there_
|
| 22:13 | troy_s | It's going to be a guess.
|
| 22:14 | troy_s | And that causes all sorts of hell in trying to leverage the data in say, compositing.
|
| 22:14 | troy_s | You simply _do not have any idea_ what say, the green channel was / should be.
|
| 22:14 | troy_s | And as such, the chance of a guess being _way_ off is quite high.
|
| 22:14 | troy_s | So you have color casts in every instance, and worse
|
| 22:15 | troy_s | if you do the sort of blind and ignorant "desaturate and reduce" now you are screwing your scene referred values for compositing.
|
| 22:15 | troy_s | They can never be properly slotted into the scene linear data slots.
|
| 22:16 | troy_s | I personally believe there is a vast amount of more research to be done on optimizing the log curves for capture.
|
| 22:16 | troy_s | Because that's where the most amount of data is going to be captured. Somehow getting a more versatile PLE mode out of CMOSIS or whatever for example.
|
| 22:18 | comradekingu | is there a way to couple many AXIOMs together to only use the middle part of the lens
|
| 22:18 | troy_s | ?
|
| 22:18 | comradekingu | so where you would otherwise be using the curved edge of the lens, you instead shoot that part on another camera
|
| 22:19 | Bertl | you'll get a different perspective unless you use a beam splitter
|
| 22:20 | Bertl | but yes, you can synchronize several AXIOMs
|
| 22:20 | comradekingu | how do you know the chromatic aberration is uniform?
|
| 22:20 | dmjnova | left the channel |
| 22:20 | troy_s | It's rather common to shoot several cameras pointing in different directions and merge them using unwarps and such for plate work.
|
| 22:20 | troy_s | comradekingu: It won't be.
|
| 22:21 | troy_s | You'd have to peel it apart into planes and non-uniformly scale them if you have significant amounts.
|
| 22:21 | troy_s | (although in most instances it is quite uniform in my experience.)
|
| 22:22 | Bertl | IMHO it would be simpler to use a bigger lens
|
| 22:23 | comradekingu | yes, but that isn't as cool
|
| 22:23 | troy_s | comradekingu: It is _extremely_ common to use say, three to nine cameras for plate generation these days.
|
| 22:24 | comradekingu | if you can also do 3D and increase the resolution a bit, it beats the bigger lens
|
| 22:24 | Bertl | troy_s: plate work is?
|
| 22:25 | troy_s | Bertl: Mega car chase with a robot car that smashes another real-world car and is dissipated with a laser. Where that real-world car was and gets dissipated needs to have background plates so that it can be removed.
|
| 22:26 | comradekingu | and you could do sweetspot focus on each camera
|
| 22:26 | troy_s | Bertl: Or Mega car comes down and lands and drives along a road. That road needs to be shot at large density so that it can be mapped to 3D and used to generate camera motion and 3D etc.
|
| 22:26 | Bertl | ah, okay, got it
|
| 22:28 | comradekingu | I imagine that gets easier to do if you can have it preconfigured to shoot like that, where the cameras communicate and notify you if it's off
|
| 22:30 | Bertl | I think a lot will be possible, as the software is easily accessible and thus can be modified
|
| 22:30 | troy_s | left the channel |
| 22:34 | troy_s | joined the channel |
| 22:39 | dmjnova | joined the channel |
| 22:42 | troy_s | left the channel |
| 22:43 | troy_s | joined the channel |
| 22:48 | se6astian | time for bed
|
| 22:49 | se6astian | changed nick to: se6astian|away
|
| 22:50 | dmjnova | left the channel |