
#apertus IRC Channel Logs

2014/09/25

Timezone: UTC


00:01
jucar
joined the channel
00:19
liwanma
joined the channel
00:19
liwanma
Hey all
00:21
Bertl
hey liwanma!
00:24
liwanma
I'm looking to help out with any software development.
00:25
Bertl
sounds great! have you worked with embedded systems before?
00:26
Bertl
(no big deal if not, just curious)
00:27
liwanma
Yes I have, but probably not at the same level as your core team. I'm more than willing to put in the research to help out.
00:28
Bertl
excellent! do you have any area you would like or dislike?
00:30
liwanma
Well... I would like to help out with image processing because it's something I want to jump into
00:32
Bertl
okay, sounds good. I guess we have something which might be interesting to you, and where you can basically start right now if you like
00:33
Bertl
as you might know, the image which is captured by the sensor consists of sensel data with different colors arranged in a specific pattern (bayer pattern)
00:34
liwanma
yep
00:34
Bertl
this pattern has to be interpreted and color information has to be interpolated
00:35
liwanma
right. Are we talking about adc?
00:36
Bertl
adc as in analog to digital converter?
00:36
liwanma
yes
00:37
Bertl
well, yes and no, the sensor samples an image and then converts it to a digital representation
00:37
Bertl
which is sent to the FPGA as bitstream
00:37
Bertl
after some basic processing, we get a pattern like this:
00:37
Bertl
R G R G R G ....
00:38
Bertl
G B G B G B ...
00:38
Bertl
with about 4000 by 3000 sensels
00:38
Bertl
but we actually want to get something like this:
00:39
Bertl
(RGB) (RGB) (RGB) ...
00:39
Bertl
(RGB) (RGB) (RGB) ...
00:39
danieel
Bertl: there is a word for that: debayer :)
00:39
Bertl
again, with about 4000 by 3000 pixels
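A minimal sketch of the mosaic layout described above, assuming (0, 0) is the top-left R sensel of an RGGB-aligned pattern (the helper name is illustrative, not from the project):

```c
#include <assert.h>

/* Which color does sensel (x, y) carry in an RGGB-aligned Bayer mosaic?
 * Even rows read R G R G ..., odd rows read G B G B ...
 * Returns 0 = R, 1 = G, 2 = B. */
static int bayer_channel_rggb(int x, int y)
{
    if ((y & 1) == 0)                   /* even row: R G R G ... */
        return (x & 1) == 0 ? 0 : 1;
    else                                /* odd row:  G B G B ... */
        return (x & 1) == 0 ? 1 : 2;
}
```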
00:39
Bertl
danieel: if you had followed the discussion, I already used that
00:40
Bertl
danieel: but not everybody has your vast knowledge of image processing :)
00:40
danieel
might be too complex for him :)
00:40
danieel
i am now into color math again... trying to make a dng matrix out of the colorchecker chart
00:41
Bertl
have fun!
00:41
liwanma
like debayering?
00:41
Bertl
yes, precisely
00:41
liwanma
oops, window wasn't scrolled down.
00:42
liwanma
Ok, so you'd like for me to create a debayering algorithm?
00:42
Morethink
joined the channel
00:42
Bertl
no, but what I think would be really useful if we had a good framework to test various algorithms in a simple way
00:43
Bertl
(note that this is not limited to debayering)
00:43
liwanma
alright
00:43
Bertl
so basically something which takes an input image (normal or 4k in size) and generates the pattern the sensor will produce (or a good estimation at least)
00:44
Bertl
then runs some to-be-tested algorithm over it and compares the result with the original
00:44
Bertl
does that sound like something which could be interesting to you?
00:45
danieel
just for completeness, plan for an OLPF feature in it too (a gaussian blur of defined diameter) applied before sampling the bayer data
00:45
liwanma
Yep, no prob.
00:47
Bertl
okay, it probably should be reasonably fast, but it doesn't need to be highly optimized
00:48
Bertl
do not waste time on image format input, either pick an existing generic framework for that or decide on a particular format (e.g. png) and stick with that
00:48
Bertl
note that you need to handle images with more than 8bit
00:48
liwanma
ha I was just thinking about that. Thanks!
00:49
Bertl
the sensor does 10/12 bit and the output might be up to 16bit
00:49
danieel
there is a lightweight format (pnm - basically a few lines of text to define the size, and then binary data - easy for 16bit rgb)
00:49
Bertl
if you decide to go for floating point, make sure that it can be rounded/truncated at the various stages to match the sensor/FPGA pattern
00:50
Bertl
pgm would be fine as well, basically anything imagemagick can convert to/from without degradation is perfectly fine
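As a sketch of what such an interchange format looks like in practice: a binary 16-bit PGM is just a tiny text header followed by samples, and the PGM format stores samples with maxval > 255 as two bytes, most significant first. The helper name and interface here are made up:

```c
#include <stdio.h>

/* Write a binary PGM (P5) image with 16-bit samples to an open stream.
 * Samples are emitted big endian, as the PGM format requires, so the
 * result round-trips losslessly through tools like imagemagick. */
static int write_pgm16(FILE *fp, const unsigned short *pix,
                       int width, int height)
{
    if (fprintf(fp, "P5\n%d %d\n65535\n", width, height) < 0)
        return -1;
    for (long i = 0; i < (long)width * height; i++) {
        unsigned char be[2] = { (unsigned char)(pix[i] >> 8),
                                (unsigned char)(pix[i] & 0xff) };
        if (fwrite(be, 1, 2, fp) != 2)
            return -1;
    }
    return 0;
}
```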
00:51
danieel
i would advise going into the signed 32bit domain after loading, to avoid any clipping errors in calculations
00:51
danieel
anyway the fpga is integer math, no?
00:51
Bertl
note that the algorithms we will be testing are going to be implemented in the FPGA
00:52
Bertl
so yes, we will do integer math there as floating point is expensive
00:52
danieel
(just saw a funny baselight error, with dots popping out when the matrix rendered the extreme values negative)
00:53
Morethink
left the channel
00:53
Bertl
it would be nice to be able to incorporate algorithms written in C and maybe python for this purpose
00:54
liwanma
ok
00:54
Bertl
also note that the statistics are as important as the proper image manipulation, i.e. we want to figure out things like mean square distance and maximum deviations, etc
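The statistics stage mentioned here could start as small as this; the struct, the names, and the flat-array image layout are all assumptions for illustration:

```c
#include <math.h>
#include <stdlib.h>

/* Compare an original and a reconstructed image (flat sample arrays of
 * equal length) and report mean square error plus the maximum absolute
 * deviation of any single sample. */
struct img_stats { double mse; int max_dev; };

static struct img_stats compare_images(const unsigned short *a,
                                       const unsigned short *b, long n)
{
    struct img_stats s = { 0.0, 0 };
    for (long i = 0; i < n; i++) {
        int d = (int)a[i] - (int)b[i];
        if (abs(d) > s.max_dev)
            s.max_dev = abs(d);
        s.mse += (double)d * d;     /* accumulate squared error */
    }
    s.mse /= (double)n;
    return s;
}
```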
00:55
Bertl
and most importantly, it should be commandline compatible
00:55
danieel
one thing that would also be interesting to know - if you can make a multiplication / addition macro to count the number of operations
00:56
Bertl
yeah, well, if you design an algorithm with FPGA code in mind, you know your DSP count for that
00:57
Bertl
anyway, useful ideas and additions are always welcome, but keep it simple in the beginning
00:57
danieel
the result should be the number of multiplications - therefore nr of DSPs x frequency (or better to say cycles per frame)
00:58
danieel
i would not complicate it with python... just plain C
00:58
Bertl
with commandline compatible I mean that we will be running it automated for a huge number of images and probably every time we want to test a new algorithm
00:59
Bertl
so it has to work from a makefile or shell script without any user intervention
01:09
liwanma
got it
01:10
Bertl
one more thing which comes to my mind is that the bayer pattern should be a little flexible
01:11
Bertl
we currently have two (four) patterns (from the same sensor) depending on the vertical (horizontal) flipping of the image
01:12
Bertl
i.e. the sensor can flip in both directions but the bayer pattern will change accordingly, so RG-GB becomes GR-BG when y-flipped for example
01:12
Bertl
but note that it isn't as simple as 'just' swapping the columns
01:12
Bertl
because the spatial information is different
01:13
danieel
i would rather explain it as 4 different combinations of masks - RGGB BGGR GRBG GBRG - for 4 potential sensor models
01:14
Bertl
yeah, for now we can assume that it will be those patterns
01:15
Bertl
just to illustrate the spatial relevance, consider an image with a black to white gradient from left to right
01:15
Bertl
so first sensel will get 0, second 1, third 2, fourth 3 ...
01:16
Bertl
so the RGRG in the first line will have 0 1 2 3
01:16
Bertl
flipping the image will give the inverse sequence 3 2 1 0 at the right end
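A tiny illustrative helper for the gradient example: flipping a row reverses the sample sequence, so the values the sensels carry no longer line up with the original RGGB alignment, which is why the flip cannot be undone by just swapping columns of the mosaic:

```c
/* Reverse one row of samples in place, as a horizontal image flip
 * would.  With the gradient row 0 1 2 3, the sensel that used to
 * carry R = 0 at the left edge now carries the value 3. */
static void flip_row(const int *in, int *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in[n - 1 - i];
}
```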
01:19
liwanma
got it
01:37
Bertl
okay, I suggest you make a simple design for this, and we talk about that when you think it is complete (the design) but before you start implementing it ... does that sound fine to you?
01:44
liwanma
Yep.
01:44
liwanma
Is this still your pipeline: https://wiki.apertus.org/index.php?title=AXIOM_Alpha_Software
01:44
liwanma
?
01:45
Bertl
it is the pipeline for the AXIOM Alpha, yes
01:45
Bertl
we will do a similar one for the AXIOM Beta, but it will not be the same
01:47
Bertl
btw, which reminds me, that this framework might also be used to test noise correction algorithms (like for example the FPN correction)
01:47
liwanma
how should I be factoring in the fixed pattern noise correction and the LUT?
01:47
liwanma
Won't I need those as inputs as well?
01:48
Bertl
I would probably design it to have a number of modules acting on the image data
01:48
Bertl
or units in the pipeline
01:48
Bertl
in the simplest case, the FPN correction is perfect and the original image is replicated 1:1
01:49
Bertl
just broken down into spatial bayer pattern of course
01:53
liwanma
Ok, so I'm thinking I just read in the image with an optional sensor orientation parameter and then apply a reverse debayering calculation to split it into sensels?
01:54
liwanma
assuming the stages in between had no effect on the raw sensor data
01:55
Bertl
precisely
01:56
Bertl
after that, the debayering happens and produces a new image
01:56
Bertl
note that this can be of same or different size
01:57
Bertl
the result has to be compared to the original image, before it was broken down into the sensor pattern
02:01
liwanma
This is to test different debayering algorithms?
02:01
Bertl
correct
02:02
Bertl
but we do not need to limit it to debayering if we can keep the 'sensor side' flexible
02:02
Bertl
for example, we could add a function/procedure/process to emulate the FPN noise
02:03
Bertl
and then test the implementation of the FPN correction
02:03
Bertl
but I'd consider this a second stage
02:07
liwanma
Ah I see. That makes more sense to me. Is it possible to store the bitstream straight from the sensor? If so, it would help me verify the accuracy of the tool.
02:08
Bertl
yes, we have made a number of so called 'raw' recordings
02:08
Bertl
which basically contain a number of bits per sensel
02:08
Bertl
(typically 12 bits)
02:12
liwanma
ok cool
02:12
Bertl
basically all files on our servers which end in raw16 or raw12 are such files
02:12
Bertl
the .raw16 ones are padded to 16 bits (with zeroes on the lsbs)
02:13
Bertl
the .raw12 are packed 12 bit data in big endian
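A sketch of unpacking such .raw12 data: two 12-bit sensels fit into three bytes. The exact bit layout below (most significant bits first) is my assumption from the "packed 12 bit, big endian" description, not a confirmed file format:

```c
#include <stddef.h>

/* Unpack pairs of 12-bit sensels from packed big-endian .raw12 bytes:
 * byte 0 holds s0[11:4], byte 1 holds s0[3:0] | s1[11:8],
 * byte 2 holds s1[7:0].  (Assumed layout.) */
static void unpack_raw12(const unsigned char *in, unsigned short *out,
                         size_t pairs)
{
    for (size_t i = 0; i < pairs; i++) {
        const unsigned char *p = in + 3 * i;
        out[2 * i]     = (unsigned short)((p[0] << 4) | (p[1] >> 4));
        out[2 * i + 1] = (unsigned short)(((p[1] & 0x0f) << 8) | p[2]);
    }
}
```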
02:44
liwanma
Ok so here's what I'm thinking the command will be: senselBitstreamTool [-reverse] [-alignment pattern] [-debayer alg] -r input -w output
02:45
liwanma
pattern is a string with the top left 2x2 pixel alignment, alg is a string representation of the formula
02:48
liwanma
in the future, if you wanna test fixed pattern noise, we can add a flag for -fpn or whatever you want
02:51
Bertl
make that -r and -w stdin and stdout
02:52
Bertl
so that you can put it in a conversion pipeline e.g. with imagemagik
02:52
Bertl
for the string representation of the algorithm, I don't think that will work
02:53
Bertl
or what exactly do you mean there?
02:53
liwanma
ok
02:53
Bertl
maybe give an example?
02:54
liwanma
a script representing the algorithm
02:54
Bertl
okay, script where? what language? what data/input?
03:06
liwanma
A script that the user creates that we can launch from within the tool. We feed it rgb values and it returns sensel data or vice-versa.
03:07
liwanma
So we would be executing the script for each pixel and we reconstruct the bitstream as we go along.
03:08
Bertl
first problem, almost all debayering algorithms require more data than just a pixel
03:11
liwanma
ok
03:11
Bertl
what language did you have in mind for the senselBitstreamTool ?
03:11
liwanma
c
03:12
Bertl
just asking, because the name would hint C++ :)
03:12
Bertl
so for C, I think it would be best to model it like this:
03:13
Bertl
load the image, bayer the image (utilizing a predefined function)
03:13
Bertl
call the debayer (another predefined function) with two images, the source and the destination (allocated memory)
03:14
Bertl
analyze the result and generate statistics
03:14
Bertl
now both functions could be in C files (separate ones)
03:14
Bertl
and could be referenced by a simple name, in a lookup table
03:15
Bertl
it is no big deal to recompile the tool when one of the functions changes
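The name-to-function lookup described above could be sketched like this; every name, the image struct, and the stub bodies are hypothetical, and since bayer and debayer stages share the same shape, one table covers both:

```c
#include <stddef.h>
#include <string.h>

/* Every stage is just a conversion from an input image to an output
 * image, registered under a simple name in a compile-time table. */
struct image { int width, height; unsigned short *data; };

typedef void (*convert_fn)(const struct image *src, struct image *dst);

/* Stub stage bodies - the real conversions would go here. */
static void bayer_rggb(const struct image *src, struct image *dst)
{ (void)src; (void)dst; }
static void debayer1(const struct image *src, struct image *dst)
{ (void)src; (void)dst; }

static const struct { const char *name; convert_fn fn; } algorithms[] = {
    { "bayerRGGB", bayer_rggb },
    { "debayer1",  debayer1 },
};

static convert_fn lookup_algorithm(const char *name)
{
    for (size_t i = 0; i < sizeof algorithms / sizeof algorithms[0]; i++)
        if (strcmp(algorithms[i].name, name) == 0)
            return algorithms[i].fn;
    return NULL;    /* unknown name: caller reports the error */
}
```

Recompiling when a stage is added is then just a matter of appending one table entry, which matches the "no big deal to recompile" approach.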
03:15
Bertl
it would be nice though to be able to test more than just a single algorithm in one run
03:16
Bertl
(avoiding the image loading, and bayering)
03:16
liwanma
yep, that's what I was aiming for
03:16
Bertl
okay, generalizing that idea, you do not even care about the bayer vs debayer
03:17
Bertl
because in any case, they just take an input image and produce an output image
03:17
Bertl
all you need to care about is the image sizes and the sequence you apply those algorithms
03:19
Bertl
something like bayerRGGB, debayer1
03:19
Bertl
might be the simplest case
03:19
liwanma
How do you know what the sequence is if you don't care whether you're bayering or debayering?
03:20
Bertl
something more complicated might be: bayerRGGB, (debayer1, debayer2)
03:20
liwanma
oh ok
03:20
Bertl
note that the syntax is just for illustration
03:20
Bertl
i.e. I do not think it is really good to use that syntax
03:21
Bertl
you could do it as some kind of stack
03:21
Bertl
with e.g. copy, push, pop or so
03:21
Bertl
or simply as tree (in whatever representation)
03:22
Bertl
or just as enumeration of sequences
03:22
Bertl
e.g. bayerRGGB,debayer1 bayerRGGB,debayer2
03:22
Bertl
but you will have to be smart not to do the bayerRGGB twice :)
03:23
Bertl
or just do it twice anyway :)
03:23
Bertl
also it might be good to actually split the algorithms into two parts
03:23
Bertl
one which calculates the output size from a given input size
03:24
Bertl
and another one which actually does the image conversion
03:27
liwanma
Ok I see
03:28
Bertl
defining a bunch of inline functions to access the image data should help to simplify algorithms
03:28
Bertl
i.e. something to access a pixel (fetch, store)
03:28
Bertl
and something to return the actual width/height
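Those inline accessors might look like this; the struct layout and helper names are assumptions, the point being that algorithm code never touches the raw memory layout directly:

```c
/* Small image struct plus inline fetch/store helpers, so debayer code
 * can be written against (x, y) coordinates instead of raw indexing. */
struct image { int width, height; unsigned short *data; };

static inline int img_width(const struct image *img)  { return img->width; }
static inline int img_height(const struct image *img) { return img->height; }

static inline unsigned short img_fetch(const struct image *img, int x, int y)
{
    return img->data[(long)y * img->width + x];
}

static inline void img_store(struct image *img, int x, int y,
                             unsigned short v)
{
    img->data[(long)y * img->width + x] = v;
}
```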
04:00
Bertl
okay, I'm off to bed now ... have a good one everyone!
04:01
Bertl
liwanma: and thanks in advance for looking into it!
04:01
Bertl
changed nick to: Bertl_zZ
04:42
wescotte
left the channel
04:47
liwanma
left the channel
04:54
aombk2
joined the channel
04:56
aombk
left the channel
05:53
aombk2
changed nick to: aombk
05:57
Gegsite
joined the channel
05:57
Gegsite
hello
05:57
Gegsite
Cool Project just found it
06:18
Gegsite
left the channel
07:51
Morethink
joined the channel
08:47
philippej
joined the channel
08:49
Morethink1
joined the channel
08:49
aquarat
left the channel
08:49
aquarat
joined the channel
08:51
Morethink
left the channel
09:16
Morethink1
left the channel
09:20
philippej
left the channel
11:39
danieel
left the channel
13:00
Bertl_zZ
changed nick to: Bertl
13:00
Bertl
morning folks!
13:24
se6astian|away
changed nick to: se6astian
13:24
danieel
joined the channel
13:27
se6astian
good afternoon
13:29
mars_
hi se6astian
13:31
se6astian
hello!
13:31
se6astian
mars_: did you have time to check out the power supply yet?
13:33
mars_
yep, and it works great
13:38
se6astian
interesting
13:38
se6astian
did you chat with herbert about it yet, the problems he found with it?
13:39
mars_
yeah, we talked about it
13:46
intracube
left the channel
13:49
se6astian
perfect
13:51
Bertl
mars_: did you get around testing it with low current?
13:53
mars_
i got down to 350mA, and also with high and low voltages
13:53
Bertl
okay, so everything fine with the design then, great!
13:54
Bertl
once I get back the PCB from our exhibition, I'll assemble another one with slightly different parts; if you would be so kind as to test that one as well and document the findings somewhere, it would be great!
13:54
mars_
sure!
13:54
Bertl
thanks! appreciated!
14:25
intracube
joined the channel
15:13
se6astian
changed nick to: se6astian|away
16:44
sebix
joined the channel
16:58
Bertl
off for a nap ... bbl
16:58
Bertl
changed nick to: Bertl_zZ
16:58
sebix
left the channel
17:00
intracube
left the channel
17:17
Gegsite
joined the channel
17:47
danieel
left the channel
17:50
sebix
joined the channel
18:12
se6astian|away
changed nick to: se6astian
18:17
danieel
joined the channel
18:26
sebix
left the channel
18:26
designbybeck
left the channel
18:26
designbybeck__
left the channel
18:35
designbybeck
joined the channel
18:36
designbybeck_
joined the channel
18:48
abcd
joined the channel
18:48
abcd
left the channel
18:49
RahaMee
joined the channel
18:51
se6astian
hello RahaMee
18:55
RahaMee
Hello
18:58
RahaMee
I have a few questions that I was hoping someone can answer. If a user wanted to change out their sensor, would he be able to do it on his own in a relatively easy way?
18:58
sebix
joined the channel
19:01
RahaMee
The reason I ask is that I see apertus as way to return to the old film cameras where instead of choosing film stocks, we can choose sensors. But I'm not sure how involved the process would be to change out a sensor.
19:04
danieel
about as complicated as with film cameras... carry 2 bodies and use one of them :)
19:06
se6astian
well changing the sensor module (2 are planned currently) takes a bit of effort
19:07
se6astian
you have to unscrew the enclosure and swap some hardware elements
19:07
se6astian
as well as parts of the lens mount
19:07
se6astian
so it's not something I would recommend in the field for now
19:07
se6astian
though it only takes a few minutes if you know what you are doing most likely
19:08
se6astian
we hope to make it easier in the future
19:08
se6astian
but there are other ways to change "stocks"
19:09
se6astian
currently each camera brand has its unique look, the alexa look, the sony look, the red look, etc.
19:09
se6astian
this is mostly how the image processing pipeline is designed and how the "color science" as most people call it now is defined
19:09
se6astian
with the AXIOM that part will be completely open
19:10
se6astian
so there will be presets where you can change the look of your image (not just change LUTs or image profiles) but actually affect how the sensor input is interpreted
19:11
danieel
where you got that look stuff?
19:11
danieel
once you select a proper profile you should see no difference between brands
19:15
RahaMee
Aside from a film stock or look, changing out the sensor will also allow users to choose their maximum resolution, available frame rates, speed (as in asa/iso), etc. That would be really exciting and freeing.
19:16
intracube
joined the channel
19:17
alesage_
left the channel
19:18
alesage
joined the channel
19:21
RahaMee
Because if I wanted to shoot a feature with an arri and then switch over to red for semi-highspeed or 6k, I would have to rent both of those cameras and everything else to get them working.
19:23
Gegsite
left the channel
19:23
Bertl_zZ
changed nick to: Bertl
19:23
Bertl
back now ....
19:24
RahaMee
How modular can we get the axiom to reach that level of customization?
19:25
Bertl
The AXIOM will get very modular
19:27
se6astian
RahaMee: in the beginning we will be limited by the HDMI output module (3 independent streams, each up to 1080p60 4:4:4)
19:27
se6astian
but there will definitely be options sooner or later to also record native 4K or raw
19:27
se6astian
then we will also be able to utilize more of the sensor's capabilities
19:28
se6astian
but in the beginning we have to keep the goal down to the essentials and build upon that foundation later on
19:28
se6astian
But since everything is open source there are no artificial limitations
19:28
RahaMee
I understand what's coming for the beta. I'm just wondering what you guys are planning beyond the beta.
19:29
se6astian
Well the next big step after Beta is the AXIOM Gamma
19:29
se6astian
that should already be a production ready cinema camera
19:29
se6astian
with modular "blocks" and internal 4k raw recording
19:29
se6astian
but the Gamma will greatly be defined by the community working with the Beta
19:30
se6astian
so for now it's hard to say exactly what it will contain
19:30
se6astian
and also what will be possible by then, maybe there will be new much more capable sensors or storage mediums
19:31
se6astian
by keeping everything modular (not just the physical hardware) but also the software and FPGA functionality we should be able to adapt to most of the things coming in the future that we have no idea about yet
19:31
RahaMee
I love what the gamma is looking like, the only thing I absolutely want, is to be able to switch sensors easily.
19:32
Bertl
I'm pretty sure the sensor will be a module in the Gamma as well
19:32
se6astian
well maybe you are the guy who wants to work on defining/testing this modularity concept then with the Beta :)
19:32
RahaMee
That way it won't ever be about choosing the right camera for the job, but rather, the right look for the job... like how it used to be.
19:33
se6astian
there are many things we need to tackle with this: how the sensor module is physically attached, how the cooling is connected and where the air to cool the sensors is flowing, how these parts fit together, how the high speed data interconnects are designed to survive "in the field swapping", power supply, etc.
19:35
se6astian
or how the lens mount (that naturally has to be removed to swap the sensor) attaches again to the sensor block, how to allow backfocus adjustments, how to make sure there are no light leaks, or how to prevent dust/fingerprints on the sensor glass
19:39
RahaMee
If the sensor is a module itself as you guys mention, then any special requirements should also be consolidated within the module. Would that work?
19:42
RahaMee
I just realized, the BMD Ursa is going this route. Unfortunately, that's the only thing I like about it.
19:52
se6astian
correct, the module should be as self-contained as possible but in some cases that's just difficult
19:52
se6astian
but as I said we will definitely explore this direction in the future, for now we have to keep things simple :)
19:53
se6astian
so the Beta stays affordable and within a reasonable development timeframe
19:56
RahaMee
Great! One more question, after reading about the experimental 4k, two 12bit monochrome pixels are read into the 24bit channel correct?
19:59
Bertl
this is just an example how it can work
19:59
Bertl
combining sensel data (2x 12bit) into a longer word and then breaking it up again (e.g. 3x 8bit)
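A sketch of that combine-and-split, assuming the 24-bit word is carried as three 8-bit channels, most significant byte first (helper names are made up):

```c
/* Merge two 12-bit sensels into one 24-bit word, then break that word
 * into three 8-bit channels for transport; post processing reverses
 * the split losslessly. */
static unsigned long pack_2x12(unsigned a, unsigned b)
{
    return ((unsigned long)(a & 0xfff) << 12) | (b & 0xfff);
}

static void split_24_to_3x8(unsigned long w, unsigned char out[3])
{
    out[0] = (unsigned char)((w >> 16) & 0xff);
    out[1] = (unsigned char)((w >> 8) & 0xff);
    out[2] = (unsigned char)(w & 0xff);
}
```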
19:59
RahaMee
How would you get the color back?
20:00
Bertl
in post processing
20:00
RahaMee
so the color data is included with the monochrome pixels?
20:05
sebix
left the channel
20:08
Bertl
correct, the original data making up the 24 bits are from the bayer pattern of the color sensor
20:21
RahaMee
Do you guys have any estimations on how much the discounts will be for purchasing the sensors wholesale? What I mean is, how much would the discount be if you ordered 100 or 200 or etc...?
20:25
Q_
I assume the bayer filter is part of the sensor, so I think you're going to get strange results if you don't combine R, G and B to make it monochrome.
20:26
Bertl
it's not the idea to get monochrome from the sensor, the idea is to get the data out to a cheap recorder
20:26
Q_
You mean just logging the RAW RGB values?
20:26
Bertl
yup
20:27
Q_
So just like most raw formats you get from a still camera.
20:32
RahaMee
May I request filter slots in the barrel of the lens mount?
20:32
Bertl
sure, any ideas are welcome, maybe take the existing 3D model and try out a few things there
20:35
RahaMee
How do I get to those files?
20:35
RahaMee
Are they in your github?
20:36
Bertl
yep, here: https://github.com/apertus-open-source-cinema/alpha-hardware/tree/master/Nikon-F-Mount
20:46
Q_
So I've been reading about the Canon EOS 7D mark II today. The 7D is popular to do movie recording. The most requested thing for a new model was a bigger sensor. It seems to have changed from 22.3 * 14.9 to 22.4 * 15.0. I see that the CMV12000 is 22.8 * 16.9. I would really like to see the CMV20000 as option, which seems to be full frame.
20:46
Bertl
the problem with the CMV20000 for the Beta is that it is too large
20:47
Bertl
i.e. the sensor is bigger than the entire camera :)
20:50
Bertl
besides, it is very expensive, so testing wouldn't be that easy
20:54
RahaMee
For the campaign, have you guys considered adding a 20% off axiom gamma version?
20:55
Bertl
we do not want to make promises beyond the Beta, we try to stick to what we know we can fulfill
20:58
RahaMee
contributions have slowed drastically, which might signal that you guys should expand your target backers.
20:59
RahaMee
No need to say when the gamma will be released, just that it will. Or maybe a ballpark figure like 2016.
21:00
Bertl
we will consider it
21:56
se6astian
good night
21:56
se6astian
changed nick to: se6astian|away
22:03
Q_
Looking at the gamma, it doesn't seem to have a standard screen module? I guess the beta also won't have any?
22:05
Bertl
what is a standard screen module?
22:05
intracube
RahaMee: the project shouldn't be focussing too far into the future
22:05
Q_
I guess you just expect people to connect some screen over hdmi or something?
22:05
intracube
otherwise people might say the tech is vapourware
22:07
Q_
Bertl: I'm just wondering how I'm going to know what I'm pointing the camera at.
22:07
Bertl
ah, well, either with a viewfinder (attached to the camera) or with a typical monitor
22:46
RahaMee
intracube: Is Axiom gamma considered too far into the future? What happens after the beta is delivered in April?
22:47
Bertl
then we will focus on the Gamma, incorporating all the things we learned and will learn from the Beta
22:47
intracube
lots of development, enhancements, tests on the beta hardware :)
22:49
RahaMee
Are shields being considered prototypes for the modules?
22:49
Bertl
in some way, yes
22:51
RahaMee
Well, you are offering an upgrade path so essentially, you're already offering a sort of discount on the gamma version.
22:55
RahaMee
I think it should just be emphasized so that the people who don't intend to contribute to the development can realize they will still get a good deal for contributing right now.
22:56
Bertl
why wouldn't people want to contribute to the development?
22:56
Bertl
it makes no sense to me to expect a Gamma, but having no interest in us developing the Beta :)
22:59
RahaMee
Well, if they feel they have no engineering experience then they might feel that they wouldn't be able to contribute enough to justify a 2200-2500 euro investment.
23:01
Bertl
you don't need engineering experience for the Beta
23:02
Bertl
the only reason I could see for somebody having no interest in the Beta, but being interested in a discount on the future Gamma is somebody 'just' interested in getting a cheap camera, and those folks are probably better off with one of the discount cameras anyway
23:03
RahaMee
How else could they contribute without engineering experience while still requiring a beta unit?
23:05
Bertl
we are building a camera for film makers, so feedback from all kinds of users (not just engineers) is essential for that
23:05
Bertl
and everybody, regardless of their technical skill, will be able to provide input and help shape the AXIOM
23:08
Bertl
I would even go so far as to say that input from the non-technical folks is more important than from technical folks, when it comes to actual usability
23:10
RahaMee
Maybe emphasize that as well. I'm really just giving these suggestions because two very good DPs I know haven't been willing to contribute; their argument is that camera manufacturers give them incentives to beta test their products
23:12
RahaMee
Send a message like: "With your help, you can have everything you ever wanted in a camera."
23:25
RahaMee
Also, when you guys say "Cameras today are technologically far superior, but less accessible", I don't know if I agree with that because I think they are very accessible now. The big problem is that it feels as if the bigger companies refuse to give us, and even prevent us from using, a camera's true potential
23:28
Bertl
yes, that is definitely part of the problem
23:29
Bertl
we've seen that many times, that a camera was artificially limited to keep the price high
23:29
Bertl
(at least the price of the not limited version)
23:29
Bertl
but there is also the problem of not being able/willing to handle corner cases
23:31
Bertl
a typical manufacturer (and that to some degree includes us) doesn't have the capacity to handle corner cases, but with the AXIOM, that doesn't mean that they cannot be handled/addressed outside of our development
23:35
RahaMee
Yes absolutely! So if we mention these arguments in a serious way and then reiterate how contributors can take action by helping to create the AXIOM, then I feel more people will open their eyes and help.
23:38
Bertl
while I think that we are doing exactly that on the web pages and in the campaign video, it can't hurt to reiterate that
23:55
RahaMee
It definitely is stated in the pages and campaign video, but I think it could be 'expressed' more effectively. The text and the video say what you're doing is something rational and that contributing would be rational... but people act on emotion and that's what I suggest when I say to 'reiterate'