First production light field cameras arrive next year
This topic has 36 replies, 9 voices, and was last updated 12 years, 2 months ago by sleeping.
October 20, 2011 at 12:16 am #2445 CauseISaidSo (Participant)
Lytro wants to reinvent photography (cnn.com)
Light field cameras are those that have after-the-fact adjustable focus (among other things). We’ve discussed them in at least a couple of threads recently, but I didn’t know they were this close to actual production. Available now for pre-order at $399 (8GB) and $499 (16GB).
And something else I didn’t know about them, but in hindsight should’ve been somewhat intuitive (FTA): “To answer one reader’s question: yes, the Lytro camera takes 3-D images. According to Ng, that feature will come via software update soon after the camera ships early next year.”
As Arte Johnson said, “Verrry interesting…”
October 21, 2011 at 8:35 pm #41667 chupathingie (Participant)
I didn’t see a mention of resolution on their site, but I’m running Internet Explorer Barney Rubble Edition at work and the page is nigh unreadable due to jumbled, overlapping text…
October 21, 2011 at 9:05 pm #41668 CauseISaidSo (Participant)
You didn’t miss it, chup. There’s also no mention of it on their website. The closest thing is “Light Field Resolution”, which they list as “11 Megarays”, whatever that means. If the samples in their gallery are indicative of the final product, it doesn’t appear to have very good resolution at all.
October 21, 2011 at 11:20 pm #41669 orionid (Participant)
It’s probably not much better than 640×480. It’s still an infant technology, and the heavier-tech articles I’ve read about it put most of the processing power into the depth of the image rather than X-by-Y resolution. To get any good production out of the focal depth, they’ll need a significant Z-axis behind the lens; speaking digitally, more layers of sensor. Basically, 640×480 at 35 layers deep puts a single exposure at the same amount of graphical information as a 10.7 MP flat image, or roughly their advertised 11 Megawhatevers. I’m just curious how they get around issues of self-filtering, unless the sensor is arranged like a well-matrixed sponge, but even then you’d have issues of side shadowing from off-focus areas.
I’m cautiously optimistic about the technology. I figure if it works, within five years one of the big players will either invent a better way to do it or buy them out for a bajillion dollars, and then the technology will really take off. Just remember that nothing is a slam-dunk in the microprocessor industry. The Pentium was supposed to leave Intel as the only game in town. The Athlon MX was touted as the Intel-killer. Both were “ooh, that’s cool” for a month until something better came out from a third party.
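The 640×480-at-35-layers arithmetic above checks out against the advertised spec. A quick sanity check (the spatial resolution and layer count are orionid’s guesses, not published Lytro numbers):

```python
# Back-of-the-envelope check: a 640x480 spatial grid with ~35 directional
# samples ("layers") per pixel carries about the same raw information as
# the advertised 11-megaray light field. Both figures below are guesses
# from the post above, not manufacturer specs.
width, height = 640, 480   # assumed spatial resolution
layers = 35                # assumed directional samples per pixel

rays = width * height * layers
print(f"{rays / 1e6:.1f} million rays")  # -> 10.8 million, roughly the "11 Megarays"
```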
October 22, 2011 at 12:28 am #41670 chupathingie (Participant)
Raytrix ( http://www.raytrix.de/index.php/Cameras.html ) supposedly was first to market, and they boast some pretty decent resolutions on the high-end models (~8 MP), but list no prices whatsoever (translation: you don’t want to know). Their high-end model has a Nikon F-mount for the main objective. I have yet to find a cut-away or some other basic theory of operation for these beasties. So many questions… what’s the effective f-ratio? Is CA better or worse? Is expanded focal depth the only advantage? Etc., etc.
edit: I realise those are questions more of optics than sensing, but from what I’ve read so far the technology uses many micro-lenses, so I’m not really sure whether the chip itself registers z-data (would love for that to be the case, tho) or whether it’s essentially a complex camera that takes, say, 64 concurrent images and then processes them into a final, higher-resolution image. At first blush, this looks more like a focus-stacking software package with complex hardware thrown into the mix.
I saw a link somewhere to the dev’s thesis publication on the topic. If I can find it again I’ll post it. I could use some reading material.
October 24, 2011 at 2:10 pm #41671 chupathingie (Participant)
Oh.. haha… it’s right there on their site…
http://www.lytro.com/renng-thesis.pdf

February 6, 2012 at 5:39 am #41675 orionid (Participant)
I’m bringing up a necro thread because I just had an interesting thought about these things.
Depending on how the visual information is stored, it won’t be long until someone develops an algorithm that uses sharpness and edge detection, layer by layer, to determine visual depth. Once you have that info, you can start making adjustments based on visual depth, including dynamic lighting effects, all in post. Once this technology takes off, saying it’ll revolutionize photography will be a massive understatement.
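The per-layer-sharpness idea above is essentially depth-from-focus, and it can be sketched in a few lines. A minimal illustration with synthetic data (the Laplacian sharpness measure and the tiny test images are my choices, not anything from Lytro):

```python
# Sketch of the idea above: given a stack of images focused at different
# depths, estimate per-pixel depth by picking, for each pixel, the layer
# where the image is locally sharpest. The winning layer index serves as
# a coarse z-map. All data here is synthetic.
import numpy as np

def sharpness(img):
    """Local sharpness via the squared Laplacian (a simple edge detector)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def depth_from_focus(stack):
    """stack: list of 2-D arrays, one per focal layer.
    Returns the per-pixel index of the sharpest layer."""
    scores = np.stack([sharpness(img) for img in stack])  # (layers, H, W)
    return scores.argmax(axis=0)                          # sharpest layer wins

# Synthetic focal stack: layer 2 is "in focus" (high-frequency checkerboard),
# the other layers are featureless, as a heavily defocused layer would be.
h = w = 8
flat = np.ones((h, w))
checker = np.indices((h, w)).sum(axis=0) % 2.0
stack = [flat, flat, checker, flat]

print(depth_from_focus(stack))  # all 2s: layer 2 is sharpest everywhere
```

Real focal stacks would need windowed sharpness scores and some smoothing, but the layer-wise argmax is the core of it.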
February 6, 2012 at 6:27 am #41676 chupathingie (Participant)
Oh yes… z-buffer data is very flexible… I hope I’m not speaking Greek with that. By way of explanation: most 3D graphics packages allow you to define extra channels in your output frames. This lets you have an alpha channel, say, for compositing CGI with live action (rotoscoping). You can also dump the z-position of each pixel relative to the camera into a z-channel, which can then be used (and quite often is) for post-processing in exactly the manner orionid has described. If your software can build a standard z-channel, importing your images into 3DSMAX, Maya, Blender, etc. and combining them with other images or CGI is going to be child’s play. Altering the lighting is just as easy, as you would then have a 3D dataset for everything in the image.
Hmmm… that means the camera will also function as a 3D scanner…
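The z-channel post-processing described above boils down to per-pixel math keyed on depth. A minimal sketch of one such effect, depth-based fog, with made-up data (the linear fog model and all the numbers are illustrative assumptions):

```python
# Sketch of the z-channel trick described above: once every pixel carries
# a depth value, effects like fog become simple per-pixel math in post.
# Linear fog: the deeper the pixel, the more it blends toward the fog
# colour. All values here are made up for illustration.
import numpy as np

def apply_fog(rgb, z, fog_colour=(0.7, 0.8, 0.9), z_near=0.0, z_far=10.0):
    """rgb: (H, W, 3) image in [0, 1]; z: (H, W) camera-space depth."""
    t = np.clip((z - z_near) / (z_far - z_near), 0.0, 1.0)  # 0 near, 1 far
    return rgb * (1 - t)[..., None] + np.asarray(fog_colour) * t[..., None]

rgb = np.zeros((2, 2, 3))       # a black 2x2 image
z = np.array([[0.0, 5.0],
              [10.0, 20.0]])    # the per-pixel "z-channel"
out = apply_fog(rgb, z)
print(out[0, 0], out[1, 1])     # near pixel stays black; far pixel is pure fog
```

Relighting works the same way: with depth per pixel you can reconstruct positions and re-evaluate a lighting model in post.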
February 6, 2012 at 5:10 pm #41677 fluffybunny (Participant)
I could be very wrong here, but I’ve always assumed (since I first heard of this stuff) that they were just using a shallow depth of field and varying the focus over rapid image samples, i.e. maybe a video-type processing/sensor arrangement and a standard lens. Would that not achieve the same thing?
As previously stated, could be a huge GCE on my part and I don’t have time to research it.

February 6, 2012 at 6:07 pm #41678 chupathingie (Participant)
> GCE
I’ve not seen anyone use that acronym since I was in nuke school back in ’85. Getting a GCE on any question on an exam was guaranteed to get someone mandatory extra study to the tune of 10 hrs a week.
February 6, 2012 at 6:44 pm #41679 chupathingie (Participant)
> I could be very wrong here but I’ve always assumed (since I first heard of this stuff) that they were just using shallow depth of field and varying the focus over rapid image samples, ie maybe a video type processing/sensor arrangement and a standard lens. Would that not achieve the same thing?
> As previously stated, could be a huge GCE on my part and I don’t have time to research it.
I had to go digging, but the information for the image is gathered in a single exposure. There’s a microlens array in front of the sensor that redirects light from different directions to the underlying pixels, which are then assembled into a final image via software. So not only are you capturing the light from the image, you are also capturing the direction the light is coming from. The downside is that it takes many more pixels on the sensor to produce a single pixel in the final image, which explains the camera’s reduced resolution.
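The single-exposure capture described above is what makes after-the-fact refocusing possible: the microlens array effectively records a low-resolution sub-aperture image per direction, and Ng’s thesis refocuses by shifting each sub-aperture image in proportion to its angular offset and summing. A toy sketch of that shift-and-sum idea (the integer shifts and synthetic dot data are simplifications of mine):

```python
# Toy shift-and-sum refocus: the light field is stored as one small image
# per viewing direction (u, v). Refocusing to a new depth shifts each view
# by an amount proportional to (u, v), then averages. Synthetic data only;
# integer pixel shifts are used for brevity.
import numpy as np

def refocus(subviews, alpha):
    """subviews: dict {(u, v): 2-D image}. alpha: refocus parameter
    (0 = focal plane as captured)."""
    acc = None
    for (u, v), img in subviews.items():
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subviews)

# Tiny synthetic light field: a bright dot whose position parallax-shifts
# with the viewing direction (u, v), as an out-of-focus point would.
subviews = {}
for u in (-1, 0, 1):
    for v in (-1, 0, 1):
        img = np.zeros((9, 9))
        img[4 + u, 4 + v] = 1.0   # dot moves with viewing direction
        subviews[(u, v)] = img

sharp = refocus(subviews, alpha=-1.0)  # shifts cancel the parallax
print(sharp[4, 4])  # 1.0: all nine views stack onto one pixel (in focus)
```

With alpha=0 the same dot smears across nine pixels instead, which is the defocused rendering; that one parameter is the software “focus ring”.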
February 6, 2012 at 9:32 pm #41680 orionid (Participant)
> GCE
> I’ve not seen anyone use that acronym since I was in nuke school back in ’85. Getting a GCE on any question on an exam was guaranteed to get someone mandatory extra study to the tune of 10 hrs a week.
… but was still better than the mythical WIAS.
February 6, 2012 at 10:14 pm #41681 chupathingie (Participant)
> … but was still better than the mythical WIAS.
That one is lost on me.
February 6, 2012 at 11:05 pm #41682 ravnostic (Participant)
I’m just wondering, when it gets there, what this is going to do for depth of field in macro imagery…
February 7, 2012 at 12:14 am #41683 fluffybunny (Participant)
> … but was still better than the mythical WIAS.
> That one is lost on me.
Yeah, me too; class of ’83 here, so it must be something the younger nukes are hearing. The only other one I can remember is RTFQ.
The topic ‘First production light field cameras arrive next year’ is closed to new replies.