At WWDC, Apple introduced an AI feature that can generate 3D images from 2D originals for viewing on the Vision Pro goggles.
The feature is one of several updates to the operating system for the augmented reality goggles that make up the visionOS 2 update, a preview of which can be seen on Apple's site.
It sits alongside a number of other updates to the Photos app, including SharePlay, which lets you view photos at the same time as a friend who also happens to have a Vision Pro on.
The 3D images are created from 2D originals using generative AI to 'fill in the blanks' for the second viewpoint, as opposed to the existing approach of capturing spatial images with binocular cameras (two lenses side by side). As Apple puts it: "With just a tap, your memories are brought to life with natural-looking depth and dimension."
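Apple hasn't published how the feature works, but conceptually a 2D-to-stereo conversion needs a second, horizontally offset viewpoint, and shifting pixels by estimated depth leaves gaps behind foreground objects; those gaps are the 'blanks' a generative model would fill. The sketch below is a minimal, assumption-laden illustration of that classical reprojection step in Python (the function name and the pre-computed disparity map are hypothetical, not Apple's pipeline), with only crude hole filling in place of any generative model.

```python
import numpy as np

def synthesize_right_eye(image: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Naive depth-image-based rendering: shift pixels by disparity, then fill holes.

    image:     (H, W, 3) uint8 array, the original 2D photo.
    disparity: (H, W) float array of per-pixel disparities (larger = closer to camera),
               assumed to come from some monocular depth estimator.
    """
    h, w, _ = image.shape
    right = np.zeros_like(image)
    zbuf = np.full((h, w), -np.inf)      # keep the nearest (largest-disparity) pixel at each target

    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            nx = int(round(x - d))       # near objects shift left in the right-eye view
            if 0 <= nx < w and d > zbuf[y, nx]:
                right[y, nx] = image[y, x]
                zbuf[y, nx] = d

    # Crude hole filling: reuse the nearest valid pixel to the left.
    # These disocclusion holes are exactly where a generative model would invent content.
    for y in range(h):
        last = image[y, 0]
        for x in range(w):
            if zbuf[y, x] == -np.inf:
                right[y, x] = last
            else:
                last = right[y, x]
    return right
```

Paired with the original as the left-eye image, the synthesized frame gives the stereo separation a binocular camera would have captured directly; the quality of the effect then rests almost entirely on how convincingly those disoccluded regions are filled in.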
To my eye, the result (probably because of Apple's choice of demo image) looks a little like one of those eerie sequences in a superhero movie where we establish what turned the hero or anti-hero into a lone fighter for justice or evil, as he occasionally swipes back over a hologram of a lost family member.
This feature is just one of the boosts to 3D content creation Apple announced in the full live WWDC presentation. The potentially more useful news for professionals concerned creative tools from Canon and Blackmagic that could help shoot Immersive Video.
Apple will be bringing spatial video editing tools to Final Cut Pro 'later this year', with a workflow for editing on the Mac and viewing on the Vision Pro.
All of this means more and more creatives might have to start asking just how to create meaningful content for augmented reality devices, while retaining some kind of directorial vision.
Apple can answer practical questions for consumers and professionals with each software update, but bigger questions remain about how these tools will reshape the language of photography and cinematography.