With photography we are able to transpose a three-dimensional reality into two dimensions. And today, thanks to Artificial Intelligence, we are also able to do the reverse, that is, to obtain a 3D representation from 2D images. This is, in essence, the heart of the NeRF (Neural Radiance Field) system presented by Nvidia.
Nothing new, in absolute terms. But two factors make this technology sensational: the amount of input required and the time needed to obtain the final result, both drastically reduced compared to the technologies currently in use in this field. Thanks to the latest generation of artificial intelligence, NeRF is capable of reconstructing a three-dimensional image of an object from very few shots, and above all in a very short time. A dozen photos are enough, and from those NeRF can produce a three-dimensional representation within a few tens of milliseconds. The video below shows Nvidia's solution in action.
David Luebke, vice president of graphics research at NVIDIA, described the unique characteristics of NeRF as follows:
While traditional 3D representations such as polygon meshes are similar to vector images, NeRFs are like bitmap images: they densely capture how light radiates from an object or within a scene.
In that sense, Instant NeRF could be as important for 3D as digital cameras and JPEG compression have been for 2D photography, greatly increasing the speed, ease and scope of 3D capture and sharing.
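The "radiance field" Luebke describes can be pictured as a function that, given a 3D position and a viewing direction, returns a color and a volume density. The toy sketch below illustrates the idea with a randomly initialized two-layer network; the names (`radiance_field`, the input/output layout) are illustrative assumptions, not NVIDIA's actual implementation, which trains such a network on real photographs.

```python
import numpy as np

# Illustrative sketch of a NeRF-style radiance field:
#   F(position, view_direction) -> (rgb color, density sigma).
# Weights here are random; a real NeRF fits them to input photos.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 64))   # input: x, y, z, theta, phi
W2 = rng.normal(size=(64, 4))   # output: r, g, b, sigma

def radiance_field(position, view_dir):
    x = np.concatenate([position, view_dir])   # 5D query
    h = np.maximum(W1.T @ x, 0.0)              # ReLU hidden layer
    out = W2.T @ h
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))       # colors squashed to [0, 1]
    sigma = np.log1p(np.exp(out[3]))           # softplus keeps density >= 0
    return rgb, sigma

rgb, sigma = radiance_field(np.array([0.1, 0.2, 0.3]),
                            np.array([0.0, 1.0]))
```

Rendering an image then amounts to querying this function along camera rays and accumulating the returned colors weighted by density, which is why dense capture of "how light radiates" is the natural analogy to a bitmap.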
The speed with which NeRF renders objects can find application in many areas, revolutionizing and simplifying workflows: just think, for example, of the impact it can have on the gaming world, allowing 3D models to be created in a flash from static images.