Julia Vergazova, Nikolya Ulyanov


Digital project

Dataset of images collected from IP Surveillance Cameras, 3D-Photo-Inpainting neural network

2021

Conceptual description

Swamps are zones of geological instability, floating boundaries between land and water. They are also natural ecotones - sources of biodiversity, places of stress where ecosystems meet. Loss of biodiversity is linked to the ability of one species (humans) to cause global changes within an ecosystem, similar to fractures and tears in its tissue. The place where this change occurs is called the “critical zone”: the area of the Earth’s surface, a thin but vital skin of the planet, where organisms regulate the flow of resources necessary to sustain life. As a result of human impact on ecosystems, a rupture occurs in this self-regulating skin of the planet.
Modeling the fragile and heterogeneous skin of critical zones could be a challenge for algorithms and neural networks. Everything solid and material is currently being melted into data. We create reconstructions of natural landscapes, embedding another ecosystem into them - the ecology of machines. The boundaries of biological and digital skins are blurred like the surface of wetlands. Our reconstructions are hybrid patches that fill the gaps in the planet’s epidermis. They are populated by a variety of myths and weird inhabitants - datasets involved in rehearsing the end of the world and visualizing its recovery.
While living organisms on Earth gradually fill up with plastic, a new (bio)diverse utopia is being written by machines. It reconstructs the natural, stitching imaginary surfaces and myths over nature’s damaged parts and ruptures.

Example 1

Artwork technical description

We use surveillance camera images, which we feed into the 3D-Photo-Inpainting network - https://shihmengli.github.io/3D-Photo-Inpainting/
The main result of this network is usually a live video, an animated version of the original image. However, the network also has another, somewhat accidental and invisible byproduct: 3D point clouds (in the same way that oxygen is sometimes viewed as a lucky accident of photosynthesis).
We therefore study both these point clouds and the resulting animations.
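
A minimal sketch of how such a batch run might be scripted, following the usage published in the project repository (python main.py --config argument.yml). The folder names here are assumptions, and the exact configuration may differ between versions of the code:

    # Sketch: batch-run 3D-Photo-Inpainting on a folder of camera snapshots.
    # Assumes the repository has been cloned and its pretrained models downloaded.
    import shutil
    import subprocess
    from pathlib import Path

    REPO = Path("3d-photo-inpainting")        # local clone of the project
    SNAPSHOTS = Path("camera_snapshots")      # hypothetical folder of saved frames

    # Copy the source frames into the repository's input folder.
    for frame in SNAPSHOTS.glob("*.jpg"):
        shutil.copy(frame, REPO / "image" / frame.name)

    # Run the pipeline; depth maps, .ply point clouds and videos are written
    # to the output folders configured in argument.yml.
    subprocess.run(["python", "main.py", "--config", "argument.yml"],
                   cwd=REPO, check=True)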

In order to create 3D landscapes, we convert flat images of surfaces and landscapes into a 3D point cloud.
A 3D photo is a multi-layer representation of the original flat picture, used to synthesize novel views of the same scene. The network hallucinates colored points and depth in the regions that are occluded in the original view of the photo.
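
The underlying geometric step can be illustrated independently of the network: given an image and a per-pixel depth map, each pixel is back-projected into space through an assumed pinhole camera. The sketch below only illustrates this principle (the field of view, focal length and depth units are assumptions), not the network's own code:

    import numpy as np

    def rgbd_to_point_cloud(rgb, depth, fov_deg=60.0):
        """Back-project an RGB image (H, W, 3) and a depth map (H, W) into a
        colored point cloud, assuming a pinhole camera with the given
        horizontal field of view."""
        h, w = depth.shape
        f = 0.5 * w / np.tan(0.5 * np.radians(fov_deg))   # assumed focal length
        cx, cy = w / 2.0, h / 2.0                          # principal point
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / f
        y = (v - cy) * depth / f
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        colors = rgb.reshape(-1, 3) / 255.0
        return points, colors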

We analyze this point cloud in CloudCompare and process it in MeshLab to create surfaces and printable 3D objects, which represent the neural network's understanding of the nature and environment we are accustomed to.
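
The surface-reconstruction step can also be scripted. The sketch below uses Open3D's Poisson reconstruction as a stand-in for the interactive MeshLab/Meshmixer work; the file names are hypothetical:

    import open3d as o3d

    # Load the point cloud exported from the network (hypothetical file name).
    pcd = o3d.io.read_point_cloud("landscape.ply")

    # Poisson reconstruction needs normals; estimate them from local neighborhoods.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

    # Turn the cloud into a continuous surface that can be cleaned up and printed.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    o3d.io.write_triangle_mesh("landscape_mesh.ply", mesh)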

Source image and geographical position of the surveillance camera.

Video with reconstructed landscape

Post-production of the point cloud using MeshLab, Meshmixer.

Resulting representation of reconstructed landscape in machine optics.

Example 2

Source footage from camera

Post-production of the point cloud using MeshLab, Meshmixer

Resulting representation of reconstructed landscape in machine optics