Cambodia – 2000


This week I have been going through the archives for re-photography purposes, as on my next trip I will be travelling up to Anlong Veng and I want to ‘remember’ our first visit there. I plan to retrace our original road trip from Siem Reap to Anlong Veng.

As noted elsewhere, we first visited the ex-Khmer Rouge ‘Reconciliation Area’ around Anlong Veng in March 2000. Ingrid, Victoria (aged 10) and I made the trip, with Sarath, Gunnar and other Save the Children personnel. We were also accompanied by National TV and had a small, armed Army escort. In 2000 there were landmines on both sides of most of the 120km dirt track; now there is a road. In 2000 the trip took over 6 hours. Now it should take less than 2.

This post is about the images I took, both from a technical perspective and in terms of how I edited the original series.

I used a Nikon D1, arguably the first ‘professional’ digital camera. It created 2.7-megapixel images, and I had the so-called ‘holy trinity’ of zooms to accompany it. Unfortunately, I wasn’t smart enough to shoot RAW (partly because I didn’t have any software at the time to process the files). So everything is JPG, shot at 2000 x 1312 pixels (the D1 had effectively an APS-C sensor, not a full frame one).

Straight away, I was pleasantly surprised by the technical quality of the images. With the passage of time, they now seem to me to have an almost film-like feel.

The colour balance is ‘as shot’, and the images have only been tweaked for clarity and contrast. I have also restored everything to the full frame size, rather than the rather ‘random’ crops I used at the time.

There is an entire sequence on a deserted Khmer Rouge camp. I am looking to tell Sarath’s stories of his survival in the forest in Khmer Rouge times, and these images could be very helpful to that part of the project.

Interestingly, when I first documented the trip, I left many of these images out, simplifying the edit mainly to schools and people.

It’s clear that I brought preconceptions to that edit, focusing on just the school project rather than including a broader, historical context.

I also paid attention to details, as in the butterflies, which again I omitted from the series I published.

However, my interest in what we now call ‘street photography’ was alive and well.

There was no road, just a dirt track for almost 120 kilometres.

And there were mines everywhere, with The HALO Trust starting clearance.

Looking at this work suggests once again that it is not the camera, it is the photographer that is the key (as if I needed reminding of that fact!).

But it is not just about capturing the images. It is also about the subsequent edits and choices, which matter as I consider the ‘Landings Exhibition’, the Workshop and the Publication for this module.

I still have the D1, by the way, and the D1X, D2X …

The full gallery is here.


Artificial Intelligence & Photography


It is hard to consider any photograph completely devoid of human intervention. Images taken on Mars are only possible because humans created the recording and broadcasting equipment. Monkeys taking selfies … well, however you look at it, a human designed the camera and the capture setup.

Yet it is also clear that we are increasingly aided by technology. Cameras have long had programs of various descriptions. Image manipulation software, facial recognition and so forth muddy the waters of authorship.

It would appear to me, however, that only with the advent of Artificial Intelligence (AI) featuring deep machine learning could computers actually generate new imagery without human intervention. Yes, humans design the learning algorithms, and provide ‘seed’ imagery. But entirely new images and ways of learning are created.

The header, above, shows two such images – imaginary people that do not exist.

As the system learns, it can create new images, and search for new seeds over the internet, without human intervention. As is the case with AI language research, when computers communicate directly without human interaction, novel things happen. For example, they get aggressive.

In 2017, [Google Deep Mind] researchers tested its willingness to cooperate with others, and revealed that when DeepMind feels like it’s about to lose, it opts for “highly aggressive” strategies to ensure that it comes out on top. The Google team ran 40 million turns of a simple ‘fruit gathering’ computer game that asks two DeepMind ‘agents’ to compete against each other to gather as many virtual apples as they could. They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.

And remember the tabloid scare that ‘researchers switched off computers as they were afraid that they were learning their own language‘? Not exactly true: Facebook was simply more interested in computers that could communicate with humans, hence the switch-off.

NVIDIA, amongst others, is developing GANs (Generative Adversarial Networks), which allow computers to self-train in image making.

Here is the abstract of a recent paper:

We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024². We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10.

Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.

Note: CelebA is a huge dataset of human faces for researchers’ use in facial recognition, facial attribute recognition and so forth. Found here.
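To make the ‘progressive growing’ idea concrete, here is a toy sketch of my own (in Python, not the paper’s actual code) of the resolution schedule such a network steps through:

# Toy sketch of progressive growing's resolution schedule: training
# starts tiny and doubles until the target resolution is reached.
resolution = 4
while resolution <= 1024:
    print(f"train generator & discriminator at {resolution} x {resolution}")
    # In the paper, new layers for the next resolution are faded in
    # gradually, blending old and new outputs with a weight alpha.
    resolution *= 2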

Original NVIDIA Video

Put in simpler words, the system uses competition to create new images.

A GAN consists of two parts, a discriminator and a generator. The former learns how to distinguish fake from real objects, while the latter creates new content and tries to fool the discriminator with novel images that it hasn’t seen before.
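For anyone curious to see that two-part tug-of-war in code, here is a minimal sketch of my own in PyTorch, using random noise as stand-in ‘real’ data. It is an illustration of the principle, not any production system:

import torch
import torch.nn as nn

# Generator: maps a 16-dimensional noise vector to a fake 'image'.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28), nn.Tanh())

# Discriminator: scores how 'real' a (flattened) image looks.
D = nn.Sequential(nn.Linear(28 * 28, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(32, 28 * 28)  # stand-in for a batch of real photographs

for step in range(100):
    # 1. Teach the discriminator to tell real from fake.
    fake = G(torch.randn(32, 16)).detach()
    d_loss = loss(D(real), torch.ones(32, 1)) + loss(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Teach the generator to fool the discriminator.
    fake = G(torch.randn(32, 16))
    g_loss = loss(D(fake), torch.ones(32, 1))  # generator wants 'real' verdicts
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

Note that the generator never sees the real data directly; it improves only through the discriminator’s verdicts, which is what makes the setup ‘adversarial’.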

This is quite different to ‘traditional’ generative programs that create new images that are firmly based on human-defined rules – for example Fractals, based on the Mandelbrot Set.
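A Mandelbrot image, for example, follows entirely from one fixed rule. Here is a minimal escape-time sketch of my own in Python:

# A point c is in the Mandelbrot Set if z -> z*z + c stays bounded
# when iterated from z = 0; the escape count drives the colouring.
def mandelbrot(c, max_iter=50):
    z = 0
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # provably escapes to infinity
            return i
    return max_iter          # treated as 'inside' the set

# Crude ASCII render of the region -2..1 by -1..1.
for row in range(20):
    line = ""
    for col in range(60):
        c = complex(-2 + 3 * col / 60, -1 + 2 * row / 20)
        line += "#" if mandelbrot(c) == 50 else " "
    print(line)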

I was fascinated by these, years ago, and here is an example made in 2006:

I found a very helpful and comprehensive resource which lists a wide range of computer Generative Art programs and sources.

Generative Art refers to art created with the use of an autonomous system, which independently determines features of the artwork that would otherwise require decisions made directly by the artist. It is almost always ‘algorithmic’, in that humans write an algorithm or equation to start the process – with unpredictable end results.
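A trivial sketch of my own makes the point: the artist fixes the rules, but chance makes the decisions, so every run yields a different ‘artwork’.

import random

# The human writes the rule (a random walk on a 40 x 20 canvas);
# the system decides where the marks actually land.
W, H = 40, 20
canvas = [[" "] * W for _ in range(H)]
x, y = W // 2, H // 2

for _ in range(500):
    canvas[y][x] = "*"
    x = (x + random.choice([-1, 0, 1])) % W  # wrap at the edges
    y = (y + random.choice([-1, 0, 1])) % H

print("\n".join("".join(row) for row in canvas))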

GANs take this further, by minimising the impact of human influence via ‘learnt’ machine determined inputs.

Another algorithmic approach is Agent-Based Modelling, which draws on Complexity Theory, another area I was tangentially involved in through my business activities in the late 1990s / early 2000s. In particular, US Marine Corps LtGen (Ret) Paul van Riper applied the lessons of complexity theory to warfare, which directly inspired the ISAAC and EINSTein projects. EINSTein allowed the programming of various war-fighting parameters – hardware, firepower – as well as human elements – no harm to civilians, don’t leave anyone behind.

The scenario then allowed the computer to run thousands of ‘red on blue’ war-games. Interestingly, Van Riper demonstrated that asymmetric warfare needed to be addressed more thoroughly by the Marine Corps (think terrorism). But that is out of scope here 🙂
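To give a flavour of how such agent-based models work, here is a toy sketch of my own (not the EINSTein code): each agent simply steps towards its nearest opponent, and richer behaviours are layered onto rules like this.

import random

# Toy 'red on blue' step on a 20 x 20 grid: every agent moves one
# cell towards its nearest opponent. Real systems such as EINSTein
# weight dozens of competing behaviours per agent.
red = [(random.randint(0, 20), random.randint(0, 20)) for _ in range(5)]
blue = [(random.randint(0, 20), random.randint(0, 20)) for _ in range(5)]

def step(agents, enemies):
    moved = []
    for (x, y) in agents:
        tx, ty = min(enemies, key=lambda e: abs(e[0] - x) + abs(e[1] - y))
        x += (tx > x) - (tx < x)  # -1, 0 or +1 towards the target
        y += (ty > y) - (ty < y)
        moved.append((x, y))
    return moved

for tick in range(10):
    red, blue = step(red, blue), step(blue, red)
    print(f"tick {tick}: red={red} blue={blue}")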

The algorithms created interesting images.

Again, however, whilst the visualisations were different on every computer run, over time probability kicked in and the images tended to converge.

My point here is that, again, this falls into the Generative category of image creation, rather than AI.

…………………………………………….

Fredlake, Christopher & Wang, Kai. 2008. EINSTein Goes to War. Available at https://www.cna.org/CNA_files/PDF/D0018865.A1.pdf (Accessed 23.6.2018).

Karras, Tero; Aila, Tim; Laine, Samuli; Lehtinen, Jaakko (NVIDIA and Aalto University). 2018. Progressive Growing of GANs for Improved Quality, Stability, and Variation. Available at http://research.nvidia.com/publication/2017-10_Progressive-Growing-of. (Accessed 22.6.2018).

Komosinski, Maciej & Adamatzky, Andrew (Eds). 2009. Artificial Life Models in Software. Heidelberg: Springer.