# Following a rabbit hole to see how far it will go ...what is "real"?



## NGH (Nov 7, 2019)

Not positive this is the right place for this, considered 'The Coffee House' but that seems a little too scary a place for a newbie like me.

I pondered this week, while eating my turkey wrap, about the most recent raft of camera phone technology. I was thinking about how real things have to be to be accepted. Obviously when you take a picture using these latest advances, the 'intelligence' in the device takes over and, as well as optimizing the exposure settings to get the best results, it will also manipulate the data (according to themes or other settings) so that the resulting image is 'better'. Better, perhaps, than the original scene actually was? This has been a trend for a while, and even my 5-year-old camera has the ability, using two cameras, to combine data from both to allow me to manipulate focus and DoF.

Does this matter? The device is only doing what it can to make the best picture, whether it is an 'accurate' representation of what was seen by the lens is secondary, right?  As long as it makes a great picture.  For most people this isn't about learning a skill it is just about capturing a scene to be able to share.
This is, in many ways, no different from 'proper' photographers manipulating their images in Lightroom or even a darkroom - it is acceptable to make the most out of what was captured to create the best image. And (as long as you are honest about what you have done) it is also now acceptable to combine images to produce something removed from what was actually seen, taking the best elements from various shots.

There has been plenty of debate about whether this is still photography.

I was thinking about this and then took a slight tangent: surely the technology is available where the device knows where it is (via GPS) and, when the shutter is pressed, could skim all the similar images on the internet and, on top of the data it just captured from the scene, use the skimmed data to further enhance the new image? This could (hypothetically) be done without infringing copyright, absorbing data from a multitude of variations to produce something new and unique - still acceptable? Surely it's not that different from what could be achieved in, for example, Photoshop?

Taking it a step further, does such a device even need to be in the place where the image is taken? A user could just initiate a picture - "make me an image of the Grand Canyon from the West on a spring morning" - and the device scrapes all the images it can find and, through amalgamation and user preferences, makes a new and previously unseen image of the Grand Canyon.

With time and appropriate intelligence, both hypothetical ideas could work and produce amazing pictures - would they be acceptable as 'photographs' in mass population terms (excluding us purists)?

At what point does a line get drawn? ...If at all


----------



## Braineack (Nov 7, 2019)

is this even real life? are we in the matrix?   who cares?


----------



## johngpt (Nov 7, 2019)

NGH said:


> Not positive this is the right place for this, considered 'The Coffee House' but that seems a little too scary a place for a newbie like me.
> 
> I pondered this week, while eating my turkey wrap, about the most recent raft of camera phone technology. I was thinking about how real things have to be to be accepted. Obviously when you take a picture using these latest advances, the 'intelligence' in the device takes over and, as well as optimizing the exposure settings to get the best results, it will also manipulate the data (according to themes or other settings) so that the resulting image is 'better'. Better, perhaps, than the original scene actually was? This has been a trend for a while, and even my 5-year-old camera has the ability, using two cameras, to combine data from both to allow me to manipulate focus and DoF.
> 
> ...


Nigel, I think you're on to something here. You should delve into coding and develop the characteristics you've mentioned. I think you'll become the next Jobs or Gates.


----------



## Designer (Nov 7, 2019)

NGH said:


> Obviously when you take a picture using these latest advances, the 'intelligence' in the device takes over and, as well as optimizing the exposure settings to get the best results, it will also manipulate the data (according to themes or other settings) so that the resulting image is 'better'. Better, perhaps, than the original scene actually was? This has been a trend for a while ...



This discussion, in various iterations, has been going on since the second half of the 19th Century.  You're a bit late to the party, but it's o.k. because I don't know if it has been settled yet.

How photography evolved from science to art


----------



## NGH (Nov 7, 2019)

Designer said:


> NGH said:
> 
> 
> > Obviously when you take a picture using these latest advances, the 'intelligence' in the device takes over and, as well as optimizing the exposure settings to get the best results, it will also manipulate the data (according to themes or other settings) so that the resulting image is 'better'. Better, perhaps, than the original scene actually was? This has been a trend for a while ...
> ...



I don't think I'm asking whether photography is art, or even whether photo manipulation is photography (or art); I thought I had steered away from that. I was just thinking that, if the technology allowed someone to make pictures/images from some algorithm instead of what was in front of them, whether there is a line where it isn't a photograph (beyond the literal definition of light drawing) - it's just a fun muse for the idle mind is all; not trying to start a serious debate.


----------



## NGH (Nov 7, 2019)

johngpt said:


> NGH said:
> 
> 
> > Not positive this is the right place for this, considered 'The Coffee House' but that seems a little too scary a place for a newbie like me.
> ...



Thanks, I will get my people right on it - best get that patent in first


----------



## Jeff15 (Nov 7, 2019)

How long is a piece of string...?????


----------



## NGH (Nov 7, 2019)

Jeff15 said:


> How long is a piece of string...?????


About that a bit more


----------



## Derrel (Nov 7, 2019)

Let's not waste scarce brain cells debating how long a piece of string is. Let's focus instead upon the classics, such as "which is heavier, a pound of feathers or a pound of gold?", or my favorite, "how many angels could dance on the head of a pin?"

Computational photography is what you are referring to, and it is an emerging field.
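For anyone curious what "computational photography" means in practice, here is a toy sketch of one of its staple tricks: merging an exposure bracket into a single estimate of scene brightness, so that highlights clipped in a long exposure can be recovered from a short one. The function name and all the numbers are made up for illustration; real camera pipelines are far more sophisticated.

```python
def merge_exposures(frames, exposure_times):
    """Scale each frame by its exposure time to estimate scene
    radiance, then average the estimates pixel by pixel."""
    n = len(frames)
    merged = []
    for i in range(len(frames[0])):
        merged.append(sum(f[i] / t for f, t in zip(frames, exposure_times)) / n)
    return merged

# Simulated scene with one bright region; the sensor clips at 1.0.
scene = [0.2, 1.5, 4.0]
shots = [[min(v * t, 1.0) for v in scene] for t in (0.1, 0.5)]
merged = merge_exposures(shots, [0.1, 0.5])
# merged is approximately [0.2, 1.5, 3.0]: the short exposure
# partially recovers the highlight the long exposure clipped.
```

The same idea, dressed up with weighting functions and alignment, is how phone cameras quietly build the 'better than the scene' images the thread is asking about.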


----------



## NGH (Nov 7, 2019)

Derrel said:


> Let's not waste scarce brain cells debating how long a piece of string is. Let's focus instead upon the classics, such as "which is heavier, a pound of feathers or a pound of gold?", or my favorite, "how many angels could dance on the head of a pin?"
> 
> Computational photography is what you are referring to, and it is an emerging field.



Pound?  I thought those were banned and only kilograms were allowed 

Yes, it's computational photography with a twist


----------



## weepete (Nov 7, 2019)

NGH said:


> Not positive this is the right place for this, considered 'The Coffee House' but that seems a little too scary a place for a newbie like me.
> 
> I pondered this week, while eating my turkey wrap, about the most recent raft of camera phone technology. I was thinking about how real things have to be to be accepted. Obviously when you take a picture using these latest advances, the 'intelligence' in the device takes over and, as well as optimizing the exposure settings to get the best results, it will also manipulate the data (according to themes or other settings) so that the resulting image is 'better'. Better, perhaps, than the original scene actually was? This has been a trend for a while, and even my 5-year-old camera has the ability, using two cameras, to combine data from both to allow me to manipulate focus and DoF.
> 
> ...



Well, that technology already sort of exists: it's called photogrammetry, and it's a way of making 3D models from photographs or videos. Currently it only takes the information from one camera, but technology can be quite amazing and we don't know where it will take us in the future. I'm pretty sure the output would still be a 3D model, but eventually they'll probably get photorealistic rendering.
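The geometric step at the heart of photogrammetry (and of the dual-camera depth tricks mentioned earlier in the thread) can be sketched in a few lines: a point seen from two camera positions shifts by a disparity that encodes its depth, via the rectified-stereo relation depth = focal × baseline / disparity. The function name and numbers below are made up for illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point seen in a rectified stereo pair:
    nearby points shift a lot between the views, distant ones barely."""
    return focal_px * baseline_m / disparity_px

# A point 4 m away, cameras 0.125 m apart, 800 px focal length:
# disparity = 800 * 0.125 / 4 = 25 px, and we can invert it back.
print(depth_from_disparity(800, 0.125, 25.0))  # prints 4.0
```

Repeat that over many matched points (and many views) and you get the point clouds photogrammetry turns into 3D models.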


----------



## limr (Nov 7, 2019)

What is real?

Nothing. Or everything. 

Using an algorithm to create photos:
This Website Generates AI Portraits of People Who Don't Exist


----------



## limr (Nov 7, 2019)

PS - the human portraits are freakishly real. Or should I say "real"? There is a cat version as well. Not so real. By any definition.


----------



## Derrel (Nov 7, 2019)

Wow, those artificially-created human portraits do indeed look exceptionally real.


----------



## NGH (Nov 7, 2019)

Derrel said:


> Wow, those artificially-created human portraits do indeed look exceptionally real.



They do indeed. It reminds me of something a friend said back when I started working in computers. He said "if you set a computer to randomly populate pixels on a screen and left it for an infinite amount of time; at some point it will show a perfect image of Margaret Thatcher". I guess you can tell how long ago that was... turns out it wasn't so much of a joke
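That quip is the infinite-monkey theorem applied to pixels, and a rough back-of-the-envelope sketch shows just how much work the word "infinite" is doing. The screen size and colour depth below are my own assumptions:

```python
import math

# Hypothetical numbers: a 640x480 screen with 24-bit colour.
pixels = 640 * 480
colours = 2 ** 24

# The chance that a single random fill exactly matches one target
# image is (1/colours) ** pixels; work in log10 so the exponent
# stays representable as an ordinary float.
log10_attempts = pixels * math.log10(colours)
print(f"roughly 1 in 10^{log10_attempts:,.0f} per attempt")
```

That works out to a denominator with over two million digits, which is rather different from the way today's generative models get there: they learn the statistics of real faces instead of waiting on pure chance.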


----------



## Braineack (Nov 8, 2019)

NGH said:


> Derrel said:
> 
> 
> > Wow, those artificially-created human portraits do indeed look exceptionally real.
> ...



not at all.


----------



## Tim Tucker 2 (Nov 10, 2019)

Well, I think that the recognition of beauty and the response to it is a purely human trait, so I wonder what the point and the outcome of removing the human touch from the creation of images actually is. Are they more imaginative, more real? Do they display a more fundamental understanding of a place, hold a more special memory?

The second point is that we are so sold on the idea of technology and how it creates images that we seem to think it's a purely technical exercise involving logic and maths. It's actually driven by understanding the nature of optical illusion and how to fool the eye into thinking what it sees is real, which is much easier with the human face than any other object because we are so predisposed to recognise it and see it *correctly*, even when it's fundamentally flawed. How a human face should look is so imprinted in your memory that it's difficult to see past it; your brain corrects the flaws. Spot the obvious distortion:


----------



## Designer (Nov 10, 2019)

You've got her eyes on upside-down.


----------



## snowbear (Nov 10, 2019)

NGH said:


> They do indeed. It reminds me of something a friend said back when I started working in computers. He said "if you set a computer to randomly populate pixels on a screen and left it for an infinite amount of time; at some point it will show a perfect image of Margaret Thatcher". I guess you can tell how long ago that was... turns out it wasn't so much of a joke


Give an infinite number of rednecks an infinite number of shotguns and an infinite number of street signs, and they will reproduce all the great literary works in braille.


----------

