which is best for me

Alright, I think I'm going for the 6D, thanks everyone. Can someone tell me how good the video on the 6D is? I plan on getting a good mic, though.
 
I don't do video, but I have tested it and it looks nice. Be aware, though, that the 6D doesn't have a headphone jack for monitoring the audio.
 
Your examples are not at all straightforward. Did the comparisons use exactly the same lens, or a similar lens? Were they taken in exactly the same controlled ambient light, or under slightly varying lighting conditions? What adjustments were made to the JPEGs in-camera, and especially what in-camera noise reduction was used? All I see is a set of JPEG files that say "A" is better than "B" with no mention of how they were obtained in any way, shape, or form. No mention of how the cameras were set up or anything whatsoever.

So your theory is that in four different comparison experiments, all four people, for no explicable reason and in complete disregard of even the most fundamental tenets of science, took the time to change the noise reduction settings from the default and/or change the lighting of the scene DURING the experiment (?!), and they happened to do so in some way that favored the Canon camera by exactly the same 2 stops in all four cases?

Yes. Yes, that's much more likely than "they all used default settings and didn't randomly change the lights in the same scene in between bodies, like sane people."

The brand loyalty just gets really really ridiculous on this forum sometimes....
 

Actually, yes, it makes just as much sense. One of the basic tenets of scientific testing of this sort is that all things except one must be equal and demonstrable. To accept the theory put forth, it must be repeatable. Your suggestion that "they all used default settings and didn't randomly change the lights in the same scene in between bodies, like sane people" has not been demonstrated. Your hypothesis carries no more weight than any other put forth here.
 
One of the basic tenets of scientific testing of this sort is that all things except one must be equal and demonstrable.
Of the 100 or so peer reviewed journal articles I've read in the last year (including about a dozen marketing studies), zero of them have met this requirement. So I'm curious as to where you got it from.

By this logic, you can't compare Coke and Pepsi in a blind taste test EVER, because more than one thing will always be unequal: different factories, different amounts of sugar, different amounts of sodium, possibly different carbonation levels, etc. And the secret ingredients that aren't publicly published aren't demonstrable, because we don't know what they are.

To accept the theory put forth it must be repeatable.
Unlike the above requirement, this one is indeed an actual feature of good science. And conveniently, when it comes to D600 vs. 6D noise performance, the data is very well repeated. The 6D being about 2 stops better is the conclusion of 5 out of 6 reviewers (83%) considered in this thread. Not only did they repeat the overall conclusion about which body is better, they even repeated the exact amount by which it seems to be better.

edit: changed it to 5 out of 6, since we also have a person who responded to this thread saying they found similar results from personally shooting both.
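For anyone unfamiliar with the jargon: a "stop" is a doubling, so a 2-stop noise advantage means roughly equal noise at 4x the ISO. A minimal sketch of the arithmetic (the specific ISO pairing below is illustrative, not a measured result):

```python
import math

def iso_after_stops(base_iso: float, stops: float) -> float:
    """Each stop of low-light advantage doubles the usable ISO."""
    return base_iso * 2 ** stops

def stops_between(iso_a: float, iso_b: float) -> float:
    """How many stops apart two ISO settings are."""
    return math.log2(iso_b / iso_a)

# A body that is ~2 stops better at high ISO should look about as clean
# at ISO 12800 as its rival does at ISO 3200.
print(iso_after_stops(3200, 2))    # 12800
print(stops_between(3200, 12800))  # 2.0
```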

Your suggestion that "they all used default settings and didn't randomly change the lights in the same scene in between bodies, like sane people." has not been demonstrated.
You're absolutely right. I have not proven that the camera review blog community is NOT, in fact, a shadowy cabal of conspirators who systematically undermine their own experiments in order to all come up with a consensus conclusion in the exact same (incorrect) direction and magnitude in order to trick unwary internet viewers into buying the wrong camera.

I officially recant my statement that "the 6D absolutely has better noise performance than the D600." Instead, for most accurate results based on your own belief system, you should follow this step-by-step flow chart to come to your own conclusion:
[flow-chart image attachment]
 
Didn't know it didn't have a headphone jack =[ Now I'm not sure.
 
I've been watching some videos.

Seems like I'm going for the D600. Video is about equal, which is big to me, but for the action shots I will be taking, the D600 will be a little better, and I can always buy an add-on for the GPS.
 
One of the basic tenets of scientific testing of this sort is that all things except one must be equal and demonstrable.
Of the 100 or so peer reviewed journal articles I've read in the last year (including about a dozen marketing studies), zero of them have met this requirement. So I'm curious as to where you got it from.


It is called the scientific method. Try reading physics journals instead of psychology journals. In the true sciences exact control is necessary, and to qualify and prove a proposed theory you must be able to record and demonstrate the work that proves it, and/or others must be able to prove that theory via the same method. That work must be repeatable by others using your method. That is the way real science works.

That is why, for example, the work of Andrea Rossi, Martin Fleischmann, Stanley Pons, and others on cold fusion is still merely a theory. While it has been claimed that some have produced cold fusion, the reported methods have not been repeatable by anyone else.
 

I didn't say my journals were psychology journals... I read some of those, but also biomechanics, marketing, robotics, neurobiology and chemistry, and all sorts of other things in grab bag journals like Science and Nature. I also read some physics articles mostly for fun now and then.

I have never seen a peer reviewed journal article in ANY discipline that meets your stated requirements. And I have pretty good reason to believe that meeting your requirements is in fact theoretically impossible to do in any experiment. "Hold everything perfectly equal except one variable" is what they teach you in middle school science class as a simplified version of what actually happens / is realistic.

But prove me wrong. Post a single article from anywhere that shows an actual experiment where all but ONE variable is kept perfectly identically equal and where it is demonstrable that this is the case. Particle physics, basic chemistry, anything actually empirical (i.e. not pure math or computational modeling).




Hint: Usually people intentionally vary one variable, and the rest they ASSUME are ROUGHLY equal by means of statistically random sampling. They neither demonstrate this to actually be true, nor guarantee it in any absolute sense. Even in "hard sciences." That's why p value significance tests are the standard for "a real effect." It's all based on percentages and probabilities. If you could, in fact, perfectly hold everything constant except one thing, then you wouldn't need a p value. Your result would just be the result, period. You would only need to run one trial and be 100% confident.
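A concrete toy version of that hint, using only the Python standard library (the measurement numbers are invented for illustration): with noisy samples you compute how large the observed difference is relative to its standard error, which is the quantity a p-value is based on.

```python
import statistics

# Hypothetical repeated noise measurements from two camera bodies
# (arbitrary units; numbers invented for illustration).
body_a = [4.1, 3.9, 4.2, 4.0, 3.8, 4.1]
body_b = [5.0, 5.2, 4.9, 5.1, 5.3, 4.8]

mean_a = statistics.mean(body_a)
mean_b = statistics.mean(body_b)

# Standard error of the difference between the two sample means.
se = (statistics.variance(body_a) / len(body_a)
      + statistics.variance(body_b) / len(body_b)) ** 0.5

# A large |t| means the difference is unlikely to be random scatter;
# no single trial is ever "100% confident."
t = (mean_b - mean_a) / se
print(f"difference = {mean_b - mean_a:.2f}, t-statistic = {t:.1f}")
```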

Also, the part of this that is relevant to what originally brought this up in this thread: When the question you want to answer is something at a high level like "which brand should I buy?" then the appropriate variable that is varied is usually BRAND, which itself actually consists of dozens or hundreds of sub-variables, which doesn't matter, because you only have limited purchasing options. You can't buy a Nikon body construction with Canon autofocus and Olympus mirror system, etc. So holding those variables independently constant is dumb, impossible, and/or unnecessary.

To compare a Nikon and Canon body as well as is needed or really possible, you would get a Nikon lens, and a Nikon->Canon non-optical adapter, then take photos of the same objects with the cameras using the same lens and attached to the same tripod, most likely in default settings, with equal aperture/ISO/shutter, etc. And when comparing ISO noise, even having the same lens is more or less unnecessary, since lenses aren't really a source of noise.
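As a sketch of how such a sensor comparison is typically quantified: shoot the same evenly lit gray target with both bodies and measure the pixel standard deviation. The code below fakes the pixel data with a random generator (sensor B is constructed with twice the noise of sensor A) and converts the noise ratio to stops using the shot-noise rule of thumb that halving SNR costs about two stops; real tests use actual raw files, of course.

```python
import math
import random
import statistics

random.seed(42)

def synthetic_patch(mean: float, noise: float, n: int = 10_000) -> list[float]:
    """Simulate pixel values from a uniformly lit gray patch."""
    return [random.gauss(mean, noise) for _ in range(n)]

# Two simulated sensors shooting the same gray card; sensor B is built
# with twice the noise of sensor A.
patch_a = synthetic_patch(mean=128, noise=2.0)
patch_b = synthetic_patch(mean=128, noise=4.0)

noise_a = statistics.stdev(patch_a)
noise_b = statistics.stdev(patch_b)

# Rule of thumb for shot-noise-limited images: recovering SNR after the
# noise doubles takes ~4x the light, i.e. ~2 stops.
stops = 2 * math.log2(noise_b / noise_a)
print(f"noise ratio {noise_b / noise_a:.2f} ~ {stops:.1f} stops")
```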
 
Of the 100 or so peer reviewed journal articles I've read in the last year (including about a dozen marketing studies), zero of them have met this requirement. So I'm curious as to where you got it from.


It is called the scientific method. Try reading physics journals instead of psychology journals. In the true sciences exact control is necessary, and to qualify and prove a proposed theory you must be able to record and demonstrate the work that proves it, and/or others must be able to prove that theory via the same method. That work must be repeatable by others using your method. That is the way real science works.

That is why, for example, the work of Andrea Rossi, Martin Fleischmann, Stanley Pons, and others on cold fusion is still merely a theory. While it has been claimed that some have produced cold fusion, the reported methods have not been repeatable by anyone else.
I didn't say my journals were psychology journals... I read some of those, but also biomechanics, marketing, robotics, neurobiology and chemistry, and all sorts of other things in grab bag journals like Science and Nature. I also read some physics articles mostly for fun now and then.

I thought you meant journals like those of the American Association for the Advancement of Science and the National Academy of Sciences, The American Journal of Physics, the European Journal of Physics, or publications of the European Science Foundation.

I have never seen a peer reviewed journal article in ANY discipline that meets your stated requirements. And I have pretty good reason to believe that meeting your requirements is in fact theoretically impossible to do in any experiment. "Hold everything perfectly equal except one variable" is what they teach you in middle school science class as a simplified version of what actually happens / is realistic.

I'm not surprised that you have not seen such requirements spelled out in any true scientific journal. When one publishes in such journals, it is expected that readers of the paper already have a working knowledge of the subject, including methodology. Apparently you do not understand how a comparative analysis study is done. By your thinking, one can definitively state that one particular camera is better than the others based on viewing various photos, from various photographers, taken with various lenses in various lighting, and believe that such data supports a positive conclusion. Not only does that defy basic methodology, in which one would compare the various camera bodies using the same lens, the same subject, and the same lighting, so that the only variable producing different results is the body itself; it also defies common sense.

But prove me wrong. Post a single article from anywhere that shows an actual experiment where all but ONE variable is kept perfectly identically equal and where it is demonstrable that this is the case. Particle physics, basic chemistry, anything actually empirical (i.e. not pure math or computational modeling).

I have no reason to provide proof. You are the one who made the definitive claim. Provide the proof of your claim, and I quote: "6d is significantly better than the d600 for the low light situations you say you shoot most (live bands). As in probably about 2 stops better. And the 70D will be inferior to either of them for low light, being a crop sensor." If you want to know about scientific methodology, it is covered in General Science 101. If you want to learn how to do a credible comparative analysis, especially one dealing with physical properties, you should read up on the methodology for conducting one.

Hint: Usually people intentionally vary one variable, and the rest they ASSUME are ROUGHLY equal by means of statistically random sampling. They neither demonstrate this to actually be true, nor guarantee it in any absolute sense. Even in "hard sciences." That's why p value significance tests are the standard for "a real effect." It's all based on percentages and probabilities. If you could, in fact, perfectly hold everything constant except one thing, then you wouldn't need a p value. Your result would just be the result, period. You would only need to run one trial and be 100% confident.

Hint: you are talking about pseudoscience. Assume, as defined: "suppose to be the case, without proof." I personally think Oscar Wilde was correct with his definition: "When you assume, you make an ass out of u and me."

Einstein's theory of relativity was strongly supported in 1919 by Sir Arthur Eddington and his photographs of stars during a total eclipse. It was not until Gravity Probe B (conceived in the early 1960s) that more definitive proof was forthcoming. Russell Hulse and Joseph Taylor, the Wang group, and many others have all contributed to the evidence. Although the theory of relativity is now accepted, it has never been empirically proven and in all likelihood never will be. But the methodology used so far to support that theory has continued to be superior to your SWAG method.


Also, the part of this that is relevant to what originally brought this up in this thread: When the question you want to answer is something at a high level like "which brand should I buy?" then the appropriate variable that is varied is usually BRAND, which itself actually consists of dozens or hundreds of sub-variables, which doesn't matter, because you only have limited purchasing options. You can't buy a Nikon body construction with Canon autofocus and Olympus mirror system, etc. So holding those variables independently constant is dumb, impossible, and/or unnecessary.

To compare a Nikon and Canon body as well as is needed or really possible, you would get a Nikon lens, and a Nikon->Canon non-optical adapter, then take photos of the same objects with the cameras using the same lens and attached to the same tripod, most likely in default settings, with equal aperture/ISO/shutter, etc. And when comparing ISO noise, even having the same lens is more or less unnecessary, since lenses aren't really a source of noise.

What a load of caca-pooh-pooh. Your empirical statement, quote: "6d is significantly better than the d600 for the low light situations you say you shoot most (live bands). As in probably about 2 stops better. And the 70D will be inferior to either of them for low light, being a crop sensor." has nothing to do with autofocus, mirror systems, etc., nor with the comparative testing of various sensors. Aperture values are aperture values: 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, 45, 64 are the same full stops no matter the body. They are a product of physics and mathematics. The ISO scale is also the same from body to body; it too is a product of physics and mathematics. As for the lens used, there are various methods of achieving this, be it via adapters, modification of a mount, or using an older manual lens with various adapters. It does not require AF and electronic aperture control in a lens to conduct a low-light sensor test.
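A side note on that full-stop list: the marked values are just successive powers of √2, rounded by convention (each stop halves the light admitted, since light gathered scales with the aperture area, roughly 1/f²). A quick sketch:

```python
import math

# Each full stop multiplies the f-number by sqrt(2).
exact = [math.sqrt(2) ** i for i in range(13)]
print([round(f, 2) for f in exact])
# The marked series 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, 45, 64
# is the conventional rounding of these exact values
# (1.41, 2.83, 5.66, 11.31, 22.63, 45.25, ...).
```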

The same conditions in testing however ARE important. Same lighting, same subject are necessary for proper comparison.
 
The same conditions in testing however ARE important. Same lighting, same subject are necessary for proper comparison.
Yes, that's why every single link I posted had the same subject and the same lighting for its comparisons.

[Lots of words about science and randomly chosen journal titles, and a mention of Gravity Probe B]
Okay, so you mentioned a specific experiment, like I asked: Gravity Probe B.

Here is a short list of some of the many things that were NOT held absolutely constant in that experiment, OTHER than the variables being tested:
1) The sphericity of the gyroscopes used (They varied by unknown amounts up to 40 atoms at any point)
2) Random heat noise/interference, since the system was kept at 2 Kelvin, not 0 Kelvin.
3) The exact path of the star the satellite was oriented to as a reference point. This cannot be known for sure unless all of the mass in the universe is accounted for. They chose it simply because it was one of the BETTER known paths, not FULLY known, and thus not able to be fully controlled out.
4) The uniformity of the gyroscope coatings, to the point where the unevenness from one side to the other was actually equivalent in size to the overall expected experimental effect
5) The exact influence of solar flares, which interrupted data collection repeatedly and may have influenced remaining data's accuracy.

The Gravity Probe B experiment fails to meet your criterion that "all things except one must be held [demonstrably] equal." Far more than one thing was not held equal, by NASA's own admission: http://einstein.stanford.edu/content/final_report/GPB_Final_NASA_Report-020509-web.pdf And in fact, the random influences of some of the above factors were sufficiently confusing that NASA pushed back the expected publication date of its results by YEARS while it grappled with the extreme amount of noise in its imperfectly controlled experiment.

These sources of noise had to be modeled out, which may or may not have been done correctly, or even if done correctly, still would result in a small % chance that the results are null after all. We can't even double check ourselves. As of today, we have to just ASSUME that NASA's models were correct (I wonder what Oscar Wilde would have to say about that?), because as far as I am aware, the actual raw data has not yet been released to the public for any of the conclusions to be confirmed.

Which means that this experiment has not only not been repeated (failing another one of your criteria), but the analysis of the data hasn't even been repeated. We need the data to be released for that.



Conclusion: Based on your definition of what science must be, the Gravity Probe B experiment fails every single one of your requirements. It uses probabilistic inference from random sampling, not absolute deduction from 100% control (just like psychology), and has not been repeated (unlike many psychology experiments). It thus qualifies as "pseudoscience."
 

That is why it is still called Einstein's theory of relativity, and it will in all probability always be just a theory. There is no conclusive methodology to test the theory. While the general scientific community believes it is correct, it also does not believe that it will ever be proven.

Also, Gravity Probe B was not a comparative analysis study, now was it? Scientific methodology varies on the basis of the experiment or exercise being conducted.

As others on this forum have stated, we will suggest that people take your empirical proclamations with a grain of this stuff.
[image: Himalayan salt]
 
So my question is: which camera is right for me?

Chevy or Ford?

Pepsi or Coke?

Windows or Mac?

You have to decide which one is right for you. You can read all the reviews, look at all the tests, and read all the opinions you like, but you have to decide. My suggestion would be to go take a look at all of the bodies that interest you, see which one or ones suit your needs within your budget, and decide from there. Remember: unlike a fixed-lens camera, when you buy a DSLR you are not buying a camera, you are buying a system. Photography is a series of compromises. Very rarely, unless under controlled conditions, do all the stars align so that you have the perfect light, the perfect background, and the perfect lens combined with the perfect camera. You are going to have to decide, based on your abilities, experience, and other personal factors in your life, what will work best for you.
 
The Canon 6D is definitely better at high ISO; both RAW and JPEG photos prove it, so I don't understand why some people are saying otherwise. Websites like DxOMark can be really misleading.
 
Gryphon, if your own example of a proper scientific study was an inappropriate one, then that is nobody's fault but your own... why are you lecturing me about your own example not being analogous enough? Provide a more appropriate example if you didn't like your first one. I can't provide one for you, because I don't think one exists anywhere in history.

In the meantime, although I feel bad about debating in the beginners' forum, it looks like the majority of other people here are indeed looking at plain, well-collected evidence that sufficiently controls for the needed variables and are drawing reasonable conclusions about 6D vs. D600 ISO performance. So this is still accomplishing something sane and rational, at least.

Thankfully, "Omg we can't trust any experiments ever because they don't live up to our impossible expectations of 100% perfect experimental control! Let's just guess or go on gut instinct instead!" does not seem to be a popular means of making life decisions around here. Nor should it be. The resources are available online to see almost any comparison you want between cameras, and people should use the information that is available.
 
