Your examples are not at all straightforward. Did the comparisons use exactly the same lens, or a similar lens? Were they taken in exactly the same controlled ambient light, or under slightly varying lighting conditions? What "adjustments" were made to the JPEGs in-camera, and especially, what in-camera noise reduction was used? All I see is a set of JPEG files that say "A" is better than "B" with no mention of how they were obtained in any way, form, or fashion. No mention of how the cameras were set up or anything whatsoever.
So your theory is that in four different comparison experiments, all four people, for no explicable reason and in complete disregard of even the most fundamental tenets of science, took the time to change the noise-reduction settings away from the default and/or change the lighting of the scene DURING the experiment (?!), and that they happened to do so in some way that favored the Canon camera by exactly the same amount, 2 stops, in all four cases?
Yes. Yes, that's much more likely than "they all used default settings and didn't randomly change the lights in the same scene in between bodies, like sane people."
The brand loyalty just gets really really ridiculous on this forum sometimes....
One of the basic tenets of scientific testing of this sort is that all things except one must be equal and demonstrable.

Of the 100 or so peer-reviewed journal articles I've read in the last year (including about a dozen marketing studies), zero of them have met this requirement. So I'm curious as to where you got it from.
To accept the theory put forth it must be repeatable.

Unlike the above requirement, this one is indeed an actual feature of good science. And conveniently, when it comes to D600 vs. 6D noise performance, the finding is very well replicated: the 6D being about 2 stops better is the conclusion of 5 of the 6 reviewers (83%) considered in this thread. Not only did they repeat the overall conclusion about which body is better, they even repeated the exact amount by which it seems to be better.
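To pin down what "2 stops better" means numerically: each stop doubles the ISO, so a 2-stop noise advantage means one body at a given ISO looks about as clean as the other at a quarter of that ISO. A minimal sketch (the ISO numbers below are purely illustrative, not taken from any of the linked tests):

```python
import math

def stops_between(iso_a, iso_b):
    """Exposure stops separating two ISO settings (each stop doubles the ISO)."""
    return math.log2(iso_b / iso_a)

# Illustrative numbers only: if body A at ISO 6400 looks as clean as
# body B at ISO 1600, body A has roughly a 2-stop noise advantage.
advantage = stops_between(1600, 6400)
print(advantage)  # 2.0
```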
Your suggestion that "they all used default settings and didn't randomly change the lights in the same scene in between bodies, like sane people" has not been demonstrated.

You're absolutely right. I have not proven that the camera review blog community is NOT, in fact, a shadowy cabal of conspirators who systematically undermine their own experiments in order to arrive at a consensus conclusion of the exact same (incorrect) direction and magnitude, all to trick unwary internet viewers into buying the wrong camera.
It is called the scientific method. Try reading physics journals instead of psychology journals. In the true sciences exact control is necessary, and to qualify and prove a proposed theory you must record and demonstrate the work that proves it, and/or others must be able to prove that theory via the same method. That work must be repeatable by others using your method. That is the way real science works.
That is why, for example, the work of Andrea Rossi, Martin Fleischmann, Stanley Pons, and others on cold fusion is still merely a theory. While it has been claimed that some have produced cold fusion, the reported methods have not been reproducible by anyone else.
I didn't say my journals were psychology journals. I read some of those, but also biomechanics, marketing, robotics, neurobiology, and chemistry, and all sorts of other things in grab-bag journals like Science and Nature. I also read some physics articles, mostly for fun, now and then.
I thought you meant journals like those of the American Association for the Advancement of Science or the National Academy of Sciences, or the American Journal of Physics, the European Journal of Physics, or the European Science Foundation.
I have never seen a peer reviewed journal article in ANY discipline that meets your stated requirements. And I have pretty good reason to believe that meeting your requirements is in fact theoretically impossible to do in any experiment. "Hold everything perfectly equal except one variable" is what they teach you in middle school science class as a simplified version of what actually happens / is realistic.
I'm not surprised that you have not seen such requirements stated in any true scientific journal. When a paper is published in such journals it is expected that its readers already have a working knowledge of the subject, including its methodology. Apparently you do not have an understanding of how a comparative analysis study is done. By your thinking, you can make a definitive statement that one particular camera is better than the others listed based on viewing various photos, from various photographers, taken with various lenses in various lighting, and believe that from such data one can come to a positive conclusion. That defies the basic methodology, in which one would compare the various Canon bodies using the same lens and the same subject in the same lighting, so that the only variable producing varied results would be the different camera bodies. It also defies common sense.
But prove me wrong. Post a single article from anywhere that shows an actual experiment where all but ONE variable is kept perfectly identically equal and where it is demonstrable that this is the case. Particle physics, basic chemistry, anything actually empirical (i.e. not pure math or computational modeling).
I have no reason to provide proof. You are the one who made the definitive claim. Provide the proof for your claim, and I quote: "6d is significantly better than the d600 for the low light situations you say you shoot most (live bands). As in probably about 2 stops better. And the 70D will be inferior to either of them for low light, being a crop sensor." If you want to know about scientific methodology, it is covered in general science 101. If you want to learn how to do a credible comparative analysis of something, especially when dealing with physical properties, you should perhaps read up on the methodology for conducting a comparative analysis.
Hint: usually people intentionally vary one variable, and the rest they ASSUME are ROUGHLY equal by means of statistically random sampling. They neither demonstrate this to actually be true nor guarantee it in any absolute sense, even in the "hard sciences." That's why p-value significance tests are the standard for establishing "a real effect." It's all based on percentages and probabilities. If you could, in fact, perfectly hold everything constant except one thing, then you wouldn't need a p-value. Your result would just be the result, period. You would only need to run one trial and be 100% confident.
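A quick stdlib-only sketch of why probabilistic inference is unavoidable: here two samples are drawn from literally the same distribution, so every variable is "held equal" by construction, yet the sample means still differ by pure chance. Quantifying that chance gap is exactly what a p-value is for. (The population parameters are arbitrary made-up numbers.)

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

# Two samples from the SAME normal distribution: nothing is varied at all.
a = [random.gauss(100, 15) for _ in range(30)]
b = [random.gauss(100, 15) for _ in range(30)]

gap = statistics.mean(a) - statistics.mean(b)
print(gap != 0)  # True: pure sampling noise produces a nonzero gap
```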
Hint: you are talking about pseudoscience. Assume, as defined: "suppose to be the case, without proof." I personally think that Oscar Wilde was correct with his definition of assume: when you assume, you make an ass out of u and me.
Einstein's theory of relativity was strongly supported in 1919 by Sir Arthur Eddington and his photographs of stars during a total eclipse. It was not until Gravity Probe B (launched in 2004) that more definitive evidence was forthcoming. Russell Hulse and Joseph Taylor, the Wang group, and many others have all contributed to the evidence. Although the theory of relativity is now accepted, it has never been empirically proven and in all likelihood never will be. But the methodology used so far to support that theory has continued to be superior to your SWAG method.
Also, the part of this that is relevant to what originally brought this up in this thread: When the question you want to answer is something at a high level like "which brand should I buy?" then the appropriate variable that is varied is usually BRAND, which itself actually consists of dozens or hundreds of sub-variables, which doesn't matter, because you only have limited purchasing options. You can't buy a Nikon body construction with Canon autofocus and Olympus mirror system, etc. So holding those variables independently constant is dumb, impossible, and/or unnecessary.
To compare a Nikon and Canon body as well as is needed or really possible, you would get a Nikon lens, and a Nikon->Canon non-optical adapter, then take photos of the same objects with the cameras using the same lens and attached to the same tripod, most likely in default settings, with equal aperture/ISO/shutter, etc. And when comparing ISO noise, even having the same lens is more or less unnecessary, since lenses aren't really a source of noise.
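To make the last point concrete: since the lens contributes essentially no noise, a common way to compare sensor noise is simply to photograph a uniformly lit flat gray patch on each body at the same ISO and compare the pixel standard deviation. A minimal sketch, with completely made-up pixel values standing in for real image data:

```python
import statistics

def patch_noise(pixels):
    """Estimate sensor noise as the standard deviation of a flat gray patch."""
    return statistics.stdev(pixels)

# Hypothetical 8-pixel samples of the same evenly lit gray card, same ISO.
# The numbers are invented purely to illustrate the comparison.
body_a = [118, 121, 119, 122, 120, 118, 121, 120]
body_b = [110, 131, 125, 108, 129, 112, 133, 120]

print(patch_noise(body_a) < patch_noise(body_b))  # True: body A is cleaner here
```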
What a load of caca-pooh-pooh. Your empirical statement, quote: "6d is significantly better than the d600 for the low light situations you say you shoot most (live bands). As in probably about 2 stops better. And the 70D will be inferior to either of them for low light, being a crop sensor." has nothing to do with autofocus, the mirror system, etc., nor with the comparative testing of various sensors. Aperture values are aperture values: 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, 45, 64 are the same full stops no matter the body. They are a product of physics and mathematics. The ISO scale is also the same from body to body; it too is a product of physics and mathematics. As for the lens used: there are various methods of achieving this, be it via adapters, modification of a mount, or using an older manual lens with various adapters. It does not require AF and an electronic aperture in a lens to conduct a low-light sensor test.
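On the "product of physics and mathematics" point: the full-stop aperture sequence quoted above is the geometric sequence of powers of sqrt(2), because each stop halves the light and the light gathered scales with the square of the aperture diameter. A quick sketch (note the familiar markings 5.6, 11, 22, and 45 are conventional roundings of the exact values):

```python
import math

# Full f-stops form a geometric sequence in powers of sqrt(2): each step
# halves the light, since light gathered scales with aperture area (~1/N^2).
stops = [round(math.sqrt(2) ** i, 1) for i in range(13)]
print(stops)
# The familiar markings (1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, 45, 64)
# are conventional roundings of these exact values.
```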
The same conditions in testing, however, ARE important. The same lighting and the same subject are necessary for a proper comparison.
Yes, that's why every single link I posted had the same subject and the same lighting for their proper comparisons.
[Lots of words about science and randomly chosen journal titles, and a mention of Gravity Probe B]

Okay, so you mentioned a specific experiment, like I asked: Gravity Probe B.
Here is a short list of some of the many things that were NOT held absolutely constant in that experiment, OTHER than the variables being tested:
1) The sphericity of the gyroscopes used (they varied by unknown amounts of up to 40 atoms at any given point)
2) Random heat noise/interference, since the system was kept at 2 Kelvin, not 0 Kelvin.
3) The exact path of the star the satellite was oriented to as a reference point. This cannot be known for sure unless all of the mass in the universe is accounted for. They chose it simply because it was one of the BETTER known paths, not FULLY known, and thus not able to be fully controlled out.
4) The uniformity of the gyroscope coatings, where the unevenness from one side to the other was actually equivalent in size to the overall expected experimental effect.
5) The exact influence of solar flares, which interrupted data collection repeatedly and may have influenced remaining data's accuracy.
The Gravity Probe B experiment fails to meet your criterion that "all things except one must be held [demonstrably] equal." Many things, not just one, were not held equal, by NASA's own admission: http://einstein.stanford.edu/content/final_report/GPB_Final_NASA_Report-020509-web.pdf And in fact, the random influences of some of the above factors were sufficiently confusing that NASA pushed back the expected publication date of its results by YEARS while it tried to grapple with the extreme amount of noise in its imperfectly controlled experiment.
These sources of noise had to be modeled out, which may or may not have been done correctly, or even if done correctly, still would result in a small % chance that the results are null after all. We can't even double check ourselves. As of today, we have to just ASSUME that NASA's models were correct (I wonder what Oscar Wilde would have to say about that?), because as far as I am aware, the actual raw data has not yet been released to the public for any of the conclusions to be confirmed.
Which means that this experiment has not only not been repeated (failing another one of your criteria), but the analysis of the data hasn't even been repeated. We need the data to be released for that.
Conclusion: based on your definition of what science must be, the Gravity Probe B experiment fails every single one of your requirements. It uses probabilistic inference from random sampling, not absolute deduction from 100% control (just like psychology), and it has not been repeated (unlike many psychology experiments). Thus it qualifies as "pseudoscience."
So my question is: which camera is right for me?