# New camera technology advice needed



## InfiniteRes (Mar 19, 2010)

Hi everyone. I'm not sure whether it is entirely appropriate for me to be asking about this here, but you guys seem like a nice group, so why not.

A year ago, on the way to work, I thought of a revolutionary idea: a new type of camera that does not use pixels to take a picture. I have spent the past year designing this camera and seeking funding to build it. So far I have constructed a rough prototype, written the code for it, acquired patent-pending status, had it approved by a JPL engineer, and found zero investors. My idea will without a doubt work; however, it is expensive to build, and I need funding in order to build a practical prototype to demonstrate its main feature, infinite resolution, with resolutions exceeding 1 terapixel given moderate spending on its construction. 

Everyone has ignored me...despite the profits that could be made on this; even the media has ignored this as a news story, keeping me from even getting my name out to investors. 

I really, really want and need to begin this project. I do not have a job, and by November there is a high chance of me losing my apartment due to financial issues. I thought America and capitalism represented a place where any hardworking honest man could find a place in the world, but it does not seem that way so far. 

Any advice would be much appreciated...I just really do not know what to do at this point in time. I feel like my hard work has gone to waste.


----------



## bazooka (Mar 19, 2010)

Sounds like you need to work harder.


----------



## Hamtastic (Mar 19, 2010)

InfiniteRes said:


> ...infinite resolution...



What are you using as a lens?


----------



## gsgary (Mar 19, 2010)

I'll send you all my savings


----------



## KmH (Mar 19, 2010)

InfiniteRes said:


> ....I thought America and capitalism represented a place where any hardworking honest man could find a place in the world, but it does not seem that way so far.


Yep, it sure is, but as you've discovered, it takes more than hard work to get an invention idea off the ground and flying.

It also takes finding the right contacts to keep development of the invention moving, and many inventions take years to pay off for the inventor.

The key is to not give up.


----------



## InfiniteRes (Mar 19, 2010)

Work harder? Mmm...more like work differently. It seems my way of doing things is ineffective. You are correct about the right contacts, but that has been my weak point. It's difficult to meet relevant people. I have exploited every single friend and friend of a friend that I had, but still nothing. I have never been much of a socialite, unfortunately :x  

I'm using this lens right now:  Nikon | Imaging Products | AF-S DX NIKKOR 16-85mm f/3.5-5.6G ED VR (5.3x)

Though, since my prototype is as crappy as the first digital camera ever made, if not worse, I cannot come close to exploiting the capabilities of this lens; I just happened to have it, so I used it. For the serious prototype to be made under funding, I will need an extremely large-aperture lens, which I have not decided on yet, since my camera's only practical limitation is the diffraction limit. Hooked up to Hubble, it could be used to 100% of its potential. So yes, it will need a very large, very expensive lens to be used fully. Since it will cost an awful lot to build, I was considering having a custom lens made. 

gsgary said:


> I'll send you all my savings


Thanks! Though, unless it's $200k it won't be enough.


----------



## TanMan (Mar 19, 2010)

Is it possible that we could see a picture of this prototype?


----------



## usayit (Mar 19, 2010)

I'm no expert but...  I'd first get a solid hold on a patent to protect your idea.  You might want to consult a lawyer specializing in patents.  Rather than going after investors to found a startup, which is extremely difficult, especially with the capital required for this type of business, you are better off pitching your prototype to pre-existing imaging companies: Hewlett-Packard, Sony, Pentax, Kodak, Nikon, Canon, etc.   There are hundreds of examples of startups that had solid ideas and even implementations... and still failed. 



> I thought America and capitalism represented a place where any hardworking honest man could find a place in the world, but it does not seem that way so far.



I only want to say that this statement is troublesome.  You sound like America and capitalism owe you something... they don't.  Just as immigrating to our shores doesn't guarantee an easy, successful life, a good idea, intelligence, even hard work, doesn't guarantee you a successful startup.  If this is the attitude you choose to embrace, you are destined to fail.   The freedom to pursue such ventures is what we should all be thankful for... just like the freedom ~not~ to invest in your idea.


----------



## InfiniteRes (Mar 19, 2010)

I do have a patent pending, which allows me to sue whoever produces my idea without my permission during the patent-pending period, once I receive the patent from the USPTO. So, I am fully protected right now, since I am 100% confident the USPTO will accept my idea, considering I had a patent attorney write the application and perform due diligence. 

I tried pitching my idea to those companies actually, but received no responses. THAT is what angers me; the fact that no one even gives me a chance. If they looked at it and said "We do not want this" to me, then fine! But not even responding? That is plain rude. 

It may not, but it SHOULD. Social ability should play no role in such business deals. You should be able to sit down, explain the facts, and receive a logical response based on deductive reasoning from the data you gave them. Whether a deal succeeds or not should not depend on how you look at them or what cologne you are wearing. My attitude toward this reminds me of how Howard Hughes conducted business. He did not like all of the nonsense. 

What I meant by that is that the "social hierarchy" is similar to an 'anti-capitalistic hierarchy', in that doing business is being prevented by nonsensical systems that are counterproductive to what everyone wants. Anyone with an idea that can make themselves and the investor money should be considered. 

Though, I have already accepted how this works and therefore am wearing the cologne, dressing well, being polite, looking at people the right way, and going to the social events. But even this has not been fruitful yet. (I am naturally polite and dress well, though these people expect way too much). 

There is NOTHING wrong with my business plan, but whenever I get to propose it to someone in person, they give me this incredulous look and walk away. That is frustrating. 

Pictures? Why certainly! 

http://i126.photobucket.com/albums/p87/RobertJustice/DSC_0086.jpg

http://i126.photobucket.com/albums/p87/RobertJustice/DSC_0039.jpg

http://i126.photobucket.com/albums/p87/RobertJustice/DSC_0083.jpg

I WAS going to have these published in a certain magazine until they halted correspondence with me, so why not post them here.

I do not have any shots of the completely assembled version for some reason, but if you combine those 3 shots you kind of get the full version. 

Can any of you guess how this works? Note: There is supposed to be a lens after the aluminum ring, and a photodiode after that. The aluminum ring can rotate.


----------



## astrostu (Mar 19, 2010)

Gotta say I'm doubtful - this has all the hallmarks of either self-delusion or knowing deception.

First, "patent pending" is a phrase frequently thrown out that really does not mean much of anything other than you've filed the paperwork.  Even having a patent does not mean that your apparatus will do what you claim it does.

Second, a buzzword such as "infinite resolution" describes something physically impossible.  You will be limited (a) by optical resolution, which you did admit in your follow-up post, and (b) by the resolution of your recording medium, be that chemical grain size in film or pixel size in digital -- even if you've come up with a new type of detector technology, you will still be limited by physics/chemistry/electronics.

Third, you've brought in a classic argument from authority, "Had it approved of by a JPL engineer."  Who is this person?  What are their credentials to "approve" such a device?  Did you show them theory, or a prototype?  Or did you just discuss the concept?

Fourth, you use a classic argument from persecution, "the media has ignored this as a news story, isolating me from even getting my name out there to investors."  It shouldn't be up to the media to get news out to potential investors, it should be you.

Finally, the most obvious one (paraphrased), "This idea will make you millions!  I just need some seed money to get a working model ... ."

If you honestly expect people to send you money, you need to supply _a lot_ more information.


----------



## InfiniteRes (Mar 19, 2010)

astrostu said:


> First, "patent pending" is a phrase frequently thrown out that really does not mean much of anything other than you've filed the paperwork. Even having a patent does not mean that your apparatus will do what you claim it does.

I know that full well. I am just stating it to establish that fact, since under USPTO guidelines you cannot expect anyone to abide by patent-pending protection unless they know you have a patent pending. In other words, if someone produces my idea for 3 months and makes a million dollars off of it, and I did not send them a cease-and-desist order at the beginning of those 3 months, I am not able to sue them for damages. You can only sue for damages when the entity KNOWINGLY produces it without your permission. So, now you all know: I have a patent pending, and no person has the right to produce my invention. 

At a certain point, the resolution allowed by the diffraction limit can become so high that it is irrelevant; hence Hubble, which can achieve some inexplicable resolution not even worth considering at this point. 

With no diffraction limit and a couple million spent on making the elite version of my camera, there is no resolution limit. How is that possible? This is all I do. Physics. I have devoted my entire life to physics and forsaken virtually everything else. When you become this good at something, anything is possible. See Leonardo da Vinci, who conceived of the helicopter quite a long time before anyone built one. 

I sent the engineer all data on the camera. The patent, pictures, code, dimensions, all parameters....basically my entire project notepad. He knows everything I know. 

He is a PhD computer science engineer and worked on various JPL probes in the past. 

I have tried, but as I said, no one wants to discuss this with me...and the few that have soon lose interest. I would only assume that they would want a good story. I think this is interesting. I mean, infinite resolution? You do not believe me; it must be interesting then. Anything unbelievable such as this would be interesting. But instead they talk about bad relationships between celebrities and cats stranded in trees. Though, they do talk about the LHC and other relevant things occasionally. 

I have....every investor I have come in contact with, I have sent everything, including an extensive, carefully written business plan. They all ignore me and treat me like an anomaly. And the people I have gone to could all afford to fund this project, but they are all very reluctant to fund ANY startup business at this point.   

I am not here to directly procure money...that is obviously unreasonable, but if any of you guys know any investors I could talk to, or perhaps anyone in the camera companies like Canon or Nikon, it would be much appreciated.


----------



## astrostu (Mar 19, 2010)

InfiniteRes said:


> At a certain point, the diffraction limit can become so large that it is irrelevant, hence Hubble which can achieve some inexplicable resolution not even worth considering at this point.



Hubble's diffraction limit is certainly worth considering - it was considered when they made the original detectors, which over-sampled by about a factor of two if memory serves.  I'm not as up to speed on the current chips up there, but practical wavelength-dependent diffraction limits play a significant role in influencing the decision on any detector that an astronomer would purchase.



InfiniteRes said:


> With no diffraction limit and a couple million spent on making the elite version of my camera, there is no resolution limit. How is that possible? This is all I do. Physics. I have devoted my entire life to physics and forsaken virtually everything else. When you become this good at something, anything is possible. See Leonardo Da Vinci who invented the helicopter quite a long time before anyone else did.  ...  I think this is interesting. I mean, infinite resolution? You do not believe me; it must be interesting then.



Gotta say that comparing oneself with da Vinci is another fairly common sign of a scam coming up.  And for myself, speaking as an astrophysicist who has studied instrumentation for years, "infinite resolution" is not possible.  Me not believing you does not mean that it's interesting.  It means that you seem to be violating some fundamental physical laws.



InfiniteRes said:


> I sent the engineer all data on the camera. The patent, pictures, code, dimensions, all parameters....basically my entire project notepad. He knows everything I know.  He is a PhD computer science engineer and worked on various JPL probes in the past.



I'm not sure what a computer science engineer would be able to do in signing off on a physics problem.  "Working on various probes" does not give someone credibility in this matter, either, as for all intents and purposes, I've also worked on various probes but I would have no expertise on guaranteeing that a design for a revolutionary camera system is feasible.  Regardless, it's still an argument from authority and if you're expecting investors from somewhere, you will need more independent, recognized experts in the appropriate fields, in my opinion.



InfiniteRes said:


> Though, they do talk about the LHC and other relevant things occasionally.



What does the LHC have to do with this?



InfiniteRes said:


> I am not here to directly procure money...that is obviously unreasonable, but if any of you guys know any investors I could talk to, or perhaps anyone in the camera companies like Canon or Nikon, it would be much appreciated.



I don't know anyone you could talk to.  However, if you plan on pursuing this and if you honestly think you have something that works, I strongly suggest taking into account what I've stated in terms of the common signs of a scam, backing up claims of revolutionizing physics, and getting multiple relevant established experts in the appropriate fields.


----------



## InfiniteRes (Mar 20, 2010)

Oh I know, but Hubble's is so high that it is not feasible to even consider building a model that could exploit it fully right now. I am currently trying to achieve between a terapixel and a petapixel. Hubble is running...what, into the zettapixels? Yottapixels? Too high for now; that is for later. With my camera, money = higher resolution. You could say "oh, well, I could string together an infinite number of CCDs to achieve infinite resolution"; however, that would not be practical, and my camera is. 

I was not doing a 1:1 comparison between me and Leonardo. Just citing him as an example of one who could vastly advance certain fields during his lifetime. 

Fundamental physical laws? Not really. Note that by infinite, I mean that you CAN increase the abilities of the tech indefinitely. Once you spend a couple of million on the physical sensor, and use something like a metamaterial lens to avoid the diffraction limit, given an infinitely long exposure, you can achieve infinite resolution. It cannot currently achieve infinite resolution in a 1/1000th-of-a-second exposure, but it can if you leave it running forever. Which brings up the point: how long do you need to run it to acquire a decent resolution? For a 1/1000th-of-a-second exposure, given a moderately expensive version (a few hundred thousand dollars), you can achieve a trillion pixels. 

I have tried to have other experts verify my camera, but none want to discuss it with me. This guy also does physics and knows enough to handle the basics like diffraction limit etc.... The reason why he adds credibility is that a very large portion of my camera is purely digital and not physical, part of the reason why it can do what it can do. 

I was just citing the LHC as an example. 

astrostu said:


> taking into account what I've stated in terms of the common signs of a scam

But I am not presenting a scam. You want me to change the way that I present myself? I think all inventions that revolutionize a field will be considered a scam by at least a few people. 

I have not really revolutionized physics....I am just doing something different. It is like having an airplane use a jet engine instead of a piston engine.


----------



## astrostu (Mar 20, 2010)

Resolution has nothing to do with how many picture elements (pixels / film "grains") one has - it's what the smallest angular size is that each picture element covers, or how small of an angular size the optics can distinguish.

HST's Advanced Camera for Surveys has a pixel size of 0.13 arcsec in the IR channel and 0.04 arcsec in the UV/VIS, and it has a detector size that's just under 1 Mpx for IR, and 16.8 Mpx for UV/VIS.  The theoretical diffraction limit for green light for a 2.4 m HST-style mirror is around 0.05 arcsec, so it is diffraction-limited with this camera for visible light.
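For reference, the Rayleigh criterion arithmetic behind that figure; the 550 nm wavelength and 2.4 m aperture here are round illustrative values, not official HST specs:

```python
import math

def rayleigh_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh diffraction limit, theta = 1.22 * lambda / D, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600  # radians -> degrees -> arcseconds

# Green light (~550 nm) through a 2.4 m HST-style mirror:
print(round(rayleigh_limit_arcsec(550e-9, 2.4), 3))  # 0.058
```

So the theoretical limit lands right around the 0.04-0.05 arcsec pixel scale of the UV/VIS channel, which is why the camera counts as diffraction-limited there.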

Infinite resolution implies you are (1) ignoring the finite size of molecules and their abilities to hold information (e.g., the problem we're approaching with transistors and with magnetic disk storage), and (2) ignoring quantized information states of electrons.  Perhaps you could allay my worries - since you are patent pending, could you tell us how you are recording the incident photons?


----------



## InfiniteRes (Mar 20, 2010)

Well, number of pixels is part of the equation. I invented a sensor, but I am relying on modern optics and computing technology for the rest. It so happens both are just barely up to par with what I need for my sensor to work at its best. 

I think an explanation is in order.....I was going to have the explanation published anyway, so why not here? Though I have to say it makes me nervous. :x  That JPL engineer, by the way, freaked out when he first saw my explanation. It took me a number of months to show him that it will work. I hope I can convince you guys faster. By the way, camera flow charts 1 and 2 are pictures 1 and 2.  

http://i126.photobucket.com/albums/p87/RobertJustice/Cameraflowchart1copy.jpg

http://i126.photobucket.com/albums/p87/RobertJustice/Cameraflowchart2copy.jpg



Physical Explanation: Ok, take a deep breath, and here we go.

Imagine a window with light shining through it from the sun. You are inside the house and decide to walk in front of the light. What will happen? You will create a shadow on the wall behind you. The shape of the shadow will be determined by the shape of your body. With every step you take into the light, the shadow on the wall grows larger, until after enough steps you completely block the light from reaching the wall.

Now, instead of you walking in front of the light, take a flat, square piece of cardboard and pass it in front of the light. Assuming the cardboard is tall enough to instantly block the full height of the light, as you move it across the light you will solely be reducing the horizontal axis. Say you take 5 individual, equal steps until your piece of cardboard blocks out the light completely. Every step you take will cause an overall reduction in the total amount of light reaching the wall. Namely, each step will cause a 20% reduction, since 100/5 = 20. So, you have 5 individual reductions of 20% each. Your first step causes a 20% total reduction, the next step 40%, the next 60%, the next 80%, and finally 100% and total blockage of the light passing through the window.

Ok, now let's quantify the light passing through the window. Say you were to measure the total quantity of light hitting the wall; you would read 10 Joules. So, without any reduction due to your piece of cardboard, the wall is receiving 10 Joules of energy from the light.

Given that, you now move the piece of cardboard 20% over the light, and then read the total amount of light now hitting the wall. Your measuring device now reads 8 Joules. You take another step, now covering 40% of the window, and make another measurement; your measuring device now reads 6 Joules. You cover 60% and your device reads 4 Joules; cover 80% and it reads 2 Joules; cover 100% and it reads 0 Joules. Based on our knowledge of the total light level without the cardboard reduction (10 Joules), how much of the wall each step covers (20%), and how much energy we had after each step (8, 6, 4, 2, 0), we can determine how much energy each of the 5 segments of the light hitting the wall possessed. The answer: 2 Joules each.

If we had 10 Joules, take action and now have 8, we can do basic math (10 - 8 = 2) to determine that the total change is 2; and that therefore, the region that we covered was worth 2 (Joules). This also goes for addition. If we had 4 Joules, take action and now have 6, we can use basic algebra to solve for the change [4 + x = 6   --->   (4 + x) - 4 = (6) - 4   --->   x = 2], the change is 2.

Using this method, despite not being able to directly read each region via something like pixels, as long as you know the initial total and new total, you can surmise what the change must have been.

Now...dividing the light hitting the wall into 5 regions, say region 1, region 2, region 3, region 4 and region 5, you can label each region as having 2 Joules. Great, you now know how much energy each region along the X AXIS possesses. But what about the Y axis? To determine that, you do the same thing we just did for the X axis, except instead of stepping with the piece of cardboard from side to side, you must raise the cardboard up from the floor, or equally, lower it down from the ceiling. Using the same method, you can determine how much energy each of the Y regions possesses.
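In code, the bookkeeping above is just a running difference over the cumulative readings; a minimal sketch (illustrative only, not the prototype's actual code):

```python
def region_energies(readings):
    """Recover per-region energy from cumulative occlusion readings.

    readings[0] is the unobstructed total; readings[i] is the total
    still reaching the wall after the i-th step of the cardboard.
    Each region's energy is the drop between consecutive readings.
    """
    return [a - b for a, b in zip(readings, readings[1:])]

# The 10-Joule window, five equal steps:
print(region_energies([10, 8, 6, 4, 2, 0]))  # [2, 2, 2, 2, 2]
```

The same function works unchanged for the vertical (Y axis) scan, since only the direction of the cardboard changes, not the arithmetic.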

Now...my camera cannot function off of knowing only the X and Y axes; it must also know the "Z" and "L" axes, which I designate as running from top right to bottom left (Z) and top left to bottom right (L). Essentially, if you were to rotate the light coming through the window by 45 degrees and attempt the same X and Y scans with the cardboard, your resultant data would be equal to the Z and L axes.

Everything written here is how my camera PHYSICALLY acquires the four axes in terms of lines. Moving a piece of cardboard over the light is one way, and was in fact how my prototype worked (except I used a high-torque, low-speed motor to pull the cardboard on an aluminum sled across a surface on a rotatable metal pipe); however, there are many ways to generate the necessary lines.

As for the sensor measuring the light hitting the wall, a photodiode can accomplish this task, or even a solar panel can. I used a photodiode in my prototype, with a computer-graphing digital multimeter (DMM) to measure the photodiode output and graph it on my laptop.

Now, in our above example, the light hitting the wall was perfectly uniform, hence the consistent 2-Joule reduction for each 20% step. However, cameras take pictures of inconsistent objects with color and shape, not a single-color blob! So, say we repeat the cardboard process, except this time region 3, the 40% to 60% change, records a 6-Joule to 5-Joule change instead of a 6-Joule to 4-Joule change! What does this mean? Region 3 is worth 1 Joule, and is therefore darker than the rest of the regions of light hitting the wall. However, our initial read was 10 Joules, but we have only accounted for 50% of the light by 60% of the progression! What does this mean? Regions 4 and 5 cannot both be 2 Joules each; at least one of them has to be 3 Joules!

So we continue the process and determine that region 4 is 2 Joules; we have 20% to go and have blocked 7 Joules so far. So we move the cardboard over the final region and reduce the remaining 3 Joules to 0 Joules, therefore determining that the final region 5 was worth 3 Joules.
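A one-line check of the uneven example's numbers (illustrative only): the readings still reaching the wall are the unobstructed 10 J followed by the total after each 20% step, and the differences recover each region.

```python
# Totals still reaching the wall: unobstructed, then after each 20% step.
readings = [10, 8, 6, 5, 3, 0]
regions = [a - b for a, b in zip(readings, readings[1:])]
print(regions)       # [2, 2, 1, 2, 3]
print(sum(regions))  # 10 -- the differences account for the full 10 J
```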

So, what does this all mean? What does breaking an image down into lines along four axes instead of pixels do for us? Surely these lines are not the image itself, so why do we bother measuring them? The answer is that by measuring an image in terms of lines, we can use an infinitely fine mechanical process to surmise portions of the image. Your piece of cardboard: instead of taking 5 steps, what if you were to take 10 steps to cover the image? You would reduce the ambiguity. You would now know how much energy each of 10 regions possesses, instead of 5 regions. What if you were to take 100 steps? 1,000? 1 million? Each time you increase the number of steps, you increase your ability to make smaller pixels, hence increasing the pixel count for the same surface area. Given an infinite amount of time and infinitely fine steps, you could record an infinitely high resolution; and it just so happens that A/D converters are virtually perfectly consistent, and shutters, if built right, can also be almost infinitely fine in movement.
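The claim that more steps give finer regions can be illustrated with a toy 1D scan; everything here (the brightness profile, the step counts, the quadrature) is made up for illustration:

```python
def scan_profile(profile, n_steps, samples_per_step=100):
    """Scan a 1D brightness profile (a function on [0, 1]) in n_steps
    equal occlusion steps and return the energy in each strip,
    approximating each strip's integral by crude midpoint quadrature."""
    energies = []
    width = 1 / n_steps
    for i in range(n_steps):
        lo = i * width
        total = sum(profile(lo + (j + 0.5) * width / samples_per_step)
                    for j in range(samples_per_step)) * width / samples_per_step
        energies.append(total)
    return energies

# A hypothetical scene: dark on the left half, bright on the right half.
scene = lambda x: 0.0 if x < 0.5 else 1.0
print(scan_profile(scene, 2))  # [0.0, 0.5]
print([round(e, 3) for e in scan_profile(scene, 10)])
# [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1, 0.1, 0.1]
```

With 2 steps you only learn which half is bright; with 10 steps the edge is localized to a tenth of the field, which is the "more steps, smaller pixels" trade described above.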

I claim that infinite steps can result in infinite resolution, but how is this so? Refer to the computational explanation once you fully understand the above explanation. 




Computational Explanation: Looking at both pictures, you have an X, Y, Z and L axis. These represent the orientations of the edges in the camera. The X axis edge is horizontal to the light, Y is vertical, Z is 45 degrees to the light with a left bias, and L is 45 degrees to the light with a right bias. Each number outside the grid represents the total quantity of energy along the direction that its pointer penetrates. Looking at camera flowchart #2, there is a region-naming grid which I will use to reference which spots I am discussing. To calibrate our points of view, know that the "18" of the Y axis overlaps grid points 1, 6, 11, 16, and 21. These grid points refer to the energy levels 7, 2, 5, 4, 0. The "27" of the Y axis overlaps 4, 9, 14, 19, and 24. These grid points refer to the energy levels 3, 3, 9, 3, 9. The "19" of the X axis overlaps 15, 14, 13, 12, 11. These grid points refer to the energy levels 0, 9, 1, 4, 5. The "11" of the Z axis overlaps 21, 17, 13, 9, 5. These grid points refer to the energy levels 0, 6, 1, 3, 1. The "13" of the L axis overlaps 23, 17, 11. These grid points refer to the energy levels 2, 6 and 5.

We shall refer to the axis numbers from left to right for the Y axis, top to bottom for the X, top to bottom/left to right for the Z, and top to bottom/right to left for the L. This would mean that Y axis spot 4 would be 27, X axis spot 2 would be 15, Z axis spot 7 would be 5, and L axis spot 9 would be 0.

We shall refer to the rows as being 1-5 top to bottom, and 1-5 left to right. Therefore, row 3 on the X axis is composed of axis number 19, and row 4 on the Y axis is composed of axis number 27. Do not move on until you fully understand what this all means.

Now...the 1st picture has all of the grid points filled in. I arbitrarily wrote down the grid points and calculated the axis values by adding the grid points together. Therefore, Y axis spot 1 is 18, which comes from 7 + 2 + 5 + 4 + 0. L axis spot 6 is 24, which comes from 9 + 9 + 4 + 2.
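Computing the axis values from a known grid is the easy, forward direction; a small sketch of that step (the 3x3 grid here is hypothetical, since the full flow-chart grid isn't reproduced in this post, and the r+c / r-c index convention is my reading of the Z and L directions):

```python
def line_sums(grid):
    """Compute the four families of line totals ("axis values") for a grid:
    X = row sums, Y = column sums, Z = sums along lines of constant r + c
    (running top right to bottom left), L = sums along lines of constant
    r - c (running top left to bottom right)."""
    n = len(grid)
    x = [sum(row) for row in grid]
    y = [sum(grid[r][c] for r in range(n)) for c in range(n)]
    z = [sum(grid[r][c] for r in range(n) for c in range(n) if r + c == k)
         for k in range(2 * n - 1)]
    l = [sum(grid[r][c] for r in range(n) for c in range(n) if r - c == k)
         for k in range(-(n - 1), n)]
    return x, y, z, l

# A hypothetical 3x3 scene:
g = [[7, 2, 5],
     [4, 0, 3],
     [9, 3, 1]]
print(line_sums(g))
# ([14, 7, 13], [20, 5, 9], [7, 6, 14, 6, 1], [5, 5, 8, 7, 9])
```

Note the first and last entries of the Z and L lists each cover a single corner cell, which is the "freebie" observation made below.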

Ok, so look at picture number 2, the one without any numbers filled into the grid points, and only the axis numbers present. We are going to try to fill in this grid under a single rule: the numbers you fill into the grid points must add up to their respective axis values. They cannot add up to a value more or less than the axis value. In other words, for the 3rd row on the X axis, you need the numbers to add up to 19. Therefore, grid regions 11, 12, 13, 14, and 15 must add up to 19. In picture #1, the grid point values of regions 11, 12, 13, 14, and 15 are 5, 4, 1, 9 and 0. These numbers add up to 19; however, the camera can only record the axis values and not the grid point values, since the camera does not possess any pixels...it can only generate lines, hence the axis values.

Look back at picture #2 and ignore the grid values that you saw in picture #1. Now, you do not know the grid values, but you know the axis values across the X, Y, Z and L axes; and you know that the numbers you put in must add up to the axis values they refer to. Now, let's look at region 1, the upper left hand corner of the grid. What axis values correspond to this region, region #1? The 1st Y axis row, the 1st X axis row, the 1st Z axis row, and the 5th L axis row. The Y axis requirement is 18 for that row, the X axis is 26, the Z axis is 7, and the L axis is 16. Now...the four corner values (top left, top right, bottom left, and bottom right) are freebies! Because you get an axis value that corresponds to only 1 grid value! In this case, region 1, grid value 7, has Z axis row 1 corresponding to it. Because the Z axis ONLY has that single grid point corresponding to it, and that row must add up to the axis value, the grid point must be the axis value itself! Therefore, region 1 is the 1st row Z axis value. So, put a 7 in there. Now, on to region 2.

Region 2 has an axis cross section of Y axis row 2, X axis row 1, Z axis row 2, and L axis row 4. The Y axis max value is 29, X axis 26, Z axis 9, L axis 26. Now, you already put a 7 into region 1...this means that you only have 19 points left in the X axis row 1, 11 left in Y axis row 1, 0 left in Z axis row 1, and 9 left in L axis row 5.

Perhaps I am not intelligent enough, or just do not see the grand causality of my system, but I cannot surmise what region 2 must be based on the data we currently have. My C++ program works by literally going through every single possible combination of numbers and outputting the only set that works. We are going to do a similar thing, but with a little more finesse.

Now...the best we can do for now is not go over any limit and choose an average number. Looking at region 2, the lowest limit is 9, for the 2nd row of the Z axis; so, let's go with that limit and make region 2 "9". We now have a total of 16 for X axis row 1, still 10 below the 26-point limit and requirement for X axis row 1. On to region 3. Again, choose an average number. I chose 4. We now have 20 for X axis row 1. Now, region 5 is a corner number! Therefore, by looking at the axis value corresponding to that region, we can directly surmise its value. L axis row 1 corresponds to region 5. L axis row 1's value is "1", therefore region 5 is 1! Knowing that, we can look back at region 4 and do basic math. 20 + 1 = 21, and we need 26 for X axis row 1, and have 21 so far. Basic math: 26 - 21 = 5. Therefore, region 4 must be "5" under our current number choice. However, if we were to make region 4 "5", we would exceed the limit of L axis row 2, which is "3"! So, we must go back and change a number. Putting "3" in region 4, we now only have 24 out of the 26 that we need for X axis row 1. We cannot touch region 5 or 1, so we have regions 2 and 3 to modify. Region 2 is already maxed out, so we must now look at region 3. The lowest limit for region 3 is the "11" of L axis row 3. Adding the "2" we need to region 3 does not exceed the "11" point limit of L axis row 3! Therefore, we may now place a "6" in region 3, making the total 7 + 9 + 6 + 3 + 1 = 26! We have fulfilled the requirement for X axis row 1 and have not exceeded any other axis values. (Note that you must fulfill the axis requirement before there is no space left in the axis's applicable regions. Once there is no space left, how can you fulfill the requirement if you have not already?)

Now, on to row 2 of the X axis. We begin with region 6, corresponding to Y axis row 1, X axis row 2, Z axis row 2, and L axis row 6. Look at all of the axis numbers corresponding to that region... what do you notice? Z axis row 2 is 9! We have already put a "9" in region 2, and region 2 lines up with region 6 along Z axis row 2. Since we have already maxed out Z axis row 2, we cannot add any more numbers to that axis row; hence, we cannot place a number in region 6, and it must be "0".

On to region 7. Lining up with region 7 we have regions 6, 2, 1, and 3; the applicable axes are X axis row 2, Y axis row 2, Z axis row 3, and L axis row 5. Region 6 is 0, so we can ignore its significance to our plans. Region 1 is "7", but the limit of L axis row 5 is "16", so as long as the number we place in region 7 does not exceed "9" (7 + 9 = 16), we are fine. Region 2 has "9" in its place. The limit for Y axis row 2 is "29", so we cannot exceed "20" for region 7; however, we already have a "9" limit from L axis row 5, so the 20 limit is not relevant for region 7. The limit from X axis row 2 is 15, since we have a "0" in region 6, which is again above the 9-point limit previously prescribed. Finally, we have Z axis row 3 with a limit of "17". However, since region 3 corresponds to the same Z axis row as our current region 7, we must take the Z axis row 3 limit of 17 and subtract region 3's value of 6 from it, getting 17 - 6 = 11; once again, above the "9" limit previously prescribed by L axis row 5. (NOTE: You are to do the above process for every single region, using the data as needed to properly derive the grid value.) So, once again, choose any average number you wish for region 7, as long as it does not exceed 9.
Because you have 3 other regions to fill with a total of 9 to fulfill the L axis row 5 requirement of 16, I would not suggest giving region 7 a "9", as it is unlikely that the following regions for L axis row 5 are all going to be "0". So, I chose "2" for region 7.

On to region 8. By now you should be able to surmise which axis rows correspond to each region, and to determine which other regions line up with your current region's respective axis rows, and therefore how many 'points' you have left to use for your current region. For region 8, our lowest and therefore most relevant limit is that of X axis row 2. The original limit is 15, but region 7 corresponds to our region 8 in terms of X axis row 2, making the limit 15 - 2 = 13. The second lowest limit is that of Z axis row 4: 17 - 3 = 14. Now, as with region 7, we need to consider the probability that other regions along the axes your current region corresponds to will require values later down the line. So, instead of just maxing out region 8 with "12", I went with "5".

On to region 9. The limit for region 9 is "5", corresponding to an L axis 'natural' limit of "11", reduced to "5" because of region 3's "6". Now... what to place in region 9? Well, notice this: region 10 is the last region for X axis row 2, so you must reach the X axis row 2 requirement of 15 by then. HOWEVER, region 10 corresponds to the L axis row 2 limit of "3", and we already put 3 in region 4! Therefore, due to the number we previously chose, region 10 must be 0, and we must reach "15" via region 9. Since there are no more regions to meddle with, we must put "8" in region 9. What is the result of this forced play? We did not exceed an X, Y, or Z axis limit, but we did exceed an L axis limit: the L axis row 3 limit of "11". Corresponding to L axis row 3 are regions 3 and 9. Region 3 already has a "6" in its place from before. 6 + 8 = 14, above the 11-point limit. WE HAVE FAILED (*cry*)

We must put "8" in region 9 at this point, but we cannot, due to a previously prescribed limit. So what do you do? Start over. This is why I wrote the C++ program "Camera program 5++": it solves a similar system in around 3 seconds.

You asked for it


----------



## SrBiscuit (Mar 20, 2010)

holy ****, my head just exploded.


----------



## gsgary (Mar 20, 2010)

Have you tried Dragons' Den? You can pitch online:
BBC - Dragons' Den


----------



## bazooka (Mar 20, 2010)

usayit said:


> I only want to say that this statement is troublesome. You sound like America and capitalism owe you something... neither does. Just as immigrating to our shores doesn't guarantee an easy, successful life, a good idea, intelligence, or even hard work doesn't guarantee you a successful startup. If this is the attitude you choose to embrace, you are destined to fail. The freedom to pursue such ventures is what we all should be thankful for... just like the freedom ~not~ to invest in your idea.


 
:thumbup::thumbup::thumbup:


----------



## usayit (Mar 20, 2010)

InfiniteRes said:


> But not even responding? That is plain rude.



This is my point....

It's a free country... they don't owe you a response.

A successful photographic business (wedding photography) is not about being a good photographer but about being a good businessman and entrepreneur. A successful technology startup (like you are pursuing) is not about being a good inventor or engineer; it's about business... no different. You have to get "it", with no expectation of getting anything back in return. If you want something back for each and every bit of effort, then stick to being someone else's employee.

It's not rude... you simply haven't met the minimum requirements to even begin to convince them that their time is worth spending on you. They don't owe you their time, just as I don't owe a call back to every telemarketer that leaves a voice message.

Again, this attitude that people owe you will get you nowhere. My advice: partner up with someone whose attitude is correctly aligned and who is better equipped... stay in the back tinkering.


----------



## InfiniteRes (Mar 20, 2010)

"holy ****, my head just exploded"

Hahaha! :lmao: That is how I felt developing this thing. And would you believe it, I hate math!

Hmmm... I have a feeling that if I were to participate in that, they would have my head on a platter by the end of the day. But I suppose I could try?

"Again.. this attitude that people owe you will get you no where. My advice... partner up with someone who's attitude is correctly aligned and better equipped... stay in the back tinkering."

As I have noticed... which is why I continue to persist and ignore what I find to be rude.

Once you understand the explanation... you will likely have many more questions as to how I plan on carrying this out. Unfortunately, those are trade secrets :meh: However, ask whatever you want and we will see whether I can tell you or not.

I will likely begin posting on other forums soon...and possibly make a youtube video.


----------



## Formatted (Mar 21, 2010)

I'm by no means a degree student, but I'm in my second year of Physics A-level....

and I by no means understand. I must read again...

lol


----------



## KmH (Mar 21, 2010)

Infinite mechanical processes that result in infinite resolution take infinite time to complete.


----------



## astrostu (Mar 21, 2010)

Okay, I read through your description, and it seems to me as though you are talking about technologies that already exist.  First, your basic idea seems to be that of a 1-D scanner - a photomultiplier tube (detector/diode/whatever), but instead of scanning just across, you scan across (±x), down (±y), and then along the diagonals (+x/-y, -x/+y).

So, on the face of it, it seems like this can be accomplished (1) by dithering, or (2) by octagonal pixels (which I'm sure some lab is already experimenting with).  Dithering - which they sometimes do with HST - is where you take your image, then move the camera over _exactly_ the distance of 1/2 a pixel, and take your image again.  After lots of math, you can then pull out more information than you had originally.  I think octagonal - or just smaller - pixels would be easier, though, and more practical.

Another technology that is in use that seems similar is the push-broom detector.  This is in use on many spacecraft, such as the HiRISE camera on the Mars Reconnaissance Orbiter (I highly recommend taking a look at some of their photos ... 25 cm/px on Mars).  This has a linear array of pixels that then "sweeps over" the landscape.  So you have a fixed width and then arbitrary length, which is why many photos from modern-day craft are very rectangular.  While it's not arbitrary resolution along the width, it offers arbitrary resolution along the length, limited by spacecraft stability, brightness of the landscape and detector gain, read-out time, etc.

And, by using a linear array of pixels, the system is going to be many times faster than a single-pixel recorder, which is what you're talking about, from my reading.  And for consumer electronics, the time gained by having an array instead of a line of pixels is even greater.  When you're talking about 1/1000-sec exposures, or even 1/30-sec, I don't see your system being fast enough and precise enough, especially when modern CCD and CMOS array detectors have so many advances and improvements that can be made without trying to develop a completely new technology that, at least based on what I read, seems to (a) already kinda exist, and (b) be a step back to how things were done 40 years ago with single-pixel detectors that scanned the sky.


----------



## Josh66 (Mar 21, 2010)

OK, I didn't read all of that .  I'll try to get through it later.


Assuming that it works - have you tried selling the idea to Governments, or their various agencies?  Sounds like it certainly has military applications, and it sounds like governments may be the only ones with the funding to pursue it.

Or maybe defense contractors?  Any major contractor is constantly developing/researching new imaging/signals technologies, and they could be interested.  They will undoubtedly want to own the patent though.


----------



## InfiniteRes (Mar 21, 2010)

"Infinite mechanical processes that result in infinite resolution take infinite time to complete."

Which is why the rate of resolution acquisition is important, extremely important, considering there are other technologies out there that can also reach very high resolutions given large periods of time. 1 trillion pixels in 1/1000 of a second is by far the fastest I have seen though; along with that, it does not seem as if any of the other technologies can even reach 1 terapixel given an infinite amount of time. And anyways...I can vastly increase that speed once I begin selling this camera by using the profits to develop another special idea of mine that complements the camera. 

I know about dithering and read about it very early on in the designing stage. It has a limit in terms of how many times it can subdivide the pixels...and is susceptible to many other limitations. 

Yep, I thought of using octagonal pixels, though it does not seem as if that is an absolute requirement. I am open-minded at this point. I need to start development to see what works best; since the pixels are virtual, the coder can change the format at any time. 

Push-broom? I usually refer to that as a scanning back camera. I am aware of that tech too. I assure you though, my camera works very differently and is far less limited.

It is very fast. I have a very high ceiling in terms of how fast I can physically take the shot, and A/D converters already run fast enough for 1 terapixel and above at 1/1000 of a second. Another issue is the rise time of photodiodes; at 1 terapixel, I am well beneath the limit. I can likely go up to a petapixel before having to switch detector type to obtain higher speeds. 

I have tried... but they always tell me to go through the "standard channels" for funding, which all have a set of requirements absolutely prohibiting a startup business from using their services. I read the fine print :x I know that if I could speak to the right guy I could directly receive funding, but I have not been able to find him yet. I WAS considering listing my patent with the USPTO as sensitive military information, but that would have greatly constricted my options, and no one would ever know of my accomplishment. 

Tried a few defense contractors and they gave me the same response. I really think I just need to meet with the right guy instead of talking to their customer service representative. 

I MAY at some point be willing to sell the patent if I must. I would rather do that than lose my apartment.


----------



## mdtusz (Mar 21, 2010)

It seems like a concept that could work in certain situations, but Canon and Nikon aren't going to be opening their eyes any time soon. If your prototype can right now make images of 1 TP at 1/1000 exposure times, there's something there. Do you have any processed images in JPEG form, or are they all raw data in graph form? How long does it take to actually process the data (with the aforementioned 1/1000 shutter speed)?

If the mechanical aspect of a shutter isn't stopping the development and you have actually made an image that is viewable in ridiculous detail, it's probably worth it to call up Dragons' Den or even contact an entrepreneur in the electronics industry as formally as you can.


----------



## InfiniteRes (Mar 21, 2010)

Unfortunately no... if I could, I do not think I would have a problem. I only have a rough prototype right now, which runs at a horrific native 25 pixels. I recently multiplexed a bunch of 5 x 5 shots together to get a 525-pixel image though. 

The problem is... even if I were an electronic engineer and could use an A/D converter other than my DMM, I am also not an excellent coder. My current program is VERY slow, since it is a brute-force method. In fact, it can take over a trillion code iterations to find the answer in some circumstances. 

The code that I would write if I had the abilities would take only around a thousand iterations, a billion times faster than my current method. Along with that, an advanced coder and electronic engineer could apply various other methods I developed to further drastically decrease processing time.  

Say I had an electronic engineer and a coder at my disposal... I would still need a workspace to cut, weld, and forge metal in. I just cannot do that in an apartment. Without those two guys (and ideally an optical engineer), and without a workspace, I can only get 25 pixels. The prototype in the pictures I posted was made with glue, rubber bands, and a Dremel. THAT was hard. 

Data processing is a major part of how my camera works. I estimate that it will take anywhere between a few minutes and a few hours to process a terapixel given my current ideas. For a petapixel, I will need to implement more advanced techniques. Also, the more powerful the computer is, the faster the processing will go. So if you use a Tesla array, we are talking about fairly low processing times.


----------



## mdtusz (Mar 21, 2010)

Well, 525 pixels won't get you too far, haha. If you need more advanced programming skills, go down to the nearest university and visit the engineering department. Find someone who's fresh out or still working to finish school and offer them some money to help you with this. I was in engineering for a year and learned to program in C at a fairly proficient level, with binary searches and bubble sorts covered within the first semester. I think someone in 3rd or 4th year specializing in computer engineering would probably be able to speed up your processing time.

As far as optics go, work with what's available. I don't know enough about the whole photodiode thing or A/D converters (what's that?), but I'm sure you can find help on the subject. If you're making a 25-pixel image right now with that rickety prototype, do what you can to make that 525 native and keep bumping it up. Forget about time right now; that can be sorted out after you can PROVE that the method is capable of taking super-high-res pictures. If you walk into Dragons' Den with a picture of anything blown up to 1000x scale and still clear, they will probably be impressed, if not just a bit.


----------



## InfiniteRes (Mar 21, 2010)

So you think that would be a good idea too... visiting the local college. I will see what I can do with that. With those two pros, even without the physical model, we could at least take a CCD-generated picture, convert it to my line format, and treat it like it came straight from my line-generation system. With that we could work on processing megapixel images, so when I have the resources to build a better prototype, the digital end will already be mostly covered. 

If I could only find a job...and save up a thousand dollars, perhaps I could take a contained still shot. If the object is not moving or changing in any way, and I have an hour to take the picture, it becomes considerably easier and cheaper to build a prototype. 

Did you know that where I live you need a permit to own an air tank over a gallon? I live in one of the most anti-business cities in America, unfortunately.


----------



## rosy99 (Apr 27, 2010)

Hello,

After about two years of use my FinePix F50se went nuts and gave me a "Focus Error" message, after which the lens would not retract and made strange clicking noises. I sent it back to Fuji and they gave me a quote of $115 to get it repaired. 

I'm just wondering if it's worth paying that much for a repair (I think I originally paid $250 ish), or whether I should put the cash towards a new one. I saw the J38 is only $100 new. Anyone know if the J38 is worth getting? 

Any advice on what to do / recommendations for a cheapish replacement camera if I choose not to get it repaired?

Thanks in advance


----------



## Garbz (Apr 27, 2010)

Point and shoot cameras are consumables. Unless you can get them covered under warranty throw them away and get a new one.


----------



## eriqalan (May 6, 2010)

If this does what you say it does, you should be contacting NASA, astronomy groups, telescope manufacturers and (dare I say it) the CIA, all of whom would be falling over themselves if the physics seems right.

Most camera companies have a vested interest in existing technologies, and this doesn't look like a consumer product at this point, but get the CIA, NASA, Navy or Air Force into this and all of a sudden you have the financial backing for the prototypes.


----------

