Knowledgebase:Misconceptions

Topic 1

The high compression of the JPEG format is achieved by dropping out unnecessary colors.

It's common to think that lossy compression means dropping out some information, such as unnecessary colors from the image, and that since the JPEG format uses lossy compression, this is exactly what it does.

Well, no. It doesn't do this. In fact, when you compress an image to the JPEG format and then decompress it (to the screen or wherever), you usually get more colors than in the original, so the effect of the JPEG conversion is actually the opposite.

What JPEG drops out is not colors but the higher frequencies of the image. The basic idea is that it first converts the image to a frequency spectrum (using a discrete cosine transform, a close relative of the Fourier transform), drops out the highest frequencies (depending on the quality setting) and then compresses this frequency data with an ordinary lossless compression technique. Some encoders can also drop middle frequencies based on known features of human vision (frequencies with a low amplitude located very close to frequencies with a very high amplitude are not perceived by the human eye).
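
For the curious, the frequency transform JPEG actually uses is the two-dimensional discrete cosine transform, computed separately on each 8×8 block of (level-shifted) pixel values f(x,y):

F(u,v) = \frac{1}{4} C(u)\,C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y)\, \cos\frac{(2x+1)u\pi}{16}\, \cos\frac{(2y+1)v\pi}{16},
\qquad C(k) = \begin{cases} 1/\sqrt{2} & \text{if } k = 0 \\ 1 & \text{if } k > 0 \end{cases}

The quality setting controls how coarsely the coefficients F(u,v) are quantized; at low quality most of the high-frequency coefficients are rounded to zero, which is what "dropping out the highest frequencies" means in practice.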

The net effect of dropping out the high frequencies is that the image gets smoother (and thus ends up with more colors than the original).

The reason why JPEG conversion doesn't smooth the image nicely but instead causes square-shaped artifacts is the way the image data is processed. The image is divided into squares of 8×8 pixels (yes, you guessed right: the size of these squares corresponds to the size of the artifact squares), each square is transformed and quantized separately, and its frequency coefficients are read in a diagonal zig-zag order. Handling the image in these small blocks usually gives better compression (because such small areas usually contain only small color changes), but because each block is processed independently, you also get square-shaped artifacts at the block boundaries.

End of Topic: Go Back to the Table of Contents

Topic 2

Motion blur leaves a trace behind the object

There's a common misconception about motion blur: that the blur leaves a trace behind the object while the object itself stays more or less sharp.

I think that this is a consequence of the artistic effects used in cartoons (they draw lines along the path the object has travelled to indicate that it is moving very fast). Although this artistic effect works for cartoons, it has nothing to do with reality.

In photography, motion blur is caused by an object moving while the shutter is open, so that it leaves an evenly weighted trace on the film. The reason why it looks fainter than the static objects is that the light coming from the moving object hits each point for less time than the light from static objects, thus imprinting a dimmer image at that point. Some parts of the object look semi-transparent because the object moves out of the way of whatever was behind it (or the other way around).

This means that if an object moves with a constant speed it leaves an equally blurred trace on the film. The object is not sharper in one place and more blurred in another.

If the speed is not constant but accelerating, the amount of blur changes as well, but there's still no sharp version of the object (unless it is static most of the time and moves only for a small fraction of the time the shutter is open; that fraction has to be very small, something like 1%, to get a sharp picture of the object).

Specifically, with constant acceleration the amount of blur changes linearly over the exposure, because the speed grows linearly with time. A falling object is a typical example.
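
As a small sketch of the reasoning (standard kinematics, with s_0 and v_0 the position and speed of the object at the moment the shutter opens and a the constant acceleration):

s(t) = s_0 + v_0 t + \tfrac{1}{2} a t^2, \qquad v(t) = v_0 + a t

The speed v(t), and with it the distance the object sweeps per unit of exposure time (the local amount of blur), grows linearly with t, while the brightness of the trace at any point of the path falls off roughly as 1/v(t), because the object spends less time in front of the faster parts of its path.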

If you are using MegaPov, to get a photographically correct motion blur you should just define your object with its motion path exactly as if you were doing an animation and let MegaPov do the rest. Don't try to correct the calculations of MegaPov by adding additional static instances of the object. If you do, you are just making a physically incorrect image.

Calculating the correct motion of an object can be mathematically very challenging. A linear motion is easy. A constantly accelerated motion (like a falling object) needs more knowledge about physics and math. Non-constant acceleration (like a pendulum) needs even more complicated math.

It's easy to fall into the temptation of just defining a linear motion for your motion-blurred object even when you are depicting an accelerated motion. This, of course, results in a physically incorrect image. If you want a physically (and thus photorealistically) correct result, don't take the easy way out; do the math.

Note: There's a trick in photography which you can use to achieve a motion blur effect very similar to the one this text tries to prove wrong.

The trick is to keep the scene very dimly lit while the shutter of the camera is open and then, right at the end, ie. just before the shutter closes, light the scene with a bright flash.

This has the effect that static objects look normal but moving objects have a dim trace behind them.

However, although this may sound like exactly what you were thinking about, strictly speaking it isn't:

Firstly: the contrast within the dim trace is extremely high. Bright parts of the object will imprint a clear mark on the film while dimmer parts will have little or no effect. If the moving object emits any light itself (or reflects light, or has anything on it that is bright even though the rest of the scene is dim), it will leave a strong and clear trace on the film.

This means that, for example, if you are moving a flashlight in front of the camera (and using this trick), the handle will have a very dim trace behind it but the light itself will leave a strong white trace in the film.

It also has quite a strong effect if the moving object has both very light and very dark pigments: the light pigments will leave a much stronger trace than the dark ones.

Trying to simulate this accurately may be nearly impossible with the current MegaPov (0.5a). You can't specify that brighter parts of the object leave a larger share of the image imprinted than the dimmer parts.

Also, in my personal opinion (and this is only my opinion), this kind of image (photographed or rendered) doesn't look good. The trick looks quite artificial, and instead of making the object look like it's moving, it makes it look as if the photo were wet and the paint were melting. Well, it's only my point of view.

Secondly: when shooting a motion picture (that is, a long series of photographs, about 24 per second), the individual frames of the film will have the evenly distributed blur described at the beginning of this explanation.

Usually an image with motion blur can be thought of as one frame of this kind of motion picture. And certainly, when you apply the proper motion blur to all the frames of an animation, it will look a lot better and more realistic.

End of Topic: Go Back to the Table of Contents

Topic 3

Perspective doesn't affect a sphere. A sphere will always look circular.

This misconception has been seen a few times in rendering newsgroups. I think that what causes it is that a sphere always looks circular no matter where you are looking at it from.

Well, the statement is true if you look directly at it; that is, in POV-Ray you set the look_at vector at the center of the sphere or somewhere on the camera-sphere axis.

However, when the sphere is moved off this axis, the perspective projection starts to distort it into an ellipse. It usually looks odd because we are not accustomed to seeing a sphere distorted like this. When you look at the sphere in the image, you are looking straight towards its center and thus you (subconsciously) expect it to look like a circle. However, you are forgetting that the camera is not looking towards the sphere, so you shouldn't judge the image as if it were.

This problem happens often with scenes which have a Moon or a Sun or any similar spherical object which we expect to be circular.

If you still aren't convinced, surround the sphere with a cube (either semi-transparent or just a wireframe made with, for example, cylinders) and move this union around the screen (or just make several copies). The cube gets distorted too, but not in any unexpected way, and this helps you visualize why the sphere gets distorted as well.
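
Here is a minimal test scene along the same lines (the exact values are arbitrary); it simply places identical unit spheres at increasing distances from the camera axis under a fairly wide camera angle, so the distortion is easy to see:

camera { location <0, 1, -10> look_at <0, 0, 0> angle 90 }
light_source { <30, 50, -40> color rgb 1 }
plane { y, -1 pigment { checker rgb 1, rgb 0.3 } }

// Identical unit spheres; the further a sphere is from the center of the
// image, the more the perspective projection stretches it into an ellipse.
#declare Ind = -4;
#while(Ind <= 4)
  sphere { <Ind*2, 0, 0>, 1 pigment { rgb <1, 0.8, 0.4> } }
  #declare Ind = Ind + 1;
#end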

If this causes you problems, it's probably because you are making a panoramic scene with a very wide camera angle. If it bothers you a lot, you can try a camera type other than the default perspective one, for example an ultra_wide_angle or a panoramic camera. Those camera types help correct such unwanted distortions.

End of Topic: Go Back to the Table of Contents

Topic 4

MegaPov's post-process focal blur is better and faster than POV-Ray's own, so the latter is not needed anymore.

This could be true if the post-process focal blur were able to do the same things as the regular focal blur in POV-Ray. It isn't.

Yes, the post-process focal blur is a lot faster and it always produces a very smooth result. Thus, people are often tempted to use it in every scene instead of POV-Ray's own focal blur.

Yes, the post-process focal blur is nice, but you should be aware of its limitations:

  1. It doesn't work with transparent objects or refractions.
  2. It doesn't work with reflections.
  3. It doesn't produce a physically accurate result, that is, it doesn't add any new information to the image, only blurs the existing information.

The first and second limitations surprise most people when they first try it. It shouldn't be a surprise once you know how post-process focal blur works:

Post-process focal blur only has access to the depth of the first intersection, because that is the only depth stored in the data used by the post-processing step.

No, it would not be easy to add information for the second and subsequent intersections. This would require storing the color and depth of the first intersection, then the color and depth of the second intersection, and so on. Moreover, wherever there is both reflection and refraction, the intersection tree splits into two branches, doubling the space required for the subsequent intersections. One can easily imagine how much memory this would take for a regular trace level of 5 (not to mention a higher one). It could easily exceed the size of any existing hard drive (not to mention being extremely slow).

The third limitation usually doesn't bother people too much, but you should still be aware of it. Regular focal blur can make visible details that would otherwise be hidden (ie. when not using focal blur).

For example, imagine that you have a red box in front of a green box so that one of their edges coincides in the final image (the green box is located behind the red box so that it's not visible in the image, but if you moved the camera just a bit in the right direction, the edge of the green box would become visible). Now use focal blur so that the focal point is at the edge of the green box: part of the green box appears from behind the blurred red box!

This is not possible with the post-process focal blur.
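
For comparison, here is a minimal sketch of POV-Ray's own focal blur (the values are just an arbitrary starting point); everything is controlled from the camera block:

camera {
  location <0, 1, -6>
  look_at <0, 0, 0>
  angle 40
  aperture 0.5           // larger aperture = stronger blur
  blur_samples 60        // more samples = smoother (but slower) blur
  focal_point <0, 0, 2>  // objects at this distance stay sharp
  confidence 0.95        // adaptive sampling controls
  variance 1/10000
}
light_source { <20, 30, -30> color rgb 1 }
plane { y, -1 pigment { checker rgb 1, rgb 0.2 } }
sphere { <0, 0, 2>, 1 pigment { rgb <1, 0.3, 0.3> } }    // at the focal point, sharp
sphere { <-2, 0, -2>, 1 pigment { rgb <0.3, 0.3, 1> } }  // closer to the camera, blurred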

End of Topic: Go Back to the Table of Contents

Topic 5

Why doesn't my mirror reflect light? Why won't my lens focus light?

This same problem happens in both raytracing and scanline rendering (when the reflection is not achieved by raytracing). Since this is a POV-Ray page, I'll concentrate on the raytracing part.

Thinking that light will bounce from reflective surfaces and illuminate things is a very common misconception when one doesn't know how raytracing works.

It's natural to think that raytracing works by sending rays of light from the light sources in every direction. These rays of light then illuminate the scene, which is then somehow projected onto the screen.

This method actually has a name: forward raytracing. Raytracers do not usually use this method, for one very good reason: it's extremely slow and inefficient. It can take days and gigabytes of memory to accurately render even the simplest scene (one which renders in seconds with a regular raytracer). Accurately means that you don't get graininess or blotchy patches of light and darkness.

The reason for the inefficiency is that in forward raytracing the program has no way of knowing which areas of the scene are visible and which aren't. And even if it did, it couldn't know which points of those areas will affect the pixels of the final image and which will not. Since some surfaces are much closer to the camera than others, the needed accuracy varies from surface to surface (and even between different parts of the same surface). Add reflection and refraction to this and the amount of resources needed skyrockets.

For this and several other reasons forward raytracing is not used to render the scene (there's one exception, photon mapping, which I'll discuss a bit later).

A much more efficient way of rendering the scene is so-called backward raytracing. The idea is that instead of shooting rays from the light sources to the objects and from there to the camera, the process is done the other way around: rays are shot from the camera to the objects and from there to the light sources.

The method is a lot more efficient for several reasons: when we shoot a ray from the camera (through the pixel we are calculating) and see what it hits, we know exactly which part of the scene we have to calculate. That is, we calculate only the part of the scene that really affects the color of the pixels and don't waste time on the parts that have no effect. Reflection and refraction become more efficient for the same reason. Shadow calculations are easy to do (just shoot a ray from the current point towards the light source and see if anything is in the way).

Although this method is very efficient and allows you to render simple scenes in a matter of seconds, it has some drawbacks. One of them is that you can't calculate light reflecting from mirrors onto other objects. The reason is (perhaps a bit surprisingly) the same as why forward raytracing is inefficient: if you wanted to calculate reflected (or refracted) light, you would have to take into account every possible direction the light might be coming from, that is, send lots of rays in every direction (with the known consequences).

So the only good and easy thing in forward raytracing is the hard thing in backward raytracing.

Because reflected and refracted light (also called caustics) is not as important as getting the scene rendered at all (even without it), backward raytracing is used. Then you just have to live with the fact that light does not reflect from the mirror.

Note: There is a way to compute reflection and refraction caustics, using an algorithm called Photon Mapping. See: What is Photon Mapping? for details.

End of Topic: Go Back to the Table of Contents

Topic 6

If I put my light source, representing the Sun, at a distance of 150 million km, it will give the most accurate result for sunlight simulation. Light rays coming from the Sun are parallel.

This is usually a bad idea, and there usually isn't any advantage in doing it (if anything, it only creates problems). There are several reasons for this.

If your scene has a regular size, like 10 or 100 or so units in diameter (and especially if it's very small), you can get accuracy problems.

Floating point numbers in computers have limited precision. This means that you can't represent extremely large or extremely small values exactly. More relevant here is another problem with floating point numbers: mixing very large and very small values in the same calculation also causes accuracy problems. This means that you can get accuracy problems when POV-Ray calculates the shading of the smallest details in your scene with respect to a light source at a huge distance.

The distance doesn't give you a more correct result either.

Firstly, the Sun is not a point light but an area light. It's a common misconception to believe that (due to the distance) the light rays coming from the Sun are parallel. Since the Sun is an area light with a significant apparent size (its disk spans about half a degree of the sky), the light rays can't be parallel: a light ray coming from one side of the Sun's disk will hit an object at a different angle than a light ray coming from the opposite side of the disk.

Clearly you have to use a circular area light to simulate the Sun.

Now, if your scene is, for example, 100 units wide, it makes absolutely no difference whether your area light is at 10000 units, one million units or 150 million units away (as long as its apparent size doesn't change). Your scene will render practically identically. Putting your light source closer has the advantage that you won't get accuracy problems.
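
As an illustrative sketch (the numbers are just a reasonable starting point): since the Sun's disk spans roughly half a degree, an area light about 93 units across placed 10000 units away has the same apparent size as the real Sun:

camera { location <0, 2, -10> look_at <0, 1, 0> angle 40 }
plane { y, 0 pigment { rgb <0.4, 0.6, 0.3> } }
box { <-1, 0, -1>, <1, 2, 1> pigment { rgb 0.9 } }

#declare SunDistance = 10000;
#declare SunDiameter = SunDistance * 0.0093;  // ~0.53 degrees of apparent size
light_source {
  <0, 0, 0>
  color rgb <1, 0.98, 0.95>
  area_light x*SunDiameter, z*SunDiameter, 9, 9
  circular orient   // a disk-shaped light that always faces the point being shaded
  adaptive 1
  jitter
  translate SunDistance * vnormalize(<0, 0.5, -0.8>)  // place it up in the "sky"
}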

Still, you'll not get a physically accurate Sun, no matter how far you put it.

If you want a physically accurate Sun, you'll have to simulate the refraction, diffraction and scattering caused by the atmosphere of the Earth (which varies depending on things like temperature and density). With clouds you'll get even more trouble (clouds scatter the light a lot; just think of a completely overcast day, when the light comes almost uniformly from the whole area of the sky).

So you won't get any more accuracy by putting your Sun 150 million kilometres away. Quite the contrary.

End of Topic: Go Back to the Table of Contents

Topic 7

Area lights are so slow in POV-Ray that it's much better to use a grid of point lights instead.

As strange as this may sound (to those who have read the documentation about area_light), this misconception does happen. I don't really know how it has formed, but it's most probably caused by not reading the POV-Ray documentation carefully enough and forming mistaken ideas about how POV-Ray area lights really work.

If you don't know how area lights work in POV-Ray, please read the section about area lights in the POV-Ray documentation.

What makes this misconception funny is that the suggested fix for the slowness of area lights is basically to replicate exactly what area lights already do: create a grid of point lights. POV-Ray area lights are a grid of point lights (as described in the documentation).

However, this is not the whole story. The area_light in POV-Ray is better than just a grid of point lights because it has some optimization options built into it, besides having other features to control the properties of the area light. The main speed optimization in area lights is adaptive supersampling (which is impossible to achieve by making an explicit grid of point lights). The speed difference between a true grid of point lights and an equivalent area_light can be enormous, without the resulting image having visible differences.

For example let's try this simple scene:

#declare UseTrueArealight = yes;

#declare Light =
  #if(UseTrueArealight)
    light_source
    { 0, 1
      area_light x*2, z*2, 8, 8 adaptive 0
    }
  #else
    union
    { #declare IndX = 0;
      #while(IndX < 8)
        #declare IndZ = 0;
        #while(IndZ < 8)
          light_source
          { <-1+2*IndX/7, 0, -1+2*IndZ/7>, 1/(8*8)
          }
          #declare IndZ = IndZ+1;
        #end
        #declare IndX = IndX+1;
      #end
    }
  #end

camera { location <0,5,-7> look_at 0 angle 35 }
object { Light translate y*20 rotate z*45 rotate y*-30 }
plane { y,-1 pigment { rgb 1 } }
box { -1,1 pigment { rgb <1,.75,.5> } rotate y*30 }

A test render on my computer at 1024x768 with an antialiasing threshold of 0.1, with UseTrueArealight set to no (ie. using the grid of point lights), took 1 minute 8 seconds to render.

The same settings but with UseTrueArealight set to yes took only 7 seconds to render, and the two images are practically identical. Even using adaptive 1, which would be needed if the area light were bigger, took only 14 seconds to render.

As you can see, the speed difference is enormous. Thus, this bad advice of using point lights instead of area lights is not only funny, but also quite catastrophic with respect to render times.

End of Topic: Go Back to the Table of Contents

Topic 8

About area lights in POV-Ray

New users, and sometimes even somewhat more advanced users, often have a quite diverse set of misconceptions and unclear ideas about how the area_light feature of POV-Ray works. The advantages and limitations of area lights are often unclear, and the inner workings of some of the area light parameters (such as adaptive) are unknown and thus cause confusion.

First of all, and most importantly, read the documentation about area_light. The documentation is an excellent source of information about how area lights work in POV-Ray. One of the most important things to read is how adaptive supersampling (controlled by the adaptive keyword) works with area lights; this explains the artifacts which happen when specifying an adaptive value which is too low.

Here is a list of the most important features, properties and limitations that you should know when using area lights:

  • The location vector of the light defines the center of the area light (often new users think that it defines a corner).
  • The two axis vectors of the area light define its physical dimensions. They form a rectangle whose sides are defined by these two vectors, with the location vector of the light, as described above, at its center.
  • A very important issue: Area light properties have effect only in shadow testing calculations, nothing else. This means that diffuse and specular lighting is not affected by any area light setting. (I'm not saying this is a good thing; I'm just stating a fact.)
  • You should always make your area lights adaptive, as this speeds up rendering. If you are getting shadowing artifacts, increase the adaptive value until the shadows look good (note that increasing this value also slows down rendering).
  • And finally: Don't be afraid of making your area light really dense (in terms of how many point sources it contains, ie. the Size_1 and Size_2 values described in the documentation). New users are often intimidated by making their area lights really dense because they fear that it will slow down the rendering too much (so they make their area lights too sparse, eg. 4x4, and thus get banded shadows). The fear is only partly justified: making your area light denser does slow down the rendering, but if you use adaptive supersampling (which you should), the slowdown occurs only in certain places of the scene (at shadow borders) and not over the whole scene. By specifying a very dense area light (like 10x10, 15x15 or even 20x20) and perhaps jittering it, you will get much smoother and better shadows (see the sketch after this list).

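Putting the points above together, a typical dense, adaptive, jittered area light might look like this minimal sketch (the sizes, sample counts and positions are arbitrary):

camera { location <0, 4, -9> look_at <0, 0, 0> }
plane { y, -1 pigment { rgb 1 } }
box { -1, 1 pigment { rgb <1, 0.75, 0.5> } }

light_source {
  <0, 0, 0>                    // this point is the CENTER of the area light
  color rgb 1
  area_light x*3, z*3, 17, 17  // 3x3 units in size, sampled with a dense 17x17 grid
  adaptive 1                   // start low; raise it only if shadow artifacts appear
  jitter                       // random jitter hides any remaining banding
  translate <5, 20, -15>
}
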
End of Topic: Go Back to the Table of Contents

Topic 9

If I #declare an object, applying for example a transform to it with rand(), and then create several instances of it, all the instances appear in the same place / are identical. How come?

A #declare is not a #macro. When you create an identifier with #declare (or #local), the value being assigned to the identifier is evaluated only once, at the time the #declare itself is parsed. Whenever you later use this identifier (eg. to create instances of the object which was assigned to it), the value the identifier received at that time is used.

An identifier is basically the same thing as a variable in most programming languages: You can assign a value to it (possibly the return value of a more complex expression) and the variable will retain this value until a different value is assigned to it. That is, for example if you write this:

#declare Value = rand(Seed);

POV-Ray will evaluate the expression rand(Seed) and assign its return value (eg. 0.2) to Value. After this, Value will have this value (eg. 0.2) until it's assigned some other value.

If you want something to be re-parsed and re-evaluated each time it is used, that is exactly the purpose of the #macro directive, so you should create a macro instead of a #declare.
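
Here is a minimal sketch showing the difference (the identifier names and values are arbitrary):

camera { location <5, 4, -18> look_at <5, 1, 0> }
light_source { <20, 30, -30> color rgb 1 }

#declare Seed = seed(0);

// With #declare, rand(Seed) is evaluated only ONCE, at parse time,
// so both instances below end up at exactly the same x position:
#declare StaticSphere = sphere { <rand(Seed)*10, 0, 0>, 1 pigment { rgb <1, 0, 0> } }
object { StaticSphere }
object { StaticSphere translate y*3 }  // same x position, just lifted up so it is visible

// With #macro, the body is re-parsed at every invocation,
// so each call evaluates rand(Seed) again and gets a new position:
#macro RandomSphere()
  sphere { <rand(Seed)*10, 0, 0>, 1 pigment { rgb <0, 0, 1> } }
#end
RandomSphere()  // one random x position
RandomSphere()  // (almost certainly) a different x position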

End of Section: Go Back to the Table of Contents