Knowledgebase:Language Questions and Tips

From POV-Wiki

Topic 1

How do I make a visible light source? or: Although I put the camera just in front of my light source, I can't see anything. What am I doing wrong?

A light source in POV-Ray is only a concept. When you add a light source to the scene, you are actually telling POV-Ray: there is light coming from this point. As the name says, it is a light source, not a physical light (like a light bulb or a bright spot like a star). POV-Ray doesn't add anything at the place the light comes from, i.e. there is nothing there, only empty space. It is just a mathematical point POV-Ray uses for its shading calculations.

To make the light source visible, you have to put something there. The looks_like keyword in the light_source block lets you easily attach an object to the light source. This object implicitly casts no shadows. You can write something like this:

light_source
{ <0,0,0>, 1
  looks_like
  { sphere
    { <0,0,0>,0.1
      pigment { rgb 1 }
      finish { ambient 1 }
    }
  }
  translate <10,20,30>
}

Note: It's a good idea to define both the light_source and the looks_like object at the origin and then translate them to their final place. It's also a good idea to give the object a finish { ambient 1 }, which makes the sphere appear to glow (see also the next question).

You can also get visible light sources using other techniques: Media, lens flare (available as 3rd party include file), glow patch, etc.

End of Topic: Go Back to the Table of Contents

Topic 2

How do I make bright objects, which look like they are emitting light?

There is a simple trick to achieve this: set the ambient value of the object to 1 or higher. This makes POV-Ray add a very bright illumination value to the object, so its color is in practice taken as-is, without being darkened by shadows and shading. The result is an object which seems to glow by itself even in full darkness (useful for making visible light sources, or small lights like LEDs which do not cast any considerable light on their surroundings but can still be easily seen in the dark).

A more sophisticated method is to use an emitting media inside the object (and make the object itself transparent or semi-transparent).
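As a sketch of that approach (the colors and sizes here are arbitrary), an emitting-media glow could look like this:

sphere
{ 0, 0.5
  pigment { rgbt 1 }  // fully transparent surface
  hollow              // so the media can fill the inside
  interior
  { media { emission rgb <1,.9,.6> }  // warm self-emitted glow
  }
}

Unlike the ambient trick, the glow here comes from the volume of the object, so the thinner edges fade out naturally.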


Topic 3

How do I move the camera in a circular path while looking at the origin?

There are two ways to do this: the easy (and limited) way, and the more mathematical way.

The easy way:

camera
{ location <0,0,-10>
  look_at 0
  rotate <0,clock*360,0>
}

This puts the camera 10 units along the negative Z-axis and then rotates it around the Y-axis while keeping it looking at the origin (it traces a circle of radius 10).

The mathematical way:

camera
{ location <10*sin(2*pi*clock),0,-10*cos(2*pi*clock)>
  look_at 0
}

This does exactly the same thing as the first code, but this way you can control the path of the camera more precisely. For example, you can make the path elliptical instead of circular by changing the factors of the sine and the cosine (for example, instead of 10 and 10 you can use 10 and 5, which gives an ellipse with major radius 10 and minor radius 5).
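The elliptical variant just described (major radius 10 along the X-axis, minor radius 5 along the Z-axis) would then be:

camera
{ location <10*sin(2*pi*clock), 0, -5*cos(2*pi*clock)>
  look_at 0
}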

An easier way to do the above is to use the vrotate() function, which handles the sin() and cos() stuff for you, as well as allowing you to use more complex rotations.

camera
{ location vrotate(x*10, y*360*clock)
  look_at 0
}

To get an ellipse with this method, you can just multiply the result of vrotate() by a vector, scaling the resulting circle. With the last two methods you can also control the look_at vector (if you don't want it looking at the origin).
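For example, scaling the circle to half its size in the Z direction (giving major radius 10 and minor radius 5, as in the earlier example) could be sketched as:

camera
{ location vrotate(x*10, y*360*clock)*<1, 1, 0.5>
  look_at 0
}

The multiplication is done componentwise, so only the Z component of the rotated location is scaled.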

You could also do more complex transformations combining translate, scale, rotate, and matrix transforms by replacing the vrotate() call with a call of the vtransform() function found in functions.inc (new in POV-Ray 3.5).


Topic 4

How do I use an image to texture my object?

The answer to this question can be easily found in the POV-Ray documentation, so I will just quote the syntax:

pigment
{ image_map
  { gif "image.gif"
    map_type 1
  }
}

Note: In order for the image to be aligned properly, either the object has to be located at the origin when the pigment is applied, or the pigment has to be transformed to align with the object. It is generally easiest to create the object at the origin, apply the texture, and then move the object to wherever you want it.

Substitute the keyword gif with the type of image you are using (if it isn't a GIF): tga, iff, ppm, pgm, png or sys.

  • map_type 0 gives the default planar mapping.
  • map_type 1 gives a spherical mapping (maps the image onto a sphere).
  • map_type 2 gives a cylindrical mapping (maps the image onto a cylinder).
  • map_type 5 gives a torus (donut-shaped) mapping (maps the image onto a torus).

See the documentation for more details.


Topic 5

How can I generate a spline, for example for a camera path for an animation?

POV-Ray 3.5 has a built-in spline feature that allows you to create splines. It is covered in the documentation, and there are demo files showing examples of its use. There are also third-party include files for spline generation that offer greater flexibility than the internal splines, for example the spline macro by Chris Colefax.
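As a minimal sketch of the internal spline feature (the control points here are arbitrary), a camera path for an animation could look like this:

#declare CameraPath =
  spline
  { natural_spline
    0.00, <0, 1, -10>
    0.25, <8, 2, 0>
    0.50, <0, 3, 10>
    0.75, <-8, 2, 0>
    1.00, <0, 1, -10>
  }

camera
{ location CameraPath(clock)
  look_at 0
}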


Topic 6

How can I simulate motion blur in POV-Ray?

The official POV-Ray 3.7 doesn't support motion blur calculations, but some patched versions do (e.g. the so-called MegaPov, which at the time of writing was still based on POV-Ray 3.5 code). See the MegaPov site.

You can also use other tools for this. One way to simulate motion blur is to render a short animation and then average the frames together. This averaging of several images can be done with third-party programs such as ImageMagick (convert -average frame*.png output.png).


Topic 7

How can I find the size of a text object / center text / justify text?

You can use the min_extent() and max_extent() functions (new in POV-Ray 3.5) to get the corners of the bounding box of any object. While this is sometimes not the actual size of the object, for text objects it is fairly accurate; accurate enough to align the text object.
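For example, centering a text object at the origin could be sketched like this (the font and string are arbitrary):

#declare MyText = text { ttf "timrom.ttf", "Hello", .5, 0 }
#declare BBMin = min_extent(MyText);
#declare BBMax = max_extent(MyText);

object
{ MyText
  // Move the center of the bounding box to the origin:
  translate -(BBMin + BBMax)/2
  pigment { rgb 1 }
}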


Topic 8

How do I make extruded text in POV-Ray?

POV-Ray has built-in TrueType font support that allows you to have 3D text in your scenes (see the documentation about the text object for more details).

There are also some external utilities that will import TrueType fonts and let you manipulate the resulting text. One of these programs is called Elefont.


Topic 9

How do I make an object hollow?

This question usually means: How do I make a hollow object, like a water glass or a jug?

Before answering that question, let me explain some things about how POV-Ray handles objects.

Although the POV-Ray documentation talks about solid and hollow objects, that's not quite how it actually works. Solid and hollow are somewhat misleading terms for describing the objects, and while there is indeed a hollow keyword, it's not that simple.

Firstly: POV-Ray only handles surfaces, not solid 3D objects. When you specify a sphere, it's actually just a spherical surface. It is only a surface and is not filled with anything. This can easily be seen by putting the camera inside the sphere, or by clipping a hole in one side of the sphere with the clipped_by keyword (so you can look inside).

People often think that POV-Ray objects are solid, truly 3D, with material filling the entire object, because when they make a difference CSG object it seems as if the object were actually solid. What the difference CSG actually does is cut away part of the object and add a new surface in place of the hole, which covers the hole completely so you can't see inside the object (this new surface is actually the part of the second object which lies inside the first). Again, if you move the camera inside the object, you will see that it is actually hollow and the object is just a surface.

So what is all this solid and hollow stuff the documentation talks about, and what is the hollow keyword used for?

Although objects are actually surfaces, POV-Ray handles them as if they were solid. For example, fog and media do not go inside solid objects. If you put a glass sphere into the fog, you will see that there's no fog inside the sphere.

If you add the hollow keyword to the object, POV-Ray will no longer handle it as solid, so fog and atmosphere will invade the inside of the object. This is the reason why POV-Ray issues a warning when you put the camera inside a non-hollow object (because, as it says, fog and other atmospheric effects may not work as you expected).

If your scene does not use any atmospheric effect (fog or media), there isn't any difference between a solid and a hollow object.
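A minimal sketch of the difference (visible only when the scene contains fog or media):

fog { distance 20 rgb <.7,.7,.8> }

// The fog does not enter this sphere:
sphere { <-2,0,0>, 1 pigment { rgbf 1 } }

// The fog fills this one:
sphere { <2,0,0>, 1 hollow pigment { rgbf 1 } }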

So all objects in POV-Ray are hollow. But the surface of an object is always infinitely thin, and there is only one surface. Real-world hollow objects always have two surfaces: an outer surface and an inner surface.

Usually people are referring to this kind of object when they ask for hollow objects. Such objects are easily achieved with a difference CSG operation, like this:

// A simple water glass made with a difference:
difference
{ cone { <0,0,0>,1,<0,5,0>,1.2 }
  cone { <0,.1,0>,.9,<0,5.1,0>,1.1 }
  texture { Glass }
}

The first cone limits the outer surface of the glass and the second cone limits the inner surface.


Topic 10

How can I fill a glass with water or other objects?

As described in the hollow objects question above, hollow objects always have two surfaces: an outer surface and an inner surface. Using the same example, a simple glass looks like this:

// A simple water glass made with a difference:
#declare MyGlass=
difference
{ cone { <0,0,0>,1,<0,5,0>,1.2 }
  cone { <0,.1,0>,.9,<0,5.1,0>,1.1 }
  texture { Glass }
}

The first cone limits the outer surface of the glass and the second cone limits the inner surface.

If we want to fill the glass with water, we have to make an object which coincides with the inner surface of the glass. Note that you have to avoid the coincident surfaces problem, so you should scale the water object just a little bit smaller than the inner surface of the glass. So we write something like this:

#declare MyGlassWithWater=
union
{ object { MyGlass }
  cone
  { <0,.1,0>,.9,<0,5.1,0>,1.1
    scale .999
    texture { Water }
  }
}

Now the glass is filled with water. But there's one problem: there's too much water. The glass should be filled only up to a certain level, which should be definable. This can easily be done with a CSG operation:

#declare MyGlassWithWater=
union
{ object { MyGlass }
  intersection
  { cone { <0,.1,0>,.9,<0,5.1,0>,1.1 }
    plane { y,4 }
    scale .999
    texture { Water }
  }
}

Now the water level is at a height of 4 units.


Topic 11

How can I bend an object?

There's no direct support for bending in POV-Ray, but you can achieve acceptable bending with the Object Bender by Chris Colefax.

Some objects can be bent by simply modelling them with other objects. For example, a bent cylinder can be more easily (and accurately) achieved using the intersection of a torus and some limiting objects.
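For example, a 90-degree bent cylinder can be sketched as a quarter of a torus (the dimensions here are arbitrary):

intersection
{ torus { 4, 1 }  // major radius 4, minor radius 1
  // Keep only the quadrant where both x and z are positive:
  box { <0, -1.1, 0>, <5.1, 1.1, 5.1> }
  pigment { rgb <.8,.2,.2> }
}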

It might seem a bit strange that most renderers support bending but POV-Ray doesn't. To understand this, one has to know how other renderers (the so-called scanline renderers) work:

In the so-called scanline renderers all objects are modelled with triangle meshes (or with primitives such as NURBS or Bézier patches, which can very easily be converted to triangles). The bending is, in fact, achieved by moving the vertices of the triangles.

In this context the term bending is a bit misleading. Strictly speaking, bending a triangle mesh would also bend the triangles themselves, not only move their vertices. No renderer can do this. (It can, however, be approximated by splitting the triangles into smaller triangles, making the bending effect more accurate, although still not perfect.) What these renderers do is not true bending in the strict mathematical sense, but only an approximation achieved by moving the vertices of the triangles.

This difference might sound irrelevant, since the result of this kind of fake bending usually looks as good as true bending. However, it is not irrelevant from the point of view of POV-Ray, because POV-Ray does not represent objects with triangles; they are true mathematical surfaces. POV-Ray can't fake bending by moving vertices because there are no vertices to move. In practice, bending (and other non-linear transformations) would require calculating the intersection of the object's surface with a curve (instead of a straight line), which is quite hard and often analytically impossible.

Note that isosurface objects can be modified with appropriate functions to achieve all kinds of transformations (linear and non-linear), so they are not really bound by this limitation. However, achieving the desired transformation requires some knowledge of mathematics.

See also the variable ior question.


Topic 12

The focal blur is very grainy. Can I get rid of the graininess?

Yes. Set variance to 0 (or a very small value, for example 1/100000) and choose a high enough blur_samples. The rendering will probably slow down quite a lot, but the result should be very good.
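A camera block using these settings could be sketched like this (the location, aperture and focal_point values are arbitrary):

camera
{ location <0, 1, -6>
  look_at 0
  aperture 0.4          // strength of the blur
  focal_point <0, 0, 0> // objects at this distance stay sharp
  blur_samples 200
  variance 1/100000     // don't stop sampling early
}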


Topic 13

Is blurred reflection possible in POV-Ray?

In the unofficial POV-Ray patch called MegaPov there is a feature which allows calculating blurred reflection (which works by shooting many reflected rays and averaging the result instead of shooting just one, as POV-Ray does by default). People were a bit disappointed when this feature was not included in POV-Ray 3.5.

However, is there any way to get blurred reflection in 3.5? Perhaps a bit surprisingly, the answer is yes. You can get blurred reflection which works pretty much like the one in MegaPov, and which can even look better and be a bit faster to render.

The trick is to use averaged textures with differing normals.

When POV-Ray calculates the color of an averaged texture map, it has to calculate the color of each texture individually before it can average them. If the textures are reflective, it has to shoot a reflected ray for each texture in order to get its color. The trick exploits this behaviour in order to achieve blurred reflection by shooting many reflected rays: The idea is to create many identical textures but with slightly differing normals and then average them, which causes POV-Ray to shoot a reflected ray for each one of them.

So you can implement this trick like this:

#declare BlurAmount = .2; // Amount of blurring 
#declare BlurSamples = 40; // How many rays to shoot 

object 
{ MyObject 
  texture 
  { average texture_map 
    { #declare Ind = 0; 
      #declare S = seed(0); 
      #while(Ind < BlurSamples) 
        [1 // The pigment of the object: 
           pigment { ObjectPigment } 
           // The surface finish: 
           finish { ObjectFinish } 
           // This is the actual trick:
           normal 
           { bumps BlurAmount 
             translate <rand(S),rand(S),rand(S)>*100
             scale 1000 
           } 
        ] 
        #declare Ind = Ind+1; 
      #end 
    } 
  } 
} 

There are basically two ways of using this trick: scaling the normals very large (as in the example above) or scaling them very small (i.e. with something like scale .001). The two give slightly different results, each with its own advantages and disadvantages:

  • Scaling very large will usually produce much smoother results, but if the amount of blurriness (BlurAmount) is high, it often requires quite a lot of samples to avoid banding artifacts. It works best when the blur amount is small. Another disadvantage is that it can be rather slow when using heavy antialiasing.
  • Scaling very small will usually produce a rather grainy result, which may or may not look good. The advantage is that heavy antialiasing makes the blurriness look better. If you are going to use extreme antialiasing settings for your final rendering, you can often lower the blur samples for extra speed and still get a good-looking result.

Also note that this exact same trick can be used to get blurred refraction. It is also possible to get uneven blur, e.g. with the reflection blurred more in the X-axis direction than in the other directions (achieved by scaling the normals unevenly), and countless variations of this (e.g. the amount and direction of the blur varying over the surface of the object), which makes this trick even more powerful than the reflection blur in MegaPov.

If you also want to use a normal modifier in the object in addition to the blurred reflection, you can add it to the normal block as an averaged normal map or similar, for example like this:

   normal 
   { average normal_map 
     { [1 bumps BlurAmount 
          translate <rand(S),rand(S),rand(S)>*100 
          scale 1000 
       ] 
       [1 MyNormal] 
     } 
   } 

Note: This will diminish the amount of blur, so BlurAmount will have to be doubled. The depth of MyNormal should likewise be double what it normally would be.


Topic 14

I have really thin lines or very small details and I'm getting jagged lines and heavy moire patterns no matter what antialiasing settings I use. Is there any way of getting the best possible antialiasing, no matter how long it takes to render?

You can achieve a very high-quality antialiasing by using the antialiasing settings +a0.0 +am2. To increase the quality even further, you can increase the value of the +r parameter from its default value (which is 3), for example +r4 (or higher).

Note that this can take a really long time to render, because it makes POV-Ray calculate antialiasing for every pixel of the image. You can speed up the rendering by giving the +a parameter a value larger than 0.0, but any such value can cause jagginess/moire effects if you have lots of really small, sub-pixel-sized details in your image. (Usually images don't contain significant amounts of such details, which is why a larger threshold value is enough for most images.)


Topic 15

If I use an image map with a cylindrical map type (map_type 2) the image is used only once around the cylinder. Is there any way to repeat the image several times around it instead of just once?

It is indeed only possible to put the image once around a cylinder when using map_type 2.

This is a typical example:

cylinder
{ -y, y, 1
  pigment
  { image_map { jpeg "myimage" map_type 2 }
    translate -y*.5
    scale 2
  }
}

The cylinder is wrapped with the image only once. There is a way to circumvent this limitation, though: the cylindrical warp. The idea is to use the default planar mapping for the image map, scale it down along the proper axis, and then apply a cylindrical warp to the pigment, like this:

cylinder
{ -y, y, 1
  pigment
  { image_map { jpeg "myimage" }

    // The trick:
    scale <1/8, 1, 1> // repeat 8 times
    warp { cylindrical }

    translate -y*.5
    scale 2
  }
}

This same trick can be used to apply the image only to a part of the cylinder instead of wrapping it around it completely. To achieve this, just add the keyword once inside the image_map block. The parts not covered by the image will be transparent (this is very useful when using layered textures, with the image map on top of another texture).

Note: the cylindrical warp can be used to apply any pigment (or texture) around the cylinder, not just image maps. This is sometimes quite useful when wanting to apply something to the surface of a cylinder, such as a text pattern, like this:

camera { location -z*6 look_at 0 angle 35 }
light_source { <10, 20, -50>, 1 }

cylinder
{ -y, y, 1
  pigment
  { object
    { text { ttf "crystal", "Hello", 1.1, 0 }
      // Note: Thickness must be larger than cylinder radius
      rgb 1, rgb x
    }
    scale <1/8, 1, 1>
    warp { cylindrical }
    translate -y*.2
  }
  rotate y*145
}

plane { y, -1.001 pigment { checker rgb 1, rgb .5 } }


Topic 16

When I put several transparent objects one in front of another or inside another, POV-Ray calculates a few of them, but the rest are completely black, no matter what transparency values I give.

Short answer: Try increasing the max_trace_level value in the global_settings block (the default is 5).

Long answer:

Raytracing has a peculiar feature: It can calculate reflection and refraction. Each time a ray hits the surface of an object, the program looks if this surface is reflective and/or refractive. If so, it shoots another ray from this point to the appropriate direction.

Now, imagine we have a glass sphere. Glass reflects and refracts, so when the ray hits the sphere, two additional rays are calculated, one outside the sphere (for the reflection) and one inside (for the refraction). Now the inside ray will hit the sphere again, so two new rays are calculated, and so on and so on...

You can easily see that there must be a maximum number of reflections/refractions calculated, because otherwise POV-Ray would calculate that one pixel forever.

This number can be set with the max_trace_level option in the global_settings block. The default value is 5, which is enough for most scenes. Sometimes it isn't enough (especially when there are lots of semi-transparent objects on top of each other), so you have to increase it.

So try something like:

global_settings
{
  max_trace_level 10
}


Topic 17

When I make an image with POV-Ray, it seems to use just a few colors, since I get color banding or concentric circles of colors where there should be none. How can I make POV-Ray use more colors?

POV-Ray always writes true color images (i.e. with 16777216 colors: 256 shades of red, 256 shades of green and 256 shades of blue). (This can be changed when outputting to PNG or to B/W TGA, but that is irrelevant to this question.)

So POV-Ray is not at fault: it always uses the maximum color resolution available in the target image file format.

This problem usually happens when your display is running in 16-bit color (i.e. only 65536 colors, the so-called hicolor mode) and you open the image created by POV-Ray with a program which doesn't dither it. The image is still true color, but the program can only show 65536 of its colors. (Dithering is a method that fakes more colors by mixing pixels of two adjacent colors to simulate the in-between colors.)

So the problem is not in POV-Ray, but in your image viewer. Even if POV-Ray shows a poor image while rendering because your display mode has too few colors, the image file it creates will have the full color range.


Topic 18

When I rotate an object, it disappears from the image or moves very strangely. Why?

You need to understand how rotation works in POV-Ray.

Objects are always rotated around the coordinate axes. When you rotate by, for example, <20,0,0>, you are rotating 20 degrees (counter-clockwise) around the X-axis. This is independent of the location of the object: it always rotates around the axis (what would the center of the object be anyway, and how would POV-Ray locate it?). This means that if the object is not centered on the axis, it will orbit that axis like the Moon orbits the Earth (always showing the same side to the Earth).

It is very good practice to define every object centered at the origin (i.e. with its center located at <0,0,0>). Then you can rotate it arbitrarily, and afterwards translate it to its proper location in the scene. It's a good idea to do this for every object, even one you don't rotate (because you never know whether you will want to rotate it some day).

What if, after all, you have a very complex object defined, but its center is not at the origin, and you want to rotate it around its center? Then you can just translate it to the origin, rotate it and then translate it back to its place. Suppose that the center of the object is located at <10,20,-30>; you can rotate it this way:

translate -<10,20,-30>
rotate <whatever>
translate <10,20,-30>


Topic 19

If I tell POV-Ray to render a square image or otherwise change the aspect ratio, the output image is distorted. What am I doing wrong?

The problem is that the camera is set to an aspect ratio of 4/3, while the picture you are trying to render has a different aspect ratio from that (like 1/1 for a square image).

You can set the aspect ratio with the right keyword in the camera block. For example, suppose you want to render an image at a resolution of 1024x400 (and assuming square pixels). You can change the aspect ratio of the camera like this:

camera
{ right x*1024/400
  (other camera settings...)
}

This keyword can also be used to change the handedness of POV-Ray (see the question about Moray and POV-Ray handedness for more details).

Note: One might ask, "Why doesn't POV-Ray always set the aspect ratio of the camera automatically according to the resolution of the image?"

It is actually not a good idea to set the aspect ratio of the camera automatically (assuming square pixels), for the reasons described below. Note, however, that it would be possible to automate this using the image_width and image_height keywords, like this:

camera
{ right x*image_width/image_height // Not recommended, read below
  (other camera settings...)
}

While this may sound very tempting, and in many cases it would indeed be a handy solution, in general this is not recommended. There are several reasons for this:

  1. If the aspect ratio of the image is adjusted automatically like this, then anyone who renders the image at a resolution with a different aspect ratio than the author intended will get an image that is either cropped or shows parts of the scene not seen in the original. In many cases this is a bad thing because it affects (usually negatively) the composition of the scene: important parts of the scene may get cropped out of view, or parts not intended to be seen (e.g. because of a lack of modelling) suddenly come into view. Someone doing this might not even notice that there's something wrong with the image.
  2. If the viewing angle is set with the angle keyword (as is rather usual), combining it with the automatic aspect ratio setting via right introduces a rather unwanted effect: the aspect ratio adjustment will then always happen in the vertical direction, either cropping or extending the image at the top and bottom. This means that if you try to render, for example, a widescreen version of the scene, you will only end up cropping the image at the top and bottom (instead of adding new scenery at the left and right). More often than not, scenes are designed to be rather panoramic in nature, i.e. the image can be expanded horizontally; the automatic aspect ratio correction together with a fixed angle setting will not achieve this.
  3. The automatic aspect ratio correction given above assumes that pixels are square. There are cases where one wants to render an image for a resolution with non-square pixels (examples include the Windows 98 startup screen, which uses a resolution of 320x400, and DVD video, which always has an aspect ratio of 1.78:1 although it can then be stretched to the intended aspect ratio, e.g. 2.35:1, by the player). If the automatic aspect ratio correction is used, it is not possible to render an image with non-square pixels without modifying the scene.
  4. The image_width and image_height keywords break the boundary between the frontend and the backend. In practice this means there is a high probability that they will be deprecated in the future, so relying heavily on them may not be a good idea.

Using a fixed aspect ratio setting like right x*1024/400 makes sure that the composition of the image stays unchanged and that the image can be rendered with non-square pixels. If someone renders the image with a different aspect ratio than intended, there is a much higher chance that he will notice something is wrong with the rendering settings than if the image were simply cropped or extended vertically.


Topic 20

Why are there strange dark pixels or noise on my CSG object?

This is the typical coincident surfaces problem. This happens when two surfaces are exactly at the same place. For example:

union
{ box { <-1,0,-1>,<1,-2,1> texture { Texture1 } }
  box { <-2,0,-2>,<2,-1,2> texture { Texture2 } }
}

The top surface of the first box coincides with the top surface of the second box. When a ray hits this area, POV-Ray has to decide which surface is closer. It can't, since they are in exactly the same place. Which one it actually chooses depends on the floating point calculations, rounding errors, initial parameters, the position of the camera, etc., and varies from pixel to pixel, causing those seemingly random pixels.

One solution to the problem is to decide which surface you want to be on top and translate that surface just a bit, so it extends slightly past the unwanted surface. For example, if the second box should be the top surface:

union
{ box { <-1,0,-1>,<1,-2,1> texture { Texture1 } }
  box { <-2,0.001,-2>,<2,-1,2> texture { Texture2 } }
}

Scaling an object by a small amount using either scale 1.0001 or scale 0.9999 also works in certain situations.

Note: A similar problem appears when a light source lies exactly on a surface: POV-Ray cannot accurately calculate whether it is inside or outside the surface, so dark (shadowed) pixels appear on every surface illuminated by this light.


Topic 21

Why won't the textures in stars.inc work with my sky_sphere?

The only thing that works with a sky_sphere is pigments; textures and finishes are not allowed. Don't be discouraged, though, because you can still use the textures in stars.inc with the following method:

Extract only the pigment statement from the declared textures. For example:

texture
{ pigment
  { color_map { [0 rgb ...][.5 rgb ...][1.0 rgb ...] }
    scale ...
  }
  finish { ... }
}

becomes:

pigment
{ color_map { [0 rgb ...][.5 rgb ...][1.0 rgb ...] }
  scale ...
}

The reason for this is that a sky_sphere doesn't have a surface; it isn't an actual object. It is really just a fancy version of the background feature which takes its color from a pigment instead of using a flat color. Because of this, normal and finish features, which depend on the characteristics of an object's surface for their calculations, can't be used. The textures in stars.inc were intended to be mapped onto a real sphere, and can be used something like this:

sphere
{ 0, 1
  hollow // So it doesn't interfere with any media
  texture { YourSkyTexture }
  scale 100000
}


Topic 22

When I use filter or transmit with my .tga image map, nothing happens.

POV-Ray can only apply filter or transmit to 8-bit, 256-color paletted images. Since most .tga, .png, and .bmp images are 24-bit with 16 million colors, they do not work with filter or transmit. If you must use filter or transmit with your image maps, you must reduce the color depth to a format that supports a 256-color palette, such as the .gif image format.
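As a sketch (the filename is hypothetical), transmit can be applied to every palette index of a paletted image like this:

```pov
plane
{ -z, 0
  pigment
  { image_map
    { gif "mypic.gif"    // must be a paletted (256-color) format
      transmit all 0.5   // applies to every palette index at once
    }
  }
}
```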

You might also check the POV-Ray docs on using the alpha channel of .png files if you need specific areas that are transparent.

End of Topic: Go Back to the Table of Contents

Topic 23

My isosurface is not rendering properly: there are holes or random noise, or big parts (or even the whole isosurface) just disappear.

The most common reason for this type of problem with isosurfaces is a max_gradient value that is too low. Use evaluate to make POV-Ray calculate a proper max_gradient for the isosurface (remember to specify a sensible max_gradient even when you use evaluate, or else the result may not be correct).

Sometimes an accuracy value that is too high can also cause problems even when the max_gradient is ok. If adjusting max_gradient doesn't seem to help, try lowering the accuracy as well.

Remember that specifying a max_gradient which is too high for an isosurface, although it gives the correct result, is needlessly slow, so you should always calculate the proper max_gradient for each isosurface you make.
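A sketch of what this looks like in a scene (the function and all parameter values are purely illustrative):

```pov
isosurface
{ function { x*x + y*y + z*z - 1 + 0.3*sin(10*x)*sin(10*y)*sin(10*z) }
  contained_by { box { -2, 2 } }
  accuracy 0.001
  max_gradient 5          // a sensible starting guess, as advised above
  evaluate 5, 1.2, 0.99   // let POV-Ray refine the estimate while rendering
  pigment { rgb 1 }
}
```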

Note: There are certain pathological functions where no max_gradient or accuracy will help. These functions usually have discontinuities or similar ill-behaved properties. With those you just have to find the settings which give the best quality/speed tradeoff. Isosurfaces work best with functions which give smooth surfaces.

End of Topic: Go Back to the Table of Contents

Topic 24

When I scale something very large or very small, or place something at a very large distance, the objects disappear. What is causing this?

POV-Ray internally uses 64-bit floating point numbers for almost all of its calculations. Floating point numbers are a good and flexible way of representing non-integer numbers, and since they are directly supported by hardware, they are extremely fast.

The good thing about floating point numbers is that it's possible to represent very large and very small numbers, with quite a lot of precision. For example, the maximum value representable with a 64-bit integer is about 10^19, while the maximum value of a 64-bit floating point number is about 10^308 (and naturally with integers you can only represent whole numbers, while with floating point numbers you can represent decimal values; for example, the smallest representable positive number is something like 10^-308).

But the bad news: As is logical, floating point numbers can't magically increase the total amount of different numbers representable with a 64-bit value (you can represent 2^64 different values, period). Floating point numbers increase the range of representable magnitudes, but this is done at the cost of accuracy. That is, as you approach the extreme values, the accuracy of the numbers decreases (ie. consecutive representable values make larger and larger jumps).

After this (longish) introduction, the answer: Floating point numbers are inaccurate near the extreme value ranges, which can potentially cause rendering artifacts. To avoid going to these extremes, some limits have been imposed for very large and small scales.

In some cases these artificial limits might appear way too restrictive (after all, 64-bit floating point numbers can often represent much larger numbers with good-enough accuracy). However, even with the current limits it's still possible to reach the accuracy limits of floating point numbers. For example, this simple scene shows what happens when these limits are reached (the sphere is full of lighting artifacts caused by floating point number inaccuracies):

camera { location -z*1e5 look_at 0 angle .002 }
light_source { <10,20,-30>, 1 }
sphere { 0,1 pigment { rgb 1 } }

This is not a bug in POV-Ray. This is just a side-effect of floating point number inaccuracy. There's no fix for this (the only fix would be using floating point numbers with more bits, eg. 128 bits, but this is not possible with current hardware).

End of Topic: Go Back to the Table of Contents

Topic 25

Sometimes when I use radiosity, I get big round artifacts in corners. What's wrong?

The image at the right is part of a larger image using radiosity, which shows the radiosity artifact in question (indicated by the red arrow).

This is actually quite a tricky problem with the radiosity algorithm of POV-Ray. In short, the radiosity algorithm works like this:

At certain intervals on a surface (the interval size being calculated by an algorithm whose parameters can be fine-tuned in the radiosity block of the scene), POV-Ray takes radiosity samples. These samples are taken by shooting rays in all directions away from the surface in question (ie. in the directions of an imaginary half-sphere over that surface).

Sometimes this sampling point happens to be located just at the junction of two surfaces, for example at the inner corner of a room, as in the image. What happens is that some of the rays shot from this point will actually miss the other surface and go outside the room. (Since the coloration of the outside is very different from the inside, usually eg. black, this has a strong effect on the lighting of that point.)

The reason why this produces a large wrongly-colored circle instead of a tiny point is the interpolation algorithm the radiosity engine uses (ie. instead of sampling at every point on the surface, it samples only at certain intervals and interpolates the rest for speed).

[Image: Radiosity Artifact Example]

In a way, this problem is very similar to the coincident surfaces problem: When the sampling point is exactly at the junction of two surfaces, POV-Ray has no way of knowing which side is the inside which should be sampled. (A similar problem happens if you put eg. a light source in such a junction point.)

Unlike the coincident surfaces/light sources problem, this radiosity version of the problem has no trivial solution. This is because the locations of the sampling points are calculated automatically and it's not possible to manually move one of them to avoid the problem. It's also very difficult to enhance the algorithm itself to detect such situations.

Note: Not all radiosity artifacts are caused by this phenomenon. This artifact has the characteristic of usually being a clear, lone circle at the junction of two surfaces.

One simple trick which is worth trying, and which often solves the problem, is to move the camera a very tiny amount in some direction. This usually shifts the locations of the radiosity samples by a similarly tiny amount, and with luck the problem will not occur.

A more elaborate kludge (which can't be applied in all cases, though) is to make the problematic wall (in this example the wall at the right) shadowless and put a copy of the wall a small distance behind it. This way the sample rays leaving the room will hit this second wall and get correct illumination.
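A sketch of this double-wall kludge (all coordinates are arbitrary; no_shadow is the keyword that makes an object shadowless):

```pov
// The problematic wall itself casts no shadows...
box { <5, 0, -5>, <5.1, 5, 5> pigment { rgb 1 } no_shadow }

// ...and a copy a small distance behind it catches the sample rays
// that would otherwise escape the room
box { <5.2, 0, -5>, <5.3, 5, 5> pigment { rgb 1 } }
```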

A third solution is to avoid sharp corners altogether: if corners are slightly rounded (with eg. a section of an inverted cylinder), the artifact should not happen. (Of course this starts to require modelling skills, especially if the surfaces are not at 90 degrees to each other...)

End of Topic: Go Back to the Table of Contents

Topic 26

How do I turn animation on? I have used the clock variable in my scene, but POV-Ray still only calculates one frame

The easiest way is to just specify the appropriate command line parameter on the command line or in the command line field in the rendering settings menu (in the Windows version). For example, if you want to create 20 frames, type this: +kff20

This will create 20 frames with the clock variable going from 0 to 1. The other command line parameters are found in the POV-Ray documentation.
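A minimal clock-driven scene might look like this (all values are illustrative); rendered with +kff20, the sphere makes one full revolution around the y axis over the 20 frames:

```pov
camera { location <0, 1, -5> look_at 0 }
light_source { <10, 20, -30>, 1 }

// clock goes from 0 to 1 over the animation, so 360*clock
// sweeps a full circle
sphere
{ <2, 0, 0>, 0.5
  pigment { rgb <1, 0, 0> }
  rotate <0, 360*clock, 0>
}
```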

Ken Tyler has also another good solution for this:

In the directory that you installed POV-Ray into you will find a subdirectory called scenes, and another inside that called animate. There you will find several example files showing how to write your scene to use the clock variable. You will still need to activate POV-Ray's animation feature by using an .ini file with the correct info, or with command line switches. I personally like to use the ini file method. If you try this, open the master povray.ini file from the tools menu and add the following lines:

;clock=1
;Initial_Frame=1
;Final_Frame=20
;Cyclic_Animation = on
;Subset_Start_Frame=6
;Subset_End_Frame=9

Save the file and close it. When you need to use the animation feature, simply edit the povray.ini file and uncomment the options you want to use. At a minimum you will need the Initial_Frame and Final_Frame options to make it work. Once you have stopped rendering your series of frames, be sure to comment the animation options out again in the ini file. After you have rendered a series of individual frames you will still need to compile them into the animation format that you wish to use, such as AVI or MPEG. See my links pages for programs that can help you do this. POV-Ray has no internal ability to do this for you, except on the Macintosh platform.

The Mac version normally doesn't use .ini files and lacks any command line, but uses a completely graphical interface instead. To activate animation, choose the render settings item from the Edit menu (right under Preferences, it will be titled FILENAME Settings, where FILENAME is the name of your file), click on the Animation tab, and enter the needed information in the text boxes.

End of Topic: Go Back to the Table of Contents

Topic 27

What is Photon Mapping?

Photon mapping uses forward raytracing (ie. sending rays from light sources) to calculate reflected and refracted light (aka. caustics). It is a new feature in POV-Ray 3.5.

The following is from the homepage of the developer:

My latest fun addition to POV is the photon map. The basic goal of this implementation of the photon map is to render true reflective and refractive caustics. The photon map was first introduced by Henrik Wann Jensen. It is a way to store light information gathered from a backwards ray-tracing [sic] step in a data structure independent from the geometry of a scene.

It is surprisingly fast and efficient. How is this possible when forward raytracing is so inefficient? For several reasons:

  1. Photon mapping is only used to calculate illumination, ie. lighting values, not to render the actual scene. Lighting values do not have to be as accurate as the actual rendering (it doesn't matter if your reflected light bleeds a bit out of range; actually this kind of bleeding happens in reality as well (due to light diffusing from air), so the result is not unrealistic at all).
  2. Photon mapping is calculated only for those (user-specified) objects that need it (ie. objects that have reflection and/or refraction).
  3. The rays are not shot in all directions, towards the entire scene, but only towards those specified objects. Many rays are indeed shot in vain, without affecting the final image in any way, but since the total amount of rays shot is relatively small, the rendering time doesn't get unacceptably longer.
  4. The final image itself is rendered with regular backwards raytracing (the photon mapping is a precalculation step done before the actual rendering). The raytracer doesn't need to use forward raytracing in this process (it just uses the precalculated lighting values which are stored in space).

As you have seen, for the photon mapping to work in an acceptable way, you have to tell the program which objects you want to reflect/refract light and which you don't. This way you can greatly optimize the photon mapping step.
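A sketch of how this is expressed in a scene (the photon count and material values are illustrative): a glass sphere is marked as a caustics target, and only it receives photons.

```pov
global_settings
{ photons { count 20000 }  // number of photons to shoot; illustrative value
}

sphere
{ 0, 1
  pigment { rgbf <1, 1, 1, 0.9> }
  interior { ior 1.5 }
  photons
  { target         // this object is a photon target
    refraction on  // shoot photons through it...
    reflection on  // ...and reflect photons off it
  }
}
```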

End of Topic: Go Back to the Table of Contents

Topic 28

Can POV-Ray use multiple processors?

The new POV-Ray 3.7 supports multiple processors. It uses available processors automatically so you don't have to explicitly turn the feature on.

Background:

Previously there was an explanation here about why supporting multiple processors is very difficult in a raytracer like POV-Ray. The problems were indeed many, and a great amount of work has been put into redesigning the internal structure of POV-Ray to support parallel rendering. Especially the radiosity algorithm has been greatly reworked in order to make it possible to render in parallel (but much of the work has been put into internal code structuring not visible to the end user).

You might have heard of parallel processing patches for pov3.6 and earlier. The problem with these patches is that they basically just start several copies of POV-Ray, giving them different parts of the image to render. This is problematic because the amount of memory needed to render the scene increases in direct relation to the number of parallel processes. This means that if you eg. render with two processors using such a patch, the amount of memory needed to render the scene is doubled. The new pov3.7, however, is able to render in parallel using as many processors as desired without any significant increase in memory usage. This means that if a scene requires 1 gigabyte of memory to render, rendering the scene using 8 processors still requires just 1 gigabyte of memory (with the old third-party patches it would have required 8 gigabytes).

End of Topic: Go Back to the Table of Contents

Topic 29

Is there a way to generate a wireframe output image from a POV scene file?

Short answer: No.

Long answer:

You have to understand the difference between a modeller like 3D-Studio and POV-Ray in the way they handle objects. Those modellers always use triangle meshes (and some modellers also use NURBS, which can be very easily converted into triangles). Triangle meshes are extremely simple to represent in a wireframe format: just draw a line for each triangle side.

However, POV-Ray handles most of the objects as mathematical entities, not triangle meshes. When you tell POV-Ray to create a sphere, POV-Ray only handles it as a point and a radius, nothing else (besides the possible matrix transform applied to it). POV-Ray only has a notion of the shape of the object as a mathematical formula (it can calculate the intersection of a line and the sphere).

For wireframe output there would have to be a way to convert the mathematical representation of the object into actual triangles. This is called tessellation.

For some mathematical objects, like the sphere, the box, etc, tessellation is quite trivial. For other entities, like CSG difference, intersection, etc, it's more difficult (although not impossible). For yet other entities it's completely impossible: infinite non-flat surfaces like paraboloids and hyperboloids (well, actually it is possible if you limit the surface to a finite size; still, the number of triangles that would need to be created would be extremely high).

There have been lots of discussions about incorporating tessellation into POV-Ray. But since POV-Ray is just a renderer, not a modeller, it doesn't seem to be worth the effort (adding tessellation to all the primitives and to CSG would be a huge job).

(Of course tessellation could give some other advantages, like the ability to fake non-uniform transformations of objects, as most triangle mesh modellers do...)

If you just want fast previews of the image, you can try to use the quality parameter of POV-Ray. For example setting quality to 0 (+q0) can give a very fast render. See also the rendering speed question.

End of Topic: Go Back to the Table of Contents

Topic 30

Can I specify variable IOR for an object? Is there any patch that can do this? Is it possible?

Short answer: No.

Long answer:

There are basically two ways of defining variable IOR for an object: IOR changing on the surface of the object and IOR changing throughout inside the object.

The first one is physically incorrect. A constant IOR simulates the physics quite correctly, since for objects with uniform density the light bends at the surface of the object and nowhere else. However, if the density of the object is not uniform but changes throughout its volume, the light will bend inside the object, while travelling through it, not only at the surface of the object.

This is why variable IOR on the surface of the object is incorrect and the possibility of making this was removed in POV-Ray 3.1.

From this we can deduce that a constant IOR is a kind of property of the surface of the object, while a variable IOR is a property of the interior of the object (like media in POV-Ray). Of course the physically correct interpretation is that IOR is always a property of the whole object (ie. its interior), not only its surface (and this is why IOR is now a property of the interior of the object in POV-Ray); however, a constant IOR takes effect only at the surface of the object, and that is where POV-Ray bends the rays.
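This is reflected directly in the scene syntax: since POV-Ray 3.1, a (constant) IOR is declared in the interior block. A sketch (material values are illustrative):

```pov
// Constant IOR: declared as a property of the object's interior,
// though its ray-bending effect is applied at the surface
sphere
{ 0, 1
  pigment { rgbf <1, 1, 1, 0.95> }  // mostly transparent
  interior { ior 1.33 }             // roughly the IOR of water
}
```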

The correct simulation for variable IOR, thus, would be to bend the ray inside the object depending on the density of the interior of the object at each point.

This is much harder to do than one might think. The reasons are similar to why non-uniform transformations are too difficult to calculate reasonably (as far as I know there exists no renderer that calculates true non-uniform transformations; mesh modellers just move the vertices, they don't actually transform the object; a true non-uniform transformation would bend the triangles). Moreover: non-uniform transformations can be faked if the object is made of many polygons (you can move the vertices, as most mesh modellers do), but you can't fake a variable IOR in this way.

Variable IOR is (mostly) impossible to calculate analytically (ie. in a mathematically exact way) at least in a reasonable time. The only way would be to calculate it numerically (usually by super-sampling).

Media in POV-Ray works in this way. It doesn't even try to analytically solve the color of the media, but supersamples the media along the ray and averages the result. This can be pretty inaccurate as we can see with the media method 1 (the only one which was supported in POV-Ray 3.1). However some tricks can be used to make the result more accurate without having to spend too much time, for example antialiasing (which is used by the media method 3 in POV-Ray 3.5). This is a quite easy calculation because the ray is straight, POV-Ray knows the start and end points of the ray and it knows that it doesn't intersect with anything along the ray (so it doesn't have to make ray-object intersection calculations while supersampling).

Variable IOR is, however, a completely different story. Here the program would have to shoot a LOT of rays along the path of the bending light ray. For each ray it would have to make all the regular ray-object intersection calculations. It's like having hundreds or thousands of transparent objects one inside another (with max_trace_level set so high that the ray will go through all of them). You can easily test how slow this is. It's VERY slow.

One could think: Hey, why not just shoot a few tens of rays and then use some kind of antialiasing to get the fine details, like in media method 3?

Well, it might work (I have never seen it tested), but I don't think it would help much. The problem is the inaccuracy of the supersampling (even when using antialiasing). In media it's not a big problem: if a very small shadowed area in the media is not detected by the supersampling process, the result will not differ very much from the correct one (since the shadowed area was so small, it would have diminished the brightness of that ray just a bit but no more) and it will probably still look good.

With IOR this is no longer true. With IOR even very, very small areas may have a very strong effect on the end result, since IOR can drastically change the direction of the ray, making the result completely different (even very small changes can have a great effect if the object behind the current refracting object is far away).

This can have disastrous effects. The refraction may change drastically from pixel to pixel almost at random, not to mention from frame to frame in an animation. To get a more or less accurate result lots of rays would be needed; just a few rays are not enough. And shooting lots of rays is an extremely slow process.

End of Topic: Go Back to the Table of Contents

Topic 31

Do I need inside_vector to use interior or media in a mesh?

Short answer: No. inside_vector and interior are completely unrelated. inside_vector is used only for CSG calculations and nothing else. You don't need it for defining an interior (eg. containing media) for a mesh. The interior block works for any closed surface regardless of whether the object is solid.

Long answer:

The features of the interior property of an object (eg. media, fading, etc) do not need to determine whether a point is inside the object or not. They are based solely on intersection pairs (that is, interior properties are calculated for each ray segment which intersects the object in two places). For this reason eg. media works just fine for closed meshes and bicubic patches without any additional tweaking. In other words, inside_vector is not needed for interior to work.

CSG, however, is a different issue. Some CSG calculations require solving whether a given point is inside an object or not, and this cannot be solved by default for meshes. This is where inside_vector kicks in.

The vector given as parameter to this keyword defines a direction. In theory any (non-zero) vector is ok. When POV-Ray needs to know whether a certain point is inside the mesh or not, it shoots a ray from this point in the direction defined by this vector and counts how many triangles of the mesh the ray hits. If the ray hits an odd number of triangles, POV-Ray assumes the point is inside the mesh, otherwise outside.

In theory it wouldn't be necessary for the user to give a specific vector; POV-Ray could just internally create some random vector. However, the reason why the user is given the option of defining the vector himself is that this algorithm might not be absolutely flawless in all cases. For example, if the vector happens to have the exact same direction as the tangent of a triangle, or if the rays happen to hit lots of edges of the mesh, the CSG operation might suffer from some jittering in places due to the limited accuracy of floating point numbers (not dissimilar to what happens with the infamous coincident-surfaces problem). If this happens, changing the direction of the inside_vector a bit may well fix the problem.
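A sketch of inside_vector enabling a mesh in CSG (the mesh is abbreviated to a single triangle here; a real mesh must of course be closed for the odd/even test to be meaningful):

```pov
intersection
{ mesh
  { triangle { <0,0,0>, <1,0,0>, <0,1,0> }
    // ... the rest of a closed triangle mesh ...
    inside_vector <0, 0, 1>  // direction of the inside/outside test rays
  }
  sphere { 0, 0.8 }
  pigment { rgb 1 }
}
```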

End of Topic: Go Back to the Table of Contents

-scanned for dead links StephenS 01:12, 28 April 2010 (UTC)