Documentation:Tutorial Section 4.1


This document is protected, so submissions, corrections and discussions should be held on this document's talk page.


Rotation behaves very strangely

"When I rotate an object, it dissapears from the image or moves very strangely. Why?"

You need to understand how rotation works in POV-Ray.

Objects are always rotated around the coordinate axes. When you rotate by, for example, <20,0,0>, you are rotating around the X-axis by 20 degrees (counter-clockwise). This is independent of the location of the object: it always rotates around the axis (after all, what is the 'center' of an arbitrary object, and how would you locate it?). This means that if the object is not centered on the axis, it will orbit the axis like the Moon orbits the Earth (always showing the same side to the Earth).

It is very good practice to define all objects centered at the origin (i.e. with the object's 'center' located at <0,0,0>). Then you can rotate the object arbitrarily, and after this you can translate it to its proper location in the scene. It is a good idea to do this with every object, even ones you do not rotate (you can never say whether you will want to rotate them some day).

What if, after all, you have a very complex object defined, but its center is not at the origin, and you want to rotate it around its center? Then you can simply translate it to the origin, rotate it, and translate it back to its place. Suppose that the center of the object is located at <10,20,-30>; you can rotate it this way:

translate -<10,20,-30>
rotate <whatever>
translate <10,20,-30>
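
For example, here is a minimal sketch putting the three steps together (the box, the angle and the center point are hypothetical):

// rotate a box 45 degrees around the Y-axis about its own center
#declare Center = <10,20,-30>;
box
{ Center - <1,1,1>, Center + <1,1,1>
  pigment { rgb <1,0,0> }
  translate -Center   // move the center to the origin
  rotate <0,45,0>     // rotate around the Y-axis
  translate Center    // move the object back to its place
}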

The image gets distorted when rendering a square image

"If I tell POV-Ray to render a square image or otherwise change the aspect ratio, the output image is distorted. What am I doing wrong?"

The problem is that the camera is set to an aspect ratio of 4/3, while the picture you are trying to render has an aspect ratio of 1/1 (or whatever).

You can set the aspect ratio with the 'right' keyword in the camera block. The general way to set the correct aspect ratio for your image dimensions is:

camera
{ right x*ImageWidth/ImageHeight
  // other camera settings...
}
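
In POV-Ray 3.5 and later you do not even need to hard-code the numbers: the built-in float constants image_width and image_height hold the dimensions of the image being rendered, so the camera adapts automatically to whatever resolution is given on the command line. A minimal sketch:

camera
{ location <0,1,-5>
  look_at <0,0,0>
  right x*image_width/image_height  // matches the rendered image dimensions
}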

This keyword can also be used to change the handedness of POV-Ray (see the question about Moray and POV-Ray handedness for more details).

Note: One could ask, "why does POV-Ray not always set the aspect ratio of the camera automatically according to the resolution of the image?"

There is one thing wrong with this thought: it assumes that pixels are always square (i.e. that the aspect ratio of the pixels is 1/1). The logic of the current behaviour becomes clear with an example:

Suppose that you design a scene using a regular 4/3 aspect ratio, as usual (like 320x240, 640x480 and so on). The image is designed to look good when viewed on a 4/3 monitor (as home computer monitors generally are).

Now you want to render this image for the Windows startup image. The resolution of the Windows startup image is 320x400. This resolution does not have an aspect ratio of 4/3, and the pixels are not square (they have an aspect ratio of 1/0.6 instead of 1/1). Now, when you render your image at a resolution of 320x400 with POV-Ray and show it with the monitor set to that resolution (as it is set at Windows startup, when the startup image is shown), the aspect ratio will be correct, so the image will have the correct proportions (it will not be squeezed in either direction).

If you had changed the aspect ratio of the camera to 320/400 (instead of using the default 4/3), you would not only have got a different image (showing parts of the scene not visible in the original, or hiding parts that were visible), but it would also have looked squeezed when shown at the 320x400 screen resolution.

Thus, the camera aspect ratio is the aspect ratio of the final image on screen, when viewed in the final resolution (which might not be a 4/3-resolution). Since the monitor screen has an aspect ratio of 4/3, this is the default for the camera as well.

Why are there strange dark pixels or noise on my CSG object?

This is the typical 'coincident surfaces problem'. It happens when two surfaces are in exactly the same place. For example:

union
{ box { <-1,0,-1>,<1,-2,1> texture { Texture1 } }
  box { <-2,0,-2>,<2,-1,2> texture { Texture2 } }
}

The top surface of the first box is coincident with the top surface of the second box. When a ray hits this area, POV-Ray has to decide which surface is closer. It cannot, since they are in exactly the same place. Which one it actually chooses depends on the floating-point calculations, rounding error, initial parameters, position of the camera, etc., and varies from pixel to pixel, causing those seemingly "random" pixels.

The solution to the problem is to decide which surface you want to be on top, and then translate that surface just a bit so that it protrudes past the unwanted surface. In the example above, if we want the second box to be on top, we write something like:

union
{ box { <-1,0,-1>,<1,-2,1> texture { Texture1 } }
  box { <-2,0.001,-2>,<2,-1,2> texture { Texture2 } }
}

Note that a similar problem appears when a light source lies exactly on a surface: POV-Ray cannot accurately calculate whether the light is inside or outside the surface, so dark (shadowed) pixels appear on every surface that is illuminated by this light.
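
The cure is the same idea: nudge the light source a tiny bit off the surface. A minimal sketch (positions hypothetical):

// PROBLEM: a light source lying exactly on the plane y=0
// light_source { <0, 0, 0> color rgb 1 }

// FIX: lift the light slightly off the surface
light_source { <0, 0.001, 0> color rgb 1 }
plane { y, 0 pigment { rgb 1 } }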

Why won't the textures in stars.inc work with my sky_sphere?

Only pigments work with a sky_sphere; textures and finishes are not allowed. Do not be discouraged, though, because you can still use the textures in stars.inc with the following method:

Extract only the pigment statement from the declared textures. For example:

texture
{
  pigment { color_map { [0 rgb ..][.5 rgb ..][1.0 rgb ..] } scale .. }
  finish { .. }
}

becomes:

pigment { color_map { [0 rgb ..][.5 rgb ..][1.0 rgb ..] } scale .. }
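
The extracted pigment then goes directly into the sky_sphere block. A minimal sketch (the pattern and color values here are hypothetical, just to suggest a star field):

sky_sphere
{ pigment
  { granite
    color_map { [0.0 rgb 0] [0.7 rgb 0] [1.0 rgb 1] }
    scale 0.01
  }
}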

The reason for this is that a sky_sphere does not have a surface; it is not an actual object. It is really just a fancy version of the background feature, which extracts a color from a pigment instead of using a flat color. Because of this, normal and finish features, which depend on the characteristics of the surface of an object for their calculations, cannot be used. The textures in stars.inc were intended to be mapped onto a real sphere, and can be used something like this:

sphere
{ 0, 1
  hollow // So it doesn't interfere with any media in the scene
  texture { YourSkyTexture }
  scale 100000
}

When I use filter or transmit with my .tga image map nothing happens.

POV-Ray can only apply filter or transmit to 8-bit, 256-color paletted images. Since most .tga, .png and .bmp images are 24-bit with 16 million colors, they do not work with filter or transmit. If you must use filter or transmit with your image maps, you must reduce the color depth to a format that supports a 256-color palette, such as the .gif image format.

You might also check the POV-Ray docs on using the alpha channel of .png files if you need specific areas that are transparent.
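
For example, here is a minimal sketch (the file name is hypothetical) applying transmit to a paletted GIF image map:

plane
{ -z, 0
  pigment
  { image_map
    { gif "stars.gif"
      transmit all 0.5 // make every palette entry 50% transparent
      // transmit 0, 1.0 // or make only palette entry 0 fully transparent
    }
  }
}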

Isosurface not rendering properly?

"My isosurface is not rendering properly: there are holes or random noise or big parts or even the whole isosurface just disappears."

The most common reason for this type of phenomenon with isosurfaces is a max_gradient value that is too low. Use evaluate to make POV-Ray calculate a proper max_gradient for the isosurface (but remember to specify a sensible max_gradient even when you use evaluate, or else the result may not be correct).

Sometimes an accuracy value that is too high can also cause problems, even when the max_gradient is ok. If playing with max_gradient does not seem to help, try lowering the accuracy as well.

Remember that specifying a max_gradient which is higher than an isosurface needs, although it gives the correct result, is needlessly slow, so you should always work out the proper max_gradient for each isosurface you make.
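
A minimal sketch (the function is hypothetical) showing where these settings go; if max_gradient is set too low, POV-Ray prints a warning with the largest gradient it actually found, which you can then use as the new value:

isosurface
{ function { x*x + y*y + z*z - 1 + 0.3*sin(10*x) }
  contained_by { box { -2, 2 } }
  max_gradient 9   // raise this if POV-Ray warns about a higher gradient
  accuracy 0.001
  pigment { rgb 1 }
}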

Note that there are certain pathological functions for which no max_gradient or accuracy setting will help. Such functions usually have discontinuities or similar ill-behaved properties. With those you just have to find the settings that give the best quality/speed tradeoff. Isosurfaces work best with functions which give smooth surfaces.

Language-related things

How do I turn animation on?

"How do I turn animation on? I have used the clock-variable in my scene, but POV-Ray still only calculates one frame."

The easiest way is to just specify the appropriate command line parameter on the command line or in the command line field in the rendering settings menu (in the Windows version). For example, if you want to create 20 frames, type this: +kff20

This will create 20 frames with the clock variable going from 0 to 1. The other command line parameters are found in the POV-Ray documentation.
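
For example, here is a minimal sketch of a scene fragment (all values hypothetical) in which the clock variable drives one full revolution over the animation:

camera { location <0,2,-6> look_at <0,0,0> }
light_source { <5,10,-5> color rgb 1 }

sphere
{ <2,0,0>, 0.5
  pigment { color rgb <1,0,0> }
  rotate <0, 360*clock, 0> // clock runs from 0 to 1 over the 20 frames
}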

Ken Tyler also has another good solution for this:

In the directory that you installed POV-Ray into, you will find a subdirectory called scenes, and another inside that called animate. There you will find several example files showing how to write your scene to use the clock variable. You will still need to activate POV-Ray's animation feature by using an .ini file with the correct information, or with command line switches. I personally like the ini file method. If you want to try this, open the master povray.ini file from the tools menu and add the following lines:

;clock=1
;Initial_Frame=1
;Final_Frame=20
;Cyclic_Animation = on
;Subset_Start_Frame=6
;Subset_End_Frame=9

Save the file and close it. When you need to use the animation feature, simply edit the povray.ini file and uncomment the options you want to use. At a minimum you will need the Initial_Frame and Final_Frame options to make it work. Once you have finished rendering your series of frames, be sure to comment the animation options out again. After you have rendered the series of individual frames, you will still need to compile them into the animation format you wish to use, such as AVI or MPEG. See the links collection on our website for programs that can help you do this. POV-Ray has no internal ability to do this for you, except on the Macintosh platform.

The Mac version normally does not use .ini files and has no command line; it uses a completely graphical interface instead. To activate animation, choose the render settings item from the Edit menu (right under "Preferences"; it will be titled "FILENAME Settings", where FILENAME is the name of your file), click on the Animation tab, and enter the needed information in the text boxes.

Can POV-Ray use multiple processors?

Short answer: The only way to run POV-Ray on multiple processors is to run several copies of POV-Ray.

Long answer:

Making a program use multiple threads is not as trivial as it may sound. Here are some reasons why it would be quite difficult to do in POV-Ray:

  • You cannot do it with standard C (or C++), and POV-Ray is intended to be very portable. This is not just an issue of philosophy or purism; POV-Ray really is used on a large variety of different platforms.
  • Multithreading is a very complex issue, and it is much more difficult to make a bug-free multithreaded program than a single-threaded one (several aspects of multithreading, like mutual exclusion problems, make a multithreaded program very non-deterministic). It is not impossible, though, since it has been done (there are patched versions of POV-Ray with multithreading support). However, it is far from trivial.
  • Raytracing is usually thought of as an easily threaded problem: you just calculate one pixel and draw it on screen, independently of the other pixels. However, with advanced techniques, like antialiasing and especially stochastic global illumination calculation (referred to as "radiosity" in POV-Ray's documentation and syntax), this is no longer true.
    • To speed up antialiasing, a threshold value is used between pixels: if the difference in color between two pixels is higher than the threshold, antialiasing is calculated. Of course we need information from the nearby pixels for this.
    • In global illumination calculations, lighting values are stored in a spatial tree structure. Subsequent pixels may use the information stored in this tree for their illumination. This means that the pixel calculated at the upper left corner may affect the color of the pixel at the lower right corner. This is why calculating a radiosity image in parts does not work very well.

    Both problems can probably be solved in some way but, as said, it is far from trivial.

An excellent article about the issue can be found on the Ray Tracing News web page.

Here is an answer from John M. Dlugosz with useful tips:

The POV-Ray rendering engine is a single thread of execution, so when run on a dual Pentium Pro (running NT4) the CPU indicator only goes up to about 50%. POV does not use more than half the available power on the machine.

That is the basic issue, though to quibble a bit it is not exactly true: the rendering engine soaks up one whole CPU, but the editor runs on its own thread, and operating system functions (writing to the file, updating the display, network activity, system background tasks) run on yet other threads. This gives a little bit of a bonus, and the system uses as much as 54% of the available MIPS when I watch it. More importantly, the machine is still highly responsive, and editing or other applications continue on without being sluggish.

But for a long render, it is annoying to have one CPU be mostly idle. What can be done to cut rendering time in half (from 20 hours down to 10, for example)?

The simplest thing is to run two copies of POV on the machine. Have one copy render the top half, and the other render the bottom half. Then paste the halves together in your picture editor.

One thing to watch out for: do not just fire up two copies and point them at the same INI file and image file. They would overwrite each other's output and make a big mess. Instead, make sure each copy writes to a different file.
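
A minimal sketch (file and scene names are hypothetical) of two INI files that split the work using the Start_Row and End_Row options, which accept fractions of the image height:

; top.ini -- renders the top half of the image
Input_File_Name=scene.pov
Output_File_Name=top.png
Start_Row=0.0
End_Row=0.5

; bottom.ini -- renders the bottom half
Input_File_Name=scene.pov
Output_File_Name=bottom.png
Start_Row=0.5
End_Row=1.0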

For moderate renders, you might let one copy chug away on the long render, and use a second copy interactively to continue development in POV.

Can I get a wireframe render of my scene?

"Is there a way to generate a wireframe output image from a POV scene file?"

Short answer: No.

Long answer:

You have to understand the difference between a modeller like 3D-Studio and POV-Ray in the way they handle objects. Those modellers always use triangle meshes (and some also use NURBS, which can very easily be converted into triangles). Triangle meshes are extremely simple to represent in a wireframe format: just draw a line for each triangle edge.

However, POV-Ray handles most objects as mathematical entities, not triangle meshes. When you tell POV-Ray to create a sphere, POV-Ray handles it only as a point and a radius, nothing else (besides the possible matrix transform applied to it). POV-Ray's only notion of the shape of the object is a mathematical formula (it can calculate the intersection of a line and the sphere).

For wireframe output there would have to be a way to convert that mathematical representation of the object into actual triangles. This is called tessellation.

For some mathematical objects, like the sphere, the box, etc., tessellation is quite trivial. For other entities, like CSG difference, intersection, etc., it is more difficult (although not impossible). For others it is completely impossible: infinite non-flat surfaces like paraboloids and hyperboloids (well, actually it is possible if you limit the surface to a finite size; still, the number of triangles that would have to be created would be extremely high).

There have been lots of discussions about incorporating tessellation into POV-Ray. But since POV-Ray is just a renderer, not a modeller, it does not seem to be worth the effort (adding tessellation support to all the primitives and to CSG would be a huge job).

(Of course tessellation could bring some other advantages, like the ability to fake non-uniform transformations of objects, as most triangle mesh modellers do...)

If you just want fast previews of the image, you can try the quality parameter of POV-Ray. For example, setting the quality to 0 (+q0) can give a very fast render. See also the rendering speed question.
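
For example, with the command-line versions (the scene name is hypothetical):

povray +Iscene.pov +W320 +H240 +Q0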

Can I specify variable IOR for an object?

"Can I specify variable IOR for an object? Is there any patch that can do this? Is it possible?"

Short answer: No.

Long answer:

There are basically two ways of defining a variable IOR for an object: an IOR that changes on the surface of the object, and an IOR that changes throughout the interior of the object.

The first one is physically incorrect. With a uniform IOR it simulates refraction quite correctly, since for objects with uniform density the light bends at the surface of the object and nowhere else. However, if the density of the object is not uniform but changes throughout its volume, the light will bend inside the object while travelling through it, not only at the surface of the object.

This is why a variable IOR on the surface of the object is incorrect, and the possibility of specifying one was removed in POV-Ray 3.1.

From this we can deduce that a constant IOR is a kind of property of the surface of the object, while a variable IOR is a property of the interior of the object (like media in POV-Ray). Of course the physically correct interpretation is that IOR is always a property of the whole object (i.e. its interior), not only of its surface (and this is why IOR is now a property of the interior of the object in POV-Ray); however, a constant IOR takes effect only at the surface of the object, and that is where POV-Ray bends the rays.

The correct simulation for variable IOR, thus, would be to bend the ray inside the object depending on the density of the interior of the object at each point.

This is much harder to do than one might think. The reasons are similar to why non-uniform transformations are too difficult to calculate reasonably (as far as I know, there exists no renderer that calculates true non-uniform transformations; mesh modellers just move the vertices, they do not actually transform the object; a true non-uniform transformation would bend the triangles). Moreover, non-uniform transformations can be faked if the object is made of many polygons (you can move the vertices, as most mesh modellers do), but you cannot fake a variable IOR in this way.

Variable IOR is (mostly) impossible to calculate analytically (i.e. in a mathematically exact way), at least in a reasonable time. The only way would be to calculate it numerically (usually by supersampling).

Media in POV-Ray works in this way. It does not even try to solve the color of the media analytically, but supersamples the media along the ray and averages the result. This can be fairly inaccurate, as we can see with media method 1 (the only one that was supported in POV-Ray 3.1). However, some tricks can be used to make the result more accurate without spending too much time, for example antialiasing (which is used by media method 3). This is a fairly easy calculation because the ray is straight: POV-Ray knows the start and end points of the ray, and it knows that the ray does not intersect anything in between (so it does not have to make ray-object intersection calculations while supersampling).

Variable IOR is, however, a completely different story. Here the program would have to shoot a LOT of rays along the path of the bending light ray, and for each of them it would have to make all the regular ray-object intersection calculations. It is like having hundreds or thousands of transparent objects one inside another (with max_trace_level set so high that the ray goes through all of them). You can easily test how slow that is: it is very slow.

One could think that "hey, why not just shoot a few tens of rays and then use some kind of antialiasing to get the fine details, like in media method 3?"

Well, it might work (I have never seen it tested), but I do not think it would help much. The problem is the inaccuracy of the supersampling (even when using antialiasing). In media this is not a big problem: if a very small shadowed area in the media is not detected by the supersampling process, the result will not differ very much from the correct one (since the shadowed area was so small, it would have diminished the brightness of that ray only a bit), and it will probably still look good.

With IOR this is no longer true. With IOR even very, very small areas may have a very strong effect on the end result, since the IOR can drastically change the direction of the ray, making the result completely different (even very small changes can have a great effect if the object behind the current refracting object is far away).

This can have disastrous effects: the IOR may change drastically from pixel to pixel almost at random, not to mention from frame to frame in an animation. To get a more or less accurate result, lots of rays would be needed; just a few rays is not enough. And shooting lots of rays is an extremely slow process.

What is Photon Mapping?

Photon mapping uses forward raytracing (i.e. shooting rays from the light sources) to calculate reflected and refracted light (a.k.a. caustics).

The following is from the homepage of the developer (Nathan Kopp):

"My latest fun addition to POV is the photon map. The basic goal of this implementation of the photon map is to render true reflective and refractive caustics. The photon map was first introduced by Henrik Wann Jensen. It is a way to store light information gathered from a backwards ray-tracing [sic] step in a data structure independent from the geometry of a scene."

It is surprisingly fast and efficient. How is this possible when forward raytracing is so inefficient? For several reasons:

  1. Photon mapping is only used to calculate illumination, i.e. lighting values, not to render the actual scene. Lighting values do not have to be as accurate as the actual rendering (it does not matter if your reflected light "bleeds" a bit out of range; this kind of "bleeding" actually happens in reality as well, due to light scattering in the air, so the result is not unrealistic at all).
  2. Photon mapping is calculated only for those (user-specified) objects that need it (i.e. objects that reflect and/or refract light).
  3. The rays are not shot in all directions, towards the entire scene, but only towards the specified objects. Many rays are indeed shot in vain, without affecting the final image in any way, but since the total amount of rays shot is relatively small, the rendering time does not become unacceptably long.
  4. The final image itself is rendered with regular backwards raytracing (the photon mapping is a precalculation step done before the actual rendering). The raytracer does not need to use forward raytracing in this process (it just uses the precalculated lighting values which are stored in space).

As you can see, for photon mapping to work in an acceptable way, you have to tell the program which objects you want to reflect/refract light and which you do not. This way you can optimize the photon mapping step considerably.
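
Photon mapping was eventually integrated into POV-Ray (version 3.5), where this is done with photons blocks. A minimal sketch (the objects and the photon count are hypothetical):

global_settings
{ photons { count 20000 } // number of photons shot in the precalculation step
}

// a glass sphere explicitly marked as a photon target
sphere
{ <0,1,0>, 1
  pigment { rgbf <1,1,1,0.9> }
  interior { ior 1.5 }
  photons
  { target         // shoot photons at this object
    refraction on  // let photons refract through it
    reflection off
  }
}

// the floor just collects the photons; it needs no photons block of its own
plane { y, 0 pigment { rgb 1 } }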

File Formats

Saving the image to disk.

"I have rendered an image with POV-Ray, but how do I save it to JPG or GIF or any other image format?"

This is a typical problem for people using the Windows version of the program for the first time.

POV-Ray is a raytracer with only one purpose: to read a source file describing the scene, calculate the image, and save it to disk in a supported image format, usually TGA (and optionally PNG, BMP, etc).

POV-Ray has always had this goal, still has it, and (desirably) always will. It is mostly command-line oriented; it also supports non-essential extras, like showing the image while it is rendering.

A GUI does not change anything. POV-Ray is still POV-Ray, with or without a GUI: it takes source code, calculates the image and saves it to disk. By default it shows the image while raytracing, but that is just a secondary, non-essential feature. It can be turned off and POV-Ray will still do its job.

So the answer to the question is: The image is already saved on disk.

Usually it is saved in TGA or BMP format (depending on the settings), with the same name as the source file (so if the source is named CHAIR.POV, the image will be named CHAIR.TGA or CHAIR.BMP or whatever). The location is either the directory where the .pov file is, or else a common directory for images (which you can set up in the main povray.ini file).
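
If you want a different format or location, the output options can be set in an INI file or on the command line. A minimal sketch (file names hypothetical):

; render chair.pov and write the image as PNG to a specific file
Input_File_Name=chair.pov
Output_File_Type=N   ; N = PNG, T = uncompressed Targa, C = compressed Targa
Output_File_Name=c:\images\chair.png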

Can I convert my POV-Ray scenes to another format?

(Answer by Johannes Hubert)

For POV-Ray 2.2: Try Crossroads, or, if you want to convert to Moray MDL files, try POV2MDL from Thomas Baier.

For POV-Ray 3.1 and newer: There is unfortunately not much you can do. There is no really versatile program yet that can read (and convert) POV-Ray 3.1 scripts (except for POV-Ray itself :-). Your best shot would be POV2RIB if you want to convert to the RIB format. If you know how to program in C++, you can get the ParPov C++ library from the same URL. It is a class library for reading POV 3.1 scripts and converting them into C++ objects (it has also been used for POV2RIB).

3DWin from Thomas Baier (see the URL above) converts from the POB format to a lot of other formats. POB is a special binary POV-Ray format devised by Thomas and written by a custom-compiled version of POV-Ray 3.0 (get the POB-SDK at the same URL): this POV-Ray version reads POV scripts and outputs POB files, which can then be converted by 3DWin. The drawback: although all objects, textures etc. of the scene are in the POB file, they are not all recognized by 3DWin. Only triangles and meshes of triangles are recognized; everything else in the scene is lost.

How can I convert my scenes from format X to POV-Ray format?

Crossroads can convert a very limited subset of POV-Ray primitives (spheres and triangles work best). In particular, it can be used to convert unions of regular (non-smooth) triangles to other formats.

Another option is 3DWin.

How do I import all of the textures I created in 3DS Max into POV-Ray?

As POV-Ray supports UV-mapping, textured objects used by renderers such as 3D-Studio can be used by first converting them with a proper converter. You can find a list of converters and other related software in the links collection on our website.



