Documentation:Tutorial Section 4.2

This document is protected, so submissions, corrections and discussions should be held on this document's talk page.


How can I avoid artifacts and still get good JPEG compression?

(Answer by Peter J. Holzer)

First, you have to know a little bit about how a picture is stored in JPEG format.

Unlike most image formats, it does not store RGB values but YUV values (one grayscale value and two "color difference" values), just as they are used in a color TV signal. Since the human eye mostly uses the gray values to detect edges, one can usually get away with storing the color information at a lower resolution: an 800x600 JPEG typically has grayscale information at 800x600, but color information only at 400x300. This is called subsampling.
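
For example, with the common 2x2 subsampling of both color channels, an 800x600 image stores 800*600 = 480,000 grayscale samples but only 400*300 = 120,000 samples per color channel, i.e. 720,000 samples in total instead of 1,440,000. The raw data is halved before the quality setting even comes into play.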

For each color channel separately, the picture is then divided into little squares and the cosine transform of each square is computed. A neat feature of this transformation is that you can throw away quite a few of the resulting values with very little loss of quality, while the image compresses a lot better. Roughly speaking, the fraction of this information that is kept is what the quality setting controls.

Finally, the data is compressed.

Most programs only let you change the quality setting. This is fine for photos and photorealistic renderings of "natural" scenes. Generally, quality values around 75% give the best compromise between quality and image size.

However, for images which contain very saturated colors, the lower resolution of the color channels causes visible artifacts which are very similar to those caused by low quality settings. They can be minimized by setting an extremely high quality (close to 100%), but this will dramatically increase the file size, and often the artifacts are still visible.

A better method is to turn off subsampling. The higher resolution of the color channels will cause only a modest increase in file size, which is more than offset by the ability to use a lower quality setting.

The cjpeg command line utility (which should be available on all systems that have a command line, e.g. Linux, MS-DOS, Unix, ...) has a "-sample" option to set the sampling factors for each color component.

        cjpeg -sample 1x1,1x1,1x1 -quality 75 -optimize

should be good default values which have to be changed only rarely.
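
For instance, to compress a PPM file written by POV-Ray (cjpeg writes the JPEG to standard output; the file names here are only placeholders):

        cjpeg -sample 1x1,1x1,1x1 -quality 75 -optimize scene.ppm > scene.jpg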

Why are there no converters from POV to other formats?

"Why are there so many converters from other 3D file formats to POV, but practically no converters from POV to other formats?"

It is a mistake to think that a POV-Ray file is just the same kind of data file as in most other renderers.

The file format of most renderers is just a data file containing numerical values (vertex coordinates, triangle indices, textures, uv-coordinates, NURBS data...) describing the scene. They are usually little more than numerical data containers.

However, POV-Ray files are much more than just data files. A POV-Ray file is actually source code in the POV-Ray scripting language. This scripting language is in many respects a full programming language (it is Turing-complete). It contains many features typical of programming languages but not of data files, such as variables, loops, mathematical functions and macros, and it can describe things in a much more abstract way than plain numbers.
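
As a small illustration (this fragment is ours, not from the original answer), the following is source code rather than data; the twenty spheres only come into existence when an interpreter executes the loop at parse time:

// No list of twenty spheres appears anywhere in the file;
// the coordinates are computed when the scene is parsed.
#declare Ind = 0;
#while (Ind < 20)
  sphere { <Ind, sin(Ind), 0>, 0.4 pigment { rgb <1, 0, 0> } }
  #declare Ind = Ind + 1;
#end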

This is why converting a POV-file to a data file readable by other renderers is so difficult. The converter would actually have to "execute", that is, interpret the scripting language (in exactly the same way as BASIC or Perl source code is interpreted). Writing a scripting language interpreter is a much more laborious job than just converting numerical data from one format to another.

There is also another problem: POV-Ray describes most of its objects as mathematical entities, while most other renderers just handle triangles (or NURBS or similar easily tessellated primitives). A converter would have to tessellate most POV-Ray primitives in order to convert them to other formats. This is quite a laborious job for a converter (it would practically have to implement an almost complete POV-Ray renderer).

This is why making a full-featured converter from any POV-file to any other format is an almost impossible task.

Why are triangle meshes in ASCII format?

"Why are triangle meshes in ASCII format? They are huge! A binary format would be a lot smaller. If POV-Ray can read binary images, why can it not read binary mesh data?"

It is not as simple as you may think.

You cannot compare binary mesh data with image files. Yes, images are binary data, but there is one big difference: image files use integer numbers (usually bytes, in some cases 16-bit integers), which can easily be read on any system.

However, meshes use floating point numbers.

It might come as a bit of a surprise that it is far from easy to represent them in a binary format that can be read on every possible system.

It is very important to keep in mind that POV-Ray is intended to be a very portable program, which should be compilable on virtually any system with a decent C compiler. This is not just mumbo-jumbo; POV-Ray IS used in a wide variety of operating systems and computer architectures, including Windows, MacOS, Linux, (Sparc) Solaris, Digital Unix and so on.

The internal representation of floating point numbers may differ between systems in the number of bits used and in how those bits are divided among the parts of the number. There is also the infamous big-endian/little-endian problem (that is, even if the floating point numbers are identical on two different systems, they may be written to a file in a different byte order).

If you carelessly make a patch which reads and writes floating point numbers in binary format, you will probably quickly find that your patch only works on one architecture (e.g. PC) and not on others.

In order to store floating point numbers so that they can be read on any system, you have to store them in a universal format. ASCII is as good as any other.
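
For example, a single triangle stored in POV-Ray's ASCII scene format:

triangle { <0, 0, 0>, <1, 0, 0>, <0, 1, 0> }

Any system that can parse text can read these numbers, no matter how its FPU represents floating point values internally.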

However, you are not completely out of luck when it comes to compressing mesh data. This has been done before.

Since version 3.5, POV-Ray supports a new type of mesh (called mesh2) which stores the mesh data in a more compact format (similar to the one used in the PCM format described in the above-mentioned link, but with a bit more syntax around it).
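
For illustration, a minimal mesh2 describing a tetrahedron: the vertices are listed once and the faces refer to them by index, so shared vertices are not repeated:

mesh2 {
  vertex_vectors { 4, <0,0,0>, <1,0,0>, <0,1,0>, <0,0,1> }
  face_indices { 4, <0,1,2>, <0,1,3>, <0,2,3>, <1,2,3> }
  pigment { rgb <1, 1, 1> }
}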

Utilities, models, etc.

What is the best animation program available?

Check the POV-Ray links collection on our website.

Creating/viewing MPEG files

"How can I create/view mpeg-files in Windows/Linux/...?"

See the IRTC Animation FAQ.

Where can I find models/textures?

Check the POV-Ray links collection on our website.

What are the best modellers for POV-Ray?

Check the POV-Ray links collection on our website.

Any POV-Ray modellers for Mac?

(Answer by Henri Sivonen)

Yes, there are. However, a text editor is still needed.

Most of the available modelers are listed on the Official POV-Ray MacOS Info Page.

DOS, Windows and Linux i386 modelers can be used with an Intel PC emulator. Under Mac OS X, most Linux-compatible applications with freely available source code can be compiled and run natively.

Is there any user gallery of POV-Ray images?

There are literally hundreds of POV-Ray user galleries. Almost anybody who uses POV-Ray and has a web page has some sort of picture gallery set up. Look for web page addresses at the bottom of messages posted to the newsgroup comp.graphics.rendering.raytracing and the povray newsgroups.

There are two places officially supported by the POV-Team. They are:

- The POV-Ray users gallery located on our web server.

- On our private news server (not connected with USENET), in the povray.binaries.images newsgroup, you will find many images posted by other POV-Ray users. There are also discussion groups, and plenty of sample code and scenes.

You should also check the Internet Raytracing Competition homepage.

Any good heightfield modellers?

Check the POV-Ray links collection on our website.

Any easy way of creating trees?

A program called Tree designer by Johannes Hubert is an excellent modelling program for trees.

There are also several good include files for this purpose. Check out our link collection of Include, Macro and Object Files.

Rendering speed

Will POV-Ray render faster with a 3D card?

"Will POV-Ray render faster if I buy the latest and fastest 3D videocard?"

No.

3D-cards are not designed for raytracing. They read polygon meshes and then scanline-render them, and scanline rendering has very little, if anything, to do with raytracing. 3D-cards cannot calculate typical raytracing features such as reflections; the algorithms they use simply have nothing to do with raytracing.

This means that you cannot use a 3D-card to speed up raytracing (even if you wanted to do so). Raytracing performs lots of floating point calculations, which is very FPU-intensive. You will get much more speed from a very fast FPU than from a 3D-card.

What raytracing actually does is this: calculate the color of one pixel and (optionally) put it on the screen. You will get little benefit from a fast videocard since only individual pixels are drawn on screen.

How do I increase rendering speed?

This question can be divided into two questions:

1) What kind of hardware should I use to increase rendering speed?

(Answer by Ken Tyler)

The truth is that the computations needed for rendering images are both complex and time consuming. This is one of the few types of program that will actually put your processor's FPU to maximum use.

The things that will most improve speed, roughly in order of importance, are:

  1. CPU speed
  2. FPU speed
  3. Bus speed and level one and two memory cache. More is better. The faster the bus, the faster the processor can swap computations out to its level 2 cache and read them back in, so bus speed can have a large impact on both FPU and CPU calculation times. The more cache memory you have available, the faster the operation becomes, because the CPU does not have to rely on the much slower system RAM to store information.
  4. Memory amount, type, and speed. Faster and more is undoubtedly better. Swapping out to the hard drive to extend memory should be considered the last possible option: a read/write to disk is like walking compared to driving a car. Here again bus speed is a major player in the fast rendering game.
  5. Your OS and the number of applications open. Closing open applications, including background items like the system monitor, task scheduler, internet connections, the Windows volume control, and all the other things people have hiding in the background, can greatly reduce rendering time, because background applications steal CPU cycles. Open the task manager, see what you have running, and close everything but the absolute necessities. Other multi-tasking OS's have their own ways of listing open applications and should be used accordingly.
  6. And lastly, your graphics card. This may seem unlikely to you, but it is true: with a simple 16-bit graphics card your render times will be equal to those of a system with the same processor and memory but a better graphics card. No more, no less. If you play a lot of games or watch a lot of MPEG movies on your system, then by all means own a good graphics card. If it is rendering and raytracing you want to do, then invest in the best system speed and architecture your money can buy. Graphics cards with hardware acceleration are designed for fast shading of simple polygons, prevalent in the gaming industry, and offer no support for the intense mathematical number crunching that goes on inside a rendering/raytracing program like POV-Ray, 3D Studio Max, or Lightwave. If your modelling program uses OpenGL shading, a card with OpenGL support will speed up the shaded preview window, but when it comes time to render or raytrace the image, its support disappears.

2) How should I make the POV-Ray scenes so that they will render as fast as possible?

These are some things which may speed up rendering without having to compromise the quality of the scene:

  • Bounding boxes: Sometimes POV-Ray's automatic bounding is not perfect, and considerable speed may be gained by bounding objects by hand. Good candidates are, for example, CSG differences and intersections, blobs and poly objects. See also: CSG speed.
  • Number of light sources: Each light source slows down the rendering. If your scene has many light sources, see if you can remove some of them without losing much quality. Also, replace point light sources with spotlights whenever possible: if a light source lights only a small part of the scene, a spotlight is better than a point light, since the point light is tested for every pixel while the spotlight is tested only when the pixel falls inside the light cone.
  • Area lights are very slow to calculate, and with big media effects they become extremely slow. Use as few area lights as possible. Always use adaptive area lights unless you need very high accuracy, and use spot area lights whenever possible.
  • When you have many objects with the same texture, union them and apply the texture only once. This will decrease parse time and memory use. (This of course supposes that it does not matter if the texture no longer moves with each individual object...)
  • Things to do when doing fast test renderings:
    • Use the quality command line parameter (i.e. +Q).
    • Comment out (or enclose with #if-statements) the majority of the light sources and leave only the necessary ones to see the scene.
    • Replace (with #if-statements) slow objects (such as superellipsoids) with faster ones (such as boxes), as in the sketch after this list.
    • Replace complex textures with simpler ones (like uniform colors). You can also use the quick_color statement to do this (it takes effect when you render with quality 5 or lower, i.e. with the command line parameter +Q5).
    • Reflection and refraction: When an object reflects and refracts light (such as a glass object), it usually slows down the rendering considerably. For test renderings, turning off one of them (reflection or refraction) or both should greatly increase rendering speed. For example, while testing glass objects it is usually enough to test the refraction only and add the reflection only for the final rendering. (The problem with objects that both reflect and refract is that the rays bounce inside the object until max_trace_level is reached, and this is very slow.)
    • If you have reflection/refraction and a very high max_trace_level, try setting the adc_bailout value to something bigger than the default 1/256.
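
As a minimal sketch of these last few points (the TEST flag and the stand-in objects are our own illustration, not from the original answer):

// Set TEST to 0 for the final render.
#declare TEST = 1;

#if (TEST)
  box { -1, 1 }                      // fast stand-in geometry
#else
  superellipsoid { <0.3, 0.3> }      // slow final geometry
#end

// At quality +Q5 or lower POV-Ray shades with quick_color
// instead of evaluating the full granite pattern:
sphere { 0, 1
  pigment { granite quick_color rgb <1, 0, 0> }
}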

CSG speed

"How do the different kinds of CSG objects compare in speed? How can I speed them up?"

There is a lot of misinformation about CSG speed out there. A very common allegation is that "merge is always slower than union". This statement is not true. Merge is sometimes slower than union, but in some cases it is even faster. For example, consider the following code:

global_settings { max_trace_level 40 }
camera { location -z*8 look_at 0 angle 35 }
light_source { <100,100,-100> 1 }
merge
{ #declare Ind=0;
  #while(Ind<20)
    sphere { z*Ind,2 pigment { rgbt .9 } }
    #declare Ind=Ind+1;
  #end
}

There are 20 semitransparent merged spheres there. A test render took 64 seconds; substituting 'union' for 'merge' increased the render time to 352 seconds (5.5 times longer). The difference in speed is very notable.

So why is 'merge' so much faster than 'union' in this case? The answer is probably that the number of visible surfaces plays a very important role in rendering speed. When the spheres are unioned there are 18 inner surfaces, while when they are merged those inner surfaces are gone. POV-Ray has to calculate lighting and shading for each of those surfaces, and that is what makes the union so slow; when the spheres are merged, no lighting and shading calculations are needed for those 18 surfaces.

So is 'merge' always faster than 'union'? No. If your objects are completely non-transparent, 'merge' is slightly slower than 'union', and in that case you should always use 'union' instead. There is no sense in using 'merge' with non-transparent objects.

Another common allegation is that "difference is very slow; much slower than union". This can also be shown to be false. Consider the following example:

camera { location -z*12 look_at 0 angle 35 }
light_source { <100,100,-100> 1 }
difference
{ sphere { 0,2 }
  sphere { <-1,0,-1>,2 }
  sphere { <1,0,-1>,2 }
  pigment { rgb <1,0,0> }
}

This scene took 42 seconds to render, while substituting the 'difference' with a 'union' took 59 seconds (1.4 times longer).

The crucial thing here is the size of the surfaces on screen: the larger the visible surface area, the slower the render (because POV-Ray has to do more lighting and shading calculations).

But the second statement is much closer to the truth than the first one: differences are usually slow to render, especially when the member objects of the difference are very much bigger than the resulting CSG object. This is because POV-Ray's automatic bounding is not perfect. A few words about bounding:

Suppose you have hundreds of objects (like spheres or whatever) forming a bigger CSG object, but this object is rather small on screen (a little house, for example). It would be really slow to test the ray-object intersection for each one of those objects for every pixel of the screen. This is sped up by bounding the CSG object with a bounding shape (such as a box). Ray-object intersections are first tested against the bounding box, and the objects inside are tested only if the ray hits the box. This speeds up rendering considerably, since the tests are performed only in the area of the screen where the CSG object is located and nowhere else.

Since it is rather easy to automatically calculate a proper bounding box for a given object, POV-Ray does this, so you do not have to do it yourself.

But this automatic bounding is not perfect. There are situations where a tight automatic bound is very hard to calculate, one of them being the difference and intersection CSG operations. POV-Ray does what it can, but sometimes it does a pretty poor job. This shows especially when the resulting CSG object is very small compared to the CSG member objects. For example:

intersection
{ sphere { <-1000,0,0>,1001 }
  sphere { <1000,0,0>,1001 }
}

(This is the same as making a difference with the second sphere inverted.)

In this example the member objects extend from <-2001,-1001,-1001> to <2001,1001,1001>, although the resulting CSG object is a small lens-shaped object: it is only 2 units wide in the x direction and about 89 units wide in the y and z directions (at x=0 the lens radius is sqrt(1001^2 - 1000^2) = sqrt(2001), roughly 44.7 units). As you can see, it is very difficult to calculate the actual dimensions of the object automatically (but not impossible).

In cases like this POV-Ray makes a huge bounding box which is practically useless. You should bound this kind of object by hand (especially when it has lots of member objects). This can be done with the bounded_by keyword.

Here is an example:

camera { location -z*80 look_at 0 angle 35 }
light_source { <100,200,-150> 1 }
#declare test =
difference
{ union
  { cylinder {<-2, -20, 0>, <-2, 20, 0>, 1}
    cylinder {<2, -20, 0>, <2, 20, 0>, 1}
  }
  box {<-10, 1, -10>, <10, 30, 10>}
  box {<-10, -1, -10>, <10, -30, 10>}
  pigment {rgb <1, .5, .5>}
  bounded_by { box {<-3.1, -1.1, -1.1>, <3.1, 1.1, 1.1>} }
}
 
#declare copy = 0;
#while (copy < 40)
  object {test translate -20*x translate copy*x}
  #declare copy = copy + 3;
#end

This took 51 seconds to render. Commenting out the 'bounded_by' line increased the rendering time to 231 seconds (4.5 times slower).



