SDL tutorial: A raytracer

The Trace macro

Shadow test

The very first thing to do for calculating the lighting for a light source is to see if the light source is illuminating the intersection point in the first place (this is one of the nicest features of raytracing: shadow calculations are laughably easy to do):

      // Shadowtest:
      #local Shadowed = false;
      #local Ind2 = 0;
      #while(Ind2 < ObjAmnt)
        #if(Ind2!=closest & calcRaySphereIntersection(IP,L,Ind2)>0)
          #local Shadowed = true;
          #local Ind2 = ObjAmnt;
        #end
        #local Ind2 = Ind2+1;
      #end

What we do is go through all the spheres (we skip the current sphere; this is not strictly necessary, but a little optimization is still a little optimization), take the intersection point as the starting point and the light direction as the direction vector, and see if the ray-sphere intersection test returns a positive value for any of the spheres (quitting the loop immediately when one is found, as we do not need to check the rest).

The result of the shadow test is put into the Shadowed identifier as a boolean value (true if the point is shadowed).

Diffuse lighting

The diffuse component of lighting is generated when a light ray hits a surface and is reflected equally in all directions. The brightest part of the surface is where the normal vector points directly in the direction of the light. The lighting diminishes in relation to the cosine of the angle between the normal vector and the light vector.

      #if(!Shadowed)
        // Diffuse:
        #local Factor = vdot(Normal, L);
        #if(Factor > 0)
          #local Pixel = 
             Pixel + LVect[Ind][1]*Coord[closest][2]*Factor;
        #end

The code for diffuse lighting is surprisingly short.

There is an extremely nice trick in mathematics to get the cosine of the angle between two unit vectors: It is their dot-product.

What we do is calculate the dot-product of the normal vector and the light vector (both have been normalized previously). If the dot-product is negative, it means that the normal vector points in the opposite direction to the light vector. Thus we are only interested in positive values.
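
As a quick standalone illustration of this dot-product identity (a throwaway sketch, not part of the tracer; the vectors are made up for the demonstration):

  // Two unit vectors 45 degrees apart; their dot-product equals cos(45).
  #declare A = vnormalize(<1, 0, 0>);
  #declare B = vnormalize(<1, 1, 0>);
  #debug concat("vdot = ", str(vdot(A, B), 0, 4), "\n")        // prints 0.7071
  #debug concat("cos  = ", str(cos(radians(45)), 0, 4), "\n")  // prints 0.7071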

Thus, we add to the pixel color the color of the light source multiplied by the color of the surface of the sphere multiplied by the dot-product. This gives us the diffuse component of the lighting.

Specular lighting

The specular component of lighting comes from the fact that most surfaces do not reflect light equally in all directions, but reflect more light in the reflected ray direction, that is, the surface has some mirror properties. The brightest part of the surface is where the reflected ray points in the direction of the light.

Photorealistic lighting is a very complicated issue and there are lots of different lighting models out there which try to simulate real-world lighting more or less accurately. For our simple raytracer we just use a simple Phong lighting model, which is more than sufficient here.

        // Specular:
        #local Factor = vdot(vnormalize(Refl), L);
        #if(Factor > 0)
          #local Pixel = Pixel + LVect[Ind][1]*
                         pow(Factor, Coord[closest][3].x)*
                         Coord[closest][3].y;
        #end

The calculation is similar to the diffuse lighting with the following differences:

  • We do not use the normal vector, but the reflected vector.
  • The color of the surface is not taken into account (a very simple Phong lighting model).
  • We do not take the dot-product as is, but we raise it to a power given in the scene definition (phong size).
  • We use a brightness factor given in the scene definition to multiply the color (phong amount).

Thus, the color we add to the pixel color is the color of the light source multiplied by the dot-product (which is raised to the given power) and by the given brightness amount.
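
To get a feeling for what the exponent does, here is a throwaway test with made-up numbers (not part of the tracer): the same dot-product raised to two different phong sizes shows how a larger exponent squeezes the highlight into a small, bright spot:

  // Hypothetical dot-product of 0.9 with two different phong size values:
  #debug concat("phong size  5: ", str(pow(0.9, 5), 0, 4), "\n")   // about 0.5905
  #debug concat("phong size 40: ", str(pow(0.9, 40), 0, 4), "\n")  // about 0.0148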

Then we close the code blocks:

      #end // if(!Shadowed)
      #local Ind = Ind+1;
    #end // while(Ind < LightAmnt)

Reflection Calculation

    // Reflection:
    #if(recLev < MaxRecLev & Coord[closest][1].y > 0)
      #local Pixel = 
        Pixel + Trace(IP, Refl, recLev+1)*Coord[closest][1].y;
    #end

Another nice aspect of raytracing is that reflection is very easy to calculate.

Here we check that the recursion level has not reached the limit and that the sphere has a reflection component defined. If both conditions hold, we add the reflected component (the color of the reflected ray multiplied by the reflection factor) to the pixel color.

This is where the recursive call happens (the macro calls itself). The recursion level (recLev) is increased by one for the next call so that somewhere down the line, the series of Trace() calls will know to stop (preventing a ray from bouncing back and forth forever between two mirrors). This is basically how the max_trace_level global setting works in POV-Ray.

Finally, we close the code blocks and return the pixel color from the macro:

  #end // else

  Pixel
#end

Calculating the image

#debug "Rendering...\n\n"
#declare Image = array[ImageWidth][ImageHeight]

#declare IndY = 0;
#while(IndY < ImageHeight)
  #declare CoordY = IndY/(ImageHeight-1)*2-1;
  #declare IndX = 0;
  #while(IndX < ImageWidth)
    #declare CoordX =
       (IndX/(ImageWidth-1)-.5)*2*ImageWidth/ImageHeight;
    #declare Image[IndX][IndY] =
      Trace(-z*3, <CoordX, CoordY, 3>, 1);
    #declare IndX = IndX+1;
  #end
  #declare IndY = IndY+1;
  #debug concat("\rDone ", str(100*IndY/ImageHeight,0,1),
    "%  (line ", str(IndY,0,0)," out of ",str(ImageHeight,0,0),")")
#end
#debug "\n"

Now we just have to calculate the image into an array of colors. This array is defined at the beginning of the code above; it is a two-dimensional array representing the final image we are calculating.

Notice how we use the #debug stream to output useful information about the rendering process while we are calculating. This is nice because the rendering process is quite slow, and it is good to give the user some feedback about what is happening and how long it will take. (Also note that the "%" character in the string of the second #debug command works correctly only in the Windows version of POV-Ray; for other versions it may be necessary to convert it to "%%".)

What we do here is to go through each pixel of the image (ie. the array) and for each one calculate the camera location (fixed to -z*3 here) and the direction of the ray that goes through the pixel (in this code the viewing plane is fixed and located in the x-y-plane and its height is fixed to 1).

What the following line:

  #declare CoordY = IndY/(ImageHeight-1)*2-1;

does is to scale the IndY so that it goes from -1 to 1. It is first divided by the maximum value it gets (which is ImageHeight-1), then multiplied by 2, and finally 1 is subtracted from it. This results in a value which goes from -1 to 1.
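
A quick way to convince yourself of this is to print the extreme values (a standalone check, assuming a hypothetical ImageHeight of 3):

  // IndY = 0, 1 and 2 should map to -1, 0 and 1 respectively:
  #declare ImageHeight = 3;
  #debug concat(str(0/(ImageHeight-1)*2-1, 0, 0), " ",
                str(1/(ImageHeight-1)*2-1, 0, 0), " ",
                str(2/(ImageHeight-1)*2-1, 0, 0), "\n")  // prints -1 0 1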

The CoordX is calculated similarly, but it is also multiplied by the aspect ratio of the image we are calculating (so that we do not get a squeezed image).

Creating the colored mesh

If you think that these things we have been examining are advanced, then you have not seen anything. Now comes real hard-core advanced POV-Ray code, so be prepared. This could be called The really advanced section.

We have now calculated the image into the array of colors. However, we still have to show these color pixels on screen, that is, we have to make POV-Ray render our pixels so that it creates a real image.

There are several ways of doing this, all of them more or less kludges (as there is currently no way of directly creating an image map from a group of colors). One could create colored boxes representing each pixel, or one could output to an ASCII-formatted image file (mainly PPM) and then read it back as an image map. The first has the disadvantage of requiring huge amounts of memory and lacking bilinear interpolation of the image; the second has the disadvantage of requiring a temporary file.

What we are going to do is to calculate a colored mesh2 which represents the screen. As colors are interpolated between the vertices of a triangle, the bilinear interpolation comes for free (almost).
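
The principle can be seen in miniature in this self-contained toy scene (not the screen mesh we will build, just a demonstration of the interpolation): a single triangle with a different texture at each vertex, whose colors POV-Ray interpolates smoothly across the surface:

  mesh2
  { vertex_vectors { 3, <-1,-1,0>, <1,-1,0>, <0,1,0> }
    texture_list
    { 3,
      texture { pigment { rgb x } finish { ambient 1 } },  // red corner
      texture { pigment { rgb y } finish { ambient 1 } },  // green corner
      texture { pigment { rgb z } finish { ambient 1 } }   // blue corner
    }
    face_indices { 1, <0,1,2>, 0,1,2 }
  }
  camera { orthographic location -z*2 look_at 0 }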

The structure of the mesh

Although all the triangles are located in the x-y plane and they are all the same size, the structure of the mesh is quite complicated (so complicated it deserves its own section here).

The following image shows how the triangles are arranged for a 4x3 pixel image:

[Image TutImgTriangles.gif: Triangle arrangement for a 4x3 image]

The number pairs in parentheses represent image pixel coordinates (eg. (0,0) refers to the pixel at the lower left corner of the image and (3,2) to the pixel at the upper right corner). That is, the triangles will be colored as the image pixels at these points. The colors will then be interpolated between them along the surface of the triangles.

The filled and non-filled circles in the image represent the vertex points of the triangles and the lines connecting them show how the triangles are arranged. The smaller numbers near these circles indicate their index value (the one which will be created inside the mesh2).

We notice two things which may seem odd: firstly, there are extra vertex points outside the mesh, and secondly, there are extra vertex points in the middle of each square.

Let's start with the vertices in the middle of the squares: We could have made each square from just two triangles instead of the four we use here. However, the color interpolation is not as nice that way, as a clear diagonal line appears where the triangle edges go. If we make each square from four triangles instead, the diagonal lines are less apparent and the interpolation much more closely resembles true bilinear interpolation. And what is the color of the middle points? Of course it is the average of the colors of the four points in the corners.
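
For example, with made-up corner colors: if the four corners of a square are red, green, blue and white, the middle point gets their average, a middle gray:

  #declare Middle = (<1,0,0> + <0,1,0> + <0,0,1> + <1,1,1>)/4;
  #debug concat("middle color: ", vstr(3, Middle, ", ", 0, 2), "\n")  // 0.50, 0.50, 0.50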

Secondly: yes, the extra vertex points outside the mesh are completely unused and take no part in the creation of the mesh. We could create the exact same mesh without them. However, getting rid of these extra vertex points would make the indexing of the points more difficult when creating the triangles. It would not be too much work to remove them, but they do not take any considerable amount of resources and they make our lives easier, so let's just let them be (if you want to remove them, go ahead).

Creating the mesh

What this means is that for each pixel we create two vertex points, one at the pixel location and one shifted by 0.5 in the x and y directions. Then we specify the color for each vertex point: for the even vertex points it is directly the color of the corresponding pixel; for the odd vertex points it is the average of the four surrounding pixels.

Let's examine the creation of the mesh step by step:

Creating the vertex points

#default { finish { ambient 1 } }

#debug "Creating colored mesh to show image...\n"
mesh2
{ vertex_vectors
  { ImageWidth*ImageHeight*2,
    #declare IndY = 0;
    #while(IndY < ImageHeight)
      #declare IndX = 0;
      #while(IndX < ImageWidth)
        <(IndX/(ImageWidth-1)-.5)*ImageWidth/ImageHeight*2,
         IndY/(ImageHeight-1)*2-1, 0>,
        <((IndX+.5)/(ImageWidth-1)-.5)*ImageWidth/ImageHeight*2,
         (IndY+.5)/(ImageHeight-1)*2-1, 0>
        #declare IndX = IndX+1;
      #end
      #declare IndY = IndY+1;
    #end
  }

First of all we use a nice trick in POV-Ray: Since we are not using light sources and there is nothing illuminating our mesh, we set the ambient value of the mesh to 1. We do this by simply making it the default with the #default command, so we do not have to bother with it later.

As we saw above, what we are going to do is to create two vertex points for each pixel. Thus we know without further thinking how many vertex vectors there will be: ImageWidth*ImageHeight*2

That was the easy part; now we have to figure out how to create the vertex points themselves. Each vertex location should correspond to the pixel location it is representing, thus we go through each pixel index (practically the number pairs in parentheses in the image above) and create vertex points using these index values. The locations of these pixels and vertices are the same as we assumed when we calculated the image itself (in the previous part). Thus the y coordinate of each vertex point should go from -1 to 1, and similarly the x coordinate, but scaled by the aspect ratio.
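
As a sanity check with a hypothetical 4x3 image (a throwaway snippet, not part of the scene), the first and last pixel columns land at minus and plus the aspect ratio, while the rows span -1 to 1:

  #declare W = 4; #declare H = 3;
  #debug concat("x: ", str((0/(W-1)-.5)*W/H*2, 0, 2), " .. ",
                str(((W-1)/(W-1)-.5)*W/H*2, 0, 2), "\n")  // x: -1.33 .. 1.33
  #debug concat("y: ", str(0/(H-1)*2-1, 0, 0), " .. ",
                str((H-1)/(H-1)*2-1, 0, 0), "\n")         // y: -1 .. 1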

If you look at the creation of the first vector in the code above, you will see that it is almost identical to the direction vector we calculated when creating the image.

The second vector should be shifted by 0.5 in both directions, and that is exactly what is done there. The second vector definition is identical to the first one except that the index values are shifted by 0.5. This creates the points in the middle of the squares.

The index values of these points will be arranged as shown in the image above.

Creating the textures

  texture_list
  { ImageWidth*ImageHeight*2,
    #declare IndY = 0;
    #while(IndY < ImageHeight)
      #declare IndX = 0;
      #while(IndX < ImageWidth)
        texture { pigment { rgb Image[IndX][IndY] } }
        #if(IndX < ImageWidth-1 & IndY < ImageHeight-1)
          texture { pigment { rgb
            (Image[IndX][IndY]+Image[IndX+1][IndY]+
             Image[IndX][IndY+1]+Image[IndX+1][IndY+1])/4 } }
        #else
          texture { pigment { rgb 0 } }
        #end
        #declare IndX = IndX+1;
      #end
      #declare IndY = IndY+1;
    #end
  }

Creating the textures is very similar to creating the vertex points (we could have done both inside the same loop, but due to the syntax of the mesh2 we have to do it separately).

So what we do is to go through all the pixels in the image and create textures for each one. The first texture is just the pixel color itself. The second texture is the average of the four surrounding pixels.

Note: We can calculate it only for the vertex points in the middle of the squares; for the extra vertex points outside the image we just define a dummy black texture.

The textures have the same index values as the vertex points.

Creating the triangles

This one is a bit trickier. Basically we have to create four triangles for each square between pixels. How many triangles will there be?

Let's examine the creation loop first:

  face_indices
  { (ImageWidth-1)*(ImageHeight-1)*4,
    #declare IndY = 0;
    #while(IndY < ImageHeight-1)
      #declare IndX = 0;
      #while(IndX < ImageWidth-1)

        ...

        #declare IndX = IndX+1;
      #end
      #declare IndY = IndY+1;
    #end
  }

The number of squares is one less than the number of pixels in each direction: the number of squares in the x direction will be one less than the number of pixels in the x direction, and the same holds for the y direction. As we want four triangles for each square, the total number of triangles will be (ImageWidth-1)*(ImageHeight-1)*4. (For the 4x3 example image above, that makes 3*2*4 = 24 triangles.)

Then, to create the squares, we loop over the number of pixels minus one in each direction.

Now in the inside of the loop we have to create the four triangles. Let's examine the first one:

        <IndX*2+  IndY    *(ImageWidth*2),
         IndX*2+2+IndY    *(ImageWidth*2),
         IndX*2+1+IndY    *(ImageWidth*2)>,
         IndX*2+  IndY    *(ImageWidth*2),
         IndX*2+2+IndY    *(ImageWidth*2),
         IndX*2+1+IndY    *(ImageWidth*2),

This creates a triangle with a texture in each vertex. The first three values (the indices to vertex points) are identical to the next three values (the indices to the textures) because the index values were exactly the same for both.

The IndX is always multiplied by 2 because we had two vertex points for each pixel and IndX is basically going through the pixels. Likewise IndY is always multiplied by ImageWidth*2 because that is how long a row of index points is (ie. to get from one row to the next at the same x coordinate we have to advance ImageWidth*2 in the index values).

These two things are identical in all the triangles. What decides which vertex point is chosen is the "+1" or "+2" (or "+0" when there is nothing). For IndX "+0" is the current pixel, "+1" chooses the point in the middle of the square and "+2" chooses the next pixel. For IndY "+1" chooses the next row of pixels.

Thus this triangle definition creates a triangle using the vertex point for the current pixel, the one for the next pixel and the vertex point in the middle of the square.
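
To make the index arithmetic concrete, here is a throwaway check (hypothetical values: ImageWidth 4, the square at IndX=1, IndY=0; run it separately so the #declares do not clobber the real scene variables):

  #declare ImageWidth = 4;
  #declare IndX = 1; #declare IndY = 0;
  #debug concat("current pixel:    ", str(IndX*2  +IndY*(ImageWidth*2), 0, 0), "\n")   // 2
  #debug concat("middle of square: ", str(IndX*2+1+IndY*(ImageWidth*2), 0, 0), "\n")   // 3
  #debug concat("next pixel:       ", str(IndX*2+2+IndY*(ImageWidth*2), 0, 0), "\n")   // 4
  #debug concat("next row:         ", str(IndX*2+(IndY+1)*(ImageWidth*2), 0, 0), "\n") // 10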

The next triangle definition is similar:

        <IndX*2+  IndY    *(ImageWidth*2),
         IndX*2+  (IndY+1)*(ImageWidth*2),
         IndX*2+1+IndY    *(ImageWidth*2)>,
         IndX*2+  IndY    *(ImageWidth*2),
         IndX*2+  (IndY+1)*(ImageWidth*2),
         IndX*2+1+IndY    *(ImageWidth*2),

This one defines the triangle using the current point, the point in the next row and the point in the middle of the square.

The next two definitions define the other two triangles:

        <IndX*2+  (IndY+1)*(ImageWidth*2),
         IndX*2+2+(IndY+1)*(ImageWidth*2),
         IndX*2+1+IndY    *(ImageWidth*2)>,
         IndX*2+  (IndY+1)*(ImageWidth*2),
         IndX*2+2+(IndY+1)*(ImageWidth*2),
         IndX*2+1+IndY    *(ImageWidth*2),

        <IndX*2+2+IndY    *(ImageWidth*2),
         IndX*2+2+(IndY+1)*(ImageWidth*2),
         IndX*2+1+IndY    *(ImageWidth*2)>,
         IndX*2+2+IndY    *(ImageWidth*2),
         IndX*2+2+(IndY+1)*(ImageWidth*2),
         IndX*2+1+IndY    *(ImageWidth*2)

The Camera-setup

The only thing left is the camera definition, so that POV-Ray can calculate the image correctly:

  camera { orthographic location -z*2 look_at 0 }

Why 2? The default direction vector is <0,0,1> and the default up vector is <0,1,0>; an orthographic camera with no explicit up and right sizes its viewing window to match what would be seen at the look_at point, and since we want the up direction to cover 2 units (the mesh spans y from -1 to 1), we have to move the camera two units away.
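
An equivalent alternative (a sketch, not what the tutorial uses) is to give the window size explicitly with the up and right keywords, which removes the dependence on the camera distance:

  camera
  { orthographic
    location -z*2
    look_at 0
    up y*2
    right x*2*ImageWidth/ImageHeight
  }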

