Shape Super Struct
Higher level APIs and game engines typically do not require setting modeling transformations directly. Instead, they allow programmers to position objects at a specified location, orient them at that location, and change the size of the object. Add this type of functionality to our code by making all of the shape structs (ReferencePlane, Pyramid, Sphere) a sub-struct of the following super struct.
struct Shape
{
	virtual void draw( );
	virtual void setPosition( const dvec3 & positionVector );
	virtual void setOrientation( const double & angle, const dvec3 & axis );
	virtual void setScale( const double & scaleValue );

	protected:

	dmat4 orientation;
	dmat4 position;
	dmat4 scale;

	std::vector<VertexData> triangleVertices;

}; // end Shape
Remove the triangleVertices data member (c1PlaneVertices and c2PlaneVertices for the ReferencePlane) from each of the sub-structs so that all vertex data is held in the super struct data member. Implement the draw method of the Shape struct so that it sets the modeling transformation of the object based on the specified position, orientation, and uniform scale before calling PerVertex::processTriangleVertices to render the triangleVertices. Modify the renderObjects function in Lab.cpp to utilize this new functionality. The code in this method will start to look like the following:
// Set the position and orientation of the left pyramid and render it.
purplePyramid.setPosition( dvec3( -3.0, 0.0, 0.0 ) );
purplePyramid.setOrientation( angle, dvec3( 0.0, 0.0, 1.0 ) );
purplePyramid.draw( );
Box
Add a struct called Box that can be used to render box shapes of any color and dimension. The rendered box will have six faces. Each face will be composed of two triangles. The vertices should appear in counter-clockwise order on the outward facing sides of the cube. The size of the box should be determined by the values of the width, height, and depth parameters that are passed to the constructor. The color of the box should be determined by the color that is passed to the constructor. The object coordinate origin should be in the center of the box.
struct Box : public Shape
{
	Box( color cubeColor = color( 1.0f, 1.0f, 0.0f, 1.0f ),
	     float width = 1.0f, float height = 1.0f, float depth = 1.0f );
};
Modified Scene
Once the Box struct has been implemented, modify the scene as depicted below. The viewing transformation for the rendering shown is the initial viewing transformation that has been used for all the labs (-12 in the Z). The World coordinate X axis is pointing to the right, the World coordinate Y axis is pointing up, and the World coordinate Z axis is coming out of the screen. In the scene, the board is three units below the World coordinate origin. Three cubes with widths of one unit are stacked in the center of the board. A fourth cube is sitting on the far right-hand corner of the board with a pyramid on top of it. The green box is 2 x 2 x 2. All other boxes are default size (1 x 1 x 1).
Change the window title to “CSE 287 Project Two – your last name.”
Backface Culling
Add a static method called removeBackwardFacingTriangles to the PerVertex class that removes all triangles that are not facing the viewpoint.
std::vector<VertexData> PerVertex::removeBackwardFacingTriangles( const std::vector<VertexData> & triangleVerts )
Input is a std::vector that contains all the triangles that describe the surface of an object. The method should return a std::vector that contains triplets of VertexData structs describing only the triangles that face the viewpoint. Use the findUnitNormal function (declared in Defines.h) when completing your implementation, and compare the direction of this normal vector to the vector that describes the viewing direction. In Eye and Clip coordinates, this vector is always (0, 0, -1).
When testing to make sure you have implemented the culling correctly, be sure to use the up arrow to rotate the view. Note that the bottom of the reference plane is no longer visible. Some sides of objects, or parts of them, may also disappear. If you notice this, re-order the vertices of the missing triangle(s) to correct it.
Polygon Clipping
The clipPolygon method in PerVertex.cpp is currently a “pass-through.”
Implement the clipPolygon method so that it iterates through the triplets of vertices in clipCoords and clips each triangle against all six planes of the NDC view volume. Once all triangles have been clipped, ndcCoords is returned by the method. You can call the clipAgainstPlane method to clip a convex polygon against a single NDC plane. Realize that after a single triangle has been clipped against all six planes of the NDC view volume, the result will often be more than three vertices describing a convex polygon. Thus, it is necessary to call a triangulate method to break the polygon back into triangles for further processing by the graphics pipeline.
Don’t worry. You will be getting a lot of help on this in class.
Depth Buffer Algorithm
The depth buffer algorithm is done in image space. It must be performed before individual fragments are written into the color buffer. Fragments become pixel values only if they pass the depth test. Objects of the FrameBuffer class contain a color buffer for storing pixel color values and a depth buffer for storing a depth value for each pixel. Accessor and mutator methods have been written for both. Both buffers are cleared prior to each rendering of the window (see clearColorAndDepthBuffers).
Implement the depth test in the processFragment method in PerFragmentOperations.cpp. The depth for each fragment is held in fragment.windowPosition.z. Do not do unnecessary processing of a fragment if it is not going to pass the depth test.
Shading in World Coordinates
transformVerticesToWorldCoordinates transforms vertices and normal vectors from Object coordinates to World coordinates and places copies in worldPosition and worldNormal, respectively. The position of the viewpoint in World coordinates is stored in the static eyePositionInWorldCoords data member of the PerVertex class and is set in the RenderSceneCB function in Lab.cpp.
Implement the local illumination calculations for ambient, diffuse, and specular reflection in Light.h for each of the different types of lights. Shading calculations should only be performed if the light is enabled. These will be very similar to what you previously did for ray tracing with the exception that there will be no shadow feelers.
Implement applyLighting methods in both PerVertex.cpp and PerFragment.cpp. These methods will calculate the shading for each individual light source and sum up the results. The method in PerVertex will apply the calculated color to a vertex. The method in PerFragment will apply it to a fragment. Per vertex shading should be performed when PerVertex::perVertexLightingEnabled is true, and per fragment shading should be performed when FragmentOperations::perPixelLightingEnabled is true. Otherwise, no shading calculations should be performed. Add a sub-menu to make this selectable and ensure both are never set to true simultaneously.
Camera Struct
Write a Camera struct. The struct should have only one data member. This data member should be of type glm::dmat4. It should hold the viewing transformation that is associated with the camera. The constructor should accept arguments that specify the viewing position and viewing direction relative to World coordinates and which World coordinate direction should be at the top of the rendering window. Note that the second argument is a vector that points in the direction that the view is facing. It is not a position. The struct should also have a method that has the same parameters as the constructor. The struct should have two accessor methods. One returns the viewing transformation matrix that can be used to transform vertices in the graphics pipeline. The other returns the position of the viewpoint in World coordinates.
#include "Defines.h"

struct Camera
{
	Camera( glm::dvec3 position = glm::dvec3( 0.0, 0.0, 0.0 ),
	        glm::dvec3 direction = glm::dvec3( 0.0, 0.0, -1.0 ),
	        glm::dvec3 up = glm::dvec3( 0.0, 1.0, 0.0 ) );

	~Camera();

	glm::dvec3 getWorldCoordinateViewPosition();
	glm::dmat4 getViewingTransformation();

	glm::dmat4 viewTrans;
};
Create Different Views
Set up a menu that allows views 0 through 3 to be selected. Each of the views should be achieved using the Camera struct that you have written. View 0 is the initial viewpoint. Views 1 through 3 are shown below. In view 1, the World coordinate origin is 12 units away. In view 2, the World coordinate origin is 30 units away. In view 3, the World coordinate origin is 15 units away.
Note: To get these views you will have to move the far clipping plane.