diff --git a/README.md b/README.md index 22d2f34..2312a78 100644 --- a/README.md +++ b/README.md @@ -3,346 +3,74 @@ CUDA Rasterizer **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 4** -* (TODO) YOUR NAME HERE -* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab) +* Ziye Zhou +* Tested on: Windows 7, i5-3210M @ 2.50GHz 4.00GB, GeForce GT 640M LE (Lenovo Laptop) -### (TODO: Your README) - -*DO NOT* leave the README to the last minute! It is a crucial part of the -project, and we will not be able to grade you without a good README. - - -Instructions (delete me) +Representative Image ======================== +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/representive.png?raw=true) -This is due Sunday, October 11, evening at midnight. - -**Summary:** -In this project, you will use CUDA to implement a simplified -rasterized graphics pipeline, similar to the OpenGL pipeline. You will -implement vertex shading, primitive assembly, rasterization, fragment shading, -and a framebuffer. More information about the rasterized graphics pipeline can -be found in the class slides and in the CIS 560 lecture notes. - -The base code provided includes an OBJ loader and much of the I/O and -bookkeeping code. It also includes some functions that you may find useful, -described below. The core rasterization pipeline is left for you to implement. - -You are not required to use this base code if you don't want -to. You may also change any part of the base code as you please. -**This is YOUR project.** - -**Recommendation:** -Every image you save should automatically get a different -filename. Don't delete all of them! For the benefit of your README, keep a -bunch of them around so you can pick a few to document your progress. - - -### Contents - -* `src/` C++/CUDA source files. -* `util/` C++ utility files. 
-* `objs/` Example OBJ test files (# verts, # tris in buffers after loading) - * `tri.obj` (3v, 1t): The simplest possible geometric object. - * `cube.obj` (36v, 12t): A small model with low depth-complexity. - * `suzanne.obj` (2904 verts, 968 tris): A medium model with low depth-complexity. - * `suzanne_smooth.obj` (2904 verts, 968 tris): A medium model with low depth-complexity. - This model has normals which must be interpolated. - * `cow.obj` (17412 verts, 5804 tris): A large model with low depth-complexity. - * `cow_smooth.obj` (17412 verts, 5804 tris): A large model with low depth-complexity. - This model has normals which must be interpolated. - * `flower.obj` (1920 verts, 640 tris): A medium model with very high depth-complexity. - * `sponza.obj` (837,489 verts, 279,163 tris): A huge model with very high depth-complexity. -* `renders/` Debug render of an example OBJ. -* `external/` Includes and static libraries for 3rd party libraries. - -### Running the code - -The main function requires a scene description file. Call the program with -one as an argument: `cis565_rasterizer objs/cow.obj`. -(In Visual Studio, `../objs/cow.obj`.) - -If you are using Visual Studio, you can set this in the Debugging > Command -Arguments section in the Project properties. Note that this value is different -for every different configuration type. Make sure you get the path right; read -the console for errors. - -## Requirements - -**Ask on the mailing list for any clarifications.** - -In this project, you are given the following code: - -* A library for loading standard Alias/Wavefront `.obj` format mesh - files and converting them to OpenGL-style buffers of index and vertex data. - * This library does NOT read materials, and provides all colors as white by - default. You can use another library if you wish. -* Simple structs for some parts of the pipeline. -* Depth buffer to framebuffer copy. -* CUDA-GL interop. 
- -You will need to implement the following features/pipeline stages: - -* Vertex shading. -* (Vertex shader) perspective transformation. -* Primitive assembly with support for triangles read from buffers of index and - vertex data. -* Rasterization. -* Fragment shading. -* A depth buffer for storing and depth testing fragments. -* Fragment to depth buffer writing (**with** atomics for race avoidance). -* (Fragment shader) simple lighting scheme, such as Lambert or Blinn-Phong. - -See below for more guidance. - -You are also required to implement at least "3.0" points in extra features. -(the parenthesized numbers must add to 3.0 or more): - -* (1.0) Tile-based pipeline. -* Additional pipeline stages. - * (1.0) Tessellation shader. - * (1.0) Geometry shader, able to output a variable number of primitives per - input primitive, optimized using stream compaction (thrust allowed). - * (0.5 **if not doing geometry shader**) Backface culling, optimized using - stream compaction (thrust allowed). - * (1.0) Transform feedback. - * (0.5) Scissor test. - * (0.5) Blending (when writing into framebuffer). -* (1.0) Instancing: draw one set of vertex data multiple times, each run - through the vertex shader with a different ID. -* (0.5) Correct color interpolation between points on a primitive. -* (1.0) UV texture mapping with bilinear texture filtering and perspective - correct texture coordinates. -* Support for rasterizing additional primitives: - * (0.5) Lines or line strips. - * (0.5) Points. -* (1.0) Anti-aliasing. -* (1.0) Occlusion queries. -* (1.0) Order-independent translucency using a k-buffer. -* (0.5) **Mouse**-based interactive camera support. - -This extra feature list is not comprehensive. If you have a particular idea -you would like to implement, please **contact us first**. - -**IMPORTANT:** -For each extra feature, please provide the following brief analysis: - -* Concise overview write-up of the feature. 
-* Performance impact of adding the feature (slower or faster). -* If you did something to accelerate the feature, what did you do and why? -* How might this feature be optimized beyond your current implementation? - - -## Base Code Tour - -You will be working primarily in two files: `rasterize.cu`, and -`rasterizeTools.h`. Within these files, areas that you need to complete are -marked with a `TODO` comment. Areas that are useful to and serve as hints for -optional features are marked with `TODO (Optional)`. Functions that are useful -for reference are marked with the comment `CHECKITOUT`. **You should look at -all TODOs and CHECKITOUTs before starting!** There are not many. - -* `src/rasterize.cu` contains the core rasterization pipeline. - * A few pre-made structs are included for you to use, but those marked with - TODO will also be needed for a simple rasterizer. As with any part of the - base code, you may modify or replace these as you see fit. - -* `src/rasterizeTools.h` contains various useful tools - * Includes a number of barycentric coordinate related functions that you may - find useful in implementing scanline based rasterization. - -* `util/utilityCore.hpp` serves as a kitchen-sink of useful functions. - - -## Rasterization Pipeline - -Possible pipelines are described below. Pseudo-type-signatures are given. -Not all of the pseudocode arrays will necessarily actually exist in practice. - -### First-Try Pipeline - -This describes a minimal version of *one possible* graphics pipeline, similar -to modern hardware (DX/OpenGL). Yours need not match precisely. To begin, try -to write a minimal amount of code as described here. Verify some output after -implementing each pipeline step. This will reduce the necessary time spent -debugging. - -Start out by testing a single triangle (`tri.obj`). - -* Clear the depth buffer with some default value. 
-* Vertex shading: - * `VertexIn[n] vs_input -> VertexOut[n] vs_output` - * A minimal vertex shader will apply no transformations at all - it draws - directly in normalized device coordinates (-1 to 1 in each dimension). -* Primitive assembly. - * `VertexOut[n] vs_output -> Triangle[n/3] primitives` - * Start by supporting ONLY triangles. For a triangle defined by indices - `(a, b, c)` into `VertexOut` array `vo`, simply copy the appropriate values - into a `Triangle` object `(vo[a], vo[b], vo[c])`. -* Rasterization. - * `Triangle[n/3] primitives -> FragmentIn[m] fs_input` - * A scanline implementation is simpler to start with. -* Fragment shading. - * `FragmentIn[m] fs_input -> FragmentOut[m] fs_output` - * A super-simple test fragment shader: output same color for every fragment. - * Also try displaying various debug views (normals, etc.) -* Fragments to depth buffer. - * `FragmentOut[m] -> FragmentOut[width][height]` - * Results in race conditions - don't bother to fix these until it works! - * Can really be done inside the fragment shader, if you call the fragment - shader from the rasterization kernel for every fragment (including those - which get occluded). **OR,** this can be done before fragment shading, which - may be faster but means the fragment shader cannot change the depth. -* A depth buffer for storing and depth testing fragments. - * `FragmentOut[width][height] depthbuffer` - * An array of `fragment` objects. - * At the end of a frame, it should contain the fragments drawn to the screen. -* Fragment to framebuffer writing. - * `FragmentOut[width][height] depthbuffer -> vec3[width][height] framebuffer` - * Simply copies the colors out of the depth buffer into the framebuffer - (to be displayed on the screen). - -### A Useful Pipeline - -* Clear the depth buffer with some default value. -* Vertex shading: - * `VertexIn[n] vs_input -> VertexOut[n] vs_output` - * Apply some vertex transformation (e.g. 
model-view-projection matrix using - `glm::lookAt ` and `glm::perspective `). -* Primitive assembly. - * `VertexOut[n] vs_output -> Triangle[n/3] primitives` - * As above. - * Other primitive types are optional. -* Rasterization. - * `Triangle[n/3] primitives -> FragmentIn[m] fs_input` - * You may choose to do a tiled rasterization method, which should have lower - global memory bandwidth. - * A scanline optimization: when rasterizing a triangle, only scan over the - box around the triangle (`getAABBForTriangle`). -* Fragment shading. - * `FragmentIn[m] fs_input -> FragmentOut[m] fs_output` - * Add a shading method, such as Lambert or Blinn-Phong. Lights can be defined - by kernel parameters (like GLSL uniforms). -* Fragments to depth buffer. - * `FragmentOut[m] -> FragmentOut[width][height]` - * Can really be done inside the fragment shader, if you call the fragment - shader from the rasterization kernel for every fragment (including those - which get occluded). **OR,** this can be done before fragment shading, which - may be faster but means the fragment shader cannot change the depth. - * This result in an optimization: it allows you to do depth tests before - spending execution time in complex fragment shader code! - * Handle race conditions! Since multiple primitives write fragments to the - same fragment in the depth buffer, races must be avoided by using CUDA - atomics. - * *Approach 1:* Lock the location in the depth buffer during the time that - a thread is comparing old and new fragment depths (and possibly writing - a new fragment). This should work in all cases, but be slower. - * *Approach 2:* Convert your depth value to a fixed-point `int`, and use - `atomicMin` to store it into an `int`-typed depth buffer `intdepth`. After - that, the value which is stored at `intdepth[i]` is (usually) that of the - fragment which should be stored into the `fragment` depth buffer. - * This may result in some rare race conditions (e.g. across blocks). 
- * The `flower.obj` test file is good for testing race conditions. -* A depth buffer for storing and depth testing fragments. - * `FragmentOut[width][height] depthbuffer` - * An array of `fragment` objects. - * At the end of a frame, it should contain the fragments drawn to the screen. -* Fragment to framebuffer writing. - * `FragmentOut[width][height] depthbuffer -> vec3[width][height] framebuffer` - * Simply copies the colors out of the depth buffer into the framebuffer - (to be displayed on the screen). - -This is a suggested sequence of pipeline steps, but you may choose to alter the -order of this sequence or merge entire kernels as you see fit. For example, if -you decide that doing has benefits, you can choose to merge the vertex shader -and primitive assembly kernels, or merge the perspective transform into another -kernel. There is not necessarily a right sequence of kernels, and you may -choose any sequence that works. Please document in your README what sequence -you choose and why. - +Building the Pipeline +======================== +### Vertex shader +In this stage, I apply the MVP (Model-View-Projection) matrix to the input vertices to get their corresponding coordinates in normalized device coordinates (-1 to 1 in each dimension). -## Resources +`vec3 out = vec3(MVP * vec4(in, 1.f))` -The following resources may be useful for this project: +### Primitive assembly +In this stage, I read the vertex indices of each triangle from the OBJ file and store them in a device vector of `Triangle` structs. -* High-Performance Software Rasterization on GPUs: - * [Paper (HPG 2011)](http://www.tml.tkk.fi/~samuli/publications/laine2011hpg_paper.pdf) - * [Code](http://code.google.com/p/cudaraster/) - * Note that looking over this code for reference with regard to the paper is - fine, but we most likely will not grant any requests to actually - incorporate any of this code into your project. 
- [Slides](http://bps11.idav.ucdavis.edu/talks/08-gpuSoftwareRasterLaineAndPantaleoni-BPS2011.pdf) -* The Direct3D 10 System (SIGGRAPH 2006) - for those interested in doing - geometry shaders and transform feedback: - * [Paper](http://dl.acm.org/citation.cfm?id=1141947) - * [Paper, through Penn Libraries proxy](http://proxy.library.upenn.edu:2247/citation.cfm?id=1141947) -* Multi-Fragment Effects on the GPU using the k-Buffer - for those who want to do - order-independent transparency using a k-buffer: - * [Paper](http://www.inf.ufrgs.br/~comba/papers/2007/kbuffer_preprint.pdf) -* FreePipe: A Programmable, Parallel Rendering Architecture for Efficient - Multi-Fragment Effects (I3D 2010): - * [Paper](https://sites.google.com/site/hmcen0921/cudarasterizer) -* Writing A Software Rasterizer In Javascript: - * [Part 1](http://simonstechblog.blogspot.com/2012/04/software-rasterizer-part-1.html) - * [Part 2](http://simonstechblog.blogspot.com/2012/04/software-rasterizer-part-2.html) +### Rasterization +This is the key part of the project. I parallelize over primitives (triangles) to run a scanline algorithm. For each primitive, I first calculate its AABB (axis-aligned bounding box) to get the scan region for the later steps. If this region lies entirely outside normalized device coordinates (-1 to 1 in each dimension), we can simply discard the primitive, since it will not appear on the screen. Otherwise, we continue to the sampling-and-coverage stage. In this stage, I simply sample at the center of each pixel. +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/sample_combine.png?raw=true) +If this sample lies inside the primitive (decided using barycentric coordinates), I assign the interpolated color and normal to the fragment buffer. One thing to note is that we must also determine whether this primitive is the one in front (and therefore the one to be seen). 
In order to do this, I compare the current z value against the old z value in the fragment buffer. Since multiple primitives can write into the same fragment, we need to avoid a race condition here. To do this, I keep a per-fragment boolean flag (`dev_is_writable`). Before entering the critical section that writes into the buffer, we first check this flag: if it is true, we are safe to enter; otherwise, we wait until the current writer leaves. As soon as one primitive enters the critical section, I set the flag to false to keep others out, and set it back to true when it leaves. -## Third-Party Code Policy +### Fragment shading +In this stage, I use the Lambert equation to get the color of the surface. To debug the normal directions, I also implemented a normal-visualization mode in this stage. +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/debug_image.png?raw=true) -* Use of any third-party code must be approved by asking on our Google Group. -* If it is approved, all students are welcome to use it. Generally, we approve - use of third-party code that is not a core part of the project. For example, - for the path tracer, we would approve using a third-party library for loading - models, but would not approve copying and pasting a CUDA function for doing - refraction. -* Third-party code **MUST** be credited in README.md. -* Using third-party code without its approval, including using another - student's code, is an academic integrity violation, and will, at minimum, - result in you receiving an F for the semester. +Extra Features +======================== +### Mouse Interaction +I implemented mouse interaction: the left button changes the pitch and heading of the view direction, the middle button changes the look-at point, and the right button (or the scroll wheel) changes the eye distance. 
+### Transform with Key Interaction
+I implemented object transforms via keyboard: the Up and Down keys scale the object, W and S translate it along the Y-axis, A and D along the X-axis, and Z and X along the Z-axis. -## README +### Anti-Aliasing +I use subpixel sampling for anti-aliasing, arranged like this: +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/multiple_sample.png?raw=true) -Replace the contents of this README.md in a clear manner with the following: +I use these five sample points to estimate the fraction of the pixel covered by the primitive, and scale the color assigned to the fragment by that fraction. A comparison without and with anti-aliasing: +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/wo_anti_aliasing_new%20-%20Copy.png?raw=true) +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/w_anti_aliasing_new%20-%20Copy.png?raw=true) -* A brief description of the project and the specific features you implemented. -* At least one screenshot of your project running. -* A 30 second or longer video of your project running. -* A performance analysis (described below). +### Color Interpolation +Here I test interpolation on a simple triangle, with each vertex assigned a different color. +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/color_interpolation.png?raw=true) -### Performance Analysis +### Blending +Here I use a simple blending scheme. I add one variable, Alpha, to denote the transparency of the material. The color we see is simply: -The performance analysis is where you will investigate how to make your CUDA -programs more efficient using the skills you've learned in class. 
You must have -performed at least one experiment on your code to investigate the positive or -negative effects on performance. +`Color = (1-Alpha)*front_Color + Alpha*back_Color` +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/bending_compare.png?raw=true) -We encourage you to get creative with your tweaks. Consider places in your code -that could be considered bottlenecks and try to improve them. +More Results +======================== +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/4.png?raw=true) -Provide summary of your optimizations (no more than one page), along with -tables and or graphs to visually explain any performance differences. +Performance Analysis +======================== +Since I haven't implemented the optimization part of the pipeline, I don't have a detailed numerical analysis of this program. But I did test the performance with and without anti-aliasing, using cow_smooth.obj and FPS as a rough measure. I run the program for 100 iterations and average the results: -* Include a breakdown of time spent in each pipeline stage for a few different - models. It is suggested that you use pie charts or 100% stacked bar charts. -* For optimization steps (like backface culling), include a performance - comparison to show the effectiveness. +![alt tag](https://github.com/ziyezhou-Jerry/Project4-CUDA-Rasterizer/blob/master/image/excel.png?raw=true) +The difference is not obvious; I think the reason may be that the mesh is not very large and I use only 5 sample points. -## Submit +Future Work +======================== +* Adding more stages (geometry shader, etc.) to the pipeline +* Implementing some optimization methods -If you have modified any of the `CMakeLists.txt` files at all (aside from the -list of `SOURCE_FILES`), you must test that your project can build in Moore -100B/C. 
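The blending equation above can be written as a small helper. This is a standalone C++ sketch: the `Color` struct and `blend` function are hypothetical stand-ins for the `glm::vec3` arithmetic done in the actual kernel.

```cpp
struct Color { float r, g, b; };

// Simple alpha blending from the Blending section: with transparency Alpha,
// the front surface keeps (1 - Alpha) of its own color and lets Alpha of the
// back color show through.
static Color blend(Color front, Color back, float alpha) {
    return {(1.f - alpha) * front.r + alpha * back.r,
            (1.f - alpha) * front.g + alpha * back.g,
            (1.f - alpha) * front.b + alpha * back.b};
}
```

With `Alpha` set to `0.5f` (the value `#define`d in `rasterize.cu`), a red front surface over a blue back surface blends to an even purple.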
Beware of any build issues discussed on the Google Group. -1. Open a GitHub pull request so that we can see that you have finished. - The title should be "Submission: YOUR NAME". - * **ADDITIONALLY:** - In the body of the pull request, include a link to your repository. -2. Send an email to the TA (gmail: kainino1+cis565@) with: - * **Subject**: in the form of `[CIS565] Project N: PENNKEY`. - * Direct link to your pull request on GitHub. - * Estimate the amount of time you spent on the project. - * If there were any outstanding problems, or if you did any extra - work, *briefly* explain. - * Feedback on the project itself, if any. diff --git a/image/1.png b/image/1.png new file mode 100644 index 0000000..6596b01 Binary files /dev/null and b/image/1.png differ diff --git a/image/2.png b/image/2.png new file mode 100644 index 0000000..dab2d5d Binary files /dev/null and b/image/2.png differ diff --git a/image/3.png b/image/3.png new file mode 100644 index 0000000..5e04a7d Binary files /dev/null and b/image/3.png differ diff --git a/image/4.png b/image/4.png new file mode 100644 index 0000000..650fc2f Binary files /dev/null and b/image/4.png differ diff --git a/image/bending_compare.png b/image/bending_compare.png new file mode 100644 index 0000000..e358fec Binary files /dev/null and b/image/bending_compare.png differ diff --git a/image/blending.png b/image/blending.png new file mode 100644 index 0000000..aeffc67 Binary files /dev/null and b/image/blending.png differ diff --git a/image/color_interpolation.png b/image/color_interpolation.png new file mode 100644 index 0000000..40433e4 Binary files /dev/null and b/image/color_interpolation.png differ diff --git a/image/compare.png b/image/compare.png new file mode 100644 index 0000000..863831b Binary files /dev/null and b/image/compare.png differ diff --git a/image/cow.png b/image/cow.png new file mode 100644 index 0000000..5951abe Binary files /dev/null and b/image/cow.png differ diff --git a/image/cow_smooth.png 
b/image/cow_smooth.png new file mode 100644 index 0000000..881ad0f Binary files /dev/null and b/image/cow_smooth.png differ diff --git a/image/debug_image.png b/image/debug_image.png new file mode 100644 index 0000000..280f951 Binary files /dev/null and b/image/debug_image.png differ diff --git a/image/excel.png b/image/excel.png new file mode 100644 index 0000000..37dfc14 Binary files /dev/null and b/image/excel.png differ diff --git a/image/flower.png b/image/flower.png new file mode 100644 index 0000000..a981310 Binary files /dev/null and b/image/flower.png differ diff --git a/image/flower2.png b/image/flower2.png new file mode 100644 index 0000000..4b75148 Binary files /dev/null and b/image/flower2.png differ diff --git a/image/image.docx b/image/image.docx new file mode 100644 index 0000000..f55b96a Binary files /dev/null and b/image/image.docx differ diff --git a/image/more_results.png b/image/more_results.png new file mode 100644 index 0000000..853e40e Binary files /dev/null and b/image/more_results.png differ diff --git a/image/multiple_sample.png b/image/multiple_sample.png new file mode 100644 index 0000000..0a4795b Binary files /dev/null and b/image/multiple_sample.png differ diff --git a/image/representive.png b/image/representive.png new file mode 100644 index 0000000..14f7f02 Binary files /dev/null and b/image/representive.png differ diff --git a/image/sample_combine.png b/image/sample_combine.png new file mode 100644 index 0000000..482bcf1 Binary files /dev/null and b/image/sample_combine.png differ diff --git a/image/sample_inside.png b/image/sample_inside.png new file mode 100644 index 0000000..3734445 Binary files /dev/null and b/image/sample_inside.png differ diff --git a/image/sample_outside.png b/image/sample_outside.png new file mode 100644 index 0000000..0848ff1 Binary files /dev/null and b/image/sample_outside.png differ diff --git a/image/suzanne.png b/image/suzanne.png new file mode 100644 index 0000000..03992af Binary files /dev/null and 
b/image/suzanne.png differ diff --git a/image/suzanne_smooth.png b/image/suzanne_smooth.png new file mode 100644 index 0000000..c538e24 Binary files /dev/null and b/image/suzanne_smooth.png differ diff --git a/image/w_anti_aliasing.png b/image/w_anti_aliasing.png new file mode 100644 index 0000000..0e22210 Binary files /dev/null and b/image/w_anti_aliasing.png differ diff --git a/image/w_anti_aliasing_new - Copy.png b/image/w_anti_aliasing_new - Copy.png new file mode 100644 index 0000000..db2df59 Binary files /dev/null and b/image/w_anti_aliasing_new - Copy.png differ diff --git a/image/w_anti_aliasing_new.png b/image/w_anti_aliasing_new.png new file mode 100644 index 0000000..db2df59 Binary files /dev/null and b/image/w_anti_aliasing_new.png differ diff --git a/image/wo_anti_aliasing.png b/image/wo_anti_aliasing.png new file mode 100644 index 0000000..bfcd998 Binary files /dev/null and b/image/wo_anti_aliasing.png differ diff --git a/image/wo_anti_aliasing_new - Copy.png b/image/wo_anti_aliasing_new - Copy.png new file mode 100644 index 0000000..8204e40 Binary files /dev/null and b/image/wo_anti_aliasing_new - Copy.png differ diff --git a/image/wo_anti_aliasing_new.png b/image/wo_anti_aliasing_new.png new file mode 100644 index 0000000..8204e40 Binary files /dev/null and b/image/wo_anti_aliasing_new.png differ diff --git a/image/wo_blending.png b/image/wo_blending.png new file mode 100644 index 0000000..afcbab3 Binary files /dev/null and b/image/wo_blending.png differ diff --git a/src/main.cpp b/src/main.cpp index a125d7c..30d803c 100644 --- a/src/main.cpp +++ b/src/main.cpp @@ -13,7 +13,17 @@ //------------------------------- int main(int argc, char **argv) { - if (argc != 2) { + + /*glm::mat4 m_model = glm::translate(glm::mat4(1.0),glm::vec3(1.0,0.0,0.0)); + + glm::vec3 a (0.0); + + a = glm::vec3(m_model*glm::vec4(a,1.0));*/ + + + + + if (argc != 2) { cout << "Usage: [obj file]" << endl; return 0; } @@ -23,6 +33,7 @@ int main(int argc, char **argv) { { 
objLoader loader(argv[1], mesh); mesh->buildBufPoss(); + //mesh->setFirstTriColor(); } frame = 0; @@ -78,7 +89,7 @@ void runCuda() { dptr = NULL; cudaGLMapBufferObject((void **)&dptr, pbo); - rasterize(dptr); + rasterize(dptr,g_camera); cudaGLUnmapBufferObject(pbo); frame++; @@ -92,6 +103,7 @@ void runCuda() { bool init(obj *mesh) { glfwSetErrorCallback(errorCallback); + if (!glfwInit()) { return false; @@ -106,6 +118,9 @@ bool init(obj *mesh) { } glfwMakeContextCurrent(window); glfwSetKeyCallback(window, keyCallback); + glfwSetMouseButtonCallback(window,mouseCallback); + glfwSetCursorPosCallback(window,cursorPosCallback); + glfwSetScrollCallback(window,scrollCallback); // Set up GL context glewExperimental = GL_TRUE; @@ -119,6 +134,10 @@ bool init(obj *mesh) { initCuda(); initPBO(); + //init the camera + g_camera = new Camera(); + g_camera->Reset(width, height); + float cbo[] = { 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, @@ -273,4 +292,95 @@ void keyCallback(GLFWwindow *window, int key, int scancode, int action, int mods if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS) { glfwSetWindowShouldClose(window, GL_TRUE); } + else if (key == GLFW_KEY_UP && action == GLFW_PRESS) { + + g_camera->KeyChangeScale(true); + } + else if (key == GLFW_KEY_DOWN && action == GLFW_PRESS) { + + g_camera->KeyChangeScale(false); + + } + else if(key == GLFW_KEY_W && action == GLFW_PRESS) + { + g_camera->KeyChangeTranslate(2,true); + } + else if(key == GLFW_KEY_S && action == GLFW_PRESS) + { + g_camera->KeyChangeTranslate(2,false); + } + else if(key == GLFW_KEY_A && action == GLFW_PRESS) + { + g_camera->KeyChangeTranslate(1,true); + } + else if(key == GLFW_KEY_D && action == GLFW_PRESS) + { + g_camera->KeyChangeTranslate(1,false); + } + else if(key == GLFW_KEY_X && action == GLFW_PRESS) + { + g_camera->KeyChangeTranslate(3,false); + } + else if(key == GLFW_KEY_Z && action == GLFW_PRESS) + { + g_camera->KeyChangeTranslate(3,true); + } + + +} + +void mouseCallback(GLFWwindow *window, int button, int 
action, int mods) +{ + if(button == GLFW_MOUSE_BUTTON_LEFT && action == GLFW_PRESS) + { + glfwGetCursorPos(window,&g_mouse_old_x,&g_mouse_old_y); + } + else if(button == GLFW_MOUSE_BUTTON_RIGHT && action == GLFW_PRESS) + { + glfwGetCursorPos(window,&g_mouse_old_x,&g_mouse_old_y); + } + else if(button == GLFW_MOUSE_BUTTON_MIDDLE && action == GLFW_PRESS) + { + glfwGetCursorPos(window,&g_mouse_old_x,&g_mouse_old_y); + } +} + +void scrollCallback(GLFWwindow* window, double xoffset, double yoffset) +{ + g_camera->MouseChangeDistance(1.0f,xoffset ,yoffset ); +} + +void cursorPosCallback(GLFWwindow *window, double xpos, double ypos) +{ + if(glfwGetMouseButton(window,GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS) + { + double dx, dy; + dx = (double)(xpos - g_mouse_old_x); + dy = (double)(ypos - g_mouse_old_y); + g_camera->MouseChangeHeadPitch(0.2f, dx, dy); + + g_mouse_old_x = xpos; + g_mouse_old_y = ypos; + + } + else if(glfwGetMouseButton(window,GLFW_MOUSE_BUTTON_MIDDLE) == GLFW_PRESS) + { + double dx, dy; + dx = (double)(xpos - g_mouse_old_x); + dy = (double)(ypos - g_mouse_old_y); + g_camera->MouseChangeLookat(0.01f, dx, dy); + + g_mouse_old_x = xpos; + g_mouse_old_y = ypos; + } + else if(glfwGetMouseButton(window,GLFW_MOUSE_BUTTON_RIGHT) == GLFW_PRESS) + { + double dx, dy; + dx = (double)(xpos - g_mouse_old_x); + dy = (double)(ypos - g_mouse_old_y); + g_camera->MouseChangeDistance(0.05f, dx, dy); + + g_mouse_old_x = xpos; + g_mouse_old_y = ypos; + } } diff --git a/src/main.hpp b/src/main.hpp index 49d3948..7e65392 100644 --- a/src/main.hpp +++ b/src/main.hpp @@ -20,10 +20,12 @@ #include #include #include +#include #include #include #include #include "rasterize.h" +#include using namespace std; @@ -93,3 +95,20 @@ void deleteTexture(GLuint *tex); void mainLoop(); void errorCallback(int error, const char *description); void keyCallback(GLFWwindow *window, int key, int scancode, int action, int mods); +void mouseCallback(GLFWwindow *window, int button, int action, int mods); 
+void scrollCallback(GLFWwindow* window, double xoffset, double yoffset); +void cursorPosCallback(GLFWwindow *window, double xpos, double ypos); + + +//------------------------------ +//-------Mouse Sate--------- +//------------------------------ + +double g_mouse_old_x; +double g_mouse_old_y; + + +//------------------------------ +//-------Camera --------- +//------------------------------ +Camera* g_camera; \ No newline at end of file diff --git a/src/rasterize.cu b/src/rasterize.cu index 53103b5..a2e682c 100644 --- a/src/rasterize.cu +++ b/src/rasterize.cu @@ -15,6 +15,11 @@ #include #include "rasterizeTools.h" +//#define ENABLE_ANTI_ALIASING +//#define ENABLE_BLENDING + +#define Alpha 0.5f + struct VertexIn { glm::vec3 pos; glm::vec3 nor; @@ -23,23 +28,31 @@ struct VertexIn { }; struct VertexOut { // TODO + glm::vec3 pos; + glm::vec3 nor; + glm::vec3 col; + }; struct Triangle { VertexOut v[3]; }; struct Fragment { glm::vec3 color; + glm::vec3 nor; + float z; }; static int width = 0; static int height = 0; static int *dev_bufIdx = NULL; static VertexIn *dev_bufVertex = NULL; +static VertexOut *dev_bufVertex_out = NULL; static Triangle *dev_primitives = NULL; static Fragment *dev_depthbuffer = NULL; static glm::vec3 *dev_framebuffer = NULL; static int bufIdxSize = 0; static int vertCount = 0; +static bool* dev_is_writable = NULL; // mutex for race condition /** * Kernel that writes the image to the OpenGL PBO directly. 
@@ -75,6 +88,245 @@ void render(int w, int h, Fragment *depthbuffer, glm::vec3 *framebuffer) { } } +//vertex shader function +__global__ +void kern_vertex_shader(VertexIn *dev_bufVertex_in, VertexOut *dev_bufVertex_out, int vertCount,glm::mat4 MVP,glm::mat4 M_inv_T) //trans = proj*view*model +{ + int index = (blockIdx.x * blockDim.x) + threadIdx.x; + //simple version for doing nothing + + + if( index < vertCount) + { + VertexIn cur_v_in = dev_bufVertex_in[index]; + //calculate pos + dev_bufVertex_out[index].pos = multiplyMV(MVP,glm::vec4(cur_v_in.pos,1.f)); + + //calculate normal + dev_bufVertex_out[index].nor = multiplyMV(M_inv_T, glm::vec4(cur_v_in.nor,1.f)); + //calculate color + dev_bufVertex_out[index].col = cur_v_in.col; + + + } + +} + + +//primitives assembly +__global__ +void kern_premitive_assemble(VertexOut* dev_bufVertex_out,int* dev_bufIdx,Triangle* dev_primitives,int num_of_primitives) +{ + int index = (blockIdx.x * blockDim.x) + threadIdx.x; + + if(index < num_of_primitives) + { + dev_primitives[index].v[0] = dev_bufVertex_out[dev_bufIdx[3*index + 0]]; + dev_primitives[index].v[1] = dev_bufVertex_out[dev_bufIdx[3*index + 1]]; + dev_primitives[index].v[2] = dev_bufVertex_out[dev_bufIdx[3*index + 2]]; + } +} + + +//rasterization +__global__ +void kern_rasterization(Triangle* dev_primitives, Fragment* dev_depthbuffer, int num_of_primitives, int width, int height, bool* dev_is_writable) +{ + int index = (blockIdx.x * blockDim.x) + threadIdx.x; + + if(index < num_of_primitives) + { + glm::vec3 m_tri[3]; + m_tri[0] = dev_primitives[index].v[0].pos; + m_tri[1] = dev_primitives[index].v[1].pos; + m_tri[2] = dev_primitives[index].v[2].pos; + + glm::vec3 m_colors[3]; + m_colors[0] = dev_primitives[index].v[0].col; + m_colors[1] = dev_primitives[index].v[1].col; + m_colors[2] = dev_primitives[index].v[2].col; + + glm::vec3 m_nors[3]; + m_nors[0] = dev_primitives[index].v[0].nor; + m_nors[1] = dev_primitives[index].v[1].nor; + m_nors[2] = dev_primitives[index].v[2].nor; + + //bounding box of the triangle in NDC + float min_x = min(m_tri[0].x, min(m_tri[1].x, m_tri[2].x)); + float max_x = max(m_tri[0].x, max(m_tri[1].x, m_tri[2].x)); + float min_y = min(m_tri[0].y, min(m_tri[1].y, m_tri[2].y)); + float max_y = max(m_tri[0].y, max(m_tri[1].y, m_tri[2].y)); + + //clip triangles entirely outside the NDC cube + if(min_x > 1 || max_x < -1 || min_y >1 || max_y<-1) + { + return; + } + + + float dx = 2.f/width; + float dy = 2.f/height; + + int min_x_idx = max((int)((min_x+1)/dx),(int)0); + int min_y_idx = max((int)((min_y+1)/dy),(int)0); + int max_x_idx = min((int)((max_x+1)/dx),(int)width-1); + int max_y_idx = min((int)((max_y+1)/dy),(int)height-1); + + + + //first try the center sampling method + + for(int i = min_y_idx;i<=max_y_idx;i++) + { + for(int j = min_x_idx ; j<=max_x_idx ;j++) + { + int buffer_index = (height-i)*width + j; + + //center point sample + float cur_y = -1+ ((float)i*2+1.f)/(float)height; + float cur_x = -1+ ((float)j*2+1.f)/(float)width; + + glm::vec2 cur_vec2 (cur_x,cur_y); + + glm::vec3 b_c = calculateBarycentricCoordinate(m_tri,cur_vec2); + bool is_inside = isBarycentricCoordInBounds(b_c); + + //subpixel center 1 + float sub_y1 = cur_y + dy/4.f; + float sub_x1 = cur_x - dx/4.f; + glm::vec3 b_c1 = calculateBarycentricCoordinate(m_tri,glm::vec2(sub_x1,sub_y1)); + bool is_inside1 =
isBarycentricCoordInBounds(b_c1); + + //subpixel center 2 + float sub_y2 = cur_y + dy/4.f; + float sub_x2 = cur_x + dx/4.f; + glm::vec3 b_c2 = calculateBarycentricCoordinate(m_tri,glm::vec2(sub_x2,sub_y2)); + bool is_inside2 = isBarycentricCoordInBounds(b_c2); + + //subpixel center 3 + float sub_y3 = cur_y - dy/4.f; + float sub_x3 = cur_x - dx/4.f; + glm::vec3 b_c3 = calculateBarycentricCoordinate(m_tri,glm::vec2(sub_x3,sub_y3)); + bool is_inside3 = isBarycentricCoordInBounds(b_c3); + + //subpixel center 4 + float sub_y4 = cur_y - dy/4.f; + float sub_x4 = cur_x + dx/4.f; + glm::vec3 b_c4 = calculateBarycentricCoordinate(m_tri,glm::vec2(sub_x4,sub_y4)); + bool is_inside4 = isBarycentricCoordInBounds(b_c4); + + int sample_res = (int)is_inside + (int)is_inside1 + (int)is_inside2 + (int)is_inside3 + (int)is_inside4; + + if(sample_res) + { + float cur_z = getZAtCoordinate(b_c,m_tri); + if(cur_z<=1 && cur_z>= -1) //within the range + { + if(dev_depthbuffer[buffer_index].z < cur_z) + { + while(!dev_is_writable[buffer_index]) + {} + + //enter critical area + dev_is_writable[buffer_index] = false; + + dev_depthbuffer[buffer_index].z = cur_z; + dev_depthbuffer[buffer_index].nor = m_nors[0]*b_c.x + m_nors[1]*b_c.y + m_nors[2]*b_c.z; + dev_depthbuffer[buffer_index].color = (m_colors[0]*b_c.x + m_colors[1]*b_c.y + m_colors[2]*b_c.z)*((float)sample_res / 5.f); + + dev_is_writable[buffer_index] = true; + } +#ifdef ENABLE_BLENDING + else if(dev_depthbuffer[buffer_index].z > cur_z) + { + while(!dev_is_writable[buffer_index]) + {} + + //enter critical area + dev_is_writable[buffer_index] =false; + + dev_depthbuffer[buffer_index].color = Alpha*dev_depthbuffer[buffer_index].color + (1.f - Alpha)*(m_colors[0]*b_c.x +m_colors[1]*b_c.y+m_colors[2]*b_c.z)*((float)sample_res / 5.f); + + dev_is_writable[buffer_index] = true; + } +#endif + + } + } + + } + } + + + } + +} + +//fragment shader +__global__ +void kern_fragment_shader(Fragment *dev_depthbuffer, int num_of_fragment) +{ + int index = (blockIdx.x * blockDim.x) + threadIdx.x; + + if(index < num_of_fragment) + { + // for now just doing nothing to test + if(abs(dev_depthbuffer[index].z + M_INFINITE)>1e-6) + { + dev_depthbuffer[index].color = glm::normalize(glm::vec3(abs(dev_depthbuffer[index].nor.x),abs(dev_depthbuffer[index].nor.y),abs(dev_depthbuffer[index].nor.z))); + + //light direction glm::vec3(0,0,-1) + /*float dot_prod = glm::dot(glm::normalize(glm::vec3(1.0,0.0,0.0)),glm::normalize(dev_depthbuffer[index].nor)); + + 
if(dot_prod>0) + { + dev_depthbuffer[index].color *= (dot_prod + 0.1); + } + else + { + dev_depthbuffer[index].color *= 0.1; + }*/ + } + + + } + +} + +//fragment init +__global__ +void kern_fragment_init(Fragment *dev_depthbuffer, bool * dev_is_writable, int num_of_fragment) +{ + int index = (blockIdx.x * blockDim.x) + threadIdx.x; + + if(index < num_of_fragment) + { + // for now just doing nothing to test + dev_depthbuffer[index].z = -M_INFINITE; + dev_depthbuffer[index].color = glm::vec3(0.0); + dev_depthbuffer[index].nor = glm::vec3(0.0); + + dev_is_writable[index] = true; + + } +} + /** * Called once at the beginning of the program to allocate memory. */ @@ -84,9 +336,16 @@ void rasterizeInit(int w, int h) { cudaFree(dev_depthbuffer); cudaMalloc(&dev_depthbuffer, width * height * sizeof(Fragment)); cudaMemset(dev_depthbuffer, 0, width * height * sizeof(Fragment)); - cudaFree(dev_framebuffer); + cudaFree(dev_framebuffer); cudaMalloc(&dev_framebuffer, width * height * sizeof(glm::vec3)); cudaMemset(dev_framebuffer, 0, width * height * sizeof(glm::vec3)); + + + cudaFree(dev_is_writable); + cudaMalloc(&dev_is_writable, width * height * sizeof(bool)); + cudaMemset(dev_is_writable, true, width * height * sizeof(bool)); + + checkCUDAError("rasterizeInit"); } @@ -109,14 +368,20 @@ void rasterizeSetBuffers( bufVertex[i].pos = glm::vec3(bufPos[j + 0], bufPos[j + 1], bufPos[j + 2]); bufVertex[i].nor = glm::vec3(bufNor[j + 0], bufNor[j + 1], bufNor[j + 2]); bufVertex[i].col = glm::vec3(bufCol[j + 0], bufCol[j + 1], bufCol[j + 2]); - } + + + + } cudaFree(dev_bufVertex); cudaMalloc(&dev_bufVertex, vertCount * sizeof(VertexIn)); cudaMemcpy(dev_bufVertex, bufVertex, vertCount * sizeof(VertexIn), cudaMemcpyHostToDevice); + cudaFree(dev_bufVertex_out); + cudaMalloc(&dev_bufVertex_out, vertCount * sizeof(VertexOut)); + cudaFree(dev_primitives); - cudaMalloc(&dev_primitives, vertCount / 3 * sizeof(Triangle)); - cudaMemset(dev_primitives, 0, vertCount / 3 * sizeof(Triangle)); + 
cudaMalloc(&dev_primitives, bufIdxSize / 3 * sizeof(Triangle)); + cudaMemset(dev_primitives, 0, bufIdxSize / 3 * sizeof(Triangle)); checkCUDAError("rasterizeSetBuffers"); } @@ -124,15 +389,56 @@ /** * Perform rasterization. */ -void rasterize(uchar4 *pbo) { +void rasterize(uchar4 *pbo, Camera *m_camera) +{ int sideLength2d = 8; dim3 blockSize2d(sideLength2d, sideLength2d); dim3 blockCount2d((width - 1) / blockSize2d.x + 1, (height - 1) / blockSize2d.y + 1); - + // TODO: Execute your rasterization pipeline here // (See README for rasterization pipeline outline.) + //vertex shader + dim3 blockSize1d (THREADS_PER_BLOCK); + dim3 blockCount1d (vertCount/THREADS_PER_BLOCK+1); + + glm::mat4 m_view = glm::transpose(m_camera->GetViewMatrix()); + glm::mat4 m_proj = m_camera->GetProjectionMatrix(); + glm::mat4 m_model = m_camera->GetModelMatrix(); + //glm::mat4 MVP = glm::mat4(1.0); + //glm::mat4 MVP_inv_T = glm::mat4(1.0); + + glm::mat4 MVP = m_proj * m_view * m_model; + glm::mat4 M_inv_T = glm::transpose(glm::inverse(m_model)); + + + kern_vertex_shader<<<blockCount1d, blockSize1d>>>(dev_bufVertex, dev_bufVertex_out, vertCount, MVP, M_inv_T); + + //primitive assembler + int num_of_primitives = bufIdxSize/3; + blockCount1d.x = num_of_primitives/THREADS_PER_BLOCK+1; + + + kern_premitive_assemble<<<blockCount1d, blockSize1d>>>(dev_bufVertex_out,dev_bufIdx,dev_primitives, num_of_primitives); + + + //fragment init + int num_of_fragment = width * height; + blockCount1d.x = num_of_fragment/THREADS_PER_BLOCK+1; + + kern_fragment_init<<<blockCount1d, blockSize1d>>>(dev_depthbuffer,dev_is_writable,num_of_fragment); + + + //rasterization + blockCount1d.x = num_of_primitives/THREADS_PER_BLOCK+1; + kern_rasterization<<<blockCount1d, blockSize1d>>>(dev_primitives, dev_depthbuffer, num_of_primitives, width, height, dev_is_writable); + + //fragment shader + blockCount1d.x = num_of_fragment/THREADS_PER_BLOCK+1; + + kern_fragment_shader<<<blockCount1d, blockSize1d>>>(dev_depthbuffer,num_of_fragment); + + // Copy depthbuffer colors into framebuffer render<<<blockCount2d, blockSize2d>>>(width, height, dev_depthbuffer, dev_framebuffer); // Copy framebuffer into OpenGL buffer for OpenGL
previewing @@ -150,6 +456,9 @@ void rasterizeFree() { cudaFree(dev_bufVertex); dev_bufVertex = NULL; + cudaFree(dev_bufVertex_out); + dev_bufVertex_out = NULL; + cudaFree(dev_primitives); dev_primitives = NULL; @@ -159,5 +468,8 @@ void rasterizeFree() { cudaFree(dev_framebuffer); dev_framebuffer = NULL; + cudaFree(dev_is_writable); + dev_is_writable = NULL; + checkCUDAError("rasterizeFree"); } diff --git a/src/rasterize.h b/src/rasterize.h index a06b339..8e42921 100644 --- a/src/rasterize.h +++ b/src/rasterize.h @@ -9,10 +9,15 @@ #pragma once #include +#include + +#define THREADS_PER_BLOCK 256 + +#define M_INFINITE 10e6 void rasterizeInit(int width, int height); void rasterizeSetBuffers( int bufIdxSize, int *bufIdx, int vertCount, float *bufPos, float *bufNor, float *bufCol); -void rasterize(uchar4 *pbo); +void rasterize(uchar4 *pbo, Camera *m_camera); void rasterizeFree(); diff --git a/util/CMakeLists.txt b/util/CMakeLists.txt index afa80f3..de5dee3 100644 --- a/util/CMakeLists.txt +++ b/util/CMakeLists.txt @@ -8,6 +8,8 @@ set(SOURCE_FILES "obj.cpp" "objloader.hpp" "objloader.cpp" + "camera.h" + "camera.cpp" ) cuda_add_library(util diff --git a/util/camera.cpp b/util/camera.cpp new file mode 100644 index 0000000..0a826aa --- /dev/null +++ b/util/camera.cpp @@ -0,0 +1,245 @@ +#include "camera.h" + +Camera::Camera(void) +{ +} + +Camera::~Camera(void) +{ + +} + +void Camera::Reset(int width, int height) +{ + // setup default camera parameters + m_eye_distance = 20.0; + m_head = 0.0; + m_pitch = 90.0; + m_lookat = glm::vec3(0.0, 0.0, -1.0); + m_up = glm::vec3(0.0, 1.0, 0.0); + m_fovy = 45.0; + m_width = width; + m_height = height; + m_znear = 0.01; + m_zfar = 500.0; + + m_scale = glm::vec3(1.0); + m_translate = glm::vec3(0.0); + + updateViewMatrix(); + updateProjectionMatrix(); + updateModelMatrix(); +} + +//void Camera::Lookat(Mesh* mesh) +//{ +// unsigned int mid_index = mesh->m_vertices_number/2; +// +// EigenVector3 lookat = 
mesh->m_current_positions.block_vector(mid_index); +// m_lookat = glm::vec3(lookat[0], lookat[1], lookat[2]); +// updateViewMatrix(); +//} + +//void Camera::DrawAxis() +//{ +// glPushAttrib(GL_LIGHTING_BIT | GL_LINE_BIT); +// glDisable(GL_LIGHTING); +// glDisable(GL_DEPTH); +// +// // store previous states +// glMatrixMode(GL_MODELVIEW); +// glPushMatrix(); +// +// // change view matrix. +// glm::vec3 axis_cam_pos = float(2.0) * glm::normalize(m_position - m_lookat); +// glLoadMatrixf(&(glm::lookAt(axis_cam_pos, glm::vec3(0.0, 0.0, 0.0), m_up)[0][0])); +// +// // change viewport +// glViewport(m_width * 15 / 16, 0, m_width / 16, m_height / 16); +// +// //Draw axis. +// glBegin(GL_LINES); +// glColor3d(1.0, 0.0, 0.0); +// glVertex3d(0.0, 0.0, 0.0); +// glVertex3d(1.0, 0.0, 0.0); +// +// glColor3d(0.0, 1.0, 0.0); +// glVertex3d(0.0, 0.0, 0.0); +// glVertex3d(0.0, 1.0, 0.0); +// +// glColor3d(0.0, 0.0, 1.0); +// glVertex3d(0.0, 0.0, 0.0); +// glVertex3d(0.0, 0.0, 1.0); +// glEnd(); +// +// // restore everything +// glViewport(0, 0, m_width, m_height); +// glPopMatrix(); +// glPopAttrib(); +// glEnable(GL_LIGHTING); +// glEnable(GL_DEPTH); +//} + +// mouse interactions +void Camera::MouseChangeDistance(float coe, float dx, float dy) +{ + m_eye_distance -= dy * coe; + if (m_eye_distance < 4.0) m_eye_distance = 4.0; + updateViewMatrix(); +} +void Camera::MouseChangeLookat(float coe, float dx, float dy) +{ + glm::vec3 vdir(m_lookat - m_position); + glm::vec3 u(glm::normalize(glm::cross(vdir, m_up))); + glm::vec3 v(glm::normalize(glm::cross(u, vdir))); + + m_lookat += coe * (dy * v - dx * u); + updateViewMatrix(); +} +void Camera::MouseChangeHeadPitch(float coe, float dx, float dy) +{ + m_head += dy * coe; + m_pitch += dx * coe; + + updateViewMatrix(); +} + +// resize +void Camera::ResizeWindow(int w, int h) +{ + this->m_width = w; + this->m_height = h; + + updateProjectionMatrix(); +} + +glm::vec3 Camera::GetRaycastDirection(int mouse_x, int mouse_y) +{ + float x = 
(float)(mouse_x) / (float)(m_width-1); + float y = 1.0 - (float)(mouse_y) / (float)(m_height-1); + + // Viewing vector + glm::vec3 E = m_position; + glm::vec3 U = m_up; + glm::vec3 C = glm::normalize(m_lookat - m_position); // implies viewing plane distance is 1 + + float phi = glm::radians(m_fovy/2.0); + + // Vector A = C x U + glm::vec3 A = glm::normalize(glm::cross(C, U)); + // The REAL up vector B = A x C + glm::vec3 B = glm::normalize(glm::cross(A, C)); + // View Center M = E + C + glm::vec3 M = E + C; + + // V || B, but on NDC + glm::vec3 V = B * glm::tan(phi); + // H || A, but on NDC + // If you don't use phi here, you can simply use the ratio between m_width and m_height + glm::vec3 H = A * glm::tan(phi) / (float)m_height * (float)m_width; + + // Clicking point on the screen. World Coordinate. + glm::vec3 P = M + float(2.0*x - 1.0)*H + float(2.0*y - 1.0)*V; + + m_cached_projection_plane_center = M; + m_cached_projection_plane_xdir = H; + m_cached_projection_plane_ydir = V; + + glm::vec3 dir = glm::normalize(P-E); + + return dir; +} + +glm::vec3 Camera::GetCurrentTargetPoint(int mouse_x, int mouse_y) +{ + // assume camera is not moving + float x = (float)(mouse_x) / (float)(m_width-1); + float y = 1.0f - (float)(mouse_y) / (float)(m_height-1); + + glm::vec3 P = m_cached_projection_plane_center + float(2.0f*x - 1.0f)*m_cached_projection_plane_xdir + float(2.0f*y - 1.0f)*m_cached_projection_plane_ydir; + + glm::vec3 dir = P-m_position; + + glm::vec3 fixed_point_glm = m_cached_projection_plane_distance*dir+m_position; + + return fixed_point_glm; +} + +// private field +void Camera::updateViewMatrix() +{ + float r_head = glm::radians(m_head), r_pitch = glm::radians(m_pitch); + m_position.x = m_lookat.x + m_eye_distance * glm::cos(r_head) * glm::cos(r_pitch); + m_position.y = m_lookat.y + m_eye_distance * glm::sin(r_head); + m_position.z = m_lookat.z + m_eye_distance * glm::cos(r_head) * glm::sin(r_pitch); + + m_up = glm::vec3(0.0,
(glm::cos(r_head) > 0.0) ? 1.0 : -1.0, 0.0); + //m_position = glm::vec3(0.0,0.0,1.0); + m_view = glm::lookAt(m_position, m_lookat, m_up); +} +void Camera::updateProjectionMatrix() +{ + //m_projection = glm::perspective(m_fovy, static_cast<float>(m_width) / static_cast<float>(m_height), m_znear, m_zfar); + //m_projection = glm::infinitePerspective(m_fovy, static_cast<float>(m_width) / static_cast<float>(m_height), m_znear); + m_projection = glm::mat4(1.0); +} + +void Camera::updateModelMatrix() +{ + m_model = glm::translate(glm::mat4(1.f),m_translate)*glm::scale(glm::mat4(1.f),m_scale); + //std::cout<< m_translate.x<< m_translate.y<< m_translate.z <<std::endl; +} diff --git a/util/camera.h b/util/camera.h new file mode 100644 --- /dev/null +++ b/util/camera.h +#ifndef CAMERA_H +#define CAMERA_H + +#include +#include +#include + +class Camera +{ +public: + // constructor and destructor + Camera(void); + virtual ~Camera(void); + + // reset + void Reset(int width, int height); + //void Lookat(Mesh* mesh); + + // get camera matrices: + inline glm::mat4 GetViewMatrix() {return m_view;} + inline glm::mat4 GetProjectionMatrix() {return m_projection;} + inline glm::mat4 GetModelMatrix() {return m_model;} + + // get camera position and raycast direction: + inline glm::vec3 GetCameraPosition() {return m_position;} + inline float GetCameraDistance() {return m_eye_distance;} + inline void SetProjectionPlaneDistance(float distance) {m_cached_projection_plane_distance = distance;} + glm::vec3 GetRaycastDirection(int x, int y); + glm::vec3 GetCurrentTargetPoint(int x, int y); + + // mouse interactions + void MouseChangeDistance(float coe, float dx, float dy); + void MouseChangeLookat(float coe, float dx, float dy); + void MouseChangeHeadPitch(float coe, float dx, float dy); + + //key interactions + void KeyChangeScale(bool is_enlarge); + void KeyChangeTranslate(int dir, bool is_add); + + // Draw axis + // void DrawAxis(); + + // resize + void ResizeWindow(int w, int h); + +protected: + int m_width; + int m_height; + float m_znear; + float m_zfar; + float m_fovy; + + float m_eye_distance; + float m_head; + float m_pitch; + + glm::vec3 m_position; + glm::vec3 m_up; + 
glm::vec3 m_lookat; + glm::vec3 m_cached_projection_plane_center; + glm::vec3 m_cached_projection_plane_xdir; + glm::vec3 m_cached_projection_plane_ydir; + float m_cached_projection_plane_distance; + + glm::mat4 m_view; + glm::mat4 m_projection; + glm::vec3 m_translate; + glm::vec3 m_scale; + glm::mat4 m_model; +private: + // update camera matrices: + void updateViewMatrix(); + void updateProjectionMatrix(); + void updateModelMatrix(); +}; + +#endif \ No newline at end of file diff --git a/util/obj.cpp b/util/obj.cpp index 6ae70f1..86ffa7b 100644 --- a/util/obj.cpp +++ b/util/obj.cpp @@ -487,3 +487,18 @@ int obj::getBufColsize() { return cbosize; } +void obj::setFirstTriColor() +{ + cbo[0 + 0] = 1.0; + cbo[0 + 1] = 0.0; + cbo[0 + 2] = 0.0; + + cbo[3 + 0] = 0.0; + cbo[3 + 1] = 1.0; + cbo[3 + 2] = 0.0; + + cbo[6 + 0] = 0.0; + cbo[6 + 1] = 0.0; + cbo[6 + 2] = 1.0; +} + diff --git a/util/obj.hpp b/util/obj.hpp index d81f5f6..9edd27a 100644 --- a/util/obj.hpp +++ b/util/obj.hpp @@ -81,4 +81,7 @@ class obj { vector *getNormals(); vector *getTextureCoords(); vector *getFaceBoxes(); + + //for testing the color interpolation + void setFirstTriColor(); };