Recently I've been doing some research into MUI 3.8 and the Zune classes.
I've also been looking into different rendering techniques, such as scanline rendering versus ray tracing, and the benefits and drawbacks of each. I had hoped to examine the rendering engine at a later point in the process, but I suppose it's good to have a clear picture up front of what's going on in the program.
It seems that 3D modeling has a mathematics, a science and a lingo all its own. For example, in 2D space a pixel is represented by x, y coordinates, but in 3D space a voxel (the 3D analogue of a pixel) has x, y, z coordinates. The octree method is one of many ways of subdividing the 3D scene so that its triangles (and their vertices) can be sorted and sent on for further processing. Other techniques such as "frustum culling" are used to eliminate triangles that lie outside the view or are hidden behind other geometry.
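Just to keep the idea straight in my own head, here's a rough sketch of what a frustum test looks like: a bounding sphere checked against the six planes of the view frustum. None of this is RayStorm's actual code; the structs and the toy frustum are made up purely for illustration.

/* Minimal sketch of frustum culling: test a bounding sphere against the
 * six planes of the view frustum.  Not RayStorm code; names and layout
 * are invented for illustration only. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Plane in the form n.x*x + n.y*y + n.z*z + d = 0, normal pointing inward */
typedef struct { Vec3 n; float d; } Plane;

typedef struct { Plane planes[6]; } Frustum;

static float dot(Vec3 a, Vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Returns 1 if a sphere (centre c, radius r) is at least partly inside
 * the frustum, 0 if it is completely outside and can be culled. */
static int sphere_in_frustum(const Frustum *f, Vec3 c, float r)
{
    int i;
    for (i = 0; i < 6; i++) {
        if (dot(f->planes[i].n, c) + f->planes[i].d < -r)
            return 0;   /* wholly on the outside of this plane */
    }
    return 1;
}

int main(void)
{
    /* A toy frustum: just a near plane (z >= 1) and a far plane (z <= 100),
     * with the four side planes left "open" (zero normal, huge d). */
    Frustum f = {{
        {{ 0, 0,  1 },  -1.0f },    /* near:  z - 1   >= 0 */
        {{ 0, 0, -1 }, 100.0f },    /* far:  -z + 100 >= 0 */
        {{ 0, 0,  0 }, 1e30f }, {{ 0, 0, 0 }, 1e30f },
        {{ 0, 0,  0 }, 1e30f }, {{ 0, 0, 0 }, 1e30f }
    }};
    Vec3 visible = { 0, 0, 10 };
    Vec3 behind  = { 0, 0, -5 };

    printf("visible: %d\n", sphere_in_frustum(&f, visible, 1.0f)); /* 1 */
    printf("behind : %d\n", sphere_in_frustum(&f, behind, 1.0f));  /* 0 */
    return 0;
}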
There is RayStorm and Lightwave modeling, and there is Sculpt 3D and POV-Ray modeling. There is also a technique for converting the 3D meshes used in RayStorm and other programs into STL meshes for 3D printing. That last part is very interesting.
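Out of curiosity I sketched out how the ASCII STL format is written, since that is the usual target for 3D printing. The Triangle struct here is invented for the example; it is not RayStorm's (or any other program's) internal mesh format.

/* Minimal sketch of writing a triangle mesh out as ASCII STL.
 * The Triangle struct is made up for illustration. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 normal; Vec3 v[3]; } Triangle;

static void write_stl(FILE *out, const char *name,
                      const Triangle *tris, int count)
{
    int i, j;
    fprintf(out, "solid %s\n", name);
    for (i = 0; i < count; i++) {
        fprintf(out, "  facet normal %f %f %f\n",
                tris[i].normal.x, tris[i].normal.y, tris[i].normal.z);
        fprintf(out, "    outer loop\n");
        for (j = 0; j < 3; j++)
            fprintf(out, "      vertex %f %f %f\n",
                    tris[i].v[j].x, tris[i].v[j].y, tris[i].v[j].z);
        fprintf(out, "    endloop\n");
        fprintf(out, "  endfacet\n");
    }
    fprintf(out, "endsolid %s\n", name);
}

int main(void)
{
    /* One triangle in the z = 0 plane, normal pointing up the z axis. */
    Triangle tri = { { 0, 0, 1 },
                     { { 0, 0, 0 }, { 1, 0, 0 }, { 0, 1, 0 } } };
    write_stl(stdout, "example", &tri, 1);
    return 0;
}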
But as far as RayStorm goes, the part that concerns me is input and output. What kind of 3D meshes are used, and what type of pixel data is written to file? Judging by the original output formats (JPG, TGA, PNG, ILBM deep images), the output is likely just RGB. Getting 32-bit RGB + alpha would require post-processing the image in another program to add an alpha channel and blend it with the background.
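For reference, that post-processing step amounts to the standard "over" alpha blend applied per pixel. This is just the generic formula, nothing specific to RayStorm's output.

/* Minimal sketch of the standard "over" alpha blend used when
 * compositing a 32-bit RGBA pixel onto an RGB background. */
#include <stdio.h>

typedef struct { unsigned char r, g, b, a; } RGBA;
typedef struct { unsigned char r, g, b; } RGB;

/* result = foreground * alpha + background * (1 - alpha) */
static RGB blend_over(RGBA fg, RGB bg)
{
    RGB out;
    out.r = (unsigned char)((fg.r * fg.a + bg.r * (255 - fg.a)) / 255);
    out.g = (unsigned char)((fg.g * fg.a + bg.g * (255 - fg.a)) / 255);
    out.b = (unsigned char)((fg.b * fg.a + bg.b * (255 - fg.a)) / 255);
    return out;
}

int main(void)
{
    RGBA fg = { 255, 0, 0, 128 };   /* half-transparent red  */
    RGB  bg = {   0, 0, 255 };      /* solid blue background */
    RGB  px = blend_over(fg, bg);
    printf("blended: %d %d %d\n", px.r, px.g, px.b);   /* roughly 128 0 127 */
    return 0;
}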
But the Image Viewer is now capable of displaying 24-bit, 32-bit, or anything else. By the end of the week I'll start working on the Main Window, with the buttons, child windows and menus for the Modeler component.
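For the Main Window I'll be starting from the usual MUI/Zune boilerplate, which looks roughly like the sketch below. It assumes the compiler auto-opens muimaster.library, and the window title and button labels are just placeholders, not the final Modeler layout.

/* Minimal MUI/Zune window sketch: an application with one window,
 * two placeholder buttons, and the standard close/quit handling.
 * Assumes the compiler's startup code auto-opens muimaster.library. */
#include <exec/types.h>
#include <dos/dos.h>
#include <libraries/mui.h>
#include <proto/exec.h>
#include <proto/intuition.h>
#include <proto/muimaster.h>
#include <clib/alib_protos.h>   /* DoMethod() */

int main(void)
{
    Object *app, *win;
    ULONG sigs = 0;

    app = ApplicationObject,
        MUIA_Application_Title, "Modeler",
        SubWindow, win = WindowObject,
            MUIA_Window_Title, "Main Window",
            WindowContents, VGroup,
                Child, SimpleButton("Render"),
                Child, SimpleButton("Quit"),
            End,
        End,
    End;

    if (!app)
        return 20;

    /* Closing the window quits the application. */
    DoMethod(win, MUIM_Notify, MUIA_Window_CloseRequest, TRUE,
             app, 2, MUIM_Application_ReturnID,
             MUIV_Application_ReturnID_Quit);

    set(win, MUIA_Window_Open, TRUE);

    while (DoMethod(app, MUIM_Application_NewInput, &sigs)
           != MUIV_Application_ReturnID_Quit)
    {
        if (sigs) {
            sigs = Wait(sigs | SIGBREAKF_CTRL_C);
            if (sigs & SIGBREAKF_CTRL_C)
                break;
        }
    }

    set(win, MUIA_Window_Open, FALSE);
    MUI_DisposeObject(app);
    return 0;
}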
So, as you can see, it's a lot of work. But it's beneficial as well.