On the Purpose of the Assignments

While on an assignment-by-assignment basis they each taught us a specific implementation, design, or engine process, I think there is a larger lens through which to look at the semester as a whole. In previous semesters we've written engines where we were instructed on what to do, but for the most part not HOW. The difficult part for me wasn't the exact implementation of a given feature; it was often the design, or coming up with sensible decisions on WHERE to place things. By having an existing code base to work within, I had immediate examples for design-centric decisions when it came to writing the various systems that we've done. I also enjoyed the "directedness" of the assignments. Because of this, I was able to spend most of my time on HOW to accomplish a goal, not the WHERE that has tripped me up more than once in the past. I felt the learning was that much more effective because I didn't spend long spinning my wheels wondering about best practices; I could just look at a different system elsewhere, see how it was implemented, and try my best to stick to the existing design. By imposing this arbitrary design consideration on myself, I also got experience working with, and sticking to, an existing design, which may prove useful when working on a large team.

What did I get out of the class?

Of course, the exact topics that we covered were at times completely new to me, so it was great to get my first experience with some of these systems and topics. That said, the best insights I got out of the course were directly related to engine DESIGN and the thought processes / justifications for doing things a certain way. I do want to comment on the fact that at the beginning of the semester I had thought the codebase was quite huge, but by the end I believed I had a decent grasp on the code base as a whole. I may not know every system intimately, but I got to the point where I knew the DIRECTION to look in for specific things and questions.

Thoughts on General Software Architecture

After the experience of this semester, working in this engine alongside a "large" engineering team, I found myself time and time again practicing the design mantra that things should be where you THINK they should be. Meaning, sensible. I used that word far too much this semester... I do believe there needs to be a period of planning prior to beginning major work on a project, whether that be the rough outlining of systems and where they live, or the code habits for working with those systems. I prefer upfront planning, even if it is brief and sparse; I have always been a person who prefers a plan of attack versus "winging it". This semester one of my colleagues made a statement concerning what constitutes good code (paraphrased): "It needs to be easily iterable, flexible enough that changes can be made to match the needs of the developers and design without too much rework." What does achieving that entail? Surely we can't be aware of ALL of our future needs for the entirety of the project, but there are base design considerations that can be made. For example, in my latest project, NOT having on-screen buttons tied DIRECTLY into the graphics system; that would be better served by its own interface system that speaks to graphics.
With multiplatform support being a high emphasis this semester, we can take steps to ensure that our code is as platform independent as possible by using the strategies we practiced this semester. One such strategy is sketched below.
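As one illustration, here is a minimal sketch of compile-time platform selection, where the game only ever calls a single platform-independent declaration. The macro and function names are hypothetical, not necessarily the engine's actual ones:

```cpp
// A sketch of compile-time platform selection; names are illustrative.
#include <cstdio>

namespace Graphics
{
	// Game code only ever sees this platform-independent declaration
	void Clear();
}

#if defined( PLATFORM_D3D )
void Graphics::Clear()
{
	// The Direct3D-specific clear call would go here
	std::printf( "Clearing via Direct3D\n" );
}
#else
void Graphics::Clear()
{
	// The OpenGL-specific clear call would go here (e.g. glClear())
	std::printf( "Clearing via OpenGL\n" );
}
#endif

int main()
{
	Graphics::Clear();	// identical call site on every platform
	return 0;
}
```

In practice the platform-specific definitions live in separate files that only one platform's project builds, but the idea is the same: the call site never changes.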
I believe good software architecture should be easily iterable. Bad software design ties systems together, with dependencies going back and forth, so that a single end change requires numerous changes along the way. Or more succinctly: bad software architecture is when a "small" end change requires a "large" internal change.
What and Why?

I return with a new engine feature: basic user interface! User interface programming and design is a field that yours truly has had an interest in for quite some time. Seeing that we developed very basic on-screen sprites, I figured why not try my hand at creating some basic buttons? My biggest experience with game interface programming has been via Unity, so many of my design decisions and criticisms of my implementation may compare and contrast directly with Unity's Canvas and UI system, one that I have gotten to know pretty well over the years. Before attacking this, I found it useful to map out the exact capabilities that we'd have to write.

Observations:
How?!

The Buttons are essentially a different type of Sprite, but with the extra functionality needed to process click events. The position of the mouse click is passed into ProcessClick(), which is polled each update; the reasoning for this will be explained later.

But, the mouse?!

I mentioned before that the existing engine had no support for tracking the mouse, whether it be its state or its direct position on screen. To get this done, we had to do some digging in the proper Windows API. This required some brushing up on window handles, and putting these functionalities in an appropriate place. We have an existing UserInput system that, until recently, was only detecting simple key presses. There, we can now poll whether the user has the mouse button clicked or not.

Okay, but what about detecting clicks?

The real meat of the challenge was detecting whether or not the user clicked on a given button, in a sensible way. Currently, our process for listening for user input has been to check every frame. Our engine was not event driven, meaning the engine wasn't listening for an input event to fire off. For moving our meshes and sprites in prior assignments, we just checked the state of the keyboard every frame. This isn't a bad way, but it is one way to do it; in fact, I believe Unity does this. You'd do the following in your Update.

Finally, how about image bounds?

After we know how and where button clicks are performed, how do we figure out if they've been clicked on our button?! I noted above that we declare the position/size of our on-screen sprites via bounds of [-1,1] on screen. This is similar to how Unity's canvas system allows the developer to position elements based on variable screen size (Unity also allows you to specify a direct pixel offset as well). Because the Windows calls that give us the mouse position report it in absolute pixels, we must convert our screen-space bounds of [-1,1] to actual pixels within the window. Above are the lines where we compute exactly where the center pivot of the image is in relation to the actual size of the window; the window size is passed in at button creation for this to happen. We do the same to determine the left, right, top, and bottom bounds of the image. These are then used in ProcessClick(), which fires off the saved callback when a click is detected. (A sketch of this whole flow appears at the end of this post.)

How did this all fit?

With the lack of proper mouse support and a true UI system, to get this working most of it is shoehorned into Graphics, which, while it does work, I really don't like. Were this a proper engine feature, I would have made a dedicated Interface system, just like Graphics, UserInput, etc., but with a direct link to Graphics, since of course they'll need to talk to one another.

In Conclusion

This challenge was more of a personal exercise in understanding what needs to go into creating a sensible Interface system. The biggest hurdle was working with a code base that had no regard for the matter, and trying to fit my logic into sensible places while fighting my personal opinions on where things should go, all within a specific time frame.

Try it out!

Below you can find executables that allow you to mouse around and test out the menu for yourself. Unlike prior downloads of the game, movement of the camera, a mesh, transparency, all of that has been taken out. When you click a button, it is specified to log the event to the log file, which you can find in the executable's main directory.
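To make the flow above concrete, here is a minimal sketch of per-frame mouse polling and the [-1,1]-to-pixel bounds test. The Win32 calls (GetAsyncKeyState, GetCursorPos, ScreenToClient, GetClientRect) are real, but every other name is hypothetical; this is not the engine's actual code:

```cpp
// A sketch of per-frame click detection for a screen-space button.
#include <Windows.h>
#include <cstddef>
#include <functional>

struct Button
{
	// Center and half-size in normalized screen space, where x and y run [-1,1]
	float centerX, centerY;
	float halfWidth, halfHeight;
	std::function<void()> onClick;	// saved callback fired on a detected click

	// Convert the normalized bounds to pixels and test the click position
	void ProcessClick( int i_pixelX, int i_pixelY, int i_windowWidth, int i_windowHeight )
	{
		// Map [-1,1] to pixels; note pixel y grows downward while screen-space y grows upward
		const float pixelCenterX = ( centerX + 1.0f ) * 0.5f * i_windowWidth;
		const float pixelCenterY = ( 1.0f - centerY ) * 0.5f * i_windowHeight;
		// One screen-space unit spans half the window in pixels
		const float left = pixelCenterX - ( halfWidth * 0.5f * i_windowWidth );
		const float right = pixelCenterX + ( halfWidth * 0.5f * i_windowWidth );
		const float top = pixelCenterY - ( halfHeight * 0.5f * i_windowHeight );
		const float bottom = pixelCenterY + ( halfHeight * 0.5f * i_windowHeight );
		if ( i_pixelX >= left && i_pixelX <= right && i_pixelY >= top && i_pixelY <= bottom )
		{
			if ( onClick ) onClick();
		}
	}
};

// Called once per frame, in the same spirit as polling the keyboard state
void UpdateButtons( HWND i_window, Button* i_buttons, size_t i_count )
{
	// Poll the left mouse button's state rather than waiting on an event
	if ( GetAsyncKeyState( VK_LBUTTON ) & 0x8000 )
	{
		POINT cursor;
		GetCursorPos( &cursor );				// absolute screen pixels
		ScreenToClient( i_window, &cursor );	// relative to our window's client area
		RECT client;
		GetClientRect( i_window, &client );
		for ( size_t i = 0; i < i_count; ++i )
		{
			i_buttons[i].ProcessClick( cursor.x, cursor.y, client.right, client.bottom );
		}
	}
}
```

Note that polling like this fires every frame the mouse button is held; a fuller version would track the previous frame's state so the callback fires once per click.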
Thanks to those at Hathos Interactive for allowing me to use some assets from their current project for this example!
NOTE: While the site is quite bare, the team will be releasing an update to their current project sometime this winter / early next year.

Being Transparent!

Recall back to when we first implemented three-dimensional graphics in our game engine in week 9's post. One of the challenges we hypothesized was that of depth; as in, who is in front of whom? Well, we're fortunate that for opaque meshes the GPU is in charge of who is in front of what based on their world position, and it just overwrites each pixel with the pixel that is in front. But what about transparent objects? Because an object is transparent, we must know what is behind it. This order is very important, and we'll keep in mind the logic of drawing from back to front. Or perhaps more simply, we must first draw the pixels that are behind the transparent objects before we draw the transparent objects with their effect. This is crucial, as the opaque objects must be on screen in order for the alpha blending to occur properly within the transparent meshes. Previously, in both our application and rendering threads, we had a vector that held the Meshes that would be rendered on screen. But how do we impose order within these structures? We'll have to sort them based on their Z position. We could have approached this by adding the transparent meshes to the same structures that hold the rest of the meshes, but for simplicity, we'll just go ahead and make a separate, sorted structure that only holds transparent meshes. We'll make sure these are sorted correctly based on their position's Z value, and that they are the ones drawn last. Specifically, we do the following:
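As a rough sketch of that sorting step (assuming a simple struct holding each transparent mesh's render data and its world-space Z; names are hypothetical, not the engine's actual code):

```cpp
// A sketch of keeping transparent meshes in their own back-to-front sorted list.
// Assumes a camera looking down -Z, so a smaller Z is farther away; flip the
// comparison for the opposite handedness.
#include <algorithm>
#include <vector>

struct MeshRenderData
{
	// ... effect, vertex buffer, transform, etc. ...
	float z;	// world-space Z used for ordering
};

void SortTransparentMeshes( std::vector<MeshRenderData>& io_transparentMeshes )
{
	// Farthest first, so closer transparent meshes blend over what's behind them
	std::sort( io_transparentMeshes.begin(), io_transparentMeshes.end(),
		[]( const MeshRenderData& i_lhs, const MeshRenderData& i_rhs )
		{
			return i_lhs.z < i_rhs.z;
		} );
}

// Per frame: draw every opaque mesh first, then the sorted transparent list last
```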
In the screenshots above you can see two cube meshes that use our transparent fragment shader, placed in front of one another so you can see their transparencies, with an opaque mesh in the background.

Controls

The Camera: [Key : Action]
[W : Move Forwards] [A : Move to the Left] [S : Move to the Right] [D : Move Backwards] [Space : Move UP] [Left / Right Ctrl : Move DOWN]

The Brain: [Key : Action]
[Arrow UP : Move Forwards] [Arrow LEFT : Move to the Left] [Arrow RIGHT : Move to the Right] [Arrow DOWN : Move Backwards] [Page UP : Move UP] [Page DOWN : Move DOWN]

Downloads

You can try out these games via the links below. The only difference is that Direct3D is used in the x64 version, and OpenGL in the other. They have been built and verified to work on Windows.
How to Simplify the Build Process

Prior to this week, building our game assets with the appropriate build tools required a lengthy .lua file that contained instructions on how, and when, to build each and every game asset. This file stretched to almost 500 lines of code! It was a direct offender of the programming paradigm of being DRY, or Don't Repeat Yourself: it contained dozens of repeated routines that did similar tasks, including determining whether a directory should be created to house our final compiled assets, and error reporting where appropriate. To mitigate this, we now have our new Asset Build System! The only differences between our game assets are their names / paths and file types. We can contain this data in a more central location, formatted nicely within another .lua file (a sketch of what this can look like appears at the end of this post). We put our meshes into a "meshes" group, shaders into "shaders", and so on. We can then pass these arguments to functions that determine what type they are and feed them to the appropriate executable. In the end, we now have a more versatile asset building system with minimal repeated routines. It can also be easily extended to include assets of different types that may be useful if the engine is ever expanded upon. Another thing to mention is that the .lua file containing assets relevant to the game is now contained directly within the game's source directory, no longer having to be included as part of the game engine. This is a forward step in making a clear distinction between game and engine code.

Controls

The Camera: [Key : Action]
[W : Move Forwards] [A : Move to the Left] [S : Move to the Right] [D : Move Backwards] [Space : Move UP] [Left / Right Ctrl : Move DOWN]

The Brain: [Key : Action]
[Arrow UP : Move Forwards] [Arrow LEFT : Move to the Left] [Arrow RIGHT : Move to the Right] [Arrow DOWN : Move Backwards] [Page UP : Move UP] [Page DOWN : Move DOWN]

Downloads

You can try out these games via the links below. The only difference is that Direct3D is used in the x64 version, and OpenGL in the other. They have been built and verified to work on Windows.
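For illustration, the per-game asset list described above might look roughly like this (group and file names are hypothetical, not the project's actual entries):

```lua
-- A sketch of a per-game asset manifest: data only, no build logic.
-- The build system reads each group and routes its entries to the right tool.
return
{
	meshes =
	{
		"Meshes/cube.msh",
		"Meshes/plane.msh",
	},
	shaders =
	{
		{ path = "Shaders/Vertex/sprite.shader", arguments = { "vertex" } },
		{ path = "Shaders/Fragment/sprite.shader", arguments = { "fragment" } },
	},
	textures =
	{
		"Textures/logo.png",
	},
}
```

Because the file is pure data, adding a new asset is a one-line change, and adding a new asset type just means adding a group and teaching the build system which executable handles it.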
Configuring MayaMeshExporter

Required References for the Project:
Exported Data

Refer to the previous post concerning the output format of our mesh files. Of course, we must output the data to satisfy the structures defined within the file. This time, however, we added texture support: the file, now a .MSH, needed UVs to complete the task. I specifically output only the minimum required data for this file. Why only the minimum? It's less code maintenance, it's simpler, and I don't foresee a much more complex use case for this exporter for the time being. If the opposite turns out to be true, I have no problem coming back and adding more functionality, failsafes, and considerations for expandability.

Debugging

While writing the final format of the data to be exported from Maya, I found it useful to be able to see the actual data being written at every step of the way. In fact, Visual Studio is able to attach to Maya, and when the exporter executable is run upon scene export, you can debug and step through the process just like anything else! Below, you can see the beginning call to exporting a mesh file via a breakpoint set at the beginning of the process in Visual Studio.

Controls

The Camera: [Key : Action]
[W : Move Forwards] [A : Move to the Left] [S : Move to the Right] [D : Move Backwards] [Space : Move UP] [Left / Right Ctrl : Move DOWN]

The Sphere: [Key : Action]
[Arrow UP : Move Forwards] [Arrow LEFT : Move to the Left] [Arrow RIGHT : Move to the Right] [Arrow DOWN : Move Backwards] [Page UP : Move UP] [Page DOWN : Move DOWN]

Downloads

You can try out these games via the links below. The only difference is that Direct3D is used in the x64 version, and OpenGL in the other. They have been built and verified to work on Windows.
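For a taste of what gathering that minimum data looks like, here is a minimal sketch using the Maya C++ API's MFnMesh. Error handling is abbreviated, and this is not the exporter's actual code:

```cpp
// A sketch of pulling positions and UVs from a mesh via the Maya API.
// Real exporters iterate per-face-vertex to pair each position with its UV;
// this only shows the raw queries.
#include <maya/MDagPath.h>
#include <maya/MFloatArray.h>
#include <maya/MFnMesh.h>
#include <maya/MPointArray.h>
#include <maya/MStatus.h>

MStatus GatherMeshData( const MDagPath& i_mesh )
{
	MStatus status;
	MFnMesh mesh( i_mesh, &status );
	if ( status != MS::kSuccess ) return status;

	// Vertex positions in object space
	MPointArray positions;
	status = mesh.getPoints( positions, MSpace::kObject );
	if ( status != MS::kSuccess ) return status;

	// UVs from the current (default) UV set
	MFloatArray us, vs;
	status = mesh.getUVs( us, vs );
	if ( status != MS::kSuccess ) return status;

	// ... write positions and UVs into the .MSH structures here ...
	return MS::kSuccess;
}
```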