Some progress on the boat hull rusting away sequence. I hand-painted mask keyframes that denote where rust forms and where deterioration starts to happen. Black on the mask is solid, black to 50% gray is rust forming, and 50% gray to white is where deterioration occurs. There are 8 keyframes in total. The result is getting there, but I still would like to do a bit of tweaking. Some of the keyframes towards the end jump a bit too much from one step to the next. 

This week I focused on developing this shader-based rusting/deterioration transition (23+ hours.) I attempted a few different methods for making this happen. Originally I was working with two separate masks - one large detail control map (the overall, top to bottom rusting of the boat, holes, etc.) and one small detail control map (the ribbing of the boat.) That first method allowed for a lot of versatility in the timing of the transition and the blending, since it was all blended together in the shader, but finding a way to blend the two maps together well in the material turned out to be more overhead than necessary for the visual result I needed. Painting all of these details in one mask and separating them by value allowed for a more natural, organic blending across the whole boat. 


I've been playing around with materials to keep working at achieving the painted style I want for this project. Although I enjoy the look of hand-painted textures, my goal is to take this style beyond the typical flatly painted look by making it a convincing part of a 3D world with dynamic lighting. In particular, I've been trying to create custom, painted-looking normal maps to accomplish this.

In my attempts to sculpt normal maps, I still wasn't getting the look I wanted. Normal maps seemed to kill any painted illusion and overwhelm my hand painted textures. Also, the chunkier style of my sculpts conflicted a bit with the more textural quality of my paintings. While revising the material for my tree bark, I made a bit of a breakthrough in my method. 

These images are of the full tree with a modified, painted looking normal map. These also have the additional opacity mapped shell of the mesh over top, adding the textural edges on the silhouette.

You can see the effect a bit clearer in the image on the right. Two instances of my sculpted, tiling bark normal map texture are each warped slightly differently by noise patterns, and blended over top of each other by a dithered mask. This resulting warped normal map is blended back over top the original normal very slightly to only get a slight warped and dithered effect. It adds a slight pixel-level textural effect to the normal map. 

Then, I step the values of the normal map so that the transitions between different depths and angles are sharper and less gradual, sort of like a posterization or cel shading effect, but applied to the normal map. This helps in getting a more faceted look that I just couldn't achieve in my original sculpt.
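The stepping idea can be sketched outside the material editor. This is a minimal Python illustration of quantizing a normal map's channels into discrete bands, not the actual material graph; the band count of 4 is an arbitrary assumption.

```python
# Sketch of the "stepped" normal idea: quantize each channel of a
# normal-map pixel into a few discrete bands, like posterization.
# The band count is a made-up example value, not the real setup.

def posterize(value, steps):
    """Snap a 0-1 value to one of `steps` discrete levels."""
    if steps < 2:
        return value
    return round(value * (steps - 1)) / (steps - 1)

def step_normal(pixel, steps=4):
    """Posterize an (r, g, b) normal-map pixel channel by channel."""
    return tuple(posterize(c, steps) for c in pixel)

# A smooth gradient of values collapses into a few flat "facets":
facets = step_normal((0.50, 0.72, 0.95))
```

In the material this would be done per-pixel on the sampled normal before renormalizing, but the value-snapping logic is the same.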

It was upsetting to see at first that the effect of this normal map wasn't really coming through once my original painted diffuse color was applied, even with a really direct point light shining on the surface.

Something simpler works a bit better to show the normal map. Although I could add a few more subtle hue variations to this map, it's a good base. With the painted quality integrated into the normal map, it looks much better integrated into a 3D world. 









I spent the majority of this week working with materials to find this painted style (18 hours.) I also completely revised and reorganized my tree Blueprint (13 hours), and I still need to fix a few things I broke in the process. I spent a bit of time with post process materials to soften the edges of objects (5 hours.) Lastly, I spent a lot of time trying to refine the look of my boat by painting model-specific detail maps, only to find that so far, I still like my iteration from last week better (12 hours.) Also, the development in my normal map process makes the work I was doing on the look of the boat a bit unusable now, so next week I'll be working on applying this method to the normal maps I'll be creating for the boat.


My goal for this week has been to push the look of my boat further through defining its material and to work on the material blending between the painted boat material and the rusted boat material.

The opacity masked look has been working decently to get a sense of a painted edge quality, but something I've been lacking that I really admire about paintings is the mix between hard edges and lost edges. The brush strokes on an edge aren't always completely hard and opaque. The problem is that UE4 masked materials don't allow for grayscale alpha values; they clip to black and white, so a pixel is either 100% or 0% opaque. Translucent materials can provide this, but they're more expensive to render. 

I discovered that the UE4 content examples demonstrate a technique for faking translucency with masked materials by dithering. Dithering is a graphics technique that simulates intermediate values using the density of many tiny dots when the color palette is limited. The DitherTemporalAA node in the material editor provides that kind of effect.
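For anyone unfamiliar, the core of ordered dithering can be sketched in a few lines. This is a generic Bayer-matrix illustration, not the internals of DitherTemporalAA (which also varies its pattern over time):

```python
# Minimal ordered (Bayer) dithering sketch: a gray value is compared
# against a repeating threshold matrix, so each pixel ends up fully
# on or fully off, but the *density* of "on" pixels reads as the
# original gray level from a distance.

BAYER_2X2 = [[0.25, 0.75],
             [1.00, 0.50]]  # threshold tile, repeated across the image

def dither(gray, x, y):
    """Return 1 (kept) or 0 (clipped) for the pixel at (x, y)."""
    threshold = BAYER_2X2[y % 2][x % 2]
    return 1 if gray >= threshold else 0

# 50% gray keeps exactly half of the pixels in each 2x2 tile:
tile = [[dither(0.5, x, y) for x in range(2)] for y in range(2)]
```

A masked material can only output kept/clipped per pixel, which is exactly why this trick reads as partial opacity.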

This same method works to my advantage here. Even though the little dots might technically be considered an artifact, they unexpectedly give my opacity masks a softer, posterized/quantized look that aids in the painted style.

These images show a sphere with the dithered opacity masked by fresnel so it only shows up on the view angle dependent edges of the sphere. Here I've also tried multiplying that dithering by different brush stroke opacity masks to yield slightly different edge qualities. I had tried this same technique before with translucent materials to soften the edges by decreasing their opacity with fresnel as a mask, but the transition was very soft and it tended to look like the object was fading away.

Thinking forward, I would love to create a universal Material Parameter that can be modified by Blueprints across all materials to modify their edge quality, globally or locally. As things become more intense, the brush strokes at the edges might become less calm and more hard-edged and haphazard. I'd also like to do some modification to this edge effect based on how much light is hitting the object. I find in paintings, areas that fall into shadow tend to have lost edges more so than ones in the light. This kind of subtlety might really add depth to the visuals. Another parameter this effect could be dependent on would be distance or even focal point. Paintings use strongly defined edges to emphasize a focal point. Usually foreground elements might also be sharper, and background elements might have more lost edges if they are not the focus. 

I've been trying to find a good way to get the model specific detail I need to define the structure of the boat with more than just tiling textures. A method of sculpting or modeling a high poly and baking it down for additional normal detail doesn't really work well here. Normal maps I have tried to create so far all seem to break the loose painted look I'm going for. I hope to continue investigating non-traditional ways I can apply normal maps to bend light and enhance the painted look rather than diminish it.

This map functions as sort of a hand painted AO mask for the boat. It's not entirely complete as is -- rather than getting carried away painting for hours, I've found it better to test a little bit in one area, bring it in, and preview whether it's getting me the look I need before continuing. You can see here I've only added some of the studs at the front of the hull. On top of this, I can overlay a hand painted curvature map mask to get some edge detail in specific areas. 

Both of these images above show the paint blending to the rust, which follows a similar but more clamped version of the opacity mask for the deteriorating parts of the boat (left.) I think the transition between the rust and paint could be taken much further by adding an additional mask that follows this one but helps define the edge/transitional border between these two materials. Right now, to break up the blending with more than just an opacity blend, the mask on the left is multiplied in the material by a black and white hand painted brush stroke map for an additional textural quality to these edges.

You can see in the image on the left, the boat looks more thin-walled than the one on the right. This is where the duplicated mesh shells with different material instances applied work as cross sections to really help along the illusion of multiple layers of rust and textural depth.

Some more hand painted maps from this week - the first a rust-like mask I'm using to blend between the paint and rust, and the second a varied paint color for the painted boat material (which still needs more work.) 

I've found it tremendously useful so far to compile assets within a Blueprint that can be dragged into the editor with all of the components it needs to be a complete asset. For example, my boat has a few duplicate shells of the original mesh, and the Blueprint keeps them all hooked up to one master asset I can drag around. 

Another benefit is that any global Material Parameter Collection values can be exposed to the Blueprint, so that I can modify material effects by clicking on the boat blueprint and changing values in the details panel. This is how I plan to manage and test the different sequential stages of deterioration that I've allowed for in the material. The transition will happen by blending/lerping the hand-painted mask that designates areas of deterioration to the next hand-painted mask with increased area of deterioration (the next stage or "keyframe" of the transition.) So to control the transition, I'll divide up 0 to 1 into the stages of the sequence, and at the end have one controllable value to slide back and forth between 0 and 1 to make the transition happen. This makes it easy to hook up to anything - distance, triggers, timelines or matinee. 
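The stage math is simple enough to sketch. Here's a rough Python stand-in for that single 0-to-1 control, where plain floats stand in for per-pixel mask samples and the keyframe values are made up:

```python
# Sketch of driving a multi-stage transition with one 0-1 slider:
# the global value picks which pair of adjacent keyframe masks to
# blend and how far between them we are. Floats stand in for the
# per-pixel mask textures; the keyframe values are placeholders.

def lerp(a, b, t):
    return a + (b - a) * t

def staged_blend(keyframes, t):
    """Blend through a list of keyframe values with one 0-1 control."""
    stages = len(keyframes) - 1
    t = min(max(t, 0.0), 1.0)
    scaled = t * stages                 # which stage, and how far in
    i = min(int(scaled), stages - 1)
    local_t = scaled - i
    return lerp(keyframes[i], keyframes[i + 1], local_t)

masks = [0.0, 0.2, 0.5, 1.0]            # 4 "keyframes" -> 3 stages
halfway = staged_blend(masks, 0.5)
```

Hooking the single `t` value to distance, a trigger, a timeline, or Matinee then drives the whole sequence.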



• Splines for primary branches can be dragged out.
• Tangents of splines can be rotated and scaled. 
• User can add up to 10 primary branches that split from the trunk.
• Secondary branches are generated randomly along primary branches with awareness for other branches.
• Implemented random streams so you can modify a random seed to change the tree rather than have it change every time the Blueprint recompiles.

Next steps:
• As the tree branches out to further tiers with smaller branches, more small branches will be included in one piece. Experiment with what these pieces are.
• Continue tweaking parameters on randomly placed branches to get a more natural result.
• Figure out what the loose style should be to generalize detail as it becomes more busy (tiny branches and leaves.) 


• Modeled the boat to prepare for the boat deterioration dream sequence - this is an understatement. The modeling was the easiest part once I figured out how a boat actually works, which has been the majority of the problem solving. They have a very specific and subtle form and curvature that was a bit hard to capture initially until studying a variety of different boats. The shape of the hull is complete, everything else is in progress (railing is a stand in for scale.)

This came a long way from what I started on last week, which looked like a toy boat. I can't stress enough how important it was that I got the form of this right before proceeding. 


Very early tests just to demonstrate the concept of layered meshes to get a painted look with expressive detail (i.e. a rusted boat showing through to an underlying skeleton.)

• Mesh is added to a Blueprint as a component, duplicated multiple times (in this picture, 3x) 

• Each duplicate has a different material instance of the main material applied. 


• The main material uses a hand painted B+W mask with a smooth falloff from black to white. The gradient in the gray values is what allows me to multiply in a tiling, hand painted B+W brush stroke mask, which adds the textural quality to these masked edges.

• Rather than hand painting the mask for the boat in the exact way I want it, I compile the mask in the shader to allow for different levels of clamping on each material instance. This is what allows for the dynamic transitioning of the deterioration of this boat.

• Parameters for UV offset and tiling multiplier to ensure the brush stroke pattern is not repeated on each "layer" of the material instance.

• Each mesh is "pushed" in within the shader along the vertex normals of the mesh. Each layer/material instance is pushed in an increasing amount. This means the layers don't have to be baked into the mesh, so whenever I want to modify the distance between layers, I don't have to tweak the model in Maya and re-export. 
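As a rough illustration of the layered-shell setup described in the points above, here's a Python sketch of per-layer mask clamping and the inward vertex push. The clamp ranges and push distance are placeholder values, not the actual material parameters:

```python
# Layered-shell sketch: one smooth 0-1 mask is re-clamped differently
# per layer (so outer layers erode sooner), and each layer's vertices
# are pushed inward along their normals by an increasing amount.
# All numeric values here are illustrative placeholders.

def remap_clamp(value, low, high):
    """Remap [low, high] to [0, 1] and clamp -- a per-layer threshold."""
    if high == low:
        return 0.0
    return min(max((value - low) / (high - low), 0.0), 1.0)

def push_inward(position, normal, layer_index, step=0.5):
    """Offset a vertex along its (unit) normal, deeper per layer."""
    amount = -step * layer_index
    return tuple(p + n * amount for p, n in zip(position, normal))

# The same painted gray value survives on layer 0 but is clamped
# away on layer 2, which is what exposes the layer underneath:
mask_value = 0.4
layer0 = remap_clamp(mask_value, 0.0, 1.0)
layer2 = remap_clamp(mask_value, 0.5, 1.0)
inner = push_inward((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), layer_index=2)
```

In the material this is the World Position Offset input plus a clamp on the shared mask, parameterized per material instance.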


Concept wax sculpts to use in the creation of tiling textures. Not sure I like the result of these - they might be too sculpted and detailed for the painted style I am going for. 

I had a thought that I could use Substance Designer FX maps to generate procedural tiling brush stroke maps that follow a hand painted flow map for their directionality - a similar approach to this.

This might be somewhere I start to explore, especially to gain better control with the B+W brush stroke maps. Hand painting is great for some things, but hand painting tiling textures can often be a painstaking process, only to find you don't like the result after hours of painting. It is also hard to get a very evenly distributed level of detail when hand painting tiling textures. Naturally you hand craft some areas more than others, and these tend to be the areas that stick out when the texture tiles a couple of times.


Working more on the spline mesh trees this week. My original idea was to be able to drag out spline branches to an end transform point, but I've found that it's not possible to show widgets in editor for the Blueprints that are generated by other Blueprints. So, the end points have to be randomly determined (guided by parameters) instead of being able to freely drag out the end transform widget. This was a bit of a disappointment.

Overall, this is just a bit less wonky than last week. I overhauled the method for placing the branches so that they could have an awareness of how far apart from each other they're being placed. The system divides the trunk spline into sections; each section stores a cumulative array of the locations of the branches generated, and each time a new branch is placed in that section, it checks the array to make sure the branch isn't within a certain distance of another one. 
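Here's a simplified Python sketch of that spacing check, with branch positions reduced to 1D distances along the trunk; the section structure and distance threshold are stand-ins for the Blueprint's actual arrays and parameters:

```python
# Sketch of the branch-spacing system: each trunk section keeps a
# list of branch positions already placed, and a candidate position
# is rejected if it lands within `min_dist` of any existing branch.
# Positions are simplified to 1D distances along the trunk.

def try_place(sections, section_index, position, min_dist):
    """Place a branch in a section unless it crowds an existing one."""
    placed = sections.setdefault(section_index, [])
    if any(abs(position - p) < min_dist for p in placed):
        return False
    placed.append(position)
    return True

sections = {}
ok1 = try_place(sections, 0, 10.0, min_dist=5.0)   # accepted
ok2 = try_place(sections, 0, 12.0, min_dist=5.0)   # too close, rejected
ok3 = try_place(sections, 0, 16.0, min_dist=5.0)   # far enough, accepted
```

In the real tree the check would use full 3D branch root locations, but the accept/reject logic is the same.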

The positioning of the branches and their angles haven't yet been properly parameterized. The branches are just sticking out pretty straight, similar to a pine tree, and the trunk mesh is a cylindrical tube that goes all the way up. 

I took a break from trying to procedurally generate the branches and decided to make an example tree that demonstrates what look I am going for in the end. I figured it would help a lot to have an end result to strive for when trying to get these branches to generate correctly.

I started creating this tree using Zspheres to form the armature of a trunk and the main three branches that the trunk split up into. I sculpted the base trunk at a low subdivision to get an organic feeling to the mesh and get the silhouette I wanted. This is actually a relatively fast process. 

In Unreal, this is loaded into a Blueprint as a static mesh component with my bark material applied, in addition to a duplicated version of the same mesh that has a different material instance of the bark material applied. The material instance pushes out the vertices of that second mesh along the vertex normal using World Position Offset. This creates an exterior shell identical to the original mesh.

The material instance on the additional shell mesh has a hand painted grayscale brush stroke opacity map applied. It uses the same diffuse texture as the original material, but offset and lerped with a noise.


Through this I've been learning a lot about painting tiling textures. The problem with the diffuse texture I had painted last week was that it wasn't getting the right amount of texture and color that I wanted. It also wasn't even enough all around, so some areas stuck out when tiling.

A lot of the time I'll paint all on one layer and not use layers and masking to their fullest. For this texture, I painted rough brush strokes all around the canvas on a new layer, painting and erasing to get the right shape, and then I used those strokes as a mask to paint within. This allowed me to loosen up my brush strokes by "coloring outside of the lines" but at the same time, keep them confined to the borders of the mask. I also achieved some unexpected results with blending options.

Here is another concept painting I did this week to outline some of my ideas for the familiar dream environment I'll be pursuing.

Next week I'll be working on something else-- a more static, visually based asset -- alongside my trees. Working with procedurally generated assets can be a bit frustrating after spending hours troubleshooting and cleaning up for little visual result. 


Procedural spline trees: 12 hours
Texture painting and material creation: 10 hours
Painting: 5 hours
Presentation, design doc, blog: 11 hours



This week I focused on material creation, using the tiling tree bark material as my guinea pig for testing. Developing the look of the materials is something I'm really invested in with this project. I've been trying to find the balance between realistic material definition and painterly stylization. One of the main downfalls of the typical hand-painted style is that the light is permanently painted into the diffuse texture. Sometimes this looks nice, but it doesn't allow for the light to change and can come off as a bit flat. It's been especially important for me to find a method for creating normal maps that are a bit chunkier and planar looking, because that's sort of the way light hits are depicted in painting.

This image to the right is my first test to create a stylized bark normal map. This would be a tiling map used on all the trunks and branches of my trees. 

I created this normal map in Zbrush because I feel like sculpting is what will get me the closest to a more unique, hand-crafted style. However, in this first trial, I struggled a lot with sculpting this detail on a flat plane. My method was using the Clay Buildup brush to build up the surface, then polish it back down with Trim Dynamic and hPolish. It came out much too busy for my taste.



This second image is starting to get the sort of simplified, planar stylization that I want. 
The way I created this one was using the same sort of tools, but I discovered it was much easier to achieve this multi-directional faceted look by sculpting onto a cylinder rather than a flat plane.




This most recent iteration shows the diffuse and normal working together a bit better, resulting in something a bit closer to what I envision as the material style for this project. Again, the normal was done by sculpting in Zbrush, and I painted a texture based on the normal. I'd like to add some more color variation in the strokes of the diffuse color to get that feeling of mixed paint and unexpected hues making up an overall impression of a color. Bark is a naturally rough material, and I'm still trying to figure out the role of the roughness map in my material - how much it might be needed for something like this, or whether it could be painted in a certain way to enhance the look I'm trying to get. For example, maybe a patchy brush stroke roughness map with hard-edged transitions between strokes would provide an interesting painterly variation when light hits it. This is my next step in experimenting.

Aaand finally, here is some questionable progress on the procedural spline tree. It's umm. Getting there.

I've found that the best system for this is to add Blueprints within Blueprints. So, basically, the main Blueprint is the trunk Blueprint, which deforms cylindrical pieces along a spline that is editable per instance, and along that spline, branch Blueprints are placed at a random location, both in distance along the spline and radially around the circumference of the cylindrical trunk.  



Problems with this right now:

1. The branch mesh I'm using as a test is a bit too sculpted on its own, and most of its deformation should be left to the spline. I will also need a few different branch meshes that can be interchanged for the sake of variety.
2. There are currently no parameters controlling how these branches are placed, other than how many of them there should be. So, in the future, there will need to be parameters that control the tendencies of the branches (angle, twisty-ness, length) as well as parameters for intelligently placing them so that some aren't clumped together, or colliding with each other.
3. There's an odd seam between the cylindrical pieces on the trunk that I need to figure out.
4. I want the trunk to be able to split into two large primary branches rather than go all the way up.
5. Still working on my method for placing the additional mesh planes with brush strokes along the spline meshes.


Thinking a bit on how I can make some interesting and unexpected trees. Here are some surreal variations on trees- one ties in industrial elements like pipes, the other forms some sort of pseudo house structure. More paintings defining my spaces are in the works.


Sculpting for normal map stylization: 13 hours
Painting diffuse/roughness maps to develop painterly style: 5 hours
Testing custom lighting in materials and post processes: 8 hours
Sketching and painting concepts: 2 hours
Blog: 30 minutes


This week I've made a lot of progress on the systems that are going to be key in achieving my goals and making my dreamscapes.


This week I started developing a construction script for my customized, procedural trees, which are going to be based around spline components. At its base level, I'm generating the tree in segments of meshes that follow along a spline. I'm also procedurally generating additional meshes/planes on top of the base structure of the tree that use a hand-painted brush stroke texture as an opacity mask. These trees are a really great first test subject for working with principles of procedural construction and achieving a customized look. 

To start, I've just been working with the trunk and haven't created the algorithm for the placement of additional branches along the trunk. Here's a look into the functionality so far.


Nothing too fancy here. Basically, it calculates the bounds of an input mesh and places the next mesh at the distance along the spline that the previous one ends. It uses a while loop to continue doing this as long as the current distance along the spline is less than the total length of the spline. With this, I've used a tube as my input mesh.
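In pseudocode terms (sketched here in Python, with made-up lengths standing in for the mesh bounds and spline length), the loop looks something like this:

```python
# Sketch of the segment-placement loop: measure the mesh's length
# from its bounds, then march along the spline, placing one segment
# per bounds-length until the spline runs out. The lengths below are
# illustrative stand-ins, not real Blueprint values.

def place_segments(spline_length, segment_length):
    """Return the start distance of each segment placed along a spline."""
    placements = []
    distance = 0.0
    while distance < spline_length:
        placements.append(distance)
        distance += segment_length   # next segment starts where this ends
    return placements

starts = place_segments(spline_length=100.0, segment_length=30.0)
```

In the Blueprint each distance would feed GetLocationAtDistanceAlongSpline (or the equivalent) to transform the tube mesh.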






This is also pretty simple. Just rotating the mesh to the current rotation at the distance along the spline. The effect opens a lot of possibilities though.






Something I'll need for a tree is the ability to scale down the meshes as the spline continues. So, the user defines a scale range and a max scale. The base starts at the max scale, and each time a segment is placed, the script calculates how much of the scale range to subtract from the current scale based on the percentage of the current distance along the spline out of the total spline length.

The problem with this, initially, is that it leaves increasingly large gaps between the placement of the meshes as they continue scaling down further along the spline. This is because the placement is determined by the initial bounds of the mesh. This can be fixed by multiplying the Z distance of the bounds by the current scale multiplier. 
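A quick Python sketch of the falloff and the gap fix, with placeholder lengths; the key line is scaling the step between segments by the current scale:

```python
# Sketch of scale falloff along the spline plus the gap fix: scale
# shrinks linearly from max_scale toward (max_scale - scale_range),
# and the step between segments is the mesh length *times the
# current scale*, so shrinking segments stay snug instead of
# drifting apart. All lengths here are illustrative.

def scale_at(distance, total_length, max_scale, scale_range):
    t = distance / total_length
    return max_scale - scale_range * t

def place_scaled(total_length, mesh_length, max_scale, scale_range):
    placements = []
    distance = 0.0
    while distance < total_length:
        s = scale_at(distance, total_length, max_scale, scale_range)
        placements.append((distance, s))
        distance += mesh_length * s   # the fix: scale the step too
    return placements

segs = place_scaled(total_length=100.0, mesh_length=20.0,
                    max_scale=1.0, scale_range=0.5)
```

Without the `* s` in the step, later segments would be placed a full unscaled mesh-length apart, leaving the growing gaps described above.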


Something I want specifically for my trees is a really custom painted look. I'm looking to break up the silhouette of the mesh, which I can do by adding extra meshes that have brush stroke opacity masks on them. So, I need to be able to randomly generate planes around the circumference of a cylinder that follows the spline. I've added a user defined parameter for how many of these planes are generated per segment (the density of the planes.) 

I used the parametric circle formula (x = r·cosθ, y = r·sinθ) to find a random point around the circumference, with the radius determined by the current scale of the cylinder segment.
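That formula is just the standard circle parameterization. A small Python sketch, with a seeded random generator standing in for the Blueprint's random stream:

```python
# Sketch of picking a random point on the trunk's circumference:
# a random angle, with the radius scaled by the current segment's
# scale. The radius and scale values are illustrative placeholders.

import math
import random

def point_on_circumference(radius, scale, rng):
    """Random (x, y) on a circle of radius `radius * scale`."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = radius * scale
    return (r * math.cos(theta), r * math.sin(theta))

rng = random.Random(42)   # seeded, like a Blueprint random stream
x, y = point_on_circumference(radius=10.0, scale=0.5, rng=rng)
```

Every point this produces sits exactly on the scaled circle, so the planes hug the trunk's surface regardless of the angle drawn.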


You'd think this would be easy, but this has been the most difficult step yet. I've run into quite a few hurdles that stumped me for a while, and it's getting pretty close to how it should be. But properly offsetting the location of these planes from the center hasn't been an easy task. I'm still working out the kinks of this (you can see where things get a bit tangly after the spline curves.) I have another method I'm trying underway, so updates on this soon.




Here are some examples of some effects I might be able to achieve when materials are applied to these planes. As of now, I've painted two different black and white masks with brush stroke patterns. For variation, it determines which mask to use by using the sine of the Z world position as the alpha for a lerp between the two textures. The color right now is just the vertex normal of the mesh, for the sake of demonstration and differentiating the color of the planes.

These are pretty early tests and I haven't done too much with them yet. The next steps here are to work on finding a method that balances a brush stroke abstraction with the proper amount of lighting, color, and depth information. A simple color with the brush stroke opacity mask is too abstract for the style I'm going for, so to achieve more material definition there will of course need to be diffuse, normal, and roughness information. I'm also thinking of developing a custom lighting method within the materials -- using a custom light vector from a Blueprint to control the effect of lighting in the material. I spent a bit of time breaking down some of the methods used in Epic's Stylized Demo. Even though the style of this demo is more like a toony illustration than an impressionist look, it works with some of the same concepts I'm trying to execute here and is a good reference for what's been done in this area.


The world canvas fake volume painting system is working! Here are a few pictures demonstrating what it does so far. Hopefully soon I will be documenting a more broken down explanation showing the Blueprints and materials for this.


You can "paint" white on black textures. There are 4 different planes with these dynamic canvas textures at different heights. The height of a traced hit from the player's weapon socket is matched to the closest plane on the Z axis. Then, the hit in X and Y are translated from world space relative to the world canvas Blueprint, to texture space.
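Here's a rough Python sketch of that mapping - pick the nearest plane by height, then remap world X/Y into 0-1 UVs. The plane heights, canvas origin, and size are placeholder values:

```python
# Sketch of the world-to-texture mapping: match the hit's Z to the
# closest canvas plane, then remap the hit's X/Y from the canvas
# Blueprint's world bounds into 0-1 UV space. All coordinates here
# are illustrative placeholders.

def closest_plane(hit_z, plane_heights):
    """Index of the canvas plane nearest to the hit's height."""
    return min(range(len(plane_heights)),
               key=lambda i: abs(plane_heights[i] - hit_z))

def world_to_uv(hit_x, hit_y, origin, size):
    """Remap world X/Y relative to the canvas origin into 0-1 UVs."""
    u = (hit_x - origin[0]) / size
    v = (hit_y - origin[1]) / size
    return (min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0))

plane = closest_plane(130.0, plane_heights=[0.0, 100.0, 200.0, 300.0])
uv = world_to_uv(250.0, 750.0, origin=(0.0, 500.0), size=1000.0)
```

The resulting UV is where the white stamp gets drawn on that plane's dynamic texture.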



In order to draw to this persistently -- keeping everything you've already painted rather than having a single white circle stamp follow your reticle around the texture and dropping the last stroke -- the system has to store where the previous stamps were drawn. These planes are arranged in two 2x2 grids, and two Scene Capture 2D cameras capture the compiled planes orthographically to two render target textures. The materials applied to each of these grids reference each other's render target textures, passing them back and forth (essentially recursively.) The only difference between the two materials is that one adds the current frame to the last frame, whereas the other only references the last frame. With the two cameras alternating updates every other frame, this is what makes it possible to keep the previous frame and add the next one.
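The ping-pong idea can be sketched with two plain floats standing in for the two render targets; this illustrates the accumulation pattern only, not the actual material/camera setup:

```python
# Sketch of two-render-target "ping-pong" accumulation: each frame,
# the write target becomes (read target + new stamp), then the
# targets swap roles, so every stamp persists. Single floats stand
# in for whole textures here.

def accumulate(stamps):
    """Fold per-frame stamp values into a persistent canvas value."""
    buffers = [0.0, 0.0]
    write = 0
    for stamp in stamps:
        read = 1 - write
        buffers[write] = buffers[read] + stamp  # keep old + add new
        write = read                            # swap roles next frame
    return buffers[1 - write]                   # last-written buffer

canvas = accumulate([1.0, 0.0, 2.0, 0.0, 0.5])
```

Reading from one target while writing the other is what avoids a render target sampling itself in the same frame.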


The textures for each cross section plane are laid out in a 2 x 2 square and captured to a render target texture using a Scene Capture 2D camera. Finally, this compiled texture is fed into a shader that interpolates the value in between these planes (using a flipbook texture that shifts the UVs to the corresponding texture for its depth.) This can be used as a mask within any material. In my case, every material should incorporate this to be able to be painted. I visualized the fake paintable volume with a series of stacked cross sectional planes. 
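The depth interpolation reduces to a 1D lerp between the two nearest slices. A Python sketch with made-up slice values:

```python
# Sketch of interpolating between stacked cross-section planes: a
# world height maps to a fractional "slice index", the two
# surrounding slices are sampled, and the fraction blends between
# them -- the flipbook lookup reduced to a 1D list of slice values.

def sample_volume(slices, height, min_h, max_h):
    """Interpolate a mask value between stacked slice values."""
    t = (height - min_h) / (max_h - min_h)
    t = min(max(t, 0.0), 1.0) * (len(slices) - 1)
    i = min(int(t), len(slices) - 2)
    frac = t - i
    return slices[i] * (1.0 - frac) + slices[i + 1] * frac

slices = [0.0, 1.0, 0.5, 0.0]   # painted value at each plane height
mid = sample_volume(slices, height=50.0, min_h=0.0, max_h=300.0)
```

In the material, the flipbook node does the index-to-UV shift and a lerp handles the fractional blend between the two sampled slices.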


Now imagine this smooth opacity mask could be broken up and made into a more interesting visual transition if it were multiplied with a brush stroke pattern - maybe even an animating one to make the border even more undulating and ambiguous. Additionally, instead of having these planes represent the entire world space, I'd like to set it up to move based on the current player location -- if you reach the bounds of that area's canvas, the planes will move and your location will be the new center of these bounds. In this way, I can get a better texture resolution per area while keeping it less resource intensive.


Another quick concept I did this week. Trying to develop the idea of this ambiguous cave/rock formation structure in a dark, sandy, beach like setting. These rocks could work on a similar system to my trees.

Here's another concept of how I might incorporate some man-made structures into a dreamy landscape. I found this picture I had taken a while back of an interesting house with a door on the second floor and a bridge running from it into a hillside. I thought this was such an interesting and peculiar visual, and it reminded me of a scene in a dream I once had. I'd love to break up this structure into parts and have those scattered across the landscape as well.


It was recommended to me to check out this surreal game LSD: Dream Emulator, where the game is based on the creator's dream journal. Although it was made in 1998, there are quite a few things to be said about this game and what it was able to achieve with such ancient 3D graphics. It's surprisingly dynamic for its time - the way you navigate through these dream environments is not set in stone and changes based on what path you take. I'm not sure how in depth and variable the gameplay is, but it certainly feels pretty organic. The soundtrack is also dictated by numerous sets of patterns played in variable tones. For a game of its time, it ventures into some pretty new territory.

My project will probably be quite a bit less psychedelic and disturbing, since I'm going more for meditative and awe-inspiring. However, both this game and my project are in a format centered around environmental exploration. The main mechanism in this game is that the player is linked or transported to different environments by bumping into objects. My idea is similar, but takes this to the next level by showing this environment transform into another one around you. I also find it interesting how some environments seem recurring but not the same - they share the essence of that environment, but are slightly different.  This is something I'm also looking to achieve.

Anyway, it's maybe one of the weirdest things I've seen, but also pretty awesome. I think you'll just have to scrub through this yourself:


Tree system: 16 
World canvas system: 5
Material tests: 
Research: 1
Painting: 2 
Blogging/work breakdown and blog response: 
Some additional hours picking apart LSD Dream Emulator.


This is a bit long, but for anyone who wants to read, I want to express how important I feel it is not to ignore the technical component of your work. Knowing your tools well so that they don't undermine your design, and experimenting with them to solve new problems, is what allows you to produce something polished beyond what has been done before, in a way that is uniquely yours. This element of technical ability and problem solving is not separate from your visual design; it's what makes your visual design. Downplaying its importance is what I think leads to so many unrealized visions for student projects. 

It's fair to brush aside assumed "steps" of the process. Of course you're going to UV, texture, etcetera, especially if your project is showcasing your work in that area of the pipeline, just like it's assumed that you'd probably put a brush to canvas if you're going to paint, or cut a piece of wood to make a wooden chair. But in a field with so many varieties of tools and processes that can be combined in ways that haven't even been thought of yet, your tools and how you use them to achieve your vision is the design process. 

Working with 3D graphics inherently involves quite a bit of technical problem solving and should be treated as such. If, for example, your lighting is bad in your final project, it isn't solely because your vision for it wasn't good enough. You could have spent time doing concept art with beautiful lighting creating exactly the mood you wanted (which is absolutely a great method of discovering what you want for a project), but if you don't know enough about how lighting works in 3D and what it takes to make it work the way you envision, your end result is going to fall short of your expectations. In 3D graphics, there's a huge barrier between our taste and what we are actually able to do with our knowledge of the tools.

A lot of 3D graphics, especially in the gaming industry, rely heavily upon being technically sound. There isn't any faking or hiding obvious UV seams, or strategically picking camera angles to cover up a rigging problem. You can't really fake it until you make it. These are fundamentals, the base level to make your ideas come to life in this medium and not be completely undermined by technical pitfalls you didn't know how to overcome. As designers rather than artists, what we produce isn't purely visual, it also is functional and works to serve a purpose, which is why this is so crucial.

This isn't purely about embracing or learning new technology or software. Using the latest software isn't what makes you technically savvy. Your knowledge of a new program makes your work relevant but doesn't necessarily make you skilled, especially when you are learning a program at face value or using a program that is built around automating a process. In fact, learning tools that take care of that much of the process for you hurts your design more than it helps by making it less original and less yours. Using a toon shader out of the box will give your work a cartoony look, and using Ddo might give your work a grungy, worn look -- but it's not what makes it your work. You've let the tool determine your design. Understanding your tools at a deeper level is what will equip you to make them work for your needs. It's how you combine your tools to come up with new solutions that makes you an inventive problem-solver, and it's a valuable, fundamental skill in a creative industry.

Tutorials are a valid starting point to find out what's out there and what's been done, but with every tutorial, it's important to consider what your specific needs are for your project and what you could bring to the table to improve what has already been done. The point is, this isn't something you can just learn from a tutorial. Learning software in terms of where the buttons and menus are is easy, but understanding what they're doing and how a software's functionality can be used to meet your goals for a project is an entirely different skill.

I also hope everyone remembers that your senior thesis projects are for yourself, not for anyone else. You're working on this because it's the kind of work you want to be doing; it appeals to you and you're proud of it; it demonstrates something you've put a lot of time and thought into in order to showcase your work; it's hopefully going to take you in a direction that lets you keep making stuff you're passionate about and can make a living by. This isn't just to fulfill a requirement to graduate (I hope), and although it's always worth listening to critiques, take them in stride and be constantly balancing this with your personal aspirations for your project. Don't be misled in a direction that's going to result in you making a project that is an amalgamation of everything everyone else thought was going to be important to make your project successful.


Almost out of the brainstorming stage. I had been overthinking my idea over the past week, and I remembered I'm not making a game and that I really want to focus on just a few things: developing a customized painterly style, developing a system to dynamically and seamlessly transition between two distinctly different dream environments, and using Blueprints to procedurally generate these environments as you explore. I'm not really concerned with what the structure is, as long as I can successfully achieve these goals and showcase how I came to that visual output in a tech demo style playthrough.

I originally had been thinking of creating a house, but that started to feel constricting, too based in the development of the characterization of who lived there, and not true to the kinds of freeform spaces I wanted to design. 

Instead, I'm going to be creating two surreal landscapes. All sorts of unexpected surprises can be scattered across landscapes as you explore-- a house, a partial house, manmade objects -- this better reflects the disorganized nature of our dreams.

This week I found a cinematic done for the game Ryse: Son of Rome, and it really inspired me to focus on achieving a painted style in real-time 3D. This cinematic is pre-rendered, but some of the techniques used to add a brush stroke quality could apply. Here are some stills, concept art, and a breakdown of how it was made.

Since I often spend too much time painting one thing and realize a few hours later I've turned up with something I didn't even feel passionate about, I forced myself into doing some very quick speed painting exercises to more quickly explore different solutions for the type of mood, color palette, and environment I might want to create.

I'm starting to like the idea of one environment being more warm, inviting, sunny, and foliage based, while the other would have more rock formations, sand, a cooler color palette, and feel more foreboding.

I also did a quick painting to illustrate the transition between two dream spaces at their border.

This week, I successfully got the multiple cross section canvas system working, so now it matches your hit to the nearest Z height and writes to the corresponding texture. The next step is to get it set up so that I can interpolate between these textures to be able to unmask fake "volumes" in shaders. I might edit this and add pictures of this working in the near future.

I've also been doing a lot of testing to develop this painted style, with broken up painted edges and the correct amount of material definition -- no results yet, but this next week I should be moving into more production, less pre-production. Now that I have developed more of a direction in terms of what I'll be creating and my pitch video is complete, I'm excited to start developing my first proof of concepts for my systems and assets.

Pitch video - planning, script, rehearsing, recording, and editing: 6 hours
Painting, sketching, brainstorming: 8 hours
Blogging/blog response: 1 hour
Testing style and shader development, world canvas system: 6 hours


Over the past week I've been thinking a lot about the format of the experience I want to create, which spaces I'd like to build, and what possibilities for dream-like transitions those might provide.

I have decided that the dreamscape's physical manifestation will take the form of clouds. These clouds can function as a pervasive visual and transitional element. Clouds are a common symbol of dreams, and for good reason. Just like clouds, dreams are nebulous, intangible, and evoke powerful visceral reactions within us. A metaphor I'm working with is that if our mind were a water cycle, our brain would produce thought condensation, which would evaporate into dream clouds. Then, our heads would be literally and figuratively in the clouds. Working with this metaphor, I could make the clouds become denser in areas where dreams begin to invade reality.

So far, I have done a painting conceptualizing what those clouds might look like, in color and atmosphere. I'm going with a color palette that reminds me of early morning and sunrise. Early morning reminds me of dreaming because I associate it with awakening from a slumber. Both early morning and dreaming feel very solitary and introspective.

I'd like for the main environment to be a two-floor apartment -- I find we often dream about places that are personal to us or very familiar, and so a home setting feels fitting. At the same time, places we know in our dreams are often amorphous, inexact versions of the place we know in real life that maintain a generalized essence of that space, but with something slightly off. I might be able to create this sense of a disorienting space by randomly switching the placement and types of objects that are in the room each time you come back to it, or even changing the layout of the house. The player might also encounter new and unexpected anomalies in the room -- objects that are completely out of place.

As you explore the house, I want to emulate the nature of the illogical sequences in our dreams, since nonsensical and peculiar happenings are a frequent occurrence. Dreams are disorganized and subject to fleeting thoughts of the mind. For this reason, I'd like the player to explore the house and experience lapses into dream sequences as they look around, seeing morphing objects or sometimes drifting and transitioning entirely to another area for a moment.

While searching for some inspiration, I came across this painting which conveys really well the type of cloud-like transition I want to try to create. 








I've been doing some tests in Unreal Engine 4 with spline meshes. Not much to show yet, but I feel like these will really enable me to create some neat morphing assets, either by deformation along a curve or extension of modular pieces along a curve. Here I've made the beginnings of a Blueprint for a fence type asset, where vertical post meshes are placed at a user-designated interval along the spline. You can drag out the spline points differently per instance of this Blueprint, and the placement of the meshes follows the curve. In picture 3, you can also see I have made it so the user can choose the construction priority of the fence - they can choose whether they want to set the length of the span between the posts, or by the number of posts that they want. In picture 4, I'm demonstrating an "adapt to surface" option I have added, where the placement of the meshes will snap to a surface below it. This is ideal for a fence on a hillside. It opens possibilities for animating the points on this curve as well, which is great for surreal ambient animations.
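Since the placement logic lives in a Blueprint construction script, there's no code to paste here, but the underlying math is straightforward. Below is a rough Python sketch (not my actual Blueprint; the function name and parameters are made up for illustration) of placing posts along a polyline approximation of the spline, with the same "by span length or by post count" construction-priority toggle:

```python
import math

def place_posts(points, spacing=None, count=None):
    """Place fence posts along a polyline approximation of a spline.
    Either `spacing` (distance between posts) or `count` (total number
    of posts) drives the layout."""
    # Cumulative arc length at each polyline vertex.
    lengths = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        lengths.append(lengths[-1] + math.hypot(x1 - x0, y1 - y0))
    total = lengths[-1]

    # Construction priority: fixed post count, or fixed span length.
    if count is not None:
        distances = [0.0] if count == 1 else [total * i / (count - 1) for i in range(count)]
    else:
        distances = [spacing * i for i in range(int(total // spacing) + 1)]

    # Walk the polyline and interpolate a position for each distance.
    posts = []
    seg = 0
    for d in distances:
        while seg < len(lengths) - 2 and lengths[seg + 1] < d:
            seg += 1
        seg_len = lengths[seg + 1] - lengths[seg]
        t = 0.0 if seg_len == 0 else (d - lengths[seg]) / seg_len
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        posts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return posts
```

The "adapt to surface" option would then be an extra step after this: a downward trace from each post position to snap it to the ground below.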


I am using UE4's new Canvas Render Target Blueprint to dynamically write to a texture that matches the size of the world. The player can draw in texture space wherever they aim their reticle on the world space plane. By drawing with white on a black texture, the player is effectively painting a mask in world space. I can create a fake volume by stacking these cross-sectional planes in Z space and interpolating between them. This mask could be fed into the shaders of all objects in an environment to "unmask" them.
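As a sanity check for myself, here's a small Python sketch of the lookup math I have in mind (the function names are my own, not engine API): mapping a world position into the mask's UV space, and blending between two stacked cross-section masks to fake a volume.

```python
def world_to_uv(x, y, world_min, world_size):
    """Map a world-space XY position into 0-1 texture space, assuming
    the mask texture spans the whole playable area."""
    u = (x - world_min[0]) / world_size[0]
    v = (y - world_min[1]) / world_size[1]
    return u, v

def sample_volume(masks, z, z_min, z_step, sample):
    """Fake-volume lookup: pick the two cross-section masks that
    bracket height z and blend their samples linearly."""
    if len(masks) == 1:
        return sample(masks[0])
    f = (z - z_min) / z_step              # fractional slice index
    i = max(0, min(len(masks) - 2, int(f)))
    t = min(max(f - i, 0.0), 1.0)         # blend weight between slices
    return sample(masks[i]) * (1 - t) + sample(masks[i + 1]) * t
```

In the actual material this blend would happen per pixel between the two nearest cross-section textures, but the interpolation is the same idea.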


I've been investigating different methods of creating clouds that would be on the ground plane. The best solution I've found for imitating the shifting form of clouds is to use Cascade, the particle system in Unreal. The parameters of the particles can also be changed within Blueprints, which opens a lot of possibility for some interesting transitions using the clouds.

This method of fake volumetric color blending seems to give great control over determining the colors of the clouds in a cube map style (what color is it at the bottom? top? back? front? right? left?) Not much has been done in the way of getting clouds to be properly lit and self-shadowed. I might be able to do this by hooking up these volumetric colors to the angle of a custom light vector.
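To clarify what I mean by cube-map-style color control, here's a quick Python sketch (purely illustrative, not shader code) that blends six axis colors by how strongly a unit direction points along each axis:

```python
def directional_color(n, colors):
    """Blend six axis colors by how strongly unit direction n points
    along each axis (a crude cube-map-style lookup).
    colors maps '+x', '-x', '+y', '-y', '+z', '-z' to (r, g, b) tuples."""
    weights = {
        '+x': max(n[0], 0.0), '-x': max(-n[0], 0.0),
        '+y': max(n[1], 0.0), '-y': max(-n[1], 0.0),
        '+z': max(n[2], 0.0), '-z': max(-n[2], 0.0),
    }
    total = sum(weights.values()) or 1.0
    # Weighted average of the six colors, per channel.
    return tuple(
        sum(weights[k] * colors[k][c] for k in weights) / total
        for c in range(3)
    )
```

Swapping the direction `n` for a custom light vector is the sort of thing I'm imagining for the fake lighting.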


Testing and experimentation: 8 hours
Blogging and blog feedback: 2 hours
Pitch video planning: 2 hours
Painting and sketching: 7 hours
Research: 3 hours

This week I feel like I am beginning to visualize and solidify what this project is going to be. More finished sketches for the style of the home to come soon, as well as some early testing with clouds.


My name is Deanna Giovinazzo. I’m a 4th year student in the 3D Digital Graphics program at Rochester Institute of Technology, and this is my production blog for my senior thesis, tentatively titled Morphscape.


Morphscape will be a real-time 3D project built in Unreal Engine 4. In this world, reality has been invaded by an alternate metaphysical dreamscape, which is responsible for causing dreams. This will be structured in an exploratory, interactive “playground” gameplay style, featuring a system that enables the player to explore, interact with their environment, and experience this world transform seamlessly and fluidly around them. 


I am hesitant to call my thesis project a game, since stretching myself thin across the entire pipeline of what is required to make a game is not what I feel will enable me to effectively output the level of visual quality I would like to see in this project in a deceivingly long (but actually very short) timeline.

As a 3D designer, my focus is on creating visuals to convey a mood that invokes a visceral reaction in the player. I don’t consider myself to be a gameplay designer, and so I am not necessarily going to be focusing on developing hard objectives that fuel the player to reach a finish line.

That being said, I fully intend for this to be a playable demonstration of my work, a contained piece showcasing the potential of the underlying technical system to realize a dynamic visual experience.


Working on the visual end of real-time 3D, I am primarily concerned with the look and feel of an environment. The impression it gives through its spatial quality, attention to detail in its construction, purposeful set dressing, and how light interacts with the material definition of surfaces is a lot of what inspires me.

Atmosphere and light with a hazy, ethereal quality, as well as a very neutral color palette are what I envision for this project. When I think of my dreams, they are very solitary. They remind me of early morning, a quiet feeling like you are the only one in the world. This, mixed with that hint of unsettling, surreal, something-is-not-quite right, is a mood I’m looking to express.

Additionally, a sense of aesthetic stylization is a goal of mine. I’d like to use my thesis project as an opportunity to explore my love for an illustrative quality. This is something that I can identify in many paintings, but I haven’t seen achieved successfully in real-time 3D yet. I find what I enjoy in a painting is an evident hand of the artist, the quality of brush strokes with contrasting soft and hard edges, lost edges, and implied detail through deliberate strokes and careful brush economy. When this is attempted in 3D, I often see it done with very flat hand-painted textures that don’t feel well integrated into the form of the model. Although this in itself is a look that is sometimes appealing, it often looks more toony and doesn’t have the material definition needed to make something feel realistic. This is an area I’d be excited to make progress in.

Erring on the technical end as well, I’d like to develop a living world that functions in a surprising or exciting way and isn’t just static — something that piques your curiosity as you move throughout. I envision things happening as you look at them, morphing or transitioning in front of you.


Here are some paintings with expressive brush strokes I am looking to for defining my style.

Kentucky Route Zero is inspiring in its color, light, atmosphere, and interesting silhouettes. It also has a very serene, dream-like mood. Although it is more two-dimensional and graphical in style than what I am looking to create, it’s a great quality benchmark. Its sense of mystery and peculiarity is also something I strive for.

Botanicula is also a two-dimensional game, but I can take many cues from its whimsical forms, color palettes, and soft, glowing, dreamy light. Additionally, the gameplay is all about clicking around, interacting with the environment and finding surprises, which really appeals to me.

The Unfinished Swan is a great execution of a unique game mechanism revolving around environmental discovery (splattering paint to reveal and navigate the world around you.)

Mind: Path to Thalamus does a nice job of establishing harmonious color palettes and atmospheric quality throughout an area. I enjoy how the environments are sparse and look as if fabricated from scattered thoughts, and the cohesion of these spaces through similar forms works well.

Brothers: A Tale of Two Sons  achieves a slightly painted look in its textures, but still maintains a sense of well-defined, realistic materials. This is something I aspire to do in my work.

Journey is a prime benchmark for beautifully directed color and light, strong silhouettes, and gameplay enhanced by taking in the environment around you. It is also a very quiet and meditative game.


Gathering reference and inspiration: 3 hours

  • Looking at games, paintings, and real-life locations I might want to build, figuring out what appeals to me for this project

Research: 2 hours

  • Generally keeping up to date with new and different workflows to keep an eye out for something I might want to incorporate into my project.
  • Research into dream psychology and surrealism

Planning: 2 hours 

  • Planning out timeline

Brainstorming: 3 hours

Painting/sketching: 8 hours

Testing: 6 hours

  • Beginning of testing a dynamic texture that can be drawn on in-game with Unreal Engine’s canvas drawing functions. I want this to serve as a mask for the entire area of the world I build, and be able to draw to it dynamically.

Blogging: 3 hours

  • Sometimes writing and compiling stuff takes longer than actually doing it.

Here is my semester long timeline. This is subject to change or become more specific as I flesh out more about my project.

Here is my weekly schedule. Right now I have marked out hours that are open for working on my thesis. Realistically, I will not be working all of this time, but I'll be blocking out the hours I actually work on a weekly basis and posting that here.

Next week, I should have more solid progress, including some finished sketches, style tests, color palettes, specific references, and some early testing.


In the past couple of weeks I have been working on a few side projects dealing with augmented reality. The past summer I was on a project that gave me some experience working with recent developments in augmented reality for smartphones. 

Recently, a couple of other students and I were recruited by one of my professors to work on an augmented reality project that would debut at ImagineRIT. The (very important) people attending the ImagineRIT VIP lunch would receive a badge with "special features" that could be unlocked by downloading an AR app and pointing a smartphone or tablet camera at the badge. The special feature of the badge was that, once the image on the badge was recognized by the augmented reality app, it would trigger a video to pop up and play, tracked to the badge in real life.

I was responsible for coming up with the 2D design of the badge that would serve as the trigger image, as well as one of the videos that would play when triggered.

The VIP badge design I created, a pawprint formed by the iconic ImagineRIT bubbles from the original ImagineRIT logo design.

The final, printed version of the ImagineRIT VIP badge.

Our first draft of our idea for the augmented reality portion of this badge was to incorporate 3D assets and recreate the RIT tiger statue and the wall it stands on. This would allow people to pivot around a virtual 3D object using their device.

For true-to-life accuracy, I used Autodesk's 123D Catch to capture the RIT tiger statue in 3D. I was amazed with the results and did a bit of work afterwards to ready it for mobile (cleaning up the mesh, decimating the polygons, and UVing.) I was going to go on to create my own diffuse, specular, and normal maps for the tiger, but testing led me to abandon this endeavor. Our augmented reality app was a skinned version of Aurasma, so we were working through Aurasma's servers, and the 3D model did not track well with the already printed badge design.

Here are some screenshots of the process. 123D Catch gives you an obj that is very dense in polygons and often results in some of the background environment being captured as well. It comes with the real life photo texture applied (although to very patchy UVs.) It does an outstanding job of capturing the form of the model, it just needs a bit of cleaning and optimization for usability.

I chose to reduce the polygons using Decimation Master in Zbrush because of its ability to retain the form in the parts that matter and lower polygons significantly in areas that don't. Overall, I think it does a great job keeping the silhouette of the model. I figured I could afford to lower the polygon count to about 1,000. With the normal map applied, it is barely distinguishable, especially from the distance and size you would view it from on a phone.

With less than a week left on the project, two other students and I designed video animations that would play when the front and back of the badge were triggered.

Here is the final animation I created in After Effects.


In January, I went on a trip to Italy with a group of students in the Honors program of my art college (College of Imaging Arts and Sciences at RIT.) We spent two weeks there and visited Rome, Florence, Naples, Herculaneum, Pompeii, Sorrento, and Siena. It was such an amazing time and I am so glad I had the privilege of traveling there, especially with such a great group!

In our time there, we had to come up with ideas for independent projects related to Italy and our experiences there. My idea was to take Autodesk 123D Catch captures of various sculptures I saw in Italy. Originally, I was going to create a virtual museum environment to present them in. Throughout the semester, my project idea evolved. Having worked with augmented reality quite a bit in the past year, I decided to bring the resulting 3D models to smartphones and tablets. 

I used Metaio's Creator software to bring in my finalized, cleaned up and reduced models. Creator is essentially a drag and drop program that allows you to bring in any sort of media (3D model, picture, or video). You can pair it with the image that would trigger the AR media to appear, and upload it to a channel on Metaio's server. This is where the image recognition calculations are done and where it is made accessible via QR code on their augmented reality browser app called Junaio. Once you download Junaio for smartphone or tablet on either Android or iOS, you can scan the QR code, point your camera at the trigger image, and see the 3D models I created pop up, tracked to the trigger image on your phone's camera display. 

I thought this was a great medium to present these models in because it is easily accessible to people and allows them to pivot around the model as if it were actually there in real life, emphasizing the three dimensionality of the models.

You can try out the book, or digital PDF, of the trigger images I created that allow you to scan the QR codes with your device and see the classical Italian sculptures! Instructions are included.


With the exciting release of Unreal Engine 4, I've spent the last two weeks making over what I've done with my crane game so far in UDK in Unreal Engine 4. In that time, I've surprisingly accomplished much more than I've gotten done all semester, and the project is starting to take form as a game!

Since this is such a short project, I've needed to allocate my time wisely towards the features that actually make the game a crane game. Until now, I've spent a lot of time figuring out things in order and getting stuck on seemingly large problems that are actually minor in the grand scheme. A strategy I've been trying is hopping around a bit more and getting the less important things to a good enough level to call complete, leaving more time to focus on the highest priority features.

Although I enjoyed learning UnrealScript and programming, there was a lot of initial work to set up the coding environment and understand all of the functions, how to use them, which classes to extend, how to communicate between the classes, etcetera. The node-based visual programming system that the new engine offers, called Blueprints, is much easier to understand, get running, and test/debug (you are able to compile the graphs in engine). The architecture makes it easy to search for nodes contextually, create and access variables between classes, and reference objects that are created in engine. Essentially, it's an amped up Kismet that works with classes and not just the level's events. Overall, it makes a one semester game a bit more manageable. 

I've had a lot of success working with Blueprints so far. I remade the bounding box function to dynamically take in a bounding box mesh, get the bounding box min and max coordinates for the X and Y axes, and set the limits of the player movement to not exceed these coordinates. This was much easier to do with Blueprints than it was with UnrealScript, because it was much easier to pass the coordinate variables between the classes.
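The clamp itself is just a min/max on each axis. A minimal Python equivalent of what the graph does (names are mine, not the Blueprint's):

```python
def clamp_to_bounds(pos, box_min, box_max):
    """Clamp the mover's XY position to the bounding-box mesh extents,
    mirroring the per-tick limit check in the Blueprint."""
    x = min(max(pos[0], box_min[0]), box_max[0])
    y = min(max(pos[1], box_min[1]), box_max[1])
    return (x, y)
```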

I've implemented the idea I was trying to figure out with UnrealScript, which is to constrain the claw to the mover box using a physics gun/physics handle. The content examples provided with Unreal Engine 4 are an excellent resource, so my Blueprint is just a modified version of the physics gun that the character uses in the Physics content example. At the beginning of the level, I set a timer to immediately perform a function that traces for and grabs the claw.

Key bindings are much more easily modified and set in engine as well. Previously, bindings were set by altering the DefaultInput ini file. Now there are input nodes that let you connect them to what happens when they are triggered or released. I've bound the raising and dropping of the claw to the scroll wheel of the mouse. 

Update physics handle is a function that updates the location of the physics handle every tick, and also whenever the mouse wheel is scrolled up, down, or released.

The drop and raise are essentially just changing the Z coordinate of the physics handle's location, but keeping it in sync and updated every tick with the XY movement of the mover box. The location of the physics handle is calculated first every tick, and then the drop and raise functions are performed afterwards and overwrite the Z coordinate of the physics handle. 
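In pseudocode terms, the per-tick handle update looks something like the Python sketch below (the names are made up, and the travel limits are my own assumption, not something I've shown in the Blueprint yet):

```python
def update_handle_location(mover_xy, current_z, scroll, step, z_min, z_max):
    """Per-tick physics-handle target: follow the mover box in XY,
    then overwrite Z based on scroll-wheel input (+1 raise, -1 drop)."""
    z = current_z + scroll * step
    z = min(max(z, z_min), z_max)  # assumed clamp to keep the claw in its travel range
    return (mover_xy[0], mover_xy[1], z)
```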

The camera is currently working almost completely to my liking - invert horizontal movement and look at rotation. Initially, I had tried to make the Camera component within the Character blueprint, but I realized that if I did this, the camera would always be attached at an offset distance to the player. So, I removed all Camera components from the character Blueprint and made the Camera in a Blueprint of its own and dragged it into the level. Then, I set the view target to the Camera component within my Camera Blueprint upon Event Begin Play.

Setting the default camera to the Camera component in my CGCamera Blueprint.

Construction of the camera - the world location is set to the vector variable I created, so that the initial location can easily be adjusted and not hard coded.

Updating the Camera's rotation every Tick to a combination of the Yaw from the look at player rotator and the Yaw and Roll from a predefined variable.

The camera look at rotation is updated every Tick. It takes the current location of the Camera and the current location of the player, and calculates the rotator needed to aim the start object at the end object (the Find Look at Rotation node.) I found this node by looking at Unreal Engine's Blueprint Content Example of a spotlight that follows the player. I then set the rotation of the camera to a rotator that combines the Yaw from the look at rotation with the initial rotation variable I set to have a defined, unchanging Pitch and Roll value. (The Pitch is always set to aim downward at a predefined angle that does not move, and Roll is always 0.)
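The math behind this is just a yaw from atan2 merged with a fixed pitch and roll. Here's a Python sketch of the rotator I'm building (illustrative only; Unreal's rotator conventions and Find Look at Rotation handle this internally):

```python
import math

def camera_rotation(cam_pos, player_pos, fixed_pitch, fixed_roll=0.0):
    """Combine the yaw of a look-at rotation toward the player with a
    predefined pitch and roll, like feeding only the Yaw output of
    Find Look at Rotation into a Make Rotator node."""
    dx = player_pos[0] - cam_pos[0]
    dy = player_pos[1] - cam_pos[1]
    yaw = math.degrees(math.atan2(dy, dx))  # yaw that aims at the player
    return (fixed_pitch, yaw, fixed_roll)   # (pitch, yaw, roll)
```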

Inverse movement of the camera - adds a step amount to move right every time the player movement input is < 0 (left), and adds a negative step amount to move left every time the player movement input is > 0.

The inverse movement of the Camera is calculated in the Character Blueprint, because it was the only way I found to update the location when the player had an axis input (MoveRight, MoveForward.) These axis input events reference the class they are used in, so when this logic lived in the Camera Blueprint, it was checking for input to the Camera instead of the player.

This graph checks upon MoveRight input if the player is moving right or left ( > 0 or < 0 ). If moving left (< 0), it adds a step predefined in a variable to the Y axis, making the camera go right. If moving right (> 0) it adds a negated version of the step to the Y axis, making the camera go left.
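That branch reduces to a sign flip on the input. The equivalent logic in Python (illustrative only, not the actual graph):

```python
def inverse_camera_step(axis_input, step):
    """Return the camera's Y offset for a MoveRight axis value:
    left input (< 0) steps the camera right, right input (> 0) steps left."""
    if axis_input < 0:
        return step
    if axis_input > 0:
        return -step
    return 0.0
```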

The string/cable between the mover box and the claw was made easier by UE4's new Cable Actor, which fills exactly the role I needed. The cable is merely a detail between the mover box and the claw, which are attached to each other using a physics handle. The Cable Actor allows an attachment to an object (the mover box) and an end location (updates with the movement of the claw). It also enables setting the width of the cable, the material, the length, and more parameters.

The Cable Component is added in the Construction Script for CGCharacter so that it is created once on the construction of the class, and various parameters are initialized here. I added the cable component to the claw, so that its end is attached consistently relative to the location of the claw. I couldn't get the cable width to be quite small enough for my needs, so I set the material to a custom material that pushes the normals in, making it appear thinner.

This is the graph showing the end location of the cable updating every Tick. It is consistently offset by the Cable Offset in my case, since the pivot of the mover box I am attaching it to is at the top of the mesh, and I need the cable end to be pinned at the bottom of the mesh.

I created a carpet texture using SVGs in Substance Designer. I could have just as easily done this in Photoshop, but doing this all in Substance Designer allowed me to hook it up to a tile generator node, and tile my design randomly a few times, then overlay a finer noise for the carpet texture, all while still being able to easily modify the shapes, sizes, and colors I chose. My goal was to make an appropriately cheesy carpet pattern, reminiscent of arcades and bowling alleys. In a bit more time I might add some popcorn kernels and crumbs embedded into the carpet, as well as subtle hints of soda stains. For now, it sets the mood to my liking.

Arcade carpet texture diffuse.

The reference photo I took and based my carpet design off of.

I've been working with Allegorithmic's newly released Substance Painter for texturing some assets. UE4 and other engines are beginning to move towards physically based shaders. The maps I primarily need to create for assets are color, specular, normal, height, and roughness. Substance Painter allows for painting and previewing these channels real-time on a 3D mesh. 

So far, I've retopologized, UVed, baked normal maps for, and started to paint Mr. Snuggle Wuggle using Substance Painter. I am a bit behind on making prizes to populate the machine, but that's not a huge concern. One soft body prize will be good enough to use in testing my claw grab.

The initial high-poly, sculpted Snuggle. Some 600k polygons.

Low-poly Snuggle. Retopologized using 3D Coat.

UV maps, unwrapped using 3D Coat.

Still a lot more dirt and sadness to paint here, but just a preview of what I've been working on. I've been painting on the high poly version, since Substance Painter projects what you've painted to the lower poly mesh when you load it in.

I've designed a simpler claw to save myself the headache of getting too carried away with a properly functioning, realistic mechanism before I've figured out how to get it moving around and grabbing. A simplified claw style also works a bit better with the more fun aesthetic I want to get out of this project.

The new claw design. After getting my test claw model and rig set up as a functioning Physics Asset in UE4, I decided to remodel it higher poly, with a revised scale for the new engine. 

This is my first time working with physically based shaders. The metal for the claw is basically chrome: metallic, high specular, and low roughness. I painted the roughness map using Substance Painter to get smudges and little dings, scratches, and scuffs in the metal. They aren't highly noticeable from a distance because I didn't want them to be too prominent. Keeping the claw very reflective seems to give it a good 90's arcade feel.

I also added in welding detail on the hinge extensions as a normal map -- a subtle detail, but all that matters is that I know it's there.

I have very little experience rigging, but this is my simple claw rig. Basically there is a joint at the very top of the claw, a center "null" joint (not weighted to anything, just an intersection), which branches into the joints that control the rotation of the three claw fingers (a, b, c.) 

The base is all one mesh, and each of the claws are one mesh, but for rigging I Mesh > Combined them all into one. I soft bound the joints to the mesh, but since it's mechanical, I paint weighted each part 100% to its respective joint, and 0% to everything else. This ensures there's no weird, organic bending happening. In order to help me select the individual mechanical parts for paint weighting, I selected the UVs on the UV layout, converted the selection to vertices, and flooded the value of those vertices with the paint weight value I needed.




I then exported this rig and its mesh as an FBX, and imported it to UE4 as a Skeletal Mesh. I created a Physics Asset of the Skeletal Mesh. The new PhAT editor in UE4 is much more intuitive to use, though it was nice to have the background of working with it in UDK. I set up the claw rig with bodies in UE4's PhAT Editor. I essentially needed each claw to be constrained like a hinge, swinging only on one axis. 


The overall look of the game has come a long way in the past week or two, and I'm pleased that it's starting to resemble my initial concept art. This is still a WIP and by no means final. There's a lot more fixing and tweaking to do with the textures, shaders, and lighting. I'm still figuring out how to get the glass material set up to reflect the scene. I'd like the glass material to have plenty of fingerprints and reflections of neon lights in the arcade. I also plan to have arcade machines to the left and right of it, with flickering light changes on the screens. At this point, I might keep the crane machine up against the wall, but I plan to come up with something visually interesting for it - either a fun wallpaper or posters, or both.

The approach I want to take for grabbing with the claws is to import animations for each of the claws that would close it to its fullest extent, and combine that with a physics blend, so that the claw animation is stopped by physics forces when colliding with another object. This is the next hurdle I'll be focusing on in the coming weeks, which is why the claws are not yet properly functioning in the picture.

Video to come shortly!



I've modeled the high poly version of the crane game exterior, UV mapped the low poly, baked down the normal maps to the low poly, and began to texture it based on my reference photos.

I sculpted two of the stuffed animals I've designed. I started completely in ZBrush, using ZSpheres to create the base armatures. I decided to begin these by sculpting, as it helps to achieve subtlety and a more organic, plush feel. I'll be retopologizing, UVing, rigging, and texturing them to be put in engine and used for testing soft body physics. The sculpt will be used to bake the large-detail normal map. I'll also be creating a finer procedural normal map for the cloth material, as well as combining procedural textures and hand painting for the diffuse.




Here are some simple mechanisms I've been testing out with UnrealScript. I have been working mostly with placeholder meshes and cubes for the time being, just to get the principles down.


The claw spawns an invisible box mesh that surrounds it at level start. This box is hard attached to the claw and moves with it. Whenever a prize is inside this box, checked by a Touch event every tick, a timer starts. If the prize leaves the box, the timer is cleared. Once this timer is running, I can check to see when it hits a certain time and initiate functions at that point. For instance, I might want a prize's specialBehavior() function to run whenever the prize has been held by the claw for a certain amount of time. After being held for 5 seconds, Mr. Snuggle Wuggle might just start falling apart.
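The timer flow above can be sketched outside the engine. This is a minimal Python stand-in for the Touch/UnTouch events and the per-tick timer check (the class name, callback, and timing values are mine, not from the actual UnrealScript):

```python
class GrabTimer:
    """Tracks how long a prize has stayed inside the claw's trigger box."""

    def __init__(self, threshold, on_threshold):
        self.threshold = threshold        # seconds held before firing
        self.on_threshold = on_threshold  # e.g. the prize's specialBehavior()
        self.elapsed = None               # None means the prize is not in the box

    def touch(self):
        # Prize entered the invisible box: start the timer.
        if self.elapsed is None:
            self.elapsed = 0.0

    def untouch(self):
        # Prize left the box: clear the timer.
        self.elapsed = None

    def tick(self, delta_time):
        # Called every frame while the prize is tracked.
        if self.elapsed is None:
            return
        self.elapsed += delta_time
        if self.elapsed >= self.threshold:
            self.on_threshold()           # e.g. Mr. Snuggle Wuggle falls apart
            self.elapsed = None           # fire once, then reset
```

With threshold=5.0 and on_threshold wired to the prize's special behavior, this reproduces the "held for 5 seconds" example.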


I need to dynamically determine the bounds in X and Y within which the Pawn is allowed to move. The claw should only be able to move to the extent of the walls of the crane game machine, with some padding. However, I didn't want to hard-code the constraint to specific coordinates. Instead, I want to get the bounding box of a mesh and set the min and max X and Y values for the Pawn from the min and max X and Y values of that bounding box.

Since I would like to choose which actor the bounding box is based on from an instance placed in the level in engine, I made a custom Kismet node which takes in an Actor input. Here is the basic structure of my class for setting up the Kismet node with a basic in, out, and input variable.

class SeqAct_BoundingBox extends SequenceAction;

var() Actor ABox;  // ABox is a global Actor variable that stores the input taken in by the Kismet node.
                   // The () following var signifies that it can take in an input in Kismet. My Kismet node is an Action.

defaultproperties
{
    ObjCategory="CraneGame Actions"

    InputLinks(0)=(LinkDesc="In")
    OutputLinks(0)=(LinkDesc="Out")

    VariableLinks(0)=(ExpectedType=class'SeqVar_Actor',LinkDesc="ABox",PropertyName=ABox,bWriteable=false)
}

When activated, I get the bounding box of the input Actor and store it to a local Box variable called BBox.

function Activated()
{
    local Box BBox;

    // Get the bounding box of the input Actor and store it in BBox.
    ABox.GetComponentsBoundingBox(BBox);
}

I then need to pass the Bounding Box coordinates stored in Box BBox to the Pawn class, where I can check if the Pawn's coordinates are within the bounding box coordinates. I had a ton of trouble accessing other classes, like my custom Pawn class. In the Kismet node, I found that I was able to use this function to get the active Pawn from the local player controller. 

local CGPawn CGP;
CGP = CGPawn(GetWorldInfo().GetALocalPlayerController().Pawn);

I made a function in my Pawn class that takes in a Box variable. Inside that function, it stores the information of the coordinates in Box BBox to a variable that is global within the Pawn class. The function in the Pawn class is called P_BBox (for Pawn BBox).

var Box P_ClawBounds;  //global variable at the top of class, for Pawn Claw Bounds

function P_BBox(Box BBox)
{
    P_ClawBounds = BBox;
}

In the custom Kismet node under the Activated() function, I finally call the function I made in the Pawn class to pass the Box BBox variable over. 


In the Pawn class, every tick, I check whether the pawn's Location exceeds the max X and Y coordinates or falls below the min. If it does, I set the pawn location back to the min or max. I grab the min and max float values from the P_ClawBounds variable that stored the bounding box information from the Kismet node. I set up a temp vector variable called tmpLoc, set equal to the Pawn's location at the time the Tick function is called. I perform the clamping calculations on tmpLoc for the duration of the function, then at the end I use SetLocation(tmpLoc) to set the location of the Pawn to the updated tmpLoc.

simulated function Tick(float DeltaTime)
{
    local vector tmpLoc;
    local float Max_X;
    local float Min_X;
    local float Max_Y;
    local float Min_Y;

    super.Tick(DeltaTime);

    Max_X = P_ClawBounds.Max.X;
    Min_X = P_ClawBounds.Min.X;
    Max_Y = P_ClawBounds.Max.Y;
    Min_Y = P_ClawBounds.Min.Y;

    tmpLoc = Location;

    if (tmpLoc.X >= Max_X)
        tmpLoc.X = Max_X;
    if (tmpLoc.Y >= Max_Y)
        tmpLoc.Y = Max_Y;
    if (tmpLoc.X <= Min_X)
        tmpLoc.X = Min_X;
    if (tmpLoc.Y <= Min_Y)
        tmpLoc.Y = Min_Y;

    SetLocation(tmpLoc);
}




A major part of making this game is creating realistic, physics-driven movement. I've determined that the Pawn mesh should actually be what I'll refer to as (bear with my little knowledge of real-life mechanics) the sliding mover box at the top of the machine. This is where the movement of the crane originates. A string feeds into this box and holds the claw up, and it can be let out to drop the claw down.

I've been doing a lot of testing with all of the different physics features in UDK that I knew nothing about before this project. I've thought through several possibilities for how to attach the claw to the string, which is in turn attached to the mover box, and how to make the string lengthen and drop down. Here are the conclusions I came to.

The claw mesh is a rigged SkeletalMesh with a PhysicsAsset. Each separate, individually moving mechanical piece of the claw is vertex weighted to its own bone. The PhysicsAsset sets up the correct constraints between the bones (i.e. hinge) and allows physical impulses to be applied to it. Controlling the grabbing of the claw this way rather than with a baked Matinee animation will allow for physical interaction between the collision of the prize and the claw grabbers, so the grabbers stop when they collide with a prize.

This all sounds good in theory, and I believe it's what I need to make happen, but experimenting with this stuff has been extremely frustrating, to say the least. Here's a dismal picture of my progress: a broken claw with improperly functioning constraints lying on the cold, hard ground of the Unreal PhAT Editor.


We'll see what happens as I keep working with this. If necessary, a simpler claw model may be in the works so that things have less interdependence and chance of breaking. 

The claw PhysicsAsset is then attached to a rigid body SkeletalMesh string with a PhysicsAsset instance (since soft bodies cannot hold up meshes), modeled to the longest length it will need to drop. This string is fed up through the box and out the top of the machine, which will be hidden from the player, so it won't matter if it clips through the top of the machine. The string would be pinned straight by surrounding collision boxes until the point where it emerges from the bottom of the mover box - that way the swing pivot is at the bottom of the mover box. The string's X and Y coordinates would be locked to those of the mover box, but not the Z. The string drop, and the drop of the claw by proxy, will be controlled by a variable that determines the current Z coordinate. This variable would be clamped at the max and min height positions, so that the claw can go no further up than the mover box and no further down than the platform the stuffed animals are on. It would be modified by a key binding: a key press initiates the drop, and an automatic raise after a designated wait time (2 seconds or so) allows for grabbing prizes at the extent of the drop.
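The clamped Z variable plus the timed auto-raise amounts to a small state machine. Here's a hedged Python sketch of that behavior; the heights, drop speed, and state names are placeholder assumptions, not values from the actual setup:

```python
PLATFORM_Z = 0.0     # lowest drop, the prize platform (assumed value)
MOVER_BOX_Z = 300.0  # highest position, flush with the mover box (assumed value)
DROP_SPEED = 100.0   # units per second (assumed value)
HOLD_TIME = 2.0      # wait at the bottom before auto-raising, as in the post

class ClawDrop:
    """State machine for the key-initiated drop and automatic raise."""

    def __init__(self):
        self.z = MOVER_BOX_Z
        self.state = "idle"
        self.hold_elapsed = 0.0

    def press_drop_key(self):
        if self.state == "idle":
            self.state = "dropping"

    def tick(self, dt):
        if self.state == "dropping":
            # Lower the claw, clamped so it stops at the platform.
            self.z = max(PLATFORM_Z, self.z - DROP_SPEED * dt)
            if self.z == PLATFORM_Z:
                self.state = "holding"
        elif self.state == "holding":
            # Grab window: sit at the bottom for HOLD_TIME seconds.
            self.hold_elapsed += dt
            if self.hold_elapsed >= HOLD_TIME:
                self.state = "raising"
        elif self.state == "raising":
            # Raise the claw, clamped so it stops at the mover box.
            self.z = min(MOVER_BOX_Z, self.z + DROP_SPEED * dt)
            if self.z == MOVER_BOX_Z:
                self.state = "idle"
                self.hold_elapsed = 0.0
```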

The downside of this method is that the string would be stiffer looking, since it is a rigid body. The string may bend more angularly at joints as it swings. The string would clip through the top of the machine, which isn't a huge deal but is a bit of a drawback if I want to zoom out from the machine at any time. Finally, it might give less control to have a three-layer dependency between the mover box, the string, and the claw. Each would be based on the physics-driven movement of its parent, which could cause inaccuracies and glitches.

The second method is a bit of a workaround but may prove more reliable and give more control. The mover box, as the Pawn, has an invisible Physics Gun "weapon" that is always aimed straight down. This Physics Gun would be constantly firing, and traces to a specifically designated actor (the top bone of the claw.) This way, the movement of the claw is bound to the movement of the mover box, which is player controlled. The string is a soft body attached as a detail between the two at its end joints. It is modeled at its shortest length and, because it's a soft body, will stretch to the extent that the claw is dropped.

I have been working with this method the most, and have gotten it mostly working. The things I'm figuring out right now are how to make the weapon fire constantly, and how to trace to one specific actor so that the claw is the only mesh the mover box picks up. The advantage of this method is, again, the direct link between the mover box and the claw, removing the string's room for error. The Physics Gun already implements sway and interpolation of the grabbed object. Additionally, I've found a modified version of the Physics Gun code that allows for Push and Pull functions that change the hold distance between the Pawn and the grabbed object, which is essentially the claw drop. It can be bound to the scroll wheel of a mouse, which may be a more fun and interactive way for the player to handle the claw drop (using the claw more like a wrecking ball than a static, timed drop.)
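The scroll-wheel Push/Pull boils down to a clamped adjustment of the Physics Gun's hold distance. A small Python sketch of the idea (step size and distance limits are assumed values, not from the modified Physics Gun code):

```python
MIN_HOLD = 50.0    # shortest hold distance, claw near the mover box (assumed)
MAX_HOLD = 400.0   # longest drop, claw down at the prize platform (assumed)
STEP = 25.0        # change in hold distance per scroll-wheel notch (assumed)

def scroll_hold_distance(current, notches):
    """Adjust the hold distance by scroll-wheel input.

    Positive notches push the claw further down (longer hold distance),
    negative notches pull it back up; the result is clamped so the claw
    stays between the mover box and the platform.
    """
    return max(MIN_HOLD, min(MAX_HOLD, current + notches * STEP))
```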


This week I focused on hammering out how to constrain the movement of my Pawn and continued developing the camera. I also worked on prize concept art!


My game type is currently extended from UTGame. I had originally extended from UDKGame, but my camera is most closely based on a third-person camera that pivots around the Pawn (the claw.) The built-in SetBehindView in UTGame is helpful for getting a third-person camera working in this way. I need to remove the default HUD from UTGame, but I don’t see this posing too huge an issue.

My camera is based loosely on this third person camera code. The camera height and offset from the location of the player are global variables with values editable in the defaultproperties of the Pawn class. The CalcCamera() function uses these to calculate the position and rotation of the camera relative to the player.

In my version of this camera, I’ve frozen the rotation of the Pawn, but not the PlayerController, so the camera moves based on mouse input but the Pawn mesh never turns. In the UpdateRotation() function, after getting the rotation from the mouse input and feeding it into the ProcessViewRotation() function, I clamp the resulting Yaw rotation value. This is because I don’t want a full view around the crane game machine – just enough to pivot and give the feeling of peeking inside the machine left and right.
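For reference, UnrealScript rotators measure angles in integer units, with 65536 units to a full 360-degree turn, so the Yaw clamp is easiest to reason about after converting from degrees. A Python sketch of the clamp (the +/-30 degree window is an assumed stand-in for however far I end up letting the camera peek):

```python
UU_PER_DEGREE = 65536 / 360.0  # Unreal rotation units per degree

def clamp_yaw(yaw_units, min_deg=-30.0, max_deg=30.0):
    """Clamp a rotator Yaw (in Unreal rotation units) to a degree range."""
    lo = int(min_deg * UU_PER_DEGREE)
    hi = int(max_deg * UU_PER_DEGREE)
    return max(lo, min(hi, yaw_units))
```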

My code right now allows the camera to move in location in the X and Y axes along with the claw Pawn, which is something I need to fix. This makes the camera much too active for my taste. I would prefer it to be very subtle, and for the rotation of the camera to correspond to the X movement of the crane.

I was able to constrain the movement of my claw in the X and Y axes in the PlayerController class, in the Tick() function. A tick is one frame, so whatever is inside the body of the function is calculated each frame. I simply copied the location of the Player into a tempLocation vector (X, Y, Z). I then used an if statement to check if the location in each axis was above my max or below my min values. If it was, I set tempLocation equal to the min or max value. Then I called SetLocation(tempLocation) to write tempLocation back to the location of the player.

I attempted to do the same on the Z axis, but just set tempLocation.Z to one value, like 100, so the player would stay afloat at 100 units in height. The problem with this was that the built-in gravity kept taking effect. I figured the best option would be to set the Pawn’s physics to PHYS_Flying. This can be done in the default properties by simply saying Physics = PHYS_Flying, or even by changing it from a dropdown in the editor in the properties of the Player Start. Even setting the physics to flying didn’t seem to work, though – my player kept falling.

After a lot of fooling around, I ended up calling the SetTimer() function in the PostBeginPlay() function of the PlayerController, essentially causing a delay and allowing the PHYS_Flying to be calculated before the player is initialized.


I’ve designed several prizes that are going to populate the machine. Although it’s not a priority to outline every single prize, I found designing them to be a good reminder of the tone of my game. Some of these are cameos from other works I’ve done, others are new ones designed while referencing cheesy and overly sweet stuffed animals from my reference photos.

I’ve found the prizes have helped me come up with some ideas for added functionality I might want in the game. If each prize could have an individualized behavior, there would be a lot of potential for very humorous surprises with each prize you pick up. For example, the sad teddy bear might fall apart in a variety of morbid ways whenever it is touched, making it impossible to save. The bubble gum fish might be extremely sticky — and hard to get off of your claw when you want to drop it. And the toilet paper – maybe you need to pick it up in the right way – over or under? By the time you get it right you may have unraveled the entire roll.