Source Filmmaker

Velbuk [SFM Experiments] [WIP]
By Kumquat [Velbud]
Hello, and welcome to my "notebook" of ideas and information. A lot of this is based on questions asked or answered by people, including myself, as well as ideas that are plausible in theory to accomplish in SFM. The main goal is to test each idea and then, no matter the outcome, explain how it was done and what the results were.

This is not a frequently updated guide. I have a life and goals outside of SFM, and animating in general. If you have ideas for me to look into that you have seen in other animations, be they outside of SFM or even in real life, let me know in the comments and, judging by interest, I will look into them.

This guide is mainly meant to encourage more exploration of the uses of the components I am sharing with y'all.

Preview image by: EyeCandry
Introduction
Greetings,

This is Maikeru [aka Velbud]. I am creating this guide as more of a fun little project for future content creators, be they for SFM usage, implementation in other engines, or just inspirational knowledge about what can be done within SFM beyond and within the basics.

For a while now, I have been looking through a lot of the functions present in Source, as well as what is mentioned on the Valve wikis and Facepunch forums. Safe to say, a lot of components are provided in Source. The whole hype about Source 2 is nice and all, but I think building a conceptual yet complex system that works for Source may be transferable and more easily planned, especially with the components we already have knowledge of.

Just a few notes:
1) These experiments are designed to work similarly to the physics engines already present in Source and in Blender. I am not using those systems themselves unless they indirectly assist with the project. You'll see what I mean later on as I begin adding experiments in.

2) I will be adding every experiment in one at a time. It keeps me focused, and it should keep the viewers focused as well.

3) For the newer people, and even the more experienced, there will be a lot of things here that may seem rather strange if not completely incomprehensible. I will be using both Blender and SFM a lot here, and rarely anything else (for example, GIMP). Here's a good starting point for everyone to review on SFM:
https://steamproxy.net/app/1840/discussions/0/2741975115066297037/
Once you feel more comfortable with everything that is SFM, then the Valve Wiki is a great resource that will be linked to a lot here.

4) This isn't a course. It is hopefully a resource to answer questions that are a bit more practical and somewhat more relevant to what people want. This isn't the place to ask questions about functional problems like how to use certain features already present in SFM. This guide assumes you already know what this stuff is.

5) Whatever I create, whether it works or not, I will share in full, organized the same way it is on my computer. If I do feel like ♥♥♥♥ about it, then I may only post a picture and a video showing what has been done, or even how to make it yourself.

My ultimate goal for this project is to eventually build a system that automates all these effects for me, or to inspire people to try to do so as well. If not any of those, then maybe just seeing much better renders of models and movies within SFM.

Alright, there may be more as I work on this, but for now, I think I can leave you with "Welcome Aboard!"
Faking Omni-Lighting [Dealing with Lights] - On Hiatus
It is best to cover lighting before going into anything complicated involving models. This guide focuses mainly on the components given within SFM.

What is an omni-light? It is a source that lights up in all directions. In SFM, there are only standard and volumetric lights, and the main problem is that I haven't been able to create a system for a model-based version of light yet (this requires me to do some research on creating new materials, which will take a while, and I want to get the easier things done first).

Omni-lighting also helps give the overall area the look of being fully lit, with light bouncing off everything.

SFM doesn't support such a system. Instead, I have been trying to find ways to fake it using the lighting we have present. There are a few standard components to lighting.

1) The main light source. This can be the sun, the moon (change the color and intensity), or a powerful light source in close proximity.

2) The secondary light sources. Only 11 lights can be RENDERED at once (you can have as many lights as you want, but only 11 will render; turn the intensity down to zero on the ones you aren't using), so place the lights wisely.

-Certain skyboxes [forgot what they are called] can be modified with holes using alphas/alpha channels, or holes in the mesh areas [to be tested], to fake light sources in the night. Rendering requires two normals-flipped meshes with the same texture. This will use either Unlit or VertexLit, depending on the placement of the "sunlight."

-Lighting up the background of the map (the black void) leads to an all-white view with just one light. I need to test whether this is possible to make in Source using some basic Hammer fiddling.

3) All light sources will have a light facing the view camera(s), both the ones used for rendering the movies and the ones used for rt_cameras. This may seem daunting, but we can bypass the hard parts of animating them simply by using There Is A Bear In My Oatmeal's Camera Node Script. Following his tutorial for the light, we can set it up to follow your camera. Manipulate the strength of the light based on distance, and use the alpha channel of the light when the clarity of the light source is at its max.

4) I will try to figure out a script that animates the presence and absence of lights based on their visibility in the shot. Based on the direction the camera is facing, the lights locked to the script will be turned off or on, maybe even rendered with the settings you wished them to have in the shot. Such a system would definitely make this a lot easier to bear.

5) The standards for light emission also rely on the nearly-omni light sources having two main lights: one facing into the light source and the other emitting away from it, in the relative direction the light should be going. Combined with number 4), it should be modified such that the prop light weakens to allow the light facing the camera to be dominant. Remember, the distance of the light matters here, because the rendering only looks right with a manipulated emission distance.
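The culling idea in points 2) and 4) can be sketched in plain Python. This is only a hedged illustration of the logic, not real SFM API calls; every function and parameter name here is made up, and you would wire the resulting intensities back into your session by hand or through a rig script.

```python
import math

def light_visible(cam_pos, cam_forward, light_pos, fov_deg=90.0):
    """Return True if the light lies inside the camera's view cone."""
    dx = [l - c for l, c in zip(light_pos, cam_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0:
        return True
    dirn = [d / dist for d in dx]
    cos_angle = sum(a * b for a, b in zip(dirn, cam_forward))
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

def cull_lights(cam_pos, cam_forward, lights, max_rendered=11):
    """Keep at most max_rendered visible lights on; zero out the rest.

    `lights` is a list of (name, position, intensity) tuples; returns a
    dict of name -> intensity to write back into the session.
    """
    visible = [l for l in lights if light_visible(cam_pos, cam_forward, l[1])]
    # Nearest-first, so the lights that matter most stay on.
    visible.sort(key=lambda l: sum((p - c) ** 2 for p, c in zip(l[1], cam_pos)))
    keep = {name for name, _, _ in visible[:max_rendered]}
    return {name: (inten if name in keep else 0.0) for name, _, inten in lights}
```

For example, with the camera at the origin looking down +X, a light at (10, 0, 0) stays on while one behind the camera at (-10, 0, 0) is zeroed, which is exactly the off-screen-lights behavior described above.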

More details on the guide: coming soon! The preview image for this chapter will be updated with something a bit better. For now, the goal is to make one omni-light present.

Future plans:
-Creating an omnilight model. [this is a test for seeing how SFM runs light and potentially giving it an easier means of rendering it]

-https://www.youtube.com/watch?v=Ivgvu9L3A-o&feature=youtu.be See if accomplishing this request is possible. Let's see if hacking this light out of Hammer is possible; even a script may be in order. IF THIS IS DONE, THEN THERE IS NO NEED FOR RT_CAMERAS TO BE RESPONSIBLE FOR RT REFLECTIONS!

[UPDATE: I seem to have found a way to bypass the need for both of these by influencing a larger degree of the standard spotlight's behavior. Maxxy has perfected it into both a shadowless omnilight AND a shadowed pseudo-omnilight. Note that these are both single-light influences and have the respective behaviors of aspects of omnilights, with the pseudo-omnilight requiring two lights due to its limited influence. The methods for this aren't quite ready for release yet, as Maxxy plans on studying this further and compiling a full guide. In the meantime, I will be working on other aspects of this guide. Any other updates will be present in the SFM FAQ.]
rt_camera Faking Reflections Study/Skills [to be finished]
Reflections in SFM are done through a few known means:
1) Standard lights and brushes from maps. [cheapest, generally mediocre quality though]
2) Custom lighting and material work. [ported models from other games tend to have this feature combined with the animator's knowledge of Blinn {more information, see BlueFlytrap's Workshop Item Descriptions}]
3) Cubemaps. [these are simple, and also pretty cheap, but they are not the most appealing. They give the illusion of reflections, and even then are noticeable]
4) rt_cameras. [With good posing and accurate placements, as well as good material manipulation, this can work well as a mirror]

The following are tests I have done to confirm the way rt_cameras work.

a) rt_cameras are HEAVILY dependent on the UVMap and its distribution along the assignment palette, which can easily be viewed as a grey indication of the limits of the texture's presence.
Further testing showed that this is dependent on the UVMap assignment orientations too. This means that you don't necessarily have to rely on $nocull to flip your image. You can flip the image using the typical reverse-axis scaling (scale the whole UVMap globally on the X axis by -1), and voila. [currently testing whether Translucency/Alpha and multi-camera assignments are possible]
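As a tiny illustration of the flip described above: mirroring the UV layout about u = 0.5 is equivalent to scaling the whole UVMap by -1 on the X axis and shifting it back into the 0..1 tile. A minimal sketch (the function name is mine, not from any SFM or Blender tool):

```python
def flip_uvs_horizontally(uvs):
    """Mirror a UV layout about u = 0.5 -- the same result as scaling the
    whole island by -1 on X and translating it back into the 0..1 tile."""
    return [(1.0 - u, v) for u, v in uvs]
```

So a vertex mapped at (0.25, 0.5) lands at (0.75, 0.5), and the rendered rt_camera image comes out mirrored without touching $nocull.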

UPDATE: I have confirmed that by combining it with Doc's color and cloak script, the model can be cloaked and colored! I am currently looking into multi-camera assignment and whether phong, or lights in general, can be supported. I will also test the rt_light effect on a model supporting Blinn; we will see if the effects are somewhat arbitrary to use. I will also look into a fisheye rt_camera view to see if a scene looks more accurate to what we see with our eyes.

b) Multiple tests have led me to the conclusion that Pte Jack was right about a few things involving the cameras. I will not state what because most of what he has said is true, but it isn't the overall picture about these systems.

c) I tested rt_cameras with spheres and half-spheres, adjusting the material organization as much as possible to accommodate and minimize seams. It worked well with a half-sphere, but not so much with a full sphere [more details later; there are a couple more tests I need to perform to come to a conclusion about using full spheres].

About half-spheres: To give the model the look of a reflective FULL sphere, you will need to follow the rt_camera protocol and place the monitorcamera (or whatever camera you will be using for reflections) where you wish for the reflection to be based. You will need $nocull, so the center, as I have tested, isn't the most optimal place for the camera. Knowing which part looks reflective and which looks like an exact replica of the camera view is pretty important, too. Thus, it is better to play around with the camera in a well-defined, non-uniform environment, such as the itemtest and devtest maps in SFM. Once you have accomplished the hard parts, you are not done yet. You will have to deal with the issue of keeping the illusion of real reflections. The most optimal method is to create another scene camera and use Bear's Camera Script on the half model, such that the node is the camera and the reflective surface's center is controlled by the "camera" node. With this combo, you should start building the philosophy of reflections.

TL;DR for spheres.
1) HalfSphere mesh with -1 to 1 flex.
2) rt_camera protocol
3) Bear's Camera Node Script.
4) A scene camera other than the rt_camera. [edit the components as you so please to get it as close as possible to the real deal]

With this cocktail, you can create a fairly close representation of a reflective sphere while it is actually a half-sphere.
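When reasoning about where the monitorcamera's view should point, the underlying math is just the standard mirror-reflection formula r = d - 2(d·n)n, where d is the scene camera's view direction and n is the surface normal. A small sketch with hypothetical names, nothing SFM-specific:

```python
def reflect(d, n):
    """Reflect an incoming direction d about a unit surface normal n,
    using the mirror formula r = d - 2(d.n)n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))
```

For instance, a scene camera looking straight down -Z at a panel whose normal faces +Z gets its view bounced straight back up +Z, which is why a flat rt_camera panel works as a mirror when the rt_camera is aimed along that reflected direction.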

[COMPONENTS USED FOR THIS EXPERIMENT WILL BE POSTED ONCE EXPERIMENT IS FULLY FINISHED] *

[PICTURES AND PREVIEWS FROM THE EXPERIMENT WILL ALSO BE POSTED]

Just a few warnings:
1) This overall experiment took a long time due to how clunky a lot of the components are and how demanding the engine is. Don't expect things to be perfect the first time you load the components you make for your testing.

2) I have tested the model with cubes, and my rule a) applies here. So yes, you can get a cube to behave like a sphere rt_camera reflection. However, it also suffers from the same issue as the sphere. If you would like components for this, let me know. *

3) rt_cameras are not perfect reflections and thus may require you to do a lot more editing to your model, such as adding in some slight fogginess. I am working on a protocol for this and it will be posted with this part. *

4) Panels can be used for more than just mirrors. They are also good for creating reflective water surfaces. You just need to learn and observe a bit of water to measure out the placement of the rt_camera, or even learn about deflections. This will be my own personal endeavor to make a quiet water background for testing purposes, but note that distance fog applies heavily with anything.

5) By my own conclusion, it is possible, in theory, to create a system that allows models to have realtime reflections. The main issue is that I would need to introduce a system of fluid sphere deformation combined with rt_cameras, and that is already a project for later, on the topic of introducing water to SFM. Remembering everything I wrote here, you could try to create a dummy model with reflective surfaces, but the main issue goes back to the half-sphere vs. full-sphere problem. I will work on figuring that part out; hopefully, the full sphere is possible.

This is not meant to introduce ray-tracing materials or components to SFM. It is meant to teach people of a method to bypass these issues from within SFM. I hope these will somehow help. :)

All (*) parts will be worked on and provided download links to the components for you to compile and test for yourself. Let's see if you guys find anything else to add on.

PROGRESS: https://www.youtube.com/watch?v=JUo2lWcUCLA (combining it with Doc's cloaking script)

UPDATE: For irregularly shaped models, I plan on testing a stranger version of the method I have set up here with the simple semi-sphere. In case you're curious, here it goes.
1) The sphere looks very nice in half-sphere-form reflections, but it isn't ideal for what I am working towards. I will provide this mesh with the most recent changes after the experiment is completed, but it isn't over yet.
2) Irregular objects require artifact-like meshes within the main mesh. Imagine this: take a full sphere and stack faces based on the vertices of the main mesh, one on top of the other. A good amount will be hidden using a Proxy that works against the player view based on set camera conditions, while those facing more or less towards the main camera are the most clear. To test this, I will use a normal sphere, and instead of spinning it the way I did last time, I will artifact this sphere.

Just a warning, this sphere is NOT the Blender sphere. This was a custom sphere I found from the SFM Workshop and I made comparisons to ensure this guide is as accurate as possible. Everything here is a hypothesis backed with testing. If you want more in depth discussion, comment below what you would like to see as images for conceptual understanding.

I have personally adopted the view that the half-sphere version of the camera is a more accurate representation and depiction of the surroundings as we see them with our own eyes, compared to what the camera in SFM sees.

There is a component for creating custom materials in SFM. I will try to see if I can program a ray-tracing material, but it may not make sense with SFM's philosophy, because no light bounces off models onto other models; light only illuminates them or casts shadows, which it seems can be replicated with an rt_camera method. More information on this in the future. For now, rt_cameras will be a reference for how the viewer will see the scene.
Physical Model Particles [on hiatus]
Have you ever wanted SFM to behave as a physics/organic chemistry simulator? Well, that is one of the plans. Before I begin talking about how this is possible, let's get one thing out of the way: these are physical models, not particles.

Concepts:
1) In the real world we have gravity, but there is a force stronger than gravity, and that is lov...I mean, electric force, aka magnetism. The brief explanation is that opposites attract. However, not everyone understands why until they have gone through a good amount of physics and ochem. See, the relationship between particles is the transfer from higher concentration to lower concentration. A lot of people may tell you it is a bit more complicated than that, but just understanding this concept simplifies the experiment a lot, for the reason a charge exists is the absence or abundance of electrons. Remember, electrons aren't necessarily the smallest quantum we know of, and it could be that the interaction between positive and negative, without affecting electrons directly, is due to the components of the electrons being affected, which is what we will take into consideration here.
2) To get this to work, motion needs to be established as a default animation. This will require creating a series of sequences that SFM can deal with and animate correctly for the user. Not only that, a special lock-to-lock trade system needs to be created. If it cannot be done via a script, it can instead be done through physics detection and particle-sphere crack slipping, which seems much closer in line with the real deal.
3) Particles having more or less energy may be shown by their general color, by particles moving faster then slowing down, or by particles bumping into one another enough to look like they are clumping while beating against one another. Stuff like fire may have to come later.
4) Applying the rules of chemistry to these particles may be a bit difficult, for the particles themselves need to have an exact reason for linking or producing certain reactions over others.

The goal will be creating a hydrogen that can interact with itself thanks to the conditions set by the client, which is simply the animator's animating or usage of the script.
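For reference, the "opposites attract" rule that concept 1) wants to fake is plain Coulomb's law. A minimal sketch of the force a script would approximate (pure physics, nothing SFM-specific; the function name is mine):

```python
# Coulomb constant in N*m^2/C^2.
K = 8.9875e9

def coulomb_force(q1, q2, r):
    """Signed force between two point charges a distance r apart.
    Negative means attraction (opposite signs); positive means repulsion."""
    return K * q1 * q2 / (r * r)
```

A faked particle system only needs the sign and the inverse-square falloff: opposite charges pull together more strongly as they get close, which is the behavior the default animations would have to encode.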
Bypassing Weight Culling via Modelling Means [on hiatus]
This is a rather bothersome issue I have found with Source, and I generally didn't like the methods currently used for bypassing it. My suggestion would instead be to keep relative, gradient vertex assignments by assigning them to helper bones, with the main bone being the visible and controllable one; note that the assignments shouldn't overlap (until further notice). What I will need to figure out is the bone limit, whether it can be bypassed in any way, whether I can break a model into pieces and then relink it within SFM, or even make the model and armature separate pieces.

This was based on an older experiment and my experience fiddling with Overwatch models as well as examining the helper bones. I will post more details later on. For now, just note that this may end up being a test on a single model. If it works, great, we have found ourselves a method that isn't too complex yet does require a bit of work.
Creating Hair Similar to Blizzard Shorts and Disney/Pixar [on hiatus]
This is a very odd one, right? The whole idea behind this is that the goal of the animator is to be able to render models at the highest quality. However, what I always thought needed a bit more looking into is translating a game or movie character with more advanced-looking hair into SFM, which tends not to happen for multiple reasons. This experiment will look into how we can potentially recreate the quality hair seen in most shorts by major animation companies in a more practical sense, such that we remain anatomically accurate while also making it pleasing to look at. Unfortunately, due to laziness, I will not be texturing anything. I will be borrowing hair textures and using them as a reference. The most important part right now is the mesh, because the hair texture can then be customized based on the skills of the users. :)

The project will consist of recreating Go Go Tomago's hair in roughly the style it had in the animated film Big Hero 6. The idea is that each mesh face, or strand of mesh faces, will be a hair or two, with hair layered within. After studying how it was done for Overwatch models, I personally didn't like it, so I decided to perform a vertex-by-vertex layering from within. If this comes out right, we get a start on how to model hair in a strand-by-strand fashion. Just remember, since this is a 3D world and the hair is not too detailed, all we need is two mesh faces perpendicular to one another. Later on, if this experiment holds ground, we will advance it by adding more details, while avoiding making the hairs into full solid models. That would be way too hard to pose in Source, while mesh faces are more forgiving.
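The "two mesh faces perpendicular to one another" idea can be sketched as plain vertex data. This is only an illustration of the geometry (the names and layout are mine, and this is not an exporter for any real format):

```python
def hair_card_cross(origin, width, height):
    """Vertices for a crossed hair card: two quads at 90 degrees to each
    other, sharing a vertical axis through `origin`. Each quad is listed
    counter-clockwise from its bottom-left corner."""
    x, y, z = origin
    w = width / 2.0
    # First quad lies in the XZ plane.
    quad_a = [(x - w, y, z), (x + w, y, z),
              (x + w, y, z + height), (x - w, y, z + height)]
    # Second quad lies in the YZ plane, perpendicular to the first.
    quad_b = [(x, y - w, z), (x, y + w, z),
              (x, y + w, z + height), (x, y - w, z + height)]
    return quad_a, quad_b
```

With an alpha-masked strand texture on both quads, the cross reads as a volume of hair from most angles, which is the cheap trick behind strand-by-strand layering without solid hair models.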

I hope this experiment will be fun to check out. :)
"Down" Animations
This is a rather odd problem with physics, because SFM, and Source in general, seems to model elastic collisions MORE than inelastic ones. Ragdolls may be the antithesis of this statement, but the problem is that they are bound to code rather than something that can be controlled like a model in IK. Hmm, now that I think about it, IK may ultimately be the best way to go about it for most models. If I create an advanced enough IK for plane skeletons as well as limbed ones, with the controller and interacter relationships applied to all parts equally and correctly, this can work for all animations as a ragdoll faker. In addition, I would need to add a secondary bone system like jiggles, except that instead of jiggles, it utilizes a specialized system of downward-facing bone controllers to indicate the direction of gravity. And lastly, the hardest part is getting the physics bases to recognize each other and interact as either inelastic or elastic collisions, preferably the former. This is my own battle plan for getting a workable SFM script of a physics faker that takes physicality into account.
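For the elastic vs. inelastic distinction above, the 1D formulas a physics-faker script would need are standard momentum conservation. A hedged sketch (textbook physics; the function names are mine):

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a perfectly elastic 1D collision:
    both momentum and kinetic energy are conserved."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def inelastic_1d(m1, v1, m2, v2):
    """Shared velocity after a perfectly inelastic (sticking) collision:
    only momentum is conserved."""
    v = (m1 * v1 + m2 * v2) / (m1 + m2)
    return v, v
```

Equal masses in an elastic hit simply swap velocities, while the inelastic case has them move off together at the average; a ragdoll faker would blend between these two behaviors per contact.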
Extras
These are not official documentation, but interesting observations I have made while I was away for some time.

1) There are games out there with a lighting system that looks heavily lit and reflective, say Dark Souls and The Last Guardian. This was something I noticed about the engine that seemed rather odd at first but logically makes sense: rendering only what needs to be rendered within the player's view. While there is also map-level rendering that can save resource usage, there are two things we should mention:
- I've not tested much on this, but a lot of engines use player-view culling, which essentially turns off parts of maps that are not seen by the viewer. In SFM, this can be done similarly by turning off the view until you need it back in the engine again. This may be wrong, and SFM can handle a lot of assets at once, but I have heard that high-poly assets may cause more interference when rendering, so I would recommend testing by hiding them. By hiding, I mean clicking the eye symbol, NOT taking them out of the map.
- Similarly, when your camera view moves to a different part of the map, say when the viewer goes from being outside to being in a tunnel, the lighting color actually changes. This got me postulating that maybe a sun light source in SFM might do this trick and allow for rendering like this. However, after testing, I realized this only works if there are other light sources present in the cave being rendered. Hiding lights is also a thing, as I did in my light experimental study. What I found is that if I create a system of sunlight and a box of lights beaming against one another, with the object standing in the middle and the intensity of each light kept minimal, I can create the illusion of light bouncing all over the object. If in the cave, the lights change to a darker color, modifying themselves according to the light sources CLOSEST to the object and viewer. There are a few ways to approach this, which means SFM can handle more than 11-13 lights being present, just not rendering that many. One way I can recommend is automating or animating nearby or generally off-screen lights to switch off, replacing the more specific ones. This allows a rather complex yet well-balanced system to be at play. I do plan on studying further to see if I can create a script that animates lights this way, but for now, I will prolly have to render something like this in SFM first, huh? Well, the other method I found is creating a background light box.
If I am capable of porting omnilights, or doing some haxer-level shizz, I could probably make this sort of lighting easier without making it costly on SFM. But since we don't have much of a system going, the next best thing would be to create a light box of multiple shadowless lights (essentially a cube blinn system), all pointing directly in the direction of the camera and the focus. This system works for backgrounds, especially when changing scenes.
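The light box can be sketched as a ring of dim, shadowless lights that split one intensity budget between them, all aimed at the focus. This is only illustrative Python under my own naming (you would still create the actual lights in SFM by hand or with a rig script):

```python
import math

def light_ring(center, radius, count, total_intensity=1.0):
    """Place `count` dim lights in a horizontal ring around `center`,
    each aimed at the center, splitting a fixed intensity budget so the
    combined result approximates soft, omni-directional bounce light."""
    lights = []
    for i in range(count):
        a = 2.0 * math.pi * i / count
        pos = (center[0] + radius * math.cos(a),
               center[1] + radius * math.sin(a),
               center[2])
        lights.append({"pos": pos, "aim": center,
                       "intensity": total_intensity / count})
    return lights
```

Keeping the per-light intensity at budget/count is the point: the ring stays within a sane overall brightness no matter how many lights you spread around the subject.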

Taking these two systems I recommended and combining them effectively, we can render SFM animations a lot more efficiently, doing much of the harder work for animators and methodically tricking SFM into working better without hitting hitches. If there are any questions, let me know; I will draw out the schematic if this needs to be explored or measured a bit further. For now the thought counts; it just means I would need to learn a bit more about scripting and see if I can run a test in SFM, or at least create a "fake" version of the system in progress. There are multiple ways lighting can work and be controlled. Hopefully, this will allow the more advanced users free rein with lighting while getting the basic stuff out of the way.

2) I want to run a test taking current choreographic systems and adding a physics bit with multiple triggers for different animations based on jiggle reactions. However, the main issue I usually see in this department is that the jiggles in SFM are nonsensical. Because of this, I have to run SFM physics investigations further to create imperfections in animations, the kind of mistakes a viewer may find oddly human, while at the same time giving the animator full control with a bit of error rather than more precision. I also wanted to run a system similar to IK that functionally behaves correctly in SFM. All of this is on hiatus due to school year preparation, but I will get to it on my own time. If you want to talk about this more, or share ideas, I am open.
5 Comments
[UA/SK] Divine Lotus 6 Jul, 2019 @ 9:00am 
That one YouTube link is not fully formatted correctly. The URL is separated.

Facepunch Forums have been down. The archive link is here: https://knockout.chat/thread/822/1

For something like this not to happen again, website admins must make their site/community websites decentralized.
Research what decentralization is.

As stated in this wiki about modding Insurgency: https://steamproxy.net/sharedfiles/filedetails/?id=1501754039

(Pasting this to related guides/discussions, spreading the words)
Kumquat [Velbud]  [author] 23 Jan, 2018 @ 9:57am 
Working on breaking a couple of features in sfm such that they can serve more than the intended purposes standing by.
Misuune 23 Jan, 2018 @ 7:20am 
how to break laws of physics in sfm? plz help
Kumquat [Velbud]  [author] 22 Jan, 2018 @ 4:54pm 
That’s correct.
sn0wsh00 22 Jan, 2018 @ 3:24pm 
rt_cameras = render target cameras, correct?