Acid shader

This shader dissolves any shape into nothingness, as if it were eaten away by acid.

This blog post will show the progression towards the end result.


Intro

I started off with this assignment pretty badly.
I had three different ideas but they were all shot down.
After talking with a teacher, who told me about the Navier-Stokes equations, I came to a new idea: a viscous liquid simulation.
Quickly after I started the research, I found out it had a metric ton of math behind it.

No biggie, I thought. Math can be understood with thorough explanations in research articles. But the articles were quite long and I couldn't get through them easily, so my motivation went down and down.

Then two weeks had passed and I had little to show.
So, after a good talk with the teacher, it was time to come up with a new idea, and this time an idea with little math behind it.

That new idea is this shader.
What I wanted was a shader that would make anything dissolvable over time. It needed to look like an object slowly dissolving into a liquid. I ditched the liquid part later, though, as you need math for a believable liquid.

I pitched my idea to the shader guild with this document:

I explained that the shader would be built from top to bottom,
starting with surface displacement and ending with additional effects like bubbles and smoke.

The full idea behind the shader was to use this for a game.
A game that would be about playing as a slime.
The slime would be able to dissolve anything that touches its body or is inside its body. I didn't pitch this part, because I thought it would cause confusion or fear of too big a scope.

The shader that I had in mind would look like a combination of these two effects:

Where you could see the object dissolve within the slime.
And the slime would be blobby to the ground.

Since this shader would be used for a game, the performance of the shader was quite important. Other than that, the shader should be able to dissolve any mesh and the slime should look transparent and blobby.

Knowing the time I had left to make the shader, I knew I had to cut the slime being blobby, as that involves ray marching, which I knew is not an easy task. And the acid was far more important than the visual looks of the slime.

So I planned to do as much as possible, within those few weeks I had left.
Before even considering this shader, I had already researched vertex displacement and melting. Especially melting, because I thought that would be the hard part.


Step 1: Vertex displacement

Acid Rain

Starting off with rain, a particle system is needed.

The particle system I used looks a bit more like green snow, because I used Unity's default particle texture.
I used a gradient from zero-alpha green to full black, as I found that gave the best additive displacement effect.
Other things to note about the particle system: it uses the Mesh render mode with a Unity plane mesh,
and it uses collision to shorten the lifetime of the particles when they hit the surface.

For knowing where the particles hit the surface I use a render texture and a camera which only renders the particles.

If you change the Clear Flags option to Don't Clear, the camera makes it look like the particles stick to the surface and never go away,
which ends up looking like a yellow mess.

Now this result is not really what I wanted.
So I changed the particle system to last longer and changed the Clear Flags option to Solid Color (Black).

This result was better as it allowed me to tweak the speed of the displacement.

Shader

For the shader, I used a standard surface shader.

The vertex displacement is simple.

#pragma surface surf Standard fullforwardshadows vertex:vert
        
void vert(inout appdata_full v)
{
   fixed4 RT = tex2Dlod(_RenderText, float4(getNewUV(mul(unity_ObjectToWorld, v.vertex).xyz), 0, 0));
   v.vertex.y -= v.normal.y * RT.g;
}

Using tex2Dlod you can sample the color value of the render texture.
Since the particles are green, you can simply move the vertex by the green value.
To get the correct UV coordinates, the world position needs to be remapped.

        float2 getNewUV(float3 worldPos) 
        {
            float2 uv = worldPos.xz;
            uv /= (_OrthoCamSize * 2);
            uv += 0.5;
            return uv;
        }

This will make the texture fit properly onto the surface.
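As a sanity check, the same remap can be written out in plain Python: an orthographic camera with half-size `_OrthoCamSize` covers world x/z from -size to +size, which should land exactly on UV 0 to 1 (the function name and values below are just example inputs).

```python
# Hypothetical Python recreation of the shader's getNewUV, just to
# verify the mapping from world x/z to render-texture UVs.
def get_new_uv(world_x, world_z, ortho_cam_size):
    u = world_x / (ortho_cam_size * 2) + 0.5
    v = world_z / (ortho_cam_size * 2) + 0.5
    return u, v

# A camera with orthographicSize 10 centered on the origin:
print(get_new_uv(0.0, 0.0, 10.0))     # center of the plane -> (0.5, 0.5)
print(get_new_uv(-10.0, 10.0, 10.0))  # a corner -> (0.0, 1.0)
```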

You only need a C# script which sets _OrthoCamSize to the correct value and assigns the _RenderText texture.

    void Awake()
    {
        Shader.SetGlobalTexture("_RenderText", rt);
        GL.Clear(true, true, Color.black);
        Shader.SetGlobalFloat("_OrthoCamSize", rtCam.orthographicSize);
    }

I use GL.Clear just to be sure the Render texture is cleared after hitting play.
The values are global because I have them as uniform in the shader.

        uniform sampler2D _RenderText;
        uniform float _OrthoCamSize;

As you could see in the previous pictures, the surface doesn't really look that displaced. Aside from the view angle, the problem is that the standard Unity plane doesn't have many vertices to displace.
To tackle that problem I used tessellation, the simplified kind:
simplified as in, it's done in a surface shader.
Unity adds a lot of helper functions to surface shaders, and tessellation is one of them.

Tessellation gives you more vertices to work with. Once you get to know it, it kind of becomes a habit to use it instead of making a mesh with the correct amount of vertices.

Anyways, to start off with tessellation (the surface shader way) we need a #include. This tells the shader that we want to use the functions from whatever .cginc file we include.

#include "Tessellation.cginc"

Having done that change the pragma line to

#pragma surface surf Standard fullforwardshadows vertex:vert addshadow tessellate:tessDistance

Now it’s time to add the functions for the tessellation

        float4 tessDistance(appdata_full v0, appdata_full v1, appdata_full v2)
        {
            float minDist = 10.0;
            float maxDist = _TessMaxDist;

            return UnityDistanceBasedTess(v0.vertex, v1.vertex, v2.vertex, minDist, maxDist, _Tess);
        }

I already added some optimization functions for the tessellation:
one that checks the color of each vertex and one that checks the distance between the view and the surface.
The current function already checks for distance.
This would be the standard tessellation one would use.

Optimization is quite needed for tessellation, as it basically gives you a lot more vertices to render. You do want nice detail, but if you can't see it, or nothing is happening to the surface at that moment, then the extra vertices aren't really needed there.

The color optimization needs to check both the color and the distance, so we can return the adjusted tessellation value to the necessary functions.

        float ColorCalcDistanceTessFactor(float4 v, float minDist, float maxDist, float tess, float4 color)
        {
            float3 worldPos = mul(unity_ObjectToWorld, v).xyz;
            float dist = distance(worldPos, _WorldSpaceCameraPos);
            float f = clamp(1.0 - (dist - minDist) / (maxDist - minDist), _MinTessFactor, 1);

            if (color.g < _ColorThreshold)
            {
                f = _MinTessFactor/10;
            }


            return f * tess;
        }

It checks whether the vertex has a certain amount of green. If it doesn't, we still tessellate it, but with a tinier tessellation factor.
And clamp is used for the distance check, so it returns a value between that tiny tessellation factor and 1.
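Here's the same logic as a small Python sketch, with illustrative values for the threshold and minimum factor (in the shader these are material properties, so the numbers below are assumptions):

```python
# Python sketch of the color-aware tessellation factor above.
# color_threshold and min_tess_factor stand in for _ColorThreshold
# and _MinTessFactor; the defaults here are made up.
def color_calc_distance_tess_factor(dist, min_dist, max_dist, tess, green,
                                    color_threshold=0.1,
                                    min_tess_factor=0.2):
    # Distance falloff, clamped between the minimum factor and 1.
    f = 1.0 - (dist - min_dist) / (max_dist - min_dist)
    f = max(min_tess_factor, min(f, 1.0))
    # Vertices without enough green get an even tinier factor.
    if green < color_threshold:
        f = min_tess_factor / 10
    return f * tess

print(color_calc_distance_tess_factor(10, 10, 50, 8, green=1.0))  # close + green -> 8.0
print(color_calc_distance_tess_factor(50, 10, 50, 8, green=1.0))  # far + green -> 1.6
print(color_calc_distance_tess_factor(10, 10, 50, 8, green=0.0))  # close, no green -> a much smaller factor
```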

This function is used to calculate the tessellation factor for each triangle.

        float4 ColorDistanceBasedTess(float4 v0, float4 v1, float4 v2, float minDist, float maxDist, float tess, float4 v0c, float4 v1c, float4 v2c)
        {
            float3 f;
            f.x = ColorCalcDistanceTessFactor(v0, minDist, maxDist, tess, v0c);
            f.y = ColorCalcDistanceTessFactor(v1, minDist, maxDist, tess, v1c);
            f.z = ColorCalcDistanceTessFactor(v2, minDist, maxDist, tess, v2c);

            return UnityCalcTriEdgeTessFactors(f);
        }

Instead of using UnityDistanceBasedTess, we use this function in tessDistance.


        float4 tessDistance(appdata_full v0, appdata_full v1, appdata_full v2)
        {
            float minDist = 10.0;
            float maxDist = _TessMaxDist;

            return ColorDistanceBasedTess(v0.vertex, v1.vertex, v2.vertex, minDist, maxDist, _Tess, v0.color, v1.color, v2.color);
        }

This is the end result!

All fun, but this is just the beginning of the final shader 🙂
Sources used: https://www.patreon.com/posts/25641162

Step 2: Vertex coloring on Collision

Hole maker

After having problems with 3D objects, I tried to find another approach to knowing where something touches a mesh so I could send that to the shader.
So what do I mean by hole maker?
Well basically, I found a tutorial which makes a hole based on how close an object is to the surface.

I changed that script and used it to color the vertices instead of disabling them. I made a simple vertex color shader to show what mine does.

Code

First, it's important to get all the information of the mesh you want and copy it over to new arrays.

    private void InitializeMesh()
    {
        mesh = new Mesh();

        filter = GetComponent<MeshFilter>();
        originalMesh = Application.isPlaying ? filter.mesh : filter.sharedMesh;

        originalVertices = originalMesh.vertices;
        originalNormals = originalMesh.normals;
        originalUvs = originalMesh.uv;
        originalTriangles = originalMesh.triangles;

        vertices = new Vector3[originalMesh.vertices.Length];
        normals = new Vector3[originalMesh.normals.Length];
        uvs = new Vector2[originalMesh.uv.Length];
        triangles = new int[originalMesh.triangles.Length];
        trianglesDisabled = new bool[triangles.Length];

        originalVertices.CopyTo(vertices, 0);
        originalNormals.CopyTo(normals, 0);
        originalUvs.CopyTo(uvs, 0);
        originalTriangles.CopyTo(triangles, 0);
        vertColors = new Color[originalMesh.vertices.Length];
        magnitudes = new float[originalMesh.vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            vertColors[i] = Color.black;
        }

        CreateMesh();
    }

In the last lines I also make sure all the vertex colors are black, since I use the red and green values for displacement later and the initial vertex color of any mesh is white.
Because of that, this function can also be called from the editor, so that meshes aren't already displaced by default.

In CreateMesh I give the mesh the adjusted value arrays

 private void CreateMesh()
    {
        mesh.SetVertices(originalMesh.vertices);
        mesh.SetColors(vertColors);
        mesh.SetNormals(originalMesh.normals);
        mesh.SetUVs(0, originalMesh.uv);
        mesh.SetTriangles(triangles, 0);
    }

I only really change the vertex colors, but who knows if I'll want to change more in the future; it's handy to keep.

Now for the actual function that does it all

    private Mesh GenerateMeshWithHoles()
    {
        maxReached = 1;
        for (int i = 0; i < vertices.Length; i++)
        {
            originalVertices[i] = AdjustPosition(vertices[i]);
            bool disable = (originalVertices[i] - objPos).magnitude < minDist;
            magnitudes[i] = (originalVertices[i] - objPos).magnitude;


            if (vertColors[i].r >= maxAmount) 
            {
                maxReached++;
                continue;
            }
            if (!disable) continue;

            currentColorChange = colorChange * speedMultiplier * Time.deltaTime;

            vertColors[i] += new Color(currentColorChange, currentColorChange, 0);

            latestColor = vertColors[i];
        }

        autoMelt = !(maxReached >= vertices.Length);
     
        triangles = originalTriangles;

        CreateMesh();

        return mesh;
    }

It does a simple distance check and increases the color of the vertices that are close to the given position.
To make sure the positions of the vertices are correct, they are adjusted with this function.

    private Vector3 AdjustPosition(Vector3 vertPos)
    {
        Matrix4x4 localToWorld = transform.localToWorldMatrix;
        vertPos = localToWorld.MultiplyPoint3x4(vertPos);

        return vertPos;
    }

This makes sure the positions account for the object's scale, rotation, and position.
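For illustration, this is what a 3x4 affine point transform like MultiplyPoint3x4 boils down to (the matrix values below are made up):

```python
# Sketch of a local-to-world point transform: multiply by the 3x3
# rotation/scale part, then add the translation column.
def multiply_point_3x4(m, p):
    # m is 3 rows of 4 values; p is an (x, y, z) tuple.
    return tuple(
        m[r][0] * p[0] + m[r][1] * p[1] + m[r][2] * p[2] + m[r][3]
        for r in range(3)
    )

# Uniform scale of 2 plus a translation of (1, 0, 0):
m = [[2.0, 0.0, 0.0, 1.0],
     [0.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 2.0, 0.0]]
print(multiply_point_3x4(m, (1.0, 1.0, 1.0)))  # -> (3.0, 2.0, 2.0)
```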
Source used: https://youtu.be/4r9IwX17UbY

Step 3: Correcting colors

So up till now, I’ve just shown the vertex colors.
But I wanted to have more control over it.
I had found a color lerp function that changed the color depending on the height of a vertex.
Instead of the height, I used the green value of the vertex,
because later I wouldn't only be displacing the y value anymore.

fixed4 c = lerp(_ColorBot, _ColorMid, posY / _Middle) * step(posY, _Middle);
c += lerp(_ColorMid, _ColorTop, (posY - _Middle) / (1 - _Middle)) * step(_Middle, posY);

fixed4 cWithVertex = (IN.vertexColor + c) ;

o.Emission = Unity_Hue_Degrees_float(cWithVertex.rgb, _DegreesHueChange);

I add the new color to the vertex color, which gives a sort of lighten effect that I liked.

Without c
With c
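The two-step lerp above can be checked in plain Python for a single channel (the channel values and middle point below are illustrative, not the shader's actual properties):

```python
# Python sketch of the height-based two-segment lerp, using the same
# step() trick as HLSL to select which segment applies.
def step(edge, x):
    return 1.0 if x >= edge else 0.0

def lerp(a, b, t):
    return a + (b - a) * t

def height_color(pos_y, bot, mid_c, top, middle):
    # Below the middle: blend bottom -> middle; above it: middle -> top.
    c = lerp(bot, mid_c, pos_y / middle) * step(pos_y, middle)
    c += lerp(mid_c, top, (pos_y - middle) / (1 - middle)) * step(middle, pos_y)
    return c

print(height_color(0.25, 0.0, 1.0, 0.0, 0.5))  # halfway up the lower segment -> 0.5
```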

In the last line you see that I use Unity_Hue_Degrees_float.
This comes from Shader Graph's generated code.
For simple tricks that are a bit complicated, I tend to grab the Shader Graph version, if it works, that is.

Step 4: Melting

If you look closely, you'll see they aren't really melting yet.
This is done by displacing the vertices on the x and z coordinates.
The amount is calculated with this function:

        float GetMelt(float meltingPoint, float minVal, float minDistance)
        {
            float melt = (minVal - meltingPoint) / minDistance;
            melt = 1 - saturate(melt);
            melt = pow(melt, _MeltCurve);

            return melt;
        }

Saturate keeps a value between 0 and 1,
and pow gives it a smooth progression,
as you can see in this graph:

https://www.desmos.com/calculator/gxt64f0i3e
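Out of curiosity, the same falloff can be replicated in Python (_MeltCurve is a material property in the shader; the exponent 2.0 below is just an example value):

```python
# Python replication of GetMelt to see the shape of the curve.
def get_melt(melting_point, min_val, min_distance, melt_curve=2.0):
    melt = (min_val - melting_point) / min_distance
    melt = 1.0 - max(0.0, min(melt, 1.0))  # saturate, then invert
    return melt ** melt_curve              # pow shapes the falloff

# At the melting point itself, melt is at full strength:
print(get_melt(0.0, 0.0, 1.0))  # -> 1.0
# One full min_distance above it, there is no melt at all:
print(get_melt(0.0, 1.0, 1.0))  # -> 0.0
```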

For the actual displacement this function is used

        float4 getNewVertPosFromWorld(float4 worldSpacePos, float3 worldSpaceNormal, float meltMin)
        {
            float melt = GetMelt(_MeltY, meltMin, _MeltDistance);
            worldSpacePos.xz += (worldSpaceNormal.xz * melt) * _Modifier;

            return mul(unity_WorldToObject, worldSpacePos);
        }

I use the world space position because I use it for the other displacements as well.
The tutorial I used also wanted to make sure reflections were right; if you want that, this function is needed to recalculate the new normals.

        float3 getNewNormal(appdata_full v, float4 vertPos, float meltMin)
        {

	float4 bitangent = float4(cross(v.normal, v.tangent.xyz), 0);

            float vertOffset = 0.1;
            float4 posAndTangent = getNewVertPosition(v.vertex + v.tangent * vertOffset, v.normal, meltMin);
            float4 posAndBitangent = getNewVertPosition(v.vertex + bitangent * vertOffset, v.normal, meltMin);

            float4 newTangent = posAndTangent - vertPos;
            float4 newBitangent = posAndBitangent - vertPos;

            return cross(newTangent, newBitangent);
        }

If you don’t use reflections then it’s not really noticeable, but it’s good to have just in case!
Source used: Cone wars Dev Diary

Step 5: Adding bubbles Multi-pass

Multi-pass basically means adding more passes to a shader.
The shader then renders the mesh once per pass, x passes in total.
Whatever you do in a pass gets rendered.
So, for example, if you have one pass that displaces the vertices to the left and another that displaces them to the right,
you'd see two meshes: one to the left and one to the right.

For fragment shaders you need to use

Pass
{
Name..
Blend..
Tags..
CGPROGRAM
ShaderStuff
ENDCG
}

For surface shaders you can omit the Pass block

Name..
Blend..
Tags..
CGPROGRAM
ShaderStuff
ENDCG

You can combine fragment shader passes with surface shader passes and vice versa,
as well as two fragment passes or two surface passes.

I tend to go for surface shaders, but for the bubbles, which are done with a geometry shader, there's no option other than a fragment shader.

Geometry shader

To start with a geometry shader, you need a vertex function which returns an initialized input struct,
as is normal in a fragment shader.

GS_INPUT VS_Main(appdata_full v)
{
	GS_INPUT output = (GS_INPUT)0;

	output.pos =  v.vertex;
	output.normal = v.normal;
	output.tex0 = float2(0, 0);
	output.col = v.color;
	return output;
}

The actual geometry function is quite different from the vertex function

[maxvertexcount(4)]
void GS_Main(point GS_INPUT p[1], inout TriangleStream<FS_INPUT> triStream)
{
	float3 up = UNITY_MATRIX_IT_MV[1].xyz;
	float3 right = -UNITY_MATRIX_IT_MV[0].xyz;
	p[0].pos = mul(unity_WorldToObject, p[0].pos) ;

	float dist = length(ObjSpaceViewDir(p[0].pos));
	

	float halfS = 0.5f * (_Size + (dist * _MinSizeFactor));

	float4 v[4];
	v[0] = float4(p[0].pos + halfS * right - halfS * up, 1.0f);
	v[1] = float4(p[0].pos + halfS * right + halfS * up, 1.0f);
	v[2] = float4(p[0].pos - halfS * right - halfS * up, 1.0f);
	v[3] = float4(p[0].pos - halfS * right + halfS * up, 1.0f);

	FS_INPUT pIn;

	pIn.col = p[0].col;

	pIn.pos = UnityObjectToClipPos(v[0]);
	pIn.tex0 = float2(1.0f, 0.0f);
	triStream.Append(pIn);

	pIn.pos = UnityObjectToClipPos( v[1]);
	pIn.tex0 = float2(1.0f, 1.0f);
	triStream.Append(pIn);

	pIn.pos = UnityObjectToClipPos( v[2]);
	pIn.tex0 = float2(0.0f, 0.0f);
	triStream.Append(pIn);
	 
	pIn.pos = UnityObjectToClipPos( v[3]);
	pIn.tex0 = float2(0.0f, 1.0f);
	triStream.Append(pIn);
}

Starting off, I need to say that I got this shader from a Unity forum thread. It's a billboard shader, and it works almost correctly.

[maxvertexcount(4)]

This determines how many vertices the geometry function is allowed to emit per input primitive.
You don't need to reach the max amount, but you can't go beyond it.
Other than that we simply want to make quads that look at you.
The V array is used for the vertex positions of the quad.

float4 v[4];
v[0] = float4(p[0].pos + halfS * right - halfS * up, 1.0f);
v[1] = float4(p[0].pos + halfS * right + halfS * up, 1.0f);
v[2] = float4(p[0].pos - halfS * right - halfS * up, 1.0f);
v[3] = float4(p[0].pos - halfS * right + halfS * up, 1.0f);

To make it look at the view, UNITY_MATRIX_IT_MV is used, which is the inverse transpose of the model-view matrix

float3 up = UNITY_MATRIX_IT_MV[1].xyz;
float3 right = -UNITY_MATRIX_IT_MV[0].xyz;

ObjSpaceViewDir is used to change the size depending on how far the quad is from the view, so the bubbles don't shrink away to nothing at a distance.

float dist = length(ObjSpaceViewDir(p[0].pos));

Appending the corrected positions for each vertex to the trianglestream is the last step of the geometry function.

Now what is the problem of this function?

A cube

The function takes points instead of triangles as input,
which makes it ignore certain vertices.
As you can see, the cube is missing 3 bubbles.

To fix this, the point input should become triangle, and maxvertexcount should be 12,
three times the initial value.

[maxvertexcount(4*3)]
void GS_Main(triangle GS_INPUT p[3], inout TriangleStream<FS_INPUT> triStream)

And the whole body needs to loop over the three vertices of the GS_INPUT

for (int i = 0; i < 3; i++)
{


	float3 up = UNITY_MATRIX_IT_MV[1].xyz;
	float3 right = -UNITY_MATRIX_IT_MV[0].xyz;
	p[i].pos = mul(unity_WorldToObject, p[i].pos);
	float dist = length(ObjSpaceViewDir(p[i].pos));

	float halfS = 0.5f * (_Size + (dist * _MinSizeFactor));
	float4 v[4];
	v[0] = float4(p[i].pos + halfS * right - halfS * up, p[i].pos.z);
	v[1] = float4(p[i].pos + halfS * right + halfS * up, p[i].pos.z);
	v[2] = float4(p[i].pos - halfS * right - halfS * up, p[i].pos.z);
	v[3] = float4(p[i].pos - halfS * right + halfS * up, p[i].pos.z);


	FS_INPUT pIn;

	pIn.col = p[i].col;

	pIn.pos = UnityObjectToClipPos(v[0]);
	pIn.tex0 = float2(1.0f, 0.0f);
	triStream.Append(pIn);

	pIn.pos = UnityObjectToClipPos(v[1]);
	pIn.tex0 = float2(1.0f, 1.0f);
	triStream.Append(pIn);

	pIn.pos = UnityObjectToClipPos(v[2]);
	pIn.tex0 = float2(0.0f, 0.0f);
	triStream.Append(pIn);

	pIn.pos = UnityObjectToClipPos(v[3]);
	pIn.tex0 = float2(0.0f, 1.0f);
	triStream.Append(pIn);

	triStream.RestartStrip();
}

And lastly, triStream.RestartStrip(); is important.

With strip
Without strip

If you don't call it, the quads get connected into one long strip.
Sources used:

https://forum.unity.com/threads/billboard-geometry-shader.169415/
https://gamedev.stackexchange.com/questions/97009/geometry-shader-not-generating-geometry-for-some-vertices

Adding tessellation in a fragment shader

Tessellation.. here it is again!
This time it's different:
since Unity won't help you with tessellation in a fragment shader, you have to write your own functions for it.

Introducing the three pragmas for tessellation:

#pragma vertex TessellationVertexProgram
#pragma hull HullProgram
#pragma domain DomainProgram

Instead of using your usual vertex pragma, you need to use these.
But later on, the domain function can call your old vertex function,
so nothing breaks!

Starting off, a new struct is needed

struct ControlPoint
{
	float4 pos		 : INTERNALTESSPOS;
	float2 uv		 : TEXCOORD0;
	float4 col       : COLOR;     
	float4 tangent   : TANGENT;  
	float3 normal    : NORMAL;   
};

This will be initialized and returned by the new vertex program.

ControlPoint TessellationVertexProgram(appdata_custom v)
{
	ControlPoint p;
	p.pos = v.vertex;
	p.col = v.color;
	p.uv = v.texcoord;
	p.tangent = v.tangent;
	p.normal = v.normal;
	return p;
}

After that comes the hull function

[UNITY_domain("tri")]
[UNITY_outputcontrolpoints(3)]
[UNITY_outputtopology("triangle_cw")]
[UNITY_partitioning("fractional_odd")]
[UNITY_patchconstantfunc("PatchConstantFunction")]
ControlPoint HullProgram(InputPatch<ControlPoint, 3> patch, uint id : SV_OutputControlPointID)
{
	return patch[id];
}

Now you can change the partitioning method to “integer” or “fractional_even”. This changes how the tessellation transitions from an old tessellation value to a new one. With integer it goes in jumps, since the float tessellation factor gets rounded to whole steps; with fractional it transitions more smoothly.
Even differs from odd in that it only starts tessellating after a tessellation factor of 2.

Now there's a function in between the hull and domain stages: the patch constant function.

struct TessellationFactors
{
	float edge[3] : SV_TessFactor;
	float inside : SV_InsideTessFactor;
};

TessellationFactors PatchConstantFunction(InputPatch<ControlPoint, 3> patch)
{
	float p0factor = patch[0].col.r + patch[0].col.g;
	float p1factor = patch[1].col.r + patch[1].col.g;
	float p2factor = patch[2].col.r + patch[2].col.g;
	float factor = (p0factor + p1factor + p2factor);
	TessellationFactors f;
	f.edge[0] = factor > 0.0 ? _UniformTess : 1.0;
	f.edge[1] = factor > 0.0 ? _UniformTess : 1.0;
	f.edge[2] = factor > 0.0 ? _UniformTess	: 1.0;
	f.inside =  factor > 0.0 ? _UniformTess : 1.0;
#if _TESTTESSELLATION_ON
	f.edge[0] = _UniformTess;
	f.edge[1] = _UniformTess;
	f.edge[2] = _UniformTess;
	f.inside =  _UniformTess;
#endif
	return f;
}

In this function you decide how much each edge is allowed to tessellate.
But I don't really care about individual edges, so I use one value for all of them.
For optimization (and a cool transition) I check the colors of each patch, so tessellation only happens once the displacement reaches those patches.

Last but not least, the domain function

[UNITY_domain("tri")]
GS_INPUT DomainProgram(TessellationFactors factors, OutputPatch<ControlPoint, 3> patch, float3 barycentricCoordinates : SV_DomainLocation)
{
	appdata_custom v;

	v.vertex = 
		patch[0].pos * barycentricCoordinates.x +
		patch[1].pos * barycentricCoordinates.y +
		patch[2].pos * barycentricCoordinates.z;

	v.texcoord.xy =
		patch[0].uv * barycentricCoordinates.x +
		patch[1].uv * barycentricCoordinates.y +
		patch[2].uv * barycentricCoordinates.z;

	v.normal =
		patch[0].normal * barycentricCoordinates.x +
		patch[1].normal * barycentricCoordinates.y +
		patch[2].normal * barycentricCoordinates.z;

	v.tangent =
		patch[0].tangent * barycentricCoordinates.x +
		patch[1].tangent * barycentricCoordinates.y +
		patch[2].tangent * barycentricCoordinates.z;

	v.color =
		patch[0].col * barycentricCoordinates.x +
		patch[1].col * barycentricCoordinates.y +
		patch[2].col * barycentricCoordinates.z;

	return vert(v);
}

It's pretty simple: you return whatever struct your old vertex function takes.
Initialize your appdata with the sum of each patch attribute multiplied by the barycentric coordinates.
This interpolates the appdata across the newly generated vertices.
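As a sketch of that interpolation, here is the barycentric weighting in plain Python (tuples stand in for the shader's float vectors; the points and weights are made-up examples):

```python
# Barycentric interpolation, as the domain function does it: each
# attribute is the weighted sum of the three patch control points.
def bary_interp(a, b, c, bary):
    bx, by, bz = bary
    return tuple(pa * bx + pb * by + pc * bz for pa, pb, pc in zip(a, b, c))

# Weights must sum to 1; (0.5, 0.25, 0.25) lands inside the triangle:
p = bary_interp((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 4.0, 0.0),
                (0.5, 0.25, 0.25))
print(p)  # -> (1.0, 1.0, 0.0)
```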

Sources used: https://catlikecoding.com/unity/tutorials/advanced-rendering/tessellation/

Bubble size change

For the size change of the bubble I used the blue channel of the vertex color in the geometry function.

float halfS = 0.5f * (p[i].col.b *(rand(p[i].tex0.xy, _MinSize, _MaxSize)) + (dist * _MinSizeFactor));

The blueness is done by a custom curve texture

float x = _Time.y * rand(v.texcoord.xy, 0.75,1);
float blueness = tex2Dlod(_CurveTex, float4(x * _TimeSpeed, 0, 1, 0));

Rand works just like Random.Range

float rand(float2 seed, float Min, float Max) {
	float randomno = frac(sin(dot(seed, float2(12.9898, 78.233))) * 43758.5453);
	return lerp(Min, Max, randomno);
}
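For what it's worth, the same hash can be reproduced in Python (an assumption: GPU float precision differs, so exact values won't match a real shader, but the frac-sin-dot construction is the same):

```python
import math

# Python version of the shader's rand: a classic frac(sin(dot)) hash
# remapped into [lo, hi).
def frac(x):
    return x - math.floor(x)

def rand(seed, lo, hi):
    d = seed[0] * 12.9898 + seed[1] * 78.233
    r = frac(math.sin(d) * 43758.5453)  # pseudo-random in [0, 1)
    return lo + (hi - lo) * r           # lerp into [lo, hi)

# Same seed gives the same value, which is why the bubbles are stable:
print(rand((0.3, 0.7), 0.75, 1.0) == rand((0.3, 0.7), 0.75, 1.0))  # True
```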

The custom curve texture is made by a script I wrote that bakes an animation curve into a texture.
It was frustrating coming up with sine functions in Desmos, so I ended up making that instead.

To get the correct color of the curve

private Color GetColorFromCurve(float time)
{
    return Color.Lerp(Color.black, Color.white, curve.Evaluate(time));
}

To make a color map out of the curve

private Color[] GetColorMapFromCurve(int samples) 
{
    Color[] clrMap = new Color[samples];
    for (int i = 1; i < clrMap.Length+1; i++)
    {
        clrMap[i-1] = GetColorFromCurve((float)i/samples);
    }
    return clrMap;
}

To save a texture in Unity (the hard-coded file name is intentional)

private void SaveTexture(Texture2D texture) 
{
    string filePath = Application.dataPath;
    string fileName = "curve.png";
    string fullPath = filePath + "/" + fileName;
    byte[] bytes = texture.EncodeToPNG();
    // Writes all PNG bytes in one go instead of a byte-by-byte loop.
    File.WriteAllBytes(fullPath, bytes);
    DestroyImmediate(texture);
    #if UNITY_EDITOR
    AssetDatabase.Refresh();
    #endif
}

And lastly to create the texture!

public void BakeCurve(int samples)
{
    Texture2D noiseTexture = new Texture2D(samples, 1);
    noiseTexture.wrapMode = TextureWrapMode.Clamp;
    noiseTexture.SetPixels(GetColorMapFromCurve(samples));
    noiseTexture.Apply();
    SaveTexture(noiseTexture);
}

The created texture:

Result

Vertex coloring

Using the vertex coloring script, random vertices are selected, and their color increases within a radius over time.

Vertex displacing

The vertices are displaced along the y-axis according to their green and red values.

Shrinking

I thought this step wouldn't need a whole chapter by itself, as it is only one line of shader code

worldSpacePos *= (((_MaxHeight * 2) - clamp((v.color.g+v.color.r) + _MaxHeight, _MaxHeight, _MaxHeight * 2 - _MinMeltHeight)));

It scales the vertex positions based on their height and color.

Melting

The melting doesn't really melt anymore because of the shrink,
but it does make the shrink smoother.

Bubbles

The bubbles really increase the visual quality of the shader.

Clipping

The clipping is a nice finishing touch:
in acid, things dissolve and leave behind little evidence.

Conclusion

I've learned a lot about making shaders, but I feel I've only scratched the surface. There's so much you could do with shaders, it's insane.

The shader I ended up making is not entirely finished.
The bubbles are good but they remind me too much of soap bubbles.
Other than that, the vertex coloring script has settings you need to tune for it to work with a given mesh. Ideally these would be set automatically by first analyzing the mesh and calculating the settings it needs to dissolve nicely.

As performance is an important factor, the tessellation should be tuned automatically as well; my computer is fairly high-end, so I'm not sure whether any PC could handle the shader.

The last thing that would need to be added is the slime.
I had tried the ray marching method, but the tutorial I found was for Unreal, and I thought I could translate it to Unity. It didn't work out that way.
