Lighting, specular maps and tangent-space normal mapping.

or, it's a textured quad Jim, but not as we know it

Took some time out of my busy schedule (ha!) to tinker with some shader maths. It struck me that I'd never actually written anything that does correct tangent-space normal mapping; my terrain shader was really just a hack.

For that shader I just sampled the normal map straight onto the object, without transforming the sampled normal to the surface. Admittedly, it was only there to give the terrain a certain amount of texture, which it does quite nicely, and at a much lower cost than calculating the tangent basis properly.

This time round I'm implementing it properly. My 'vision' for this game requires a good quality lighting model; the geometry isn't going to be massively complex, so the lighting will make the difference.

This shot is just the ground plane of what will be the blacksmith's shop. It's nothing complicated yet, but I'm definitely pleased with the lighting and the texture.

[image]

The shader code is pretty straightforward; there is some optimisation to do, but on the whole it's reasonably tidy. Inputs to the shader are your usual lighting parameters, with an extra vec2 v2LightIsPoint which allows the shader to render both directional and point lights; the diffuse, specular, and normal map textures; and the interpolated position, tex-coord, surface normal, and view direction.

I think the view direction could actually be passed as a constant rather than interpolated, for a slight reduction in accuracy without much visual change.

uniform sampler2D sDiffuseTex;
uniform sampler2D sNormalMap;
uniform sampler2D sSpeclTex;

// Material/Light colours
uniform vec4 v4Ambient;
uniform vec4 v4LightCol;
uniform vec4 v4LightCol2;

// 2 lights - each can be point (component == 1.0) or directional (component == 0.0)
uniform vec2 v2LightIsPoint;
uniform vec3 v3LightDir;
uniform vec3 v3LightDir2;
uniform float fShininess;

varying vec3 v3Position;
varying vec2 v2TexCoord;
varying vec3 v3Normal;
varying vec3 v3ViewDir;

void main(void)
{   
    vec3 q0 = dFdx(v3Position.xyz);
    vec3 q1 = dFdy(v3Position.xyz);
    vec2 st0 = dFdx(v2TexCoord.st);
    vec2 st1 = dFdy(v2TexCoord.st);
    vec3 S = normalize( q0 * st1.t - q1 * st0.t);
    vec3 N = normalize(v3Normal); // interpolated normals need renormalising
    vec3 T = cross(N, S);
    //S = cross(T, N); // Not strictly needed, but re-orthogonalises the basis (note the order - cross(N, T) would flip S).

    vec4 v4Diffuse = texture2D(sDiffuseTex, v2TexCoord);

    // Expand the sampled normal from 0..1 to -1..1 and normalise
    vec3 v3NrmlMap = normalize(2.0 * texture2D(sNormalMap, v2TexCoord).rgb - 1.0);

    // Transform by the tangent basis to take the sampled normal into world space
    mat3 m3TSToWS; // columns are the tangent, bitangent, and normal
    m3TSToWS[0] = S;
    m3TSToWS[1] = T;
    m3TSToWS[2] = N;
    v3NrmlMap = m3TSToWS * v3NrmlMap;

    // Lighting calculations
    float fSpecularIntensity = texture2D(sSpeclTex, v2TexCoord).r;
    vec3 v3View = normalize(v3ViewDir); // varyings are read-only in fragment shaders, so copy to a local

    // Light 1 - v2LightIsPoint.x selects between a direction and a point position
    vec3 v3LightVec = normalize(v3LightDir - v3Position * v2LightIsPoint.x);
    float fNDotL = clamp(dot(v3NrmlMap, v3LightVec), 0.0, 1.0);

    // Specular Calc 1 - r = 2(N.L)N - L; clamp before pow (pow of a negative is undefined)
    vec3 r = normalize(2.0 * dot(v3LightVec, v3NrmlMap) * v3NrmlMap - v3LightVec);
    float fDotProduct = max(dot(r, v3View), 0.0);
    vec4 v4Specular1 = fSpecularIntensity * v4LightCol * pow(fDotProduct, fShininess);

    // Light 2
    v3LightVec = normalize(v3LightDir2 - v3Position * v2LightIsPoint.y);
    float fNDotL2 = clamp(dot(v3NrmlMap, v3LightVec), 0.0, 1.0);

    // Specular Calc 2
    r = normalize(2.0 * dot(v3LightVec, v3NrmlMap) * v3NrmlMap - v3LightVec);
    fDotProduct = max(dot(r, v3View), 0.0);
    vec4 v4Specular2 = fSpecularIntensity * v4LightCol2 * pow(fDotProduct, fShininess);

    gl_FragColor = v4Ambient
                 + fNDotL  * v4Diffuse * v4LightCol
                 + fNDotL2 * v4Diffuse * v4LightCol2
                 + v4Specular1
                 + v4Specular2;
}
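For reference, the specular terms above are just the classic reflection formula r = 2(N·L)N − L. Here's a quick sanity check of that formula outside the shader (a Python sketch of the same maths, nothing to do with the engine): mirroring a light vector arriving at 45° about a flat +Z normal should simply flip its horizontal component.

```python
import math

def reflect(l, n):
    """r = 2(N.L)N - L: mirror the light vector about the surface normal."""
    d = 2.0 * sum(li * ni for li, ni in zip(l, n))
    return tuple(d * ni - li for li, ni in zip(l, n))

# A light vector at 45 degrees over a flat +Z normal...
s = math.sqrt(0.5)
r = reflect((s, 0.0, s), (0.0, 0.0, 1.0))
# ...reflects to (-s, 0, s): same elevation, mirrored horizontally.
```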

I also noticed an interesting little ditty: the graphics card performed better when using vec4s than vec3s for colours. I had tried writing the shader with 3-component vectors and just doing vec4(v3Colour, 1.0) in the final accumulation step, thinking it might make better use of the ALU pipes, but it seems it's probably better to let the compiler work it out itself! (I might investigate this further on some newer architectures, as I noticed this coding against an old ATI HD4650 - the difference was 1290fps vs 1250fps, around ~3%.)

The next step on the 3D front is to start creating the room, which is currently going to be a largely extruded, grid-based affair. But first I'm going to work on a bit of game logic: the first prototypes of the sword crafting and smithing 'mini games'.

Posted By ajw.walters at 12:36:12 on 2013-09-04. Comments (0)

#1GAM, Fastdelegates, and a Nice Texture

or, A month-long sabbatical in high-level land.

Ok, so about that staying on track and finishing something? Yeah well...

Instead I thought I'd try something at a different level of coding - more high-level gameplay coding. To that end I had a go at a #1GAM entry, where you complete a game in a month. Well, it didn't happen (or rather, I'm not going to finish it tonight, so it's pretty damn certain it won't happen).

It hasn't been a waste of time though. I've toyed around with some ideas I'd had, and I've found a way of coding that really, really works for me: using the FastDelegate library, so I can use delegates/closures in my program to pass around functions and objects for event handling. One nice thing that falls out of this is very loose coupling (which, my god, is like being in Zen compared to the usual nest-of-C that I work with), and from those principles fall a number of really useful things - including transfer of ownership, 'self-cleaning' objects, and some really tightly coded libraries that let you turn out some great effects in a dead-simple 'set-up-and-let-go' fashion. As a proof of concept I've written a few pieces of game logic and some UI code, and I'm still reeling from how nice and care-free that code is. I'll probably write about it more on its own once I've finished trying things out.
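FastDelegate itself is a C++ library, but the pattern is easy to show in miniature. Here's a hedged Python sketch of the event-handling style I mean (the Button/Forge names are made up for illustration): the emitter stores bound callables and fires them blind, so it never needs to know anything about the listener's type - that's where the loose coupling comes from.

```python
class Button:
    """The emitter knows nothing about its listeners - it just fires whatever callables it holds."""
    def __init__(self):
        self._on_click = []

    def connect(self, handler):
        """Store any zero-argument callable: a bound method, closure, or lambda."""
        self._on_click.append(handler)

    def click(self):
        for handler in self._on_click:
            handler()


class Forge:
    """A listener with state of its own; Button never sees this type."""
    def __init__(self):
        self.heat = 0

    def stoke(self):
        self.heat += 10


forge = Forge()
button = Button()
button.connect(forge.stoke)  # the bound method acts as the delegate
button.click()
button.click()               # forge.heat is now 20
```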

In the meantime, I felt like a doodle, so I produced a really nice (well, I think so anyway) texture for the floor of my blacksmith's:

[image]

Posted By ajw.walters at 20:20:05 on 2013-07-29. Comments (0)

Reconstructing Linear World-Space Z Values from the Depth Buffer

or, What? That ain't my Z, fool.

I've been playing around a bit more with my SSAO implementation. It's still rather naive, but I wanted to make sure that the underpinnings were correct - it just didn't seem like the linearised z values were right. They were 'pretty much linear', but they were not the values I was expecting, and certainly not in a range that made much sense.

At first I just went with some approaches that people had used on the internet, but none of them seemed right, or I wasn't happy with the proofs. So I went back to first principles, wrote some debugging code in my shader so I could inspect the range and mapping of the calculated values, and grabbed a notepad (and Excel for some number crunching). It seems the differences I found were idiosyncrasies of using GL rather than DX: GL goes from a right-handed coordinate system pre-projection to a left-handed coordinate system post-projection, and this involves a number of strategically placed minus signs.

Some of the code snippets I'd found on the internet were either missing the minuses, or were just not expecting the change of coordinate system in the first place (anything in DX is left->left).

So, pictures. The left-hand side of the image shows the actual z coordinates written to a texture, and the right-hand side shows the linear z values reconstructed from the z buffer (GL_DEPTH_ATTACHMENT). The image is of a piece of terrain; I've set the near and far planes so both ends are visibly clipped. The debugging code overlays the reconstructed z values on the actual render, colouring bands around certain values (pink at 0, orange at 1, then other coloured bands at 0.125, 0.25, 0.5, 0.75, and 0.875).

[image]

And here is the GLSL code - d is a value sampled straight from the depth buffer (that is, the real depth buffer obtained by binding GL_DEPTH_ATTACHMENT):

float LineariseDepth(float d)
{
    float f = g_fFarClip;
    float n = g_fNearClip;
    float A = -(f + n) / (f - n);
    float B = -2.0 * f * n / (f - n);

    // Scale/Bias 0..1 -> -1..1
    d = d * 2.0 - 1.0;

    // Linearise - value will now be in the range (n..f)
    d = B / (A + d);

    return d;
}
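To convince myself the algebra holds, here's the same maths round-tripped outside the shader (a Python sketch; project_depth is my own helper recreating the GL projection for testing, not engine code): push an eye-space distance through the perspective terms into a 0..1 depth value, then back through LineariseDepth, and you should get the original distance out.

```python
def project_depth(z_dist, n, f):
    """Helper: forward-map a positive eye-space distance (n..f) to a 0..1
    depth-buffer value using the standard GL perspective terms A and B."""
    A = -(f + n) / (f - n)
    B = -2.0 * f * n / (f - n)
    z_eye = -z_dist                   # GL eye space looks down -Z
    d_ndc = (A * z_eye + B) / -z_eye  # perspective divide by w_clip = -z_eye
    return d_ndc * 0.5 + 0.5          # NDC -1..1 -> depth buffer 0..1

def linearise_depth(d, n, f):
    """The shader's LineariseDepth, line for line."""
    A = -(f + n) / (f - n)
    B = -2.0 * f * n / (f - n)
    d = d * 2.0 - 1.0                 # scale/bias 0..1 -> -1..1
    return B / (A + d)                # back to a linear distance in (n..f)

n, f = 0.1, 100.0
recovered = linearise_depth(project_depth(25.0, n, f), n, f)  # ~25.0
```

The endpoints behave too: a depth-buffer value of 0 comes back as the near plane, and 1 as the far plane.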

Oh, and as an extra, here's the DebugBoundaries function I used for debugging. It's not pretty, but it's certainly useful. I also often use vec4(mod(fVal, 100.0)/100.0), which renders a value as repeating gradient bands of a particular width - it helps work out what scale of numbers you're working with when you get values you can't fathom.

vec4 DebugBoundaries(float fVal)
{
    if (abs(fVal) < 0.01)
    {
        return vec4(1,0,1,1);
    }
    else if (abs(fVal) < 0.02)
    {
        return vec4(1,0.5,1,1);
    }
    else if (abs(fVal - 0.125) < 0.005)
    {
        return vec4(0.1,0.5,0.0,1);
    }
    else if (abs(fVal - 0.25) < 0.005)
    {
        return vec4(0.3,1,0.0,1);
    }
    else if (abs(fVal - 0.5) < 0.005)
    {
        return vec4(0.5,0.2,0.0,1);
    }
    else if (abs(fVal - 0.75) < 0.005)
    {
        return vec4(0.0,1,0.4,1);
    }
    else if (abs(fVal - 0.875) < 0.005)
    {
        return vec4(0.0,0.5,0.2,1);
    }
    else if (abs(fVal + 0.5) < 0.01)
    {
        return vec4(0,0.2,0.5,1);
    }

    else if (abs(fVal - 1) < 0.01)
    {
        return vec4(1,0.4,0.0,1);
    }
    else if (abs(fVal - 1) < 0.02)
    {
        return vec4(1,0.6,0.0,1);
    }

    else if (abs(fVal + 1) < 0.01)
    {
        return vec4(0,0.4,1,1);
    }

    return vec4(fVal);
}

Update:

Of course, the point of all this is to recreate the view-space position from the depth buffer. To test that the position is correct, I coloured each fragment based on its view-space distance from a fixed point relative to the camera - ideally this should colour any pixels within a set sphere of influence. And it seems my maths was spot on :D

[image]

Next job: improve the SSAO by using an oriented hemispheric kernel instead of a non-rotated sphere, to get nicer results with a smaller number of samples.
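The kernel for that is simple enough to generate CPU-side. Here's a rough Python sketch of what I have in mind (the sample count, rejection scheme, and scaling curve are placeholder choices, not final code): random directions in the unit hemisphere around +Z, scaled so samples cluster near the shaded point.

```python
import math
import random

def hemisphere_kernel(count, seed=0):
    """Random sample offsets in the unit hemisphere around +Z, scaled so they
    cluster towards the origin (an accelerating curve over the kernel index)."""
    rng = random.Random(seed)
    samples = []
    for i in range(count):
        # Rejection-sample a direction in the upper half of the unit sphere
        while True:
            x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
            z = rng.uniform(0.0, 1.0)
            d2 = x * x + y * y + z * z
            if 1e-4 < d2 <= 1.0:
                break
        inv_len = 1.0 / math.sqrt(d2)
        # Scale in 0.1..1.0, quadratic so more samples land near the point
        scale = 0.1 + 0.9 * (i / count) ** 2
        samples.append((x * inv_len * scale, y * inv_len * scale, z * inv_len * scale))
    return samples

kernel = hemisphere_kernel(16)
```

At shading time each offset would then be rotated into a basis built around the surface normal, much like the tangent basis in the lighting shader.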

Posted By ajw.walters at 15:22:49 on 2013-06-18. Comments (0)

Pixel drop-outs on Radeon 46xx

or, There's a glitch in the matrix...

Beautiful accident - SSAO drawn after the fog, made the cliffs look awesome

Had some fun today playing with some shaders. I've been converting to a deferred shading system, as this better supports some of the effects I want to employ, such as screen-space ambient occlusion (SSAO) and some particular lighting effects from some old 'toys' I'd made - for example, line-based light emitters, which work really well for laser weapon effects.

So there I was, tinkering along, when I noticed the GPU was dropping out on some pixel blocks - in particular on elements where some of my HUD/collision-avoidance colouring was being applied:

[image]

After panning the camera around to see what effect it had, I zoomed in and - whoa! Well, that's one new and exciting way to draw wireframes!

[image]

Upon further investigation, it seems I hadn't actually installed a driver for this card at all. The Radeon 46xx series driver now comes in a legacy driver package; I'd run the standard package without realising that all I'd achieved was to install the AMD driver installer, and I had in fact been using the default Windows 7 in-box driver.

Once that was rectified, and with the SSAO pass displaying something other than black, I was presented with a rather fortunate accident. I had some shader code in the wrong order, which caused the SSAO to be applied after the fog, BUT that, combined with my 'inaccessible terrain' overlay, made the hillsides look pretty damn swish! Now I just have to get them to look that good on purpose!

Posted By ajw.walters at 15:38:04 on 2013-05-31. Comments (0)

HDMI Woes - NVIDIA and the EDID

It sounded so simple!

I have a Sony STR-DH800 AV receiver, which does HDMI switching. I figured: great, I'll plug my laptop, Xbox, etc. into the back, then go HDMI->DVI into my monitor - fewer cables, less fuss!

But no, no, nothing is ever simple when the letters 'HD' are involved. The following buggerance factors apply:

  1. The receiver expects people to use it only with TVs and DVD players, and so exclusively (and forcefully) supports only 'TV' style resolutions: 480p, 720i/p, 1080i/p.

  2. Though allegedly only 'passing' through the video data unchanged, it fails to pass back the EDID data from the screen.

  3. My monitor is a 20" 1680x1050 panel, so the only 'HD' formats it supports are lousy 480p and 720p. Come on, TV world - 720p is not high definition; I had a bigger screen resolution on my PC 10+ years ago.

  4. My laptop's graphics card (GeForce 9300M) does something stupid with the EDID information, meaning it thinks I'm only allowed to use 480p. (The Xbox 360, for comparison, quite happily bangs out 720p.)

Luckily, after much searching on the net, I found a solution to 'buggerance factor 4', which makes the setup at least partially useful. It turns out it is possible to create a monitor .inf file that tells Windows and the display driver to explicitly override the EDID information detected from the display device (thank you, Microsoft!). You can grab my version for the Sony DH800 here:

EDIDOverride-SONYDH800.inf

This basically forces the driver to expose all the resolutions supported by the AV receiver. I tried a hybrid version using the EDID from the monitor, but it turns out the receiver just cannot transmit 1680x1050 - which is a shame, as it means I can plug my laptop into my bigger monitor, but only at a lower resolution than the laptop's own panel.

You can find out more information from the original post on the AVS Forums:

EDID Override Thread - AVS Forum

Posted By ajw.walters at 08:56:00 on 2010-12-09. Comments (0)

Yosemite National Park - Simply astounding!

or, Why a few days walking in paradise is never enough

[singlepic id=26 w=500 h= float=center]

Sooo many pictures - I filled about 1.5GB per day on my SLR, and a good 1GB on my compact (once I'd charged it). It's going to take a while to upload even my favourites.

I am officially completely and utterly jealous of anyone who lives near such a wonderful place. Every time I looked out and thought 'Wow, that's the most breathtaking thing I've ever seen', I'd round the next corner and BAM - there's something even better. And that was while I was still driving into the place! Walking the 4 Mile trail and up to Nevada Falls was an amazing experience; it was just a shame I was only there for 3 days. I could have spent a month and still felt torn away.

So many places in the world left to see, but I can tell you now, I am going back for a proper go! Sadly I didn't manage to do Half Dome, which I would have loved to get to the top of. It's a long hike, and from the guide book's suggestions it would have required pretty much every hour of sunlight available at this time of year, unless you were a crazy person. I was game, but a bit unprepared, and I knackered myself on the first day with a 9-mile round trip to Glacier Point via the 4 Mile trail (it's actually 4.6 miles these days - my maths skills are fine!).

Instead, I cut it short at Nevada Falls and made a gentle go of it, then topped off the day with a wonderful second trip (by car this time) to Glacier Point for an attempt at night photography. I don't have a remote yet, so I was limited to 30s exposures. I had a go with the manual shutter release and mirror lock-up, but it's nigh on impossible to stay still enough to hold the button down for a few minutes at a time, so I didn't come away with what I'd hoped for. That was a real shame, because I have never in all my life seen such a clear sky. It was a truly satisfying experience: not a sound but the breeze, 6,500ft up, lying on the ground under a sky so perfect that you could see the dust cloud formations along the belt of the Milky Way. I never realised it was possible to see such things with the Mk1 eyeball.

I've got a couple of shots handy from my small camera; I'll sort and upload some more from my main PC later.

[nggallery id=8]

Posted By ajw.walters at 21:24:33 on 2010-10-07. Comments (0)

Novawar 2

Saw this game featured on GameDev today - it looks pretty impressive. I shall definitely be keeping a close eye on it.

http://www.novawar-game.com/

Posted By Unknown at 22:32:17 on 2006-06-13. Comments (0)

Random Page

In the never-ending quest for useful content, some people Page2 turn to the dark side; whereby pages such as TestPage are filled with mindless bullshit of the nth degree.

Mumbling

Mumbling is an art form in itself; being able to spew infinite amounts of text with minimal thought or effort is incredibly useful if you have nothing better to be doing. Sane people would probably have opened some file already on their computer in order to fill this example TestPage.

Mad

Inflicted with a higher degree of intellectual freedom

Madness of King George

This is just some crap so I have another heading level. Whoop!!!

Blackadder

Edmund Elizabeth Blackadder

Posted By TestPoster at 20:57:19 on 2006-06-13. Comments (0)

Test Page

Whoop!

Some Nice Data! Page2 and some other crap. TestPage will show you how.

=== Blah blah blah ===
  1. Item 1
  2. Item 2
  3. Item 3
  • Item 1
  • Item 2
  • Item 3
  1. A
  2. B
  3. C

    #include <stdio.h>

    int main(int argc, char** argv)
    {
        printf("Hello, is it me you're looking for?\n");
        return 0;
    }
    

Or you can have some text with code in it - for example, printf() is the function used to output the message to stdout.

Posted By ajw.walters at 20:57:12 on 2006-06-13. Comments (0)

Another Page

Whoop!

Some Nice Data! Page2 and some other crap. TestPage will show you how.

=== Blah blah blah ===
  1. Item 1
  2. Item 2
  3. Item 3
  • Item 1
  • Item 2
  • Item 3
  1. A
  2. B
  3. C

    #include <stdio.h>

    int main(int argc, char** argv)
    {
        printf("Hello, is it me you're looking for?\n");
        return 0;
    }
    

Or you can have some text with code in it - for example, printf() is the function used to output the message to stdout.

Posted By TestPoster at 20:57:05 on 2006-06-13. Comments (2)