

Brief Description

The Shadertoy generator or modifier can apply most of the shaders available on Shadertoy.


Parameter / Script Name | Type | Default | Function
Red / process_red | Boolean | On | Enable the red channel in the output. If disabled, the content of the main source is returned instead, or 0 if there is no source
Green / process_green | Boolean | On | Enable the green channel in the output. If disabled, the content of the main source is returned instead, or 0 if there is no source
Blue / process_blue | Boolean | On | Enable the blue channel in the output. If disabled, the content of the main source is returned instead, or 0 if there is no source
Alpha / process_alpha | Boolean | On | Enable the alpha channel in the output. If disabled, the content of the main source is returned instead, or 0 if there is no source
Shader Mode / shaderMode | Choice | Shadertoy Compatible |
Output Format / output_format_mode | Choice | Composition Format |
Format / format_preset | Choice | HD (1920x1080) |
Format / formatRect | | -500, -500, 500, 500 |
Pixel aspect / par | Float | 1 |
Use current time / use_current_time | Boolean | On |
iTime / time | Float | 0 |
iMouse / mouse_pos | Float 2D | 0, 0 |
Gamma correction / gamma_correction | Boolean | Off |
Uniform Parameters / userParams | | - |
Preprocessor Parameters / preprocessorParams | | - |
Source file / shader_source_file | File | |
Source code / shader_source | | |
Use source file / use_shader_source_file | Boolean | Off |
Camera / camera | Choice | - |
Custom Camera / customCam | Camera | |
Camera Transform / camTransform | | |
Source (iChannel0) / source | Image | - |
iChannel0 wrap / channel_0_wrap | Choice | Repeat |
iChannel0 filter / channel_0_filter | Choice | Linear |
iChannel1 / channel_1 | Image | - |
iChannel1 wrap / channel_1_wrap | Choice | Repeat |
iChannel1 filter / channel_1_filter | Choice | Linear |
iChannel2 / channel_2 | Image | - |
iChannel2 wrap / channel_2_wrap | Choice | Repeat |
iChannel2 filter / channel_2_filter | Choice | Linear |
iChannel3 / channel_3 | Image | - |
iChannel3 wrap / channel_3_wrap | Choice | Repeat |
iChannel3 filter / channel_3_filter | Choice | Linear


Shaders are GLSL fragment shaders with built-in pre-declared uniforms. Multi-pass shaders and sound are not supported. Some multi-pass shaders can still be reproduced in Autograph by chaining several Shadertoy modifiers, one for each pass.


Image shaders implement the mainImage() function in order to generate procedural images by computing a color for each pixel. This function is expected to be called once per pixel, and it is the responsibility of Autograph to provide the right inputs to it, retrieve the output color, and assign it to the screen pixel. The function signature is:

void mainImage( out vec4 fragColor, in vec2 fragCoord );

where fragCoord contains the pixel coordinates for which the shader needs to compute a color. The coordinates are in pixel units, ranging from 0.5 to resolution-0.5, over the rendering surface, where the resolution is passed to the shader through the iResolution uniform. The resulting color is gathered in fragColor as a 4 component vector.
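As an illustration, a minimal image shader following this contract could look like the sketch below (not taken from Autograph's examples; it simply fills the output with a gradient):

```glsl
// Minimal image shader: outputs a red/green gradient based on pixel position.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalize pixel coordinates to [0,1] using the viewport resolution
    vec2 uv = fragCoord / iResolution.xy;
    fragColor = vec4(uv.x, uv.y, 0.0, 1.0);
}
```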

Things to watch for

  • the f suffix for floating point numbers: 1.0f is illegal in GLSL. You must use 1.0
  • saturate(): saturate(x) doesn't exist in GLSL. Use clamp(x,0.0,1.0) instead
  • pow/sqrt: please don't feed sqrt() and pow() with negative numbers. Add an abs() or max(0.0, x) to the argument
  • mod: please don't do mod(x,0.0). This is undefined in some platforms
  • variables: initialize your variables. Don't assume they'll be set to 0 by default
  • functions: don't give your functions the same names as your variables
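The portable forms of these pitfalls can be summarized in one small sketch:

```glsl
// Illustrative only: each line shows the GLSL-safe form of a common pitfall.
float portableExample(vec3 p) {
    float a = 1.0;                    // not 1.0f: the f suffix is illegal in GLSL
    float b = clamp(p.x, 0.0, 1.0);   // instead of saturate(p.x)
    float c = sqrt(max(0.0, p.y));    // guard sqrt() against negative input
    float d = pow(abs(p.z), 2.2);     // guard pow() against a negative base
    float e = 0.0;                    // initialize variables explicitly
    return a + b + c + d + e;
}
```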

Shadertoy Uniforms

Type | Name | Used in | Description
vec3 | iResolution | image | The viewport resolution (z is the pixel aspect ratio, usually 1.0)
float | iTime | image/sound | Current time in seconds
float | iTimeDelta | image | Time it takes to render a frame, in seconds
int | iFrame | image | Current frame
float | iFrameRate | image | Number of frames rendered per second
float | iChannelTime[4] | image | Time for each channel (if video or sound), in seconds
vec3 | iChannelResolution[4] | image/sound | Input texture resolution for each channel
vec2 | iChannelOffset[4] | image | Input texture offset in pixel coordinates for each channel
vec4 | iMouse | image | xy = current pixel coordinates (if the left mouse button is down); zw = click pixel coordinates
sampler2D | iChannel{i} | image/sound | Sampler for input texture i
vec4 | iDate | image/sound | Year, month, day, and time in seconds in .xyzw
float | iSampleRate | image/sound | The sound sample rate (typically 44100)
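A typical use of these uniforms is to normalize fragCoord with iResolution and sample an input channel, for instance (a sketch):

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized texture coordinates over the rendering surface
    vec2 uv = fragCoord / iResolution.xy;
    // Animate the lookup over time and read from the first input texture
    vec2 offset = 0.01 * vec2(sin(iTime), cos(iTime));
    fragColor = texture(iChannel0, uv + offset);
}
```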

Autograph added uniforms

Type | Name | Used in | Description
vec2 | iProxyScale | image | The pixel render scale (e.g. 0.5, 0.5 when rendering at half size)
mat3 | iTransform | image | Useful in Autograph Extended mode: the transformation matrix passed through the shader pipeline. It can be used to properly transform any spatial position used in the shader
mat4 | iCamProjMatrix | image | The composition's active camera post-framing projection matrix
mat4 | iCamInvProjMatrix | image | The inverse of iCamProjMatrix
mat4 | iCamViewMatrix | image | The view matrix of the active camera's frustum. Converts points from world space to (eye) frustum space
mat4 | iCamInvViewMatrix | image | The inverse of iCamViewMatrix
mat4 | iCamViewportTransform | image | The transform mapping from normalized eye space to canonical pixel space
mat4 | iCamInvViewportTransform | image | The inverse of iCamViewportTransform
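As a rough illustration of iTransform, a spatial position can be put through the pipeline transform as a homogeneous 2D point. The exact convention (direct vs inverse transform) is an assumption here; treat this as a sketch only:

```glsl
// Sketch: applying the pipeline transform matrix to a spatial position
// in Autograph Extended mode. Convention may need to be inverted.
vec2 transformedCoord(vec2 fragCoord, mat3 transform) {
    vec3 p = transform * vec3(fragCoord, 1.0);
    return p.xy / p.z;
}
```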

Autograph Extended mode vs Shadertoy compatible

The Autograph Extended mode does not use the shader in the same context. Instead of being rasterized right away, the shader is merged into Autograph's main shader pipeline. This has many advantages, among them:

  • The shader can be evaluated over an infinite space, similarly to Autograph's built-in procedural generators such as the Noise
  • The shader is not rasterized right away, and thus does not suffer from pixel filtering due to post-transforms

However, these advantages come with restrictions:

  • All Shadertoy uniforms are no longer uniforms but are implicitly passed as parameters of the mainImage function, although you do not have to explicitly add them to the function signature. This means that any other helper function that uses these uniforms must receive them as parameters.

  • Only code declared in functions is kept: do not use any preprocessor defines or shader constants

If you want an almost guaranteed chance that a shader copy/pasted from Shadertoy works in Autograph, keep the Shadertoy Compatible mode. If you want to write your own shader and take advantage of Autograph's shader pipeline, slightly tweak your shader to adapt to it.
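Concretely, the main adaptation is in how helper functions access the former uniforms. A sketch of the same helper in both modes:

```glsl
// Shadertoy Compatible mode: helpers can read the uniforms directly.
// vec2 toUv(vec2 fragCoord) { return fragCoord / iResolution.xy; }

// Autograph Extended mode: the uniform must be passed in explicitly,
// and no global constants or #defines may be used.
vec2 toUv(vec2 fragCoord, vec3 resolution) {
    return fragCoord / resolution.xy;
}
```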

Adding user-controllable uniforms

Any parameter added to the Uniform Parameters list is made available as a uniform in the shader, using the same name as the one displayed. You can also add parameters that control preprocessor defines by adding them to the Preprocessor Parameters list. Each preprocessor parameter is available as a preprocessor constant of the same name in the shader. This is useful to optimize shaders by compiling branches out, instead of evaluating cascading if/else statements at runtime.
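For instance, with a float parameter named intensity in the Uniform Parameters list and an integer parameter named QUALITY in the Preprocessor Parameters list (both names hypothetical), a shader in Shadertoy Compatible mode could read:

```glsl
// intensity comes from the Uniform Parameters list,
// QUALITY from the Preprocessor Parameters list (hypothetical names).
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
#if QUALITY > 1
    // expensive branch, compiled in only when QUALITY > 1
    float v = 0.0;
    for (int i = 0; i < 8; ++i)
        v += sin(uv.x * float(i) + iTime) / 8.0;
#else
    float v = sin(uv.x + iTime);
#endif
    fragColor = vec4(vec3(v * intensity), 1.0);
}
```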

Editing the shader code

Either paste the source code in the Source code text field, or check the Use source file parameter and point to an external file. Whenever the file is saved again, make sure to press the reload button on the same line to pick up the changes.

Example shader using Autograph's Composition camera

float hash( float n ) {
    return fract(sin(n)*43758.5453);
}

float noise(vec3 x) {
    vec3 p = floor(x);
    vec3 f = fract(x);
    f = f*f*(3.0-2.0*f);
    float n = p.x + p.y*57.0 + 113.0*p.z;
    float res = mix(mix(mix( hash(n+  0.0), hash(n+  1.0),f.x),
                        mix( hash(n+ 57.0), hash(n+ 58.0),f.x),f.y),
                    mix(mix( hash(n+113.0), hash(n+114.0),f.x),
                        mix( hash(n+170.0), hash(n+171.0),f.x),f.y),f.z);
    return res;
}

float fbm(vec3 pos) {
    // properties
    const int octaves = 6;
    float lacunarity = 2.0;
    float gain = 0.5;
    mat3 m = mat3( 0.00,  0.80,  0.60,
              -0.80,  0.36, -0.48,
              -0.60, -0.48,  0.64 );

    // initial values
    float value = 0.0;
    float amp = 0.5;

    // loop of octaves
    for (int i = 0; i < octaves; i++) {
        value += amp*noise(pos);
        pos = m*pos*lacunarity;
        amp *= gain;
        lacunarity += 0.01;
    }
    return value;
}

// Main SDF

// Cloud density function
// attenuationCoefficient is expected to be provided through the Uniform Parameters list
float smokeVolume(vec3 p, float scale, float timeOffset) {
    vec3 pn = p/scale;
    // a sphere centered at the world origin, of size 1 with noisy boundaries
    return attenuationCoefficient * (1.0 - smoothstep(0.9, 1.2, length(pn) + fbm(pn + vec3(0,0,timeOffset))));
}

// Ray marching in view frustum
// returns accumulated volume density on the travelled segment along the ray
float rayMarch(vec3 ray_d,
               mat4 invViewMat,
               float viewDepth,
               vec2 nearFar,
               float marchSteps,
               float volumeScale,
               float timeOffset) {
    float accumDensity = 0.0;

    // march on the segment [start;end]; start/end being the intersections of the ray with the near/far planes.
    vec3 start = (nearFar.x / ray_d.z) * ray_d;
    vec3 end = (nearFar.y / ray_d.z) * ray_d;
    // size of one marching step
    float s = distance(start, end) / marchSteps;

    for (int i = 0; i < int(marchSteps); ++i) {
        vec3 pos = mix(start, end, float(i) / marchSteps);
        // ss is equal to the march step if we don't hit the surface, otherwise it's the distance from the beginning of the step to the surface
        float ss = s * (1.0 - max(0.0, (viewDepth - pos.z) / (-ray_d.z * s)));
        // back to world space to evaluate the volume density, which is defined in world space
        vec3 worldPos = (invViewMat * vec4(pos,1.)).xyz;
        // evaluate volume density
        // FIXME: we should probably evaluate the volume in the middle of the step along the ray (something like mix(start, end, (float(i)+0.5) / marchSteps))
        float d = smokeVolume(worldPos, volumeScale, timeOffset);
        accumDensity += ss*d;
        // stop marching once the ray has passed the fragment's depth
        if (pos.z < viewDepth) {
            break;
        }
    }
    return max(0.0, accumDensity);
}


vec4 viewPositionFromDepth(vec2 texcoord, mat4 invProj, float z) {
    // Get x/w and y/w from the viewport position
    float x = texcoord.x * 2.0 - 1.0;
    float y = texcoord.y * 2.0 - 1.0;
    vec4 vProjectedPos = vec4(x, y, z * 2.0 - 1.0, 1.0);
    // Transform by the inverse projection matrix
    vec4 vPositionVS = invProj * vProjectedPos;
    // Divide by w to get the view-space position
    return vPositionVS / vPositionVS.w;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 texel = 1.0/vec2(iResolution);
    ivec2 screenPos = ivec2(fragCoord);
    vec2 uv = fragCoord * texel;

    // extract near/far from projection matrix (assume perspective)
    float near    = -iCamProjMatrix[3][2]/(iCamProjMatrix[2][2]-1.0);
    float far     = -iCamProjMatrix[3][2]/(iCamProjMatrix[2][2]+1.0);
    vec2 nearFar  = vec2(near,far);

    // reconstruct fragment position in view space
    vec4 depthTex = texture(iChannel1, uv);
    vec4 viewPos  = viewPositionFromDepth(uv, iCamInvProjMatrix, depthTex.r);
    // cast ray in view space in the direction of the frag position
    // maxMarchSteps and volumeScale are expected to come from the Uniform Parameters list
    float accumLinearDensity = rayMarch(normalize(viewPos.xyz), iCamInvViewMatrix, viewPos.z, nearFar, maxMarchSteps, volumeScale, iTime);

    vec4 color = texelFetch(iChannel0, screenPos, 0);
    float attenuation = exp(-accumLinearDensity);

    // attenuate color; volumeColor is expected to come from the Uniform Parameters list
    color.rgb = (1.0-attenuation) * volumeColor.rgb + attenuation * color.rgb;
    color.a = (1.0-attenuation) + color.a * attenuation;

    fragColor = color;
}