Petitioner Apple Inc. - Exhibit 1017, p. 1
© SIGGRAPH '90, Dallas, August 6-10, 1990

system to compute. Because of these difficulties with the single shading model approach, several systems have been described that provide greater flexibility and extensibility. Whitted proposed that the rendering system have a collection of built-in shaders accessible via a shader dispatch table[31]. Presumably, the interface to these shaders was well-defined so that experienced hackers with access to the source code could extend the system. Cook developed a model which separated the conceptually independent tasks of light source specification, surface reflectance, and atmospheric effects[6]. The user could control each of these shading processes independently by giving a sequence of expressions which were read in, parsed, and executed at run-time by the rendering system. Perlin's image synthesizer carried this idea further by providing a full language including conditional and looping constructs, function and procedure definitions, and a full set of arithmetic and logical operators[24]. But Perlin abandoned the distinction between the shading processes proposed by Cook, and instead introduced a "pixel stream" model. In the pixel stream model, shading is a postprocess which occurs after visible surface calculations. Unfortunately, this makes his language hard to use within the context of a radiosity or ray-tracing program, where much of the shading calculation is independent of surface visibility.

In this paper, a new language is described which incorporates features of both Cook's and Perlin's systems. The goals in the design of the new language were to:

• Develop an abstract shading model based on ray optics suitable for both global and local illumination models. It should also be abstract in the sense of being independent of a specific algorithm or implementation in either hardware or software.

• Define the interface between the rendering program and the shading modules.
All the information that might logically be available to a built-in shading module should be made available to the user of the shading language.

• Provide a high-level language which is easy to use. It should have features - point and color types and operators, integration statements, built-in functions - that allow shading calculations to be expressed naturally and succinctly.

A detailed description of the shading language grammar is available in the RenderMan interface specification[1], and many examples of its use are contained in Upstill[28]. The intent of this paper is to point out the features of the new language beyond those described by Perlin and Cook. The design of the new language also raised many subtle design and implementation issues whose resolution required a combination of graphics and systems perspectives. The alternatives that were considered, and the factors that influenced the choices that were made, are discussed. Finally, we discuss some of the more interesting parts of the implementation, particularly those aspects where a combination of techniques drawn from graphics, systems and compiler theory were used to improve performance.

2. Model of the Shading Process

Kajiya has pointed out that the rendering process can be modeled as an integral equation representing the transport of light through the environment[15].

i(x, x') = v(x, x') [ l(x, x') + ∫ r(x, x', x'') i(x', x'') dx'' ]

The solution, i(x, x'), is the intensity of light at x which comes from x'. The integral computes the amount of light reflected from the surface at x' as a function of the surface bidirectional reflectance function r(x, x', x'') and the incoming light intensity distribution i(x', x''). The term l(x, x') gives the amount of light emitted by light sources at x' in the direction towards x.
The sum of these two terms is the amount of light initially traveling from x' to x, but not all of that light makes it to x: some of it may be scattered or blocked by an intervening material. The term v(x, x') gives the percentage of light that makes it from x' to x.

The shading language allows procedures, called shaders, to be written that compute the various terms in the above equation. Shaders implement the local processes involved in shading; all the global processes used in solving the rendering equation are controlled by the renderer. The three major types of shaders are:

• Light Source Shaders. A light source shader calculates the term l(x, x'), the color and intensity of light emitted from a particular point on a light source in a particular direction.

• Surface Reflectance Shaders. A surface reflectance shader calculates the integral of the bidirectional reflectance function r(x, x', x'') with the incoming light distribution i(x', x'').

• Volume or Atmosphere Shaders. Volume shaders compute the term v(x, x'). Only scattering effects need be computed; the effects of light rays intersecting other surfaces are handled by the renderer.

A surface shader shades an infinitesimal surface element given the incoming light distribution and all the local properties of the surface element. A surface shader assumes nothing about how this incoming light distribution was calculated, or whether the incoming light came directly from light sources or indirectly via other surfaces. A surface shader can be bound to any geometric primitive. The rendering program is responsible for evaluating the geometry, and provides enough information to characterize the infinitesimal surface element. Similarly, when a light source shader computes the emitted light, it makes no assumptions about what surfaces that light may fall on, or whether the light will be blocked before it reaches the surface.
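One way to picture this division of labor is as a dispatch table of function pointers, in the spirit of Whitted's proposal: the renderer sees each shader class only through a uniform entry point. The C sketch below is hypothetical (the type and function names are ours, not the paper's), with a trivial distance-attenuation volume shader standing in for v(x, x'):

```c
#include <stddef.h>

/* Hypothetical sketch of a shader dispatch table.  The renderer
 * calls each shader class only through its entry point; all names
 * and signatures here are illustrative, not the paper's interface. */
typedef struct { float s[3]; } Color;   /* 3-sample spectral color */

typedef Color (*LightFn)(const float P[3], const float L[3]);
typedef Color (*SurfaceFn)(const float P[3], const float N[3], Color in);
typedef Color (*VolumeFn)(Color light, float distance);

typedef struct {
    LightFn   light;    /* computes l(x, x')                           */
    SurfaceFn surface;  /* integrates r(x, x', x'') against i(x', x'') */
    VolumeFn  volume;   /* computes v(x, x')                           */
} ShaderTable;

/* A trivial volume shader: attenuate light with distance traveled
 * (an assumed falloff model, chosen only for illustration). */
static Color fog_volume(Color light, float distance)
{
    float k = 1.0f / (1.0f + distance);
    Color out;
    for (int i = 0; i < 3; i++)
        out.s[i] = light.s[i] * k;
    return out;
}
```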
A light source shader also makes no assumptions about whether it is bound to a geometric primitive to form an area light.

Kajiya has shown how standard rendering techniques can be viewed as approximate solutions of the rendering equation. The simplest approximation assumes that light is scattered only once. A ray is emitted from a light source, reflected by a surface, and modulated on its way towards the eye. This is often referred to as local shading, in contrast to global shading, because no information about other objects is used when shading an object. This shading model is what is used by most real-time graphics hardware and is easily accommodated by the shading language. Whitted's ray tracing algorithm[30] considers these direct transport paths, plus light transported from surfaces intersected by reflected and refracted rays. This can also be accommodated by the shading language by recursively calling light source and surface shaders.

To summarize, the abstraction used by the shading language provides a way of specifying all the local interactions of light, without assuming how the rendering program solves the light transport equation. Thus, shaders written in the shading language can be used by many different types of rendering algorithms.

3. Language Features

The shading language is modeled after C[17], much like many other programming languages developed under UNIX. The specification of the syntax and grammar is available in the specification[1] and examples of its use are described in a recent book[28]. In the following sections the novel features of the shading language are discussed. The emphasis is on the high-level design issues that influenced each feature, and the implementation problems that they posed. The features discussed include the semantics of the color and point data types, the meta types
© Computer Graphics, Volume 24, Number 4, August 1990

uniform and varying, the classes and subclasses of shaders and how shaders are attached to geometry, the intrinsic state information which is provided by the rendering system, the special control constructs for integrating incoming light in surface shaders and for casting outgoing light in light source shaders, some of the more unusual built-in functions, and support for texture mapping.

3.1. Types

The shading language supports a very small set of fixed data types: floats, strings, colors, and points. Since points and colors are the fundamental objects passed between shaders and the renderer, these are supported at a high level in the shading language. No facilities exist to define new types or data structures.

3.1.1. Colors

The physical basis of color is a spectrum of light. The spectrum describes the amount of light energy as a continuous function of wavelength. Since calculations involving continuous spectra are generally not feasible, spectra are represented with a fixed number of samples. Different methods for sampling spectra are described in Hall[12, 13]. In the shading language, color is an abstract data type which represents a sampled spectrum. The number of color samples per color can be set before rendering to control the precision of the color computations. One sample implies a monochrome color space; three samples a triaxial color space; and more samples are available for more precise color computations. There is no support for sampling or resampling spectra; this is assumed to be done by the modeling program driving the renderer or the output program generating the final display.

Within the spectral color space model there are two important operations involved in shading calculations: additive light combination, where the result is the spectrum formed by combining multiple sources of light, and filtering, where the result is the spectrum produced after light interacts with some material.
These operations are mapped into the language by overloading the standard addition and multiplication operators ("+" and "*"), respectively. All color computations in the language are expressed with these operators, so that they are independent of the actual number of samples. A typical color computation might be expressed as

Cd * (La + Ld) + Cs * Ls + Ct * Lt

where Cd and Cs are the diffuse and specular colors of a material, La, Ld, and Ls are the amounts of ambient, diffuse, and specular light reflected from the surface, and Ct and Lt are the transparency and amount of light transmitted through the material. Note that transparency is treated just like any other color, that is, it has the same number of color components. Many rendering systems make the mistake of modeling transparency with a single number. This is presumably motivated by the use of RGBA color models[26], where α, which was originally developed to represent coverage, is treated as an opacity (equal to one minus the transparency).

We considered having two types of color, one for light spectra and another for material absorption spectra, and restricting the values of light spectra samples to always be positive, since they represent energy which must be positive, and the values of material spectra to always be between 0 and 1, since they represent percent absorption. However, this was thought to be too restrictive - for example, negative light sources are sometimes used for faking shadows, and reflective colors greater than 1 are used for modeling stimulated emission. For similar reasons, we also allowed other arithmetic operators between colors, although they are very seldom used. In our experience, it is very convenient to think of color as light and hence not to clamp it to some maximum value. Eventually, after all shading computations have been performed, the light "exposes" film using a non-linear remapping from light intensities to output pixel colors.
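The component-wise behavior of the overloaded "+" and "*" can be sketched in a few lines of C. This is an illustrative model (the struct and function names are ours), written so the sample count can be changed in one place, mirroring the language's independence from the number of samples:

```c
/* Sketch of the color operators: "+" combines light additively and
 * "*" filters light, both component by component, independent of
 * the number of samples.  NSAMPLES and all names are illustrative. */
#define NSAMPLES 3   /* set before rendering; 1 = monochrome */

typedef struct { float s[NSAMPLES]; } Spectrum;

static Spectrum spec_add(Spectrum a, Spectrum b)   /* light + light  */
{
    Spectrum r;
    for (int i = 0; i < NSAMPLES; i++) r.s[i] = a.s[i] + b.s[i];
    return r;
}

static Spectrum spec_mul(Spectrum a, Spectrum b)   /* filter * light */
{
    Spectrum r;
    for (int i = 0; i < NSAMPLES; i++) r.s[i] = a.s[i] * b.s[i];
    return r;
}
```

Under this model the subexpression Cd * (La + Ld) from the text becomes spec_mul(Cd, spec_add(La, Ld)), and the same code works whether NSAMPLES is 1, 3, or larger.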
After this process, a pixel color value of 1 is treated as the maximum display intensity.

Since the addition and multiplication of colors is performed on a component by component basis, the shading language makes no assumptions about what physical color each sample actually represents. Also, if only color operators are used to combine colors inside a shader, an arbitrary linear transformation can be applied to the input or output colors without affecting the results. This gives the modeling program driving the renderer complete control over what spectral color space the renderer is computing in. One advantage of this is that color computations can be performed in absolute or calibrated color spaces, just by transforming the input or output color space.

There are many other color spaces used in computer graphics for defining colors. We refer to these as modeling color spaces to distinguish them from spectral rendering color spaces. In general, adding and multiplying colors in these other color spaces has no physical basis, and hence executing shaders with colors in non-spectral color spaces can lead to unpredictable results. The language supports the use of modeling color spaces by providing built-in functions which immediately convert constant colors defined in these color spaces to the current rendering color space.

3.1.2. Points

The type point is used to represent a three component vector. The arithmetic operators are overloaded so that when they involve points they are treated in the standard vector algebra sense. New operators were added to implement the operations of dot product (".") and cross product ("^"). Using this syntax, a Lambertian shading formula can be expressed on one line.

C * max( 0, L.N )

where L and N are the light and normal vectors, and C is a color.
The main advantage of having built-in vector operations is that the standard shading formulae can be expressed in a succinct and natural way, often just by copying them right from the literature. Another advantage of expressing shading calculations using vector arithmetic is that they are then expressed in a coordinate-free way, which means that the shader could be evaluated in different coordinate systems. Care must be taken in applying this idea since, in general, transformations between coordinate systems do not preserve metric properties such as angles and distances. Since the physics underlying shading calculations are based on these metric quantities, the results of shading calculations will be different in coordinate systems which do not preserve them. For this reason shading calculations are defined "to appear" as if they took place in the world coordinate system, but it is permissible for the renderer to shade in other coordinate systems that are isometric to the world coordinate system. A common example of this is shading in "camera" or "eye" space instead of world space.

In certain situations, it is necessary for a calculation involving a point to be performed in a specific coordinate system. For example, all surface shaders accessing a solid texture need to access the texture in the solid texture's coordinate system. This is supported by providing a procedure which transforms points between named coordinate systems. The standard named coordinate systems are "raster", "screen", "camera", "world" and "object". In our system it is also possible to mark other coordinate systems, and then to refer to them within shaders.
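The one-line Lambertian formula above transcribes almost directly into any C-like language. The sketch below (our function names; L and N assumed to be unit vectors) shows the "." operator as an ordinary dot product, with the max clamping light arriving from behind the surface to zero:

```c
/* Sketch of C * max(0, L.N): the "." operator is a standard dot
 * product, and negative values (light behind the surface) clamp
 * to zero.  Names are illustrative; L and N are unit vectors. */
static float dot3(const float a[3], const float b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

static float lambert(float C, const float L[3], const float N[3])
{
    float d = dot3(L, N);
    return C * (d > 0.0f ? d : 0.0f);
}
```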
3.2. Uniform and Varying Variables

All variables in the shading language fall into one of two general classes: uniform and varying. Uniform variables are those whose values are independent of position, and hence constant once all the properties have been bound; varying variables are those whose values are allowed to change as a function of position.

Varying variables come about in two ways. First, geometric properties of the surface usually change across the surface. Two examples are surface parameters and the position of a point on a surface. The normal changes on a curved surface such as a bicubic patch, but remains constant if the surface is a planar polygon. Second, variables attached to polygon vertices or to corners of a parametric surface are automatically interpolated across the surface by the rendering program, and hence are varying. The best examples of varying variables attached to polygons are vertex colors in Gouraud shading and vertex normals in Phong shading. The concept of interpolating arbitrary variables during scan conversion was first introduced by Whitted and Weimer[31].

The concept of uniform and varying variables allows the shading language compiler to make significant optimizations. If the shader contains an expression or subexpression involving only constants and uniform variables, then that expression need only be evaluated once, and not every time the shader is executed. This is similar to constant folding at compile time, but differs in that a different uniform subexpression may occur each time a new instance of a shader is created, or each time a shader is bound to a surface. Because shading calculations are so expensive, a folklore has developed over hand coding these types of optimizations.
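The hand-coded optimization this folklore describes amounts to hoisting uniform subexpressions out of the per-point loop, which is exactly what a compiler can do automatically once variables are declared uniform or varying. A hypothetical C sketch (all names are ours):

```c
/* Sketch of the uniform/varying optimization: a subexpression built
 * only from uniform values is evaluated once per shader binding,
 * while the varying part is evaluated once per shaded point.
 * Function and parameter names are illustrative. */
static void shade_points(int n, const float varying_in[], float out[],
                         float uniform_ka, float uniform_intensity)
{
    /* uniform subexpression: computed once, not once per point */
    float ambient = uniform_ka * uniform_intensity;

    for (int i = 0; i < n; i++)        /* varying part, per point */
        out[i] = ambient + varying_in[i];
}
```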
For example, if the viewing transformation is a parallel projection, meaning the eye is at infinity, and a planar polygon is being shaded, the incoming direction, the normal, and hence the direction of the reflection vector are all constant and need only be computed once per polygon. A similar situation occurs with local and distant lights. The advantage of using uniform variables and having the compiler look for uniform expressions is that these optimizations are done automatically.

3.3. Shader Classes and Instances

It is often convenient to think of shaders in an object-oriented way. There are several major subclasses of shaders, corresponding to the set of methods required by the rendering system. The most general class of shading procedures is a shader, and there are subclasses for light sources, surface reflectance functions, and volume scattering. A shader for a specific subclass is created by prefixing its definition by a keyword: surface for a surface shader, light for a light shader, and volume for a volume shader†. Surface shaders describe different types of material such as metal and plastic; and light source shaders different classes of lights such as spotlights and bulbs.

Shader definitions are similar to procedure definitions in that they contain a formal argument list. The arguments to a shader, however, are very different than the arguments to a shading language function. Calling a shader to perform its task is under the control of the rendering system, and all information from the renderer is passed to the shader through external variables and not via its arguments (see Section 3.4). Shaders are never called from other shaders or from other functions in the shading language.

† Actually the following types also exist: displacement, transformation, and imager.
The shader arguments define the shader's instance variables and are used to set the properties of a shader when the user adds the shader to the graphics state or attaches it to a geometric primitive. All instance variables have default values that must be specified as part of the definition of the shader. However, the defaults can easily be overridden when creating an instance. This is done by giving a list of name-value pairs; the name refers to which instance variable is being set, and the value to its new value. This method of defaulting makes it easy to use complicated shaders with many parameters. For example, the shader metal is declared in the shading language as

surface metal( float Ka=1, Ks=1, roughness=.1 )

and is attached to the surface with the following procedure call.

RiSurface( "metal", "Ka", 0.5 );

This instance of "metal" has a different Ka than the default instance.

The arguments, or instance variables, of a shader are typically uniform variables because they describe overall properties of the surface or light source. User-defined interpolated variables are created in the shading language by declaring an argument to a shader to be a varying variable. The interpolated value is made available to the shader via that shading language variable.

Finally, it is possible to pass point variables as instance variables to a shader. As a convenience, these points are interpreted to be in the current coordinate system, and transformed automatically to the coordinate system in which the shading calculation is being performed. For example, pointlight has as part of its declaration

light pointlight ( ... ; point from = point "shader" (0,0,0); ... )

The "shader" coordinate system is the one in effect when the shader was instanced. In the above example the light is placed at the origin of this coordinate system. Note how the transformations apply to default points as well as points supplied when instancing.

3.4.
Intrinsic State

When a shader is bound to a geometric primitive, it inherits a set of varying variables that describe the geometry of the surface of the geometric primitive. All shaders also inherit the color and opacity of the primitive and the local illumination environment described as a set of light rays. This information is made available to a shader as variables which are bound to the correct values by the rendering system before the shader is evaluated. A shader is very much like a function closure, which contains pointers to the appropriate variables based on the scoping rules for that function[27]. The names and types of the major external variables are shown in Figures 1 and 2.

These external variables, along with a few built-in functions, specify exactly what information is passed between the rendering system and the shading system. Because this is the only way these modules communicate, determining these variables was one of the most difficult aspects of the design of the shading language. Two general principles were followed: (i) the material information should be minimal, but extensible, and (ii) the geometric and optical information should be complete. A simpler interface between the shading and geometry is specified in Fleischer and Witkin[10].
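The closure-like binding of external variables can be pictured as the renderer filling an environment record before each shader call. The C sketch below is hypothetical (the struct and field names are ours, loosely echoing the paper's variable names), with a trivial "shader" that reads only its bound environment:

```c
/* Sketch: before evaluating a shader, the renderer binds its
 * external variables - geometry, surface color and opacity, and the
 * local light environment - much like the captured environment of a
 * function closure.  All field and function names are illustrative. */
typedef struct {
    float P[3], N[3];    /* position and normal of surface element */
    float Cs[3], Os[3];  /* surface color and opacity              */
    int   nlights;       /* size of the local light environment    */
} ShaderEnv;

/* A trivial "shader" reading only its bound environment: the
 * absolute cosine between the normal and a viewing direction I. */
static float facing_ratio(const ShaderEnv *env, const float I[3])
{
    float d = env->N[0]*I[0] + env->N[1]*I[1] + env->N[2]*I[2];
    return d < 0.0f ? -d : d;
}
```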
[Figure 1. Surface shader state.]
[Figure 2. Light source shader state.]

Since one of the major goals of the shading language is to extend the types of materials used by the rendering system, it is important to be able to assign arbitrary properties to new materials. The only material properties assumed to always be present, and hence made available as global variables, are color and opacity. All other material properties are explicitly declared as arguments to the shader. Since there is no restriction, in principle, on the number or types of arguments to a shader, the properties of materials can involve any amount of information.

The rendering system may perform shading calculations at many points on the surface of a geometric primitive. It provides enough geometric information to characterize the surface element in the neighborhood of the point being shaded. Most shading formulae involve only the position P and normal N of the surface. When doing texture mapping it is often necessary to provide the surface parameters. More advanced shading methods, such as bump mapping[5] or tangent bundle mapping[14], require the parametric derivatives of the position vector. From a mathematical point of view, to completely characterize a surface at a point requires knowledge of all its parametric derivatives and cross derivatives at the point. Other intrinsic surface properties, such as Gaussian curvature, can be computed from this information. There are CAD and mathematical applications which require methods to visualize local properties of the surface geometry[9].
Providing all these derivatives of position through global variables would be unwieldy, so functions were provided to take derivatives of position with respect to the surface parameters. This derivative function is discussed in more detail below (see Section 3.7).

It is expensive for the rendering system to compute all this information, and this is wasted computation if it is not being used by the shader. Unnecessary calculation can be prevented by having the shaders provide a bitmask indicating which external variables are referenced within the shader before it is executed. Alternatively, the runtime environment uses lazy evaluation to compute the values of the variables on demand. It is also useful to provide a bitmask indicating which variables are changed by the shader, since in some cases the renderer may want to retain the original values.

3.5. Light Sources

The most general light source description defines the intensity and color of the emitted light as a function of position and direction[29]. The shading language provides a method for describing an arbitrary light source distribution procedurally. It does this by providing two constructs, solar and illuminate. The illuminate statement is used to set the color of light coming from a finite point. Its arguments define a cone with its apex at the point from which the light emanates. Only points inside that cone receive light from that light source. The solar statement is used to set the color of light coming from distant or infinite light sources. Its arguments define a cone, centered on the point being illuminated, to which distant light will be cast. Within the block of an illuminate or solar statement, the emitted light direction is available as an independent read-only variable L, and the color of the emitted light Cl is treated as a dependent variable which should be set to define the color and intensity of light emitted as a function of direction.
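The cone test implied by illuminate can be sketched directly: a receiver gets light only if its direction from the light's apex lies within the cone's angular width. The C sketch below uses our own names, gives the width as the cosine of the half-angle (assumed no wider than a hemisphere), and squares the comparison to avoid a square root:

```c
/* Sketch of the illuminate cone: apex at the light, a unit axis,
 * and an angular width given as the cosine of the half-angle
 * (assumed in [0, 1], i.e. a cone no wider than a hemisphere).
 * Names are illustrative; squaring avoids a square root. */
static int inside_cone(const float apex[3], const float axis[3],
                       float cos_angle, const float receiver[3])
{
    float L[3] = { receiver[0] - apex[0],
                   receiver[1] - apex[1],
                   receiver[2] - apex[2] };
    float len2 = L[0]*L[0] + L[1]*L[1] + L[2]*L[2];
    float dot  = L[0]*axis[0] + L[1]*axis[1] + L[2]*axis[2];
    if (len2 == 0.0f || dot < 0.0f) return 0;
    return dot * dot >= cos_angle * cos_angle * len2;
}
```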
During the design process the following canonical types of lights were considered, and they illustrate the types of light sources which can be modeled.

• Ambient light source. Ambient light is non-directional, and ambient light shaders do not use either an illuminate or a solar statement. Note that ambient light can still vary as a function of position.

• Point light source. A point light source casts equal amounts of light in all directions from a given point. This is the simplest example of the use of an illuminate statement.

• Spot light source. This is a point light source whose intensity is maximum along the direction the light is pointed and falls off in other directions. A spotlight also has a circular flap which limits the angle of the beam. This is an example of a procedurally defined point light source.

• Shadowed light source. This is a point light source whose intensity is modulated by a texture or shadow map.

• Distant light source. An infinite light casts light in only one direction. This is the simplest example of the use of a solar statement.

• Illumination or environment map. This is an omnidirectional distant light source
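The spot light in the list above is easy to write against the illuminate model: full intensity along the beam axis, falloff with the cosine of the off-axis angle, and the circular flap as a hard cosine cutoff. A hypothetical C sketch (the quadratic falloff model and all names are ours, not the paper's):

```c
/* Sketch of a procedural spot light: L is the unit direction from
 * the light toward the point being lit, axis the unit beam axis.
 * Intensity peaks along the axis, falls off quadratically with the
 * cosine of the off-axis angle (an assumed model), and the circular
 * flap zeroes everything outside the cutoff cone. */
static float spot_intensity(const float L[3], const float axis[3],
                            float cos_cutoff)
{
    float c = L[0]*axis[0] + L[1]*axis[1] + L[2]*axis[2];
    if (c < cos_cutoff) return 0.0f;   /* outside the flap's cone */
    return c * c;                      /* brightest when c == 1   */
}
```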