
BS Contact J Specification

2 Concepts

2.1 Introduction and table of contents

2.1.1 Introduction

Core X3D is based on VRML 97 (ISO/IEC 14772-1).
The complete set of VRML 97 functionality is subdivided into several components.
This core X3D specification defines a restricted subset of the VRML 97 specification.

This subset, called the "core X3D profile", is targeted towards:

  • possible implementation in a low-footprint engine (e.g. a Java applet or small browser plug-in)
  • addressing the limitations of software renderers that cannot deal with all details of the VRML 97 lighting model
  • allowing a broader range of implementations by eliminating some of the complexity of a complete VRML 97 implementation
  • supporting a simplified but powerful API, similar in concept to the proposed VRML 97 External Authoring Interface (EAI).

This section describes key concepts of core X3D, which are based on VRML 97 concepts. Deviations are marked in red.
An extended Profile section describes core X3D extensibility using strictly defined profiles.

2.1.2 Table of contents

See Table 2.1 for the table of contents for this clause.

Table 2.1 -- Table of contents, Concepts

2.1 Introduction and table of contents
  2.1.1 Introduction
  2.1.2 Table of contents
  2.1.3 Conventions used

2.2 Overview
  2.2.1 The structure of an X3D file
  2.2.2 Header
  2.2.3 Scene graph
  2.2.4 Prototypes
  2.2.5 Event routing
  2.2.6 Generating X3D files
  2.2.7 Presentation and interaction
  2.2.8 Profiles

2.3 UTF-8 file syntax
  2.3.1 Clear text (UTF-8) encoding
  2.3.2 Statements
  2.3.3 Node statement syntax
  2.3.4 Field statement syntax
  2.3.5 PROTO statement syntax
  2.3.6 IS statement syntax
  2.3.7 EXTERNPROTO statement syntax
  2.3.8 USE statement syntax
  2.3.9 ROUTE statement syntax

2.4 Scene graph structure
  2.4.1 Root nodes
  2.4.2 Scene graph hierarchy
  2.4.3 Descendant and ancestor nodes
  2.4.4 Transformation hierarchy
  2.4.5 Standard units and coordinate system
  2.4.6 Run-time name scope

2.5 X3D and the World Wide Web
  2.5.1 File extension and MIME types
  2.5.2 URLs
  2.5.3 Relative URLs
  2.5.4 Scripting language protocols
  2.5.5 File compression

2.6 Node semantics
  2.6.1 Introduction
  2.6.2 DEF/USE semantics
  2.6.3 Shapes and geometry
  2.6.4 Bounding boxes
  2.6.5 Grouping and children nodes
  2.6.6 Light sources
  2.6.7 Sensor nodes
  2.6.8 Interpolator nodes
  2.6.9 Time-dependent nodes
  2.6.10 Bindable children nodes
  2.6.11 Texture maps

2.7 Field, eventIn, and eventOut semantics

2.8 Prototype semantics

2.9 External prototype semantics

2.10 Event processing
  2.10.1 Introduction
  2.10.2 Route semantics
  2.10.3 Execution model
  2.10.4 Loops
  2.10.5 Fan-in and fan-out

2.11 Time
  2.11.1 Introduction
  2.11.2 Time origin
  2.11.3 Discrete and continuous changes

2.12 Scripting
  2.12.1 Application programming interface

2.13 Navigation
  2.13.1 Introduction
  2.13.2 Navigation paradigms
  2.13.3 Viewing model
  2.13.4 Collision detection and terrain following

2.14 Lighting model
  2.14.1 Introduction
  2.14.2 Lighting 'off'
  2.14.3 Lighting 'on'
  2.14.4 Lighting equations
  2.14.5 References

2.1.3 Conventions used

The following conventions are used throughout this document:

Deviations from VRML (ISO/IEC 14772) are marked in red.

Italics are used for event and field names, and are also used when new terms are introduced and equation variables are referenced.

A fixed-space font is used for URL addresses and source code examples. ISO/IEC 14772 UTF-8 encoding examples appear in bold, fixed-space font.

Node type names are appropriately capitalized (e.g., "The Billboard node is a grouping node..."). However, the concept of the node is often referred to in lower case in order to refer to the semantics of the node, not the node itself (e.g., "To rotate the billboard...").

The form "0xhh" expresses a byte as a hexadecimal number representing the bit configuration for that byte.

Throughout this part of ISO/IEC 14772, references are denoted using the "x.[ABCD]" notation, where "x" denotes which clause or annex the reference is described in and "[ABCD]" is an abbreviation of the reference title. For example, 2.[ABCD] refers to a reference described in clause 2 and C.[ABCD] refers to a reference described in annex C.

2.2 Overview

2.2.1 The structure of an X3D file

An X3D file consists of the following major functional components: the header, the scene graph, the prototypes, and event routing. The contents of the file are processed for presentation and interaction by a program known as a browser.

2.2.2 Header

For easy identification of X3D files, every X3D file shall begin with:

#VRML V2.0 <encoding type> X3D [optional comment] <line terminator>

or

#X3D V1.0 <encoding type> [optional comment] <line terminator>

The header is a single line of UTF-8 text identifying the file as an X3D file and identifying the encoding type of the file. It may also contain additional semantic information. There shall be exactly one space separating "#VRML" from "V2.0" and "V2.0" from "<encoding type>". Also, the "<encoding type>" shall be followed by a linefeed (0x0a) or carriage-return (0x0d) character, or by one or more space (0x20) or tab (0x09) characters followed by any other characters, which are treated as a comment, and terminated by a linefeed or carriage-return character.

The <encoding type> is either "utf8" or any other authorized value defined in other parts of ISO/IEC 14772. The identifier "utf8" indicates a clear text encoding that allows for international characters to be displayed in ISO/IEC 14772 using the UTF-8 encoding defined in ISO/IEC 10646-1 (otherwise known as Unicode); see 2.[UTF8]. The usage of UTF-8 is detailed in 6.47, Text node. The header for a UTF-8 encoded X3D file is

#VRML V2.0 utf8 X3D [optional comment] <line terminator>

or

#X3D V1.0 utf8 [optional comment] <line terminator>

Any characters after the <encoding type> on the first line may be ignored by a browser. The header line ends at the occurrence of a <line terminator>. A <line terminator> is a linefeed character (0x0a) or a carriage-return character (0x0d).

Profile information is placed after the header; see 2.2.8, Profiles, for details.

2.2.3 Scene graph

The scene graph contains nodes which describe objects and their properties. It contains hierarchically grouped geometry to provide an audio-visual representation of objects, as well as nodes that participate in the event generation and routing mechanism.

2.2.4 Prototypes

Prototypes are not supported in the Core X3D profile.

2.2.5 Event routing

Some X3D nodes generate events in response to environmental changes or user interaction. Event routing gives authors a mechanism, separate from the scene graph hierarchy, through which these events can be propagated to effect changes in other nodes. Once generated, events are sent to their routed destinations in time order and processed by the receiving node. This processing can change the state of the node, generate additional events, or change the structure of the scene graph.

The application programming interface (API) allows arbitrary, author-defined event processing. An event received by an application can send events directly to any node to which the application has a reference. Applications can also dynamically add or delete routes and thereby change the event-routing topology.

The ideal event model processes all events instantaneously in the order that they are generated. A timestamp serves two purposes. First, it is a conceptual device used to describe the chronological flow of the event mechanism. It ensures that deterministic results can be achieved by real-world implementations that address processing delays and asynchronous interaction with external devices. Second, timestamps are also made available to the application programming interface to allow events to be processed based on the order of user actions or the elapsed time between events.

2.2.6 Generating X3D files

A generator is a human or computerized creator of X3D files. It is the responsibility of the generator to ensure the correctness of the X3D file and the availability of supporting assets (e.g., images, audio clips, other X3D files) referenced therein.

2.2.7 Presentation and interaction

The interpretation, execution, and presentation of X3D files will typically be undertaken by a mechanism known as a browser, which displays the shapes and sounds in the scene graph. This presentation is known as a virtual world and is navigated in the browser by a human or mechanical entity, known as a user. The world is displayed as if experienced from a particular location; that position and orientation in the world is known as the viewer. The browser may provide navigation paradigms (such as walking or flying) that enable the user to move the viewer through the virtual world, but it is not required to do so.

In addition to navigation, the browser provides a mechanism allowing the user to interact with the world through sensor nodes in the scene graph hierarchy. Sensors respond to user interaction with geometric objects in the world, the movement of the user through the world, or the passage of time.

The visual presentation of geometric objects in an X3D world follows a conceptual model designed to resemble the physical characteristics of light. The X3D lighting model describes how appearance properties and lights in the world are combined to produce displayed colours (see 2.14, Lighting Model, for details).

Figure 2.1 illustrates a conceptual model of an X3D browser. The browser is portrayed as a presentation application that accepts user input in the forms of file selection (explicit and implicit) and user interface gestures (e.g., manipulation and navigation using an input device). The three main components of the browser are: Parser, Scene Graph, and Audio/Visual Presentation. The Parser component reads the X3D file and creates the Scene Graph. The Scene Graph component consists of the Transformation Hierarchy (the nodes) and the Route Graph. The Scene Graph also includes the Execution Engine that processes events, reads and edits the Route Graph, and makes changes to the Transformation Hierarchy (nodes). User input generally affects sensors and navigation, and thus is wired to the Route Graph component (sensors) and the Audio/Visual Presentation component (navigation). The Audio/Visual Presentation component performs the graphics and audio rendering of the Transformation Hierarchy that feeds back to the user.


Figure 2.1 -- Conceptual model of an X3D browser

(TODO: PROTOs must be removed, VRML replaced by X3D)





2.2.8 Profiles

Hint: This chapter has been completely revised.
Core X3D supports the concept of profiles and profile components.
A profile is a named collection of functionality and requirements which shall be supported in order for an implementation to conform to that profile. The set of VRML97 features is subdivided into different functionality blocks called components. A given component can support functionality at different levels. An author specifies the required functionality level for the content. A core X3D player can use profile information to load only the required components and/or dynamically load new program modules if the content requires component levels not currently available on the client system.
 
 
 

Component       Description                                 Possible levels
rendering       the supported set of rendering attributes   coreX3D, vrml97
geometry        the supported geometric primitives          coreX3D, vrml97
navigation      the features of the navigation system       none, vrml97
media.texture   the allowable image media types             coreX3D, vrml97, MIME types (e.g. image/gif, image/png)
scripting       Script node support                         none, vrml97, javascript, java
language        language features used                      coreX3D, vrml97

Profile indication is done by specially formatted comments added after the standard VRML 97 file header.

#VRML V2.0 utf8 X3D
#VRML profile=coreX3D

This header indicates that a given VRML 97 content file is compatible with the coreX3D profile described in this document. No features outside this profile are specified or used in the content.

#VRML V2.0 utf8 X3D
#VRML profile=coreX3D
#VRML profile:rendering=vrml97
#VRML profile:geometry=coreX3D

This header indicates content written with respect to the full VRML 97 lighting and rendering model, but using the coreX3D geometry component.

#VRML V2.0 utf8 X3D
#VRML profile=coreX3D
#VRML profile:rendering=coreX3D
#VRML profile:geometry=coreX3D
#VRML profile:media.texture=image/gif image/jpeg
#VRML profile:scripting=none

This header explicitly lists level values for several components.
 

#VRML V2.0 utf8 X3D
#VRML profile=vrml97

This content conforms to the ISO/IEC 14772-1 Base profile.

The viewer's supported implementation profile(s) can be queried using an API call. Viewers should support an option allowing different sets of URLs to be specified, corresponding to content written for different profiles. This allows players that support higher-level profiles to pick the content version written for the higher-level profile.
 

2.3 UTF-8 file syntax

2.3.1 Clear text (UTF-8) encoding

This section describes the syntax of UTF-8-encoded, human-readable X3D files. A more formal description of the syntax may be found in annex B, Grammar definition. The semantics of X3D in terms of the UTF-8 encoding are presented in this part of ISO/IEC 14772. Other encodings may be defined in other parts of ISO/IEC 14772. Such encodings shall describe how to map the UTF-8 descriptions to and from the corresponding encoding elements.

For the UTF-8 encoding, the # character begins a comment. The first line of the file, the header, also starts with a "#" character. Otherwise, all characters following a "#", until the next line terminator, are ignored. The only exception is within double-quoted SFString and MFString fields where the "#" character is defined to be part of the string.

Commas, spaces, tabs, linefeeds, and carriage-returns are separator characters wherever they appear outside of string fields. Separator characters and comments are collectively termed whitespace.

An X3D document server may strip comments and extra separators, including the comment portion of the header line, from an X3D file before transmitting it. WorldInfo nodes should be used for persistent information such as copyrights or author information.

Field, event and node names shall not contain control characters (0x0-0x1f, 0x7f), space (0x20), double or single quotes (0x22: ", 0x27: '), sharp (0x23: #), comma (0x2c: ,), period (0x2e: .), brackets (0x5b, 0x5d: []), backslash (0x5c: \) or braces (0x7b, 0x7d: {}). Further, their first character shall not be a digit (0x30-0x39), plus (0x2b: +), or minus (0x2d: -) character. Otherwise, names may contain any ISO 10646 character encoded using UTF-8. X3D is case-sensitive; "Sphere" is different from "sphere" and "BEGIN" is different from "begin."

The following reserved keywords shall not be used for field, event, or node names:

  • DEF
  • EXTERNPROTO
  • FALSE
  • IS
  • NULL
  • PROTO
  • ROUTE
  • TO
  • TRUE
  • USE
  • eventIn
  • eventOut
  • exposedField
  • field

2.3.2 Statements

After the required header, an X3D file may contain any combination of the following:

  1. Any number of root node statements (see 2.4.1, Root nodes);
  2. Any number of USE statements (see 2.6.2, DEF/USE semantics);
  3. Any number of ROUTE statements (see 2.10.2, Route semantics).

2.3.3 Node statement syntax

A node statement consists of an optional name for the node followed by the node's type and then the body of the node. A node is given a name using the keyword DEF followed by the name of the node. The node's body is enclosed in matching braces ("{ }"). Whitespace shall separate the DEF, name of the node, and node type, but is not required before or after the curly braces that enclose the node's body. See B.3, Nodes, for details on node grammar rules.

    [DEF <name>] <nodeType> { <body> }

A node's body consists of any number of field statements and ROUTE statements, in any order.

See 2.6.2, DEF/USE semantics, for more details on node naming. See 2.3.4, Field statement syntax, for a description of field statement syntax and 2.7, Field, eventIn, and eventOut semantics, for a description of field statement semantics. See 2.6, Node semantics, for a description of node statement semantics.
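For illustration, a minimal sketch of a named node statement using only node types from this profile (the name Outline is hypothetical):

    DEF Outline Shape {
      appearance Appearance {
        material Material { emissiveColor 1 1 1 }
      }
      geometry IndexedLineSet {
        coord Coordinate { point [ 0 0 0, 1 0 0, 1 1 0 ] }
        coordIndex [ 0 1 2 0 -1 ]    # a closed line loop through three points
      }
    }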

2.3.4 Field statement syntax

A field statement consists of the name of the field followed by the field's value(s). The following illustrates the syntax for a single-valued field:

    <fieldName> <fieldValue>

The following illustrates the syntax for a multiple-valued field:

    <fieldName> [ <fieldValues> ]

See B.4, Fields, for details on field statement grammar rules.
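As a sketch, the two forms as they appear inside node bodies — a Material node's single-valued SFColor field and a Coordinate node's multiple-valued MFVec3f field:

    Material   { diffuseColor 1 0 0 }                 # single-valued field
    Coordinate { point [ 0 0 0, 1 0 0, 0 1 0 ] }      # multiple-valued field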

Each node type defines the names and types of the fields that each node of that type contains. The same field name may be used by multiple node types. See 5, Field and event reference, for the definition and syntax of specific field types.

See 2.7, Field, eventIn, and eventOut semantics, for a description of field statement semantics.

2.3.5 PROTO statement syntax

Prototypes are not supported in the Core X3D profile.

2.3.6 IS statement syntax

The IS statement is not supported in the Core X3D profile.

2.3.7 EXTERNPROTO statement syntax

External prototypes are not supported in the Core X3D profile.

2.3.8 USE statement syntax

A USE statement consists of the USE keyword followed by a node name:

    USE <name>

See B.2, General, for details on USE statement grammar rules.

2.3.9 ROUTE statement syntax

A ROUTE statement consists of the ROUTE keyword followed in order by a node name, a period character, a field name, the TO keyword, a node name, a period character, and a field name. Whitespace is allowed but not required before or after the period characters:

    ROUTE <name>.<field/eventName> TO <name>.<field/eventName>

See B.2, General, for details on ROUTE statement grammar rules.
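For example, assuming a TimeSensor named Clock, a PositionInterpolator named Mover, and a Transform named Cart have been defined earlier in the file (all names hypothetical):

    ROUTE Clock.fraction_changed TO Mover.set_fraction
    ROUTE Mover.value_changed TO Cart.set_translation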

2.4 Scene graph structure

2.4.1 Root nodes

An X3D file contains zero or more root nodes. The root nodes for an X3D file are those nodes defined by the node statements or USE statements that are not contained in other node statements. Root nodes shall be children nodes (see 2.6.5, Grouping and children nodes).

2.4.2 Scene graph hierarchy

An X3D file contains a directed acyclic graph. Node statements can contain SFNode or MFNode field statements that, in turn, contain node (or USE) statements. This hierarchy of nodes is called the scene graph. Each arc in the graph from A to B means that node A has an SFNode or MFNode field whose value directly contains node B. See C.[FOLE] for details on hierarchical scene graphs.

2.4.3 Descendant and ancestor nodes

The descendants of a node are all of the nodes in its SFNode or MFNode fields, as well as all of those nodes' descendants. The ancestors of a node are all of the nodes that have the node as a descendant.

2.4.4 Transformation hierarchy

The transformation hierarchy includes all of the root nodes and root node descendants that are considered to have one or more particular locations in the virtual world. X3D includes the notion of local coordinate systems, defined in terms of transformations from ancestor coordinate systems (using Transform or Billboard nodes). The coordinate system in which the root nodes are displayed is called the world coordinate system.

An X3D browser's task is to present an X3D file to the user; it does this by presenting the transformation hierarchy to the user. The transformation hierarchy describes the directly perceptible parts of the virtual world.

The following node types are in the scene graph but not affected by the transformation hierarchy: ColorInterpolator, CoordinateInterpolator, NavigationInfo, OrientationInterpolator, PositionInterpolator, ScalarInterpolator, TimeSensor, and WorldInfo.

Nodes that are descendants of LOD or Switch nodes are affected by the transformation hierarchy, even if the settings of a Switch node's whichChoice field or the position of the viewer with respect to a LOD node makes them imperceptible.

The transformation hierarchy shall be a directed acyclic graph; results are undefined if a node in the transformation hierarchy is its own ancestor.

2.4.5 Standard units and coordinate system

ISO/IEC 14772 defines the unit of measure of the world coordinate system to be metres. All other coordinate systems are built from transformations based on the world coordinate system. Table 2.2 lists standard units for ISO/IEC 14772.

Table 2.2 -- Standard units

Category         Unit
Linear distance  Metres
Angles           Radians
Time             Seconds
Colour space     RGB ([0.,1.], [0.,1.], [0.,1.])

 

 

ISO/IEC 14772 uses a Cartesian, right-handed, three-dimensional coordinate system. By default, the viewer is on the Z-axis looking down the -Z-axis toward the origin with +X to the right and +Y straight up. A modelling transformation (see 6.52, Transform, and 6.6, Billboard) or viewing transformation (see 6.53, Viewpoint) can be used to alter this default projection.

2.4.6 Run-time name scope

Each X3D file defines a run-time name scope that contains all of the root nodes of the file and all of the descendant nodes of the root nodes, with the exception of:

  1. descendant nodes that are inside Inline nodes;

Each Inline node also defines a run-time name scope, consisting of all of the root nodes of the file referred to by the Inline node, restricted as above.

Nodes created dynamically (through the API, using the Browser.createX3D methods) are not part of any name scope until they are added to the scene graph, at which point they become part of the same name scope as their parent node(s). A node may be part of more than one run-time name scope. A node shall be removed from a name scope when it is removed from the scene graph.

2.5 X3D and the World Wide Web

2.5.1 File extension and MIME types

The file extension for X3D files is .wrl (for world) or .x3d.

The official MIME type for X3D files is defined as:

    model/x3d

where the MIME major type for 3D data descriptions is model, and the minor type for X3D documents is x3d.

For compatibility with VRML, the following MIME types shall also be supported:

    model/vrml

where the MIME major type for 3D data descriptions is model, and the minor type for VRML documents is vrml and

    x-world/x-vrml

where the MIME major type is x-world, and the minor type for VRML documents is x-vrml.

See C.[MIME] for details.

2.5.2 URLs

A URL (Uniform Resource Locator), described in 2.[URL], specifies a file located on a particular server and accessed through a specified protocol (e.g., http). In ISO/IEC 14772, the upper-case term URL refers to a Uniform Resource Locator, while the italicized lower-case version url refers to a field which may contain URLs or in-line encoded data.

All url fields are of type MFString. The strings in these fields indicate multiple locations to search for data, in decreasing order of preference. If the browser cannot locate or interpret the data specified by the first location, it may try the second and subsequent locations in order until a URL containing interpretable data is encountered; however, Core X3D browsers only have to interpret the first URL. If no interpretable URLs are located, the node type defines the resultant default behaviour. The url field entries are delimited by double quotation marks " ". Due to 2.5.4, Scripting language protocols, url fields use a superset of the standard URL syntax defined in 2.[URL]. Details on the string field are located in 5.9, SFString and MFString.

More general information on URLs is described in 2.[URL].
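As a sketch, an Inline node with a preference-ordered url field (the locations are hypothetical); a Core X3D browser may consult only the first entry:

    Inline {
      url [ "http://www.example.com/detail.wrl",   # preferred location
            "detail.wrl" ]                         # relative fallback
    }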

2.5.3 Relative URLs

Relative URLs are handled as described in 2.[RURL]. The base document for nodes that contain URL fields is:

  1. the X3D file from which the statement is read, in which case the RURL information provides the data itself.

2.5.4 Scripting language protocols

Scripts are not supported in the Core X3D profile.

 

2.5.5 File compression

X3D files can be compressed using gzip. Core X3D browsers shall automatically recognize the compression; no specific encoding declaration is necessary. gzip compression is part of the Java JDK1.1 and is therefore easy to integrate into lightweight Java browsers.

 

2.6 Node semantics

2.6.1 Introduction

Each node has the following characteristics:

  1. A type name, for example Group.
  2. Zero or more fields that define how each node differs from other nodes of the same type. Field values are stored in the X3D file along with the nodes, and encode the state of the virtual world.
  3. A set of events that it can receive and send. Each node may receive zero or more different kinds of events which will result in some change to the node's state. Each node may also generate zero or more different kinds of events to report changes in the node's state.
  4. An implementation. The implementation of each node defines how it reacts to events it can receive, when it generates events, and its visual or auditory appearance in the virtual world (if any). The X3D standard defines the semantics of built-in nodes (i.e., nodes with implementations that are provided by the X3D browser).
  5. A name. Nodes can be named. This is used by other statements to reference a specific instantiation of a node.

2.6.2 DEF/USE semantics

A node given a name using the DEF keyword may be referenced by name later in the same file with USE or ROUTE statements. The USE statement does not create a copy of the node. Instead, the same node is inserted into the scene graph a second time, resulting in the node having multiple parents. Using an instance of a node multiple times is called instantiation.

Node names are limited in scope to a single X3D file or string submitted to the CreateX3DFromString browser extension. Given a node named "NewNode" (i.e., DEF NewNode), any "USE NewNode" statements in SFNode or MFNode fields inside NewNode's scope refer to NewNode (see 2.4.4, Transformation hierarchy, for restrictions on self-referential nodes).

If multiple nodes are given the same name, each USE statement refers to the closest node with the given name preceding it in the X3D file.
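A minimal sketch of instantiation (the name Column is hypothetical); the USE statement inserts the same node a second time rather than copying it:

    DEF Column Shape {
      geometry IndexedLineSet {
        coord Coordinate { point [ 0 0 0, 0 2 0 ] }
        coordIndex [ 0 1 -1 ]
      }
    }
    Transform {
      translation 2 0 0
      children [ USE Column ]   # same node instance, now with two parents
    }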

2.6.3 Shapes and geometry

2.6.3.1 Introduction

The Shape node associates a geometry node with nodes that define that geometry's appearance. Shape nodes shall be part of the transformation hierarchy to have any visible result, and the transformation hierarchy shall contain Shape nodes for any geometry to be visible (the only nodes that render visible results are Shape nodes and the Background node). A Shape node contains exactly one geometry node in its geometry field. The following node types are geometry nodes: IndexedFaceSet, IndexedLineSet, and PointSet (see Table 2.3).

2.6.3.2 Geometric property nodes

Several geometry nodes contain Coordinate and TextureCoordinate as geometric property nodes. The geometric property nodes are defined as individual nodes so that instancing and sharing is possible between different geometry nodes.

2.6.3.3 Appearance nodes

Shape nodes may specify an Appearance node that describes the appearance properties (material and texture) to be applied to the Shape's geometry. Only the Material node may be specified in the material field of the Appearance node, and only the ImageTexture node may be specified in the texture field of the Appearance node (see Table 2.3).

2.6.3.4 Shape hint fields

The IndexedFaceSet nodes each have three SFBool fields that provide hints about the geometry. These hints specify the vertex ordering, whether the shape is solid, and whether the shape contains convex faces; the corresponding fields are ccw, solid, and convex. The convex field is not supported in Core X3D and is assumed to have the value TRUE.

The ccw field defines the ordering of the vertex coordinates of the geometry with respect to user-given or automatically generated normal vectors used in the lighting model equations. If ccw is TRUE, the normals shall follow the right hand rule; the orientation of each normal with respect to the vertices (taken in order) shall be such that the vertices appear to be oriented in a counterclockwise order when the vertices are viewed (in the local coordinate system of the Shape) from the opposite direction as the normal. If ccw is FALSE, the normals shall be oriented in the opposite direction.

The solid field determines whether one or both sides of each polygon shall be displayed. If solid is FALSE, each polygon shall be visible regardless of the viewing direction (i.e., no backface culling shall be done). Two-sided lighting, in which both sides of lit surfaces are illuminated, is not supported in core X3D. If solid is TRUE, the visibility of each polygon shall be determined as follows: Let V be the position of the viewer in the local coordinate system of the geometry. Let N be the geometric normal vector of the polygon, and let P be any point (besides the local origin) in the plane defined by the polygon's vertices. Then if (V dot N) - (N dot P) is greater than zero, the polygon shall be visible; if it is less than or equal to zero, the polygon shall be invisible (backface culled).

All polygons in the shape shall be convex. A polygon is convex if it is planar, does not intersect itself, and all of the interior angles at its vertices are less than 180 degrees. Non-planar and self-intersecting polygons may produce undefined results.

2.6.3.5 Crease angle field

The creaseAngle field, used by the IndexedFaceSet nodes, affects how default normals are generated. If the angle between the geometric normals of two adjacent faces is less than the crease angle, normals shall be calculated so that the faces are smooth-shaded across the edge; otherwise, normals shall be calculated so that a lighting discontinuity across the edge is produced. For example, a crease angle of 0.5 radians means that an edge between two adjacent polygonal faces will be smooth shaded if the geometric normals of the two faces form an angle that is less than 0.5 radians. Otherwise, the faces will appear faceted. Crease angles shall be greater than or equal to 0.0.
The Core X3D profile supports only the values 0 and 3.14 for creaseAngle.
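A sketch combining the hint fields described above on a single convex quad:

    Shape {
      geometry IndexedFaceSet {
        coord Coordinate { point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0 ] }
        coordIndex [ 0 1 2 3 -1 ]   # one convex face, counterclockwise vertex order
        ccw   TRUE                  # normals follow the right-hand rule
        solid FALSE                 # both sides visible; no backface culling
        creaseAngle 0               # faceted shading (core X3D allows only 0 and 3.14)
      }
    }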

2.6.4 Bounding boxes

Several of the nodes include a bounding box specification consisting of two fields, bboxSize and bboxCenter. A bounding box is a rectangular parallelepiped of dimension bboxSize centred on the location bboxCenter in the local coordinate system. It is typically used by grouping nodes to provide a hint to the browser on the group's approximate size for culling optimizations. The default size for bounding boxes (-1, -1, -1) indicates that the user did not specify the bounding box and the effect shall be as if the bounding box were infinitely large. A bboxSize value of (0, 0, 0) is valid and represents a point in space (i.e., an infinitely small box). Specified bboxSize field values shall be >= 0.0 or equal to (-1, -1, -1). The bboxCenter fields specify a position offset from the local coordinate system.

The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside a grouping node (e.g., Transform). These are used as hints to optimize certain operations such as determining whether or not the group needs to be drawn. The bounding box shall be large enough at all times to enclose the union of the group's children's bounding boxes; it shall not include any transformations performed by the group itself (i.e., the bounding box is defined in the local coordinate system of the children). Results are undefined if the specified bounding box is smaller than the true bounding box of the group.
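A minimal sketch (the node name is hypothetical): the hint promises that all children fit inside a 2-metre cube centred at (0, 1, 0) in the children's coordinate system:

    DEF Marker Transform {
      bboxCenter 0 1 0
      bboxSize   2 2 2
      children [
        Shape {
          geometry PointSet {
            coord Coordinate { point [ 0 1 0 ] }
          }
        }
      ]
    }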

2.6.5 Grouping and children nodes

Grouping nodes have a field that contains a list of children nodes. Each grouping node defines a coordinate space for its children. This coordinate space is relative to the coordinate space of the node of which the group node is a child. Such a node is called a parent node. This means that transformations accumulate down the scene graph hierarchy.

The following node types are grouping nodes:

The following node types are children nodes:

The following node types are not valid as children nodes:

Grouping nodes do not support the addChildren and removeChildren eventIn definitions. To modify the hierarchy, the children exposedField must be accessed directly through the API.

Note that a variety of node types reference other node types through fields. Some of these are parent-child relationships, while others are not (there are node-specific semantics). Table 2.3 lists all node types that reference other nodes through fields.

Table 2.3 -- Nodes with SFNode or MFNode fields

Node Type       Field       Valid Node Types for Field
Anchor          children    Valid children nodes
Appearance      material    Material
                texture     ImageTexture
Billboard       children    Valid children nodes
Group           children    Valid children nodes
IndexedFaceSet  color       Color
                coord       Coordinate
                texCoord    TextureCoordinate
IndexedLineSet  color       Color
                coord       Coordinate
LOD             level       Valid children nodes
Shape           appearance  Appearance
                geometry    IndexedFaceSet, IndexedLineSet, PointSet
Switch          choice      Valid children nodes
Transform       children    Valid children nodes

2.6.6 Light sources

Shape nodes are illuminated by the sum of all of the lights in the world that affect them. This includes the contribution of both the direct and ambient illumination from light sources. Ambient illumination results from the scattering and reflection of light originally emitted directly by light sources. The amount of ambient light is associated with the individual lights in the scene. This is a gross approximation to how ambient reflection actually occurs in nature.

Core X3D supports only the DirectionalLight node as a light source. The ambientIntensity and color fields are ignored; core X3D uses only white light.

The intensity field specifies the brightness of the direct emission from the light. Light intensity may range from 0.0 (no light emission) to 1.0 (full intensity).

DirectionalLight nodes illuminate only the objects descended from the light's parent grouping node, including any descendent children of the parent grouping nodes.
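A sketch of this scoping rule; the light illuminates only descendants of its parent grouping node:

    Transform {
      children [
        DirectionalLight {
          direction 0 0 -1   # light travels along -Z of the local coordinate system
          intensity 0.8      # color and ambientIntensity would be ignored here
        }
        Shape {              # lit: a sibling, i.e. a descendant of the light's parent
          geometry IndexedFaceSet {
            coord Coordinate { point [ -1 0 0, 1 0 0, 0 1 0 ] }
            coordIndex [ 0 1 2 -1 ]
          }
        }
      ]
    }
    # geometry defined outside this Transform is unaffected by the light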

2.6.7 Sensor nodes

2.6.7.1 Introduction to sensors

The following node types are sensor nodes:

Sensors are children nodes in the hierarchy and therefore may be parented by grouping nodes as described in 2.6.5, Grouping and children nodes.

Each type of sensor defines when an event is generated. The state of the scene graph after several sensors have generated events shall be as if each event is processed separately, in order. If sensors generate events at the same time, the state of the scene graph will be undefined if the results depend on the ordering of the events.

It is possible to create dependencies between various types of sensors. For example, a TouchSensor may result in a change to a VisibilitySensor node's transformation, which in turn may cause the VisibilitySensor node's visibility status to change.

The following two sections classify sensors into two categories: environmental sensors and pointing-device sensors.

2.6.7.2 Environmental sensors

The following node types are environmental sensors: ProximitySensor, TimeSensor, and VisibilitySensor.

The ProximitySensor detects when the user navigates into a specified region in the world. The ProximitySensor itself is not visible. The TimeSensor is a clock that has no geometry or location associated with it; it is used to start and stop time-based nodes such as interpolators. The VisibilitySensor detects when a specific part of the world becomes visible to the user. Proximity, time and visibility sensors are each processed independently of whether others exist or overlap.

When environmental sensors are inserted into the transformation hierarchy (i.e., read from a file or created via the API) and before the presentation is updated, they shall generate events indicating any conditions which the sensor is intended to detect (see 2.10.3, Execution model). The conditions for individual sensor types to generate these initial events are defined in the individual node specifications in 6, Node reference.

2.6.7.3 Pointing-device sensors

Pointing-device sensors detect user pointing events such as the user clicking on a piece of geometry (i.e., TouchSensor). The following node types are pointing-device sensors:

A pointing-device sensor is activated when the user locates the pointing device over geometry that is influenced by that specific pointing-device sensor. Pointing-device sensors have influence over all geometry that is descended from the sensor's parent groups. In the case of the Anchor node, the Anchor node itself is considered to be the parent group. Typically, the pointing-device sensor is a sibling to the geometry that it influences. In other cases, the sensor is a sibling to groups which contain geometry (i.e., are influenced by the pointing-device sensor).

The appearance properties of the geometry do not affect activation of the sensor. In particular, transparent materials or textures shall be treated as opaque with respect to activation of pointing-device sensors.

For a given user activation, the lowest enabled pointing-device sensor in the hierarchy is activated. All other pointing-device sensors above the lowest enabled pointing-device sensor are ignored. The hierarchy is defined by the geometry node over which the pointing-device sensor is located and the entire hierarchy upward. If there are multiple pointing-device sensors tied for lowest, each of these is activated simultaneously and independently, possibly resulting in multiple sensors activating and generating output simultaneously. This feature allows combinations of pointing-device sensors (e.g., TouchSensor and PlaneSensor). If a pointing-device sensor appears in the transformation hierarchy multiple times (DEF/USE), it shall be tested for activation in all of the coordinate systems in which it appears.

If a pointing-device sensor is not enabled when the pointing-device button is activated, it will not generate events related to the pointing device until after the pointing device is deactivated and the sensor is enabled (i.e., enabling a sensor in the middle of dragging does not result in the sensor activating immediately).

The Anchor node is considered to be a pointing-device sensor when trying to determine which sensor (or Anchor node) to activate. For example, a click on Shape3 is handled by SensorD, a click on Shape2 is handled by SensorC and AnchorA, and a click on Shape1 is handled by SensorA and SensorB:

    Group {
      children [
        DEF Shape1  Shape       { ... }
        DEF SensorA TouchSensor { ... }
        DEF SensorB PlaneSensor { ... }
        DEF AnchorA Anchor {
          url "..."
          children [
            DEF Shape2  Shape { ... }
            DEF SensorC TouchSensor { ... }
            Group {
              children [
                DEF Shape3  Shape { ... }
                DEF SensorD TouchSensor { ... }
              ]
            }
          ]
        }
      ]
    }

2.6.7.4 Drag sensors

Drag sensors are not supported in core X3D.

2.6.7.5 Activating and manipulating sensors

The pointing device controls a pointer in the virtual world. While activated by the pointing device, a sensor will generate events as the pointer moves. Typically the pointing device may be categorized as either 2D (e.g., conventional mouse) or 3D (e.g., wand). It is suggested that the pointer controlled by a 2D device is mapped onto a plane a fixed distance from the viewer and perpendicular to the line of sight. The mapping of a 3D device may describe a 1:1 relationship between movement of the pointing device and movement of the pointer.

The position of the pointer defines a bearing which is used to determine which geometry is being indicated. When implementing a 2D pointing device it is suggested that the bearing is defined by the vector from the viewer position through the location of the pointer. When implementing a 3D pointing device it is suggested that the bearing is defined by extending a vector from the current position of the pointer in the direction indicated by the pointer.

In all cases the pointer is considered to be indicating a specific geometry when that geometry is intersected by the bearing. If the bearing intersects multiple sensors' geometries, only the sensor nearest to the pointer will be eligible for activation.

2.6.8 Interpolator nodes

Interpolator nodes are designed for linear keyframed animation. An interpolator node defines a piecewise-linear function, f(t), on the interval (-infinity, +infinity). The piecewise-linear function is defined by n values of t, called key, and the n corresponding values of f(t), called keyValue. The keys shall be monotonically non-decreasing, otherwise the results are undefined. The keys are not restricted to any interval.

An interpolator node evaluates f(t) given any value of t (via the set_fraction eventIn) as follows: Let the n keys t0, t1, t2, ..., tn-1 partition the domain (-infinity, +infinity) into the n+1 subintervals given by (-infinity, t0), [t0, t1), [t1, t2), ... , [tn-1, +infinity). Also, let the n values v0, v1, v2, ..., vn-1 be the values of f(t) at the associated key values. The piecewise-linear interpolating function, f(t), is defined to be

     f(t) = v0,                      if t <= t0,
          = vn-1,                    if t >= tn-1,
          = linterp(t, vi, vi+1),    if ti <= t <= ti+1

     where linterp(t,x,y) is the linear interpolant, and i belongs to {0, 1, ..., n-2}.

The third conditional value of f(t) allows multiple values to be defined for a single key (i.e., limits from both the left and right at a discontinuity in f(t)). The first specified value is used as the limit of f(t) from the left, and the last specified value is used as the limit of f(t) from the right. The value of f(t) at a multiply defined key is indeterminate, but should be one of the associated limit values.

The following node types are interpolator nodes, each based on the type of value that is interpolated: ColorInterpolator, CoordinateInterpolator, OrientationInterpolator, PositionInterpolator, and ScalarInterpolator.

All interpolator nodes share a common set of fields and semantics:

    eventIn      SFFloat      set_fraction
    exposedField MFFloat      key           [...]
    exposedField MF<type>     keyValue      [...]
    eventOut     [S|M]F<type> value_changed

The type of the keyValue field is dependent on the type of the interpolator (e.g., the ColorInterpolator's keyValue field is of type MFColor).

The set_fraction eventIn receives an SFFloat event and causes the interpolator function to evaluate, resulting in a value_changed eventOut with the same timestamp as the set_fraction event.

ColorInterpolator, OrientationInterpolator, PositionInterpolator, and ScalarInterpolator output a single-value field to value_changed. Each value in the keyValue field corresponds in order to the parameter value in the key field. Results are undefined if the number of values in the key field of an interpolator is not the same as the number of values in the keyValue field.

CoordinateInterpolator sends multiple-value results to value_changed. In this case, the keyValue field is an n x m array of values, where n is the number of values in the key field and m is the number of values at each keyframe. Each m values in the keyValue field correspond, in order, to a parameter value in the key field. Each value_changed event shall contain m interpolated values. Results are undefined if the number of values in the keyValue field divided by the number of values in the key field is not a positive integer.
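A sketch of this array layout (the node name is hypothetical), with n = 2 keys and m = 3 coordinates per keyframe, giving n x m = 6 keyValue entries:

    DEF Morph CoordinateInterpolator {
      key [ 0, 1 ]
      keyValue [ 0 0 0, 1 0 0, 0 1 0,    # m = 3 coordinates for key 0
                 0 0 1, 1 0 1, 0 1 1 ]   # m = 3 coordinates for key 1
    }
    # each value_changed event carries m = 3 interpolated coordinates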

If an interpolator node's value_changed eventOut is read before it receives any inputs, keyValue[0] is returned if keyValue is not empty. If keyValue is empty (i.e., [ ]), the initial value for the eventOut type is returned (e.g., (0, 0, 0) for SFVec3f); see 5, Field and event reference, for initial event values.

The location of an interpolator node in the transformation hierarchy has no effect on its operation. For example, if a parent of an interpolator node is a Switch node with whichChoice set to -1 (i.e., ignore its children), the interpolator continues to operate as specified (receives and sends events).
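Putting the pieces together, a minimal keyframe animation sketch (all names hypothetical): the TimeSensor drives the interpolator, which in turn drives a Transform.

    DEF Clock TimeSensor { cycleInterval 4 loop TRUE }
    DEF Mover PositionInterpolator {
      key      [ 0, 0.5, 1 ]
      keyValue [ 0 0 0,  2 0 0,  0 0 0 ]   # out and back over one 4-second cycle
    }
    DEF Cart Transform {
      children [
        Shape {
          geometry PointSet {
            coord Coordinate { point [ 0 0 0 ] }
          }
        }
      ]
    }
    ROUTE Clock.fraction_changed TO Mover.set_fraction
    ROUTE Mover.value_changed TO Cart.set_translation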

2.6.9 Time-dependent nodes

AudioClip and TimeSensor are time-dependent nodes that activate and deactivate themselves at specified times. Each of these nodes contains the exposedFields: startTime, stopTime, and loop, and the eventOut: isActive. The values of the exposedFields are used to determine when the node becomes active or inactive. Also, under certain conditions, these nodes ignore events to some of their exposedFields. A node ignores an eventIn by not accepting the new value and not generating an eventOut_changed event. In this subclause, an abstract time-dependent node can be either an AudioClip or a TimeSensor node.

Time-dependent nodes can execute for 0 or more cycles. A cycle is defined by field data within the node. If, at the end of a cycle, the value of loop is FALSE, execution is terminated (see below for events at termination). Conversely, if loop is TRUE at the end of a cycle, a time-dependent node continues execution into the next cycle. A time-dependent node with loop TRUE at the end of every cycle continues cycling forever if startTime >= stopTime, or until stopTime if startTime < stopTime.

A time-dependent node generates an isActive TRUE event when it becomes active and generates an isActive FALSE event when it becomes inactive. These are the only times at which an isActive event is generated. In particular, isActive events are not sent at each tick of a simulation.

A time-dependent node is inactive until its startTime is reached. When time now becomes greater than or equal to startTime, an isActive TRUE event is generated and the time-dependent node becomes active (now refers to the time at which the browser is simulating and displaying the virtual world). When a time-dependent node is read from an X3D file and the ROUTEs specified within the X3D file have been established, the node should determine if it is active and, if so, generate an isActive TRUE event and begin generating any other necessary events. However, if a node would have become inactive at any time before the reading of the X3D file, no events are generated upon the completion of the read.

An active time-dependent node will become inactive when stopTime is reached if stopTime > startTime. The value of stopTime is ignored if stopTime <= startTime. Also, an active time-dependent node will become inactive at the end of the current cycle if loop is FALSE. If an active time-dependent node receives a set_loop FALSE event, execution continues until the end of the current cycle or until stopTime (if stopTime > startTime), whichever occurs first. The termination at the end of cycle can be overridden by a subsequent set_loop TRUE event.

Any set_startTime events to an active time-dependent node are ignored. Any set_stopTime event where stopTime <= startTime sent to an active time-dependent node is also ignored. A set_stopTime event where startTime < stopTime <= now sent to an active time-dependent node results in events being generated as if stopTime has just been reached. That is, final events, including an isActive FALSE, are generated and the node becomes inactive. The stopTime_changed event will have the set_stopTime value. Other final events are node-dependent (cf. TimeSensor).

A time-dependent node may be restarted while it is active by sending a set_stopTime event equal to the current time (which will cause the node to become inactive) and a set_startTime event, setting it to the current time or any time in the future. These events will have the same time stamp and should be processed as set_stopTime, then set_startTime to produce the correct behaviour.

The default values for each of the time-dependent nodes are specified such that any node with default values is already inactive (and, therefore, will generate no events upon loading). A time-dependent node can be defined such that it will be active upon reading by specifying loop TRUE. This use of a non-terminating time-dependent node should be used with caution since it incurs continuous overhead on the simulation.
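Two sketches of these activation rules (node names hypothetical):

    # Active upon reading: loop is TRUE and startTime >= stopTime, so it cycles forever.
    DEF Blinker TimeSensor { cycleInterval 2 loop TRUE }

    # Inactive on loading: loop, startTime, and stopTime keep their default values,
    # so no events are generated until a set_startTime event arrives.
    DEF OneShot TimeSensor { cycleInterval 2 }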

Figure 2.2 illustrates the behaviour of several common cases of time-dependent nodes. In each case, the initial conditions of startTime, stopTime, loop, and the time-dependent node's cycle interval are labelled, the red region denotes the time period during which the time-dependent node is active, the arrows represent eventIns received by and eventOuts sent by the time-dependent node, and the horizontal axis represents time.

Time dependent examples

Figure 2.2 -- Examples of time-dependent node execution

 

2.6.10 Bindable children nodes


Core X3D does not support the stack behaviour of bindable nodes as in VRML97. The Background, NavigationInfo, and Viewpoint nodes have the unique behaviour that only one of each type can be bound (i.e., affecting the user's experience) at any instant in time. A node is bound by sending TRUE to its set_bind eventIn; it then sends an isBound TRUE event, and the previously bound node sends an isBound FALSE event.
Sending FALSE to a set_bind eventIn results in a reset to the default value, for example a black background.

When a node replaces the current bound node, the isBound TRUE and FALSE eventOuts from the two nodes are sent simultaneously (i.e., with identical timestamps).
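A sketch (names hypothetical, and Pick is assumed to have sibling geometry to click on): pressing binds CloseView, while releasing sends FALSE, which in core X3D resets to the default viewpoint rather than re-binding a previous node.

    DEF CloseView Viewpoint { position 0 1.6 2 description "close-up" }
    DEF Pick TouchSensor { }
    ROUTE Pick.isActive TO CloseView.set_bind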

2.6.11 Texture maps

2.6.11.1 Texture map formats

Two node types specify texture maps: Background and ImageTexture. In all cases, texture maps are defined by 2D images that contain an array of colour values describing the texture. The texture map values are interpreted differently depending on the number of components in the texture map and the specifics of the image format. In general, texture maps may be described using one of the following forms:

  1. Intensity textures (one-component)
  2. Intensity plus alpha opacity textures (two-component)
  3. Full RGB textures (three-component)
  4. Full RGB plus alpha opacity textures (four-component)

Note that most image formats specify an alpha opacity, not transparency (where alpha = 1 - transparency).

See Table 2.5 and Table 2.6 for a description of how the various texture types are applied.

Core X3D requires support for JPEG files (see 2.[JPEG]) and recommends support for GIF files (see C.[GIF]). The Java JDK1.1 provides built-in support for these image types.

2.6.11.2 Texture map image formats

Texture nodes that require support for the PNG (see 2.[PNG]) image format (6.5, Background, and 6.22, ImageTexture) shall interpret the PNG pixel formats in the following way:

    • Greyscale pixels without alpha or simple transparency are treated as intensity textures.
    • Greyscale pixels with alpha or simple transparency are treated as intensity plus alpha textures.
    • RGB pixels without alpha channel or simple transparency are treated as full RGB textures.
    • RGB pixels with alpha channel or simple transparency are treated as full RGB plus alpha textures.

If the image specifies colours as indexed-colour (i.e., palettes or colourmaps), the following semantics should be used (note that "greyscale" refers to a palette entry with equal red, green, and blue values):

    • If all the colours in the palette are greyscale and there is no transparency chunk, it is treated as an intensity texture.
    • If all the colours in the palette are greyscale and there is a transparency chunk, it is treated as an intensity plus opacity texture.
    • If any colour in the palette is not grey and there is no transparency chunk, it is treated as a full RGB texture.
    • If any colour in the palette is not grey and there is a transparency chunk, it is treated as a full RGB plus alpha texture.

Texture nodes that require support for JPEG files (see 2.[JPEG], 6.5, Background, and 6.22, ImageTexture) shall interpret JPEG files as follows:

    • Greyscale files (number of components equals 1) are treated as intensity textures.
    • YCbCr files are treated as full RGB textures.
    • No other JPEG file types are required. It is recommended that other JPEG files are treated as full RGB textures.

Texture nodes that recommend support for GIF files (see C.[GIF], 6.5, Background, and 6.22, ImageTexture) shall follow the applicable semantics described above for the PNG format.

2.7 Field, eventIn, and eventOut semantics

Fields are placed inside node statements in an X3D file, and define the persistent state of the virtual world. Results are undefined if multiple values for the same field are declared in the same node (e.g., Material { diffuseColor 1.0 0.0 0.0 diffuseColor 0.0 1.0 0.0 }).

EventIns and eventOuts define the types and names of events that each type of node may receive or generate. Events are transient and event values are not written to X3D files. Each node interprets the values of the events sent to it or generated by it according to its implementation.

Field, eventIn, and eventOut types, and field encoding syntax, are described in 5, Field and event reference.

An exposedField can receive events like an eventIn, can generate events like an eventOut, and can be stored in X3D files like a field. An exposedField named zzz can be referred to as 'set_zzz' and treated as an eventIn, and can be referred to as 'zzz_changed' and treated as an eventOut. The initial value of an exposedField is its value in the X3D file, or the default value for the node in which it is contained, if a value is not specified. When an exposedField receives an event it shall generate an event with the same value and timestamp. The following sources, in precedence order, shall be used to determine the initial value of the exposedField:

    • the user-defined value in the instantiation (if one is specified);
    • the default value for that field as specified in the node or prototype definition.

The rules for naming fields, exposedFields, eventOuts, and eventIns for the built-in nodes are as follows:

    • All names containing multiple words start with a lower case letter, and the first letter of all subsequent words is capitalized (e.g., diffuseColor), with the exception of set_ and _changed, as described below.
    • All eventIns have the prefix "set_".
    • Certain eventIns and eventOuts of type SFTime do not use the "set_" prefix or "_changed" suffix.
    • All other eventOuts have the suffix "_changed" appended, with the exception of eventOuts of type SFBool. Boolean eventOuts begin with the word "is" (e.g., isFoo) for better readability.
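For example, the exposedField translation of a Transform node (here hypothetically named Cart) is addressed as set_translation when treated as an eventIn and as translation_changed when treated as an eventOut; Mover and Shadow are likewise hypothetical nodes defined elsewhere:

    ROUTE Mover.value_changed TO Cart.set_translation          # write via the eventIn
    ROUTE Cart.translation_changed TO Shadow.set_translation   # read via the eventOut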

2.8 Prototype semantics

Prototypes are not supported in the core X3D profile.

2.9 External prototype semantics

External Prototypes are not supported in the core X3D profile.
 

2.10 Event processing

2.10.1 Introduction

Most node types have at least one eventIn definition and thus can receive events. Incoming events are data messages sent by other nodes to change some state within the receiving node. Some nodes also have eventOut definitions. These are used to send data messages to destination nodes that some state has changed within the source node.

If an eventOut is read before it has sent any events, the initial value as specified in 5, Field and event reference, for each field/event type is returned.

2.10.2 Route semantics

The connection between the node generating the event and the node receiving the event is called a route. Routes are not nodes. The ROUTE statement is a construct for establishing event paths between nodes. ROUTE statements may either appear at the top level of an X3D file or inside a node wherever fields may appear. Nodes referenced in a ROUTE statement shall be defined before the ROUTE statement.

The types of the eventIn and the eventOut shall match exactly. For example, it is illegal to route from an SFFloat to an SFInt32 or from an SFFloat to an MFFloat.

Routes may be established only from eventOuts to eventIns. For convenience, when routing to or from an eventIn or eventOut (or the eventIn or eventOut part of an exposedField), the set_ or _changed part of the event's name is optional. If the browser is trying to establish a ROUTE to an eventIn named zzz and an eventIn of that name is not found, the browser shall then try to establish the ROUTE to the eventIn named set_zzz. Similarly, if establishing a ROUTE from an eventOut named zzz and an eventOut of that name is not found, the browser shall try to establish the ROUTE from zzz_changed.

Redundant routing is ignored. If an X3D file repeats a routing path, the second and subsequent identical routes are ignored. This also applies for routes created dynamically via the API.

2.10.3 Execution model

Once a sensor or the API has generated an initial event, the event is propagated from the eventOut producing the event along any ROUTEs to other nodes. These other nodes may respond by generating additional events, continuing until all routes have been honoured. This process is called an event cascade. All events generated during a given event cascade are assigned the same timestamp as the initial event, since all are considered to happen instantaneously.

Some sensors generate multiple events simultaneously. Similarly, it is possible that asynchronously generated events could arrive at the identical time as one or more sensor generated event. In these cases, all events generated are part of the same initial event cascade and each event has the same timestamp.

After all events of the initial event cascade are honoured, post-event processing performs actions stimulated by the event cascade. The entire sequence of events occurring in a single timestamp is as follows:

    1. Perform event cascade evaluation.
    2. Send final events from environmental sensors being removed from the transformation hierarchy.
    3. Add or remove routes specified by addRoute() or deleteRoute() calls made through the API during the preceding event cascade.
    4. Send initial events from any dynamically created environmental sensors.
    5. If any events were generated in steps 2 through 4, go to step 2 and continue.

Figure 2.3 provides a conceptual illustration of the execution model.

Figure 2.3 -- Conceptual execution model

(Note: the Script nodes shown in the corresponding VRML 97 figure do not apply, since core X3D does not support scripting.)

Nodes that contain eventOuts or exposedFields shall produce at most one event per timestamp. If a field is connected to another field via a ROUTE, an implementation shall send only one event per ROUTE per timestamp. (The corresponding VRML 97 rule for Script nodes does not apply, since core X3D does not support scripting.)

TBD: use the same timestamp for all events during one "update cycle".

D.19, Execution model, provides an example that demonstrates the execution model. Figure 2.4 illustrates event processing for a single timestamp in the example in D.19, Execution model.
 

2.10.4 Loops

In core X3D, event cascades shall not contain loops, i.e., situations in which an event E is routed to a node that generates an event that eventually results in E being generated again.

2.10.5 Fan-in and fan-out

Fan-in occurs when two or more routes write to the same eventIn. Events coming into an eventIn from different eventOuts with the same timestamp shall be processed, but the order of evaluation is implementation dependent.

Fan-out occurs when one eventOut routes to two or more eventIns. This results in sending any event generated by the eventOut to all of the eventIns.
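
Both situations are sketched below (DEF names and field values are illustrative):

    DEF CLOCK TimeSensor { loop TRUE cycleInterval 5 }
    DEF P1 PositionInterpolator { key [ 0, 1 ] keyValue [ 0 0 0, 1 0 0 ] }
    DEF P2 PositionInterpolator { key [ 0, 1 ] keyValue [ 0 0 0, 0 1 0 ] }
    DEF MOVER Transform { children [ Shape { geometry Cone { } } ] }

    # Fan-out: one eventOut routed to two eventIns.
    ROUTE CLOCK.fraction_changed TO P1.set_fraction
    ROUTE CLOCK.fraction_changed TO P2.set_fraction

    # Fan-in: two eventOuts routed to the same eventIn. Both events carry
    # the same timestamp; their order of evaluation is implementation
    # dependent.
    ROUTE P1.value_changed TO MOVER.set_translation
    ROUTE P2.value_changed TO MOVER.set_translation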
 

2.11 Time

2.11.1 Introduction

The browser controls the passage of time in a world by causing TimeSensors to generate events as time passes. Specialized browsers or authoring applications may cause time to pass more quickly or slowly than in the real world, but typically the times generated by TimeSensors will approximate "real" time. A world's creator should make no assumptions about how often a TimeSensor will generate events but can safely assume that each time event generated will have a timestamp greater than any previous time event.

2.11.2 Time origin

Time (0.0) is equivalent to 00:00:00 GMT January 1, 1970. Absolute times are specified in SFTime or MFTime fields as double-precision floating point numbers representing seconds. Negative absolute times are interpreted as happening before 1970.

Processing an event with timestamp t may only result in generating events with timestamps greater than or equal to t.

2.11.3 Discrete and continuous changes

Core X3D does not distinguish between discrete events (such as those generated by a TouchSensor) and events that are the result of sampling a conceptually continuous set of changes (such as the fraction events generated by a TimeSensor). An ideal X3D implementation would generate an infinite number of samples for continuous changes, each of which would be processed infinitely quickly.

Before processing a discrete event, all continuous changes that are occurring at the discrete event's timestamp shall behave as if they generate events at that same timestamp.

Beyond the requirement that continuous changes be up-to-date during the processing of discrete changes, the sampling frequency of continuous changes is implementation dependent. Typically, a TimeSensor affecting a visible (or otherwise perceptible) portion of the world will generate events once per frame, where a frame is a single rendering of the world or one time-step in a simulation.
 

2.12 Scripting

Scripting is not supported in the core X3D profile.

2.12.1 Application programming interface

Core X3D provides a powerful API for customization. See 5, Programming API, for details.

2.13 Navigation

2.13.1 Introduction

Core X3D does not require built-in navigation.

Conceptually speaking, every X3D world contains a viewpoint from which the world is currently being viewed. Navigation is the action taken by the user to change the position and/or orientation of this viewpoint thereby changing the user's view. This allows the user to move through a world or examine an object. The NavigationInfo node (see 6.29, NavigationInfo) specifies the characteristics of the desired navigation behaviour, but the exact user interface is browser-dependent. The Viewpoint node (see 6.53, Viewpoint) specifies key locations and orientations in the world to which the user may be moved via API or browser-specific user interfaces.

2.13.2 Navigation paradigms

The browser may allow the user to modify the location and orientation of the viewer in the virtual world using a navigation paradigm. Many different navigation paradigms are possible, depending on the nature of the virtual world and the task the user wishes to perform. For instance, a walking paradigm would be appropriate in an architectural walkthrough application, while a flying paradigm might be better in an application exploring interstellar space. Examination is another common use for X3D, where the world is considered to be a single object which the user wishes to view from many angles and distances.

The NavigationInfo node has a type field that specifies the navigation paradigm for this world. The actual user interface provided to accomplish this navigation is browser-dependent. See 6.29, NavigationInfo, for details.
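
For example (a minimal sketch; field values are illustrative only), a world meant to be examined as a single object might declare:

    NavigationInfo {
      type [ "EXAMINE", "ANY" ]   # preferred paradigm, with a fallback
    }
    Viewpoint {
      position    0 1.6 10
      description "Front view"
    }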

2.13.3 Viewing model

The browser controls the location and orientation of the viewer in the world, based on input from the user (using the browser-provided navigation paradigm) and the motion of the currently bound Viewpoint node (and its coordinate system). The X3D author can place any number of viewpoints in the world at important places from which the user might wish to view the world. Each viewpoint is described by a Viewpoint node. Viewpoint nodes exist in their parent's coordinate system, and both the viewpoint and the coordinate system may be changed to affect the view of the world presented by the browser. Only one viewpoint is bound at a time. A detailed description of how the Viewpoint node operates is described in 2.6.10, Bindable children nodes, and 6.53, Viewpoint.

Navigation is performed relative to the Viewpoint's location and does not affect the location and orientation values of a Viewpoint node. The location of the viewer may be determined with a ProximitySensor node (see 6.38, ProximitySensor).

2.13.4 Collision detection and terrain following

Core X3D does not support Collision nodes and does not require navigation. Browsers that do not support navigation may therefore ignore this subclause.

An X3D file can contain NavigationInfo nodes that influence the browser's navigation paradigm. The browser is responsible for detecting collisions between the viewer and the objects in the virtual world, and is also responsible for adjusting the viewer's location when a collision occurs. Browsers shall not disable collision detection except for the special cases listed below. (In VRML 97, Collision nodes can be used to generate events when the viewer and objects collide and to designate certain objects as transparent to collisions; core X3D does not support this mechanism.) Support for inter-object collision is not specified. The NavigationInfo types of WALK, FLY, and NONE shall strictly support collision detection. However, the NavigationInfo types ANY and EXAMINE may temporarily disable collision detection during navigation, but shall not disable collision detection during the normal execution of the world. See 6.29, NavigationInfo, for details on the various navigation types.

NavigationInfo nodes can be used to specify certain parameters often used by browser navigation paradigms. The size and shape of the viewer's avatar determines how close the avatar may be to an object before a collision is considered to take place. These parameters can also be used to implement terrain following by keeping the avatar a certain distance above the ground. They can additionally be used to determine how short an object must be for the viewer to automatically step up onto it instead of colliding with it.
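
A minimal sketch of such parameters (the values shown are the VRML 97 defaults):

    NavigationInfo {
      type       [ "WALK" ]
      avatarSize [ 0.25, 1.6, 0.75 ]   # collision distance,
                                       # viewer height above terrain,
                                       # tallest step height
    }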

2.14 Lighting model

Core X3D uses a simplified lighting model; therefore, this clause has been revised completely relative to VRML 97. It is based on the lighting specification described in C.[SHOUT].

2.14.1 Introduction

The Core X3D lighting model provides detailed equations which define the color to apply to each geometric object. For each object, the values of the Material node and texture currently being applied to the object are combined with the lights illuminating the object. These equations are designed to simulate the physical properties of light striking a surface.

2.14.2 Lighting 'off'

A Shape node is unlit if either of the following is true:

    • The shape's appearance field is NULL (default).
    • The material field in the Appearance node is NULL (default).

Note the special cases of geometry nodes that do not support lighting (see IndexedLineSet and PointSet for details).

If the shape is unlit, the color (Irgb) and alpha (A, i.e., 1 - transparency) at each point on the shape's geometry are given in Table 2.4.

Table 2.4 -- Unlit color and alpha mapping

Texture type                      Color NULL
--------------------------------  -------------------
No texture                        Irgb = (1, 1, 1)
                                  A = 1
Intensity (one-component)         Irgb = (IT, IT, IT)
                                  A = 1
Intensity+Alpha (two-component)   Irgb = (IT, IT, IT)
                                  A = AT
RGB (three-component)             Irgb = ITrgb
                                  A = 1
RGBA (four-component)             Irgb = ITrgb
                                  A = AT

where:

AT = normalized [0, 1] alpha value from a 2 or 4 component texture image
IT = normalized [0, 1] intensity from a 1 or 2 component texture image
ITrgb = color from a 3 or 4 component texture image
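
For instance (a minimal sketch), the shape below is unlit because its appearance field is NULL; per the "No texture" row of Table 2.4, Irgb = (1, 1, 1) and A = 1:

    Shape {
      # The appearance field is NULL (default), so this shape is unlit.
      geometry Box { }
    }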

2.14.3 Lighting 'on'

If the shape is lit (i.e., both an Appearance node and a Material node are specified for the Shape), the Material and Texture nodes determine the diffuse color for the lighting equations, as specified in Table 2.5.

Table 2.5 -- Lit color and alpha mapping

Texture type                              Color node NULL
----------------------------------------  ---------------------
No texture                                ODrgb = IDrgb
                                          A = 1 - TM
Intensity texture (one-component)         ODrgb = IT × IDrgb
                                          A = 1 - TM
Intensity+Alpha texture (two-component)   ODrgb = IT × IDrgb
                                          A = AT × (1 - TM)
RGB texture (three-component)             ODrgb = ITrgb
                                          A = 1 - TM
RGBA texture (four-component)             ODrgb = ITrgb
                                          A = AT × (1 - TM)

where:

IDrgb = material diffuseColor
ODrgb = diffuse factor, used in lighting equations below
TM = material transparency

All other terms are as defined in 2.14.2, Lighting 'off'.
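
For instance (a minimal sketch), the untextured shape below is lit; per the "No texture" row of Table 2.5, ODrgb = IDrgb = (0.8, 0.2, 0.2) and A = 1 - TM = 0.5:

    Shape {
      appearance Appearance {
        material Material {
          diffuseColor 0.8 0.2 0.2
          transparency 0.5
        }
      }
      geometry Sphere { }
    }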

2.14.4 Lighting equations

An ideal Core X3D implementation will evaluate the following lighting equation at each point on a lit surface. RGB intensities at each point on a geometry (Irgb) are given by:

Irgb = OErgb + SUM( oni × ILrgb,i × diffusei )

where:

diffusei = Ii × ODrgb × ( N · Li )

and:

· = modified vector dot product: if the dot product is < 0, the result is 0.0; otherwise, it is the dot product
ILrgb,i = color of light source i
Ii = intensity of light source i
Li = -direction of light source i (core X3D supports directional lights only)
N = normalized normal vector at this point on the geometry (calculated by the viewer)
ODrgb = diffuse color, from the Material node and/or texture node
OErgb = Material emissiveColor
oni = 1, if light source i affects this point on the geometry;
      0, if light source i does not affect this point (the point is outside the enclosing Group/Transform of the DirectionalLight, or the light's on field is FALSE)
SUM = sum over all light sources i
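
A worked numeric example (the values are chosen purely for illustration): consider a single directional light with intensity I1 = 1 and color ILrgb,1 = (1, 1, 1) shining on a surface point where N · L1 = 0.5, with material diffuseColor (0.8, 0.2, 0.2) and emissiveColor (0, 0, 0). Then:

diffuse1 = 1 × (0.8, 0.2, 0.2) × 0.5 = (0.4, 0.1, 0.1)
Irgb = (0, 0, 0) + 1 × (1, 1, 1) × (0.4, 0.1, 0.1) = (0.4, 0.1, 0.1)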

2.14.5 References

The X3D lighting equations are based on the simple illumination equations given in C.[FOLE] and C.[OPEN].