Cg Programming/Unity/Projection for Virtual Reality
This tutorial discusses off-axis perspective projection in Unity. It is based on Section "Vertex Transformations". Since only the view matrix and the projection matrix have to be changed, which is implemented in C#, no shader programming is required for this tutorial.
The main application of off-axis perspective projection is virtual-reality environments, for example CAVEs or so-called fish tank VR systems. Usually, the position of the user's head is tracked, and for each display a perspective projection of a camera at the tracked position is computed, so that the user experiences the illusion of looking through a window into a three-dimensional world instead of looking at a flat display.
On-axis projection refers to the case in which the camera position is on the axis of symmetry of the view plane, i.e. the axis through the center of the view plane and orthogonal to it. This case is discussed in Section "Vertex Transformations".
In virtual-reality environments, however, the virtual camera usually follows the tracked head position of the user in order to create parallax effects, which produce a much more convincing illusion of a three-dimensional world. Since the tracked head position is not restricted to the axis of symmetry of the view plane, on-axis projection is insufficient for most virtual-reality environments.
Off-axis perspective projection solves this problem by allowing arbitrary camera positions. While some low-level graphics APIs (e.g. older versions of OpenGL) support off-axis projections, they offer better support for on-axis projections because this is the more common case. Similarly, many high-level tools (e.g. Unity) support off-axis projections but provide better support for on-axis projections, i.e. you can specify any on-axis projection with a few mouse clicks, but you have to write a script to implement an off-axis projection.
Off-axis perspective projection requires a view matrix and a projection matrix that differ from those of an on-axis perspective projection. To compute an on-axis view matrix, the specified view direction is rotated onto the z axis, as described in Section "Vertex Transformations". The only difference for the off-axis view matrix is that this "view direction" is computed as the direction orthogonal to the specified view plane, i.e. the surface normal vector of the view plane.
The off-axis projection matrix has to be changed because the edges of the view plane are no longer symmetric about its intersection point with the (technical) "view direction". Therefore, the four distances to the edges have to be computed and placed in a suitable projection matrix; see Robert Kooima's description in his publication "Generalized Perspective Projection" for details. The next section presents an implementation of this technique in Unity.
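For reference, the projection matrix that the scripts below construct (following Kooima's paper) is an OpenGL-style matrix for an asymmetric view frustum. With $n$ and $f$ the distances of the near and far clipping planes, and $l$, $r$, $b$, $t$ the signed distances from the frustum axis to the left, right, bottom, and top edges of the view plane (scaled to the near clipping plane), the matrix is:

$$P = \begin{pmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & \frac{f+n}{n-f} & \frac{2fn}{n-f} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

For an on-axis projection, $l = -r$ and $b = -t$, so the first two entries of the third column vanish and the familiar symmetric projection matrix is recovered.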
The following script is based on the code in Robert Kooima's publication. There are very few differences in the implementation. One is that in Unity, the view plane is most easily specified by the built-in Quad object, which has its corners at (±0.5, ±0.5, 0) in object coordinates. Moreover, the original code was written for a right-handed coordinate system, while Unity uses a left-handed coordinate system; thus, the results of all cross products have to be multiplied by -1. Furthermore, the code here takes into account that the camera might be looking at the back face of the Quad object.
Another difference is that the rotation of the camera GameObject and its parameter fieldOfView are used by Unity for view frustum culling; therefore, the script should set them to appropriate values. (These values are irrelevant for the computation of the matrices.) Unfortunately, this can cause problems if other scripts (i.e. scripts that set the tracked head position) also set the rotation of the camera. Therefore, the variable estimateViewFrustum can be used to deactivate this estimation (which might result in incorrect view frustum culling by Unity).
If the parameter setNearClipPlane is set to true, the script sets the distance of the near clipping plane to the distance between the camera and the view plane plus the value of nearClipDistanceOffset. If this value is smaller than minNearClipDistance, however, it is set to minNearClipDistance instead. This feature is particularly useful when the script is used to render mirrors, as described in Section "Mirrors". nearClipDistanceOffset should be a negative number that is as close to 0 as possible while still avoiding rendering artifacts.
// This script should be attached to a Camera object
// in Unity. Once a Quad object is specified as the
// "projectionScreen", the script computes a suitable
// view and projection matrix for the camera.
// The code is based on Robert Kooima's publication
// "Generalized Perspective Projection," 2009,
// http://csc.lsu.edu/~kooima/pdfs/gen-perspective.pdf

using UnityEngine;

// Use the following line to apply the script in the editor:
[ExecuteInEditMode]
public class ObliqueProjectionToQuad : MonoBehaviour {
   public GameObject projectionScreen;
   public bool estimateViewFrustum = true;
   public bool setNearClipPlane = false;
   public float minNearClipDistance = 0.0001f;
   public float nearClipDistanceOffset = -0.01f;

   private Camera cameraComponent;

   void OnPreCull () {
      cameraComponent = GetComponent<Camera> ();
      if (null != projectionScreen && null != cameraComponent) {
         Vector3 pa = projectionScreen.transform.TransformPoint (
            new Vector3 (-0.5f, -0.5f, 0.0f));
            // lower left corner in world coordinates
         Vector3 pb = projectionScreen.transform.TransformPoint (
            new Vector3 (0.5f, -0.5f, 0.0f));
            // lower right corner
         Vector3 pc = projectionScreen.transform.TransformPoint (
            new Vector3 (-0.5f, 0.5f, 0.0f));
            // upper left corner
         Vector3 pe = transform.position; // eye position
         float n = cameraComponent.nearClipPlane;
            // distance of near clipping plane
         float f = cameraComponent.farClipPlane;
            // distance of far clipping plane

         Vector3 va; // from pe to pa
         Vector3 vb; // from pe to pb
         Vector3 vc; // from pe to pc
         Vector3 vr; // right axis of screen
         Vector3 vu; // up axis of screen
         Vector3 vn; // normal vector of screen

         float l; // distance to left screen edge
         float r; // distance to right screen edge
         float b; // distance to bottom screen edge
         float t; // distance to top screen edge
         float d; // distance from eye to screen

         vr = pb - pa;
         vu = pc - pa;
         va = pa - pe;
         vb = pb - pe;
         vc = pc - pe;

         // are we looking at the backface of the plane object?
         if (Vector3.Dot (-Vector3.Cross (va, vc), vb) < 0.0f) {
            // mirror points along the x axis (most users
            // probably expect the y axis to stay fixed)
            vr = -vr;
            pa = pb;
            pb = pa + vr;
            pc = pa + vu;
            va = pa - pe;
            vb = pb - pe;
            vc = pc - pe;
         }

         vr.Normalize ();
         vu.Normalize ();
         vn = -Vector3.Cross (vr, vu);
            // we need the minus sign because Unity
            // uses a left-handed coordinate system
         vn.Normalize ();

         d = -Vector3.Dot (va, vn);
         if (setNearClipPlane) {
            n = Mathf.Max (minNearClipDistance, d + nearClipDistanceOffset);
            cameraComponent.nearClipPlane = n;
         }
         l = Vector3.Dot (vr, va) * n / d;
         r = Vector3.Dot (vr, vb) * n / d;
         b = Vector3.Dot (vu, va) * n / d;
         t = Vector3.Dot (vu, vc) * n / d;

         Matrix4x4 p = new Matrix4x4 (); // projection matrix
         p[0, 0] = 2.0f * n / (r - l);
         p[0, 1] = 0.0f;
         p[0, 2] = (r + l) / (r - l);
         p[0, 3] = 0.0f;
         p[1, 0] = 0.0f;
         p[1, 1] = 2.0f * n / (t - b);
         p[1, 2] = (t + b) / (t - b);
         p[1, 3] = 0.0f;
         p[2, 0] = 0.0f;
         p[2, 1] = 0.0f;
         p[2, 2] = (f + n) / (n - f);
         p[2, 3] = 2.0f * f * n / (n - f);
         p[3, 0] = 0.0f;
         p[3, 1] = 0.0f;
         p[3, 2] = -1.0f;
         p[3, 3] = 0.0f;

         Matrix4x4 rm = new Matrix4x4 (); // rotation matrix
         rm[0, 0] = vr.x;
         rm[0, 1] = vr.y;
         rm[0, 2] = vr.z;
         rm[0, 3] = 0.0f;
         rm[1, 0] = vu.x;
         rm[1, 1] = vu.y;
         rm[1, 2] = vu.z;
         rm[1, 3] = 0.0f;
         rm[2, 0] = vn.x;
         rm[2, 1] = vn.y;
         rm[2, 2] = vn.z;
         rm[2, 3] = 0.0f;
         rm[3, 0] = 0.0f;
         rm[3, 1] = 0.0f;
         rm[3, 2] = 0.0f;
         rm[3, 3] = 1.0f;

         Matrix4x4 tm = new Matrix4x4 (); // translation matrix
         tm[0, 0] = 1.0f;
         tm[0, 1] = 0.0f;
         tm[0, 2] = 0.0f;
         tm[0, 3] = -pe.x;
         tm[1, 0] = 0.0f;
         tm[1, 1] = 1.0f;
         tm[1, 2] = 0.0f;
         tm[1, 3] = -pe.y;
         tm[2, 0] = 0.0f;
         tm[2, 1] = 0.0f;
         tm[2, 2] = 1.0f;
         tm[2, 3] = -pe.z;
         tm[3, 0] = 0.0f;
         tm[3, 1] = 0.0f;
         tm[3, 2] = 0.0f;
         tm[3, 3] = 1.0f;

         // set matrices
         cameraComponent.projectionMatrix = p;
         cameraComponent.worldToCameraMatrix = rm * tm;
         // The original paper puts everything into the projection
         // matrix (i.e. sets it to p * rm * tm and the other
         // matrix to the identity), but this doesn't appear to
         // work with Unity's shadow maps.

         if (estimateViewFrustum) {
            // rotate camera to screen for culling to work
            Quaternion q = new Quaternion ();
            q.SetLookRotation ((0.5f * (pb + pc) - pe), vu);
               // look at center of screen
            cameraComponent.transform.rotation = q;

            // set fieldOfView to a conservative estimate
            // to make frustum tall enough
            if (cameraComponent.aspect >= 1.0f) {
               cameraComponent.fieldOfView = Mathf.Rad2Deg *
                  Mathf.Atan (((pb - pa).magnitude + (pc - pa).magnitude) /
                  va.magnitude);
            } else {
               // take the camera aspect into account to
               // make the frustum wide enough
               cameraComponent.fieldOfView =
                  Mathf.Rad2Deg / cameraComponent.aspect *
                  Mathf.Atan (((pb - pa).magnitude + (pc - pa).magnitude) /
                  va.magnitude);
            }
         }
      }
   }
}
To use this script, choose Create > C# Script in the Project Window, name the script "ObliqueProjectionToQuad", double-click the new script to edit it, and copy & paste the code above into it. Then attach the script to your main camera (drag it from the Project Window onto the camera object in the Hierarchy Window). Furthermore, create a Quad object (GameObject > 3D Object > Quad in the main menu) and place it in the virtual scene to define the view plane. Deactivate the Quad's Mesh Renderer in the Inspector Window so that it is invisible (it serves only as a placeholder). Select the camera object and drag the Quad object onto Projection Screen in the Inspector. The script becomes active when the game is started. The line [ExecuteInEditMode], as noted in the code, makes the script also run in the editor.
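If you prefer to create this setup from code rather than in the editor, a minimal sketch could look like the following. It assumes that the script above has been saved as ObliqueProjectionToQuad; the class name OffAxisSetupExample and the chosen position and scale of the Quad are only illustrative.

using UnityEngine;

// Hypothetical helper: creates an invisible Quad as the view plane and
// attaches the off-axis projection script to the main camera.
public class OffAxisSetupExample : MonoBehaviour {
   void Start () {
      // create a Quad placeholder that defines the view plane
      GameObject screen = GameObject.CreatePrimitive (PrimitiveType.Quad);
      screen.name = "Projection Screen";
      screen.transform.position = new Vector3 (0.0f, 1.0f, 2.0f); // example placement
      screen.transform.localScale = new Vector3 (4.0f, 3.0f, 1.0f); // example size
      screen.GetComponent<MeshRenderer> ().enabled = false; // invisible placeholder

      // attach the projection script to the main camera and assign the screen
      ObliqueProjectionToQuad projection =
         Camera.main.gameObject.AddComponent<ObliqueProjectionToQuad> ();
      projection.projectionScreen = screen;
   }
}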
Note that some parts of Unity may ignore the new projection matrix and therefore cannot be used in combination with this script.
Note that the following code (a legacy UnityScript version) was written for the built-in Plane object instead of a Quad object.
// This script should be attached to a Camera object
// in Unity. Once a Plane object is specified as the
// "projectionScreen", the script computes a suitable
// view and projection matrix for the camera.
// The code is based on Robert Kooima's publication
// "Generalized Perspective Projection," 2009,
// http://csc.lsu.edu/~kooima/pdfs/gen-perspective.pdf

// Use the following line to apply the script in the editor:
// @script ExecuteInEditMode()

#pragma strict

public var projectionScreen : GameObject;
public var estimateViewFrustum : boolean = true;
public var setNearClipPlane : boolean = false;
public var nearClipDistanceOffset : float = -0.01;

private var cameraComponent : Camera;

function LateUpdate() {
   cameraComponent = GetComponent(Camera);
   if (null != projectionScreen && null != cameraComponent)
   {
      var pa : Vector3 = projectionScreen.transform.TransformPoint(
         Vector3(-5.0, 0.0, -5.0));
         // lower left corner in world coordinates
      var pb : Vector3 = projectionScreen.transform.TransformPoint(
         Vector3(5.0, 0.0, -5.0));
         // lower right corner
      var pc : Vector3 = projectionScreen.transform.TransformPoint(
         Vector3(-5.0, 0.0, 5.0));
         // upper left corner
      var pe : Vector3 = transform.position; // eye position
      var n : float = cameraComponent.nearClipPlane;
         // distance of near clipping plane
      var f : float = cameraComponent.farClipPlane;
         // distance of far clipping plane

      var va : Vector3; // from pe to pa
      var vb : Vector3; // from pe to pb
      var vc : Vector3; // from pe to pc
      var vr : Vector3; // right axis of screen
      var vu : Vector3; // up axis of screen
      var vn : Vector3; // normal vector of screen

      var l : float; // distance to left screen edge
      var r : float; // distance to right screen edge
      var b : float; // distance to bottom screen edge
      var t : float; // distance to top screen edge
      var d : float; // distance from eye to screen

      vr = pb - pa;
      vu = pc - pa;
      va = pa - pe;
      vb = pb - pe;
      vc = pc - pe;

      // are we looking at the backface of the plane object?
      if (Vector3.Dot(-Vector3.Cross(va, vc), vb) < 0.0)
      {
         // mirror points along the z axis (most users
         // probably expect the x axis to stay fixed)
         vu = -vu;
         pa = pc;
         pb = pa + vr;
         pc = pa + vu;
         va = pa - pe;
         vb = pb - pe;
         vc = pc - pe;
      }

      vr.Normalize();
      vu.Normalize();
      vn = -Vector3.Cross(vr, vu);
         // we need the minus sign because Unity
         // uses a left-handed coordinate system
      vn.Normalize();

      d = -Vector3.Dot(va, vn);
      if (setNearClipPlane)
      {
         n = d + nearClipDistanceOffset;
         cameraComponent.nearClipPlane = n;
      }
      l = Vector3.Dot(vr, va) * n / d;
      r = Vector3.Dot(vr, vb) * n / d;
      b = Vector3.Dot(vu, va) * n / d;
      t = Vector3.Dot(vu, vc) * n / d;

      var p : Matrix4x4; // projection matrix
      p[0,0] = 2.0*n/(r-l);
      p[0,1] = 0.0;
      p[0,2] = (r+l)/(r-l);
      p[0,3] = 0.0;
      p[1,0] = 0.0;
      p[1,1] = 2.0*n/(t-b);
      p[1,2] = (t+b)/(t-b);
      p[1,3] = 0.0;
      p[2,0] = 0.0;
      p[2,1] = 0.0;
      p[2,2] = (f+n)/(n-f);
      p[2,3] = 2.0*f*n/(n-f);
      p[3,0] = 0.0;
      p[3,1] = 0.0;
      p[3,2] = -1.0;
      p[3,3] = 0.0;

      var rm : Matrix4x4; // rotation matrix
      rm[0,0] = vr.x;
      rm[0,1] = vr.y;
      rm[0,2] = vr.z;
      rm[0,3] = 0.0;
      rm[1,0] = vu.x;
      rm[1,1] = vu.y;
      rm[1,2] = vu.z;
      rm[1,3] = 0.0;
      rm[2,0] = vn.x;
      rm[2,1] = vn.y;
      rm[2,2] = vn.z;
      rm[2,3] = 0.0;
      rm[3,0] = 0.0;
      rm[3,1] = 0.0;
      rm[3,2] = 0.0;
      rm[3,3] = 1.0;

      var tm : Matrix4x4; // translation matrix
      tm[0,0] = 1.0;
      tm[0,1] = 0.0;
      tm[0,2] = 0.0;
      tm[0,3] = -pe.x;
      tm[1,0] = 0.0;
      tm[1,1] = 1.0;
      tm[1,2] = 0.0;
      tm[1,3] = -pe.y;
      tm[2,0] = 0.0;
      tm[2,1] = 0.0;
      tm[2,2] = 1.0;
      tm[2,3] = -pe.z;
      tm[3,0] = 0.0;
      tm[3,1] = 0.0;
      tm[3,2] = 0.0;
      tm[3,3] = 1.0;

      // set matrices
      cameraComponent.projectionMatrix = p;
      cameraComponent.worldToCameraMatrix = rm * tm;
      // The original paper puts everything into the projection
      // matrix (i.e. sets it to p * rm * tm and the other
      // matrix to the identity), but this doesn't appear to
      // work with Unity's shadow maps.

      if (estimateViewFrustum)
      {
         // rotate camera to screen for culling to work
         var q : Quaternion;
         q.SetLookRotation((0.5 * (pb + pc) - pe), vu);
            // look at center of screen
         cameraComponent.transform.rotation = q;

         // set fieldOfView to a conservative estimate
         // to make frustum tall enough
         if (cameraComponent.aspect >= 1.0)
         {
            cameraComponent.fieldOfView = Mathf.Rad2Deg *
               Mathf.Atan(((pb-pa).magnitude + (pc-pa).magnitude)
               / va.magnitude);
         }
         else
         {
            // take the camera aspect into account to
            // make the frustum wide enough
            cameraComponent.fieldOfView =
               Mathf.Rad2Deg / cameraComponent.aspect *
               Mathf.Atan(((pb-pa).magnitude + (pc-pa).magnitude)
               / va.magnitude);
         }
      }
   }
}
If the positions of the left and right cameras of a stereo display are known, the script can be applied to each camera separately. However, if a Unity Camera (let's call it mycam) is used for stereo rendering, the position in mycam.transform.position specifies the midpoint between the left and right camera. In this case, the position of the left camera (as a four-dimensional vector) can be obtained with mycam.GetStereoViewMatrix(Camera.StereoscopicEye.Left).inverse.GetColumn(3), and the position of the right camera can be obtained analogously with mycam.GetStereoViewMatrix(Camera.StereoscopicEye.Right).inverse.GetColumn(3); the inverse view matrix transforms from camera space to world space, and its fourth column holds the world-space camera position. These positions can then be used to set up two separate cameras for off-axis projections.
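As a sketch of how this could be used (the class and field names below are only illustrative, and it assumes that two additional cameras with the ObliqueProjectionToQuad script attached are available for the two eyes):

using UnityEngine;

// Hypothetical example: copy the tracked stereo eye positions of "mycam"
// to two separate cameras that use the off-axis projection script.
public class StereoEyePositionExample : MonoBehaviour {
   public Camera mycam;       // the stereo camera tracked by the VR system
   public Camera leftCamera;  // off-axis camera for the left eye
   public Camera rightCamera; // off-axis camera for the right eye

   void LateUpdate () {
      // the inverse view matrix transforms from camera space to world space;
      // its fourth column is the world-space position of the eye
      Vector4 left = mycam.GetStereoViewMatrix (
         Camera.StereoscopicEye.Left).inverse.GetColumn (3);
      Vector4 right = mycam.GetStereoViewMatrix (
         Camera.StereoscopicEye.Right).inverse.GetColumn (3);
      leftCamera.transform.position = new Vector3 (left.x, left.y, left.z);
      rightCamera.transform.position = new Vector3 (right.x, right.y, right.z);
   }
}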
In some applications of off-axis projections (e.g. mirrors, portals, or magic lenses), the off-axis cameras might render into render textures, which are then used to texture surfaces. In the case of stereo rendering, there are usually two render textures (one for each eye). Texturing with the resulting render textures therefore usually has to use the correct render texture for each eye. To this end, Unity offers the built-in shader variable unity_StereoEyeIndex, which is 0 for the left eye and 1 for the right eye. For example, a shader could read a color leftColor from the render texture for the left eye and a color rightColor from the render texture for the right eye. The shader expression lerp(leftColor, rightColor, unity_StereoEyeIndex) then computes the correct color for stereo rendering with render textures. Section "Mirrors" includes complete shader code for this approach.
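On the C# side, a minimal sketch of such a setup could create one render texture per eye and assign both to the material of the textured surface; the class name and the property names _LeftEyeTex and _RightEyeTex are only illustrative and have to match whatever the shader actually expects.

using UnityEngine;

// Hypothetical example: one render texture per eye for a mirror-like surface.
public class StereoRenderTextureExample : MonoBehaviour {
   public Camera leftEyeCamera;     // off-axis camera rendering the left-eye view
   public Camera rightEyeCamera;    // off-axis camera rendering the right-eye view
   public Renderer texturedSurface; // surface whose shader uses unity_StereoEyeIndex

   void Start () {
      RenderTexture leftTexture = new RenderTexture (1024, 1024, 24);
      RenderTexture rightTexture = new RenderTexture (1024, 1024, 24);
      leftEyeCamera.targetTexture = leftTexture;
      rightEyeCamera.targetTexture = rightTexture;
      // the shader is expected to sample _LeftEyeTex or _RightEyeTex
      // depending on unity_StereoEyeIndex (property names are illustrative)
      texturedSurface.material.SetTexture ("_LeftEyeTex", leftTexture);
      texturedSurface.material.SetTexture ("_RightEyeTex", rightTexture);
   }
}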
In this tutorial, we have looked at:
- what off-axis perspective projection is used for and how it differs from on-axis perspective projection,
- how to compute the view matrix and the projection matrix for an off-axis perspective projection,
- how this computation is implemented in Unity and which limitations the implementation has.
If you want to know more