Code analysis of the principles behind a 3D cinema built with three.js

This article walks through the visual principles behind a 3D cinema and how three.js handles events, covering the basics needed to build one.

1. Creating a 3D space

Imagine standing inside a room. The room is a cube, and if you have a taste for decorating you might paper its walls. three.js makes it easy to create a cube and apply textures to its faces; put the camera inside the cube and let it rotate a full 360 degrees, and you have a convincing simulation of a real scene.

Translated into code:

import { TextureLoader, MeshBasicMaterial, BackSide, Mesh, CubeGeometry, MeshFaceMaterial } from 'three'

const path = 'assets/image/'
const format = '.jpg'
// the six faces of the cube: positive/negative x, y and z
const urls = [
  `${path}px${format}`, `${path}nx${format}`,
  `${path}py${format}`, `${path}ny${format}`,
  `${path}pz${format}`, `${path}nz${format}`
]
const materials = []
urls.forEach(url => {
  const textureLoader = new TextureLoader()
  textureLoader.setCrossOrigin(this.crossOrigin)
  const texture = textureLoader.load(url)
  materials.push(new MeshBasicMaterial({
    map: texture,
    overdraw: true,
    side: BackSide // render the inner faces, since the camera sits inside the cube
  }))
})
// build a huge cube and give each face its own material
const cube = new Mesh(new CubeGeometry(9000, 9000, 9000), new MeshFaceMaterial(materials))
this.scene.add(cube)

CubeGeometry creates an oversized cube, and MeshFaceMaterial applies a separate texture to each of its faces. Because the camera sits inside the cube, the materials use side: BackSide.
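The snippets in this article assume that this.scene, this.camera and this.renderer already exist on the viewer instance. A minimal setup sketch, where the field of view, far plane and render loop are illustrative assumptions rather than part of the original code:

import { Scene, PerspectiveCamera, WebGLRenderer } from 'three'

// assumed scaffolding, e.g. inside the viewer class constructor
this.scene = new Scene()
// far plane large enough to contain the 9000-unit cube
this.camera = new PerspectiveCamera(60, window.innerWidth / window.innerHeight, 1, 20000)
this.camera.position.set(0, 0, 0.1)
this.renderer = new WebGLRenderer({ alpha: true, antialias: true })
this.renderer.setSize(window.innerWidth, window.innerHeight)
document.body.appendChild(this.renderer.domElement)

const animate = () => {
  requestAnimationFrame(animate)
  this.renderer.render(this.scene, this.camera)
}
animate()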

2. Particle effects

A 3D model is made of points, lines and faces. We can walk over every vertex of the model, turn each one into a small piece of geometry with a texture applied, copy that vertex's position, and rebuild the model out of these pieces so that only points remain. That is the basic principle behind the particle effect.

import { Group, TextureLoader, Geometry, Vector3, Color, PointsMaterial, Points, AdditiveBlending } from 'three'

// `geometry` is the source model's geometry (legacy Geometry with a .vertices array)
this.points = new Group()
const vertices = []
let point
const texture = new TextureLoader().load('assets/image/dot.png')
geometry.vertices.forEach((o, i) => {
  // record the position of every vertex
  vertices.push(o.clone())
  const _geometry = new Geometry()
  // position of the current vertex
  const pos = vertices[i]
  // a single vertex at the origin; the Points object itself is moved to `pos`
  _geometry.vertices.push(new Vector3())
  // random, deliberately over-bright colour for each particle
  const color = new Color()
  color.r = Math.abs(Math.random() * 10)
  color.g = Math.abs(Math.random() * 10)
  color.b = Math.abs(Math.random() * 10)
  const material = new PointsMaterial({
    color,
    size: Math.random() * 4 + 2,
    map: texture,
    blending: AdditiveBlending, // the original AddEquation is a blend equation, not a blend mode
    depthTest: false,
    transparent: true
  })
  point = new Points(_geometry, material)
  point.position.copy(pos)
  this.points.add(point)
})
return this.points

new Group creates a group, which here is effectively the collection of particles. point.position.copy(pos) places each particle at the same coordinates as the corresponding vertex of the model.
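A short usage sketch; createPoints is a hypothetical name for the function above, and the slow rotation in the render loop is my own addition for illustration:

// turn the loaded model's geometry into particles and show them
const points = this.createPoints(geometry) // createPoints: hypothetical wrapper for the code above
this.scene.add(points)

// spin the particle group slowly in the render loop
const animate = () => {
  requestAnimationFrame(animate)
  points.rotation.y += 0.002
  this.renderer.render(this.scene, this.camera)
}
animate()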

3. Handling click events

Click handling in three.js relies on a raycaster (Raycaster): the Raycaster casts a ray from the camera through the clicked point, and intersectObjects reports which objects the ray hits.

import { Raycaster } from 'three'

this.raycaster = new Raycaster()
// store every object that should respond to clicks in an array
this.seats.push(seat)

// Vector2 and MeshLambertMaterial come from 'three'
onTouchStart(event) {
  event.preventDefault()
  // treat the first touch point like a mouse click
  event.clientX = event.touches[0].clientX
  event.clientY = event.touches[0].clientY
  this.onClick(event)
}

onClick(event) {
  // convert the click position to normalised device coordinates (-1 to +1)
  const mouse = new Vector2()
  mouse.x = (event.clientX / this.renderer.domElement.clientWidth) * 2 - 1
  mouse.y = -(event.clientY / this.renderer.domElement.clientHeight) * 2 + 1
  this.raycaster.setFromCamera(mouse, this.camera)
  // find the seats hit by the ray
  const intersects = this.raycaster.intersectObjects(this.seats)
  if (intersects.length > 0) {
    // highlight the closest seat that was hit
    intersects[0].object.material = new MeshLambertMaterial({
      color: 0xff0000
    })
  }
}

intersects.length > 0 means the ray hit at least one object. To keep things simple, only the mobile (touch) interaction is implemented here; for the desktop version, see the official three.js website.
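For reference, a sketch of registering the listeners, including a desktop mousedown handler; the registration itself is not shown in the original code, so treat it as an assumption:

// register listeners on the canvas (assumed setup)
this.renderer.domElement.addEventListener('touchstart', e => this.onTouchStart(e), false)
// on desktop, mouse events already carry clientX/clientY, so onClick can be used directly
this.renderer.domElement.addEventListener('mousedown', e => this.onClick(e), false)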

4. A first look at shaders

Shaders come in two kinds, vertex shaders and fragment shaders. They are written in GLSL, a language for talking to the GPU. Here we only cover how to use them.

const vertext = `
  void main()
  {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`

const fragment = `
  uniform vec2 resolution;
  uniform float time;

  // cheap pseudo-random generator based on the fragment position
  vec2 rand(vec2 pos)
  {
    return fract(0.00005 * (pow(pos + 2.0, pos.yx + 1.0) * 22222.0));
  }
  vec2 rand2(vec2 pos)
  {
    return rand(rand(pos));
  }

  // value noise: interpolate random values at the four surrounding grid points
  float softnoise(vec2 pos, float scale)
  {
    vec2 smplpos = pos * scale;
    float c0 = rand2((floor(smplpos) + vec2(0.0, 0.0)) / scale).x;
    float c1 = rand2((floor(smplpos) + vec2(1.0, 0.0)) / scale).x;
    float c2 = rand2((floor(smplpos) + vec2(0.0, 1.0)) / scale).x;
    float c3 = rand2((floor(smplpos) + vec2(1.0, 1.0)) / scale).x;

    vec2 a = fract(smplpos);
    return mix(
      mix(c0, c1, smoothstep(0.0, 1.0, a.x)),
      mix(c2, c3, smoothstep(0.0, 1.0, a.x)),
      smoothstep(0.0, 1.0, a.y));
  }

  void main(void)
  {
    vec2 pos = gl_FragCoord.xy / resolution.y;
    // scroll the noise horizontally over time
    pos.x += time * 0.1;
    float color = 0.0;
    float s = 1.0;
    // accumulate several octaves of noise at increasing frequency
    for(int i = 0; i < 8; i++)
    {
      color += softnoise(pos + vec2(i) * 0.02, s * 4.0) / s / 2.0;
      s *= 2.0;
    }
    gl_FragColor = vec4(color);
  }
`
import { ShaderMaterial } from 'three'

// use a shader material on the object
let material = new ShaderMaterial({
  uniforms: uniforms,
  vertexShader: vertext,
  fragmentShader: fragment,
  transparent: true
})
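The material expects a uniforms object matching the resolution and time uniforms declared in the fragment shader, defined before the material is created. A minimal sketch, where the Clock-driven update is an assumption rather than part of the original code:

import { Vector2, Clock } from 'three'

// values for the uniforms declared in the fragment shader
const uniforms = {
  time: { value: 0.0 },
  resolution: { value: new Vector2(window.innerWidth, window.innerHeight) }
}

// advance the shader's time each frame so the noise pattern scrolls
const clock = new Clock()
const animate = () => {
  requestAnimationFrame(animate)
  uniforms.time.value += clock.getDelta()
  this.renderer.render(this.scene, this.camera)
}
animate()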

5. Lens flare effect

Since we are simulating a cinema, I wanted to add a projector and mimic the beam of light it casts.

import { WebGLRenderer, TextureLoader, Color, AdditiveBlending, LensFlare } from 'three'

// the lens flare effect requires a renderer created with alpha: true
const renderer = this.renderer = new WebGLRenderer({ alpha: true, antialias: true })

// LensFlare was part of the three.js core at the time of writing; recent versions
// ship it as Lensflare in examples/jsm/objects/Lensflare.js
let textureFlare = new TextureLoader().load('assets/image/lensflare0.png')
let textureFlare3 = new TextureLoader().load('assets/image/lensflare3.png')
let flareColor = new Color(0xffffff)
let lensFlare = new LensFlare(textureFlare, 150, 0.0, AdditiveBlending, flareColor)
lensFlare.add(textureFlare3, 60, 0.6, AdditiveBlending)
lensFlare.add(textureFlare3, 70, 0.7, AdditiveBlending)
lensFlare.add(textureFlare3, 120, 0.9, AdditiveBlending)
lensFlare.add(textureFlare3, 70, 1.0, AdditiveBlending)
lensFlare.position.set(0, 150, -85)

The main beam is simulated with lensflare0.png, while the textureFlare3 elements control the spread of the halo.
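To make the flare visible it still has to be added to the scene, which the snippet above leaves out. A hedged completion; the PointLight that lights up nearby geometry is my own illustrative addition:

import { PointLight } from 'three'

// a light at the projector position so nearby geometry is lit (illustrative assumption)
const projectorLight = new PointLight(0xffffff, 1.5, 2000)
projectorLight.position.copy(lensFlare.position)
this.scene.add(projectorLight)
// add the flare object itself so it is rendered
this.scene.add(lensFlare)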