Question:

WebGL2 — how to store and retrieve the 3D texture data a 3D grid of vertices needs to compute new vertex positions

卫甫
2023-03-14

A 3D physics simulation needs access to the positions and attributes of neighboring vertices inside a shader in order to compute a vertex's new position. A 2D version works, but I can't port the solution to 3D. Flip-flopping two 3D textures seems right: feed a set of x, y, and z coordinates into one texture and get back vec4s containing the position/velocity/acceleration data of the neighboring points, to be used in computing the new position and velocity of each vertex.

The 2D version uses a single draw call with a framebuffer to save all the generated gl_FragColors into a sampler2D. I want to do the same thing with a sampler3D and a framebuffer. But it looks like, to use a framebuffer in 3D, I need to write one layer of the second 3D texture at a time until all the layers are saved. I'm confused both about how the grid of vertices maps to the texture's relative x, y, z coordinates and about how to save the result layer by layer. In the 2D version, the gl_FragColor written to the framebuffer maps directly onto the canvas's 2D x-y coordinate system, each pixel being one vertex.

This works for the 2D case in the fragment shader:

// u_image packs the state of each vertex: r = position, g = velocity, b = acceleration
vec2 onePixel = vec2(1.0, 1.0)/u_textureSize;
vec4 currentState = texture2D(u_image, v_texCoord);
float fTotal = 0.0;
for (int i=-1;i<=1;i+=2){
    for (int j=-1;j<=1;j+=2){
        if (i == 0 && j == 0) continue;
        vec2 neighborCoord = v_texCoord + vec2(onePixel.x*float(i), onePixel.y*float(j));

        vec4 neighborState;
        if (neighborCoord.x < 0.0 || neighborCoord.y < 0.0 || neighborCoord.x >= 1.0 || neighborCoord.y >= 1.0){
            neighborState = vec4(0.0,0.0,0.0,1.0);
        } else {
            neighborState = texture2D(u_image, neighborCoord);
        }

        float deltaP =  neighborState.r - currentState.r;
        float deltaV = neighborState.g - currentState.g;

        fTotal += u_kSpring*deltaP + u_dSpring*deltaV;
    }
}

float acceleration = fTotal/u_mass;
float velocity = acceleration*u_dt + currentState.g;
float position = velocity*u_dt + currentState.r;
gl_FragColor = vec4(position,velocity,acceleration,1);

Here's what I tried for 3D in the fragment shader:

#version 300 es

vec3 onePixel = vec3(1.0, 1.0, 1.0)/u_textureSize;
vec4 currentState = texture(u_image, v_texCoord);
float fTotal = 0.0;
int counter = 0; // index into the per-spring uniform arrays; one entry per neighbor spring
for (int i=-1; i<=1; i++){
    for (int j=-1; j<=1; j++){
        for (int k=-1; k<=1; k++){
           if (i == 0 && j == 0 && k == 0) continue;
           vec3 neighborCoord = v_texCoord + vec3(onePixel.x*float(i), onePixel.y*float(j), onePixel.z*float(k));
           vec4 neighborState;

           if (neighborCoord.x < 0.0 || neighborCoord.y < 0.0 || neighborCoord.z < 0.0 || neighborCoord.x >= 1.0 || neighborCoord.y >= 1.0 || neighborCoord.z >= 1.0){
               neighborState = vec4(0.0,0.0,0.0,1.0);
           } else {
               neighborState = texture(u_image, neighborCoord);
           }
           float deltaP =  neighborState.r - currentState.r;  //Distance from neighbor
           float springDeltaLength =  (deltaP - u_springOrigLength[counter]);

           //Add the force on our point of interest from the current neighbor point.  We'll be adding up to 26 of these together.
           fTotal += u_kSpring[counter]*springDeltaLength;
           counter++;
        }
    }
}

float acceleration = fTotal/u_mass;
float velocity = acceleration*u_dt + currentState.g;
float position = velocity*u_dt + currentState.r;
gl_FragColor = vec4(position,velocity,acceleration,1); // note: GLSL ES 3.00 has no gl_FragColor; a user-defined output is needed (see the answer)

After writing this I read further and found that a framebuffer cannot write to all the layers of a sampler3D at once; I need to process 1-4 layers at a time. I'm not sure how to do that, nor whether each gl_FragColor would land on the right pixel of the right layer.

I found this answer: Render to 3D texture webgl2. It demonstrates writing to several layers at once from a framebuffer, but I can't see how to connect that to a fragment shader that, from one draw call, runs automatically 1,000,000 times (100 x 100 x 100 ... length x width x height), each time filling the correct pixel of the sampler3D with position/velocity/acceleration data that I can then use in the next iteration.

I have no results yet. The plan is to build the first sampler3D programmatically, use it to generate new vertex data saved into a second sampler3D, then swap the textures and repeat.

1 Answer

白祺然
2023-03-14

WebGL is destination based. That means it performs one operation for each result it wants to write to the destination. The only kinds of destinations you can set are points (squares of pixels), lines, and triangles in a 2D plane, which means writing to a 3D texture requires handling each plane separately. At best you can handle N planes at a time, where N is 4 to 8, by setting multiple attachments on a framebuffer, up to the maximum allowed number of attachments.
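As an aside (not part of the original answer), the actual limits can be queried at runtime from the WebGL2 context; a minimal sketch, assuming a context named gl:

// Both values are guaranteed to be at least 4 in WebGL2.
const maxAttachments = gl.getParameter(gl.MAX_COLOR_ATTACHMENTS);
const maxDrawBuffers = gl.getParameter(gl.MAX_DRAW_BUFFERS);
// A single draw call can write at most this many layers.
const layersPerDraw = Math.min(maxAttachments, maxDrawBuffers);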

So first, let's assume you know how to render to 100 layers one at a time. At init time, either create 100 framebuffers and attach a different layer to each one, or at render time update a single framebuffer with a different attachment. Knowing how much validation happens per framebuffer change, I'd choose making 100 framebuffers.

So:

const framebuffers = [];
for (let layer = 0; layer < numLayers; ++layer) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, texture, 
    0, layer);
  framebuffers.push(fb);
}
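(For comparison, the re-attach-at-render-time variant would look something like the sketch below; this is an illustrative sketch rather than part of the original answer. It avoids creating 100 framebuffer objects but forces the framebuffer to be revalidated on every attach.)

// Alternative: one framebuffer, re-attached to a different layer per draw.
const fb = gl.createFramebuffer();
for (let layer = 0; layer < numLayers; ++layer) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, texture, 0, layer);
  // ... set uniforms and issue the draw call for this layer ...
}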

Then at render time, render to each layer:

framebuffers.forEach((fb, layer) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  // pass the layer number to the shader so it can use it in its calculations
  gl.uniform1f(layerLocation, layer);
  ....
  gl.drawXXX(...);
});
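Inside the shader, that layer number maps to the third texture coordinate by sampling at the layer's texel center, mirroring the 2D pixel-center convention. A small sketch (u_layer and u_numLayers are assumed names, not from the original):

uniform float u_layer;      // 0.0, 1.0, 2.0, ... passed per draw call
uniform float u_numLayers;  // e.g. 100.0
// sample at the center of this layer's texel
float zTexCoord = (u_layer + 0.5) / u_numLayers;
vec3 texCoord = vec3(v_texCoord, zTexCoord);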

WebGL1 doesn't support 3D textures, so we know you're using WebGL2, since you mentioned using sampler3D.

In WebGL2 you generally put #version 300 es at the top of your shaders to signify you want to use the more modern GLSL ES 3.00.

Drawing to multiple layers requires first working out how many layers you can render at once. WebGL2 supports a minimum of 4, so we can just assume 4 layers. To do that, attach 4 layers to each framebuffer:

const layersPerFramebuffer = 4;
const framebuffers = [];
for (let baseLayer = 0; baseLayer < numLayers; baseLayer += layersPerFramebuffer) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  for (let layer = 0; layer < layersPerFramebuffer; ++layer) {
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + layer, texture, 0, baseLayer + layer);
  }
  framebuffers.push(fb);
}
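It can be worth verifying each framebuffer once its attachments are set; a small addition, not in the original answer:

// Inside the loop above, after attaching the 4 layers:
const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status !== gl.FRAMEBUFFER_COMPLETE) {
  console.error('framebuffer incomplete: 0x' + status.toString(16));
}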

GLSL ES 3.00 shaders don't use gl_FragColor; they use user-defined outputs, so we declare an array output:

out vec4 ourOutput[4];

and use it just like gl_FragColor was used before, except with an index. Below we process 4 layers. We pass in only a vec2 for v_texCoord and compute the third coordinate from baseLayerTexCoord, which we pass in with each draw call.

in vec2 v_texCoord;  // GLSL ES 3.00 uses `in`, not `varying`
uniform float baseLayerTexCoord;

vec4 results[4];
vec3 onePixel = vec3(1.0, 1.0, 1.0)/u_textureSize;
const int numLayers = 4;
for (int layer = 0; layer < numLayers; ++layer) {
    vec3 baseTexCoord = vec3(v_texCoord, baseLayerTexCoord + onePixel.z * float(layer));
    vec4 currentState = texture(u_image, baseTexCoord);
    float fTotal = 0.0;
    int counter = 0; // index into the per-spring uniform arrays
    for (int i=-1; i<=1; i++){
        for (int j=-1; j<=1; j++){
            for (int k=-1; k<=1; k++){
               if (i == 0 && j == 0 && k == 0) continue;
               vec3 neighborCoord = baseTexCoord + vec3(onePixel.x*float(i), onePixel.y*float(j), onePixel.z*float(k));
               vec4 neighborState;

               if (neighborCoord.x < 0.0 || neighborCoord.y < 0.0 || neighborCoord.z < 0.0 || neighborCoord.x >= 1.0 || neighborCoord.y >= 1.0 || neighborCoord.z >= 1.0){
                   neighborState = vec4(0.0,0.0,0.0,1.0);
               } else {
                   neighborState = texture(u_image, neighborCoord);
               }
               float deltaP =  neighborState.r - currentState.r;  //Distance from neighbor
               float springDeltaLength =  (deltaP - u_springOrigLength[counter]);

               //Add the force on our point of interest from the current neighbor point.  We'll be adding up to 26 of these together.
               fTotal += u_kSpring[counter]*springDeltaLength;
               counter++;
            }
        }
    }

    float acceleration = fTotal/u_mass;
    float velocity = acceleration*u_dt + currentState.g;
    float position = velocity*u_dt + currentState.r;
    results[layer] = vec4(position,velocity,acceleration,1);
}
ourOutput[0] = results[0];
ourOutput[1] = results[1];
ourOutput[2] = results[2];
ourOutput[3] = results[3];

The last thing needed is to call gl.drawBuffers to tell WebGL2 where to store the outputs. Note that the draw-buffers setting is per-framebuffer state, so it has to be set while the corresponding framebuffer is bound. Since we're doing 4 layers at a time:

framebuffers.forEach((fb, ndx) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.drawBuffers([
    gl.COLOR_ATTACHMENT0,
    gl.COLOR_ATTACHMENT1,
    gl.COLOR_ATTACHMENT2,
    gl.COLOR_ATTACHMENT3,
  ]);
  gl.uniform1f(baseLayerTexCoordLocation, (ndx * layersPerFramebuffer + 0.5) / numLayers);
  ....
  gl.drawXXX(...);
});

Example:

function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');
  if (!gl) {
    return alert('need webgl2');
  }
  const ext = gl.getExtension('EXT_color_buffer_float');
  if (!ext) {
    return alert('need EXT_color_buffer_float');
  }
  
  const vs = `#version 300 es
  in vec4 position;
  out vec2 v_texCoord;
  void main() {
    gl_Position = position;
    // position will be a quad -1 to +1 so we
    // can use that for our texcoords
    v_texCoord = position.xy * 0.5 + 0.5;
  }
  `;
  
  const fs = `#version 300 es
precision highp float;
in vec2 v_texCoord;
uniform float baseLayerTexCoord;
uniform highp sampler3D u_image;
uniform mat3 u_kernel[3];

out vec4 ourOutput[4];

void main() {
  vec3 textureSize = vec3(textureSize(u_image, 0));
  vec3 onePixel = vec3(1.0, 1.0, 1.0)/textureSize;
  const int numLayers = 4;
  vec4 results[4];
  for (int layer = 0; layer < numLayers; ++layer) {
      vec3 baseTexCoord = vec3(v_texCoord, baseLayerTexCoord + onePixel.z * float(layer));
      float fTotal = 0.0;
      vec4 color = vec4(0);  // must be initialized before accumulating
      for (int i=-1; i<=1; i++){
          for (int j=-1; j<=1; j++){
              for (int k=-1; k<=1; k++){
                 vec3 neighborCoord = baseTexCoord + vec3(onePixel.x*float(i), onePixel.y*float(j), onePixel.z*float(k));
                 color += u_kernel[k + 1][j + 1][i + 1] * texture(u_image, neighborCoord);
              }
          }
      }

      results[layer] = color;
  }
  ourOutput[0] = results[0];
  ourOutput[1] = results[1];
  ourOutput[2] = results[2];
  ourOutput[3] = results[3];
}
  `;
  const vs2 = `#version 300 es
  uniform vec4 position;
  uniform float size;
  void main() {
    gl_Position = position;
    gl_PointSize = size;
  }
  `;
  const fs2 = `#version 300 es
  precision highp float;
  uniform highp sampler3D u_image;
  uniform float slice;
  out vec4 outColor;
  void main() {
    outColor = texture(u_image, vec3(gl_PointCoord.xy, slice));
  }
  `;
  
  const computeProgramInfo = twgl.createProgramInfo(gl, [vs, fs]);
  const drawProgramInfo = twgl.createProgramInfo(gl, [vs2, fs2]);
  
  const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
    position: {
      numComponents: 2,
      data: [
        -1, -1,
         1, -1,
        -1,  1,
        -1,  1,
         1, -1,
         1,  1,
      ],
    },
  });

  function create3DTexture(gl, size) {
    const tex = gl.createTexture();
    const data = new Float32Array(size * size * size * 4);
    for (let i = 0; i < data.length; i += 4) {
      data[i + 0] = i % 100 / 100;
      data[i + 1] = i % 10000 / 10000;
      data[i + 2] = i % 100000 / 100000;
      data[i + 3] = 1;
    }
    gl.bindTexture(gl.TEXTURE_3D, tex);
    gl.texImage3D(gl.TEXTURE_3D, 0, gl.RGBA32F, size, size, size, 0, gl.RGBA, gl.FLOAT, data);

    gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    return tex;
  }

  const size = 100;
  let inTex = create3DTexture(gl, size);
  let outTex = create3DTexture(gl, size);
  const numLayers = size;
  const layersPerFramebuffer = 4;
  
  function makeFramebufferSet(gl, tex) {
    const framebuffers = [];
    for (let baseLayer = 0; baseLayer < numLayers; baseLayer += layersPerFramebuffer) {
      const fb = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
      for (let layer = 0; layer < layersPerFramebuffer; ++layer) {
        gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + layer, tex, 0, baseLayer + layer);
      }
      framebuffers.push(fb);
    }
    return framebuffers;
  };
  
  let inFramebuffers = makeFramebufferSet(gl, inTex);
  let outFramebuffers = makeFramebufferSet(gl, outTex);

  function render() {
    gl.viewport(0, 0, size, size);
    gl.useProgram(computeProgramInfo.program);
    twgl.setBuffersAndAttributes(gl, computeProgramInfo, bufferInfo);

    outFramebuffers.forEach((fb, ndx) => {
      gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
      gl.drawBuffers([
        gl.COLOR_ATTACHMENT0,
        gl.COLOR_ATTACHMENT1,
        gl.COLOR_ATTACHMENT2,
        gl.COLOR_ATTACHMENT3,
      ]);

      const baseLayerTexCoord = (ndx * layersPerFramebuffer + 0.5) / numLayers;
      twgl.setUniforms(computeProgramInfo, {
        baseLayerTexCoord,
        u_kernel: [
          0, 0, 0,
          0, 0, 0,
          0, 0, 0,

          0, 0, 1,
          0, 0, 0,
          0, 0, 0,

          0, 0, 0,
          0, 0, 0,
          0, 0, 0,
        ],
        u_image: inTex,      
      });

      gl.drawArrays(gl.TRIANGLES, 0, 6);
    });

    {
      const t = inFramebuffers;
      inFramebuffers = outFramebuffers;
      outFramebuffers = t;
    }

    {
      const t = inTex;
      inTex = outTex;
      outTex = t;
    }

    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.drawBuffers([gl.BACK]);
    gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

    gl.useProgram(drawProgramInfo.program);

    const slices = 10.0;
    const sliceSize = 25.0;
    for (let slice = 0; slice < slices; ++slice) {
      const sliceZTexCoord = (slice / slices * size + 0.5) / size;
      twgl.setUniforms(drawProgramInfo, {
        position: [
          ((slice * (sliceSize + 1) + sliceSize * .5) / gl.canvas.width * 2) - 1,
          0,
          0,
          1,
        ],
        slice: sliceZTexCoord,
        size: sliceSize,
      });
      gl.drawArrays(gl.POINTS, 0, 1);
    }
    
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);
}

main();


function glEnumToString(gl, v) {
  const hits = [];
  for (const key in gl) {
    if (gl[key] === v) {
      hits.push(key);
    }
  }
  return hits.length ? hits.join(' | ') : `0x${v.toString(16)}`;
}
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
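One further note, not from the original answer: since the simulation wants exact texels (NEAREST filtering) rather than filtered samples, the neighbor reads can also be done with texelFetch, which takes integer texel coordinates and skips the half-texel offset arithmetic. A hedged GLSL sketch — layerIndex is an assumed integer layer index, and i, j, k are the neighbor offsets from the loops above:

ivec3 texSize  = textureSize(u_image, 0);
ivec3 texel    = ivec3(gl_FragCoord.xy, layerIndex);  // integer texel coordinates
ivec3 neighbor = texel + ivec3(i, j, k);
vec4 neighborState = vec4(0.0, 0.0, 0.0, 1.0);        // value used at the boundary
// texelFetch is undefined out of range, so bounds-check first
if (all(greaterThanEqual(neighbor, ivec3(0))) && all(lessThan(neighbor, texSize))) {
  neighborState = texelFetch(u_image, neighbor, 0);   // exact, unfiltered read
}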