How we brought our product mascot to life with AR.js

by Mateusz Tarnaski

祁兴运
2023-12-01

Short answer: using a browser-based Augmented Reality (AR) application. For the long answer, read below.

The idea of playing with AR started as a random, interesting experiment. In our company, we strive to stay ahead of the curve, and we share technical novelties and new technologies with each other on a semi-regular basis. Since we mostly deal with web technologies, the concept of AR in the browser really took off.

Since AR is mostly an entertainment technology, a practical application was not obvious to us from the start. Luckily, two unrelated things happened at the same time:

We decided to bring Hubert to life during the event, in the form of an AR app for people to play with. In our heads, users should be able to:

  • render Hubert on a wall background on their phones
  • take a picture of the rendered model
  • tweet the photo (not the subject of this article)

The end result is available on glitch.com, scaled down and rotated to be suitable for a desktop experience (you can also take a quick look into the source code).

Rendering Hubert in real time

We used AR.js (version from this commit) as the main building block of our AR app: it packages WebRTC camera access, marker recognition, and 3D scene rendering. We liked it mostly because you can have a basic demo running in around 20 lines of code.

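That basic demo is not reproduced in this copy of the article, but a minimal AR.js + A-frame page looks roughly like this (the script URLs and versions are our assumption for the AR.js of that era, not the exact commit we used):

```html
<!DOCTYPE html>
<html>
  <body style="margin: 0; overflow: hidden;">
    <!-- A-frame first, then the AR.js build that extends it -->
    <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
    <script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>

    <!-- embedded: render inside the page; arjs: enable webcam-based AR -->
    <a-scene embedded arjs>
      <!-- whatever sits inside a-marker is drawn on top of the marker -->
      <a-marker preset="hiro">
        <a-box position="0 0.5 0" material="color: tomato;"></a-box>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
```

Point a webcam at a printed "hiro" marker and the box appears on top of it.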

Under the hood, AR.js can use either three.js or A-frame implementations to render your 3D scenes.

  • three.js offers fine-grained control of 3D rendering, and is JavaScript-based. You have probably heard about it in the context of rendering 2D and 3D scenes in the browser.
  • A-frame is a web framework designed specifically for building VR and AR experiences. It has an HTML-like markup that is more declarative than three.js, but sacrifices some of the control in favor of ease of use.

We didn’t have a VR or 3D expert (except Mrówa, who prepared the 3D model). As A-frame’s HTML-like declarative syntax looked more familiar to us, we opted for A-frame to do the rendering.

Here you can see the code for rendering Hubert, 30 lines on the dot. We omitted some options and A-frame tweaking for the sake of simplicity. You can refer to the repo to see it all.

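The Hubert snippet itself lives in the repo; purely as an illustration, an A-frame marker scene loading a glTF model could look like this (the asset path, marker preset, and transform values here are hypothetical, not the repo's actual ones):

```html
<a-scene embedded arjs>
  <a-assets>
    <!-- hypothetical path; the real model lives in the project repo -->
    <a-asset-item id="hubert" src="models/hubert.gltf"></a-asset-item>
  </a-assets>
  <a-marker preset="hiro">
    <!-- scale and rotation tuned by eye, like the "tweaking" omitted above -->
    <a-entity gltf-model="#hubert" scale="0.5 0.5 0.5" rotation="-90 0 0"></a-entity>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>
```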

This gives us Hubert nicely rendered in the web browser in real time.

Capturing a photo to tweet

Unfortunately, we don’t have a single video feed rendering the whole scene. There is the video from your camera and a rendered 3D scene. We quickly figured out that we would have to capture a frame from both sources and put them together for a nice photo of Hubert.

Taking frames out of a WebRTC video stream is pretty straightforward. The best material on the subject can be found here. If your browser has the appropriate API, you need two elements:

  • a reference to your source <video/> tag
  • a destination <canvas/> element in which to put your frame

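Given those two elements, the copy itself is a single canvas call. A minimal sketch (the function name and the duck-typed parameters are ours):

```javascript
// Copy the current frame of a <video> element into a <canvas>.
// Works on any pair of objects exposing these properties, which also
// makes the function easy to exercise outside a browser.
function captureFrame(video, canvas) {
  // match the canvas to the raw video resolution
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d');
  // drawImage accepts a <video> element as its image source
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  return canvas;
}
```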

Then it’s just a simple matter of drawing a 2D image from video to canvas. In our case, both of these are a bit tricky.

The video element we are using is generated and embedded by AR.js. We had no idea how to get hold of it gracefully, so we hacked our way around it with a loop and a DOM selector:

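Our hack boiled down to polling the DOM until AR.js had injected the element. A sketch of that loop (the selector and interval are assumptions; the `doc` parameter is a testing convenience, not part of the original):

```javascript
// AR.js injects its <video> element at some point after startup, so we
// poll the DOM until the element exists and has real dimensions.
function waitForVideo(selector, intervalMs, doc) {
  const d = doc || document; // injectable so it can run outside a browser
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      const video = d.querySelector(selector);
      if (video && video.videoWidth > 0) {
        clearInterval(timer);
        resolve(video);
      }
    }, intervalMs);
  });
}

// usage: const video = await waitForVideo('video', 500);
```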

We also needed to hack some scaling. AR.js doesn't present the raw video feed to the user; it scales the feed to fill the screen without losing aspect ratio. That means we need to apply the same scaling to our frame, otherwise our screenshot will contain "more" of the video feed than is shown on the screen. We don't want to confuse the users here.

What the user sees:

If we take a frame without scaling and just try to copy from point (0,0) we lose margins imposed by AR.js. This is a totally different picture from what is presented to the user:

Suffice it to say we just reverse-engineered the scaling and figured out the bounding box of what the user sees:

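The reverse-engineered math amounts to "cover" fitting: scale the video up until it fills the viewport, then crop the centered overflow. A sketch of the bounding-box computation (function and variable names are ours, not the repo's):

```javascript
// Given the raw video size and the viewport AR.js scales it into,
// return the source rectangle (in video pixels) that is actually
// visible on screen after "cover" scaling.
function visibleSourceRect(videoW, videoH, viewW, viewH) {
  // AR.js picks the larger scale factor so the video fills the screen
  const scale = Math.max(viewW / videoW, viewH / videoH);
  // how much of the raw video fits into the viewport at that scale
  const srcW = viewW / scale;
  const srcH = viewH / scale;
  // the crop is centered, so the hidden margins split evenly
  return { x: (videoW - srcW) / 2, y: (videoH - srcH) / 2, w: srcW, h: srcH };
}

// usage in the browser, copying only what the user actually sees:
//   const r = visibleSourceRect(video.videoWidth, video.videoHeight,
//                               window.innerWidth, window.innerHeight);
//   ctx.drawImage(video, r.x, r.y, r.w, r.h, 0, 0, canvas.width, canvas.height);
```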

To achieve this final result (the same as what is presented live to the user, give or take some camera shake):

Now we just need to get Hubert in the picture. Again, the API for that is very straightforward. To capture a screenshot of a rendered A-frame scene, we need to get the scene’s canvas. The relevant part is copied to our destination canvas, on top of the previously taken video frame.

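A sketch of that compositing step (the `.canvas` property on the `<a-scene>` element is how A-frame exposes its rendering surface; the injectable `doc` parameter is our testing convenience):

```javascript
// Draw the rendered A-frame scene on top of whatever is already in `ctx`
// (here: the previously captured video frame).
function overlayScene(ctx, outW, outH, doc) {
  const d = doc || document;
  // A-frame exposes its WebGL canvas on the <a-scene> element
  const sceneCanvas = d.querySelector('a-scene').canvas;
  ctx.drawImage(sceneCanvas, 0, 0, outW, outH);
}
```

Note that reading pixels back from a WebGL canvas generally requires the renderer to keep its drawing buffer around (a renderer setting in A-frame); otherwise the copy may come out black.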

Getting the relevant part is the tricky bit in our case. Thanks to the AR.js scaling, we cannot simply get the “perspective” shot of the scene and use that. It will look too wide or too short, depending on orientation.

For landscape mode (width > height), the scaling method we used for video works perfectly well.

For portrait mode, it works great on a PC… However, once you enter the realm of mobile devices, the scaling breaks and the screenshot doesn’t look nice. You get this skinny Hubert…

…instead of our lovely, bubbly mascot in all his glory:

We are still not sure why that is the case. We made the mistake of not testing it out thoroughly on actual mobile devices, thinking it would work the same as it does on the development machine. (Yes, we know how bad that sounds, but that’s the reality of it.) During the conference, we managed to figure out the formula for portrait scaling and introduced a fix:

It’s not pretty. It’s one of those “it’s late, it works, just leave it” fixes. The values presented above produced a satisfactory result and we left it at that.

With that, we have a picture of Hubert in the real world! It can be retrieved from the destination canvas element and displayed on the page or sent to the server to tweet out.

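Retrieving it is standard canvas API, for example:

```javascript
// Serialize the snapshot canvas to a PNG data: URL, which can be set as
// an <img> src for preview or POSTed to a server for tweeting.
function exportSnapshot(canvas) {
  return canvas.toDataURL('image/png');
}
```

For larger images, `canvas.toBlob` avoids the base64 overhead of a data: URL.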

Summary

AR in the browser is possible. Even more, it is possible on mid-grade mobile hardware (as of June 2018). Getting it to work on every phone and browser is still a long shot, so don’t count on it for wide, diverse userbases.

However, if you have a somewhat controlled environment, augmented reality on a phone can be used to create unique experiences. These don’t require special hardware or workstations and that is a big, big plus. Just make sure to test it on actual devices ahead of time.

Translated from: https://www.freecodecamp.org/news/how-we-brought-our-product-mascot-to-life-87830db12ff4/
