This Docker image simplifies the process of creating a full Node.js environment for frontend development with multi-stage building.
It includes all the dependencies for Puppeteer, so you can just `npm install puppeteer` and it should work.
It also includes a default Nginx configuration for your frontend application, so in multi-stage Docker builds you can just copy it to an Nginx "stage" and always have a freshly compiled, production-ready frontend Docker image for deployment.
It is derived from this article I wrote:
Angular in Docker with Nginx, supporting configurations / environments, built with multi-stage Docker builds and testing with Chrome Headless
Create your frontend Node.js-based code (Angular, React, Vue.js).
Create a `.dockerignore` file (similar to `.gitignore`) and include in it:

```
node_modules
```

...to avoid copying your `node_modules` to Docker, making things unnecessarily slower.
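Depending on your project, you may want to keep other generated directories out of the Docker build context as well. A minimal sketch of such a `.dockerignore` (the `dist` and `.git` entries are just common, hypothetical additions; only `node_modules` comes from the step above):

```
node_modules
dist
.git
```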
If you will need Chrome Headless tests, install Puppeteer as a development dependency in your `package.json`:

```bash
npm install --save-dev puppeteer
```
Create a `Dockerfile` based on this image and name the stage `build-stage`, for building:

```Dockerfile
# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM tiangolo/node-frontend:10 as build-stage
...
```
Copy your `package.json` and possibly your `package-lock.json`:

```Dockerfile
...
WORKDIR /app
COPY package*.json /app/
...
```
Copying just the `package*.json` files first lets you install all the dependencies once and lets Docker use the cache for the next builds, instead of reinstalling everything after every change in your source code.
Install your `npm` packages inside your `Dockerfile`:

```Dockerfile
...
RUN npm install
...
```
Copy your source code; if it's Vue.js with `.vue` files or React with JSX, it will be compiled inside Docker:

```Dockerfile
...
COPY ./ /app/
...
```
If you have Chrome Headless tests (with `puppeteer` itself), you can just run them. E.g.:

```Dockerfile
...
RUN npm run test -- --browsers ChromeHeadlessNoSandbox --watch=false
...
```

...if your tests don't pass, they will throw an error and your build will stop. So, you will never ship a "broken" frontend Docker image to production.
If your app uses build-time configurations / environments (e.g. Angular `--configuration`s), create a default `ARG` to be used at build time:

```Dockerfile
...
ARG configuration=production
...
```
Build your app with `npm`:

```Dockerfile
...
RUN npm run build
...
```
...or pass the configuration `ARG`, e.g.:

```Dockerfile
...
RUN npm run build -- --output-path=./dist/out --configuration $configuration
...
```
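For an Angular project, if your `build` script maps to the Angular CLI (an assumption about your `package.json`; adjust for your own setup), the command above with the default `ARG` value would effectively expand to something like:

```bash
# Hypothetical expansion of "npm run build -- ..." for an Angular CLI project
ng build --output-path=./dist/out --configuration production
```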
...after that, you would have a fresh build of your frontend app code inside a Docker container. But if you are serving a frontend (static files), you could serve it with a high-performance server like Nginx and have a leaner Docker image without all the Node.js code. In the same `Dockerfile`, add the next stage, based on Nginx:

```Dockerfile
...
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.15
...
```
Using the `build-stage` name created above in the previous "stage", copy the files generated there to the directory that Nginx uses:

```Dockerfile
...
COPY --from=build-stage /app/dist/out/ /usr/share/nginx/html
...
```
...make sure you change `/app/dist/out/` to the directory inside `/app/` that contains your compiled frontend code.
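If you are not sure which directory your project's build produces, you can run the build locally and inspect the output first (the `dist` path here is only a common default, not something this image mandates):

```bash
# Build locally and see where the compiled files end up
npm run build
ls dist/
```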
This image includes a default Nginx configuration that routes everything to your frontend app (to your `index.html`), so that you can use "HTML5" full URLs and they will always work, even if your users type them directly in the browser. Make your Docker image copy that default configuration from the previous stage to Nginx's configurations directory:

```Dockerfile
...
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
...
```
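The exact contents of the bundled `/nginx.conf` may differ between image versions, but a configuration with the behavior described above typically looks roughly like this sketch (not the literal file shipped in the image):

```nginx
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html;
        # Serve index.html for any path that doesn't match a static file,
        # so "HTML5" full URLs keep working when typed directly in the browser.
        try_files $uri $uri/ /index.html;
    }

    # Extra snippets dropped into this directory are included inside the
    # server directive (see the /etc/nginx/extra-conf.d/ section below).
    include /etc/nginx/extra-conf.d/*.conf;
}
```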
Your complete `Dockerfile` could look like:

```Dockerfile
# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM tiangolo/node-frontend:10 as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY ./ /app/
RUN npm run test -- --browsers ChromeHeadlessNoSandbox --watch=false
ARG configuration=production
RUN npm run build -- --output-path=./dist/out --configuration $configuration

# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.15
COPY --from=build-stage /app/dist/out/ /usr/share/nginx/html
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
```
Build your image:

```bash
docker build -t my-frontend-project:prod .
```

...if you added tests above, they will be run during the build. Your app will be compiled and you will end up with a lean, high-performance Nginx server with your freshly compiled app. Ready for production.
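Because the final stage only contains Nginx and the compiled static files, the resulting image should be much smaller than the Node.js build stage. You can check its size with:

```bash
# List the images for this repository name (the tag used in the example above)
docker image ls my-frontend-project
```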
If you added build-time configurations / environments (Angular `--configuration`s), for example a "staging" environment, you can pass them like:

```bash
docker build -t my-frontend-project:stag --build-arg configuration="staging" .
```
Run a container from your image to test it:

```bash
docker run -p 80:80 my-frontend-project:prod
```

...if you are running Docker locally, you can now go to http://localhost in your browser and see your frontend.
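You can also do a quick sanity check from a terminal (assuming the port mapping from the command above):

```bash
# Expect an HTTP 200 response for your frontend's index.html
curl -I http://localhost
```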
If you have a local development server (e.g. `npm run start`), use it. It's faster and simpler to develop locally. But once you think you've got it right, build your Docker image and try it. You will see how it looks in the full production environment.
If you want to have Chrome Headless tests, run them locally first, as you normally would (Karma, Jasmine, Jest, etc.), using a live, normal browser. Make sure you have all the configurations right. Then install Puppeteer locally and make sure it runs locally (with local Headless Chrome). Once you know it is running locally, you can add that step to your `Dockerfile` and have "continuous integration" and "continuous building"... and, if you want, "continuous deployment". But first make it run locally; it's easier to debug one step at a time.
Have fun.
You can include more Nginx configurations by copying them to `/etc/nginx/conf.d/`, beside the included Nginx configuration.
By default, this Nginx configuration routes everything to your frontend app (to your `index.html`). But if you want some specific routes to instead return, for example, an HTTP 404 "Not Found" error, you can include more Nginx `.conf` files in the directory `/etc/nginx/extra-conf.d/`.
For example, if you want your final Nginx to return 404 errors for `/api` and `/docs`, you can create a file `nginx-backend-not-found.conf`:

```nginx
location /api {
    return 404;
}

location /docs {
    return 404;
}
```
And in your `Dockerfile` add a line:

```Dockerfile
COPY ./nginx-backend-not-found.conf /etc/nginx/extra-conf.d/nginx-backend-not-found.conf
```
These files will be included inside of an Nginx `server` directive, so you have to put contents that can be included there, such as `location` blocks.
This functionality was made to solve a very specific but common use case:

Let's say you have a load balancer on top of your frontend (and probably backend too), and it sends everything that goes to `/api/` to the backend, `/docs` to an API documentation site (handled by the backend or another service), and the rest, `/`, to your frontend.

And your frontend has long-term caching for your main frontend app (as would be normal).

Then at some point, during development or because of a bug, your backend, which serves `/docs`, is down.

You try to go there, but because it's down, your load balancer falls back to what handles `/`: your frontend.

So, you only see your same frontend instead of `/docs`.

Then you check the logs in your backend, you fix it, and try to load `/docs` again.

But because the frontend had long-term caching, it still shows your same frontend at `/docs`, even though your backend is back online. Then you have to load it in an incognito window, fiddle with the local cache of your frontend, etc.

By making Nginx simply respond with 404 errors when `/docs` is requested, you avoid that problem.

And because you have a load balancer on top, redirecting requests for `/docs` to the correct service, Nginx would never actually return that 404, only in the case of a failure or during development.
This project is licensed under the terms of the MIT license.