Project page: https://github.com/openresty/lua-resty-lock
This library implements a simple mutex lock in a similar way to
ngx_proxy module's proxy_cache_lock directive.
* lua-resty-lock implements a mutual-exclusion (mutex) lock
Under the hood, this library uses ngx_lua module's shared memory
dictionaries. The lock waiting is nonblocking because we use
stepwise ngx.sleep to poll the lock periodically.
* Implemented on top of ngx_lua's shared memory dictionaries
* Lock waiting is non-blocking: ngx.sleep is used to poll the lock state periodically
new: create a lock object
Syntax: obj, err = lock:new(dict_name, opts?)
Creates a new lock object instance by specifying the shared dictionary
name (created by lua_shared_dict) and an optional options table opts.
* The lock is backed by a shared dictionary; the opts table is optional
In case of failure, returns nil and a string describing the error.
* On failure, returns nil plus a string describing the error
# Optional fields in opts
* exptime: expiration time of the lock, default 30s, with a resolution of 0.001s
* timeout: maximum time to wait for the lock, default 5s; 0 means give up immediately instead of waiting when the lock cannot be acquired
* step: initial sleep interval between lock-polling attempts, in seconds, default 0.001s, with a resolution of 0.001s
* ratio: growth ratio of the sleep interval, default 2, i.e. each wait is twice as long as the previous one
* max_step: maximum sleep interval, default 0.5s
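A minimal sketch of creating a lock with explicit opts; the shared dictionary name "my_locks" is an assumption here and would have to be declared with lua_shared_dict my_locks 100k; in the http block:
local resty_lock = require "resty.lock"
-- all values are in seconds, with 0.001s resolution
local lock, err = resty_lock:new("my_locks", {
exptime = 10, -- the lock auto-expires after 10s
timeout = 5, -- wait at most 5s to acquire the lock
step = 0.01, -- first polling interval
ratio = 2, -- double the interval on every retry
max_step = 0.5, -- cap the polling interval at 0.5s
})
if not lock then
ngx.log(ngx.ERR, "failed to create lock: ", err)
end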
lock: acquire a lock
Syntax: elapsed, err = obj:lock(key)
Tries to lock a key across all the Nginx worker processes in the
current Nginx server instance. Different keys are different locks.
The length of the key string must not be larger than 65535 bytes.
* All worker processes contend for the same lock
* Different keys are different locks
* The key string must not be longer than 65535 bytes
Returns the waiting time (in seconds) if the lock is successfully acquired. Otherwise returns nil and a string describing the error.
* On success, returns the waiting time (in seconds)
* On failure, returns nil plus a string describing the error
The waiting time is not from the wallclock, but rather is from simply
adding up all the waiting "steps". A nonzero elapsed return value indicates
that someone else has just held this lock. But a zero return value cannot
guarantee that no one else has just acquired and released the lock.
* The waiting time is the sum of all sleep steps, not wall-clock time
* A nonzero value means some other worker was just holding this lock
* A zero value does not guarantee that no other worker just acquired and released the lock
When this method is waiting on fetching the lock, no operating system
threads will be blocked and the current Lua "light thread" will be
automatically yielded behind the scenes.
* While waiting for the lock, no operating system thread is blocked
* The current Lua "light thread" automatically yields the CPU
It is strongly recommended to always call the unlock() method to
actively release the lock as soon as possible.
* It is recommended to release the lock explicitly by calling unlock() as soon as possible
If the unlock() method is never called after this method call, the
lock will get released when either
the current resty.lock object instance is collected automatically by the Lua GC, or
the exptime for the lock entry is reached.
* If unlock() is never called, the lock is released automatically when:
* the resty.lock object is collected by the Lua GC
* the lock entry's exptime is reached
Common errors for this method call are
"timeout" : The timeout threshold specified by the timeout option of the new method is exceeded.
"locked" : The current resty.lock object instance is already holding a lock (not necessarily of the same key).
* Common errors: the lock wait timed out, or the current object is already holding a lock (not necessarily on the same key)
Other possible errors are from ngx_lua's shared dictionary API.
* Other possible errors come from ngx_lua's shared dictionary API
It is required to create different resty.lock instances for multiple
simultaneous locks (i.e., those around different keys)
* Create a separate resty.lock instance (via new) for each lock held at the same time (i.e. locks on different keys)
unlock: release the lock
Syntax: ok, err = obj:unlock()
Releases the lock held by the current resty.lock object instance.
Returns 1 on success. Returns nil and a string describing the error otherwise.
* Releases the lock held by the current lock object
* On success, returns 1
* On failure, returns nil plus a string describing the error
If you call unlock when no lock is currently held, the error
"unlocked" will be returned
* Calling unlock when no lock is currently held returns the error "unlocked"
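A minimal sketch of a full lock/unlock round trip; the shared dictionary name "my_locks" and the key "my-key" are assumptions for illustration:
local resty_lock = require "resty.lock"
local lock, err = resty_lock:new("my_locks")
if not lock then
return ngx.say("failed to create lock: ", err)
end
-- waits (by stepwise ngx.sleep polling) until the key is locked
-- or the timeout configured in new() is exceeded
local elapsed, err = lock:lock("my-key")
if not elapsed then
return ngx.say("failed to acquire lock: ", err) -- e.g. "timeout"
end
ngx.say("lock acquired after waiting ", elapsed, " seconds")
-- ... critical section protected by the lock ...
local ok, err = lock:unlock()
if not ok then
return ngx.say("failed to unlock: ", err)
end
ngx.say("lock released")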
expire: set the TTL of the currently held lock
Syntax: ok, err = obj:expire(timeout)
Sets the TTL of the lock held by the current resty.lock object instance.
This will reset the timeout of the lock to timeout seconds if it is given,
otherwise the timeout provided while calling new will be used.
* Sets the expiration time of the lock currently held by the object
Note that the timeout supplied inside this function is independent from
the timeout provided while calling new. Calling expire() will not change
the timeout value specified inside new and subsequent expire(nil) call
will still use the timeout number from new.
* Calling expire(nil) resets the lock's TTL to the value configured when the lock object was created with new
Returns true on success. Returns nil and a string describing the error otherwise.
* On success, returns true
* On failure, returns nil plus a string describing the error
If you call expire when no lock is currently held, the error
"unlocked" will be returned
* Calling expire when no lock is currently held returns the error "unlocked"
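A minimal sketch of extending the TTL of a held lock while a slow operation runs; the dictionary name "my_locks", the key and all timings are assumptions:
local resty_lock = require "resty.lock"
local lock, err = resty_lock:new("my_locks", { exptime = 10 })
if not lock then
return ngx.say("failed to create lock: ", err)
end
local elapsed, err = lock:lock("slow-job")
if not elapsed then
return ngx.say("failed to acquire lock: ", err)
end
-- the protected work turns out to need more than the 10s exptime,
-- so push the TTL of the held lock out to 60 seconds
local ok, err = lock:expire(60)
if not ok then
ngx.say("failed to extend the lock TTL: ", err)
end
-- ... long-running work ...
-- expire(nil) would fall back to the TTL given to new (10s here)
local ok, err = lock:unlock()
if not ok then
return ngx.say("failed to unlock: ", err)
end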
Create a separate lock object for each light thread
It is always a bad idea to share a single resty.lock object instance
across multiple ngx_lua "light threads" because the object itself is
stateful and is vulnerable to race conditions. It is highly recommended
to always allocate a separate resty.lock object instance for each
"light thread" that needs one
* It is recommended to allocate a separate lock object for each light thread that needs one, as sketched below
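A minimal sketch of two concurrent "light threads" that each allocate their own resty.lock instance instead of sharing one; the dictionary name "my_locks" and the worker body are assumptions:
local resty_lock = require "resty.lock"
local function protected_work(key)
-- each light thread builds its own lock object
local lock, err = resty_lock:new("my_locks")
if not lock then
return ngx.log(ngx.ERR, "failed to create lock: ", err)
end
local elapsed, err = lock:lock(key)
if not elapsed then
return ngx.log(ngx.ERR, "failed to acquire lock: ", err)
end
-- ... critical section for this key ...
local ok, err = lock:unlock()
if not ok then
ngx.log(ngx.ERR, "failed to unlock: ", err)
end
end
-- two light threads running concurrently, each with its own lock instance
local t1 = ngx.thread.spawn(protected_work, "key-a")
local t2 = ngx.thread.spawn(protected_work, "key-b")
ngx.thread.wait(t1)
ngx.thread.wait(t2)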
Locking workflow for caching backend data
One common use case for this library is avoid the so-called "dog-pile effect",
that is, to limit concurrent backend queries for the same key when a cache miss
happens. This usage is similar to the standard ngx_proxy module's proxy_cache_lock
directive.
* This lock is typically used on a cache miss to limit concurrent queries against the backend database
The basic workflow for a cache lock is as follows:
* Check the cache for a hit with the key. If a cache miss happens,
proceed to step 2.
* Instantiate a resty.lock object, call the lock method on the key,
and check the 1st return value, i.e., the lock waiting time. If it
is nil, handle the error; otherwise proceed to step 3.
* Check the cache again for a hit. If it is still a miss, proceed to step 4;
otherwise release the lock by calling unlock and then return the cached value.
* Query the backend (the data source) for the value, put the result into the
cache, and then release the lock currently held by calling unlock
* The general workflow is as follows
* Check the cache; on a cache miss, go to step 2
* Create a lock object and try to lock the key; once the lock is acquired, go to step 3
* Check the cache again; on a hit, release the lock and return the cached value, otherwise query the backend
* Store the value returned by the backend in the cache, release the lock, and return the value to the client
Cache locking example
local resty_lock = require "resty.lock"
local cache = ngx.shared.my_cache
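-- note: `key`, `fetch_redis()` and `fail()` are assumed to be defined elsewhere in the handler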
-- step 1:
local val, err = cache:get(key)
if val then
ngx.say("result: ", val)
return
end
if err then
return fail("failed to get key from shm: ", err)
end
-- cache miss!
-- step 2:
local lock, err = resty_lock:new("my_locks")
if not lock then
return fail("failed to create lock: ", err)
end
local elapsed, err = lock:lock(key)
if not elapsed then
return fail("failed to acquire the lock: ", err)
end
-- lock successfully acquired!
-- step 3:
-- someone might have already put the value into the cache
-- so we check it here again:
val, err = cache:get(key)
if val then
local ok, err = lock:unlock()
if not ok then
return fail("failed to unlock: ", err)
end
ngx.say("result: ", val)
return
end
-- step 4:
local val = fetch_redis(key)
if not val then
local ok, err = lock:unlock()
if not ok then
return fail("failed to unlock: ", err)
end
-- FIXME: we should handle the backend miss more carefully
-- here, like inserting a stub value into the cache.
ngx.say("no value found")
return
end
-- update the shm cache with the newly fetched value
local ok, err = cache:set(key, val, 1)
if not ok then
local ok, err = lock:unlock()
if not ok then
return fail("failed to unlock: ", err)
end
return fail("failed to update shm cache: ", err)
end
local ok, err = lock:unlock()
if not ok then
return fail("failed to unlock: ", err)
end
ngx.say("result: ", val)
Create the MySQL container
docker run -it -d --net fixed --ip 172.18.0.61 -p 3306:3306 \
-e MYSQL_ROOT_PASSWORD=123456 --name mysql4 mysql
Set up the database: fix authentication, create a table, insert data
mysql> ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> create database lihu;
ERROR 1007 (HY000): Can't create database 'lihu'; database exists
mysql> drop database lihu;
Query OK, 1 row affected (0.04 sec)
mysql> create database lihu;
Query OK, 1 row affected (0.01 sec)
mysql> use lihu;
Database changed
mysql> create table test(id int not null primary key auto_increment, name varchar(20));
Query OK, 0 rows affected (0.04 sec)
mysql> insert into test(id, name) values(1, 'gtlx'), (2, 'hzw');
Query OK, 2 rows affected (0.01 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> select * from test;
+----+------+
| id | name |
+----+------+
| 1 | gtlx |
| 2 | hzw |
+----+------+
2 rows in set (0.00 sec)
nginx.conf
pcre_jit on;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
client_body_temp_path /var/run/openresty/nginx-client-body;
proxy_temp_path /var/run/openresty/nginx-proxy;
fastcgi_temp_path /var/run/openresty/nginx-fastcgi;
uwsgi_temp_path /var/run/openresty/nginx-uwsgi;
scgi_temp_path /var/run/openresty/nginx-scgi;
sendfile on;
keepalive_timeout 65;
include /etc/nginx/conf.d/*.conf;
# shared memory zone used for both the cache and the locks
lua_shared_dict test 10m;
}
default.conf
server {
listen 80;
server_name localhost;
location / {
root /usr/local/openresty/nginx/html;
index index.html index.htm;
}
location /test {
content_by_lua_block {
local cache = ngx.shared.test;
local cjson = require 'cjson';
local resty_lock = require 'resty.lock';
local mysql = require 'resty.mysql';
local db, err = mysql:new();
if not db then
ngx.say("failed to create mysql object: ", err);
return
end
db:set_timeout(1000);
local res, err, errcode, sqlstate = db:connect({
host = "172.18.0.61", port = 3306, database = "lihu",
user = "root", password = "123456"
});
if not res then
ngx.say("failed to connect to mysql: ", err, " ", errcode, " ", sqlstate);
return
end
local function fetch_data(id)
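-- note: in real code the request-supplied id should be validated or
-- escaped (e.g. with ngx.quote_sql_str) before being concatenated
-- into the SQL statement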
local res, err, errcode, sqlstate = db:query(
"select * from test where id ="..id
);
if not res then
ngx.say("数据查询失败", err);
return nil;
end
if #res == 0 then
ngx.say("后端没有查询到数据");
return nil;
end
ngx.say("后端查询结果 ==> ");
ngx.say(type(res));
ngx.say(cjson.encode(res));
for key,value in pairs(res) do
ngx.say(key, " ==> ", cjson.encode(value))
end
ngx.say("\n后端返回数据 ==> ",res[1].name);
return res[1].name;
end
local id = ngx.var.arg_id;
local value, err = cache:get(id);
if value then
ngx.say("缓存中查询到结果: ", id, " ==> ", value);
return
end
local lock, err = resty_lock:new("test");
if not lock then
ngx.say("创建锁失败 ==> ", err);
return
end
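-- note: a single fixed lock key serializes cache misses for all ids;
-- a per-id key kept distinct from the cache keys (e.g. "lock:" .. id)
-- would let different ids be fetched concurrently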
local elapsed, err = lock:lock("lock");
if not elapsed then
ngx.say("加锁失败 ==> ", err);
return
end
value, err = cache:get(id);
if value then
local ok, err = lock:unlock();
if not ok then
ngx.say("释放锁失败 ==> ", err);
end
ngx.say("缓存中二次查询获取结果: ", id, " ==> ", value);
return
end
value = fetch_data(id);
if not value then
local ok, err = lock:unlock();
if not ok then
ngx.say("failed to unlock ==> ", err);
end
ngx.say("no result found in the database");
return
end
cache:set(id, value); -- no expiration time is given, so the value stays cached until evicted
local ok, err = lock:unlock();
if not ok then
ngx.say("failed to unlock ==> ", err);
end
ngx.say("数据库查询后返回数据: ", id, " ==> ", value);
}
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/local/openresty/nginx/html;
}
}
Create the OpenResty container
docker run -it -d --net fixed --ip 172.18.0.101 -p 8001:80 \
-v /Users/huli/lua/openresty/cache/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf \
-v /Users/huli/lua/openresty/cache/default.conf:/etc/nginx/conf.d/default.conf \
--name open-cache lihu12344/openresty
Testing
# Cache miss: the value has to be fetched from the backend database
huli@hudeMacBook-Pro cache % curl --location --request GET 'localhost:8001/test?id=2'
backend query result ==>
table
[{"id":2,"name":"hzw"}]
1 ==> {"id":2,"name":"hzw"}
value returned by the backend ==> hzw
fetched from the database: 2 ==> hzw
# On the second request, the value is served directly from the cache
huli@hudeMacBook-Pro cache % curl --location --request GET 'localhost:8001/test?id=2'
cache hit: 2 ==> hzw