redis-shake v3.0.0

suxb201 2021-09-03 14:25:52 +08:00
parent 68a04e06a2
commit 011854100c
587 changed files with 26208 additions and 18563 deletions


@@ -1,47 +1,39 @@
name: CI
on: [push, pull_request]
on: [ push, pull_request ]
jobs:
black-box-test:
runs-on: ubuntu-latest
strategy:
matrix:
redis-version: [ 5, 6, 7 ]
steps:
- name: Git checkout
uses: actions/checkout@v2
test-ubuntu-with-redis-5:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: clone and make redis
run: |
sudo apt-get install git
git clone https://github.com/redis/redis
cd redis
git checkout 5.0
make REDIS_CFLAGS='-Werror' BUILD_TLS=no -j
- name: make RedisShake
run: |
make
cp redis/src/redis-server bin/
- name: test
run: |
cd test
pip3 install -r requirements.txt
python3 main.py
- name: clone and make redis
run: |
sudo apt-get install git
git clone https://github.com/redis/redis
cd redis
git checkout ${{ matrix.redis-version }}.0
make -j
mkdir bin
cp src/redis-server bin/redis-server
echo "$GITHUB_WORKSPACE/redis/bin" >> $GITHUB_PATH
test-ubuntu-with-redis-6:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: clone and make redis
run: |
sudo apt-get install git
git clone https://github.com/redis/redis
cd redis
git checkout 6.0
make REDIS_CFLAGS='-Werror' BUILD_TLS=no -j
- name: make RedisShake
run: |
make
cp redis/src/redis-server bin/
- name: test
run: |
cd test
pip3 install -r requirements.txt
python3 main.py
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: make redis-shake
run: |
sh build.sh
- name: test
run: |
cd test
pip3 install -r requirements.txt
python3 main.py

34
.gitignore vendored

@@ -1,35 +1,7 @@
.pytest_cache
__pycache__
tmp
.gopath
.idea
*.iml
logs
*.pprof
*.output
*.data
*.sw[ap]
*.yml
*.pid
*.tar.gz
*.log
tags
result.db.*
bin/*
conf/*
!conf/redis-shake.conf
!.circleci/config.yml
dump.data
runtime.trace
.DS_Store
data
.cache/
diagnostic/
src/vendor/*
!src/vendor/vendor.json
/scripts/hypervisor
*.log
*.rdb
*.aof

217
ChangeLog

@@ -1,217 +0,0 @@
2021-08-19 Alibaba Cloud.
* VERSION: 2.1.0
* IMPROVE: 1. Go modules migration. #305
2. Add target.dbmap config option. #352
3. Add big key split for stream. #354
2020-05-02 Alibaba Cloud.
* VERSION: 2.0.1
* IMPROVE: rebase v1.6.28.
2020-04-22 Alibaba Cloud.
* VERSION: 2.0
* FEATURE: support resuming from break point.
2020-04-21 Alibaba Cloud.
* VERSION: 1.6.28
* IMPROVE: polish redigo log.
2020-03-25 Alibaba Cloud.
* VERSION: 1.6.27
* BUGFIX: transaction bug where only one key was used but the "not
hashed in the same slot" error was still raised, see #257.
* IMPROVE: polish redis save rdb failed log, see #256.
* IMPROVE: support mset/msetnx command, see #247.
* IMPROVE: change "rewrite" to "key_exists" in configuration, see #259.
2020-02-06 Alibaba Cloud.
* VERSION: 1.6.26
* IMPROVE: set auth_type to auth as default, see #237.
2020-01-21 Alibaba Cloud.
* VERSION: 1.6.25
* BUGFIX: in redis 2.8, deleting a key failed when converting the returned reply.
see #227.
2019-12-31 Alibaba Cloud.
* VERSION: 1.7-unstable
* FEATURE: add integration test.
* FEATURE: support resuming from break-point.
2019-12-20 Alibaba Cloud.
* VERSION: 1.6.24
* BUGFIX: cluster receive channel size adjusted from 4096 to `sender.count`.
* BUGFIX: update redis-go-cluster to solve the send and receive
concurrency conflict.
* BUGFIX: fix some bugs in redis-go-cluster including io timeout problem,
see #192, #210.
* IMPROVE: set 'psync' to true by default in configuration, if the source
redis version is less than v2.8, switch to false.
* IMPROVE: when the target version is less than the source's, do restore
directly, then catch the "Bad data format" error and retry by splitting the
value. see #211.
* IMPROVE: catch more errors in `restoreBigRdbEntry` function.
* BUGFIX: merge bad resp CRLF bugfix, see #204.
2019-11-28 Alibaba Cloud.
* VERSION: 1.6.23
* BUGFIX: update redis-go-cluster driver to solve MOVED error in lua
script.
* BUGFIX: update redis-go-cluster driver to remove the error when meeting a
nil bulk string.
* IMPROVE: add keep_alive in cluster connection.
2019-11-25 Alibaba Cloud.
* VERSION: 1.6.22
* BUGFIX: solve MOVED error when the key is unicode encoded, which was not
completely solved in v1.6.21.
* BUGFIX: update redis-go-cluster to solve the bug of \r\n. see #73 in
redis-full-check.
* BUGFIX: solve flushcount comparison: "flushcount=4097 > 4096" =>
"flushcount=4096 >= 4096".
* IMPROVE: add more log in redis-go-cluster.
2019-11-12 Alibaba Cloud.
* VERSION: 1.6.21
* BUGFIX: update redis-go-cluster to solve the MOVED error when the target
redis type is cluster and the key is unicode encoded. Thanks
@shuff1e(sfxu@foxmail.com). see #68 in RedisFullCheck.
* IMPROVE: in rump mode, filter keys in the fetching stage instead of the writing stage.
2019-10-18 Alibaba Cloud.
* VERSION: 1.6.20
* IMPROVE: add progress bar in rump mode. see #174.
* IMPROVE: add run_direct.py script.
* IMPROVE: set big_key_threshold to 1 to avoid some sync failure cases. see
#173.
* IMPROVE: remove the restriction that node count must be 1 when the source
type is standalone.
2019-09-19 Alibaba Cloud.
* VERSION: 1.6.19
* BUGFIX: update "redis-go-cluster" driver to fix bug to throw CROSSSLOT
error when migrating lua script.
* IMPROVE: only run hypervisor when type is `sync`.
2019-09-04 Alibaba Cloud.
* VERSION: 1.6.18
* BUGFIX: restore quicklist panic when target is cluster. see #156
2019-08-27 Alibaba Cloud.
* VERSION: 1.6.17
* BUGFIX: transaction syncing panic when target redis is cluster. see
#145.
* IMPROVE: adjust RecvChanSize based on `sender.count` or `scan.key_number`
if target redis type is cluster.
* IMPROVE: remove some variables in conf like `heartbeat`, `ncpu`.
* IMPROVE: print inner error message from redigo driver return message.
2019-08-09 Alibaba Cloud.
* VERSION: 1.6.16
* BUGFIX: big keys in `rump` mode all expired.
* BUGFIX: `rump` mode failed to restore quicklist. see #141.
2019-08-09 Alibaba Cloud.
* VERSION: 1.6.15
* IMPROVE: add `target.version` to support some proxy like twemproxy.
* BUGFIX: filter the `select` command in `rump` mode, since only db0 is
supported in some redis versions.
* IMPROVE: remove the cluster limit when target type is rump.
* IMPROVE: add `scan.key_number` limit judge when target type is cluster
in type `rump`. see #136.
2019-08-01 Alibaba Cloud.
* VERSION: 1.6.14
* BUGFIX: the `rdb.parallel` parameter failed to limit concurrency.
see #133
* BUGFIX: call `select` when target redis type is cluster in `rump` mode.
* IMPROVE: add `http_profile = -1` to exit once finish rdb syncing in
`restore` mode.
* IMPROVE: the 'info xxx' command isn't supported in codis; use 'info' and
parse the 'xxx' section instead.
* IMPROVE: rename `rdb.xx` to `source.rdb.xx` or `target.rdb.xx`.
2019-07-24 Alibaba Cloud.
* VERSION: 1.6.13
* IMPROVE: support `filter.db.whitelist` and `filter.db.blacklist` to let
different db syncing to db0 even when target type is cluster. see #127.
* BUGFIX: fix bug of connection url in automatic discovery in cluster. see
#124.
* IMPROVE: support `target.db` in rump mode.
* IMPROVE: add debug log in RDB syncing.
2019-07-11 Alibaba Cloud.
* VERSION: 1.6.12
* IMPROVE: support filter key with whitelist and blacklist.
* IMPROVE: support filter db with whitelist and blacklist.
* BUGFIX: fix "bypass" count in metric.
2019-07-04 Alibaba Cloud.
* VERSION: 1.6.11
* BUGFIX: adapt "redis-go-cluster" driver to fix bug of big key syncing
block in `sync` mode. See #114
2019-07-03 Alibaba Cloud.
* VERSION: 1.6.10
* IMPROVE: support print Lua in `decode` mode.
* BUGFIX: merge metric panic PR#111
* IMPROVE: check checksum and version once receiving error from the target in
`rump` mode.
2019-06-21 Alibaba Cloud.
* VERSION: 1.6.9
* IMPROVE: support Lua and transaction when target is open source cluster
version.
* IMPROVE: support filter Lua: `filter.lua`
2019-06-21 Alibaba Cloud.
* VERSION: 1.6.8
* IMPROVE: add hypervisor.
* IMPROVE: add key filter in `rump` mode.
* IMPROVE: add prometheus metrics with url: "localhost:$http_profile/metrics"
2019-06-13 Alibaba Cloud.
* VERSION: 1.6.7
* IMPROVE: split big key in `rump` mode.
* IMPROVE: add rate transmission mechanism in `rump` mode.
* IMPROVE: add metric in `rump` mode.
2019-06-09 Alibaba Cloud.
* VERSION: 1.6.6
* cherry-pick merge v1.4.4
* BUGFIX: delete single command failed when filter key given.
2019-06-06 Alibaba Cloud.
* VERSION: 1.6.5
* IMPROVE: run rump in parallel to support several db nodes behind a proxy.
* BUGFIX: rump panicked when the source is a proxy with more than 1 db.
2019-06-05 Alibaba Cloud.
* VERSION: 1.4.4
* BUGFIX: modify the ttl from millisecond to second in restore when
overpass big key threshold.
* IMPROVE: set some default values in configuration.
2019-05-30 Alibaba Cloud.
* VERSION: 1.6.4
* BUGFIX: fix bug of `GetDetailedInfo` panic.
2019-05-26 Alibaba Cloud.
* VERSION: 1.6.3
* IMPROVE: target address supports cluster.
* IMPROVE: supports TLS for standalone.
2019-05-16 Alibaba Cloud.
* VERSION: 1.6.2
* BUGFIX: fix bug of `rump` mode only syncing db 0 data.
2019-05-14 Alibaba Cloud.
* VERSION: 1.6.1
* IMPROVE: support fetching db address from sentinel, the failover
mechanism is not yet supported.
2019-05-08 Alibaba Cloud.
* VERSION: 1.6.0
* FEATURE: source address supports cluster.
* FEATURE: target address supports several proxies to write data in
a round-robin way.
2019-05-11 Alibaba Cloud.
* VERSION: 1.4.3
* BUGFIX: add metric in restore mode.
2019-04-24 Alibaba Cloud.
* VERSION: 1.4.2
* IMPROVE: improve rump to support fetching data for keys listed in a given
file.
2019-04-24 Alibaba Cloud.
* VERSION: 1.4.1
* IMPROVE: improve rump to better fetch data from aliyun_cluster and
tencent_cluster.
2019-04-21 Alibaba Cloud.
* VERSION: 1.4.0
* FEATURE: support "rump" type to syncing data when `sync` and `psync`
commands are not supported.
* IMPROVE: add commands sending and receiving debug log.
2019-04-13 Alibaba Cloud.
* VERSION: 1.2.3
* IMPROVE: polish log print to print more error info.
2019-04-03 Alibaba Cloud.
* VERSION: 1.2.2
* BUGFIX: support 5.0 rdb RDB_OPCODE_MODULE_AUX, RDB_OPCODE_IDLE and
RDB_OPCODE_FREQ type.
2019-03-27 Alibaba Cloud.
* VERSION: 1.2.1
* IMPROVE: support syncing lua script in RDB syncing.
2019-03-11 Alibaba Cloud.
* VERSION: 1.2.0
* FEATURE: support 5.0.
2019-02-21 Alibaba Cloud.
* VERSION: 1.0.0
* REDIS-SHAKE: initial release.


@@ -1,6 +0,0 @@
FROM busybox
COPY ./bin/redis-shake /usr/local/app/redis-shake
COPY ./conf/redis-shake.conf /usr/local/app/redis-shake.conf
ENV TYPE sync
CMD /usr/local/app/redis-shake -type=${TYPE} -conf=/usr/local/app/redis-shake.conf


@@ -1,15 +0,0 @@
all: build
runtest:
./test.sh
build:
./build.sh
clean:
rm -rf bin
rm -rf *.pprof
rm -rf *.output
rm -rf logs
rm -rf diagnostic/
rm -rf *.pid

184
README.md

@@ -1,128 +1,108 @@
RedisShake is mainly used to synchronize data from one redis to another.<br>
Thanks to Douyu's WSD team for the support. <br>
# redis-shake
* [中文文档](https://yq.aliyun.com/articles/691794)
* [English tutorial](https://github.com/alibaba/RedisShake/wiki/tutorial-about-how-to-set-up)
* [中文使用文档](https://github.com/alibaba/RedisShake/wiki/%E7%AC%AC%E4%B8%80%E6%AC%A1%E4%BD%BF%E7%94%A8%EF%BC%8C%E5%A6%82%E4%BD%95%E8%BF%9B%E8%A1%8C%E9%85%8D%E7%BD%AE%EF%BC%9F)
* [Release package](https://github.com/alibaba/RedisShake/releases)
[![CI](https://github.com/alibaba/RedisShake/actions/workflows/ci.yml/badge.svg?branch=v3)](https://github.com/alibaba/RedisShake/actions/workflows/ci.yml)
# Redis-Shake
redis-shake is a tool for Redis data migration, with some data-transformation capability.
Redis-shake is developed and maintained by the NoSQL team of the Alibaba Cloud Database department.
## Features
Redis-shake has made some improvements based on [redis-port](https://github.com/CodisLabs/redis-port), including bug fixes, performance improvements and feature enhancements.
* Supports Redis native data structures
* Supports a standalone source and a standalone or cluster target
* Tested on Redis 5.0, 6.0 and 7.0
* Supports custom filter rules written in Lua
# Main Functions
![image.png](https://s2.loli.net/2022/06/30/vU346lVBrNofKzu.png)
The type can be one of the following:
# Documentation
* **decode**: Decode dumped payload to human readable format (hex-encoding).
* **restore**: Restore RDB file to target redis.
* **dump**: Dump RDB file from source redis.
* **sync**: Sync data from source redis to target redis by `sync` or `psync` command. Including full synchronization and incremental synchronization.
* **rump**: Sync data from source redis to target redis by the `scan` command. Only supports full synchronization. Plus, RedisShake also supports fetching data for given keys from an input file when the `scan` command is not supported on the source side. This mode is usually used when the `sync` and `psync` redis commands aren't supported.
## Installation
Please check out `conf/redis-shake.conf` for detailed parameter descriptions.
### Download from Releases
# Support
Not yet available for the unstable version.
Supports Redis versions from 2.x to 6.x.
### Build from source
Supports `Standalone`, `Cluster` and some proxy types like `Codis`, `twemproxy`, `Aliyun Cluster Proxy`, `Tencent Cloud Proxy` and so on.
After downloading the source, run `sh build.sh` to build.
For `codis` and `twemproxy` there may be some constraints; please check out this [question](https://github.com/alibaba/RedisShake/wiki/FAQ#q-does-redisshake-supports-codis-and-twemproxy).
Supported Redis Modules:
[TairHash](https://github.com/alibaba/TairHash): A redis module, similar to redis hash, but you can set expire and version for the field
[TairZset](https://github.com/alibaba/TairZset): A redis module, similar to redis zset, but you can set multiple scores for each member to support multi-dimensional sorting
[TairString](https://github.com/alibaba/TairString): A redis module, similar to redis string, but you can set expire and version for the value. It also provides many very useful commands, such as cas/cad, etc.
# Configuration
Redis-shake has several parameters in the configuration `conf/redis-shake.conf` that may be confusing; if this is your first time using it, please read this [tutorial](https://github.com/alibaba/RedisShake/wiki/tutorial-about-how-to-set-up).
# Verification
Users can use [RedisFullCheck](https://github.com/alibaba/RedisFullCheck) to verify correctness.
# Metric
Redis-shake offers metrics through a restful api and the log file.<br>
* restful api: `curl 127.0.0.1:9320/metric`.
* log: the metric info is printed in the log periodically if enabled.
* inner routine heap: `curl http://127.0.0.1:9310/debug/pprof/goroutine?debug=2`
# Redis Type
Both the source and target type can be standalone, open source cluster or proxy. Although proxy architectures differ across vendors, we still support different cloud vendors like alibaba-cloud, tencent-cloud and so on.
If the target is an open source redis cluster, redis-shake uses the [redis-go-cluster](https://github.com/chasex/redis-go-cluster) driver to write data. When the target type is proxy, redis-shake writes data in a round-robin way.
If the source is a redis cluster, redis-shake launches multiple goroutines for parallel pulling. Users can use `rdb.parallel` to control the RDB syncing concurrency.
The "move slot" operations must be disabled on the source side.
# Usage
## download the binary
You can **directly download** the binary in the [release package](https://github.com/alibaba/RedisShake/releases).
Run it with a command like:
```shell
./redis-shake.linux -type=sync -conf=redis-shake.conf # note: modify redis-shake.conf to match your needs first.
```
## build by yourself
You can also build redis-shake yourself with the following steps:
```shell
git clone https://github.com/alibaba/RedisShake.git
cd RedisShake
sh build.sh
cd bin
./redis-shake.linux -type=sync -conf=redis-shake.conf # note: modify redis-shake.conf to match your needs first.
```
# Shake series tools
## Running
We also provide other synchronization tools in the Shake series.
1. Edit redis-shake.toml and modify its source and target settings.
2. Start redis-shake:
* [MongoShake](https://github.com/aliyun/MongoShake): mongodb data synchronization tool.
* [RedisShake](https://github.com/aliyun/RedisShake): redis data synchronization tool.
* [RedisFullCheck](https://github.com/aliyun/RedisFullCheck): redis data synchronization verification tool.
* [NimoShake](https://github.com/alibaba/NimoShake): sync dynamodb to mongodb.
```shell
./bin/redis-shake redis-shake.toml
```
Plus, we have a [DingTalk](https://www.dingtalk.com/) group where users can join and discuss.
Group code: 23165540
3. Watch the synchronization status.
# Code branch rules
## Configuration
Version rules: a.b.c.
Configuration file reference: https://github.com/alibaba/RedisShake/blob/v3/redis-shake.toml
* a: major version
* b: minor version. even number means stable version.
* c: bugfix version
To avoid ambiguity, every item in the configuration file must be assigned a value; otherwise an error is reported.
| branch name | rules |
| --- | :--- |
| master | master branch; pushing code directly is not allowed. Stores the latest stable version. The develop branch is merged into it once a new version is released.|
| **develop**(main branch) | develop branch. All the following branches fork from this. |
| feature-\* | new feature branch. Forked from the develop branch and merged back after development, testing and code review. |
| bugfix-\* | bugfix branch. Forked from the develop branch and merged back after development, testing and code review. |
| improve-\* | improvement branch. Forked from the develop branch and merged back after development, testing and code review. |
## Data filtering
Tag rules:
Add a tag when releasing: "release-v{version}-{date}", for example "release-v1.0.2-20180628".
Users can use `-version` to print the version.
redis-shake supports custom filter rules written in Lua. To start redis-shake with a Lua script:
# Thanks
```shell
./bin/redis-shake redis-shake.toml filters/xxx.lua
```
| Username | Mail |
| :------: | :------: |
| ceshihao | davidzheng23@gmail.com |
| wangyiyang | wangyiyang.kk@gmail.com |
| muicoder | muicoder@gmail.com |
| zhklcf | huikangzhu@126.com |
| shuff1e | sfxu@foxmail.com |
| xuhualin | xuhualing8439523@163.com |
Write Lua scripts following the `filters/*.lua` files. The following filter templates are provided for reference:
1. `filters/print.lua`: print every command received
2. `filters/swap_db.lua`: swap the data of db0 and db1
### Custom filter rules
Create a Lua script modeled on `filters/print.lua` and implement a `filter` function in it, whose arguments are:
- id: command sequence number
- is_base: whether the command was read from the dump.rdb file
- group: command group; see the description files under [redis/src/commands](https://github.com/redis/redis/tree/unstable/src/commands) for the group each command belongs to
- cmd_name: command name
- keys: the command's keys
- slots: the slots of the keys
- db_id: database id
- timestamp_ms: command timestamp in milliseconds. Not supported in the current version.
The return values are:
- code
  - 0: do not filter this command
  - 1: filter this command out
  - 2: this command should not appear; redis-shake reports an error and exits
- db_id: the redirected db_id
# Contributing
## Lua scripts
More creative Lua scripts are welcome.
1. Add the script under `filters/`.
2. Add a description to `README.md`.
3. Submit a pull request.
## Redis Module support
1. Add the type under `internal/rdb/types`.
2. Add the command files under `scripts/commands`, generate the `table.go` file with the script, and move it to the `internal/commands` directory.
3. Add test cases under `test/cases`.
4. Submit a pull request.
# Thanks
The old redis-shake was a tool developed by Alibaba Cloud on top of Wandoujia's open-source redis-port, supporting real-time synchronization between heterogeneous Redis clusters.
redis-shake v3 reorganizes the code structure of the old redis-shake to make it more maintainable.
redis-shake v3 drew on the following projects:
- https://github.com/HDT3213/rdb
- https://github.com/sripathikrishnan/redis-rdb-tools


@@ -1,52 +1,36 @@
#!/bin/bash
set -o errexit
# make sure we're in the directory where the project lives
PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$PROJECT_DIR"
MODULE_NAME=$(grep module src/go.mod |cut -d ' ' -f 2)
# go version >=1.6
go_version=$(go version | awk -F' ' '{print $3;}')
bigVersion=$(echo $go_version | awk -F'[o.]' '{print $2}')
midVersion=$(echo $go_version | awk -F'[o.]' '{print $3}')
if [ $bigVersion -lt "1" -o $bigVersion -eq "1" -a $midVersion -lt "6" ]; then
echo "go version[$go_version] must >= 1.6"
exit 1
fi
# older versions of Git don't support --short!
if [ -d ".git" ];then
branch=$(git symbolic-ref -q HEAD | awk -F'/' '{print $3;}')
cid=$(git rev-parse HEAD)
else
branch="unknown"
cid="0.0"
fi
branch=$branch","$cid
info="$MODULE_NAME/redis-shake/common.Version=$branch"
# golang version
info=$info","$go_version
t=$(date "+%Y-%m-%d_%H:%M:%S")
info=$info","$t
set -e
echo "[ BUILD RELEASE ]"
BIN_DIR=$(pwd)/bin/
cd src
goos=(linux darwin windows)
for g in "linux" "darwin" "windows";
do
echo "try build GOOS=$g"
rm -rf "$BIN_DIR"
mkdir -p "$BIN_DIR"
# build the current platform
echo "try build for current platform"
go build -v -trimpath -o "$BIN_DIR/redis-shake" "./cmd/redis-shake"
echo "build success"
for g in "linux" "darwin"; do
for a in "amd64" "arm64"; do
echo "try build GOOS=$g GOARCH=$a"
export GOOS=$g
go build -v -trimpath -ldflags "-X $info" -o "$BIN_DIR/redis-shake.$g" "$MODULE_NAME/redis-shake/main"
export GOARCH=$a
go build -v -trimpath -o "$BIN_DIR/redis-shake-$g-$a" "./cmd/redis-shake"
unset GOOS
echo "build $g successfully!"
unset GOARCH
echo "build success"
done
done
cd $PROJECT_DIR
cp conf/redis-shake.conf $BIN_DIR
cp redis-shake.toml "$BIN_DIR"
if [ "$1" == "dist" ]; then
echo "[ DIST ]"
cd bin
cp -r ../filters ./
tar -czvf ./redis-shake.tar.gz ./redis-shake.toml ./redis-shake-* ./filters
rm -rf ./filters
cd ..
fi

84
cmd/redis-shake/main.go Normal file

@@ -0,0 +1,84 @@
package main
import (
"fmt"
"github.com/alibaba/RedisShake/internal/commands"
"github.com/alibaba/RedisShake/internal/config"
"github.com/alibaba/RedisShake/internal/filter"
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/reader"
"github.com/alibaba/RedisShake/internal/statistics"
"github.com/alibaba/RedisShake/internal/writer"
"os"
"runtime"
)
func main() {
if len(os.Args) < 2 || len(os.Args) > 3 {
fmt.Println("Usage: redis-shake <config file> <lua file>")
fmt.Println("Example: redis-shake config.toml lua.lua")
os.Exit(1)
}
if len(os.Args) == 3 {
luaFile := os.Args[2]
filter.LoadFromFile(luaFile)
}
// load config
configFile := os.Args[1]
config.LoadFromFile(configFile)
log.Init()
log.Infof("GOOS: %s, GOARCH: %s", runtime.GOOS, runtime.GOARCH)
log.Infof("Ncpu: %d, GOMAXPROCS: %d", config.Config.Advanced.Ncpu, runtime.GOMAXPROCS(0))
log.Infof("pid: %d", os.Getpid())
if len(os.Args) == 2 {
log.Infof("No lua file specified, will not filter any cmd.")
}
// create writer
var theWriter writer.Writer
switch config.Config.Target.Type {
case "standalone":
if len(config.Config.Target.Addresses) != 1 {
log.Panicf("standalone target must have only one address")
}
theWriter = writer.NewRedisWriter(config.Config.Target.Addresses[0], config.Config.Target.Password, config.Config.Target.IsTLS)
case "cluster":
if len(config.Config.Target.Addresses) == 1 {
log.Panicf("cluster target must have at least two address")
}
theWriter = writer.NewRedisClusterWriter(config.Config.Target.Addresses, config.Config.Target.Password, config.Config.Target.IsTLS)
default:
log.Panicf("unknown target type: %s", config.Config.Target.Type)
}
// create reader
source := &config.Config.Source
theReader := reader.NewPSyncReader(source.Address, source.Password, source.IsTLS)
ch := theReader.StartRead()
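// ch first yields entries decoded from the RDB snapshot (the lua filter
// sees these with is_base=true), then entries from the incremental stream.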
// start sync
statistics.Init()
id := uint64(0)
for e := range ch {
// calc arguments
e.Id = id
id++
e.CmdName, e.Group, e.Keys = commands.CalcKeys(e.Argv)
e.Slots = commands.CalcSlots(e.Keys)
// filter
code := filter.Filter(e)
if code == filter.Allow {
theWriter.Write(e)
statistics.AddAllowEntriesCount()
} else if code == filter.Disallow {
// entry filtered out: only count it
statistics.AddDisallowEntriesCount()
} else {
log.Panicf("error when run lua filter. entry: %s", e.ToString())
}
}
}


@@ -1,277 +0,0 @@
# This file is the configuration of redis-shake.
# If you have any problem, please visit the FAQ first: https://github.com/alibaba/RedisShake/wiki/FAQ
# current configuration version, do not modify.
conf.version = 1
# id
id = redis-shake
# The log file name; if left blank, logs are printed to stdout,
# otherwise they are written to the specified file.
# for example:
# log.file =
# log.file = /var/log/redis-shake.log
log.file =
# log level: "none", "error", "warn", "info", "debug".
# default is "info".
log.level = info
# Directory for the pid file; leave blank to use the current directory.
# Note this is a directory: the actual pid file is {pid_path}/{id}.pid.
# for example:
# pid_path = ./
# pid_path = /var/run/
pid_path =
# pprof port.
system_profile = 9310
# restful port; -1 means disabled. In `restore` mode RedisShake exits after
# restoring the RDB only if this value is -1; otherwise it waits forever.
# http://127.0.0.1:9320/conf   shows the configuration redis-shake is using
# http://127.0.0.1:9320/metric shows the synchronization status
http_profile = 9320
# number of parallel routines used in RDB file syncing. default is 64.
parallel = 32
# source redis configuration.
# used in `dump`, `sync` and `rump`.
# source redis type, e.g. "standalone" (default), "sentinel" or "cluster".
# 1. "standalone": standalone db mode.
# 2. "sentinel": the redis address is read from sentinel.
# 3. "cluster": the source redis has several db.
# 4. "proxy": the proxy address, currently, only used in "rump" mode.
source.type = standalone
# ip:port
# the source address can be the following:
# 1. single db address. for "standalone" type.
# 2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
# 3. cluster that has several db nodes split by semicolon(;). for "cluster" type. e.g., 10.1.1.1:20331;10.1.1.2:20441.
# 4. proxy address(used in "rump" mode only). for "proxy" type.
# For other cluster architectures such as codis, twemproxy or aliyun proxy,
# configure the db addresses of all masters or slaves, e.g.,
# source.address = 10.1.1.1:20331;10.1.1.2:20441
source.address = 127.0.0.1:20441
# source password; leave blank for no password.
source.password_raw =
# auth type, don't modify it
source.auth_type = auth
# tls enable, true or false. Currently, only support standalone.
# open source redis does NOT support tls so far, but some cloud versions do.
source.tls_enable = false
# Whether to verify the validity of the redis certificate, true means verification, false means no verification
source.tls_skip_verify = false
# input RDB file(s). used in `decode` and `restore`.
# If the input is a list separated by semicolons (e.g., rdb.0;rdb.1;rdb.2),
# redis-shake will restore them one by one.
source.rdb.input =
# the concurrency of RDB syncing; default is len(source.address) or
# len(source.rdb.input). used in `dump`, `sync` and `restore`. 0 means default.
# Useless when source.type isn't cluster or there is only one input RDB.
# For example, with 5 db nodes/input RDBs and rdb.parallel=3, only 3 full
# syncs run concurrently; the 4th RDB is pulled only after one of them
# finishes its RDB and enters the incremental stage, and so on. Eventually
# there are len(source.address) or len(rdb.input) incremental threads at once.
source.rdb.parallel = 0
# for the special cloud vendor ucloud. used in `decode` and `restore`.
# The ucloud cluster version adds a slot prefix to RDB entries that must be
# detected and stripped: set to ucloud_cluster.
source.rdb.special_cloud =
# target redis configuration. used in `restore`, `sync` and `rump`.
# the type of target redis can be "standalone", "sentinel", "proxy" or "cluster".
# 1. "standalone": standalone db mode.
# 2. "sentinel": the redis address is read from sentinel.
# 3. "cluster": open source cluster (not supported currently).
# 4. "proxy": proxy layer ahead of redis. Data is inserted in a round-robin way if more than 1 proxy is given.
target.type = standalone
# ip:port
# the target address can be the following:
# 1. single db address. for "standalone" type.
# 2. ${sentinel_master_name}:${master or slave}@sentinel single/cluster address, e.g., mymaster:master@127.0.0.1:26379;127.0.0.1:26380, or @127.0.0.1:26379;127.0.0.1:26380. for "sentinel" type.
# 3. cluster that has several db nodes split by semicolon(;). for "cluster" type.
# 4. proxy address. for "proxy" type.
target.address = 127.0.0.1:6379
# target password; leave blank for no password.
target.password_raw =
# auth type, don't modify it
target.auth_type = auth
# all the data will be written into this db. < 0 means disable.
target.db = -1
# Format: 0-5;1-3 means data from source db0 is written to target db5, and
# data from source db1 is written to target db3.
# Note: when target.db is specified, target.dbmap does not take effect.
target.dbmap =
# tls enable, true or false. Currently, only support standalone.
# open source redis does NOT support tls so far, but some cloud versions do.
target.tls_enable = false
# Whether to verify the validity of the redis certificate, true means verification, false means no verification
target.tls_skip_verify = false
# output RDB file prefix. used in `decode` and `dump`.
# e.g., with 3 source dbs, the dumps are ${output_rdb}.0, ${output_rdb}.1 and
# ${output_rdb}.2.
target.rdb.output = local_dump
# some redis proxy like twemproxy doesn't support to fetch version, so please set it here.
# e.g., target.version = 4.0
target.version =
# Used for expiring keys: when the source and target clocks differ, this time
# offset is applied on the target side.
fake_time =
# how to handle a key that already exists on the target:
# rewrite: the source overwrites the target.
# none: panic directly.
# ignore: keep the target key and skip syncing this one. not used in rump mode.
# used in `restore`, `sync` and `rump`.
key_exists = none
# filter db, key, slot, lua.
# filter db.
# used in `restore`, `sync` and `rump`.
# e.g., "0;5;10" means match db0, db5 and db10.
# at most one of `filter.db.whitelist` and `filter.db.blacklist` parameters can be given.
# if the filter.db.whitelist is not empty, the given db list will be passed while others filtered.
# if the filter.db.blacklist is not empty, the given db list will be filtered while others passed.
# all dbs will be passed if no condition given.
filter.db.whitelist =
filter.db.blacklist =
# filter key with prefix string. multiple keys are separated by ';'.
# e.g., "abc;bzz" match let "abc", "abc1", "abcxxx", "bzz" and "bzzwww".
# used in `restore`, `sync` and `rump`.
# at most one of `filter.key.whitelist` and `filter.key.blacklist` parameters can be given.
# if the filter.key.whitelist is not empty, the given keys will be passed while others filtered.
# if the filter.key.blacklist is not empty, the given keys will be filtered while others passed.
# all the namespace will be passed if no condition given.
filter.key.whitelist =
filter.key.blacklist =
# filter given slot, multiple slots are separated by ';'.
# e.g., 1;2;3
# used in `sync`.
filter.slot =
# filter given commands. multiple commands are separated by ';'.
# e.g., "flushall;flushdb".
# used in `sync`.
# at most one of `filter.command.whitelist` and `filter.command.blacklist` parameters can be given.
# if the filter.command.whitelist is not empty, the given commands will be passed while others filtered.
# if the filter.command.blacklist is not empty, the given commands will be filtered.
# besides, the other config caused filters also effect as usual, e.g. filter.lua = true would filter lua commands.
# all the commands, except the other config caused filtered commands, will be passed if no condition given.
filter.command.whitelist =
filter.command.blacklist =
# filter lua scripts. true means they are not passed through. However, in
# redis 5.0, lua is converted to a transaction (multi + {commands} + exec),
# which is passed through.
filter.lua = false
# big key threshold; default is 500 * 1024 * 1024 bytes. If a value is bigger
# than this, its fields are split and written to the target in order. If the
# target redis type is Codis, set this to 1 (see the FAQ for the reason).
# Setting it to 1 is also recommended when the target's major version is
# lower than the source's.
big_key_threshold = 524288000
# enable metric, used in `sync`.
metric = true
# print the metric in the log
metric.print_log = false
# sender information.
# sender flush buffer size in bytes, used in `sync`; the buffer is flushed
# when it exceeds this threshold.
sender.size = 104857600
# sender flush buffer size in oplog count, used in `sync`; the buffer is
# flushed when it exceeds this threshold. When the target is a cluster,
# increasing this value consumes more memory.
sender.count = 4095
# delay channel size. Once an oplog is sent to the target redis, its id and
# timestamp are stored in this delay queue; the timestamp is used to compute
# the delay when the ack is received from the target.
# used in `sync`.
sender.delay_channel_size = 65535
# enable the TCP keep_alive option when connecting to redis.
# the unit is second.
# 0 means disabled.
keep_alive = 0
# used in `rump`.
# number of keys captured by each scan. default is 100.
scan.key_number = 50
# used in `rump`.
# Some cloud variants don't use the default `scan` command; we adapt them
# specially. Currently "tencent_cluster" (Tencent Cloud) and "aliyun_cluster"
# (Alibaba Cloud) are supported; only cluster versions need this,
# master-slave versions do not.
scan.special_cloud =
# used in `rump`.
# fetch data for the keys listed in the given file (one key per line), for
# cloud versions that support neither sync/psync nor scan.
scan.key_file =
# limit the rate of transmission. Only used in `rump` currently.
# e.g., qps = 1000 means pass 1000 keys per second. default is 500,000 (0 means default)
qps = 200000
# enable resume from break point, please visit xxx to see more details.
resume_from_break_point = false
# ----------------splitter----------------
# the variables below are useless for the current open source version, so do not set them.
# replace hash tag.
# used in `sync`.
replace_hash_tag = false

27
filters/print.lua Normal file

@@ -0,0 +1,27 @@
--- function name must be `filter`
---
--- arguments:
--- @id number: the sequence of the cmd
--- @is_base boolean: whether the command is decoded from dump.rdb file
--- @group string: the group of cmd
--- @cmd_name string: cmd name
--- @keys table: keys of the command
--- @slots table: slots of the command
--- @db_id number: database id
--- @timestamp_ms number: timestamp in milliseconds, 0 if not available
--- return:
--- @code number:
--- * 0: allow
--- * 1: disallow
--- * 2: error occurred
--- @db_id number: redirection database id
function filter(id, is_base, group, cmd_name, keys, slots, db_id, timestamp_ms)
local keys_size = #keys
local slots_size = #slots -- slots_size should be equal to keys_size
print(string.format("lua filter. id=[%d], is_base=[%s], db_id=[%d], group=[%s], cmd_name=[%s], keys=[%s], slots=[%s], timestamp_ms=[%d]",
id, tostring(is_base), db_id, group, cmd_name, table.concat(keys, ", "), table.concat(slots, ", "), timestamp_ms))
return 0, db_id
end

14
filters/swap_db.lua Normal file

@@ -0,0 +1,14 @@
--- dbid: 0 -> 1
--- dbid: 1 -> 0
--- dbid: others -> drop
function filter(id, is_base, group, cmd_name, keys, slots, db_id, timestamp_ms)
if db_id == 0 then
-- print("db_id is 0, redirect to 1")
return 0, 1
elseif db_id == 1 then
-- print("db_id is 1, redirect to 0")
return 0, 0
else
return 1, db_id
end
end

11
go.mod Normal file

@@ -0,0 +1,11 @@
module github.com/alibaba/RedisShake
go 1.16
require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/pelletier/go-toml/v2 v2.0.0-beta.3
github.com/rs/zerolog v1.27.0
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e // indirect
)

33
go.sum Normal file

@@ -0,0 +1,33 @@
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/coreos/go-systemd/v22 v22.3.3-0.20220203105225-a9a7ef127534/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/mattn/go-colorable v0.1.12 h1:jF+Du6AlPIjs2BiUiQlKOX0rt3SujHxPnksPKZbaA40=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/pelletier/go-toml/v2 v2.0.0-beta.3 h1:PNCTU4naEJ8mKal97P3A2qDU74QRQGlv4FXiL1XDqi4=
github.com/pelletier/go-toml/v2 v2.0.0-beta.3/go.mod h1:aNseLYu/uKskg0zpr/kbr2z8yGuWtotWf/0BpGIAL2Y=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rs/xid v1.3.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.27.0 h1:1T7qCieN22GVc8S4Q2yuexzBb1EqjbgjSH9RohbMjKs=
github.com/rs/zerolog v1.27.0/go.mod h1:7frBqO0oezxmnO7GF86FY++uy8I0Tk/If5ni1G9Qc0U=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.7.1-0.20210427113832-6241f9ab9942 h1:t0lM6y/M5IiUZyvbBTcngso8SZEZICH7is9B6g/obVU=
github.com/stretchr/testify v1.7.1-0.20210427113832-6241f9ab9942/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64 h1:5mLPGnFdSsevFRFc9q3yYbBkB6tsm4aCwwQV/j1JQAQ=
github.com/yuin/gopher-lua v0.0.0-20220504180219-658193537a64/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
golang.org/x/sys v0.0.0-20190204203706-41f3e6584952/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e h1:fLOSk5Q00efkSvAm+4xcoXD+RRmLmmulPn5I3Y9F2EM=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

34
internal/client/func.go Normal file

@@ -0,0 +1,34 @@
package client
import (
"bytes"
"github.com/alibaba/RedisShake/internal/client/proto"
"github.com/alibaba/RedisShake/internal/log"
)
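// ArrayString converts a generic array reply into a []string, panicking on error.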
func ArrayString(replyInterface interface{}, err error) []string {
if err != nil {
log.PanicError(err)
}
replyArray := replyInterface.([]interface{})
replyArrayString := make([]string, len(replyArray))
for inx, item := range replyArray {
replyArrayString[inx] = item.(string)
}
return replyArrayString
}
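// EncodeArgv encodes argv as a RESP array of bulk strings, ready to send to redis.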
func EncodeArgv(argv []string) *bytes.Buffer {
buf := new(bytes.Buffer)
writer := proto.NewWriter(buf)
argvInterface := make([]interface{}, len(argv))
for inx, item := range argv {
argvInterface[inx] = item
}
err := writer.WriteArgs(argvInterface)
if err != nil {
log.PanicError(err)
}
return buf
}
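A hedged usage sketch (runnable only from inside this module, since the package lives under `internal/`; the test-package form and example name are assumptions):

```go
package client_test

import (
	"fmt"

	"github.com/alibaba/RedisShake/internal/client"
)

func ExampleEncodeArgv() {
	// Encode the command SET key value into RESP wire bytes.
	buf := client.EncodeArgv([]string{"SET", "key", "value"})
	fmt.Printf("%q\n", buf.String())
	// Output: "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n"
}
```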


@@ -0,0 +1 @@
Ported from https://github.com/go-redis/redis/tree/master/internal/proto


@@ -0,0 +1,521 @@
package proto
import (
"bufio"
"errors"
"fmt"
"io"
"math"
"math/big"
"strconv"
)
// redis resp protocol data type.
const (
RespStatus = '+' // +<string>\r\n
RespError = '-' // -<string>\r\n
RespString = '$' // $<length>\r\n<bytes>\r\n
RespInt = ':' // :<number>\r\n
RespNil = '_' // _\r\n
RespFloat = ',' // ,<floating-point-number>\r\n (golang float)
RespBool = '#' // true: #t\r\n false: #f\r\n
RespBlobError = '!' // !<length>\r\n<bytes>\r\n
RespVerbatim = '=' // =<length>\r\nFORMAT:<bytes>\r\n
RespBigInt = '(' // (<big number>\r\n
RespArray = '*' // *<len>\r\n... (same as resp2)
RespMap = '%' // %<len>\r\n(key)\r\n(value)\r\n... (golang map)
RespSet = '~' // ~<len>\r\n... (same as Array)
RespAttr = '|' // |<len>\r\n(key)\r\n(value)\r\n... + command reply
RespPush = '>' // ><len>\r\n... (same as Array)
)
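// For example, the command SET key value travels as an array of bulk
// strings: "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n".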
// Not used for now: Redis has not used these two data types yet; support
// will be added later.
// Streamed = "EOF:"
// StreamedAggregated = '?'
//------------------------------------------------------------------------------
const Nil = RedisError("redis: nil")
type RedisError string
func (e RedisError) Error() string { return string(e) }
func (RedisError) RedisError() {}
func ParseErrorReply(line []byte) error {
return RedisError(line[1:])
}
//------------------------------------------------------------------------------
type Reader struct {
rd *bufio.Reader
}
func NewReader(rd *bufio.Reader) *Reader {
return &Reader{
rd: rd,
}
}
func (r *Reader) Buffered() int {
return r.rd.Buffered()
}
func (r *Reader) Peek(n int) ([]byte, error) {
return r.rd.Peek(n)
}
func (r *Reader) Reset(rd io.Reader) {
r.rd.Reset(rd)
}
// PeekReplyType returns the data type of the next response without advancing
// the Reader, discarding any attribute type.
func (r *Reader) PeekReplyType() (byte, error) {
b, err := r.rd.Peek(1)
if err != nil {
return 0, err
}
if b[0] == RespAttr {
if err = r.DiscardNext(); err != nil {
return 0, err
}
return r.PeekReplyType()
}
return b[0], nil
}
// ReadLine returns a valid reply line, checking for protocol and redis
// errors and discarding any attribute type.
func (r *Reader) ReadLine() ([]byte, error) {
line, err := r.readLine()
if err != nil {
return nil, err
}
switch line[0] {
case RespError:
return nil, ParseErrorReply(line)
case RespNil:
return nil, Nil
case RespBlobError:
var blobErr string
blobErr, err = r.readStringReply(line)
if err == nil {
err = RedisError(blobErr)
}
return nil, err
case RespAttr:
if err = r.Discard(line); err != nil {
return nil, err
}
return r.ReadLine()
}
// Compatible with RESP2
if IsNilReply(line) {
return nil, Nil
}
return line, nil
}
// readLine returns an error if:
// - there is a pending read error;
// - or line does not end with \r\n.
func (r *Reader) readLine() ([]byte, error) {
b, err := r.rd.ReadSlice('\n')
if err != nil {
if err != bufio.ErrBufferFull {
return nil, err
}
full := make([]byte, len(b))
copy(full, b)
b, err = r.rd.ReadBytes('\n')
if err != nil {
return nil, err
}
full = append(full, b...)
b = full
}
if len(b) <= 2 || b[len(b)-1] != '\n' || b[len(b)-2] != '\r' {
return nil, fmt.Errorf("redis: invalid reply: %q", b)
}
return b[:len(b)-2], nil
}
func (r *Reader) ReadReply() (interface{}, error) {
line, err := r.ReadLine()
if err != nil {
return nil, err
}
switch line[0] {
case RespStatus:
return string(line[1:]), nil
case RespInt:
return parseInt(line[1:], 10, 64)
case RespFloat:
return r.readFloat(line)
case RespBool:
return r.readBool(line)
case RespBigInt:
return r.readBigInt(line)
case RespString:
return r.readStringReply(line)
case RespVerbatim:
return r.readVerb(line)
case RespArray, RespSet, RespPush:
return r.readSlice(line)
case RespMap:
return r.readMap(line)
}
return nil, fmt.Errorf("redis: can't parse %.100q", line)
}
func (r *Reader) readFloat(line []byte) (float64, error) {
v := string(line[1:])
switch string(line[1:]) {
case "inf":
return math.Inf(1), nil
case "-inf":
return math.Inf(-1), nil
}
return strconv.ParseFloat(v, 64)
}
func (r *Reader) readBool(line []byte) (bool, error) {
switch string(line[1:]) {
case "t":
return true, nil
case "f":
return false, nil
}
return false, fmt.Errorf("redis: can't parse bool reply: %q", line)
}
func (r *Reader) readBigInt(line []byte) (*big.Int, error) {
i := new(big.Int)
if i, ok := i.SetString(string(line[1:]), 10); ok {
return i, nil
}
return nil, fmt.Errorf("redis: can't parse bigInt reply: %q", line)
}
func (r *Reader) readStringReply(line []byte) (string, error) {
n, err := replyLen(line)
if err != nil {
return "", err
}
b := make([]byte, n+2)
_, err = io.ReadFull(r.rd, b)
if err != nil {
return "", err
}
return bytesToString(b[:n]), nil
}
func (r *Reader) readVerb(line []byte) (string, error) {
s, err := r.readStringReply(line)
if err != nil {
return "", err
}
if len(s) < 4 || s[3] != ':' {
return "", fmt.Errorf("redis: can't parse verbatim string reply: %q", line)
}
return s[4:], nil
}
func (r *Reader) readSlice(line []byte) ([]interface{}, error) {
n, err := replyLen(line)
if err != nil {
return nil, err
}
val := make([]interface{}, n)
for i := 0; i < len(val); i++ {
v, err := r.ReadReply()
if err != nil {
if err == Nil {
val[i] = nil
continue
}
if err, ok := err.(RedisError); ok {
val[i] = err
continue
}
return nil, err
}
val[i] = v
}
return val, nil
}
func (r *Reader) readMap(line []byte) (map[interface{}]interface{}, error) {
n, err := replyLen(line)
if err != nil {
return nil, err
}
m := make(map[interface{}]interface{}, n)
for i := 0; i < n; i++ {
k, err := r.ReadReply()
if err != nil {
return nil, err
}
v, err := r.ReadReply()
if err != nil {
if err == Nil {
m[k] = nil
continue
}
if err, ok := err.(RedisError); ok {
m[k] = err
continue
}
return nil, err
}
m[k] = v
}
return m, nil
}
// -------------------------------
func (r *Reader) ReadInt() (int64, error) {
line, err := r.ReadLine()
if err != nil {
return 0, err
}
switch line[0] {
case RespInt, RespStatus:
return parseInt(line[1:], 10, 64)
case RespString:
s, err := r.readStringReply(line)
if err != nil {
return 0, err
}
return parseInt([]byte(s), 10, 64)
case RespBigInt:
b, err := r.readBigInt(line)
if err != nil {
return 0, err
}
if !b.IsInt64() {
return 0, fmt.Errorf("bigInt(%s) value out of range", b.String())
}
return b.Int64(), nil
}
return 0, fmt.Errorf("redis: can't parse int reply: %.100q", line)
}
func (r *Reader) ReadFloat() (float64, error) {
line, err := r.ReadLine()
if err != nil {
return 0, err
}
switch line[0] {
case RespFloat:
return r.readFloat(line)
case RespStatus:
return strconv.ParseFloat(string(line[1:]), 64)
case RespString:
s, err := r.readStringReply(line)
if err != nil {
return 0, err
}
return strconv.ParseFloat(s, 64)
}
return 0, fmt.Errorf("redis: can't parse float reply: %.100q", line)
}
func (r *Reader) ReadString() (string, error) {
line, err := r.ReadLine()
if err != nil {
return "", err
}
switch line[0] {
case RespStatus, RespInt, RespFloat:
return string(line[1:]), nil
case RespString:
return r.readStringReply(line)
case RespBool:
b, err := r.readBool(line)
return strconv.FormatBool(b), err
case RespVerbatim:
return r.readVerb(line)
case RespBigInt:
b, err := r.readBigInt(line)
if err != nil {
return "", err
}
return b.String(), nil
}
return "", fmt.Errorf("redis: can't parse reply=%.100q reading string", line)
}
func (r *Reader) ReadBool() (bool, error) {
s, err := r.ReadString()
if err != nil {
return false, err
}
return s == "OK" || s == "1" || s == "true", nil
}
func (r *Reader) ReadSlice() ([]interface{}, error) {
line, err := r.ReadLine()
if err != nil {
return nil, err
}
return r.readSlice(line)
}
// ReadFixedArrayLen reads the array length and checks that it equals fixedLen.
func (r *Reader) ReadFixedArrayLen(fixedLen int) error {
n, err := r.ReadArrayLen()
if err != nil {
return err
}
if n != fixedLen {
return fmt.Errorf("redis: got %d elements in the array, wanted %d", n, fixedLen)
}
return nil
}
// ReadArrayLen reads and returns the length of the array.
func (r *Reader) ReadArrayLen() (int, error) {
line, err := r.ReadLine()
if err != nil {
return 0, err
}
switch line[0] {
case RespArray, RespSet, RespPush:
return replyLen(line)
default:
return 0, fmt.Errorf("redis: can't parse array/set/push reply: %.100q", line)
}
}
// ReadFixedMapLen reads the map length and checks that it equals fixedLen.
func (r *Reader) ReadFixedMapLen(fixedLen int) error {
n, err := r.ReadMapLen()
if err != nil {
return err
}
if n != fixedLen {
return fmt.Errorf("redis: got %d elements in the map, wanted %d", n, fixedLen)
}
return nil
}
// ReadMapLen reads the length of the map type.
// If responding to the array type (RespArray/RespSet/RespPush),
// it must be a multiple of 2 and return n/2.
// Other types will return an error.
func (r *Reader) ReadMapLen() (int, error) {
line, err := r.ReadLine()
if err != nil {
return 0, err
}
switch line[0] {
case RespMap:
return replyLen(line)
case RespArray, RespSet, RespPush:
// Some commands and RESP2 protocol may respond to array types.
n, err := replyLen(line)
if err != nil {
return 0, err
}
if n%2 != 0 {
return 0, fmt.Errorf("redis: the length of the array must be a multiple of 2, got: %d", n)
}
return n / 2, nil
default:
return 0, fmt.Errorf("redis: can't parse map reply: %.100q", line)
}
}
// DiscardNext reads and discards the data represented by the next line.
func (r *Reader) DiscardNext() error {
line, err := r.readLine()
if err != nil {
return err
}
return r.Discard(line)
}
// Discard discards the data represented by line.
func (r *Reader) Discard(line []byte) (err error) {
if len(line) == 0 {
return errors.New("redis: invalid line")
}
switch line[0] {
case RespStatus, RespError, RespInt, RespNil, RespFloat, RespBool, RespBigInt:
return nil
}
n, err := replyLen(line)
if err != nil && err != Nil {
return err
}
switch line[0] {
case RespBlobError, RespString, RespVerbatim:
// +\r\n
_, err = r.rd.Discard(n + 2)
return err
case RespArray, RespSet, RespPush:
for i := 0; i < n; i++ {
if err = r.DiscardNext(); err != nil {
return err
}
}
return nil
case RespMap, RespAttr:
// Read key & value.
for i := 0; i < n*2; i++ {
if err = r.DiscardNext(); err != nil {
return err
}
}
return nil
}
return fmt.Errorf("redis: can't parse %.100q", line)
}
func replyLen(line []byte) (n int, err error) {
n, err = atoi(line[1:])
if err != nil {
return 0, err
}
if n < -1 {
return 0, fmt.Errorf("redis: invalid reply: %q", line)
}
switch line[0] {
case RespString, RespVerbatim, RespBlobError,
RespArray, RespSet, RespPush, RespMap, RespAttr:
if n == -1 {
return 0, Nil
}
}
return n, nil
}
// IsNilReply detects redis.Nil of RESP2.
func IsNilReply(line []byte) bool {
return len(line) == 3 &&
(line[0] == RespString || line[0] == RespArray) &&
line[1] == '-' && line[2] == '1'
}
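To make the reply parsing concrete, a hedged sketch driving the Reader by hand (runnable only from inside the module, as the package is internal; the RESP2 array reply bytes are spelled out inline):

```go
package proto_test

import (
	"bufio"
	"fmt"
	"strings"

	"github.com/alibaba/RedisShake/internal/client/proto"
)

func ExampleReader_ReadReply() {
	// "*2" announces a 2-element array; each "$5" a 5-byte bulk string.
	raw := "*2\r\n$5\r\nhello\r\n$5\r\nworld\r\n"
	rd := proto.NewReader(bufio.NewReader(strings.NewReader(raw)))
	reply, err := rd.ReadReply()
	if err != nil {
		panic(err)
	}
	fmt.Println(reply)
	// Output: [hello world]
}
```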


@@ -0,0 +1,27 @@
package proto
import "strconv"
func bytesToString(b []byte) string {
return string(b)
}
func stringToBytes(s string) []byte {
return []byte(s)
}
func atoi(b []byte) (int, error) {
return strconv.Atoi(bytesToString(b))
}
func parseInt(b []byte, base int, bitSize int) (int64, error) {
return strconv.ParseInt(bytesToString(b), base, bitSize)
}
func parseUint(b []byte, base int, bitSize int) (uint64, error) {
return strconv.ParseUint(bytesToString(b), base, bitSize)
}
func parseFloat(b []byte, bitSize int) (float64, error) {
return strconv.ParseFloat(bytesToString(b), bitSize)
}


@@ -0,0 +1,156 @@
package proto
import (
"encoding"
"fmt"
"io"
"net"
"strconv"
"time"
)
type writer interface {
io.Writer
io.ByteWriter
// WriteString implement io.StringWriter.
WriteString(s string) (n int, err error)
}
type Writer struct {
writer
lenBuf []byte
numBuf []byte
}
func NewWriter(wr writer) *Writer {
return &Writer{
writer: wr,
lenBuf: make([]byte, 64),
numBuf: make([]byte, 64),
}
}
func (w *Writer) WriteArgs(args []interface{}) error {
if err := w.WriteByte(RespArray); err != nil {
return err
}
if err := w.writeLen(len(args)); err != nil {
return err
}
for _, arg := range args {
if err := w.WriteArg(arg); err != nil {
return err
}
}
return nil
}
func (w *Writer) writeLen(n int) error {
w.lenBuf = strconv.AppendUint(w.lenBuf[:0], uint64(n), 10)
w.lenBuf = append(w.lenBuf, '\r', '\n')
_, err := w.Write(w.lenBuf)
return err
}
func (w *Writer) WriteArg(v interface{}) error {
switch v := v.(type) {
case nil:
return w.string("")
case string:
return w.string(v)
case []byte:
return w.bytes(v)
case int:
return w.int(int64(v))
case int8:
return w.int(int64(v))
case int16:
return w.int(int64(v))
case int32:
return w.int(int64(v))
case int64:
return w.int(v)
case uint:
return w.uint(uint64(v))
case uint8:
return w.uint(uint64(v))
case uint16:
return w.uint(uint64(v))
case uint32:
return w.uint(uint64(v))
case uint64:
return w.uint(v)
case float32:
return w.float(float64(v))
case float64:
return w.float(v)
case bool:
if v {
return w.int(1)
}
return w.int(0)
case time.Time:
w.numBuf = v.AppendFormat(w.numBuf[:0], time.RFC3339Nano)
return w.bytes(w.numBuf)
case time.Duration:
return w.int(v.Nanoseconds())
case encoding.BinaryMarshaler:
b, err := v.MarshalBinary()
if err != nil {
return err
}
return w.bytes(b)
case net.IP:
return w.bytes(v)
default:
return fmt.Errorf(
"redis: can't marshal %T (implement encoding.BinaryMarshaler)", v)
}
}
func (w *Writer) bytes(b []byte) error {
if err := w.WriteByte(RespString); err != nil {
return err
}
if err := w.writeLen(len(b)); err != nil {
return err
}
if _, err := w.Write(b); err != nil {
return err
}
return w.crlf()
}
func (w *Writer) string(s string) error {
return w.bytes(stringToBytes(s))
}
func (w *Writer) uint(n uint64) error {
w.numBuf = strconv.AppendUint(w.numBuf[:0], n, 10)
return w.bytes(w.numBuf)
}
func (w *Writer) int(n int64) error {
w.numBuf = strconv.AppendInt(w.numBuf[:0], n, 10)
return w.bytes(w.numBuf)
}
func (w *Writer) float(f float64) error {
w.numBuf = strconv.AppendFloat(w.numBuf[:0], f, 'f', -1, 64)
return w.bytes(w.numBuf)
}
func (w *Writer) crlf() error {
if err := w.WriteByte('\r'); err != nil {
return err
}
return w.WriteByte('\n')
}
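A hedged sketch of the Writer (same internal-package caveat; `bytes.Buffer` satisfies the unexported `writer` interface above, and scalars such as ints are encoded as bulk strings):

```go
package proto_test

import (
	"bytes"
	"fmt"

	"github.com/alibaba/RedisShake/internal/client/proto"
)

func ExampleWriter_WriteArgs() {
	buf := new(bytes.Buffer)
	w := proto.NewWriter(buf)
	// EXPIRE key 60: the integer 60 is rendered as the bulk string "60".
	if err := w.WriteArgs([]interface{}{"EXPIRE", "key", 60}); err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", buf.String())
	// Output: "*3\r\n$6\r\nEXPIRE\r\n$3\r\nkey\r\n$2\r\n60\r\n"
}
```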

109
internal/client/redis.go Normal file

@@ -0,0 +1,109 @@
package client
import (
"bufio"
"crypto/tls"
"github.com/alibaba/RedisShake/internal/client/proto"
"github.com/alibaba/RedisShake/internal/log"
"net"
"time"
)
type Redis struct {
reader *bufio.Reader
writer *bufio.Writer
protoReader *proto.Reader
protoWriter *proto.Writer
}
func NewRedisClient(address string, password string, isTls bool) *Redis {
r := new(Redis)
var conn net.Conn
var dialer net.Dialer
var err error
dialer.Timeout = 3 * time.Second
if isTls {
conn, err = tls.DialWithDialer(&dialer, "tcp", address, &tls.Config{InsecureSkipVerify: true})
} else {
conn, err = dialer.Dial("tcp", address)
}
if err != nil {
log.PanicError(err)
}
r.reader = bufio.NewReader(conn)
r.writer = bufio.NewWriter(conn)
r.protoReader = proto.NewReader(r.reader)
r.protoWriter = proto.NewWriter(r.writer)
// auth
if password != "" {
reply := r.DoWithStringReply("auth", password)
if reply != "OK" {
log.Panicf("auth failed with reply: %s", reply)
}
log.Infof("auth successful. address=[%s]", address)
} else {
log.Infof("no password. address=[%s]", address)
}
// ping to test connection
reply := r.DoWithStringReply("ping")
if reply != "PONG" {
panic("ping failed with reply: " + reply)
}
return r
}
func (r *Redis) DoWithStringReply(args ...string) string {
r.Send(args...)
replyInterface, err := r.Receive()
if err != nil {
log.PanicError(err)
}
reply := replyInterface.(string)
return reply
}
func (r *Redis) Send(args ...string) {
argsInterface := make([]interface{}, len(args))
for inx, item := range args {
argsInterface[inx] = item
}
err := r.protoWriter.WriteArgs(argsInterface)
if err != nil {
log.PanicError(err)
}
r.flush()
}
func (r *Redis) SendBytes(buf []byte) {
_, err := r.writer.Write(buf)
if err != nil {
log.PanicError(err)
}
r.flush()
}
func (r *Redis) flush() {
err := r.writer.Flush()
if err != nil {
log.PanicError(err)
}
}
func (r *Redis) Receive() (interface{}, error) {
return r.protoReader.ReadReply()
}
func (r *Redis) BufioReader() *bufio.Reader {
return r.reader
}
func (r *Redis) SetBufioReader(rd *bufio.Reader) {
r.reader = rd
r.protoReader = proto.NewReader(r.reader)
}
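A hedged sketch of wiring up a client the way main.go does (assumes a valid redis-shake.toml on disk for config/log setup and a redis listening locally; all identifiers come from this commit):

```go
package main

import (
	"fmt"

	"github.com/alibaba/RedisShake/internal/client"
	"github.com/alibaba/RedisShake/internal/config"
	"github.com/alibaba/RedisShake/internal/log"
)

func main() {
	config.LoadFromFile("redis-shake.toml") // log settings come from the config
	log.Init()
	// no password, no TLS
	r := client.NewRedisClient("127.0.0.1:6379", "", false)
	fmt.Println(r.DoWithStringReply("set", "key", "value")) // OK
}
```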

112
internal/commands/keys.go Normal file

@@ -0,0 +1,112 @@
package commands
import (
"fmt"
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/utils"
"math"
"strconv"
"strings"
)
// CalcKeys https://redis.io/docs/reference/key-specs/
func CalcKeys(argv []string) (cmdName string, group string, keys []string) {
	argc := len(argv)
	group = "unknown"
	cmdName = strings.ToUpper(argv[0])
	if _, ok := containers[cmdName]; ok {
		cmdName = fmt.Sprintf("%s-%s", cmdName, strings.ToUpper(argv[1]))
	}
	cmd, ok := redisCommands[cmdName]
	if !ok {
		log.Warnf("unknown command. argv=%v", argv)
		return
	}
group = cmd.group
for _, spec := range cmd.keySpec {
begin := 0
switch spec.beginSearchType {
case "index":
begin = spec.beginSearchIndex
case "keyword":
var inx, step int
if spec.beginSearchStartFrom > 0 {
inx = spec.beginSearchStartFrom
step = 1
} else {
inx = -spec.beginSearchStartFrom
step = -1
}
for ; ; inx += step {
			if inx < 0 || inx >= argc {
log.Panicf("not found keyword. argv=%v", argv)
}
if strings.ToUpper(argv[inx]) == spec.beginSearchKeyword {
begin = inx + 1
break
}
}
default:
log.Panicf("wrong type: %s", spec.beginSearchType)
}
switch spec.findKeysType {
case "range":
var lastKeyInx int
if spec.findKeysRangeLastKey >= 0 {
lastKeyInx = begin + spec.findKeysRangeLastKey
} else {
lastKeyInx = argc + spec.findKeysRangeLastKey
}
limitCount := math.MaxInt32
if spec.findKeysRangeLimit <= -2 {
limitCount = (argc - begin) / (-spec.findKeysRangeLimit)
}
keyStep := spec.findKeysRangeKeyStep
for inx := begin; inx <= lastKeyInx && limitCount > 0; inx += keyStep {
keys = append(keys, argv[inx])
limitCount -= 1
}
case "keynum":
keynumIdx := begin + spec.findKeysKeynumIndex
			if keynumIdx < 0 || keynumIdx >= argc {
log.Panicf("keynumInx wrong. argv=%v, keynumIdx=[%d]", argv, keynumIdx)
}
keyCount, err := strconv.Atoi(argv[keynumIdx])
if err != nil {
log.PanicError(err)
}
firstKey := spec.findKeysKeynumFirstKey
step := spec.findKeysKeynumKeyStep
for inx := begin + firstKey; keyCount > 0; inx += step {
keys = append(keys, argv[inx])
keyCount -= 1
}
default:
log.Panicf("wrong type: %s", spec.findKeysType)
}
}
return
}
func CalcSlots(keys []string) []int {
slots := make([]int, len(keys))
for inx, key := range keys {
hashtag := ""
findHashTag:
for i, s := range key {
if s == '{' {
for k := i; k < len(key); k++ {
if key[k] == '}' {
hashtag = key[i+1 : k]
break findHashTag
}
}
}
}
if len(hashtag) > 0 {
key = hashtag
}
slots[inx] = int(utils.Crc16(key) & 0x3fff)
}
return slots
}

View File

@ -0,0 +1,44 @@
package commands
import (
"testing"
)
func testEq(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
func TestCalcKeys(t *testing.T) {
// SET
cmd, group, keys := CalcKeys([]string{"SET", "key", "value"})
if cmd != "SET" || group != "STRING" || !testEq(keys, []string{"key"}) {
t.Errorf("CalcKeys(SET key value) failed. cmd=%s, group=%s, keys=%v", cmd, group, keys)
}
// MSET
cmd, group, keys = CalcKeys([]string{"MSET", "key1", "value1", "key2", "value2"})
if cmd != "MSET" || group != "STRING" || !testEq(keys, []string{"key1", "key2"}) {
t.Errorf("CalcKeys(MSET key1 value1 key2 value2) failed. cmd=%s, group=%s, keys=%v", cmd, group, keys)
}
// XADD
cmd, group, keys = CalcKeys([]string{"XADD", "key", "*", "field1", "value1", "field2", "value2"})
if cmd != "XADD" || group != "STREAM" || !testEq(keys, []string{"key"}) {
t.Errorf("CalcKeys(XADD key * field1 value1 field2 value2) failed. cmd=%s, group=%s, keys=%v", cmd, group, keys)
}
// ZUNIONSTORE
cmd, group, keys = CalcKeys([]string{"ZUNIONSTORE", "key", "2", "key1", "key2"})
if cmd != "ZUNIONSTORE" || group != "SORTED_SET" || !testEq(keys, []string{"key", "key1", "key2"}) {
t.Errorf("CalcKeys(ZUNIONSTORE key 2 key1 key2) failed. cmd=%s, group=%s, keys=%v", cmd, group, keys)
}
}
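
CalcSlots has no coverage above; a test-style sketch that checks the hashtag rule without pinning exact slot numbers (key names are illustrative):

func TestCalcSlots(t *testing.T) {
	// keys sharing a hashtag must hash to the same slot as the tag itself
	slots := CalcSlots([]string{"{user}.profile", "{user}.orders", "user"})
	if slots[0] != slots[1] || slots[1] != slots[2] {
		t.Errorf("hashtag keys landed in different slots: %v", slots)
	}
	// every slot must fall inside the 16384-slot cluster range
	for _, s := range CalcSlots([]string{"a", "b", "c"}) {
		if s < 0 || s >= 16384 {
			t.Errorf("slot out of range: %d", s)
		}
	}
}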

2018
internal/commands/table.go Normal file

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,27 @@
package commands
type keySpec struct {
// begin_search
beginSearchType string
// @index
beginSearchIndex int
// @keyword
beginSearchKeyword string
beginSearchStartFrom int
// find_keys
findKeysType string
// @range
findKeysRangeLastKey int
findKeysRangeKeyStep int
findKeysRangeLimit int
// @keynum
findKeysKeynumIndex int
findKeysKeynumFirstKey int
findKeysKeynumKeyStep int
}
type redisCommand struct {
group string
keySpec []keySpec
}

87
internal/config/config.go Normal file
View File

@ -0,0 +1,87 @@
package config
import (
"bytes"
"fmt"
"github.com/pelletier/go-toml/v2"
"io/ioutil"
"os"
"runtime"
)
type tomlSource struct {
Address string `toml:"address"`
Password string `toml:"password"`
IsTLS bool `toml:"tls"`
}
type tomlTarget struct {
Type string `toml:"type"`
Addresses []string `toml:"addresses"`
Password string `toml:"password"`
IsTLS bool `toml:"tls"`
}
type tomlAdvanced struct {
Dir string `toml:"dir"`
Ncpu int `toml:"ncpu"`
// log
LogFile string `toml:"log_file"`
LogLevel string `toml:"log_level"`
LogInterval int `toml:"log_interval"`
// rdb restore
RDBRestoreCommandBehavior string `toml:"rdb_restore_command_behavior"`
// for writer
PipelineCountLimit uint64 `toml:"pipeline_count_limit"`
TargetRedisClientMaxQuerybufLen uint64 `toml:"target_redis_client_max_querybuf_len"`
TargetRedisProtoMaxBulkLen uint64 `toml:"target_redis_proto_max_bulk_len"`
}
type tomlShakeConfig struct {
Source tomlSource
Target tomlTarget
Advanced tomlAdvanced
}
var Config tomlShakeConfig
func LoadFromFile(filename string) {
buf, err := ioutil.ReadFile(filename)
if err != nil {
panic(err.Error())
}
decoder := toml.NewDecoder(bytes.NewReader(buf))
decoder.SetStrict(true)
err = decoder.Decode(&Config)
if err != nil {
missingError, ok := err.(*toml.StrictMissingError)
if ok {
panic(fmt.Sprintf("decode config error:\n%s", missingError.String()))
}
panic(err.Error())
}
// dir
err = os.MkdirAll(Config.Advanced.Dir, os.ModePerm)
if err != nil {
panic(err.Error())
}
err = os.Chdir(Config.Advanced.Dir)
if err != nil {
panic(err.Error())
}
// cpu core
var ncpu int
if Config.Advanced.Ncpu == 0 {
ncpu = runtime.NumCPU()
} else {
ncpu = Config.Advanced.Ncpu
}
runtime.GOMAXPROCS(ncpu)
}
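
A sketch of the smallest config this loader accepts, generated and loaded from Go. The section and key names mirror the toml tags above; every concrete value is a placeholder, and SetStrict(true) means a misspelled key is rejected:

package main
import (
	"fmt"
	"io/ioutil"
	"github.com/alibaba/RedisShake/internal/config"
)
func main() {
	conf := `
[source]
address = "127.0.0.1:6379"
password = ""
tls = false
[target]
type = "standalone"
addresses = ["127.0.0.1:6380"]
password = ""
tls = false
[advanced]
dir = "data"
ncpu = 0
log_file = "shake.log"
log_level = "info"
log_interval = 5
rdb_restore_command_behavior = "rewrite"
pipeline_count_limit = 1024
target_redis_client_max_querybuf_len = 1073741824
target_redis_proto_max_bulk_len = 536870912
`
	if err := ioutil.WriteFile("redis-shake.toml", []byte(conf), 0644); err != nil {
		panic(err)
	}
	config.LoadFromFile("redis-shake.toml") // also chdirs into advanced.dir
	fmt.Println(config.Config.Source.Address)
}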

42
internal/entry/entry.go Normal file
View File

@ -0,0 +1,42 @@
package entry
import "fmt"
type Entry struct {
Id uint64
IsBase bool // whether the command is decoded from dump.rdb file
DbId int
Argv []string
TimestampMs uint64
CmdName string
Group string
Keys []string
Slots []int
// for statistics
Offset int64
EncodedSize uint64 // the size of the entry after encode
}
func NewEntry() *Entry {
e := Entry{}
e.Argv = make([]string, 0)
e.Keys = make([]string, 0)
e.Slots = make([]int, 0)
e.DbId = 0
e.TimestampMs = 0
return &e
}
func (e *Entry) NextEntry() *Entry {
newE := NewEntry()
newE.Id = e.Id + 1
newE.DbId = e.DbId
newE.TimestampMs = 0
return newE
}
func (e *Entry) ToString() string {
return fmt.Sprintf("%v", e.Argv)
}

55
internal/filter/filter.go Normal file
View File

@ -0,0 +1,55 @@
package filter
import (
"github.com/alibaba/RedisShake/internal/entry"
lua "github.com/yuin/gopher-lua"
)
const (
Allow = 0
Disallow = 1
Error = 2
)
var luaInstance *lua.LState
func LoadFromFile(luaFile string) {
luaInstance = lua.NewState()
err := luaInstance.DoFile(luaFile)
if err != nil {
panic(err)
}
}
func Filter(e *entry.Entry) int {
if luaInstance == nil {
return Allow
}
keys := luaInstance.NewTable()
for _, key := range e.Keys {
keys.Append(lua.LString(key))
}
slots := luaInstance.NewTable()
for _, slot := range e.Slots {
slots.Append(lua.LNumber(slot))
}
f := luaInstance.GetGlobal("filter")
luaInstance.Push(f)
luaInstance.Push(lua.LNumber(e.Id)) // id
luaInstance.Push(lua.LBool(e.IsBase)) // is_base
luaInstance.Push(lua.LString(e.Group)) // group
luaInstance.Push(lua.LString(e.CmdName)) // cmd name
luaInstance.Push(keys) // keys
luaInstance.Push(slots) // slots
luaInstance.Push(lua.LNumber(e.DbId)) // dbid
luaInstance.Push(lua.LNumber(e.TimestampMs)) // timestamp_ms
luaInstance.Call(8, 2)
code := int(luaInstance.Get(1).(lua.LNumber))
e.DbId = int(luaInstance.Get(2).(lua.LNumber))
luaInstance.Pop(2)
return code
}
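
What a filter script looks like from the Lua side: a global function `filter` receiving the eight pushed values and returning (code, db_id). The sketch below writes an illustrative script that drops everything outside db 0, then runs one entry through it:

package main
import (
	"fmt"
	"io/ioutil"
	"github.com/alibaba/RedisShake/internal/entry"
	"github.com/alibaba/RedisShake/internal/filter"
)
func main() {
	script := `
function filter(id, is_base, group, cmd_name, keys, slots, db_id, timestamp_ms)
  if db_id ~= 0 then
    return 1, db_id -- Disallow
  end
  return 0, db_id -- Allow, keep the db unchanged
end
`
	if err := ioutil.WriteFile("filter.lua", []byte(script), 0644); err != nil {
		panic(err)
	}
	filter.LoadFromFile("filter.lua")
	e := entry.NewEntry()
	e.Argv = []string{"set", "k", "v"}
	e.DbId = 1
	fmt.Println(filter.Filter(e)) // 1 (Disallow)
}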

72
internal/log/func.go Normal file
View File

@ -0,0 +1,72 @@
package log
import (
"fmt"
"github.com/rs/zerolog"
)
func Assert(condition bool, msg string) {
if !condition {
Panicf("Assert failed: %s", msg)
}
}
func Debugf(format string, args ...interface{}) {
logFinally(logger.Debug(), format, args...)
}
func Infof(format string, args ...interface{}) {
logFinally(logger.Info(), format, args...)
}
func Warnf(format string, args ...interface{}) {
logFinally(logger.Warn(), format, args...)
}
func Panicf(format string, args ...interface{}) {
logFinally(logger.Panic(), format, args...)
}
func PanicError(err error) {
Panicf(err.Error())
}
func logFinally(event *zerolog.Event, format string, args ...interface{}) {
	str := fmt.Sprintf(format, args...)
	// TODO: parse "key=[value]" pairs out of str into structured zerolog
	// fields instead of emitting the formatted message as-is
	event.Msg(str)
}

34
internal/log/init.go Normal file
View File

@ -0,0 +1,34 @@
package log
import (
"fmt"
"github.com/alibaba/RedisShake/internal/config"
"github.com/rs/zerolog"
"os"
)
var logger zerolog.Logger
func Init() {
// log level
switch config.Config.Advanced.LogLevel {
case "debug":
zerolog.SetGlobalLevel(zerolog.DebugLevel)
case "info":
zerolog.SetGlobalLevel(zerolog.InfoLevel)
case "warn":
zerolog.SetGlobalLevel(zerolog.WarnLevel)
default:
panic(fmt.Sprintf("unknown log level: %s", config.Config.Advanced.LogLevel))
}
// log file
consoleWriter := zerolog.ConsoleWriter{Out: os.Stdout, TimeFormat: "2006-01-02 15:04:05"}
fileWriter, err := os.OpenFile(config.Config.Advanced.LogFile, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
panic(fmt.Sprintf("open log file failed: %s", err))
}
multi := zerolog.MultiLevelWriter(consoleWriter, fileWriter)
logger = zerolog.New(multi).With().Timestamp().Logger()
}

193
internal/rdb/rdb.go Normal file
View File

@ -0,0 +1,193 @@
package rdb
import (
"bufio"
"bytes"
"encoding/binary"
"github.com/alibaba/RedisShake/internal/config"
"github.com/alibaba/RedisShake/internal/entry"
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/rdb/structure"
"github.com/alibaba/RedisShake/internal/rdb/types"
"github.com/alibaba/RedisShake/internal/statistics"
"github.com/alibaba/RedisShake/internal/utils"
"io"
"os"
"strconv"
"time"
)
const (
	kFlagFunction2 = 0xf5 // function library data
	kFlagFunction  = 0xf6 // old function library data for 7.0 rc1 and rc2
	kFlagModuleAux = 0xf7 // Module auxiliary data.
kFlagIdle = 0xf8 // LRU idle time.
kFlagFreq = 0xf9 // LFU frequency.
kFlagAUX = 0xfa // RDB aux field.
kFlagResizeDB = 0xfb // Hash table resize hint.
kFlagExpireMs = 0xfc // Expire time in milliseconds.
kFlagExpire = 0xfd // Old expire time in seconds.
kFlagSelect = 0xfe // DB number of the following keys.
kEOF = 0xff // End of the RDB file.
)
type Loader struct {
	replStreamDbId int // https://github.com/alibaba/RedisShake/pull/430#issuecomment-1099014464
	nowDBId int
	expireAt uint64
	idle int64
	freq int64
	filePath string
	fp *os.File
	ch chan *entry.Entry
}
func NewLoader(filePath string, ch chan *entry.Entry) *Loader {
	ld := new(Loader)
	ld.ch = ch
	ld.filePath = filePath
	return ld
}
func (ld *Loader) ParseRDB() int {
	var err error
	ld.fp, err = os.OpenFile(ld.filePath, os.O_RDONLY, 0666)
	if err != nil {
		log.Panicf("open file failed. file_path=[%s], error=[%s]", ld.filePath, err)
	}
rd := bufio.NewReader(ld.fp)
	// magic + version
buf := make([]byte, 9)
_, err = io.ReadFull(rd, buf)
if err != nil {
log.PanicError(err)
}
if !bytes.Equal(buf[:5], []byte("REDIS")) {
log.Panicf("verify magic string, invalid file format. bytes=[%v]", buf[:5])
}
version, err := strconv.Atoi(string(buf[5:]))
if err != nil {
log.PanicError(err)
}
log.Infof("RDB version: %d", version)
// read entries
ld.parseRDBEntry(rd)
return ld.replStreamDbId
}
func (ld *Loader) parseRDBEntry(rd *bufio.Reader) {
// for stat
UpdateRDBSentSize := func() {
offset, err := ld.fp.Seek(0, io.SeekCurrent)
if err != nil {
log.PanicError(err)
}
statistics.UpdateRDBSentSize(offset)
}
defer UpdateRDBSentSize()
// read one entry
tick := time.Tick(time.Second * 1)
	for {
typeByte := structure.ReadByte(rd)
switch typeByte {
case kFlagIdle:
ld.idle = int64(structure.ReadLength(rd))
case kFlagFreq:
ld.freq = int64(structure.ReadByte(rd))
case kFlagAUX:
key := structure.ReadString(rd)
value := structure.ReadString(rd)
if key == "repl-stream-db" {
var err error
ld.replStreamDbId, err = strconv.Atoi(value)
if err != nil {
log.PanicError(err)
}
log.Infof("RDB repl-stream-db: %d", ld.replStreamDbId)
} else {
log.Infof("RDB AUX fields. key=[%s], value=[%s]", key, value)
}
case kFlagResizeDB:
dbSize := structure.ReadLength(rd)
expireSize := structure.ReadLength(rd)
log.Infof("RDB resize db. db_size=[%d], expire_size=[%d]", dbSize, expireSize)
case kFlagExpireMs:
ld.expireAt = structure.ReadUint64(rd)
log.Debugf("RDB expire at %d", ld.expireAt)
case kFlagExpire:
ld.expireAt = uint64(structure.ReadUint32(rd)) * 1000
log.Debugf("RDB expire at %d", ld.expireAt)
case kFlagSelect:
dbid := structure.ReadLength(rd)
ld.nowDBId = int(dbid)
log.Debugf("RDB select db, DbId=[%d]", dbid)
case kEOF:
return
default:
key := structure.ReadString(rd)
var value bytes.Buffer
anotherReader := io.TeeReader(rd, &value)
o := types.ParseObject(anotherReader, typeByte, key)
if uint64(value.Len()) > config.Config.Advanced.TargetRedisProtoMaxBulkLen {
cmds := o.Rewrite()
for _, cmd := range cmds {
e := entry.NewEntry()
e.IsBase = true
e.DbId = ld.nowDBId
e.Argv = cmd
ld.ch <- e
}
if ld.expireAt != 0 {
e := entry.NewEntry()
e.IsBase = true
e.DbId = ld.nowDBId
e.Argv = []string{"PEXPIREAT", key, strconv.FormatUint(ld.expireAt, 10)}
ld.ch <- e
}
} else {
e := entry.NewEntry()
e.IsBase = true
e.DbId = ld.nowDBId
v := ld.createValueDump(typeByte, value.Bytes())
e.Argv = []string{"restore", key, strconv.FormatUint(ld.expireAt, 10), v}
if config.Config.Advanced.RDBRestoreCommandBehavior == "rewrite" {
e.Argv = append(e.Argv, "replace")
}
if ld.expireAt != 0 {
e.Argv = append(e.Argv, "absttl")
}
if ld.idle != 0 {
e.Argv = append(e.Argv, "idletime", strconv.FormatInt(ld.idle, 10))
}
if ld.freq != 0 {
e.Argv = append(e.Argv, "freq", strconv.FormatInt(ld.freq, 10))
}
ld.ch <- e
}
ld.expireAt = 0
ld.idle = 0
ld.freq = 0
}
select {
case <-tick:
UpdateRDBSentSize()
default:
}
}
}
func (ld *Loader) createValueDump(typeByte byte, val []byte) string {
var b bytes.Buffer
c := utils.NewDigest()
w := io.MultiWriter(&b, c)
_, _ = w.Write([]byte{typeByte})
_, _ = w.Write(val)
_ = binary.Write(w, binary.LittleEndian, uint16(6))
_ = binary.Write(w, binary.LittleEndian, c.Sum64())
return b.String()
}
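
A sketch of driving the loader on its own (assumes a dump.rdb in the working directory and a loaded config, since the bulk-length threshold above comes from it; the loader never closes the channel, so the caller must):

package main
import (
	"fmt"
	"github.com/alibaba/RedisShake/internal/entry"
	"github.com/alibaba/RedisShake/internal/rdb"
)
func main() {
	ch := make(chan *entry.Entry, 1024)
	go func() {
		ld := rdb.NewLoader("dump.rdb", ch)
		ld.ParseRDB() // blocks until the kEOF flag
		close(ch)     // ParseRDB does not close ch itself
	}()
	for e := range ch {
		fmt.Println(e.DbId, e.Argv[0]) // typically: 0 restore
	}
}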

View File

@ -0,0 +1,20 @@
package structure
import (
"github.com/alibaba/RedisShake/internal/log"
"io"
)
func ReadByte(rd io.Reader) byte {
b := ReadBytes(rd, 1)[0]
return b
}
func ReadBytes(rd io.Reader, n int) []byte {
buf := make([]byte, n)
_, err := io.ReadFull(rd, buf)
if err != nil {
log.PanicError(err)
}
return buf
}

View File

@ -0,0 +1,44 @@
package structure
import (
"encoding/binary"
"github.com/alibaba/RedisShake/internal/log"
"io"
"math"
"strconv"
)
func ReadFloat(rd io.Reader) float64 {
u := ReadUint8(rd)
switch u {
case 253:
return math.NaN()
case 254:
return math.Inf(0)
case 255:
return math.Inf(-1)
default:
buf := make([]byte, u)
_, err := io.ReadFull(rd, buf)
if err != nil {
			log.PanicError(err)
}
v, err := strconv.ParseFloat(string(buf), 64)
if err != nil {
log.PanicError(err)
}
return v
}
}
func ReadDouble(rd io.Reader) float64 {
var buf = make([]byte, 8)
_, err := io.ReadFull(rd, buf)
if err != nil {
log.PanicError(err)
}
num := binary.LittleEndian.Uint64(buf)
return math.Float64frombits(num)
}

View File

@ -0,0 +1,58 @@
package structure
import (
"encoding/binary"
"io"
)
func ReadUint8(rd io.Reader) uint8 {
b := ReadByte(rd)
return b
}
func ReadUint16(rd io.Reader) uint16 {
buf := ReadBytes(rd, 2)
return binary.LittleEndian.Uint16(buf)
}
func ReadUint24(rd io.Reader) uint32 {
buf := ReadBytes(rd, 3)
buf = append(buf, 0)
return binary.LittleEndian.Uint32(buf)
}
func ReadUint32(rd io.Reader) uint32 {
buf := ReadBytes(rd, 4)
return binary.LittleEndian.Uint32(buf)
}
func ReadUint64(rd io.Reader) uint64 {
buf := ReadBytes(rd, 8)
return binary.LittleEndian.Uint64(buf)
}
func ReadInt8(rd io.Reader) int8 {
b := ReadByte(rd)
return int8(b)
}
func ReadInt16(rd io.Reader) int16 {
buf := ReadBytes(rd, 2)
return int16(binary.LittleEndian.Uint16(buf))
}
func ReadInt24(rd io.Reader) int32 {
buf := ReadBytes(rd, 3)
buf = append([]byte{0}, buf...)
return int32(binary.LittleEndian.Uint32(buf)) >> 8
}
func ReadInt32(rd io.Reader) int32 {
buf := ReadBytes(rd, 4)
return int32(binary.LittleEndian.Uint32(buf))
}
func ReadInt64(rd io.Reader) int64 {
buf := ReadBytes(rd, 8)
return int64(binary.LittleEndian.Uint64(buf))
}
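
The prepend-then-shift trick in ReadInt24 deserves a concrete check; a sketch against in-memory bytes (little-endian input, arithmetic right shift restores the sign):

package main
import (
	"bytes"
	"fmt"
	"github.com/alibaba/RedisShake/internal/rdb/structure"
)
func main() {
	// {0xFF,0xFF,0xFF} -> buf {0x00,0xFF,0xFF,0xFF} -> uint32 0xFFFFFF00
	// -> int32 -256 -> arithmetic >>8 -> -1
	fmt.Println(structure.ReadInt24(bytes.NewReader([]byte{0xFF, 0xFF, 0xFF}))) // -1
	// little-endian 0x800001: the sign bit of the 24-bit value is set
	fmt.Println(structure.ReadInt24(bytes.NewReader([]byte{0x01, 0x00, 0x80}))) // -8388607
}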

View File

@ -0,0 +1,32 @@
package structure
import (
"bufio"
"encoding/binary"
"io"
"strconv"
"strings"
)
func ReadIntset(rd io.Reader) []string {
rd = bufio.NewReader(strings.NewReader(ReadString(rd)))
encodingType := int(ReadUint32(rd))
size := int(ReadUint32(rd))
elements := make([]string, size)
for i := 0; i < size; i++ {
intBytes := ReadBytes(rd, encodingType)
var intString string
switch encodingType {
case 2:
intString = strconv.FormatInt(int64(int16(binary.LittleEndian.Uint16(intBytes))), 10)
case 4:
intString = strconv.FormatInt(int64(int32(binary.LittleEndian.Uint32(intBytes))), 10)
case 8:
			intString = strconv.FormatInt(int64(binary.LittleEndian.Uint64(intBytes)), 10)
}
elements[i] = intString
}
return elements
}

View File

@ -0,0 +1,62 @@
package structure
import (
"encoding/binary"
"fmt"
"github.com/alibaba/RedisShake/internal/log"
"io"
)
const (
RDB6ByteLen = 0 // RDB_6BITLEN
RDB14ByteLen = 1 // RDB_14BITLEN
len32or64Bit = 2
lenSpecial = 3
RDB32ByteLen = 0x80
RDB64ByteLen = 0x81
)
func ReadLength(rd io.Reader) uint64 {
length, special, err := readEncodedLength(rd)
if special {
log.Panicf("illegal length special=true, encoding: %d", length)
}
if err != nil {
log.PanicError(err)
}
return length
}
func readEncodedLength(rd io.Reader) (length uint64, special bool, err error) {
var lengthBuffer = make([]byte, 8)
firstByte := ReadByte(rd)
first2bits := (firstByte & 0xc0) >> 6 // first 2 bits of encoding
switch first2bits {
case RDB6ByteLen:
length = uint64(firstByte) & 0x3f
case RDB14ByteLen:
nextByte := ReadByte(rd)
length = (uint64(firstByte)&0x3f)<<8 | uint64(nextByte)
case len32or64Bit:
if firstByte == RDB32ByteLen {
_, err = io.ReadFull(rd, lengthBuffer[0:4])
if err != nil {
return 0, false, fmt.Errorf("read len32Bit failed: %s", err.Error())
}
length = uint64(binary.BigEndian.Uint32(lengthBuffer))
} else if firstByte == RDB64ByteLen {
_, err = io.ReadFull(rd, lengthBuffer)
if err != nil {
return 0, false, fmt.Errorf("read len64Bit failed: %s", err.Error())
}
length = binary.BigEndian.Uint64(lengthBuffer)
} else {
return 0, false, fmt.Errorf("illegal length encoding: %x", firstByte)
}
case lenSpecial:
special = true
length = uint64(firstByte) & 0x3f
}
return length, special, nil
}
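
A sketch decoding each length form from hand-built byte strings (the tag bits follow the constants above):

package main
import (
	"bytes"
	"fmt"
	"github.com/alibaba/RedisShake/internal/rdb/structure"
)
func main() {
	// 6-bit: tag 00, value in the low 6 bits -> 0x2A is 42
	fmt.Println(structure.ReadLength(bytes.NewReader([]byte{0x2A}))) // 42
	// 14-bit: tag 01, 6 bits plus the next byte -> 0x41 0x00 is 256
	fmt.Println(structure.ReadLength(bytes.NewReader([]byte{0x41, 0x00}))) // 256
	// 32-bit: marker byte 0x80, then a big-endian uint32
	fmt.Println(structure.ReadLength(bytes.NewReader([]byte{0x80, 0x00, 0x01, 0x00, 0x00}))) // 65536
}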

View File

@ -0,0 +1,167 @@
package structure
import (
"bufio"
"github.com/alibaba/RedisShake/internal/log"
"io"
"math"
"strconv"
"strings"
)
const (
lpEncoding7BitUintMask = 0x80 // 10000000 LP_ENCODING_7BIT_UINT_MASK
lpEncoding7BitUint = 0x00 // 00000000 LP_ENCODING_7BIT_UINT
lpEncoding6BitStrMask = 0xC0 // 11000000 LP_ENCODING_6BIT_STR_MASK
lpEncoding6BitStr = 0x80 // 10000000 LP_ENCODING_6BIT_STR
lpEncoding13BitIntMask = 0xE0 // 11100000 LP_ENCODING_13BIT_INT_MASK
lpEncoding13BitInt = 0xC0 // 11000000 LP_ENCODING_13BIT_INT
lpEncoding12BitStrMask = 0xF0 // 11110000 LP_ENCODING_12BIT_STR_MASK
lpEncoding12BitStr = 0xE0 // 11100000 LP_ENCODING_12BIT_STR
lpEncoding16BitIntMask = 0xFF // 11111111 LP_ENCODING_16BIT_INT_MASK
lpEncoding16BitInt = 0xF1 // 11110001 LP_ENCODING_16BIT_INT
lpEncoding24BitIntMask = 0xFF // 11111111 LP_ENCODING_24BIT_INT_MASK
lpEncoding24BitInt = 0xF2 // 11110010 LP_ENCODING_24BIT_INT
lpEncoding32BitIntMask = 0xFF // 11111111 LP_ENCODING_32BIT_INT_MASK
lpEncoding32BitInt = 0xF3 // 11110011 LP_ENCODING_32BIT_INT
lpEncoding64BitIntMask = 0xFF // 11111111 LP_ENCODING_64BIT_INT_MASK
lpEncoding64BitInt = 0xF4 // 11110100 LP_ENCODING_64BIT_INT
lpEncoding32BitStrMask = 0xFF // 11111111 LP_ENCODING_32BIT_STR_MASK
lpEncoding32BitStr = 0xF0 // 11110000 LP_ENCODING_32BIT_STR
)
func ReadListpack(rd io.Reader) []string {
rd = bufio.NewReader(strings.NewReader(ReadString(rd)))
bytes := ReadUint32(rd) // bytes
size := int(ReadUint16(rd))
log.Debugf("ReadListpack: bytes=[%d], size=[%d]", bytes, size)
var elements []string
for i := 0; i < size; i++ {
ele := readListpackEntry(rd)
elements = append(elements, ele)
}
lastByte := ReadByte(rd)
if lastByte != 0xFF {
log.Panicf("ReadListpack: last byte is not 0xFF, but [%d]", lastByte)
}
return elements
}
// redis/src/Listpack.c lpGet()
func readListpackEntry(rd io.Reader) string {
	var val int64
	var uval, negstart, negmax uint64
	firstByte := ReadByte(rd)
	if (firstByte & lpEncoding7BitUintMask) == lpEncoding7BitUint { // 7bit uint
		uval = uint64(firstByte & 0x7f) // 0x7f is 01111111
		negmax = 0
		negstart = math.MaxUint64 // uint
		_ = ReadBytes(rd, lpEncodeBacklen(1)) // encode: 1 byte
	} else if (firstByte & lpEncoding6BitStrMask) == lpEncoding6BitStr { // 6bit length str
		length := int(firstByte & 0x3f) // 0x3f is 00111111
		ele := string(ReadBytes(rd, length))
		_ = ReadBytes(rd, lpEncodeBacklen(1+length)) // encode: 1byte, str: length
		return ele
	} else if (firstByte & lpEncoding13BitIntMask) == lpEncoding13BitInt { // 13bit int
		secondByte := ReadByte(rd)
		uval = (uint64(firstByte&0x1f) << 8) + uint64(secondByte) // 5bit + 8bit, 0x1f is 00011111
		negstart = uint64(1) << 12
		negmax = 8191 // uint13_max
		_ = ReadBytes(rd, lpEncodeBacklen(2)) // encode: 1byte, int: 1byte
	} else if (firstByte & lpEncoding16BitIntMask) == lpEncoding16BitInt { // 16bit int
		uval = uint64(ReadUint16(rd))
		negstart = uint64(1) << 15
		negmax = 65535 // uint16_max
		_ = ReadBytes(rd, lpEncodeBacklen(1+2)) // encode: 1byte, int: 2byte
	} else if (firstByte & lpEncoding24BitIntMask) == lpEncoding24BitInt { // 24bit int
		uval = uint64(ReadUint24(rd))
		negstart = uint64(1) << 23
		negmax = math.MaxUint32 >> 8 // uint24_max
		_ = ReadBytes(rd, lpEncodeBacklen(1+3)) // encode: 1byte, int: 3byte
	} else if (firstByte & lpEncoding32BitIntMask) == lpEncoding32BitInt { // 32bit int
		uval = uint64(ReadUint32(rd))
		negstart = uint64(1) << 31
		negmax = math.MaxUint32 // uint32_max
		_ = ReadBytes(rd, lpEncodeBacklen(1+4)) // encode: 1byte, int: 4byte
	} else if (firstByte & lpEncoding64BitIntMask) == lpEncoding64BitInt { // 64bit int
		uval = ReadUint64(rd)
		negstart = uint64(1) << 63
		negmax = math.MaxUint64 // uint64_max
		_ = ReadBytes(rd, lpEncodeBacklen(1+8)) // encode: 1byte, int: 8byte
	} else if (firstByte & lpEncoding12BitStrMask) == lpEncoding12BitStr { // 12bit length str
		secondByte := ReadByte(rd)
		length := (int(firstByte&0x0f) << 8) + int(secondByte) // 4bit + 8bit
		ele := string(ReadBytes(rd, length))
		_ = ReadBytes(rd, lpEncodeBacklen(2+length)) // encode: 2byte, str: length
		return ele
	} else if (firstByte & lpEncoding32BitStrMask) == lpEncoding32BitStr { // 32bit length str
		length := int(ReadUint32(rd))
		ele := string(ReadBytes(rd, length))
		_ = ReadBytes(rd, lpEncodeBacklen(5+length)) // encode: 1byte, length: 4byte, str: length
		return ele
	} else {
		// redis lpGet() assigns a sentinel value for an invalid first byte;
		// treat it as a corrupt listpack and panic instead
		log.Panicf("unknown encoding: %x", firstByte)
	}
/* We reach this code path only for integer encodings.
* Convert the unsigned value to the signed one using two's complement
* rule. */
if uval >= negstart {
/* This three steps conversion should avoid undefined behaviors
* in the unsigned -> signed conversion. */
uval = negmax - uval
val = int64(uval)
val = -val - 1
} else {
val = int64(uval)
}
return strconv.FormatInt(val, 10)
}
/* lpEncodeBacklen returns the size in bytes of the backlen field for an entry whose encoded length is the given len. */
func lpEncodeBacklen(len int) int {
if len <= 127 {
return 1
} else if len < 16383 {
return 2
} else if len < 2097151 {
return 3
} else if len < 268435455 {
return 4
} else {
return 5
}
}
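
lpEncodeBacklen is unexported, so a test-style sketch in the same package pins its width thresholds (boundary values read straight off the comparisons above):

package structure
import "testing"
func TestLpEncodeBacklen(t *testing.T) {
	// 7 payload bits per backlen byte, mirroring redis lpEncodeBacklen()
	cases := map[int]int{1: 1, 127: 1, 128: 2, 16382: 2, 16383: 3, 2097150: 3, 2097151: 4}
	for length, want := range cases {
		if got := lpEncodeBacklen(length); got != want {
			t.Errorf("lpEncodeBacklen(%d) = %d, want %d", length, got, want)
		}
	}
}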

View File

@ -0,0 +1,77 @@
package structure
import (
"github.com/alibaba/RedisShake/internal/log"
"io"
"strconv"
)
const (
RDBEncInt8 = 0 // RDB_ENC_INT8
RDBEncInt16 = 1 // RDB_ENC_INT16
RDBEncInt32 = 2 // RDB_ENC_INT32
RDBEncLZF = 3 // RDB_ENC_LZF
)
func ReadString(rd io.Reader) string {
length, special, err := readEncodedLength(rd)
if err != nil {
log.PanicError(err)
}
if special {
switch length {
case RDBEncInt8:
b := ReadInt8(rd)
return strconv.Itoa(int(b))
case RDBEncInt16:
b := ReadInt16(rd)
return strconv.Itoa(int(b))
case RDBEncInt32:
b := ReadInt32(rd)
return strconv.Itoa(int(b))
case RDBEncLZF:
inLen := ReadLength(rd)
outLen := ReadLength(rd)
in := ReadBytes(rd, int(inLen))
return lzfDecompress(in, int(outLen))
default:
log.Panicf("Unknown string encode type %d", length)
}
}
return string(ReadBytes(rd, int(length)))
}
func lzfDecompress(in []byte, outLen int) string {
out := make([]byte, outLen)
i, o := 0, 0
for i < len(in) {
ctrl := int(in[i])
i++
if ctrl < 32 {
for x := 0; x <= ctrl; x++ {
out[o] = in[i]
i++
o++
}
} else {
length := ctrl >> 5
if length == 7 {
length = length + int(in[i])
i++
}
ref := o - ((ctrl & 0x1f) << 8) - int(in[i]) - 1
i++
for x := 0; x <= length+1; x++ {
out[o] = out[ref]
ref++
o++
}
}
}
if o != outLen {
log.Panicf("lzf decompress failed: outLen: %d, o: %d", outLen, o)
}
return string(out)
}
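
lzfDecompress is easiest to follow with a tiny hand-assembled input: a literal run (ctrl < 32 copies ctrl+1 bytes) followed by a back-reference (ctrl >= 32). A test-style sketch in the same package:

package structure
import "testing"
func TestLzfDecompress(t *testing.T) {
	// 0x02 -> copy 3 literal bytes "abc"
	// 0x20 0x02 -> length 1 (copies length+2 = 3 bytes) from offset o-0-2-1 = 0
	in := []byte{0x02, 'a', 'b', 'c', 0x20, 0x02}
	if got := lzfDecompress(in, 6); got != "abcabc" {
		t.Errorf("lzfDecompress = %q, want %q", got, "abcabc")
	}
}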

View File

@ -0,0 +1,116 @@
package structure
import (
"bufio"
"encoding/binary"
"github.com/alibaba/RedisShake/internal/log"
"io"
"strconv"
"strings"
)
const (
zipStr06B = 0x00 // 0000 ZIP_STR_06B
zipStr14B = 0x01 // 0001
zipStr32B = 0x02 // 0010
zipInt04B = 0x0f // high 4 bits of Int 04 encoding
zipInt08B = 0xfe // 11111110
zipInt16B = 0xc0 // 11000000
zipInt24B = 0xf0 // 11110000
zipInt32B = 0xd0 // 11010000
zipInt64B = 0xe0 // 11100000
)
func ReadZipList(rd io.Reader) []string {
rd = bufio.NewReader(strings.NewReader(ReadString(rd)))
// The general layout of the ziplist is as follows:
// <zlbytes> <zltail> <zllen> <entry> <entry> ... <entry> <zlend>
_ = ReadUint32(rd) // zlbytes
_ = ReadUint32(rd) // zltail
size := int(ReadUint16(rd))
log.Debugf("ReadZipList size=[%d]", size)
var elements []string
if size == 65535 { // 2^16-1, we need to traverse the entire list to know how many items it holds.
		for firstByte := ReadByte(rd); firstByte != 0xFF; firstByte = ReadByte(rd) { // 0xFF is zlend; 0xFE is a valid prevlen marker
ele := readZipListEntry(rd, firstByte)
elements = append(elements, ele)
}
} else {
for i := 0; i < size; i++ {
firstByte := ReadByte(rd)
ele := readZipListEntry(rd, firstByte)
elements = append(elements, ele)
}
if lastByte := ReadByte(rd); lastByte != 0xFF {
log.Panicf("invalid zipList lastByte encoding: %d", lastByte)
}
}
return elements
}
/*
* So practically an entry is encoded in the following way:
*
* <prevlen from 0 to 253> <encoding> <entry>
*
* Or alternatively if the previous entry length is greater than 253 bytes
* the following encoding is used:
*
* 0xFE <4 bytes unsigned little endian prevlen> <encoding> <entry>
*/
func readZipListEntry(rd io.Reader, firstByte byte) string {
// read prevlen
if firstByte == 0xFE {
_ = ReadUint32(rd) // read 4 bytes prevlen
}
// read encoding
firstByte = ReadByte(rd)
first2bits := (firstByte & 0xc0) >> 6 // first 2 bits of encoding
switch first2bits {
case zipStr06B:
length := int(firstByte & 0x3f) // 0x3f = 00111111
return string(ReadBytes(rd, length))
	case zipStr14B:
		secondByte := ReadByte(rd)
		length := (int(firstByte&0x3f) << 8) | int(secondByte)
		return string(ReadBytes(rd, length))
case zipStr32B:
lenBytes := ReadBytes(rd, 4)
length := binary.BigEndian.Uint32(lenBytes)
return string(ReadBytes(rd, int(length)))
}
switch firstByte {
case zipInt08B:
v := ReadInt8(rd)
return strconv.FormatInt(int64(v), 10)
case zipInt16B:
v := ReadInt16(rd)
return strconv.FormatInt(int64(v), 10)
case zipInt24B:
v := ReadInt24(rd)
return strconv.FormatInt(int64(v), 10)
case zipInt32B:
v := ReadInt32(rd)
return strconv.FormatInt(int64(v), 10)
case zipInt64B:
v := ReadInt64(rd)
return strconv.FormatInt(v, 10)
}
if (firstByte >> 4) == zipInt04B {
v := int64(firstByte & 0x0f) // 0x0f = 00001111
v = v - 1 // 1-13 -> 0-12
if v < 0 || v > 12 {
log.Panicf("invalid zipInt04B encoding: %d", v)
}
return strconv.FormatInt(v, 10)
}
log.Panicf("invalid encoding: %d", firstByte)
return ""
}

View File

@ -0,0 +1,71 @@
package types
import (
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/rdb/structure"
"io"
)
type HashObject struct {
key string
value map[string]string
}
func (o *HashObject) LoadFromBuffer(rd io.Reader, key string, typeByte byte) {
o.key = key
o.value = make(map[string]string)
switch typeByte {
case rdbTypeHash:
o.readHash(rd)
case rdbTypeHashZipmap:
o.readHashZipmap(rd)
case rdbTypeHashZiplist:
o.readHashZiplist(rd)
case rdbTypeHashListpack:
o.readHashListpack(rd)
default:
log.Panicf("unknown hash type. typeByte=[%d]", typeByte)
}
}
func (o *HashObject) readHash(rd io.Reader) {
size := int(structure.ReadLength(rd))
for i := 0; i < size; i++ {
key := structure.ReadString(rd)
value := structure.ReadString(rd)
o.value[key] = value
}
}
func (o *HashObject) readHashZipmap(rd io.Reader) {
log.Panicf("not implemented rdbTypeZipmap")
}
func (o *HashObject) readHashZiplist(rd io.Reader) {
list := structure.ReadZipList(rd)
size := len(list)
for i := 0; i < size; i += 2 {
key := list[i]
value := list[i+1]
o.value[key] = value
}
}
func (o *HashObject) readHashListpack(rd io.Reader) {
list := structure.ReadListpack(rd)
size := len(list)
for i := 0; i < size; i += 2 {
key := list[i]
value := list[i+1]
o.value[key] = value
}
}
func (o *HashObject) Rewrite() []RedisCmd {
var cmds []RedisCmd
for k, v := range o.value {
cmd := RedisCmd{"hset", o.key, k, v}
cmds = append(cmds, cmd)
}
return cmds
}

View File

@ -0,0 +1,117 @@
package types
import (
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/rdb/structure"
"io"
)
const (
// StringType is redis string
StringType = "string"
// ListType is redis list
ListType = "list"
// SetType is redis set
SetType = "set"
// HashType is redis hash
HashType = "hash"
// ZSetType is redis sorted set
ZSetType = "zset"
// AuxType is redis metadata key-value pair
AuxType = "aux"
	// DBSizeType is for RDB_OPCODE_RESIZEDB
DBSizeType = "dbsize"
)
const (
rdbTypeString = 0 // RDB_TYPE_STRING
rdbTypeList = 1
rdbTypeSet = 2
rdbTypeZSet = 3
rdbTypeHash = 4
rdbTypeZSet2 = 5 // ZSET version 2 with doubles stored in binary.
rdbTypeModule = 6 // RDB_TYPE_MODULE
rdbTypeModule2 = 7 // RDB_TYPE_MODULE2 Module value with annotations for parsing without the generating module being loaded.
// Object types for encoded objects.
rdbTypeHashZipmap = 9
rdbTypeListZiplist = 10
rdbTypeSetIntset = 11
rdbTypeZSetZiplist = 12
rdbTypeHashZiplist = 13
rdbTypeListQuicklist = 14 // RDB_TYPE_LIST_QUICKLIST
rdbTypeStreamListpacks = 15 // RDB_TYPE_STREAM_LISTPACKS
	rdbTypeHashListpack = 16 // RDB_TYPE_HASH_LISTPACK
rdbTypeZSetListpack = 17 // RDB_TYPE_ZSET_LISTPACK
rdbTypeListQuicklist2 = 18 // RDB_TYPE_LIST_QUICKLIST_2 https://github.com/redis/redis/pull/9357
rdbTypeStreamListpacks2 = 19 // RDB_TYPE_STREAM_LISTPACKS2
moduleTypeNameCharSet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"
)
type RedisCmd []string
// RedisObject is interface for a redis object
type RedisObject interface {
LoadFromBuffer(rd io.Reader, key string, typeByte byte)
Rewrite() []RedisCmd // TODO big key
}
func ParseObject(rd io.Reader, typeByte byte, key string) RedisObject {
log.Debugf("parse rdb object. typeByte=[%d], key=[%s]", typeByte, key)
switch typeByte {
case rdbTypeString: // string
o := new(StringObject)
o.LoadFromBuffer(rd, key, typeByte)
return o
case rdbTypeList, rdbTypeListZiplist, rdbTypeListQuicklist, rdbTypeListQuicklist2: // list
o := new(ListObject)
o.LoadFromBuffer(rd, key, typeByte)
return o
case rdbTypeSet, rdbTypeSetIntset: // set
o := new(SetObject)
o.LoadFromBuffer(rd, key, typeByte)
return o
case rdbTypeZSet, rdbTypeZSet2, rdbTypeZSetZiplist, rdbTypeZSetListpack: // zset
o := new(ZsetObject)
o.LoadFromBuffer(rd, key, typeByte)
return o
case rdbTypeHash, rdbTypeHashZipmap, rdbTypeHashZiplist, rdbTypeHashListpack: // hash
o := new(HashObject)
o.LoadFromBuffer(rd, key, typeByte)
return o
case rdbTypeStreamListpacks, rdbTypeStreamListpacks2: // stream
o := new(StreamObject)
o.LoadFromBuffer(rd, key, typeByte)
return o
case rdbTypeModule, rdbTypeModule2: // module
if typeByte == rdbTypeModule {
log.Panicf("module type is not supported")
}
moduleId := structure.ReadLength(rd)
moduleName := moduleTypeNameByID(moduleId)
switch moduleName {
case "exhash---":
log.Panicf("exhash module is not supported")
case "exstrtype":
log.Panicf("exstrtype module is not supported")
case "tair-json":
log.Panicf("tair-json module is not supported")
default:
log.Panicf("unknown module type: %s", moduleName)
}
}
log.Panicf("unknown type byte: %d", typeByte)
return nil
}
func moduleTypeNameByID(moduleId uint64) string {
nameList := make([]byte, 9)
moduleId >>= 10
for i := 8; i >= 0; i-- {
nameList[i] = moduleTypeNameCharSet[moduleId&63]
moduleId >>= 6
}
return string(nameList)
}
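
The module-name decoding packs nine 6-bit charset indices above a 10-bit version field; a test-style sketch with a hypothetical inverse helper (moduleTypeNameToID is not part of the code above) round-trips a name:

package types
import (
	"strings"
	"testing"
)
// moduleTypeNameToID is a hypothetical inverse of moduleTypeNameByID:
// nine 6-bit charset indices in the high bits, a 10-bit version below them.
func moduleTypeNameToID(name string, version uint64) uint64 {
	var id uint64
	for i := 0; i < 9; i++ {
		id = id<<6 | uint64(strings.IndexByte(moduleTypeNameCharSet, name[i]))
	}
	return id<<10 | (version & 0x3ff)
}
func TestModuleTypeName(t *testing.T) {
	id := moduleTypeNameToID("tair-json", 1)
	if got := moduleTypeNameByID(id); got != "tair-json" {
		t.Errorf("round trip failed: got %q", got)
	}
}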

View File

@ -0,0 +1,79 @@
package types
import (
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/rdb/structure"
"io"
)
// quicklist node container formats
const (
quicklistNodeContainerPlain = 1 // QUICKLIST_NODE_CONTAINER_PLAIN
quicklistNodeContainerPacked = 2 // QUICKLIST_NODE_CONTAINER_PACKED
)
type ListObject struct {
key string
elements []string
}
func (o *ListObject) LoadFromBuffer(rd io.Reader, key string, typeByte byte) {
o.key = key
switch typeByte {
case rdbTypeList:
o.readList(rd)
case rdbTypeListZiplist:
o.elements = structure.ReadZipList(rd)
case rdbTypeListQuicklist:
o.readQuickList(rd)
case rdbTypeListQuicklist2:
o.readQuickList2(rd)
default:
log.Panicf("unknown list type %d", typeByte)
}
}
func (o *ListObject) Rewrite() []RedisCmd {
cmds := make([]RedisCmd, len(o.elements))
for inx, ele := range o.elements {
cmd := RedisCmd{"rpush", o.key, ele}
cmds[inx] = cmd
}
return cmds
}
func (o *ListObject) readList(rd io.Reader) {
size := int(structure.ReadLength(rd))
for i := 0; i < size; i++ {
ele := structure.ReadString(rd)
o.elements = append(o.elements, ele)
}
}
func (o *ListObject) readQuickList(rd io.Reader) {
size := int(structure.ReadLength(rd))
log.Debugf("readQuickList size=[%d]", size)
for i := 0; i < size; i++ {
ziplistElements := structure.ReadZipList(rd)
o.elements = append(o.elements, ziplistElements...)
}
}
func (o *ListObject) readQuickList2(rd io.Reader) {
size := int(structure.ReadLength(rd))
log.Debugf("readQuickList2 size=[%d]", size)
for i := 0; i < size; i++ {
container := structure.ReadLength(rd)
log.Debugf("readQuickList2 container=[%d]", container)
if container == quicklistNodeContainerPlain {
ele := structure.ReadString(rd)
o.elements = append(o.elements, ele)
} else if container == quicklistNodeContainerPacked {
listpackElements := structure.ReadListpack(rd)
o.elements = append(o.elements, listpackElements...)
} else {
log.Panicf("unknown quicklist container %d", container)
}
}
}

42
internal/rdb/types/set.go Normal file
View File

@ -0,0 +1,42 @@
package types
import (
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/rdb/structure"
"io"
)
type SetObject struct {
key string
elements []string
}
func (o *SetObject) LoadFromBuffer(rd io.Reader, key string, typeByte byte) {
o.key = key
switch typeByte {
case rdbTypeSet:
o.readSet(rd)
case rdbTypeSetIntset:
o.elements = structure.ReadIntset(rd)
default:
log.Panicf("unknown set type. typeByte=[%d]", typeByte)
}
}
func (o *SetObject) readSet(rd io.Reader) {
size := int(structure.ReadLength(rd))
o.elements = make([]string, size)
for i := 0; i < size; i++ {
val := structure.ReadString(rd)
o.elements[i] = val
}
}
func (o *SetObject) Rewrite() []RedisCmd {
cmds := make([]RedisCmd, len(o.elements))
for inx, ele := range o.elements {
cmd := RedisCmd{"sadd", o.key, ele}
cmds[inx] = cmd
}
return cmds
}

View File

@ -0,0 +1,248 @@
package types
import (
"encoding/binary"
"fmt"
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/rdb/structure"
"io"
"strconv"
)
/*
* The master entry is composed like in the following example:
*
* +-------+---------+------------+---------+--/--+---------+---------+-+
* | count | deleted | num-fields | field_1 | field_2 | ... | field_N |0|
* +-------+---------+------------+---------+--/--+---------+---------+-+
* Populate the Listpack with the new entry. We use the following
* encoding:
*
* +-----+--------+----------+-------+-------+-/-+-------+-------+--------+
* |flags|entry-id|num-fields|field-1|value-1|...|field-N|value-N|lp-count|
* +-----+--------+----------+-------+-------+-/-+-------+-------+--------+
*
* However if the SAMEFIELD flag is set, we have just to populate
* the entry with the values, so it becomes:
*
* +-----+--------+-------+-/-+-------+--------+
* |flags|entry-id|value-1|...|value-N|lp-count|
* +-----+--------+-------+-/-+-------+--------+
*
* The entry-id field is actually two separated fields: the ms
* and seq difference compared to the master entry.
*
* The lp-count field is a number that states the number of Listpack pieces
* that compose the entry, so that it's possible to travel the entry
* in reverse order: we can just start from the end of the Listpack, read
* the entry, and jump back N times to seek the "flags" field to read
* the stream full entry. */
type StreamObject struct {
key string
cmds []RedisCmd
}
func (o *StreamObject) LoadFromBuffer(rd io.Reader, key string, typeByte byte) {
o.key = key
	switch typeByte {
	case rdbTypeStreamListpacks, rdbTypeStreamListpacks2:
		o.readStream(rd, key, typeByte)
	default:
		log.Panicf("unknown stream type. typeByte=[%d]", typeByte)
	}
}
// see redis rewriteStreamObject()
func (o *StreamObject) readStream(rd io.Reader, masterKey string, typeByte byte) {
// 1. length(number of listpack), k1, v1, k2, v2, ..., number, ms, seq
/* Load the number of Listpack. */
nListpack := int(structure.ReadLength(rd))
for i := 0; i < nListpack; i++ {
/* Load key */
key := structure.ReadString(rd)
/* key is streamId, like: 1612181627287-0 */
masterMs := int64(binary.BigEndian.Uint64([]byte(key[:8])))
masterSeq := int64(binary.BigEndian.Uint64([]byte(key[8:])))
/* value is a listpack */
elements := structure.ReadListpack(rd)
inx := 0
/* The front of stream listpack is master entry */
/* Parse the master entry */
count := nextInteger(&inx, elements) // count
deleted := nextInteger(&inx, elements) // deleted
numFields := int(nextInteger(&inx, elements)) // num-fields
fields := elements[3 : 3+numFields] // fields
inx = 3 + numFields
// master entry end by zero
lastEntry := nextString(&inx, elements)
if lastEntry != "0" {
log.Panicf("master entry not ends by zero. lastEntry=[%s]", lastEntry)
}
/* Parse entries */
for count != 0 || deleted != 0 {
flags := nextInteger(&inx, elements) // [is_same_fields|is_deleted]
entryMs := nextInteger(&inx, elements)
entrySeq := nextInteger(&inx, elements)
args := []string{"xadd", masterKey, fmt.Sprintf("%v-%v", entryMs+masterMs, entrySeq+masterSeq)}
if flags&2 == 2 { // same fields, get field from master entry.
for j := 0; j < numFields; j++ {
args = append(args, fields[j], nextString(&inx, elements))
}
} else { // get field by lp.Next()
num := int(nextInteger(&inx, elements))
args = append(args, elements[inx:inx+num*2]...)
inx += num * 2
}
_ = nextString(&inx, elements) // lp_count
if flags&1 == 1 { // is_deleted
deleted -= 1
} else {
count -= 1
o.cmds = append(o.cmds, args)
}
}
}
/* Load total number of items inside the stream. */
_ = structure.ReadLength(rd) // number
/* Load the last entry ID. */
lastMs := structure.ReadLength(rd)
lastSeq := structure.ReadLength(rd)
lastid := fmt.Sprintf("%v-%v", lastMs, lastSeq)
if nListpack == 0 {
/* Use the XADD MAXLEN 0 trick to generate an empty stream if
* the key we are serializing is an empty string, which is possible
* for the Stream type. */
args := []string{"xadd", masterKey, "MAXLEN", "0", lastid, "x", "y"}
o.cmds = append(o.cmds, args)
}
/* Append XSETID after XADD, make sure lastid is correct,
* in case of XDEL lastid. */
o.cmds = append(o.cmds, []string{"xsetid", masterKey, lastid})
if typeByte == rdbTypeStreamListpacks2 {
/* Load the first entry ID. */
_ = structure.ReadLength(rd) // first_ms
_ = structure.ReadLength(rd) // first_seq
/* Load the maximal deleted entry ID. */
_ = structure.ReadLength(rd) // max_deleted_ms
_ = structure.ReadLength(rd) // max_deleted_seq
/* Load the offset. */
_ = structure.ReadLength(rd) // offset
}
/* 2. nConsumerGroup, groupName, ms, seq, PEL, Consumers */
/* Load the number of groups. */
nConsumerGroup := int(structure.ReadLength(rd))
for i := 0; i < nConsumerGroup; i++ {
/* Load groupName */
groupName := structure.ReadString(rd)
/* Load the last ID */
lastMs := structure.ReadLength(rd)
lastSeq := structure.ReadLength(rd)
lastid := fmt.Sprintf("%v-%v", lastMs, lastSeq)
/* Create Group */
o.cmds = append(o.cmds, []string{"CREATE", masterKey, groupName, lastid})
/* Load group offset. */
if typeByte == rdbTypeStreamListpacks2 {
_ = structure.ReadLength(rd) // offset
}
/* Load the global PEL */
nPel := int(structure.ReadLength(rd))
mapId2Time := make(map[string]uint64)
mapId2Count := make(map[string]uint64)
for j := 0; j < nPel; j++ {
/* Load streamId */
tmpBytes := structure.ReadBytes(rd, 16)
ms := binary.BigEndian.Uint64(tmpBytes[:8])
seq := binary.BigEndian.Uint64(tmpBytes[8:])
streamId := fmt.Sprintf("%v-%v", ms, seq)
/* Load deliveryTime */
deliveryTime := structure.ReadUint64(rd)
/* Load deliveryCount */
deliveryCount := structure.ReadLength(rd)
/* Save deliveryTime and deliveryCount */
mapId2Time[streamId] = deliveryTime
mapId2Count[streamId] = deliveryCount
}
/* Generate XCLAIMs for each consumer that happens to
* have pending entries. Empty consumers are discarded. */
nConsumer := int(structure.ReadLength(rd))
for j := 0; j < nConsumer; j++ {
/* Load consumerName */
consumerName := structure.ReadString(rd)
/* Load lastSeenTime */
_ = structure.ReadUint64(rd)
/* Consumer PEL */
nPEL := int(structure.ReadLength(rd))
for i := 0; i < nPEL; i++ {
/* Load streamId */
tmpBytes := structure.ReadBytes(rd, 16)
ms := binary.BigEndian.Uint64(tmpBytes[:8])
seq := binary.BigEndian.Uint64(tmpBytes[8:])
streamId := fmt.Sprintf("%v-%v", ms, seq)
/* Send */
args := []string{
"xclaim", masterKey, groupName, consumerName, "0", streamId,
"TIME", strconv.FormatUint(mapId2Time[streamId], 10),
"RETRYCOUNT", strconv.FormatUint(mapId2Count[streamId], 10),
"JUSTID", "FORCE"}
o.cmds = append(o.cmds, args)
}
}
}
}
func nextInteger(inx *int, elements []string) int64 {
ele := elements[*inx]
*inx++
i, err := strconv.ParseInt(ele, 10, 64)
if err != nil {
log.Panicf("integer is not a number. ele=[%s]", ele)
}
return i
}
func nextString(inx *int, elements []string) string {
ele := elements[*inx]
*inx++
return ele
}
func (o *StreamObject) Rewrite() []RedisCmd {
return o.cmds
}

View File

@ -0,0 +1,22 @@
package types
import (
"github.com/alibaba/RedisShake/internal/rdb/structure"
"io"
)
type StringObject struct {
value string
key string
}
func (o *StringObject) LoadFromBuffer(rd io.Reader, key string, _ byte) {
o.key = key
o.value = structure.ReadString(rd)
}
func (o *StringObject) Rewrite() []RedisCmd {
cmd := RedisCmd{}
cmd = append(cmd, "set", o.key, o.value)
return []RedisCmd{cmd}
}

View File

@ -0,0 +1,89 @@
package types
import (
	"github.com/alibaba/RedisShake/internal/log"
	"github.com/alibaba/RedisShake/internal/rdb/structure"
	"io"
	"strconv"
)
type ZSetEntry struct {
Member string
Score string
}
type ZsetObject struct {
key string
elements []ZSetEntry
}
func (o *ZsetObject) LoadFromBuffer(rd io.Reader, key string, typeByte byte) {
o.key = key
switch typeByte {
case rdbTypeZSet:
o.readZset(rd)
case rdbTypeZSet2:
o.readZset2(rd)
case rdbTypeZSetZiplist:
o.readZsetZiplist(rd)
case rdbTypeZSetListpack:
o.readZsetListpack(rd)
default:
log.Panicf("unknown zset type. typeByte=[%d]", typeByte)
}
}
func (o *ZsetObject) readZset(rd io.Reader) {
size := int(structure.ReadLength(rd))
o.elements = make([]ZSetEntry, size)
for i := 0; i < size; i++ {
o.elements[i].Member = structure.ReadString(rd)
		score := structure.ReadFloat(rd)
		o.elements[i].Score = strconv.FormatFloat(score, 'f', -1, 64) // keep full precision
}
}
func (o *ZsetObject) readZset2(rd io.Reader) {
size := int(structure.ReadLength(rd))
o.elements = make([]ZSetEntry, size)
for i := 0; i < size; i++ {
o.elements[i].Member = structure.ReadString(rd)
		score := structure.ReadDouble(rd)
		o.elements[i].Score = strconv.FormatFloat(score, 'f', -1, 64) // keep full precision
}
}
func (o *ZsetObject) readZsetZiplist(rd io.Reader) {
list := structure.ReadZipList(rd)
size := len(list)
if size%2 != 0 {
log.Panicf("zset listpack size is not even. size=[%d]", size)
}
o.elements = make([]ZSetEntry, size/2)
for i := 0; i < size; i += 2 {
o.elements[i/2].Member = list[i]
o.elements[i/2].Score = list[i+1]
}
}
func (o *ZsetObject) readZsetListpack(rd io.Reader) {
list := structure.ReadListpack(rd)
size := len(list)
if size%2 != 0 {
log.Panicf("zset listpack size is not even. size=[%d]", size)
}
o.elements = make([]ZSetEntry, size/2)
for i := 0; i < size; i += 2 {
o.elements[i/2].Member = list[i]
o.elements[i/2].Score = list[i+1]
}
}
func (o *ZsetObject) Rewrite() []RedisCmd {
cmds := make([]RedisCmd, len(o.elements))
for inx, ele := range o.elements {
cmd := RedisCmd{"zadd", o.key, ele.Score, ele.Member}
cmds[inx] = cmd
}
return cmds
}

View File

@ -0,0 +1,7 @@
package reader
import "github.com/alibaba/RedisShake/internal/entry"
type Reader interface {
StartRead() chan *entry.Entry
}

237
internal/reader/psync.go Normal file
View File

@ -0,0 +1,237 @@
package reader
import (
"bufio"
"errors"
"github.com/alibaba/RedisShake/internal/client"
"github.com/alibaba/RedisShake/internal/entry"
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/rdb"
"github.com/alibaba/RedisShake/internal/reader/rotate"
"github.com/alibaba/RedisShake/internal/statistics"
"io"
"io/ioutil"
"os"
"strconv"
"strings"
"time"
)
type psyncReader struct {
client *client.Redis
address string
ch chan *entry.Entry
DbId int
rd *bufio.Reader
receivedOffset int64
}
func NewPSyncReader(address string, password string, isTls bool) Reader {
r := new(psyncReader)
r.init(address, password, isTls)
return r
}
func (r *psyncReader) init(address string, password string, isTls bool) {
r.address = address
standalone := client.NewRedisClient(address, password, isTls)
r.client = standalone
r.rd = r.client.BufioReader()
log.Infof("psyncReader connected to redis successful. address=[%s]", address)
}
func (r *psyncReader) StartRead() chan *entry.Entry {
r.ch = make(chan *entry.Entry, 1024)
go func() {
r.clearDir()
r.saveRDB()
startOffset := r.receivedOffset
go r.saveAOF(r.rd)
go r.sendReplconfAck()
r.sendRDB()
		time.Sleep(1 * time.Second) // wait for saveAOF to create the aof file
r.sendAOF(startOffset)
}()
return r.ch
}
func (r *psyncReader) clearDir() {
files, err := ioutil.ReadDir("./")
if err != nil {
log.PanicError(err)
}
for _, f := range files {
if strings.HasSuffix(f.Name(), ".rdb") || strings.HasSuffix(f.Name(), ".aof") {
err = os.Remove(f.Name())
if err != nil {
log.PanicError(err)
}
log.Warnf("remove file. filename=[%s]", f.Name())
}
}
}
func (r *psyncReader) saveRDB() {
log.Infof("start save RDB. address=[%s]", r.address)
argv := []string{"replconf", "listening-port", "10007"} // 10007 is magic number
log.Infof("send %v", argv)
reply := r.client.DoWithStringReply(argv...)
if reply != "OK" {
log.Warnf("send replconf command to redis server failed. address=[%s], reply=[%s], error=[]", r.address, reply)
}
// send psync
argv = []string{"PSYNC", "?", "-1"}
r.client.Send(argv...)
log.Infof("send %v", argv)
	// format: \n\n\n+<reply>\r\n
	for {
		// \n\n\n+
		b, err := r.rd.ReadByte()
		if err != nil {
			log.PanicError(err)
		}
		if b == '\n' {
			continue
		}
		if b != '+' {
			log.Panicf("invalid psync reply. address=[%s], b=[%s]", r.address, string(b))
		}
		break
	}
reply, err := r.rd.ReadString('\n')
if err != nil {
log.PanicError(err)
}
reply = strings.TrimSpace(reply)
log.Infof("receive [%s]", reply)
masterOffset, err := strconv.Atoi(strings.Split(reply, " ")[2])
if err != nil {
log.PanicError(err)
}
r.receivedOffset = int64(masterOffset)
log.Infof("source db is doing bgsave. address=[%s]", r.address)
timeStart := time.Now()
// format: \n\n\n$<length>\r\n<rdb>
	for {
// \n\n\n$
b, err := r.rd.ReadByte()
if err != nil {
log.PanicError(err)
}
if b == '\n' {
continue
}
if b != '$' {
log.Panicf("invalid rdb format. address=[%s], b=[%s]", r.address, string(b))
}
break
}
log.Infof("source db bgsave finished. timeUsed=[%.2f]s, address=[%s]", time.Since(timeStart).Seconds(), r.address)
lengthStr, err := r.rd.ReadString('\n')
if err != nil {
log.PanicError(err)
}
lengthStr = strings.TrimSpace(lengthStr)
length, err := strconv.ParseInt(lengthStr, 10, 64)
if err != nil {
log.PanicError(err)
}
log.Infof("received rdb length. length=[%d]", length)
statistics.SetRDBFileSize(length)
// create rdb file
rdbFilePath := "dump.rdb"
log.Infof("create dump.rdb file. filename_path=[%s]", rdbFilePath)
rdbFileHandle, err := os.OpenFile(rdbFilePath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
log.PanicError(err)
}
	// read rdb
	var readTotal int64 = 0
	buf := make([]byte, 32*1024*1024)
	for readTotal < length {
		// never read past the end of the RDB payload: the master may already
		// be streaming incremental commands right behind it
		toRead := buf
		if remaining := length - readTotal; remaining < int64(len(buf)) {
			toRead = buf[:remaining]
		}
		n, err := r.rd.Read(toRead)
		if err != nil {
			log.PanicError(err)
		}
		readTotal += int64(n)
		statistics.UpdateRDBReceivedSize(readTotal)
		_, err = rdbFileHandle.Write(toRead[:n])
		if err != nil {
			log.PanicError(err)
		}
	}
err = rdbFileHandle.Close()
if err != nil {
log.PanicError(err)
}
log.Infof("save RDB finished. address=[%s], total_bytes=[%d]", r.address, readTotal)
}
func (r *psyncReader) saveAOF(rd io.Reader) {
log.Infof("start save AOF. address=[%s]", r.address)
// create aof file
aofWriter := rotate.NewAOFWriter(r.receivedOffset)
buf := make([]byte, 16*1024) // 16KB is enough for writing file
for {
n, err := rd.Read(buf)
if errors.Is(err, io.EOF) {
log.Infof("read aof finished. address=[%s]", r.address)
break
}
if err != nil {
log.PanicError(err)
}
r.receivedOffset += int64(n)
statistics.UpdateAOFReceivedOffset(r.receivedOffset)
aofWriter.Write(buf[:n])
}
aofWriter.Close()
}
func (r *psyncReader) sendRDB() {
// start parse rdb
log.Infof("start send RDB. address=[%s]", r.address)
rdbLoader := rdb.NewLoader("dump.rdb", r.ch)
r.DbId = rdbLoader.ParseRDB()
log.Infof("send RDB finished. address=[%s], repl-stream-db=[%d]", r.address, r.DbId)
}
func (r *psyncReader) sendAOF(offset int64) {
aofReader := rotate.NewAOFReader(offset)
r.client.SetBufioReader(bufio.NewReader(aofReader))
for {
argv := client.ArrayString(r.client.Receive())
log.Debugf("psyncReader receive. argv=%v", argv)
// select
if strings.EqualFold(argv[0], "select") {
DbId, err := strconv.Atoi(argv[1])
if err != nil {
log.PanicError(err)
}
r.DbId = DbId
continue
}
e := entry.NewEntry()
e.Argv = argv
e.DbId = r.DbId
e.Offset = aofReader.Offset()
r.ch <- e
}
}
func (r *psyncReader) sendReplconfAck() {
for range time.Tick(time.Millisecond * 100) {
// send ack receivedOffset
r.client.Send("replconf", "ack", strconv.FormatInt(r.receivedOffset, 10))
}
}
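
How the reader is meant to be consumed, as a sketch (a real startup also loads the config, initializes logging and statistics, and applies the filter; the addresses are placeholders):

package main
import (
	"github.com/alibaba/RedisShake/internal/reader"
	"github.com/alibaba/RedisShake/internal/writer"
)
func main() {
	rd := reader.NewPSyncReader("127.0.0.1:6379", "", false)
	wr := writer.NewRedisWriter("127.0.0.1:6380", "", false)
	for e := range rd.StartRead() {
		wr.Write(e) // each entry carries its DbId; the writer selects the db as needed
	}
}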

View File

@ -0,0 +1,80 @@
package rotate
import (
"fmt"
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/utils"
"io"
"os"
"time"
)
type AOFReader struct {
file *os.File
offset int64
pos int64
filename string
}
func NewAOFReader(offset int64) *AOFReader {
r := new(AOFReader)
r.openFile(offset)
return r
}
func (r *AOFReader) openFile(offset int64) {
r.filename = fmt.Sprintf("%d.aof", offset)
var err error
r.file, err = os.OpenFile(r.filename, os.O_RDONLY, 0644)
if err != nil {
log.PanicError(err)
}
r.offset = offset
r.pos = 0
log.Infof("AOFReader open file. aof_filename=[%s]", r.filename)
}
func (r *AOFReader) readNextFile(offset int64) {
filename := fmt.Sprintf("%d.aof", offset)
if utils.DoesFileExist(filename) {
r.Close()
r.openFile(offset)
}
}
func (r *AOFReader) Read(buf []byte) (n int, err error) {
n, err = r.file.Read(buf)
for err == io.EOF {
if r.filename != fmt.Sprintf("%d.aof", r.offset) {
r.readNextFile(r.offset)
}
time.Sleep(time.Millisecond * 10)
		_, err = r.file.Seek(0, io.SeekCurrent)
if err != nil {
log.PanicError(err)
}
n, err = r.file.Read(buf)
}
if err != nil {
log.PanicError(err)
}
r.offset += int64(n)
r.pos += int64(n)
return n, nil
}
func (r *AOFReader) Offset() int64 {
return r.offset
}
func (r *AOFReader) Close() {
if r.file == nil {
return
}
err := r.file.Close()
if err != nil {
log.PanicError(err)
}
r.file = nil
log.Infof("AOFReader close file. aof_filename=[%s]", r.filename)
}

View File

@ -0,0 +1,66 @@
package rotate
import (
"fmt"
"github.com/alibaba/RedisShake/internal/log"
"os"
)
const MaxFileSize = 1024 * 1024 * 1024 // 1 GiB
type AOFWriter struct {
file *os.File
offset int64
filename string
filesize int64
}
func NewAOFWriter(offset int64) *AOFWriter {
w := &AOFWriter{}
w.openFile(offset)
return w
}
func (w *AOFWriter) openFile(offset int64) {
w.filename = fmt.Sprintf("%d.aof", offset)
var err error
w.file, err = os.OpenFile(w.filename, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
if err != nil {
log.PanicError(err)
}
w.offset = offset
w.filesize = 0
log.Infof("AOFWriter open file. filename=[%s]", w.filename)
}
func (w *AOFWriter) Write(buf []byte) {
_, err := w.file.Write(buf)
if err != nil {
log.PanicError(err)
}
w.offset += int64(len(buf))
w.filesize += int64(len(buf))
if w.filesize > MaxFileSize {
w.Close()
w.openFile(w.offset)
}
err = w.file.Sync()
if err != nil {
log.PanicError(err)
}
}
func (w *AOFWriter) Close() {
if w.file == nil {
return
}
err := w.file.Sync()
if err != nil {
log.PanicError(err)
}
err = w.file.Close()
if err != nil {
log.PanicError(err)
}
log.Infof("AOFWriter close file. filename=[%s], filesize=[%d]", w.filename, w.filesize)
}

View File

@ -0,0 +1,92 @@
package statistics
import (
"github.com/alibaba/RedisShake/internal/config"
"github.com/alibaba/RedisShake/internal/log"
"time"
)
var (
// ID
entryId uint64
// rdb
rdbFileSize int64
rdbReceivedSize int64
rdbSendSize int64
// aof
aofReceivedOffset int64
aofAppliedOffset int64
// ops
allowEntriesCount int64
disallowEntriesCount int64
unansweredBytesCount uint64
)
func Init() {
go func() {
		seconds := config.Config.Advanced.LogInterval
		if seconds <= 0 {
			log.Infof("statistics disabled. seconds=[%d]", seconds)
			return
		}
for range time.Tick(time.Duration(seconds) * time.Second) {
if rdbFileSize == 0 {
continue
}
if rdbFileSize > rdbReceivedSize {
log.Infof("receiving rdb. percent=[%.2f]%, rdbFileSize=[%.3f]G, rdbReceivedSize=[%.3f]G",
float64(rdbReceivedSize)/float64(rdbFileSize)*100,
float64(rdbFileSize)/1024/1024/1024,
float64(rdbReceivedSize)/1024/1024/1024)
} else if rdbFileSize > rdbSendSize {
log.Infof("syncing rdb. percent=[%.2f]%%, allowOps=[%.2f], disallowOps=[%.2f], entryId=[%d], unansweredBytesCount=[%d]bytes, rdbFileSize=[%.3f]G, rdbSendSize=[%.3f]G",
float64(rdbSendSize)*100/float64(rdbFileSize),
float32(allowEntriesCount)/float32(seconds),
float32(disallowEntriesCount)/float32(seconds),
entryId,
unansweredBytesCount,
float64(rdbFileSize)/1024/1024/1024,
float64(rdbSendSize)/1024/1024/1024)
} else {
log.Infof("syncing aof. allowOps=[%.2f], disallowOps=[%.2f], entryId=[%d], unansweredBytesCount=[%d]bytes, diff=[%d], aofReceivedOffset=[%d], aofAppliedOffset=[%d]",
float32(allowEntriesCount)/float32(seconds),
float32(disallowEntriesCount)/float32(seconds),
entryId,
unansweredBytesCount,
aofReceivedOffset-aofAppliedOffset,
aofReceivedOffset,
aofAppliedOffset)
}
allowEntriesCount = 0
disallowEntriesCount = 0
}
}()
}
func UpdateEntryId(id uint64) {
entryId = id
}
func AddAllowEntriesCount() {
allowEntriesCount++
}
func AddDisallowEntriesCount() {
disallowEntriesCount++
}
func SetRDBFileSize(size int64) {
rdbFileSize = size
}
func UpdateRDBReceivedSize(size int64) {
rdbReceivedSize = size
}
func UpdateRDBSentSize(offset int64) {
rdbSendSize = offset
}
func UpdateAOFReceivedOffset(offset int64) {
aofReceivedOffset = offset
}
func UpdateAOFAppliedOffset(offset int64) {
aofAppliedOffset = offset
}
func UpdateUnansweredBytesCount(count uint64) {
unansweredBytesCount = count
}

View File

@ -79,10 +79,10 @@ var crc16tab = [256]uint16{
0x6e17, 0x7e36, 0x4e55, 0x5e74, 0x2e93, 0x3eb2, 0x0ed1, 0x1ef0,
}
func crc16(buf string) uint16 {
func Crc16(buf string) uint16 {
var crc uint16
for i := 0; i < len(buf); i++ {
crc = (crc << uint16(8)) ^ crc16tab[((crc>>uint16(8))^uint16(buf[i]))&0x00FF]
	for _, n := range []byte(buf) { // iterate bytes, not runes: keys may contain non-ASCII bytes
		crc = (crc << uint16(8)) ^ crc16tab[((crc>>uint16(8))^uint16(n))&0x00FF]
}
return crc
}
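
The exported `Crc16` is what Redis Cluster uses to map keys onto its 16384 hash slots. A hedged sketch of the full slot computation, including the "hash tag" rule (if the key contains a non-empty `{...}` section, only that section is hashed); the `keySlot` helper and the `strings` import are illustrative, not part of this commit:

```go
import "strings"

// keySlot maps a key to one of the 16384 cluster slots.
func keySlot(key string) uint16 {
	if i := strings.IndexByte(key, '{'); i >= 0 {
		// hash tags: hash only the part between the first '{' and the next '}'
		if j := strings.IndexByte(key[i+1:], '}'); j > 0 {
			key = key[i+1 : i+1+j]
		}
	}
	return Crc16(key) % 16384
}
```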

View File

@ -1,14 +1,6 @@
// Copyright 2016 CodisLabs. All Rights Reserved.
// Licensed under the MIT (MIT-LICENSE.txt) license.
package utils
package digest
import (
"encoding/binary"
"hash"
)
var crc64_table = [256]uint64{
var crc64Table = [256]uint64{
0x0000000000000000, 0x7ad870c830358979, 0xf5b0e190606b12f2, 0x8f689158505e9b8b,
0xc038e5739841b68f, 0xbae095bba8743ff6, 0x358804e3f82aa47d, 0x4f50742bc81f2d04,
0xab28ecb46814fe75, 0xd1f09c7c5821770c, 0x5e980d24087fec87, 0x24407dec384a65fe,
@ -78,29 +70,20 @@ type digest struct {
crc uint64
}
func (d *digest) update(p []byte) {
for _, b := range p {
d.crc = crc64_table[byte(d.crc)^b] ^ (d.crc >> 8)
}
}
func New() hash.Hash64 {
func NewDigest() *digest {
d := &digest{}
return d
}
func (d *digest) update(p []byte) {
for _, b := range p {
d.crc = crc64Table[byte(d.crc)^b] ^ (d.crc >> 8)
}
}
func (d *digest) Write(p []byte) (int, error) {
d.update(p)
return len(p), nil
}
func (d *digest) Sum(in []byte) []byte {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, d.crc)
return append(in, buf...)
}
func (d *digest) Sum64() uint64 { return d.crc }
func (d *digest) BlockSize() int { return 1 }
func (d *digest) Size() int { return 8 }
func (d *digest) Reset() { d.crc = 0 }
func (d *digest) Sum64() uint64 { return d.crc }
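
This is the CRC64 (Jones polynomial) that Redis uses for RDB checksums and for the footer of `DUMP`/`RESTORE` payloads. A hedged sketch of verifying such a footer with `NewDigest` (the helper is hypothetical; the payload layout is the value bytes, then a 2-byte RDB version, then an 8-byte little-endian CRC64 over everything before it):

```go
import "encoding/binary"

// checkDumpPayload reports whether a DUMP payload's trailing CRC64 matches.
func checkDumpPayload(payload []byte) bool {
	if len(payload) < 10 { // at minimum a 2-byte version plus an 8-byte checksum
		return false
	}
	body, footer := payload[:len(payload)-8], payload[len(payload)-8:]
	d := NewDigest()
	d.Write(body) // covers the value bytes and the RDB version
	return binary.LittleEndian.Uint64(footer) == d.Sum64()
}
```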

18
internal/utils/file.go Normal file
View File

@ -0,0 +1,18 @@
package utils
import (
"github.com/alibaba/RedisShake/internal/log"
"os"
)
func DoesFileExist(fileName string) bool {
_, err := os.Stat(fileName)
if err != nil {
if os.IsNotExist(err) {
return false
} else {
log.PanicError(err)
}
}
return true
}

View File

@ -0,0 +1,7 @@
package writer
import "github.com/alibaba/RedisShake/internal/entry"
type Writer interface {
Write(entry *entry.Entry)
}
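
Any sink that implements this single-method interface can plug into the sync pipeline. A hedged sketch of a trivial implementation (hypothetical, for illustration; `entry.Entry` exposes `Argv`, as used elsewhere in this commit):

```go
import "fmt"

// debugWriter prints each entry instead of sending it to Redis.
type debugWriter struct{}

func (debugWriter) Write(e *entry.Entry) {
	fmt.Println(e.Argv)
}
```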

78
internal/writer/redis.go Normal file
View File

@ -0,0 +1,78 @@
package writer
import (
"github.com/alibaba/RedisShake/internal/client"
"github.com/alibaba/RedisShake/internal/config"
"github.com/alibaba/RedisShake/internal/entry"
"github.com/alibaba/RedisShake/internal/log"
"github.com/alibaba/RedisShake/internal/statistics"
"strconv"
"sync/atomic"
"time"
)
type redisWriter struct {
client *client.Redis
DbId int
chWaitReply chan *entry.Entry
	UpdateUnansweredBytesCount uint64 // bytes sent but not yet acknowledged by the target
}
func NewRedisWriter(address string, password string, isTls bool) Writer {
rw := new(redisWriter)
rw.client = client.NewRedisClient(address, password, isTls)
log.Infof("redisWriter connected to redis successful. address=[%s]", address)
rw.chWaitReply = make(chan *entry.Entry, config.Config.Advanced.PipelineCountLimit)
go rw.flushInterval()
return rw
}
func (w *redisWriter) Write(e *entry.Entry) {
	// switch db if needed
if w.DbId != e.DbId {
w.switchDbTo(e.DbId)
}
// send
buf := client.EncodeArgv(e.Argv)
e.EncodedSize = uint64(buf.Len())
	// backpressure: block until the in-flight bytes fit within target_redis_client_max_querybuf_len
	for e.EncodedSize+atomic.LoadUint64(&w.UpdateUnansweredBytesCount) > config.Config.Advanced.TargetRedisClientMaxQuerybufLen {
time.Sleep(1 * time.Millisecond)
}
atomic.AddUint64(&w.UpdateUnansweredBytesCount, e.EncodedSize)
w.client.SendBytes(buf.Bytes())
w.chWaitReply <- e
}
func (w *redisWriter) switchDbTo(newDbId int) {
w.client.Send("select", strconv.Itoa(newDbId))
w.DbId = newDbId
}
func (w *redisWriter) flushInterval() {
for {
select {
case e := <-w.chWaitReply:
reply, err := w.client.Receive()
log.Debugf("redisWriter received reply. argv=%v, reply=%v, error=[%v]", e.Argv, reply, err)
if err != nil {
if err.Error() == "BUSYKEY Target key name already exists." {
if config.Config.Advanced.RDBRestoreCommandBehavior == "skip" {
log.Warnf("redisWriter received BUSYKEY reply. argv=%v", e.Argv)
} else if config.Config.Advanced.RDBRestoreCommandBehavior == "panic" {
log.Panicf("redisWriter received BUSYKEY reply. argv=%v", e.Argv)
}
} else {
log.Panicf("redisWriter received error. error=[%v], argv=%v", err, e.Argv)
}
}
			atomic.AddUint64(&w.UpdateUnansweredBytesCount, ^(e.EncodedSize - 1)) // atomic subtract: adding ^(x-1) equals adding -x
statistics.UpdateEntryId(e.Id)
statistics.UpdateAOFAppliedOffset(e.Offset)
statistics.UpdateUnansweredBytesCount(atomic.LoadUint64(&w.UpdateUnansweredBytesCount))
}
}
}
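
A note on the `^(e.EncodedSize - 1)` expression above: `sync/atomic` has no unsigned subtract, so the code adds the two's complement instead; adding `^(x-1)` wraps around to subtracting `x`. A self-contained demonstration:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var inFlight uint64 = 100
	// subtract 30: adding ^(30-1) is the same as adding -30 in two's complement
	atomic.AddUint64(&inFlight, ^(uint64(30) - 1))
	fmt.Println(inFlight) // 70
}
```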

View File

@ -0,0 +1,94 @@
package writer
import (
"github.com/alibaba/RedisShake/internal/entry"
"github.com/alibaba/RedisShake/internal/log"
"strconv"
"strings"
)
const KeySlots = 16384
type RedisClusterWriter struct {
client []Writer
router [KeySlots]Writer
}
func NewRedisClusterWriter(addresses []string, password string, isTls bool) Writer {
rw := new(RedisClusterWriter)
rw.client = make([]Writer, len(addresses))
for inx, address := range addresses {
rw.client[inx] = NewRedisWriter(address, password, isTls)
}
log.Infof("redisClusterWriter connected to redis cluster successful. addresses=%v", addresses)
rw.loadClusterNodes()
return rw
}
func (r *RedisClusterWriter) loadClusterNodes() {
for _, writer := range r.client {
standalone := writer.(*redisWriter)
reply := standalone.client.DoWithStringReply("cluster", "nodes")
reply = strings.TrimSpace(reply)
for _, line := range strings.Split(reply, "\n") {
line = strings.TrimSpace(line)
words := strings.Split(line, " ")
if strings.Contains(words[2], "myself") {
log.Infof("redisClusterWriter load cluster nodes. line=%v", line)
for i := 8; i < len(words); i++ {
words[i] = strings.TrimSpace(words[i])
var start, end int
var err error
if strings.Contains(words[i], "-") {
seg := strings.Split(words[i], "-")
start, err = strconv.Atoi(seg[0])
if err != nil {
log.PanicError(err)
}
end, err = strconv.Atoi(seg[1])
if err != nil {
log.PanicError(err)
}
} else {
start, err = strconv.Atoi(words[i])
if err != nil {
log.PanicError(err)
}
end = start
}
for j := start; j <= end; j++ {
if r.router[j] != nil {
log.Panicf("redisClusterWriter: slot %d already occupied", j)
}
r.router[j] = standalone
}
}
}
}
}
for i := 0; i < KeySlots; i++ {
if r.router[i] == nil {
log.Panicf("redisClusterWriter: slot %d not occupied", i)
}
}
}
func (r *RedisClusterWriter) Write(entry *entry.Entry) {
if len(entry.Slots) == 0 {
for _, writer := range r.client {
writer.Write(entry)
}
return
}
lastSlot := -1
for _, slot := range entry.Slots {
if lastSlot == -1 {
lastSlot = slot
}
if slot != lastSlot {
log.Panicf("CROSSSLOT Keys in request don't hash to the same slot. argv=%v", entry.Argv)
}
}
r.router[lastSlot].Write(entry)
}
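
For reference, `loadClusterNodes` parses the `CLUSTER NODES` text format: field 3 (index 2) holds the flags checked for `myself`, and slot assignments start at field 9 (index 8), either as single slots or as `start-end` ranges. A representative line (node ID and address are illustrative):

```
07c37dfeb235213a872192d90877d0cd55635b91 127.0.0.1:30004@31004 myself,master - 0 1426238317239 4 connected 0-5460 5461 10923-16383
```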

View File

@ -12,54 +12,6 @@ furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
---
The MIT License (MIT)
Copyright (c) 2016 CodisLabs
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
---
The MIT License (MIT)
Copyright (c) 2014 Wandoujia Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE

38
redis-shake.toml Normal file
View File

@ -0,0 +1,38 @@
[source] # standalone
address = "127.0.0.1:6379"
password = ""
tls = false
[target]
type = "standalone" # standalone or cluster
addresses = ["127.0.0.1:6380"]
password = ""
tls = false
[advanced]
dir = "data"
# runtime.GOMAXPROCS, 0 means use runtime.NumCPU() cpu cores
ncpu = 3
# log
log_file = "redis-shake.log"
log_level = "info" # debug, info or warn
log_interval = 5 # in seconds
# redis-shake reads each key and value from the rdb file and uses the RESTORE
# command to create the key in the target redis. RESTORE returns a "BUSYKEY
# Target key name already exists" error when the key already exists. You can
# use this configuration item to change the default behavior of restore:
# panic: redis-shake stops when it meets the BUSYKEY error.
# rewrite: redis-shake replaces the existing key with the new value.
# skip: redis-shake skips restoring the key when it meets the BUSYKEY error.
rdb_restore_command_behavior = "rewrite" # panic, rewrite or skip
# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default, normally 1 GB.
target_redis_client_max_querybuf_len = 1024_000_000
# In the Redis protocol, bulk requests (that is, elements representing single
# strings) are normally limited to 512 MB.
target_redis_proto_max_bulk_len = 512_000_000
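
A hedged sketch of how a caller might load this file, using `github.com/BurntSushi/toml` for illustration (the library choice and struct layout are assumptions, not necessarily what `internal/config` does):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type conf struct {
	Source struct {
		Address  string `toml:"address"`
		Password string `toml:"password"`
		TLS      bool   `toml:"tls"`
	} `toml:"source"`
	Target struct {
		Type      string   `toml:"type"`
		Addresses []string `toml:"addresses"`
		Password  string   `toml:"password"`
		TLS       bool     `toml:"tls"`
	} `toml:"target"`
}

func main() {
	var c conf
	if _, err := toml.DecodeFile("redis-shake.toml", &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Source.Address, "->", c.Target.Addresses)
}
```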

Binary files not shown (6 deleted doc images: 21 KiB, 182 KiB, 28 KiB, 60 KiB, 73 KiB, 80 KiB).
View File

@ -1,4 +0,0 @@
build hypervisor
```shell
gcc -Wall -O3 scripts/hypervisor.c -o hypervisor -lpthread
```

View File

@ -0,0 +1,24 @@
{
"CAT": {
"summary": "List the ACL categories or the commands inside a category",
"complexity": "O(1) since the categories and commands are a fixed set.",
"group": "server",
"since": "6.0.0",
"arity": -2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
],
"arguments": [
{
"name": "categoryname",
"type": "string",
"optional": true
}
]
}
}

View File

@ -0,0 +1,25 @@
{
"DELUSER": {
"summary": "Remove the specified ACL users and the associated rules",
"complexity": "O(1) amortized time considering the typical user.",
"group": "server",
"since": "6.0.0",
"arity": -3,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
],
"arguments": [
{
"name": "username",
"type": "string",
"multiple": true
}
]
}
}

View File

@ -0,0 +1,35 @@
{
"DRYRUN": {
"summary": "Returns whether the user can execute the given command without executing the command.",
"complexity": "O(1).",
"group": "server",
"since": "7.0.0",
"arity": -4,
"container": "ACL",
"function": "aclCommand",
"history": [],
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
],
"arguments": [
{
"name": "username",
"type": "string"
},
{
"name": "command",
"type": "string"
},
{
"name": "arg",
"type": "string",
"optional": true,
"multiple": true
}
]
}
}

View File

@ -0,0 +1,24 @@
{
"GENPASS": {
"summary": "Generate a pseudorandom secure password to use for ACL users",
"complexity": "O(1)",
"group": "server",
"since": "6.0.0",
"arity": -2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
],
"arguments": [
{
"name": "bits",
"type": "integer",
"optional": true
}
]
}
}

View File

@ -0,0 +1,34 @@
{
"GETUSER": {
"summary": "Get the rules for a specific ACL user",
"complexity": "O(N). Where N is the number of password, command and pattern rules that the user has.",
"group": "server",
"since": "6.0.0",
"arity": 3,
"container": "ACL",
"function": "aclCommand",
"history": [
[
"6.2.0",
"Added Pub/Sub channel patterns."
],
[
"7.0.0",
"Added selectors and changed the format of key and channel patterns from a list to their rule representation."
]
],
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
],
"arguments": [
{
"name": "username",
"type": "string"
}
]
}
}

View File

@ -0,0 +1,16 @@
{
"HELP": {
"summary": "Show helpful text about the different subcommands",
"complexity": "O(1)",
"group": "server",
"since": "6.0.0",
"arity": 2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"LOADING",
"STALE",
"SENTINEL"
]
}
}

View File

@ -0,0 +1,18 @@
{
"LIST": {
"summary": "List the current ACL rules in ACL config file format",
"complexity": "O(N). Where N is the number of configured users.",
"group": "server",
"since": "6.0.0",
"arity": 2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
]
}
}

View File

@ -0,0 +1,18 @@
{
"LOAD": {
"summary": "Reload the ACLs from the configured ACL file",
"complexity": "O(N). Where N is the number of configured users.",
"group": "server",
"since": "6.0.0",
"arity": 2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
]
}
}

View File

@ -0,0 +1,36 @@
{
"LOG": {
"summary": "List latest events denied because of ACLs in place",
"complexity": "O(N) with N being the number of entries shown.",
"group": "server",
"since": "6.0.0",
"arity": -2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
],
"arguments": [
{
"name": "operation",
"type": "oneof",
"optional": true,
"arguments": [
{
"name": "count",
"type": "integer"
},
{
"name": "reset",
"type": "pure-token",
"token": "RESET"
}
]
}
]
}
}

View File

@ -0,0 +1,18 @@
{
"SAVE": {
"summary": "Save the current ACL rules in the configured ACL file",
"complexity": "O(N). Where N is the number of configured users.",
"group": "server",
"since": "6.0.0",
"arity": 2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
]
}
}

View File

@ -0,0 +1,40 @@
{
"SETUSER": {
"summary": "Modify or create the rules for a specific ACL user",
"complexity": "O(N). Where N is the number of rules provided.",
"group": "server",
"since": "6.0.0",
"arity": -3,
"container": "ACL",
"function": "aclCommand",
"history": [
[
"6.2.0",
"Added Pub/Sub channel patterns."
],
[
"7.0.0",
"Added selectors and key based permissions."
]
],
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
],
"arguments": [
{
"name": "username",
"type": "string"
},
{
"name": "rule",
"type": "string",
"optional": true,
"multiple": true
}
]
}
}

View File

@ -0,0 +1,18 @@
{
"USERS": {
"summary": "List the username of all the configured ACL rules",
"complexity": "O(N). Where N is the number of configured users.",
"group": "server",
"since": "6.0.0",
"arity": 2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"ADMIN",
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
]
}
}

View File

@ -0,0 +1,17 @@
{
"WHOAMI": {
"summary": "Return the name of the user associated to the current connection",
"complexity": "O(1)",
"group": "server",
"since": "6.0.0",
"arity": 2,
"container": "ACL",
"function": "aclCommand",
"command_flags": [
"NOSCRIPT",
"LOADING",
"STALE",
"SENTINEL"
]
}
}

12
scripts/commands/acl.json Normal file
View File

@ -0,0 +1,12 @@
{
"ACL": {
"summary": "A container for Access List Control commands ",
"complexity": "Depends on subcommand.",
"group": "server",
"since": "6.0.0",
"arity": -2,
"command_flags": [
"SENTINEL"
]
}
}

View File

@ -0,0 +1,49 @@
{
"APPEND": {
"summary": "Append a value to a key",
"complexity": "O(1). The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation.",
"group": "string",
"since": "2.0.0",
"arity": 3,
"function": "appendCommand",
"command_flags": [
"WRITE",
"DENYOOM",
"FAST"
],
"acl_categories": [
"STRING"
],
"key_specs": [
{
"flags": [
"RW",
"INSERT"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0
},
{
"name": "value",
"type": "string"
}
]
}
}
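
The `key_specs` block is machine-readable key-location metadata: `begin_search.index.pos` gives the position of the first key in the argv, and `find_keys.range` walks from there (`lastkey` is relative to it, with negative values counting from the end of the argv; `limit` is omitted here for brevity). A hedged sketch of the range algorithm (hypothetical helper, not part of this commit):

```go
// rangeKeys extracts keys per a "range"-style key spec.
// For APPEND (pos=1, lastkey=0, step=1) it returns argv[1].
func rangeKeys(argv []string, pos, lastkey, step int) []string {
	last := pos + lastkey
	if lastkey < 0 {
		last = len(argv) + lastkey // e.g. BLPOP: lastkey=-2 stops before the timeout
	}
	var keys []string
	for i := pos; i <= last && i < len(argv); i += step {
		keys = append(keys, argv[i])
	}
	return keys
}
```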

View File

@ -0,0 +1,16 @@
{
"ASKING": {
"summary": "Sent by cluster clients after an -ASK redirect",
"complexity": "O(1)",
"group": "cluster",
"since": "3.0.0",
"arity": 1,
"function": "askingCommand",
"command_flags": [
"FAST"
],
"acl_categories": [
"CONNECTION"
]
}
}

View File

@ -0,0 +1,40 @@
{
"AUTH": {
"summary": "Authenticate to the server",
"complexity": "O(N) where N is the number of passwords defined for the user",
"group": "connection",
"since": "1.0.0",
"arity": -2,
"function": "authCommand",
"history": [
[
"6.0.0",
"Added ACL style (username and password)."
]
],
"command_flags": [
"NOSCRIPT",
"LOADING",
"STALE",
"FAST",
"NO_AUTH",
"SENTINEL",
"ALLOW_BUSY"
],
"acl_categories": [
"CONNECTION"
],
"arguments": [
{
"name": "username",
"type": "string",
"optional": true,
"since": "6.0.0"
},
{
"name": "password",
"type": "string"
}
]
}
}

View File

@ -0,0 +1,15 @@
{
"BGREWRITEAOF": {
"summary": "Asynchronously rewrite the append-only file",
"complexity": "O(1)",
"group": "server",
"since": "1.0.0",
"arity": 1,
"function": "bgrewriteaofCommand",
"command_flags": [
"NO_ASYNC_LOADING",
"ADMIN",
"NOSCRIPT"
]
}
}

View File

@ -0,0 +1,30 @@
{
"BGSAVE": {
"summary": "Asynchronously save the dataset to disk",
"complexity": "O(1)",
"group": "server",
"since": "1.0.0",
"arity": -1,
"function": "bgsaveCommand",
"history": [
[
"3.2.2",
"Added the `SCHEDULE` option."
]
],
"command_flags": [
"NO_ASYNC_LOADING",
"ADMIN",
"NOSCRIPT"
],
"arguments": [
{
"name": "schedule",
"token": "SCHEDULE",
"type": "pure-token",
"optional": true,
"since": "3.2.2"
}
]
}
}

View File

@ -0,0 +1,82 @@
{
"BITCOUNT": {
"summary": "Count set bits in a string",
"complexity": "O(N)",
"group": "bitmap",
"since": "2.6.0",
"arity": -2,
"function": "bitcountCommand",
"history": [
[
"7.0.0",
"Added the `BYTE|BIT` option."
]
],
"command_flags": [
"READONLY"
],
"acl_categories": [
"BITMAP"
],
"key_specs": [
{
"flags": [
"RO",
"ACCESS"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0
},
{
"name": "index",
"type": "block",
"optional": true,
"arguments": [
{
"name": "start",
"type": "integer"
},
{
"name": "end",
"type": "integer"
},
{
"name": "index_unit",
"type": "oneof",
"optional": true,
"since": "7.0.0",
"arguments": [
{
"name": "byte",
"type": "pure-token",
"token": "BYTE"
},
{
"name": "bit",
"type": "pure-token",
"token": "BIT"
}
]
}
]
}
]
}
}

View File

@ -0,0 +1,143 @@
{
"BITFIELD": {
"summary": "Perform arbitrary bitfield integer operations on strings",
"complexity": "O(1) for each subcommand specified",
"group": "bitmap",
"since": "3.2.0",
"arity": -2,
"function": "bitfieldCommand",
"get_keys_function": "bitfieldGetKeys",
"command_flags": [
"WRITE",
"DENYOOM"
],
"acl_categories": [
"BITMAP"
],
"key_specs": [
{
"notes": "This command allows both access and modification of the key",
"flags": [
"RW",
"UPDATE",
"ACCESS",
"VARIABLE_FLAGS"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0
},
{
"name": "operation",
"type": "oneof",
"multiple": true,
"arguments": [
{
"token": "GET",
"name": "encoding_offset",
"type": "block",
"arguments": [
{
"name": "encoding",
"type": "string"
},
{
"name": "offset",
"type": "integer"
}
]
},
{
"name": "write",
"type": "block",
"arguments": [
{
"token": "OVERFLOW",
"name": "wrap_sat_fail",
"type": "oneof",
"optional": true,
"arguments": [
{
"name": "wrap",
"type": "pure-token",
"token": "WRAP"
},
{
"name": "sat",
"type": "pure-token",
"token": "SAT"
},
{
"name": "fail",
"type": "pure-token",
"token": "FAIL"
}
]
},
{
"name": "write_operation",
"type": "oneof",
"arguments": [
{
"token": "SET",
"name": "encoding_offset_value",
"type": "block",
"arguments": [
{
"name": "encoding",
"type": "string"
},
{
"name": "offset",
"type": "integer"
},
{
"name": "value",
"type": "integer"
}
]
},
{
"token": "INCRBY",
"name": "encoding_offset_increment",
"type": "block",
"arguments": [
{
"name": "encoding",
"type": "string"
},
{
"name": "offset",
"type": "integer"
},
{
"name": "increment",
"type": "integer"
}
]
}
]
}
]
}
]
}
]
}
}

View File

@ -0,0 +1,61 @@
{
"BITFIELD_RO": {
"summary": "Perform arbitrary bitfield integer operations on strings. Read-only variant of BITFIELD",
"complexity": "O(1) for each subcommand specified",
"group": "bitmap",
"since": "6.0.0",
"arity": -2,
"function": "bitfieldroCommand",
"command_flags": [
"READONLY",
"FAST"
],
"acl_categories": [
"BITMAP"
],
"key_specs": [
{
"flags": [
"RO",
"ACCESS"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0
},
{
"token": "GET",
"name": "encoding_offset",
"type": "block",
"multiple": true,
"multiple_token": true,
"arguments": [
{
"name": "encoding",
"type": "string"
},
{
"name": "offset",
"type": "integer"
}
]
}
]
}
}

View File

@ -0,0 +1,72 @@
{
"BITOP": {
"summary": "Perform bitwise operations between strings",
"complexity": "O(N)",
"group": "bitmap",
"since": "2.6.0",
"arity": -4,
"function": "bitopCommand",
"command_flags": [
"WRITE",
"DENYOOM"
],
"acl_categories": [
"BITMAP"
],
"key_specs": [
{
"flags": [
"OW",
"UPDATE"
],
"begin_search": {
"index": {
"pos": 2
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
},
{
"flags": [
"RO",
"ACCESS"
],
"begin_search": {
"index": {
"pos": 3
}
},
"find_keys": {
"range": {
"lastkey": -1,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "operation",
"type": "string"
},
{
"name": "destkey",
"type": "key",
"key_spec_index": 0
},
{
"name": "key",
"type": "key",
"key_spec_index": 1,
"multiple": true
}
]
}
}

View File

@ -0,0 +1,93 @@
{
"BITPOS": {
"summary": "Find first bit set or clear in a string",
"complexity": "O(N)",
"group": "bitmap",
"since": "2.8.7",
"arity": -3,
"function": "bitposCommand",
"history": [
[
"7.0.0",
"Added the `BYTE|BIT` option."
]
],
"command_flags": [
"READONLY"
],
"acl_categories": [
"BITMAP"
],
"key_specs": [
{
"flags": [
"RO",
"ACCESS"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0
},
{
"name": "bit",
"type": "integer"
},
{
"name": "index",
"type": "block",
"optional": true,
"arguments": [
{
"name": "start",
"type": "integer"
},
{
"name": "end_index",
"type": "block",
"optional": true,
"arguments": [
{
"name": "end",
"type": "integer"
},
{
"name": "index_unit",
"type": "oneof",
"optional": true,
"since": "7.0.0",
"arguments": [
{
"name": "byte",
"type": "pure-token",
"token": "BYTE"
},
{
"name": "bit",
"type": "pure-token",
"token": "BIT"
}
]
}
]
}
]
}
]
}
}

View File

@ -0,0 +1,106 @@
{
"BLMOVE": {
"summary": "Pop an element from a list, push it to another list and return it; or block until one is available",
"complexity": "O(1)",
"group": "list",
"since": "6.2.0",
"arity": 6,
"function": "blmoveCommand",
"command_flags": [
"WRITE",
"DENYOOM",
"NOSCRIPT",
"BLOCKING"
],
"acl_categories": [
"LIST"
],
"key_specs": [
{
"flags": [
"RW",
"ACCESS",
"DELETE"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
},
{
"flags": [
"RW",
"INSERT"
],
"begin_search": {
"index": {
"pos": 2
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "source",
"type": "key",
"key_spec_index": 0
},
{
"name": "destination",
"type": "key",
"key_spec_index": 1
},
{
"name": "wherefrom",
"type": "oneof",
"arguments": [
{
"name": "left",
"type": "pure-token",
"token": "LEFT"
},
{
"name": "right",
"type": "pure-token",
"token": "RIGHT"
}
]
},
{
"name": "whereto",
"type": "oneof",
"arguments": [
{
"name": "left",
"type": "pure-token",
"token": "LEFT"
},
{
"name": "right",
"type": "pure-token",
"token": "RIGHT"
}
]
},
{
"name": "timeout",
"type": "double"
}
]
}
}

View File

@ -0,0 +1,77 @@
{
"BLMPOP": {
"summary": "Pop elements from a list, or block until one is available",
"complexity": "O(N+M) where N is the number of provided keys and M is the number of elements returned.",
"group": "list",
"since": "7.0.0",
"arity": -5,
"function": "blmpopCommand",
"get_keys_function": "blmpopGetKeys",
"command_flags": [
"WRITE",
"BLOCKING"
],
"acl_categories": [
"LIST"
],
"key_specs": [
{
"flags": [
"RW",
"ACCESS",
"DELETE"
],
"begin_search": {
"index": {
"pos": 2
}
},
"find_keys": {
"keynum": {
"keynumidx": 0,
"firstkey": 1,
"step": 1
}
}
}
],
"arguments": [
{
"name": "timeout",
"type": "double"
},
{
"name": "numkeys",
"type": "integer"
},
{
"name": "key",
"type": "key",
"key_spec_index": 0,
"multiple": true
},
{
"name": "where",
"type": "oneof",
"arguments": [
{
"name": "left",
"type": "pure-token",
"token": "LEFT"
},
{
"name": "right",
"type": "pure-token",
"token": "RIGHT"
}
]
},
{
"token": "COUNT",
"name": "count",
"type": "integer",
"optional": true
}
]
}
}
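
BLMPOP uses the other `find_keys` style, `keynum`: the argument at `begin_search` pos + `keynumidx` holds the key count, and the keys start at pos + `firstkey`. A hedged sketch (hypothetical helper; bounds checks on malformed argv omitted for brevity):

```go
import "strconv"

// keynumKeys extracts keys per a "keynum"-style key spec.
// For "BLMPOP 0 2 l1 l2 LEFT" (pos=2, keynumidx=0, firstkey=1, step=1)
// it reads numkeys from argv[2] and returns argv[3:5].
func keynumKeys(argv []string, pos, keynumidx, firstkey, step int) ([]string, error) {
	n, err := strconv.Atoi(argv[pos+keynumidx])
	if err != nil {
		return nil, err
	}
	keys := make([]string, 0, n)
	for i := 0; i < n; i++ {
		keys = append(keys, argv[pos+firstkey+i*step])
	}
	return keys, nil
}
```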

View File

@ -0,0 +1,57 @@
{
"BLPOP": {
"summary": "Remove and get the first element in a list, or block until one is available",
"complexity": "O(N) where N is the number of provided keys.",
"group": "list",
"since": "2.0.0",
"arity": -3,
"function": "blpopCommand",
"history": [
[
"6.0.0",
"`timeout` is interpreted as a double instead of an integer."
]
],
"command_flags": [
"WRITE",
"NOSCRIPT",
"BLOCKING"
],
"acl_categories": [
"LIST"
],
"key_specs": [
{
"flags": [
"RW",
"ACCESS",
"DELETE"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": -2,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0,
"multiple": true
},
{
"name": "timeout",
"type": "double"
}
]
}
}

View File

@ -0,0 +1,57 @@
{
"BRPOP": {
"summary": "Remove and get the last element in a list, or block until one is available",
"complexity": "O(N) where N is the number of provided keys.",
"group": "list",
"since": "2.0.0",
"arity": -3,
"function": "brpopCommand",
"history": [
[
"6.0.0",
"`timeout` is interpreted as a double instead of an integer."
]
],
"command_flags": [
"WRITE",
"NOSCRIPT",
"BLOCKING"
],
"acl_categories": [
"LIST"
],
"key_specs": [
{
"flags": [
"RW",
"ACCESS",
"DELETE"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": -2,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0,
"multiple": true
},
{
"name": "timeout",
"type": "double"
}
]
}
}

View File

@ -0,0 +1,85 @@
{
"BRPOPLPUSH": {
"summary": "Pop an element from a list, push it to another list and return it; or block until one is available",
"complexity": "O(1)",
"group": "list",
"since": "2.2.0",
"arity": 4,
"function": "brpoplpushCommand",
"history": [
[
"6.0.0",
"`timeout` is interpreted as a double instead of an integer."
]
],
"deprecated_since": "6.2.0",
"replaced_by": "`BLMOVE` with the `RIGHT` and `LEFT` arguments",
"doc_flags": [
"DEPRECATED"
],
"command_flags": [
"WRITE",
"DENYOOM",
"NOSCRIPT",
"BLOCKING"
],
"acl_categories": [
"LIST"
],
"key_specs": [
{
"flags": [
"RW",
"ACCESS",
"DELETE"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
},
{
"flags": [
"RW",
"INSERT"
],
"begin_search": {
"index": {
"pos": 2
}
},
"find_keys": {
"range": {
"lastkey": 0,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "source",
"type": "key",
"key_spec_index": 0
},
{
"name": "destination",
"type": "key",
"key_spec_index": 1
},
{
"name": "timeout",
"type": "double"
}
]
}
}

View File

@ -0,0 +1,77 @@
{
"BZMPOP": {
"summary": "Remove and return members with scores in a sorted set or block until one is available",
"complexity": "O(K) + O(N*log(M)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped.",
"group": "sorted_set",
"since": "7.0.0",
"arity": -5,
"function": "bzmpopCommand",
"get_keys_function": "blmpopGetKeys",
"command_flags": [
"WRITE",
"BLOCKING"
],
"acl_categories": [
"SORTEDSET"
],
"key_specs": [
{
"flags": [
"RW",
"ACCESS",
"DELETE"
],
"begin_search": {
"index": {
"pos": 2
}
},
"find_keys": {
"keynum": {
"keynumidx": 0,
"firstkey": 1,
"step": 1
}
}
}
],
"arguments": [
{
"name": "timeout",
"type": "double"
},
{
"name": "numkeys",
"type": "integer"
},
{
"name": "key",
"type": "key",
"key_spec_index": 0,
"multiple": true
},
{
"name": "where",
"type": "oneof",
"arguments": [
{
"name": "min",
"type": "pure-token",
"token": "MIN"
},
{
"name": "max",
"type": "pure-token",
"token": "MAX"
}
]
},
{
"token": "COUNT",
"name": "count",
"type": "integer",
"optional": true
}
]
}
}

View File

@ -0,0 +1,58 @@
{
"BZPOPMAX": {
"summary": "Remove and return the member with the highest score from one or more sorted sets, or block until one is available",
"complexity": "O(log(N)) with N being the number of elements in the sorted set.",
"group": "sorted_set",
"since": "5.0.0",
"arity": -3,
"function": "bzpopmaxCommand",
"history": [
[
"6.0.0",
"`timeout` is interpreted as a double instead of an integer."
]
],
"command_flags": [
"WRITE",
"NOSCRIPT",
"FAST",
"BLOCKING"
],
"acl_categories": [
"SORTEDSET"
],
"key_specs": [
{
"flags": [
"RW",
"ACCESS",
"DELETE"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": -2,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0,
"multiple": true
},
{
"name": "timeout",
"type": "double"
}
]
}
}

View File

@ -0,0 +1,58 @@
{
"BZPOPMIN": {
"summary": "Remove and return the member with the lowest score from one or more sorted sets, or block until one is available",
"complexity": "O(log(N)) with N being the number of elements in the sorted set.",
"group": "sorted_set",
"since": "5.0.0",
"arity": -3,
"function": "bzpopminCommand",
"history": [
[
"6.0.0",
"`timeout` is interpreted as a double instead of an integer."
]
],
"command_flags": [
"WRITE",
"NOSCRIPT",
"FAST",
"BLOCKING"
],
"acl_categories": [
"SORTEDSET"
],
"key_specs": [
{
"flags": [
"RW",
"ACCESS",
"DELETE"
],
"begin_search": {
"index": {
"pos": 1
}
},
"find_keys": {
"range": {
"lastkey": -2,
"step": 1,
"limit": 0
}
}
}
],
"arguments": [
{
"name": "key",
"type": "key",
"key_spec_index": 0,
"multiple": true
},
{
"name": "timeout",
"type": "double"
}
]
}
}

View File

@ -0,0 +1,37 @@
{
"CACHING": {
"summary": "Instruct the server about tracking or not keys in the next request",
"complexity": "O(1)",
"group": "connection",
"since": "6.0.0",
"arity": 3,
"container": "CLIENT",
"function": "clientCommand",
"command_flags": [
"NOSCRIPT",
"LOADING",
"STALE"
],
"acl_categories": [
"CONNECTION"
],
"arguments": [
{
"name": "mode",
"type": "oneof",
"arguments": [
{
"name": "yes",
"type": "pure-token",
"token": "YES"
},
{
"name": "no",
"type": "pure-token",
"token": "NO"
}
]
}
]
}
}

View File

@ -0,0 +1,19 @@
{
"GETNAME": {
"summary": "Get the current connection name",
"complexity": "O(1)",
"group": "connection",
"since": "2.6.9",
"arity": 2,
"container": "CLIENT",
"function": "clientCommand",
"command_flags": [
"NOSCRIPT",
"LOADING",
"STALE"
],
"acl_categories": [
"CONNECTION"
]
}
}

View File

@ -0,0 +1,19 @@
{
"GETREDIR": {
"summary": "Get tracking notifications redirection client ID if any",
"complexity": "O(1)",
"group": "connection",
"since": "6.0.0",
"arity": 2,
"container": "CLIENT",
"function": "clientCommand",
"command_flags": [
"NOSCRIPT",
"LOADING",
"STALE"
],
"acl_categories": [
"CONNECTION"
]
}
}

View File

@ -0,0 +1,18 @@
{
"HELP": {
"summary": "Show helpful text about the different subcommands",
"complexity": "O(1)",
"group": "connection",
"since": "5.0.0",
"arity": 2,
"container": "CLIENT",
"function": "clientCommand",
"command_flags": [
"LOADING",
"STALE"
],
"acl_categories": [
"CONNECTION"
]
}
}

Some files were not shown because too many files have changed in this diff.