Compare commits

...

51 Commits

Author SHA1 Message Date
WANG Xu 02b77b9d5c
Merge pull request #32345 from taosdata/enh/main/docker-create-snode
feat(entrypoint): add wait for serverPort and create snode on dnode
2025-07-30 16:08:39 +08:00
WANG MINGMING d0c20d06f9
fix(doc): explanation of timezone (#32390) 2025-07-30 15:52:29 +08:00
Jing Sima 82eb54b33b
test: [TS-6100] Reopen UTs (#32363) 2025-07-30 15:11:10 +08:00
Yihao Deng c4a303e09c
fix: failed to write blob data using STMT2 in inefficient mode (#32365) 2025-07-30 14:36:08 +08:00
Zhixiao Bao 1546943839
fix: modify the execution order of compatibility test cases. (#32385) 2025-07-30 14:24:58 +08:00
Kaili Xu 3ef6778b5c
enh: grant support for data source ORC (#32370) 2025-07-30 14:02:24 +08:00
Simon Guan 9c61184fef
Merge pull request #32375 from taosdata/feat/TS-6100-3.0
merge: from stream branch to main
2025-07-30 14:00:15 +08:00
Simon Guan 0bdd1045a3 test: update stream case 2025-07-30 09:06:36 +08:00
Simon Guan a75f50a2a4 Merge branch 'main' into feat/TS-6100-3.0 2025-07-30 09:05:32 +08:00
Simon Guan 827f52317d test: update case 2025-07-30 09:05:14 +08:00
Alex Duan 56fcc8bd4c
test: create vtable stable from 150100 (#32360) 2025-07-30 09:03:54 +08:00
WANG MINGMING 694da6eb6e
Merge pull request #32362 from taosdata/feat/TS-6100-3.0-ming
Feat/ts 6100 3.0 ming
2025-07-29 18:04:23 +08:00
Jing Sima a163744ab0 fix: [TD-37133] Forbid use %%trows multi times in union query. 2025-07-29 17:51:18 +08:00
wangmm0220 89d27dbfc1 fix(stream): case error 2025-07-29 17:09:47 +08:00
wangmm0220 e69beb1378 fix(stream): memory leak 2025-07-29 17:04:03 +08:00
Jinqing Kuang 7fa68248e5 fix(stream): fix memory leak of stream notify content 2025-07-29 16:59:36 +08:00
wangmm0220 806eaa084f fix(stream): open file error in group cache operator 2025-07-29 16:31:25 +08:00
wangmm0220 119f3c3829 fix(stream): open file error in group cache operator 2025-07-29 16:18:48 +08:00
Pan Wei 0e1fb0cd9d
fix: split dead loop issue (#32354)
* enh: add operator reset func

* fix: merge join reset issue

* fix: memory issues

* fix: add debug assert

* fix: memory issues

* fix: memory leak

* fix: memory issues

* fix taos log miss

* fix: case issue

* fix: case issue

* fix: case issues

* fix: drop dnode issue

* fix: memory issues

* fix: memory issues

* fix: memory leak issues

* fix: recalculate time range issue

* fix: add debug log

* fix: memory issues

* fix: enable case asan

* Update streamlist_for_ci.task

* fix: case asan issue

* fix: stream name issue

* fix: external window compile issues

* fix: deploy memory issue

* fix: ahandle issue

* fix: ahandle issue

* fix: ahandle issue

* fix: virtual table reader list issue

* fix: log info

* fix: msg error

* fix: virtual table addr list issue

* fix: memory issues

* fix: memory leak issue

* fix: memory issues

* fix: memory free issues

* fix: memory issues

* fix: snode deploy issue

* fix: mnode reader issue

* fix: memory issues

* fix: add debug test

* enh: add ignore nodata trigger

* fix: memory leaks

* fix: configuration issue

* fix: memory issue

* fix: external window issue

* fix: external window issues

* fix: external window placeholder issue

* fix: placeholder function init issues

* fix: memory leak issue

* fix: add debug log

* fix: compile issues

* fix: double free issue

* fix: runner addr update issue

* fix: msg rsp issue

* fix: external window reset issue

* fix: configuration issue

* fix: deploy msg issue

* fix: compile issue

* fix: external window idx issue

* fix: ci issues

* fix: ci case issues

* fix: drop dnode issue

* fix: add debug log

* fix: conflict

* fix: create stream if not exists issue

* fix: ahandle memory leak

* fix: case issue

* fix: exchange issues

* fix: crash issue

* fix: exchange prefetch issue

* fix: snode quit issue

* enh: support indef rows func

* fix: crash issues

* Fix external window collect vector function

* fix: external window indef rows issues

* fix: external window issue

* enh: support count always return value in external window

* fix: force output when has more result block

* fix: runner block retrieve issue

* fix: crash issue

* fix: count cases issue

* fix: reader deploy message issue

* fix: task deploy issue

* fix: external window scalar issue

* fix: compile issue

* fix: group cache reset issue

* fix: add protection check

* fix: add grant check

* fix: add disableStream config

* fix: notify free issue

* fix: case issue

* fix: grant issues

* fix: memory leak issue

* fix: memory leak issue

* fix: memory leak issue

* fix: stbJoin issue

* fix: rpc send issue

* fix: rsp stream group id issue

* fix: redeploy stream issue

* fix: cases issues

* fix: memory leak issue

* fix: snode quit issue

* fix: invalid read issue

* fix: memory leak issue

* fix: split dead loop issue

* fix: crash issue

* fix: acquire task issue

---------

Co-authored-by: huohong <sallyhuo@taosdata.com>
Co-authored-by: Jing Sima <simondominic9997@outlook.com>
Co-authored-by: facetosea <285808407@qq.com>
2025-07-29 16:14:58 +08:00
Zhixiao Bao dff36407a1
Merge pull request #32361 from taosdata/test/TS-6100/bzx
test: add recalc manual to ci.
2025-07-29 16:11:39 +08:00
xiao-77 4543ee6a2f test: add recalc manual to ci. 2025-07-29 15:44:31 +08:00
wangmm0220 2130e5547c fix(stream): tsdbCalcData block error 2025-07-29 15:11:08 +08:00
Jinqing Kuang cddfc261b8 fix(stream): fix recalc case 2025-07-29 14:31:44 +08:00
Jinqing Kuang a5ab759ad7 fix(stream): fix recalculation range 2025-07-29 14:31:44 +08:00
Li Hui 3cc6f21a8b
Merge pull request #32290 from taosdata/feat/TS-6100-3.0-lihui
test: modify case
2025-07-29 10:47:49 +08:00
Zhixiao Bao 9ab3f58b05
Merge pull request #32348 from taosdata/test/TS-6100/bzx
test: modify case test_recalc_manual_with_options.py.
2025-07-29 09:55:31 +08:00
xiao-77 d10289466c test: modify case test_recalc_manual_with_options.py. 2025-07-29 09:52:21 +08:00
Zhixiao Bao cac5820617
Merge pull request #32318 from taosdata/test/TS-6100/bzx
test: modify recalc ci cases.
2025-07-28 23:25:03 +08:00
Alex Duan e3d286912f
Merge pull request #32328 from taosdata/case/TD-36886-3.0
test: verify bug4 merge case to vehicle.py
2025-07-28 20:52:44 +08:00
Alex Duan 5058ad9ac2 case: fix vehicle.py check stream2 2025-07-28 20:19:05 +08:00
chenhaoran f156610721 fix(entrypoint): move snode creation command after serverPort readiness check 2025-07-28 20:11:43 +08:00
chenhaoran c86894cf50 feat(entrypoint): add wait for serverPort and create snode on dnode 2025-07-28 19:52:48 +08:00
plum-lihui 5ba2945f7a test: add cases 2025-07-28 19:50:28 +08:00
wangmm0220 059e0d9b67 fix(stream): add execId to pTaskInfo to avoid error in groupCache 2025-07-28 19:11:03 +08:00
Simon Guan ed477d727e
Merge pull request #32342 from taosdata/merge/main_to_stream
merge: from main to stream
2025-07-28 17:49:38 +08:00
qevolg af0b67410d case: remove bug3 comment 2025-07-28 16:50:05 +08:00
qevolg a6b00ef2cb test: remove fixed case file 2025-07-28 16:35:58 +08:00
xiao-77 3ac0dc0702 test: fix ci cases. 2025-07-28 16:23:16 +08:00
qevolg cfb74e4185 fix: calc sql table name invalid 2025-07-28 16:15:05 +08:00
xiao-77 c89fc6bcc3 test: modify ci test. 2025-07-28 16:12:52 +08:00
qevolg 123b74f635 fix: from table is error fixed 2025-07-28 15:46:44 +08:00
plum-lihui c2427eb12b test:add old cases 2025-07-28 15:13:25 +08:00
qevolg 984edc2e6c test: verify bug4 merge case to vehicle.py 2025-07-28 14:49:17 +08:00
plum-lihui f326297798 Merge branch 'feat/TS-6100-3.0' into feat/TS-6100-3.0-lihui 2025-07-28 14:30:35 +08:00
xiao-77 275c4ef7ae Merge remote-tracking branch 'origin/feat/TS-6100-3.0' into test/TS-6100/bzx 2025-07-28 13:44:32 +08:00
wangmm0220 5717e92461 fix(stream): return only ts in tsdb ts data 2025-07-28 13:09:27 +08:00
WANG MINGMING 5e6061b5b8
Merge pull request #32326 from taosdata/feat/TS-6100-3.0
Feat/ts 6100 3.0
2025-07-28 13:05:27 +08:00
xiao-77 809842a294 fix: ci test. 2025-07-28 11:19:10 +08:00
xiao-77 e4b1b61bd6 Merge remote-tracking branch 'origin/feat/TS-6100-3.0' into test/TS-6100/bzx 2025-07-28 10:59:22 +08:00
xiao-77 db63fc71df test: mute some tests. 2025-07-28 10:45:30 +08:00
plum-lihui 69fc9a2c4b test: modify case 2025-07-26 18:08:09 +08:00
63 changed files with 1364 additions and 3043 deletions

View File

@@ -36,7 +36,6 @@ typedef struct SStreamTriggerReaderInfo {
SSDataBlock* triggerResBlock;
SSDataBlock* calcResBlock;
SSDataBlock* tsBlock;
SSDataBlock* calcResBlockTmp;
SExprInfo* pExprInfo;
int32_t numOfExpr;
SArray* uidList; // for virtual table stream, uid list

View File

@@ -74,11 +74,11 @@
# > 0 (any retrieved column size greater than this value all data will be compressed.)
# compressColData -1
# system time zone
# system time zone (for linux/mac)
# timezone UTC-8
# system time zone (for windows 10)
# timezone Asia/Shanghai (CST, +0800)
# system time zone (for linux/mac/windows)
# timezone Asia/Shanghai
# system locale
# locale en_US.UTF-8

View File

@@ -161,5 +161,7 @@ if [ "$NEEDS_INITDB" = "1" ]; then
touch "${DATA_DIR}/.docker-entrypoint-inited"
fi
sh -c "taos -p'$TAOS_ROOT_PASSWORD' -h $FIRST_EP_HOST -P $FIRST_EP_PORT -s 'create snode on dnode 1;'"
tail -f /dev/null
# while true; do sleep 1000; done

View File

@@ -641,7 +641,13 @@ int64_t taosTimeAdd(int64_t t, int64_t duration, char unit, int32_t precision, t
}
if (!IS_CALENDAR_TIME_DURATION(unit)) {
return t + duration;
double tmp = t;
if (tmp + duration >= (double)INT64_MAX || tmp + duration <= (double)INT64_MIN) {
uError("time overflow, t:%" PRId64 ", duration:%" PRId64 ", unit:%c, precision:%d", t, duration, unit, precision);
return t;
} else {
return t + duration;
}
}
// The following code handles the y/n time duration

View File

@@ -222,14 +222,12 @@ TDMT_MND_BALANCE_VGROUP_LEADER = 439
TDMT_MND_BALANCE_VGROUP_LEADER_RSP = 440
TDMT_MND_RESTORE_DNODE = 441
TDMT_MND_RESTORE_DNODE_RSP = 442
TDMT_MND_PAUSE_STREAM = 443
TDMT_MND_PAUSE_STREAM_RSP = 444
TDMT_MND_RESUME_STREAM = 445
TDMT_MND_RESUME_STREAM_RSP = 446
TDMT_MND_STREAM_CHECKPOINT_TIMER_RSP = 448
TDMT_MND_STREAM_BEGIN_CHECKPOINT_RSP = 450
TDMT_MND_STREAM_CHECKPOINT_CANDIDITATE_RSP = 452
TDMT_MND_STREAM_NODECHANGE_CHECK_RSP = 454
TDMT_MND_STOP_STREAM = 443
TDMT_MND_STOP_STREAM_RSP = 444
TDMT_MND_START_STREAM = 445
TDMT_MND_START_STREAM_RSP = 446
TDMT_MND_RECALC_STREAM = 447
TDMT_MND_RECALC_STREAM_RSP = 448
TDMT_MND_TRIM_DB_TIMER = 455
TDMT_MND_TRIM_DB_TIMER_RSP = 456
TDMT_MND_GRANT_NOTIFY = 457
@@ -270,10 +268,6 @@ TDMT_MND_GET_TSMA = 491
TDMT_MND_GET_TSMA_RSP = 492
TDMT_MND_DROP_TB_WITH_TSMA = 493
TDMT_MND_DROP_TB_WITH_TSMA_RSP = 494
TDMT_MND_STREAM_UPDATE_CHKPT_EVT = 495
TDMT_MND_STREAM_UPDATE_CHKPT_EVT_RSP = 496
TDMT_MND_STREAM_CHKPT_REPORT = 497
TDMT_MND_STREAM_CHKPT_REPORT_RSP = 498
TDMT_VND_SUBMIT = 513
TDMT_VND_SUBMIT_RSP = 514
TDMT_VND_CREATE_TABLE = 515
@@ -328,12 +322,6 @@ TDMT_VND_UNUSED14 = 563
TDMT_VND_UNUSED14_RSP = 564
TDMT_VND_UNUSED15 = 565
TDMT_VND_UNUSED15_RSP = 566
TDMT_VND_CREATE_SMA = 567
TDMT_VND_CREATE_SMA_RSP = 568
TDMT_VND_CANCEL_SMA = 569
TDMT_VND_CANCEL_SMA_RSP = 570
TDMT_VND_DROP_SMA = 571
TDMT_VND_DROP_SMA_RSP = 572
TDMT_VND_SUBMIT_RSMA = 573
TDMT_VND_SUBMIT_RSMA_RSP = 574
TDMT_VND_FETCH_RSMA = 575
@@ -402,10 +390,10 @@ TDMT_SCH_LINK_BROKEN = 787
TDMT_SCH_LINK_BROKEN_RSP = 788
TDMT_SCH_TASK_NOTIFY = 789
TDMT_SCH_TASK_NOTIFY_RSP = 790
TDMT_STREAM_TASK_DEPLOY = 1025
TDMT_STREAM_TASK_DEPLOY_RSP = 1026
TDMT_STREAM_TASK_DROP = 1027
TDMT_STREAM_TASK_DROP_RSP = 1028
TDMT_STREAM_FETCH = 1025
TDMT_STREAM_FETCH_RSP = 1026
TDMT_STREAM_FETCH_FROM_CACHE = 1027
TDMT_STREAM_FETCH_FROM_CACHE_RSP = 1028
TDMT_STREAM_TASK_RUN = 1029
TDMT_STREAM_TASK_RUN_RSP = 1030
TDMT_STREAM_TASK_DISPATCH = 1031
@@ -428,10 +416,6 @@ TDMT_STREAM_TASK_STOP = 1047
TDMT_STREAM_TASK_STOP_RSP = 1048
TDMT_STREAM_UNUSED = 1049
TDMT_STREAM_UNUSED_RSP = 1050
TDMT_STREAM_CREATE = 1051
TDMT_STREAM_CREATE_RSP = 1052
TDMT_STREAM_DROP = 1053
TDMT_STREAM_DROP_RSP = 1054
TDMT_STREAM_RETRIEVE_TRIGGER = 1055
TDMT_STREAM_RETRIEVE_TRIGGER_RSP = 1056
TDMT_SYNC_TIMEOUT = 1537
@@ -506,8 +490,6 @@ TDMT_VND_STREAM_TASK_CHECK = 1801
TDMT_VND_STREAM_TASK_CHECK_RSP = 1802
TDMT_VND_STREAM_UNUSED = 1803
TDMT_VND_STREAM_UNUSED_RSP = 1804
TDMT_VND_GET_STREAM_PROGRESS = 1805
TDMT_VND_GET_STREAM_PROGRESS_RSP = 1806
TDMT_VND_TMQ_SUBSCRIBE = 2049
TDMT_VND_TMQ_SUBSCRIBE_RSP = 2050
TDMT_VND_TMQ_DELETE_SUB = 2051

View File

@@ -163,8 +163,6 @@ ParseStatus readConfig(const string& filePath, vector<STestMsgTypeInfo>& msgType
return ParseStatus::Success;
}
// TODO(smj) : disable for stream, reopen it later
#if 0
TEST(td_msg_test, msg_type_compatibility_test) {
// cout << TMSG_INFO(TDMT_VND_DROP_TABLE) << endl;
@@ -228,8 +226,6 @@ TEST(td_msg_test, msg_type_compatibility_test) {
}
}
#endif
size_t maxLengthOfMsgType() {
size_t maxLen = 0;
for (const auto& info : tMsgTypeInfo) {

View File

@@ -109,7 +109,7 @@ _over:
taosMemoryFreeClear(buf);
if (code != TSDB_CODE_SUCCESS) {
char *p = (pStream == NULL) ? "null" : pStream->pCreate->name;
char *p = (pStream == NULL || NULL == pStream->pCreate) ? "null" : pStream->pCreate->name;
mError("stream:%s, failed to decode from raw:%p since %s at:%d", p, pRaw, tstrerror(code), lino);
taosMemoryFreeClear(pRow);

View File

@@ -2196,9 +2196,9 @@ static int32_t msmSTRemoveStream(int64_t streamId, bool fromStreamMap) {
int64_t taskId = *(pStreamId + 1);
code = taosHashRemove(mStreamMgmt.taskMap, pStreamId, keyLen);
if (code) {
mstsError("TASK:%" PRId64 " remove from taskMap failed, error:%s", taskId, tstrerror(code));
mstsError("TASK:%" PRIx64 " remove from taskMap failed, error:%s", taskId, tstrerror(code));
} else {
mstsDebug("TASK:%" PRId64 " removed from taskMap", taskId);
mstsDebug("TASK:%" PRIx64 " removed from taskMap", taskId);
}
}
}
@@ -2237,18 +2237,18 @@ static int32_t msmLaunchStreamDeployAction(SStmGrpCtx* pCtx, SStmStreamAction* p
if (pStatus) {
stopped = atomic_load_8(&pStatus->stopped);
if (0 == stopped) {
mstsDebug("stream %s already running and in streamMap, ignore deploy it", pAction->streamName);
return code;
}
if (MST_IS_USER_STOPPED(stopped) && !pAction->userAction) {
mstsWarn("stream %s already stopped by user, stopped:%d, ignore deploy it", pAction->streamName, stopped);
return code;
}
if (stopped == atomic_val_compare_exchange_8(&pStatus->stopped, stopped, 0)) {
mstsDebug("stream %s will try to reset and redeploy it", pAction->streamName);
msmResetStreamForRedeploy(streamId, pStatus);
} else {
if (MST_IS_USER_STOPPED(stopped) && !pAction->userAction) {
mstsWarn("stream %s already stopped by user, stopped:%d, ignore deploy it", pAction->streamName, stopped);
return code;
}
if (stopped == atomic_val_compare_exchange_8(&pStatus->stopped, stopped, 0)) {
mstsDebug("stream %s will try to reset and redeploy it from stopped %d", pAction->streamName, stopped);
msmResetStreamForRedeploy(streamId, pStatus);
}
}
}

View File

@@ -1487,6 +1487,7 @@ static int32_t vnodeProcessStreamTsdbCalcDataReq(SVnode* pVnode, SRpcMsg* pMsg,
int32_t lino = 0;
void* buf = NULL;
size_t size = 0;
SSDataBlock* pBlockRes = NULL;
STREAM_CHECK_NULL_GOTO(sStreamReaderInfo, terrno);
void* pTask = sStreamReaderInfo->pTask;
@@ -1505,9 +1506,8 @@ static int32_t vnodeProcessStreamTsdbCalcDataReq(SVnode* pVnode, SRpcMsg* pMsg,
STREAM_CHECK_RET_GOTO(createStreamTask(pVnode, &options, &pTaskInner, sStreamReaderInfo->triggerResBlock, NULL, &api));
STREAM_CHECK_RET_GOTO(taosHashPut(sStreamReaderInfo->streamTaskMap, &key, LONG_BYTES, &pTaskInner, sizeof(pTaskInner)));
STREAM_CHECK_RET_GOTO(createOneDataBlock(sStreamReaderInfo->calcResBlock, false, &pTaskInner->pResBlockDst));
STREAM_CHECK_RET_GOTO(createOneDataBlock(sStreamReaderInfo->triggerResBlock, false, &pTaskInner->pResBlockDst));
STREAM_CHECK_RET_GOTO(createOneDataBlock(sStreamReaderInfo->calcResBlock, false, &pBlockRes));
} else {
void** tmp = taosHashGet(sStreamReaderInfo->streamTaskMap, &key, LONG_BYTES);
STREAM_CHECK_NULL_GOTO(tmp, TSDB_CODE_STREAM_NO_CONTEXT);
@@ -1527,14 +1527,14 @@ static int32_t vnodeProcessStreamTsdbCalcDataReq(SVnode* pVnode, SRpcMsg* pMsg,
SSDataBlock* pBlock = NULL;
STREAM_CHECK_RET_GOTO(getTableData(pTaskInner, &pBlock));
STREAM_CHECK_RET_GOTO(qStreamFilter(pBlock, pTaskInner->pFilterInfo));
blockDataTransform(sStreamReaderInfo->calcResBlockTmp, pBlock);
STREAM_CHECK_RET_GOTO(blockDataMerge(pTaskInner->pResBlockDst, sStreamReaderInfo->calcResBlockTmp));
STREAM_CHECK_RET_GOTO(blockDataMerge(pTaskInner->pResBlockDst, pBlock));
if (pTaskInner->pResBlockDst->info.rows >= STREAM_RETURN_ROWS_NUM) {
break;
}
}
STREAM_CHECK_RET_GOTO(buildRsp(pTaskInner->pResBlockDst, &buf, &size));
ST_TASK_DLOG("vgId:%d %s get result rows:%" PRId64, TD_VID(pVnode), __func__, pTaskInner->pResBlockDst->info.rows);
blockDataTransform(pBlockRes, pTaskInner->pResBlockDst);
STREAM_CHECK_RET_GOTO(buildRsp(pBlockRes, &buf, &size));
ST_TASK_DLOG("vgId:%d %s get result rows:%" PRId64, TD_VID(pVnode), __func__, pBlockRes->info.rows);
if (!hasNext) {
taosHashRemove(sStreamReaderInfo->streamTaskMap, &key, LONG_BYTES);
}
@@ -1544,7 +1544,7 @@ end:
SRpcMsg rsp = {
.msgType = TDMT_STREAM_TRIGGER_PULL_RSP, .info = pMsg->info, .pCont = buf, .contLen = size, .code = code};
tmsgSendRsp(&rsp);
blockDataDestroy(pBlockRes);
return code;
}

View File

@@ -2278,7 +2278,7 @@ static void addExistTableInfoIntoRes(SVnode *pVnode, SSubmitReq2 *pRequest, SSub
vError("vgId:%d, table uid:%" PRId64 " not exists, line:%d", TD_VID(pVnode), pTbData->uid, __LINE__);
}
} else {
buildExistSubTalbeRsp(pVnode, pTbData, &pCreateTbRsp->pMeta);
code = buildExistSubTalbeRsp(pVnode, pTbData, &pCreateTbRsp->pMeta);
}
TSDB_CHECK_CODE(code, lino, _exit);
@@ -2299,6 +2299,9 @@ static int32_t vnodeHandleDataWrite(SVnode *pVnode, int64_t version, SSubmitReq2
SMetaInfo info = {0};
SSubmitTbData *pTbData = taosArrayGet(pRequest->aSubmitTbData, i);
if (pTbData->flags & SUBMIT_REQ_WITH_BLOB) {
hasBlob = 1;
}
if (pTbData->flags & SUBMIT_REQ_COLUMN_DATA_FORMAT) {
continue; // skip column data format
}
@@ -2340,10 +2343,6 @@
info.skmVer);
return code;
}
if (pTbData->flags & SUBMIT_REQ_WITH_BLOB) {
hasBlob = 1;
}
}
// Do write data

View File

@@ -1782,32 +1782,13 @@ static int32_t resetDynQueryCtrlOperState(SOperatorInfo* pOper) {
case DYN_QTYPE_STB_HASH:{
pDyn->stbJoin.execInfo = (SDynQueryCtrlExecInfo){0};
SStbJoinDynCtrlInfo* pStbJoin = &pDyn->stbJoin;
if (pStbJoin->basic.batchFetch) {
if (pStbJoin->ctx.prev.leftHash) {
tSimpleHashSetFreeFp(pStbJoin->ctx.prev.leftHash, freeVgTableList);
tSimpleHashClear(pStbJoin->ctx.prev.leftHash);
}
if (pStbJoin->ctx.prev.rightHash) {
tSimpleHashSetFreeFp(pStbJoin->ctx.prev.rightHash, freeVgTableList);
tSimpleHashClear(pStbJoin->ctx.prev.rightHash);
}
} else {
if (pStbJoin->ctx.prev.leftCache) {
tSimpleHashClear(pStbJoin->ctx.prev.leftCache);
}
if (pStbJoin->ctx.prev.rightCache) {
tSimpleHashClear(pStbJoin->ctx.prev.rightCache);
}
if (pStbJoin->ctx.prev.onceTable) {
tSimpleHashClear(pStbJoin->ctx.prev.onceTable);
}
}
destroyStbJoinDynCtrlInfo(&pDyn->stbJoin);
int32_t code = initSeqStbJoinTableHash(&pDyn->stbJoin.ctx.prev, pDyn->stbJoin.basic.batchFetch);
if (TSDB_CODE_SUCCESS != code) {
qError("initSeqStbJoinTableHash failed since %s", tstrerror(code));
return code;
}
destroyStbJoinTableList(pStbJoin->ctx.prev.pListHead);
pStbJoin->ctx.prev.pListHead = NULL;
pStbJoin->ctx.prev.joinBuild = false;
pStbJoin->ctx.prev.pListTail = NULL;

View File

@@ -641,8 +641,8 @@ static int32_t retrieveBlkFromBufCache(SGroupCacheOperatorInfo* pGCache, SGroupC
static FORCE_INLINE void initGcVgroupCtx(SOperatorInfo* pOperator, SGcVgroupCtx* pVgCtx, int32_t downstreamId, int32_t vgId, SArray* pTbList) {
pVgCtx->pTbList = pTbList;
pVgCtx->id = vgId;
(void)snprintf(pVgCtx->fileCtx.baseFilename, sizeof(pVgCtx->fileCtx.baseFilename) - 1, "%s/gc_%d_%" PRIx64 "_%" PRIu64 "_%d_%d",
tsTempDir, taosGetPId(), pOperator->pTaskInfo->id.queryId, pOperator->pTaskInfo->id.taskId, downstreamId, vgId);
(void)snprintf(pVgCtx->fileCtx.baseFilename, sizeof(pVgCtx->fileCtx.baseFilename) - 1, "%s/gc_%d_%" PRIx64 "_%" PRIu64 "_%p_%d_%d",
tsTempDir, taosGetPId(), pOperator->pTaskInfo->id.queryId, pOperator->pTaskInfo->id.taskId, pOperator, downstreamId, vgId);
pVgCtx->fileCtx.baseFilename[sizeof(pVgCtx->fileCtx.baseFilename) - 1] = 0;
pVgCtx->fileCtx.baseNameLen = strlen(pVgCtx->fileCtx.baseFilename);
@@ -1454,8 +1454,8 @@ static int32_t initGroupCacheDownstreamCtx(SOperatorInfo* pOperator) {
return terrno;
}
(void)snprintf(pCtx->fileCtx.baseFilename, sizeof(pCtx->fileCtx.baseFilename) - 1, "%s/gc_%d_%" PRIx64 "_%" PRIu64 "_%d",
tsTempDir, taosGetPId(), pOperator->pTaskInfo->id.queryId, pOperator->pTaskInfo->id.taskId, pCtx->id);
(void)snprintf(pCtx->fileCtx.baseFilename, sizeof(pCtx->fileCtx.baseFilename) - 1, "%s/gc_%d_%" PRIx64 "_%" PRIu64 "_%p_%d",
tsTempDir, taosGetPId(), pOperator->pTaskInfo->id.queryId, pOperator->pTaskInfo->id.taskId, pOperator, pCtx->id);
pCtx->fileCtx.baseFilename[sizeof(pCtx->fileCtx.baseFilename) - 1] = 0;
pCtx->fileCtx.baseNameLen = strlen(pCtx->fileCtx.baseFilename);
}
@@ -1542,7 +1542,7 @@ static int32_t resetGroupCacheDownstreamCtx(SOperatorInfo* pOper) {
tSimpleHashPut(pCtx->pVgTbHash, &defaultVg, sizeof(defaultVg), &vgCtx, sizeof(vgCtx));
}
taosArrayClear(pCtx->pFreeBlock);
taosArrayClearEx(pCtx->pFreeBlock, freeGcBlockInList);
taosHashClear(pCtx->pSessions);
taosHashClear(pCtx->pWaitSessions);
freeSGcFileCacheCtx(&pCtx->fileCtx);

View File

@@ -71,6 +71,8 @@ typedef struct SSTriggerHistoryGroup {
SArray *pVirTableInfos; // SArray<SSTriggerVirTableInfo *>
SSHashObj *pTableMetas; // SSHashObj<tbUid, SSTriggerTableMeta>
bool finished;
TriggerWindowBuf winBuf;
STimeWindow nextWindow;
SValue stateVal;
@@ -158,10 +160,11 @@ typedef struct SSTriggerHistoryContext {
ESTriggerContextStatus status;
int64_t gid;
STimeWindow range;
STimeWindow scanRange;
STimeWindow calcRange;
STimeWindow stepRange;
bool isHistory;
bool needTsdbMeta;
STimeWindow stepRange;
bool pendingToFinish;
SSHashObj *pReaderTsdbProgress; // SSHashObj<vgId, SSTriggerTsdbProgress>
@@ -221,7 +224,8 @@ typedef struct SSTriggerCalcNode {
typedef struct SSTriggerRecalcRequest {
int64_t gid;
STimeWindow range;
STimeWindow scanRange;
STimeWindow calcRange;
SSHashObj *pTsdbVersions;
bool isHistory;
} SSTriggerRecalcRequest;
@@ -318,8 +322,8 @@ int32_t stTriggerTaskAcquireRequest(SStreamTriggerTask *pTask, int64_t sessionId
SSTriggerCalcRequest **ppRequest);
int32_t stTriggerTaskReleaseRequest(SStreamTriggerTask *pTask, SSTriggerCalcRequest **ppRequest);
int32_t stTriggerTaskAddRecalcRequest(SStreamTriggerTask *pTask, int64_t gid, STimeWindow range,
SSHashObj *pWalProgress, bool isHistory);
int32_t stTriggerTaskAddRecalcRequest(SStreamTriggerTask *pTask, SSTriggerRealtimeGroup *pGroup,
STimeWindow *pCalcRange, SSHashObj *pWalProgress, bool isHistory);
int32_t stTriggerTaskFetchRecalcRequest(SStreamTriggerTask *pTask, SSTriggerRecalcRequest **ppReq);
// interfaces called by stream mgmt thread

View File

@@ -290,25 +290,23 @@ int32_t smStartStreamTasks(SStreamTaskStart* pStart) {
int64_t streamId = pStart->task.streamId;
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
SStreamTask* pTask= NULL;
void* taskAddr = NULL;
SStreamTask** ppTask = taosHashGet(gStreamMgmt.taskMap, &pStart->task.streamId, sizeof(pStart->task.streamId) + sizeof(pStart->task.taskId));
if (NULL == ppTask) {
stsError("stream not exists while try to start task %" PRId64, pStart->task.taskId);
goto _exit;
}
SStreamTask* pTask = *ppTask;
TAOS_CHECK_EXIT(streamAcquireTask(streamId, pStart->task.taskId, (SStreamTask**)&pTask, &taskAddr));
pStart->startMsg.header.msgType = STREAM_MSG_START;
STM_CHK_SET_ERROR_EXIT(stTriggerTaskExecute((SStreamTriggerTask *)pTask, (SStreamMsg *)&pStart->startMsg));
ST_TASK_ILOG("stream start succeed, tidx:%d", pTask->taskIdx);
return code;
_exit:
stsError("%s failed at line %d, error:%s", __FUNCTION__, lino, tstrerror(code));
if (code) {
stsError("%s failed at line %d, error:%s", __FUNCTION__, lino, tstrerror(code));
} else {
ST_TASK_ILOG("stream start succeed, tidx:%d", pTask->taskIdx);
}
streamReleaseTask(taskAddr);
return code;
}
@@ -748,15 +746,11 @@ int32_t smHandleTaskMgmtRsp(SStreamMgmtRsp* pRsp) {
int64_t key[2] = {streamId, pRsp->task.taskId};
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
SStreamTask* pTask= NULL;
void* taskAddr = NULL;
SStreamTask** ppTask = taosHashAcquire(gStreamMgmt.taskMap, key, sizeof(key));
if (NULL == ppTask) {
stsWarn("TASK:%" PRIx64 " already not exists in taskMap while try to handle mgmtRsp", key[1]);
return code;
}
SStreamTask* pTask= *ppTask;
TAOS_CHECK_EXIT(streamAcquireTask(streamId, pRsp->task.taskId, (SStreamTask**)&pTask, &taskAddr));
switch (pRsp->header.msgType) {
case STREAM_MSG_ORIGTBL_READER_INFO: {
SStreamMgmtReq* pReq = atomic_load_ptr(&pTask->pMgmtReq);
@@ -788,7 +782,7 @@ _exit:
ST_TASK_ILOG("handle task mgmt rsp succeed, tidx:%d", pTask->taskIdx);
}
taosHashRelease(gStreamMgmt.taskMap, ppTask);
streamReleaseTask(taskAddr);
return code;
}

View File

@@ -214,7 +214,6 @@ static void releaseStreamReaderInfo(void* p) {
blockDataDestroy(pInfo->triggerResBlock);
blockDataDestroy(pInfo->calcResBlock);
blockDataDestroy(pInfo->tsBlock);
blockDataDestroy(pInfo->calcResBlockTmp);
destroyExprInfo(pInfo->pExprInfo, pInfo->numOfExpr);
taosMemoryFreeClear(pInfo->pExprInfo);
taosArrayDestroy(pInfo->uidList);
@@ -349,7 +348,6 @@ static SStreamTriggerReaderInfo* createStreamReaderInfo(void* pTask, const SStre
SNodeList* pScanCols = ((STableScanPhysiNode*)(sStreamReaderInfo->calcAst->pNode))->scan.pScanCols;
setColIdForCalcResBlock(pseudoCols, sStreamReaderInfo->calcResBlock->pDataBlock);
setColIdForCalcResBlock(pScanCols, sStreamReaderInfo->calcResBlock->pDataBlock);
STREAM_CHECK_RET_GOTO(createOneDataBlock(sStreamReaderInfo->calcResBlock, false, &sStreamReaderInfo->calcResBlockTmp));
}
STREAM_CHECK_RET_GOTO(createDataBlockForTs(&sStreamReaderInfo->tsBlock));

View File

@@ -729,42 +729,40 @@ _end:
return code;
}
int32_t stTriggerTaskAddRecalcRequest(SStreamTriggerTask *pTask, int64_t gid, STimeWindow range,
SSHashObj *pWalProgress, bool isHistory) {
int32_t stTriggerTaskAddRecalcRequest(SStreamTriggerTask *pTask, SSTriggerRealtimeGroup *pGroup,
STimeWindow *pCalcRange, SSHashObj *pWalProgress, bool isHistory) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
bool needUnlock = false;
SSTriggerRecalcRequest *pReq = NULL;
if (pTask->fillHistory || pTask->fillHistoryFirst) {
range.skey = pTask->fillHistoryStartTime;
} else if (pTask->triggerType == STREAM_TRIGGER_SLIDING) {
STimeWindow firstWindow = {0};
if (pTask->interval.interval == 0) {
firstWindow = stTriggerTaskGetIntervalWindow(pTask, range.skey);
} else {
firstWindow = stTriggerTaskGetPeriodWindow(pTask, range.skey);
}
range.skey = firstWindow.skey;
} else {
void *px = tSimpleHashGet(pTask->pHistoryCutoffTime, &gid, sizeof(int64_t));
range.skey = (px == NULL) ? (INT64_MIN + 1) : *(int64_t *)px;
}
if (range.skey > range.ekey) {
goto _end;
}
QUERY_CHECK_NULL(pGroup, code, lino, _end, TSDB_CODE_INVALID_PARA);
pReq = taosMemoryCalloc(1, sizeof(SSTriggerRecalcRequest));
QUERY_CHECK_NULL(pReq, code, lino, _end, terrno);
pReq->gid = gid;
pReq->range = range;
pReq->gid = pGroup->gid;
pReq->calcRange = *pCalcRange;
if (pTask->fillHistory || pTask->fillHistoryFirst) {
pReq->scanRange.skey = pTask->fillHistoryStartTime;
} else {
void *px = tSimpleHashGet(pTask->pHistoryCutoffTime, &pReq->gid, sizeof(int64_t));
pReq->scanRange.skey = ((px == NULL) ? INT64_MIN : *(int64_t *)px) + 1;
}
pReq->scanRange.ekey = pGroup->oldThreshold;
if (pReq->scanRange.skey > pReq->scanRange.ekey) {
goto _end;
}
ST_TASK_DLOG("add recalc request, gid: %" PRId64 ", scanRange: [%" PRId64 ", %" PRId64 "], calcRange: [%" PRId64
", %" PRId64 "]",
pReq->gid, pReq->scanRange.skey, pReq->scanRange.ekey, pReq->calcRange.skey, pReq->calcRange.ekey);
pReq->pTsdbVersions = tSimpleHashInit(32, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT));
QUERY_CHECK_NULL(pReq->pTsdbVersions, code, lino, _end, terrno);
pReq->isHistory = isHistory;
ST_TASK_DLOG("add recalc request, gid: %" PRId64 ", range: [%" PRId64 ", %" PRId64 "]", gid, range.skey, range.ekey);
int32_t iter = 0;
SSTriggerWalProgress *pProgress = tSimpleHashIterate(pWalProgress, NULL, &iter);
while (pProgress != NULL) {
@@ -811,8 +809,10 @@ int32_t stTriggerTaskFetchRecalcRequest(SStreamTriggerTask *pTask, SSTriggerReca
if (pNode != NULL) {
*ppReq = *(SSTriggerRecalcRequest **)pNode->data;
taosMemoryFreeClear(pNode);
ST_TASK_DLOG("start recalc request, gid: %" PRId64 ", range: [%" PRId64 ", %" PRId64 "]", (*ppReq)->gid,
(*ppReq)->range.skey, (*ppReq)->range.ekey);
ST_TASK_DLOG("start recalc request, gid: %" PRId64 ", scanRange: [%" PRId64 ", %" PRId64 "], calcRange: [%" PRId64
", %" PRId64 "]",
(*ppReq)->gid, (*ppReq)->scanRange.skey, (*ppReq)->scanRange.ekey, (*ppReq)->calcRange.skey,
(*ppReq)->calcRange.ekey);
} else {
*ppReq = NULL;
@@ -1787,17 +1787,7 @@ int32_t stTriggerTaskExecute(SStreamTriggerTask *pTask, const SStreamMsg *pMsg)
while (px != NULL) {
SSTriggerRealtimeGroup *pGroup = *(SSTriggerRealtimeGroup **)px;
STimeWindow range = {.skey = pReq->start, .ekey = pReq->end - 1};
if (pTask->triggerType == STREAM_TRIGGER_SLIDING) {
STimeWindow lastWindow = {0};
if (pTask->interval.interval > 0) {
lastWindow = stTriggerTaskGetIntervalWindow(pTask, range.ekey);
} else {
lastWindow = stTriggerTaskGetPeriodWindow(pTask, range.ekey);
}
range.ekey = lastWindow.ekey;
}
range.ekey = TMIN(range.ekey, pGroup->oldThreshold);
code = stTriggerTaskAddRecalcRequest(pTask, pGroup->gid, range, pContext->pReaderWalProgress, true);
code = stTriggerTaskAddRecalcRequest(pTask, pGroup, &range, pContext->pReaderWalProgress, true);
QUERY_CHECK_CODE(code, lino, _end);
px = tSimpleHashIterate(pContext->pGroups, px, &iter);
}
@@ -2558,6 +2548,7 @@ static int32_t stRealtimeContextRetryPullRequest(SSTriggerRealtimeContext *pCont
_end:
if (code != TSDB_CODE_SUCCESS) {
destroyAhandle(msg.info.ahandle);
ST_TASK_ELOG("%s failed at line %d since %s", __func__, lino, tstrerror(code));
}
return code;
@@ -3775,7 +3766,6 @@ static int32_t stHistoryContextInit(SSTriggerHistoryContext *pContext, SStreamTr
QUERY_CHECK_NULL(pProgress->reqCids, code, lino, _end, terrno);
}
}
pContext->stepRange = pContext->range;
pContext->pTrigDataBlocks = taosArrayInit(0, POINTER_BYTES);
QUERY_CHECK_NULL(pContext->pTrigDataBlocks, code, lino, _end, terrno);
@@ -3850,8 +3840,9 @@ static int32_t stHistoryContextHandleRequest(SSTriggerHistoryContext *pContext,
int32_t lino = 0;
SStreamTriggerTask *pTask = pContext->pTask;
pContext->gid = pReq->gid;
pContext->range = pReq->range;
pContext->stepRange = pContext->range;
pContext->scanRange = pReq->scanRange;
pContext->calcRange = pReq->calcRange;
pContext->stepRange = pContext->scanRange;
pContext->isHistory = pReq->isHistory;
int32_t iter = 0;
SSTriggerTsdbProgress *pProgress = tSimpleHashIterate(pContext->pReaderTsdbProgress, NULL, &iter);
@@ -3961,7 +3952,7 @@ static int32_t stHistoryContextSendPullReq(SSTriggerHistoryContext *pContext, ES
pProgress = tSimpleHashGet(pContext->pReaderTsdbProgress, &pReader->nodeId, sizeof(int32_t));
QUERY_CHECK_NULL(pProgress, code, lino, _end, TSDB_CODE_INTERNAL_ERROR);
SSTriggerFirstTsRequest *pReq = &pProgress->pullReq.firstTsReq;
pReq->startTime = pContext->range.skey;
pReq->startTime = pContext->scanRange.skey;
pReq->ver = pProgress->version;
break;
}
@@ -4002,7 +3993,7 @@ static int32_t stHistoryContextSendPullReq(SSTriggerHistoryContext *pContext, ES
pProgress = tSimpleHashGet(pContext->pReaderTsdbProgress, &pReader->nodeId, sizeof(int32_t));
QUERY_CHECK_NULL(pProgress, code, lino, _end, TSDB_CODE_INTERNAL_ERROR);
SSTriggerTsdbTriggerDataRequest *pReq = &pProgress->pullReq.tsdbTriggerDataReq;
pReq->startTime = pContext->range.skey;
pReq->startTime = pContext->scanRange.skey;
pReq->gid = pContext->gid;
pReq->order = 1;
pReq->ver = pProgress->version;
@@ -4523,7 +4514,7 @@ static int32_t stHistoryContextCheck(SSTriggerHistoryContext *pContext) {
}
goto _end;
// TODO(kjq): backward start time to the previous window end of each group
} else if (pContext->range.skey > pContext->range.ekey) {
} else if (pContext->scanRange.skey > pContext->scanRange.ekey) {
goto _end;
}
@@ -4531,7 +4522,7 @@ static int32_t stHistoryContextCheck(SSTriggerHistoryContext *pContext) {
if (pContext->needTsdbMeta) {
// TODO(kjq): use precision of trigger table
int64_t step = STREAM_TRIGGER_HISTORY_STEP_MS;
pContext->stepRange.skey = pContext->range.skey / step * step;
pContext->stepRange.skey = pContext->scanRange.skey / step * step;
pContext->stepRange.ekey = pContext->stepRange.skey + step - 1;
for (pContext->curReaderIdx = 0; pContext->curReaderIdx < TARRAY_SIZE(pTask->readerList);
pContext->curReaderIdx++) {
@@ -4613,10 +4604,22 @@ static int32_t stHistoryContextCheck(SSTriggerHistoryContext *pContext) {
int32_t nParams = taosArrayGetSize(pGroup->pPendingCalcParams);
bool needCalc = (pTask->lowLatencyCalc && (nParams > 0)) || (nParams >= STREAM_CALC_REQ_MAX_WIN_NUM);
if (needCalc) {
int32_t nCalcParams = TMIN(nParams, STREAM_CALC_REQ_MAX_WIN_NUM);
void *px =
taosArrayAddBatch(pContext->pCalcReq->params, TARRAY_DATA(pGroup->pPendingCalcParams), nCalcParams);
QUERY_CHECK_NULL(px, code, lino, _end, terrno);
SSTriggerCalcParam *pParam = NULL;
for (int32_t i = 0; i < nParams; i++) {
pParam = TARRAY_GET_ELEM(pGroup->pPendingCalcParams, i);
if ((i + 1 < nParams && (pParam + 1)->wstart <= pContext->calcRange.skey) || pGroup->finished) {
// skip params out of calc range
continue;
}
void *px = taosArrayPush(pContext->pCalcReq->params, pParam);
QUERY_CHECK_NULL(px, code, lino, _end, terrno);
pGroup->finished = (pParam->wend >= pContext->calcRange.ekey);
if (TARRAY_SIZE(pContext->pCalcReq->params) >= STREAM_CALC_REQ_MAX_WIN_NUM) {
// max windows reached, send calc request
break;
}
}
int32_t nCalcParams = TARRAY_ELEM_IDX(pGroup->pPendingCalcParams, pParam) + 1;
taosArrayPopFrontBatch(pGroup->pPendingCalcParams, nCalcParams);
}
}
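The hunk above replaces a plain `taosArrayAddBatch` copy with a selective loop: pending windows that end before the calc range are skipped, the group is marked finished once a window reaches the range's end key, and the request is cut off at the per-request window cap. A minimal standalone sketch of that selection policy (the `Win` type, `pick_windows` name, and the cap value are illustrative, not from the TDengine source):

```c
#define MAX_WIN 4 /* stand-in for STREAM_CALC_REQ_MAX_WIN_NUM */

typedef struct { long long wstart, wend; } Win;

/* Walk pending windows in order: skip those lying entirely before the calc
 * range, copy the rest into out until the range end is reached or the cap
 * fills. Returns how many pending entries were consumed (and can be popped). */
static int pick_windows(const Win *pending, int n, long long skey, long long ekey,
                        Win *out, int *nout) {
    int consumed = 0;
    *nout = 0;
    for (int i = 0; i < n; i++) {
        consumed = i + 1;
        /* mirror the diff's test: if the next window still starts at or
         * before skey, the current one lies before the calc range */
        if (i + 1 < n && pending[i + 1].wstart <= skey) {
            continue;
        }
        out[(*nout)++] = pending[i];
        if (pending[i].wend >= ekey || *nout >= MAX_WIN) {
            break; /* calc range covered, or max windows per request reached */
        }
    }
    return consumed;
}
```

The consumed count then feeds a `taosArrayPopFrontBatch`-style pop, so skipped and sent windows alike leave the pending list.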
@@ -4663,7 +4666,7 @@ static int32_t stHistoryContextCheck(SSTriggerHistoryContext *pContext) {
int64_t step = STREAM_TRIGGER_HISTORY_STEP_MS;
QUERY_CHECK_CONDITION(pContext->stepRange.skey + step - 1 == pContext->stepRange.ekey, code, lino, _end,
TSDB_CODE_INTERNAL_ERROR);
finished = (pContext->stepRange.skey + step > pContext->range.ekey);
finished = (pContext->stepRange.skey + step > pContext->scanRange.ekey);
} else if (pTask->triggerType != STREAM_TRIGGER_SLIDING) {
for (int32_t i = 0; i < TARRAY_SIZE(pContext->pTrigDataBlocks); i++) {
SSDataBlock *pDataBlock = *(SSDataBlock **)TARRAY_GET_ELEM(pContext->pTrigDataBlocks, i);
@@ -4917,7 +4920,7 @@ static int32_t stHistoryContextProcPullRsp(SSTriggerHistoryContext *pContext, SR
int32_t iter = 0;
void *px = tSimpleHashIterate(pContext->pFirstTsMap, NULL, &iter);
while (px != NULL) {
pContext->range.skey = TMAX(pContext->range.skey, *(int64_t *)px);
pContext->scanRange.skey = TMAX(pContext->scanRange.skey, *(int64_t *)px);
px = tSimpleHashIterate(pContext->pFirstTsMap, px, &iter);
}
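This hunk folds the per-reader first timestamps into `scanRange.skey` instead of `range.skey`. The merge itself is just a running maximum over the map's values; sketched standalone (the function name is illustrative):

```c
/* Raise the scan start to the latest "first timestamp" any reader reports,
 * so the history scan never asks a reader for data older than it holds. */
static long long merge_first_ts(long long scan_skey, const long long *first_ts, int n) {
    for (int i = 0; i < n; i++) {
        if (first_ts[i] > scan_skey) {
            scan_skey = first_ts[i];
        }
    }
    return scan_skey;
}
```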
@@ -5696,10 +5699,7 @@ static int32_t stRealtimeGroupAddMetaDatas(SSTriggerRealtimeGroup *pGroup, SArra
// add recalc request
if (recalcRange.skey <= recalcRange.ekey) {
if (pTask->triggerType != STREAM_TRIGGER_STATE && pTask->triggerType != STREAM_TRIGGER_EVENT) {
recalcRange.ekey = pGroup->oldThreshold;
}
code = stTriggerTaskAddRecalcRequest(pTask, pGroup->gid, recalcRange, pContext->pReaderWalProgress, true);
code = stTriggerTaskAddRecalcRequest(pTask, pGroup, &recalcRange, pContext->pReaderWalProgress, true);
QUERY_CHECK_CODE(code, lino, _end);
}
@@ -7507,7 +7507,7 @@ static int32_t stHistoryGroupMergeSavedWindows(SSTriggerHistoryGroup *pGroup, in
}
// some window may have not been closed yet
if (pWin->range.ekey + gap > pContext->stepRange.ekey || pWin->range.ekey + gap > pContext->range.ekey) {
if (pWin->range.ekey + gap > pContext->stepRange.ekey || pWin->range.ekey + gap > pContext->scanRange.ekey) {
// TODO(kjq): restore prevProcTime from saved init windows
pWin->prevProcTime = taosGetTimestampNs();
if (TRINGBUF_SIZE(&pGroup->winBuf) > 0) {
@@ -8111,7 +8111,7 @@ static int32_t stHistoryGroupDoStateCheck(SSTriggerHistoryGroup *pGroup) {
// open the first window
char *newVal = colDataGetData(pStateCol, startIdx);
int32_t bytes = isVarType ? varDataTLen(newVal) : pStateCol->info.bytes;
if (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_OPEN) {
if (pTask->notifyHistory && (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_OPEN)) {
code = streamBuildStateNotifyContent(STRIGGER_EVENT_WINDOW_OPEN, &pStateCol->info, NULL, newVal,
&pExtraNotifyContent);
QUERY_CHECK_CODE(code, lino, _end);
@@ -8122,10 +8122,6 @@ static int32_t stHistoryGroupDoStateCheck(SSTriggerHistoryGroup *pGroup) {
startIdx++;
}
if (!IS_TRIGGER_GROUP_OPEN_WINDOW(pGroup) && pTsData[startIdx] > pContext->range.ekey) {
goto _end;
}
for (int32_t r = startIdx; r < endIdx; r++) {
char *newVal = colDataGetData(pStateCol, r);
int32_t bytes = isVarType ? varDataTLen(newVal) : pStateCol->info.bytes;
@@ -8133,18 +8129,14 @@ static int32_t stHistoryGroupDoStateCheck(SSTriggerHistoryGroup *pGroup) {
TRINGBUF_HEAD(&pGroup->winBuf)->wrownum++;
TRINGBUF_HEAD(&pGroup->winBuf)->range.ekey = pTsData[r];
} else {
if (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_CLOSE) {
if (pTask->notifyHistory && (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_CLOSE)) {
code = streamBuildStateNotifyContent(STRIGGER_EVENT_WINDOW_CLOSE, &pStateCol->info, pStateData, newVal,
&pExtraNotifyContent);
QUERY_CHECK_CODE(code, lino, _end);
}
bool isLastWin = TRINGBUF_HEAD(&pGroup->winBuf)->range.ekey > pContext->range.ekey;
code = stHistoryGroupCloseWindow(pGroup, &pExtraNotifyContent, false);
QUERY_CHECK_CODE(code, lino, _end);
if (isLastWin) {
break;
}
if (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_OPEN) {
if (pTask->notifyHistory && (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_OPEN)) {
code = streamBuildStateNotifyContent(STRIGGER_EVENT_WINDOW_OPEN, &pStateCol->info, pStateData, newVal,
&pExtraNotifyContent);
QUERY_CHECK_CODE(code, lino, _end);
@@ -8195,10 +8187,6 @@ static int32_t stHistoryGroupDoEventCheck(SSTriggerHistoryGroup *pGroup) {
psCol = NULL;
peCol = NULL;
if (!IS_TRIGGER_GROUP_OPEN_WINDOW(pGroup) && pTsData[startIdx] > pContext->range.ekey) {
goto _end;
}
for (int32_t r = startIdx; r < endIdx; r++) {
if (IS_TRIGGER_GROUP_OPEN_WINDOW(pGroup)) {
TRINGBUF_HEAD(&pGroup->winBuf)->range.ekey = pTsData[r];
@@ -8215,7 +8203,7 @@ static int32_t stHistoryGroupDoEventCheck(SSTriggerHistoryGroup *pGroup) {
ps = (bool *)psCol->pData;
}
if (ps[r]) {
if (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_OPEN) {
if (pTask->notifyHistory && (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_OPEN)) {
code = streamBuildEventNotifyContent(pDataBlock, pTask->pStartCondCols, r, &pExtraNotifyContent);
QUERY_CHECK_CODE(code, lino, _end);
}
@@ -8235,16 +8223,12 @@ static int32_t stHistoryGroupDoEventCheck(SSTriggerHistoryGroup *pGroup) {
pe = (bool *)peCol->pData;
}
if (pe[r]) {
if (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_CLOSE) {
if (pTask->notifyHistory && (pTask->notifyEventType & STRIGGER_EVENT_WINDOW_CLOSE)) {
code = streamBuildEventNotifyContent(pDataBlock, pTask->pEndCondCols, r, &pExtraNotifyContent);
QUERY_CHECK_CODE(code, lino, _end);
}
bool isLastWin = TRINGBUF_HEAD(&pGroup->winBuf)->range.ekey > pContext->range.ekey;
code = stHistoryGroupCloseWindow(pGroup, &pExtraNotifyContent, false);
QUERY_CHECK_CODE(code, lino, _end);
if (isLastWin) {
break;
}
}
}
}


@@ -91,6 +91,7 @@ static SKeyword keywordTable[] = {
{"DBS", TK_DBS},
{"DECIMAL", TK_DECIMAL},
{"DELETE", TK_DELETE},
{"DELETE_MARK", TK_DELETE_MARK},
{"DELETE_OUTPUT_TABLE", TK_DELETE_OUTPUT_TABLE},
{"DELETE_RECALC", TK_DELETE_RECALC},
{"DESC", TK_DESC},


@@ -9349,26 +9349,34 @@ static int32_t translateSetOperator(STranslateContext* pCxt, SSetOperator* pSetO
(*pCxt->pParseCxt->setQueryFp)(pCxt->pParseCxt->requestRid);
}
int32_t code = translateQuery(pCxt, pSetOperator->pLeft);
if (TSDB_CODE_SUCCESS == code) {
code = resetHighLevelTranslateNamespace(pCxt);
bool hasTrows = false;
int32_t code = TSDB_CODE_SUCCESS;
PAR_ERR_RET(translateQuery(pCxt, pSetOperator->pLeft));
if (pCxt->createStreamCalc) {
hasTrows = BIT_FLAG_TEST_MASK(pCxt->placeHolderBitmap, PLACE_HOLDER_PARTITION_ROWS);
BIT_FLAG_UNSET_MASK(pCxt->placeHolderBitmap, PLACE_HOLDER_PARTITION_ROWS);
}
if (TSDB_CODE_SUCCESS == code) {
code = translateQuery(pCxt, pSetOperator->pRight);
}
if (TSDB_CODE_SUCCESS == code) {
pSetOperator->joinContains = getBothJoinContais(pSetOperator->pLeft, pSetOperator->pRight);
}
if (TSDB_CODE_SUCCESS == code) {
pSetOperator->precision = calcSetOperatorPrecision(pSetOperator);
code = translateSetOperProject(pCxt, pSetOperator);
}
if (TSDB_CODE_SUCCESS == code) {
code = translateSetOperOrderBy(pCxt, pSetOperator);
}
if (TSDB_CODE_SUCCESS == code) {
code = checkSetOperLimit(pCxt, (SLimitNode*)pSetOperator->pLimit);
PAR_ERR_RET(resetHighLevelTranslateNamespace(pCxt));
PAR_ERR_RET(translateQuery(pCxt, pSetOperator->pRight));
if (pCxt->createStreamCalc && hasTrows) {
if (BIT_FLAG_TEST_MASK(pCxt->placeHolderBitmap, PLACE_HOLDER_PARTITION_ROWS)) {
PAR_ERR_RET(generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_STREAM_INVALID_QUERY,
"%%%%trows cannot appear multiple times in a union query"));
} else {
BIT_FLAG_SET_MASK(pCxt->placeHolderBitmap, PLACE_HOLDER_PARTITION_ROWS);
}
}
pSetOperator->joinContains = getBothJoinContais(pSetOperator->pLeft, pSetOperator->pRight);
pSetOperator->precision = calcSetOperatorPrecision(pSetOperator);
PAR_ERR_RET(translateSetOperProject(pCxt, pSetOperator));
PAR_ERR_RET(translateSetOperOrderBy(pCxt, pSetOperator));
PAR_ERR_RET(checkSetOperLimit(pCxt, (SLimitNode*)pSetOperator->pLimit));
return code;
}
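The rewritten `translateSetOperator` records whether the left branch used the `%%trows` placeholder, clears the flag before translating the right branch, and rejects the query if the right branch raises it again. The flag dance can be sketched in isolation (the bitmap layout and names are illustrative, not the parser's actual `BIT_FLAG_*` macros):

```c
#include <stdbool.h>
#include <stdint.h>

#define FLAG_TROWS (1u << 0) /* stand-in for PLACE_HOLDER_PARTITION_ROWS */

/* left_sets/right_sets model whether each branch's translation raised the
 * flag. Returns true when the placeholder appears in both branches, which
 * the parser turns into TSDB_CODE_STREAM_INVALID_QUERY. */
static bool trows_in_both_branches(uint32_t *bitmap, bool left_sets, bool right_sets) {
    if (left_sets) {
        *bitmap |= FLAG_TROWS;
    }
    bool left_had = (*bitmap & FLAG_TROWS) != 0;
    *bitmap &= ~FLAG_TROWS;       /* unset before the right branch runs */
    if (right_sets) {
        *bitmap |= FLAG_TROWS;
    }
    if (left_had && (*bitmap & FLAG_TROWS) != 0) {
        return true;              /* %%trows used on both sides of UNION */
    }
    if (left_had) {
        *bitmap |= FLAG_TROWS;    /* restore so outer scopes still see it */
    }
    return false;
}
```

Clearing then re-testing the same bit is what lets one bitmap detect reuse across sibling subqueries without a separate counter.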


@@ -115,6 +115,13 @@ void generateInformationSchema(MockCatalogService* mcs) {
.addColumn("view_name", TSDB_DATA_TYPE_BINARY, TSDB_VIEW_NAME_LEN)
.addColumn("create_time", TSDB_DATA_TYPE_TIMESTAMP)
.done();
mcs->createTableBuilder(TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_STREAMS, TSDB_SYSTEM_TABLE, 5)
.addColumn("stream_name", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN)
.addColumn("db_name", TSDB_DATA_TYPE_BINARY, TSDB_DB_NAME_LEN)
.addColumn("status", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN)
.addColumn("message", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN)
.addColumn("create_time", TSDB_DATA_TYPE_TIMESTAMP)
.done();
}
void generatePerformanceSchema(MockCatalogService* mcs) {
@@ -122,12 +129,6 @@ void generatePerformanceSchema(MockCatalogService* mcs) {
.addColumn("id", TSDB_DATA_TYPE_INT)
.addColumn("create_time", TSDB_DATA_TYPE_TIMESTAMP)
.done();
mcs->createTableBuilder(TSDB_INFORMATION_SCHEMA_DB, TSDB_INS_TABLE_STREAMS, TSDB_SYSTEM_TABLE, 4)
.addColumn("stream_name", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN)
.addColumn("status", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN)
.addColumn("message", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN)
.addColumn("create_time", TSDB_DATA_TYPE_TIMESTAMP)
.done();
mcs->createTableBuilder(TSDB_PERFORMANCE_SCHEMA_DB, TSDB_PERFS_TABLE_CONSUMERS, TSDB_SYSTEM_TABLE, 2)
.addColumn("consumer_id", TSDB_DATA_TYPE_BIGINT)
.addColumn("consumer_group", TSDB_DATA_TYPE_BINARY, TSDB_TABLE_NAME_LEN)
@@ -225,7 +226,7 @@ void generateTestStables(MockCatalogService* mcs, const std::string& db) {
mcs->createSubTable(db, "st2", "st2s2", 3);
}
{
ITableBuilder& builder = mcs->createTableBuilder(db, "t1", TSDB_NORMAL_TABLE, 3, 0)
ITableBuilder& builder = mcs->createTableBuilder(db, "stream_t1", TSDB_NORMAL_TABLE, 3, 0)
.setPrecision(TSDB_TIME_PRECISION_MILLI)
.addColumn("ts", TSDB_DATA_TYPE_TIMESTAMP)
.addColumn("c1", TSDB_DATA_TYPE_INT)
@@ -233,7 +234,7 @@ void generateTestStables(MockCatalogService* mcs, const std::string& db) {
builder.done();
}
{
ITableBuilder& builder = mcs->createTableBuilder(db, "t2", TSDB_NORMAL_TABLE, 4, 0)
ITableBuilder& builder = mcs->createTableBuilder(db, "stream_t2", TSDB_NORMAL_TABLE, 4, 0)
.setPrecision(TSDB_TIME_PRECISION_MILLI)
.addColumn("ts", TSDB_DATA_TYPE_TIMESTAMP)
.addColumn("c1", TSDB_DATA_TYPE_INT)
@@ -360,6 +361,8 @@ int32_t __catalogRefreshGetTableMeta(SCatalog* pCatalog, SRequestConnInfo* pConn
int32_t __catalogRemoveTableMeta(SCatalog* pCtg, SName* pTableName) { return 0; }
int32_t __catalogRemoveTableRelatedMeta(SCatalog* pCtg, SName* pTableName) { return 0; }
int32_t __catalogRemoveViewMeta(SCatalog* pCtg, SName* pTableName) { return 0; }
int32_t __catalogGetTableIndex(SCatalog* pCtg, void* pTrans, const SEpSet* pMgmtEps, const SName* pName,
@@ -398,6 +401,7 @@ void initMetaDataEnv() {
stub.set(catalogGetUdfInfo, __catalogGetUdfInfo);
stub.set(catalogRefreshGetTableMeta, __catalogRefreshGetTableMeta);
stub.set(catalogRemoveTableMeta, __catalogRemoveTableMeta);
stub.set(catalogRemoveTableRelatedMeta, __catalogRemoveTableRelatedMeta);
stub.set(catalogRemoveViewMeta, __catalogRemoveViewMeta);
stub.set(catalogGetTableIndex, __catalogGetTableIndex);
stub.set(catalogGetDnodeList, __catalogGetDnodeList);


@@ -552,248 +552,247 @@ TEST_F(ParserInitialATest, alterSTableSemanticCheck) {
* | COMMENT 'string_value'
* }
*/
// TODO(smj) : disable for stream, reopen it later
//TEST_F(ParserInitialATest, alterTable) {
// useDb("root", "test");
//
// // normal/child table
// {
// SVAlterTbReq expect = {0};
//
// auto clearAlterTbReq = [&]() {
// free(expect.tbName);
// free(expect.colName);
// free(expect.colNewName);
// free(expect.tagName);
// memset(&expect, 0, sizeof(SVAlterTbReq));
// };
//
// auto setAlterTableCol = [&](const char* pTbname, int8_t alterType, const char* pColName, int8_t dataType = 0,
// int32_t dataBytes = 0, const char* pNewColName = nullptr) {
// expect.tbName = strdup(pTbname);
// expect.action = alterType;
// expect.colName = strdup(pColName);
//
// switch (alterType) {
// case TSDB_ALTER_TABLE_ADD_COLUMN:
// expect.type = dataType;
// expect.flags = COL_SMA_ON;
// expect.bytes = dataBytes > 0 ? dataBytes : (dataType > 0 ? tDataTypes[dataType].bytes : 0);
// break;
// case TSDB_ALTER_TABLE_UPDATE_COLUMN_BYTES:
// expect.colModBytes = dataBytes;
// break;
// case TSDB_ALTER_TABLE_UPDATE_COLUMN_NAME:
// expect.colNewName = strdup(pNewColName);
// break;
// default:
// break;
// }
// };
//
// auto setAlterTableTag = [&](const char* pTbname, const char* pTagName, uint8_t* pNewVal, uint32_t bytes) {
// expect.tbName = strdup(pTbname);
// expect.action = TSDB_ALTER_TABLE_UPDATE_TAG_VAL;
// expect.tagName = strdup(pTagName);
//
// expect.isNull = (nullptr == pNewVal);
// expect.nTagVal = bytes;
// expect.pTagVal = pNewVal;
// };
//
// auto setAlterTableOptions = [&](const char* pTbname, int32_t ttl, char* pComment = nullptr) {
// expect.tbName = strdup(pTbname);
// expect.action = TSDB_ALTER_TABLE_UPDATE_OPTIONS;
// if (-1 != ttl) {
// expect.updateTTL = true;
// expect.newTTL = ttl;
// }
// if (nullptr != pComment) {
// expect.newCommentLen = strlen(pComment);
// expect.newComment = pComment;
// }
// };
//
// setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
// ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_VNODE_MODIFY_STMT);
// SVnodeModifyOpStmt* pStmt = (SVnodeModifyOpStmt*)pQuery->pRoot;
//
// ASSERT_EQ(pStmt->sqlNodeType, QUERY_NODE_ALTER_TABLE_STMT);
// ASSERT_NE(pStmt->pDataBlocks, nullptr);
// ASSERT_EQ(taosArrayGetSize(pStmt->pDataBlocks), 1);
// SVgDataBlocks* pVgData = (SVgDataBlocks*)taosArrayGetP(pStmt->pDataBlocks, 0);
// void* pBuf = POINTER_SHIFT(pVgData->pData, sizeof(SMsgHead));
// SVAlterTbReq req = {0};
// SDecoder coder = {0};
// tDecoderInit(&coder, (uint8_t*)pBuf, pVgData->size);
// ASSERT_EQ(tDecodeSVAlterTbReq(&coder, &req), TSDB_CODE_SUCCESS);
//
// ASSERT_EQ(std::string(req.tbName), std::string(expect.tbName));
// ASSERT_EQ(req.action, expect.action);
// if (nullptr != expect.colName) {
// ASSERT_EQ(std::string(req.colName), std::string(expect.colName));
// }
// ASSERT_EQ(req.type, expect.type);
// ASSERT_EQ(req.flags, expect.flags);
// ASSERT_EQ(req.bytes, expect.bytes);
// ASSERT_EQ(req.colModBytes, expect.colModBytes);
// if (nullptr != expect.colNewName) {
// ASSERT_EQ(std::string(req.colNewName), std::string(expect.colNewName));
// }
// if (nullptr != expect.tagName) {
// ASSERT_EQ(std::string(req.tagName), std::string(expect.tagName));
// }
// ASSERT_EQ(req.isNull, expect.isNull);
// ASSERT_EQ(req.nTagVal, expect.nTagVal);
// if (nullptr != req.pTagVal) {
// ASSERT_EQ(memcmp(req.pTagVal, expect.pTagVal, expect.nTagVal), 0);
// }
// ASSERT_EQ(req.updateTTL, expect.updateTTL);
// ASSERT_EQ(req.newTTL, expect.newTTL);
// if (nullptr != expect.newComment) {
// ASSERT_EQ(std::string(req.newComment), std::string(expect.newComment));
// ASSERT_EQ(req.newCommentLen, strlen(req.newComment));
// ASSERT_EQ(expect.newCommentLen, strlen(expect.newComment));
// }
//
// tDecoderClear(&coder);
// });
//
// setAlterTableOptions("t1", 10, nullptr);
// run("ALTER TABLE t1 TTL 10");
// clearAlterTbReq();
//
// setAlterTableOptions("t1", -1, (char*)"test");
// run("ALTER TABLE t1 COMMENT 'test'");
// clearAlterTbReq();
//
// setAlterTableCol("t1", TSDB_ALTER_TABLE_ADD_COLUMN, "cc1", TSDB_DATA_TYPE_BIGINT);
// run("ALTER TABLE t1 ADD COLUMN cc1 BIGINT");
// clearAlterTbReq();
//
// setAlterTableCol("t1", TSDB_ALTER_TABLE_DROP_COLUMN, "c1");
// run("ALTER TABLE t1 DROP COLUMN c1");
// clearAlterTbReq();
//
// setAlterTableCol("t1", TSDB_ALTER_TABLE_UPDATE_COLUMN_BYTES, "c2", TSDB_DATA_TYPE_VARCHAR, 30 + VARSTR_HEADER_SIZE);
// run("ALTER TABLE t1 MODIFY COLUMN c2 VARCHAR(30)");
// clearAlterTbReq();
//
// setAlterTableCol("t1", TSDB_ALTER_TABLE_UPDATE_COLUMN_NAME, "c1", 0, 0, "cc1");
// run("ALTER TABLE t1 RENAME COLUMN c1 cc1");
// clearAlterTbReq();
//
// int32_t val = 10;
// setAlterTableTag("st1s1", "tag1", (uint8_t*)&val, sizeof(val));
// run("ALTER TABLE st1s1 SET TAG tag1=10");
// clearAlterTbReq();
// }
//
// // super table
// {
// SMAlterStbReq expect = {0};
//
// auto clearAlterStbReq = [&]() {
// tFreeSMAltertbReq(&expect);
// memset(&expect, 0, sizeof(SMAlterStbReq));
// };
//
// auto setAlterStbReq = [&](const char* pTbname, int8_t alterType, int32_t numOfFields = 0,
// const char* pField1Name = nullptr, int8_t field1Type = 0, int32_t field1Bytes = 0,
// const char* pField2Name = nullptr, const char* pComment = nullptr) {
// int32_t len = snprintf(expect.name, sizeof(expect.name), "0.test.%s", pTbname);
// expect.name[len] = '\0';
// expect.alterType = alterType;
// if (nullptr != pComment) {
// expect.comment = strdup(pComment);
// expect.commentLen = strlen(pComment);
// }
//
// expect.numOfFields = numOfFields;
// if (NULL == expect.pFields) {
// expect.pFields = taosArrayInit(2, sizeof(TAOS_FIELD));
// ASSERT_TRUE(expect.pFields);
// TAOS_FIELD field = {0};
// ASSERT_TRUE(nullptr != taosArrayPush(expect.pFields, &field));
// ASSERT_TRUE(nullptr != taosArrayPush(expect.pFields, &field));
// }
//
// TAOS_FIELD* pField = (TAOS_FIELD*)taosArrayGet(expect.pFields, 0);
// if (NULL != pField1Name) {
// strcpy(pField->name, pField1Name);
// pField->name[strlen(pField1Name)] = '\0';
// } else {
// memset(pField, 0, sizeof(TAOS_FIELD));
// }
// pField->type = field1Type;
// pField->bytes = field1Bytes > 0 ? field1Bytes : (field1Type > 0 ? tDataTypes[field1Type].bytes : 0);
//
// pField = (TAOS_FIELD*)taosArrayGet(expect.pFields, 1);
// if (NULL != pField2Name) {
// strcpy(pField->name, pField2Name);
// pField->name[strlen(pField2Name)] = '\0';
// } else {
// memset(pField, 0, sizeof(TAOS_FIELD));
// }
// pField->type = 0;
// pField->bytes = 0;
// };
//
// setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
// ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_ALTER_TABLE_STMT);
// ASSERT_EQ(pQuery->pCmdMsg->msgType, TDMT_MND_ALTER_STB);
// SMAlterStbReq req = {0};
// ASSERT_EQ(tDeserializeSMAlterStbReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req), TSDB_CODE_SUCCESS);
// ASSERT_EQ(std::string(req.name), std::string(expect.name));
// ASSERT_EQ(req.alterType, expect.alterType);
// ASSERT_EQ(req.numOfFields, expect.numOfFields);
// if (expect.numOfFields > 0) {
// TAOS_FIELD* pField = (TAOS_FIELD*)taosArrayGet(req.pFields, 0);
// TAOS_FIELD* pExpectField = (TAOS_FIELD*)taosArrayGet(expect.pFields, 0);
// ASSERT_EQ(std::string(pField->name), std::string(pExpectField->name));
// ASSERT_EQ(pField->type, pExpectField->type);
// ASSERT_EQ(pField->bytes, pExpectField->bytes);
// }
// if (expect.numOfFields > 1) {
// TAOS_FIELD* pField = (TAOS_FIELD*)taosArrayGet(req.pFields, 1);
// TAOS_FIELD* pExpectField = (TAOS_FIELD*)taosArrayGet(expect.pFields, 1);
// ASSERT_EQ(std::string(pField->name), std::string(pExpectField->name));
// ASSERT_EQ(pField->type, pExpectField->type);
// ASSERT_EQ(pField->bytes, pExpectField->bytes);
// }
// tFreeSMAltertbReq(&req);
// });
//
// setAlterStbReq("st1", TSDB_ALTER_TABLE_ADD_TAG, 1, "tag11", TSDB_DATA_TYPE_BIGINT);
// run("ALTER TABLE st1 ADD TAG tag11 BIGINT");
// clearAlterStbReq();
//
// setAlterStbReq("st1", TSDB_ALTER_TABLE_DROP_TAG, 1, "tag1");
// run("ALTER TABLE st1 DROP TAG tag1");
// clearAlterStbReq();
//
// setAlterStbReq("st1", TSDB_ALTER_TABLE_UPDATE_TAG_BYTES, 1, "tag2", TSDB_DATA_TYPE_VARCHAR,
// 30 + VARSTR_HEADER_SIZE);
// run("ALTER TABLE st1 MODIFY TAG tag2 VARCHAR(30)");
// clearAlterStbReq();
//
// setAlterStbReq("st1", TSDB_ALTER_TABLE_UPDATE_TAG_NAME, 2, "tag1", 0, 0, "tag11");
// run("ALTER TABLE st1 RENAME TAG tag1 tag11");
// clearAlterStbReq();
// }
//}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(ParserInitialATest, alterTableSemanticCheck) {
// useDb("root", "test");
//
// run("ALTER TABLE st1s1 RENAME COLUMN c1 cc1", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
// run("ALTER TABLE st1s1 ADD TAG tag11 BIGINT", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
// run("ALTER TABLE st1s1 DROP TAG tag1", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
// run("ALTER TABLE st1s1 MODIFY TAG tag2 VARCHAR(30)", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
// run("ALTER TABLE st1s1 RENAME TAG tag1 tag11", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
// run("ALTER TABLE st1s1 SET TAG tag2 = '123456789012345678901'", TSDB_CODE_PAR_VALUE_TOO_LONG);
//}
TEST_F(ParserInitialATest, alterTable) {
useDb("root", "test");
// normal/child table
{
SVAlterTbReq expect = {0};
auto clearAlterTbReq = [&]() {
free(expect.tbName);
free(expect.colName);
free(expect.colNewName);
free(expect.tagName);
memset(&expect, 0, sizeof(SVAlterTbReq));
};
auto setAlterTableCol = [&](const char* pTbname, int8_t alterType, const char* pColName, int8_t dataType = 0,
int32_t dataBytes = 0, const char* pNewColName = nullptr) {
expect.tbName = strdup(pTbname);
expect.action = alterType;
expect.colName = strdup(pColName);
switch (alterType) {
case TSDB_ALTER_TABLE_ADD_COLUMN:
expect.type = dataType;
expect.flags = COL_SMA_ON;
expect.bytes = dataBytes > 0 ? dataBytes : (dataType > 0 ? tDataTypes[dataType].bytes : 0);
break;
case TSDB_ALTER_TABLE_UPDATE_COLUMN_BYTES:
expect.colModBytes = dataBytes;
break;
case TSDB_ALTER_TABLE_UPDATE_COLUMN_NAME:
expect.colNewName = strdup(pNewColName);
break;
default:
break;
}
};
auto setAlterTableTag = [&](const char* pTbname, const char* pTagName, uint8_t* pNewVal, uint32_t bytes) {
expect.tbName = strdup(pTbname);
expect.action = TSDB_ALTER_TABLE_UPDATE_TAG_VAL;
expect.tagName = strdup(pTagName);
expect.isNull = (nullptr == pNewVal);
expect.nTagVal = bytes;
expect.pTagVal = pNewVal;
};
auto setAlterTableOptions = [&](const char* pTbname, int32_t ttl, char* pComment = nullptr) {
expect.tbName = strdup(pTbname);
expect.action = TSDB_ALTER_TABLE_UPDATE_OPTIONS;
if (-1 != ttl) {
expect.updateTTL = true;
expect.newTTL = ttl;
}
if (nullptr != pComment) {
expect.newCommentLen = strlen(pComment);
expect.newComment = pComment;
}
};
setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_VNODE_MODIFY_STMT);
SVnodeModifyOpStmt* pStmt = (SVnodeModifyOpStmt*)pQuery->pRoot;
ASSERT_EQ(pStmt->sqlNodeType, QUERY_NODE_ALTER_TABLE_STMT);
ASSERT_NE(pStmt->pDataBlocks, nullptr);
ASSERT_EQ(taosArrayGetSize(pStmt->pDataBlocks), 1);
SVgDataBlocks* pVgData = (SVgDataBlocks*)taosArrayGetP(pStmt->pDataBlocks, 0);
void* pBuf = POINTER_SHIFT(pVgData->pData, sizeof(SMsgHead));
SVAlterTbReq req = {0};
SDecoder coder = {0};
tDecoderInit(&coder, (uint8_t*)pBuf, pVgData->size);
ASSERT_EQ(tDecodeSVAlterTbReq(&coder, &req), TSDB_CODE_SUCCESS);
ASSERT_EQ(std::string(req.tbName), std::string(expect.tbName));
ASSERT_EQ(req.action, expect.action);
if (nullptr != expect.colName) {
ASSERT_EQ(std::string(req.colName), std::string(expect.colName));
}
ASSERT_EQ(req.type, expect.type);
ASSERT_EQ(req.flags, expect.flags);
ASSERT_EQ(req.bytes, expect.bytes);
ASSERT_EQ(req.colModBytes, expect.colModBytes);
if (nullptr != expect.colNewName) {
ASSERT_EQ(std::string(req.colNewName), std::string(expect.colNewName));
}
if (nullptr != expect.tagName) {
ASSERT_EQ(std::string(req.tagName), std::string(expect.tagName));
}
ASSERT_EQ(req.isNull, expect.isNull);
ASSERT_EQ(req.nTagVal, expect.nTagVal);
if (nullptr != req.pTagVal) {
ASSERT_EQ(memcmp(req.pTagVal, expect.pTagVal, expect.nTagVal), 0);
}
ASSERT_EQ(req.updateTTL, expect.updateTTL);
ASSERT_EQ(req.newTTL, expect.newTTL);
if (nullptr != expect.newComment) {
ASSERT_EQ(std::string(req.newComment), std::string(expect.newComment));
ASSERT_EQ(req.newCommentLen, strlen(req.newComment));
ASSERT_EQ(expect.newCommentLen, strlen(expect.newComment));
}
tDecoderClear(&coder);
});
setAlterTableOptions("t1", 10, nullptr);
run("ALTER TABLE t1 TTL 10");
clearAlterTbReq();
setAlterTableOptions("t1", -1, (char*)"test");
run("ALTER TABLE t1 COMMENT 'test'");
clearAlterTbReq();
setAlterTableCol("t1", TSDB_ALTER_TABLE_ADD_COLUMN, "cc1", TSDB_DATA_TYPE_BIGINT);
run("ALTER TABLE t1 ADD COLUMN cc1 BIGINT");
clearAlterTbReq();
setAlterTableCol("t1", TSDB_ALTER_TABLE_DROP_COLUMN, "c1");
run("ALTER TABLE t1 DROP COLUMN c1");
clearAlterTbReq();
setAlterTableCol("t1", TSDB_ALTER_TABLE_UPDATE_COLUMN_BYTES, "c2", TSDB_DATA_TYPE_VARCHAR, 30 + VARSTR_HEADER_SIZE);
run("ALTER TABLE t1 MODIFY COLUMN c2 VARCHAR(30)");
clearAlterTbReq();
setAlterTableCol("t1", TSDB_ALTER_TABLE_UPDATE_COLUMN_NAME, "c1", 0, 0, "cc1");
run("ALTER TABLE t1 RENAME COLUMN c1 cc1");
clearAlterTbReq();
int32_t val = 10;
setAlterTableTag("st1s1", "tag1", (uint8_t*)&val, sizeof(val));
run("ALTER TABLE st1s1 SET TAG tag1=10");
clearAlterTbReq();
}
// super table
{
SMAlterStbReq expect = {0};
auto clearAlterStbReq = [&]() {
tFreeSMAltertbReq(&expect);
memset(&expect, 0, sizeof(SMAlterStbReq));
};
auto setAlterStbReq = [&](const char* pTbname, int8_t alterType, int32_t numOfFields = 0,
const char* pField1Name = nullptr, int8_t field1Type = 0, int32_t field1Bytes = 0,
const char* pField2Name = nullptr, const char* pComment = nullptr) {
int32_t len = snprintf(expect.name, sizeof(expect.name), "0.test.%s", pTbname);
expect.name[len] = '\0';
expect.alterType = alterType;
if (nullptr != pComment) {
expect.comment = strdup(pComment);
expect.commentLen = strlen(pComment);
}
expect.numOfFields = numOfFields;
if (NULL == expect.pFields) {
expect.pFields = taosArrayInit(2, sizeof(TAOS_FIELD));
ASSERT_TRUE(expect.pFields);
TAOS_FIELD field = {0};
ASSERT_TRUE(nullptr != taosArrayPush(expect.pFields, &field));
ASSERT_TRUE(nullptr != taosArrayPush(expect.pFields, &field));
}
TAOS_FIELD* pField = (TAOS_FIELD*)taosArrayGet(expect.pFields, 0);
if (NULL != pField1Name) {
strcpy(pField->name, pField1Name);
pField->name[strlen(pField1Name)] = '\0';
} else {
memset(pField, 0, sizeof(TAOS_FIELD));
}
pField->type = field1Type;
pField->bytes = field1Bytes > 0 ? field1Bytes : (field1Type > 0 ? tDataTypes[field1Type].bytes : 0);
pField = (TAOS_FIELD*)taosArrayGet(expect.pFields, 1);
if (NULL != pField2Name) {
strcpy(pField->name, pField2Name);
pField->name[strlen(pField2Name)] = '\0';
} else {
memset(pField, 0, sizeof(TAOS_FIELD));
}
pField->type = 0;
pField->bytes = 0;
};
setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_ALTER_TABLE_STMT);
ASSERT_EQ(pQuery->pCmdMsg->msgType, TDMT_MND_ALTER_STB);
SMAlterStbReq req = {0};
ASSERT_EQ(tDeserializeSMAlterStbReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req), TSDB_CODE_SUCCESS);
ASSERT_EQ(std::string(req.name), std::string(expect.name));
ASSERT_EQ(req.alterType, expect.alterType);
ASSERT_EQ(req.numOfFields, expect.numOfFields);
if (expect.numOfFields > 0) {
TAOS_FIELD* pField = (TAOS_FIELD*)taosArrayGet(req.pFields, 0);
TAOS_FIELD* pExpectField = (TAOS_FIELD*)taosArrayGet(expect.pFields, 0);
ASSERT_EQ(std::string(pField->name), std::string(pExpectField->name));
ASSERT_EQ(pField->type, pExpectField->type);
ASSERT_EQ(pField->bytes, pExpectField->bytes);
}
if (expect.numOfFields > 1) {
TAOS_FIELD* pField = (TAOS_FIELD*)taosArrayGet(req.pFields, 1);
TAOS_FIELD* pExpectField = (TAOS_FIELD*)taosArrayGet(expect.pFields, 1);
ASSERT_EQ(std::string(pField->name), std::string(pExpectField->name));
ASSERT_EQ(pField->type, pExpectField->type);
ASSERT_EQ(pField->bytes, pExpectField->bytes);
}
tFreeSMAltertbReq(&req);
});
setAlterStbReq("st1", TSDB_ALTER_TABLE_ADD_TAG, 1, "tag11", TSDB_DATA_TYPE_BIGINT);
run("ALTER TABLE st1 ADD TAG tag11 BIGINT");
clearAlterStbReq();
setAlterStbReq("st1", TSDB_ALTER_TABLE_DROP_TAG, 1, "tag1");
run("ALTER TABLE st1 DROP TAG tag1");
clearAlterStbReq();
setAlterStbReq("st1", TSDB_ALTER_TABLE_UPDATE_TAG_BYTES, 1, "tag2", TSDB_DATA_TYPE_VARCHAR,
30 + VARSTR_HEADER_SIZE);
run("ALTER TABLE st1 MODIFY TAG tag2 VARCHAR(30)");
clearAlterStbReq();
setAlterStbReq("st1", TSDB_ALTER_TABLE_UPDATE_TAG_NAME, 2, "tag1", 0, 0, "tag11");
run("ALTER TABLE st1 RENAME TAG tag1 tag11");
clearAlterStbReq();
}
}
TEST_F(ParserInitialATest, alterTableSemanticCheck) {
useDb("root", "test");
run("ALTER TABLE st1s1 RENAME COLUMN c1 cc1", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
run("ALTER TABLE st1s1 ADD TAG tag11 BIGINT", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
run("ALTER TABLE st1s1 DROP TAG tag1", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
run("ALTER TABLE st1s1 MODIFY TAG tag2 VARCHAR(30)", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
run("ALTER TABLE st1s1 RENAME TAG tag1 tag11", TSDB_CODE_PAR_INVALID_ALTER_TABLE);
run("ALTER TABLE st1s1 SET TAG tag2 = '123456789012345678901'", TSDB_CODE_PAR_VALUE_TOO_LONG);
}
/*
* ALTER USER user_name alter_user_clause


@@ -107,61 +107,6 @@ TEST_F(ParserExplainToSyncdbTest, mergeVgroup) {
run("MERGE VGROUP 1 2");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(ParserExplainToSyncdbTest, pauseStreamStmt) {
// useDb("root", "test");
//
// SMPauseStreamReq expect = {0};
//
// auto setMPauseStreamReq = [&](const string& name, bool igNotExists = false) {
// snprintf(expect.name, sizeof(expect.name), "0.%s", name.c_str());
// expect.igNotExists = igNotExists;
// };
//
// setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
// ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_PAUSE_STREAM_STMT);
// ASSERT_EQ(pQuery->pCmdMsg->msgType, TDMT_MND_STOP_STREAM);
// SMPauseStreamReq req = {0};
// ASSERT_EQ(tDeserializeSMPauseStreamReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req), TSDB_CODE_SUCCESS);
// ASSERT_EQ(string(req.name), string(expect.name));
// ASSERT_EQ(req.igNotExists, expect.igNotExists);
// });
//
// setMPauseStreamReq("str1");
// run("PAUSE STREAM str1");
//
// setMPauseStreamReq("str2", true);
// run("PAUSE STREAM IF EXISTS str2");
//}
//
//TEST_F(ParserExplainToSyncdbTest, resumeStreamStmt) {
// useDb("root", "test");
//
// SMResumeStreamReq expect = {0};
//
// auto setMResumeStreamReq = [&](const string& name, bool igNotExists = false, bool igUntreated = false) {
// snprintf(expect.name, sizeof(expect.name), "0.%s", name.c_str());
// expect.igNotExists = igNotExists;
// expect.igUntreated = igUntreated;
// };
//
// setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
// ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_RESUME_STREAM_STMT);
// ASSERT_EQ(pQuery->pCmdMsg->msgType, TDMT_MND_START_STREAM);
// SMResumeStreamReq req = {0};
// ASSERT_EQ(tDeserializeSMResumeStreamReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req), TSDB_CODE_SUCCESS);
// ASSERT_EQ(string(req.name), string(expect.name));
// ASSERT_EQ(req.igNotExists, expect.igNotExists);
// ASSERT_EQ(req.igUntreated, expect.igUntreated);
// });
//
// setMResumeStreamReq("str1");
// run("RESUME STREAM str1");
//
// setMResumeStreamReq("str2", true, true);
// run("RESUME STREAM IF EXISTS IGNORE UNTREATED str2");
//}
TEST_F(ParserExplainToSyncdbTest, redistributeVgroup) {
useDb("root", "test");


@@ -497,98 +497,9 @@ TEST_F(ParserInitialCTest, createFunction) {
* CREATE [ OR REPLACE ] VIEW name [ ( column_name [, ...] ) ] AS query
*
*/
//TEST_F(ParserInitialCTest, createView) {
// useDb("root", "test");
//
// SCMCreateStreamReq expect = {0};
//
// auto clearCreateStreamReq = [&]() {
// tFreeSCMCreateStreamReq(&expect);
// memset(&expect, 0, sizeof(SCMCreateStreamReq));
// };
//
// auto setCreateStreamReq = [&](const char* pStream, const char* pSrcDb, const char* pSql, const char* pDstStb,
// int8_t igExists = 0) {
// snprintf(expect.name, sizeof(expect.name), "0.%s", pStream);
// expect.igExists = igExists;
// expect.sql = taosStrdup(pSql);
// };
//
///*STREAMTODO
// auto setStreamOptions =
// [&](int8_t createStb = STREAM_CREATE_STABLE_TRUE, int8_t triggerType = STREAM_TRIGGER_WINDOW_CLOSE,
// int64_t maxDelay = 0, int64_t watermark = 0, int8_t igExpired = STREAM_DEFAULT_IGNORE_EXPIRED,
// int8_t fillHistory = STREAM_DEFAULT_FILL_HISTORY, int8_t igUpdate = STREAM_DEFAULT_IGNORE_UPDATE) {
// expect.createStb = createStb;
// expect.triggerType = triggerType;
// expect.maxDelay = maxDelay;
// expect.watermark = watermark;
// expect.fillHistory = fillHistory;
// expect.igExpired = igExpired;
// expect.igUpdate = igUpdate;
// };
//*/
//
// auto addTag = [&](const char* pFieldName, uint8_t type, int32_t bytes = 0) {
// SField field = {0};
// strcpy(field.name, pFieldName);
// field.type = type;
// field.bytes = bytes > 0 ? bytes : tDataTypes[type].bytes;
// field.flags |= COL_SMA_ON;
// };
//
// setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
// ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_CREATE_STREAM_STMT);
// SCMCreateStreamReq req = {0};
// ASSERT_TRUE(TSDB_CODE_SUCCESS ==
// tDeserializeSCMCreateStreamReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req));
//
// ASSERT_EQ(std::string(req.name), std::string(expect.name));
// ASSERT_EQ(req.igExists, expect.igExists);
// ASSERT_EQ(std::string(req.sql), std::string(expect.sql));
// ASSERT_EQ(req.triggerType, expect.triggerType);
// ASSERT_EQ(req.maxDelay, expect.maxDelay);
// ASSERT_EQ(req.watermark, expect.watermark);
// ASSERT_EQ(req.fillHistory, expect.fillHistory);
// tFreeSCMCreateStreamReq(&req);
// });
//
// setCreateStreamReq("s1", "test", "create stream s1 into st3 as select count(*) from t1 interval(10s)", "st3");
// //setStreamOptions();
// run("CREATE STREAM s1 INTO st3 AS SELECT COUNT(*) FROM t1 INTERVAL(10S)");
// clearCreateStreamReq();
//
// setCreateStreamReq(
// "s1", "test",
// "create stream if not exists s1 trigger max_delay 20s watermark 10s ignore expired 0 fill_history 0 ignore "
// "update 1 into st3 as select count(*) from t1 interval(10s)",
// "st3", 1);
// //setStreamOptions(STREAM_CREATE_STABLE_TRUE, STREAM_TRIGGER_MAX_DELAY, 20 * MILLISECOND_PER_SECOND,
// // 10 * MILLISECOND_PER_SECOND, 0, 0, 1);
// run("CREATE STREAM IF NOT EXISTS s1 TRIGGER MAX_DELAY 20s WATERMARK 10s IGNORE EXPIRED 0 FILL_HISTORY 0 IGNORE "
// "UPDATE 1 INTO st3 AS SELECT COUNT(*) FROM t1 INTERVAL(10S)");
// clearCreateStreamReq();
//
// setCreateStreamReq("s1", "test",
// "create stream s1 into st3 tags(tname varchar(10), id int) subtable(concat('new-', tname)) as "
// "select _wstart wstart, count(*) cnt from st1 partition by tbname tname, tag1 id interval(10s)",
// "st3");
// addTag("tname", TSDB_DATA_TYPE_VARCHAR, 10 + VARSTR_HEADER_SIZE);
// addTag("id", TSDB_DATA_TYPE_INT);
// //setStreamOptions();
// run("CREATE STREAM s1 INTO st3 TAGS(tname VARCHAR(10), id INT) SUBTABLE(CONCAT('new-', tname)) "
// "AS SELECT _WSTART wstart, COUNT(*) cnt FROM st1 PARTITION BY TBNAME tname, tag1 id INTERVAL(10S)");
// clearCreateStreamReq();
//
// // st1 already exists
// setCreateStreamReq(
// "s1", "test",
// "create stream s1 into st1 tags(tag2) as select max(c1), c2 from t1 partition by tbname tag2 interval(10s)",
// "st1");
// //setStreamOptions(STREAM_CREATE_STABLE_FALSE);
// run("CREATE STREAM s1 INTO st1 TAGS(tag2) AS SELECT MAX(c1), c2 FROM t1 PARTITION BY TBNAME tag2 INTERVAL(10S)");
// clearCreateStreamReq();
//}
TEST_F(ParserInitialCTest, createView) {
useDb("root", "test");
}
/*
* CREATE MNODE ON DNODE dnode_id
@@ -760,165 +671,165 @@ TEST_F(ParserInitialCTest, createSnode) {
* column_definition:
* type_name [COMMENT 'string_value']
*/
//TEST_F(ParserInitialCTest, createStable) {
// useDb("root", "test");
//
// SMCreateStbReq expect = {0};
//
// auto clearCreateStbReq = [&]() {
// tFreeSMCreateStbReq(&expect);
// memset(&expect, 0, sizeof(SMCreateStbReq));
// };
//
// auto setCreateStbReq =
// [&](const char* pDbName, const char* pTbName, int8_t igExists = 0, int64_t delay1 = -1, int64_t delay2 = -1,
// int64_t watermark1 = TSDB_DEFAULT_ROLLUP_WATERMARK, int64_t watermark2 = TSDB_DEFAULT_ROLLUP_WATERMARK,
// int64_t deleteMark1 = TSDB_DEFAULT_ROLLUP_DELETE_MARK, int64_t deleteMark2 = TSDB_DEFAULT_ROLLUP_DELETE_MARK,
// int32_t ttl = TSDB_DEFAULT_TABLE_TTL, const char* pComment = nullptr) {
// int32_t len = snprintf(expect.name, sizeof(expect.name), "0.%s.%s", pDbName, pTbName);
// expect.name[len] = '\0';
// expect.igExists = igExists;
// expect.delay1 = delay1;
// expect.delay2 = delay2;
// expect.watermark1 = watermark1;
// expect.watermark2 = watermark2;
// expect.deleteMark1 = deleteMark1;
// expect.deleteMark2 = deleteMark2;
// // expect.ttl = ttl;
// if (nullptr != pComment) {
// expect.pComment = taosStrdup(pComment);
// expect.commentLen = strlen(pComment);
// }
// };
//
// auto addFieldToCreateStbReq = [&](bool col, const char* pFieldName, uint8_t type, int32_t bytes = 0,
// int8_t flags = COL_SMA_ON) {
// SField field = {0};
// strcpy(field.name, pFieldName);
// field.type = type;
// field.bytes = bytes > 0 ? bytes : tDataTypes[type].bytes;
// field.flags = flags;
//
// if (col) {
// if (NULL == expect.pColumns) {
// expect.pColumns = taosArrayInit(TARRAY_MIN_SIZE, sizeof(SField));
// }
// ASSERT_TRUE(nullptr != taosArrayPush(expect.pColumns, &field));
// expect.numOfColumns += 1;
// } else {
// if (NULL == expect.pTags) {
// expect.pTags = taosArrayInit(TARRAY_MIN_SIZE, sizeof(SField));
// }
// ASSERT_TRUE(taosArrayPush(expect.pTags, &field) != nullptr);
// expect.numOfTags += 1;
// }
// };
//
// setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
// ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_CREATE_TABLE_STMT);
// SMCreateStbReq req = {0};
// ASSERT_TRUE(TSDB_CODE_SUCCESS == tDeserializeSMCreateStbReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req));
//
// ASSERT_EQ(std::string(req.name), std::string(expect.name));
// ASSERT_EQ(req.igExists, expect.igExists);
// ASSERT_EQ(req.delay1, expect.delay1);
// ASSERT_EQ(req.delay2, expect.delay2);
// ASSERT_EQ(req.watermark1, expect.watermark1);
// ASSERT_EQ(req.watermark2, expect.watermark2);
// ASSERT_EQ(req.ttl, expect.ttl);
// ASSERT_EQ(req.numOfColumns, expect.numOfColumns);
// ASSERT_EQ(req.numOfTags, expect.numOfTags);
// // ASSERT_EQ(req.commentLen, expect.commentLen);
// ASSERT_EQ(req.ast1Len, expect.ast1Len);
// ASSERT_EQ(req.ast2Len, expect.ast2Len);
//
// if (expect.numOfColumns > 0) {
// ASSERT_EQ(taosArrayGetSize(req.pColumns), expect.numOfColumns);
// ASSERT_EQ(taosArrayGetSize(req.pColumns), taosArrayGetSize(expect.pColumns));
// for (int32_t i = 0; i < expect.numOfColumns; ++i) {
// SField* pField = (SField*)taosArrayGet(req.pColumns, i);
// SField* pExpectField = (SField*)taosArrayGet(expect.pColumns, i);
// ASSERT_EQ(std::string(pField->name), std::string(pExpectField->name));
// ASSERT_EQ(pField->type, pExpectField->type);
// ASSERT_EQ(pField->bytes, pExpectField->bytes);
// ASSERT_EQ(pField->flags, pExpectField->flags);
// }
// }
// if (expect.numOfTags > 0) {
// ASSERT_EQ(taosArrayGetSize(req.pTags), expect.numOfTags);
// ASSERT_EQ(taosArrayGetSize(req.pTags), taosArrayGetSize(expect.pTags));
// for (int32_t i = 0; i < expect.numOfTags; ++i) {
// SField* pField = (SField*)taosArrayGet(req.pTags, i);
// SField* pExpectField = (SField*)taosArrayGet(expect.pTags, i);
// ASSERT_EQ(std::string(pField->name), std::string(pExpectField->name));
// ASSERT_EQ(pField->type, pExpectField->type);
// ASSERT_EQ(pField->bytes, pExpectField->bytes);
// ASSERT_EQ(pField->flags, pExpectField->flags);
// }
// }
// if (expect.commentLen > 0) {
// ASSERT_EQ(std::string(req.pComment), std::string(expect.pComment));
// }
// if (expect.ast1Len > 0) {
// ASSERT_EQ(std::string(req.pAst1), std::string(expect.pAst1));
// }
// if (expect.ast2Len > 0) {
// ASSERT_EQ(std::string(req.pAst2), std::string(expect.pAst2));
// }
// tFreeSMCreateStbReq(&req);
// });
//
// setCreateStbReq("test", "t1");
// addFieldToCreateStbReq(true, "ts", TSDB_DATA_TYPE_TIMESTAMP);
// addFieldToCreateStbReq(true, "c1", TSDB_DATA_TYPE_INT);
// addFieldToCreateStbReq(false, "id", TSDB_DATA_TYPE_INT);
// run("CREATE STABLE t1(ts TIMESTAMP, c1 INT) TAGS(id INT)");
// clearCreateStbReq();
//
// setCreateStbReq("rollup_db", "t1", 1, 100 * MILLISECOND_PER_SECOND, 10 * MILLISECOND_PER_MINUTE, 10,
// 1 * MILLISECOND_PER_MINUTE, 1000 * MILLISECOND_PER_SECOND, 200 * MILLISECOND_PER_MINUTE, 100,
// "test create table");
// addFieldToCreateStbReq(true, "ts", TSDB_DATA_TYPE_TIMESTAMP, 0, 0);
// addFieldToCreateStbReq(true, "c1", TSDB_DATA_TYPE_INT);
// addFieldToCreateStbReq(true, "c2", TSDB_DATA_TYPE_UINT);
// addFieldToCreateStbReq(true, "c3", TSDB_DATA_TYPE_BIGINT);
// addFieldToCreateStbReq(true, "c4", TSDB_DATA_TYPE_UBIGINT, 0, 0);
// addFieldToCreateStbReq(true, "c5", TSDB_DATA_TYPE_FLOAT, 0, 0);
// addFieldToCreateStbReq(true, "c6", TSDB_DATA_TYPE_DOUBLE, 0, 0);
// addFieldToCreateStbReq(true, "c7", TSDB_DATA_TYPE_BINARY, 20 + VARSTR_HEADER_SIZE, 0);
// addFieldToCreateStbReq(true, "c8", TSDB_DATA_TYPE_SMALLINT, 0, 0);
// addFieldToCreateStbReq(true, "c9", TSDB_DATA_TYPE_USMALLINT, 0, 0);
// addFieldToCreateStbReq(true, "c10", TSDB_DATA_TYPE_TINYINT, 0, 0);
// addFieldToCreateStbReq(true, "c11", TSDB_DATA_TYPE_UTINYINT, 0, 0);
// addFieldToCreateStbReq(true, "c12", TSDB_DATA_TYPE_BOOL, 0, 0);
// addFieldToCreateStbReq(true, "c13", TSDB_DATA_TYPE_NCHAR, 30 * TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE, 0);
// addFieldToCreateStbReq(true, "c14", TSDB_DATA_TYPE_VARCHAR, 50 + VARSTR_HEADER_SIZE, 0);
// addFieldToCreateStbReq(false, "a1", TSDB_DATA_TYPE_TIMESTAMP);
// addFieldToCreateStbReq(false, "a2", TSDB_DATA_TYPE_INT);
// addFieldToCreateStbReq(false, "a3", TSDB_DATA_TYPE_UINT);
// addFieldToCreateStbReq(false, "a4", TSDB_DATA_TYPE_BIGINT);
// addFieldToCreateStbReq(false, "a5", TSDB_DATA_TYPE_UBIGINT);
// addFieldToCreateStbReq(false, "a6", TSDB_DATA_TYPE_FLOAT);
// addFieldToCreateStbReq(false, "a7", TSDB_DATA_TYPE_DOUBLE);
// addFieldToCreateStbReq(false, "a8", TSDB_DATA_TYPE_BINARY, 20 + VARSTR_HEADER_SIZE);
// addFieldToCreateStbReq(false, "a9", TSDB_DATA_TYPE_SMALLINT);
// addFieldToCreateStbReq(false, "a10", TSDB_DATA_TYPE_USMALLINT);
// addFieldToCreateStbReq(false, "a11", TSDB_DATA_TYPE_TINYINT);
// addFieldToCreateStbReq(false, "a12", TSDB_DATA_TYPE_UTINYINT);
// addFieldToCreateStbReq(false, "a13", TSDB_DATA_TYPE_BOOL);
// addFieldToCreateStbReq(false, "a14", TSDB_DATA_TYPE_NCHAR, 30 * TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE);
// addFieldToCreateStbReq(false, "a15", TSDB_DATA_TYPE_VARCHAR, 50 + VARSTR_HEADER_SIZE);
// run("CREATE STABLE IF NOT EXISTS rollup_db.t1("
// "ts TIMESTAMP, c1 INT, c2 INT UNSIGNED, c3 BIGINT, c4 BIGINT UNSIGNED, c5 FLOAT, c6 DOUBLE, c7 BINARY(20), "
// "c8 SMALLINT, c9 SMALLINT UNSIGNED, c10 TINYINT, c11 TINYINT UNSIGNED, c12 BOOL, "
// "c13 NCHAR(30), c14 VARCHAR(50)) "
// "TAGS (a1 TIMESTAMP, a2 INT, a3 INT UNSIGNED, a4 BIGINT, a5 BIGINT UNSIGNED, a6 FLOAT, a7 DOUBLE, "
// "a8 BINARY(20), a9 SMALLINT, a10 SMALLINT UNSIGNED, a11 TINYINT, "
// "a12 TINYINT UNSIGNED, a13 BOOL, a14 NCHAR(30), a15 VARCHAR(50)) "
// "COMMENT 'test create table' SMA(c1, c2, c3) ROLLUP (MIN) MAX_DELAY 100s,10m WATERMARK 10a,1m "
// "DELETE_MARK 1000s,200m");
// clearCreateStbReq();
//}
TEST_F(ParserInitialCTest, createStable) {
useDb("root", "test");
SMCreateStbReq expect = {0};
auto clearCreateStbReq = [&]() {
tFreeSMCreateStbReq(&expect);
memset(&expect, 0, sizeof(SMCreateStbReq));
};
auto setCreateStbReq =
[&](const char* pDbName, const char* pTbName, int8_t igExists = 0, int64_t delay1 = -1, int64_t delay2 = -1,
int64_t watermark1 = TSDB_DEFAULT_ROLLUP_WATERMARK, int64_t watermark2 = TSDB_DEFAULT_ROLLUP_WATERMARK,
int64_t deleteMark1 = TSDB_DEFAULT_ROLLUP_DELETE_MARK, int64_t deleteMark2 = TSDB_DEFAULT_ROLLUP_DELETE_MARK,
int32_t ttl = TSDB_DEFAULT_TABLE_TTL, const char* pComment = nullptr) {
int32_t len = snprintf(expect.name, sizeof(expect.name), "0.%s.%s", pDbName, pTbName);
expect.name[len] = '\0';
expect.igExists = igExists;
expect.delay1 = delay1;
expect.delay2 = delay2;
expect.watermark1 = watermark1;
expect.watermark2 = watermark2;
expect.deleteMark1 = deleteMark1;
expect.deleteMark2 = deleteMark2;
// expect.ttl = ttl;
if (nullptr != pComment) {
expect.pComment = taosStrdup(pComment);
expect.commentLen = strlen(pComment);
}
};
auto addFieldToCreateStbReq = [&](bool col, const char* pFieldName, uint8_t type, int32_t bytes = 0,
int8_t flags = COL_SMA_ON) {
SField field = {0};
strcpy(field.name, pFieldName);
field.type = type;
field.bytes = bytes > 0 ? bytes : tDataTypes[type].bytes;
field.flags = flags;
if (col) {
if (NULL == expect.pColumns) {
expect.pColumns = taosArrayInit(TARRAY_MIN_SIZE, sizeof(SField));
}
ASSERT_TRUE(nullptr != taosArrayPush(expect.pColumns, &field));
expect.numOfColumns += 1;
} else {
if (NULL == expect.pTags) {
expect.pTags = taosArrayInit(TARRAY_MIN_SIZE, sizeof(SField));
}
ASSERT_TRUE(taosArrayPush(expect.pTags, &field) != nullptr);
expect.numOfTags += 1;
}
};
setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_CREATE_TABLE_STMT);
SMCreateStbReq req = {0};
ASSERT_TRUE(TSDB_CODE_SUCCESS == tDeserializeSMCreateStbReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req));
ASSERT_EQ(std::string(req.name), std::string(expect.name));
ASSERT_EQ(req.igExists, expect.igExists);
ASSERT_EQ(req.delay1, expect.delay1);
ASSERT_EQ(req.delay2, expect.delay2);
ASSERT_EQ(req.watermark1, expect.watermark1);
ASSERT_EQ(req.watermark2, expect.watermark2);
ASSERT_EQ(req.ttl, expect.ttl);
ASSERT_EQ(req.numOfColumns, expect.numOfColumns);
ASSERT_EQ(req.numOfTags, expect.numOfTags);
// ASSERT_EQ(req.commentLen, expect.commentLen);
ASSERT_EQ(req.ast1Len, expect.ast1Len);
ASSERT_EQ(req.ast2Len, expect.ast2Len);
if (expect.numOfColumns > 0) {
ASSERT_EQ(taosArrayGetSize(req.pColumns), expect.numOfColumns);
ASSERT_EQ(taosArrayGetSize(req.pColumns), taosArrayGetSize(expect.pColumns));
for (int32_t i = 0; i < expect.numOfColumns; ++i) {
SField* pField = (SField*)taosArrayGet(req.pColumns, i);
SField* pExpectField = (SField*)taosArrayGet(expect.pColumns, i);
ASSERT_EQ(std::string(pField->name), std::string(pExpectField->name));
ASSERT_EQ(pField->type, pExpectField->type);
ASSERT_EQ(pField->bytes, pExpectField->bytes);
ASSERT_EQ(pField->flags, pExpectField->flags);
}
}
if (expect.numOfTags > 0) {
ASSERT_EQ(taosArrayGetSize(req.pTags), expect.numOfTags);
ASSERT_EQ(taosArrayGetSize(req.pTags), taosArrayGetSize(expect.pTags));
for (int32_t i = 0; i < expect.numOfTags; ++i) {
SField* pField = (SField*)taosArrayGet(req.pTags, i);
SField* pExpectField = (SField*)taosArrayGet(expect.pTags, i);
ASSERT_EQ(std::string(pField->name), std::string(pExpectField->name));
ASSERT_EQ(pField->type, pExpectField->type);
ASSERT_EQ(pField->bytes, pExpectField->bytes);
ASSERT_EQ(pField->flags, pExpectField->flags);
}
}
if (expect.commentLen > 0) {
ASSERT_EQ(std::string(req.pComment), std::string(expect.pComment));
}
if (expect.ast1Len > 0) {
ASSERT_EQ(std::string(req.pAst1), std::string(expect.pAst1));
}
if (expect.ast2Len > 0) {
ASSERT_EQ(std::string(req.pAst2), std::string(expect.pAst2));
}
tFreeSMCreateStbReq(&req);
});
setCreateStbReq("test", "t1");
addFieldToCreateStbReq(true, "ts", TSDB_DATA_TYPE_TIMESTAMP);
addFieldToCreateStbReq(true, "c1", TSDB_DATA_TYPE_INT);
addFieldToCreateStbReq(false, "id", TSDB_DATA_TYPE_INT);
run("CREATE STABLE t1(ts TIMESTAMP, c1 INT) TAGS(id INT)");
clearCreateStbReq();
setCreateStbReq("rollup_db", "t1", 1, 100 * MILLISECOND_PER_SECOND, 10 * MILLISECOND_PER_MINUTE, 10,
1 * MILLISECOND_PER_MINUTE, 1000 * MILLISECOND_PER_SECOND, 200 * MILLISECOND_PER_MINUTE, 100,
"test create table");
addFieldToCreateStbReq(true, "ts", TSDB_DATA_TYPE_TIMESTAMP, 0, 0);
addFieldToCreateStbReq(true, "c1", TSDB_DATA_TYPE_INT);
addFieldToCreateStbReq(true, "c2", TSDB_DATA_TYPE_UINT);
addFieldToCreateStbReq(true, "c3", TSDB_DATA_TYPE_BIGINT);
addFieldToCreateStbReq(true, "c4", TSDB_DATA_TYPE_UBIGINT, 0, 0);
addFieldToCreateStbReq(true, "c5", TSDB_DATA_TYPE_FLOAT, 0, 0);
addFieldToCreateStbReq(true, "c6", TSDB_DATA_TYPE_DOUBLE, 0, 0);
addFieldToCreateStbReq(true, "c7", TSDB_DATA_TYPE_BINARY, 20 + VARSTR_HEADER_SIZE, 0);
addFieldToCreateStbReq(true, "c8", TSDB_DATA_TYPE_SMALLINT, 0, 0);
addFieldToCreateStbReq(true, "c9", TSDB_DATA_TYPE_USMALLINT, 0, 0);
addFieldToCreateStbReq(true, "c10", TSDB_DATA_TYPE_TINYINT, 0, 0);
addFieldToCreateStbReq(true, "c11", TSDB_DATA_TYPE_UTINYINT, 0, 0);
addFieldToCreateStbReq(true, "c12", TSDB_DATA_TYPE_BOOL, 0, 0);
addFieldToCreateStbReq(true, "c13", TSDB_DATA_TYPE_NCHAR, 30 * TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE, 0);
addFieldToCreateStbReq(true, "c14", TSDB_DATA_TYPE_VARCHAR, 50 + VARSTR_HEADER_SIZE, 0);
addFieldToCreateStbReq(false, "a1", TSDB_DATA_TYPE_TIMESTAMP);
addFieldToCreateStbReq(false, "a2", TSDB_DATA_TYPE_INT);
addFieldToCreateStbReq(false, "a3", TSDB_DATA_TYPE_UINT);
addFieldToCreateStbReq(false, "a4", TSDB_DATA_TYPE_BIGINT);
addFieldToCreateStbReq(false, "a5", TSDB_DATA_TYPE_UBIGINT);
addFieldToCreateStbReq(false, "a6", TSDB_DATA_TYPE_FLOAT);
addFieldToCreateStbReq(false, "a7", TSDB_DATA_TYPE_DOUBLE);
addFieldToCreateStbReq(false, "a8", TSDB_DATA_TYPE_BINARY, 20 + VARSTR_HEADER_SIZE);
addFieldToCreateStbReq(false, "a9", TSDB_DATA_TYPE_SMALLINT);
addFieldToCreateStbReq(false, "a10", TSDB_DATA_TYPE_USMALLINT);
addFieldToCreateStbReq(false, "a11", TSDB_DATA_TYPE_TINYINT);
addFieldToCreateStbReq(false, "a12", TSDB_DATA_TYPE_UTINYINT);
addFieldToCreateStbReq(false, "a13", TSDB_DATA_TYPE_BOOL);
addFieldToCreateStbReq(false, "a14", TSDB_DATA_TYPE_NCHAR, 30 * TSDB_NCHAR_SIZE + VARSTR_HEADER_SIZE);
addFieldToCreateStbReq(false, "a15", TSDB_DATA_TYPE_VARCHAR, 50 + VARSTR_HEADER_SIZE);
run("CREATE STABLE IF NOT EXISTS rollup_db.t1("
"ts TIMESTAMP, c1 INT, c2 INT UNSIGNED, c3 BIGINT, c4 BIGINT UNSIGNED, c5 FLOAT, c6 DOUBLE, c7 BINARY(20), "
"c8 SMALLINT, c9 SMALLINT UNSIGNED, c10 TINYINT, c11 TINYINT UNSIGNED, c12 BOOL, "
"c13 NCHAR(30), c14 VARCHAR(50)) "
"TAGS (a1 TIMESTAMP, a2 INT, a3 INT UNSIGNED, a4 BIGINT, a5 BIGINT UNSIGNED, a6 FLOAT, a7 DOUBLE, "
"a8 BINARY(20), a9 SMALLINT, a10 SMALLINT UNSIGNED, a11 TINYINT, "
"a12 TINYINT UNSIGNED, a13 BOOL, a14 NCHAR(30), a15 VARCHAR(50)) "
"COMMENT 'test create table' SMA(c1, c2, c3) ROLLUP (MIN) MAX_DELAY 100s,10m WATERMARK 10a,1m "
"DELETE_MARK 1000s,200m");
clearCreateStbReq();
}
TEST_F(ParserInitialCTest, createStableSemanticCheck) {
useDb("root", "test");


@@ -41,13 +41,13 @@ TEST_F(ParserInitialDTest, deleteSemanticCheck) {
}
// DESC table_name
//TEST_F(ParserInitialDTest, describe) {
// useDb("root", "test");
//
// run("DESC t1");
//
// run("DESCRIBE st1");
//}
TEST_F(ParserInitialDTest, describe) {
useDb("root", "test");
run("DESC t1");
run("DESCRIBE st1");
}
// todo describe
// todo DROP account
@@ -228,37 +228,6 @@ TEST_F(ParserInitialDTest, dropSTable) {
run("DROP STABLE st1");
}
//TEST_F(ParserInitialDTest, dropStream) {
// useDb("root", "test");
//
// SMDropStreamReq expect = {0};
//
// auto clearDropStreamReq = [&]() { memset(&expect, 0, sizeof(SMDropStreamReq)); };
//
// auto setDropStreamReq = [&](const char* pStream, int8_t igNotExists = 0) {
// sprintf(expect.name, "0.%s", pStream);
// expect.igNotExists = igNotExists;
// };
//
// setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
// ASSERT_EQ(nodeType(pQuery->pRoot), QUERY_NODE_DROP_STREAM_STMT);
// SMDropStreamReq req = {0};
// ASSERT_TRUE(TSDB_CODE_SUCCESS == tDeserializeSMDropStreamReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req));
//
// ASSERT_EQ(std::string(req.name), std::string(expect.name));
// ASSERT_EQ(req.igNotExists, expect.igNotExists);
// tFreeMDropStreamReq(&req);
// });
//
// setDropStreamReq("s1");
// run("DROP STREAM s1");
// clearDropStreamReq();
//
// setDropStreamReq("s2", 1);
// run("DROP STREAM IF EXISTS s2");
// clearDropStreamReq();
//}
TEST_F(ParserInitialDTest, dropTable) {
useDb("root", "test");


@@ -31,22 +31,22 @@ namespace ParserTest {
class ParserInsertTest : public ParserTestBase {};
// INSERT INTO tb_name [(field1_name, ...)] VALUES (field1_value, ...)
//TEST_F(ParserInsertTest, singleTableSingleRowTest) {
// useDb("root", "test");
//
// run("INSERT INTO t1 VALUES (now, 1, 'beijing', 3, 4, 5)");
//
// run("INSERT INTO t1 (ts, c1, c2, c3, c4, c5) VALUES (now, 1, 'beijing', 3, 4, 5)");
//}
TEST_F(ParserInsertTest, singleTableSingleRowTest) {
useDb("root", "test");
run("INSERT INTO t1 VALUES (now, 1, 'beijing', 3, 4, 5)");
run("INSERT INTO t1 (ts, c1, c2, c3, c4, c5) VALUES (now, 1, 'beijing', 3, 4, 5)");
}
// INSERT INTO tb_name VALUES (field1_value, ...)(field1_value, ...)
//TEST_F(ParserInsertTest, singleTableMultiRowTest) {
// useDb("root", "test");
//
// run("INSERT INTO t1 VALUES (now, 1, 'beijing', 3, 4, 5)"
// "(now+1s, 2, 'shanghai', 6, 7, 8)"
// "(now+2s, 3, 'guangzhou', 9, 10, 11)");
//}
TEST_F(ParserInsertTest, singleTableMultiRowTest) {
useDb("root", "test");
run("INSERT INTO t1 VALUES (now, 1, 'beijing', 3, 4, 5)"
"(now+1s, 2, 'shanghai', 6, 7, 8)"
"(now+2s, 3, 'guangzhou', 9, 10, 11)");
}
// INSERT INTO tb1_name VALUES (field1_value, ...) tb2_name VALUES (field1_value, ...)
TEST_F(ParserInsertTest, multiTableSingleRowTest) {


@@ -50,20 +50,20 @@ TEST_F(ParserSelectTest, constant) {
run("SELECT * FROM t1 WHERE -2");
}
//TEST_F(ParserSelectTest, expression) {
// useDb("root", "test");
//
// run("SELECT ts + 10s, c1 + 10, concat(c2, 'abc') FROM t1");
//
// run("SELECT ts > 0, c1 < 20 and c2 = 'qaz' FROM t1");
//
// run("SELECT ts > 0, c1 between 10 and 20 and c2 = 'qaz' FROM t1");
//
// run("SELECT c1 | 10, c2 & 20, c4 | c5 FROM t1");
//
// run("SELECT CASE WHEN ts > '2020-1-1 10:10:10' THEN c1 + 10 ELSE c1 - 10 END FROM t1 "
// "WHERE CASE c1 WHEN c3 + 20 THEN c3 - 1 WHEN c3 + 10 THEN c3 - 2 ELSE 10 END > 0");
//}
TEST_F(ParserSelectTest, expression) {
useDb("root", "test");
run("SELECT ts + 10s, c1 + 10, concat(c2, 'abc') FROM t1");
run("SELECT ts > 0, c1 < 20 and c2 = 'qaz' FROM t1");
run("SELECT ts > 0, c1 between 10 and 20 and c2 = 'qaz' FROM t1");
run("SELECT c1 | 10, c2 & 20, c4 | c5 FROM t1");
run("SELECT CASE WHEN ts > '2020-1-1 10:10:10' THEN c1 + 10 ELSE c1 - 10 END FROM t1 "
"WHERE CASE c1 WHEN c3 + 20 THEN c3 - 1 WHEN c3 + 10 THEN c3 - 2 ELSE 10 END > 0");
}
TEST_F(ParserSelectTest, condition) {
useDb("root", "test");
@@ -95,59 +95,59 @@ TEST_F(ParserSelectTest, aggFunc) {
run("SELECT LEASTSQUARES(c1, -1, 1) FROM t1");
}
//TEST_F(ParserSelectTest, multiResFunc) {
// useDb("root", "test");
//
// run("SELECT LAST(*), FIRST(*), LAST_ROW(*) FROM t1");
//
// run("SELECT LAST(c1, c2), FIRST(t1.*), LAST_ROW(c3) FROM t1");
//
// run("SELECT LAST(t2.*), FIRST(t1.c1, t2.*), LAST_ROW(t1.*, t2.*) FROM st1s1 t1, st1s2 t2 WHERE t1.ts = t2.ts");
//}
TEST_F(ParserSelectTest, multiResFunc) {
useDb("root", "test");
//TEST_F(ParserSelectTest, timelineFunc) {
// useDb("root", "test");
//
// run("SELECT LAST(*), FIRST(*) FROM t1");
//
// run("SELECT FIRST(ts), FIRST(c1), FIRST(c2), FIRST(c3) FROM t1");
//
// run("SELECT LAST(*), FIRST(*) FROM t1 GROUP BY c1");
//
// run("SELECT LAST(*), FIRST(*) FROM t1 INTERVAL(10s)");
//
// run("SELECT diff(c1) FROM t1");
//
// run("select diff(ts) from (select _wstart as ts, count(*) from st1 partition by tbname interval(1d))", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
//
// run("select diff(ts) from (select _wstart as ts, count(*) from st1 partition by tbname interval(1d) order by ts)");
//
// run("select t1.* from st1s1 t1, (select _wstart as ts, count(*) from st1s2 partition by tbname interval(1d)) t2 WHERE t1.ts = t2.ts");
//
// run("select t1.* from st1s1 t1, (select _wstart as ts, count(*) from st1s2 partition by tbname interval(1d) order by ts) t2 WHERE t1.ts = t2.ts");
//
//}
run("SELECT LAST(*), FIRST(*), LAST_ROW(*) FROM t1");
//TEST_F(ParserSelectTest, selectFunc) {
// useDb("root", "test");
//
// // select function
// run("SELECT MAX(c1), MIN(c1) FROM t1");
// // select function for GROUP BY clause
// run("SELECT MAX(c1), MIN(c1) FROM t1 GROUP BY c1");
// // select function for INTERVAL clause
// run("SELECT MAX(c1), MIN(c1) FROM t1 INTERVAL(10s)");
// // select function along with the columns of select row
// run("SELECT MAX(c1), c2 FROM t1");
// run("SELECT MAX(c1), t1.* FROM t1");
// // select function along with the columns of select row, and with GROUP BY clause
// run("SELECT MAX(c1), c2 FROM t1 GROUP BY c3");
// run("SELECT MAX(c1), t1.* FROM t1 GROUP BY c3");
// // select function along with the columns of select row, and with window clause
// run("SELECT MAX(c1), c2 FROM t1 INTERVAL(10s)");
// run("SELECT MAX(c1), c2 FROM t1 SESSION(ts, 10s)");
// run("SELECT MAX(c1), c2 FROM t1 STATE_WINDOW(c3)");
//}
run("SELECT LAST(c1, c2), FIRST(t1.*), LAST_ROW(c3) FROM t1");
run("SELECT LAST(t2.*), FIRST(t1.c1, t2.*), LAST_ROW(t1.*, t2.*) FROM st1s1 t1, st1s2 t2 WHERE t1.ts = t2.ts");
}
TEST_F(ParserSelectTest, timelineFunc) {
useDb("root", "test");
run("SELECT LAST(*), FIRST(*) FROM t1");
run("SELECT FIRST(ts), FIRST(c1), FIRST(c2), FIRST(c3) FROM t1");
run("SELECT LAST(*), FIRST(*) FROM t1 GROUP BY c1");
run("SELECT LAST(*), FIRST(*) FROM t1 INTERVAL(10s)");
run("SELECT diff(c1) FROM t1");
run("select diff(ts) from (select _wstart as ts, count(*) from st1 partition by tbname interval(1d))", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
run("select diff(ts) from (select _wstart as ts, count(*) from st1 partition by tbname interval(1d) order by ts)");
run("select t1.* from st1s1 t1, (select _wstart as ts, count(*) from st1s2 partition by tbname interval(1d)) t2 WHERE t1.ts = t2.ts");
run("select t1.* from st1s1 t1, (select _wstart as ts, count(*) from st1s2 partition by tbname interval(1d) order by ts) t2 WHERE t1.ts = t2.ts");
}
TEST_F(ParserSelectTest, selectFunc) {
useDb("root", "test");
// select function
run("SELECT MAX(c1), MIN(c1) FROM t1");
// select function for GROUP BY clause
run("SELECT MAX(c1), MIN(c1) FROM t1 GROUP BY c1");
// select function for INTERVAL clause
run("SELECT MAX(c1), MIN(c1) FROM t1 INTERVAL(10s)");
// select function along with the columns of select row
run("SELECT MAX(c1), c2 FROM t1");
run("SELECT MAX(c1), t1.* FROM t1");
// select function along with the columns of select row, and with GROUP BY clause
run("SELECT MAX(c1), c2 FROM t1 GROUP BY c3");
run("SELECT MAX(c1), t1.* FROM t1 GROUP BY c3");
// select function along with the columns of select row, and with window clause
run("SELECT MAX(c1), c2 FROM t1 INTERVAL(10s)");
run("SELECT MAX(c1), c2 FROM t1 SESSION(ts, 10s)");
run("SELECT MAX(c1), c2 FROM t1 STATE_WINDOW(c3)");
}
TEST_F(ParserSelectTest, IndefiniteRowsFunc) {
useDb("root", "test");
@@ -155,21 +155,21 @@ TEST_F(ParserSelectTest, IndefiniteRowsFunc) {
run("SELECT DIFF(c1) FROM t1");
}
//TEST_F(ParserSelectTest, IndefiniteRowsFuncSemanticCheck) {
// useDb("root", "test");
//
// run("SELECT DIFF(c1), c2 FROM t1");
//
// run("SELECT DIFF(c1), tbname FROM t1");
//
// run("SELECT DIFF(c1), count(*) FROM t1", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
//
// run("SELECT DIFF(c1), CSUM(c1) FROM t1", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
//
// run("SELECT CSUM(c3) FROM t1 STATE_WINDOW(c1)", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
//
// run("SELECT DIFF(c1) FROM t1 INTERVAL(10s)", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
//}
TEST_F(ParserSelectTest, IndefiniteRowsFuncSemanticCheck) {
useDb("root", "test");
run("SELECT DIFF(c1), c2 FROM t1");
run("SELECT DIFF(c1), tbname FROM t1");
run("SELECT DIFF(c1), count(*) FROM t1", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
run("SELECT DIFF(c1), CSUM(c1) FROM t1", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
run("SELECT CSUM(c3) FROM t1 STATE_WINDOW(c1)", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
run("SELECT DIFF(c1) FROM t1 INTERVAL(10s)", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
}
TEST_F(ParserSelectTest, useDefinedFunc) {
useDb("root", "test");
@@ -221,11 +221,11 @@ TEST_F(ParserSelectTest, partitionBy) {
run("SELECT SUM(c1), c2 FROM t1 PARTITION BY c2");
}
//TEST_F(ParserSelectTest, partitionBySemanticCheck) {
// useDb("root", "test");
//
// run("SELECT SUM(c1), c2, c3 FROM t1 PARTITION BY c2", TSDB_CODE_PAR_NOT_SINGLE_GROUP);
//}
TEST_F(ParserSelectTest, partitionBySemanticCheck) {
useDb("root", "test");
run("SELECT SUM(c1), c2, c3 FROM t1 PARTITION BY c2", TSDB_CODE_PAR_NOT_SINGLE_GROUP);
}
TEST_F(ParserSelectTest, groupBy) {
useDb("root", "test");


@@ -153,11 +153,11 @@ TEST_F(ParserShowToUseTest, showStables) {
run("SHOW test.stables like 'c%'");
}
//TEST_F(ParserShowToUseTest, showStreams) {
// useDb("root", "test");
//
// run("SHOW streams");
//}
TEST_F(ParserShowToUseTest, showStreams) {
useDb("root", "test");
run("SHOW streams");
}
TEST_F(ParserShowToUseTest, showSubscriptions) {
useDb("root", "test");


@@ -320,7 +320,7 @@ static bool stbSplNeedSplitWindow(SLogicNode* pNode) {
}
if (WINDOW_TYPE_EXTERNAL == pWindow->winType) {
-    return !stbSplHasGatherExecFunc(pWindow->pFuncs) && stbSplHasMultiTbScan(pNode);
+    return pWindow->pFuncs && !stbSplHasGatherExecFunc(pWindow->pFuncs) && stbSplHasMultiTbScan(pNode);
}
if (WINDOW_TYPE_SESSION == pWindow->winType) {


@@ -30,23 +30,23 @@ TEST_F(PlanBasicTest, selectClause) {
run("SELECT MAX(c1) c2, c2 FROM st1");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanBasicTest, whereClause) {
// useDb("root", "test");
//
// run("SELECT * FROM t1 WHERE c1 > 10");
//
// run("SELECT * FROM t1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59'");
//
// run("SELECT ts, c1 FROM t1 WHERE ts > NOW AND ts IS NULL AND (c1 > 0 OR c3 < 20)");
//}
//
//TEST_F(PlanBasicTest, caseWhen) {
// useDb("root", "test");
//
// run("SELECT CASE WHEN ts > '2020-1-1 10:10:10' THEN c1 + 10 ELSE c1 - 10 END FROM t1 "
// "WHERE CASE c1 WHEN c2 + 20 THEN c4 - 1 WHEN c2 + 10 THEN c4 - 2 ELSE 10 END > 0");
//}
TEST_F(PlanBasicTest, whereClause) {
useDb("root", "test");
run("SELECT * FROM t1 WHERE c1 > 10");
run("SELECT * FROM t1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59'");
run("SELECT ts, c1 FROM t1 WHERE ts > NOW AND ts IS NULL AND (c1 > 0 OR c3 < 20)");
}
TEST_F(PlanBasicTest, caseWhen) {
useDb("root", "test");
run("SELECT CASE WHEN ts > '2020-1-1 10:10:10' THEN c1 + 10 ELSE c1 - 10 END FROM t1 "
"WHERE CASE c1 WHEN c2 + 20 THEN c4 - 1 WHEN c2 + 10 THEN c4 - 2 ELSE 10 END > 0");
}
TEST_F(PlanBasicTest, func) {
useDb("root", "test");
@@ -110,24 +110,24 @@ TEST_F(PlanBasicTest, interpFunc) {
run("SELECT TBNAME, _IROWTS, INTERP(c1) FROM t1 PARTITION BY TBNAME "
"RANGE('2017-7-14 18:00:00', '2017-7-14 19:00:00') EVERY(5s) FILL(LINEAR)");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanBasicTest, lastRowFuncWithoutCache) {
// useDb("root", "test");
//
// run("SELECT LAST_ROW(c1) FROM t1");
//
// run("SELECT LAST_ROW(*) FROM t1");
//
// run("SELECT LAST_ROW(c1, c2) FROM t1");
//
// run("SELECT LAST_ROW(c1), c2 FROM t1");
//
// run("SELECT LAST_ROW(c1) FROM st1");
//
// run("SELECT LAST_ROW(c1) FROM st1 PARTITION BY TBNAME");
//
// run("SELECT LAST_ROW(c1), SUM(c3) FROM t1");
//}
TEST_F(PlanBasicTest, lastRowFuncWithoutCache) {
useDb("root", "test");
run("SELECT LAST_ROW(c1) FROM t1");
run("SELECT LAST_ROW(*) FROM t1");
run("SELECT LAST_ROW(c1, c2) FROM t1");
run("SELECT LAST_ROW(c1), c2 FROM t1");
run("SELECT LAST_ROW(c1) FROM st1");
run("SELECT LAST_ROW(c1) FROM st1 PARTITION BY TBNAME");
run("SELECT LAST_ROW(c1), SUM(c3) FROM t1");
}
TEST_F(PlanBasicTest, timeLineFunc) {
useDb("root", "test");
@@ -172,16 +172,16 @@ TEST_F(PlanBasicTest, pseudoColumn) {
run("SELECT _TAGS, * FROM st1s1");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanBasicTest, indefiniteRowsFunc) {
// useDb("root", "test");
//
// run("SELECT DIFF(c1) FROM t1");
//
// run("SELECT DIFF(c1), c2 FROM t1");
//
// run("SELECT DIFF(c1), DIFF(c3), ts FROM t1");
//}
TEST_F(PlanBasicTest, indefiniteRowsFunc) {
useDb("root", "test");
run("SELECT DIFF(c1) FROM t1");
run("SELECT DIFF(c1), c2 FROM t1");
run("SELECT DIFF(c1), DIFF(c3), ts FROM t1");
}
TEST_F(PlanBasicTest, withoutFrom) {
useDb("root", "test");


@@ -20,30 +20,29 @@ using namespace std;
class PlanGroupByTest : public PlannerTestBase {};
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanGroupByTest, basic) {
// useDb("root", "test");
//
// run("SELECT COUNT(*) FROM t1");
//
// run("SELECT c1, MAX(c3), MIN(c3), COUNT(*) FROM t1 GROUP BY c1");
//
// run("SELECT c1 + c3, c1 + COUNT(*) FROM t1 WHERE c2 = 'abc' GROUP BY c1, c3");
//
// run("SELECT c1 + c3, SUM(c4 * c5) FROM t1 WHERE CONCAT(c2, 'wwww') = 'abcwww' GROUP BY c1 + c3");
//
// run("SELECT SUM(CEIL(c1)) FROM t1 GROUP BY CEIL(c1)");
//
// run("SELECT COUNT(*) FROM st1");
//
// run("SELECT c1 FROM st1 GROUP BY c1");
//
// run("SELECT COUNT(*) FROM st1 GROUP BY c1");
//
// run("SELECT SUM(c1) FROM st1 GROUP BY c2 HAVING SUM(c1) IS NOT NULL");
//
// run("SELECT AVG(c1) FROM st1");
//}
TEST_F(PlanGroupByTest, basic) {
useDb("root", "test");
run("SELECT COUNT(*) FROM t1");
run("SELECT c1, MAX(c3), MIN(c3), COUNT(*) FROM t1 GROUP BY c1");
run("SELECT c1 + c3, c1 + COUNT(*) FROM t1 WHERE c2 = 'abc' GROUP BY c1, c3");
run("SELECT c1 + c3, SUM(c4 * c5) FROM t1 WHERE CONCAT(c2, 'wwww') = 'abcwww' GROUP BY c1 + c3");
run("SELECT SUM(CEIL(c1)) FROM t1 GROUP BY CEIL(c1)");
run("SELECT COUNT(*) FROM st1");
run("SELECT c1 FROM st1 GROUP BY c1");
run("SELECT COUNT(*) FROM st1 GROUP BY c1");
run("SELECT SUM(c1) FROM st1 GROUP BY c2 HAVING SUM(c1) IS NOT NULL");
run("SELECT AVG(c1) FROM st1");
}
TEST_F(PlanGroupByTest, withPartitionBy) {
useDb("root", "test");
@@ -59,7 +58,7 @@ TEST_F(PlanGroupByTest, withOrderBy) {
// ORDER BY aggfunc
run("SELECT COUNT(*), SUM(c1) FROM t1 ORDER BY SUM(c1)");
// ORDER BY alias of aggfunc
// run("SELECT COUNT(*), SUM(c1) a FROM t1 ORDER BY a");
run("SELECT COUNT(*), SUM(c1) a FROM t1 ORDER BY a");
}
TEST_F(PlanGroupByTest, multiResFunc) {
@@ -70,18 +69,18 @@ TEST_F(PlanGroupByTest, multiResFunc) {
run("SELECT LAST(*), FIRST(*) FROM t1 GROUP BY c1");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanGroupByTest, selectFunc) {
// useDb("root", "test");
//
// // select function
// run("SELECT MAX(c1), MIN(c1) FROM t1");
// // select function for GROUP BY clause
// run("SELECT MAX(c1), MIN(c1) FROM t1 GROUP BY c1");
// // select function along with the columns of select row
// run("SELECT MAX(c1), c2 FROM t1");
// run("SELECT MAX(c1), t1.* FROM t1");
// // select function along with the columns of select row, and with GROUP BY clause
// run("SELECT MAX(c1), c2 FROM t1 GROUP BY c3");
// run("SELECT MAX(c1), t1.* FROM t1 GROUP BY c3");
//}
TEST_F(PlanGroupByTest, selectFunc) {
useDb("root", "test");
// select function
run("SELECT MAX(c1), MIN(c1) FROM t1");
// select function for GROUP BY clause
run("SELECT MAX(c1), MIN(c1) FROM t1 GROUP BY c1");
// select function along with the columns of select row
run("SELECT MAX(c1), c2 FROM t1");
run("SELECT MAX(c1), t1.* FROM t1");
// select function along with the columns of select row, and with GROUP BY clause
run("SELECT MAX(c1), c2 FROM t1 GROUP BY c3");
run("SELECT MAX(c1), t1.* FROM t1 GROUP BY c3");
}


@@ -32,30 +32,29 @@ TEST_F(PlanIntervalTest, pseudoCol) {
run("SELECT _WSTART, _WDURATION, _WEND, COUNT(*) FROM t1 INTERVAL(10s)");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanIntervalTest, fill) {
// useDb("root", "test");
//
// run("SELECT COUNT(*) FROM t1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
// "INTERVAL(10s) FILL(LINEAR)");
//
// run("SELECT COUNT(*) FROM st1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
// "INTERVAL(10s) FILL(LINEAR)");
//
// run("SELECT COUNT(*), SUM(c1) FROM t1 "
// "WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
// "INTERVAL(10s) FILL(VALUE, 10, 20)");
//
// run("SELECT _WSTART, TBNAME, COUNT(*) FROM st1 "
// "WHERE ts > '2022-04-01 00:00:00' and ts < '2022-04-30 23:59:59' "
// "PARTITION BY TBNAME INTERVAL(10s) FILL(PREV)");
//
// run("SELECT COUNT(c1), MAX(c3), COUNT(c1) FROM t1 "
// "WHERE ts > '2022-04-01 00:00:00' and ts < '2022-04-30 23:59:59' INTERVAL(10s) FILL(PREV)");
//
// run("SELECT COUNT(c1) FROM t1 WHERE ts > '2022-04-01 00:00:00' and ts < '2022-04-30 23:59:59' "
// "PARTITION BY c2 INTERVAL(10s) FILL(PREV) ORDER BY c2");
//}
TEST_F(PlanIntervalTest, fill) {
useDb("root", "test");
run("SELECT COUNT(*) FROM t1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
"INTERVAL(10s) FILL(LINEAR)");
run("SELECT COUNT(*) FROM st1 WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
"INTERVAL(10s) FILL(LINEAR)");
run("SELECT COUNT(*), SUM(c1) FROM t1 "
"WHERE ts > TIMESTAMP '2022-04-01 00:00:00' and ts < TIMESTAMP '2022-04-30 23:59:59' "
"INTERVAL(10s) FILL(VALUE, 10, 20)");
run("SELECT _WSTART, TBNAME, COUNT(*) FROM st1 "
"WHERE ts > '2022-04-01 00:00:00' and ts < '2022-04-30 23:59:59' "
"PARTITION BY TBNAME INTERVAL(10s) FILL(PREV)");
run("SELECT COUNT(c1), MAX(c3), COUNT(c1) FROM t1 "
"WHERE ts > '2022-04-01 00:00:00' and ts < '2022-04-30 23:59:59' INTERVAL(10s) FILL(PREV)");
run("SELECT COUNT(c1) FROM t1 WHERE ts > '2022-04-01 00:00:00' and ts < '2022-04-30 23:59:59' "
"PARTITION BY c2 INTERVAL(10s) FILL(PREV) ORDER BY c2");
}
TEST_F(PlanIntervalTest, selectFunc) {
useDb("root", "test");


@@ -91,20 +91,19 @@ TEST_F(PlanOptimizeTest, PartitionTags) {
run("SELECT SUM(c1), tbname FROM st1 GROUP BY tbname");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanOptimizeTest, eliminateProjection) {
// useDb("root", "test");
//
// run("SELECT c1, sum(c3) FROM t1 GROUP BY c1");
//
// run("SELECT c1 FROM t1");
//
// run("SELECT * FROM st1");
//
// run("SELECT c1 FROM st1s3");
//
// // run("select 1-abs(c1) from (select unique(c1) c1 from st1s3) order by 1 nulls first");
//}
TEST_F(PlanOptimizeTest, eliminateProjection) {
useDb("root", "test");
run("SELECT c1, sum(c3) FROM t1 GROUP BY c1");
run("SELECT c1 FROM t1");
run("SELECT * FROM st1");
run("SELECT c1 FROM st1s3");
// run("select 1-abs(c1) from (select unique(c1) c1 from st1s3) order by 1 nulls first");
}
TEST_F(PlanOptimizeTest, mergeProjects) {
useDb("root", "test");
@@ -118,24 +117,23 @@ TEST_F(PlanOptimizeTest, pushDownProjectCond) {
run("select 1-abs(c1) from (select unique(c1) c1 from st1s3) where 1-c1>5 order by 1 nulls first");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanOptimizeTest, LastRowScan) {
// useDb("root", "cache_db");
//
// run("SELECT LAST_ROW(c1), c2 FROM t1");
//
// run("SELECT LAST_ROW(c1), c2, tag1, tbname FROM st1");
//
// run("SELECT LAST_ROW(c1) FROM st1 PARTITION BY TBNAME");
//
// run("SELECT LAST_ROW(c1), SUM(c3) FROM t1");
//
// run("SELECT LAST_ROW(tag1) FROM st1");
//
// run("SELECT LAST(c1) FROM st1");
//
// run("SELECT LAST(c1), c2 FROM st1");
//}
TEST_F(PlanOptimizeTest, LastRowScan) {
useDb("root", "cache_db");
run("SELECT LAST_ROW(c1), c2 FROM t1");
run("SELECT LAST_ROW(c1), c2, tag1, tbname FROM st1");
run("SELECT LAST_ROW(c1) FROM st1 PARTITION BY TBNAME");
run("SELECT LAST_ROW(c1), SUM(c3) FROM t1");
run("SELECT LAST_ROW(tag1) FROM st1");
run("SELECT LAST(c1) FROM st1");
run("SELECT LAST(c1), c2 FROM st1");
}
TEST_F(PlanOptimizeTest, tagScan) {
useDb("root", "test");


@@ -31,15 +31,15 @@ TEST_F(PlanStateTest, stateExpr) {
run("SELECT COUNT(*) FROM t1 STATE_WINDOW(CASE WHEN c1 > 10 THEN 1 ELSE 0 END)");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanStateTest, selectFunc) {
// useDb("root", "test");
//
// // select function for STATE_WINDOW clause
// run("SELECT MAX(c1), MIN(c1) FROM t1 STATE_WINDOW(c3)");
// // select function along with the columns of select row, and with STATE_WINDOW clause
// run("SELECT MAX(c1), c2 FROM t1 STATE_WINDOW(c3)");
//}
TEST_F(PlanStateTest, selectFunc) {
useDb("root", "test");
// select function for STATE_WINDOW clause
run("SELECT MAX(c1), MIN(c1) FROM t1 STATE_WINDOW(c3)");
// select function along with the columns of select row, and with STATE_WINDOW clause
run("SELECT MAX(c1), c2 FROM t1 STATE_WINDOW(c3)");
}
TEST_F(PlanStateTest, stable) {
useDb("root", "test");


@@ -40,14 +40,13 @@ TEST_F(PlanSubqeuryTest, basic) {
run("SELECT * FROM (SELECT AVG(c1) a FROM st1 INTERVAL(10s)) WHERE a > 1");
}
// TODO(smj) : disable for stream, reopen it later
//TEST_F(PlanSubqeuryTest, doubleGroupBy) {
// useDb("root", "test");
//
// run("SELECT COUNT(*) FROM ("
// "SELECT c1 + c3 a, c1 + COUNT(*) b FROM t1 WHERE c2 = 'abc' GROUP BY c1, c3) "
// "WHERE a > 100 GROUP BY b");
//}
TEST_F(PlanSubqeuryTest, doubleGroupBy) {
useDb("root", "test");
run("SELECT COUNT(*) FROM ("
"SELECT c1 + c3 a, c1 + COUNT(*) b FROM t1 WHERE c2 = 'abc' GROUP BY c1, c3) "
"WHERE a > 100 GROUP BY b");
}
TEST_F(PlanSubqeuryTest, innerSetOperator) {
useDb("root", "test");


@@ -18,7 +18,7 @@ typedef double (*_double_fn_2)(double, double);
typedef int (*_conv_fn)(int);
typedef void (*_trim_space_fn)(char *, char *, int32_t, int32_t, void *);
typedef int32_t (*_trim_fn)(char *, char *, char *, int32_t, int32_t, void *);
typedef int32_t (*_len_fn)(char *, int32_t, VarDataLenT *);
typedef int32_t (*_len_fn)(char *, int32_t, void *);
/** Math functions **/
static double tlog(double v) { return log(v); }
@@ -458,11 +458,11 @@ static int32_t doScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarP
}
/** String functions **/
static int32_t tlength(char *input, int32_t type, VarDataLenT *len) {
if (type == IS_STR_DATA_BLOB(type)) {
*len = blobDataLen(input);
static int32_t tlength(char *input, int32_t type, void *len) {
if (IS_STR_DATA_BLOB(type)) {
*(BlobDataLenT *)len = blobDataLen(input);
} else {
*len = varDataLen(input);
*(VarDataLenT *)len = varDataLen(input);
}
return TSDB_CODE_SUCCESS;
}
@@ -484,7 +484,7 @@ uint8_t getCharLen(const unsigned char *str) {
}
}
static int32_t tcharlength(char *input, int32_t type, VarDataLenT *len) {
static int32_t tcharlength(char *input, int32_t type, void *len) {
if (type == TSDB_DATA_TYPE_VARCHAR) {
// calculate the number of characters in the string considering the multi-byte character
char *str = varDataVal(input);
@@ -494,19 +494,19 @@ static int32_t tcharlength(char *input, int32_t type, VarDataLenT *len) {
strLen++;
pos += getCharLen((unsigned char *)(str + pos));
}
*len = strLen;
*(VarDataLenT *)len = strLen;
return TSDB_CODE_SUCCESS;
} else if (type == TSDB_DATA_TYPE_GEOMETRY) {
*len = varDataLen(input);
*(VarDataLenT *)len = varDataLen(input);
} else if (IS_STR_DATA_BLOB(type)) {
// for blob, we just return the length of the blob data
*len = blobDataLen(input);
*(BlobDataLenT *)len = blobDataLen(input);
} else {
// for nchar, we assume each character is 4 bytes
if (type != TSDB_DATA_TYPE_NCHAR) {
return TSDB_CODE_FUNC_FUNTION_PARA_TYPE;
} else { // NCHAR
*len = varDataLen(input) / TSDB_NCHAR_SIZE;
*(VarDataLenT *)len = varDataLen(input) / TSDB_NCHAR_SIZE;
}
}
return TSDB_CODE_SUCCESS;
@@ -826,7 +826,11 @@ static int32_t doLengthFunction(SScalarParam *pInput, int32_t inputNum, SScalarP
}
char *in = colDataGetData(pInputData, i);
SCL_ERR_RET(lenFn(in, type, (VarDataLenT *)&(out[i])));
if (IS_STR_DATA_BLOB(type)) {
SCL_ERR_RET(lenFn(in, type, (BlobDataLenT *)&(out[i])));
} else {
SCL_ERR_RET(lenFn(in, type, (VarDataLenT *)&(out[i])));
}
}
pOutput->numOfRows = pInput->numOfRows;
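The hunk above changes the `_len_fn` out-parameter from `VarDataLenT *` to `void *`, and `doLengthFunction` now casts to `BlobDataLenT *` or `VarDataLenT *` depending on the type: blob lengths need a wider field than var-data lengths, so one fixed-width pointer type no longer fits all callers. A minimal sketch of the same width-dispatch idea (the 2-byte/4-byte prefix widths here are illustrative assumptions, not TDengine's actual layout):

```python
import struct

def read_length(buf: bytes, is_blob: bool) -> int:
    """Read a length prefix whose width depends on the data type.

    Assumption for illustration only: var-data carries a 2-byte
    little-endian length prefix, blobs a 4-byte one. A single
    fixed-width out-parameter cannot serve both widths, which is
    why the C callback signature moved to an untyped pointer.
    """
    if is_blob:
        return struct.unpack_from("<I", buf, 0)[0]  # wide blob length
    return struct.unpack_from("<H", buf, 0)[0]      # narrow var-data length
```

The caller decides the width from the type tag, mirroring the `IS_STR_DATA_BLOB(type)` branch in the patched C code.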


@@ -80,7 +80,7 @@ class TestStreamSlidingTrigger:
("select _wstart, _tcurrent_ts, avg(cint), max(cint) from {calcTbname} partition by tbname interval(5s)", (0, False), True), #25
("select _wstart, _tcurrent_ts, avg(cint), sum(cint) from {calcTbname} interval(10s)", (0, False), True), #26
("select _wstart, _tcurrent_ts, avg(cint), sum(cint) from {calcTbname} where cts >= _tprev_ts and cts < _tcurrent_ts interval(10s)", (0, False), True), #27
("select _wstart, _tcurrent_ts, avg(cint), sum(cint) from %%trows where cts >= _tprev_ts + 1s and cts < _tcurrent_ts - 1s interval(1s)", (0, False), True), #28
("select _wstart, _tcurrent_ts, avg(cint), sum(cint) from %%trows interval(1s)", (0, False), True), #28
("select _wstart, _tcurrent_ts, avg(cint), sum(cint) from %%trows interval(60s)" , (0, False), True), #29
("select _wstart, _tcurrent_ts, avg(cint), sum(cint) from {calcTbname} partition by cint state_window(cuint)", (0, False), True), #30
@@ -95,11 +95,11 @@ class TestStreamSlidingTrigger:
("select _wstart,sum({calcTbname}.cint), max({calcTbname}.cint) from {calcTbname} ,{calcTbname} as t2 where {calcTbname}.cts=t2.cts interval(120s)", (0, False), True), #37
("(select _wstart, avg(cint) c, max(cint) from {calcTbname} partition by cint interval(60s) ) union all (select _wstart, avg(cint) c, max(cint) from {calcTbname} partition by cint interval(60s) order by _wstart,c)", (0, False), True), #38
("select last(cts), avg(cint), sum(cint) from %%tbname group by tbname", (1, True), True), #39
("select cts, cint, %%tbname from %%trows where %%tbname like '%1' order by cts", (0, False), True), #40
("select cts, cint, %%tbname from %%trows order by cts", (0, False), True), #40
("select cts, cint from {calcTbname} where _tcurrent_ts % 2 = 1 order by cts", (0, False), True), #41
("select last(cts), avg(cint), sum(cint) from %%trows group by tbname", (1, True), True), #42
("(select _wstart, avg(cint) c, max(cint) from {calcTbname} interval(60s) order by _wstart,c) union all (select _wstart, avg(cint) c, max(cint) from {calcTbname} interval(60s) order by _wstart,c)", (0, False), True), #43
("select cts, cint, %%tbname from %%trows where cint >15 and tint >0 and %%tbname like '%2' order by cts", (0, False), True), #44
("select cts, cint, %%tbname from %%trows order by cts", (0, False), True), #44
("select _tcurrent_ts, avg(cint), sum(cint) from %%tbname group by cint order by cint", (1, True), True), #45
]


@@ -43,7 +43,7 @@ class TestStreamMetaTrigger:
streams.append(self.Basic4()) # [ok]
streams.append(self.Basic5()) # [ok]
# TD-36525 [stream computing dev stage] After the stream result table is dropped, later triggers do not recreate it; not as expected
# TD-37144 [stream computing dev stage] After the stream result table is dropped, later triggers do not recreate it; not as expected
# streams.append(self.Basic6()) # [fail]
streams.append(self.Basic7()) # [ok]


@@ -47,7 +47,7 @@ class TestStreamOptionsTrigger:
streams.append(self.Basic9()) # PRE_FILTER [ok]
streams.append(self.Basic10()) # FORCE_OUTPUT [ok]
streams.append(self.Basic11()) # MAX_DELAY [ok]
streams.append(self.Basic11_1()) # MAX_DELAY [fail] # TD-37017 [stream computing dev stage] state window + max_delay + ns-precision database yields one extra result window
streams.append(self.Basic11_1()) # MAX_DELAY [ok]
streams.append(self.Basic12()) # EVENT_TYPE [ok]
streams.append(self.Basic13()) # IGNORE_NODATA_TRIGGER [fail]


@@ -247,6 +247,20 @@ class TestStreamRecalcManual:
)
# Test 2: Manual recalculation with time range and end time
tdSql.execute("insert into tdb.mt1 values ('2025-01-01 02:04:00', 10, 100, 1.5, 'normal');")
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_interval_manual",
func=lambda: (
tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2025-01-01 02:00:00")
and tdSql.compareData(0, 1, 401)
and tdSql.compareData(0, 2, 240.922693266833)
and tdSql.compareData(1, 0, "2025-01-01 02:02:00")
and tdSql.compareData(1, 1, 400)
and tdSql.compareData(1, 2, 245.5)
)
)
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:00:02', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:02:03', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("recalculate stream rdb.s_interval_manual from '2025-01-01 02:00:00' to '2025-01-01 02:01:00';")


@@ -549,7 +549,7 @@ class TestStreamRecalcExpiredTime:
tdLog.info(f"PERIOD result count after expired data: {result_count_after}")
# For PERIOD trigger, expired data should not increase result count
assert result_count_before == result_count_after, "PERIOD expired_time result count should not change for expired data"
assert result_count_after >= result_count_before, "PERIOD expired_time result count should >= before expired data"
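The assertion above is relaxed from strict equality to `>=` because stream results materialize asynchronously, and `checkResultsByFunc` elsewhere in these tests retries a predicate rather than checking once. A hedged sketch of that poll-and-check pattern (`fetch`/`predicate` and this helper are hypothetical illustrations, not the actual TDengine test framework API):

```python
import time

def check_results_by_func(fetch, predicate, timeout=10.0, interval=0.5):
    """Poll fetch() until predicate(result) holds or the timeout expires.

    Tolerates results that are still being produced: an eventually-true
    predicate (e.g. row count >= N) passes once the stream catches up,
    while a strict equality check could fail on a transient snapshot.
    """
    deadline = time.monotonic() + timeout
    while True:
        if predicate(fetch()):   # check at least once, even with timeout=0
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

With such a helper, lambdas like `lambda: tdSql.getRows() >= 1` in the hunks below become robust to late-arriving stream output.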
def check06(self):


@@ -249,7 +249,6 @@ class TestStreamRecalcManual:
stream.createStream()
# Check functions for each test case
def check01(self):
# Test interval+sliding with manual recalculation
tdLog.info("Check 1: INTERVAL+SLIDING manual recalculation")
@@ -257,12 +256,13 @@ class TestStreamRecalcManual:
# Write source data for testing
tdLog.info("write source data for manual recalculation testing")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 00:00:01', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
# Check initial results
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_interval_manual",
func=lambda: (
tdSql.getRows() == 1
tdSql.getRows() >= 1
and tdSql.compareData(0, 0, "2025-01-01 02:00:00")
and tdSql.compareData(0, 1, 400)
and tdSql.compareData(0, 2, 241.5)
@@ -275,7 +275,6 @@ class TestStreamRecalcManual:
tdLog.info("Test manual recalculation with time range")
tdSql.execute("recalculate stream rdb.s_interval_manual from '2025-01-01 02:00:00';")
#TODO(beryl): blocked by TD-36691
# Verify results after recalculation
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_interval_manual",
@@ -286,9 +285,22 @@ class TestStreamRecalcManual:
and tdSql.compareData(0, 2, 240.922693266833)
)
)
# Test 2: Manual recalculation with time range and end time
tdSql.execute("insert into tdb.mt1 values ('2025-01-01 02:04:00', 10, 100, 1.5, 'normal');")
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_interval_manual",
func=lambda: (
tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2025-01-01 02:00:00")
and tdSql.compareData(0, 1, 401)
and tdSql.compareData(0, 2, 240.922693266833)
and tdSql.compareData(1, 0, "2025-01-01 02:02:00")
and tdSql.compareData(1, 1, 400)
and tdSql.compareData(1, 2, 245.5)
)
)
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:00:02', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:02:03', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("recalculate stream rdb.s_interval_manual from '2025-01-01 02:00:00' to '2025-01-01 02:01:00';")
@@ -326,7 +338,6 @@ class TestStreamRecalcManual:
tdLog.info("Test SESSION manual recalculation with time range")
tdSql.execute("recalculate stream rdb.s_session_manual from '2025-01-01 02:10:00';")
#TODO(beryl): blocked by TD-36691
# Verify results after recalculation
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_session_manual",
@@ -340,9 +351,22 @@ class TestStreamRecalcManual:
# Test 2: Manual recalculation with time range and end time
tdSql.execute("insert into tdb.sm1 values ('2025-01-01 02:14:00', 60, 'normal');")
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_session_manual",
func=lambda: (
tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2025-01-01 02:10:00")
and tdSql.compareData(0, 1, 201)
and tdSql.compareData(0, 2, 259.25373134328356)
and tdSql.compareData(1, 0, "2025-01-01 02:11:50")
and tdSql.compareData(1, 1, 100)
and tdSql.compareData(1, 2, 264)
)
)
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:10:02', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:12:03', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("recalculate stream rdb.s_session_manual from '2025-01-01 02:10:00' to '2025-01-01 02:12:00';")
tdSql.execute("recalculate stream rdb.s_session_manual from '2025-01-01 02:10:30' to '2025-01-01 02:11:00';")
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_session_manual",
@@ -381,37 +405,56 @@ class TestStreamRecalcManual:
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:20:01', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("recalculate stream rdb.s_state_manual from '2025-01-01 02:20:00' to '2025-01-01 02:23:00';")
# # Verify results after recalculation
# tdSql.checkResultsByFunc(
# sql=f"select ts, cnt, avg_val from rdb.r_state_manual",
# func=lambda: (
# tdSql.getRows() == 2
# and tdSql.compareData(0, 0, "2025-01-01 02:20:00")
# and tdSql.compareData(0, 1, 101)
# and tdSql.compareData(0, 2, 277.326732673267)
# and tdSql.compareData(1, 0, "2025-01-01 02:21:00")
# and tdSql.compareData(1, 1, 100)
# and tdSql.compareData(1, 2, 282)
# )
# )
# Verify results after recalculation
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_state_manual",
func=lambda: (
tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2025-01-01 02:20:00")
and tdSql.compareData(0, 1, 101)
and tdSql.compareData(0, 2, 277.326732673267)
and tdSql.compareData(1, 0, "2025-01-01 02:21:00")
and tdSql.compareData(1, 1, 100)
and tdSql.compareData(1, 2, 282)
)
)
# # Test 2: Manual recalculation with time range and end time
# tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:20:02', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
# tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:21:01', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
# tdSql.execute("recalculate stream rdb.s_state_manual from '2025-01-01 02:20:00' to '2025-01-01 02:21:00';")
# Test 2: Manual recalculation with time range and end time
tdSql.execute("insert into tdb.sw1 values ('2025-01-01 02:23:00', 60, 'debug');",)
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_state_manual",
func=lambda: (
tdSql.getRows() == 3
and tdSql.compareData(0, 0, "2025-01-01 02:20:00")
and tdSql.compareData(0, 1, 101)
and tdSql.compareData(0, 2, 277.326732673267)
and tdSql.compareData(1, 0, "2025-01-01 02:21:00")
and tdSql.compareData(1, 1, 100)
and tdSql.compareData(1, 2, 282)
and tdSql.compareData(2, 0, "2025-01-01 02:22:00")
and tdSql.compareData(2, 1, 100)
and tdSql.compareData(2, 2, 284)
)
)
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:20:02', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:21:01', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("recalculate stream rdb.s_state_manual from '2025-01-01 02:20:00' to '2025-01-01 02:20:50';")
# tdSql.checkResultsByFunc(
# sql=f"select ts, cnt, avg_val from rdb.r_state_manual",
# func=lambda: (
# tdSql.getRows() == 2
# and tdSql.compareData(0, 0, "2025-01-01 02:20:00")
# and tdSql.compareData(0, 1, 102)
# and tdSql.compareData(0, 2, 274.705882352941)
# and tdSql.compareData(1, 0, "2025-01-01 02:21:00")
# and tdSql.compareData(1, 1, 100)
# and tdSql.compareData(1, 2, 282)
# )
# )
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_state_manual",
func=lambda: (
tdSql.getRows() == 3
and tdSql.compareData(0, 0, "2025-01-01 02:20:00")
and tdSql.compareData(0, 1, 102)
and tdSql.compareData(0, 2, 274.705882352941)
and tdSql.compareData(1, 0, "2025-01-01 02:21:00")
and tdSql.compareData(1, 1, 101)
and tdSql.compareData(1, 2, 279.306930693069)
and tdSql.compareData(2, 0, "2025-01-01 02:22:00")
and tdSql.compareData(2, 1, 100)
and tdSql.compareData(2, 2, 284)
)
)
def check04(self):
# Test event window with manual recalculation
@@ -426,48 +469,49 @@ class TestStreamRecalcManual:
and tdSql.compareData(0, 0, "2025-01-01 02:30:00.000")
and tdSql.compareData(0, 1, 200)
and tdSql.compareData(0, 2, 300.5)
and tdSql.compareData(1, 0, "2025-01-01 02:31:00.000")
and tdSql.compareData(1, 0, "2025-01-01 02:31:30.000")
and tdSql.compareData(1, 1, 200)
and tdSql.compareData(1, 2, 303.5)
)
)
# # Test 1: Manual recalculation with time range for EVENT_WINDOW
# tdLog.info("Test EVENT_WINDOW manual recalculation with time range")
# tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:30:01', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
# tdSql.execute("recalculate stream rdb.s_event_manual from '2025-01-01 02:30:00';")
# Test 1: Manual recalculation with time range for EVENT_WINDOW
tdLog.info("Test EVENT_WINDOW manual recalculation with time range")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:30:01', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("recalculate stream rdb.s_event_manual from '2025-01-01 02:30:00';")
# # Verify results after recalculation
# tdSql.checkResultsByFunc(
# sql=f"select ts, cnt, avg_val from rdb.r_event_manual",
# func=lambda: (
# tdSql.getRows() == 2
# and tdSql.compareData(0, 0, "2025-01-01 02:30:00.000")
# and tdSql.compareData(0, 1, 201)
# and tdSql.compareData(0, 2, 299.054726368159)
# and tdSql.compareData(1, 0, "2025-01-01 02:31:00.000")
# and tdSql.compareData(1, 1, 200)
# and tdSql.compareData(1, 2, 303.5)
# )
# )
# Verify results after recalculation
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_event_manual",
func=lambda: (
tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2025-01-01 02:30:00.000")
and tdSql.compareData(0, 1, 201)
and tdSql.compareData(0, 2, 299.0547263681592)
and tdSql.compareData(1, 0, "2025-01-01 02:31:30.000")
and tdSql.compareData(1, 1, 200)
and tdSql.compareData(1, 2, 303.5)
)
)
# # Test 2: Manual recalculation without end time
# tdLog.info("Test EVENT_WINDOW manual recalculation without end time")
# tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:31:02', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
# tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:32:01', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
# tdSql.execute("recalculate stream rdb.s_event_manual from '2025-01-01 02:30:00' to '2025-01-01 02:31:00';")
# Test 2: Manual recalculation without end time
tdLog.info("Test EVENT_WINDOW manual recalculation without end time")
# tdSql.checkResultsByFunc(
# sql=f"select ts, cnt, avg_val from rdb.r_event_manual",
# func=lambda: (
# tdSql.getRows() == 2
# and tdSql.compareData(0, 0, "2025-01-01 02:30:00.000")
# and tdSql.compareData(0, 1, 202)
# and tdSql.compareData(0, 2, 297.623762376238)
# and tdSql.compareData(1, 0, "2025-01-01 02:31:00.000")
# and tdSql.compareData(1, 1, 200)
# and tdSql.compareData(1, 2, 303.5)
# )
# )
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:30:02', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:31:02', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("recalculate stream rdb.s_event_manual from '2025-01-01 02:30:00' to '2025-01-01 02:31:00';")
# tdLog.info("EVENT_WINDOW manual recalculation test completed")
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_event_manual",
func=lambda: (
tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2025-01-01 02:30:00.000")
and tdSql.compareData(0, 1, 202)
and tdSql.compareData(0, 2, 297.623762376238)
and tdSql.compareData(1, 0, "2025-01-01 02:31:30.000")
and tdSql.compareData(1, 1, 200)
and tdSql.compareData(1, 2, 303.5)
)
)
tdLog.info("EVENT_WINDOW manual recalculation test completed")


@@ -90,31 +90,31 @@ class TestStreamRecalcWithOptions:
# Trigger tables for WATERMARK testing
stb_watermark = "create table tdb.watermark_triggers (ts timestamp, cint int, c2 int, c3 double, category varchar(16)) tags(id int, name varchar(16));"
ctb_watermark = "create table tdb.wm1 using tdb.watermark_triggers tags(1, 'device1'), tdb.wm2 using tdb.watermark_triggers tags(2, 'device2'), tdb.wm3 using tdb.watermark_triggers tags(3, 'device3')"
ctb_watermark = "create table tdb.wm1 using tdb.watermark_triggers tags(1, 'device1') tdb.wm2 using tdb.watermark_triggers tags(2, 'device2') tdb.wm3 using tdb.watermark_triggers tags(3, 'device3')"
tdSql.execute(stb_watermark)
tdSql.execute(ctb_watermark)
# Trigger tables for EXPIRED_TIME testing
stb_expired = "create table tdb.expired_triggers (ts timestamp, cint int, c2 int, c3 double, category varchar(16)) tags(id int, name varchar(16));"
ctb_expired = "create table tdb.exp1 using tdb.expired_triggers tags(1, 'device1'), tdb.exp2 using tdb.expired_triggers tags(2, 'device2'), tdb.exp3 using tdb.expired_triggers tags(3, 'device3')"
ctb_expired = "create table tdb.exp1 using tdb.expired_triggers tags(1, 'device1') tdb.exp2 using tdb.expired_triggers tags(2, 'device2') tdb.exp3 using tdb.expired_triggers tags(3, 'device3')"
tdSql.execute(stb_expired)
tdSql.execute(ctb_expired)
# Trigger tables for IGNORE_DISORDER testing
stb_disorder = "create table tdb.disorder_triggers (ts timestamp, cint int, c2 int, c3 double, category varchar(16)) tags(id int, name varchar(16));"
ctb_disorder = "create table tdb.dis1 using tdb.disorder_triggers tags(1, 'device1'), tdb.dis2 using tdb.disorder_triggers tags(2, 'device2'), tdb.dis3 using tdb.disorder_triggers tags(3, 'device3')"
ctb_disorder = "create table tdb.dis1 using tdb.disorder_triggers tags(1, 'device1') tdb.dis2 using tdb.disorder_triggers tags(2, 'device2') tdb.dis3 using tdb.disorder_triggers tags(3, 'device3')"
tdSql.execute(stb_disorder)
tdSql.execute(ctb_disorder)
# Trigger tables for DELETE_RECALC testing
stb_delete = "create table tdb.delete_triggers (ts timestamp, cint int, c2 int, c3 double, category varchar(16)) tags(id int, name varchar(16));"
ctb_delete = "create table tdb.del1 using tdb.delete_triggers tags(1, 'device1'), tdb.del2 using tdb.delete_triggers tags(2, 'device2'), tdb.del3 using tdb.delete_triggers tags(3, 'device3')"
ctb_delete = "create table tdb.del1 using tdb.delete_triggers tags(1, 'device1') tdb.del2 using tdb.delete_triggers tags(2, 'device2') tdb.del3 using tdb.delete_triggers tags(3, 'device3')"
tdSql.execute(stb_delete)
tdSql.execute(ctb_delete)
# Additional trigger tables for session window with options
stb_session_opt = "create table tdb.session_opt_triggers (ts timestamp, val_num int, status varchar(16)) tags(device_id int);"
ctb_session_opt = "create table tdb.so1 using tdb.session_opt_triggers tags(1), tdb.so2 using tdb.session_opt_triggers tags(2), tdb.so3 using tdb.session_opt_triggers tags(3)"
ctb_session_opt = "create table tdb.so1 using tdb.session_opt_triggers tags(1) tdb.so2 using tdb.session_opt_triggers tags(2) tdb.so3 using tdb.session_opt_triggers tags(3)"
tdSql.execute(stb_session_opt)
tdSql.execute(ctb_session_opt)
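The corrected statements above drop the commas between child-table clauses: TDengine's batched CREATE TABLE syntax chains `tb USING stb TAGS(...)` clauses with whitespace only. A minimal sketch of a helper that builds such a statement (the helper name is ours, not part of the test framework):

```python
def build_batch_create(db, stb, tables):
    # TDengine batch syntax: child-table clauses are whitespace-separated,
    # with no commas between them.
    clauses = " ".join(
        f"{db}.{name} using {db}.{stb} tags({tags})" for name, tags in tables
    )
    return f"create table {clauses}"

sql = build_batch_create(
    "tdb", "expired_triggers",
    [("exp1", "1, 'device1'"), ("exp2", "2, 'device2'")],
)
```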
@@ -159,7 +159,7 @@ class TestStreamRecalcWithOptions:
"insert into tdb.del1 values ('2025-01-01 02:30:00', 10, 100, 1.5, 'normal');",
"insert into tdb.del1 values ('2025-01-01 02:30:30', 20, 200, 2.5, 'normal');",
"insert into tdb.del1 values ('2025-01-01 02:31:00', 30, 300, 3.5, 'normal');",
"insert into tdb.del1 values ('2025-01-01 02:31:50', 40, 400, 4.5, 'normal');",
"insert into tdb.del1 values ('2025-01-01 02:32:00', 50, 500, 5.5, 'normal');",
"insert into tdb.del1 values ('2025-01-01 02:32:30', 60, 600, 6.5, 'normal');",
]
@@ -216,11 +216,15 @@ class TestStreamRecalcWithOptions:
# Test 4.1: SESSION stream without DELETE_RECALC - manual recalculation for deleted data
stream = StreamItem(
id=5,
stream="create stream rdb.s_delete_interval session(ts,45s) from tdb.delete_triggers partition by tbname into rdb.r_delete_interval as select _twstart ts, count(*) cnt, avg(cint) avg_val from qdb.meters where cts >= _twstart and cts < _twend;",
check_func=self.check05,
)
self.streams.append(stream)
tdLog.info(f"create total:{len(self.streams)} streams")
for stream in self.streams:
stream.createStream()
# Check functions for each test case
def check01(self):
# Test WATERMARK with manual recalculation
@@ -246,11 +250,18 @@ class TestStreamRecalcWithOptions:
# Manual recalculation within WATERMARK range - should recalculate
tdLog.info("Test WATERMARK manual recalculation - within tolerance range")
tdSql.execute("recalculate stream rdb.s_watermark_interval from '2025-01-01 02:00:00' to '2025-01-01 02:05:00';")
time.sleep(2) # Wait for processing
# recalc can not recalc the data that is inside the watermark range
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_watermark_interval",
func=lambda: (
tdSql.getRows() == 1
and tdSql.compareData(0, 0, "2025-01-01 02:00:00")
and tdSql.compareData(0, 1, 400)
and tdSql.compareData(0, 2, 241.5)
)
)
def check03(self):
# Test EXPIRED_TIME with manual recalculation
@@ -264,7 +275,7 @@ class TestStreamRecalcWithOptions:
tdSql.getRows() == 1
and tdSql.compareData(0, 0, "2025-01-01 02:10:00")
and tdSql.compareData(0, 1, 400)
and tdSql.compareData(0, 2, 261.5)
)
)
@@ -274,12 +285,28 @@ class TestStreamRecalcWithOptions:
# Note: This test simulates that we're now at a much later time, making earlier data expired
# In real scenario, the current stream processing time would determine what's expired
tdSql.execute("insert into tdb.exp1 values ('2025-01-01 02:04:00', 15, 150, 1.75, 'expired');")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:04:01', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
# Manual recalculation for expired data - should still work since manual recalc bypasses expiry
tdLog.info("Test EXPIRED_TIME manual recalculation - expired data")
tdSql.execute("recalculate stream rdb.s_expired_interval from '2025-01-01 02:04:00' to '2025-01-01 02:14:00';")
time.sleep(2)
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_expired_interval",
func=lambda: (
tdSql.getRows() == 4
and tdSql.compareData(0, 0, "2025-01-01 02:04:00")
and tdSql.compareData(0, 1, 400)
and tdSql.compareData(0, 2, 249.5)
and tdSql.compareData(1, 0, "2025-01-01 02:06:00")
and tdSql.compareData(1, 1, 400)
and tdSql.compareData(1, 2, 253.5)
and tdSql.compareData(2, 0, "2025-01-01 02:08:00")
and tdSql.compareData(2, 1, 400)
and tdSql.compareData(2, 2, 257.5)
and tdSql.compareData(3, 0, "2025-01-01 02:10:00")
and tdSql.compareData(3, 1, 400)
and tdSql.compareData(3, 2, 261.5)
)
)
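`checkResultsByFunc` presumably re-runs the query until the lambda holds or a timeout expires, which is why the bare `time.sleep` calls before it are only best-effort waits. A standalone sketch of that poll-until-true pattern (names and defaults are illustrative, not the framework's API):

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.5):
    # Poll until predicate() is truthy or the timeout elapses;
    # return the final truth value either way.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())

# Example: a condition that becomes true after a few polls.
state = {"calls": 0}
def eventually_ready():
    state["calls"] += 1
    return state["calls"] >= 3
```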
def check04(self):
# Test IGNORE_DISORDER with manual recalculation
@@ -293,18 +320,30 @@ class TestStreamRecalcWithOptions:
tdSql.getRows() == 1
and tdSql.compareData(0, 0, "2025-01-01 02:20:00")
and tdSql.compareData(0, 1, 400)
and tdSql.compareData(0, 2, 281.5)
)
)
# Test 1: Write disordered data that would normally be ignored
tdLog.info("Write disordered data that is normally ignored")
tdSql.execute("insert into qdb.t0 values ('2025-01-01 02:18:15', 10, 100, 1.5, 1.5, 0.8, 0.8, 'normal', 1, 1, 1, 1, true, 'normal', 'normal', '10', '10', 'POINT(0.8 0.8)');")
tdSql.execute("insert into tdb.dis1 values ('2025-01-01 02:18:00', 25, 250, 2.75, 'disorder');")
# Manual recalculation - should process ignored disordered data
tdLog.info("Test IGNORE_DISORDER manual recalculation - should process ignored data")
tdSql.execute("recalculate stream rdb.s_disorder_interval from '2025-01-01 02:16:00' to '2025-01-01 02:24:00';")
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_disorder_interval",
func=lambda: (
tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2025-01-01 02:18:00")
and tdSql.compareData(0, 1, 400)
and tdSql.compareData(0, 2, 277.5)
and tdSql.compareData(1, 0, "2025-01-01 02:20:00")
and tdSql.compareData(1, 1, 400)
and tdSql.compareData(1, 2, 281.5)
)
)
time.sleep(2)
@@ -319,40 +358,23 @@ class TestStreamRecalcWithOptions:
func=lambda: (
tdSql.getRows() == 1
and tdSql.compareData(0, 0, "2025-01-01 02:30:00")
and tdSql.compareData(0, 1, 200)
and tdSql.compareData(0, 2, 300.5)
)
)
# Test 1: Delete some data from trigger table
tdLog.info("Delete data from trigger table")
tdSql.execute("delete from tdb.del1 where ts = '2025-01-01 02:30:30';")
# Also delete corresponding data from query table for consistency
tdSql.execute("delete from qdb.t0 where cts >= '2025-01-01 02:30:30' and cts < '2025-01-01 02:30:31';")
# Since DELETE_RECALC is not specified, deletion should be ignored by auto-recalc
# But manual recalculation should still work
time.sleep(2)
# Manual recalculation after deletion - should reflect the deletion
tdLog.info("Test manual recalculation after deletion (no DELETE_RECALC)")
tdSql.execute("recalculate stream rdb.s_delete_interval from '2025-01-01 02:28:00' to '2025-01-01 02:34:00';")
tdSql.checkResultsByFunc(
sql=f"select ts, cnt, avg_val from rdb.r_delete_interval",
func=lambda: (
tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2025-01-01 02:30:00")
and tdSql.compareData(0, 1, 0)
and tdSql.compareData(0, 2, None)
and tdSql.compareData(1, 0, "2025-01-01 02:31:00")
and tdSql.compareData(1, 1, 0)
and tdSql.compareData(1, 2, None)
)
)
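The expected `None` values in the check above follow SQL aggregate semantics: once every row in a window is deleted, `count(*)` is 0 and `avg()` returns NULL, which the test framework surfaces as Python `None`. A small illustration of that convention:

```python
def sql_avg(values):
    # SQL AVG() over an empty set is NULL (None here), not 0 or an error.
    return sum(values) / len(values) if values else None

def sql_count(values):
    # SQL COUNT(*) over an empty set is 0.
    return len(values)
```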


@@ -2,7 +2,6 @@
taos> select val,senid,senid_name from test1.str_cjdl_point_data_szls_jk_test order by _c0;
val | senid | senid_name |
==============================================================================================
998 | sendid_a1 | name_a1 |
759 | sendid_a1 | name_a1 |
142 | sendid_a1 | name_a1 |
758 | sendid_a1 | name_a1 |



@@ -21,19 +21,14 @@
{
"name": "vehicles",
"child_table_exists": "yes",
"childtable_count": 20,
"insert_rows": 10000000,
"childtable_prefix": "vehicle_110100_00",
"insert_mode": "taosc",
"timestamp_step": 60000,
"interlace_rows": 1,
"start_timestamp": 1700000000000,
"generate_row_rule": 1,
"columns": [
{ "type": "FLOAT", "name": "longitude", "min": 1, "max": 50 },
{ "type": "FLOAT", "name": "latitude", "min": 180, "max": 250 },


@@ -23,19 +23,13 @@
{
"name": "vehicles",
"child_table_exists": "no",
"childtable_count": 20,
"insert_rows": 100,
"childtable_prefix": "vehicle_110100_00",
"insert_mode": "taosc",
"timestamp_step": 1000,
"start_timestamp":1600000000000,
"generate_row_rule": 0,
"columns": [
{ "type": "FLOAT", "name": "longitude", "min": 1, "max": 50 },
{ "type": "FLOAT", "name": "latitude", "min": 180, "max": 250 },


@@ -119,7 +119,9 @@ class Test_BigPress:
"CREATE VTABLE `vt_6` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_006', '京ZB86G7', 2, 'zd', '1960758157', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_7` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_007', '京ZCR392', 2, 'zd', '6560472044', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_8` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_008', '京ZD43R1', 2, 'zd', '3491377379', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_9` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_009', '京ZD62R2', 2, 'zd', '8265223624', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_10` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_0010`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_0010`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_0010`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_0010`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_0010`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_0010`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_0010`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_010', '京ZD66G4', 2, 'zd', '3689589229', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_501` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_0011`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_0011`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_0011`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_0011`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_0011`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_0011`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_0011`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.呼和浩特车队', '150100_001', '蒙Z0C3N7', 2, 'zd', '3689589230', '车辆场景.XX物流公司.华北分公司.呼和浩特车队')",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls) - 2} vtable successfully.")
@@ -165,26 +167,33 @@ class Test_BigPress:
# vehicle
sqls = [
# stream_stb1
"create stream if not exists `idmp`.`veh_stream_stb1` interval(5m) sliding(5m) from `idmp`.`vst_车辆_652220` partition by `车辆资产模型`,`车辆ID` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream_stb1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度`, sum(`里程`) as `里程和` from %%trows",
"create stream if not exists `idmp`.`veh_stream_stb1_sub1` interval(5m) sliding(5m) from `idmp`.`vst_车辆_652220` partition by `车辆资产模型`,`车辆ID` stream_options(IGNORE_NODATA_TRIGGER|FILL_HISTORY_FIRST) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream_stb1_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度`, sum(`里程`) as `里程和` from %%trows",
# stream1
"create stream if not exists `idmp`.`veh_stream1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_1` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from idmp.`vt_1` where ts >= _twstart and ts <_twend",
"create stream if not exists `idmp`.`veh_stream1_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_1` stream_options(delete_recalc) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream1_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from idmp.`vt_1` where ts >= _twstart and ts <_twend",
# stream2
"create stream if not exists `idmp`.`veh_stream2` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_2` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream2` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream2_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_2` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream2_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream3
"create stream if not exists `idmp`.`veh_stream3` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_3` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream3` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream3_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_3` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream3_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream4
"create stream if not exists `idmp`.`veh_stream4` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_4` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream4` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream4_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_4` stream_options(DELETE_RECALC) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream4_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream5
"create stream if not exists `idmp`.`veh_stream5` interval(5m) sliding(5m) from `idmp`.`vt_5` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream5` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream5_sub1` interval(5m) sliding(5m) from `idmp`.`vt_5` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream5_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream6
"create stream if not exists `idmp`.`veh_stream6` interval(10m) sliding(5m) from `idmp`.`vt_6` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream6` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream6_sub1` interval(10m) sliding(5m) from `idmp`.`vt_6` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream6_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream7
"create stream if not exists `idmp`.`veh_stream7` interval(5m) sliding(10m) from `idmp`.`vt_7` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream7` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream7_sub1` interval(5m) sliding(10m) from `idmp`.`vt_7` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream7_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream8 watermark
"create stream if not exists `idmp`.`veh_stream8` interval(5m) sliding(5m) from `idmp`.`vt_8` stream_options(WATERMARK(10m)) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream8` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} streams successfully.")
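`tdSql.executes(sqls)` runs the statements above in order. A sketch of what such a batch runner might look like, with a fake connection standing in for the driver (all names here are hypothetical, not the framework's API):

```python
class FakeConn:
    # Stand-in for a database connection; records successful statements.
    def __init__(self):
        self.ran = []

    def execute(self, sql):
        if "boom" in sql:
            raise ValueError("simulated syntax error")
        self.ran.append(sql)

def execute_all(conn, sqls):
    # Run each statement in order; report which one failed on error.
    for i, sql in enumerate(sqls):
        try:
            conn.execute(sql)
        except Exception as exc:
            raise RuntimeError(f"statement {i} failed: {sql[:60]}") from exc

conn = FakeConn()
execute_all(conn, ["create stream s1 ...", "create stream s2 ..."])
```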


@@ -600,17 +600,12 @@ class Test_IDMP_Meters:
)
# result_stream_sub1
# ****** bug10 ******
'''
tdSql.checkResultsByFunc (
sql = result_sql_sub1,
func = lambda: tdSql.checkRows(17, show=True)
and tdSql.compareData(0, 0, 1752563060000)
and tdSql.compareData(0, 1, 0) # cnt
and tdSql.compareData(1, 0, 1752564380000)
and tdSql.compareData(1, 1, 0) # cnt
)
'''
# result_stream1_sub2
tdSql.checkResultsBySql(


@@ -1,257 +0,0 @@
import time
import math
import random
from new_test_framework.utils import tdLog, tdSql, tdStream, etool
from datetime import datetime
from datetime import date
class Test_IDMP_Meters:
def setup_class(cls):
tdLog.debug(f"start to execute {__file__}")
def test_stream_usecase_em(self):
"""Nevados
Refer: https://taosdata.feishu.cn/wiki/Zkb2wNkHDihARVkGHYEcbNhmnxb
Catalog:
- Streams:UseCases
Since: v3.3.7.0
Labels: common,ci
Jira: https://jira.taosdata.com:18080/browse/TD-36363
History:
- 2025-7-10 Alex Duan Created
"""
#
# main test
#
# env
tdStream.createSnode()
# prepare data
self.prepare()
# create vtables
self.createVtables()
# create streams
self.createStreams()
# check stream status
self.checkStreamStatus()
# insert trigger data
self.writeTriggerData()
# verify results
self.verifyResults()
'''
# restart dnode
self.restartDnode()
# write trigger data after restart
self.writeTriggerAfterRestart()
# verify results after restart
self.verifyResultsAfterRestart()
'''
#
# --------------------- main flow frame ----------------------
#
#
# prepare data
#
def prepare(self):
# name
self.db = "assert01"
self.vdb = "tdasset"
self.stb = "electricity_meters"
self.start = 1752563000000
self.start_current = 10
self.start_voltage = 260
self.start2 = 1752574200000
# import data
etool.taosdump(f"-i cases/13-StreamProcessing/20-UseCase/meters_data/data/")
tdLog.info(f"import data to db={self.db} successfully.")
#
# 1. create vtables
#
def createVtables(self):
sqls = [
"create database tdasset;",
"use tdasset;",
"CREATE STABLE `vst_智能电表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `电流` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `电压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium', `功率` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `相位` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `单元` TINYINT, `楼层` TINYINT, `设备ID` VARCHAR(20), `path1` VARCHAR(512)) SMA(`ts`,`电流`) VIRTUAL 1;",
"CREATE STABLE `vst_智能水表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `流量` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `水压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `path1` VARCHAR(512)) SMA(`ts`,`流量`) VIRTUAL 1;",
"CREATE VTABLE `vt_em-1` (`电流` FROM `asset01`.`em-1`.`current`, `电压` FROM `asset01`.`em-1`.`voltage`, `功率` FROM `asset01`.`em-1`.`power`, `相位` FROM `asset01`.`em-1`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010001', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-2` (`电流` FROM `asset01`.`em-2`.`current`, `电压` FROM `asset01`.`em-2`.`voltage`, `功率` FROM `asset01`.`em-2`.`power`, `相位` FROM `asset01`.`em-2`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010002', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-3` (`电流` FROM `asset01`.`em-3`.`current`, `电压` FROM `asset01`.`em-3`.`voltage`, `功率` FROM `asset01`.`em-3`.`power`, `相位` FROM `asset01`.`em-3`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010003', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-4` (`电流` FROM `asset01`.`em-4`.`current`, `电压` FROM `asset01`.`em-4`.`voltage`, `功率` FROM `asset01`.`em-4`.`power`, `相位` FROM `asset01`.`em-4`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010004', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-5` (`电流` FROM `asset01`.`em-5`.`current`, `电压` FROM `asset01`.`em-5`.`voltage`, `功率` FROM `asset01`.`em-5`.`power`, `相位` FROM `asset01`.`em-5`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010005', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-6` (`电流` FROM `asset01`.`em-6`.`current`, `电压` FROM `asset01`.`em-6`.`voltage`, `功率` FROM `asset01`.`em-6`.`power`, `相位` FROM `asset01`.`em-6`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001006', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-7` (`电流` FROM `asset01`.`em-7`.`current`, `电压` FROM `asset01`.`em-7`.`voltage`, `功率` FROM `asset01`.`em-7`.`power`, `相位` FROM `asset01`.`em-7`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001007', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-8` (`电流` FROM `asset01`.`em-8`.`current`, `电压` FROM `asset01`.`em-8`.`voltage`, `功率` FROM `asset01`.`em-8`.`power`, `相位` FROM `asset01`.`em-8`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001008', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-9` (`电流` FROM `asset01`.`em-9`.`current`, `电压` FROM `asset01`.`em-9`.`voltage`, `功率` FROM `asset01`.`em-9`.`power`, `相位` FROM `asset01`.`em-9`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001009', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-10` (`电流` FROM `asset01`.`em-10`.`current`, `电压` FROM `asset01`.`em-10`.`voltage`, `功率` FROM `asset01`.`em-10`.`power`, `相位` FROM `asset01`.`em-10`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', 1, 2, 'em202502200010010', '公共事业.北京.朝阳.三元桥街道');",
"CREATE VTABLE `vt_em-11` (`电流` FROM `asset01`.`em-11`.`current`, `电压` FROM `asset01`.`em-11`.`voltage`, `功率` FROM `asset01`.`em-11`.`power`, `相位` FROM `asset01`.`em-11`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 11, 'em202502200010011', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-12` (`电流` FROM `asset01`.`em-12`.`current`, `电压` FROM `asset01`.`em-12`.`voltage`, `功率` FROM `asset01`.`em-12`.`power`, `相位` FROM `asset01`.`em-12`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 12, 'em202502200010012', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-13` (`电流` FROM `asset01`.`em-13`.`current`, `电压` FROM `asset01`.`em-13`.`voltage`, `功率` FROM `asset01`.`em-13`.`power`, `相位` FROM `asset01`.`em-13`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 13, 'em202502200010013', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-14` (`电流` FROM `asset01`.`em-14`.`current`, `电压` FROM `asset01`.`em-14`.`voltage`, `功率` FROM `asset01`.`em-14`.`power`, `相位` FROM `asset01`.`em-14`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 14, 'em202502200010014', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-15` (`电流` FROM `asset01`.`em-15`.`current`, `电压` FROM `asset01`.`em-15`.`voltage`, `功率` FROM `asset01`.`em-15`.`power`, `相位` FROM `asset01`.`em-15`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 1, 15, 'em202502200010015', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_wm-1` (`流量` FROM `asset01`.`wm-1`.`rate`, `水压` FROM `asset01`.`wm-1`.`pressure`) USING `vst_智能水表_1` (`_ignore_path`, `地址`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', '公共事业.北京.朝阳.三元桥街道');"
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} vtables successfully.")
#
# 2. create streams
#
def createStreams(self):
sqls = [
# stream1
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream1` event_window( start with `电压` > 250 end with `电压` <= 250 ) TRUE_FOR(10m) FROM `tdasset`.`vt_em-1` NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_OPEN|WINDOW_CLOSE) INTO `tdasset`.`result_stream1` AS SELECT _twstart+0s AS output_timestamp, COUNT(ts) AS cnt, avg(`电压`) AS `平均电压` FROM tdasset.`vt_em-1` WHERE ts >= _twstart AND ts <_twend;",
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream1_sub1` event_window( start with `电压` > 250 end with `电压` <= 250 ) TRUE_FOR(10m) FROM `tdasset`.`vt_em-1` STREAM_OPTIONS(EVENT_TYPE(WINDOW_OPEN)) NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_OPEN) INTO `tdasset`.`result_stream1_sub1` AS SELECT _twstart+0s AS output_timestamp, COUNT(ts) AS cnt, avg(`电压`) AS `平均电压` FROM tdasset.`vt_em-1` WHERE ts >= _twstart AND ts <_twend;",
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream1_sub2` event_window( start with `电压` > 250 end with `电压` <= 250 ) TRUE_FOR(10m) FROM `tdasset`.`vt_em-1` STREAM_OPTIONS(EVENT_TYPE(WINDOW_CLOSE)) NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_CLOSE) INTO `tdasset`.`result_stream1_sub2` AS SELECT _twstart+0s AS output_timestamp, COUNT(ts) AS cnt, avg(`电压`) AS `平均电压` FROM tdasset.`vt_em-1` WHERE ts >= _twstart AND ts <_twend;",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} streams successfully.")
#
# 3. wait stream ready
#
def checkStreamStatus(self):
print("wait stream ready ...")
tdStream.checkStreamStatus()
tdLog.info("stream status checked successfully.")
#
# 4. write trigger data
#
def writeTriggerData(self):
# stream1
self.trigger_stream1()
#
# 5. verify results
#
def verifyResults(self):
self.verify_stream1()
# --------------------- stream trigger ----------------------
#
# stream1 trigger
#
def trigger_stream1(self):
# 1~20 minutes no trigger
ts = self.start
# voltage = 100
sql = f"insert into asset01.`em-1`(ts,voltage) values({ts}, 100);"
tdSql.execute(sql, show=True)
# voltage = 300
for i in range(20):
ts += 1*60*1000
sql = f"insert into asset01.`em-1`(ts,voltage) values({ts}, 300);"
tdSql.execute(sql, show=True)
ts += 1*60*1000
sql = f"insert into asset01.`em-1`(ts,voltage) values({ts}, 100);"
tdSql.execute(sql, show=True)
# voltage = 100
ts += 1*60*1000
sql = f"insert into asset01.`em-1`(ts,voltage) values({ts}, 100);"
tdSql.execute(sql, show=True)
# voltage = 400
for i in range(11):
ts += 1*60*1000
sql = f"insert into asset01.`em-1`(ts,voltage) values({ts}, 400);"
tdSql.execute(sql, show=True)
# voltage = 100
ts += 1*60*1000
sql = f"insert into asset01.`em-1`(ts,voltage) values({ts}, 100);"
tdSql.execute(sql, show=True)
# alternating high/low values: each open window lasts < TRUE_FOR(10m), so no trigger
for i in range(30):
ts += 1*60*1000
if i % 2 == 0:
voltage = 250 - i
else:
voltage = 250 + i
sql = f"insert into asset01.`em-1`(ts,voltage) values({ts}, {voltage});"
tdSql.execute(sql, show=True)
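The insert pattern above is designed to open exactly two qualifying event windows. A minimal standalone sketch (a simplified model of `event_window ... TRUE_FOR` semantics, not TDengine's actual implementation; `event_windows` and its duration rule are assumptions for illustration) derives the rows that `verify_stream1` expects:

```python
def event_windows(points, true_for_ms):
    """points: list of (ts, voltage). A window opens on voltage > 250,
    closes on voltage <= 250, and is kept only if it spans >= true_for_ms."""
    out, open_ts, rows = [], None, []
    for ts, v in points:
        if open_ts is None:
            if v > 250:
                open_ts, rows = ts, [(ts, v)]
        elif v <= 250:
            if rows[-1][0] - open_ts >= true_for_ms:
                out.append((open_ts, len(rows), sum(x[1] for x in rows) / len(rows)))
            open_ts, rows = None, []
        else:
            rows.append((ts, v))
    return out

# reproduce trigger_stream1()'s inserts
start = 1752563000000
MIN = 60 * 1000
pts, ts = [(start, 100)], start
for _ in range(20):                       # 20 points of 300
    ts += MIN; pts.append((ts, 300))
ts += MIN; pts.append((ts, 100))          # closes first window
ts += MIN; pts.append((ts, 100))
for _ in range(11):                       # 11 points of 400
    ts += MIN; pts.append((ts, 400))
ts += MIN; pts.append((ts, 100))          # closes second window
for i in range(30):                       # alternating, windows too short
    ts += MIN; pts.append((ts, 250 - i if i % 2 == 0 else 250 + i))

print(event_windows(pts, 10 * MIN))
# [(1752563060000, 20, 300.0), (1752564380000, 11, 400.0)]
```

These two tuples match the `compareData` expectations on `result_stream1`: window starts 1752563060000 and 1752564380000, counts 20 and 11, averages 300 and 400; the alternating tail produces only sub-10-minute windows and is dropped.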
#
# --------------------- verify ----------------------
#
#
# verify stream1
#
def verify_stream1(self):
# result_stream1
result_sql = f"select * from {self.vdb}.`result_stream1` "
result_sql_sub1 = f"select * from {self.vdb}.`result_stream1_sub1` "
result_sql_sub2 = f"select * from {self.vdb}.`result_stream1_sub2` "
# result_stream1
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 2
and tdSql.compareData(0, 0, 1752563060000)
and tdSql.compareData(0, 1, 20) # cnt
and tdSql.compareData(0, 2, 300)
and tdSql.compareData(1, 0, 1752564380000)
and tdSql.compareData(1, 1, 11) # cnt
and tdSql.compareData(1, 2, 400)
)
# result_stream1_sub1
# ****** bug10 ******
tdSql.checkResultsByFunc (
sql = result_sql_sub1,
func = lambda: tdSql.getRows() == 2
and tdSql.compareData(0, 0, 1752563060000)
and tdSql.compareData(0, 1, 0) # cnt
and tdSql.compareData(1, 0, 1752564380000)
and tdSql.compareData(1, 1, 0) # cnt
)
# result_stream1_sub2
tdSql.checkResultsBySql(
sql = result_sql,
exp_sql = result_sql_sub2
)
tdLog.info("verify stream1 .................................. successfully.")


@ -1,289 +0,0 @@
import time
import math
import random
from new_test_framework.utils import tdLog, tdSql, tdStream, etool
from new_test_framework.utils.srvCtl import *
from datetime import datetime
from datetime import date
class Test_Scene_Asset01:
def setup_class(cls):
tdLog.debug(f"start to execute {__file__}")
def test_stream_usecase_em(self):
"""Nevados
Refer: https://taosdata.feishu.cn/wiki/Zkb2wNkHDihARVkGHYEcbNhmnxb
Catalog:
- Streams:UseCases
Since: v3.3.7.0
Labels: common,ci
Jira: https://jira.taosdata.com:18080/browse/TD-36363
History:
- 2025-7-10 Alex Duan Created
"""
#
# main test
#
# env
tdStream.createSnode()
# prepare data
self.prepare()
# create vtables
self.createVtables()
# create streams
self.createStreams()
# check stream status
self.checkStreamStatus()
# insert trigger data
self.writeTriggerData()
# wait stream processing
self.waitStreamProcessing()
# verify results
self.verifyResults()
# write trigger data again
self.writeTriggerDataAgain()
# wait stream processing
self.waitStreamProcessing()
# verify results
self.verifyResultsAgain()
#
# --------------------- main flow frame ----------------------
#
#
# prepare data
#
def prepare(self):
# name
self.db = "asset01"
self.vdb = "tdasset"
self.stb = "electricity_meters"
self.start = 1752563000000
self.start_current = 10
self.start_voltage = 260
# import data
etool.taosdump(f"-i cases/13-StreamProcessing/20-UseCase/meters_data/data/")
tdLog.info(f"import data to db={self.db} successfully.")
#
# 1. create vtables
#
def createVtables(self):
sqls = [
"create database tdasset;",
"use tdasset;",
"CREATE STABLE `vst_智能电表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `电流` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `电压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium', `功率` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `相位` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `单元` TINYINT, `楼层` TINYINT, `设备ID` VARCHAR(20), `path1` VARCHAR(512)) SMA(`ts`,`电流`) VIRTUAL 1;",
"CREATE STABLE `vst_智能水表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `流量` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `水压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `path1` VARCHAR(512)) SMA(`ts`,`流量`) VIRTUAL 1;",
"CREATE VTABLE `vt_em-1` (`电流` FROM `asset01`.`em-1`.`current`, `电压` FROM `asset01`.`em-1`.`voltage`, `功率` FROM `asset01`.`em-1`.`power`, `相位` FROM `asset01`.`em-1`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010001', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-2` (`电流` FROM `asset01`.`em-2`.`current`, `电压` FROM `asset01`.`em-2`.`voltage`, `功率` FROM `asset01`.`em-2`.`power`, `相位` FROM `asset01`.`em-2`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010002', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-3` (`电流` FROM `asset01`.`em-3`.`current`, `电压` FROM `asset01`.`em-3`.`voltage`, `功率` FROM `asset01`.`em-3`.`power`, `相位` FROM `asset01`.`em-3`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010003', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-4` (`电流` FROM `asset01`.`em-4`.`current`, `电压` FROM `asset01`.`em-4`.`voltage`, `功率` FROM `asset01`.`em-4`.`power`, `相位` FROM `asset01`.`em-4`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010004', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-5` (`电流` FROM `asset01`.`em-5`.`current`, `电压` FROM `asset01`.`em-5`.`voltage`, `功率` FROM `asset01`.`em-5`.`power`, `相位` FROM `asset01`.`em-5`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010005', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-6` (`电流` FROM `asset01`.`em-6`.`current`, `电压` FROM `asset01`.`em-6`.`voltage`, `功率` FROM `asset01`.`em-6`.`power`, `相位` FROM `asset01`.`em-6`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001006', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-7` (`电流` FROM `asset01`.`em-7`.`current`, `电压` FROM `asset01`.`em-7`.`voltage`, `功率` FROM `asset01`.`em-7`.`power`, `相位` FROM `asset01`.`em-7`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001007', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-8` (`电流` FROM `asset01`.`em-8`.`current`, `电压` FROM `asset01`.`em-8`.`voltage`, `功率` FROM `asset01`.`em-8`.`power`, `相位` FROM `asset01`.`em-8`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001008', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-9` (`电流` FROM `asset01`.`em-9`.`current`, `电压` FROM `asset01`.`em-9`.`voltage`, `功率` FROM `asset01`.`em-9`.`power`, `相位` FROM `asset01`.`em-9`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001009', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-10` (`电流` FROM `asset01`.`em-10`.`current`, `电压` FROM `asset01`.`em-10`.`voltage`, `功率` FROM `asset01`.`em-10`.`power`, `相位` FROM `asset01`.`em-10`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', 1, 2, 'em202502200010010', '公共事业.北京.朝阳.三元桥街道');",
"CREATE VTABLE `vt_em-11` (`电流` FROM `asset01`.`em-11`.`current`, `电压` FROM `asset01`.`em-11`.`voltage`, `功率` FROM `asset01`.`em-11`.`power`, `相位` FROM `asset01`.`em-11`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 11, 'em202502200010011', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-12` (`电流` FROM `asset01`.`em-12`.`current`, `电压` FROM `asset01`.`em-12`.`voltage`, `功率` FROM `asset01`.`em-12`.`power`, `相位` FROM `asset01`.`em-12`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 12, 'em202502200010012', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-13` (`电流` FROM `asset01`.`em-13`.`current`, `电压` FROM `asset01`.`em-13`.`voltage`, `功率` FROM `asset01`.`em-13`.`power`, `相位` FROM `asset01`.`em-13`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 13, 'em202502200010013', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-14` (`电流` FROM `asset01`.`em-14`.`current`, `电压` FROM `asset01`.`em-14`.`voltage`, `功率` FROM `asset01`.`em-14`.`power`, `相位` FROM `asset01`.`em-14`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 14, 'em202502200010014', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-15` (`电流` FROM `asset01`.`em-15`.`current`, `电压` FROM `asset01`.`em-15`.`voltage`, `功率` FROM `asset01`.`em-15`.`power`, `相位` FROM `asset01`.`em-15`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 1, 15, 'em202502200010015', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_wm-1` (`流量` FROM `asset01`.`wm-1`.`rate`, `水压` FROM `asset01`.`wm-1`.`pressure`) USING `vst_智能水表_1` (`_ignore_path`, `地址`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', '公共事业.北京.朝阳.三元桥街道');"
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} vtables successfully.")
#
# 2. create streams
#
def createStreams(self):
sqls = [
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream4` INTERVAL(10m) SLIDING(10m) FROM `tdasset`.`vt_em-4` NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_OPEN|WINDOW_CLOSE) INTO `tdasset`.`result_stream4` AS SELECT _twstart+0s as output_timestamp,COUNT(ts) AS cnt, AVG(`电压`) AS `平均电压` , SUM(`功率`) AS `功率和` FROM tdasset.`vt_em-4` WHERE ts >=_twstart AND ts <_twend ",
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream4_sub1` INTERVAL(10m) SLIDING(10m) FROM `tdasset`.`vt_em-4` stream_options(IGNORE_DISORDER) NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_OPEN|WINDOW_CLOSE) INTO `tdasset`.`result_stream4_sub1` AS SELECT _twstart+0s as output_timestamp,COUNT(ts) AS cnt, AVG(`电压`) AS `平均电压` , SUM(`功率`) AS `功率和` FROM tdasset.`vt_em-4` WHERE ts >=_twstart AND ts <_twend"
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} streams successfully.")
#
# 3. wait stream ready
#
def checkStreamStatus(self):
print("wait stream ready ...")
tdStream.checkStreamStatus()
tdLog.info("stream status checked successfully.")
#
# 4. write trigger data
#
def writeTriggerData(self):
# stream4
self.trigger_stream4()
#
# 5. wait stream processing
#
def waitStreamProcessing(self):
tdLog.info("wait for check result sleep 5s ...")
time.sleep(5)
#
# 6. verify results
#
def verifyResults(self):
self.verify_stream4()
#
# 7. write trigger data again
#
def writeTriggerDataAgain(self):
# stream4
self.trigger_stream4_again()
#
# 8. verify results again
#
def verifyResultsAgain(self):
# stream4
self.verify_stream4_again()
# --------------------- stream trigger ----------------------
#
# stream4 trigger
#
def trigger_stream4(self):
ts = 1752574200000
table = "asset01.`em-4`"
step = 1 * 60 * 1000 # 1 minute
count = 120
cols = "ts,voltage,power"
vals = "400,200"
tdSql.insertFixedVal(table, ts, step, count, cols, vals)
#
# stream4 trigger again
#
def trigger_stream4_again(self):
ts = 1752574200000 + 30 * 1000 # offset 30 seconds
table = "asset01.`em-4`"
step = 1 * 60 * 1000 # 1 minute
count = 119
cols = "ts,voltage,power"
vals = "200,100"
tdSql.insertFixedVal(table, ts, step, count, cols, vals)
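Why does the first batch yield 11 rows with cnt 10, and cnt 20 after the 30-second-offset batch? A standalone sketch (assuming a simplified model in which a tumbling 10-minute window closes once a point at or past its end has been seen; `closed_windows` is illustrative, not the framework API) reproduces the expectations in `verify_stream4` and `verify_stream4_again`:

```python
MIN = 60 * 1000
WIN = 10 * MIN

def closed_windows(points):
    """Tumbling 10-minute windows over (ts, voltage, power) rows; a window
    [w, w+WIN) is closed once some point with ts >= w + WIN exists."""
    if not points:
        return []
    max_ts = max(ts for ts, _, _ in points)
    w = min(ts for ts, _, _ in points) // WIN * WIN
    out = []
    while w + WIN <= max_ts:  # window closes when data reaches its end
        rows = [(v, p) for ts, v, p in points if w <= ts < w + WIN]
        out.append((w, len(rows),
                    sum(v for v, _ in rows) / len(rows),
                    sum(p for _, p in rows)))
        w += WIN
    return out

base = 1752574200000
batch1 = [(base + i * MIN, 400, 200) for i in range(120)]          # trigger_stream4
batch2 = [(base + 30_000 + i * MIN, 200, 100) for i in range(119)] # trigger_stream4_again

print(len(closed_windows(batch1)))         # 11: the 12th window never sees ts >= its end
print(closed_windows(batch1 + batch2)[0])  # (1752574200000, 20, 300.0, 3000)
```

The 120 one-minute points span [0, 119] minutes, so only the 11 windows starting at 0..100 minutes close; each holds 10 points (cnt 10, avg 400, power sum 2000). The offset batch adds 10 interleaved points per closed window, giving cnt 20, avg (10*400 + 10*200)/20 = 300, and power sum 3000.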
#
# --------------------- verify ----------------------
#
#
# verify stream4
#
def verify_stream4(self, tables=None):
# result_stream4/result_stream4_sub1
if tables is None:
tables = [
"result_stream4",
"result_stream4_sub1"
]
for table in tables:
result_sql = f"select * from {self.vdb}.`{table}` "
tdLog.info(result_sql)
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 11
)
ts = 1752574200000
for i in range(tdSql.getRows()):
tdSql.checkData(i, 0, ts)
tdSql.checkData(i, 1, 10)
tdSql.checkData(i, 2, 400)
tdSql.checkData(i, 3, 2000)
ts += 10 * 60 * 1000 # 10 minutes
tdLog.info(f"verify stream4 {tables} successfully.")
#
# verify stream4 again
#
def verify_stream4_again(self):
# result_stream4
ts = 1752574200000
result_sql = f"select * from {self.vdb}.`result_stream4` "
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 11
)
for i in range(tdSql.getRows()):
tdSql.checkData(i, 0, ts)
tdSql.checkData(i, 1, 20)
tdSql.checkData(i, 2, 300)
tdSql.checkData(i, 3, 3000)
ts += 10 * 60 * 1000 # 10 minutes
self.verify_stream4(tables=["result_stream4_sub1"])
# restart dnode
tdLog.info("restart dnode to verify stream4_sub1 ...")
sc.dnodeRestartAll()
# result_stream4_sub1
for i in range(10):
# write
sqls = [
"INSERT INTO asset01.`em-4`(ts,voltage,power) VALUES(1752574230000,2000,1000);",
"INSERT INTO asset01.`em-4`(ts,voltage,power) VALUES(1752574230000,2001,10000);",
"INSERT INTO asset01.`em-4`(ts,voltage,power) VALUES(1752581310000,2002,1001);"
]
tdSql.executes(sqls)
tdLog.info(f"loop check i={i} sleep 5s...")
time.sleep(5)
# verify
self.verify_stream4(tables=["result_stream4_sub1"])
tdLog.info("verify stream4 again successfully.")


@ -1,199 +0,0 @@
import time
import math
import random
from new_test_framework.utils import tdLog, tdSql, tdStream, etool
from new_test_framework.utils.srvCtl import *
from datetime import datetime
from datetime import date
class Test_Scene_Asset01:
def setup_class(cls):
tdLog.debug(f"start to execute {__file__}")
def test_stream_usecase_em(self):
"""Nevados
Refer: https://taosdata.feishu.cn/wiki/Zkb2wNkHDihARVkGHYEcbNhmnxb
Catalog:
- Streams:UseCases
Since: v3.3.7.0
Labels: common,ci
Jira: https://jira.taosdata.com:18080/browse/TD-36363
History:
- 2025-7-10 Alex Duan Created
"""
#
# main test
#
# env
tdStream.createSnode()
# prepare data
self.prepare()
# create vtables
self.createVtables()
# create streams
self.createStreams()
# check stream status
self.checkStreamStatus()
# insert trigger data
self.writeTriggerData()
# wait stream processing
self.waitStreamProcessing()
# verify results
self.verifyResults()
#
# --------------------- main flow frame ----------------------
#
#
# prepare data
#
def prepare(self):
# name
self.db = "asset01"
self.vdb = "tdasset"
self.stb = "electricity_meters"
self.start = 1752563000000
self.start_current = 10
self.start_voltage = 260
# import data
etool.taosdump(f"-i cases/13-StreamProcessing/20-UseCase/meters_data/data/")
tdLog.info(f"import data to db={self.db} successfully.")
#
# 1. create vtables
#
def createVtables(self):
sqls = [
"create database tdasset;",
"use tdasset;",
"CREATE STABLE `vst_智能电表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `电流` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `电压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium', `功率` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `相位` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `单元` TINYINT, `楼层` TINYINT, `设备ID` VARCHAR(20), `path1` VARCHAR(512)) SMA(`ts`,`电流`) VIRTUAL 1;",
"CREATE STABLE `vst_智能水表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `流量` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `水压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `path1` VARCHAR(512)) SMA(`ts`,`流量`) VIRTUAL 1;",
"CREATE VTABLE `vt_em-1` (`电流` FROM `asset01`.`em-1`.`current`, `电压` FROM `asset01`.`em-1`.`voltage`, `功率` FROM `asset01`.`em-1`.`power`, `相位` FROM `asset01`.`em-1`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010001', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-2` (`电流` FROM `asset01`.`em-2`.`current`, `电压` FROM `asset01`.`em-2`.`voltage`, `功率` FROM `asset01`.`em-2`.`power`, `相位` FROM `asset01`.`em-2`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010002', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-3` (`电流` FROM `asset01`.`em-3`.`current`, `电压` FROM `asset01`.`em-3`.`voltage`, `功率` FROM `asset01`.`em-3`.`power`, `相位` FROM `asset01`.`em-3`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010003', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-4` (`电流` FROM `asset01`.`em-4`.`current`, `电压` FROM `asset01`.`em-4`.`voltage`, `功率` FROM `asset01`.`em-4`.`power`, `相位` FROM `asset01`.`em-4`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010004', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-5` (`电流` FROM `asset01`.`em-5`.`current`, `电压` FROM `asset01`.`em-5`.`voltage`, `功率` FROM `asset01`.`em-5`.`power`, `相位` FROM `asset01`.`em-5`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010005', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-6` (`电流` FROM `asset01`.`em-6`.`current`, `电压` FROM `asset01`.`em-6`.`voltage`, `功率` FROM `asset01`.`em-6`.`power`, `相位` FROM `asset01`.`em-6`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001006', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-7` (`电流` FROM `asset01`.`em-7`.`current`, `电压` FROM `asset01`.`em-7`.`voltage`, `功率` FROM `asset01`.`em-7`.`power`, `相位` FROM `asset01`.`em-7`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001007', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-8` (`电流` FROM `asset01`.`em-8`.`current`, `电压` FROM `asset01`.`em-8`.`voltage`, `功率` FROM `asset01`.`em-8`.`power`, `相位` FROM `asset01`.`em-8`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001008', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-9` (`电流` FROM `asset01`.`em-9`.`current`, `电压` FROM `asset01`.`em-9`.`voltage`, `功率` FROM `asset01`.`em-9`.`power`, `相位` FROM `asset01`.`em-9`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001009', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-10` (`电流` FROM `asset01`.`em-10`.`current`, `电压` FROM `asset01`.`em-10`.`voltage`, `功率` FROM `asset01`.`em-10`.`power`, `相位` FROM `asset01`.`em-10`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', 1, 2, 'em202502200010010', '公共事业.北京.朝阳.三元桥街道');",
"CREATE VTABLE `vt_em-11` (`电流` FROM `asset01`.`em-11`.`current`, `电压` FROM `asset01`.`em-11`.`voltage`, `功率` FROM `asset01`.`em-11`.`power`, `相位` FROM `asset01`.`em-11`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 11, 'em202502200010011', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-12` (`电流` FROM `asset01`.`em-12`.`current`, `电压` FROM `asset01`.`em-12`.`voltage`, `功率` FROM `asset01`.`em-12`.`power`, `相位` FROM `asset01`.`em-12`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 12, 'em202502200010012', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-13` (`电流` FROM `asset01`.`em-13`.`current`, `电压` FROM `asset01`.`em-13`.`voltage`, `功率` FROM `asset01`.`em-13`.`power`, `相位` FROM `asset01`.`em-13`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 13, 'em202502200010013', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-14` (`电流` FROM `asset01`.`em-14`.`current`, `电压` FROM `asset01`.`em-14`.`voltage`, `功率` FROM `asset01`.`em-14`.`power`, `相位` FROM `asset01`.`em-14`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 14, 'em202502200010014', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-15` (`电流` FROM `asset01`.`em-15`.`current`, `电压` FROM `asset01`.`em-15`.`voltage`, `功率` FROM `asset01`.`em-15`.`power`, `相位` FROM `asset01`.`em-15`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 1, 15, 'em202502200010015', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_wm-1` (`流量` FROM `asset01`.`wm-1`.`rate`, `水压` FROM `asset01`.`wm-1`.`pressure`) USING `vst_智能水表_1` (`_ignore_path`, `地址`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', '公共事业.北京.朝阳.三元桥街道');"
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} vtables successfully.")
#
# 2. create streams
#
def createStreams(self):
sqls = [
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream4` INTERVAL(10m) SLIDING(10m) FROM `tdasset`.`vt_em-4` NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_OPEN|WINDOW_CLOSE) INTO `tdasset`.`result_stream4` AS SELECT _twstart+0s as output_timestamp,COUNT(ts) AS cnt, AVG(`电压`) AS `平均电压` , SUM(`功率`) AS `功率和` FROM tdasset.`vt_em-4` WHERE ts >=_twstart AND ts <=_twend "
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} streams successfully.")
#
# 3. wait stream ready
#
def checkStreamStatus(self):
print("wait stream ready ...")
tdStream.checkStreamStatus()
tdLog.info("stream status checked successfully.")
#
# 4. write trigger data
#
def writeTriggerData(self):
# stream4
self.trigger_stream4()
#
# 5. wait stream processing
#
def waitStreamProcessing(self):
tdLog.info("wait for check result sleep 5s ...")
time.sleep(5)
#
# 6. verify results
#
def verifyResults(self):
self.verify_stream4()
# --------------------- stream trigger ----------------------
#
# stream4 trigger
#
def trigger_stream4(self):
ts = 1752574200000
table = "asset01.`em-4`"
step = 1 * 60 * 1000 # 1 minute
count = 120
cols = "ts,voltage,power"
vals = "400,200"
tdSql.insertFixedVal(table, ts, step, count, cols, vals)
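`tdSql.insertFixedVal` is a test-framework helper not shown in this diff. Assuming it writes `count` rows spaced `step` ms apart with the fixed column values in `vals`, its expansion can be sketched as follows (the function name and generated SQL format here are assumptions, not the framework's actual implementation):

```python
def insert_fixed_val_sqls(table, ts, step, count, cols, vals):
    """Hypothetical expansion: one INSERT per row, timestamps step ms apart."""
    return [f"INSERT INTO {table}({cols}) VALUES({ts + i * step},{vals});"
            for i in range(count)]

sqls = insert_fixed_val_sqls("asset01.`em-4`", 1752574200000, 60_000, 3,
                             "ts,voltage,power", "400,200")
for s in sqls:
    print(s)
# first line: INSERT INTO asset01.`em-4`(ts,voltage,power) VALUES(1752574200000,400,200);
```

With `count = 120` this matches the data volume that `verify_stream4` later checks: 120 one-minute rows of voltage 400 and power 200 starting at 1752574200000.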
#
# --------------------- verify ----------------------
#
#
# verify stream4
#
def verify_stream4(self, tables=None):
self.check_vt_ts()
#
# --------------------- find other bugs ----------------------
#
# virtual table ts is null
def check_vt_ts(self):
# vt_em-4
tdSql.checkResultsByFunc (
sql = "SELECT * FROM tdasset.`vt_em-4` WHERE `电流` is null;",
func = lambda: tdSql.getRows() == 120
and tdSql.compareData(0, 0, 1752574200000)
and tdSql.compareData(0, 2, 400)
and tdSql.compareData(0, 3, 200)
)


@ -1,191 +0,0 @@
import time
import math
import random
from new_test_framework.utils import tdLog, tdSql, tdStream, etool
from datetime import datetime
from datetime import date
class Test_IDMP_Meters:
def setup_class(cls):
tdLog.debug(f"start to execute {__file__}")
def test_stream_usecase_em(self):
"""Nevados
Refer: https://taosdata.feishu.cn/wiki/Zkb2wNkHDihARVkGHYEcbNhmnxb
Catalog:
- Streams:UseCases
Since: v3.3.7.0
Labels: common,ci
Jira: https://jira.taosdata.com:18080/browse/TD-36363
History:
- 2025-7-10 Alex Duan Created
"""
#
# main test
#
# env
tdStream.createSnode()
# prepare data
self.prepare()
# create vtables
self.createVtables()
# create streams
self.createStreams()
# check stream status
self.checkStreamStatus()
# insert trigger data
self.writeTriggerData()
# wait stream processing
self.waitStreamProcessing()
# verify results
self.verifyResults()
#
# --------------------- main flow frame ----------------------
#
#
# prepare data
#
def prepare(self):
# name
self.db = "asset01"
self.vdb = "tdasset"
self.stb = "electricity_meters"
self.start = 1752563000000
self.start_current = 10
self.start_voltage = 260
self.start2 = 1752574200000
# import data
etool.taosdump(f"-i cases/13-StreamProcessing/20-UseCase/meters_data/data/")
tdLog.info(f"import data to db={self.db} successfully.")
#
# 1. create vtables
#
def createVtables(self):
sqls = [
"create database tdasset;",
"use tdasset;",
"CREATE STABLE `vst_智能电表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `电流` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `电压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium', `功率` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `相位` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `单元` TINYINT, `楼层` TINYINT, `设备ID` VARCHAR(20), `path1` VARCHAR(512)) SMA(`ts`,`电流`) VIRTUAL 1;",
"CREATE STABLE `vst_智能水表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `流量` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `水压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `path1` VARCHAR(512)) SMA(`ts`,`流量`) VIRTUAL 1;",
"CREATE VTABLE `vt_em-1` (`电流` FROM `asset01`.`em-1`.`current`, `电压` FROM `asset01`.`em-1`.`voltage`, `功率` FROM `asset01`.`em-1`.`power`, `相位` FROM `asset01`.`em-1`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010001', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-2` (`电流` FROM `asset01`.`em-2`.`current`, `电压` FROM `asset01`.`em-2`.`voltage`, `功率` FROM `asset01`.`em-2`.`power`, `相位` FROM `asset01`.`em-2`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010002', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-3` (`电流` FROM `asset01`.`em-3`.`current`, `电压` FROM `asset01`.`em-3`.`voltage`, `功率` FROM `asset01`.`em-3`.`power`, `相位` FROM `asset01`.`em-3`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010003', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-4` (`电流` FROM `asset01`.`em-4`.`current`, `电压` FROM `asset01`.`em-4`.`voltage`, `功率` FROM `asset01`.`em-4`.`power`, `相位` FROM `asset01`.`em-4`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010004', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-5` (`电流` FROM `asset01`.`em-5`.`current`, `电压` FROM `asset01`.`em-5`.`voltage`, `功率` FROM `asset01`.`em-5`.`power`, `相位` FROM `asset01`.`em-5`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010005', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-6` (`电流` FROM `asset01`.`em-6`.`current`, `电压` FROM `asset01`.`em-6`.`voltage`, `功率` FROM `asset01`.`em-6`.`power`, `相位` FROM `asset01`.`em-6`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001006', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-7` (`电流` FROM `asset01`.`em-7`.`current`, `电压` FROM `asset01`.`em-7`.`voltage`, `功率` FROM `asset01`.`em-7`.`power`, `相位` FROM `asset01`.`em-7`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001007', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-8` (`电流` FROM `asset01`.`em-8`.`current`, `电压` FROM `asset01`.`em-8`.`voltage`, `功率` FROM `asset01`.`em-8`.`power`, `相位` FROM `asset01`.`em-8`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001008', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-9` (`电流` FROM `asset01`.`em-9`.`current`, `电压` FROM `asset01`.`em-9`.`voltage`, `功率` FROM `asset01`.`em-9`.`power`, `相位` FROM `asset01`.`em-9`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001009', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-10` (`电流` FROM `asset01`.`em-10`.`current`, `电压` FROM `asset01`.`em-10`.`voltage`, `功率` FROM `asset01`.`em-10`.`power`, `相位` FROM `asset01`.`em-10`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', 1, 2, 'em202502200010010', '公共事业.北京.朝阳.三元桥街道');",
"CREATE VTABLE `vt_em-11` (`电流` FROM `asset01`.`em-11`.`current`, `电压` FROM `asset01`.`em-11`.`voltage`, `功率` FROM `asset01`.`em-11`.`power`, `相位` FROM `asset01`.`em-11`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 11, 'em202502200010011', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-12` (`电流` FROM `asset01`.`em-12`.`current`, `电压` FROM `asset01`.`em-12`.`voltage`, `功率` FROM `asset01`.`em-12`.`power`, `相位` FROM `asset01`.`em-12`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 12, 'em202502200010012', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-13` (`电流` FROM `asset01`.`em-13`.`current`, `电压` FROM `asset01`.`em-13`.`voltage`, `功率` FROM `asset01`.`em-13`.`power`, `相位` FROM `asset01`.`em-13`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 13, 'em202502200010013', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-14` (`电流` FROM `asset01`.`em-14`.`current`, `电压` FROM `asset01`.`em-14`.`voltage`, `功率` FROM `asset01`.`em-14`.`power`, `相位` FROM `asset01`.`em-14`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 14, 'em202502200010014', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-15` (`电流` FROM `asset01`.`em-15`.`current`, `电压` FROM `asset01`.`em-15`.`voltage`, `功率` FROM `asset01`.`em-15`.`power`, `相位` FROM `asset01`.`em-15`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 1, 15, 'em202502200010015', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_wm-1` (`流量` FROM `asset01`.`wm-1`.`rate`, `水压` FROM `asset01`.`wm-1`.`pressure`) USING `vst_智能水表_1` (`_ignore_path`, `地址`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', '公共事业.北京.朝阳.三元桥街道');"
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} vtables successfully.")
#
# 2. create streams
#
def createStreams(self):
sqls = [
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream4` INTERVAL(1a) SLIDING(1a) FROM `tdasset`.`vt_em-4` NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_OPEN|WINDOW_CLOSE) INTO `tdasset`.`result_stream4` AS SELECT _twstart+0s as output_timestamp,COUNT(ts) AS cnt, AVG(`电压`) AS `平均电压` , SUM(`功率`) AS `功率和` FROM tdasset.`vt_em-4` WHERE ts >=_twstart AND ts <=_twend ",
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream4_sub8` INTERVAL(1a) SLIDING(1a) FROM `tdasset`.`vt_em-4` stream_options(IGNORE_DISORDER|LOW_LATENCY_CALC|IGNORE_NODATA_TRIGGER) NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_OPEN|WINDOW_CLOSE) INTO `tdasset`.`result_stream4_sub8` AS SELECT _twstart as output_timestamp,COUNT(ts) AS cnt, AVG(`电压`) AS `平均电压` , SUM(`功率`) AS `功率和` FROM tdasset.`vt_em-4` WHERE ts >=_twstart AND ts <=_twend AND ts >= 1752574200000",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} streams successfully.")
#
# 3. wait stream ready
#
def checkStreamStatus(self):
print("wait stream ready ...")
tdStream.checkStreamStatus()
tdLog.info(f"check stream status successfully.")
#
# 4. write trigger data
#
def writeTriggerData(self):
# stream4
self.trigger_stream4()
#
# 5. wait stream processing
#
def waitStreamProcessing(self):
tdLog.info("wait for check result sleep 5s ...")
time.sleep(5)
#
# 6. verify results
#
def verifyResults(self):
self.verify_stream4()
# --------------------- stream trigger ----------------------
#
# stream4 trigger
#
def trigger_stream4(self):
ts = self.start2
table = "asset01.`em-4`"
step = 1 * 60 * 1000 # 1 minute
count = 120
cols = "ts,voltage,power"
vals = "400,200"
tdSql.insertFixedVal(table, ts, step, count, cols, vals)
#
# --------------------- verify ----------------------
#
#
# verify stream4
#
def verify_stream4(self, tables=None):
# ***** bug5 ****
self.verify_stream4_sub8()
def verify_stream4_sub8(self):
# result_stream4_sub8
tdSql.checkResultsBySql(
sql = f"select * from {self.vdb}.`result_stream4_sub8` ",
exp_sql = f"select ts,1,voltage,power from asset01.`em-4` where ts >= 1752574200000 limit 119;"
)
tdLog.info("verify stream4_sub8 ............................. successfully.")
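The check above relies on checkResultsBySql comparing the stream's result table against an equivalent batch query. A minimal, self-contained sketch of that row-set comparison (a hypothetical helper written for illustration, not the framework's actual API) might look like:

```python
# Hypothetical helper (not the test framework's API): compare the rows a
# stream wrote into its result table against the rows of an equivalent
# batch query, the way checkResultsBySql is used above.
def rows_match(actual, expected, tol=1e-6):
    """Return True when both row sets agree cell by cell."""
    if len(actual) != len(expected):
        return False
    for row_a, row_e in zip(actual, expected):
        if len(row_a) != len(row_e):
            return False
        for a, e in zip(row_a, row_e):
            if isinstance(a, float) or isinstance(e, float):
                # numeric cells: allow a small tolerance for aggregates
                if abs(a - e) > tol:
                    return False
            elif a != e:
                # timestamps / strings / ints must match exactly
                return False
    return True
```

Comparing cell by cell with a float tolerance avoids spurious failures when AVG or SUM aggregates differ only in the last bits of a double.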


@@ -1,209 +0,0 @@
import time
import math
import random
from new_test_framework.utils import tdLog, tdSql, tdStream, etool
from datetime import datetime
from datetime import date
class Test_IDMP_Meters:
def setup_class(cls):
tdLog.debug(f"start to execute {__file__}")
def test_stream_usecase_em(self):
"""Nevados
Refer: https://taosdata.feishu.cn/wiki/Zkb2wNkHDihARVkGHYEcbNhmnxb
Catalog:
- Streams:UseCases
Since: v3.3.7.0
Labels: common,ci
Jira: https://jira.taosdata.com:18080/browse/TD-36363
History:
- 2025-7-10 Alex Duan Created
"""
#
# main test
#
# env
tdStream.createSnode()
# prepare data
self.prepare()
# create vtables
self.createVtables()
# create streams
self.createStreams()
# check stream status
self.checkStreamStatus()
# insert trigger data
self.writeTriggerData()
# verify results
self.verifyResults()
'''
# restart dnode
self.restartDnode()
# write trigger data after restart
self.writeTriggerAfterRestart()
# verify results after restart
self.verifyResultsAfterRestart()
'''
#
# --------------------- main flow frame ----------------------
#
#
# prepare data
#
def prepare(self):
# name
self.db = "asset01"
self.vdb = "tdasset"
self.stb = "electricity_meters"
self.start = 1752563000000
self.start_current = 10
self.start_voltage = 260
self.start2 = 1752574200000
# import data
etool.taosdump(f"-i cases/13-StreamProcessing/20-UseCase/meters_data/data/")
tdLog.info(f"import data to db={self.db} successfully.")
#
# 1. create vtables
#
def createVtables(self):
sqls = [
"create database tdasset;",
"use tdasset;",
"CREATE STABLE `vst_智能电表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `电流` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `电压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium', `功率` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `相位` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `单元` TINYINT, `楼层` TINYINT, `设备ID` VARCHAR(20), `path1` VARCHAR(512)) SMA(`ts`,`电流`) VIRTUAL 1;",
"CREATE STABLE `vst_智能水表_1` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `流量` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `水压` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `地址` VARCHAR(50), `path1` VARCHAR(512)) SMA(`ts`,`流量`) VIRTUAL 1;",
"CREATE VTABLE `vt_em-1` (`电流` FROM `asset01`.`em-1`.`current`, `电压` FROM `asset01`.`em-1`.`voltage`, `功率` FROM `asset01`.`em-1`.`power`, `相位` FROM `asset01`.`em-1`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010001', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-2` (`电流` FROM `asset01`.`em-2`.`current`, `电压` FROM `asset01`.`em-2`.`voltage`, `功率` FROM `asset01`.`em-2`.`power`, `相位` FROM `asset01`.`em-2`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010002', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-3` (`电流` FROM `asset01`.`em-3`.`current`, `电压` FROM `asset01`.`em-3`.`voltage`, `功率` FROM `asset01`.`em-3`.`power`, `相位` FROM `asset01`.`em-3`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 1, 2, 'em202502200010003', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-4` (`电流` FROM `asset01`.`em-4`.`current`, `电压` FROM `asset01`.`em-4`.`voltage`, `功率` FROM `asset01`.`em-4`.`power`, `相位` FROM `asset01`.`em-4`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010004', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-5` (`电流` FROM `asset01`.`em-5`.`current`, `电压` FROM `asset01`.`em-5`.`voltage`, `功率` FROM `asset01`.`em-5`.`power`, `相位` FROM `asset01`.`em-5`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.海淀.西三旗街道', 2, 2, 'em202502200010005', '公共事业.北京.海淀.西三旗街道');",
"CREATE VTABLE `vt_em-6` (`电流` FROM `asset01`.`em-6`.`current`, `电压` FROM `asset01`.`em-6`.`voltage`, `功率` FROM `asset01`.`em-6`.`power`, `相位` FROM `asset01`.`em-6`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001006', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-7` (`电流` FROM `asset01`.`em-7`.`current`, `电压` FROM `asset01`.`em-7`.`voltage`, `功率` FROM `asset01`.`em-7`.`power`, `相位` FROM `asset01`.`em-7`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001007', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-8` (`电流` FROM `asset01`.`em-8`.`current`, `电压` FROM `asset01`.`em-8`.`voltage`, `功率` FROM `asset01`.`em-8`.`power`, `相位` FROM `asset01`.`em-8`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001008', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-9` (`电流` FROM `asset01`.`em-9`.`current`, `电压` FROM `asset01`.`em-9`.`voltage`, `功率` FROM `asset01`.`em-9`.`power`, `相位` FROM `asset01`.`em-9`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.国贸街道', 1, 2, 'em20250220001009', '公共事业.北京.朝阳.国贸街道');",
"CREATE VTABLE `vt_em-10` (`电流` FROM `asset01`.`em-10`.`current`, `电压` FROM `asset01`.`em-10`.`voltage`, `功率` FROM `asset01`.`em-10`.`power`, `相位` FROM `asset01`.`em-10`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', 1, 2, 'em202502200010010', '公共事业.北京.朝阳.三元桥街道');",
"CREATE VTABLE `vt_em-11` (`电流` FROM `asset01`.`em-11`.`current`, `电压` FROM `asset01`.`em-11`.`voltage`, `功率` FROM `asset01`.`em-11`.`power`, `相位` FROM `asset01`.`em-11`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 11, 'em202502200010011', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-12` (`电流` FROM `asset01`.`em-12`.`current`, `电压` FROM `asset01`.`em-12`.`voltage`, `功率` FROM `asset01`.`em-12`.`power`, `相位` FROM `asset01`.`em-12`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 12, 'em202502200010012', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-13` (`电流` FROM `asset01`.`em-13`.`current`, `电压` FROM `asset01`.`em-13`.`voltage`, `功率` FROM `asset01`.`em-13`.`power`, `相位` FROM `asset01`.`em-13`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 13, 'em202502200010013', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-14` (`电流` FROM `asset01`.`em-14`.`current`, `电压` FROM `asset01`.`em-14`.`voltage`, `功率` FROM `asset01`.`em-14`.`power`, `相位` FROM `asset01`.`em-14`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 11, 14, 'em202502200010014', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_em-15` (`电流` FROM `asset01`.`em-15`.`current`, `电压` FROM `asset01`.`em-15`.`voltage`, `功率` FROM `asset01`.`em-15`.`power`, `相位` FROM `asset01`.`em-15`.`phase`) USING `vst_智能电表_1` (`_ignore_path`, `地址`, `单元`, `楼层`, `设备ID`, `path1`) TAGS (NULL, '北京.朝阳.望京街道', 1, 15, 'em202502200010015', '公共事业.北京.朝阳.望京街道');",
"CREATE VTABLE `vt_wm-1` (`流量` FROM `asset01`.`wm-1`.`rate`, `水压` FROM `asset01`.`wm-1`.`pressure`) USING `vst_智能水表_1` (`_ignore_path`, `地址`, `path1`) TAGS (NULL, '北京.朝阳.三元桥街道', '公共事业.北京.朝阳.三元桥街道');"
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} vtables successfully.")
#
# 2. create streams
#
def createStreams(self):
sqls = [
# stream8
"CREATE STREAM IF NOT EXISTS `tdasset`.`ana_stream8` PERIOD(1s, 0s) FROM `tdasset`.`vt_em-8` STREAM_OPTIONS(IGNORE_DISORDER) NOTIFY('ws://idmp:6042/eventReceive') ON(WINDOW_OPEN|WINDOW_CLOSE) INTO `tdasset`.`result_stream8` AS SELECT CAST(_tlocaltime/1000000 as timestamp), COUNT(ts) AS cnt, AVG(`电压`) AS `平均电压`, SUM(`功率`) AS `功率和` FROM %%trows",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} streams successfully.")
#
# 3. wait stream ready
#
def checkStreamStatus(self):
print("wait stream ready ...")
tdStream.checkStreamStatus()
tdLog.info(f"check stream status successfully.")
#
# 4. write trigger data
#
def writeTriggerData(self):
# stream8
self.trigger_stream8()
#
# 5. verify results
#
def verifyResults(self):
self.verify_stream8()
# --------------------- stream trigger ----------------------
#
# stream8 trigger
#
def trigger_stream8(self):
ts = self.start2
table = "asset01.`em-8`"
cols = "ts,current,voltage,power"
sleepS = 0.2 # 0.2 seconds
# write to windows 1
count = 20
fixedVals = "100, 200, 300"
tdSql.insertNow(table, sleepS, count, cols, fixedVals)
#
# --------------------- verify ----------------------
#
#
# verify stream8
#
def verify_stream8(self):
# sleep
time.sleep(5)
# result_stream8
result_sql = f"select * from {self.vdb}.`result_stream8` "
tdSql.query(result_sql)
count = tdSql.getRows()
found = False
for i in range(count):
# row
if tdSql.getData(i, 1) == 20 :
found = True
if found:
tdSql.checkData(i, 1, 20) # cnt
tdSql.checkData(i, 2, 200) # avg(voltage)
tdSql.checkData(i, 3, 6000) # sum(power)
if not found:
tdLog.exit("stream8 expected data not found.")
tdLog.info(f"verify stream8 ................................. successfully.")
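verify_stream8 above sleeps a fixed 5 seconds and then scans the result rows once. A common alternative is to poll until the expected row appears or a deadline passes; the sketch below (a hypothetical helper, assuming nothing beyond the standard library) shows that pattern:

```python
import time

# Hypothetical helper (not part of the framework): instead of a fixed
# time.sleep(5) before scanning result_stream8, re-run the query until
# the expected row shows up or a deadline passes, which makes slow CI
# runs less flaky.
def wait_until(fetch_rows, predicate, timeout=10.0, interval=0.5):
    """Re-run fetch_rows() until predicate(rows) is True; raise on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        rows = fetch_rows()
        if predicate(rows):
            return rows
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)
```

The predicate can then express the same condition as the loop above, e.g. `lambda rows: any(r[1] == 20 for r in rows)` for the cnt==20 window.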


@@ -115,7 +115,7 @@ class Test_IDMP_Vehicle:
"CREATE VTABLE `vt_7` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_007', '京ZCR392', 2, 'zd', '6560472044', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_8` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_008', '京ZD43R1', 2, 'zd', '3491377379', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_9` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_009', '京ZD62R2', 2, 'zd', '8265223624', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_10` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_010', '京ZD66G4', 2, 'zd', '3689589229', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_501` (`经度` FROM `idmp_sample_vehicle`.`vehicle_150100_001`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_150100_001`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_150100_001`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_150100_001`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_150100_001`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_150100_001`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_150100_001`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.呼和浩特车队', '110100_011', '蒙Z0C3N7', 2, 'zd', '3689589230', '车辆场景.XX物流公司.华北分公司.呼和浩特车队')",
]
tdSql.executes(sqls)
@@ -128,29 +128,33 @@ class Test_IDMP_Vehicle:
def createStreams(self):
sqls = [
# stream1
"create stream if not exists `idmp`.`ana_stream1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_1` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from idmp.`vt_1` where ts >= _twstart and ts <_twend",
"create stream if not exists `idmp`.`ana_stream1_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_1` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream1_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from idmp.`vt_1` where ts >= _twstart and ts <_twend",
# stream_stb1
"create stream if not exists `idmp`.`veh_stream_stb1` interval(5m) sliding(5m) from `idmp`.`vst_车辆_652220` partition by `车辆资产模型`,`车辆ID` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream_stb1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度`, sum(`里程`) as `里程和` from %%trows",
"create stream if not exists `idmp`.`veh_stream_stb1_sub1` interval(5m) sliding(5m) from `idmp`.`vst_车辆_652220` partition by `车辆资产模型`,`车辆ID` stream_options(IGNORE_NODATA_TRIGGER|FILL_HISTORY_FIRST) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream_stb1_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度`, sum(`里程`) as `里程和` from %%trows",
# stream1
"create stream if not exists `idmp`.`veh_stream1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_1` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from idmp.`vt_1` where ts >= _twstart and ts <_twend",
"create stream if not exists `idmp`.`veh_stream1_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_1` stream_options(delete_recalc) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream1_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from idmp.`vt_1` where ts >= _twstart and ts <_twend",
# stream2
"create stream if not exists `idmp`.`ana_stream2` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_2` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream2` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`ana_stream2_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_2` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream2_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream2` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_2` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream2` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream2_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_2` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream2_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream3
"create stream if not exists `idmp`.`ana_stream3` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_3` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream3` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`ana_stream3_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_3` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream3_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream3` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_3` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream3` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream3_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_3` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream3_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream4
"create stream if not exists `idmp`.`ana_stream4` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_4` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream4` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`ana_stream4_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_4` stream_options(DELETE_RECALC) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream4_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream4` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_4` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream4` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream4_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_4` stream_options(DELETE_RECALC) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream4_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream5
"create stream if not exists `idmp`.`ana_stream5` interval(5m) sliding(5m) from `idmp`.`vt_5` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream5` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`ana_stream5_sub1` interval(5m) sliding(5m) from `idmp`.`vt_5` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream5_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream5` interval(5m) sliding(5m) from `idmp`.`vt_5` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream5` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream5_sub1` interval(5m) sliding(5m) from `idmp`.`vt_5` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream5_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream6
"create stream if not exists `idmp`.`ana_stream6` interval(10m) sliding(5m) from `idmp`.`vt_6` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream6` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`ana_stream6_sub1` interval(10m) sliding(5m) from `idmp`.`vt_6` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream6_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream6` interval(10m) sliding(5m) from `idmp`.`vt_6` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream6` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream6_sub1` interval(10m) sliding(5m) from `idmp`.`vt_6` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream6_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream7
"create stream if not exists `idmp`.`ana_stream7` interval(5m) sliding(10m) from `idmp`.`vt_7` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream7` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`ana_stream7_sub1` interval(5m) sliding(10m) from `idmp`.`vt_7` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream7_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream8
"create stream if not exists `idmp`.`ana_stream8` interval(5m) sliding(5m) from `idmp`.`vst_车辆_652220` partition by `车辆资产模型`,`车辆ID` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream8` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度`, sum(`里程`) as `里程和` from %%trows",
"create stream if not exists `idmp`.`veh_stream7` interval(5m) sliding(10m) from `idmp`.`vt_7` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream7` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`veh_stream7_sub1` interval(5m) sliding(10m) from `idmp`.`vt_7` stream_options(IGNORE_NODATA_TRIGGER) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream7_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
# stream8 watermark
"create stream if not exists `idmp`.`veh_stream8` interval(5m) sliding(5m) from `idmp`.`vt_8` stream_options(WATERMARK(10m)) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream8` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
]
tdSql.executes(sqls)
@@ -168,6 +172,8 @@ class Test_IDMP_Vehicle:
# 4. write trigger data
#
def writeTriggerData(self):
# stream_stb1
self.trigger_stream_stb1()
# stream1
self.trigger_stream1()
# stream2
@@ -185,12 +191,11 @@ class Test_IDMP_Vehicle:
# stream8
self.trigger_stream8()
#
# 5. verify results
#
def verifyResults(self):
self.verify_stream_stb1()
self.verify_stream1()
self.verify_stream2()
self.verify_stream3()
@@ -200,7 +205,7 @@ class Test_IDMP_Vehicle:
self.verify_stream5()
self.verify_stream6()
self.verify_stream7()
self.verify_stream8()
#self.verify_stream8()
#
@@ -246,10 +251,34 @@ class Test_IDMP_Vehicle:
# 10. verify results after restart
#
def verifyResultsAfterRestart(self):
pass
pass
def printSql(self, label, sql):
print(label + sql)
rows = tdSql.getResult(sql)
i = 0
for row in rows:
print(f"i={i} {row}")
i += 1
# --------------------- stream trigger ----------------------
#
# stream_stb1 trigger
#
def trigger_stream_stb1(self):
table = f"{self.db}.`vehicle_150100_001`"
cols = "ts,speed,mileage"
# data1
ts = self.start
vals = "150,300"
count = 11
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
#
# stream1 trigger
#
@@ -280,6 +309,8 @@ class Test_IDMP_Vehicle:
count = 2
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
sql = f"select {cols} from {table}"
self.printSql("first:", sql)
# delete win1 2 rows
tdSql.deleteRows(table, f"ts >= {self.start + 1 * self.step} and ts <= {self.start + 2 * self.step}")
@@ -316,6 +347,8 @@ class Test_IDMP_Vehicle:
count = 1
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
self.printSql("second: ", sql)
#
# stream2 trigger
@@ -635,6 +668,30 @@ class Test_IDMP_Vehicle:
# --------------------- verify ----------------------
#
#
# verify stream_stb1
#
def verify_stream_stb1(self):
# check data
result_sql = f"select * from {self.vdb}.`result_stream_stb1` where `车辆ID`= '110100_011'"
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 2
# row1
and tdSql.compareData(0, 0, self.start) # ts
and tdSql.compareData(0, 1, 5) # cnt
and tdSql.compareData(0, 2, 150) # avg(speed)
and tdSql.compareData(0, 3, 1500) # sum
# row2
and tdSql.compareData(1, 0, self.start + 5 * self.step) # ts
and tdSql.compareData(1, 1, 5) # cnt
and tdSql.compareData(1, 2, 150) # avg(speed)
and tdSql.compareData(1, 3, 1500) # sum
)
tdLog.info(f"verify stream_stb1 ............................. successfully.")
#
# verify stream1
#
@@ -650,7 +707,7 @@ class Test_IDMP_Vehicle:
)
# sub
#self.verify_stream1_sub1()
self.verify_stream1_sub1()
tdLog.info("verify stream1 .................................. successfully.")
# stream1 sub1
@@ -659,10 +716,10 @@ class Test_IDMP_Vehicle:
result_sql = f"select * from {self.vdb}.`result_stream1_sub1` "
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 3
and tdSql.compareData(1, 0, self.start + (5 + 2 + 1) * self.step) # ts
and tdSql.compareData(1, 1, 9) # cnt
and tdSql.compareData(1, 2, 140) # avg(speed)
func = lambda: tdSql.checkRows(1, show=True)
and tdSql.compareData(0, 0, self.start + (5 + 2 + 1 + 3 + 1) * self.step) # ts
and tdSql.compareData(0, 1, 9) # cnt
and tdSql.compareData(0, 2, 140) # avg(speed)
)
tdLog.info("verify stream1 sub1 ............................. successfully.")
@@ -682,8 +739,7 @@ class Test_IDMP_Vehicle:
)
# sub
# ***** bug3 *****
# self.verify_stream2_sub1()
self.verify_stream2_sub1()
tdLog.info("verify stream2 .................................. successfully.")
@@ -694,8 +750,8 @@ class Test_IDMP_Vehicle:
result_sql = f"select * from {self.vdb}.`result_stream2_sub1` "
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 1
and tdSql.compareData(0, 0, self.start + 10 * self.step) # ts
func = lambda: tdSql.checkRows(2, show=True)
and tdSql.compareData(0, 0, self.start) # ts
and tdSql.compareData(0, 1, 6) # cnt
)
tdLog.info("verify stream2 sub1 ............................. successfully.")
@@ -946,21 +1002,4 @@ class Test_IDMP_Vehicle:
# verify stream8
#
def verify_stream8(self):
# check data
result_sql = f"select * from {self.vdb}.`result_stream8` where `车辆ID`= '110100_008'"
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 2
# row1
and tdSql.compareData(0, 0, self.start) # ts
and tdSql.compareData(0, 1, 5) # cnt
and tdSql.compareData(0, 2, 150) # avg(speed)
and tdSql.compareData(0, 3, 1500) # sum
# row2
and tdSql.compareData(1, 0, self.start + 5 * self.step) # ts
and tdSql.compareData(1, 1, 5) # cnt
and tdSql.compareData(1, 2, 150) # avg(speed)
and tdSql.compareData(1, 3, 1500) # sum
)
tdLog.info(f"verify stream8 ................................. successfully.")


@@ -1,260 +0,0 @@
import time
import math
import random
from new_test_framework.utils import tdLog, tdSql, tdStream, etool
from datetime import datetime
from datetime import date
class Test_IDMP_Vehicle:
def setup_class(cls):
tdLog.debug(f"start to execute {__file__}")
def test_stream_usecase_em(self):
"""Nevados
Refer: https://taosdata.feishu.cn/wiki/Zkb2wNkHDihARVkGHYEcbNhmnxb
Catalog:
- Streams:UseCases
Since: v3.3.7.0
Labels: common,ci
Jira: https://jira.taosdata.com:18080/browse/TD-36781
History:
- 2025-7-18 Alex Duan Created
"""
#
# main test
#
# env
tdStream.createSnode()
# prepare data
self.prepare()
# create vtables
self.createVtables()
# create streams
self.createStreams()
# check stream status
self.checkStreamStatus()
# insert trigger data
self.writeTriggerData()
# verify results
self.verifyResults()
#
# --------------------- main flow frame ----------------------
#
#
# prepare data
#
def prepare(self):
# name
self.db = "idmp_sample_vehicle"
self.vdb = "idmp"
self.stb = "vehicles"
self.start = 1752900000000
self.start_current = 10
self.start_voltage = 260
# import data
etool.taosdump(f"-i cases/13-StreamProcessing/20-UseCase/vehicle_data/")
tdLog.info(f"import data to db={self.db}. successfully.")
#
# 1. create vtables
#
def createVtables(self):
sqls = [
f"create database {self.vdb};",
f"use {self.vdb};",
"CREATE STABLE `vst_车辆_652220` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `经度` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `纬度` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `高程` SMALLINT ENCODE 'simple8b' COMPRESS 'zlib' LEVEL 'medium', `速度` SMALLINT ENCODE 'simple8b' COMPRESS 'zlib' LEVEL 'medium', `方向` SMALLINT ENCODE 'simple8b' COMPRESS 'zlib' LEVEL 'medium', `报警标志` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium', `里程` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `车辆资产模型` VARCHAR(128), `车辆ID` VARCHAR(32), `车牌号` VARCHAR(17), `车牌颜色` TINYINT, `终端制造商` VARCHAR(11), `终端ID` VARCHAR(15), `path2` VARCHAR(512)) SMA(`ts`,`经度`) VIRTUAL 1",
"CREATE VTABLE `vt_京Z1NW34_624364` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_001', '京Z1NW34', 2, 'zd', '2551765954', '车辆场景.XX物流公司.华北分公司.北京车队')",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls) - 2} vtable successfully.")
#
# 2. create streams
#
def createStreams(self):
sqls = [
"create stream if not exists `idmp`.`ana_stream1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_京Z1NW34_624364` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from idmp.`vt_京Z1NW34_624364` where ts >= _twstart and ts <_twend",
"create stream if not exists `idmp`.`ana_stream1_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_京Z1NW34_624364` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream1_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from idmp.`vt_京Z1NW34_624364` where ts >= _twstart and ts <_twend",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} streams successfully.")
#
# 3. wait stream ready
#
def checkStreamStatus(self):
print("wait stream ready ...")
tdStream.checkStreamStatus()
tdLog.info(f"check stream status successfully.")
#
# 4. write trigger data
#
def writeTriggerData(self):
# stream1
self.trigger_stream1()
#
# 5. verify results
#
def verifyResults(self):
self.verify_stream1()
# --------------------- stream trigger ----------------------
#
# stream1 trigger
#
def trigger_stream1(self):
ts = self.start
table = f"{self.db}.`vehicle_110100_001`"
step = 1 * 60 * 1000 # 1 minute
cols = "ts,speed"
# win1 1~5
vals = "120"
count = 5
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
# null
count = 2
vals = "null"
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
# end
vals = "60"
count = 1
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
# win3 50 ~ 51 end-windows
ts += 50 * step
vals = "10"
count = 2
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
''' ***** bug1 *****
# disorder win2 10~15
win2 = self.start + 10 * step
vals = "60"
count = 2
ts = tdSql.insertFixedVal(table, win2, step, count, cols, vals)
'''
'''
win2 = self.start + 10 * step
vals = "60"
count = 1
ts = tdSql.insertFixedVal(table, win2, step, count, cols, vals)
# disorder win2 20~26
win2 = self.start + 20 * step
vals = "150"
count = 6
ts = tdSql.insertFixedVal(table, win2, step, count, cols, vals)
'''
# delete win1 2 rows
tdSql.deleteRows(table, f"ts >= {self.start + 1 * step} and ts <= {self.start + 2 * step}")
# disorder
ts = self.start + (5 + 2 + 1) * step
vals = "130"
count = 3
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
# null
count = 10
vals = "null"
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
# null changed 65
ts = self.start + (5 + 2 + 1 + 3) * step
count = 1
vals = "65"
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
# null changed 140
count = 5
vals = "140"
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
# 130 change to null
ts = self.start
vals = "null"
count = 1
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
# trigger disorder event
ts += 50 * step
vals = "9"
count = 1
ts = tdSql.insertFixedVal(table, ts, step, count, cols, vals)
#
# --------------------- verify ----------------------
#
#
# verify stream1
#
def verify_stream1(self):
# check
result_sql = f"select * from {self.vdb}.`result_stream1` "
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 1
and tdSql.compareData(0, 0, self.start) # ts
and tdSql.compareData(0, 1, 5) # cnt
and tdSql.compareData(0, 2, 120) # avg(speed)
)
# sub
self.verify_stream1_sub1()
tdLog.info("verify stream1 .................................. successfully.")
# stream1 sub1
def verify_stream1_sub1(self):
# check
result_sql = f"select * from {self.vdb}.`result_stream1_sub1` "
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 1
and tdSql.compareData(1, 0, self.start + (5 + 2 + 1) * self.step) # ts
and tdSql.compareData(1, 1, 9) # cnt
and tdSql.compareData(1, 2, 140) # avg(speed)
)
tdLog.info("verify stream1 sub1 ............................. successfully.")
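The streams in this file are built on `event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m)`. A minimal standalone sketch of that windowing rule over in-memory rows may help when reading the trigger/verify pairs above. This is illustrative only: `find_event_windows` is a hypothetical helper, not part of the test framework, and the assumed semantics (the closing row bounds the span, `true_for` keeps windows spanning at least the threshold, nulls neither open nor close a window) may differ in detail from TDengine's.

```python
def find_event_windows(rows, start_cond, end_cond, min_span_ms):
    """Return (open_ts, close_ts) pairs for event windows: a window opens on
    the first row satisfying start_cond and closes on the first later row
    satisfying end_cond; only windows spanning >= min_span_ms are kept."""
    windows, open_ts = [], None
    for ts, val in rows:
        if val is None:
            continue  # null values neither open nor close a window (assumed)
        if open_ts is None:
            if start_cond(val):
                open_ts = ts
        elif end_cond(val):
            if ts - open_ts >= min_span_ms:
                windows.append((open_ts, ts))
            open_ts = None
    return windows

step = 60_000  # 1 minute in ms, matching the trigger data above
rows  = [(i * step, 120) for i in range(5)]    # speeds above 100 open a window
rows += [(5 * step, None), (6 * step, None)]   # nulls leave the window open
rows += [(7 * step, 60)]                       # speed <= 100 closes the window
wins = find_event_windows(rows, lambda v: v > 100, lambda v: v <= 100, 5 * step)
print(wins)  # [(0, 420000)] -- the 7-minute window satisfies true_for(5m)
```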


@@ -1,216 +0,0 @@
import time
import math
import random
from new_test_framework.utils import tdLog, tdSql, tdStream, etool
from datetime import datetime
from datetime import date
class Test_IDMP_Vehicle:
def setup_class(cls):
tdLog.debug(f"start to execute {__file__}")
def test_stream_usecase_em(self):
"""Nevados
Refer: https://taosdata.feishu.cn/wiki/Zkb2wNkHDihARVkGHYEcbNhmnxb
Catalog:
- Streams:UseCases
Since: v3.3.7.0
Labels: common,ci
Jira: https://jira.taosdata.com:18080/browse/TD-36781
History:
- 2025-7-18 Alex Duan Created
"""
#
# main test
#
# env
tdStream.createSnode()
# prepare data
self.prepare()
# create vtables
self.createVtables()
# create streams
self.createStreams()
# check stream status
self.checkStreamStatus()
# insert trigger data
self.writeTriggerData()
# verify results
self.verifyResults()
#
# --------------------- main flow frame ----------------------
#
#
# prepare data
#
def prepare(self):
# name
self.db = "idmp_sample_vehicle"
self.vdb = "idmp"
self.stb = "vehicles"
self.step = 1 * 60 * 1000 # 1 minute
self.start = 1752900000000
self.start_current = 10
self.start_voltage = 260
# import data
etool.taosdump(f"-i cases/13-StreamProcessing/20-UseCase/vehicle_data/")
tdLog.info(f"import data to db={self.db}. successfully.")
#
# 1. create vtables
#
def createVtables(self):
sqls = [
f"create database {self.vdb};",
f"use {self.vdb};",
"CREATE STABLE `vst_车辆_652220` (`ts` TIMESTAMP ENCODE 'delta-i' COMPRESS 'lz4' LEVEL 'medium', `经度` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `纬度` FLOAT ENCODE 'delta-d' COMPRESS 'lz4' LEVEL 'medium', `高程` SMALLINT ENCODE 'simple8b' COMPRESS 'zlib' LEVEL 'medium', `速度` SMALLINT ENCODE 'simple8b' COMPRESS 'zlib' LEVEL 'medium', `方向` SMALLINT ENCODE 'simple8b' COMPRESS 'zlib' LEVEL 'medium', `报警标志` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium', `里程` INT ENCODE 'simple8b' COMPRESS 'lz4' LEVEL 'medium') TAGS (`_ignore_path` VARCHAR(20), `车辆资产模型` VARCHAR(128), `车辆ID` VARCHAR(32), `车牌号` VARCHAR(17), `车牌颜色` TINYINT, `终端制造商` VARCHAR(11), `终端ID` VARCHAR(15), `path2` VARCHAR(512)) SMA(`ts`,`经度`) VIRTUAL 1",
"CREATE VTABLE `vt_京Z1NW34_624364` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_001`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_001', '京Z1NW34', 2, 'zd', '2551765954', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京Z1NW84_916965` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_002`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_002`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_002`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_002`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_002`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_002`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_002`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_002', '京Z1NW84', 2, 'zd', '1819625826', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京Z2NW48_176514` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_003`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_003`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_003`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_003`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_003`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_003`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_003`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_003', '京Z2NW48', 2, 'zd', '5206002832', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京Z7A0Q7_520761` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_004`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_004`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_004`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_004`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_004`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_004`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_004`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_004', '京Z7A0Q7', 2, 'zd', '1663944041', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京Z7A2Q5_157395` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_005`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_005`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_005`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_005`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_005`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_005`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_005`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_005', '京Z7A2Q5', 2, 'zd', '7942624528', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京ZB86G7_956382` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_006`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_006', '京ZB86G7', 2, 'zd', '1960758157', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京ZCR392_837580` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_007`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_007', '京ZCR392', 2, 'zd', '6560472044', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京ZD43R1_860146` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_008`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_008', '京ZD43R1', 2, 'zd', '3491377379', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京ZD62R2_866800` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_009`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_009', '京ZD62R2', 2, 'zd', '8265223624', '车辆场景.XX物流公司.华北分公司.北京车队')",
"CREATE VTABLE `vt_京ZD66G4_940130` (`经度` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`longitude`, `纬度` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`latitude`, `高程` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`elevation`, `速度` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`speed`, `方向` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`direction`, `报警标志` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`alarm`, `里程` FROM `idmp_sample_vehicle`.`vehicle_110100_010`.`mileage`) USING `vst_车辆_652220` (`_ignore_path`, `车辆资产模型`, `车辆ID`, `车牌号`, `车牌颜色`, `终端制造商`, `终端ID`, `path2`) TAGS (NULL, 'XX物流公司.华北分公司.北京车队', '110100_010', '京ZD66G4', 2, 'zd', '3689589229', '车辆场景.XX物流公司.华北分公司.北京车队')",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls) - 2} vtable successfully.")
#
# 2. create streams
#
def createStreams(self):
sqls = [
"create stream if not exists `idmp`.`ana_stream3` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_京Z2NW48_176514` stream_options(ignore_disorder) notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream3` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
"create stream if not exists `idmp`.`ana_stream3_sub1` event_window( start with `速度` > 100 end with `速度` <= 100 ) true_for(5m) from `idmp`.`vt_京Z2NW48_176514` notify('ws://idmp:6042/eventReceive') on(window_open|window_close) into `idmp`.`result_stream3_sub1` as select _twstart+0s as output_timestamp, count(*) as cnt, avg(`速度`) as `平均速度` from %%trows",
]
tdSql.executes(sqls)
tdLog.info(f"create {len(sqls)} streams successfully.")
#
# 3. wait stream ready
#
def checkStreamStatus(self):
print("wait stream ready ...")
tdStream.checkStreamStatus()
tdLog.info(f"check stream status successfully.")
#
# 4. write trigger data
#
def writeTriggerData(self):
# stream3
self.trigger_stream3()
#
# 5. verify results
#
def verifyResults(self):
self.verify_stream3()
# --------------------- stream trigger ----------------------
#
# stream3 trigger
#
def trigger_stream3(self):
table = f"{self.db}.`vehicle_110100_003`"
cols = "ts,speed"
# write order data
# win1 order 1 ~ no -> no
ts = self.start
vals = "120"
count = 3
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
ts += 1 * self.step
vals = "60"
count = 1
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
# win2 order 10 ~ no -> trigger
ts = self.start + 10 * self.step
vals = "130"
count = 4
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
ts += 1 * self.step
vals = "65"
count = 1
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
# win3 order 20 ~ trigger -> no
ts = self.start + 20 * self.step
vals = "140"
count = 6
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
vals = "70"
count = 1
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
# win4 order 30 ~ trigger -> trigger
ts = self.start + 30 * self.step
vals = "150"
count = 8
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
vals = "75"
count = 1
ts = tdSql.insertFixedVal(table, ts, self.step, count, cols, vals)
#
# --------------------- verify ----------------------
#
#
# verify stream3
#
def verify_stream3(self):
# check
result_sql = f"select * from {self.vdb}.`result_stream3` "
tdSql.checkResultsByFunc (
sql = result_sql,
func = lambda: tdSql.getRows() == 2
# row1
and tdSql.compareData(0, 0, self.start + 20 * self.step) # ts
and tdSql.compareData(0, 1, 6 + 1) # cnt
# row2
and tdSql.compareData(1, 0, self.start + 30 * self.step) # ts
and tdSql.compareData(1, 1, 8 + 1) # cnt
)
tdLog.info("verify stream3 .................................. successfully.")


@@ -65,7 +65,7 @@ class Test_Nevados:
self.kpi_db_test(db, stb, precision, real_start_time) # 2 [ok]
self.kpi_trackers_test(db, stb, precision, real_start_time) # 4 [ok]
# self.off_target_trackers(db, stb, precision, real_start_time) # 5 [fail]
self.off_target_trackers(db, stb, precision, real_start_time) # 5 [fail]
self.kpi_zones_test(db, stb, precision, real_start_time) # 7 [ok]
self.kpi_sites_test(db, stb, precision, real_start_time) # 8 [ok]
self.trackers_motor_current_state_window(db, stb, precision, real_start_time) # 9 [ok]
@@ -634,7 +634,7 @@ class Test_Nevados:
sql = f"select * from dev.off_target_trackers_{sub_prefix}0 limit 40;"
exp_sql = (f"select _wend as window_end, site, tracker,"
f" last(reg_pitch) as off_target_pitch, last(mode) as mode"
f" from trackers where tbname = '{sub_prefix}0' and _ts >= '2025-01-01 00:00:00.000' and _ts < '2025-02-02 04:30:00.000' and abs(reg_pitch-reg_move_pitch) > 2"
f" from trackers where tbname = '{sub_prefix}0' and _ts >= '2025-02-01 00:00:00.000' and _ts < '2025-02-02 04:30:00.000' and abs(reg_pitch-reg_move_pitch) > 2"
f" partition by site,tracker interval(15m) sliding(5m) limit 40;")
tdLog.info(f"exp_sql: {exp_sql}")
tdSql.checkResultsBySql(sql=sql, exp_sql=exp_sql)
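The expected SQL above aggregates per site/tracker over `interval(15m) sliding(5m)`, i.e. overlapping windows whose starts advance by the slide. A rough sketch of how such window boundaries are generated (illustrative only; TDengine's actual window alignment to the epoch and boundary inclusivity may differ):

```python
def sliding_windows(start_ms, end_ms, interval_ms, slide_ms):
    """Yield (win_start, win_end) pairs for interval/sliding-style windows
    whose start falls in [start_ms, end_ms)."""
    t = start_ms
    while t < end_ms:
        yield (t, t + interval_ms)
        t += slide_ms

MIN = 60_000
# Windows starting within the first 30 minutes, 15m wide, sliding every 5m.
wins = list(sliding_windows(0, 30 * MIN, 15 * MIN, 5 * MIN))
print(len(wins))  # 6 window starts: 0, 5, 10, 15, 20, 25 minutes
```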


@@ -86,7 +86,7 @@ class Test_ThreeGorges:
# tdLog.info(f"select * from {self.dbname}.str_cjdl_point_data_szls_jk_test where _c0 >today()")
# if tdSql.getRows() != 1:
# raise Exception("ERROR: result is now right!")
tdSql.checkRowsLoop(7,f"select val,senid,senid_name from {self.dbname}.{self.outTbname} order by _c0;",200,1)
tdSql.checkRowsLoop(6,f"select val,senid,senid_name from {self.dbname}.{self.outTbname} order by _c0;",200,1)
self.checkResultWithResultFile()


@@ -60,7 +60,7 @@ class Test_ThreeGorges:
self.checkStreamRunning()
self.sxny_data2()
self.dataIn()
tdSql.checkRowsLoop(500,f"select val,tablename,point, ps_code, cnstationno, index_code from {self.dbname}.{self.outTbname} order by tablename;",100,1)
tdSql.checkRowsLoop(500,f"select val,tablename,point, ps_code, cnstationno, index_code from {self.dbname}.{self.outTbname} order by tablename;",200,1)
self.checkResultWithResultFile()


@@ -102,28 +102,31 @@ class TestOthersOldCaseAtonce:
f" into res_ct1 (lastts, firstts, cnt_v, sum_v, ysum_v, tws, twe)"
f" as select last_row(_c0), first(_c0), count(cint), sum(cint), sum(ctiny), _twstart, _twend from %%trows;"
)
# tdSql.execute(
# f"create stream sg0 count_window(1, 1, cint) from {self.db}.{self.stbName} partition by tbname, tint"
# f" stream_options(pre_filter(cint > 4 and cint < 7) | watermark(10s) | expired_time(60s) | max_delay(5s)"
# f" | delete_recalc | fill_history('2025-03-01 00:00:00') | force_output)"
# f" into res_stb OUTPUT_SUBTABLE(CONCAT('res_stb_', tbname)) (lastts, firstts, cnt_v, sum_v, ysum_v, tws, twe)"
# f" as select last_row(_c0), first(_c0), count(cint), sum(cint), sum(ctiny), _twstart, _twend from %%trows;"
# )
tdSql.execute(
f"create stream sg0 count_window(1, 1, cint) from {self.db}.{self.stbName} partition by tbname, tint"
f" stream_options(pre_filter(cint > 4 and cint < 7) | watermark(10s) | expired_time(60s) | max_delay(5s)"
f" | delete_recalc | fill_history('2025-03-01 00:00:00') | force_output)"
f" into res_stb OUTPUT_SUBTABLE(CONCAT('res_stb_', tbname)) (lastts, firstts, cnt_v, sum_v, ysum_v, tws, twe)"
f" as select last_row(_c0), first(_c0), count(cint), sum(cint), sum(ctiny), _twstart, _twend from %%trows;"
)
# tdSql.execute(
# f"create stream s0_v count_window(1, 1, cint) from {self.db}.vct1"
# f" stream_options(pre_filter(cint > 4 and cint < 7) | watermark(10s) | expired_time(60s) | max_delay(5s)"
# f" | delete_recalc | fill_history('2025-03-01 00:00:00') | force_output)"
# f" into res_vct1 (lastts, firstts, cnt_v, sum_v, ysum_v, tws, twe)"
# f" as select last_row(_c0), first(_c0), count(cint), sum(cint), sum(ctiny), _twstart, _twend from %%trows;"
# )
# tdSql.execute(
# f"create stream sg0_v count_window(1, 1, cint) from {self.db}.{self.vstbName} partition by tbname, tint "
# f" stream_options(pre_filter(cint > 4 and cint < 7) | watermark(10s) | expired_time(60s) | max_delay(5s)"
# f" | delete_recalc | fill_history('2025-03-01 00:00:00') | force_output)"
# f" into res_vstb OUTPUT_SUBTABLE(CONCAT('res_vstb_', tbname)) (lastts, firstts, cnt_v, sum_v, ysum_v, tws, twe)"
# f" as select last_row(_c0), first(_c0), count(cint), sum(cint), sum(ctiny), _twstart, _twend from %%trows;"
# )
tdSql.execute(
f"create stream s0_v count_window(1, 1, cint) from {self.db}.vct1"
f" stream_options(pre_filter(cint > 4 and cint < 7) | watermark(10s) | expired_time(60s) | max_delay(5s)"
f" | delete_recalc | fill_history('2025-03-01 00:00:00') | force_output)"
f" into res_vct1 (lastts, firstts, cnt_v, sum_v, ysum_v, tws, twe)"
f" as select last_row(_c0), first(_c0), count(cint), sum(cint), sum(ctiny), _twstart, _twend from %%trows;"
)
tdSql.execute(
f"create stream sg0_v count_window(1, 1, cint) from {self.db}.{self.vstbName} partition by tbname, tint "
f" stream_options(pre_filter(cint > 4 and cint < 7) | watermark(10s) | expired_time(60s) | max_delay(5s)"
f" | delete_recalc | fill_history('2025-03-01 00:00:00') | force_output)"
f" into res_vstb OUTPUT_SUBTABLE(CONCAT('res_vstb_', tbname)) (lastts, firstts, cnt_v, sum_v, ysum_v, tws, twe)"
f" as select last_row(_c0), first(_c0), count(cint), sum(cint), sum(ctiny), _twstart, _twend from %%trows;"
)
def insert1(self):
sqls = [
@@ -157,18 +160,18 @@ class TestOthersOldCaseAtonce:
sql=f'select * from information_schema.ins_tables where db_name="{self.db}" and table_name="res_ct1"',
func=lambda: tdSql.getRows() == 1,
)
# tdSql.checkResultsByFunc(
# sql=f'select * from information_schema.ins_tables where db_name="{self.db}" and table_name="res_vct1"',
# func=lambda: tdSql.getRows() == 1,
# )
# tdSql.checkResultsByFunc(
# sql=f'select * from information_schema.ins_tables where db_name="{self.db}" and table_name like "res_stb_ct%"',
# func=lambda: tdSql.getRows() == 2,
# )
# tdSql.checkResultsByFunc(
# sql=f'select * from information_schema.ins_tables where db_name="{self.db}" and table_name like "res_vstb_ct%"',
# func=lambda: tdSql.getRows() == 2,
# )
tdSql.checkResultsByFunc(
sql=f'select * from information_schema.ins_tables where db_name="{self.db}" and table_name="res_vct1"',
func=lambda: tdSql.getRows() == 1,
)
tdSql.checkResultsByFunc(
sql=f'select * from information_schema.ins_tables where db_name="{self.db}" and table_name like "res_stb_ct%"',
func=lambda: tdSql.getRows() == 2,
)
tdSql.checkResultsByFunc(
sql=f'select * from information_schema.ins_tables where db_name="{self.db}" and table_name like "res_vstb_vct%"',
func=lambda: tdSql.getRows() == 2,
)
tdSql.checkTableSchema(
dbname=self.db,
@@ -178,14 +181,46 @@ class TestOthersOldCaseAtonce:
["firstts", "TIMESTAMP", 8, ""],
["cnt_v", "BIGINT", 8, ""],
["sum_v", "BIGINT", 8, ""],
["ysum_v", "DOUBLE", 8, ""],
["ysum_v", "BIGINT", 8, ""],
["tws", "TIMESTAMP", 8, ""],
["twe", "TIMESTAMP", 8, ""],
],
)
# tdSql.checkResultsByFunc(
# sql=f"select lastts, firstts, cnt_v, sum_v, ysum_v from {self.db}.res_ct1",
# func=lambda: tdSql.getRows() == 4
# and tdSql.compareData(0, 0, "2025-03-01 00:00:25")
# and tdSql.compareData(0, 1, "2025-03-01 00:00:25")
# and tdSql.compareData(0, 2, 1)
# and tdSql.compareData(0, 3, 5)
# and tdSql.compareData(0, 4, 4)
# and tdSql.compareData(1, 0, "2025-03-01 00:00:30")
# and tdSql.compareData(1, 1, "2025-03-01 00:00:30")
# and tdSql.compareData(1, 2, 1)
# and tdSql.compareData(1, 3, 6)
# and tdSql.compareData(1, 4, 3)
# and tdSql.compareData(2, 0, "2025-04-01 00:00:25")
# and tdSql.compareData(2, 1, "2025-04-01 00:00:25")
# and tdSql.compareData(2, 2, 1)
# and tdSql.compareData(2, 3, 5)
# and tdSql.compareData(2, 4, 4)
# and tdSql.compareData(3, 0, "2025-04-01 00:00:30")
# and tdSql.compareData(3, 1, "2025-04-01 00:00:30")
# and tdSql.compareData(3, 2, 1)
# and tdSql.compareData(3, 3, 6)
# and tdSql.compareData(3, 4, 3)
# )
self.common_checkResults('res_ct1')
self.common_checkResults('res_vct1')
self.common_checkResults('res_stb_ct1')
self.common_checkResults('res_stb_ct2')
self.common_checkResults('res_vstb_vct1')
self.common_checkResults('res_vstb_vct2')
def common_checkResults(self, res_tbl_name):
tdSql.checkResultsByFunc(
sql=f"select lastts, firstts, cnt_v, sum_v, avg_v from {self.db}.res_ct1",
sql=f"select lastts, firstts, cnt_v, sum_v, ysum_v from {self.db}.{res_tbl_name}",
func=lambda: tdSql.getRows() == 4
and tdSql.compareData(0, 0, "2025-03-01 00:00:25")
and tdSql.compareData(0, 1, "2025-03-01 00:00:25")
@@ -208,5 +243,4 @@ class TestOthersOldCaseAtonce:
and tdSql.compareData(3, 3, 6)
and tdSql.compareData(3, 4, 3)
)


@@ -35,22 +35,22 @@ class TestStreamOldCaseCount:
- 2025-7-25 Simon Guan Migrated from tsim/stream/countSliding1.sim
- 2025-7-25 Simon Guan Migrated from tsim/stream/countSliding2.sim
- 2025-7-25 Simon Guan Migrated from tsim/stream/scalar.sim
"""
tdStream.createSnode()
streams = []
streams.append(self.Count0())
streams.append(self.Count02())
streams.append(self.Count03())
streams.append(self.Count21())
streams.append(self.Count22())
streams.append(self.Count31())
streams.append(self.Sliding01())
streams.append(self.Sliding02())
# streams.append(self.Count01()) pass
# streams.append(self.Count02()) pass
# streams.append(self.Count03()) pass
# streams.append(self.Count21()) pass
# streams.append(self.Count22()) pass
# streams.append(self.Count31()) pass
# streams.append(self.Sliding01()) pass
# streams.append(self.Sliding02()) pass
streams.append(self.Sliding11())
streams.append(self.Sliding21())
# streams.append(self.Sliding21())
tdStream.checkAll(streams)
class Count01(StreamCheckItem):
@@ -65,7 +65,7 @@ class TestStreamOldCaseCount:
f"create table t1(ts timestamp, a int, b int, c int, d double);"
)
tdSql.execute(
f"create stream streams1 count_window(3) from t1 stream_options(max_delay(3s)|expired_time(0)|watermark(100s)) into streamt as select _wstart as s, count(*) c1, sum(b), max(c) from t1 where ts >= _wstart and ts < _twend;"
f"create stream streams1 count_window(3) from t1 stream_options(max_delay(3s)|expired_time(200s)|watermark(100s)) into streamt as select _twstart as s, count(*) c1, sum(b), max(c) from t1 where ts >= _twstart and ts <= _twend;"
)
def insert1(self):
@@ -77,16 +77,19 @@ class TestStreamOldCaseCount:
tdSql.execute(f"insert into t1 values(1648791223001, 9, 2, 2, 1.1);")
tdSql.execute(f"insert into t1 values(1648791223009, 0, 3, 3, 1.0);")
tdSql.execute(f"insert into t1 values(1648791323009, 0, 4, 4, 4.0);")
def check1(self):
tdSql.checkResultsByFunc(
f"select * from streamt;",
lambda: tdSql.getRows() > 1
and tdSql.getData(0, 1) == 3
and tdSql.getData(0, 2) == 6
and tdSql.getData(0, 3) == 3
and tdSql.getData(1, 1) == 3
and tdSql.getData(1, 2) == 6
and tdSql.getData(1, 3) == 3,
lambda: tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.000")
and tdSql.compareData(0, 2, 6)
and tdSql.compareData(0, 3, 3)
and tdSql.compareData(1, 0, "2022-04-01 13:33:43.000")
and tdSql.compareData(1, 1, 3)
and tdSql.compareData(1, 2, 6)
and tdSql.compareData(1, 3, 3),
)
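The check above expects two result rows because `count_window(3)` closes a window after every 3 rows. A minimal sketch of that aggregation in plain Python (not the test framework; `count_windows` is a hypothetical helper, and partial trailing windows are dropped here, which `force_output`-style options in the real stream may handle differently):

```python
def count_windows(rows, n):
    """Group (ts, b, c) rows into fixed-count windows of n and aggregate
    like the stream: (window_start_ts, count, sum(b), max(c)) per full window."""
    out = []
    for i in range(0, len(rows) - len(rows) % n, n):
        win = rows[i:i + n]
        out.append((win[0][0], len(win),
                    sum(r[1] for r in win), max(r[2] for r in win)))
    return out

# rows mirror the inserts above: (ts, b, c)
rows = [(1648791213000, 1, 1), (1648791213001, 2, 2), (1648791213009, 3, 3),
        (1648791223000, 1, 1), (1648791223001, 2, 2), (1648791223009, 3, 3)]
print(count_windows(rows, 3))  # two windows, each with count 3, sum(b) 6, max(c) 3
```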
class Count02(StreamCheckItem):
@@ -103,7 +106,7 @@ class TestStreamOldCaseCount:
tdSql.execute(f"create table t1 using st tags(1, 1, 1);")
tdSql.execute(f"create table t2 using st tags(2, 2, 2);")
tdSql.execute(
f"create stream streams2 trigger at_once IGNORE EXPIRED 1 IGNORE UPDATE 0 WATERMARK 100s into streamt2 as select _wstart as s, count(*) c1, sum(b), max(c) from st partition by tbname count_window(3)"
f"create stream streams2 count_window(3) from st partition by tbname stream_options(max_delay(3s)|expired_time(200s)) into streamt2 as select _twstart as s, count(*) c1, sum(b), max(c) from %%trows count_window(3)"
)
def insert1(self):
@@ -125,17 +128,14 @@ class TestStreamOldCaseCount:
def check1(self):
tdSql.checkResultsByFunc(
f"select * from streamt2 order by 1, 2;",
lambda: tdSql.getRows() > 2
f"select * from streamt2 where tag_tbname='t1';",
lambda: tdSql.getRows() == 2
and tdSql.getData(0, 1) == 3
and tdSql.getData(0, 2) == 6
and tdSql.getData(0, 3) == 3
and tdSql.getData(1, 1) == 3
and tdSql.getData(1, 2) == 6
and tdSql.getData(1, 3) == 3
and tdSql.getData(2, 1) == 3
and tdSql.getData(2, 2) == 6
and tdSql.getData(2, 3) == 3,
and tdSql.getData(1, 3) == 3,
)
class Count03(StreamCheckItem):
@@ -154,24 +154,22 @@ class TestStreamOldCaseCount:
tdSql.execute(f"insert into t1 values(1648791213009, 0, 3, 3, 1.0);")
tdSql.execute(
f"create stream streams3 trigger at_once FILL_HISTORY 1 IGNORE EXPIRED 1 IGNORE UPDATE 0 WATERMARK 100s into streamt3 as select _wstart as s, count(*) c1, sum(b), max(c) from t1 count_window(3);"
f"create stream streams3 count_window(3) from t1 stream_options(max_delay(3s)|expired_time(1s)) into streamt3 as select _twstart as s, count(*) c1, sum(b), max(c) from %%trows;"
)
def insert1(self):
tdSql.execute(f"insert into t1 values(1648791223000, 0, 1, 1, 1.0);")
tdSql.execute(f"insert into t1 values(1648791221001, 9, 2, 2, 1.1);")
tdSql.execute(f"insert into t1 values(1648791223001, 9, 2, 2, 1.1);")
tdSql.execute(f"insert into t1 values(1648791223009, 0, 3, 3, 1.0);")
def check1(self):
tdSql.checkResultsByFunc(
f"select * from streamt3;",
lambda: tdSql.getRows() > 1
lambda: tdSql.getRows() == 1
and tdSql.getData(0, 1) == 3
and tdSql.getData(0, 2) == 6
and tdSql.getData(0, 3) == 3
and tdSql.getData(1, 1) == 3
and tdSql.getData(1, 2) == 6
and tdSql.getData(1, 3) == 3,
and tdSql.getData(0, 3) == 3,
)
class Count21(StreamCheckItem):
@@ -186,7 +184,7 @@ class TestStreamOldCaseCount:
f"create table t1(ts timestamp, a int, b int, c int, d double);"
)
tdSql.execute(
f"create stream streams1 trigger at_once IGNORE EXPIRED 1 IGNORE UPDATE 0 WATERMARK 100s into streamt as select _wstart as s, count(*) c1, sum(b), max(c) from t1 count_window(3);"
f"create stream streams1 count_window(3) from t1 stream_options(max_delay(3s)|expired_time(200s)) into streamt as select _twstart as s, count(*) c1, sum(b), max(c) from %%trows;"
)
def insert1(self):
@@ -196,7 +194,9 @@ class TestStreamOldCaseCount:
def check1(self):
tdSql.checkResultsByFunc(
f"select * from streamt;",
lambda: tdSql.getRows() > 0 and tdSql.getData(0, 1) == 2,
lambda: tdSql.getRows() == 1
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.001")
and tdSql.compareData(0, 1, 2),
)
def insert2(self):
@@ -205,7 +205,9 @@ class TestStreamOldCaseCount:
def check2(self):
tdSql.checkResultsByFunc(
f"select * from streamt;",
lambda: tdSql.getRows() == 1 and tdSql.getData(0, 1) == 3,
lambda: tdSql.getRows() == 1
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.001")
and tdSql.compareData(0, 1, 2),
)
def insert3(self):
@ -216,19 +218,24 @@ class TestStreamOldCaseCount:
def check3(self):
tdSql.checkResultsByFunc(
f"select * from streamt order by 1;",
lambda: tdSql.getRows() == 2,
lambda: tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.001")
and tdSql.compareData(0, 1, 3)
and tdSql.compareData(1, 0, "2022-04-01 13:33:43.001")
and tdSql.compareData(1, 1, 2),
)
def insert4(self):
tdSql.execute(f"insert into t1 values(1648791212000, 0, 1, 1, 1.0);")
tdSql.execute(f"insert into t1 values(1648791224000, 0, 1, 1, 1.0);")
def check4(self):
tdSql.checkResultsByFunc(
f"select * from streamt order by 1;",
lambda: tdSql.getRows() == 3
and tdSql.getData(0, 1) == 3
and tdSql.getData(1, 1) == 3
and tdSql.getData(2, 1) == 1,
lambda: tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.001")
and tdSql.compareData(0, 1, 3)
and tdSql.compareData(1, 0, "2022-04-01 13:33:43.001")
and tdSql.compareData(1, 1, 3),
)
class Count22(StreamCheckItem):
@@ -245,49 +252,53 @@ class TestStreamOldCaseCount:
tdSql.execute(f"create table t1 using st tags(1, 1, 1);")
tdSql.execute(f"create table t2 using st tags(2, 2, 2);")
tdSql.execute(
f"create stream streams2 trigger at_once IGNORE EXPIRED 1 IGNORE UPDATE 0 WATERMARK 100s into streamt2 as select _wstart as s, count(*) c1, sum(b), max(c) from st partition by tbname count_window(3)"
f"create stream streams2 count_window(3) from st partition by tbname stream_options(max_delay(3s)|expired_time(2s)|watermark(1s)) into streamt2 as select _twstart as s, count(*) c1, sum(b), max(c) from %%trows "
)
def insert1(self):
tdSql.execute(f"insert into t1 values(1648791213001, 9, 2, 2, 1.1);")
tdSql.execute(f"insert into t1 values(1648791213009, 0, 3, 3, 1.0);")
tdSql.execute(f"insert into t1 values(1648791214009, 0, 3, 3, 1.0);")
tdSql.execute(f"insert into t2 values(1648791213001, 9, 2, 2, 1.1);")
tdSql.execute(f"insert into t2 values(1648791213009, 0, 3, 3, 1.0);")
tdSql.execute(f"insert into t2 values(1648791214009, 0, 3, 3, 1.0);")
def check1(self):
tdSql.checkResultsByFunc(
f"select * from streamt2 order by 1;;",
lambda: tdSql.getRows() > 1
and tdSql.getData(0, 1) == 2
and tdSql.getData(1, 1) == 2,
f"select * from streamt2 where tag_tbname='t1' order by 1;",
lambda: tdSql.getRows() == 1
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.001")
and tdSql.compareData(0, 1, 1),
)
def insert2(self):
tdSql.execute(f"insert into t1 values(1648791213000, 0, 1, 1, 1.0);")
tdSql.execute(f"insert into t2 values(1648791213000, 0, 1, 1, 1.0);")
tdSql.execute(f"insert into t1 values(1648791215009, 0, 1, 1, 1.0);")
tdSql.execute(f"insert into t2 values(1648791215009, 0, 1, 1, 1.0);")
def check2(self):
tdSql.checkResultsByFunc(
f"select * from streamt2 order by 1;;",
lambda: tdSql.getRows() == 2
and tdSql.getData(0, 1) == 3
and tdSql.getData(1, 1) == 3,
f"select * from streamt2 where tag_tbname='t1' order by 1;",
lambda: tdSql.getRows() == 1
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.001")
and tdSql.compareData(0, 1, 2),
)
def insert3(self):
tdSql.execute(f"insert into t1 values(1648791223000, 0, 1, 1, 1.0);")
tdSql.execute(f"insert into t1 values(1648791223001, 9, 2, 2, 1.1);")
tdSql.execute(f"insert into t1 values(1648791223009, 0, 3, 3, 1.0);")
tdSql.execute(f"insert into t1 values(1648791224009, 0, 3, 3, 1.0);")
tdSql.execute(f"insert into t2 values(1648791223000, 0, 1, 1, 1.0);")
tdSql.execute(f"insert into t2 values(1648791223001, 9, 2, 2, 1.1);")
tdSql.execute(f"insert into t2 values(1648791223009, 0, 3, 3, 1.0);")
tdSql.execute(f"insert into t2 values(1648791224009, 0, 3, 3, 1.0);")
def check3(self):
tdSql.checkResultsByFunc(
f"select * from streamt2 order by 1;",
lambda: tdSql.getRows() == 4,
f"select * from streamt2 where tag_tbname='t1' order by 1;",
lambda: tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.001")
and tdSql.compareData(0, 1, 3)
and tdSql.compareData(1, 0, "2022-04-01 13:33:43.000")
and tdSql.compareData(1, 1, 2),
)
def insert4(self):
@@ -296,14 +307,12 @@ class TestStreamOldCaseCount:
def check4(self):
tdSql.checkResultsByFunc(
f"select * from streamt2 order by 1;",
lambda: tdSql.getRows() == 6
and tdSql.getData(0, 1) == 3
and tdSql.getData(1, 1) == 3
and tdSql.getData(2, 1) == 3
and tdSql.getData(3, 1) == 3
and tdSql.getData(4, 1) == 1
and tdSql.getData(5, 1) == 1,
f"select * from streamt2 where tag_tbname='t1' order by 1;",
lambda: tdSql.getRows() == 2
and tdSql.compareData(0, 0, "2022-04-01 13:33:33.001")
and tdSql.compareData(0, 1, 3)
and tdSql.compareData(1, 0, "2022-04-01 13:33:43.000")
and tdSql.compareData(1, 1, 2),
)
class Count31(StreamCheckItem):
@@ -318,7 +327,7 @@ class TestStreamOldCaseCount:
f"create table t1(ts timestamp, a int, b int, c int, d double);"
)
tdSql.execute(
f"create stream streams1 trigger at_once IGNORE EXPIRED 1 IGNORE UPDATE 0 WATERMARK 100s into streamt as select _wstart as s, count(*) c1, sum(b), max(c) from t1 count_window(3);"
f"create stream streams1 count_window(3) from t1 stream_options(max_delay(3s)|expired_time(200s)) into streamt as select _twstart as s, count(*) c1, sum(b), max(c) from %%trows ;"
)
def insert1(self):
@@ -355,7 +364,7 @@ class TestStreamOldCaseCount:
f"select * from streamt order by 1;",
lambda: tdSql.getRows() == 2
and tdSql.getData(0, 1) == 3
and tdSql.getData(1, 1) == 2,
and tdSql.getData(1, 1) == 3,
)
class Sliding01(StreamCheckItem):
@@ -370,7 +379,7 @@ class TestStreamOldCaseCount:
f"create table t1(ts timestamp, a int, b int, c int, d double);"
)
tdSql.execute(
f"create stream streams1 trigger at_once IGNORE EXPIRED 1 IGNORE UPDATE 0 WATERMARK 100s into streamt as select _wstart as s, count(*) c1, sum(b), max(c) from t1 count_window(4, 2);"
f"create stream streams1 count_window(4, 2) from t1 stream_options(max_delay(3s)) into streamt as select _twstart as s, count(*) c1, sum(b), max(c) from t1 where ts >= _twstart and ts <= _twend;"
)
def insert1(self):
@@ -447,10 +456,10 @@ class TestStreamOldCaseCount:
class Sliding02(StreamCheckItem):
def __init__(self):
self.db = "Count00"
self.db = "sliding02"
def create(self):
tdSql.execute(f"create database sliding02 vgroups 4;")
tdSql.execute(f"create database sliding02 vgroups 4 buffer 8;")
tdSql.execute(f"use sliding02;")
tdSql.execute(
@@ -459,7 +468,7 @@ class TestStreamOldCaseCount:
tdSql.execute(f"create table t1 using st tags(1, 1, 1);")
tdSql.execute(f"create table t2 using st tags(2, 2, 2);")
tdSql.execute(
f"create stream streams2 trigger at_once IGNORE EXPIRED 1 IGNORE UPDATE 0 WATERMARK 100s into streamt2 as select _wstart as s, count(*) c1, sum(b), max(c) from st partition by tbname count_window(4, 2);"
f"create stream streams2 count_window(4, 2) from st partition by tbname stream_options(max_delay(3s)) into streamt2 as select _twstart as s, count(*) c1, sum(b), max(c) from %%trows;"
)
def insert1(self):
@@ -522,7 +531,7 @@ class TestStreamOldCaseCount:
def check5(self):
tdSql.checkResultsByFunc(
f"select * from streamt2;",
lambda: tdSql.getRows() == 9,
lambda: tdSql.getRows() == 7,
)
def insert6(self):


@@ -74,11 +74,11 @@
# > 0 (any retrieved column size greater than this value all data will be compressed.)
# compressColData -1
# system time zone
# system time zone (for linux/mac)
# timezone UTC-8
# system time zone (for windows 10)
# timezone Asia/Shanghai (CST, +0800)
# system time zone (for windows)
# timezone Asia/Shanghai
# system locale
# locale en_US.UTF-8


@@ -293,6 +293,7 @@ class TestInformationSchema:
'mongodb':'MongoDB',
'csv':'CSV',
'sparkplugb':"SparkplugB",
'orc':'ORC',
'idmp_ts_attr':'TDengine IDMP Time-Series Attributes',
'idmp_nts_attr':'TDengine IDMP Non-Time-Series Attributes',
'idmp_element':'TDengine IDMP Elements',


@@ -27,6 +27,9 @@
,,y,.,./ci/pytest.sh pytest cases/01-DataTypes/test_ts6333.py
,,y,.,./ci/pytest.sh pytest cases/01-DataTypes/test_composite_key_load.py
,,n,.,pytest cases/13-StreamProcessing/30-OldPyCases/test_compatibility_rolling_upgrade.py -N 3
# 02-Databases
## 01-Create
,,y,.,./ci/pytest.sh pytest cases/02-Databases/01-Create/test_db_basic1.py
@@ -569,11 +572,13 @@
,,n,.,pytest cases/13-StreamProcessing/07-SubQuery/test_subquery_state.py
## 08-Recalc
#,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_expired_time.py need to modify case
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_expired_time.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_ignore_disorder.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_delete_recalc.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_watermark.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_combined_options.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_manual.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_manual_with_options.py
## 20-UseCase
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/20-UseCase/test_idmp_meters.py
@@ -615,11 +620,9 @@
,,n,.,pytest cases/13-StreamProcessing/30-OldPyCases/test_oldcase_snode_restart_with_checkpoint.py -N 4
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/30-OldPyCases/test_oldcase_stream_multi_agg.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/30-OldPyCases/test_oldcase_stream_basic.py
,,n,.,pytest cases/13-StreamProcessing/30-OldPyCases/test_compatibility_rolling_upgrade.py -N 3
,,n,.,pytest cases/13-StreamProcessing/30-OldPyCases/test_compatibility_rolling_upgrade_all.py -N 3
,,n,.,pytest cases/13-StreamProcessing/30-OldPyCases/test_compatibility.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/30-OldPyCases/test_drop.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/30-OldPyCases/test_empty_identifier.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/30-OldPyCases/test_oldcase_at_once.py
## 31-OldCases
@@ -1004,6 +1007,7 @@
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_stbJoin.py -Q 3
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_stbJoin.py -Q 4
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_hint.py
,,n,.,pytest cases/13-StreamProcessing/30-OldPyCases/test_compatibility_rolling_upgrade_all.py -N 3
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_hint.py -Q 2
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_hint.py -Q 3
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_hint.py -Q 4
@@ -1541,6 +1545,8 @@
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_blockSMA.py -Q 2
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_projectionDesc.py -Q 2
,,n,.,pytest cases/13-StreamProcessing/30-OldPyCases/test_compatibility.py
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_between.py -Q 3
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_distinct.py -Q 3
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/2-query/test_varchar.py -Q 3
@@ -1954,9 +1960,6 @@
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/0-others/test_information_schema.py
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/0-others/test_ins_filesets.py
#newstm,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/0-others/test_grant.py
#newstm,,n,.,pytest cases/uncatalog/system-test/0-others/test_compatibility_rolling_upgrade_all.py -N 3
#newstm,,n,.,pytest cases/uncatalog/system-test/0-others/test_compatibility.py
#newstm,,n,.,pytest cases/uncatalog/system-test/0-others/test_compatibility_rolling_upgrade.py -N 3
,,y,.,./ci/pytest.sh pytest cases/uncatalog/system-test/0-others/view/non_marterial_view/test_view.py


@@ -68,11 +68,13 @@
,,n,.,pytest cases/13-StreamProcessing/07-SubQuery/test_subquery_state.py
## 08-Recalc
#,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_expired_time.py need to modify case
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_expired_time.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_ignore_disorder.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_delete_recalc.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_watermark.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_combined_options.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_manual.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/08-Recalc/test_recalc_manual_with_options.py
## 20-UseCase
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/20-UseCase/test_idmp_meters.py
@@ -119,6 +121,7 @@
,,n,.,pytest cases/13-StreamProcessing/30-OldPyCases/test_compatibility.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/30-OldPyCases/test_drop.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/30-OldPyCases/test_empty_identifier.py
,,y,.,./ci/pytest.sh pytest cases/13-StreamProcessing/30-OldPyCases/test_oldcase_at_once.py
## 31-OldCases