select * from lb_task_run where resource_group = ? and state in (1, 10, 19)

kubectl edit Deployment -nwedata unified-data-security    # change CPU and memory to 2c4g (a non-interactive sketch follows at the end of this note block)

These two tasks cannot be stopped: ods_t_com_sc_accrual_detail_his_a_h, dwd_gs_vlgs_tplcore_t_annualize_rate_jd_a_h
-- Also take a look at dwd_gs_vlgs_tplcore_gross_value_pg_a_h: its execution log cannot be found on any run.
ods_t_customer_a_h
dwdp_da_rpt_gross_value_detail_ss_hi
ods_dm_application_a_h
ads_tpldrs_bksur_pre_prod_item_detail_a_h
ads_tpldrs_bksur_acc_prod_fee_detail_a_h

https://f.ws59.cn/f/ei9rw4fo8kt
https://getnote.top/2
https://f.ws59.cn/f/ei6s65mlwoa

ads_tplshow_pursur_acc_gross_detail_by_item_a_min_v 20240620164706183: this instance was scheduled for 7:06 and all of its upstream tasks had finished, but it did not start running until around 8:40. @李同 @周小龙

# MariaDB resource adjustment
1. kubectl get mariadb -n sso | grep <instance keyword>    # find the corresponding MariaDB object
2. Edit the MariaDB object with kubectl edit mariadb xxx -n sso and set limits and requests to the expected values. For disk expansion, the storage size written under requests must be identical to the one under limits.
eg:
resources:
  limits:
    cpu: 200m
    memory: 200M
    storage: 10G
  requests:
    cpu: 200m
    memory: 200M
    storage: 10G

20240417194145032 2024-04-17 00:00:00
select DEFAULT.filterformulaudf("123", map('SA',nvl(cast(t7.sa as string),'0'))) from project_analysis_reinsur_prod.dwd_rein_reprm_df t7 limit 2;

Public link: https://registry.aurora.tencent.com/packages/_material/0/noarch/1719464099/material.7079584c9b3b4924ac80450fa9c85f94.20240627125449.tgz?version=v1&issueDate=2024-06-27T12%3A54%3A56%2B08%3A00&uuid=3cbef59e-8b70-4416-8fbc-99e0fa63343c&authorization=HKtIX32H4SdaOYKK%2F8RYpx5dWA1aR3uKkJhpC3ppbFD8dga2Pr%2B8B5xsi748anOIzTvunHVG8JtIAN6%2FXod0xHRfUNgGryr7D8QE2NP83BEY8X04JzSL58TWv7PpxDBxcr341dAwBu7Dq%2BFRsXXQyLbOhju2%2B%2Bu72InHUoZxRmnL1wxhS%2FR1GrDvrokK8c1sUtIVbrvLILckZXKlLzUJRyCCoY9IydezOIS64Mvw2SsMQe6UPuRoIvY3PBzZCKsxsQRNvP%2FREOVpHzyvlUi%2F9NEcYDrvDpRBJZvqLxuagFopcDswlOPWxhQE1bGgjsoQ3KpfGlL0i0mh5oD5DKC6Vg%3D%3D&privateKeyHash=WJesrCD%2Fha5pRdC7i080TRzCfaLxcCe5dmoX01KsU4s%3D
31684c66-cd8e-4930-bf73-2f60d686e016
https://getnote.top/guldan

dwd_rein_liab_adj_policlm_di
dwd_rein_reclm_w_di_

sum(starrocks_fe_mv_refresh_running_jobs) by (job)

This one is normal: child instance 20240520110632406_2024-06-26 08:06:00 did not run because parent instance 20240531142957381_2024-06-26 08:06:00 had not finished yet.

# MariaDB (Galera) graceful restart, one pod at a time (a consolidated sketch follows after this note block)
1. rm -f /var/lib/mysql/gcache.page.00*    # clean up the gcache files
2. Run mysqladmin shutdown to trigger a graceful restart of the mysql service.
3. Run mysql to enter MariaDB, then query show status like '%wsrep_cluster%'; to confirm that 3 replicas are online, and show status like '%wsrep_last%' to confirm the last commit.
After all three pods have been handled, check that the pod status is normal.

https://getnote.top/sr
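A consolidated sketch of the graceful-restart steps above, as they might be run inside one MariaDB pod at a time; the gcache path and the expected replica count of 3 come from the note, everything else (socket auth, no extra mysql credentials) is an assumption.

# Sketch only -- repeat per pod, waiting for the node to rejoin before moving to the next one
rm -f /var/lib/mysql/gcache.page.00*    # 1. clear the gcache page files
mysqladmin shutdown                     # 2. trigger a graceful restart of the mysql service
# 3. once the pod is back, confirm cluster membership and the last committed transaction
mysql -e "show status like '%wsrep_cluster%'; show status like '%wsrep_last%';"
# expect wsrep_cluster_size = 3 before touching the next pod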
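For the unified-data-security change to 2c4g noted near the top of this block, a non-interactive alternative to kubectl edit is sketched below; the assumption that the Deployment has a single container and that 2c4g means requests = limits = 2 CPU / 4Gi is mine, not from the note.

# Sketch: set resources without opening an editor (assumes one container in the Deployment)
kubectl -n wedata set resources deployment unified-data-security \
  --requests=cpu=2,memory=4Gi --limits=cpu=2,memory=4Gi
# verify what was applied
kubectl -n wedata get deployment unified-data-security \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'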
Failed to start service. err:[ Exited too quickly (process log may have details) ]

#WeCom meeting: 341-896-868

Export heap dump (a pid-lookup sketch follows at the end of this block):
jmap -dump:live,format=b,file=/tmp/heap.hprof {pid}
jmap -histo:live {pid} > histo.txt

https://docs.starrocks.io/zh/docs/administration/management/Scale_up_down/
https://drive.weixin.qq.com/s?k=AJEAIQdfAAopEBRWt9

ALTER USER jack@'172.10.1.10' IDENTIFIED BY '54321';

curl -X POST -H 'Content-Type: application/json' -d '{"data": {"taskId":"20240618220203612", "curRunDate":"2024-06-18 22:10:00", "taskType": {"typeId": "126"}}}' http://127.0.0.1:9066/getYarnAppidFromLog
tracking URL: 132456451654564

sudo -u hdfs java -jar arthas-boot.jar --telnet-port 9998 --http-port -1

curl -X POST -H 'Content-Type: application/json' -d '{}' http://172.16.12.7:9066/startInstance

SELECT SUBSTRING('example', -1, 5);

workflow depend is not successful
20240325162341915412  loader-0  172.16.4.9  10.27.135.57  dwt_cf_infe_item_pre_npp_a_min
checkWorkflowState for instance {} workflow {}
sed -n '/2021-08-06 15:40:/,/2021-08-06 16:10:/p' scheduler-all-log.2021-08-06.log > myscheduler-all-log.2021-08-06.log    # see the grep sketch at the end of this block

dwd_gs_vlgs_tplcore_gross_value_kh_a_min_new
20240531142957381 2024-06-17 16:42:00
https://f.ws59.cn/f/ed5p6rzbcfe

mv_refresh_try_lock_timeout_ms
admin set frontend config ("mv_refresh_try_lock_timeout_ms" = "30000");

https://doc.weixin.qq.com/doc/w3_AMkARwYHACk0ZDfbbTgTG0W1un6Pd?scode=AJEAIQdfAAoV7afOnxAMkARwYHACk

tceadmin/aaaaaaaa1!
909619400/tester001/Tbds@2022
909619400/tester002/Qwert@12345
909619400/tester003/Tbds@2022
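A small sketch wrapping the jmap commands from the "Export heap dump" note above, looking up the pid with jps instead of filling in {pid} by hand; the process-name pattern is a placeholder, not taken from the note.

# Sketch: find the JVM by name (placeholder pattern) and capture a histogram plus a full heap dump
PID=$(jps -l | awk '/scheduler/{print $1; exit}')
jmap -histo:live "$PID" > /tmp/histo.txt
jmap -dump:live,format=b,file=/tmp/heap.hprof "$PID"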
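And a sketch combining the sed time-window extraction above with a grep for one task, to narrow down why an instance was held back; the log-file naming and the messages (checkWorkflowState, "workflow depend is not successful") come from the notes, while the date window and task id 20240618220203612 are just example values already present above.

# Sketch: cut a time window out of the scheduler log, then look for a single task in it
sed -n '/2024-06-18 22:00:/,/2024-06-18 23:00:/p' scheduler-all-log.2024-06-18.log > window.log
grep -E 'checkWorkflowState|workflow depend is not successful' window.log | grep 20240618220203612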
# Clear the specified cluster
use woodpecker;
delete from application_job_actions where job_id in (select id from application_jobs where cluster_id in ('tbds-9fwam863'));
delete from application_jobs where cluster_id in ('tbds-9fwam863');
DELETE FROM `action` WHERE cluster_id in ('tbds-9fwam863');
DELETE FROM `alert_info` WHERE cluster_id in ('tbds-9fwam863');
DELETE FROM `cluster_conf_group_config_file_new` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_conf_group_config_file_version_new` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_node_config_file_new` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_node_config_file_version_new` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_resource` WHERE cluster_id in ('tbds-9fwam863');
DELETE FROM `cluster_service` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_service_client` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_service_config_file_new` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_service_config_file_version_new` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_service_config_separate` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `cluster_service_node` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `conf_group_info` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `conf_group_node_info` WHERE cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
DELETE FROM `host_heart` WHERE uuid in (select uuid FROM `host` WHERE cluster_id in ('tbds-9fwam863'));
DELETE FROM `host` WHERE cluster_id in ('tbds-9fwam863');
delete from cluster_service_config_state where cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
delete from conf_push_history where cluster_id in (select id from cluster where cluster_id in ('tbds-9fwam863'));
delete from cluster_state where cluster_id in ('tbds-9fwam863');
delete from task where process_id in (select process_id from job where dataid in ('tbds-9fwam863'));
delete from stage where data_id in ('tbds-9fwam863');
delete from job where dataid in ('tbds-9fwam863');
delete from action where cluster_id in ('tbds-9fwam863');
DELETE FROM `cluster` WHERE cluster_id in ('tbds-9fwam863');

# Clean up the emrcc database; see the records in galileo.clusterinfo to get the id and clusterId
use galileo;
delete from `clusterproductconfig` where clusterId in (select id from clusterinfo where clusterId in ('tbds-9fwam863'));
delete from `cluster_cdb_info` where clusterId in (select id from clusterinfo where clusterId in ('tbds-9fwam863'));
delete from `server_hardwareinfo` where clusterId in (select id from clusterinfo where clusterId in ('tbds-9fwam863'));
delete from `web_proxy_authinfo` where clusterId in (select id from clusterinfo where clusterId in ('tbds-9fwam863'));
delete from `taskflow_wood_flowinfo` where flowId in (select flowId from `taskflow` where docId in (select id from clusterinfo where clusterId in ('tbds-9fwam863')));
delete from `taskflow` where docId in (select id from clusterinfo where clusterId in ('tbds-9fwam863'));
delete from `taskflow_status` where docId in (select id from clusterinfo where clusterId in ('tbds-9fwam863'));
delete from `usermetainfo` where clusterId in (select id from clusterinfo where clusterId in ('tbds-9fwam863'));
delete from `taskflowparams` where flowId in (select id from `taskflow` where docId in (select id from clusterinfo where clusterId in ('tbds-9fwam863')));
delete from taskflow_status where flowId in (select flowId from taskflow where docId in (select id from clusterinfo where clusterId in ('tbds-9fwam863')));
delete from taskflow_wood_flowinfo where flowId in (select flowId from taskflow where docId in (select id from clusterinfo where clusterId in ('tbds-9fwam863')));
delete from taskflow where docId in (select id from clusterinfo where clusterId in ('tbds-9fwam863'));
delete from `clusterinfo` where clusterId in ('tbds-9fwam863');

tbds-bootstrap run 'cat /data/init.log |tail -n 5'
sh /data/tools/tbds-bootstrap.sh run "sh /data/image-sm/build_tmp/force_clean.sh > /data/force_clean.log ";
sh /data/tools/tbds-bootstrap.sh run 'cat /data/force_clean.log |tail -n 10'

# Add sudo permissions
tbds-bootstrap run "echo 'tbds ALL=(ALL) NOPASSWD:/usr/bin/make,/usr/bin/rm, /usr/bin/crontab,/usr/sbin/kadmin.local,/usr/bin/sleep,/usr/bin/mysql,/usr/bin/cd,/usr/sbin/setcap,/usr/sbin/service,/usr/sbin/kdb5_ldap_util,/usr/bin/chmod,/usr/bin/touch,/usr/bin/bash,/usr/bin/hostnamectl,/usr/bin/yum,/usr/bin/mkdir,/usr/bin/mv,/usr/bin/wget,/usr/bin/tar,/usr/bin/unzip,/usr/bin/chown,/usr/sbin/useradd,/usr/bin/cp,/usr/bin/ln,/usr/bin/pip*,/usr/local/bin/pip*,/usr/bin/python*,/usr/bin/systemctl,/usr/bin/tee,/usr/bin/sed,/usr/sbin/dmidecode,/usr/local/jdk/bin/jstat,/usr/sbin/groupadd'>>/etc/sudoers"
tbds-bootstrap run "echo 'tbds ALL=(ALL) NOPASSWD:/usr/local/sbin/slapadd,/usr/local/sbin/slapdn,/usr/local/sbin/slapcat,/usr/local/sbin/slaptest,/usr/local/sbin/slapauth,/usr/local/sbin/slapschema,/usr/local/sbin/slappasswd,/usr/local/sbin/slapindex,/usr/local/sbin/slapacl,/usr/local/sbin/uuserver,/usr/local/sbin/kprop,/usr/local/sbin/sim_server,/usr/local/sbin/krb5-send-pr,/usr/local/sbin/krb5kdc,/usr/local/sbin/gss-server,/usr/local/sbin/sserver,/usr/local/sbin/kproplog,/usr/local/sbin/kpropd,/usr/local/sbin/kdb5_util,/usr/local/sbin/kdb5_ldap_util,/usr/local/sbin/kadmin.local,/usr/local/sbin/kadmind,/usr/local/bin/ldappasswd,/usr/local/bin/ldapmodrdn,/usr/local/bin/ldapexop,/usr/local/bin/ldapcompare,/usr/local/bin/ldapadd,/usr/local/bin/ldapmodify,/usr/local/bin/ldapdelete,/usr/local/bin/ldapwhoami,/usr/local/bin/ldapurl,/usr/local/bin/ldapsearch,/usr/local/bin/compile_et,/usr/local/bin/kadmin,/usr/local/bin/kdestroy,/usr/local/bin/k5srvutil,/usr/local/bin/gss-client,/usr/local/bin/kswitch,/usr/local/bin/kinit,/usr/local/bin/sclient,/usr/local/bin/sim_client,/usr/local/bin/kvno,/usr/local/bin/ktutil,/usr/local/bin/ksu,/usr/local/bin/krb5-config,/usr/local/bin/kpasswd,/usr/local/bin/klist,/usr/local/bin/uuclient'>>/etc/sudoers"
tbds-bootstrap run "echo 'hadoop ALL=(ALL) NOPASSWD: /usr/bin/sleep,/usr/sbin/dmidecode,/usr/bin/mysql,/usr/bin/cd,/usr/sbin/setcap,/usr/sbin/service,/usr/bin/rm,/usr/sbin/kadmin.local,/usr/bin/chmod,/usr/bin/touch,/usr/bin/bash,/usr/bin/hostnamectl,/usr/bin/yum,/usr/bin/mkdir,/usr/bin/mv,/usr/bin/unzip,/usr/bin/tar,/usr/bin/chown,/usr/sbin/useradd,/usr/sbin/usermod,/usr/bin/cp,/usr/bin/ln,/usr/bin/pip*,/usr/local/bin/pip*,/usr/bin/python*,/usr/bin/systemctl,/usr/bin/tee,/usr/bin/sed,/usr/local/jdk/bin/jstat,/usr/bin/ranger-admin,/usr/bin/ranger-usersync,/usr/local/service/ranger/setup.sh,/usr/local/service/ranger/sbin/ranger-daemon.sh,/usr/local/service/ranger/backup_setup.sh,/usr/local/service/ranger/sbin/default-settings.sh,/usr/local/service/ranger/usersync/setup.sh,/usr/local/service/ranger/usersync/setup.py,/usr/local/service/ranger/ews/ranger-admin-services.sh,/usr/local/service/ranger/usersync/ranger-usersync-services.sh,/usr/local/service/woodpecker/woodpecker-ems-agent/sbin/iotopz'>>/etc/sudoers"

# umask modification
/data/tools/tbds-bootstrap.sh push /data/tools/utils/init_jobs/init-set-umask.sh /tmp/
/data/tools/tbds-bootstrap.sh run 'sh /tmp/init-set-umask.sh 022'

==========
# New production environment
10.27.135.105 o.bennu.life.cntaiping.com
10.27.135.105 imgcache.bennu.life.cntaiping.com
10.27.135.105 cas.bennu.life.cntaiping.com
10.27.135.105 api.bennu.life.cntaiping.com
10.27.135.105 yapi3oss.bennu.life.cntaiping.com
10.27.135.105 yunapi3.bennu.life.cntaiping.com
10.27.135.105 cspobject.o.bennu.life.cntaiping.com
10.27.135.105 tcs-system-csp-mgmt-console.bennu.life.cntaiping.com
10.27.135.105 api.o.bennu.life.cntaiping.com
10.27.135.105 grafana.chongqing.bennu.life.cntaiping.com
10.27.135.105 tcs-platform-websocket.bennu.life.cntaiping.com
10.27.135.105 wedata.o.bennu.life.cntaiping.com
10.27.135.105 wedata-api.o.bennu.life.cntaiping.com
10.27.135.105 wedata-tcs-data-service-gateway.bennu.life.cntaiping.com
10.27.135.105 nacos.bennu.life.cntaiping.com
10.27.135.105 uat-resulthouse.bennu.life.cntaiping.com
10.27.135.105 oss-csp1.csp.bennu.life.cntaiping.com
10.27.135.105 dawn-console.bennu.life.cntaiping.com
10.27.135.105 tbds.o.bennu.life.cntaiping.com
10.27.135.105 grafana.o.bennu.life.cntaiping.com
10.27.135.105 flink.o.bennu.life.cntaiping.com

#### New test environment
10.28.148.16 o.test.bennu.life.cntaiping.com
10.28.148.16 imgcache.test.bennu.life.cntaiping.com
10.28.148.16 cas.test.bennu.life.cntaiping.com
10.28.148.16 api.o.test.bennu.life.cntaiping.com
10.28.148.16 oapi.test.bennu.life.cntaiping.com
10.28.148.16 tcs-platform-websocket.test.bennu.life.cntaiping.com
10.28.148.16 tcs-system-csp-mgmt-console.test.bennu.life.cntaiping.com
10.28.148.16 oss-csp1.csp.test.bennu.life.cntaiping.com
10.28.148.16 dawn-console.test.bennu.life.cntaiping.com
10.28.148.16 tbds.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata.o.test.bennu.life.cntaiping.com
10.28.148.16 test.bennu.life.cntaiping.com
10.28.148.16 api2.test.bennu.life.cntaiping.com
10.28.148.16 api3.test.bennu.life.cntaiping.com
10.28.148.16 api.test.bennu.life.cntaiping.com
10.28.148.16 api3.oss.test.bennu.life.cntaiping.com
10.28.148.16 api2.oss.test.bennu.life.cntaiping.com
10.28.148.16 yapi3oss.test.bennu.life.cntaiping.com
10.28.148.16 yunapi3.oss.test.bennu.life.cntaiping.com
10.28.148.16 yunapi3.test.bennu.life.cntaiping.com
10.28.148.16 api.t.test.bennu.life.cntaiping.com
10.28.148.16 grafana.chongqing.test.bennu.life.cntaiping.com
10.28.148.16 wedata-api.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-studio.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-dq.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-ola.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-security.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-datahub.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-manage.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-do.o.test.bennu.life.cntaiping.com
10.28.148.16 console.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-console.o.test.bennu.life.cntaiping.com
10.28.148.16 index.o.test.bennu.life.cntaiping.com
10.28.148.16 cspobject.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-dataplan.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-dataservice.o.test.bennu.life.cntaiping.com
10.28.148.16 wedata-oceanus.o.test.bennu.life.cntaiping.com
10.28.148.16 flink.o.test.bennu.life.cntaiping.com
10.28.148.16 infrastore-metric-gateway.chongqing.test.bennu.life.cntaiping.com
10.28.148.16 wedata-tcs-data-service-gateway.test.bennu.life.cntaiping.com
10.28.148.16 nacos.test.bennu.life.cntaiping.com