# Deduplicating Slow SQL Logs with Java and JSQLParser

In a big-data environment, dealing with slow SQL logs is a necessary chore, and once a log file reaches several gigabytes, working on it directly becomes impractical. This article shows how to parse and deduplicate slow SQL logs with Java and the JSQLParser library, so that the logs stay manageable and useful.

## Background

In our production environment, SQL statements that take too long to execute are recorded in a slow query log. Each entry contains the execution time, user information, the statement text, and other metadata. Because these files usually contain a large number of repeated statements, reviewing them one by one is slow and inefficient, so the log needs to be deduplicated first.

## Goal

The deduplication is done in the following steps:

1. Read the log file and split it into comment blocks and SQL statements.
2. Parse the SQL and define what makes two statements "the same".
3. Model each parsed entry as an object and override `equals` and `hashCode` accordingly.
4. Deduplicate the entries with a `Set`.
5. Write the deduplicated result back to a file.

## Tools and dependencies

- Java 8 or later
- Maven
- the JSQLParser library

Maven dependency:

```xml
<dependency>
    <groupId>com.github.jsqlparser</groupId>
    <artifactId>jsqlparser</artifactId>
    <version>4.9</version>
</dependency>
```

## Sample slow query log

Below is a sample slow query log. Each entry records the time, user and host, schema, query time, bytes sent, a timestamp and the SQL statement; note that the SELECT entry appears four times:

```
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';
# Time: 2024-05-10T11:28:27.315966+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13666423
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 3.290658  Lock_time: 0.000131  Rows_sent: 0  Rows_examined: 0  Rows_affected: 1
# Bytes_sent: 11
SET timestamp=1715311707;
insert into scott.emp (name, age) values ('张三', 38);
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';
# Time: 2024-05-14T16:18:03.879351+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13966826
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 3.120938  Lock_time: 0.000100  Rows_sent: 0  Rows_examined: 1  Rows_affected: 1
# Bytes_sent: 52
SET timestamp=1715674683;
UPDATE emp SET ename = '张三', age = 18 WHERE id = 1045983421034180;
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';
# Time: 2024-05-06T01:58:36.959671+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13387119
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 6.161219  Lock_time: 0.000875  Rows_sent: 0  Rows_examined: 2137468  Rows_affected: 0
# Bytes_sent: 11
SET timestamp=1714931916;
delete from emp where id = 1;
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';
```

## Result after deduplication

```
# slow.log    未知服务    4
# Time: 2024-05-10T20:30:12.035337+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13708199
# Schema: laterdatabase  Last_errno: 0  Killed: 0
# Query_time: 5.000000  Lock_time: 0.000122  Rows_sent: 1  Rows_examined: 610953  Rows_affected: 0
# Bytes_sent: 56
SET timestamp=1715344212;
SELECT * FROM emp where name = '%三%';

# slow.log    未知服务    1
# Time: 2024-05-14T16:18:03.879351+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13966826
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 3.120938  Lock_time: 0.000100  Rows_sent: 0  Rows_examined: 1  Rows_affected: 1
# Bytes_sent: 52
SET timestamp=1715674683;
UPDATE emp SET ename = '张三', age = 18 WHERE id = 1045983421034180;

# slow.log    未知服务    1
# Time: 2024-05-06T01:58:36.959671+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13387119
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 6.161219  Lock_time: 0.000875  Rows_sent: 0  Rows_examined: 2137468  Rows_affected: 0
# Bytes_sent: 11
SET timestamp=1714931916;
delete from emp where id = 1;

# slow.log    未知服务    1
# Time: 2024-05-10T11:28:27.315966+08:00
# User@Host: root[root] @ [192.168.110.110]  Id: 13666423
# Schema: scott  Last_errno: 0  Killed: 0
# Query_time: 3.290658  Lock_time: 0.000131  Rows_sent: 0  Rows_examined: 0  Rows_affected: 1
# Bytes_sent: 11
SET timestamp=1715311707;
insert into scott.emp (name, age) values ('张三', 38);
```

## Notes on the added header line

The line `# slow.log    未知服务    4` does not exist in the original log; it is added during deduplication:

- `slow.log` is the file the SQL was found in. When the slow log has been split into many smaller files, this makes it easy to locate where the statement first appeared.
- `未知服务` ("unknown service") is the service name. In a microservice architecture you can map the client IP to a service name, which is far more readable, so each team only has to look at the services it owns; you can also name the output files after the services and write each service's entries to its own file.
- The trailing number is how many times that SQL occurred in the input.
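Before looking at the full implementation, here is a minimal sketch of the parsing idea only: every entry begins with a `# Time:` line, so the file can be split on that marker. The class name below is made up for illustration and does not appear in the real code.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: split a slow log into entries on the "# Time:" marker.
public class SlowLogSplitSketch {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("slow.log"));
        List<List<String>> entries = new ArrayList<>();
        List<String> current = null;
        for (String line : lines) {
            if (line.startsWith("# Time:")) {   // a new entry begins here
                current = new ArrayList<>();
                entries.add(current);
            }
            if (current != null) {
                current.add(line);
            }
        }
        System.out.println("entries: " + entries.size()); // 7 for the sample above
    }
}
```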
## Deduplication and counting

`SlowQueryMetadata` models one log entry. Its `equals`/`hashCode` are based on a normalized form of the SQL, so statements that differ only in their parameter values collapse into a single entry, and `equals` also bumps the occurrence counter of the entry that is kept.

```java
import lombok.Data;
import lombok.experimental.Accessors;
import net.sf.jsqlparser.parser.CCJSqlParserUtil;
import net.sf.jsqlparser.statement.Statement;
import net.sf.jsqlparser.statement.delete.Delete;
import net.sf.jsqlparser.statement.insert.Insert;
import net.sf.jsqlparser.statement.select.Select;
import net.sf.jsqlparser.statement.select.SelectItem;
import net.sf.jsqlparser.statement.update.Update;
import net.sf.jsqlparser.statement.update.UpdateSet;
import net.sf.jsqlparser.util.TablesNamesFinder;
import org.springframework.util.StringUtils;

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

@Data
@Accessors(chain = true)
public class SlowQueryMetadata {

    private int count = 1;
    private String time;
    private String userAndHost;
    private String schema;
    private String queryTime;
    private String bytesSent;
    private String timestamp;
    private String sql;
    private String fileName;
    private String appName;

    /**
     * Keep the same layout as the original log metadata.
     */
    @Override
    public String toString() {
        return "# " + fileName + "\t" + appName + "\t" + count + "\n"
                + time + "\n"
                + userAndHost + "\n"
                + schema + "\n"
                + queryTime + "\n"
                + bytesSent + "\n"
                + timestamp + "\n"
                + sql + "\n";
    }

    /**
     * Treat statements with the same shape but different parameters as duplicates,
     * and count the occurrences on the entry that is kept.
     */
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        SlowQueryMetadata slowQueryMetadata = (SlowQueryMetadata) o;
        boolean equals = Objects.equals(uniqueString(sql), uniqueString(slowQueryMetadata.sql));
        if (equals) {
            slowQueryMetadata.setCount(slowQueryMetadata.getCount() + 1);
        }
        return equals;
    }

    @Override
    public int hashCode() {
        return Objects.hash(uniqueString(sql));
    }

    /**
     * Strip the parameter values from the SQL so that only its shape is compared.
     */
    private String uniqueString(String sql) {
        if (!StringUtils.hasText(sql)) {
            return "";
        }
        try {
            sql = sql.replace("\t", " ");
            sql = sql.replaceAll("\\s+", " ");
            // strip /* ... */ comments
            sql = sql.replaceAll("/\\*.*?\\*/", "");
            // parsing is the relatively expensive step
            Statement statement = CCJSqlParserUtil.parse(new StringReader(sql));
            if (statement instanceof Insert) {
                return insertStatement((Insert) statement);
            }
            if (statement instanceof Update) {
                return updateStatement((Update) statement);
            }
            if (statement instanceof Delete) {
                return deleteStatement((Delete) statement);
            }
            if (statement instanceof Select) {
                return selectStatement(statement);
            }
        } catch (Exception e) {
            return sql;
        }
        return sql;
    }

    private String selectStatement(Statement statement) {
        List<String> tables = new ArrayList<>(new TablesNamesFinder().getTables(statement));
        Select select = (Select) statement;
        try {
            List<SelectItem<?>> selectItems = select.getPlainSelect().getSelectItems();
            return "select " + selectItems + tables;
        } catch (Exception e) {
            return "select " + tables;
        }
    }

    /**
     * Two DELETE statements are considered the same when the table and the columns
     * of the WHERE clause match.
     */
    private String deleteStatement(Delete delete) {
        StringBuilder builder = new StringBuilder("delete ");
        builder.append(delete.getTable().getName().trim());
        builder.append(" where ");
        String[] ands = delete.getWhere().toString().toLowerCase().split("and");
        for (String and : ands) {
            builder.append(and.split("=")[0].trim()).append(" ");
        }
        return builder.toString();
    }

    /**
     * Two UPDATE statements are considered the same when the table, the updated columns
     * and the columns of the WHERE clause match.
     */
    private String updateStatement(Update update) {
        StringBuilder builder = new StringBuilder("update ");
        builder.append(update.getTable().getName().trim());
        builder.append(" column ");
        for (UpdateSet updateSet : update.getUpdateSets()) {
            builder.append(updateSet.getColumns().toString().trim());
        }
        builder.append(" where ");
        String[] ands = update.getWhere().toString().toLowerCase().split("and");
        for (String and : ands) {
            builder.append(and.split("=")[0].trim()).append(" ");
        }
        return builder.toString();
    }

    /**
     * Two INSERT statements are considered the same when the table and the column list match.
     */
    private String insertStatement(Insert statement) {
        return "insert " + statement.getTable().getName() + statement.getColumns().toString();
    }
}
```
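As a quick, hedged sanity check of that behaviour (the class name and the literals below are made up for illustration): two SELECTs that differ only in the literal they search for end up as a single set entry, and the counter of the surviving entry is incremented. Mutating state inside `equals` is unconventional, but it is how the class above counts occurrences.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative only: two statements with the same shape count as one entry.
public class DedupSketch {
    public static void main(String[] args) {
        SlowQueryMetadata a = new SlowQueryMetadata()
                .setSql("SELECT * FROM emp where name = '%三%'");
        SlowQueryMetadata b = new SlowQueryMetadata()
                .setSql("SELECT * FROM emp where name = '%四%'");

        Set<SlowQueryMetadata> set = new HashSet<>();
        set.add(a);                       // added, count stays 1
        set.add(b);                       // rejected as a duplicate, a's count becomes 2
        System.out.println(set.size());   // 1
        System.out.println(a.getCount()); // 2
    }
}
```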
`SlowQueryAnalysis` drives the whole process: it reads the log file(s), extracts one `SlowQueryMetadata` per entry, pushes them through a `HashSet` to deduplicate and count, sorts by occurrence count and writes the result to a file. `statistics4File` handles a single log file, `statistics4Directory` a whole directory of split log files; `getAppMap` is the IP-to-service-name mapping that turns the client IP into a readable service name.

```java
package com.study.jsqlparser;

import com.study.jsqlparser.dto.SlowQueryMetadata;
import com.study.jsqlparser.utils.FileUtils;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import org.springframework.util.StringUtils;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

@Slf4j
public class SlowQueryAnalysis {

    @SneakyThrows
    public static void main(String[] args) {
//        statistics4Directory();
        statistics4File();
    }

    /**
     * Deduplicate a single slow query log file.
     */
    private static void statistics4File() throws IOException {
        String fileFullName = "slow.log";
        // output file for the deduplicated SQL
        String deduplicationFileFullName = "deduplication.log";
        File slowFile = new File(fileFullName);
        List<String> list = FileUtils.readFileByLine(slowFile);
        log.info("从文件中读取到的总行数:{}", list.size());
        List<SlowQueryMetadata> slowQueryMetadataList = getSqlDTOList(list, slowFile.getName());
        log.info("提取出了:{}条sql", slowQueryMetadataList.size());
        HashSet<SlowQueryMetadata> set = new HashSet<>(slowQueryMetadataList);
        log.info("去重后的sql条数:{}", set.size());
        List<String> deduplication = set.stream()
                .sorted(Comparator.comparingInt(SlowQueryMetadata::getCount).reversed())
                .map(SlowQueryMetadata::toString)
                .collect(Collectors.toList());
        FileUtils.write2File(new File(deduplicationFileFullName), deduplication);
    }

    /**
     * Deduplicate every slow query log file in a directory.
     */
    private static void statistics4Directory() throws IOException {
        String directoryFullName = "E:\\xinao\\sql优化\\0516慢SQL已分割\\sql\\";
        // output file for the deduplicated SQL
        String deduplicationFileFullName = "deduplication.sql";
        Set<SlowQueryMetadata> set = new HashSet<>();
        for (File file : new File(directoryFullName).listFiles()) {
            String fileName = file.getName();
            log.info(fileName);
            List<String> list = FileUtils.readFileByLine(file);
            log.info("从文件中读取到的总行数:{}", list.size());
            List<SlowQueryMetadata> slowQueryMetadataList = getSqlDTOList(list, fileName);
            log.info("提取出了:{}条sql", slowQueryMetadataList.size());
            set.addAll(slowQueryMetadataList);
        }
        log.info("去重后的sql条数:{}", set.size());
        List<String> deduplication = set.stream()
                .sorted(Comparator.comparingInt(SlowQueryMetadata::getCount).reversed())
                .map(SlowQueryMetadata::toString)
                .collect(Collectors.toList());
        FileUtils.write2File(new File(deduplicationFileFullName), deduplication);
    }

    private static List<SlowQueryMetadata> getSqlDTOList(List<String> list, String fileName) {
        List<SlowQueryMetadata> slowQueryMetadataList = new ArrayList<>();
        for (int i = 0; i < list.size(); i++) {
            String line = list.get(i);
            if (!StringUtils.hasText(line)) {
                continue;
            }
            if (line.trim().startsWith("# Time:")) {
                SlowQueryMetadata slowQueryMetadata = new SlowQueryMetadata();
                slowQueryMetadata.setFileName(fileName);
                slowQueryMetadataList.add(slowQueryMetadata);
                slowQueryMetadata.setTime(line);
                boolean multilineComment = false;
                while (i < list.size() - 1) {
                    i++;
                    line = list.get(i);
                    // handle /* ... */ comments that span several lines
                    if (line.trim().contains("/*")) {
                        multilineComment = true;
                        continue;
                    }
                    if (line.trim().contains("*/")) {
                        multilineComment = false;
                        continue;
                    }
                    if (multilineComment) {
                        continue;
                    }
                    if (line.trim().startsWith("# Time:")) {
                        i--;
                        break;
                    }
                    if (line.startsWith("# User@Host:")) {
                        slowQueryMetadata.setUserAndHost(line);
                        evaluationAppName(line, slowQueryMetadata);
                    }
                    if (line.startsWith("# Schema:")) {
                        slowQueryMetadata.setSchema(line);
                    }
                    if (line.startsWith("# Query_time:")) {
                        slowQueryMetadata.setQueryTime(line);
                    }
                    if (line.startsWith("# Bytes_sent:")) {
                        slowQueryMetadata.setBytesSent(line);
                    }
                    if (line.startsWith("SET timestamp")) {
                        slowQueryMetadata.setTimestamp(line);
                    }
                    if (line.toLowerCase().trim().startsWith("insert")
                            || line.toLowerCase().trim().startsWith("delete")
                            || line.toLowerCase().trim().startsWith("update")
                            || line.toLowerCase().trim().startsWith("select")) {
                        StringBuilder sql = new StringBuilder(line);
                        while (i < list.size() - 1) {
                            i++;
                            line = list.get(i);
                            if (line.startsWith("# Time: ")) {
                                i--;
                                break;
                            }
                            if (StringUtils.hasText(line) && !line.trim().startsWith("--")) {
                                sql.append(line);
                            }
                        }
                        slowQueryMetadata.setSql(sql.toString());
                        break;
                    }
                }
            }
        }
        return slowQueryMetadataList;
    }

    private static void evaluationAppName(String line, SlowQueryMetadata slowQueryMetadata) {
        try {
            // # User@Host: root[root] @ [192.168.100.101]  Id: 13523930
            String ip = line.substring(line.lastIndexOf("[") + 1, line.lastIndexOf("]"));
            Map<String, String> appMap = getAppMap();
            String appName = appMap.get(ip);
            if (!StringUtils.hasText(appName)) {
                appName = "未知服务";
            }
            slowQueryMetadata.setAppName(appName);
        } catch (Exception e) {
            slowQueryMetadata.setAppName("异常服务");
        }
    }

    private static Map<String, String> getAppMap() {
        Map<String, String> appMap = new HashMap<>();
        appMap.put("192.168.100.101", "com-study-gateway");
        appMap.put("192.168.100.102", "com-study-gateway");
        appMap.put("192.168.100.103", "com-study-registry");
        appMap.put("192.168.100.104", "com-study-registry");
        appMap.put("192.168.100.105", "com-study-obs-web");
        appMap.put("192.168.100.106", "com-study-uid-web");
        return appMap;
    }
}
```
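The code above calls a small `FileUtils` helper that is not shown in the post. A minimal sketch of what it might look like, assuming plain line-by-line reading and writing with UTF-8 (the method names `readFileByLine` and `write2File` are taken from the calls above; the implementation itself is an assumption):

```java
package com.study.jsqlparser.utils;

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.List;

// Hypothetical implementation of the helper used by SlowQueryAnalysis.
public class FileUtils {

    /** Read the whole file as a list of lines. */
    public static List<String> readFileByLine(File file) throws IOException {
        return Files.readAllLines(file.toPath(), StandardCharsets.UTF_8);
    }

    /** Write the given lines to the target file, creating or overwriting it. */
    public static void write2File(File file, List<String> lines) throws IOException {
        Files.write(file.toPath(), lines, StandardCharsets.UTF_8);
    }
}
```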
## Using JSQLParser

The normalization above leans on a handful of JSQLParser APIs. The following JUnit tests show what the parser returns for each statement type.

### INSERT

```java
@Test
public void testInsertStatement() {
    String sql = "INSERT INTO emp (name, age) VALUES ('张三', 38)";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Insert) {
            Insert insert = (Insert) parse;
            Table table = insert.getTable();
            String name = table.getName();
            log.info(name);
            Values values = insert.getValues();
            log.info(values.toString());
            ExpressionList<Column> columns = insert.getColumns();
            log.info(columns.toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
```

```
19:02:07.239 [main] INFO com.study.jsqlparser.JSQLParserTest - emp
19:02:07.242 [main] INFO com.study.jsqlparser.JSQLParserTest - VALUES ('张三', 38)
19:02:07.242 [main] INFO com.study.jsqlparser.JSQLParserTest - name, age
```

### DELETE

```java
@Test
public void testDeleteStatement() {
    String sql = "DELETE FROM emp WHERE id = 1";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Delete) {
            Delete delete = (Delete) parse;
            Table table = delete.getTable();
            log.info(table.toString());
            Expression where = delete.getWhere();
            log.info(where.toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
```

```
19:08:04.037 [main] INFO com.study.jsqlparser.JSQLParserTest - emp
19:08:04.039 [main] INFO com.study.jsqlparser.JSQLParserTest - id = 1
```
### UPDATE

```java
@Test
public void testUpdateStatement() {
    String sql = "UPDATE emp SET ename = '张三', age = 18 WHERE id = 1045983421034180";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Update) {
            Update update = (Update) parse;
            log.info(update.getTable().toString());
            List<UpdateSet> updateSets = update.getUpdateSets();
            for (UpdateSet updateSet : updateSets) {
                ExpressionList<Column> columns = updateSet.getColumns();
                log.info(columns.toString());
                ExpressionList<?> values = updateSet.getValues();
                log.info(values.toString());
            }
            Expression where = update.getWhere();
            log.info(where.toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
```

```
19:19:18.450 [main] INFO com.study.jsqlparser.JSQLParserTest - emp
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - ename
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - '张三'
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - age
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - 18
19:19:18.452 [main] INFO com.study.jsqlparser.JSQLParserTest - id = 1045983421034180
```

### Simple SELECT

```java
@Test
public void testSelectStatement() {
    String sql = "SELECT name,age FROM emp WHERE name LIKE '%三%' group by name,age having avg(age) > 18 limit 10";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Select) {
            Select select = (Select) parse;
            PlainSelect plainSelect = select.getPlainSelect();
            log.info(plainSelect.getSelectItems().toString());
            log.info(plainSelect.getFromItem().toString());
            log.info(plainSelect.getWhere().toString());
            log.info(plainSelect.getGroupBy().toString());
            log.info(plainSelect.getHaving().toString());
            log.info(plainSelect.getLimit().toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
```

```
19:47:22.904 [main] INFO com.study.jsqlparser.JSQLParserTest - [name, age]
19:47:22.906 [main] INFO com.study.jsqlparser.JSQLParserTest - emp
19:47:22.906 [main] INFO com.study.jsqlparser.JSQLParserTest - name LIKE '%三%'
19:47:22.906 [main] INFO com.study.jsqlparser.JSQLParserTest - GROUP BY name, age
19:47:22.906 [main] INFO com.study.jsqlparser.JSQLParserTest - avg(age) > 18
19:47:22.907 [main] INFO com.study.jsqlparser.JSQLParserTest - LIMIT 10
```

### Complex SELECT

```java
@Test
public void testComplexSelectStatement() {
    String sql = "select d.deptno,d.dname,avg(sal) from emp e join dept d on e.deptno = d.deptno where e.is_deleted = 0 group by d.deptno,d.dname having avg(sal) > 10000 ORDER BY avg(sal) limit 10";
    try {
        Statement parse = CCJSqlParserUtil.parse(sql);
        if (parse instanceof Select) {
            Select select = (Select) parse;
            PlainSelect plainSelect = select.getPlainSelect();
            log.info(plainSelect.getSelectItems().toString());
            log.info(plainSelect.getFromItem().toString());
            log.info(plainSelect.getJoins().toString());
            log.info(plainSelect.getWhere().toString());
            log.info(plainSelect.getGroupBy().toString());
            log.info(plainSelect.getHaving().toString());
            log.info(plainSelect.getOrderByElements().toString());
            log.info(plainSelect.getLimit().toString());
        }
    } catch (JSQLParserException e) {
        e.printStackTrace();
    }
}
```

```
08:34:32.895 [main] INFO com.study.jsqlparser.JSQLParserTest - [d.deptno, d.dname, avg(sal)]
08:34:32.898 [main] INFO com.study.jsqlparser.JSQLParserTest - emp e
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - [JOIN dept d ON e.deptno = d.deptno]
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - e.is_deleted = 0
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - GROUP BY d.deptno, d.dname
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - avg(sal) > 10000
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - LIMIT 10
08:34:32.899 [main] INFO com.study.jsqlparser.JSQLParserTest - [avg(sal)]
```

### Complete JSQLParserTest class

```java
import lombok.extern.slf4j.Slf4j;
import net.sf.jsqlparser.JSQLParserException;
import net.sf.jsqlparser.expression.Expression;
import net.sf.jsqlparser.expression.operators.relational.ExpressionList;
import net.sf.jsqlparser.parser.CCJSqlParserUtil;
import net.sf.jsqlparser.schema.Column;
import net.sf.jsqlparser.schema.Table;
import net.sf.jsqlparser.statement.Statement;
import net.sf.jsqlparser.statement.delete.Delete;
import net.sf.jsqlparser.statement.insert.Insert;
import net.sf.jsqlparser.statement.select.PlainSelect;
import net.sf.jsqlparser.statement.select.Select;
import net.sf.jsqlparser.statement.select.Values;
import net.sf.jsqlparser.statement.update.Update;
import net.sf.jsqlparser.statement.update.UpdateSet;
import org.junit.jupiter.api.Test;

import java.util.List;

@Slf4j
public class JSQLParserTest {

    @Test
    public void testInsertStatement() {
        String sql = "INSERT INTO emp (name, age) VALUES ('张三', 38)";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Insert) {
                Insert insert = (Insert) parse;
                Table table = insert.getTable();
                String name = table.getName();
                log.info(name);
                Values values = insert.getValues();
                log.info(values.toString());
                ExpressionList<Column> columns = insert.getColumns();
                log.info(columns.toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testDeleteStatement() {
        String sql = "DELETE FROM emp WHERE id = 1";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Delete) {
                Delete delete = (Delete) parse;
                Table table = delete.getTable();
                log.info(table.toString());
                Expression where = delete.getWhere();
                log.info(where.toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testUpdateStatement() {
        String sql = "UPDATE emp SET ename = '张三', age = 18 WHERE id = 1045983421034180";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Update) {
                Update update = (Update) parse;
                log.info(update.getTable().toString());
                List<UpdateSet> updateSets = update.getUpdateSets();
                for (UpdateSet updateSet : updateSets) {
                    ExpressionList<Column> columns = updateSet.getColumns();
                    log.info(columns.toString());
                    ExpressionList<?> values = updateSet.getValues();
                    log.info(values.toString());
                }
                Expression where = update.getWhere();
                log.info(where.toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testSelectStatement() {
        String sql = "SELECT name,age FROM emp WHERE name LIKE '%三%' group by name,age having avg(age) > 18 limit 10";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Select) {
                Select select = (Select) parse;
                PlainSelect plainSelect = select.getPlainSelect();
                log.info(plainSelect.getSelectItems().toString());
                log.info(plainSelect.getFromItem().toString());
                log.info(plainSelect.getWhere().toString());
                log.info(plainSelect.getGroupBy().toString());
                log.info(plainSelect.getHaving().toString());
                log.info(plainSelect.getLimit().toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void testComplexSelectStatement() {
        String sql = "select d.deptno,d.dname,avg(sal) from emp e join dept d on e.deptno = d.deptno where e.is_deleted = 0 group by d.deptno,d.dname having avg(sal) > 10000 ORDER BY avg(sal) limit 10";
        try {
            Statement parse = CCJSqlParserUtil.parse(sql);
            if (parse instanceof Select) {
                Select select = (Select) parse;
                PlainSelect plainSelect = select.getPlainSelect();
                log.info(plainSelect.getSelectItems().toString());
                log.info(plainSelect.getFromItem().toString());
                log.info(plainSelect.getJoins().toString());
                log.info(plainSelect.getWhere().toString());
                log.info(plainSelect.getGroupBy().toString());
                log.info(plainSelect.getHaving().toString());
                log.info(plainSelect.getOrderByElements().toString());
                log.info(plainSelect.getLimit().toString());
            }
        } catch (JSQLParserException e) {
            e.printStackTrace();
        }
    }
}
```
## Conclusion

With the approach described above, slow query log files of several gigabytes can be deduplicated effectively, which shrinks the files considerably and makes reviewing the queries far more efficient. The technique is not limited to slow query logs; it extends naturally to other log files with similar deduplication needs.

## Future work

Possible directions for further improvement:

- Smarter SQL normalization, so that more complex statements are handled well.
- Parallelizing the processing, using multiple threads or distributed computation to speed things up (a minimal sketch follows at the end of this post).
- A graphical tool that lets operations staff run the deduplication and browse the results.

With continued optimization along these lines, slow SQL log processing can become both faster and more accurate, giving the company a solid foundation for its data processing and tuning work.
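On the parallelization point, here is a hedged sketch of one possible direction; it is not part of the original code. Parsing each file is independent work, so the per-file parsing of `statistics4Directory` could run on a parallel stream, while the merge into the `HashSet` stays single-threaded because `SlowQueryMetadata.equals` mutates the occurrence counter. The directory path and the `parseOneFile` wiring below are placeholders.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch only: parse files in parallel, merge sequentially.
public class ParallelSlowQueryAnalysisSketch {

    public static void main(String[] args) {
        File[] files = new File("slow-logs").listFiles(); // placeholder directory

        // Each file can be parsed independently on the common fork-join pool.
        List<List<SlowQueryMetadata>> perFile = Arrays.stream(files)
                .parallel()
                .map(ParallelSlowQueryAnalysisSketch::parseOneFile)
                .collect(Collectors.toList());

        // Merging must stay single-threaded: SlowQueryMetadata.equals mutates the counter.
        Set<SlowQueryMetadata> set = new HashSet<>();
        perFile.forEach(set::addAll);
        System.out.println("distinct slow queries: " + set.size());
    }

    private static List<SlowQueryMetadata> parseOneFile(File file) {
        // Delegate to FileUtils.readFileByLine + SlowQueryAnalysis.getSqlDTOList here
        // (getSqlDTOList would need to be made accessible); the empty list keeps the sketch compilable.
        return new ArrayList<>();
    }
}
```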