MySQL Series - Sharding Databases and Tables (3): Implementing Sharding with Sharding-JDBC in Practice (Part 2)

September 25, 2023

For the series overview, see MySQL Series - Overall Architecture Introduction.
This article follows directly from MySQL Series - Sharding Databases and Tables (3): Implementing Sharding with Sharding-JDBC in Practice - Part 1.

Detailed Design

Table Model Changes

The current table layout is as follows:

Logical table       | point_shard1                            | point_shard2                            | Sharding column
point_balance       | point_balance1, point_balance2          | point_balance1, point_balance2          | uid (long)
point_balance_his   | point_balance_his1, point_balance_his2  | point_balance_his1, point_balance_his2  | uid (long)
coupon_info         | coupon_info1, coupon_info2              | coupon_info1, coupon_info2              | coupon_code (String)

The design above covers sharding by keys of different types (a numeric uid and a string coupon_code), as well as configuration tables that remain in a single database.

The corresponding table DDL is as follows:

DROP TABLE IF EXISTS `point_balance_his1`;
CREATE TABLE IF NOT EXISTS `point_balance_his1` (
    `id` BIGINT(20) NOT NULL AUTO_INCREMENT COMMENT '主键ID',
    `uid` BIGINT(20) NOT NULL COMMENT 'uid',
    `point_sum` BIGINT(20) NOT NULL COMMENT '积分总和',
    `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
    `country` VARCHAR(20) NOT NULL DEFAULT 'CN' COMMENT '国家',
    PRIMARY KEY (`id`),
    INDEX idx_uid(`uid`)
    ) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COMMENT='用户积分变动历史表1';

DROP TABLE IF EXISTS `point_balance_his2`;
CREATE TABLE IF NOT EXISTS `point_balance_his2` (
    `id` BIGINT(20) NOT NULL AUTO_INCREMENT COMMENT '主键ID',
    `uid` BIGINT(20) NOT NULL COMMENT 'uid',
    `point_sum` BIGINT(20) NOT NULL COMMENT '积分总和',
    `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
    `country` VARCHAR(20) NOT NULL DEFAULT 'CN' COMMENT '国家',
    PRIMARY KEY (`id`),
    INDEX idx_uid(`uid`)
    ) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COMMENT='用户积分变动历史表2';

DROP TABLE IF EXISTS `coupon_info1`;
CREATE TABLE coupon_info1 (
    `id` INT AUTO_INCREMENT NOT NULL COMMENT '自增ID',
    `coupon_code` VARCHAR(20) NOT NULL COMMENT '券码CODE',
    `rev_uid` BIGINT(20) DEFAULT NULL COMMENT '领券UID',
    `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
    `modify_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
    `country` VARCHAR(20) NOT NULL DEFAULT 'CN' COMMENT '国家',
    PRIMARY KEY (`id`),
    INDEX idx_coupon_code_country(`coupon_code`, `country`)
    ) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COMMENT='券码表1';

DROP TABLE IF EXISTS `coupon_info2`;
CREATE TABLE coupon_info2 (
    `id` INT AUTO_INCREMENT NOT NULL COMMENT '自增ID',
    `coupon_code` VARCHAR(20) NOT NULL COMMENT '券码CODE',
    `rev_uid` BIGINT(20) DEFAULT NULL COMMENT '领券UID',
    `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
    `modify_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '修改时间',
    `country` VARCHAR(20) NOT NULL DEFAULT 'CN' COMMENT '国家',
    PRIMARY KEY (`id`),
    INDEX idx_coupon_code_country(`coupon_code`, `country`)
    ) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COMMENT='券码表2';

Maven Dependencies

This mainly involves adding sharding-jdbc and its supporting dependencies.

Note: this installment is built on sharding-jdbc 5.2.

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
        <version>2.7.9</version>
        <exclusions>
            <exclusion>
                <groupId>org.yaml</groupId>
                <artifactId>snakeyaml</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jdbc</artifactId>
        <version>2.7.9</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.49</version>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.2.16</version>
    </dependency>
    <dependency>
        <groupId>org.aspectj</groupId>
        <artifactId>aspectjrt</artifactId>
        <version>1.9.5</version>
    </dependency>
    <dependency>
        <groupId>org.aspectj</groupId>
        <artifactId>aspectjweaver</artifactId>
        <version>1.9.5</version>
    </dependency>
    <dependency>
        <groupId>cglib</groupId>
        <artifactId>cglib-nodep</artifactId>
        <version>3.2.10</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.13</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <version>2.7.9</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.hyw.study</groupId>
        <artifactId>com.toby.logback</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
        <version>5.2.0</version>
    </dependency>
    <dependency>
        <groupId>org.yaml</groupId>
        <artifactId>snakeyaml</artifactId>
        <version>1.33</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jaxb</groupId>
        <artifactId>jaxb-runtime</artifactId>
        <version>2.3.8</version>
    </dependency>
</dependencies>

Key Technical Points

Overall Framework

Create a sharding-databases-tables.yaml configuration file under META-INF; it holds the Druid data source settings, the sharding algorithms, read/write splitting, and so on.
Then load it from Java code to build the DataSource:

import java.io.IOException;
import java.sql.SQLException;
import javax.sql.DataSource;

import org.apache.shardingsphere.driver.api.yaml.YamlShardingSphereDataSourceFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.ResourceUtils;

@Configuration
public class ShardingSphereConfig {

    @Bean
    public DataSource dataSource() throws IOException, SQLException {
        // Resolve the classpath YAML and let ShardingSphere build the DataSource from it
        return YamlShardingSphereDataSourceFactory.createDataSource(
                ResourceUtils.getFile("classpath:META-INF/sharding-databases-tables.yaml"));
    }
}

The MyBatis-related configuration is as follows.
application.yml:

spring:
  main:
    allow-bean-definition-overriding: true
mybatis:
  config-location: classpath:sqlmap-config.xml

sqlmap-config.xml holds the global MyBatis settings.

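The file's original content is not preserved in this excerpt; below is a minimal sketch of a typical MyBatis global configuration, where the individual settings are illustrative assumptions rather than the original values.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
    <settings>
        <!-- Illustrative settings only; the original file's values were not preserved -->
        <setting name="cacheEnabled" value="true"/>
        <setting name="mapUnderscoreToCamelCase" value="true"/>
        <setting name="lazyLoadingEnabled" value="false"/>
        <setting name="defaultStatementTimeout" value="3"/>
    </settings>
</configuration>
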
The Mapper uses annotations this time, dropping the XML mapper files:

import java.util.List;

import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

public interface CouponInfoMapper {

    @Insert("insert into coupon_info (coupon_code, rev_uid, country)\n" +
            "        values (#{couponCode}, #{recUid}, #{country})\n")
    int addCouponCode(CouponInfo couponInfo);

    @Select("select id,\n" +
            "        coupon_code,\n" +
            "        rev_uid,\n" +
            "        create_time,\n" +
            "        modify_time,\n" +
            "        country" +
            "        from coupon_info where coupon_code=#{couponCode} and country=#{country}")
    List<CouponInfo> selectByCouponCode(@Param("couponCode") String couponCode, @Param("country") String country);
}
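
Because coupon_info is sharded by coupon_code, any query that carries the sharding key is routed to exactly one physical table. A small usage sketch (the service wrapper below is hypothetical and not part of the original project):

import java.util.List;

import org.springframework.stereotype.Service;

// Hypothetical wrapper around the mapper; Sharding-JDBC rewrites the logical
// table coupon_info to coupon_info1/coupon_info2 based on the coupon_code value.
@Service
public class CouponInfoService {

    private final CouponInfoMapper couponInfoMapper;

    public CouponInfoService(CouponInfoMapper couponInfoMapper) {
        this.couponInfoMapper = couponInfoMapper;
    }

    public List<CouponInfo> findByCode(String couponCode, String country) {
        // coupon_code is the sharding column, so this query hits a single shard
        return couponInfoMapper.selectByCouponCode(couponCode, country);
    }
}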

Sharding the business table point_balance by uid

Configure the data sources in the YAML file:

dataSources:
  ds_1:
    dataSourceClassName: com.alibaba.druid.pool.DruidDataSource
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://172.17.108.172:3306/point_shard1?autoReconnect=true&characterEncoding=UTF-8&useUnicode=true&connectTimeout=3000&socketTimeout=3000
    username: root
    password: -
  ds_2:
    dataSourceClassName: com.alibaba.druid.pool.DruidDataSource
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://172.17.108.172:3306/point_shard2?autoReconnect=true&characterEncoding=UTF-8&useUnicode=true&connectTimeout=3000&socketTimeout=3000
    username: root
    password: -

Note that with Druid the connection URL property is url; if the pool is HikariCP, the property is jdbcUrl instead.
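
For comparison, a HikariCP-backed entry would look roughly like the sketch below (it reuses the URL above; only the data source class and the jdbcUrl property change):

dataSources:
  ds_1:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.jdbc.Driver
    jdbcUrl: jdbc:mysql://172.17.108.172:3306/point_shard1?autoReconnect=true&characterEncoding=UTF-8&useUnicode=true&connectTimeout=3000&socketTimeout=3000
    username: root
    password: -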

Configure the sharding rules; refer to the official documentation (rule configuration) for the meaning of each property:

- !SHARDING
  tables:
    point_balance:
      actualDataNodes: ds_${1..2}.point_balance${1..2}
      databaseStrategy:
        standard:
          shardingColumn: uid
          shardingAlgorithmName: d_uid_inline
      tableStrategy:
        standard:
          shardingColumn: uid
          shardingAlgorithmName: t_uid_inline
      auditStrategy:
        auditorNames:
          - sharding_key_required_auditor
        allowHintDisable: true

d_uid_inline is the database sharding algorithm:

shardingAlgorithms:
  d_uid_inline:
    type: CLASS_BASED
    props:
      algorithmClassName: com.toby.sharding.jdbc.source.start.db.sharding.UIDDbSharding
      strategy: STANDARD

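The tableStrategy above also references t_uid_inline; its definition is not shown in this excerpt, but it follows the same CLASS_BASED pattern (the UIDTableSharding class name here is an assumption):

shardingAlgorithms:
  t_uid_inline:
    type: CLASS_BASED
    props:
      algorithmClassName: com.toby.sharding.jdbc.source.start.db.sharding.UIDTableSharding
      strategy: STANDARD
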
The concrete database routing logic lives in com.toby.sharding.jdbc.source.start.db.sharding.UIDDbSharding; the shared routing code sits in an abstract base class:

import java.util.*;

import com.google.common.collect.Range;
import org.apache.shardingsphere.sharding.api.sharding.standard.PreciseShardingValue;
import org.apache.shardingsphere.sharding.api.sharding.standard.RangeShardingValue;
import org.apache.shardingsphere.sharding.api.sharding.standard.StandardShardingAlgorithm;

public abstract class AbstractLongShardingAlgorithm implements StandardShardingAlgorithm<Long> {

    Properties properties;

    @Override
    public String doSharding(Collection<String> collection, PreciseShardingValue<Long> preciseShardingValue) {
        String name = routeTable(collection, preciseShardingValue.getValue());
        if (Objects.isNull(name)) {
            throw new UnsupportedOperationException("cannot find route for " + preciseShardingValue);
        }
        // return the data source (or table) name exactly as configured in the YAML
        return name;
    }

    @Override
    public Collection<String> doSharding(Collection<String> collection, RangeShardingValue<Long> rangeShardingValue) {
        Range<Long> range = rangeShardingValue.getValueRange();
        Set<String> names = new HashSet<>();
        // the original listing is truncated here; reconstructed: route every value in the range
        for (long start = range.lowerEndpoint(); start <= range.upperEndpoint(); start++) {
            names.add(routeTable(collection, start));
        }
        return names;
    }

    // subclasses map a uid value to one of the available target names
    protected abstract String routeTable(Collection<String> availableTargetNames, long value);
}

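The UIDDbSharding subclass itself is not shown above; here is a minimal sketch, assuming uid is routed to ds_1/ds_2 by a simple modulo (the real mapping may differ, and depending on the exact 5.2 SPI you may also need to implement methods such as getType() or init()):

import java.util.Collection;

// Sketch of the concrete database sharding algorithm referenced by d_uid_inline.
public class UIDDbSharding extends AbstractLongShardingAlgorithm {

    @Override
    protected String routeTable(Collection<String> availableTargetNames, long uid) {
        // Assumed routing: odd uid -> ds_1, even uid -> ds_2 (matching the ds_1/ds_2 names configured above)
        String target = "ds_" + (uid % 2 == 1 ? 1 : 2);
        return availableTargetNames.contains(target) ? target : null;
    }
}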