A detailed look at optimizing a MyBatis batch insert of 100,000 records
Author: 大造梦家
This article walks through the process of optimizing a MyBatis batch insert of 100,000 records. The example code is covered in detail and should be a useful reference for study or work; interested readers can follow along below.
When inserting a large amount of data with MyBatis, I switched from inserting row by row in a loop to a single batch insert to improve throughput. The mapper:
```java
package com.lcy.service.mapper;

import com.lcy.service.pojo.TestVO;
import org.apache.ibatis.annotations.Insert;

import java.util.List;

public interface TestMapper {

    // The SQL body of @Insert was lost when the article was extracted; a multi-row
    // insert built with MyBatis' <script>/<foreach> dynamic SQL matches the behaviour
    // the article describes (one long INSERT ... VALUES (...), (...), ... statement).
    // The table name `test` is an assumption.
    @Insert("<script>" +
            "insert into test (t1, t2, t3, t4, t5) values " +
            "<foreach collection='list' item='item' separator=','>" +
            "(#{item.t1}, #{item.t2}, #{item.t3}, #{item.t4}, #{item.t5})" +
            "</foreach>" +
            "</script>")
    Integer testBatchInsert(List<TestVO> list);
}
```
The entity class:
```java
package com.lcy.service.pojo;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class TestVO {
    private String t1;
    private String t2;
    private String t3;
    private String t4;
    private String t5;
}
```
The test class:
```java
import com.lcy.service.TestApplication;
import com.lcy.service.mapper.TestMapper;
import com.lcy.service.pojo.TestVO;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.ArrayList;
import java.util.List;

@SpringBootTest(classes = TestApplication.class)
@RunWith(SpringRunner.class)
public class TestDemo {

    @Autowired
    private TestMapper testMapper;

    @Test
    public void insert() {
        List<TestVO> list = new ArrayList<>();
        for (int i = 0; i < 200000; i++) {
            list.add(new TestVO(i + "," + i, i + "," + i, i + "," + i, i + "," + i, i + "," + i));
        }
        System.out.println(testMapper.testBatchInsert(list));
    }
}
```
To reproduce the bug, I capped the JVM heap:
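The article does not show the exact options used, so the values below are illustrative only; capping the heap for a test run is typically done with the `-Xmx` family of JVM flags:

```shell
# Illustrative only: shrink the heap so the OutOfMemoryError reproduces quickly
java -Xms64m -Xmx64m -jar test-service.jar

# Or pass the flag to the Maven Surefire test run instead:
mvn test -DargLine="-Xmx64m"
```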
Running the test class fails with:
```
java.lang.OutOfMemoryError: Java heap space
	at java.base/java.util.Arrays.copyOf(Arrays.java:3746)
```
As the trace shows, the heap is exhausted while Arrays.copyOf allocates an ever-larger backing array: building a single SQL statement for 200,000 rows overflows the heap (note this is heap exhaustion, not a stack overflow).
The fix: insert in batches:
```java
import com.lcy.service.TestApplication;
import com.lcy.service.mapper.TestMapper;
import com.lcy.service.pojo.TestVO;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

@SpringBootTest(classes = TestApplication.class)
@RunWith(SpringRunner.class)
public class TestDemo {

    @Autowired
    private TestMapper testMapper;

    @Test
    public void insert() {
        List<TestVO> list = new ArrayList<>();
        for (int i = 0; i < 200000; i++) {
            list.add(new TestVO(i + "," + i, i + "," + i, i + "," + i, i + "," + i, i + "," + i));
        }
        int batches = list.size() / 10000;
        for (int i = 0; i < batches; i++) {
            // skip(i * 10000) jumps over the records already inserted;
            // limit(10000) takes the next 10,000 records for this batch
            testMapper.testBatchInsert(list.stream().skip(i * 10000L).limit(10000).collect(Collectors.toList()));
        }
    }
}
```
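Two caveats with the stream approach: `skip` re-walks the list from the start for every batch, and the integer division silently drops a final partial batch when the list size is not a multiple of 10,000. A `subList`-based partition avoids both; this is a standalone sketch independent of MyBatis, and the helper name `BatchPartition` is mine, not the article's:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPartition {

    // Split a list into consecutive sub-lists of at most batchSize elements.
    // subList returns a view, so no elements are copied.
    static <T> List<List<T>> partition(List<T> list, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int from = 0; from < list.size(); from += batchSize) {
            int to = Math.min(from + batchSize, list.size());
            batches.add(list.subList(from, to));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 25; i++) data.add(i);
        List<List<Integer>> batches = partition(data, 10);
        System.out.println(batches.size());        // 3 batches: 10 + 10 + 5
        System.out.println(batches.get(2).size()); // 5: the remainder is kept
    }
}
```

In the test above one would then call `testMapper.testBatchInsert(batch)` for each sub-list instead of re-streaming the whole list per batch.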
Another option is to raise the JVM heap limit, but I don't recommend it: besides eating memory, a large enough batch produces a SQL statement so long that the database rejects it.
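A further alternative the article doesn't cover, sketched here for completeness: MyBatis' BATCH executor reuses one prepared statement and sends rows in JDBC batches, so no giant SQL string is built at all. This is only a sketch; it needs a configured `SqlSessionFactory` and a single-row mapper method, here called `insertOne`, which is hypothetical and not part of the article's mapper:

```java
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

import com.lcy.service.mapper.TestMapper;
import com.lcy.service.pojo.TestVO;

import java.util.List;

public class BatchExecutorSketch {

    // Sketch: insert records through MyBatis' BATCH executor.
    // Assumes TestMapper has a hypothetical single-row method insertOne(TestVO).
    static void batchInsert(SqlSessionFactory factory, List<TestVO> records) {
        try (SqlSession session = factory.openSession(ExecutorType.BATCH)) {
            TestMapper mapper = session.getMapper(TestMapper.class);
            int count = 0;
            for (TestVO vo : records) {
                mapper.insertOne(vo);          // queued locally, not yet sent
                if (++count % 1000 == 0) {
                    session.flushStatements(); // push the queued batch to the DB
                }
            }
            session.flushStatements();
            session.commit();
        }
    }
}
```

The periodic `flushStatements()` keeps the driver-side batch buffer bounded, which sidesteps both the heap pressure and the statement-length limit described above.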
This concludes the article on optimizing a MyBatis batch insert of 100,000 records. For more on MyBatis batch inserts, search 脚本之家's earlier articles or browse the related articles below, and please keep supporting 脚本之家!