A Detailed Explanation of Go's Slice Growth Mechanism, with Examples

Author: ProblemTerminator

A slice in Go is a dynamic array that grows automatically to accommodate more data. This article gives a detailed explanation of Go's slice growth mechanism, illustrated with code throughout; readers who need it are welcome to use it as a reference.

This article explains slice growth in considerable detail with examples. It is fairly long and was typed out by hand, word by word, so quite a bit of effort went into it; thank you for your support.

Understanding Slice Growth

Let's start by building an intuitive picture of what slice "growth" (expansion) means.

Growth happens when the existing capacity can no longer hold the new elements, i.e. the length after an append would exceed the current capacity. A new block of memory is then allocated based on the original slice and the required new capacity, and once the growth operation finishes, the new capacity is fixed.

Because a slice stores its data in an underlying array, and an array has a fixed length that cannot grow, whenever the old array can no longer accommodate the new elements the grown slice ends up backed by a new array rather than the old one; the data in the old array is, of course, copied over in full to the new array.

Note that the key quantity in each growth step is the capacity (cap), not the "length" of the underlying array.
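
A minimal demonstration of this (a sketch of my own, not runtime code): when an append exceeds the current capacity, the address of the first element changes, showing that the data now lives in a new underlying array.

package main

import "fmt"

func main() {
	s := make([]int, 3, 3) // len=3, cap=3: the backing array is full
	fmt.Printf("before: len=%d cap=%d first element at %p\n", len(s), cap(s), &s[0])

	s = append(s, 4) // exceeds cap, so a new array is allocated and the data copied
	fmt.Printf("after:  len=%d cap=%d first element at %p\n", len(s), cap(s), &s[0])
}

The two printed addresses differ (the exact values vary from run to run), confirming that the old array's contents were copied into a newly allocated one.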

Source Code Analysis of the Growth Mechanism

The logic is a little involved, so the full source is listed first and then analyzed piece by piece. It lives in src/runtime/slice.go.

(All source quoted in this article is from go1.17.13.)

// growslice handles slice growth during append.
// It is passed the slice element type, the old slice, and the desired new minimum capacity,
// and it returns a new slice with at least that capacity, with the old data
// copied into it.
// The new slice's length is set to the old slice's length,
// NOT to the new requested capacity.
// This is for codegen convenience. The old slice's length is used immediately
// to calculate where to write new values during an append.
// TODO: When the old backend is gone, reconsider this decision.
// The SSA backend might prefer the new length or to return only ptr/cap and save stack space.
func growslice(et *_type, old slice, cap int) slice {
	if raceenabled {
		callerpc := getcallerpc()
		racereadrangepc(old.array, uintptr(old.len*int(et.size)), callerpc, funcPC(growslice))
	}
	if msanenabled {
		msanread(old.array, uintptr(old.len*int(et.size)))
	}

	if cap < old.cap {
		panic(errorString("growslice: cap out of range"))
	}

	if et.size == 0 {
		// append should not create a slice with nil pointer but non-zero len.
		// We assume that append doesn't need to preserve old.array in this case.
		return slice{unsafe.Pointer(&zerobase), old.len, cap}
	}

	newcap := old.cap
	doublecap := newcap + newcap
	if cap > doublecap {
		newcap = cap
	} else {
		if old.cap < 1024 {
			newcap = doublecap
		} else {
			// Check 0 < newcap to detect overflow
			// and prevent an infinite loop.
			for 0 < newcap && newcap < cap {
				newcap += newcap / 4
			}
			// Set newcap to the requested cap when
			// the newcap calculation overflowed.
			if newcap <= 0 {
				newcap = cap
			}
		}
	}

	var overflow bool
	var lenmem, newlenmem, capmem uintptr
	// Specialize for common values of et.size.
	// For 1 we don't need any division/multiplication.
	// For sys.PtrSize, compiler will optimize division/multiplication into a shift by a constant.
	// For powers of 2, use a variable shift.
	switch {
	case et.size == 1:
		lenmem = uintptr(old.len)
		newlenmem = uintptr(cap)
		capmem = roundupsize(uintptr(newcap))
		overflow = uintptr(newcap) > maxAlloc
		newcap = int(capmem)
	case et.size == sys.PtrSize:
		lenmem = uintptr(old.len) * sys.PtrSize
		newlenmem = uintptr(cap) * sys.PtrSize
		capmem = roundupsize(uintptr(newcap) * sys.PtrSize)
		overflow = uintptr(newcap) > maxAlloc/sys.PtrSize
		newcap = int(capmem / sys.PtrSize)
	case isPowerOfTwo(et.size):
		var shift uintptr
		if sys.PtrSize == 8 {
			// Mask shift for better code generation.
			shift = uintptr(sys.Ctz64(uint64(et.size))) & 63
		} else {
			shift = uintptr(sys.Ctz32(uint32(et.size))) & 31
		}
		lenmem = uintptr(old.len) << shift
		newlenmem = uintptr(cap) << shift
		capmem = roundupsize(uintptr(newcap) << shift)
		overflow = uintptr(newcap) > (maxAlloc >> shift)
		newcap = int(capmem >> shift)
	default:
		lenmem = uintptr(old.len) * et.size
		newlenmem = uintptr(cap) * et.size
		capmem, overflow = math.MulUintptr(et.size, uintptr(newcap))
		capmem = roundupsize(capmem)
		newcap = int(capmem / et.size)
	}

	// The check of overflow in addition to capmem > maxAlloc is needed
	// to prevent an overflow which can be used to trigger a segfault
	// on 32bit architectures with this example program:
	//
	// type T [1<<27 + 1]int64
	//
	// var d T
	// var s []T
	//
	// func main() {
	//   s = append(s, d, d, d, d)
	//   print(len(s), "\n")
	// }
	if overflow || capmem > maxAlloc {
		panic(errorString("growslice: cap out of range"))
	}

	var p unsafe.Pointer
	if et.ptrdata == 0 {
		p = mallocgc(capmem, nil, false)
		// The append() that calls growslice is going to overwrite from old.len to cap (which will be the new length).
		// Only clear the part that will not be overwritten.
		memclrNoHeapPointers(add(p, newlenmem), capmem-newlenmem)
	} else {
		// Note: can't use rawmem (which avoids zeroing of memory), because then GC can scan uninitialized memory.
		p = mallocgc(capmem, et, true)
		if lenmem > 0 && writeBarrier.enabled {
			// Only shade the pointers in old.array since we know the destination slice p
			// only contains nil pointers because it has been cleared during alloc.
			bulkBarrierPreWriteSrcOnly(uintptr(p), uintptr(old.array), lenmem-et.size+et.ptrdata)
		}
	}
	memmove(p, old.array, lenmem)

	return slice{p, old.len, newcap}
}

First, a few things we can learn from the comments:

1. The growslice function handles slice growth during an append.

2. It is passed the slice element type et, the old slice old, and the desired new minimum capacity cap. The old slice old is a value of type slice, whose structure is:

type slice struct {
	array unsafe.Pointer // pointer to the first accessible element
	len   int            // length
	cap   int            // capacity
}

3. It returns a new slice with at least that capacity, and the old data is copied into the new slice.
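
As a side note (a quick sketch of my own, not from the runtime), this three-field header is all a slice variable itself holds, so on a 64-bit machine a slice value is always 24 bytes, no matter how many elements it refers to:

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	var s []int64
	// pointer (8) + len (8) + cap (8) = 24 bytes on a 64-bit platform
	fmt.Println(unsafe.Sizeof(s)) // 24
}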

Now the first piece of core logic:

	newcap := old.cap   // old.cap is the old capacity
	doublecap := newcap + newcap   // twice the old capacity
	if cap > doublecap {   // the desired capacity is more than twice the old capacity
		newcap = cap   // new capacity = desired capacity
	} else {
		if old.cap < 1024 {   // compare against 1024
			newcap = doublecap   // below 1024: new capacity = twice the old capacity
		} else {
			// Check 0 < newcap to detect overflow
			// and prevent an infinite loop.
			for 0 < newcap && newcap < cap {  // keep looping until the desired capacity is reached
				newcap += newcap / 4   // grow by 25% each iteration
			}
			// Set newcap to the requested cap when
			// the newcap calculation overflowed.
			if newcap <= 0 {
				newcap = cap
			}
		}
	}

Detail: the 0 < newcap condition detects overflow; if newcap overflows past the maximum int value, the loop terminates as well.

To summarize:

1. If the desired capacity (cap) is greater than twice the old capacity, the new capacity is set directly to the desired capacity; otherwise go to rule 2.

2. The old capacity (old.cap) is compared with 1024; if it is less than 1024, the new capacity is set to twice the old capacity; otherwise go to rule 3.

3. If the old capacity is at least 1024, newcap grows in a loop, increasing by 25% (a factor of 1.25) on each iteration until it reaches the desired capacity.

The 1024 threshold can lead to some slightly unintuitive behavior, but everything works correctly in practice; more on that later. A small Go sketch of this pre-rounding rule follows below.
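
Purely as an illustration, here is the rule written out as a standalone function; this is a sketch, not the runtime's own code, and the name nextCapBeforeRounding is made up for this article.

package main

import "fmt"

// nextCapBeforeRounding mirrors the go1.17 newcap calculation before the
// memory-size rounding covered in the next section. oldCap is the current
// capacity; needed is the capacity required after the append.
func nextCapBeforeRounding(oldCap, needed int) int {
	newcap := oldCap
	doublecap := newcap + newcap
	if needed > doublecap {
		return needed // rule 1: more than double is needed, take it directly
	}
	if oldCap < 1024 {
		return doublecap // rule 2: small slices simply double
	}
	for 0 < newcap && newcap < needed {
		newcap += newcap / 4 // rule 3: grow by 25% per iteration
	}
	if newcap <= 0 { // overflow guard, mirrors the runtime check
		newcap = needed
	}
	return newcap
}

func main() {
	fmt.Println(nextCapBeforeRounding(0, 1))   // 1
	fmt.Println(nextCapBeforeRounding(3, 6))   // 6
	fmt.Println(nextCapBeforeRounding(66, 67)) // 132 (before rounding)
}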

Let's verify these rules with a quick calculation, appending one more element each time than in the previous append:

	arr1 := []int{}
	fmt.Printf("(1) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)
	// append 1 value
	arr1 = append(arr1, 1) // equivalent to setting the element at index 0 to 1
	fmt.Printf("(2) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)
	// append 2 values
	arr1 = append(arr1, 2, 3)
	fmt.Printf("(3) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)
	// append 3 values
	arr1 = append(arr1, 4, 5, 6)
	fmt.Printf("(4) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)
	// append 4 values
	arr1 = append(arr1, 7, 8, 9, 10)
	fmt.Printf("(5) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)
	// append 5 values
	arr1 = append(arr1, 11, 12, 13, 14, 15)
	fmt.Printf("(6) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)
	// append 6 values
	arr1 = append(arr1, 16, 17, 18, 19, 20, 21)
	fmt.Printf("(7) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)
	// append 7 values
	arr1 = append(arr1, 22, 23, 24, 25, 26, 27, 28)
	fmt.Printf("(8) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)
	// append 8 values
	arr1 = append(arr1, 29, 30, 31, 32, 33, 34, 35, 36)
	fmt.Printf("(9) len=%v, cap=%v, addr:%p, arr1: %v\n", len(arr1), cap(arr1), &arr1, arr1)

The number of appended elements increases with each call; here is the output:

(1) len=0, cap=0, addr:0xc000004078, arr1: []
(2) len=1, cap=1, addr:0xc000004078, arr1: [1]
(3) len=3, cap=3, addr:0xc000004078, arr1: [1 2 3]
(4) len=6, cap=6, addr:0xc000004078, arr1: [1 2 3 4 5 6]
(5) len=10, cap=12, addr:0xc000004078, arr1: [1 2 3 4 5 6 7 8 9 10]
(6) len=15, cap=24, addr:0xc000004078, arr1: [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
(7) len=21, cap=24, addr:0xc000004078, arr1: [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21]
(8) len=28, cap=48, addr:0xc000004078, arr1: [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28]
(9) len=36, cap=48, addr:0xc000004078, arr1: [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36]

When 1 is appended: the desired capacity is 1, which is greater than twice the old capacity (0), so cap becomes 1;

When 2 and 3 are appended: the desired capacity is 3; twice the old capacity (1) is 2, and 3 > 2, so cap becomes 3;

When 4, 5 and 6 are appended: the desired capacity is 6; twice the old capacity (3) is 6; the old capacity is less than 1024, so cap becomes 6;

When 7, 8, 9 and 10 are appended: the desired capacity is 10; twice the old capacity (6) is 12; the old capacity is less than 1024, so cap becomes 12;

...
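
Another way to observe the rules (a small experiment of my own, assuming go1.17 on a 64-bit machine) is to append one element at a time and print only when the capacity changes; below 1024 you will see the capacity doubling, and above it roughly 1.25x growth combined with the size rounding discussed in the next section.

package main

import "fmt"

func main() {
	var s []int
	oldCap := cap(s)
	for i := 0; i < 2000; i++ {
		s = append(s, i)
		if c := cap(s); c != oldCap {
			fmt.Printf("len=%d: cap grew %d -> %d\n", len(s), oldCap, c)
			oldCap = c
		}
	}
}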

That all looks fine. OK, now let's try a more direct, quicker example:

	arr := make([]int, 66)
	fmt.Printf("(1) len=%v, cap=%v, arr: %v\n", len(arr), cap(arr), arr)

	arr = append(arr, 1)
	fmt.Printf("(2) len=%v, cap=%v, arr: %v\n", len(arr), cap(arr), arr)

Running it:

(1) len=66, cap=66, arr: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
(2) len=67, cap=144, arr: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
]

The freshly created slice has capacity 66, yet after appending a single element the capacity becomes 144? Let's work through it:

When 1 is appended: the desired capacity is 67; twice the old capacity (66) is 132; 67 < 132 and the old capacity is less than 1024, so cap should become 132.

Shouldn't it be 132? Why is the result 144?

Read on to the next section.

Allocation Size Rounding / cap Adjustment

The logic summarized in the previous section is not wrong; it is correct. But after newcap is computed there, another piece of logic runs and produces a new newcap, so the final value may well differ from the one obtained in the previous step. Let's look at that next piece of logic:

	var overflow bool
	var lenmem, newlenmem, capmem uintptr
	// Specialize for common values of et.size.
	// For 1 we don't need any division/multiplication.
	// For sys.PtrSize, compiler will optimize division/multiplication into a shift by a constant.
	// For powers of 2, use a variable shift.
	switch {
	case et.size == 1:
		lenmem = uintptr(old.len)
		newlenmem = uintptr(cap)
		capmem = roundupsize(uintptr(newcap))
		overflow = uintptr(newcap) > maxAlloc
		newcap = int(capmem)
	case et.size == sys.PtrSize:
		lenmem = uintptr(old.len) * sys.PtrSize
		newlenmem = uintptr(cap) * sys.PtrSize
		capmem = roundupsize(uintptr(newcap) * sys.PtrSize)
		overflow = uintptr(newcap) > maxAlloc/sys.PtrSize
		newcap = int(capmem / sys.PtrSize)
	case isPowerOfTwo(et.size):
		var shift uintptr
		if sys.PtrSize == 8 {
			// Mask shift for better code generation.
			shift = uintptr(sys.Ctz64(uint64(et.size))) & 63
		} else {
			shift = uintptr(sys.Ctz32(uint32(et.size))) & 31
		}
		lenmem = uintptr(old.len) << shift
		newlenmem = uintptr(cap) << shift
		capmem = roundupsize(uintptr(newcap) << shift)
		overflow = uintptr(newcap) > (maxAlloc >> shift)
		newcap = int(capmem >> shift)
	default:
		lenmem = uintptr(old.len) * et.size
		newlenmem = uintptr(cap) * et.size
		capmem, overflow = math.MulUintptr(et.size, uintptr(newcap))
		capmem = roundupsize(capmem)
		newcap = int(capmem / et.size)
	}

This switch branches on the size of the slice's element type (et.size, i.e. how much space a type such as int occupies). The key function here is roundupsize:

// Returns size of the memory block that mallocgc will allocate if you ask for the size.
func roundupsize(size uintptr) uintptr {
	if size < _MaxSmallSize {   // less than 32768 (32 KB)
		if size <= smallSizeMax-8 {   // i.e. size <= 1024-8
			return uintptr(class_to_size[size_to_class8[divRoundUp(size, smallSizeDiv)]])
		} else {
			return uintptr(class_to_size[size_to_class128[divRoundUp(size-smallSizeMax, largeSizeDiv)]])
		}
	}
	if size+_PageSize < size {
		return size
	}
	return alignUp(size, _PageSize)
}


// divRoundUp returns ceil(n / a).
func divRoundUp(n, a uintptr) uintptr {
	// a is generally a power of two. This will get inlined and
	// the compiler will optimize the division.
	return (n + a - 1) / a
}

It returns the size of the memory block that mallocgc will actually allocate for a requested size. And what is sys.PtrSize?

// PtrSize is the size of a pointer in bytes - unsafe.Sizeof(uintptr(0)) but as an ideal constant.
// It is also the size of the machine's native word size (that is, 4 on 32-bit systems, 8 on 64-bit).
const PtrSize = 4 << (^uintptr(0) >> 63)

PtrSize is the size of a pointer in bytes, i.e. unsafe.Sizeof(uintptr(0)) expressed as an ideal constant. It is also the machine's native word size: 4 on 32-bit systems and 8 on 64-bit systems.

So on the 64-bit machine used here, PtrSize is 8.
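
You can check the word size on your own machine with a couple of one-liners (a quick sketch):

package main

import (
	"fmt"
	"math/bits"
	"unsafe"
)

func main() {
	// Both report the native word size: 8 bytes / 64 bits on a 64-bit
	// machine, 4 bytes / 32 bits on a 32-bit one.
	fmt.Println(unsafe.Sizeof(uintptr(0))) // pointer size in bytes
	fmt.Println(bits.UintSize / 8)         // word size in bytes via math/bits
}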

And what about the element-type sizes in the case conditions? A quick example makes it clear:

	var a int
	fmt.Println("int size = ", unsafe.Sizeof(a)) // 8
	var b bool
	fmt.Println("bool size = ", unsafe.Sizeof(b)) // 1
	var c uint
	fmt.Println("uint size = ", unsafe.Sizeof(c))    // 8
	fmt.Println("string size = ", unsafe.Sizeof("")) // 16
	var d int8 = 127
	fmt.Println("int8 size = ", unsafe.Sizeof(d)) // 1
	var e int16
	fmt.Println("int16 size = ", unsafe.Sizeof(e)) // 2
	var f int64
	fmt.Println("int64 size = ", unsafe.Sizeof(f)) // 8

In other words, for a slice of int on a 64-bit machine, et.size is 8. With that, let's walk through the matching case using the earlier example of appending one element to the slice of capacity 66:

	switch {
	case et.size == 1:   // element size is 1 byte
	// ...
	case et.size == sys.PtrSize:   // 8 on a 64-bit machine
		lenmem = uintptr(old.len) * sys.PtrSize  // 66*8 = 528
		newlenmem = uintptr(cap) * sys.PtrSize   // 67*8 = 536
		capmem = roundupsize(uintptr(newcap) * sys.PtrSize)  // roundupsize(132*8) = roundupsize(1056)
		overflow = uintptr(newcap) > maxAlloc/sys.PtrSize
		newcap = int(capmem / sys.PtrSize)
	case isPowerOfTwo(et.size):
	// ...
	default:
	// ...

Computing roundupsize(1056):
divRoundUp(size-smallSizeMax, largeSizeDiv) = divRoundUp(1056-1024, 128) = divRoundUp(32, 128) = 1
size_to_class128[1] = 33
class_to_size[33] = 1152
newcap = int(capmem / sys.PtrSize) = 1152 / 8 = 144
So the calculation gives newcap = 144, not 132.

Mystery solved!

The point of this later adjustment is that computing the new capacity purely from element counts is one-sided: it ignores the size of the element type and how the memory allocator actually hands out blocks. Only after this second, allocation-aware calculation does growslice return, as newcap, the capacity that will actually be allocated.

Now that we have the correct, complete calculation, let's take a slice of some arbitrary capacity and append to it:

	arr := make([]int, 88)
	fmt.Printf("(1) len=%v, cap=%v\n", len(arr), cap(arr))

	arr = append(arr, 1)
	fmt.Printf("(2) len=%v, cap=%v\n", len(arr), cap(arr))

Let's predict the result ourselves:

The desired capacity 89 is less than twice the old capacity (176) and 88 < 1024, so newcap = 176, and capmem = roundupsize(uintptr(newcap) * sys.PtrSize) = roundupsize(176 * 8) = roundupsize(1408)

Computing roundupsize(1408):
divRoundUp(1408-1024, 128) = divRoundUp(384, 128) = 3
size_to_class128[3] = 35
class_to_size[35] = 1408
newcap = 1408 / 8 = 176

Let's see whether it really is 176. Running it:

(1) len=88, cap=88
(2) len=89, cap=176

Very good.
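
As a final check, the whole two-step calculation can be modeled in a few lines. This is only a rough sketch, not the runtime code: the size-class table below is just a small excerpt of src/runtime/sizeclasses.go around the values used in this article (1152 and 1408 bytes), and the helper names roundUpToClass and predictCap are made up.

package main

import "fmt"

// Excerpt of the allocator's size classes (in bytes) around 1 KB, taken
// from go1.17's sizeclasses.go; the real roundupsize consults the full table.
var sizeClasses = []int{1024, 1152, 1280, 1408, 1536, 1792, 2048}

// roundUpToClass is a stand-in for roundupsize, valid only within the excerpt.
func roundUpToClass(bytes int) int {
	for _, c := range sizeClasses {
		if bytes <= c {
			return c
		}
	}
	return bytes // outside the excerpt; the real code keeps rounding up
}

// predictCap models growslice for a []int on a 64-bit machine:
// step 1 is the doubling/1.25x rule, step 2 rounds the byte size up
// to an allocator size class.
func predictCap(oldLen, oldCap, appended int) int {
	needed := oldLen + appended
	newcap := oldCap
	if needed > 2*oldCap {
		newcap = needed
	} else if oldCap < 1024 {
		newcap = 2 * oldCap
	} else {
		for newcap < needed {
			newcap += newcap / 4
		}
	}
	const intSize = 8 // bytes per int on a 64-bit machine
	return roundUpToClass(newcap*intSize) / intSize
}

func main() {
	fmt.Println(predictCap(66, 66, 1)) // 144, matching the earlier example
	fmt.Println(predictCap(88, 88, 1)) // 176, matching the output above
}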

That's all for this time. See you next time!

Summary

This concludes the article on Go's slice growth mechanism, its detailed explanation and examples. For more on Go slice growth, please search 脚本之家's earlier articles or continue browsing the related articles below. We hope you will keep supporting 脚本之家!
