1 // Copyright 2009 The Go Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style
3 // license that can be found in the LICENSE file.
5 // Garbage collector (GC).
7 // The GC runs concurrently with mutator threads, is type accurate (aka precise), and allows multiple
8 // GC threads to run in parallel. It is a concurrent mark and sweep that uses a write barrier. It is
9 // non-generational and non-compacting. Allocation is done using size segregated per P allocation
10 // areas to minimize fragmentation while eliminating locks in the common case.
12 // The algorithm decomposes into several steps.
13 // This is a high level description of the algorithm being used. For an overview of GC a good
14 // place to start is Richard Jones' gchandbook.org.
16 // The algorithm's intellectual heritage includes Dijkstra's on-the-fly algorithm, see
17 // Edsger W. Dijkstra, Leslie Lamport, A. J. Martin, C. S. Scholten, and E. F. M. Steffens. 1978.
18 // On-the-fly garbage collection: an exercise in cooperation. Commun. ACM 21, 11 (November 1978), 966-975.
20 // For journal quality proofs that these steps are complete, correct, and terminate see
21 // Hudson, R., and Moss, J.E.B. Copying Garbage Collection without stopping the world.
22 // Concurrency and Computation: Practice and Experience 15(3-5), 2003.
24 // 1. GC performs sweep termination.
26 // a. Stop the world. This causes all Ps to reach a GC safe-point.
28 // b. Sweep any unswept spans. There will only be unswept spans if
29 // this GC cycle was forced before the expected time.
31 // 2. GC performs the mark phase.
33 // a. Prepare for the mark phase by setting gcphase to _GCmark
34 // (from _GCoff), enabling the write barrier, enabling mutator
35 // assists, and enqueueing root mark jobs. No objects may be
36 // scanned until all Ps have enabled the write barrier, which is
37 // accomplished using STW.
39 // b. Start the world. From this point, GC work is done by mark
40 // workers started by the scheduler and by assists performed as
41 // part of allocation. The write barrier shades both the
42 // overwritten pointer and the new pointer value for any pointer
43 // writes (see mbarrier.go for details). Newly allocated objects
44 // are immediately marked black.
46 // c. GC performs root marking jobs. This includes scanning all
47 // stacks, shading all globals, and shading any heap pointers in
48 // off-heap runtime data structures. Scanning a stack stops a
49 // goroutine, shades any pointers found on its stack, and then
50 // resumes the goroutine.
52 // d. GC drains the work queue of grey objects, scanning each grey
53 // object to black and shading all pointers found in the object
54 // (which in turn may add those pointers to the work queue).
56 // e. Because GC work is spread across local caches, GC uses a
57 // distributed termination algorithm to detect when there are no
58 // more root marking jobs or grey objects (see gcMarkDone). At this
59 // point, GC transitions to mark termination.
61 // 3. GC performs mark termination.
63 // a. Stop the world.
65 // b. Set gcphase to _GCmarktermination, and disable workers and assists.
68 // c. Perform housekeeping like flushing mcaches.
70 // 4. GC performs the sweep phase.
72 // a. Prepare for the sweep phase by setting gcphase to _GCoff,
73 // setting up sweep state and disabling the write barrier.
75 // b. Start the world. From this point on, newly allocated objects
76 // are white, and allocating sweeps spans before use if necessary.
78 // c. GC does concurrent sweeping in the background and in response
79 // to allocation. See description below.
81 // 5. When sufficient allocation has taken place, replay the sequence
82 // starting with 1 above. See discussion of GC rate below.
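// As a user-level illustration of the cycle in steps 1-5 above (an editorial
// sketch, not part of the runtime): runtime.GC forces exactly this sequence and
// blocks until the full cycle completes, and the cycle count is visible through
// MemStats.NumGC:
//
//	var before, after runtime.MemStats
//	runtime.ReadMemStats(&before)
//	runtime.GC() // sweep termination, mark, mark termination, sweep
//	runtime.ReadMemStats(&after)
//	println("cycles completed:", after.NumGC-before.NumGC) // typically 1
//
// Running with GODEBUG=gctrace=1 prints one line per cycle with the timing of
// each phase.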
86 // The sweep phase proceeds concurrently with normal program execution.
87 // The heap is swept span-by-span both lazily (when a goroutine needs another span)
88 // and concurrently in a background goroutine (this helps programs that are not CPU bound).
89 // At the end of STW mark termination all spans are marked as "needs sweeping".
91 // The background sweeper goroutine simply sweeps spans one-by-one.
93 // To avoid requesting more OS memory while there are unswept spans, when a
94 // goroutine needs another span, it first attempts to reclaim that much memory
95 // by sweeping. When a goroutine needs to allocate a new small-object span, it
96 // sweeps small-object spans for the same object size until it frees at least
97 // one object. When a goroutine needs to allocate a large-object span from the heap,
98 // it sweeps spans until it frees at least that many pages into the heap. There is
99 // one case where this may not suffice: if a goroutine sweeps and frees two
100 // nonadjacent one-page spans to the heap, it will allocate a new two-page
101 // span, but there can still be other one-page unswept spans which could be
102 // combined into a two-page span.
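// A hedged sketch of the "reclaim before you grow" idea above (hypothetical
// helper names, not the runtime's actual sweep code): before taking npages from
// the OS, sweep until at least that many pages have been returned to the heap,
// or until nothing is left to sweep.
//
//	func sweepBeforeGrow(npages uintptr) {
//		for reclaimed := uintptr(0); reclaimed < npages; {
//			n := sweepOneSpanPages() // hypothetical: pages freed by sweeping one span; 0 if none remain
//			if n == 0 {
//				break // no unswept spans left; grow the heap instead
//			}
//			reclaimed += n
//		}
//	}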
104 // It's critical to ensure that no operations proceed on unswept spans (that would corrupt
105 // mark bits in the GC bitmap). During GC all mcaches are flushed into the central cache,
106 // so they are empty. When a goroutine grabs a new span into mcache, it sweeps it.
107 // When a goroutine explicitly frees an object or sets a finalizer, it ensures that
108 // the span is swept (either by sweeping it, or by waiting for the concurrent sweep to finish).
109 // The finalizer goroutine is kicked off only when all spans are swept.
110 // When the next GC starts, it sweeps all not-yet-swept spans (if any).
113 // The next GC is after we've allocated an extra amount of memory proportional to
114 // the amount already in use. The proportion is controlled by the GOGC environment variable
115 // (100 by default). If GOGC=100 and we're using 4M, we'll GC again when we get to 8M
116 // (this mark is computed by the gcController.heapGoal method). This keeps the GC cost in
117 // linear proportion to the allocation cost. Adjusting GOGC just changes the linear constant
118 // (and also the amount of extra memory used).
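// A rough sketch of the pacing arithmetic described above (simplified; the real
// computation in gcController.heapGoal also accounts for stack and globals scan
// work and for GOMEMLIMIT):
//
//	func roughHeapGoal(liveBytes uint64, gogc int) uint64 {
//		if gogc < 0 {
//			return ^uint64(0) // GOGC=off: no heap-based trigger
//		}
//		return liveBytes + liveBytes*uint64(gogc)/100
//	}
//
// With liveBytes = 4<<20 and gogc = 100 this returns 8<<20, matching the 4M -> 8M
// example above. GOGC can also be changed at run time with
// runtime/debug.SetGCPercent.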
122 // In order to prevent long pauses while scanning large objects and to
123 // improve parallelism, the garbage collector breaks up scan jobs for
124 // objects larger than maxObletBytes into "oblets" of at most
125 // maxObletBytes. When scanning encounters the beginning of a large
126 // object, it scans only the first oblet and enqueues the remaining
127 // oblets as new scan jobs.
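// A minimal sketch of that splitting (illustrative only; maxObletBytes is
// 128 KiB at the time of writing, and the real scan code enqueues oblets as
// scan jobs rather than materializing a slice of ranges):
//
//	const maxObletBytes = 128 << 10
//
//	// obletRanges returns the [start, end) byte ranges a large object of the
//	// given size would be scanned in.
//	func obletRanges(size uintptr) [][2]uintptr {
//		var r [][2]uintptr
//		for off := uintptr(0); off < size; off += maxObletBytes {
//			end := off + maxObletBytes
//			if end > size {
//				end = size
//			}
//			r = append(r, [2]uintptr{off, end})
//		}
//		return r
//	}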
133 "runtime/internal/atomic"
139 _FinBlockSize = 4 * 1024
141 // concurrentSweep is a debug flag. Disabling this flag
142 // ensures all spans are swept while the world is stopped.
143 concurrentSweep = true
145 // debugScanConservative enables debug logging for stack
146 // frames that are scanned conservatively.
147 debugScanConservative = false
149 // sweepMinHeapDistance is a lower bound on the heap distance
150 // (in bytes) reserved for concurrent sweeping between GC cycles.
152 sweepMinHeapDistance = 1024 * 1024
155 // heapObjectsCanMove always returns false in the current garbage collector.
156 // It exists for go4.org/unsafe/assume-no-moving-gc, which is an
157 // unfortunate idea that had an even more unfortunate implementation.
158 // Every time a new Go release happened, the package stopped building,
159 // and the authors had to add a new file with a new //go:build line, and
160 // then the entire ecosystem of packages with that as a dependency had to
161 // explicitly update to the new version. Many packages depend on
162 // assume-no-moving-gc transitively, through paths like
163 // inet.af/netaddr -> go4.org/intern -> assume-no-moving-gc.
164 // This was causing a significant amount of friction around each new
165 // release, so we added this bool for the package to //go:linkname
166 // instead. The bool is still unfortunate, but it's not as bad as
167 // breaking the ecosystem on every new release.
169 // If the Go garbage collector ever does move heap objects, we can set
170 // this to true to break all the programs using assume-no-moving-gc.
172 //go:linkname heapObjectsCanMove
173 func heapObjectsCanMove() bool {
178 if unsafe.Sizeof(workbuf{}) != _WorkbufSize {
179 throw("size of Workbuf is suboptimal")
181 // No sweep on the first cycle.
182 sweep.active.state.Store(sweepDrainedMask)
184 // Initialize GC pacer state.
185 // Use the environment variable GOGC for the initial gcPercent value.
186 // Use the environment variable GOMEMLIMIT for the initial memoryLimit value.
187 gcController.init(readGOGC(), readGOMEMLIMIT())
190 work.markDoneSema = 1
191 lockInit(&work.sweepWaiters.lock, lockRankSweepWaiters)
192 lockInit(&work.assistQueue.lock, lockRankAssistQueue)
193 lockInit(&work.wbufSpans.lock, lockRankWbufSpans)
196 // gcenable is called after the bulk of the runtime initialization,
197 // just before we're about to start letting user code run.
198 // It kicks off the background sweeper goroutine, the background
199 // scavenger goroutine, and enables GC.
201 // Kick off sweeping and scavenging.
202 c := make(chan int, 2)
207 memstats.enablegc = true // now that runtime is initialized, GC is okay
210 // Garbage collector phase.
211 // It indicates the write barrier and synchronization task to perform.
214 // The compiler knows about this variable.
215 // If you change it, you must change builtin/runtime.go, too.
216 // If you change the first four bytes, you must also change the write
217 // barrier insertion code.
218 var writeBarrier struct {
219 enabled bool // compiler emits a check of this before calling write barrier
220 pad [3]byte // compiler uses 32-bit load for "enabled" field
221 needed bool // identical to enabled, for now (TODO: dedup)
222 alignme uint64 // guarantee alignment so that compiler can use a 32 or 64-bit load
225 // gcBlackenEnabled is 1 if mutator assists and background mark
226 // workers are allowed to blacken objects. This must only be set when
227 // gcphase == _GCmark.
228 var gcBlackenEnabled uint32
231 _GCoff = iota // GC not running; sweeping in background, write barrier disabled
232 _GCmark // GC marking roots and workbufs: allocate black, write barrier ENABLED
233 _GCmarktermination // GC mark termination: allocate black, P's help GC, write barrier ENABLED
237 func setGCPhase(x uint32) {
238 atomic.Store(&gcphase, x)
239 writeBarrier.needed = gcphase == _GCmark || gcphase == _GCmarktermination
240 writeBarrier.enabled = writeBarrier.needed
243 // gcMarkWorkerMode represents the mode that a concurrent mark worker
244 // should operate in.
246 // Concurrent marking happens through four different mechanisms. One
247 // is mutator assists, which happen in response to allocations and are
248 // not scheduled. The other three are variations in the per-P mark
249 // workers and are distinguished by gcMarkWorkerMode.
250 type gcMarkWorkerMode int
253 // gcMarkWorkerNotWorker indicates that the next scheduled G is not
254 // starting work and the mode should be ignored.
255 gcMarkWorkerNotWorker gcMarkWorkerMode = iota
257 // gcMarkWorkerDedicatedMode indicates that the P of a mark
258 // worker is dedicated to running that mark worker. The mark
259 // worker should run without preemption.
260 gcMarkWorkerDedicatedMode
262 // gcMarkWorkerFractionalMode indicates that a P is currently
263 // running the "fractional" mark worker. The fractional worker
264 // is necessary when GOMAXPROCS*gcBackgroundUtilization is not
265 // an integer and using only dedicated workers would result in
266 // utilization too far from the target of gcBackgroundUtilization.
267 // The fractional worker should run until it is preempted and
268 // will be scheduled to pick up the fractional part of
269 // GOMAXPROCS*gcBackgroundUtilization.
270 gcMarkWorkerFractionalMode
272 // gcMarkWorkerIdleMode indicates that a P is running the mark
273 // worker because it has nothing else to do. The idle worker
274 // should run until it is preempted and account its time
275 // against gcController.idleMarkTime.
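// As a worked example of the fractional case (assuming the background
// utilization target of 0.25 used by the current pacer): with GOMAXPROCS=6,
// 6*0.25 = 1.5, so one P runs a dedicated worker and the fractional worker
// covers the remaining 0.5 of a P's worth of time, scheduled opportunistically
// on whichever P picks it up.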
279 // gcMarkWorkerModeStrings are the string labels of gcMarkWorkerModes
280 // to use in execution traces.
281 var gcMarkWorkerModeStrings = [...]string{
288 // pollFractionalWorkerExit reports whether a fractional mark worker
289 // should self-preempt. It assumes it is called from the fractional worker.
291 func pollFractionalWorkerExit() bool {
292 // This should be kept in sync with the fractional worker
293 // scheduler logic in findRunnableGCWorker.
295 delta := now - gcController.markStartTime
299 p := getg().m.p.ptr()
300 selfTime := p.gcFractionalMarkTime + (now - p.gcMarkWorkerStartTime)
301 // Add some slack to the utilization goal so that the
302 // fractional worker isn't behind again the instant it exits.
303 return float64(selfTime)/float64(delta) > 1.2*gcController.fractionalUtilizationGoal
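// For example (numbers invented for illustration): if the fractional goal is
// 0.05 and this worker has accumulated 7ms of mark time in the 100ms since mark
// started, then 0.07 > 1.2*0.05 = 0.06 and it self-preempts; at 5ms it would
// keep running.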
308 type workType struct {
309 full lfstack // lock-free list of full blocks workbuf
310 _ cpu.CacheLinePad // prevents false-sharing between full and empty
311 empty lfstack // lock-free list of empty blocks workbuf
312 _ cpu.CacheLinePad // prevents false-sharing between empty and nproc/nwait
316 // free is a list of spans dedicated to workbufs, but
317 // that don't currently contain any workbufs.
319 // busy is a list of all spans containing workbufs on
320 // one of the workbuf lists.
324 // Restore 64-bit alignment on 32-bit.
327 // bytesMarked is the number of bytes marked this cycle. This
328 // includes bytes blackened in scanned objects, noscan objects
329 // that go straight to black, and permagrey objects scanned by
330 // markroot during the concurrent scan phase. This is updated
331 // atomically during the cycle. Updates may be batched
332 // arbitrarily, since the value is only read at the end of the cycle.
335 // Because of benign races during marking, this number may not
336 // be the exact number of marked bytes, but it should be very close.
339 // Put this field here because it needs 64-bit atomic access
340 // (and thus 8-byte alignment even on 32-bit architectures).
343 markrootNext uint32 // next markroot job
344 markrootJobs uint32 // number of markroot jobs
350 // Number of roots of various root types. Set by gcMarkRootPrepare.
352 // nStackRoots == len(stackRoots), but we have nStackRoots for consistency.
354 nDataRoots, nBSSRoots, nSpanRoots, nStackRoots int
356 // Base indexes of each root type. Set by gcMarkRootPrepare.
357 baseData, baseBSS, baseSpans, baseStacks, baseEnd uint32
359 // stackRoots is a snapshot of all of the Gs that existed
360 // before the beginning of concurrent marking. The backing
361 // store of this must not be modified because it might be
362 // shared with allgs.
365 // Each type of GC state transition is protected by a lock.
366 // Since multiple threads can simultaneously detect the state
367 // transition condition, any thread that detects a transition
368 // condition must acquire the appropriate transition lock,
369 // re-check the transition condition and return if it no
370 // longer holds or perform the transition if it does.
371 // Likewise, any transition must invalidate the transition
372 // condition before releasing the lock. This ensures that each
373 // transition is performed by exactly one thread and threads
374 // that need the transition to happen block until it has happened.
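// A minimal sketch of that protocol in ordinary Go (using sync.Mutex in place
// of the runtime's semaphores; cond and transition are placeholders):
//
//	var transitionLock sync.Mutex
//
//	func maybeTransition(cond func() bool, transition func()) {
//		if !cond() {
//			return
//		}
//		transitionLock.Lock()
//		defer transitionLock.Unlock()
//		if !cond() { // re-check under the transition lock
//			return
//		}
//		transition() // must make cond() false before the lock is released
//	}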
377 // startSema protects the transition from "off" to mark or mark termination.
380 // markDoneSema protects transitions from mark to mark termination.
383 bgMarkReady note // signal background mark worker has started
384 bgMarkDone uint32 // cas to 1 when at a background mark completion point
385 // Background mark completion signaling
387 // mode is the concurrency mode of the current GC cycle.
390 // userForced indicates the current GC cycle was forced by an
391 // explicit user call.
394 // initialHeapLive is the value of gcController.heapLive at the
395 // beginning of this GC cycle.
396 initialHeapLive uint64
398 // assistQueue is a queue of assists that are blocked because
399 // there was neither enough credit to steal nor enough work to do.
406 // sweepWaiters is a list of blocked goroutines to wake when
407 // we transition from mark termination to sweep.
408 sweepWaiters struct {
413 // cycles is the number of completed GC cycles, where a GC
414 // cycle is sweep termination, mark, mark termination, and
415 // sweep. This differs from memstats.numgc, which is
416 // incremented at mark termination.
419 // Timing/utilization stats for this cycle.
420 stwprocs, maxprocs int32
421 tSweepTerm, tMark, tMarkTerm, tEnd int64 // nanotime() of phase start
423 pauseNS int64 // total STW time this cycle
424 pauseStart int64 // nanotime() of last STW
426 // debug.gctrace heap sizes for this cycle.
427 heap0, heap1, heap2 uint64
429 // Cumulative estimated CPU usage.
433 // GC runs a garbage collection and blocks the caller until the
434 // garbage collection is complete. It may also block the entire program.
437 // We consider a cycle to be: sweep termination, mark, mark
438 // termination, and sweep. This function shouldn't return
439 // until a full cycle has been completed, from beginning to
440 // end. Hence, we always want to finish up the current cycle
441 // and start a new one. That means:
443 // 1. In sweep termination, mark, or mark termination of cycle
444 // N, wait until mark termination N completes and transitions to sweep N.
447 // 2. In sweep N, help with sweep N.
449 // At this point we can begin a full cycle N+1.
451 // 3. Trigger cycle N+1 by starting sweep termination N+1.
453 // 4. Wait for mark termination N+1 to complete.
455 // 5. Help with sweep N+1 until it's done.
457 // This all has to be written to deal with the fact that the
458 // GC may move ahead on its own. For example, when we block
459 // until mark termination N, we may wake up in cycle N+2.
461 // Wait until the current sweep termination, mark, and mark
462 // termination complete.
463 n := work.cycles.Load()
466 // We're now in sweep N or later. Trigger GC cycle N+1, which
467 // will first finish sweep N if necessary and then enter sweep termination N+1.
469 gcStart(gcTrigger{kind: gcTriggerCycle, n: n + 1})
471 // Wait for mark termination N+1 to complete.
474 // Finish sweep N+1 before returning. We do this both to
475 // complete the cycle and because runtime.GC() is often used
476 // as part of tests and benchmarks to get the system into a
477 // relatively stable and isolated state.
478 for work.cycles.Load() == n+1 && sweepone() != ^uintptr(0) {
483 // Callers may assume that the heap profile reflects the
484 // just-completed cycle when this returns (historically this
485 // happened because this was a STW GC), but right now the
486 // profile still reflects mark termination N, not N+1.
488 // As soon as all of the sweep frees from cycle N+1 are done,
489 // we can go ahead and publish the heap profile.
491 // First, wait for sweeping to finish. (We know there are no
492 // more spans on the sweep queue, but we may be concurrently
493 // sweeping spans, so we have to wait.)
494 for work.cycles.Load() == n+1 && !isSweepDone() {
498 // Now we're really done with sweeping, so we can publish the
499 // stable heap profile. Only do this if we haven't already hit
500 // another mark termination.
502 cycle := work.cycles.Load()
503 if cycle == n+1 || (gcphase == _GCmark && cycle == n+2) {
509 // gcWaitOnMark blocks until GC finishes the Nth mark phase. If GC has
510 // already completed this mark phase, it returns immediately.
511 func gcWaitOnMark(n uint32) {
513 // Disable phase transitions.
514 lock(&work.sweepWaiters.lock)
515 nMarks := work.cycles.Load()
516 if gcphase != _GCmark {
517 // We've already completed this cycle's mark.
522 unlock(&work.sweepWaiters.lock)
526 // Wait until sweep termination, mark, and mark
527 // termination of cycle N complete.
528 work.sweepWaiters.list.push(getg())
529 goparkunlock(&work.sweepWaiters.lock, waitReasonWaitForGCCycle, traceBlockUntilGCEnds, 1)
533 // gcMode indicates how concurrent a GC cycle should be.
537 gcBackgroundMode gcMode = iota // concurrent GC and sweep
538 gcForceMode // stop-the-world GC now, concurrent sweep
539 gcForceBlockMode // stop-the-world GC now and STW sweep (forced by user)
542 // A gcTrigger is a predicate for starting a GC cycle. Specifically,
543 // it is an exit condition for the _GCoff phase.
544 type gcTrigger struct {
546 now int64 // gcTriggerTime: current time
547 n uint32 // gcTriggerCycle: cycle number to start
550 type gcTriggerKind int
553 // gcTriggerHeap indicates that a cycle should be started when
554 // the heap size reaches the trigger heap size computed by the controller.
556 gcTriggerHeap gcTriggerKind = iota
558 // gcTriggerTime indicates that a cycle should be started when
559 // it's been more than forcegcperiod nanoseconds since the
560 // previous GC cycle.
563 // gcTriggerCycle indicates that a cycle should be started if
564 // we have not yet started cycle number gcTrigger.n (relative to work.cycles).
569 // test reports whether the trigger condition is satisfied, meaning
570 // that the exit condition for the _GCoff phase has been met. The exit
571 // condition should be tested when allocating.
572 func (t gcTrigger) test() bool {
573 if !memstats.enablegc || panicking.Load() != 0 || gcphase != _GCoff {
578 // Non-atomic access to gcController.heapLive for performance. If
579 // we are going to trigger on this, this thread just
580 // atomically wrote gcController.heapLive anyway and we'll see our own write.
582 trigger, _ := gcController.trigger()
583 return gcController.heapLive.Load() >= trigger
585 if gcController.gcPercent.Load() < 0 {
588 lastgc := int64(atomic.Load64(&memstats.last_gc_nanotime))
589 return lastgc != 0 && t.now-lastgc > forcegcperiod
591 // t.n > work.cycles, but accounting for wraparound.
592 return int32(t.n-work.cycles.Load()) > 0
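// For reference, allocation-side code consults the trigger with essentially
// this pattern (as used by mallocgc): construct a gcTrigger, test it, and start
// a cycle only if the condition holds.
//
//	if t := (gcTrigger{kind: gcTriggerHeap}); t.test() {
//		gcStart(t)
//	}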
597 // gcStart starts the GC. It transitions from _GCoff to _GCmark (if
598 // debug.gcstoptheworld == 0) or performs all of GC (if
599 // debug.gcstoptheworld != 0).
601 // This may return without performing this transition in some cases,
602 // such as when called on a system stack or with locks held.
603 func gcStart(trigger gcTrigger) {
604 // Since this is called from malloc and malloc is called in
605 // the guts of a number of libraries that might be holding
606 // locks, don't attempt to start GC in non-preemptible or
607 // potentially unstable situations.
609 if gp := getg(); gp == mp.g0 || mp.locks > 1 || mp.preemptoff != "" {
616 // Pick up the remaining unswept/not being swept spans concurrently
618 // This shouldn't happen if we're being invoked in background
619 // mode since proportional sweep should have just finished
620 // sweeping everything, but rounding errors, etc, may leave a
621 // few spans unswept. In forced mode, this is necessary since
622 // GC can be forced at any point in the sweeping cycle.
624 // We check the transition condition continuously here in case
625 // this G gets delayed into the next GC cycle.
626 for trigger.test() && sweepone() != ^uintptr(0) {
630 // Perform GC initialization and the sweep termination transition.
632 semacquire(&work.startSema)
633 // Re-check transition condition under transition lock.
635 semrelease(&work.startSema)
639 // In gcstoptheworld debug mode, upgrade the mode accordingly.
640 // We do this after re-checking the transition condition so
641 // that multiple goroutines that detect the heap trigger don't
642 // start multiple STW GCs.
643 mode := gcBackgroundMode
644 if debug.gcstoptheworld == 1 {
646 } else if debug.gcstoptheworld == 2 {
647 mode = gcForceBlockMode
650 // Ok, we're doing it! Stop everybody else
652 semacquire(&worldsema)
654 // For stats, check if this GC was forced by the user.
655 // Update it under gcsema to avoid gctrace getting wrong values.
656 work.userForced = trigger.kind == gcTriggerCycle
662 // Check that all Ps have finished deferred mcache flushes.
663 for _, p := range allp {
664 if fg := p.mcache.flushGen.Load(); fg != mheap_.sweepgen {
665 println("runtime: p", p.id, "flushGen", fg, "!= sweepgen", mheap_.sweepgen)
666 throw("p mcache not flushed")
670 gcBgMarkStartWorkers()
672 systemstack(gcResetMarkState)
674 work.stwprocs, work.maxprocs = gomaxprocs, gomaxprocs
675 if work.stwprocs > ncpu {
676 // This is used to compute CPU time of the STW phases,
677 // so it can't be more than ncpu, even if GOMAXPROCS is.
680 work.heap0 = gcController.heapLive.Load()
685 work.tSweepTerm = now
686 work.pauseStart = now
687 systemstack(func() { stopTheWorldWithSema(stwGCSweepTerm) })
688 // Finish sweep before we start concurrent scan.
693 // clearpools before we start the GC. If we wait, the memory will not be
694 // reclaimed until the next GC cycle.
699 // Assists and workers can start the moment we start the world.
701 gcController.startCycle(now, int(gomaxprocs), trigger)
703 // Notify the CPU limiter that assists may begin.
704 gcCPULimiter.startGCTransition(true, now)
706 // In STW mode, disable scheduling of user Gs. This may also
707 // disable scheduling of this goroutine, so it may block as
708 // soon as we start the world again.
709 if mode != gcBackgroundMode {
710 schedEnableUser(false)
713 // Enter concurrent mark phase and enable write barriers.
716 // Because the world is stopped, all Ps will
717 // observe that write barriers are enabled by
718 // the time we start the world and begin scanning.
721 // Write barriers must be enabled before assists are
722 // enabled because they must be enabled before
723 // any non-leaf heap objects are marked. Since
724 // allocations are blocked until assists can
725 // happen, we want to enable assists as early as possible.
729 gcBgMarkPrepare() // Must happen before assist enable.
732 // Mark all active tinyalloc blocks. Since we're
733 // allocating from these, they need to be black like
734 // other allocations. The alternative is to blacken
735 // the tiny block on every allocation from it, which
736 // would slow down the tiny allocator.
739 // At this point all Ps have enabled the write
740 // barrier, thus maintaining the no white to
741 // black invariant. Enable mutator assists to
742 // put back-pressure on fast allocating mutators.
744 atomic.Store(&gcBlackenEnabled, 1)
746 // In STW mode, we could block the instant systemstack
747 // returns, so make sure we're not preemptible.
752 now = startTheWorldWithSema()
753 work.pauseNS += now - work.pauseStart
755 memstats.gcPauseDist.record(now - work.pauseStart)
757 sweepTermCpu := int64(work.stwprocs) * (work.tMark - work.tSweepTerm)
758 work.cpuStats.gcPauseTime += sweepTermCpu
759 work.cpuStats.gcTotalTime += sweepTermCpu
761 // Release the CPU limiter.
762 gcCPULimiter.finishGCTransition(now)
765 // Release the world sema before Gosched() in STW mode
766 // because we will need to reacquire it later but before
767 // this goroutine becomes runnable again, and we could
768 // self-deadlock otherwise.
769 semrelease(&worldsema)
772 // Make sure we block instead of returning to user code in STW mode.
774 if mode != gcBackgroundMode {
778 semrelease(&work.startSema)
781 // gcMarkDoneFlushed counts the number of P's with flushed work.
783 // Ideally this would be a captured local in gcMarkDone, but forEachP
784 // escapes its callback closure, so it can't capture anything.
786 // This is protected by markDoneSema.
787 var gcMarkDoneFlushed uint32
789 // gcMarkDone transitions the GC from mark to mark termination if all
790 // reachable objects have been marked (that is, there are no grey
791 // objects and can be no more in the future). Otherwise, it flushes
792 // all local work to the global queues where it can be discovered by other workers.
795 // This should be called when all local mark work has been drained and
796 // there are no remaining workers. Specifically, when
798 // work.nwait == work.nproc && !gcMarkWorkAvailable(p)
800 // The calling context must be preemptible.
802 // Flushing local work is important because idle Ps may have local
803 // work queued. This is the only way to make that work visible and
804 // drive GC to completion.
806 // It is explicitly okay to have write barriers in this function. If
807 // it does transition to mark termination, then all reachable objects
808 // have been marked, so the write barrier cannot shade any more objects.
811 // Ensure only one thread is running the ragged barrier at a time.
813 semacquire(&work.markDoneSema)
816 // Re-check transition condition under transition lock.
818 // It's critical that this checks the global work queues are
819 // empty before performing the ragged barrier. Otherwise,
820 // there could be global work that a P could take after the P
821 // has passed the ragged barrier.
822 if !(gcphase == _GCmark && work.nwait == work.nproc && !gcMarkWorkAvailable(nil)) {
823 semrelease(&work.markDoneSema)
827 // forEachP needs worldsema to execute, and we'll need it to
828 // stop the world later, so acquire worldsema now.
829 semacquire(&worldsema)
831 // Flush all local buffers and collect flushedWork flags.
832 gcMarkDoneFlushed = 0
835 // Mark the user stack as preemptible so that it may be scanned.
836 // Otherwise, our attempt to force all P's to a safepoint could
837 // result in a deadlock as we attempt to preempt a worker that's
838 // trying to preempt us (e.g. for a stack scan).
839 casGToWaiting(gp, _Grunning, waitReasonGCMarkTermination)
840 forEachP(func(pp *p) {
841 // Flush the write barrier buffer, since this may add
842 // work to the gcWork.
845 // Flush the gcWork, since this may create global work
846 // and set the flushedWork flag.
848 // TODO(austin): Break up these workbufs to
849 // better distribute work.
851 // Collect the flushedWork flag.
852 if pp.gcw.flushedWork {
853 atomic.Xadd(&gcMarkDoneFlushed, 1)
854 pp.gcw.flushedWork = false
857 casgstatus(gp, _Gwaiting, _Grunning)
860 if gcMarkDoneFlushed != 0 {
861 // More grey objects were discovered since the
862 // previous termination check, so there may be more
863 // work to do. Keep going. It's possible the
864 // transition condition became true again during the
865 // ragged barrier, so re-check it.
866 semrelease(&worldsema)
870 // There was no global work, no local work, and no Ps
871 // communicated work since we took markDoneSema. Therefore
872 // there are no grey objects and no more objects can be
873 // shaded. Transition to mark termination.
876 work.pauseStart = now
877 getg().m.preemptoff = "gcing"
878 systemstack(func() { stopTheWorldWithSema(stwGCMarkTerm) })
879 // The gcphase is _GCmark, it will transition to _GCmarktermination
880 // below. The important thing is that the wb remains active until
881 // all marking is complete. This includes writes made by the GC.
883 // There is sometimes work left over when we enter mark termination due
884 // to write barriers performed after the completion barrier above.
885 // Detect this and resume concurrent mark. This is obviously unfortunate.
888 // See issue #27993 for details.
890 // Switch to the system stack to call wbBufFlush1, though in this case
891 // it doesn't matter because we're non-preemptible anyway.
894 for _, p := range allp {
903 getg().m.preemptoff = ""
905 now := startTheWorldWithSema()
906 work.pauseNS += now - work.pauseStart
907 memstats.gcPauseDist.record(now - work.pauseStart)
909 semrelease(&worldsema)
913 gcComputeStartingStackSize()
915 // Disable assists and background workers. We must do
916 // this before waking blocked assists.
917 atomic.Store(&gcBlackenEnabled, 0)
919 // Notify the CPU limiter that GC assists will now cease.
920 gcCPULimiter.startGCTransition(false, now)
922 // Wake all blocked assists. These will run when we
923 // start the world again.
926 // Likewise, release the transition lock. Blocked
927 // workers and assists will run when we start the world again.
929 semrelease(&work.markDoneSema)
931 // In STW mode, re-enable user goroutines. These will be
932 // queued to run after we start the world.
933 schedEnableUser(true)
935 // endCycle depends on all gcWork cache stats being flushed.
936 // The termination algorithm above ensured this, up to
937 // allocations made since the ragged barrier.
938 gcController.endCycle(now, int(gomaxprocs), work.userForced)
940 // Perform mark termination. This will restart the world.
944 // World must be stopped and mark assists and background workers must be disabled.
946 func gcMarkTermination() {
947 // Start marktermination (write barrier remains enabled for now).
948 setGCPhase(_GCmarktermination)
950 work.heap1 = gcController.heapLive.Load()
951 startTime := nanotime()
954 mp.preemptoff = "gcing"
957 casGToWaiting(curgp, _Grunning, waitReasonGarbageCollection)
959 // Run gc on the g0 stack. We do this so that the g stack
960 // we're currently running on will no longer change. Cuts
961 // the root set down a bit (g0 stacks are not scanned, and
962 // we don't need to scan gc's internal state). We also
963 // need to switch to g0 so we can shrink the stack.
966 // Must return immediately.
967 // The outer function's stack may have moved
968 // during gcMark (it shrinks stacks, including the
969 // outer function's stack), so we must not refer
970 // to any of its variables. Return back to the
971 // non-system stack to pick up the new addresses
972 // before continuing.
977 work.heap2 = work.bytesMarked
978 if debug.gccheckmark > 0 {
979 // Run a full non-parallel, stop-the-world
980 // mark using checkmark bits, to check that we
981 // didn't forget to mark anything during the
982 // concurrent mark process.
985 gcw := &getg().m.p.ptr().gcw
987 wbBufFlush1(getg().m.p.ptr())
992 // marking is complete so we can turn the write barrier off
994 stwSwept = gcSweep(work.mode)
998 casgstatus(curgp, _Gwaiting, _Grunning)
1007 if gcphase != _GCoff {
1008 throw("gc done but gcphase != _GCoff")
1011 // Record heapInUse for scavenger.
1012 memstats.lastHeapInUse = gcController.heapInUse.load()
1014 // Update GC trigger and pacing, as well as downstream consumers
1015 // of this pacing information, for the next cycle.
1016 systemstack(gcControllerCommit)
1018 // Update timing memstats
1020 sec, nsec, _ := time_now()
1021 unixNow := sec*1e9 + int64(nsec)
1022 work.pauseNS += now - work.pauseStart
1024 memstats.gcPauseDist.record(now - work.pauseStart)
1025 atomic.Store64(&memstats.last_gc_unix, uint64(unixNow)) // must be Unix time to make sense to user
1026 atomic.Store64(&memstats.last_gc_nanotime, uint64(now)) // monotonic time for us
1027 memstats.pause_ns[memstats.numgc%uint32(len(memstats.pause_ns))] = uint64(work.pauseNS)
1028 memstats.pause_end[memstats.numgc%uint32(len(memstats.pause_end))] = uint64(unixNow)
1029 memstats.pause_total_ns += uint64(work.pauseNS)
1031 markTermCpu := int64(work.stwprocs) * (work.tEnd - work.tMarkTerm)
1032 work.cpuStats.gcPauseTime += markTermCpu
1033 work.cpuStats.gcTotalTime += markTermCpu
1035 // Accumulate CPU stats.
1037 // Pass gcMarkPhase=true so we can get all the latest GC CPU stats in there too.
1038 work.cpuStats.accumulate(now, true)
1040 // Compute overall GC CPU utilization.
1041 // Omit idle marking time from the overall utilization here since it's "free".
1042 memstats.gc_cpu_fraction = float64(work.cpuStats.gcTotalTime-work.cpuStats.gcIdleTime) / float64(work.cpuStats.totalTime)
1044 // Reset assist time and background time stats.
1046 // Do this now, instead of at the start of the next GC cycle, because
1047 // these two may keep accumulating even if the GC is not active.
1048 scavenge.assistTime.Store(0)
1049 scavenge.backgroundTime.Store(0)
1051 // Reset idle time stat.
1052 sched.idleTime.Store(0)
1054 // Reset sweep state.
1056 sweep.npausesweep = 0
1058 if work.userForced {
1059 memstats.numforcedgc++
1062 // Bump GC cycle count and wake goroutines waiting on sweep.
1063 lock(&work.sweepWaiters.lock)
1065 injectglist(&work.sweepWaiters.list)
1066 unlock(&work.sweepWaiters.lock)
1068 // Increment the scavenge generation now.
1070 // This moment represents peak heap in use because we're
1071 // about to start sweeping.
1072 mheap_.pages.scav.index.nextGen()
1074 // Release the CPU limiter.
1075 gcCPULimiter.finishGCTransition(now)
1077 // Finish the current heap profiling cycle and start a new
1078 // heap profiling cycle. We do this before starting the world
1079 // so events don't leak into the wrong cycle.
1082 // There may be stale spans in mcaches that need to be swept.
1083 // Those aren't tracked in any sweep lists, so we need to
1084 // count them against sweep completion until we ensure all
1085 // those spans have been forced out.
1087 // If gcSweep fully swept the heap (for example if the sweep
1088 // is not concurrent due to a GODEBUG setting), then we expect
1089 // the sweepLocker to be invalid, since sweeping is done.
1091 // N.B. Below we might duplicate some work from gcSweep; this is
1092 // fine as all that work is idempotent within a GC cycle, and
1093 // we're still holding worldsema so a new cycle can't start.
1094 sl := sweep.active.begin()
1095 if !stwSwept && !sl.valid {
1096 throw("failed to set sweep barrier")
1097 } else if stwSwept && sl.valid {
1098 throw("non-concurrent sweep failed to drain all sweep queues")
1101 systemstack(func() { startTheWorldWithSema() })
1103 // Flush the heap profile so we can start a new cycle next GC.
1104 // This is relatively expensive, so we don't do it with the world stopped.
1108 // Prepare workbufs for freeing by the sweeper. We do this
1109 // asynchronously because it can take non-trivial time.
1110 prepareFreeWorkbufs()
1112 // Free stack spans. This must be done between GC cycles.
1113 systemstack(freeStackSpans)
1115 // Ensure all mcaches are flushed. Each P will flush its own
1116 // mcache before allocating, but idle Ps may not. Since this
1117 // is necessary to sweep all spans, we need to ensure all
1118 // mcaches are flushed before we start the next GC cycle.
1120 // While we're here, flush the page cache for idle Ps to avoid
1121 // having pages get stuck on them. These pages are hidden from
1122 // the scavenger, so in small idle heaps a significant amount
1123 // of additional memory might be held onto.
1125 // Also, flush the pinner cache, to avoid leaking that memory indefinitely.
1127 systemstack(func() {
1128 forEachP(func(pp *p) {
1129 pp.mcache.prepareForSweep()
1130 if pp.status == _Pidle {
1131 systemstack(func() {
1133 pp.pcache.flush(&mheap_.pages)
1134 unlock(&mheap_.lock)
1137 pp.pinnerCache = nil
1141 // Now that we've swept stale spans in mcaches, they don't
1142 // count against unswept spans.
1144 // Note: this sweepLocker may not be valid if sweeping had
1145 // already completed during the STW. See the corresponding
1146 // begin() call that produced sl.
1147 sweep.active.end(sl)
1150 // Print gctrace before dropping worldsema. As soon as we drop
1151 // worldsema another cycle could start and smash the stats
1152 // we're trying to print.
1153 if debug.gctrace > 0 {
1154 util := int(memstats.gc_cpu_fraction * 100)
1158 print("gc ", memstats.numgc,
1159 " @", string(itoaDiv(sbuf[:], uint64(work.tSweepTerm-runtimeInitTime)/1e6, 3)), "s ",
1161 prev := work.tSweepTerm
1162 for i, ns := range []int64{work.tMark, work.tMarkTerm, work.tEnd} {
1166 print(string(fmtNSAsMS(sbuf[:], uint64(ns-prev))))
1169 print(" ms clock, ")
1170 for i, ns := range []int64{
1171 int64(work.stwprocs) * (work.tMark - work.tSweepTerm),
1172 gcController.assistTime.Load(),
1173 gcController.dedicatedMarkTime.Load() + gcController.fractionalMarkTime.Load(),
1174 gcController.idleMarkTime.Load(),
1177 if i == 2 || i == 3 {
1178 // Separate mark time components with /.
1183 print(string(fmtNSAsMS(sbuf[:], uint64(ns))))
1186 work.heap0>>20, "->", work.heap1>>20, "->", work.heap2>>20, " MB, ",
1187 gcController.lastHeapGoal>>20, " MB goal, ",
1188 gcController.lastStackScan.Load()>>20, " MB stacks, ",
1189 gcController.globalsScan.Load()>>20, " MB globals, ",
1190 work.maxprocs, " P")
1191 if work.userForced {
1198 // Set any arena chunks that were deferred to fault.
1199 lock(&userArenaState.lock)
1200 faultList := userArenaState.fault
1201 userArenaState.fault = nil
1202 unlock(&userArenaState.lock)
1203 for _, lc := range faultList {
1204 lc.mspan.setUserArenaChunkToFault()
1207 // Enable huge pages on some metadata if we cross a heap threshold.
1208 if gcController.heapGoal() > minHeapForMetadataHugePages {
1209 mheap_.enableMetadataHugePages()
1212 semrelease(&worldsema)
1214 // Careful: another GC cycle may start now.
1219 // now that gc is done, kick off finalizer thread if needed
1220 if !concurrentSweep {
1221 // give the queued finalizers, if any, a chance to run
1226 // gcBgMarkStartWorkers prepares background mark worker goroutines. These
1227 // goroutines will not run until the mark phase, but they must be started while
1228 // the world is not stopped and from a regular G stack. The caller must hold worldsema.
1230 func gcBgMarkStartWorkers() {
1231 // Background marking is performed by per-P G's. Ensure that each P has
1232 // a background GC G.
1234 // Worker Gs don't exit if gomaxprocs is reduced. If it is raised
1235 // again, we can reuse the old workers; no need to create new workers.
1236 for gcBgMarkWorkerCount < gomaxprocs {
1239 notetsleepg(&work.bgMarkReady, -1)
1240 noteclear(&work.bgMarkReady)
1241 // The worker is now guaranteed to be added to the pool before
1242 // its P's next findRunnableGCWorker.
1244 gcBgMarkWorkerCount++
1248 // gcBgMarkPrepare sets up state for background marking.
1249 // Mutator assists must not yet be enabled.
1250 func gcBgMarkPrepare() {
1251 // Background marking will stop when the work queues are empty
1252 // and there are no more workers (note that, since this is
1253 // concurrent, this may be a transient state, but mark
1254 // termination will clean it up). Between background workers
1255 // and assists, we don't really know how many workers there
1256 // will be, so we pretend to have an arbitrarily large number
1257 // of workers, almost all of which are "waiting". While a
1258 // worker is working it decrements nwait. If nproc == nwait,
1259 // there are no workers.
1260 work.nproc = ^uint32(0)
1261 work.nwait = ^uint32(0)
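// A condensed sketch of the bookkeeping this enables (mirroring the worker code
// further below, not new machinery): a worker "checks in" by decrementing nwait
// and "checks out" by incrementing it, and the worker whose increment brings
// nwait back up to nproc with no work remaining has found a possible completion
// point.
//
//	decnwait := atomic.Xadd(&work.nwait, -1) // start working
//	// ... drain grey objects ...
//	incnwait := atomic.Xadd(&work.nwait, +1) // stop working
//	if incnwait == work.nproc && !gcMarkWorkAvailable(nil) {
//		gcMarkDone() // possible background mark completion point
//	}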
1264 // gcBgMarkWorkerNode is an entry in the gcBgMarkWorkerPool. It points to a single
1265 // gcBgMarkWorker goroutine.
1266 type gcBgMarkWorkerNode struct {
1267 // Unused workers are managed in a lock-free stack. This field must be first.
1270 // The g of this worker.
1273 // Release this m on park. This is used to communicate with the unlock
1274 // function, which cannot access the G's stack. It is unused outside of
1275 // gcBgMarkWorker().
1279 func gcBgMarkWorker() {
1282 // We pass node to a gopark unlock function, so it can't be on
1283 // the stack (see gopark). Prevent deadlock from recursively
1284 // starting GC by disabling preemption.
1285 gp.m.preemptoff = "GC worker init"
1286 node := new(gcBgMarkWorkerNode)
1287 gp.m.preemptoff = ""
1291 node.m.set(acquirem())
1292 notewakeup(&work.bgMarkReady)
1293 // After this point, the background mark worker is generally scheduled
1294 // cooperatively by gcController.findRunnableGCWorker. While performing
1295 // work on the P, preemption is disabled because we are working on
1296 // P-local work buffers. When the preempt flag is set, this puts itself
1297 // into _Gwaiting to be woken up by gcController.findRunnableGCWorker
1298 // at the appropriate time.
1300 // When preemption is enabled (e.g., while in gcMarkDone), this worker
1301 // may be preempted and schedule as a _Grunnable G from a runq. That is
1302 // fine; it will eventually gopark again for further scheduling via
1303 // findRunnableGCWorker.
1305 // Since we disable preemption before notifying bgMarkReady, we
1306 // guarantee that this G will be in the worker pool for the next
1307 // findRunnableGCWorker. This isn't strictly necessary, but it reduces
1308 // latency between _GCmark starting and the workers starting.
1311 // Go to sleep until woken by
1312 // gcController.findRunnableGCWorker.
1313 gopark(func(g *g, nodep unsafe.Pointer) bool {
1314 node := (*gcBgMarkWorkerNode)(nodep)
1316 if mp := node.m.ptr(); mp != nil {
1317 // The worker G is no longer running; release
1320 // N.B. it is _safe_ to release the M as soon
1321 // as we are no longer performing P-local mark work.
1324 // However, since we cooperatively stop work
1325 // when gp.preempt is set, if we releasem in
1326 // the loop then the following call to gopark
1327 // would immediately preempt the G. This is
1328 // also safe, but inefficient: the G must
1329 // schedule again only to enter gopark and park
1330 // again. Thus, we defer the release until
1331 // after parking the G.
1335 // Release this G to the pool.
1336 gcBgMarkWorkerPool.push(&node.node)
1337 // Note that at this point, the G may immediately be
1338 // rescheduled and may be running.
1340 }, unsafe.Pointer(node), waitReasonGCWorkerIdle, traceBlockSystemGoroutine, 0)
1342 // Preemption must not occur here, or another G might see
1343 // p.gcMarkWorkerMode.
1345 // Disable preemption so we can use the gcw. If the
1346 // scheduler wants to preempt us, we'll stop draining,
1347 // dispose the gcw, and then preempt.
1348 node.m.set(acquirem())
1349 pp := gp.m.p.ptr() // P can't change with preemption disabled.
1351 if gcBlackenEnabled == 0 {
1352 println("worker mode", pp.gcMarkWorkerMode)
1353 throw("gcBgMarkWorker: blackening not enabled")
1356 if pp.gcMarkWorkerMode == gcMarkWorkerNotWorker {
1357 throw("gcBgMarkWorker: mode not set")
1360 startTime := nanotime()
1361 pp.gcMarkWorkerStartTime = startTime
1362 var trackLimiterEvent bool
1363 if pp.gcMarkWorkerMode == gcMarkWorkerIdleMode {
1364 trackLimiterEvent = pp.limiterEvent.start(limiterEventIdleMarkWork, startTime)
1367 decnwait := atomic.Xadd(&work.nwait, -1)
1368 if decnwait == work.nproc {
1369 println("runtime: work.nwait=", decnwait, "work.nproc=", work.nproc)
1370 throw("work.nwait was > work.nproc")
1373 systemstack(func() {
1374 // Mark our goroutine preemptible so its stack
1375 // can be scanned. This lets two mark workers
1376 // scan each other (otherwise, they would
1377 // deadlock). We must not modify anything on
1378 // the G stack. However, stack shrinking is
1379 // disabled for mark workers, so it is safe to
1380 // read from the G stack.
1381 casGToWaiting(gp, _Grunning, waitReasonGCWorkerActive)
1382 switch pp.gcMarkWorkerMode {
1384 throw("gcBgMarkWorker: unexpected gcMarkWorkerMode")
1385 case gcMarkWorkerDedicatedMode:
1386 gcDrain(&pp.gcw, gcDrainUntilPreempt|gcDrainFlushBgCredit)
1388 // We were preempted. This is
1389 // a useful signal to kick
1390 // everything out of the run
1391 // queue so it can run somewhere else.
1393 if drainQ, n := runqdrain(pp); n > 0 {
1395 globrunqputbatch(&drainQ, int32(n))
1399 // Go back to draining, this time
1400 // without preemption.
1401 gcDrain(&pp.gcw, gcDrainFlushBgCredit)
1402 case gcMarkWorkerFractionalMode:
1403 gcDrain(&pp.gcw, gcDrainFractional|gcDrainUntilPreempt|gcDrainFlushBgCredit)
1404 case gcMarkWorkerIdleMode:
1405 gcDrain(&pp.gcw, gcDrainIdle|gcDrainUntilPreempt|gcDrainFlushBgCredit)
1407 casgstatus(gp, _Gwaiting, _Grunning)
1410 // Account for time and mark us as stopped.
1412 duration := now - startTime
1413 gcController.markWorkerStop(pp.gcMarkWorkerMode, duration)
1414 if trackLimiterEvent {
1415 pp.limiterEvent.stop(limiterEventIdleMarkWork, now)
1417 if pp.gcMarkWorkerMode == gcMarkWorkerFractionalMode {
1418 atomic.Xaddint64(&pp.gcFractionalMarkTime, duration)
1421 // Was this the last worker and did we run out of work?
1423 incnwait := atomic.Xadd(&work.nwait, +1)
1424 if incnwait > work.nproc {
1425 println("runtime: p.gcMarkWorkerMode=", pp.gcMarkWorkerMode,
1426 "work.nwait=", incnwait, "work.nproc=", work.nproc)
1427 throw("work.nwait > work.nproc")
1430 // We'll releasem after this point and thus this P may run
1431 // something else. We must clear the worker mode to avoid
1432 // attributing the mode to a different (non-worker) G in traceGoStart.
1434 pp.gcMarkWorkerMode = gcMarkWorkerNotWorker
1436 // If this worker reached a background mark completion
1437 // point, signal the main GC goroutine.
1438 if incnwait == work.nproc && !gcMarkWorkAvailable(nil) {
1439 // We don't need the P-local buffers here, allow
1440 // preemption because we may schedule like a regular
1441 // goroutine in gcMarkDone (block on locks, etc).
1442 releasem(node.m.ptr())
1450 // gcMarkWorkAvailable reports whether executing a mark worker
1451 // on p is potentially useful. p may be nil, in which case it only
1452 // checks the global sources of work.
1453 func gcMarkWorkAvailable(p *p) bool {
1454 if p != nil && !p.gcw.empty() {
1457 if !work.full.empty() {
1458 return true // global work available
1460 if work.markrootNext < work.markrootJobs {
1461 return true // root scan work available
1466 // gcMark runs the mark (or, for concurrent GC, mark termination)
1467 // All gcWork caches must be empty.
1468 // STW is in effect at this point.
1469 func gcMark(startTime int64) {
1470 if debug.allocfreetrace > 0 {
1474 if gcphase != _GCmarktermination {
1475 throw("in gcMark expecting to see gcphase as _GCmarktermination")
1477 work.tstart = startTime
1479 // Check that there's no marking work remaining.
1480 if work.full != 0 || work.markrootNext < work.markrootJobs {
1481 print("runtime: full=", hex(work.full), " next=", work.markrootNext, " jobs=", work.markrootJobs, " nDataRoots=", work.nDataRoots, " nBSSRoots=", work.nBSSRoots, " nSpanRoots=", work.nSpanRoots, " nStackRoots=", work.nStackRoots, "\n")
1482 panic("non-empty mark queue after concurrent mark")
1485 if debug.gccheckmark > 0 {
1486 // This is expensive when there's a large number of
1487 // Gs, so only do it if checkmark is also enabled.
1491 // Drop allg snapshot. allgs may have grown, in which case
1492 // this is the only reference to the old backing store and
1493 // there's no need to keep it around.
1494 work.stackRoots = nil
1496 // Clear out buffers and double-check that all gcWork caches
1497 // are empty. This should be ensured by gcMarkDone before we
1498 // enter mark termination.
1500 // TODO: We could clear out buffers just before mark if this
1501 // has a non-negligible impact on STW time.
1502 for _, p := range allp {
1503 // The write barrier may have buffered pointers since
1504 // the gcMarkDone barrier. However, since the barrier
1505 // ensured all reachable objects were marked, all of
1506 // these must be pointers to black objects. Hence we
1507 // can just discard the write barrier buffer.
1508 if debug.gccheckmark > 0 {
1509 // For debugging, flush the buffer and make
1510 // sure it really was all marked.
1519 print("runtime: P ", p.id, " flushedWork ", gcw.flushedWork)
1520 if gcw.wbuf1 == nil {
1521 print(" wbuf1=<nil>")
1523 print(" wbuf1.n=", gcw.wbuf1.nobj)
1525 if gcw.wbuf2 == nil {
1526 print(" wbuf2=<nil>")
1528 print(" wbuf2.n=", gcw.wbuf2.nobj)
1531 throw("P has cached GC work at end of mark termination")
1533 // There may still be cached empty buffers, which we
1534 // need to flush since we're going to free them. Also,
1535 // there may be non-zero stats because we allocated
1536 // black after the gcMarkDone barrier.
1540 // Flush scanAlloc from each mcache since we're about to modify
1541 // heapScan directly. If we were to flush this later, then scanAlloc
1542 // might have incorrect information.
1544 // Note that it's not important to retain this information; we know
1545 // exactly what heapScan is at this point via scanWork.
1546 for _, p := range allp {
1554 // Reset controller state.
1555 gcController.resetLive(work.bytesMarked)
1558 // gcSweep must be called on the system stack because it acquires the heap
1559 // lock. See mheap for details.
1561 // Returns true if the heap was fully swept by this function.
1563 // The world must be stopped.
1566 func gcSweep(mode gcMode) bool {
1567 assertWorldStopped()
1569 if gcphase != _GCoff {
1570 throw("gcSweep being done but phase is not GCoff")
1574 mheap_.sweepgen += 2
1575 sweep.active.reset()
1576 mheap_.pagesSwept.Store(0)
1577 mheap_.sweepArenas = mheap_.allArenas
1578 mheap_.reclaimIndex.Store(0)
1579 mheap_.reclaimCredit.Store(0)
1580 unlock(&mheap_.lock)
1582 sweep.centralIndex.clear()
1584 if !concurrentSweep || mode == gcForceBlockMode {
1585 // Special case synchronous sweep.
1586 // Record that no proportional sweeping has to happen.
1588 mheap_.sweepPagesPerByte = 0
1589 unlock(&mheap_.lock)
1590 // Flush all mcaches.
1591 for _, pp := range allp {
1592 pp.mcache.prepareForSweep()
1594 // Sweep all spans eagerly.
1595 for sweepone() != ^uintptr(0) {
1598 // Free workbufs eagerly.
1599 prepareFreeWorkbufs()
1600 for freeSomeWbufs(false) {
1602 // All "free" events for this mark/sweep cycle have
1603 // now happened, so we can make this profile cycle
1604 // available immediately.
1610 // Background sweep.
1613 sweep.parked = false
1614 ready(sweep.g, 0, true)
1620 // gcResetMarkState resets global state prior to marking (concurrent
1621 // or STW) and resets the stack scan state of all Gs.
1623 // This is safe to do without the world stopped because any Gs created
1624 // during or after this will start out in the reset state.
1626 // gcResetMarkState must be called on the system stack because it acquires
1627 // the heap lock. See mheap for details.
1630 func gcResetMarkState() {
1631 // This may be called during a concurrent phase, so lock to make sure
1632 // allgs doesn't change.
1633 forEachG(func(gp *g) {
1634 gp.gcscandone = false // set to true in gcphasework
1635 gp.gcAssistBytes = 0
1638 // Clear page marks. This is just 1MB per 64GB of heap, so the
1639 // time here is pretty trivial.
1641 arenas := mheap_.allArenas
1642 unlock(&mheap_.lock)
1643 for _, ai := range arenas {
1644 ha := mheap_.arenas[ai.l1()][ai.l2()]
1645 for i := range ha.pageMarks {
1650 work.bytesMarked = 0
1651 work.initialHeapLive = gcController.heapLive.Load()
1654 // Hooks for other packages
1656 var poolcleanup func()
1657 var boringCaches []unsafe.Pointer // for crypto/internal/boring
1659 //go:linkname sync_runtime_registerPoolCleanup sync.runtime_registerPoolCleanup
1660 func sync_runtime_registerPoolCleanup(f func()) {
1664 //go:linkname boring_registerCache crypto/internal/boring/bcache.registerCache
1665 func boring_registerCache(p unsafe.Pointer) {
1666 boringCaches = append(boringCaches, p)
1671 if poolcleanup != nil {
1675 // clear boringcrypto caches
1676 for _, p := range boringCaches {
1677 atomicstorep(p, nil)
1680 // Clear central sudog cache.
1681 // Leave per-P caches alone, they have strictly bounded size.
1682 // Disconnect cached list before dropping it on the floor,
1683 // so that a dangling ref to one entry does not pin all of them.
1684 lock(&sched.sudoglock)
1685 var sg, sgnext *sudog
1686 for sg = sched.sudogcache; sg != nil; sg = sgnext {
1690 sched.sudogcache = nil
1691 unlock(&sched.sudoglock)
1693 // Clear central defer pool.
1694 // Leave per-P pools alone, they have strictly bounded size.
1695 lock(&sched.deferlock)
1696 // disconnect cached list before dropping it on the floor,
1697 // so that a dangling ref to one entry does not pin all of them.
1698 var d, dlink *_defer
1699 for d = sched.deferpool; d != nil; d = dlink {
1703 sched.deferpool = nil
1704 unlock(&sched.deferlock)
1709 // itoaDiv formats val/(10**dec) into buf.
1710 func itoaDiv(buf []byte, val uint64, dec int) []byte {
1713 for val >= 10 || i >= idec {
1714 buf[i] = byte(val%10 + '0')
1722 buf[i] = byte(val + '0')
1726 // fmtNSAsMS nicely formats ns nanoseconds as milliseconds.
1727 func fmtNSAsMS(buf []byte, ns uint64) []byte {
1729 // Format as whole milliseconds.
1730 return itoaDiv(buf, ns/1e6, 0)
1732 // Format two digits of precision, with at most three decimal places.
1743 return itoaDiv(buf, x, dec)
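// For example, itoaDiv(buf, 12345, 3) renders "12.345" (12345/10**3); the
// gctrace printer above uses these helpers to turn nanosecond counts into
// seconds and milliseconds.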
1746 // Helpers for testing GC.
1748 // gcTestMoveStackOnNextCall causes the stack to be moved on a call
1749 // immediately following the call to this. It may not work correctly
1750 // if any other work appears after this call (such as returning).
1751 // Typically the following call should be marked go:noinline so it
1752 // performs a stack check.
1754 // In rare cases this may not cause the stack to move, specifically if
1755 // there's a preemption between this call and the next.
1756 func gcTestMoveStackOnNextCall() {
1758 gp.stackguard0 = stackForceMove
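// A typical use in tests looks like this (sketch; the callee name is arbitrary
// and should be marked //go:noinline so its prologue performs a stack check):
//
//	gcTestMoveStackOnNextCall()
//	moveMe() // the stack is copied to a new location during this call's prologue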
1761 // gcTestIsReachable performs a GC and returns a bit set where bit i
1762 // is set if ptrs[i] is reachable.
1763 func gcTestIsReachable(ptrs ...unsafe.Pointer) (mask uint64) {
1764 // This takes the pointers as unsafe.Pointers in order to keep
1765 // them live long enough for us to attach specials. After
1766 // that, we drop our references to them.
1769 panic("too many pointers for uint64 mask")
1772 // Block GC while we attach specials and drop our references
1773 // to ptrs. Otherwise, if a GC is in progress, it could mark
1774 // them reachable via this function before we have a chance to release them.
1778 // Create reachability specials for ptrs.
1779 specials := make([]*specialReachable, len(ptrs))
1780 for i, p := range ptrs {
1781 lock(&mheap_.speciallock)
1782 s := (*specialReachable)(mheap_.specialReachableAlloc.alloc())
1783 unlock(&mheap_.speciallock)
1784 s.special.kind = _KindSpecialReachable
1785 if !addspecial(p, &s.special) {
1786 throw("already have a reachable special (duplicate pointer?)")
1789 // Make sure we don't retain ptrs.
1795 // Force a full GC and sweep.
1798 // Process specials.
1799 for i, s := range specials {
1802 println("runtime: object", i, "was not swept")
1803 throw("IsReachable failed")
1808 lock(&mheap_.speciallock)
1809 mheap_.specialReachableAlloc.free(unsafe.Pointer(s))
1810 unlock(&mheap_.speciallock)
1816 // gcTestPointerClass returns the category of what p points to, one of:
1817 // "heap", "stack", "data", "bss", "other". This is useful for checking
1818 // that a test is doing what it's intended to do.
1820 // This is nosplit simply to avoid extra pointer shuffling that may
1821 // complicate a test.
1824 func gcTestPointerClass(p unsafe.Pointer) string {
1825 p2 := uintptr(noescape(p))
1827 if gp.stack.lo <= p2 && p2 < gp.stack.hi {
1830 if base, _, _ := findObject(p2, 0, 0); base != 0 {
1833 for _, datap := range activeModules() {
1834 if datap.data <= p2 && p2 < datap.edata || datap.noptrdata <= p2 && p2 < datap.enoptrdata {
1837 if datap.bss <= p2 && p2 < datap.ebss || datap.noptrbss <= p2 && p2 <= datap.enoptrbss {