1 // Copyright 2014 The Go Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style
3 // license that can be found in the LICENSE file.
11 "runtime/internal/atomic"
12 "runtime/internal/sys"
16 // set using cmd/go/internal/modload.ModInfoProg
19 // Goroutine scheduler
20 // The scheduler's job is to distribute ready-to-run goroutines over worker threads.
22 // The main concepts are:
24 // M - worker thread, or machine.
25 // P - processor, a resource that is required to execute Go code.
26 // M must have an associated P to execute Go code; however, it can be
27 // blocked or in a syscall w/o an associated P.
29 // Design doc at https://golang.org/s/go11sched.
31 // Worker thread parking/unparking.
32 // We need to balance between keeping enough running worker threads to utilize
33 // available hardware parallelism and parking excessive running worker threads
34 // to conserve CPU resources and power. This is not simple for two reasons:
35 // (1) scheduler state is intentionally distributed (in particular, per-P work
36 // queues), so it is not possible to compute global predicates on fast paths;
37 // (2) for optimal thread management we would need to know the future (don't park
38 // a worker thread when a new goroutine will be readied in the near future).
40 // Three rejected approaches that would work badly:
41 // 1. Centralize all scheduler state (would inhibit scalability).
42 // 2. Direct goroutine handoff. That is, when we ready a new goroutine and there
43 // is a spare P, unpark a thread and hand it the P and the goroutine.
44 // This would lead to thread state thrashing, as the thread that readied the
45 // goroutine can be out of work the very next moment, at which point we would need to park it.
46 // Also, it would destroy locality of computation as we want to preserve
47 // dependent goroutines on the same thread; and introduce additional latency.
48 // 3. Unpark an additional thread whenever we ready a goroutine and there is an
49 // idle P, but don't do handoff. This would lead to excessive thread parking/
50 // unparking as the additional threads will instantly park without discovering any work to do.
53 // The current approach:
55 // This approach applies to three primary sources of potential work: readying a
56 // goroutine, new/modified-earlier timers, and idle-priority GC. See below for
57 // additional details.
59 // We unpark an additional thread when we submit work if (this is wakep()):
60 // 1. There is an idle P, and
61 // 2. There are no "spinning" worker threads.
63 // A worker thread is considered spinning if it is out of local work and did
64 // not find work in the global run queue or netpoller; the spinning state is
65 // denoted in m.spinning and in sched.nmspinning. Threads unparked this way are
66 // also considered spinning; we don't do goroutine handoff so such threads are
67 // out of work initially. Spinning threads spin on looking for work in per-P
68 // run queues and timer heaps or from the GC before parking. If a spinning
69 // thread finds work it takes itself out of the spinning state and proceeds to
70 // execution. If it does not find work it takes itself out of the spinning
71 // state and then parks.
73 // If there is at least one spinning thread (sched.nmspinning>0), we don't
74 // unpark new threads when submitting work. To compensate for that, if the last
75 // spinning thread finds work and stops spinning, it must unpark a new spinning
76 // thread. This approach smooths out unjustified spikes of thread unparking,
77 // but at the same time guarantees eventual maximal CPU parallelism utilization.
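//
// A rough sketch of the unpark condition described above, assuming the sched
// fields used elsewhere in this file (this is not the literal wakep
// implementation):
//
//	// Submitter side: wake a worker only if an idle P exists and
//	// nobody is spinning yet (otherwise a spinner will find the work).
//	if sched.npidle.Load() != 0 && sched.nmspinning.Load() == 0 {
//		wakep()
//	}
//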
80 // The main implementation complication is that we need to be very careful
81 // during spinning->non-spinning thread transition. This transition can race
82 // with submission of new work, and one side or the other needs to unpark
83 // another worker thread. If they both fail to do that, we can end up with
84 // semi-persistent CPU underutilization.
86 // The general pattern for submission is:
87 // 1. Submit work to the local or global run queue, timer heap, or GC state.
88 // 2. #StoreLoad-style memory barrier.
89 // 3. Check sched.nmspinning.
91 // The general pattern for spinning->non-spinning transition is:
92 // 1. Decrement nmspinning.
93 // 2. #StoreLoad-style memory barrier.
94 // 3. Check all per-P work queues and GC for new work.
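//
// A condensed sketch of the two patterns above (illustrative only; the real
// logic is spread across runqput, wakep, and findRunnable):
//
//	// Submission:
//	runqput(pp, gp, false)            // 1. publish the work
//	// 2. StoreLoad-style barrier (provided by the atomics involved).
//	if sched.nmspinning.Load() == 0 { // 3. nobody spinning?
//		wakep()                   // then unpark a thread ourselves
//	}
//
//	// Spinning -> non-spinning:
//	sched.nmspinning.Add(-1)          // 1. leave the spinning state
//	// 2. StoreLoad-style barrier.
//	// 3. Recheck all per-P run queues, timers, and GC work; if anything
//	//    was submitted concurrently, unpark a replacement thread.
//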
96 // Note that all this complexity does not apply to the global run queue as we are
97 // not sloppy about thread unparking when submitting to the global queue. Also see
98 // comments for nmspinning manipulation.
100 // How these different sources of work behave varies, though it doesn't affect
101 // the synchronization approach:
102 // * Ready goroutine: this is an obvious source of work; the goroutine is
103 // immediately ready and must run on some thread eventually.
104 // * New/modified-earlier timer: The current timer implementation (see time.go)
105 // uses netpoll in a thread with no work available to wait for the soonest
106 // timer. If there is no thread waiting, we want a new spinning thread to go wait.
108 // * Idle-priority GC: The GC wakes a stopped idle thread to contribute to
109 // background GC work (note: currently disabled per golang.org/issue/19112).
110 // Also see golang.org/issue/44313, as this should be extended to all GC workers.
121 // This slice records the initializing tasks that need to be
122 // done to start up the runtime. It is built by the linker.
123 var runtime_inittasks []*initTask
125 // main_init_done is a signal used by cgocallbackg that initialization
126 // has been completed. It is made before _cgo_notify_runtime_init_done,
127 // so all cgo calls can rely on it existing. When main_init is complete,
128 // it is closed, meaning cgocallbackg can reliably receive from it.
129 var main_init_done chan bool
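// A hedged illustration of the close/receive handshake described above (this
// is not the literal runtime.main or cgocallbackg code):
//
//	// runtime.main side:
//	main_init_done = make(chan bool)
//	// ... run package init functions ...
//	close(main_init_done)
//
//	// cgo callback side:
//	<-main_init_done // returns immediately once main_init has completed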
131 //go:linkname main_main main.main
134 // mainStarted indicates that the main M has started.
137 // runtimeInitTime is the nanotime() at which the runtime started.
138 var runtimeInitTime int64
140 // Value to use for signal mask for newly created M's.
141 var initSigmask sigset
143 // The main goroutine.
147 // Racectx of m0->g0 is used only as the parent of the main goroutine.
148 // It must not be used for anything else.
151 // Max stack size is 1 GB on 64-bit, 250 MB on 32-bit.
152 // Using decimal instead of binary GB and MB because
153 // they look nicer in the stack overflow failure message.
154 if goarch.PtrSize == 8 {
155 maxstacksize = 1000000000
157 maxstacksize = 250000000
160 // An upper limit for max stack size. Used to avoid random crashes
161 // after calling SetMaxStack and trying to allocate a stack that is too big,
162 // since stackalloc works with 32-bit sizes.
163 maxstackceiling = 2 * maxstacksize
165 // Allow newproc to start new Ms.
168 if GOARCH != "wasm" { // no threads on wasm yet, so no sysmon
170 newm(sysmon, nil, -1)
174 // Lock the main goroutine onto this, the main OS thread,
175 // during initialization. Most programs won't care, but a few
176 // do require certain calls to be made by the main thread.
177 // Those can arrange for main.main to run in the main thread
178 // by calling runtime.LockOSThread during initialization
179 // to preserve the lock.
183 throw("runtime.main not on m0")
186 // Record when the world started.
187 // Must be before doInit for tracing init.
188 runtimeInitTime = nanotime()
189 if runtimeInitTime == 0 {
190 throw("nanotime returning zero")
193 if debug.inittrace != 0 {
194 inittrace.id = getg().goid
195 inittrace.active = true
198 doInit(runtime_inittasks) // Must be before defer.
200 // Defer unlock so that runtime.Goexit during init does the unlock too.
210 main_init_done = make(chan bool)
212 if _cgo_pthread_key_created == nil {
213 throw("_cgo_pthread_key_created missing")
216 if _cgo_thread_start == nil {
217 throw("_cgo_thread_start missing")
219 if GOOS != "windows" {
220 if _cgo_setenv == nil {
221 throw("_cgo_setenv missing")
223 if _cgo_unsetenv == nil {
224 throw("_cgo_unsetenv missing")
227 if _cgo_notify_runtime_init_done == nil {
228 throw("_cgo_notify_runtime_init_done missing")
231 // Set the x_crosscall2_ptr C function pointer variable to point to crosscall2.
232 if set_crosscall2 == nil {
233 throw("set_crosscall2 missing")
237 // Start the template thread in case we enter Go from
238 // a C-created thread and need to create a new thread.
239 startTemplateThread()
240 cgocall(_cgo_notify_runtime_init_done, nil)
243 // Run the initializing tasks. Depending on build mode this
244 // list can arrive in a few different ways, but it will always
245 // contain the init tasks computed by the linker for all the
246 // packages in the program (excluding those added at runtime
247 // by package plugin). Run through the modules in dependency
248 // order (the order they are initialized by the dynamic
249 // loader, i.e. they are added to the moduledata linked list).
250 for m := &firstmoduledata; m != nil; m = m.next {
254 // Disable init tracing after main init done to avoid overhead
255 // of collecting statistics in malloc and newproc
256 inittrace.active = false
258 close(main_init_done)
263 if isarchive || islibrary {
264 // A program compiled with -buildmode=c-archive or c-shared
265 // has a main, but it is not executed.
268 fn := main_main // make an indirect call, as the linker doesn't know the address of the main package when laying down the runtime
271 runExitHooks(0) // run hooks now, since racefini does not return
275 // Make racy client program work: if panicking on
276 // another goroutine at the same time as main returns,
277 // let the other goroutine finish printing the panic trace.
278 // Once it does, it will exit. See issues 3934 and 20018.
279 if runningPanicDefers.Load() != 0 {
280 // Running deferred functions should not take long.
281 for c := 0; c < 1000; c++ {
282 if runningPanicDefers.Load() == 0 {
288 if panicking.Load() != 0 {
289 gopark(nil, nil, waitReasonPanicWait, traceBlockForever, 1)
300 // os_beforeExit is called from os.Exit(0).
302 //go:linkname os_beforeExit os.runtime_beforeExit
303 func os_beforeExit(exitCode int) {
304 runExitHooks(exitCode)
305 if exitCode == 0 && raceenabled {
310 // start forcegc helper goroutine
315 func forcegchelper() {
317 lockInit(&forcegc.lock, lockRankForcegc)
320 if forcegc.idle.Load() {
321 throw("forcegc: phase error")
323 forcegc.idle.Store(true)
324 goparkunlock(&forcegc.lock, waitReasonForceGCIdle, traceBlockSystemGoroutine, 1)
325 // this goroutine is explicitly resumed by sysmon
326 if debug.gctrace > 0 {
329 // Time-triggered, fully concurrent.
330 gcStart(gcTrigger{kind: gcTriggerTime, now: nanotime()})
334 // Gosched yields the processor, allowing other goroutines to run. It does not
335 // suspend the current goroutine, so execution resumes automatically.
343 // goschedguarded yields the processor like gosched, but also checks
344 // for forbidden states and opts out of the yield in those cases.
347 func goschedguarded() {
348 mcall(goschedguarded_m)
351 // goschedIfBusy yields the processor like gosched, but only does so if
352 // there are no idle Ps or if we're on the only P and there's nothing in
353 // the run queue. In both cases, there is freely available idle time.
356 func goschedIfBusy() {
358 // Call gosched if gp.preempt is set; we may be in a tight loop that
359 // doesn't otherwise yield.
360 if !gp.preempt && sched.npidle.Load() > 0 {
366 // Puts the current goroutine into a waiting state and calls unlockf on the system stack.
369 // If unlockf returns false, the goroutine is resumed.
371 // unlockf must not access this G's stack, as it may be moved between
372 // the call to gopark and the call to unlockf.
374 // Note that because unlockf is called after putting the G into a waiting
375 // state, the G may have already been readied by the time unlockf is called
376 // unless there is external synchronization preventing the G from being
377 // readied. If unlockf returns false, it must guarantee that the G cannot be
378 // externally readied.
380 // Reason explains why the goroutine has been parked. It is displayed in stack
381 // traces and heap dumps. Reasons should be unique and descriptive. Do not
382 // re-use reasons, add new ones.
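//
// A typical call shape, as a sketch only (waitReasonXxx and traceBlockXxx are
// placeholders, and the wait-queue bookkeeping is elided):
//
//	lock(&l)
//	// ... record the current g in some wait queue protected by l ...
//	goparkunlock(&l, waitReasonXxx, traceBlockXxx, 1)
//	// Execution resumes here after a matching goready on this g.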
383 func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceReason traceBlockReason, traceskip int) {
384 if reason != waitReasonSleep {
385 checkTimeouts() // timeouts may expire while two goroutines keep the scheduler busy
389 status := readgstatus(gp)
390 if status != _Grunning && status != _Gscanrunning {
391 throw("gopark: bad g status")
394 mp.waitunlockf = unlockf
395 gp.waitreason = reason
396 mp.waitTraceBlockReason = traceReason
397 mp.waitTraceSkip = traceskip
399 // can't do anything that might move the G between Ms here.
403 // Puts the current goroutine into a waiting state and unlocks the lock.
404 // The goroutine can be made runnable again by calling goready(gp).
405 func goparkunlock(lock *mutex, reason waitReason, traceReason traceBlockReason, traceskip int) {
406 gopark(parkunlock_c, unsafe.Pointer(lock), reason, traceReason, traceskip)
409 func goready(gp *g, traceskip int) {
411 ready(gp, traceskip, true)
416 func acquireSudog() *sudog {
417 // Delicate dance: the semaphore implementation calls
418 // acquireSudog, acquireSudog calls new(sudog),
419 // new calls malloc, malloc can call the garbage collector,
420 // and the garbage collector calls the semaphore implementation in stopTheWorld.
422 // Break the cycle by doing acquirem/releasem around new(sudog).
423 // The acquirem/releasem increments m.locks during new(sudog),
424 // which keeps the garbage collector from being invoked.
427 if len(pp.sudogcache) == 0 {
428 lock(&sched.sudoglock)
429 // First, try to grab a batch from central cache.
430 for len(pp.sudogcache) < cap(pp.sudogcache)/2 && sched.sudogcache != nil {
431 s := sched.sudogcache
432 sched.sudogcache = s.next
434 pp.sudogcache = append(pp.sudogcache, s)
436 unlock(&sched.sudoglock)
437 // If the central cache is empty, allocate a new one.
438 if len(pp.sudogcache) == 0 {
439 pp.sudogcache = append(pp.sudogcache, new(sudog))
442 n := len(pp.sudogcache)
443 s := pp.sudogcache[n-1]
444 pp.sudogcache[n-1] = nil
445 pp.sudogcache = pp.sudogcache[:n-1]
447 throw("acquireSudog: found s.elem != nil in cache")
454 func releaseSudog(s *sudog) {
456 throw("runtime: sudog with non-nil elem")
459 throw("runtime: sudog with non-false isSelect")
462 throw("runtime: sudog with non-nil next")
465 throw("runtime: sudog with non-nil prev")
467 if s.waitlink != nil {
468 throw("runtime: sudog with non-nil waitlink")
471 throw("runtime: sudog with non-nil c")
475 throw("runtime: releaseSudog with non-nil gp.param")
477 mp := acquirem() // avoid rescheduling to another P
479 if len(pp.sudogcache) == cap(pp.sudogcache) {
480 // Transfer half of local cache to the central cache.
481 var first, last *sudog
482 for len(pp.sudogcache) > cap(pp.sudogcache)/2 {
483 n := len(pp.sudogcache)
484 p := pp.sudogcache[n-1]
485 pp.sudogcache[n-1] = nil
486 pp.sudogcache = pp.sudogcache[:n-1]
494 lock(&sched.sudoglock)
495 last.next = sched.sudogcache
496 sched.sudogcache = first
497 unlock(&sched.sudoglock)
499 pp.sudogcache = append(pp.sudogcache, s)
503 // called from assembly.
504 func badmcall(fn func(*g)) {
505 throw("runtime: mcall called on m->g0 stack")
508 func badmcall2(fn func(*g)) {
509 throw("runtime: mcall function returned")
512 func badreflectcall() {
513 panic(plainError("arg size to reflect.call more than 1GB"))
517 //go:nowritebarrierrec
518 func badmorestackg0() {
519 if !crashStackImplemented {
520 writeErrStr("fatal: morestack on g0\n")
525 switchToCrashStack(func() {
526 print("runtime: morestack on g0, stack [", hex(g.stack.lo), " ", hex(g.stack.hi), "], sp=", hex(g.sched.sp), ", called from\n")
527 g.m.traceback = 2 // include pc and sp in stack trace
528 traceback1(g.sched.pc, g.sched.sp, g.sched.lr, g, 0)
531 throw("morestack on g0")
536 //go:nowritebarrierrec
537 func badmorestackgsignal() {
538 writeErrStr("fatal: morestack on gsignal\n")
546 // gcrash is a fake g that can be used when crashing due to bad stack conditions.
550 var crashingG atomic.Pointer[g]
552 // Switch to crashstack and call fn, with special handling of
553 // concurrent and recursive cases.
555 // Nosplit as it is called in a bad stack condition (we know
556 // morestack would fail).
559 //go:nowritebarrierrec
560 func switchToCrashStack(fn func()) {
562 if crashingG.CompareAndSwapNoWB(nil, me) {
563 switchToCrashStack0(fn) // should never return
566 if crashingG.Load() == me {
567 // recursive crashing. too bad.
568 writeErrStr("fatal: recursive switchToCrashStack\n")
571 // Another g is crashing. Give it some time; hopefully it will finish the traceback.
573 writeErrStr("fatal: concurrent switchToCrashStack\n")
577 const crashStackImplemented = GOARCH == "amd64" || GOARCH == "arm64" || GOARCH == "mips64" || GOARCH == "mips64le" || GOARCH == "riscv64"
580 func switchToCrashStack0(fn func()) // in assembly
582 func lockedOSThread() bool {
584 return gp.lockedm != 0 && gp.m.lockedg != 0
588 // allgs contains all Gs ever created (including dead Gs), and thus never shrinks.
591 // Access via the slice is protected by allglock or stop-the-world.
592 // Readers that cannot take the lock may (carefully!) use the atomic variables below.
597 // allglen and allgptr are atomic variables that contain len(allgs) and
598 // &allgs[0] respectively. Proper ordering depends on totally-ordered
599 // loads and stores. Writes are protected by allglock.
601 // allgptr is updated before allglen. Readers should read allglen
602 // before allgptr to ensure that allglen is always <= len(allgptr). New
603 // Gs appended during the race can be missed. For a consistent view of
604 // all Gs, allglock must be held.
606 // allgptr copies should always be stored as a concrete type or
607 // unsafe.Pointer, not uintptr, to ensure that GC can still reach it
608 // even if it points to a stale array.
613 func allgadd(gp *g) {
614 if readgstatus(gp) == _Gidle {
615 throw("allgadd: bad status Gidle")
619 allgs = append(allgs, gp)
620 if &allgs[0] != allgptr {
621 atomicstorep(unsafe.Pointer(&allgptr), unsafe.Pointer(&allgs[0]))
623 atomic.Storeuintptr(&allglen, uintptr(len(allgs)))
627 // allGsSnapshot returns a snapshot of the slice of all Gs.
629 // The world must be stopped or allglock must be held.
630 func allGsSnapshot() []*g {
631 assertWorldStoppedOrLockHeld(&allglock)
633 // Because the world is stopped or allglock is held, allgadd
634 // cannot happen concurrently with this. allgs grows
635 // monotonically and existing entries never change, so we can
636 // simply return a copy of the slice header. For added safety,
637 // we trim everything past len because that can still change.
638 return allgs[:len(allgs):len(allgs)]
641 // atomicAllG returns &allgs[0] and len(allgs) for use with atomicAllGIndex.
642 func atomicAllG() (**g, uintptr) {
643 length := atomic.Loaduintptr(&allglen)
644 ptr := (**g)(atomic.Loadp(unsafe.Pointer(&allgptr)))
648 // atomicAllGIndex returns ptr[i] with the allgptr returned from atomicAllG.
649 func atomicAllGIndex(ptr **g, i uintptr) *g {
650 return *(**g)(add(unsafe.Pointer(ptr), i*goarch.PtrSize))
653 // forEachG calls fn on every G from allgs.
655 // forEachG takes a lock to exclude concurrent addition of new Gs.
656 func forEachG(fn func(gp *g)) {
658 for _, gp := range allgs {
664 // forEachGRace calls fn on every G from allgs.
666 // forEachGRace avoids locking, but does not exclude addition of new Gs during
667 // execution, which may be missed.
668 func forEachGRace(fn func(gp *g)) {
669 ptr, length := atomicAllG()
670 for i := uintptr(0); i < length; i++ {
671 gp := atomicAllGIndex(ptr, i)
678 // Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
679 // 16 seems to provide enough amortization, but other than that it's a mostly arbitrary number.
683 // cpuinit sets up CPU feature flags and calls internal/cpu.Initialize. env should be the complete
684 // value of the GODEBUG environment variable.
685 func cpuinit(env string) {
687 case "aix", "darwin", "ios", "dragonfly", "freebsd", "netbsd", "openbsd", "illumos", "solaris", "linux":
688 cpu.DebugOptions = true
692 // These cpu feature variables are used in code generated by the compiler
693 // to guard execution of instructions that cannot be assumed to be always supported.
696 x86HasPOPCNT = cpu.X86.HasPOPCNT
697 x86HasSSE41 = cpu.X86.HasSSE41
698 x86HasFMA = cpu.X86.HasFMA
701 armHasVFPv4 = cpu.ARM.HasVFPv4
704 arm64HasATOMICS = cpu.ARM64.HasATOMICS
708 // getGodebugEarly extracts the environment variable GODEBUG from the environment on
709 // Unix-like operating systems and returns it. This function exists to extract GODEBUG
710 // early before much of the runtime is initialized.
711 func getGodebugEarly() string {
712 const prefix = "GODEBUG="
715 case "aix", "darwin", "ios", "dragonfly", "freebsd", "netbsd", "openbsd", "illumos", "solaris", "linux":
716 // Similar to goenv_unix but extracts the environment value for GODEBUG directly.
718 // TODO(moehrmann): remove when general goenvs() can be called before cpuinit()
720 for argv_index(argv, argc+1+n) != nil {
724 for i := int32(0); i < n; i++ {
725 p := argv_index(argv, argc+1+i)
726 s := unsafe.String(p, findnull(p))
728 if hasPrefix(s, prefix) {
729 env = gostring(p)[len(prefix):]
737 // The bootstrap sequence is:
741 // make & queue new G
742 // call runtime·mstart
744 // The new G calls runtime·main.
746 lockInit(&sched.lock, lockRankSched)
747 lockInit(&sched.sysmonlock, lockRankSysmon)
748 lockInit(&sched.deferlock, lockRankDefer)
749 lockInit(&sched.sudoglock, lockRankSudog)
750 lockInit(&deadlock, lockRankDeadlock)
751 lockInit(&paniclk, lockRankPanic)
752 lockInit(&allglock, lockRankAllg)
753 lockInit(&allpLock, lockRankAllp)
754 lockInit(&reflectOffs.lock, lockRankReflectOffs)
755 lockInit(&finlock, lockRankFin)
756 lockInit(&cpuprof.lock, lockRankCpuprof)
758 // Enforce that this lock is always a leaf lock.
759 // All of this lock's critical sections should be extremely short.
761 lockInit(&memstats.heapStats.noPLock, lockRankLeafRank)
763 // raceinit must be the first call to race detector.
764 // In particular, it must be done before mallocinit below calls racemapshadow.
767 gp.racectx, raceprocctx0 = raceinit()
770 sched.maxmcount = 10000
772 // The world starts stopped.
778 godebug := getGodebugEarly()
779 initPageTrace(godebug) // must run after mallocinit but before anything allocates
780 cpuinit(godebug) // must run before alginit
781 alginit() // maps, hash, fastrand must not be used before this call
782 fastrandinit() // must run before mcommoninit
783 mcommoninit(gp.m, -1)
784 modulesinit() // provides activeModules
785 typelinksinit() // uses maps, activeModules
786 itabsinit() // uses activeModules
787 stkobjinit() // must run before GC starts
789 sigsave(&gp.m.sigmask)
790 initSigmask = gp.m.sigmask
799 // Allocate stack space that can be used when crashing due to bad stack
800 // conditions, e.g. morestack on g0.
801 gcrash.stack = stackalloc(16384)
802 gcrash.stackguard0 = gcrash.stack.lo + 1000
803 gcrash.stackguard1 = gcrash.stack.lo + 1000
805 // if disableMemoryProfiling is set, update MemProfileRate to 0 to turn off memprofile.
806 // Note: parsedebugvars may update MemProfileRate, but when disableMemoryProfiling is
807 // set to true by the linker, it means that nothing is consuming the profile, so it is
808 // safe to set MemProfileRate to 0.
809 if disableMemoryProfiling {
814 sched.lastpoll.Store(nanotime())
816 if n, ok := atoi32(gogetenv("GOMAXPROCS")); ok && n > 0 {
819 if procresize(procs) != nil {
820 throw("unknown runnable goroutine during bootstrap")
824 // World is effectively started now, as P's can run.
827 if buildVersion == "" {
828 // Condition should never trigger. This code just serves
829 // to ensure runtime·buildVersion is kept in the resulting binary.
830 buildVersion = "unknown"
832 if len(modinfo) == 1 {
833 // Condition should never trigger. This code just serves
834 // to ensure runtime·modinfo is kept in the resulting binary.
839 func dumpgstatus(gp *g) {
841 print("runtime: gp: gp=", gp, ", goid=", gp.goid, ", gp->atomicstatus=", readgstatus(gp), "\n")
842 print("runtime: getg: g=", thisg, ", goid=", thisg.goid, ", g->atomicstatus=", readgstatus(thisg), "\n")
845 // sched.lock must be held.
847 assertLockHeld(&sched.lock)
849 // Exclude extra M's, which are used for cgocallback from threads created in C.
852 // The purpose of the SetMaxThreads limit is to avoid accidental fork
853 // bomb from something like millions of goroutines blocking on system
854 // calls, causing the runtime to create millions of threads. By
855 // definition, this isn't a problem for threads created in C, so we
856 // exclude them from the limit. See https://go.dev/issue/60004.
857 count := mcount() - int32(extraMInUse.Load()) - int32(extraMLength.Load())
858 if count > sched.maxmcount {
859 print("runtime: program exceeds ", sched.maxmcount, "-thread limit\n")
860 throw("thread exhaustion")
864 // mReserveID returns the next ID to use for a new m. This new m is immediately
865 // considered 'running' by checkdead.
867 // sched.lock must be held.
868 func mReserveID() int64 {
869 assertLockHeld(&sched.lock)
871 if sched.mnext+1 < sched.mnext {
872 throw("runtime: thread ID overflow")
880 // Pre-allocated ID may be passed as 'id', or omitted by passing -1.
881 func mcommoninit(mp *m, id int64) {
884 // g0 stack won't make sense for the user (and is not necessarily unwindable).
886 callers(1, mp.createstack[:])
897 lo := uint32(int64Hash(uint64(mp.id), fastrandseed))
898 hi := uint32(int64Hash(uint64(cputicks()), ^fastrandseed))
902 // Same behavior as for 1.17.
903 // TODO: Simplify this.
904 if goarch.BigEndian {
905 mp.fastrand = uint64(lo)<<32 | uint64(hi)
907 mp.fastrand = uint64(hi)<<32 | uint64(lo)
911 if mp.gsignal != nil {
912 mp.gsignal.stackguard1 = mp.gsignal.stack.lo + stackGuard
915 // Add to allm so garbage collector doesn't free g->m
916 // when it is just in a register or thread-local storage.
919 // NumCgoCall() iterates over allm w/o schedlock,
920 // so we need to publish it safely.
921 atomicstorep(unsafe.Pointer(&allm), unsafe.Pointer(mp))
924 // Allocate memory to hold a cgo traceback if the cgo call crashes.
925 if iscgo || GOOS == "solaris" || GOOS == "illumos" || GOOS == "windows" {
926 mp.cgoCallers = new(cgoCallers)
930 func (mp *m) becomeSpinning() {
932 sched.nmspinning.Add(1)
933 sched.needspinning.Store(0)
936 func (mp *m) hasCgoOnStack() bool {
937 return mp.ncgo > 0 || mp.isextra
940 var fastrandseed uintptr
942 func fastrandinit() {
943 s := (*[unsafe.Sizeof(fastrandseed)]byte)(unsafe.Pointer(&fastrandseed))[:]
947 // Mark gp ready to run.
948 func ready(gp *g, traceskip int, next bool) {
949 status := readgstatus(gp)
952 mp := acquirem() // disable preemption because it can be holding p in a local var
953 if status&^_Gscan != _Gwaiting {
955 throw("bad g->status in ready")
958 // status is Gwaiting or Gscanwaiting, make Grunnable and put on runq
959 trace := traceAcquire()
960 casgstatus(gp, _Gwaiting, _Grunnable)
962 trace.GoUnpark(gp, traceskip)
965 runqput(mp.p.ptr(), gp, next)
970 // freezeStopWait is a large value that freezetheworld sets
971 // sched.stopwait to in order to request that all Gs permanently stop.
972 const freezeStopWait = 0x7fffffff
974 // freezing is set if the runtime is trying to freeze the world.
976 var freezing atomic.Bool
978 // Similar to stopTheWorld but best-effort and can be called several times.
979 // There is no reverse operation; it is used during crashing.
980 // This function must not lock any mutexes.
981 func freezetheworld() {
983 if debug.dontfreezetheworld > 0 {
984 // Don't preempt Ps to stop goroutines. That will perturb
985 // scheduler state, making debugging more difficult. Instead,
986 // allow goroutines to continue execution.
988 // fatalpanic will tracebackothers to trace all goroutines. It
989 // is unsafe to trace a running goroutine, so tracebackothers
990 // will skip running goroutines. That is OK and expected, we
991 // expect users of dontfreezetheworld to use core files anyway.
993 // However, allowing the scheduler to continue running freely
994 // introduces a race: a goroutine may be stopped when
995 // tracebackothers checks its status, and then start running
996 // later when we are in the middle of traceback, potentially
999 // To mitigate this, when an M naturally enters the scheduler,
1000 // schedule checks if freezing is set and if so stops
1001 // execution. This guarantees that while Gs can transition from
1002 // running to stopped, they can never transition from stopped to running.
1005 // The sleep here allows racing Ms that missed freezing and are
1006 // about to run a G to complete the transition to running
1007 // before we start traceback.
1012 // stopwait and preemption requests can be lost
1013 // due to races with concurrently executing threads,
1014 // so try several times
1015 for i := 0; i < 5; i++ {
1016 // this should tell the scheduler to not start any new goroutines
1017 sched.stopwait = freezeStopWait
1018 sched.gcwaiting.Store(true)
1019 // this should stop running goroutines
1021 break // no running goroutines
1031 // All reads and writes of g's status go through readgstatus, casgstatus,
1032 // castogscanstatus, casfrom_Gscanstatus.
1035 func readgstatus(gp *g) uint32 {
1036 return gp.atomicstatus.Load()
1039 // The Gscanstatuses are acting like locks and this releases them.
1040 // If it proves to be a performance hit we should be able to make these
1041 // simple atomic stores but for now we are going to throw if
1042 // we see an inconsistent state.
1043 func casfrom_Gscanstatus(gp *g, oldval, newval uint32) {
1046 // Check that transition is valid.
1049 print("runtime: casfrom_Gscanstatus bad oldval gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
1051 throw("casfrom_Gscanstatus:top gp->status is not in scan state")
1052 case _Gscanrunnable,
1057 if newval == oldval&^_Gscan {
1058 success = gp.atomicstatus.CompareAndSwap(oldval, newval)
1062 print("runtime: casfrom_Gscanstatus failed gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
1064 throw("casfrom_Gscanstatus: gp->status is not in scan state")
1066 releaseLockRank(lockRankGscan)
1069 // This will return false if the gp is not in the expected status and the cas fails.
1070 // This acts like a lock acquire while casfrom_Gscanstatus acts like a lock release.
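//
// A sketch of the lock-like pairing (not taken verbatim from any one caller):
//
//	if castogscanstatus(gp, _Grunnable, _Gscanrunnable) {
//		// gp's status is now "locked"; it cannot start running.
//		// ... inspect or scan gp here ...
//		casfrom_Gscanstatus(gp, _Gscanrunnable, _Grunnable)
//	}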
1071 func castogscanstatus(gp *g, oldval, newval uint32) bool {
1077 if newval == oldval|_Gscan {
1078 r := gp.atomicstatus.CompareAndSwap(oldval, newval)
1080 acquireLockRank(lockRankGscan)
1086 print("runtime: castogscanstatus oldval=", hex(oldval), " newval=", hex(newval), "\n")
1087 throw("castogscanstatus")
1088 panic("not reached")
1091 // casgstatusAlwaysTrack is a debug flag that causes casgstatus to always track
1092 // various latencies on every transition instead of sampling them.
1093 var casgstatusAlwaysTrack = false
1095 // If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus
1096 // and casfrom_Gscanstatus instead.
1097 // casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that
1098 // put it in the Gscan state is finished.
1101 func casgstatus(gp *g, oldval, newval uint32) {
1102 if (oldval&_Gscan != 0) || (newval&_Gscan != 0) || oldval == newval {
1103 systemstack(func() {
1104 print("runtime: casgstatus: oldval=", hex(oldval), " newval=", hex(newval), "\n")
1105 throw("casgstatus: bad incoming values")
1109 acquireLockRank(lockRankGscan)
1110 releaseLockRank(lockRankGscan)
1112 // See https://golang.org/cl/21503 for justification of the yield delay.
1113 const yieldDelay = 5 * 1000
1116 // loop if gp->atomicstatus is in a scan state giving
1117 // GC time to finish and change the state to oldval.
1118 for i := 0; !gp.atomicstatus.CompareAndSwap(oldval, newval); i++ {
1119 if oldval == _Gwaiting && gp.atomicstatus.Load() == _Grunnable {
1120 throw("casgstatus: waiting for Gwaiting but is Grunnable")
1123 nextYield = nanotime() + yieldDelay
1125 if nanotime() < nextYield {
1126 for x := 0; x < 10 && gp.atomicstatus.Load() != oldval; x++ {
1131 nextYield = nanotime() + yieldDelay/2
1135 if oldval == _Grunning {
1136 // Track every gTrackingPeriod-th time a goroutine transitions out of running.
1137 if casgstatusAlwaysTrack || gp.trackingSeq%gTrackingPeriod == 0 {
1146 // Handle various kinds of tracking.
1149 // - Time spent in runnable.
1150 // - Time spent blocked on a sync.Mutex or sync.RWMutex.
1153 // We transitioned out of runnable, so measure how much
1154 // time we spent in this state and add it to
1157 gp.runnableTime += now - gp.trackingStamp
1158 gp.trackingStamp = 0
1160 if !gp.waitreason.isMutexWait() {
1161 // Not blocking on a lock.
1164 // Blocking on a lock, measure it. Note that because we're
1165 // sampling, we have to multiply by our sampling period to get
1166 // a more representative estimate of the absolute value.
1167 // gTrackingPeriod also represents an accurate sampling period
1168 // because we can only enter this state from _Grunning.
1170 sched.totalMutexWaitTime.Add((now - gp.trackingStamp) * gTrackingPeriod)
1171 gp.trackingStamp = 0
1175 if !gp.waitreason.isMutexWait() {
1176 // Not blocking on a lock.
1179 // Blocking on a lock. Write down the timestamp.
1181 gp.trackingStamp = now
1183 // We just transitioned into runnable, so record what
1184 // time that happened.
1186 gp.trackingStamp = now
1188 // We're transitioning into running, so turn off
1189 // tracking and record how much time we spent in
1192 sched.timeToRun.record(gp.runnableTime)
1197 // casGToWaiting transitions gp from old to _Gwaiting, and sets the wait reason.
1199 // Use this over casgstatus when possible to ensure that a waitreason is set.
1200 func casGToWaiting(gp *g, old uint32, reason waitReason) {
1201 // Set the wait reason before calling casgstatus, because casgstatus will use it.
1202 gp.waitreason = reason
1203 casgstatus(gp, old, _Gwaiting)
1206 // casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable.
1207 // Returns old status. Cannot call casgstatus directly, because we are racing with an
1208 // async wakeup that might come in from netpoll. If we see Gwaiting from the readgstatus,
1209 // it might have become Grunnable by the time we get to the cas. If we called casgstatus,
1210 // it would loop waiting for the status to go back to Gwaiting, which it never will.
1213 func casgcopystack(gp *g) uint32 {
1215 oldstatus := readgstatus(gp) &^ _Gscan
1216 if oldstatus != _Gwaiting && oldstatus != _Grunnable {
1217 throw("copystack: bad status, not Gwaiting or Grunnable")
1219 if gp.atomicstatus.CompareAndSwap(oldstatus, _Gcopystack) {
1225 // casGToPreemptScan transitions gp from _Grunning to _Gscan|_Gpreempted.
1227 // TODO(austin): This is the only status operation that both changes
1228 // the status and locks the _Gscan bit. Rethink this.
1229 func casGToPreemptScan(gp *g, old, new uint32) {
1230 if old != _Grunning || new != _Gscan|_Gpreempted {
1231 throw("bad g transition")
1233 acquireLockRank(lockRankGscan)
1234 for !gp.atomicstatus.CompareAndSwap(_Grunning, _Gscan|_Gpreempted) {
1238 // casGFromPreempted attempts to transition gp from _Gpreempted to
1239 // _Gwaiting. If successful, the caller is responsible for
1240 // re-scheduling gp.
1241 func casGFromPreempted(gp *g, old, new uint32) bool {
1242 if old != _Gpreempted || new != _Gwaiting {
1243 throw("bad g transition")
1245 gp.waitreason = waitReasonPreempted
1246 return gp.atomicstatus.CompareAndSwap(_Gpreempted, _Gwaiting)
1249 // stwReason is an enumeration of reasons the world is stopping.
1250 type stwReason uint8
1252 // Reasons to stop-the-world.
1254 // Avoid reusing reasons and add new ones instead.
1256 stwUnknown stwReason = iota // "unknown"
1257 stwGCMarkTerm // "GC mark termination"
1258 stwGCSweepTerm // "GC sweep termination"
1259 stwWriteHeapDump // "write heap dump"
1260 stwGoroutineProfile // "goroutine profile"
1261 stwGoroutineProfileCleanup // "goroutine profile cleanup"
1262 stwAllGoroutinesStack // "all goroutines stack trace"
1263 stwReadMemStats // "read mem stats"
1264 stwAllThreadsSyscall // "AllThreadsSyscall"
1265 stwGOMAXPROCS // "GOMAXPROCS"
1266 stwStartTrace // "start trace"
1267 stwStopTrace // "stop trace"
1268 stwForTestCountPagesInUse // "CountPagesInUse (test)"
1269 stwForTestReadMetricsSlow // "ReadMetricsSlow (test)"
1270 stwForTestReadMemStatsSlow // "ReadMemStatsSlow (test)"
1271 stwForTestPageCachePagesLeaked // "PageCachePagesLeaked (test)"
1272 stwForTestResetDebugLog // "ResetDebugLog (test)"
1275 func (r stwReason) String() string {
1276 return stwReasonStrings[r]
1279 // If you add to this list, also add it to src/internal/trace/parser.go.
1280 // If you change the values of any of the stw* constants, bump the trace
1281 // version number and make a copy of this.
1282 var stwReasonStrings = [...]string{
1283 stwUnknown: "unknown",
1284 stwGCMarkTerm: "GC mark termination",
1285 stwGCSweepTerm: "GC sweep termination",
1286 stwWriteHeapDump: "write heap dump",
1287 stwGoroutineProfile: "goroutine profile",
1288 stwGoroutineProfileCleanup: "goroutine profile cleanup",
1289 stwAllGoroutinesStack: "all goroutines stack trace",
1290 stwReadMemStats: "read mem stats",
1291 stwAllThreadsSyscall: "AllThreadsSyscall",
1292 stwGOMAXPROCS: "GOMAXPROCS",
1293 stwStartTrace: "start trace",
1294 stwStopTrace: "stop trace",
1295 stwForTestCountPagesInUse: "CountPagesInUse (test)",
1296 stwForTestReadMetricsSlow: "ReadMetricsSlow (test)",
1297 stwForTestReadMemStatsSlow: "ReadMemStatsSlow (test)",
1298 stwForTestPageCachePagesLeaked: "PageCachePagesLeaked (test)",
1299 stwForTestResetDebugLog: "ResetDebugLog (test)",
1302 // stopTheWorld stops all P's from executing goroutines, interrupting
1303 // all goroutines at GC safe points, and records reason as the reason
1304 // for the stop. On return, only the current goroutine's P is running.
1305 // stopTheWorld must not be called from a system stack and the caller
1306 // must not hold worldsema. The caller must call startTheWorld when
1307 // other P's should resume execution.
1309 // stopTheWorld is safe for multiple goroutines to call at the
1310 // same time. Each will execute its own stop, and the stops will be serialized.
1313 // This is also used by routines that do stack dumps. If the system is
1314 // in panic or being exited, this may not reliably stop all goroutines.
1316 func stopTheWorld(reason stwReason) {
1317 semacquire(&worldsema)
1319 gp.m.preemptoff = reason.String()
1320 systemstack(func() {
1321 // Mark the goroutine which called stopTheWorld preemptible so its
1322 // stack may be scanned.
1323 // This lets a mark worker scan us while we try to stop the world
1324 // since otherwise we could get in a mutual preemption deadlock.
1325 // We must not modify anything on the G stack because a stack shrink
1326 // may occur. A stack shrink is otherwise OK though because in order
1327 // to return from this function (and to leave the system stack) we
1328 // must have preempted all goroutines, including any attempting
1329 // to scan our stack, in which case, any stack shrinking will
1330 // have already completed by the time we exit.
1331 // Don't provide a wait reason because we're still executing.
1332 casGToWaiting(gp, _Grunning, waitReasonStoppingTheWorld)
1333 stopTheWorldWithSema(reason)
1334 casgstatus(gp, _Gwaiting, _Grunning)
1338 // startTheWorld undoes the effects of stopTheWorld.
1339 func startTheWorld() {
1340 systemstack(func() { startTheWorldWithSema() })
1342 // worldsema must be held over startTheWorldWithSema to ensure
1343 // gomaxprocs cannot change while worldsema is held.
1345 // Release worldsema with direct handoff to the next waiter, but
1346 // acquirem so that semrelease1 doesn't try to yield our time.
1348 // Otherwise if e.g. ReadMemStats is being called in a loop,
1349 // it might stomp on other attempts to stop the world, such as
1350 // for starting or ending GC. The operation this blocks is
1351 // so heavy-weight that we should just try to be as fair as possible here.
1354 // We don't want to allow ourselves to get preempted between now
1355 // and releasing the semaphore because then we keep everyone
1356 // (including, for example, GCs) waiting longer.
1359 semrelease1(&worldsema, true, 0)
1363 // stopTheWorldGC has the same effect as stopTheWorld, but blocks
1364 // until the GC is not running. It also blocks a GC from starting
1365 // until startTheWorldGC is called.
1366 func stopTheWorldGC(reason stwReason) {
1368 stopTheWorld(reason)
1371 // startTheWorldGC undoes the effects of stopTheWorldGC.
1372 func startTheWorldGC() {
1377 // Holding worldsema grants an M the right to try to stop the world.
1378 var worldsema uint32 = 1
1380 // Holding gcsema grants the M the right to block a GC, and blocks
1381 // until the current GC is done. In particular, it prevents gomaxprocs
1382 // from changing concurrently.
1384 // TODO(mknyszek): Once gomaxprocs and the execution tracer can handle
1385 // being changed/enabled during a GC, remove this.
1386 var gcsema uint32 = 1
1388 // stopTheWorldWithSema is the core implementation of stopTheWorld.
1389 // The caller is responsible for acquiring worldsema and disabling
1390 // preemption first and then should call stopTheWorldWithSema on the system stack:
1393 // semacquire(&worldsema, 0)
1394 // m.preemptoff = "reason"
1395 // systemstack(stopTheWorldWithSema)
1397 // When finished, the caller must either call startTheWorld or undo
1398 // these three operations separately:
1400 // m.preemptoff = ""
1401 // systemstack(startTheWorldWithSema)
1402 // semrelease(&worldsema)
1404 // It is allowed to acquire worldsema once and then execute multiple
1405 // startTheWorldWithSema/stopTheWorldWithSema pairs.
1406 // Other P's are able to execute between successive calls to
1407 // startTheWorldWithSema and stopTheWorldWithSema.
1408 // Holding worldsema causes any other goroutines invoking
1409 // stopTheWorld to block.
1410 func stopTheWorldWithSema(reason stwReason) {
1411 trace := traceAcquire()
1413 trace.STWStart(reason)
1418 // If we hold a lock, then we won't be able to stop another M
1419 // that is blocked trying to acquire the lock.
1421 throw("stopTheWorld: holding locks")
1425 sched.stopwait = gomaxprocs
1426 sched.gcwaiting.Store(true)
1429 gp.m.p.ptr().status = _Pgcstop // Pgcstop is only diagnostic.
1431 // try to retake all P's in Psyscall status
1432 trace = traceAcquire()
1433 for _, pp := range allp {
1435 if s == _Psyscall && atomic.Cas(&pp.status, s, _Pgcstop) {
1437 trace.GoSysBlock(pp)
1451 pp, _ := pidleget(now)
1455 pp.status = _Pgcstop
1458 wait := sched.stopwait > 0
1461 // wait for remaining P's to stop voluntarily
1464 // wait for 100us, then try to re-preempt in case of any races
1465 if notetsleep(&sched.stopnote, 100*1000) {
1466 noteclear(&sched.stopnote)
1475 if sched.stopwait != 0 {
1476 bad = "stopTheWorld: not stopped (stopwait != 0)"
1478 for _, pp := range allp {
1479 if pp.status != _Pgcstop {
1480 bad = "stopTheWorld: not stopped (status != _Pgcstop)"
1484 if freezing.Load() {
1485 // Some other thread is panicking. This can cause the
1486 // sanity checks above to fail if the panic happens in
1487 // the signal handler on a stopped thread. Either way,
1488 // we should halt this thread.
1499 func startTheWorldWithSema() int64 {
1500 assertWorldStopped()
1502 mp := acquirem() // disable preemption because it can be holding p in a local var
1503 if netpollinited() {
1504 list, delta := netpoll(0) // non-blocking
1506 netpollAdjustWaiters(delta)
1515 p1 := procresize(procs)
1516 sched.gcwaiting.Store(false)
1517 if sched.sysmonwait.Load() {
1518 sched.sysmonwait.Store(false)
1519 notewakeup(&sched.sysmonnote)
1532 throw("startTheWorld: inconsistent mp->nextp")
1535 notewakeup(&mp.park)
1537 // Start M to run P. Do not start another M below.
1542 // Capture start-the-world time before doing clean-up tasks.
1543 startTime := nanotime()
1544 trace := traceAcquire()
1550 // Wake up an additional proc in case we have excessive runnable goroutines
1551 // in local queues or in the global queue. If we don't, the proc will park itself.
1552 // If we have lots of excessive work, resetspinning will unpark additional procs as necessary.
1560 // usesLibcall indicates whether this runtime performs system calls via libcall.
1562 func usesLibcall() bool {
1564 case "aix", "darwin", "illumos", "ios", "solaris", "windows":
1567 return GOARCH != "mips64"
1572 // mStackIsSystemAllocated indicates whether this runtime starts on a
1573 // system-allocated stack.
1574 func mStackIsSystemAllocated() bool {
1576 case "aix", "darwin", "plan9", "illumos", "ios", "solaris", "windows":
1579 return GOARCH != "mips64"
1584 // mstart is the entry-point for new Ms.
1585 // It is written in assembly, uses ABI0, is marked TOPFRAME, and calls mstart0.
1588 // mstart0 is the Go entry-point for new Ms.
1589 // This must not split the stack because we may not even have stack
1590 // bounds set up yet.
1592 // May run during STW (because it doesn't have a P yet), so write
1593 // barriers are not allowed.
1596 //go:nowritebarrierrec
1600 osStack := gp.stack.lo == 0
1602 // Initialize stack bounds from system stack.
1603 // Cgo may have left stack size in stack.hi.
1604 // minit may update the stack bounds.
1606 // Note: these bounds may not be very accurate.
1607 // We set hi to &size, but there are things above
1608 // it. The 1024 is supposed to compensate for this,
1609 // but is somewhat arbitrary.
1612 size = 16384 * sys.StackGuardMultiplier
1614 gp.stack.hi = uintptr(noescape(unsafe.Pointer(&size)))
1615 gp.stack.lo = gp.stack.hi - size + 1024
1617 // Initialize stack guard so that we can start calling regular
1619 gp.stackguard0 = gp.stack.lo + stackGuard
1620 // This is the g0, so we can also call go:systemstack
1621 // functions, which check stackguard1.
1622 gp.stackguard1 = gp.stackguard0
1625 // Exit this thread.
1626 if mStackIsSystemAllocated() {
1627 // Windows, Solaris, illumos, Darwin, AIX and Plan 9 always system-allocate
1628 // the stack, but put it in gp.stack before mstart,
1629 // so the logic above hasn't set osStack yet.
1635 // The go:noinline is to guarantee the getcallerpc/getcallersp below are safe,
1636 // so that we can set up g0.sched to return to the call of mstart1 above.
1643 throw("bad runtime·mstart")
1646 // Set up m.g0.sched as a label returning to just
1647 // after the mstart1 call in mstart0 above, for use by goexit0 and mcall.
1648 // We're never coming back to mstart1 after we call schedule,
1649 // so other calls can reuse the current frame.
1650 // And goexit0 does a gogo that needs to return from mstart1
1651 // and let mstart0 exit the thread.
1652 gp.sched.g = guintptr(unsafe.Pointer(gp))
1653 gp.sched.pc = getcallerpc()
1654 gp.sched.sp = getcallersp()
1659 // Install signal handlers; after minit so that minit can
1660 // prepare the thread to be able to handle the signals.
1665 if fn := gp.m.mstartfn; fn != nil {
1670 acquirep(gp.m.nextp.ptr())
1676 // mstartm0 implements part of mstart1 that only runs on the m0.
1678 // Write barriers are allowed here because we know the GC can't be
1679 // running yet, so they'll be no-ops.
1681 //go:yeswritebarrierrec
1683 // Create an extra M for callbacks on threads not created by Go.
1684 // An extra M is also needed on Windows for callbacks created by
1685 // syscall.NewCallback. See issue #6751 for details.
1686 if (iscgo || GOOS == "windows") && !cgoHasExtraM {
1693 // mPark causes a thread to park itself, returning once woken.
1698 notesleep(&gp.m.park)
1699 noteclear(&gp.m.park)
1702 // mexit tears down and exits the current thread.
1704 // Don't call this directly to exit the thread, since it must run at
1705 // the top of the thread stack. Instead, use gogo(&gp.m.g0.sched) to
1706 // unwind the stack to the point that exits the thread.
1708 // It is entered with m.p != nil, so write barriers are allowed. It
1709 // will release the P before exiting.
1711 //go:yeswritebarrierrec
1712 func mexit(osStack bool) {
1716 // This is the main thread. Just wedge it.
1718 // On Linux, exiting the main thread puts the process
1719 // into a non-waitable zombie state. On Plan 9,
1720 // exiting the main thread unblocks wait even though
1721 // other threads are still running. On Solaris we can
1722 // neither exitThread nor return from mstart. Other
1723 // bad things probably happen on other platforms.
1725 // We could try to clean up this M more before wedging
1726 // it, but that complicates signal handling.
1727 handoffp(releasep())
1733 throw("locked m0 woke up")
1739 // Free the gsignal stack.
1740 if mp.gsignal != nil {
1741 stackfree(mp.gsignal.stack)
1742 // On some platforms, when calling into VDSO (e.g. nanotime)
1743 // we store our g on the gsignal stack, if there is one.
1744 // Now the stack is freed, unlink it from the m, so we
1745 // won't write to it when calling VDSO code.
1749 // Remove m from allm.
1751 for pprev := &allm; *pprev != nil; pprev = &(*pprev).alllink {
1757 throw("m not found in allm")
1759 // Delay reaping m until it's done with the stack.
1761 // Put mp on the free list, though it will not be reaped while freeWait
1762 // is freeMWait. mp is no longer reachable via allm, so even if it is
1763 // on an OS stack, we must keep a reference to mp alive so that the GC
1764 // doesn't free mp while we are still using it.
1766 // Note that the free list must not be linked through alllink because
1767 // some functions walk allm without locking, so may be using alllink.
1768 mp.freeWait.Store(freeMWait)
1769 mp.freelink = sched.freem
1773 atomic.Xadd64(&ncgocall, int64(mp.ncgocall))
1776 handoffp(releasep())
1777 // After this point we must not have write barriers.
1779 // Invoke the deadlock detector. This must happen after
1780 // handoffp because it may have started a new M to take our
1787 if GOOS == "darwin" || GOOS == "ios" {
1788 // Make sure pendingPreemptSignals is correct when an M exits.
1790 if mp.signalPending.Load() != 0 {
1791 pendingPreemptSignals.Add(-1)
1795 // Destroy all allocated resources. After this is called, we may no
1796 // longer take any locks.
1800 // No more uses of mp, so it is safe to drop the reference.
1801 mp.freeWait.Store(freeMRef)
1803 // Return from mstart and let the system thread
1804 // library free the g0 stack and terminate the thread.
1808 // mstart is the thread's entry point, so there's nothing to
1809 // return to. Exit the thread directly. exitThread will clear
1810 // m.freeWait when it's done with the stack and the m can be
1812 exitThread(&mp.freeWait)
1815 // forEachP calls fn(p) for every P p when p reaches a GC safe point.
1816 // If a P is currently executing code, this will bring the P to a GC
1817 // safe point and execute fn on that P. If the P is not executing code
1818 // (it is idle or in a syscall), this will call fn(p) directly while
1819 // preventing the P from exiting its state. This does not ensure that
1820 // fn will run on every CPU executing Go code, but it acts as a global
1821 // memory barrier. GC uses this as a "ragged barrier."
1823 // The caller must hold worldsema. fn must not refer to any
1824 // part of the current goroutine's stack, since the GC may move it.
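//
// A sketch of a typical call; the wait reason and the per-P work shown here
// are placeholders, not actual runtime code:
//
//	forEachP(waitReasonXxx, func(pp *p) {
//		// Runs exactly once for every P, with pp at a GC safe point.
//		flushPerPState(pp) // hypothetical helper
//	})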
1825 func forEachP(reason waitReason, fn func(*p)) {
1826 systemstack(func() {
1828 // Mark the user stack as preemptible so that it may be scanned.
1829 // Otherwise, our attempt to force all P's to a safepoint could
1830 // result in a deadlock as we attempt to preempt a worker that's
1831 // trying to preempt us (e.g. for a stack scan).
1833 // N.B. The execution tracer is not aware of this status
1834 // transition and handles it specially based on the
1836 casGToWaiting(gp, _Grunning, reason)
1837 forEachPInternal(fn)
1838 casgstatus(gp, _Gwaiting, _Grunning)
1842 // forEachPInternal calls fn(p) for every P p when p reaches a GC safe point.
1843 // It is the internal implementation of forEachP.
1845 // The caller must hold worldsema and either must ensure that a GC is not
1846 // running (otherwise this may deadlock with the GC trying to preempt this P)
1847 // or it must leave its goroutine in a preemptible state before it switches
1848 // to the systemstack. Due to these restrictions, prefer forEachP when possible.
1851 func forEachPInternal(fn func(*p)) {
1853 pp := getg().m.p.ptr()
1856 if sched.safePointWait != 0 {
1857 throw("forEachP: sched.safePointWait != 0")
1859 sched.safePointWait = gomaxprocs - 1
1860 sched.safePointFn = fn
1862 // Ask all Ps to run the safe point function.
1863 for _, p2 := range allp {
1865 atomic.Store(&p2.runSafePointFn, 1)
1870 // Any P entering _Pidle or _Psyscall from now on will observe
1871 // p.runSafePointFn == 1 and will call runSafePointFn when
1872 // changing its status to _Pidle/_Psyscall.
1874 // Run safe point function for all idle Ps. sched.pidle will
1875 // not change because we hold sched.lock.
1876 for p := sched.pidle.ptr(); p != nil; p = p.link.ptr() {
1877 if atomic.Cas(&p.runSafePointFn, 1, 0) {
1879 sched.safePointWait--
1883 wait := sched.safePointWait > 0
1886 // Run fn for the current P.
1889 // Force Ps currently in _Psyscall into _Pidle and hand them
1890 // off to induce safe point function execution.
1891 trace := traceAcquire()
1892 for _, p2 := range allp {
1894 if s == _Psyscall && p2.runSafePointFn == 1 && atomic.Cas(&p2.status, s, _Pidle) {
1896 trace.GoSysBlock(p2)
1907 // Wait for remaining Ps to run fn.
1910 // Wait for 100us, then try to re-preempt in
1911 // case of any races.
1913 // Requires system stack.
1914 if notetsleep(&sched.safePointNote, 100*1000) {
1915 noteclear(&sched.safePointNote)
1921 if sched.safePointWait != 0 {
1922 throw("forEachP: not done")
1924 for _, p2 := range allp {
1925 if p2.runSafePointFn != 0 {
1926 throw("forEachP: P did not run fn")
1931 sched.safePointFn = nil
1936 // runSafePointFn runs the safe point function, if any, for this P.
1937 // This should be called like
1939 // if getg().m.p.runSafePointFn != 0 {
1943 // runSafePointFn must be checked on any transition in to _Pidle or
1944 // _Psyscall to avoid a race where forEachP sees that the P is running
1945 // just before the P goes into _Pidle/_Psyscall and neither forEachP
1946 // nor the P run the safe-point function.
1947 func runSafePointFn() {
1948 p := getg().m.p.ptr()
1949 // Resolve the race between forEachP running the safe-point
1950 // function on this P's behalf and this P running the
1951 // safe-point function directly.
1952 if !atomic.Cas(&p.runSafePointFn, 1, 0) {
1955 sched.safePointFn(p)
1957 sched.safePointWait--
1958 if sched.safePointWait == 0 {
1959 notewakeup(&sched.safePointNote)
1964 // When running with cgo, we call _cgo_thread_start
1965 // to start threads for us so that we can play nicely with foreign code.
1967 var cgoThreadStart unsafe.Pointer
1969 type cgothreadstart struct {
1975 // Allocate a new m unassociated with any thread.
1976 // Can use p for allocation context if needed.
1977 // fn is recorded as the new m's m.mstartfn.
1978 // id is optional pre-allocated m ID. Omit by passing -1.
1980 // This function is allowed to have write barriers even if the caller
1981 // isn't because it borrows pp.
1983 //go:yeswritebarrierrec
1984 func allocm(pp *p, fn func(), id int64) *m {
1987 // The caller owns pp, but we may borrow (i.e., acquirep) it. We must
1988 // disable preemption to ensure it is not stolen, which would make the
1989 // caller lose ownership.
1994 acquirep(pp) // temporarily borrow p for mallocs in this function
1997 // Release the free M list. We need to do this somewhere and
1998 // this may free up a stack we can use.
1999 if sched.freem != nil {
2002 for freem := sched.freem; freem != nil; {
2003 wait := freem.freeWait.Load()
2004 if wait == freeMWait {
2005 next := freem.freelink
2006 freem.freelink = newList
2011 // Free the stack if needed. For freeMRef, there is
2012 // nothing to do except drop freem from the sched.freem
2014 if wait == freeMStack {
2015 // stackfree must be on the system stack, but allocm is
2016 // reachable off the system stack transitively from
2018 systemstack(func() {
2019 stackfree(freem.g0.stack)
2022 freem = freem.freelink
2024 sched.freem = newList
2032 // In case of cgo or Solaris or illumos or Darwin, pthread_create will make us a stack.
2033 // Windows and Plan 9 will lay out the sched stack on the OS stack.
2034 if iscgo || mStackIsSystemAllocated() {
2037 mp.g0 = malg(16384 * sys.StackGuardMultiplier)
2041 if pp == gp.m.p.ptr() {
2046 allocmLock.runlock()
2050 // needm is called when a cgo callback happens on a
2051 // thread without an m (a thread not created by Go).
2052 // In this case, needm is expected to find an m to use
2053 // and return with m, g initialized correctly.
2054 // Since m and g are not set now (likely nil, but see below)
2055 // needm is limited in what routines it can call. In particular
2056 // it can only call nosplit functions (textflag 7) and cannot
2057 // do any scheduling that requires an m.
2059 // In order to avoid needing heavy lifting here, we adopt
2060 // the following strategy: there is a stack of available m's
2061 // that can be stolen. Using compare-and-swap
2062 // to pop from the stack has ABA races, so we simulate
2063 // a lock by doing an exchange (via Casuintptr) to steal the stack
2064 // head and replace the top pointer with MLOCKED (1).
2065 // This serves as a simple spin lock that we can use even
2066 // without an m. The thread that locks the stack in this way
2067 // unlocks the stack by storing a valid stack head pointer.
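// As an illustrative sketch of that locking discipline only (the real code is
// lockextra/unlockextra further down in this file, which also handles an empty
// list and backoff; "locked" stands in for the MLOCKED sentinel value 1):
//
//	for {
//		old := extraM.Load()
//		if old == locked {
//			osyield() // someone else holds the list head; spin
//			continue
//		}
//		if extraM.CompareAndSwap(old, locked) {
//			// We now own the list; pop the head ...
//			head := (*m)(unsafe.Pointer(old))
//			// ... and unlock by publishing the new head.
//			extraM.Store(uintptr(unsafe.Pointer(head.schedlink.ptr())))
//			break
//		}
//	}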
2069 // In order to make sure that there is always an m structure
2070 // available to be stolen, we maintain the invariant that there
2071 // is always one more than needed. At the beginning of the
2072 // program (if cgo is in use) the list is seeded with a single m.
2073 // If needm finds that it has taken the last m off the list, its job
2074 // is - once it has installed its own m so that it can do things like
2075 // allocate memory - to create a spare m and put it on the list.
2077 // Each of these extra m's also has a g0 and a curg that are
2078 // pressed into service as the scheduling stack and current
2079 // goroutine for the duration of the cgo callback.
2081 // It calls dropm to put the m back on the list,
2082 // 1. when the callback is done with the m, on non-pthread platforms, or
2083 // 2. when the C thread is exiting, on pthread platforms.
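// As a rough sketch (not the literal call sequence), the life of an extra m
// across one cgo callback looks like:
//
//	needm()           // C thread takes an m (with its g0/curg) from the extra list
//	cgocallbackg(...) // the Go callback runs on that m; if needextram was set,
//	                  // cgocallbackg has newextram refill the list
//	dropm()           // non-pthread platforms: return the m right away;
//	                  // pthread platforms: deferred to the thread-exit destructor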
2085 // The signal argument indicates whether we're called from a signal handler.
2089 func needm(signal bool) {
2090 if (iscgo || GOOS == "windows") && !cgoHasExtraM {
2091 // Can happen if C/C++ code calls Go from a global ctor.
2092 // Can also happen on Windows if a global ctor uses a
2093 // callback created by syscall.NewCallback. See issue #6751
2096 // Cannot throw, because the scheduler is not initialized yet.
2097 writeErrStr("fatal error: cgo callback before cgo call\n")
2101 // Save and block signals before getting an M.
2102 // The signal handler may call needm itself,
2103 // and we must avoid a deadlock. Also, once g is installed,
2104 // any incoming signals will try to execute,
2105 // but we won't have the sigaltstack settings and other data
2106 // set up appropriately until the end of minit, which will
2107 // unblock the signals. This is the same dance as when
2108 // starting a new m to run Go code via newosproc.
2113 // getExtraM is safe here because of the invariant above,
2114 // that the extra list always contains or will soon contain at least one m.
2116 mp, last := getExtraM()
2118 // Set needextram when we've just emptied the list,
2119 // so that the eventual call into cgocallbackg will
2120 // allocate a new m for the extra list. We delay the
2121 // allocation until then so that it can be done
2122 // after exitsyscall makes sure it is okay to be
2123 // running at all (that is, there's no garbage collection
2124 // running right now).
2125 mp.needextram = last
2127 // Store the original signal mask for use by minit.
2128 mp.sigmask = sigmask
2130 // Install TLS on some platforms (previously setg
2131 // would do this if necessary).
2134 // Install g (= m->g0) and set the stack bounds
2135 // to match the current stack.
2138 callbackUpdateSystemStack(mp, sp, signal)
2140 // Mark that we are already in Go now.
2141 // Otherwise, if we get a signal before cgocallbackg1, we may call needm again
2142 // while the extram list is empty, which would cause a deadlock.
2143 mp.isExtraInC = false
2145 // Initialize this thread to use the m.
2149 // mp.curg is now a real goroutine.
2150 casgstatus(mp.curg, _Gdead, _Gsyscall)
2154 // Acquire an extra m and bind it to the C thread when a pthread key has been created.
2157 func needAndBindM() {
2160 if _cgo_pthread_key_created != nil && *(*uintptr)(_cgo_pthread_key_created) != 0 {
2165 // newextram allocates m's and puts them on the extra list.
2166 // It is called with a working local m, so that it can do things
2167 // like call schedlock and allocate.
2169 c := extraMWaiters.Swap(0)
2171 for i := uint32(0); i < c; i++ {
2174 } else if extraMLength.Load() == 0 {
2175 // Make sure there is at least one extra M.
2180 // oneNewExtraM allocates an m and puts it on the extra list.
2181 func oneNewExtraM() {
2182 // Create extra goroutine locked to extra m.
2183 // The goroutine is the context in which the cgo callback will run.
2184 // The sched.pc will never be returned to, but setting it to
2185 // goexit makes clear to the traceback routines where
2186 // the goroutine stack ends.
2187 mp := allocm(nil, nil, -1)
2189 gp.sched.pc = abi.FuncPCABI0(goexit) + sys.PCQuantum
2190 gp.sched.sp = gp.stack.hi
2191 gp.sched.sp -= 4 * goarch.PtrSize // extra space in case of reads slightly beyond frame
2193 gp.sched.g = guintptr(unsafe.Pointer(gp))
2194 gp.syscallpc = gp.sched.pc
2195 gp.syscallsp = gp.sched.sp
2196 gp.stktopsp = gp.sched.sp
2197 // malg returns status as _Gidle. Change to _Gdead before
2198 // adding to allg where GC can see it. We use _Gdead to hide
2199 // this from tracebacks and stack scans since it isn't a
2200 // "real" goroutine until needm grabs it.
2201 casgstatus(gp, _Gidle, _Gdead)
2205 // mark we are in C by default.
2206 mp.isExtraInC = true
2210 gp.goid = sched.goidgen.Add(1)
2212 gp.racectx = racegostart(abi.FuncPCABIInternal(newextram) + sys.PCQuantum)
2214 trace := traceAcquire()
2216 trace.OneNewExtraM(gp)
2219 // put on allg for garbage collector
2222 // gp is now on the allg list, but we don't want it to be
2223 // counted by gcount. It would be more "proper" to increment
2224 // sched.ngfree, but that requires locking. Incrementing ngsys
2225 // has the same effect.
2228 // Add m to the extra list.
2232 // dropm puts the current m back onto the extra list.
2234 // 1. On systems without pthreads, like Windows
2235 // dropm is called when a cgo callback has called needm but is now
2236 // done with the callback and is returning into the non-Go thread.
2238 // The main expense here is the call to signalstack to release the
2239 // m's signal stack, and then the call to needm on the next callback
2240 // from this thread. It is tempting to try to save the m for next time,
2241 // which would eliminate both these costs, but there might not be
2242 // a next time: the current thread (which Go does not control) might exit.
2243 // If we saved the m for that thread, there would be an m leak each time
2244 // such a thread exited. Instead, we acquire and release an m on each
2245 // call. These should typically not be scheduling operations, just a few
2246 // atomics, so the cost should be small.
2248 // 2. On systems with pthreads
2249 // dropm is called while a non-Go thread is exiting.
2250 // We allocate a pthread per-thread variable using pthread_key_create
2251 // to register a thread-exit-time destructor, and we store the g into the
2252 // thread-specific value associated with that key when we first return to C.
2253 // The destructor then invokes dropm while the non-Go thread is exiting.
2255 // This is much faster since it avoids expensive signal-related syscalls.
2257 // This always runs without a P, so //go:nowritebarrierrec is required.
2259 // This may run with a different stack than was recorded in g0 (there is no
2260 // call to callbackUpdateSystemStack prior to dropm), so this must be
2261 // //go:nosplit to avoid the stack bounds check.
2263 //go:nowritebarrierrec
2266 // Clear m and g, and return m to the extra list.
2267 // After the call to setg we can only call nosplit functions
2268 // with no pointer manipulation.
2271 // Return mp.curg to dead state.
2272 casgstatus(mp.curg, _Gsyscall, _Gdead)
2273 mp.curg.preemptStop = false
2276 // Block signals before unminit.
2277 // Unminit unregisters the signal handling stack (but needs g on some systems).
2278 // Setg(nil) clears g, which is the signal handler's cue not to run Go handlers.
2279 // It's important not to try to handle a signal between those two steps.
2280 sigmask := mp.sigmask
2286 // Clear g0 stack bounds to ensure that needm always refreshes the
2287 // bounds when reusing this M.
2296 msigrestore(sigmask)
2299 // bindm stores the g0 of the current m into a thread-specific value.
2301 // We allocate a pthread per-thread variable using pthread_key_create,
2302 // to register a thread-exit-time destructor.
2303 // Here we set the thread-specific value of that pthread key to enable the
2304 // destructor, so that it will call dropm while the C thread is exiting.
2306 // The saved g is then used by the destructor: on some platforms the g stored
2307 // in TLS by Go may already be cleared before the destructor runs, so we
2308 // restore g from the saved value before calling dropm.
2310 // We store g0 instead of m, to make the assembly code simpler,
2311 // since we need to restore g0 in runtime.cgocallback.
2313 // On systems without pthreads, like Windows, bindm shouldn't be used.
2315 // NOTE: this always runs without a P, so //go:nowritebarrierrec is required.
2318 //go:nowritebarrierrec
2320 if GOOS == "windows" || GOOS == "plan9" {
2321 fatal("bindm in unexpected GOOS")
2325 fatal("the current g is not g0")
2327 if _cgo_bindm != nil {
2328 asmcgocall(_cgo_bindm, unsafe.Pointer(g))
2332 // A helper function for EnsureDropM.
2333 func getm() uintptr {
2334 return uintptr(unsafe.Pointer(getg().m))
2338 // Locking linked list of extra M's, via mp.schedlink. Must be accessed
2339 // only via lockextra/unlockextra.
2341 // Can't be atomic.Pointer[m] because we use an invalid pointer as a
2342 // "locked" sentinel value. M's on this list remain visible to the GC
2343 // because their mp.curg is on allgs.
2344 extraM atomic.Uintptr
2345 // Number of M's in the extraM list.
2346 extraMLength atomic.Uint32
2347 // Number of waiters in lockextra.
2348 extraMWaiters atomic.Uint32
2350 // Number of extra M's in use by threads.
2351 extraMInUse atomic.Uint32
2354 // lockextra locks the extra list and returns the list head.
2355 // The caller must unlock the list by storing a new list head
2356 // to extram. If nilokay is true, then lockextra will
2357 // return a nil list head if that's what it finds. If nilokay is false,
2358 // lockextra will keep waiting until the list head is no longer nil.
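// For example, the two callers below use it differently (a sketch only):
//
//	mp := lockextra(false)   // getExtraM: spin until there is a real head to take
//	...
//	mnext := lockextra(true) // addExtraM: an empty (nil) head is fine here
//	mp.schedlink.set(mnext)
//	unlockextra(mp, 1)       // publish mp as the new head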
2361 func lockextra(nilokay bool) *m {
2366 old := extraM.Load()
2371 if old == 0 && !nilokay {
2373 // Add 1 to the number of threads
2374 // waiting for an M.
2375 // This is cleared by newextram.
2376 extraMWaiters.Add(1)
2382 if extraM.CompareAndSwap(old, locked) {
2383 return (*m)(unsafe.Pointer(old))
2391 func unlockextra(mp *m, delta int32) {
2392 extraMLength.Add(delta)
2393 extraM.Store(uintptr(unsafe.Pointer(mp)))
2396 // Return an M from the extra M list. Returns last == true if the list becomes
2397 // empty because of this call.
2399 // Spins waiting for an extra M, so caller must ensure that the list always
2400 // contains or will soon contain at least one M.
2403 func getExtraM() (mp *m, last bool) {
2404 mp = lockextra(false)
2406 unlockextra(mp.schedlink.ptr(), -1)
2407 return mp, mp.schedlink.ptr() == nil
2410 // Returns an extra M back to the list. mp must be from getExtraM. Newly
2411 // allocated M's should use addExtraM.
2414 func putExtraM(mp *m) {
2419 // Adds a newly allocated M to the extra M list.
2422 func addExtraM(mp *m) {
2423 mnext := lockextra(true)
2424 mp.schedlink.set(mnext)
2429 // allocmLock is locked for read when creating new Ms in allocm and their
2430 // addition to allm. Thus acquiring this lock for write blocks the
2431 // creation of new Ms.
2434 // execLock serializes exec and clone to avoid bugs or unspecified
2435 // behaviour around exec'ing while creating/destroying threads. See
2440 // These errors are reported (via writeErrStr) by some OS-specific
2441 // versions of newosproc and newosproc0.
2443 failthreadcreate = "runtime: failed to create new OS thread\n"
2444 failallocatestack = "runtime: failed to allocate stack for the new OS thread\n"
2447 // newmHandoff contains a list of m structures that need new OS threads.
2448 // This is used by newm in situations where newm itself can't safely
2449 // start an OS thread.
2450 var newmHandoff struct {
2453 // newm points to a list of M structures that need new OS
2454 // threads. The list is linked through m.schedlink.
2457 // waiting indicates that wake needs to be notified when an m
2458 // is put on the list.
2462 // haveTemplateThread indicates that the templateThread has
2463 // been started. This is not protected by lock. Use cas to set during startup.
2465 haveTemplateThread uint32
2468 // Create a new m. It will start off with a call to fn, or else the scheduler.
2469 // fn needs to be static and not a heap allocated closure.
2470 // May run with m.p==nil, so write barriers are not allowed.
2472 // id is optional pre-allocated m ID. Omit by passing -1.
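// For example, startTemplateThread below passes a top-level function; a closure
// that captures locals would be heap allocated and is not allowed here
// (doWork and x are hypothetical names, shown only for contrast):
//
//	newm(templateThread, nil, -1)       // ok: static function
//	newm(func() { doWork(x) }, nil, -1) // not ok: heap-allocated closure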
2474 //go:nowritebarrierrec
2475 func newm(fn func(), pp *p, id int64) {
2476 // allocm adds a new M to allm, but they do not start until created by
2477 // the OS in newm1 or the template thread.
2479 // doAllThreadsSyscall requires that every M in allm will eventually
2480 // start and be signal-able, even with a STW.
2482 // Disable preemption here until we start the thread to ensure that
2483 // newm is not preempted between allocm and starting the new thread,
2484 // ensuring that anything added to allm is guaranteed to eventually start.
2488 mp := allocm(pp, fn, id)
2490 mp.sigmask = initSigmask
2491 if gp := getg(); gp != nil && gp.m != nil && (gp.m.lockedExt != 0 || gp.m.incgo) && GOOS != "plan9" {
2492 // We're on a locked M or a thread that may have been
2493 // started by C. The kernel state of this thread may
2494 // be strange (the user may have locked it for that
2495 // purpose). We don't want to clone that into another
2496 // thread. Instead, ask a known-good thread to create
2497 // the thread for us.
2499 // This is disabled on Plan 9. See golang.org/issue/22227.
2501 // TODO: This may be unnecessary on Windows, which
2502 // doesn't model thread creation off fork.
2503 lock(&newmHandoff.lock)
2504 if newmHandoff.haveTemplateThread == 0 {
2505 throw("on a locked thread with no template thread")
2507 mp.schedlink = newmHandoff.newm
2508 newmHandoff.newm.set(mp)
2509 if newmHandoff.waiting {
2510 newmHandoff.waiting = false
2511 notewakeup(&newmHandoff.wake)
2513 unlock(&newmHandoff.lock)
2514 // The M has not started yet, but the template thread does not
2515 // participate in STW, so it will always process queued Ms and
2516 // it is safe to releasem.
2526 var ts cgothreadstart
2527 if _cgo_thread_start == nil {
2528 throw("_cgo_thread_start missing")
2531 ts.tls = (*uint64)(unsafe.Pointer(&mp.tls[0]))
2532 ts.fn = unsafe.Pointer(abi.FuncPCABI0(mstart))
2534 msanwrite(unsafe.Pointer(&ts), unsafe.Sizeof(ts))
2537 asanwrite(unsafe.Pointer(&ts), unsafe.Sizeof(ts))
2539 execLock.rlock() // Prevent process clone.
2540 asmcgocall(_cgo_thread_start, unsafe.Pointer(&ts))
2544 execLock.rlock() // Prevent process clone.
2549 // startTemplateThread starts the template thread if it is not already running.
2552 // The calling thread must itself be in a known-good state.
2553 func startTemplateThread() {
2554 if GOARCH == "wasm" { // no threads on wasm yet
2558 // Disable preemption to guarantee that the template thread will be
2559 // created before a park once haveTemplateThread is set.
2561 if !atomic.Cas(&newmHandoff.haveTemplateThread, 0, 1) {
2565 newm(templateThread, nil, -1)
2569 // templateThread is a thread in a known-good state that exists solely
2570 // to start new threads in known-good states when the calling thread
2571 // may not be in a good state.
2573 // Many programs never need this, so templateThread is started lazily
2574 // when we first enter a state that might lead to running on a thread
2575 // in an unknown state.
2577 // templateThread runs on an M without a P, so it must not have write barriers.
2580 //go:nowritebarrierrec
2581 func templateThread() {
2588 lock(&newmHandoff.lock)
2589 for newmHandoff.newm != 0 {
2590 newm := newmHandoff.newm.ptr()
2591 newmHandoff.newm = 0
2592 unlock(&newmHandoff.lock)
2594 next := newm.schedlink.ptr()
2599 lock(&newmHandoff.lock)
2601 newmHandoff.waiting = true
2602 noteclear(&newmHandoff.wake)
2603 unlock(&newmHandoff.lock)
2604 notesleep(&newmHandoff.wake)
2608 // Stops execution of the current m until new work is available.
2609 // Returns with acquired P.
2613 if gp.m.locks != 0 {
2614 throw("stopm holding locks")
2617 throw("stopm holding p")
2620 throw("stopm spinning")
2627 acquirep(gp.m.nextp.ptr())
2632 // startm's caller incremented nmspinning. Set the new M's spinning.
2633 getg().m.spinning = true
2636 // Schedules some M to run the p (creates an M if necessary).
2637 // If p==nil, tries to get an idle P; if there are no idle P's, it does nothing.
2638 // May run with m.p==nil, so write barriers are not allowed.
2639 // If spinning is set, the caller has incremented nmspinning and must provide a
2640 // P. startm will set m.spinning in the newly started M.
2642 // Callers passing a non-nil P must call from a non-preemptible context. See
2643 // comment on acquirem below.
2645 // Argument lockheld indicates whether the caller already acquired the
2646 // scheduler lock. Callers holding the lock when making the call must pass
2647 // true. The lock might be temporarily dropped, but will be reacquired before returning.
2650 // Must not have write barriers because this may be called without a P.
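// A sketch of that contract as used by wakep and handoffp below: the caller
// bumps nmspinning first and supplies a P when asking for a spinning M:
//
//	if sched.nmspinning.CompareAndSwap(0, 1) {
//		startm(pp, true, false) // startm sets m.spinning on the M it starts
//	}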
2652 //go:nowritebarrierrec
2653 func startm(pp *p, spinning, lockheld bool) {
2654 // Disable preemption.
2656 // Every owned P must have an owner that will eventually stop it in the
2657 // event of a GC stop request. startm takes transient ownership of a P
2658 // (either from argument or pidleget below) and transfers ownership to
2659 // a started M, which will be responsible for performing the stop.
2661 // Preemption must be disabled during this transient ownership,
2662 // otherwise the P this is running on may enter GC stop while still
2663 // holding the transient P, leaving that P in limbo and deadlocking the STW.
2666 // Callers passing a non-nil P must already be in non-preemptible
2667 // context, otherwise such preemption could occur on function entry to
2668 // startm. Callers passing a nil P may be preemptible, so we must
2669 // disable preemption before acquiring a P from pidleget below.
2676 // TODO(prattmic): All remaining calls to this function
2677 // with _p_ == nil could be cleaned up to find a P
2678 // before calling startm.
2679 throw("startm: P required for spinning=true")
2692 // No M is available, we must drop sched.lock and call newm.
2693 // However, we already own a P to assign to the M.
2695 // Once sched.lock is released, another G (e.g., in a syscall),
2696 // could find no idle P while checkdead finds a runnable G but
2697 // no running M's because this new M hasn't started yet, thus
2698 // throwing in an apparent deadlock.
2699 // This apparent deadlock is possible when startm is called
2700 // from sysmon, which doesn't count as a running M.
2702 // Avoid this situation by pre-allocating the ID for the new M,
2703 // thus marking it as 'running' before we drop sched.lock. This
2704 // new M will eventually run the scheduler to execute any queued G's.
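// A sketch of that pattern (mReserveID is the helper used elsewhere in this
// file to pre-allocate M IDs):
//
//	// still holding sched.lock
//	id := mReserveID() // the new M now counts as created
//	unlock(&sched.lock)
//	newm(fn, pp, id)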
2711 // The caller incremented nmspinning, so set m.spinning in the new M.
2719 // Ownership transfer of pp committed by start in newm.
2720 // Preemption is now safe.
2728 throw("startm: m is spinning")
2731 throw("startm: m has p")
2733 if spinning && !runqempty(pp) {
2734 throw("startm: p has runnable gs")
2736 // The caller incremented nmspinning, so set m.spinning in the new M.
2737 nmp.spinning = spinning
2739 notewakeup(&nmp.park)
2740 // Ownership transfer of pp committed by wakeup. Preemption is now safe.
2745 // Hands off P from syscall or locked M.
2746 // Always runs without a P, so write barriers are not allowed.
2748 //go:nowritebarrierrec
2749 func handoffp(pp *p) {
2750 // handoffp must start an M in any situation where
2751 // findrunnable would return a G to run on pp.
2753 // if it has local work, start it straight away
2754 if !runqempty(pp) || sched.runqsize != 0 {
2755 startm(pp, false, false)
2758 // if there's trace work to do, start it straight away
2759 if (traceEnabled() || traceShuttingDown()) && traceReaderAvailable() != nil {
2760 startm(pp, false, false)
2763 // if it has GC work, start it straight away
2764 if gcBlackenEnabled != 0 && gcMarkWorkAvailable(pp) {
2765 startm(pp, false, false)
2768 // no local work, check that there are no spinning/idle M's,
2769 // otherwise our help is not required
2770 if sched.nmspinning.Load()+sched.npidle.Load() == 0 && sched.nmspinning.CompareAndSwap(0, 1) { // TODO: fast atomic
2771 sched.needspinning.Store(0)
2772 startm(pp, true, false)
2776 if sched.gcwaiting.Load() {
2777 pp.status = _Pgcstop
2779 if sched.stopwait == 0 {
2780 notewakeup(&sched.stopnote)
2785 if pp.runSafePointFn != 0 && atomic.Cas(&pp.runSafePointFn, 1, 0) {
2786 sched.safePointFn(pp)
2787 sched.safePointWait--
2788 if sched.safePointWait == 0 {
2789 notewakeup(&sched.safePointNote)
2792 if sched.runqsize != 0 {
2794 startm(pp, false, false)
2797 // If this is the last running P and nobody is polling network,
2798 // we need to wake up another M to poll the network.
2799 if sched.npidle.Load() == gomaxprocs-1 && sched.lastpoll.Load() != 0 {
2801 startm(pp, false, false)
2805 // The scheduler lock cannot be held when calling wakeNetPoller below
2806 // because wakeNetPoller may call wakep which may call startm.
2807 when := nobarrierWakeTime(pp)
2816 // Tries to add one more P to execute G's.
2817 // Called when a G is made runnable (newproc, ready).
2818 // Must be called with a P.
2820 // Be conservative about spinning threads, only start one if none exist already.
2822 if sched.nmspinning.Load() != 0 || !sched.nmspinning.CompareAndSwap(0, 1) {
2826 // Disable preemption until ownership of pp transfers to the next M in
2827 // startm. Otherwise preemption here would leave pp stuck waiting to enter _Pgcstop.
2830 // See preemption comment on acquirem in startm for more details.
2835 pp, _ = pidlegetSpinning(0)
2837 if sched.nmspinning.Add(-1) < 0 {
2838 throw("wakep: negative nmspinning")
2844 // Since we always have a P, the race in the "No M is available"
2845 // comment in startm doesn't apply during the small window between the
2846 // unlock here and lock in startm. A checkdead in between will always
2847 // see at least one running M (ours).
2850 startm(pp, true, false)
2855 // Stops execution of the current m that is locked to a g until the g is runnable again.
2856 // Returns with acquired P.
2857 func stoplockedm() {
2860 if gp.m.lockedg == 0 || gp.m.lockedg.ptr().lockedm.ptr() != gp.m {
2861 throw("stoplockedm: inconsistent locking")
2864 // Schedule another M to run this p.
2869 // Wait until another thread schedules lockedg again.
2871 status := readgstatus(gp.m.lockedg.ptr())
2872 if status&^_Gscan != _Grunnable {
2873 print("runtime:stoplockedm: lockedg (atomicstatus=", status, ") is not Grunnable or Gscanrunnable\n")
2874 dumpgstatus(gp.m.lockedg.ptr())
2875 throw("stoplockedm: not runnable")
2877 acquirep(gp.m.nextp.ptr())
2881 // Schedules the locked m to run the locked gp.
2882 // May run during STW, so write barriers are not allowed.
2884 //go:nowritebarrierrec
2885 func startlockedm(gp *g) {
2886 mp := gp.lockedm.ptr()
2888 throw("startlockedm: locked to me")
2891 throw("startlockedm: m has p")
2893 // directly handoff current P to the locked m
2897 notewakeup(&mp.park)
2901 // Stops the current m for stopTheWorld.
2902 // Returns when the world is restarted.
2906 if !sched.gcwaiting.Load() {
2907 throw("gcstopm: not waiting for gc")
2910 gp.m.spinning = false
2911 // OK to just drop nmspinning here,
2912 // startTheWorld will unpark threads as necessary.
2913 if sched.nmspinning.Add(-1) < 0 {
2914 throw("gcstopm: negative nmspinning")
2919 pp.status = _Pgcstop
2921 if sched.stopwait == 0 {
2922 notewakeup(&sched.stopnote)
2928 // Schedules gp to run on the current M.
2929 // If inheritTime is true, gp inherits the remaining time in the
2930 // current time slice. Otherwise, it starts a new time slice.
2933 // Write barriers are allowed because this is called immediately after
2934 // acquiring a P in several places.
2936 //go:yeswritebarrierrec
2937 func execute(gp *g, inheritTime bool) {
2940 if goroutineProfile.active {
2941 // Make sure that gp has had its stack written out to the goroutine
2942 // profile, exactly as it was when the goroutine profiler first stopped the world.
2944 tryRecordGoroutineProfile(gp, osyield)
2947 // Assign gp.m before entering _Grunning so running Gs have an M.
2951 casgstatus(gp, _Grunnable, _Grunning)
2954 gp.stackguard0 = gp.stack.lo + stackGuard
2956 mp.p.ptr().schedtick++
2959 // Check whether the profiler needs to be turned on or off.
2960 hz := sched.profilehz
2961 if mp.profilehz != hz {
2962 setThreadCPUProfiler(hz)
2965 trace := traceAcquire()
2967 // GoSysExit has to happen when we have a P, but before GoStart.
2968 // So we emit it here.
2969 if gp.syscallsp != 0 {
2979 // Finds a runnable goroutine to execute.
2980 // Tries to steal from other P's, get g from local or global queue, poll network.
2981 // tryWakeP indicates that the returned goroutine is not normal (GC worker, trace
2982 // reader) so the caller should try to wake a P.
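// A sketch of how schedule (further below) consumes these results:
//
//	gp, inheritTime, tryWakeP := findRunnable() // blocks until work is available
//	if tryWakeP {
//		wakep() // e.g. a GC worker or trace reader was returned
//	}
//	execute(gp, inheritTime)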
2983 func findRunnable() (gp *g, inheritTime, tryWakeP bool) {
2986 // The conditions here and in handoffp must agree: if
2987 // findrunnable would return a G to run, handoffp must start an M.
2992 if sched.gcwaiting.Load() {
2996 if pp.runSafePointFn != 0 {
3000 // now and pollUntil are saved for work stealing later,
3001 // which may steal timers. It's important that between now
3002 // and then, nothing blocks, so these numbers remain mostly relevant.
3004 now, pollUntil, _ := checkTimers(pp, 0)
3006 // Try to schedule the trace reader.
3007 if traceEnabled() || traceShuttingDown() {
3010 trace := traceAcquire()
3011 casgstatus(gp, _Gwaiting, _Grunnable)
3013 trace.GoUnpark(gp, 0)
3016 return gp, false, true
3020 // Try to schedule a GC worker.
3021 if gcBlackenEnabled != 0 {
3022 gp, tnow := gcController.findRunnableGCWorker(pp, now)
3024 return gp, false, true
3029 // Check the global runnable queue once in a while to ensure fairness.
3030 // Otherwise two goroutines can completely occupy the local runqueue
3031 // by constantly respawning each other.
3032 if pp.schedtick%61 == 0 && sched.runqsize > 0 {
3034 gp := globrunqget(pp, 1)
3037 return gp, false, false
3041 // Wake up the finalizer G.
3042 if fingStatus.Load()&(fingWait|fingWake) == fingWait|fingWake {
3043 if gp := wakefing(); gp != nil {
3047 if *cgo_yield != nil {
3048 asmcgocall(*cgo_yield, nil)
3052 if gp, inheritTime := runqget(pp); gp != nil {
3053 return gp, inheritTime, false
3057 if sched.runqsize != 0 {
3059 gp := globrunqget(pp, 0)
3062 return gp, false, false
3067 // This netpoll is only an optimization before we resort to stealing.
3068 // We can safely skip it if there are no waiters or a thread is blocked
3069 // in netpoll already. If there is any kind of logical race with that
3070 // blocked thread (e.g. it has already returned from netpoll, but does
3071 // not set lastpoll yet), this thread will do blocking netpoll below anyway.
3073 if netpollinited() && netpollAnyWaiters() && sched.lastpoll.Load() != 0 {
3074 if list, delta := netpoll(0); !list.empty() { // non-blocking
3077 netpollAdjustWaiters(delta)
3078 trace := traceAcquire()
3079 casgstatus(gp, _Gwaiting, _Grunnable)
3081 trace.GoUnpark(gp, 0)
3084 return gp, false, false
3088 // Spinning Ms: steal work from other Ps.
3090 // Limit the number of spinning Ms to half the number of busy Ps.
3091 // This is necessary to prevent excessive CPU consumption when
3092 // GOMAXPROCS>>1 but the program parallelism is low.
3093 if mp.spinning || 2*sched.nmspinning.Load() < gomaxprocs-sched.npidle.Load() {
3098 gp, inheritTime, tnow, w, newWork := stealWork(now)
3100 // Successfully stole.
3101 return gp, inheritTime, false
3104 // There may be new timer or GC work; restart to discover it.
3110 if w != 0 && (pollUntil == 0 || w < pollUntil) {
3111 // Earlier timer to wait for.
3116 // We have nothing to do.
3118 // If we're in the GC mark phase, can safely scan and blacken objects,
3119 // and have work to do, run idle-time marking rather than give up the P.
3120 if gcBlackenEnabled != 0 && gcMarkWorkAvailable(pp) && gcController.addIdleMarkWorker() {
3121 node := (*gcBgMarkWorkerNode)(gcBgMarkWorkerPool.pop())
3123 pp.gcMarkWorkerMode = gcMarkWorkerIdleMode
3126 trace := traceAcquire()
3127 casgstatus(gp, _Gwaiting, _Grunnable)
3129 trace.GoUnpark(gp, 0)
3132 return gp, false, false
3134 gcController.removeIdleMarkWorker()
3138 // If a callback returned and no other goroutine is awake,
3139 // then wake the event handler goroutine, which pauses execution
3140 // until a callback is triggered.
3141 gp, otherReady := beforeIdle(now, pollUntil)
3143 trace := traceAcquire()
3144 casgstatus(gp, _Gwaiting, _Grunnable)
3146 trace.GoUnpark(gp, 0)
3149 return gp, false, false
3155 // Before we drop our P, make a snapshot of the allp slice,
3156 // which can change underfoot once we no longer block
3157 // safe-points. We don't need to snapshot the contents because
3158 // everything up to cap(allp) is immutable.
3159 allpSnapshot := allp
3160 // Also snapshot masks. Value changes are OK, but we can't allow
3161 // len to change out from under us.
3162 idlepMaskSnapshot := idlepMask
3163 timerpMaskSnapshot := timerpMask
3165 // return P and block
3167 if sched.gcwaiting.Load() || pp.runSafePointFn != 0 {
3171 if sched.runqsize != 0 {
3172 gp := globrunqget(pp, 0)
3174 return gp, false, false
3176 if !mp.spinning && sched.needspinning.Load() == 1 {
3177 // See "Delicate dance" comment below.
3182 if releasep() != pp {
3183 throw("findrunnable: wrong p")
3185 now = pidleput(pp, now)
3188 // Delicate dance: thread transitions from spinning to non-spinning
3189 // state, potentially concurrently with submission of new work. We must
3190 // drop nmspinning first and then check all sources again (with
3191 // #StoreLoad memory barrier in between). If we do it the other way
3192 // around, another thread can submit work after we've checked all
3193 // sources but before we drop nmspinning; as a result nobody will
3194 // unpark a thread to run the work.
3196 // This applies to the following sources of work:
3198 // * Goroutines added to the global or a per-P run queue.
3199 // * New/modified-earlier timers on a per-P timer heap.
3200 // * Idle-priority GC work (barring golang.org/issue/19112).
3202 // If we discover new work below, we need to restore m.spinning as a
3203 // signal for resetspinning to unpark a new worker thread (because
3204 // there can be more than one starving goroutine).
3206 // However, if after discovering new work we also observe no idle Ps
3207 // (either here or in resetspinning), we have a problem. We may be
3208 // racing with a non-spinning M in the block above, having found no
3209 // work and preparing to release its P and park. Allowing that P to go
3210 // idle will result in loss of work conservation (idle P while there is
3211 // runnable work). This could result in complete deadlock in the
3212 // unlikely event that we discover new work (from netpoll) right as we
3213 // are racing with _all_ other Ps going idle.
3215 // We use sched.needspinning to synchronize with non-spinning Ms going
3216 // idle. If needspinning is set when they are about to drop their P,
3217 // they abort the drop and instead become a new spinning M on our
3218 // behalf. If we are not racing and the system is truly fully loaded
3219 // then no spinning threads are required, and the next thread to
3220 // naturally become spinning will clear the flag.
3222 // Also see "Worker thread parking/unparking" comment at the top of the
3224 wasSpinning := mp.spinning
3227 if sched.nmspinning.Add(-1) < 0 {
3228 throw("findrunnable: negative nmspinning")
3231 // Note that for correctness, only the last M transitioning from
3232 // spinning to non-spinning must perform these rechecks to
3233 // ensure no missed work. However, the runtime has some cases
3234 // of transient increments of nmspinning that are decremented
3235 // without going through this path, so we must be conservative
3236 // and perform the check on all spinning Ms.
3238 // See https://go.dev/issue/43997.
3240 // Check global and P runqueues again.
3243 if sched.runqsize != 0 {
3244 pp, _ := pidlegetSpinning(0)
3246 gp := globrunqget(pp, 0)
3248 throw("global runq empty with non-zero runqsize")
3253 return gp, false, false
3258 pp := checkRunqsNoP(allpSnapshot, idlepMaskSnapshot)
3265 // Check for idle-priority GC work again.
3266 pp, gp := checkIdleGCNoP()
3271 // Run the idle worker.
3272 pp.gcMarkWorkerMode = gcMarkWorkerIdleMode
3273 trace := traceAcquire()
3274 casgstatus(gp, _Gwaiting, _Grunnable)
3276 trace.GoUnpark(gp, 0)
3279 return gp, false, false
3282 // Finally, check for timer creation or expiry concurrently with
3283 // transitioning from spinning to non-spinning.
3285 // Note that we cannot use checkTimers here because it calls
3286 // adjusttimers which may need to allocate memory, and that isn't
3287 // allowed when we don't have an active P.
3288 pollUntil = checkTimersNoP(allpSnapshot, timerpMaskSnapshot, pollUntil)
3291 // Poll network until next timer.
3292 if netpollinited() && (netpollAnyWaiters() || pollUntil != 0) && sched.lastpoll.Swap(0) != 0 {
3293 sched.pollUntil.Store(pollUntil)
3295 throw("findrunnable: netpoll with p")
3298 throw("findrunnable: netpoll with spinning")
3305 delay = pollUntil - now
3311 // When using fake time, just poll.
3314 list, delta := netpoll(delay) // block until new work is available
3315 // Refresh now again, after potentially blocking.
3317 sched.pollUntil.Store(0)
3318 sched.lastpoll.Store(now)
3319 if faketime != 0 && list.empty() {
3320 // Using fake time and nothing is ready; stop M.
3321 // When all M's stop, checkdead will call timejump.
3326 pp, _ := pidleget(now)
3330 netpollAdjustWaiters(delta)
3336 netpollAdjustWaiters(delta)
3337 trace := traceAcquire()
3338 casgstatus(gp, _Gwaiting, _Grunnable)
3340 trace.GoUnpark(gp, 0)
3343 return gp, false, false
3350 } else if pollUntil != 0 && netpollinited() {
3351 pollerPollUntil := sched.pollUntil.Load()
3352 if pollerPollUntil == 0 || pollerPollUntil > pollUntil {
3360 // pollWork reports whether there is non-background work this P could
3361 // be doing. This is a fairly lightweight check to be used for
3362 // background work loops, like idle GC. It checks a subset of the
3363 // conditions checked by the actual scheduler.
3364 func pollWork() bool {
3365 if sched.runqsize != 0 {
3368 p := getg().m.p.ptr()
3372 if netpollinited() && netpollAnyWaiters() && sched.lastpoll.Load() != 0 {
3373 if list, delta := netpoll(0); !list.empty() {
3375 netpollAdjustWaiters(delta)
3382 // stealWork attempts to steal a runnable goroutine or timer from any P.
3384 // If newWork is true, new work may have been readied.
3386 // If now is not 0 it is the current time. stealWork returns the passed time or
3387 // the current time if now was passed as 0.
3388 func stealWork(now int64) (gp *g, inheritTime bool, rnow, pollUntil int64, newWork bool) {
3389 pp := getg().m.p.ptr()
3393 const stealTries = 4
3394 for i := 0; i < stealTries; i++ {
3395 stealTimersOrRunNextG := i == stealTries-1
3397 for enum := stealOrder.start(fastrand()); !enum.done(); enum.next() {
3398 if sched.gcwaiting.Load() {
3399 // GC work may be available.
3400 return nil, false, now, pollUntil, true
3402 p2 := allp[enum.position()]
3407 // Steal timers from p2. This call to checkTimers is the only place
3408 // where we might hold a lock on a different P's timers. We do this
3409 // once on the last pass before checking runnext because stealing
3410 // from the other P's runnext should be the last resort, so if there
3411 // are timers to steal do that first.
3413 // We only check timers on one of the stealing iterations because
3414 // the time stored in now doesn't change in this loop and checking
3415 // the timers for each P more than once with the same value of now
3416 // is probably a waste of time.
3418 // timerpMask tells us whether the P may have timers at all. If it
3419 // can't, no need to check at all.
3420 if stealTimersOrRunNextG && timerpMask.read(enum.position()) {
3421 tnow, w, ran := checkTimers(p2, now)
3423 if w != 0 && (pollUntil == 0 || w < pollUntil) {
3427 // Running the timers may have
3428 // made an arbitrary number of G's
3429 // ready and added them to this P's
3430 // local run queue. That invalidates
3431 // the assumption of runqsteal
3432 // that it always has room to add
3433 // stolen G's. So check now if there
3434 // is a local G to run.
3435 if gp, inheritTime := runqget(pp); gp != nil {
3436 return gp, inheritTime, now, pollUntil, ranTimer
3442 // Don't bother to attempt to steal if p2 is idle.
3443 if !idlepMask.read(enum.position()) {
3444 if gp := runqsteal(pp, p2, stealTimersOrRunNextG); gp != nil {
3445 return gp, false, now, pollUntil, ranTimer
3451 // No goroutines found to steal. Regardless, running a timer may have
3452 // made some goroutine ready that we missed. Indicate the next timer to wait for.
3454 return nil, false, now, pollUntil, ranTimer
3457 // Check all Ps for a runnable G to steal.
3459 // On entry we have no P. If a G is available to steal and a P is available,
3460 // the P is returned which the caller should acquire and attempt to steal the work to.
3462 func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p {
3463 for id, p2 := range allpSnapshot {
3464 if !idlepMaskSnapshot.read(uint32(id)) && !runqempty(p2) {
3466 pp, _ := pidlegetSpinning(0)
3468 // Can't get a P, don't bother checking remaining Ps.
3477 // No work available.
3481 // Check all Ps for a timer expiring sooner than pollUntil.
3483 // Returns updated pollUntil value.
3484 func checkTimersNoP(allpSnapshot []*p, timerpMaskSnapshot pMask, pollUntil int64) int64 {
3485 for id, p2 := range allpSnapshot {
3486 if timerpMaskSnapshot.read(uint32(id)) {
3487 w := nobarrierWakeTime(p2)
3488 if w != 0 && (pollUntil == 0 || w < pollUntil) {
3497 // Check for idle-priority GC, without a P on entry.
3499 // If some GC work, a P, and a worker G are all available, the P and G will be
3500 // returned. The returned P has not been wired yet.
3501 func checkIdleGCNoP() (*p, *g) {
3502 // N.B. Since we have no P, gcBlackenEnabled may change at any time; we
3503 // must check again after acquiring a P. As an optimization, we also check
3504 // if an idle mark worker is needed at all. This is OK here, because if we
3505 // observe that one isn't needed, at least one is currently running. Even if
3506 // it stops running, its own journey into the scheduler should schedule it
3507 // again, if need be (at which point, this check will pass, if relevant).
3508 if atomic.Load(&gcBlackenEnabled) == 0 || !gcController.needIdleMarkWorker() {
3511 if !gcMarkWorkAvailable(nil) {
3515 // Work is available; we can start an idle GC worker only if there is
3516 // an available P and available worker G.
3518 // We can attempt to acquire these in either order, though both have
3519 // synchronization concerns (see below). Workers are almost always
3520 // available (see comment in findRunnableGCWorker for the one case
3521 // there may be none). Since we're slightly less likely to find a P,
3522 // check for that first.
3524 // Synchronization: note that we must hold sched.lock until we are
3525 // committed to keeping it. Otherwise we cannot put the unnecessary P
3526 // back in sched.pidle without performing the full set of idle
3527 // transition checks.
3529 // If we were to check gcBgMarkWorkerPool first, we must somehow handle
3530 // the assumption in gcControllerState.findRunnableGCWorker that an
3531 // empty gcBgMarkWorkerPool is only possible if gcMarkDone is running.
3533 pp, now := pidlegetSpinning(0)
3539 // Now that we own a P, gcBlackenEnabled can't change (as it requires STW).
3540 if gcBlackenEnabled == 0 || !gcController.addIdleMarkWorker() {
3546 node := (*gcBgMarkWorkerNode)(gcBgMarkWorkerPool.pop())
3550 gcController.removeIdleMarkWorker()
3556 return pp, node.gp.ptr()
3559 // wakeNetPoller wakes up the thread sleeping in the network poller if it isn't
3560 // going to wake up before the when argument; or it wakes an idle P to service
3561 // timers and the network poller if there isn't one already.
3562 func wakeNetPoller(when int64) {
3563 if sched.lastpoll.Load() == 0 {
3564 // In findrunnable we ensure that when polling the pollUntil
3565 // field is either zero or the time to which the current
3566 // poll is expected to run. This can have a spurious wakeup
3567 // but should never miss a wakeup.
3568 pollerPollUntil := sched.pollUntil.Load()
3569 if pollerPollUntil == 0 || pollerPollUntil > when {
3573 // There are no threads in the network poller, try to get
3574 // one there so it can handle new timers.
3575 if GOOS != "plan9" { // Temporary workaround - see issue #42303.
3581 func resetspinning() {
3584 throw("resetspinning: not a spinning m")
3586 gp.m.spinning = false
3587 nmspinning := sched.nmspinning.Add(-1)
3589 throw("findrunnable: negative nmspinning")
3591 // M wakeup policy is deliberately somewhat conservative, so check if we
3592 // need to wake up another P here. See "Worker thread parking/unparking"
3593 // comment at the top of the file for details.
3597 // injectglist adds each runnable G on the list to some run queue,
3598 // and clears glist. If there is no current P, they are added to the
3599 // global queue, and up to npidle M's are started to run them.
3600 // Otherwise, for each idle P, this adds a G to the global queue
3601 // and starts an M. Any remaining G's are added to the current P's
3603 // This may temporarily acquire sched.lock.
3604 // Can run concurrently with GC.
3605 func injectglist(glist *gList) {
3609 trace := traceAcquire()
3611 for gp := glist.head.ptr(); gp != nil; gp = gp.schedlink.ptr() {
3612 trace.GoUnpark(gp, 0)
3617 // Mark all the goroutines as runnable before we put them
3618 // on the run queues.
3619 head := glist.head.ptr()
3622 for gp := head; gp != nil; gp = gp.schedlink.ptr() {
3625 casgstatus(gp, _Gwaiting, _Grunnable)
3628 // Turn the gList into a gQueue.
3634 startIdle := func(n int) {
3635 for i := 0; i < n; i++ {
3636 mp := acquirem() // See comment in startm.
3639 pp, _ := pidlegetSpinning(0)
3646 startm(pp, false, true)
3652 pp := getg().m.p.ptr()
3655 globrunqputbatch(&q, int32(qsize))
3661 npidle := int(sched.npidle.Load())
3664 for n = 0; n < npidle && !q.empty(); n++ {
3670 globrunqputbatch(&globq, int32(n))
3677 runqputbatch(pp, &q, qsize)
3681 // One round of scheduler: find a runnable goroutine and execute it.
3687 throw("schedule: holding locks")
3690 if mp.lockedg != 0 {
3692 execute(mp.lockedg.ptr(), false) // Never returns.
3695 // We should not schedule away from a g that is executing a cgo call,
3696 // since the cgo call is using the m's g0 stack.
3698 throw("schedule: in cgo")
3705 // Safety check: if we are spinning, the run queue should be empty.
3706 // Check this before calling checkTimers, as that might call
3707 // goready to put a ready goroutine on the local run queue.
3708 if mp.spinning && (pp.runnext != 0 || pp.runqhead != pp.runqtail) {
3709 throw("schedule: spinning with local work")
3712 gp, inheritTime, tryWakeP := findRunnable() // blocks until work is available
3714 if debug.dontfreezetheworld > 0 && freezing.Load() {
3715 // See comment in freezetheworld. We don't want to perturb
3716 // scheduler state, so we didn't gcstopm in findRunnable, but
3717 // also don't want to allow new goroutines to run.
3719 // Deadlock here rather than in the findRunnable loop so if
3720 // findRunnable is stuck in a loop we don't perturb that either.
3726 // This thread is going to run a goroutine and is not spinning anymore,
3727 // so if it was marked as spinning we need to reset it now and potentially
3728 // start a new spinning M.
3733 if sched.disable.user && !schedEnabled(gp) {
3734 // Scheduling of this goroutine is disabled. Put it on
3735 // the list of pending runnable goroutines for when we
3736 // re-enable user scheduling and look again.
3738 if schedEnabled(gp) {
3739 // Something re-enabled scheduling while we
3740 // were acquiring the lock.
3743 sched.disable.runnable.pushBack(gp)
3750 // If about to schedule a not-normal goroutine (a GCworker or tracereader),
3751 // wake a P if there is one.
3755 if gp.lockedm != 0 {
3756 // Hands off own p to the locked m,
3757 // then blocks waiting for a new p.
3762 execute(gp, inheritTime)
3765 // dropg removes the association between m and the current goroutine m->curg (gp for short).
3766 // Typically a caller sets gp's status away from Grunning and then
3767 // immediately calls dropg to finish the job. The caller is also responsible
3768 // for arranging that gp will be restarted using ready at an
3769 // appropriate time. After calling dropg and arranging for gp to be
3770 // readied later, the caller can do other work but eventually should
3771 // call schedule to restart the scheduling of goroutines on this m.
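// A sketch of the typical pattern, as in park_m below (waitreason and trace
// handling omitted):
//
//	casgstatus(gp, _Grunning, _Gwaiting)
//	dropg()
//	// ... arrange for ready(gp) to be called once gp may run again ...
//	schedule()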
3775 setMNoWB(&gp.m.curg.m, nil)
3776 setGNoWB(&gp.m.curg, nil)
3779 // checkTimers runs any timers for the P that are ready.
3780 // If now is not 0 it is the current time.
3781 // It returns the passed time or the current time if now was passed as 0,
3782 // and the time when the next timer should run or 0 if there is no next timer,
3783 // and reports whether it ran any timers.
3784 // If the time when the next timer should run is not 0,
3785 // it is always larger than the returned time.
3786 // We pass now in and out to avoid extra calls of nanotime.
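// For example, findRunnable above calls it as
//
//	now, pollUntil, _ := checkTimers(pp, 0) // 0: let checkTimers read the clock
//
// and then reuses the returned now instead of calling nanotime again.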
3788 //go:yeswritebarrierrec
3789 func checkTimers(pp *p, now int64) (rnow, pollUntil int64, ran bool) {
3790 // If it's not yet time for the first timer, or the first adjusted
3791 // timer, then there is nothing to do.
3792 next := pp.timer0When.Load()
3793 nextAdj := pp.timerModifiedEarliest.Load()
3794 if next == 0 || (nextAdj != 0 && nextAdj < next) {
3799 // No timers to run or adjust.
3800 return now, 0, false
3807 // Next timer is not ready to run, but keep going
3808 // if we would clear deleted timers.
3809 // This corresponds to the condition below where
3810 // we decide whether to call clearDeletedTimers.
3811 if pp != getg().m.p.ptr() || int(pp.deletedTimers.Load()) <= int(pp.numTimers.Load()/4) {
3812 return now, next, false
3816 lock(&pp.timersLock)
3818 if len(pp.timers) > 0 {
3819 adjusttimers(pp, now)
3820 for len(pp.timers) > 0 {
3821 // Note that runtimer may temporarily unlock pp.timersLock.
3823 if tw := runtimer(pp, now); tw != 0 {
3833 // If this is the local P, and there are a lot of deleted timers,
3834 // clear them out. We only do this for the local P to reduce
3835 // lock contention on timersLock.
3836 if pp == getg().m.p.ptr() && int(pp.deletedTimers.Load()) > len(pp.timers)/4 {
3837 clearDeletedTimers(pp)
3840 unlock(&pp.timersLock)
3842 return now, pollUntil, ran
3845 func parkunlock_c(gp *g, lock unsafe.Pointer) bool {
3846 unlock((*mutex)(lock))
3850 // park continuation on g0.
3851 func park_m(gp *g) {
3854 trace := traceAcquire()
3856 // N.B. Not using casGToWaiting here because the waitreason is
3857 // set by park_m's caller.
3858 casgstatus(gp, _Grunning, _Gwaiting)
3860 trace.GoPark(mp.waitTraceBlockReason, mp.waitTraceSkip)
3866 if fn := mp.waitunlockf; fn != nil {
3867 ok := fn(gp, mp.waitlock)
3868 mp.waitunlockf = nil
3871 trace := traceAcquire()
3872 casgstatus(gp, _Gwaiting, _Grunnable)
3874 trace.GoUnpark(gp, 2)
3877 execute(gp, true) // Schedule it back, never returns.
3883 func goschedImpl(gp *g, preempted bool) {
3884 trace := traceAcquire()
3885 status := readgstatus(gp)
3886 if status&^_Gscan != _Grunning {
3888 throw("bad g status")
3890 casgstatus(gp, _Grunning, _Grunnable)
3912 // Gosched continuation on g0.
3913 func gosched_m(gp *g) {
3914 goschedImpl(gp, false)
3917 // goschedguarded is a forbidden-states-avoided version of gosched_m.
3918 func goschedguarded_m(gp *g) {
3919 if !canPreemptM(gp.m) {
3920 gogo(&gp.sched) // never return
3922 goschedImpl(gp, false)
3925 func gopreempt_m(gp *g) {
3926 goschedImpl(gp, true)
3929 // preemptPark parks gp and puts it in _Gpreempted.
3932 func preemptPark(gp *g) {
3933 status := readgstatus(gp)
3934 if status&^_Gscan != _Grunning {
3936 throw("bad g status")
3939 if gp.asyncSafePoint {
3940 // Double-check that async preemption does not
3941 // happen in SPWRITE assembly functions.
3942 // isAsyncSafePoint must exclude this case.
3943 f := findfunc(gp.sched.pc)
3945 throw("preempt at unknown pc")
3947 if f.flag&abi.FuncFlagSPWrite != 0 {
3948 println("runtime: unexpected SPWRITE function", funcname(f), "in async preempt")
3949 throw("preempt SPWRITE")
3953 // Transition from _Grunning to _Gscan|_Gpreempted. We can't
3954 // be in _Grunning when we dropg because then we'd be running
3955 // without an M, but the moment we're in _Gpreempted,
3956 // something could claim this G before we've fully cleaned it
3957 // up. Hence, we set the scan bit to lock down further
3958 // transitions until we can dropg.
3959 casGToPreemptScan(gp, _Grunning, _Gscan|_Gpreempted)
3962 // Be careful about how we trace this next event. The ordering is subtle.
3965 // The moment we CAS into _Gpreempted, suspendG could CAS to
3966 // _Gwaiting, do its work, and ready the goroutine. All of
3967 // this could happen before we even get the chance to emit
3968 // an event. The end result is that the events could appear
3969 // out of order, and the tracer generally assumes the scheduler
3970 // takes care of the ordering between GoPark and GoUnpark.
3972 // The answer here is simple: emit the event while we still hold
3973 // the _Gscan bit on the goroutine. We still need to traceAcquire
3974 // and traceRelease across the CAS because the tracer could be
3975 // what's calling suspendG in the first place, and we want the
3976 // CAS and event emission to appear atomic to the tracer.
3977 trace := traceAcquire()
3979 trace.GoPark(traceBlockPreempted, 0)
3981 casfrom_Gscanstatus(gp, _Gscan|_Gpreempted, _Gpreempted)
3988 // goyield is like Gosched, but it:
3989 // - emits a GoPreempt trace event instead of a GoSched trace event
3990 // - puts the current G on the runq of the current P instead of the globrunq
3996 func goyield_m(gp *g) {
3997 trace := traceAcquire()
3999 casgstatus(gp, _Grunning, _Grunnable)
4005 runqput(pp, gp, false)
4009 // Finishes execution of the current goroutine.
4014 trace := traceAcquire()
4022 // goexit continuation on g0.
4023 func goexit0(gp *g) {
4027 casgstatus(gp, _Grunning, _Gdead)
4028 gcController.addScannableStack(pp, -int64(gp.stack.hi-gp.stack.lo))
4029 if isSystemGoroutine(gp, false) {
4033 locked := gp.lockedm != 0
4036 gp.preemptStop = false
4037 gp.paniconfault = false
4038 gp._defer = nil // should be nil already but just in case.
4039 gp._panic = nil // non-nil for Goexit during panic. points at stack-allocated data.
4041 gp.waitreason = waitReasonZero
4046 if gcBlackenEnabled != 0 && gp.gcAssistBytes > 0 {
4047 // Flush assist credit to the global pool. This gives
4048 // better information to pacing if the application is
4049 // rapidly creating and exiting goroutines.
4050 assistWorkPerByte := gcController.assistWorkPerByte.Load()
4051 scanCredit := int64(assistWorkPerByte * float64(gp.gcAssistBytes))
4052 gcController.bgScanCredit.Add(scanCredit)
4053 gp.gcAssistBytes = 0
4058 if GOARCH == "wasm" { // no threads yet on wasm
4060 schedule() // never returns
4063 if mp.lockedInt != 0 {
4064 print("invalid m->lockedInt = ", mp.lockedInt, "\n")
4065 throw("internal lockOSThread error")
4069 // The goroutine may have locked this thread because
4070 // it put it in an unusual kernel state. Kill it
4071 // rather than returning it to the thread pool.
4073 // Return to mstart, which will release the P and exit the thread.
4075 if GOOS != "plan9" { // See golang.org/issue/22227.
4078 // Clear lockedExt on plan9 since we may end up re-using this thread.
4086 // save updates getg().sched to refer to pc and sp so that a following
4087 // gogo will restore pc and sp.
4089 // save must not have write barriers because invoking a write barrier
4090 // can clobber getg().sched.
4093 //go:nowritebarrierrec
4094 func save(pc, sp uintptr) {
4097 if gp == gp.m.g0 || gp == gp.m.gsignal {
4098 // m.g0.sched is special and must describe the context
4099 // for exiting the thread. mstart1 writes to it directly.
4100 // m.gsignal.sched should not be used at all.
4101 // This check makes sure save calls do not accidentally
4102 // run in contexts where they'd write to system g's.
4103 throw("save on system g not allowed")
4110 // We need to ensure ctxt is zero, but can't have a write
4111 // barrier here. However, it should always already be zero.
4113 if gp.sched.ctxt != nil {
4118 // The goroutine g is about to enter a system call.
4119 // Record that it's not using the cpu anymore.
4120 // This is called only from the go syscall library and cgocall,
4121 // not from the low-level system calls used by the runtime.
4123 // Entersyscall cannot split the stack: the save must
4124 // make g->sched refer to the caller's stack segment, because
4125 // entersyscall is going to return immediately after.
4127 // Nothing entersyscall calls can split the stack either.
4128 // We cannot safely move the stack during an active call to syscall,
4129 // because we do not know which of the uintptr arguments are
4130 // really pointers (back into the stack).
4131 // In practice, this means that we make the fast path run through
4132 // entersyscall doing no-split things, and the slow path has to use systemstack
4133 // to run bigger things on the system stack.
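// Conceptually, the syscall library brackets a potentially blocking call like
// this (the raw trap itself lives in assembly; this is only a schematic sketch):
//
//	entersyscall()
//	// raw kernel trap happens here: no stack growth, no write barriers
//	exitsyscall()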
4135 // reentersyscall is the entry point used by cgo callbacks, where explicitly
4136 // saved SP and PC are restored. This is needed when exitsyscall will be called
4137 // from a function further up in the call stack than the parent, as g->syscallsp
4138 // must always point to a valid stack frame. entersyscall below is the normal
4139 // entry point for syscalls, which obtains the SP and PC from the caller.
4142 // At the start of a syscall we emit traceGoSysCall to capture the stack trace.
4143 // If the syscall does not block, that is it, we do not emit any other events.
4144 // If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock;
4145 // when syscall returns we emit traceGoSysExit and when the goroutine starts running
4146 // (potentially instantly, if exitsyscallfast returns true) we emit traceGoStart.
4147 // To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock,
4148 // we remember current value of syscalltick in m (gp.m.syscalltick = gp.m.p.ptr().syscalltick),
4149 // whoever emits traceGoSysBlock increments p.syscalltick afterwards;
4150 // and we wait for the increment before emitting traceGoSysExit.
4151 // Note that the increment is done even if tracing is not enabled,
4152 // because tracing can be enabled in the middle of syscall. We don't want the wait to hang.
4155 func reentersyscall(pc, sp uintptr) {
4156 trace := traceAcquire()
4159 // Disable preemption because during this function g is in Gsyscall status,
4160 // but can have an inconsistent g->sched; do not let GC observe it.
4163 // Entersyscall must not call any function that might split/grow the stack.
4164 // (See details in comment above.)
4165 // Catch calls that might, by replacing the stack guard with something that
4166 // will trip any stack check and leaving a flag to tell newstack to die.
4167 gp.stackguard0 = stackPreempt
4168 gp.throwsplit = true
4170 // Leave SP around for GC and traceback.
4174 casgstatus(gp, _Grunning, _Gsyscall)
4175 if staticLockRanking {
4176 // When doing static lock ranking casgstatus can call
4177 // systemstack which clobbers g.sched.
4180 if gp.syscallsp < gp.stack.lo || gp.stack.hi < gp.syscallsp {
4181 systemstack(func() {
4182 print("entersyscall inconsistent ", hex(gp.syscallsp), " [", hex(gp.stack.lo), ",", hex(gp.stack.hi), "]\n")
4183 throw("entersyscall")
4188 systemstack(func() {
4192 // systemstack itself clobbers g.sched.{pc,sp} and we might
4193 // need them later when the G is genuinely blocked in a
4198 if sched.sysmonwait.Load() {
4199 systemstack(entersyscall_sysmon)
4203 if gp.m.p.ptr().runSafePointFn != 0 {
4204 // runSafePointFn may stack split if run on this stack
4205 systemstack(runSafePointFn)
4209 gp.m.syscalltick = gp.m.p.ptr().syscalltick
4214 atomic.Store(&pp.status, _Psyscall)
4215 if sched.gcwaiting.Load() {
4216 systemstack(entersyscall_gcwait)
4223 // Standard syscall entry used by the go syscall library and normal cgo calls.
4225 // This is exported via linkname to assembly in the syscall package and x/sys.
4228 //go:linkname entersyscall
4229 func entersyscall() {
4230 reentersyscall(getcallerpc(), getcallersp())
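// A minimal user-level sketch of the behaviour above, assuming a Unix-like
// system: a goroutine blocked in a system call (here, a read(2) on an empty
// pipe via the syscall package, which enters through entersyscall) does not
// stall the rest of the program, because its P is released or retaken while
// the thread sits in the syscall.
//
//	package main
//
//	import (
//		"fmt"
//		"syscall"
//		"time"
//	)
//
//	func main() {
//		var fds [2]int
//		if err := syscall.Pipe(fds[:]); err != nil {
//			panic(err)
//		}
//		go func() {
//			buf := make([]byte, 1)
//			syscall.Read(fds[0], buf) // blocks in read(2), entered via entersyscall
//			fmt.Println("reader woke up")
//		}()
//		// The goroutine parked in the syscall does not stall this one.
//		time.Sleep(100 * time.Millisecond)
//		syscall.Write(fds[1], []byte{1})
//		time.Sleep(10 * time.Millisecond)
//	}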
4233 func entersyscall_sysmon() {
4235 if sched.sysmonwait.Load() {
4236 sched.sysmonwait.Store(false)
4237 notewakeup(&sched.sysmonnote)
4242 func entersyscall_gcwait() {
4244 pp := gp.m.oldp.ptr()
4247 if sched.stopwait > 0 && atomic.Cas(&pp.status, _Psyscall, _Pgcstop) {
4248 trace := traceAcquire()
4250 trace.GoSysBlock(pp)
4255 if sched.stopwait--; sched.stopwait == 0 {
4256 notewakeup(&sched.stopnote)
4262 // The same as entersyscall(), but with a hint that the syscall is blocking.
4265 func entersyscallblock() {
4268 gp.m.locks++ // see comment in entersyscall
4269 gp.throwsplit = true
4270 gp.stackguard0 = stackPreempt // see comment in entersyscall
4271 gp.m.syscalltick = gp.m.p.ptr().syscalltick
4272 gp.m.p.ptr().syscalltick++
4274 // Leave SP around for GC and traceback.
4278 gp.syscallsp = gp.sched.sp
4279 gp.syscallpc = gp.sched.pc
4280 if gp.syscallsp < gp.stack.lo || gp.stack.hi < gp.syscallsp {
4284 systemstack(func() {
4285 print("entersyscallblock inconsistent ", hex(sp1), " ", hex(sp2), " ", hex(sp3), " [", hex(gp.stack.lo), ",", hex(gp.stack.hi), "]\n")
4286 throw("entersyscallblock")
4289 casgstatus(gp, _Grunning, _Gsyscall)
4290 if gp.syscallsp < gp.stack.lo || gp.stack.hi < gp.syscallsp {
4291 systemstack(func() {
4292 print("entersyscallblock inconsistent ", hex(sp), " ", hex(gp.sched.sp), " ", hex(gp.syscallsp), " [", hex(gp.stack.lo), ",", hex(gp.stack.hi), "]\n")
4293 throw("entersyscallblock")
4297 systemstack(entersyscallblock_handoff)
4299 // Resave for traceback during blocked call.
4300 save(getcallerpc(), getcallersp())
4305 func entersyscallblock_handoff() {
4306 trace := traceAcquire()
4309 trace.GoSysBlock(getg().m.p.ptr())
4312 handoffp(releasep())
4315 // The goroutine g exited its system call.
4316 // Arrange for it to run on a cpu again.
4317 // This is called only from the go syscall library, not
4318 // from the low-level system calls used by the runtime.
4320 // Write barriers are not allowed because our P may have been stolen.
4322 // This is exported via linkname to assembly in the syscall package.
4325 //go:nowritebarrierrec
4326 //go:linkname exitsyscall
4327 func exitsyscall() {
4330 gp.m.locks++ // see comment in entersyscall
4331 if getcallersp() > gp.syscallsp {
4332 throw("exitsyscall: syscall frame is no longer valid")
4336 oldp := gp.m.oldp.ptr()
4338 if exitsyscallfast(oldp) {
4339 // When exitsyscallfast returns success, we have a P so can now use write barriers.
4341 if goroutineProfile.active {
4342 // Make sure that gp has had its stack written out to the goroutine
4343 // profile, exactly as it was when the goroutine profiler first
4344 // stopped the world.
4345 systemstack(func() {
4346 tryRecordGoroutineProfileWB(gp)
4349 trace := traceAcquire()
4351 if oldp != gp.m.p.ptr() || gp.m.syscalltick != gp.m.p.ptr().syscalltick {
4352 systemstack(func() {
4357 // There's a cpu for us, so we can run.
4358 gp.m.p.ptr().syscalltick++
4359 // We need to cas the status and scan before resuming...
4360 casgstatus(gp, _Gsyscall, _Grunning)
4365 // Garbage collector isn't running (since we are),
4366 // so okay to clear syscallsp.
4370 // restore the preemption request in case we've cleared it in newstack
4371 gp.stackguard0 = stackPreempt
4373 // otherwise restore the real stackGuard; we've spoiled it in entersyscall/entersyscallblock
4374 gp.stackguard0 = gp.stack.lo + stackGuard
4376 gp.throwsplit = false
4378 if sched.disable.user && !schedEnabled(gp) {
4379 // Scheduling of this goroutine is disabled.
4386 trace := traceAcquire()
4388 // Wait till traceGoSysBlock event is emitted.
4389 // This ensures consistency of the trace (the goroutine is started after it is blocked).
4390 for oldp != nil && oldp.syscalltick == gp.m.syscalltick {
4393 // We can't trace syscall exit right now because we don't have a P.
4394 // Tracing code can invoke write barriers that cannot run without a P.
4395 // So instead we remember the syscall exit time and emit the event
4396 // in execute when we have a P.
4397 gp.trace.sysExitTime = traceClockNow()
4403 // Call the scheduler.
4406 // Scheduler returned, so we're allowed to run now.
4407 // Delete the syscallsp information that we left for
4408 // the garbage collector during the system call.
4409 // Must wait until now because until gosched returns
4410 // we don't know for sure that the garbage collector is not running.
4413 gp.m.p.ptr().syscalltick++
4414 gp.throwsplit = false
4418 func exitsyscallfast(oldp *p) bool {
4421 // Freezetheworld sets stopwait but does not retake P's.
4422 if sched.stopwait == freezeStopWait {
4426 // Try to re-acquire the last P.
4427 if oldp != nil && oldp.status == _Psyscall && atomic.Cas(&oldp.status, _Psyscall, _Pidle) {
4428 // There's a cpu for us, so we can run.
4430 exitsyscallfast_reacquired()
4434 // Try to get any other idle P.
4435 if sched.pidle != 0 {
4437 systemstack(func() {
4438 ok = exitsyscallfast_pidle()
4440 trace := traceAcquire()
4443 // Wait till traceGoSysBlock event is emitted.
4444 // This ensures consistency of the trace (the goroutine is started after it is blocked).
4445 for oldp.syscalltick == gp.m.syscalltick {
4461 // exitsyscallfast_reacquired is the exitsyscall path on which this G
4462 // has successfully reacquired the P it was running on before the syscall.
4466 func exitsyscallfast_reacquired() {
4468 if gp.m.syscalltick != gp.m.p.ptr().syscalltick {
4469 trace := traceAcquire()
4471 // The p was retaken and then entered a syscall again (since gp.m.syscalltick has changed).
4472 // traceGoSysBlock for this syscall was already emitted,
4473 // but here we effectively retake the p from the new syscall running on the same p.
4474 systemstack(func() {
4475 // Denote blocking of the new syscall.
4476 trace.GoSysBlock(gp.m.p.ptr())
4477 // Denote completion of the current syscall.
4482 gp.m.p.ptr().syscalltick++
4486 func exitsyscallfast_pidle() bool {
4488 pp, _ := pidleget(0)
4489 if pp != nil && sched.sysmonwait.Load() {
4490 sched.sysmonwait.Store(false)
4491 notewakeup(&sched.sysmonnote)
4501 // exitsyscall slow path on g0.
4502 // Failed to acquire P, enqueue gp as runnable.
4504 // Called via mcall, so gp is the calling g from this M.
4506 //go:nowritebarrierrec
4507 func exitsyscall0(gp *g) {
4508 casgstatus(gp, _Gsyscall, _Grunnable)
4512 if schedEnabled(gp) {
4519 // Below, we stoplockedm if gp is locked. globrunqput releases
4520 // ownership of gp, so we must check if gp is locked prior to
4521 // committing the release by unlocking sched.lock, otherwise we
4522 // could race with another M transitioning gp from unlocked to locked.
4524 locked = gp.lockedm != 0
4525 } else if sched.sysmonwait.Load() {
4526 sched.sysmonwait.Store(false)
4527 notewakeup(&sched.sysmonnote)
4532 execute(gp, false) // Never returns.
4535 // Wait until another thread schedules gp and so m again.
4537 // N.B. lockedm must be this M, as this g was running on this M
4538 // before entersyscall.
4540 execute(gp, false) // Never returns.
4543 schedule() // Never returns.
4546 // Called from syscall package before fork.
4548 //go:linkname syscall_runtime_BeforeFork syscall.runtime_BeforeFork
4550 func syscall_runtime_BeforeFork() {
4553 // Block signals during a fork, so that the child does not run
4554 // a signal handler before exec if a signal is sent to the process
4555 // group. See issue #18600.
4557 sigsave(&gp.m.sigmask)
4560 // This function is called before fork in syscall package.
4561 // Code between fork and exec must not allocate memory nor even try to grow stack.
4562 // Here we spoil g.stackguard0 to reliably detect any attempts to grow stack.
4563 // runtime_AfterFork will undo this in parent process, but not in child.
4564 gp.stackguard0 = stackFork
4567 // Called from syscall package after fork in parent.
4569 //go:linkname syscall_runtime_AfterFork syscall.runtime_AfterFork
4571 func syscall_runtime_AfterFork() {
4574 // See the comments in beforefork.
4575 gp.stackguard0 = gp.stack.lo + stackGuard
4577 msigrestore(gp.m.sigmask)
4582 // inForkedChild is true while manipulating signals in the child process.
4583 // This is used to avoid calling libc functions in case we are using vfork.
4584 var inForkedChild bool
4586 // Called from syscall package after fork in child.
4587 // It resets non-sigignored signals to the default handler, and
4588 // restores the signal mask in preparation for the exec.
4590 // Because this might be called during a vfork, and therefore may be
4591 // temporarily sharing address space with the parent process, this must
4592 // not change any global variables or call into C code that may do so.
4594 //go:linkname syscall_runtime_AfterForkInChild syscall.runtime_AfterForkInChild
4596 //go:nowritebarrierrec
4597 func syscall_runtime_AfterForkInChild() {
4598 // It's OK to change the global variable inForkedChild here
4599 // because we are going to change it back. There is no race here,
4600 // because if we are sharing address space with the parent process,
4601 // then the parent process can not be running concurrently.
4602 inForkedChild = true
4604 clearSignalHandlers()
4606 // When we are the child we are the only thread running,
4607 // so we know that nothing else has changed gp.m.sigmask.
4608 msigrestore(getg().m.sigmask)
4610 inForkedChild = false
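// On Unix-like systems an ordinary os/exec command exercises this path: the
// syscall package forks and execs the child, invoking the BeforeFork,
// AfterFork and AfterForkInChild hooks above around the fork. A minimal
// sketch:
//
//	package main
//
//	import (
//		"fmt"
//		"os/exec"
//	)
//
//	func main() {
//		out, err := exec.Command("echo", "hello").Output()
//		if err != nil {
//			panic(err)
//		}
//		fmt.Printf("%s", out) // "hello\n"
//	}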
4613 // pendingPreemptSignals is the number of preemption signals
4614 // that have been sent but not received. This is only used on Darwin.
4616 var pendingPreemptSignals atomic.Int32
4618 // Called from syscall package before Exec.
4620 //go:linkname syscall_runtime_BeforeExec syscall.runtime_BeforeExec
4621 func syscall_runtime_BeforeExec() {
4622 // Prevent thread creation during exec.
4625 // On Darwin, wait for all pending preemption signals to
4626 // be received. See issue #41702.
4627 if GOOS == "darwin" || GOOS == "ios" {
4628 for pendingPreemptSignals.Load() > 0 {
4634 // Called from syscall package after Exec.
4636 //go:linkname syscall_runtime_AfterExec syscall.runtime_AfterExec
4637 func syscall_runtime_AfterExec() {
4641 // Allocate a new g, with a stack big enough for stacksize bytes.
4642 func malg(stacksize int32) *g {
4645 stacksize = round2(stackSystem + stacksize)
4646 systemstack(func() {
4647 newg.stack = stackalloc(uint32(stacksize))
4649 newg.stackguard0 = newg.stack.lo + stackGuard
4650 newg.stackguard1 = ^uintptr(0)
4651 // Clear the bottom word of the stack. We record g
4652 // there on gsignal stack during VDSO on ARM and ARM64.
4653 *(*uintptr)(unsafe.Pointer(newg.stack.lo)) = 0
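// malg rounds the requested size up to a power of two because stackalloc only
// deals in power-of-two sizes. A stand-alone sketch of that rounding
// (roundUpPow2 is a hypothetical helper, not the runtime's round2):
//
//	// roundUpPow2 returns the smallest power of two >= x, for 0 < x <= 1<<30.
//	func roundUpPow2(x int32) int32 {
//		s := int32(1)
//		for s < x {
//			s <<= 1
//		}
//		return s
//	}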
4658 // Create a new g running fn.
4659 // Put it on the queue of g's waiting to run.
4660 // The compiler turns a go statement into a call to this.
4661 func newproc(fn *funcval) {
4664 systemstack(func() {
4665 newg := newproc1(fn, gp, pc)
4667 pp := getg().m.p.ptr()
4668 runqput(pp, newg, true)
4676 // Create a new g in state _Grunnable, starting at fn. callerpc is the
4677 // address of the go statement that created this. The caller is responsible
4678 // for adding the new g to the scheduler.
4679 func newproc1(fn *funcval, callergp *g, callerpc uintptr) *g {
4681 fatal("go of nil func value")
4684 mp := acquirem() // disable preemption because we hold M and P in local vars.
4688 newg = malg(stackMin)
4689 casgstatus(newg, _Gidle, _Gdead)
4690 allgadd(newg) // publishes with a g->status of Gdead so GC scanner doesn't look at uninitialized stack.
4692 if newg.stack.hi == 0 {
4693 throw("newproc1: newg missing stack")
4696 if readgstatus(newg) != _Gdead {
4697 throw("newproc1: new g is not Gdead")
4700 totalSize := uintptr(4*goarch.PtrSize + sys.MinFrameSize) // extra space in case of reads slightly beyond frame
4701 totalSize = alignUp(totalSize, sys.StackAlign)
4702 sp := newg.stack.hi - totalSize
4705 *(*uintptr)(unsafe.Pointer(sp)) = 0
4708 if GOARCH == "arm64" {
4710 *(*uintptr)(unsafe.Pointer(sp - goarch.PtrSize)) = 0
4713 memclrNoHeapPointers(unsafe.Pointer(&newg.sched), unsafe.Sizeof(newg.sched))
4716 newg.sched.pc = abi.FuncPCABI0(goexit) + sys.PCQuantum // +PCQuantum so that previous instruction is in same function
4717 newg.sched.g = guintptr(unsafe.Pointer(newg))
4718 gostartcallfn(&newg.sched, fn)
4719 newg.parentGoid = callergp.goid
4720 newg.gopc = callerpc
4721 newg.ancestors = saveAncestors(callergp)
4722 newg.startpc = fn.fn
4723 if isSystemGoroutine(newg, false) {
4726 // Only user goroutines inherit pprof labels.
4728 newg.labels = mp.curg.labels
4730 if goroutineProfile.active {
4731 // A concurrent goroutine profile is running. It should include
4732 // exactly the set of goroutines that were alive when the goroutine
4733 // profiler first stopped the world. That does not include newg, so
4734 // mark it as not needing a profile before transitioning it from
4736 newg.goroutineProfiled.Store(goroutineProfileSatisfied)
4739 // Track initial transition?
4740 newg.trackingSeq = uint8(fastrand())
4741 if newg.trackingSeq%gTrackingPeriod == 0 {
4742 newg.tracking = true
4744 gcController.addScannableStack(pp, int64(newg.stack.hi-newg.stack.lo))
4746 // Get a goid and switch to runnable. Make all this atomic to the tracer.
4747 trace := traceAcquire()
4748 casgstatus(newg, _Gdead, _Grunnable)
4749 if pp.goidcache == pp.goidcacheend {
4750 // Sched.goidgen is the last allocated id,
4751 // this batch must be [sched.goidgen+1, sched.goidgen+GoidCacheBatch].
4752 // At startup sched.goidgen=0, so main goroutine receives goid=1.
4753 pp.goidcache = sched.goidgen.Add(_GoidCacheBatch)
4754 pp.goidcache -= _GoidCacheBatch - 1
4755 pp.goidcacheend = pp.goidcache + _GoidCacheBatch
4757 newg.goid = pp.goidcache
4760 trace.GoCreate(newg, newg.startpc)
4764 // Set up race context.
4766 newg.racectx = racegostart(callerpc)
4768 if newg.labels != nil {
4769 // See note in proflabel.go on labelSync's role in synchronizing
4770 // with the reads in the signal handler.
4771 racereleasemergeg(newg, unsafe.Pointer(&labelSync))
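// The label inheritance above (newg.labels = mp.curg.labels) is what makes CPU
// profile samples attributed to a user goroutine carry the pprof labels that
// were current on the goroutine that created it. A sketch using the public
// runtime/pprof API (the label key/value and workload are arbitrary):
//
//	package main
//
//	import (
//		"context"
//		"runtime/pprof"
//	)
//
//	func busy() {
//		s := 0
//		for i := 0; i < 1e8; i++ {
//			s += i
//		}
//		_ = s
//	}
//
//	func main() {
//		done := make(chan struct{})
//		pprof.Do(context.Background(), pprof.Labels("worker", "a"), func(context.Context) {
//			go func() { // inherits the worker=a label set from its creator
//				busy()
//				close(done)
//			}()
//		})
//		<-done
//	}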
4779 // saveAncestors copies previous ancestors of the given caller g and
4780 // includes info for the current caller into a new set of tracebacks for
4781 // a g being created.
4782 func saveAncestors(callergp *g) *[]ancestorInfo {
4783 // Copy all prior info, except for the root goroutine (goid 0).
4784 if debug.tracebackancestors <= 0 || callergp.goid == 0 {
4787 var callerAncestors []ancestorInfo
4788 if callergp.ancestors != nil {
4789 callerAncestors = *callergp.ancestors
4791 n := int32(len(callerAncestors)) + 1
4792 if n > debug.tracebackancestors {
4793 n = debug.tracebackancestors
4795 ancestors := make([]ancestorInfo, n)
4796 copy(ancestors[1:], callerAncestors)
4798 var pcs [tracebackInnerFrames]uintptr
4799 npcs := gcallers(callergp, 0, pcs[:])
4800 ipcs := make([]uintptr, npcs)
4802 ancestors[0] = ancestorInfo{
4804 goid: callergp.goid,
4805 gopc: callergp.gopc,
4808 ancestorsp := new([]ancestorInfo)
4809 *ancestorsp = ancestors
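// saveAncestors only records anything when GODEBUG=tracebackancestors=N is
// set; crash tracebacks then include the creation stacks of up to N ancestor
// goroutines. For example, running the following with
// GODEBUG=tracebackancestors=5 shows the two creating goroutines as ancestors
// in the panic output:
//
//	package main
//
//	func main() {
//		done := make(chan struct{})
//		go func() {
//			go func() {
//				panic("boom")
//			}()
//		}()
//		<-done
//	}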
4813 // Put on gfree list.
4814 // If local list is too long, transfer a batch to the global list.
4815 func gfput(pp *p, gp *g) {
4816 if readgstatus(gp) != _Gdead {
4817 throw("gfput: bad status (not Gdead)")
4820 stksize := gp.stack.hi - gp.stack.lo
4822 if stksize != uintptr(startingStackSize) {
4823 // non-standard stack size - free it.
4832 if pp.gFree.n >= 64 {
4838 for pp.gFree.n >= 32 {
4839 gp := pp.gFree.pop()
4841 if gp.stack.lo == 0 {
4848 lock(&sched.gFree.lock)
4849 sched.gFree.noStack.pushAll(noStackQ)
4850 sched.gFree.stack.pushAll(stackQ)
4851 sched.gFree.n += inc
4852 unlock(&sched.gFree.lock)
4856 // Get from gfree list.
4857 // If local list is empty, grab a batch from global list.
4858 func gfget(pp *p) *g {
4860 if pp.gFree.empty() && (!sched.gFree.stack.empty() || !sched.gFree.noStack.empty()) {
4861 lock(&sched.gFree.lock)
4862 // Move a batch of free Gs to the P.
4863 for pp.gFree.n < 32 {
4864 // Prefer Gs with stacks.
4865 gp := sched.gFree.stack.pop()
4867 gp = sched.gFree.noStack.pop()
4876 unlock(&sched.gFree.lock)
4879 gp := pp.gFree.pop()
4884 if gp.stack.lo != 0 && gp.stack.hi-gp.stack.lo != uintptr(startingStackSize) {
4885 // Deallocate old stack. We kept it in gfput because it was the
4886 // right size when the goroutine was put on the free list, but
4887 // the right size has changed since then.
4888 systemstack(func() {
4895 if gp.stack.lo == 0 {
4896 // Stack was deallocated in gfput or just above. Allocate a new one.
4897 systemstack(func() {
4898 gp.stack = stackalloc(startingStackSize)
4900 gp.stackguard0 = gp.stack.lo + stackGuard
4903 racemalloc(unsafe.Pointer(gp.stack.lo), gp.stack.hi-gp.stack.lo)
4906 msanmalloc(unsafe.Pointer(gp.stack.lo), gp.stack.hi-gp.stack.lo)
4909 asanunpoison(unsafe.Pointer(gp.stack.lo), gp.stack.hi-gp.stack.lo)
4915 // Purge all cached G's from gfree list to the global list.
4916 func gfpurge(pp *p) {
4922 for !pp.gFree.empty() {
4923 gp := pp.gFree.pop()
4925 if gp.stack.lo == 0 {
4932 lock(&sched.gFree.lock)
4933 sched.gFree.noStack.pushAll(noStackQ)
4934 sched.gFree.stack.pushAll(stackQ)
4935 sched.gFree.n += inc
4936 unlock(&sched.gFree.lock)
4939 // Breakpoint executes a breakpoint trap.
4944 // dolockOSThread is called by LockOSThread and lockOSThread below
4945 // after they modify m.locked. Do not allow preemption during this call,
4946 // or else the m might be different in this function than in the caller.
4949 func dolockOSThread() {
4950 if GOARCH == "wasm" {
4951 return // no threads on wasm yet
4954 gp.m.lockedg.set(gp)
4955 gp.lockedm.set(gp.m)
4958 // LockOSThread wires the calling goroutine to its current operating system thread.
4959 // The calling goroutine will always execute in that thread,
4960 // and no other goroutine will execute in it,
4961 // until the calling goroutine has made as many calls to
4962 // UnlockOSThread as to LockOSThread.
4963 // If the calling goroutine exits without unlocking the thread,
4964 // the thread will be terminated.
4966 // All init functions are run on the startup thread. Calling LockOSThread
4967 // from an init function will cause the main function to be invoked on that thread.
4970 // A goroutine should call LockOSThread before calling OS services or
4971 // non-Go library functions that depend on per-thread state.
4974 func LockOSThread() {
4975 if atomic.Load(&newmHandoff.haveTemplateThread) == 0 && GOOS != "plan9" {
4976 // If we need to start a new thread from the locked
4977 // thread, we need the template thread. Start it now
4978 // while we're in a known-good state.
4979 startTemplateThread()
4983 if gp.m.lockedExt == 0 {
4985 panic("LockOSThread nesting overflow")
4991 func lockOSThread() {
4992 getg().m.lockedInt++
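// From user code the wiring above is reached through runtime.LockOSThread and
// runtime.UnlockOSThread. A typical pattern for thread-affine work (the
// "per-thread state" stands for whatever OS facility or foreign library
// requires it):
//
//	package main
//
//	import (
//		"fmt"
//		"runtime"
//	)
//
//	func main() {
//		done := make(chan struct{})
//		go func() {
//			runtime.LockOSThread() // this goroutine now owns its OS thread
//			defer runtime.UnlockOSThread()
//			// ... call code that relies on per-thread state here ...
//			fmt.Println("running on a dedicated OS thread")
//			close(done)
//		}()
//		<-done
//	}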
4996 // dounlockOSThread is called by UnlockOSThread and unlockOSThread below
4997 // after they update m->locked. Do not allow preemption during this call,
4998 // or else the m might be different in this function than in the caller.
5001 func dounlockOSThread() {
5002 if GOARCH == "wasm" {
5003 return // no threads on wasm yet
5006 if gp.m.lockedInt != 0 || gp.m.lockedExt != 0 {
5013 // UnlockOSThread undoes an earlier call to LockOSThread.
5014 // If this drops the number of active LockOSThread calls on the
5015 // calling goroutine to zero, it unwires the calling goroutine from
5016 // its fixed operating system thread.
5017 // If there are no active LockOSThread calls, this is a no-op.
5019 // Before calling UnlockOSThread, the caller must ensure that the OS
5020 // thread is suitable for running other goroutines. If the caller made
5021 // any permanent changes to the state of the thread that would affect
5022 // other goroutines, it should not call this function and thus leave
5023 // the goroutine locked to the OS thread until the goroutine (and
5024 // hence the thread) exits.
5027 func UnlockOSThread() {
5029 if gp.m.lockedExt == 0 {
5037 func unlockOSThread() {
5039 if gp.m.lockedInt == 0 {
5040 systemstack(badunlockosthread)
5046 func badunlockosthread() {
5047 throw("runtime: internal error: misuse of lockOSThread/unlockOSThread")
5050 func gcount() int32 {
5051 n := int32(atomic.Loaduintptr(&allglen)) - sched.gFree.n - sched.ngsys.Load()
5052 for _, pp := range allp {
5056 // All these variables can be changed concurrently, so the result can be inconsistent.
5057 // But at least the current goroutine is running.
5064 func mcount() int32 {
5065 return int32(sched.mnext - sched.nmfreed)
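// gcount backs the exported runtime.NumGoroutine. As the comment in gcount
// notes, the inputs can change concurrently, so the value is a best-effort
// snapshot:
//
//	package main
//
//	import (
//		"fmt"
//		"runtime"
//	)
//
//	func main() {
//		for i := 0; i < 10; i++ {
//			go func() { select {} }() // block forever
//		}
//		fmt.Println("goroutines:", runtime.NumGoroutine()) // approximate count
//	}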
5069 signalLock atomic.Uint32
5071 // Must hold signalLock to write. Reads may be lock-free, but
5072 // signalLock should be taken to synchronize with changes.
5076 func _System() { _System() }
5077 func _ExternalCode() { _ExternalCode() }
5078 func _LostExternalCode() { _LostExternalCode() }
5079 func _GC() { _GC() }
5080 func _LostSIGPROFDuringAtomic64() { _LostSIGPROFDuringAtomic64() }
5081 func _VDSO() { _VDSO() }
5083 // Called if we receive a SIGPROF signal.
5084 // Called by the signal handler, may run during STW.
5086 //go:nowritebarrierrec
5087 func sigprof(pc, sp, lr uintptr, gp *g, mp *m) {
5088 if prof.hz.Load() == 0 {
5092 // If mp.profilehz is 0, then profiling is not enabled for this thread.
5093 // We must check this to avoid a deadlock between setcpuprofilerate
5094 // and the call to cpuprof.add, below.
5095 if mp != nil && mp.profilehz == 0 {
5099 // On mips{,le}/arm, 64bit atomics are emulated with spinlocks, in
5100 // runtime/internal/atomic. If SIGPROF arrives while the program is inside
5101 // the critical section, it creates a deadlock (when writing the sample).
5102 // As a workaround, create a counter of SIGPROFs while in critical section
5103 // to store the count, and pass it to sigprof.add() later when SIGPROF is
5104 // received from somewhere else (with _LostSIGPROFDuringAtomic64 as pc).
5105 if GOARCH == "mips" || GOARCH == "mipsle" || GOARCH == "arm" {
5106 if f := findfunc(pc); f.valid() {
5107 if hasPrefix(funcname(f), "runtime/internal/atomic") {
5108 cpuprof.lostAtomic++
5112 if GOARCH == "arm" && goarm < 7 && GOOS == "linux" && pc&0xffff0000 == 0xffff0000 {
5113 // runtime/internal/atomic functions call into kernel
5114 // helpers on arm < 7. See
5115 // runtime/internal/atomic/sys_linux_arm.s.
5116 cpuprof.lostAtomic++
5121 // Profiling runs concurrently with GC, so it must not allocate.
5122 // Set a trap in case the code does allocate.
5123 // Note that on windows, one thread takes profiles of all the
5124 // other threads, so mp is usually not getg().m.
5125 // In fact mp may not even be stopped.
5126 // See golang.org/issue/17165.
5127 getg().m.mallocing++
5130 var stk [maxCPUProfStack]uintptr
5132 if mp.ncgo > 0 && mp.curg != nil && mp.curg.syscallpc != 0 && mp.curg.syscallsp != 0 {
5134 // Check cgoCallersUse to make sure that we are not
5135 // interrupting other code that is fiddling with
5136 // cgoCallers. We are running in a signal handler
5137 // with all signals blocked, so we don't have to worry
5138 // about any other code interrupting us.
5139 if mp.cgoCallersUse.Load() == 0 && mp.cgoCallers != nil && mp.cgoCallers[0] != 0 {
5140 for cgoOff < len(mp.cgoCallers) && mp.cgoCallers[cgoOff] != 0 {
5143 n += copy(stk[:], mp.cgoCallers[:cgoOff])
5144 mp.cgoCallers[0] = 0
5147 // Collect Go stack that leads to the cgo call.
5148 u.initAt(mp.curg.syscallpc, mp.curg.syscallsp, 0, mp.curg, unwindSilentErrors)
5149 } else if usesLibcall() && mp.libcallg != 0 && mp.libcallpc != 0 && mp.libcallsp != 0 {
5150 // Libcall, i.e. runtime syscall on windows.
5151 // Collect Go stack that leads to the call.
5152 u.initAt(mp.libcallpc, mp.libcallsp, 0, mp.libcallg.ptr(), unwindSilentErrors)
5153 } else if mp != nil && mp.vdsoSP != 0 {
5154 // VDSO call, e.g. nanotime1 on Linux.
5155 // Collect Go stack that leads to the call.
5156 u.initAt(mp.vdsoPC, mp.vdsoSP, 0, gp, unwindSilentErrors|unwindJumpStack)
5158 u.initAt(pc, sp, lr, gp, unwindSilentErrors|unwindTrap|unwindJumpStack)
5160 n += tracebackPCs(&u, 0, stk[n:])
5163 // Normal traceback is impossible or has failed.
5164 // Account it against abstract "System" or "GC".
5167 pc = abi.FuncPCABIInternal(_VDSO) + sys.PCQuantum
5168 } else if pc > firstmoduledata.etext {
5169 // "ExternalCode" is better than "etext".
5170 pc = abi.FuncPCABIInternal(_ExternalCode) + sys.PCQuantum
5173 if mp.preemptoff != "" {
5174 stk[1] = abi.FuncPCABIInternal(_GC) + sys.PCQuantum
5176 stk[1] = abi.FuncPCABIInternal(_System) + sys.PCQuantum
5180 if prof.hz.Load() != 0 {
5181 // Note: it can happen on Windows that we interrupted a system thread
5182 // with no g, so gp could be nil. The other nil checks are done out of
5183 // caution, but not expected to be nil in practice.
5184 var tagPtr *unsafe.Pointer
5185 if gp != nil && gp.m != nil && gp.m.curg != nil {
5186 tagPtr = &gp.m.curg.labels
5188 cpuprof.add(tagPtr, stk[:n])
5192 if gp != nil && gp.m != nil {
5193 if gp.m.curg != nil {
5198 traceCPUSample(gprof, pp, stk[:n])
5200 getg().m.mallocing--
5203 // setcpuprofilerate sets the CPU profiling rate to hz times per second.
5204 // If hz <= 0, setcpuprofilerate turns off CPU profiling.
5205 func setcpuprofilerate(hz int32) {
5206 // Force sane arguments.
5211 // Disable preemption, otherwise we can be rescheduled to another thread
5212 // that has profiling enabled.
5216 // Stop profiler on this thread so that it is safe to lock prof.
5217 // if a profiling signal came in while we had prof locked,
5218 // it would deadlock.
5219 setThreadCPUProfiler(0)
5221 for !prof.signalLock.CompareAndSwap(0, 1) {
5224 if prof.hz.Load() != hz {
5225 setProcessCPUProfiler(hz)
5228 prof.signalLock.Store(0)
5231 sched.profilehz = hz
5235 setThreadCPUProfiler(hz)
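// setcpuprofilerate is normally reached from runtime.SetCPUProfileRate or,
// more commonly, runtime/pprof.StartCPUProfile, which requests SIGPROF
// delivery at 100 Hz. A minimal sketch (the output file name is arbitrary):
//
//	package main
//
//	import (
//		"os"
//		"runtime/pprof"
//	)
//
//	func main() {
//		f, err := os.Create("cpu.prof")
//		if err != nil {
//			panic(err)
//		}
//		defer f.Close()
//		if err := pprof.StartCPUProfile(f); err != nil { // ~100 Hz SIGPROF
//			panic(err)
//		}
//		defer pprof.StopCPUProfile()
//
//		s := 0
//		for i := 0; i < 1e8; i++ { // some CPU-bound work to sample
//			s += i
//		}
//		_ = s
//	}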
5241 // init initializes pp, which may be a freshly allocated p or a
5242 // previously destroyed p, and transitions it to status _Pgcstop.
5243 func (pp *p) init(id int32) {
5245 pp.status = _Pgcstop
5246 pp.sudogcache = pp.sudogbuf[:0]
5247 pp.deferpool = pp.deferpoolbuf[:0]
5249 if pp.mcache == nil {
5252 throw("missing mcache?")
5254 // Use the bootstrap mcache0. Only one P will get
5255 // mcache0: the one with ID 0.
5258 pp.mcache = allocmcache()
5261 if raceenabled && pp.raceprocctx == 0 {
5263 pp.raceprocctx = raceprocctx0
5264 raceprocctx0 = 0 // bootstrap
5266 pp.raceprocctx = raceproccreate()
5269 lockInit(&pp.timersLock, lockRankTimers)
5271 // This P may get timers when it starts running. Set the mask here
5272 // since the P may not go through pidleget (notably P 0 on startup).
5274 // Similarly, we may not go through pidleget before this P starts
5275 // running if it is P 0 on startup.
5279 // destroy releases all of the resources associated with pp and
5280 // transitions it to status _Pdead.
5282 // sched.lock must be held and the world must be stopped.
5283 func (pp *p) destroy() {
5284 assertLockHeld(&sched.lock)
5285 assertWorldStopped()
5287 // Move all runnable goroutines to the global queue
5288 for pp.runqhead != pp.runqtail {
5289 // Pop from tail of local queue
5291 gp := pp.runq[pp.runqtail%uint32(len(pp.runq))].ptr()
5292 // Push onto head of global queue
5295 if pp.runnext != 0 {
5296 globrunqputhead(pp.runnext.ptr())
5299 if len(pp.timers) > 0 {
5300 plocal := getg().m.p.ptr()
5301 // The world is stopped, but we acquire timersLock to
5302 // protect against sysmon calling timeSleepUntil.
5303 // This is the only case where we hold the timersLock of
5304 // more than one P, so there are no deadlock concerns.
5305 lock(&plocal.timersLock)
5306 lock(&pp.timersLock)
5307 moveTimers(plocal, pp.timers)
5309 pp.numTimers.Store(0)
5310 pp.deletedTimers.Store(0)
5311 pp.timer0When.Store(0)
5312 unlock(&pp.timersLock)
5313 unlock(&plocal.timersLock)
5315 // Flush p's write barrier buffer.
5316 if gcphase != _GCoff {
5320 for i := range pp.sudogbuf {
5321 pp.sudogbuf[i] = nil
5323 pp.sudogcache = pp.sudogbuf[:0]
5324 pp.pinnerCache = nil
5325 for j := range pp.deferpoolbuf {
5326 pp.deferpoolbuf[j] = nil
5328 pp.deferpool = pp.deferpoolbuf[:0]
5329 systemstack(func() {
5330 for i := 0; i < pp.mspancache.len; i++ {
5331 // Safe to call since the world is stopped.
5332 mheap_.spanalloc.free(unsafe.Pointer(pp.mspancache.buf[i]))
5334 pp.mspancache.len = 0
5336 pp.pcache.flush(&mheap_.pages)
5337 unlock(&mheap_.lock)
5339 freemcache(pp.mcache)
5344 if pp.timerRaceCtx != 0 {
5345 // The race detector code uses a callback to fetch
5346 // the proc context, so arrange for that callback
5347 // to see the right thing.
5348 // This hack only works because we are the only thread running.
5354 racectxend(pp.timerRaceCtx)
5359 raceprocdestroy(pp.raceprocctx)
5366 // Change number of processors.
5368 // sched.lock must be held, and the world must be stopped.
5370 // gcworkbufs must not be being modified by either the GC or the write barrier
5371 // code, so the GC must not be running if the number of Ps actually changes.
5373 // Returns list of Ps with local work, they need to be scheduled by the caller.
5374 func procresize(nprocs int32) *p {
5375 assertLockHeld(&sched.lock)
5376 assertWorldStopped()
5379 if old < 0 || nprocs <= 0 {
5380 throw("procresize: invalid arg")
5382 trace := traceAcquire()
5384 trace.Gomaxprocs(nprocs)
5388 // update statistics
5390 if sched.procresizetime != 0 {
5391 sched.totaltime += int64(old) * (now - sched.procresizetime)
5393 sched.procresizetime = now
5395 maskWords := (nprocs + 31) / 32
5397 // Grow allp if necessary.
5398 if nprocs > int32(len(allp)) {
5399 // Synchronize with retake, which could be running
5400 // concurrently since it doesn't run on a P.
5402 if nprocs <= int32(cap(allp)) {
5403 allp = allp[:nprocs]
5405 nallp := make([]*p, nprocs)
5406 // Copy everything up to allp's cap so we
5407 // never lose old allocated Ps.
5408 copy(nallp, allp[:cap(allp)])
5412 if maskWords <= int32(cap(idlepMask)) {
5413 idlepMask = idlepMask[:maskWords]
5414 timerpMask = timerpMask[:maskWords]
5416 nidlepMask := make([]uint32, maskWords)
5417 // No need to copy beyond len, old Ps are irrelevant.
5418 copy(nidlepMask, idlepMask)
5419 idlepMask = nidlepMask
5421 ntimerpMask := make([]uint32, maskWords)
5422 copy(ntimerpMask, timerpMask)
5423 timerpMask = ntimerpMask
5428 // initialize new P's
5429 for i := old; i < nprocs; i++ {
5435 atomicstorep(unsafe.Pointer(&allp[i]), unsafe.Pointer(pp))
5439 if gp.m.p != 0 && gp.m.p.ptr().id < nprocs {
5440 // continue to use the current P
5441 gp.m.p.ptr().status = _Prunning
5442 gp.m.p.ptr().mcache.prepareForSweep()
5444 // release the current P and acquire allp[0].
5446 // We must do this before destroying our current P
5447 // because p.destroy itself has write barriers, so we
5448 // need to do that from a valid P.
5450 trace := traceAcquire()
5452 // Pretend that we were descheduled
5453 // and then scheduled again to keep
5456 trace.ProcStop(gp.m.p.ptr())
5466 trace := traceAcquire()
5473 // g.m.p is now set, so we no longer need mcache0 for bootstrapping.
5476 // release resources from unused P's
5477 for i := nprocs; i < old; i++ {
5480 // can't free P itself because it can be referenced by an M in syscall
5484 if int32(len(allp)) != nprocs {
5486 allp = allp[:nprocs]
5487 idlepMask = idlepMask[:maskWords]
5488 timerpMask = timerpMask[:maskWords]
5493 for i := nprocs - 1; i >= 0; i-- {
5495 if gp.m.p.ptr() == pp {
5503 pp.link.set(runnablePs)
5507 stealOrder.reset(uint32(nprocs))
5508 var int32p *int32 = &gomaxprocs // make compiler check that gomaxprocs is an int32
5509 atomic.Store((*uint32)(unsafe.Pointer(int32p)), uint32(nprocs))
5511 // Notify the limiter that the amount of procs has changed.
5512 gcCPULimiter.resetCapacity(now, nprocs)
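// procresize is driven by runtime.GOMAXPROCS (and by startup), which stops
// the world, resizes allp, and starts the world again. Querying and changing
// the value from user code:
//
//	package main
//
//	import (
//		"fmt"
//		"runtime"
//	)
//
//	func main() {
//		fmt.Println("current:", runtime.GOMAXPROCS(0)) // 0 only queries
//		prev := runtime.GOMAXPROCS(4)                  // triggers a stop-the-world resize
//		fmt.Println("was:", prev, "now:", runtime.GOMAXPROCS(0))
//	}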
5517 // Associate p and the current m.
5519 // This function is allowed to have write barriers even if the caller
5520 // isn't because it immediately acquires pp.
5522 //go:yeswritebarrierrec
5523 func acquirep(pp *p) {
5524 // Do the part that isn't allowed to have write barriers.
5527 // Have p; write barriers now allowed.
5529 // Perform deferred mcache flush before this P can allocate
5530 // from a potentially stale mcache.
5531 pp.mcache.prepareForSweep()
5533 trace := traceAcquire()
5540 // wirep is the first step of acquirep, which actually associates the
5541 // current M to pp. This is broken out so we can disallow write
5542 // barriers for this part, since we don't yet have a P.
5544 //go:nowritebarrierrec
5550 throw("wirep: already in go")
5552 if pp.m != 0 || pp.status != _Pidle {
5557 print("wirep: p->m=", pp.m, "(", id, ") p->status=", pp.status, "\n")
5558 throw("wirep: invalid p state")
5562 pp.status = _Prunning
5565 // Disassociate p and the current m.
5566 func releasep() *p {
5570 throw("releasep: invalid arg")
5573 if pp.m.ptr() != gp.m || pp.status != _Prunning {
5574 print("releasep: m=", gp.m, " m->p=", gp.m.p.ptr(), " p->m=", hex(pp.m), " p->status=", pp.status, "\n")
5575 throw("releasep: invalid p state")
5577 trace := traceAcquire()
5579 trace.ProcStop(gp.m.p.ptr())
5588 func incidlelocked(v int32) {
5590 sched.nmidlelocked += v
5597 // Check for deadlock situation.
5598 // The check is based on the number of running M's; if it is 0, the program is deadlocked.
5599 // sched.lock must be held.
5601 assertLockHeld(&sched.lock)
5603 // For -buildmode=c-shared or -buildmode=c-archive it's OK if
5604 // there are no running goroutines. The calling program is
5605 // assumed to be running.
5606 if islibrary || isarchive {
5610 // If we are dying because of a signal caught on an already idle thread,
5611 // freezetheworld will cause all running threads to block.
5612 // And the runtime will essentially enter a deadlock state,
5613 // except that there is a thread that will call exit soon.
5614 if panicking.Load() > 0 {
5618 // If we are not running under cgo, but we have an extra M then account
5619 // for it. (It is possible to have an extra M on Windows without cgo to
5620 // accommodate callbacks created by syscall.NewCallback. See issue #6751
5623 if !iscgo && cgoHasExtraM && extraMLength.Load() > 0 {
5627 run := mcount() - sched.nmidle - sched.nmidlelocked - sched.nmsys
5632 print("runtime: checkdead: nmidle=", sched.nmidle, " nmidlelocked=", sched.nmidlelocked, " mcount=", mcount(), " nmsys=", sched.nmsys, "\n")
5634 throw("checkdead: inconsistent counts")
5638 forEachG(func(gp *g) {
5639 if isSystemGoroutine(gp, false) {
5642 s := readgstatus(gp)
5643 switch s &^ _Gscan {
5650 print("runtime: checkdead: find g ", gp.goid, " in status ", s, "\n")
5652 throw("checkdead: runnable g")
5655 if grunning == 0 { // possible if main goroutine calls runtime·Goexit()
5656 unlock(&sched.lock) // unlock so that GODEBUG=scheddetail=1 doesn't hang
5657 fatal("no goroutines (main called runtime.Goexit) - deadlock!")
5660 // Maybe jump time forward for playground.
5662 if when := timeSleepUntil(); when < maxWhen {
5665 // Start an M to steal the timer.
5666 pp, _ := pidleget(faketime)
5668 // There should always be a free P since
5669 // nothing is running.
5671 throw("checkdead: no p for timer")
5675 // There should always be a free M since
5676 // nothing is running.
5678 throw("checkdead: no m for timer")
5680 // M must be spinning to steal. We set this to be
5681 // explicit, but since this is the only M it would
5682 // become spinning on its own anyways.
5683 sched.nmspinning.Add(1)
5686 notewakeup(&mp.park)
5691 // There are no goroutines running, so we can look at the P's.
5692 for _, pp := range allp {
5693 if len(pp.timers) > 0 {
5698 unlock(&sched.lock) // unlock so that GODEBUG=scheddetail=1 doesn't hang
5699 fatal("all goroutines are asleep - deadlock!")
5702 // forcegcperiod is the maximum time in nanoseconds between garbage
5703 // collections. If we go this long without a garbage collection, one
5704 // is forced to run.
5706 // This is a variable for testing purposes. It normally doesn't change.
5707 var forcegcperiod int64 = 2 * 60 * 1e9
5709 // needSysmonWorkaround is true if the workaround for
5710 // golang.org/issue/42515 is needed on NetBSD.
5711 var needSysmonWorkaround bool = false
5713 // Always runs without a P, so write barriers are not allowed.
5715 //go:nowritebarrierrec
5722 lasttrace := int64(0)
5723 idle := 0 // how many cycles in succession we have not woken anybody up
5727 if idle == 0 { // start with 20us sleep...
5729 } else if idle > 50 { // start doubling the sleep after 1ms...
5732 if delay > 10*1000 { // up to 10ms
5737 // sysmon should not enter deep sleep if schedtrace is enabled so that
5738 // it can print that information at the right time.
5740 // It should also not enter deep sleep if there are any active P's so
5741 // that it can retake P's from syscalls, preempt long running G's, and
5742 // poll the network if all P's are busy for long stretches.
5744 // It should wakeup from deep sleep if any P's become active either due
5745 // to exiting a syscall or waking up due to a timer expiring so that it
5746 // can resume performing those duties. If it wakes from a syscall it
5747 // resets idle and delay as a bet that since it had retaken a P from a
5748 // syscall before, it may need to do it again shortly after the
5749 // application starts work again. It does not reset idle when waking
5750 // from a timer to avoid adding system load to applications that spend
5751 // most of their time sleeping.
5753 if debug.schedtrace <= 0 && (sched.gcwaiting.Load() || sched.npidle.Load() == gomaxprocs) {
5755 if sched.gcwaiting.Load() || sched.npidle.Load() == gomaxprocs {
5756 syscallWake := false
5757 next := timeSleepUntil()
5759 sched.sysmonwait.Store(true)
5761 // Make wake-up period small enough
5762 // for the sampling to be correct.
5763 sleep := forcegcperiod / 2
5764 if next-now < sleep {
5767 shouldRelax := sleep >= osRelaxMinNS
5771 syscallWake = notetsleep(&sched.sysmonnote, sleep)
5776 sched.sysmonwait.Store(false)
5777 noteclear(&sched.sysmonnote)
5787 lock(&sched.sysmonlock)
5788 // Update now in case we blocked on sysmonnote or spent a long time
5789 // blocked on schedlock or sysmonlock above.
5792 // trigger libc interceptors if needed
5793 if *cgo_yield != nil {
5794 asmcgocall(*cgo_yield, nil)
5796 // poll network if not polled for more than 10ms
5797 lastpoll := sched.lastpoll.Load()
5798 if netpollinited() && lastpoll != 0 && lastpoll+10*1000*1000 < now {
5799 sched.lastpoll.CompareAndSwap(lastpoll, now)
5800 list, delta := netpoll(0) // non-blocking - returns list of goroutines
5802 // Need to decrement number of idle locked M's
5803 // (pretending that one more is running) before injectglist.
5804 // Otherwise it can lead to the following situation:
5805 // injectglist grabs all P's but before it starts M's to run the P's,
5806 // another M returns from syscall, finishes running its G,
5807 // observes that there is no work to do and no other running M's
5808 // and reports deadlock.
5812 netpollAdjustWaiters(delta)
5815 if GOOS == "netbsd" && needSysmonWorkaround {
5816 // netpoll is responsible for waiting for timer
5817 // expiration, so we typically don't have to worry
5818 // about starting an M to service timers. (Note that
5819 // sleep for timeSleepUntil above simply ensures sysmon
5820 // starts running again when that timer expiration may
5821 // cause Go code to run again).
5823 // However, netbsd has a kernel bug that sometimes
5824 // misses netpollBreak wake-ups, which can lead to
5825 // unbounded delays servicing timers. If we detect this
5826 // overrun, then startm to get something to handle the
5829 // See issue 42515 and
5830 // https://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=50094.
5831 if next := timeSleepUntil(); next < now {
5832 startm(nil, false, false)
5835 if scavenger.sysmonWake.Load() != 0 {
5836 // Kick the scavenger awake if someone requested it.
5839 // retake P's blocked in syscalls
5840 // and preempt long running G's
5841 if retake(now) != 0 {
5846 // check if we need to force a GC
5847 if t := (gcTrigger{kind: gcTriggerTime, now: now}); t.test() && forcegc.idle.Load() {
5849 forcegc.idle.Store(false)
5851 list.push(forcegc.g)
5853 unlock(&forcegc.lock)
5855 if debug.schedtrace > 0 && lasttrace+int64(debug.schedtrace)*1000000 <= now {
5857 schedtrace(debug.scheddetail > 0)
5859 unlock(&sched.sysmonlock)
5863 type sysmontick struct {
5870 // forcePreemptNS is the time slice given to a G before it is preempted.
5872 const forcePreemptNS = 10 * 1000 * 1000 // 10ms
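// The ~10ms slice above, together with non-cooperative preemption on platforms
// that support it, is what keeps a tight loop from monopolizing a P. A sketch
// that relies on that behaviour (assumes Go 1.14+ with async preemption
// enabled; with preemption disabled this program could hang at GOMAXPROCS=1):
//
//	package main
//
//	import (
//		"fmt"
//		"runtime"
//		"time"
//	)
//
//	func main() {
//		runtime.GOMAXPROCS(1) // a single P makes the preemption observable
//		go func() {
//			for {
//				// no calls, no allocations: other goroutines make progress
//				// only because sysmon/retake preempts this loop
//			}
//		}()
//		time.Sleep(50 * time.Millisecond)
//		fmt.Println("main goroutine still made progress")
//	}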
5874 func retake(now int64) uint32 {
5876 // Prevent allp slice changes. This lock will be completely
5877 // uncontended unless we're already stopping the world.
5879 // We can't use a range loop over allp because we may
5880 // temporarily drop the allpLock. Hence, we need to re-fetch
5881 // allp each time around the loop.
5882 for i := 0; i < len(allp); i++ {
5885 // This can happen if procresize has grown
5886 // allp but not yet created new Ps.
5889 pd := &pp.sysmontick
5892 if s == _Prunning || s == _Psyscall {
5893 // Preempt G if it's running for too long.
5894 t := int64(pp.schedtick)
5895 if int64(pd.schedtick) != t {
5896 pd.schedtick = uint32(t)
5898 } else if pd.schedwhen+forcePreemptNS <= now {
5900 // In case of syscall, preemptone() doesn't
5901 // work, because there is no M wired to P.
5906 // Retake P from syscall if it's there for more than 1 sysmon tick (at least 20us).
5907 t := int64(pp.syscalltick)
5908 if !sysretake && int64(pd.syscalltick) != t {
5909 pd.syscalltick = uint32(t)
5910 pd.syscallwhen = now
5913 // On the one hand we don't want to retake Ps if there is no other work to do,
5914 // but on the other hand we want to retake them eventually
5915 // because they can prevent the sysmon thread from deep sleep.
5916 if runqempty(pp) && sched.nmspinning.Load()+sched.npidle.Load() > 0 && pd.syscallwhen+10*1000*1000 > now {
5919 // Drop allpLock so we can take sched.lock.
5921 // Need to decrement number of idle locked M's
5922 // (pretending that one more is running) before the CAS.
5923 // Otherwise the M from which we retake can exit the syscall,
5924 // increment nmidle and report deadlock.
5926 if atomic.Cas(&pp.status, s, _Pidle) {
5927 trace := traceAcquire()
5929 trace.GoSysBlock(pp)
5945 // Tell all goroutines that they have been preempted and they should stop.
5946 // This function is purely best-effort. It can fail to inform a goroutine if a
5947 // processor just started running it.
5948 // No locks need to be held.
5949 // Returns true if preemption request was issued to at least one goroutine.
5950 func preemptall() bool {
5952 for _, pp := range allp {
5953 if pp.status != _Prunning {
5963 // Tell the goroutine running on processor P to stop.
5964 // This function is purely best-effort. It can incorrectly fail to inform the
5965 // goroutine. It can inform the wrong goroutine. Even if it informs the
5966 // correct goroutine, that goroutine might ignore the request if it is
5967 // simultaneously executing newstack.
5968 // No lock needs to be held.
5969 // Returns true if preemption request was issued.
5970 // The actual preemption will happen at some point in the future
5971 // and will be indicated by the gp->status no longer being Grunning.
5973 func preemptone(pp *p) bool {
5975 if mp == nil || mp == getg().m {
5979 if gp == nil || gp == mp.g0 {
5985 // Every call in a goroutine checks for stack overflow by
5986 // comparing the current stack pointer to gp->stackguard0.
5987 // Setting gp->stackguard0 to StackPreempt folds
5988 // preemption into the normal stack overflow check.
5989 gp.stackguard0 = stackPreempt
5991 // Request an async preemption of this P.
5992 if preemptMSupported && debug.asyncpreemptoff == 0 {
6002 func schedtrace(detailed bool) {
6009 print("SCHED ", (now-starttime)/1e6, "ms: gomaxprocs=", gomaxprocs, " idleprocs=", sched.npidle.Load(), " threads=", mcount(), " spinningthreads=", sched.nmspinning.Load(), " needspinning=", sched.needspinning.Load(), " idlethreads=", sched.nmidle, " runqueue=", sched.runqsize)
6011 print(" gcwaiting=", sched.gcwaiting.Load(), " nmidlelocked=", sched.nmidlelocked, " stopwait=", sched.stopwait, " sysmonwait=", sched.sysmonwait.Load(), "\n")
6013 // We must be careful while reading data from P's, M's and G's.
6014 // Even if we hold schedlock, most data can be changed concurrently.
6015 // E.g. (p->m ? p->m->id : -1) can crash if p->m changes from non-nil to nil.
6016 for i, pp := range allp {
6018 h := atomic.Load(&pp.runqhead)
6019 t := atomic.Load(&pp.runqtail)
6021 print(" P", i, ": status=", pp.status, " schedtick=", pp.schedtick, " syscalltick=", pp.syscalltick, " m=")
6027 print(" runqsize=", t-h, " gfreecnt=", pp.gFree.n, " timerslen=", len(pp.timers), "\n")
6029 // In non-detailed mode, format the lengths of the per-P run queues as:
6030 // [len1 len2 len3 len4]
6036 if i == len(allp)-1 {
6047 for mp := allm; mp != nil; mp = mp.alllink {
6049 print(" M", mp.id, ": p=")
6061 print(" mallocing=", mp.mallocing, " throwing=", mp.throwing, " preemptoff=", mp.preemptoff, " locks=", mp.locks, " dying=", mp.dying, " spinning=", mp.spinning, " blocked=", mp.blocked, " lockedg=")
6062 if lockedg := mp.lockedg.ptr(); lockedg != nil {
6070 forEachG(func(gp *g) {
6071 print(" G", gp.goid, ": status=", readgstatus(gp), "(", gp.waitreason.String(), ") m=")
6078 if lockedm := gp.lockedm.ptr(); lockedm != nil {
6088 // schedEnableUser enables or disables the scheduling of user
6091 // This does not stop already running user goroutines, so the caller
6092 // should first stop the world when disabling user goroutines.
6093 func schedEnableUser(enable bool) {
6095 if sched.disable.user == !enable {
6099 sched.disable.user = !enable
6101 n := sched.disable.n
6103 globrunqputbatch(&sched.disable.runnable, n)
6105 for ; n != 0 && sched.npidle.Load() != 0; n-- {
6106 startm(nil, false, false)
6113 // schedEnabled reports whether gp should be scheduled. It returns
6114 // false if scheduling of gp is disabled.
6116 // sched.lock must be held.
6117 func schedEnabled(gp *g) bool {
6118 assertLockHeld(&sched.lock)
6120 if sched.disable.user {
6121 return isSystemGoroutine(gp, true)
6126 // Put mp on midle list.
6127 // sched.lock must be held.
6128 // May run during STW, so write barriers are not allowed.
6130 //go:nowritebarrierrec
6132 assertLockHeld(&sched.lock)
6134 mp.schedlink = sched.midle
6140 // Try to get an m from midle list.
6141 // sched.lock must be held.
6142 // May run during STW, so write barriers are not allowed.
6144 //go:nowritebarrierrec
6146 assertLockHeld(&sched.lock)
6148 mp := sched.midle.ptr()
6150 sched.midle = mp.schedlink
6156 // Put gp on the global runnable queue.
6157 // sched.lock must be held.
6158 // May run during STW, so write barriers are not allowed.
6160 //go:nowritebarrierrec
6161 func globrunqput(gp *g) {
6162 assertLockHeld(&sched.lock)
6164 sched.runq.pushBack(gp)
6168 // Put gp at the head of the global runnable queue.
6169 // sched.lock must be held.
6170 // May run during STW, so write barriers are not allowed.
6172 //go:nowritebarrierrec
6173 func globrunqputhead(gp *g) {
6174 assertLockHeld(&sched.lock)
6180 // Put a batch of runnable goroutines on the global runnable queue.
6181 // This clears *batch.
6182 // sched.lock must be held.
6183 // May run during STW, so write barriers are not allowed.
6185 //go:nowritebarrierrec
6186 func globrunqputbatch(batch *gQueue, n int32) {
6187 assertLockHeld(&sched.lock)
6189 sched.runq.pushBackAll(*batch)
6194 // Try to get a batch of G's from the global runnable queue.
6195 // sched.lock must be held.
6196 func globrunqget(pp *p, max int32) *g {
6197 assertLockHeld(&sched.lock)
6199 if sched.runqsize == 0 {
6203 n := sched.runqsize/gomaxprocs + 1
6204 if n > sched.runqsize {
6207 if max > 0 && n > max {
6210 if n > int32(len(pp.runq))/2 {
6211 n = int32(len(pp.runq)) / 2
6216 gp := sched.runq.pop()
6219 gp1 := sched.runq.pop()
6220 runqput(pp, gp1, false)
6225 // pMask is an atomic bitstring with one bit per P.
6228 // read returns true if P id's bit is set.
6229 func (p pMask) read(id uint32) bool {
6231 mask := uint32(1) << (id % 32)
6232 return (atomic.Load(&p[word]) & mask) != 0
6235 // set sets P id's bit.
6236 func (p pMask) set(id int32) {
6238 mask := uint32(1) << (id % 32)
6239 atomic.Or(&p[word], mask)
6242 // clear clears P id's bit.
6243 func (p pMask) clear(id int32) {
6245 mask := uint32(1) << (id % 32)
6246 atomic.And(&p[word], ^mask)
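// The same word/bit split (id/32 selects the word, 1<<(id%32) the bit) can be
// written against the public sync/atomic package. A stand-alone sketch of a
// pMask-like bitmask, with CAS loops standing in for the runtime's atomic
// Or/And:
//
//	package main
//
//	import (
//		"fmt"
//		"sync/atomic"
//	)
//
//	type bitmask []uint32
//
//	func (m bitmask) read(id uint32) bool {
//		return atomic.LoadUint32(&m[id/32])&(1<<(id%32)) != 0
//	}
//
//	func (m bitmask) set(id uint32) {
//		word, bit := &m[id/32], uint32(1)<<(id%32)
//		for {
//			old := atomic.LoadUint32(word)
//			if old&bit != 0 || atomic.CompareAndSwapUint32(word, old, old|bit) {
//				return
//			}
//		}
//	}
//
//	func (m bitmask) clear(id uint32) {
//		word, bit := &m[id/32], uint32(1)<<(id%32)
//		for {
//			old := atomic.LoadUint32(word)
//			if old&bit == 0 || atomic.CompareAndSwapUint32(word, old, old&^bit) {
//				return
//			}
//		}
//	}
//
//	func main() {
//		m := make(bitmask, 2) // room for 64 ids
//		m.set(40)
//		fmt.Println(m.read(40), m.read(41)) // true false
//		m.clear(40)
//		fmt.Println(m.read(40)) // false
//	}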
6249 // updateTimerPMask clears pp's timer mask if it has no timers on its heap.
6251 // Ideally, the timer mask would be kept immediately consistent on any timer
6252 // operations. Unfortunately, updating a shared global data structure in the
6253 // timer hot path adds too much overhead in applications frequently switching
6254 // between no timers and some timers.
6256 // As a compromise, the timer mask is updated only on pidleget / pidleput. A
6257 // running P (returned by pidleget) may add a timer at any time, so its mask
6258 // must be set. An idle P (passed to pidleput) cannot add new timers while
6259 // idle, so if it has no timers at that time, its mask may be cleared.
6261 // Thus, we get the following effects on timer-stealing in findrunnable:
6263 // - Idle Ps with no timers when they go idle are never checked in findrunnable
6264 // (for work- or timer-stealing; this is the ideal case).
6265 // - Running Ps must always be checked.
6266 // - Idle Ps whose timers are stolen must continue to be checked until they run
6267 // again, even after timer expiration.
6269 // When the P starts running again, the mask should be set, as a timer may be
6270 // added at any time.
6272 // TODO(prattmic): Additional targeted updates may improve the above cases.
6273 // e.g., updating the mask when stealing a timer.
6274 func updateTimerPMask(pp *p) {
6275 if pp.numTimers.Load() > 0 {
6279 // Looks like there are no timers, however another P may transiently
6280 // decrement numTimers when handling a timerModified timer in
6281 // checkTimers. We must take timersLock to serialize with these changes.
6282 lock(&pp.timersLock)
6283 if pp.numTimers.Load() == 0 {
6284 timerpMask.clear(pp.id)
6286 unlock(&pp.timersLock)
6289 // pidleput puts p on the _Pidle list. now must be a relatively recent call
6290 // to nanotime or zero. Returns now or the current time if now was zero.
6292 // This releases ownership of p. Once sched.lock is released it is no longer safe to use p.
6295 // sched.lock must be held.
6297 // May run during STW, so write barriers are not allowed.
6299 //go:nowritebarrierrec
6300 func pidleput(pp *p, now int64) int64 {
6301 assertLockHeld(&sched.lock)
6304 throw("pidleput: P has non-empty run queue")
6309 updateTimerPMask(pp) // clear if there are no timers.
6310 idlepMask.set(pp.id)
6311 pp.link = sched.pidle
6314 if !pp.limiterEvent.start(limiterEventIdle, now) {
6315 throw("must be able to track idle limiter event")
6320 // pidleget tries to get a p from the _Pidle list, acquiring ownership.
6322 // sched.lock must be held.
6324 // May run during STW, so write barriers are not allowed.
6326 //go:nowritebarrierrec
6327 func pidleget(now int64) (*p, int64) {
6328 assertLockHeld(&sched.lock)
6330 pp := sched.pidle.ptr()
6332 // Timer may get added at any time now.
6336 timerpMask.set(pp.id)
6337 idlepMask.clear(pp.id)
6338 sched.pidle = pp.link
6339 sched.npidle.Add(-1)
6340 pp.limiterEvent.stop(limiterEventIdle, now)
6345 // pidlegetSpinning tries to get a p from the _Pidle list, acquiring ownership.
6346 // This is called by spinning Ms (or callers that need a spinning M) that have
6347 // found work. If no P is available, this must be synchronized with non-spinning
6348 // Ms that may be preparing to drop their P without discovering this work.
6350 // sched.lock must be held.
6352 // May run during STW, so write barriers are not allowed.
6354 //go:nowritebarrierrec
6355 func pidlegetSpinning(now int64) (*p, int64) {
6356 assertLockHeld(&sched.lock)
6358 pp, now := pidleget(now)
6360 // See "Delicate dance" comment in findrunnable. We found work
6361 // that we cannot take, so we must synchronize with non-spinning
6362 // Ms that may be preparing to drop their P.
6363 sched.needspinning.Store(1)
6370 // runqempty reports whether pp has no Gs on its local run queue.
6371 // It never returns true spuriously.
6372 func runqempty(pp *p) bool {
6373 // Defend against a race where 1) pp has G1 in runqnext but runqhead == runqtail,
6374 // 2) runqput on pp kicks G1 to the runq, 3) runqget on pp empties runqnext.
6375 // Simply observing that runqhead == runqtail and then observing that runqnext == nil
6376 // does not mean the queue is empty.
6378 head := atomic.Load(&pp.runqhead)
6379 tail := atomic.Load(&pp.runqtail)
6380 runnext := atomic.Loaduintptr((*uintptr)(unsafe.Pointer(&pp.runnext)))
6381 if tail == atomic.Load(&pp.runqtail) {
6382 return head == tail && runnext == 0
6387 // To shake out latent assumptions about scheduling order,
6388 // we introduce some randomness into scheduling decisions
6389 // when running with the race detector.
6390 // The need for this was made obvious by changing the
6391 // (deterministic) scheduling order in Go 1.5 and breaking
6392 // many poorly-written tests.
6393 // With the randomness here, as long as the tests pass
6394 // consistently with -race, they shouldn't have latent scheduling assumptions.
6396 const randomizeScheduler = raceenabled
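// A test with a latent assumption about goroutine start order may look
// deterministic in normal builds even though the order is never guaranteed;
// the randomization above makes such assumptions more likely to fail under
// -race. A sketch of the kind of test this shakes out (the package name is
// arbitrary):
//
//	package demo
//
//	import (
//		"sync"
//		"testing"
//	)
//
//	func TestAssumesSpawnOrder(t *testing.T) {
//		var (
//			mu    sync.Mutex
//			order []int
//			wg    sync.WaitGroup
//		)
//		for i := 0; i < 8; i++ {
//			wg.Add(1)
//			go func(i int) {
//				defer wg.Done()
//				mu.Lock()
//				order = append(order, i)
//				mu.Unlock()
//			}(i)
//		}
//		wg.Wait()
//		for i, v := range order {
//			if v != i { // wrongly assumes goroutines ran in spawn order
//				t.Fatalf("not in spawn order: %v", order)
//			}
//		}
//	}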
6398 // runqput tries to put g on the local runnable queue.
6399 // If next is false, runqput adds g to the tail of the runnable queue.
6400 // If next is true, runqput puts g in the pp.runnext slot.
6401 // If the run queue is full, runqput puts g on the global queue.
6402 // Executed only by the owner P.
6403 func runqput(pp *p, gp *g, next bool) {
6404 if randomizeScheduler && next && fastrandn(2) == 0 {
6410 oldnext := pp.runnext
6411 if !pp.runnext.cas(oldnext, guintptr(unsafe.Pointer(gp))) {
6417 // Kick the old runnext out to the regular run queue.
6422 h := atomic.LoadAcq(&pp.runqhead) // load-acquire, synchronize with consumers
6424 if t-h < uint32(len(pp.runq)) {
6425 pp.runq[t%uint32(len(pp.runq))].set(gp)
6426 atomic.StoreRel(&pp.runqtail, t+1) // store-release, makes the item available for consumption
6429 if runqputslow(pp, gp, h, t) {
6432 // the queue is not full, now the put above must succeed
6436 // Put g and a batch of work from local runnable queue on global queue.
6437 // Executed only by the owner P.
6438 func runqputslow(pp *p, gp *g, h, t uint32) bool {
6439 var batch [len(pp.runq)/2 + 1]*g
6441 // First, grab a batch from local queue.
6444 if n != uint32(len(pp.runq)/2) {
6445 throw("runqputslow: queue is not full")
6447 for i := uint32(0); i < n; i++ {
6448 batch[i] = pp.runq[(h+i)%uint32(len(pp.runq))].ptr()
6450 if !atomic.CasRel(&pp.runqhead, h, h+n) { // cas-release, commits consume
6455 if randomizeScheduler {
6456 for i := uint32(1); i <= n; i++ {
6457 j := fastrandn(i + 1)
6458 batch[i], batch[j] = batch[j], batch[i]
6462 // Link the goroutines.
6463 for i := uint32(0); i < n; i++ {
6464 batch[i].schedlink.set(batch[i+1])
6467 q.head.set(batch[0])
6468 q.tail.set(batch[n])
6470 // Now put the batch on global queue.
6472 globrunqputbatch(&q, int32(n+1))
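// runqput, runqget and runqputslow all index a fixed-size ring with head and
// tail counters that only ever increase: the slot is the counter modulo the
// ring size, and tail-head is the current length (unsigned wraparound keeps
// that correct). A single-owner sketch of the indexing, without the atomics
// and without the spill to the global queue:
//
//	type ring struct {
//		head, tail uint32 // only ever incremented; slots are counter mod len(buf)
//		buf        [256]int
//	}
//
//	func (r *ring) put(v int) bool {
//		if r.tail-r.head == uint32(len(r.buf)) {
//			return false // full: the real runqput spills half to the global queue
//		}
//		r.buf[r.tail%uint32(len(r.buf))] = v
//		r.tail++
//		return true
//	}
//
//	func (r *ring) get() (int, bool) {
//		if r.head == r.tail {
//			return 0, false // empty
//		}
//		v := r.buf[r.head%uint32(len(r.buf))]
//		r.head++
//		return v, true
//	}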
6477 // runqputbatch tries to put all the G's on q on the local runnable queue.
6478 // If the queue is full, they are put on the global queue; in that case
6479 // this will temporarily acquire the scheduler lock.
6480 // Executed only by the owner P.
6481 func runqputbatch(pp *p, q *gQueue, qsize int) {
6482 h := atomic.LoadAcq(&pp.runqhead)
6485 for !q.empty() && t-h < uint32(len(pp.runq)) {
6487 pp.runq[t%uint32(len(pp.runq))].set(gp)
6493 if randomizeScheduler {
6494 off := func(o uint32) uint32 {
6495 return (pp.runqtail + o) % uint32(len(pp.runq))
6497 for i := uint32(1); i < n; i++ {
6498 j := fastrandn(i + 1)
6499 pp.runq[off(i)], pp.runq[off(j)] = pp.runq[off(j)], pp.runq[off(i)]
6503 atomic.StoreRel(&pp.runqtail, t)
6506 globrunqputbatch(q, int32(qsize))
6511 // Get g from local runnable queue.
6512 // If inheritTime is true, gp should inherit the remaining time in the
6513 // current time slice. Otherwise, it should start a new time slice.
6514 // Executed only by the owner P.
6515 func runqget(pp *p) (gp *g, inheritTime bool) {
6516 // If there's a runnext, it's the next G to run.
6518 // If the runnext is non-0 and the CAS fails, it could only have been stolen by another P,
6519 // because other Ps can race to set runnext to 0, but only the current P can set it to non-0.
6520 // Hence, there's no need to retry this CAS if it fails.
6521 if next != 0 && pp.runnext.cas(next, 0) {
6522 return next.ptr(), true
6526 h := atomic.LoadAcq(&pp.runqhead) // load-acquire, synchronize with other consumers
6531 gp := pp.runq[h%uint32(len(pp.runq))].ptr()
6532 if atomic.CasRel(&pp.runqhead, h, h+1) { // cas-release, commits consume
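// A caller therefore sees two flavors of result (sketch of the calling
// pattern, loosely following the schedule/execute path; not a verbatim copy):
//
//	if gp, inheritTime := runqget(pp); gp != nil {
//		execute(gp, inheritTime) // a runnext G keeps the current time slice
//	}
//
// Returning inheritTime=true for the runnext G helps keep a ready-and-wait
// pair of goroutines from monopolizing the P by restarting the time slice on
// every switch, which would starve the rest of the local run queue.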
6538 // runqdrain drains the local runnable queue of pp and returns all goroutines in it.
6539 // Executed only by the owner P.
6540 func runqdrain(pp *p) (drainQ gQueue, n uint32) {
6541 oldNext := pp.runnext
6542 if oldNext != 0 && pp.runnext.cas(oldNext, 0) {
6543 drainQ.pushBack(oldNext.ptr())
6548 h := atomic.LoadAcq(&pp.runqhead) // load-acquire, synchronize with other consumers
6554 if qn > uint32(len(pp.runq)) { // read inconsistent h and t
6558 if !atomic.CasRel(&pp.runqhead, h, h+qn) { // cas-release, commits consume
6562 // We advance the head pointer before copying the G's out of the local queue
6563 // (rather than copying first and advancing afterwards) because runqdrain()
6564 // and runqsteal() can run in parallel, and we must not touch a G's links
6565 // while another P might still claim it.
6566 // Advancing the head first gives this P full ownership of those slots, so any
6567 // gp.schedlink update happens only on G's that other P's can no longer steal.
6568 // See https://groups.google.com/g/golang-dev/c/0pTKxEKhHSc/m/6Q85QjdVBQAJ for more details.
6569 for i := uint32(0); i < qn; i++ {
6570 gp := pp.runq[(h+i)%uint32(len(pp.runq))].ptr()
6571 drainQ.pushBack(gp)
6572 n++
6577 // Grabs a batch of goroutines from pp's runnable queue into batch.
6578 // Batch is a ring buffer starting at batchHead.
6579 // Returns number of grabbed goroutines.
6580 // Can be executed by any P.
6581 func runqgrab(pp *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32 {
6583 h := atomic.LoadAcq(&pp.runqhead) // load-acquire, synchronize with other consumers
6584 t := atomic.LoadAcq(&pp.runqtail) // load-acquire, synchronize with the producer
6585 n := t - h
6586 n = n - n/2
6587 if n == 0 {
6588 if stealRunNextG {
6589 // Try to steal from pp.runnext.
6590 if next := pp.runnext; next != 0 {
6591 if pp.status == _Prunning {
6592 // Sleep to ensure that pp isn't about to run the g
6593 // we are about to steal.
6594 // The important use case here is when the g running
6595 // on pp ready()s another g and then almost
6596 // immediately blocks. Instead of stealing runnext
6597 // in this window, back off to give pp a chance to
6598 // schedule runnext. This will avoid thrashing gs
6599 // between different Ps.
6600 // A sync chan send/recv takes ~50ns as of the time of
6601 // writing, so 3us gives ~50x overshoot.
6602 if GOOS != "windows" && GOOS != "openbsd" && GOOS != "netbsd" {
6603 usleep(3)
6604 } else {
6605 // On some platforms system timer granularity is
6606 // 1-15ms, which is way too much for this
6607 // optimization. So just yield.
6611 if !pp.runnext.cas(next, 0) {
6614 batch[batchHead%uint32(len(batch))] = next
6620 if n > uint32(len(pp.runq)/2) { // read inconsistent h and t
6623 for i := uint32(0); i < n; i++ {
6624 g := pp.runq[(h+i)%uint32(len(pp.runq))]
6625 batch[(batchHead+i)%uint32(len(batch))] = g
6627 if atomic.CasRel(&pp.runqhead, h, h+n) { // cas-release, commits consume
6633 // Steal half of elements from local runnable queue of p2
6634 // and put onto local runnable queue of p.
6635 // Returns one of the stolen elements (or nil if failed).
6636 func runqsteal(pp, p2 *p, stealRunNextG bool) *g {
6637 t := pp.runqtail
6638 n := runqgrab(p2, &pp.runq, t, stealRunNextG)
6639 if n == 0 {
6640 return nil
6641 }
6642 n--
6643 gp := pp.runq[(t+n)%uint32(len(pp.runq))].ptr()
6647 h := atomic.LoadAcq(&pp.runqhead) // load-acquire, synchronize with consumers
6648 if t-h+n >= uint32(len(pp.runq)) {
6649 throw("runqsteal: runq overflow")
6651 atomic.StoreRel(&pp.runqtail, t+n) // store-release, makes the item available for consumption
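// Putting the pieces together, the work-stealing loop uses runqsteal roughly
// like this (simplified sketch, not a verbatim copy of the runtime's steal
// loop):
//
//	for enum := stealOrder.start(fastrand()); !enum.done(); enum.next() {
//		p2 := allp[enum.position()]
//		if p2 == pp {
//			continue
//		}
//		if gp := runqsteal(pp, p2, stealRunNextG); gp != nil {
//			return gp // run this G now; the rest of the batch stays on pp.runq
//		}
//	}
//
// A successful steal therefore both yields a G to run immediately and refills
// pp's local queue for the runqget calls that follow.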
6655 // A gQueue is a deque of Gs linked through g.schedlink. A G can only
6656 // be on one gQueue or gList at a time.
6657 type gQueue struct {
6662 // empty reports whether q is empty.
6663 func (q *gQueue) empty() bool {
6667 // push adds gp to the head of q.
6668 func (q *gQueue) push(gp *g) {
6669 gp.schedlink = q.head
6676 // pushBack adds gp to the tail of q.
6677 func (q *gQueue) pushBack(gp *g) {
6680 q.tail.ptr().schedlink.set(gp)
6687 // pushBackAll adds all Gs in q2 to the tail of q. After this q2 must
6688 // not be used.
6689 func (q *gQueue) pushBackAll(q2 gQueue) {
6693 q2.tail.ptr().schedlink = 0
6695 q.tail.ptr().schedlink = q2.head
6702 // pop removes and returns the head of queue q. It returns nil if
6703 // q is empty.
6704 func (q *gQueue) pop() *g {
6707 q.head = gp.schedlink
6715 // popList takes all Gs in q and returns them as a gList.
6716 func (q *gQueue) popList() gList {
6717 stack := gList{q.head}
6722 // A gList is a list of Gs linked through g.schedlink. A G can only be
6723 // on one gQueue or gList at a time.
6728 // empty reports whether l is empty.
6729 func (l *gList) empty() bool {
6733 // push adds gp to the head of l.
6734 func (l *gList) push(gp *g) {
6735 gp.schedlink = l.head
6739 // pushAll prepends all Gs in q to l.
6740 func (l *gList) pushAll(q gQueue) {
6742 q.tail.ptr().schedlink = l.head
6747 // pop removes and returns the head of l. If l is empty, it returns nil.
6748 func (l *gList) pop() *g {
6751 l.head = gp.schedlink
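// Because both containers link Gs through the single g.schedlink field, moving
// Gs between them is pure pointer surgery with no allocation, at the cost that
// a G can sit on at most one gQueue or gList at a time. Illustrative sketch of
// the usual move pattern (gp1, gp2 are stand-in *g values):
//
//	var q gQueue
//	q.pushBack(gp1)
//	q.pushBack(gp2)
//
//	var l gList
//	l.pushAll(q) // q must not be used again; its Gs now live on l
//	for gp := l.pop(); gp != nil; gp = l.pop() {
//		// gp is unlinked and may now be placed on another queue or list
//	}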
6756 //go:linkname setMaxThreads runtime/debug.setMaxThreads
6757 func setMaxThreads(in int) (out int) {
6759 out = int(sched.maxmcount)
6760 if in > 0x7fffffff { // MaxInt32
6761 sched.maxmcount = 0x7fffffff
6762 } else {
6763 sched.maxmcount = int32(in)
6771 func procPin() int {
6772 gp := getg()
6773 mp := gp.m
6775 mp.locks++
6776 return int(mp.p.ptr().id)
6785 //go:linkname sync_runtime_procPin sync.runtime_procPin
6787 func sync_runtime_procPin() int {
6791 //go:linkname sync_runtime_procUnpin sync.runtime_procUnpin
6793 func sync_runtime_procUnpin() {
6797 //go:linkname sync_atomic_runtime_procPin sync/atomic.runtime_procPin
6799 func sync_atomic_runtime_procPin() int {
6803 //go:linkname sync_atomic_runtime_procUnpin sync/atomic.runtime_procUnpin
6805 func sync_atomic_runtime_procUnpin() {
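// These pin/unpin hooks are what let packages like sync build per-P data
// structures without locks. Rough sketch of the pattern (loosely modeled on
// how sync.Pool keeps per-P locals; locals/local/x are stand-in names):
//
//	pid := runtime_procPin() // disables preemption and returns the P's id
//	local := &locals[pid]    // only code pinned to this P touches locals[pid]
//	local.private = x
//	runtime_procUnpin()
//
// While pinned, the goroutine cannot migrate to another P or be preempted, so
// indexing by the P id is safe for the duration of the pin.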
6809 // Active spinning for sync.Mutex.
6811 //go:linkname sync_runtime_canSpin sync.runtime_canSpin
6813 func sync_runtime_canSpin(i int) bool {
6814 // sync.Mutex is cooperative, so we are conservative with spinning.
6815 // Spin only a few times and only if running on a multicore machine and
6816 // GOMAXPROCS>1 and there is at least one other running P and local runq is empty.
6817 // As opposed to runtime mutex we don't do passive spinning here,
6818 // because there can be work on global runq or on other Ps.
6819 if i >= active_spin || ncpu <= 1 || gomaxprocs <= sched.npidle.Load()+sched.nmspinning.Load()+1 {
6822 if p := getg().m.p.ptr(); !runqempty(p) {
6828 //go:linkname sync_runtime_doSpin sync.runtime_doSpin
6830 func sync_runtime_doSpin() {
6831 procyield(active_spin_cnt)
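// Roughly how sync.Mutex's slow path drives these hooks (simplified sketch,
// not a verbatim copy of sync/mutex.go; tryingToLock is a stand-in condition):
//
//	iter := 0
//	for tryingToLock {
//		if runtime_canSpin(iter) {
//			runtime_doSpin() // short burst of PAUSE-like spins, stays on the CPU
//			iter++
//			continue
//		}
//		break // give up spinning and fall back to the semaphore sleep path
//	}
//
// canSpin bounds the number of spin rounds and refuses to spin when no other
// running P could plausibly release the mutex soon, so the busy-wait only
// burns cycles when a quick unlock is likely.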
6834 var stealOrder randomOrder
6836 // randomOrder/randomEnum are helper types for randomized work stealing.
6837 // They allow enumerating all Ps in different pseudo-random orders without repetitions.
6838 // The algorithm is based on the fact that if we have X such that X and GOMAXPROCS
6839 // are coprime, then the sequence (i + X) % GOMAXPROCS gives the required enumeration.
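// For example (illustrative): with count = 6 the coprimes are {1, 5}. Choosing
// inc = 5 and a starting position of 2 visits
//
//	2, (2+5)%6 = 1, 0, 5, 4, 3
//
// covering every P exactly once; different random choices of inc and starting
// position give different probe orders, so Ps are not always scanned in the
// same sequence when stealing.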
6840 type randomOrder struct {
6845 type randomEnum struct {
6852 func (ord *randomOrder) reset(count uint32) {
6854 ord.coprimes = ord.coprimes[:0]
6855 for i := uint32(1); i <= count; i++ {
6856 if gcd(i, count) == 1 {
6857 ord.coprimes = append(ord.coprimes, i)
6862 func (ord *randomOrder) start(i uint32) randomEnum {
6863 return randomEnum{
6864 count: ord.count,
6865 pos: i % ord.count,
6866 inc: ord.coprimes[i/ord.count%uint32(len(ord.coprimes))],
6870 func (enum *randomEnum) done() bool {
6871 return enum.i == enum.count
6874 func (enum *randomEnum) next() {
6876 enum.pos = (enum.pos + enum.inc) % enum.count
6879 func (enum *randomEnum) position() uint32 {
6883 func gcd(a, b uint32) uint32 {
6890 // An initTask represents the set of initializations that need to be done for a package.
6891 // Keep in sync with ../../test/noinit.go:initTask
6892 type initTask struct {
6893 state uint32 // 0 = uninitialized, 1 = in progress, 2 = done
6895 // followed by nfns pcs, uintptr sized, one per init function to run
6898 // inittrace stores statistics for init functions; the statistics are
6899 // updated by malloc and newproc when active is true.
6900 var inittrace tracestat
6902 type tracestat struct {
6903 active bool // init tracing activation status
6904 id uint64 // init goroutine id
6905 allocs uint64 // heap allocations
6906 bytes uint64 // heap allocated bytes
6909 func doInit(ts []*initTask) {
6910 for _, t := range ts {
6915 func doInit1(t *initTask) {
6916 switch t.state {
6917 case 2: // fully initialized
6919 case 1: // initialization in progress
6920 throw("recursive call during initialization - linker skew")
6921 default: // not initialized yet
6922 t.state = 1 // initialization in progress
6929 if inittrace.active {
6931 // Load stats non-atomically since inittrace is updated only by this init goroutine.
6935 if t.nfns == 0 {
6936 // We should have pruned all of these in the linker.
6937 throw("inittask with no functions")
6940 firstFunc := add(unsafe.Pointer(t), 8)
6941 for i := uint32(0); i < t.nfns; i++ {
6942 p := add(firstFunc, uintptr(i)*goarch.PtrSize)
6943 f := *(*func())(unsafe.Pointer(&p))
6947 if inittrace.active {
6949 // Load stats non-atomically since inittrace is updated only by this init goroutine.
6952 f := *(*func())(unsafe.Pointer(&firstFunc))
6953 pkg := funcpkgpath(findfunc(abi.FuncPCABIInternal(f)))
6956 print("init ", pkg, " @")
6957 print(string(fmtNSAsMS(sbuf[:], uint64(start-runtimeInitTime))), " ms, ")
6958 print(string(fmtNSAsMS(sbuf[:], uint64(end-start))), " ms clock, ")
6959 print(string(itoa(sbuf[:], after.bytes-before.bytes)), " bytes, ")
6960 print(string(itoa(sbuf[:], after.allocs-before.allocs)), " allocs")
6964 t.state = 2 // initialization done
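// For reference, the memory doInit1 walks has this layout (sketch; the offsets
// assume nfns, a uint32 counter, immediately follows state, which is consistent
// with firstFunc being computed as add(unsafe.Pointer(t), 8) above):
//
//	offset 0: state (uint32)
//	offset 4: nfns  (uint32)
//	offset 8: nfns program counters, each uintptr-sized, one per init function
//
// so the i'th init function is read from firstFunc + i*goarch.PtrSize and
// called in order.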