1 // Copyright 2014 The Go Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style
3 // license that can be found in the LICENSE file.
11 "runtime/internal/atomic"
12 "runtime/internal/sys"
16 // set using cmd/go/internal/modload.ModInfoProg
19 // Goroutine scheduler
20 // The scheduler's job is to distribute ready-to-run goroutines over worker threads.
22 // The main concepts are:
24 // M - worker thread, or machine.
25 // P - processor, a resource that is required to execute Go code.
26 // M must have an associated P to execute Go code; however, it can be
27 // blocked or in a syscall without an associated P.
29 // Design doc at https://golang.org/s/go11sched.
31 // Worker thread parking/unparking.
32 // We need to balance between keeping enough running worker threads to utilize
33 // available hardware parallelism and parking excessive running worker threads
34 // to conserve CPU resources and power. This is not simple for two reasons:
35 // (1) scheduler state is intentionally distributed (in particular, per-P work
36 // queues), so it is not possible to compute global predicates on fast paths;
37 // (2) for optimal thread management we would need to know the future (don't park
38 // a worker thread when a new goroutine will be readied in near future).
40 // Three rejected approaches that would work badly:
41 // 1. Centralize all scheduler state (would inhibit scalability).
42 // 2. Direct goroutine handoff. That is, when we ready a new goroutine and there
43 // is a spare P, unpark a thread and hand it the P and the goroutine.
44 // This would lead to thread state thrashing, as the thread that readied the
45 // goroutine can be out of work the very next moment, and we would then need to
46 // park it. It would also destroy locality of computation, since we want to keep
47 // dependent goroutines on the same thread, and it would introduce additional latency.
48 // 3. Unpark an additional thread whenever we ready a goroutine and there is an
49 // idle P, but don't do handoff. This would lead to excessive thread parking/
50 // unparking as the additional threads will instantly park without discovering any work to do.
53 // The current approach:
55 // This approach applies to three primary sources of potential work: readying a
56 // goroutine, new/modified-earlier timers, and idle-priority GC. See below for
57 // additional details.
59 // We unpark an additional thread when we submit work if (this is wakep()):
60 // 1. There is an idle P, and
61 // 2. There are no "spinning" worker threads.
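//
// In code, that check is wakep(); an illustrative, simplified sketch (not a
// verbatim copy of the runtime function) looks roughly like:
//
//	if atomic.Load(&sched.npidle) == 0 {
//		return // no idle P to run the new work on
//	}
//	// Be conservative: at most one thread enters spinning at a time here.
//	if atomic.Load(&sched.nmspinning) != 0 || !atomic.Cas(&sched.nmspinning, 0, 1) {
//		return // some worker is already spinning (or just started to)
//	}
//	startm(nil, true) // unpark/start an M in the spinning state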
63 // A worker thread is considered spinning if it is out of local work and did
64 // not find work in the global run queue or netpoller; the spinning state is
65 // denoted in m.spinning and in sched.nmspinning. Threads unparked this way are
66 // also considered spinning; we don't do goroutine handoff so such threads are
67 // out of work initially. Spinning threads spin on looking for work in per-P
68 // run queues and timer heaps or from the GC before parking. If a spinning
69 // thread finds work it takes itself out of the spinning state and proceeds to
70 // execution. If it does not find work it takes itself out of the spinning
71 // state and then parks.
73 // If there is at least one spinning thread (sched.nmspinning>0), we don't
74 // unpark new threads when submitting work. To compensate for that, if the last
75 // spinning thread finds work and stops spinning, it must unpark a new spinning
76 // thread. This approach smooths out unjustified spikes of thread unparking,
77 // but at the same time guarantees eventual maximal CPU parallelism utilization.
80 // The main implementation complication is that we need to be very careful
81 // during spinning->non-spinning thread transition. This transition can race
82 // with submission of new work, and either one part or another needs to unpark
83 // another worker thread. If they both fail to do that, we can end up with
84 // semi-persistent CPU underutilization.
86 // The general pattern for submission is:
87 // 1. Submit work to the local run queue, timer heap, or GC state.
88 // 2. #StoreLoad-style memory barrier.
89 // 3. Check sched.nmspinning.
91 // The general pattern for spinning->non-spinning transition is:
92 // 1. Decrement nmspinning.
93 // 2. #StoreLoad-style memory barrier.
94 // 3. Check all per-P work queues and GC for new work.
96 // Note that all this complexity does not apply to global run queue as we are
97 // not sloppy about thread unparking when submitting to global queue. Also see
98 // comments for nmspinning manipulation.
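//
// As an illustrative sketch of the two racing paths (simplified pseudocode,
// recheckAllRunqueues is a hypothetical stand-in for the real re-check):
//
//	// Submission side (e.g. readying a goroutine):
//	runqput(pp, gp, false)                      // 1. publish the work
//	// 2. the atomic load below provides the #StoreLoad-style barrier
//	if atomic.Load(&sched.nmspinning) == 0 {    // 3. nobody is spinning
//		wakep()
//	}
//
//	// Spinning->non-spinning side (before parking):
//	atomic.Xadd(&sched.nmspinning, -1)          // 1. leave spinning state
//	// 2. the atomic add above provides the #StoreLoad-style barrier
//	if gp := recheckAllRunqueues(); gp != nil { // 3. re-check all work sources
//		// become spinning again (or wake another M) and run gp
//	}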
100 // How these different sources of work behave varies, though it doesn't affect
101 // the synchronization approach:
102 // * Ready goroutine: this is an obvious source of work; the goroutine is
103 // immediately ready and must run on some thread eventually.
104 // * New/modified-earlier timer: The current timer implementation (see time.go)
105 // uses netpoll in a thread with no work available to wait for the soonest
106 // timer. If there is no thread waiting, we want a new spinning thread to go wait.
108 // * Idle-priority GC: The GC wakes a stopped idle thread to contribute to
109 // background GC work (note: currently disabled per golang.org/issue/19112).
110 // Also see golang.org/issue/44313, as this should be extended to all GC workers.
120 //go:linkname runtime_inittask runtime..inittask
121 var runtime_inittask initTask
123 //go:linkname main_inittask main..inittask
124 var main_inittask initTask
126 // main_init_done is a signal used by cgocallbackg that initialization
127 // has been completed. It is made before _cgo_notify_runtime_init_done,
128 // so all cgo calls can rely on it existing. When main_init is complete,
129 // it is closed, meaning cgocallbackg can reliably receive from it.
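//
// For example (illustrative), a waiter simply receives from it:
//
//	<-main_init_done // blocks until main init completes; the close unblocks it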
130 var main_init_done chan bool
132 //go:linkname main_main main.main
135 // mainStarted indicates that the main M has started.
138 // runtimeInitTime is the nanotime() at which the runtime started.
139 var runtimeInitTime int64
141 // Value to use for signal mask for newly created M's.
142 var initSigmask sigset
144 // The main goroutine.
148 // Racectx of m0->g0 is used only as the parent of the main goroutine.
149 // It must not be used for anything else.
152 // Max stack size is 1 GB on 64-bit, 250 MB on 32-bit.
153 // Using decimal instead of binary GB and MB because
154 // they look nicer in the stack overflow failure message.
155 if goarch.PtrSize == 8 {
156 maxstacksize = 1000000000
158 maxstacksize = 250000000
161 // An upper limit for max stack size. Used to avoid random crashes
162 // after calling SetMaxStack and trying to allocate a stack that is too big,
163 // since stackalloc works with 32-bit sizes.
164 maxstackceiling = 2 * maxstacksize
166 // Allow newproc to start new Ms.
169 if GOARCH != "wasm" { // no threads on wasm yet, so no sysmon
170 // For runtime_syscall_doAllThreadsSyscall, we register that
171 // sysmon is not yet ready for the world to be stopped.
173 atomic.Store(&sched.sysmonStarting, 1)
175 newm(sysmon, nil, -1)
179 // Lock the main goroutine onto this, the main OS thread,
180 // during initialization. Most programs won't care, but a few
181 // do require certain calls to be made by the main thread.
182 // Those can arrange for main.main to run in the main thread
183 // by calling runtime.LockOSThread during initialization
184 // to preserve the lock.
188 throw("runtime.main not on m0")
192 // Record when the world started.
193 // Must be before doInit for tracing init.
194 runtimeInitTime = nanotime()
195 if runtimeInitTime == 0 {
196 throw("nanotime returning zero")
199 if debug.inittrace != 0 {
200 inittrace.id = getg().goid
201 inittrace.active = true
204 doInit(&runtime_inittask) // Must be before defer.
206 // Defer unlock so that runtime.Goexit during init does the unlock too.
216 main_init_done = make(chan bool)
218 if _cgo_thread_start == nil {
219 throw("_cgo_thread_start missing")
221 if GOOS != "windows" {
222 if _cgo_setenv == nil {
223 throw("_cgo_setenv missing")
225 if _cgo_unsetenv == nil {
226 throw("_cgo_unsetenv missing")
229 if _cgo_notify_runtime_init_done == nil {
230 throw("_cgo_notify_runtime_init_done missing")
232 // Start the template thread in case we enter Go from
233 // a C-created thread and need to create a new thread.
234 startTemplateThread()
235 cgocall(_cgo_notify_runtime_init_done, nil)
238 doInit(&main_inittask)
240 // Disable init tracing after main init is done to avoid the overhead
241 // of collecting statistics in malloc and newproc.
242 inittrace.active = false
244 close(main_init_done)
249 if isarchive || islibrary {
250 // A program compiled with -buildmode=c-archive or c-shared
251 // has a main, but it is not executed.
254 fn := main_main // make an indirect call, as the linker doesn't know the address of the main package when laying down the runtime
260 // Make racy client program work: if panicking on
261 // another goroutine at the same time as main returns,
262 // let the other goroutine finish printing the panic trace.
263 // Once it does, it will exit. See issues 3934 and 20018.
264 if atomic.Load(&runningPanicDefers) != 0 {
265 // Running deferred functions should not take long.
266 for c := 0; c < 1000; c++ {
267 if atomic.Load(&runningPanicDefers) == 0 {
273 if atomic.Load(&panicking) != 0 {
274 gopark(nil, nil, waitReasonPanicWait, traceEvGoStop, 1)
284 // os_beforeExit is called from os.Exit(0).
285 //go:linkname os_beforeExit os.runtime_beforeExit
286 func os_beforeExit() {
292 // start forcegc helper goroutine
297 func forcegchelper() {
299 lockInit(&forcegc.lock, lockRankForcegc)
302 if forcegc.idle != 0 {
303 throw("forcegc: phase error")
305 atomic.Store(&forcegc.idle, 1)
306 goparkunlock(&forcegc.lock, waitReasonForceGCIdle, traceEvGoBlock, 1)
307 // this goroutine is explicitly resumed by sysmon
308 if debug.gctrace > 0 {
311 // Time-triggered, fully concurrent.
312 gcStart(gcTrigger{kind: gcTriggerTime, now: nanotime()})
318 // Gosched yields the processor, allowing other goroutines to run. It does not
319 // suspend the current goroutine, so execution resumes automatically.
325 // goschedguarded yields the processor like gosched, but also checks
326 // for forbidden states and opts out of the yield in those cases.
328 func goschedguarded() {
329 mcall(goschedguarded_m)
332 // Puts the current goroutine into a waiting state and calls unlockf on the system stack.
335 // If unlockf returns false, the goroutine is resumed.
337 // unlockf must not access this G's stack, as it may be moved between
338 // the call to gopark and the call to unlockf.
340 // Note that because unlockf is called after putting the G into a waiting
341 // state, the G may have already been readied by the time unlockf is called
342 // unless there is external synchronization preventing the G from being
343 // readied. If unlockf returns false, it must guarantee that the G cannot be
344 // externally readied.
346 // Reason explains why the goroutine has been parked. It is displayed in stack
347 // traces and heap dumps. Reasons should be unique and descriptive. Do not
348 // re-use reasons; add new ones.
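//
// An illustrative caller sketch (l is a hypothetical mutex guarding the wait
// condition) using the goparkunlock helper below:
//
//	lock(&l)
//	// ... record this g on whatever it is waiting for ...
//	goparkunlock(&l, waitReasonZero, traceEvGoBlock, 1)
//	// Execution resumes here only after someone calls goready(gp).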
349 func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int) {
350 if reason != waitReasonSleep {
351 checkTimeouts() // timeouts may expire while two goroutines keep the scheduler busy
355 status := readgstatus(gp)
356 if status != _Grunning && status != _Gscanrunning {
357 throw("gopark: bad g status")
360 mp.waitunlockf = unlockf
361 gp.waitreason = reason
362 mp.waittraceev = traceEv
363 mp.waittraceskip = traceskip
365 // can't do anything that might move the G between Ms here.
369 // Puts the current goroutine into a waiting state and unlocks the lock.
370 // The goroutine can be made runnable again by calling goready(gp).
371 func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int) {
372 gopark(parkunlock_c, unsafe.Pointer(lock), reason, traceEv, traceskip)
375 func goready(gp *g, traceskip int) {
377 ready(gp, traceskip, true)
382 func acquireSudog() *sudog {
383 // Delicate dance: the semaphore implementation calls
384 // acquireSudog, acquireSudog calls new(sudog),
385 // new calls malloc, malloc can call the garbage collector,
386 // and the garbage collector calls the semaphore implementation in stopTheWorld.
388 // Break the cycle by doing acquirem/releasem around new(sudog).
389 // The acquirem/releasem increments m.locks during new(sudog),
390 // which keeps the garbage collector from being invoked.
393 if len(pp.sudogcache) == 0 {
394 lock(&sched.sudoglock)
395 // First, try to grab a batch from central cache.
396 for len(pp.sudogcache) < cap(pp.sudogcache)/2 && sched.sudogcache != nil {
397 s := sched.sudogcache
398 sched.sudogcache = s.next
400 pp.sudogcache = append(pp.sudogcache, s)
402 unlock(&sched.sudoglock)
403 // If the central cache is empty, allocate a new one.
404 if len(pp.sudogcache) == 0 {
405 pp.sudogcache = append(pp.sudogcache, new(sudog))
408 n := len(pp.sudogcache)
409 s := pp.sudogcache[n-1]
410 pp.sudogcache[n-1] = nil
411 pp.sudogcache = pp.sudogcache[:n-1]
413 throw("acquireSudog: found s.elem != nil in cache")
420 func releaseSudog(s *sudog) {
422 throw("runtime: sudog with non-nil elem")
425 throw("runtime: sudog with non-false isSelect")
428 throw("runtime: sudog with non-nil next")
431 throw("runtime: sudog with non-nil prev")
433 if s.waitlink != nil {
434 throw("runtime: sudog with non-nil waitlink")
437 throw("runtime: sudog with non-nil c")
441 throw("runtime: releaseSudog with non-nil gp.param")
443 mp := acquirem() // avoid rescheduling to another P
445 if len(pp.sudogcache) == cap(pp.sudogcache) {
446 // Transfer half of local cache to the central cache.
447 var first, last *sudog
448 for len(pp.sudogcache) > cap(pp.sudogcache)/2 {
449 n := len(pp.sudogcache)
450 p := pp.sudogcache[n-1]
451 pp.sudogcache[n-1] = nil
452 pp.sudogcache = pp.sudogcache[:n-1]
460 lock(&sched.sudoglock)
461 last.next = sched.sudogcache
462 sched.sudogcache = first
463 unlock(&sched.sudoglock)
465 pp.sudogcache = append(pp.sudogcache, s)
469 // called from assembly
470 func badmcall(fn func(*g)) {
471 throw("runtime: mcall called on m->g0 stack")
474 func badmcall2(fn func(*g)) {
475 throw("runtime: mcall function returned")
478 func badreflectcall() {
479 panic(plainError("arg size to reflect.call more than 1GB"))
482 var badmorestackg0Msg = "fatal: morestack on g0\n"
485 //go:nowritebarrierrec
486 func badmorestackg0() {
487 sp := stringStructOf(&badmorestackg0Msg)
488 write(2, sp.str, int32(sp.len))
491 var badmorestackgsignalMsg = "fatal: morestack on gsignal\n"
494 //go:nowritebarrierrec
495 func badmorestackgsignal() {
496 sp := stringStructOf(&badmorestackgsignalMsg)
497 write(2, sp.str, int32(sp.len))
505 func lockedOSThread() bool {
507 return gp.lockedm != 0 && gp.m.lockedg != 0
511 // allgs contains all Gs ever created (including dead Gs), and thus never shrinks.
514 // Access via the slice is protected by allglock or stop-the-world.
515 // Readers that cannot take the lock may (carefully!) use the atomic variables below.
520 // allglen and allgptr are atomic variables that contain len(allgs) and
521 // &allgs[0] respectively. Proper ordering depends on totally-ordered
522 // loads and stores. Writes are protected by allglock.
524 // allgptr is updated before allglen. Readers should read allglen
525 // before allgptr to ensure that allglen is always <= len(allgptr). New
526 // Gs appended during the race can be missed. For a consistent view of
527 // all Gs, allglock must be held.
529 // allgptr copies should always be stored as a concrete type or
530 // unsafe.Pointer, not uintptr, to ensure that GC can still reach it
531 // even if it points to a stale array.
536 func allgadd(gp *g) {
537 if readgstatus(gp) == _Gidle {
538 throw("allgadd: bad status Gidle")
542 allgs = append(allgs, gp)
543 if &allgs[0] != allgptr {
544 atomicstorep(unsafe.Pointer(&allgptr), unsafe.Pointer(&allgs[0]))
546 atomic.Storeuintptr(&allglen, uintptr(len(allgs)))
550 // atomicAllG returns &allgs[0] and len(allgs) for use with atomicAllGIndex.
551 func atomicAllG() (**g, uintptr) {
552 length := atomic.Loaduintptr(&allglen)
553 ptr := (**g)(atomic.Loadp(unsafe.Pointer(&allgptr)))
557 // atomicAllGIndex returns ptr[i] with the allgptr returned from atomicAllG.
558 func atomicAllGIndex(ptr **g, i uintptr) *g {
559 return *(**g)(add(unsafe.Pointer(ptr), i*goarch.PtrSize))
562 // forEachG calls fn on every G from allgs.
564 // forEachG takes a lock to exclude concurrent addition of new Gs.
565 func forEachG(fn func(gp *g)) {
567 for _, gp := range allgs {
573 // forEachGRace calls fn on every G from allgs.
575 // forEachGRace avoids locking, but does not exclude addition of new Gs during
576 // execution, which may be missed.
577 func forEachGRace(fn func(gp *g)) {
578 ptr, length := atomicAllG()
579 for i := uintptr(0); i < length; i++ {
580 gp := atomicAllGIndex(ptr, i)
587 // Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
588 // 16 seems to provide enough amortization, but other than that it's mostly an arbitrary number.
592 // cpuinit extracts the environment variable GODEBUG from the environment on
593 // Unix-like operating systems and calls internal/cpu.Initialize.
595 const prefix = "GODEBUG="
599 case "aix", "darwin", "ios", "dragonfly", "freebsd", "netbsd", "openbsd", "illumos", "solaris", "linux":
600 cpu.DebugOptions = true
602 // Similar to goenv_unix but extracts the environment value for GODEBUG directly.
604 // TODO(moehrmann): remove when general goenvs() can be called before cpuinit()
606 for argv_index(argv, argc+1+n) != nil {
610 for i := int32(0); i < n; i++ {
611 p := argv_index(argv, argc+1+i)
612 s := *(*string)(unsafe.Pointer(&stringStruct{unsafe.Pointer(p), findnull(p)}))
614 if hasPrefix(s, prefix) {
615 env = gostring(p)[len(prefix):]
623 // CPU feature support variables are used in code generated by the compiler
624 // to guard execution of instructions that cannot be assumed to be always supported.
625 x86HasPOPCNT = cpu.X86.HasPOPCNT
626 x86HasSSE41 = cpu.X86.HasSSE41
627 x86HasFMA = cpu.X86.HasFMA
629 armHasVFPv4 = cpu.ARM.HasVFPv4
631 arm64HasATOMICS = cpu.ARM64.HasATOMICS
634 // The bootstrap sequence is:
635 //
636 //	call osinit
637 //	call schedinit
638 //	make & queue new G
639 //	call runtime·mstart
640 //
641 // The new G calls runtime·main.
643 lockInit(&sched.lock, lockRankSched)
644 lockInit(&sched.sysmonlock, lockRankSysmon)
645 lockInit(&sched.deferlock, lockRankDefer)
646 lockInit(&sched.sudoglock, lockRankSudog)
647 lockInit(&deadlock, lockRankDeadlock)
648 lockInit(&paniclk, lockRankPanic)
649 lockInit(&allglock, lockRankAllg)
650 lockInit(&allpLock, lockRankAllp)
651 lockInit(&reflectOffs.lock, lockRankReflectOffs)
652 lockInit(&finlock, lockRankFin)
653 lockInit(&trace.bufLock, lockRankTraceBuf)
654 lockInit(&trace.stringsLock, lockRankTraceStrings)
655 lockInit(&trace.lock, lockRankTrace)
656 lockInit(&cpuprof.lock, lockRankCpuprof)
657 lockInit(&trace.stackTab.lock, lockRankTraceStackTab)
658 // Enforce that this lock is always a leaf lock.
659 // All of this lock's critical sections should be extremely short.
661 lockInit(&memstats.heapStats.noPLock, lockRankLeafRank)
663 // raceinit must be the first call to the race detector.
664 // In particular, it must be done before mallocinit below calls racemapshadow.
667 _g_.racectx, raceprocctx0 = raceinit()
670 sched.maxmcount = 10000
672 // The world starts stopped.
678 fastrandinit() // must run before mcommoninit
679 mcommoninit(_g_.m, -1)
680 cpuinit() // must run before alginit
681 alginit() // maps must not be used before this call
682 modulesinit() // provides activeModules
683 typelinksinit() // uses maps, activeModules
684 itabsinit() // uses activeModules
686 sigsave(&_g_.m.sigmask)
687 initSigmask = _g_.m.sigmask
689 if offset := unsafe.Offsetof(sched.timeToRun); offset%8 != 0 {
691 throw("sched.timeToRun not aligned to 8 bytes")
700 sched.lastpoll = uint64(nanotime())
702 if n, ok := atoi32(gogetenv("GOMAXPROCS")); ok && n > 0 {
705 if procresize(procs) != nil {
706 throw("unknown runnable goroutine during bootstrap")
710 // World is effectively started now, as P's can run.
713 // For cgocheck > 1, we turn on the write barrier at all times
714 // and check all pointer writes. We can't do this until after
715 // procresize because the write barrier needs a P.
716 if debug.cgocheck > 1 {
717 writeBarrier.cgo = true
718 writeBarrier.enabled = true
719 for _, p := range allp {
724 if buildVersion == "" {
725 // Condition should never trigger. This code just serves
726 // to ensure runtime·buildVersion is kept in the resulting binary.
727 buildVersion = "unknown"
729 if len(modinfo) == 1 {
730 // Condition should never trigger. This code just serves
731 // to ensure runtime·modinfo is kept in the resulting binary.
736 func dumpgstatus(gp *g) {
738 print("runtime: gp: gp=", gp, ", goid=", gp.goid, ", gp->atomicstatus=", readgstatus(gp), "\n")
739 print("runtime: g: g=", _g_, ", goid=", _g_.goid, ", g->atomicstatus=", readgstatus(_g_), "\n")
742 // sched.lock must be held.
744 assertLockHeld(&sched.lock)
746 if mcount() > sched.maxmcount {
747 print("runtime: program exceeds ", sched.maxmcount, "-thread limit\n")
748 throw("thread exhaustion")
752 // mReserveID returns the next ID to use for a new m. This new m is immediately
753 // considered 'running' by checkdead.
755 // sched.lock must be held.
756 func mReserveID() int64 {
757 assertLockHeld(&sched.lock)
759 if sched.mnext+1 < sched.mnext {
760 throw("runtime: thread ID overflow")
768 // Pre-allocated ID may be passed as 'id', or omitted by passing -1.
769 func mcommoninit(mp *m, id int64) {
772 // The g0 stack won't make sense to the user (and is not necessarily unwindable).
774 callers(1, mp.createstack[:])
785 mp.fastrand[0] = uint32(int64Hash(uint64(mp.id), fastrandseed))
786 mp.fastrand[1] = uint32(int64Hash(uint64(cputicks()), ^fastrandseed))
787 if mp.fastrand[0]|mp.fastrand[1] == 0 {
792 if mp.gsignal != nil {
793 mp.gsignal.stackguard1 = mp.gsignal.stack.lo + _StackGuard
796 // Add to allm so garbage collector doesn't free g->m
797 // when it is just in a register or thread-local storage.
800 // NumCgoCall() iterates over allm without schedlock,
801 // so we need to publish it safely.
802 atomicstorep(unsafe.Pointer(&allm), unsafe.Pointer(mp))
805 // Allocate memory to hold a cgo traceback if the cgo call crashes.
806 if iscgo || GOOS == "solaris" || GOOS == "illumos" || GOOS == "windows" {
807 mp.cgoCallers = new(cgoCallers)
811 var fastrandseed uintptr
813 func fastrandinit() {
814 s := (*[unsafe.Sizeof(fastrandseed)]byte)(unsafe.Pointer(&fastrandseed))[:]
818 // Mark gp ready to run.
819 func ready(gp *g, traceskip int, next bool) {
821 traceGoUnpark(gp, traceskip)
824 status := readgstatus(gp)
828 mp := acquirem() // disable preemption because it can be holding p in a local var
829 if status&^_Gscan != _Gwaiting {
831 throw("bad g->status in ready")
834 // status is Gwaiting or Gscanwaiting, make Grunnable and put on runq
835 casgstatus(gp, _Gwaiting, _Grunnable)
836 runqput(_g_.m.p.ptr(), gp, next)
841 // freezeStopWait is a large value that freezetheworld sets
842 // sched.stopwait to in order to request that all Gs permanently stop.
843 const freezeStopWait = 0x7fffffff
845 // freezing is set to non-zero if the runtime is trying to freeze the world.
849 // Similar to stopTheWorld but best-effort and can be called several times.
850 // There is no reverse operation; it is used during crashing.
851 // This function must not lock any mutexes.
852 func freezetheworld() {
853 atomic.Store(&freezing, 1)
854 // stopwait and preemption requests can be lost
855 // due to races with concurrently executing threads,
856 // so try several times
857 for i := 0; i < 5; i++ {
858 // this should tell the scheduler to not start any new goroutines
859 sched.stopwait = freezeStopWait
860 atomic.Store(&sched.gcwaiting, 1)
861 // this should stop running goroutines
863 break // no running goroutines
873 // All reads and writes of g's status go through readgstatus, casgstatus,
874 // castogscanstatus, casfrom_Gscanstatus.
876 func readgstatus(gp *g) uint32 {
877 return atomic.Load(&gp.atomicstatus)
880 // The Gscanstatuses are acting like locks and this releases them.
881 // If it proves to be a performance hit we should be able to make these
882 // simple atomic stores but for now we are going to throw if
883 // we see an inconsistent state.
884 func casfrom_Gscanstatus(gp *g, oldval, newval uint32) {
887 // Check that transition is valid.
890 print("runtime: casfrom_Gscanstatus bad oldval gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
892 throw("casfrom_Gscanstatus:top gp->status is not in scan state")
898 if newval == oldval&^_Gscan {
899 success = atomic.Cas(&gp.atomicstatus, oldval, newval)
903 print("runtime: casfrom_Gscanstatus failed gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
905 throw("casfrom_Gscanstatus: gp->status is not in scan state")
907 releaseLockRank(lockRankGscan)
910 // This will return false if the gp is not in the expected status and the cas fails.
911 // This acts like a lock acquire while casfrom_Gscanstatus acts like a lock release.
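//
// An illustrative pairing (for example, to inspect a waiting goroutine
// without it changing state underneath us):
//
//	if castogscanstatus(gp, _Gwaiting, _Gscanwaiting) {
//		// gp is now "locked" in the scan state.
//		// ... inspect gp ...
//		casfrom_Gscanstatus(gp, _Gscanwaiting, _Gwaiting)
//	}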
912 func castogscanstatus(gp *g, oldval, newval uint32) bool {
918 if newval == oldval|_Gscan {
919 r := atomic.Cas(&gp.atomicstatus, oldval, newval)
921 acquireLockRank(lockRankGscan)
927 print("runtime: castogscanstatus oldval=", hex(oldval), " newval=", hex(newval), "\n")
928 throw("castogscanstatus")
932 // If asked to move to or from a Gscanstatus this will throw. Use castogscanstatus
933 // and casfrom_Gscanstatus instead.
934 // casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that
935 // put it in the Gscan state is finished.
937 func casgstatus(gp *g, oldval, newval uint32) {
938 if (oldval&_Gscan != 0) || (newval&_Gscan != 0) || oldval == newval {
940 print("runtime: casgstatus: oldval=", hex(oldval), " newval=", hex(newval), "\n")
941 throw("casgstatus: bad incoming values")
945 acquireLockRank(lockRankGscan)
946 releaseLockRank(lockRankGscan)
948 // See https://golang.org/cl/21503 for justification of the yield delay.
949 const yieldDelay = 5 * 1000
952 // loop if gp->atomicstatus is in a scan state giving
953 // GC time to finish and change the state to oldval.
954 for i := 0; !atomic.Cas(&gp.atomicstatus, oldval, newval); i++ {
955 if oldval == _Gwaiting && gp.atomicstatus == _Grunnable {
956 throw("casgstatus: waiting for Gwaiting but is Grunnable")
959 nextYield = nanotime() + yieldDelay
961 if nanotime() < nextYield {
962 for x := 0; x < 10 && gp.atomicstatus != oldval; x++ {
967 nextYield = nanotime() + yieldDelay/2
971 // Handle tracking for scheduling latencies.
972 if oldval == _Grunning {
973 // Track every 8th time a goroutine transitions out of running.
974 if gp.trackingSeq%gTrackingPeriod == 0 {
981 if oldval == _Grunnable {
982 // We transitioned out of runnable, so measure how much
983 // time we spent in this state and add it to runnableTime.
985 gp.runnableTime += now - gp.runnableStamp
988 if newval == _Grunnable {
989 // We just transitioned into runnable, so record what
990 // time that happened.
991 gp.runnableStamp = now
992 } else if newval == _Grunning {
993 // We're transitioning into running, so turn off
994 // tracking and record how much time we spent in runnable.
997 sched.timeToRun.record(gp.runnableTime)
1003 // casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable.
1004 // Returns old status. Cannot call casgstatus directly, because we are racing with an
1005 // async wakeup that might come in from netpoll. If we see Gwaiting from the readgstatus,
1006 // it might have become Grunnable by the time we get to the cas. If we called casgstatus,
1007 // it would loop waiting for the status to go back to Gwaiting, which it never will.
1009 func casgcopystack(gp *g) uint32 {
1011 oldstatus := readgstatus(gp) &^ _Gscan
1012 if oldstatus != _Gwaiting && oldstatus != _Grunnable {
1013 throw("copystack: bad status, not Gwaiting or Grunnable")
1015 if atomic.Cas(&gp.atomicstatus, oldstatus, _Gcopystack) {
1021 // casGToPreemptScan transitions gp from _Grunning to _Gscan|_Gpreempted.
1023 // TODO(austin): This is the only status operation that both changes
1024 // the status and locks the _Gscan bit. Rethink this.
1025 func casGToPreemptScan(gp *g, old, new uint32) {
1026 if old != _Grunning || new != _Gscan|_Gpreempted {
1027 throw("bad g transition")
1029 acquireLockRank(lockRankGscan)
1030 for !atomic.Cas(&gp.atomicstatus, _Grunning, _Gscan|_Gpreempted) {
1034 // casGFromPreempted attempts to transition gp from _Gpreempted to
1035 // _Gwaiting. If successful, the caller is responsible for
1036 // re-scheduling gp.
1037 func casGFromPreempted(gp *g, old, new uint32) bool {
1038 if old != _Gpreempted || new != _Gwaiting {
1039 throw("bad g transition")
1041 return atomic.Cas(&gp.atomicstatus, _Gpreempted, _Gwaiting)
1044 // stopTheWorld stops all P's from executing goroutines, interrupting
1045 // all goroutines at GC safe points and recording reason as the reason
1046 // for the stop. On return, only the current goroutine's P is running.
1047 // stopTheWorld must not be called from a system stack and the caller
1048 // must not hold worldsema. The caller must call startTheWorld when
1049 // other P's should resume execution.
1051 // stopTheWorld is safe for multiple goroutines to call at the
1052 // same time. Each will execute its own stop, and the stops will be serialized.
1055 // This is also used by routines that do stack dumps. If the system is
1056 // in panic or being exited, this may not reliably stop all goroutines.
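//
// A typical (illustrative) caller therefore looks like:
//
//	stopTheWorld("my reason")
//	// ... inspect or update state that requires a stopped world ...
//	startTheWorld()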
1058 func stopTheWorld(reason string) {
1059 semacquire(&worldsema)
1061 gp.m.preemptoff = reason
1062 systemstack(func() {
1063 // Mark the goroutine which called stopTheWorld preemptible so its
1064 // stack may be scanned.
1065 // This lets a mark worker scan us while we try to stop the world
1066 // since otherwise we could get in a mutual preemption deadlock.
1067 // We must not modify anything on the G stack because a stack shrink
1068 // may occur. A stack shrink is otherwise OK though because in order
1069 // to return from this function (and to leave the system stack) we
1070 // must have preempted all goroutines, including any attempting
1071 // to scan our stack, in which case, any stack shrinking will
1072 // have already completed by the time we exit.
1073 casgstatus(gp, _Grunning, _Gwaiting)
1074 stopTheWorldWithSema()
1075 casgstatus(gp, _Gwaiting, _Grunning)
1079 // startTheWorld undoes the effects of stopTheWorld.
1080 func startTheWorld() {
1081 systemstack(func() { startTheWorldWithSema(false) })
1083 // worldsema must be held over startTheWorldWithSema to ensure
1084 // gomaxprocs cannot change while worldsema is held.
1086 // Release worldsema with direct handoff to the next waiter, but
1087 // acquirem so that semrelease1 doesn't try to yield our time.
1089 // Otherwise if e.g. ReadMemStats is being called in a loop,
1090 // it might stomp on other attempts to stop the world, such as
1091 // for starting or ending GC. The operation this blocks is
1092 // so heavy-weight that we should just try to be as fair as possible here.
1095 // We don't want to allow ourselves to get preempted between now
1096 // and releasing the semaphore because then we keep everyone
1097 // (including, for example, GCs) waiting longer.
1100 semrelease1(&worldsema, true, 0)
1104 // stopTheWorldGC has the same effect as stopTheWorld, but blocks
1105 // until the GC is not running. It also blocks a GC from starting
1106 // until startTheWorldGC is called.
1107 func stopTheWorldGC(reason string) {
1109 stopTheWorld(reason)
1112 // startTheWorldGC undoes the effects of stopTheWorldGC.
1113 func startTheWorldGC() {
1118 // Holding worldsema grants an M the right to try to stop the world.
1119 var worldsema uint32 = 1
1121 // Holding gcsema grants the M the right to block a GC, and blocks
1122 // until the current GC is done. In particular, it prevents gomaxprocs
1123 // from changing concurrently.
1125 // TODO(mknyszek): Once gomaxprocs and the execution tracer can handle
1126 // being changed/enabled during a GC, remove this.
1127 var gcsema uint32 = 1
1129 // stopTheWorldWithSema is the core implementation of stopTheWorld.
1130 // The caller is responsible for acquiring worldsema and disabling
1131 // preemption first, and then should call stopTheWorldWithSema on the system stack:
1134 // semacquire(&worldsema, 0)
1135 // m.preemptoff = "reason"
1136 // systemstack(stopTheWorldWithSema)
1138 // When finished, the caller must either call startTheWorld or undo
1139 // these three operations separately:
1141 // m.preemptoff = ""
1142 // systemstack(startTheWorldWithSema)
1143 // semrelease(&worldsema)
1145 // It is allowed to acquire worldsema once and then execute multiple
1146 // startTheWorldWithSema/stopTheWorldWithSema pairs.
1147 // Other P's are able to execute between successive calls to
1148 // startTheWorldWithSema and stopTheWorldWithSema.
1149 // Holding worldsema causes any other goroutines invoking
1150 // stopTheWorld to block.
1151 func stopTheWorldWithSema() {
1154 // If we hold a lock, then we won't be able to stop another M
1155 // that is blocked trying to acquire the lock.
1156 if _g_.m.locks > 0 {
1157 throw("stopTheWorld: holding locks")
1161 sched.stopwait = gomaxprocs
1162 atomic.Store(&sched.gcwaiting, 1)
1165 _g_.m.p.ptr().status = _Pgcstop // Pgcstop is only diagnostic.
1167 // try to retake all P's in Psyscall status
1168 for _, p := range allp {
1170 if s == _Psyscall && atomic.Cas(&p.status, s, _Pgcstop) {
1188 wait := sched.stopwait > 0
1191 // wait for remaining P's to stop voluntarily
1194 // wait for 100us, then try to re-preempt in case of any races
1195 if notetsleep(&sched.stopnote, 100*1000) {
1196 noteclear(&sched.stopnote)
1205 if sched.stopwait != 0 {
1206 bad = "stopTheWorld: not stopped (stopwait != 0)"
1208 for _, p := range allp {
1209 if p.status != _Pgcstop {
1210 bad = "stopTheWorld: not stopped (status != _Pgcstop)"
1214 if atomic.Load(&freezing) != 0 {
1215 // Some other thread is panicking. This can cause the
1216 // sanity checks above to fail if the panic happens in
1217 // the signal handler on a stopped thread. Either way,
1218 // we should halt this thread.
1229 func startTheWorldWithSema(emitTraceEvent bool) int64 {
1230 assertWorldStopped()
1232 mp := acquirem() // disable preemption because it can be holding p in a local var
1233 if netpollinited() {
1234 list := netpoll(0) // non-blocking
1244 p1 := procresize(procs)
1246 if sched.sysmonwait != 0 {
1247 sched.sysmonwait = 0
1248 notewakeup(&sched.sysmonnote)
1261 throw("startTheWorld: inconsistent mp->nextp")
1264 notewakeup(&mp.park)
1266 // Start M to run P. Do not start another M below.
1271 // Capture start-the-world time before doing clean-up tasks.
1272 startTime := nanotime()
1277 // Wakeup an additional proc in case we have excessive runnable goroutines
1278 // in local queues or in the global queue. If we don't, the proc will park itself.
1279 // If we have lots of excessive work, resetspinning will unpark additional procs as necessary.
1287 // usesLibcall indicates whether this runtime performs system calls via libcall.
1289 func usesLibcall() bool {
1291 case "aix", "darwin", "illumos", "ios", "solaris", "windows":
1294 return GOARCH == "386" || GOARCH == "amd64" || GOARCH == "arm" || GOARCH == "arm64"
1299 // mStackIsSystemAllocated indicates whether this runtime starts on a
1300 // system-allocated stack.
1301 func mStackIsSystemAllocated() bool {
1303 case "aix", "darwin", "plan9", "illumos", "ios", "solaris", "windows":
1307 case "386", "amd64", "arm", "arm64":
1314 // mstart is the entry-point for new Ms.
1315 // It is written in assembly, uses ABI0, is marked TOPFRAME, and calls mstart0.
1318 // mstart0 is the Go entry-point for new Ms.
1319 // This must not split the stack because we may not even have stack
1320 // bounds set up yet.
1322 // May run during STW (because it doesn't have a P yet), so write
1323 // barriers are not allowed.
1326 //go:nowritebarrierrec
1330 osStack := _g_.stack.lo == 0
1332 // Initialize stack bounds from system stack.
1333 // Cgo may have left stack size in stack.hi.
1334 // minit may update the stack bounds.
1336 // Note: these bounds may not be very accurate.
1337 // We set hi to &size, but there are things above
1338 // it. The 1024 is supposed to compensate for this,
1339 // but is somewhat arbitrary.
1340 size := _g_.stack.hi
1342 size = 8192 * sys.StackGuardMultiplier
1344 _g_.stack.hi = uintptr(noescape(unsafe.Pointer(&size)))
1345 _g_.stack.lo = _g_.stack.hi - size + 1024
1347 // Initialize stack guard so that we can start calling regular Go code.
1349 _g_.stackguard0 = _g_.stack.lo + _StackGuard
1350 // This is the g0, so we can also call go:systemstack
1351 // functions, which check stackguard1.
1352 _g_.stackguard1 = _g_.stackguard0
1355 // Exit this thread.
1356 if mStackIsSystemAllocated() {
1357 // Windows, Solaris, illumos, Darwin, AIX and Plan 9 always system-allocate
1358 // the stack, but put it in _g_.stack before mstart,
1359 // so the logic above hasn't set osStack yet.
1365 // The go:noinline is to guarantee the getcallerpc/getcallersp below are safe,
1366 // so that we can set up g0.sched to return to the call of mstart1 above.
1371 if _g_ != _g_.m.g0 {
1372 throw("bad runtime·mstart")
1375 // Set up m.g0.sched as a label returning to just
1376 // after the mstart1 call in mstart0 above, for use by goexit0 and mcall.
1377 // We're never coming back to mstart1 after we call schedule,
1378 // so other calls can reuse the current frame.
1379 // And goexit0 does a gogo that needs to return from mstart1
1380 // and let mstart0 exit the thread.
1381 _g_.sched.g = guintptr(unsafe.Pointer(_g_))
1382 _g_.sched.pc = getcallerpc()
1383 _g_.sched.sp = getcallersp()
1388 // Install signal handlers; after minit so that minit can
1389 // prepare the thread to be able to handle the signals.
1394 if fn := _g_.m.mstartfn; fn != nil {
1399 acquirep(_g_.m.nextp.ptr())
1405 // mstartm0 implements part of mstart1 that only runs on the m0.
1407 // Write barriers are allowed here because we know the GC can't be
1408 // running yet, so they'll be no-ops.
1410 //go:yeswritebarrierrec
1412 // Create an extra M for callbacks on threads not created by Go.
1413 // An extra M is also needed on Windows for callbacks created by
1414 // syscall.NewCallback. See issue #6751 for details.
1415 if (iscgo || GOOS == "windows") && !cgoHasExtraM {
1422 // mPark causes a thread to park itself - temporarily waking for
1423 // fixups but otherwise waiting to be fully woken. This is the
1424 // only way that m's should park themselves.
1429 notesleep(&g.m.park)
1430 // Note, because of signal handling by this parked m,
1431 // a preemptive mDoFixup() may actually occur via
1432 // mDoFixupAndOSYield(). (See golang.org/issue/44193)
1433 noteclear(&g.m.park)
1440 // mexit tears down and exits the current thread.
1442 // Don't call this directly to exit the thread, since it must run at
1443 // the top of the thread stack. Instead, use gogo(&_g_.m.g0.sched) to
1444 // unwind the stack to the point that exits the thread.
1446 // It is entered with m.p != nil, so write barriers are allowed. It
1447 // will release the P before exiting.
1449 //go:yeswritebarrierrec
1450 func mexit(osStack bool) {
1455 // This is the main thread. Just wedge it.
1457 // On Linux, exiting the main thread puts the process
1458 // into a non-waitable zombie state. On Plan 9,
1459 // exiting the main thread unblocks wait even though
1460 // other threads are still running. On Solaris we can
1461 // neither exitThread nor return from mstart. Other
1462 // bad things probably happen on other platforms.
1464 // We could try to clean up this M more before wedging
1465 // it, but that complicates signal handling.
1466 handoffp(releasep())
1472 throw("locked m0 woke up")
1478 // Free the gsignal stack.
1479 if m.gsignal != nil {
1480 stackfree(m.gsignal.stack)
1481 // On some platforms, when calling into VDSO (e.g. nanotime)
1482 // we store our g on the gsignal stack, if there is one.
1483 // Now the stack is freed, unlink it from the m, so we
1484 // won't write to it when calling VDSO code.
1488 // Remove m from allm.
1490 for pprev := &allm; *pprev != nil; pprev = &(*pprev).alllink {
1496 throw("m not found in allm")
1499 // Delay reaping m until it's done with the stack.
1501 // If this is using an OS stack, the OS will free it
1502 // so there's no need for reaping.
1503 atomic.Store(&m.freeWait, 1)
1504 // Put m on the free list, though it will not be reaped until
1505 // freeWait is 0. Note that the free list must not be linked
1506 // through alllink because some functions walk allm without
1507 // locking, so may be using alllink.
1508 m.freelink = sched.freem
1513 atomic.Xadd64(&ncgocall, int64(m.ncgocall))
1516 handoffp(releasep())
1517 // After this point we must not have write barriers.
1519 // Invoke the deadlock detector. This must happen after
1520 // handoffp because it may have started a new M to take our place.
1527 if GOOS == "darwin" || GOOS == "ios" {
1528 // Make sure pendingPreemptSignals is correct when an M exits.
1530 if atomic.Load(&m.signalPending) != 0 {
1531 atomic.Xadd(&pendingPreemptSignals, -1)
1535 // Destroy all allocated resources. After this is called, we may no
1536 // longer take any locks.
1540 // Return from mstart and let the system thread
1541 // library free the g0 stack and terminate the thread.
1545 // mstart is the thread's entry point, so there's nothing to
1546 // return to. Exit the thread directly. exitThread will clear
1547 // m.freeWait when it's done with the stack and the m can be
1549 exitThread(&m.freeWait)
1552 // forEachP calls fn(p) for every P p when p reaches a GC safe point.
1553 // If a P is currently executing code, this will bring the P to a GC
1554 // safe point and execute fn on that P. If the P is not executing code
1555 // (it is idle or in a syscall), this will call fn(p) directly while
1556 // preventing the P from exiting its state. This does not ensure that
1557 // fn will run on every CPU executing Go code, but it acts as a global
1558 // memory barrier. GC uses this as a "ragged barrier."
1560 // The caller must hold worldsema.
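//
// Illustrative use (this is the shape the GC uses to flush per-P state):
//
//	forEachP(func(pp *p) {
//		// Runs for each P at a GC safe point, e.g. to flush pp's
//		// write barrier buffer or other per-P caches.
//	})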
1563 func forEachP(fn func(*p)) {
1565 _p_ := getg().m.p.ptr()
1568 if sched.safePointWait != 0 {
1569 throw("forEachP: sched.safePointWait != 0")
1571 sched.safePointWait = gomaxprocs - 1
1572 sched.safePointFn = fn
1574 // Ask all Ps to run the safe point function.
1575 for _, p := range allp {
1577 atomic.Store(&p.runSafePointFn, 1)
1582 // Any P entering _Pidle or _Psyscall from now on will observe
1583 // p.runSafePointFn == 1 and will call runSafePointFn when
1584 // changing its status to _Pidle/_Psyscall.
1586 // Run safe point function for all idle Ps. sched.pidle will
1587 // not change because we hold sched.lock.
1588 for p := sched.pidle.ptr(); p != nil; p = p.link.ptr() {
1589 if atomic.Cas(&p.runSafePointFn, 1, 0) {
1591 sched.safePointWait--
1595 wait := sched.safePointWait > 0
1598 // Run fn for the current P.
1601 // Force Ps currently in _Psyscall into _Pidle and hand them
1602 // off to induce safe point function execution.
1603 for _, p := range allp {
1605 if s == _Psyscall && p.runSafePointFn == 1 && atomic.Cas(&p.status, s, _Pidle) {
1615 // Wait for remaining Ps to run fn.
1618 // Wait for 100us, then try to re-preempt in
1619 // case of any races.
1621 // Requires system stack.
1622 if notetsleep(&sched.safePointNote, 100*1000) {
1623 noteclear(&sched.safePointNote)
1629 if sched.safePointWait != 0 {
1630 throw("forEachP: not done")
1632 for _, p := range allp {
1633 if p.runSafePointFn != 0 {
1634 throw("forEachP: P did not run fn")
1639 sched.safePointFn = nil
1644 // syscall_runtime_doAllThreadsSyscall serializes Go execution and
1645 // executes a specified fn() call on all m's.
1647 // The boolean argument to fn() indicates whether the function's
1648 // return value will be consulted or not. That is, fn(true) should
1649 // return true if fn() succeeds, and fn(true) should return false if
1650 // it fails. When fn(false) is called, its return status will be ignored.
1653 // syscall_runtime_doAllThreadsSyscall first invokes fn(true) on a
1654 // single, coordinating, m, and only if it returns true does it go on
1655 // to invoke fn(false) on all of the other m's known to the process.
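//
// An illustrative caller shape (applyOnThisThread is a hypothetical
// per-thread operation such as a raw syscall):
//
//	fn := func(checked bool) bool {
//		err := applyOnThisThread()
//		if checked {
//			return err == nil // fn(true): result is consulted on the coordinating m
//		}
//		return true // fn(false): result is ignored on the other m's
//	}
//	syscall_runtime_doAllThreadsSyscall(fn)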
1657 //go:linkname syscall_runtime_doAllThreadsSyscall syscall.runtime_doAllThreadsSyscall
1658 func syscall_runtime_doAllThreadsSyscall(fn func(bool) bool) {
1660 panic("doAllThreadsSyscall not supported with cgo enabled")
1665 for atomic.Load(&sched.sysmonStarting) != 0 {
1669 // We don't want this thread to handle signals for the
1670 // duration of this critical section. The underlying issue
1671 // is that this locked coordinating m is the one monitoring
1672 // for fn() execution by all the other m's of the runtime,
1673 // while no regular go code execution is permitted (the world
1674 // is stopped). If this present m were to get distracted to
1675 // run signal handling code, and find itself waiting for a
1676 // second thread to execute go code before being able to
1677 // return from that signal handling, a deadlock will result.
1678 // (See golang.org/issue/44193.)
1684 stopTheWorldGC("doAllThreadsSyscall")
1685 if atomic.Load(&newmHandoff.haveTemplateThread) != 0 {
1686 // Ensure that there are no in-flight thread
1687 // creations: don't want to race with allm.
1688 lock(&newmHandoff.lock)
1689 for !newmHandoff.waiting {
1690 unlock(&newmHandoff.lock)
1692 lock(&newmHandoff.lock)
1694 unlock(&newmHandoff.lock)
1696 if netpollinited() {
1699 sigRecvPrepareForFixup()
1702 // For m's running without racectx, we loan out the
1703 // racectx of this call.
1704 lock(&mFixupRace.lock)
1705 mFixupRace.ctx = _g_.racectx
1706 unlock(&mFixupRace.lock)
1708 if ok := fn(true); ok {
1710 for mp := allm; mp != nil; mp = mp.alllink {
1711 if mp.procid == tid {
1712 // This m has already completed fn()
1716 // Be wary of mp's without procid values if
1717 // they are known not to park. If they are
1718 // marked as parking with a zero procid, then
1719 // they will be racing with this code to be
1720 // allocated a procid and we will annotate
1721 // them with the need to execute the fn when
1722 // they acquire a procid to run it.
1723 if mp.procid == 0 && !mp.doesPark {
1724 // Reaching here, we are either
1725 // running Windows, or cgo linked
1726 // code. Neither of which are
1727 // currently supported by this API.
1728 throw("unsupported runtime environment")
1730 // stopTheWorldGC() doesn't guarantee stopping
1731 // all the threads, so we lock here to avoid
1732 // the possibility of racing with mp.
1733 lock(&mp.mFixup.lock)
1735 atomic.Store(&mp.mFixup.used, 1)
1737 // For non-service threads this will
1738 // cause the wakeup to be short lived
1739 // (once the mutex is unlocked). The
1740 // next real wakeup will occur after
1741 // startTheWorldGC() is called.
1742 notewakeup(&mp.park)
1744 unlock(&mp.mFixup.lock)
1748 for mp := allm; done && mp != nil; mp = mp.alllink {
1749 if mp.procid == tid {
1752 done = atomic.Load(&mp.mFixup.used) == 0
1757 // If needed, force sysmon and/or newmHandoff to wake up.
1759 if atomic.Load(&sched.sysmonwait) != 0 {
1760 atomic.Store(&sched.sysmonwait, 0)
1761 notewakeup(&sched.sysmonnote)
1764 lock(&newmHandoff.lock)
1765 if newmHandoff.waiting {
1766 newmHandoff.waiting = false
1767 notewakeup(&newmHandoff.wake)
1769 unlock(&newmHandoff.lock)
1774 lock(&mFixupRace.lock)
1776 unlock(&mFixupRace.lock)
1779 msigrestore(sigmask)
1783 // runSafePointFn runs the safe point function, if any, for this P.
1784 // This should be called like
1786 // if getg().m.p.runSafePointFn != 0 {
1787 //     runSafePointFn()
1788 // }
1790 // runSafePointFn must be checked on any transition in to _Pidle or
1791 // _Psyscall to avoid a race where forEachP sees that the P is running
1792 // just before the P goes into _Pidle/_Psyscall and neither forEachP
1793 // nor the P run the safe-point function.
1794 func runSafePointFn() {
1795 p := getg().m.p.ptr()
1796 // Resolve the race between forEachP running the safe-point
1797 // function on this P's behalf and this P running the
1798 // safe-point function directly.
1799 if !atomic.Cas(&p.runSafePointFn, 1, 0) {
1802 sched.safePointFn(p)
1804 sched.safePointWait--
1805 if sched.safePointWait == 0 {
1806 notewakeup(&sched.safePointNote)
1811 // When running with cgo, we call _cgo_thread_start
1812 // to start threads for us so that we can play nicely with foreign code.
1814 var cgoThreadStart unsafe.Pointer
1816 type cgothreadstart struct {
1822 // Allocate a new m unassociated with any thread.
1823 // Can use p for allocation context if needed.
1824 // fn is recorded as the new m's m.mstartfn.
1825 // id is optional pre-allocated m ID. Omit by passing -1.
1827 // This function is allowed to have write barriers even if the caller
1828 // isn't because it borrows _p_.
1830 //go:yeswritebarrierrec
1831 func allocm(_p_ *p, fn func(), id int64) *m {
1833 acquirem() // disable GC because it can be called from sysmon
1835 acquirep(_p_) // temporarily borrow p for mallocs in this function
1838 // Release the free M list. We need to do this somewhere and
1839 // this may free up a stack we can use.
1840 if sched.freem != nil {
1843 for freem := sched.freem; freem != nil; {
1844 if freem.freeWait != 0 {
1845 next := freem.freelink
1846 freem.freelink = newList
1851 // stackfree must be on the system stack, but allocm is
1852 // reachable off the system stack transitively from startm.
1854 systemstack(func() {
1855 stackfree(freem.g0.stack)
1857 freem = freem.freelink
1859 sched.freem = newList
1867 // In the case of cgo or Solaris or illumos or Darwin, pthread_create will make us a stack.
1868 // Windows and Plan 9 will lay out the sched stack on the OS stack.
1869 if iscgo || mStackIsSystemAllocated() {
1872 mp.g0 = malg(8192 * sys.StackGuardMultiplier)
1876 if _p_ == _g_.m.p.ptr() {
1884 // needm is called when a cgo callback happens on a
1885 // thread without an m (a thread not created by Go).
1886 // In this case, needm is expected to find an m to use
1887 // and return with m, g initialized correctly.
1888 // Since m and g are not set now (likely nil, but see below)
1889 // needm is limited in what routines it can call. In particular
1890 // it can only call nosplit functions (textflag 7) and cannot
1891 // do any scheduling that requires an m.
1893 // In order to avoid needing heavy lifting here, we adopt
1894 // the following strategy: there is a stack of available m's
1895 // that can be stolen. Using compare-and-swap
1896 // to pop from the stack has ABA races, so we simulate
1897 // a lock by doing an exchange (via Casuintptr) to steal the stack
1898 // head and replace the top pointer with MLOCKED (1).
1899 // This serves as a simple spin lock that we can use even
1900 // without an m. The thread that locks the stack in this way
1901 // unlocks the stack by storing a valid stack head pointer.
1903 // In order to make sure that there is always an m structure
1904 // available to be stolen, we maintain the invariant that there
1905 // is always one more than needed. At the beginning of the
1906 // program (if cgo is in use) the list is seeded with a single m.
1907 // If needm finds that it has taken the last m off the list, its job
1908 // is - once it has installed its own m so that it can do things like
1909 // allocate memory - to create a spare m and put it on the list.
1911 // Each of these extra m's also has a g0 and a curg that are
1912 // pressed into service as the scheduling stack and current
1913 // goroutine for the duration of the cgo callback.
1915 // When the callback is done with the m, it calls dropm to
1916 // put the m back on the list.
1919 if (iscgo || GOOS == "windows") && !cgoHasExtraM {
1920 // Can happen if C/C++ code calls Go from a global ctor.
1921 // Can also happen on Windows if a global ctor uses a
1922 // callback created by syscall.NewCallback. See issue #6751 for details.
1925 // Cannot throw, because the scheduler is not initialized yet.
1926 write(2, unsafe.Pointer(&earlycgocallback[0]), int32(len(earlycgocallback)))
1930 // Save and block signals before getting an M.
1931 // The signal handler may call needm itself,
1932 // and we must avoid a deadlock. Also, once g is installed,
1933 // any incoming signals will try to execute,
1934 // but we won't have the sigaltstack settings and other data
1935 // set up appropriately until the end of minit, which will
1936 // unblock the signals. This is the same dance as when
1937 // starting a new m to run Go code via newosproc.
1942 // Lock extra list, take head, unlock popped list.
1943 // nilokay=false is safe here because of the invariant above,
1944 // that the extra list always contains or will soon contain at least one m.
1946 mp := lockextra(false)
1948 // Set needextram when we've just emptied the list,
1949 // so that the eventual call into cgocallbackg will
1950 // allocate a new m for the extra list. We delay the
1951 // allocation until then so that it can be done
1952 // after exitsyscall makes sure it is okay to be
1953 // running at all (that is, there's no garbage collection
1954 // running right now).
1955 mp.needextram = mp.schedlink == 0
1957 unlockextra(mp.schedlink.ptr())
1959 // Store the original signal mask for use by minit.
1960 mp.sigmask = sigmask
1962 // Install TLS on some platforms (previously setg
1963 // would do this if necessary).
1966 // Install g (= m->g0) and set the stack bounds
1967 // to match the current stack. We don't actually know
1968 // how big the stack is, like we don't know how big any
1969 // scheduling stack is, but we assume there's at least 32 kB,
1970 // which is more than enough for us.
1973 _g_.stack.hi = getcallersp() + 1024
1974 _g_.stack.lo = getcallersp() - 32*1024
1975 _g_.stackguard0 = _g_.stack.lo + _StackGuard
1977 // Initialize this thread to use the m.
1981 // mp.curg is now a real goroutine.
1982 casgstatus(mp.curg, _Gdead, _Gsyscall)
1983 atomic.Xadd(&sched.ngsys, -1)
1986 var earlycgocallback = []byte("fatal error: cgo callback before cgo call\n")
1988 // newextram allocates m's and puts them on the extra list.
1989 // It is called with a working local m, so that it can do things
1990 // like call schedlock and allocate.
1992 c := atomic.Xchg(&extraMWaiters, 0)
1994 for i := uint32(0); i < c; i++ {
1998 // Make sure there is at least one extra M.
1999 mp := lockextra(true)
2007 // oneNewExtraM allocates an m and puts it on the extra list.
2008 func oneNewExtraM() {
2009 // Create extra goroutine locked to extra m.
2010 // The goroutine is the context in which the cgo callback will run.
2011 // The sched.pc will never be returned to, but setting it to
2012 // goexit makes clear to the traceback routines where
2013 // the goroutine stack ends.
2014 mp := allocm(nil, nil, -1)
2016 gp.sched.pc = abi.FuncPCABI0(goexit) + sys.PCQuantum
2017 gp.sched.sp = gp.stack.hi
2018 gp.sched.sp -= 4 * goarch.PtrSize // extra space in case of reads slightly beyond frame
2020 gp.sched.g = guintptr(unsafe.Pointer(gp))
2021 gp.syscallpc = gp.sched.pc
2022 gp.syscallsp = gp.sched.sp
2023 gp.stktopsp = gp.sched.sp
2024 // malg returns status as _Gidle. Change to _Gdead before
2025 // adding to allg where GC can see it. We use _Gdead to hide
2026 // this from tracebacks and stack scans since it isn't a
2027 // "real" goroutine until needm grabs it.
2028 casgstatus(gp, _Gidle, _Gdead)
2034 gp.goid = int64(atomic.Xadd64(&sched.goidgen, 1))
2036 gp.racectx = racegostart(abi.FuncPCABIInternal(newextram) + sys.PCQuantum)
2038 // put on allg for garbage collector
2041 // gp is now on the allg list, but we don't want it to be
2042 // counted by gcount. It would be more "proper" to increment
2043 // sched.ngfree, but that requires locking. Incrementing ngsys
2044 // has the same effect.
2045 atomic.Xadd(&sched.ngsys, +1)
2047 // Add m to the extra list.
2048 mnext := lockextra(true)
2049 mp.schedlink.set(mnext)
2054 // dropm is called when a cgo callback has called needm but is now
2055 // done with the callback and returning back into the non-Go thread.
2056 // It puts the current m back onto the extra list.
2058 // The main expense here is the call to signalstack to release the
2059 // m's signal stack, and then the call to needm on the next callback
2060 // from this thread. It is tempting to try to save the m for next time,
2061 // which would eliminate both these costs, but there might not be
2062 // a next time: the current thread (which Go does not control) might exit.
2063 // If we saved the m for that thread, there would be an m leak each time
2064 // such a thread exited. Instead, we acquire and release an m on each
2065 // call. These should typically not be scheduling operations, just a few
2066 // atomics, so the cost should be small.
2068 // TODO(rsc): An alternative would be to allocate a dummy pthread per-thread
2069 // variable using pthread_key_create. Unlike the pthread keys we already use
2070 // on OS X, this dummy key would never be read by Go code. It would exist
2071 // only so that we could register a thread-exit-time destructor.
2072 // That destructor would put the m back onto the extra list.
2073 // This is purely a performance optimization. The current version,
2074 // in which dropm happens on each cgo call, is still correct too.
2075 // We may have to keep the current version on systems with cgo
2076 // but without pthreads, like Windows.
2078 // Clear m and g, and return m to the extra list.
2079 // After the call to setg we can only call nosplit functions
2080 // with no pointer manipulation.
2083 // Return mp.curg to dead state.
2084 casgstatus(mp.curg, _Gsyscall, _Gdead)
2085 mp.curg.preemptStop = false
2086 atomic.Xadd(&sched.ngsys, +1)
2088 // Block signals before unminit.
2089 // Unminit unregisters the signal handling stack (but needs g on some systems).
2090 // Setg(nil) clears g, which is the signal handler's cue not to run Go handlers.
2091 // It's important not to try to handle a signal between those two steps.
2092 sigmask := mp.sigmask
2096 mnext := lockextra(true)
2098 mp.schedlink.set(mnext)
2102 // Commit the release of mp.
2105 msigrestore(sigmask)
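// Illustrative sketch (not part of proc.go): the acquire/release pairing
// described above, as seen from a cgo callback arriving on a thread that
// Go did not create. Only the ordering matters here; the callback
// plumbing itself lives in cgocallback.
//
//	needm() // take an m (with g0 and signal stack) off the extra list
//	// ... run the Go callback code on that m ...
//	dropm() // return the m to the extra list before going back to C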
2108 // A helper function for EnsureDropM.
2109 func getm() uintptr {
2110 return uintptr(unsafe.Pointer(getg().m))
2114 var extraMCount uint32 // Protected by lockextra
2115 var extraMWaiters uint32
2117 // lockextra locks the extra list and returns the list head.
2118 // The caller must unlock the list by storing a new list head
2119 // to extram. If nilokay is true, then lockextra will
2120 // return a nil list head if that's what it finds. If nilokay is false,
2121 // lockextra will keep waiting until the list head is no longer nil.
2123 func lockextra(nilokay bool) *m {
2128 old := atomic.Loaduintptr(&extram)
2133 if old == 0 && !nilokay {
2135 // Add 1 to the number of threads
2136 // waiting for an M.
2137 // This is cleared by newextram.
2138 atomic.Xadd(&extraMWaiters, 1)
2144 if atomic.Casuintptr(&extram, old, locked) {
2145 return (*m)(unsafe.Pointer(old))
2153 func unlockextra(mp *m) {
2154 atomic.Storeuintptr(&extram, uintptr(unsafe.Pointer(mp)))
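// Illustrative usage sketch (not part of proc.go), mirroring oneNewExtraM
// above. The "lock" is the sentinel value cas'd into extram; storing the
// new head both publishes the added m and releases the lock. newM is a
// placeholder name.
//
//	mnext := lockextra(true)  // may return nil, since nilokay is true
//	newM.schedlink.set(mnext) // chain the old head behind the m being added
//	unlockextra(newM)         // publish newM as the new head, unlocking the list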
2157 // execLock serializes exec and clone to avoid bugs or unspecified behaviour
2158 // around exec'ing while creating/destroying threads. See issue #19546.
2159 var execLock rwmutex
2161 // newmHandoff contains a list of m structures that need new OS threads.
2162 // This is used by newm in situations where newm itself can't safely
2163 // start an OS thread.
2164 var newmHandoff struct {
2167 // newm points to a list of M structures that need new OS
2168 // threads. The list is linked through m.schedlink.
2171 // waiting indicates that wake needs to be notified when an m
2172 // is put on the list.
2176 // haveTemplateThread indicates that the templateThread has
2177 // been started. This is not protected by lock. Use cas to set
2179 haveTemplateThread uint32
2182 // Create a new m. It will start off with a call to fn, or else the scheduler.
2183 // fn needs to be static and not a heap allocated closure.
2184 // May run with m.p==nil, so write barriers are not allowed.
2186 // id is optional pre-allocated m ID. Omit by passing -1.
2187 //go:nowritebarrierrec
2188 func newm(fn func(), _p_ *p, id int64) {
2189 mp := allocm(_p_, fn, id)
2190 mp.doesPark = (_p_ != nil)
2192 mp.sigmask = initSigmask
2193 if gp := getg(); gp != nil && gp.m != nil && (gp.m.lockedExt != 0 || gp.m.incgo) && GOOS != "plan9" {
2194 // We're on a locked M or a thread that may have been
2195 // started by C. The kernel state of this thread may
2196 // be strange (the user may have locked it for that
2197 // purpose). We don't want to clone that into another
2198 // thread. Instead, ask a known-good thread to create
2199 // the thread for us.
2201 // This is disabled on Plan 9. See golang.org/issue/22227.
2203 // TODO: This may be unnecessary on Windows, which
2204 // doesn't model thread creation off fork.
2205 lock(&newmHandoff.lock)
2206 if newmHandoff.haveTemplateThread == 0 {
2207 throw("on a locked thread with no template thread")
2209 mp.schedlink = newmHandoff.newm
2210 newmHandoff.newm.set(mp)
2211 if newmHandoff.waiting {
2212 newmHandoff.waiting = false
2213 notewakeup(&newmHandoff.wake)
2215 unlock(&newmHandoff.lock)
2223 var ts cgothreadstart
2224 if _cgo_thread_start == nil {
2225 throw("_cgo_thread_start missing")
2228 ts.tls = (*uint64)(unsafe.Pointer(&mp.tls[0]))
2229 ts.fn = unsafe.Pointer(abi.FuncPCABI0(mstart))
2231 msanwrite(unsafe.Pointer(&ts), unsafe.Sizeof(ts))
2233 execLock.rlock() // Prevent process clone.
2234 asmcgocall(_cgo_thread_start, unsafe.Pointer(&ts))
2238 execLock.rlock() // Prevent process clone.
2243 // startTemplateThread starts the template thread if it is not already
2246 // The calling thread must itself be in a known-good state.
2247 func startTemplateThread() {
2248 if GOARCH == "wasm" { // no threads on wasm yet
2252 // Disable preemption to guarantee that the template thread will be
2253 // created before a park once haveTemplateThread is set.
2255 if !atomic.Cas(&newmHandoff.haveTemplateThread, 0, 1) {
2259 newm(templateThread, nil, -1)
2263 // mFixupRace is used to temporarily borrow the race context from the
2264 // coordinating m during a syscall_runtime_doAllThreadsSyscall and
2265 // loan it out to each of the m's of the runtime so they can execute a
2266 // mFixup.fn in that context.
2267 var mFixupRace struct {
2272 // mDoFixup runs any outstanding fixup function for the running m.
2273 // Returns true if a fixup was outstanding and actually executed.
2275 // Note: to avoid deadlocks, and the need for the fixup function
2276 // itself to be async safe, signals are blocked for the working m
2277 // while it holds the mFixup lock. (See golang.org/issue/44193)
2280 func mDoFixup() bool {
2282 if used := atomic.Load(&_g_.m.mFixup.used); used == 0 {
2286 // slow path - if fixup fn is used, block signals and lock.
2290 lock(&_g_.m.mFixup.lock)
2291 fn := _g_.m.mFixup.fn
2293 if gcphase != _GCoff {
2294 // We can't have a write barrier in this
2295 // context since we may not have a P, but we
2296 // clear fn to signal that we've executed the
2297 // fixup. As long as fn is kept alive
2298 // elsewhere, technically we should have no
2299 // issues with the GC, but fn is likely
2300 // generated in a different package altogether
2301 // that may change independently. Just assert
2302 // the GC is off so this lack of write barrier
2303 // is more obviously safe.
2304 throw("GC must be disabled to protect validity of fn value")
2306 if _g_.racectx != 0 || !raceenabled {
2309 // temporarily acquire the context of the
2310 // originator of the
2311 // syscall_runtime_doAllThreadsSyscall and
2312 // block others from using it for the duration
2313 // of the fixup call.
2314 lock(&mFixupRace.lock)
2315 _g_.racectx = mFixupRace.ctx
2318 unlock(&mFixupRace.lock)
2320 *(*uintptr)(unsafe.Pointer(&_g_.m.mFixup.fn)) = 0
2321 atomic.Store(&_g_.m.mFixup.used, 0)
2323 unlock(&_g_.m.mFixup.lock)
2324 msigrestore(sigmask)
2328 // mDoFixupAndOSYield is called when an m is unable to send a signal
2329 // because the allThreadsSyscall mechanism is in progress. That is, an
2330 // mPark() has been interrupted with this signal handler so we need to
2331 // ensure the fixup is executed from this context.
2333 func mDoFixupAndOSYield() {
2338 // templateThread is a thread in a known-good state that exists solely
2339 // to start new threads in known-good states when the calling thread
2340 // may not be in a good state.
2342 // Many programs never need this, so templateThread is started lazily
2343 // when we first enter a state that might lead to running on a thread
2344 // in an unknown state.
2346 // templateThread runs on an M without a P, so it must not have write
2349 //go:nowritebarrierrec
2350 func templateThread() {
2357 lock(&newmHandoff.lock)
2358 for newmHandoff.newm != 0 {
2359 newm := newmHandoff.newm.ptr()
2360 newmHandoff.newm = 0
2361 unlock(&newmHandoff.lock)
2363 next := newm.schedlink.ptr()
2368 lock(&newmHandoff.lock)
2370 newmHandoff.waiting = true
2371 noteclear(&newmHandoff.wake)
2372 unlock(&newmHandoff.lock)
2373 notesleep(&newmHandoff.wake)
2378 // Stops execution of the current m until new work is available.
2379 // Returns with acquired P.
2383 if _g_.m.locks != 0 {
2384 throw("stopm holding locks")
2387 throw("stopm holding p")
2390 throw("stopm spinning")
2397 acquirep(_g_.m.nextp.ptr())
2402 // startm's caller incremented nmspinning. Set the new M's spinning.
2403 getg().m.spinning = true
2406 // Schedules some M to run the p (creates an M if necessary).
2407 // If p==nil, tries to get an idle P, if no idle P's does nothing.
2408 // May run with m.p==nil, so write barriers are not allowed.
2409 // If spinning is set, the caller has incremented nmspinning and startm will
2410 // either decrement nmspinning or set m.spinning in the newly started M.
2412 // Callers passing a non-nil P must call from a non-preemptible context. See
2413 // comment on acquirem below.
2415 // Must not have write barriers because this may be called without a P.
2416 //go:nowritebarrierrec
2417 func startm(_p_ *p, spinning bool) {
2418 // Disable preemption.
2420 // Every owned P must have an owner that will eventually stop it in the
2421 // event of a GC stop request. startm takes transient ownership of a P
2422 // (either from argument or pidleget below) and transfers ownership to
2423 // a started M, which will be responsible for performing the stop.
2425 // Preemption must be disabled during this transient ownership,
2426 // otherwise the P this is running on may enter GC stop while still
2427 // holding the transient P, leaving that P in limbo and deadlocking the
2430 // Callers passing a non-nil P must already be in non-preemptible
2431 // context, otherwise such preemption could occur on function entry to
2432 // startm. Callers passing a nil P may be preemptible, so we must
2433 // disable preemption before acquiring a P from pidleget below.
2441 // The caller incremented nmspinning, but there are no idle Ps,
2442 // so it's okay to just undo the increment and give up.
2443 if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
2444 throw("startm: negative nmspinning")
2453 // No M is available, we must drop sched.lock and call newm.
2454 // However, we already own a P to assign to the M.
2456 // Once sched.lock is released, another G (e.g., in a syscall)
2457 // could find no idle P while checkdead finds a runnable G but
2458 // no running M's because this new M hasn't started yet, thus
2459 // throwing in an apparent deadlock.
2461 // Avoid this situation by pre-allocating the ID for the new M,
2462 // thus marking it as 'running' before we drop sched.lock. This
2463 // new M will eventually run the scheduler to execute any
2470 // The caller incremented nmspinning, so set m.spinning in the new M.
2474 // Ownership transfer of _p_ committed by start in newm.
2475 // Preemption is now safe.
2481 throw("startm: m is spinning")
2484 throw("startm: m has p")
2486 if spinning && !runqempty(_p_) {
2487 throw("startm: p has runnable gs")
2489 // The caller incremented nmspinning, so set m.spinning in the new M.
2490 nmp.spinning = spinning
2492 notewakeup(&nmp.park)
2493 // Ownership transfer of _p_ committed by wakeup. Preemption is now
2498 // Hands off P from syscall or locked M.
2499 // Always runs without a P, so write barriers are not allowed.
2500 //go:nowritebarrierrec
2501 func handoffp(_p_ *p) {
2502 // handoffp must start an M in any situation where
2503 // findrunnable would return a G to run on _p_.
2505 // if it has local work, start it straight away
2506 if !runqempty(_p_) || sched.runqsize != 0 {
2510 // if it has GC work, start it straight away
2511 if gcBlackenEnabled != 0 && gcMarkWorkAvailable(_p_) {
2515 // no local work, check that there are no spinning/idle M's,
2516 // otherwise our help is not required
2517 if atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) == 0 && atomic.Cas(&sched.nmspinning, 0, 1) { // TODO: fast atomic
2522 if sched.gcwaiting != 0 {
2523 _p_.status = _Pgcstop
2525 if sched.stopwait == 0 {
2526 notewakeup(&sched.stopnote)
2531 if _p_.runSafePointFn != 0 && atomic.Cas(&_p_.runSafePointFn, 1, 0) {
2532 sched.safePointFn(_p_)
2533 sched.safePointWait--
2534 if sched.safePointWait == 0 {
2535 notewakeup(&sched.safePointNote)
2538 if sched.runqsize != 0 {
2543 // If this is the last running P and nobody is polling the network,
2544 // we need to wake up another M to poll the network.
2545 if sched.npidle == uint32(gomaxprocs-1) && atomic.Load64(&sched.lastpoll) != 0 {
2551 // The scheduler lock cannot be held when calling wakeNetPoller below
2552 // because wakeNetPoller may call wakep which may call startm.
2553 when := nobarrierWakeTime(_p_)
2562 // Tries to add one more P to execute G's.
2563 // Called when a G is made runnable (newproc, ready).
2565 if atomic.Load(&sched.npidle) == 0 {
2568 // be conservative about spinning threads
2569 if atomic.Load(&sched.nmspinning) != 0 || !atomic.Cas(&sched.nmspinning, 0, 1) {
2575 // Stops execution of the current m that is locked to a g until the g is runnable again.
2576 // Returns with acquired P.
2577 func stoplockedm() {
2580 if _g_.m.lockedg == 0 || _g_.m.lockedg.ptr().lockedm.ptr() != _g_.m {
2581 throw("stoplockedm: inconsistent locking")
2584 // Schedule another M to run this p.
2589 // Wait until another thread schedules lockedg again.
2591 status := readgstatus(_g_.m.lockedg.ptr())
2592 if status&^_Gscan != _Grunnable {
2593 print("runtime:stoplockedm: lockedg (atomicstatus=", status, ") is not Grunnable or Gscanrunnable\n")
2594 dumpgstatus(_g_.m.lockedg.ptr())
2595 throw("stoplockedm: not runnable")
2597 acquirep(_g_.m.nextp.ptr())
2601 // Schedules the locked m to run the locked gp.
2602 // May run during STW, so write barriers are not allowed.
2603 //go:nowritebarrierrec
2604 func startlockedm(gp *g) {
2607 mp := gp.lockedm.ptr()
2609 throw("startlockedm: locked to me")
2612 throw("startlockedm: m has p")
2614 // directly handoff current P to the locked m
2618 notewakeup(&mp.park)
2622 // Stops the current m for stopTheWorld.
2623 // Returns when the world is restarted.
2627 if sched.gcwaiting == 0 {
2628 throw("gcstopm: not waiting for gc")
2631 _g_.m.spinning = false
2632 // OK to just drop nmspinning here,
2633 // startTheWorld will unpark threads as necessary.
2634 if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
2635 throw("gcstopm: negative nmspinning")
2640 _p_.status = _Pgcstop
2642 if sched.stopwait == 0 {
2643 notewakeup(&sched.stopnote)
2649 // Schedules gp to run on the current M.
2650 // If inheritTime is true, gp inherits the remaining time in the
2651 // current time slice. Otherwise, it starts a new time slice.
2654 // Write barriers are allowed because this is called immediately after
2655 // acquiring a P in several places.
2657 //go:yeswritebarrierrec
2658 func execute(gp *g, inheritTime bool) {
2661 // Assign gp.m before entering _Grunning so running Gs have an
2665 casgstatus(gp, _Grunnable, _Grunning)
2668 gp.stackguard0 = gp.stack.lo + _StackGuard
2670 _g_.m.p.ptr().schedtick++
2673 // Check whether the profiler needs to be turned on or off.
2674 hz := sched.profilehz
2675 if _g_.m.profilehz != hz {
2676 setThreadCPUProfiler(hz)
2680 // GoSysExit has to happen when we have a P, but before GoStart.
2681 // So we emit it here.
2682 if gp.syscallsp != 0 && gp.sysblocktraced {
2683 traceGoSysExit(gp.sysexitticks)
2691 // Finds a runnable goroutine to execute.
2692 // Tries to steal from other P's, get g from local or global queue, poll network.
2693 func findrunnable() (gp *g, inheritTime bool) {
2696 // The conditions here and in handoffp must agree: if
2697 // findrunnable would return a G to run, handoffp must start
2701 _p_ := _g_.m.p.ptr()
2702 if sched.gcwaiting != 0 {
2706 if _p_.runSafePointFn != 0 {
2710 now, pollUntil, _ := checkTimers(_p_, 0)
2712 if fingwait && fingwake {
2713 if gp := wakefing(); gp != nil {
2717 if *cgo_yield != nil {
2718 asmcgocall(*cgo_yield, nil)
2722 if gp, inheritTime := runqget(_p_); gp != nil {
2723 return gp, inheritTime
2727 if sched.runqsize != 0 {
2729 gp := globrunqget(_p_, 0)
2737 // This netpoll is only an optimization before we resort to stealing.
2738 // We can safely skip it if there are no waiters or a thread is blocked
2739 // in netpoll already. If there is any kind of logical race with that
2740 // blocked thread (e.g. it has already returned from netpoll, but does
2741 // not set lastpoll yet), this thread will do blocking netpoll below
2743 if netpollinited() && atomic.Load(&netpollWaiters) > 0 && atomic.Load64(&sched.lastpoll) != 0 {
2744 if list := netpoll(0); !list.empty() { // non-blocking
2747 casgstatus(gp, _Gwaiting, _Grunnable)
2749 traceGoUnpark(gp, 0)
2755 // Spinning Ms: steal work from other Ps.
2757 // Limit the number of spinning Ms to half the number of busy Ps.
2758 // This is necessary to prevent excessive CPU consumption when
2759 // GOMAXPROCS>>1 but the program parallelism is low.
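// Worked example (illustrative): with GOMAXPROCS=8 and 2 idle Ps,
// procs-npidle is 6, so the check below admits a new spinner only while
// 2*nmspinning < 6. nmspinning can therefore grow to at most 3, i.e.
// half of the 6 busy Ps.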
2760 procs := uint32(gomaxprocs)
2761 if _g_.m.spinning || 2*atomic.Load(&sched.nmspinning) < procs-atomic.Load(&sched.npidle) {
2762 if !_g_.m.spinning {
2763 _g_.m.spinning = true
2764 atomic.Xadd(&sched.nmspinning, 1)
2767 gp, inheritTime, tnow, w, newWork := stealWork(now)
2770 // Successfully stole.
2771 return gp, inheritTime
2774 // There may be new timer or GC work; restart to
2778 if w != 0 && (pollUntil == 0 || w < pollUntil) {
2779 // Earlier timer to wait for.
2784 // We have nothing to do.
2786 // If we're in the GC mark phase, can safely scan and blacken objects,
2787 // and have work to do, run idle-time marking rather than give up the
2789 if gcBlackenEnabled != 0 && gcMarkWorkAvailable(_p_) {
2790 node := (*gcBgMarkWorkerNode)(gcBgMarkWorkerPool.pop())
2792 _p_.gcMarkWorkerMode = gcMarkWorkerIdleMode
2794 casgstatus(gp, _Gwaiting, _Grunnable)
2796 traceGoUnpark(gp, 0)
2803 // If a callback returned and no other goroutine is awake,
2804 // then wake the event handler goroutine, which pauses execution
2805 // until a callback is triggered.
2806 gp, otherReady := beforeIdle(now, pollUntil)
2808 casgstatus(gp, _Gwaiting, _Grunnable)
2810 traceGoUnpark(gp, 0)
2818 // Before we drop our P, make a snapshot of the allp slice,
2819 // which can change underfoot once we no longer block
2820 // safe-points. We don't need to snapshot the contents because
2821 // everything up to cap(allp) is immutable.
2822 allpSnapshot := allp
2823 // Also snapshot masks. Value changes are OK, but we can't allow
2824 // len to change out from under us.
2825 idlepMaskSnapshot := idlepMask
2826 timerpMaskSnapshot := timerpMask
2828 // return P and block
2830 if sched.gcwaiting != 0 || _p_.runSafePointFn != 0 {
2834 if sched.runqsize != 0 {
2835 gp := globrunqget(_p_, 0)
2839 if releasep() != _p_ {
2840 throw("findrunnable: wrong p")
2845 // Delicate dance: thread transitions from spinning to non-spinning
2846 // state, potentially concurrently with submission of new work. We must
2847 // drop nmspinning first and then check all sources again (with
2848 // #StoreLoad memory barrier in between). If we do it the other way
2849 // around, another thread can submit work after we've checked all
2850 // sources but before we drop nmspinning; as a result nobody will
2851 // unpark a thread to run the work.
2853 // This applies to the following sources of work:
2855 // * Goroutines added to a per-P run queue.
2856 // * New/modified-earlier timers on a per-P timer heap.
2857 // * Idle-priority GC work (barring golang.org/issue/19112).
2859 // If we discover new work below, we need to restore m.spinning as a signal
2860 // for resetspinning to unpark a new worker thread (because there can be more
2861 // than one starving goroutine). However, if after discovering new work
2862 // we also observe no idle Ps it is OK to skip unparking a new worker
2863 // thread: the system is fully loaded so no spinning threads are required.
2864 // Also see "Worker thread parking/unparking" comment at the top of the file.
2865 wasSpinning := _g_.m.spinning
2867 _g_.m.spinning = false
2868 if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
2869 throw("findrunnable: negative nmspinning")
2872 // Note that for correctness, only the last M transitioning from
2873 // spinning to non-spinning must perform these rechecks to
2874 // ensure no missed work. We are performing it on every M that
2875 // transitions as a conservative change to monitor effects on
2876 // latency. See golang.org/issue/43997.
2878 // Check all runqueues once again.
2879 _p_ = checkRunqsNoP(allpSnapshot, idlepMaskSnapshot)
2882 _g_.m.spinning = true
2883 atomic.Xadd(&sched.nmspinning, 1)
2887 // Check for idle-priority GC work again.
2888 _p_, gp = checkIdleGCNoP()
2891 _g_.m.spinning = true
2892 atomic.Xadd(&sched.nmspinning, 1)
2894 // Run the idle worker.
2895 _p_.gcMarkWorkerMode = gcMarkWorkerIdleMode
2896 casgstatus(gp, _Gwaiting, _Grunnable)
2898 traceGoUnpark(gp, 0)
2903 // Finally, check for timer creation or expiry concurrently with
2904 // transitioning from spinning to non-spinning.
2906 // Note that we cannot use checkTimers here because it calls
2907 // adjusttimers which may need to allocate memory, and that isn't
2908 // allowed when we don't have an active P.
2909 pollUntil = checkTimersNoP(allpSnapshot, timerpMaskSnapshot, pollUntil)
2912 // Poll network until next timer.
2913 if netpollinited() && (atomic.Load(&netpollWaiters) > 0 || pollUntil != 0) && atomic.Xchg64(&sched.lastpoll, 0) != 0 {
2914 atomic.Store64(&sched.pollUntil, uint64(pollUntil))
2916 throw("findrunnable: netpoll with p")
2919 throw("findrunnable: netpoll with spinning")
2926 delay = pollUntil - now
2932 // When using fake time, just poll.
2935 list := netpoll(delay) // block until new work is available
2936 atomic.Store64(&sched.pollUntil, 0)
2937 atomic.Store64(&sched.lastpoll, uint64(nanotime()))
2938 if faketime != 0 && list.empty() {
2939 // Using fake time and nothing is ready; stop M.
2940 // When all M's stop, checkdead will call timejump.
2954 casgstatus(gp, _Gwaiting, _Grunnable)
2956 traceGoUnpark(gp, 0)
2961 _g_.m.spinning = true
2962 atomic.Xadd(&sched.nmspinning, 1)
2966 } else if pollUntil != 0 && netpollinited() {
2967 pollerPollUntil := int64(atomic.Load64(&sched.pollUntil))
2968 if pollerPollUntil == 0 || pollerPollUntil > pollUntil {
2976 // pollWork reports whether there is non-background work this P could
2977 // be doing. This is a fairly lightweight check to be used for
2978 // background work loops, like idle GC. It checks a subset of the
2979 // conditions checked by the actual scheduler.
2980 func pollWork() bool {
2981 if sched.runqsize != 0 {
2984 p := getg().m.p.ptr()
2988 if netpollinited() && atomic.Load(&netpollWaiters) > 0 && sched.lastpoll != 0 {
2989 if list := netpoll(0); !list.empty() {
2997 // stealWork attempts to steal a runnable goroutine or timer from any P.
2999 // If newWork is true, new work may have been readied.
3001 // If now is not 0 it is the current time. stealWork returns the passed time or
3002 // the current time if now was passed as 0.
3003 func stealWork(now int64) (gp *g, inheritTime bool, rnow, pollUntil int64, newWork bool) {
3004 pp := getg().m.p.ptr()
3008 const stealTries = 4
3009 for i := 0; i < stealTries; i++ {
3010 stealTimersOrRunNextG := i == stealTries-1
3012 for enum := stealOrder.start(fastrand()); !enum.done(); enum.next() {
3013 if sched.gcwaiting != 0 {
3014 // GC work may be available.
3015 return nil, false, now, pollUntil, true
3017 p2 := allp[enum.position()]
3022 // Steal timers from p2. This call to checkTimers is the only place
3023 // where we might hold a lock on a different P's timers. We do this
3024 // once on the last pass before checking runnext because stealing
3025 // from the other P's runnext should be the last resort, so if there
3026 // are timers to steal do that first.
3028 // We only check timers on one of the stealing iterations because
3029 // the time stored in now doesn't change in this loop and checking
3030 // the timers for each P more than once with the same value of now
3031 // is probably a waste of time.
3033 // timerpMask tells us whether the P may have timers at all. If it
3034 // can't, no need to check at all.
3035 if stealTimersOrRunNextG && timerpMask.read(enum.position()) {
3036 tnow, w, ran := checkTimers(p2, now)
3038 if w != 0 && (pollUntil == 0 || w < pollUntil) {
3042 // Running the timers may have
3043 // made an arbitrary number of G's
3044 // ready and added them to this P's
3045 // local run queue. That invalidates
3046 // the assumption of runqsteal
3047 // that it always has room to add
3048 // stolen G's. So check now if there
3049 // is a local G to run.
3050 if gp, inheritTime := runqget(pp); gp != nil {
3051 return gp, inheritTime, now, pollUntil, ranTimer
3057 // Don't bother to attempt to steal if p2 is idle.
3058 if !idlepMask.read(enum.position()) {
3059 if gp := runqsteal(pp, p2, stealTimersOrRunNextG); gp != nil {
3060 return gp, false, now, pollUntil, ranTimer
3066 // No goroutines found to steal. Regardless, running a timer may have
3067 // made some goroutine ready that we missed. Indicate the next timer to
3069 return nil, false, now, pollUntil, ranTimer
3072 // Check all Ps for a runnable G to steal.
3074 // On entry we have no P. If a G is available to steal and a P is available,
3075 // the P is returned which the caller should acquire and attempt to steal the
3077 func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p {
3078 for id, p2 := range allpSnapshot {
3079 if !idlepMaskSnapshot.read(uint32(id)) && !runqempty(p2) {
3087 // Can't get a P, don't bother checking remaining Ps.
3095 // Check all Ps for a timer expiring sooner than pollUntil.
3097 // Returns updated pollUntil value.
3098 func checkTimersNoP(allpSnapshot []*p, timerpMaskSnapshot pMask, pollUntil int64) int64 {
3099 for id, p2 := range allpSnapshot {
3100 if timerpMaskSnapshot.read(uint32(id)) {
3101 w := nobarrierWakeTime(p2)
3102 if w != 0 && (pollUntil == 0 || w < pollUntil) {
3111 // Check for idle-priority GC, without a P on entry.
3113 // If some GC work, a P, and a worker G are all available, the P and G will be
3114 // returned. The returned P has not been wired yet.
3115 func checkIdleGCNoP() (*p, *g) {
3116 // N.B. Since we have no P, gcBlackenEnabled may change at any time; we
3117 // must check again after acquiring a P.
3118 if atomic.Load(&gcBlackenEnabled) == 0 {
3121 if !gcMarkWorkAvailable(nil) {
3125 // Work is available; we can start an idle GC worker only if there is
3126 // an available P and available worker G.
3128 // We can attempt to acquire these in either order, though both have
3129 // synchronization concerns (see below). Workers are almost always
3130 // available (see comment in findRunnableGCWorker for the one case
3131 // there may be none). Since we're slightly less likely to find a P,
3132 // check for that first.
3134 // Synchronization: note that we must hold sched.lock until we are
3135 // committed to keeping it. Otherwise we cannot put the unnecessary P
3136 // back in sched.pidle without performing the full set of idle
3137 // transition checks.
3139 // If we were to check gcBgMarkWorkerPool first, we must somehow handle
3140 // the assumption in gcControllerState.findRunnableGCWorker that an
3141 // empty gcBgMarkWorkerPool is only possible if gcMarkDone is running.
3149 // Now that we own a P, gcBlackenEnabled can't change (as it requires
3151 if gcBlackenEnabled == 0 {
3157 node := (*gcBgMarkWorkerNode)(gcBgMarkWorkerPool.pop())
3166 return pp, node.gp.ptr()
3169 // wakeNetPoller wakes up the thread sleeping in the network poller if it isn't
3170 // going to wake up before the when argument; or it wakes an idle P to service
3171 // timers and the network poller if there isn't one already.
3172 func wakeNetPoller(when int64) {
3173 if atomic.Load64(&sched.lastpoll) == 0 {
3174 // In findrunnable we ensure that when polling the pollUntil
3175 // field is either zero or the time to which the current
3176 // poll is expected to run. This can have a spurious wakeup
3177 // but should never miss a wakeup.
3178 pollerPollUntil := int64(atomic.Load64(&sched.pollUntil))
3179 if pollerPollUntil == 0 || pollerPollUntil > when {
3183 // There are no threads in the network poller, try to get
3184 // one there so it can handle new timers.
3185 if GOOS != "plan9" { // Temporary workaround - see issue #42303.
3191 func resetspinning() {
3193 if !_g_.m.spinning {
3194 throw("resetspinning: not a spinning m")
3196 _g_.m.spinning = false
3197 nmspinning := atomic.Xadd(&sched.nmspinning, -1)
3198 if int32(nmspinning) < 0 {
3199 throw("findrunnable: negative nmspinning")
3201 // M wakeup policy is deliberately somewhat conservative, so check if we
3202 // need to wake up another P here. See "Worker thread parking/unparking"
3203 // comment at the top of the file for details.
3207 // injectglist adds each runnable G on the list to some run queue,
3208 // and clears glist. If there is no current P, they are added to the
3209 // global queue, and up to npidle M's are started to run them.
3210 // Otherwise, for each idle P, this adds a G to the global queue
3211 // and starts an M. Any remaining G's are added to the current P's
3213 // This may temporarily acquire sched.lock.
3214 // Can run concurrently with GC.
3215 func injectglist(glist *gList) {
3220 for gp := glist.head.ptr(); gp != nil; gp = gp.schedlink.ptr() {
3221 traceGoUnpark(gp, 0)
3225 // Mark all the goroutines as runnable before we put them
3226 // on the run queues.
3227 head := glist.head.ptr()
3230 for gp := head; gp != nil; gp = gp.schedlink.ptr() {
3233 casgstatus(gp, _Gwaiting, _Grunnable)
3236 // Turn the gList into a gQueue.
3242 startIdle := func(n int) {
3243 for ; n != 0 && sched.npidle != 0; n-- {
3248 pp := getg().m.p.ptr()
3251 globrunqputbatch(&q, int32(qsize))
3257 npidle := int(atomic.Load(&sched.npidle))
3260 for n = 0; n < npidle && !q.empty(); n++ {
3266 globrunqputbatch(&globq, int32(n))
3273 runqputbatch(pp, &q, qsize)
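// Illustrative usage sketch (not part of proc.go): the typical producer of
// such lists is the network poller, whose ready goroutines are handed over
// in one batch, e.g.
//
//	list := netpoll(0) // non-blocking poll for ready network waiters
//	injectglist(&list) // make them runnable, starting Ms for idle Ps as needed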
3277 // One round of scheduler: find a runnable goroutine and execute it.
3282 if _g_.m.locks != 0 {
3283 throw("schedule: holding locks")
3286 if _g_.m.lockedg != 0 {
3288 execute(_g_.m.lockedg.ptr(), false) // Never returns.
3291 // We should not schedule away from a g that is executing a cgo call,
3292 // since the cgo call is using the m's g0 stack.
3294 throw("schedule: in cgo")
3301 if sched.gcwaiting != 0 {
3305 if pp.runSafePointFn != 0 {
3309 // Sanity check: if we are spinning, the run queue should be empty.
3310 // Check this before calling checkTimers, as that might call
3311 // goready to put a ready goroutine on the local run queue.
3312 if _g_.m.spinning && (pp.runnext != 0 || pp.runqhead != pp.runqtail) {
3313 throw("schedule: spinning with local work")
3319 var inheritTime bool
3321 // Normal goroutines will check for need to wakeP in ready,
3322 // but GCworkers and tracereaders will not, so the check must
3323 // be done here instead.
3325 if trace.enabled || trace.shutdown {
3328 casgstatus(gp, _Gwaiting, _Grunnable)
3329 traceGoUnpark(gp, 0)
3333 if gp == nil && gcBlackenEnabled != 0 {
3334 gp = gcController.findRunnableGCWorker(_g_.m.p.ptr())
3340 // Check the global runnable queue once in a while to ensure fairness.
3341 // Otherwise two goroutines can completely occupy the local runqueue
3342 // by constantly respawning each other.
3343 if _g_.m.p.ptr().schedtick%61 == 0 && sched.runqsize > 0 {
3345 gp = globrunqget(_g_.m.p.ptr(), 1)
3350 gp, inheritTime = runqget(_g_.m.p.ptr())
3351 // We can see gp != nil here even if the M is spinning,
3352 // if checkTimers added a local goroutine via goready.
3355 gp, inheritTime = findrunnable() // blocks until work is available
3358 // This thread is going to run a goroutine and is not spinning anymore,
3359 // so if it was marked as spinning we need to reset it now and potentially
3360 // start a new spinning M.
3365 if sched.disable.user && !schedEnabled(gp) {
3366 // Scheduling of this goroutine is disabled. Put it on
3367 // the list of pending runnable goroutines for when we
3368 // re-enable user scheduling and look again.
3370 if schedEnabled(gp) {
3371 // Something re-enabled scheduling while we
3372 // were acquiring the lock.
3375 sched.disable.runnable.pushBack(gp)
3382 // If about to schedule a not-normal goroutine (a GCworker or tracereader),
3383 // wake a P if there is one.
3387 if gp.lockedm != 0 {
3388 // Hands off own p to the locked m,
3389 // then blocks waiting for a new p.
3394 execute(gp, inheritTime)
3397 // dropg removes the association between m and the current goroutine m->curg (gp for short).
3398 // Typically a caller sets gp's status away from Grunning and then
3399 // immediately calls dropg to finish the job. The caller is also responsible
3400 // for arranging that gp will be restarted using ready at an
3401 // appropriate time. After calling dropg and arranging for gp to be
3402 // readied later, the caller can do other work but eventually should
3403 // call schedule to restart the scheduling of goroutines on this m.
3407 setMNoWB(&_g_.m.curg.m, nil)
3408 setGNoWB(&_g_.m.curg, nil)
3411 // checkTimers runs any timers for the P that are ready.
3412 // If now is not 0 it is the current time.
3413 // It returns the passed time or the current time if now was passed as 0,
3414 // and the time when the next timer should run or 0 if there is no next timer,
3415 // and reports whether it ran any timers.
3416 // If the time when the next timer should run is not 0,
3417 // it is always larger than the returned time.
3418 // We pass now in and out to avoid extra calls of nanotime.
3419 //go:yeswritebarrierrec
3420 func checkTimers(pp *p, now int64) (rnow, pollUntil int64, ran bool) {
3421 // If it's not yet time for the first timer, or the first adjusted
3422 // timer, then there is nothing to do.
3423 next := int64(atomic.Load64(&pp.timer0When))
3424 nextAdj := int64(atomic.Load64(&pp.timerModifiedEarliest))
3425 if next == 0 || (nextAdj != 0 && nextAdj < next) {
3430 // No timers to run or adjust.
3431 return now, 0, false
3438 // Next timer is not ready to run, but keep going
3439 // if we would clear deleted timers.
3440 // This corresponds to the condition below where
3441 // we decide whether to call clearDeletedTimers.
3442 if pp != getg().m.p.ptr() || int(atomic.Load(&pp.deletedTimers)) <= int(atomic.Load(&pp.numTimers)/4) {
3443 return now, next, false
3447 lock(&pp.timersLock)
3449 if len(pp.timers) > 0 {
3450 adjusttimers(pp, now)
3451 for len(pp.timers) > 0 {
3452 // Note that runtimer may temporarily unlock
3454 if tw := runtimer(pp, now); tw != 0 {
3464 // If this is the local P, and there are a lot of deleted timers,
3465 // clear them out. We only do this for the local P to reduce
3466 // lock contention on timersLock.
3467 if pp == getg().m.p.ptr() && int(atomic.Load(&pp.deletedTimers)) > len(pp.timers)/4 {
3468 clearDeletedTimers(pp)
3471 unlock(&pp.timersLock)
3473 return now, pollUntil, ran
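// Illustrative usage sketch (not part of proc.go), mirroring the call in
// stealWork above: callers thread now through the call and fold the
// next-timer time into their poll deadline.
//
//	now, w, ran := checkTimers(pp, now)
//	if w != 0 && (pollUntil == 0 || w < pollUntil) {
//		pollUntil = w // earliest pending timer seen so far; 0 means none
//	}
//	// ran feeds the caller's "did any timer run" bookkeeping (see stealWork)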
3476 func parkunlock_c(gp *g, lock unsafe.Pointer) bool {
3477 unlock((*mutex)(lock))
3481 // park continuation on g0.
3482 func park_m(gp *g) {
3486 traceGoPark(_g_.m.waittraceev, _g_.m.waittraceskip)
3489 casgstatus(gp, _Grunning, _Gwaiting)
3492 if fn := _g_.m.waitunlockf; fn != nil {
3493 ok := fn(gp, _g_.m.waitlock)
3494 _g_.m.waitunlockf = nil
3495 _g_.m.waitlock = nil
3498 traceGoUnpark(gp, 2)
3500 casgstatus(gp, _Gwaiting, _Grunnable)
3501 execute(gp, true) // Schedule it back, never returns.
3507 func goschedImpl(gp *g) {
3508 status := readgstatus(gp)
3509 if status&^_Gscan != _Grunning {
3511 throw("bad g status")
3513 casgstatus(gp, _Grunning, _Grunnable)
3522 // Gosched continuation on g0.
3523 func gosched_m(gp *g) {
3530 // goschedguarded is a forbidden-states-avoided version of gosched_m
3531 func goschedguarded_m(gp *g) {
3533 if !canPreemptM(gp.m) {
3534 gogo(&gp.sched) // never return
3543 func gopreempt_m(gp *g) {
3550 // preemptPark parks gp and puts it in _Gpreempted.
3553 func preemptPark(gp *g) {
3555 traceGoPark(traceEvGoBlock, 0)
3557 status := readgstatus(gp)
3558 if status&^_Gscan != _Grunning {
3560 throw("bad g status")
3562 gp.waitreason = waitReasonPreempted
3564 if gp.asyncSafePoint {
3565 // Double-check that async preemption does not
3566 // happen in SPWRITE assembly functions.
3567 // isAsyncSafePoint must exclude this case.
3568 f := findfunc(gp.sched.pc)
3570 throw("preempt at unknown pc")
3572 if f.flag&funcFlag_SPWRITE != 0 {
3573 println("runtime: unexpected SPWRITE function", funcname(f), "in async preempt")
3574 throw("preempt SPWRITE")
3578 // Transition from _Grunning to _Gscan|_Gpreempted. We can't
3579 // be in _Grunning when we dropg because then we'd be running
3580 // without an M, but the moment we're in _Gpreempted,
3581 // something could claim this G before we've fully cleaned it
3582 // up. Hence, we set the scan bit to lock down further
3583 // transitions until we can dropg.
3584 casGToPreemptScan(gp, _Grunning, _Gscan|_Gpreempted)
3586 casfrom_Gscanstatus(gp, _Gscan|_Gpreempted, _Gpreempted)
3590 // goyield is like Gosched, but it:
3591 // - emits a GoPreempt trace event instead of a GoSched trace event
3592 // - puts the current G on the runq of the current P instead of the globrunq
3598 func goyield_m(gp *g) {
3603 casgstatus(gp, _Grunning, _Grunnable)
3605 runqput(pp, gp, false)
3609 // Finishes execution of the current goroutine.
3620 // goexit continuation on g0.
3621 func goexit0(gp *g) {
3624 casgstatus(gp, _Grunning, _Gdead)
3625 if isSystemGoroutine(gp, false) {
3626 atomic.Xadd(&sched.ngsys, -1)
3629 locked := gp.lockedm != 0
3632 gp.preemptStop = false
3633 gp.paniconfault = false
3634 gp._defer = nil // should be true already but just in case.
3635 gp._panic = nil // non-nil for Goexit during panic. points at stack-allocated data.
3642 if gcBlackenEnabled != 0 && gp.gcAssistBytes > 0 {
3643 // Flush assist credit to the global pool. This gives
3644 // better information to pacing if the application is
3645 // rapidly creating and exiting goroutines.
3646 assistWorkPerByte := float64frombits(atomic.Load64(&gcController.assistWorkPerByte))
3647 scanCredit := int64(assistWorkPerByte * float64(gp.gcAssistBytes))
3648 atomic.Xaddint64(&gcController.bgScanCredit, scanCredit)
3649 gp.gcAssistBytes = 0
3654 if GOARCH == "wasm" { // no threads yet on wasm
3655 gfput(_g_.m.p.ptr(), gp)
3656 schedule() // never returns
3659 if _g_.m.lockedInt != 0 {
3660 print("invalid m->lockedInt = ", _g_.m.lockedInt, "\n")
3661 throw("internal lockOSThread error")
3663 gfput(_g_.m.p.ptr(), gp)
3665 // The goroutine may have locked this thread because
3666 // it put it in an unusual kernel state. Kill it
3667 // rather than returning it to the thread pool.
3669 // Return to mstart, which will release the P and exit
3671 if GOOS != "plan9" { // See golang.org/issue/22227.
3672 gogo(&_g_.m.g0.sched)
3674 // Clear lockedExt on plan9 since we may end up re-using
3682 // save updates getg().sched to refer to pc and sp so that a following
3683 // gogo will restore pc and sp.
3685 // save must not have write barriers because invoking a write barrier
3686 // can clobber getg().sched.
3689 //go:nowritebarrierrec
3690 func save(pc, sp uintptr) {
3693 if _g_ == _g_.m.g0 || _g_ == _g_.m.gsignal {
3694 // m.g0.sched is special and must describe the context
3695 // for exiting the thread. mstart1 writes to it directly.
3696 // m.gsignal.sched should not be used at all.
3697 // This check makes sure save calls do not accidentally
3698 // run in contexts where they'd write to system g's.
3699 throw("save on system g not allowed")
3706 // We need to ensure ctxt is zero, but can't have a write
3707 // barrier here. However, it should always already be zero.
3709 if _g_.sched.ctxt != nil {
3714 // The goroutine g is about to enter a system call.
3715 // Record that it's not using the cpu anymore.
3716 // This is called only from the go syscall library and cgocall,
3717 // not from the low-level system calls used by the runtime.
3719 // Entersyscall cannot split the stack: the save must
3720 // make g->sched refer to the caller's stack segment, because
3721 // entersyscall is going to return immediately after.
3723 // Nothing entersyscall calls can split the stack either.
3724 // We cannot safely move the stack during an active call to syscall,
3725 // because we do not know which of the uintptr arguments are
3726 // really pointers (back into the stack).
3727 // In practice, this means that we make the fast path run through
3728 // entersyscall doing no-split things, and the slow path has to use systemstack
3729 // to run bigger things on the system stack.
3731 // reentersyscall is the entry point used by cgo callbacks, where explicitly
3732 // saved SP and PC are restored. This is needed when exitsyscall will be called
3733 // from a function further up in the call stack than the parent, as g->syscallsp
3734 // must always point to a valid stack frame. entersyscall below is the normal
3735 // entry point for syscalls, which obtains the SP and PC from the caller.
3738 // At the start of a syscall we emit traceGoSysCall to capture the stack trace.
3739 // If the syscall does not block, that is it, we do not emit any other events.
3740 // If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock;
3741 // when syscall returns we emit traceGoSysExit and when the goroutine starts running
3742 // (potentially instantly, if exitsyscallfast returns true) we emit traceGoStart.
3743 // To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock,
3744 // we remember current value of syscalltick in m (_g_.m.syscalltick = _g_.m.p.ptr().syscalltick),
3745 // whoever emits traceGoSysBlock increments p.syscalltick afterwards;
3746 // and we wait for the increment before emitting traceGoSysExit.
3747 // Note that the increment is done even if tracing is not enabled,
3748 // because tracing can be enabled in the middle of syscall. We don't want the wait to hang.
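// Conceptual sketch (not part of proc.go): the syscall package's wrappers,
// which on many platforms are assembly, bracket the raw kernel call roughly
// like this. rawSyscall is a placeholder for the actual trap.
//
//	entersyscall()                                // g moves to _Gsyscall, its P to _Psyscall
//	r1, r2, errno := rawSyscall(trap, a1, a2, a3) // the blocking kernel call
//	exitsyscall()                                 // reacquire a P fast, or reschedule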
3751 func reentersyscall(pc, sp uintptr) {
3754 // Disable preemption because during this function g is in Gsyscall status,
3755 // but can have inconsistent g->sched; do not let the GC observe it.
3758 // Entersyscall must not call any function that might split/grow the stack.
3759 // (See details in comment above.)
3760 // Catch calls that might, by replacing the stack guard with something that
3761 // will trip any stack check and leaving a flag to tell newstack to die.
3762 _g_.stackguard0 = stackPreempt
3763 _g_.throwsplit = true
3765 // Leave SP around for GC and traceback.
3769 casgstatus(_g_, _Grunning, _Gsyscall)
3770 if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
3771 systemstack(func() {
3772 print("entersyscall inconsistent ", hex(_g_.syscallsp), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
3773 throw("entersyscall")
3778 systemstack(traceGoSysCall)
3779 // systemstack itself clobbers g.sched.{pc,sp} and we might
3780 // need them later when the G is genuinely blocked in a
3785 if atomic.Load(&sched.sysmonwait) != 0 {
3786 systemstack(entersyscall_sysmon)
3790 if _g_.m.p.ptr().runSafePointFn != 0 {
3791 // runSafePointFn may stack split if run on this stack
3792 systemstack(runSafePointFn)
3796 _g_.m.syscalltick = _g_.m.p.ptr().syscalltick
3797 _g_.sysblocktraced = true
3802 atomic.Store(&pp.status, _Psyscall)
3803 if sched.gcwaiting != 0 {
3804 systemstack(entersyscall_gcwait)
3811 // Standard syscall entry used by the go syscall library and normal cgo calls.
3813 // This is exported via linkname to assembly in the syscall package.
3816 //go:linkname entersyscall
3817 func entersyscall() {
3818 reentersyscall(getcallerpc(), getcallersp())
3821 func entersyscall_sysmon() {
3823 if atomic.Load(&sched.sysmonwait) != 0 {
3824 atomic.Store(&sched.sysmonwait, 0)
3825 notewakeup(&sched.sysmonnote)
3830 func entersyscall_gcwait() {
3832 _p_ := _g_.m.oldp.ptr()
3835 if sched.stopwait > 0 && atomic.Cas(&_p_.status, _Psyscall, _Pgcstop) {
3837 traceGoSysBlock(_p_)
3841 if sched.stopwait--; sched.stopwait == 0 {
3842 notewakeup(&sched.stopnote)
3848 // The same as entersyscall(), but with a hint that the syscall is blocking.
3850 func entersyscallblock() {
3853 _g_.m.locks++ // see comment in entersyscall
3854 _g_.throwsplit = true
3855 _g_.stackguard0 = stackPreempt // see comment in entersyscall
3856 _g_.m.syscalltick = _g_.m.p.ptr().syscalltick
3857 _g_.sysblocktraced = true
3858 _g_.m.p.ptr().syscalltick++
3860 // Leave SP around for GC and traceback.
3864 _g_.syscallsp = _g_.sched.sp
3865 _g_.syscallpc = _g_.sched.pc
3866 if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
3869 sp3 := _g_.syscallsp
3870 systemstack(func() {
3871 print("entersyscallblock inconsistent ", hex(sp1), " ", hex(sp2), " ", hex(sp3), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
3872 throw("entersyscallblock")
3875 casgstatus(_g_, _Grunning, _Gsyscall)
3876 if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
3877 systemstack(func() {
3878 print("entersyscallblock inconsistent ", hex(sp), " ", hex(_g_.sched.sp), " ", hex(_g_.syscallsp), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
3879 throw("entersyscallblock")
3883 systemstack(entersyscallblock_handoff)
3885 // Resave for traceback during blocked call.
3886 save(getcallerpc(), getcallersp())
3891 func entersyscallblock_handoff() {
3894 traceGoSysBlock(getg().m.p.ptr())
3896 handoffp(releasep())
3899 // The goroutine g exited its system call.
3900 // Arrange for it to run on a cpu again.
3901 // This is called only from the go syscall library, not
3902 // from the low-level system calls used by the runtime.
3904 // Write barriers are not allowed because our P may have been stolen.
3906 // This is exported via linkname to assembly in the syscall package.
3909 //go:nowritebarrierrec
3910 //go:linkname exitsyscall
3911 func exitsyscall() {
3914 _g_.m.locks++ // see comment in entersyscall
3915 if getcallersp() > _g_.syscallsp {
3916 throw("exitsyscall: syscall frame is no longer valid")
3920 oldp := _g_.m.oldp.ptr()
3922 if exitsyscallfast(oldp) {
3924 if oldp != _g_.m.p.ptr() || _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {
3925 systemstack(traceGoStart)
3928 // There's a cpu for us, so we can run.
3929 _g_.m.p.ptr().syscalltick++
3930 // We need to cas the status and scan before resuming...
3931 casgstatus(_g_, _Gsyscall, _Grunning)
3933 // Garbage collector isn't running (since we are),
3934 // so okay to clear syscallsp.
3938 // restore the preemption request in case we've cleared it in newstack
3939 _g_.stackguard0 = stackPreempt
3941 // otherwise restore the real _StackGuard, we've spoiled it in entersyscall/entersyscallblock
3942 _g_.stackguard0 = _g_.stack.lo + _StackGuard
3944 _g_.throwsplit = false
3946 if sched.disable.user && !schedEnabled(_g_) {
3947 // Scheduling of this goroutine is disabled.
3954 _g_.sysexitticks = 0
3956 // Wait till traceGoSysBlock event is emitted.
3957 // This ensures consistency of the trace (the goroutine is started after it is blocked).
3958 for oldp != nil && oldp.syscalltick == _g_.m.syscalltick {
3961 // We can't trace syscall exit right now because we don't have a P.
3962 // Tracing code can invoke write barriers that cannot run without a P.
3963 // So instead we remember the syscall exit time and emit the event
3964 // in execute when we have a P.
3965 _g_.sysexitticks = cputicks()
3970 // Call the scheduler.
3973 // Scheduler returned, so we're allowed to run now.
3974 // Delete the syscallsp information that we left for
3975 // the garbage collector during the system call.
3976 // Must wait until now because until gosched returns
3977 // we don't know for sure that the garbage collector
3980 _g_.m.p.ptr().syscalltick++
3981 _g_.throwsplit = false
3985 func exitsyscallfast(oldp *p) bool {
3988 // Freezetheworld sets stopwait but does not retake P's.
3989 if sched.stopwait == freezeStopWait {
3993 // Try to re-acquire the last P.
3994 if oldp != nil && oldp.status == _Psyscall && atomic.Cas(&oldp.status, _Psyscall, _Pidle) {
3995 // There's a cpu for us, so we can run.
3997 exitsyscallfast_reacquired()
4001 // Try to get any other idle P.
4002 if sched.pidle != 0 {
4004 systemstack(func() {
4005 ok = exitsyscallfast_pidle()
4006 if ok && trace.enabled {
4008 // Wait till traceGoSysBlock event is emitted.
4009 // This ensures consistency of the trace (the goroutine is started after it is blocked).
4010 for oldp.syscalltick == _g_.m.syscalltick {
4024 // exitsyscallfast_reacquired is the exitsyscall path on which this G
4025 // has successfully reacquired the P it was running on before the
4029 func exitsyscallfast_reacquired() {
4031 if _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {
4033 // The p was retaken and then a new syscall was entered on it (since _g_.m.syscalltick has changed).
4034 // traceGoSysBlock for this syscall was already emitted,
4035 // but here we effectively retake the p from the new syscall running on the same p.
4036 systemstack(func() {
4037 // Denote blocking of the new syscall.
4038 traceGoSysBlock(_g_.m.p.ptr())
4039 // Denote completion of the current syscall.
4043 _g_.m.p.ptr().syscalltick++
4047 func exitsyscallfast_pidle() bool {
4050 if _p_ != nil && atomic.Load(&sched.sysmonwait) != 0 {
4051 atomic.Store(&sched.sysmonwait, 0)
4052 notewakeup(&sched.sysmonnote)
4062 // exitsyscall slow path on g0.
4063 // Failed to acquire P, enqueue gp as runnable.
4065 // Called via mcall, so gp is the calling g from this M.
4067 //go:nowritebarrierrec
4068 func exitsyscall0(gp *g) {
4069 casgstatus(gp, _Gsyscall, _Grunnable)
4073 if schedEnabled(gp) {
4080 // Below, we stoplockedm if gp is locked. globrunqput releases
4081 // ownership of gp, so we must check if gp is locked prior to
4082 // committing the release by unlocking sched.lock, otherwise we
4083 // could race with another M transitioning gp from unlocked to
4085 locked = gp.lockedm != 0
4086 } else if atomic.Load(&sched.sysmonwait) != 0 {
4087 atomic.Store(&sched.sysmonwait, 0)
4088 notewakeup(&sched.sysmonnote)
4093 execute(gp, false) // Never returns.
4096 // Wait until another thread schedules gp and so m again.
4098 // N.B. lockedm must be this M, as this g was running on this M
4099 // before entersyscall.
4101 execute(gp, false) // Never returns.
4104 schedule() // Never returns.
4110 // Block signals during a fork, so that the child does not run
4111 // a signal handler before exec if a signal is sent to the process
4112 // group. See issue #18600.
4114 sigsave(&gp.m.sigmask)
4117 // This function is called before fork in syscall package.
4118 // Code between fork and exec must not allocate memory nor even try to grow stack.
4119 // Here we spoil g->_StackGuard to reliably detect any attempts to grow stack.
4120 // runtime_AfterFork will undo this in parent process, but not in child.
4121 gp.stackguard0 = stackFork
4124 // Called from syscall package before fork.
4125 //go:linkname syscall_runtime_BeforeFork syscall.runtime_BeforeFork
4127 func syscall_runtime_BeforeFork() {
4128 systemstack(beforefork)
4134 // See the comments in beforefork.
4135 gp.stackguard0 = gp.stack.lo + _StackGuard
4137 msigrestore(gp.m.sigmask)
4142 // Called from syscall package after fork in parent.
4143 //go:linkname syscall_runtime_AfterFork syscall.runtime_AfterFork
4145 func syscall_runtime_AfterFork() {
4146 systemstack(afterfork)
4149 // inForkedChild is true while manipulating signals in the child process.
4150 // This is used to avoid calling libc functions in case we are using vfork.
4151 var inForkedChild bool
4153 // Called from syscall package after fork in child.
4154 // It resets non-sigignored signals to the default handler, and
4155 // restores the signal mask in preparation for the exec.
4157 // Because this might be called during a vfork, and therefore may be
4158 // temporarily sharing address space with the parent process, this must
4159 // not change any global variables or call into C code that may do so.
4161 //go:linkname syscall_runtime_AfterForkInChild syscall.runtime_AfterForkInChild
4163 //go:nowritebarrierrec
4164 func syscall_runtime_AfterForkInChild() {
4165 // It's OK to change the global variable inForkedChild here
4166 // because we are going to change it back. There is no race here,
4167 // because if we are sharing address space with the parent process,
4168 // then the parent process can not be running concurrently.
4169 inForkedChild = true
4171 clearSignalHandlers()
4173 // When we are the child we are the only thread running,
4174 // so we know that nothing else has changed gp.m.sigmask.
4175 msigrestore(getg().m.sigmask)
4177 inForkedChild = false
4180 // pendingPreemptSignals is the number of preemption signals
4181 // that have been sent but not received. This is only used on Darwin.
4183 var pendingPreemptSignals uint32
4185 // Called from syscall package before Exec.
4186 //go:linkname syscall_runtime_BeforeExec syscall.runtime_BeforeExec
4187 func syscall_runtime_BeforeExec() {
4188 // Prevent thread creation during exec.
4191 // On Darwin, wait for all pending preemption signals to
4192 // be received. See issue #41702.
4193 if GOOS == "darwin" || GOOS == "ios" {
4194 for int32(atomic.Load(&pendingPreemptSignals)) > 0 {
4200 // Called from syscall package after Exec.
4201 //go:linkname syscall_runtime_AfterExec syscall.runtime_AfterExec
4202 func syscall_runtime_AfterExec() {
4206 // Allocate a new g, with a stack big enough for stacksize bytes.
4207 func malg(stacksize int32) *g {
4210 stacksize = round2(_StackSystem + stacksize)
4211 systemstack(func() {
4212 newg.stack = stackalloc(uint32(stacksize))
4214 newg.stackguard0 = newg.stack.lo + _StackGuard
4215 newg.stackguard1 = ^uintptr(0)
4216 // Clear the bottom word of the stack. We record g
4217 // there on gsignal stack during VDSO on ARM and ARM64.
4218 *(*uintptr)(unsafe.Pointer(newg.stack.lo)) = 0
4223 // Create a new g running fn.
4224 // Put it on the queue of g's waiting to run.
4225 // The compiler turns a go statement into a call to this.
4226 func newproc(fn *funcval) {
4229 systemstack(func() {
4230 newg := newproc1(fn, gp, pc)
4232 _p_ := getg().m.p.ptr()
4233 runqput(_p_, newg, true)
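// Illustrative sketch (not part of proc.go): a statement such as
//
//	go f()
//
// is lowered by the compiler into (roughly) a call of the form
//
//	newproc(fv) // fv is the *funcval describing f and its closure
//
// after which the new g waits in the current P's runnext slot until the
// scheduler picks it up.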
4241 // Create a new g in state _Grunnable, starting at fn. callerpc is the
4242 // address of the go statement that created this. The caller is responsible
4243 // for adding the new g to the scheduler.
4244 func newproc1(fn *funcval, callergp *g, callerpc uintptr) *g {
4248 _g_.m.throwing = -1 // do not dump full stacks
4249 throw("go of nil func value")
4251 acquirem() // disable preemption because it can be holding p in a local var
4253 _p_ := _g_.m.p.ptr()
4256 newg = malg(_StackMin)
4257 casgstatus(newg, _Gidle, _Gdead)
4258 allgadd(newg) // publishes with a g->status of Gdead so GC scanner doesn't look at uninitialized stack.
4260 if newg.stack.hi == 0 {
4261 throw("newproc1: newg missing stack")
4264 if readgstatus(newg) != _Gdead {
4265 throw("newproc1: new g is not Gdead")
4268 totalSize := uintptr(4*goarch.PtrSize + sys.MinFrameSize) // extra space in case of reads slightly beyond frame
4269 totalSize = alignUp(totalSize, sys.StackAlign)
4270 sp := newg.stack.hi - totalSize
4274 *(*uintptr)(unsafe.Pointer(sp)) = 0
4276 spArg += sys.MinFrameSize
4279 memclrNoHeapPointers(unsafe.Pointer(&newg.sched), unsafe.Sizeof(newg.sched))
4282 newg.sched.pc = abi.FuncPCABI0(goexit) + sys.PCQuantum // +PCQuantum so that previous instruction is in same function
4283 newg.sched.g = guintptr(unsafe.Pointer(newg))
4284 gostartcallfn(&newg.sched, fn)
4285 newg.gopc = callerpc
4286 newg.ancestors = saveAncestors(callergp)
4287 newg.startpc = fn.fn
4288 if _g_.m.curg != nil {
4289 newg.labels = _g_.m.curg.labels
4291 if isSystemGoroutine(newg, false) {
4292 atomic.Xadd(&sched.ngsys, +1)
4294 // Track initial transition?
4295 newg.trackingSeq = uint8(fastrand())
4296 if newg.trackingSeq%gTrackingPeriod == 0 {
4297 newg.tracking = true
4299 casgstatus(newg, _Gdead, _Grunnable)
4301 if _p_.goidcache == _p_.goidcacheend {
4302 // Sched.goidgen is the last allocated id,
4303 // this batch must be [sched.goidgen+1, sched.goidgen+GoidCacheBatch].
4304 // At startup sched.goidgen=0, so main goroutine receives goid=1.
4305 _p_.goidcache = atomic.Xadd64(&sched.goidgen, _GoidCacheBatch)
4306 _p_.goidcache -= _GoidCacheBatch - 1
4307 _p_.goidcacheend = _p_.goidcache + _GoidCacheBatch
4309 newg.goid = int64(_p_.goidcache)
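// Worked example (illustrative; assumes _GoidCacheBatch is 16): the very
// first refill moves sched.goidgen from 0 to 16, so this P hands out goids
// 1 through 16 without touching sched.goidgen again:
//
//	cache := atomic.Xadd64(&sched.goidgen, 16) // Xadd returns the new value, 16
//	cache -= 16 - 1                            // 1, the first id in the batch
//	cacheEnd := cache + 16                     // 17, one past the batch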
4312 newg.racectx = racegostart(callerpc)
4315 traceGoCreate(newg, newg.startpc)
4322 // saveAncestors copies previous ancestors of the given caller g and
4323 // includes info for the current caller in a new set of tracebacks for
4324 // a g being created.
4325 func saveAncestors(callergp *g) *[]ancestorInfo {
4326 // Copy all prior info, except for the root goroutine (goid 0).
4327 if debug.tracebackancestors <= 0 || callergp.goid == 0 {
4330 var callerAncestors []ancestorInfo
4331 if callergp.ancestors != nil {
4332 callerAncestors = *callergp.ancestors
4334 n := int32(len(callerAncestors)) + 1
4335 if n > debug.tracebackancestors {
4336 n = debug.tracebackancestors
4338 ancestors := make([]ancestorInfo, n)
4339 copy(ancestors[1:], callerAncestors)
4341 var pcs [_TracebackMaxFrames]uintptr
4342 npcs := gcallers(callergp, 0, pcs[:])
4343 ipcs := make([]uintptr, npcs)
4345 ancestors[0] = ancestorInfo{
4347 goid: callergp.goid,
4348 gopc: callergp.gopc,
4351 ancestorsp := new([]ancestorInfo)
4352 *ancestorsp = ancestors
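// Illustrative usage (a sketch): ancestor tracking is enabled with the
// tracebackancestors GODEBUG option, e.g.
//
//	GODEBUG=tracebackancestors=10 ./prog
//
// which makes crash tracebacks append up to 10 "[originating from goroutine N]"
// sections showing the go statements recorded by saveAncestors.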
4356 // Put on gfree list.
4357 // If local list is too long, transfer a batch to the global list.
4358 func gfput(_p_ *p, gp *g) {
4359 if readgstatus(gp) != _Gdead {
4360 throw("gfput: bad status (not Gdead)")
4363 stksize := gp.stack.hi - gp.stack.lo
4365 if stksize != _FixedStack {
4366 // non-standard stack size - free it.
4375 if _p_.gFree.n >= 64 {
4381 for _p_.gFree.n >= 32 {
4382 gp = _p_.gFree.pop()
4384 if gp.stack.lo == 0 {
4391 lock(&sched.gFree.lock)
4392 sched.gFree.noStack.pushAll(noStackQ)
4393 sched.gFree.stack.pushAll(stackQ)
4394 sched.gFree.n += inc
4395 unlock(&sched.gFree.lock)
4399 // Get from gfree list.
4400 // If local list is empty, grab a batch from global list.
4401 func gfget(_p_ *p) *g {
4403 if _p_.gFree.empty() && (!sched.gFree.stack.empty() || !sched.gFree.noStack.empty()) {
4404 lock(&sched.gFree.lock)
4405 // Move a batch of free Gs to the P.
4406 for _p_.gFree.n < 32 {
4407 // Prefer Gs with stacks.
4408 gp := sched.gFree.stack.pop()
4410 gp = sched.gFree.noStack.pop()
4419 unlock(&sched.gFree.lock)
4422 gp := _p_.gFree.pop()
4427 if gp.stack.lo == 0 {
4428 // Stack was deallocated in gfput. Allocate a new one.
4429 systemstack(func() {
4430 gp.stack = stackalloc(_FixedStack)
4432 gp.stackguard0 = gp.stack.lo + _StackGuard
4435 racemalloc(unsafe.Pointer(gp.stack.lo), gp.stack.hi-gp.stack.lo)
4438 msanmalloc(unsafe.Pointer(gp.stack.lo), gp.stack.hi-gp.stack.lo)
4444 // Purge all cached G's from gfree list to the global list.
4445 func gfpurge(_p_ *p) {
4451 for !_p_.gFree.empty() {
4452 gp := _p_.gFree.pop()
4454 if gp.stack.lo == 0 {
4461 lock(&sched.gFree.lock)
4462 sched.gFree.noStack.pushAll(noStackQ)
4463 sched.gFree.stack.pushAll(stackQ)
4464 sched.gFree.n += inc
4465 unlock(&sched.gFree.lock)
4468 // Breakpoint executes a breakpoint trap.
4473 // dolockOSThread is called by LockOSThread and lockOSThread below
4474 // after they modify m.locked. Do not allow preemption during this call,
4475 // or else the m might be different in this function than in the caller.
4477 func dolockOSThread() {
4478 if GOARCH == "wasm" {
4479 return // no threads on wasm yet
4482 _g_.m.lockedg.set(_g_)
4483 _g_.lockedm.set(_g_.m)
4488 // LockOSThread wires the calling goroutine to its current operating system thread.
4489 // The calling goroutine will always execute in that thread,
4490 // and no other goroutine will execute in it,
4491 // until the calling goroutine has made as many calls to
4492 // UnlockOSThread as to LockOSThread.
4493 // If the calling goroutine exits without unlocking the thread,
4494 // the thread will be terminated.
4496 // All init functions are run on the startup thread. Calling LockOSThread
4497 // from an init function will cause the main function to be invoked on that thread.
4500 // A goroutine should call LockOSThread before calling OS services or
4501 // non-Go library functions that depend on per-thread state.
4502 func LockOSThread() {
4503 if atomic.Load(&newmHandoff.haveTemplateThread) == 0 && GOOS != "plan9" {
4504 // If we need to start a new thread from the locked
4505 // thread, we need the template thread. Start it now
4506 // while we're in a known-good state.
4507 startTemplateThread()
4511 if _g_.m.lockedExt == 0 {
4513 panic("LockOSThread nesting overflow")
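// Illustrative usage (a sketch, not runtime code): a goroutine that calls into
// code relying on per-thread state wires itself to its thread for the duration:
//
//	func callThreadSensitive() {
//		runtime.LockOSThread()
//		defer runtime.UnlockOSThread()
//		// Call the OS service or non-Go library that depends on
//		// per-thread state (thread-local storage, thread priority, ...).
//	}
//
// If the call permanently alters the thread's state, skip UnlockOSThread and let
// the goroutine exit so that the thread is terminated instead of reused.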
4519 func lockOSThread() {
4520 getg().m.lockedInt++
4524 // dounlockOSThread is called by UnlockOSThread and unlockOSThread below
4525 // after they update m->locked. Do not allow preemption during this call,
4526 // or else the m might be different in this function than in the caller.
4528 func dounlockOSThread() {
4529 if GOARCH == "wasm" {
4530 return // no threads on wasm yet
4533 if _g_.m.lockedInt != 0 || _g_.m.lockedExt != 0 {
4542 // UnlockOSThread undoes an earlier call to LockOSThread.
4543 // If this drops the number of active LockOSThread calls on the
4544 // calling goroutine to zero, it unwires the calling goroutine from
4545 // its fixed operating system thread.
4546 // If there are no active LockOSThread calls, this is a no-op.
4548 // Before calling UnlockOSThread, the caller must ensure that the OS
4549 // thread is suitable for running other goroutines. If the caller made
4550 // any permanent changes to the state of the thread that would affect
4551 // other goroutines, it should not call this function and thus leave
4552 // the goroutine locked to the OS thread until the goroutine (and
4553 // hence the thread) exits.
4554 func UnlockOSThread() {
4556 if _g_.m.lockedExt == 0 {
4564 func unlockOSThread() {
4566 if _g_.m.lockedInt == 0 {
4567 systemstack(badunlockosthread)
4573 func badunlockosthread() {
4574 throw("runtime: internal error: misuse of lockOSThread/unlockOSThread")
4577 func gcount() int32 {
4578 n := int32(atomic.Loaduintptr(&allglen)) - sched.gFree.n - int32(atomic.Load(&sched.ngsys))
4579 for _, _p_ := range allp {
4583 // All these variables can be changed concurrently, so the result can be inconsistent.
4584 // But at least the current goroutine is running.
4591 func mcount() int32 {
4592 return int32(sched.mnext - sched.nmfreed)
4600 func _System() { _System() }
4601 func _ExternalCode() { _ExternalCode() }
4602 func _LostExternalCode() { _LostExternalCode() }
4603 func _GC() { _GC() }
4604 func _LostSIGPROFDuringAtomic64() { _LostSIGPROFDuringAtomic64() }
4605 func _VDSO() { _VDSO() }
4607 // Called if we receive a SIGPROF signal.
4608 // Called by the signal handler, may run during STW.
4609 //go:nowritebarrierrec
4610 func sigprof(pc, sp, lr uintptr, gp *g, mp *m) {
4615 // If mp.profilehz is 0, then profiling is not enabled for this thread.
4616 // We must check this to avoid a deadlock between setcpuprofilerate
4617 // and the call to cpuprof.add, below.
4618 if mp != nil && mp.profilehz == 0 {
4622 // On mips{,le}, 64bit atomics are emulated with spinlocks, in
4623 // runtime/internal/atomic. If SIGPROF arrives while the program is inside
4624 // the critical section, it creates a deadlock (when writing the sample).
4626 // As a workaround, count SIGPROFs that arrive while inside the critical
4627 // section (cpuprof.lostAtomic) and report them later, when a SIGPROF is
4628 // received from somewhere else (with _LostSIGPROFDuringAtomic64 as the pc).
4628 if GOARCH == "mips" || GOARCH == "mipsle" || GOARCH == "arm" {
4629 if f := findfunc(pc); f.valid() {
4630 if hasPrefix(funcname(f), "runtime/internal/atomic") {
4631 cpuprof.lostAtomic++
4637 // Profiling runs concurrently with GC, so it must not allocate.
4638 // Set a trap in case the code does allocate.
4639 // Note that on windows, one thread takes profiles of all the
4640 // other threads, so mp is usually not getg().m.
4641 // In fact mp may not even be stopped.
4642 // See golang.org/issue/17165.
4643 getg().m.mallocing++
4645 var stk [maxCPUProfStack]uintptr
4647 if mp.ncgo > 0 && mp.curg != nil && mp.curg.syscallpc != 0 && mp.curg.syscallsp != 0 {
4649 // Check cgoCallersUse to make sure that we are not
4650 // interrupting other code that is fiddling with
4651 // cgoCallers. We are running in a signal handler
4652 // with all signals blocked, so we don't have to worry
4653 // about any other code interrupting us.
4654 if atomic.Load(&mp.cgoCallersUse) == 0 && mp.cgoCallers != nil && mp.cgoCallers[0] != 0 {
4655 for cgoOff < len(mp.cgoCallers) && mp.cgoCallers[cgoOff] != 0 {
4658 copy(stk[:], mp.cgoCallers[:cgoOff])
4659 mp.cgoCallers[0] = 0
4662 // Collect Go stack that leads to the cgo call.
4663 n = gentraceback(mp.curg.syscallpc, mp.curg.syscallsp, 0, mp.curg, 0, &stk[cgoOff], len(stk)-cgoOff, nil, nil, 0)
4668 n = gentraceback(pc, sp, lr, gp, 0, &stk[0], len(stk), nil, nil, _TraceTrap|_TraceJumpStack)
4672 // Normal traceback is impossible or has failed.
4673 // See if it falls into several common cases.
4675 if usesLibcall() && mp.libcallg != 0 && mp.libcallpc != 0 && mp.libcallsp != 0 {
4676 // Libcall, i.e. runtime syscall on windows.
4677 // Collect Go stack that leads to the call.
4678 n = gentraceback(mp.libcallpc, mp.libcallsp, 0, mp.libcallg.ptr(), 0, &stk[0], len(stk), nil, nil, 0)
4680 if n == 0 && mp != nil && mp.vdsoSP != 0 {
4681 n = gentraceback(mp.vdsoPC, mp.vdsoSP, 0, gp, 0, &stk[0], len(stk), nil, nil, _TraceTrap|_TraceJumpStack)
4684 // If all of the above has failed, account it against abstract "System" or "GC".
4687 pc = abi.FuncPCABIInternal(_VDSO) + sys.PCQuantum
4688 } else if pc > firstmoduledata.etext {
4689 // "ExternalCode" is better than "etext".
4690 pc = abi.FuncPCABIInternal(_ExternalCode) + sys.PCQuantum
4693 if mp.preemptoff != "" {
4694 stk[1] = abi.FuncPCABIInternal(_GC) + sys.PCQuantum
4696 stk[1] = abi.FuncPCABIInternal(_System) + sys.PCQuantum
4702 cpuprof.add(gp, stk[:n])
4704 getg().m.mallocing--
4707 // If the signal handler receives a SIGPROF signal on a non-Go thread,
4708 // it tries to collect a traceback into sigprofCallers.
4709 // sigprofCallersUse is set to non-zero while sigprofCallers holds a traceback.
4710 var sigprofCallers cgoCallers
4711 var sigprofCallersUse uint32
4713 // sigprofNonGo is called if we receive a SIGPROF signal on a non-Go thread,
4714 // and the signal handler collected a stack trace in sigprofCallers.
4715 // When this is called, sigprofCallersUse will be non-zero.
4716 // g is nil, and what we can do is very limited.
4718 //go:nowritebarrierrec
4719 func sigprofNonGo() {
4722 for n < len(sigprofCallers) && sigprofCallers[n] != 0 {
4725 cpuprof.addNonGo(sigprofCallers[:n])
4728 atomic.Store(&sigprofCallersUse, 0)
4731 // sigprofNonGoPC is called when a profiling signal arrived on a
4732 // non-Go thread and we have a single PC value, not a stack trace.
4733 // g is nil, and what we can do is very limited.
4735 //go:nowritebarrierrec
4736 func sigprofNonGoPC(pc uintptr) {
4740 abi.FuncPCABIInternal(_ExternalCode) + sys.PCQuantum,
4742 cpuprof.addNonGo(stk)
4746 // setcpuprofilerate sets the CPU profiling rate to hz times per second.
4747 // If hz <= 0, setcpuprofilerate turns off CPU profiling.
4748 func setcpuprofilerate(hz int32) {
4749 // Force sane arguments.
4754 // Disable preemption, otherwise we can be rescheduled to another thread
4755 // that has profiling enabled.
4759 // Stop profiler on this thread so that it is safe to lock prof.
4760 // if a profiling signal came in while we had prof locked,
4761 // it would deadlock.
4762 setThreadCPUProfiler(0)
4764 for !atomic.Cas(&prof.signalLock, 0, 1) {
4768 setProcessCPUProfiler(hz)
4771 atomic.Store(&prof.signalLock, 0)
4774 sched.profilehz = hz
4778 setThreadCPUProfiler(hz)
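// Illustrative caller (a sketch): user code normally reaches setcpuprofilerate
// through runtime/pprof, which chooses the sampling rate (100 Hz by default):
//
//	f, err := os.Create("cpu.prof")
//	if err != nil {
//		log.Fatal(err)
//	}
//	defer f.Close()
//	if err := pprof.StartCPUProfile(f); err != nil { // turns SIGPROF delivery on
//		log.Fatal(err)
//	}
//	defer pprof.StopCPUProfile() // sets the rate back to 0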
4784 // init initializes pp, which may be a freshly allocated p or a
4785 // previously destroyed p, and transitions it to status _Pgcstop.
4786 func (pp *p) init(id int32) {
4788 pp.status = _Pgcstop
4789 pp.sudogcache = pp.sudogbuf[:0]
4790 pp.deferpool = pp.deferpoolbuf[:0]
4792 if pp.mcache == nil {
4795 throw("missing mcache?")
4797 // Use the bootstrap mcache0. Only one P will get
4798 // mcache0: the one with ID 0.
4801 pp.mcache = allocmcache()
4804 if raceenabled && pp.raceprocctx == 0 {
4806 pp.raceprocctx = raceprocctx0
4807 raceprocctx0 = 0 // bootstrap
4809 pp.raceprocctx = raceproccreate()
4812 lockInit(&pp.timersLock, lockRankTimers)
4814 // This P may get timers when it starts running. Set the mask here
4815 // since the P may not go through pidleget (notably P 0 on startup).
4817 // Similarly, we may not go through pidleget before this P starts
4818 // running if it is P 0 on startup.
4822 // destroy releases all of the resources associated with pp and
4823 // transitions it to status _Pdead.
4825 // sched.lock must be held and the world must be stopped.
4826 func (pp *p) destroy() {
4827 assertLockHeld(&sched.lock)
4828 assertWorldStopped()
4830 // Move all runnable goroutines to the global queue
4831 for pp.runqhead != pp.runqtail {
4832 // Pop from tail of local queue
4834 gp := pp.runq[pp.runqtail%uint32(len(pp.runq))].ptr()
4835 // Push onto head of global queue
4838 if pp.runnext != 0 {
4839 globrunqputhead(pp.runnext.ptr())
4842 if len(pp.timers) > 0 {
4843 plocal := getg().m.p.ptr()
4844 // The world is stopped, but we acquire timersLock to
4845 // protect against sysmon calling timeSleepUntil.
4846 // This is the only case where we hold the timersLock of
4847 // more than one P, so there are no deadlock concerns.
4848 lock(&plocal.timersLock)
4849 lock(&pp.timersLock)
4850 moveTimers(plocal, pp.timers)
4854 pp.deletedTimers = 0
4855 atomic.Store64(&pp.timer0When, 0)
4856 unlock(&pp.timersLock)
4857 unlock(&plocal.timersLock)
4859 // Flush p's write barrier buffer.
4860 if gcphase != _GCoff {
4864 for i := range pp.sudogbuf {
4865 pp.sudogbuf[i] = nil
4867 pp.sudogcache = pp.sudogbuf[:0]
4868 for j := range pp.deferpoolbuf {
4869 pp.deferpoolbuf[j] = nil
4871 pp.deferpool = pp.deferpoolbuf[:0]
4872 systemstack(func() {
4873 for i := 0; i < pp.mspancache.len; i++ {
4874 // Safe to call since the world is stopped.
4875 mheap_.spanalloc.free(unsafe.Pointer(pp.mspancache.buf[i]))
4877 pp.mspancache.len = 0
4879 pp.pcache.flush(&mheap_.pages)
4880 unlock(&mheap_.lock)
4882 freemcache(pp.mcache)
4887 if pp.timerRaceCtx != 0 {
4888 // The race detector code uses a callback to fetch
4889 // the proc context, so arrange for that callback
4890 // to see the right thing.
4891 // This hack only works because we are the only thread running.
4897 racectxend(pp.timerRaceCtx)
4902 raceprocdestroy(pp.raceprocctx)
4909 // Change number of processors.
4911 // sched.lock must be held, and the world must be stopped.
4913 // gcworkbufs must not be being modified by either the GC or the write barrier
4914 // code, so the GC must not be running if the number of Ps actually changes.
4916 // Returns list of Ps with local work, they need to be scheduled by the caller.
4917 func procresize(nprocs int32) *p {
4918 assertLockHeld(&sched.lock)
4919 assertWorldStopped()
4922 if old < 0 || nprocs <= 0 {
4923 throw("procresize: invalid arg")
4926 traceGomaxprocs(nprocs)
4929 // update statistics
4931 if sched.procresizetime != 0 {
4932 sched.totaltime += int64(old) * (now - sched.procresizetime)
4934 sched.procresizetime = now
4936 maskWords := (nprocs + 31) / 32
4938 // Grow allp if necessary.
4939 if nprocs > int32(len(allp)) {
4940 // Synchronize with retake, which could be running
4941 // concurrently since it doesn't run on a P.
4943 if nprocs <= int32(cap(allp)) {
4944 allp = allp[:nprocs]
4946 nallp := make([]*p, nprocs)
4947 // Copy everything up to allp's cap so we
4948 // never lose old allocated Ps.
4949 copy(nallp, allp[:cap(allp)])
4953 if maskWords <= int32(cap(idlepMask)) {
4954 idlepMask = idlepMask[:maskWords]
4955 timerpMask = timerpMask[:maskWords]
4957 nidlepMask := make([]uint32, maskWords)
4958 // No need to copy beyond len, old Ps are irrelevant.
4959 copy(nidlepMask, idlepMask)
4960 idlepMask = nidlepMask
4962 ntimerpMask := make([]uint32, maskWords)
4963 copy(ntimerpMask, timerpMask)
4964 timerpMask = ntimerpMask
4969 // initialize new P's
4970 for i := old; i < nprocs; i++ {
4976 atomicstorep(unsafe.Pointer(&allp[i]), unsafe.Pointer(pp))
4980 if _g_.m.p != 0 && _g_.m.p.ptr().id < nprocs {
4981 // continue to use the current P
4982 _g_.m.p.ptr().status = _Prunning
4983 _g_.m.p.ptr().mcache.prepareForSweep()
4985 // release the current P and acquire allp[0].
4987 // We must do this before destroying our current P
4988 // because p.destroy itself has write barriers, so we
4989 // need to do that from a valid P.
4992 // Pretend that we were descheduled
4993 // and then scheduled again to keep
4996 traceProcStop(_g_.m.p.ptr())
5010 // g.m.p is now set, so we no longer need mcache0 for bootstrapping.
5013 // release resources from unused P's
5014 for i := nprocs; i < old; i++ {
5017 // can't free P itself because it can be referenced by an M in syscall
5021 if int32(len(allp)) != nprocs {
5023 allp = allp[:nprocs]
5024 idlepMask = idlepMask[:maskWords]
5025 timerpMask = timerpMask[:maskWords]
5030 for i := nprocs - 1; i >= 0; i-- {
5032 if _g_.m.p.ptr() == p {
5040 p.link.set(runnablePs)
5044 stealOrder.reset(uint32(nprocs))
5045 var int32p *int32 = &gomaxprocs // make compiler check that gomaxprocs is an int32
5046 atomic.Store((*uint32)(unsafe.Pointer(int32p)), uint32(nprocs))
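// Illustrative caller (a sketch): procresize is normally reached via
// runtime.GOMAXPROCS, which stops the world around the resize:
//
//	prev := runtime.GOMAXPROCS(4) // stop the world, procresize(4), restart
//	defer runtime.GOMAXPROCS(prev)
//
// The Ps returned by procresize (those with local work) are then scheduled by the
// caller, as the function comment above notes.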
5050 // Associate p and the current m.
5052 // This function is allowed to have write barriers even if the caller
5053 // isn't because it immediately acquires _p_.
5055 //go:yeswritebarrierrec
5056 func acquirep(_p_ *p) {
5057 // Do the part that isn't allowed to have write barriers.
5060 // Have p; write barriers now allowed.
5062 // Perform deferred mcache flush before this P can allocate
5063 // from a potentially stale mcache.
5064 _p_.mcache.prepareForSweep()
5071 // wirep is the first step of acquirep, which actually associates the
5072 // current M to _p_. This is broken out so we can disallow write
5073 // barriers for this part, since we don't yet have a P.
5075 //go:nowritebarrierrec
5077 func wirep(_p_ *p) {
5081 throw("wirep: already in go")
5083 if _p_.m != 0 || _p_.status != _Pidle {
5088 print("wirep: p->m=", _p_.m, "(", id, ") p->status=", _p_.status, "\n")
5089 throw("wirep: invalid p state")
5093 _p_.status = _Prunning
5096 // Disassociate p and the current m.
5097 func releasep() *p {
5101 throw("releasep: invalid arg")
5103 _p_ := _g_.m.p.ptr()
5104 if _p_.m.ptr() != _g_.m || _p_.status != _Prunning {
5105 print("releasep: m=", _g_.m, " m->p=", _g_.m.p.ptr(), " p->m=", hex(_p_.m), " p->status=", _p_.status, "\n")
5106 throw("releasep: invalid p state")
5109 traceProcStop(_g_.m.p.ptr())
5117 func incidlelocked(v int32) {
5119 sched.nmidlelocked += v
5126 // Check for deadlock situation.
5127 // The check is based on the number of running M's; if it is 0, the program is deadlocked.
5128 // sched.lock must be held.
5130 assertLockHeld(&sched.lock)
5132 // For -buildmode=c-shared or -buildmode=c-archive it's OK if
5133 // there are no running goroutines. The calling program is
5134 // assumed to be running.
5135 if islibrary || isarchive {
5139 // If we are dying because of a signal caught on an already idle thread,
5140 // freezetheworld will cause all running threads to block.
5141 // The runtime will then essentially be in a deadlock state,
5142 // except that there is a thread that will call exit soon.
5147 // If we are not running under cgo, but we have an extra M then account
5148 // for it. (It is possible to have an extra M on Windows without cgo to
5149 // accommodate callbacks created by syscall.NewCallback. See issue #6751
5152 if !iscgo && cgoHasExtraM {
5153 mp := lockextra(true)
5154 haveExtraM := extraMCount > 0
5161 run := mcount() - sched.nmidle - sched.nmidlelocked - sched.nmsys
5166 print("runtime: checkdead: nmidle=", sched.nmidle, " nmidlelocked=", sched.nmidlelocked, " mcount=", mcount(), " nmsys=", sched.nmsys, "\n")
5167 throw("checkdead: inconsistent counts")
5171 forEachG(func(gp *g) {
5172 if isSystemGoroutine(gp, false) {
5175 s := readgstatus(gp)
5176 switch s &^ _Gscan {
5183 print("runtime: checkdead: find g ", gp.goid, " in status ", s, "\n")
5184 throw("checkdead: runnable g")
5187 if grunning == 0 { // possible if main goroutine calls runtime·Goexit()
5188 unlock(&sched.lock) // unlock so that GODEBUG=scheddetail=1 doesn't hang
5189 throw("no goroutines (main called runtime.Goexit) - deadlock!")
5192 // Maybe jump time forward for playground.
5194 when, _p_ := timeSleepUntil()
5197 for pp := &sched.pidle; *pp != 0; pp = &(*pp).ptr().link {
5198 if (*pp).ptr() == _p_ {
5205 // There should always be a free M since
5206 // nothing is running.
5207 throw("checkdead: no m for timer")
5210 notewakeup(&mp.park)
5215 // There are no goroutines running, so we can look at the P's.
5216 for _, _p_ := range allp {
5217 if len(_p_.timers) > 0 {
5222 getg().m.throwing = -1 // do not dump full stacks
5223 unlock(&sched.lock) // unlock so that GODEBUG=scheddetail=1 doesn't hang
5224 throw("all goroutines are asleep - deadlock!")
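// Illustrative trigger (a sketch): checkdead fires when no M can make progress,
// for example when the whole program blocks on a channel nothing can ever send on:
//
//	package main
//
//	func main() {
//		ch := make(chan int)
//		<-ch // nothing can ever send; the runtime reports
//		     // "fatal error: all goroutines are asleep - deadlock!"
//	}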
5227 // forcegcperiod is the maximum time in nanoseconds between garbage
5228 // collections. If we go this long without a garbage collection, one
5229 // is forced to run.
5231 // This is a variable for testing purposes. It normally doesn't change.
5232 var forcegcperiod int64 = 2 * 60 * 1e9
5234 // Always runs without a P, so write barriers are not allowed.
5236 //go:nowritebarrierrec
5243 // For syscall_runtime_doAllThreadsSyscall, sysmon is
5244 // sufficiently up to participate in fixups.
5245 atomic.Store(&sched.sysmonStarting, 0)
5247 lasttrace := int64(0)
5248 idle := 0 // how many cycles in succession we have not woken anybody up
5252 if idle == 0 { // start with 20us sleep...
5254 } else if idle > 50 { // start doubling the sleep after 1ms...
5257 if delay > 10*1000 { // up to 10ms
5263 // sysmon should not enter deep sleep if schedtrace is enabled so that
5264 // it can print that information at the right time.
5266 // It should also not enter deep sleep if there are any active P's so
5267 // that it can retake P's from syscalls, preempt long running G's, and
5268 // poll the network if all P's are busy for long stretches.
5270 // It should wakeup from deep sleep if any P's become active either due
5271 // to exiting a syscall or waking up due to a timer expiring so that it
5272 // can resume performing those duties. If it wakes from a syscall it
5273 // resets idle and delay as a bet that since it had retaken a P from a
5274 // syscall before, it may need to do it again shortly after the
5275 // application starts work again. It does not reset idle when waking
5276 // from a timer to avoid adding system load to applications that spend
5277 // most of their time sleeping.
5279 if debug.schedtrace <= 0 && (sched.gcwaiting != 0 || atomic.Load(&sched.npidle) == uint32(gomaxprocs)) {
5281 if atomic.Load(&sched.gcwaiting) != 0 || atomic.Load(&sched.npidle) == uint32(gomaxprocs) {
5282 syscallWake := false
5283 next, _ := timeSleepUntil()
5285 atomic.Store(&sched.sysmonwait, 1)
5287 // Make wake-up period small enough
5288 // for the sampling to be correct.
5289 sleep := forcegcperiod / 2
5290 if next-now < sleep {
5293 shouldRelax := sleep >= osRelaxMinNS
5297 syscallWake = notetsleep(&sched.sysmonnote, sleep)
5303 atomic.Store(&sched.sysmonwait, 0)
5304 noteclear(&sched.sysmonnote)
5314 lock(&sched.sysmonlock)
5315 // Update now in case we blocked on sysmonnote or spent a long time
5316 // blocked on schedlock or sysmonlock above.
5319 // trigger libc interceptors if needed
5320 if *cgo_yield != nil {
5321 asmcgocall(*cgo_yield, nil)
5323 // poll network if not polled for more than 10ms
5324 lastpoll := int64(atomic.Load64(&sched.lastpoll))
5325 if netpollinited() && lastpoll != 0 && lastpoll+10*1000*1000 < now {
5326 atomic.Cas64(&sched.lastpoll, uint64(lastpoll), uint64(now))
5327 list := netpoll(0) // non-blocking - returns list of goroutines
5329 // Need to decrement number of idle locked M's
5330 // (pretending that one more is running) before injectglist.
5331 // Otherwise it can lead to the following situation:
5332 // injectglist grabs all P's but before it starts M's to run the P's,
5333 // another M returns from syscall, finishes running its G,
5334 // observes that there is no work to do and no other running M's
5335 // and reports deadlock.
5342 if GOOS == "netbsd" {
5343 // netpoll is responsible for waiting for timer
5344 // expiration, so we typically don't have to worry
5345 // about starting an M to service timers. (Note that
5346 // sleep for timeSleepUntil above simply ensures sysmon
5347 // starts running again when that timer expiration may
5348 // cause Go code to run again).
5350 // However, netbsd has a kernel bug that sometimes
5351 // misses netpollBreak wake-ups, which can lead to
5352 // unbounded delays servicing timers. If we detect this
5353 // overrun, then startm to get something to handle the timer.
5356 // See issue 42515 and
5357 // https://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=50094.
5358 if next, _ := timeSleepUntil(); next < now {
5362 if atomic.Load(&scavenge.sysmonWake) != 0 {
5363 // Kick the scavenger awake if someone requested it.
5366 // retake P's blocked in syscalls
5367 // and preempt long running G's
5368 if retake(now) != 0 {
5373 // check if we need to force a GC
5374 if t := (gcTrigger{kind: gcTriggerTime, now: now}); t.test() && atomic.Load(&forcegc.idle) != 0 {
5378 list.push(forcegc.g)
5380 unlock(&forcegc.lock)
5382 if debug.schedtrace > 0 && lasttrace+int64(debug.schedtrace)*1000000 <= now {
5384 schedtrace(debug.scheddetail > 0)
5386 unlock(&sched.sysmonlock)
5390 type sysmontick struct {
5397 // forcePreemptNS is the time slice given to a G before it is preempted.
5399 const forcePreemptNS = 10 * 1000 * 1000 // 10ms
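// Illustrative effect (a sketch): with asynchronous preemption (the default since
// Go 1.14), even a loop that never calls a function is preempted once it exceeds
// its time slice, so a program like
//
//	package main
//
//	import (
//		"fmt"
//		"runtime"
//		"time"
//	)
//
//	func main() {
//		runtime.GOMAXPROCS(1)
//		go func() {
//			for { // tight loop, no function calls
//			}
//		}()
//		time.Sleep(100 * time.Millisecond)
//		fmt.Println("main still runs") // retake/preemptone forced the loop off the P
//	}
//
// still makes progress on the main goroutine; with GODEBUG=asyncpreemptoff=1 the
// tight loop could monopolize the single P indefinitely.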
5401 func retake(now int64) uint32 {
5403 // Prevent allp slice changes. This lock will be completely
5404 // uncontended unless we're already stopping the world.
5406 // We can't use a range loop over allp because we may
5407 // temporarily drop the allpLock. Hence, we need to re-fetch
5408 // allp each time around the loop.
5409 for i := 0; i < len(allp); i++ {
5412 // This can happen if procresize has grown
5413 // allp but not yet created new Ps.
5416 pd := &_p_.sysmontick
5419 if s == _Prunning || s == _Psyscall {
5420 // Preempt G if it's running for too long.
5421 t := int64(_p_.schedtick)
5422 if int64(pd.schedtick) != t {
5423 pd.schedtick = uint32(t)
5425 } else if pd.schedwhen+forcePreemptNS <= now {
5427 // In case of syscall, preemptone() doesn't
5428 // work, because there is no M wired to P.
5433 // Retake P from syscall if it's there for more than 1 sysmon tick (at least 20us).
5434 t := int64(_p_.syscalltick)
5435 if !sysretake && int64(pd.syscalltick) != t {
5436 pd.syscalltick = uint32(t)
5437 pd.syscallwhen = now
5440 // On the one hand we don't want to retake Ps if there is no other work to do,
5441 // but on the other hand we want to retake them eventually
5442 // because they can prevent the sysmon thread from deep sleep.
5443 if runqempty(_p_) && atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) > 0 && pd.syscallwhen+10*1000*1000 > now {
5446 // Drop allpLock so we can take sched.lock.
5448 // Need to decrement number of idle locked M's
5449 // (pretending that one more is running) before the CAS.
5450 // Otherwise the M from which we retake can exit the syscall,
5451 // increment nmidle and report deadlock.
5453 if atomic.Cas(&_p_.status, s, _Pidle) {
5455 traceGoSysBlock(_p_)
5470 // Tell all goroutines that they have been preempted and they should stop.
5471 // This function is purely best-effort. It can fail to inform a goroutine if a
5472 // processor just started running it.
5473 // No locks need to be held.
5474 // Returns true if preemption request was issued to at least one goroutine.
5475 func preemptall() bool {
5477 for _, _p_ := range allp {
5478 if _p_.status != _Prunning {
5481 if preemptone(_p_) {
5488 // Tell the goroutine running on processor P to stop.
5489 // This function is purely best-effort. It can incorrectly fail to inform the
5490 // goroutine. It can inform the wrong goroutine. Even if it informs the
5491 // correct goroutine, that goroutine might ignore the request if it is
5492 // simultaneously executing newstack.
5493 // No lock needs to be held.
5494 // Returns true if preemption request was issued.
5495 // The actual preemption will happen at some point in the future
5496 // and will be indicated by the gp->status no longer being Grunning.
5498 func preemptone(_p_ *p) bool {
5500 if mp == nil || mp == getg().m {
5504 if gp == nil || gp == mp.g0 {
5510 // Every call in a goroutine checks for stack overflow by
5511 // comparing the current stack pointer to gp->stackguard0.
5512 // Setting gp->stackguard0 to StackPreempt folds
5513 // preemption into the normal stack overflow check.
5514 gp.stackguard0 = stackPreempt
5516 // Request an async preemption of this P.
5517 if preemptMSupported && debug.asyncpreemptoff == 0 {
5527 func schedtrace(detailed bool) {
5534 print("SCHED ", (now-starttime)/1e6, "ms: gomaxprocs=", gomaxprocs, " idleprocs=", sched.npidle, " threads=", mcount(), " spinningthreads=", sched.nmspinning, " idlethreads=", sched.nmidle, " runqueue=", sched.runqsize)
5536 print(" gcwaiting=", sched.gcwaiting, " nmidlelocked=", sched.nmidlelocked, " stopwait=", sched.stopwait, " sysmonwait=", sched.sysmonwait, "\n")
5538 // We must be careful while reading data from P's, M's and G's.
5539 // Even if we hold schedlock, most data can be changed concurrently.
5540 // E.g. (p->m ? p->m->id : -1) can crash if p->m changes from non-nil to nil.
5541 for i, _p_ := range allp {
5543 h := atomic.Load(&_p_.runqhead)
5544 t := atomic.Load(&_p_.runqtail)
5550 print(" P", i, ": status=", _p_.status, " schedtick=", _p_.schedtick, " syscalltick=", _p_.syscalltick, " m=", id, " runqsize=", t-h, " gfreecnt=", _p_.gFree.n, " timerslen=", len(_p_.timers), "\n")
5552 // In non-detailed mode, format the lengths of the per-P run queues as:
5553 // [len1 len2 len3 len4]
5559 if i == len(allp)-1 {
5570 for mp := allm; mp != nil; mp = mp.alllink {
5573 lockedg := mp.lockedg.ptr()
5586 print(" M", mp.id, ": p=", id1, " curg=", id2, " mallocing=", mp.mallocing, " throwing=", mp.throwing, " preemptoff=", mp.preemptoff, ""+" locks=", mp.locks, " dying=", mp.dying, " spinning=", mp.spinning, " blocked=", mp.blocked, " lockedg=", id3, "\n")
5589 forEachG(func(gp *g) {
5591 lockedm := gp.lockedm.ptr()
5600 print(" G", gp.goid, ": status=", readgstatus(gp), "(", gp.waitreason.String(), ") m=", id1, " lockedm=", id2, "\n")
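// Illustrative usage (a sketch): this trace is enabled with the schedtrace GODEBUG
// option, optionally combined with scheddetail:
//
//	GODEBUG=schedtrace=1000 ./prog               // one summary line per second
//	GODEBUG=schedtrace=1000,scheddetail=1 ./prog // plus per-P, per-M, per-G detail
//
// Summary lines are shaped like
//
//	SCHED 2006ms: gomaxprocs=8 idleprocs=6 threads=12 spinningthreads=0 idlethreads=4 runqueue=0 [1 0 0 0 0 0 0 0]
//
// (numbers invented; the format is exactly what the print calls above emit).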
5605 // schedEnableUser enables or disables the scheduling of user
5608 // This does not stop already running user goroutines, so the caller
5609 // should first stop the world when disabling user goroutines.
5610 func schedEnableUser(enable bool) {
5612 if sched.disable.user == !enable {
5616 sched.disable.user = !enable
5618 n := sched.disable.n
5620 globrunqputbatch(&sched.disable.runnable, n)
5622 for ; n != 0 && sched.npidle != 0; n-- {
5630 // schedEnabled reports whether gp should be scheduled. It returns
5631 // false if scheduling of gp is disabled.
5633 // sched.lock must be held.
5634 func schedEnabled(gp *g) bool {
5635 assertLockHeld(&sched.lock)
5637 if sched.disable.user {
5638 return isSystemGoroutine(gp, true)
5643 // Put mp on midle list.
5644 // sched.lock must be held.
5645 // May run during STW, so write barriers are not allowed.
5646 //go:nowritebarrierrec
5648 assertLockHeld(&sched.lock)
5650 mp.schedlink = sched.midle
5656 // Try to get an m from midle list.
5657 // sched.lock must be held.
5658 // May run during STW, so write barriers are not allowed.
5659 //go:nowritebarrierrec
5661 assertLockHeld(&sched.lock)
5663 mp := sched.midle.ptr()
5665 sched.midle = mp.schedlink
5671 // Put gp on the global runnable queue.
5672 // sched.lock must be held.
5673 // May run during STW, so write barriers are not allowed.
5674 //go:nowritebarrierrec
5675 func globrunqput(gp *g) {
5676 assertLockHeld(&sched.lock)
5678 sched.runq.pushBack(gp)
5682 // Put gp at the head of the global runnable queue.
5683 // sched.lock must be held.
5684 // May run during STW, so write barriers are not allowed.
5685 //go:nowritebarrierrec
5686 func globrunqputhead(gp *g) {
5687 assertLockHeld(&sched.lock)
5693 // Put a batch of runnable goroutines on the global runnable queue.
5694 // This clears *batch.
5695 // sched.lock must be held.
5696 // May run during STW, so write barriers are not allowed.
5697 //go:nowritebarrierrec
5698 func globrunqputbatch(batch *gQueue, n int32) {
5699 assertLockHeld(&sched.lock)
5701 sched.runq.pushBackAll(*batch)
5706 // Try to get a batch of G's from the global runnable queue.
5707 // sched.lock must be held.
5708 func globrunqget(_p_ *p, max int32) *g {
5709 assertLockHeld(&sched.lock)
5711 if sched.runqsize == 0 {
5715 n := sched.runqsize/gomaxprocs + 1
5716 if n > sched.runqsize {
5719 if max > 0 && n > max {
5722 if n > int32(len(_p_.runq))/2 {
5723 n = int32(len(_p_.runq)) / 2
5728 gp := sched.runq.pop()
5731 gp1 := sched.runq.pop()
5732 runqput(_p_, gp1, false)
5737 // pMask is an atomic bitstring with one bit per P.
5740 // read returns true if P id's bit is set.
5741 func (p pMask) read(id uint32) bool {
5743 mask := uint32(1) << (id % 32)
5744 return (atomic.Load(&p[word]) & mask) != 0
5747 // set sets P id's bit.
5748 func (p pMask) set(id int32) {
5750 mask := uint32(1) << (id % 32)
5751 atomic.Or(&p[word], mask)
5754 // clear clears P id's bit.
5755 func (p pMask) clear(id int32) {
5757 mask := uint32(1) << (id % 32)
5758 atomic.And(&p[word], ^mask)
5761 // updateTimerPMask clears pp's timer mask if it has no timers on its heap.
5763 // Ideally, the timer mask would be kept immediately consistent on any timer
5764 // operations. Unfortunately, updating a shared global data structure in the
5765 // timer hot path adds too much overhead in applications frequently switching
5766 // between no timers and some timers.
5768 // As a compromise, the timer mask is updated only on pidleget / pidleput. A
5769 // running P (returned by pidleget) may add a timer at any time, so its mask
5770 // must be set. An idle P (passed to pidleput) cannot add new timers while
5771 // idle, so if it has no timers at that time, its mask may be cleared.
5773 // Thus, we get the following effects on timer-stealing in findrunnable:
5775 // * Idle Ps with no timers when they go idle are never checked in findrunnable
5776 // (for work- or timer-stealing; this is the ideal case).
5777 // * Running Ps must always be checked.
5778 // * Idle Ps whose timers are stolen must continue to be checked until they run
5779 // again, even after timer expiration.
5781 // When the P starts running again, the mask should be set, as a timer may be
5782 // added at any time.
5784 // TODO(prattmic): Additional targeted updates may improve the above cases.
5785 // e.g., updating the mask when stealing a timer.
5786 func updateTimerPMask(pp *p) {
5787 if atomic.Load(&pp.numTimers) > 0 {
5791 // Looks like there are no timers, however another P may transiently
5792 // decrement numTimers when handling a timerModified timer in
5793 // checkTimers. We must take timersLock to serialize with these changes.
5794 lock(&pp.timersLock)
5795 if atomic.Load(&pp.numTimers) == 0 {
5796 timerpMask.clear(pp.id)
5798 unlock(&pp.timersLock)
5801 // pidleput puts p on the _Pidle list.
5803 // This releases ownership of p. Once sched.lock is released it is no longer safe to use p.
5806 // sched.lock must be held.
5808 // May run during STW, so write barriers are not allowed.
5809 //go:nowritebarrierrec
5810 func pidleput(_p_ *p) {
5811 assertLockHeld(&sched.lock)
5813 if !runqempty(_p_) {
5814 throw("pidleput: P has non-empty run queue")
5816 updateTimerPMask(_p_) // clear if there are no timers.
5817 idlepMask.set(_p_.id)
5818 _p_.link = sched.pidle
5819 sched.pidle.set(_p_)
5820 atomic.Xadd(&sched.npidle, 1) // TODO: fast atomic
5823 // pidleget tries to get a p from the _Pidle list, acquiring ownership.
5825 // sched.lock must be held.
5827 // May run during STW, so write barriers are not allowed.
5828 //go:nowritebarrierrec
5829 func pidleget() *p {
5830 assertLockHeld(&sched.lock)
5832 _p_ := sched.pidle.ptr()
5834 // Timer may get added at any time now.
5835 timerpMask.set(_p_.id)
5836 idlepMask.clear(_p_.id)
5837 sched.pidle = _p_.link
5838 atomic.Xadd(&sched.npidle, -1) // TODO: fast atomic
5843 // runqempty reports whether _p_ has no Gs on its local run queue.
5844 // It never returns true spuriously.
5845 func runqempty(_p_ *p) bool {
5846 // Defend against a race where 1) _p_ has G1 in runqnext but runqhead == runqtail,
5847 // 2) runqput on _p_ kicks G1 to the runq, 3) runqget on _p_ empties runqnext.
5848 // Simply observing that runqhead == runqtail and then observing that runqnext == nil
5849 // does not mean the queue is empty.
5851 head := atomic.Load(&_p_.runqhead)
5852 tail := atomic.Load(&_p_.runqtail)
5853 runnext := atomic.Loaduintptr((*uintptr)(unsafe.Pointer(&_p_.runnext)))
5854 if tail == atomic.Load(&_p_.runqtail) {
5855 return head == tail && runnext == 0
5860 // To shake out latent assumptions about scheduling order,
5861 // we introduce some randomness into scheduling decisions
5862 // when running with the race detector.
5863 // The need for this was made obvious by changing the
5864 // (deterministic) scheduling order in Go 1.5 and breaking
5865 // many poorly-written tests.
5866 // With the randomness here, as long as the tests pass
5867 // consistently with -race, they shouldn't have latent scheduling assumptions.
5869 const randomizeScheduler = raceenabled
5871 // runqput tries to put g on the local runnable queue.
5872 // If next is false, runqput adds g to the tail of the runnable queue.
5873 // If next is true, runqput puts g in the _p_.runnext slot.
5874 // If the run queue is full, runqput puts g on the global queue.
5875 // Executed only by the owner P.
5876 func runqput(_p_ *p, gp *g, next bool) {
5877 if randomizeScheduler && next && fastrand()%2 == 0 {
5883 oldnext := _p_.runnext
5884 if !_p_.runnext.cas(oldnext, guintptr(unsafe.Pointer(gp))) {
5890 // Kick the old runnext out to the regular run queue.
5895 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with consumers
5897 if t-h < uint32(len(_p_.runq)) {
5898 _p_.runq[t%uint32(len(_p_.runq))].set(gp)
5899 atomic.StoreRel(&_p_.runqtail, t+1) // store-release, makes the item available for consumption
5902 if runqputslow(_p_, gp, h, t) {
5905 // the queue is not full, now the put above must succeed
5909 // Put g and a batch of work from local runnable queue on global queue.
5910 // Executed only by the owner P.
5911 func runqputslow(_p_ *p, gp *g, h, t uint32) bool {
5912 var batch [len(_p_.runq)/2 + 1]*g
5914 // First, grab a batch from local queue.
5917 if n != uint32(len(_p_.runq)/2) {
5918 throw("runqputslow: queue is not full")
5920 for i := uint32(0); i < n; i++ {
5921 batch[i] = _p_.runq[(h+i)%uint32(len(_p_.runq))].ptr()
5923 if !atomic.CasRel(&_p_.runqhead, h, h+n) { // cas-release, commits consume
5928 if randomizeScheduler {
5929 for i := uint32(1); i <= n; i++ {
5930 j := fastrandn(i + 1)
5931 batch[i], batch[j] = batch[j], batch[i]
5935 // Link the goroutines.
5936 for i := uint32(0); i < n; i++ {
5937 batch[i].schedlink.set(batch[i+1])
5940 q.head.set(batch[0])
5941 q.tail.set(batch[n])
5943 // Now put the batch on global queue.
5945 globrunqputbatch(&q, int32(n+1))
5950 // runqputbatch tries to put all the G's on q on the local runnable queue.
5951 // If the queue is full, they are put on the global queue; in that case
5952 // this will temporarily acquire the scheduler lock.
5953 // Executed only by the owner P.
5954 func runqputbatch(pp *p, q *gQueue, qsize int) {
5955 h := atomic.LoadAcq(&pp.runqhead)
5958 for !q.empty() && t-h < uint32(len(pp.runq)) {
5960 pp.runq[t%uint32(len(pp.runq))].set(gp)
5966 if randomizeScheduler {
5967 off := func(o uint32) uint32 {
5968 return (pp.runqtail + o) % uint32(len(pp.runq))
5970 for i := uint32(1); i < n; i++ {
5971 j := fastrandn(i + 1)
5972 pp.runq[off(i)], pp.runq[off(j)] = pp.runq[off(j)], pp.runq[off(i)]
5976 atomic.StoreRel(&pp.runqtail, t)
5979 globrunqputbatch(q, int32(qsize))
5984 // Get g from local runnable queue.
5985 // If inheritTime is true, gp should inherit the remaining time in the
5986 // current time slice. Otherwise, it should start a new time slice.
5987 // Executed only by the owner P.
5988 func runqget(_p_ *p) (gp *g, inheritTime bool) {
5989 // If there's a runnext, it's the next G to run.
5995 if _p_.runnext.cas(next, 0) {
5996 return next.ptr(), true
6001 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
6006 gp := _p_.runq[h%uint32(len(_p_.runq))].ptr()
6007 if atomic.CasRel(&_p_.runqhead, h, h+1) { // cas-release, commits consume
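// Illustrative sketch (not the runtime's code): the local run queue above is a
// fixed-size ring addressed by free-running head/tail counters, where position i
// lives in runq[i%len(runq)]; the owner P appends at the tail and any P may
// consume from the head with a CAS. A toy version using sync/atomic, with
// hypothetical names:
//
//	type ring struct {
//		head, tail uint32 // free-running counters; they only ever increase
//		buf        [8]int
//	}
//
//	// put is called only by the owner (single producer).
//	func (r *ring) put(v int) bool {
//		h := atomic.LoadUint32(&r.head) // consumers advance head
//		t := r.tail                     // only the owner writes tail
//		if t-h == uint32(len(r.buf)) {
//			return false // full; the runtime spills to the global queue here
//		}
//		r.buf[t%uint32(len(r.buf))] = v
//		atomic.StoreUint32(&r.tail, t+1) // publish the new item
//		return true
//	}
//
//	// get may be called by any consumer (the owner or a stealing P).
//	func (r *ring) get() (int, bool) {
//		for {
//			h := atomic.LoadUint32(&r.head)
//			t := atomic.LoadUint32(&r.tail)
//			if h == t {
//				return 0, false // empty
//			}
//			v := r.buf[h%uint32(len(r.buf))]
//			if atomic.CompareAndSwapUint32(&r.head, h, h+1) {
//				return v, true // slot h is now ours
//			}
//			// lost the race with another consumer; retry
//		}
//	}
//
// The real code additionally uses acquire/release orderings (LoadAcq, StoreRel,
// CasRel) and the runnext fast path shown above.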
6013 // runqdrain drains the local runnable queue of _p_ and returns all goroutines in it.
6014 // Executed only by the owner P.
6015 func runqdrain(_p_ *p) (drainQ gQueue, n uint32) {
6016 oldNext := _p_.runnext
6017 if oldNext != 0 && _p_.runnext.cas(oldNext, 0) {
6018 drainQ.pushBack(oldNext.ptr())
6023 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
6029 if qn > uint32(len(_p_.runq)) { // read inconsistent h and t
6033 if !atomic.CasRel(&_p_.runqhead, h, h+qn) { // cas-release, commits consume
6037 // The usual order is inverted here: the head pointer is advanced (the CasRel
6038 // above) before the G's are copied out of the local run queue, because we don't
6039 // want to touch the G's while runqdrain() and runqsteal() run in parallel.
6040 // Only after the CAS gives this P full ownership of the G's do we update
6041 // gp.schedlink by pushing them onto the gQueue; at that point other P's can no
6042 // longer see or steal them.
6043 // See https://groups.google.com/g/golang-dev/c/0pTKxEKhHSc/m/6Q85QjdVBQAJ for more details.
6044 for i := uint32(0); i < qn; i++ {
6045 gp := _p_.runq[(h+i)%uint32(len(_p_.runq))].ptr()
6052 // Grabs a batch of goroutines from _p_'s runnable queue into batch.
6053 // Batch is a ring buffer starting at batchHead.
6054 // Returns number of grabbed goroutines.
6055 // Can be executed by any P.
6056 func runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32 {
6058 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
6059 t := atomic.LoadAcq(&_p_.runqtail) // load-acquire, synchronize with the producer
6064 // Try to steal from _p_.runnext.
6065 if next := _p_.runnext; next != 0 {
6066 if _p_.status == _Prunning {
6067 // Sleep to ensure that _p_ isn't about to run the g
6068 // we are about to steal.
6069 // The important use case here is when the g running
6070 // on _p_ ready()s another g and then almost
6071 // immediately blocks. Instead of stealing runnext
6072 // in this window, back off to give _p_ a chance to
6073 // schedule runnext. This will avoid thrashing gs
6074 // between different Ps.
6075 // A sync chan send/recv takes ~50ns as of time of
6076 // writing, so 3us gives ~50x overshoot.
6077 if GOOS != "windows" {
6080 // On windows system timer granularity is
6081 // 1-15ms, which is way too much for this
6082 // optimization. So just yield.
6086 if !_p_.runnext.cas(next, 0) {
6089 batch[batchHead%uint32(len(batch))] = next
6095 if n > uint32(len(_p_.runq)/2) { // read inconsistent h and t
6098 for i := uint32(0); i < n; i++ {
6099 g := _p_.runq[(h+i)%uint32(len(_p_.runq))]
6100 batch[(batchHead+i)%uint32(len(batch))] = g
6102 if atomic.CasRel(&_p_.runqhead, h, h+n) { // cas-release, commits consume
6108 // Steal half of elements from local runnable queue of p2
6109 // and put onto local runnable queue of p.
6110 // Returns one of the stolen elements (or nil if failed).
6111 func runqsteal(_p_, p2 *p, stealRunNextG bool) *g {
6113 n := runqgrab(p2, &_p_.runq, t, stealRunNextG)
6118 gp := _p_.runq[(t+n)%uint32(len(_p_.runq))].ptr()
6122 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with consumers
6123 if t-h+n >= uint32(len(_p_.runq)) {
6124 throw("runqsteal: runq overflow")
6126 atomic.StoreRel(&_p_.runqtail, t+n) // store-release, makes the item available for consumption
6130 // A gQueue is a deque of Gs linked through g.schedlink. A G can only
6131 // be on one gQueue or gList at a time.
6132 type gQueue struct {
6137 // empty reports whether q is empty.
6138 func (q *gQueue) empty() bool {
6142 // push adds gp to the head of q.
6143 func (q *gQueue) push(gp *g) {
6144 gp.schedlink = q.head
6151 // pushBack adds gp to the tail of q.
6152 func (q *gQueue) pushBack(gp *g) {
6155 q.tail.ptr().schedlink.set(gp)
6162 // pushBackAll adds all Gs in q2 to the tail of q. After this q2 must not be used.
6164 func (q *gQueue) pushBackAll(q2 gQueue) {
6168 q2.tail.ptr().schedlink = 0
6170 q.tail.ptr().schedlink = q2.head
6177 // pop removes and returns the head of queue q. It returns nil if
6179 func (q *gQueue) pop() *g {
6182 q.head = gp.schedlink
6190 // popList takes all Gs in q and returns them as a gList.
6191 func (q *gQueue) popList() gList {
6192 stack := gList{q.head}
6197 // A gList is a list of Gs linked through g.schedlink. A G can only be
6198 // on one gQueue or gList at a time.
6203 // empty reports whether l is empty.
6204 func (l *gList) empty() bool {
6208 // push adds gp to the head of l.
6209 func (l *gList) push(gp *g) {
6210 gp.schedlink = l.head
6214 // pushAll prepends all Gs in q to l.
6215 func (l *gList) pushAll(q gQueue) {
6217 q.tail.ptr().schedlink = l.head
6222 // pop removes and returns the head of l. If l is empty, it returns nil.
6223 func (l *gList) pop() *g {
6226 l.head = gp.schedlink
6231 //go:linkname setMaxThreads runtime/debug.setMaxThreads
6232 func setMaxThreads(in int) (out int) {
6234 out = int(sched.maxmcount)
6235 if in > 0x7fffffff { // MaxInt32
6236 sched.maxmcount = 0x7fffffff
6238 sched.maxmcount = int32(in)
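// Illustrative caller (a sketch): the exported knob is runtime/debug.SetMaxThreads.
// The default limit is 10000 threads; exceeding it crashes the program:
//
//	prev := debug.SetMaxThreads(2000) // returns the previous limit
//	defer debug.SetMaxThreads(prev)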
6246 func procPin() int {
6251 return int(mp.p.ptr().id)
6260 //go:linkname sync_runtime_procPin sync.runtime_procPin
6262 func sync_runtime_procPin() int {
6266 //go:linkname sync_runtime_procUnpin sync.runtime_procUnpin
6268 func sync_runtime_procUnpin() {
6272 //go:linkname sync_atomic_runtime_procPin sync/atomic.runtime_procPin
6274 func sync_atomic_runtime_procPin() int {
6278 //go:linkname sync_atomic_runtime_procUnpin sync/atomic.runtime_procUnpin
6280 func sync_atomic_runtime_procUnpin() {
6284 // Active spinning for sync.Mutex.
6285 //go:linkname sync_runtime_canSpin sync.runtime_canSpin
6287 func sync_runtime_canSpin(i int) bool {
6288 // sync.Mutex is cooperative, so we are conservative with spinning.
6289 // Spin only few times and only if running on a multicore machine and
6290 // GOMAXPROCS>1 and there is at least one other running P and local runq is empty.
6291 // As opposed to runtime mutex we don't do passive spinning here,
6292 // because there can be work on global runq or on other Ps.
6293 if i >= active_spin || ncpu <= 1 || gomaxprocs <= int32(sched.npidle+sched.nmspinning)+1 {
6296 if p := getg().m.p.ptr(); !runqempty(p) {
6302 //go:linkname sync_runtime_doSpin sync.runtime_doSpin
6304 func sync_runtime_doSpin() {
6305 procyield(active_spin_cnt)
6308 var stealOrder randomOrder
6310 // randomOrder/randomEnum are helper types for randomized work stealing.
6311 // They allow enumerating all Ps in different pseudo-random orders without repetitions.
6312 // The algorithm is based on the fact that if we have X such that X and GOMAXPROCS
6313 // are coprime, then the sequence (i + X) % GOMAXPROCS gives the required enumeration.
6314 type randomOrder struct {
6319 type randomEnum struct {
6326 func (ord *randomOrder) reset(count uint32) {
6328 ord.coprimes = ord.coprimes[:0]
6329 for i := uint32(1); i <= count; i++ {
6330 if gcd(i, count) == 1 {
6331 ord.coprimes = append(ord.coprimes, i)
6336 func (ord *randomOrder) start(i uint32) randomEnum {
6340 inc: ord.coprimes[i%uint32(len(ord.coprimes))],
6344 func (enum *randomEnum) done() bool {
6345 return enum.i == enum.count
6348 func (enum *randomEnum) next() {
6350 enum.pos = (enum.pos + enum.inc) % enum.count
6353 func (enum *randomEnum) position() uint32 {
6357 func gcd(a, b uint32) uint32 {
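// Worked example (illustrative): for count = 4 the values in [1,4] coprime with 4
// are 1 and 3, so ord.coprimes = {1, 3}. An enumeration starting at pos = 2 with
// inc = 3 visits 2, (2+3)%4 = 1, (1+3)%4 = 0, (0+3)%4 = 3: every P exactly once,
// in an order that depends on the random start value.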
6364 // An initTask represents the set of initializations that need to be done for a package.
6365 // Keep in sync with ../../test/initempty.go:initTask
6366 type initTask struct {
6367 // TODO: pack the first 3 fields more tightly?
6368 state uintptr // 0 = uninitialized, 1 = in progress, 2 = done
6371 // followed by ndeps instances of an *initTask, one per package depended on
6372 // followed by nfns pcs, one per init function to run
6375 // inittrace stores statistics for init functions which are
6376 // updated by malloc and newproc when active is true.
6377 var inittrace tracestat
6379 type tracestat struct {
6380 active bool // init tracing activation status
6381 id int64 // init goroutine id
6382 allocs uint64 // heap allocations
6383 bytes uint64 // heap allocated bytes
6386 func doInit(t *initTask) {
6388 case 2: // fully initialized
6390 case 1: // initialization in progress
6391 throw("recursive call during initialization - linker skew")
6392 default: // not initialized yet
6393 t.state = 1 // initialization in progress
6395 for i := uintptr(0); i < t.ndeps; i++ {
6396 p := add(unsafe.Pointer(t), (3+i)*goarch.PtrSize)
6397 t2 := *(**initTask)(p)
6402 t.state = 2 // initialization done
6411 if inittrace.active {
6413 // Load stats non-atomically since inittrace is updated only by this init goroutine.
6417 firstFunc := add(unsafe.Pointer(t), (3+t.ndeps)*goarch.PtrSize)
6418 for i := uintptr(0); i < t.nfns; i++ {
6419 p := add(firstFunc, i*goarch.PtrSize)
6420 f := *(*func())(unsafe.Pointer(&p))
6424 if inittrace.active {
6426 // Load stats non-atomically since inittrace is updated only by this init goroutine.
6429 f := *(*func())(unsafe.Pointer(&firstFunc))
6430 pkg := funcpkgpath(findfunc(abi.FuncPCABIInternal(f)))
6433 print("init ", pkg, " @")
6434 print(string(fmtNSAsMS(sbuf[:], uint64(start-runtimeInitTime))), " ms, ")
6435 print(string(fmtNSAsMS(sbuf[:], uint64(end-start))), " ms clock, ")
6436 print(string(itoa(sbuf[:], after.bytes-before.bytes)), " bytes, ")
6437 print(string(itoa(sbuf[:], after.allocs-before.allocs)), " allocs")
6441 t.state = 2 // initialization done
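// Illustrative usage (a sketch): init tracing is enabled with the inittrace
// GODEBUG option, e.g.
//
//	GODEBUG=inittrace=1 ./prog
//
// which makes doInit print one line per package init in the format assembled
// above, shaped roughly like
//
//	init crypto/rand @0.062 ms, 0.025 ms clock, 1024 bytes, 7 allocs
//
// (the numbers here are invented; the format is exactly what the print calls emit).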