1 // Copyright 2014 The Go Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style
3 // license that can be found in the LICENSE file.
10 "runtime/internal/atomic"
11 "runtime/internal/sys"
15 // set using cmd/go/internal/modload.ModInfoProg
18 // Goroutine scheduler
19 // The scheduler's job is to distribute ready-to-run goroutines over worker threads.
21 // The main concepts are:
23 // M - worker thread, or machine.
24 // P - processor, a resource that is required to execute Go code.
25 // M must have an associated P to execute Go code; however, it can be
26 // blocked or in a syscall w/o an associated P.
28 // Design doc at https://golang.org/s/go11sched.
30 // Worker thread parking/unparking.
31 // We need to balance between keeping enough running worker threads to utilize
32 // available hardware parallelism and parking excessive running worker threads
33 // to conserve CPU resources and power. This is not simple for two reasons:
34 // (1) scheduler state is intentionally distributed (in particular, per-P work
35 // queues), so it is not possible to compute global predicates on fast paths;
36 // (2) for optimal thread management we would need to know the future (don't park
37 // a worker thread when a new goroutine will be readied in near future).
39 // Three rejected approaches that would work badly:
40 // 1. Centralize all scheduler state (would inhibit scalability).
41 // 2. Direct goroutine handoff. That is, when we ready a new goroutine and there
42 // is a spare P, unpark a thread and hand it the P and the goroutine.
43 // This would lead to thread state thrashing, as the thread that readied the
44 // goroutine can be out of work the very next moment, and we will need to park it.
45 // Also, it would destroy locality of computation as we want to preserve
46 // dependent goroutines on the same thread; and introduce additional latency.
47 // 3. Unpark an additional thread whenever we ready a goroutine and there is an
48 // idle P, but don't do handoff. This would lead to excessive thread parking/
49 // unparking as the additional threads will instantly park without discovering
52 // The current approach:
54 // This approach applies to three primary sources of potential work: readying a
55 // goroutine, new/modified-earlier timers, and idle-priority GC. See below for
56 // additional details.
58 // We unpark an additional thread when we submit work if (this is wakep()):
59 // 1. There is an idle P, and
60 // 2. There are no "spinning" worker threads.
62 // A worker thread is considered spinning if it is out of local work and did
63 // not find work in the global run queue or netpoller; the spinning state is
64 // denoted in m.spinning and in sched.nmspinning. Threads unparked this way are
65 // also considered spinning; we don't do goroutine handoff so such threads are
66 // out of work initially. Spinning threads spin looking for work in per-P
67 // run queues and timer heaps or from the GC before parking. If a spinning
68 // thread finds work it takes itself out of the spinning state and proceeds to
69 // execution. If it does not find work it takes itself out of the spinning
70 // state and then parks.
72 // If there is at least one spinning thread (sched.nmspinning>0), we don't
73 // unpark new threads when submitting work. To compensate for that, if the last
74 // spinning thread finds work and stops spinning, it must unpark a new spinning
75 // thread. This approach smooths out unjustified spikes of thread unparking,
76 // but at the same time guarantees eventual maximal CPU parallelism
79 // The main implementation complication is that we need to be very careful
80 // during spinning->non-spinning thread transition. This transition can race
81 // with submission of new work, and either one part or another needs to unpark
82 // another worker thread. If they both fail to do that, we can end up with
83 // semi-persistent CPU underutilization.
85 // The general pattern for submission is:
86 // 1. Submit work to the local run queue, timer heap, or GC state.
87 // 2. #StoreLoad-style memory barrier.
88 // 3. Check sched.nmspinning.
90 // The general pattern for spinning->non-spinning transition is:
91 // 1. Decrement nmspinning.
92 // 2. #StoreLoad-style memory barrier.
93 // 3. Check all per-P work queues and GC for new work.
95 // Note that all this complexity does not apply to the global run queue, as we
96 // are not sloppy about thread unparking when submitting to the global queue.
97 // Also see comments for nmspinning manipulation.
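// As an illustrative sketch of the two patterns above (not the literal
// runtime code), the racing sides look roughly like this; findWorkAgain is a
// hypothetical stand-in for the re-check that findrunnable performs:
//
//	// Submission side (see wakep):
//	runqput(pp, gp, next)                    // 1. publish the work
//	                                         // 2. StoreLoad barrier (from the atomic below)
//	if atomic.Load(&sched.nmspinning) == 0 { // 3. check for spinning threads
//		wakep()
//	}
//
//	// Spinning -> non-spinning side:
//	atomic.Xadd(&sched.nmspinning, -1)       // 1. decrement
//	                                         // 2. StoreLoad barrier (from the atomic above)
//	if gp := findWorkAgain(); gp != nil {    // 3. re-check all per-P queues and GC
//		// Found work after all; run it (or hand it off and keep spinning).
//	}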
99 // How these different sources of work behave varies, though it doesn't affect
100 // the synchronization approach:
101 // * Ready goroutine: this is an obvious source of work; the goroutine is
102 // immediately ready and must run on some thread eventually.
103 // * New/modified-earlier timer: The current timer implementation (see time.go)
104 // uses netpoll in a thread with no work available to wait for the soonest
105 // timer. If there is no thread waiting, we want a new spinning thread to go
107 // * Idle-priority GC: The GC wakes a stopped idle thread to contribute to
108 // background GC work (note: currently disabled per golang.org/issue/19112).
109 // Also see golang.org/issue/44313, as this should be extended to all GC
119 //go:linkname runtime_inittask runtime..inittask
120 var runtime_inittask initTask
122 //go:linkname main_inittask main..inittask
123 var main_inittask initTask
125 // main_init_done is a signal used by cgocallbackg that initialization
126 // has been completed. It is made before _cgo_notify_runtime_init_done,
127 // so all cgo calls can rely on it existing. When main_init is complete,
128 // it is closed, meaning cgocallbackg can reliably receive from it.
129 var main_init_done chan bool
131 //go:linkname main_main main.main
134 // mainStarted indicates that the main M has started.
137 // runtimeInitTime is the nanotime() at which the runtime started.
138 var runtimeInitTime int64
140 // Value to use for signal mask for newly created M's.
141 var initSigmask sigset
143 // The main goroutine.
147 // Racectx of m0->g0 is used only as the parent of the main goroutine.
148 // It must not be used for anything else.
151 // Max stack size is 1 GB on 64-bit, 250 MB on 32-bit.
152 // Using decimal instead of binary GB and MB because
153 // they look nicer in the stack overflow failure message.
154 if sys.PtrSize == 8 {
155 maxstacksize = 1000000000
157 maxstacksize = 250000000
160 // An upper limit for max stack size. Used to avoid random crashes
161 // after calling SetMaxStack and trying to allocate a stack that is too big,
162 // since stackalloc works with 32-bit sizes.
163 maxstackceiling = 2 * maxstacksize
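// For reference, user code adjusts this limit through the public
// runtime/debug API, e.g. (a minimal, illustrative sketch):
//
//	import "runtime/debug"
//
//	func init() {
//		// Raise the per-goroutine stack limit to 512 MB.
//		// The previous limit is returned.
//		old := debug.SetMaxStack(512 << 20)
//		_ = old
//	}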
165 // Allow newproc to start new Ms.
168 if GOARCH != "wasm" { // no threads on wasm yet, so no sysmon
169 // For runtime_syscall_doAllThreadsSyscall, we
170 // register sysmon is not ready for the world to be
172 atomic.Store(&sched.sysmonStarting, 1)
174 newm(sysmon, nil, -1)
178 // Lock the main goroutine onto this, the main OS thread,
179 // during initialization. Most programs won't care, but a few
180 // do require certain calls to be made by the main thread.
181 // Those can arrange for main.main to run in the main thread
182 // by calling runtime.LockOSThread during initialization
183 // to preserve the lock.
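// For example (illustrative), a program whose C or GUI library requires the
// initial OS thread can keep main.main there with:
//
//	func init() {
//		// Runs during initialization, so main.main stays on the
//		// main OS thread.
//		runtime.LockOSThread()
//	}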
187 throw("runtime.main not on m0")
191 // Record when the world started.
192 // Must be before doInit for tracing init.
193 runtimeInitTime = nanotime()
194 if runtimeInitTime == 0 {
195 throw("nanotime returning zero")
198 if debug.inittrace != 0 {
199 inittrace.id = getg().goid
200 inittrace.active = true
203 doInit(&runtime_inittask) // Must be before defer.
205 // Defer unlock so that runtime.Goexit during init does the unlock too.
215 main_init_done = make(chan bool)
217 if _cgo_thread_start == nil {
218 throw("_cgo_thread_start missing")
220 if GOOS != "windows" {
221 if _cgo_setenv == nil {
222 throw("_cgo_setenv missing")
224 if _cgo_unsetenv == nil {
225 throw("_cgo_unsetenv missing")
228 if _cgo_notify_runtime_init_done == nil {
229 throw("_cgo_notify_runtime_init_done missing")
231 // Start the template thread in case we enter Go from
232 // a C-created thread and need to create a new thread.
233 startTemplateThread()
234 cgocall(_cgo_notify_runtime_init_done, nil)
237 doInit(&main_inittask)
239 // Disable init tracing after main init done to avoid overhead
240 // of collecting statistics in malloc and newproc
241 inittrace.active = false
243 close(main_init_done)
248 if isarchive || islibrary {
249 // A program compiled with -buildmode=c-archive or c-shared
250 // has a main, but it is not executed.
253 fn := main_main // make an indirect call, as the linker doesn't know the address of the main package when laying down the runtime
259 // Make racy client program work: if panicking on
260 // another goroutine at the same time as main returns,
261 // let the other goroutine finish printing the panic trace.
262 // Once it does, it will exit. See issues 3934 and 20018.
263 if atomic.Load(&runningPanicDefers) != 0 {
264 // Running deferred functions should not take long.
265 for c := 0; c < 1000; c++ {
266 if atomic.Load(&runningPanicDefers) == 0 {
272 if atomic.Load(&panicking) != 0 {
273 gopark(nil, nil, waitReasonPanicWait, traceEvGoStop, 1)
283 // os_beforeExit is called from os.Exit(0).
284 //go:linkname os_beforeExit os.runtime_beforeExit
285 func os_beforeExit() {
291 // start forcegc helper goroutine
296 func forcegchelper() {
298 lockInit(&forcegc.lock, lockRankForcegc)
301 if forcegc.idle != 0 {
302 throw("forcegc: phase error")
304 atomic.Store(&forcegc.idle, 1)
305 goparkunlock(&forcegc.lock, waitReasonForceGCIdle, traceEvGoBlock, 1)
306 // this goroutine is explicitly resumed by sysmon
307 if debug.gctrace > 0 {
310 // Time-triggered, fully concurrent.
311 gcStart(gcTrigger{kind: gcTriggerTime, now: nanotime()})
317 // Gosched yields the processor, allowing other goroutines to run. It does not
318 // suspend the current goroutine, so execution resumes automatically.
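// A typical (illustrative) use from user code is a long-running loop that
// yields periodically; work and n are placeholders for the caller's own code:
//
//	for i := 0; i < n; i++ {
//		work(i)
//		if i%1024 == 0 {
//			runtime.Gosched() // let other goroutines run
//		}
//	}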
324 // goschedguarded yields the processor like gosched, but also checks
325 // for forbidden states and opts out of the yield in those cases.
327 func goschedguarded() {
328 mcall(goschedguarded_m)
331 // Puts the current goroutine into a waiting state and calls unlockf on the
334 // If unlockf returns false, the goroutine is resumed.
336 // unlockf must not access this G's stack, as it may be moved between
337 // the call to gopark and the call to unlockf.
339 // Note that because unlockf is called after putting the G into a waiting
340 // state, the G may have already been readied by the time unlockf is called
341 // unless there is external synchronization preventing the G from being
342 // readied. If unlockf returns false, it must guarantee that the G cannot be
343 // externally readied.
345 // Reason explains why the goroutine has been parked. It is displayed in stack
346 // traces and heap dumps. Reasons should be unique and descriptive. Do not
347 // re-use reasons; add new ones.
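// As an illustrative sketch (not a real runtime primitive), a waiter and its
// notifier built on these functions look roughly like this, where l is a
// runtime mutex and takeWaiter is a hypothetical helper returning the parked
// g that the waiter recorded:
//
//	// Waiter, with l held; record this g where the notifier can find it.
//	goparkunlock(&l, waitReasonZero, traceEvGoBlock, 1)
//
//	// Notifier, after making the awaited condition true:
//	lock(&l)
//	gp := takeWaiter()
//	unlock(&l)
//	if gp != nil {
//		goready(gp, 1)
//	}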
348 func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int) {
349 if reason != waitReasonSleep {
350 checkTimeouts() // timeouts may expire while two goroutines keep the scheduler busy
354 status := readgstatus(gp)
355 if status != _Grunning && status != _Gscanrunning {
356 throw("gopark: bad g status")
359 mp.waitunlockf = unlockf
360 gp.waitreason = reason
361 mp.waittraceev = traceEv
362 mp.waittraceskip = traceskip
364 // can't do anything that might move the G between Ms here.
368 // Puts the current goroutine into a waiting state and unlocks the lock.
369 // The goroutine can be made runnable again by calling goready(gp).
370 func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int) {
371 gopark(parkunlock_c, unsafe.Pointer(lock), reason, traceEv, traceskip)
374 func goready(gp *g, traceskip int) {
376 ready(gp, traceskip, true)
381 func acquireSudog() *sudog {
382 // Delicate dance: the semaphore implementation calls
383 // acquireSudog, acquireSudog calls new(sudog),
384 // new calls malloc, malloc can call the garbage collector,
385 // and the garbage collector calls the semaphore implementation
387 // Break the cycle by doing acquirem/releasem around new(sudog).
388 // The acquirem/releasem increments m.locks during new(sudog),
389 // which keeps the garbage collector from being invoked.
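// Schematically (illustrative only), the allocation below is bracketed like:
//
//	mp := acquirem() // m.locks++ keeps the GC from being invoked
//	pp := mp.p.ptr()
//	// ... refill pp.sudogcache, possibly calling new(sudog) ...
//	releasem(mp)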
392 if len(pp.sudogcache) == 0 {
393 lock(&sched.sudoglock)
394 // First, try to grab a batch from central cache.
395 for len(pp.sudogcache) < cap(pp.sudogcache)/2 && sched.sudogcache != nil {
396 s := sched.sudogcache
397 sched.sudogcache = s.next
399 pp.sudogcache = append(pp.sudogcache, s)
401 unlock(&sched.sudoglock)
402 // If the central cache is empty, allocate a new one.
403 if len(pp.sudogcache) == 0 {
404 pp.sudogcache = append(pp.sudogcache, new(sudog))
407 n := len(pp.sudogcache)
408 s := pp.sudogcache[n-1]
409 pp.sudogcache[n-1] = nil
410 pp.sudogcache = pp.sudogcache[:n-1]
412 throw("acquireSudog: found s.elem != nil in cache")
419 func releaseSudog(s *sudog) {
421 throw("runtime: sudog with non-nil elem")
424 throw("runtime: sudog with non-false isSelect")
427 throw("runtime: sudog with non-nil next")
430 throw("runtime: sudog with non-nil prev")
432 if s.waitlink != nil {
433 throw("runtime: sudog with non-nil waitlink")
436 throw("runtime: sudog with non-nil c")
440 throw("runtime: releaseSudog with non-nil gp.param")
442 mp := acquirem() // avoid rescheduling to another P
444 if len(pp.sudogcache) == cap(pp.sudogcache) {
445 // Transfer half of local cache to the central cache.
446 var first, last *sudog
447 for len(pp.sudogcache) > cap(pp.sudogcache)/2 {
448 n := len(pp.sudogcache)
449 p := pp.sudogcache[n-1]
450 pp.sudogcache[n-1] = nil
451 pp.sudogcache = pp.sudogcache[:n-1]
459 lock(&sched.sudoglock)
460 last.next = sched.sudogcache
461 sched.sudogcache = first
462 unlock(&sched.sudoglock)
464 pp.sudogcache = append(pp.sudogcache, s)
468 // called from assembly
469 func badmcall(fn func(*g)) {
470 throw("runtime: mcall called on m->g0 stack")
473 func badmcall2(fn func(*g)) {
474 throw("runtime: mcall function returned")
477 func badreflectcall() {
478 panic(plainError("arg size to reflect.call more than 1GB"))
481 var badmorestackg0Msg = "fatal: morestack on g0\n"
484 //go:nowritebarrierrec
485 func badmorestackg0() {
486 sp := stringStructOf(&badmorestackg0Msg)
487 write(2, sp.str, int32(sp.len))
490 var badmorestackgsignalMsg = "fatal: morestack on gsignal\n"
493 //go:nowritebarrierrec
494 func badmorestackgsignal() {
495 sp := stringStructOf(&badmorestackgsignalMsg)
496 write(2, sp.str, int32(sp.len))
504 func lockedOSThread() bool {
506 return gp.lockedm != 0 && gp.m.lockedg != 0
510 // allgs contains all Gs ever created (including dead Gs), and thus
513 // Access via the slice is protected by allglock or stop-the-world.
514 // Readers that cannot take the lock may (carefully!) use the atomic
519 // allglen and allgptr are atomic variables that contain len(allgs) and
520 // &allgs[0] respectively. Proper ordering depends on totally-ordered
521 // loads and stores. Writes are protected by allglock.
523 // allgptr is updated before allglen. Readers should read allglen
524 // before allgptr to ensure that allglen is always <= len(allgptr). New
525 // Gs appended during the race can be missed. For a consistent view of
526 // all Gs, allglock must be held.
528 // allgptr copies should always be stored as a concrete type or
529 // unsafe.Pointer, not uintptr, to ensure that GC can still reach it
530 // even if it points to a stale array.
535 func allgadd(gp *g) {
536 if readgstatus(gp) == _Gidle {
537 throw("allgadd: bad status Gidle")
541 allgs = append(allgs, gp)
542 if &allgs[0] != allgptr {
543 atomicstorep(unsafe.Pointer(&allgptr), unsafe.Pointer(&allgs[0]))
545 atomic.Storeuintptr(&allglen, uintptr(len(allgs)))
549 // atomicAllG returns &allgs[0] and len(allgs) for use with atomicAllGIndex.
550 func atomicAllG() (**g, uintptr) {
551 length := atomic.Loaduintptr(&allglen)
552 ptr := (**g)(atomic.Loadp(unsafe.Pointer(&allgptr)))
556 // atomicAllGIndex returns ptr[i] with the allgptr returned from atomicAllG.
557 func atomicAllGIndex(ptr **g, i uintptr) *g {
558 return *(**g)(add(unsafe.Pointer(ptr), i*sys.PtrSize))
561 // forEachG calls fn on every G from allgs.
563 // forEachG takes a lock to exclude concurrent addition of new Gs.
564 func forEachG(fn func(gp *g)) {
566 for _, gp := range allgs {
572 // forEachGRace calls fn on every G from allgs.
574 // forEachGRace avoids locking, but does not exclude addition of new Gs during
575 // execution, which may be missed.
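// For example (illustrative), a best-effort pass over goroutine statuses
// that tolerates missing newly created Gs:
//
//	forEachGRace(func(gp *g) {
//		if readgstatus(gp) == _Grunnable {
//			// ... account for a runnable goroutine ...
//		}
//	})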
576 func forEachGRace(fn func(gp *g)) {
577 ptr, length := atomicAllG()
578 for i := uintptr(0); i < length; i++ {
579 gp := atomicAllGIndex(ptr, i)
586 // Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
587 // 16 seems to provide enough amortization, but other than that it's a mostly arbitrary number.
591 // cpuinit extracts the environment variable GODEBUG from the environment on
592 // Unix-like operating systems and calls internal/cpu.Initialize.
594 const prefix = "GODEBUG="
598 case "aix", "darwin", "ios", "dragonfly", "freebsd", "netbsd", "openbsd", "illumos", "solaris", "linux":
599 cpu.DebugOptions = true
601 // Similar to goenv_unix but extracts the environment value for
603 // TODO(moehrmann): remove when general goenvs() can be called before cpuinit()
605 for argv_index(argv, argc+1+n) != nil {
609 for i := int32(0); i < n; i++ {
610 p := argv_index(argv, argc+1+i)
611 s := *(*string)(unsafe.Pointer(&stringStruct{unsafe.Pointer(p), findnull(p)}))
613 if hasPrefix(s, prefix) {
614 env = gostring(p)[len(prefix):]
622 // These cpu feature support variables are used in code generated by the compiler
623 // to guard execution of instructions that cannot be assumed to be always supported.
624 x86HasPOPCNT = cpu.X86.HasPOPCNT
625 x86HasSSE41 = cpu.X86.HasSSE41
626 x86HasFMA = cpu.X86.HasFMA
628 armHasVFPv4 = cpu.ARM.HasVFPv4
630 arm64HasATOMICS = cpu.ARM64.HasATOMICS
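// Illustratively, compiler-generated code guards an optimized instruction
// sequence on one of these variables roughly like:
//
//	if x86HasPOPCNT {
//		// use the POPCNT instruction directly
//	} else {
//		// fall back to a generic software bit count
//	}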
633 // The bootstrap sequence is:
637 // make & queue new G
638 // call runtime·mstart
640 // The new G calls runtime·main.
642 lockInit(&sched.lock, lockRankSched)
643 lockInit(&sched.sysmonlock, lockRankSysmon)
644 lockInit(&sched.deferlock, lockRankDefer)
645 lockInit(&sched.sudoglock, lockRankSudog)
646 lockInit(&deadlock, lockRankDeadlock)
647 lockInit(&paniclk, lockRankPanic)
648 lockInit(&allglock, lockRankAllg)
649 lockInit(&allpLock, lockRankAllp)
650 lockInit(&reflectOffs.lock, lockRankReflectOffs)
651 lockInit(&finlock, lockRankFin)
652 lockInit(&trace.bufLock, lockRankTraceBuf)
653 lockInit(&trace.stringsLock, lockRankTraceStrings)
654 lockInit(&trace.lock, lockRankTrace)
655 lockInit(&cpuprof.lock, lockRankCpuprof)
656 lockInit(&trace.stackTab.lock, lockRankTraceStackTab)
657 // Enforce that this lock is always a leaf lock.
658 // All of this lock's critical sections should be
660 lockInit(&memstats.heapStats.noPLock, lockRankLeafRank)
662 // raceinit must be the first call to race detector.
663 // In particular, it must be done before mallocinit below calls racemapshadow.
666 _g_.racectx, raceprocctx0 = raceinit()
669 sched.maxmcount = 10000
671 // The world starts stopped.
677 fastrandinit() // must run before mcommoninit
678 mcommoninit(_g_.m, -1)
679 cpuinit() // must run before alginit
680 alginit() // maps must not be used before this call
681 modulesinit() // provides activeModules
682 typelinksinit() // uses maps, activeModules
683 itabsinit() // uses activeModules
685 sigsave(&_g_.m.sigmask)
686 initSigmask = _g_.m.sigmask
688 if offset := unsafe.Offsetof(sched.timeToRun); offset%8 != 0 {
690 throw("sched.timeToRun not aligned to 8 bytes")
699 sched.lastpoll = uint64(nanotime())
701 if n, ok := atoi32(gogetenv("GOMAXPROCS")); ok && n > 0 {
704 if procresize(procs) != nil {
705 throw("unknown runnable goroutine during bootstrap")
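// After startup, user code can query or change the number of Ps through the
// public API, e.g. (illustrative):
//
//	n := runtime.GOMAXPROCS(0) // query without changing
//	runtime.GOMAXPROCS(n * 2)  // ask for twice as many Ps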
709 // World is effectively started now, as P's can run.
712 // For cgocheck > 1, we turn on the write barrier at all times
713 // and check all pointer writes. We can't do this until after
714 // procresize because the write barrier needs a P.
715 if debug.cgocheck > 1 {
716 writeBarrier.cgo = true
717 writeBarrier.enabled = true
718 for _, p := range allp {
723 if buildVersion == "" {
724 // Condition should never trigger. This code just serves
725 // to ensure runtime·buildVersion is kept in the resulting binary.
726 buildVersion = "unknown"
728 if len(modinfo) == 1 {
729 // Condition should never trigger. This code just serves
730 // to ensure runtime·modinfo is kept in the resulting binary.
735 func dumpgstatus(gp *g) {
737 print("runtime: gp: gp=", gp, ", goid=", gp.goid, ", gp->atomicstatus=", readgstatus(gp), "\n")
738 print("runtime: g: g=", _g_, ", goid=", _g_.goid, ", g->atomicstatus=", readgstatus(_g_), "\n")
741 // sched.lock must be held.
743 assertLockHeld(&sched.lock)
745 if mcount() > sched.maxmcount {
746 print("runtime: program exceeds ", sched.maxmcount, "-thread limit\n")
747 throw("thread exhaustion")
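// The limit itself can be raised or lowered from user code via
// runtime/debug, e.g. (illustrative):
//
//	prev := debug.SetMaxThreads(20000) // returns the previous limit
//	_ = prev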
751 // mReserveID returns the next ID to use for a new m. This new m is immediately
752 // considered 'running' by checkdead.
754 // sched.lock must be held.
755 func mReserveID() int64 {
756 assertLockHeld(&sched.lock)
758 if sched.mnext+1 < sched.mnext {
759 throw("runtime: thread ID overflow")
767 // Pre-allocated ID may be passed as 'id', or omitted by passing -1.
768 func mcommoninit(mp *m, id int64) {
771 // g0 stack won't make sense for user (and is not necessarily unwindable).
773 callers(1, mp.createstack[:])
784 mp.fastrand[0] = uint32(int64Hash(uint64(mp.id), fastrandseed))
785 mp.fastrand[1] = uint32(int64Hash(uint64(cputicks()), ^fastrandseed))
786 if mp.fastrand[0]|mp.fastrand[1] == 0 {
791 if mp.gsignal != nil {
792 mp.gsignal.stackguard1 = mp.gsignal.stack.lo + _StackGuard
795 // Add to allm so garbage collector doesn't free g->m
796 // when it is just in a register or thread-local storage.
799 // NumCgoCall() iterates over allm w/o schedlock,
800 // so we need to publish it safely.
801 atomicstorep(unsafe.Pointer(&allm), unsafe.Pointer(mp))
804 // Allocate memory to hold a cgo traceback if the cgo call crashes.
805 if iscgo || GOOS == "solaris" || GOOS == "illumos" || GOOS == "windows" {
806 mp.cgoCallers = new(cgoCallers)
810 var fastrandseed uintptr
812 func fastrandinit() {
813 s := (*[unsafe.Sizeof(fastrandseed)]byte)(unsafe.Pointer(&fastrandseed))[:]
817 // Mark gp ready to run.
818 func ready(gp *g, traceskip int, next bool) {
820 traceGoUnpark(gp, traceskip)
823 status := readgstatus(gp)
827 mp := acquirem() // disable preemption because it can be holding p in a local var
828 if status&^_Gscan != _Gwaiting {
830 throw("bad g->status in ready")
833 // status is Gwaiting or Gscanwaiting, make Grunnable and put on runq
834 casgstatus(gp, _Gwaiting, _Grunnable)
835 runqput(_g_.m.p.ptr(), gp, next)
840 // freezeStopWait is a large value that freezetheworld sets
841 // sched.stopwait to in order to request that all Gs permanently stop.
842 const freezeStopWait = 0x7fffffff
844 // freezing is set to non-zero if the runtime is trying to freeze the
848 // Similar to stopTheWorld but best-effort and can be called several times.
849 // There is no reverse operation; it is used during crashing.
850 // This function must not lock any mutexes.
851 func freezetheworld() {
852 atomic.Store(&freezing, 1)
853 // stopwait and preemption requests can be lost
854 // due to races with concurrently executing threads,
855 // so try several times
856 for i := 0; i < 5; i++ {
857 // this should tell the scheduler to not start any new goroutines
858 sched.stopwait = freezeStopWait
859 atomic.Store(&sched.gcwaiting, 1)
860 // this should stop running goroutines
862 break // no running goroutines
872 // All reads and writes of g's status go through readgstatus, casgstatus,
873 // castogscanstatus, casfrom_Gscanstatus.
875 func readgstatus(gp *g) uint32 {
876 return atomic.Load(&gp.atomicstatus)
879 // The Gscanstatuses are acting like locks and this releases them.
880 // If it proves to be a performance hit we should be able to make these
881 // simple atomic stores but for now we are going to throw if
882 // we see an inconsistent state.
883 func casfrom_Gscanstatus(gp *g, oldval, newval uint32) {
886 // Check that transition is valid.
889 print("runtime: casfrom_Gscanstatus bad oldval gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
891 throw("casfrom_Gscanstatus:top gp->status is not in scan state")
897 if newval == oldval&^_Gscan {
898 success = atomic.Cas(&gp.atomicstatus, oldval, newval)
902 print("runtime: casfrom_Gscanstatus failed gp=", gp, ", oldval=", hex(oldval), ", newval=", hex(newval), "\n")
904 throw("casfrom_Gscanstatus: gp->status is not in scan state")
906 releaseLockRank(lockRankGscan)
909 // This will return false if the gp is not in the expected status and the cas fails.
910 // This acts like a lock acquire, while casfrom_Gscanstatus acts like a lock release.
911 func castogscanstatus(gp *g, oldval, newval uint32) bool {
917 if newval == oldval|_Gscan {
918 r := atomic.Cas(&gp.atomicstatus, oldval, newval)
920 acquireLockRank(lockRankGscan)
926 print("runtime: castogscanstatus oldval=", hex(oldval), " newval=", hex(newval), "\n")
927 throw("castogscanstatus")
931 // If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus
932 // and casfrom_Gscanstatus instead.
933 // casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that
934 // put it in the Gscan state is finished.
936 func casgstatus(gp *g, oldval, newval uint32) {
937 if (oldval&_Gscan != 0) || (newval&_Gscan != 0) || oldval == newval {
939 print("runtime: casgstatus: oldval=", hex(oldval), " newval=", hex(newval), "\n")
940 throw("casgstatus: bad incoming values")
944 acquireLockRank(lockRankGscan)
945 releaseLockRank(lockRankGscan)
947 // See https://golang.org/cl/21503 for justification of the yield delay.
948 const yieldDelay = 5 * 1000
951 // loop if gp->atomicstatus is in a scan state giving
952 // GC time to finish and change the state to oldval.
953 for i := 0; !atomic.Cas(&gp.atomicstatus, oldval, newval); i++ {
954 if oldval == _Gwaiting && gp.atomicstatus == _Grunnable {
955 throw("casgstatus: waiting for Gwaiting but is Grunnable")
958 nextYield = nanotime() + yieldDelay
960 if nanotime() < nextYield {
961 for x := 0; x < 10 && gp.atomicstatus != oldval; x++ {
966 nextYield = nanotime() + yieldDelay/2
970 // Handle tracking for scheduling latencies.
971 if oldval == _Grunning {
972 // Track every 8th time a goroutine transitions out of running.
973 if gp.trackingSeq%gTrackingPeriod == 0 {
980 if oldval == _Grunnable {
981 // We transitioned out of runnable, so measure how much
982 // time we spent in this state and add it to
984 gp.runnableTime += now - gp.runnableStamp
987 if newval == _Grunnable {
988 // We just transitioned into runnable, so record what
989 // time that happened.
990 gp.runnableStamp = now
991 } else if newval == _Grunning {
992 // We're transitioning into running, so turn off
993 // tracking and record how much time we spent in
996 sched.timeToRun.record(gp.runnableTime)
1002 // casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable.
1003 // Returns old status. Cannot call casgstatus directly, because we are racing with an
1004 // async wakeup that might come in from netpoll. If we see Gwaiting from the readgstatus,
1005 // it might have become Grunnable by the time we get to the cas. If we called casgstatus,
1006 // it would loop waiting for the status to go back to Gwaiting, which it never will.
1008 func casgcopystack(gp *g) uint32 {
1010 oldstatus := readgstatus(gp) &^ _Gscan
1011 if oldstatus != _Gwaiting && oldstatus != _Grunnable {
1012 throw("copystack: bad status, not Gwaiting or Grunnable")
1014 if atomic.Cas(&gp.atomicstatus, oldstatus, _Gcopystack) {
1020 // casGToPreemptScan transitions gp from _Grunning to _Gscan|_Gpreempted.
1022 // TODO(austin): This is the only status operation that both changes
1023 // the status and locks the _Gscan bit. Rethink this.
1024 func casGToPreemptScan(gp *g, old, new uint32) {
1025 if old != _Grunning || new != _Gscan|_Gpreempted {
1026 throw("bad g transition")
1028 acquireLockRank(lockRankGscan)
1029 for !atomic.Cas(&gp.atomicstatus, _Grunning, _Gscan|_Gpreempted) {
1033 // casGFromPreempted attempts to transition gp from _Gpreempted to
1034 // _Gwaiting. If successful, the caller is responsible for
1035 // re-scheduling gp.
1036 func casGFromPreempted(gp *g, old, new uint32) bool {
1037 if old != _Gpreempted || new != _Gwaiting {
1038 throw("bad g transition")
1040 return atomic.Cas(&gp.atomicstatus, _Gpreempted, _Gwaiting)
1043 // stopTheWorld stops all P's from executing goroutines, interrupting
1044 // all goroutines at GC safe points and recording reason as the reason
1045 // for the stop. On return, only the current goroutine's P is running.
1046 // stopTheWorld must not be called from a system stack and the caller
1047 // must not hold worldsema. The caller must call startTheWorld when
1048 // other P's should resume execution.
1050 // stopTheWorld is safe for multiple goroutines to call at the
1051 // same time. Each will execute its own stop, and the stops will
1054 // This is also used by routines that do stack dumps. If the system is
1055 // in panic or being exited, this may not reliably stop all
1057 func stopTheWorld(reason string) {
1058 semacquire(&worldsema)
1060 gp.m.preemptoff = reason
1061 systemstack(func() {
1062 // Mark the goroutine which called stopTheWorld preemptible so its
1063 // stack may be scanned.
1064 // This lets a mark worker scan us while we try to stop the world
1065 // since otherwise we could get in a mutual preemption deadlock.
1066 // We must not modify anything on the G stack because a stack shrink
1067 // may occur. A stack shrink is otherwise OK though because in order
1068 // to return from this function (and to leave the system stack) we
1069 // must have preempted all goroutines, including any attempting
1070 // to scan our stack, in which case, any stack shrinking will
1071 // have already completed by the time we exit.
1072 casgstatus(gp, _Grunning, _Gwaiting)
1073 stopTheWorldWithSema()
1074 casgstatus(gp, _Gwaiting, _Grunning)
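// A typical (illustrative) caller pairs the two calls around a section that
// must observe a quiescent world:
//
//	stopTheWorld("example: inspect state")
//	// ... read or mutate state that no other P may touch concurrently ...
//	startTheWorld()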
1078 // startTheWorld undoes the effects of stopTheWorld.
1079 func startTheWorld() {
1080 systemstack(func() { startTheWorldWithSema(false) })
1082 // worldsema must be held over startTheWorldWithSema to ensure
1083 // gomaxprocs cannot change while worldsema is held.
1085 // Release worldsema with direct handoff to the next waiter, but
1086 // acquirem so that semrelease1 doesn't try to yield our time.
1088 // Otherwise if e.g. ReadMemStats is being called in a loop,
1089 // it might stomp on other attempts to stop the world, such as
1090 // for starting or ending GC. The operation this blocks is
1091 // so heavy-weight that we should just try to be as fair as
1094 // We don't want to just allow us to get preempted between now
1095 // and releasing the semaphore because then we keep everyone
1096 // (including, for example, GCs) waiting longer.
1099 semrelease1(&worldsema, true, 0)
1103 // stopTheWorldGC has the same effect as stopTheWorld, but blocks
1104 // until the GC is not running. It also blocks a GC from starting
1105 // until startTheWorldGC is called.
1106 func stopTheWorldGC(reason string) {
1108 stopTheWorld(reason)
1111 // startTheWorldGC undoes the effects of stopTheWorldGC.
1112 func startTheWorldGC() {
1117 // Holding worldsema grants an M the right to try to stop the world.
1118 var worldsema uint32 = 1
1120 // Holding gcsema grants the M the right to block a GC, and blocks
1121 // until the current GC is done. In particular, it prevents gomaxprocs
1122 // from changing concurrently.
1124 // TODO(mknyszek): Once gomaxprocs and the execution tracer can handle
1125 // being changed/enabled during a GC, remove this.
1126 var gcsema uint32 = 1
1128 // stopTheWorldWithSema is the core implementation of stopTheWorld.
1129 // The caller is responsible for acquiring worldsema and disabling
1130 // preemption first and then should call stopTheWorldWithSema on the system
1133 // semacquire(&worldsema, 0)
1134 // m.preemptoff = "reason"
1135 // systemstack(stopTheWorldWithSema)
1137 // When finished, the caller must either call startTheWorld or undo
1138 // these three operations separately:
1140 // m.preemptoff = ""
1141 // systemstack(startTheWorldWithSema)
1142 // semrelease(&worldsema)
1144 // It is allowed to acquire worldsema once and then execute multiple
1145 // startTheWorldWithSema/stopTheWorldWithSema pairs.
1146 // Other P's are able to execute between successive calls to
1147 // startTheWorldWithSema and stopTheWorldWithSema.
1148 // Holding worldsema causes any other goroutines invoking
1149 // stopTheWorld to block.
1150 func stopTheWorldWithSema() {
1153 // If we hold a lock, then we won't be able to stop another M
1154 // that is blocked trying to acquire the lock.
1155 if _g_.m.locks > 0 {
1156 throw("stopTheWorld: holding locks")
1160 sched.stopwait = gomaxprocs
1161 atomic.Store(&sched.gcwaiting, 1)
1164 _g_.m.p.ptr().status = _Pgcstop // Pgcstop is only diagnostic.
1166 // try to retake all P's in Psyscall status
1167 for _, p := range allp {
1169 if s == _Psyscall && atomic.Cas(&p.status, s, _Pgcstop) {
1187 wait := sched.stopwait > 0
1190 // wait for remaining P's to stop voluntarily
1193 // wait for 100us, then try to re-preempt in case of any races
1194 if notetsleep(&sched.stopnote, 100*1000) {
1195 noteclear(&sched.stopnote)
1204 if sched.stopwait != 0 {
1205 bad = "stopTheWorld: not stopped (stopwait != 0)"
1207 for _, p := range allp {
1208 if p.status != _Pgcstop {
1209 bad = "stopTheWorld: not stopped (status != _Pgcstop)"
1213 if atomic.Load(&freezing) != 0 {
1214 // Some other thread is panicking. This can cause the
1215 // sanity checks above to fail if the panic happens in
1216 // the signal handler on a stopped thread. Either way,
1217 // we should halt this thread.
1228 func startTheWorldWithSema(emitTraceEvent bool) int64 {
1229 assertWorldStopped()
1231 mp := acquirem() // disable preemption because it can be holding p in a local var
1232 if netpollinited() {
1233 list := netpoll(0) // non-blocking
1243 p1 := procresize(procs)
1245 if sched.sysmonwait != 0 {
1246 sched.sysmonwait = 0
1247 notewakeup(&sched.sysmonnote)
1260 throw("startTheWorld: inconsistent mp->nextp")
1263 notewakeup(&mp.park)
1265 // Start M to run P. Do not start another M below.
1270 // Capture start-the-world time before doing clean-up tasks.
1271 startTime := nanotime()
1276 // Wake up an additional proc in case we have excessive runnable goroutines
1277 // in local queues or in the global queue. If we don't, the proc will park itself.
1278 // If we have lots of excessive work, resetspinning will unpark additional procs as necessary.
1286 // usesLibcall indicates whether this runtime performs system calls
1288 func usesLibcall() bool {
1290 case "aix", "darwin", "illumos", "ios", "solaris", "windows":
1293 return GOARCH == "386" || GOARCH == "amd64" || GOARCH == "arm" || GOARCH == "arm64"
1298 // mStackIsSystemAllocated indicates whether this runtime starts on a
1299 // system-allocated stack.
1300 func mStackIsSystemAllocated() bool {
1302 case "aix", "darwin", "plan9", "illumos", "ios", "solaris", "windows":
1306 case "386", "amd64", "arm", "arm64":
1313 // mstart is the entry-point for new Ms.
1314 // It is written in assembly, uses ABI0, is marked TOPFRAME, and calls mstart0.
1317 // mstart0 is the Go entry-point for new Ms.
1318 // This must not split the stack because we may not even have stack
1319 // bounds set up yet.
1321 // May run during STW (because it doesn't have a P yet), so write
1322 // barriers are not allowed.
1325 //go:nowritebarrierrec
1329 osStack := _g_.stack.lo == 0
1331 // Initialize stack bounds from system stack.
1332 // Cgo may have left stack size in stack.hi.
1333 // minit may update the stack bounds.
1335 // Note: these bounds may not be very accurate.
1336 // We set hi to &size, but there are things above
1337 // it. The 1024 is supposed to compensate for this,
1338 // but is somewhat arbitrary.
1339 size := _g_.stack.hi
1341 size = 8192 * sys.StackGuardMultiplier
1343 _g_.stack.hi = uintptr(noescape(unsafe.Pointer(&size)))
1344 _g_.stack.lo = _g_.stack.hi - size + 1024
1346 // Initialize stack guard so that we can start calling regular
1348 _g_.stackguard0 = _g_.stack.lo + _StackGuard
1349 // This is the g0, so we can also call go:systemstack
1350 // functions, which check stackguard1.
1351 _g_.stackguard1 = _g_.stackguard0
1354 // Exit this thread.
1355 if mStackIsSystemAllocated() {
1356 // Windows, Solaris, illumos, Darwin, AIX and Plan 9 always system-allocate
1357 // the stack, but put it in _g_.stack before mstart,
1358 // so the logic above hasn't set osStack yet.
1364 // The go:noinline is to guarantee the getcallerpc/getcallersp below are safe,
1365 // so that we can set up g0.sched to return to the call of mstart1 above.
1370 if _g_ != _g_.m.g0 {
1371 throw("bad runtime·mstart")
1374 // Set up m.g0.sched as a label returning to just
1375 // after the mstart1 call in mstart0 above, for use by goexit0 and mcall.
1376 // We're never coming back to mstart1 after we call schedule,
1377 // so other calls can reuse the current frame.
1378 // And goexit0 does a gogo that needs to return from mstart1
1379 // and let mstart0 exit the thread.
1380 _g_.sched.g = guintptr(unsafe.Pointer(_g_))
1381 _g_.sched.pc = getcallerpc()
1382 _g_.sched.sp = getcallersp()
1387 // Install signal handlers; after minit so that minit can
1388 // prepare the thread to be able to handle the signals.
1393 if fn := _g_.m.mstartfn; fn != nil {
1398 acquirep(_g_.m.nextp.ptr())
1404 // mstartm0 implements part of mstart1 that only runs on the m0.
1406 // Write barriers are allowed here because we know the GC can't be
1407 // running yet, so they'll be no-ops.
1409 //go:yeswritebarrierrec
1411 // Create an extra M for callbacks on threads not created by Go.
1412 // An extra M is also needed on Windows for callbacks created by
1413 // syscall.NewCallback. See issue #6751 for details.
1414 if (iscgo || GOOS == "windows") && !cgoHasExtraM {
1421 // mPark causes a thread to park itself - temporarily waking for
1422 // fixups but otherwise waiting to be fully woken. This is the
1423 // only way that m's should park themselves.
1428 notesleep(&g.m.park)
1429 // Note, because of signal handling by this parked m,
1430 // a preemptive mDoFixup() may actually occur via
1431 // mDoFixupAndOSYield(). (See golang.org/issue/44193)
1432 noteclear(&g.m.park)
1439 // mexit tears down and exits the current thread.
1441 // Don't call this directly to exit the thread, since it must run at
1442 // the top of the thread stack. Instead, use gogo(&_g_.m.g0.sched) to
1443 // unwind the stack to the point that exits the thread.
1445 // It is entered with m.p != nil, so write barriers are allowed. It
1446 // will release the P before exiting.
1448 //go:yeswritebarrierrec
1449 func mexit(osStack bool) {
1454 // This is the main thread. Just wedge it.
1456 // On Linux, exiting the main thread puts the process
1457 // into a non-waitable zombie state. On Plan 9,
1458 // exiting the main thread unblocks wait even though
1459 // other threads are still running. On Solaris we can
1460 // neither exitThread nor return from mstart. Other
1461 // bad things probably happen on other platforms.
1463 // We could try to clean up this M more before wedging
1464 // it, but that complicates signal handling.
1465 handoffp(releasep())
1471 throw("locked m0 woke up")
1477 // Free the gsignal stack.
1478 if m.gsignal != nil {
1479 stackfree(m.gsignal.stack)
1480 // On some platforms, when calling into VDSO (e.g. nanotime)
1481 // we store our g on the gsignal stack, if there is one.
1482 // Now the stack is freed, unlink it from the m, so we
1483 // won't write to it when calling VDSO code.
1487 // Remove m from allm.
1489 for pprev := &allm; *pprev != nil; pprev = &(*pprev).alllink {
1495 throw("m not found in allm")
1498 // Delay reaping m until it's done with the stack.
1500 // If this is using an OS stack, the OS will free it
1501 // so there's no need for reaping.
1502 atomic.Store(&m.freeWait, 1)
1503 // Put m on the free list, though it will not be reaped until
1504 // freeWait is 0. Note that the free list must not be linked
1505 // through alllink because some functions walk allm without
1506 // locking, so may be using alllink.
1507 m.freelink = sched.freem
1513 handoffp(releasep())
1514 // After this point we must not have write barriers.
1516 // Invoke the deadlock detector. This must happen after
1517 // handoffp because it may have started a new M to take our
1524 if GOOS == "darwin" || GOOS == "ios" {
1525 // Make sure pendingPreemptSignals is correct when an M exits.
1527 if atomic.Load(&m.signalPending) != 0 {
1528 atomic.Xadd(&pendingPreemptSignals, -1)
1532 // Destroy all allocated resources. After this is called, we may no
1533 // longer take any locks.
1537 // Return from mstart and let the system thread
1538 // library free the g0 stack and terminate the thread.
1542 // mstart is the thread's entry point, so there's nothing to
1543 // return to. Exit the thread directly. exitThread will clear
1544 // m.freeWait when it's done with the stack and the m can be
1546 exitThread(&m.freeWait)
1549 // forEachP calls fn(p) for every P p when p reaches a GC safe point.
1550 // If a P is currently executing code, this will bring the P to a GC
1551 // safe point and execute fn on that P. If the P is not executing code
1552 // (it is idle or in a syscall), this will call fn(p) directly while
1553 // preventing the P from exiting its state. This does not ensure that
1554 // fn will run on every CPU executing Go code, but it acts as a global
1555 // memory barrier. GC uses this as a "ragged barrier."
1557 // The caller must hold worldsema.
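// An illustrative use is flushing some per-P state on every P, where
// flushLocalState is a hypothetical helper:
//
//	semacquire(&worldsema)
//	forEachP(func(pp *p) {
//		flushLocalState(pp)
//	})
//	semrelease(&worldsema)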
1560 func forEachP(fn func(*p)) {
1562 _p_ := getg().m.p.ptr()
1565 if sched.safePointWait != 0 {
1566 throw("forEachP: sched.safePointWait != 0")
1568 sched.safePointWait = gomaxprocs - 1
1569 sched.safePointFn = fn
1571 // Ask all Ps to run the safe point function.
1572 for _, p := range allp {
1574 atomic.Store(&p.runSafePointFn, 1)
1579 // Any P entering _Pidle or _Psyscall from now on will observe
1580 // p.runSafePointFn == 1 and will call runSafePointFn when
1581 // changing its status to _Pidle/_Psyscall.
1583 // Run safe point function for all idle Ps. sched.pidle will
1584 // not change because we hold sched.lock.
1585 for p := sched.pidle.ptr(); p != nil; p = p.link.ptr() {
1586 if atomic.Cas(&p.runSafePointFn, 1, 0) {
1588 sched.safePointWait--
1592 wait := sched.safePointWait > 0
1595 // Run fn for the current P.
1598 // Force Ps currently in _Psyscall into _Pidle and hand them
1599 // off to induce safe point function execution.
1600 for _, p := range allp {
1602 if s == _Psyscall && p.runSafePointFn == 1 && atomic.Cas(&p.status, s, _Pidle) {
1612 // Wait for remaining Ps to run fn.
1615 // Wait for 100us, then try to re-preempt in
1616 // case of any races.
1618 // Requires system stack.
1619 if notetsleep(&sched.safePointNote, 100*1000) {
1620 noteclear(&sched.safePointNote)
1626 if sched.safePointWait != 0 {
1627 throw("forEachP: not done")
1629 for _, p := range allp {
1630 if p.runSafePointFn != 0 {
1631 throw("forEachP: P did not run fn")
1636 sched.safePointFn = nil
1641 // syscall_runtime_doAllThreadsSyscall serializes Go execution and
1642 // executes a specified fn() call on all m's.
1644 // The boolean argument to fn() indicates whether the function's
1645 // return value will be consulted or not. That is, fn(true) should
1646 // return true if fn() succeeds, and fn(true) should return false if
1647 // it failed. When fn(false) is called, its return status will be
1650 // syscall_runtime_doAllThreadsSyscall first invokes fn(true) on a
1651 // single, coordinating, m, and only if it returns true does it go on
1652 // to invoke fn(false) on all of the other m's known to the process.
1654 //go:linkname syscall_runtime_doAllThreadsSyscall syscall.runtime_doAllThreadsSyscall
1655 func syscall_runtime_doAllThreadsSyscall(fn func(bool) bool) {
1657 panic("doAllThreadsSyscall not supported with cgo enabled")
1662 for atomic.Load(&sched.sysmonStarting) != 0 {
1666 // We don't want this thread to handle signals for the
1667 // duration of this critical section. The underlying issue
1668 // being that this locked coordinating m is the one monitoring
1669 // for fn() execution by all the other m's of the runtime,
1670 // while no regular go code execution is permitted (the world
1671 // is stopped). If this present m were to get distracted to
1672 // run signal handling code, and find itself waiting for a
1673 // second thread to execute go code before being able to
1674 // return from that signal handling, a deadlock will result.
1675 // (See golang.org/issue/44193.)
1681 stopTheWorldGC("doAllThreadsSyscall")
1682 if atomic.Load(&newmHandoff.haveTemplateThread) != 0 {
1683 // Ensure that there are no in-flight thread
1684 // creations: don't want to race with allm.
1685 lock(&newmHandoff.lock)
1686 for !newmHandoff.waiting {
1687 unlock(&newmHandoff.lock)
1689 lock(&newmHandoff.lock)
1691 unlock(&newmHandoff.lock)
1693 if netpollinited() {
1696 sigRecvPrepareForFixup()
1699 // For m's running without racectx, we loan out the
1700 // racectx of this call.
1701 lock(&mFixupRace.lock)
1702 mFixupRace.ctx = _g_.racectx
1703 unlock(&mFixupRace.lock)
1705 if ok := fn(true); ok {
1707 for mp := allm; mp != nil; mp = mp.alllink {
1708 if mp.procid == tid {
1709 // This m has already completed fn()
1713 // Be wary of mp's without procid values if
1714 // they are known not to park. If they are
1715 // marked as parking with a zero procid, then
1716 // they will be racing with this code to be
1717 // allocated a procid and we will annotate
1718 // them with the need to execute the fn when
1719 // they acquire a procid to run it.
1720 if mp.procid == 0 && !mp.doesPark {
1721 // Reaching here, we are either
1722 // running Windows or cgo-linked
1723 // code, neither of which is
1724 // currently supported by this API.
1725 throw("unsupported runtime environment")
1727 // stopTheWorldGC() doesn't guarantee stopping
1728 // all the threads, so we lock here to avoid
1729 // the possibility of racing with mp.
1730 lock(&mp.mFixup.lock)
1732 atomic.Store(&mp.mFixup.used, 1)
1734 // For non-service threads this will
1735 // cause the wakeup to be short lived
1736 // (once the mutex is unlocked). The
1737 // next real wakeup will occur after
1738 // startTheWorldGC() is called.
1739 notewakeup(&mp.park)
1741 unlock(&mp.mFixup.lock)
1745 for mp := allm; done && mp != nil; mp = mp.alllink {
1746 if mp.procid == tid {
1749 done = atomic.Load(&mp.mFixup.used) == 0
1754 // If needed, force sysmon and/or newmHandoff to wake up.
1756 if atomic.Load(&sched.sysmonwait) != 0 {
1757 atomic.Store(&sched.sysmonwait, 0)
1758 notewakeup(&sched.sysmonnote)
1761 lock(&newmHandoff.lock)
1762 if newmHandoff.waiting {
1763 newmHandoff.waiting = false
1764 notewakeup(&newmHandoff.wake)
1766 unlock(&newmHandoff.lock)
1771 lock(&mFixupRace.lock)
1773 unlock(&mFixupRace.lock)
1776 msigrestore(sigmask)
1780 // runSafePointFn runs the safe point function, if any, for this P.
1781 // This should be called like
1783 // if getg().m.p.runSafePointFn != 0 {
1787 // runSafePointFn must be checked on any transition in to _Pidle or
1788 // _Psyscall to avoid a race where forEachP sees that the P is running
1789 // just before the P goes into _Pidle/_Psyscall and neither forEachP
1790 // nor the P run the safe-point function.
1791 func runSafePointFn() {
1792 p := getg().m.p.ptr()
1793 // Resolve the race between forEachP running the safe-point
1794 // function on this P's behalf and this P running the
1795 // safe-point function directly.
1796 if !atomic.Cas(&p.runSafePointFn, 1, 0) {
1799 sched.safePointFn(p)
1801 sched.safePointWait--
1802 if sched.safePointWait == 0 {
1803 notewakeup(&sched.safePointNote)
1808 // When running with cgo, we call _cgo_thread_start
1809 // to start threads for us so that we can play nicely with
1811 var cgoThreadStart unsafe.Pointer
1813 type cgothreadstart struct {
1819 // Allocate a new m unassociated with any thread.
1820 // Can use p for allocation context if needed.
1821 // fn is recorded as the new m's m.mstartfn.
1822 // id is optional pre-allocated m ID. Omit by passing -1.
1824 // This function is allowed to have write barriers even if the caller
1825 // isn't because it borrows _p_.
1827 //go:yeswritebarrierrec
1828 func allocm(_p_ *p, fn func(), id int64) *m {
1830 acquirem() // disable GC because it can be called from sysmon
1832 acquirep(_p_) // temporarily borrow p for mallocs in this function
1835 // Release the free M list. We need to do this somewhere and
1836 // this may free up a stack we can use.
1837 if sched.freem != nil {
1840 for freem := sched.freem; freem != nil; {
1841 if freem.freeWait != 0 {
1842 next := freem.freelink
1843 freem.freelink = newList
1848 // stackfree must be on the system stack, but allocm is
1849 // reachable off the system stack transitively from
1851 systemstack(func() {
1852 stackfree(freem.g0.stack)
1854 freem = freem.freelink
1856 sched.freem = newList
1864 // In case of cgo or Solaris or illumos or Darwin, pthread_create will make us a stack.
1865 // Windows and Plan 9 will lay out the sched stack on the OS stack.
1866 if iscgo || mStackIsSystemAllocated() {
1869 mp.g0 = malg(8192 * sys.StackGuardMultiplier)
1873 if _p_ == _g_.m.p.ptr() {
1881 // needm is called when a cgo callback happens on a
1882 // thread without an m (a thread not created by Go).
1883 // In this case, needm is expected to find an m to use
1884 // and return with m, g initialized correctly.
1885 // Since m and g are not set now (likely nil, but see below)
1886 // needm is limited in what routines it can call. In particular
1887 // it can only call nosplit functions (textflag 7) and cannot
1888 // do any scheduling that requires an m.
1890 // In order to avoid needing heavy lifting here, we adopt
1891 // the following strategy: there is a stack of available m's
1892 // that can be stolen. Using compare-and-swap
1893 // to pop from the stack has ABA races, so we simulate
1894 // a lock by doing an exchange (via Casuintptr) to steal the stack
1895 // head and replace the top pointer with MLOCKED (1).
1896 // This serves as a simple spin lock that we can use even
1897 // without an m. The thread that locks the stack in this way
1898 // unlocks the stack by storing a valid stack head pointer.
1900 // In order to make sure that there is always an m structure
1901 // available to be stolen, we maintain the invariant that there
1902 // is always one more than needed. At the beginning of the
1903 // program (if cgo is in use) the list is seeded with a single m.
1904 // If needm finds that it has taken the last m off the list, its job
1905 // is - once it has installed its own m so that it can do things like
1906 // allocate memory - to create a spare m and put it on the list.
1908 // Each of these extra m's also has a g0 and a curg that are
1909 // pressed into service as the scheduling stack and current
1910 // goroutine for the duration of the cgo callback.
1912 // When the callback is done with the m, it calls dropm to
1913 // put the m back on the list.
1916 if (iscgo || GOOS == "windows") && !cgoHasExtraM {
1917 // Can happen if C/C++ code calls Go from a global ctor.
1918 // Can also happen on Windows if a global ctor uses a
1919 // callback created by syscall.NewCallback. See issue #6751
1922 // Cannot throw, because the scheduler is not initialized yet.
1923 write(2, unsafe.Pointer(&earlycgocallback[0]), int32(len(earlycgocallback)))
1927 // Save and block signals before getting an M.
1928 // The signal handler may call needm itself,
1929 // and we must avoid a deadlock. Also, once g is installed,
1930 // any incoming signals will try to execute,
1931 // but we won't have the sigaltstack settings and other data
1932 // set up appropriately until the end of minit, which will
1933 // unblock the signals. This is the same dance as when
1934 // starting a new m to run Go code via newosproc.
1939 // Lock extra list, take head, unlock popped list.
1940 // nilokay=false is safe here because of the invariant above,
1941 // that the extra list always contains or will soon contain
1943 mp := lockextra(false)
1945 // Set needextram when we've just emptied the list,
1946 // so that the eventual call into cgocallbackg will
1947 // allocate a new m for the extra list. We delay the
1948 // allocation until then so that it can be done
1949 // after exitsyscall makes sure it is okay to be
1950 // running at all (that is, there's no garbage collection
1951 // running right now).
1952 mp.needextram = mp.schedlink == 0
1954 unlockextra(mp.schedlink.ptr())
1956 // Store the original signal mask for use by minit.
1957 mp.sigmask = sigmask
1959 // Install TLS on some platforms (previously setg
1960 // would do this if necessary).
1963 // Install g (= m->g0) and set the stack bounds
1964 // to match the current stack. We don't actually know
1965 // how big the stack is, like we don't know how big any
1966 // scheduling stack is, but we assume there's at least 32 kB,
1967 // which is more than enough for us.
1970 _g_.stack.hi = getcallersp() + 1024
1971 _g_.stack.lo = getcallersp() - 32*1024
1972 _g_.stackguard0 = _g_.stack.lo + _StackGuard
1974 // Initialize this thread to use the m.
1978 // mp.curg is now a real goroutine.
1979 casgstatus(mp.curg, _Gdead, _Gsyscall)
1980 atomic.Xadd(&sched.ngsys, -1)
1983 var earlycgocallback = []byte("fatal error: cgo callback before cgo call\n")
1985 // newextram allocates m's and puts them on the extra list.
1986 // It is called with a working local m, so that it can do things
1987 // like call schedlock and allocate.
1989 c := atomic.Xchg(&extraMWaiters, 0)
1991 for i := uint32(0); i < c; i++ {
1995 // Make sure there is at least one extra M.
1996 mp := lockextra(true)
2004 // oneNewExtraM allocates an m and puts it on the extra list.
2005 func oneNewExtraM() {
2006 // Create extra goroutine locked to extra m.
2007 // The goroutine is the context in which the cgo callback will run.
2008 // The sched.pc will never be returned to, but setting it to
2009 // goexit makes clear to the traceback routines where
2010 // the goroutine stack ends.
2011 mp := allocm(nil, nil, -1)
2013 gp.sched.pc = abi.FuncPCABI0(goexit) + sys.PCQuantum
2014 gp.sched.sp = gp.stack.hi
2015 gp.sched.sp -= 4 * sys.PtrSize // extra space in case of reads slightly beyond frame
2017 gp.sched.g = guintptr(unsafe.Pointer(gp))
2018 gp.syscallpc = gp.sched.pc
2019 gp.syscallsp = gp.sched.sp
2020 gp.stktopsp = gp.sched.sp
2021 // malg returns status as _Gidle. Change to _Gdead before
2022 // adding to allg where GC can see it. We use _Gdead to hide
2023 // this from tracebacks and stack scans since it isn't a
2024 // "real" goroutine until needm grabs it.
2025 casgstatus(gp, _Gidle, _Gdead)
2031 gp.goid = int64(atomic.Xadd64(&sched.goidgen, 1))
2033 gp.racectx = racegostart(abi.FuncPCABIInternal(newextram) + sys.PCQuantum)
2035 // put on allg for garbage collector
2038 // gp is now on the allg list, but we don't want it to be
2039 // counted by gcount. It would be more "proper" to increment
2040 // sched.ngfree, but that requires locking. Incrementing ngsys
2041 // has the same effect.
2042 atomic.Xadd(&sched.ngsys, +1)
2044 // Add m to the extra list.
2045 mnext := lockextra(true)
2046 mp.schedlink.set(mnext)
2051 // dropm is called when a cgo callback has called needm but is now
2052 // done with the callback and returning back into the non-Go thread.
2053 // It puts the current m back onto the extra list.
2055 // The main expense here is the call to signalstack to release the
2056 // m's signal stack, and then the call to needm on the next callback
2057 // from this thread. It is tempting to try to save the m for next time,
2058 // which would eliminate both these costs, but there might not be
2059 // a next time: the current thread (which Go does not control) might exit.
2060 // If we saved the m for that thread, there would be an m leak each time
2061 // such a thread exited. Instead, we acquire and release an m on each
2062 // call. These should typically not be scheduling operations, just a few
2063 // atomics, so the cost should be small.
2065 // TODO(rsc): An alternative would be to allocate a dummy pthread per-thread
2066 // variable using pthread_key_create. Unlike the pthread keys we already use
2067 // on OS X, this dummy key would never be read by Go code. It would exist
2068 // only so that we could register a thread-exit-time destructor.
2069 // That destructor would put the m back onto the extra list.
2070 // This is purely a performance optimization. The current version,
2071 // in which dropm happens on each cgo call, is still correct too.
2072 // We may have to keep the current version on systems with cgo
2073 // but without pthreads, like Windows.
2075 // Clear m and g, and return m to the extra list.
2076 // After the call to setg we can only call nosplit functions
2077 // with no pointer manipulation.
2080 // Return mp.curg to dead state.
2081 casgstatus(mp.curg, _Gsyscall, _Gdead)
2082 mp.curg.preemptStop = false
2083 atomic.Xadd(&sched.ngsys, +1)
2085 // Block signals before unminit.
2086 // Unminit unregisters the signal handling stack (but needs g on some systems).
2087 // Setg(nil) clears g, which is the signal handler's cue not to run Go handlers.
2088 // It's important not to try to handle a signal between those two steps.
2089 sigmask := mp.sigmask
2093 mnext := lockextra(true)
2095 mp.schedlink.set(mnext)
2099 // Commit the release of mp.
2102 msigrestore(sigmask)
2105 // A helper function for EnsureDropM.
2106 func getm() uintptr {
2107 return uintptr(unsafe.Pointer(getg().m))
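// For illustration only, a rough sketch (not runtime code) of the extra-M
// lifecycle for a cgo callback arriving on a C-created thread:
//
//	needm()  // take an M from the extra list, install g0 stack bounds and
//	         // TLS; mp.curg goes _Gdead -> _Gsyscall
//	         // ... the exported Go function then runs on mp.curg ...
//	dropm()  // mp.curg goes back to _Gdead, signals are blocked, and the
//	         // M is pushed back onto the extra list
//
// so each callback from a non-Go thread acquires and releases one extra M,
// as the dropm comment above explains.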
2111 var extraMCount uint32 // Protected by lockextra
2112 var extraMWaiters uint32
2114 // lockextra locks the extra list and returns the list head.
2115 // The caller must unlock the list by storing a new list head
2116 // to extram. If nilokay is true, then lockextra will
2117 // return a nil list head if that's what it finds. If nilokay is false,
2118 // lockextra will keep waiting until the list head is no longer nil.
2120 func lockextra(nilokay bool) *m {
2125 old := atomic.Loaduintptr(&extram)
2130 if old == 0 && !nilokay {
2132 // Add 1 to the number of threads
2133 // waiting for an M.
2134 // This is cleared by newextram.
2135 atomic.Xadd(&extraMWaiters, 1)
2141 if atomic.Casuintptr(&extram, old, locked) {
2142 return (*m)(unsafe.Pointer(old))
2150 func unlockextra(mp *m) {
2151 atomic.Storeuintptr(&extram, uintptr(unsafe.Pointer(mp)))
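// For illustration only, a simplified sketch (not the actual implementation)
// of the locking protocol described above: the extram word itself acts as the
// lock, with a sentinel value marking it held:
//
//	const locked = 1
//	for {
//		old := atomic.Loaduintptr(&extram)
//		if old == locked {
//			osyield() // another thread holds the list; retry
//			continue
//		}
//		if atomic.Casuintptr(&extram, old, locked) {
//			// old is the list head; storing a new head via
//			// unlockextra releases the lock.
//			break
//		}
//	}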
2154 // execLock serializes exec and clone to avoid bugs or unspecified behaviour
2155 // around exec'ing while creating/destroying threads. See issue #19546.
2156 var execLock rwmutex
2158 // newmHandoff contains a list of m structures that need new OS threads.
2159 // This is used by newm in situations where newm itself can't safely
2160 // start an OS thread.
2161 var newmHandoff struct {
2164 // newm points to a list of M structures that need new OS
2165 // threads. The list is linked through m.schedlink.
2168 // waiting indicates that wake needs to be notified when an m
2169 // is put on the list.
2173 // haveTemplateThread indicates that the templateThread has
2174 // been started. This is not protected by lock. Use cas to set to 1.
2176 haveTemplateThread uint32
2179 // Create a new m. It will start off with a call to fn, or else the scheduler.
2180 // fn needs to be static and not a heap allocated closure.
2181 // May run with m.p==nil, so write barriers are not allowed.
2183 // id is optional pre-allocated m ID. Omit by passing -1.
2184 //go:nowritebarrierrec
2185 func newm(fn func(), _p_ *p, id int64) {
2186 mp := allocm(_p_, fn, id)
2187 mp.doesPark = (_p_ != nil)
2189 mp.sigmask = initSigmask
2190 if gp := getg(); gp != nil && gp.m != nil && (gp.m.lockedExt != 0 || gp.m.incgo) && GOOS != "plan9" {
2191 // We're on a locked M or a thread that may have been
2192 // started by C. The kernel state of this thread may
2193 // be strange (the user may have locked it for that
2194 // purpose). We don't want to clone that into another
2195 // thread. Instead, ask a known-good thread to create
2196 // the thread for us.
2198 // This is disabled on Plan 9. See golang.org/issue/22227.
2200 // TODO: This may be unnecessary on Windows, which
2201 // doesn't model thread creation off fork.
2202 lock(&newmHandoff.lock)
2203 if newmHandoff.haveTemplateThread == 0 {
2204 throw("on a locked thread with no template thread")
2206 mp.schedlink = newmHandoff.newm
2207 newmHandoff.newm.set(mp)
2208 if newmHandoff.waiting {
2209 newmHandoff.waiting = false
2210 notewakeup(&newmHandoff.wake)
2212 unlock(&newmHandoff.lock)
2220 var ts cgothreadstart
2221 if _cgo_thread_start == nil {
2222 throw("_cgo_thread_start missing")
2225 ts.tls = (*uint64)(unsafe.Pointer(&mp.tls[0]))
2226 ts.fn = unsafe.Pointer(abi.FuncPCABI0(mstart))
2228 msanwrite(unsafe.Pointer(&ts), unsafe.Sizeof(ts))
2230 execLock.rlock() // Prevent process clone.
2231 asmcgocall(_cgo_thread_start, unsafe.Pointer(&ts))
2235 execLock.rlock() // Prevent process clone.
2240 // startTemplateThread starts the template thread if it is not already running.
2243 // The calling thread must itself be in a known-good state.
2244 func startTemplateThread() {
2245 if GOARCH == "wasm" { // no threads on wasm yet
2249 // Disable preemption to guarantee that the template thread will be
2250 // created before a park once haveTemplateThread is set.
2252 if !atomic.Cas(&newmHandoff.haveTemplateThread, 0, 1) {
2256 newm(templateThread, nil, -1)
2260 // mFixupRace is used to temporarily borrow the race context from the
2261 // coordinating m during a syscall_runtime_doAllThreadsSyscall and
2262 // loan it out to each of the m's of the runtime so they can execute a
2263 // mFixup.fn in that context.
2264 var mFixupRace struct {
2269 // mDoFixup runs any outstanding fixup function for the running m.
2270 // Returns true if a fixup was outstanding and actually executed.
2272 // Note: to avoid deadlocks, and the need for the fixup function
2273 // itself to be async safe, signals are blocked for the working m
2274 // while it holds the mFixup lock. (See golang.org/issue/44193)
2277 func mDoFixup() bool {
2279 if used := atomic.Load(&_g_.m.mFixup.used); used == 0 {
2283 // slow path - if fixup fn is used, block signals and lock.
2287 lock(&_g_.m.mFixup.lock)
2288 fn := _g_.m.mFixup.fn
2290 if gcphase != _GCoff {
2291 // We can't have a write barrier in this
2292 // context since we may not have a P, but we
2293 // clear fn to signal that we've executed the
2294 // fixup. As long as fn is kept alive
2295 // elsewhere, technically we should have no
2296 // issues with the GC, but fn is likely
2297 // generated in a different package altogether
2298 // that may change independently. Just assert
2299 // the GC is off so this lack of write barrier
2300 // is more obviously safe.
2301 throw("GC must be disabled to protect validity of fn value")
2303 if _g_.racectx != 0 || !raceenabled {
2306 // temporarily acquire the context of the
2307 // originator of the
2308 // syscall_runtime_doAllThreadsSyscall and
2309 // block others from using it for the duration
2310 // of the fixup call.
2311 lock(&mFixupRace.lock)
2312 _g_.racectx = mFixupRace.ctx
2315 unlock(&mFixupRace.lock)
2317 *(*uintptr)(unsafe.Pointer(&_g_.m.mFixup.fn)) = 0
2318 atomic.Store(&_g_.m.mFixup.used, 0)
2320 unlock(&_g_.m.mFixup.lock)
2321 msigrestore(sigmask)
2325 // mDoFixupAndOSYield is called when an m is unable to send a signal
2326 // because the allThreadsSyscall mechanism is in progress. That is, an
2327 // mPark() has been interrupted with this signal handler so we need to
2328 // ensure the fixup is executed from this context.
2330 func mDoFixupAndOSYield() {
2335 // templateThread is a thread in a known-good state that exists solely
2336 // to start new threads in known-good states when the calling thread
2337 // may not be in a good state.
2339 // Many programs never need this, so templateThread is started lazily
2340 // when we first enter a state that might lead to running on a thread
2341 // in an unknown state.
2343 // templateThread runs on an M without a P, so it must not have write barriers.
2346 //go:nowritebarrierrec
2347 func templateThread() {
2354 lock(&newmHandoff.lock)
2355 for newmHandoff.newm != 0 {
2356 newm := newmHandoff.newm.ptr()
2357 newmHandoff.newm = 0
2358 unlock(&newmHandoff.lock)
2360 next := newm.schedlink.ptr()
2365 lock(&newmHandoff.lock)
2367 newmHandoff.waiting = true
2368 noteclear(&newmHandoff.wake)
2369 unlock(&newmHandoff.lock)
2370 notesleep(&newmHandoff.wake)
2375 // Stops execution of the current m until new work is available.
2376 // Returns with acquired P.
2380 if _g_.m.locks != 0 {
2381 throw("stopm holding locks")
2384 throw("stopm holding p")
2387 throw("stopm spinning")
2394 acquirep(_g_.m.nextp.ptr())
2399 // startm's caller incremented nmspinning. Set the new M's spinning.
2400 getg().m.spinning = true
2403 // Schedules some M to run the p (creates an M if necessary).
2404 // If p==nil, tries to get an idle P, if no idle P's does nothing.
2405 // May run with m.p==nil, so write barriers are not allowed.
2406 // If spinning is set, the caller has incremented nmspinning and startm will
2407 // either decrement nmspinning or set m.spinning in the newly started M.
2409 // Callers passing a non-nil P must call from a non-preemptible context. See
2410 // comment on acquirem below.
2412 // Must not have write barriers because this may be called without a P.
2413 //go:nowritebarrierrec
2414 func startm(_p_ *p, spinning bool) {
2415 // Disable preemption.
2417 // Every owned P must have an owner that will eventually stop it in the
2418 // event of a GC stop request. startm takes transient ownership of a P
2419 // (either from argument or pidleget below) and transfers ownership to
2420 // a started M, which will be responsible for performing the stop.
2422 // Preemption must be disabled during this transient ownership,
2423 // otherwise the P this is running on may enter GC stop while still
2424 // holding the transient P, leaving that P in limbo and deadlocking the stop.
2427 // Callers passing a non-nil P must already be in non-preemptible
2428 // context, otherwise such preemption could occur on function entry to
2429 // startm. Callers passing a nil P may be preemptible, so we must
2430 // disable preemption before acquiring a P from pidleget below.
2438 // The caller incremented nmspinning, but there are no idle Ps,
2439 // so it's okay to just undo the increment and give up.
2440 if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
2441 throw("startm: negative nmspinning")
2450 // No M is available, we must drop sched.lock and call newm.
2451 // However, we already own a P to assign to the M.
2453 // Once sched.lock is released, another G (e.g., in a syscall),
2454 // could find no idle P while checkdead finds a runnable G but
2455 // no running M's because this new M hasn't started yet, thus
2456 // throwing in an apparent deadlock.
2458 // Avoid this situation by pre-allocating the ID for the new M,
2459 // thus marking it as 'running' before we drop sched.lock. This
2460 // new M will eventually run the scheduler to execute any queued G's.
2467 // The caller incremented nmspinning, so set m.spinning in the new M.
2471 // Ownership transfer of _p_ committed by start in newm.
2472 // Preemption is now safe.
2478 throw("startm: m is spinning")
2481 throw("startm: m has p")
2483 if spinning && !runqempty(_p_) {
2484 throw("startm: p has runnable gs")
2486 // The caller incremented nmspinning, so set m.spinning in the new M.
2487 nmp.spinning = spinning
2489 notewakeup(&nmp.park)
2490 // Ownership transfer of _p_ committed by wakeup. Preemption is now
2495 // Hands off P from syscall or locked M.
2496 // Always runs without a P, so write barriers are not allowed.
2497 //go:nowritebarrierrec
2498 func handoffp(_p_ *p) {
2499 // handoffp must start an M in any situation where
2500 // findrunnable would return a G to run on _p_.
2502 // if it has local work, start it straight away
2503 if !runqempty(_p_) || sched.runqsize != 0 {
2507 // if it has GC work, start it straight away
2508 if gcBlackenEnabled != 0 && gcMarkWorkAvailable(_p_) {
2512 // no local work, check that there are no spinning/idle M's,
2513 // otherwise our help is not required
2514 if atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) == 0 && atomic.Cas(&sched.nmspinning, 0, 1) { // TODO: fast atomic
2519 if sched.gcwaiting != 0 {
2520 _p_.status = _Pgcstop
2522 if sched.stopwait == 0 {
2523 notewakeup(&sched.stopnote)
2528 if _p_.runSafePointFn != 0 && atomic.Cas(&_p_.runSafePointFn, 1, 0) {
2529 sched.safePointFn(_p_)
2530 sched.safePointWait--
2531 if sched.safePointWait == 0 {
2532 notewakeup(&sched.safePointNote)
2535 if sched.runqsize != 0 {
2540 // If this is the last running P and nobody is polling network,
2541 // need to wakeup another M to poll network.
2542 if sched.npidle == uint32(gomaxprocs-1) && atomic.Load64(&sched.lastpoll) != 0 {
2548 // The scheduler lock cannot be held when calling wakeNetPoller below
2549 // because wakeNetPoller may call wakep which may call startm.
2550 when := nobarrierWakeTime(_p_)
2559 // Tries to add one more P to execute G's.
2560 // Called when a G is made runnable (newproc, ready).
2562 if atomic.Load(&sched.npidle) == 0 {
2565 // be conservative about spinning threads
2566 if atomic.Load(&sched.nmspinning) != 0 || !atomic.Cas(&sched.nmspinning, 0, 1) {
2572 // Stops execution of the current m that is locked to a g until the g is runnable again.
2573 // Returns with acquired P.
2574 func stoplockedm() {
2577 if _g_.m.lockedg == 0 || _g_.m.lockedg.ptr().lockedm.ptr() != _g_.m {
2578 throw("stoplockedm: inconsistent locking")
2581 // Schedule another M to run this p.
2586 // Wait until another thread schedules lockedg again.
2588 status := readgstatus(_g_.m.lockedg.ptr())
2589 if status&^_Gscan != _Grunnable {
2590 print("runtime:stoplockedm: lockedg (atomicstatus=", status, ") is not Grunnable or Gscanrunnable\n")
2591 dumpgstatus(_g_.m.lockedg.ptr())
2592 throw("stoplockedm: not runnable")
2594 acquirep(_g_.m.nextp.ptr())
2598 // Schedules the locked m to run the locked gp.
2599 // May run during STW, so write barriers are not allowed.
2600 //go:nowritebarrierrec
2601 func startlockedm(gp *g) {
2604 mp := gp.lockedm.ptr()
2606 throw("startlockedm: locked to me")
2609 throw("startlockedm: m has p")
2611 // directly handoff current P to the locked m
2615 notewakeup(&mp.park)
2619 // Stops the current m for stopTheWorld.
2620 // Returns when the world is restarted.
2624 if sched.gcwaiting == 0 {
2625 throw("gcstopm: not waiting for gc")
2628 _g_.m.spinning = false
2629 // OK to just drop nmspinning here,
2630 // startTheWorld will unpark threads as necessary.
2631 if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
2632 throw("gcstopm: negative nmspinning")
2637 _p_.status = _Pgcstop
2639 if sched.stopwait == 0 {
2640 notewakeup(&sched.stopnote)
2646 // Schedules gp to run on the current M.
2647 // If inheritTime is true, gp inherits the remaining time in the
2648 // current time slice. Otherwise, it starts a new time slice.
2651 // Write barriers are allowed because this is called immediately after
2652 // acquiring a P in several places.
2654 //go:yeswritebarrierrec
2655 func execute(gp *g, inheritTime bool) {
2658 // Assign gp.m before entering _Grunning so running Gs have an
2662 casgstatus(gp, _Grunnable, _Grunning)
2665 gp.stackguard0 = gp.stack.lo + _StackGuard
2667 _g_.m.p.ptr().schedtick++
2670 // Check whether the profiler needs to be turned on or off.
2671 hz := sched.profilehz
2672 if _g_.m.profilehz != hz {
2673 setThreadCPUProfiler(hz)
2677 // GoSysExit has to happen when we have a P, but before GoStart.
2678 // So we emit it here.
2679 if gp.syscallsp != 0 && gp.sysblocktraced {
2680 traceGoSysExit(gp.sysexitticks)
2688 // Finds a runnable goroutine to execute.
2689 // Tries to steal from other P's, get g from local or global queue, poll network.
2690 func findrunnable() (gp *g, inheritTime bool) {
2693 // The conditions here and in handoffp must agree: if
2694 // findrunnable would return a G to run, handoffp must start an M.
2698 _p_ := _g_.m.p.ptr()
2699 if sched.gcwaiting != 0 {
2703 if _p_.runSafePointFn != 0 {
2707 now, pollUntil, _ := checkTimers(_p_, 0)
2709 if fingwait && fingwake {
2710 if gp := wakefing(); gp != nil {
2714 if *cgo_yield != nil {
2715 asmcgocall(*cgo_yield, nil)
2719 if gp, inheritTime := runqget(_p_); gp != nil {
2720 return gp, inheritTime
2724 if sched.runqsize != 0 {
2726 gp := globrunqget(_p_, 0)
2734 // This netpoll is only an optimization before we resort to stealing.
2735 // We can safely skip it if there are no waiters or a thread is blocked
2736 // in netpoll already. If there is any kind of logical race with that
2737 // blocked thread (e.g. it has already returned from netpoll, but does
2738 // not set lastpoll yet), this thread will do blocking netpoll below
2740 if netpollinited() && atomic.Load(&netpollWaiters) > 0 && atomic.Load64(&sched.lastpoll) != 0 {
2741 if list := netpoll(0); !list.empty() { // non-blocking
2744 casgstatus(gp, _Gwaiting, _Grunnable)
2746 traceGoUnpark(gp, 0)
2752 // Spinning Ms: steal work from other Ps.
2754 // Limit the number of spinning Ms to half the number of busy Ps.
2755 // This is necessary to prevent excessive CPU consumption when
2756 // GOMAXPROCS>>1 but the program parallelism is low.
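// For illustration: with GOMAXPROCS=8 and 2 idle Ps there are 6 busy Ps, so
// the check below (2*nmspinning < procs-npidle) lets Ms keep entering the
// spinning state until nmspinning reaches 3, i.e. roughly half the busy Ps.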
2757 procs := uint32(gomaxprocs)
2758 if _g_.m.spinning || 2*atomic.Load(&sched.nmspinning) < procs-atomic.Load(&sched.npidle) {
2759 if !_g_.m.spinning {
2760 _g_.m.spinning = true
2761 atomic.Xadd(&sched.nmspinning, 1)
2764 gp, inheritTime, tnow, w, newWork := stealWork(now)
2767 // Successfully stole.
2768 return gp, inheritTime
2771 // There may be new timer or GC work; restart to
2775 if w != 0 && (pollUntil == 0 || w < pollUntil) {
2776 // Earlier timer to wait for.
2781 // We have nothing to do.
2783 // If we're in the GC mark phase, can safely scan and blacken objects,
2784 // and have work to do, run idle-time marking rather than give up the P.
2786 if gcBlackenEnabled != 0 && gcMarkWorkAvailable(_p_) {
2787 node := (*gcBgMarkWorkerNode)(gcBgMarkWorkerPool.pop())
2789 _p_.gcMarkWorkerMode = gcMarkWorkerIdleMode
2791 casgstatus(gp, _Gwaiting, _Grunnable)
2793 traceGoUnpark(gp, 0)
2800 // If a callback returned and no other goroutine is awake,
2801 // then wake the event handler goroutine, which pauses execution
2802 // until a callback is triggered.
2803 gp, otherReady := beforeIdle(now, pollUntil)
2805 casgstatus(gp, _Gwaiting, _Grunnable)
2807 traceGoUnpark(gp, 0)
2815 // Before we drop our P, make a snapshot of the allp slice,
2816 // which can change underfoot once we no longer block
2817 // safe-points. We don't need to snapshot the contents because
2818 // everything up to cap(allp) is immutable.
2819 allpSnapshot := allp
2820 // Also snapshot masks. Value changes are OK, but we can't allow
2821 // len to change out from under us.
2822 idlepMaskSnapshot := idlepMask
2823 timerpMaskSnapshot := timerpMask
2825 // return P and block
2827 if sched.gcwaiting != 0 || _p_.runSafePointFn != 0 {
2831 if sched.runqsize != 0 {
2832 gp := globrunqget(_p_, 0)
2836 if releasep() != _p_ {
2837 throw("findrunnable: wrong p")
2842 // Delicate dance: thread transitions from spinning to non-spinning
2843 // state, potentially concurrently with submission of new work. We must
2844 // drop nmspinning first and then check all sources again (with
2845 // #StoreLoad memory barrier in between). If we do it the other way
2846 // around, another thread can submit work after we've checked all
2847 // sources but before we drop nmspinning; as a result nobody will
2848 // unpark a thread to run the work.
2850 // This applies to the following sources of work:
2852 // * Goroutines added to a per-P run queue.
2853 // * New/modified-earlier timers on a per-P timer heap.
2854 // * Idle-priority GC work (barring golang.org/issue/19112).
2856 // If we discover new work below, we need to restore m.spinning as a signal
2857 // for resetspinning to unpark a new worker thread (because there can be more
2858 // than one starving goroutine). However, if after discovering new work
2859 // we also observe no idle Ps it is OK to skip unparking a new worker
2860 // thread: the system is fully loaded so no spinning threads are required.
2861 // Also see "Worker thread parking/unparking" comment at the top of the file.
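// For illustration, the interleaving this ordering rules out: if we rechecked
// the work sources first and decremented nmspinning afterwards, a producer
// could submit a G after our recheck, observe nmspinning > 0 and skip wakep,
// and only then would our decrement land, leaving the new G with no unparked
// thread to run it. Decrementing first (with the #StoreLoad barrier before the
// recheck) means work submitted after the decrement sees no spinning Ms and
// wakes a thread, while work submitted before it is caught by the recheck.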
2862 wasSpinning := _g_.m.spinning
2864 _g_.m.spinning = false
2865 if int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {
2866 throw("findrunnable: negative nmspinning")
2869 // Note that for correctness, only the last M transitioning from
2870 // spinning to non-spinning must perform these rechecks to
2871 // ensure no missed work. We are performing it on every M that
2872 // transitions as a conservative change to monitor effects on
2873 // latency. See golang.org/issue/43997.
2875 // Check all runqueues once again.
2876 _p_ = checkRunqsNoP(allpSnapshot, idlepMaskSnapshot)
2879 _g_.m.spinning = true
2880 atomic.Xadd(&sched.nmspinning, 1)
2884 // Check for idle-priority GC work again.
2885 _p_, gp = checkIdleGCNoP()
2888 _g_.m.spinning = true
2889 atomic.Xadd(&sched.nmspinning, 1)
2891 // Run the idle worker.
2892 _p_.gcMarkWorkerMode = gcMarkWorkerIdleMode
2893 casgstatus(gp, _Gwaiting, _Grunnable)
2895 traceGoUnpark(gp, 0)
2900 // Finally, check for timer creation or expiry concurrently with
2901 // transitioning from spinning to non-spinning.
2903 // Note that we cannot use checkTimers here because it calls
2904 // adjusttimers which may need to allocate memory, and that isn't
2905 // allowed when we don't have an active P.
2906 pollUntil = checkTimersNoP(allpSnapshot, timerpMaskSnapshot, pollUntil)
2909 // Poll network until next timer.
2910 if netpollinited() && (atomic.Load(&netpollWaiters) > 0 || pollUntil != 0) && atomic.Xchg64(&sched.lastpoll, 0) != 0 {
2911 atomic.Store64(&sched.pollUntil, uint64(pollUntil))
2913 throw("findrunnable: netpoll with p")
2916 throw("findrunnable: netpoll with spinning")
2923 delay = pollUntil - now
2929 // When using fake time, just poll.
2932 list := netpoll(delay) // block until new work is available
2933 atomic.Store64(&sched.pollUntil, 0)
2934 atomic.Store64(&sched.lastpoll, uint64(nanotime()))
2935 if faketime != 0 && list.empty() {
2936 // Using fake time and nothing is ready; stop M.
2937 // When all M's stop, checkdead will call timejump.
2951 casgstatus(gp, _Gwaiting, _Grunnable)
2953 traceGoUnpark(gp, 0)
2958 _g_.m.spinning = true
2959 atomic.Xadd(&sched.nmspinning, 1)
2963 } else if pollUntil != 0 && netpollinited() {
2964 pollerPollUntil := int64(atomic.Load64(&sched.pollUntil))
2965 if pollerPollUntil == 0 || pollerPollUntil > pollUntil {
2973 // pollWork reports whether there is non-background work this P could
2974 // be doing. This is a fairly lightweight check to be used for
2975 // background work loops, like idle GC. It checks a subset of the
2976 // conditions checked by the actual scheduler.
2977 func pollWork() bool {
2978 if sched.runqsize != 0 {
2981 p := getg().m.p.ptr()
2985 if netpollinited() && atomic.Load(&netpollWaiters) > 0 && sched.lastpoll != 0 {
2986 if list := netpoll(0); !list.empty() {
2994 // stealWork attempts to steal a runnable goroutine or timer from any P.
2996 // If newWork is true, new work may have been readied.
2998 // If now is not 0 it is the current time. stealWork returns the passed time or
2999 // the current time if now was passed as 0.
3000 func stealWork(now int64) (gp *g, inheritTime bool, rnow, pollUntil int64, newWork bool) {
3001 pp := getg().m.p.ptr()
3005 const stealTries = 4
3006 for i := 0; i < stealTries; i++ {
3007 stealTimersOrRunNextG := i == stealTries-1
3009 for enum := stealOrder.start(fastrand()); !enum.done(); enum.next() {
3010 if sched.gcwaiting != 0 {
3011 // GC work may be available.
3012 return nil, false, now, pollUntil, true
3014 p2 := allp[enum.position()]
3019 // Steal timers from p2. This call to checkTimers is the only place
3020 // where we might hold a lock on a different P's timers. We do this
3021 // once on the last pass before checking runnext because stealing
3022 // from the other P's runnext should be the last resort, so if there
3023 // are timers to steal do that first.
3025 // We only check timers on one of the stealing iterations because
3026 // the time stored in now doesn't change in this loop and checking
3027 // the timers for each P more than once with the same value of now
3028 // is probably a waste of time.
3030 // timerpMask tells us whether the P may have timers at all. If it
3031 // can't, no need to check at all.
3032 if stealTimersOrRunNextG && timerpMask.read(enum.position()) {
3033 tnow, w, ran := checkTimers(p2, now)
3035 if w != 0 && (pollUntil == 0 || w < pollUntil) {
3039 // Running the timers may have
3040 // made an arbitrary number of G's
3041 // ready and added them to this P's
3042 // local run queue. That invalidates
3043 // the assumption of runqsteal
3044 // that it always has room to add
3045 // stolen G's. So check now if there
3046 // is a local G to run.
3047 if gp, inheritTime := runqget(pp); gp != nil {
3048 return gp, inheritTime, now, pollUntil, ranTimer
3054 // Don't bother to attempt to steal if p2 is idle.
3055 if !idlepMask.read(enum.position()) {
3056 if gp := runqsteal(pp, p2, stealTimersOrRunNextG); gp != nil {
3057 return gp, false, now, pollUntil, ranTimer
3063 // No goroutines found to steal. Regardless, running a timer may have
3064 // made some goroutine ready that we missed. Indicate the next timer to wait for.
3066 return nil, false, now, pollUntil, ranTimer
3069 // Check all Ps for a runnable G to steal.
3071 // On entry we have no P. If a G is available to steal and a P is available,
3072 // the P is returned which the caller should acquire and attempt to steal the
3074 func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p {
3075 for id, p2 := range allpSnapshot {
3076 if !idlepMaskSnapshot.read(uint32(id)) && !runqempty(p2) {
3084 // Can't get a P, don't bother checking remaining Ps.
3092 // Check all Ps for a timer expiring sooner than pollUntil.
3094 // Returns updated pollUntil value.
3095 func checkTimersNoP(allpSnapshot []*p, timerpMaskSnapshot pMask, pollUntil int64) int64 {
3096 for id, p2 := range allpSnapshot {
3097 if timerpMaskSnapshot.read(uint32(id)) {
3098 w := nobarrierWakeTime(p2)
3099 if w != 0 && (pollUntil == 0 || w < pollUntil) {
3108 // Check for idle-priority GC, without a P on entry.
3110 // If some GC work, a P, and a worker G are all available, the P and G will be
3111 // returned. The returned P has not been wired yet.
3112 func checkIdleGCNoP() (*p, *g) {
3113 // N.B. Since we have no P, gcBlackenEnabled may change at any time; we
3114 // must check again after acquiring a P.
3115 if atomic.Load(&gcBlackenEnabled) == 0 {
3118 if !gcMarkWorkAvailable(nil) {
3122 // Work is available; we can start an idle GC worker only if there is
3123 // an available P and available worker G.
3125 // We can attempt to acquire these in either order, though both have
3126 // synchronization concerns (see below). Workers are almost always
3127 // available (see comment in findRunnableGCWorker for the one case
3128 // there may be none). Since we're slightly less likely to find a P,
3129 // check for that first.
3131 // Synchronization: note that we must hold sched.lock until we are
3132 // committed to keeping it. Otherwise we cannot put the unnecessary P
3133 // back in sched.pidle without performing the full set of idle
3134 // transition checks.
3136 // If we were to check gcBgMarkWorkerPool first, we must somehow handle
3137 // the assumption in gcControllerState.findRunnableGCWorker that an
3138 // empty gcBgMarkWorkerPool is only possible if gcMarkDone is running.
3146 // Now that we own a P, gcBlackenEnabled can't change (as it requires
3148 if gcBlackenEnabled == 0 {
3154 node := (*gcBgMarkWorkerNode)(gcBgMarkWorkerPool.pop())
3163 return pp, node.gp.ptr()
3166 // wakeNetPoller wakes up the thread sleeping in the network poller if it isn't
3167 // going to wake up before the when argument; or it wakes an idle P to service
3168 // timers and the network poller if there isn't one already.
3169 func wakeNetPoller(when int64) {
3170 if atomic.Load64(&sched.lastpoll) == 0 {
3171 // In findrunnable we ensure that when polling the pollUntil
3172 // field is either zero or the time to which the current
3173 // poll is expected to run. This can have a spurious wakeup
3174 // but should never miss a wakeup.
3175 pollerPollUntil := int64(atomic.Load64(&sched.pollUntil))
3176 if pollerPollUntil == 0 || pollerPollUntil > when {
3180 // There are no threads in the network poller, try to get
3181 // one there so it can handle new timers.
3182 if GOOS != "plan9" { // Temporary workaround - see issue #42303.
3188 func resetspinning() {
3190 if !_g_.m.spinning {
3191 throw("resetspinning: not a spinning m")
3193 _g_.m.spinning = false
3194 nmspinning := atomic.Xadd(&sched.nmspinning, -1)
3195 if int32(nmspinning) < 0 {
3196 throw("findrunnable: negative nmspinning")
3198 // M wakeup policy is deliberately somewhat conservative, so check if we
3199 // need to wakeup another P here. See "Worker thread parking/unparking"
3200 // comment at the top of the file for details.
3204 // injectglist adds each runnable G on the list to some run queue,
3205 // and clears glist. If there is no current P, they are added to the
3206 // global queue, and up to npidle M's are started to run them.
3207 // Otherwise, for each idle P, this adds a G to the global queue
3208 // and starts an M. Any remaining G's are added to the current P's run queue.
3210 // This may temporarily acquire sched.lock.
3211 // Can run concurrently with GC.
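// For illustration (hypothetical numbers): called with a current P, 3 idle Ps
// and 8 runnable Gs on glist, roughly 3 Gs go to the global queue and 3 Ms are
// started for the idle Ps, while the remaining 5 Gs are pushed onto the
// current P's local run queue.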
3212 func injectglist(glist *gList) {
3217 for gp := glist.head.ptr(); gp != nil; gp = gp.schedlink.ptr() {
3218 traceGoUnpark(gp, 0)
3222 // Mark all the goroutines as runnable before we put them
3223 // on the run queues.
3224 head := glist.head.ptr()
3227 for gp := head; gp != nil; gp = gp.schedlink.ptr() {
3230 casgstatus(gp, _Gwaiting, _Grunnable)
3233 // Turn the gList into a gQueue.
3239 startIdle := func(n int) {
3240 for ; n != 0 && sched.npidle != 0; n-- {
3245 pp := getg().m.p.ptr()
3248 globrunqputbatch(&q, int32(qsize))
3254 npidle := int(atomic.Load(&sched.npidle))
3257 for n = 0; n < npidle && !q.empty(); n++ {
3263 globrunqputbatch(&globq, int32(n))
3270 runqputbatch(pp, &q, qsize)
3274 // One round of scheduler: find a runnable goroutine and execute it.
3279 if _g_.m.locks != 0 {
3280 throw("schedule: holding locks")
3283 if _g_.m.lockedg != 0 {
3285 execute(_g_.m.lockedg.ptr(), false) // Never returns.
3288 // We should not schedule away from a g that is executing a cgo call,
3289 // since the cgo call is using the m's g0 stack.
3291 throw("schedule: in cgo")
3298 if sched.gcwaiting != 0 {
3302 if pp.runSafePointFn != 0 {
3306 // Sanity check: if we are spinning, the run queue should be empty.
3307 // Check this before calling checkTimers, as that might call
3308 // goready to put a ready goroutine on the local run queue.
3309 if _g_.m.spinning && (pp.runnext != 0 || pp.runqhead != pp.runqtail) {
3310 throw("schedule: spinning with local work")
3316 var inheritTime bool
3318 // Normal goroutines will check for need to wakeP in ready,
3319 // but GCworkers and tracereaders will not, so the check must
3320 // be done here instead.
3322 if trace.enabled || trace.shutdown {
3325 casgstatus(gp, _Gwaiting, _Grunnable)
3326 traceGoUnpark(gp, 0)
3330 if gp == nil && gcBlackenEnabled != 0 {
3331 gp = gcController.findRunnableGCWorker(_g_.m.p.ptr())
3337 // Check the global runnable queue once in a while to ensure fairness.
3338 // Otherwise two goroutines can completely occupy the local runqueue
3339 // by constantly respawning each other.
3340 if _g_.m.p.ptr().schedtick%61 == 0 && sched.runqsize > 0 {
3342 gp = globrunqget(_g_.m.p.ptr(), 1)
3347 gp, inheritTime = runqget(_g_.m.p.ptr())
3348 // We can see gp != nil here even if the M is spinning,
3349 // if checkTimers added a local goroutine via goready.
3352 gp, inheritTime = findrunnable() // blocks until work is available
3355 // This thread is going to run a goroutine and is not spinning anymore,
3356 // so if it was marked as spinning we need to reset it now and potentially
3357 // start a new spinning M.
3362 if sched.disable.user && !schedEnabled(gp) {
3363 // Scheduling of this goroutine is disabled. Put it on
3364 // the list of pending runnable goroutines for when we
3365 // re-enable user scheduling and look again.
3367 if schedEnabled(gp) {
3368 // Something re-enabled scheduling while we
3369 // were acquiring the lock.
3372 sched.disable.runnable.pushBack(gp)
3379 // If about to schedule a not-normal goroutine (a GCworker or tracereader),
3380 // wake a P if there is one.
3384 if gp.lockedm != 0 {
3385 // Hands off own p to the locked m,
3386 // then blocks waiting for a new p.
3391 execute(gp, inheritTime)
3394 // dropg removes the association between m and the current goroutine m->curg (gp for short).
3395 // Typically a caller sets gp's status away from Grunning and then
3396 // immediately calls dropg to finish the job. The caller is also responsible
3397 // for arranging that gp will be restarted using ready at an
3398 // appropriate time. After calling dropg and arranging for gp to be
3399 // readied later, the caller can do other work but eventually should
3400 // call schedule to restart the scheduling of goroutines on this m.
3404 setMNoWB(&_g_.m.curg.m, nil)
3405 setGNoWB(&_g_.m.curg, nil)
3408 // checkTimers runs any timers for the P that are ready.
3409 // If now is not 0 it is the current time.
3410 // It returns the passed time or the current time if now was passed as 0,
3411 // and the time when the next timer should run or 0 if there is no next timer,
3412 // and reports whether it ran any timers.
3413 // If the time when the next timer should run is not 0,
3414 // it is always larger than the returned time.
3415 // We pass now in and out to avoid extra calls of nanotime.
3416 //go:yeswritebarrierrec
3417 func checkTimers(pp *p, now int64) (rnow, pollUntil int64, ran bool) {
3418 // If it's not yet time for the first timer, or the first adjusted
3419 // timer, then there is nothing to do.
3420 next := int64(atomic.Load64(&pp.timer0When))
3421 nextAdj := int64(atomic.Load64(&pp.timerModifiedEarliest))
3422 if next == 0 || (nextAdj != 0 && nextAdj < next) {
3427 // No timers to run or adjust.
3428 return now, 0, false
3435 // Next timer is not ready to run, but keep going
3436 // if we would clear deleted timers.
3437 // This corresponds to the condition below where
3438 // we decide whether to call clearDeletedTimers.
3439 if pp != getg().m.p.ptr() || int(atomic.Load(&pp.deletedTimers)) <= int(atomic.Load(&pp.numTimers)/4) {
3440 return now, next, false
3444 lock(&pp.timersLock)
3446 if len(pp.timers) > 0 {
3447 adjusttimers(pp, now)
3448 for len(pp.timers) > 0 {
3449 // Note that runtimer may temporarily unlock
3451 if tw := runtimer(pp, now); tw != 0 {
3461 // If this is the local P, and there are a lot of deleted timers,
3462 // clear them out. We only do this for the local P to reduce
3463 // lock contention on timersLock.
3464 if pp == getg().m.p.ptr() && int(atomic.Load(&pp.deletedTimers)) > len(pp.timers)/4 {
3465 clearDeletedTimers(pp)
3468 unlock(&pp.timersLock)
3470 return now, pollUntil, ran
3473 func parkunlock_c(gp *g, lock unsafe.Pointer) bool {
3474 unlock((*mutex)(lock))
3478 // park continuation on g0.
3479 func park_m(gp *g) {
3483 traceGoPark(_g_.m.waittraceev, _g_.m.waittraceskip)
3486 casgstatus(gp, _Grunning, _Gwaiting)
3489 if fn := _g_.m.waitunlockf; fn != nil {
3490 ok := fn(gp, _g_.m.waitlock)
3491 _g_.m.waitunlockf = nil
3492 _g_.m.waitlock = nil
3495 traceGoUnpark(gp, 2)
3497 casgstatus(gp, _Gwaiting, _Grunnable)
3498 execute(gp, true) // Schedule it back, never returns.
3504 func goschedImpl(gp *g) {
3505 status := readgstatus(gp)
3506 if status&^_Gscan != _Grunning {
3508 throw("bad g status")
3510 casgstatus(gp, _Grunning, _Grunnable)
3519 // Gosched continuation on g0.
3520 func gosched_m(gp *g) {
3527 // goschedguarded is a forbidden-states-avoided version of gosched_m
3528 func goschedguarded_m(gp *g) {
3530 if !canPreemptM(gp.m) {
3531 gogo(&gp.sched) // never return
3540 func gopreempt_m(gp *g) {
3547 // preemptPark parks gp and puts it in _Gpreempted.
3550 func preemptPark(gp *g) {
3552 traceGoPark(traceEvGoBlock, 0)
3554 status := readgstatus(gp)
3555 if status&^_Gscan != _Grunning {
3557 throw("bad g status")
3559 gp.waitreason = waitReasonPreempted
3561 if gp.asyncSafePoint {
3562 // Double-check that async preemption does not
3563 // happen in SPWRITE assembly functions.
3564 // isAsyncSafePoint must exclude this case.
3565 f := findfunc(gp.sched.pc)
3567 throw("preempt at unknown pc")
3569 if f.flag&funcFlag_SPWRITE != 0 {
3570 println("runtime: unexpected SPWRITE function", funcname(f), "in async preempt")
3571 throw("preempt SPWRITE")
3575 // Transition from _Grunning to _Gscan|_Gpreempted. We can't
3576 // be in _Grunning when we dropg because then we'd be running
3577 // without an M, but the moment we're in _Gpreempted,
3578 // something could claim this G before we've fully cleaned it
3579 // up. Hence, we set the scan bit to lock down further
3580 // transitions until we can dropg.
3581 casGToPreemptScan(gp, _Grunning, _Gscan|_Gpreempted)
3583 casfrom_Gscanstatus(gp, _Gscan|_Gpreempted, _Gpreempted)
3587 // goyield is like Gosched, but it:
3588 // - emits a GoPreempt trace event instead of a GoSched trace event
3589 // - puts the current G on the runq of the current P instead of the globrunq
3595 func goyield_m(gp *g) {
3600 casgstatus(gp, _Grunning, _Grunnable)
3602 runqput(pp, gp, false)
3606 // Finishes execution of the current goroutine.
3617 // goexit continuation on g0.
3618 func goexit0(gp *g) {
3621 casgstatus(gp, _Grunning, _Gdead)
3622 if isSystemGoroutine(gp, false) {
3623 atomic.Xadd(&sched.ngsys, -1)
3626 locked := gp.lockedm != 0
3629 gp.preemptStop = false
3630 gp.paniconfault = false
3631 gp._defer = nil // should be nil already but just in case.
3632 gp._panic = nil // non-nil for Goexit during panic. points at stack-allocated data.
3639 if gcBlackenEnabled != 0 && gp.gcAssistBytes > 0 {
3640 // Flush assist credit to the global pool. This gives
3641 // better information to pacing if the application is
3642 // rapidly creating and exiting goroutines.
3643 assistWorkPerByte := float64frombits(atomic.Load64(&gcController.assistWorkPerByte))
3644 scanCredit := int64(assistWorkPerByte * float64(gp.gcAssistBytes))
3645 atomic.Xaddint64(&gcController.bgScanCredit, scanCredit)
3646 gp.gcAssistBytes = 0
3651 if GOARCH == "wasm" { // no threads yet on wasm
3652 gfput(_g_.m.p.ptr(), gp)
3653 schedule() // never returns
3656 if _g_.m.lockedInt != 0 {
3657 print("invalid m->lockedInt = ", _g_.m.lockedInt, "\n")
3658 throw("internal lockOSThread error")
3660 gfput(_g_.m.p.ptr(), gp)
3662 // The goroutine may have locked this thread because
3663 // it put it in an unusual kernel state. Kill it
3664 // rather than returning it to the thread pool.
3666 // Return to mstart, which will release the P and exit
3668 if GOOS != "plan9" { // See golang.org/issue/22227.
3669 gogo(&_g_.m.g0.sched)
3671 // Clear lockedExt on plan9 since we may end up re-using
3679 // save updates getg().sched to refer to pc and sp so that a following
3680 // gogo will restore pc and sp.
3682 // save must not have write barriers because invoking a write barrier
3683 // can clobber getg().sched.
3686 //go:nowritebarrierrec
3687 func save(pc, sp uintptr) {
3690 if _g_ == _g_.m.g0 || _g_ == _g_.m.gsignal {
3691 // m.g0.sched is special and must describe the context
3692 // for exiting the thread. mstart1 writes to it directly.
3693 // m.gsignal.sched should not be used at all.
3694 // This check makes sure save calls do not accidentally
3695 // run in contexts where they'd write to system g's.
3696 throw("save on system g not allowed")
3703 // We need to ensure ctxt is zero, but can't have a write
3704 // barrier here. However, it should always already be zero.
3706 if _g_.sched.ctxt != nil {
3711 // The goroutine g is about to enter a system call.
3712 // Record that it's not using the cpu anymore.
3713 // This is called only from the go syscall library and cgocall,
3714 // not from the low-level system calls used by the runtime.
3716 // Entersyscall cannot split the stack: the save must
3717 // make g->sched refer to the caller's stack segment, because
3718 // entersyscall is going to return immediately after.
3720 // Nothing entersyscall calls can split the stack either.
3721 // We cannot safely move the stack during an active call to syscall,
3722 // because we do not know which of the uintptr arguments are
3723 // really pointers (back into the stack).
3724 // In practice, this means that we make the fast path run through
3725 // entersyscall doing no-split things, and the slow path has to use systemstack
3726 // to run bigger things on the system stack.
3728 // reentersyscall is the entry point used by cgo callbacks, where explicitly
3729 // saved SP and PC are restored. This is needed when exitsyscall will be called
3730 // from a function further up in the call stack than the parent, as g->syscallsp
3731 // must always point to a valid stack frame. entersyscall below is the normal
3732 // entry point for syscalls, which obtains the SP and PC from the caller.
3735 // At the start of a syscall we emit traceGoSysCall to capture the stack trace.
3736 // If the syscall does not block, that is it, we do not emit any other events.
3737 // If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock;
3738 // when syscall returns we emit traceGoSysExit and when the goroutine starts running
3739 // (potentially instantly, if exitsyscallfast returns true) we emit traceGoStart.
3740 // To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock,
3741 // we remember the current value of syscalltick in m (_g_.m.syscalltick = _g_.m.p.ptr().syscalltick),
3742 // whoever emits traceGoSysBlock increments p.syscalltick afterwards;
3743 // and we wait for the increment before emitting traceGoSysExit.
3744 // Note that the increment is done even if tracing is not enabled,
3745 // because tracing can be enabled in the middle of syscall. We don't want the wait to hang.
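// For illustration (hypothetical tick values): entersyscall records
// m.syscalltick = p.syscalltick = 5; if the P is retaken, the retaker emits
// traceGoSysBlock and bumps p.syscalltick to 6; exitsyscall then waits until
// it observes the changed tick before the exit event is emitted, so GoSysExit
// is ordered after GoSysBlock even though the two race in real time.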
3748 func reentersyscall(pc, sp uintptr) {
3751 // Disable preemption because during this function g is in Gsyscall status,
3752 // but can have inconsistent g->sched, do not let GC observe it.
3755 // Entersyscall must not call any function that might split/grow the stack.
3756 // (See details in comment above.)
3757 // Catch calls that might, by replacing the stack guard with something that
3758 // will trip any stack check and leaving a flag to tell newstack to die.
3759 _g_.stackguard0 = stackPreempt
3760 _g_.throwsplit = true
3762 // Leave SP around for GC and traceback.
3766 casgstatus(_g_, _Grunning, _Gsyscall)
3767 if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
3768 systemstack(func() {
3769 print("entersyscall inconsistent ", hex(_g_.syscallsp), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
3770 throw("entersyscall")
3775 systemstack(traceGoSysCall)
3776 // systemstack itself clobbers g.sched.{pc,sp} and we might
3777 // need them later when the G is genuinely blocked in a
3782 if atomic.Load(&sched.sysmonwait) != 0 {
3783 systemstack(entersyscall_sysmon)
3787 if _g_.m.p.ptr().runSafePointFn != 0 {
3788 // runSafePointFn may stack split if run on this stack
3789 systemstack(runSafePointFn)
3793 _g_.m.syscalltick = _g_.m.p.ptr().syscalltick
3794 _g_.sysblocktraced = true
3799 atomic.Store(&pp.status, _Psyscall)
3800 if sched.gcwaiting != 0 {
3801 systemstack(entersyscall_gcwait)
3808 // Standard syscall entry used by the go syscall library and normal cgo calls.
3810 // This is exported via linkname to assembly in the syscall package.
3813 //go:linkname entersyscall
3814 func entersyscall() {
3815 reentersyscall(getcallerpc(), getcallersp())
3818 func entersyscall_sysmon() {
3820 if atomic.Load(&sched.sysmonwait) != 0 {
3821 atomic.Store(&sched.sysmonwait, 0)
3822 notewakeup(&sched.sysmonnote)
3827 func entersyscall_gcwait() {
3829 _p_ := _g_.m.oldp.ptr()
3832 if sched.stopwait > 0 && atomic.Cas(&_p_.status, _Psyscall, _Pgcstop) {
3834 traceGoSysBlock(_p_)
3838 if sched.stopwait--; sched.stopwait == 0 {
3839 notewakeup(&sched.stopnote)
3845 // The same as entersyscall(), but with a hint that the syscall is blocking.
3847 func entersyscallblock() {
3850 _g_.m.locks++ // see comment in entersyscall
3851 _g_.throwsplit = true
3852 _g_.stackguard0 = stackPreempt // see comment in entersyscall
3853 _g_.m.syscalltick = _g_.m.p.ptr().syscalltick
3854 _g_.sysblocktraced = true
3855 _g_.m.p.ptr().syscalltick++
3857 // Leave SP around for GC and traceback.
3861 _g_.syscallsp = _g_.sched.sp
3862 _g_.syscallpc = _g_.sched.pc
3863 if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
3866 sp3 := _g_.syscallsp
3867 systemstack(func() {
3868 print("entersyscallblock inconsistent ", hex(sp1), " ", hex(sp2), " ", hex(sp3), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
3869 throw("entersyscallblock")
3872 casgstatus(_g_, _Grunning, _Gsyscall)
3873 if _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {
3874 systemstack(func() {
3875 print("entersyscallblock inconsistent ", hex(sp), " ", hex(_g_.sched.sp), " ", hex(_g_.syscallsp), " [", hex(_g_.stack.lo), ",", hex(_g_.stack.hi), "]\n")
3876 throw("entersyscallblock")
3880 systemstack(entersyscallblock_handoff)
3882 // Resave for traceback during blocked call.
3883 save(getcallerpc(), getcallersp())
3888 func entersyscallblock_handoff() {
3891 traceGoSysBlock(getg().m.p.ptr())
3893 handoffp(releasep())
3896 // The goroutine g exited its system call.
3897 // Arrange for it to run on a cpu again.
3898 // This is called only from the go syscall library, not
3899 // from the low-level system calls used by the runtime.
3901 // Write barriers are not allowed because our P may have been stolen.
3903 // This is exported via linkname to assembly in the syscall package.
3906 //go:nowritebarrierrec
3907 //go:linkname exitsyscall
3908 func exitsyscall() {
3911 _g_.m.locks++ // see comment in entersyscall
3912 if getcallersp() > _g_.syscallsp {
3913 throw("exitsyscall: syscall frame is no longer valid")
3917 oldp := _g_.m.oldp.ptr()
3919 if exitsyscallfast(oldp) {
3921 if oldp != _g_.m.p.ptr() || _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {
3922 systemstack(traceGoStart)
3925 // There's a cpu for us, so we can run.
3926 _g_.m.p.ptr().syscalltick++
3927 // We need to cas the status and scan before resuming...
3928 casgstatus(_g_, _Gsyscall, _Grunning)
3930 // Garbage collector isn't running (since we are),
3931 // so okay to clear syscallsp.
3935 // restore the preemption request in case we've cleared it in newstack
3936 _g_.stackguard0 = stackPreempt
3938 // otherwise restore the real _StackGuard, we've spoiled it in entersyscall/entersyscallblock
3939 _g_.stackguard0 = _g_.stack.lo + _StackGuard
3941 _g_.throwsplit = false
3943 if sched.disable.user && !schedEnabled(_g_) {
3944 // Scheduling of this goroutine is disabled.
3951 _g_.sysexitticks = 0
3953 // Wait till traceGoSysBlock event is emitted.
3954 // This ensures consistency of the trace (the goroutine is started after it is blocked).
3955 for oldp != nil && oldp.syscalltick == _g_.m.syscalltick {
3958 // We can't trace syscall exit right now because we don't have a P.
3959 // Tracing code can invoke write barriers that cannot run without a P.
3960 // So instead we remember the syscall exit time and emit the event
3961 // in execute when we have a P.
3962 _g_.sysexitticks = cputicks()
3967 // Call the scheduler.
3970 // Scheduler returned, so we're allowed to run now.
3971 // Delete the syscallsp information that we left for
3972 // the garbage collector during the system call.
3973 // Must wait until now because until gosched returns
3974 // we don't know for sure that the garbage collector
3977 _g_.m.p.ptr().syscalltick++
3978 _g_.throwsplit = false
3982 func exitsyscallfast(oldp *p) bool {
3985 // Freezetheworld sets stopwait but does not retake P's.
3986 if sched.stopwait == freezeStopWait {
3990 // Try to re-acquire the last P.
3991 if oldp != nil && oldp.status == _Psyscall && atomic.Cas(&oldp.status, _Psyscall, _Pidle) {
3992 // There's a cpu for us, so we can run.
3994 exitsyscallfast_reacquired()
3998 // Try to get any other idle P.
3999 if sched.pidle != 0 {
4001 systemstack(func() {
4002 ok = exitsyscallfast_pidle()
4003 if ok && trace.enabled {
4005 // Wait till traceGoSysBlock event is emitted.
4006 // This ensures consistency of the trace (the goroutine is started after it is blocked).
4007 for oldp.syscalltick == _g_.m.syscalltick {
4021 // exitsyscallfast_reacquired is the exitsyscall path on which this G
4022 // has successfully reacquired the P it was running on before the syscall.
4026 func exitsyscallfast_reacquired() {
4028 if _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {
4030 // The p was retaken and then entered a syscall again (since _g_.m.syscalltick has changed).
4031 // traceGoSysBlock for this syscall was already emitted,
4032 // but here we effectively retake the p from the new syscall running on the same p.
4033 systemstack(func() {
4034 // Denote blocking of the new syscall.
4035 traceGoSysBlock(_g_.m.p.ptr())
4036 // Denote completion of the current syscall.
4040 _g_.m.p.ptr().syscalltick++
4044 func exitsyscallfast_pidle() bool {
4047 if _p_ != nil && atomic.Load(&sched.sysmonwait) != 0 {
4048 atomic.Store(&sched.sysmonwait, 0)
4049 notewakeup(&sched.sysmonnote)
4059 // exitsyscall slow path on g0.
4060 // Failed to acquire P, enqueue gp as runnable.
4062 // Called via mcall, so gp is the calling g from this M.
4064 //go:nowritebarrierrec
4065 func exitsyscall0(gp *g) {
4066 casgstatus(gp, _Gsyscall, _Grunnable)
4070 if schedEnabled(gp) {
4077 // Below, we stoplockedm if gp is locked. globrunqput releases
4078 // ownership of gp, so we must check if gp is locked prior to
4079 // committing the release by unlocking sched.lock, otherwise we
4080 // could race with another M transitioning gp from unlocked to locked.
4082 locked = gp.lockedm != 0
4083 } else if atomic.Load(&sched.sysmonwait) != 0 {
4084 atomic.Store(&sched.sysmonwait, 0)
4085 notewakeup(&sched.sysmonnote)
4090 execute(gp, false) // Never returns.
4093 // Wait until another thread schedules gp and so m again.
4095 // N.B. lockedm must be this M, as this g was running on this M
4096 // before entersyscall.
4098 execute(gp, false) // Never returns.
4101 schedule() // Never returns.
4107 // Block signals during a fork, so that the child does not run
4108 // a signal handler before exec if a signal is sent to the process
4109 // group. See issue #18600.
4111 sigsave(&gp.m.sigmask)
4114 // This function is called before fork in syscall package.
4115 // Code between fork and exec must not allocate memory nor even try to grow stack.
4116 // Here we spoil g->_StackGuard to reliably detect any attempts to grow stack.
4117 // runtime_AfterFork will undo this in parent process, but not in child.
4118 gp.stackguard0 = stackFork
4121 // Called from syscall package before fork.
4122 //go:linkname syscall_runtime_BeforeFork syscall.runtime_BeforeFork
4124 func syscall_runtime_BeforeFork() {
4125 systemstack(beforefork)
4131 // See the comments in beforefork.
4132 gp.stackguard0 = gp.stack.lo + _StackGuard
4134 msigrestore(gp.m.sigmask)
4139 // Called from syscall package after fork in parent.
4140 //go:linkname syscall_runtime_AfterFork syscall.runtime_AfterFork
4142 func syscall_runtime_AfterFork() {
4143 systemstack(afterfork)
4146 // inForkedChild is true while manipulating signals in the child process.
4147 // This is used to avoid calling libc functions in case we are using vfork.
4148 var inForkedChild bool
4150 // Called from syscall package after fork in child.
4151 // It resets non-sigignored signals to the default handler, and
4152 // restores the signal mask in preparation for the exec.
4154 // Because this might be called during a vfork, and therefore may be
4155 // temporarily sharing address space with the parent process, this must
4156 // not change any global variables or call into C code that may do so.
4158 //go:linkname syscall_runtime_AfterForkInChild syscall.runtime_AfterForkInChild
4160 //go:nowritebarrierrec
4161 func syscall_runtime_AfterForkInChild() {
4162 // It's OK to change the global variable inForkedChild here
4163 // because we are going to change it back. There is no race here,
4164 // because if we are sharing address space with the parent process,
4165 // then the parent process can not be running concurrently.
4166 inForkedChild = true
4168 clearSignalHandlers()
4170 // When we are the child we are the only thread running,
4171 // so we know that nothing else has changed gp.m.sigmask.
4172 msigrestore(getg().m.sigmask)
4174 inForkedChild = false
4177 // pendingPreemptSignals is the number of preemption signals
4178 // that have been sent but not received. This is only used on Darwin.
4180 var pendingPreemptSignals uint32
4182 // Called from syscall package before Exec.
4183 //go:linkname syscall_runtime_BeforeExec syscall.runtime_BeforeExec
4184 func syscall_runtime_BeforeExec() {
4185 // Prevent thread creation during exec.
4188 // On Darwin, wait for all pending preemption signals to
4189 // be received. See issue #41702.
4190 if GOOS == "darwin" || GOOS == "ios" {
4191 for int32(atomic.Load(&pendingPreemptSignals)) > 0 {
4197 // Called from syscall package after Exec.
4198 //go:linkname syscall_runtime_AfterExec syscall.runtime_AfterExec
4199 func syscall_runtime_AfterExec() {
4203 // Allocate a new g, with a stack big enough for stacksize bytes.
4204 func malg(stacksize int32) *g {
4207 stacksize = round2(_StackSystem + stacksize)
4208 systemstack(func() {
4209 newg.stack = stackalloc(uint32(stacksize))
4211 newg.stackguard0 = newg.stack.lo + _StackGuard
4212 newg.stackguard1 = ^uintptr(0)
4213 // Clear the bottom word of the stack. We record g
4214 // there on gsignal stack during VDSO on ARM and ARM64.
4215 *(*uintptr)(unsafe.Pointer(newg.stack.lo)) = 0
4220 // Create a new g running fn.
4221 // Put it on the queue of g's waiting to run.
4222 // The compiler turns a go statement into a call to this.
4223 func newproc(fn *funcval) {
4226 systemstack(func() {
4227 newg := newproc1(fn, gp, pc)
4229 _p_ := getg().m.p.ptr()
4230 runqput(_p_, newg, true) // next=true: place newg in _p_.runnext so it likely runs next
4238 // Create a new g in state _Grunnable, starting at fn. callerpc is the
4239 // address of the go statement that created this. The caller is responsible
4240 // for adding the new g to the scheduler.
4241 func newproc1(fn *funcval, callergp *g, callerpc uintptr) *g {
4245 _g_.m.throwing = -1 // do not dump full stacks
4246 throw("go of nil func value")
4248 acquirem() // disable preemption because it can be holding p in a local var
4250 _p_ := _g_.m.p.ptr()
4253 newg = malg(_StackMin)
4254 casgstatus(newg, _Gidle, _Gdead)
4255 allgadd(newg) // publishes with a g->status of Gdead so GC scanner doesn't look at uninitialized stack.
4257 if newg.stack.hi == 0 {
4258 throw("newproc1: newg missing stack")
4261 if readgstatus(newg) != _Gdead {
4262 throw("newproc1: new g is not Gdead")
4265 totalSize := uintptr(4*sys.PtrSize + sys.MinFrameSize) // extra space in case of reads slightly beyond frame
4266 totalSize = alignUp(totalSize, sys.StackAlign)
4267 sp := newg.stack.hi - totalSize
4271 *(*uintptr)(unsafe.Pointer(sp)) = 0
4273 spArg += sys.MinFrameSize
4276 memclrNoHeapPointers(unsafe.Pointer(&newg.sched), unsafe.Sizeof(newg.sched))
4279 newg.sched.pc = abi.FuncPCABI0(goexit) + sys.PCQuantum // +PCQuantum so that previous instruction is in same function
4280 newg.sched.g = guintptr(unsafe.Pointer(newg))
4281 gostartcallfn(&newg.sched, fn)
4282 newg.gopc = callerpc
4283 newg.ancestors = saveAncestors(callergp)
4284 newg.startpc = fn.fn
4285 if _g_.m.curg != nil {
4286 newg.labels = _g_.m.curg.labels
4288 if isSystemGoroutine(newg, false) {
4289 atomic.Xadd(&sched.ngsys, +1)
4291 // Track initial transition?
4292 newg.trackingSeq = uint8(fastrand())
4293 if newg.trackingSeq%gTrackingPeriod == 0 {
4294 newg.tracking = true
4296 casgstatus(newg, _Gdead, _Grunnable)
4298 if _p_.goidcache == _p_.goidcacheend {
4299 // Sched.goidgen is the last allocated id,
4300 // this batch must be [sched.goidgen+1, sched.goidgen+GoidCacheBatch].
4301 // At startup sched.goidgen=0, so main goroutine receives goid=1.
4302 _p_.goidcache = atomic.Xadd64(&sched.goidgen, _GoidCacheBatch)
4303 _p_.goidcache -= _GoidCacheBatch - 1
4304 _p_.goidcacheend = _p_.goidcache + _GoidCacheBatch
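// For illustration, assuming _GoidCacheBatch is 16: the first refill sees
// Xadd return 16, so after the adjustments above this P hands out goids
// 1..16; the next refill returns 32 and covers 17..32, and so on, without
// touching sched.goidgen again until the batch is exhausted.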
4306 newg.goid = int64(_p_.goidcache)
4309 newg.racectx = racegostart(callerpc)
4312 traceGoCreate(newg, newg.startpc)
4319 // saveAncestors copies previous ancestors of the given caller g and
4320 // includes info for the current caller into a new set of tracebacks for
4321 // a g being created.
4322 func saveAncestors(callergp *g) *[]ancestorInfo {
4323 // Copy all prior info, except for the root goroutine (goid 0).
4324 if debug.tracebackancestors <= 0 || callergp.goid == 0 {
4327 var callerAncestors []ancestorInfo
4328 if callergp.ancestors != nil {
4329 callerAncestors = *callergp.ancestors
4331 n := int32(len(callerAncestors)) + 1
4332 if n > debug.tracebackancestors {
4333 n = debug.tracebackancestors
4335 ancestors := make([]ancestorInfo, n)
4336 copy(ancestors[1:], callerAncestors)
4338 var pcs [_TracebackMaxFrames]uintptr
4339 npcs := gcallers(callergp, 0, pcs[:])
4340 ipcs := make([]uintptr, npcs)
4342 ancestors[0] = ancestorInfo{
4344 goid: callergp.goid,
4345 gopc: callergp.gopc,
4348 ancestorsp := new([]ancestorInfo)
4349 *ancestorsp = ancestors
4353 // Put on gfree list.
4354 // If local list is too long, transfer a batch to the global list.
4355 func gfput(_p_ *p, gp *g) {
4356 if readgstatus(gp) != _Gdead {
4357 throw("gfput: bad status (not Gdead)")
4360 stksize := gp.stack.hi - gp.stack.lo
4362 if stksize != _FixedStack {
4363 // non-standard stack size - free it.
4372 if _p_.gFree.n >= 64 {
4378 for _p_.gFree.n >= 32 {
4379 gp = _p_.gFree.pop()
4381 if gp.stack.lo == 0 {
4388 lock(&sched.gFree.lock)
4389 sched.gFree.noStack.pushAll(noStackQ)
4390 sched.gFree.stack.pushAll(stackQ)
4391 sched.gFree.n += inc
4392 unlock(&sched.gFree.lock)
4396 // Get from gfree list.
4397 // If local list is empty, grab a batch from global list.
4398 func gfget(_p_ *p) *g {
4400 if _p_.gFree.empty() && (!sched.gFree.stack.empty() || !sched.gFree.noStack.empty()) {
4401 lock(&sched.gFree.lock)
4402 // Move a batch of free Gs to the P.
4403 for _p_.gFree.n < 32 {
4404 // Prefer Gs with stacks.
4405 gp := sched.gFree.stack.pop()
4407 gp = sched.gFree.noStack.pop()
4416 unlock(&sched.gFree.lock)
4419 gp := _p_.gFree.pop()
4424 if gp.stack.lo == 0 {
4425 // Stack was deallocated in gfput. Allocate a new one.
4426 systemstack(func() {
4427 gp.stack = stackalloc(_FixedStack)
4429 gp.stackguard0 = gp.stack.lo + _StackGuard
4432 racemalloc(unsafe.Pointer(gp.stack.lo), gp.stack.hi-gp.stack.lo)
4435 msanmalloc(unsafe.Pointer(gp.stack.lo), gp.stack.hi-gp.stack.lo)
4441 // Purge all cached G's from gfree list to the global list.
4442 func gfpurge(_p_ *p) {
4448 for !_p_.gFree.empty() {
4449 gp := _p_.gFree.pop()
4451 if gp.stack.lo == 0 {
4458 lock(&sched.gFree.lock)
4459 sched.gFree.noStack.pushAll(noStackQ)
4460 sched.gFree.stack.pushAll(stackQ)
4461 sched.gFree.n += inc
4462 unlock(&sched.gFree.lock)
4465 // Breakpoint executes a breakpoint trap.
4470 // dolockOSThread is called by LockOSThread and lockOSThread below
4471 // after they modify m.locked. Do not allow preemption during this call,
4472 // or else the m might be different in this function than in the caller.
4474 func dolockOSThread() {
4475 if GOARCH == "wasm" {
4476 return // no threads on wasm yet
4479 _g_.m.lockedg.set(_g_)
4480 _g_.lockedm.set(_g_.m)
4485 // LockOSThread wires the calling goroutine to its current operating system thread.
4486 // The calling goroutine will always execute in that thread,
4487 // and no other goroutine will execute in it,
4488 // until the calling goroutine has made as many calls to
4489 // UnlockOSThread as to LockOSThread.
4490 // If the calling goroutine exits without unlocking the thread,
4491 // the thread will be terminated.
4493 // All init functions are run on the startup thread. Calling LockOSThread
4494 // from an init function will cause the main function to be invoked on
4497 // A goroutine should call LockOSThread before calling OS services or
4498 // non-Go library functions that depend on per-thread state.
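//
// A minimal sketch of a typical (non-runtime) caller, pinning the goroutine
// for the lifetime of some thread-sensitive work:
//
//	runtime.LockOSThread()
//	defer runtime.UnlockOSThread()
//	// ... call OS services or C libraries that rely on per-thread state ...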
4499 func LockOSThread() {
4500 if atomic.Load(&newmHandoff.haveTemplateThread) == 0 && GOOS != "plan9" {
4501 // If we need to start a new thread from the locked
4502 // thread, we need the template thread. Start it now
4503 // while we're in a known-good state.
4504 startTemplateThread()
4508 if _g_.m.lockedExt == 0 {
4510 panic("LockOSThread nesting overflow")
4516 func lockOSThread() {
4517 getg().m.lockedInt++
4521 // dounlockOSThread is called by UnlockOSThread and unlockOSThread below
4522 // after they update m->locked. Do not allow preemption during this call,
4523 // or else the m might be different in this function than in the caller.
4525 func dounlockOSThread() {
4526 if GOARCH == "wasm" {
4527 return // no threads on wasm yet
4530 if _g_.m.lockedInt != 0 || _g_.m.lockedExt != 0 {
4539 // UnlockOSThread undoes an earlier call to LockOSThread.
4540 // If this drops the number of active LockOSThread calls on the
4541 // calling goroutine to zero, it unwires the calling goroutine from
4542 // its fixed operating system thread.
4543 // If there are no active LockOSThread calls, this is a no-op.
4545 // Before calling UnlockOSThread, the caller must ensure that the OS
4546 // thread is suitable for running other goroutines. If the caller made
4547 // any permanent changes to the state of the thread that would affect
4548 // other goroutines, it should not call this function and thus leave
4549 // the goroutine locked to the OS thread until the goroutine (and
4550 // hence the thread) exits.
4551 func UnlockOSThread() {
4553 if _g_.m.lockedExt == 0 {
4561 func unlockOSThread() {
4563 if _g_.m.lockedInt == 0 {
4564 systemstack(badunlockosthread)
4570 func badunlockosthread() {
4571 throw("runtime: internal error: misuse of lockOSThread/unlockOSThread")
4574 func gcount() int32 {
4575 n := int32(atomic.Loaduintptr(&allglen)) - sched.gFree.n - int32(atomic.Load(&sched.ngsys))
4576 for _, _p_ := range allp {
4580 // All these variables can be changed concurrently, so the result can be inconsistent.
4581 // But at least the current goroutine is running.
4588 func mcount() int32 {
4589 return int32(sched.mnext - sched.nmfreed)
4597 func _System() { _System() }
4598 func _ExternalCode() { _ExternalCode() }
4599 func _LostExternalCode() { _LostExternalCode() }
4600 func _GC() { _GC() }
4601 func _LostSIGPROFDuringAtomic64() { _LostSIGPROFDuringAtomic64() }
4602 func _VDSO() { _VDSO() }
4604 // Called if we receive a SIGPROF signal.
4605 // Called by the signal handler, may run during STW.
4606 //go:nowritebarrierrec
4607 func sigprof(pc, sp, lr uintptr, gp *g, mp *m) {
4612 // If mp.profilehz is 0, then profiling is not enabled for this thread.
4613 // We must check this to avoid a deadlock between setcpuprofilerate
4614 // and the call to cpuprof.add, below.
4615 if mp != nil && mp.profilehz == 0 {
4619 // On mips{,le} and arm, 64bit atomics are emulated with spinlocks, in
4620 // runtime/internal/atomic. If SIGPROF arrives while the program is inside
4621 // the critical section, it creates a deadlock (when writing the sample).
4622 // As a workaround, count the SIGPROFs that arrive while in the critical
4623 // section (cpuprof.lostAtomic) and report them later, when a SIGPROF is
4624 // received from somewhere else (with _LostSIGPROFDuringAtomic64 as pc).
4625 if GOARCH == "mips" || GOARCH == "mipsle" || GOARCH == "arm" {
4626 if f := findfunc(pc); f.valid() {
4627 if hasPrefix(funcname(f), "runtime/internal/atomic") {
4628 cpuprof.lostAtomic++
4634 // Profiling runs concurrently with GC, so it must not allocate.
4635 // Set a trap in case the code does allocate.
4636 // Note that on windows, one thread takes profiles of all the
4637 // other threads, so mp is usually not getg().m.
4638 // In fact mp may not even be stopped.
4639 // See golang.org/issue/17165.
4640 getg().m.mallocing++
4642 var stk [maxCPUProfStack]uintptr
4644 if mp.ncgo > 0 && mp.curg != nil && mp.curg.syscallpc != 0 && mp.curg.syscallsp != 0 {
4646 // Check cgoCallersUse to make sure that we are not
4647 // interrupting other code that is fiddling with
4648 // cgoCallers. We are running in a signal handler
4649 // with all signals blocked, so we don't have to worry
4650 // about any other code interrupting us.
4651 if atomic.Load(&mp.cgoCallersUse) == 0 && mp.cgoCallers != nil && mp.cgoCallers[0] != 0 {
4652 for cgoOff < len(mp.cgoCallers) && mp.cgoCallers[cgoOff] != 0 {
4655 copy(stk[:], mp.cgoCallers[:cgoOff])
4656 mp.cgoCallers[0] = 0
4659 // Collect Go stack that leads to the cgo call.
4660 n = gentraceback(mp.curg.syscallpc, mp.curg.syscallsp, 0, mp.curg, 0, &stk[cgoOff], len(stk)-cgoOff, nil, nil, 0)
4665 n = gentraceback(pc, sp, lr, gp, 0, &stk[0], len(stk), nil, nil, _TraceTrap|_TraceJumpStack)
4669 // Normal traceback is impossible or has failed.
4670 // See if it falls into one of several common cases.
4672 if usesLibcall() && mp.libcallg != 0 && mp.libcallpc != 0 && mp.libcallsp != 0 {
4673 // Libcall, i.e. runtime syscall on windows.
4674 // Collect Go stack that leads to the call.
4675 n = gentraceback(mp.libcallpc, mp.libcallsp, 0, mp.libcallg.ptr(), 0, &stk[0], len(stk), nil, nil, 0)
4677 if n == 0 && mp != nil && mp.vdsoSP != 0 {
4678 n = gentraceback(mp.vdsoPC, mp.vdsoSP, 0, gp, 0, &stk[0], len(stk), nil, nil, _TraceTrap|_TraceJumpStack)
4681 // If all of the above has failed, account it against abstract "System" or "GC".
4684 pc = abi.FuncPCABIInternal(_VDSO) + sys.PCQuantum
4685 } else if pc > firstmoduledata.etext {
4686 // "ExternalCode" is better than "etext".
4687 pc = abi.FuncPCABIInternal(_ExternalCode) + sys.PCQuantum
4690 if mp.preemptoff != "" {
4691 stk[1] = abi.FuncPCABIInternal(_GC) + sys.PCQuantum
4693 stk[1] = abi.FuncPCABIInternal(_System) + sys.PCQuantum
4699 cpuprof.add(gp, stk[:n])
4701 getg().m.mallocing--
4704 // If the signal handler receives a SIGPROF signal on a non-Go thread,
4705 // it tries to collect a traceback into sigprofCallers.
4706 // sigprofCallersUse is set to non-zero while sigprofCallers holds a traceback.
4707 var sigprofCallers cgoCallers
4708 var sigprofCallersUse uint32
4710 // sigprofNonGo is called if we receive a SIGPROF signal on a non-Go thread,
4711 // and the signal handler collected a stack trace in sigprofCallers.
4712 // When this is called, sigprofCallersUse will be non-zero.
4713 // g is nil, and what we can do is very limited.
4715 //go:nowritebarrierrec
4716 func sigprofNonGo() {
4719 for n < len(sigprofCallers) && sigprofCallers[n] != 0 {
4722 cpuprof.addNonGo(sigprofCallers[:n])
4725 atomic.Store(&sigprofCallersUse, 0)
4728 // sigprofNonGoPC is called when a profiling signal arrived on a
4729 // non-Go thread and we have a single PC value, not a stack trace.
4730 // g is nil, and what we can do is very limited.
4732 //go:nowritebarrierrec
4733 func sigprofNonGoPC(pc uintptr) {
4737 abi.FuncPCABIInternal(_ExternalCode) + sys.PCQuantum,
4739 cpuprof.addNonGo(stk)
4743 // setcpuprofilerate sets the CPU profiling rate to hz times per second.
4744 // If hz <= 0, setcpuprofilerate turns off CPU profiling.
4745 func setcpuprofilerate(hz int32) {
4746 // Force sane arguments.
4751 // Disable preemption, otherwise we can be rescheduled to another thread
4752 // that has profiling enabled.
4756 // Stop profiler on this thread so that it is safe to lock prof.
4757 // If a profiling signal came in while we had prof locked,
4758 // it would deadlock.
4759 setThreadCPUProfiler(0)
4761 for !atomic.Cas(&prof.signalLock, 0, 1) {
4765 setProcessCPUProfiler(hz)
4768 atomic.Store(&prof.signalLock, 0)
4771 sched.profilehz = hz
4775 setThreadCPUProfiler(hz)
4781 // init initializes pp, which may be a freshly allocated p or a
4782 // previously destroyed p, and transitions it to status _Pgcstop.
4783 func (pp *p) init(id int32) {
4785 pp.status = _Pgcstop
4786 pp.sudogcache = pp.sudogbuf[:0]
4787 pp.deferpool = pp.deferpoolbuf[:0]
4789 if pp.mcache == nil {
4792 throw("missing mcache?")
4794 // Use the bootstrap mcache0. Only one P will get
4795 // mcache0: the one with ID 0.
4798 pp.mcache = allocmcache()
4801 if raceenabled && pp.raceprocctx == 0 {
4803 pp.raceprocctx = raceprocctx0
4804 raceprocctx0 = 0 // bootstrap
4806 pp.raceprocctx = raceproccreate()
4809 lockInit(&pp.timersLock, lockRankTimers)
4811 // This P may get timers when it starts running. Set the mask here
4812 // since the P may not go through pidleget (notably P 0 on startup).
4814 // Similarly, we may not go through pidleget before this P starts
4815 // running if it is P 0 on startup.
4819 // destroy releases all of the resources associated with pp and
4820 // transitions it to status _Pdead.
4822 // sched.lock must be held and the world must be stopped.
4823 func (pp *p) destroy() {
4824 assertLockHeld(&sched.lock)
4825 assertWorldStopped()
4827 // Move all runnable goroutines to the global queue
4828 for pp.runqhead != pp.runqtail {
4829 // Pop from tail of local queue
4831 gp := pp.runq[pp.runqtail%uint32(len(pp.runq))].ptr()
4832 // Push onto head of global queue
4835 if pp.runnext != 0 {
4836 globrunqputhead(pp.runnext.ptr())
4839 if len(pp.timers) > 0 {
4840 plocal := getg().m.p.ptr()
4841 // The world is stopped, but we acquire timersLock to
4842 // protect against sysmon calling timeSleepUntil.
4843 // This is the only case where we hold the timersLock of
4844 // more than one P, so there are no deadlock concerns.
4845 lock(&plocal.timersLock)
4846 lock(&pp.timersLock)
4847 moveTimers(plocal, pp.timers)
4851 pp.deletedTimers = 0
4852 atomic.Store64(&pp.timer0When, 0)
4853 unlock(&pp.timersLock)
4854 unlock(&plocal.timersLock)
4856 // Flush p's write barrier buffer.
4857 if gcphase != _GCoff {
4861 for i := range pp.sudogbuf {
4862 pp.sudogbuf[i] = nil
4864 pp.sudogcache = pp.sudogbuf[:0]
4865 for j := range pp.deferpoolbuf {
4866 pp.deferpoolbuf[j] = nil
4868 pp.deferpool = pp.deferpoolbuf[:0]
4869 systemstack(func() {
4870 for i := 0; i < pp.mspancache.len; i++ {
4871 // Safe to call since the world is stopped.
4872 mheap_.spanalloc.free(unsafe.Pointer(pp.mspancache.buf[i]))
4874 pp.mspancache.len = 0
4876 pp.pcache.flush(&mheap_.pages)
4877 unlock(&mheap_.lock)
4879 freemcache(pp.mcache)
4884 if pp.timerRaceCtx != 0 {
4885 // The race detector code uses a callback to fetch
4886 // the proc context, so arrange for that callback
4887 // to see the right thing.
4888 // This hack only works because we are the only
4894 racectxend(pp.timerRaceCtx)
4899 raceprocdestroy(pp.raceprocctx)
4906 // Change number of processors.
4908 // sched.lock must be held, and the world must be stopped.
4910 // gcworkbufs must not be being modified by either the GC or the write barrier
4911 // code, so the GC must not be running if the number of Ps actually changes.
4913 // Returns list of Ps with local work, they need to be scheduled by the caller.
4914 func procresize(nprocs int32) *p {
4915 assertLockHeld(&sched.lock)
4916 assertWorldStopped()
4919 if old < 0 || nprocs <= 0 {
4920 throw("procresize: invalid arg")
4923 traceGomaxprocs(nprocs)
4926 // update statistics
4928 if sched.procresizetime != 0 {
4929 sched.totaltime += int64(old) * (now - sched.procresizetime)
4931 sched.procresizetime = now
4933 maskWords := (nprocs + 31) / 32
4935 // Grow allp if necessary.
4936 if nprocs > int32(len(allp)) {
4937 // Synchronize with retake, which could be running
4938 // concurrently since it doesn't run on a P.
4940 if nprocs <= int32(cap(allp)) {
4941 allp = allp[:nprocs]
4943 nallp := make([]*p, nprocs)
4944 // Copy everything up to allp's cap so we
4945 // never lose old allocated Ps.
4946 copy(nallp, allp[:cap(allp)])
4950 if maskWords <= int32(cap(idlepMask)) {
4951 idlepMask = idlepMask[:maskWords]
4952 timerpMask = timerpMask[:maskWords]
4954 nidlepMask := make([]uint32, maskWords)
4955 // No need to copy beyond len, old Ps are irrelevant.
4956 copy(nidlepMask, idlepMask)
4957 idlepMask = nidlepMask
4959 ntimerpMask := make([]uint32, maskWords)
4960 copy(ntimerpMask, timerpMask)
4961 timerpMask = ntimerpMask
4966 // initialize new P's
4967 for i := old; i < nprocs; i++ {
4973 atomicstorep(unsafe.Pointer(&allp[i]), unsafe.Pointer(pp))
4977 if _g_.m.p != 0 && _g_.m.p.ptr().id < nprocs {
4978 // continue to use the current P
4979 _g_.m.p.ptr().status = _Prunning
4980 _g_.m.p.ptr().mcache.prepareForSweep()
4982 // release the current P and acquire allp[0].
4984 // We must do this before destroying our current P
4985 // because p.destroy itself has write barriers, so we
4986 // need to do that from a valid P.
4989 // Pretend that we were descheduled
4990 // and then scheduled again to keep
4993 traceProcStop(_g_.m.p.ptr())
5007 // g.m.p is now set, so we no longer need mcache0 for bootstrapping.
5010 // release resources from unused P's
5011 for i := nprocs; i < old; i++ {
5014 // can't free P itself because it can be referenced by an M in syscall
5018 if int32(len(allp)) != nprocs {
5020 allp = allp[:nprocs]
5021 idlepMask = idlepMask[:maskWords]
5022 timerpMask = timerpMask[:maskWords]
5027 for i := nprocs - 1; i >= 0; i-- {
5029 if _g_.m.p.ptr() == p {
5037 p.link.set(runnablePs)
5041 stealOrder.reset(uint32(nprocs))
5042 var int32p *int32 = &gomaxprocs // make compiler check that gomaxprocs is an int32
5043 atomic.Store((*uint32)(unsafe.Pointer(int32p)), uint32(nprocs))
5047 // Associate p and the current m.
5049 // This function is allowed to have write barriers even if the caller
5050 // isn't because it immediately acquires _p_.
5052 //go:yeswritebarrierrec
5053 func acquirep(_p_ *p) {
5054 // Do the part that isn't allowed to have write barriers.
5057 // Have p; write barriers now allowed.
5059 // Perform deferred mcache flush before this P can allocate
5060 // from a potentially stale mcache.
5061 _p_.mcache.prepareForSweep()
5068 // wirep is the first step of acquirep, which actually associates the
5069 // current M to _p_. This is broken out so we can disallow write
5070 // barriers for this part, since we don't yet have a P.
5072 //go:nowritebarrierrec
5074 func wirep(_p_ *p) {
5078 throw("wirep: already in go")
5080 if _p_.m != 0 || _p_.status != _Pidle {
5085 print("wirep: p->m=", _p_.m, "(", id, ") p->status=", _p_.status, "\n")
5086 throw("wirep: invalid p state")
5090 _p_.status = _Prunning
5093 // Disassociate p and the current m.
5094 func releasep() *p {
5098 throw("releasep: invalid arg")
5100 _p_ := _g_.m.p.ptr()
5101 if _p_.m.ptr() != _g_.m || _p_.status != _Prunning {
5102 print("releasep: m=", _g_.m, " m->p=", _g_.m.p.ptr(), " p->m=", hex(_p_.m), " p->status=", _p_.status, "\n")
5103 throw("releasep: invalid p state")
5106 traceProcStop(_g_.m.p.ptr())
5114 func incidlelocked(v int32) {
5116 sched.nmidlelocked += v
5123 // Check for deadlock situation.
5124 // The check is based on the number of running M's; if that is 0, we have a deadlock.
5125 // sched.lock must be held.
5127 assertLockHeld(&sched.lock)
5129 // For -buildmode=c-shared or -buildmode=c-archive it's OK if
5130 // there are no running goroutines. The calling program is
5131 // assumed to be running.
5132 if islibrary || isarchive {
5136 // If we are dying because of a signal caught on an already idle thread,
5137 // freezetheworld will cause all running threads to block.
5138 // And runtime will essentially enter into deadlock state,
5139 // except that there is a thread that will call exit soon.
5144 // If we are not running under cgo, but we have an extra M then account
5145 // for it. (It is possible to have an extra M on Windows without cgo to
5146 // accommodate callbacks created by syscall.NewCallback. See issue #6751
5149 if !iscgo && cgoHasExtraM {
5150 mp := lockextra(true)
5151 haveExtraM := extraMCount > 0
5158 run := mcount() - sched.nmidle - sched.nmidlelocked - sched.nmsys
5163 print("runtime: checkdead: nmidle=", sched.nmidle, " nmidlelocked=", sched.nmidlelocked, " mcount=", mcount(), " nmsys=", sched.nmsys, "\n")
5164 throw("checkdead: inconsistent counts")
5168 forEachG(func(gp *g) {
5169 if isSystemGoroutine(gp, false) {
5172 s := readgstatus(gp)
5173 switch s &^ _Gscan {
5180 print("runtime: checkdead: find g ", gp.goid, " in status ", s, "\n")
5181 throw("checkdead: runnable g")
5184 if grunning == 0 { // possible if main goroutine calls runtime·Goexit()
5185 unlock(&sched.lock) // unlock so that GODEBUG=scheddetail=1 doesn't hang
5186 throw("no goroutines (main called runtime.Goexit) - deadlock!")
5189 // Maybe jump time forward for playground.
5191 when, _p_ := timeSleepUntil()
5194 for pp := &sched.pidle; *pp != 0; pp = &(*pp).ptr().link {
5195 if (*pp).ptr() == _p_ {
5202 // There should always be a free M since
5203 // nothing is running.
5204 throw("checkdead: no m for timer")
5207 notewakeup(&mp.park)
5212 // There are no goroutines running, so we can look at the P's.
5213 for _, _p_ := range allp {
5214 if len(_p_.timers) > 0 {
5219 getg().m.throwing = -1 // do not dump full stacks
5220 unlock(&sched.lock) // unlock so that GODEBUG=scheddetail=1 doesn't hang
5221 throw("all goroutines are asleep - deadlock!")
5224 // forcegcperiod is the maximum time in nanoseconds between garbage
5225 // collections. If we go this long without a garbage collection, one
5226 // is forced to run.
5228 // This is a variable for testing purposes. It normally doesn't change.
5229 var forcegcperiod int64 = 2 * 60 * 1e9
5231 // Always runs without a P, so write barriers are not allowed.
5233 //go:nowritebarrierrec
5240 // For syscall_runtime_doAllThreadsSyscall, sysmon is now sufficiently
5241 // up and running to participate in fixups.
5242 atomic.Store(&sched.sysmonStarting, 0)
5244 lasttrace := int64(0)
5245 idle := 0 // how many cycles in succession we have not woken anybody up
5249 if idle == 0 { // start with 20us sleep...
5251 } else if idle > 50 { // start doubling the sleep after 1ms...
5254 if delay > 10*1000 { // up to 10ms
5260 // sysmon should not enter deep sleep if schedtrace is enabled so that
5261 // it can print that information at the right time.
5263 // It should also not enter deep sleep if there are any active P's so
5264 // that it can retake P's from syscalls, preempt long running G's, and
5265 // poll the network if all P's are busy for long stretches.
5267 // It should wakeup from deep sleep if any P's become active either due
5268 // to exiting a syscall or waking up due to a timer expiring so that it
5269 // can resume performing those duties. If it wakes from a syscall it
5270 // resets idle and delay as a bet that since it had retaken a P from a
5271 // syscall before, it may need to do it again shortly after the
5272 // application starts work again. It does not reset idle when waking
5273 // from a timer to avoid adding system load to applications that spend
5274 // most of their time sleeping.
5276 if debug.schedtrace <= 0 && (sched.gcwaiting != 0 || atomic.Load(&sched.npidle) == uint32(gomaxprocs)) {
5278 if atomic.Load(&sched.gcwaiting) != 0 || atomic.Load(&sched.npidle) == uint32(gomaxprocs) {
5279 syscallWake := false
5280 next, _ := timeSleepUntil()
5282 atomic.Store(&sched.sysmonwait, 1)
5284 // Make wake-up period small enough
5285 // for the sampling to be correct.
5286 sleep := forcegcperiod / 2
5287 if next-now < sleep {
5290 shouldRelax := sleep >= osRelaxMinNS
5294 syscallWake = notetsleep(&sched.sysmonnote, sleep)
5300 atomic.Store(&sched.sysmonwait, 0)
5301 noteclear(&sched.sysmonnote)
5311 lock(&sched.sysmonlock)
5312 // Update now in case we blocked on sysmonnote or spent a long time
5313 // blocked on schedlock or sysmonlock above.
5316 // trigger libc interceptors if needed
5317 if *cgo_yield != nil {
5318 asmcgocall(*cgo_yield, nil)
5320 // poll network if not polled for more than 10ms
5321 lastpoll := int64(atomic.Load64(&sched.lastpoll))
5322 if netpollinited() && lastpoll != 0 && lastpoll+10*1000*1000 < now {
5323 atomic.Cas64(&sched.lastpoll, uint64(lastpoll), uint64(now))
5324 list := netpoll(0) // non-blocking - returns list of goroutines
5326 // Need to decrement number of idle locked M's
5327 // (pretending that one more is running) before injectglist.
5328 // Otherwise it can lead to the following situation:
5329 // injectglist grabs all P's but before it starts M's to run the P's,
5330 // another M returns from syscall, finishes running its G,
5331 // observes that there is no work to do and no other running M's
5332 // and reports deadlock.
5339 if GOOS == "netbsd" {
5340 // netpoll is responsible for waiting for timer
5341 // expiration, so we typically don't have to worry
5342 // about starting an M to service timers. (Note that
5343 // sleep for timeSleepUntil above simply ensures sysmon
5344 // starts running again when that timer expiration may
5345 // cause Go code to run again).
5347 // However, netbsd has a kernel bug that sometimes
5348 // misses netpollBreak wake-ups, which can lead to
5349 // unbounded delays servicing timers. If we detect this
5350 // overrun, then startm to get something to handle the
5353 // See issue 42515 and
5354 // https://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=50094.
5355 if next, _ := timeSleepUntil(); next < now {
5359 if atomic.Load(&scavenge.sysmonWake) != 0 {
5360 // Kick the scavenger awake if someone requested it.
5363 // retake P's blocked in syscalls
5364 // and preempt long running G's
5365 if retake(now) != 0 {
5370 // check if we need to force a GC
5371 if t := (gcTrigger{kind: gcTriggerTime, now: now}); t.test() && atomic.Load(&forcegc.idle) != 0 {
5375 list.push(forcegc.g)
5377 unlock(&forcegc.lock)
5379 if debug.schedtrace > 0 && lasttrace+int64(debug.schedtrace)*1000000 <= now {
5381 schedtrace(debug.scheddetail > 0)
5383 unlock(&sched.sysmonlock)
5387 type sysmontick struct {
5394 // forcePreemptNS is the time slice given to a G before it is
5396 const forcePreemptNS = 10 * 1000 * 1000 // 10ms
5398 func retake(now int64) uint32 {
5400 // Prevent allp slice changes. This lock will be completely
5401 // uncontended unless we're already stopping the world.
5403 // We can't use a range loop over allp because we may
5404 // temporarily drop the allpLock. Hence, we need to re-fetch
5405 // allp each time around the loop.
5406 for i := 0; i < len(allp); i++ {
5409 // This can happen if procresize has grown
5410 // allp but not yet created new Ps.
5413 pd := &_p_.sysmontick
5416 if s == _Prunning || s == _Psyscall {
5417 // Preempt G if it's running for too long.
5418 t := int64(_p_.schedtick)
5419 if int64(pd.schedtick) != t {
5420 pd.schedtick = uint32(t)
5422 } else if pd.schedwhen+forcePreemptNS <= now {
5424 // In case of syscall, preemptone() doesn't
5425 // work, because there is no M wired to P.
5430 // Retake P from syscall if it's there for more than 1 sysmon tick (at least 20us).
5431 t := int64(_p_.syscalltick)
5432 if !sysretake && int64(pd.syscalltick) != t {
5433 pd.syscalltick = uint32(t)
5434 pd.syscallwhen = now
5437 // On the one hand we don't want to retake Ps if there is no other work to do,
5438 // but on the other hand we want to retake them eventually
5439 // because they can prevent the sysmon thread from deep sleep.
5440 if runqempty(_p_) && atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) > 0 && pd.syscallwhen+10*1000*1000 > now {
5443 // Drop allpLock so we can take sched.lock.
5445 // Need to decrement number of idle locked M's
5446 // (pretending that one more is running) before the CAS.
5447 // Otherwise the M from which we retake can exit the syscall,
5448 // increment nmidle and report deadlock.
5450 if atomic.Cas(&_p_.status, s, _Pidle) {
5452 traceGoSysBlock(_p_)
5467 // Tell all goroutines that they have been preempted and they should stop.
5468 // This function is purely best-effort. It can fail to inform a goroutine if a
5469 // processor just started running it.
5470 // No locks need to be held.
5471 // Returns true if preemption request was issued to at least one goroutine.
5472 func preemptall() bool {
5474 for _, _p_ := range allp {
5475 if _p_.status != _Prunning {
5478 if preemptone(_p_) {
5485 // Tell the goroutine running on processor P to stop.
5486 // This function is purely best-effort. It can incorrectly fail to inform the
5487 // goroutine. It can inform the wrong goroutine. Even if it informs the
5488 // correct goroutine, that goroutine might ignore the request if it is
5489 // simultaneously executing newstack.
5490 // No lock needs to be held.
5491 // Returns true if preemption request was issued.
5492 // The actual preemption will happen at some point in the future
5493 // and will be indicated by the gp->status no longer being
5495 func preemptone(_p_ *p) bool {
5497 if mp == nil || mp == getg().m {
5501 if gp == nil || gp == mp.g0 {
5507 // Every call in a goroutine checks for stack overflow by
5508 // comparing the current stack pointer to gp->stackguard0.
5509 // Setting gp->stackguard0 to StackPreempt folds
5510 // preemption into the normal stack overflow check.
5511 gp.stackguard0 = stackPreempt
5513 // Request an async preemption of this P.
5514 if preemptMSupported && debug.asyncpreemptoff == 0 {
5524 func schedtrace(detailed bool) {
5531 print("SCHED ", (now-starttime)/1e6, "ms: gomaxprocs=", gomaxprocs, " idleprocs=", sched.npidle, " threads=", mcount(), " spinningthreads=", sched.nmspinning, " idlethreads=", sched.nmidle, " runqueue=", sched.runqsize)
5533 print(" gcwaiting=", sched.gcwaiting, " nmidlelocked=", sched.nmidlelocked, " stopwait=", sched.stopwait, " sysmonwait=", sched.sysmonwait, "\n")
5535 // We must be careful while reading data from P's, M's and G's.
5536 // Even if we hold schedlock, most data can be changed concurrently.
5537 // E.g. (p->m ? p->m->id : -1) can crash if p->m changes from non-nil to nil.
5538 for i, _p_ := range allp {
5540 h := atomic.Load(&_p_.runqhead)
5541 t := atomic.Load(&_p_.runqtail)
5547 print(" P", i, ": status=", _p_.status, " schedtick=", _p_.schedtick, " syscalltick=", _p_.syscalltick, " m=", id, " runqsize=", t-h, " gfreecnt=", _p_.gFree.n, " timerslen=", len(_p_.timers), "\n")
5549 // In non-detailed mode format lengths of per-P run queues as:
5550 // [len1 len2 len3 len4]
5556 if i == len(allp)-1 {
5567 for mp := allm; mp != nil; mp = mp.alllink {
5570 lockedg := mp.lockedg.ptr()
5583 print(" M", mp.id, ": p=", id1, " curg=", id2, " mallocing=", mp.mallocing, " throwing=", mp.throwing, " preemptoff=", mp.preemptoff, ""+" locks=", mp.locks, " dying=", mp.dying, " spinning=", mp.spinning, " blocked=", mp.blocked, " lockedg=", id3, "\n")
5586 forEachG(func(gp *g) {
5588 lockedm := gp.lockedm.ptr()
5597 print(" G", gp.goid, ": status=", readgstatus(gp), "(", gp.waitreason.String(), ") m=", id1, " lockedm=", id2, "\n")
5602 // schedEnableUser enables or disables the scheduling of user
5605 // This does not stop already running user goroutines, so the caller
5606 // should first stop the world when disabling user goroutines.
5607 func schedEnableUser(enable bool) {
5609 if sched.disable.user == !enable {
5613 sched.disable.user = !enable
5615 n := sched.disable.n
5617 globrunqputbatch(&sched.disable.runnable, n)
5619 for ; n != 0 && sched.npidle != 0; n-- {
5627 // schedEnabled reports whether gp should be scheduled. It returns
5628 // false if scheduling of gp is disabled.
5630 // sched.lock must be held.
5631 func schedEnabled(gp *g) bool {
5632 assertLockHeld(&sched.lock)
5634 if sched.disable.user {
5635 return isSystemGoroutine(gp, true)
5640 // Put mp on midle list.
5641 // sched.lock must be held.
5642 // May run during STW, so write barriers are not allowed.
5643 //go:nowritebarrierrec
5645 assertLockHeld(&sched.lock)
5647 mp.schedlink = sched.midle
5653 // Try to get an m from midle list.
5654 // sched.lock must be held.
5655 // May run during STW, so write barriers are not allowed.
5656 //go:nowritebarrierrec
5658 assertLockHeld(&sched.lock)
5660 mp := sched.midle.ptr()
5662 sched.midle = mp.schedlink
5668 // Put gp on the global runnable queue.
5669 // sched.lock must be held.
5670 // May run during STW, so write barriers are not allowed.
5671 //go:nowritebarrierrec
5672 func globrunqput(gp *g) {
5673 assertLockHeld(&sched.lock)
5675 sched.runq.pushBack(gp)
5679 // Put gp at the head of the global runnable queue.
5680 // sched.lock must be held.
5681 // May run during STW, so write barriers are not allowed.
5682 //go:nowritebarrierrec
5683 func globrunqputhead(gp *g) {
5684 assertLockHeld(&sched.lock)
5690 // Put a batch of runnable goroutines on the global runnable queue.
5691 // This clears *batch.
5692 // sched.lock must be held.
5693 // May run during STW, so write barriers are not allowed.
5694 //go:nowritebarrierrec
5695 func globrunqputbatch(batch *gQueue, n int32) {
5696 assertLockHeld(&sched.lock)
5698 sched.runq.pushBackAll(*batch)
5703 // Try to get a batch of G's from the global runnable queue.
5704 // sched.lock must be held.
5705 func globrunqget(_p_ *p, max int32) *g {
5706 assertLockHeld(&sched.lock)
5708 if sched.runqsize == 0 {
5712 n := sched.runqsize/gomaxprocs + 1
5713 if n > sched.runqsize {
5716 if max > 0 && n > max {
5719 if n > int32(len(_p_.runq))/2 {
5720 n = int32(len(_p_.runq)) / 2
5725 gp := sched.runq.pop()
5728 gp1 := sched.runq.pop()
5729 runqput(_p_, gp1, false)
5734 // pMask is an atomic bitstring with one bit per P.
5737 // read returns true if P id's bit is set.
5738 func (p pMask) read(id uint32) bool {
5740 mask := uint32(1) << (id % 32)
5741 return (atomic.Load(&p[word]) & mask) != 0
5744 // set sets P id's bit.
5745 func (p pMask) set(id int32) {
5747 mask := uint32(1) << (id % 32)
5748 atomic.Or(&p[word], mask)
5751 // clear clears P id's bit.
5752 func (p pMask) clear(id int32) {
5754 mask := uint32(1) << (id % 32)
5755 atomic.And(&p[word], ^mask)
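//
// A worked example of the word/mask arithmetic above: for id == 37,
// word == 37/32 == 1 and mask == 1<<(37%32) == 1<<5, so bit 5 of the
// second word represents P 37.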
5758 // updateTimerPMask clears pp's timer mask if it has no timers on its heap.
5760 // Ideally, the timer mask would be kept immediately consistent on any timer
5761 // operations. Unfortunately, updating a shared global data structure in the
5762 // timer hot path adds too much overhead in applications frequently switching
5763 // between no timers and some timers.
5765 // As a compromise, the timer mask is updated only on pidleget / pidleput. A
5766 // running P (returned by pidleget) may add a timer at any time, so its mask
5767 // must be set. An idle P (passed to pidleput) cannot add new timers while
5768 // idle, so if it has no timers at that time, its mask may be cleared.
5770 // Thus, we get the following effects on timer-stealing in findrunnable:
5772 // * Idle Ps with no timers when they go idle are never checked in findrunnable
5773 // (for work- or timer-stealing; this is the ideal case).
5774 // * Running Ps must always be checked.
5775 // * Idle Ps whose timers are stolen must continue to be checked until they run
5776 // again, even after timer expiration.
5778 // When the P starts running again, the mask should be set, as a timer may be
5779 // added at any time.
5781 // TODO(prattmic): Additional targeted updates may improve the above cases.
5782 // e.g., updating the mask when stealing a timer.
5783 func updateTimerPMask(pp *p) {
5784 if atomic.Load(&pp.numTimers) > 0 {
5788 // Looks like there are no timers; however, another P may transiently
5789 // decrement numTimers when handling a timerModified timer in
5790 // checkTimers. We must take timersLock to serialize with these changes.
5791 lock(&pp.timersLock)
5792 if atomic.Load(&pp.numTimers) == 0 {
5793 timerpMask.clear(pp.id)
5795 unlock(&pp.timersLock)
5798 // pidleput puts p on the _Pidle list.
5800 // This releases ownership of p. Once sched.lock is released it is no longer
5803 // sched.lock must be held.
5805 // May run during STW, so write barriers are not allowed.
5806 //go:nowritebarrierrec
5807 func pidleput(_p_ *p) {
5808 assertLockHeld(&sched.lock)
5810 if !runqempty(_p_) {
5811 throw("pidleput: P has non-empty run queue")
5813 updateTimerPMask(_p_) // clear if there are no timers.
5814 idlepMask.set(_p_.id)
5815 _p_.link = sched.pidle
5816 sched.pidle.set(_p_)
5817 atomic.Xadd(&sched.npidle, 1) // TODO: fast atomic
5820 // pidleget tries to get a p from the _Pidle list, acquiring ownership.
5822 // sched.lock must be held.
5824 // May run during STW, so write barriers are not allowed.
5825 //go:nowritebarrierrec
5826 func pidleget() *p {
5827 assertLockHeld(&sched.lock)
5829 _p_ := sched.pidle.ptr()
5831 // Timer may get added at any time now.
5832 timerpMask.set(_p_.id)
5833 idlepMask.clear(_p_.id)
5834 sched.pidle = _p_.link
5835 atomic.Xadd(&sched.npidle, -1) // TODO: fast atomic
5840 // runqempty reports whether _p_ has no Gs on its local run queue.
5841 // It never returns true spuriously.
5842 func runqempty(_p_ *p) bool {
5843 // Defend against a race where 1) _p_ has G1 in runqnext but runqhead == runqtail,
5844 // 2) runqput on _p_ kicks G1 to the runq, 3) runqget on _p_ empties runqnext.
5845 // Simply observing that runqhead == runqtail and then observing that runqnext == nil
5846 // does not mean the queue is empty.
5848 head := atomic.Load(&_p_.runqhead)
5849 tail := atomic.Load(&_p_.runqtail)
5850 runnext := atomic.Loaduintptr((*uintptr)(unsafe.Pointer(&_p_.runnext)))
5851 if tail == atomic.Load(&_p_.runqtail) {
5852 return head == tail && runnext == 0
5857 // To shake out latent assumptions about scheduling order,
5858 // we introduce some randomness into scheduling decisions
5859 // when running with the race detector.
5860 // The need for this was made obvious by changing the
5861 // (deterministic) scheduling order in Go 1.5 and breaking
5862 // many poorly-written tests.
5863 // With the randomness here, as long as the tests pass
5864 // consistently with -race, they shouldn't have latent scheduling
5866 const randomizeScheduler = raceenabled
5868 // runqput tries to put g on the local runnable queue.
5869 // If next is false, runqput adds g to the tail of the runnable queue.
5870 // If next is true, runqput puts g in the _p_.runnext slot.
5871 // If the run queue is full, runqput puts g on the global queue.
5872 // Executed only by the owner P.
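//
// A note on the wraparound arithmetic used below (a sketch of the invariant,
// not new behavior): runqhead and runqtail are free-running uint32 counters,
// so the occupancy is always t-h, even across overflow. For example, with
// h == 0xfffffffd and t == 2, t-h == 5, so the queue holds 5 G's.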
5873 func runqput(_p_ *p, gp *g, next bool) {
5874 if randomizeScheduler && next && fastrand()%2 == 0 {
5880 oldnext := _p_.runnext
5881 if !_p_.runnext.cas(oldnext, guintptr(unsafe.Pointer(gp))) {
5887 // Kick the old runnext out to the regular run queue.
5892 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with consumers
5894 if t-h < uint32(len(_p_.runq)) {
5895 _p_.runq[t%uint32(len(_p_.runq))].set(gp)
5896 atomic.StoreRel(&_p_.runqtail, t+1) // store-release, makes the item available for consumption
5899 if runqputslow(_p_, gp, h, t) {
5902 // the queue is not full, now the put above must succeed
5906 // Put g and a batch of work from local runnable queue on global queue.
5907 // Executed only by the owner P.
5908 func runqputslow(_p_ *p, gp *g, h, t uint32) bool {
5909 var batch [len(_p_.runq)/2 + 1]*g
5911 // First, grab a batch from local queue.
5914 if n != uint32(len(_p_.runq)/2) {
5915 throw("runqputslow: queue is not full")
5917 for i := uint32(0); i < n; i++ {
5918 batch[i] = _p_.runq[(h+i)%uint32(len(_p_.runq))].ptr()
5920 if !atomic.CasRel(&_p_.runqhead, h, h+n) { // cas-release, commits consume
5925 if randomizeScheduler {
5926 for i := uint32(1); i <= n; i++ {
5927 j := fastrandn(i + 1)
5928 batch[i], batch[j] = batch[j], batch[i]
5932 // Link the goroutines.
5933 for i := uint32(0); i < n; i++ {
5934 batch[i].schedlink.set(batch[i+1])
5937 q.head.set(batch[0])
5938 q.tail.set(batch[n])
5940 // Now put the batch on global queue.
5942 globrunqputbatch(&q, int32(n+1))
5947 // runqputbatch tries to put all the G's on q on the local runnable queue.
5948 // If the queue is full, they are put on the global queue; in that case
5949 // this will temporarily acquire the scheduler lock.
5950 // Executed only by the owner P.
5951 func runqputbatch(pp *p, q *gQueue, qsize int) {
5952 h := atomic.LoadAcq(&pp.runqhead)
5955 for !q.empty() && t-h < uint32(len(pp.runq)) {
5957 pp.runq[t%uint32(len(pp.runq))].set(gp)
5963 if randomizeScheduler {
5964 off := func(o uint32) uint32 {
5965 return (pp.runqtail + o) % uint32(len(pp.runq))
5967 for i := uint32(1); i < n; i++ {
5968 j := fastrandn(i + 1)
5969 pp.runq[off(i)], pp.runq[off(j)] = pp.runq[off(j)], pp.runq[off(i)]
5973 atomic.StoreRel(&pp.runqtail, t)
5976 globrunqputbatch(q, int32(qsize))
5981 // Get g from local runnable queue.
5982 // If inheritTime is true, gp should inherit the remaining time in the
5983 // current time slice. Otherwise, it should start a new time slice.
5984 // Executed only by the owner P.
5985 func runqget(_p_ *p) (gp *g, inheritTime bool) {
5986 // If there's a runnext, it's the next G to run.
5992 if _p_.runnext.cas(next, 0) {
5993 return next.ptr(), true
5998 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
6003 gp := _p_.runq[h%uint32(len(_p_.runq))].ptr()
6004 if atomic.CasRel(&_p_.runqhead, h, h+1) { // cas-release, commits consume
6010 // runqdrain drains the local runnable queue of _p_ and returns all goroutines in it.
6011 // Executed only by the owner P.
6012 func runqdrain(_p_ *p) (drainQ gQueue, n uint32) {
6013 oldNext := _p_.runnext
6014 if oldNext != 0 && _p_.runnext.cas(oldNext, 0) {
6015 drainQ.pushBack(oldNext.ptr())
6020 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
6026 if qn > uint32(len(_p_.runq)) { // read inconsistent h and t
6030 if !atomic.CasRel(&_p_.runqhead, h, h+qn) { // cas-release, commits consume
6034 // Note that we advance the head pointer before copying the G's out of the
6035 // local queue (the opposite of the usual order) so that we don't mess up
6036 // the statuses of G's while runqdrain() and runqsteal() are running in parallel.
6037 // By committing the head advance first, we only update gp.schedlink after
6038 // taking full ownership of the G's, and in the meantime other P's cannot
6039 // see or steal those G's from this P's runnable queue.
6040 // See https://groups.google.com/g/golang-dev/c/0pTKxEKhHSc/m/6Q85QjdVBQAJ for more details.
6041 for i := uint32(0); i < qn; i++ {
6042 gp := _p_.runq[(h+i)%uint32(len(_p_.runq))].ptr()
6049 // Grabs a batch of goroutines from _p_'s runnable queue into batch.
6050 // Batch is a ring buffer starting at batchHead.
6051 // Returns number of grabbed goroutines.
6052 // Can be executed by any P.
6053 func runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32 {
6055 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with other consumers
6056 t := atomic.LoadAcq(&_p_.runqtail) // load-acquire, synchronize with the producer
6061 // Try to steal from _p_.runnext.
6062 if next := _p_.runnext; next != 0 {
6063 if _p_.status == _Prunning {
6064 // Sleep to ensure that _p_ isn't about to run the g
6065 // we are about to steal.
6066 // The important use case here is when the g running
6067 // on _p_ ready()s another g and then almost
6068 // immediately blocks. Instead of stealing runnext
6069 // in this window, back off to give _p_ a chance to
6070 // schedule runnext. This will avoid thrashing gs
6071 // between different Ps.
6072 // A sync chan send/recv takes ~50ns as of time of
6073 // writing, so 3us gives ~50x overshoot.
6074 if GOOS != "windows" {
6077 // On windows system timer granularity is
6078 // 1-15ms, which is way too much for this
6079 // optimization. So just yield.
6083 if !_p_.runnext.cas(next, 0) {
6086 batch[batchHead%uint32(len(batch))] = next
6092 if n > uint32(len(_p_.runq)/2) { // read inconsistent h and t
6095 for i := uint32(0); i < n; i++ {
6096 g := _p_.runq[(h+i)%uint32(len(_p_.runq))]
6097 batch[(batchHead+i)%uint32(len(batch))] = g
6099 if atomic.CasRel(&_p_.runqhead, h, h+n) { // cas-release, commits consume
6105 // Steal half of the elements from the local runnable queue of p2
6106 // and put them onto the local runnable queue of p.
6107 // Returns one of the stolen elements (or nil if the steal failed).
6108 func runqsteal(_p_, p2 *p, stealRunNextG bool) *g {
6110 n := runqgrab(p2, &_p_.runq, t, stealRunNextG)
6115 gp := _p_.runq[(t+n)%uint32(len(_p_.runq))].ptr()
6119 h := atomic.LoadAcq(&_p_.runqhead) // load-acquire, synchronize with consumers
6120 if t-h+n >= uint32(len(_p_.runq)) {
6121 throw("runqsteal: runq overflow")
6123 atomic.StoreRel(&_p_.runqtail, t+n) // store-release, makes the item available for consumption
6127 // A gQueue is a double-ended queue of Gs linked through g.schedlink. A G can only
6128 // be on one gQueue or gList at a time.
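//
// A minimal sketch of typical use inside the scheduler (gp1 and gp2 are
// hypothetical *g values, not a real call site):
//
//	var q gQueue
//	q.pushBack(gp1) // FIFO: gp1 comes out first
//	q.pushBack(gp2)
//	for gp := q.pop(); gp != nil; gp = q.pop() {
//		// ... make gp runnable, e.g. via runqput or globrunqput ...
//	}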
6129 type gQueue struct {
6134 // empty reports whether q is empty.
6135 func (q *gQueue) empty() bool {
6139 // push adds gp to the head of q.
6140 func (q *gQueue) push(gp *g) {
6141 gp.schedlink = q.head
6148 // pushBack adds gp to the tail of q.
6149 func (q *gQueue) pushBack(gp *g) {
6152 q.tail.ptr().schedlink.set(gp)
6159 // pushBackAll adds all Gs in q2 to the tail of q. After this q2 must
6161 func (q *gQueue) pushBackAll(q2 gQueue) {
6165 q2.tail.ptr().schedlink = 0
6167 q.tail.ptr().schedlink = q2.head
6174 // pop removes and returns the head of queue q. It returns nil if
6176 func (q *gQueue) pop() *g {
6179 q.head = gp.schedlink
6187 // popList takes all Gs in q and returns them as a gList.
6188 func (q *gQueue) popList() gList {
6189 stack := gList{q.head}
6194 // A gList is a list of Gs linked through g.schedlink. A G can only be
6195 // on one gQueue or gList at a time.
6200 // empty reports whether l is empty.
6201 func (l *gList) empty() bool {
6205 // push adds gp to the head of l.
6206 func (l *gList) push(gp *g) {
6207 gp.schedlink = l.head
6211 // pushAll prepends all Gs in q to l.
6212 func (l *gList) pushAll(q gQueue) {
6214 q.tail.ptr().schedlink = l.head
6219 // pop removes and returns the head of l. If l is empty, it returns nil.
6220 func (l *gList) pop() *g {
6223 l.head = gp.schedlink
6228 //go:linkname setMaxThreads runtime/debug.setMaxThreads
6229 func setMaxThreads(in int) (out int) {
6231 out = int(sched.maxmcount)
6232 if in > 0x7fffffff { // MaxInt32
6233 sched.maxmcount = 0x7fffffff
6235 sched.maxmcount = int32(in)
6243 func procPin() int {
6248 return int(mp.p.ptr().id)
6257 //go:linkname sync_runtime_procPin sync.runtime_procPin
6259 func sync_runtime_procPin() int {
6263 //go:linkname sync_runtime_procUnpin sync.runtime_procUnpin
6265 func sync_runtime_procUnpin() {
6269 //go:linkname sync_atomic_runtime_procPin sync/atomic.runtime_procPin
6271 func sync_atomic_runtime_procPin() int {
6275 //go:linkname sync_atomic_runtime_procUnpin sync/atomic.runtime_procUnpin
6277 func sync_atomic_runtime_procUnpin() {
6281 // Active spinning for sync.Mutex.
6282 //go:linkname sync_runtime_canSpin sync.runtime_canSpin
6284 func sync_runtime_canSpin(i int) bool {
6285 // sync.Mutex is cooperative, so we are conservative with spinning.
6286 // Spin only a few times and only if running on a multicore machine and
6287 // GOMAXPROCS>1 and there is at least one other running P and local runq is empty.
6288 // As opposed to runtime mutex we don't do passive spinning here,
6289 // because there can be work on global runq or on other Ps.
6290 if i >= active_spin || ncpu <= 1 || gomaxprocs <= int32(sched.npidle+sched.nmspinning)+1 {
6293 if p := getg().m.p.ptr(); !runqempty(p) {
6299 //go:linkname sync_runtime_doSpin sync.runtime_doSpin
6301 func sync_runtime_doSpin() {
6302 procyield(active_spin_cnt)
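// A sketch of how a caller might combine canSpin and doSpin (tryAcquire is
// hypothetical; the real slow path lives in sync.Mutex, where these are
// visible as runtime_canSpin and runtime_doSpin):
//
//	for iter := 0; !tryAcquire() && sync_runtime_canSpin(iter); iter++ {
//		sync_runtime_doSpin()
//	}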
6305 var stealOrder randomOrder
6307 // randomOrder/randomEnum are helper types for randomized work stealing.
6308 // They allow enumerating all Ps in different pseudo-random orders without repetitions.
6309 // The algorithm is based on the fact that if we have X such that X and GOMAXPROCS
6310 // are coprime, then the sequence (i + X) % GOMAXPROCS gives the required enumeration.
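//
// For example, with count == 6 the coprimes are {1, 5}; an enumeration that
// starts at pos == 2 with inc == 5 visits 2, 1, 0, 5, 4, 3, covering every
// P exactly once before it is done.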
6311 type randomOrder struct {
6316 type randomEnum struct {
6323 func (ord *randomOrder) reset(count uint32) {
6325 ord.coprimes = ord.coprimes[:0]
6326 for i := uint32(1); i <= count; i++ {
6327 if gcd(i, count) == 1 {
6328 ord.coprimes = append(ord.coprimes, i)
6333 func (ord *randomOrder) start(i uint32) randomEnum {
6337 inc: ord.coprimes[i%uint32(len(ord.coprimes))],
6341 func (enum *randomEnum) done() bool {
6342 return enum.i == enum.count
6345 func (enum *randomEnum) next() {
6347 enum.pos = (enum.pos + enum.inc) % enum.count
6350 func (enum *randomEnum) position() uint32 {
6354 func gcd(a, b uint32) uint32 {
6361 // An initTask represents the set of initializations that need to be done for a package.
6362 // Keep in sync with ../../test/initempty.go:initTask
6363 type initTask struct {
6364 // TODO: pack the first 3 fields more tightly?
6365 state uintptr // 0 = uninitialized, 1 = in progress, 2 = done
6368 // followed by ndeps instances of an *initTask, one per package depended on
6369 // followed by nfns pcs, one per init function to run
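//
// A sketch of the in-memory layout that doInit below walks (derived from its
// pointer arithmetic; everything is word-sized):
//
//	state uintptr
//	ndeps uintptr
//	nfns  uintptr
//	deps  [ndeps]*initTask // at offsets (3+i)*PtrSize
//	fns   [nfns]uintptr    // init function PCs, at offsets (3+ndeps+i)*PtrSize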
6372 // inittrace stores statistics for init functions which are
6373 // updated by malloc and newproc when active is true.
6374 var inittrace tracestat
6376 type tracestat struct {
6377 active bool // init tracing activation status
6378 id int64 // init goroutine id
6379 allocs uint64 // heap allocations
6380 bytes uint64 // heap allocated bytes
6383 func doInit(t *initTask) {
6385 case 2: // fully initialized
6387 case 1: // initialization in progress
6388 throw("recursive call during initialization - linker skew")
6389 default: // not initialized yet
6390 t.state = 1 // initialization in progress
6392 for i := uintptr(0); i < t.ndeps; i++ {
6393 p := add(unsafe.Pointer(t), (3+i)*sys.PtrSize)
6394 t2 := *(**initTask)(p)
6399 t.state = 2 // initialization done
6408 if inittrace.active {
6410 // Load stats non-atomically since inittrace is updated only by this init goroutine.
6414 firstFunc := add(unsafe.Pointer(t), (3+t.ndeps)*sys.PtrSize)
6415 for i := uintptr(0); i < t.nfns; i++ {
6416 p := add(firstFunc, i*sys.PtrSize)
6417 f := *(*func())(unsafe.Pointer(&p))
6421 if inittrace.active {
6423 // Load stats non-atomically since inittrace is updated only by this init goroutine.
6426 f := *(*func())(unsafe.Pointer(&firstFunc))
6427 pkg := funcpkgpath(findfunc(abi.FuncPCABIInternal(f)))
6430 print("init ", pkg, " @")
6431 print(string(fmtNSAsMS(sbuf[:], uint64(start-runtimeInitTime))), " ms, ")
6432 print(string(fmtNSAsMS(sbuf[:], uint64(end-start))), " ms clock, ")
6433 print(string(itoa(sbuf[:], after.bytes-before.bytes)), " bytes, ")
6434 print(string(itoa(sbuf[:], after.allocs-before.allocs)), " allocs")
6438 t.state = 2 // initialization done