klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
<heat> huh, intel pt results seem to be much different than normal perf record -F999
rnicholl1 has joined #osdev
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 265 seconds]
goliath has quit [Quit: SIGSEGV]
rnicholl1 has quit [Quit: My laptop has gone to sleep.]
[itchyjunk] has quit [Remote host closed the connection]
rnicholl1 has joined #osdev
gog has quit [Ping timeout: 250 seconds]
foudfou has quit [Remote host closed the connection]
foudfou has joined #osdev
rnicholl1 has quit [Quit: My laptop has gone to sleep.]
gabi-250 has quit [Remote host closed the connection]
gabi-250 has joined #osdev
rnicholl1 has joined #osdev
rnicholl1 has quit [Quit: My laptop has gone to sleep.]
Turn_Left has quit [Read error: Connection reset by peer]
terrorjack has quit [Quit: The Lounge - https://thelounge.chat]
terrorjack has joined #osdev
shikhin has quit [Quit: Quittin'.]
heat has quit [Ping timeout: 240 seconds]
shikhin has joined #osdev
dude12312414 has joined #osdev
dude12312414 has quit [Client Quit]
Arthuria has joined #osdev
osmten has joined #osdev
Arthuria has quit [Remote host closed the connection]
Arthuria has joined #osdev
Arthuria has quit [Remote host closed the connection]
m3a has joined #osdev
slidercrank has joined #osdev
antranigv has quit [Ping timeout: 268 seconds]
antranigv has joined #osdev
<mjg> heat but is it misaligned to begin with?
<mjg> where is your bench codez
mjg has quit [*.net *.split]
linkdd has quit [*.net *.split]
bgs has joined #osdev
mjg has joined #osdev
<mjg> heat ok i read the asm, it is misaligned indeed
pmaz has joined #osdev
linkdd has joined #osdev
Burgundy has joined #osdev
slidercrank has quit [Ping timeout: 268 seconds]
nyah has joined #osdev
slidercrank has joined #osdev
Left_Turn has joined #osdev
gog has joined #osdev
gog has quit [Client Quit]
gog has joined #osdev
GeDaMo has joined #osdev
slidercrank has quit [Remote host closed the connection]
slidercrank has joined #osdev
goliath has joined #osdev
osmten has quit [Quit: Client closed]
<ddevault> completed a drop-in replacement for the Hare serial driver in C
<gog> hi
<ddevault> still a few things to do but C support is going pretty well :)
<gog> :)
<Ermine> good job!
antranigv has quit [Quit: ZNC 1.8.2 - https://znc.in]
<bslsk05> ​git.sr.ht: ~sircmpwn/hello-mercury-c: main.c - sourcehut git
<ddevault> thanks :)
antranigv has joined #osdev
bauen1 has quit [Ping timeout: 240 seconds]
bnchs has quit [Ping timeout: 268 seconds]
bnchs has joined #osdev
pharonix71 has quit [Ping timeout: 240 seconds]
pharonix71 has joined #osdev
antranigv has quit [Quit: ZNC 1.8.2 - https://znc.in]
plarke has quit [Remote host closed the connection]
heat has joined #osdev
<heat> mjg, ok, so I have some theories
<heat> 1) storing is working like a weird sfence; i tried adding explicit sfences but they had 0 impact on perf (what does this mean? I've never measured sfences in a normal setting)
GreaseMonkey has quit [Quit: No Ping reply in 180 seconds.]
<heat> 2) aligning may be slower than just fuckin copying. unlikely. glibc also aligns, but with vector-sized stores ofc
<heat> 3) something funkay is going on. I don't know how to interpret intel PT's results
<heat> 4) maybe it's just not worth it for kabylake
<heat> also: why tf are you not enabling erms in your stringops its 2023
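ERMS ("enhanced rep movsb") as heat mentions can be sketched with a single inline-asm statement; this is a minimal, hypothetical version for illustration, not anyone's actual kernel stringops:

```c
#include <stddef.h>

/* Minimal memcpy built on "rep movsb", which microcode fast-paths on CPUs
 * advertising ERMS (CPUID.07H:EBX bit 9). A sketch only: real stringops
 * also pick between this and vector loops based on size and alignment. */
static void *memcpy_erms(void *dst, const void *src, size_t n)
{
    void *d = dst;
    __asm__ volatile("rep movsb"
                     : "+D"(d), "+S"(src), "+c"(n)
                     :
                     : "memory");
    return dst;
}
```

The `+D`/`+S`/`+c` constraints pin destination, source, and count to RDI/RSI/RCX as `rep movsb` requires; x86 only.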
pmaz has quit [Quit: Konversation terminated!]
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 260 seconds]
linear_cannon has quit [Remote host closed the connection]
bauen1 has joined #osdev
zaquest has quit [Remote host closed the connection]
zaquest has joined #osdev
grange_c0 has joined #osdev
Terlisimo has quit [Quit: Connection reset by beer]
frkazoid333 has quit [Ping timeout: 240 seconds]
bauen1 has quit [Ping timeout: 276 seconds]
bauen1 has joined #osdev
Terlisimo has joined #osdev
dayimproper has joined #osdev
<mjg> heat: i have negative brainpower this week
<mjg> heat: prod me monday
<gog> mood
<sham1> doom
dayimproper has quit [Ping timeout: 268 seconds]
<zid> doog
<sbalmos> Morbo? dooooom
<gog> it's morbin time
<GeDaMo> The times, they are a-Morbin'
<gog> i've got the morbs
<bnchs> aaaaaaaaa
<bnchs> morbin
[itchyjunk] has joined #osdev
[_] has joined #osdev
[_] has quit [Remote host closed the connection]
dude12312414 has joined #osdev
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
dayimproper has joined #osdev
CryptoDavid has joined #osdev
dude12312414 has joined #osdev
FreeFull has joined #osdev
<heat> morb be gog
<gog> angel of the morbin
<heat> gog u are gorg
<netbsduser```> just experimented with chatgpt as youth are wont to do
<heat> or better, morg
<netbsduser```> i hoped it would help me writing a new namei that isn't incomprehensible and unmaintainable like my current one
<heat> no.
<heat> i tried making it explain how to read an ext4 inode, or making it explain how a maple tree works, would be an idea
<heat> but it doesn't work
<heat> chatgpt and the c stands for crap
<netbsduser```> what it made was mostly unusable, i never saw anything quite so wanton and ghastly, terrible
theboringkid has joined #osdev
<heat> yes
<netbsduser```> it did do a reasonable job at splitting up a pathname by / delimiters but that's really not what i wanted it to help me with
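Splitting a pathname on '/' delimiters, the one part netbsduser``` says came out usable, is the easy bit of namei; a toy C sketch (the function name is made up, and real lookups also handle ".", "..", trailing slashes, and the stop-at-2nd-last mode discussed below):

```c
#include <stddef.h>

/* Split 'path' into components in-place, collapsing runs of '/'.
 * Returns the number of components stored in comp[] (up to max). */
static int path_split(char *path, char **comp, int max)
{
    int n = 0;
    char *p = path;
    while (*p && n < max) {
        while (*p == '/')
            *p++ = '\0';            /* terminate previous component */
        if (*p)
            comp[n++] = p;
        while (*p && *p != '/')
            p++;
    }
    return n;
}
```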
<heat> anyway if you want a good namei you better look at what's wrong with yours atm, plus what's wrong with others
xenos1984 has quit [Ping timeout: 260 seconds]
<netbsduser```> what was wrong with mine was that it was thrown together ad-hoc and turned into a mess when it gained support for things like lookup-to-the-2nd-last-component
<netbsduser```> also it was recursive when there were symlinks
<netbsduser```> and for a reason i never figured out, reasoning instead i'd be better off rewriting it, it didn't refcount vnodes properly. the new one is much nicer
<netbsduser```> i am just putting the finishing touches on it
<heat> my (slow paced) namei refactoring took inspo from linux: https://github.com/heatd/Onyx/blob/master/kernel/kernel/fs/namei.cpp#L532
<bslsk05> ​github.com: Onyx/namei.cpp at master · heatd/Onyx · GitHub
<heat> although linux's namei is incomprehensible garbage, and so are the typical BSD nameis
<heat> every namei sucks major balls
<netbsduser```> >LOOKUP_DONT_DO_LAST_NAME
<netbsduser```> everyone has their own take on what that should be called
<netbsduser```> in managarm it's kResolvePrefix. i went for LOOKUP_2ND_LAST
<netbsduser```> and aye, i could hardly make heads nor tails of the nameis in the 'real unixes'
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<heat> linux's very much grew organically out of crap code into modern-er crap code
<heat> just like the elf loader, etc. just old ass code
<heat> and aiui that is also the case with BSD vfs
<heat> oh!
<bslsk05> ​github.com: Onyx/namei.c at master · heatd/Onyx · GitHub
xenos1984 has joined #osdev
<netbsduser```> heat: this might be the test suite i was looking for all my life
<netbsduser```> or at least since i wanted to rewrite namei(), and as the saying goes, life begins when you rewrite namei()
<heat> haha yes
<heat> linux refactored some nice bits of their namei back in 2020 and accidentally broke some obscure behavior (the O_DIRECTORY | O_CREAT thing). this found it
<heat> doing refactoring of namei without any sort of testsuite is madness
<heat> if you think of any improvements feel free to send a PR
slidercrank has quit [Ping timeout: 256 seconds]
goliath has quit [Quit: SIGSEGV]
dayimproper has quit [Ping timeout: 268 seconds]
<sham1> Doing any refactoring without tests is madness and broken
<sham1> If you're not doing TDD, you're doing it wrong
theboringkid has quit [Quit: Bye]
<gog> we are absolutely doing it wrong
gog has quit [Quit: Konversation terminated!]
<Ermine> gog: may I pet you
<puck> how can you pet someone who isn't there,
<heat> whos gog
<heat> never heard
<Ermine> remote procedure call
xenos1984 has quit [Ping timeout: 246 seconds]
<sham1> Better be asynchronous
<heat> AWAIT
<zid> yield_to_mommy_sched();
nvmd has joined #osdev
heat_ has joined #osdev
nvmd has quit [Quit: WeeChat 3.8]
heat has quit [Read error: Connection reset by peer]
heat_ is now known as heat
nvmd has joined #osdev
xenos1984 has joined #osdev
frkzoid has joined #osdev
goliath has joined #osdev
elastic_dog has quit [Killed (molybdenum.libera.chat (Nickname regained by services))]
elastic_dog has joined #osdev
slidercrank has joined #osdev
heat has quit [Remote host closed the connection]
heat_ has joined #osdev
<mrvn> Anyone have a per process/thread namei cache?
<mrvn> like store the 2nd-last dir of the path walk in the thread in case they will open a file in the same dir again.
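mrvn's idea, remembering the parent directory of the last walk per thread, might look roughly like this; everything here (struct and function names, the reference-transfer convention) is hypothetical, and the hard part, invalidating the cache on rename/unlink/unmount, is omitted:

```c
#include <stddef.h>
#include <string.h>

struct vnode { int refcount; };        /* stand-in for a real vnode */

/* Hypothetical per-thread cache of the 2nd-last dir of the last walk,
 * so opening another file in the same directory can skip the full walk. */
struct dircache {
    char prefix[256];                  /* directory part of the last path */
    struct vnode *dir;                 /* held reference, or NULL */
};
static _Thread_local struct dircache dc;

static void dircache_store(const char *prefix, struct vnode *dir)
{
    strncpy(dc.prefix, prefix, sizeof(dc.prefix) - 1);
    dc.prefix[sizeof(dc.prefix) - 1] = '\0';
    dc.dir = dir;                      /* caller transfers a reference */
}

/* Hit only if 'path' is prefix + '/' + a single final component. */
static struct vnode *dircache_lookup(const char *path)
{
    size_t len = strlen(dc.prefix);
    if (dc.dir && len && strncmp(path, dc.prefix, len) == 0 &&
        path[len] == '/' && strchr(path + len + 1, '/') == NULL)
        return dc.dir;
    return NULL;
}
```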
heat_ has quit [Remote host closed the connection]
heat__ has joined #osdev
bauen1 has quit [Ping timeout: 256 seconds]
lanodan has joined #osdev
heat__ is now known as heat
heat is now known as mild-warmth
mild-warmth is now known as heat
slidercrank has quit [Read error: Connection reset by peer]
slidercrank has joined #osdev
slidercrank has quit [Remote host closed the connection]
slidercrank has joined #osdev
linear_cannon has joined #osdev
<heat> does anyone know how fast qemu kvmclock updates?
dude12312414 has joined #osdev
bauen1 has joined #osdev
goliath has quit [Quit: SIGSEGV]
slidercrank has quit [Ping timeout: 256 seconds]
CryptoDavid has quit [Quit: Connection closed for inactivity]
<puck> heat: do you trust the TSC?
<heat> yes
<puck> okay so the kvmclock structure only updates when you write the MSR
<heat> hm?
<heat> oh. i had not realized that
<puck> ah, system_time_new does auto-update randomly
<heat> so where's the win? I still vmexit
<heat> ah yes, that must've been what I read
<puck> to be precise, whenever timekeeping_update is called, i think
<heat> like, erm, the TSC is fine here, but it vmexits, which I would like to avoid
<heat> having a memory mapped clock here is ideal, but I don't know its accuracy
<puck> heat: are you sure rdtsc vmexits?
<heat> yep
<puck> how old is your cpu? :p
<heat> kbl
<heat> maybe I fucked up the qemu flags, but a quick CLOCK_REALTIME bench under the vm showed it slowed down a whole lot vs on the host
<puck> both intel and amd have virtualisation extensions to do TSC offsetting
<heat> i know, but does qemu leverage it?
<puck> it's a kvm thing
<puck> looks like linux can't even handle EXIT_REASON_RDTSC natively?
<puck> heat: like, i think you might want to test if your VM even has the rdtsc feature turned on
heat has quit [Ping timeout: 240 seconds]
heat_ has joined #osdev
terminalpusher has joined #osdev
<heat_> i do have rdtsc plus invtsc
<heat_> (you need to force invtsc in qemu's cmdline)
<heat_> i am fairly sure it is vmexiting but I'll re-test in a bit once I get a setup
jimbzy has quit [Ping timeout: 248 seconds]
<puck> it'd be fun to see the vmexit reason
jimbzy has joined #osdev
<puck> probably try running `perf kvm stat live -a`
heat__ has joined #osdev
heat_ has quit [Read error: Connection reset by peer]
heat__ has quit [Remote host closed the connection]
heat_ has joined #osdev
GeDaMo has quit [Quit: That's it, you people have stood in my way long enough! I'm going to clown college!]
<heat_> puck, perf kvm stat live -a doesn't work
<heat_> :(
<puck> heat_: ...are you kvming???
<heat_> yes, the command doesn't make sense
<heat_> I get Usage: perf kvm [<options>] {top|record|report|diff|buildid-list|stat} ....
<heat_> also fwiw my perf kvm record has been permanently broken
<heat_> hell, doing it right now is making guest /sbin/init die
<heat_> all this tooling is so broken it's not even funny
heat_ is now known as heat
<puck> heat: for me:
<puck> Usage: perf kvm stat live [<options>]
<puck> -a, --all-cpus system-wide collection from all CPUs
<puck> and i tested it out, it does show up a qemu i'm running in another terminal
<heat> yeah that does not work
<heat> (here)
<heat> perf version 6.2.gc9c3395d5e3d
<puck> 6.1.26
<puck> like, does it just not load into the "analyze events" screen?
<mjg> i don't know what's up with perf kvm
<mjg> it worked for me once 8-ish years ago
<mjg> past that no dice
<heat> ikr
<heat> perf kvm is permanently fucked
<bslsk05> ​gist.github.com: gist:d517d7a6eb2e7150e748c7103997eaf8 · GitHub
<heat> mjg, btw can you confirm rdtsc always vmexits under qemu?
<puck> heat: what does perf kvm stat show?
<heat> the exact same thing
<mjg> rdtsc*p* does, i don't know about the other one
<mjg> ez to test if it vmexits in your setting tho
<puck> wait a minute what's with your perf version
<heat> i'm using arch's perf
<heat> mjg, does kvmclock work well?
<mjg> as in does it avoid vmexits? i heard rather contradicting stories and have not testd myself
<heat> yes
<heat> like, in theory you're reading from memory. but you probably lose a bunch of accuracy
* mjg is not much of a vm guy
<puck> have you looked at the kvmclock pages?
<mjg> in both meanings
<bslsk05> ​www.kernel.org: KVM-specific MSRs — The Linux Kernel documentation
<heat> yes i have
<puck> MSR_KVM_SYSTEM_TIME_NEW just encodes a mapping of TSC value to wall time
<puck> uh, system time
<puck> there's a different wall time one that just returns current wall clock
<heat> I don't want wallclock, I want monotonic
<puck> yeah so that just returns a TSC mapping and requires you use the TSC
<heat> like, if you need any sort of rdtsc for sub-ms precision or some shit, i'm fucked
<heat> no
<heat> system_time:
<heat> a host notion of monotonic time, including sleep time at the time this structure was last updated. Unit is nanoseconds.
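The structure heat is quoting is `pvclock_vcpu_time_info` from the KVM-specific MSRs doc; converting a raw TSC reading into guest nanoseconds follows the documented fixed-point algorithm. The conversion helper below is a sketch of that algorithm (callers must also retry while `version` is odd or changes across the read, which is not shown):

```c
#include <stdint.h>

/* Layout per the KVM-specific MSRs documentation (MSR_KVM_SYSTEM_TIME_NEW). */
struct pvclock_vcpu_time_info {
    uint32_t version;           /* odd while the host is updating it */
    uint32_t pad0;
    uint64_t tsc_timestamp;     /* guest TSC value at the last host update */
    uint64_t system_time;       /* host monotonic ns at that update */
    uint32_t tsc_to_system_mul; /* 32.32 fixed-point TSC->ns multiplier */
    int8_t   tsc_shift;
    uint8_t  flags;
    uint8_t  pad[2];
} __attribute__((packed));

/* Convert a TSC reading to nanoseconds using a consistent snapshot. */
static uint64_t pvclock_to_ns(const struct pvclock_vcpu_time_info *t,
                              uint64_t tsc)
{
    uint64_t delta = tsc - t->tsc_timestamp;
    if (t->tsc_shift >= 0)
        delta <<= t->tsc_shift;
    else
        delta >>= -t->tsc_shift;
    /* 64x32-bit multiply, keeping the top 64 bits of the 96-bit product */
    return t->system_time +
           (uint64_t)(((unsigned __int128)delta * t->tsc_to_system_mul) >> 32);
}
```

This is why "you're supposed to use the tsc": the structure only pins down one (tsc, time) pair plus a rate, and rdtsc interpolates from there without an exit.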
<puck> yeah okay you could just ram the msr
<mjg> boot a linux vm in qemu and bench gettimeofday
<mjg> if it suckkkz then you have your answer
<puck> okay let's just try that lol
<puck> now i want to know as well what is going on here
<heat> linux heavily favours kvm-clock btw
<mjg> right
<heat> The interval between updates of this structure is arbitrary and implementation-dependent.
<heat> this does not fucking help!!!1!1!
<puck> heat: yes, because you're supposed to use the tsc
<heat> if i'm supposed to use the tsc, why would I use kvmclock?
<puck> you can't match the VM's TSC to a *time* easily, i think
<heat> sure I can
<puck> okay so if i idle a linux vm i see mostly samples for msr (40-ish?), a few npf, and a few hlt
<puck> if i run clock_gettime(CLOCK_MONOTONIC, &tp) in a loop, i see interrupts jump up to ~142
<heat> -cpu host,+invtsc I hope?
<puck> nope lol. hold on
<heat> ok so the kvmclock variant should be "-cpu host,+invtsc", the tsc one should be "-cpu host,+invtsc,-kvm"
<puck> i'm running with clocksources jiffies, kvm-clock, acpi_pm, tsc, and rdtsc is active
<puck> i get roughly the same amount of interrupt samples if i just run rdtsc in a loop, hrm
<puck> ..or just an empty while loop, sure
<heat> have we been bamboozled by mjg all along
<heat> try rdtscp
GreaseMonkey has joined #osdev
<puck> illegal instruction
<heat> what?
<heat> how old is your cpu?
<puck> oh wait is this why kvm perf stat works reliably for me
<puck> heat: uhh when was the 3950x released
<heat> wtf does amd not have rdtscp??
<puck> does rdtscp make sense if you have a global tsc?
<heat> yes
<heat> cat /proc/cpuinfo | grep rdtscp
<heat> on the host
<puck> has it
<zid> are you virtualized without it
<puck> probably
<heat> -cpu host should pick that up, unless it literally can't
<puck> i just -enable-kvm'd it lol
<heat> hm?
<puck> okay one sec while i retry this
<puck> heat: same interrupt count
<puck> heat: anyways, the VMX code doesn't know how to handle rdtsc vmexits
goliath has joined #osdev
<puck> heat: so it *couldn't*, even if it wanted
<heat> someone's lying here
<heat> mjg: your turn
<heat> i think puck is saying that FreeBSD SUCKS SO BAD the CPU vmexits on purpose
<mjg> bro
<puck> oh, freebsd
<mjg> go do that rdstscp in a loop will-it-scale style
<mjg> and see the perffz
<heat> yes, we did that member?
<mjg> and then check vm exit counts
<mjg> right
<mjg> so there you go
<heat> so i have no fucking clue whats going on
<puck> rdtscp: roughly 130 interrupts a second. rdtsc: 130 interrupts a second, roughly
<bslsk05> ​www.theregister.com: Red Hat layoffs prompt calls to unionize, CEO responds • The Register
<mjg> :D
<puck> nop: roughly 130 interrupts a second
<heat> 130 a second sounds like 100HZ?
<mjg> > "We wanted to believe in the Open Organization, that we were a People Company. In the end, executives treated us as disposable
<mjg> heat: who is lying in this one, heat
<puck> heat: so like, i'm not even calling into kernelspace here
<heat> i don't know i lost my focus can you summarize that in a youtube short with AI presidents
<heat> puck, if you switch to acpi-pm or hpet do you get vmexits there?
dude12312414 has quit [Quit: THE RAM IS TOO DAMN HIGH]
<puck> heat: no because i'm still just calling rdtsc
<puck> heat: but also i would need to kill the tsc clocksource instead
<puck> actually, sec, it's set to tsc now
<heat> no, you can switch the clocksource dynamically
<heat> just call gettimeofday
<heat> you'll start seeing syscalls too
<puck> heat: nope
<puck> no syscalls here
<puck> calling gettimeofday in a loop
<puck> with kvmmclock as clocksource
<puck> i switched to acpi_pm and yup, io samples climbed to 9k
<puck> hpet hits ~7k npf
<heat> with tsc you also get no samples?
<puck> just ~130 interrupts
<puck> because rdtsc does not vmexit
<puck> because if it did i think the kernel would probably oops
xenos1984 has quit [Read error: Connection reset by peer]
<heat> welp idk
<heat> mjg back up your claimz dawg!
<mjg> i claim rdtsc*p* rolls with vm exit
<mjg> you can trivially check it performs like shit in a vm vs bare metal
<puck> actually let's do that specific one
<mjg> and you can perf somethin' to see the exits roll up as you execute it in a loop
<mjg> i'm pretty sure we did this very exercise months back
<puck> mjg: no, rdtsc doesn't exit here, and can't on at least intel
<mjg> i said rdtsc FUCKING p
<mjg> *p*
<mjg> i made no claims on rdtsc
<puck> same with rdtscp
<mjg> how do you invoke it
<heat> did you manage to passthru rdtscp?
<mjg> i mean are you sure you're actually doing it
<puck> mjg: __asm__ __volatile__ ("rdtscp" : [registers here])
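Filled out with the standard register constraints (RDTSCP returns the TSC in EDX:EAX and IA32_TSC_AUX in ECX), puck's snippet looks something like this; x86-64 only:

```c
#include <stdint.h>

/* RDTSCP: read the 64-bit TSC and store IA32_TSC_AUX (which the OS
 * typically loads with a CPU/node id) into *aux. */
static inline uint64_t rdtscp(uint32_t *aux)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtscp" : "=a"(lo), "=d"(hi), "=c"(*aux));
    return ((uint64_t)hi << 32) | lo;
}
```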
<mjg> what's your cpu and qemu version
<puck> 3950x, 8.0, linux 6.1 or so
<puck> 6.2.13
<mjg> you can do bare metal and in-vm execution on this one?
<puck> yes
<mjg> give me 5
<puck> you know how expensive 3950xs are?
<puck> :p
<mjg> :]
<heat> everybody gangsta until mjg shows up with will-it-scale
<mjg> then give me 3
<puck> oh that's will-it-scale
<heat> in 2 hours we'll be edging over flamegraphs
<mjg> ignore that
<mjg> cc -O0 main.c rdtscp.c
<mjg> and execute both on bare metal and in a vm kthx
<mjg> do you know which funny system is fucking DEMOLISHED in a vm setting due to all the excess rdtscps?
<mjg> ILLUMOS
<mjg> they do it a metric fuckton
<puck> mjg: are these counts?
<puck> like, of rdtscp per second?
<puck> i'll just run it with like 20 iterations
<mjg> yea
<puck> mjg: average: 47303976
<mjg> is this -O0
<puck> yes, both -O0
<puck> mjg: average: 552240270
<puck> uhh
<puck> typo
<mjg> there you go
<puck> mjg: average: 55240270
<mjg> uh
<puck> yes, the VM is 86% the perf of the host
bgs has quit [Remote host closed the connection]
<puck> and i suspect i'm undercounting a tiny bit
<mjg> first, it is doing *less* on bare metal than i would normally expect
<mjg> my haswell laptop is doing over 80 mln
<mjg> however, i concede it is doing "fine" in the vm
nvmd has quit [Quit: WeeChat 3.8]
<mjg> it may be your cpu does not need the vm exit
<mjg> or they patched something in qemu
<puck> qemu doesn't implement rdtsc(p) *or* the walltime msr
<puck> it's all kernelspace
<mjg> or in kvm
<mjg> whatever
<puck> is kernelspace :p
<mjg> point being, this is total crap at least on old cpus
<puck> mjg: oh so i *was* undercounting
<puck> mjg: i gave the VM 4 cores instead of the one it was on
<puck> mjg: average: 54532751
<mjg> so about the same
<puck> 99%!
<mjg> this not being crap on a modern cpu is not particularly *shocking*
<mjg> i did not know it got that far tho
<puck> mjg: holy shit the rdtscp sure is taking a toll
<mjg> ?
<puck> [doing some calculations rn]
xenos1984 has joined #osdev
<puck> 110349077 in VM, 111594506 on host
<puck> just by giving the benchmark a p-ectomy
<puck> mjg: interestingly, rdtsc + rdpid isn't much slower, 109830718 on vm, 109927429 on host
<puck> heat: ^ the benchmarks don't lie (except they usually do) :p
<heat> it's not taking a toll tho?
<heat> you get 98% perf on rdtscp
thatcher has quit [Remote host closed the connection]
thatcher has joined #osdev
<puck> heat: well, but perf on rdtscp is worse than rdtsc + rdpid
Matt|home has quit [Quit: Leaving]
<heat> rdtscp serializes
<puck> it .. doesn't :p
<heat> it does
<heat> also why is rdtsc being twice as slow as rdtscp?
<heat> sorry, rdtscp is not exactly serializing, but pretty much
<heat> it's equivalent-ish to lfence; rdtsc
<puck> rdtscp is twice as slow as rdtsc
<heat> ah ok
<puck> to be fair the question is "why"
<puck> the lfence thing, that is
<heat> because speculation and OoO
<puck> but why on rdtscp, that is
<heat> because it's more useful that way
<heat> before rdtscp you had to use lfence; rdtsc
<heat> which is AFAIK strictly slower than rdtscp
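For reference, the two orderings being compared: per the Intel SDM, LFENCE does not execute until all prior instructions have completed locally, so `lfence; rdtsc` keeps the timestamp read from racing ahead of earlier work, which is roughly the ordering RDTSCP gives with respect to prior instructions. A sketch of both idioms:

```c
#include <stdint.h>

/* Plain RDTSC: may execute ahead of earlier instructions. */
static inline uint64_t rdtsc_plain(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* LFENCE; RDTSC: the pre-RDTSCP idiom; the fence holds RDTSC back
 * until earlier instructions have completed locally. */
static inline uint64_t rdtsc_ordered(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("lfence; rdtsc"
                         : "=a"(lo), "=d"(hi) : : "memory");
    return ((uint64_t)hi << 32) | lo;
}
```

Which of the three (rdtsc, lfence;rdtsc, rdtscp) is fastest varies by microarchitecture, as puck's numbers below show.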
<puck> oh right because you need a fence to actually make sense of the timestamp counter you fetch
<mjg> i just remembered amd has its own virtualisation tech
<heat> well, you don't need it, but if you want it to be super accurate you do
<puck> yes, but i checked the vmx code, not the ammd code
<puck> heat: hrm. weirdly enough: (on host) average: 631322341 (lfence; rdtsc; rdpid)
<bslsk05> ​linux.kernel.narkive.com: kvm: RDTSCP on AMD
<mjg> let's check the code now
<puck> heat: this is *faster* than rdtscp
<heat> hmm, that's odd
<puck> checking the AMD manual suggests something subtly different
<bslsk05> ​elixir.bootlin.com: msr.h - arch/x86/include/asm/msr.h - Linux source code (v6.3.1) - Bootlin
<puck> "Unlike the RDTSC instruction, RDTSCP forces all older instructions to retire before reading the timestamp counter" but no mention of the lfence-like behavior
<puck> to be fair, i'm not sure i need to do a million rdtscps a second, so eh
<heat> if you're tracing you might
zid has quit [Remote host closed the connection]
zid has joined #osdev
biblio has joined #osdev
eschaton has quit [Remote host closed the connection]
zid has quit [Ping timeout: 240 seconds]
zid has joined #osdev
goliath has quit [Quit: SIGSEGV]
linearcannon has joined #osdev
linear_cannon has quit [Ping timeout: 250 seconds]
Burgundy has quit [Ping timeout: 268 seconds]
Turn_Left has quit [Read error: Connection reset by peer]