2014.36: Finalization, laziness, optimizations.

Hi! It’s timotimo again and this week we’re pretty stoked to present some really neat stuff to you!

  • jnthn implemented finalizer support (giving your classes a DESTROY method or submethod that will be invoked when the object gets collected by the GC) on MoarVM and exposed it to Rakudo; a small sketch follows after this list. Now nine will be in a much better position to work on the Inline::Perl5 module!
  • Speaking of Inline::Perl5, carlin contributed changes to the module that make it installable via Panda, so it doesn’t require any manual work any more.
  • brrt implemented a little function to force a GC run on MoarVM. It’s not meant for performance tuning, though. Its main purpose is to help write test cases that check the behavior of DESTROY methods and similar things.
  • MoarVM now properly reports errors when mallocing memory fails instead of just crashing, thanks to carlin.
  • carlin also fixed up some cases where signedness vs. unsignedness caused branches to always or never be taken in MoarVM’s guts.
  • brrt taught the JIT log to write out where inlines happened in the code.
  • I implemented bindattrs_* (bind a value to an instance’s attribute given a string that’s not known at compile time) and getattrs_* in the JIT.
  • MoarVM’s GC used to run a full collection through nursery and old generation every 25 runs. Now it tracks how much data has been promoted to the old generation since the last full collection and bases the decision on that instead. I started the patch, jnthn gave it the finishing touches.
  • cognominal worked on Synopsis 99 a whole lot. If you’re ever puzzled by lingo from the Perl 6 community (for example on this blog), this document lends you a helping hand.
  • The usual assortment of spec test improvements has also happened, of course.
  • In the ecosystem, leont contributed a TAP::Harness module, grondilu added Clifford, a geometric algebra module, and btyler built a NativeCall binding to “discount”, a Markdown compiler.

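As a rough idea of what the new finalizer support looks like from user code, here is a minimal sketch; the class is made up, and I’m assuming the “force a GC run” function is exposed at NQP level as nqp::force_gc, the way a test would use it (when and whether DESTROY actually runs is still entirely up to the GC):

    use nqp;

    class TempResource {
        my $collected = 0;            # shared counter so a test can observe DESTROY
        submethod DESTROY { $collected++ }
        method collected { $collected }
    }

    # create some garbage and drop all references to it
    TempResource.new for ^1000;

    nqp::force_gc();                  # assumed name of the new "force a GC run" op
    say "collected so far: ", TempResource.collected;
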
Another topic that got a lot of attention from the devs this week is the performance of specific things that were implemented quite sub-optimally and gave users a very bad experience in common usage scenarios:

  • the .words method of Str used to use the .comb method to do its job. While this makes for very pretty and simple code, it’s far from optimal. Since we know that whitespace and non-whitespace have to follow each other immediately, TimToady was able to produce an improved version that runs about 6x faster.
  • With well-placed native integers, lizmat was able to improve the speed of List.ACCEPTS (the method that a smartmatch turns into) by about 14% when the check needs to go through the entirety of a big list.
  • jnthn and lizmat went through a few global variables that have costly set-up routines that impact every single start-up. Now they will only be populated with data when they are actually needed.
  • Mouq gave us better code-generation for the -p and -n flags of rakudo.
  • Turning the methods .list, .flat, .eager and .hash of Any into multiple dispatch methods (implemented by lizmat) gives the optimizer better optimization opportunities.
  • lizmat also improved the implementation of “make” (which is used inside grammars and their action classes to attach objects to the match results) so that it no longer accesses the caller’s lexpad more than once. (I actually stumbled upon this opportunity.) A small sketch of how “make” is used follows after this list.
  • Writing * or ** in your code no longer creates a fresh Whatever or HyperWhatever instance every time; instead, each returns a pre-built singleton (see the little example after this list). This cuts a whole lot of allocations from all over: especially when iterating over lazy lists you’ll end up calling .gimme(*) and .reify(*) often, where * is just used as a “signal”. These allocations are very small, though. TimToady and jnthn worked on this.
  • jnthn turned take, take-rw, succeed and proceed into multiple dispatch subs, which should help situations where gather/take is in the hot part of the program, and which also positively impacts given/when statements (a gather/take example follows after this list).
  • A few micro-optimizations with a small — but definitely measurable — pay-off were done to Str.chomp by lizmat.
  • lizmat began improving IO.lines (makes reading /usr/share/dict/words line-by-line about 32% faster) and moritz added a 4% improvement on top.
  • moritz chopped off two container allocations for every call to ListIter’s reify. On top of that, a few containers in MapIter.reify had to go, as well.
  • By using codepoints instead of one-character strings inside chomp, jnthn managed to remove some more string allocations there.
  • jnthn also made throwing exceptions and returning more friendly to the optimizer (in particular, the inliner).
  • In a big bunch of commits, lizmat made many dynamic variables lazily constructed, removing a significant chunk of the start-up time of every single perl6 program.

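To put the “make” item above in some context, here is a minimal sketch of the usual grammar/actions pattern; the grammar and class names are made up:

    grammar Digits {
        token TOP { \d+ }
    }

    class DigitsActions {
        method TOP($/) {
            make +$/;     # attach the numeric value to this match object
        }
    }

    my $match = Digits.parse("12345", :actions(DigitsActions));
    say $match.ast;       # 12345, the object attached via make
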
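The Whatever singleton change should even be observable from user code; assuming it works as described, both mentions of * below now refer to the very same object:

    my $a = *;            # Whatever in term position, no currying involved
    my $b = *;
    say $a === $b;        # True once * returns a pre-built singleton
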
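And since gather/take came up above, here is the kind of hot loop that benefits when take is a multi sub; the variable name is made up:

    # a lazy, infinite sequence of squares; take runs once per element
    my @squares := gather for 1 .. Inf { take $_ ** 2 };

    say @squares[^5];     # 1 4 9 16 25
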
All it takes for the devs to start a little performance hack-a-thon is a good, reproducible benchmark with a comparison to another language.

Until The Big List Refactor happens, the performance of lists, list iteration, lazy lists and so on is going to be suboptimal, as we almost always pay the price for fully lazy list generation, even when the iteration could be identified as eager.

Another thing to note is that the name “DESTROY” is not yet final. There’s still some discussion about it, because calling it DESTROY may give people familiar with perl5 the wrong idea. Unlike a reference-counted implementation (like CPython or perl5), fully garbage-collected implementations (like PyPy and Rakudo) cannot guarantee that your DESTROY methods are called soon after the object becomes unreachable. The DESTROY method may not even be called at all if the interpreter shuts down before that particular object gets collected. If you want your destruction timely and guaranteed, you’ll have to “do it yourself”.
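
For the “do it yourself” case, the usual trick is to tie cleanup to scope instead of to garbage collection, for example with a LEAVE phaser; this little sub is made up purely for illustration:

    sub count-lines($path) {
        my $fh = open $path, :r;
        LEAVE $fh.close;              # runs when the block is left, even via an exception
        return $fh.lines.elems;       # .elems reifies the lines before we leave
    }

    say count-lines("/usr/share/dict/words");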

Either way, it’s been a great week for both performance and feature advances. I’ll be back in a week, or maybe I’ll run benchmarks tonight and just publish an out-of-schedule post here with my results 🙂

See you around!

