Strange Loop is a superlative software development / computer science conference held annually in St. Louis, Missouri. If I recall correctly, its accomplished founder Alex Miller had two main goals in creating it: to bring together a diverse group of academics and practitioners to share ideas from both spheres, and to promote St. Louis as a great place to live and work. As far as I can tell the conference has been quite successful in both goals.
I first attended Strange Loop in 2010 with my talented co-worker Michael Rehse. I had a great time that first year and learned a ton, and so I returned the following year with a larger group of co-workers — and that was the first time I had my mind blown at Strange Loop: when Rich Hickey gave his amazing keynote talk Simple Made Easy. That talk inspired me to learn Clojure, which eventually became, and remains, my favorite programming language.
Since then I’ve hoped and strived to attend Strange Loop every year, and I’ve been mostly successful. I’ve learned so much, and had so many amazing conversations with amazing people, that it’s become a cornerstone of my professional life.
Many thanks to all the organizers, staff, volunteers, speakers, and attendees who make Strange Loop fantastic! And thanks to Park Assist for backing my trip as part of its professional development program. (We’re hiring!)
- There was one workshop day followed by two session days.
- I attended one workshop: Ally Skills — my notes and impressions are below.
- I was only able to attend one of the session days, but despite that I learned a ton, so attending the conference was most definitely worthwhile for me.
- This was the first time I participated in the Slack “team” created for this year’s conference community. It was very useful and fruitful.
- I continue to be super impressed with Strange Loop’s evolution over time. Continuing a hugely positive trend, the attendees this year were notably more diverse than in prior years, and there were a notable number of workshops and sessions focused on issues of diversity — of both demographics and perspectives. The metaphor of “expanding horizons” feels fairly hackneyed at this point, but it feels exactly right for describing what the Strange Loop team continues to strive to do. This is a key reason I find Strange Loop so compelling and valuable.
- I’m continually impressed by how well Strange Loop is run. The organizers really do an amazing job.
- Videos of most sessions were posted within a day of the session. And with excellent production values — super impressive! Videos of almost all of this year’s talks and many from prior years can be found on the Strange Loop channel on YouTube. (2012 videos are on InfoQ.)
For the past few years Strange Loop has actively worked to pair up first-timers with experienced Strange Loopers, who act as “guides” to help the first-timers have the best possible experience.
I volunteered to be a guide last time I attended and had a mixed experience; it felt very awkward and it wasn’t clear whether I had any kind of positive impact on my guidee. I was hesitant, therefore, to volunteer this year. But when the organizers shared that they really needed more volunteers, I decided to step up and give it another chance.
I’m glad I did, because this time around it was a much less awkward experience and it seems more likely that I did have some kind of positive impact on my guidee. We had a great chat and I identified her as a potential candidate for a spring internship at Park Assist!
Bottom line: like any encounter of strangers, this experience can be awkward and uncertain — but it can also be stimulating and rewarding. I’ll definitely try to do this again if I can.
Kudos to the accomplished Bridget Hillyer for organizing the guide program!
Ally Skills Workshop
- This was led by Valerie Aurora of Frame Shift Consulting. I’ve admired her for some time, since the early days of the Ada Initiative.
- The goals seemed to be:
- give people a preliminary but solid conceptual foundation in systematic oppression
- define the role of an ally
- help aspiring allies to start building the skills needed to act as an effective ally
- I felt that all these goals were well accomplished.
- The workshop was pretty intense, and even a little draining for me, but it was absolutely worth it.
- Amazingly, a few hours after the workshop ended I received an email that one of my colleagues at Park Assist had sent to our engineering team, and I noticed that he had addressed the email to “gents” — which is problematic as there is a woman on our team (unfortunately only one; I’m working on this). I don’t know if I would have noticed this if I hadn’t attended the workshop; I almost certainly wouldn’t have taken the action that I did (with the help of some of my fellow attendees).
- If you are interested in having Aurora or one of her associates deliver this workshop for your organization, I highly recommend it and I urge you to contact her posthaste.
I’ve never been great at the hallway track, but I think I am gradually improving, which is gratifying to realize!
I didn’t have anything scheduled for Thursday morning, and during and after breakfast at the beautiful Union Station hotel I struck up some fruitful discussions wherein I learned a few things:
I’ve been working with Ruby this year, since it’s currently the default language for our “cloud” software at Park Assist. So it was interesting to note people’s reactions when I shared this. It’s anecdotal of course, but I don’t think I encountered a single person who was enthusiastic about Ruby. I met plenty of folks who know it and either use it now or used to, but I detected little enthusiasm. This is a weak signal, but a signal nonetheless. That said, Strange Loop is but one very particular community, and certain aspects of it make this not super-surprising: it’s unusually chock-full of enthusiasts of functional programming, so I also didn’t detect much enthusiasm for, say, Go — which is booming in many other contexts.
Work Experience at Twitter
First I chatted with Bonnie Eisenman. It was a fun, wide-ranging conversation; just getting to know each other and discussing various technologies and techniques. One interesting takeaway for me was that Bonnie has been at Twitter for about a year and she’s very happy working there. I was a little surprised by how strong her positive feelings about Twitter were, and I’m intrigued to learn more about why working there has been so great for her — so I can perhaps apply some of those lessons at Park Assist. (Although many of them may not be applicable, due to the differences of scale.)
Bonnie has published her own rollicking recap of Strange Loop 2016, and it’s much more fun than this one, so check it out!
Kafka vs MQ systems
After I chatted with Bonnie, I approached Jim Breen because I spotted a Kafka sticker on his laptop — the same sticker I have on my laptop. As per the time-honored law of laptop stickers, we had to connect. We talked about Kafka: what we used it for, what we found exciting about it.
I’ve been enthusiastic about Kafka for almost three years now, and actively designing and building systems on Kafka or Kinesis for ~18 months. But I clearly still have much to learn!
Jim told me that at his company they use an MQ system for some data/event transport needs and Kafka for others. This steered us towards talking about some of the strengths and weaknesses of MQ systems versus Kafka, and that’s when Jim educated me that in fact some/most/all MQ systems actually do support true pub/sub, wherein multiple consumers can consume the same messages independently, using a “topic” paradigm for grouping messages. This was super-helpful, because it corrected a misconception I had somehow adopted: I had thought that MQ systems generally supported only queues of messages, wherein a given message could be consumed only once, by a single consumer — in other words, I thought they supported only “popping” messages off of queues.
Clearing this up helped me refine my understanding of Kafka’s strengths and value propositions versus other queueing systems. It seems that its pub/sub support is not quite so unique as I had thought. Jim helped me understand that, rather, it’s Kafka’s strong sequential ordering guarantees, combined with its pull model for consumers, that set Kafka apart from MQ systems; these properties enable each consumer to consume messages independently and asynchronously, and even to go “back in time” to process or reprocess older messages.
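That log-and-offsets model can be sketched with a toy in-memory “topic” (my own illustration, not real Kafka client code): each consumer owns its offset, so consumption is independent and rewinding is just resetting a number.

```python
# Toy sketch of Kafka's log/offset model (not a real client):
# an append-only log per topic; each consumer owns its offset,
# so consumers read independently and can rewind to reprocess.

class ToyLog:
    def __init__(self):
        self.messages = []

    def append(self, msg):
        self.messages.append(msg)
        return len(self.messages) - 1  # offset of the new message

class ToyConsumer:
    def __init__(self, log, offset=0):
        self.log = log
        self.offset = offset  # position is consumer-side state

    def poll(self, max_records=10):
        # Pull model: the consumer asks for records at its own pace.
        batch = self.log.messages[self.offset:self.offset + max_records]
        self.offset += len(batch)
        return batch

    def seek(self, offset):
        # "Back in time": rewinding just resets the offset.
        self.offset = offset

log = ToyLog()
for event in ["created", "updated", "deleted"]:
    log.append(event)

a, b = ToyConsumer(log), ToyConsumer(log)
print(a.poll())        # ['created', 'updated', 'deleted']
print(b.poll(2))       # ['created', 'updated'] -- independent offset
a.seek(0)
print(a.poll(1))       # ['created'] -- reprocessing an older message
```

In a queue-only MQ setup, by contrast, the “offset” effectively lives in the broker and a pop by one consumer removes the message for everyone.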
I’m really glad I ran into Jim at Strange Loop!
I can’t share anything that Bridget and I discussed, but it’s always a treat to get some time with her!
Sessions I Attended
(For abstracts and author bios, click on the talk titles to go to the official pages of each talk.)
I took a few notes from the first half of this talk and while they’re a bit fragmented, they do represent my experience of this half:
- When a dependency has a bug
- … why don’t you just fix the bug?
- “The program comprehension task…” — Shneiderman
- This is a great talk
- “What if we had tools to help us build mental models from code fast?”
- small programs are easy to read and easier to understand
- can we reduce a program into a smaller, easier-to-understand equivalent program?
Shreve went on to introduce and explain an oldie-but-goodie technique called “code slicing”, which he explained very well, and then he introduced a method of program analysis built on code slicing that he calls “idealized commit logs”. This is a quite compelling idea and I hope to try to apply it the next time I need to understand a new (to me) codebase.
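To make the slicing idea concrete, here’s my own toy backward slicer for straight-line code (not Shreve’s tooling; real slicers work over data- and control-flow graphs): given a variable of interest, keep only the statements that transitively affect it.

```python
# Minimal backward slice over straight-line code (illustrative only).
# Each statement is (assigned_var, vars_read, source_text).

def backward_slice(stmts, criterion):
    relevant = {criterion}   # variables whose values we must explain
    kept = []
    for target, reads, text in reversed(stmts):
        if target in relevant:
            kept.append(text)
            relevant.discard(target)
            relevant |= set(reads)   # now we must explain its inputs
    return list(reversed(kept))

program = [
    ("a", [],         "a = 1"),
    ("b", [],         "b = 2"),
    ("c", ["a"],      "c = a + 10"),
    ("d", ["b"],      "d = b * 2"),
    ("e", ["c", "a"], "e = c + a"),
]

# Slice with respect to "e": "b" and "d" drop out entirely.
print(backward_slice(program, "e"))
# ['a = 1', 'c = a + 10', 'e = c + a']
```

The resulting smaller, easier-to-understand equivalent program is exactly the payoff the talk’s notes point at.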
A Frontend Server, Front to Back by Zach Tellman
A very good talk, very meaty, but also very dense and a little dry.
If you work with asynchronous code and/or callbacks, or if you’re interested in implementing comprehensive instrumentation in a system, I recommend this talk.
- With Netty, harder to understand a system
- no stack traces
- Cool idea: thread a sort of a metrics manager through all the code so that the business logic can be as simple as possible and the metrics manager is responsible for actually measuring and recording the measurements… basically SRP, you tell the thing what happened, it takes care of measuring and recording when it happened.
- You can only improve what you measure / what is measured will be improved, will get better
- Everything you ignore will get worse
- Lessons learned
- Articulate your goals IN ORDER
- understand and describe the extremities of your system
- choose your key metrics carefully
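The “metrics manager” idea from my notes above can be sketched like so (all names here are mine, not Tellman’s): the business logic only reports *what* happened, and the metrics object owns the clocks and the recording.

```python
import time
from collections import defaultdict

# Sketch of threading a metrics manager through the code: business
# logic stays simple and just reports events; the manager is solely
# responsible for measuring and recording (single responsibility).

class Metrics:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.events = defaultdict(list)

    def mark(self, name):
        # The caller never touches clocks or storage directly.
        self.events[name].append(self.clock())

    def count(self, name):
        return len(self.events[name])

def handle_request(payload, metrics):
    metrics.mark("request.received")
    result = payload.upper()          # stand-in for real business logic
    metrics.mark("request.handled")
    return result

m = Metrics()
handle_request("hello", m)
print(m.count("request.handled"))   # 1
```

Because the clock is injected, tests can pass a fake clock and assert on timings deterministically.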
Druid: Powering Interactive Data Applications at Scale by Fangjin Yang
I attended this talk because it looked interesting and because I’m in the middle of refactoring an analytics system built on a conventional (old school) ETL process and a conventional (old school) Data Warehouse implemented with an RDBMS using an OLAP star schema — and I’ve been having second thoughts about it. Before reading the abstract of this talk I had only a vague idea of what Druid was, but as I read the abstract I started nodding vigorously — it seemed like it just might be exactly what the doctor ordered to change this refactoring project from blah to boom. And I figured that even if it didn’t yield any concrete changes to my project, I’d still learn relevant ideas.
I’m really glad I attended this talk because it was excellent, and my takeaway was that I could and should actively explore incorporating Druid into my current project. I’ve been working on this since then and while it’s too soon to say where it’ll lead, I’m definitely excited.
McNeil and I seem to have been on parallel tracks over the past ~18 months, struggling with many similar problems in the realm of transactional stream processing with the Kafka model. Fortunately he put in the effort to enumerate and articulate the various challenges he encountered and the patterns he developed to handle them. Many of the patterns described were familiar to me, even if I had my own spin on them.
Now that I think about it, this talk, and my own experience, illustrate that transactional stream processing with the Kafka model is in its infancy and that we’re still figuring out how it can and should work. I did think a few of the specific patterns applied by McNeil were a little too complicated for my taste, but overall they were insightful, thoughtful, and thought-provoking. Also: great illustrations!
I think this talk is useful for gaining an understanding of Kafka and Kinesis and for thinking about transaction processing on top of Kafka and/or Kinesis.
Failing (and Recovering) Asynchronously: a Saga by Daniel Solano Gómez
I followed along very enthusiastically for the first ~⅓ of this talk, taking these (rough) notes:
- Caitie McCaffrey spoke about the Saga pattern at GotoConf
- Sagas were first described in a paper from ~30 years ago by Hector Garcia-Molina and Kenneth Salem
- It was focused on Long-Lived Transactions (LLT)
- Maybe we can break an LLT up into a group of smaller transactions
- Paper by Arnon Rotem-Gal-Oz revived Sagas for SOA systems
- Use sagas as an alternative to distributed transactions
- Asynchronous sagas
- add undo semantics to concurrency constructs
- not concerned with persistence
- operations may or may not be distributed
- Asynchronous Sagas
- (speaker used Scala + Akka Streams)
- Forward Operation
- input is the output of the forward operation
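The core saga idea from the notes above — pairing each forward step with a compensating “undo”, and running the undos in reverse when a later step fails — can be sketched in a few lines. This is my own toy version, not the speaker’s Scala + Akka Streams code.

```python
# Toy saga runner: each step pairs a forward action with a
# compensating undo; on failure, completed steps are compensated
# in reverse order, then the error is re-raised.

def run_saga(steps):
    done = []  # (undo, forward_result) for completed steps
    try:
        for forward, undo in steps:
            result = forward()
            done.append((undo, result))
        return [r for _, r in done]
    except Exception:
        for undo, result in reversed(done):
            undo(result)   # compensation receives the forward output
        raise

log = []
def book_hotel():      log.append("hotel booked"); return "hotel-123"
def cancel_hotel(ref): log.append(f"cancelled {ref}")
def charge_card():     raise RuntimeError("payment declined")
def refund(ref):       log.append(f"refunded {ref}")

try:
    run_saga([(book_hotel, cancel_hotel), (charge_card, refund)])
except RuntimeError:
    pass
print(log)   # ['hotel booked', 'cancelled hotel-123']
```

Note that this gives the “undo semantics” from the notes without any distributed transaction: each step is a local operation plus a promise to compensate.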
But around that point Gómez lost me. He changed the method of his talk: he switched from a straightforward and effective combination of verbal narrative + key bullets on slides to using a specific visualization notation that (according to another attendee) comes from the Scala community to visualize operations, calls, etc — the diagrams on his slide became the primary means of conveying information. And for some reason I just couldn’t follow the illustrations. I just didn’t “get” them. I have no idea why. But I got lost, so I left. It’s a shame too, because I had been enjoying his speaking style until that point.
Despite this session not fully working for me, it was still valuable, as I learned about the concept of Sagas and ideas about applying them to distributed and/or asynchronous transactions. I suspect these ideas will prove very useful at some point in the future.
End-to-end encryption: Behind the scenes by Martin Kleppmann, Diana Vasile
I’ve admired Martin Kleppmann since my very first exposure to his work, when I attended his talk Turning the database inside out with Apache Samza, which was another blow-my-mind moment at Strange Loop.
Martin’s excellent talks and articles on stream processing have been instrumental over the past ~3 years in my quest to learn stream processing and in my nascent ability to actually design and build effective stream processing systems. I’ve also been eagerly following along as he’s been writing his extremely promising book Designing Data-Intensive Applications.
So when in October 2015 Martin took a position as a researcher at the University of Cambridge Computer Laboratory, to research information security, I figured that if he thinks this topic is important enough to work on full time, then I should pay attention to it as well.
This talk was an engaging and compelling explanation of how a few different public key encryption systems work, and how they can be applied to end-to-end encryption. I enjoyed the talk, but its conclusion was a bit anti-climactic — there was no clear call to action or pointer to how to take the next steps in learning more. No big deal for me though; I’ll just keep following Martin on Twitter!
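For a feel of the public-key mechanics underneath such systems, here’s a toy Diffie-Hellman key agreement (my own illustration, with tiny toy-sized numbers; absolutely not secure, and not code from the talk): both parties derive the same shared secret without ever transmitting it.

```python
import secrets

# Toy Diffie-Hellman key agreement, a building block beneath many
# end-to-end encryption protocols. Toy-sized prime, purely
# illustrative -- real systems use curves such as X25519.

P = 0xFFFFFFFB  # a small prime modulus (2**32 - 5)
G = 5           # generator

def keypair():
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)   # safe to send in the clear
    return private, public

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key:
# G**(a*b) mod P comes out the same either way.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)

print(alice_secret == bob_secret)   # True: shared secret, never sent
```

An eavesdropper sees only `G`, `P`, and the two public keys; recovering the secret from those is the discrete-log problem (at real-world key sizes).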
Humanities x Technology by Ashley Nelson-Hornstein
A compelling reminder that all this technology must be deployed to serve human interests, to make real people’s lives better. Highly recommended.
I don’t recall Nelson-Hornstein making this explicit, but it seems to me that she means as opposed to the interests of institutions such as corporations, governments, or “the market”. But I could be wrong; I could just be channeling Douglas Rushkoff — I am in the middle of his newest book Throwing Rocks At The Google Bus and just this morning I was listening to his new podcast Team Human.
Sessions I Wish I Had Attended and/or Plan to Watch
I didn’t attend the second day of the conference because it was on a Saturday, the Jewish sabbath, and such an event isn’t compatible with how I observe the sabbath. (I hope, perhaps selfishly, that the organizers might consider avoiding Saturdays in the future.)
So these are the sessions I would have attended on the second day, had I been there, and which I hope to watch some time soon:
- Commander: Better Distributed Applications through CQRS and Event Sourcing by Bobby Calderwood
- GraphQL: Designing a Data Language by Lee Byron
- Lies, Damn Lies, and Metrics by André Arko
- Building a Distributed Task Scheduler With Akka, Kafka, and Cassandra by David van Geest
- Tulip: A Language for Humans by Sig Cox, Jeanine Adkisson
Bonus: Highlights from Strange Loops Past
Just a short list of a few talks off the top of my head:
- Simple Made Easy by Rich Hickey (2011)
- Storm: Twitter’s Scalable Realtime Computation System by Nathan Marz (2011)
- Scaling Software with Akka by Jonas Bonér (2012)
- Turning the database inside out with Apache Samza by Martin Kleppmann (2014)