Shipping Telescope 1.9.5

Yesterday the team shipped Telescope 1.9.5, bringing us one step closer to our 2.0 goals.  I spent a lot of time mentoring, debugging, reviewing, and writing code in this release, and wanted to write about a few aspects of what we did and how it went.

First, I was struck by the breadth and depth of things that we had to work on during this cycle.  I was talking with a friend at Apple recently, who was reflecting on how complex modern software stacks have become.  I've been programming for over 30 years, and he even longer, and neither of us has ever reached the promised future where a NEW! language, framework, tool, or approach suddenly tames the inherent complexities of distributed systems, conflicting platform decisions, or competing user expectations.  Reading through this release's code changes, I wrote a list of some of the things I saw us doing:

  • Writing Dockerfiles
  • Dealing with the differences between Docker Environment Variables and Build Args
  • Babel configuration wrangling
  • Elasticsearch
  • Redis
  • Browser origin issues
  • Nginx proxies and caches
  • OpenID
  • Firebase and Firestore mocking in CI and development
  • Traefik service routing
  • Combining SAML Authentication with JWT Authorization
  • Using Portainer to administer containers
  • Writing good technical documentation
  • Dealing with different ways of running the same code: locally, CI, via Docker, production
  • Dealing with failed Unit tests due to timeouts in CI
  • Understanding mock network requests in tests
  • Dependency Updates, some with breaking APIs
  • Configuring tests to be re-run on file changes (watch)
  • Dealing with authenticated requests between microservices
  • Writing e2e tests using Playwright
  • Hashing functions
  • Different strategies for paging results in REST APIs
  • Role-based Authorization
  • HTML sanitization
  • REST API param validation
  • TypeScript
  • Vercel Environment Variables
  • Material UI
  • Implementing multi-step forms in React
  • User Sign-up Flows
  • Accessibility
  • Intersection Observer API
  • React Refs, Context, custom Hooks
  • Scroll Snap API
  • Polyfills
  • Updating and Removing legacy code
  • Mocking database tests
  • HTTP headers

This list is incomplete, but it helps give a sense of why Telescope is so interesting to work on, and how valuable it is for the students, who are getting experience in dozens of different technologies, techniques, and software practices.  The funny thing is, if I proposed a new course that covered all of these topics, I'd be shot down in a second.  I have some colleagues who are convinced that the best way to learn is by working with toys and shielding students from the realities of modern software; I disagree, and have always favoured doing real things as the best way to prepare for a life of software development, which stops being neat and tidy the minute you start doing anything other than closely scripted tutorials.  We don't help our students by shielding them from the complexities they must eventually face.

I had a former student email me recently, struggling to reconcile how she felt about programming with what it actually was now that she was doing it full-time.  A lot of what she said sounded familiar to me, and also very normal.  Rather than perceiving her discomfort as a problem, I recognized it for what it really is: the gap between the impossible demands our software makes of us and how well equipped we are to meet them.  Programming isn't something you learn in 24 hours, one semester, or during a degree.  This is a long, winding road, and accepting that it's hard for all of us is an important part of not giving up.  Not giving up is 90% of what you need to be a good programmer.

So how do I get students to work on code like this?  First, I don't expect perfect results.  We work in small steps, issue by issue, pull request by pull request.  We get it wrong and correct our mistakes.  We struggle through different approaches until we land on one that feels right.  We help one another.

This past week I saw a lot of students working together on Slack and Teams to write fixes and do joint reviews.  The move to virtual learning has opened the door to much greater collaboration between students: anyone can share their screen with anyone else on the project at any time, and it's easy to say "let me show you what I'm struggling with here."  I'm also fascinated by how students will join calls they weren't invited to, knowing that their presence will be welcomed rather than met with questioning looks.  This openness to collaboration, and to each other, is exactly what I've sought to build for many years.

On Thursday I spent most of the day stuck on writing one tricky end-to-end test for our authentication flow.  No matter what I did, one of our microservices kept returning a 200 instead of a 201, even though the code never returns a 200!  I tried everything I knew how to do, writing, rewriting, and testing from different angles.  Nothing worked.  Eventually I reached out to Chris and Josue, who were just coming online to try writing some tests together.  Sharing my screen and talking to them for 5 minutes completely unblocked me, and was worth more than the 5 hours I'd already spent: our tests were silently automocking fetch(), so every request resulted in a 200.
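If you're wondering how a test can keep seeing a status code the service never sends, here's a minimal sketch of that failure mode.  The file layout, URL, and use of node-fetch are illustrative assumptions rather than Telescope's actual setup; the point is that a project-wide mock quietly answers every request, and jest.requireActual() is one way to opt a true end-to-end test back out of it:

```js
// A sketch of the problem, not Telescope's actual test code.  Imagine a
// shared Jest setup file that replaces node-fetch everywhere so tests don't
// hit the network; every call quietly resolves with a canned 200:
jest.mock('node-fetch', () =>
  jest.fn(() =>
    Promise.resolve({ ok: true, status: 200, json: async () => ({}) })
  )
);

// An end-to-end test that asserts on the real service's status code then
// fails in a confusing way: the service returns 201 Created, but the test
// only ever sees the mock's 200.
const fetch = require('node-fetch');

test('registering a new user responds with 201', async () => {
  const res = await fetch('http://localhost:1234/v1/auth/register', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Test User' }),
  });
  expect(res.status).toBe(201); // fails: the mock always answers 200
});

// One fix is to opt this file out of the mock and use the real module:
// const fetch = jest.requireActual('node-fetch');
```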

I've also seen the quality of reviews continue to increase, week after week.  We've been at this for months, and in much the same way that a runner can slowly add more volume every week, or a lifter can increase their bench press in tiny increments, the group's ability to review each other's code and spot issues has gotten better and better with practice.  In the beginning, I had to review everything, and I still do one of the reviews on most PRs.  But more and more in this release I saw PRs land that I hadn't read, but that turned out to be well reviewed by two or more other students.  It's been great to have my own code reviewed too, since I need help as well, and I've been able to fix many things because the students caught my mistakes as I worked.

Despite all the positives I've seen with collaboration and review, I also struggle to overcome some behaviours.  For example: merging code without reviews; hesitancy to review things that weren't specifically assigned to you; assigning everyone to review a PR, which in effect means that no one is assigned.  Review is hard to teach, and easier to learn through experience.  Reviewing code is how I write most of my code: suggesting fixes or simplifications.  I also learn all kinds of things I don't know.  The students assume that they can send me anything to review and I'll already know how it works, or at least understand the tech they're using.  Often I don't, and I have to go read documentation, or write sample code, before I can provide useful feedback.  As a result, review is as much a documentation and educational process within a project/community as it is a chance to improve how things work.  If you don't request reviews before merging, or you don't get involved in reviewing other people's code, you miss the chance to build a group of people within the project who understand how something works.  If you want to be the only person who can ever maintain or fix bugs in a piece of code, go ahead and do it all alone, because that's how you'll stay.

Another struggle I have is trying to figure out how to get people to push their code for review well before the day we ship.  I've yet to see a PR that gets reviewed and lands in a single iteration without changes (my own included).  I know I'm not capable of writing perfect code, but some of the students are still learning this the hard way.  It takes several days for a fix to get reviewed, tested, rebased, and landed.  However, yesterday a bunch of people showed up with unreviewed code changes that they expected to ship the same day.  On the one hand, this is easily solved by being a ruthless product manager and simply refusing to include things in the current release.  If we were in industry, this type of behaviour would result in people losing their jobs, as the rest of the team lost confidence in their colleague's ability to estimate and ship on time.  But this isn't industry, and these aren't employees, so I do my best to help people finish work on time.  Doing so means that mistakes get made, and yesterday's release wouldn't autodeploy to staging or production because of some missed environment variables for one of the containers.  Dropping unfinished code on the floor and walking away, expecting someone else to clean it up, isn't a great strategy for success.

Yet all of this, the victories and defeats, the frustrations and successes, all of it is what it's like to embrace the grind of software development.  If you're not up for it, or don't enjoy it, it's probably good to understand that now.  Most of what we do isn't hard, it's just tedious work that needs to be done, and there's no end of it.  What looks like magic from the outside turns out to be nothing more than a lot of long hours doing unglamorous work.  A release is a chance to let go, to finally exhale after holding our breath and concentrating for as long as we could.  It's nice to take a moment to breathe and relax a bit before we do it all again, hopefully a little stronger this time.
