"The more I practice, the better I will become" --Thanh Van
"Send a PR up and then lets start collaborating" --Tim Roberts
"whether you're a student or not, there's always something to learn" --Abdul Guled
Nearly a month since my previous Telescope update, I'm back with another story to tell, this time about shipping 2.8.0 and 2.8.1.
Stats
This was a big release. During the last month we landed:
- 108 Pull Requests
- 307 commits
- 183 files changed, with ~9K additions and ~6K deletions
We had 33 people involved with this release, including:
- The usual heavy lifting by our core maintainers. Well done team.
- Lots of community involvement from Josue (it wouldn’t be a release if we didn’t break the servers, right?), Cindy (who never met an ESLint issue she couldn’t submit a PR for), and Abdul (who proved that iframes are still one of the hardest parts of the web platform to get right)
- More contributions from @sancodes and a first-time fix by @pbelokon. Welcome to Telescope! I hope you'll contribute more in the future.
- Commits from previous Satellite developers, who have now been merged into Telescope's history (I'll discuss this below) – @Metropass, @HyperTHD, @manekenpix, @dhillonks, @Kevan-Y, @DukeManh, @AmasiaNalbandian, @rclee91, @JiaHua-Zou, @menghif, @joelazwar
Highlights
Today I read through the full diff of everything that landed since 2.7.1, and here are some of the highlights that I noticed:
- Fixes to Renovate in order to properly deal with updates to Satellite, and a shiny new Renovate Bot README Badge
- Set up and launched our very own private Docker Registry at docker.cdot.systems. A big thank-you to Chris Tyler for setting up the hardware and provisioning the box.
- Our first successful, automatic Docker build-and-push (feed-discovery) from CI to docker.cdot.systems
- Maintenance to our Gitpod, VSCode, and other configuration files
- Removal of left-over Firebase and Users service files
- Updates to our nginx configuration to properly cache Next.js static assets
- Portainer now hosted at portainer.telescope.cdot.systems, and authentication done via an OAuth2 proxy to GitHub
- Moved Redis persistence from AOF to RDB snapshots
- Added Row Level Security for our Supabase tables
- Jest sub-project fixes to stop running our tests 10 times in a row (!)
- Moved Satellite into our monorepo (hurray!)
- Proper sub-project ESLint configs throughout the monorepo
- Added the initial version of the Dependency Discovery service
- Moved closer to finishing the parser service with code fixes (tests are next)
- Fixed our planet to use Handlebars properly with Express
- Updates to our Search service API query route
- Rewrote our Single Sign On service and tests to use Supabase for our authentication backend and secure token handling
- Removed unnecessary parts of the Dashboard UI
- Fixed our Build Log Terminal padding (again)
- Updated our DOM sanitizer to allow embedded Twitch iframes
- Added pnpm to our Telescope base Dockerfile
- Lots of documentation updates, including the feed-discovery service, a new AWS development guide, and lots more new docs in our Docusaurus
- Fixes to allow proper use of a GITHUB_TOKEN in our autodeployment server
- Localization of our About Page, including a complete Vietnamese translation (as an aside, we have some kinh ngạc (amazing) developers from Vietnam on the team, and I'm glad to have them)
- Lots of improvements to our React Native UI
- Added more front-page images to celebrate our 3,000th Issue/PR
- Tons of dependency updates to npm modules and Docker images
I'll talk about a few of these in more detail.
OAuth2 Proxying
Kevan Yang and I spent a bunch of time during this release figuring out how to do authentication for our "admin" apps. We have a bunch of web apps, dashboards, and APIs that we want to run in production, but not expose to the general public. An example is Portainer, which we use as a Docker front-end for our staging and production machines. Another is the Supabase Studio dashboard.
We've solved (and are now implementing) user-level authentication using Seneca's SSO and our own custom auth solution (JWT-based, with Supabase as the back-end), but we didn't have a good "admin" story. Our solution was to add an OAuth2 Proxy that connects to GitHub and lets us do Org- and/or Team-based authorization.
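Conceptually, the proxy sits in front of the admin app and only lets members of our GitHub Org/Team through. Here's a minimal docker-compose sketch of the idea (the service name, upstream, org/team values, and secrets are placeholders, not our actual configuration):

```yaml
# Illustrative sketch: oauth2-proxy guarding an admin app behind GitHub auth.
# Names, the upstream, org/team, and secrets are placeholders.
services:
  portainer-auth:
    image: quay.io/oauth2-proxy/oauth2-proxy
    command:
      - --provider=github
      - --github-org=Seneca-CDOT
      - --github-team=telescope
      - --upstream=http://portainer:9000
      - --http-address=0.0.0.0:4180
      - --email-domain=*
    environment:
      # Supplied via Docker secrets or the host environment
      - OAUTH2_PROXY_CLIENT_ID
      - OAUTH2_PROXY_CLIENT_SECRET
      - OAUTH2_PROXY_COOKIE_SECRET
    ports:
      - '4180:4180'
```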
Now when I go to log in to Portainer, I'm met with a GitHub sign-in from the proxy instead of a publicly accessible login page.
We hope to do the same for Supabase in 2.9, as soon as we implement #2979 and add secrets to our Docker deployments.
Supabase Security
I mentioned above that we are finalizing our user authentication, too. Duc Manh made a great video explaining how this works (btw, I love seeing students experiment with submissions like this).
The granularity of what this gives us is amazing. We authenticate a user via our own Seneca SSO, then generate a JWT that can be shared between our own back-end and Supabase. The secret to making this work is to sign our tokens with the same key as Supabase, and then to use `setAuth()` to attach this token to our Supabase client.
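Here's a minimal sketch of that flow, assuming supabase-js v1 and the jsonwebtoken package (the env var names and helper function are hypothetical, not Telescope's actual code):

```js
// Illustrative sketch: sign a JWT with the same secret Supabase verifies
// tokens with, then attach it to the Supabase client.
const jwt = require('jsonwebtoken');
const { createClient } = require('@supabase/supabase-js');

// SUPABASE_URL, SUPABASE_ANON_KEY, and SUPABASE_JWT_SECRET are assumed env vars.
const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);

function authorizeSupabaseFor(userId) {
  // `sub` is the claim our Row Level Security policies compare against user_id
  const token = jwt.sign(
    { sub: userId, role: 'authenticated' },
    process.env.SUPABASE_JWT_SECRET,
    { expiresIn: '1h' }
  );
  // setAuth() (supabase-js v1) sends this token on every subsequent request
  supabase.auth.setAuth(token);
  return token;
}
```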
Having done this, we can then write SQL policies with Postgres to have the database enforce constraints on data access and modification, based on the user's JWT. For example:
```sql
-- Everyone can read the feed list
CREATE POLICY feeds_read_policy ON feeds
  FOR SELECT
  USING (true);

-- But only a Seneca user can update, delete, or insert their own feeds
CREATE POLICY feeds_update_policy ON feeds
  FOR ALL
  USING (((current_setting('request.jwt.claims'::text, true))::json ->> 'sub'::text) = user_id);
```
It's so nice to have this enforced at the data layer. Supabase has not been trivial to self-host and integrate into our existing system, but once we're done, it's going to be really powerful.
Telescope AWS Development Environments on EC2
I'm teaching another course on AWS in parallel with this class, and it's made me think a lot more about how to leverage AWS in our open source work. One immediate need I noticed is giving people access to cloud development environments powerful enough to build and run Telescope. With each passing release, Telescope gets bigger and more complicated to run.
I filed an issue to try and use VSCode over SSH with AWS, and Cindy snapped it up, producing this incredible guide on how to set up and develop Telescope on an r5.large EC2 instance. I demoed it in class the other day, and a number of students requested accounts. I'm hoping we can automate this process even more, so that it's trivial for our students to spin up development environments.
Eventually, it shouldn't matter how powerful your laptop is, since the cloud can do whatever you need.
Running our own Docker Registry
We've been blocked on this for a few releases, and 2.8 was the moment where we finally got serious about running our own Docker Registry. It turns out to be pretty easy (here's our configuration). Tim, Kevan, and Josue did most of the work, and Tim has a great blog post about it.
Today I used it to rewrite our entire CI/CD pipeline to leverage the registry for pushing images, but also as a build cache. In the process I learned that `docker-compose` lets you include a `cache_from` key when you define a `build`, letting you reuse cached layers where possible. I also learned how to leverage this same cache in CI via the `docker/build-push-action`; see the docs on using a Registry Cache.
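For example, a compose file entry can look something like this (an illustrative sketch; the service name, context path, and tags are placeholders rather than our exact configuration):

```yaml
# Illustrative sketch: reuse layers from the image we last pushed to our
# registry as a build cache. Names, paths, and tags are placeholders.
services:
  feed-discovery:
    image: docker.cdot.systems/feed-discovery:latest
    build:
      context: ./src/api/feed-discovery
      cache_from:
        - docker.cdot.systems/feed-discovery:latest
```

With the classic builder, the previously pushed image has to be pulled first so its layers are available locally; BuildKit instead wants cache metadata baked into the image (e.g. building with `BUILDKIT_INLINE_CACHE=1`).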
This should make local development, CI, and production builds all go way faster, and I'm really excited to get it landed.
The Highs and Lows of the Monorepo
I'm a huge proponent of monorepos, but it's not all sunshine and rainbows. Having everything in one place is great for making cross-architectural changes, but it also increases the complexity of setting up and maintaining everything.
In this release we ran into a few of these issues with pnpm, ESLint, and Jest: pnpm using the wrong version of a dependency, ESLint needing to be configured separately for every sub-project, and, we discovered, Jest running our unit tests 10 times in a row!
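To give a sense of the Jest side of this, here's a hypothetical root config (not Telescope's actual file) that aggregates per-sub-project configs; if the `projects` globs overlap or match a config more than once, the same suite can end up running again and again:

```js
// jest.config.js at the repo root -- a hypothetical sketch, not our real file.
// The root config just aggregates each sub-project's own Jest config.
module.exports = {
  projects: [
    // If these globs overlap (or a config is matched more than once),
    // Jest will happily run the same tests repeatedly.
    '<rootDir>/src/api/*/jest.config.js',
    '<rootDir>/src/web/jest.config.js',
  ],
};
```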
Despite these negative surprises, we're slowly but surely ironing out all the wrinkles. Last night Josue, Roxanne, Cindy, and I were amazed to see that changing a version number in a `package.json` file deep in our tree automatically published a new node module to the NPM Registry. We even figured out how to finally pull Satellite into our monorepo and not lose the git history. Francesco wrote about how we did it in his blog.
Preparing for 2.9
I sent an email to the team last night, outlining what I think are the most pressing issues for us to solve in 2.9. We're about a month away from shipping 3.0, and some of these are deal breakers:
- Finish the Parser service and remove the Legacy Backend (100% microservices)
- Finish our monorepo tooling (ESLint, tests, pnpm, etc.), and fix the edge cases we're hitting now
- Finish Dockerfile/Image optimizations and use our new Docker Registry for all of our images and deployments
- Ship and finish the first version of the Docusaurus docs, including docs on anything that the next set of students will need
- Get an Alpha (i.e., usable to read posts) version of the React Native App shipped
- Fully integrate Supabase into our front-end and back-end in production, and stop using the Wiki Feed List
- Use our front-end Sign-up flow for adding new users/feeds
- Finish Search/ES improvements
- Finish YouTube/Twitch feed integration
- Finish dependency service so that it's usable by the next set of students in the fall for finding bugs to work on
I've asked the Sheriffs this week to press the team to answer these questions, as they think about what to focus on during 2.9:
- Which part of Telescope 3.0 do I own?
- Which existing Issues are part of this? Which new Issues do I need to file in order to properly describe the work? When will those issues be filed?
- Who can I depend on for support in development, debugging, testing, and reviews? I can't do this alone.
- What are the risks I see that could prevent me from getting this shipped?
- How will I mitigate these risks?
Shipping software is a team sport, and 3.0 is going to require us all to work hard to support each other.