Chris Padilla/Blog


My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.


    Adding Background Music to Websites

    I'm working on a project where I'm adding my own music for each page in a React app.

    It has me nostalgic for the early internet. You would be cruising around, and all of a sudden, someone's LiveJournal would have a charming MIDI of Enya's "Only Time" playing in the background.

    I'm definitely glad this isn't the norm anymore. I don't miss having to mute the obnoxious banner ads with sound on Ask Jeeves. But now that the web is largely without sound as a background to pages, I've really enjoyed how adding it back brings parts of this application to life!

    Quick Overview

    The project isn't out yet, so for the heck of it, let's call my React app "Music Box."

    The music is hosted on Music Box's CDN (Sanity, in my case). The ideal format would be WebM, as it works across all modern browsers and is highly performant. For my use case, though, MP3s suited me just fine.

    In my codebase, I have a big ol' object that stores the track URLs and which page IDs they should play on. It looks something like this:

    const sounds = [
      {
        name: 'Bake Shop',
        src: 'https://cdn.sanity.io/files/qvonp967/production/f4163ffd79e09fdc32d028a1722ef8949fb31b85.mp3',
        conversationIDs: [
          '27f4be58-38f3-4321-bbc9-c76e0c675c36',
          'd008519f-16c0-4ef0-b790-f5eb0cb3b0b4',
        ],
        howl: null,
      },
      {
        name: 'Restaurant',
        src: 'https://cdn.sanity.io/files/qvonp967/production/4606e7ec6208df214d766776e3d5ed33408fe74d.mp3',
        conversationIDs: [
          'e1688c5f-218a-4656-ad96-df9a1c33b8f8',
          'a81fb6a7-d450-45e8-a942-e5c82fb1a812',
        ],
        howl: null,
      },
      // ...
    ];

    You'll notice each object also has a howl property. Let's get into how I'm playing sound:

    Playing Audio with Howler.js

    Howler.js is a delightfully feature-full API for handling sound with JavaScript. The library is built on top of the Web Audio API and also uses HTML5 audio for certain use cases. While I could have interfaced with the Web Audio API directly, Howler has much nicer controls for using multiple sounds, interrupting them, and keeping separate sound instances contained in a single sound palette.

    For each page, we initiate the appropriate sound with this code:

      const initiateSound = (src) => {
        const sound = new Howl({
          src,
          loop: true,
        });
    
        return sound;
      };

    src here is derived from the URL. The loop option is turned on so that we get continuous music.

    Changing Audio Page to Page

    This is all kept in a SoundController component at the top level of the React tree, above React Router.

    function App() {
      // ...
      return (
        <>
          <SoundController />
          <Switch location={location} key={location.pathname}>
            <Route
              path="/testimony/:id"
              render={(props) => <Testimony match={props.match} />}
            />
            <Route path="/act-one">
              <ActOneTestimonySelect />
            </Route>
            {/* ... */}
          </Switch>
        </>
      );
    }

    The main reason for this is so we have control over fading in and out between pages.

    The other reason is caching. Remember the howl properties in the sound array? That array is stored in a useRef() call in the SoundController component. Then we can save each instantiated sound on the appropriate element in the array for future reference.
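    As a sketch of the lookup from page to track, matching the current page's ID against the sounds array could look like this (findTrackForConversation and currentPageID are hypothetical names, not from the actual project):

```javascript
// Hypothetical helper: find the track object whose conversationIDs
// include the current page's ID. Returns undefined when the page
// has no background music assigned.
const findTrackForConversation = (sounds, conversationID) =>
  sounds.find((track) => track.conversationIDs.includes(conversationID));

// Usage sketch, run whenever the route changes:
// const currentTrackObj = findTrackForConversation(sounds, currentPageID);
```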

    That's exactly what is happening here inside the useEffect. This code listens for a change in currentTrackObj (triggered by a page change) and checks whether we have a cached Howler instance. If so, the cached version is used; if not, a new one is created and played.

      useEffect(() => {
        if (currentTrackObj) {
          let howler;
    
          if(currentTrackObj.howl) {
            howler = currentTrackObj.howl;
          } else {
            howler = initiateSound(currentTrackObj.src);
          }
    
          currentTrackObj.howl = howler;
          howlerRef.current = currentTrackObj.howl;
          if (soundPlaying) {
            howlerRef.current.play();
          }
        }
    
        return () => {
          if (howlerRef.current && howlerRef.current.stop) {
            howlerRef.current.stop();
          }
        };
      }, [currentTrackObj]);

    Playing and Pausing

    The state for this is stored in Redux as soundPlaying. When that's toggled, we can interface with Howler to play and pause the track.

      useEffect(() => {
        if (!playedAudio) {
          dispatch(setPlayedAudio(true));
        }

        // Guard against the ref being empty on first render.
        if (howlerRef.current && howlerRef.current.playing) {
          if (soundPlaying && !howlerRef.current.playing()) {
            howlerRef.current.play();
          } else if (!soundPlaying && howlerRef.current.playing()) {
            howlerRef.current.pause();
          }
        }
      }, [soundPlaying]);

    Then that's it! Musical bliss on every page!


    Error Tracking

    Solo-developing a fairly large app has humbled me. Errors and bugs can sneak in. Even professional software ships with errors. I've had to accept that it's part of the feedback process of crafting software.

    A lot of the development process for me in this project has been as follows:

    1. Ship new feature
    2. Collaborator tests it and finds a bug
    3. Collaborator tells me something is broken, but with no code-related context they can share
    4. The hunt ensues

    Not ideal! I've been looking for ways to streamline bug squashing by logging pertinent information when errors occur.

    Swallowing pride and accepting a path towards sanity, I've integrated an APM with my app. Here are the details on it:

    Using Sentry

    I opted for Sentry. I did some research on LogRocket and Exceptionless as well. All of them are fine pieces of software!

    LogRocket includes session replay of user actions. Exceptionless provides real-time monitoring. And Sentry is focused on capturing errors at the code level.

    For my needs, Sentry seemed to target exactly what I was experiencing — purely code-level issues.

    Integration

    Integrating is as simple as installing a few npm packages and adding this code to index.js:

    import React from "react";
    import ReactDOM from "react-dom";
    import * as Sentry from "@sentry/react";
    import { Integrations } from "@sentry/tracing";
    import App from "./App";
    
    //Add these lines
    Sentry.init({
      dsn: "Your DSN here", //paste copied DSN value here
      integrations: [new Integrations.BrowserTracing()],
    
      tracesSampleRate: 1.0, //lower the value in production
    });
    
    ReactDOM.render(<App />, document.getElementById("root"));

    One tweak I had to make was to set Sentry to only run in production. (I'm fairly certain I'll see the errors in development, thank you!)

    if (process?.env.NODE_ENV === 'production') {
      Sentry.init({
        dsn: "Your DSN here", //paste copied DSN value here
        integrations: [new Integrations.BrowserTracing()],
    
        tracesSampleRate: 1.0,
      });
    }

    With that bit of code, Sentry then provides the stack trace, the OS and browser environment, potential git commits that caused the issue, and passed arguments. All sorts of goodies to help find the culprit!


    Analytics - Accuracy and Ethics

    I don't personally use analytics on this site. I'm not here to growth hack my occasional writing for ad space. But I am involved in a couple of projects where analytics is good feedback for what we're putting out. So I did a little bit of a deep dive.

    Accuracy is Suspect

    Uncle Dave Rupert and Jim Nielsen have striking comparisons between their different analytics services. The gist is that they are serving up WILDLY different data, telling different stories.

    It's not just that Netlify numbers are generally higher than Google Analytics, either. If you follow one service, the data could tell you that you had fewer visits this month, while the other claims you had more.

    Part of this comes down to the difference in how the data is gathered.

    Server-side analytics counts requests as they hit the server. Client-side analytics loads a script in the browser on page load.

    There are pros and cons to both. Client-side analytics can better map sources of leads and measure interactivity, but it misses visitors who have JavaScript turned off or who block the script with plugins. Server-side analytics is prone to inflated numbers from bot traffic.

    So it seems like the best solution is to have multiple sources of information. Of course that extends to having more metrics than purely quantitative, as well.

    Privacy and Ethics

    Tangentially, there are some ethics around choosing how to track analytics and who to trust with this.

    It's an interesting space at the moment. Chris Coyier of CSS Tricks has written some thoughts on it. I feel largely aligned. The gist is: aggregate, anonymous analytics is largely OK and needed in several use cases. Personally identifiable analytics are a no-no.

    But I understand that even this “anonymous” tracking is what is being questioned here. For example, just because what I send is anonymous, it doesn’t mean that attempts can’t be made to try to figure out exactly who is doing what by whoever has that data.

    This is key for me. History has told us that if we're not paying for a service, we are likely the product. And so, any analytics service that doesn't have a price tag on it to me is a bit suspect.

    I can't say I have any final conclusions on the matter. Nor am I saying that X is right and Y is wrong; I have no shade to throw. But as I step more and more into positions where I'm a decision maker when it comes to privacy, I'm working to be more and more informed, putting users' best interests at the center.


    Git Hygiene

    My recent projects have involved a fair amount of disposable code. I'll write a component for an A/B test, and then it needs to be ripped out after the experiment closes.

    Git has simplified this process beautifully!

    I could manually handle the files, deleting line by line myself. But git makes it so that I can run a few commands in the CLI to revert everything.

    Here's my workflow for it:

    Modular Commits

    I've been guilty of mega commits that look something like this:

    git commit -m "render revenue data to pie chart AND Connect ID to Dashboard AND move tiers to constants file AND ..."

    I've recently made the switch to breaking out any instance where I would want to put an "and" in my explanation of the change into its own commit. So now my commits will look more like this:

    $ git commit -m "render revenue data to pie chart"
    $ git commit -m "Connect ID to Dashboard"
    $ git commit -m "Move tiers to constants file"

    There are loads of benefits to this. To anyone reviewing my code, it's far easier to follow the story told by my commits. Isolating a breaking change is much easier.

    The best, though, is that it's WAY easier to isolate a commit or few that needs to be thrown out later.

    Revert Commits

    The word comes back from marketing: The first A/B test was a success, but the second needs taking out.

    If a single commit needs backing out, it's as easy as this:

    $ git revert 9425e670e9425e66d61c8201...

    git revert will then create a commit with the inverse of those changes.

    Usually, I need to do this with multiple files. The workflow isn't too different:

    $ git revert --no-commit 820154...
    $ git revert --no-commit 425e66...
    $ git revert --no-commit 9425e6...
    $ git commit -m "the commit message for all of them"

    Push and merge from there!


    Fluency

    I'm thinking a lot about this thread by multi-instrumentalist and composer Carlos Eiene.

    For me, this is the key phrase:

    Where is the fluency line with an instrument? ... I think a closer answer is having the necessary abilities to effectively communicate in whatever situation you may be in. And if you're in a vacuum, learning an instrument by yourself without ever playing it for or with others... you don't get the chance to communicate musically.

    (Putting aside the whole argument for or against language as an analogy for music here.)

    In Music

    This is such a given in music school. You're jamming with musicians, getting feedback, and performing alongside each other all the time.

    For me, it's been interesting transitioning musical communities.

    The main point of the thread is to deemphasize practicing for the sake of mastery alone, and instead to focus on how you serve musically and how you can effectively communicate with other musicians.

    I'm thinking a LOT about the inverse, though. How do you find that same community and immersion in a musical context that's a lot more individualist than, say, being in a concert band or jazz combo? Where does the feedback come from there?

    When it comes to writing music, I feel like it's much more in the vein of how I imagine authors write. Or Jazz musicians working on transcriptions, actually. You're not limited by time or space. You are communicating and riffing off of someone's ideas that could be from decades ago. I think a present, accessible community is of course important. But online communities are much more lightweight than when you're in a group that rehearses every week together. And so, filling in the gaps takes working with recordings and materials.

    Speaking as an ambivert, this way of connecting musically is pretty amorphous. The buzzword now is that many relationships online are "parasocial." And don't get me wrong, there's beauty to it, too. I love being able to transcribe a Japanese musician's X68000 chip music so easily and readily, there's an interesting kind of intimacy to that engagement with music. The feedback and communication is strange, though. It's not direct communication, and the community, again, is less tangible.

    Anyhow — sometimes I miss in person music making. Maybe I shouldn't expect writing music to be the same kind of fulfilling. For me, the lesson is that music is multifaceted. Different acts in music can balance each other out. We write to express individualism. We perform to connect with a larger community.

    In Code

    This got me thinking with code languages as well.

    There's a spectrum. On one end are renaissance devs: folks who have dipped their toes in many technologies and are fluent in multiple languages and frameworks. On the other end are folks who are highly specialized.

    Namely, in web development, is it worth going broad or focusing in?

    (Short answer: go T Shaped)

    The answer comes from community, or maybe more importantly, from asking: what problems are your clients grappling with?

    That, too, is a spectrum. If you're aiming for the big companies, Python, data structures, and a CS degree in your back pocket help. If you're doing client work, breadth wins out. If you're an application developer, it may be a more focused set of JS-centric technologies.

    Like music, the field is too large and varied to really say one size fits all.

    No matter what, though, mastery isn't necessarily the goal. Here, it is fluency.

    Some projects may require that intimate knowledge of JS runtime logic.

    Others may only need some familiarity with jQuery.

    The interesting thing about this field, in my mind, is that it's a lot less about working towards a specific target for fluency, and a lot more about using the tools you have to solve a problem for your collaborators.

    Learning is a natural part of that process. So there is both a really tight feedback loop and there's natural growth and development built in.

    (Again, caveat here to say it's not an excuse to slack on developing your skills. But working towards fluency can keep it so that you are working to master relevant skills vs. simply being virtuosic in an irrelevant way.)

    Back to Music

    The difference here is that software solves a direct problem for someone else. It's creativity with a practical outcome. With music, there's more magic. ✨ The outcomes are less clear, the people you serve and communities you entangle with are less defined. The benefits, even, are vague at times.

    Except, y'know, your soul grows in the process. And simply being creative in the world and sharing that creativity can lead to inspiring others to do the same.


    Geeking Out Over Notion

    You guys, I'm just really jazzed about a piece of software over here.

    I think for anyone who codes, there's just a little bit of the person who organizes their sock drawer in us all. Organization and systems are a big part of the job. And so, our project management has that element to it too.

    In that arena, Notion has just been SO pleasant to use.

    Kanban Board

    My primary use for it is the board view. This alone has been huge, and maybe this is actually more of a post about why kanban boards are the best.

    Let me set the scene: Jenn and I start our big game development project together. We're excited, energy is high, and we have lots of brain-space mutually for where things are and what our individual tasks are.

    Then the project gets BIG. The list of features is long, a log of bugs crops up, and we don't have a unified spot to keep notes on individual features.

    [Image: Notion board with cards under Analysis, Development, and Awaiting Input]

    The beautiful thing about a board is that we can keep track of multiple features, ideas, and bugs. From a glance, we can see what's on deck for developing, researching, and giving feedback.

    What's especially cool about Notion's boards is that you can open the card up into its own document!

    Say that Jenn makes a card called "Add Pizza to Inventory."

    We have a comment function on the card where we can have a conversation over what toppings the pizza should have, when to add it to the inventory, and so on.

    Under that is space for writing on the document. Anything goes here: adding screenshots, keeping a developer to-do list, saving notes from research. So all the details around that feature are kept in one spot.

    What happens often is that we'll talk about an idea, leave it for months, and then have to come back to it. With a comment thread and notes from development, it's that much easier to pick it up and work on when the time comes.

    Guides and Meeting Notes

    Notion is mostly marketed as one of those "everything-buckets", similar to Evernote or Google Drive.

    I'm personally a believer in plain text and just using your file system for note keeping. But, collaboratively, having a hub for all things project related is unbeatable.

    On top of our progress with the board, we used documents for writing meeting notes and keeping track of guides for using Sanity. We both always have the most up to date info, as Notion syncs automatically with any changes either of us makes.


    Automatically Saving Spotify Weekly Playlists

    With friends, I've been talking for ages about how channels of communication have different feelings and expectations around them. Sending a work email feels different from sending an Instagram DM, which feels different from texting, which feels different from sending a snap on Snapchat.

    For me, the same is true for music apps. I have Spotify, Tidal, Bandcamp, and YouTube accounts with different musical tastes and moods. Especially since these apps all have algorithms for recommending music, I like each to be tuned into a certain mood.

    It just feels strange having a Herbie Hancock album recommended next to the new Billie Eilish, even though I would listen to both!

    SO I have my Spotify Discover Weekly playlist fine tuned to curate a great mood for work with mostly instrumental music. BUT I have to manually save the playlist every week, or else it's gone to the ether.

    Naturally, I was looking to automate the process! Having worked mostly in JavaScript and React so far, I saw it as a great chance to explore scripting in Python.

    What It Does

    This light script does just a couple of things.

    Of course, it gets and reads the current Discover Weekly playlist data, creates a new playlist, and adds all the new tracks to that playlist.

    It also implements a custom naming convention. I have sock-drawer-level organization preferences for naming these playlists. I like to name these by the first track name and the date the playlist was created. Names end up being:

    • An Old Smile 04/05/22
    • Mirror Temple 03/28/22
    • Apology 3/21/22

    This includes a little bit of trimming – some track names end up being ridiculously long, sometimes nonsensical. (looking at you, ⣎⡇ꉺლ༽இ•̛)ྀ◞ ༎ຶ ༽ৣৢ؞ৢ؞ؖ ꉺლ, an actual artist recommendation.) So there's a very simple shortening of the name if needed.
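    A sketch of that naming logic in Python (the function name and the 30-character cap are my own guesses, not the actual script):

```python
from datetime import date

MAX_NAME_LENGTH = 30  # assumed cap on track-name length; the real script may differ


def make_playlist_name(first_track_name, created=None):
    """Build a playlist name like "An Old Smile 04/05/22":
    the first track's name (trimmed if absurdly long) plus the date."""
    created = created or date.today()
    trimmed = first_track_name[:MAX_NAME_LENGTH].strip()
    return f"{trimmed} {created.strftime('%m/%d/%y')}"
```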

    Using Spotipy and the Spotify Web API

    Spotify already has an exposed web API for doing just what I needed – creating playlists and adding tracks. Doing so, like other OAuth applications, requires providing user authentication and scopes.

    To simplify the authentication and communication, I opted for the lovely Spotipy library. Simple and intuitive, the library handles the back and forth of authenticating the application with Spotify's Web API and holding on to all the tokens needed for requests to my user account.

    Creating a Class for Modularity

    Although this could easily be a single script, I couldn't pass on the opportunity to bundle this code up in some way. I could see this project being extended to handle other playlists, like the Year in Review Playlists.

    Maintaining state was a bit cleaner in writing a class as well. Storing the Spotipy instance and several other reusable pieces of state such as the list of tracks kept all the necessary information stored and self contained, ready for use by the class methods.
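    As a rough sketch of that shape (the class and method names are mine, and the real class surely does more), the Spotipy client and shared state live on the instance:

```python
class PlaylistArchiver:
    """Hypothetical skeleton: holds the Spotipy client and the
    track state that several methods need to share."""

    def __init__(self, client, source_playlist_id):
        self.sp = client                    # authenticated spotipy.Spotify instance
        self.source_playlist_id = source_playlist_id
        self.tracks = []                    # populated by fetch_tracks()

    def fetch_tracks(self):
        # playlist_items is a real Spotipy method; the response
        # shape is abbreviated here.
        response = self.sp.playlist_items(self.source_playlist_id)
        self.tracks = [item["track"] for item in response["items"]]
        return self.tracks
```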

    Error Handling in Python

    My first and primary scripting language is JavaScript. There, like in many other languages, error and exception handling is not exactly a beginner topic. So it was surprising to find myself accounting for exceptions so early in my Python coding.

    Handle them, I did. Each method is wrapped in a try/except block and logs messages unique to each function, to help keep track of where things go awry.
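    The pattern, roughly (the function name and log message here are placeholders, not the script's actual methods):

```python
import logging

logger = logging.getLogger("playlist_archiver")  # hypothetical logger name


def fetch_discover_weekly(sp, playlist_id):
    """Each step gets its own try/except with a message naming the
    step, so the logs make it clear where things went awry."""
    try:
        return sp.playlist_items(playlist_id)
    except Exception as error:
        logger.error("Failed fetching Discover Weekly tracks: %s", error)
        return None
```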

    AWS Lambda

    The script wouldn't be much of an improvement if I still had to open up a terminal and run it manually! Uploading to AWS as a Lambda function made sense since it's such a lightweight script that is purely interacting with Spotify's web API.

    I used the Serverless Framework to streamline the process. Initializing the project with their CLI and customizing the config file, I was able to create a Cron Event Handler to fire off the function every Monday at 7:00 AM.
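    The relevant slice of the Serverless config looks something like this (the function and handler names are illustrative, and note that AWS cron runs in UTC, so the hour may need shifting to land on 7:00 AM local time):

```yaml
functions:
  saveDiscoverWeekly:
    handler: handler.run   # illustrative module.function name
    events:
      # AWS cron fields: minute hour day-of-month month day-of-week year
      - schedule: cron(0 7 ? * MON *)
```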

    Playlists Created on Request

    One interesting thing I've noticed about the playlists is that on Mondays, when I open up the official Discover Weekly playlist in the desktop app, it will sometimes still show the previous week's playlist, and then later update with the new tracks for the current week.

    I initially thought this would mean that Spotify only updates the playlists after you make a request. If my script ran before that initial access, then it may be saving an old playlist instead of the newly generated one.

    However, in practice, it seems it may actually have more to do with Spotify's app cache taking time to update. When I logged the results of pinging the endpoint for the current Discover Weekly tracks, both the first load and a delayed request returned the new tracks appropriately. No need to change my code, but an interesting point to explore.

    Try It Out!

    If you, too, are an exceptional music nerd, you can give my script a whirl yourself! You can find my code here at this GitHub repo link with guidelines for setting up AWS Lambda and Serverless.


    30 on 30

    My first draft of this, I'll be honest, waxed poetic on time and identity. I wrote about my Saturn Return, the transient nature of reality, and how we are, in essence, a part of the universe observing itself.

    BUT THAT'S NO FUN!

    So instead, here's my listicle of 30 lessons learned leading up to this big, hairy landmark.

    30 Lessons

    1. Enthusiasm is the most important compass. I think about this a lot when it comes to planning my own future. I have no idea what will make me happy tomorrow. And that's ok! There will always be something to be excited about and moving towards!

    2. Everything has diminishing returns at some point. Money, networking connections, living in excess, even living a balanced life to a degree. Aim for 80% in most things, that's the sweet spot of effort and reward.

    3. Life happens in seasons. If you're a high achieving type, it's easy to fall into the idea that production should always stay high. But we need those slow periods for reflection and recharging. A cliche at this point. But really feeling this on a month to month, year to year, decade to decade level has been powerful.

    4. Eat well. Seriously.

    5. Sleep. Another boring one, but come on! It's really important! I can hear you now: "What's next, are you going to tell us to exercise?!"

    6. Exercise. Get out of your head and into your body. A good walk is great medicine.

    7. It all works out in the end. It's hard to know this without having a few lived experiences of genuine challenge under your belt. I feel like I'm just getting there. But trust me. It all works out in the end. In every way, this too shall pass.

    8. Success is not a direct result of effort. Don't get me wrong, effort is wildly important. But it's actually effort multiplied by a much, much, much larger variable of luck, and a third variable of resources (eg "talent" or inclination). A big lesson late in my 20s has been to accept this and use it. Work steadily, stay humble, and look out for open opportunities. It's much more enjoyable than the brute force method.

    9. On n'arrive jamais. One never arrives. (Quoting musicians here for you Eugene Rousseau / Marcel Mule fans!) The anticipation is greater (and really lots more fun!) than the realization. Take time

    10. Keep in touch. Doing this in an intentional, genuine routine is an easy way to get the ol' warm n fuzzies.

    11. Don't take anything too seriously. Seriously.

    12. Back up your files!! I grew up having to reinstall windows on our home PC every couple of years. It wasn't a big deal when it was just kidpix files on there. But now that all of my work is digital, it's a necessity.

    13. Make time for personal creative projects. Even when what you do for work is creative. This has been a lifeline for me. There are so many reasons for it. It's fun, you learn so much by doing it, you discover identity through it. And anything works! Blogging, Twitch streaming, fan fiction. Actually, Vonnegut says it best: ". . . Practice any art, music, singing, dancing, acting, drawing, painting, sculpting, poetry, fiction, essays, reportage, no matter how well or badly, not to get money and fame, but to experience becoming, to find out what's inside you, to make your soul grow."

    14. Attention is the greatest gift to give and receive. Paraphrasing from Simone Weil, as discovered on the blog formerly known as Brain Pickings.

    15. Acceptance as a horizon. Just starting on the path of learning this one. Probably the most important one on the list. Acceptance of others and self is wildly intertwined. Part of growing up is simultaneously being open to the differences in others and yourself. A tricky thing, too big for a listicle!

    16. Beware the differences between your genuine values and societal values. Again, enthusiasm helps here in parsing which is which.

    17. There's greater wisdom in the gut than we give credit for. Some of my better decisions were against reason and in favor of intuition.

    18. Be who you are now. A lesson from teaching music to kids. Pardon the philosophical bent here: A 6th grader's purpose isn't to grow up or to learn all their scales for 7th grade. It's to be a 6th grader. We're all working towards something, but losing sight of who we are now takes away from the unique joys of where we are. The best lessons I taught were ones where we savored enthusiasm. Particularly for beginners, savoring the newness of learning a song they were inspired by. (Sometimes it was Megalovania...actually, most of the time it was Megalovania.) And yeah, then we did some scale work too.

    19. Do something for work, and something else for creativity's sake. I'm here to say it's true, both halves make a greater whole. The nice thing is that the vehicle for money can be inspiring too — coding and music both support each other creatively for me.

    20. Books are great. Go pick one up! Remember how CRAZY BONKERS it is that you and I are connecting minds right now across TIIIIME AND SPAAAACE - through the magic of printed text!

    21. Invest in your tools. When I started at UT, I was simultaneously playfully poked at for playing on awkward mouthpieces, and I was praised for making them work. BUT after buying newer, nicer setups, it was just easier to sound good and more fun to play the dang horn!

    22. You don't need to be a gear-head. Then again, I was learning to code on a $200 chromebook that I had to install linux onto. Build times took ages. But it got me here. 🤷‍♂️

    23. There's so much time. Back to no. 18. Not so much a lesson as much as an observation. The 20s to me felt like a race to Arrive and find stable ground. Once you have it, somewhere between 28 and 36 for most folks I talk to, the world opens up. So savor whichever stage you're in, the striving or the sustaining. Both have their own beauty on the journey.

    24. It's ok to give up on something partway through! Thanks for reading! 👋


    Adding RSS Feed to Next.js with SSR

    I'm a big blog nerd. Growing up, I subscribed to my favorite webcomics. I mourned the death of Google Reader. I love the spirit of blogging today as an alternative, slow paced social media.

    Naturally, I HAD to get one going on this site!

    There are several great resources for getting a feed going with SSG and Next.js. This one was a favorite. Here, I'm going to add my experience setting it up with a SSR Next site.

    The Sitch

    Here's what static site solutions suggested:

    • Write your rssFeedGenerator function
    • Add the function to a static page's getStaticProps method
    • On build, the site will generate the feed and save it in a static XML file.

    The issue for my use case is that my site leverages Server Side Rendering. I'm doing this so I can upload a post that's scheduled to release at a later date. With a static site, I would be stuck with old data, and the post wouldn't release. With SSR, a simple date comparison filters published posts from scheduled ones.
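    That date comparison can be tiny. Here's a sketch of what a filter like filterBlogPosts might do (the hidden check is my assumption, based on the fields my queries pull):

```javascript
// Sketch: keep only posts whose publish date has passed and that
// aren't hidden. A scheduled post simply fails the date check
// until its release time arrives.
const filterBlogPosts = (post) =>
  new Date(post.date) <= new Date() && !post.hidden;
```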

    So, since we have a Server Side Rendering solution for pages, we need an SSR solution for the RSS feed.

    Rendering RSS Feed from the Server

    I'll briefly start with the code to generate the XML file for the feed. I'm creating a generateRSSFeed method that largely looks similar to the one described in this guide.
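    In spirit, generateRSSFeed maps each post into an RSS 2.0 item inside a channel. A bare-bones sketch (the channel metadata and URL are placeholders, and a real feed would also escape content and add pubDate and GUIDs):

```javascript
// Minimal sketch of an RSS 2.0 feed builder.
const generateRSSFeed = (posts) => {
  const items = posts
    .map(
      (post) => `    <item>
      <title>${post.title}</title>
      <link>https://example.com/posts/${post.slug}</link>
      <description>${post.excerpt}</description>
    </item>`
    )
    .join('\n');

  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Blog</title>
${items}
  </channel>
</rss>`;
};
```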

    That generator gets wrapped by my getRSSFeed function.

    export async function getRSSFeed() {
      const posts = await getAllPostsWithConvertedContent(
        [
          'title',
          'date',
          'slug',
          'author',
          'coverImage',
          'excerpt',
          'hidden',
          'content',
        ],
        {
          filter: filterBlogPosts,
          limit: 10,
        }
      );
    
      const feed = generateRSSFeed(posts);
      return feed;
    }

    lib/api.js

    And here's the tweak: I'm using the method in the api routes folder instead of getStaticProps.

    import { getRSSFeed } from '../../lib/api';
    
    export default async function handler(req, res) {
      const xml = await getRSSFeed();
      res.setHeader('Content-Type', 'application/rss+xml');
      res.send(xml);
    }

    pages/api/feed.js

    Instead of generating a static file and saving it to our assets folder, here we're serving it up from the API directly.

    And that's it! Once the time passes on a scheduled post, the next request to the feed will include that latest post!


    Balancing New and Familiar Tech

    After developing this site, I realized that getting started was the hardest part.

    When I set out to build it, I had a clear vision for what I wanted to accomplish. I also had a very ambitious set of tech I wanted to learn along the way.

    Learning It All

    This project inspired me to roll my sleeves up and get close to the metal. At work, I design web apps with React, Meteor, Mongo, and several other tools that make life easy. I was hungry to balance it with a real challenge.

    To me, that meant:

    • Writing blog posts in markdown
    • Converting markdown to HTML
    • Handling my own routing by picking Express back up
    • Learning a new templating language
    • Deploying to a higher "professional standard"
    • Handling image hosting
    • Optimizing images
    • AND MORE

    Basically, I wanted to hand code as much as I could without any help!

    Getting Stuck

    This went nowhere fast.

    After getting an Express server up, I was deep in decision fatigue. I was having to make unique choices about so many details. I had to learn as I went with a greater number of libraries and tech. I had very little that felt familiar in front of me.

    And so I was stuck motivationally.

    Pareto's Principle

    If you're unfamiliar, Pareto's Principle is the idea that roughly 80% of consequences come from 20% of causes, and vice versa.

    The principle is popular in business. 80% of revenue comes from 20% of clients.

    I realized while in the weeds that it's a fair ratio for development and learning new tech, too.

    80/20 Rule in Tech

    So, ego got checked at the door. I scaled back the "newness" of what I was doing by picking familiar tech - React, Next.js, hosting on Vercel, AWS.

    I then experienced the sweet spot of balancing new technologies and features while building the site.

    My final balance with this tech looked like this:

    • I was familiar with 80% of what I was working with. (React, Next, Vercel, AWS)
    • I was unfamiliar with 20% of the tech I was working with (Hosting Markdown and Image Optimization)

    I found flow with that ratio. It's when I was trying to work with more of a 50/50 balance that I lost momentum. When I was trying to get back into Express with a new templating language, serving static files, AND all of the above new tech, I stalled.

    Finding the right balance kept me productive, happy, and still learning a great deal along the way.


    SSG vs SSR vs CSR

    While building my site, I did a deep dive into rendering. Next.js can serve up static files, client side rendering, and server side rendering. All on a page-by-page level, even! I wanted to define the pros and cons of each. Here's what I found:

    Static Files

    These are the fastest to serve up! Think a simple HTML file or image. There's nothing to process; the server just loads the file and sends it off. These are easily distributed to CDNs as well, so that speed translates all the way from California to Australia.

    Static Site Generation

    The benefits of static files, with the flexibility of templating. If data is stored in a DB or CMS and needs to be piped into your site, this is a great solution. On site build (say, when you push new code), static HTML files are generated from templates and pulled data. The data needs to be something relatively unchanging, as the site typically only builds once and then caches the statically generated HTML files.

    Next has some neat enhancements to this. Incremental Static Regeneration (ISR) can regenerate your pages after build as your data is updated. You can either set this to a time interval, or you can even connect your CMS to your app with a webhook. This way the site only regenerates when data is updated.
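    In practice, ISR is a one-line addition to getStaticProps. A sketch (the fetch function is a stand-in for whatever data source the page reads from):

```javascript
// Stand-in for a CMS or filesystem read.
async function fetchPosts() {
  return [{ title: 'Hello' }];
}

// In a real Next.js page this would be `export async function getStaticProps()`.
async function getStaticProps() {
  return {
    props: { posts: await fetchPosts() },
    revalidate: 60, // regenerate in the background at most once every 60 seconds
  };
}
```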

    Server Side Rendering

    As the name suggests, rendering happens on the server, the same as with SSG. The difference is that it happens on request as opposed to on build. Builds happen infrequently and are triggered by a specific event. Requests happen every time a user visits your webpage. This has been the way of the web for decades, with PHP, Ruby on Rails, and even Node.js and Express.

    With several different rendering options now available, this method shines with data that changes fairly frequently. This is also a great solution for sites requiring user authentication, such as logging in to a portal to view your utilities bill.

    Client Side Rendering

    The new hotness, relatively speaking. A JS framework such as React or Vue is used to send a root div and a whole lot of JavaScript to dynamically render the page in the browser. Data is often pulled from several sources that are frequently updated. This is the solution of choice for building apps, dashboards, and anything requiring real time data.

    The Weird Middle-ground

    So, the above is a spectrum, typically trading site performance for the freshness of data served (to vastly oversimplify it.)

    What if you fit somewhere in the middle? Incremental Static Regeneration is really close to Server Side Rendering on the spectrum. Which do you go with then?

    My situation is that my data is not changing, but the conditions for rendering it do change. I finish writing my blog posts on one day, push them to the site, but then only want them to publish a week later.

    OK, so ISR would be great. You can time the interval of when to regenerate the page. What's the big deal?

    The Decision Depends on Volume

    Traffic. I'm just starting the site, so volume is pretty low. With ISR, the first visitor after the regeneration event gets a cached version of the site. Then, later visitors get the fresh one. But that first person is a big deal to me! If I were a national e-commerce site, no sweat. But I'm a local mom and pop shop on the internet.

    Not to mention ISR adds a layer of complexity and maintenance unto itself.

    So! My choice for the site is to go with SSR. I trade off the wicked fast benefits of SSG and ISR. In their stead, I have greater simplicity and the assurance that the few folks visiting my site at the start are getting the freshest content.

    As traffic increases, switching over to ISR is still an option thanks to Next's flexibility.


    My New Website! Details and Tech

    I'm very excited to have plowed some land and planted the seeds for my own garden on the web!

    The sites I've developed have represented big phases in my life. Moomoofilms.com was my portfolio for YouTube sketches when I was a kid. After grad school, I put up a music teaching portfolio site for students. Starting in tech, I put together a landing page for all my projects.

    With chrisdpadilla.com, it feels like another step. A unified home for all the different wanderings I do in tech, music, and writing.

    So yes, websites are great! I would definitely recommend getting one!

    With the sentimental side of the site laid out, let's talk tech!

    Considerations

    Features

    Blogging is the main feature of the site. Aside from that, there's a little bit of static file hosting. I do love the idea of playing with full stack features in the future, though, so having access to server side code is also a necessity.

    Longevity

    At the same time, I want something that will last. This site will be my playground for experimenting with new code, but I don't plan on doing a Scott Tolinski level of regular refactoring.

    I started hacking sites in the 2000s. A lot has changed and improved since then, and I want to take advantage of where development is made easier! And I do want to balance that with also making the site portable.

    Performance

    On a structural level, I wanted the site to be performant and accessible. I grew up on view source, and I love when I stumble on a site where I can still find beautiful html in the developer tools. I'm a little old fashioned - even if I'm using modern tooling, I love the feeling of making a site similar to how I would have back when I was growing up. Simplicity just feels good!

    Tech

    Content in Markdown

    My first decision was to build a system where I owned my content and could easily move it as frameworks and CMS's come and go. I do a lot of my personal writing, reflecting, and note taking in markdown already, so writing the blog in markdown files was an easy choice.

    Since they are so lightweight, the posts are stored in the same repository as the code. Down the line, this also makes updating the site really simple: whenever I push a commit with a new post, the site rebuilds with the new content.

    An Initial Detour

    My first iteration of the site was an Express server. This hit all the boxes at first:

    • It can easily handle blogging features, while being hugely extensible.
    • Express has been tried and tested. The MVC approach to building websites is also a classic method.
    • It would be performant, rendering static files.

    I was determined - I was going to hand code as much as I could and learn a great deal along the way!

    And then it got tedious. I'm up for a good challenge, but I found myself hitting decision fatigue very quickly.

    I needed a bit more help. Momentum is a key ingredient in my projects. If I kept at it with Express, I felt I would lose that momentum.

    Next.js

    I scrapped what I had and switched over to Next.js. Put simply, Next handles everything I was looking to do myself, but makes it effortless.

    Feature-wise, the framework is flexible enough to switch between Server Side Rendering and Static Site Generation on a page by page basis. On top of that, api routes are available to deploy serverless functions for any future features and integrations.

    That lends itself to great performance. There's potential to ramp up the performance through caching, edge function support, and built in image optimization.

    Next has been tried and tested. There's excellent community support. They're also up to version 12, after many successful iterations. I'm not worried about the technology disappearing.

    Asset Hosting on Amazon S3

    An incredibly cheap and easy solution! Next pairs really well here. next/image handles many key performance optimizations, so all I have to worry about is uploading assets to S3.
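    For illustration, using an S3-hosted asset with next/image might look something like this (the bucket URL, component name, and dimensions are hypothetical):

```jsx
// Hypothetical component: next/image handles resizing, lazy loading,
// and serving modern formats for the S3-hosted file.
import Image from 'next/image';

export default function AlbumCover() {
  return (
    <Image
      src="https://my-bucket.s3.amazonaws.com/cover.jpg" // hypothetical bucket
      alt="Album cover"
      width={800}
      height={600}
    />
  );
}
```

    Remote images also need their domain allowed in next.config.js under images.domains.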

    Style and Design

    My design is intentionally simple. I'm not a full-blown minimalist (my desk is always a mess!), but I appreciate a design that doesn't detract from the content.

    Structurally, everything is in one plain old CSS file. I'm using Custom Properties (variables) for some repeating values. That's as fancy as it gets, though!

    Hosting on Vercel

    Again, a natural pair with Next. It comes with integrated deployment through Github, easy to access logs, and simple set up for SSR and SSG. Also pretty cheap!

    Challenge

    A future concern I have with the site is vendor lock in. Next works like a charm on Vercel. There's support for hosting on other platforms such as Digital Ocean and Netlify. It's hard to say at this point if staying on Next and hosting with Vercel will be the best choice in the future.

    At this time, I definitely needed to get up and running quickly! So I'm happy with my choice today. As I continue developing, I'm planning to decouple my personal server logic from Next's API.

    What I Learned During Development

    Working on this site has been filled with learning opportunities, both in hard and soft skills. For more on particular areas of learning, you can read my articles on the subjects:

    Launch and Beyond

    Now that the code is in a presentable spot, I'm ready to fill the pages up! I already have a few more blog posts and albums in the works. It feels great to have a central home for all of them.

    Here are a few selected inspirations for the site's design:


    Symbolic Links

    The Sitch

    I'm in a spring cleaning state of mind with my data!

    I keep a bunch of text files in a Journal/ folder on my computer. It's pretty similar to the hierarchy Derek Sivers lists in this post on writing in plain text.

    And then separately I have this blog where I store articles as markdown files in the codebase itself.

    personalBlog/
    |- _posts/
    |- components/
    |- util/
    etc...

    It's all on my computer, of course — but it feels strange to write in prose in a text editor like VS Code. It even feels strange to store article drafts in the same place as the code for the site.

    They are only a few clicks away, but they feel like far-flung and very different spaces. So how do I keep published articles and drafts near each other and organized, while physically storing them somewhere my blog has access to?

    Alex Payne mentions in this article using Symlinks.

    Creating one looks like this:

    $ ln -s source symlinkDestination

    A very elegant and easy solution! The idea is that you can have a link to another file or directory within a completely different place. Like a regular URL link, but for local files.

    My original idea was to have a symlink in the repo for this codebase and have all my writing stored in the Journal folder, including published pieces. The issue is that symlinks are just that: links. When storing them in git and publishing on GitHub, the files themselves are not pulled in.

    So I swapped the direction. The symlink lives in my Journal directory, and the actual files are in the codebase. When I publish this article, I'm moving the file from Journal/blog/drafts into the symlink Journal/blog/_posts, which then moves it over to the appropriate folder in the code repo.
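    Here's the whole flow as a sketch, with illustrative paths rather than my actual directory names:

```shell
# Set up: the real _posts directory lives in the repo;
# Journal/blog/_posts is a symlink pointing into it.
mkdir -p repo/_posts Journal/blog/drafts
ln -s "$PWD/repo/_posts" Journal/blog/_posts

# Publishing: move a finished draft "through" the symlink...
echo "A finished post" > Journal/blog/drafts/post.md
mv Journal/blog/drafts/post.md Journal/blog/_posts/

# ...and the file now physically lives in the code repository.
ls repo/_posts
```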

    It works beautifully on the command line.

    A nitty, gritty, small tool - but one that makes me unreasonably happy to use!

    A Side Note on Aliases

    Macs have Aliases, which work in a very similar way. They're restricted to Finder, though. I'm working mostly on the command line when I'm writing and working on code, and symbolic links are recognized both by Finder and by Linux systems.


    How to Learn Web Development

    My Path

    From 2019 to 2021, I taught myself modern web development. What began as a fun hobby eventually turned into a completely new and exciting career trajectory for me. I taught music at the time, and spent my spare moments between lessons and at night hacking away at projects and learning from online resources.

    It was a blast and has changed my life. Well, obviously in the career department! But teaching myself to properly program was a surprising discovery in itself. I really impressed myself with how much I could figure out on my own with enough resourcefulness. Plenty of folks I know benefitted from school, bootcamps, or tutors. But I would strongly encourage trying to self-teach, if you're curious!

    Who This is For

    Absolute beginners! Intermediate learners. Anyone in between! If you are doing this as a hobby or are interested in working full time, these will help along your journey.

    The most challenging part about self teaching is the lack of structure in the curriculum. This article will be a mix of what worked for me as well as what I would do differently if I could. The best of both worlds!

    It's also worth noting this is for full stack web development specifically. The gist is that web developers can be put in two camps: those that focus solely on UI (a more designer approach) and those that take a more balanced, logic-handling approach (more engineering and data oriented). Both are highly valuable, and the materials below guide you through both, but in my experience I lean more in the direction of full stack than pure front end.

    That said, let's start with how to approach these resources.

    The Mindset for Learning

    These are topics all unto themselves. Almost cheesy, but important enough to share:

    Focus on Foundations First

    Frameworks come and go. They also hide the tricky parts of using certain tech. Writing React is meant to be easier than vanilla JS once you get the hang of it. Get really good at vanilla first. Same for CSS and HTML — it's invaluable to be really comfortable with the basics first so that you can easily transition from tooling.

    Be Consistent

    Musicians will be familiar with this. Practicing saxophone 30 minutes every day is better than 4 hours once a week. The same is true of programming. You often learn between sessions, while washing the dishes. The routine facilitates that.

    50% Rule

    Practice learning the way that artists do with the 50% rule. The gist: spend half your time learning from materials (books, blogs, tutorials) and the other half creating, just for the sake of creating.

    It's both a sanity check and a way to genuinely deepen your learning. Doing this will instill the confidence that you can write a few lines of working JavaScript, or style a site. And it's more fulfilling to step away from the tutorials and genuinely create.

    Some resources are good about encouraging this. Books or videos sometimes come with practice problems. If not, make up your own!

    This is the hardest part. That's ok! It gets easier the more you do it. Start small - 5 minutes of study, 5 of practice. Then 15. 30. An hour, two — a day, a week. You'll be amazed at how quickly you grow this way.

    Resources For Learning Web Development

    JavaScript

    Start with JavaScript! HTML and CSS are easier, but refining JavaScript will take the most time. Eat the frog first and start here. For any HTML and CSS you need to know while learning, get the bare-bones basics, just enough to start scripting. I used a bootcamp's prep course, though I'd recommend a book or alternative course.

    Head First JavaScript is a fun and readable guide. Free Code Camp also has a great course on JavaScript that's interactive, perfect for the 50% rule.

    HTML, CSS, and Refining JS: The Odin Project

    The real bulk of my learning was done through the open source site The Odin Project. Wildly thorough, the project guides you from just getting started to building full stack apps. All along the way, you are challenged with suggested projects to try out what you learned.

    The site itself primarily links to other resources, all of great quality! It's a great demonstration of what your continued learning will look like on the job, but with the structure of a curriculum. Volunteers maintain it, so the content has stayed relevant through the years.

    The course is an overview, and there are a few holes here and there. But the focus on what and how to learn each piece of the web development puzzle is invaluable.

    One note — It's ok to completely crash and burn on a project. I got stuck at one point on an object oriented JS project and simply needed to move on. That is ok! There are other sources that can help fill gaps below.

    Frameworks and Refinement

    To fill in those gaps, getting even more experience building apps and learning from others is crucial. The curriculum above is a great starting place, but it takes more portfolio development to feel really confident in development.

    Most of my favorite video courses are by Wes Bos. React, Next.js, and even his beginner JS course were a great, hands on way to see how individual pieces played together. They're fun, practical, and no-fluff guides.

    Level Up Tutorials are quick guides to getting up and running with a certain piece of tech. It's staggering how much Scott has put out on this site. Find whatever interests you at this point. Balancing the fundamentals with a piece of new and interesting tech like GraphQL or Gatsby at this point is fun! Not to mention, an indicator of passion and curiosity in development.

    These two also host a podcast called Syntax. I listened to their show a lot to get up to speed on what's modern, and simply to learn how developers speak. It's a bit easier to pick up context from conversation than just from text, in my opinion. And it's another way to learn while washing the dishes!

    Going Pro

    More on mindset: It will be a slog. Hundreds of applications may yield only a handful of interviews.

    The Odin Project does a great job of offering resources for this phase. I would just add this much:

    Algorithms

    They are a big chunk of it.

    Computer Science style algorithms weren't a big part of interviewing for me. But learning them still made me a much better JS dev. Colt Steele's course is a great way to get familiar with them.

    Cracking the Coding Interview is essential reading. Just reading the first half of the book and trying out the first few problems will be good practice. You don't need mastery here, just familiarity.

    The algorithm questions I did encounter were more toy problems, à la what's on CodeWars. A problem or two a day will keep your problem solving skills fresh.

    General Technical Knowledge

    One technical interview I took was much more trivia-style. Google "React Interview Questions" and you'll find what I'm talking about.

    Spending a bit of time learning these (and the principles behind them!) and then adding them to a spaced repetition system helped me integrate them. Podcasts also helped here — this type of interview is mostly to gauge if you can speak like a developer.

    Portfolio

    For the big pieces on your portfolio, write out a short readme on what the app does, the stack, challenges, and major features. This is both for anyone looking at your work and for yourself when you interview.

    It's one thing to be able to talk abstractly about third party integrations. And another to say "Yeah, when I was building my Next app, I used the Stripe API to handle payments on the server. First I..."

    Open Source

    The classic paradox - entry level jobs ask for 2 years of experience. I've heard that client work or tutoring has helped people here. For me, I really enjoyed getting experience through Open Source.

    Look on Code for America's site to see if there is a chapter in your area. If not, some groups may still be working virtually. I'd recommend this over searching GitHub for projects for a few reasons:

    1. You'll likely get experience working with volunteers with different skills. Designers, researchers, and data analysts may work on the same project. The same is true with programming - you may be the front end expert in a team of backend devs.
    2. You'll learn fast the hard skills of programming in a group. Setting up a new dev environment, using git as a collaboration tool, and participating in code reviews are all part of the experience in open source and on the job.
    3. It's more fun!! The people here want to help you contribute and grow. The projects are for great causes. And writing code that genuinely serves other people in tandem with other volunteers is wildly fulfilling.

    Contributing to open source can be a commitment. It's well worth the effort, though. The confidence and support system you gain through it can counterbalance the challenge of applying for a job.

    Good luck!

    Take what works for you and leave the rest. Let me know if this helped! Self teaching can be an isolating path, so feel free to reach out and share where you are.


    Parsing Markdown in Node

    I'm writing my own markdown-based blog, and had a lot of fun getting into the nitty gritty of file reading and text manipulation. It takes a little more writing than off the shelf solutions, but I wanted to have more control and ownership over the process.

    Sample File

    I have a file written like so:

    ---
    title: Parsing Markdown in Node
    tags:
      - node
      - blog
      - tech
    date: 2022-05-19
    
    ---
    
    ## Sample File
    
    I have a file written like so:

    The main body of the post is prepended with metadata. I want to grab the metadata so it can be used in the formatting engine, and extract it from the post body separately.

    A few tools will help along the way:

    Libraries

    Node File System

    Built into node, I'll use fs to open the data and extract it as a string that can be parsed. readdirSync scans the given directory for the file I'm looking for. readFileSync will parse the file and return its contents as a string I can later manipulate.

    Both these methods actually have asynchronous counterparts! My files are not resource heavy at all, so there's no need to run concurrent asynchronous calls. It could be handy for larger amounts of data, though.

    The path below is constructed with a variable passed in by the user, so I'll handle the case that they've entered a file that doesn't exist.

    // `postName` comes in from the user's request (an Express route param)
    const path = `_posts/${postName}`;
    
    const files = fs.readdirSync(path);
    const fileName = files.find((file) => {
        return file.includes('.md');
    });
    
    if (!fileName) {
        console.error('No file found');
        return res.sendStatus(404);
    }
    
    const markdown = fs.readFileSync(`${path}/${fileName}`, 'utf8');

    Showdown

    With the string version of the file in hand, I'll use Showdown to convert the body to HTML. It's simple and flexible, and bidirectional if you need it to be. Conversion takes just a few simple lines of code:

    const showdown = require('showdown');
    
    const converter = new showdown.Converter();
    const postHtml = converter.makeHtml(postBody);

    Regex

    From here, it's all string manipulation to get the data I need.

    Potentially, splitting the string by the bars ('---') would be enough. Using regex, though, will keep the process more flexible, in case I use the same bars to break up a section within the article.

    This regex will do the trick (the lazy .*? stops the first capture group at the first closing bars, in case the bars show up again in the body):

    '---(.*?)---\n(.*)'

    The match method in JavaScript returns separate capture groups as part of the returned array. The return value contains:

    • index 0: full match (the entire document in our case)
    • index 1: the first capture group, our tags
    • index 2: the second capture group, the post body

    From here, I just need the built in split, map, and trim methods to grab the data.

    Voilà!

    HTML post body and metaData received!

    Here's the full code:

    const fs = require('fs');
    const parseMarkdownPost = require('../utils/parseMarkdownPost');
    
    module.exports = (req, res) => {
      // `postName` comes in from the user's request
      const { postName } = req.params;
      const path = `_posts/${postName}`;
    
      const files = fs.readdirSync(path);
      const fileName = files.find((file) => {
        return file.includes('.md');
      });
    
      if (!fileName) {
        console.error('No file found');
        return res.sendStatus(404);
      }
    
      const markdown = fs.readFileSync(`${path}/${fileName}`, 'utf8');
      const parsed = parseMarkdownPost(markdown);
    
      if (!parsed) {
        return res.sendStatus(404);
      }
    
      const [metaDataObj, postHtml] = parsed;
    
      // Hand the parsed data off to the template for rendering
      res.render('post', { metaData: metaDataObj, postHtml });
    };

    indexController.js

    const showdown = require('showdown');
    
    module.exports = (markdown) => {
      // Regex matches the bars, captures the meta data, and then goes on to capture the article.
      // The s (single line) option allows the dot to also capture new lines.
      const fileRegex = new RegExp('---(.*?)---\n(.*)', 's');
      const splitMarkdown = markdown.match(fileRegex);
      if (!splitMarkdown || splitMarkdown.length < 3) {
        console.error('Misformatted document.');
        return null; // no `res` in this module; let the caller handle the error
      }
      const [match, metaData, postBody] = splitMarkdown;
      const metaDataObj = {};
    
      // Parse metaData
      metaData.split('\n').forEach((line) => {
        // Store into data object
        const [key, value] = line.split(':').map((item) => item.trim());
        // if tags, split into an array
        if (key === 'tags') {
          // Let's actually delineate tags by commas instead of -'s.
          const tags = value.split(',').map((item) => item.trim());
          metaDataObj[key] = tags;
        } else {
          metaDataObj[key] = value;
        }
      });
    
      // Convert to html
      const converter = new showdown.Converter();
      const postHtml = converter.makeHtml(postBody);
    
      return [metaDataObj, postHtml];
    };

    parseMarkdownPost.js