Chris Padilla/Blog


My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.


    Hosting a Node Express App on AWS Elastic Beanstalk

    Heroku has discontinued their free hosting tier for web applications. A major disappointment for many a side-projector! Several of my first web apps were still being hosted on Heroku, so it was time to re-evaluate.

    There are a few other options. Render and Digital Ocean have low cost options. As you can tell by the title of the article, though, I felt it was time to explore hosting on AWS.

    Elastic Beanstalk

    There are a few options for hosting:

    • Running the server as a Lambda function
    • Hosting the server on an EC2 instance
    • Managing load balancing and scaling with Elastic Beanstalk

    For those unfamiliar:

    • Lambda functions are AWS's solution for Serverless Functions
    • EC2 (Elastic Compute Cloud) is a hosting platform for cloud computing
    • Elastic Beanstalk is an orchestration service that wrangles EC2, S3s, CloudWatch, Elastic Load, and many other good-to-haves in hosting an application

    So, maybe it's unfair to say these are different options: technically, Elastic Beanstalk will make use of an EC2 instance with several other goodies baked in to handle scaling my apps up and down as needed.

    I'm throwing in running the server as a lambda function as a fun idea. I'm not caching on the server directly, so it's potentially an option. However, I wanted to start with a more direct and traditional approach so that I have the experience for larger applications that require a regularly running server.

    For quick implementation and a nice learning opportunity, I opted for Elastic Beanstalk.

    Code Pipeline

    My CI/CD needs are pretty minimal for my old portfolio projects, but nonetheless, I like being able to push to GitHub and let the deploy happen automatically. So I'm setting up my EB applications with CodePipeline connected to my repositories as well.

    Set Up for AWS

    There are a few things we'll want to do to prepare for deploying to AWS:

    1. Match port number to the Internet port
    2. Ensure the version of Node is within AWS's accepted range
    3. Generate static files
    4. Alias the route to our static files
    5. Include a Procfile for defining the start script (a one-line example follows this list)
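
    For reference, a Procfile for a typical Express app can be a single line, assuming your package.json defines a start script:

    web: npm start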

    In another article, I'll go into the details of generating and routing to our static files. For now, let's look at what getting an Express server that renders templates would look like with the first two steps.

    Match Port Number

    If we use one of the typical fallback ports for servers in local development (3000, 5000, or 7000), you'll run into an Nginx error with status code 502: Bad Gateway. To prevent this, we have to set our default port number to 8081, the upstream port the Elastic Beanstalk Nginx proxy forwards requests to.

    Depending on how your Express app is structured, this can be updated in the bin/www file:

    // bin/www
    
    /**
     * Get port from environment and store in Express.
     */
    
    var port = normalizePort(process.env.PORT || '8081');
    app.set('port', port);

    Or in your server file directly:

    // server.js
    
    const port = process.env.PORT || 8081;
    
    app.listen(port, () => console.log(`Server started on port ${port}`));

    Match Node Version

    The apps I worked with were from several years ago, and things have changed! I had to bump up the Node version on multiple apps to comply with the AWS environment. This is easily done in the package.json file. It's worth verifying that your app still runs after making these changes and switching your local Node version with Node Version Manager:

    // package.json
    
    {
        ...
        "engines": {
            "node": "^16.0.0",
            "npm": "6.13.4"
      }
    }

    Deploying

    You have a couple of options for deploying: Downloading the EB CLI, or using the web console. The web console is fairly straightforward and allows for easily bouncing between code pipeline, your application, and the environment generated from there. This guide will get you there.

    More To Come

    So that's getting an Express app up on Elastic Beanstalk! Next time I'll talk about bringing in React within the same project and the pitfalls to watch out for.


    Amazon Virtual Private Clouds

    I'm continuing research on cloud architecture this week. Here are some notes on Virtual Private Clouds (VPCs). In these notes, I'll be covering what they are, why to use them, and the parts that make up a VPC.

    VPC Overview

    A VPC is a private sub-section of AWS that you control, where you can place your resources (EC2 instances, S3 buckets, databases). You have full control over who has access to these resources, and you define the IP address ranges and subnets within it.

    Similar to a Facebook profile - a VPC allows you to control who can view your photos, posts, and videos.

    The advantage of VPC within a public cloud provider is mainly enhanced security. You can be explicit about what resources are made publicly available, and what resources have strict access. An example would be making a web server publicly available through HTTP and HTTPS protocols, while limiting access to the connected database.

    VPCs also allow you to specify a unique IP range for your application. Without a VPC, your IP range may be shared with other services on a public cloud provider. Should one of those other applications be flagged as malicious, your application can get lumped in with any access restrictions placed on that shared range.

    Home Network Analogy

    VPCs can be likened to a home network. In your home network, you have:

    • Wires that connect to the internet
    • A modem that is the gateway to the internet
    • Wires connecting the modem to the router
    • A router that connects the other devices on the network and connects to the modem for internet access
    • Computers / cell phones

    The home private network is STILL private, even though it's connected to the internet.

    The difference between the modem and the router going down:

    • If the modem goes down, the router can still connect the devices on the network to each other.
    • If the router goes down instead, no connections are possible, even if the internet connection is still coming in.

    The external data flow is as follows:

    Internet => Modem => Router / Switch => Firewall => Devices

    For VPC's, the data flow is:

    Internet => Internet Gateway => Route Table => Network Access Control List (NACL) => EC2 instances (Public) => Private Subnets.

    With an analogy set, let's look at the different parts.

    Internet Gateways (IGW)

    These are a combination of hardware and software that provides your private network with a route to the world outside of the VPC. (Horizontally scaled so you have no bandwidth strain)

    These get attached to VPCs. Without one, your VPC can communicate internally, but not with the internet.

    Worth noting:

    • Only one can be attached to a VPC.
    • You cannot detach the IGW while there are active AWS resources (such as an EC2 instance or RDS database) in the VPC.

    Route Tables (RT)

    These are rules that determine where network traffic is directed.

    You'll have a Main route table, and possibly supplemental route tables.

    If the IGW is detached from the VPC, the route will lead to a "black hole," as AWS puts it.

    • You can have multiple active route tables in a VPC
    • You can't delete a route table with active "dependencies" (associated subnets)

    Network Access Control Lists (NACL)

    NACLs are an optional layer of security that act as a firewall controlling traffic in and out of subnets. They have both inbound and outbound rules. All traffic is allowed by default.

    Rules are evaluated by rule number, from lowest to highest. The first rule that matches the traffic type is applied immediately, regardless of the rules that come after.

    The wildcard symbol * is a catch all. If we don't allow traffic, it's denied by default.

    Creating a New Network ACL will deny all by default. You add rules from there.
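
    As a hypothetical illustration of that ordering for a public web subnet (the rule numbers and values here are made up):

    Rule 100: Allow HTTP (port 80) from 0.0.0.0/0
    Rule 200: Allow HTTPS (port 443) from 0.0.0.0/0
    Rule *: Deny everything else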

    Different subnets can have different NACLs.

    You can control allowed protocols. If hosting a web server, you may only want to have HTTP and HTTPS.

    A subnet can only be associated with one NACL at a time.

    Once resources are inside, AWS resources may have their own security measures (called Security Groups). EC2 instances, for example, can set their own limits on what protocols they allow in.

    Subnets

    Definition: A sub-section of a network. Includes all the computers in a specific location.

    A loose analogy - If your ISP is a network, your home is a subnetwork.

    Subnets may be named like the following group:

    • us-east-1a
    • us-east-1b
    • us-east-1c
    • us-east-1d

    Each is within a separate availability zone. This helps create redundancy, availability, and fault tolerance.

    Public v Private Subnets

    Public subnets have a route to the internet. Private subnets do not.

    Each will have its own route table: one with a route to the internet, one without.

    In relation to your VPC and Availability Zones: A VPC spans multiple availability zones. A subnet is designated to only one.

    Availability Zones

    VPCs come with multiple availability zones. They are physically separated within a region, whereas subnets are logically separated. This allows our applications to have High Availability and Fault Tolerance, two important paradigms in cloud architecture.

    Availability Zone definition: Distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your app from the failure of a single location.

    These are a core benefit of using cloud services in AWS. We want duplicate resources to span Availability Zones.

    You'll have a primary web server and a backup, as well as a primary Redis DB and a failover.

    Cloud infrastructure helps in the event of local disaster. If your home server dies, you need one off site.

    A little more on High Availability and Fault Tolerance:

    High Availability means as little downtime as possible. It's what results in someone saying "My website is always available. I can always access my data in the cloud."

    Fault Tolerance is resistance to failure. It results in someone saying "One of my web servers failed, but my backup immediately took over," or "If something fails, it can repair itself."

    NAT Gateway

    AWS has a shared responsibility model - there are portions of security that you are responsible for, and portions that AWS is responsible for.

    We are responsible for maintaining the OS of our systems. We need to update systems regularly with patches from the internet.

    So, question - How do we download updates to private networks?

    NAT Gateways solve this. A NAT Gateway sits within the public subnet and has an EIP - Elastic IP Address.

    It has a route to the internet gateway. Once it's set up, we can update the Route Table to include the NAT Gateway:

    Destination: 0.0.0.0/0
    Target: nat-id

    A NAT gateway does not accept inbound traffic initiated from the internet. It only passes along outbound requests from the subnet and receives the responses to those requests.

    You don't have to manage the config for this. You will, however, need to set up a route to a NAT gateway for each of your private subnets.

    Sources

    AWS docs on VPC's

    Cloudflare on VPC's

    Linux Academy AWS Essentials


    My Reading Year, 2022

    I made a conscious effort to actually read less this year. Kind of a weird way to start my first "Books I've read this year" sort of blog post, but it's the truth!

    I can personally fall into a place where reading becomes a vice. I end up reading more than making things. So I tried to take a break this year.

    But it didn't work!! I still read a few good books this year. Many of them are more in line with actual questions I had, specific areas I wanted to grow in, and the like. So this list is a pretty good representation of where my head has been this year.

    Note: I could do the blog-thing where I provide Amazon affiliate links. But I'm not all that interested in getting paid cents if you purchase the book. I'd just rather you let me know if you've read a book. We can have a meeting of the minds on it! 🧠

    Software / Career

    The Pragmatic Programmer by Andy Hunt and Dave Thomas

    Timeless principles for developing software. Such a wide range of topics relating to the job are covered, it feels like a must read for anyone new to the field! (👋) How to prototype, how to maintain software, how to manage projects, communicating with non-technical collaborators. It's all here! I even kept thorough notes throughout.

    Pragmatic Thinking and Learning by Andy Hunt

    Could have easily been titled: How to Learn Anything. A very thorough guide on utilizing the whole brain to gain mastery in a new thought-driven domain. Excellent read, plenty of great exercises for really connecting ideas.

    The Passionate Programmer

    Career advice for software engineering from a former full time sax player gone programmer. I am very squarely the target demographic for this book. It was very reassuring to hear that everything that applied in music similarly applies in this field.

    The Personal MBA by Josh Kaufman

    When I was a teacher, I tore through tons of business books. This one might be my favorite. Really, it's part nuts-and-bolts of business, and part addressing the mindset and personal psychology in taking on such a full bodied endeavor. It's also a great springboard into his reading list of 100 other great books for deeper diving.

    Ask Iwata by Satoru Iwata

    I've written about my favorite nuggets from this book already: serving those in front of you and Iwata's insight on working with creative people. It's not a full blown biography, but pieces of interviews Iwata has given that are strung together to tell his story in broad strokes. It turned out to be a surprisingly insightful read on leadership, creativity, and management. And Iwata's story is simply legendary.

    Non-Fiction

    Show Your Work by Austin Kleon

    "How would Brian Eno write a Content Marketing book", as the author puts it. I'm a big fan of Kleon's books and blog. I don't think everyone doing creative work needs to go to the extreme of "Sharing something everyday" and becoming internet famous. But he writes about reframing "marketing" as "community building", being part of a scene over screaming into the void, and that alone is worth the cost of entry on this quick read.

    4,000 Weeks by Oliver Burkeman

    A playful antidote to self-help that somehow still fits in the genre. A pretty humbling read about being satisfied with doing less and taming infinite ambitions. More on the philosophy of deciding on what's worth doing when you know you have limited time instead of trying to cram everything in. I may not have mastered the material, I still dabble, but the message helped quell the inner voice that's admittedly frequently on The Search For Glory.

    The Principle of 18 by Eyal Danon

    A life changer, honestly. The gist is that there are 5 phases of life, each spanning 18 years: Dreamer, Explorer, Builder, Mentor, and Giver. Each builds on the previous, and each has different major motivations. Most relevant for me was reading about Eyal's proposed difference between the ages of 18-36 and 36-54. Before 36, it's crucial to be fully exploring and experimenting professionally so that you can execute without doubt and distraction in the following phase of life. A great balance: embracing the current trend towards minimalism, essentialism, hyper focus, etc., while also allowing time and space to actually breathe and discover what's uniquely interesting.

    The Time Paradox by Philip Zimbardo

    Interesting lens on how the way we perceive time shapes us. Future focused folks are the sort that develop lists, set goals, and achieve them. Present focused people, alternatively, are "in the moment", enjoy richness, and are generally more playful. Past positive people are strongly tradition focused, warm, and maintain strong relationships. That's a big generalization; there are many more interesting insights throughout the book. The authors conclude by recommending a healthy mix of the different perspectives for a full and rich life back then, now, and in the future.

    The 12 Stages of Healing by Donny Epstein

    If you know, you know. Nothing compares to Network Spinal. Donny's book is a tremendous introduction to the philosophy as well as a field guide for navigating the different rhythms of life.

    Music & Art

    The Listening Book by W.A. Mathieu

    Everyone should read this! Even non musicians. The book takes the pure meditative quality of listening to and reveling in sound from the start and further combs towards practicing music. Absolutely beautiful. So many wonderful insights on our relation to sound and being a creative musician in the world.

    Big Magic by Elizabeth Gilbert

    A re-read for me, one of my favorite books on living creatively. The secret is bouncing between serious, regular dedication to what you care about doing, and also not taking it that seriously, making the work playful as you do it. After spending so much time with creative work being purely a topic of career, this really helped with opening it up as a calling.

    Gesture Drawing for Animation by Walt Stanchfield

    I've picked up drawing this year, and this was my first book on it. Walt Stanchfield was a Disney animator and teacher to other Disney artists. Plenty of the techniques are still beyond me, but it's fun all the same. Walt writes with such a fire and emphasis on expression over accuracy. Not to mention his life story - an animator, tennis player, piano player, musician, poet, a real renaissance man! I also wrote a bit about his perspective on performing without an audience, a very new and real sensation for me.

    The Jazz Piano Book by Mark Levine

    The first Jazz book I've picked up that actually takes you from zero to improvising. Too many other books I've read assume some sort of prior knowledge or experience. Needless to say, I haven't finished it yet, but what I've gone through has already gotten me on the path more than any other method.

    Hal Leonard Guitar Method

    I've been learning guitar for a couple of years. I've largely done it the self taught way, hacking through chords from Radiohead and Coldplay songs, and trying to pick things up by ear. It's helped, but man, nothing beats a good ol' fashioned method book! This one focuses pretty heavily on lead guitar. Lots of spirituals and traditional tunes. I may never get called to play "Simple Gifts" for a gig, but playing these tuneful lines has helped my melodic playing and helped me really learn the notes on the guitar.

    Remixing the Classroom by Randall Everett Allsup

    An argument that classroom music favors teacher-led instruction and skill development (good things) over nurturing creativity and really fostering a lifelong interest in engaging with music (not so good). I love band and how it's taught today, and at the same time I agree with the author that there's room for more genuine play in those spaces. It ends with a bit of pessimism, but it was interesting all the same.

    Fiction

    Laserwriter II by Tamara Shopsin

    Quirky characters, old computer hardware, and moments of surrealism. I liked this book so much that I wrote an album inspired by it!

    The Light Fantastic by Terry Pratchett

    I'm starting a campaign to read every Discworld book. I've hopped around, and I've finally settled on reading them in order.

    I read up to the 6th book this year, Wyrd Sisters. But The Light Fantastic was my favorite. Still wildly funny, but there's a more serious tone at the start that quickly reshapes even in the next book onward. If these books were illustrations, the later books are fully colored in more of a cartoony style, and this one was done in a darker, more energetic ink style.


    The Pragmatic Programmer by Andy Hunt and Dave Thomas

    I kept thorough notes while reading The Pragmatic Programmer. This isn't a review so much as a public sharing of those notes! To serve as a reference for present you and future me.

    A Pragmatic Philosophy

    Software Entropy

    Entropy = level of disorder in a system. The universe works towards maximum entropy.

    Broken Windows are the first sign of entropy. When one thing is out of place and not fixed, the rest of the neighborhood goes.

    When adding code, do no harm.

    Technical debt = rot. Same topic.

    Stone Soup and Boiled Frogs

    Ask for forgiveness, not permission. Be a catalyst for change.

    Show success before asking for help.

    Remember the Big Picture.

    Maintain awareness around you. A la Navy SEALS.

    Good-Enough Software

    The scope and quality of your software should be a part of the discussion when planning for it. With clients, talk about tradeoffs. Don't aim for perfection every time. Know when to ship good-enough software. Again, discuss this with the client. It's not all up to you.

    Example: SSR and React Portal aren't playing nice. Do the research to discuss solutions. Leave the decision to client for whether or not this should stop us from shipping the code.

    Your Knowledge Portfolio

    Investing in your knowledge and experience is your most valuable asset. Stagnating will mean the industry will pass you by.

    Serious investors:

    1. Invest regularly
    2. Diversify for long term success
    3. Balance Conservative and high risk/high reward investments
    4. Investors aim to buy low and sell high (emerging tech)
    5. Portfolios should be reviewed and re-evaluated regularly

    Suggested Goals:

    1. Learn one new language every year (this year — Python)
    2. Read a technical book each month
    3. Participate in User Groups
    4. Experiment with different environments (atm - shell and markdown)
    5. Stay Current (Syntax)

    It doesn't matter if you use this tech on a project or not - the engagement with new ideas and ways of doing things will change how you program.

    Think critically. Be mindful of whether or not something is valuable to place in the knowledge portfolio. Consider:

    1. 5 why's
    2. Who benefits?
    3. What's the context?
    4. When or Where would this work?
    5. Why is this a problem?

    Go far: If you are in implementation, find a book on design.

    A Pragmatic Approach

    The Essence of Good Design

    ETC — Make everything Easy To Change. We can't predict the needs of the future, so maintain flexibility in design now. That means modularity, decoupling, and single sources of truth.

    DRY — The Evils of Duplication

    DRY: Don't Repeat Yourself. This is more nuanced than "Don't Copy/Paste."

    Maintenance is not done after a project is completed, it is a continual part of the process. You are a gardener, continue to garden and maintain.

    DRY is maintaining the system so that every piece of knowledge has a single, unambiguous, authoritative representation within it.

    Example: Regions stored in the DB.

    GraphQL is a brilliant implementation of DRY - It's self documenting and APIs are automatically generated.

    def validate_age(val):
        validate_type(val)
        validate_min_integer(val)
    
    def validate_quantity(val):
        validate_type(val)
        validate_min_integer(val)

    This does not violate the DRY principle because these are separate pieces of knowledge. They use the same code (think of CSS copying), but they don't need to share the same function. One validates age, one validates quantity. We keep it ETC by keeping these procedures separate, even if they use the same code.

    Documentation is often duplication. Write readable code, and you won't have to worry about documenting.

    DRY in Data can often be mitigated through calculation.

    You don't need to store the averageRent, just the rent prices. You can break this rule, so long as you keep it close to the module. Make it so that when a value changes, calculations are done to update it.

    A general rule for classes and modular coding is to make any outside endpoint an accessor or setter function, as opposed to exposing access to the metal. By doing this, you make it easier to adjust those methods later: a setter can trigger other internal methods, and a getter can hide whether the value is calculated or accessed directly; it shouldn't matter either way.
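
    A minimal sketch of that idea in JavaScript (the class and property names here are hypothetical, riffing on the averageRent example above):

    class Listing {
      #rents = [];

      // Setter is the only way in; it can trigger other internal work
      addRent(price) {
        this.#rents.push(price);
        this.#onRentsChanged();
      }

      // Getter hides whether the value is stored or calculated
      get averageRent() {
        if (this.#rents.length === 0) return 0;
        return this.#rents.reduce((sum, r) => sum + r, 0) / this.#rents.length;
      }

      #onRentsChanged() {
        // e.g. invalidate caches, notify listeners
      }
    }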

    Inter-developer Duplication

    Keeping clear communication among teams will help keep from code duplication.

    Orthogonality

    
    ^
    |
    |
    |
    __________>

    Two lines are orthogonal if they can move in their direction without going into the other axis. So an X/Y axis is orthogonal because no movement in their direction requires a change in another axis.

    This is an ideal in our code. It's not necessarily achievable to perfection, but getting 80% there is a goal. The authors note that in reality, most real-world requirements will require changes to multiple functions in the system. In an orthogonal system, though, it's only one module within those functions that changes. That's the scope of it.

    A helicopter is a non orthogonal system, requiring regular balancing.

    Benefits include a boost in productivity, flexibility, and simplicity.

    You also reduce the risk of one change ruining another part of the code.

    You know this as component-based design.

    Even in design, consider the orthogonality. Is your system for user id's orthogonal if your user id is their phone number? No!

    Be mindful of third party libraries in orthogonal systems. If you need to access objects in a special way with other libraries, it's likely not orthogonal. At the very least, wrap the handler in something that can isolate that logic.

    Coding

    What to do while coding:

    • Keep code decoupled. More later.
    • Avoid global data. You can mitigate this by passing context into modules or as parameters in React. So redux stores app level data, but you mitigate this by only requesting what you need.
    • Avoid similar functions.

    Reversibility

    There are no final decisions

    We can't rely on the same vendors over time. To mitigate this, hide third-party APIs behind your own abstraction layers. Break your code into components, even if you deploy to a single server. This mirrors Wes Bos' advice to, when working with server code, write the function itself, then write a handler that imports that code and runs it.
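
    A quick sketch of what that wrapping might look like (the vendor package, functions, and fields here are all hypothetical):

    // payments.js: our own abstraction layer around a third-party SDK
    import { SomeVendorClient } from 'some-vendor-sdk'; // hypothetical vendor package

    const client = new SomeVendorClient(process.env.VENDOR_API_KEY);

    // The rest of the app only ever imports this function.
    // If we switch vendors, only this module changes.
    export async function chargeCustomer(customerId, amountInCents) {
      return client.charges.create({ customer: customerId, amount: amountInCents });
    }

    // handler.js: a thin handler that imports the function and runs it
    export const chargeHandler = async (req, res) => {
      const result = await chargeCustomer(req.body.customerId, req.body.amount);
      res.json(result);
    };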

    Forgo Following Fads

    Tracer Bullets

    An approach that is not the same as prototyping. The aim of tracer bullets is to find the target while laying down the skeleton for your project.

    An example: Getting a "hello, world" app up that utilizes many different systems together.

    Tracer bullets don't always hit their target; get accustomed to the fact that they most likely won't up front. Using lightweight code makes it easier to adapt.

    Prototyping and Post It Notes

    Prototyping by contrast is a throw away. It can include high level code, or not. It can be post it notes and still images, or even just drawing on a white board!

    You can prototype:

    • Architecture
    • New functionality
    • Structure or contents of external data
    • Third party tools or components
    • Performance issues
    • User Interface Design

    Again, many of these solutions are fine on a white board, or you can code something up that's more involved for testing.

    You can forget about:

    • Correctness
    • Completeness (limited functions)
    • Robustness (minimal error checking)
    • Style (code style and documentation)

    Communicate that this code is meant to be thrown away. You may be better off with tracer bullets if your management is likely to want to deploy this.

    Domain Languages

    Internal Language

    This is using a programming language itself as the primary means of communication. React and Jest are good examples of this.

    The strength here is that you have a lot of flexibility with the language. You can use the language to create several tests automatically, for example.

    External Language

    This is using a meta-language that requires a parser to implement. JSON, YAML, and CSV are good examples. They contain information and data, but need parsing to turn into action. The most extreme example is an application that uses its own custom language (GROQ is an example of this). If there is a client using your product, reach for off-the-shelf external language solutions (JSON, YAML, CSV for client products).

    Mix of both

    Using methods and functions is a good in-between. Jest uses functions (describe, it, test) that have their own language and "syntax," but are, at the end of the day, functions. This is ideal in most cases if programmers are using your solution.

    test('two plus two', () => {
      const value = 2 + 2;
      expect(value).toBeGreaterThan(3);
      expect(value).toBeGreaterThanOrEqual(3.5);
      expect(value).toBeLessThan(5);
      expect(value).toBeLessThanOrEqual(4.5);
    
      // toBe and toEqual are equivalent for numbers
      expect(value).toBe(4);
      expect(value).toEqual(4);
    });

    Chris' Notes!

    An example of this is ACNM. You're using React to write code for yourself. You're using Sanity to generate JSON objects that are then parsed and controlled by your application.

    Estimating

    You can't truly estimate a specific project until you are iterating on it, if it's large enough.

    Consider the time range of the project, and use appropriate units to estimate in (330 days sounds specific, 6 months sounds vague).

    Breaking down a project can help you give a ballpark answer to how long something will take. It will also help you say "If you want to do Y instead, we could cut time in half"

    Keeping track of your estimates is good — it will help teach your gut and intuition how to give better estimates as a lead.

    PERT (Program Evaluation Review Technique) is a system using optimistic, most likely, and pessimistic estimates. A good way to start, allowing for a range with specific scenarios vs. just a large ballpark guess with padding.
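
    As a quick illustration (my own sketch, not from the book), the classic PERT weighted estimate combines the three values like this:

    // Classic PERT expected value: (optimistic + 4 * mostLikely + pessimistic) / 6
    const pertEstimate = (optimistic, mostLikely, pessimistic) =>
      (optimistic + 4 * mostLikely + pessimistic) / 6;

    pertEstimate(10, 15, 30); // => ~16.7 days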

    The only way to refine an estimate is to iterate. How long will this take? How long is a string? There are so many factors at play that are not the same - team productivity, features, unforeseen issues....

    The schedule will iterate with the project. You won't get a clear answer until you are getting closer. Avoid hard dates off into the future.

    Always say "I'll get back to you." Let things take how long they take.

    This is for you too! Allow things to take as long as they take, don't feel rushed or pressured to produce. They take as long as they take.

    The Basic Tools

    At this point, the tools become conduits from the maker's brain to the finished product.

    Start with a basic set of generally applicable tools. Let need drive your acquisitions.

    Many new programmers make the mistake of adopting a single power tool, such as... an IDE.

    The Power of Plain Text

    • Insurance against obsolescence
    • Leverage existing tools
    • Easier testing

    [There's a] difference between human readable and human understandable.

    Easier Testing If you use plain text to create synthetic data to drive system tests, then it is a simple matter to add, update, or modify the test data without having to create any special tools to do so (Chris here – AKA, no mocking!)
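
    A small sketch of the idea: keep test fixtures as plain JSON on disk and let the test read them (the file path and data shape here are hypothetical):

    // listings.test.js
    const fs = require('fs');

    // Plain-text fixture: easy to read, diff, and edit by hand
    const listings = JSON.parse(fs.readFileSync('./fixtures/listings.json', 'utf8'));

    test('every listing has a positive rent', () => {
      for (const listing of listings) {
        expect(listing.rent).toBeGreaterThan(0);
      }
    });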

    Version Control

    Invaluable tool. Serves as a time machine, collaborative tool, safe test space for concurrent development, and a back up of the project. (and your most important files!!)

    Text Manipulation

    (This book was done in plain text and manipulation is done in a number of ways)

    • Building the book
    • Code inclusion and highlighting
    • Website updates
    • Including equations
    • Index generator

    Engineering Daybooks

    We use them to take notes in meetings, to jot down what we're working on.... leave reminders where we put things, etc...

    It acts as a kind of rubber duck... when you stop to write something down, your brain may switch gears, almost as if talking to someone...you may realize that what you'd just done is just plain wrong.

    Pragmatic Paranoia

    You can't trust the data out there or even your own application. You have to continually write safeguards for your code. Consider python - When writing a crawler, you have to assume you'll get bad information, or changes will occur. Assume the data you are trying to grab is very brittle.

    True in React as well. Assume errors will happen.

    Design by Contract

    In the human world, contracts help add predictability to our interactions. In the computer world, this is true too.

    A contract has a precondition, a postcondition... and then there's Class Invariants

    Precondition Handled by the caller, ensuring that good data and conditions are being passed to the routine.

    The alternative? Bugs and errors. By setting up preconditions, you allow a safe postcondition.

    Example:

    if availability_regex:
        unit_dict['date_available'] = standardize_date(availability_regex[0], output='str', default=True)

    Here we're only calling standardize_date if we have an availability_regex. Another Python example:

    if chunk.getAttribute('name'):
        name = chunk['name']
    
    # Condensed into
    
    name = chunk.getAttribute('name')
    
    if not name:
        raise AptError("No Name found")

    The authors, in Dead Programs Tell No Lies, actually say to crash when necessary. Get this straight - some of this advice is conflicting and situational. Sometimes you'll want to avoid running code from the outside as above. Sometimes you'll want to raise exceptions.

    This is actually why people like TypeScript. There's an initial headache of getting everything set up. BUT once things are up and running, then you can rest assured that your code will work solidly. Communication will be clear, it incorporates documentation in that way.

    Who's responsible?

    Who is responsible for checking the precondition - the caller or the routine being called?

    Here's an example in React. The routine is:

    renderGraph = () => {
        const {data, color, options, responsiveOptions, animationStyle, showPoints} = this.props;
    
        let update = false;
        if(this.graphElement.current && Array.isArray(data?.series)) {
            // Render the graph
        }
    }

    and here is the caller

    componentDidMount() {
        this.renderGraph();
    }

    Here the routine is responsible for validating the inputs. The issue here is that it will be called, but then there's no guarantee that it's doing what it set out to do. The contract is broken silently.

    Perhaps this is just more acceptable in asynchronous code? We are accepting that "We may not have all the information we need on first call. So let's wait until the next call."

    The issue is in clarity. I see it as I code. I see "Oh, it's called on mount, but it's called on updates too, so there's no telling if it's actually doing what it needs to do."

    But again - we are dealing with heavily event driven programming, so the rules may not apply. For now, file this under "Good to know for Python."

    Assertions You can partially emulate these checks with an assertive language such as TypeScript. However, it won't cover all of your bases. Consider DBC more of a design philosophy than a need for tooling.

    DBC and Crashing Early

    Crashing early, although painful, is a good thing. When you crash early, you can get to the root of the problem quicker.

    The authors answered the thought I had: it's actually not as desirable in this philosophy for sqrt to return NaN, because it may only be ages later that you realize that the issue was with what you provided to sqrt, several functions back.

    In conclusion - DBC is a proactive way of writing code so that you can find problems earlier. This can be implemented with test and documentation, or consider it a personal design philosophy.

    The authors even make a case that DBC is different from and preferable to TDD, as it's more efficient.

    Possible examples

    Some libraries exist to use this in JS. Here's a babel plugin with pre and post conditions:

    function withdraw (fromAccount, amount) {
      pre: {
        typeof amount === 'number';
        amount > 0;
        fromAccount.balance - amount > -fromAccount.overdraftLimit;
      }
      post: {
        fromAccount.balance - amount > -fromAccount.overdraftLimit;
      }
    
      fromAccount.balance -= amount;
    }

    and with Invariants:

    function withdraw (fromAccount, amount) {
      pre: {
        typeof amount === 'number';
        amount > 0;
      }
      invariant: {
        fromAccount.balance - amount > -fromAccount.overdraftLimit;
      }
    
      fromAccount.balance -= amount;
    }

    The current approach in my own JS is to handle assertions manually:

    function withdraw (fromAccount, amount) {
        if(!fromAccount || !amount) return null;
        . . .
    }

    but this is only the precondition. Not to mention that this is part of the routine handling the issue.

    Semantic invariants

    These are a philosophical contract. A more broad principle that guides development. Example: Credit card transactions: "Err in favor of the consumer."

    Dynamic contracts and agents

    "I can't provide this, but if you give me this, then I might provide something else." High level stuff. Contracts negotiated by our programs. If you have xyz, I can return abc. Very interesting. Think of how GraphQL dynamically creates types. When it can dynamically look for what it needs out of given inputs, then it can solve negotiation issues.

    Dead Programs Tell No Lies

    Here we go!!

    In some environments, it may be inappropriate simply to exit a running program. You may have claimed resources that need to be released, error logs to handle, open transactions to clean up, or other processes to interact with.

    AND YET the basic principle stays the same. Terminate the function within that system when an error occurs to prevent further harm.

    Example in Python:

    def collect_and_update(region, address, update = True):
    
        db = Db().db
        building = db.buildings.find_one({'region': region, 'address': address}, projection={'region': 1, 'name': 1, 'address': 1, 'state': 1, 'city': 1, 'collector': 1})
        if not building:
            raise AptError('Building not found: {}, {}'.format(address, region))
        if not building.get('collector', {}).get('url'):
            raise AptError('{} does not have Collector url'.format(address))
    
        if not building.get('collector', {}).get('collectorType'):
            raise AptError('{} does not have Collector type'.format(address))

    Here, the raise keyword stops the program.

    Example in React:

    const data = useMemo(() => {
        if(averagePriceAggregate) {
            const dataRes = {series: [], labels: []};
            ...
        }
    }, [averagePriceAggregate]);

    No error is raised, but the code is encapsulated by an if statement to ensure it has the data it needs and will not run the script if it doesn't.

    Who's Responsible for the precondition? Well, it actually depends on your environment.

    Assertive Programming

    Assert against the impossible. If you think it is impossible... It's probably possible. Validate often.

    This is not to replace real error handling. If there is an issue, log and handle the error. Use assertions to pass on to the error logger. Terminate if necessary.

    When asserting, do not create side effects. No (array.pop() == null) checks.
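
    A tiny JavaScript illustration of the difference (my own example; queue is just a stand-in):

    const queue = []; // hypothetical stand-in for whatever you're checking

    // Bad: the assertion itself mutates the queue
    console.assert(queue.pop() == null, 'queue should be empty');

    // Better: check without side effects
    console.assert(queue.length === 0, 'queue should be empty');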

    How to Balance Resources

    Finish what you start - close files. Careful of coupling.

    Act Locally Keep scope close. Encapsulate. Smaller scope = better. Less coupling.

    When Deallocating resources, do so in the opposite order of allocation.

    When allocating the same set of resources in different places, always allocate in the same order

    Be mindful of balancing long term. Log files are an often ignored memory hog over time.

    Object-oriented languages mirror this - there's a constructor and then a destructor (though you don't normally worry about the destructor).

    In your case, event listeners - you want to add, then remove.

    With exceptions, you can balance this neatly with a try...catch...finally block, or with context managers.

    In python, the with...as keyword allows you to open a file, and then it gets closed after leaving the scope.

    In JS, you have try, catch, finally. Though, be sure to allocate the resource before the try catch statement.

    try {
        allocateResource() // Goes wrong, the resource is not opened
    } catch {
        // handle error
    } finally {
        closeResource() // oops, it never got fully opened!
    }

    Wrapper functions are helpful for managing and logging your resources. More advanced topic, but this can be a way to go about it in other languages.

    Don't Outrun Your Headlights

    In small and big ways, don't outrun your headlights. Avoid "Fortune Telling." Keep the feedback loop tight. Hit save after a few lines. Pass a test when you add code. Plan work a few hours or days ahead at most.

    Notice that headlights also only go in one direction. You may be thinking about the UI when you code, and then need to take a moment to see how it's balanced out with the API or another resource.

    Black Swans are unpredictable, and yet are guaranteed. No one talks about Motif or OpenLook anymore, because the browser-centric web quickly dominated the landscape.

    Not to mention the current Federal Reserve raise in interest rates.

    Oh hey! You are a REAL DEAL programmer as you create REAL UIs with the web!

    Bend or Break

    Decoupling

    Train Wrecks

    Be careful about how much knowledge one part of the code is expected to have about the other part of the code. Ideally, it's only a few levels deep.

    For example, this...

    customer
        .orders()
        .find(order_id)
        .getTotals()
        .applyDiscount()

    should more ideally be

    customer
        .findOrder(order_id)
        .applyDiscount()

    Not necessarily

    customer.applyDiscountToOrder(order_id)

    Because some global understanding is OK. It is assumed that orders can be adjusted directly after being accessed from the customer.

    The Law (rule of thumb) of Demeter simplified: Don't chain method calls.

    Again, this is not a law, but a rule of thumb, as the above example demonstrates. Not chaining helps with decoupling.

    Language-level APIs are the exception. It's perfectly fine to chain:

    orders
        .filter(filterFunc)
        .map(mapFunc)
        .slice(0, 5)

    because you won't expect that to change anytime soon. It's about mitigating change.

    Configuration

    Use external configuration for your app (.env files). It's secure and keeps your app flexible. You can have different configs for different environments and deploys.
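
    A minimal sketch with the dotenv package (the variable names here are just examples):

    // config.js
    require('dotenv').config(); // loads values from a local .env file

    module.exports = {
      port: process.env.PORT || 8081,
      dbUrl: process.env.DATABASE_URL,
      apiKey: process.env.THIRD_PARTY_API_KEY,
    };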

    You can store it behind an API and DB for most flexible use. DB solution is best if it will be changed by the customer.

    configuration-as-a-service Keeping it behind an API, again, keeps it flexible. An app shouldn't need to stop and rerun if something here changes (different API key, different port, credentials change). API-ify this aspect for maximum flexibility.

    While You are Coding

    Refactoring

    It is natural for software to change. Software is not a building. It is akin to gardening, meant to be flexible and organic and needing regular nurturing.

    Martin Fowler - An early writer on Refactoring

    Definition: Refactoring is intentional and is a process that does not change the external behavior. No new features while refactoring!

    When to Refactor

    Often and in small doses. Best done when you see a pain point.

    Also, right upon getting a feature to work. How can this be made more clear?

    You shouldn't need a week to refactor.

    Good tests are integral to refactoring. You are alerted immediately when you make an unintentional change thanks to tests.

    Before the Project

    The Requirements Pit

    No one knows exactly what they want

    In the early days, people only automated when they knew exactly what they wanted. This is not the case today. Software needs greater flexibility.

    When given a requirement, your gut instinct should be to ask more clarifying questions. If you don't have any, build and ask "is this what you mean?"

    Deliver facts on the situation and let the client make the decision.

    Requirements are learned in a feedback loop

    Consulting - ask why 5 times, and you'll get to the root. Yes, be annoying, it's ok.

    Requirements vs policy: Requirements are hard and fast (must run under 500ms). Policy, however, is often configurable. For example: color scheme, text, fonts, authorizations. These are configurable, and are therefore policy.

    Requirements may shift when the user gets their hands on it. They may prefer different workflows. This is why short iterations work best.

    A Better Way

    Use index cards to gather requirements. Use a kanban board to show progress. Share the board with clients so they can see the effect of a "wafer thin mint" and they can help decide what to move along. Get them involved in the process - it's all feedback loops.

    Maintain a glossary to align communication.


    Excluding Internal Traffic in Analytics

    It's not as clean as UA, sadly.

    With Universal Analytics, Google's own Opt-Out plugin worked nicely. Unfortunately, it doesn't seem to be configured to work well with GA4.

    Julius Fedorovicius has a fantastic article on what other options are available.

    Google recommends filtering by IP address, but that's really not feasible with a company larger than 5 people!

    The article walks through a great work around, exposing Google's traffic_type=internal parameter that it sets on events when there is an IP match.

    The two options from there are to set this with either cookies or JavaScript. Both are imperfect in their own way, but all of these methods together end up being a usable solution.

    Update: An alternate approach is to set the internal traffic from a custom event. If tag manager is already being used, it's likely there are custom events already set up for when an admin logs in. So you can trigger on admin login to set the internal traffic.
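
    As a rough sketch of that idea (the event name is made up, and your GA4 tag in Tag Manager would still need to read the traffic_type parameter from the data layer):

    // Fired from our own code when an admin logs in
    window.dataLayer = window.dataLayer || [];
    window.dataLayer.push({
      event: 'admin_login',
      traffic_type: 'internal',
    });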

    I can't recommend Julius Fedorovicius' article and site enough for help with all the different growing pains from UA to GA4.

    Here's hoping the ol' opt-out plugin gets an update sometime!


    Debouncing in React (& JS Functions as Objects)

    Debouncing takes a bit of extra consideration in React. I had a few twists and turns this week working with it, so let's unpack how to handle it properly!

    Debouncing Function in Vanilla JS

    Lodash has a handy debounce method. Though, we could also just as simply write our own:

    const debounce = (func, timeout) => {
      let timer;
      return (...args) => {
        clearTimeout(timer);
        timer = setTimeout(() => { func(...args); }, timeout);
      };
    };

    In essence, we want to call a function only after a given cool down period determined by timeout.

    Lodash comes with some nice methods for canceling and flushing your calls. It also handles edge cases very nicely, so I would recommend its method over writing your own.

    const wave = () => console.log('👋');
    const waveButChill = debounce(wave, 1000);
    window.addEventListener('click', waveButChill);
    
    // CLICK 50 TIMES IN ONE SECOND
    
    👋

    With the above code, if I TURBO CLICKED 50 times per second, only one click event would fire after the 1 second cooldown period.

    React

    Let's set the stage. Say we have an input with internal state and we want to send an API call after we stop typing. Here's what we'll start with:

    import React, { useEffect, useState } from 'react';
    import debounce from 'lodash.debounce';
    
    const Input = () => {
        const [value, setValue] = useState('');
    
        useEffect(() => {
            expensiveDataQuery(value);
        }, [value]);
    
        const expensiveDataQuery = () => {
            // get data
        };
    
        const handleChange = (e) => {
            setValue(e.currentTarget.value);
        };
    
        return (
            <input value={value} onChange={handleChange}/>
        );
    };
    
    export default Input;

    Instead of fetching on submit, we're set to listen to each keystroke and send a new query each time. Even with a quick API call, that's not very efficient!

    Naive Approach

    The naive approach to this would be to create our debounce as we did above within the component, like so:

    const Input = () => {
        const [value, setValue] = useState('');
    
        useEffect(() => {
            fetchButChill(value);
        }, [value]);
    
        const fetchButChill = debounce(expensiveDataQuery, 1000);
    
        . . .
    }

    What you'll notice though, is that you'll still have a query sent for each keystroke.

    The reason for this is that a new function is created on each component re-render. So our timeout method is never cleared out, but a new timeout method is created with each state update.

    useCallback

    You have a couple of options to mitigate this: useCallback, useRef, and useMemo. All of these are ways of keeping reference between component re-rendering.

    I'm partial to useMemo, though the react docs state that useCallback is essentially the same as writing useMemo(() => fn, deps), so we'll go for the slightly cleaner approach!

    Let's swap out our fetchButChill with useCallback:

    const Input = () => {
        const [value, setValue] = useState('');
    
        useEffect(() => {
            fetchButChill(value);
        }, [value]);
    
        const fetchButChill = useCallback(debounce(expensiveDataQuery, 1000), []);
    
        . . .
    };

    Just like useMemo, we're passing in an empty array to useCallback to let it know that this should only memoize on component mount.

    Clearing after Unmount

    An important edge case to consider is what happens if our debounce interval continues after the component has unmounted. To keep our app clean, we'll want a way to cancel the call!

    This is why lodash is handy here. Our debounced function comes with methods attached to it!

    WHAAAAAAT

    A fun fact about JavaScript is that functions are objects under the hood, so you can store methods on functions. That's exactly what Lodash has done, and it's why we can do this:

    fetchButChill(value);
    fetchButChill.cancel();

    fetchButChill.cancel() will do just that: it cancels any pending debounced call before it fires.

    Let's finish this up by adding this within a useEffect!

    const Input = () => {
        const [value, setValue] = useState('');
    
        useEffect(() => {
            fetchButChill(value);
    
            return () => fetchButChill.cancel();
        }, [value]);
    
        const fetchButChill = useCallback(debounce(expensiveDataQuery, 1000), []);
    
        . . .
    };

    Migrating Tag Manager to Google Analytics 4

    Code Set Up

    If you're using Google Tag Manager, you are already set up in the code to funnel data to GA4. Alternatively, you can walk through the GA4 Setup Assistant and get a Google Site Tag. It may look something like this:

    <script async src="https://www.googletagmanager.com/gtag/js?id=G-24HREK6MCT"></script>
    <script>
        window.dataLayer = window.dataLayer || [];
    
        ...
    
        gtag('config', 'UA-Something")
    </script>

    Two things are happening: we're instantiating the Google Tag Manager script, and we're creating a dataLayer to access any analytics information.

    The dataLayer is good to note because we actually have access to it at any time in our own code. We could push custom analytics events simply by adding an event to the dataLayer array, such as window.dataLayer.push({event: 'generate_lead'}).

    Tag Manager

    If you're already using Tag Manager, you'll want to 1. Add a new config for GA4 and 2. update any custom events, converting them to GA4 configured events.

    1. Set up GA4 at analytics.google.com
    2. Take your GA4 ID over to Tag Manager and create a new GA4 Config Tag.
    3. Use that config tag in your new custom events.

    It's advised to keep both GA4 and UA tags running simultaneously for at least a year to confirm there's enough time for a smooth migration. Fortunately for us, it's easy to copy custom event tags and move them to a separate folder within tag manager.

    Custom Event Considerations

    Dimensions & Metrics

    GA4 has two means of measuring custom events: as Dimensions or as Metrics. The difference is essentially that a dimension is a string value, while a metric is numeric.

    More is available in Google's Docs.

    Variables in Custom Events

    Just as you had a way of piping variables into Category, Action, Label, and Value fields in UA, you can add them to your custom events in GA4.

    GA4 has a bit more flexibility by allowing you to set event parameters. You can have an array of parameters with a name-value pair. So on form submit, you could have a "budget" name and a "{{budget}}" value on an event. As we alluded to above, you can provide this by manually pushing an event through your own site's JavaScript.
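
    For instance, a sketch of pushing an event with a parameter from your own code (the event and parameter names are made up, and the GA4 event tag in Tag Manager would map the budget data layer variable to an event parameter):

    window.dataLayer = window.dataLayer || [];
    window.dataLayer.push({
      event: 'form_submit',
      budget: '5000-10000',
    });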

    Resources

    Analytics Mania has a couple of very thorough articles on migrating to GA4 and testing your custom events in Tag Manager.


    Sustaining Creativity

    I've been thinking about this a lot. I went from making music in a clearly defined community to a much more amorphous one. When walking a more individualist road after being solely communally based for so long, what's the guiding purpose?

    So the question on my mind has really been this: what's the motive behind continuing to work in a creative discipline?

    Nothing here is really a prescription. It's mostly me figuring it out as I go. I write a lot of "You"s in this, but really I mean "me, Chris Padilla." If any of this is helpful to you, dear reader, by all means take what works! If you have perspectives on this, drop me a line.

    So here we go! Three different categories and motives for making stuff:

    Personal Creativity

    I like making stuff! Just doing it lights me up. The most fun is when it's a blank canvas and I'm just following my own interest. It's just for me because I'm only taking in what sounds resonate with me, what themes come to mind, and what tools I have to make a thing.

    I still share because it's fun to do so! It contributes to the pride of having made something that didn't exist before. A shared memento from the engagement with the spirit of creativity. But, any benefit other people get from it is merely a side effect of the process. It's not the purpose.

    An interesting nuance that is starting to settle in as I do this more and more — there is no arrival point here. Creativity is an infinite game with no winners and losers, just by playing you are getting the reward and benefits then and there. This alone is a really juicy benefit to staying creative. But maybe it's not quite enough —

    Gifts

    Creativity for other people. Coming from a considerate place, a genuine interest in serving the person on the other side of it. Often this feels like a little quest or challenge, because I'm tasked to use the tools and skills I have to help, entertain, or bring beauty to the audience on the other end.

    I'm pretty lucky in that I've pretty much always done creative work for others that has also led to getting paid for it. Even my current work in software engineering I consider gifts. Money is part of it, but the empathetic nature of building for a specific group of people makes it feel like a gift.

    $$$

    Sometimes, ya gotta do what ya gotta do. In some ways, this is what separates professionals from amateurs. Teaching the student that's a bit of extra work, learning a new technology because it's popular in the market, or drawing commissions.

    (Again, on a motivation level, I don't have much in my life that falls into this category. I'm very, VERY lucky to be working in a field that is interesting, and I have a pretty direct feeling of that work being of service — that work being a gift. BUT I've been in positions before where some of my work was more for those dollars.)

    Actually, Game Director Masahiro Sakurai of Nintendo fame talks about this. A professional does what's tasked in front of them, even if it's not what they'd initially find interesting or fun. Even video game dev has its chores!

    This type of work is not inherently sell-out-y. You can still find the joy in the work and you can still find the purpose behind it. Shifting to a gift mindset here helps. Be wary of doing anything purely for this chunk of the Venn diagram with no overlap.

    A classic musician's rule of thumb for taking on a gig: "It has to have at least two of these three things: 1. Pay well 2. Have great music 3. Work with great people."

    The Gist: Watch your mindset.

    There's a balance between gift giving and creating just for you, I've been finding.

    Things we make for our own pure expression and curiosity do not need to be weighed down by the expectation of other people loving them or of them selling wildly well. The gift is in following your own creative curiosity. And that's great!

    If you're ONLY making things for yourself, and you're not finding ways to serve other people, then you'll be isolated and not fully fulfilled by what you're doing. Finding ways to give creatively is the natural balance for that.

    A side note: Go for things that involve a few people, IRL. Nothing quite beats joining someone's group to make music in person, teaching someone how to do what you do, or making a physical gift for someone special!


    Creating a Newsletter Form in React

    Twitter is in a spot, so it's time to turn to good ol' RSS feeds and email for keeping up with your favorite artists, developers, and friends!

    We built one for our game. This is another case in which building forms is more interesting than you'd expect.

    Component Set Up

    To get things started, I've already built an API similar to the one outlined here in my Analytics and CORS post.

    There are ultimately three states for this simple form: Pre-submitting, success, and failure.

    Here's the state that accounts for all of that:

    // Newsletter.js
    
    import React from 'react';
    import styled from 'styled-components';
    import { useState } from 'react';
    import { signUpForNewsletter } from '../lib/util';
    
    const defaultMessage = 'Enter your email address:';
    const successMessage = 'Email submitted! Thank you for signing up!';
    
    const Newsletter = () => {
      const [emailValue, setEmailValue] = useState('');
      const [message, setMessage] = useState(defaultMessage);
      const [emailSuccess, setEmailSuccess] = useState(false);
    
      . . .
    };

    We're holding the form value in our emailValue state. message is what's displayed above our input, either prompting the user to fill the form or informing them that they succeeded. emailSuccess is simply state that will adjust styling for our success message later.

    Rendering Our Component

    Here is that state in action in our render method:

    // Newsletter.js
    
      return (
        <StyledNewsletter onSubmit={handleSubmit}>
          <label
            htmlFor="email"
            style={{ color: emailSuccess ? 'green' : 'inherit' }}
          >
            {message}
          </label>
          <input
            type="email"
            name="email"
            id="email"
            value={emailValue}
            onChange={(e) => setEmailValue(e.currentTarget.value)}
          />
          <button type="submit">Sign Up</button>
        </StyledNewsletter>
      );

    Setting our input type to email will give us some nice validation out of the box. I'm going against the current common practice by using inline styles here for simplicity.

    Handling Submit

    Let's take a look at what happens on submit:

    // Newsletter.js
    
      const handleSubmit = async (e) => {
        e.preventDefault();
        if (emailValue && isValidEmail(emailValue)) {
          const newsletterRes = await signUpForNewsletter(emailValue);
          if (newsletterRes) {
            setEmailValue('');
            setEmailSuccess(true);
            setMessage(successMessage);
          } else {
            window.alert('Oops! Something went wrong!');
          }
        } else {
          window.alert('Please provide a valid email');
        }
      };

    The HTML form, even when we prevent the default submit action, still checks the email input against its built-in validation. A great plus! I have a very simple isValidEmail method in place just to double-check.

    Once we've verified everything looks good with our inputs, on we go to sending our fetch request.

    // util.js
    
    export const signUpForNewsletter = (email) => {
      const data = { email };
    
      if (!email) console.error('No email provided', email);
    
      return fetch('https://coolsite.app/api/email', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify(data),
      })
        .then((response) => response.json())
        .then((data) => {
          console.log('Success:', data);
          return true;
        })
        .catch((error) => {
          console.error('Error:', error);
          return false;
        });
    };

    I'm including return values here and handling them later with if (newsletterRes) ... in our component. If the request is unsuccessful, returning false sends us into our very simple window.alert error message. Else, we continue on to updating the state to render a success message!

    Wrap Up

    That covers all three states! Inputting, error, and success. This, in my mind, is the bare bones of getting an email form set up! Yet, there's already a lot of interesting wiring that goes into it.

    From a design standpoint, a lot of next steps can be taken to build on top of this. From here, you can take a look at the API and handle an automated confirmation message, you can include an unsubscribe flow, and you can include a "name" field to personalize the email.

    Even on the front end, a much more robust styling for the form can be put in place.

    Maybe more follow up in the future. But for now, a nice sketch to get things started!

    Here's the full component in action:

    // Newsletter.js
    
    import React from 'react';
    import styled from 'styled-components';
    import { useState } from 'react';
    import { signUpForNewsletter } from '../lib/util';
    
    const defaultMessage = 'Enter your email address:';
    const successMessage = 'Email submitted! Thank you for signing up!';
    
    const Newsletter = () => {
      const [emailValue, setEmailValue] = useState('');
      const [message, setMessage] = useState(defaultMessage);
      const [emailSuccess, setEmailSuccess] = useState(false);
    
      function isValidEmail(email) {
        return /\S+@\S+\.\S+/.test(email);
      }
    
      const handleSubmit = async (e) => {
        e.preventDefault();
        if (emailValue && isValidEmail(emailValue)) {
          const newsletterRes = await signUpForNewsletter(emailValue);
          if (newsletterRes) {
            setEmailValue('');
            setEmailSuccess(true);
            setMessage(successMessage);
          } else {
            window.alert('Oops! Something went wrong!');
          }
        } else {
          window.alert('Please provide a valid email');
        }
      };
    
      return (
        <StyledNewsletter onSubmit={handleSubmit}>
          <label
            htmlFor="email"
            style={{ color: emailSuccess ? 'green' : 'inherit' }}
          >
            {message}
          </label>
          <input
            type="email"
            name="email"
            id="email"
            value={emailValue}
            onChange={(e) => setEmailValue(e.currentTarget.value)}
          />
          <button type="submit">Sign Up</button>
        </StyledNewsletter>
      );
    };
    
    export default Newsletter;
    
    const StyledNewsletter = styled.form`
      display: flex;
      flex-direction: column;
      max-width: 400px;
      font-family: inherit;
      font-size: inherit;
      padding: 1rem;
      text-align: center;
      align-items: center;
      margin: 0 auto;
    
      label {
        margin: 1rem 0;
      }
    
      #email {
        width: 80%;
        padding: 0.5rem;
        /* border: 1px solid #75ddc6;
        outline: 3px solid #75ddc6; */
        font-family: inherit;
        font-size: inherit;
      }
    
      button[type='submit'] {
        position: relative;
        border-radius: 15px;
        height: 60px;
        display: flex;
        -webkit-box-align: center;
        align-items: center;
        -webkit-box-pack: center;
        justify-content: center;
        padding: 2rem;
        font-weight: bold;
        font-size: 1.3em;
        margin-top: 1rem;
        background-color: var(--cream);
        color: var(--brown-black);
        border: 3px solid var(--brown-black);
        transition: transform 0.2s ease;
        text-transform: uppercase;
      }
    
      button:hover {
        color: #34b3a5;
        background-color: var(--cream);
        border: 3px solid #34b3a5;
        cursor: pointer;
      }
    `;

    Building a Proxy with AWS Lambda Functions and CORS

    For those times you just need a sip of backend, Lambda functions serve as a great proxy.

    For my situation, I needed a way for a client to submit a form to an endpoint, use a proxy to access an API key through environment variables, and then submit to the appropriate API. The proxy is still holding onto sensitive data, so in lieu of storing an API key on the client (no good!), I'm using CORS to keep the endpoint secure.

    Handling Pre-Flight Requests

    This article by Serverless is a nice starting place. Here are the key moments for setting up CORS:

    # serverless.yml
    
    service: products-service
    
    provider:
      name: aws
      runtime: nodejs6.10
    
    functions:
      getProduct:
        handler: handler.getProduct
        events:
          - http:
              path: product/{id}
              method: get
              cors: true # <-- CORS!
      createProduct:
        handler: handler.createProduct
        events:
          - http:
              path: product
              method: post
              cors: true # <-- CORS!

    The key config, cors: true, is a good start, but it's the equivalent of setting our header to 'Access-Control-Allow-Origin': '*'. Essentially, this opens our endpoint up to any origin. So we'll need to find a way to restrict this to only a couple of URLs.

    Serverless here recommends handling multiple origins in the request itself:

    // handler.js
    const ALLOWED_ORIGINS = [
        'https://myfirstorigin.com',
        'https://mysecondorigin.com'
    ];
    
    module.exports.getProduct = (event, context, callback) => {
    
      const origin = event.headers.origin;
      let headers;
    
      // Only echo back origins we've approved
      if (ALLOWED_ORIGINS.includes(origin)) {
        headers = {
          'Access-Control-Allow-Origin': origin,
          'Access-Control-Allow-Credentials': true,
        };
      } else {
        headers = {
          'Access-Control-Allow-Origin': '*',
        };
      }
    
      . . .
    }

    This alone would work fine for simple GET and POST requests. However, more complex requests will send a preflight OPTIONS request. I am sending a POST request, but it would have to be an HTML form submission to qualify as "simple." Since I'm sending JSON, it's considered complex and a preflight request is sent.

    A little more looking in the Serverless docs shows us how we can approve multiple origins for our preflight requests:

    # serverless.yml
    
    cors:
      origins:
        - http://www.example.com
        - http://example2.com

    Server Response with Multiple Origins

    When allowing multiple origins, the response needs to return a single origin in the header, matching the request origin. If we send a comma-delimited string with all of our origins, the response will not be accepted.

    In our server code above, we handled this with the logic below:

      const origin = event.headers.origin;
      let headers;
    
      if (ALLOWED_ORIGINS.includes(origin)) {
        headers = {
          'Access-Control-Allow-Origin': origin,
          'Access-Control-Allow-Credentials': true,
        };
      }

    We grab the origin from our request headers, match it with our approved list, and then send it back in the response headers.
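
    To sketch out how those headers might be sent back in the Lambda Proxy response format (the status code and body here are just placeholders):

    // handler.js (continued): a sketch of returning the response, not the full handler
    callback(null, {
      statusCode: 200,
      headers, // the origin-matched headers we built above
      body: JSON.stringify({ message: 'Success!' }),
    });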

    Lambda & Lambda Proxy

    To have access to our request headers, we need to ensure we are using the correct integration.

    Lambda Proxy integration is the default with serverless and the one that will include the headers.

    So why am I pointing this out?

    Some Lambdas you work with may include integration: lambda in their config file:

    functions:
      create:
        handler: posts.create
        events:
          - http:
              path: posts/create
              method: post
              integration: lambda

    These are set to launch the function as Lambda integrations.

    The general idea is that Lambda Proxy integrations are easier to set up while Lambda integrations offer a bit more control. The only extra bit of work required for Lambda Proxy is handling your own status codes in the response message, as we did above. Lambda integrations may be more suitable in situations where you need to modify a request before it's sent to the lambda, or a response after it returns. (A really nice overview of the difference is available in this article.)

    So, if you're setting up your own lambda, no need to do anything different to access the headers. If working with an already established set of APIs, keep an eye out for integration: lambda. Accessing headers will take some extra considerations in that case.


    Walt Stanchfield & Performing with No Audience

    Switching from a performing art to a publishing medium has been weird.

    As a musician and teacher, the feedback loop was pretty tight. Performing on stage and playing in groups, there's a real magic to having other people in the room responding and reacting in real time and real space.

    Even with teaching! Going into a lesson, students would improve noticeably on the spot, or laugh at my bad dad jokes right then and there.

    Now I work in software. Don't get me wrong, I get great feedback! Though, it's a difference between publishing and performing.

    Creatively instead of playing on stage, I write songs, draw on the couch, and largely play for a digital audience. Much of my creative work is published, not performed.

    So I've been thinking about that a lot.

    Walt Stanchfield

    The late Walt Stanchfield, former Disney animator and teacher, knows what I'm talking about. The guy, on top of being a highly expressive teacher and artist, played concert piano, wrote poems, and was an enthusiastic tennis player.

    Here he is talking about animation, though it's easy to see how he could be talking about any digital creative work:

    Animation has a unique requirement in that its rewards are vaguely rewarding and at the same time frustrating. We are performers but our audience is hidden from us. We are actors but there is no applause. We are artists but our works are not framed and hung on walls for friends to see. We are sensitive people whose sensibility is judged across the world in dingy theaters by a sometimes popcorn eating audience. Yet we are called upon day by day to delve deep into our psyche and come up with fresh creative bits of entertaining fare. That requires a special kind of discipline and devotion, and enthusiasm. Our inner dialogue must be amply peppered with encouraging argument. We sometimes have to invent or create an audience in our minds to draw for.

    Walt knows the curious position because he's been on both sides of this. Here he is talking about performing for a live audience:

    I used to sing in operettas, concerts, etc., so I know what real applause is. It is heavenly. A living audience draws something extra out of the performer. A stage director once said to the cast of a play on the opening night, “You’ve had good equipment to work with: a theatre with everything it takes to put on a show. But you have been handicapped—one essential thing has been denied you. Tonight there’s an audience out there; now you have everything you need.”

    So is there a solution to dealing with that missing piece? Is it just comparing apples and oranges? Walt recommends drumming up the empathy and imagination yourself, ultimately.

    Well, we do have an awaiting audience out there. We’ll be denied the applause but at least there is a potential audience to perform for; one to keep in mind constantly as we day by day shape up our year dress rehearsal. Even as we struggle with the myriad difficulties of finalizing a picture—what is the phrase, “getting it in the can,” we can perform each act for that invisible or mystical audience. We can’t see our audience but it is real and it is something to work for.

    So yes, a little bit of imagination.

    He mentions it earlier, but devotion and enthusiasm have been the real key for me. I don't think I'd say I necessarily played music for the applause. The practice itself is what's energizing. I'm grateful that all of my disciplines have pretty great feedback loops. They're so physical, tactile, and expressive that the work is reward enough.

    Sharing is really just a nice bonus, an artifact of the time well spent chasing a creative thread.


    The whole essay is "A Bit of Introspection" from Gesture Drawing For Animation by Walt Stanchfield, handout made freely available, and published into a couple of nice books as well.


    Iwata on What's Worth Doing

    When it comes to answering the question "What's worth doing?", the internet can muddy it up a bit.

    Plenty of good to the internet: Shared information, connecting with far-flung people, and finding community.

    And, it's also a utility that can deceive us into feeling infinite.

    I was surprised to see Nintendo's former president Satoru Iwata wrestle with this in an interview he gave for Hobo Nikkan Itoi Shinbun that was published in the book "Ask Iwata."

    "The internet also has a way of broadening your motivations. In the past, it was possible to live without knowing there were people out there who we might be able to help, but today, we're able to see more situations where we might be of service. But this doesn't mean we've shed the limitations on the time at our disposal.

    ...as a result, it's become more difficult than ever to determine how to spend the hours of the day without regret."

    Wholly relatable. Very warm to see Iwata put this in terms of serving people. For creative folk, this could be anything from projects to pursue, audiences to reach, and relationships to develop. There's, I'm sure, an interesting intersection with another change in history — the ability to reproduce art.

    "It's more about deciding where to direct your limited supply of time and energy. On a deeper level, I think this is about doing what you were born to do."

    Less is the answer, and considering your unique position is what takes the place of overwhelming choice. What you were born to do can be a heavy question unto itself, but thinking of it as what you're in the unique position to do helps.

    I'll paraphrase Miyazaki here: "I focus on only what's a few meters from me. Even more important than my films, which entertain countless children across the world, is making at least three children I see in a given day smile." Focusing on the physical space and your real, IRL relationships is likely to guide you towards what's worth doing.


    The Gist on Authentication

    Leaving notes here from a bit of a research session on the nuts-and-bolts of authentication.

    There are cases where packages or frameworks handle this sort of thing. And just like anything with tech, knowing what's going on under the hood helps when you need to consider custom solutions.

    Sessions

    The classic way of handling authentication. This approach is popular with server-rendered sites and apps.

    Here, a user logs in with a username and password, the server cross-references them in the DB, and handles the response. On success, a session is created and a cookie is sent with a session ID.

    The "state" of sessions are stored in a cache or on the DB.

    Session Cookies are the typical vehicles for this approach. They're stored on the client and automatically sent with any request to the appropriate server.
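
    As a rough sketch of that flow in Express using the express-session package (findUser and checkPassword are placeholders for your own DB lookups, and a production app would plug a real store into the session config):

    // server.js (sketch)
    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(express.json());

    app.use(
      session({
        secret: process.env.SESSION_SECRET, // signs the session ID cookie
        resave: false,
        saveUninitialized: false,
        cookie: { httpOnly: true, secure: true }, // secure requires HTTPS
        // store: ... in production, plug in Redis, your DB, etc. here
      })
    );

    // Placeholder lookups; swap in your real DB calls.
    const findUser = async (username) => ({ id: 1, username });
    const checkPassword = async (user, password) => true; // don't do this in real life!

    app.post('/login', async (req, res) => {
      const user = await findUser(req.body.username);
      if (!user || !(await checkPassword(user, req.body.password))) {
        return res.status(401).send('Invalid credentials');
      }
      // Session created; the cookie with the session ID is sent back automatically
      req.session.userId = user.id;
      res.send('Logged in!');
    });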

    Pros

    For this approach, it's nice that it's a passive process. Very easy to implement on the client. When state is stored in a cache of who's logged in, you have more control if you need to remotely log a user out. Though, you have less control over the cookie that's stored on the client.

    Cons

    The lookup to your DB or cache takes time here. You take a hit in performance on every request.

    Cookies are also more susceptible to Cross-Site Request Forgery (XSRF).

    JWT's

    Two points of distinction here: when talking about a session, we mean the session stored on the server, not session storage in the client.

    Cookies hypothetically could be used to store a limited amount of data, but JWTs typically need another method, since cookies have a small size limit.

    Well, what are JWTs? JSON Web Tokens are a popular alternative to session and cookie based authentication.

    On successful login, a JWT is returned with the response. It's then up to the client to store it for future requests, working in the same way as sessions here.

    The major difference, though, is that the token is verified on the server through an algorithm, not by DB lookup of a particular ID. That's a major pro of JWTs: it's a stateless way of handling authentication.
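
    Here's a loose sketch of that flow with the jsonwebtoken package. The payload shape, secret, and Express-style middleware are all illustrative:

    // auth.js (sketch)
    const jwt = require('jsonwebtoken');

    // On successful login, sign a token and send it back in the response:
    function issueToken(user) {
      return jwt.sign({ userId: user.id, role: user.role }, process.env.JWT_SECRET, {
        expiresIn: '1h',
      });
    }

    // On later requests, verify the token from the Authorization header.
    // No DB lookup needed: the signature check alone proves it's valid.
    function requireAuth(req, res, next) {
      const token = (req.headers.authorization || '').replace('Bearer ', '');
      try {
        req.user = jwt.verify(token, process.env.JWT_SECRET); // throws if invalid or expired
        next();
      } catch (err) {
        res.status(401).send('Invalid token');
      }
    }

    module.exports = { issueToken, requireAuth };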

    Options for storing this on the client include local storage, IndexedDB, and, some would say, depending on the size of your token, cookies.

    Pros

    As mentioned, it's stateless. No need to maintain sessions in your cache or on your DB.

    More user-related information can be stored with the token. Details on authorization level are common ("admin" vs. "user" permissions).

    This approach is also flexible across platforms. You can use JWTs with mobile applications or, say, a smart TV application.

    Cons

    Because this approach is stateless, you unfortunately have limited control when it comes to logging out individual users remotely. It would require changing your signing secret or algorithm, logging all of your users out.

    Depending on how you store the token, there are security concerns here, too. It's best to avoid local storage, in particular, as anything stored there can be read by JavaScript running on the page. That leaves the token open to XSS (Cross-Site Scripting), where malicious code could be run on your site, something to be especially wary of if you accept custom inputs from users.

    Who Wins?

    Depending on your situation, you may just need the ease of setup provided by sessions. For an API spanning multiple devices, JWTs may seem appealing. There is also the option to blend the approaches: using JWTs while also storing session logic in a cache or DB.

    Some handy libraries for implementing authentication include Passport.js and Auth0. For integrated authentication with Google, Facebook, etc., there's also OAuth 2.0. A tangled conversation on its own! And, admittedly, one that's best implemented alongside a custom authentication feature, rather than as the only form of authentication.


    An Overview of Developing Slack Shortcuts

    For simple actions, sometimes you don't need a full-on web form to accomplish something. An integration can do the trick. Slack makes it pretty easy to turn what could be a simple webform into an easy-to-use shortcut.

    It's a bit of a dance to accomplish this, so this will be more of an overview than an in-depth look at the code.

    As an example, let's walk through how I'd create a Suggestion Box Shortcut.

    Slack API

    The first stop in setting any application up with Slack is at api.slack.com. Here we need to:

    1. Provide the Request URL for your API
    2. Create a New Shortcut "Suggestion Box"
    3. If loading data for select menus, provide an API URL for that as well.

    You'll create a callback ID that we'll save for later. Ours might be "suggestionbox".

    Developing your API with Bolt

    It's up to you how you do this! All Slack needs is an endpoint to send a POST request to. A dedicated server or serverless function works great here.

    Here are the dance steps:

    1. Instantiate your App with Slack Bolt
    2. Write methods responding to your shortcut callback ID
    3. Handle submissions.

    There are multiple steps because we'll receive multiple communications:

    Shortcut opens => Our API fires up and sends the modal "view" for the shortcut.

    User marks something on the form => Our API listens to the action and potentially updates the view.

    User submits the form => Our API handles the request and logs a success / fail message.

    Bolt is used here to massively simplify this process. Without Bolt, you're working with the raw Slack API, managing the different interactions over HTTP yourself. With Bolt, it's all wrapped up neatly in an intuitive API.
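
    As a rough sketch with Bolt for JavaScript, assuming the "suggestionbox" callback ID from earlier (the modal's callback_id, block IDs, and what we do with the submission are made up for illustration):

    const { App } = require('@slack/bolt');

    const app = new App({
      token: process.env.SLACK_BOT_TOKEN,
      signingSecret: process.env.SLACK_SIGNING_SECRET,
    });

    // 1. Shortcut opens: acknowledge, then send the modal view
    app.shortcut('suggestionbox', async ({ shortcut, ack, client }) => {
      await ack();
      await client.views.open({
        trigger_id: shortcut.trigger_id,
        view: {
          type: 'modal',
          callback_id: 'suggestion_modal', // made-up ID, matched by the view handler below
          title: { type: 'plain_text', text: 'Suggestion Box' },
          submit: { type: 'plain_text', text: 'Submit' },
          blocks: [
            {
              type: 'input',
              block_id: 'suggestion_block',
              label: { type: 'plain_text', text: 'Your suggestion' },
              element: {
                type: 'plain_text_input',
                action_id: 'suggestion_text',
                multiline: true,
              },
            },
          ],
        },
      });
    });

    // 3. User submits the form: acknowledge and handle the values
    app.view('suggestion_modal', async ({ ack, view }) => {
      await ack();
      const suggestion = view.state.values.suggestion_block.suggestion_text.value;
      console.log('New suggestion:', suggestion); // store it, post it to a channel, etc.
    });

    (async () => {
      await app.start(process.env.PORT || 3000);
    })();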

    Blocks

    The UI components for Slack are called blocks. There is a handy UI for creating forms and receiving the appropriate JSON in their documentation. Several great inputs are included, like multi-select, dropdown, date picker, and other basics that are analogous to their web counterparts.


    Redux Growing Pains and React Query

    AC: New Murder's announcement has been par for the course for a major release. Lots of good feedback and excitement, and some big bugs that can only be exposed out in the open.

    The biggest one was a bit of a doozy. It's around how we're fetching data. The short version of an already short overview is this:

    • Redux stores both Application State and Fetched Data
    • Redux Thunks are used to asynchronously fetch data from our Sanity API
    • We hope nothing goes wrong in between!

    Naturally, something went wrong in between.

    Querying Sanity

    Sanity uses a GraphQL-esque querying language, GROQ, for data fetching. A request looks something like this:

    `*[_type == 'animalImage']{
      name,
      "images": images[]{
        emotion->{emotion},
        "spriteUrl": sprite.asset->url
      }
    }`

    Similar to GraphQL, you can query specifically what you need in one request. For our purposes, we wanted to store data in different hierarchies, so a mega-long query wasn't ideal. Instead, we have several small queries by document type like the animalImage query above.

    The Issue

    On app load, roughly 5 requests are sent to Sanity. If it's a certain page with dialogue, 5 additional requests will be sent.

    The problem: Not every request returned correctly.

    This started happening with our beta testers. Unfortunately, there's not a ton of data to go off of. From what we could tell, everyone had stable internet connections, used modern browsers, and weren't using any blocking plugins.

    My theory is that some requests may not be fulfilled due to the high volume of requests at once. I doubt it's because Sanity couldn't handle our piddly 10 requests; more likely, there's a rate limit. Even then, I'm surprised it would be as low as 10 requests within a certain timeframe.

    Whatever the cause, we had an issue where API requests were failing, and we did not have a great way of handling it.

    Contemplating Handling Errors

    This project started 2 years ago when the trend for using Redux for all data storing was still pretty high. Things were starting to shift away as the project developed, but our architecture was already set.

    There is potentially a Redux solution. Take a look at this Reducer:

    function inventoryReducer(state = initialState, action) {
      const { type, payload } = action;
      switch (type) {
        case 'GET_INVENTORY_ITEMS/fulfilled':
          return { ...state, items: payload };
           ...

    The "/fulfilled" portion does imply that we do log actions of different states. We could handle the case if it returns a failure, or even write code if a "/pending" request hasn't returned after a certain amount of time. Maybe even, SAY, fetch three times, then error out.

    But, after doing all that, I would have essentially written React Query.

    Incorporating React Query

    It was time. A major refactor needed to take place.

    So, at the start, the app is using Redux to fetch and store API data.

    React Query can do both. But, rewiring the entire app would have been time-consuming.

    So, at the risk of some redundancy, I've refactored the application to fetch data with React Query and then also store the data in Redux. I get to keep all the redux boilerplate and piping, and we get a sturdier data fetching process. Huzzah!

    Gluing React Query and Redux Together with Hooks

    To make all of this happen, we need:

    • A Redux action for storing the data
    • A query method that wraps around our Sanity GROQ request
    • A way of handling errors and missing data
    • An easy way to call multiple queries at once

    A tall order! We have to do this for 10 separate requests, after all.

    After creating my actions and migrating my GROQ requests into query methods, we need to make the glue.

    I used a couple of hooks to make this happen.

    import React, { useEffect } from 'react';
    import { useQuery } from 'react-query';
    import { useDispatch } from 'react-redux';
    import { toast } from 'react-toastify';
    
    export default function useQueryWithSaveToRedux(name, query, reduxAction) {
      const dispatch = useDispatch();
    
      const handleSanityFetchEffect = (data, error, loading, reduxAction) => {
        if (error) {
          // Pass the context along as the error's cause so it isn't dropped
          throw new Error('Woops! Did not receive data from the query', {
            cause: { data, error, loading, reduxAction },
          });
        }
    
        if (!loading && !data) {
          // handle missing data
          toast(
            "🚨 Hey! Something didn't load right. You might want to refresh the page!"
          );
        }
    
        if (data) {
          dispatch(reduxAction(data));
        }
      };
      const { data, isLoading, error } = useQuery(name, query);
    
      useEffect(() => {
        handleSanityFetchEffect(data, error, isLoading, reduxAction);
      }, [data, isLoading, error]);
    
      return { data, isLoading, error };
    }

    useQueryWithSaveToRedux takes in the query and Redux action. We write out our useQuery hook, and as the data, isLoading, and error results are updated, we pass them to our handler to save the data. If something goes awry, we have a couple of ways of notifying the user.

    These are then called within another hook - useFetchAppLevelData.

    export default function useFetchAppLevelData() {
      const snotesQuery = useQueryWithSaveToRedux('sNotes', getSNotes, saveSNotes);
      const picturesQuery = useQueryWithSaveToRedux(
        'pictures',
        getPictures,
        savePictures
      );
      const spritesQuery = useQueryWithSaveToRedux(
        'sprites',
        getSprites,
        saveSprites
      );
      ...
    
      return {
        snotesQuery,
        picturesQuery,
        spritesQuery,
        ...
      };
    }

    useFetchAppLevelData is simply bringing all these hooks together so that I only need to call one hook in my component. It's mostly here to keep things tidy!

    import useFetchAppLevelData from './hooks/useFetchAppLevelData';
    
    function App() {
      const location = useLocation();
      const dispatch = useDispatch();
    
      const fetchAppLevelDataRes = useFetchAppLevelData();
    
      ...
    
    }

    A big task, but a full refactor complete!