100K Views, 1K GitHub Stars, and 100+ Messages Later

· 3 min read

Recently I shared my startup journey on LinkedIn. In particular, I shared the source code for my startup after closing it down, along with the story of building and shutting down the company. Initially, I got an above-average response from my personal network, since it was big career news for me. But as I woke up the next day, I saw a pull request on the open source repo asking to license the code, which was weird because I didn't think anybody would actually care about a failed startup's codebase.

Screenshot of the GitHub Issue that was created at 5AM PST while I was asleep

Similarly weird, there were ~50 stars on the repo. Curious about where this sudden attention was coming from, I checked my website analytics and noticed a surge of traffic from both Hacker News and Reddit. Someone had discovered my post and shared it on both platforms, where it quickly gained traction. Over the next 6 days, my story reached over 100,000 readers, I received more than 100 recruitment messages, and the GitHub repo accumulated over 1,000 stars. What started as a simple LinkedIn post about my startup journey had organically spread to /r/programming and Hacker News, sparking discussions in both communities.

Here is a screenshot of website page views from Cloudflare.

Screenshot of web analytics from Cloudflare showing page views over the past 7 days (12-16-2024 to 12-23-2024)

Analytics from LinkedIn:

Screenshot of Post analytics from LinkedIn taken on 12-23-2024

Stars on GitHub:

Screenshot of open source GitHub repo for my startup taken on 12-23-2024

Reflection

While betting on myself didn't work out financially, the overwhelming response to sharing my journey has given me a unique sense of validation I've never felt before in my career. The way my story resonated with the tech community—from complete strangers to old colleagues—tells me that the skills and experiences I gained over these 3 years are genuinely valuable. Sure, it's just social validation, but seeing my post hit the front page of Hacker News and /r/programming suggests that my experience building and shutting down a startup resonates deeply with other engineers. When I look at my refreshed resume now, I see more than just another failed startup—I recall the experience of shipping products, pivoting through market changes, and learning hard lessons about what it takes to build something from scratch. In hindsight, what felt like an ending when we decided to shut down might just be another stepping stone in my career.

I Built My Resume In Markdown

· 3 min read

Resume creation is often overcomplicated with clunky word processors and inflexible templates. After 6 years away from the job market (see why), I needed to create a new resume and faced a crucial system design decision: stick with traditional tools like Microsoft Word, or leverage my software engineering experience to build something more robust and portable. I chose to create a solution that:

  1. Is easy to maintain and version control with Git.
  2. Does not require messing around with clunky word processing software.
  3. Lets me put my content to file without thinking about any of the formatting.

After a couple of years of creating blog and documentation content, I've never found anything as simple yet flexible as Markdown for authoring. The separation of content and styling that Markdown provides means that once you figure out the formatting, you never have to think about it again; you can just focus on writing great content. Previously I built my resume using LaTeX and a template (ifykyk), but after looking back at the source for that resume, I just don't want to dive back into writing LaTeX. It's great, but not nearly as simple as Markdown. A quick Perplexity search led me to a GitHub repo showing how to use pandoc to convert Markdown to HTML. From there, you can use one of many HTML-to-PDF tools along with your own custom .css file to produce the final PDF. The best part is that you can use CSS to style every aspect of your resume, from fonts and colors to layout and spacing, giving you complete control over the visual presentation while keeping the content in simple Markdown.

The Workflow

The entire workflow consists of one shell script to run two commands.

generate-pdf.sh
#!/bin/sh

# Generate HTML first
pandoc resume.md -f markdown -t html -c resume-stylesheet.css -s -o resume.html
# Use puppeteer to convert HTML to PDF
node generate-pdf.js

Here is the JavaScript that launches a headless Puppeteer browser to export the HTML to PDF. I chose Puppeteer over alternatives like wkhtmltopdf and weasyprint because neither properly respected the CSS I wrote: styling the role/company/dates line in the experience section as a single, evenly spaced row was not possible with wkhtmltopdf, and weasyprint's output did not match my CSS either.

generate-pdf.js
const puppeteer = require("puppeteer");

async function generatePDF() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(`file://${process.cwd()}/resume.html`, {
    waitUntil: "networkidle0",
  });
  await page.pdf({
    path: "resume.pdf",
    format: "A4",
    printBackground: true,
    preferCSSPageSize: true,
    margin: {
      top: "1cm",
      right: "1cm",
      bottom: "1cm",
      left: "1cm",
    },
  });
  await browser.close();
}

generatePDF();

Check out the output here and the source markdown here. You can also check out the custom .css file I used here; it's simple and classic. I tried not to stray too far from traditional resume styling, but added a few fundamentals from Refactoring UI, primarily around visual hierarchy and spacing.

After 3 Years, I Failed. Here's All My Startup's Code.

· 4 min read

And by "all," I mean everything: core product, failed pivots, miscellaneous scripts, deployment configurations, marketing website, and more. Hopefully the codebase is interesting or potentially helpful to somebody out there!

The Startup

Konfig was a developer tools startup focused on making API integrations easier. We started in late 2022 with the mission of simplifying how developers work with APIs by providing better tooling around SDK generation, documentation, and testing.

Our main product was an SDK Generator that could take any OpenAPI specification and generate high-quality client libraries in multiple programming languages. We expanded this with additional tools for API documentation and interactive API testing environments.

While we gained some traction with some startups, we ultimately weren't able to build a hyper-growth business. It was too difficult to get potential customers to sign contracts with us, and our price points were too low despite the demonstrable ROI. We then decided to pivot into a vertical B2B SaaS AI product because we felt we could use the breakthroughs in Gen AI to solve previously unsolvable problems, but after going through user interviews and the sales cycle for many different ideas, we weren't able to find enough traction to make us believe we were on the right track to build a huge business.

Despite the outcome, we're proud of the technology we built and believe our work could be helpful for others. That's why we're open-sourcing our entire codebase.

The Konfig landing page

The Repo

Here is the public GitHub repo. I'm releasing it exactly as it was when we shut down—no cleanup, no polishing, no modifications. This is our startup's codebase in its true, unfiltered form.

The Konfig GitHub repository containing all our startup's code: the good, the bad, and the ugly.

The Products

In the past 3 years, we built 4 main products.

  1. SDK Generator
  2. Markdown and OpenAPI Documentation
  3. API Demos (Markdown Based Jupyter Notebooks)
  4. SDKs for Public APIs

Random Things

And lots of miscellaneous things:

  1. Shell script for generating cold outbound messages
  2. Programmatic SEO Scripting
  3. References to live customer deployments and pre-sales artifacts
  4. Marketing website
  5. Product Documentation
  6. Modified Changeset Bot - Supports our custom monorepo setup
  7. SDK Generator Integration Tests using Earthly
  8. Python Code Formatting Service
  9. AI Pivot Experimentation
  10. render.com deployment configuration - render.yaml
  11. API Documentation Generator Tool using LLMs/HTMX/Django
  12. Custom Notion Database Integration
  13. Python script for cropping blog post images

Thank You

I want to express my deepest gratitude to everyone who supported us on this journey. To our investors who believed in our vision and backed us financially, thank you for taking a chance on us. To our customers who trusted us with their business and provided invaluable feedback, you helped shape our products and understanding of the market. And to Eddie and Anh-Tuan, my incredible teammates—thank you for your dedication, hard work, and partnership through all the ups and downs. Your contributions made this startup journey not just possible, but truly meaningful and worthwhile.

Looking back to March 2022 when I left my job to pursue this startup full-time, I have absolutely no regrets. I knew the risks—that failure was a very real possibility—but I also knew I had to take this chance. Today, even as we close this chapter, I'm grateful for the failure because it has taught me more than success ever could. The experience has been transformative, showing me what I'm capable of and what I still need to learn.

As for what's next, I'm excited to explore new opportunities, see where the job market thinks I fit (haha), and continue learning and growing. Who knows? Maybe someday I'll take another shot at building something of my own. But for now, I'm thankful for the journey, the lessons learned, and the relationships built. This experience has been invaluable, and I'm grateful for everyone involved.

Not All Problems Are Great Fits for LLMs

· 4 min read

Many startups are racing to find product-market fit at the intersection of AI and various industries. Several successful use-cases have already emerged, including coding assistants (Cursor), marketing copy (Jasper), search (Perplexity), real estate (Elise), and RFPs (GovDash). While there are likely other successful LLM applications out there, these are the ones I'm familiar with off the top of my head. Through my experience building and selling LLM tools, I've discovered an important new criterion for evaluating an idea.

Are LLMs Especially Good at Solving This Problem?

Traditional business advice emphasizes finding and solving urgent, critical problems. While this principle remains valid, not all pressing problems are well-suited for LLM solutions, given their current capabilities and limitations. As non-deterministic algorithms, LLMs cannot be tested with the same rigor as traditional software. During controlled product demos, LLMs may appear to handle use-cases flawlessly, creating an illusion of broader applicability. However, when deployed to production environments with diverse, unpredictable inputs, carefully crafted prompts often fail to maintain consistent performance.
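
Because outputs vary from run to run, exact-match assertions don't carry over from traditional software testing. One common workaround is to measure agreement across repeated runs of the same prompt and gate on a threshold instead. A minimal sketch of the idea (the `agreementRate` helper and the sample outputs are hypothetical, not from any specific eval framework):

```javascript
// Fraction of responses that match the most common response.
// With non-deterministic outputs, thresholds replace exact assertions.
function agreementRate(responses) {
  const counts = new Map();
  for (const r of responses) {
    counts.set(r, (counts.get(r) || 0) + 1);
  }
  return Math.max(...counts.values()) / responses.length;
}

// Imagine these came from running the same prompt five times:
const demoRuns = ["42", "42", "42", "forty-two", "42"];
console.log(agreementRate(demoRuns)); // 0.8

// A release check might flag prompts that fall below a threshold:
console.log(agreementRate(demoRuns) >= 0.75); // true
```

In practice you would also normalize responses (casing, whitespace) or compare embeddings rather than raw strings, since two differently worded answers can still be equivalent.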

Where LLMs Excel

However, LLMs can excel when their non-deterministic nature doesn't matter or even provides benefits. Let's examine successful LLM use-cases where this is true.

Coding Copilots

Think of coding assistants like Cursor that help you write code and complete your lines.

When you code, there's usually a "right way" to solve a problem. Even though there are many ways to write code, most good solutions look similar—this is what we call "low entropy", like how recipes for chocolate chip cookies tend to share common ingredients and steps. LLMs are really good at pattern matching, which is perfect for coding because writing code is all about recognizing and applying common patterns. Just like how you might see similar ways to write a login form or sort a list across different projects, LLMs have learned these patterns from seeing lots of code, making them great at suggesting the right solutions.

Copywriting Generator

Marketing copy is more art than science, making non-deterministic LLM outputs acceptable. Since copywriting involves ideation and iteration rather than precision, it has a naturally high margin of error.

Search

Search is unique because users don't expect perfect first results - they're used to scrolling and exploring multiple options on platforms like Google or Amazon. While search traditionally relies on complex algorithms, LLMs can enhance the experience by leveraging their ability to synthesize and summarize information within their context window. This enables a hybrid approach where traditional search algorithms surface results that LLMs can then summarize to guide users to what they're looking for.
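
The hybrid shape described above is easy to sketch: a conventional ranker surfaces candidates, and an LLM layer summarizes them for the user. In this sketch, `keywordSearch` and `summarize` are hypothetical stand-ins for a real search backend and a real model call:

```javascript
// Hypothetical ranking backend: returns candidate documents for a query.
function keywordSearch(query, docs) {
  return docs.filter((d) => d.text.toLowerCase().includes(query.toLowerCase()));
}

// Hypothetical LLM step: in production this would call a model API and
// synthesize the candidates inside its context window.
function summarize(query, results) {
  const titles = results.map((r) => r.title).join(", ");
  return `For "${query}", the top matches are: ${titles}.`;
}

// Traditional search narrows the corpus; the LLM guides the user from there.
const docs = [
  { title: "Getting started", text: "How to install the CLI" },
  { title: "Billing FAQ", text: "How invoices and refunds work" },
];
console.log(summarize("install", keywordSearch("install", docs)));
```

The important property is that the LLM never has to be the source of truth for ranking; it only rewords results that the deterministic layer already found.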

Real Estate Assistant

Leasing agents primarily answer questions about properties to help renters find suitable homes and sign leases. Since their core function involves retrieving and relaying property information, a real estate assistant effectively becomes a specialized search problem.

RFPs

RFP responses combine two LLM strengths: extracting questions from lengthy, unstructured documents and searching internal knowledge bases for relevant answers. Since extracting questions from RFPs is time-consuming but straightforward, LLMs can work in parallel to identify all requirements that need addressing. This makes the RFP response process essentially a document extraction and search problem that's perfect for automation.

Conclusion

When building an LLM startup, focus on problems with two key characteristics:

  • Low Entropy - solutions follow common patterns, as seen in coding
  • High margin of error - tasks like copywriting where art trumps science

Or pick problems that can be solved in a similar way to common, well-suited problem types such as:

  • Search
  • Document extraction

Beyond traditional business evaluation, ask yourself: "Are LLMs particularly well-suited to solve this problem?" If not, reconsider unless you have unique insights into making it work.

6 Things I Would Improve About Dify.AI

· 5 min read
Screenshot of a workflow on our self-hosted Dify instance

To be clear, Dify is one of the most capable, well-designed, and useful prompt engineering tools I have used to date. I've tried code-only solutions such as LangChain and other no-code tools such as LangFlow, but Dify currently takes the cake. I've also briefly tried others such as PromptLayer, but when it comes to building agentic workflows, which I believe is the future of AI automation, Dify feels like a better fit. In particular, I appreciate its great UI design, workflow editor, and self-hosting option. But there is always room for improvement. Here are the things I would improve about dify.ai today.

  1. When testing workflows from the studio, it would be really nice to be able to select inputs from previous runs to be used as inputs for testing.
It would be really nice to just have a button that copies the highlighted inputs (as pictured) from a previous run into a new run
  2. Versions of a workflow should persist on the server. Right now, I'm scared to close a tab since I'll lose any previous versions of the workflow I'm working on. If I am making experimental changes, this can be devastating if I don't remember how the workflow previously worked.
Right now, Dify persists versions in the browser so it's lost when the tab is closed. Dify even explains that your work will be lost when leaving the editor.
  3. When testing changes to a workflow, I would love to have a version of a workflow that could be published for testing so production remains untouched. It would be even better if it somehow integrated with CI platforms so you could have a team of developers working on their own version of the workflow. This would eventually mean you'd need some sort of integration with Git so you can branch, push, and test changes to the workflow. On that note, it would also be great to be able to author Dify workflows as code, and have that bi-directionally sync with the UI version of Dify. This would be amazing for non-developers and developers to collaborate on agentic workflows.
Right now, you can only publish one version of a workflow to be accessed by API. It would be ideal to instead publish multiple versions for testing and allow the API to route across different versions to be used in local testing or even CI environments
  4. Managing many workflows can be extremely confusing, especially when you clone a workflow for testing, which could be fixed by adding support for (3). Also, the tile view is not a compact or easy way to scan all the workflows. It would be really nice if there were a table view that could be sorted, searched, filtered, etc.
The current studio view shows workflows as tiles with basic tag filtering. This can get messy and hard to navigate as the number of workflows grows, especially when workflows are cloned for testing. A table view with more robust filtering and sorting would make it much easier to manage.
  5. Generated API documentation for the workflow would be nice to save time when integrating a workflow into a codebase.
Currently, Dify provides some static code examples that don't update based on your workflow's inputs and outputs. Having dynamically generated API documentation would make it much easier to integrate workflows into applications.
  6. API Keys are extremely annoying to handle when you have many workflows.
Currently, you need to create a separate API key for each workflow, which becomes extremely messy when managing 10+ workflows since keys aren't easily identifiable as belonging to specific workflows. Having a single master API key with workflow identifiers in the API calls would be much simpler to manage and organize.

Overall, there's a lot of room for improvement in Dify.ai, but it's important to remember that it's still in beta. The platform already offers an impressive set of features and a unique approach to building LLM-powered applications. I'm confident that many of these pain points will be addressed as the platform matures.

Some of these suggestions, particularly the bi-directional sync between code and UI, would be technically challenging to implement. However, features like this would significantly differentiate Dify from its competitors and create a truly unique development experience that bridges the gap between technical and non-technical users.

If these improvements were implemented, particularly around version control, testing workflows, and API management, it would dramatically improve the developer experience and make Dify an even more compelling platform for building production-grade AI applications. The potential is already there - these enhancements would just help unlock it further.

I Reviewed 1,000s of Opinions on HTMX

· 12 min read

HTMX is a trending JavaScript library that enables the construction of modern user interfaces using hypermedia as the engine of application state.

In a nutshell, you can implement a button that replaces the entire button with an HTTP response using HTML attributes:

<script src="https://unpkg.com/htmx.org"></script>
<!-- have a button POST a click via AJAX -->
<button hx-post="/clicked" hx-swap="outerHTML">
  Click Me
</button>

If you follow popular web development trends or are a fan of popular developer-focused content creators, you have probably heard about it through Fireship or ThePrimeagen. However, HTMX has brought an absolute whirlwind of controversy with its radically different approach to building user interfaces. Some folks are skeptical, others are excited, and others are just curious.


To analyze how developers truly feel about HTMX, I went to where developers live: Reddit, Twitter, Hacker News, and YouTube. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking opinions.

Funnel for gathering thought-provoking opinions

Next, I transcribed these discussions onto a whiteboard, organizing them into "Pro-HTMX" (👍), "Anti-HTMX" (👎), or "Neutral" (🧐) categories, and then clustering them into distinct opinions. Each section in this post showcases an opinion while referencing pertinent discussions.

Whiteboard of opinions

To start, I'll go over the Anti-HTMX (👎) opinions since they are spicy.

👎 HTMX is just hype

After Fireship released a video about HTMX, it started to gain a lot of attention for its radical approach to building user interfaces. Carson Gross, the author of HTMX, is also adept at generating buzz on his Twitter. And since HTMX is new, it's unlikely that you will find many examples of sufficiently complex applications using HTMX. Therefore, some developers are of the opinion that HTMX is merely capitalizing on hype rather than offering a genuine solution to the challenges of building user interfaces.


Takeaway

Like all technologies, there is typically a cycle of hype, adoption, and dispersion. HTMX is no different. This cycle is beginning to unfold, and it's time to see what the future holds. It is fair to criticize HTMX for riding a wave of hype. However, if developers feel HTMX is solving a real problem and adopting it accordingly, then adoption will naturally occur. But only time will tell...

👎 HTMX is a step back, not forward

If you squint at HTMX, it looks like a relic of the past, when MPAs (multi-page applications) were the norm. Some developers see HTMX as a step back, not forward. There is a good reason why modern web applications are built using technologies like React, Next.js, and Vue.js; by ignoring this and using HTMX, you might be ignoring the modern best practices of building web applications.


Takeaway

Depending on your level of expertise in building with modern web technologies, you may feel that HTMX is a step back, not forward—especially if you have previously built MPAs with jQuery. For those who see HTMX as a step back, they want nothing to do with it.

If you are already familiar with modern web technologies, your teammates are as well, and current applications are using the latest web technologies, it's really hard to see why you would use HTMX in future applications. But for those starting new applications, HTMX is simply a different approach to building interfaces. Whether it is worth considering depends on the complexity of your application's interface and the expertise of your team. But it definitely doesn't hurt to entertain new ideas like HTMX. Who knows, it could ultimately improve your skills as an engineer.

👎 HTMX is unnecessarily complex and less user-friendly

In practice, some developers feel that HTMX is actually more complex than the current best practices. They specifically dislike the use of HTML attributes, magic strings, and magic behavior. Moreover, some developers feel this makes engineering teams less productive and codebases more complex.


Takeaway

People will always have a natural aversion to unfamiliar technologies, and HTMX is no exception. Those who adopt HTMX in their projects will likely encounter some friction, which could be a source of frustration and negative discourse. HTMX also uses more declarative paradigms, which can be harder to read for developers who are just beginning to learn HTMX. This makes HTMX more complex and less user-friendly for those sorts of developers.

React involves more in-memory state management on the client, while HTMX, on the other hand, embeds state in HTML itself. This is a fundamentally new approach to interface design, which developers would need to learn and adopt before feeling comfortable reading and programming with HTMX.

👎 HTMX is only good for simple use-cases

Users expect modern interfaces which require more complex DOM manipulations and UX design. To implement such interfaces, using HTMX is not enough. Therefore, HTMX is only good for simple use-cases.


Takeaway

For developers that are working on more complex use-cases, HTMX is not enough. And since HTMX has sort of compared itself to React, it is fair to point out that HTMX is not a solution for building complex web applications.

Before deciding to use HTMX, developers should first consider the complexity of the use-case. If HTMX is not the right tool for the job, then look elsewhere for libraries that will allow you to solve the problem. It may well be the case that your use-case will require a "thick client".

👍 HTMX is simple

Developers are witnessing a reduction in codebase complexity and a decrease in overall development time. As grug puts it, "complexity is bad". Developers who embrace this mindset are highly enthusiastic about HTMX. With HTMX handling the DOM manipulations, developers can move duplicated logic out of the browser and onto the server. This marks a significant improvement over the current best practices for building web applications, where a lot of logic has to be duplicated between the server and client. For backends that are not written in JavaScript, the current standard UI libraries can be a significant burden on the developers of the application.

Takeaway

Simplicity is a subjective concept. It highly depends on your application, team, and use-cases. For those who have spent a lot of time with JavaScript, HTMX stands out for its simplicity. If you feel that the current best practices are overbuilt, bloated, and overly complex—HTMX might be worth considering.

In many use-cases where the primary value of the application lies in the backend, React may not be essential but could still be the optimal choice for your team, depending on their expertise.

👍 HTMX does not require you to know JavaScript

If you are a backend developer, it's unlikely that you know React, JavaScript, or meta frameworks like Next.js. Full-stack developers, on the other hand, may find HTMX to be a breath of fresh air in the form of a simple way to build interfaces. The fact that pretty much any developer can pick up HTMX is a huge benefit, especially for teams that are not as comfortable with JavaScript.

Takeaway

I personally love this perspective and it's actually one of the reasons I've spent some time experimenting with HTMX. My company is a small team and not all of us have used React or Next.js, so HTMX as a solution for teams that are not as comfortable with JavaScript is an extremely compelling narrative for me.

I believe this is true for many other teams as well, especially since full-stack developers are hard to come by. Some developers are finding that this unlocks new opportunities for them to work on interesting projects.

👍 HTMX makes you productive

HTMX, when combined with battle-tested templating libraries like Django's template language or Go's templ library, boosts productivity. By not having to spend time writing a client-layer for the UI, developers can focus on where the application is providing most of the value, which is the business logic. This is especially effective when paired with


Takeaway

By reducing the amount of redundant business logic in the client-layer, developers are finding that they can focus on the actual product and not the UI. And since you can get up and running with HTMX with a simple <script> tag, it's easy to bootstrap a project with HTMX.

These two factors leave a positive impression on the developer experience for some.

🧐 HTMX is not a silver bullet

Like any problem, you must choose the right tool for the job. For developers who see HTMX for its pros and cons, they believe that it's just another tool to solve a specific problem. And since HTMX can also be progressively adopted, developers can adopt it in small portions of their codebase. But as always, certain use-cases will require more complex client logic which would require you to reach for more advanced tooling like React.

Takeaway

Like everything else in software, choose the right tool for the job. It's not much of a takeaway, but consider it your responsibility as an engineer to make critical decisions, such as whether or not to adopt HTMX.

Conclusion

Competition is good. HTMX is thought-provoking, and I think that's great because it forces developers to entertain new and novel ideas. Developers can often be wary and cautious of new technologies, since a new technology might not be solving a personal problem they are already facing. But HTMX resonates with an enthusiastic group of developers who are starting to use it in production.

I have not personally deployed a production application using HTMX, but I am really excited to try it. It would solve a personal problem with my engineering team and also allow me to simplify the build process of deploying applications. Those two things are important to me, and HTMX seems like a great fit.

I Reviewed 1,000s of Opinions on gRPC

· 9 min read
What do developers really think about gRPC?

It is no secret that gRPC is widely adopted at large software companies. Google originally developed Stubby as an internal RPC protocol, and in 2015 open-sourced a redesigned version of it as gRPC. Uber and Netflix, both of which are heavily oriented toward microservices, have extensively embraced gRPC. While I haven't personally used gRPC, I have colleagues who adore it. However, what is the true sentiment among developers regarding gRPC?

To find out, I went to where developers live: Reddit, Twitter, Hacker News, and YouTube. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking opinions.

Funnel for gathering thought-provoking opinions

Next, I transcribed these discussions onto a whiteboard, organizing them into "Pro-gRPC" (👍), "Anti-gRPC" (👎), or "Neutral" (🧐) categories, and then clustering them into distinct opinions. Each section in this post showcases an opinion while referencing pertinent discussions.

Whiteboard of opinions

👍 gRPC's Tooling Makes It Great for Service-to-Service Communication

The most significant praise for gRPC centers on its exceptional code generation tools and efficient data exchange format, which together enhance service-to-service developer experience and performance remarkably.

Key Takeaway 🔑

Engineering teams are often responsible for managing multiple services while also interacting with services managed by other teams. Code generation tools empower developers to expedite their development process and create more reliable services. The adoption of Protocol Buffers encourages engineers to focus primarily on the interfaces and data models they expose to other teams, promoting a uniform workflow across various groups.

The key benefits of gRPC mentioned were:

  • Client and server stub code generation tooling
  • Data governance
  • Language-agnostic architecture
  • Runtime Performance
  • Well-defined error codes

If your organization is developing a multitude of internal microservices, gRPC could be an excellent option to consider.
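To make the stub-generation benefit concrete, here is a minimal hand-written Python sketch of what protoc-style code generation gives you: a typed message and a typed client stub, with a fake in-process channel standing in for the network. All names here are hypothetical, and real generated code is of course produced from a .proto file rather than written by hand.

```python
from dataclasses import dataclass

# Hypothetical hand-written equivalent of what protoc would generate
# from a service definition like:
#   service Users { rpc GetUser (GetUserRequest) returns (User); }

@dataclass
class GetUserRequest:
    user_id: int

@dataclass
class User:
    user_id: int
    name: str

class UsersStub:
    """Typed client stub: callers get a real method signature,
    not a hand-rolled URL plus a JSON dict."""
    def __init__(self, channel):
        self._channel = channel

    def GetUser(self, request: GetUserRequest) -> User:
        # In real gRPC this serializes the request over HTTP/2;
        # here a fake in-process channel stands in for the network.
        return self._channel(request)

def fake_channel(req: GetUserRequest) -> User:
    # Stand-in for a remote server handling the RPC.
    return User(user_id=req.user_id, name="Ada")

stub = UsersStub(fake_channel)
print(stub.GetUser(GetUserRequest(user_id=7)).name)  # Ada
```

The point is that the interface lives in one schema, and every team, in every language, gets the same typed surface generated for free.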

👍 Compared to the straightforward nature of gRPC, REST can be relatively confusing

To properly build applications in REST, you need to understand the underlying protocol, HTTP. gRPC, on the other hand, abstracts HTTP away, making it less confusing for quickly building applications.

Key Takeaway 🔑

REST often leaves considerable scope for errors and confusion in defining and consuming APIs.

In contrast, gRPC is designed with the complexities of large-scale software systems in mind. This focus has led to an emphasis on robust code generation and stringent data governance. REST, on the other hand, is fundamentally a software architectural style oriented towards web services, where aspects like code generation and data governance were not primary considerations. The most widely used standard for designing REST APIs, OpenAPI (formerly Swagger), is essentially a specification that the underlying protocol does not enforce. This leads to situations where the API consumer might not receive the anticipated data model, resulting in a loosely coupled relationship between the API definition and the underlying protocol. This disconnect can be a significant source of confusion and frustration for developers. Hence, it raises a critical question: why opt for REST when gRPC offers a more cohesive and reliable alternative?
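The spec-versus-runtime gap described above can be sketched in a few lines of Python: nothing in HTTP checks a response against the OpenAPI document, so the server can drift from the spec without anyone noticing until a consumer breaks. The field names below are made up for illustration.

```python
# Toy illustration of the "loose coupling" problem: an OpenAPI spec
# declares required fields, but the protocol never enforces them.

spec_required_fields = {"id", "email", "created_at"}  # from a hypothetical spec

def check_response(payload: dict) -> set:
    """Return the fields the spec promises but the response omits."""
    return spec_required_fields - payload.keys()

# Server drifted: someone renamed created_at to createdAt
response = {"id": 42, "email": "a@b.co", "createdAt": "2024-01-01"}
print(check_response(response))  # {'created_at'}
```

With gRPC, the equivalent drift is a compile-time or serialization error rather than a silent runtime surprise.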

👎 gRPC complicates important things

Important functionalities like load balancing, caching, debugging, authentication and browser support are complicated by gRPC.

Key Takeaway 🔑

On the surface, gRPC appears to be a great solution for building high-quality APIs. But as with any technology, the problems start to show when you begin to use it. In particular, by adding an RPC layer, you have effectively introduced a dependency in a core part of your system. So when it comes to certain functionalities that an API is expected to provide, you are at the mercy of gRPC.

Soon, you'll start to find yourself asking questions like:

  • How do I load balance gRPC services?
  • How do I cache gRPC services?
  • How do I debug gRPC services?
  • How do I authenticate gRPC services?
  • How do I support gRPC in the browser?

The list goes on. Shortly after, you'll be wondering: "Why didn't I just build a REST API?"

👎 REST is standard, use it

The world is built on standards and REST is no exception.

Key Takeaway 🔑

Don't be a hero, use REST.

Recall that gRPC was born out of Google's need to build a high-performance service-to-service communication protocol. So practically speaking, if you are not Google, you probably don't need gRPC. Engineers often overengineer complexity into their systems and gRPC seems like a shiny new toy that engineers want to play with. But as with any new technology, you need to consider the long-term maintenance costs.

You don't want to be the one responsible for introducing new technology into your organization which becomes a burden to maintain. REST is battle-tested, so by using REST, you get all of the benefits of a standard such as tooling and infrastructure without the burden of maintaining it. Most engineers are also familiar with REST, so it's easy to onboard new developers to your team.

🧐 Use gRPC for internal services, REST for external services

gRPC significantly enhances the developer experience and performance for internal services. Nonetheless, it may not be the ideal choice for external services.

Key Takeaway 🔑

gRPC truly excels in internal service-to-service communication, offering developers remarkable tools for code generation and efficient data exchange. Disregarding the tangible benefits to developer experience that gRPC provides can be shortsighted, especially since many organizations could greatly benefit from its adoption.

However, it's important to note that browser support wasn't a primary focus in gRPC's design. This oversight necessitates an additional component, grpc-web, for browser accessibility. Furthermore, external services often have specific needs like caching and load balancing, which are not directly catered to by gRPC. Adopting gRPC for external services might require bespoke solutions to support these features.

Recognizing that not every technology fits all scenarios is crucial. Using gRPC for external services can be akin to trying to fit a square peg into a round hole, highlighting the importance of choosing the right tool for the right job.

🧐 gRPC is immature

gRPC was only open sourced in 2015, when Google decided to standardize its internal RPC protocol, so there is still a lot of open source tooling that needs to be built.

Key Takeaway 🔑

REST APIs are supported by a rich variety of tools, from cURL to Postman, known for their maturity and thorough documentation. In contrast, gRPC is comparatively younger. Although it has some tools available, they aren't as developed or as well-documented as those for REST.

However, the gRPC ecosystem is witnessing rapid advancements with its increasing popularity. The development of open-source tools such as grpcurl and grpcui is a testament to this growth. Additionally, companies like Buf are actively contributing to this evolution by creating advanced tools that enhance the gRPC developer experience.

Conclusion

gRPC undeniably stands as a polarizing technology. It excels in enhancing developer experience and performance for internal service-to-service communications, yet some developers remain unconvinced of its advantages over REST.

In our case, we employ REST for our external API and a combination of REST/GraphQL for internal services. Currently, we see no pressing need to integrate gRPC into our workflow. However, the fervent support it garners from certain segments of the developer community is quite fascinating. It will be interesting to observe the evolution and expansion of the gRPC ecosystem in the coming years.

I Reviewed 1,000s of Opinions on GitHub Copilot

· 8 min read

GitHub Copilot has recently taken the software engineering world by storm, hitting a milestone of $100M ARR. This achievement alone qualifies it to be a publicly listed company. Meanwhile, funding continues to flow into code-focused LLM use cases.

LLMs are causing a stir in the software engineering community, with some developers praising the technology and others fearing it. The controversy surrounding LLMs is so intense that it has even led to a lawsuit against GitHub Copilot.

To understand how developers are receiving Copilot, I went to where developers live: Reddit, Twitter, Hacker News, and YouTube. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking opinions.

Intent of this article

We are aware that GitHub Copilot was trained on questionable data (see GitHub Copilot and open source laundering) and that there is ongoing controversy surrounding the technology. However, this article is not about the ethics of LLMs. Instead, it is focused on product feedback from developers.

The ethics of LLMs and training data is a whole other discussion that we will not be covering in this article. And quite frankly, I am not qualified to comment on the ethics of LLMs.

We have no affiliation with GitHub Copilot, OpenAI, or Microsoft. We are not paid to write this article, and we do not receive any compensation from the companies.

Funnel for gathering thought-provoking opinions

Next, I transcribed these discussions onto a whiteboard, organizing them into "Anti-Copilot" (👎), "Pro-Copilot" (👍), or "Neutral" (🧐) categories, and then clustering them into distinct opinions. Each section in this post showcases an opinion while referencing pertinent discussions.

Whiteboard of opinions

👎 Copilot produces bad results

LLMs operate as probabilistic models, meaning they aren't always correct. This is especially true for Copilot, which is trained on a corpus of code that may not be representative of the code a given developer writes. As a result, Copilot can frequently produce bad results.

Key Takeaway 🔑

Developers expect reliability from their tools.

Copilot is not reliable, and therefore certain developers have a tough time wrestling with its output. Copilot hallucinates or produces bad results a large fraction of the time, which can be exceedingly frustrating for developers expecting it to deliver on its promise of writing code for you. After enough bad experiences, some developers have stopped using Copilot altogether.

For people who worry about job security: fear not, because Copilot is not going to replace you anytime soon.

👎 Copilot creates more problems than solutions

Copilot is a tool that is supposed to help developers write code. However, its unreliable results create more problems than they solve.

Key Takeaway 🔑

Copilot can waste your time.

Code requires 100% accuracy, and inaccuracy can lead you down a rabbit hole of debugging, often wasting time or outright breaking your code. In some cases, this is frustrating enough for developers to stop using Copilot altogether. Just like managing a junior developer, Copilot requires a lot of oversight: subtle bugs can take more time to find and fix than writing the code yourself would have. For developers who find the output too inaccurate, Copilot becomes an interference and ultimately doesn't save them any time.

👍 Copilot helps you write software faster

Despite the inaccuracy of LLMs, if you treat Copilot as a tool that can help take care of the boring stuff, it can be a powerful tool.

Key Takeaway 🔑

Copilot increases productivity.

Often, developers face mundane and repetitive tasks. Given enough context, Copilot can do these tasks for you with sufficient accuracy. For some developers, these tasks can be a significant time sink, and Copilot can help you get that time back.

Based on the mentioned 10-20% increase in productivity, such an improvement is substantial. For the sake of a conservative analysis, let's consider the lower bound: if we assume an engineer is paid $100k/yr and becomes just 5% more productive (half of the 10% reference), then with a yearly cost of $100 for Copilot, the tool brings in an added value of $4900 for the company.
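Spelling out the back-of-the-envelope math above, with the same assumed figures:

```python
# The conservative ROI estimate from the paragraph above, made explicit.
salary = 100_000          # assumed yearly engineer cost, $
productivity_gain = 0.05  # conservative: half of the cited 10% lower bound
copilot_cost = 100        # assumed yearly seat cost, $

added_value = salary * productivity_gain - copilot_cost
print(added_value)  # 4900.0
```

Even under these deliberately pessimistic assumptions, the tool pays for itself many times over, which is why the productivity argument tends to win in practice.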

👍 Copilot helps you write better software

Copilot can be a great learning tool for junior developers, giving tech leads more time to focus on higher-level tasks, ultimately leading to better software and happier developers.

Key Takeaway 🔑

Copilot has enough intelligence to help you write better software.

This holds especially true for junior developers still learning the ropes. It can drastically make mundane tasks like documentation and testing easier, giving developers more time to focus on the bigger picture while maintaining a high standard of code quality. Multiplying this effect across an engineering team leads to a higher quality codebase—the ultimate dream for engineering leaders.

🧐 Copilot is like a calculator

Copilot is a tool that can help you solve problems faster, but it is not a replacement for your brain. You still need to know how to solve problems, and you still need to know how to write code.

Key Takeaway 🔑

Just as calculators enable mathematicians to solve problems more quickly, Copilot spares developers from focusing on repetitive tasks.

However, just like calculators, Copilot does not help you make sense of the problem. You still need to know how to solve problems, and you still need to know how to write code.

Conclusion

Opinions on Copilot vary; some see it as a blessing, while others regard it as a curse. For its proponents, Copilot is a valuable tool that enhances coding speed and quality. However, critics argue it introduces more issues than it resolves.

I suspect that the complexity of a task makes a big difference in the quality of output. Working on tasks that require more context will inevitably lead to worse results. Yet, when viewed simply as a tool to handle the mundane aspects of coding, Copilot reveals its potential.

I Reviewed 1,000s of Opinions on Serverless

· 10 min read

From DHH shunning serverless and Ahrefs saving millions by not using a cloud provider at all, to Amazon raining fire on their own serverless product, serverless has recently faced significant scrutiny.

But still, everyone and their pet goldfish seem to be creating a serverless runtime (see Bun, Deno, Pydantic, Cloudflare, Vercel, Serverless, Neon, Planetscale, Xata, FaunaDB, Convex, Supabase, Hasura, Banana, and literally hundreds more). One research paper from Berkeley even claimed:

Serverless computing will become the default computing paradigm of the Cloud Era, largely replacing serverful computing and thereby bringing closure to the Client-Server Era.

-- Cloud Programming Simplified: A Berkeley View on Serverless Computing

Is it all hype? Is there real, objective merit to it? Where does serverless excel? Where do the trade-offs make sense?

To understand how developers are receiving serverless, I went to where developers live: Reddit, Twitter, Hacker News, and YouTube. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking opinions.

Funnel for gathering thought-provoking opinions

Next, I transcribed these discussions onto a whiteboard, organizing them into "Pro Serverless," "Anti Serverless", or "Neutral" categories, and then clustering them into distinct opinions. Each section in this post showcases an opinion while referencing pertinent discussions.

FigJam I used to organize perspectives

Anti-Serverless Opinions

Giant software companies such as Shopify, GitHub, and Stack Overflow have achieved new heights using tried-and-true frameworks like Ruby on Rails. However, serverless presents an intriguing new paradigm that promises to reduce costs, accelerate development, and eliminate the need for maintenance. And as with any technological shift, there will always be skeptics.

Opinion: Serverless is a performance and financial hazard

Key Takeaway 🔑

One of the most vocal criticisms of serverless computing is the unpredictability of its costs and the latency associated with cold starts. While cloud providers have made significant improvements in optimizing serverless runtimes over time, these issues remain a significant concern for many developers.

Additionally, serverless introduces a new paradigm that brings its own set of challenges when building complex applications, particularly those requiring multiple services to communicate with each other. This is already a common problem with microservices, but serverless further complicates the issue by forcing developers to work within a stateless and I/O-bound compute model.

These latencies can have real financial consequences; Amazon found that every 100ms of latency cost them 1% in sales. Moreover, without proper billing safeguards in place, serverless costs can spiral out of control, potentially leading to the infamous situation of a startup sinking under its cloud bill.

Encountering any of these issues could understandably leave a sour impression and be a compelling reason to abandon serverless in favor of a traditional VPS.
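The cost spiral described above is easy to sketch: serverless bills scale linearly with traffic, so a traffic spike scales your bill by the same factor. The prices below are assumed Lambda-style figures for illustration, not quoted from any provider's current price sheet.

```python
# Back-of-the-envelope serverless bill, using assumed per-request and
# GB-second pricing. The point: cost is linear in traffic, which is
# exactly what makes an unexpected spike expensive without safeguards.

PRICE_PER_REQUEST = 0.20 / 1_000_000   # $ per invocation (assumed)
PRICE_PER_GB_SECOND = 0.0000166667     # $ per GB-second (assumed)

def monthly_bill(requests, avg_ms, memory_gb):
    # Compute billed GB-seconds from duration and memory, then add
    # the flat per-invocation fee.
    gb_seconds = requests * (avg_ms / 1000) * memory_gb
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

normal = monthly_bill(5_000_000, avg_ms=120, memory_gb=0.5)
spike = monthly_bill(500_000_000, avg_ms=120, memory_gb=0.5)
print(round(normal, 2), round(spike, 2))  # a 100x spike costs 100x
```

A fixed-size VPS would have simply degraded or dropped requests under the same spike; whether that failure mode is better or worse is exactly the trade-off being argued here.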

Opinion: Serverless is a fad

Key Takeaway 🔑

Serverless is a fad, and the hype will fade.

Kudos to AWS's marketing team, as they successfully persuaded thousands of CTOs to stake their entire technology stack on a contentious new paradigm for building applications.

Some even go as far as to say serverless is "dangerous" or "wrong". In some ways, this viewpoint is not exaggerated. If cloud providers were not striving to lock everyone into their products to capture market share and boost revenue, what kind of corporate entities would they be?

Early adopters and evangelists did a great job of highlighting 10x features and pushing the cloud computing agenda. But always be wary of a technology that requires developers to acquire a new set of skills and tools, potentially wasting developers' time on a low-demand technology. Engineering teams should exercise caution when betting the farm on serverless, as it may lead to vendor lock-in. When things go wrong, good luck with troubleshooting and refactoring!

At the end of the day, serverless represents a substantial investment that could arguably be better allocated to other aspects of the business. Why not use long-running servers that were already deployable and maintainable in the late 2000s?

So, does serverless really solve your problems, or are you just succumbing to the hype?

Pro Serverless Opinions

Technologies that catch on usually have something good going for them, and serverless is no exception. Despite all the buzz and marketing blitz, there's some real enthusiasm for it. Some companies are saving time and money with serverless, making it a win-win. Some devs think serverless is the new must-have tool in the toolbox.

Opinion: Serverless accelerates the pace of feature development

Key Takeaway 🔑

Counter-intuitively, serverless computing appeals more to early product development than to enterprise products, and the reason is the speed at which features can be developed. The virtually negligible time and cost required to provision cloud computing resources also make serverless particularly attractive to hobbyists for their projects.

During the early stages of building a product, the most critical factor to consider when designing your system is the pace of development. Concerns about scalability fall far behind developer experience, as they are not yet relevant issues. In this context, serverless computing provides a compelling value proposition.

In light of this, why would anyone waste time setting up their own server? Starting with serverless computing is a logical choice. If cost or speed issues arise, other options can be considered at that time.

Opinion: Serverless can be outstanding when implemented correctly

Key Takeaway 🔑

For those who have successfully adopted serverless and lived to share their experiences, the enthusiasm is palpable.

It appears that building on serverless from first principles can yield outstanding results. Beyond the marketing hype, the true benefits of serverless become evident. Maybe as the Berkeley researchers predicted, maintaining your own server is becoming a thing of the past. With serverless, you can save money, reduce development time, and cut maintenance costs—a triple win.

Moreover, as serverless offerings like storage, background jobs, and databases continue to improve, the ecosystem will support the construction of increasingly complex apps, while still upholding the promise of serverless.

If you can navigate the downsides of serverless, you can create a product with an infrastructure that feels almost effortless. Perhaps the naysayers' tales are louder than the truth.

Neutral Opinions

I believe it is crucial to emphasize neutral viewpoints. In my view, these tend to be the most truthful because they recognize the advantages and disadvantages of each approach. They are also the opinions least commonly expressed, as many developers tend to be set in their ways.

Opinion: Serverless offers genuine benefits for specific use cases, but it is often misused or applied inappropriately

Key Takeaway 🔑

No technology is a silver bullet, and serverless is no exception.

Every technological decision is about choosing the right tool for the job. Serverless has some distinct trade-offs that need to be understood before it's adopted. Conduct thorough research on compute density, single-responsibility microservices, and performance requirements. Once you do, you'll see that serverless can offer immediate and tangible value. Recognize that serverless is not a replacement, but an alternative.

Whenever you hear someone criticize serverless, be wary of the problems they encountered. Were they design problems? Were there clear misuses of serverless? Serverless is not a panacea.

Conclusion

The anticlimactic conclusion? As always, it depends.

Though, I am more convinced that developers should strongly consider using serverless during the early stages of their SDLC. I previously built an application exclusively using serverless but was burned by lop-sided unit economics.¹ In retrospect, I can attribute that failure to not considering the downsides of serverless before adopting it.

That being said, I have also grown accustomed to Render for fast-paced development, and so far, I have no complaints. However, as I am always striving to become a 10x engineer, I will consider adding serverless to my toolbox.

Footnotes

  1. I built a Shopify App that gave shop owners pixel-perfect replays of customers navigating their online store. I stored session data (using rrweb) in S3 and processed events using lambda. I ended up operating at a loss until I shut it down.

I Reviewed 1,000s of GraphQL vs. REST perspectives

· 10 min read

Ask any developer: do you prefer GraphQL or REST? The question often leads to opinionated conversations, sometimes devolving into heated exchanges rather than discussions of objective facts.

To delve into the GraphQL vs. REST debate, I scoured platforms where developers frequently discuss such matters: YouTube, Reddit, Twitter, and Hacker News. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking perspectives.

Funnel for gathering thought-provoking perspectives

Next, I transcribed these discussions onto a whiteboard, organizing them into "Pro GraphQL," "Pro REST," or "Neutral" categories, and then clustering them into distinct perspectives. Each section in this post showcases a perspective while referencing pertinent discussions. To conclude, I highlight blog posts from GitHub and Shopify that serve as informative case studies in this ongoing debate.

FigJam I used to organize perspectives

Pro REST Perspectives

REST APIs have been around for decades. Much of the world's networking infrastructure is built on the HTTP standard, so it's no surprise that REST enjoys substantial support. However, similar to how SOAP was eventually overtaken by REST, GraphQL now poses a challenge to REST. As with any technological shift, there will always be those who express skepticism.

Perspective: GraphQL is not worth the complication

Key Takeaway 🔑

GraphQL is complicated.

A few features that are simple to implement with REST instantly become a huge pain with GraphQL:

  • Access control
  • Rate limiting
  • Caching
  • DOS protection
  • Using Maps/Tables/Dictionaries
  • N + 1 Problem

The complexity that arises from adopting GraphQL can sometimes outweigh its benefits for most engineering teams. Does your organization have a substantial budget allocated for engineers? Is your data model extremely relational? Is your API catering to a vast user base? Do all your engineers possess a proficient understanding of GraphQL? If not, you probably shouldn't be adopting GraphQL for its flexible query language.

Entire companies like Stellate and Apollo were born out of the need to solve these arguably over-complicated characteristics of GraphQL.
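To make the N+1 item on that list concrete, here is a minimal in-memory Python sketch (no GraphQL library involved) of how a naive per-item resolver multiplies database queries, and how batching in the style of the DataLoader pattern fixes it:

```python
# Minimal in-memory sketch of the N+1 problem: resolving a query like
# `posts { author { name } }` naively issues one query per post.

POSTS = [{"id": i, "author_id": i % 3} for i in range(9)]
AUTHORS = {0: "Ada", 1: "Grace", 2: "Barbara"}
query_count = 0

def fetch_author(author_id):
    # One "DB query" per call.
    global query_count
    query_count += 1
    return AUTHORS[author_id]

def fetch_authors(author_ids):
    # One batched "DB query" for all ids at once.
    global query_count
    query_count += 1
    return {a: AUTHORS[a] for a in author_ids}

# Naive resolver: 1 (posts) + N (one per post) = 10 queries
query_count = 1
naive = [fetch_author(p["author_id"]) for p in POSTS]
print(query_count)  # 10

# Batched resolver (DataLoader pattern): 1 + 1 = 2 queries
query_count = 1
authors = fetch_authors({p["author_id"] for p in POSTS})
batched = [authors[p["author_id"]] for p in POSTS]
print(query_count)  # 2
```

In a REST endpoint the join would typically live in one hand-written SQL query; with GraphQL's resolver model, avoiding N+1 requires deliberate batching infrastructure, which is part of the complexity tax being described.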

Perspective: GraphQL is not an optimization

Key Takeaway 🔑

GraphQL is not an automatic performance optimization.

If your team faces similar challenges as Facebook, feel free to focus on optimizing your API layer. Otherwise, there are likely other optimizations you can implement for your application beyond the query layer.

Despite its flexible query language, GraphQL can become a performance bottleneck. Writing a query that slows down your API is quite easy. Tasks like optimized DB queries, precise access control, query complexity analysis, and HTTP-level caching are complex to implement and introduce additional overhead to your server.

GraphQL does not inherently enhance the speed of your API. The responsibility to optimize your API performance still rests on your shoulders.

Pro GraphQL Perspectives

Despite the naysayers, GraphQL has still captured the minds of thousands of developers. The experiences of using GraphQL are undeniable, and it's easy to see why so many developers are excited about it.

Perspective: GraphQL has an amazing developer experience

Key Takeaway 🔑

Developers prioritize developer experience.

Ask any developer you know; developer experience is the key to unlocking productivity. The GraphQL community has undoubtedly prioritized developer experience for users. Check out the abundant and growing ecosystem of tools available for free. I'm impressed with The Guild and their work on building a comprehensive GraphQL suite of tools.

From documentation and client generators to data validation, server implementations, and API gateways, mature open-source projects cover it all. The caveat is that Node.js/JavaScript is the most popular language for GraphQL, resulting in differing levels of support for other languages. However, this situation is rapidly changing.

Because GraphQL mandates the creation of an API schema, unlike REST, code generation stands out as a prominent feature among most GraphQL technology stacks. This significantly benefits developers by reducing the amount of boilerplate code they need to write. The ability to generate code from "front to back" is a game-changer for rapid development. While the REST ecosystem is progressing, it took a back seat for many years until OpenAPI emerged.

Whether in small or large-scale software systems, GraphQL addresses a crucial aspect of the developer experience problem like never before. Personally, I believe this is a significant factor contributing to GraphQL's current traction. The compelling developer experience offered by GraphQL often leads people to overlook the tradeoffs.

Perspective: GraphQL has proven itself in the largest organizations

You can't ignore the use of GraphQL in hyper-scale organizations like Facebook, GitHub, and Shopify.

There are compelling case studies for the performance of GraphQL in some of the largest engineering organizations in the world. But it's important to note that these organizations have the resources to invest in the infrastructure and engineering talent required to make GraphQL work at scale.

For most organizations, those resources are either unavailable or better spent on immediate business goals. In these cases, GraphQL is not the right choice if you are purely looking for an optimization.

That being said, I still feel developer experience is a compelling reason to adopt GraphQL if your team is willing to invest in the learning curve.

Neutral Perspectives

I think it's important to highlight the neutral perspectives. In my opinion, these are the ones that hold the most truth in that they acknowledge the tradeoffs of each approach. They are also the rarest opinions to find since most developers are stubborn in their ways.

Perspective: GraphQL is great for some use cases, but not all

Key Takeaway 🔑

GraphQL proves valuable for intricate relational data models and sizable teams grappling with substantial scalability challenges, but it's not a silver bullet.

GraphQL emerged as a solution to a specific predicament at Facebook: the migration of the social media giant's newsfeed functionality from web to mobile. (Read more about it here).

This perspective may seem apparent and lacking in provocation, yet, as with any engineering choice, trade-offs come into play. Because Facebook has contributed so many great projects to open source, it's easy for engineers to blindly adopt the latest and greatest from Facebook in hopes it will help you preemptively solve scaling problems. But as always, you have to understand the problem you are solving to make the right technical decision for your team or you risk over-engineering your solution.

Perspective: The benefits of GraphQL are great, but difficult to implement in practice

Key Takeaway 🔑

To successfully implement GraphQL at scale, a high level of skill is required to ensure optimal performance.

In practice, GraphQL provides an exceptional interface for frontend and API consumers, enabling them to query and explore the API efficiently. However, it does shift a significant portion of complexity to the backend. For instance, in the absence of persisted queries, GraphQL exposes an interface that can be potentially hazardous, allowing unbounded query requests to your API. Have a public GraphQL API? Then you are forced to rate limit by calculating query complexity like Shopify. Without access to skilled engineers and the resources for substantial infrastructure investment, GraphQL can become a recipe for disaster when it comes to scalability.
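The cost-based rate limiting mentioned above can be sketched in a few lines. Real implementations like Shopify's walk the parsed query AST and use published per-field costs; this toy version uses a nested dict standing in for the AST, and the costs and the "field name ends in s means list" heuristic are invented purely for illustration.

```python
# Toy cost-based rate limiting for GraphQL: assign each field a cost,
# sum it over the query (lists multiply by their page size), and
# reject queries over budget.

FIELD_COST = {"object": 1, "connection_per_item": 1}  # assumed costs

def query_cost(selection, list_size=10):
    """selection: {field_name: subselection_or_None}. Crude heuristic:
    a field name ending in 's' stands in for a list/connection field."""
    total = 0
    for field, sub in selection.items():
        if field.endswith("s"):
            child = query_cost(sub or {}, list_size)
            total += FIELD_COST["connection_per_item"] + list_size * child
        else:
            total += FIELD_COST["object"] + query_cost(sub or {}, list_size)
    return total

# Shape of `products(first: 10) { variants(first: 10) { title } }`
q = {"products": {"variants": {"title": None}}}
cost = query_cost(q)
print(cost)  # 111: nested lists multiply, which is why depth is dangerous

BUDGET = 1000
allowed = cost <= BUDGET
```

Note how one extra level of list nesting multiplies the cost by the page size; that multiplicative blowup is exactly why public GraphQL APIs can't get away with naive request-count rate limits.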

Nonetheless, when executed correctly, GraphQL can bring immense benefits to client-heavy applications, offering tools like GraphiQL for interactive query exploration, cache normalization with urql for efficient data management, and the ability to create granular queries without much hassle. Without GraphQL, incorporating these features would demand a significant amount of effort.

REST doesn't share these scalability challenges to the same extent, as backend developers can pinpoint performance issues to specific endpoints. Developers can also rely on the mature and well-defined HTTP specification that existing infrastructure is already equipped to handle.

Conclusion

Before this exercise, I built GraphQL and REST APIs in both toy and production examples and always leaned towards DX. After reading 1,000s of perspectives, I am convinced that you should start with REST to avoid the complications of GraphQL. If you are lucky enough to be making technical decisions at "unicorn scale", I would consider looking at GraphQL to optimize the query interface. Meanwhile, take advantage of the growing OpenAPI ecosystem.

In most cases, for GraphQL to effectively address the problems it was originally designed to solve, a team should ideally possess a considerable number of developers, and their API should handle traffic at a "unicorn scale." Most likely it's not necessary to overly concern yourself with such extensive scalability, as the complexity involved might not be justified. Notable giants like Twilio and Stripe, for instance, have not yet integrated GraphQL into their public APIs despite being well beyond unicorn status.

Nevertheless, if you have a workflow that takes advantage of the remarkable developer experience offered by GraphQL, then by all means, embrace it! Just ensure that you are fully aware of the tradeoffs involved. Ultimately, the most suitable choice for most teams is whatever enables faster development.

Side note: If you are just looking to build fast, check out tRPC - the DX is pretty awesome. I have no affiliation other than having built a production app using tRPC.