
What's the fuss about MCP? (code examples)

· 6 min read

Lately I've been seeing a lot of buzz around MCP. The other day, a colleague of mine was also wondering what exactly MCP does and why it has been trending recently, since the current tool-calling paradigm doesn't seem broken on the surface. But after reading a bit more about it, I have a simple example that shows why MCP is a good idea.

Let's take a simple example of how you would have connected LLMs to external tools before MCP.

main.py
import json

from openai import OpenAI

client = OpenAI()

def get_capital(country: str) -> str:
    # This is a mock function that returns the capital of a country
    return "Washington, D.C."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of the moon?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_capital",
                "description": "Get the capital of a country",
                "parameters": {
                    "type": "object",
                    "properties": {"country": {"type": "string"}},
                    "required": ["country"],
                },
            },
        }
    ],
)

tool_calls = response.choices[0].message.tool_calls

for tool_call in tool_calls:
    if tool_call.function.name == "get_capital":
        # arguments arrives as a JSON string, so parse it first
        country = json.loads(tool_call.function.arguments)["country"]
        capital = get_capital(country)
        print(f"The capital of {country} is {capital}")
This is a very simple example, and you can see that the tool-calling paradigm does not stand out as broken. You can accomplish what you need with it.

But now, let's say you want to add another tool that can do a web search.

main.py
def web_search(query: str) -> str:
    # This is a mock function that returns the first result of a web search
    return "https://www.google.com"

...

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of the moon?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_capital",
                "description": "Get the capital of a country",
                "parameters": {
                    "type": "object",
                    "properties": {"country": {"type": "string"}},
                    "required": ["country"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web for information",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        },
    ],
)

tool_calls = response.choices[0].message.tool_calls

for tool_call in tool_calls:
    args = json.loads(tool_call.function.arguments)
    if tool_call.function.name == "get_capital":
        capital = get_capital(args["country"])
        print(f"The capital of {args['country']} is {capital}")
    elif tool_call.function.name == "web_search":
        result = web_search(args["query"])
        print(f"The first result for {args['query']} is {result}")

The key takeaway here is that for each new tool, you need to write a new function and new dispatch code. While manageable for a few tools, this quickly becomes unscalable as your toolset grows: you end up maintaining complex dispatch logic and tight coupling between your app and the tool implementations. With MCP, you write your client code once, and you modularly add new tools as you need them. Here is the example from the MCP docs.
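The scaling pain becomes clearer if you collapse the if/elif chain into a registry. Here is a rough stdlib-only sketch (tool names and argument shapes are illustrative, not from any real API); notice that even with a registry, every new tool still means editing this file by hand:

```python
import json

# A hand-rolled tool registry: the dispatch layer you end up maintaining
# yourself when there is no shared protocol between app and tools.
TOOLS = {
    "get_capital": lambda args: "Washington, D.C.",  # mock
    "web_search": lambda args: "https://www.google.com",  # mock
}

def dispatch(name: str, raw_args: str) -> str:
    """Route a model-requested tool call to the matching local function."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    args = json.loads(raw_args) if raw_args else {}
    return TOOLS[name](args)

print(dispatch("web_search", '{"query": "capital of the moon"}'))
# prints https://www.google.com
```

MCP's pitch is essentially that this registry, and the discovery of what belongs in it, moves behind a standard protocol instead of living in your application code.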

tool_call_with_mcp.py
async def process_query(self, query: str) -> str:
    """Process a query using Claude and available tools"""
    messages = [
        {
            "role": "user",
            "content": query
        }
    ]

    response = await self.session.list_tools()
    available_tools = [{
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema
    } for tool in response.tools]

    # Initial Claude API call
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    # Process response and handle tool calls
    final_text = []

    assistant_message_content = []
    for content in response.content:
        if content.type == 'text':
            final_text.append(content.text)
            assistant_message_content.append(content)
        elif content.type == 'tool_use':
            tool_name = content.name
            tool_args = content.input

            # Execute tool call
            result = await self.session.call_tool(tool_name, tool_args)
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

            assistant_message_content.append(content)
            messages.append({
                "role": "assistant",
                "content": assistant_message_content
            })
            messages.append({
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": content.id,
                        "content": result.content
                    }
                ]
            })

            # Get next response from Claude
            response = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
                tools=available_tools
            )

            final_text.append(response.content[0].text)

    return "\n".join(final_text)

The key line of code is self.session.list_tools(), which returns a list of tools like the web-search one we added earlier.

web_search.json
{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for information",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"]
        }
    }
}

This is what makes MCP powerful. Now, what is session and how does it magically know about all the tools?

Well, that's where MCP comes in. Since MCP standardizes the way tools are discovered, you can create a single MCPClient that can call any tool.

mcp_client.py
import asyncio
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from anthropic import Anthropic
from dotenv import load_dotenv

load_dotenv() # load environment variables from .env

class MCPClient:
    def __init__(self):
        # Initialize session and client objects
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()

    async def process_query(self, query: str):
        """Includes the code above from tool_call_with_mcp.py"""
        ...

    async def connect_to_server(self, server_script_path: str):
        """Connect to an MCP server

        Args:
            server_script_path: Path to the server script (.py or .js)
        """
        is_python = server_script_path.endswith('.py')
        is_js = server_script_path.endswith('.js')
        if not (is_python or is_js):
            raise ValueError("Server script must be a .py or .js file")

        command = "python" if is_python else "node"
        server_params = StdioServerParameters(
            command=command,
            args=[server_script_path],
            env=None
        )

        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

        await self.session.initialize()

        # List available tools
        response = await self.session.list_tools()
        tools = response.tools
        print("\nConnected to server with tools:", [tool.name for tool in tools])

    async def cleanup(self):
        """Clean up resources"""
        await self.exit_stack.aclose()

Notice how we have a session that was initialized with an interface to the provided server script. You can instantiate multiple MCP clients, each with its own server script, and each will be able to call the tools its server provides. In this example we have a single MCP client, but you can easily imagine adding more to your application by instantiating a list of clients: [MCPClient(), MCPClient(), ...].

Then, you can write an entrypoint for your application that would call the MCPClient and pass it the path to the server script.

main.py
import sys

async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_server_script>")
        sys.exit(1)

    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.process_query("What is the capital of the moon?")
    finally:
        await client.cleanup()

if __name__ == "__main__":
    asyncio.run(main())

Notice how this can be done programmatically through a CLI. This is where application developers would let users programmatically plug MCP servers into their application to provide it with more tools, amplifying its capabilities.

Conclusion

MCP is a new tool-calling paradigm that allows application developers to programmatically provide LLMs with access to external tools without having to worry about the underlying implementation details of the tool-call interfaces. MCP actually does even more than this to give LLMs more context and higher tool-usage accuracy, but I suspect this was the core impetus for its invention.

100K Views, 1K GitHub Stars, and 100+ Messages Later

· 3 min read

Recently I shared my startup journey on LinkedIn. In particular, I shared the source code for my startup after closing it down, along with the story of building and shutting down the company. Initially, I got an above-average response from my personal network, since it was big career news for me. But when I woke up the next day, I saw a pull request on the open-source repo asking to license the code, which was weird because I didn't think anybody would actually care about a failed startup's codebase.

Screenshot of the GitHub Issue that was created at 5AM PST while I was asleep

Similarly weird, there were ~50 stars on the repo. Curious about where this sudden attention was coming from, I checked my website analytics and noticed a surge of traffic from both Hacker News and Reddit. Someone had discovered my post and shared it on both platforms where it quickly gained traction. Over the next 6 days, my story reached over 100,000 readers, I received more than 100 recruitment messages, and the GitHub repo accumulated over 1,000 stars. What started as a simple LinkedIn post about my startup journey had organically spread to /r/programming and HackerNews, sparking discussions in both communities.

Here is a screenshot of website page views from Cloudflare.

Screenshot of web analytics from Cloudflare showing page views over the past 7 days (12-16-2024 to 12-23-2024)

Analytics from LinkedIn:

Screenshot of Post analytics from LinkedIn taken on 12-23-2024

Stars on GitHub:

Screenshot of open source GitHub repo for my startup taken on 12-23-2024

Reflection

While betting on myself didn't work out financially, the overwhelming response to sharing my journey has given me a unique sense of validation I've never felt before in my career. The way my story resonated with the tech community—from complete strangers to old colleagues—tells me that the skills and experiences I gained over these 3 years are genuinely valuable. Sure, it's just social validation, but seeing my post hit the front page of Hacker News and /r/programming suggests that my experience building and shutting down a startup resonates deeply with other engineers. When I look at my refreshed resume now, I see more than just another failed startup—I recall the experience of shipping products, pivoting through market changes, and learning hard lessons about what it takes to build something from scratch. In hindsight, what felt like an ending when we decided to shut down might just be another stepping stone in my career.

I Built My Resume In Markdown

· 3 min read

Resume creation is often overcomplicated with clunky word processors and inflexible templates. After 6 years away from the job market (see why), I needed to create a new resume and faced a crucial system design decision: stick with traditional tools like Microsoft Word, or leverage my software engineering experience to build something more robust and portable. I chose to create a solution that is:

  1. Easy to maintain and version controllable with Git.
  2. Does not require messing around with clunky word processing software.
  3. Lets me just put my content to file without thinking about any of the formatting.

After a couple of years of creating blog and documentation content, I've never found anything as simple yet flexible as Markdown for authoring content. The separation of content and styling that Markdown provides means that once you figure out the formatting, you never have to think about it again: you can just focus on writing great content. Previously I built my resume using LaTeX and a template (ifykyk), but after looking back at the source for that resume, I just don't want to dive back into writing LaTeX. It's great, but not nearly as simple as Markdown. A quick Perplexity search led me to a GitHub repo showing how to use pandoc to convert Markdown to HTML. Then you can use one of many HTML-to-PDF tools along with your own custom .css file to produce a final PDF. The best part is that you can use CSS to style every aspect of your resume, from fonts and colors to layout and spacing, giving you complete control over the visual presentation while keeping the content in simple Markdown.

The Workflow

The entire workflow consists of one shell script to run two commands.

generate-pdf.sh
#!/bin/sh

# Generate HTML first
pandoc resume.md -f markdown -t html -c resume-stylesheet.css -s -o resume.html
# Use puppeteer to convert HTML to PDF
node generate-pdf.js

Here is the JavaScript to kick off a headless puppeteer browser to export the HTML to PDF. I chose puppeteer over alternatives like wkhtmltopdf and weasyprint because neither properly respected the CSS I wrote—specifically, trying to style the role/company/dates line in the experiences section to be in a single yet evenly spaced row was not possible with wkhtmltopdf, and weasyprint's output styling did not match my CSS either.

generate-pdf.js
const puppeteer = require("puppeteer");

async function generatePDF() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(`file://${process.cwd()}/resume.html`, {
    waitUntil: "networkidle0",
  });
  await page.pdf({
    path: "resume.pdf",
    format: "A4",
    printBackground: true,
    preferCSSPageSize: true,
    margin: {
      top: "1cm",
      right: "1cm",
      bottom: "1cm",
      left: "1cm",
    },
  });
  await browser.close();
}

generatePDF();

Check out the output here and the source markdown here. You can also check out the custom .css file I used here, it's simple and classic. I tried not to stray too far away from traditional resume styling but added a little bit of fundamentals from Refactoring UI, primarily about visual hierarchy and spacing.

After 3 Years, I Failed. Here's All My Startup's Code.

· 4 min read

And by "all," I mean everything: core product, failed pivots, miscellaneous scripts, deployment configurations, marketing website, and more. Hopefully the codebase is interesting or potentially helpful to somebody out there!

The Startup

Konfig was a developer tools startup focused on making API integrations easier. We started in late 2022 with the mission of simplifying how developers work with APIs by providing better tooling around SDK generation, documentation, and testing.

Our main product was an SDK Generator that could take any OpenAPI specification and generate high-quality client libraries in multiple programming languages. We expanded this with additional tools for API documentation and interactive API testing environments.

While we gained some traction with startups, we ultimately weren't able to build a hyper-growth business. It was too difficult to get potential customers to sign contracts with us, and price points were too low despite the demonstrable ROI. We then decided to pivot into a vertical B2B SaaS AI product because we felt we could use the breakthroughs in generative AI to solve previously unsolvable problems, but after going through user interviews and the sales cycle for many different ideas, we weren't able to find enough traction to make us believe we were on the right track to build a huge business.

Despite the outcome, we're proud of the technology we built and believe our work could be helpful for others. That's why we're open-sourcing our entire codebase.

The Konfig landing page

The Repo

Here is the public GitHub repo. I'm releasing it exactly as it was when we shut down—no cleanup, no polishing, no modifications. This is our startup's codebase in its true, unfiltered form.

The Konfig GitHub repository containing all our startup's code: the good, the bad, and the ugly.

The Products

In the past 3 years, we built 4 main products.

  1. SDK Generator
  2. Markdown and OpenAPI Documentation
  3. API Demos (Markdown Based Jupyter Notebooks)
  4. SDKs for Public APIs

Random Things

And lots of miscellaneous things:

  1. Shell script for generating cold outbound message
  2. Programmatic SEO Scripting
  3. References to live customer deployments and pre-sales artifacts
  4. Marketing website
  5. Product Documentation
  6. Modified Changeset Bot - Supports our custom monorepo setup
  7. SDK Generator Integration Tests using Earthly
  8. Python Code Formatting Service
  9. AI Pivot Experimentation
  10. render.com deployment configuration - render.yaml
  11. API Documentation Generator Tool using LLMs/HTMX/Django
  12. Custom Notion Database Integration
  13. Python script for cropping blog post images

Thank You

I want to express my deepest gratitude to everyone who supported us on this journey. To our investors who believed in our vision and backed us financially, thank you for taking a chance on us. To our customers who trusted us with their business and provided invaluable feedback, you helped shape our products and understanding of the market. And to Eddie and Anh-Tuan, my incredible teammates—thank you for your dedication, hard work, and partnership through all the ups and downs. Your contributions made this startup journey not just possible, but truly meaningful and worthwhile.

Looking back to March 2022 when I left my job to pursue this startup full-time, I have absolutely no regrets. I knew the risks—that failure was a very real possibility—but I also knew I had to take this chance. Today, even as we close this chapter, I'm grateful for the failure because it has taught me more than success ever could. The experience has been transformative, showing me what I'm capable of and what I still need to learn.

As for what's next, I'm excited to explore new opportunities, see where the job market thinks I fit (haha), and continue learning and growing. Who knows? Maybe someday I'll take another shot at building something of my own. But for now, I'm thankful for the journey, the lessons learned, and the relationships built. This experience has been invaluable, and I'm grateful for everyone involved.

Not All Problems Are Great Fits for LLMs

· 4 min read

Many startups are racing to find product-market fit at the intersection of AI and various industries. Several successful use-cases have already emerged, including coding assistants (Cursor), marketing copy (Jasper), search (Perplexity), real estate (Elise), and RFPs (GovDash). While there are likely other successful LLM applications out there, these are the ones I'm familiar with off the top of my head. Through my experience building and selling LLM tools, I've discovered an important new criterion for evaluating an idea.

Are LLMs Especially Good at Solving This Problem?

Traditional business advice emphasizes finding and solving urgent, critical problems. While this principle remains valid, not all pressing problems are well-suited for LLM solutions, given their current capabilities and limitations. As non-deterministic algorithms, LLMs cannot be tested with the same rigor as traditional software. During controlled product demos, LLMs may appear to handle use-cases flawlessly, creating an illusion of broader applicability. However, when deployed to production environments with diverse, unpredictable inputs, carefully crafted prompts often fail to maintain consistent performance.

Where LLMs Excel

However, LLMs can excel when their non-deterministic nature doesn't matter or even provides benefits. Let's examine successful LLM use-cases where this is true.

Coding Copilots

Think of coding assistants like Cursor that help you write code and complete your lines.

When you code, there's usually a "right way" to solve a problem. Even though there are many ways to write code, most good solutions look similar—this is what we call "low entropy", like how recipes for chocolate chip cookies tend to share common ingredients and steps. LLMs are really good at pattern matching, which is perfect for coding because writing code is all about recognizing and applying common patterns. Just like how you might see similar ways to write a login form or sort a list across different projects, LLMs have learned these patterns from seeing lots of code, making them great at suggesting the right solutions.

Copywriting Generator

Marketing copy is more art than science, making non-deterministic LLM outputs acceptable. Since copywriting involves ideation and iteration rather than precision, it has a naturally high margin of error.

Search

Search is unique because users don't expect perfect first results - they're used to scrolling and exploring multiple options on platforms like Google or Amazon. While search traditionally relies on complex algorithms, LLMs can enhance the experience by leveraging their ability to synthesize and summarize information within their context window. This enables a hybrid approach where traditional search algorithms surface results that LLMs can then summarize to guide users to what they're looking for.
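That retrieve-then-summarize shape can be sketched in a few lines of plain Python. Everything here is a stand-in: the keyword scorer for a real search index, and the final join for an LLM summarization call.

```python
def keyword_score(doc: str, query: str) -> int:
    """Crude relevance: count query-term occurrences (stand-in for a real ranker)."""
    return sum(doc.lower().count(term) for term in query.lower().split())

def hybrid_search(docs: list[str], query: str, top_k: int = 2) -> str:
    # Step 1: a traditional retrieval pass surfaces candidate documents...
    ranked = sorted(docs, key=lambda d: keyword_score(d, query), reverse=True)
    top = ranked[:top_k]
    # Step 2: ...then a summarizer condenses them for the user. A real system
    # would call an LLM here; we just join the hits as a placeholder.
    return " | ".join(top)

docs = [
    "The moon has no capital city.",
    "Paris is the capital of France.",
    "Moon landings occurred between 1969 and 1972.",
]
print(hybrid_search(docs, "capital of the moon"))
```

The point of the split is tolerance for error: if retrieval surfaces a mediocre candidate, the user just keeps scrolling, which is exactly the forgiving behavior the paragraph above describes.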

Real Estate Assistant

Leasing agents primarily answer questions about properties to help renters find suitable homes and sign leases. Since their core function involves retrieving and relaying property information, a real estate assistant effectively becomes a specialized search problem.

RFPs

RFP responses combine two LLM strengths: extracting questions from lengthy, unstructured documents and searching internal knowledge bases for relevant answers. Since extracting questions from RFPs is time-consuming but straightforward, LLMs can work in parallel to identify all requirements that need addressing. This makes the RFP response process essentially a document extraction and search problem that's perfect for automation.
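To see what the LLM adds, it helps to look at the mechanical baseline it replaces. A naive non-LLM extractor (the sample RFP text is invented) only catches requirements literally phrased as questions, which is exactly the gap an LLM closes for statement-form requirements:

```python
RFP_TEXT = """\
Section 3. Vendor Requirements.
3.1 What is your uptime SLA?
Vendors must be SOC 2 compliant.
3.2 How do you handle data residency?
"""

def extract_questions(text: str) -> list[str]:
    # Naive baseline: keep lines that literally end with a question mark.
    # It misses statement-form requirements ("Vendors must be...") that an
    # LLM would also flag as needing a response.
    return [line.strip() for line in text.splitlines() if line.strip().endswith("?")]

print(extract_questions(RFP_TEXT))
```

A production pipeline would run an LLM over document chunks in parallel to pull out every requirement, then search the internal knowledge base for each one.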

Conclusion

When building an LLM startup, focus on problems with two key characteristics:

  • Low Entropy - solutions follow common patterns, as seen in coding
  • High margin of error - tasks like copywriting where art trumps science

Or problems that can be solved in a similar way to common well-suited problem types such as:

  • Search
  • Document extraction

Beyond traditional business evaluation, ask yourself: "Are LLMs particularly well-suited to solve this problem?" If not, reconsider unless you have unique insights into making it work.

6 Things I Would Improve About Dify.AI

· 5 min read
Screenshot of a workflow on our self-hosted Dify instance

To be clear, Dify is one of the most capable, well-designed, and useful prompt engineering tools I have used to date. I've tried code-only solutions such as LangChain and other no-code tools such as LangFlow, but Dify currently takes the cake. I've also briefly tried others such as PromptLayer, but when it comes to building agentic workflows, which I believe is the future of AI automation, Dify feels like a better fit. In particular, I appreciate its great UI design, workflow editor, and self-hosting option. But there is always room for improvement. Here are the things I would improve about dify.ai today.

  1. When testing workflows from the studio, it would be really nice to be able to select inputs from previous runs to be used as inputs for testing.
It would be really nice to just have a button that copies the highlighted inputs (as pictured) from a previous run into a new run
  2. Versions of a workflow should persist on the server. Right now, I'm scared to close a tab since I'll lose any previous versions of the workflow I'm working on. If I am making experimental changes, this can be devastating if I don't remember how the workflow previously worked.
Right now, Dify persists versions in the browser, so they're lost when the tab is closed. Dify even explains that your work will be lost when leaving the editor.
  3. When testing changes to a workflow, I would love to have a version of a workflow that could be published for testing so production remains untouched. It would be even better if it somehow integrated with CI platforms so you could have a team of developers working on their own versions of the workflow. This would eventually mean you'd need some sort of integration with Git so you can branch, push, and test changes to the workflow. On that note, it would also be great to be able to author Dify workflows as code, and have that bi-directionally sync with the UI version of Dify. This would be amazing for non-developers and developers collaborating on agentic workflows.
Right now, you can only publish one version of a workflow to be accessed by API. It would be ideal to instead publish multiple versions for testing and allow the API to route across different versions to be used in local testing or even CI environments
  4. Managing many workflows can be extremely confusing, especially when you clone a workflow for testing, which could be fixed by adding support for (3). Also, the tile view is not a compact and easy way to scan all the workflows. It would be really nice if there were a table view that could be sorted, searched, filtered, etc.
The current studio view shows workflows as tiles with basic tag filtering. This can get messy and hard to navigate as the number of workflows grows, especially when workflows are cloned for testing. A table view with more robust filtering and sorting would make it much easier to manage.
  5. Generated API documentation for the workflow would be nice to save time when integrating a workflow into a codebase.
Currently, Dify provides some static code examples that don't update based on your workflow's inputs and outputs. Having dynamically generated API documentation would make it much easier to integrate workflows into applications.
  6. API keys are extremely annoying to handle when you have many workflows.
Currently, you need to create a separate API key for each workflow, which becomes extremely messy when managing 10+ workflows since keys aren't easily identifiable as belonging to specific workflows. Having a single master API key with workflow identifiers in the API calls would be much simpler to manage and organize.

Overall, there's a lot of room for improvement in Dify.ai, but it's important to remember that it's still in beta. The platform already offers an impressive set of features and a unique approach to building LLM-powered applications. I'm confident that many of these pain points will be addressed as the platform matures.

Some of these suggestions, particularly the bi-directional sync between code and UI, would be technically challenging to implement. However, features like this would significantly differentiate Dify from its competitors and create a truly unique development experience that bridges the gap between technical and non-technical users.

If these improvements were implemented, particularly around version control, testing workflows, and API management, it would dramatically improve the developer experience and make Dify an even more compelling platform for building production-grade AI applications. The potential is already there - these enhancements would just help unlock it further.

I Reviewed 1,000s of Opinions on HTMX

· 12 min read

HTMX is a trending JavaScript library that enables the construction of modern user interfaces using hypermedia as the engine of application state.

In a nutshell, you can implement a button that replaces the entire button with an HTTP response using HTML attributes:

<script src="https://unpkg.com/htmx.org"></script>
<!-- have a button POST a click via AJAX -->
<button hx-post="/clicked" hx-swap="outerHTML">
  Click Me
</button>
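The server side of that snippet is just an endpoint that returns an HTML fragment instead of JSON. A framework-agnostic sketch (the route and markup are hypothetical, mirroring the hx-post above):

```python
def clicked() -> str:
    # Hypothetical handler for POST /clicked. Whatever HTML it returns is what
    # hx-swap="outerHTML" swaps in place of the button - the application state
    # lives in the hypermedia itself, not in client-side JavaScript. In a real
    # app this would be wired up as a route in Flask, Django, etc.
    return "<div>Thanks for clicking!</div>"

print(clicked())
```

This is the "hypermedia as the engine of application state" idea in miniature: the server drives the UI by sending markup, not data for a client-side framework to render.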

If you follow popular web development trends or are a fan of popular developer-focused content creators, you have probably heard about it through Fireship or ThePrimeagen. However, HTMX has brought an absolute whirlwind of controversy with its radically different approach to building user interfaces. Some folks are skeptical, others are excited, and others are just curious.


To analyze how developers truly feel about HTMX, I went to where developers live: Reddit, Twitter, Hacker News, and YouTube. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking opinions.

Funnel for gathering thought-provoking opinions

Next, I transcribed these discussions onto a whiteboard, organizing them into "Pro-HTMX" (👍), "Anti-HTMX" (👎), or "Neutral" (🧐) categories, and then clustering them into distinct opinions. Each section in this post showcases an opinion while referencing pertinent discussions.

Whiteboard of opinions

To start, I'll go over the Anti-HTMX (👎) opinions, since they are spicy.

👎 HTMX is just hype

After Fireship released a video about HTMX, it started to gain a lot of attention for its radical approach to building user interfaces. Carson Gross, the author of HTMX, is also adept at generating buzz on his Twitter. And since HTMX is new, it's unlikely that you will find many examples of sufficiently complex applications using HTMX. Therefore, some developers are of the opinion that HTMX is merely capitalizing on hype rather than offering a genuine solution to the challenges of building user interfaces.


Takeaway

Like all technologies, there is typically a cycle of hype, adoption, and dispersion, and HTMX is no different. This cycle is beginning to unfold, and it's time to see what the future holds. It is fair to criticize HTMX for riding a wave of hype. However, if developers feel HTMX solves a real problem, adoption will naturally follow. But only time will tell...

👎 HTMX is a step back, not forward

If you squint at HTMX, it looks like a relic of the past, when MPAs (multi-page applications) were the norm. Some developers see HTMX as a step back, not forward. There is a good reason why modern web applications are built with technologies like React, Next.js, and Vue.js; by ignoring that and using HTMX, you might be ignoring the modern best practices of building web applications.

Takeaway​

Depending on your level of expertise in building with modern web technologies, you may feel that HTMX is a step back, not forwardβ€”especially if you have previously built MPAs with jQuery. For those who see HTMX as a step back, they want nothing to do with it.

If you are already familiar with modern web technologies, your teammates are as well, and current applications are using the latest web technologies, it's really hard to see why you would use HTMX in future applications. But for those starting new applications, HTMX is simply a different approach to building interfaces. Whether it is worth considering depends on the complexity of your application's interface and the expertise of your team. But it definitely doesn't hurt to entertain new ideas like HTMX. Who knows, it could ultimately improve your skills as an engineer.

πŸ‘Ž HTMX is unnecessarily complex and less user-friendly​

In practice, some developers feel that HTMX is actually more complex than the current best practices. They specifically dislike the use of HTML attributes, magic strings, and magic behavior. Moreover, some developers feel this makes engineering teams less productive and codebases more complex.

Takeaway​

People will always have a natural aversion to unfamiliar technologies, and HTMX is no exception. Those who adopt HTMX in their projects will likely encounter some friction, which could be a source of frustration and negative discourse. HTMX also uses more declarative paradigms, which can be harder to read for developers who are just beginning to learn HTMX. This makes HTMX more complex and less user-friendly for those sorts of developers.

React keeps more state in memory on the client; HTMX, on the other hand, embeds state in the HTML itself. This is a fundamentally different approach to interface design, which developers need to learn and adopt before feeling comfortable reading and writing HTMX.

πŸ‘Ž HTMX is only good for simple use-cases​

Users expect modern interfaces which require more complex DOM manipulations and UX design. To implement such interfaces, using HTMX is not enough. Therefore, HTMX is only good for simple use-cases.

Takeaway​

For developers that are working on more complex use-cases, HTMX is not enough. And since HTMX has sort of compared itself to React, it is fair to point out that HTMX is not a solution for building complex web applications.

Before deciding to use HTMX, developers should first consider the complexity of the use-case. If HTMX is not the right tool for the job, then look elsewhere for libraries that will allow you to solve the problem. It may well be the case that your use-case will require a "thick client".

πŸ‘ HTMX is simple​

Developers are witnessing a reduction in codebase complexity and a decrease in overall development time. As grug puts it, "complexity is bad". Developers who embrace this mindset are highly enthusiastic about HTMX. With HTMX handling the DOM manipulations, developers can move logic that was duplicated in the browser back into the server. This marks a significant improvement over the current best practices for building web applications, where a lot of logic has to be duplicated across the server and client. For backends that are not written in JavaScript, the current standard UI libraries can be a significant burden on an application's developers.

Takeaway​

Simplicity is a subjective concept. It highly depends on your application, team, and use-cases. For those who have spent a lot of time with JavaScript, HTMX stands out for its simplicity. If you feel that the current best practices are overbuilt, bloated, and overly complexβ€”HTMX might be worth considering.

In many use-cases where the primary value of the application lies in the backend, React may not be essential but could still be the optimal choice for your team, depending on their expertise.

πŸ‘ HTMX does not require you to know JavaScript​

If you are a backend developer, it's unlikely that you know React, JavaScript, or meta frameworks like Next.js. Full-stack developers, on the other hand, may find HTMX to be a breath of fresh air in the form of a simple way to build interfaces. The fact that pretty much any developer can pick up HTMX is a huge benefit, especially for teams that are not as comfortable with JavaScript.

Takeaway​

I personally love this perspective and it's actually one of the reasons I've spent some time experimenting with HTMX. My company is a small team and not all of us have used React or Next.js, so HTMX as a solution for teams that are not as comfortable with JavaScript is an extremely compelling narrative for me.

I believe this is true for many other teams as well, especially since full-stack developers are hard to come by. Some developers are finding that this unlocks new opportunities for them to work on interesting projects.

πŸ‘ HTMX makes you productive​

HTMX, when combined with battle-tested templating libraries like Django's template language or Go's templ library, boosts productivity. By not having to spend time writing a client layer for the UI, developers can focus on where the application provides most of its value: the business logic.

Takeaway​

By reducing the amount of redundant business logic in the client-layer, developers are finding that they can focus on the actual product and not the UI. And since you can get up and running with HTMX with a simple <script> tag, it's easy to bootstrap a project with HTMX.

These two factors leave a positive impression on the developer experience for some.
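To make the script-tag claim concrete, here is a minimal sketch of the server side using only Python's standard library (the route and markup here are hypothetical, not from any real app): the browser loads HTMX via a `<script>` tag, and each interaction simply asks the server for a fresh HTML fragment to swap in.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def clicked_fragment(count: int) -> str:
    # The server renders the next UI state as HTML; HTMX swaps it into the DOM.
    return (f"<button hx-post='/clicked' hx-swap='outerHTML'>"
            f"Clicked {count} times</button>")

class Handler(BaseHTTPRequestHandler):
    count = 0  # demo-only global state

    def do_POST(self):
        if self.path == "/clicked":
            Handler.count += 1
            body = clicked_fragment(Handler.count).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

# To try it: HTTPServer(("localhost", 8000), Handler).serve_forever()
```

There is no client-side build step and no JSON serialization layer; the HTML fragment above is the entire "API response".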

🧐 HTMX is not a silver bullet​

Like any problem, you must choose the right tool for the job. For developers who see HTMX for its pros and cons, they believe that it's just another tool to solve a specific problem. And since HTMX can also be progressively adopted, developers can adopt it in small portions of their codebase. But as always, certain use-cases will require more complex client logic which would require you to reach for more advanced tooling like React.

Takeaway​

Like everything else in software, choose the right tool for the job. It's not much of a takeaway, but consider it your responsibility as an engineer to make critical decisions, such as whether or not to adopt HTMX.

Conclusion​

Competition is good. HTMX is thought-provoking, but I think it's great because it forces developers to entertain new and novel ideas. Developers can often be wary and cautious of new technologies, since a new technology might not be solving a problem they personally face. But HTMX resonates with an enthusiastic group of developers who are starting to use it in production.

I have not personally deployed a production application using HTMX, but I am really excited to try it. It would solve a personal problem with my engineering team and also allow me to simplify the build process of deploying applications. Those two things are important to me, and HTMX seems like a great fit.

I Reviewed 1,000s of Opinions on gRPC

Β· 9 min read
What do developers really think about gRPC?

It is no secret that gRPC is widely adopted in large software companies. Google originally developed Stubby in 2010 as an internal RPC protocol. In 2015, Google decided to open source Stubby and rename it to gRPC. Uber and Netflix, both of which are heavily oriented toward microservices, have extensively embraced gRPC. While I haven't personally used gRPC, I have colleagues who adore it. However, what is the true sentiment among developers regarding gRPC?

To find out, I went to where developers live: Reddit, Twitter, Hacker News, and YouTube. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking opinions.

Funnel for gathering thought-provoking opinions

Next, I transcribed these discussions onto a whiteboard, organizing them into "Pro-gRPC" (πŸ‘), "Anti-gRPC" (πŸ‘Ž), or "Neutral" (🧐) categories, and then clustering them into distinct opinions. Each section in this post showcases an opinion while referencing pertinent discussions.

Whiteboard of opinions

πŸ‘ gPRC's Tooling Makes It Great for Service-to-Service Communication​

The most significant praise for gRPC centers on its exceptional code generation tools and efficient data exchange format, which together enhance service-to-service developer experience and performance remarkably.

Key Takeaway πŸ”‘β€‹

Engineering teams are often responsible for managing multiple services while also interacting with services managed by other teams. Code generation tools empower developers to expedite their development process and create more reliable services. The adoption of Protocol Buffers encourages engineers to focus primarily on the interfaces and data models they expose to other teams, promoting a uniform workflow across various groups.

The key benefits of gRPC mentioned were:

  • Client and server stub code generation tooling
  • Data governance
  • Language-agnostic architecture
  • Runtime Performance
  • Well-defined error codes

If your organization is developing a multitude of internal microservices, gRPC could be an excellent option to consider.

πŸ‘ Compared to the straightforward nature of gRPC, REST can be relatively confusing​

To properly build applications in REST, you need to understand the underlying protocol, HTTP. gRPC, on the other hand, abstracts HTTP away, making it less confusing for quickly building applications.

Key Takeaway πŸ”‘β€‹

REST often leaves considerable scope for errors and confusion in defining and consuming APIs.

In contrast, gRPC is designed with the complexities of large-scale software systems in mind. This focus has led to an emphasis on robust code generation and stringent data governance. REST, on the other hand, is fundamentally a software architectural style oriented towards web services, where aspects like code generation and data governance were not primary considerations. The most widely used standard for designing REST APIs, OpenAPI (formerly Swagger), is essentially a specification that the underlying protocol does not enforce. This leads to situations where the API consumer might not receive the anticipated data model, resulting in a loosely coupled relationship between the API definition and the underlying protocol. This disconnect can be a significant source of confusion and frustration for developers. Hence, it raises a critical question: why opt for REST when gRPC offers a more cohesive and reliable alternative?

πŸ‘Ž gRPC complicates important things​

Important functionalities like load balancing, caching, debugging, authentication and browser support are complicated by gRPC.

Key Takeaway πŸ”‘β€‹

On the surface, gRPC appears to be a great solution for building high-quality APIs. But as with any technology, the problems start to show when you begin to use it. In particular, by adding an RPC layer, you have effectively introduced a dependency in a core part of your system. So when it comes to certain functionalities that an API is expected to provide, you are at the mercy of gRPC.

Soon, you'll start to find yourself asking questions like:

  • How do I load balance gRPC services?
  • How do I cache gRPC services?
  • How do I debug gRPC services?
  • How do I authenticate gRPC services?
  • How do I support gRPC in the browser?

The list goes on. Shortly after, you'll be wondering: "Why didn't I just build a REST API?"

πŸ‘Ž REST is standard, use it​

The world is built on standards and REST is no exception.

Key Takeaway πŸ”‘β€‹

Don't be a hero, use REST.

Recall that gRPC was born out of Google's need to build a high-performance service-to-service communication protocol. So practically speaking, if you are not Google, you probably don't need gRPC. Engineers often overengineer complexity into their systems and gRPC seems like a shiny new toy that engineers want to play with. But as with any new technology, you need to consider the long-term maintenance costs.

You don't want to be the one responsible for introducing new technology into your organization which becomes a burden to maintain. REST is battle-tested, so by using REST, you get all of the benefits of a standard such as tooling and infrastructure without the burden of maintaining it. Most engineers are also familiar with REST, so it's easy to onboard new developers to your team.

🧐 Use gRPC for internal services, REST for external services​

gRPC significantly enhances the developer experience and performance for internal services. Nonetheless, it may not be the ideal choice for external services.

Key Takeaway πŸ”‘β€‹

gRPC truly excels in internal service-to-service communication, offering developers remarkable tools for code generation and efficient data exchange. Disregarding the tangible benefits to developer experience that gRPC provides can be shortsighted, especially since many organizations could greatly benefit from its adoption.

However, it's important to note that browser support wasn't a primary focus in gRPC's design. This oversight necessitates an additional component, grpc-web, for browser accessibility. Furthermore, external services often have specific needs like caching and load balancing, which are not directly catered to by gRPC. Adopting gRPC for external services might require bespoke solutions to support these features.

Recognizing that not every technology fits all scenarios is crucial. Using gRPC for external services can be akin to trying to fit a square peg into a round hole, highlighting the importance of choosing the right tool for the right job.

🧐 gRPC is immature​

gRPC was only open sourced in 2015, when Google decided to standardize its internal RPC protocol, so there is still a lot of open source tooling that needs to be built.

Opinion 1 of 0

Key Takeaway πŸ”‘β€‹

REST APIs are supported by a rich variety of tools, from cURL to Postman, known for their maturity and thorough documentation. In contrast, gRPC is comparatively younger. Although it has some tools available, they aren't as developed or as well-documented as those for REST.

However, the gRPC ecosystem is witnessing rapid advancements with its increasing popularity. The development of open-source tools such as grpcurl and grpcui is a testament to this growth. Additionally, companies like Buf are actively contributing to this evolution by creating advanced tools that enhance the gRPC developer experience.

Conclusion​

gRPC undeniably stands as a polarizing technology. It excels in enhancing developer experience and performance for internal service-to-service communications, yet some developers remain unconvinced of its advantages over REST.

In our case, we employ REST for our external API and a combination of REST/GraphQL for internal services. Currently, we see no pressing need to integrate gRPC into our workflow. However, the fervent support it garners from certain segments of the developer community is quite fascinating. It will be interesting to observe the evolution and expansion of the gRPC ecosystem in the coming years.

I Reviewed 1,000s of Opinions on GitHub Copilot

Β· 8 min read

GitHub Copilot has recently taken the software engineering world by storm, hitting a milestone of $100M ARR. This achievement alone qualifies it to be a publicly listed company. Meanwhile, funding continues to flow into code-focused LLM use cases.

LLMs are causing a stir in the software engineering community, with some developers praising the technology and others fearing it. The controversy surrounding LLMs is so intense that it has even led to a lawsuit against GitHub Copilot.

To understand how developers are receiving Copilot, I went to where developers live: Reddit, Twitter, Hacker News, and YouTube. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking opinions.

Intent of this article

We are aware that GitHub Copilot was trained on questionable data (see GitHub Copilot and open source laundering) and that there is ongoing controversy surrounding the technology. However, this article is not about the ethics of LLMs. Instead, it is focused on product feedback from developers.

The ethics of LLMs and training data is a whole other discussion that we will not be covering in this article. And quite frankly, I am not qualified to comment on the ethics of LLMs.

We have no affiliation with GitHub Copilot, OpenAI, or Microsoft. We are not paid to write this article, and we do not receive any compensation from the companies.

Funnel for gathering thought-provoking opinions

Next, I transcribed these discussions onto a whiteboard, organizing them into "Anti-Copilot" (πŸ‘Ž), "Pro-Copilot" (πŸ‘), or "Neutral" (🧐) categories, and then clustering them into distinct opinions. Each section in this post showcases an opinion while referencing pertinent discussions.

Whiteboard of opinions

πŸ‘Ž Copilot produces bad results​

LLMs operate as probabilistic models, implying they aren't always correct. This is especially true for Copilot, which is trained on a corpus of code that may not be representative of the code that a developer writes. As a result, Copilot can produce consistently bad results.

Key Takeaway πŸ”‘β€‹

Developers expect reliability from their tools.

Copilot is not reliable, and therefore, certain developers have a tough time wrestling with its output. Copilot lies or produces bad results a significant portion of the time. This can be exceedingly frustrating for developers who are expecting Copilot to deliver on its promise of writing code for you. After some bad experiences, some developers have even stopped using Copilot altogether.

For people who worry about job security, fear nothing, because Copilot is not going to replace you anytime soon.

πŸ‘Ž Copilot creates more problems than solutions​

Copilot is a tool that is supposed to help developers write code. However, its unreliable results create more problems than solutions.

Key Takeaway πŸ”‘β€‹

Copilot can waste your time.

Code requires 100% accuracy, and inaccuracy can lead you down a rabbit hole of debugging, often wasting time or flat-out breaking your code. In some cases, this is frustrating enough for developers to stop using Copilot altogether. Just like managing a junior developer, Copilot requires a lot of oversight. Sometimes subtle bugs can take more time to debug and fix than writing the code yourself. For developers who find the output too inaccurate, Copilot becomes an interference and ultimately doesn't save them any time.

πŸ‘ Copilot helps you write software faster​

Despite the inaccuracy of LLMs, if you treat Copilot as a tool that can help take care of the boring stuff, it can be a powerful tool.

Key Takeaway πŸ”‘β€‹

Copilot increases productivity.

Often, developers face mundane and repetitive tasks. Given enough context, Copilot can do these tasks for you with sufficient accuracy. For some developers, these tasks can be a significant time sink, and Copilot can help you get that time back.

Based on the mentioned 10-20% increase in productivity, such an improvement is substantial. For the sake of a conservative analysis, let's consider the lower bound: if we assume an engineer is paid $100k/yr and becomes just 5% more productive (half of the 10% reference), then with a yearly cost of $100 for Copilot, the tool brings in an added value of $4900 for the company.
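The back-of-the-envelope math above can be written down explicitly. The figures are the article's illustrative assumptions, not measurements:

```python
def copilot_net_value(salary: float, productivity_gain: float, tool_cost: float) -> float:
    # Value created = salary * fractional productivity gain, minus the tool's yearly cost.
    return salary * productivity_gain - tool_cost

# The conservative case from the text: $100k salary, 5% gain, $100/yr for Copilot.
print(copilot_net_value(100_000, 0.05, 100))  # → 4900.0
```

Even at half the commonly cited productivity gain, the tool's cost is a rounding error next to the value it adds, which is why the pricing rarely comes up as a criticism.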

πŸ‘ Copilot helps you write better software​

Copilot can be a great learning tool for junior developers, giving tech leads more time to focus on higher-level tasks. Ultimately leading to better software and happier developers.

Key Takeaway πŸ”‘β€‹

Copilot has enough intelligence to help you write better software.

This holds especially true for junior developers still learning the ropes. It can drastically make mundane tasks like documentation and testing easier, giving developers more time to focus on the bigger picture while maintaining a high standard of code quality. Multiplying this effect across an engineering team leads to a higher quality codebaseβ€”the ultimate dream for engineering leaders.

🧐 Copilot is like a calculator​

Copilot is a tool that can help you solve problems faster, but it is not a replacement for your brain. You still need to know how to solve problems, and you still need to know how to write code.

Key Takeaway πŸ”‘β€‹

Just as calculators enable mathematicians to solve problems more quickly, Copilot spares developers from focusing on repetitive tasks.

However, just like calculators, Copilot does not help you make sense of the problem. You still need to know how to solve problems, and you still need to know how to write code.

Conclusion

Opinions on Copilot vary; some see it as a blessing, while others regard it as a curse. For its proponents, Copilot is a valuable tool that enhances coding speed and quality. However, critics argue it introduces more issues than it resolves.

I suspect that the complexity of a task makes a big difference in the quality of output. Working on tasks that require more context will inevitably lead to worse results. Yet, when viewed simply as a tool to handle the mundane aspects of coding, Copilot reveals its potential.

I Reviewed 1,000s of Opinions on Serverless

Β· 10 min read

From DHH shunning serverless and Ahrefs saving millions by not using a cloud provider at all, to Amazon raining fire on their own serverless product, serverless has recently faced significant scrutiny.

But still, everyone and their pet goldfish seem to be creating a serverless runtime (see Bun, Deno, Pydantic, Cloudflare, Vercel, Serverless, Neon, Planetscale, Xata, FaunaDB, Convex, Supabase, Hasura, Banana, and literally hundreds more). One research paper from Berkeley even claimed:

Serverless computing will become the default computing paradigm of the Cloud Era, largely replacing serverful computing and thereby bringing closure to the Client-Server Era.

-- Cloud Programming Simplified: A Berkeley View on Serverless Computing

Is it all hype? Is there real 100% objective merit to it? Where does serverless excel? Where do the trade-offs make sense?

To understand how developers are receiving serverless, I went to where developers live: Reddit, Twitter, Hacker News, and YouTube. I parsed 1,000s of discussions and synthesized my findings in this article, striving to present only thought-provoking opinions.

Funnel for gathering thought-provoking opinions

Next, I transcribed these discussions onto a whiteboard, organizing them into "Pro Serverless," "Anti Serverless", or "Neutral" categories, and then clustering them into distinct opinions. Each section in this post showcases an opinion while referencing pertinent discussions.

FigJam I used to organize perspectives

Anti-Serverless Opinions​

Giant software companies such as Shopify, GitHub, and Stack Overflow have achieved new heights using tried-and-true frameworks like Ruby on Rails. However, serverless presents an intriguing new paradigm that promises to reduce costs, accelerate development, and eliminate the need for maintenance. And as with any technological shift, there will always be skeptics.

Opinion: Serverless is a performance and financial hazard​

Key Takeaway πŸ”‘β€‹

One of the most vocal criticisms of serverless computing is the unpredictability of its costs and the latency associated with cold starts. While cloud providers have made significant improvements in optimizing serverless runtimes over time, these issues remain a significant concern for many developers.

Additionally, serverless introduces a new paradigm that brings its own set of challenges when building complex applications, particularly those requiring multiple services to communicate with each other. This is already a common problem with microservices, but serverless further complicates the issue by forcing developers to work within a stateless and I/O-bound compute model.

These latencies can have real financial consequences; Amazon found that every 100ms of latency cost them 1% in sales. Moreover, without proper billing safeguards in place, serverless costs can spiral out of control, potentially leading to the infamous situation of a startup sinking under its cloud bill.
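As a rough illustration of that Amazon figure, and assuming the 1%-per-100ms relationship extrapolates linearly (a big simplification; the store size here is hypothetical), the cost of added latency scales like this:

```python
def latency_sales_impact(annual_sales: float, added_latency_ms: float,
                         loss_per_100ms: float = 0.01) -> float:
    # Linear extrapolation of the oft-cited "every 100 ms costs 1% of sales".
    return annual_sales * (added_latency_ms / 100) * loss_per_100ms

# Hypothetical store doing $10M/yr whose cold starts add 300 ms per request:
print(latency_sales_impact(10_000_000, 300))  # roughly $300,000/yr in lost sales
```

Numbers like these are why cold starts are treated as a business problem, not just an engineering annoyance.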

Encountering any of these issues could understandably leave a sour impression and be a compelling reason to abandon serverless in favor of a traditional VPS.

Opinion: Serverless is a fad​

Key Takeaway πŸ”‘β€‹

Serverless is a fad, and the hype will fade.

Kudos to AWS's marketing team, as they successfully persuaded thousands of CTOs to stake their entire technology stack on a contentious new paradigm for building applications.

Some even go as far as to say serverless is "dangerous" or "wrong". In some ways, this viewpoint is not exaggerated. If cloud providers were not striving to lock everyone into their products to capture market share and boost revenue, what kind of corporate entities would they be?

Early adopters and evangelists did a great job of highlighting 10x features and pushing the cloud computing agenda. But always be wary of a technology that requires developers to acquire a new set of skills and tools, potentially wasting developers' time on a low-demand technology. Engineering teams should exercise caution when betting the farm on serverless, as it may lead to vendor lock-in. When things go wrong, good luck with troubleshooting and refactoring!

At the end of the day, serverless represents a substantial investment that could arguably be better allocated to other aspects of the business. Why not utilize long-running servers, which were already deployable and maintainable in the late 2000s?

So, does serverless really solve your problems, or are you just succumbing to the hype?

Pro Serverless Opinions​

Technologies that catch on usually have something good going for them, and serverless is no exception. Despite all the buzz and marketing blitz, there's some real enthusiasm for it. Some companies are saving time and money with serverless, making it a win-win. Some devs think serverless is the new must-have tool in the toolbox.

Opinion: Serverless accelerates the pace of feature development​

Key Takeaway πŸ”‘β€‹

Counter-intuitively, serverless computing appeals more to early product development than to enterprise products, because of the speed at which features can be developed. The virtually negligible time and cost required to provision cloud computing resources make serverless computing particularly attractive to hobbyists for their projects.

During the early stages of building a product, the most critical factor to consider when designing your system is the pace of development. Concerns about scalability fall far behind developer experience, as they are not yet relevant issues. In this context, serverless computing provides a compelling value proposition.

In light of this, why would anyone waste time setting up their own server? Starting with serverless computing is a logical choice. If cost or speed issues arise, other options can be considered at that time.

Opinion: Serverless can be outstanding when implemented correctly​

Key Takeaway πŸ”‘β€‹

For those who have successfully adopted serverless and lived to share their experiences, the enthusiasm is palpable.

It appears that building on serverless from first principles can yield outstanding results. Beyond the marketing hype, the true benefits of serverless become evident. Maybe as the Berkeley researchers predicted, maintaining your own server is becoming a thing of the past. With serverless, you can save money, reduce development time, and cut maintenance costsβ€”a triple win.

Moreover, as serverless offerings like storage, background jobs, and databases continue to improve, the ecosystem will support the construction of increasingly complex apps, while still upholding the promise of serverless.

If you can navigate the downsides of serverless, you can create a product with an infrastructure that feels almost effortless. Perhaps the naysayers' tales are louder than the truth.

Neutral Opinions​

I believe it is crucial to emphasize neutral viewpoints. In my view, these tend to be the most truthful because they recognize the advantages and disadvantages of each approach. They are also the opinions least commonly expressed, as many developers tend to be set in their ways.

Opinion: Serverless offers genuine benefits for specific use cases, but it is often misused or applied inappropriately​

Key Takeaway πŸ”‘β€‹

No technology is a silver bullet, and serverless is no exception.

Every technological decision is about choosing the right tool for the job. Serverless has some distinct trade-offs that need to be understood before it's adopted. Conduct thorough research on compute density, single-responsibility microservices, and performance requirements. Once you do, you'll see that serverless can offer immediate and tangible value. Recognize that serverless is not a replacement, but an alternative.

Whenever you hear someone criticize serverless, be wary of the problems they encountered. Were they design problems? Were there clear misuses of serverless? Serverless is not a panacea.

Conclusion​

The anticlimactic conclusion? As always, it depends.

Though, I am more convinced that developers should strongly consider using serverless during the early stages of their SDLC. I previously built an application exclusively using serverless but was burned by lop-sided unit economics1. In retrospect, I can attribute that failure to not considering the downsides of serverless before adopting it.

That being said, I have also grown accustomed to Render for fast-paced development, and so far, I have no complaints. However, as I am always striving to become a 10x engineer, I will consider adding serverless to my toolbox.

Footnotes​

  1. I built a Shopify App that gave shop owners pixel-perfect replays of customers navigating their online store. I stored session data (using rrweb) in S3 and processed events using lambda. I ended up operating at a loss until I shut it down. ↩