This edition of Adventures in Nodeland was put together in Schiphol, on my way to Open Source Summit North America 2023, where I would participate in a panel on creating Open Source communities, a topic I have spoken about before.
This edition includes my commentary on a recent article from the AWS team, many releases, and a couple of exciting articles.
If you are developing services on the Internet, you should read the story of how Amazon Prime Video moved to a monolith in "Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%". It's all the rage on tech Twitter, and rightly so.
Amazon has been at the forefront of cloud innovation for the last two decades. Amazon Web Services (AWS) is the centerpiece of the innovation of many established companies and startups. The Prime Video team's initial approach was ubiquitous in the industry: they built it on top of AWS Lambda.
We designed our initial solution as a distributed system using serverless components (for example, AWS Step Functions or AWS Lambda), which was a good choice for building the service quickly. In theory, this would allow us to scale each service component independently. However, the way we used some components caused us to hit a hard scaling limit at around 5% of the expected load. Also, the overall cost of all the building blocks was too high to accept the solution at a large scale.
I have seen many talented teams stumble into similar issues with AWS Lambda; in some cases the costs outweigh the benefits, and a hybrid approach is needed. I recommend that teams develop their products as modular monoliths and break them up whenever needed. An evolvable architecture is the key to designing a resilient system.
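The modular-monolith idea can be sketched in plain Node.js. This is a minimal illustration with hypothetical module names (users, orders), not code from any of the projects mentioned: each domain lives behind its own interface inside one process, so extracting one into a separate service later means swapping a function call for a network call rather than rewriting the module.

```javascript
// Minimal modular-monolith sketch (illustrative names, plain Node.js core).

// "users" module: owns its data, exposes only functions
const users = {
  list () {
    return ['alice', 'bob']
  },
}

// "orders" module: depends on users only through its public interface
const orders = {
  listFor (user) {
    if (!users.list().includes(user)) throw new Error('unknown user')
    return [{ id: 1, user }]
  },
}

// The monolith wires modules to routes in a single process. Breaking
// "orders" out later means replacing its entry here with a remote call,
// while the module itself stays untouched.
const routes = {
  'GET /users': () => users.list(),
  'GET /orders/alice': () => orders.listFor('alice'),
}

console.log(routes['GET /users']()) // → [ 'alice', 'bob' ]
```

The key property is that module boundaries are enforced from day one, which keeps the "break it up whenever needed" option cheap.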
Adrian Cockcroft (who designed the microservices architecture at Netflix) wrote a very interesting commentary, which you should read too: "So many bad takes — What is there to learn from the Prime Video microservices to monolith story".
The Prime Video team had followed a path I call Serverless First, where the first try at building something is put together with Step Functions and Lambda calls. They state in the blog that this was quick to build, which is the point. [...] If you built it as a microservice to start with, it would probably take longer (especially as you have to make lots of decisions about how to build and run it), and be less able to iterate as you figure out exactly what you are trying to build.
What if building a long-running microservice would not require a long list of decisions? I've been working on this problem for almost a year, and we'll announce something new soon.
In last week's stream, we made Platformatic DB cache the schema information on disk. This way, we can avoid deriving the schema at start-up, dropping the cold start delay. This change was done live between two live stream sessions on the 4th and 5th of May, 2023. You can take a look at the final result at https://github.com/platformatic/platformatic/pull/956
Added the id_token property to the token type definition, and more clarification on how to log into Twitch.