Rami's Readings #100 - 🎉 10 AI Lessons From 100 Newsletters 🎉
Celebrating 100 newsletters with lessons learned, Apéro & Intellect, the latest on AI, LLMs, Anthropic, and more.
Welcome to Rami’s Readings #100 - a weekly digest of interesting articles, papers, videos, and X threads from my various sources across the Internet. Expect a list of reads covering AI, technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
🎉 Celebrating 100 Newsletters 🎉
When I started this newsletter in January 2023, I had no idea where it would lead. I thought maybe a few curious minds would tune in. But here we are, 100 editions later with a community I couldn't have imagined. ❤️
First, a massive THANK YOU! Whether you've been here since #1 or just joined last week, your engagement, insights, and shares fuel this newsletter. Every reply and testimonial on LinkedIn pushes me to make this newsletter even better.
Who reads this newsletter? It's humbling to see leaders from Microsoft, Google, Amazon, S&P, and countless startups, VCs, and thought leaders tuning in every week. I am always surprised by who signs up. I'm sorry if you catch me snooping on your LinkedIn after you subscribe; I'm just so curious to learn more about you. It's what inspired Apéro & Intellect.
What started as a simple idea has turned into something way bigger. I never expected the reach, influence, and sheer momentum this would gain. If there's one lesson I've learned, it's this: consistency compounds.
Marking #100 with a Special Apéro & Intellect
To mark #100, I'm planning an Apéro & Intellect gathering this Thursday at 7pm: think drinks, deep discussions, and a toast to the next 100. If you're in Seattle, let's make it happen. Reply if you're interested!
Help Me Get to 1,000 Subscribers! 🚀
We’re at a major milestone, but I want to set my sights on the next big goal: growing this community to 1,000 engaged subscribers!
If you’ve found value in these insights, here’s how you can help:
✅ Share this newsletter with a friend, colleague, or your network—especially those who should be paying attention to AI and large language models.
✅ Forward this email or use the referral option to win a prize!
✅ Tell them why you read it! A personal recommendation goes a long way.
I believe the best ideas spread through communities, not algorithms—and you, my readers, are the best ambassadors. Let’s grow this together.
And as a thank you, I’m planning a special subscriber-only event once we hit 1,000. More details soon! 😉
🤖 10 AI Lessons From 100 Newsletters
After writing 100 issues of this newsletter, I’ve learned a lot about technology and how people are thinking about it. I am privileged to put much of what I’ve learned into practice. AI & LLMs aren’t just a passing trend; they are a fundamental shift in how we work, create, and innovate. But with all the noise, the real challenge is separating hype from reality and figuring out what matters. To mark this milestone, I’m sharing 10 key lessons I’ve learned. Let’s dive in.
Lesson #1: Developer Experience Is Driving Growth
Across the entire AI stack, the models, tools, and libraries gaining the most mindshare prioritize the developer experience of Applied AI engineers. Anthropic open-sourced MCP. LangChain and LlamaIndex made it easy to get started with RAG. HuggingFace's open-source-first approach simplifies finding and distributing models. Together.ai is growing rapidly with an emphasis on easy deployments. Ollama and LMStudio are my go-to tools for running models on my machines. Make it easy for developers to adopt your solution.
Lesson #2: Prompt Engineering Is Superior to Fine-Tuning
The reality is that prompt engineering enables much faster iteration than fine-tuning. While services like Unsloth are simplifying fine-tuning, prompt engineering still offers a superior developer experience (Lesson #1). It’s far easier to learn how to communicate effectively with existing models than to retrain them. My favorite resource for learning how to do effective prompt engineering is DAIR.AI’s Prompt Engineering Guide.
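The iteration-speed advantage is easy to see in practice. Here's a minimal few-shot prompting sketch (the sentiment task and examples are hypothetical): steering the model means editing a string, which takes seconds, versus hours or days for a fine-tuning run.

```python
# Few-shot prompting: steer an existing model with in-context
# examples instead of retraining its weights.
def build_sentiment_prompt(review: str) -> str:
    """Assemble a few-shot classification prompt.

    Iterating on behavior = editing these examples, not
    launching a fine-tuning job.
    """
    examples = [
        ("The battery lasts all day, love it.", "positive"),
        ("Screen cracked after a week.", "negative"),
    ]
    shots = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return f"{shots}\nReview: {review}\nSentiment:"

prompt = build_sentiment_prompt("Shipping was slow but the product is great.")
print(prompt)
```

The resulting string can be sent to any chat or completion endpoint; swapping the examples instantly changes the model's behavior.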
Lesson #3: AI Is Getting Cheaper Faster Than Anticipated
I can’t believe how cheap it has become to run AI models, both locally and in the cloud. Two years ago, hosting and running models required massive CapEx investments in GPUs. Today, we’re running models that are significantly more powerful at a fraction of the previous cost per token.
The cost reduction isn’t only being driven by competition among cloud and model providers, but also by fundamental engineering breakthroughs under the hood: Flash Attention, KV Cache optimizations, and all the magic DeepSeek did.
Speaking of DeepSeek, I covered DeepSeek's R1 multiple times in this newsletter. While it shocked the broader stock market, the reality is that costs have been declining for years. In China, DeepSeek-V2 triggered a staggering 90% price drop, which I covered in #64.
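The KV-cache optimization mentioned above is one of those under-the-hood wins, and the idea is simple to illustrate. This toy sketch (not a real transformer, just the caching pattern, with a counter standing in for the expensive projection) shows why per-token compute drops from quadratic to linear in sequence length:

```python
# Toy illustration of KV caching: each token's key/value
# projection is computed once and reused at every later
# decoding step, instead of being recomputed from scratch.

compute_calls = 0  # counts calls to the "expensive" projection

def project_kv(token: str) -> tuple[str, str]:
    """Stand-in for the expensive per-token K/V projection."""
    global compute_calls
    compute_calls += 1
    return (f"K({token})", f"V({token})")

def generate_naive(tokens: list[str]) -> None:
    # Without a cache: step t recomputes K/V for all t prior tokens.
    for t in range(1, len(tokens) + 1):
        for tok in tokens[:t]:
            project_kv(tok)

def generate_cached(tokens: list[str]) -> None:
    # With a cache: each position's K/V is computed exactly once.
    cache: dict[int, tuple[str, str]] = {}
    for t in range(1, len(tokens) + 1):
        for i, tok in enumerate(tokens[:t]):
            if i not in cache:
                cache[i] = project_kv(tok)

tokens = ["the", "cat", "sat", "on", "mat"]
generate_naive(tokens)
naive_calls = compute_calls      # 1+2+3+4+5 = 15 (quadratic growth)
compute_calls = 0
generate_cached(tokens)
cached_calls = compute_calls     # 5 (linear growth)
print(naive_calls, cached_calls)  # → 15 5
```

Real implementations cache tensors on the GPU rather than strings in a dict, but the cost curve is the same: that is a big part of why per-token prices keep falling.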
Lesson #4: Users Aren’t Replacing Themselves With AI, They’re Amplifying Their Skills
Study after study shows that AI-driven productivity gains are benefiting workers at all skill levels—as long as they integrate AI into their workflows. Don't buy into the hype that AI will replace you. The real shift is that AI amplifies productivity for those who know how to use it effectively. Form your own opinion through hands-on experience with each of the models and interfaces.
Lesson #5: Knowing When to Automate
Lesson #4 naturally leads to Lesson #5: Knowing When to Automate. In a talk I gave at MIT, I broke down the key variables of LLM and AI systems: cost (yes, even after the DeepSeek-driven price drops), quality of data and output, system speed, and delivery time all play a role in deciding whether automation makes sense. The reality is that every AI system requires tradeoffs between these variables. Not everything should be automated. The key is to evaluate whether the benefits outweigh the costs for your specific problem.
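One way to make that cost-benefit evaluation concrete is a simple break-even estimate: compare the per-task cost of the manual workflow against the per-task cost of an LLM call, amortized over the one-time cost of building the automation. All numbers below are hypothetical placeholders, not benchmarks.

```python
def automation_break_even(
    build_cost: float,          # one-time engineering cost to automate
    manual_cost_per_task: float,
    ai_cost_per_task: float,    # e.g. tokens * price-per-token + review time
) -> float:
    """Number of tasks after which automation pays for itself."""
    savings = manual_cost_per_task - ai_cost_per_task
    if savings <= 0:
        return float("inf")  # automation never pays off on cost alone
    return build_cost / savings

# Hypothetical numbers: $5,000 build cost, $2.00 per manual task,
# $0.10 per AI-assisted task.
tasks_needed = automation_break_even(5000, 2.00, 0.10)
print(round(tasks_needed))  # → 2632
```

If your team handles that many tasks in a month, automate; if it takes years to get there, the manual workflow may still win, before even accounting for quality and delivery-time tradeoffs.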
Lesson #6: Own Your Data and Benchmarks
"Data is the new oil" has been a popular analogy for years, but in today’s AI and LLM world: high-quality, structured, and proprietary data is the real competitive advantage. Raw data alone isn’t enough. Data requires constant cleaning, restructuring, re-engineering, and maintenance to retain its value. AI models are only as good as the data they are trained on, but owning your data also allows you to create your own benchmarks to evaluate new models that I constantly publicize in this newsletter (Sorry for making you work so often 😂). Public benchmarks are useful, but custom benchmarks tailored to your use cases are critical.
Lesson #7: Companies That Ignore AI Will Be Outpaced
AI-native startups are achieving revenue-to-headcount ratios that have never been seen before. There’s even a leaderboard tracking these companies.
Midjourney makes $12,500,000 per employee.
Cursor makes $5,000,000 per employee.
Startups are scaling on internal revenue, reducing their dependence on VC fundraising, and are no longer throttled by funding cycles. This shift means they can move faster, stay lean, and outmaneuver slower incumbents weighed down by debt. Companies that ignore AI won't just fall behind; they'll be outpaced entirely by nimble, AI-powered competitors. A System Dynamics analysis of your organization is a good place to start making the most of AI.
Lesson #8: Engineering Fundamentals Rule
My first programming language was C++, and for half my career, it felt like maintaining that knowledge was a waste of time as Python and JavaScript dominated most projects. But today, I am seeing a clear pivot back to low-level, high-performance languages like C++ and Rust. Projects like llama.cpp & MLX, and platforms like CUDA, are making low-level languages a priority for developers. Rust is even replacing JavaScript in JavaScript build tools (see SWC)! The shift is clear. The software engineering fundamentals of performance, memory efficiency, and parallelism matter more than ever in modern development.
Lesson #9: Edge AI Is The Future
As I shared in #95, open-source LLMs optimized to run on consumer-grade hardware are becoming the norm. Ollama and LMStudio make it easy for any developer to build local AI apps. At CES 2025, Edge AI was all the buzz, with nearly every OEM marketing local AI-powered hardware. Apple's M4 chip runs state-of-the-art LLMs on battery power. In 2025, Edge AI is the future.
Lesson #10: Don’t Stop Reading and Learning
Business environments and AI are evolving at a frenetic pace, and the engineers, founders, and builders who stay ahead are the ones who keep adapting. The fundamentals—data structures, algorithms, and systems engineering—still rule, but the tools and best practices are shifting fast. Stay curious. Experiment. Ship. Iterate. As I shared in #99, I will do my best to keep reading and learning, and bring you, my beloved community, along for the ride!
Andiamo!
Celebrate with me today by grabbing a cupcake!
Signing off from Redmond, WA.