Rami's Readings #97 - AI Summits in Paris & China
The latest on AI, LLMs, LM2, AutoRAG, AI Summits in Paris and China, MongoDB, and more.
Welcome to Rami’s Readings #97 - a weekly digest of interesting articles, papers, videos, and X threads from my various sources across the Internet. Expect a list of reads covering AI, technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
🤖 AI Reads
LM2: Large Memory Models
Notes: An interesting idea for adding memory without modifying the vanilla Transformer architecture. You can find the code here.
MiniMax-01: Scaling Foundation Models with Lightning Attention
Notes: A new mixture-of-experts model with 32 experts that can handle context windows of 4 million tokens, from MiniMax in China. Code here.
Titans: Learning to Memorize at Test Time
Notes: New family of architectures from Google.
Foundations of Large Language Models
Notes: New book from Northeastern University about LLMs. Yet another book on my reading list.
Marker-Inc-Korea / AutoRAG
Notes: An open-source framework for RAG evaluation and optimization from South Korea.
💼 Business Reads
Macron Unveils $112B AI Investment
Notes: Vive la France!
Great Summary of AI Action Summit Conference
Notes: I really need to go to Paris again…
China Invites Jack Ma, DeepSeek Founder to Meet Top Leaders
Notes: New and old players.
DoubleLine Asks Whether Microsoft Debt Is Safer Than Treasuries
Notes: 😂
🔀 Other Reads
MongoDB’s Moat Breaking?
Notes: I still instinctively reach for MongoDB for my personal projects, but FerretDB is interesting.
Undergraduate Upends a 40-Year-Old Data Science Conjecture
Notes: Hash tables are critical data structures! Any improvement will have profound downstream performance effects.
That’s all for this week. Signing off from Redmond.