Rami's Readings #16
Focus on AI, MILA TechAide, Open Source Chat Models, AI Regulation, LLMs, CarPlay, and more.
Welcome to Rami’s Readings #16 - a weekly digest of interesting articles, videos, and Twitter threads from my various sources across the Internet. Expect a list of reads covering technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
Are you enjoying this newsletter?
If yes, I have a favor to ask you. If you know someone else who would benefit, please forward or share it with them!
If you received this newsletter from a friend, subscribe here to get the next one.
Montréal has become notorious for its unpredictable weather patterns, and this week was no exception. We experienced freezing rain and summer heat in a matter of days. However, one thing remains predictable in this city amidst all the weather fluctuations: AI research remains top-notch. I attended the MILA TechAide 2023 conference last Friday. The conference was a resounding success, raised over $100,000 for Centraide of Greater Montreal, and brought together researchers from various fields to exchange ideas and insights about the state of the art and future of AI.
Three things I took away from the conference:
Length generalization is hard, even for deep learning foundation models. What this means: we will continue to see hallucinations in very long texts from LLMs for a while. This isn’t new information for practitioners, but Samy Bengio confirmed it.
👉🏼 Recent Advances in Machine Learning Research at Apple 🎤 Samy Bengio
Parametric adaptation to a task can work better than in-context learning and may be transferable between models. What this means: prompt engineering may not be the be-all and end-all (obviously!) when it comes to extracting the best performance from a model and transferring those “instructions” to the next model iteration. However, it is hard to beat the simple developer experience of minimally rewriting prompts for each iteration.
👉🏼 Adapter Universe 🎤 Alessandro Sordoni
#ML model documentation is a massive challenge, even with the increased adoption of Model Cards. What this means: data provenance and training details will continue to be under-documented and sketchy. What I heard at the conference is that model card adoption has increased, but many of the card fields are left empty or only minimally filled in. Documentation is important, people!
👉🏼 Aspirations and Practice of ML Model Documentation 🎤 Jin L.C. Guo
🤖 AI Reads
This project brings large language models and LLM-based chatbots to web browsers. Everything runs inside the browser with no server support, accelerated with WebGPU.
Image generation model built for enterprise clients that excels at photorealism.
Notes: Tim O’Reilly wrote this 🔥 piece.
Notes: From friend Abhishek Gupta.
Notes: Personally, I have not had a relaxing week since October 2022.
Notes: 💯👌💪🙌 ⬇️
It’s easy to make something cool with LLMs, but very hard to make something production-ready with them. LLM limitations are exacerbated by a lack of engineering rigor in prompt engineering, partially due to the ambiguous nature of natural language, and partially due to the nascent state of the field.
💼 Business Reads
Notes: I don’t think I am ever going to buy a GM car if I can avoid it.
A friend published his annual letter today. The Craft Podcast is great!
That is all for this week. Signing off from Montréal.
I wrote a disclosure for this newsletter in #7. Please consider reading it.