Rami's Readings #16
Focus on AI, MILA TechAide, Open Source Chat Models, AI Regulation, LLMs, CarPlay, and more.
Welcome to Rami’s Readings #16 - a weekly digest of interesting articles, videos, and Twitter threads from my various sources across the Internet. Expect a list of reads covering technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
Are you enjoying this newsletter?
If yes, I have a favor to ask you. If you know someone else who would benefit, please forward or share it with them!
If you received this newsletter from a friend, subscribe here to get the next one.
Montréal has become notorious for its unpredictable weather patterns, and this week was no exception. We experienced freezing rain and summer heat in a matter of days. However, one thing remains predictable in this city amidst all the weather fluctuations: AI research remains top-notch. I attended the MILA TechAide 2023 conference last Friday. The conference was a resounding success, raised over $100,000 for Centraide of Greater Montreal, and brought together researchers from various fields to exchange ideas and insights about the state of the art and future of AI.
Three things I took away from the conference:
Length generalization is hard even with foundation deep learning models. What this means: we will continue to see hallucinations in very long texts from LLMs for a while. This isn’t new information for practitioners, but Samy Bengio confirmed it.
👉🏼 Recent Advances in Machine Learning Research at Apple 🎤 Samy Bengio
Parametric adaptation to a task can work better than in-context learning and may be transferable between models. What this means: prompt engineering may not be the be-all and end-all (obviously!) when it comes to extracting the best performance from a model and being able to transfer those “instructions” to the next model iteration; see the sketch after this list. However, it is hard to beat the simple developer experience of minimally rewriting prompts for each iteration.
👉🏼 Adapter Universe 🎤 Alessandro Sordoni
#ML model documentation is a massive challenge, even with the increased adoption of Model Cards. What this means: data provenance and training details will continue to be under-documented and sketchy. What I heard at the conference is that model card adoption has increased, but many of the card fields are left empty or only minimally filled in. Documentation is important, people!
👉🏼 Aspirations and Practice of ML Model Documentation 🎤 Jin L.C. Guo
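For readers unfamiliar with parametric adaptation: the canonical example from this line of work is the bottleneck adapter, a tiny trainable module slotted into a frozen base model, so the task “instructions” live in a handful of parameters instead of the prompt. Here is a minimal sketch, assuming a PyTorch setup with a 768-dimensional hidden state; it is illustrative, not code from the talk.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module inserted after a frozen transformer sublayer."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)  # project down
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)    # project back up
        nn.init.zeros_(self.up.weight)  # start near-identity so training is stable
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter learns only a small task-specific correction.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Usage: freeze the base model's weights and train only the adapters.
adapter = BottleneckAdapter(hidden_size=768)
x = torch.randn(2, 16, 768)   # (batch, sequence length, hidden size)
print(adapter(x).shape)       # torch.Size([2, 16, 768])
```

Because the learned weights are a small, self-contained artifact, they can in principle be carried over or cheaply re-trained when the base model changes, which is what makes the transferability claim interesting.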
🤖 AI Reads
OpenAssistant Released - Open source chat model
Notes: Model on HuggingFace. Demo.
WebLLM - Run a chatbot directly in the browser
Notes: Edge AI is a reality. Code. Demo.
This project brings large language models and LLM-based chatbots to web browsers. Everything runs inside the browser with no server support, accelerated with WebGPU.
Stability AI announces Stable Diffusion XL
An image-generation model that excels at photorealism, built for enterprise clients.
Facebook open sources Animated Drawings
You Can’t Regulate What You Don’t Understand
Notes: Tim O’Reilly wrote this 🔥 piece.
The CEO’s Guide to the Generative AI Revolution
Notes: From my friend Abhishek Gupta.
China Proposes To Regulate AI-Generated Content Amid ChatGPT Craze
Behind the curtain: what it feels like to work in AI right now
Notes: Personally, I have not had a relaxing week since October 2022.
Building LLM applications for production
Notes: 💯👌💪🙌 ⬇️
It’s easy to make something cool with LLMs, but very hard to make something production-ready with them. LLM limitations are exacerbated by a lack of engineering rigor in prompt engineering, partially due to the ambiguous nature of natural languages, and partially due to the nascent nature of the field.
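One concrete way to add that missing rigor is to treat prompts like code: pin them to versions and run regression tests on known inputs whenever the prompt or the model changes. This is my sketch, not the article’s code; `call_llm` is a hypothetical stand-in for whatever provider API you use, hard-coded here so the sketch runs as-is.

```python
# Minimal sketch of a prompt regression test.
PROMPT_V2 = (
    "Extract the city name from the sentence. Reply with the city only.\n\n{text}"
)

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real chat/completions call in practice.
    return "Montréal" if "Montréal" in prompt else "Berlin"

def test_city_extraction() -> None:
    cases = [
        ("I flew from Montréal to Tokyo last week.", "Montréal"),
        ("The conference was held in Berlin.", "Berlin"),
    ]
    for text, expected in cases:
        output = call_llm(PROMPT_V2.format(text=text)).strip()
        # Assert on the contract, not exact phrasing: LLM outputs drift between runs.
        assert expected in output, f"Prompt v2 regressed on: {text!r}"

test_city_extraction()
print("All prompt regression checks passed.")
```

The point is less the specific assertions than making prompt changes fail loudly instead of silently.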
Free Dolly: Introducing the World's First Open Instruction-Tuned LLM
[2022] Self-Conditioning Pre-Trained Language Models
💼 Business Reads
★ GM, CarPlay, and iPhones
Notes: I don’t think I am ever going to buy a GM car if I can avoid it.
Katie Cotton, Guardian of the Apple Brand for 18 Years, Dies
A friend published his annual letter today. The Craft Podcast is great!

That is all for this week. Signing off from Montréal.
I wrote a disclosure for this newsletter in #7. Please consider reading it.