Rami’s Readings

Rami's Readings #16

Focus on AI, MILA TechAide, Open Source Chat Models, AI Regulation, LLMs, CarPlay, and more.

Rami Sayar
Apr 16, 2023

Welcome to Rami’s Readings #16 - a weekly digest of interesting articles, videos, and Twitter threads from my various sources across the Internet. Expect a list of reads covering technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.


Are you enjoying this newsletter?

If yes, I have a favor to ask you. If you know someone else who would benefit, please forward or share it with them!

Share Rami’s Readings

If you received this newsletter from a friend, subscribe here to get the next one.


Montréal has become notorious for its unpredictable weather patterns, and this week was no exception. We experienced freezing rain and summer heat in a matter of days. However, one thing remains predictable in this city amidst all the weather fluctuations: AI research remains top-notch. I attended the MILA TechAide 2023 conference last Friday. The conference was a resounding success, raised over $100,000 for Centraide of Greater Montreal, and brought together researchers from various fields to exchange ideas and insights about the state of the art and future of AI.

Three things I took away from the conference:

  1. Length generalization is hard, even for deep learning foundation models. What this means: we will continue to see hallucinations in very long texts from LLMs for a while. This isn’t new information for practitioners, but Samy Bengio confirmed it.

    👉🏼 Recent Advances in Machine Learning Research at Apple 🎤 Samy Bengio

  2. Parametric adaptation to a task can work better than in-context learning and may be transferable between models. What this means: prompt engineering may not be the be-all and end-all (obviously!) when it comes to extracting the best performance from a model and transferring those “instructions” to the next model iteration. However, it is hard to beat the simple developer experience of minimally rewriting prompts for each iteration.

    👉🏼 Adapter Universe 🎤 Alessandro Sordini

  3. #ML model documentation is a massive challenge, even with the increased adoption of Model Cards. What this means: data provenance and training will continue to be under-documented and sketchy. What I heard at the conference is that model card adoption has increased, but many of the card fields are left empty or minimally filled in. Documentation is important, people!

    👉🏼 Aspirations and Practice of ML Model Documentation 🎤 Jin L.C. Guo
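For context on point 3, a model card in the Hugging Face style is a README with YAML metadata on top of prose sections (intended use, training data provenance, limitations, evaluation). The fragment below is illustrative only; field names follow common conventions and all values are placeholders, but it shows the kind of entries that reportedly get left empty:

```yaml
# Illustrative model-card metadata (YAML front matter of a README.md).
# Values are placeholders, not a real model.
license: apache-2.0
language:
  - en
datasets:
  - example-org/example-corpus  # hypothetical dataset identifier
metrics:
  - accuracy
```

Filling in the metadata is the easy part; the prose sections on data provenance and limitations are where the under-documentation the conference discussed tends to happen.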
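The parametric-adaptation idea from point 2 can be illustrated with a toy numeric sketch (this is my own illustration, not the speaker’s method or a real LLM): keep a frozen “base model” untouched and train only a tiny adapter parameter for the downstream task. The learned parameter, unlike a prompt, is a portable artifact.

```python
# Toy sketch of parametric adaptation: a frozen base model plus a small
# trainable "adapter" parameter, trained with plain gradient descent.

def base_model(x):
    # Frozen pretrained function; its parameters never change.
    return 2.0 * x

class Adapter:
    # A single learnable offset trained for a downstream task.
    def __init__(self):
        self.w = 0.0

    def __call__(self, x):
        return base_model(x) + self.w

def train_adapter(adapter, data, lr=0.1, steps=200):
    for _ in range(steps):
        for x, y in data:
            pred = adapter(x)
            grad = 2.0 * (pred - y)  # d/dw of squared error (pred - y)**2
            adapter.w -= lr * grad
    return adapter

# Downstream task: outputs shifted by +3 relative to the base model.
data = [(x, 2.0 * x + 3.0) for x in range(5)]
adapter = train_adapter(Adapter(), data)
print(round(adapter.w, 2))  # converges to ≈ 3.0
```

The contrast with in-context learning: here the task knowledge lives in `adapter.w`, a parameter you can save and reattach, rather than in prompt text you rewrite for every model iteration.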

🤖 AI Reads

OpenAssistant Released - Open source chat model

Notes: Model on HuggingFace. Demo.

WebLLM - Run a chatbot directly in browser

Notes: Edge AI is a reality. Code. Demo.

This project brings large language models and LLM-based chatbots to web browsers. Everything runs inside the browser with no server support, accelerated with WebGPU.

Stability AI announces Stable Diffusion XL

An image generation model built for enterprise clients that excels at photorealism.

Facebook open sources Animated Drawings

You Can’t Regulate What You Don’t Understand

Notes: Tim O’Reilly wrote this 🔥 piece.

The CEO’s Guide to the Generative AI Revolution

Notes: From friend Abhishek Gupta of The AI Ethics Brief.

China Proposes To Regulate AI-Generated Content Amid ChatGPT Craze

Behind the curtain: what it feels like to work in AI right now

Notes: Personally, I have not had a relaxing week since October 2022.

Building LLM applications for production

Notes: 💯👌💪🙌 ⬇️

It’s easy to make something cool with LLMs, but very hard to make something production-ready with them. LLM limitations are exacerbated by a lack of engineering rigor in prompt engineering, partially due to the ambiguous nature of natural languages, and partially due to the nascent nature of the field.

Free Dolly: Introducing the World's First Open Instruction-Tuned LLM

[2022] Self-Conditioning Pre-Trained Language Models

💼 Business Reads

★ GM, CarPlay, and iPhones

Notes: I don’t think I am ever going to buy a GM car if I can avoid it.

Katie Cotton, Guardian of the Apple Brand for 18 Years, Dies

YC’s Essential Startup Advice

Friend Daniel Tedesco published his annual letter today. The Craft Podcast is great!


That is all for this week. Signing off from Montréal.


I wrote a disclosure for this newsletter in #7. Please consider reading it.


Thanks for reading Rami’s Readings! Subscribe for free to receive new posts.
