Rami's Readings #54 - Announcing Apéro & Intellect #6 🍹 in Toronto!
Announcing Apéro & Intellect #6 in Toronto and the latest on AI, LLMs, Apple's Multimodal LLM, Robots, Crazy Rich Indians, the Seattle Freeze, and more.
Welcome to Rami’s Readings #54 - a weekly digest of interesting articles, papers, videos, and X threads from my various sources across the Internet. Expect a list of reads covering AI, technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
Apéro & Intellect Is Coming to Toronto!
Apéro & Intellect is a curated series of intimate gatherings on Artificial Intelligence, designed to stimulate both your mind and your palate. After receiving great feedback from the three pilots last year, I am expanding the event series this year to gather a slightly larger group of people. Many of you wanted more time with each other, so the event will now include fantastic food (FREE) to match the beverages.
I am pleased to announce that Apéro & Intellect #6 will take place in 🏙 Toronto, Canada 🍁 on Tuesday, April 9th at noon. If you would like to attend, please apply before March 22nd for priority consideration.
🤖 AI Reads
MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training
Notes: Apple just published this detailed, insightful doozy of a paper. Possibly the most important paper of the week!
OpenAI Powers a Robot That Can Hand People Food, Pick up Trash, Put Away Dishes, and More
Notes: The status update and demo video are cool.
Software Engineers Are Getting Closer to Finding out If AI Really Can Make Them Jobless (Devin)
Notes: Many, many threads on Twitter about Devin. Too early to tell how good it is, but the hype is palpable.
New Breakthrough Brings Matrix Multiplication Closer to Ideal
Notes: Beautiful diagrams and illustrations of matrix multiplication algorithms. These algorithms underpin AI, graphics, data science, etc.
Design Against AI: 2024
Notes: Part of the “Design in Tech” report series from John Maeda. Abridged version of his SXSW 2024 talk.
Simple and Scalable Strategies to Continually Pre-train Large Language Models
Notes: Paper from Mila. Go Montréal go!
Training Great LLMs Entirely from Ground up in the Wilderness as a Startup
Notes: “Less Principled, More Yolo” 🤣 Love that expression! On further thought, I believe this applies to my current role (UX at Bing) as well.
Introducing StarChat2
Notes: More coding LLMs.
Yi-34B-200K
Notes: Yi and 01.AI continue to impress me. New paper from this group.
In the "Needle-in-a-Haystack" test, the Yi-34B-200K's performance is improved by 10.5%, rising from 89.3% to an impressive 99.8%. We continue to pre-train the model on 5B tokens long-context data mixture and demonstrate a near-all-green performance.
Command-R: Retrieval Augmented Generation at Production Scale
Notes: From the Cohere team, new LLM targeting RAG for enterprises.
AgentKit: Rapidly Build High Quality Agent Apps
Notes: From BCG-X.
Guidance: Language for Controlling Large Language Models
Notes: Haven’t tried it yet, but worth investigating.
Former Twitter Engineers Are Building Particle, an AI-Powered News Reader, Backed by $4.4M
Notes: Neat!
🎨 Culture Reads
Inside the World of Crazy Rich Indians
Notes: I can confirm from first-hand experience that the distribution of wealth across the country is expanding, including to much smaller cities. I can’t wait to go back to India.
How Anti-Social Is Seattle? A Survey Comparing Big Cities
Notes: The Seattle Freeze is a real thing. I blame the grey and rainy weather.
That is all for this week. Signing off from Cambridge, MA.