Rami's Readings #47
The latest on AI, LLMs, Prompt Engineering, 01.AI, Edge AI, the Chinese Stock Market, Boeing, Libraries & Third places, and more.
Welcome to Rami’s Readings #47 - a weekly digest of interesting articles, papers, videos, and X (formerly known as Twitter) threads from my various sources across the Internet. Expect a list of reads covering AI, technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
It is a crazy time to be alive. On my flight back from MIT, I experimented to see if I could:
Write an entire Python app using a framework I haven’t used in 5 years.
Do it entirely without Wi-Fi.
Use an LLM fine-tuned for coding, under 7B parameters, as my compressed API documentation and code generator.
Use only an M1 MacBook Pro.
The answer was yes, with a few caveats[1]. I completed the app in under 3 hours with plenty of remaining flight time to watch the fantastic Masters of the Air series on Apple TV+. 🤯 Refer 5 subscribers, and I will help you set it up on your own M1.
🤖 AI Reads
Notes: Yes! A boon for research programs and open-source AI.
Notes: Nvidia is creating value for consumers through its software drivers. Soon enough, all web videos will have HDR. I expect Apple will follow fast with its M-series chips.
Apple's New 'boost' to Generative AI Flags a Very Different Approach to Its Competitors — On-device AI Support Could Set the iPhone 16 Apart
Notes: More evidence that Edge AI is happening!
Notes: I am a fan of the open-source Whisper model. Whisper is a speech recognition model, and it is amusing that a team is inverting it to build a text-to-speech model.
Notes: Fantastic working paper on how to get more diverse creative output from GPT-4 using Chain-of-Thought prompt engineering.
Notes: I can’t tell if this is sponsored content on Wired. Yi-34B is fantastic, but I wouldn’t say 01.AI is winning the open-source AI race when you have Mistral.AI releasing incredible models at the same pace. Mistral isn’t even mentioned in this article… Hmmm… Worth reading if you don’t know anything about Kai-Fu Lee.
Notes: Promising new model architecture still under research. Continuing the trend towards cheaper and faster inference.
💼 Business Reads
Notes: A reminder that the real estate market in China is much bigger than the stock market.
Notes: Not sure when we started referring to these companies as the magnificent seven, but it does remind me of the M7 business schools. #MIT #1525
Notes: Core competencies business ideology gone wrong. I can rant endlessly about Boeing but will spare you. Listen to your engineers!
Notes: We desperately need more Third places in the US.
That is all for this week. Signing off from Redmond, WA.
[1] You still need strong software development experience. I had to know which dependencies I would need and download them ahead of time. I frequently ran out of context space and had to break apart the code I fed in as context. I would say the model followed instructions about 50% of the time. It often claimed it had completed the whole task when it had only generated the first few lines, at which point additional prompting was needed to point out that the code was incomplete. The model also often generated deprecated code.
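The context-space workaround above (breaking apart code before feeding it in) can be sketched in a few lines of Python. This is my own minimal illustration, not the exact script from the flight: it greedily packs whole source lines into chunks under a character budget, so each chunk can be pasted into a small local model's prompt. The 6,000-character budget (roughly 1,500 tokens at the common rough estimate of ~4 characters per token) is an assumption; real usage would size it from the model's actual context window and tokenizer.

```python
# Sketch: split a large source file into chunks small enough to feed
# a small local LLM as context. Budget and token estimate are assumptions.

def chunk_source(source: str, budget_chars: int = 6000) -> list[str]:
    """Greedily pack whole lines into chunks under the character budget."""
    chunks: list[str] = []
    current: list[str] = []
    current_len = 0
    for line in source.splitlines(keepends=True):
        # Flush the current chunk before this line would push it over budget.
        if current and current_len + len(line) > budget_chars:
            chunks.append("".join(current))
            current, current_len = [], 0
        current.append(line)
        current_len += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

# Example: a large generated file split into promptable pieces.
code = "def f(x):\n    return x + 1\n" * 2000
pieces = chunk_source(code)
print(len(pieces), max(len(p) for p in pieces))
```

Splitting on line boundaries keeps each chunk syntactically readable to the model; a fancier version would split on function or class boundaries instead.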