Rami's Readings #18
The latest on AI anxiety, Edge AI, LLMs in the workplace, Careerism at Harvard, Philly's Best Cheesesteaks in Pakistan, and more.
Welcome to Rami’s Readings #18 - a weekly digest of interesting articles, videos, and Twitter threads from my various sources across the Internet. Expect a list of reads covering technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
Are you enjoying this newsletter?
If yes, I have a favor to ask you. If you know someone else who would benefit, please forward or share it with them!
If you received this newsletter from a friend, subscribe here to get the next one.
🤖 AI Reads
Notes: An economist’s rational take on AI fears. Somehow, we always need more insurance💀🤣
Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives. […]
So a cheaper and more reliable fix is for individuals or benefactors to buy robots-took-most-jobs insurance, which promises to pay from a global portfolio of B-type assets, but only in the situation where AI has suddenly taken most jobs. […]
However, economists today understand coordination as a fundamentally hard problem; our best understanding of how agents cooperate does not suggest that advanced AIs could do so easily.
Notes: Can confirm… datasets are gold mines. I highlight new dataset releases as often as new model releases because they are equally valuable. We do not have easy ways to clean, prepare, and use most of the data collected - let alone understand which subsets are helpful for learning and which are just trash.
To automatically construct a good dataset, we require an actionable understanding of which datapoints are important for learning. This turns out to be incredibly difficult. The field has, thus far, completely failed to make progress on this problem, despite expending significant effort. Cracking it would be a field-changing breakthrough, comparable to transitioning from alchemy to chemistry.
How generative models could go wrong [Paywall]
Notes: I lean towards the superforecasters.
In a study to be published this summer, they find that the median AI expert gave a 3.9% chance to an existential catastrophe (where fewer than 5,000 humans survive) owing to AI by 2100. The median superforecaster, by contrast, gave a chance of 0.38%. Why the difference? For one, AI experts may choose their field precisely because they believe it is important, a selection bias of sorts. Another is that they are not as sensitive to differences between small probabilities as the forecasters are.
Notes: The title is a little dramatic but still a good read. Here is the original paper: Hyena Hierarchy: Towards Larger Convolutional Language Models.
Notes: Edge AI is continuing to advance! As you can tell throughout these newsletters, I am most excited about AI on edge devices. GitHub repository.
Notes: From Tyler Cowen, excerpts from his paywalled Bloomberg article.
Notes: New NBER paper. More evidence of LLMs increasing productivity - previously covered in this newsletter multiple times.
🎨 Culture Reads
Notes: I disagree but a good read nonetheless.
Notes: US soft power should not be underestimated. Every time I travel, I am shocked to see American brands and food dominating the attention of local markets - whether they deserve it or not. In Dubai, I found almost every stereotypical American burger joint, even obscure ones. I am proud to see Tim Hortons doing so well - double-double anyone? Go Canada! 🇨🇦
What are your stories of finding American foods in the most unexpected places? Reply!
Notes: Carlsen remains entertaining no matter what he’s doing.
💼 Business Reads
Notes: New NBER paper. Interesting read.
That is all for this week. Signing off from 5 Stones Coffee Co, Redmond.
I wrote a disclosure for this newsletter in #7. Please consider reading it.
Thanks for reading Rami’s Readings! Subscribe for free to receive new posts.