Rami's Readings #18

The latest on AI anxiety, Edge AI, LLMs in the workplace, Careerism at Harvard, Philly's Best Cheesesteaks in Pakistan, and more.

Rami Sayar
Apr 30, 2023

Welcome to Rami’s Readings #18 - a weekly digest of interesting articles, videos, and Twitter threads from my various sources across the Internet. Expect a list of reads covering technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.


Are you enjoying this newsletter?

If yes, I have a favor to ask you. If you know someone else who would benefit, please forward or share it with them!


If you received this newsletter from a friend, subscribe here to get the next one.


🤖 AI Reads

What Are Reasonable AI Fears?

Notes: An economist’s rational take on AI fears. Somehow, we always need more insurance💀🤣

Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives. […]

So a cheaper and more reliable fix is for individuals or benefactors to buy robots-took-most-jobs insurance, which promises to pay from a global portfolio of B-type assets, but only in the situation where AI has suddenly taken most jobs. […]

However, economists today understand coordination as a fundamentally hard problem; our best understanding of how agents cooperate does not suggest that advanced AIs could do so easily.

We Aren't Close To Creating A Rapidly Self-Improving AI by Jacob Buckman

Notes: Can confirm… datasets are gold mines. I highlight new dataset releases as often as new model releases because they are equally valuable. We do not have easy ways to clean, prepare, and use most of the data collected - let alone understand which subsets are helpful for learning and which are just trash.

To automatically construct a good dataset, we require an actionable understanding of which datapoints are important for learning. This turns out to be incredibly difficult. The field has, thus far, completely failed to make progress on this problem, despite expending significant effort. Cracking it would be a field-changing breakthrough, comparable to transitioning from alchemy to chemistry.
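To make the problem concrete, here is a minimal toy sketch of one common heuristic family for scoring datapoints: rank examples by their loss under a reference model and keep the "hardest" ones. This is my own illustration, not from Buckman's article, and the model, data, and `select_informative` helper are all hypothetical; in practice, high loss often flags mislabeled or noisy data as readily as informative data, which is part of why the problem remains unsolved.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def example_loss(w, b, x, y):
    # Binary cross-entropy for a single (x, y) pair under a 1-D logistic model.
    p = sigmoid(w * x + b)
    p = min(max(p, 1e-12), 1.0 - 1e-12)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def select_informative(data, w, b, keep_fraction=0.5):
    # Rank examples by loss (descending) and keep the highest-loss subset,
    # treating "surprising to the reference model" as a crude proxy for value.
    scored = sorted(data, key=lambda xy: example_loss(w, b, *xy), reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]

# Toy (feature, label) pairs; (0.15, 1) is the outlier the model finds hardest.
data = [(0.1, 0), (0.2, 0), (2.5, 1), (3.0, 1), (0.15, 1)]
subset = select_informative(data, w=2.0, b=-3.0, keep_fraction=0.4)
print(subset)
```

Note that the top-ranked point here is the likely label error, not the most pedagogically useful example - exactly the ambiguity the quote above is getting at.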

How generative models could go wrong [Paywall]

Notes: I lean towards the superforecasters.

In a study to be published this summer, they find that the median AI expert gave a 3.9% chance to an existential catastrophe (where fewer than 5,000 humans survive) owing to AI by 2100. The median superforecaster, by contrast, gave a chance of 0.38%. Why the difference? For one, AI experts may choose their field precisely because they believe it is important, a selection bias of sorts. Another is that they are not as sensitive to differences between small probabilities as the forecasters are.

This new technology could blow away GPT-4 and everything like it

Notes: The title is a little dramatic but still a good read. Here is the original paper: Hyena Hierarchy: Towards Larger Convolutional Language Models.

MLC LLM - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices

Notes: Edge AI is continuing to advance! As you can tell throughout these newsletters, I am most excited about AI on edge devices. GitHub repository.

AI and economic liability

Notes: From Tyler Cowen, excerpts from his paywalled Bloomberg article.

Generative AI at Work

Notes: New NBER paper. More evidence of LLMs increasing productivity - previously covered in this newsletter multiple times.

🎨 Culture Reads

How Harvard Careerism Killed the Classroom

Notes: I disagree but a good read nonetheless.

The Amazing Story of How Philly Cheesesteaks Became Huge in Lahore, Pakistan

Notes: US soft power should not be underestimated. Every time I travel, I am continually shocked to see American brands and food dominating the attention of local markets - whether they deserve it or not. In Dubai, I found almost every stereotypical American burger joint, even obscure ones. I am proud to see Tim Hortons doing so well - double-double anyone? Go Canada! 🇨🇦

What are your stories of finding American foods in the most unexpected places? Reply!

Shake Shack in Dubai Mall. Photo by Rami Sayar.

World Champion Carlsen On 'Shocking' Ding Choice, Risky Play, WC Format, & More

Notes: Carlsen remains entertaining no matter what he’s doing.

💼 Business Reads

Local and National Concentration Trends in Jobs and Sales: The Role of Structural Transformation

Notes: New NBER paper. Interesting read.


That is all for this week. Signing off from 5 Stones Coffee Co, Redmond.


I wrote a disclosure for this newsletter in #7. Please consider reading it.


Thanks for reading Rami’s Readings! Subscribe for free to receive new posts.
