Rami's Readings #22
The latest on AI-Powered News, New LLMs, AI in the Classroom, SanDisk SSD Troubles, Nvidia 🔥, Travel Hacks, and more.
Welcome to Rami’s Readings #22 - a weekly digest of interesting articles, videos, and Twitter threads from my various sources across the Internet. Expect a list of reads covering technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
Are you enjoying this newsletter?
If yes, I have a favor to ask you. If you know someone else who would benefit, please forward or share it with them!
If you received this newsletter from a friend, subscribe here to get the next one.
🤖 AI Reads
The Boring Report
Notes: I am kinda digging this new service. The UX & logo design is on point. 🎯
An app that uses AI language models to remove sensationalism from the news while preserving essential information.
New 40B Model Outperforming LLaMA: Falcon-40B
Notes: TII (Abu Dhabi) also released a smaller 7B model. Fine-tuning is recommended. No paper out yet. Press release.
LIMA: Less Is More for Alignment
Notes: Newish paper. There is a common thread across recent papers: smaller, higher-quality fine-tuning datasets can be as effective as much larger ones. Long-term, we should see advantages from both techniques.
These results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output.
Gorilla: Large Language Model Connected with Massive APIs
Notes: Not surprising.
Gorilla, a finetuned LLaMA-based model that surpasses the performance of GPT-4 on writing API calls.
DarkBERT: A Language Model for the Dark Side of the Internet
Notes: Today I learned something new. 🤔
Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web […] Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web.
MMS: Scaling Speech Technology to 1,000+ Languages
Notes: This is a big release! Tweet from Yann LeCun.
The Massively Multilingual Speech (MMS) project expands speech technology from about 100 languages to over 1,000 by building a single multilingual speech recognition model supporting over 1,100 languages (more than 10 times as many as before), language identification models able to identify over 4,000 languages (40 times more than before), pretrained models supporting over 1,400 languages, and text-to-speech models for over 1,100 languages.
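If you want to poke at MMS yourself, the checkpoints are published on Hugging Face. A minimal transcription sketch, assuming the `facebook/mms-1b-all` checkpoint and its documented per-language adapter API in Transformers, might look like this:

```python
# Minimal sketch: transcribing 16 kHz audio with MMS via Hugging Face Transformers.
# Assumes the facebook/mms-1b-all checkpoint and its per-language adapters.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Swap in the adapter for the target language (ISO 639-3 code), e.g. French.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Placeholder input: one second of silence at 16 kHz; replace with real audio samples.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))
```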
My English Teacher is Defending GPT Zero. What Do I Tell Him?
Notes: This keeps happening… See my previous newsletter.
We showed him how parts of official documents and books we read were flagged as AI written, but he told us they were flagged because "Chat GPT uses those as reference so of course they would be flagged." What do we tell him?? This final is worth 70 percent of our grade and he is adamant that most of the class used Chat GPT.
💼 Business Reads
Buyer Beware: Some SanDisk Extreme SSDs Are Wiping People’s Data
Notes: I got suckered into buying these SSDs at a steep discount before it became clear that they were failing far more often than typical SSD failure rates would suggest. Some rumors on Reddit… I would not be surprised if a class action lawsuit is filed soon. 🗑️🔥
Nvidia’s Blowout Forecast Sparks Huge Rally in All Things AI
Notes: Nvidia was the stock market winner last week. I am writing a longer analysis on Nvidia and ML. Coming soon.
Nvidia Short Sellers Lose $2.3 Billion in One Day as Stock Soars 🔥
Notes: It was an insane day in the market. 🤣
✈️ Travel Reads
A $189 Airport Travel Hack Is No Longer Working Very Well
Notes: The secret is the TSA PreCheck + CLEAR combo line. Unfortunately, it is only offered at a few select airports like SEA, JFK, and LGA.
That is all for this week. Signing off from 5 Stones Coffee Co, Redmond.
I wrote a disclosure for this newsletter in #7. Please consider reading it.
Reach out to me on LinkedIn or Twitter.