Rami's Readings #58 - 📢 Mistral, OpenAI, Google, Cohere Release New Models
The latest on AI, LLMs, Mistral, OpenAI, Google, Cohere, Devin Debunking, Canadian & South Korean AI Investment, and more.
Welcome to Rami's Readings #58 - a weekly digest of interesting articles, papers, videos, and X threads from my various sources across the Internet. Expect a list of reads covering AI, technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
👋🏼 Welcome New Subscribers
Hello and a warm welcome to our community!
First off, a hearty thank you for subscribing to Rami's Readings! I am thrilled to have you on board. Every edition of this newsletter is a labor of love, tailored to provide you with enlightening reads, thought-provoking insights, and a fresh perspective on various topics.
Remember, reading isn't just about absorbing information; it's about growing, understanding, and expanding perspectives. Welcome and happy reading! ✨
A short update on last week. I had the pleasure of speaking at the Canadian Fintech Summit. The conference was fantastic (read this write-up from Guy Vadish). Kudos to Carol and Peter Misek for organizing such a great event! ❤️ It truly brings immense value to the Fintech community in Toronto. It was also great to see so many MIT classmates doing their thing at the event. ❤️ Unsurprisingly, Toronto has continued to add more skyscrapers, and banking feels ever more present in the downtown core. Looking forward to the next one!
🤖 AI Reads
While we are all patiently waiting for Meta to drop Llama 3… Google, Mistral, Cohere, and OpenAI dropped new models this week.
Mistral's 8x22B Launched Via Magnet Link
Notes: Mistral is randomly tweet-dropping their models again.
Gemini 1.5 Pro Now Available in 180+ Countries
Notes: Positive comments on X.
OpenAI launches GPT-4 Turbo with Vision
Notes: ❤️
Introducing Rerank 3: A New Foundation Model for Efficient Enterprise Search & Retrieval
Notes: Ranking and embedding models don't get as much limelight, yet they are critical (a rough sketch of where a reranker fits in a RAG pipeline follows below).
"The net effect makes running RAG with Rerank on Command R+ 80-93% less expensive than other generative LLMs in the market, and with Rerank and Command R savings can top 98%."
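Since reranking is the piece people most often skip, here is a minimal sketch of where a second-stage reranker sits in a RAG pipeline, using Cohere's Python SDK. The API key placeholder, the example documents, and the exact Rerank 3 model identifier are my own assumptions, so treat this as an illustration rather than a drop-in implementation.

```python
# Minimal sketch: second-stage reranking in a RAG pipeline with Cohere's Python SDK.
# The model name, API key, and documents below are assumptions for illustration;
# check Cohere's docs for the exact Rerank 3 identifier.
import cohere

co = cohere.Client("YOUR_API_KEY")  # hypothetical placeholder key

query = "How do I rotate my API keys?"

# In practice these candidates come from a cheap first-stage retriever
# (BM25 or embedding search) that deliberately over-fetches.
candidate_docs = [
    "API keys can be rotated from the security settings page.",
    "Our quarterly earnings grew 12% year over year.",
    "To rotate a key, revoke the old one and generate a replacement.",
]

# Rerank the candidates against the query and keep only the top 2 for the prompt.
response = co.rerank(
    model="rerank-english-v3.0",  # assumed Rerank 3 model name
    query=query,
    documents=candidate_docs,
    top_n=2,
)

for r in response.results:
    print(f"score={r.relevance_score:.3f}  doc={candidate_docs[r.index]}")
```

The idea is simply to let the first-stage retriever cast a wide net and have the reranker pick the handful of passages that actually get stuffed into the generation prompt.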
Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models
Notes: From Stanford. Demo.
CodeGemma: Open Code Models Based on Gemma
Notes: This model is small enough to run on a MacBook Pro, and it is already available to pull down with Ollama. I will give it a try this week and see how it compares with DeepSeek-Coder (my current default); a quick sketch of querying it locally is below.
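For anyone who wants to try the same thing, here is a minimal sketch that asks a locally running Ollama server for a CodeGemma completion over its HTTP API. The model tag and the prompt are assumptions on my part; you would need to run `ollama pull codegemma` (or whatever the published tag turns out to be) beforehand.

```python
# Minimal sketch: query a locally running Ollama server (default port 11434)
# for a CodeGemma completion. The model tag and prompt are assumptions;
# pull the model with Ollama first so it is available locally.
import json
import urllib.request

payload = {
    "model": "codegemma",  # assumed Ollama model tag
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,       # return a single JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the generated completion
```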
I shared an article about Devin in #54. The video below has been going viral in the tech community for debunking Devin. I still think it is too early to tell how good an "AI Software Engineer" can be. As a side note, and keep in mind I have no concrete evidence for this theory: I believe the original demo video and the subsequent X/Reddit astroturfing were primarily designed to help Cognition raise the venture capital round that, as the Wall Street Journal reported, followed the demo's release. I would love to hear from anyone who has more information.
💼 Business Reads
Canadian Banks' Research Strength Pushes Them to Top of Global AI Ranking 🇨🇦
Notes: I spent some time in Toronto last week talking with the Fintech community. I am not surprised that Canadian banks would rank so highly in this study.
Securing Canada's AI Advantage 🇨🇦
Notes: I pointed out last week that few companies in Canada have implemented AI at scale. I am glad to see the Government of Canada investing $2 billion to fund compute, research, start-ups, and scale-ups.
South Korea to Invest $7 Billion in AI in Bid to Retain Edge in Chips
Notes: Basically investing in Samsung Electronics, SK Hynix, and SK Square?
The 2024 MAD (ML, AI & Data) Landscape
Notes: A comprehensive map of ML, AI & Data companies from FirstMark VC.
For my US subscribers, I hope your tax-filing weekend went more smoothly than ours. That is all for this week. Signing off from Redmond.