Rami's Readings #23
The latest on LLMs, Traders, Costco, with a focus on Nvidia’s 🔥 P/E Ratio, and more.
Welcome to Rami’s Readings #23 - a weekly digest of interesting articles, videos, and Twitter threads from my various sources across the Internet. Expect a list of reads covering technology, business, culture, fashion, travel, and more. Learn about what I do at ramisayar.com/about.
Are you enjoying this newsletter?
If yes, I have a favor to ask you. If you know someone else who would benefit, please forward or share it with them!
If you received this newsletter from a friend, subscribe here to get the next one.
🤖 AI Reads - Focus on Nvidia
Reminder: Read the disclosure in #7. All opinions are my own. The following text should not be construed as financial or investment advice.
Under the leadership of the indomitable Jen-Hsun Huang, Nvidia is worth $1 trillion. Its success is a testament to predicting trends and laying the foundational infrastructure to support the evolution of computing, and it is well deserved. Nvidia rode three critical computing waves with aplomb: cloud, blockchain, and AI. Its A100 hardware is the gold standard. Read this NYTimes interview with Jen-Hsun from 2010: I’m Prepared for Adversity. I Waited Tables.
AI, the next big wave in computing, is realizing decade-old promises. Going forward, every business will use or interact with machine learning models in some shape or form every day. This newsletter has shared research on Generative AI in the workplace multiple times - the productivity gains are real.
Nvidia’s P/E Ratio as of June 2nd is 190.91 on Bloomberg. In comparison, Microsoft’s P/E Ratio is 35.79, and Apple’s is 30.72. This P/E Ratio is wild! My expertise is in tech (ML, UX, Developer Experience) and innovation. Let me use my experience in the AI rush to help you understand what might (or might not) support such a skewed P/E Ratio.
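A P/E ratio is just share price divided by trailing earnings per share, so it can be inverted into an earnings yield: the profit you are buying per dollar invested. A minimal Python sketch, using only the three ratios quoted above:

```python
# P/E ratio = share price / trailing earnings per share (EPS).
# Figures are the trailing P/E ratios quoted above (Bloomberg, June 2nd).
pe_ratios = {"NVDA": 190.91, "MSFT": 35.79, "AAPL": 30.72}

for ticker, pe in pe_ratios.items():
    # Invert P/E to get the earnings yield: earnings per dollar invested.
    earnings_yield = 1 / pe
    print(f"{ticker}: P/E {pe:.2f} -> earnings yield {earnings_yield:.2%}")

# Nvidia at ~190x earnings implies the market expects profits to grow
# several-fold; an earnings yield of roughly half a percent cannot be
# justified by today's profits alone.
```

At roughly 190x earnings versus ~36x for Microsoft, the market is pricing Nvidia's future growth, not its present profits; the rest of this section is about where that growth could come from.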
Machine learning models are trained on vast amounts of data. Data can be public domain or private (e.g., Bloomberg → BloombergGPT). Companies with clean, labeled, and curated datasets, plus data pipelines to keep those datasets fresh, have a resource advantage that translates into higher-quality models.
The critical variables for training ML models are time, dataset size, and model technique (I am simplifying by lumping in architecture, hyperparameter tuning, optimization, preprocessing, etc.). Companies with access to large compute capacity (GPUs, networking bandwidth, etc.) can train larger models for longer to produce ever-more powerful models.
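To make "larger models for longer" concrete, here is a back-of-envelope compute estimate using the common approximation that training cost ≈ 6 × parameters × tokens. The model size, token count, and utilization below are illustrative assumptions, not any real model's figures; only the A100's 312 TFLOPS BF16 peak is a published spec:

```python
# Back-of-envelope training cost, using the common approximation that
# training compute ≈ 6 * parameters * tokens (forward + backward pass).
# The model size and token count below are illustrative assumptions.
params = 70e9        # assumed 70B-parameter model
tokens = 1.4e12      # assumed 1.4T training tokens
flops = 6 * params * tokens

a100_peak = 312e12   # A100 peak BF16 throughput, FLOP/s (vendor spec)
utilization = 0.4    # assumed real-world fraction of peak achieved

seconds = flops / (a100_peak * utilization)
gpu_days = seconds / 86400
print(f"{flops:.2e} FLOPs ≈ {gpu_days:,.0f} A100-days")
# ~54,000 A100-days on these assumptions: thousands of GPUs for weeks.
```

On these made-up but plausible numbers, one training run eats tens of thousands of A100-days, which is why serious trainers buy GPUs by the data center.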
Once trained, machine learning models are used for inference (prediction, classification, scoring, generation, completion, etc.).
You can quickly take pre-trained models (the larger they are and the longer they are trained, the better… GPT-2 < GPT-3 < GPT-4) and apply them to solve problems on your specific data without needing to train them yourself. This revelation is the main driving force behind the AI revolution of the past six years. Attention is all you need!
Getting back to Nvidia. Nvidia laid the groundwork for its A100s 15 years ago (CUDA, InfiniBand, software, etc.). It has the most comprehensive and state-of-the-art solution for training machine learning models at scale. Meanwhile, global supply chain failures, crypto, COVID, etc., have all contributed to an insatiable demand for GPUs. Supply is constrained and will likely remain so for a long time.
There will be a category of companies that train their own extra-large machine learning models. This category will skew to companies with valuable datasets, sophisticated data pipelines, specialized applied machine learning engineering teams, and high-scale products with tremendous inference needs. These companies will be Nvidia’s prime customers, needing an endless supply of ever-larger and faster GPUs. Think OpenAI, Apple, Microsoft, Google, Amazon, etc. Unsurprisingly, these companies will also rent you their spare GPU capacity.
There will be another category of companies that will distill or fine-tune existing machine learning models to suit their needs. This category will be larger than the previous one. Only some companies need to train their own models. This category’s training and inference capacity needs will be modest. They will also be the primary beneficiaries of open-source AI models and the optimizations making AI inference on edge devices a reality. Their costs will trend to zero. 🤔 Why buy entire data centers of expensive hardware when many open-source models with some fine-tuning suit your needs? Rent GPUs just enough to get by. Run inference on edge devices as much as possible.
A comment on Developer Experience: Developers are brilliantly lazy. Prompt engineering will not be everything, but it is much easier to use a large language model with few-shot learning than to fine-tune a model. General-purpose models are simply better developer experiences. You can train an army of developers to be prompt engineers faster than you can teach a few applied machine learning engineers. Good enough.
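To illustrate why few-shot prompting is the lazier path: the "training" is just a handful of labeled examples folded into the prompt string, with no weight updates at all. A sketch with made-up support-ticket examples (the actual LLM call is out of scope, so this only builds the prompt):

```python
# Few-shot prompting: teach the task through examples in the prompt
# rather than by updating model weights (fine-tuning).
# The tickets and labels below are made up for illustration.
EXAMPLES = [
    ("The checkout page crashes on submit.", "bug"),
    ("Please add dark mode.", "feature-request"),
    ("How do I reset my password?", "question"),
]

def build_few_shot_prompt(ticket: str) -> str:
    """Fold labeled examples into the prompt; no training required."""
    lines = ["Classify the support ticket as bug, feature-request, or question.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Ticket: {text}\nLabel: {label}\n")
    # Leave the final label blank for the model to complete.
    lines.append(f"Ticket: {ticket}\nLabel:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("The app freezes when I upload a photo.")
print(prompt)
```

Swapping the task means swapping the examples, not retraining a model; that is the developer-experience win the paragraph above is pointing at.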
There are other categories of high-end GPU consumers, e.g., research institutions, government institutions, and individual researchers in their basements with a few A100s (you can find these people on Reddit). But their consumption will come nowhere near the size of the first category.
Predicting the Future:
Getting back to Nvidia and their P/E Ratio, whoever can calculate the aggregate demand of big companies (first category) in the short and long term for GPUs will have a good handle on Nvidia’s intrinsic stock value. I suspect an analyst will figure out how to measure the demand from a handful of the largest companies using quarterly financial reports and use that as a proxy. Is that demand large enough to support the P/E Ratio of June 2nd? I don’t know, but if you figure that out, you will print 💵💵💵.
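The analyst exercise could be sketched roughly like this: take reported data-center capex from the biggest buyers and assume some fraction goes to GPUs. Every figure below is a made-up placeholder, not data from any filing:

```python
# Rough proxy for GPU demand from the largest buyers: assume some share
# of each company's reported data-center capex goes to GPUs.
# ALL figures below are hypothetical placeholders, not from any filing.
quarterly_capex_usd_b = {   # hypothetical quarterly capex, in $B
    "CompanyA": 10.0,
    "CompanyB": 8.0,
    "CompanyC": 12.0,
}
gpu_share = 0.25            # assumed fraction of capex spent on GPUs

gpu_spend = sum(capex * gpu_share for capex in quarterly_capex_usd_b.values())
print(f"Implied quarterly GPU spend across big buyers: ${gpu_spend:.1f}B")
# Annualize this and compare it against Nvidia's data-center revenue to
# sanity-check what the P/E ratio is pricing in.
```

The hard part, of course, is getting real numbers for capex and the GPU share; the arithmetic itself is the easy bit.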
Notes: A general reminder for subscribers: you can get Andrew Ng’s famous AI course on Coursera.
Notes: Interesting podcast about Generative AI, tech giants in China
Notes: Pair with the previous podcast.
Notes: Developed by a long-time Twitter iOS engineer. Edge AI continues to advance on all platforms.
Notes: Salesforce entered the chat.
💼 Business Reads
Notes: I credit a significant chunk of my professional development to on-campus student organizations.
The Corporation and the Twentieth Century: The History of American Business Enterprise – Pre-order for June 27, 2023
Notes: I placed my pre-order.
Notes: Someone is going to get a call from the SEC on Monday. 🤣
🔀 Other Reads
Notes: Uses Midjourney - the NY one feels like a person I met IRL.
You try selling poutine gravy in a 55 gallon drum, and cheese curds by the pallet, in Quebec. See if you don't need to expand drastically to meet demand.
That is all for this week.
I wrote a disclosure for this newsletter in #7. Please consider reading it.
Thanks for reading Rami’s Readings! Subscribe for free to receive new posts.