Curated resources for building generative AI applications, ML system design, coding interviews, and AI safety & security research.
New to ML interviews? Start with the Coding Prep section — LeetCode practice is essential for ML engineering interviews.
## GenAI Development
- → Building A Generative AI Platform by Chip Huyen — Comprehensive guide to the components of a gen AI platform, from RAG and guardrails to caching and orchestration.
- → Common Pitfalls When Building Generative AI Applications by Chip Huyen — Learn from common mistakes: using gen AI when you don't need it, confusing bad product with bad AI, starting too complex.
- → 1,001 Real-World Gen AI Use Cases by Google Cloud — Real-world gen AI use cases from leading organizations, organized by industry and agent types.
- → Prompt Engineering Guide — Not sure how to instruct LLMs? This guide covers prompt engineering techniques, AI agents, and best practices.
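As a quick taste of what these guides mean by a RAG pipeline: the sketch below retrieves the most relevant snippet for a query and folds it into a prompt template. It is a minimal illustration with made-up documents and naive term-overlap scoring — a real platform would use embeddings, a vector store, and reranking instead.

```python
import re

# Toy retrieval-augmented generation (RAG) pipeline.
# Documents and scoring are simplified for illustration only.

def tokenize(text: str) -> set[str]:
    """Lowercase and extract alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most tokens with the query."""
    q = tokenize(query)
    return max(docs, key=lambda d: len(q & tokenize(d)))

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to an LLM."""
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Shipping is free for orders over 50 dollars.",
]
print(build_prompt("What is the refund policy?", docs))
```

Swapping the term-overlap scorer for embedding similarity (and adding guardrails on the model's answer) turns this toy into the architecture the posts above describe.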
## ML System Design
- 📚 Designing Machine Learning Systems by Chip Huyen — Essential reading for designing, building, and deploying production ML systems. Covers the full ML lifecycle from data to deployment.
- 📚 Machine Learning System Design by Valerii Babushkin and Arseny Kravchenko — Interview-focused guide covering system design patterns, scalability, and best practices for ML system architecture.
- → Lilian Weng's Blog — Excellent technical blog posts on deep learning, reinforcement learning, and AI research with clear explanations and visualizations.
## Coding Prep for ML Interviews
- → Blind 75 LeetCode — The classic curated list of 75 LeetCode problems. Start here to prepare for ML engineering coding interviews.
- → Grind 75 — A modern, customizable version of Blind 75 with an updated problem set and study plan. Perfect for ML interview prep.
## AI Safety & Security Fellowships
- → Anthropic Interpretability Fellow
- → MATS Scholar (ML Alignment & Theory Scholars)
- → Constellation (Visiting Fellow)
- → CHAI (UC Berkeley)
- → SPAR (Supervised Program for Alignment Research)
## AI Safety & Security
- 🎓 FAR.AI Alignment Workshops — Workshop series focused on AI safety, alignment, and governance research; I presented my Policy-as-Prompt paper there.
- 📄 Constitutional Classifiers: Defending against Universal Jailbreaks (arXiv:2501.18837) — Safeguards trained on synthetic data using natural language rules (a constitution). Demonstrates robust defense against universal jailbreaks.
- 📄 Emergent Misalignment: Narrow Finetuning Can Produce Broadly Misaligned LLMs (arXiv:2502.17424) — Shows that finetuning on a narrow task (e.g. writing insecure code) can produce broadly misaligned behavior far beyond the training domain.
- 📄 The Jailbreak Tax: How Useful are Your Jailbreak Outputs? (arXiv:2504.10694) — Evaluates whether jailbreak outputs are actually useful, revealing a consistent drop in model utility.
- 📄 Ignore This Title and HackAPrompt (arXiv:2311.16119) — Comprehensive study of prompt hacking vulnerabilities through a global competition, analyzing 600K+ adversarial prompts.
- 📄 Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign — Analysis of the first documented AI-orchestrated cyber espionage campaign, highlighting new security challenges.
- 📄 Attacking Multimodal OS Agents with Malicious Image Patches (arXiv:2503.10809) — Novel attack vector demonstrating how adversarially perturbed image patches can hijack OS agents when captured in screenshots.