Sources and Methods #44: Deep Learning with fast.ai's Jeremy Howard
Jeremy Howard 101:
Jeremy on Twitter: JeremyPHoward
Free online programme / MOOC (“Practical Deep Learning for Coders”) at: fast.ai
“The wonderful and terrifying implications of computers that can learn” (YouTube)
Show Notes:
5:55 - My entire education is one degree in philosophy.
7:30 - Joined McKinsey at 18 with extremely basic knowledge.
12:19 - At Fast.ai our target audience really is people who have interesting and useful problems, and have a feeling that AI might be a useful way to solve them, but who maybe don’t have a background in machine learning. It’s the people I came across in my career who were working in extremely diverse industries and roles and geographies, who are smart and passionate and working on interesting and important problems but don’t have any particular background in computer science or math. There’s a snobbishness in machine learning: most people in it have extremely homogeneous backgrounds, young, white, male, having studied computer science at a handful of universities in America or Europe.
David Perkins at Harvard, and his learning theory of the ‘Whole Game.’
18:10 - For some reason, the STEM fields on the whole have gotten away with shoddy, slack teaching methods, where we expect the students to do the work of sticking with it for 10 years and putting it all together.
20:02 - We’ve discovered that the most practical component in AI is transfer learning: taking a model that someone else has created and fine-tuning it for your task. It turns out that this is the most important thing by far for actually getting AI to work in the real world. The key skill is applying transfer learning effectively.
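The fine-tuning recipe described here - keep someone else’s trained model frozen, train only a small new piece for your task - can be sketched in a few lines. The code below is a toy NumPy stand-in, not the fastai API: the frozen random "backbone" and synthetic data are placeholders for what would in practice be, say, an ImageNet-pretrained CNN and your own labeled images. Every name and number is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: fixed (frozen) weights we never update.
W_backbone = rng.normal(size=(20, 16))

def features(x):
    # Frozen feature extractor: only the new head below gets trained.
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# A small labeled dataset for the *new* task (synthetic).
X = rng.normal(size=(200, 20))
y = (X[:, 0] > 0).astype(float)

# "Fine-tuning" here = training only a new linear head (logistic regression).
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w + b)))  # predicted probability
    grad = p - y                                      # logistic-loss gradient
    w -= 0.1 * features(X).T @ grad / len(y)
    b -= 0.1 * grad.mean()

accuracy = ((p > 0.5) == y).mean()
```

In a real framework the same shape holds: freeze the pretrained layers, replace the final layer, train briefly on your data, and optionally unfreeze the backbone later at a lower learning rate.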
I think many people teach a list or a menu of things that they know, rather than really getting to student learning.
22:41 - Each year, we try to get to a point where the course covers twice as much as the previous year, with half as much code, with twice the accuracy at twice the speed. So far, we’ve been successful at doing that three years running.
28:48 - I think that will be one of the two most important skills over the next decade or two - the idea of how to work as a domain expert to provide appropriate data to a machine learning system and to interpret the results of those things in a way appropriate to your work. If you don’t know how to do it, you’re going to be totally obsolete.
31:09 - Back in the early days of the commercial internet, being an internet expert was extremely useful, and you could have a job as an internet expert, be in a company of internet experts, and sell yourself as an internet expert company. Today, very few people do that, because on the whole the internet is what it is, and there’s a relatively small number of people who need such a level of expertise that they can go in and change the way your router operates and such. I think we’re going to see the same thing with AI.
39:08 - I started learning Chinese not because I had any interest in Chinese, but because I was such a bad language learner in high school. I did six months of French, I got 28% and I quit. When I wanted to dig into machine learning, I thought one of the things that might be better to understand was human learning, so I used myself as a subject. A hopeless subject. If I can come up with a way that even I can learn a language, that would be great. And to make sure that was challenging enough, I tried to pick the hardest language I could. So according to CIA guidelines, Arabic and Chinese are the hardest languages for people to pick up. Then I spent three months studying learning theory, and language learning theory, and then software to help me with that process.
It turns out that even I can learn Chinese. After a year of this - by no means a full time thing, an hour or two a day - I went to China to a top language learning program and based on the results of my exam got placed with all these language PhDs, and I thought wow. Studying smart is important. It’s all about how you do it.
Spaced repetition is such an easy thing that anyone can do, for free. You can just start using it.
[Jeremy’s amazing Anki talk]
If you’re not using Anki, you’re many orders of magnitude less likely to remember a piece of vocab. So you come away like I did, thinking you can’t learn a language. But once you learn vocab, the rest is really not that hard. Don’t try to learn grammar, just spend all your time reading.
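Spaced-repetition tools like Anki are built on variants of SuperMemo’s SM-2 scheduling algorithm. Below is a simplified, hypothetical version of one review step - the constants follow the published SM-2 defaults, but this is a sketch of the idea, not Anki’s actual scheduler.

```python
def next_interval(interval_days, ease, quality):
    """One review step of a simplified SM-2 schedule.

    quality: 0-5 self-rating of recall; below 3 means you forgot the card.
    Returns the next review interval (days) and the updated ease factor.
    """
    if quality < 3:
        return 1, ease  # forgot: start the card over tomorrow
    # Ease factor drifts with how confidently you recalled the card.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days == 1:
        return 6, ease  # second successful review
    return round(interval_days * ease), ease

# A card recalled successfully three times spaces out quickly:
interval, ease = 1, 2.5  # SM-2 starting values
for q in (4, 4, 5):
    interval, ease = next_interval(interval, ease, q)
# intervals go 1 -> 6 -> 15 -> 39 days
```

The point of the exponential spacing is exactly the quote above: each successful recall pushes the next review far enough out that you review a given card only a handful of times, yet rarely forget it.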
45:04 - If you’re not spending a significant portion of your early learning learning how to learn, then you’re going to be at a disadvantage to those that did for that entire learning journey. Spending 12 years at school learning things, when nobody ever taught you how to learn, is one of the dumbest things I’ve ever heard.
Coursera’s most popular course is Learning How To Learn.
Exercise is the other most important thing.
49:03 - My third superpower is taking notes. Exceptional people take a lot of notes. Less exceptional people assume they’re going to remember.
50:19 - Taking notes in class is kind of a waste of time. I don’t really see the point of going to class most of the time honestly, it’s probably being videotaped.
52:54 - Learn Python if you’re interested in data science, deep learning.
54:22 - I think there are two critical skills going forward, pick one. One is knowing how to use machine learning. And the other is knowing how to interact with and care for human beings. Because the latter one can’t be replaced by AI. The former one will gradually replace everything.
Sources and Methods #42: Parsing Complexity with Zavain Dar
Zavain Dar 101:
Zavain on Twitter: https://twitter.com/zavaindar
Email: zavain.dar@luxcapital.com
Blog: http://beardedbrownman.com/
Show Notes:
3:00 - The firm created a name for itself as one of the first funds specifically focused on deep tech, or emerging tech. Over the years, that’s really encompassed anything from nanotech to metamaterials, spaceships, crypto, satellites, biotech, AI, blockchain, next-gen manufacturing, autonomous cars. All sorts of weird and wonky ideas that are out there.
You’re not only an allocator of capital; it does feel like you’re pulling forward the metaphysical string that connects the future, or science fiction, to the present.
I focus on complex software systems that may or may not be coupled to the real world.
5:21 - How do we not fund the next Theranos? It’s a great question. I’m lucky to be part of a team that’s not scared of primary literature. All of us take pride in having the ability to scour and read and understand, from a first-principles basis, a lot of the technologies and engineering systems we invest in.
I wouldn’t say we’re only bottoms-up. A lot of what we talk about internally is ‘If this works, then what?’ If this technology is actually able to get off the ground, are there real, strong market forces that dictate that it captures value, that it’s great for the entrepreneurs, for investors, and for our investors as well? Candidly, that can be the harder part to assess.
12:16 [On advice / lessons from his first startup] Trust your instincts. Be intellectually disciplined enough to think through all of your decisions without relying on high-level proxies, like what’s on TechCrunch or what else in the ecosystem is getting funded, what’s hot or what’s not. Those things are fads, and oftentimes it’s layered, iterative processes of other people’s proxies for what other people are thinking, over and over again, which ends up being decoupled from reality. If there’s one thing in my career I’ve looked back on and wished my former self had done more of, it would be that I wish my younger Zavain had listened to his instincts with greater enthusiasm or confidence.
The other is to surround yourself with phenomenally intelligent people.
14:02 - That company was acquired by Twitter, and given my own disposition against social media, or at least against working at a social media company, I obviously left. That was really the catalyst towards my future in venture capital.
16:20 - Todd Davies at Stanford first gave me that quote, that capitalism is a phenomenal tool but not a great ideology; it’s not a dogma. I often think in the Valley and in the US at large, we confuse the two - that the laissez-faire capitalist outcome is the moral or ethical outcome. While it’s true you can point to capitalism and say wow, it’s phenomenal for its ability to drive distributed, decentralized innovation across various groups - and I think it’s inarguably one of the most impressive systems to do exactly that, and we have empirical data for that - it doesn’t mean the end outcomes are necessarily the just outcomes.
17:30 - If you walk around San Francisco, there’s a very clear separation between the Haves and the Have Nots. Generally, the Haves are the folks in Tech and the Have Nots are everyone else. For a region with the ability to create so much value and capture such a large portion of that value, it’s frankly disappointing. I think it’s a failure that we have such a large number of people on the streets. That’s not necessarily something that capitalism points at as a problem to solve.
There’s more capital and more upside in optimizing e-commerce on Instagram. I don’t say that in a pejorative way; I just say that that’s actually the case. So we need to be honest with ourselves about what capitalism is actually geared towards. If at all moments in time all firms are geared towards increasing profits or increasing revenue or margins, at what moment in time do we actually solve issues in society for the classes that are most vulnerable?
21:52 - The advancement of technology - it’s an awesome tool and an awesome outcome. But we should sit there and really think about how it affects society at large.
29:09 - Some truths are simply out of the realm of complexity that a human brain can actually access. Two examples here:
AlphaGo - We saw a computer Go player start to access strategy that not even the best of the best of the best of the best experts of Go in real life could understand. It might be the case that one day some genius Go player will look back at those games and understand exactly the strategies that AlphaGo was employing. But it also may not be the case. It might just be beyond the level of cognitive ability of humans.
I’m an investor in a company called Recursion Pharma. They took pictures of human cells, and they track how various genetic changes to the cell manifest morphologically, or structurally, in the pictures of the cells. Oftentimes, what you get is images of 10,000 cells, with 5,000 features in each cell, and highly complex, highly non-linear relationships between the features and the cell. And there’s absolutely no way even the most expertly trained pathologist could look at these 10,000 cells and find all the correlations. It’s not feasible. If you allow a computer to do that, it can find interesting, highly complex formulas that split apart perfectly the diseased cells from the non-diseased cells. It’s really interesting, and it feels like we are in fact coming to something that is scientifically valid and scientifically true even if it’s maybe beyond the capacity of a human to understand. Candidly, I think most of biology fits in that realm.
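The across-the-board screen described here - checking every feature of every cell for association with disease, something no pathologist could do by eye - is straightforward for a computer. The sketch below fabricates a small synthetic version of the problem (scaled down from the 10,000 cells x 5,000 features mentioned above so it runs fast; all names, shapes and numbers are invented, and this is not Recursion’s method) and recovers the planted "disease" features by correlating each feature with the label.

```python
import numpy as np

# Hypothetical morphological profiling data: cells x features, with a
# binary diseased/healthy label per cell. All numbers are illustrative.
rng = np.random.default_rng(2)
n_cells, n_features = 1000, 500
X = rng.normal(size=(n_cells, n_features))
y = rng.integers(0, 2, size=n_cells)  # 1 = diseased, 0 = healthy

# Plant a real signal in a handful of features the "disease" perturbs.
informative = [3, 41, 97]
X[:, informative] += 2.0 * y[:, None]

# Screen every feature for association with the label at once.
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
top = np.argsort(-np.abs(corr))[:3]  # indices of the strongest features
```

A per-feature correlation screen is of course the simplest possible version; the quote’s point is that real models also capture the highly non-linear combinations of features, which pushes the result even further beyond human inspection.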
32:07 - So for the majority of human history, that’s what we’ve had to rely on as true: the metaphysical, the language, the epistemological. And what we’re starting to see now with advancement in AI, Machine Learning and Data Science is that you can, one by one, mix all three of those assumptions.
34:28 - [On investing time to learn about these changes in technology] My own suspicion is that technology is only increasing in its power to rapidly drive change and command attention. Such that if you have the time and the resources to invest in learning about it, it’s absolutely worth learning about it. That’s everything from learning about how networks emerge, what network effects are, to really thinking through and trying to understand how the emergence and connectivity of data will affect the types of problems we can solve. And also of course how that too gives rise to all sorts of social, political and anthropological effects.
37:41 - I look back on my training in philosophy and theoretical computer science as the most impactful for the ability to do my job day to day.
45:24 - Mehran Sahami’s inspirational speech on Computer Science
47:40 - [On his work with the Philadelphia 76ers] - The work there was around understanding this new modality of information coming into the league. If you think about the history of most sports, most sports data is recorded in what we refer to as box scores. If you read a newspaper the day after a game, you’ll get these box scores - who the players are, what their numbers are, maybe how many shots they took, how many shots they made, etc.
At this point now, we’re tracking players at the specificity of where every player is on the court at every moment in time. So you end up with a very big, unstructured data set, where at each moment in time - for basketball, you’re getting 11 geo-coordinates. Where are each of the 5 players on each team, and where’s the ball. And the question was - how do we actually manage this?
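A minimal way to hold this kind of tracking data is a frames x entities x coordinates array: one slot for the ball and five per team, per moment in time. The sketch below is hypothetical (random positions, made-up frame count, crude court coordinates), just to show the shape of the data and one derived stat.

```python
import numpy as np

# Hypothetical tracking data: for each frame, x/y court coordinates for
# 11 entities (the ball plus 5 players per team). Positions are random
# here; real data would come from the league's tracking feed.
n_frames = 3
rng = np.random.default_rng(1)
frames = rng.uniform(0, 94, size=(n_frames, 11, 2))  # NBA court is 94 ft long

# Index convention (an assumption of this sketch, not a standard):
BALL, TEAM_A, TEAM_B = 0, slice(1, 6), slice(6, 11)

# Example derived stat: each Team A player's distance to the ball per frame.
dist_to_ball = np.linalg.norm(frames[:, TEAM_A] - frames[:, [BALL]], axis=2)
```

From this one unstructured array you can start deriving the structured questions the quote mentions: spacing, closest defender, who has the ball, and so on.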
There are two problems we want to solve:
One is portfolio management. Which players are undervalued, which are overvalued, who should we get off our team, who should we draft?
And the other is game ops. You are the Warriors and you’re playing LeBron James and - at this point - the Lakers tomorrow. What’s the best defensive matchup you can have based on how he’s trending over the last 10 games and how your defense has been playing in some prior window in the past.
So the question was - how do we move towards a radical empiricism in sports?
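The "how he’s trending over the last 10 games" idea above maps onto a simple trailing-window average. A toy sketch with invented numbers:

```python
import numpy as np

# Hypothetical per-game scoring line for one player (numbers invented).
points = np.array([25, 31, 28, 40, 22, 35, 30, 27, 33, 38, 41, 29])

def trailing_mean(x, window):
    """Average of the previous `window` games at each point in the season."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# One value per game once 10 games of history exist.
trend = trailing_mean(points, 10)
```

Real matchup models would do far more than a moving average, but the windowing idea is the same: summarize a recent slice of games into a number you can condition decisions on.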