Professionally, I focus on creating social benefit startups. In my Saturday morning emails I share what I’m learning and thinking. Topics range from better living and parenting to business and philosophy.
AI and School. What’s the right approach to LLMs in schools? Should we ban them except for specific assignments? Treat them like any other source, as long as they’re cited? Or should we record students’ screens, watching the words appear to ensure nothing was wholesale copied and pasted without citation? We should at least have more watermarking in LLMs. And there’s a deeper question here: if we read to learn to write, and write to learn to think, then what happens to thinking when we skip the writing and outsource the reading? On one hand, a recent meta-analysis in Nature found that ChatGPT has a “large positive impact on improving learning performance.” On the other, New York Magazine’s piece about cheating through college paints a darker picture, with students using AI for all their homework. Will employers even need those kinds of workers? This just in: labor conditions for recent college grads have “deteriorated noticeably.” Is that a function of a softening economy, diminishing returns on higher education, or a reflection of AI’s early impact on white-collar workflows? One positive way to incorporate AI into school is to have students accomplish things in the world: the less busy work and the more project work, the better. (I did use ChatGPT to help me polish this piece. Ironic, at least.)
AI Safety. Even if you’re skeptical that AI will come to wield power for itself, there are already plenty of ways humans can use it for evil. Governments can deploy fleets of autonomous weapons or build always-on surveillance. And AI can help individuals mount cyberattacks or build bioweapons. For example, OpenAI’s o3 outperforms 94% of virology experts within their sub-areas of specialization. And researchers were able to fine-tune Meta’s open-source Llama for ~$200 to strip its safeguards so it would explicitly help build dangerous viruses. I’m learning more about AI safety, so please share what you know. I’m intrigued by Max Tegmark’s proposal for provably safe AI. He says we should build computer chips that will only run programs proven safe using formal verification. Even if we can’t solve a problem ourselves the way something smarter than us can, we should be able to check whether its answer is correct. I agree that training for safety as a layer on top seems like a losing proposition. One question I have about his proposal: if computers will only run “provably safe” programs, does that mean someone can stop me from running the programs I choose? Who should have that power? (Also, if you have anything I should read on machine interpretability, please let me know.)
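The intuition that checking is easier than solving has a classic illustration (my own toy example, not anything from Tegmark’s actual proposal): verifying a claimed factorization of a number is instant, even when finding the factors in the first place is hard. A short sketch:

```python
def verify_factorization(n: int, p: int, q: int) -> bool:
    """Cheap check: do the claimed factors actually multiply to n?"""
    return p > 1 and q > 1 and p * q == n


def find_factors(n: int) -> tuple[int, int]:
    """Expensive search: trial division, which becomes
    infeasible as n grows. Raises ValueError if n is prime."""
    for p in range(2, int(n**0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")


# Verification stays trivial even when the search does not.
print(verify_factorization(3233, 53, 61))  # True: 53 * 61 == 3233
print(verify_factorization(3233, 51, 63))  # False: a wrong "proof" is caught
```

The asymmetry is the point: a verifier much dumber than the solver can still reject wrong answers, which is the hope behind having hardware accept only formally verified programs.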
Meaningness. The existentialists were not the last word on the meaning of life. I’ve discovered there are writers who continue to work in this important area. One I am reading (or listening to) now is David Chapman. His writing on Meaningness contains some great ideas mixed in with some confusing parts. (It is a work in progress.) A core distinction for him is between Eternalism and Nihilism, and he introduces his own approach, the Complete Stance. See the handy-dandy chart. I like that he is willing to argue against pop-philosophy stances like materialism and romantic rebellion. He is at his best arguing against nihilism. He points out that science disproved many of the eternalist (religious) claims but did not prove nihilism. He makes an insightful point that meaning involves a subject, but that’s not the same as saying it has no objective reality. Take a rainbow as an analogy: “Although an observer is necessarily involved, a rainbow is not subjective. It is not ‘mental,’ not an illusion, and does not depend on any magical properties of brains. The observer can just as well be a camera. The rainbow is not in your head, or in the camera. But it is also not an object-out-there. It is not in the mist, and not in the sun, although both are required for a rainbow to occur. A rainbow is not ‘objective’ in the sense of ‘inherent in an object.’ It is ‘objective’ in a different sense: the presence of a rainbow is publicly verifiable. Rational, unbiased observers will generally agree about whether or not there is a rainbow.” At the risk of oversimplifying, his view: “Enjoyable usefulness is the stance that both eternal and mundane purposes are meaningful—when they are. Therefore, we can and should pursue both.”
Until next time,
Miles