Teach Your Children, Even if They’re AIs.
“Father?” “Yes, son?” “I want to kill you.”
Everyone seems to be afraid of AIs. Even the folks making and investing in them have signed on to a rather chilling document proposing a six-month moratorium on training AIs, in the hope that we can figure out some regulations or guiding principles to prevent worst-case existential risks to humanity.
They are looking at this problem the wrong way.
It’s not just that they’re obsessed with existential risk. More thoughtful critics of AI have correctly pointed out that the technology titans responsible for the moratorium demand are fixated only on the existential issues that might one day impact even billionaires in bunkers, on Mars, or living in the metaverse, rather than addressing the real harms being done by AIs and algorithms today. The algorithms used to determine everything from mortgage suitability to prison sentences are racist, sexist, and prone to thinking like eugenicists.
Even there, however, the problem isn’t the AIs themselves but the data on which they are trained. AIs are just language models. They have no idea what they’re doing or saying. They are, quite literally, modeling themselves after the language to which they are exposed.