As we creep ever closer to the dystopian future that countless science fiction novels, films and TV shows fruitlessly attempted to warn us against, it’s important to note the key milestones we’ll one day look back on, likely while shackled to one another in the lithium mines, as having ushered in the era of our machine-learning overlords. It turns out we’re on the precipice of one such milestone right now, as the Pentagon may soon decide to begin deploying automated weaponry controlled by artificial intelligence. The objective, as US Army Futures Command General John Murray explained to an audience at the US Military Academy last month, is to take the human element out of the decision-making process when it comes to raining down death on the enemy (or whoever the machines THINK is the enemy, anyway).

In this clip Cenk and Ana lament this brave new world of warmaking, where human frailty and the propensity to think twice before snuffing out the life of another sentient being will no longer pose a problem. Cenk points to research indicating that a majority of soldiers in World War I intentionally shot over the enemy’s heads, even while under fire themselves, as evidence that people likely do harbor an innate disinclination to kill one another, adding that this reluctance is a feature, not a bug to be squashed under a giant robotic foot.

Ana notes that the idea of using artificial intelligence to remove human bias from decision-making is itself an example of flawed thinking, since studies have repeatedly shown that programmers’ personal prejudices inevitably find their way into the code they write, unwittingly transferring that biased thinking directly into the AI programs themselves. Just one example she cites is the way facial recognition technology routinely leads to accusations of criminality against innocent African-Americans.