Buzzy headlines cloud our understanding of how advanced AI really is. We should stop focusing on apocalyptic scenarios, says cognitive scientist Gary Marcus, and start making AI more useful.
Before algorithms become the standard way in which we make decisions, we need to consider the risks and ensure that implementing algorithmic decision-making systems does not cause widespread harm.
As self-driving technology booms, cars are already making choices with moral implications. How do you program an ethics that we can all agree on?