A top Air Force general said the military needs to expand its use of artificial intelligence -- like that being used in the controversial Project Maven effort -- if it wants to stay ahead of peer competitors and deter war. Gen. James Holmes, who leads Air Combat Command, is among the first flag officers to publicly defend the Pentagon’s algorithmic-image-analysis program since Google said it would not renew its contract following an outcry by its employees.
While teachers may always be the best line of defense for students falling behind, busy schedules don’t always permit the special attention and feedback that students need. That’s where artificial intelligence–powered teaching assistants might come in handy. “These intelligent tools can adapt pacing based on the student’s ability … and provide targeted, corrective feedback in case the student makes mistakes, so that the student can learn from them...
When Google’s AlphaGo defeated a Chinese grandmaster at the game of Go in 2017, China was confronted with its own “Sputnik moment”: a prompt to up its game on the development of artificial intelligence (AI). Sure enough, Beijing is pursuing a national-level AI innovation agenda for “civil-military fusion.”
One activity humans should be exceptionally good at is innovation. Being able to conceive of new ways to shape the material world to our advantage is what differentiates us from animals. Yet, surprisingly, while humans are great at creating ideas, they are extremely poor at managing the social processes that turn those ideas into stellar new projects.
Earlier this year, Google CEO Sundar Pichai described artificial intelligence as more profound to humanity than fire. Thursday, after protests from thousands of Google employees over a Pentagon project, Pichai offered guidelines for how Google will--and won’t--use the technology. One thing Pichai says Google won’t do: work on AI for weapons. But the guidelines leave much to the discretion of company executives and allow Google to continue to work for the military.
When people see machines that respond like humans, or computers that perform feats of strategy and cognition mimicking human ingenuity, they sometimes joke about a future in which humanity will need to accept robot overlords. But buried in the joke is a seed of unease.
This new focus on AI is part of the US’s renewed drive to advance its domestic capabilities and keep up with competitors such as China and Russia. The news marks something of a change of heart for the Trump administration. Some members of the government had previously shown skepticism about the technology, which contrasted starkly with China’s full-throttle approach.
Affordable consumer technology has made surveillance cheap, and commoditized AI software has made it automatic. Those two trends merged this week, when drone manufacturer DJI announced a partnership on June 5 with Axon, the company that makes Taser weapons and police body cameras, to sell drones to local police departments around the United States. Now, not only do local police have access to drones, but footage from those flying cameras will be automatically analyzed by AI systems not disclosed to the public.
Among the most important lessons in human history is that those who adopt innovation in the most advantageous manner often triumph over competitors. This has never been truer than in the rapidly evolving artificial intelligence revolution underway, where we face great risk from a triad of totalitarian nations, corporate oligopolies, and complacent democracies.
Now, fresh details from Uber’s fatal self-driving car crash in March underscore not just the difficulty of this problem, but its centrality. According to a preliminary report released by the National Transportation Safety Board last week, Uber’s system detected pedestrian Elaine Herzberg six seconds before striking and killing her. It identified her as an unknown object, then a vehicle, then finally a bicycle.