"The reason these [Artificial Intelligence] systems work in such critical domains [medical diagnosis and self-driving cars] is that they are not artificial intelligence systems, they are hybrid intelligence systems. These systems are designed as human in the loop." -- Kamar, Ece, and W. A. Redmond. "Hybrid Intelligence and the Future of Work." Productivity Decomposed: Getting Big Things Done with Little Microtasks Workshop (CHI 2016). http://research.microsoft.com/en-us/um/people/eckamar/papers/HybridIntelligence.pdf. 2016.
It could be argued there is no scientific or commercial domain today where Subject Matter Experts (SMEs) aren't already using or contemplating the use of Artificial Intelligence (AI) and Machine Learning (ML). From automobiles to power grids to smartwatches to home schooling, AI/ML is transforming the tools we use to understand the man-made, biological, and theoretical objects and systems we interact with on a daily basis.
And yet, these AI and ML tools are only as good as the knowledge and ethical values imbued in them by the experts who design them. The machine can't think for itself ... yet. Beyond producing a mathematical result, AI/ML doesn't necessarily understand What it is doing or Why it may be better at doing something a certain way. Successful AI/ML today relies on human experts to create useful, valuable solutions that address real-world problems and provide increased value in their respective domains.
To wit, the current process of building and applying AI/ML embodies the historic steps of the scientific method: question formulation, hypothesis generation, prediction, testing/evaluation, and finally application in the real world. Performed by humans, it is already providing tremendous value on many fronts in the form of smarter, faster, and more cost- and time-efficient systems. And as in other scientific domains, this transformation is occurring and accelerating at an exponential pace, beyond what even the experts can keep up with. Think about that for a moment ... the rate of discovery and transformation now exceeds the SMEs' ability to consume, understand, or apply it.
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” —Ray Kurzweil
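The scientific-method phases just listed map naturally onto even the simplest ML workflow. The following is a minimal, illustrative sketch under invented data, not any particular system: a question is posed, a hypothesis (a model form) is proposed, it makes predictions, and it is tested against held-out observations.

```python
# Illustrative sketch only: the scientific-method phases of an AI/ML
# workflow, using invented toy data and a one-parameter linear model.

# Question: does y grow linearly with x?
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # noisy observations of roughly y = 2x

# Hypothesis: y = w * x. Estimate w by closed-form least squares.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Prediction: apply the hypothesis to unseen inputs.
def predict(x):
    return w * x

# Testing/evaluation: mean squared error on held-out data.
test_xs, test_ys = [6.0, 7.0], [12.2, 13.8]
mse = sum((predict(x) - y) ** 2 for x, y in zip(test_xs, test_ys)) / len(test_xs)

print(round(w, 2))   # fitted slope, close to the true value of 2
print(round(mse, 3)) # a small error means the hypothesis survives testing
```

The final phase, application in the real world, is exactly where the human expert re-enters the loop: deciding whether a hypothesis that survives testing is actually fit for deployment.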
Nobody knows what will happen after the machine can think for itself. For now, we are assuming either that the value AI/ML will provide is greater than the risk it poses beyond the point at which it can think for itself, or we are simply blind to its implications. Thankfully, for a little while at least, humans still have the agency to define What AI/ML is and How it should work, and that is still a very reassuring and exciting world to live in. Given the speed at which the AI/ML domain is changing, how we define the future of AI/ML may end up being even more critical to our own evolution than how we are currently managing our own biological world.
What will we do? Will we build a tool that improves our own evolution? Will we require that ethics and values be built into the AI/ML systems that are supposed to be improving our lives? Should we consider redefining the scientific method to include the latent positive values and ethics that in fact underpin our society today?
“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”— Stephen Hawking