Our discussion on machine intelligence spanned short-term opportunities and long-term disaster scenarios. Some of the points we discussed are summarized below.
Intelligence is defined not only by the speed with which answers are determined, but also by the quality, scope, and nature of the cognitive skills involved.
The application of machine intelligence to every industry is a hot theme across organizations of all types. Sudden adoption may not be realistic, however, due to limits on technological capability and human acceptance. Technology will improve over time, but will psychological barriers (e.g. how could a computer drive my car?) recede with it? Are younger generations, who have grown up around technology, more welcoming? Are there certain fields (e.g. political decision making) where we would never want a machine?
Companies have tried testing customer appetite for machines in low-fidelity ways, such as having a human do the processing behind the facade of an 'intelligent machine.' Beyond being an efficient way to test a concept before committing resources to build a product, this approach also helps collect data that can be used to train the machine and develop data sets.
In charting the adoption of machines, we have moved from human-only approaches to machine-assisted and human-assisted ones. For example, a plane is in large part auto-piloted, with humans there to assist where necessary. Will this eventually shift to machines alone? Will we accept this, and how? Will someone have to prove that the error rate for machine-only approaches is lower than for machine-plus-human? How would one actually conduct that test?
Another axis along which to evaluate adoption is complexity and harm. High-volume, low-risk tasks seem ideal to tackle first: high volume allows for sufficient machine training, and low risk means that machine errors have minor consequences. Scheduling meetings and booking travel are examples of applications that meet these criteria.
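One way to make this two-axis prioritization concrete is a toy scoring function that ranks candidate applications by volume and risk. This is only an illustrative sketch; the example tasks and their numbers are assumptions, not measurements.

```python
# Toy ranking of candidate applications along the two axes discussed:
# task volume (more data to train on) and risk (cost of a machine error).
# The tasks and their scores below are illustrative assumptions.

def adoption_score(volume: float, risk: float) -> float:
    """Higher volume and lower risk imply a better first candidate."""
    return volume * (1.0 - risk)

tasks = {
    "scheduling meetings": {"volume": 0.9, "risk": 0.1},
    "booking travel":      {"volume": 0.8, "risk": 0.2},
    "medical diagnosis":   {"volume": 0.5, "risk": 0.9},
}

# Rank candidates from best to worst first target for automation.
ranked = sorted(tasks, key=lambda t: adoption_score(**tasks[t]), reverse=True)
```

Under these assumed numbers, the everyday high-volume, low-risk tasks rank ahead of a high-stakes one like diagnosis, which matches the intuition above.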
Existing approaches seem to be narrowly defined, with programs applied to micro-tasks (e.g. travel booking, calendar planning). These are difficult to build comprehensively, so there is likely going to be growth in companies offering one-task solutions. Eventually it will become difficult for a consumer to manage these separately, and “dispatcher” layers (e.g. command centers like Siri) will be built on top.
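The “dispatcher” layer described above can be pictured as a thin router that classifies a request and hands it to the appropriate one-task service. A minimal sketch, where the agent functions and keyword rules are hypothetical placeholders (a real command center would use intent classification, not keyword matching):

```python
# Minimal sketch of a "dispatcher" layer routing requests to one-task agents.
# The agents and keyword routing rules are hypothetical placeholders.

def travel_agent(request: str) -> str:
    return f"[travel] booking for: {request}"

def calendar_agent(request: str) -> str:
    return f"[calendar] scheduling: {request}"

# Keyword-based routing table: keywords -> single-task agent.
ROUTES = {
    ("flight", "hotel", "travel"): travel_agent,
    ("meeting", "schedule", "calendar"): calendar_agent,
}

def dispatch(request: str) -> str:
    """Send the request to the first agent whose keywords match."""
    lowered = request.lower()
    for keywords, agent in ROUTES.items():
        if any(word in lowered for word in keywords):
            return agent(request)
    return f"[dispatcher] no agent found for: {request}"
```

The design point is that each one-task agent stays narrow and independently built, while the consumer only ever talks to the dispatcher on top.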
Talent will be distributed across small startups, large tech companies, academia, etc. Big companies do have advantages here, including the ability to poach and pay top talent, willingness to fund research, access to necessary data sets, and competitive and business needs to develop machine-intelligent solutions. Big companies are likely to be hot spots for this activity, and they are trigger-happy about acquiring new companies in the space (the 'acqui-hire'). As an investor in one of these startups, you may decide whether to accept or reject such an offer based on the startup's standalone prospects (e.g. can it make money, and is it doing so?).
Where will general intelligence come from? Will it be from a team working explicitly on cracking general intelligence? Could we come across the discovery accidentally? Imagine someone building an intelligent machine for an application in agriculture, for example. Could that machine unexpectedly turn out to be an expression of general intelligence? In other words, perhaps the elements required to unlock it are not all that far off from where we are.
Intelligence is not just about the 'thinking' or 'brain' aspects. It requires sensing and learning. It requires the 'body' as well.
Will machines and machine intelligence replace jobs? If so, will those jobs be replaced with something else, and with what? If not, what are the consequences? What do automation and machine intelligence imply for the nature of work? Free time? Social life? Family dynamics?
Long(er)-term outlook: are the threats of superintelligence fact or fiction? There is no consensus about the end state, but many concerns are valid and should be openly discussed. Machine intelligence is currently limited to niche applications, but some argue that once it develops into general expressions of intelligence, the rate of its development will skyrocket and it will almost immediately become superintelligent. While machine intelligence promises hope of cracking the code on difficult problems we need solved, computer programs with great intelligence may dangerously consume resources when solving problems. Will machine intelligence necessarily evolve exponentially? Are we ignoring the fact that the difficulty of 'unlocking' the next step of innovation also rises over time?
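The closing question, whether rising step difficulty can offset compounding capability, can be made concrete with a toy model. This is an illustrative assumption, not a forecast: suppose capability multiplies by a factor g each time a step of innovation is unlocked, while the difficulty of the next step multiplies by a factor d. Then the time to unlock many steps stays small only when g outpaces d; when d wins, each step takes longer than the last and progress slows rather than skyrockets.

```python
# Toy model (an assumption for illustration, not a forecast): capability
# grows by a factor g per unlocked innovation step, while the difficulty
# of the next step grows by a factor d. Each step takes time
# difficulty / capability, so step times form a geometric series with
# ratio d / g: bounded total time if g > d, runaway slowdown if d > g.

def time_to_unlock(n_steps: int, g: float, d: float) -> float:
    """Total time to unlock n_steps successive innovation steps."""
    capability, difficulty, total = 1.0, 1.0, 0.0
    for _ in range(n_steps):
        total += difficulty / capability  # time for this step
        capability *= g                   # each unlock compounds capability
        difficulty *= d                   # ...but the next step is harder
    return total
```

With g = 1.5 and d = 1.2, twenty steps take a bounded total time (the series converges), while with the factors swapped the same twenty steps take orders of magnitude longer. The toy model does not settle the question; it only shows that "exponential takeoff" is an assumption about g exceeding d, not a given.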